
Publishing 15+ npm Packages as a Solo Developer: What I Learned

Hard-won lessons from publishing across three npm scopes: monorepo trade-offs, versioning, CI/CD, MCP servers, and the download numbers that surprised me.

9 min read
Gagan Deep Singh

Founder | GLINR Studios


I didn't set out to publish fifteen npm packages. I set out to solve specific problems, and the packages were what happened when I decided the solutions were good enough to share. Three scopes, three different philosophies, one person maintaining all of it. Here's what that experience actually looks like.

The Three Scopes and Why They're Different

@glincker packages are infrastructure utilities that power GLINCKER and related projects. They're pragmatic, internal-first tools that got extracted because they're genuinely general-purpose: @glincker/geokit for AI-readiness auditing, glin-profanity for content moderation, @glincker/palrun for terminal command palettes.

@typeweaver packages are developer tooling focused on the TypeScript/Node ecosystem. @typeweaver/commitweave is a conventional commits validator and changelog generator. The scope exists because I wanted a clean namespace for tools aimed at other developers' workflows rather than application infrastructure.

@thesvg packages are the newest scope, powering the theSVG brand icon ecosystem. @thesvg/react gives you typed React components for 4,000+ brand icons. @thesvg/cli lets you search and add icons from the terminal. @thesvg/mcp-server exposes the library to AI coding assistants. This scope grew out of a single project into a full ecosystem faster than any of my other work.

The philosophies differ. @glincker packages optimize for reliability and API stability, because breaking changes in production infrastructure are expensive. @typeweaver packages optimize for developer experience and can move faster, since a broken CLI tool in someone's local workflow is painful but not a 3am incident. @thesvg packages optimize for coverage and convenience - the value is in having every brand, in every variant, available everywhere.

Monorepo vs. Multi-Repo: I've Done Both, Here's the Truth

My first four packages were separate repos. Seemed fine at the time. Then I needed to share a validateSchema utility across three of them and suddenly I was copying code and forgetting to sync fixes. That's when I moved to a monorepo with pnpm workspaces.

The monorepo setup for @glincker packages:

glincker-packages/
├── packages/
│   ├── geokit/
│   │   ├── src/
│   │   ├── package.json    # name: "@glincker/geokit"
│   │   └── tsconfig.json
│   ├── queue-utils/
│   └── shared/             # internal shared utilities
├── pnpm-workspace.yaml
├── tsconfig.base.json
└── .github/workflows/
    └── release.yml

What the monorepo bought me: shared TypeScript config, shared testing setup, atomic cross-package changes, and a single CI pipeline that only builds affected packages. What it cost me: more upfront configuration, occasional pnpm workspace quirks, and the psychological overhead of "this one repo owns a lot."
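The workspace manifest that ties this tree together is tiny. A sketch, assuming the layout above:

```yaml
# pnpm-workspace.yaml - registers every directory under packages/ as a workspace member
packages:
  - "packages/*"
```

Internal packages then depend on each other with the `workspace:*` protocol (e.g. `"@glincker/shared": "workspace:*"` in geokit's package.json), which pnpm rewrites to a real version range at publish time.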

For @typeweaver packages I kept multi-repo because the tools are genuinely independent. Commitweave has no shared code with anything else I publish. The overhead of a monorepo isn't worth it when there's nothing to share.

My rule of thumb: if packages share more than one utility function, monorepo. If they're truly independent, multi-repo is simpler.

Versioning Strategy: Conventional Commits All the Way Down

All my packages use semantic-release with conventional commits. Every repository has this in CI:

# .github/workflows/release.yml
- name: Release
  env:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
    NPM_TOKEN: ${{ secrets.NPM_TOKEN }}
  run: npx semantic-release

The .releaserc.json is minimal:

{
  "branches": ["main"],
  "plugins": [
    "@semantic-release/commit-analyzer",
    "@semantic-release/release-notes-generator",
    "@semantic-release/changelog",
    "@semantic-release/npm",
    "@semantic-release/github"
  ]
}

A feat: commit bumps minor, a fix: commit bumps patch, and feat!: or a BREAKING CHANGE: footer bumps major. Zero manual version management: I never think about version numbers, I just write descriptive commits.
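The bump rules can be sketched as a small function. This is an illustration of the convention, not semantic-release's actual commit-analyzer, which handles many more cases:

```typescript
// Sketch of the conventional-commits bump rules (illustrative, not the real analyzer).
type Bump = 'major' | 'minor' | 'patch' | null;

function bumpFor(message: string): Bump {
  const [header, ...body] = message.split('\n');
  // A "!" after the type/scope, or a BREAKING CHANGE footer, forces a major bump.
  if (/^(\w+)(\(.+\))?!:/.test(header) || body.some((l) => l.startsWith('BREAKING CHANGE:'))) {
    return 'major';
  }
  if (/^feat(\(.+\))?:/.test(header)) return 'minor';
  if (/^fix(\(.+\))?:/.test(header)) return 'patch';
  return null; // docs:, chore:, refactor: etc. trigger no release
}

console.log(bumpFor('feat(cli): add --json flag')); // → 'minor'
console.log(bumpFor('fix: handle empty input'));    // → 'patch'
console.log(bumpFor('feat!: drop Node 14 support')); // → 'major'
```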

The one gotcha: in a monorepo, you need multi-semantic-release or similar to handle changed-package detection. The standard semantic-release doesn't know about workspace packages. I use multi-semantic-release in the @glincker monorepo and it handles the dependency graph. If geokit changes, it releases geokit. If shared changes, it can cascade releases to everything that depends on it.
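In the monorepo, the only wiring change from the single-package setup is running the workspace-aware wrapper from the repo root. A sketch, assuming multi-semantic-release is installed as a dev dependency:

```json
{
  "private": true,
  "scripts": {
    "release": "multi-semantic-release"
  }
}
```

The CI release step then runs `pnpm release` instead of `npx semantic-release`; multi-semantic-release discovers the workspace packages and their dependency graph on its own.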

CI/CD for npm Publishing

The pipeline has three jobs that gate each other:

  1. Test: unit tests, type checking, lint. No test skips, ever. A package that ships with failing tests is worse than no package.
  2. Build: compile TypeScript to both esm and cjs. I dual-publish everything because in 2024 the ecosystem still isn't uniformly ESM.
  3. Release: only runs on main, only after test and build pass, uses NPM_TOKEN from GitHub secrets.
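The gating itself is plain GitHub Actions `needs:`. A trimmed sketch - the secret names match the release snippet above, but the pnpm script names (`test`, `typecheck`, `lint`, `build`) are my assumptions for illustration:

```yaml
# .github/workflows/release.yml (abridged sketch)
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pnpm install --frozen-lockfile
      - run: pnpm test && pnpm typecheck && pnpm lint
  build:
    needs: test                      # won't start unless tests pass
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pnpm install --frozen-lockfile
      - run: pnpm build              # emits dist/esm and dist/cjs
  release:
    needs: [test, build]             # gated on both
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pnpm install --frozen-lockfile
      - env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          NPM_TOKEN: ${{ secrets.NPM_TOKEN }}
        run: npx semantic-release
```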

The dual-publish setup in package.json - note that "types" must come first in the export conditions, because conditions are matched in order and TypeScript will otherwise never see it:

{
  "exports": {
    ".": {
      "types": "./dist/types/index.d.ts",
      "import": "./dist/esm/index.js",
      "require": "./dist/cjs/index.js"
    }
  },
  "main": "./dist/cjs/index.js",
  "module": "./dist/esm/index.js",
  "types": "./dist/types/index.d.ts"
}

Two separate tsconfig files: one targeting ES2022 with module: NodeNext for ESM, one targeting CommonJS. The build script runs both. Worth the setup overhead because you stop getting "cannot use import statement" issues from users on older toolchains.
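A sketch of the two configs - file names and the dist layout match the exports map above; the compiler options are one workable combination, not the only one:

```jsonc
// tsconfig.esm.json
{
  "extends": "./tsconfig.base.json",
  "compilerOptions": {
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "target": "ES2022",
    "outDir": "dist/esm",
    "declaration": true,
    "declarationDir": "dist/types"
  }
}

// tsconfig.cjs.json
{
  "extends": "./tsconfig.base.json",
  "compilerOptions": {
    "module": "CommonJS",
    "moduleResolution": "Node",
    "target": "ES2022",
    "outDir": "dist/cjs",
    "declaration": false
  }
}
```

The build script runs both in sequence, e.g. `"build": "tsc -p tsconfig.esm.json && tsc -p tsconfig.cjs.json"`. One wrinkle worth knowing: if the root package.json declares `"type": "module"`, dual-publish setups typically drop a `{ "type": "commonjs" }` stub package.json into dist/cjs so Node treats those .js files as CommonJS.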

The GeoKit Pipeline Architecture

@glincker/geokit is the package I'm most proud of architecturally. It's a Generative Engine Optimization toolkit built around three pipeline stages: audit, generate, convert.

  • audit (@glincker/geo-audit): scores any website 0-100 for AI-readiness, checking structured data, llms.txt, semantic HTML
  • generate (@glincker/geo-seo): creates llms.txt, JSON-LD, and robots.txt for AI discoverability
  • convert (@glincker/geomark): transforms any URL into clean markdown for LLM context windows

The pipeline API is composable:

import { auditUrl } from '@glincker/geo-audit';
import { generateLlmsTxt } from '@glincker/geo-seo';
 
const result = await auditUrl('https://example.com');
// { score: 72, checks: [...], recommendations: [...] }
 
const llmsTxt = await generateLlmsTxt('https://example.com');

Each package works standalone or together. The audit tells you what's missing, the generator fixes it, and the converter makes your content AI-ingestible. I wrote a full deep-dive on GEO and GeoKit if you want the architectural details.

Packaging MCP Servers

The most recent category I've added: Model Context Protocol (MCP) servers as npm packages. MCP is the protocol that lets AI tools like Claude, Cursor, and Windsurf connect to external context providers. I now ship two MCP servers: glin-profanity-mcp for content moderation and @thesvg/mcp-server for brand icon search. Both run via npx with zero setup:

npx glin-profanity-mcp
npx @thesvg/mcp-server

The @thesvg/mcp-server lets AI coding assistants search and fetch brand SVGs directly. Ask for "the Stripe logo" and get the actual SVG without leaving your editor. The MCP packaging adds a thin stdio transport layer over the existing core libraries:

import { Server } from '@modelcontextprotocol/sdk/server/index.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import { CallToolRequestSchema } from '@modelcontextprotocol/sdk/types.js';
import { audit, generate, convert } from '@glincker/geokit';
 
const server = new Server(
  { name: 'geokit-mcp', version: '1.0.0' },
  { capabilities: { tools: {} } }
);
 
// Register tools that map to geokit pipeline stages
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  // ... tool dispatch
});
 
const transport = new StdioServerTransport();
await server.connect(transport);

The interesting constraint: MCP servers run as subprocesses communicating over stdio. No HTTP, no WebSocket. The entire API surface is tool calls and responses in JSON. Designing good tool schemas is its own discipline: what parameters to expose, how to describe them so the AI uses them correctly.
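Here's what that schema discipline looks like in practice - a hypothetical tool definition for the icon-search case. The names and parameters are illustrative, not the published @thesvg/mcp-server API; the point is that descriptions are written for the model, stating defaults and constraints explicitly:

```typescript
// Hypothetical MCP tool schema (illustrative, not the real @thesvg/mcp-server API).
const searchIconsTool = {
  name: 'search_icons',
  description:
    'Search the brand icon library by name. Returns up to `limit` matches ' +
    'with their slugs and available variants.',
  inputSchema: {
    type: 'object',
    properties: {
      query: {
        type: 'string',
        description: 'Brand name to search for, e.g. "stripe".',
      },
      variant: {
        type: 'string',
        enum: ['solid', 'outline', 'wordmark'],
        description: 'Optional variant filter. Omit to return all variants.',
      },
      limit: {
        type: 'number',
        description: 'Maximum results to return. Defaults to 10.',
      },
    },
    required: ['query'], // everything else is optional, and the description says so
  },
};
```

Constrained enums, explicit defaults, and a single required parameter keep the model from guessing - most bad tool calls I've seen trace back to a vague description rather than a missing capability.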

Writing READMEs That Actually Get Read

Nobody reads a wall of text. My README structure for every package:

  1. One-sentence description at the top (not the package name, the purpose)
  2. Install command: copy-pasteable, immediately
  3. 30-second example: the simplest useful thing the package can do
  4. API reference: generated from JSDoc via TypeDoc, linked not inlined
  5. Edge cases and gotchas: the things that will bite you if you don't know them

The gotchas section is the most valuable part and the most skipped. I've fielded a dozen GitHub issues that were answered in the gotchas section. Now I put them in a callout block so they're visually impossible to miss.

Handling Issues from Strangers

The first GitHub issue from someone I don't know is a milestone. The second one is a test of your patience. Most issues fall into four categories:

  • Legitimate bugs: fix promptly, thank them, add a test case
  • Documentation gaps: fix the docs, not just the response
  • Feature requests: evaluate against the package's stated scope, say no clearly and kindly when out of scope
  • "This doesn't work" with no reproduction: ask for a minimal reproduction, wait, close if no response in two weeks

I added issue templates to every repo. The bug template requires a minimal reproduction. Issues opened without it get automatically commented with a gentle reminder. This one change cut "this doesn't work" reports by about 60%.
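The bug template is a standard GitHub issue form with the reproduction field marked required. A trimmed sketch - field ids and labels are illustrative:

```yaml
# .github/ISSUE_TEMPLATE/bug_report.yml
name: Bug report
description: Something broke
body:
  - type: textarea
    id: reproduction
    attributes:
      label: Minimal reproduction
      description: A repo link or code snippet that reproduces the problem.
    validations:
      required: true            # the form cannot be submitted without it
  - type: input
    id: version
    attributes:
      label: Package version
    validations:
      required: true
```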

Download Numbers That Surprised Me

glin-profanity consistently outperforms everything else by a wide margin. A content moderation utility being the most downloaded package makes sense in retrospect. It's solving a problem every social app has and nobody wants to build themselves. The narrow, specific utility beats the ambitious general-purpose toolkit.

@glincker/geokit downloads are lower but the issues are more interesting. The users are doing real generative engine optimization work with genuine technical depth. Quality of engagement matters more than quantity.

The @typeweaver/commitweave numbers plateau around teams of 5-20 engineers: small enough that one person's tool recommendation propagates through the whole team, but large enough that commit discipline matters. That's useful market signal.

What I'd Tell Myself at Package Number One

Solve your own problem first. The packages that work best are the ones I built because I needed them, not the ones I built because I thought the ecosystem needed them. Genuine use in production is the best quality signal.

Stability is a feature. I've broken @glincker package APIs once in three years. Every breaking change cost me more in downstream migration time than the API improvement saved. Deprecation cycles, not breaking changes.

A package without tests is a liability. The packages I published early without robust test suites are the ones I'm afraid to touch. 100% coverage is cargo-culting, but meaningful tests for every public API surface is non-negotiable.

Publishing open source as a solo developer is a different discipline than writing open source. The code is the easy part. The documentation, the issue triage, the backward compatibility promises, the release automation: that's the maintenance surface you're signing up for. Fifteen packages across three scopes means fifteen READMEs, fifteen CI pipelines, and fifteen sets of users with different expectations. Sign up deliberately, and it's genuinely rewarding. Sign up accidentally, and it's a burden.

