Turn your integration into a product. Build a reliable n8n community node, earn trust, and unlock revenue
Ecosystem basics
What you’ll learn: How n8n discovers community nodes, where they appear, and how verification affects adoption
Package rules
- Publish name: n8n-nodes-your-node or @scope/n8n-nodes-your-node
- Keywords: add n8n-community-node-package and your service name
- Indexing: include nodes/, credentials/, and metadata so n8n can find your node
Visibility and trust
- Self-hosted: users can install any community node
- Verified: gains in-product visibility and trust badges
- Impact: better visibility lowers customer acquisition cost (CAC) and raises install rate
| Aspect | Unverified | Verified |
|---|---|---|
| Availability | Self-hosted only | Cloud and self-hosted |
| Trust signal | None | n8n reviewed |
| Discovery | Node list after manual install | Marketplace and node panel |
| Monetization impact | Lower | Higher |
Reliability earns trust. Trust drives installs. Installs fuel revenue
Adoption checklist
- Name: pick a unique, scannable name
- Keywords: add accurate keywords and descriptions
- Verification: plan for marketplace verification early
Discovery flow
flowchart TD
A[Publish to npm] --> B[n8n discovers]
B --> C[UI listing]
C --> D{Verified?}
D -->|Yes| E[Marketplace]
D -->|No| F[Manual install]
classDef trigger fill:#e1f5fe,stroke:#01579b
classDef process fill:#fff3e0,stroke:#ef6c00
classDef action fill:#e8f5e8,stroke:#2e7d32
class A trigger
class B,C process
class E,F action
With discovery in place, design your node so it is easy to maintain and upgrade
Production architecture
What you’ll learn: A clean project layout, thin execute pattern, and separation of credentials and client logic
Project setup
- Toolchain: Node 18+, TypeScript, ESLint, Prettier, Jest
- Separation: keep execute thin and move logic into pure functions
- I/O models: model inputs and outputs explicitly with small operations
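One way to model a small operation explicitly is a typed input with a validator. This is a minimal sketch; the `listTasks` operation, field names, and the default limit of 50 are illustrative assumptions, not part of any real API.

```typescript
// Sketch: explicit input/output types for one small operation (names are assumptions)
export interface ListTasksInput {
  projectId: string;
  limit?: number;
}

export interface Task {
  id: string;
  title: string;
  done: boolean;
}

// Validate early and apply safe defaults before the client ever sees the input
export function validateListTasksInput(input: ListTasksInput): ListTasksInput {
  if (!input.projectId) {
    throw new Error('Project ID is required to list tasks');
  }
  return { limit: 50, ...input };
}
```

Keeping each operation this small makes both testing and documentation straightforward.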
Folder layout
my-node/
├─ package.json
├─ nodes/
│  ├─ MyService.node.ts
│  └─ descriptions/
│     ├─ operations.ts
│     └─ fields.ts
├─ credentials/
│  └─ MyServiceApi.credentials.ts
├─ src/
│  ├─ client.ts       # HTTP client, auth, helpers
│  └─ transforms.ts   # pure data mappers
└─ test/
   ├─ client.test.ts
   └─ transforms.test.ts
Technical terms
- Thin execute: keep execute() as orchestration only; push logic into helpers
- Pure function: a function with no side effects and predictable output
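A pure function in this layout might look like the sketch below: a `toItems` mapper that shapes raw provider JSON into n8n-style items. The `{ data: [...] }` response shape is an assumption for illustration.

```typescript
// src/transforms.ts — a minimal pure mapper (the raw response shape is an assumption)
export interface RawResponse {
  data: Array<Record<string, unknown>>;
}

// Pure: same input always yields the same output; no I/O, no mutation
export function toItems(res: RawResponse) {
  return res.data.map((entry) => ({ json: entry }));
}
```

Because it touches no network or node context, this function is trivial to unit test.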
Execution pattern
// nodes/MyService.node.ts
import { IExecuteFunctions, INodeExecutionData } from 'n8n-workflow';
import { myClient } from '../src/client';
import { toItems } from '../src/transforms';

export async function execute(this: IExecuteFunctions) {
  const items = this.getInputData();
  const results: INodeExecutionData[] = [];
  for (let i = 0; i < items.length; i++) {
    // Read parameters per item index, not always index 0
    const resource = this.getNodeParameter('resource', i) as string;
    const operation = this.getNodeParameter('operation', i) as string;
    const res = await myClient(this).call({ resource, operation, item: items[i] });
    results.push(...toItems(res));
  }
  return [results];
}
Client separation
// src/client.ts
import type { IExecuteFunctions } from 'n8n-workflow';

export function myClient(ctx: IExecuteFunctions) {
  async function call({ resource, operation, item }: any) {
    // getCredentials is async in current n8n versions, so resolve it here
    const { baseUrl, apiKey } = (await ctx.getCredentials('myServiceApi')) as any;
    // Build URL and headers here; keep fetch logic isolated
    // Return raw JSON for transforms.ts to shape into n8n items
  }
  return { call };
}
Orchestration flow
flowchart TD
A[Input items] --> B[Get params]
B --> C[Client call]
C --> D[Transform data]
D --> E[Return items]
classDef process fill:#fff3e0,stroke:#ef6c00
classDef action fill:#e8f5e8,stroke:#2e7d32
class A,B,C,D process
class E action
Mini-checklist
- Separate creds, client, and transforms
- Keep execute orchestration only
- Prefer small ops over mega-nodes
With architecture set, protect users from provider limits and network spikes
Rate limits
What you’ll learn: How to detect throttling, back off safely, and expose controls to users
Technical terms
- HTTP 429: status code for too many requests
- Retry-After: response header telling when to retry
- Exponential backoff: increase delay after each failure
- Jitter: add randomness to avoid synchronized retries
- Token bucket: simple algorithm to cap requests per second
- QPS: queries per second, a common rate metric
User controls
- Limits: items per batch, max retries, base delay
- Batching: use batch writes when the API supports it
- Recovery: enable Retry On Fail with sensible defaults
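Those controls can surface directly in the node UI as properties. The sketch below is shaped like n8n's `INodeProperties`, but typed locally to stay self-contained; the property names and defaults are assumptions.

```typescript
// Sketch: exposing pacing controls as node properties (names and defaults are assumptions)
interface NodeProperty {
  displayName: string;
  name: string;
  type: 'number';
  default: number;
}

export const retryProperties: NodeProperty[] = [
  { displayName: 'Items Per Batch', name: 'itemsPerBatch', type: 'number', default: 50 },
  { displayName: 'Max Retries', name: 'maxRetries', type: 'number', default: 3 },
  { displayName: 'Base Delay (ms)', name: 'baseDelay', type: 'number', default: 300 },
];
```

Sensible defaults here mean most users never have to think about throttling at all.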
Workflow tactics
- Wait and loop to pace bulk jobs
- Retry On Fail for transient errors
- Batch writes when supported by the API
Node patterns
- Token bucket per credential to cap QPS
- Backoff with jitter for 429 and 5xx
- Soft queue to serialize hot endpoints
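A token bucket per credential can be sketched in a few lines. This is illustrative only, not an n8n API; the injectable clock makes it testable.

```typescript
// Minimal token bucket: cap requests per second for one credential (illustrative sketch)
export class TokenBucket {
  private tokens: number;
  private last: number;

  constructor(private capacity: number, private refillPerSec: number, now = Date.now()) {
    this.tokens = capacity;
    this.last = now;
  }

  // Consume one token if available at time `now`; refill lazily based on elapsed time
  tryTake(now = Date.now()): boolean {
    const elapsedSec = (now - this.last) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSec * this.refillPerSec);
    this.last = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```

When `tryTake` returns false, the caller waits (or backs off) instead of hitting the provider.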
// src/rate.ts
export async function backoff(attempt: number, baseMs = 300) {
  const cap = 10_000;
  const delay = Math.min(cap, baseMs * 2 ** attempt);
  // Full jitter: sleep a random duration up to the capped delay
  const jitter = Math.random() * delay;
  return new Promise((r) => setTimeout(r, jitter));
}

export async function withRetry<T>(fn: () => Promise<T>, max = 5) {
  for (let i = 0; i <= max; i++) {
    try {
      return await fn();
    } catch (e: any) {
      const status = e?.response?.status;
      // Retry 429, 5xx, and network errors (no status); rethrow other 4xx immediately
      const retryable = status === 429 || status === undefined || status >= 500;
      if (!retryable || i === max) throw e;
      await backoff(i);
    }
  }
  throw new Error('unreachable');
}
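When the provider sends a Retry-After header, its hint should win over any computed backoff. This sketch assumes an axios-style error with `response.headers`; adjust the access path for your HTTP client.

```typescript
// Sketch: prefer the server's Retry-After hint over computed backoff
// Assumes an axios-style error shape ({ response: { headers } })
export function retryAfterMs(e: any, fallbackMs: number): number {
  const header = e?.response?.headers?.['retry-after'];
  const secs = Number(header);
  // Retry-After can also be an HTTP date; this sketch handles the seconds form only
  return Number.isFinite(secs) && secs > 0 ? secs * 1000 : fallbackMs;
}
```

Call it in the catch path to pick the delay before sleeping.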
Backoff flow
flowchart TD
A[API call] --> B{Throttle?}
B -->|Yes| C[Backoff jitter]
C --> D[Retry]
D --> A
B -->|No| E[Success]
classDef alert fill:#f3e5f5,stroke:#7b1fa2
classDef action fill:#e8f5e8,stroke:#2e7d32
class C alert
class E action
Strategy guide
| Strategy | Best for | Notes |
|---|---|---|
| Retry On Fail | Light burstiness | Easy setup; not globally rate-aware |
| Wait and loop | Bulk imports | Precise pacing, more nodes |
| In-node backoff | Third-party caps | Great UX, more code |
Stable throughput beats spiky speed. Slow is smooth. Smooth is fast
Now validate behavior so users can trust your node in production
Testing and hardening
What you’ll learn: How to test pure logic, mock HTTP, and ship safer defaults that cut support
Unit and integration
- Transforms: unit test pure transforms and validators
- HTTP: mock integration paths with nock (HTTP mocking library)
- Smoke: run manual smoke tests in a local n8n instance
// test/client.test.ts
import nock from 'nock';
import { myClient } from '../src/client';

// fakeCtx is a local test helper that stubs IExecuteFunctions with canned credentials
it('fetches a resource', async () => {
  nock('https://api.example.com').get('/v1/things').reply(200, { data: [1, 2] });
  const ctx = fakeCtx({ baseUrl: 'https://api.example.com', apiKey: 'k' });
  const res = await myClient(ctx).call({ resource: 'thing', operation: 'list' });
  expect(res.data).toHaveLength(2);
});
Hardening moves
- Validate early with clear error messages
- Map errors from provider to friendly messages
- Safe defaults for timeouts and pagination
if (!projectId) {
  throw new Error('Project ID is required to list tasks');
}
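Error mapping can be a small lookup from provider status codes to actionable messages. A minimal sketch; the codes covered and the wording are illustrative.

```typescript
// Sketch: map provider status codes to friendly, actionable messages
const ERROR_MESSAGES: Record<number, string> = {
  401: 'Invalid credentials. Check your API key in the credential settings.',
  404: 'Resource not found. Verify the ID and try again.',
  429: 'Rate limit hit. Reduce batch size or enable retries.',
};

export function friendlyError(status: number, fallback = 'Request failed'): string {
  // Fall back to a generic message that still surfaces the raw status code
  return ERROR_MESSAGES[status] ?? `${fallback} (HTTP ${status})`;
}
```

Surfacing the raw status in the fallback keeps unknown errors debuggable without raw stack traces.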
Manual QA
- Workflows for each operation
- Invalid creds and expired tokens
- Load test a 1k item batch with retries
Test flow
flowchart TD
A[Pure transforms] --> B[Unit tests]
C[HTTP client] --> D[Mock HTTP]
E[Local n8n] --> F[Smoke tests]
classDef process fill:#fff3e0,stroke:#ef6c00
class A,C,E process
With quality proven, close the loop with versioning, submission, and monetization
Versioning and growth
What you’ll learn: How to communicate changes, pass review, and pick a revenue model
SemVer and CI
- SemVer: MAJOR for breaking changes, MINOR for features, PATCH for fixes
- Conventional Commits: commit style that auto-generates changelogs
- Migrations: provide deprecation notes and upgrade guides
# .github/workflows/release.yml
name: release
on:
  push:
    tags:
      - 'v*.*.*'
jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with: { node-version: '18', registry-url: 'https://registry.npmjs.org' }
      - run: npm ci && npm test && npm run build
      - run: npm publish
        env: { NODE_AUTH_TOKEN: '${{ secrets.NPM_TOKEN }}' }
Submission essentials
{
  "name": "n8n-nodes-my-service",
  "version": "1.0.0",
  "keywords": ["n8n-community-node-package", "my-service", "automation"],
  "n8n": {
    "nodes": ["dist/nodes/MyService.node.js"],
    "credentials": ["dist/credentials/MyServiceApi.credentials.js"]
  }
}
Release pipeline
flowchart TD
A[Git tag] --> B[CI build]
B --> C[Run tests]
C --> D[Publish npm]
D --> E[Submit review]
classDef action fill:#e8f5e8,stroke:#2e7d32
class A,B,C,D,E action
Monetization models
- Free plus consulting: sell time and custom work
- Premium tier: extra operations, higher limits, SLAs (service-level agreements)
- SaaS backend: hosted features with quotas and billing
- Bundle: related nodes with docs and support
| Model | What you sell | Trade-offs |
|---|---|---|
| Free plus consulting | Time and custom work | Low friction, time bound |
| Premium tier | Extra features and SLAs | Recurring revenue, support load |
| SaaS backend | Hosted features | High lifetime value (LTV); infra and billing overhead |
Go-to-market
- README with screenshots and importable examples
- Changelog that highlights ROI and safety
- Roadmap and support policy with response times
Mini-checklist
- Tag releases with SemVer and changelogs
- Submit via Creator Portal with docs and examples
- State pricing and support model clearly
Data model overview
erDiagram
NodePackage ||--o{ NodeEntry : contains
NodePackage ||--o{ Credential : includes
NodeEntry ||--o{ Operation : offers
NodePackage {
int id
string name
string scope
string version
datetime created_at
}
NodeEntry {
int id
string title
string status
datetime created_at
}
Credential {
int id
string provider
string api_key
datetime created_at
}
Operation {
int id
string resource
string action
datetime created_at
}
Reliable nodes win by default. Nail architecture and rate limits, prove it with tests, communicate with SemVer, and the marketplace will do the rest