This guide deploys a production‑ready n8n on Kubernetes with the official n8n Helm chart. You will configure secrets, Postgres, Redis queue mode, ingress with TLS, persistence, and autoscaling.
## Why n8n on k8s

What you’ll learn: key benefits of n8n on Kubernetes, how the n8n helm chart standardizes releases, and when to use queue mode.

Deploying n8n on Kubernetes gives you repeatable releases, fast scaling, and safer upgrades. The n8n helm chart standardizes packaging and configuration:
- Immutable releases and rollbacks via Helm
- Horizontal scale with queue mode for heavy workflows
- Encrypted traffic and durable data with TLS and persistence
With the why covered, the next step is to prepare your cluster and chart sources.
```mermaid
flowchart TD
    A[User] --> B[Ingress]
    B --> C[n8n Main]
    C --> D[Redis]
    D --> E[Worker Pods]
    C --> F[Postgres]
    classDef trigger fill:#e1f5fe,stroke:#01579b
    classDef process fill:#fff3e0,stroke:#ef6c00
    classDef action fill:#e8f5e8,stroke:#2e7d32
    classDef alert fill:#f3e5f5,stroke:#7b1fa2
    class A trigger
    class B process
    class C action
    class D process
    class E action
    class F action
```
## Prereqs and Setup

What you’ll learn: required tools, cluster networking, storage, and how to fetch the n8n helm chart.

Get the basics right now to avoid slow debugging later.
### Tools
- kubectl v1.24 or later configured for the target cluster
- Helm v3.8 or later, the Kubernetes package manager
- OpenSSL or pwgen to generate secure secrets
### Cluster and networking
- Kubernetes cluster reachable from your terminal (EKS, GKE, AKS, K3s, Kind)
- Ingress controller such as nginx or traefik, plus a DNS record pointing to your host
- TLS plan using cert-manager or an existing Kubernetes Secret containing a certificate
Note: Ingress is the cluster edge router that exposes services. TLS encrypts traffic between clients and your service.
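If you go the cert-manager route, a ClusterIssuer like the sketch below can issue certificates automatically. The email address and ingress class here are assumptions; replace them with values for your environment.

```yaml
# Sketch of a Let's Encrypt issuer for cert-manager (adjust email and class)
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: ops@example.com          # assumption: your contact address
    privateKeySecretRef:
      name: letsencrypt-prod-key    # Secret where cert-manager keeps the ACME key
    solvers:
      - http01:
          ingress:
            class: nginx            # assumption: matches your ingress controller
```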
### Storage
- Default StorageClass available for persistent volumes
- Optional ReadWriteMany class if multiple pods share volumes
StorageClass is a cluster template that provisions persistent volumes.
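A quick way to confirm a default StorageClass exists before you install anything:

```bash
# Look for "(default)" next to one of the classes
kubectl get storageclass
```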
### Get the chart

Classic HTTP repo:

```bash
helm repo add n8n <REPO_URL>
helm repo update
helm search repo n8n
helm show values n8n/n8n > values.example.yaml
```

OCI registry:

```bash
export CHART="oci://<REGISTRY>/n8n/n8n"
helm show chart "$CHART"
helm show values "$CHART" > values.example.yaml
```
Tip: Run helm template with your values to catch errors before you deploy.
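For example, a dry render of the chart with your values; the release name n8n is just a placeholder:

```bash
# Render manifests locally; fails fast on template or values errors
helm template n8n "$CHART" -f values.yaml > /dev/null && echo "render OK"
```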
With requirements in place, you can build a minimal but production‑ready values.yaml.
```mermaid
flowchart TD
    A[Check Tools] --> B[Cluster Ready]
    B --> C[Ingress Live]
    C --> D[TLS Plan]
    D --> E[Storage Ready]
    E --> F[Chart Fetched]
    classDef process fill:#fff3e0,stroke:#ef6c00
    class A,B,C,D,E,F process
```
## Build values.yaml

What you’ll learn: how to set secrets, core env, Postgres, Redis queue mode, ingress, and persistence in values.yaml.

You will create a minimal values.yaml that is safe for production and easy to extend.
### Secret: encryption key

Warning: changing this key later breaks decryption of stored credentials.

```bash
kubectl create namespace n8n || true
kubectl -n n8n create secret generic n8n-secrets \
  --from-literal=N8N_ENCRYPTION_KEY="$(openssl rand -base64 32)"
```

Technical term: The encryption key secures stored credentials inside n8n.
### Core image and env

```yaml
# values.yaml
image:
  repository: n8nio/n8n
  pullPolicy: IfNotPresent
nameOverride: "n8n"
fullnameOverride: "n8n"
extraEnv:
  - name: N8N_LOG_LEVEL
    value: info
  - name: N8N_PROTOCOL
    value: https
  - name: N8N_PORT
    value: "5678"
  - name: NODE_ENV
    value: production
extraEnvFrom:
  - secretRef:
      name: n8n-secrets # provides N8N_ENCRYPTION_KEY
```

Helm renders Kubernetes manifests from chart templates and your values.
### Database: Postgres

Choose internal or external Postgres.

| Option | Pros | Use when |
|---|---|---|
| Internal subchart | One‑command setup | Labs and small teams |
| External managed | High availability and backups | Production |

#### Internal Postgres
```yaml
postgresql:
  enabled: true
  auth:
    username: n8n
    database: n8n
    existingSecret: n8n-pg-secret # create separately with password
extraEnv:
  - name: DB_TYPE
    value: postgresdb
  - name: DB_POSTGRESDB_HOST
    value: n8n-postgresql
  - name: DB_POSTGRESDB_PORT
    value: "5432"
  - name: DB_POSTGRESDB_DATABASE
    value: n8n
  - name: DB_POSTGRESDB_USER
    value: n8n
  # password comes from postgresql auth secret or extraEnvFrom
```
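The existingSecret referenced above must exist before install. A minimal sketch, assuming the subchart expects Bitnami‑style password keys; check your chart version’s docs for the exact key names:

```bash
# Hypothetical secret for the Postgres subchart; key names vary by chart version
kubectl -n n8n create secret generic n8n-pg-secret \
  --from-literal=password="$(openssl rand -base64 24)" \
  --from-literal=postgres-password="$(openssl rand -base64 24)"
```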
#### External Postgres

```yaml
postgresql:
  enabled: false
extraEnv:
  - name: DB_TYPE
    value: postgresdb
  - name: DB_POSTGRESDB_HOST
    value: <your-managed-host>
  - name: DB_POSTGRESDB_PORT
    value: "5432"
  - name: DB_POSTGRESDB_DATABASE
    value: n8n
  - name: DB_POSTGRESDB_USER
    value: n8n
  - name: DB_POSTGRESDB_SSL
    value: "true" # if your managed DB enforces TLS
extraEnvFrom:
  - secretRef:
      name: n8n-db-credentials # contains DB_POSTGRESDB_PASSWORD
```
Note: Managed databases such as RDS or Cloud SQL provide automated backups and scaling.
### Queue mode with Redis

Queue mode pushes executions to Redis so workers can scale horizontally.

```yaml
# Enable queue execution
extraEnv:
  - name: EXECUTIONS_MODE
    value: queue
  - name: QUEUE_BULL_REDIS_HOST
    value: n8n-redis-master
  - name: QUEUE_BULL_REDIS_PORT
    value: "6379"
redis:
  enabled: true
worker:
  enabled: true
  replicaCount: 3
  extraEnv:
    - name: EXECUTIONS_MODE
      value: queue
    - name: QUEUE_BULL_REDIS_HOST
      value: n8n-redis-master
    - name: QUEUE_BULL_REDIS_PORT
      value: "6379"
```
#### External Redis

```yaml
redis:
  enabled: false
extraEnv:
  - name: QUEUE_BULL_REDIS_HOST
    value: <redis-host>
  - name: QUEUE_BULL_REDIS_PORT
    value: "6379"
  - name: QUEUE_BULL_REDIS_PASSWORD
    valueFrom:
      secretKeyRef:
        name: n8n-redis-auth
        key: password
```
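The n8n-redis-auth Secret referenced above is one you create yourself. A minimal sketch, assuming your Redis password is in $REDIS_PASSWORD:

```bash
# Store the external Redis password under the key the env entry expects
kubectl -n n8n create secret generic n8n-redis-auth \
  --from-literal=password="$REDIS_PASSWORD"
```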
### Ingress and TLS

Set WEBHOOK_URL to the public host. Ingress routes traffic into the cluster.

```yaml
ingress:
  enabled: true
  className: nginx
  hosts:
    - host: n8n.example.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - hosts: [n8n.example.com]
      secretName: n8n-tls
extraEnv:
  - name: N8N_HOST
    value: n8n.example.com
  - name: WEBHOOK_URL
    value: https://n8n.example.com/
```
If WEBHOOK_URL does not match your public host, external triggers and subscriptions will fail.
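Once DNS and the certificate are in place, a quick check from outside the cluster; the host is the example value from above:

```bash
# Expect HTTP/2 200 (or a redirect) and a valid certificate chain
curl -sSI https://n8n.example.com/ | head -n 1
```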
### Persistence

Persist /home/node/.n8n for local assets and state.

```yaml
persistence:
  enabled: true
  mountPath: /home/node/.n8n
  accessModes:
    - ReadWriteOnce
  size: 10Gi
  storageClass: <your-storage-class>
```
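After the first install you can confirm the volume bound; the PVC name depends on the chart’s naming, so list them all:

```bash
# STATUS should read Bound for the n8n data volume
kubectl -n n8n get pvc
```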
With values.yaml ready, the next step is to harden the deployment, ship it, and verify behavior.
```mermaid
erDiagram
    Workflow ||--o{ Execution : has
    Credential ||--o{ Workflow : used_by
    Workflow {
        int id
        string name
        datetime created_at
    }
    Execution {
        int id
        int workflow_id
        datetime started_at
    }
    Credential {
        int id
        string name
        datetime created_at
    }
```
## Harden and Deploy

What you’ll learn: resource limits, probes, autoscaling with HPA or KEDA, helm upgrade, and health checks.

### Resources and probes

Requests and limits stabilize scheduling. Probes help Kubernetes restart unhealthy pods.
```yaml
resources:
  requests:
    cpu: 150m
    memory: 512Mi
  limits:
    cpu: 500m
    memory: 1Gi
livenessProbe:
  httpGet:
    path: /healthz
    port: 5678
  initialDelaySeconds: 20
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /healthz
    port: 5678
  initialDelaySeconds: 10
  periodSeconds: 5
worker:
  resources:
    requests:
      cpu: 200m
      memory: 512Mi
    limits:
      cpu: 1
      memory: 1.5Gi
```
### Autoscale workers

Start with CPU in an HPA. Add KEDA for queue depth later.
```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: n8n-worker-hpa
  namespace: n8n
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: n8n-worker
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```
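Apply it alongside the release and watch the scaling decisions; the filename here is whatever you saved the manifest as:

```bash
kubectl apply -f n8n-worker-hpa.yaml
# TARGETS shows current vs. desired CPU utilization
kubectl -n n8n get hpa n8n-worker-hpa -w
```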
#### KEDA on Redis list length
```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: n8n-worker-keda
  namespace: n8n
spec:
  scaleTargetRef:
    name: n8n-worker
  minReplicaCount: 2
  maxReplicaCount: 20
  triggers:
    - type: redis
      metadata:
        address: <redis-host:6379>
        listName: bull:jobs:wait
        listLength: "50"
      authenticationRef:
        name: n8n-redis-trigger-auth
```
HPA is the HorizontalPodAutoscaler. KEDA is event‑driven autoscaling that scales from external metrics such as Redis list length.
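The authenticationRef above points at a KEDA TriggerAuthentication you define yourself. A minimal sketch, assuming the Redis password lives in the n8n-redis-auth Secret created earlier:

```yaml
apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
  name: n8n-redis-trigger-auth
  namespace: n8n
spec:
  secretTargetRef:
    - parameter: password    # trigger parameter KEDA fills in
      name: n8n-redis-auth   # Secret created in the external Redis step
      key: password
```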
### Deploy with Helm

```bash
# Install from the OCI registry
helm upgrade --install n8n "$CHART" -n n8n -f values.yaml
# Or from the classic repo
helm upgrade --install n8n n8n/n8n -n n8n -f values.yaml
```
### Verify health

```bash
kubectl -n n8n get pods,svc,ingress
kubectl -n n8n logs deploy/n8n --tail=120
kubectl -n n8n logs deploy/n8n-worker -f
```
### Quick test workflow

- Open https://n8n.example.com and create a simple workflow
- Add a Webhook node and a Set node, then activate it
- Send a test request with curl -X POST https://n8n.example.com/webhook/:id and confirm a 200 response and a new execution
If something is off, use the checks below to fix common issues fast.
```mermaid
flowchart TD
    A[Helm Deploy] --> B[Pods Ready]
    B --> C[Ingress OK]
    C --> D[Webhook Test]
    D --> E[Queue Runs]
    classDef action fill:#e8f5e8,stroke:#2e7d32
    class A,B,C,D,E action
```
## Troubleshoot and Next

What you’ll learn: fast fixes for webhooks, credentials, and connectivity, plus operations best practices.

### Webhooks 404 or not firing
- Match N8N_HOST, N8N_PROTOCOL=https, and WEBHOOK_URL to your ingress host
- Check ingress TLS and DNS records
- Recreate third‑party webhooks after URL changes
### Credentials fail to decrypt
- Keep a stable N8N_ENCRYPTION_KEY in one Secret
- Ensure all pods use the same Secret
- If rotated, restore from backup or re‑enter credentials
### DB or Redis connectivity stalls

- Exec into a pod and test with nc, psql, or redis-cli, as in the sketch after this list
- Check NetworkPolicy, Service names, ports, and TLS flags
- For managed services, allow the cluster egress range
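A few connectivity probes from inside the main pod. The hosts and ports are the example values used earlier in this guide, and the client binaries may not be present in every image:

```bash
# TCP reachability to Postgres and Redis from the n8n pod
kubectl -n n8n exec -it deploy/n8n -- sh -c 'nc -zv n8n-postgresql 5432'
kubectl -n n8n exec -it deploy/n8n -- sh -c 'nc -zv n8n-redis-master 6379'
```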
### Operational tips
- Version everything by committing values.yaml and HPA or KEDA manifests to Git
- Add monitoring by scraping pod metrics and tracking Redis or DB health
- Promote with GitOps across environments using the same chart and different values
You now run a hardened n8n on k8s with secrets, Postgres, Redis queue mode, TLS ingress, persistence, probes, and autoscaling. Next: add monitoring, adopt GitOps for values.yaml, and roll this pattern across environments.