Goal: deploy a reliable, backed‑up, and scalable n8n on Docker. You’ll map the right volumes, keep credentials safe, migrate cleanly, and avoid overload.
Introduction
You want n8n running 24/7 without surprises. Backups should be routine, not heroic.
This guide shows exactly how to:
- Start n8n with the correct timezone and persistent storage.
- Back up and migrate workflows and credentials safely.
- Scale responsibly and keep concurrency in check.
One setup. Clear backups. Predictable scaling.
Setup and persistence
Prerequisites and environment overview
Bring a Linux host or VM with Docker and Compose installed. A small box handles light use. Bigger workloads need Postgres and Redis.
- Recommended starter specs: 2 vCPU, 4 GB RAM, 20 GB SSD.
- Default DB is SQLite. Postgres is better for production and scaling.
- n8n persists data under /home/node/.n8n inside the container. Map it to a Docker volume.
Keep sensitive values in .env. Do not hardcode secrets in YAML.
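A minimal `.env` sketch (the values below are placeholders — generate your own key, for example with `openssl rand -hex 32`):

```dotenv
# .env — keep this file out of version control (add it to .gitignore)
GENERIC_TIMEZONE=America/New_York
# 32+ random characters; losing this key makes stored credentials unreadable
N8N_ENCRYPTION_KEY=replace-with-a-long-random-string
```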
Step 1: Run n8n in Docker with correct timezone settings
Time matters. Schedules fire based on the instance timezone.
- Create a named volume and start n8n.
```shell
docker volume create n8n_data

docker run -it --rm \
  --name n8n \
  -p 5678:5678 \
  -e GENERIC_TIMEZONE="America/New_York" \
  -e TZ="America/New_York" \
  -e N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS=true \
  -e N8N_RUNNERS_ENABLED=true \
  -v n8n_data:/home/node/.n8n \
  docker.n8n.io/n8nio/n8n
```
- GENERIC_TIMEZONE controls schedule‑oriented nodes. TZ sets the OS time.
- Keep ports and volume mapping as shown. You can change the host port later.
Now open http://YOUR_HOST:5678 and finish setup.
Step 2: Configure Docker volumes for n8n data persistence
Persist or lose work. There is no middle ground.
- The SQLite DB lives at ~/.n8n/database.sqlite in the mapped volume. Back it up.
- Credentials are encrypted with an instance key. Persist that key, or your exports can never be decrypted.
A minimal docker-compose.yml with a bind mount for easy backups:
```yaml
version: "3.8"
services:
  n8n:
    image: docker.n8n.io/n8nio/n8n:latest
    container_name: n8n
    restart: unless-stopped
    ports:
      - "5678:5678"
    env_file: [.env]
    environment:
      - GENERIC_TIMEZONE=${GENERIC_TIMEZONE}
      - TZ=${GENERIC_TIMEZONE}
      - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
    volumes:
      - ./n8n_data:/home/node/.n8n
```
Store .env in a secure path. Keep it out of git.
Pro tip: if you prefer Postgres, set DB_TYPE=postgresdb and DB_POSTGRESDB_* variables. Use a managed Postgres for reliability.
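As a sketch, the Postgres variables in the compose `environment` block could look like this (hostname and credentials are placeholders for your own setup):

```yaml
environment:
  - DB_TYPE=postgresdb
  - DB_POSTGRESDB_HOST=your-postgres-host
  - DB_POSTGRESDB_PORT=5432
  - DB_POSTGRESDB_DATABASE=n8n
  - DB_POSTGRESDB_USER=n8n
  - DB_POSTGRESDB_PASSWORD=${POSTGRES_PASSWORD}
```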
Backup and migration
Step 3: Export and back up workflows and credentials (including decryption)
Back up three things on a schedule.
- The data volume (contains SQLite DB and config).
- Workflows export (JSON files).
- Credentials export (encrypted for normal backups, decrypted only for migrations).
Mount a backup folder and run CLI from the container.
```shell
# create a host folder for exports
mkdir -p "$(pwd)/backups"

# attach it temporarily and run the exports
docker run --rm \
  -v n8n_data:/home/node/.n8n \
  -v "$(pwd)/backups":/home/node/backups \
  docker.n8n.io/n8nio/n8n \
  sh -c '
    mkdir -p /home/node/backups/latest && \
    n8n export:workflow --backup --output=/home/node/backups/latest && \
    n8n export:credentials --backup --output=/home/node/backups/latest
  '
```
For a one‑time migration to a different key, export decrypted credentials. Treat this as highly sensitive.
```shell
# DECRYPTED export for migration only
n8n export:credentials --all --decrypted --output=/home/node/backups/decrypted.json
```
- Use docker exec on a running container if you prefer. The command is the same.
- Store decrypted.json in a vault or offline. Delete it after the move.
Add a quick rsync job for the volume folder.
```shell
# if using a bind mount like ./n8n_data
rsync -aH --delete ./n8n_data/ backup-host:/srv/backup/n8n_data/
```
This gives you file‑level rollback and disaster recovery.
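To honor a retention window (for example 7–30 days of snapshots), a pruning step can follow the rsync. A sketch, assuming dated snapshot directories under a backup root — the paths and retention value are illustrative:

```shell
# prune_snapshots DIR DAYS — delete snapshot subdirectories older than DAYS
prune_snapshots() {
  # each snapshot is a directory (e.g. named 2024-05-01); select by mtime
  find "$1" -mindepth 1 -maxdepth 1 -type d -mtime +"$2" -exec rm -rf {} +
}

# example: keep two weeks of nightly snapshots
# prune_snapshots /srv/backup/n8n_snapshots 14
```

Run it from the same cron job that performs the rsync, after the copy succeeds.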
Step 4: Restore and migrate workflows/credentials between instances
Two clean paths exist. Choose one.
- Same encryption key on source and target: import the encrypted credentials.
- Different key on the target: import from the decrypted export.
First, set the key. Then import.
```shell
# ensure the target uses the intended key
export N8N_ENCRYPTION_KEY="your-32+char-random-string"

# run imports inside the container
n8n import:workflow --separate --input=/home/node/backups/latest/
n8n import:credentials --separate --input=/home/node/backups/latest/
```
If the key was lost, read it from /home/node/.n8n/config in the source volume and place it in N8N_ENCRYPTION_KEY on the target.
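To read the key out of that file, a small sketch like this works — the sed pattern assumes the usual `"encryptionKey": "..."` JSON shape of the config file:

```shell
# extract_key FILE — print the encryptionKey value from an n8n config file
extract_key() {
  sed -n 's/.*"encryptionKey": *"\([^"]*\)".*/\1/p' "$1"
}

# example against a copy of the source volume's config:
# extract_key ./n8n_data/config
```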
Use decrypted.json only when keys differ. Rotate or destroy that file after import.
Heads-up: name length limits differ between SQLite and Postgres. Shorten overly long workflow names if imports complain.
Scaling and safety
Step 5: Plan concurrency and scaling (single box vs many containers)
Don’t push a single container too hard. Keep concurrent production executions around 10–20 on modest hosts. Use an explicit cap.
```shell
# set a safe ceiling in REGULAR (non-queue) mode
export N8N_CONCURRENCY_PRODUCTION_LIMIT=20
```
- This queues extra executions and protects the event loop.
- Some nodes have their own limits, so real throughput can be lower.
When you outgrow one box, move to queue mode with Redis and workers.
```yaml
version: "3.8"
services:
  postgres:
    image: postgres:15
    environment:
      - POSTGRES_DB=n8n
      - POSTGRES_USER=n8n
      - POSTGRES_PASSWORD=strongpass
    volumes:
      - pg_data:/var/lib/postgresql/data
  redis:
    image: redis:7
  n8n-main:
    image: docker.n8n.io/n8nio/n8n:latest
    depends_on: [postgres, redis]
    environment:
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_DATABASE=n8n
      - DB_POSTGRESDB_USER=n8n
      - DB_POSTGRESDB_PASSWORD=strongpass
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis
      - GENERIC_TIMEZONE=America/New_York
      - TZ=America/New_York
      - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
    volumes:
      - n8n_data:/home/node/.n8n
    ports:
      - "5678:5678"
  n8n-worker:
    image: docker.n8n.io/n8nio/n8n:latest
    depends_on: [postgres, redis]
    command: ["worker"]
    environment:
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_DATABASE=n8n
      - DB_POSTGRESDB_USER=n8n
      - DB_POSTGRESDB_PASSWORD=strongpass
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis
      - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
volumes:
  n8n_data:
  pg_data:
```
- Set EXECUTIONS_MODE=queue on main and workers. Point both at Redis. Share the same encryption key everywhere.
- Use Postgres when you use queue mode. Skip SQLite here.
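With a compose file like the one above, adding capacity is one command — scale the worker service (this assumes the worker service has no fixed `container_name`, as shown):

```shell
# run three worker replicas alongside the main instance
docker compose up -d --scale n8n-worker=3
```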
Single big box vs many smaller services:
- One big box is simpler, yet it fails all‑at‑once under heavy spikes.
- Many small Docker deployments isolate blast radius and reduce noisy‑neighbor issues.
Choose fewer moving parts early. Split later when the graphs prove it.
Step 6: Example scaling patterns and when to move to custom code (Ruby/Python)
Use patterns that match your workload. Don’t fight the tool.
- Fan‑out webhooks to queue mode workers for bursty APIs.
- Daily batch jobs stay on a single node with a safe concurrency cap.
- Heavy database transforms run better in custom code.
When to move away from n8n for a specific step:
- Long ETL across huge tables.
- Complex window functions and joins.
- Tight memory budgets with big in‑memory maps.
Write that segment in Ruby or Python as a service. Call it from n8n with HTTP or a queue. Keep n8n as the orchestrator while code handles the grind.
Common pitfalls and troubleshooting
Common pitfalls and quick fixes
- Lost credentials after redeploy
  - Cause: the container generated a new encryption key. Fix: restore the original key from /home/node/.n8n/config, or set N8N_ENCRYPTION_KEY and restart.
- Crashes or lockups under load
  - Cause: too many concurrent executions in regular mode. Set N8N_CONCURRENCY_PRODUCTION_LIMIT and watch CPU and memory.
- Queue mode not processing jobs
  - Cause: Redis or DB misconfiguration. Verify EXECUTIONS_MODE=queue and the QUEUE_BULL_REDIS_* values on both main and workers.
- Backups exist yet imports fail
  - Cause: name length or ID conflicts. Trim long names or import into an empty DB. Use --separate and fix IDs when needed.
What to back up (quick table)
| Item | Location | Notes |
|---|---|---|
| SQLite DB | ./n8n_data/database.sqlite | Copy nightly. Keep 7–30 days. |
| Config (incl. key) | ./n8n_data/config | Store off‑box and encrypt. |
| Workflows | backups/latest/*.json | Created by export:workflow. |
| Credentials | backups/latest/*.json | Use encrypted for routine backups. Decrypted only for migrations. |
Conclusion and recommended next actions
- Lock in timezone, volumes, and the encryption key first.
- Automate exports and file backups. Test a restore monthly.
- Start with a modest concurrency cap. Move to queue mode when graphs justify it.
Next steps:
- Add health checks and logs. Monitor CPU, memory, and queued jobs.
- Plan a blue‑green instance for upgrades.
- Document the recovery steps in your runbook.
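For the health-check item above, one sketch is a Compose healthcheck against n8n's /healthz endpoint — the interval values are just examples, and this assumes busybox wget is available in the image (it is in the Alpine-based official image):

```yaml
services:
  n8n:
    # ...existing settings...
    healthcheck:
      test: ["CMD-SHELL", "wget -qO- http://localhost:5678/healthz || exit 1"]
      interval: 30s
      timeout: 5s
      retries: 5
```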
Keep the encryption key, the database, and your JSON exports together. Test restores on a fresh VM before you need them.