## Environment Variables
What you’ll learn: What n8n environment variables are, why they matter, and how precedence works.
Environment variables are key-value settings passed to a process at runtime. In n8n, they control behavior without code changes.
- Use them across Docker, Compose, and Kubernetes for consistent deploys, as in the sketch below
- Load sensitive values from secret files with the _FILE pattern to avoid plain text
- Defaults help you start fast; explicit config keeps you stable in production
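A minimal Docker Compose sketch of that idea; the service name, image tag, hostname, and values are placeholders to adapt, not a canonical setup.
```yaml
# docker-compose.yml (sketch) - pass n8n settings as container environment
services:
  n8n:
    image: n8nio/n8n            # official image; pin a specific tag in prod
    ports:
      - "5678:5678"
    environment:
      - N8N_PROTOCOL=https
      - N8N_HOST=n8n.example.com
      - WEBHOOK_URL=https://n8n.example.com/
      - TZ=UTC
```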
### How n8n reads config
n8n resolves configuration in a clear order. This helps you know which value wins when the same setting appears in multiple places.
```mermaid
flowchart TD
  A[Env Vars] --> B[Config File]
  B --> C[Defaults]
  D{Final Value} --- A
  D --- B
  D --- C
  classDef trigger fill:#e1f5fe,stroke:#01579b
  classDef process fill:#fff3e0,stroke:#ef6c00
  classDef action fill:#e8f5e8,stroke:#2e7d32
  class A trigger
  class B process
  class C process
  class D action
```
- n8n reads the process environment first, then config files, then built-in defaults
- The _FILE form tells n8n to read the value from a mounted file, common with secret stores (sketched below)
- Prefer variables for immutable images and repeatable deploys
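For instance, a sketch of the _FILE pattern using Docker Compose file-based secrets; the service name, secret name, and paths are placeholders to adapt:
```yaml
# Sketch: keep the Postgres password out of plain environment variables
services:
  n8n:
    image: n8nio/n8n
    environment:
      # n8n reads the secret from the mounted file at startup
      - DB_POSTGRESDB_PASSWORD_FILE=/run/secrets/db_password
    secrets:
      - db_password
secrets:
  db_password:
    file: ./secrets/db_password.txt   # local file for dev; use an external secret store in prod
```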
### Precedence rules
- Environment variables override config files and defaults
- If both VAR and VAR_FILE are set, the file value wins
- Keep encryption keys and database passwords in a secret manager, not in source control
### Terminology
- Secret manager: a service that stores credentials and rotates them securely
- Connection pool: a cache of database connections reused by the app for efficiency
- mTLS: mutual TLS, client and server present certificates to verify each other
With precedence understood, let’s map the main configuration surfaces you will tune in production.
## Config Categories
What you’ll learn: The major config groups and when to use them.
Each group maps to a deployment concern. You will combine several in production.
- Database: DB_TYPE, DB_POSTGRESDB_* for Postgres, or SQLite for local dev
- Execution: EXECUTIONS_*, N8N_CONCURRENCY_PRODUCTION_LIMIT
- Security: N8N_ENCRYPTION_KEY, N8N_BASIC_AUTH_*, N8N_BLOCK_ENV_ACCESS_IN_NODE
- Network: WEBHOOK_URL, N8N_PROTOCOL, N8N_HOST, N8N_PORT, proxy hops
- Scaling and queue: EXECUTIONS_MODE=queue, QUEUE_BULL_*, REDIS_*
Short on time? Switch to Postgres, set a strong N8N_ENCRYPTION_KEY, and point WEBHOOK_URL at the public https address behind your proxy.
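If you change nothing else, a hedged starting point for those three settings looks like this (values are placeholders):
```env
# The three highest-impact settings (placeholder values)
DB_TYPE=postgresdb
N8N_ENCRYPTION_KEY=YOUR_64_HEX_SECRET     # generate once and back it up
WEBHOOK_URL=https://n8n.example.com/      # public https address behind your proxy
```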
Start with the database, because it unlocks scaling, clustering, and safer upgrades.
## Database Setup
What you’ll learn: When to choose SQLite vs Postgres and the key variables to configure.
SQLite is simple and great for local development. Postgres scales for multi-instance and queue mode.
### Postgres core settings
| Variable | Purpose | Example or default |
|---|---|---|
| DB_TYPE | Selects database backend | sqlite default, postgresdb for scale |
| DB_POSTGRESDB_HOST | Postgres host name | postgres |
| DB_POSTGRESDB_PORT | Postgres port | 5432 |
| DB_POSTGRESDB_DATABASE | Database name | n8n or n8n_prod |
| DB_POSTGRESDB_SCHEMA | Schema name | public |
| DB_POSTGRESDB_USER | Database user | n8n |
| DB_POSTGRESDB_PASSWORD or DB_POSTGRESDB_PASSWORD_FILE | Password or file-based secret | use a secret file with _FILE |
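Putting the core settings together, a hedged Compose sketch wiring n8n to Postgres; the image tags are assumptions, and the secret file reuses the pattern from the earlier sketch:
```yaml
# Sketch: n8n plus Postgres with the core DB_POSTGRESDB_* variables
services:
  postgres:
    image: postgres:16-alpine
    environment:
      - POSTGRES_DB=n8n
      - POSTGRES_USER=n8n
      - POSTGRES_PASSWORD_FILE=/run/secrets/db_password   # the official postgres image supports _FILE too
    secrets:
      - db_password
  n8n:
    image: n8nio/n8n
    environment:
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres        # the Compose service name
      - DB_POSTGRESDB_PORT=5432
      - DB_POSTGRESDB_DATABASE=n8n
      - DB_POSTGRESDB_USER=n8n
      - DB_POSTGRESDB_PASSWORD_FILE=/run/secrets/db_password
    secrets:
      - db_password
    depends_on:
      - postgres
secrets:
  db_password:
    file: ./secrets/db_password.txt
```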
### Connection tuning
| Variable | Purpose | Typical value |
|---|---|---|
| DB_POSTGRESDB_POOL_SIZE | Max pooled connections | 10 for queue mode |
| DB_POSTGRESDB_CONNECTION_TIMEOUT | Connect timeout ms | 30000 on slow links |
| DB_POSTGRESDB_IDLE_CONNECTION_TIMEOUT | Idle timeout ms | 60000 for bursty load |
### TLS and certificates
TLS encrypts traffic to the database. mTLS also validates the client.
| Variable | Purpose | Typical value |
|---|---|---|
| DB_POSTGRESDB_SSL_ENABLED | Enable TLS to Postgres | true for remote or managed |
| DB_POSTGRESDB_SSL_CA | CA cert path | path to CA cert |
| DB_POSTGRESDB_SSL_CERT | Client cert | path to client cert |
| DB_POSTGRESDB_SSL_KEY | Client key | path to client key |
| DB_POSTGRESDB_SSL_REJECT_UNAUTHORIZED | Verify server cert | true, disable only for tests |
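For these variables to work, the certificates must be readable inside the container. A hedged Compose sketch, assuming the certs live in a local ./certs directory:
```yaml
# Sketch: mount client certificates read-only and point n8n at them
services:
  n8n:
    image: n8nio/n8n
    volumes:
      - ./certs:/certs:ro
    environment:
      - DB_POSTGRESDB_SSL_ENABLED=true
      - DB_POSTGRESDB_SSL_CA=/certs/ca.pem
      - DB_POSTGRESDB_SSL_CERT=/certs/client.crt
      - DB_POSTGRESDB_SSL_KEY=/certs/client.key
      - DB_POSTGRESDB_SSL_REJECT_UNAUTHORIZED=true
```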
### When to move to Postgres
- You run multiple n8n instances or workers
- Execution volume grows beyond a single file database
- You need backups, high availability, or read replicas
| SQLite | PostgreSQL |
|---|---|
| Zero setup, file based | Network server, scalable |
| Great for dev and labs | Required for queue mode at scale |
| No concurrent writers | Optimized for concurrency |
```mermaid
flowchart TD
  A[Start on SQLite] --> B{Add workers?}
  B -->|No| C[Stay on SQLite]
  B -->|Yes| D[Move to Postgres]
  D --> E[Enable TLS]
  classDef trigger fill:#e1f5fe,stroke:#01579b
  classDef process fill:#fff3e0,stroke:#ef6c00
  classDef action fill:#e8f5e8,stroke:#2e7d32
  class A trigger
  class B process
  class C action
  class D action
  class E action
```
Optionally, a simplified schema view to anchor the concepts:
```mermaid
erDiagram
  Workflow ||--o{ Execution : runs
  Credential ||--o{ Workflow : used_by
  Workflow {
    int id
    string name
    datetime created_at
  }
  Execution {
    int id
    int workflow_id
    datetime started_at
  }
  Credential {
    int id
    string name
    datetime created_at
  }
```
With storage chosen, tune execution behavior and data retention for reliability and cost.
## Execution Tuning
What you’ll learn: How to set timeouts, prune data, and cap concurrency.
### Runtime and retention
| Variable | Purpose | Typical value |
|---|---|---|
| EXECUTIONS_MODE | Where jobs run | regular or queue for scale |
| EXECUTIONS_TIMEOUT | Hard stop per run seconds | 3600 to kill loops |
| EXECUTIONS_TIMEOUT_MAX | Max allowed per workflow seconds | 7200 for long jobs |
| EXECUTIONS_DATA_SAVE_ON_SUCCESS | Save success data | none in prod |
| EXECUTIONS_DATA_SAVE_ON_ERROR | Save error data | all for incident review |
| EXECUTIONS_DATA_SAVE_ON_PROGRESS | Save per node progress | true only for deep debug |
| EXECUTIONS_DATA_PRUNE | Enable pruning | true in prod |
| EXECUTIONS_DATA_MAX_AGE | Keep data hours | 168 for one week |
| EXECUTIONS_DATA_PRUNE_MAX_COUNT | Max records kept | 50000 high volume |
| EXECUTIONS_DATA_SAVE_MANUAL_EXECUTIONS | Save manual runs | false in prod |
| N8N_CONCURRENCY_PRODUCTION_LIMIT | Max concurrent executions | 20 to prevent spikes |
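To see whether retention settings keep up with real volume, you can check the execution table directly. This is a sketch; the table name execution_entity matches current n8n schemas but can differ between versions:
```bash
# Rough check of execution-data growth in the n8n Postgres database
psql -h postgres -U n8n -d n8n -c \
  "SELECT count(*) AS executions,
          pg_size_pretty(pg_total_relation_size('execution_entity')) AS total_size
   FROM execution_entity;"
```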
### Quick tips
- Start with strict timeouts in prod and relax per workflow as needed
- Save minimal success data; keep errors for root-cause analysis
Next, lock down credentials and reduce the attack surface.
## Security Basics
What you’ll learn: How to protect stored credentials and restrict access.
| Variable | Purpose | Typical value |
|---|---|---|
| N8N_ENCRYPTION_KEY or N8N_ENCRYPTION_KEY_FILE | Encrypt stored credentials | set before first prod boot |
| N8N_BASIC_AUTH_ACTIVE | Gate the UI with Basic Auth | true for small teams |
| N8N_BASIC_AUTH_USER | Basic Auth user name | admin or team user |
| N8N_BASIC_AUTH_PASSWORD | Basic Auth password | a long random string |
| N8N_BLOCK_ENV_ACCESS_IN_NODE | Block env reads in Code node | true for shared installs |
| NODES_EXCLUDE or NODES_INCLUDE | Disable or allowlist node types | reduce attack surface |
| N8N_COMMUNITY_PACKAGES_ENABLED | Allow community packages | false for hardened clusters |
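One way to generate and persist the encryption key before first boot; openssl and the file paths here are assumptions, so adapt them to your secret manager:
```bash
# Generate a 64-hex-character key once and keep it out of source control
mkdir -p ./secrets
openssl rand -hex 32 > ./secrets/n8n_encryption_key
chmod 600 ./secrets/n8n_encryption_key
# Then mount it as a secret and set:
#   N8N_ENCRYPTION_KEY_FILE=/run/secrets/n8n_encryption_key
```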
### Definitions
- Basic Auth: browser prompt for user and password before loading the app
- Allowlist: only listed items are permitted, all others are blocked
### Security stance
- Set and persist N8N_ENCRYPTION_KEY before storing any credentials
- Block environment access for untrusted builders
- Disable risky nodes you do not need
Back up your encryption key in a secure vault. If you lose it, you lose the ability to decrypt stored credentials.
Finally, make sure webhooks resolve correctly behind reverse proxies.
## Network and Webhooks
What you’ll learn: How to make public URLs and client IPs resolve correctly.
| Variable | Purpose | Typical value |
|---|---|---|
| N8N_HOST | Host name for URLs | n8n.example.com |
| N8N_PORT | Internal port | 5678 |
| N8N_PROTOCOL | URL scheme | https behind a proxy |
| WEBHOOK_URL | External base URL override | public https address |
| N8N_PROXY_HOPS | Trusted proxy hops count | match proxy chain |
### Common setup
- Terminate TLS on Nginx or Traefik, as in the sketch below
- Set WEBHOOK_URL to the public https URL
- Keep N8N_PORT at 5678 internally
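A minimal Nginx sketch of that setup; the server name, upstream address, and certificate paths are assumptions to replace with your own:
```nginx
# Terminate TLS at the proxy and forward to n8n on its internal port
server {
    listen 443 ssl;
    server_name n8n.example.com;
    ssl_certificate     /etc/nginx/certs/fullchain.pem;
    ssl_certificate_key /etc/nginx/certs/privkey.pem;
    location / {
        proxy_pass http://n8n:5678;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        # keep websocket connections for the editor UI alive
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```
With a single proxy in front of n8n like this, N8N_PROXY_HOPS=1 matches the chain.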
| Symptom | Likely cause | Fix |
|---|---|---|
| Webhooks show localhost | Missing WEBHOOK_URL | Set the public https URL |
| Wrong client IP or scheme | Proxy hops not trusted | Set N8N_PROXY_HOPS to match chain |
```mermaid
flowchart TD
  A[Client] --> B[Proxy]
  B --> C[n8n]
  C --> D[Webhook Flow]
  classDef trigger fill:#e1f5fe,stroke:#01579b
  classDef process fill:#fff3e0,stroke:#ef6c00
  classDef action fill:#e8f5e8,stroke:#2e7d32
  class A trigger
  class B process
  class C process
  class D action
```
When load grows, switch to queue mode and scale workers.
## Queue Mode
What you’ll learn: How to enable distributed workers with Redis.
- Redis is an in-memory data store used here as a job queue
- A worker is a process that pulls jobs from the queue and executes workflows
| Variable | Purpose | Typical value |
|---|---|---|
| EXECUTIONS_MODE | Execution backend | queue to distribute load |
| QUEUE_BULL_REDIS_HOST | Redis host name | redis service name |
| QUEUE_BULL_REDIS_PORT | Redis port | 6379 |
| REDIS_HOST and REDIS_PORT | Alternative Redis vars | use if your image expects them |
| REDIS_PASSWORD or REDIS_PASSWORD_FILE | Redis password or file | required on secured Redis |
| QUEUE_WORKER_CONCURRENCY | Jobs per worker | tune by CPU and memory |
| N8N_DISABLE_PRODUCTION_MAIN_PROCESS | Stop the main process from handling production webhooks | true only if dedicated webhook processes run |
```mermaid
flowchart TD
  A[Main] --> B[Redis]
  B --> C[Worker 1]
  B --> D[Worker 2]
  C --> E[Run Jobs]
  D --> E
  classDef trigger fill:#e1f5fe,stroke:#01579b
  classDef process fill:#fff3e0,stroke:#ef6c00
  classDef action fill:#e8f5e8,stroke:#2e7d32
  class A trigger
  class B process
  class C process
  class D process
  class E action
```
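A hedged Compose sketch of this topology, assuming the official n8nio/n8n image and the .env from the next section shared by main and workers; the Postgres service from the earlier sketch is omitted for brevity:
```yaml
# Sketch: queue mode with one main instance, Redis, and scalable workers
services:
  redis:
    image: redis:7-alpine
  n8n-main:
    image: n8nio/n8n
    env_file: .env
    ports:
      - "5678:5678"
    depends_on:
      - redis
  n8n-worker:
    image: n8nio/n8n
    command: worker          # pulls jobs from Redis instead of serving the UI
    env_file: .env
    depends_on:
      - redis
```
Scale workers with docker compose up -d --scale n8n-worker=2 once the queue starts backing up.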
## Starter Env
What you’ll learn: A copy-paste .env for Postgres and queue mode to get you production ready.
Minimal production .env example:
```env
# ---- Identity and URLs ----
N8N_HOST=n8n.example.com
N8N_PORT=5678
N8N_PROTOCOL=https
WEBHOOK_URL=https://n8n.example.com/
# ---- Security ----
# Generate once and store in a secret manager, 64 hex chars
N8N_ENCRYPTION_KEY=YOUR_64_HEX_SECRET
N8N_BLOCK_ENV_ACCESS_IN_NODE=true
N8N_BASIC_AUTH_ACTIVE=true
N8N_BASIC_AUTH_USER=admin
N8N_BASIC_AUTH_PASSWORD=change-me-now
# ---- Database Postgres ----
DB_TYPE=postgresdb
DB_POSTGRESDB_HOST=postgres
DB_POSTGRESDB_PORT=5432
DB_POSTGRESDB_DATABASE=n8n
DB_POSTGRESDB_SCHEMA=public
DB_POSTGRESDB_USER=n8n
DB_POSTGRESDB_PASSWORD_FILE=/run/secrets/db_password
DB_POSTGRESDB_POOL_SIZE=10
DB_POSTGRESDB_CONNECTION_TIMEOUT=30000
DB_POSTGRESDB_IDLE_CONNECTION_TIMEOUT=60000
DB_POSTGRESDB_SSL_ENABLED=false
# ---- Executions and retention ----
EXECUTIONS_MODE=queue
EXECUTIONS_TIMEOUT=3600
EXECUTIONS_TIMEOUT_MAX=7200
EXECUTIONS_DATA_SAVE_ON_SUCCESS=none
EXECUTIONS_DATA_SAVE_ON_ERROR=all
EXECUTIONS_DATA_SAVE_ON_PROGRESS=false
EXECUTIONS_DATA_PRUNE=true
EXECUTIONS_DATA_MAX_AGE=168
EXECUTIONS_DATA_PRUNE_MAX_COUNT=50000
N8N_CONCURRENCY_PRODUCTION_LIMIT=20
# ---- Queue and Redis ----
QUEUE_BULL_REDIS_HOST=redis
QUEUE_BULL_REDIS_PORT=6379
# REDIS_PASSWORD_FILE=/run/secrets/redis_pw
QUEUE_WORKER_CONCURRENCY=5
TZ=UTC
```
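After wiring this file into your services (for example with env_file in Compose), it is worth confirming the container actually received the values; the service name n8n here is an assumption:
```bash
# Spot-check the effective environment inside the running container
docker compose exec n8n printenv | grep -E '^(N8N_|DB_|EXECUTIONS_|QUEUE_)' | sort
```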
### Optional extras
```env
# Use mTLS to Postgres
DB_POSTGRESDB_SSL_ENABLED=true
DB_POSTGRESDB_SSL_CA=/certs/ca.pem
DB_POSTGRESDB_SSL_CERT=/certs/client.crt
DB_POSTGRESDB_SSL_KEY=/certs/client.key
DB_POSTGRESDB_SSL_REJECT_UNAUTHORIZED=true
# Keep production webhooks off the main process (requires dedicated webhook processes)
N8N_DISABLE_PRODUCTION_MAIN_PROCESS=true
```
If something breaks, use the checklist below to zero in fast.
## Debugging
What you’ll learn: How to diagnose the most common misconfigurations.
### Lost credentials after restart
- Cause: N8N_ENCRYPTION_KEY was auto generated and lost during redeploy
- Fix: Set a persistent N8N_ENCRYPTION_KEY or N8N_ENCRYPTION_KEY_FILE before storing credentials and back it up
### Wrong webhook URLs behind proxies
- Cause: WEBHOOK_URL unset, N8N_PROTOCOL wrong, or proxy hops not trusted
- Fix: Set WEBHOOK_URL to the public https URL, set N8N_PROTOCOL to https, set N8N_PROXY_HOPS to match your chain
### Database connection or pool issues
- Cause: Pool too small, wrong host or port, or TLS mismatch
- Fix: Verify DB_POSTGRESDB_* values, raise DB_POSTGRESDB_POOL_SIZE gradually, and match TLS to the server
### Execution data bloat
- Cause: Saving all success data or pruning disabled
- Fix: Set EXECUTIONS_DATA_SAVE_ON_SUCCESS to none and enable pruning with sane age and count
### Queue jobs stuck in running
- Cause: Redis unreachable, wrong host, or no worker running
- Fix: Point QUEUE_BULL_REDIS_HOST to the correct service, confirm ports, and start at least one worker with valid DB and Redis env
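For quick triage of all of the above, a few commands narrow things down fast; the service names n8n, redis, and postgres are assumptions that should match your Compose file:
```bash
# Fast checks when something misbehaves
docker compose logs --tail=100 n8n               # startup errors and bad env values
docker compose exec redis redis-cli ping         # expect PONG if Redis is reachable
docker compose exec postgres pg_isready -U n8n   # expect "accepting connections"
docker compose exec n8n printenv WEBHOOK_URL     # confirm the public URL is set
```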
## Checklist
What you’ll learn: A fast order of operations for safe rollouts.
- Set and persist N8N_ENCRYPTION_KEY before first prod boot
- Use Postgres and queue mode for multi instance or high volume setups
- Always set WEBHOOK_URL behind a proxy and use https
- Enable pruning and keep success data to a minimum
- Tune Postgres pool and worker concurrency using real metrics
Ship small, measure, then scale. Tweak EXECUTIONS_*, DB_*, and QUEUE_* in that order to avoid surprises.