
How to Backup and Restore n8n Docker Volumes (With Cron Examples) – Step‑by‑Step Guide

💡

Backups are real only when restores work. You’ll protect the n8n data directory, keep the N8N_ENCRYPTION_KEY safe, and automate snapshots with cron

What to back up

What you’ll learn: Exactly which n8n files matter, why the encryption key is critical, and common data‑loss risks

n8n (workflow automation platform) stores workflows, credentials, and config in /home/node/.n8n. That folder is your lifeline

  • Lose the folder or encryption key and credentials won’t decrypt
  • Docker upgrades or bad mounts can wipe data fast

In a few steps you’ll script backups, schedule them, then restore on the same or a new server

Where data lives

  • n8n data directory: /home/node/.n8n (workflows, credentials, config)
  • Encryption key: N8N_ENCRYPTION_KEY (symmetric key that decrypts credentials)
  • Database: SQLite by default inside .n8n, or external Postgres/MySQL (relational databases)
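
A quick sanity check before scripting anything is to list the data directory inside the running container (a minimal example, assuming the container is named n8n):

# Confirm the data directory exists and see what it contains
docker exec n8n ls -la /home/node/.n8n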

Below is a simplified view of core n8n data objects to orient your backup scope

flowchart TD
    A[n8n data dir] --> B[Workflows]
    A --> C[Credentials]
    A --> D[Config]

    classDef trigger fill:#e1f5fe,stroke:#01579b
    classDef process fill:#fff3e0,stroke:#ef6c00
    class A trigger
    class B process
    class C process
    class D process

Prep your data

What you’ll learn: How to locate the n8n container, find the Docker volume, and save the encryption key

Before scripting, confirm your setup. You’ll use Docker (container runtime), Docker Compose (multi‑container tool), bash, and sudo

With the stakes clear, let's identify the exact container and volume you'll back up

Find the container

# If you used docker compose:
docker compose ps

# Or list all and grep
docker ps --format 'table {{.Names}}\t{{.Image}}' | grep n8n

Find the volume

A Docker volume is persistent storage mapped into a container

# Replace n8n with your container name
docker inspect -f '{{ range .Mounts }}{{ .Name }} -> {{ .Destination }}{{ "\n" }}{{ end }}' n8n | grep '/home/node/.n8n'
# Example output: n8n_data -> /home/node/.n8n

Save the encryption key

# Try env first
docker exec n8n env | grep '^N8N_ENCRYPTION_KEY='

# Or read from config file if not set as env
docker exec n8n sh -lc "grep -oE 'encryptionKey.*:.*\"[^\"]+\"' /home/node/.n8n/config || true"
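
Once you can read the key, store it somewhere durable. One option (a sketch, assuming the key is set as an environment variable) is a root-only file on the host until you move it into a password manager:

# Stopgap: write the key to a root-only file; a password manager is the better home
docker exec n8n env | grep '^N8N_ENCRYPTION_KEY=' | sudo tee /root/n8n_encryption_key.txt >/dev/null
sudo chmod 600 /root/n8n_encryption_key.txt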

If you use an external database (Postgres/MySQL), still back up /home/node/.n8n for the encryption key and config, and back up the database separately
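
For example, if n8n points at Postgres, a dump can land next to the volume archives. This is only a sketch with assumed names (container n8n-postgres, database and user n8n); adjust to your setup:

# Hypothetical: dump an external Postgres database used by n8n
docker exec n8n-postgres pg_dump -U n8n -d n8n | gzip > /var/backups/n8n/n8n-db-$(date +%F).sql.gz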

Backup script and cron

What you’ll learn: A copy‑paste script to archive the n8n volume, retention cleanup, and a cron schedule

With your container and volume identified, create a repeatable backup workflow

Write the script

sudo tee /usr/local/bin/backup_n8n.sh >/dev/null <<'EOF'
#!/usr/bin/env bash
set -euo pipefail

# Customize these
N8N_CONTAINER="n8n"          # container name
N8N_VOLUME="n8n_data"        # docker volume for /home/node/.n8n
BACKUP_DIR="/var/backups/n8n" # host path for .tar.gz files
RETAIN_DAYS=14                # delete local backups older than N days

mkdir -p "$BACKUP_DIR"
STAMP="$(date +%F-%H%M%S)"
FILE="n8n-${STAMP}.tar.gz"

# Optional: quiesce for SQLite users to avoid in-flight writes
# docker stop "$N8N_CONTAINER"

# Create compressed archive straight from the volume (read-only)
docker run --rm \
  -v "${N8N_VOLUME}:/src:ro" \
  -v "${BACKUP_DIR}:/dest" \
  alpine:3.20 sh -lc "tar -C /src -czf /dest/${FILE} ."

# Ownership fix for root-created files (adjust user:group if needed)
chown --reference="$BACKUP_DIR" "$BACKUP_DIR/${FILE}" || true

# Prune old backups
find "$BACKUP_DIR" -name 'n8n-*.tar.gz' -type f -mtime +"${RETAIN_DAYS}" -delete

# docker start "$N8N_CONTAINER" || true

echo "Created: $BACKUP_DIR/${FILE}"
EOF
sudo chmod +x /usr/local/bin/backup_n8n.sh

Test a backup

/usr/local/bin/backup_n8n.sh
ls -lh /var/backups/n8n | tail -n 1
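
Optionally, list a few entries from the newest archive to confirm it really contains your data (a quick spot-check, not a full restore test):

# Peek inside the most recent archive
tar -tzf "$(ls -1t /var/backups/n8n/n8n-*.tar.gz | head -n1)" | head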

Schedule with cron

cron is a Linux job scheduler that runs commands at fixed times

# Open user crontab
crontab -e

# Daily 02:15
15 2 * * * /usr/local/bin/backup_n8n.sh >> /var/log/n8n_backup.log 2>&1

# Or every 6 hours (00,06,12,18)
0 */6 * * * /usr/local/bin/backup_n8n.sh >> /var/log/n8n_backup.log 2>&1

Quick schedule picks

Frequency    | RPO   | Use case
Daily 02:15  | ≤ 24h | Small teams, low change rate
Every 6h     | ≤ 6h  | Active instances
Hourly       | ≤ 1h  | Critical automations

RPO means Recovery Point Objective, the maximum acceptable data loss window
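
If you pick the hourly option from the table, the crontab entry follows the same pattern as the examples above (same script and log file assumed):

# Hourly, at minute 0
0 * * * * /usr/local/bin/backup_n8n.sh >> /var/log/n8n_backup.log 2>&1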

flowchart TD
    A[Start] --> B[Find container]
    B --> C[Find volume]
    C --> D[Archive data]
    D --> E[Prune old files]
    E --> F[Offsite copy]
    F --> G[Done]

    classDef trigger fill:#e1f5fe,stroke:#01579b
    classDef process fill:#fff3e0,stroke:#ef6c00
    classDef action fill:#e8f5e8,stroke:#2e7d32
    classDef alert fill:#f3e5f5,stroke:#7b1fa2
    class A trigger
    class B process
    class C process
    class D action
    class E action
    class F action
    class G alert

💡

Tip: push /var/backups/n8n to offsite storage with rclone or rsync to survive host loss
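
As a sketch of the rsync route (the remote host and path are placeholders, and SSH key authentication is assumed):

# Mirror the backup directory to another host over SSH
rsync -az /var/backups/n8n/ backup@backup-host:/srv/n8n-backups/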

Restore anywhere

What you’ll learn: How to restore on the same host or a new server, fix file permissions, and start n8n safely

With backups running, verify you can bring an instance back to life

Same‑server restore

# Vars
BACKUP="/var/backups/n8n/n8n-2025-12-23-021500.tar.gz"
VOLUME="n8n_data"
CONTAINER="n8n"

# Stop app
docker stop "$CONTAINER" || true

# Extract into the volume
docker run --rm \
  -v "$VOLUME:/dst" -v "$(dirname "$BACKUP"):/src" \
  alpine:3.20 sh -lc "tar -C /dst -xzf /src/$(basename "$BACKUP")"

# Ensure n8n user (uid 1000) can read/write
docker run --rm -v "$VOLUME:/dst" alpine:3.20 sh -lc "chown -R 1000:1000 /dst"

# Start app
docker start "$CONTAINER"

New‑server restore

# Copy a backup file to the new host, then:
VOLUME="n8n_data"
BACKUP="/var/backups/n8n/n8n-2025-12-23-021500.tar.gz"

docker volume create "$VOLUME"
docker run --rm -v "$VOLUME:/dst" -v "$(dirname "$BACKUP"):/src" \
  alpine:3.20 sh -lc "tar -C /dst -xzf /src/$(basename "$BACKUP") && chown -R 1000:1000 /dst"

Start with the original key

# docker-compose.yml (excerpt)
services:
  n8n:
    image: n8nio/n8n:latest
    environment:
      - N8N_ENCRYPTION_KEY=REPLACE_WITH_YOUR_OLD_KEY
    volumes:
      - n8n_data:/home/node/.n8n
volumes:
  n8n_data: {}
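
Then bring the restored instance up and follow the first startup logs:

docker compose up -d
docker compose logs -f n8n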

💡

Critical: use the same N8N_ENCRYPTION_KEY from the source instance or credentials will not decrypt

flowchart TD
    A[Stop app] --> B[Extract data]
    B --> C[Fix perms]
    C --> D[Start app]
    D --> E[Verify logs]
    E --> F[Test flows]

    classDef trigger fill:#e1f5fe,stroke:#01579b
    classDef process fill:#fff3e0,stroke:#ef6c00
    classDef action fill:#e8f5e8,stroke:#2e7d32
    classDef alert fill:#f3e5f5,stroke:#7b1fa2
    class A trigger
    class B action
    class C action
    class D action
    class E process
    class F alert

Verify and harden

What you’ll learn: How to validate a restore and reduce future risks with offsite copies and integrity checks

After restoring, prove it works and make it harder to break

Verify the restore

# Watch logs for errors during startup
docker logs -f n8n

  • Open n8n, confirm workflows exist, and run a test execution
  • Use a credential in a simple node and ensure it authenticates

Harden the setup

  • Offsite copies: rclone and rsync move backups to object storage or another server
  • Integrity tests: periodically extract the latest archive into a temporary Docker volume and spot‑check
  • Permissions: n8n runs as UID 1000, so chown -R 1000:1000 on the volume if files are owned by root

# Offsite copy example with rclone (configure remote first)
rclone copy /var/backups/n8n remote:n8n-backups --transfers 4 --checkers 8

# Optional: weekly integrity test (extract to temp volume)
TESTVOL="n8n_test_$(date +%F)"; docker volume create "$TESTVOL"
docker run --rm -v "$TESTVOL:/dst" -v /var/backups/n8n:/src \
  alpine:3.20 sh -lc 'LATEST="$(ls -1 /src/n8n-*.tar.gz | tail -n1)"; tar -C /dst -xzf "$LATEST"'

Data model context (simplified)

This ERD is a high‑level view to explain why both the data directory and the database matter

erDiagram
    Workflow ||--o{ Execution : has

    Workflow {
        int id
        string name
        datetime created_at
    }

    Execution {
        int id
        int workflow_id
        datetime started_at
    }

    Credential {
        int id
        string type
        datetime created_at
    }

    Config {
        int id
        string key
        datetime created_at
    }

💡

Next steps: run a timed restore test, store the encryption key in a password manager, move backups off‑server, and document this runbook where your team can find it
