This repository contains a Kubernetes Helm chart for deploying n8n, an extendable workflow automation tool. The chart is located in the `n8n/` directory.

See `n8n/README.md` for a quick start guide and common configuration options. See `CHANGELOG.md` for release notes.
Add the chart repository and install the release:

```shell
helm repo add n8n https://anyfavors.github.io/n8n-helm
helm repo update

# install the chart with the default values
helm install my-n8n n8n/n8n
```
Customise the deployment by editing the values in `n8n/values.yaml` or by supplying your own values file.
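As a minimal sketch of the values-file approach (the file name `my-values.yaml` and the overrides shown are illustrative, not chart requirements):

```shell
# Write an example override file (values shown are illustrative).
cat > my-values.yaml <<'EOF'
replicaCount: 3
image:
  tag: 1.0.0
EOF

# Install the release with the overrides (requires a configured cluster):
# helm install my-n8n n8n/n8n -f my-values.yaml
```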
Override the replica count and image tag directly on the command line:

```shell
helm install my-n8n n8n/n8n \
  --set replicaCount=3 \
  --set image.tag=1.0.0
```
Ingress can be enabled and host names customised using Helm values:

```shell
helm install my-n8n n8n/n8n \
  --set ingress.enabled=true \
  --set ingress.hosts[0].host=n8n.example.com \
  --set ingress.hosts[0].paths[0].path=/
```
To upgrade the release when new chart versions are available:

```shell
helm upgrade my-n8n n8n/n8n -f values.yaml
```

To completely remove the deployment:

```shell
helm uninstall my-n8n
```
To expose n8n using a Kubernetes ingress controller, enable the `ingress` block in your values and provide host and path information:
```yaml
ingress:
  enabled: true
  className: nginx
  hosts:
    - host: n8n.example.com
      paths:
        - path: /
          pathType: Prefix
```
If your cluster does not provide a default ingress class, ensure `ingress.className` matches your ingress controller.

TLS certificates can be configured via the `ingress.tls` section. When using cert-manager, reference the secret created for your certificate:
```yaml
ingress:
  enabled: true
  className: nginx
  hosts:
    - host: n8n.example.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: n8n-tls
      hosts:
        - n8n.example.com
```
Annotate the ingress with your issuer to have cert-manager obtain the certificate automatically:
```yaml
ingress:
  enabled: true
  className: nginx
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
  hosts:
    - host: n8n.example.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: n8n-tls
      hosts:
        - n8n.example.com
```
Create the ClusterIssuer itself:
```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    email: you@example.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt
    solvers:
      - http01:
          ingress:
            class: nginx
```
Optionally use a hook Job to apply a Certificate after installation:
```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: n8n-cert-provision
  annotations:
    helm.sh/hook: post-install
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
        - name: kubectl
          image: bitnami/kubectl:latest
          command: ["kubectl", "apply", "-f", "/manifests/certificate.yaml"]
          # A volume (e.g. a ConfigMap) containing certificate.yaml must be
          # mounted at /manifests for this command to succeed.
```
Set `service.type` to expose n8n using a Kubernetes LoadBalancer or NodePort service.

Example using a LoadBalancer:

```shell
helm install my-n8n n8n/n8n \
  --set service.type=LoadBalancer
```
Watch for the external IP to appear:

```shell
kubectl get svc --namespace default -w my-n8n
```
Example using a NodePort:

```shell
helm install my-n8n n8n/n8n \
  --set service.type=NodePort
```
Retrieve the address:

```shell
NODE_PORT=$(kubectl get svc --namespace default my-n8n -o jsonpath="{.spec.ports[0].nodePort}")
NODE_IP=$(kubectl get nodes --namespace default -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT
```
Set `persistence.enabled` to `true` to store workflows and other n8n data on a persistent volume. The claim size and storage class can be adjusted with the `size` and `storageClass` values, or supply `existingClaim` to mount a pre-created PersistentVolumeClaim. Data is mounted at `/home/node/.n8n` inside the pod.
```yaml
persistence:
  enabled: true
  size: 8Gi
  storageClass: standard
  # existingClaim: my-data   # mount a pre-created PVC instead of creating one
```
Install with these settings from the command line:

```shell
helm install my-n8n n8n/n8n \
  --set persistence.enabled=true \
  --set persistence.size=8Gi \
  --set persistence.storageClass=standard
```
The chart also includes a `values.schema.json` file that defines the allowed structure of `values.yaml`. Helm uses this schema to validate any custom values supplied during installation or upgrades.
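As an illustrative sketch only (not the chart's actual schema), a `values.schema.json` entry that constrains `replicaCount` might look like:

```json
{
  "$schema": "https://json-schema.org/draft-07/schema#",
  "type": "object",
  "properties": {
    "replicaCount": {
      "type": "integer",
      "minimum": 1
    }
  }
}
```

With such a rule in place, an install or upgrade that sets `replicaCount` to anything other than a positive integer fails validation before any resources are created.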
Set `networkPolicy.enabled` to `true` to create a `NetworkPolicy` resource that denies all traffic by default. Custom ingress and egress rules can be added under `networkPolicy.ingress` and `networkPolicy.egress`.
Example enabling the policy from the command line:

```shell
helm install my-n8n n8n/n8n \
  --set networkPolicy.enabled=true
```
To allow specific traffic, extend the policy in your values file:

```yaml
networkPolicy:
  enabled: true
  ingress:
    - from:
        - podSelector: {}
      ports:
        - protocol: TCP
          port: 5678
  egress: []
```
The chart runs n8n as a non-root user, drops all capabilities, and mounts the root filesystem read-only. These defaults align with the "restricted" Pod Security Standard. To enforce this profile on your namespace, add the `pod-security.kubernetes.io/*` labels:
```shell
kubectl label --overwrite namespace my-n8n \
  pod-security.kubernetes.io/enforce=restricted \
  pod-security.kubernetes.io/audit=restricted \
  pod-security.kubernetes.io/warn=restricted
```
Alternatively, apply the labels via a manifest:
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-n8n
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/warn: restricted
```
Create a PodDisruptionBudget to control the number of pods that may be voluntarily evicted at once. Enable the budget and specify either `pdb.minAvailable` or `pdb.maxUnavailable`:
```shell
helm install my-n8n n8n/n8n \
  --set pdb.enabled=true \
  --set pdb.minAvailable=1
```
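The same settings can live in a values file; this fragment mirrors the `--set` flags above:

```yaml
pdb:
  enabled: true
  minAvailable: 1   # keep at least one pod available during voluntary evictions
```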
Enable creation of a Role and RoleBinding for the service account by setting `rbac.create`:

```shell
helm install my-n8n n8n/n8n \
  --set rbac.create=true
```
The chart sets conservative resource requests and limits. For production deployments, edit the values under the `resources` block in `values.yaml` or override them on the command line:

```shell
helm install my-n8n n8n/n8n \
  --set resources.requests.cpu=500m \
  --set resources.requests.memory=512Mi \
  --set resources.limits.cpu=1 \
  --set resources.limits.memory=1Gi
```
Set `autoscaling.enabled` to `true` to create a HorizontalPodAutoscaler using the manifest in `n8n/templates/hpa.yaml`. Configure `minReplicas` and `maxReplicas` to control the scaling range, and optionally set `targetCPUUtilizationPercentage` or `targetMemoryUtilizationPercentage` for resource-based scaling.
Example enabling the autoscaler:

```shell
helm install my-n8n n8n/n8n \
  --set autoscaling.enabled=true \
  --set autoscaling.minReplicas=2 \
  --set autoscaling.maxReplicas=5 \
  --set autoscaling.targetCPUUtilizationPercentage=70
```
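Equivalently, in a values file (mirroring the flags above):

```yaml
autoscaling:
  enabled: true
  minReplicas: 2
  maxReplicas: 5
  targetCPUUtilizationPercentage: 70
```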
To use an external PostgreSQL server instead of the built-in SQLite storage, populate the values under the `database` block. These map directly to the connection fields:

- `database.host` – address of the database server
- `database.port` – listening port
- `database.user` – database user name
- `database.password` – user password
- `database.passwordSecret.name` – name of a secret containing the password
- `database.passwordSecret.key` – key within the secret (defaults to `password`)
- `database.database` – name of the database to connect to

n8n also requires the following environment variables:

- `DB_TYPE=postgresdb`
- `DB_POSTGRESDB_DATABASE` – should match `database.database`
Example snippet:
```yaml
database:
  host: postgres.example.com
  port: 5432
  user: n8n
  # password: mysecret
  passwordSecret:
    name: n8n-db
    key: password
  database: n8n
extraEnv:
  - name: DB_TYPE
    value: postgresdb
  - name: DB_POSTGRESDB_DATABASE
    value: n8n
```
Or supply the settings on the command line:

```shell
helm install my-n8n n8n/n8n \
  --set database.host=postgres.example.com \
  --set database.port=5432 \
  --set database.user=n8n \
  --set database.passwordSecret.name=n8n-db \
  --set database.passwordSecret.key=password \
  --set database.database=n8n \
  --set extraEnv[0].name=DB_TYPE \
  --set extraEnv[0].value=postgresdb \
  --set extraEnv[1].name=DB_POSTGRESDB_DATABASE \
  --set extraEnv[1].value=n8n
```
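The `n8n-db` secret referenced by `database.passwordSecret` must exist before installation. A minimal sketch that renders such a Secret manifest offline (the password is a placeholder; `kubectl create secret generic n8n-db --from-literal=password=...` achieves the same directly against a cluster):

```shell
# Render a Secret manifest for the database password without contacting a cluster.
PASSWORD='change-me'   # placeholder value
cat > n8n-db-secret.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: n8n-db
type: Opaque
data:
  password: $(printf '%s' "$PASSWORD" | base64)
EOF

# Apply it before installing the chart:
# kubectl apply -f n8n-db-secret.yaml
```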
Generate a 256-bit key and store it in a secret to encrypt credentials:

```shell
kubectl create secret generic n8n-key \
  --from-literal=encryptionKey=$(openssl rand -hex 32)
```
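`openssl rand -hex 32` produces 32 random bytes encoded as 64 hexadecimal characters, i.e. a 256-bit key. A quick sanity check:

```shell
# Confirm the generated key is 64 hex characters (32 bytes = 256 bits).
KEY=$(openssl rand -hex 32)
echo "${#KEY}"   # prints 64
```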
Reference the secret in your values:

```yaml
encryptionKeySecret:
  name: n8n-key
  key: encryptionKey
```
Or set it on the command line:

```shell
helm install my-n8n n8n/n8n \
  --set encryptionKeySecret.name=n8n-key
```
Existing Kubernetes Secrets and ConfigMaps can be mounted into the pod using the `extraSecrets` and `extraConfigMaps` values:

```yaml
extraSecrets:
  - name: my-secret
    mountPath: /etc/secret
extraConfigMaps:
  - name: my-config
    mountPath: /etc/config
```
Install with command-line flags:

```shell
helm install my-n8n n8n/n8n \
  --set extraSecrets[0].name=my-secret \
  --set extraSecrets[0].mountPath=/etc/secret \
  --set extraConfigMaps[0].name=my-config \
  --set extraConfigMaps[0].mountPath=/etc/config
```
Chart releases are handled automatically by chart-releaser. To publish a new version:

1. Ensure the `gh-pages` branch exists in the repository and configure GitHub Pages to use it. The branch can start with an empty `index.yaml` file.
2. Bump the `version` (and optionally `appVersion`) fields in `n8n/Chart.yaml`.
3. Update `CHANGELOG.md`.
4. Merge the changes into the `main` branch.
5. The `release.yaml` workflow packages the chart from the `n8n` directory and uploads it to a GitHub release.
6. The `gh-pages` branch is updated at https://anyfavors.github.io/n8n-helm.

This project is licensed under the MIT License.