The Docker Image Server Approach

The most common approach, which is part of the official solution, is to create a Docker image of a server capable of responding to any request with 404 content, except for /healthz and /metrics. This could be an Nginx instance.

/healthz should return 200

/metrics is optional, but it should return data that is readable by Prometheus, in case you are using Prometheus for your k8s metrics.

Note: Nginx can provide some basic data that Prometheus can read.
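
For reference, the stub_status module used further below emits a small plain-text report of connection counters, along these lines (the numbers are illustrative):

```
Active connections: 2
server accepts handled requests
 16 16 31
Reading: 0 Writing: 1 Waiting: 1
```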

/ returns a 404 with your custom HTML content.

Thus, the Dockerfile looks like this:

FROM nginx:alpine

# Remove default NGINX Config
# Take care of Nginx logging

RUN rm /etc/nginx/conf.d/default.conf && \
    ln -sf /dev/stdout /var/log/nginx/access.log && \
    ln -sf /dev/stderr /var/log/nginx/error.log

# NGINX Config

COPY ./default.conf /etc/nginx/conf.d/default.conf

# Resources 

COPY content/ /var/www/html/

CMD ["nginx", "-g", "daemon off;"]

In the same folder where Dockerfile is located, create this default.conf Nginx configuration file:

server {
    listen 80;
    root /var/www/html;
    index 404.html;

    location / {
        return 404;
    }

    location /healthz {
        access_log off;
        return 200 "healthy\n";
    }

    location /metrics {
        # This creates a readable and somewhat useful response for Prometheus
        stub_status on;
    }

    error_page 404 /404.html;
    location = /404.html {
        internal;
    }
}

Finally, provide a content/404.html file with HTML/CSS to your own liking.
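
For example, a minimal placeholder page can be generated like this (the markup and styling are entirely up to you):

```shell
mkdir -p content
cat > content/404.html <<'EOF'
<!DOCTYPE html>
<html>
  <head><title>404 - Page Not Found</title></head>
  <body>
    <h1>404</h1>
    <p>Sorry, the page you are looking for does not exist.</p>
  </body>
</html>
EOF
```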

Now build the Docker image with:

docker build -t custom-default-backend .
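
Before pushing, you can smoke-test the image locally; the container name and host port here are arbitrary:

```shell
# Run the image in the background, mapping host port 8080 to the container's port 80
docker run --rm -d --name default-backend-test -p 8080:80 custom-default-backend

curl -i http://localhost:8080/healthz   # should return 200 with "healthy"
curl -i http://localhost:8080/          # should return 404 with the custom page

docker stop default-backend-test
```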

Tag the image so that it is ready to be pushed to Docker Hub (or your own private Docker registry):

docker tag custom-default-backend:latest <your_dockerhub_username>/custom-default-backend

Push the image to a Docker Hub repository:

docker push <your_dockerhub_username>/custom-default-backend

Next, create a k8s resource file (custom_default_backend.yaml) with a Service and a Deployment, so the custom-default-backend image can be integrated into the Helm installation:

apiVersion: v1
kind: Service
metadata:
  name: custom-default-backend
  namespace: ingress-nginx
  labels:
    app: custom-default-backend
spec:
  selector:
    app: custom-default-backend
  ports:
    - port: 80
      targetPort: 80
      name: http
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: custom-default-backend
  namespace: ingress-nginx
  labels:
    app: custom-default-backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: custom-default-backend
  template:
    metadata:
      labels:
        app: custom-default-backend
    spec:
      containers:
        - name: custom-default-backend
          # Don't forget to edit the line below
          image: <your_dockerhub_username>/custom-default-backend:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 80

Create the k8s namespace ingress-nginx if it doesn't exist already (kubectl create namespace ingress-nginx). Then proceed to create these two resources:

kubectl apply -f custom_default_backend.yaml
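
Before wiring this into the controller, it is worth confirming that the backend is actually serving; a quick check (assuming kubectl is pointed at the right cluster):

```shell
# Wait for the Deployment to finish rolling out
kubectl rollout status deployment/custom-default-backend -n ingress-nginx

# The Service should list at least one pod IP under ENDPOINTS
kubectl get endpoints custom-default-backend -n ingress-nginx
```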

In order to tie the Nginx Ingress Controller to the new service, you can either edit the deployment of the Ingress Controller directly, or remove it completely via Helm:

helm delete nginx-ingress -n ingress-nginx

And install it again with the command below.

Note: make sure you include the --set flag with the proper arguments.

helm install nginx-ingress --namespace ingress-nginx stable/nginx-ingress --set defaultBackend.enabled=false,controller.defaultBackendService=ingress-nginx/custom-default-backend

Following these steps will give you a working custom default backend.

However, newer versions of ingress-nginx let you specify just the Docker image to pull; there is no need for the other k8s resource files (i.e. the Service and the Deployment).

The current values.yaml for the nginx ingress controller chart allows a custom default backend to be configured directly:

defaultBackend:
  enabled: true
  name: custom-default-backend
  image:
    repository: <ABCD>/custom-default-backend
    tag: "latest"
    pullPolicy: Always
  port: 8080
  extraVolumeMounts:
    - name: tmp
      mountPath: /tmp
  extraVolumes:
    - name: tmp
      emptyDir: {}
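
With these values saved to a values.yaml file, the controller can be installed (or upgraded) in one step; the release and chart names below follow the earlier Helm command and may differ in your setup:

```shell
helm upgrade --install nginx-ingress stable/nginx-ingress \
  --namespace ingress-nginx -f values.yaml
```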

The Template Modification

There is another way to provide custom error pages in ingress-nginx: modifying the ingress-nginx template (/etc/nginx/template).

    volumeMounts:
      - name: custom-errors
        mountPath: /usr/local/nginx/html/
        readOnly: true
      - name: nginx-ingress-template-volume
        mountPath: /etc/nginx/template
        readOnly: true

In the YAML above, the path /usr/local/nginx/html is used for mounting the custom error pages. Below is a preview of the nginx template's default server block (the template expressions for ports and backlog sizes are omitted in this preview):

# backend for when default-backend-service is not configured or it does not have endpoints
    server {
        listen  default_server reuseport backlog=;
        listen [::]: default_server reuseport backlog=;
        set $proxy_upstream_name "internal";

        access_log off;
        root /usr/local/nginx/html/;
        error_page 404 /404.html;
        error_page 500 502 503 504 /50x.html;

        location / {
          return 404;
        }
        location = /404.html {
        }
        location = /50x.html {
        }
    }

Provide custom 404.html and 50x.html pages inside the root (/usr/local/nginx/html/).

To mount the volume with the custom pages, use:

    volumes:
      - name: custom-errors
        configMap:
          # Provide the name of the ConfigMap you want to mount.
          name: custom-ingress-pages
          items:
            - key: "404.html"
              path: "404.html"
            - key: "50x.html"
              path: "50x.html"
            - key: "index.html"
              path: "index.html"
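
The custom-ingress-pages ConfigMap referenced above can be created from local files; the file names match the keys in the volume definition, and the namespace is assumed to be the one where the controller runs:

```shell
kubectl create configmap custom-ingress-pages \
  --from-file=404.html \
  --from-file=50x.html \
  --from-file=index.html \
  -n ingress-nginx
```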

Benefits of this solution

This solution doesn't require you to spawn another service or pod (container) of any kind; everything is handled by the ingress-nginx controller's own Deployment (or DaemonSet). You also don't need to provision your cluster for an extra service; you only need to provide the custom error messages (pages) themselves.

This approach saves resources and serves your custom error pages from the ingress-nginx controller itself.
