Nginx 📅 2026-02-07

How to Resolve Nginx Container Exit Code 137: Memory Limits and Daemon Signal Handling

🚨 Symptoms & Diagnosis

When an Nginx container abruptly terminates with exit code 137, it's a critical indicator of resource exhaustion within your Kubernetes or Docker environment. This often points directly to an Out-Of-Memory (OOM) kill event initiated by the Linux kernel. You'll typically observe signatures like:

nginx container exited with code 137 (OOMKilled)
Exit Code: 137
kubectl get pods: nginx-xxxxx OOMKilled: true, Exit Code: 137
docker logs nginx: no logs (SIGKILL abrupt termination)
dmesg: Out of memory: Killed process 1234 (nginx) total-vm:XYZkB, anon-rss:ABCkB
/var/log/syslog: kernel: Memory cgroup out of memory: Killed process
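The kernel's OOM line encodes how much memory the process held when it was killed, which is useful for sizing new limits. A small sketch for pulling `anon-rss` out of such a line (the sample values below are fabricated; in practice, pipe real `dmesg` output in):

```shell
# Sample OOM-killer line (fabricated values); in practice feed this from `dmesg`.
line="Out of memory: Killed process 1234 (nginx) total-vm:524288kB, anon-rss:393216kB, file-rss:0kB"

# Extract anon-rss and convert KiB to MiB to compare against the cgroup limit.
echo "$line" | grep -o 'anon-rss:[0-9]*kB' | tr -dc '0-9' \
  | awk '{printf "anon-rss: %d MiB\n", $1 / 1024}'   # → anon-rss: 384 MiB
```

If the extracted `anon-rss` sits close to the configured `limits.memory`, the kill was a straightforward limit breach rather than a node-level memory shortage.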

Root Cause: Exit code 137 means the Nginx process was terminated by the Linux kernel's Out-Of-Memory (OOM) killer. When a container exceeds its assigned cgroup memory limit, the kernel sends SIGKILL (signal 9), and the resulting exit code is calculated as 128 + 9 = 137. Common triggers are misconfigured memory limits, memory leaks in Nginx modules, and traffic spikes that overwhelm the existing allocation.
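The 128 + signal arithmetic can be reproduced in any POSIX shell: a process killed with SIGKILL reports the same 137 the container runtime surfaces. A minimal sketch:

```shell
# A SIGKILL-terminated process reports 128 + 9 = 137, the same arithmetic
# behind the container exit code.
sleep 30 &
pid=$!
kill -9 "$pid"           # stand-in for the kernel's OOM killer
wait "$pid" || code=$?
echo "exit code: $code"  # → exit code: 137
```

Any exit code above 128 follows this pattern, so 143 (128 + 15) indicates a graceful SIGTERM stop, while 137 indicates a forced SIGKILL.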


🛠️ Solutions

Production Impact Warning

Modifying resource limits in a production environment can impact other workloads or temporarily disrupt service. Always test changes in a staging environment before deploying to production.

Immediate Mitigation: Increase Memory Limits

This quick fix involves immediately bumping your container's memory limits to prevent further OOM kills and pod restarts, restoring service stability.

  1. Identify the Nginx Pod/Deployment:
    kubectl get pods | grep nginx
    
  2. Edit the Nginx Deployment:
    kubectl edit deployment nginx-deployment
    
  3. Locate the nginx container's resources block under spec.template.spec.containers and raise limits.memory to provide more headroom, for example from 512Mi to 1Gi.
  4. Rollout Restart (if needed): Kubernetes will typically perform a rolling update automatically. If not, explicitly trigger one:
    kubectl rollout restart deployment nginx-deployment
    
resources:
  limits:
    memory: "1Gi"
    cpu: "500m"
  requests:
    memory: "512Mi"
    cpu: "100m"
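For automation or CI pipelines, the same limit bump can be applied without an interactive editor via `kubectl patch`. This is a sketch: the deployment name comes from the steps above, but the container name `nginx` is an assumption to adjust for your spec:

```shell
# Strategic-merge patch raising the memory limit; the container "name" key is
# how strategic merge matches the right list entry (an assumed name here).
patch='{"spec":{"template":{"spec":{"containers":[{"name":"nginx","resources":{"limits":{"memory":"1Gi"}}}]}}}}'

# Sanity-check the patch document before sending it to the API server.
echo "$patch" | python3 -m json.tool > /dev/null && echo "patch is valid JSON"

# Requires cluster access (commented out here):
# kubectl patch deployment nginx-deployment --type=strategic -p "$patch"
```

Unlike `kubectl edit`, this approach is repeatable and can be committed alongside other runbook scripts.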

Best Practice Fix: Tune Nginx + Cgroup Limits + Monitoring

A sustainable solution involves a combination of Nginx configuration optimization, precise cgroup memory limit setting based on profiling, and robust monitoring.

  1. Profile Memory Usage:
    kubectl top pod <nginx-pod-name> --containers
    
    This helps identify current resource consumption to inform appropriate limits.
  2. Inspect Kernel Logs for OOM Events:
    dmesg | grep -i 'killed\|oom\|nginx'
    
    Confirm OOM events and details of what processes were killed.
  3. Update Nginx Configuration for Efficiency: Tune worker processes and file descriptors to handle load efficiently without excessive memory.
  4. Apply Tuned Deployment YAML: Based on profiling and Nginx configuration, set limits.memory and requests.memory more accurately. The requests value should reflect the minimum guaranteed memory, while limits is the hard cap.
  5. Set Namespace Resource Quota (Optional but Recommended): Enforce aggregate resource limits for all pods within a namespace to prevent any single deployment from monopolizing cluster resources.
    kubectl create quota nginx-quota --hard=memory=10Gi --namespace=default
    
  6. Enable Horizontal Pod Autoscaler (HPA): For dynamic scaling based on CPU or memory metrics, preventing OOM kills under fluctuating load.
    kubectl autoscale deployment nginx-deployment --min=2 --max=10 --cpu-percent=70
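When choosing `--cpu-percent`, it helps to keep the HPA's scaling formula in mind: desired replicas = ceil(currentReplicas × currentUtilization / targetUtilization). A quick sketch with assumed numbers:

```shell
# Assumed snapshot: 3 replicas averaging 90% CPU against a 70% target.
current=3; usage=90; target=70
awk -v c="$current" -v u="$usage" -v t="$target" \
    'BEGIN { d = c * u / t; printf "desired replicas: %d\n", (d > int(d)) ? int(d) + 1 : d }'
# → desired replicas: 4
```

A lower target leaves more slack per pod and scales out earlier, at the cost of running more replicas during quiet periods.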
    
# In deployment YAML for the Nginx container
resources:
  limits:
    memory: "768Mi"
  requests:
    memory: "384Mi"

# nginx.conf snippet for predictable resource usage
worker_processes auto;        # One worker per CPU core
worker_rlimit_nofile 65535;   # Raise max open files per worker

events {
    worker_connections 10240; # Max concurrent connections per worker (must live in the events block)
}

# Docker run equivalent with a hard memory limit and a soft reservation
docker run --memory=768m --memory-reservation=384m -d nginx
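Rather than picking `limits.memory` by trial and error, a back-of-envelope estimate gives a starting point. Every constant below is an assumption purely to illustrate the arithmetic; measure real per-connection memory with `kubectl top` under representative load:

```shell
workers=2          # assumed worker_processes resolved on a 2-core node
conns=10240        # worker_connections from the nginx.conf snippet above
per_conn_kib=12    # assumed KiB per connection incl. buffers; varies widely by workload
base_mib=30        # assumed baseline RSS per worker process

awk -v w="$workers" -v c="$conns" -v k="$per_conn_kib" -v b="$base_mib" \
    'BEGIN { printf "worst-case estimate: %d MiB\n", w * (b + c * k / 1024) }'
# → worst-case estimate: 300 MiB
```

The estimate suggests the 768Mi limit above leaves roughly 2x headroom over a fully saturated worst case, which is a reasonable safety margin against spikes.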

🧩 Technical Context (Visualized)

The Nginx Exit Code 137 scenario unfolds within the Kubernetes or Docker container runtime, deeply intertwined with the Linux kernel's memory management. When an Nginx container's memory usage surpasses the configured cgroup v2 limits, the kernel's OOM killer steps in. It identifies and terminates the Nginx process by sending a SIGKILL (signal 9), leading to an abrupt container exit with code 137 and subsequent restarts orchestrated by the container orchestrator.

graph TD
    A[Nginx Container Running] --> B{Memory Usage Exceeds cgroup Limit};
    B -- Yes --> C[Linux Kernel OOM Killer Activated];
    C --> D["SIGKILL (Signal 9) Sent to Nginx Process"];
    D --> E[Nginx Process Abruptly Terminated];
    E --> F["Container Exits with Code 137 (128 + 9)"];
    F --> G{"Container Runtime (K8s/Docker) Restarts Pod/Container"};
    B -- No --> A;

✅ Verification

After implementing the solutions, verify the Nginx containers are running stably and not experiencing further OOM kills:

  • Check Pod Status: Ensure pods are in a Running or Completed state without OOMKilled events.
    kubectl get pods -l app=nginx -o wide | grep -E 'Running|Completed'
    
  • Monitor Container Resource Usage:
    kubectl top pod <nginx-pod-name> --containers
    
    For Docker standalone:
    docker stats --no-stream <container-id>
    
  • Review Kernel Logs for OOM: Check dmesg to confirm no new OOM events related to Nginx.
    dmesg | tail -20 | grep -i oom
    
  • Check Previous Container Logs for Abnormal Termination:
    kubectl logs nginx-pod --previous | tail
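A rising restart count is the quickest stability signal after a fix. This sketch pulls `restartCount` out of pod JSON; it runs against a canned sample here, but in practice you would feed it `kubectl get pod <nginx-pod-name> -o json`:

```shell
# Canned pod-status JSON standing in for real `kubectl get pod -o json` output.
sample='{"status":{"containerStatuses":[{"name":"nginx","restartCount":0}]}}'

echo "$sample" | python3 -c '
import json, sys
status = json.load(sys.stdin)["status"]["containerStatuses"][0]
print("restarts:", status["restartCount"])'
# → restarts: 0
```

If the count keeps climbing after the limit increase, re-check `lastState.terminated.reason` in the same JSON for further OOMKilled entries.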
    

📦 Prerequisites

To effectively diagnose and resolve Nginx container exit code 137, ensure you have the following tools and environment configurations:

  • kubectl version 1.27+
  • Docker version 24+
  • Helm 3+ (if managing deployments via Helm)
  • Cluster-admin rights or appropriate RBAC permissions for Kubernetes operations
  • A stable Nginx container image (e.g., nginx:alpine)
  • Linux kernel 5.15+ with cgroup v2 enabled for optimal resource management