AWS Lambda 📅 2026-02-03

How to Fix AWS Lambda Exit Code 137 OOM Error

When operating serverless architectures on AWS Lambda, exit status 137 is a critical signal that your function's process was terminated due to an Out-Of-Memory (OOM) condition. It means the underlying Linux kernel's OOM killer sent a SIGKILL (signal 9) to your Lambda process, halting execution abruptly. Prompt diagnosis and resolution are essential for maintaining operational reliability.

🚨 Symptoms & Diagnosis

Identifying exit status 137 typically involves observing specific log messages in CloudWatch. These signatures confirm an OOM termination event:

Runtime exited with error: exit status 137
Exit Code: 137 (SIGKILL signal 9 + 128)
RequestId: xxx Error: Runtime exited with error: exit status 137
CloudWatch Log: Process 1 (lambda-handler) killed by signal 9
Status: OOMKilled or Task terminated due to memory pressure

Root Cause: AWS Lambda functions are terminated with exit code 137 when they exceed their allocated memory limit (ranging from 128MB to 10GB). This triggers the underlying Linux kernel's Out-Of-Memory (OOM) killer, which sends a SIGKILL signal (signal 9) to the process, halting execution. Common culprits include memory leaks, large payloads, inefficient libraries, or cold start spikes.


🛠️ Solutions

Immediate Mitigation: Increase Memory Allocation

The quickest and most common fix for exit status 137 is to increase the Lambda function's allocated memory. This often resolves immediate OOM issues, and because Lambda allocates CPU in proportion to memory, it also increases the compute power available to your function.

  1. Update Memory Allocation via AWS CLI: Increment your function's memory limit. For example, to set it to 1024MB:
    aws lambda update-function-configuration --function-name your-function --memory-size 1024
    
    For a production API handler, you might immediately jump to a higher value:
    aws lambda update-function-configuration --function-name prod-api-handler --memory-size 2048 --region us-east-1
    
  2. Wait for Propagation: Allow 1-2 minutes for the configuration change to propagate across the AWS Lambda service.
  3. Test Invocation: Invoke your function to confirm the error is resolved.
    aws lambda invoke --function-name your-function response.json
    

Best Practice Fix: Optimize + Monitor Memory

For sustained production reliability, a permanent solution involves profiling memory usage, setting proactive alarms, and optimizing your function's code.

  1. Enable Lambda Insights and X-Ray Tracing: Lambda Insights (enabled by attaching the LambdaInsightsExtension layer and the CloudWatchLambdaInsightsExecutionRolePolicy managed policy to the function's role) provides detailed memory and CPU metrics; X-Ray traces function invocations to pinpoint resource bottlenecks. To enable active tracing:
    aws lambda update-function-configuration --function-name your-function --tracing-config Mode=Active
    
  2. Monitor Memory Metrics: In the AWS Console, navigate to Lambda > Functions > your-function > Monitoring tab and review the "Memory usage" metric to understand actual consumption patterns. If your function also stages large files in /tmp, consider raising ephemeral storage above its 512MB default:
    aws lambda update-function-configuration --function-name your-function --memory-size 1536 --ephemeral-storage Size=1024
    
  3. Add CloudWatch Alarm for High Memory Usage: Proactively alert your SRE team when memory utilization exceeds a defined threshold (e.g., 90% of allocated memory). Note that memory metrics are not published under the standard AWS/Lambda namespace; they come from Lambda Insights, which publishes memory_utilization (a percentage) under the LambdaInsights namespace:
    aws cloudwatch put-metric-alarm \
        --alarm-name LambdaOOMAlert \
        --metric-name memory_utilization \
        --namespace LambdaInsights \
        --threshold 90 \
        --comparison-operator GreaterThanThreshold \
        --evaluation-periods 2 \
        --period 300 \
        --statistic Maximum \
        --dimensions Name=function_name,Value=your-function
    
  4. Refactor Code for Memory Efficiency:
    • Large Payloads: Use streaming for processing large data payloads instead of loading entire objects into memory.
    • Resource Management: Ensure external connections, file handles, and large data structures are properly closed or released after use.
    • Profiling: Utilize profiling tools (e.g., Node.js heapdump, Python memory_profiler) in development/staging environments to identify memory leaks or excessive allocations.
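The streaming recommendation above can be sketched in plain Python with no AWS dependencies: process input in fixed-size chunks instead of materializing the whole payload (with boto3 you would similarly iterate over the body of a get_object response rather than calling read() once). The function names here are illustrative:

```python
import io

def count_lines_buffered(stream):
    # Anti-pattern: reads the entire payload into memory at once, so
    # peak memory scales with the size of the input.
    data = stream.read()
    return data.count(b"\n")

def count_lines_streaming(stream, chunk_size=64 * 1024):
    # Streams fixed-size chunks, so peak memory stays roughly at
    # chunk_size regardless of total payload size.
    total = 0
    while chunk := stream.read(chunk_size):
        total += chunk.count(b"\n")
    return total

if __name__ == "__main__":
    payload = b"line\n" * 100_000  # ~500 KB stand-in for a large object
    print(count_lines_streaming(io.BytesIO(payload)))  # 100000
```

Note this simple chunked count works because the delimiter is a single byte; records spanning chunk boundaries need a small carry-over buffer.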

Log Parsing + Diagnosis

Before or after applying a fix, it's crucial to confirm the OOM event from CloudWatch Logs.

  1. Query Recent Logs: Use CloudWatch Logs Insights to query recent REPORT entries and compare @maxMemoryUsed against @memorySize; an OOM-killed invocation typically reports max memory at or near the configured limit. (The date -d syntax below is GNU-specific; on macOS use date -v-1H +%s000 instead.)
    aws logs start-query \
        --log-group-name /aws/lambda/your-function \
        --start-time $(date -d '1 hour ago' +%s000) \
        --end-time $(date +%s000) \
        --query-string 'fields @timestamp, @memorySize, @maxMemoryUsed | filter @type = "REPORT" and @maxMemoryUsed >= @memorySize * 0.9 | sort @timestamp desc | limit 20'
    
  2. Retrieve Query Results: After initiating the query, use the query-id returned to fetch the results.
    aws logs get-query-results --query-id <query-id>
    
    This will provide insights into which invocations experienced OOM, their memory consumption, and allocated limits.
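Individual REPORT lines also lend themselves to ad-hoc parsing when eyeballing raw logs. A small Python helper (the regex assumes the standard REPORT line format, e.g. "Memory Size: 1024 MB ... Max Memory Used: 1024 MB"; the sample RequestId is made up):

```python
import re

# Matches the memory fields of a standard Lambda REPORT log line.
REPORT_RE = re.compile(
    r"Memory Size: (?P<size>\d+) MB\s+Max Memory Used: (?P<used>\d+) MB"
)

def memory_utilization(report_line):
    """Return max memory used as a fraction of allocated memory, or None."""
    m = REPORT_RE.search(report_line)
    if not m:
        return None
    return int(m.group("used")) / int(m.group("size"))

if __name__ == "__main__":
    line = ("REPORT RequestId: 1a2b3c4d Duration: 1234.56 ms "
            "Billed Duration: 1235 ms Memory Size: 1024 MB "
            "Max Memory Used: 1024 MB")
    print(memory_utilization(line))  # 1.0 -> at the limit, OOM likely
```

A utilization at or very close to 1.0 on the last REPORT before a "Runtime exited with error" line is the characteristic OOM signature.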

🧩 Technical Context (Visualized)

AWS Lambda functions execute within a containerized Linux environment. When a function's processes consume memory beyond the configured limit, the Linux kernel's Out-Of-Memory (OOM) killer intervenes. This kernel mechanism sends a SIGKILL (signal 9) to the offending process, forcefully terminating it and resulting in the exit status 137 error.

graph TD
    A[Lambda Function Invocation] --> B(Container Initialization)
    B --> C{Function Code Execution}
    C -- Memory Usage Increases --> D[Check against Allocated Memory Limit]
    D -- Exceeds Limit --> E(Linux Kernel OOM Killer)
    E -- "Sends SIGKILL (Signal 9)" --> F[Lambda Process Termination]
    F --> G["Runtime exited with error: exit status 137"]
    D -- Within Limit --> H[Function Completes Successfully]
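The 137 value itself follows the POSIX shell convention that a process killed by signal N exits with status 128 + N, which is easy to confirm in Python:

```python
import signal

# A process killed by signal N is reported with exit status 128 + N;
# SIGKILL is signal 9, hence exit status 137.
exit_status = 128 + int(signal.SIGKILL)
print(exit_status)  # 137
```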

✅ Verification

After implementing changes, verify the fix by invoking the function and checking its memory metrics.

  1. Invoke the Lambda Function:
    aws lambda invoke --function-name your-function --payload '{"test":"data"}' --cli-binary-format raw-in-base64-out response.json && cat response.json
    
    Review response.json for successful execution and check CloudWatch logs for any new exit status 137 errors.
  2. Retrieve Recent Memory Statistics: Confirm that maximum memory used is now well within the newly allocated memory-size. With Lambda Insights enabled, the used_memory_max metric (in MB) is published under the LambdaInsights namespace:
    aws cloudwatch get-metric-statistics \
        --namespace LambdaInsights \
        --metric-name used_memory_max \
        --dimensions Name=function_name,Value=your-function \
        --start-time $(date -d '5 minutes ago' +%s) \
        --end-time $(date +%s) \
        --period 60 \
        --statistics Maximum

    Confirm that the Maximum value for used_memory_max is significantly lower than your memory-size setting.

📦 Prerequisites

To perform these diagnostic and remediation steps, ensure you have the following in your environment:

  • AWS CLI v2+ installed and configured.
  • jq for parsing JSON output from CLI commands (optional, but highly recommended).
  • An IAM role with administrative permissions or specific policies allowing:
    • lambda:UpdateFunctionConfiguration
    • logs:DescribeLogGroups, logs:StartQuery, logs:GetQueryResults
    • cloudwatch:PutMetricAlarm, cloudwatch:GetMetricStatistics
  • Familiarity with runtime-specific profiling tools for Node.js or Python, if delving into code optimization.