Top 20 AWS Lambda Interview Questions and Answers
- What is AWS Lambda?
- What are Lambda execution environments?
- What are Lambda triggers and event sources?
- What is cold start and how do you mitigate it?
- How does Lambda scaling work?
- What are Lambda Layers?
- What are Lambda environment variables and secrets?
- How do you handle errors in Lambda?
- What is Lambda Destinations?
- How do you configure Lambda VPC access?
- What are Lambda extensions?
- How do you optimize Lambda performance?
- What is Provisioned Concurrency?
- How do you monitor and debug Lambda?
- What is Lambda@Edge?
- How do you implement API Gateway with Lambda?
- What are Lambda best practices for security?
- How do you handle large payloads in Lambda?
- What is AWS SAM for Lambda?
- What are Lambda pricing and cost optimization strategies?
1. What is AWS Lambda?
AWS Lambda is a serverless compute service that runs code in response to events without provisioning or managing servers.
Key Features:
- Event-driven execution
- Automatic scaling
- Pay per request and compute time
- Supports multiple runtimes
- Up to 15 minutes execution time
# Basic Lambda Handler (Python)
import json

def lambda_handler(event, context):
    """
    event: Input data (varies by trigger)
    context: Runtime information
    """
    # Process event
    name = event.get('name', 'World')
    # Return response
    return {
        'statusCode': 200,
        'body': json.dumps({
            'message': f'Hello, {name}!',
            'requestId': context.aws_request_id
        })
    }
# Context object properties:
# - function_name: Lambda function name
# - memory_limit_in_mb: Memory allocated
# - aws_request_id: Unique request ID
# - get_remaining_time_in_millis(): Time remaining
# - log_group_name: CloudWatch log group
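A handler like the one above can be exercised locally by stubbing the context object with just the attributes the code touches. A minimal sketch (the handler is redefined inline so the snippet runs standalone; the stub attribute values are illustrative):

```python
import json
from types import SimpleNamespace

def lambda_handler(event, context):
    # Same shape as the basic handler above
    name = event.get('name', 'World')
    return {
        'statusCode': 200,
        'body': json.dumps({'message': f'Hello, {name}!',
                            'requestId': context.aws_request_id})
    }

# Stub only the context properties the handler actually reads
fake_context = SimpleNamespace(aws_request_id='test-request-id',
                               function_name='my-function',
                               memory_limit_in_mb=128)

result = lambda_handler({'name': 'Lambda'}, fake_context)
print(result['statusCode'])  # 200
```

This keeps unit tests fast and free of AWS dependencies; libraries like moto can stub AWS service calls when the handler touches other services.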
Supported Runtimes:
- Python 3.9, 3.10, 3.11, 3.12
- Node.js 18.x, 20.x
- Java 11, 17, 21
- .NET 6, 8
- Ruby 3.2, 3.3
- Custom Runtime (provided.al2023)
2. What are Lambda execution environments?
Lambda Execution Environment Lifecycle:
INIT Phase (Cold Start):
├── Extension init
├── Runtime init
└── Function init (outside handler)
    └── Initialize connections, load config
INVOKE Phase:
└── Handler execution
SHUTDOWN Phase:
└── Cleanup (extensions, runtime)
# Execution environment reuse
import boto3

# Initialize outside handler (reused across invocations)
dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('MyTable')

def lambda_handler(event, context):
    # Handler code (runs each invocation)
    response = table.get_item(Key={'id': event['id']})
    return response['Item']
# Environment reuse benefits:
# - Faster subsequent invocations
# - Connection reuse
# - Cached data persists
# - /tmp directory persists (512MB - 10GB)
Memory and CPU:
| Memory | vCPUs | Use Case |
|---|---|---|
| 128 MB | Fraction | Simple tasks |
| 1,769 MB | 1 vCPU | Standard workloads |
| 3,538 MB | 2 vCPUs | Parallel processing |
| 10,240 MB | 6 vCPUs | Memory/CPU intensive |
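The table shows that Lambda allocates CPU proportionally to configured memory, with roughly one full vCPU at 1,769 MB. A hypothetical helper that estimates the vCPU share for a given memory setting (the function name and linear model are illustrative, derived from the table):

```python
FULL_VCPU_MB = 1769  # memory at which Lambda allocates roughly one full vCPU

def estimated_vcpus(memory_mb: int) -> float:
    """Approximate vCPU share for a memory setting.

    Hypothetical helper: Lambda allocates CPU linearly with
    configured memory, so vCPUs ~= memory / 1,769 MB.
    """
    if not 128 <= memory_mb <= 10240:
        raise ValueError("Lambda memory must be 128 MB to 10,240 MB")
    return memory_mb / FULL_VCPU_MB

print(round(estimated_vcpus(3538), 1))  # 2.0, matching the table
```

A practical consequence: CPU-bound functions often run faster and cost about the same at higher memory settings, because billed duration drops as CPU rises.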
3. What are Lambda triggers and event sources?
Lambda Event Sources:
Synchronous (Push):
├── API Gateway
├── Application Load Balancer
├── Cognito
├── Alexa
└── CloudFront (Lambda@Edge)
Asynchronous (Push):
├── S3
├── SNS
├── EventBridge
├── CloudWatch Events
├── CodeCommit
├── IoT
└── CloudFormation
Stream-based (Poll):
├── Kinesis Data Streams
├── DynamoDB Streams
├── Amazon MQ
├── Amazon MSK
└── SQS
# S3 trigger event
{
"Records": [{
"eventSource": "aws:s3",
"eventName": "ObjectCreated:Put",
"s3": {
"bucket": {"name": "my-bucket"},
"object": {"key": "uploads/file.json"}
}
}]
}
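A handler consuming the S3 event above typically walks the `Records` array and extracts bucket and key; note that object keys arrive URL-encoded and should be decoded before use. A minimal sketch (the handler body is illustrative):

```python
import urllib.parse

def lambda_handler(event, context):
    """Extract (bucket, key) pairs from an S3 notification event (sketch)."""
    objects = []
    for record in event.get('Records', []):
        if record.get('eventSource') != 'aws:s3':
            continue
        bucket = record['s3']['bucket']['name']
        # S3 URL-encodes keys (e.g. spaces become '+'), so decode first
        key = urllib.parse.unquote_plus(record['s3']['object']['key'])
        objects.append((bucket, key))
    return objects

event = {"Records": [{"eventSource": "aws:s3",
                      "eventName": "ObjectCreated:Put",
                      "s3": {"bucket": {"name": "my-bucket"},
                             "object": {"key": "uploads/file.json"}}}]}
print(lambda_handler(event, None))  # [('my-bucket', 'uploads/file.json')]
```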
# SQS trigger event
{
"Records": [{
"messageId": "xxx",
"body": "{\"order_id\": \"123\"}",
"attributes": {
"ApproximateReceiveCount": "1"
}
}]
}
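For SQS batches, enabling ReportBatchItemFailures on the event source mapping lets the handler return only the failed message IDs, so successful records in the batch are not redelivered. A minimal sketch (the order-processing logic is illustrative):

```python
import json

def lambda_handler(event, context):
    """Process SQS records with partial batch responses.

    Requires ReportBatchItemFailures on the event source mapping;
    only the message IDs returned here go back to the queue for retry.
    """
    failures = []
    for record in event.get('Records', []):
        try:
            order = json.loads(record['body'])
            if 'order_id' not in order:
                raise ValueError('missing order_id')
            # ... process the order ...
        except Exception:
            failures.append({'itemIdentifier': record['messageId']})
    return {'batchItemFailures': failures}

event = {"Records": [
    {"messageId": "ok-1", "body": "{\"order_id\": \"123\"}"},
    {"messageId": "bad-1", "body": "not-json"},
]}
print(lambda_handler(event, None))
# {'batchItemFailures': [{'itemIdentifier': 'bad-1'}]}
```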
# Configure S3 trigger
# Configure S3 trigger (S3 notifications are set on the bucket;
# create_event_source_mapping is only for poll-based sources like SQS/Kinesis)
s3_client.put_bucket_notification_configuration(
    Bucket='my-bucket',
    NotificationConfiguration={
        'LambdaFunctionConfigurations': [{
            'LambdaFunctionArn': 'arn:aws:lambda:us-east-1:123456789012:function:my-function',
            'Events': ['s3:ObjectCreated:*']
        }]
    }
)
4. What is cold start and how do you mitigate it?
Cold start occurs when Lambda creates a new execution environment to handle a request.
Cold Start Components:
Cold Start Timeline:
├── Download code (~50-200ms)
├── Start container (~100-200ms)
├── Initialize runtime (~100-500ms)
├── Run init code (varies)
└── VPC ENI attachment (~500-1000ms historically; largely eliminated by Hyperplane ENIs since 2019)
Warm Start:
└── Direct handler invocation (~1-10ms overhead)
Mitigation Strategies:
1. Provisioned Concurrency (Best)
lambda_client.put_provisioned_concurrency_config(
    FunctionName='my-function',
    Qualifier='prod',
    ProvisionedConcurrentExecutions=10
)
2. Keep Functions Warm (EventBridge)
# Scheduled ping every 5 minutes
# Not recommended for production
3. Optimize Package Size
# Smaller = faster download
# Use Lambda Layers for dependencies
4. Optimize Init Code
# Bad: Initialize inside handler
def handler(event, context):
    import pandas as pd  # Slow!

# Good: Initialize outside handler
import pandas as pd  # Runs once per environment

def handler(event, context):
    pass
5. Use SnapStart (Java)
# Pre-initializes execution environment
# ~90% reduction in cold starts
6. Choose Optimal Memory
# More memory = more CPU = faster init
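Because module-level code runs only once per execution environment (the INIT phase), a module-level flag is a common way to observe cold starts from inside the function itself. A minimal sketch (the handler name and returned fields are illustrative):

```python
import time

# Module scope runs once per execution environment (the INIT phase),
# so these values persist across warm invocations.
_init_time = time.time()
_cold = True

def lambda_handler(event, context):
    global _cold
    was_cold = _cold
    _cold = False  # later invocations in this environment are warm
    return {
        'cold_start': was_cold,
        'env_age_seconds': round(time.time() - _init_time, 3),
    }

first = lambda_handler({}, None)
second = lambda_handler({}, None)
print(first['cold_start'], second['cold_start'])  # True False
```

Logging this flag alongside duration makes it easy to measure how often users actually hit cold starts before investing in Provisioned Concurrency.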
5. How does Lambda scaling work?
Lambda Scaling Model:
Concurrency = Requests per second * Duration in seconds
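The formula above can be turned into a quick capacity check, rounding up since concurrency is a whole number of execution environments (the helper name is illustrative):

```python
import math

def required_concurrency(requests_per_second: float,
                         avg_duration_seconds: float) -> int:
    """Concurrency = requests/sec * average duration, rounded up."""
    return math.ceil(requests_per_second * avg_duration_seconds)

# 100 req/s at 500 ms average duration needs ~50 concurrent executions
print(required_concurrency(100, 0.5))  # 50
```

Comparing this estimate against the account's concurrency limit shows whether a limit increase or reserved concurrency is needed before launch.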
Scaling Limits:
├── Account level: 1,000 concurrent (soft limit)
├── Function level: Unreserved concurrency
├── Burst: 3,000 initial, +500/minute
└── Maximum: 10,000+ (request increase)
# Reserved Concurrency
# Guarantees capacity, limits max
lambda_client.put_function_concurrency(
    FunctionName='critical-function',
    ReservedConcurrentExecutions=100
)
# Scaling scenarios:
1. Synchronous (API Gateway)
- Returns 429 if throttled
- Client retries
2. Asynchronous (S3, SNS)
- Retries twice with delays
- Dead letter queue on failure
3. Stream (Kinesis, DynamoDB)
- One Lambda per shard
- Parallelization factor (1-10)
- Batching control
# SQS Scaling
Event Source Mapping:
├── BatchSize: 1-10,000
├── MaximumBatchingWindowInSeconds: 0-300
├── MaximumConcurrency: 2-1000
└── Scales based on queue depth