Why Serverless Is More Than a Buzzword
The term "serverless" is somewhat misleading. Servers still exist, of course. But the operational model has fundamentally shifted. Instead of provisioning, patching, and scaling servers yourself, you write functions that execute in response to events, and the cloud provider handles everything else. AWS Lambda, launched in 2014, pioneered this model, and it has matured into a platform capable of handling enterprise-grade workloads.
At StrikingWeb, we started adopting serverless architecture for client projects in 2020, and it has become our default recommendation for API-driven applications where traffic patterns are variable or unpredictable. The combination of AWS Lambda, API Gateway, and DynamoDB provides a stack that scales from zero to millions of requests without any infrastructure management.
The Serverless API Stack
A typical serverless API architecture on AWS consists of three core services, each playing a distinct role.
API Gateway — The Front Door
Amazon API Gateway acts as the entry point for all client requests. It handles routing, request validation, authentication, rate limiting, and CORS configuration. When a request arrives, API Gateway validates it against your defined schema and forwards it to the appropriate Lambda function.
API Gateway supports both REST APIs and HTTP APIs. For most new projects, we recommend HTTP APIs because they offer lower latency, lower cost (up to 70 percent cheaper), and simpler configuration. REST APIs are still valuable when you need features like request/response transformation, API keys, or usage plans.
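To make the hand-off from API Gateway to Lambda concrete, here is a sketch of the event shape an HTTP API delivers using payload format 2.0. The sample event is trimmed to the fields actually used; the routing helper is illustrative, not part of any AWS SDK.

```javascript
// HTTP APIs (payload format 2.0) put the method under requestContext.http
// and the path in rawPath. This helper just shows where to find them.
function route(event) {
  const method = event.requestContext.http.method;
  const path = event.rawPath;
  return `${method} ${path}`;
}

// Example event, trimmed to the fields used above
const sampleEvent = {
  version: '2.0',
  routeKey: 'GET /orders/{id}',
  rawPath: '/orders/42',
  requestContext: { http: { method: 'GET', path: '/orders/42' } },
};

console.log(route(sampleEvent)); // "GET /orders/42"
```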
AWS Lambda — The Business Logic
Lambda functions contain your application code. Each function handles a specific piece of business logic: creating a user, processing an order, generating a report. Functions can be written in Node.js, Python, Java, Go, .NET, or Ruby, and they execute in a managed runtime environment.
Key characteristics of Lambda functions include:
- Stateless execution: Each invocation is independent. Any state must be stored externally in a database, cache, or file storage.
- Automatic scaling: Lambda scales horizontally by running multiple instances of your function in parallel. There is no configuration needed to handle traffic spikes.
- Pay-per-use pricing: You pay only for the compute time your function actually uses, measured in milliseconds. There is no charge when your function is not running.
- Execution limits: Functions have a maximum execution time of 15 minutes, a maximum memory allocation of 10 GB, and a deployment package size limit of 250 MB unzipped.
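The statelessness point is easiest to see in code. Below is a minimal Node.js handler sketch: everything it needs comes in through the event, and nothing is kept in module-level mutable state between invocations. The greeting logic is purely illustrative.

```javascript
// Minimal stateless Lambda handler. Each invocation is independent:
// all input arrives via the event, and the response is returned directly.
async function handler(event) {
  const name = (event.queryStringParameters || {}).name || 'world';
  return {
    statusCode: 200,
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ message: `hello, ${name}` }),
  };
}

// Lambda's Node.js runtime invokes the exported handler
module.exports = { handler };
```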
DynamoDB — The Database
DynamoDB is Amazon's fully managed NoSQL database, and it pairs naturally with Lambda because it too operates on a serverless model. You do not provision servers, manage patches, or worry about scaling. DynamoDB handles all of this automatically.
DynamoDB excels at workloads that require consistent, single-digit-millisecond latency at any scale. For serverless APIs, it is an ideal choice because it supports on-demand capacity mode, where you pay only for the reads and writes your application performs, just like Lambda's pay-per-use model.
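As a sketch of what a write to an on-demand table looks like from Lambda, here is how an order item might be shaped for a put. The params object follows the input shape of `PutCommand` in `@aws-sdk/lib-dynamodb`; the table and attribute names are illustrative, and the actual client call is left as a comment.

```javascript
// Shape an order for a DynamoDB put. Table and attribute names are
// illustrative; the real table name comes from an environment variable.
function buildPutParams(order) {
  return {
    TableName: process.env.ORDERS_TABLE || 'order-management-api-orders-dev',
    Item: {
      orderId: order.orderId,       // partition key
      customerId: order.customerId,
      total: order.total,
      createdAt: new Date().toISOString(),
    },
  };
}

const params = buildPutParams({ orderId: 'o-1001', customerId: 'c-7', total: 249.0 });
// With a real client: await docClient.send(new PutCommand(params));
```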
Building a Serverless API — A Practical Example
Let us walk through the architecture of a real API we built for a client's order management system. The system needed to handle product catalog queries, order creation, order status updates, and webhook notifications.
Project Structure with the Serverless Framework
We use the Serverless Framework to define and deploy our Lambda functions. It provides a clean configuration format and handles the CloudFormation orchestration behind the scenes.
service: order-management-api

provider:
  name: aws
  runtime: nodejs14.x
  region: ap-south-1
  environment:
    ORDERS_TABLE: ${self:service}-orders-${sls:stage}
    PRODUCTS_TABLE: ${self:service}-products-${sls:stage}

functions:
  createOrder:
    handler: src/handlers/orders.create
    events:
      - httpApi:
          path: /orders
          method: post
  getOrder:
    handler: src/handlers/orders.get
    events:
      - httpApi:
          path: /orders/{id}
          method: get
  listProducts:
    handler: src/handlers/products.list
    events:
      - httpApi:
          path: /products
          method: get
Handling Cold Starts
Cold starts are the most discussed limitation of serverless architecture. When a Lambda function has not been invoked recently, AWS needs to provision a new execution environment, which adds latency to the first request. Cold start times vary by runtime: Node.js and Python typically see 100 to 300 milliseconds, while Java and .NET can see 1 to 3 seconds.
For the vast majority of API workloads, cold starts are not a meaningful problem. They affect only the first request after a period of inactivity, and the added latency is usually imperceptible to users. However, if your application requires consistently low latency for every request, Lambda offers Provisioned Concurrency, which keeps a specified number of execution environments warm and ready.
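When Provisioned Concurrency is warranted, the Serverless Framework exposes it as a per-function property. The concurrency count of 5 below is purely illustrative; the right number depends on your baseline traffic.

```yaml
functions:
  createOrder:
    handler: src/handlers/orders.create
    provisionedConcurrency: 5   # keep five execution environments warm
```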
Authentication and Authorization
API Gateway integrates with Amazon Cognito for user authentication, or you can use Lambda authorizers for custom authentication logic. For our client projects, we typically use Cognito User Pools for user-facing APIs, and IAM authorization (with API keys where metering is needed) for service-to-service communication.
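For the custom route, here is a sketch of a Lambda authorizer using the HTTP API "simple response" format (which requires the authorizer to be configured for simple responses). The token check is a placeholder; a real authorizer would verify a JWT signature or look the token up in a store.

```javascript
// Lambda authorizer sketch, HTTP API simple-response format.
// The hardcoded token is a placeholder for real credential verification.
function authorize(event) {
  const token = (event.headers && event.headers.authorization) || '';
  const valid = token === 'Bearer let-me-in'; // placeholder check only
  return {
    isAuthorized: valid,
    context: valid ? { principalId: 'demo-user' } : {},
  };
}

exports.handler = async (event) => authorize(event);
```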
Cost Analysis — Serverless vs. Traditional
The cost conversation is where serverless truly shines for many workloads. Let us compare the costs of running an API that handles one million requests per month with an average execution time of 200 milliseconds and 256 MB of memory.
Serverless Costs (Lambda + API Gateway + DynamoDB)
- Lambda: 1 million requests at 200ms with 256 MB memory = approximately $0.83 per month (after free tier)
- API Gateway (HTTP API): 1 million requests = $1.00 per month
- DynamoDB (on-demand): Varies by read/write volume, but a typical API with mixed operations costs $5 to $15 per month at this scale
- Total: approximately $7 to $17 per month
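The Lambda figure is easy to verify as a back-of-envelope calculation. The rates below are AWS's published on-demand x86 prices at the time of writing (regional pricing varies slightly).

```javascript
// Back-of-envelope check of the Lambda compute cost quoted above.
const requests = 1_000_000;
const durationSeconds = 0.2;        // 200 ms average execution
const memoryGb = 256 / 1024;        // 0.25 GB allocated

const gbSeconds = requests * durationSeconds * memoryGb;  // 50,000 GB-seconds
const computeCost = gbSeconds * 0.0000166667;             // per-GB-second rate

// Requests are billed separately at $0.20 per million, and the always-free
// tier (1M requests, 400,000 GB-seconds per month) can offset much of this.
console.log(computeCost.toFixed(2)); // "0.83"
```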
Traditional Server Costs
- EC2 (t3.medium): approximately $30 per month on-demand
- RDS (db.t3.micro): approximately $15 per month
- Load Balancer: approximately $18 per month
- Total: approximately $63 per month minimum
At low to moderate traffic levels, serverless is dramatically cheaper. The economics shift at very high, consistent traffic volumes, where reserved EC2 instances become more cost-effective. The crossover point is typically around 10 to 50 million requests per month, depending on function complexity.
When Not to Go Serverless
Despite our enthusiasm for serverless, we advise against it in several scenarios.
- Long-running processes: If your workload consistently runs longer than 15 minutes, Lambda is not the right tool. Consider AWS Fargate or ECS instead.
- WebSocket-heavy applications: While API Gateway supports WebSockets, the connection management overhead and cost can make traditional servers more practical for real-time applications with many concurrent connections.
- Complex relational data: If your application requires complex SQL joins, transactions across multiple tables, and relational integrity constraints, a traditional database on RDS with an EC2 or Fargate backend may be simpler and more performant.
- Vendor lock-in concerns: Serverless architectures are deeply integrated with specific cloud providers. Migrating from Lambda to Azure Functions or Google Cloud Functions requires significant rework.
Best Practices from Production Deployments
After deploying dozens of serverless APIs for our clients, we have compiled a set of best practices that consistently improve reliability and performance.
- Keep functions small and focused. Each function should handle one specific operation. This improves cold start times, makes testing easier, and allows independent scaling.
- Use environment variables for configuration. Never hardcode connection strings, API keys, or stage-specific values. Use SSM Parameter Store or Secrets Manager for sensitive values.
- Implement structured logging. Use JSON-formatted logs with correlation IDs to trace requests across multiple functions. CloudWatch Logs Insights makes querying structured logs straightforward.
- Set up dead letter queues. Configure SQS dead letter queues for asynchronous Lambda invocations so that failed events are captured rather than lost.
- Use Lambda Layers for shared code. Common utilities, SDK configurations, and middleware should live in Lambda Layers, reducing deployment package sizes and ensuring consistency.
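The structured-logging practice above can be sketched in a few lines. The field names here are a convention we find queryable, not a standard; the key idea is one JSON object per line carrying a correlation ID that travels with the request across functions.

```javascript
// Minimal structured-logging helper: emits one JSON object per line so
// CloudWatch Logs Insights can filter and aggregate by field.
function makeLogger(correlationId) {
  const emit = (level, message, extra = {}) => {
    const entry = {
      timestamp: new Date().toISOString(),
      level,
      correlationId,
      message,
      ...extra,
    };
    console.log(JSON.stringify(entry));
    return entry; // returned to make the helper easy to test
  };
  return {
    info: (msg, extra) => emit('INFO', msg, extra),
    error: (msg, extra) => emit('ERROR', msg, extra),
  };
}

const log = makeLogger('req-8f2a');
log.info('order created', { orderId: 'o-1001' });
```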
Serverless is not about eliminating servers. It is about eliminating the undifferentiated heavy lifting of server management so you can focus on building what matters to your business.
Getting Started with Serverless at StrikingWeb
If you are considering a serverless architecture for your next API project, we can help you evaluate the approach and build a production-ready solution. We bring hands-on experience with AWS Lambda, API Gateway, DynamoDB, and the supporting ecosystem of services like SQS, SNS, Step Functions, and EventBridge.
Whether you are building a new API from scratch or migrating an existing monolith to serverless, the key is starting with a clear understanding of your traffic patterns, latency requirements, and cost constraints. Reach out, and we will help you determine whether serverless is the right fit.