When serverless hit the scene, it felt like magic. “Pay only for what you use.” No more over-provisioning, no idle servers quietly draining your budget, no wasted capacity. For small teams and unpredictable workloads, this promise is powerful.
But here’s the myth:
Serverless is always cheaper.
The reality? At scale or under steady workloads, serverless can actually cost more.
Why Serverless Feels Cheaper (At First)
The pricing model is hard not to love. Instead of renting full servers, you’re billed per execution — sometimes down to the millisecond. That’s revolutionary compared to the old days of buying a whole VM just to handle traffic spikes that may never come.
For small, scrappy teams, this is a dream:
- No upfront infrastructure. You don’t need to guess future capacity.
- Automatic scaling. Traffic surges are absorbed without a thought.
- Granular billing. You can calculate the cost of a single request with precision.
This is why early adopters — from hobbyists to startups — rave about serverless. It lowers the barrier to entry and lets you get to market without an ops team.
When the Bill Creeps Up
But as usage grows, the shine wears off. Suddenly:
- Steady workloads are punished. If your app runs 24/7 with predictable traffic, paying per invocation often costs more than keeping a VM (or Kubernetes pod) running. A single n2-standard VM on GCP with a sustained-use discount may outpace millions of Lambda or Cloud Functions calls in cost efficiency (a rough break-even sketch follows this list).
- Cold starts introduce trade-offs. To avoid cold-start latency, many teams pay for provisioned concurrency. That fixed cost looks suspiciously like the "always-on server" you thought you had escaped.
- Hidden services add up. Serverless apps don't run in isolation; they chain together databases, queues, storage, and APIs. Each call is metered, and costs accumulate in ways you may not predict at design time.
- You miss out on discounts. Reserved instances, committed-use, and sustained-use discounts can make traditional compute far cheaper for heavy workloads. With serverless, those discounts rarely apply.
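To make the steady-workload point concrete, here is a minimal back-of-envelope sketch in Python. Every number in it (the per-request fee, the GB-second rate, the function size, the VM price) is an illustrative placeholder, not a current AWS or GCP rate card, so treat the result as an order-of-magnitude estimate rather than a quote.

```python
# A minimal sketch: at what sustained request rate does per-invocation pricing
# overtake a flat, always-on VM? All unit prices are illustrative placeholders;
# substitute your provider's current rates before drawing conclusions.

PRICE_PER_REQUEST = 0.20 / 1_000_000   # assumed request fee, $ per invocation
GB_SECOND_PRICE = 0.0000167            # assumed compute charge, $ per GB-second
MEMORY_GB = 0.5                        # function memory allocation
AVG_DURATION_SEC = 0.1                 # average execution time per request

VM_MONTHLY_COST = 70.0                 # assumed always-on VM with a usage discount
SECONDS_PER_MONTH = 30 * 24 * 3600

# Cost of one serverless invocation: request fee plus metered compute time.
cost_per_invocation = PRICE_PER_REQUEST + MEMORY_GB * AVG_DURATION_SEC * GB_SECOND_PRICE

# Sustained request rate at which the serverless bill matches the VM bill.
break_even_rps = VM_MONTHLY_COST / cost_per_invocation / SECONDS_PER_MONTH
print(f"Break-even: ~{break_even_rps:.0f} requests/second, sustained")
```

With these placeholder numbers the crossover lands somewhere around 25-30 sustained requests per second. Past that point, "pay only for what you use" is paying for more than the always-on box would.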
Example: Startup vs Enterprise
- A startup with traffic that spikes unpredictably benefits hugely. They avoid the nightmare of running idle infrastructure and can scale instantly when they land on the front page of Hacker News.
- An enterprise running a steady 500 requests per second, 24/7, may find the per-invocation pricing far more expensive than a managed Kubernetes cluster or reserved VMs.
The economics shift with the scale and shape of your workload, as the rough comparison below suggests.
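Here is the same per-invocation arithmetic applied to those two traffic shapes. Again, the rates and the traffic figures are assumptions chosen for illustration, not real rate cards or benchmarks.

```python
# The same illustrative per-invocation pricing applied to two traffic shapes.
# Placeholder rates only; not an actual provider rate card.

PRICE_PER_REQUEST = 0.20 / 1_000_000          # $ per invocation (request fee)
COMPUTE_PER_REQUEST = 0.5 * 0.1 * 0.0000167   # 0.5 GB x 0.1 s x $/GB-second
COST_PER_REQUEST = PRICE_PER_REQUEST + COMPUTE_PER_REQUEST

SECONDS_PER_MONTH = 30 * 24 * 3600

scenarios = {
    "Startup (spiky, ~3M calls/month)": 3_000_000,
    "Enterprise (steady 500 req/s)": 500 * SECONDS_PER_MONTH,
}

for name, invocations in scenarios.items():
    print(f"{name}: ~${invocations * COST_PER_REQUEST:,.0f}/month")
```

Under these assumptions the startup's bill rounds to pocket change, while the enterprise's steady 500 req/s works out to roughly 1.3 billion invocations a month and a four-figure bill, which can easily exceed what a small pool of reserved VMs would cost for the same load.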
The Real Equation Isn’t Just Price
Even when serverless costs more in raw dollars, many teams still stick with it — and that’s worth examining.
- Speed of development. Faster delivery might justify the higher runtime bill.
- Reduced operational overhead. No patching, no scaling logic, no fleet management. That saves people-hours — often more expensive than compute.
- Business agility. The ability to experiment quickly can outweigh infrastructure costs entirely.
In other words: the cheapest option on paper isn’t always the best option in practice.
Busting the Myth
Serverless is not a universal cost-saver. It’s a trade-off.
- Small, spiky workloads → cheaper.
- Large, steady workloads → often more expensive.
- High agility needs → sometimes worth the premium.
The myth only holds if you assume the economics scale linearly. They don't.
Part of the “Infrastructure Myths” Series
This post is part of our ongoing series where we challenge common assumptions in DevOps and cloud:
- Terraform is Always Better Than ClickOps
- More Microservices = More Scalability
- Containers Solve All Deployment Problems
- And now: Serverless Is Always Cheaper
Because in infrastructure, there are no silver bullets — just trade-offs.
Want more myth-busting takes on DevOps and cloud? Subscribe here.