When I first started thinking about using serverless architecture for our apps, I justified it by estimating how much workload the apps were going to generate. I usually lump workloads into these buckets:

  1. Smaller than 1 server – when the workload won’t come close to filling a single server’s worth of hardware, even a tiny VM.
  2. One or more servers – when the load happens frequently enough that the performance, price, and predictability of a dedicated virtual or physical server start to make sense. Here, you build a server (or a few), and typically you do it manually rather than writing code to deploy them.
  3. Multiple servers, but wildly unpredictable – in the physical data center model, we have to overprovision servers to handle peak demand. With virtual servers, we write code to build multiple servers on demand, and in Amazon, we use Auto Scaling to do it automatically (see the sketch after this list).
  4. Several servers, predictable – where again, we might get lazy and just build the servers once rather than write code to build them.

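To make bucket #3 concrete, here’s a minimal sketch of what “write code to build servers on demand” can look like with boto3, the AWS SDK for Python. The AMI ID, names, and sizing are hypothetical placeholders, not anything from our actual setup:

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Define what a new server looks like when Auto Scaling builds one.
autoscaling.create_launch_configuration(
    LaunchConfigurationName="app-server-config",
    ImageId="ami-12345678",   # hypothetical AMI
    InstanceType="t2.small",
)

# Let Amazon grow and shrink the fleet between 2 and 10 servers.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="app-servers",
    LaunchConfigurationName="app-server-config",
    MinSize=2,
    MaxSize=10,
    AvailabilityZones=["us-east-1a", "us-east-1b"],
)
```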
At first glance, you’d think serverless architecture makes sense for smaller-than-one-server workloads. You write Lambda functions, and you pay by the second only while your code runs. For super-tiny workloads, like my Amazon IoT button that turns on my lab, it’s brilliant.
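For scale, the entire “app” behind that button can be a single function like this. It’s a minimal Python sketch, and turn_on_lab() is a hypothetical stand-in for whatever the button actually triggers:

```python
def turn_on_lab():
    # Hypothetical stand-in: in real life this might hit a smart-plug API.
    print("Lab powered on")

# AWS IoT button presses arrive as events with a clickType field.
def lambda_handler(event, context):
    if event.get("clickType") == "SINGLE":
        turn_on_lab()
    return {"status": "ok"}
```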

But the cost of a single hosted server just isn’t that high anymore. Linux users can grab a 2-core, 2GB RAM box at DigitalOcean for $20/month. That might sound like very little memory, but keep in mind that Lambda functions max out at 1.5GB RAM. Prefer Windows? A reserved t2.small (1 core, 2GB RAM) running Windows will run you about $280/year. (Check out EC2instances.info for a slick pricing grid.)
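To put numbers on that comparison, here’s a back-of-the-envelope calculation using AWS’s published Lambda rates ($0.00001667 per GB-second of compute, $0.20 per million requests). The 200ms duration and 1.5GB memory figures are assumptions for illustration, not measurements from our apps:

```python
GB_SECOND_RATE = 0.00001667   # USD per GB-second of Lambda compute
REQUEST_RATE = 0.20 / 1e6     # USD per request

memory_gb = 1.5               # assumed memory allocation (Lambda's max)
duration_s = 0.2              # assumed 200ms per invocation

cost_per_call = memory_gb * duration_s * GB_SECOND_RATE + REQUEST_RATE
server_budget = 20.0          # the $20/month DigitalOcean box

print(f"{server_budget / cost_per_call:,.0f} calls/month before Lambda costs more")
# Roughly 3.8 million invocations per month before the droplet wins on price.
```

That’s a lot of headroom before the dedicated box wins on price alone, which is why the decision ends up hinging on management costs rather than hardware costs.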

Serverless architectures actually make more sense as your workload grows – not because of growing hardware costs for multiple EC2 VMs, but because of the cost of managing those VMs. If we had one VM, I might wing the patching and troubleshooting myself, doing it in my spare time. But if we’re going to grow into the nebulous tiers of 1-10 servers, ain’t nobody got time for that. I simply don’t have the sysadmin team required to do it right: I need to treat infrastructure like code, automating it so that it can scale faster. The answer isn’t Amazon EC2 or Azure VMs – those are unmanaged servers, and I still have to do my own patching and OS troubleshooting.
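Here’s the flavor of automation those unmanaged servers demand. This is a hedged sketch using boto3 and AWS Systems Manager’s Amazon-published AWS-RunPatchBaseline document; the instance ID is a hypothetical placeholder:

```python
import boto3

ssm = boto3.client("ssm", region_name="us-east-1")

# Even scripted, the patching is still my job on unmanaged VMs.
ssm.send_command(
    InstanceIds=["i-0abc123def456789a"],    # hypothetical fleet member
    DocumentName="AWS-RunPatchBaseline",    # Amazon-managed patching document
    Parameters={"Operation": ["Install"]},  # scan for and install patches
)
```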

It gets worse as I start to think about geographic redundancy – rolling out another server in another region, load balancing between them with Amazon Elastic Load Balancing, and troubleshooting connectivity between them.

Serverless architectures appeal to me because the apps we’re building have unpredictable load. I bet, based on what we’re seeing from users, that our hardware loads are going to stick in the sub-1-server range for the first 6-12 months.

If I have to choose between manual one-off work and designing automated processes, I’ll take the latter.

While I love systems administration, playing with hardware, and building a little empire of flashing lights, customers aren’t paying for that. I have to build value that people are willing to pay for. That means when I weigh spending time patching and troubleshooting boxes against investing in code to automate that work, only the latter pays off long term. And long term, I gotta spend more time marketing our services and growing our user base.

One of the excellent things about serverless architecture is that I don’t have to stay serverless forever. If the apps we build gather a big user base, and we can start to afford 2-3 site reliability engineers, then I can justify moving back to bare metal to get the awesome performance I know and love.

But I wouldn’t be surprised if I hired DevOps people instead, enabling administration at even larger scale.

I can’t wait to have those kinds of problems.

