Is Serverless Architecture Only for Small Workloads?

When I first started thinking about using serverless architecture for our apps, I justified it by thinking about the amount of workload it was going to generate. I usually lump workloads into these buckets:

  1. Smaller than 1 server – when the workload won’t come close to needing a single server’s worth of hardware, even a tiny VM.
  2. One or more servers – when the load happens frequently enough that the performance, price, and predictability of a dedicated virtual or physical server starts to make sense. Here, you build a server (or a few), and typically you do it manually rather than write code to deploy servers.
  3. Multiple servers, but wildly unpredictable – in the physical data center model, we have to overprovision servers to handle peak demand. With virtual servers, we write code to build multiple servers on demand, and on Amazon, use Auto Scaling to do it automatically.
  4. Several servers, predictable – where again, we might get lazy and just build the servers once rather than write code to build the servers.

At first glance, you’d think serverless architecture makes sense at smaller-than-one-server workloads. You write Lambda functions, and you only pay by the second while your code runs. For super-tiny workloads, like my Amazon IoT button that turns on my lab, it’s brilliant.
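For the uninitiated, a Lambda function is just an entry point that the platform invokes with an event. Here’s a minimal Python sketch of what that IoT-button function might look like – the `clickType` field is Amazon’s documented button payload, but `turn_on_lab_lights` is a hypothetical placeholder for whatever smart-plug API you’d actually call:

```python
# Minimal sketch of an AWS Lambda handler in Python.
# The IoT button sends an event with a "clickType" of SINGLE, DOUBLE, or LONG.

def turn_on_lab_lights():
    # Hypothetical placeholder: a real function would call a smart-plug API here.
    return "lights on"

def lambda_handler(event, context):
    # AWS invokes this entry point; you only pay while it runs.
    if event.get("clickType") == "SINGLE":
        return turn_on_lab_lights()
    return "ignored"
```

That’s the whole deployment unit – no server, no OS, no patching.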

But the cost of a single hosted server just isn’t that high anymore. Linux users can grab a 2-core, 2GB RAM box at DigitalOcean for $20/month. While that may not sound like much memory, keep in mind that your Lambda functions max out at 1.5GB RAM. Prefer Windows? A reserved t2.small (1 core, 2GB RAM) running Windows will run you $280/year. (There’s a slick pricing grid out there worth checking out.)

Serverless architectures actually make more sense as your workload grows – not because of growing hardware costs for multiple EC2 VMs, but because of the costs of managing those VMs. If we had one VM, I might wing the patching and troubleshooting myself, doing it in my spare time. But if we’re going to grow into the nebulous tiers of 1-10 servers, ain’t nobody got time for that. I simply don’t have the sysadmin teams required to do it right: I need to treat infrastructure like code, automating it so that it can scale faster. The answer isn’t Amazon EC2 or Azure VMs – those are unmanaged servers, and I still have to do my own patching and OS troubleshooting.

It gets worse as I start to think about geographic redundancy – rolling another server in another region, load balancing between them with Amazon Elastic Load Balancing, troubleshooting connectivity between them.

Serverless architectures appeal to me because the apps we’re building have unpredictable load. I bet, based on what we’re seeing from users, that our hardware loads are going to stick in the sub-1-server range for the first 6-12 months.

If I have to choose between manual one-off work, versus designing automated processes, I’ll take the latter.

While I love systems administration, playing with hardware, and building a little empire of flashing lights, customers aren’t paying for that. I have to build value that people are willing to pay for. That means if I have to choose between spending time patching and troubleshooting boxes, versus investing in building code to automate it, only the latter pays off long term. Long term, I gotta spend more time marketing our services and growing our user base.

One of the excellent things about serverless architecture is that I don’t have to stay serverless forever. If the apps we build gather a big user base, and we can start to afford 2-3 site reliability engineers, then I can justify moving back to bare metal and get the awesome performance I know and love.

But I wouldn’t be surprised if I hired devops people instead, and had them automate administration at even larger scale.

I can’t wait to have those kinds of problems.

The Pros and Cons of Serverless Architecture

Today’s serverless architecture design is new. Really, really new. (Yeah, yeah, Grandpa, you could argue that your mainframe apps were serverless, but you’re missing the point. Go back to yelling at the cloud.)

That immaturity has a lot of drawbacks by itself:

  • Learning serverless is hard – the awesome list of serverless resources is a good place to start
  • Hiring is nearly impossible – the tech is so new that few people know it yet, and those who do are expensive
  • Getting help is tough – there are hardly any questions & answers on StackOverflow yet for, say, AWS Lambda or Windows Azure Service Fabric
  • Best practices don’t exist yet
  • If it’s down, it’s just down – it’s outside of your control
  • Vendor lock-in – each vendor implements it differently right now, and porting code between Amazon and Microsoft would be very expensive

That last one is ugly because, in the end, you’re at the mercy of whichever vendor you pick. They could jack up prices, make a breaking change, or just deprecate the whole platform.

The next major drawback is single-transaction performance. Today’s serverless platforms have much higher latency – for example, if your function hasn’t run recently, AWS Lambda has to start up a container for it. Forget running an e-commerce site on this – after a second or two, Google-referred users will just hit the back button and try someone else’s store.
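To make the cold-start point concrete, here’s a plain-Python sketch (runnable anywhere, not actual AWS code) of why the first invocation hurts: anything at module scope – heavy imports, database connections – runs once when the container spins up, and warm invocations reuse it. The half-second sleep is a stand-in for that setup cost:

```python
import time

# Simulating a cold start: module-scope work runs once per container,
# then warm invocations of the handler skip it entirely.
start = time.perf_counter()
time.sleep(0.5)                    # pretend: heavy imports, DB connections
CONFIG = {"initialized": True}     # cached between invocations
COLD_START_SECONDS = time.perf_counter() - start

def handler(event, context):
    # Warm invocations pay none of the setup cost above.
    return {"cold_start_cost": COLD_START_SECONDS, "config": CONFIG}
```

On a warm container the handler returns instantly; on a cold one, your user eats that setup time before your code even starts.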

The cons above add up to one thing: if you’re a midsize profitable company, building a traditional application or web site, you should probably not use serverless design. Your application will be slower to build, slower to access, and harder to troubleshoot.

But if you go in knowing those drawbacks, the advantages can make it a good fit for a few types of applications, like the ones we’re building at the moment.

The Advantages of Using Serverless Architecture

Someone else manages uptime and the related staffing. When the serverless provider’s servers go down, they’re the ones who have to deal with it, not me. For a non-critical app like the one we’re working on now, that makes perfect sense.

Hosting costs and performance scale linearly. If no one is using your app, you don’t pay. As more people use it, your costs go up. For the applications we’re working on now, we’re only projecting dozens of users per hour, which means hardware or VMs would be sitting around idle. If it catches on later, great – but even if only dozens of folks use it, we’re still quite happy with the costs.

We’re getting valuable experience. We have a lot of application & service ideas that all involve asynchronous access (queues), low performance requirements, and analyzing stored data. We picked the app that was easiest to bring to production first, and we’re testing whether serverless architecture will work for the rest of the ideas.


Choosing a Serverless Platform

Mid-2016 is a tough time to bet on a platform. Several smaller independent players got in before the big guns, and we won’t review the smaller folks here since we don’t have any experience with them. Focusing on the big ones:

Amazon Web Services offers Lambda, which charges by the number of times your code runs, plus a per-second cost for the memory you use. You can run Node.js, Python, and Java code as Lambda functions.
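That two-part pricing model – per-request plus per-GB-second – is easy to sanity-check with back-of-envelope math. Here’s a sketch using the 2016-era list prices ($0.20 per million requests, $0.00001667 per GB-second) and ignoring the free tier; check current pricing before relying on numbers like these:

```python
# Back-of-envelope Lambda cost model. Rates are 2016-era list prices
# and the free tier is ignored -- illustrative only.
REQUEST_RATE = 0.20 / 1_000_000   # dollars per invocation
COMPUTE_RATE = 0.00001667         # dollars per GB-second

def lambda_monthly_cost(invocations, avg_seconds, memory_gb):
    request_cost = invocations * REQUEST_RATE
    compute_cost = invocations * avg_seconds * memory_gb * COMPUTE_RATE
    return round(request_cost + compute_cost, 2)

# A sub-1-server workload: 100k invocations/month, 200ms each, 128MB.
print(lambda_monthly_cost(100_000, 0.2, 0.128))  # → 0.06
```

Six cents a month for a hundred thousand runs is why the smaller-than-one-server bucket looks so tempting – the catch, as above, is everything else that comes with the platform.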

Microsoft’s equivalent is Azure Functions, but it’s brand spankin’ new:

Microsoft is playing one heck of a game of catch-up in the cloud business. Given how new and undocumented AWS Lambda is, Microsoft stands a pretty good chance of being competitive in the serverless space.

Finally, Google Cloud Functions is only in alpha, and the documentation includes this terrifying disclaimer:

This is an Alpha release of Google Cloud Functions. This feature might be changed in backward-incompatible ways and is not recommended for production use. It is not subject to any SLA or deprecation policy.

Ouch. Your platform decision will come down to the serverless landscape at the time you’re making the decision, plus your reliance on the other cloud services provided by each vendor.