My first serverless app

This is going to be a fairly technical blog post about building a serverless application. It's mostly for me, so I can refer back to it later.

Okay gang. Building a serverless application is incredibly fucking hard. Do not do it if you value your sanity, your relationships with your loved ones, or the quantity of hair on your head.

I, being already both bald and mad and with more faith than might reasonably be expected in my relationship, jumped in with two feet. I can tell you that it is as easy as walking on a beam over a river, if that beam sometimes turned into porridge, or glass, or particularly springy rubber. There are long periods of thinking that it is just like writing code in the good old days, when all you had to worry about was writing code well. Then there are short periods of realising that you own the responsibility for making your infrastructure repeatable, and that the person who wrote your libraries thinks your programming language is for children and their programming language is for grown-ups, and consequently I spent ninety fucking minutes today debugging a TypeScript error.

Zero-trust is pretty much baked into serverless. Each resource has to be told what it can and cannot talk to, so while there is a danger of misconfiguration, there's a much smaller attack surface for anyone trying to get in. In addition, it means you actually have to think about it: which functions need to talk to which resources? Are there functions that only need to read, and some that only need to write? Of course all this adds massive cognitive load which in a more traditional setup would be filed under 'Doesn't matter'.
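That "read-only vs write-only" question ends up expressed as an IAM policy per function. A minimal sketch, assuming a hypothetical DynamoDB table called `follow-state` (the table name and actions here are illustrative, not the app's actual config):

```python
# A hypothetical least-privilege IAM policy for one function: it may
# read from a single DynamoDB table and do nothing else. A sibling
# function that writes would get a separate policy with PutItem instead.
read_only_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:Query"],
            "Resource": "arn:aws:dynamodb:*:*:table/follow-state",
        }
    ],
}
```

Writing one of these per function is exactly the cognitive load mentioned above, but it's also what keeps the attack surface small.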

Nonetheless…now that I’ve got it working, or at least mostly working, it is absolutely gorgeous to behold. If I were running a server then, right now, it would be merrily using up electricity while nobody looked at it. I could automatically switch it off, but then I run the risk of someone needing to use it while it’s switched off. That’s not effective, and I will surely have lost a customer. Serverless rather elegantly solves the problem of being charged for something while nobody’s using it. This latest app I’ve built, assuming it doesn’t get a whole lot of users, could probably run for a couple of dollars a month for the rest of its life. Serverless, so far, is cheap.

That’s basically it. If I were looking for ease of development, I wouldn’t look to serverless. It’s complicated. I’ve spent the past day thinking about how to enqueue things to effectively hack around the fact that my little functions are incredibly short lived and have no memory of their lives before. State, it turns out, is very difficult. Distributed systems are very difficult. Sending people to Mars is also difficult, but in quite a different way.
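The workaround for short-lived, memoryless functions is to make every queued message self-contained. A sketch of what that looks like, with hypothetical field names (in the real app the serialised body would go to SQS, e.g. via boto3's `send_message`):

```python
import json

# Hypothetical message shape for a "follow this user" task. Because the
# Lambda that picks it up has no memory of earlier invocations, the
# message has to carry everything the worker needs.
task = {
    "requester_id": "12345",          # illustrative field names
    "target_screen_name": "someone",
    "attempt": 1,
}
body = json.dumps(task)

# In the real app this would be something like:
#   sqs.send_message(QueueUrl=queue_url, MessageBody=body)

# The worker on the other end reconstructs the full task from the body.
restored = json.loads(body)
print(restored["target_screen_name"])
```

The state lives in the queue and the database, not in the function.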

The other difficult thing is ordering. I'm using Chalice, which is a great web framework for developing serverless applications. It comes with a few handy decorators that automatically link resources together – for example, one decorator called on_sqs_message keeps an eye on a queue and processes anything that comes down it as soon as it arrives. The reason I'm using a queue for this, rather than just processing it locally, is that I want to be able to distribute the load – if 400 requests come in I want to process 400 requests in parallel, rather than one after the other. However, I'm also deploying this whole thing via the AWS Cloud Development Kit (CDK). That means I create the queue somewhere other than where I link it with my process, and so the deployment tooling throws an enormous tantrum and says I'm trying to link things together that don't exist.

In a really, really weird turn of events someone else experienced the exact same issue at almost the exact same time. Creepy.

The project I'm actually building is a service to enable a user to follow everyone on a Twitter list. It uses the Tweepy library to authenticate users, Simple Queue Service (SQS) to queue up requests to follow someone, Lambda to actually do the processing and DynamoDB to store state and some credentials. It has to deal with Twitter's (sensible) rate limits, and consequently there are queues all over the place. It's a beautiful factory floor written in code.

Lessons learned:

  • accept that state is necessary. Do not try to manage it via environment variables
  • work out how to deploy when you’ve got a small app. We’re really getting to the heart of DevOps here where we deploy small changes, to infrastructure and code, all the time
  • pick something easier. I’ve ended up trying to mock out an external library, which itself reaches out to an external API. I am slightly out of my depth on this one, and it’s something I could do with learning better
  • embrace small files
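On the mocking point: the trick that eventually worked for me is to hand the external client into the function, so a test can swap in a stand-in. A minimal sketch with the standard library's `unittest.mock` – the method name `create_friendship` mirrors Tweepy's v1 API, but here it's just whatever the mock provides:

```python
from unittest.mock import Mock

def follow_user(client, screen_name):
    """Ask a Twitter-like client to follow one user.

    `client` stands in for a real API object such as Tweepy's;
    passing it in (rather than importing it) is what makes this
    testable without touching the network.
    """
    client.create_friendship(screen_name=screen_name)
    return screen_name

# In a test, a Mock replaces the real client so no API call happens.
fake_client = Mock()
result = follow_user(fake_client, "someone")

# The mock records the call, so we can check the function did its job.
fake_client.create_friendship.assert_called_once_with(screen_name="someone")
```

This doesn't solve mocking the library's own calls to the external API, but it at least fences the hard-to-test bit behind one argument.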

I've really enjoyed building this. It's been a significant challenge and quite a steep learning curve, but nonetheless I'm confident I've got a better grip on what I'm doing with it. Honestly, this technology stuff is pretty good.
