Is Serverless the future of Cloud Computing?

An interview with Luciano Mammino and Gojko Adzic

Gojko Adzic is a name you already know if you’ve been following us over the past few years… raise your hand if you knew Mr Adzic is also a go-to serverless expert!
We asked Luciano Mammino, author of “Node.js Design Patterns, Second Edition”, serverless enthusiast and international speaker, to interview Gojko on all things serverless.

What follows is an in-depth conversation among experts about one of the most revolutionary and disruptive technologies out there… feeling curious? Read on!

Luciano: Hi Gojko, it’s really a pleasure for me to share some time with someone so influential in the sphere of agile, testing and serverless, so thanks a lot for being here! How are you today?
Ready to have a chat about serverless?

Gojko: Sure, it’s always a pleasure to talk to people with similar interests.

Luciano: Brilliant! I would like to start with some definitions.
The word “serverless” always sounds a bit weird at first and people have come up with all sorts of definitions for it.
The last one I heard was “serverless is a billing model”.
A very unconventional and provocative definition, but also a very interesting one. Of course, some people like to see it differently.
What do you think would be a good definition for serverless?

Gojko: The most revolutionary thing in the serverless space for me is the billing model.
I wrote about that in 2016, and I still think so.
Technically, spinning up containers quickly and scaling them is nice, but that already existed before Amazon offered to rent it on tap and bill for individual requests.
Whether that’s the whole definition or not, I don’t know.

I kind of like Simon Wardley’s idea that serverless is finally a good implementation of platform-as-a-service — because Lambda functions really become magic when people use them as a universal glue for a ton of platform services that Amazon provides.

Instead of thinking about Lambda as a web server, we can use it to store files to S3, process client requests using Kinesis, and push messages to clients using AWS IoT.
Instead of implementing our own authorisation, we can use Cognito and IAM.
That’s how we dropped most of the infrastructure code and saved a lot of cash when moving from Heroku to Lambda.
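To make the “universal glue” idea concrete, here is a minimal sketch of such a single-purpose function that stores a document to S3. All names here (the bucket, the key layout, `saveDocument`) are illustrative rather than taken from MindMup, and the S3 client is injected so the core logic can be exercised without AWS:

```javascript
// A hypothetical single-purpose Lambda "glue" function: take an event,
// write one JSON document to S3, and report the key that was written.
// The S3 client is passed in, which keeps the function easy to test.
const saveDocument = (s3) => async (event) => {
  const params = {
    Bucket: 'example-bucket',           // assumed bucket name
    Key: `documents/${event.id}.json`,  // one object per document
    Body: JSON.stringify(event.content),
    ContentType: 'application/json'
  };
  await s3.putObject(params).promise(); // AWS SDK v2 style call
  return { saved: params.Key };
};

// In a real deployment the handler would be wired up roughly like:
//   const AWS = require('aws-sdk');
//   exports.handler = saveDocument(new AWS.S3());
```

Injecting the client is just one way to keep such glue functions small and testable; the point is that the function does a single task and leaves storage, scaling and retries to the platform.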

Running “serverless” allowed us to effectively rent the platform from Amazon and pay everything per request, without thinking about reserving capacity or scaling.
Just running the same app inside Lambda and paying for it differently wouldn’t have given us results that good.

Luciano: One of the things I tend to be curious about when discussing IT trends such as serverless is why such principles or models went mainstream.
I recently did a little research on the history of serverless as I was trying to understand why this model got so much attention and how it evolved from other cloud models.
I don’t know if you ever asked yourself why we ended up with serverless and what makes it so special.
If you did, I am really curious to know what conclusions you drew.

Gojko: We ended up with serverless because Amazon is really good at mining metadata about client workflows on their system, spotting trends and then building products around those trends.

In 2002-2003 I worked as an editor for a local computer magazine, and I remember reviewing an article about “utility computing”, a write-up of some conference where people from HP then presented their vision for internet trends.
They talked about how infrastructure would become so commoditised that for most companies it would make most sense to just pay a provider for it, similar to how people pay for electricity or water.

A really big or really special company may have its own electricity production or water sourcing, but most companies out there don’t even consider running their own power plant.
The HP people argued that computing is going in the same direction and will just become a public utility.
In many ways, this is turning out to be true, at least for me.

I think Serverless is the next step in that trend.
Cloud computing is still very much vendor controlled and we’re in the days similar to Westinghouse and Edison offering different types of electricity, but it’s available on tap.

One day we’ll be able to say: “A glass of cloud computing from the tap, please!”

Luciano: Let’s take a more practical angle for a moment.
Can you tell me something about a serverless application you helped build?
What did you like about it? Was the serverless model a good fit for the given project? Any particular challenge the team had to face because of the constraints of the serverless model?

Gojko: I’m working on MindMup, a collaborative mind mapping application.
In 2016, we started gradually moving services off Heroku because of a combination of factors, and tried out Lambda as one of the potential solutions. We were so impressed by the capabilities that over the next year pretty much everything moved there.

It’s a great fit because the application needs to scale massively, but it’s throughput bound, and our users generally wouldn’t mind a few dozen extra milliseconds of latency.

By our internal metrics during the migration period, there was actually no noticeable difference in end-user latency, but we were able to cut hosting costs by about two thirds, remove a lot of infrastructure code and benefit from Amazon’s autoscaling capabilities to handle load.

All in all, we’re a lot more productive this way.
We really embraced the AWS platform, so we write almost no infrastructure code, meaning that we can focus on building important business features instead.

Over time, one of the biggest advantages for us was that AWS tends to patch things without us even caring about that.
A recent example was the CVE-2018-5390 vulnerability.
I learnt about it when a user asked on a forum if MindMup is impacted by that, a few hours after warnings started to appear on major news sites.
Amazon, it turns out, already had a fix in for Lambda that morning.

If we were running on our own containers or on EC2, I’d have to waste a lot of time patching and restarting services.
This way, I spend my time working on what makes our product better, not chasing after infrastructure problems.

Luciano: Do you think serverless is mature enough to be adopted as a default for developing new cloud based services?
If not, what do you think is missing?

Gojko: As always, this depends on the context.
I think Lambda is a pretty solid service for applications that require a lot of throughput and auto-scaling, but if you need to squeeze out milliseconds from average response times, that’s not a good platform.
So I wouldn’t build an algorithmic trading app that needs to beat competitors by being a few milliseconds faster, but for typical web APIs it’s perfectly fine.

Likewise, the current limit of five minutes per execution makes it impractical for long-polling or cases when you need to keep an open socket, such as a Twitter client.
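For reference, the execution cap Gojko mentions is a per-function setting bounded by the platform; in a hypothetical AWS SAM template of that era it would look something like this (resource and handler names are illustrative):

```yaml
# Hypothetical SAM fragment: Timeout is capped by the platform, so
# anything that must run longer than the cap (long polling, holding
# a socket open) has to live outside Lambda.
Resources:
  ProcessRequest:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs8.10
      Timeout: 300   # the five-minute per-execution cap discussed here
```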

Luciano: Of course, serverless is not a silver bullet.
Personally, I try to adopt serverless by default, but I had to work on some applications where, along the way, we realized serverless wasn’t going to be a good fit and we had to go back to more classical architectures.
I’d like to hear some war stories of yours about giving up on serverless, if you have any.

Gojko: I don’t have anything in that direction. Generally, we were able to move everything into Lambda or use it as glue for associated services for MindMup.

Luciano: Do you think it’s easy to get started with serverless?
How would you suggest a team start embracing the serverless paradigm?

Gojko: I don’t know about other infrastructure providers such as Google and Microsoft, but Lambda requires some heavy lifting to understand the configuration. It’s universal glue, so people can do incredibly complicated things with it, but it’s not the easiest thing to get started with.

When we started out with MindMup on Lambda, our first functions were twenty to thirty lines long, accompanied by a two-hundred-line shell script to deploy them.
We built an internal tool to take care of all that boilerplate and enable us to work the way we used to with lightweight web servers, such as Express or Ruby on Rails.
The tool is now open source, and it makes simple things on AWS really easy to do.
I suggest people try it first when learning about Lambda, and then perhaps move on to more complicated stuff later if they need more heavy support.
It has a ton of simple example projects for the most common tasks, so it’s easy to copy and paste stuff when people experiment.
Check it out at

I also run a workshop for programmers to help them get started with Lambda, based on our experience with MindMup.
Check out the one I’m organising with Avanscoperta, called Serverless Development with AWS Lambda (currently on demand only).

Luciano: I have seen many, many tutorials about building hello-world applications or other trivial applications like webhooks, Slack commands, image manipulation or backup systems.
It’s very hard to find examples and tutorials on how to build real production-like applications with many components.
Is this because serverless is good only for simple apps or is it just that the community isn’t mature enough to produce more realistic examples?

Gojko: I think that’s because Lambda is best used as a glue between other services.
If the Lambda function is itself very complex, it’s probably trying to do too much. Lots of our Lambda functions at MindMup are single-purpose, single-task actions.

Rather than trying to find complex examples with Lambdas, I suggest people look into the services that the platform offers, such as Cognito, AWS IoT, Kinesis and CloudWatch, then see what they need to benefit from those services and just wire that part up with Lambda functions.

Luciano: I’d like to finish this wonderful chat by asking you if you believe serverless is the future of cloud computing and why.
Will we live and breathe serverless in the upcoming years, or will there be something different to rule our days as cloud developers?

Gojko: I think the general trend of getting towards paying for computing as a public utility will continue, so we’ll see more commoditization in that space.
There are still things like long-running workflows or low-latency applications where current serverless architectures won’t apply, so I assume that we’ll see some development in that area.

It’s difficult to predict what the future will hold — even though someone like HP might have talked about utility computing fifteen years ago, when Amazon was still trying to figure out how to sell books, AWS beat everyone to reach this stage of the game.

Luciano: Thanks Gojko! This chat has been really illuminating to me and I am sure it was the same for all our readers. Thanks again for joining me and for shedding some light on this serverless trend. All the best.

Gojko: Thanks for a lovely chat!

Pics credits: Adam Wilson, Imani, Imgix.

Learn from Gojko Adzic

Gojko is the trainer of the Impact Mapping Remote Workshop and Specification By Example Remote Workshop.

Check out our upcoming workshops: Avanscoperta Workshops and Training courses.

Get our updates, a hand-picked selection of articles, events and videos straight into your inbox... Once a week!
Subscribe to Avanscoperta's Newsletter (available in Italian and English).

Gojko Adzic

Author of: Impact Mapping, Fifty Quick Ideas To Improve Your User Stories, Humans vs Computers.
