Understanding the success of the "Serverless" model

Anyone who has worked on cloud infrastructure has heard of the serverless model, but this name actually hides many different aspects. Let's take a look...

The serverless model: logical evolution of containers?

Containers have been the talk of the industry for several years now. A genuine revolution over the last five years, containers (and their orchestrators) have profoundly changed the approach to infrastructure, allowing applications composed of microservices to be deployed ever more simply and quickly. I won't cover that evolution here.

The principle of serverless is to push this approach further: we no longer want to worry about the infrastructure and the underlying middleware at all. I have my application code and I want it to run, full stop. I don't need to know where it runs or how the machine is provisioned; I just want it to run.

The serverless model brings exactly that flexibility: code to run, without worrying about the underlying infrastructure. Even better, thanks to on-demand billing, it can drastically reduce the cost of applications suited to this model, since you pay only for the resources actually consumed rather than for an infrastructure running 24/7 waiting for connections.
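To make the on-demand billing model concrete, here is a back-of-the-envelope calculation for a low-traffic Lambda function. The rates are illustrative (roughly AWS's published us-east-1 prices at the time of writing); always check the current pricing page before relying on them:

```python
# Illustrative Lambda rates (check the current AWS pricing page):
PRICE_PER_REQUEST = 0.20 / 1_000_000   # USD per invocation
PRICE_PER_GB_SECOND = 0.0000166667     # USD per GB-second of compute

def monthly_cost(requests, avg_duration_s, memory_gb):
    """Cost = per-request charge + (GB-seconds actually consumed)."""
    compute_gb_s = requests * avg_duration_s * memory_gb
    return requests * PRICE_PER_REQUEST + compute_gb_s * PRICE_PER_GB_SECOND

# 1M requests/month, 200 ms average, 128 MB of memory:
cost = monthly_cost(1_000_000, 0.2, 0.125)
print(f"~${cost:.2f}/month")  # well under a dollar
```

An idle month costs exactly $0, which is the whole point compared to a server billed 24/7.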

Easy to set up

As described above, the serverless model reduces time to market: you only have to worry about application code. Companies such as ACloudGuru, for example, were able to build their website on these technologies at very low cost. Moreover, having no infrastructure to manage means you no longer need system and network engineers for that part.

Take the AWS Lambda service: in a few minutes I can deploy my code, in Python 3 for example, and wire it directly to an ALB (possible since the end of last year) or to the API Gateway service. By hosting my assets on S3 with static website hosting, I get a complete site up and running very quickly, without ever having to manage infrastructure.
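As a sketch, here is the minimal shape of a Python 3 handler for API Gateway's proxy integration (ALB targets use the same response contract); the function and field names below follow the event format, but the greeting logic is just a placeholder:

```python
import json

def lambda_handler(event, context):
    """API Gateway / ALB pass the HTTP request as `event` and expect back
    a dict with statusCode, headers and a string body."""
    # Query-string parameters may be absent entirely, hence the `or {}`.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Deploying this function and pointing a route at it is the entire backend; there is no server, AMI, or patching schedule anywhere in the picture.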

Go serverless all the way, or not at all

One of the points most often put forward about serverless is that redundancy is managed natively, allowing highly available and potentially infinitely scalable solutions. A common mistake is to go serverless on some components but not all: for example, putting API Gateway in front of Lambda (so far so good), but backed by a "classical" data source such as a MariaDB RDS server, which cannot scale anywhere near as far as Lambda. That is how you build a highly available SPOF! You can still use serverless for only some components, as long as you are aware that you lose the potential for infinite scalability.

Cost reduction

On this aspect, which is also often highlighted, you have to be very careful: as always when you venture into FinOps, nothing is black and white!

I am myself a fervent advocate of serverless, but I am aware that this model is not suited to every need. Applications under very high or sustained load, for example, can sometimes cost more in serverless than on a "classic" infrastructure.

Similarly, I mentioned infinite scalability above, which is a significant advantage. But it can also backfire on its owner, for example during a DDoS attack that tries to overload the service. The service will keep responding, which is great for the user-facing image, but without a firewall in front, the bill will literally skyrocket. Having already seen a small attack on an EMR fleet, I can say it can be violent: as an order of magnitude, in one hour the billing equaled a full day of operation across all of the company's AWS accounts.

Fortunately, it is possible to set limits. Lambda, for example, lets you set a reserved concurrency to cap the number of simultaneous executions.
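For instance, assuming a function named my-function (a placeholder), the cap can be set with the AWS CLI; this is account configuration, so it requires valid credentials:

```shell
# Cap "my-function" at 100 simultaneous executions; invocations
# beyond that limit are throttled instead of billed.
aws lambda put-function-concurrency \
  --function-name my-function \
  --reserved-concurrent-executions 100
```

Combined with billing alarms, this turns a potentially unbounded bill into a bounded one.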

On the other hand, I have seen applications running with Lambda + API Gateway + S3 that cost less than $10 per month, instead of several hundred dollars on an ALB + EC2 backend infrastructure.

Cloud players invest massively in serverless

I'm going to focus here on AWS, which I know much better than its competitors. It must be admitted that today's cloud players are investing massively in the serverless model. The latest example at Amazon: EKS (managed Kubernetes) is now compatible with the Fargate service, which lets you run containers in serverless mode, simply paying for the CPU/RAM consumed.

But let's remain pragmatic: given all the advantages mentioned above, what interest do these providers have in reducing your bill?

From my point of view, there are at least two:

  • It creates strong lock-in with AWS, further reducing the possibility of moving elsewhere. Tools such as SAM (Serverless Application Model) and SAR (Serverless Application Repository) are specifically designed to achieve this.
  • It allows Amazon to pool its infrastructure at a very fine granularity, much finer than a VM, which is heavier to manage.

In conclusion

The purpose of this post is not to list every existing serverless solution, but rather to give a quick overview of why we hear more and more about serverless.

As described above, this model brings undeniable advantages:

  • "Infinite" scalability
  • High availability
  • No infrastructure to manage
  • Reduced time to market

But it also brings new issues to take into account, in terms of both security and billing. For example, Fargate costs roughly three times more for the same CPU/RAM as an EC2 machine, but spares you from having to manage the latter.
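That ratio leads to a simple rule of thumb. If serverless capacity costs k times more per hour than an always-on machine (the rough k = 3 above for Fargate vs EC2), then paying on demand wins as long as your workload is busy less than 1/k of the time. A sketch of that break-even reasoning:

```python
def breakeven_utilization(price_ratio):
    """Fraction of the time you can be busy before an always-on
    machine becomes cheaper than paying the serverless hourly rate."""
    return 1 / price_ratio

def cheaper_option(hours_busy_per_day, price_ratio):
    utilization = hours_busy_per_day / 24
    return ("serverless"
            if utilization < breakeven_utilization(price_ratio)
            else "always-on")

print(cheaper_option(2, 3))   # busy 2h/day  -> "serverless"
print(cheaper_option(20, 3))  # busy 20h/day -> "always-on"
```

This ignores the operational cost of managing the EC2 fleet, which in practice tilts the balance further toward serverless than raw compute prices suggest.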

What do you think about this? Feel free to react in the comments!