Why the serverless revolution has stalled

Key Points

  • For several years we have been promised that serverless computing will usher in a new era in which applications run without being tied to any particular operating system, and that this architecture would solve a host of scalability problems. The reality has turned out rather differently.
  • While many see serverless technology as a new idea, its roots can be traced back to 2006 with Zimki PaaS and Google App Engine, both of which used a serverless architecture.
  • There are four reasons why the serverless revolution has stalled, from limited programming language support to performance issues.
  • Serverless computing is far from useless. However, it should not be seen as a direct replacement for servers; for some applications, it can be a handy tool.

The server is dead, long live the server!

This is the battle cry of the adherents of the serverless revolution. A quick glance at the industry press over the past few years is enough to conclude that the traditional server model is dead and that in a few years we will all be using serverless architectures.

As anyone in the industry knows, and as we also pointed out in our article on the state of serverless computing, this is wrong. Despite the many articles extolling the merits of the serverless revolution, it never took place. In fact, recent studies show that this revolution may have reached a dead end.

Some of the promises of serverless models have certainly come true. But not all of them.

In this article I want to examine the reasons for this state of affairs: why the inflexibility of serverless models is still an obstacle to their wider adoption, even though they remain useful in specific, well-defined circumstances.

What the advocates of serverless computing promised

Before moving on to the problems of serverless computing, let's look at what it was supposed to deliver. The promises of the serverless revolution were numerous and, at times, very ambitious.

For those unfamiliar with the term, here is a brief definition. Serverless computing refers to an architecture in which applications (or parts of applications) run on demand in runtime environments that are typically hosted remotely (that said, serverless systems can also be hosted in-house). Building robust serverless systems has been a major concern of system administrators and SaaS companies over the past few years, as (it is claimed) this architecture offers several key advantages over the "traditional" client/server model:

  1. Serverless models don't require users to maintain their own operating systems, or even to build applications that are compatible with specific operating systems. Instead, developers simply write code, upload it to a serverless platform, and watch it run (see the sketch after this list).
  2. Resources in serverless frameworks are typically billed by the second, or even in sub-second increments. This means that clients only pay for the time their code is actually executing. This compares favorably with a traditional cloud VM, which sits idle much of the time yet has to be paid for around the clock.
  3. Scalability was also supposed to be solved. Resources in serverless frameworks are allocated dynamically, so the system can easily cope with sudden spikes in demand.
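To make the first point concrete, here is a minimal sketch of what "write code, upload it, and watch it run" looks like in practice. It is written as an AWS Lambda-style Python handler; the payload shape and the greeting logic are purely illustrative.

    # A minimal AWS Lambda-style handler: no OS to manage, no server process
    # to babysit. You upload the file and the platform invokes the function
    # on demand, billing only for execution time.
    import json

    def lambda_handler(event, context):
        # 'event' carries the trigger payload (HTTP request, queue message,
        # etc.); 'context' exposes runtime metadata.
        name = event.get("name", "world")
        return {
            "statusCode": 200,
            "body": json.dumps({"message": f"Hello, {name}!"}),
        }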

In short, serverless models provide flexible, low-cost, scalable solutions. I'm surprised we didn't think of this idea earlier.

Is this really a new idea?

Actually, the idea is not new. The concept of letting users pay only for the time their code actually runs has been around since Zimki PaaS introduced it in 2006, and Google App Engine came up with a very similar solution at around the same time.

In fact, what we now call the "serverless" model is older than many of the technologies now described as "cloud native", which deliver much the same thing. As noted, serverless models are essentially just an extension of the SaaS business model that has been around for decades.

It is also worth recognizing that the serverless model is not the same thing as FaaS (Function-as-a-Service), although the two are connected. FaaS is essentially the compute-centric part of a serverless architecture, but it doesn't represent the entire system.

So why all this hype? Well, as Internet penetration in developing countries continues to skyrocket, so does the demand for computing resources. Many countries with rapidly growing e-commerce sectors, for example, simply don't have the computing infrastructure to host the applications behind those platforms. This is where pay-as-you-go serverless platforms come in.

Problems with Serverless Models

The catch is that serverless models have… problems. Don't get me wrong: I'm not saying they are bad in and of themselves, or that they don't provide significant value to some companies in some circumstances. But the central claim of the "revolution" - that serverless architecture would quickly replace the traditional one - never came to fruition.

Here's why.

Limited support for programming languages

Most serverless platforms only run applications written in certain languages. This severely limits the flexibility and adaptability of these systems.

To be fair, serverless platforms support most mainstream languages, and AWS Lambda and Azure Functions also provide wrappers for running applications and functions written in unsupported languages, although this often comes at a performance cost. For most organizations, then, this limitation is not a big deal. But here's the thing: one of the supposed benefits of serverless models is that obscure, infrequently used programs can be run more cheaply, because you only pay for the time they execute. And obscure, rarely used programs are often written in… obscure, rarely used programming languages.

This undermines one of the key advantages of the serverless model.
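For the curious, the "wrapper" mechanism mentioned above takes the form, on AWS Lambda, of a custom runtime: a small program that polls the platform's Runtime API for invocations and posts results back. The sketch below shows that loop in Python for readability (a real custom runtime would be written in, or shell out to, the otherwise unsupported language); handle_event is a hypothetical stand-in for the actual function.

    # Minimal sketch of an AWS Lambda custom-runtime loop, assuming the
    # documented Runtime API endpoints. Only runs inside a Lambda sandbox.
    import json
    import os
    import urllib.request

    API = f"http://{os.environ['AWS_LAMBDA_RUNTIME_API']}/2018-06-01/runtime"

    def handle_event(event):
        # Hypothetical placeholder: a real custom runtime would hand the
        # event to a binary built in the "unsupported" language.
        return {"echo": event}

    while True:
        # 1. Long-poll the Runtime API for the next invocation.
        with urllib.request.urlopen(f"{API}/invocation/next") as resp:
            request_id = resp.headers["Lambda-Runtime-Aws-Request-Id"]
            event = json.loads(resp.read())

        # 2. Run the handler and post the result back for this request id.
        body = json.dumps(handle_event(event)).encode()
        req = urllib.request.Request(
            f"{API}/invocation/{request_id}/response", data=body, method="POST")
        urllib.request.urlopen(req)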

Vendor lock-in

The second problem with serverless platforms, or at least with the way they are implemented today, is that they rarely look alike at an operational level. There is practically no standardization in how functions are written, deployed, or managed. This means that migrating functions from one platform to another is extremely time-consuming.
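To illustrate: the "same" trivial HTTP-triggered function has a different entry-point shape on each of the big three platforms. The Python sketch below is indicative only; the handler names are arbitrary, and the Azure variant assumes the azure.functions SDK is installed.

    # AWS Lambda: an event dict plus a context object, returning a dict.
    def lambda_handler(event, context):
        return {"statusCode": 200, "body": "hello"}

    # Google Cloud Functions: a Flask request object, Flask-style returns.
    def gcf_handler(request):
        return "hello", 200

    # Azure Functions: typed request/response objects from the vendor SDK.
    import azure.functions as func

    def azure_handler(req: func.HttpRequest) -> func.HttpResponse:
        return func.HttpResponse("hello", status_code=200)

None of the three is portable to the other two without rewriting its entry point, and that is before you touch triggers or deployment descriptors.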

The hardest part of moving to a serverless model isn't the compute functions themselves, which are usually just small snippets of code, but the way applications communicate with connected systems such as object storage, identity management, and queues. The functions can be moved, but the rest of the application can't. This is the exact opposite of the cheap, flexible platforms we were promised.

Some argue that serverless models are new and that there hasn't been time to standardize how they work. But they are not that new, as I noted above, and many other cloud technologies, such as containers, have already become far more usable thanks to the development and widespread adoption of good standards.

Performance

The computing performance of serverless platforms is difficult to measure, partly because vendors tend to keep the information secret. Most argue that functions on remote, serverless platforms run just as fast as they would on in-house servers, aside from a few unavoidable latency issues.

However, some evidence suggests otherwise. Functions that haven't previously run on a particular platform, or haven't run for some time, take noticeable time to initialize: the so-called cold start. This is probably because their code has been moved to a slower, less readily available storage tier, although, as with benchmarks, most vendors won't tell you about the data relocation either.
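You don't have to take the vendors' word for it: a crude way to observe the effect is to time two back-to-back calls to the same HTTP-triggered function. The endpoint below is hypothetical, and a serious benchmark would need many samples, but the first call will often visibly include initialization time.

    # Rough cold-start probe: the first call often pays initialization
    # cost; the immediately following call usually hits a warm instance.
    import time
    import urllib.request

    URL = "https://example.com/my-function"  # hypothetical endpoint

    for label in ("first call (possibly cold)", "second call (warm)"):
        start = time.perf_counter()
        urllib.request.urlopen(URL).read()
        print(f"{label}: {time.perf_counter() - start:.3f}s")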

Of course, there are several ways around this. One is to optimize your functions for whichever language and runtime your serverless platform favors, but that somewhat undermines the claim that these platforms are "agile".

Another approach is to make sure performance-critical programs run regularly so they stay "fresh". This second approach, of course, somewhat contradicts the claim that serverless platforms are more cost-effective because you only pay for the time your programs run. Cloud providers have introduced new ways to reduce cold starts, but many of them require scaling to one, that is, keeping at least one instance of the function always running, which undermines the original value proposition of FaaS.
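In practice, the "keep it fresh" workaround usually looks like the sketch below: a scheduled trigger (a timer rule firing every few minutes, say) pings the function so the platform keeps a warm instance around, and the handler short-circuits those pings. The {"warmup": true} payload is an assumed convention for illustration, not a platform standard.

    import json

    def lambda_handler(event, context):
        # Short-circuit scheduled warm-up pings before doing any real
        # work, so the periodic keep-alive invocations stay cheap.
        if isinstance(event, dict) and event.get("warmup"):
            return {"statusCode": 200, "body": "warm"}

        # ... the real, performance-critical work would go here ...
        return {"statusCode": 200, "body": json.dumps({"ok": True})}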

The cold-start problem can also be partly addressed by running serverless systems in-house, but this comes with its own costs and remains a niche option for well-resourced teams.

You can't run entire applications

Finally, perhaps the most important reason why serverless architectures won't replace traditional models anytime soon is that they (generally) can't run entire applications.

More precisely, running them this way is impractical from a cost point of view. Your successful monolith probably shouldn't be turned into a set of four dozen functions tied together by eight gateways, forty queues, and a dozen database instances. For this reason, serverless suits greenfield development. Virtually no existing application can simply be ported over; you can migrate, but you'll be starting from scratch.

This means that in the vast majority of cases serverless platforms are used as a complement to back-end servers, handling computationally intensive tasks. That makes them very different from the other two main forms of cloud computing, containers and virtual machines, which offer a holistic way to perform remote computing. It also illustrates one of the challenges of migrating from microservices to serverless systems.

Of course, this is not always a problem. The ability to periodically tap huge computing resources without buying your own hardware can bring real and lasting benefits to many organizations. But when some applications live on in-house servers and others on serverless cloud architectures, management moves to a whole new level of complexity.

Long live the revolution?

Despite all these complaints, I'm not against serverless solutions per se. Honestly. It's just that developers need to understand, especially if they're exploring serverless models for the first time, that the technology is not a direct replacement for servers. Instead, check out our tips and resources on building serverless applications and decide for yourself how best to apply the model.

Source: habr.com
