Tips and resources for building serverless applications

Although serverless technologies have rapidly gained popularity in recent years, many misconceptions and fears still surround them. Vendor lock-in, tooling, cost management, cold starts, monitoring, and the development lifecycle are all hot topics when it comes to serverless. In this article, we'll explore some of these topics and share tips and links to helpful sources of information to help beginners create powerful, flexible, and cost-effective serverless applications.

Misconceptions about serverless technologies

Many people believe that serverless and serverless compute (Functions as a Service, FaaS) are almost the same thing, and that the difference is too small to warrant a term of its own. Although AWS Lambda was one of the stars of the rise of serverless technology and remains one of the most popular elements of serverless architectures, there is more to this architecture than FaaS.

The basic principle behind serverless technologies is that you don't have to worry about managing and scaling your infrastructure, and you only pay for what you use. Many services fit these criteria: AWS DynamoDB, S3, SNS or SQS, Graphcool, Auth0, Now, Netlify, Firebase, and many others. In general, serverless means using the full power of cloud computing without the need to manage infrastructure or optimize it for scaling. It also means that security at the infrastructure level is no longer your concern, which is a huge benefit given the difficulty and complexity of meeting security standards. Finally, you don't have to buy infrastructure up front.

Serverless can be considered a “state of mind”: a certain mentality when designing solutions. Avoid approaches that require maintaining any infrastructure. With a serverless approach, we spend our time solving problems that directly impact the project and bring value to our users: building robust business logic, developing user interfaces, and creating responsive and reliable APIs.

For example, if we can avoid managing and maintaining a full-text search platform, then that is what we will do. This approach to building applications can greatly speed up time to market, because you no longer need to think about managing complex infrastructure. Eliminate the responsibilities and costs of infrastructure management and focus on building the applications and services your customers need. Patrick Debois called this approach 'servicefull', and the term has been adopted in the serverless community. Functions should be thought of as the glue connecting services, deployed as small modules (instead of deploying an entire library or web application). This provides incredible granularity for managing deployments and application changes. If you can't deploy functions this way, it may indicate that they perform too many tasks and need to be refactored.

Some worry about vendor dependency when developing cloud applications. The same is true with serverless technologies, and this is hardly a misconception. In our experience, building serverless applications on AWS, combined with AWS Lambda's ability to glue other AWS services together, is part of the strength of serverless architectures. This is a good example of synergy, where the result of the combination is more than the sum of its parts. Trying to avoid vendor dependency can create even more problems. When working with containers, it's easier to manage your own abstraction layer between cloud providers. But when it comes to serverless solutions, the effort won't pay off, especially if cost-effectiveness is a concern from the start. Be sure to find out how vendors provide their services. Some specialized services rely on integration points with other vendors and may provide plug-and-play connectivity out of the box. It's easier to invoke a Lambda function from an API Gateway endpoint than to proxy the request to a container or an EC2 instance. Graphcool offers easy configuration with Auth0, which is simpler than using third-party authentication tools.

Choosing the right vendor for your serverless application is an architectural-level decision. When you create an application, you don't expect to one day return to managing servers. Choosing a cloud vendor is no different than choosing to use containers or a database, or even a programming language.

Consider:

  • What services do you need, and why?
  • What services do cloud providers offer, and how can you combine them with your chosen FaaS solution?
  • Which programming languages are supported (dynamically or statically typed, compiled or interpreted, what are the benchmarks, what is the cold start performance, what is the open source ecosystem, etc.)?
  • What are your security requirements (SLA, 2FA, OAuth, HTTPS, SSL, etc.)?
  • How will you manage CI/CD and your software development cycles?
  • Which infrastructure-as-code solutions can you take advantage of?

If you extend an existing application and incrementally add serverless functionality, this may limit the available capabilities somewhat. However, almost all serverless technologies provide some kind of API (via REST or message queues) that allows you to create extensions independent of the application core and with easy integration. Look for services with clear APIs, good documentation, and a strong community, and you can't go wrong. Ease of integration can often be a key metric, and is probably one of the main reasons why AWS has been so successful since Lambda was released in 2015.

When Serverless Is Good

Serverless technologies can be applied almost everywhere, and their advantages are not limited to a single type of application. Thanks to serverless technologies, the barrier to entry for cloud computing today is remarkably low. If developers have an idea but don't know how to manage cloud infrastructure and optimize costs, they don't need to find an engineer to do it for them. If a startup wants to build a platform but fears that costs might get out of control, it can easily turn to serverless solutions.

Due to cost savings and ease of scaling, serverless solutions are equally applicable to internal systems and to external ones, up to a web application with a multi-million audience. Bills are measured not in euros but in cents. Renting the simplest AWS EC2 instance (t1.micro) for a month costs about €15, even if you do nothing with it (who has never forgotten to turn one off?!). In comparison, to reach this level of spending over the same period, you would need to run a 512 MB Lambda function for 1 second about 3 million times. And if you don't use the function, you pay nothing.
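The arithmetic behind this comparison can be sketched as a back-of-envelope cost model. The per-GB-second and per-request prices below are assumptions based on typical AWS list prices (in USD); the exact break-even point depends on current pricing, the free tier, and exchange rates, so treat this as an illustration rather than a quote:

```python
# Assumed list prices (USD); check the current AWS Lambda pricing page.
PRICE_PER_GB_SECOND = 0.0000166667
PRICE_PER_REQUEST = 0.0000002

def lambda_cost(invocations, memory_mb, duration_ms):
    """Rough monthly Lambda cost: compute time billed in GB-seconds,
    plus a flat per-request charge. Ignores the free tier."""
    gb_seconds = invocations * (memory_mb / 1024) * (duration_ms / 1000)
    return gb_seconds * PRICE_PER_GB_SECOND + invocations * PRICE_PER_REQUEST

# A 512 MB function running for 1 second, a million times a month:
monthly = lambda_cost(1_000_000, 512, 1000)
```

With zero invocations the cost is zero, which is the key contrast with an always-on EC2 instance.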

Since serverless is primarily event-driven, it's fairly easy to add serverless infrastructure to legacy systems. For example, using AWS S3, Lambda, and Kinesis, you can create an analytics service for a legacy retail system that can receive data through an API.
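A minimal sketch of such an ingester might look like the following. The event shape matches what S3 sends to Lambda, but the stream name, the record fields we extract, and the injected `kinesis` client parameter are illustrative assumptions, not a prescribed design:

```python
import json

def handler(event, context=None, kinesis=None):
    """S3-triggered analytics ingester (sketch). Extracts object metadata
    from the S3 event and forwards it to a Kinesis stream."""
    records = []
    for rec in event.get("Records", []):
        s3 = rec["s3"]
        records.append({
            "bucket": s3["bucket"]["name"],
            "key": s3["object"]["key"],
            "size": s3["object"].get("size", 0),
        })
    # 'kinesis' would be a boto3 Kinesis client; injecting it keeps the
    # handler unit-testable without AWS access.
    if kinesis is not None:
        for r in records:
            kinesis.put_record(
                StreamName="legacy-analytics",  # hypothetical stream name
                Data=json.dumps(r).encode("utf-8"),
                PartitionKey=r["key"],
            )
    return {"ingested": len(records)}
```

The legacy system only needs to write files to the bucket; the rest of the pipeline is event-driven.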

Most serverless platforms support multiple languages, most often Python, JavaScript, C#, Java, and Go. There are usually no restrictions on which libraries you can use, so you are free to use your favorite open source libraries. However, it is advisable not to go overboard with dependencies, so that your functions perform optimally and the huge scalability of your serverless application is not negated. The more packages that need to be loaded into the container, the longer the cold start will take.

A cold start occurs when the container, runtime, and function handler must be initialized before a request can be served. Because of this, function latency can reach up to 3 seconds, which is not ideal for impatient users. However, cold starts only happen on the first call after the function has been idle for a few minutes. Many therefore consider this a minor annoyance that can be worked around by regularly pinging the function to keep it warm, or they ignore the issue altogether.
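The warm-up ping pattern can be sketched as follows. A scheduled rule invokes the function every few minutes with a marker payload; the `"warmup"` key is our own convention for this example, not an AWS field:

```python
def handler(event, context=None):
    # If this is a scheduled keep-warm ping, return immediately so the
    # warm-up invocation stays cheap and skips the real work.
    if event.get("warmup"):
        return {"warmed": True}
    # ...real business logic goes here...
    return {"result": do_work(event)}

def do_work(event):
    # Placeholder business logic for the sketch.
    return event.get("input", "").upper()
```

The early return matters: a warm-up call should not touch databases or downstream services.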

AWS has released Serverless Aurora, a serverless SQL database. However, SQL databases are not ideal for this type of use because they rely on connections to perform transactions, which can quickly become a bottleneck under heavy AWS Lambda traffic. Yes, developers are constantly improving Serverless Aurora and you should experiment with it, but today NoSQL solutions like DynamoDB are a better fit. There is no doubt, however, that this situation will change very soon.

The toolkit also imposes many limitations, especially in the area of local testing. Although there are solutions like Docker-Lambda, DynamoDB Local, and LocalStack, they require painstaking work and a significant amount of configuration. However, all these projects are actively developed, so it is only a matter of time before the tools reach the level we need.

The impact of serverless technologies on the development cycle

Since your infrastructure is simply configuration, you can define and deploy code using scripts, such as shell scripts. Or you can resort to configuration-as-code solutions like AWS CloudFormation. Although this service does not cover every area, it does allow you to define specific resources to be used as Lambda functions. That is, where CloudFormation fails you, you can write your own custom resource (a Lambda function) to close the gap. This way you can do almost anything, even configure dependencies outside your AWS environment.
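The custom-resource mechanism works by CloudFormation invoking your Lambda with a pre-signed `ResponseURL`, to which the function must PUT a status document. A minimal sketch of that response handling, assuming the standard custom-resource event fields:

```python
import json
import urllib.request

def build_response(event, status="SUCCESS", data=None, reason=""):
    """Build the JSON document CloudFormation expects from a custom
    resource. Field names follow the custom-resource protocol."""
    return {
        "Status": status,
        "Reason": reason or "See CloudWatch logs for details",
        "PhysicalResourceId": event.get("PhysicalResourceId",
                                        event["LogicalResourceId"]),
        "StackId": event["StackId"],
        "RequestId": event["RequestId"],
        "LogicalResourceId": event["LogicalResourceId"],
        "Data": data or {},
    }

def send(event, body):
    # PUT the response to the pre-signed URL so the stack operation
    # can complete; without this, CloudFormation waits until timeout.
    req = urllib.request.Request(
        event["ResponseURL"],
        data=json.dumps(body).encode("utf-8"),
        method="PUT",
        headers={"Content-Type": ""},
    )
    urllib.request.urlopen(req)
```

Your actual provisioning logic (calling an external API, seeding a database, etc.) runs between receiving the event and calling `send`.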

Because it's all just configuration, you can customize your deployment scripts for specific environments, regions, and users, especially if you're using infrastructure-as-code solutions like CloudFormation. For example, you can deploy a copy of the infrastructure for each branch in the repository so that you can test them completely in isolation during development. This drastically speeds up feedback for developers when they want to understand whether their code works adequately in a live environment. Managers do not need to worry about the cost of deploying multiple environments, as they only pay for actual usage.

DevOps engineers have less to worry about, since they only need to make sure developers have the correct configuration. There are no more instances, load balancers, or security groups to manage. Hence the term NoOps is increasingly used, although it is still important to be able to configure the infrastructure, especially when it comes to IAM configuration and optimizing cloud resources.

There are very powerful monitoring and visualization tools like Epsagon, Thundra, Dashbird and IOPipe. They allow you to monitor the current state of your serverless applications, provide logging and tracing, capture performance metrics and architecture bottlenecks, perform cost analysis and forecasting, and more. They not only give DevOps engineers, developers, and architects a comprehensive view of application performance, but also allow managers to monitor the situation in real time, with per-second resource costs and cost forecasting. It is much more difficult to organize this with a managed infrastructure.

Designing serverless applications is much easier because you don't have to deploy web servers, manage virtual machines or containers, patch servers and operating systems, configure Internet gateways, and so on. Abstracting away all these responsibilities lets a serverless architecture focus on what matters most: solving business problems and customer needs.

While the toolkit could be better (and it gets better every day), developers can focus on implementing business logic and on how best to distribute the application's complexity across different services within the architecture. Serverless application management is event-driven and abstracted by the cloud provider (e.g., SQS, S3 events, or DynamoDB streams). Therefore, developers only need to write business logic that responds to certain events, without worrying about how best to implement databases and message queues, or how to organize optimal access to data on specific storage hardware.

Code can be executed and debugged locally, as with any development process. Unit testing remains the same. The ability to deploy an entire application infrastructure using a custom stack configuration allows developers to quickly get important feedback without worrying about the cost of testing or the impact on expensive managed environments.
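Since a handler is just a plain function of an event (and an optional context), unit testing it looks like testing any other code. A small sketch, with an invented greeting handler purely for illustration:

```python
def make_greeting(event, context=None):
    # An ordinary Lambda-style handler: a plain function of (event,
    # context), so it can be tested locally with no cloud tooling.
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}

def test_make_greeting():
    # A plain local unit test; no AWS involved.
    resp = make_greeting({"name": "Ada"})
    assert resp["statusCode"] == 200
    assert resp["body"] == "Hello, Ada!"
```

Integration concerns (IAM permissions, event wiring) are then exercised against a cheap, disposable deployed stack rather than mocked exhaustively.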

Tools and techniques for building serverless applications

There is no single way to build serverless applications, nor a single set of services for the task. The leader among powerful serverless solutions today is AWS, but also look at Google Cloud, Zeit, and Firebase. If you are using AWS, we can recommend the Serverless Application Model (SAM) as an approach to assembling applications, especially when using C#, because Visual Studio has great tooling for it. The SAM CLI can do everything Visual Studio can, so you won't lose anything if you switch to a different IDE or text editor. Of course, SAM works with other languages as well.

If you write in other languages, the Serverless Framework is an excellent open source tool that allows you to configure anything using very powerful YAML configuration files. Serverless Framework also supports various cloud services, so we recommend it to those who are looking for a multi-cloud solution. It has a huge community that has created a bunch of plugins for any need.

For local testing, the open source tools Docker-Lambda, Serverless Local, DynamoDB Local, and LocalStack are well suited. Serverless technologies are still in their early stages of development, as are the tools for them, so when setting up for complex test scenarios, you will have to work hard. However, simply deploying the stack in an environment and testing there is incredibly cheap. And you don't need to make an exact local copy of cloud environments.

Use AWS Lambda Layers to reduce deployed package sizes and speed up loading times.

Use the right programming languages for specific tasks. Different languages have their own advantages and disadvantages. There are many benchmarks, but JavaScript, Python, and C# (.NET Core 2.1+) are the leaders in terms of AWS Lambda performance. AWS Lambda recently introduced the Runtime API, which allows you to specify your desired language and runtime environment, so experiment.

Keep deployment package sizes small. The smaller they are, the faster they load. Avoid using large libraries, especially if you only use a couple of features from them. If you're programming in JavaScript, use a build tool like Webpack to optimize your build and include only what you really need. .NET Core 3.0 has QuickJit and tiered compilation, which improve performance and help a lot with cold starts.

The reliance of serverless functions on events can make coordinating business logic difficult at first. Message queues and state machines can be incredibly useful here. Lambda functions can call each other, but only do this if you don't expect a response ("fire and forget"): you don't want to be billed for waiting for another function to complete. Message queues are useful for isolating parts of business logic, managing application bottlenecks, and processing transactions (using FIFO queues). AWS Lambda functions can have SQS dead-letter queues assigned to them, which capture failed messages for later analysis. AWS Step Functions (state machines) are very useful for managing complex processes that require chaining functions together. Instead of one Lambda function calling another, step functions can coordinate state transitions, pass data between functions, and manage their global state. This lets you define retry conditions, or what to do when a particular error occurs: a very powerful tool in certain situations.
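The fire-and-forget invocation above can be sketched with boto3. `InvocationType="Event"` is the real Lambda API flag for asynchronous invocation; the function name and payload here are made up for the example, and the client is created lazily so the helper can be imported and tested without AWS credentials:

```python
import json

def async_invoke_args(function_name, payload):
    # InvocationType="Event" makes the call fire-and-forget: Lambda
    # queues the event and returns immediately, so the caller is not
    # billed while the downstream function runs.
    return {
        "FunctionName": function_name,
        "InvocationType": "Event",
        "Payload": json.dumps(payload).encode("utf-8"),
    }

def fire_and_forget(function_name, payload, client=None):
    if client is None:
        import boto3  # assumed available, as in the Lambda runtime
        client = boto3.client("lambda")
    return client.invoke(**async_invoke_args(function_name, payload))
```

If the caller does need the result, prefer a Step Functions state machine or a queue over a synchronous Lambda-to-Lambda call.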

Conclusion

In recent years, serverless technologies have been developing at an unprecedented pace. There are certain misconceptions associated with this paradigm shift. By abstracting infrastructure and managing scalability, serverless solutions offer significant benefits, from simplified development and DevOps processes to large reductions in operational costs.
Although the serverless approach is not without its drawbacks, there are reliable design patterns that can be used to create robust serverless applications or integrate serverless elements into existing architectures.

Source: habr.com
