Application development and Blue-Green deployment based on The Twelve-Factor App methodology with php and docker examples

Let's start with a little theory. What is The Twelve-Factor App?

Put simply, this document is meant to simplify the development of SaaS applications by making developers and DevOps engineers aware of the problems and practices most often encountered when building modern applications.

The document was created by the developers of the Heroku platform.

The Twelve-Factor App methodology can be applied to applications written in any programming language and using any combination of third-party backing services (databases, message queues, caches, etc.).

Briefly about the factors themselves on which this methodology is based:

  1. Codebase – One source-tracked codebase – multiple deployments
  2. Dependencies – Explicitly declare and isolate dependencies
  3. Configuration – Store configuration in the environment
  4. Third Party Services (Backing Services) – Treat backing services as pluggable resources
  5. Build, release, run – Strictly separate build and run stages
  6. Processes – Run the application as one or more stateless processes
  7. Port binding – Export services via port binding
  8. Parallelism – Scale your app with processes
  9. Disposability – Maximize reliability with fast startup and graceful shutdown
  10. Application Development/Operation Parity – Keep development, staging, and production environments as similar as possible
  11. Logging – Treat the log as a stream of events
  12. Administration tasks – Perform administration/management tasks as one-off processes

For more information on the 12 factors, refer to the original source.

What is Blue-Green deployment?

Blue-Green deployment is a way of delivering an application to production such that the end client does not notice any change on their side. In other words, it is deploying an application with zero downtime.

The classic BG Deploy scheme, with a router (balancer) in front of two identical servers, works as follows.

  • At the start there are 2 physical servers with exactly the same code, application, and project, plus a router (balancer).
  • The router initially directs all requests to one of the servers (green).
  • When it is time to make a release, the entire project is updated on the other server (blue), which is not currently handling any requests.
  • Once the code on the blue server is fully updated, the router is told to switch from the green server to the blue one.
  • Now all clients see the result of the code from the blue server.
  • For some time, the green server serves as a backup in case of an unsuccessful deployment to blue: if failures or bugs appear, the router switches the user flow back to the green server with the old stable version, and the new code goes back for rework and testing.
  • At the end of the process, the green server is updated in the same way, and after it is updated the router switches the request flow back to it.

At first glance it all looks fine and there should be no problems with it.
But since we live in the modern world, physically switching servers, as the classical scheme suggests, does not suit us. Note this point for now; we will return to it later.
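In practice the "router switch" often boils down to editing one line in the balancer's upstream config and reloading it. A minimal sketch of the idea, assuming nginx as the router (the host names and file name here are illustrative, not part of the classical scheme):

```shell
# Start with all traffic pointed at the green server.
echo 'server green.internal:80;' > upstream.conf

# Release time: point the balancer at the freshly updated blue server.
sed -i 's/green/blue/' upstream.conf
cat upstream.conf            # server blue.internal:80;

# Rollback on failure is the same one-line change in reverse.
sed -i 's/blue/green/' upstream.conf
cat upstream.conf            # server green.internal:80;

# With nginx as the router, the switch takes effect via a config reload:
# nginx -s reload
```

The reload, rather than a restart, is what keeps in-flight requests from being dropped during the switch.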

Bad and good advice

Disclaimer: the examples below show the utilities and methodologies that I use; you can use absolutely any alternatives with similar functionality.

Most of the examples will in one way or another involve web development (what a surprise), PHP, and Docker.

The sections below give a simple practical description of applying the factors to specific examples; if you want more theory on the topic, refer to the original source.

1. Codebase

Use FTP and FileZilla to upload files to the servers one at a time; don't store the code anywhere except on the production server.

A project should always have a single codebase: all code comes from one Git repository. The servers (production, staging, test1, test2 ...) use code from branches of that one shared repository. This is how we achieve code consistency.

2. Dependencies

Download all libraries into folders right in the project root. Update them by simply copying the new code over the folder with the current version of the library. Install all the necessary utilities directly on the host server, where another 20 services are already running.

The project should always have a clearly understandable list of dependencies (by dependencies I also mean the environment). All dependencies must be explicitly declared and isolated.
As examples, let's take Composer and Docker.

Composer is a package manager that lets you install PHP libraries. Composer gives you the ability to pin versions strictly or loosely, and to declare them explicitly. There can be 20 different projects on one server, and each will have its own list of packages and libraries, independent of the others.

Docker is a utility that lets you define and isolate the environment the application runs in. Just as with Composer, but more thoroughly, we can determine exactly what the application runs on: choose a specific version of PHP, install only the packages necessary for the project to work, and add nothing extra. Most importantly, this happens without interfering with the packages and environment of the host machine and of other projects. That is, all projects on a server running under Docker can use absolutely any set of packages and completely different environments.
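As a sketch of that isolation, a project's Dockerfile can pin the exact runtime it needs (the PHP version and extensions below are illustrative, not requirements of any particular project):

```dockerfile
# Pin a specific PHP version; other projects on the same host are unaffected.
FROM php:8.2-fpm

# Install only the extensions this project actually needs, nothing extra.
RUN docker-php-ext-install pdo_mysql opcache
```

Two projects with incompatible PHP versions can then live side by side on the same host without conflict.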

3. Configuration

Store configs as constants right in the code: separate constants for the test server, separate ones for production. Tie the application's behavior to the environment right inside the project's business logic using if/else constructs.

Configuration is the only thing that should differ between deployments of the project. Ideally, configuration should be passed through environment variables (env vars).

That is, even if you store several configuration files (.config.prod, .config.local) and rename them at deploy time to .config (the main config the application reads from), this is still not the right approach: the information in those configs is then available to every developer of the application, and data from the production server is compromised. All configuration must be stored in the deployment system itself (CI/CD) and generated for each environment, with the values that particular environment needs, at deploy time.
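A minimal shell sketch of the idea: the same code reads its settings from the environment, and only the values differ per deployment (DB_HOST here is an illustrative variable name):

```shell
# The deployment system (CI/CD) exports the values for this environment...
export DB_HOST=db.prod.internal

# ...and the application just reads them, with a sane local default.
echo "connecting to ${DB_HOST:-127.0.0.1}"    # connecting to db.prod.internal

# On a developer machine where nothing is exported, the default kicks in.
unset DB_HOST
echo "connecting to ${DB_HOST:-127.0.0.1}"    # connecting to 127.0.0.1
```

No secret ever has to be committed to the repository; it exists only in the CI/CD system and in the running process's environment.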

4. Third party services (Backing Services)

Hard-wire everything to the environment; connect to the same services differently in different environments.

In fact, this point overlaps heavily with the one about configuration: without it, normal configuration data cannot be produced, and the ability to configure at all disappears.

All connections to external services such as queue servers, databases, and caching services must work the same way in both the local environment and the third-party/production environment. In other words, at any moment I can change the connection string to replace calls to database #1 with database #2 without changing the application code. Or, looking ahead, when scaling the service you won't have to specify the connection in some special way for an additional cache server.
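As a sketch, swapping database #1 for database #2 then amounts to changing one environment variable (DATABASE_URL and the host names below are illustrative):

```shell
# The application always connects through one opaque connection string.
export DATABASE_URL="pgsql://app@db1.internal:5432/app"
echo "using $DATABASE_URL"

# Replacing the backing service touches only the config, never the code.
export DATABASE_URL="pgsql://app@db2.internal:5432/app"
echo "using $DATABASE_URL"
```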

5. Build, release, run

Keep only the final version of the code on the server, with no chance of rolling back a release; there is no need to fill up disk space. Anyone who thinks they might push code with an error to production is simply a bad programmer!

All deployment stages should be separated from each other.

Have a way to roll back. Make releases so that old copies of the application (already built and production-ready) remain quickly accessible, in order to restore the old version in case of errors. Conditionally, there is a Releases folder and a current folder; after a successful build and deployment, current becomes a symbolic link to the new release inside Releases, named by release number.
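A sketch of that layout in the shell (the release names are illustrative):

```shell
# Two already-built releases sit side by side.
mkdir -p releases/101 releases/102

# Deploy: atomically repoint "current" at the new release.
ln -sfn releases/102 current
readlink current    # releases/102

# Rollback: repoint the same symlink at the previous, still-intact release.
ln -sfn releases/101 current
readlink current    # releases/101
```

Because the web server only ever serves from current, switching the symlink is effectively instantaneous for clients.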

This is where Blue-Green deployment comes back in: it lets you switch not only between versions of the code, but between all resources and even whole environments, with the ability to roll everything back.

6. Processes

Store application state data directly in the application itself. Keep sessions in the application's own RAM. Share as little as possible through third-party services. Assume the application can only ever have one process, and do not allow scaling.

As for sessions, store their data only in a cache managed by third-party services (memcached, redis). Then even if you have 20 application processes running, any of them, by querying the cache, can continue working with the client in the same state the user had while working with the application in another process. With this approach, no matter how many instances you run, everything works properly, with no problems accessing the data.
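A sketch of what that looks like for PHP sessions, assuming the phpredis extension is installed and a service named redis is reachable (both are assumptions for illustration, not part of the setup above):

```ini
; php.ini: session state lives in the shared Redis, not in process memory,
; so any application process can pick up any user's session.
session.save_handler = redis
session.save_path = "tcp://redis:6379"
```

The application code calling session_start() does not change at all; only the configuration decides where state actually lives.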

7. Port binding

Only the web server should know how to talk to third-party services. Better yet, run third-party services right inside the web server, for example as a PHP module in Apache.

All your services must be accessible to each other through a call to some address and port (localhost:5432, localhost:3000, nginx:80, php-fpm:9000). That is, from nginx I can reach both php-fpm and postgres, from php-fpm I can reach postgres and nginx, and from each service I can reach any other service. This way, the health of one service is not tied to the health of another.

8. Parallelism

Work with a single process, and then suddenly discover that several processes cannot get along with each other!

Leave yourself the option to scale. Docker Swarm is great for this.
Docker Swarm is a tool for creating and managing clusters of containers, both across different machines and as a group of containers on a single machine.

Using swarm, I can decide how many resources to allocate to each process and how many processes of a given service to run, and the internal balancer, receiving requests on a given port, automatically proxies them to the processes. So, seeing that the load on the server has grown, I can add more processes, thereby reducing the load on each individual process.

9. Disposability

Do not use queues for working with processes and data. Killing one process should affect the operation of the entire application: if one service goes down, everything goes down.

Each process and service must be able to be switched off at any moment without affecting other services (of course, this is not to say the service won't be unreachable for another service, but that no other service will shut down because of it). All processes should terminate gracefully, so that when they stop, no data is affected and the system works correctly the next time it starts. Even in the event of an emergency stop, data must not be harmed (the transaction mechanism fits here: database queries run only in groups, and if at least one query in the group fails or errors, then in the end none of the queries in the group actually takes effect).
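A minimal shell sketch of a disposable worker: it traps SIGTERM, finishes its cleanup, and exits cleanly instead of dying mid-write (the cleanup step is illustrative):

```shell
#!/usr/bin/env bash
# Graceful shutdown: on SIGTERM, flush state first, then exit with success.
cleanup() {
  echo "flushing in-flight work"   # e.g. commit or roll back open transactions
  exit 0
}
trap cleanup TERM INT

echo "worker started (pid $$)"
# Sleep in the background and wait on it, so the trap can fire immediately.
sleep 600 &
wait $!
```

Docker sends exactly this SIGTERM on docker stop and gives the process a grace period before killing it, so a worker written like this survives routine restarts without losing data.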

10. Application Development/Operation Parity

Production, staging, and the local version of the application must all be different. In production we have the Yii Lite framework, and locally plain Yii, so that it runs faster in production!

In reality, all deployments and work with the code should happen in almost identical environments (this is not about physical hardware). Also, any development employee should be able to deploy the code to production if needed, not only some specially trained DevOps department that alone, through special powers, can bring the application up in production.

Docker helps us here too. If all the previous points are observed, using Docker reduces the process of deploying the environment, both in production and on the local machine, to entering one or two commands.

11. Logging (Logs)

Write the logs to files and to the database! Never clean the logs out of the files or the database. Let's just buy a hard drive with 9000 petabytes and we'll be fine.

All logs should be treated as a stream of events. The application itself should not handle log processing. Logs should be written either to stdout or sent over a protocol such as UDP, so that log handling creates no problems for the application. Graylog works well for this: accepting all logs over UDP (with this protocol, no acknowledgement of packet receipt is required), Graylog does not interfere with the application in any way and only structures and processes the logs. The application logic does not need to change to work with these approaches.
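A tiny sketch of "the log as an event stream": the application only prints structured events to stdout and leaves routing to the platform (the log format and the my-app image name are made up for illustration):

```shell
# The app's whole logging responsibility: one event per line on stdout.
log() { printf 'level=%s msg="%s"\n' "$1" "$2"; }

log info "user signed in"       # level=info msg="user signed in"
log error "payment failed"      # level=error msg="payment failed"

# Routing is the platform's job, e.g. Docker's gelf driver shipping to Graylog:
# docker run --log-driver=gelf --log-opt gelf-address=udp://graylog:12201 my-app
```

Swapping Graylog for any other log aggregator is then a change to the run command, not to the application.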

12. Administration tasks

To update data, the database, and so on, use a separately created API endpoint whose execution twice in a row will duplicate everything for you. But you're not fools, you won't click twice, and we don't need migrations.

All administration tasks must be performed in the same environment as all the code, at the release level. That is, if we need to change the structure of the database, we will not do it by hand, renaming columns and adding new ones through some visual database management tool. For such things we create separate scripts, migrations, which run everywhere and in every environment with the same common, predictable result. The same methodology should apply to other tasks, such as seeding the project with data.

Implementation example in PHP, Laravel, Laradock, Docker-Compose

P.S. All examples were made on macOS. Most will also work on Linux. Sorry, Windows users, but I haven't worked with Windows in a long time.

Imagine a situation where we don't have any version of PHP installed on our machine, nothing at all.
Install the latest versions of docker and docker-compose (instructions are easy to find online).

docker -v && 
docker-compose -v

1. Install Laradock

git clone https://github.com/Laradock/laradock.git && 
ls

Regarding Laradock: it is a very cool thing that bundles a lot of containers and auxiliary tools. But I would not recommend using Laradock as-is, without modification, in production, because of its redundancy: no one needs everything in it at the same time. It is better to build your own containers based on the examples in Laradock; that way things will be much better optimized.

2. Configure Laradock for our application.

cd laradock && 
cp env-example .env

2.1. Open the habr directory (the parent folder into which laradock was cloned) in any editor (in my case, PhpStorm).

At this stage, we put only the name of the project.

2.2. Start the workspace image (in your case, the images will take some time to build).
Workspace is a specially prepared image for working with the framework on behalf of the developer.

Go inside the container with:

docker-compose up -d workspace && 
docker-compose exec workspace bash

2.3. Installing Laravel

composer create-project --prefer-dist laravel/laravel application

2.4. After installation, check that the project directory has been created, then bring compose down.

ls
exit
docker-compose down

2.5. Go back to PhpStorm and set the correct path to our laravel application in the .env file.

3. Add all the code to Git.

To do this, create a repository on GitHub (or anywhere else), go to the habr directory in the terminal, and run the following.

echo "# habr-12factor" >> README.md
git init
git add README.md
git commit -m "first commit"
git remote add origin git@github.com:nzulfigarov/habr-12factor.git # your repository URL goes here
git push -u origin master
git status

We check if everything is in order.

For convenience, I recommend using some kind of visual interface for Git; in my case that is GitKraken.

4. Launch!

Before starting, make sure that you have nothing hanging on ports 80 and 443.

docker-compose up -d nginx php-fpm

Thus, our project consists of 3 separate services:

  • nginx - web server
  • php-fpm - php for receiving requests from a web server
  • workspace - php for developer

At this point, we have created an application that satisfies 4 of the 12 factors, namely:

1. Codebase - all the code is in one repository (a small note: it might be more correct to move the docker setup inside the laravel project, but this is not important).

2. Dependencies - all of our dependencies are explicitly declared in application/composer.json and in the Dockerfile of each container.

3. Third Party Services (Backing Services) - each of the services (php-fpm, nginx, workspace) lives its own life, is connected from outside, and working with one service does not affect another.

4. Processes - each service is a single process, and no service stores internal state.

5. Port binding

docker ps

As we can see, each service is running on its own port and is available to all other services.

6. Parallelism

Docker allows us to spawn multiple processes of the same service with automatic load balancing between them.

Stop the containers and start them with the --scale flag:

docker-compose down && 
docker-compose up -d --scale php-fpm=3 nginx php-fpm

As we can see, the php-fpm container now has copies. We don't need to change anything in how we work with this container: we still reach it on port 9000, and Docker balances the load between the containers for us.

7. Disposability - each container can be killed without harming the others. Stopping or restarting a container will not affect the operation of the application on subsequent launches, and any container can be brought back up at any time.

8. Application Development/Operation Parity - all of our environments are the same. When you run the system on the production server, you don't have to change anything in your commands; everything runs on Docker in exactly the same way.

9. Logging - all logs in these containers go to the stream and are visible in the Docker console (that is true out of the box here; with other home-made containers it may not be, unless you take care of it).

 docker-compose logs -f

But there is a catch: the default configurations of PHP and Nginx also write logs to files. To satisfy the 12 factors, you need to disable writing logs to files in the configuration of each container separately.

Docker also provides the ability to send logs not just to stdout but also to tools like Graylog, which I mentioned above. Inside Graylog we can work with the logs however we like, and our application won't notice any of it.

10. Administration tasks - all administration tasks are handled by laravel's artisan tool, exactly the way the creators of the 12-factor methodology would want.

As an example, I will show how some commands are executed.
We go into the container.

 
docker-compose exec workspace bash
php artisan list

Now we can use any command. (Note that we haven't set up the database and cache, so about half of the commands won't run correctly, since they are designed to work with the cache and database.)

11. Configurations and 12. Build, release, run

I wanted to dedicate this part to Blue-Green Deployment, but it turned out to be too big for this article, so I will write a separate article about it.

In a nutshell, the concept relies on CI/CD systems like Jenkins and GitLab CI. In both, you can define environment variables tied to a specific environment; this takes care of the Configuration factor.

The Build, release, run factor is solved by a built-in feature of both tools called the Pipeline.

A Pipeline lets you split the deployment process into many stages, separating out build, release, and run. In a Pipeline you can also create backups, and really anything else; the tool has nearly limitless potential.

The application code is on Github.
Don't forget to initialize the submodule when cloning this repository.

P.S. All these approaches can be used with any other utilities and programming languages. The main thing is that the essence stays the same.

Source: habr.com
