Kubernetes Tutorial Part 1: Applications, Microservices, and Containers

At our request, Habr has created a Kubernetes hub, and we are pleased to publish the first article in it. Subscribe!

Kubernetes is easy. So why do banks pay me good money to work in this area, when anyone can master the technology in just a few hours?

If you doubt that Kubernetes can be learned that quickly, I suggest you try it yourself. Specifically, having worked through this material, you will be able to run a microservices-based application in a Kubernetes cluster. I can guarantee this, because I use the very same methodology here that I use to teach our clients to work with Kubernetes. What makes this guide different from others? Quite a lot, actually. Most such materials begin by explaining simple things: Kubernetes concepts and the features of the kubectl command. Their authors assume that the reader is already familiar with application development, microservices, and Docker containers. We'll go the other way. First, we'll talk about how to run a microservices-based application on a computer. Then we'll look at building container images for each microservice. And after that, we'll get acquainted with Kubernetes and walk through deploying a microservices-based application in a cluster managed by Kubernetes.

This gradual path to Kubernetes will give an ordinary person the depth of understanding needed to see how simply everything in Kubernetes is arranged. Kubernetes is certainly a simple technology, provided that whoever wants to master it knows where and how it is used.

Now, without further ado, let's get to work and talk about the application we'll be working with.

Experimental app

Our application performs only one function. It takes a single sentence as input and, using text analysis tools, performs sentiment analysis on it, producing an assessment of the author's emotional attitude toward a certain object.

This is what the main window of this application looks like.

Sentiment Analysis Web Application

From a technical point of view, the application consists of three microservices, each of which solves a certain set of tasks:

  • SA-Frontend is an Nginx web server that serves React static files.
  • SA-WebApp is a web application written in Java that handles requests from the frontend.
  • SA-Logic is a Python application that performs text sentiment analysis.

It is important to note that the microservices do not exist in isolation. They implement the idea of "separation of concerns", but they still need to interact with each other.

Kubernetes Tutorial Part 1: Applications, Microservices, and Containers
Data flows in the application

In the above diagram, you can see the numbered steps of the system, illustrating the data flows in the application. Let's break them down:

  1. The browser requests the file index.html from the server (which in turn loads the React app bundle).
  2. The user interacts with the application; this triggers a call to the Spring-based web application.
  3. The web application forwards the text-analysis request to the Python application.
  4. The Python application analyzes the sentiment of the text and returns the result as a response to the request.
  5. The Spring application sends the response on to the React application (which, in turn, shows the result of the analysis to the user).
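Under the hood, these hops amount to a small JSON contract between the services. Here is a toy Python sketch of that contract (the function names and the fixed polarity value are illustrative stand-ins, not the actual service code):

```python
import json

# Step 2: the React front end posts the user's sentence to the Spring web app.
def frontend_request(sentence):
    return json.dumps({"sentence": sentence})

# Step 3: the Spring web app forwards the same JSON body to the Python analyser.
def webapp_forward(body):
    return body  # the payload passes through unchanged

# Step 4: the analyser replies with the sentence plus a polarity score
# (a placeholder value here; the real service computes it with TextBlob).
def logic_response(body):
    sentence = json.loads(body)["sentence"]
    return json.dumps({"sentence": sentence, "polarity": 0.0})

# Steps 2-5 chained together:
reply = json.loads(logic_response(webapp_forward(frontend_request("I like yogobella!"))))
assert set(reply) == {"sentence", "polarity"}
```

We will see each of these hops for real below, first on the local machine and later inside containers.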

The code for all these applications can be found here. I recommend that you clone this repository right now, as many interesting experiments with it lie ahead of us.

Running an application based on microservices on the local machine

In order for the application to work, we need to start all three microservices. Let's start with the prettiest of them: the front-end application.

▍Setting up React for local development

To run the React application, you need the Node.js runtime and NPM installed on your computer. Once you have installed them, navigate in the terminal to the sa-frontend project folder and run the following command:

npm install

This command downloads the React application's dependencies, listed in the file package.json, into the node_modules folder. Once the dependencies have been downloaded, run the following command in the same folder:

npm start

That's all. The React app is now running and can be accessed at localhost:3000 in the browser. You can change something in its code and immediately see the effect of the change in the browser. This is possible thanks to so-called hot module replacement, which turns front-end development into a simple and enjoyable experience.

▍Preparing a React app for production

To actually use the React app in production, we need to turn it into a set of static files and serve them to clients with a web server.

To build the React app, again using the terminal, navigate to the folder sa-frontend and run the following command:

npm run build

This will create a build directory in the project folder. It contains all the static files required for the React application to work.

▍Serving static files with Nginx

First, install and run the Nginx web server. You can download it here, along with instructions for installing and running it. Then copy the contents of the sa-frontend/build folder to the folder [your_nginx_installation_dir]/html.

With this approach, the index.html file generated during the React build will be available at [your_nginx_installation_dir]/html/index.html. This is the file Nginx serves by default when the server is accessed. The server is configured to listen on port 80, but you can set it up however you want by editing the file [your_nginx_installation_dir]/conf/nginx.conf.
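The relevant fragment of the stock configuration looks roughly like this (a sketch of the default nginx.conf; your copy may differ):

```nginx
server {
    listen       80;          # change this to serve on a different port
    server_name  localhost;

    location / {
        root   html;          # static files are taken from the html folder
        index  index.html;    # the file served by default
    }
}
```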

Now open your browser and go to localhost:80. You will see the React app page.

React app served by Nginx server

If you now enter something in the Type your sentence field and click the Send button, nothing will happen. But if you look at the console, you will see error messages there. To understand exactly where these errors occur, let's analyze the application code.

▍Analysis of the code of the front-end application

Looking at the code in App.js, we can see that clicking the Send button calls the analyzeSentence() method. Its code is shown below. Note that for each line marked with a comment of the form // #N, there is an explanation given below the code. We will parse other code fragments in the same way.

analyzeSentence() {
    fetch('http://localhost:8080/sentiment', {  // #1
        method: 'POST',
        headers: {
            'Content-Type': 'application/json'
        },
        body: JSON.stringify({
                       sentence: this.textField.getValue()})// #2
    })
        .then(response => response.json())
        .then(data => this.setState(data));  // #3
}

1. The URL to which the POST request is made. An application capable of handling such requests is expected to be listening at this address.

2. The request body sent to the application. Here is an example request body:

{
    sentence: "I like yogobella!"
}

3. When a response to the request is received, the state of the component is updated. This causes the component to re-render. If we received data (that is, a JSON object containing the entered sentence and the computed score), we render the Polarity component, provided the conditions are met. Here is how we describe the component:

const polarityComponent = this.state.polarity !== undefined ?
    <Polarity sentence={this.state.sentence} 
              polarity={this.state.polarity}/> :
    null;

The code seems perfectly fine. What is wrong here, then? If you guessed that there is not yet anything listening at the address to which the application tries to send the POST request, you are absolutely right. Namely, to process requests arriving at http://localhost:8080/sentiment, we need to run the Spring-based web application.

We need a Spring application that can accept a POST request

▍Setting up a web application based on Spring

To deploy the Spring application, you need JDK 8 and Maven installed, with properly configured environment variables. Once you have installed them, you can continue working on our project.

▍Packing the application into a jar file

Navigate, using the terminal, to the folder sa-webapp and enter the following command:

mvn install

After this command completes, a target directory will be created in the sa-webapp folder. It contains the Java application packaged in a jar file: sentiment-analysis-web-0.0.1-SNAPSHOT.jar.

▍Launching a Java Application

Go to the target folder and run the application with the following command:

java -jar sentiment-analysis-web-0.0.1-SNAPSHOT.jar

An error will occur while executing this command. To start fixing it, let's examine the exception details in the stack trace:

Error creating bean with name 'sentimentController': Injection of autowired dependencies failed; nested exception is java.lang.IllegalArgumentException: Could not resolve placeholder 'sa.logic.api.url' in value "${sa.logic.api.url}"

The most important part here is the mention that the placeholder sa.logic.api.url could not be resolved. Let's analyze the code where the error occurs.

▍Java application code analysis

Here is the code snippet where the error occurs.

@CrossOrigin(origins = "*")
@RestController
public class SentimentController {
    @Value("${sa.logic.api.url}")    // #1
    private String saLogicApiUrl;
    @PostMapping("/sentiment")
    public SentimentDto sentimentAnalysis(
        @RequestBody SentenceDto sentenceDto) 
    {
        RestTemplate restTemplate = new RestTemplate();
        return restTemplate.postForEntity(
                saLogicApiUrl + "/analyse/sentiment",    // #2
                sentenceDto, SentimentDto.class)
                .getBody();
    }
}

  1. SentimentController has a field saLogicApiUrl. Its value is set by the property sa.logic.api.url.
  2. The string saLogicApiUrl is concatenated with the value /analyse/sentiment. Together they form the address for calling the microservice that performs text analysis.

▍Setting a property value

In Spring, the default source of property values is the application.properties file, which can be found at sa-webapp/src/main/resources. But it is not the only way to set property values. You can also do so with the following command:

java -jar sentiment-analysis-web-0.0.1-SNAPSHOT.jar --sa.logic.api.url=WHAT.IS.THE.SA.LOGIC.API.URL

The value of this property should point to the address of our Python application.

By configuring it, we tell the Spring web application where it needs to go to make text parsing requests.
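As an alternative to the command-line flag, the same setting could live as one line in sa-webapp/src/main/resources/application.properties (a sketch; the placeholder value must be replaced with the Python application's real address):

```properties
# Hypothetical entry in application.properties (the default Spring property source)
sa.logic.api.url=http://<python-app-host>:<port>
```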

To keep things simple, we will decide that the Python application will be available at localhost:5000 and try not to forget that. As a result, the command to start the Spring application looks like this:

java -jar sentiment-analysis-web-0.0.1-SNAPSHOT.jar --sa.logic.api.url=http://localhost:5000

Our system is missing a Python application

Now we just have to run the Python application and the system will work as expected.

▍Setting up a Python application

To run the Python application, you must have Python 3 and Pip installed, and the appropriate environment variables must be set correctly.

▍Install dependencies

Go to the sa-logic/sa project folder and run the following commands:

python -m pip install -r requirements.txt
python -m textblob.download_corpora

▍App launch

With the dependencies installed, we are ready to run the application:

python sentiment_analysis.py

After executing this command, we will be told the following:

* Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)

This means that the application is running and waiting for requests at localhost:5000/

▍Code research

Let's look at the Python application code in order to understand how it responds to requests:

from textblob import TextBlob
from flask import Flask, request, jsonify
app = Flask(__name__)                                   #1
@app.route("/analyse/sentiment", methods=['POST'])      #2
def analyse_sentiment():
    sentence = request.get_json()['sentence']           #3
    polarity = TextBlob(sentence).sentences[0].polarity #4
    return jsonify(                                     #5
        sentence=sentence,
        polarity=polarity
    )
if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)                #6

  1. Initializing the Flask object.
  2. Registering the route that accepts POST requests.
  3. Retrieving the sentence property from the request body.
  4. Initializing an anonymous TextBlob object and getting the polarity of the first sentence in the request body (in our case, the only sentence submitted for analysis).
  5. Returning a response whose body contains the text of the sentence and the polarity value computed for it.
  6. Launching the Flask application, which will be available at 0.0.0.0:5000 (you can also reach it at localhost:5000).
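From the client side, this contract can be exercised with nothing but the Python standard library. Here is a minimal sketch (the helper name make_sentiment_request is made up for illustration; the commented-out lines show how you would send the request to the running service):

```python
import json
from urllib import request

def make_sentiment_request(sentence, base_url="http://localhost:5000"):
    """Build a POST request matching the /analyse/sentiment contract."""
    body = json.dumps({"sentence": sentence}).encode("utf-8")
    return request.Request(
        url=base_url + "/analyse/sentiment",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = make_sentiment_request("I like yogobella!")
assert json.loads(req.data) == {"sentence": "I like yogobella!"}
# With the Flask app above running, the call itself would be:
# with request.urlopen(req) as resp:
#     print(json.load(resp))  # a JSON object with "sentence" and "polarity"
```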

Now the microservices that make up the application are running and configured to interact with each other. Here is what the application diagram looks like at this stage.

All microservices that make up the application are brought to a healthy state

Now, before we continue, open the React app in a browser and try to analyze some sentence with it. If everything was done correctly, after clicking the Send button you will see the analysis results below the text box.

In the next section, we'll talk about how to run our microservices in Docker containers. This is necessary in order to prepare the application to run in the Kubernetes cluster.

Docker containers

Kubernetes is a system for automating the deployment, scaling, and management of containerized applications. It is also called a "container orchestrator". If Kubernetes works with containers, then before using this system we first need to obtain those containers. But first, let's talk about what containers are. Perhaps the best answer to that question can be found in the Docker documentation:

A container image is a lightweight, standalone, executable package that includes everything needed to run an application: code, runtime, system tools, libraries, and settings. Containerized software can be used in both Linux and Windows environments and will always run the same regardless of the infrastructure.

This means that containers can be run on any computer, including production servers, and in any environment, the applications enclosed in them will work the same way.

To explore the features of containers and compare them to other ways to run applications, let's look at the example of serving a React application using a virtual machine and a container.

▍Serving static files of a React application using a virtual machine

If we try to serve static files using virtual machines, we run into the following disadvantages:

  1. Inefficient use of resources, since each virtual machine is a complete operating system.
  2. Platform dependency. What works on some local computer may well not work on a production server.
  3. Slow and resource intensive scaling of a virtual machine solution.

Nginx web server serving static files running in a virtual machine

If containers are used to solve a similar problem, then, in comparison with virtual machines, the following strengths can be noted:

  1. Efficient use of resources: containers share the host operating system, managed with Docker.
  2. Platform independence. A container that a developer can run on their own computer will run anywhere.
  3. Lightweight deployment through the use of image layers.

Nginx web server serving static files running in a container

We've only compared virtual machines and containers on a few points, but even that is enough to get a feel for the strengths of containers. You can find details about Docker containers here.

▍Building a container image for a React app

The starting point for building a Docker container image is the Dockerfile. At the beginning of this file the base image is specified, followed by a sequence of instructions indicating how to create an image that will meet the needs of the application.

Before we start working with the Dockerfile, recall what we did to prepare the React application's files for serving via Nginx:

  1. Building a React app package (npm run build).
  2. Starting the Nginx server.
  3. Copying the contents of a directory build from project folder sa-frontend to the server folder nginx/html.

Below you can see the parallels between creating a container and the above actions performed on the local computer.

▍Preparing a Dockerfile for the SA-Frontend Application

The instructions in the Dockerfile for the SA-Frontend application consist of only two commands. The point is that the Nginx team has prepared a base image for Nginx, which we will use to build our own. Here are the two steps we need to describe:

  1. Use the Nginx image as the base of our image.
  2. Copy the contents of the sa-frontend/build folder into the image's nginx/html folder.

If we translate this description into a Dockerfile, it looks like this:

FROM nginx
COPY build /usr/share/nginx/html

As you can see, everything here is very simple, and the contents of the file are quite readable and understandable. This file tells the system to take the nginx image with everything it already contains and copy the contents of the build directory into the directory nginx/html.

Here you may wonder how I know exactly where to copy the files from the build folder, i.e. where the path /usr/share/nginx/html came from. There is nothing complicated here either: the relevant information can be found in the image description.

▍Assembling the image and uploading it to the repository

Before others can work with the finished image, we need to push it to an image registry. For this we will use the free cloud-based image hosting platform Docker Hub. At this stage, you need to do the following:

  1. Install Docker.
  2. Register on the Docker Hub site.
  3. Log in to your account by running the following command in the terminal:
    docker login -u="$DOCKER_USERNAME" -p="$DOCKER_PASSWORD"

Now, in the terminal, go to the sa-frontend directory and run the following command there:

docker build -f Dockerfile -t $DOCKER_USER_ID/sentiment-analysis-frontend .

Here and below in similar commands $DOCKER_USER_ID should be replaced with your username on Docker Hub. For example, this part of the command might look like this: rinormaloku/sentiment-analysis-frontend.

Incidentally, this command can be shortened by omitting -f Dockerfile, since the folder in which we run it already contains that file.

In order to send the finished image to the repository, we need the following command:

docker push $DOCKER_USER_ID/sentiment-analysis-frontend

After completing it, check your list of repositories on Docker Hub to see if the image was successfully pushed to the cloud storage.

▍Starting a container

Now anyone can download and run the image known as $DOCKER_USER_ID/sentiment-analysis-frontend. In order to do this, you need to run the following sequence of commands:

docker pull $DOCKER_USER_ID/sentiment-analysis-frontend
docker run -d -p 80:80 $DOCKER_USER_ID/sentiment-analysis-frontend

Now the container is running, and we can continue working by creating the other images we need. But before we continue, let's make sense of the 80:80 construct, which appears in the command for running the image and may seem confusing.

  • The first number 80 is the port number on the host (that is, the local computer).
  • The second number 80 is the port of the container to which the request should be forwarded.

Consider the following illustration.

Port Forwarding

The system forwards requests from port <hostPort> to port <containerPort>. That is, a request to port 80 of the computer is redirected to port 80 of the container.

Since port 80 is open on the local computer, you can access the application from this computer at localhost:80. If your system does not natively support Docker, you can run the application on a Docker virtual machine, whose address will look like <docker-machine ip>:80. To find out the IP address of the Docker virtual machine, use the command docker-machine ip.

At this point, once the front-end app container has successfully launched, you should be able to open its page in a browser.

▍.dockerignore file

While building the SA-Frontend application image, you may have noticed that the process is extremely slow. This is because the image build context must be sent to the Docker daemon. The directory that represents the build context is given as the last argument of the docker build command; in our case, it is the dot at the end of the command. This results in the following structure being included in the build context:

sa-frontend:
|   .dockerignore
|   Dockerfile
|   package.json
|   README.md
+---build
+---node_modules
+---public
---src

But of all the folders present here, we only need the build folder. Uploading anything else is a waste of time. We can speed up the build by telling Docker which directories to ignore. That is what the .dockerignore file is for. If you are familiar with .gitignore, the structure of this file will probably look familiar: it lists the directories that the image build system can ignore. In our case, its contents look like this:

node_modules
src
public

The .dockerignore file must be in the same folder as the Dockerfile. Now the image build will take just a few seconds.

Let's now deal with the image for a Java application.

▍Building a container image for a Java application

By now you have already learned everything you need to create container images. That is why this section will be very short.

Open the Dockerfile located in the sa-webapp project folder. Reading through it, you will meet only two new constructions, which begin with the keywords ENV and EXPOSE:

ENV SA_LOGIC_API_URL http://localhost:5000
…
EXPOSE 8080

The ENV keyword declares environment variables inside Docker containers. In our case, it sets the URL for accessing the API of the application that performs text analysis.

The EXPOSE keyword tells Docker which port the application uses; we are going to use this port while working with the application. Note that the Dockerfile for the SA-Frontend application has no such command. EXPOSE is for documentation purposes only; in other words, this construct is there for the reader of the Dockerfile.

Building the image and pushing it to the repository looks exactly like the previous example. If you are not yet fully confident, the corresponding commands can be found in the README.md file in the sa-webapp folder.

▍Building a container image for a Python application

If you take a look at the contents of the Dockerfile in the sa-logic folder, you won't find anything new there. The commands for building the image and pushing it to the repository should also be familiar by now, but, as with our other applications, they can be found in the README.md file in the sa-logic folder.

▍Testing containerized applications

Can you trust something you haven't tested? Neither can I. Let's test our containers.

  1. Start the sa-logic application container and map it to port 5050:
    docker run -d -p 5050:5000 $DOCKER_USER_ID/sentiment-analysis-logic
  2. Start the sa-webapp application container and map it to port 8080. In addition, we need to tell the Java application where to reach the Python application by overriding the SA_LOGIC_API_URL environment variable:
    $ docker run -d -p 8080:8080 -e SA_LOGIC_API_URL='http://<container_ip or docker machine ip>:5000' $DOCKER_USER_ID/sentiment-analysis-web-app

To learn how to find out the IP address of a container or Docker VM, refer to the file README.

Start the sa-frontend application container:

docker run -d -p 80:80 $DOCKER_USER_ID/sentiment-analysis-frontend

Now everything is ready to navigate in the browser to the address localhost:80 and test the app.

Please note that if you change the port for sa-webapp, or if you are running a Docker VM, you will need to edit the App.js file in the sa-frontend folder, changing the IP address or port number in the analyzeSentence() method to the current values. After that, you need to rebuild the image and use the new one.

This is what our application diagram looks like now.

Microservices run in containers

Summary: why do we need a Kubernetes cluster?

We have just reviewed Dockerfiles, talked about how to build images and push them to a Docker registry, and learned how to speed up image builds with the .dockerignore file. As a result, our microservices are now running in Docker containers. Here you may have a perfectly justified question: why do we need Kubernetes at all? The second part of this series will be devoted to answering it. In the meantime, consider the following question:
Let's assume that our text analysis web application has become popular worldwide. Millions of requests reach it every minute. This means that the sa-webapp and sa-logic microservices will be under enormous load. How do we scale the containers that run these microservices?

Source: habr.com
