Preparing the Application for Istio

Istio is a handy tool for connecting, securing, and monitoring distributed applications. Istio relies on a range of technologies for running and managing software at scale: containers, to package application code and dependencies for deployment, and Kubernetes, to manage those containers. So to work with Istio, you first need to understand how an application with multiple services built on these technologies works without Istio. If these tools and concepts are already familiar to you, feel free to skip this guide and go straight to Installing Istio on Google Kubernetes Engine (GKE) or Installing the Istio on GKE add-on.

This is a step-by-step guide that walks through the whole process, from source code to a container running on GKE, to give you a basic understanding of these technologies through an example. You will also see how Istio builds on the power of these technologies. The guide assumes you know nothing about containers, Kubernetes, service meshes, or Istio.

Tasks

In this guide, you will complete the following tasks:

  1. Explore a simple hello world application with multiple services.
  2. Run the application from source code.
  3. Package the application into containers.
  4. Create a Kubernetes cluster.
  5. Deploy the containers to the cluster.

Before you start

Follow the instructions to enable the Kubernetes Engine API:

  1. Open the Kubernetes Engine page in the Google Cloud Platform console.
  2. Create or select a project.
  3. Wait for the API and related services to be enabled. This may take several minutes.
  4. Make sure billing is set up for your Google Cloud Platform project. Learn how to enable billing.

In this tutorial, you can use either Cloud Shell, which provisions a g1-small Google Compute Engine virtual machine running Debian-based Linux, or a Linux or macOS computer.

Option A: Use Cloud Shell

Benefits of using Cloud Shell:

  • Python 2 and Python 3 development environments (including virtualenv) are fully configured.
  • The gcloud, docker, git, and kubectl command-line tools, which we will use, are already installed.
  • You have several text editors to choose from:
    1. A code editor, which you open with the edit icon at the top of the Cloud Shell window.
    2. Emacs, Vim, or Nano, which you open from the command line in Cloud Shell.

To use Cloud Shell:

  1. Go to the GCP console.
  2. Click the Activate Cloud Shell button at the top of the GCP console window.


A Cloud Shell session with a command line opens at the bottom of the GCP console window.


Option B: Using Command Line Tools Locally

If you will be working on a Linux or macOS computer, you need to set up and install the following components:

  1. Set up Python 3 and Python 2 development environments.

  2. Install the Cloud SDK, which includes the gcloud command-line tool.

  3. Install kubectl, the command-line tool for working with Kubernetes:

    gcloud components install kubectl

  4. Install Docker Community Edition (CE). You will use the docker command-line tool to build container images for the sample app.

  5. Install the Git version control tool to get the sample app from GitHub.

Download sample code

  1. Download the helloserver source code:

    git clone https://github.com/GoogleCloudPlatform/istio-samples

  2. Change to the sample code directory:

    cd istio-samples/sample-apps/helloserver

Exploring an application with multiple services

The sample application is written in Python and consists of two components that interact using REST:

  • server: a simple server with one endpoint, GET /, which responds with "hello world".
  • loadgen: a script that sends traffic to server, with a configurable number of requests per second. Both components are sketched below.

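To make the tour concrete, here is a minimal sketch of what the two components might look like. This is an illustration based on the descriptions above, not the actual code from the repository, so names and details may differ:

# Hypothetical sketch of server: one GET / endpoint listening on port 8080.
from http.server import BaseHTTPRequestHandler, HTTPServer
import logging

class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Log the request details, similar to the console output shown later.
        logging.info("GET request,\nPath: %s\nHeaders:\n%s", self.path, self.headers)
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"Hello World!\n")

if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)
    logging.info("Starting server...")
    HTTPServer(("0.0.0.0", 8080), HelloHandler).serve_forever()

And a sketch of the load generator, which reads the SERVER_ADDR and REQUESTS_PER_SECOND environment variables that you will set below:

# Hypothetical sketch of loadgen: send a fixed number of requests per second.
import os
import time

import requests  # matches the python-requests User-Agent in the server logs

addr = os.environ["SERVER_ADDR"]
rps = int(os.environ["REQUESTS_PER_SECOND"])

while True:
    for _ in range(rps):
        requests.get(addr)
    print(f"{rps} request(s) complete to {addr}")
    time.sleep(1)  # naive pacing; the real script may spread requests more evenly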

Running the application from source code

To explore the sample application, run it in Cloud Shell or on your computer.
1) From the istio-samples/sample-apps/helloserver directory, run server:

python3 server/server.py

When server starts, it prints the following:

INFO:root:Starting server...

2) Open another terminal window to send requests to server. If you're using Cloud Shell, click the add icon to open another session.
3) Send a request to server:

curl http://localhost:8080

server replies:

Hello World!

4) From the directory where you downloaded the sample code, navigate to the directory that contains loadgen:

cd YOUR_WORKING_DIRECTORY/istio-samples/sample-apps/helloserver/loadgen

5) Set the following environment variables:

export SERVER_ADDR=http://localhost:8080
export REQUESTS_PER_SECOND=5

6) Create a virtual environment with virtualenv:

virtualenv --python python3 env

7) Activate the virtual environment:

source env/bin/activate

8) Install the dependencies for loadgen:

pip3 install -r requirements.txt

9) Run loadgen:

python3 loadgen.py

When it runs, loadgen outputs something like the following:

Starting loadgen: 2019-05-20 10:44:12.448415
5 request(s) complete to http://localhost:8080

In the other terminal window, server writes messages like these to the console:

127.0.0.1 - - [21/Jun/2019 14:22:01] "GET / HTTP/1.1" 200 -
INFO:root:GET request,
Path: /
Headers:
Host: localhost:8080
User-Agent: python-requests/2.22.0
Accept-Encoding: gzip, deflate
Accept: */*

From a networking point of view, the entire application runs on a single host (your local machine or the Cloud Shell VM), so you can use localhost to send requests to server.
10) To stop loadgen and server, press Ctrl-C in each terminal window.
11) In the loadgen terminal window, deactivate the virtual environment:

deactivate

Packaging the application in containers

To run the application on GKE, you need to package the sample application components, server and loadgen, into containers. A container is a way to package an application so that it is isolated from its environment.

To package an application into a container, you need a Dockerfile: a text file that defines the commands for building the application's source code and its dependencies into a Docker image. Once built, you push the image to a container registry such as Docker Hub or Container Registry.

The example already includes Dockerfiles for server and loadgen with all the commands needed to build the images. Below is the Dockerfile for server:

FROM python:3-slim as base
FROM base as builder
RUN apt-get -qq update \
    && apt-get install -y --no-install-recommends \
        g++ \
    && rm -rf /var/lib/apt/lists/*

# Enable unbuffered logging
FROM base as final
ENV PYTHONUNBUFFERED=1

RUN apt-get -qq update \
    && apt-get install -y --no-install-recommends \
        wget

WORKDIR /helloserver

# Grab packages from builder
COPY --from=builder /usr/local/lib/python3.7/ /usr/local/lib/python3.7/

# Add the application
COPY . .

EXPOSE 8080
ENTRYPOINT [ "python", "server.py" ]

  • The FROM python:3-slim as base command tells Docker to use the latest slim Python 3 image as the base.
  • The COPY . . command copies the source files in the current working directory (in our case, only server.py) into the container's file system.
  • ENTRYPOINT defines the command used to start the container. Here it is almost the same command you used to run server.py from source.
  • The EXPOSE command indicates that server listens on port 8080. This command does not publish the port; it is documentation stating that port 8080 needs to be opened when starting the container (for example, with docker run -p 8080:8080).

Preparing to containerize your application

1) Set the following environment variables. Replace PROJECT_ID with your GCP project ID.

export PROJECT_ID="PROJECT_ID"

export GCR_REPO="preparing-istio"

You will use the PROJECT_ID and GCR_REPO values to tag the Docker image when you build it and push it to your private Container Registry.

2) Set the default GCP project for the gcloud command-line tool:

gcloud config set project $PROJECT_ID

3) Set the default zone for the gcloud command-line tool:

gcloud config set compute/zone us-central1-b

4) Make sure the Container Registry service is enabled in the GCP project:

gcloud services enable containerregistry.googleapis.com

Containerizing server

  1. Change to the directory that contains the server example:

    cd YOUR_WORKING_DIRECTORY/istio-samples/sample-apps/helloserver/server/

  2. Build the image with Dockerfile and the environment variables you defined earlier:

    docker build -t gcr.io/$PROJECT_ID/$GCR_REPO/helloserver:v0.0.1 .

The -t flag specifies the Docker tag: the name of the image you will use when deploying the container.

  3. Push the image to Container Registry:
    docker push gcr.io/$PROJECT_ID/$GCR_REPO/helloserver:v0.0.1

Containerizing loadgen

1) Change to the directory that contains the loadgen example:

cd ../loadgen

2) Build the image:

docker build -t gcr.io/$PROJECT_ID/$GCR_REPO/loadgen:v0.0.1 .

3) Push the image to Container Registry:

docker push gcr.io/$PROJECT_ID/$GCR_REPO/loadgen:v0.0.1

Viewing a List of Images

View the list of images in the repository and verify that the images have been pushed:

gcloud container images list --repository gcr.io/$PROJECT_ID/preparing-istio

The command prints the names of the images you just pushed:

NAME
gcr.io/PROJECT_ID/preparing-istio/helloserver
gcr.io/PROJECT_ID/preparing-istio/loadgen

Creating a GKE cluster

These containers could be run on a Cloud Shell virtual machine or on your computer with the docker run command. But in a production environment, you need a way to orchestrate containers centrally. For example, you need a system that keeps containers running at all times, and a way to scale out and launch additional container instances as traffic grows.

To run containerized applications, you can use GKE, a container orchestration platform that groups virtual machines into clusters. Each virtual machine is called a node. GKE clusters are based on the open-source Kubernetes cluster management system, and Kubernetes provides the mechanisms for interacting with the cluster.

Create a GKE cluster:

1) Create a cluster:

gcloud container clusters create istioready \
  --cluster-version latest \
  --machine-type=n1-standard-2 \
  --num-nodes 4

The gcloud command creates an istioready cluster in the GCP project and default zone that you set earlier. To run Istio, we recommend at least 4 nodes and the n1-standard-2 machine type.

The command takes several minutes to create the cluster. When the cluster is ready, the command prints a confirmation message.

2) Fetch credentials for the kubectl command-line tool so you can use it to manage the cluster:

gcloud container clusters get-credentials istioready

3) Now you can communicate with Kubernetes via kubectl. For example, the following command checks the status of the nodes:

kubectl get nodes

The command returns a list of nodes:

NAME                                       STATUS   ROLES    AGE    VERSION
gke-istoready-default-pool-dbeb23dc-1vg0   Ready    <none>   99s    v1.13.6-gke.13
gke-istoready-default-pool-dbeb23dc-36z5   Ready    <none>   100s   v1.13.6-gke.13
gke-istoready-default-pool-dbeb23dc-fj7s   Ready    <none>   99s    v1.13.6-gke.13
gke-istoready-default-pool-dbeb23dc-wbjw   Ready    <none>   99s    v1.13.6-gke.13

Key Kubernetes Concepts

(Diagram: the application running on GKE.)

Before deploying containers to GKE, review the key Kubernetes concepts below. There are links at the very end if you want to learn more.

  • Nodes and clusters. In GKE, a node is a virtual machine. On other Kubernetes platforms, a node can be a physical computer or a virtual machine. A cluster is a set of nodes that can be treated as a single entity, on which you deploy a containerized application.
  • Pods. In Kubernetes, containers run in pods. A pod is the smallest unit of deployment in Kubernetes; it holds one or more containers. You will deploy the server and loadgen containers in separate pods. When a pod contains several containers (for example, an application server and a proxy server), the containers are managed as a single entity and share the pod's resources.
  • Deployments. In Kubernetes, a deployment is an object that represents a set of identical pods. A deployment runs multiple pod replicas distributed across the cluster nodes and automatically replaces any pods that fail or stop responding.
  • Kubernetes services. Running the application code in GKE changes how loadgen and server are connected. When you ran the services on a Cloud Shell VM or your computer, you sent requests to server at the address localhost:8080. Once deployed to GKE, pods are scheduled onto the available nodes. By default, you have no control over which node a pod runs on, so pods have no permanent IP addresses.
    To get a stable address for server, you define a network abstraction on top of the pods: a Kubernetes service. A Kubernetes service provides a persistent endpoint for a set of pods. There are several service types; server uses LoadBalancer, which provides an external IP address for reaching server from outside the cluster.
    Kubernetes also has a built-in DNS system that assigns DNS names to services (for example, hellosvc.default.svc.cluster.local). This lets pods inside the cluster reach other pods at a fixed address, as the snippet after this list illustrates. These DNS names cannot be used from outside the cluster, for example from Cloud Shell or your computer.
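As a hypothetical illustration (this snippet is not part of the sample code), a pod inside the cluster could call server through its service like this:

# Inside the cluster, the service DNS name resolves to the service's
# stable address; 80 is the service port, which forwards to pod port 8080.
import requests

resp = requests.get("http://hellosvc:80/")  # short name works within the same namespace
print(resp.text)  # -> Hello World!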

Kubernetes Manifests

When you ran the application from source, you used an imperative command: python3 server.py. Imperative means just that: "do this".

Kubernetes, by contrast, uses a declarative model. This means you do not tell Kubernetes exactly what to do; instead, you describe the desired state, and Kubernetes starts and stops pods as needed to keep the actual state of the system matching the desired one.

You specify the desired state in manifests: YAML files. A YAML file contains the specification for one or more Kubernetes objects.

The example contains a YAML file for server and one for loadgen. Each YAML file specifies the desired state of a deployment object and a Kubernetes service.

server.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloserver
spec:
  selector:
    matchLabels:
      app: helloserver
  replicas: 1
  template:
    metadata:
      labels:
        app: helloserver
    spec:
      terminationGracePeriodSeconds: 5
      restartPolicy: Always
      containers:
      - name: main
        image: gcr.io/google-samples/istio/helloserver:v0.0.1
        imagePullPolicy: Always

  • kind specifies the type of the object.
  • metadata.name specifies the name of the deployment.
  • The top-level spec field contains a description of the desired state.
  • spec.replicas specifies the desired number of pods.
  • The spec.template section defines a pod template. The pod specification includes the image field, which specifies the name of the image to pull from Container Registry.

The service is defined as follows:

apiVersion: v1
kind: Service
metadata:
  name: hellosvc
spec:
  type: LoadBalancer
  selector:
    app: helloserver
  ports:
  - name: http
    port: 80
    targetPort: 8080

  • LoadBalancer: clients send requests to the IP address of a load balancer, which has a fixed IP address and is reachable from outside the cluster.
  • targetPort: recall that the EXPOSE 8080 command in the Dockerfile did not actually publish any ports. The service publishes port 8080 so that server containers can be reached from outside the cluster. In our case, hellosvc.default.svc.cluster.local:80 (short name: hellosvc) maps to port 8080 of the helloserver pod IP addresses.
  • port: the port number where other services in the cluster send their requests.

loadgen.yaml

The deployment object in loadgen.yaml is similar to the one in server.yaml. The difference is that it contains an env section, which defines the environment variables loadgen needs, the same ones you set when running the app from source.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: loadgenerator
spec:
  selector:
    matchLabels:
      app: loadgenerator
  replicas: 1
  template:
    metadata:
      labels:
        app: loadgenerator
    spec:
      terminationGracePeriodSeconds: 5
      restartPolicy: Always
      containers:
      - name: main
        image: gcr.io/google-samples/istio/loadgen:v0.0.1
        imagePullPolicy: Always
        env:
        - name: SERVER_ADDR
          value: "http://hellosvc:80/"
        - name: REQUESTS_PER_SECOND
          value: "10"
        resources:
          requests:
            cpu: 300m
            memory: 256Mi
          limits:
            cpu: 500m
            memory: 512Mi

Because loadgen does not accept incoming requests, the type field is set to ClusterIP. This type provides a stable IP address that services in the cluster can use, but that address is not exposed to external clients.

apiVersion: v1
kind: Service
metadata:
  name: loadgensvc
spec:
  type: ClusterIP
  selector:
    app: loadgenerator
  ports:
  - name: http
    port: 80
    targetPort: 8080

Deploying containers in GKE

1) Change to the directory that contains the server example:

cd YOUR_WORKING_DIRECTORY/istio-samples/sample-apps/helloserver/server/

2) Open server.yaml in a text editor.
3) Replace the value of the image field with the name of your Docker image:

image: gcr.io/PROJECT_ID/preparing-istio/helloserver:v0.0.1

Replace PROJECT_ID with your GCP project ID.
4) Save and close server.yaml.
5) Deploy the YAML file to Kubernetes:

kubectl apply -f server.yaml

Upon success, the command prints the following output:

deployment.apps/helloserver created
service/hellosvc created

6) Change to the directory that contains loadgen:

cd ../loadgen

7) Open loadgen.yaml in a text editor.
8) Replace the value of the image field with the name of your Docker image:

image: gcr.io/PROJECT_ID/preparing-istio/loadgen:v0.0.1

Replace PROJECT_ID with your GCP project ID.
9) Save and close loadgen.yaml.
10) Deploy the YAML file to Kubernetes:

kubectl apply -f loadgen.yaml

Upon success, the command prints the following output:

deployment.apps/loadgenerator created
service/loadgensvc created

11) Check the status of the pods:

kubectl get pods

The command shows the status:

NAME                             READY   STATUS    RESTARTS   AGE
helloserver-69b9576d96-mwtcj     1/1     Running   0          58s
loadgenerator-774dbc46fb-gpbrz   1/1     Running   0          57s

12) Fetch the application logs from the loadgen pod. Replace POD_ID with the identifier from the previous output:

kubectl logs loadgenerator-POD_ID

13) Get the external IP address of hellosvc:

kubectl get service

The command response looks something like this:

NAME         TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)        AGE
hellosvc     LoadBalancer   10.81.15.158   192.0.2.1       80:31127/TCP   33m
kubernetes   ClusterIP      10.81.0.1      <none>          443/TCP        93m
loadgensvc   ClusterIP      10.81.15.155   <none>          80/TCP         4m52s

14) Send a request to hellosvc. Replace EXTERNAL_IP with the external IP address of hellosvc:

curl http://EXTERNAL_IP

Moving on to Istio

You now have an application deployed to GKE. loadgen can use Kubernetes DNS (hellosvc:80) to send requests to server, and you can send requests to server at its external IP address. Although Kubernetes has many features, some information about the services is missing:

  • Service relationships. How do the services interact? What are the relationships between them? How does traffic flow between services? You know that loadgen sends requests to server, but imagine you know nothing about the application: you could not answer these questions just by looking at the list of running pods in GKE.
  • Metrics. How long does server take to respond to incoming requests? How many requests per second does it receive? Does it return errors?
  • Security information. Does traffic between loadgen and server travel over plain HTTP or over mTLS?

Istio answers all of these questions. To do so, Istio places an Envoy sidecar proxy in each pod. The Envoy proxy intercepts all inbound and outbound traffic to the application containers. This means that server and loadgen each receive traffic through an Envoy sidecar proxy, and all traffic from loadgen to server passes through the Envoy proxies.

The connections between the Envoy proxies form a service mesh. The service mesh architecture provides a control layer on top of Kubernetes.


Because the Envoy proxies run in their own containers, Istio can be installed on top of a GKE cluster with almost no changes to the application code. However, you did do some work to make your application ready to be managed by Istio:

  • Services for all deployments. The server and loadgen deployments are each bound to a Kubernetes service. Even loadgen, which receives no incoming requests, has a service.
  • Named service ports. Although GKE lets you leave service ports unnamed, Istio requires that you name each port according to its protocol. In the YAML files, the port for server is named http because the server uses the HTTP protocol; if the service used gRPC, you would name the port grpc.
  • Labeled deployments. This lets you use Istio's traffic management features, such as splitting traffic between versions of the same service.

Installation

There are two ways to install Istio: enable the Istio on GKE add-on, or install the open-source version of Istio on the cluster. With Istio on GKE, you can easily manage Istio installation and upgrades as part of the GKE cluster lifecycle. If you need the latest version of Istio or more control over the Istio control plane configuration, install the open-source version instead of the Istio on GKE add-on. To decide on an approach, read the article Do I need Istio on GKE?.

Select an option, read the appropriate guide, and follow the instructions to install Istio on your cluster. If you want to use Istio with your newly deployed application, enable sidecar injection for the default namespace (with open-source Istio, this is typically done with kubectl label namespace default istio-injection=enabled).

Cleaning up

To avoid charges to your Google Cloud Platform account for the resources used in this tutorial, delete the container cluster once you have installed Istio and experimented with the sample app. Deleting the cluster removes all of its resources, such as compute instances, disks, and network resources.

What's next?

Source: habr.com
