Canary Deployment in Kubernetes #3: Istio

Using Istio+Kiali to launch and visualize a Canary deployment

Articles in this series

  1. Canary Deployment in Kubernetes #1: Gitlab CI
  2. Canary Deployment in Kubernetes #2: Argo Rollouts
  3. Canary Deployment in Kubernetes #3: Istio (this article)
  4. Canary Deployment using Jenkins-X Istio Flagger

Canary Deployment

We hope you have read the first part, where we briefly explained what Canary deployments are and showed how to implement them using standard Kubernetes resources.

Istio

We also assume that you already know what Istio is. If not, you can read about it here.

Application for tests

We will use a simple test application with a frontend nginx pod and a backend python pod. The nginx pod acts as a plain proxy and forwards every request to the backend pod. With the Istio sidecar injected, each pod contains two containers: our application and istio-proxy. The full manifests can be found in the project's yamls.
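
For orientation, here is a minimal sketch of what the backend manifests could look like. The Deployment name and image are illustrative; the Service name, the port and the app/version labels match what the Istio configuration below relies on (backend-v2 would be identical apart from version: v2 and the new image):

apiVersion: v1
kind: Service
metadata:
  name: backend
  namespace: default
spec:
  selector:
    app: backend         # selects the pods of both versions
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-v1       # illustrative name
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backend
      version: v1
  template:
    metadata:
      labels:
        app: backend
        version: v1      # Istio subsets are selected via this label
    spec:
      containers:
      - name: backend
        image: backend:v1   # illustrative image
        ports:
        - containerPort: 80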

Running the test application yourself

If you want to follow my example and run this test application yourself, see the project readme.

Initial Deployment

When the first Deployment is started, we see that the pods of our application have 2 containers each, i.e. the Istio sidecar has been injected.
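
One quick way to verify this is to check the READY column, which should show 2/2 for each pod (assuming the app runs in the default namespace):

kubectl get pods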

We also see the Istio Gateway LoadBalancer in the istio-system namespace.
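
It can be listed like this (assuming the standard istio-ingressgateway service name):

kubectl get svc istio-ingressgateway -n istio-system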

Generating Traffic

We will use the following loop to generate traffic; the requests hit the Istio Gateway LoadBalancer IP, are received by the frontend pods and forwarded to the backend pods:

while true; do curl -s --resolve 'frontend.istio-test:80:35.242.202.152' frontend.istio-test; sleep 0.1; done

We will also add frontend.istio-test to our hosts file.
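
The entry would look something like this, using the LoadBalancer IP from the loop above:

35.242.202.152 frontend.istio-test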

View Mesh via Kiali

We installed the test app and Istio together with tracing, Grafana, Prometheus, and Kiali (see the project readme for details). This means we can open the Kiali dashboard via:

istioctl dashboard kiali # admin:admin

Kiali visualizes the current traffic through the mesh.

As we can see, 100% of the traffic goes to the frontend service and then to the frontend pods labeled v1, since we are using a simple nginx proxy that forwards requests to the backend service, which in turn forwards them to the backend pods labeled v1.

Kiali works great with Istio and provides an out-of-the-box solution for visualizing the mesh. Just great.

Canary Deployment

Our backend already has two k8s deployments, one for v1 and one for v2. Now we just need to tell Istio to redirect a certain percentage of requests to v2.
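
The v1 and v2 subsets that the VirtualService below refers to have to be declared in a DestinationRule. A minimal sketch, assuming the pods are labeled version: v1 and version: v2 as in the manifests above:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: backend
  namespace: default
spec:
  host: backend.default.svc.cluster.local
  subsets:
  - name: v1
    labels:
      version: v1    # pods labeled version=v1
  - name: v2
    labels:
      version: v2    # pods labeled version=v2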

Step 1: 10%

All we need to do is adjust the weights of the VirtualService in istio.yaml:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: backend
  namespace: default
spec:
  gateways: []
  hosts:
  - "backend.default.svc.cluster.local"
  http:
  - match:
    - {}
    route:
    - destination:
        host: backend.default.svc.cluster.local
        subset: v1
        port:
          number: 80
      weight: 90    # 90% of requests stay on v1
    - destination:
        host: backend.default.svc.cluster.local
        subset: v2
        port:
          number: 80
      weight: 10    # 10% of requests go to the canary (v2)
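
Re-applying the file is enough for Envoy to pick up the new weights, for example:

kubectl apply -f istio.yaml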

We see that 10% of requests are redirected to v2.

Step 2: 50%

Now we simply increase it to 50%:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: backend
  namespace: default
spec:
...
    - destination:
        host: backend.default.svc.cluster.local
        subset: v1
        port:
          number: 80
      weight: 50
    - destination:
        host: backend.default.svc.cluster.local
        subset: v2
        port:
          number: 80
      weight: 50

Step 3: 100%

Now the Canary deployment can be considered complete: all traffic is redirected to v2.
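
The final route could look like this (a sketch following the pattern of the previous steps; alternatively both destinations can be kept with weights 0 and 100):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: backend
  namespace: default
spec:
...
    - destination:
        host: backend.default.svc.cluster.local
        subset: v2
        port:
          number: 80
      weight: 100    # all traffic now goes to v2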

Testing Canary manually

Let's say we are now sending 10% of all requests to the v2 backend. What if we want to manually test v2 to make sure everything works as expected?

We can add a custom matching rule based on HTTP headers:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: backend
  namespace: default
spec:
  gateways: []
  hosts:
  - "backend.default.svc.cluster.local"
  http:
  - match:
    - headers:
        canary:
          exact: "canary-tester"
    route:
    - destination:
        host: backend.default.svc.cluster.local
        subset: v2
        port:
          number: 80
      weight: 100    # requests with the canary header always hit v2
  - match:
    - {}
    route:
    - destination:
        host: backend.default.svc.cluster.local
        subset: v1
        port:
          number: 80
      weight: 90
    - destination:
        host: backend.default.svc.cluster.local
        subset: v2
        port:
          number: 80
      weight: 10

Now using curl we can force a v2 request by sending a header:
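
For example, reusing the loop from above (this assumes the frontend nginx proxy passes the custom canary header through to the backend):

curl -s --resolve 'frontend.istio-test:80:35.242.202.152' -H 'canary: canary-tester' frontend.istio-test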

Requests without the header are still governed by the 90/10 split.

Canary for two dependent versions

Now let's consider the case where we have a v2 version of both the frontend and the backend. For both, we specified that 10% of the traffic should go to v2.

We can see that frontend v1 and frontend v2 both forward traffic to backend v1 and backend v2 with the same 90/10 split.

But what if we need to forward traffic from frontend-v2 only to backend-v2, because it is not compatible with v1? To do this, we keep the 10% split on the frontend and control which traffic reaches backend-v2 by matching on sourceLabels:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: backend
  namespace: default
spec:
  gateways: []
  hosts:
  - "backend.default.svc.cluster.local"
  http:
...
  - match:
    - sourceLabels:    # match requests coming from frontend v2 pods
        app: frontend
        version: v2
    route:
    - destination:
        host: backend.default.svc.cluster.local
        subset: v2
        port:
          number: 80
      weight: 100    # all traffic from frontend-v2 goes to backend-v2

As a result, we get what we need: all traffic from frontend-v2 goes to backend-v2, while traffic from frontend-v1 keeps the 90/10 split between backend-v1 and backend-v2.

Differences from the Manual Canary Approach

In the first part we did a Canary deployment manually, also using two k8s Deployments. There we controlled the ratio of requests by changing the number of replicas. This approach works, but it has serious drawbacks.

Istio makes it possible to set the request ratio independently of the number of replicas. This means, for example, that we can use HPAs (Horizontal Pod Autoscalers) without having to configure them according to the current state of the Canary deployment.
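
As an illustration, an HPA can scale the canary Deployment purely on CPU load while the traffic split stays under Istio's control; a sketch, using the illustrative backend-v2 Deployment name from above:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: backend-v2
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: backend-v2    # illustrative name
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 70    # scale on CPU, independently of the traffic weights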

Π‘onclusion

Istio works great, and combined with Kiali it makes for a very powerful combo. Next on my list of interests is combining Spinnaker with Istio for automation and Canary analysis.

Source: habr.com
