We hope you have read the first part, where we briefly explained what Canary deployments are and showed how to implement them using standard Kubernetes resources.
Istio
We assume that you already know what Istio is; if not, you can read about it here.
Test Application
We will use a simple test application with frontend nginx and backend python pods. The nginx pod simply forwards every request to the backend pod, acting as a proxy. Each pod contains two containers: our application and istio-proxy. The details can be seen in the following yamls:
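The actual manifests are in the project repo; as a rough sketch, the backend side might look like this (names, labels, and images here are illustrative assumptions, not the project's real yamls):

```yaml
# Illustrative sketch only -- the real manifests live in the project repo.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backend
      version: v1
  template:
    metadata:
      labels:
        app: backend
        version: v1     # the version label is what the canary routing keys on
    spec:
      containers:
      - name: backend
        image: python:3-alpine            # placeholder image
        command: ["python", "-m", "http.server", "8080"]
---
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend        # selects both v1 and v2 pods; Istio splits by subset
  ports:
  - port: 80
    targetPort: 8080
```

A `backend-v2` Deployment would be identical apart from the `version: v2` label and the image, and the frontend follows the same pattern with nginx.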
If you want to follow along and use this test application yourself, see the project readme.
Initial Deployment
When we start the first Deployment, we see that each of our application's pods has 2 containers, meaning the Istio sidecar has been injected:
We can also see the Istio Gateway LoadBalancer in the istio-system namespace:
Generating Traffic
We will use the following command, pointed at the gateway IP, to generate traffic that will be received by the frontend pods and forwarded to the backend pods:
while true; do curl -s --resolve 'frontend.istio-test:80:35.242.202.152' frontend.istio-test; sleep 0.1; done
We will also add frontend.istio-test to our hosts file.
View Mesh via Kiali
We installed the test app and Istio together with Tracing, Grafana, Prometheus, and Kiali (see the project readme for details). We can therefore launch Kiali with:
istioctl dashboard kiali # admin:admin
Kiali visualizes the current traffic through the Mesh:
As we can see, 100% of the traffic goes to the frontend service and then to the frontend pod with label v1, since we are using a simple nginx proxy that forwards requests to the backend service, which in turn forwards them to the backend pods with label v1.
Kiali works great with Istio and provides an out-of-the-box Mesh visualization solution. Just great.
Canary Deployment
Our backend already has two k8s deployments, one for v1 and one for v2. Now we just need to tell Istio to redirect a certain percentage of requests to v2.
Step 1: 10%
All we need to do is adjust the weights in the VirtualService in istio.yaml:
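A VirtualService with a 90/10 split might look like this (the host and subset names are assumptions based on the app described above, not necessarily the project's exact yaml):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: backend
spec:
  hosts:
  - backend
  http:
  - route:
    - destination:
        host: backend
        subset: v1
      weight: 90       # 90% of requests stay on v1
    - destination:
        host: backend
        subset: v2
      weight: 10       # 10% go to the canary
```

The `v1`/`v2` subsets themselves are defined in a DestinationRule that keys on the `version` label of the pods.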
Now, using curl, we can force a request to v2 by sending a header:
Requests without a header will still be governed by the 1/10 ratio:
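Header-based routing in Istio is expressed as a match rule listed before the weighted default route. A sketch, assuming a hypothetical `canary` header (the header name and value are assumptions):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: backend
spec:
  hosts:
  - backend
  http:
  - match:
    - headers:
        canary:              # hypothetical header name
          exact: always
    route:
    - destination:
        host: backend
        subset: v2           # the header forces v2
  - route:                   # default weighted route for everyone else
    - destination:
        host: backend
        subset: v1
      weight: 90
    - destination:
        host: backend
        subset: v2
      weight: 10
```

With this rule in place, something like `curl -H "canary: always" frontend.istio-test` would always land on v2, while plain requests keep the 1/10 split.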
Canary for two dependent versions
Now let us consider the case where we have a v2 version of both the frontend and the backend. For both, we specify that 10% of the traffic should go to v2:
We can see that both frontend v1 and v2 forward traffic at a 1/10 ratio to backend v1 and v2.
But what if we need to forward traffic from frontend-v2 only to backend-v2, because it is not compatible with v1? To do this, we keep the 1/10 ratio on the frontend and control which traffic reaches backend-v2 using a match on sourceLabels:
In the first part, we did a Canary deployment manually, also using two k8s deployments. There we controlled the ratio of requests by changing the number of replicas. This approach works, but has serious drawbacks.
Istio makes it possible to set the request ratio independently of the number of replicas. This means, for example, that we can use HPAs (Horizontal Pod Autoscalers) without having to tune them to the current state of the Canary deployment.
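For example, each deployment can get its own HPA, while the canary ratio stays fixed in the VirtualService no matter how many replicas the autoscaler runs. A minimal sketch (names and thresholds are illustrative):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: backend-v2
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: backend-v2       # scales only the canary deployment
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
```

Because Envoy splits traffic by weight, backend-v2 still receives 10% of requests whether it is running 1 replica or 5.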
Conclusion
Istio works great, and combined with Kiali it makes a very powerful combo. Next on my list of interests is combining Spinnaker with Istio for automation and Canary analytics.