@Kubernetes Meetup #3 at Mail.ru Group: June 21

It feels like an eternity has passed since the February Love Kubernetes meetup. The separation was brightened only a little by the fact that we managed to join the Cloud Native Computing Foundation, certify our Kubernetes distribution under the Certified Kubernetes Conformance Program, and launch our own implementation of Kubernetes Cluster Autoscaler in the Mail.ru Cloud Containers service.

It's time for the third @Kubernetes Meetup! In short:

  • Gazprombank will tell how they use Kubernetes in their R&D to manage OpenStack;
  • Mail.ru Cloud Solutions - how to scale applications in K8s using scalers, and how they built their own implementation of Kubernetes Cluster Autoscaler;
  • and Wunderman Thompson - how Kubernetes helps them streamline their development approach, and why DevOps has more Dev than Ops.

The meetup will take place on June 21 (Friday) at 18:30 at the Moscow office of Mail.ru Group (Leningradsky Prospekt, 39, building 79). Registration is required and closes on June 20 at 11:59 am (or earlier if space runs out).

"Kubernetes for Developers: How many Devs are in DevOps?"

Grigory Nikonov, Wunderman Thompson, Managing Director

We don't have 500-node clusters. We don't have hardcore DevOps engineers. We don't have dedicated product teams. But we do have many interesting projects, and answers to the questions we ran into while developing and supporting them. First of all we are developers, used to building the tools we then rely on ourselves. Perhaps they will help you in your work.

Wunderman Thompson is one of the pioneers of Internet solution development in Russia, and today it builds everything from simple landing pages to complex distributed systems. Kubernetes helps optimize the approach to development and, for the agency's customers, the hosting and operation of the delivered solutions.

In distributed systems with many integrations and internal components, a microservice architecture is a natural answer to requirements for upgradability and maintainability. However, the move to such an architecture creates a whole series of problems around versioning and publishing. Because we are an agency rather than a dedicated product team, our developers do not constantly keep the full context of a specific solution on their machines. This imposes its own requirements: the development environment must be reproducible, several teams must be able to make changes at the same time, and it must be possible to return to a project after a while. In response to these challenges, we have developed processes and tools that make it easier for our developers and DevOps engineers to build and maintain the solutions they create.

You'll learn why DevOps is more Dev than Ops, and how laziness can reduce development/maintenance time and cost, as well as:

  • how Kubernetes has changed our approach to project development;
  • what the life cycle of our code looks like;
  • what tools we use for controlled publishing of microservices;
  • how we solve the problem of building obsolete artifacts;
  • how we deploy to the cluster with pleasure.
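The talk will cover the agency's own tooling. As a generic illustration of what "controlled publishing of microservices" can look like, here is a deployment manifest that pins a service to an explicit, immutable image tag; all names here are hypothetical and are not Wunderman Thompson's actual setup:

```yaml
# Hypothetical example: pinning a microservice to an exact build,
# so every environment runs a known, reproducible artifact.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: orders-service
  template:
    metadata:
      labels:
        app: orders-service
    spec:
      containers:
        - name: orders-service
          # an explicit tag (e.g. the git commit SHA) instead of :latest
          image: registry.example.com/orders-service:9f41c2e
          ports:
            - containerPort: 8080
```

Rolling a service forward or back then becomes a matter of changing one version string under review, rather than rebuilding a mutable tag.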

"Scaling applications with Kubernetes Cluster Autoscaler: nuances of Autoscaler and the Mail.ru Cloud Solutions implementation"

Alexander Chadin, Mail.ru Cloud Solutions, PaaS developer

In today's world, users take it for granted that your application is always online and always available, which means it can withstand any traffic flow, no matter how big. Kubernetes offers a rather elegant solution that lets the cluster scale itself with the load: Kubernetes Cluster Autoscaler.

In general, Kubernetes offers two kinds of scaling, depending on what exactly is being scaled: more copies of the application or more capacity. Scaling the application, when we increase the number of its replicas within the existing nodes. And the more complex case, scaling the cluster itself, when we increase the number of nodes.

In the second case, we can run even more copies of the application, which ensures its high availability. We will talk about cluster scaling using Cluster Autoscaler. It can not only increase but also reduce the number of nodes depending on the load: once a load peak passes, Autoscaler itself shrinks the cluster to the required size, and with it the bill for the provider's resources.
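At the application level, the built-in HorizontalPodAutoscaler is what grows and shrinks the replica count. A minimal manifest might look like this; names and thresholds are illustrative, and the exact apiVersion depends on your cluster version:

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:          # which workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add replicas above ~70% average CPU
```

The two mechanisms work together: when the HPA asks for more replicas than the current nodes can hold, the resulting unschedulable pods are exactly what triggers Cluster Autoscaler to add a node.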

At the meetup, we will tell you more about the nuances of the Kubernetes Cluster Autoscaler, as well as what difficulties we encountered when launching our Cluster Autoscaler implementation as part of the Mail.ru Cloud Containers service. You will learn:

  • what scalers are available in Kubernetes and the specifics of using them;
  • what to pay attention to when using scalers;
  • how we segmented nodes into availability zones using Node Groups;
  • how we implemented support for Kubernetes Cluster Autoscaler in MCS.
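For context on the node-group points above: the upstream Cluster Autoscaler is typically configured with per-node-group bounds via its --nodes flag. A sketch of the relevant container arguments follows; the node group names, image version, and thresholds are illustrative, not the actual MCS configuration:

```yaml
# Fragment of a typical cluster-autoscaler Deployment spec:
# one --nodes=min:max:name entry per node group,
# e.g. one group per availability zone.
containers:
  - name: cluster-autoscaler
    image: k8s.gcr.io/cluster-autoscaler:v1.15.1
    command:
      - ./cluster-autoscaler
      - --cloud-provider=<your-provider>
      - --nodes=1:10:workers-zone-a
      - --nodes=1:10:workers-zone-b
      - --scale-down-enabled=true
      - --scale-down-utilization-threshold=0.5
```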

"R&D at Gazprombank: how K8S helps manage OpenStack"

Maxim Kletskin, Gazprombank, Product Manager

In a world where the trend is everything as a service, Time-to-Market is paramount. Applications must be developed quickly to test hypotheses and capture new markets as they first form. For banks, speed is especially important, and new technologies help here, in particular containerization and Kubernetes.

Maxim Kletskin is a product manager at Gazprombank, where he is building a sandbox for launching pilot products. Gazprombank's R&D teams run various experiments in their own cloud, which is based on OpenStack. Kubernetes appears there in two guises: 1) Kubernetes on bare metal as the management layer of the OpenStack cloud, and 2) Kubernetes in the form of the OpenShift distribution, used for development.

The talk will cover the first case: how Gazprombank uses Kubernetes to manage OpenStack. If you look at the architecture of OpenStack, it is quite modular, composed of fairly independent services, so using Kubernetes as the OpenStack management layer looks both interesting and logical. It also makes it easier to add nodes to an OpenStack cluster and increases the reliability of the control plane. And, as a cherry on top, it simplifies collecting telemetry from the cluster.
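Gazprombank's exact setup will be covered in the talk. As a rough sketch of the idea, an OpenStack control-plane service such as Keystone can be run as an ordinary Kubernetes workload; everything below is a hypothetical illustration, similar in spirit to projects like openstack-helm, not the bank's actual manifests:

```yaml
# Hypothetical sketch: an OpenStack control-plane service (Keystone)
# run as a regular Kubernetes Deployment, so Kubernetes handles
# restarts, rollouts and replica count for the OpenStack control plane.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: keystone-api
  namespace: openstack
spec:
  replicas: 3            # control-plane reliability via replication
  selector:
    matchLabels:
      app: keystone-api
  template:
    metadata:
      labels:
        app: keystone-api
    spec:
      containers:
        - name: keystone-api
          image: openstack-keystone:stein   # illustrative image name
          ports:
            - containerPort: 5000           # Keystone public API port
```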

You will learn:

  • why a bank needs R&D: we test and experiment;
  • how we containerize OpenStack;
  • how and why to deploy OpenStack in K8S.

After the talks, we will smoothly transition to the @Kubernetes After-Party, and we have prepared some cool announcements for you. Be sure to register; we review all applications within a couple of days.

We announce new events in the @Kubernetes Meetup series and other Mail.ru Cloud Solutions events right away in our Telegram channel: t.me/k8s_mail

Interested in speaking at the next @Kubernetes Meetup? Submit your application here: mcs.mail.ru/speak

Source: habr.com
