DUMP conference | grep 'backend|devops'

Last week I went to the DUMP IT conference (https://dump-ekb.ru/) in Yekaterinburg and I want to tell you what was discussed in the Backend and Devops sections, and whether regional IT conferences are worth your attention.

Photo: Nikolay Sverchkov from Evil Martians, speaking about Serverless

What was there anyway?

In total, the conference had 8 sections: Backend, Frontend, Mobile, Testing and QA, Devops, Design, Science and Management.

By the way, Science and Management had the largest halls, seating ~350 people each. Backend and Frontend were not much smaller; the Devops hall was the smallest, but lively.

I attended the talks in the Devops and Backend sections and chatted a little with the speakers. Below is an overview of the topics covered and my review of these two sections.

Representatives of SKB-Kontur, DataArt, Evil Martians, the Flag web studio from Yekaterinburg, and Miro (formerly RealTimeBoard) spoke in the Devops and Backend sections. Topics related to CI/CD, queue services, logging, Serverless, and working with PostgreSQL in Go were well covered.

There were also talks from Avito, Tinkoff, Yandex, Jetstyle, Megafon, and Ak Bars Bank, but I simply didn't have time to attend them (video recordings and slides are not yet available; they promise to post them on dump-ekb.ru within 2 weeks).

Devops section

What surprised me was that the section was held in the smallest hall, about 50 seats; people even stood in the aisles 🙂 I'll tell you about the talks I managed to catch.

Petabyte elastic

The section began with a talk by Vladimir Lil (SKB-Kontur) about Elasticsearch at Kontur. They run a fairly large and heavily loaded Elastic (~800 TB of data, ~1.3 petabytes with redundancy). A single Elasticsearch serves all of Kontur's services; it consists of 2 clusters (of 7 and 9 servers) and is so important that Kontur has a dedicated Elasticsearch engineer (Vladimir himself, in fact).

Vladimir also shared his thoughts on the benefits of Elasticsearch and the problems it brings.

The Good:

  • All logs in one place, with easy access to them
  • Logs stored for a year and easy to analyze
  • High speed of working with logs
  • Cool data visualization out of the box

Problems:

  • a message broker is a must-have (at Kontur, Kafka plays this role)
  • quirks of Elasticsearch Curator (its regular scheduled tasks periodically created high load)
  • no built-in authorization (available only for separate, rather serious money, or as open-source plugins of varying degrees of production readiness)

There were only positive reviews of Open Distro for Elasticsearch 🙂 It solves that same authorization issue, among other things.

Where does the petabyte come from? Their nodes are servers with 12 × 8 TB SATA + 2 × 2 TB SSD: cold storage on SATA, SSD only as a hot cache (hot storage).
With 7 + 9 servers, (7 + 9) × 12 × 8 = 1536 TB.
Part of that space is held in reserve for redundancy, etc.
About 90 applications send logs to Elasticsearch, including all of Kontur's reporting services, Elba, etc.

Features of development on Serverless

Next came a talk by Ruslan Serkin (DataArt) about Serverless.

Ruslan talked about what serverless development is in general and what its specifics are.

Serverless is an approach to development in which developers don't touch the infrastructure at all. Examples are AWS Lambda, Kubeless.io (Serverless inside Kubernetes), and Google Cloud Functions.

The ideal Serverless application is simply a function; requests reach it from the Serverless provider through a special API Gateway. A near-ideal microservice, and AWS Lambda supports a large number of modern programming languages. With cloud providers, the cost of maintaining and deploying infrastructure drops to zero, and hosting small applications is also very cheap (AWS Lambda: $0.20 per 1 million simple requests).
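To make this concrete, here is a minimal sketch of such a function in Python, using an AWS Lambda-style handler signature; the event shape and field names here are assumptions for illustration, not any specific provider's contract:

```python
import json

def handler(event, context):
    """AWS Lambda-style entry point: the platform calls this function with
    the parsed request (event); there is no server for you to manage."""
    name = event.get("queryStringParameters", {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Invoked locally for illustration; in the cloud, the API Gateway would
# route HTTP requests to this handler.
print(handler({"queryStringParameters": {"name": "DUMP"}}, None))
```

Everything outside the function body (routing, scaling, TLS, process lifecycle) is the provider's problem, which is exactly the appeal of the model.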

The scalability of such a system is almost perfect: the cloud provider takes care of it by itself, and Kubeless scales automatically inside the Kubernetes cluster.

There are disadvantages:

  • development of large applications becomes more difficult
  • profiling is harder (only logs are available to you, not profiling in the usual sense)
  • no versioning

To be honest, I had heard about Serverless a few years ago, but all this time it was unclear to me how to use it properly. After Ruslan's talk, understanding appeared, and after Nikolay Sverchkov's (Evil Martians) talk in the Backend section, it was cemented. Going to the conference was not in vain 🙂

CI for the poor, or is it worth writing your own CI for a web studio

Mikhail Radionov, head of the Flag web studio from Yekaterinburg, spoke about self-written CI/CD.

His studio has gone from "manual CI/CD" (SSH into the server, git pull, repeat 100 times a day), through Jenkins, to a custom code-control and release tool called Pullkins.

Why didn't Jenkins suit them? It wasn't flexible enough by default and was too complex to customize.

"Flag" develops on Laravel (a PHP framework). While building their CI/CD server, Mikhail and his colleagues used Laravel's built-in mechanisms Telescope and Envoy. The result is a PHP server (yes, really) that processes incoming webhook requests, can build the frontend and backend, deploy to different servers, and report to Slack.
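The core idea of such a self-written CI server fits in a few lines. This is a hypothetical illustration in Python, not Pullkins' real code or configuration: a handler receives a Git-hosting push webhook and maps the pushed branch to a deploy action (branch names and commands are invented examples):

```python
import json

# Hypothetical mapping from pushed branch to deploy command; a real tool
# like Pullkins would drive builds and Slack notifications from here too.
DEPLOY_TARGETS = {
    "refs/heads/master": "deploy --env production",
    "refs/heads/develop": "deploy --env staging",
}

def handle_webhook(body: str):
    """Parse a push-event payload and return the command to run (or None)."""
    payload = json.loads(body)
    return DEPLOY_TARGETS.get(payload.get("ref"))

print(handle_webhook('{"ref": "refs/heads/master"}'))  # → deploy --env production
```

Pushes to branches outside the mapping simply produce no action, which is how such a server stays safe by default.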

Later, in order to do blue/green deploys and have uniform settings across dev, stage, and prod environments, they switched to Docker. The previous advantages remained; homogeneous environments and seamless deployment were added, as was the need to learn Docker well enough to work with it properly.

The project is on GitHub.

How we reduced server rollbacks by 99%

The last talk in the Devops section was from Viktor Eremchenko, Lead devops engineer at Miro.com (former RealTimeBoard).

RealTimeBoard, the Miro team's main product, is based on a monolithic Java application. Building, testing, and deploying it without downtime is a difficult task. On top of that, it is important to deploy a version of the code that won't have to be rolled back (it is a heavy monolith, after all).

On the way to building a system that makes this possible, Miro went through a journey that included work on the architecture, the tools used (Atlassian Bamboo, Ansible, etc.), and work on team structure (they now have a dedicated Devops team plus many separate Scrum teams of developers of different profiles).

The path turned out to be difficult and thorny, and Viktor shared both the accumulated pain and an optimism that hasn't run out.

Photo: won a book for asking questions

Backend section

I had time for 2 talks: from Nikolay Sverchkov (Evil Martians), also about Serverless, and from Grigory Koshelev (Kontur) about telemetry.

Serverless for mere mortals

While Ruslan Serkin talked about what Serverless is, Nikolay showed simple applications built with Serverless and talked about the details that affect the cost and speed of applications on AWS Lambda.

An interesting detail: the minimum billing unit is 128 MB of memory and 100 ms of CPU time, and it costs $0.000000208. Moreover, 1 million such requests per month are free.

Some of Nikolay's functions regularly exceeded the 100 ms limit (the main application was written in Ruby), so rewriting them in Go brought big savings.
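The effect is easy to estimate with back-of-the-envelope arithmetic based on the figures above (with the simplifying assumption that the free tier is counted in the same minimal billing units):

```python
PRICE_PER_UNIT = 0.000000208   # $ per minimal unit (128 MB / 100 ms), figure from the talk
FREE_UNITS_PER_MONTH = 1_000_000

def monthly_cost(requests, units_per_request=1):
    """Cost after the free tier, assuming each request consumes a whole
    number of minimal billing units."""
    billable = max(0, requests * units_per_request - FREE_UNITS_PER_MONTH)
    return billable * PRICE_PER_UNIT

# 1 million minimal requests fit entirely into the free tier:
print(monthly_cost(1_000_000))               # → 0.0
# A slower (e.g. Ruby) function billed at 300 ms costs 3 units per call:
print(round(monthly_cost(1_000_000, 3), 2))  # → 0.42
```

Cutting a function from 3 billing units to 1, as with the Ruby-to-Go rewrites, scales the bill down proportionally, which is where the savings come from.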

Vostok Hercules — make telemetry great again!

The last talk of the Backend section, from Grigory Koshelev (Kontur), was about telemetry. Telemetry means logs, metrics, and application traces.

For this, Kontur uses its own self-written tools, published on GitHub. The tool from the talk is Hercules (github.com/vostok/hercules), used to deliver telemetry data.

Vladimir Lil's talk in the Devops section covered storing and processing logs in Elasticsearch, but there is still the task of delivering logs from many thousands of devices and applications, and tools like Vostok Hercules solve it.

Kontur went down a path familiar to many, from RabbitMQ to Apache Kafka, but not everything is so simple)) They had to add Zookeeper, Cassandra, and Graphite to the scheme. I won't go into this talk in full (it's not my area); if you are interested, you can wait for the slides and videos on the conference website.

How does it compare to other conferences?

I can't compare it with conferences in Moscow and St. Petersburg, but I can compare it with other events in the Urals and with 404fest in Samara.

DUMP runs 8 sections, a record for Ural conferences. The very large Science and Management sections are also unusual. The audience in Yekaterinburg is quite mature: the city hosts large development offices of Yandex, Kontur, and Tinkoff, which also shows in the talks.

Another interesting point: many companies brought 3-4 speakers to the conference at once (this was the case with Kontur, Evil Martians, and Tinkoff). Many of them were sponsors, but their talks were fully on par with the others; these were not promotional talks.

To go or not to go? If you live in the Urals or nearby, have the opportunity, and are interested in the topics, then yes, of course. If you are considering a long trip, I would look at the talk topics and at talk videos from past years (www.youtube.com/user/videoitpeople/videos) before deciding.
Another plus of regional conferences, as a rule, is that it is easy to talk to a speaker after the talks; there are simply fewer people competing for that conversation.


Thanks to Dump and Yekaterinburg! )

Source: habr.com
