The evolution of delivery tools, or thoughts on Docker, deb, jar and more


At some point I decided to write an article about delivery in the form of Docker containers and deb packages, but once I started, something carried me back to the distant times of the first personal computers and even calculators. So instead of a dry comparison of Docker and deb, here are reflections on the topic of evolution, which I submit for your judgment.

Any product, whatever it may be, must somehow get to the production servers, must be configured and launched. That is what this article is about.

I will reason in a historical context: "I sing of what I see", what I saw when I was just starting to write code and what I observe now, what we ourselves use at the moment and why. The article does not claim to be a full-fledged study; some points are missing; this is my personal view of what was and what is.

So, in the good old days... The earliest delivery method I came across was cassette tapes. I had a BK-0010.01 computer...

The era of calculators

No, there was an even earlier moment: there were also the MK-61 and MK-52 calculators.

Back when I had the MK-61, the way to transfer a program was an ordinary piece of paper in a box with the program written on it, which you keyed into the calculator by hand whenever you needed to run it. If you wanted to play (yes, even this antediluvian calculator had games), you sat down and entered the program into the calculator. Naturally, when the calculator was switched off, the program vanished into oblivion. Besides calculator codes written out on paper, programs were published in the magazines Radio and Tekhnika Molodezhi, and were also printed in books of that time.

The next model was the MK-52 calculator, which already had some kind of non-volatile data storage. Now a game or program did not have to be entered by hand: after a few magic passes with the buttons, it loaded itself.

The largest program the calculator could hold was 105 steps, and the permanent memory of the MK-52 was 512 steps.

By the way, if any fans of these calculators are reading this: while writing the article I found both a calculator emulator for Android and programs for it. Forward to the past!

A small digression about the MK-52 (from Wikipedia)

The MK-52 flew into space aboard the Soyuz TM-7 spacecraft, where it was supposed to be used to calculate the landing trajectory in case the on-board computer failed.

Since 1988, the MK-52 with the Elektronika-Astro memory expansion unit has been supplied to Navy ships as part of the navigator's computing kit.

The first personal computers

Let's go back to the days of the BK-0010. It obviously had more memory, and keying in code from a piece of paper was no longer an option (although at first I did exactly that, because there was simply no other medium). Audio cassettes for tape recorders became the main means of storing and delivering software.

A program was usually stored on a cassette as one or two binary files; everything else was contained inside them. Reliability was very low; you had to keep two or three copies of a program. Loading times were not encouraging either, and enthusiasts experimented with different frequency encodings to overcome these shortcomings. I myself was not yet doing professional software development at the time (not counting simple BASIC programs), so, unfortunately, I cannot tell you in detail how everything was arranged inside. The very fact that a computer mostly had nothing but RAM determined the simplicity of the storage scheme.

The emergence of reliable and large storage media

Later, floppy disks appeared: the copying process became simpler and reliability grew.
But the situation changed dramatically only when sufficiently large local storage appeared in the form of hard drives.

The type of delivery changed fundamentally: installer programs appeared that managed the process of configuring the system, as well as cleaning up after removal, since programs were no longer just read into memory but copied to local storage, from which you had to be able to remove the leftovers when necessary.

In parallel, the complexity of the supplied software grew.
The number of files in a distribution increased from a handful to hundreds and thousands; library version conflicts and other joys began when different programs used the same data.

At that time I had not yet discovered the existence of Linux; I lived in the world of MS-DOS and, later, Windows, and wrote in Borland Pascal and Delphi, occasionally glancing towards C++. In those days many used InstallShield (en.wikipedia.org/wiki/InstallShield) to deliver products, and it solved all the tasks of deploying and configuring software quite successfully.

Internet era

Gradually, software systems became even more complex; monoliths and desktop applications gave way to distributed systems, thin clients and microservices. Now you had to configure not one program but a set of them, and in such a way that they all worked together.

The concept changed completely: the Internet arrived, and with it the era of cloud services. So far only in its initial form, as websites, with hardly anyone dreaming of services yet, but it was a turning point for both the development and the delivery of applications.

For myself I noted that at that moment a change of developer generations took place (or maybe it was only in my environment), and it felt as if all the good old delivery methods were forgotten in a single moment and everything started over from scratch: all delivery was done with scripts knocked together on the knee, and this was proudly called "continuous delivery". In effect, a period of chaos began, when the old was forgotten and unused, and the new simply did not exist yet.

I remember the times when, at the company where I worked then (I won't name it), instead of building with Ant (Maven was not popular then, or did not exist at all), people simply built the jar in the IDE and quietly committed it to SVN. Deployment then consisted of fetching the file from SVN and copying it over SSH to the desired machine. Simple and clumsy.
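Roughly, that whole "pipeline" fit into three commands (a minimal sketch; the repository URL, host and paths are hypothetical):

    # Fetch the jar that was committed to SVN and push it over SSH
    svn export svn://svn.example.com/project/trunk/build/app.jar app.jar
    scp app.jar deploy@app-server.example.com:/opt/app/app.jar
    ssh deploy@app-server.example.com '/etc/init.d/app restart'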

At the same time, simple PHP sites were delivered quite primitively, by simply copying the corrected file over FTP to the target machine. Sometimes not even that: the code was edited live on the production server, and it was considered a special chic if there were backups somewhere.


RPM and DEB packages

On the other hand, as the Internet developed, UNIX-like systems gained more and more popularity; in particular, it was at that time, around 2000, that I discovered Red Hat Linux 6 for myself. Naturally, it had its own means of delivering software: according to Wikipedia, RPM appeared as the main package manager as early as 1995, in Red Hat Linux 2.0. From that time to the present day, the system has been delivered as RPM packages, and they continue to exist and develop successfully.

Distributions of the Debian family followed a similar path, implementing delivery in the form of deb packages, which is likewise unchanged to this day.

Package managers let you deliver the software products themselves, configure them during installation, manage dependencies between packages, and remove products and clean up the leftovers during uninstallation. That is, for the most part, everything that is needed, which is why they have lasted for several decades with little or no change.
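For illustration, that whole lifecycle fits into a few commands (the package name my-service is hypothetical):

    # Install with automatic dependency resolution
    apt-get install my-service
    # Upgrade to the version published in the repository
    apt-get install --only-upgrade my-service
    # Remove the program but keep its configuration
    apt-get remove my-service
    # Remove everything, configuration included
    apt-get purge my-service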

The cloud era added to package managers installation not only from physical media but also from network repositories, but fundamentally little changed.

It's worth noting that there is currently some movement away from deb and towards snap packages, but more on that later.

So this new generation of cloud developers, who knew neither deb nor RPM, slowly grew up as well, gained experience, the products became more complex, and more reasonable delivery methods were needed than FTP, bash scripts and similar student crafts.
And this is where Docker enters the scene, a mixture of virtualization, resource isolation and a delivery method. It is fashionable and trendy now, but is it needed everywhere? Is it a panacea?

From my observations, Docker is very often proposed not as a reasoned choice but simply because, on the one hand, the community talks about it and those who propose it know nothing else, while, on the other hand, people mostly keep silent about the good old packaging systems: they exist and do their job quietly and unnoticed. In such a situation there is really no alternative: the choice is obvious, Docker.

I will try to share our experience of how we adopted Docker and what came of it.


Self-written scripts

Initially there were bash scripts that deployed jar archives to the desired machines; Jenkins managed this process. This worked successfully, since a jar archive is already an assembly in itself, containing classes, resources and even configuration. If you pack everything into it to the maximum, then unpacking it with a script is not the hardest of tasks.
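In its simplest form such a script is just a few lines (a sketch; the host, service and path names are hypothetical), and it is exactly the kind of "single happy path" script discussed below:

    #!/usr/bin/env bash
    # Naive deploy script of that period: one happy path,
    # no rollback, no uninstall, no dependency handling.
    set -e
    HOST=app01.example.com
    SERVICE=my-service
    TARGET=/opt/${SERVICE}

    scp "build/${SERVICE}.jar" "deploy@${HOST}:${TARGET}/${SERVICE}.jar.new"
    ssh "deploy@${HOST}" "systemctl stop ${SERVICE} \
      && mv ${TARGET}/${SERVICE}.jar.new ${TARGET}/${SERVICE}.jar \
      && systemctl start ${SERVICE}"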

But scripts have a few disadvantages:

  • scripts are usually written in haste and are therefore so primitive that they cover only the single happiest scenario. This is encouraged by the fact that the developer wants to deliver as quickly as possible, while a proper script requires a decent amount of invested resources
  • as a consequence of the previous point, scripts contain no uninstall procedure
  • there is no established upgrade procedure
  • when a new product appears, a new script has to be written
  • no dependency support

Of course, you can write a sophisticated script, but, as I wrote above, that is development time, and not the least of it, while, as we all know, there is never enough time.

All of this obviously limits the applicability of this deployment method to the simplest systems. The time had come to change that.


Docker

At some point, freshly minted middle developers began to join us, bubbling with ideas and raving about Docker. Well then, flag in hand: let's do it! There were two attempts. Both failed, let's say, because of great ambitions but a lack of real experience. Did we need to push through and finish it by any means? Unlikely: a team has to evolve to the right level before it can use the appropriate tools. On top of that, when using ready-made Docker images, we often ran into the network not working correctly inside them (which may also have been due to the rawness of Docker itself) or found it difficult to extend other people's containers.

What inconveniences did we face?

  • Network problems in bridge mode
  • It is inconvenient to look at logs inside a container (if they are not also written somewhere in the host machine's file system; see the sketch after this list)
  • Periodic strange hangs of ElasticSearch inside the container; the cause was never established, even though the container was the official one
  • It is tedious to use a shell inside a container: everything is heavily cut down, there are no familiar tools
  • Containers are large, so they are expensive to store
  • Because of the large container size, it is difficult to keep multiple versions
  • Longer builds than with other methods (scripts or deb packages)
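The log problem, for example, is usually worked around with a bind mount, so the logs outlive the container and are visible with ordinary host tools (a sketch; the image and path names are hypothetical):

    # Write service logs to the host file system instead of the container
    docker run -d \
      --name my-service \
      -v /var/log/my-service:/opt/app/logs \
      registry.example.com/my-service:1.0
    # Now the usual host tools work on them again
    tail -f /var/log/my-service/app.log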

On the other hand, why is deploying a Spring service as a jar archive through the same deb any worse? Is resource isolation really necessary? Is it worth losing convenient operating system tools by stuffing a service into a heavily cut-down container?

As practice has shown, in reality it is not necessary, and a deb package is enough in 90% of cases.
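For a typical Spring service, the deb essentially just has to put the jar in place and ship a systemd unit along these lines (a sketch; all names and paths are hypothetical):

    # /lib/systemd/system/my-service.service, shipped inside the deb
    [Unit]
    Description=My Spring service
    After=network.target

    [Service]
    User=my-service
    ExecStart=/usr/bin/java -jar /opt/my-service/my-service.jar
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target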

So when did the good old deb fail us, and when did we really need Docker?

For us, it was the deployment of Python services. A lot of libraries needed for machine learning but not included in the standard distribution of the operating system (and the ones that were there were the wrong versions), hacks with settings, and the need for different versions for different services living on the same host system led to the only reasonable way of delivering this explosive mixture being Docker. The effort of building a Docker container turned out to be lower than the idea of packing it all into separate deb packages with dependencies, something nobody in their right mind would have undertaken.
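The container simply freezes the whole environment, pinned library versions included (a sketch; the base image, versions and file names are hypothetical):

    # Hypothetical Dockerfile for one of the Python ML services: the image
    # carries its own pinned libraries, independent of the host system.
    FROM python:3.6-slim

    WORKDIR /opt/app
    # requirements.txt pins exact versions, e.g. numpy==1.15.4
    COPY requirements.txt .
    RUN pip install --no-cache-dir -r requirements.txt

    COPY . .
    CMD ["python", "service.py"]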

The second place where we plan to use Docker is for deploying services according to the blue-green deployment scheme. But here I want a gradual increase in complexity: first the deb packages are built, and then a Docker container is assembled from them.
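The idea is that the image becomes a thin wrapper over the very same, already tested deb (a sketch; the package name and base image are hypothetical):

    # Hypothetical second stage of the pipeline: the deb built and
    # tested earlier is installed into the container unchanged.
    FROM ubuntu:18.04

    COPY my-service_1.0.0_amd64.deb /tmp/
    RUN apt-get update \
        && apt-get install -y /tmp/my-service_1.0.0_amd64.deb \
        && rm /tmp/my-service_1.0.0_amd64.deb

    CMD ["/usr/bin/my-service"]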


Snap packages

Let's get back to snap packages. They first officially appeared in Ubuntu 16.04. Unlike the usual deb and rpm packages, a snap carries all of its dependencies with it. On the one hand, this avoids library conflicts; on the other, it means a larger resulting package. It can also affect the security of the system: with snap delivery, all changes to the bundled libraries have to be tracked by the developer who builds the package. In general, it is not all that simple, and universal happiness does not come from using them. Nevertheless, a snap is quite a reasonable alternative if Docker is used only as a packaging tool rather than for virtualization.
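To give a feel for the format: a snap is described by a snapcraft.yaml along these lines (a sketch; the name and the part definition are hypothetical):

    # Hypothetical snapcraft.yaml: all dependencies end up inside the snap
    name: my-service
    version: '1.0'
    summary: Example service packaged as a snap
    description: A service that carries all of its dependencies with it.
    grade: stable
    confinement: strict

    apps:
      my-service:
        command: bin/my-service
        daemon: simple

    parts:
      my-service:
        plugin: python
        source: .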

As a result, we now use deb packages and Docker containers in a reasonable combination, which in some cases we may well replace with snap packages.


What do you use for delivery?

  • Self-written scripts

  • Copying by hand over FTP

  • deb packages

  • rpm packages

  • snap packages

  • Docker images

  • VM images

  • Cloning the entire HDD

  • puppet

  • ansible

  • Other

109 users voted. 32 users abstained.

Source: habr.com
