Modern Platform for Software Development and Deployment

This is the first post in a series covering the changes, improvements, and additions in the upcoming Red Hat OpenShift 4 release, to help you prepare for your migration to the new version.

From the moment the fledgling Kubernetes community first met in the fall of 2014 at Google's Seattle office, it was clear that the project was destined to revolutionize the way we develop and deploy software. At the same time, public cloud providers kept investing heavily in infrastructure and services, making it ever easier to run IT and build software, and making both accessible in ways few could have imagined at the beginning of the decade.

Of course, the announcement of each new cloud service was accompanied by plenty of expert debate on Twitter, on all kinds of topics: the end of the open-source era, the decline of on-premises IT, the inevitability of a new software monopoly in the cloud, and how the new paradigm X would replace all others.

Needless to say, all those debates were rather silly.

The reality is that nothing has disappeared. Today we see exponential growth both in end products and in the ways they are developed, driven by the constant spread of software into our lives. And while everything around us keeps changing, at its core much will stay the same. Developers will still write buggy code, operations engineers and reliability specialists will still carry pagers and receive automated notifications in Slack, managers will still argue about OpEx and CapEx, and whenever an outage happens, a senior developer will sigh sadly: "I told you so"...

What really should be discussed is which tools we can have at our disposal to create better software products, and how those tools improve security and make development easier and more reliable. As projects grow more complex, new risks appear, and today people's lives depend on software so heavily that developers simply have to do their job better.

Kubernetes is one such tool. Red Hat works on OpenShift to integrate it with other tools and services into a single platform that makes software more reliable, manageable, and secure for its users.

With that said, the OpenShift team asks one simple question:

How can you make working with Kubernetes easier and more convenient?

The answer is surprisingly obvious:

  • automate the hard parts of deployment, in the cloud and outside it;
  • focus on reliability while hiding complexity;
  • keep shipping updates that are simple and safe to apply;
  • ensure accountability and auditability;
  • provide strong security from the start, but not at the expense of usability.

The next release of OpenShift has to draw on both the experience of its creators and that of other developers who deploy software at massive scale in the world's largest companies. It also has to absorb the accumulated experience of the open ecosystems on which the modern world is built. At the same time, it must leave behind the old artisanal mindset and move toward a philosophy of an automated future. It should be a "bridge" between the old and new ways of deploying software, making full use of all available infrastructure, whether it is hosted by the largest cloud provider or running on tiny systems at the edge.

How to achieve this result?

At Red Hat, we are used to doing the boring, thankless work for long stretches of time in order to keep communities healthy and keep the projects the company participates in alive. The open-source world is full of talented developers who create the most extraordinary things: entertaining, educational, eye-opening, and simply beautiful. Naturally, nobody expects every participant to move in the same direction or pursue common goals. Harnessing that energy and channeling it in the right direction is sometimes necessary to build the capabilities our users need, but at the same time we have to watch our communities grow and learn from them.

At the beginning of 2018, Red Hat acquired CoreOS, a company with a similar view of the future: more secure and reliable, built on open-source principles. The combined teams have worked to develop those ideas further and put them into practice, pursuing a shared philosophy of keeping all software running safely. All of this work builds on Kubernetes, Linux, public clouds, private clouds, and the thousands of other projects that underpin our modern digital ecosystem.

The new release, OpenShift 4, will be clearer, more automated, and more natural

The OpenShift platform will run on the best and most reliable Linux operating systems, with bare-metal hardware support, easy virtualization, automated infrastructure programming, and, of course, containers (which are basically just Linux).

The platform must be secure from the outset, yet provide easy iteration for developers—that is, be flexible and robust enough to still allow administrators to audit and manage easily.

It should allow software to run “as a service” and not lead to unmanaged infrastructure growth for operators.

It will let developers focus on building real products for users and customers. No more wading through a jungle of hardware and software settings: accidental complexity will be a thing of the past.

OpenShift 4: NoOps platform that does not require maintenance

In this publication, we have described the goals that helped shape the company's vision for OpenShift 4. The team's aim is to simplify the day-to-day tasks of operating and maintaining software as much as possible, making those processes easy and stress-free for implementation specialists and developers alike. But how do you get closer to that goal? How do you build a platform for running software that requires minimal intervention? And what does NoOps even mean in this context?

Speaking broadly, for developers the concepts of "serverless" and "NoOps" mean tools and services that hide the "operations" component, or at least minimize that burden for the developer.

  • Work with application programming interfaces (APIs), not with systems.
  • Don't deploy software yourself; let the provider do it for you.
  • Don't start by building a big framework; start by writing small pieces that act as "building blocks", and try to make that code work with data and events rather than with disks and databases.
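The "building block" style in the list above can be sketched as a small, stateless handler that reacts to events and emits new ones, rather than touching disks or databases directly. This is a minimal illustration; the event shape and handler signature are made-up assumptions, not any specific FaaS provider's API.

```python
# A hypothetical "building block": a tiny, stateless event handler.
# It works with data and events; the platform owns storage and delivery.

def handle_order_created(event: dict) -> dict:
    """React to a single 'order created' event and emit a follow-up event."""
    order = event["data"]
    total = sum(item["price"] * item["qty"] for item in order["items"])
    # Return a new event instead of writing to a database directly;
    # the platform decides where it is stored and who consumes it.
    return {"type": "invoice.requested",
            "data": {"order_id": order["id"], "total": total}}

event = {"type": "order.created",
         "data": {"id": 42,
                  "items": [{"price": 10.0, "qty": 2},
                            {"price": 5.0, "qty": 1}]}}
print(handle_order_created(event))
```

Because the function holds no state of its own, the provider can run as many copies of it as the event stream requires.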

The challenge, as before, is to speed up iteration in software development and make it possible to build better products, while freeing the developer from worrying about the systems their software runs on. Experienced developers know that when you focus on users, the picture can change quickly, so you shouldn't invest too much effort in writing software unless you are certain it is needed.

For maintenance and operations professionals, the word "NoOps" can sound a little intimidating. But talk to field engineers and it becomes obvious that the patterns and practices they use in Site Reliability Engineering (SRE) have much in common with those described above:

  • Don't manage systems; automate their management.
  • Don't deploy software by hand; build a pipeline for deploying it.
  • Avoid bundling all your services together so that a failure in one brings down the whole system; spread them across your infrastructure using automation tools, and tie them together with control and visibility.

SREs know that things will go wrong and that they will have to track down and fix the problems, so they automate routine work and define error budgets in advance, to be ready to prioritize and make decisions when an incident occurs.
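The error-budget arithmetic mentioned above is simple enough to show directly. This is a back-of-the-envelope sketch with illustrative numbers: an SLO target leaves a fixed budget of allowed downtime per period, and each incident spends part of it.

```python
# Sketch of an SRE error budget: the SLO leaves a fixed allowance of
# downtime, and incidents spend it. All numbers here are illustrative.

def error_budget_minutes(slo: float, period_minutes: int) -> float:
    """Allowed downtime for a given SLO over a period."""
    return (1.0 - slo) * period_minutes

MONTH = 30 * 24 * 60                          # 43,200 minutes
budget = error_budget_minutes(0.999, MONTH)   # 99.9% SLO -> 43.2 min/month
spent = 15 + 12                               # two incidents this month
remaining = budget - spent

print(f"budget: {budget:.1f} min, remaining: {remaining:.1f} min")
# When the budget is nearly exhausted, the team prioritizes reliability
# work over shipping new features; while budget remains, it can ship faster.
```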

Kubernetes in OpenShift is a platform designed to solve exactly these problems: instead of forcing you to deal with virtual machines or load-balancer APIs, you work with higher-order abstractions, namely deployments and services. Instead of installing software agents, you run containers, and instead of writing your own monitoring stack, you use the tools already built into the platform. So the secret ingredient of OpenShift 4 is really no secret at all: take SRE principles and serverless concepts as the foundation and carry them to their logical conclusion, to help developers and operations engineers:

  • Automate and standardize the infrastructure that applications use.
  • Connect deployment and development processes without constraining the developers themselves.
  • Make getting the Nth service, feature, application, or entire stack up and running no harder than the first.
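"Working with higher-order abstractions" concretely means declaring desired state and letting the platform reconcile reality to match it. Below is a standard Kubernetes Deployment plus Service pair, expressed as Python dictionaries for readability; the application name and image are made-up examples, not anything shipped with OpenShift.

```python
# Declarative desired state: a Deployment keeps N replicas of a container
# running, and a Service gives them a stable network identity. The app
# name and image below are hypothetical placeholders.

deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "hello-app"},
    "spec": {
        "replicas": 3,  # desired state: the platform keeps 3 pods running
        "selector": {"matchLabels": {"app": "hello-app"}},
        "template": {
            "metadata": {"labels": {"app": "hello-app"}},
            "spec": {"containers": [{
                "name": "web",
                "image": "example/hello:1.0",
                "ports": [{"containerPort": 8080}],
            }]},
        },
    },
}

service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "hello-app"},
    "spec": {
        "selector": {"app": "hello-app"},          # routes to matching pods
        "ports": [{"port": 80, "targetPort": 8080}],
    },
}
```

Nothing here mentions virtual machines or load-balancer APIs: if a pod dies, the Deployment controller replaces it, and the Service keeps routing traffic to whatever healthy pods carry the matching label.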

But what distinguishes the OpenShift 4 platform from its predecessors and from the "standard" approach to such problems? How is scale achieved for implementation and operations teams? By recognizing that in this story, the cluster is king. So:

  • We make the purpose of each cluster explicit (Dear cloud, I'm raising this cluster because I can).
  • Machines and operating systems exist to serve the cluster (Your Majesty).
  • Manage the state of hosts from the cluster and minimize drift.
  • Every important element of the system needs a nanny (a mechanism) that watches it and fixes problems.
  • The failure of *any* aspect or element of the system, and the corresponding recovery mechanisms, are a normal part of life.
  • All infrastructure is configured through the API.
  • Use Kubernetes to run Kubernetes (yes, that's not a typo).
  • Updates should install easily and naturally. If installing an update takes more than one click, we are clearly doing something wrong.
  • Monitoring and debugging any single component should not be a problem, and, accordingly, monitoring and reporting across the entire infrastructure should be just as simple and convenient.
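The "nanny" idea from the list above is a control loop: observe actual state, compare it with desired state, and repair the difference. The toy reconciler below manages a set of named components in memory; this is an illustrative sketch, while a real operator would watch and act through the Kubernetes API.

```python
# A toy reconciler: converge "actual" state toward "desired" state.
# Real operators do the same dance against the Kubernetes API.

def reconcile(desired: set, actual: set) -> tuple:
    """Return (to_create, to_delete) so that actual converges on desired."""
    return desired - actual, actual - desired

actual = {"api-server", "old-worker"}
desired = {"api-server", "scheduler", "worker"}

to_create, to_delete = reconcile(desired, actual)
for name in to_create:
    actual.add(name)      # "repair": start the missing component
for name in to_delete:
    actual.discard(name)  # remove what should no longer exist

assert actual == desired  # the loop has converged
print("converged:", sorted(actual))
```

Run in a loop, this pattern makes failure unremarkable: whatever breaks, the next reconciliation pass detects the gap between desired and actual state and closes it.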

Do you want to see the possibilities of the platform in action?

A preview version of OpenShift 4 is available to developers. With an easy-to-use installer you can launch a cluster on AWS on top of Red Hat CoreOS. To use the preview, you only need an AWS account to provision the infrastructure and a set of credentials to access the preview images.

  1. To get started, go to try.openshift.com and click "Get Started".
  2. Sign in to your Red Hat account (or create a new one) and follow the instructions to set up your first cluster.

After a successful installation, check out our OpenShift Training tutorials for a deeper understanding of the systems and concepts that make the OpenShift 4 platform such an easy and convenient way to run Kubernetes.

Try the new release of OpenShift and share your opinion. We strive to make working with Kubernetes as accessible and effortless as possible – the future of NoOps starts today.

And now for something special!
On April 20 at the DevOps Forum 2019 conference, one of the OpenShift developers, Vadim Rutkovsky, will run a master class: he will break ten clusters and make the attendees repair them. The conference is paid, but the promo code #RedHat gives a 37% discount.

The master class runs from 17:15 to 18:15, and the booth is open all day. T-shirts, hats, stickers – as usual!

Hall #2
“Here the whole system needs to be changed: we fix broken k8s clusters together with certified locksmiths.”


Source: habr.com
