I'm sorry, OpenShift, we didn't appreciate you enough and took you for granted

This post was written because our staff had quite a few conversations with clients about developing applications on Kubernetes and the specifics of such development on OpenShift.


We usually start with the thesis that Kubernetes is just Kubernetes, while OpenShift is a Kubernetes platform, like Microsoft AKS or Amazon EKS. Each of these platforms has its own advantages and is aimed at a particular audience. After that, the conversation naturally flows into a comparison of the strengths and weaknesses of specific platforms.

In general, we set out to write this post with a conclusion like: "Listen, it doesn't matter where you run your code, on OpenShift or on AKS, on EKS, on some custom Kubernetes, on any Kubernetes at all (let's call it KUK for short). It's really simple, both here and there."

Then we planned to take the simplest "Hello World" and use it to show what is common and what is different between KUK and the Red Hat OpenShift Container Platform (hereinafter, OCP or simply OpenShift).

However, in the course of writing this post, we realized that we have become so used to using OpenShift that we simply do not realize how it has grown and turned into an amazing platform that has become much more than just a Kubernetes distribution. We tend to take the maturity and simplicity of OpenShift for granted, while overlooking its magnificence.

In general, the time has come for active repentance, so we will now compare, step by step, bringing our "Hello World" up on KUK and on OpenShift, and we will do it as objectively as possible (well, except for occasionally showing a personal attitude to the subject). If you are interested in a purely subjective opinion on this issue, you can read it here (EN). In this post we will stick to the facts, and only the facts.

Clusters

So, our "Hello World" needs clusters. Let's just say "no" to any public clouds, so as not to pay for servers, registries, networks, data transfer, etc. Accordingly, we choose a simple single-node cluster on minikube (for KUK) and Code Ready Containers (for an OpenShift cluster). Both of these options are really easy to install, but require quite a lot of resources on your laptop.


Building on KUK

So, let's go.

Step 1 - Building Our Container Image

Let's start by deploying our "Hello World" to minikube. This will require:

  1. Installed Docker.
  2. Installed Git.
  3. Installed Maven (although this project uses the mvnw wrapper, so you can do without it).
  4. And, of course, the source code itself, i.e. a clone of the repository github.com/gcolman/quarkus-hello-world.git

The first step is to create a Quarkus project. Don't be scared if you've never used Quarkus.io: it's easy. You just select the components you want to use in the project (RestEasy, Hibernate, Amazon SQS, Camel, etc.), and then Quarkus, without any involvement on your part, sets up the Maven project and pushes everything to GitHub. That is, literally one click of the mouse, and you're done. This is why we love Quarkus.
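By the way, if you prefer the command line to the web wizard, a project like this can also be generated with the Quarkus Maven plugin; the version and coordinates below are only an illustration:

mvn io.quarkus:quarkus-maven-plugin:1.13.7.Final:create -DprojectGroupId=org.acme -DprojectArtifactId=quarkus-hello-world -Dextensions="resteasy"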


The easiest way to build our "Hello World" into a container image is to use the Quarkus Maven extension for Docker, which will do all the necessary work. With the advent of Quarkus this has become really easy and simple: add the container-image-docker extension, and you can create images with Maven commands.

./mvnw quarkus:add-extension -Dextensions="container-image-docker"

And finally, we build our image using Maven. As a result, our source code turns into a ready-made container image, which can already be run in the container runtime.


./mvnw -X clean package -Dquarkus.container-image.build=true

That is basically all: now we can run the container with the docker run command, mapping our service to port 8080 so that it can be accessed.

docker run -i --rm -p 8080:8080 gcolman/quarkus-hello-world


After the container instance has started, all that remains is to check with the curl command that our service is running:

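Assuming the app exposes the standard Quarkus /hello greeting endpoint (our assumption about this project), the check is:

curl http://localhost:8080/hello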

So, everything works, and it was really easy and simple.

Step 2 - Push our container image to an image registry

For now, the image we created is stored locally, in our local container storage. If we want to use this image in our KUK environment, it has to be put into some other repository. Kubernetes itself does not provide such a registry, so we will use Docker Hub. Because, firstly, it's free, and secondly, (almost) everyone does it.

This is also very simple: all you need is a Docker Hub account.

So, we log in to Docker Hub and push our image there.

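With the image tagged gcolman/quarkus-hello-world as above, that boils down to two commands:

docker login
docker push gcolman/quarkus-hello-world:1.0.0-SNAPSHOT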

Step 3 - Start Kubernetes

There are many ways to put together a Kubernetes configuration to run our "Hello World", but we will use the simplest of them, because that's the kind of people we are...

First, we start the minikube cluster:

minikube start

Step 4 - Deploying Our Container Image

Now we need to turn our code and container image into a Kubernetes configuration. In other words, we need a pod and a deployment definition pointing to our container image on Docker Hub. One of the easiest ways to do this is to run the create deployment command pointing to our image:


kubectl create deployment hello-quarkus --image=gcolman/quarkus-hello-world:1.0.0-SNAPSHOT

With this command, we told our KUK to create a deployment configuration, which contains the pod specification for our container image. This command also applies the configuration to our minikube cluster, creating a deployment that downloads our container image and runs a pod on the cluster.
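To double-check that the deployment and its pod actually came up, something along these lines will do (create deployment sets the app label automatically; the pod name suffix will differ):

kubectl get deployment hello-quarkus
kubectl get pods -l app=hello-quarkus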

Step 5 - Open access to our service

Now that we have a deployed container image, it's time to think about how to configure external access to the RESTful service that our code actually implements.

There are many ways here. For example, you can use the expose command to automatically create appropriate Kubernetes components such as services and endpoints. Actually, this is what we will do by executing the expose command for our deployment object:

kubectl expose deployment hello-quarkus --type=NodePort --port=8080

Let's dwell for a moment on the --type option of the expose command.

When we expose and create the components needed to run our service, we need, among other things, to be able to connect from the outside to the hello-quarkus service that sits inside our software-defined network. The type parameter lets us create and attach things like load balancers to route traffic into that network.

For example, by specifying type=LoadBalancer, we automatically provision a public cloud load balancer and connect it to our Kubernetes cluster. This, of course, is great, but you need to understand that such a configuration is tightly tied to a specific public cloud and is harder to move between Kubernetes instances in different environments.

In our example, type=NodePort means that our service is reached via the node's IP address and an allocated port number. This option lets you avoid public clouds, but requires a number of additional steps. First, you need your own load balancer, so we will deploy an NGINX load balancer in our cluster.
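(Just to illustrate what NodePort already gives us before any balancer is involved: the service can be reached directly at the node's IP and the allocated port; the /hello path is again our assumption about the app.)

NODE_PORT=$(kubectl get service hello-quarkus -o jsonpath='{.spec.ports[0].nodePort}')
curl http://$(minikube ip):$NODE_PORT/hello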

Step 6 - Set up a load balancer

minikube has a number of platform features that make it easy to create the components you need for external access, such as ingress controllers. Minikube comes bundled with the Nginx ingress controller, and all we have to do is enable it and configure it.

minikube addons enable ingress

That one command creates an Nginx ingress controller that runs inside our minikube cluster; listing the pods shows it up and running:
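In recent minikube versions the controller pod lives in the ingress-nginx namespace (older releases used kube-system), so a namespace-agnostic way to check is:

kubectl get pods -A | grep ingress-nginx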

ingress-nginx-controller-69ccf5d9d8-j5gs9 1/1 Running 1 33m

Step 7 - Set up the ingress

Now we need to configure the Nginx ingress controller to accept hello-quarkus requests.

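A minimal ingress.yml along these lines would do the job (a sketch: the exact apiVersion depends on your Kubernetes version, and the host must match the /etc/hosts entry we add below):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-quarkus
spec:
  rules:
    - host: hello-quarkus.info
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hello-quarkus
                port:
                  number: 8080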

And finally, we need to apply this configuration.


kubectl apply -f ingress.yml


Since we are doing all this on our own machine, we simply add an entry with our node's IP address to the /etc/hosts file so that HTTP requests to our minikube host name are directed to the NGINX load balancer.

192.168.99.100 hello-quarkus.info

That's it, now our minikube service is available from the outside through the Nginx ingress controller.

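A quick check (again assuming the /hello endpoint):

curl http://hello-quarkus.info/hello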

Well, that was easy, right? Or not so much?


Run on OpenShift (Code Ready Containers)

And now let's see how it's all done on the Red Hat OpenShift Container Platform (OCP).

As in the case of minikube, we opt for a single-node OpenShift cluster in the form of Code Ready Containers (CRC). It used to be called minishift and was based on the OpenShift Origin project; now it is CRC, built on Red Hat's OpenShift Container Platform.

Here, sorry, we can't help but say: "OpenShift is great!"

Initially, we were going to write that development on OpenShift is no different from development on Kubernetes. And in essence, that is the case. But in the process of writing this post, we remembered how many unnecessary motions you have to go through when you don't have OpenShift, which is exactly why it is so lovely. We love it when things are easy, and how easily our example deploys and runs on OpenShift compared to minikube is what inspired us to write this post.

Let's run through the process and see what we need to do.

So in the minikube example, we started with Docker… Wait, we don't need Docker installed on the machine anymore.

And we don't need a local git.
And Maven is not needed.
And you don't have to create a container image by hand.
And you don't have to look for any repository of container images.
And you don't need to install an ingress controller.
And you don't need to configure ingress either.

You see? To deploy and run our application on OpenShift, none of the above is needed. And the process itself goes as follows.

Step 1 - Starting Your OpenShift Cluster

We use Code Ready Containers from Red Hat, which is essentially the same minikube, only with a full-fledged single-node OpenShift cluster.

crc start

Step 2 - Build and Deploy the Application to the OpenShift Cluster

It is at this step that the simplicity and convenience of OpenShift manifest themselves in all their glory. As with all Kubernetes distributions, we have many ways to run an application on a cluster. And, as in the case of KUK, we deliberately choose the simplest one.

OpenShift has always been built as a platform for building and running containerized applications. Building containers has always been an integral part of this platform, so there are a bunch of additional Kubernetes resources for the corresponding tasks.

We'll be using OpenShift's Source-to-Image (S2I) build process, which offers several different ways to take our source (code or binaries) and turn it into a container image that runs on an OpenShift cluster.

For this we need two things:

  • Our source code in a git repository
  • A builder image, on the basis of which the build will be performed.

There are many such images, maintained both by Red Hat and by the community, and we will use the OpenJDK image, since we are building a Java application.

You can run an S2I build both from the OpenShift Developer graphical console and from the command line. We will use the new-app command, telling it where to get the builder image and our source code.


oc new-app registry.access.redhat.com/ubi8/openjdk-11:latest~https://github.com/gcolman/quarkus-hello-world.git

That's it, our application is created. In doing so, the S2I process did the following things:

  • Created a service build pod for everything related to building the application.
  • Created an OpenShift BuildConfig.
  • Downloaded the builder image into the internal OpenShift Docker registry.
  • Cloned the "Hello World" repository locally.
  • Saw that there was a Maven pom in there, and so compiled the application with Maven.
  • Created a new container image containing the compiled Java application, and pushed this image into the internal container registry.
  • Created a Kubernetes Deployment with specifications for a pod, a service, etc.
  • Rolled out the new container image.
  • Removed the service build pod.

There is a lot on this list, but the main thing is that the entire build takes place exclusively inside OpenShift, the internal Docker registry is inside OpenShift, and the build process creates all Kubernetes components and runs them on the cluster.

If you visually monitor the launch of S2I in the console, you can see how the build pod is launched during the build.


And now let's take a look at the builder pod's logs: first of all, you can see there how Maven does its job and downloads the dependencies needed to build our Java application.

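(If you prefer the command line, the same log can be followed with oc logs; the BuildConfig name here, derived from the repository name, is our assumption.)

oc logs -f bc/quarkus-hello-world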

After the Maven build completes, the build of the container image starts, and the finished image is then pushed into the internal repository.


That's it, the build process is complete. Now let's make sure that the pods and services of our application are running in the cluster.

oc get service


That's all. And it took just one command. All we have to do now is expose this service for access from the outside.

Step 3 - Expose the service for access from the outside

As in the case of KUK, our "Hello World" on the OpenShift platform also needs a router to direct external traffic to a service inside the cluster. OpenShift makes this very easy. First, the HAProxy routing component is installed in the cluster by default (it can be swapped for the same NGINX). Second, there are special and highly configurable resources called Routes, reminiscent of Ingress objects in good old Kubernetes (in fact, OpenShift's Routes heavily influenced the design of Ingress objects, which can now be used in OpenShift as well). But for our "Hello World", as in almost all other cases, a standard Route with no additional configuration is enough.

To create a routable FQDN for "Hello World" (yes, OpenShift has its own DNS for routing by service names), we simply expose our service:


oc expose service quarkus-hello-world

If you look at the newly created Route, you can find the FQDN and other routing information there:

oc get route


And finally, we access our service from the browser:

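Or from the command line, against the FQDN shown by oc get route (the exact host and the /hello path here are placeholders; CRC host names normally end in apps-crc.testing):

curl http://quarkus-hello-world-<project>.apps-crc.testing/hello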

Now that was really easy!

We love Kubernetes and everything this technology lets you do, and we also love simplicity and ease. Kubernetes was designed to make distributed, scalable containers incredibly easy to operate, but simplicity alone is no longer enough to bring applications into production today. This is where OpenShift comes into play: it keeps up with the times and offers a Kubernetes focused primarily on the developer. A lot of effort has been invested in tailoring the OpenShift platform specifically for the developer, including the creation of tools such as S2I, ODI, the Developer Portal, the OpenShift Operator Framework, IDE integration, Developer Catalogs, Helm integration, monitoring, and many others.

We hope this article was interesting and useful for you. You can find additional resources, materials, and other things useful for developing on the OpenShift platform on the Red Hat Developers portal.

Source: habr.com
