Knative: a platform as a service based on Kubernetes with serverless support

Kubernetes has undoubtedly become the dominant container deployment platform. It provides the ability to manage almost everything using its APIs and custom controllers that extend its API with custom resources.

However, the user still has to make detailed decisions about exactly how to deploy, configure, manage, and scale applications. Application scaling, security, and traffic flow are all left to the user's discretion. This is where Kubernetes differs from conventional platforms as a service (PaaS) such as Cloud Foundry and Heroku.

These platforms offer a simplified user experience aimed at application developers, who are mostly concerned with configuring individual applications. Routing, deployment, and metrics are handled transparently for the user by the underlying PaaS system.

The source-to-delivery workflow is handled by the PaaS: it builds a container image, deploys it, and sets up a new route and DNS subdomain for incoming traffic. All of this is triggered by a single git push.

Kubernetes (intentionally) provides only the building blocks for such platforms, leaving the community to do the rest of the work. As Kelsey Hightower put it:

Kubernetes is a platform for building platforms. It's a better place to start; not the endgame.

As a result, we see a raft of Kubernetes distributions, as well as hosting providers trying to build a PaaS on top of Kubernetes, such as OpenShift and Rancher. Against the backdrop of this growing Kube-PaaS market, Knative, announced in July 2018 by Google and Pivotal, enters the ring.

Knative is a collaboration between Google and Pivotal, with contributions from other companies such as IBM, Red Hat, and Solo.io. It offers PaaS-like functionality for Kubernetes with first-class support for serverless applications. Unlike Kubernetes distributions, Knative is installed as an add-on on any compatible Kubernetes cluster and is configured through custom resources.

What is Knative?

Knative is described as "a Kubernetes-based platform to deploy and manage modern serverless workloads." Living up to that description, Knative automatically scales containers in proportion to the number of concurrent HTTP requests. Unused services are eventually scaled down to zero, providing on-demand scaling in the style of serverless computing.

Knative consists of a set of controllers that can be installed in any Kubernetes cluster and provide the following features:

  • building containerized applications from source (provided by the Build component),
  • routing incoming traffic to applications (provided by the Serving component),
  • delivering and automatically scaling applications on demand (also provided by the Serving component),
  • defining the event sources that trigger applications (provided by the Eventing component).

The key component is Serving, which provides provisioning, autoscaling, and traffic management for the applications it manages. After installing Knative, you still retain full access to the Kubernetes API, which lets you manage applications in the usual way and also helps debug Knative services by working with the same API primitives they are built on (pods, services, etc.).

Serving also automates blue-green traffic routing, splitting traffic between the new and old versions of an application when a user ships an updated version.
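To give a flavor of what that looks like, here is a hypothetical sketch of a traffic split declared in the Service spec. The revision names below are made up, and the exact field names can differ slightly between Knative API versions:

apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: helloworld-go
  namespace: default
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
  traffic:
    # hypothetical revision names; Knative generates one per update
    - revisionName: helloworld-go-00001   # old version keeps 90% of the traffic
      percent: 90
    - revisionName: helloworld-go-00002   # new version receives 10%
      percent: 10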

Knative itself depends on having a compatible ingress controller installed. At the time of writing, the Gloo API Gateway and the Istio Service Mesh are supported. Knative configures the available ingress to route traffic to the applications it manages.

Istio Service Mesh can be a heavy dependency for Knative users who just want to try it out, since Knative only depends on the gateway and does not need the full Istio control plane.

For this reason, many users prefer Gloo as the gateway for Knative: it provides a feature set comparable to Istio (for Knative's purposes) while using significantly fewer resources and requiring less maintenance.

Let's see Knative in action on a test bench. I will use a freshly installed cluster running in GKE:

kubectl get namespace
NAME          STATUS   AGE
default       Active   21h
kube-public   Active   21h
kube-system   Active   21h

Let's start installing Knative and Gloo. This can be done in any order:

# install Knative Serving
kubectl apply -f https://github.com/knative/serving/releases/download/v0.8.0/serving-core.yaml
namespace/knative-serving created
# ...
# install Gloo
kubectl apply -f https://github.com/solo-io/gloo/releases/download/v0.18.22/gloo-knative.yaml
namespace/gloo-system created
# ...

Check that all Pods are in "Running" status:

kubectl get pod -n knative-serving
NAME                              READY   STATUS    RESTARTS   AGE
activator-5dd55958cc-fkp7r        1/1     Running   0          7m32s
autoscaler-fd66459b7-7d5s2        1/1     Running   0          7m31s
autoscaler-hpa-85b5667df4-mdjch   1/1     Running   0          7m32s
controller-85c8bb7ffd-nj9cs       1/1     Running   0          7m29s
webhook-5bd79b5c8b-7czrm          1/1     Running   0          7m29s
kubectl get pod -n gloo-system
NAME                                      READY   STATUS    RESTARTS   AGE
discovery-69548c8475-fvh7q                1/1     Running   0          44s
gloo-5b6954d7c7-7rfk9                     1/1     Running   0          45s
ingress-6c46cdf6f6-jwj7m                  1/1     Running   0          44s
knative-external-proxy-7dd7665869-x9xkg   1/1     Running   0          44s
knative-internal-proxy-7775476875-9xvdg   1/1     Running   0          44s
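
Besides these pods, the installation registers Knative's custom resource definitions. A quick way to confirm the Knative API is available (the exact list depends on the installed version):

kubectl get crd | grep knative.dev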

Gloo is ready to route traffic, so let's create an auto-scaling Knative service (kservice) and route traffic to it.

Knative services provide an easier way to deliver applications to Kubernetes than the usual Deployment+Service+Ingress model. Let's work with this example:

apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: helloworld-go
  namespace: default
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          env:
            - name: TARGET
              value: Knative user

I copied this into a file (ksvc.yaml), then applied it to my Kubernetes cluster like so:

kubectl apply -f ksvc.yaml -n default

After our 'helloworld-go' kservice is deployed, we can view the resources Knative created in the cluster:

kubectl get pod -n default
NAME                                              READY   STATUS    RESTARTS   AGE
helloworld-go-fjp75-deployment-678b965ccb-sfpn8   2/2     Running   0          68s

A pod with our 'helloworld-go' image is started when the kservice is deployed. If there is no traffic, the number of pods is scaled down to zero. Conversely, if the number of concurrent requests exceeds a configurable threshold, the number of pods grows.
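
That threshold, along with scale bounds, can be tuned with autoscaling annotations on the revision template. Below is a minimal sketch assuming the standard Knative Pod Autoscaler annotations; check the documentation for your Knative version for the exact names and defaults:

apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: helloworld-go
  namespace: default
spec:
  template:
    metadata:
      annotations:
        # target number of concurrent requests per pod before scaling out
        autoscaling.knative.dev/target: "10"
        # allow scaling down to zero and cap the service at 5 pods
        autoscaling.knative.dev/minScale: "0"
        autoscaling.knative.dev/maxScale: "5"
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go

Besides the deployment and pods, Knative also created an ingress resource for our service: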

kubectl get ingresses.networking.internal.knative.dev -n default
NAME            READY   REASON
helloworld-go   True

Knative configures its ingress using a dedicated 'ingress' resource in Knative's internal API. Gloo consumes this API as its configuration to provide PaaS-like features, including a blue-green deployment model, automatic TLS enforcement, timeouts, and other advanced routing capabilities.
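
If you are curious what Gloo actually consumes, you can dump this internal resource (its exact shape depends on the Knative version):

kubectl get ingresses.networking.internal.knative.dev helloworld-go -n default -o yaml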

After some time, we see that our pods have disappeared (because there was no incoming traffic):

kubectl get pod -n default

No resources found.
kubectl get deployment -n default
NAME                             DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
helloworld-go-fjp75-deployment   0         0         0            0           9m46s

Finally, let's send some traffic to the service. The URL of the Knative proxy is easy to get with glooctl:

glooctl proxy url --name knative-external-proxy
http://35.190.151.188:80

If glooctl is not installed, you can look up the address and port in the Kubernetes service:

kubectl get svc -n gloo-system knative-external-proxy
NAME                     TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)                      AGE
knative-external-proxy   LoadBalancer   10.16.11.157   35.190.151.188   80:32168/TCP,443:30729/TCP   77m

Let's send a request with cURL:

curl -H "Host: helloworld-go.default.example.com" http://35.190.151.188
Hello Knative user!
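
It is also instructive to watch scale-from-zero happen. Keep an eye on the pods in one terminal while sending a request from another; the first request takes a bit longer while the activator waits for a fresh pod to come up:

# terminal 1: watch pods appear and, after the idle period, disappear again
kubectl get pod -n default -w
# terminal 2: send a request to the scaled-to-zero service
curl -H "Host: helloworld-go.default.example.com" http://35.190.151.188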

Knative, together with the high-performance, full-featured Gloo API Gateway, gives developers something close to a PaaS on top of plain Kubernetes. This post has only scratched the surface of the many Knative options available for customization, not to mention its additional features. The same goes for Gloo!

Even though Knative is still a young project, its team releases a new version every six weeks and has begun implementing advanced features such as automatic TLS provisioning and control-plane autoscaling. Given the collaboration between numerous cloud companies, and with Knative serving as the basis of Google's new Cloud Run offering, there is a strong chance it will become the go-to option for serverless computing and PaaS on Kubernetes. Follow the news!

Source: habr.com
