Integration of Kubernetes Dashboard and GitLab Users

Kubernetes Dashboard is an easy-to-use tool for getting up-to-date information about a running cluster and performing minimal management of it. You begin to appreciate it even more when access to these features is needed not only by administrators and DevOps engineers, but also by those who are less accustomed to the console and/or do not intend to deal with all the intricacies of kubectl and other utilities. This happened to us: the developers wanted quick access to the Kubernetes web interface, and since we use GitLab, the solution suggested itself.

Why do it?

Developers themselves may be interested in a tool like K8s Dashboard for debugging tasks. Sometimes you want to view logs and resources, and sometimes to kill pods, scale Deployments/StatefulSets, or even go into a container's console (there are requests for that too, although there is another way to do it, e.g. via kubectl-debug).

In addition, there is a psychological aspect for managers: they want to look at the cluster, see that "everything is green", and reassure themselves that "everything works" (which, of course, is very relative... but that is beyond the scope of this article).

We use GitLab as our standard CI system, and all developers use it too. Therefore, in order to give them access, it was logical to integrate Dashboard with GitLab accounts.

Also note that we are using NGINX Ingress. If you are working with other ingress solutions, you will need to find the analogues of the authorization annotations yourself.
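For reference, with NGINX Ingress such external authorization is usually wired up via annotations along these lines (a sketch only: the hostname is a placeholder, and the actual manifests in the repository mentioned below may differ):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  annotations:
    # subrequest to oauth2_proxy that validates the session cookie
    nginx.ingress.kubernetes.io/auth-url: "https://dashboard.example.com/oauth2/auth"
    # where unauthenticated users are sent; rd preserves the original URL
    nginx.ingress.kubernetes.io/auth-signin: "https://dashboard.example.com/oauth2/start?rd=$escaped_request_uri"
spec:
  rules:
  - host: dashboard.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: kubernetes-dashboard
          servicePort: 443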

Trying the integration

Dashboard Installation

Attention: if you are going to repeat the steps described below, first read through to the next subheading in order to avoid unnecessary operations.

Since we use this integration in many installations, we have automated it. The source code required for this is published in a dedicated GitHub repository. It is based on slightly modified YAML configurations from the official Dashboard repository, plus a Bash script for quick deployment.

The script installs Dashboard into the cluster and configures it to integrate with GitLab:

$ ./ctl.sh  
Usage: ctl.sh [OPTION]... --gitlab-url GITLAB_URL --oauth2-id ID --oauth2-secret SECRET --dashboard-url DASHBOARD_URL
Install kubernetes-dashboard to Kubernetes cluster.
Mandatory arguments:
 -i, --install                install into 'kube-system' namespace
 -u, --upgrade                upgrade existing installation, will reuse password and host names
 -d, --delete                 remove everything, including the namespace
     --gitlab-url             set gitlab url with schema (https://gitlab.example.com)
     --oauth2-id              set OAUTH2_PROXY_CLIENT_ID from gitlab
     --oauth2-secret          set OAUTH2_PROXY_CLIENT_SECRET from gitlab
     --dashboard-url          set dashboard url without schema (dashboard.example.com)
Optional arguments:
 -h, --help                   output this message

However, before using it, you need to go to GitLab (Admin area → Applications) and add a new application for the future panel. Let's call it "kubernetes dashboard".
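When adding the application, GitLab will ask for a Redirect URI; with oauth2_proxy in front of the panel it should point at the proxy's callback path, which is /oauth2/callback by default, i.e. https://dashboard.example.com/oauth2/callback in our example.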

As a result of adding it, GitLab will provide the hashes (Application ID and Secret).

They are used as arguments to the script, so the installation looks like this:

$ ./ctl.sh -i --gitlab-url https://gitlab.example.com --oauth2-id 6a52769e… --oauth2-secret 6b79168f… --dashboard-url dashboard.example.com

After that, check that everything started:

$ kubectl -n kube-system get pod | egrep '(dash|oauth)'
kubernetes-dashboard-76b55bc9f8-xpncp   1/1       Running   0          14s
oauth2-proxy-5586ccf95c-czp2v           1/1       Running   0          14s

Sooner or later everything will start, but authorization will not work right away! The fact is that in the image used (the situation is similar in other images), the handling of the redirect in the callback is implemented incorrectly. This leads to oauth erasing the very cookie that oauth itself gives us…

The problem is solved by building your own oauth image with a patch.

Patch to oauth and reinstall

To do this, we will use the following Dockerfile:

FROM golang:1.9-alpine3.7
WORKDIR /go/src/github.com/bitly/oauth2_proxy

RUN apk --update add make git build-base curl bash ca-certificates wget \
 && update-ca-certificates \
 && curl -sSO https://raw.githubusercontent.com/pote/gpm/v1.4.0/bin/gpm \
 && chmod +x gpm \
 && mv gpm /usr/local/bin
RUN git clone https://github.com/bitly/oauth2_proxy.git . \
 && git checkout bfda078caa55958cc37dcba39e57fc37f6a3c842
ADD rd.patch .
RUN patch -p1 < rd.patch \
 && ./dist.sh

FROM alpine:3.7
RUN apk --update add curl bash  ca-certificates && update-ca-certificates
COPY --from=0 /go/src/github.com/bitly/oauth2_proxy/dist/ /bin/

EXPOSE 8080 4180
ENTRYPOINT [ "/bin/oauth2_proxy" ]
CMD [ "--upstream=http://0.0.0.0:8080/", "--http-address=0.0.0.0:4180" ]

And here is what rd.patch itself looks like:

diff --git a/dist.sh b/dist.sh
index a00318b..92990d4 100755
--- a/dist.sh
+++ b/dist.sh
@@ -14,25 +14,13 @@ goversion=$(go version | awk '{print $3}')
 sha256sum=()
 
 echo "... running tests"
-./test.sh
+#./test.sh
 
-for os in windows linux darwin; do
-    echo "... building v$version for $os/$arch"
-    EXT=
-    if [ $os = windows ]; then
-        EXT=".exe"
-    fi
-    BUILD=$(mktemp -d ${TMPDIR:-/tmp}/oauth2_proxy.XXXXXX)
-    TARGET="oauth2_proxy-$version.$os-$arch.$goversion"
-    FILENAME="oauth2_proxy-$version.$os-$arch$EXT"
-    GOOS=$os GOARCH=$arch CGO_ENABLED=0 \
-        go build -ldflags="-s -w" -o $BUILD/$TARGET/$FILENAME || exit 1
-    pushd $BUILD/$TARGET
-    sha256sum+=("$(shasum -a 256 $FILENAME || exit 1)")
-    cd .. && tar czvf $TARGET.tar.gz $TARGET
-    mv $TARGET.tar.gz $DIR/dist
-    popd
-done
+os='linux'
+echo "... building v$version for $os/$arch"
+TARGET="oauth2_proxy-$version.$os-$arch.$goversion"
+GOOS=$os GOARCH=$arch CGO_ENABLED=0 \
+    go build -ldflags="-s -w" -o ./dist/oauth2_proxy || exit 1
 
 checksum_file="sha256sum.txt"
 cd $DIR/dist
diff --git a/oauthproxy.go b/oauthproxy.go
index 21e5dfc..df9101a 100644
--- a/oauthproxy.go
+++ b/oauthproxy.go
@@ -381,7 +381,9 @@ func (p *OAuthProxy) SignInPage(rw http.ResponseWriter, req *http.Request, code
        if redirect_url == p.SignInPath {
                redirect_url = "/"
        }
-
+       if req.FormValue("rd") != "" {
+               redirect_url = req.FormValue("rd")
+       }
        t := struct {
                ProviderName  string
                SignInMessage string

Now we can build the image and push it to our own GitLab registry. Next, in manifests/kube-dashboard-oauth2-proxy.yaml, specify your image instead of the default one:

 image: docker.io/colemickens/oauth2_proxy:latest
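For reference, building the patched image and pushing it to the registry might look like this (the registry address and path are placeholders, consistent with the secret shown below):

$ docker build -t registry.company.com/k8s/oauth2_proxy:latest .
$ docker login registry.company.com
$ docker push registry.company.com/k8s/oauth2_proxy:latest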

If your registry requires authorization, do not forget to add the use of a secret for pulling images:

      imagePullSecrets:
      - name: gitlab-registry

... and add the registry secret itself:

---
apiVersion: v1
data:
 .dockercfg: eyJyZWdpc3RyeS5jb21wYW55LmNvbSI6IHsKICJ1c2VybmFtZSI6ICJvYXV0aDIiLAogInBhc3N3b3JkIjogIlBBU1NXT1JEIiwKICJhdXRoIjogIkFVVEhfVE9LRU4iLAogImVtYWlsIjogIm1haWxAY29tcGFueS5jb20iCn0KfQoK
kind: Secret
metadata:
 name: gitlab-registry
 namespace: kube-system
type: kubernetes.io/dockercfg

The attentive reader will notice that the long string above is the base64 encoding of this config:

{"registry.company.com": {
 "username": "oauth2",
 "password": "PASSWORD",
 "auth": "AUTH_TOKEN",
 "email": "[email protected]"
}
}

These are the credentials of a GitLab user that Kubernetes will use to pull images from the registry.
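By the way, instead of crafting the base64 by hand, an equivalent secret can be generated with kubectl; it will be of the newer kubernetes.io/dockerconfigjson type, which also works for pulling images (the values are placeholders):

$ kubectl -n kube-system create secret docker-registry gitlab-registry \
    --docker-server=registry.company.com \
    --docker-username=oauth2 \
    --docker-password=PASSWORD \
    --docker-email=mail@company.com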

After everything is done, you can remove the current (incorrectly working) installation of Dashboard with the command:

$ ./ctl.sh -d

... and install everything again:

$ ./ctl.sh -i --gitlab-url https://gitlab.example.com --oauth2-id 6a52769e… --oauth2-secret 6b79168f… --dashboard-url dashboard.example.com

It's time to go to the Dashboard and find a rather archaic-looking login button.

After clicking on it, GitLab will greet us with its usual login page (if, of course, we were not already authorized there).

Log in with your GitLab credentials, and it's all done.

About Dashboard features

If you are a developer who has not worked with Kubernetes before, or if for some reason you have simply never come across Dashboard, I will illustrate some of its capabilities.

First, you can see that "everything is green".

For pods, more detailed data is also available, such as environment variables, the image used, launch arguments, and their state.

The statuses of Deployments are visible, as are their other details, and there is also the ability to scale a Deployment right from the UI and immediately see the result of this operation.
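For those who prefer the console, this is roughly the equivalent of (namespace and Deployment name are placeholders):

$ kubectl -n my-namespace scale deployment/my-app --replicas=3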

Among other useful features already mentioned at the beginning of the article is viewing logs…

… and the ability to open a console in a container of the selected pod.

Also, for example, you can see the limits/requests on the nodes.
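In the console, roughly the same information is available via (the node name is a placeholder):

$ kubectl describe node my-node-1 | grep -A10 "Allocated resources"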

Of course, these are not all the features of the panel, but I hope this gives a general idea.

Disadvantages of integration and Dashboard

The integration described here has no access control. With it, all users with any access to GitLab get access to the Dashboard, and all of them get the same level of access in Dashboard itself, corresponding to the rights defined for Dashboard in RBAC. Obviously, this is not suitable for everyone, but for our case it turned out to be sufficient.
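If you do need at least a coarse filter, oauth2_proxy can restrict logins by email domain via its --email-domain flag (filtering by GitLab group appeared only in the newer oauth2-proxy fork). A sketch of what this could look like in the proxy's Deployment (the domain is a placeholder):

      args:
      - --provider=gitlab
      - --email-domain=company.com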

Of the noticeable disadvantages in the Dashboard panel itself, I will note the following:

  • it is impossible to get into the console of the init container;
  • it is impossible to edit Deployments and StatefulSets, although this can be fixed in the ClusterRole (see the sketch after this list);
  • Dashboard compatibility with the latest versions of Kubernetes and the future of the project is questionable.
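For the second point, here is a sketch of the additional permissions one could grant in the Dashboard's ClusterRole to allow editing Deployments and StatefulSets (the role name is hypothetical, and the binding to the Dashboard's ServiceAccount is left out):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kubernetes-dashboard-edit-apps
rules:
- apiGroups: ["apps"]
  resources: ["deployments", "statefulsets"]
  verbs: ["get", "list", "watch", "update", "patch"]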

The last problem deserves special attention.

Dashboard status and alternatives

The compatibility table of Dashboard with Kubernetes releases, as provided in the latest version of the project (v1.10.1), is not very encouraging.

Despite this, there is PR #3476 (already merged in January), which announces support for K8s 1.13. In addition, among the project's issues you can find mentions of users working with the panel in K8s 1.14. Finally, commits to the project's codebase have not stopped. So (at least!) the real status of the project is not as bad as it might first appear from the official compatibility table.

Finally, Dashboard has alternatives. Among them:

  1. K8Dash - a young interface (the first commits date back to March of this year) that already offers good features, such as a visual representation of the current status of the cluster and management of its objects. It is positioned as a "real-time interface", because it automatically updates the displayed data without requiring you to refresh the page in the browser.
  2. OpenShift Console - a web interface from Red Hat OpenShift, which, however, brings the other developments of this project into your cluster, which does not suit everyone.
  3. Kubernator - an interesting project, created as a lower-level (than Dashboard) interface with the ability to view all cluster objects. However, it looks like its development has stopped.
  4. Polaris - a project announced just a few days ago that combines the functions of a panel (it shows the current state of the cluster but does not manage its objects) and automatic "best practices validation" (it checks the cluster for the correctness of the configuration of the Deployments running in it).

Instead of conclusions

Dashboard is the standard tool for the Kubernetes clusters we maintain. Its integration with GitLab has also become part of our "default installation", as many developers are happy with the capabilities they get with this panel.

Alternatives to Kubernetes Dashboard periodically appear in the open source community (and we are happy to consider them), but at this stage we are sticking with this solution.


Source: habr.com
