Integration of Kubernetes Dashboard and GitLab Users
Kubernetes Dashboard is an easy-to-use tool for getting up-to-date information about a running cluster and performing minimal management of it. You appreciate it even more when access to these features is needed not only by administrators and DevOps engineers, but also by people who are less at home in the console and/or do not intend to deal with all the intricacies of kubectl and other utilities. That is what happened to us: the developers wanted quick access to a Kubernetes web interface, and since we use GitLab, the solution suggested itself.
Why is that?
Developers themselves may be interested in a tool like the K8s Dashboard for debugging tasks. Sometimes you want to view logs and resources, and sometimes to kill pods, scale Deployments/StatefulSets, or even open a container console (there are such requests too, although for the last one there is another way, for example, via kubectl-debug).
In addition, there is a psychological aspect for managers: they want to look at the cluster, see that "everything is green", and reassure themselves that "everything works" (which, of course, is very relative... but that is beyond the scope of this article).
Our standard CI system is GitLab, and all our developers use it. Therefore, to give them access, it was logical to integrate the Dashboard with GitLab accounts.
Also note that we use NGINX Ingress. If you work with other Ingress solutions, you will need to find the equivalents of the authorization annotations yourself.
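For reference, with NGINX Ingress the authorization is wired up via annotations on the Dashboard's Ingress resource. Below is a sketch of how such delegation to oauth2-proxy typically looks; the hostname is a placeholder, and the actual manifests may differ:

```yaml
# Sketch: delegate authentication for the Dashboard's Ingress to oauth2-proxy.
# "dashboard.example.com" is a placeholder hostname.
metadata:
  annotations:
    nginx.ingress.kubernetes.io/auth-url: "https://dashboard.example.com/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://dashboard.example.com/oauth2/start?rd=$escaped_request_uri"
```

With these annotations, NGINX asks oauth2-proxy whether the request is authenticated and redirects unauthenticated users to the sign-in flow.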
Trying integration
Dashboard Installation
Attention: if you are going to repeat the steps described below, then, to avoid unnecessary operations, first read on to the next subheading.
Since we use this integration in many installations, we have automated it. The sources required for this are published in a dedicated GitHub repository. They are based on slightly modified YAML configurations from the official Dashboard repository, plus a Bash script for quick deployment.
The script installs Dashboard into the cluster and configures it to integrate with GitLab:
$ ./ctl.sh
Usage: ctl.sh [OPTION]... --gitlab-url GITLAB_URL --oauth2-id ID --oauth2-secret SECRET --dashboard-url DASHBOARD_URL
Install kubernetes-dashboard to Kubernetes cluster.
Mandatory arguments:
  -i, --install      install into 'kube-system' namespace
  -u, --upgrade      upgrade existing installation, will reuse password and host names
  -d, --delete       remove everything, including the namespace
  --gitlab-url       set gitlab url with schema (https://gitlab.example.com)
  --oauth2-id        set OAUTH2_PROXY_CLIENT_ID from gitlab
  --oauth2-secret    set OAUTH2_PROXY_CLIENT_SECRET from gitlab
  --dashboard-url    set dashboard url without schema (dashboard.example.com)
Optional arguments:
  -h, --help         output this message
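For example, a first installation could be invoked as follows. All values here are placeholders (the OAuth2 values come from the GitLab application registered in the next step), so this is purely illustrative:

```shell
# Illustrative invocation; replace every value with your own.
./ctl.sh --install \
  --gitlab-url https://gitlab.example.com \
  --oauth2-id "<APPLICATION_ID_FROM_GITLAB>" \
  --oauth2-secret "<SECRET_FROM_GITLAB>" \
  --dashboard-url dashboard.example.com
```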
However, before using it, you need to go to GitLab (Admin area → Applications) and add a new application for the future panel. The Redirect URI should point at the oauth2-proxy callback on the panel's domain (in our examples, https://dashboard.example.com/oauth2/callback). Let's call the application "kubernetes dashboard":
As a result of adding it, GitLab will provide two hashes, the Application ID and Secret:
They are used as arguments to the script. As a result, the installation looks like this:
$ kubectl -n kube-system get pod | egrep '(dash|oauth)'
kubernetes-dashboard-76b55bc9f8-xpncp 1/1 Running 0 14s
oauth2-proxy-5586ccf95c-czp2v 1/1 Running 0 14s
Sooner or later everything will start; however, authorization will not work right away! The fact is that in the image used (the situation is similar in other images), the process of catching the redirect in the callback is implemented incorrectly. As a result, oauth erases the very cookie that oauth itself gives us...
The problem is solved by building your own oauth image with a patch.
diff --git a/dist.sh b/dist.sh
index a00318b..92990d4 100755
--- a/dist.sh
+++ b/dist.sh
@@ -14,25 +14,13 @@ goversion=$(go version | awk '{print $3}')
sha256sum=()
echo "... running tests"
-./test.sh
+#./test.sh
-for os in windows linux darwin; do
- echo "... building v$version for $os/$arch"
- EXT=
- if [ $os = windows ]; then
- EXT=".exe"
- fi
- BUILD=$(mktemp -d ${TMPDIR:-/tmp}/oauth2_proxy.XXXXXX)
- TARGET="oauth2_proxy-$version.$os-$arch.$goversion"
- FILENAME="oauth2_proxy-$version.$os-$arch$EXT"
- GOOS=$os GOARCH=$arch CGO_ENABLED=0
- go build -ldflags="-s -w" -o $BUILD/$TARGET/$FILENAME || exit 1
- pushd $BUILD/$TARGET
- sha256sum+=("$(shasum -a 256 $FILENAME || exit 1)")
- cd .. && tar czvf $TARGET.tar.gz $TARGET
- mv $TARGET.tar.gz $DIR/dist
- popd
-done
+os='linux'
+echo "... building v$version for $os/$arch"
+TARGET="oauth2_proxy-$version.$os-$arch.$goversion"
+GOOS=$os GOARCH=$arch CGO_ENABLED=0
+ go build -ldflags="-s -w" -o ./dist/oauth2_proxy || exit 1
checksum_file="sha256sum.txt"
cd $DIR/dists
diff --git a/oauthproxy.go b/oauthproxy.go
index 21e5dfc..df9101a 100644
--- a/oauthproxy.go
+++ b/oauthproxy.go
@@ -381,7 +381,9 @@ func (p *OAuthProxy) SignInPage(rw http.ResponseWriter, req *http.Request, code
if redirect_url == p.SignInPath {
redirect_url = "/"
}
-
+ if req.FormValue("rd") != "" {
+ redirect_url = req.FormValue("rd")
+ }
t := struct {
ProviderName string
SignInMessage string
Now we can build the image and push it to our GitLab. Next, in manifests/kube-dashboard-oauth2-proxy.yaml, specify the desired image (replace it with your own):
image: docker.io/colemickens/oauth2_proxy:latest
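Building and pushing the patched image might look like this; the registry path is purely illustrative and should be replaced with your own GitLab registry:

```shell
# Illustrative: build the patched oauth2_proxy image and push it
# to your own registry (path below is a placeholder).
docker build -t registry.gitlab.example.com/infra/oauth2_proxy:latest .
docker push registry.gitlab.example.com/infra/oauth2_proxy:latest
```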
If your registry requires authentication, do not forget to add a secret for pulling images:
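A minimal sketch of such a pull-secret reference in the oauth2-proxy Deployment's pod spec; the secret name registry-secret is an assumption:

```yaml
# Sketch: reference an image pull secret in the pod spec.
# Create the secret first, e.g.:
#   kubectl -n kube-system create secret docker-registry registry-secret \
#     --docker-server=registry.gitlab.example.com \
#     --docker-username=<USER> --docker-password=<TOKEN>
spec:
  template:
    spec:
      imagePullSecrets:
      - name: registry-secret
```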
It's time to go to the Dashboard and find a rather archaic login button:
After clicking it, GitLab greets us with its familiar login page (if, of course, we were not already authorized there):
Log in with GitLab credentials - and it's all done:
About Dashboard features
If you are a developer who has not worked with Kubernetes before, or simply have not come across the Dashboard for some reason, I will illustrate some of its capabilities.
First, you can see that "everything is green":
For pods, more detailed data is also available, such as environment variables, the image being used, launch arguments, and their state:
Deployments have visible statuses:
... and other details:
... and there is also the ability to scale the deployment:
The result of this operation:
Among other useful features already mentioned at the beginning of the article is viewing logs:
… and the function to log in to the container console of the selected pod:
Also, for example, you can see the limits / requests on the nodes:
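For reference, the operations shown above also have console equivalents. The namespace and object names below are placeholders, so treat these commands as illustrative rather than copy-paste ready:

```shell
# Placeholders: namespace "production", Deployment "backend", pod/node names.
kubectl -n production logs deploy/backend --tail=100       # view logs
kubectl -n production exec -it backend-5f7d8-abcde -- sh   # container console
kubectl -n production scale deploy/backend --replicas=3    # scale a Deployment
kubectl describe node node-1                               # limits/requests per node
```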
Of course, these are not all of the panel's features, but I hope the general idea has come across.
Disadvantages of integration and Dashboard
The integration described here has no access control: every user with any access to GitLab gets access to the Dashboard. Inside the Dashboard, they all have identical permissions, matching the rights granted to the Dashboard itself in RBAC. Obviously, this will not suit everyone, but for our case it turned out to be sufficient.
Of the noticeable disadvantages in the Dashboard panel itself, I will note the following:
it is impossible to get into the console of the init container;
it is impossible to edit Deployments and StatefulSets, although this is fixable in ClusterRole;
Dashboard compatibility with the latest versions of Kubernetes and the future of the project is questionable.
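Regarding the second point above, a hedged sketch of the rules that could be added to the Dashboard's ClusterRole to make Deployments and StatefulSets editable; the role name here is an assumption about what your installation uses:

```yaml
# Sketch: extend the ClusterRole bound to the Dashboard's ServiceAccount
# so Deployments/StatefulSets can be edited from the UI.
# The metadata.name is a placeholder for your installation's role name.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kubernetes-dashboard
rules:
- apiGroups: ["apps"]
  resources: ["deployments", "statefulsets"]
  verbs: ["get", "list", "watch", "update", "patch"]
```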
The last problem deserves special attention.
Status and Dashboard Alternatives
The compatibility table between the Dashboard and Kubernetes releases, provided in the latest version of the project (v1.10.1), is not very encouraging:
Despite this, there is PR #3476 (merged back in January), which announces support for K8s 1.13. In addition, among the project's issues you can find mentions of users working with the panel on K8s 1.14. Finally, commits to the project's codebase have not stopped. So (at the very least!) the actual status of the project is not as bad as it might first appear from the official compatibility table.
Finally, Dashboard has alternatives. Among them:
K8Dash - a young interface (the first commits date back to March of this year) that already offers good features, such as a visual representation of the cluster's current status and management of its objects. It is positioned as a "real-time interface" because it automatically updates the displayed data without requiring you to refresh the page in the browser.
OpenShift Console - a web interface from Red Hat OpenShift, which, however, brings the other developments of that project into your cluster, which does not suit everyone.
Kubernator - an interesting project, created as a lower-level (than Dashboard) interface with the ability to view all cluster objects. However, it looks like its development has stopped.
Polaris - a project announced just a few days ago that combines the functions of a panel (it shows the current state of the cluster but does not manage its objects) with automatic "best practices" validation (it checks the cluster for the correctness of the configuration of the Deployments running in it).
Instead of conclusions
The Dashboard is the standard tool for the Kubernetes clusters we maintain. Its integration with GitLab has also become part of our "default installation", since many developers are happy with the capabilities this panel gives them.
Alternatives to the Kubernetes Dashboard periodically appear in the open source community (and we are happy to consider them), but for now we are staying with this solution.