Bolting LDAP authorization onto Kubernetes

A small tutorial on how to use Keycloak to connect Kubernetes to your LDAP server and set up the import of users and groups. This will let you configure RBAC for your users and use an auth proxy to protect the Kubernetes Dashboard and other applications that cannot authorize users on their own.

Keycloak Installation

Let's assume that you already have an LDAP server. It could be Active Directory, FreeIPA, OpenLDAP, or anything else. If you do not have an LDAP server, you can in principle create users directly in the Keycloak interface or use public OIDC providers (Google, GitHub, GitLab); the result will be almost the same.

First of all, let's install Keycloak itself. It can be installed separately or directly into the Kubernetes cluster; as a rule, if you have several Kubernetes clusters, it is easier to install it separately. Alternatively, you can always use the official Helm chart and install it directly into your cluster.

To store its data, Keycloak needs a database. The default is H2 (all data is stored locally), but PostgreSQL, MySQL, or MariaDB can also be used.
If you decide to install Keycloak separately, you can find more detailed instructions in the official documentation.
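If you go the in-cluster route, the installation can be sketched with Helm. The repo and chart names below (the codecentric chart), the namespace, and the release name are assumptions; consult the chart's own values documentation for database configuration:

```shell
# A sketch, not a production setup: installs Keycloak from the codecentric
# chart into its own namespace. Chart name, namespace, and release name are
# assumptions -- check the chart's values for database and ingress settings.
helm repo add codecentric https://codecentric.github.io/helm-charts
helm install keycloak codecentric/keycloak --namespace keycloak --create-namespace
```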

Federation setup

First of all, let's create a new realm. A realm is the space of our application: each application can have its own realm with its own users and authorization settings. The master realm is used by Keycloak itself, and using it for anything else is wrong.

Click Add realm

Name: kubernetes
Display Name: Kubernetes
HTML Display Name: <img src="https://kubernetes.io/images/nav_logo.svg" width="400" >

Kubernetes by default checks whether the user's email is confirmed. Since we are using our own LDAP server, this check will almost always return false. Let's remove this claim from the tokens issued to Kubernetes:

Client scopes -> Email -> Mappers -> email verified (delete)

Now let's set up the federation, for this we go to:

User federation -> Add provider… -> ldap

Here is an example setup for FreeIPA:

Console Display Name: freeipa.example.org
Vendor: Red Hat Directory Server
UUID LDAP attribute: ipauniqueid
Connection URL: ldaps://freeipa.example.org
Users DN: cn=users,cn=accounts,dc=example,dc=org
Bind DN: uid=keycloak-svc,cn=users,cn=accounts,dc=example,dc=org
Bind Credential: <password>
Allow Kerberos authentication: on
Kerberos Realm: EXAMPLE.ORG
Server Principal: HTTP/freeipa.example.org@EXAMPLE.ORG
Keytab: /etc/krb5.keytab

The keycloak-svc user must be created on your LDAP server in advance.

In the case of Active Directory, simply select Vendor: Active Directory, and the necessary settings will be filled into the form automatically.

Click Save

Now let's move on:

User federation -> freeipa.example.org -> Mappers -> First Name

LDAP Attribute: givenName

Now enable group mapping:

User federation -> freeipa.example.org -> Mappers -> Create

Name: groups
Mapper Type: group-ldap-mapper
LDAP Groups DN: cn=groups,cn=accounts,dc=example,dc=org
User Groups Retrieve Strategy: GET_GROUPS_FROM_USER_MEMBEROF_ATTRIBUTE
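This retrieval strategy reads group DNs from the user's memberOf attribute. As a rough illustration (not Keycloak's actual code), extracting group names from memberOf values under our LDAP Groups DN looks like this:

```python
# Toy illustration of the memberOf strategy: keep only entries whose parent
# DN is the configured LDAP Groups DN, and take the group name from the cn=.
# This is NOT Keycloak's implementation, just a sketch of the idea.

GROUPS_DN = "cn=groups,cn=accounts,dc=example,dc=org"

def groups_from_member_of(member_of: list[str], groups_dn: str = GROUPS_DN) -> list[str]:
    """Extract group names (the leading cn=) from memberOf DNs under groups_dn."""
    names = []
    for dn in member_of:
        rdn, _, parent = dn.partition(",")
        if parent == groups_dn and rdn.lower().startswith("cn="):
            names.append(rdn[3:])
    return names

member_of = [
    "cn=kubernetes-default-namespace-admins,cn=groups,cn=accounts,dc=example,dc=org",
    "cn=ipausers,cn=groups,cn=accounts,dc=example,dc=org",
    "cn=admins,cn=other,dc=example,dc=org",  # outside the groups DN, ignored
]
print(groups_from_member_of(member_of))
# -> ['kubernetes-default-namespace-admins', 'ipausers']
```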

This completes the federation setup; let's move on to setting up the client.

Client setup

Let's create a new client (the application that will receive users from Keycloak). Go to:

Clients -> Create

Client ID: kubernetes
Access Type: confidential
Root URL: http://kubernetes.example.org/
Valid Redirect URIs: http://kubernetes.example.org/*
Admin URL: http://kubernetes.example.org/

We will also create a scope for groups:

Client scopes -> Create

Template: No template
Name: groups
Full group path: false

And set up a mapper for them:

Client scopes -> groups -> Mappers -> Create

Name: groups
Mapper Type: Group Membership
Token Claim Name: groups

Now we need to enable group mapping in our client scope:

Clients -> kubernetes -> Client Scopes -> Default Client Scopes

Select groups in Available Client Scopes and push Add selected.

Now let's set up the authentication of our application, go to:

Clients -> kubernetes

Authorization Enabled: ON

Push Save; this completes the client setup. Now, on the tab

Clients -> kubernetes -> Credentials

you can get the Secret, which we will use later.
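You can sanity-check the Client ID and Secret by requesting a token straight from Keycloak's token endpoint. A minimal sketch, assuming the password grant is enabled for the client; the helper only builds the request, and the hostnames and credentials are placeholders:

```python
from urllib.parse import urlencode

def token_request(base_url: str, realm: str, client_id: str,
                  client_secret: str, username: str, password: str):
    """Build the URL and form body for a Resource Owner Password Credentials
    grant against Keycloak's token endpoint (a sketch; adjust to your setup)."""
    url = f"{base_url}/auth/realms/{realm}/protocol/openid-connect/token"
    body = urlencode({
        "grant_type": "password",
        "client_id": client_id,
        "client_secret": client_secret,
        "username": username,
        "password": password,
        "scope": "openid email groups",
    })
    return url, body

url, body = token_request("https://keycloak.example.org", "kubernetes",
                          "kubernetes", "<secret>", "user", "pass")
print(url)
# Send it with e.g. urllib.request.urlopen(url, data=body.encode())
```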

Configuring Kubernetes

Setting up Kubernetes for OIDC authorization is quite straightforward. All you need to do is put the CA certificate of your OIDC server into /etc/kubernetes/pki/oidc-ca.pem and add the necessary options to kube-apiserver.
To do this, update /etc/kubernetes/manifests/kube-apiserver.yaml on all your masters:

...
spec:
  containers:
  - command:
    - kube-apiserver
...
    - --oidc-ca-file=/etc/kubernetes/pki/oidc-ca.pem
    - --oidc-client-id=kubernetes
    - --oidc-groups-claim=groups
    - --oidc-issuer-url=https://keycloak.example.org/auth/realms/kubernetes
    - --oidc-username-claim=email
...

And also update the kubeadm config in the cluster so as not to lose these settings during the update:

kubectl edit -n kube-system configmaps kubeadm-config

...
data:
  ClusterConfiguration: |
    apiServer:
      extraArgs:
        oidc-ca-file: /etc/kubernetes/pki/oidc-ca.pem
        oidc-client-id: kubernetes
        oidc-groups-claim: groups
        oidc-issuer-url: https://keycloak.example.org/auth/realms/kubernetes
        oidc-username-claim: email
...

This completes the Kubernetes setup. You can repeat these steps across all of your Kubernetes clusters.
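kube-apiserver discovers the token signing keys and endpoints through OIDC discovery: it fetches the metadata document published under --oidc-issuer-url. A small sketch for building that URL, handy for checking the issuer with curl before restarting the apiserver:

```python
def discovery_url(issuer: str) -> str:
    """Return the OIDC discovery document URL for a given issuer
    (kube-apiserver fetches this document to validate ID tokens)."""
    return issuer.rstrip("/") + "/.well-known/openid-configuration"

print(discovery_url("https://keycloak.example.org/auth/realms/kubernetes"))
# https://keycloak.example.org/auth/realms/kubernetes/.well-known/openid-configuration
```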

Initial Authorization

After these steps, you will already have a Kubernetes cluster with OIDC authorization configured. The catch is that your users do not yet have a configured client or their own kubeconfig. To solve this, you need to set up automatic issuance of kubeconfig files to users after successful authorization.

To do this, you can use special web applications that authenticate the user and then hand out a ready-made kubeconfig. One of the most convenient is Kuberos: it lets you describe all your Kubernetes clusters in a single config and easily switch between them.

To configure Kuberos, it is enough to describe a kubeconfig template and run it with the following parameters:

kuberos https://keycloak.example.org/auth/realms/kubernetes kubernetes /cfg/secret /cfg/template

For more details, see the Usage section on GitHub.

It is also possible to use kubelogin if you want to authorize directly on the user's computer. In this case, the user will open a browser with an authorization form on localhost.

The resulting kubeconfig can be checked at jwt.io. Just copy the value of users[].user.auth-provider.config.id-token from your kubeconfig into the form on the site and you will immediately see the decoded token.
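If you would rather not paste a token into a third-party site, the payload can be decoded locally: a JWT is three base64url-encoded parts joined by dots, and the claims live in the second part. A small sketch (no signature verification, so use it for inspection only):

```python
import base64
import json

def jwt_claims(token: str) -> dict:
    """Decode the payload (second part) of a JWT without verifying the signature."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore the stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

# A toy token built from a sample payload (header and signature are dummies):
claims = {"email": "user@example.org", "groups": ["kubernetes-default-namespace-admins"]}
payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).rstrip(b"=").decode()
token = f"eyJhbGciOiJSUzI1NiJ9.{payload}.signature"
print(jwt_claims(token)["groups"])
# -> ['kubernetes-default-namespace-admins']
```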

RBAC setup

When configuring RBAC, you can refer both to the username (the claim selected by --oidc-username-claim, email in our case) and to user groups (the groups claim in the JWT token). Here is an example of setting permissions for the group kubernetes-default-namespace-admins:

kubernetes-default-namespace-admins.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: default-admins
  namespace: default
rules:
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - '*'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-default-namespace-admins
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: default-admins
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: kubernetes-default-namespace-admins

More examples for RBAC can be found in official Kubernetes documentation

Setting auth-proxy

There is a wonderful project, keycloak-gatekeeper, which allows you to protect any application by requiring the user to authenticate against the OIDC server. I'll show how to set it up using the Kubernetes Dashboard as an example:

dashboard-proxy.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubernetes-dashboard-proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubernetes-dashboard-proxy
  template:
    metadata:
      labels:
        app: kubernetes-dashboard-proxy
    spec:
      containers:
      - args:
        - --listen=0.0.0.0:80
        - --discovery-url=https://keycloak.example.org/auth/realms/kubernetes
        - --client-id=kubernetes
        - --client-secret=<your-client-secret-here>
        - --redirection-url=https://kubernetes-dashboard.example.org
        - --enable-refresh-tokens=true
        - --encryption-key=ooTh6Chei1eefooyovai5ohwienuquoh
        - --upstream-url=https://kubernetes-dashboard.kube-system
        - --resources=uri=/*
        image: keycloak/keycloak-gatekeeper
        name: kubernetes-dashboard-proxy
        ports:
        - containerPort: 80
        livenessProbe:
          httpGet:
            path: /oauth/health
            port: 80
          initialDelaySeconds: 3
          timeoutSeconds: 2
        readinessProbe:
          httpGet:
            path: /oauth/health
            port: 80
          initialDelaySeconds: 3
          timeoutSeconds: 2
---
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard-proxy
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: kubernetes-dashboard-proxy
  type: ClusterIP
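The proxy still needs to be published at the redirection URL. An Ingress is not part of the original setup, but a sketch of one might look like this (host, ingress class, and TLS handling are assumptions to adjust for your ingress controller):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kubernetes-dashboard-proxy
spec:
  rules:
  - host: kubernetes-dashboard.example.org
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kubernetes-dashboard-proxy
            port:
              number: 80
```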

Source: habr.com
