Adding LDAP authentication to Kubernetes

Andrei Kvapil
Feb 26, 2019

A short guide on how to set up Keycloak to connect Kubernetes with your LDAP server and import users and groups. This will allow you to configure RBAC and use an auth proxy to secure the Kubernetes Dashboard and other applications that have no authentication of their own.

Keycloak Installation

Let’s assume that you already have an LDAP server. It can be Active Directory, FreeIPA, OpenLDAP, or something else. If you don’t have an LDAP server, you can just create users directly in the Keycloak interface, or connect other public OIDC providers (Google, GitHub, GitLab); the result will be the same.

First, you need to install Keycloak. You can run it standalone or inside your Kubernetes cluster. If you have many Kubernetes clusters, it usually makes sense to run it standalone. Otherwise, you can simply install the ready-made Helm chart for Keycloak into Kubernetes.

Keycloak stores its data in a database. By default it uses H2 locally, but you can also use PostgreSQL, MySQL, or MariaDB. If you want to install Keycloak standalone, just follow the official documentation.
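If you go the in-cluster route, a minimal sketch with Helm might look like this (I’m assuming the stable/keycloak chart here; check helm inspect values stable/keycloak for the actual keys):

# a sketch, not a complete production setup
helm install --name keycloak stable/keycloak \
  --namespace keycloak \
  --set keycloak.persistence.deployPostgres=true   # bundled PostgreSQL instead of local H2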

Federation configuration

First, you need to create a new realm. A realm is an environment for our application: every application can have its own realm with its own users and authorization settings. The master realm is used by Keycloak itself, and we shouldn’t use it for anything else.

Select Add realm

  • Name: kubernetes
  • Display Name: Kubernetes
  • HTML Display Name: <img src="https://kubernetes.io/images/nav_logo.svg" width="400" />

By default, Kubernetes requires every user to have a verified email. This check will always return false since we are using our own LDAP server, so let's stop passing this claim to Kubernetes.

Client Scopes → Email → Mappers → Email verified (Delete)

Now let’s configure the federation. Go to:

User Federation → Add provider… → ldap

Example configuration for FreeIPA:

  • Console Display Name: freeipa.example.org
  • Vendor: Red Hat Directory Server
  • UUID LDAP attribute: ipauniqueid
  • Connection URL: ldaps://freeipa.example.org
  • Users DN: cn=users,cn=accounts,dc=example,dc=org
  • Bind DN: uid=keycloak-svc,cn=users,cn=accounts,dc=example,dc=org
  • Bind Credential: <password>
  • Allow Kerberos authentication: on
  • Kerberos Realm: EXAMPLE.ORG
  • Server Principal: HTTP/freeipa.example.org@EXAMPLE.ORG
  • KeyTab: /etc/krb5.keytab

The user keycloak-svc should be created in the LDAP server beforehand.
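If your LDAP server is FreeIPA, for example, such a service account can be created like this (the uid matches the Bind DN above):

ipa user-add keycloak-svc --first=Keycloak --last=Service --password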

In the Active Directory case, you can simply use the Vendor: Active Directory setting, and all the needed parameters will be filled in automatically.

Press Save

And go to:

User Federation → freeipa.example.org → Mappers → First Name

  • LDAP attribute: givenName

Now we need to enable group mapping:

User Federation → freeipa.example.org → Mappers → Create

  • Name: groups
  • Mapper type: group-ldap-mapper
  • LDAP Groups DN: cn=groups,cn=accounts,dc=example,dc=org
  • User Groups Retrieve Strategy: GET_GROUPS_FROM_USER_MEMBEROF_ATTRIBUTE

The federation is configured; let’s move on to client configuration.

Client configuration

We need to create a new client (the application that will get users from Keycloak). Go to:

Clients → Create

  • Client ID: kubernetes

Also create a new scope for the groups:

Client Scopes → Create

  • Template: No template
  • Name: groups
  • Full group path: false

And configure a mapper for the groups:

Client Scopes → groups → Mappers → Create

  • Name: groups
  • Mapper Type: Group membership
  • Token Claim Name: groups

Now we need to enable the groups mapping for our client:

Clients → kubernetes → Client Scopes → Default Client Scopes

Select groups in Available Client Scopes and press Add selected

Now let’s configure authentication for our application. Go to:

Clients → kubernetes

  • Authorization Enabled: ON

Press Save; the client configuration is now finished. On the tab

Clients → kubernetes → Credentials

you can find the Secret, which we will use in the next steps.

Kubernetes configuration

Configuring Kubernetes for OIDC authentication is quite trivial. All you need to do is put the CA certificate of your OIDC server into /etc/kubernetes/pki/oidc-ca.pem and add the needed options to kube-apiserver.

Update /etc/kubernetes/manifests/kube-apiserver.yaml on all your masters:

...
spec:
  containers:
  - command:
    - kube-apiserver
    ...
    - --oidc-ca-file=/etc/kubernetes/pki/oidc-ca.pem
    - --oidc-client-id=kubernetes
    - --oidc-groups-claim=groups
    - --oidc-issuer-url=https://keycloak.example.org/auth/realms/kubernetes
    - --oidc-username-claim=email
    ...

Also update the kubeadm-config on your cluster, so as not to lose these settings during an upgrade:

kubectl edit -n kube-system configmaps kubeadm-config

...
data:
  ClusterConfiguration: |
    apiServer:
      extraArgs:
        oidc-ca-file: /etc/kubernetes/pki/oidc-ca.pem
        oidc-client-id: kubernetes
        oidc-groups-claim: groups
        oidc-issuer-url: https://keycloak.example.org/auth/realms/kubernetes
        oidc-username-claim: email
...

Now Kubernetes is configured. You can repeat these steps on all of your Kubernetes clusters.
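As a quick sanity check, you can fetch the standard OIDC discovery document from the issuer URL; kube-apiserver reads the same endpoint:

curl https://keycloak.example.org/auth/realms/kubernetes/.well-known/openid-configuration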

Initial Authorization

After these steps you already have a Kubernetes cluster with OIDC authorization. The one remaining problem is that your users still have no configured client and no kubeconfig file of their own. To solve that, you should configure automatic issuance of a kubeconfig to users after successful authorization.

You can use special web applications that perform the initial authentication and generate a kubeconfig file for download. One of the most useful is Kuberos: it allows you to list all of your Kubernetes clusters in a single kubeconfig file and easily switch between them.

To configure Kuberos, you just need to create a template for the kubeconfig and run it with the following parameters:

kuberos https://keycloak.example.org/auth/realms/kubernetes kubernetes /cfg/secret /cfg/template
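Here /cfg/secret is a file containing the client Secret from the previous section, and /cfg/template is a partial kubeconfig describing your clusters. A minimal sketch of such a template (the cluster name, server URL, and CA data are placeholders; see the Kuberos README for the exact format):

apiVersion: v1
kind: Config
clusters:
- name: production
  cluster:
    server: https://kubernetes.example.org:6443
    certificate-authority-data: <base64-encoded-cluster-ca>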

For more details, see the Usage section on GitHub.

You can also use kubelogin if you want to authorize the user directly on their own computer. In this case, the user will see the authorization page on localhost.
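Either way, you end up with an OIDC user entry in your kubeconfig. A sketch of what it looks like (tokens shortened to placeholders):

users:
- name: user@example.org
  user:
    auth-provider:
      name: oidc
      config:
        client-id: kubernetes
        client-secret: <your-client-secret-here>
        id-token: <id-token>
        idp-issuer-url: https://keycloak.example.org/auth/realms/kubernetes
        refresh-token: <refresh-token>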

You can check the final kubeconfig on the jwt.io site. Just copy the value of users[].user.auth-provider.config.id-token from the kubeconfig into the form on the site and you will immediately get the decoded token.
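The decoded payload should contain the claims we configured: the email used as the username, and the groups list (the values here are purely illustrative; note there is no email_verified claim, since we removed that mapper):

{
  "iss": "https://keycloak.example.org/auth/realms/kubernetes",
  "aud": "kubernetes",
  "email": "user@example.org",
  "groups": [
    "kubernetes-default-namespace-admins"
  ]
}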

RBAC configuration

When configuring RBAC, you can refer to the username (taken from the email claim of the JWT, per our --oidc-username-claim setting) as well as to the user's groups (the groups claim). An example role binding for the group kubernetes-default-namespace-admins:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: default-admins
  namespace: default
rules:
- apiGroups:
  - '*'
  resources:
  - '*'
  verbs:
  - '*'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-default-namespace-admins
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: default-admins
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: kubernetes-default-namespace-admins
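To grant the same role to a single user instead of a whole group, reference the username (the email claim from the token) as a User subject. A sketch (the binding name and email are placeholders):

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: default-admins-user
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: default-admins
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: user@example.org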

You can find more RBAC examples in the official Kubernetes documentation.

Auth-proxy configuration

There is an awesome project, keycloak-gatekeeper, which allows you to secure any application by providing an authentication page to the user. I’ll show an example for the Kubernetes Dashboard:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kubernetes-dashboard-proxy
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: kubernetes-dashboard-proxy
    spec:
      containers:
      - args:
        - --listen=0.0.0.0:80
        - --discovery-url=https://keycloak.example.org/auth/realms/kubernetes
        - --client-id=kubernetes
        - --client-secret=<your-client-secret-here>
        - --redirection-url=https://kubernetes-dashboard.example.org
        - --enable-refresh-tokens=true
        - --encryption-key=ooTh6Chei1eefooyovai5ohwienuquoh
        - --upstream-url=https://kubernetes-dashboard.kube-system
        - --resources=uri=/*
        image: keycloak/keycloak-gatekeeper
        name: kubernetes-dashboard-proxy
        ports:
        - containerPort: 80
        livenessProbe:
          httpGet:
            path: /oauth/health
            port: 80
          initialDelaySeconds: 3
          timeoutSeconds: 2
        readinessProbe:
          httpGet:
            path: /oauth/health
            port: 80
          initialDelaySeconds: 3
          timeoutSeconds: 2
---
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard-proxy
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: kubernetes-dashboard-proxy
  type: ClusterIP
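What remains is to publish the proxy at the redirection URL. A minimal Ingress sketch for that (the hostname and TLS secret name are assumptions; adjust them to your setup):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubernetes-dashboard-proxy
spec:
  rules:
  - host: kubernetes-dashboard.example.org
    http:
      paths:
      - backend:
          serviceName: kubernetes-dashboard-proxy
          servicePort: 80
  tls:
  - hosts:
    - kubernetes-dashboard.example.org
    secretName: kubernetes-dashboard-tls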
