How to expose Kubernetes OIDC JWKS endpoints


This blog introduces k8s-jwks-proxy, a lightweight reverse proxy that securely exposes the Kubernetes API Server's OIDC discovery endpoints without enabling anonymous access, and demonstrates how to make those endpoints publicly reachable under a custom domain.

Seamlessly provision Kubernetes OIDC endpoints without anonymous auth enabled

🧩 The Problem

Kubernetes supports OIDC authentication, so other platforms and systems can validate Kubernetes-issued JWT tokens and integrate with the cluster's identity. However, there is a hidden challenge: securely exposing the OIDC discovery endpoints (i.e. /.well-known/openid-configuration and /openid/v1/jwks) from the Kubernetes API Server while still preventing anonymous access.

If you are running a hardened Kubernetes cluster with --anonymous-auth=false (as you should in production), these endpoints are not publicly accessible — which breaks identity integrations that rely on open JWKS URIs.
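To see the problem in practice, here is a rough illustration; the API server address is a placeholder, and kubectl create token requires Kubernetes 1.24+. An unauthenticated request to the discovery endpoint is rejected, while the same request carrying a Service Account token succeeds.

# API server address is a placeholder for your cluster's endpoint
APISERVER=https://<your-api-server>:6443

# With --anonymous-auth=false this is rejected (401) instead of returning OIDC metadata
curl -k "$APISERVER/.well-known/openid-configuration"

# The same request succeeds when it carries a Service Account token
TOKEN=$(kubectl create token default)
curl -k -H "Authorization: Bearer $TOKEN" "$APISERVER/.well-known/openid-configuration"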

✅ The Solution: JWKS Reverse Proxy for Kubernetes

We’re excited to introduce k8s-jwks-proxy — a secure, lightweight reverse proxy written in Go, purpose-built to expose your Kubernetes OIDC discovery endpoints safely and with minimal configuration.

🔗 GitHub Repository →

https://github.com/gawsoftpl/k8s-apiserver-oidc-reverse-proxy

🛠 What It Does

The proxy authenticates to the Kubernetes API Server using its in-cluster Service Account token, verifies the server's TLS certificate against the Kubernetes CA bundle, and securely forwards only these two endpoints:

  • /.well-known/openid-configuration
  • /openid/v1/jwks

There is no need to lower the security of the cluster or manage extra certificates.
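If you want a quick sanity check of what the proxy will serve, you can fetch the same two paths yourself with your kubeconfig credentials (this assumes kubectl access to the cluster; it is not part of the proxy itself):

# Same two paths the proxy forwards, fetched with your own credentials
kubectl get --raw /.well-known/openid-configuration
kubectl get --raw /openid/v1/jwks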

🚀 Key Features

  • Secure by Default: No anonymous access is permitted; all traffic to the API server is authenticated.
  • Light & Fast: Written in Go with no external dependencies.
  • Production Grade: Simple, stable, and deployable in any Kubernetes cluster.
  • RBAC compliant: The Service Account needs only minimal non-resource read access (see the example after this list).
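
For illustration, the required permission is roughly the following non-resource URL read access. The Helm chart creates its own RBAC objects, and the names and namespace below are placeholders:

cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: oidc-discovery-reader
rules:
- nonResourceURLs:
  - /.well-known/openid-configuration
  - /openid/v1/jwks
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: oidc-discovery-reader
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: oidc-discovery-reader
subjects:
- kind: ServiceAccount
  name: k8s-jwks-proxy
  namespace: default
EOF

Kubernetes also ships a built-in ClusterRole, system:service-account-issuer-discovery, that covers these paths.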

📦 How to Install?

Helm Chart (Recommended for Kubernetes)

helm repo add k8s-jwks-proxy https://gawsoftpl.github.io/k8s-apiserver-oidc-reverse-proxy
helm repo update
helm install k8s-jwks-proxy k8s-jwks-proxy/k8s-jwks-proxy
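
As a quick check after the install, you can confirm the proxy pod is running (the label selector below assumes the chart uses the conventional app.kubernetes.io/name label):

kubectl get pods -l app.kubernetes.io/name=k8s-jwks-proxy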

Full example:

The example below shows how to create a local test Kubernetes cluster with kind, configure custom OIDC issuer settings for the API server, install the NGINX Ingress Controller, and deploy the k8s-jwks-proxy Helm chart with ingress enabled, thereby publicly exposing the OIDC discovery endpoints under a custom domain (oidc.example.com).

The important point here is that the proxy exposes the /.well-known/openid-configuration and /openid/v1/jwks endpoints without any changes to the Kubernetes API server's --anonymous-auth setting and without adding any RBAC permissions beyond the absolute minimum non-resource read access.

Create test cluster

# Create test cluster with custom oidc issuer
cat <<EOF > kind-cluster.yaml
# kind-cluster.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: custom-api-server-args
kubeadmConfigPatches:
- |
  kind: ClusterConfiguration
  apiServer:
    extraArgs:
      service-account-issuer: "https://oidc.example.com"
      service-account-jwks-uri: "https://oidc.example.com/openid/v1/jwks"
nodes:
- role: control-plane
EOF

# Create cluster with custom config
kind create cluster --config ./kind-cluster.yaml

# Create local load balancer for cluster
# Read more: https://gawsoft.com/blog/how-to-enable-load-balancer-in-kind-cluster/
cloud-provider-kind &

Install nginx ingress controller

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install nginx-ingress ingress-nginx/ingress-nginx -n nginx --create-namespace
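
Optionally, wait for the ingress controller to become ready before installing the chart (the label selector is the one the ingress-nginx chart applies by default):

kubectl wait --namespace nginx --for=condition=Ready pod \
  --selector app.kubernetes.io/name=ingress-nginx --timeout=180s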

Update helm repo

helm repo add k8s-jwks-proxy https://gawsoftpl.github.io/k8s-apiserver-oidc-reverse-proxy
helm repo update
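
You can confirm the chart is now visible in your local repo cache:

helm search repo k8s-jwks-proxy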

Install helm chart with k8s jwks proxy to your cluster

cat <<EOF > values.yaml
ingress:
  enabled: true
  className: nginx
  host: oidc.example.com
  annotations:
    # If you have installed cert-manager and set up a ClusterIssuer named letsencrypt-http,
    # you can auto-generate the TLS cert
    # More info: https://cert-manager.io/docs/usage/certificate/
    cert-manager.io/cluster-issuer: "letsencrypt-http"
  tls:
    - hosts:
        - oidc.example.com
      secretName: oidc.example.com
EOF

helm upgrade --install k8s-jwks-proxy -f values.yaml k8s-jwks-proxy/k8s-jwks-proxy

# Get ingress IP
IP=`kubectl get svc nginx-ingress-ingress-nginx-controller -n nginx -o jsonpath='{.status.loadBalancer.ingress[0].ip}'`
echo $IP

# Example requests
curl -k https://oidc.example.com/.well-known/openid-configuration --resolve "oidc.example.com:443:$IP"
curl -k https://oidc.example.com/openid/v1/jwks --resolve "oidc.example.com:443:$IP"

The JSON responses returned from the two endpoints contain the required OIDC metadata and the JWKS keys, ready for consumption by identity providers or third-party systems, securely and with very little configuration needed to get started.

{ "issuer": "https://oidc.example.com", "jwks_uri": "https://oidc.example.com/openid/v1/jwks", "response_types_supported": [ "id_token" ], "subject_types_supported": [ "public" ], "id_token_signing_alg_values_supported": [ "RS256" ] } { "keys": [ { "use": "sig", "kty": "RSA", "kid": "gK8g4Dw_EkSBBa-IfWnXOyLLQwomf8ZyiRvM77BubTs", "alg": "RS256", "n": "tqXkdNE30zn2Gmi5NeT_EomMYKJQRmcIOUqOoNKUvveiL1a_51OYtLjsssvwVY80nJ6nNqSZGkV2PSjorOa9H4pwimlKXRPRJmfme6MmRIQIOUZCxntB2JToP0aQdbo5O_rTH-z6hW_iyd9Y6yN8uW2zDBRw5aNQXx665008ooDQ2p3GR7aaF3KvBiv1rYKLk97PlTGvCDZNb_zuiZKuKR1sNp1QHQolSMTILKEpCRV8vP8j-SG-9LjGJGFiv1JNyYn0n7uyNL076qkU6YVjVu-0I6tcIlTNFvYzVzLnn8qCiMoEf1ZQSDyCdpoGBu6jtw2NvYg95EEp7W-VZ8wHVw", "e": "AQAB" } ] }

Once an external system has retrieved (and cached) the JWKS response, it can verify the signatures of service account tokens (id_tokens) issued by the cluster and use them to authenticate requests coming from in-cluster workloads, improving security and access control.
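As a rough end-to-end illustration (the default Service Account and the audience value are only examples), you can mint a short-lived, audience-scoped token inside the cluster and fetch, from the outside, the keys a verifier would use to check its signature:

# Mint a short-lived Service Account token scoped to the custom issuer's audience
TOKEN=$(kubectl create token default --audience=https://oidc.example.com)

# Fetch the public keys an external verifier would use to validate the token's signature
curl -k https://oidc.example.com/openid/v1/jwks --resolve "oidc.example.com:443:$IP"

Any standard OIDC/JWT library can then validate the token against these keys, using the issuer value from the discovery document.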
