Entra Workload Identity on AKS: No More Secrets

Published: 15 April 2026 · 10 min. read


You’re three weeks into your new AKS deployment and you just found a client secret in a Kubernetes Secret object—base64-encoded, sitting right there in the cluster, rotating on a schedule that nobody has actually verified in six months. Congratulations: you’ve discovered what most AKS teams discover eventually. Kubernetes Secrets are not secret enough.

Microsoft Entra Workload Identity solves this problem by eliminating the credential entirely. Instead of storing a client secret your pods can use to authenticate to Azure resources, your cluster itself becomes the identity provider. Pods receive a projected Kubernetes ServiceAccount token, exchange it for an Entra access token through an OIDC handshake, and access Key Vault, Storage, SQL—whatever they need—without a single secret touching etcd, the datastore where Kubernetes Secrets ultimately live.

This guide walks you through enabling Workload Identity on AKS from scratch, wiring up the federated identity credential, and deploying a workload that authenticates cleanly. Bicep and Terraform snippets are included for both camps.

## Prerequisites

Before starting, verify you have the right versions in place. Both AKS and the Azure CLI enforce minimum version requirements for Workload Identity, so confirm what you’re currently running:

```shell
az --version
az aks show --resource-group myRG --name myAKS --query kubernetesVersion -o tsv
```

You’ll also need kubectl configured against your cluster and permission to create managed identities in your subscription.

| Requirement | What to Check | Command |
| --- | --- | --- |
| AKS cluster | Minimum supported version | `az aks show ... --query kubernetesVersion` |
| Azure CLI | Minimum supported version | `az --version` |
| kubectl | Configured against your cluster | `kubectl cluster-info` |
| IAM permissions | Can create managed identities in the subscription | Azure portal → Subscriptions → IAM |
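If you want to script that gate, a small `sort -V` comparison is enough. This is a sketch: `version_gte` is an illustrative helper, and the minimum shown is a placeholder rather than the official requirement, so substitute the documented minimum for your rollout.

```shell
#!/usr/bin/env bash
# Preflight check: succeed if an installed version meets a chosen minimum.
# Relies on GNU sort's -V (version sort), available on most Linux systems.
version_gte() {
  # True when $1 >= $2 in version-sort order.
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | tail -n1)" = "$1" ]
}

MIN_K8S="1.22.0"  # placeholder minimum, not the official figure
# In practice, feed it the live value:
#   version_gte "$(az aks show -g myRG -n myAKS --query kubernetesVersion -o tsv)" "$MIN_K8S"
version_gte "1.29.2" "$MIN_K8S" && echo "cluster version OK"
```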

Pro Tip: If you’re still running Azure AD Pod Identity, that add-on lost official support in September 2025. It’s not a question of “should you migrate”—you already should have. The migration guide covers three approaches depending on which Azure Identity SDK version your apps are on.


## Enable the OIDC Issuer on Your Cluster

Workload Identity starts with your AKS cluster publishing an OpenID Connect (OIDC) issuer endpoint—a URL that Entra ID uses to fetch the cluster’s public signing keys and verify tokens. Without it, the token exchange has nowhere to go.

For a new cluster:

```shell
export RESOURCE_GROUP="myResourceGroup"
export CLUSTER_NAME="myAKSCluster"
export LOCATION="eastus"

az aks create \
  --resource-group "${RESOURCE_GROUP}" \
  --name "${CLUSTER_NAME}" \
  --location "${LOCATION}" \
  --enable-oidc-issuer \
  --enable-workload-identity \
  --generate-ssh-keys
```

For an existing cluster, it’s a single update command:

```shell
az aks update \
  --resource-group "${RESOURCE_GROUP}" \
  --name "${CLUSTER_NAME}" \
  --enable-oidc-issuer \
  --enable-workload-identity
```

Once that completes, capture the issuer URL—you’ll need it for every federated credential you create:

```shell
export AKS_OIDC_ISSUER="$(az aks show \
  --name "${CLUSTER_NAME}" \
  --resource-group "${RESOURCE_GROUP}" \
  --query "oidcIssuerProfile.issuerUrl" \
  --output tsv)"

echo "${AKS_OIDC_ISSUER}"
```

The issuer URL follows the pattern `https://{region}.oic.prod-aks.azure.com/...`. Verify it’s populated before moving on—an empty variable here causes a mismatch error later that’s annoyingly difficult to trace back to this step.
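A quick sanity check can catch an empty or malformed issuer before you wire it into anything. This sketch assumes the public-cloud issuer host shown above; sovereign clouds use a different domain, so adjust the pattern if that applies to you. `check_issuer` is an illustrative helper, not an Azure CLI command.

```shell
#!/usr/bin/env bash
# Fail fast on an empty or unexpected issuer URL.
check_issuer() {
  local url="$1"
  if [ -z "$url" ]; then
    echo "EMPTY: oidcIssuerProfile is not enabled or the update has not finished"
    return 1
  fi
  case "$url" in
    https://*.oic.prod-aks.azure.com/*) echo "OK: $url" ;;
    *) echo "UNEXPECTED FORMAT: $url"; return 1 ;;
  esac
}

# Demonstration with a sample value; in your shell, run:
#   check_issuer "$AKS_OIDC_ISSUER"
check_issuer "https://eastus.oic.prod-aks.azure.com/00000000-0000-0000-0000-000000000000/"
```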

## Create the Managed Identity and Federated Credential

Your workload needs a user-assigned managed identity in Azure. This identity is what gets RBAC assignments to Azure resources. The federated credential is the trust link between that managed identity and a specific Kubernetes ServiceAccount.

The federated credential has four fields that must be exact. Here’s what each one maps to:

| Field | What It Is | Example Value |
| --- | --- | --- |
| `issuer` | Your cluster’s OIDC issuer URL | `https://eastus.oic.prod-aks.azure.com/...` |
| `subject` | Kubernetes ServiceAccount in `system:serviceaccount:<namespace>:<name>` format | `system:serviceaccount:my-namespace:my-service-account` |
| `audiences` | Always this fixed value for Azure | `api://AzureADTokenExchange` |
| `name` | Label for this credential (your choice) | `my-app-federation` |

```shell
export USER_ASSIGNED_IDENTITY_NAME="myWorkloadIdentity"
export SERVICE_ACCOUNT_NAMESPACE="my-namespace"
export SERVICE_ACCOUNT_NAME="my-service-account"
export FEDERATED_CREDENTIAL_NAME="my-app-federation"

az identity create \
  --name "${USER_ASSIGNED_IDENTITY_NAME}" \
  --resource-group "${RESOURCE_GROUP}"

export USER_ASSIGNED_CLIENT_ID="$(az identity show \
  --name "${USER_ASSIGNED_IDENTITY_NAME}" \
  --resource-group "${RESOURCE_GROUP}" \
  --query 'clientId' \
  --output tsv)"

az identity federated-credential create \
  --name "${FEDERATED_CREDENTIAL_NAME}" \
  --identity-name "${USER_ASSIGNED_IDENTITY_NAME}" \
  --resource-group "${RESOURCE_GROUP}" \
  --issuer "${AKS_OIDC_ISSUER}" \
  --subject "system:serviceaccount:${SERVICE_ACCOUNT_NAMESPACE}:${SERVICE_ACCOUNT_NAME}" \
  --audience api://AzureADTokenExchange
```

The subject field is the piece most people get wrong. It must match the Kubernetes namespace and ServiceAccount name exactly—case-sensitive, no trailing spaces. system:serviceaccount:My-Namespace:my-service-account is a different subject than system:serviceaccount:my-namespace:my-service-account. The federated credential lookup will fail silently at token exchange time, and you’ll get a generic AADSTS70021 error with no indication of which field is mismatched. Write down what you set here.
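Because the mismatch fails silently, it’s worth linting the subject string before you create the credential. A minimal sketch follows; `lint_subject` is an illustrative helper, not part of any Azure tooling. It catches the two mistakes called out above: stray whitespace and uppercase characters (Kubernetes namespaces and ServiceAccount names are lowercase RFC 1123 labels, so any uppercase guarantees a non-match).

```shell
#!/usr/bin/env bash
# Lint a federated-credential subject string for common AADSTS70021 causes.
lint_subject() {
  local subject="$1"
  case "$subject" in
    *[[:space:]]*) echo "FAIL: contains whitespace"; return 1 ;;
  esac
  case "$subject" in
    *[A-Z]*) echo "FAIL: contains uppercase"; return 1 ;;
  esac
  case "$subject" in
    system:serviceaccount:*:*) echo "OK: $subject" ;;
    *) echo "FAIL: wrong prefix or missing namespace/name"; return 1 ;;
  esac
}

# Demonstration; in your shell, lint the real value:
#   lint_subject "system:serviceaccount:${SERVICE_ACCOUNT_NAMESPACE}:${SERVICE_ACCOUNT_NAME}"
lint_subject "system:serviceaccount:my-namespace:my-service-account"
```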


Key Insight: You can have up to 20 federated identity credentials per managed identity. For teams with many microservices, plan your identity-to-service mapping early. Multiple service accounts can reference the same managed identity (many-to-one), which simplifies RBAC but reduces blast radius isolation.


## Configure the Kubernetes ServiceAccount

With the federated credential in place, create the ServiceAccount on the cluster side. The annotation ties the Kubernetes identity to the Azure managed identity:

```shell
kubectl create namespace "${SERVICE_ACCOUNT_NAMESPACE}"

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ${SERVICE_ACCOUNT_NAME}
  namespace: ${SERVICE_ACCOUNT_NAMESPACE}
  annotations:
    azure.workload.identity/client-id: "${USER_ASSIGNED_CLIENT_ID}"
EOF
```

Any pod that uses this ServiceAccount and has the label azure.workload.identity/use: "true" in its pod template gets the Workload Identity injection automatically. The mutating admission webhook—installed as part of --enable-workload-identity—injects the required environment variables and mounts the projected token volume. Your application code doesn’t need to know any of this is happening. (That’s the point—the credential management is entirely outside the application boundary.)

When the webhook fires successfully, your pod gets four environment variables injected automatically:

| Variable | What It Contains |
| --- | --- |
| `AZURE_CLIENT_ID` | The managed identity’s client ID |
| `AZURE_TENANT_ID` | Your Azure tenant ID |
| `AZURE_FEDERATED_TOKEN_FILE` | Path to the projected Kubernetes token on disk |
| `AZURE_AUTHORITY_HOST` | The Entra ID authority endpoint URL |
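To see exactly what Entra ID will validate, you can decode the projected token’s payload by hand from inside the pod. The sketch below handles the base64url alphabet (`-_` instead of `+/`) and the missing padding; `decode_jwt_payload` is an illustrative helper. The claims you should see include `iss` (your issuer URL), `sub` (the subject string), and `aud`.

```shell
#!/usr/bin/env bash
# Decode the middle (payload) segment of a JWT without verifying it.
decode_jwt_payload() {
  local payload
  # Segment 2 of header.payload.signature, translated base64url -> base64.
  payload=$(printf '%s' "$1" | cut -d. -f2 | tr '_-' '/+')
  # Re-pad to a multiple of 4 characters so base64 -d accepts it.
  while [ $(( ${#payload} % 4 )) -ne 0 ]; do payload="${payload}="; done
  printf '%s' "$payload" | base64 -d
}

# Inside the pod (kubectl exec in first), the token lives at the injected path:
#   decode_jwt_payload "$(cat "$AZURE_FEDERATED_TOKEN_FILE")"
```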

## Deploy a Workload That Uses the Identity

Here’s a minimal pod spec that authenticates using Workload Identity:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: workload-identity-demo
  namespace: my-namespace
  labels:
    azure.workload.identity/use: "true"
spec:
  serviceAccountName: my-service-account
  containers:
  - name: app
    image: mcr.microsoft.com/azure-cli:latest
    command: ["sleep", "infinity"]
```

The label on the pod template is not optional. Without azure.workload.identity/use: "true", the webhook does nothing and your pod authenticates as… nothing. This is the second most common setup mistake after the subject mismatch.

Apply it and verify the injection worked:

```shell
kubectl apply -f pod.yaml
kubectl describe pod workload-identity-demo -n my-namespace | grep -A 5 "Environment:"
```

You should see AZURE_CLIENT_ID, AZURE_TENANT_ID, and AZURE_FEDERATED_TOKEN_FILE in the environment. If they’re absent, the webhook didn’t fire—check that the label is on the pod template spec, not on the pod’s outer metadata.

## IaC Modules for Both Camps

### Bicep

If your team provisions AKS through Bicep, enable OIDC and Workload Identity in the cluster resource, then wire up the federated credential as a child resource of your managed identity:

```bicep
resource aks 'Microsoft.ContainerService/managedClusters@2023-10-01' = {
  name: 'my-aks'
  location: resourceGroup().location
  identity: {
    type: 'SystemAssigned'
  }
  properties: {
    oidcIssuerProfile: {
      enabled: true
    }
    securityProfile: {
      workloadIdentity: {
        enabled: true
      }
    }
    // ... agentPoolProfiles, dnsPrefix, etc.
  }
}

resource federatedCredential 'Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials@2023-01-31' = {
  name: 'my-app-federation'
  parent: userAssignedIdentity
  properties: {
    issuer: aks.properties.oidcIssuerProfile.issuerURL
    subject: 'system:serviceaccount:my-namespace:my-service-account'
    audiences: ['api://AzureADTokenExchange']
  }
}
```

Note that `aks.properties.oidcIssuerProfile.issuerURL` pulls the URL directly from the cluster resource output—no manual copy-paste, no trailing slash mismatch. The trailing slash problem is more common than you'd expect; Entra ID treats `https://example.com/` and `https://example.com` as different issuers.

### Terraform

The `azurerm` provider exposes `oidc_issuer_url` as an output from the `azurerm_kubernetes_cluster` resource:

```hcl
resource "azurerm_kubernetes_cluster" "aks" {
  name                = "my-aks"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  dns_prefix          = "myaks"

  oidc_issuer_enabled       = true
  workload_identity_enabled = true

  # ... identity block, default_node_pool, etc.
}

resource "azurerm_user_assigned_identity" "workload" {
  name                = "my-workload-identity"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
}

resource "azurerm_federated_identity_credential" "aks_federation" {
  name                = "my-app-federation"
  resource_group_name = azurerm_resource_group.rg.name
  audience            = ["api://AzureADTokenExchange"]
  issuer              = azurerm_kubernetes_cluster.aks.oidc_issuer_url
  parent_id           = azurerm_user_assigned_identity.workload.id
  subject             = "system:serviceaccount:my-namespace:my-service-account"
}
```

Both modules follow the same principle: let the IaC layer handle the issuer URL hand-off rather than hardcoding it. If you hardcode the issuer URL and then recreate the cluster, the URL changes and every federated credential silently breaks at token exchange time—with the same AADSTS70021 error that's impossible to trace from the application side.
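One way to enforce that principle is a quick lint for literal issuer URLs in your IaC tree. `find_hardcoded_issuers` is an illustrative helper, not a standard tool; any hit it reports is a candidate to replace with the cluster resource’s own output attribute.

```shell
#!/usr/bin/env bash
# Report file:line for every hardcoded AKS issuer URL in .tf and .bicep files.
find_hardcoded_issuers() {
  # $1: directory to scan. "|| true" keeps a clean exit when nothing matches.
  grep -rn 'oic\.prod-aks\.azure\.com' "$1" \
    --include='*.tf' --include='*.bicep' || true
}

# Example: find_hardcoded_issuers ./infra
```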

## Common Errors and How to Fix Them

You'll hit at least one of these. Everyone does.

| Error | Most Likely Cause | Fix |
| --- | --- | --- |
| AADSTS70021 | Issuer or subject mismatch in federated credential | Compare issuer URL against `az aks show` output; verify subject is case-exact |
| Webhook not injecting | Missing `azure.workload.identity/use: "true"` label | Label must be on `spec.template.metadata.labels`, not outer pod metadata |
| Virtual node failure | Workload Identity not supported on virtual nodes | Pin Workload Identity workloads to regular node pools with nodeSelector |
| AADSTS70021 after credential update | Propagation delay | Wait 10–15 seconds after updating the federated credential before testing |

**AADSTS70021 — No matching federated identity record found**

The `issuer` or `subject` in your federated credential doesn't match what the cluster sent. Start by comparing the issuer URL in the credential against what `az aks show` returns. Then verify the `subject` field matches your namespace and ServiceAccount name character-for-character. If you updated the federated credential recently, wait 10–15 seconds—[propagation can take a few seconds](https://learn.microsoft.com/en-us/entra/workload-id/workload-identity-federation-considerations) and early token requests during that window will fail.

**Webhook injection not happening**

Missing the `azure.workload.identity/use: "true"` label on the pod template. This is distinct from the pod metadata—it needs to be in `spec.template.metadata.labels` for Deployments and StatefulSets, not on the outer object.

**Virtual Node workloads fail**

[Virtual Nodes (Virtual Kubelet)](https://learn.microsoft.com/en-us/azure/aks/workload-identity-overview) do not support Workload Identity. If your pods schedule onto virtual nodes, they won't get the token injection. Keep Workload Identity workloads on regular node pools.

---

**Warning:** The `subject` claim in your federated credential is strictly case-sensitive. `my-namespace` and `My-Namespace` are different subjects. Entra ID will reject the token exchange with no indication of which field caused the mismatch. Store the exact values in your IaC and never type them by hand downstream.

---

## How the Token Exchange Actually Works

Understanding the flow helps when things go wrong. Here's what happens from the moment your pod starts:

The Kubelet projects a signed Kubernetes ServiceAccount token into your pod's filesystem at `/var/run/secrets/azure/tokens/azure-identity-token`. This token is a JWT signed with the cluster's private OIDC key.

When your application calls an Azure SDK method that requires a credential—say, [`DefaultAzureCredential`](https://learn.microsoft.com/en-us/dotnet/api/azure.identity.defaultazurecredential) in the [Azure Identity library](https://learn.microsoft.com/en-us/dotnet/api/overview/azure/identity-readme)—the SDK reads that token file (via the `AZURE_FEDERATED_TOKEN_FILE` environment variable), then sends it to Entra ID's token endpoint.

Entra ID fetches the cluster's public OIDC signing keys from the issuer URL, verifies the Kubernetes token's signature, checks that the `subject` claim matches a federated identity credential on the target managed identity, and—if everything checks out—returns an Azure access token scoped to your requested resource.
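For debugging, that exchange can be reproduced by hand: it is a standard OAuth 2.0 client-credentials request in which the projected Kubernetes token rides along as a `client_assertion` (RFC 7523). The sketch below only builds and prints the request fields; actually sending it with `curl` is left to you, and the tenant, client, and scope values are placeholders.

```shell
#!/usr/bin/env bash
# Assemble the fields of the token-exchange request Entra ID expects.
build_token_request() {
  local tenant_id="$1" client_id="$2" assertion="$3" scope="$4"
  echo "POST https://login.microsoftonline.com/${tenant_id}/oauth2/v2.0/token"
  echo "grant_type=client_credentials"
  echo "client_id=${client_id}"
  echo "client_assertion_type=urn:ietf:params:oauth:client-assertion-type:jwt-bearer"
  echo "client_assertion=${assertion}"
  echo "scope=${scope}"
}

# Inside the pod, the assertion is the projected token itself, e.g.:
#   build_token_request "$AZURE_TENANT_ID" "$AZURE_CLIENT_ID" \
#     "$(cat "$AZURE_FEDERATED_TOKEN_FILE")" "https://vault.azure.net/.default"
```

Seeing the raw fields makes the failure modes concrete: AADSTS70021 means Entra ID could not match the assertion’s `iss`/`sub` pair to any federated credential on the managed identity named by `client_id`.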

Your application never touches a client secret. It never stores credentials. The Kubernetes token it holds is short-lived and scoped specifically to the ServiceAccount. If the pod is compromised, the blast radius is limited to whatever RBAC the managed identity has—which you control entirely through Azure role assignments.

## Assigning Azure Roles to Your Identity

The federated credential lets your workload authenticate. RBAC determines what it can actually do. Grant the managed identity the minimum permissions it needs:

```shell
export KEYVAULT_RESOURCE_ID=$(az keyvault show \
  --name myKeyVault \
  --resource-group "${RESOURCE_GROUP}" \
  --query id \
  --output tsv)

az role assignment create \
  --assignee "${USER_ASSIGNED_CLIENT_ID}" \
  --role "Key Vault Secrets User" \
  --scope "${KEYVAULT_RESOURCE_ID}"
```

Scope role assignments to the specific resource, not the subscription. The whole point of eliminating static secrets is reducing credential scope—don't undermine that with a Contributor assignment on the subscription. (Yes, we know subscription-scope is faster to configure. That's not a reason.)

## Cleaning Up

If you're done with this workload, remove the pieces in order:

```shell
az identity federated-credential delete \
  --name "${FEDERATED_CREDENTIAL_NAME}" \
  --identity-name "${USER_ASSIGNED_IDENTITY_NAME}" \
  --resource-group "${RESOURCE_GROUP}"

az role assignment delete \
  --assignee "${USER_ASSIGNED_CLIENT_ID}" \
  --role "Key Vault Secrets User" \
  --scope "${KEYVAULT_RESOURCE_ID}"

az identity delete \
  --name "${USER_ASSIGNED_IDENTITY_NAME}" \
  --resource-group "${RESOURCE_GROUP}"
```

Remove role assignments before deleting the identity. If you skip that step, the orphaned assignments don’t disappear—they lose their display name and become unattributed clutter in your subscription’s IAM view. They can’t do anything, but they’re confusing during audits and you’ll spend time hunting down what they were.

The Kubernetes ServiceAccount and namespace can be deleted with `kubectl delete namespace ${SERVICE_ACCOUNT_NAMESPACE}`.

## What You’ve Built

Your workload now authenticates to Azure resources without holding any credential. The cluster issues tokens, Entra ID validates them, and your pods get access tokens scoped to exactly what you’ve granted. No rotation schedule. No secret sprawl. No expiring certificates in ConfigMaps.

The comparison to Azure AD Pod Identity—now end-of-support—is stark. Pod Identity worked by intercepting IMDS traffic through an NMI (Node Managed Identity) DaemonSet—one pod per node—that proxied requests cluster-wide. It was Linux-only, had identity assignment latency measured in seconds, and expanded your attack surface to the node level. Workload Identity is a clean OIDC handshake using Kubernetes primitives. It works on Windows nodes. Token projection is immediate. And the scope of a compromised pod is the ServiceAccount, not the node.

If you’re still running Pod Identity workloads, the migration guide is worth the afternoon it takes. If you’re starting fresh, you’re already doing it the right way.

