If you're struggling to understand how to create an Azure Kubernetes Service (AKS) cluster, you've come to the right place. In this article, you'll learn how to build an AKS cluster from scratch and pick up some best practices for approaching the setup along the way.

AKS is a Microsoft Azure service providing you with managed Kubernetes clusters. Kubernetes is notoriously difficult to install and maintain. Because of that, Microsoft has simplified the process by directly managing the AKS cluster master nodes and taking care of patching and scaling the cluster.

There are a few ways to create an AKS cluster but in this article, you're going to get your hands dirty by automating the build with the Azure CLI in Azure Cloud Shell.

Project Overview

This is a project article where you'll build a complete solution step by step. Each section builds on the steps that came before it.

In this project, you are going to see how easy it is to get started with AKS and Kubernetes by launching your own AKS cluster. You will check prerequisites, deploy the cluster, and launch a sample application.

This Project should take you about 30 minutes to complete.

Environment and Knowledge Requirements

In this project, you're going to follow along as I walk through the process of launching an AKS cluster. To ensure your environment matches mine, please be sure you have the following items in place before getting started:

  • A Microsoft Azure subscription
  • Owner or User Access Administrator role on the Azure subscription
  • Permission to create and register applications on Azure Active Directory

Warning: Potential Cloud Costs

Since you'll be spinning up cloud resources in this Project, be sure you're aware of the potential costs you may incur.

The cost of the AKS cluster you deploy includes the compute time for worker nodes deployed in the cluster, storage consumed by those nodes, and egress network traffic from the nodes. To minimize costs, deploy a single node from the B-family of Azure Virtual Machines. The examples in this Project deploy a node pool with three virtual machines using the default size of DS2 v2. These VMs cost $0.14 an hour when running in the East US region.

Assuming you run the cluster for about an hour and then delete it, the total cost should not exceed two dollars.
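As a quick sanity check on that estimate, three DS2 v2 nodes at the $0.14 per hour East US rate work out to:

```shell
# Back-of-envelope compute cost: 3 worker nodes at $0.14/hour each
awk 'BEGIN { printf "$%.2f per hour\n", 3 * 0.14 }'
# prints: $0.42 per hour
```

Even with node storage and egress traffic on top, an hour or two of tinkering stays comfortably under two dollars.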

Launching Cloud Shell

In this Project, you will be using the Azure Cloud Shell to simplify the setup requirements. Cloud Shell comes with many common tools already installed, including the Azure CLI and the Kubernetes command-line tool, kubectl.

First, launch Azure Cloud Shell by clicking on the Cloud Shell icon from the Azure Portal. You can also open a Cloud Shell terminal directly by going to shell.azure.com.

If this is your first time loading Cloud Shell, you will be prompted to select Bash or PowerShell for your experience, as you can see below. For this Project, select Bash, since many kubectl-related commands and tools expect a Linux-style shell for proper execution.

Cloud Shell Prompt for Bash or PowerShell

The next prompt will ask you to create a storage account for persisting files in Cloud Shell. Simply click Create storage to continue, as shown below.

Creating an Azure Cloud Shell storage account

After a brief moment, the Cloud Shell terminal will connect and a Bash prompt will appear, as shown below.

Connected to Azure Cloud Shell

Preparing for the AKS Cluster

Now that you are logged into Cloud Shell, it's time to prepare your environment to build an AKS cluster.

Select the Correct Subscription

The first step is to ensure you're using the correct subscription. After all, you may have more than one subscription associated with your user account. The example below uses the az account show command to display the currently selected subscription.

~$ az account show

{
  "environmentName": "AzureCloud",
  "id": "000000-0000-0000-0000-000000000000",
  "isDefault": true,
  "name": "SUBSCRIPTION_NAME",
  "state": "Enabled",
  "tenantId": "000000-0000-0000-0000-000000000000",
  "user": {
    "cloudShellID": true,
    "name": "ned",
    "type": "user"
  }
}

If you need to select a different account, you can list all existing subscriptions by running:

~$ az account list

Then select the correct subscription by noting its name field and running:

~$ az account set --subscription SUBSCRIPTION_NAME

Checking Azure Resource Providers

All services in Microsoft Azure are consumed through resource providers. Unfortunately, these providers are not automatically registered when you invoke them through command-line utilities like the Azure CLI.

To build an AKS cluster, you need to register a few resource providers. These providers are:

  • Microsoft.Compute
  • Microsoft.Storage
  • Microsoft.Network
  • Microsoft.ContainerService

To check whether these providers are currently registered, use the az provider list subcommand as seen below. The --query filter limits the output to providers whose registrationState is Registered.

~$ az provider list --query "[?registrationState=='Registered'].{Name:namespace, State:registrationState}" -o table

If any of the required providers are not registered, run the az provider register command as shown below, replacing the PROVIDER_NAME with the actual name of the provider.

~$ az provider register --namespace PROVIDER_NAME --wait

If you had to register a provider, the process can take between five and ten minutes.
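If you want to script the check, the sketch below compares the required providers against the registered set. A sample list stands in for the live Azure CLI output here; in Cloud Shell you would feed in the real command's output instead.

```shell
# Sample data standing in for:
#   az provider list --query "[?registrationState=='Registered'].namespace" -o tsv
registered="Microsoft.Compute
Microsoft.Storage
Microsoft.Network"

# Flag any required provider missing from the registered set
for ns in Microsoft.Compute Microsoft.Storage Microsoft.Network Microsoft.ContainerService; do
  if ! printf '%s\n' "$registered" | grep -qxF "$ns"; then
    echo "Needs registering: $ns"
  fi
done
# prints: Needs registering: Microsoft.ContainerService
```

Any provider the loop flags can then be passed to the az provider register command shown above.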

Creating a Resource Group

All resources created in Microsoft Azure must reside in a resource group. You could use an existing one, but to make the resources for this Project easy to manage and remove when you're done, create a dedicated resource group.

In this step, you will create a resource group to contain the AKS cluster, and specify which region the resource group will be created in. The example below shows the creation of a resource group named k8s in the eastus2 region.

~$ az group create --name k8s --location eastus2

{
  "id": "/subscriptions/000000-0000-0000-0000-000000000000/resourceGroups/k8s",
  "location": "eastus2",
  "managedBy": null,
  "name": "k8s",
  "properties": {
    "provisioningState": "Succeeded"
  },
  "tags": null,
  "type": "Microsoft.Resources/resourceGroups"
}

Getting the Current AKS Versions

There are a number of configuration options that can be submitted when creating an AKS cluster. One such setting is the version of Kubernetes to run on the cluster. Depending on your needs, you may choose to run an older or newer version of Kubernetes.

The example below shows the command for retrieving the currently available versions, with the results formatted as a table for easy reference. Based on the table output, the most current stable (non-preview) version is 1.14.6. Note this version down; you'll need it a little later.

~$ az aks get-versions --location eastus2 -o table

KubernetesVersion    Upgrades
-------------------  ------------------------
1.15.3(preview)      None available
1.14.6               1.15.3(preview)
1.14.5               1.14.6, 1.15.3(preview)
1.13.10              1.14.5, 1.14.6
1.13.9               1.13.10, 1.14.5, 1.14.6
1.12.8               1.13.9, 1.13.10
1.12.7               1.12.8, 1.13.9, 1.13.10
1.11.10              1.12.7, 1.12.8
1.11.9               1.11.10, 1.12.7, 1.12.8
1.10.13              1.11.9, 1.11.10
1.10.12              1.10.13, 1.11.9, 1.11.10
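Rather than eyeballing the table, you can pull the newest non-preview version with a little text filtering. The version list below stands in for the first column of the table above; in Cloud Shell you could pipe the live command's output through the same filter.

```shell
# Sample data standing in for the KubernetesVersion column above
versions="1.15.3(preview)
1.14.6
1.14.5
1.13.10"

# Drop preview releases and take the newest remaining entry
stable=$(printf '%s\n' "$versions" | grep -v 'preview' | sort -t. -k1,1n -k2,2n -k3,3n | tail -n1)
echo "$stable"
# prints: 1.14.6
```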

Creating the AKS Cluster

Now that you have all the prerequisite work complete, it is time to create your AKS cluster!

Using the AKS commands

In the Azure CLI, all commands to work with AKS start with the az aks subcommand. To see a full list of available commands, simply run az aks --help. Help for any specific subcommand can be found by adding the --help flag to that command.

For instance, the example below shows the help for creating an AKS cluster, something that will be very useful to you in the next step.

~$ az aks create --help

Command
    az aks create : Create a new managed Kubernetes cluster.

Arguments
    --name -n                      [Required] : Name of the managed cluster.
    --resource-group -g            [Required] : Name of resource group. You can configure the
                                                default group using `az configure --defaults
                                                group=<name>`.

--snip--

Creating the Cluster

To create a cluster with the Azure CLI, you will use the az aks create subcommand, which has only two required parameters: name and resource-group. However, to have more control over the result, you'll typically want to be more explicit about your options. For this Project, specify settings such as the version (gathered above) and the number of nodes the cluster will contain, as shown below.

  • resource-group: k8s
  • name: my-cluster
  • kubernetes-version: 1.14.6
  • location: eastus2
  • node-count: 3
  • generate-ssh-keys

Using these parameters, Azure will create a three-node cluster running Kubernetes version 1.14.6, with automatically generated SSH keys that provide secure shell access to the worker nodes.

The example below shows the command and its resulting output.

~$ az aks create --resource-group k8s --name my-cluster \
  --kubernetes-version 1.14.6 --location eastus2 \
  --node-count 3 --generate-ssh-keys

{
  "aadProfile": null,
  "addonProfiles": null,
  "agentPoolProfiles": [
    {
      "availabilityZones": null,
      "count": 3,
      "enableAutoScaling": null,
      "enableNodePublicIp": null,
      "maxCount": null,
      "maxPods": 110,
      "minCount": null,
      "name": "nodepool1",
      "nodeTaints": null,
      "orchestratorVersion": "1.14.6",
      "osDiskSizeGb": 100,
      "osType": "Linux",
      "provisioningState": "Succeeded",
      "scaleSetEvictionPolicy": null,
      "scaleSetPriority": null,
      "type": "AvailabilitySet",
      "vmSize": "Standard_DS2_v2",
      "vnetSubnetId": null
    }
  ],
  "apiServerAccessProfile": null,
  "dnsPrefix": "my-cluster-k8s-4d8e57",
  "enablePodSecurityPolicy": null,
  "enableRbac": true,
  "fqdn": "my-cluster-k8s-4d8e57-7a24ca9b.hcp.eastus2.azmk8s.io",
  "id": "/subscriptions/000000-0000-0000-0000-000000000000/resourcegroups/k8s/providers/Microsoft.ContainerService/managedClusters/my-cluster",
  "identity": null,
  "kubernetesVersion": "1.14.6",
  "linuxProfile": {
    "adminUsername": "azureuser",
    "ssh": {
      "publicKeys": [
        {
          "keyData": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDMXvPgO6LGv/OjL1gwMImUrTWTfmy5w+E5D1VD0snO/QhyZzDXTap8o6EaBZos5DrMTy5gOPrnFhldZ7Gx7gBsv/5qah7kggh/o8tfZ07DuZjmbZhoE34AU4iGtay+G7E0nMVOBTUKxLEoXjQoDwfBYdqf94e0IYnRn3wqJclEoXQbFZYg4vfZ03o7JZ0T9q/8FN5erd0nTQG8XJyjoE8fFdyzKVVK8W7Iodl4eKD4DjteFLaw3nmtUW7GcXpBoEKdSsx1ky/pvzKtJlXE6Jdj5u5xaEj1Q+34jHB54j2sNHddyTHDw1zUsSvtDopFSRkETTWdSqB0xHBFF5kObJwf"
        }
      ]
    }
  },
  "location": "eastus2",
  "maxAgentPools": 1,
  "name": "my-cluster",
  "networkProfile": {
    "dnsServiceIp": "10.0.0.10",
    "dockerBridgeCidr": "172.17.0.1/16",
    "loadBalancerProfile": null,
    "loadBalancerSku": "Basic",
    "networkPlugin": "kubenet",
    "networkPolicy": null,
    "podCidr": "10.244.0.0/16",
    "serviceCidr": "10.0.0.0/16"
  },
  "nodeResourceGroup": "MC_k8s_my-cluster_eastus2",
  "provisioningState": "Succeeded",
  "resourceGroup": "k8s",
  "servicePrincipalProfile": {
    "clientId": "000000-0000-0000-0000-000000000000",
    "secret": null
  },
  "tags": null,
  "type": "Microsoft.ContainerService/ManagedClusters",
  "windowsProfile": null
}

In addition to creating an AKS cluster, the az aks create subcommand also automatically creates a service principal for the cluster to use when interacting with other services in Microsoft Azure.

The service principal can be used to allocate Azure Managed Disks for use as persistent storage in the cluster or allocate an Azure Load Balancer and public IP address. You'll see an example of this in the sample application deployment.

Retrieving your Credentials

Once the AKS cluster is built, the next step is to retrieve the credentials that will be used with kubectl. Kubectl is the command-line program that manages the cluster.

To download the credentials, query the AKS cluster by using the az aks get-credentials command specifying the resource group and name of the cluster as shown in the example below.

~$ az aks get-credentials --resource-group k8s --name my-cluster

/home/ned/.kube/config has permissions "644".
It should be readable and writable only by its owner.
Merged "my-cluster" as current context in /home/ned/.kube/config
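The permissions warning in that output is worth acting on. The sketch below tightens a stand-in temporary file to owner-only access; on your machine the target would be ~/.kube/config.

```shell
# Create a stand-in file for ~/.kube/config and restrict it to owner-only access
cfg=$(mktemp)
chmod 600 "$cfg"
stat -c '%a' "$cfg"    # prints 600 (GNU stat; on macOS use stat -f '%Lp')
rm -f "$cfg"
```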

Validating the Cluster

The final step in creating the cluster is to validate that it is running and that your credentials were retrieved successfully. You can do this by running commands with kubectl to retrieve the current nodes. The output below shows the three worker nodes you specified during creation in the Ready state.

~$ kubectl get nodes

NAME                       STATUS   ROLES   AGE     VERSION
aks-nodepool1-57306094-0   Ready    agent   8m46s   v1.14.6
aks-nodepool1-57306094-1   Ready    agent   9m45s   v1.14.6
aks-nodepool1-57306094-2   Ready    agent   9m48s   v1.14.6
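If you script your validation, counting Ready rows is a quick pass/fail check. The sample below stands in for live `kubectl get nodes` output:

```shell
# Sample data standing in for live `kubectl get nodes` output
nodes="aks-nodepool1-57306094-0   Ready    agent   8m46s   v1.14.6
aks-nodepool1-57306094-1   Ready    agent   9m45s   v1.14.6
aks-nodepool1-57306094-2   Ready    agent   9m48s   v1.14.6"

# Count the rows whose STATUS column (field 2) reads Ready
ready=$(printf '%s\n' "$nodes" | awk '$2 == "Ready" { n++ } END { print n }')
echo "Ready nodes: $ready"
# prints: Ready nodes: 3
```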

If you would like to view more information about the cluster, you can run kubectl cluster-info as shown below.

~$ kubectl cluster-info

Kubernetes master is running at https://my-cluster-k8s-4d8e57-7a24ca9b.hcp.eastus2.azmk8s.io:443
CoreDNS is running at https://my-cluster-k8s-4d8e57-7a24ca9b.hcp.eastus2.azmk8s.io:443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
kubernetes-dashboard is running at https://my-cluster-k8s-4d8e57-7a24ca9b.hcp.eastus2.azmk8s.io:443/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy
Metrics-server is running at https://my-cluster-k8s-4d8e57-7a24ca9b.hcp.eastus2.azmk8s.io:443/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy

Deploying a Sample Application

Once the AKS cluster is built, it's time to deploy an application to it. That's the whole point! And don't worry: you don't have to write a Kubernetes application from scratch to deploy one.

Kubernetes applications are deployed using one or more YAML files. Luckily, Microsoft provides an excellent example application on GitHub, called Azure Voting App, that you can use to get started.

The example application is a voting application that will be deployed as a set of pods and services - two fundamental constructs of Kubernetes. A pod is composed of one or more containers that share the same network interface and storage. A service exposes an application running on one or more pods, either internally to other applications in the Kubernetes cluster or externally to users and applications outside the cluster.

Downloading Application Files

All Kubernetes applications start with a YAML manifest defining the attributes of the application. To deploy the example application, you'll first need to download its manifest file, azure-vote-all-in-one-redis.yaml, which is located in the Azure Voting App GitHub repository.

For the Azure Voting App application, the YAML manifest file defines the following:

  • A Kubernetes deployment for the Redis backend of the voting application
  • A Kubernetes service exposing the Redis backend using an internal cluster IP Address and port 6379
  • A Kubernetes deployment for the voting application front end
  • A Kubernetes service exposing the voting application front end on an external load balancer using port 80

Go ahead and create a directory and download the YAML file into it using wget as shown below.

~$ mkdir voting-app
~$ cd voting-app
~/voting-app$ wget https://raw.githubusercontent.com/Azure-Samples/azure-voting-app-redis/master/azure-vote-all-in-one-redis.yaml

Deploying the App with kubectl

Now that you have the YAML file downloaded, it's time to send it to your AKS cluster. To deploy the application to the cluster, run the kubectl apply subcommand with the -f parameter and the file name, as shown in the example below. Kubernetes will interpret the manifest and provision the necessary pods and services.

~/voting-app$ kubectl apply -f azure-vote-all-in-one-redis.yaml

deployment.apps/azure-vote-back created
service/azure-vote-back created
deployment.apps/azure-vote-front created
service/azure-vote-front created

Checking the Deployment Status

Now that you have deployed the application to Kubernetes, the controller will take care of creating the necessary pods and services. Let's now monitor how the deployment is going.

The example voting application creates two pods and two services. One pod provides a backend Redis cache for storing data. The other pod hosts a website where people can submit votes. The services expose each pod's application, one internally to the front-end and the other externally through an Azure Load Balancer.

You can check on the status of the pods by running kubectl get pods as shown in the example below. You can see from the output that there are two pods running, called azure-vote-back-6d4b4776b6-2z7dr and azure-vote-front-5ccf899cf6-gfzkc. The alphanumeric suffix after azure-vote-back and azure-vote-front will be unique within the cluster.

Below, you can see each pod is Running, which is exactly what you want.

~/voting-app$ kubectl get pods

NAME                                READY   STATUS    RESTARTS   AGE
azure-vote-back-6d4b4776b6-2z7dr    1/1     Running   0          51s
azure-vote-front-5ccf899cf6-gfzkc   1/1     Running   0          50s

Next, check on the status of the services by running kubectl get svc as shown in the example below.

You can see from the output that the cluster has three services running: two belonging to the application just deployed and one default service.

  • The azure-vote-back service has been deployed with an internal cluster IP address of 10.0.214.92.
  • The azure-vote-front service has been deployed with a load balancer that exposes the service publicly on the public IP address 104.209.141.1 and port 80.
  • The kubernetes service, which is created as part of the Kubernetes cluster itself and is not part of the deployed application.

~/voting-app$ kubectl get svc

NAME               TYPE           CLUSTER-IP    EXTERNAL-IP     PORT(S)        AGE
azure-vote-back    ClusterIP      10.0.214.92   <none>          6379/TCP       4m21s
azure-vote-front   LoadBalancer   10.0.127.38   104.209.141.1   80:31074/TCP   4m20s
kubernetes         ClusterIP      10.0.0.1      <none>          443/TCP        23m

The external IP address for the azure-vote-front service may take a few minutes to be provisioned. You can run kubectl get svc -w to watch the services; the output will update automatically once the public IP address is assigned. To stop watching the services, press Ctrl-C.
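To grab that external IP in a script rather than reading it off the table, a little awk does the job. The sample lines below stand in for live `kubectl get svc` output:

```shell
# Sample data standing in for live `kubectl get svc` output
svc="azure-vote-back    ClusterIP      10.0.214.92   <none>          6379/TCP       4m21s
azure-vote-front   LoadBalancer   10.0.127.38   104.209.141.1   80:31074/TCP   4m20s"

# EXTERNAL-IP is the fourth whitespace-separated field of the front-end row
ip=$(printf '%s\n' "$svc" | awk '$1 == "azure-vote-front" { print $4 }')
echo "$ip"
# prints: 104.209.141.1
```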

Accessing the Application

Now that you have prepped and created the cluster, deployed the application, and monitored its progress, it's time to check out your handiwork!

On your local machine, open a browser to the public IP address allocated to the azure-vote-front service. In my instance, that IP was 104.209.141.1, found by running kubectl get svc.

If all went well, you should be greeted with the page shown below.

Example Azure Voting App

Cleaning Up

Running a three-node AKS cluster in Azure is going to cost some money. If you like, you can scale the node count down to one with the az aks scale command to minimize costs. However, if you are done tinkering with AKS for the moment, you can also delete the cluster entirely by running az group delete --name k8s as shown in the example below.
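For reference, here is a sketch of that scale-down alternative, using the resource group and cluster name from this Project. The az aks scale line is commented out so the snippet is safe to paste anywhere; uncomment it to run against your cluster.

```shell
RG=k8s
CLUSTER=my-cluster
echo "Scaling $CLUSTER in $RG down to one node"
# az aks scale --resource-group "$RG" --name "$CLUSTER" --node-count 1
```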

The az group delete subcommand deletes the resource group, the AKS cluster inside it, and any data stored on the cluster.

~$ az group delete --name k8s

Are you sure you want to perform this operation? (y/n): y

Because everything created in this Project lives inside the k8s resource group, this single command cleans up the cluster and all of its supporting resources.

Your Takeaways

In this project, you successfully deployed an AKS cluster and ran an application on top of that cluster. That's only the beginning! You can deploy an AKS cluster to an existing Virtual Network, utilize Virtual Machine Scale Sets with autoscaling, or use Azure Active Directory for authentication. There are a plethora of additional features to dig into when it comes to AKS and you've only just scratched the surface.

Further Reading

If you enjoyed this post, check out some of these similar posts about Microsoft Azure.