Kubernetes PVC for Data Reliability: A Comprehensive Guide

Published: 12 March 2024 - 10 min. read

Do you have a stateful process deployed in Kubernetes? If yes, then you absolutely need effective storage management. Why not consider turning to Kubernetes PVC?

A Kubernetes PersistentVolumeClaim (PVC) lets you store, manage, and persist the data your application relies on to function. And in this tutorial, you’ll learn how to store and persist the data of a MySQL database using a Kubernetes PVC.

Read on and ensure high data availability for your applications!

Prerequisites

This tutorial will be a hands-on demonstration. If you’d like to follow along, be sure you have the following:

  • A Linux machine – This tutorial uses Ubuntu 20.04.3 LTS.

Creating a Cluster in Minikube

To work with Kubernetes PVCs, you’ll first need to create and start a cluster to run your application. This tutorial uses Minikube to deploy a single-node cluster on your local machine.

1. Open your terminal, and run the command below to create a cluster in Minikube.

minikube start

Wait a moment for Minikube to provision your cluster. Once completed, you’ll see output similar to the one below.

Creating and starting a cluster in Minikube

2. Next, run the following command to open the Minikube dashboard, where you can view the resources in your cluster.

minikube dashboard
Accessing the Minikube Dashboard

Once your dashboard is ready, your default web browser opens and redirects to your Minikube dashboard, as shown below.

Since you have no application deployed yet, your dashboard is empty.

Accessing the Minikube dashboard via a web browser

3. Lastly, run the kubectl get command below to list the nodes in your cluster. This command also confirms that kubectl can communicate with your cluster.

kubectl get nodes

Since Minikube only provides one node, your output should be similar to the output below. At this point, you can now interact with your cluster via kubectl commands.

Getting the number of nodes available in the cluster

Creating a Persistent Volume (PV)

Now that you have a cluster in Minikube and can directly interact with it via kubectl commands, the next thing to do is create a PV for your cluster.

Since you will be deploying a MySQL database to Kubernetes, you’ll need to store its data on disk so the data persists across pod restarts. And to store that data, you’ll create a PV in Kubernetes.

But first, it’s ideal to figure out the provisioner for the storage you’re trying to incorporate into your PV. How? You’ll define a storage class, which tells Kubernetes how you plan to provision your storage.

1. Run the below command to check the type of storage class available to you on your local machine.

kubectl get storageclasses

The output below indicates that Minikube supports the storage class hostpath, which you’ll use for your PV.

Checking Available Storage Types

2. Next, create a YAML file called mysql-pv.yaml using any code editor of your choice. You can name the file as you like.

nano mysql-pv.yaml

3. Add the configuration below to the mysql-pv.yaml file, save the changes, and close the editor.

The configuration settings below create a PersistentVolume named mysql-volume with a capacity of 1Gi of storage.

apiVersion: v1
kind: PersistentVolume
metadata:
   name: mysql-volume # Name of the persistent volume
   labels:
     type: local
spec:
   storageClassName: hostpath # Name of the storage class
   capacity:
     storage: 1Gi # Amount of storage this volume should hold
   accessModes:
     - ReadWriteOnce # Volume can be mounted as read-write by a single node
   hostPath: # Storage class type
     path: '/mnt/data' # File path to mount volume                   

4. Now, run the kubectl apply command below to apply the configuration setting (mysql-pv.yaml) to your cluster.

kubectl apply -f mysql-pv.yaml
Creating a PV in Minikube

5. Finally, run the kubectl get command to confirm the PV exists.

kubectl get persistentvolumes

Below, you can see the PV is created and is available.

Verifying PV in the cluster on Minikube

Alternatively, navigate to the Persistent volume option on the left panel of the Minikube dashboard, as shown below, to see your PV.

Verifying PVs in the Minikube dashboard
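Because hostPath storage lives on the node’s own filesystem, you can also peek at the backing directory from inside the Minikube node. The command below is optional and assumes the default minikube profile; the directory may be empty until a workload writes data.

```shell
# SSH into the Minikube node and list the PV's backing directory
minikube ssh -- ls -la /mnt/data
```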

Working with a Persistent Volume via Kubernetes PVC

You now have a PV for your cluster, but how do you use it? Through a PVC. With a PVC, pods running on your cluster can claim storage from a PV to store data.

A PVC references a PV (via its storage class) and specifies how much storage your pods should claim from that volume.

1. Run the kubectl create command below to create a namespace called mysql.

kubectl create namespace mysql
Creating a namespace (mysql)

You can also confirm the namespace exists on your Minikube dashboard by navigating to the Namespaces option on the left panel, as shown below.

Verifying the newly-created namespace (mysql) in Minikube dashboard

2. Next, create a file called mysql-pvc.yaml with your text editor.

nano mysql-pvc.yaml

3. Add the code below to the mysql-pvc.yaml file, then save and exit the editor.

The following code creates a PersistentVolumeClaim named mysql-claim that requests 50Mi of storage from a PV.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-claim # Name of the persistent volume claim
spec:
  storageClassName: hostpath # Name of the storage class
  accessModes:
    - ReadWriteOnce # Volume can be mounted as read-write by a single node
  resources:
    requests:
      storage: 50Mi # Indicates this claim requests only 50Mi of storage from a PV

4. Lastly, run the following command to apply the new configuration setting (mysql-pvc.yaml) to the mysql namespace.

kubectl apply -n mysql -f mysql-pvc.yaml

The output below indicates the persistent volume claim has been created and is bound to a persistent volume called mysql-volume.

Verifying persistent volume claim in (mysql) namespace

Alternatively, you can confirm if your persistent volume claim exists in the Minikube dashboard.

Click the drop-down box in the Minikube dashboard shown below to select the namespace you are working on (mysql). Navigate to the Persistent Volume Claim option on the left panel to see your PVC.

Accessing the mysql namespace in the Minikube dashboard
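Beyond the dashboard, you can also confirm the claim is Bound from the command line; the describe output shows which PV backs the claim (the names below follow the files created above):

```shell
# Show the claim's status (expect STATUS: Bound) and its backing volume
kubectl -n mysql get pvc mysql-claim
kubectl -n mysql describe pvc mysql-claim
```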

Creating a MySQL ConfigMap

You’ve created a PVC that is bound to the PV in your cluster, and now you’re ready to deploy your MySQL database and configure it to use that PVC.

But before you go ahead with deploying the MySQL database, you’ll need to create a ConfigMap that holds the database name and root password for your MySQL database. A ConfigMap lets you store non-confidential data in key-value pairs.
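A root password is arguably confidential, so production setups usually keep it in a Secret rather than a ConfigMap; this tutorial uses a ConfigMap for simplicity. A minimal sketch of the Secret alternative (the name mysql-secret is illustrative; the stringData field lets you write plain text, which Kubernetes base64-encodes for you):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysql-secret # Illustrative name
type: Opaque
stringData:
  MYSQL_ROOT_PASSWORD: '1234' # The root password of the MySQL database
```

A pod can then load it with secretRef instead of configMapRef under envFrom.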

1. On your terminal, create a new file called mysql-configmap.yaml.

nano mysql-configmap.yaml

2. Add the following configuration to the mysql-configmap.yaml file, save the file, and exit the code editor.

This configuration setting creates a ConfigMap called mysql-config with the credentials to access the MySQL database.

apiVersion: v1
kind: ConfigMap
metadata:
   name: mysql-config # Name of the ConfigMap
   labels:
     app: mysql
data:
  MYSQL_DATABASE: 'mysqldb' # The name of a database the MySQL image creates on startup
  MYSQL_ROOT_PASSWORD: '1234' # The root password of the MySQL database

3. Lastly, run the below command to apply the new configuration settings (mysql-configmap.yaml) to your namespace (mysql).

kubectl apply -n mysql -f mysql-configmap.yaml
Creating configMap in (mysql) namespace

Alternatively, navigate to the Config Maps option under Config and Storage on the left panel of the Minikube dashboard to see your config map, as shown below.

Verifying configMap (mysql-config) in Minikube dashboard

Creating a MySQL StatefulSet

Now that you have created a ConfigMap for your MySQL database, you’ll need a way to manage the deployment and scaling of your pods by creating a StatefulSet. StatefulSets are designed for applications that maintain state, like a database in this context.
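Note that when a StatefulSet runs more than one replica, the usual pattern is a volumeClaimTemplates section, which provisions a separate PVC per pod, rather than one shared claim as in this tutorial. A sketch of what that section might look like under a StatefulSet’s spec (values mirror the PVC created earlier):

```yaml
# Sketch only: nested under spec: in a StatefulSet; each replica gets its own PVC
volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      storageClassName: hostpath
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 50Mi
```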

1. Create another YAML file called mysql-stateful-set.yaml in your text editor.

nano mysql-stateful-set.yaml

2. Paste in the code below to the mysql-stateful-set.yaml file, save the file, and exit the editor.

The code below creates a StatefulSet called mysql with only one replica that pulls the latest MySQL image and listens on port 3306. This code also configures the StatefulSet to persist MySQL’s data directory using your PVC (mysql-claim).

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql # The name of the StatefulSet
spec:
  serviceName: mysql # The name of the service this StatefulSet should use
  selector:
    matchLabels:
      app: mysql
  replicas: 1 # Indicates this StatefulSet should only create one instance of the mysql database
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql # The name of the MySQL container
          image: mysql:latest # The image of the MySQL database
          imagePullPolicy: "IfNotPresent"
          ports:
          - containerPort: 3306 # The port number MySQL listens on
          envFrom:
          - configMapRef:
              name: mysql-config # Tells this StatefulSet to load environment variables from the mysql-config ConfigMap
          volumeMounts:
          - name: data
            mountPath: /var/lib/mysql # MySQL's data directory, persisted via the PVC
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: mysql-claim # Indicates the mysql database should use a PVC called mysql-claim

3. Now, run the kubectl apply command below to apply the new configuration setting from the mysql-stateful-set.yaml file.

kubectl apply -n mysql -f mysql-stateful-set.yaml
Creating a StatefulSet in the (mysql) namespace

You’ll see the StatefulSet on your Minikube dashboard in the mysql namespace, as shown below.

Verifying StatefulSet (mysql) on Minikube dashboard
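You can also verify the StatefulSet from the command line instead of the dashboard; rollout status waits until the single replica is ready:

```shell
# List the StatefulSet and wait for its pods to become ready
kubectl -n mysql get statefulsets
kubectl -n mysql rollout status statefulset/mysql
```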

Creating a MySQL Service

After creating the StatefulSet for your MySQL database, it’s time to create a service to expose your MySQL database.

1. Create a file called mysql-service.yaml with your preferred editor.

nano mysql-service.yaml

2. Add the following configurations to the mysql-service.yaml file, save the file, and exit the code editor.

The configuration setting below creates a service called mysql that listens on port 3306 and forwards traffic to port 3306 on the MySQL pods.

apiVersion: v1
kind: Service
metadata:
   name: mysql
   labels:
     app: mysql
spec:
   selector:
     app: mysql
   ports:
     - protocol: TCP
       name: mysql
       port: 3306
       targetPort: 3306

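As an aside, a StatefulSet’s serviceName conventionally points at a headless service (clusterIP: None), which gives each pod a stable DNS name such as mysql-0.mysql. The service above works for this single-replica tutorial, but a headless variant would look roughly like this:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  clusterIP: None # Headless: no virtual IP; DNS resolves directly to pod IPs
  selector:
    app: mysql
  ports:
    - protocol: TCP
      port: 3306
      targetPort: 3306
```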
3. Now, run the following command to apply the new configuration (mysql-service.yaml) in the mysql namespace.

kubectl apply -n mysql -f mysql-service.yaml

The output below confirms the new MySQL service has been created.

Verifying the MySQL service (mysql)

Alternatively, confirm that your MySQL service reflects on your Minikube dashboard by clicking the Services option on the left panel.

Verifying MySQL service (mysql) on Minikube dashboard

4. Now, run the kubectl get command below to list the pods running in your cluster in the mysql namespace.

kubectl get -n mysql pods

You can see below that your pod is running correctly. Note down the pod name, as you’ll need it to access your container later.

Verifying pods in (mysql) namespace

Alternatively, you can also check for your pods on the Minikube dashboard by clicking on the Pods option from the left panel.

Verifying pods in the (mysql) namespace in the Minikube dashboard

Executing the MySQL Container

Your MySQL service is running, but your PV serves no purpose without data. You’ll open a shell in the MySQL container so you can create a database and a table, and add data to persist.

1. Run the below command to access your pod in your namespace (mysql).

Be sure to replace mysql-0 with the pod name you noted in the last step of the “Creating a MySQL Service” section.

kubectl -n mysql exec -it mysql-0 -- bash

If you’re able to access your pod, the prompt changes, as shown below.

Prompt Changes

2. Next, run the mysql command below to log into the MySQL database using root as the username and 1234 as the password (the root password you set in the ConfigMap).

mysql -uroot -p1234

Wait for the container in the mysql-0 pod to be ready, and you’ll get the output below.

Accessing the MySQL database

3. Run the following query to create a database called users.

CREATE DATABASE users;
Creating a database (users)

4. Now, run the below command to show the list of existing databases.

show databases; 

Below, you can confirm your newly-created database (users) exists.

Verifying database (users)

5. Run the command below to switch to the newly-created database (users).

use users;
Switching database (users)

6. Next, run the query below to create a developers table in the users database.

create table developers (dev_id INT NOT NULL AUTO_INCREMENT PRIMARY KEY, d_name TEXT, stack TEXT);

The output below indicates you’ve successfully created a table in your database.

Creating a table (developers) in the (users) database

7. After creating the table, run the below command to show all existing tables in the current database (users).

show tables;

Below, you can see the table you created (developers) exists.


8. Execute each query below to populate the developers table with two entries, each consisting of a name and a stack.

insert into developers (d_name, stack) values ('John Doe', 'Python Developer');
insert into developers (d_name, stack) values ('Jane Doe', 'JavaScript Developer');

Once you’ve added data to your database, you’ll get the following output.

Inserting values into the (developers) table

9. Finally, run the query below to verify the data you added to your table (developers) exists.

select * from developers;

As you can see below, the data you added to your table (developers) in step eight exists.

Verifying values in the (developers) table
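As an aside, you don’t have to open an interactive shell for one-off queries; you can run the same SELECT directly from your host (this assumes the pod name mysql-0 and the 1234 password from the ConfigMap):

```shell
# Run a single query inside the container without an interactive session
kubectl -n mysql exec mysql-0 -- \
  mysql -uroot -p1234 -e 'SELECT * FROM developers;' users
```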

Testing Data Persistence

You’ve successfully added data to your database, but you still have to test whether the data persists if your pod goes down. To confirm data persistence, you’ll delete and recreate your pod.

1. Run the exit command twice to exit from the MySQL server and your container.

exit

2. Next, run the below kubectl delete command to delete the first pod (mysql-0) in your namespace (mysql).

kubectl delete -n mysql pod mysql-0
Deleting pod (mysql-0)

3. After deleting your pod, the StatefulSet automatically recreates it. Run the following command to check the pod’s status in your namespace (mysql).

kubectl get -n mysql pods

The output below indicates that your pod is being recreated.

Confirming the pod is being recreated

Once recreated, you’ll see the following output where the pod’s status says Running.

Verifying the pod is running
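Rather than re-running kubectl get, you can watch the pod cycle from Terminating back to Running in one command (press Ctrl+C to stop watching):

```shell
# Stream pod status changes in the mysql namespace
kubectl get pods -n mysql -w
```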

4. Now, run the command below to open a shell in (exec) the MySQL container again.

kubectl -n mysql exec -it mysql-0 -- bash
Executing the MySQL container

5. Once you’re in the MySQL container, run the following command to log into the MySQL server.

mysql -uroot -p1234
Logging into the MySQL server

6. Next, run the show command below to show all available databases.

show databases;

Below, you can see that the users database still exists, indicating data persistence works with Kubernetes PVC. But you still have to check if the data in your database is still there.

Verifying the (users) database

7. Run the use command to switch to the users database.

use users;
Users database

8. After switching, run the show command below to list the tables in the current database (users).

show tables;

As you can see below, the developers table still exists. You’re one step closer to verifying your data is still intact.

Verifying the (developers) table in the (users) database

9. Finally, run the query below to confirm if you still have your values in the developers table.

select * from developers;

Do you see an output similar to the one below? Congratulations! Data persistence is working perfectly, so you don’t have to worry about losing data even if you have to delete and recreate your pods.

Verifying values in the (developers) table

Conclusion

In this tutorial, you’ve learned about Kubernetes PVCs and set up a persistent volume for your application, in this case a MySQL database. You’ve confirmed your data persists even after deleting and recreating your pods. Now you don’t have to panic if you accidentally delete your pods, as long as you have a Kubernetes PVC on your side.

So how else would you like to implement a persistent volume? Perhaps one with a PostgreSQL database accessible across namespaces using another type of storage class? Like an AzureDisk storage provisioner?
