Preparation, Docker Image Push and Deployment for Containerized Voting Application in Kubernetes Cluster using Docker, Azure Container Registry (ACR) and Azure Kubernetes Service (AKS)


In another project based on a real-world scenario, I acted as a DevOps Engineer and showed a new team member how to deploy an application on a Kubernetes cluster.

This cluster is part of The Cloud Bootcamp project, and I prepared this new team member to deploy the voting application that was developed for the MultiCloud Experience, an online event where participants had the opportunity to learn about Cloud technologies.

I deployed it to Microsoft Azure cloud, where I first pushed the application’s Docker image to Azure Container Registry (ACR) and then used the Azure Kubernetes Service (AKS) service to deploy a cluster managed by Microsoft Azure.

Implementation steps overview:

1- Download or create your application (this is normally done by application developers).

2- Create a Resource Group.

3- Create Azure Container Registry (ACR).

4- Create a Docker image of your application code.

5- Create AKS cluster.

6- Attach the ACR to AKS.

7- Deploy the application.

8- Test the application.

Prerequisites:

If you are using a local machine or VS Code, make sure the required tools are installed: wget and zip, Docker Desktop running, and the Azure CLI configured and up to date.

I use Azure Cloud Shell to simulate a real-world environment.

In this hands-on, I had already created my Dockerfile and AKS Deployment file. In real life, creating the Dockerfile and Deployment files is the DevOps engineer's job.

Step 1. Downloading / unzipping the app

Use wget or curl to download your application files, then unzip them for use.
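A minimal sketch of this step, assuming the application is distributed as a zip archive; the URL and archive name below are placeholders, not the actual project links:

wget <your-app-archive-url> -O tcb-voting-app.zip
unzip tcb-voting-app.zip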

Step 2. Create a Resource Group

az group create --name tcb-vote --location eastus
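As a quick sanity check (optional), you can read the resource group back:

az group show --name tcb-vote --output table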

Step 3. Creating Azure Container Registry

az acr create --resource-group tcb-vote --name tcbazr --sku Basic

Note: the registry name must be globally unique, lowercase alphanumeric, and at least 5 characters, so replace tcbazr with your own name if it is already taken.

Logging in to ACR:

To log in to ACR with Docker, you need the registry name and an access token, so generate the token first.

az acr login --name <your-acr-name> --expose-token

Copy the “accessToken” value and save it.

Use the command below to log in to ACR (the username is a fixed GUID that Azure accepts for token-based logins).

docker login <your-acr-name>.azurecr.io -u 00000000-0000-0000-0000-000000000000 -p <accessToken>
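A compact variant of the same login flow, capturing the token into a shell variable instead of copying it by hand; this is a sketch assuming a Bash-like shell such as Azure Cloud Shell:

ACR_TOKEN=$(az acr login --name <your-acr-name> --expose-token --query accessToken --output tsv)
docker login <your-acr-name>.azurecr.io -u 00000000-0000-0000-0000-000000000000 -p $ACR_TOKEN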

Step 4. Creating Docker Image

Ensure you are in the directory that contains your Dockerfile.

cd tcb-voting-app/tcb-vote/

az acr build --image tcb-vote --registry tcbazr --file Dockerfile .

Verify that the image was uploaded.
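One way to verify from the CLI, assuming the registry and image names used above (tcbazr and tcb-vote):

az acr repository list --name tcbazr --output table
az acr repository show-tags --name tcbazr --repository tcb-vote --output table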

Step 5. Creating a cluster in the Azure Kubernetes Service (AKS)

az aks create \
--resource-group tcb-vote \
--name AKSClusterTCB \
--node-count 1 \
--generate-ssh-keys

The deployment process will take some time.
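If you prefer to watch progress from the CLI rather than the portal, the provisioning state can be queried (an optional convenience, using the names from the command above):

az aks show --resource-group tcb-vote --name AKSClusterTCB --query provisioningState --output tsv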

Step 6. Attaching the ACR to the AKS cluster

Now we need to attach the ACR to the AKS cluster so the cluster can pull the image.

az aks update -n <your-AKS-cluster-name> -g <your-resource-group-name> --attach-acr <your-acr-name> 

While the ACR attachment is being applied, you may check your AKS cluster properties.
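Depending on your Azure CLI version, you can also validate the attachment directly; an optional check, assuming the names used earlier:

az aks check-acr --resource-group tcb-vote --name AKSClusterTCB --acr tcbazr.azurecr.io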

We need to log in to the AKS cluster to deploy our application. For that, we need to get the credentials.

Getting the credentials to log in to the Kubernetes cluster:

az aks get-credentials --resource-group <rg_name> --name <cluster_name>

Common bug and solution:

If this step fails, the issue is usually a lack of permissions for the cluster's system-assigned managed identity (service principal). The solution is to assign the Contributor role to that service principal:

az role assignment create --assignee $principalId --role Contributor --scope /subscriptions/<YourSubscriptionID>/resourceGroups/tcb-vote
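The $principalId value can be captured from the cluster's identity; one way to do it, assuming the cluster uses a system-assigned managed identity:

principalId=$(az aks show --resource-group tcb-vote --name AKSClusterTCB --query identity.principalId --output tsv)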

Test access to the AKS cluster:

kubectl get nodes

Step 7. Deploying Application:

To deploy application, first make sure you are in the directory that contains your Kubernetes Deployment file (yaml file).

Second, update the image properties in the manifest, such as the ACR name and the image tag, if applicable (see the snippet below).
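For illustration, the container image reference in the Deployment manifest would look roughly like this; the registry login server and the latest tag are assumptions based on the names used earlier, so adjust them to match your own ACR:

image: tcbazr.azurecr.io/tcb-vote:latest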

Use the command below to deploy the application to the cluster.

kubectl apply -f tcb-vote-plus-redis.yaml

Use the commands below to get the public IP of the load balancer and to check the nodes and pods.

kubectl get service --watch
kubectl get nodes
kubectl get pods

Step 8. Testing the application:

Copy the load balancer's public IP address and paste it into a browser.
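Alternatively, a quick reachability check can be done from the terminal; replace the placeholder below with the EXTERNAL-IP reported by kubectl get service:

curl http://<load-balancer-public-ip>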

Conclusion and Insights

In the modern landscape of application deployment, the agile process has become the standard, allowing for rapid and efficient deployment cycles. Many organizations have embraced containerized application deployment due to its numerous benefits, such as accelerated deployment times, fault tolerance, resource optimization, scalability, and application portability. These advantages are particularly crucial in multi-cloud environments, where the ability to move and manage applications across different cloud providers seamlessly is essential.

Containerization, through technologies like Docker and orchestration platforms like Kubernetes, empowers organizations to deploy applications quickly and consistently across any infrastructure that supports containers. This hands-on guide emphasizes the core skills needed to leverage Docker and Kubernetes effectively, enabling teams to deploy and manage modern applications in an agile and efficient manner.

Additional Insights:

  1. Improved Development Workflow: Containerization simplifies the development and testing process by providing consistent environments, reducing the “it works on my machine” problem.
  2. Enhanced Security: Containers can encapsulate applications with their dependencies and configurations, improving security by isolating applications from each other and the host system.
  3. Cost Efficiency: By optimizing resource usage and enabling more granular scaling, containerized deployments can lead to significant cost savings on cloud resources.
  4. Continuous Integration and Continuous Deployment (CI/CD): Containers integrate seamlessly with CI/CD pipelines, enabling automated testing, deployment, and scaling, which further accelerates the development cycle.

By mastering Docker and Kubernetes, teams can harness these benefits to stay competitive and responsive in a fast-paced digital environment, ensuring that applications are resilient, scalable, and easily manageable across diverse cloud ecosystems.
