Day 32 Task: Launching your Kubernetes Cluster with Deployment


Index

  1. What is Deployment in Kubernetes?

    • Definition: Configuration for updating Pods and ReplicaSets.

    • Functionality: Controller manages state changes.

    • Use Cases: Scaling, updating, rolling back applications.

  2. Features

    • Auto-healing: Liveness and readiness probes.

    • Auto-scaling: Horizontal Pod Autoscaler (HPA).

  3. Task

    • File: deployment.yml

    • Steps: Apply using kubectl apply -f deployment.yml.


─── ⋆⋅☆⋅⋆ ──Introduction─── ⋆⋅☆⋅⋆ ──

Welcome to Day 32 of our journey! Today, we delve deeper into Kubernetes Deployment, exploring its rich features and practical applications through hands-on experience.

In our exploration, we'll leverage Kubernetes Deployment to deploy a sample application, showcasing its capabilities such as auto-healing and auto-scaling. By the end of the day, you'll have a solid understanding of how to harness the power of Kubernetes Deployment for managing containerized applications effectively.

─── ⋆⋅☆⋅⋆ ──What is Kubernetes Deployment?─── ⋆⋅☆⋅⋆ ──

Kubernetes is an open-source container management tool that automates container deployment, scaling, descaling, and load balancing (it is also called a container orchestration tool). It is written in Go (Golang) and has a huge community because it was first developed by Google and later donated to the CNCF (Cloud Native Computing Foundation). A Deployment is an abstraction layer over Pods; it acts like a blueprint for creating them.

─── ⋆⋅☆⋅⋆ ──Kubernetes Deployment─── ⋆⋅☆⋅⋆ ──

A Kubernetes Deployment is a high-level resource object with which you can manage the deployment and scaling of applications while maintaining their desired state. With a Deployment you can scale containers up and down depending on the incoming traffic. If you have performed a rolling update through a Deployment and later find bugs in it, you can also perform a rollback. Deployments are managed with a CLI such as kubectl, which can be installed on any platform.

─── ⋆⋅☆⋅⋆ ──Kubernetes Deployment Spec─── ⋆⋅☆⋅⋆ ──

It mainly consists of three components:

  1. Metadata: It consists of the name and labels for the configuration file. The labels are used for establishing the connection between deployment and services.

  2. Specification: It consists of information regarding the number of replicas, the selector labels, and the template, which is the blueprint for Pods. The template itself is like a configuration file for Pods: it consists of metadata and a specification for the Pods that stores information regarding the containers to be used in the Pod, the image used to build each container, and the container's name and ports.

  3. Status: This component is automatically generated and added by Kubernetes. It is the basis of the self-healing feature of Kubernetes: if the desired status and the actual status of a deployment do not match, Kubernetes fixes the Pods to match the desired status.
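
The status component is not something you write yourself. Once a Deployment exists, you can inspect what Kubernetes has recorded by dumping the full object; assuming the Deployment is named nginx, as in the example in the next section:

$ kubectl get deployment nginx -o yaml

The status: block at the end of the output contains fields such as readyReplicas, availableReplicas, and conditions, which Kubernetes continuously compares against the spec to drive self-healing.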

─── ⋆⋅☆⋅⋆ ──Kubernetes Deployment YAML─── ⋆⋅☆⋅⋆ ──

We will be using Minikube to run Kubernetes on our local machine. The Deployment configuration file for Nginx will be:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx          # the Deployment manages Pods carrying this label
  template:
    metadata:
      labels:
        app: nginx        # label attached to Pods created from this template
    spec:
      containers:
      - name: nginx
        image: nginx
        resources:
          limits:
            memory: "128Mi"   # maximum memory per container
            cpu: "500m"       # maximum CPU (half a core) per container
        ports:
        - containerPort: 80   # port the nginx container listens on

Now, first open the directory where you created the nginx.yaml file in your terminal and create the Deployment using the commands:

$ kubectl apply -f nginx.yaml
$ kubectl get all

Hence, we have successfully created a Deployment for Nginx.
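
The introduction mentioned auto-healing through liveness and readiness probes, but the manifest above does not define any. Here is a minimal sketch of how the same nginx container could declare them; the HTTP path / on port 80 is assumed only because nginx serves its default page there, and the delay and period values are illustrative.

    spec:
      containers:
      - name: nginx
        image: nginx
        # Restart the container if it stops responding (auto-healing)
        livenessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 10
        # Send traffic to the Pod only after it answers successfully
        readinessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 3
          periodSeconds: 5

This fragment slots into the Deployment's template section in place of the container definition shown earlier.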

─── ⋆⋅☆⋅⋆ ──Updating a Kubernetes Deployment─── ⋆⋅☆⋅⋆ ──

To update a Kubernetes Deployment we can simply update its config file using one of two methods:

Method 1: Using the kubectl edit command from the terminal

$ kubectl edit deployment <deployment-name>

Now you can edit the deployment configuration by pressing “i” to insert text; after editing, press the Escape key and then type “:wq” to save your changes and exit.


Method 2: Updating Configuration Directly

You can open your config file in an IDE like VS Code, edit the config there, and apply it by using the command:

$ kubectl apply -f deployment_config.yaml

Let’s say in this case we are updating the container port from 80 to 800.
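
As a rough sketch, the only part of the manifest above that changes is the ports section of the container:

        ports:
        - containerPort: 800   # previously 80

After saving, kubectl apply -f deployment_config.yaml picks up the change and rolls out new Pods.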

─── ⋆⋅☆⋅⋆ ──Rolling Back a Kubernetes Deployment─── ⋆⋅☆⋅⋆ ──

With a Kubernetes deployment, you can revert to a previous version of the application if you find any bugs in the present version. This helps you reduce the problems faced by end users in the current or updated version of the application.

The following steps can be followed for rolling back a deployment:

Step 1: First list all the revisions by using the following command and select the version of deployment to which you want to roll back.

kubectl rollout history deployment/<deployment-name>

Step 2: If you want to roll back to a previous version of the deployment, you can do that by using the following command. Without --to-revision it rolls back to the immediately previous revision.

kubectl rollout undo deployment/nginx-deployment --to-revision=1

(You can mention the required revision number with --to-revision.)

Step 3: Reverting to the previous version of the application helps us reduce downtime for customers. It’s crucial to regularly test and validate your rollback process to ensure its effectiveness in real-world scenarios.

⋆⋅☆⋅⋆ Checking the Rollout History of a Kubernetes Deployment ⋆⋅☆⋅⋆

Rollout history can be seen by using the following command.

kubectl rollout history deployment/<deployment-name>

This command allows you to view the number of revisions available in the Kubernetes cluster and the changes made in them. If you want to see the detailed history of a specific revision, use the following command.

kubectl rollout history deployment/webapp-deployment --revision=3

Rollout history helps you roll back to previous versions of the application if you find bugs or problems in the currently deployed version.

─── ⋆⋅☆⋅⋆ ──Command to Scale a Kubernetes Deployment─── ⋆⋅☆⋅⋆ ──

Scaling a deployment can be done in several ways; one way is to use the following command.

kubectl scale deployment/tomcat-1st-deployment --replicas=5

You can also scale Pods with the Horizontal Pod Autoscaler (HPA) by enabling it in the cluster: you choose the number of Pods that should keep running whether there is traffic or not, and how many Pods should run when the incoming traffic increases.

kubectl autoscale deployment/tomcat-1st-deployment --min=5 --max=8 --cpu-percent=75

--min=5 defines the minimum number of Pods to keep running whether there is traffic or not, --max=8 defines the maximum number it can scale up to when traffic suddenly increases, and the scaling between them is driven by --cpu-percent.
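
The kubectl autoscale command above creates a HorizontalPodAutoscaler object behind the scenes. A roughly equivalent declarative manifest would look like the sketch below; the object name is just an illustrative choice, and it assumes the metrics server is running in the cluster.

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: tomcat-1st-deployment-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: tomcat-1st-deployment
  minReplicas: 5          # Pods kept running whether there is traffic or not
  maxReplicas: 8          # upper bound when traffic spikes
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 75   # scale out when average CPU crosses 75%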

⋆⋅☆⋅⋆ Pausing and Resuming a Rollout of a Kubernetes Deployment ⋆⋅☆⋅⋆

You can pause a deployment that you are currently updating and resume the rollout once you feel the changes are correct. You can use the following command to pause rollouts.

kubectl rollout pause deployment/webapp-deployment

To resume the deployment which is paused you can use the following command.

kubectl rollout resume deployment/webapp-deployment

While the rollout is paused you can update the image by using the following command. The change is recorded, and the current version of the deployment is replaced with the updated image once you resume the rollout.

kubectl set image deployment/webapp-deployment webapp=webapp:2.1
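
Putting the three commands together, a typical paused-update workflow looks like this:

# Pause the rollout so intermediate edits do not trigger new ReplicaSets
kubectl rollout pause deployment/webapp-deployment

# Make one or more changes, for example update the container image
kubectl set image deployment/webapp-deployment webapp=webapp:2.1

# Resume; all accumulated changes roll out as a single update
kubectl rollout resume deployment/webapp-deployment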

─── ⋆⋅☆⋅⋆ ──Kubernetes Deployment Status─── ⋆⋅☆⋅⋆ ──

A deployment passes through different stages while it is being rolled out. Each stage reflects the health of the Pods and shows us whether any problems are arising.

There are a few deployment statuses, listed below, that show the health of the Pods.

  1. Pending: The deployment is pending, meaning it is about to start or it is facing some issue starting.

  2. Progressing: The deployment is in progress and is being rolled out.

  3. Succeeded: The deployment was rolled out successfully without any errors or bugs.

  4. Failed: The deployment could not be rolled out successfully because of some issue.

  5. Unknown: This happens when the Kubernetes API cannot be reached for the deployment or there is a problem with the deployment itself.

The deployment status helps you monitor the Pods so you can take action immediately if you find any trouble or bugs.

─── ⋆⋅☆⋅⋆ ──Progressing Kubernetes Deployment─── ⋆⋅☆⋅⋆ ──

If a deployment is in progress, it means the deployment is in the stage of updating or creating a new ReplicaSet. The following are some of the reasons a deployment is in progress.

  • The deployment is being created for the first time or is creating a new ReplicaSet.

  • If the deployment is undergoing an update or scaling, it will be in progress.

  • The deployment will also be in progress while Pods are scaling down or starting up.

If you want to troubleshoot the deployment, you can use the following command.

kubectl describe deployments

─── ⋆⋅☆⋅⋆ ──Complete Kubernetes Deployment─── ⋆⋅☆⋅⋆ ──

The following conditions must be satisfied to mark a deployment as complete.

  1. All the replicas of the pods must be up and running.

  2. The desired count of pods must match the running count of pods.

  3. All the replicas of the pods which are associated with the deployment must be available.

  4. No replicas of the old Pods must still be running.

To check the rollout status of a deployment, you can use the following command.

kubectl rollout status deployment/<deployment-name>

─── ⋆⋅☆⋅⋆ ──Failed Kubernetes Deployment─── ⋆⋅☆⋅⋆ ──

A deployment may get stuck while trying to roll out its newest ReplicaSet and remain in an incomplete state forever. This can happen for several reasons, including the following:

  • A failure of the readiness probe or the liveness probe.

  • An error while pulling the image (for example, the wrong tag was mentioned).

  • Insufficient quota for the resources requested in the deployment.

  • The deployment cannot connect to a dependency, such as a database.

To get more information about the deployment and why it failed, you can use the following command, which gives you a detailed description of the deployment.

kubectl describe deployments
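
If you would rather have Kubernetes mark a stuck rollout as failed instead of leaving it incomplete forever, the Deployment spec supports a progress deadline. The value below (ten minutes, which is also the default) is just an example:

spec:
  progressDeadlineSeconds: 600   # after 600s without progress, the Progressing condition becomes False with reason ProgressDeadlineExceeded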

─── ⋆⋅☆⋅⋆ ──Kubernetes Canary Deployment─── ⋆⋅☆⋅⋆ ──

In a canary deployment, only part of the replicas (for example, fifty percent) run the updated version while the remaining replicas keep running the older version. The updated replicas are made available to a subset of end users, and based on their feedback you decide whether to replace all the replicas. If the feedback is positive you update the remaining replicas; if the feedback is bad you can roll back to the previous version immediately.

The following are two common approaches to implementing canary deployments.

  1. Splitting the traffic using Istio.

  2. Blue/Green Deployment.
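
Outside of Istio or a full blue/green setup, a minimal native sketch of the fifty-fifty idea is to run two Deployments that share an app label and differ only in a track label and image version, fronted by one Service that selects the shared label. All names, labels, and image tags below are illustrative assumptions:

# Stable half of the replicas
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-stable
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webapp
      track: stable
  template:
    metadata:
      labels:
        app: webapp
        track: stable
    spec:
      containers:
      - name: webapp
        image: webapp:2.0
---
# Canary half running the new version
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-canary
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webapp
      track: canary
  template:
    metadata:
      labels:
        app: webapp
        track: canary
    spec:
      containers:
      - name: webapp
        image: webapp:2.1
---
# The Service selects only the shared app label, so it load-balances
# across both the stable and the canary Pods
apiVersion: v1
kind: Service
metadata:
  name: webapp
spec:
  selector:
    app: webapp
  ports:
  - port: 80
    targetPort: 80

If the canary feedback is positive, you update the stable Deployment to the new image (or scale the canary up and the stable down); if it is negative, deleting the canary Deployment immediately returns all traffic to the stable Pods.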

─── ⋆⋅☆⋅⋆ ──Benefits of Kubernetes Deployments─── ⋆⋅☆⋅⋆ ──

  • Kubernetes Deployment helps in container orchestration, that is, managing the containers in Pods.

  • Enhances microservice architecture.

  • Auto-scaling

  • Automated Rollouts and Rollbacks

  • Load Balancing

─── ⋆⋅☆⋅⋆ ──Use Cases of Kubernetes Deployments─── ⋆⋅☆⋅⋆ ──

  • Rollout a ReplicaSet: A Kubernetes Deployment generates a ReplicaSet in the background, which contains information regarding the number of Pods to be created.

  • Declaring a New State of Pods: On updating the Pod template spec, a new ReplicaSet is created and the Deployment moves Pods from the old ReplicaSet to the new one.

  • Scaling: Deployment can be configured to scale up to facilitate more load.

  • Status of Deployment: It can be used to check if the deployment is stuck somewhere by matching the current status with the desired status.

  • Cleaning up old ReplicaSets that are no longer required.

─── ⋆⋅☆⋅⋆ ──ReplicaSets vs Deployments─── ⋆⋅☆⋅⋆ ──

ReplicaSet | Deployment
A ReplicaSet ensures that the desired number of Pods matches the number specified in the YAML file. | A Deployment is a higher-level object that manages the lifecycle of Pods through ReplicaSets.
A ReplicaSet is not suitable for applications that need rolling updates and rollbacks. | A Deployment supports rolling updates and rollbacks.
Internally, a ReplicaSet takes care of the Pods, and the Pods take care of the containers. | Internally, a Deployment takes care of the ReplicaSet, and the ReplicaSet takes care of the Pods.

─── ⋆⋅☆⋅⋆ ──FAQs On Kubernetes Deployment─── ⋆⋅☆⋅⋆ ──

Is Kubernetes a Container or VM?

Kubernetes is neither a container nor a VM. It is a container orchestration platform that runs across a group of virtual machines or nodes, on which applications are deployed in the form of containers.

What is a Pod in Kubernetes?

A Pod is the smallest unit that exists in Kubernetes, similar to a token in the C or C++ language. A specific Pod can have one or more applications. Pods are ephemeral in nature, which means that if a Pod fails, Kubernetes can and will automatically create a new replica of that Pod and continue the operation. Pods can include one or more containers based on the requirement. To know more, refer to the documentation on Kubernetes Pods.
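
For reference, a minimal Pod manifest looks like the sketch below (a single nginx container; the Pod name is arbitrary):

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80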

Kubernetes Deployment Command

You can modify the following command depending on your requirement.

kubectl create deployment <deployment-name> --image=<image-name>

Kubernetes Delete Pod

Following is the command used to delete a Kubernetes Pod.

kubectl delete pod <pod-name>

Conclusion:

Today, we covered Kubernetes Deployment, learning how to deploy applications with features like auto-healing and auto-scaling. This hands-on experience equipped us with essential skills for managing containerized applications effectively. Now, we're ready to streamline deployment processes and ensure the reliability of our container infrastructure.