Deploying Azure Vote app on AKS

We already created the AKS cluster in the last post, and in this post we will use the same AKS cluster to deploy the Azure Vote app. Please follow the screenshots and commands below -

First, let's open the Azure CLI, fetch the cluster's credentials, and check the nodes to confirm everything is running as expected, as shown below -

az aks get-credentials --resource-group ResourceGroup --name AKSCluster

kubectl get nodes




A Kubernetes manifest file defines a desired state for the cluster, such as which container images to run. We will use the manifest file for the Azure Vote app, and it deploys all the relevant components.

Let's open the vi or nano editor to create azure-vote.yaml, and copy in the definition below:

Ref link - https://docs.microsoft.com/en-us/azure/aks/kubernetes-walkthrough-portal



apiVersion: apps/v1
kind: Deployment
metadata:
  name: azure-vote-back
spec:
  replicas: 1
  selector:
    matchLabels:
      app: azure-vote-back
  template:
    metadata:
      labels:
        app: azure-vote-back
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": linux
      containers:
      - name: azure-vote-back
        image: redis
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 250m
            memory: 256Mi
        ports:
        - containerPort: 6379
          name: redis
---
apiVersion: v1
kind: Service
metadata:
  name: azure-vote-back
spec:
  ports:
  - port: 6379
  selector:
    app: azure-vote-back
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: azure-vote-front
spec:
  replicas: 1
  selector:
    matchLabels:
      app: azure-vote-front
  template:
    metadata:
      labels:
        app: azure-vote-front
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": linux
      containers:
      - name: azure-vote-front
        image: microsoft/azure-vote-front:v1
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 250m
            memory: 256Mi
        ports:
        - containerPort: 80
        env:
        - name: REDIS
          value: "azure-vote-back"
---
apiVersion: v1
kind: Service
metadata:
  name: azure-vote-front
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: azure-vote-front


Once the file is saved, run the command below to deploy it:

kubectl apply -f azure-vote.yaml

When the application runs, a Kubernetes service exposes the application front end to the internet. This process can take a few minutes to complete.
To monitor progress, use the kubectl get service command with the --watch argument.

kubectl get service azure-vote-front --watch

Below you can see how it ran in the Cloud Shell CLI -



Now you can see the internal and external IP addresses for the Azure Vote application. When you browse to the public IP address, it shows the app where you can vote for cats and dogs, as shown:
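If you prefer to stay in the shell, here is a small, illustrative way to pull the public IP out of the service and hit the app; it assumes the azure-vote-front service name from the manifest above.

# Grab the external IP assigned by the Azure load balancer (it stays empty until provisioning completes)
EXTERNAL_IP=$(kubectl get service azure-vote-front -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

# Quick check that the front end answers on port 80
curl -I "http://$EXTERNAL_IP"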





Monitor health and logs: When you created the cluster, Azure Monitor for containers was enabled. This monitoring feature provides health metrics for both the AKS cluster and the pods running on it. Navigate to the cluster and follow the instructions below, also shown in the screenshots (a quick CLI spot-check follows the list):

  1. Under Monitoring on the left-hand side, choose Insights
  2. Across the top, choose + Add Filter
  3. Select Namespace as the property, then choose <All but kube-system>
  4. Choose to view the Containers.
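Outside the portal, the cluster can also be spot-checked from the Cloud Shell. This is just an illustrative sketch using the built-in metrics-server, not Azure Monitor itself:

# Node-level CPU/memory usage
kubectl top nodes

# Pod-level usage (the vote app runs in the default namespace in this walkthrough)
kubectl top pods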





To see logs for the azure-vote-front pod, select the View container logs link on the right-hand side of the containers list. These logs include the stdout and stderr streams from the container.
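The same logs can also be pulled from the CLI; a minimal sketch, assuming the front-end pods still carry the app=azure-vote-front label from the manifest:

# Last 50 lines of stdout/stderr from the front-end pod(s)
kubectl logs -l app=azure-vote-front --tail=50

# Or stream logs from the deployment as they arrive
kubectl logs deployment/azure-vote-front -f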







Azure Kubernetes Service (AKS) cluster


Kubernetes is a platform for managing containers across multiple hosts. It provides many management features for container-oriented applications, such as auto-scaling, rolling deployments, compute resource management, high availability (HA), and storage management. Like containers, it's designed to run anywhere, including on bare metal, in your data center, in the public cloud, or even in a hybrid cloud.


A Deployment in Kubernetes ensures that a specified number of groups of containers (pods) are up and running.


Azure Kubernetes Service (AKS) makes it simple to deploy a managed Kubernetes cluster in Azure. AKS reduces the complexity and operational overhead of managing Kubernetes by offloading much of that responsibility to Azure.


As a hosted Kubernetes service, Azure handles critical tasks like health monitoring and maintenance for you. The Kubernetes masters are managed by Azure. You only manage and maintain the agent nodes.


As a managed Kubernetes service, AKS is free - you only pay for the agent nodes within your clusters, not for the masters.


There are various ways to deploy an AKS cluster; in this post we will deploy through the portal, and share the Azure CLI commands and Terraform code as well.
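For reference, a minimal Azure CLI sketch of the same deployment could look like the one below; the resource group name, cluster name, region, and node count are placeholders rather than the exact values used later in this post.

# Create a resource group to hold the cluster
az group create --name ResourceGroup --location eastus

# Create a small test AKS cluster with Azure Monitor for containers enabled
az aks create \
  --resource-group ResourceGroup \
  --name AKSCluster \
  --node-count 1 \
  --enable-addons monitoring \
  --generate-ssh-keys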


For improved security and management, AKS lets you integrate with Azure Active Directory and use Kubernetes role-based access controls. You can also monitor the health of your cluster and resources.


Navigate to Kubernetes Service >> click on it and follow the screenshots:








Above is the first page of the AKS deployment, where we fill in the basic settings:

  • Project details - Select an Azure subscription, then select or create an Azure resource group.
  • Cluster details - Select a region, Kubernetes version, and DNS name prefix for the AKS cluster.
  • Select a VM size for the AKS nodes. The VM size cannot be changed once an AKS cluster has been deployed.
  • Select the number of nodes to deploy into the cluster. I selected 1, as this is for testing purposes.

Next, click on Scale, keep the default options, and then click on Authentication.




On the Authentication page, configure the following options:





  • Create a new service principal by leaving the Service principal field at the (new) default service principal, or choose Configure service principal to use an existing one. If you use an existing one, you will need to provide the SPN client ID and secret (see the CLI sketch after this list).

  • Enable the option for Kubernetes role-based access control (RBAC). This will provide more fine-grained control over access to the Kubernetes resources deployed in your AKS cluster.
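If you would rather prepare an existing service principal up front instead of letting the portal generate one, a hedged CLI sketch looks like this; the service principal name and the <appId>/<password> placeholders are illustrative only.

# Create a service principal and note the appId and password it returns
az ad sp create-for-rbac --name aks-vote-sp

# Reference it when creating the cluster from the CLI
az aks create \
  --resource-group ResourceGroup \
  --name AKSCluster \
  --service-principal <appId> \
  --client-secret <password>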





Click on Networking.

By default, Basic networking is used, and Azure Monitor for containers is enabled.



Click Review + create, and then Create when validation completes.




Once you deploy, it takes a few minutes while Azure creates the master node and worker nodes and deploys all the related resources.

To manage a Kubernetes cluster, you use kubectl, the Kubernetes command-line client. The kubectl client is pre-installed in the Azure Cloud Shell.

Open Cloud Shell using the >_ button at the top of the Azure portal, as shown below along with the AKS overview -





To configure kubectl to connect to your Kubernetes cluster, use the az aks get-credentials command. This command downloads credentials and configures the Kubernetes CLI to use them.


az aks get-credentials --resource-group ResourceGroup --name AKSCluster


To verify the connection to your cluster, use the kubectl get command to return a list of the cluster nodes.


kubectl get nodes


Below are the commands I ran after creating the cluster; please check -




Now we have seen the deployment of AKS, a few Kubernetes basics, and how to connect and check the nodes. Let's look at a few more things about AKS -


**AKS supports RBAC - RBAC lets you control access to Kubernetes resources and namespaces, and the permissions to those resources.


**You can also configure an AKS cluster to integrate with Azure Active Directory (AD). With Azure AD integration, Kubernetes access can be configured based on existing identity and group membership.


**Azure Monitor for container health collects memory and processor metrics from containers, nodes, and controllers. Container logs are available, and you can also review the Kubernetes master logs.



**Cluster node and pod scaling - You can use either the horizontal pod autoscaler or the cluster autoscaler. This approach to scaling lets the AKS cluster automatically adjust to demand and only run the resources needed, as sketched below.
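As an illustration of the two levels of scaling, the commands below reuse the azure-vote-front deployment from earlier and placeholder node counts; they are a sketch, not a prescription.

# Pod level: let the horizontal pod autoscaler add/remove front-end replicas based on CPU
kubectl autoscale deployment azure-vote-front --cpu-percent=50 --min=1 --max=3

# Node level: manually scale the AKS node pool (the cluster autoscaler can do this automatically)
az aks scale --resource-group ResourceGroup --name AKSCluster --node-count 3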


**Cluster node upgrades - Azure Kubernetes Service offers multiple Kubernetes versions. As new versions become available in AKS, your cluster can be upgraded using the Azure portal or Azure CLI. During the upgrade process, nodes are carefully cordoned and drained to minimize disruption to running applications.
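A brief CLI sketch of that upgrade flow, assuming the same resource group and cluster names used earlier; the version is a placeholder for whatever az aks get-upgrades reports as available.

# List the Kubernetes versions the cluster can be upgraded to
az aks get-upgrades --resource-group ResourceGroup --name AKSCluster --output table

# Upgrade the cluster to a chosen available version
az aks upgrade --resource-group ResourceGroup --name AKSCluster --kubernetes-version <version>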



**Storage volume support - To support application workloads, you can mount storage volumes for persistent data. Both static and dynamic volumes can be used, backed by either Azure Disks for single-pod access or Azure Files for multiple concurrent pod access.



**Network settings - An AKS cluster can be deployed into an existing virtual network. In this configuration, every pod in the cluster is assigned an IP address in the virtual network and can directly communicate with other pods in the cluster and with other nodes in the virtual network. Pods can also connect to other services in a peered virtual network, and to on-premises networks over ExpressRoute or site-to-site (S2S) VPN connections.



**HTTP application routing - This add-on makes it easy to access applications deployed to your AKS cluster. When enabled, the HTTP application routing solution configures an ingress controller in your AKS cluster. As applications are deployed, publicly accessible DNS names are auto-configured. The solution configures a DNS zone and integrates it with the AKS cluster, and you can then deploy Kubernetes ingress resources as normal.
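If the add-on was not selected at creation time, it can be switched on afterwards; a minimal sketch, reusing the earlier resource group and cluster names:

# Enable the HTTP application routing add-on on an existing cluster
az aks enable-addons --resource-group ResourceGroup --name AKSCluster --addons http_application_routing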


**AKS supports the Docker image format. For private storage of your Docker images, you can integrate AKS with Azure Container Registry (ACR).
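A short, illustrative ACR integration sketch; the registry name is a placeholder and must be globally unique.

# Create a private container registry
az acr create --resource-group ResourceGroup --name myaksvoteacr --sku Basic

# Let the AKS cluster pull images from that registry
az aks update --resource-group ResourceGroup --name AKSCluster --attach-acr myaksvoteacr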



Kubernetes Architecture – Let's go through it


Let's jump straight into the architecture without wasting any time. Kubernetes works on a master and worker node architecture. In total, Kubernetes has 7 core components, of which 4 belong to the master node and 3 to the worker node, as described and shown below:


Master node components:

1. API Server
2. Controller Manager
3. ETCD
4. Scheduler



Worker node components:

1. Kube-proxy
2. Kubelet
3. Container Runtime


Let's discuss these components one by one –

MASTER NODE – As the name suggests, it is the master of the cluster and the entry point for all administrative tasks; it is responsible for managing the Kubernetes cluster.

There can be one or more than one master inside a cluster for fault tolerance and high availability. Let's check each component that is responsible for making a node a master.


- API Server
This is the front end for the Kubernetes control plane. All API calls are sent to this server, and the server sends commands to the other services.

No components talk to each other directly; the API server is responsible for all the communication.


- ETCD
This is the key-value store for the cluster. When an object is created, that object's state is stored here.

Etcd acts as the reference for the cluster state. If the cluster differs from what is declared in etcd, the cluster changes to match the declared state. For example, if 2 pods are supposed to run with a certain image and one of them gets deleted, another is automatically created to match the declared state (see the sketch below).
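You can watch this reconciliation in action using the vote app deployed earlier in this post; an illustrative check:

# Delete the front-end pod and watch the Deployment/ReplicaSet recreate it to match the declared replica count
kubectl delete pod -l app=azure-vote-front
kubectl get pods -l app=azure-vote-front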



- Scheduler
When a new pod is created, the scheduler determines which node should run it. This decision is based on many factors, including hardware, workloads, affinity, etc.


- Controller Manager
It operates the cluster controllers –
· Node Controller – responsible for noticing and responding when nodes go down.
· Replication Controller – responsible for maintaining the correct number of pods for every replication controller object in the system.
· Endpoints Controller – populates the Endpoints objects, i.e. joins services and pods.
· Service Account & Token Controllers – create default accounts and API access tokens for new namespaces.


Below is the architecture diagram, where you can see each component and their communication flow.





Worker node (minion) – This is a physical server or VM which runs the applications using pods and is controlled by the master node.

"Pods are the smallest working unit of Kubernetes, just like containers are for Docker." In other words, Kubernetes doesn't run containers directly; instead, it wraps one or more containers into a higher-level structure called a pod. Pods are used as the unit of replication in Kubernetes.

Let's check the components of the worker nodes –

Kube-proxy:
- This runs on the nodes and provides network connectivity for services on the nodes that connect to the pods.

- It serves as a network proxy and a load balancer for a service on a single worker node and manages the network routing for TCP and UDP packets.

- Kube-proxy runs on each node to deal with individual host subnetting and to ensure that services are available to external parties.


Kubelet:
- It is an agent that runs on the worker nodes and communicates with the master node. It gets the pod specifications through the API server, executes the containers associated with the pod, and ensures that the containers described in those pods are running and healthy.


Container Runtime:
- This is the container manager. To run and manage a container's lifecycle, we need a container runtime on the worker node. It can be any container runtime that is compliant with the Open Container Initiative, such as Docker.

- Sometimes Docker is also referred to as a container runtime, but to be precise, Docker is a platform that uses containers and bundles a container runtime, rather than being just a runtime itself.



Define Budget & Configure Cost Center Quotas - Azure

There are several types of quotas that are applicable to subscriptions, including resource quotas and spending quotas. We can easily see all of the resource consumption and quotas from the subscription node in the portal, and can also request a quota increase if needed by clicking the button at the top of the same page.


Submitting a request to increase a quota is only submitting a support request to Microsoft. Microsoft Support must respond to the request, and while most requests are granted, it is not guaranteed that a quota increase will be approved.

Please see below





There are also spending quotas in Azure. Spending quotas allow administrators to set alerts within an Azure subscription by configuring budgets that inform the business when Azure spending has hit a certain threshold.


These differ slightly from limits. Where a resource limit can stop resources from being created (e.g. when there are not enough cores available to the subscription in the desired region), a spending quota acts as an alerting mechanism and does not stop resources from being created or consumed. While an alert can be generated from a spending quota, resources can still be created and consumed, which could cause the spending quota to be exceeded.


Budgets in Azure Cost Management give Azure customers, across many subscription offer types, the ability to proactively manage cost and monitor Azure spend over time at a subscription level.


Budgets are a monitoring mechanism only, allowing users to create budgets with set thresholds and notification rules. When a budget threshold is exceeded, a notification is triggered but resources continue to run.
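For completeness, budgets can also be created from the CLI. This is only a sketch; it assumes the az consumption budget commands (still in preview at the time of writing) are available in your CLI version, and the name, amount, and dates are placeholders. The alert/notification configuration stays in the portal as shown below.

# Create a monthly cost budget of 1000 (in the subscription's billing currency) at subscription scope
az consumption budget create \
  --budget-name monthly-cap \
  --amount 1000 \
  --category cost \
  --time-grain monthly \
  --start-date 2020-01-01 \
  --end-date 2020-12-31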

Navigate to Subscription >> click on Budgets under Cost Management, as shown below -




Here you can configure your budget; once done, hit Next at the bottom of this screen, and on the next screen you can configure your alert, as shown below -






Assigning Tags to Azure Resources - PowerShell

We have all heard a lot about tags, and this is something that actually makes your job much easier, especially at the enterprise level where there are lots of different teams and departments. In this post we will assign tags to an existing VM with the help of PowerShell.


Tags give metadata to Azure resources to logically organize them into a taxonomy. Each tag consists of a name and value pair. After you apply tags, you can retrieve all the resources in your subscription with that tag name.


Tags enable you to retrieve related resources from different resource groups. This approach is very helpful when you need to organize resources for billing or management.
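As a quick illustration of that kind of lookup outside PowerShell, the Azure CLI can list everything carrying a given tag; the tag values below simply mirror the example later in this post.

# List all resources in the subscription tagged Dept=IT
az resource list --tag Dept=IT --output table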



A few limitations

• Not all resource types support tags.
• Each resource or resource group can have a maximum of 50 tags.
• Tags applied to a resource group are not inherited by its resources.
• Tags can't be applied to classic resources.
• The user must have write access to a resource to apply tags to it.


In this scenario we have one VM, as shown, with no tags applied, and we will apply tags to this resource with the help of PowerShell.








Get-AzResource - This helps you list the resources, and you can filter by the name of the resource you are looking for; in my case it's the resource group.




$r = Get-AzResource -ResourceName "lab-vm" -ResourceGroupName "ARG"
We store the resource info in a variable so that we can easily run the Set command next.



Set-AzResource -Tag @{ Dept="IT"; Environment="Test" } -ResourceId $r.ResourceId -Force
In this command we are setting the tags as a hashtable; we can pass multiple name/value pairs by separating them with a semicolon (;). (The older AzureRM cmdlet Set-AzureRmResource works the same way if you are still on that module.)



You will see the result as shown:





Once you see it has succeeded, you can refresh the resource and the tags will appear as shown -




