
Azure Kubernetes Service (AKS) cluster


Kubernetes is a platform for managing containers across multiple hosts. It provides many management features for container-oriented applications, such as auto-scaling, rolling deployments, compute resource management, high availability, and storage management. Like containers, it is designed to run anywhere: on bare metal, in your data center, in the public cloud, or even in a hybrid cloud.


A Deployment in Kubernetes ensures that a specified number of pods (groups of containers) are up and running.


Azure Kubernetes Service (AKS) makes it simple to deploy a managed Kubernetes cluster in Azure. AKS reduces the complexity and operational overhead of managing Kubernetes by offloading much of that responsibility to Azure.


As a hosted Kubernetes service, Azure handles critical tasks like health monitoring and maintenance for you. The Kubernetes masters are managed by Azure. You only manage and maintain the agent nodes.


As a managed Kubernetes service, AKS is free - you only pay for the agent nodes within your clusters, not for the masters.


There are various ways to deploy an AKS cluster. In this post we will deploy through the portal, and share the Azure CLI commands and Terraform approach as well.
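For reference, the portal steps below can also be scripted with the Azure CLI. A minimal sketch, assuming placeholder resource-group and cluster names:

```shell
# Create a resource group to hold the cluster (name and region are examples)
az group create --name myResourceGroup --location eastus

# Create a one-node AKS cluster for testing; SSH keys are generated if missing
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --node-count 1 \
  --generate-ssh-keys
```

The same parameters you fill in on the portal's Basics page (region, node count, VM size) map onto flags of `az aks create`.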


For improved security and management, AKS lets you integrate with Azure Active Directory and use Kubernetes role-based access controls. You can also monitor the health of your cluster and resources.


Navigate to Kubernetes Service in the Azure portal, click on it, and follow the screenshots below:








Above is the first page of the AKS deployment, where we fill in the basic settings:

  • Project details - Select an Azure subscription, then select or create an Azure resource group.
  • Cluster Details - Select a region, Kubernetes version, and DNS name prefix for the AKS cluster.
  • Select a VM size for the AKS nodes. The VM size cannot be changed once an AKS cluster has been deployed.
  • Select the number of nodes to deploy into the cluster. I selected 1, as this cluster is for testing purposes.

Next, click Scale, keep the default options, and click Authentication.




On the Authentication page, configure the following options:





  • Create a new service principal by leaving the Service principal field set to the (new) default service principal, or choose Configure service principal to use an existing one. If you use an existing one, you will need to provide the SPN client ID and secret.

    • Enable the option for Kubernetes role-based access controls (RBAC). This will provide more fine-grained control over access to the Kubernetes resources deployed in your AKS cluster.





    Next, click Networking.

    By default, Basic networking is used, and Azure Monitor for containers is enabled. 



    Click Review + create and then Create when validation completes.




    Once you start the deployment, it takes a few minutes to create the master and worker nodes along with all the related resources.

    To manage a Kubernetes cluster, you use kubectl, the Kubernetes command-line client. The kubectl client is pre-installed in the Azure Cloud Shell.

    Open Cloud Shell using the >_ button at the top of the Azure portal, as shown below along with the AKS overview -





    To configure kubectl to connect to your Kubernetes cluster, use the az aks get-credentials command. This command downloads credentials and configures the Kubernetes CLI to use them.


    az aks get-credentials --resource-group ResourceGroup --name AKSCluster


    To verify the connection to your cluster, use the kubectl get command to return a list of the cluster nodes.


    kubectl get nodes


    Below are the commands I ran after creating the cluster:




    Now we have seen the deployment of AKS, a few Kubernetes basics, and how to connect and check the nodes. Let's look at more AKS features -


    **AKS supports RBAC. RBAC lets you control access to Kubernetes resources and namespaces, and the permissions to those resources.
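    As a hypothetical illustration of RBAC on an AKS cluster, the manifest below grants a single user read-only access to pods in one namespace (the namespace, role, and user names are all examples):

```shell
# Grant read-only pod access in the "dev" namespace (example names)
kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: dev
  name: read-pods
subjects:
- kind: User
  name: "dev-user@example.com"
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
EOF
```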


    **You can also configure an AKS cluster to integrate with Azure Active Directory (AD). With Azure AD integration, Kubernetes access can be configured based on existing identity and group membership.


    **Azure Monitor for container health collects memory and processor metrics from containers, nodes, and controllers. Container logs are available, and you can also review the Kubernetes master logs.



    **Cluster node and pod scaling - You can use both the horizontal pod autoscaler and the cluster autoscaler. This approach to scaling lets the AKS cluster automatically adjust to demand and run only the resources needed.
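    Both autoscalers can be wired up from the command line. A sketch with example deployment name, cluster names, and limits:

```shell
# Horizontal pod autoscaler: scale a deployment named "myapp" (example)
# between 2 and 10 replicas, targeting 70% CPU utilisation
kubectl autoscale deployment myapp --cpu-percent=70 --min=2 --max=10

# Cluster autoscaler: let AKS add or remove nodes between 1 and 3
az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --enable-cluster-autoscaler \
  --min-count 1 \
  --max-count 3
```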


    **Cluster node upgrades - Azure Kubernetes Service offers multiple Kubernetes versions. As new versions become available in AKS, your cluster can be upgraded using the Azure portal or Azure CLI. During the upgrade process, nodes are carefully cordoned and drained to minimize disruption to running applications.
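    The available versions and the upgrade itself can be driven from the Azure CLI (cluster names are placeholders):

```shell
# List the Kubernetes versions this cluster can upgrade to
az aks get-upgrades \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --output table

# Upgrade to a chosen version; nodes are cordoned and drained in turn
az aks upgrade \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --kubernetes-version <version>
```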



    **Storage volume support - To support application workloads, you can mount storage volumes for persistent data. Both static and dynamic volumes can be used, backed by either Azure Disks for single-pod access or Azure Files for multiple concurrent pod access.
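    For example, a PersistentVolumeClaim can dynamically provision an Azure Disk through one of the storage classes AKS creates by default (the claim name, class, and size below are illustrative):

```shell
# Dynamically provision an Azure Disk via a PersistentVolumeClaim
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azure-disk-pvc
spec:
  accessModes:
  - ReadWriteOnce        # Azure Disks: single-pod access
  storageClassName: managed-premium
  resources:
    requests:
      storage: 5Gi
EOF
```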



    **Network settings - An AKS cluster can be deployed into an existing virtual network. In this configuration, every pod in the cluster is assigned an IP address in the virtual network and can communicate directly with other pods in the cluster and other nodes in the virtual network. Pods can also connect to other services in a peered virtual network, and to on-premises networks over ExpressRoute or site-to-site (S2S) VPN connections.



    **The HTTP application routing add-on makes it easy to access applications deployed to your AKS cluster. When enabled, the HTTP application routing solution configures an ingress controller in your AKS cluster. As applications are deployed, publicly accessible DNS names are auto configured. The HTTP application routing configures a DNS zone and integrates it with the AKS cluster. You can then deploy Kubernetes ingress resources as normal.
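    The add-on can also be enabled on an existing cluster from the CLI (cluster names are placeholders):

```shell
# Enable the HTTP application routing add-on on an existing AKS cluster
az aks enable-addons \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --addons http_application_routing
```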


    **AKS supports the Docker image format. For private storage of your Docker images, you can integrate AKS with Azure Container Registry (ACR).
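    Attaching a registry is a one-line update; the registry name below is a placeholder:

```shell
# Attach an existing Azure Container Registry so the cluster can pull
# private images without managing separate image-pull secrets
az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --attach-acr myContainerRegistry
```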



    Kubernetes Architecture – Let's go through it


    Let's jump directly into the architecture without wasting any time. Kubernetes follows a master and worker node architecture. In total, Kubernetes has seven core components, of which four belong to the master node and three to the worker node, as described and shown below:


    Master Node components:

    1.     API Server
    2.     Controller Manager
    3.     ETCD
    4.     Scheduler



    Worker Node components:

    1.     Kube-proxy
    2.     Kubelet
    3.     Runtime


    Let’s discuss these components one by one –

    MASTER NODE – As the name suggests, it is the master of the cluster and the entry point for all administrative tasks, responsible for managing the Kubernetes cluster.

    There can be one or more masters inside a cluster for fault tolerance and high availability. Let's check each component that makes a node a master.


    -        API SERVER
    This is the front end for the Kubernetes control plane. All API calls are sent to this server, and the server relays commands to the other components.

    Components do not talk to each other directly; the API server is responsible for all communication.


    -        Etcd
    This is the key-value store for the cluster. When an object is created, that object's state is stored here.

    Etcd acts as the reference for the cluster state. If the actual state differs from what is recorded in etcd, the cluster is changed to match the declared state. For example, if two pods are supposed to run with a certain image and one of them gets deleted, Kubernetes automatically creates another to match the declared state.
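    This declarative reconciliation is easy to see with kubectl. In the sketch below (deployment name and image are examples), deleting one pod of a two-replica deployment causes the controller to recreate it so the actual state again matches the declared state:

```shell
# Declare a deployment with two replicas (nginx used as an example image)
kubectl create deployment web --image=nginx
kubectl scale deployment web --replicas=2

# List the two pods, then delete one of them by name
kubectl get pods -l app=web
kubectl delete pod <one-of-the-pod-names>

# Listing again shows two pods: a replacement was scheduled automatically
kubectl get pods -l app=web
```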



    -        Scheduler
    When a new pod is created, the scheduler determines which node should run it. This decision is based on many factors, including available hardware resources, current workloads, and affinity rules.


    -        Controller Manager
    It operates the cluster controllers:
    ·       Node Controller – Responsible for noticing and responding when nodes go down.
    ·       Replication Controller – Responsible for maintaining the correct number of pods for every replication controller object in the system.
    ·       Endpoints Controller – Populates the Endpoints objects, i.e. joins services and pods.
    ·       Service Account & Token Controllers – Create default accounts and API access tokens for new namespaces.


    Below is the Architecture diagram and you can see each component and their communication flow





    Worker node (minion) – A physical server or VM that runs applications using pods and is controlled by the master node.

    “Pods are the smallest working unit of Kubernetes, just like the container is for Docker.” In other words, Kubernetes doesn't run containers directly; instead it wraps one or more containers into a higher-level structure called a pod. Pods are also the unit of replication in Kubernetes.
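    A minimal pod manifest makes this wrapping explicit; the pod name and nginx image below are just examples:

```shell
# A minimal pod that wraps a single container
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
spec:
  containers:
  - name: web
    image: nginx:alpine
    ports:
    - containerPort: 80
EOF
```

In practice you rarely create bare pods like this; a Deployment creates and replaces them for you, as described in the etcd section above.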

    Lets check the components of Worker Nodes –

    Kube-Proxy:
    -        This runs on the nodes and provides network connectivity for services on the nodes that connect to the pods.

    -        It serves as a network proxy and a load balancer for a service on a single worker node and manages the network routing for TCP and UDP packets.

    -        Kube-proxy runs on each node to deal with individual host sub-netting and ensure that the services are available to external parties.


    Kubelet: 
    -        It is an agent that runs on each worker node and communicates with the master node. It gets the pod specifications through the API server, runs the containers associated with the pod, and ensures that the containers described in those pod specifications are running and healthy.


    Container Runtime:
    -      This is the container manager. To run and manage a container's lifecycle, we need a container runtime on the worker node. It can be any container runtime that is compliant with the Open Container Initiative (OCI), such as Docker.

    -        Sometimes Docker is also referred to as a container runtime, but to be precise, Docker is a platform that uses a container runtime under the hood.


