Azure Storage Account - Let's Understand


Storage Account

It’s a service on Azure which holds all kinds of storage data objects. You can think of it as a container into which you upload your data. Let’s check what kinds of data we have and what services a Storage Account provides to deal with them -

Blobs – This is unstructured data. We have 3 types of blobs:

·        Block blobs – made up of blocks of data that can be managed individually, up to about 4.7 TB in total. Block blobs store text and binary data.

·        Append blobs – also made up of blocks, like block blobs, but optimized for append operations. They are ideal for scenarios such as logging data from virtual machines.

·        Page blobs – for random-access files up to 8 TB in size. Page blobs store virtual hard drive (VHD) files and serve as disks for Azure virtual machines.




Files – This is Azure Files: fully managed file shares in the cloud, accessible via the industry-standard SMB protocol. Azure file shares can be mounted concurrently by cloud or on-premises deployments of Windows, Linux, and macOS.


Tables - Azure Table storage stores large amounts of structured data. The service is a NoSQL datastore that accepts authenticated calls from inside and outside the Azure cloud. Azure tables are ideal for storing structured, non-relational data.


Queue - Azure Queue storage enables storing large numbers of messages that can be accessed from anywhere via authenticated calls over HTTP or HTTPS. It is a message-queuing service that provides queue storage through a REST-based interface, within and between different applications and services.
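As a quick sketch of working with these services (assuming the Azure CLI; the account and queue names here are placeholders), creating a queue and putting a message on it looks like this:

az storage queue create --name myqueue --account-name mystorageaccount

az storage message put --queue-name myqueue --content "hello" --account-name mystorageaccount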


A Storage Account has various properties that are very important to understand from an architect's perspective –

  •  Account Kind
  •  Performance
  •  Access Tier
  •  Replication Type



Account Kind

·        General Purpose v2 – Basic storage account type for blobs, files, queues, and tables
·        General Purpose v1 – Legacy account type for blobs, files, queues, and tables
·        Blob – Legacy blob-only storage accounts. Use General Purpose v2 instead when possible
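As a quick illustration (a minimal sketch assuming the Azure CLI; the resource group, location, and account names are placeholders), a General Purpose v2 account can be created like this:

az group create --name demo-rg --location eastus

az storage account create --name mystorageaccount --resource-group demo-rg --kind StorageV2 --sku Standard_LRS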


Performance tiers

·        Standard performance – backed by magnetic drives and provides low-cost storage.
·        Premium performance – backed by SSDs, offering high performance and low latency.

Premium storage can only be used for VM disks and is suitable for database servers and other high-I/O applications.

VMs that use Premium storage for all disks qualify for the 99.9% uptime/connectivity SLA, even when running as a single instance.


Premium Storage limitations:
  •         Only VM disks can be stored
  •         Only page blobs can be created in Premium storage
  •         Only LRS replication can be configured
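For example (a sketch with placeholder names, assuming the Azure CLI), a premium SSD managed disk that could back a high-I/O VM is created like this:

az disk create --resource-group demo-rg --name data-disk --size-gb 128 --sku Premium_LRS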


Access Tier

Each access tier in Azure Storage is optimized for a particular pattern of data usage.

Hot - optimized for frequent access of objects in the storage account; accessing data is most cost-effective, while storage costs are higher. New storage accounts are created in the hot tier by default.

Cool - optimized for storing large amounts of data that is infrequently accessed and stored for at least 30 days. Storing data in the cool tier is more cost-effective, but accessing that data may be more expensive than accessing data in the hot tier.

Archive - tier is available only for individual block blobs. The archive tier is optimized for data that can tolerate several hours of retrieval latency and that will remain in the Archive tier for at least 180 days. The archive tier is the most cost-effective option for storing data. However, accessing that data is more expensive than accessing data in the hot or cool tiers.
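Moving an individual block blob between tiers is a one-liner with the Azure CLI (a sketch; the account, container, and blob names are placeholders):

az storage blob set-tier --account-name mystorageaccount --container-name logs --name app.log --tier Archive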



Replication

Replication options for a storage account include:

·        Locally-redundant storage (LRS): A simple, low-cost replication strategy. Data is replicated synchronously three times within the primary region.
·        Zone-redundant storage (ZRS): Replication for scenarios requiring high availability. Data is replicated synchronously across three Azure availability zones in the primary region.
·        Geo-redundant storage (GRS): Cross-regional replication to protect against regional outages. Data is replicated synchronously three times in the primary region, then replicated asynchronously to the secondary region. For read access to data in the secondary region, enable read-access geo-redundant storage (RA-GRS).
·        Geo-zone-redundant storage (GZRS) (preview): Replication for scenarios requiring both high availability and maximum durability. Data is replicated synchronously across three Azure availability zones in the primary region, then replicated asynchronously to the secondary region. For read access to data in the secondary region, enable read-access geo-zone-redundant storage (RA-GZRS).
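Switching between some of these options is a plain SKU update, as in the sketch below (placeholder names; note that conversions involving availability zones, such as to ZRS or GZRS, may require a migration rather than a simple update):

az storage account update --name mystorageaccount --resource-group demo-rg --sku Standard_GRS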



Storage account endpoints


A storage account provides a unique namespace in Azure for your data. Every object that you store in Azure Storage has an address that includes your unique account name. The combination of the account name and the Azure Storage service endpoint forms the endpoints for your storage account.

·        Blob storage: https://mystorageaccount.blob.core.windows.net
·        Table storage: https://mystorageaccount.table.core.windows.net
·        Queue storage: https://mystorageaccount.queue.core.windows.net
·        Azure Files: https://mystorageaccount.file.core.windows.net
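The endpoints for an existing account can be listed with the Azure CLI (a sketch; names are placeholders); a blob named hello.txt in a container named mycontainer would then be addressable at https://mystorageaccount.blob.core.windows.net/mycontainer/hello.txt:

az storage account show --name mystorageaccount --resource-group demo-rg --query primaryEndpoints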





Azure Storage provides a layered security model. This model enables you to secure and control the level of access to your storage accounts that your applications and enterprise environments demand, based on the type and subset of networks used. When network rules are configured, only applications requesting data over the specified set of networks can access a storage account. You can limit access to your storage account to requests originating from specified IP addresses, IP ranges, or a list of subnets in an Azure Virtual Network (VNet).
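As a sketch of configuring such rules with the Azure CLI (the account, resource group, and IP range are placeholders): deny access by default, then allow a specific range:

az storage account update --name mystorageaccount --resource-group demo-rg --default-action Deny

az storage account network-rule add --account-name mystorageaccount --resource-group demo-rg --ip-address 203.0.113.0/24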






Azure Storage Account - Access


Azure Storage Account – Access Security

There are 3 ways we can provide access to an Azure Storage account:
  • Access Keys
  • Account Shared Access Signatures (SAS)
  • Service Shared Access Signatures (SAS)

Storage account access keys are automatically generated when any storage account is created. In any storage account we have two 512-bit access keys, key1 and key2. Both keys give you full and complete access to the storage account.

Your storage account access keys are like a root password for your storage account.

Always be careful to protect your access keys. Use Azure Key Vault to manage and rotate your keys securely. Microsoft recommends that you regularly rotate and regenerate your access keys. You can rotate the keys without interruption to your applications.

Access keys can be regenerated or rotated, as mentioned above:
·        Regenerating an access key creates a brand-new key, and the old one is disabled immediately.
·        We have 2 keys so that we can regenerate one at a time without interrupting services, ensuring there is always at least one valid key.
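A rotation round-trip with the Azure CLI looks roughly like this (a sketch; account and resource group names are placeholders, and applications should be switched to key2 before key1 is renewed, and vice versa):

az storage account keys list --account-name mystorageaccount --resource-group demo-rg

az storage account keys renew --account-name mystorageaccount --resource-group demo-rg --key key1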





** For a single administrator it may be OK to provide access via access keys, but in an organization you should provide only the access one needs and no more. Always follow the least-privilege principle, and use SAS for that. **



To granularize access on a storage account we have SAS, which comes in two forms:
·        Account SAS – can provide access to one or more resources within a storage account.
·        Service SAS – can provide access to just one of the storage services (blob, file, queue, table).

Some important notes on SAS:

-        A SAS is a URI which can provide access to resources.
-        A SAS can include start and expiry times, permissions, and IP and protocol restrictions.
-        A SAS URI contains a signature constructed from the SAS parameters to provide authorization.

You need to click on Shared access signature under Settings and choose the settings as per your need, as shown in the snippet.
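The same can be done from the CLI. As a sketch (placeholder names and expiry; --services b limits the token to Blob storage and --permissions rl grants read and list only), an account SAS can be generated like this:

az storage account generate-sas --account-name mystorageaccount --services b --resource-types sco --permissions rl --expiry 2025-01-01T00:00Z --https-only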








Rename - Azure VM

I am writing this post after an issue faced during a DR set-up, and now you might be wondering how a DR set-up is connected to the caption. Well, it is, and let me explain how.

During the DR set-up, ASR was showing the VM for enabling replication, but it was grayed out and could not be chosen or selected. I followed my own article on this scenario but was still unable to fix it, so I dug further and found that all the VMs were created from a disk except this one, which was created from an image. Below is my post on a similar issue -

https://pachehra.blogspot.com/2019/08/vm-migration-between-regions-via-asr.html


All the VMs where we could enable replication without issue showed the source as Disk, because they were migrated from on-prem via ASR. Only this VM showed Image as its source, and it was grayed out during the DR set-up.

Next I tried to redeploy the VM just to be sure, as I had already followed the steps mentioned in my article above. This did not fix the issue, but it did give me a well-defined error, which helped me figure out why the source was an image.



Hence my question to the tech who had worked on the failover during migration, regarding the source being Image: I knew from the error that this VM had not been generalized but was created from a captured image of a running VM. My only motive was to find out why he needed to go through capturing an image and re-creating the VM.


Guess what the answer was -

Maybe you guessed right. He said he needed to change the name of the VM, as during migration he had forgotten to change it, and I was like - you don't need to capture an image to change the name. Hence the caption, and a better way to do it.

The best way to change the name is to delete the VM and re-create it from the left-over disk; you do have the option to create a VM from a disk. Easy-peasy.




So the essence of the story: the best way to change an Azure VM's name is to delete the existing VM and create a new VM from the left-over disk, as shown in the snippet.
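In Azure CLI terms the flow looks roughly like this (a sketch; the VM, disk, and resource group names are placeholders, and deleting a VM leaves its managed disks behind by default):

az vm delete --name old-vm --resource-group demo-rg --yes

az vm create --name new-vm --resource-group demo-rg --attach-os-disk old-vm_OsDisk --os-type windows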


Kubernetes - Imperative Commands with kubectl

This post is more like a quick summary or guide for the imperative commands that will help you in many ways, especially during the certification, so without wasting time let's dive into it.


POD
Create an NGINX Pod: this will create the NGINX pod -

kubectl run --generator=run-pod/v1 nginx --image=nginx





Generate POD manifest YAML file (-o yaml). Don't create it (--dry-run)

kubectl run --generator=run-pod/v1 nginx --image=nginx --dry-run -o yaml

 


Deployment 

Create a deployment

kubectl run --generator=deployment/v1beta1 nginx --image=nginx

Or the newer recommended way: 

kubectl create deployment --image=nginx nginx

 

Generate Deployment YAML file (-o yaml). Don't create it (--dry-run)

kubectl run --generator=deployment/v1beta1 nginx --image=nginx --dry-run -o yaml

Or

kubectl create deployment --image=nginx nginx --dry-run -o yaml

 

Generate Deployment YAML file (-o yaml). Don't create it (--dry-run), with 4 replicas (--replicas=4)

kubectl run --generator=deployment/v1beta1 nginx --image=nginx --dry-run --replicas=4 -o yaml

kubectl create deployment does not have a --replicas option. You could first create it and then scale it using the kubectl scale command.
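For example, to end up with the same 4 replicas:

kubectl create deployment nginx --image=nginx

kubectl scale deployment nginx --replicas=4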

 

Save it to a file - (If you need to modify or add some other details)

kubectl run --generator=deployment/v1beta1 nginx --image=nginx --dry-run --replicas=4 -o yaml > nginx-deployment.yaml

 

Service

Create a Service named redis-service of type ClusterIP to expose pod redis on port 6379

kubectl expose pod redis --port=6379 --name redis-service --dry-run -o yaml

(This will automatically use the pod's labels as selectors)



Create a Service named nginx of type NodePort to expose pod nginx's port 80 on port 30080 on the nodes:

kubectl expose pod nginx --port=80 --name nginx-service --dry-run -o yaml

(This will automatically use the pod's labels as selectors, but you cannot specify the node port. You have to generate a definition file and then add the node port in manually before creating the service with the pod.)

Or

kubectl create service nodeport nginx --tcp=80:80 --node-port=30080 --dry-run -o yaml

(This will not use the pod's labels as selectors)

Both the above commands have their own challenges. While one of them cannot accept a selector, the other cannot accept a node port. I would recommend going with the `kubectl expose` command. If you need to specify a node port, generate a definition file using the same command and manually input the node port before creating the service, as sketched below.
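A rough version of that workflow:

kubectl expose pod nginx --port=80 --name nginx-service --dry-run -o yaml > nginx-service.yaml

(Edit nginx-service.yaml: set the service type to NodePort and add nodePort: 30080 under the ports entry.)

kubectl create -f nginx-service.yaml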

 

Reference:

https://kubernetes.io/docs/reference/kubectl/conventions/



--dry-run: By default, as soon as the command is run, the resource will be created. If you simply want to test your command, use the --dry-run option. This will not create the resource; instead, it tells you whether the resource can be created and if your command is right.
-o yaml: This will output the resource definition in YAML format on screen.

 
