Azure Storage
Enable SFTP on Azure File Share using ARM Template and upload files using WinSCP
SFTP is a very widely used protocol which many organizations use today for transferring files within their organization or across organizations. Creating a VM based SFTP is costly and high-maintenance. ACI service is very inexpensive and requires very little maintenance, while data is stored in Azure Files which is a fully managed SMB service in cloud. This template demonstrates an creating a SFTP server using Azure Container Instances (ACI). The template generates two resources: storage account is the storage account used for persisting data, and contains the Azure Files share sftp-group is a container group with a mounted Azure File Share. The Azure File Share will provide persistent storage after the container is terminated. ARM Template for creation of SFTP with New Azure File Share and a new Azure Storage account Resources.json { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "metadata": { "_generator": { "name": "bicep", "version": "0.4.63.48766", "templateHash": "17013458610905703770" } }, "parameters": { "storageAccountType": { "type": "string", "defaultValue": "Standard_LRS", "metadata": { "description": "Storage account type" }, "allowedValues": [ "Standard_LRS", "Standard_ZRS", "Standard_GRS" ] }, "storageAccountPrefix": { "type": "string", "defaultValue": "sftpstg", "metadata": { "description": "Prefix for new storage account" } }, "fileShareName": { "type": "string", "defaultValue": "sftpfileshare", "metadata": { "description": "Name of file share to be created" } }, "sftpUser": { "type": "string", "defaultValue": "sftp", "metadata": { "description": "Username to use for SFTP access" } }, "sftpPassword": { "type": "securestring", "metadata": { "description": "Password to use for SFTP access" } }, "location": { "type": "string", "defaultValue": "[resourceGroup().location]", "metadata": { "description": "Primary location for resources" } }, "containerGroupDNSLabel": { "type": "string", "defaultValue": "[uniqueString(resourceGroup().id, deployment().name)]", "metadata": { "description": "DNS label for container group" } } }, "functions": [], "variables": { "sftpContainerName": "sftp", "sftpContainerGroupName": "sftp-group", "sftpContainerImage": "atmoz/sftp:debian", "sftpEnvVariable": "[format('{0}:{1}:1001', parameters('sftpUser'), parameters('sftpPassword'))]", "storageAccountName": "[take(toLower(format('{0}{1}', parameters('storageAccountPrefix'), uniqueString(resourceGroup().id))), 24)]" }, "resources": [ { "type": "Microsoft.Storage/storageAccounts", "apiVersion": "2019-06-01", "name": "[variables('storageAccountName')]", "location": "[parameters('location')]", "kind": "StorageV2", "sku": { "name": "[parameters('storageAccountType')]" } }, { "type": "Microsoft.Storage/storageAccounts/fileServices/shares", "apiVersion": "2019-06-01", "name": "[toLower(format('{0}/default/{1}', variables('storageAccountName'), parameters('fileShareName')))]", "dependsOn": [ "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]" ] }, { "type": "Microsoft.ContainerInstance/containerGroups", "apiVersion": "2019-12-01", "name": "[variables('sftpContainerGroupName')]", "location": "[parameters('location')]", "properties": { "containers": [ { "name": "[variables('sftpContainerName')]", "properties": { "image": "[variables('sftpContainerImage')]", "environmentVariables": [ { "name": "SFTP_USERS", "secureValue": "[variables('sftpEnvVariable')]" } ], "resources": { "requests": { "cpu": 1, 
"memoryInGB": 1 } }, "ports": [ { "port": 22, "protocol": "TCP" } ], "volumeMounts": [ { "mountPath": "[format('/home/{0}/upload', parameters('sftpUser'))]", "name": "sftpvolume", "readOnly": false } ] } } ], "osType": "Linux", "ipAddress": { "type": "Public", "ports": [ { "port": 22, "protocol": "TCP" } ], "dnsNameLabel": "[parameters('containerGroupDNSLabel')]" }, "restartPolicy": "OnFailure", "volumes": [ { "name": "sftpvolume", "azureFile": { "readOnly": false, "shareName": "[parameters('fileShareName')]", "storageAccountName": "[variables('storageAccountName')]", "storageAccountKey": "[listKeys(resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName')), '2019-06-01').keys[0].value]" } } ] }, "dependsOn": [ "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]" ] } ], "outputs": { "containerDNSLabel": { "type": "string", "value": "[format('{0}.{1}.azurecontainer.io', reference(resourceId('Microsoft.ContainerInstance/containerGroups', variables('sftpContainerGroupName'))).ipAddress.dnsNameLabel, reference(resourceId('Microsoft.ContainerInstance/containerGroups', variables('sftpContainerGroupName')), '2019-12-01', 'full').location)]" } } } Parameters.json { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#", "contentVersion": "1.0.0.0", "parameters": { "storageAccountType": { "value": "Standard_LRS" }, "storageAccountPrefix": { "value": "sftpstg" }, "fileShareName": { "value": "sftpfileshare" }, "sftpUser": { "value": "sftp" }, "sftpPassword": { "value": null }, "location": { "value": "[resourceGroup().location]" }, "containerGroupDNSLabel": { "value": "[uniqueString(resourceGroup().id, deployment().name)]" } } } ARM Template to Enable SFTP for an Existing Azure File Share in Azure Storage account Resources.json { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", "contentVersion": "1.0.0.0", "metadata": { "_generator": { "name": "bicep", "version": "0.4.63.48766", "templateHash": "16190402726175806996" } }, "parameters": { "existingStorageAccountResourceGroupName": { "type": "string", "metadata": { "description": "Resource group for existing storage account" } }, "existingStorageAccountName": { "type": "string", "metadata": { "description": "Name of existing storage account" } }, "existingFileShareName": { "type": "string", "metadata": { "description": "Name of existing file share to be mounted" } }, "sftpUser": { "type": "string", "defaultValue": "sftp", "metadata": { "description": "Username to use for SFTP access" } }, "sftpPassword": { "type": "securestring", "metadata": { "description": "Password to use for SFTP access" } }, "location": { "type": "string", "defaultValue": "[resourceGroup().location]", "metadata": { "description": "Primary location for resources" } }, "containerGroupDNSLabel": { "type": "string", "defaultValue": "[uniqueString(resourceGroup().id, deployment().name)]", "metadata": { "description": "DNS label for container group" } } }, "functions": [], "variables": { "sftpContainerName": "sftp", "sftpContainerGroupName": "sftp-group", "sftpContainerImage": "atmoz/sftp:debian", "sftpEnvVariable": "[format('{0}:{1}:1001', parameters('sftpUser'), parameters('sftpPassword'))]" }, "resources": [ { "type": "Microsoft.ContainerInstance/containerGroups", "apiVersion": "2019-12-01", "name": "[variables('sftpContainerGroupName')]", "location": "[parameters('location')]", "properties": { "containers": [ { "name": 
"[variables('sftpContainerName')]", "properties": { "image": "[variables('sftpContainerImage')]", "environmentVariables": [ { "name": "SFTP_USERS", "secureValue": "[variables('sftpEnvVariable')]" } ], "resources": { "requests": { "cpu": 1, "memoryInGB": 1 } }, "ports": [ { "port": 22, "protocol": "TCP" } ], "volumeMounts": [ { "mountPath": "[format('/home/{0}/upload', parameters('sftpUser'))]", "name": "sftpvolume", "readOnly": false } ] } } ], "osType": "Linux", "ipAddress": { "type": "Public", "ports": [ { "port": 22, "protocol": "TCP" } ], "dnsNameLabel": "[parameters('containerGroupDNSLabel')]" }, "restartPolicy": "OnFailure", "volumes": [ { "name": "sftpvolume", "azureFile": { "readOnly": false, "shareName": "[parameters('existingFileShareName')]", "storageAccountName": "[parameters('existingStorageAccountName')]", "storageAccountKey": "[listKeys(extensionResourceId(format('/subscriptions/{0}/resourceGroups/{1}', subscription().subscriptionId, parameters('existingStorageAccountResourceGroupName')), 'Microsoft.Storage/storageAccounts', parameters('existingStorageAccountName')), '2019-06-01').keys[0].value]" } } ] } } ], "outputs": { "containerDNSLabel": { "type": "string", "value": "[format('{0}.{1}.azurecontainer.io', reference(resourceId('Microsoft.ContainerInstance/containerGroups', variables('sftpContainerGroupName'))).ipAddress.dnsNameLabel, reference(resourceId('Microsoft.ContainerInstance/containerGroups', variables('sftpContainerGroupName')), '2019-12-01', 'full').location)]" } } } Parameters.json { "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#", "contentVersion": "1.0.0.0", "parameters": { "existingStorageAccountResourceGroupName": { "value": null }, "existingStorageAccountName": { "value": null }, "existingFileShareName": { "value": null }, "sftpUser": { "value": "sftp" }, "sftpPassword": { "value": null }, "location": { "value": "[resourceGroup().location]" }, "containerGroupDNSLabel": { "value": "[uniqueString(resourceGroup().id, deployment().name)]" } } } Deploy the ARM Templates using PowerShell or Azure CLI or Custom Template deployment using Azure Portal. Choose the subscription you want to create the sftp service in Create a new Resource Group It will automatically create a storage account Give a File Share Name Provide a SFTP user name Provide a SFTP password Wait till the deployment is done successfully Click on the container sftp-group Copy the FQDN from the container group Download WinScp from WinSCP :: Official Site :: Download Provide Hostname : FQDN for ACI; Port Number: 22; User Name and Password Click on Login 13. Drag and drop a file from the left side to the Right side. 14. Now, go to the Storage Account and Navigate to File share. The file appears on the file share.10KViews2likes5CommentsHow to configure directory level permission for SFTP local user
SFTP is a feature which is supported for Azure Blob Storage with hierarchical namespace (ADLS Gen2 Storage Account). As documented, the permission system used by SFTP feature is different from normal permission system in Azure Storage Account. It’s using a form of identity management called local users. Normally the permission which user can set up on local users while creating them is on container level. But in real user case, it’s usual that user needs to configure multiple local users, and each local user only has permission on one specific directory. In this scenario, using ACLs (Access control lists) for local users will be a great solution. In this blog, we’ll set up an environment using ACLs for local users and see how it meets the above aim. Attention! As mentioned in Caution part of the document, the ACLs for local users are supported, but also still in preview. Please do not use this for your production environment. Preparation Before configuring local users and ACLs, the following things are already prepared: One ADLS Gen2 Storage Account. (In this example, it’s called zhangjerryadlsgen2) A container (testsftp) with two directories. (dir1 and dir2) One file uploaded into each directory. (test1.txt and test2.txt) The file system in this blog is like: Aim The aim is to have user1 which can only list files saved in dir1 and user2 which can only list files saved in dir2. Both of them should be unable to do any other operations in the matching directory (dir1 for user1 and dir2 for user2) and should be unable to do any operations in root directory and the other directory. Configuring local users From Azure Portal, it’s easy to enable SFTP feature and create local users. Here except user1 and user2, another additional user is also necessary. It will be used as the administrator to assign ACLs on user1 and user2. In this blog, it’s called admin. While creating the admin, its landing directory should be the root directory of the container and the permissions should be all given. While creating the user1 and user2, as the permission will be controlled by using ACLs, the containers and permissions should be left empty and the Allow ACL authorization should be checked. The landing directory should be configured to the directory which this user should have permission later. (In this blog, user1 should be on dir1 and user2 should be on dir2.) User1: User2: After local users are created, one more step which is needed before configuring ACL is to note down the user ID of user1 and user2. By clicking the created local user, a page as following to edit local user should show out and the user ID will be included there. In this blog, the user ID of user1 is 1002 and user ID of user2 is 1003. Configuring ACLs Before starting configuring ACLs, clarifying which permissions to assign is necessary. As explained in this document, the ACLs contains three different permissions: Read(R), Write(W) and Execute(X). And from the “Common scenarios related to ACL permissions” part of the same document, there is a table which contains most operations and their corresponding required permissions. Since the aim of this blog is to allow user1 only to list the dir1, according to table, we know that correct permission for user1 should be X on root directory, R and X on dir1. (For user2, it’s X on root directory, R and X on dir2). After clarifying the needed permissions, the next step is to assign ACLs. 
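The walkthrough below performs this assignment from an admin SFTP session. If you prefer to script the root-directory part, the Az.Storage Data Lake Gen2 cmdlets can apply the same "execute for others" ACL. This is a hedged sketch, assuming the account key is available; the account and container names are the ones used in this example.

# Connect to the ADLS Gen2 account with its access key (placeholder value)
$ctx = New-AzStorageContext -StorageAccountName "zhangjerryadlsgen2" -StorageAccountKey "<account-key>"

# Build an ACL object: owner rwx, owning group r-x, other --x (execute only)
$acl = Set-AzDataLakeGen2ItemAclObject -AccessControlType user -Permission "rwx"
$acl = Set-AzDataLakeGen2ItemAclObject -AccessControlType group -Permission "r-x" -InputObject $acl
$acl = Set-AzDataLakeGen2ItemAclObject -AccessControlType other -Permission "--x" -InputObject $acl

# Apply the ACL to the root of the container used for SFTP (omitting -Path targets the root directory)
Update-AzDataLakeGen2Item -Context $ctx -FileSystem "testsftp" -Acl $acl

Changing the owner of dir1 and dir2, which the walkthrough does next from the admin SFTP session, is still required, because ACL entries cannot be assigned to a specific local user directly.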
The first step is to connect to the Storage Account using SFTP as admin: (In this blog, the PowerShell session + OpenSSL is used but it’s not the only way. Users can also use any other way to build SFTP connection to the Storage Account.) Since assigning ACLs for local users is not possible to a specific user, and the owner of root directory is a built-in user which is controlled by Azure, the easiest way here is to give X permissions to all other users. (For concept of other users, please refer to this document) Next step is to assign R and X permission. But considering the same reason, it’s impossible to give R and X permissions for all other users again. Because if it’s done, user1 will also have R and X permissions on dir2, which does not match the aim. The best way here is to change the owner of the directory. Here we should change the owner of dir1 to user1 and dir2 to user2. (By this way, user1 will not have permission to touch dir2.) After above configurations, while connecting to the Storage Account by SFTP connection using user1 and user2, only listing file operation under corresponding directory is allowed. User1: User2: (The following test result proves that only list operation under /dir2 is allowed. All other operations will return permission denied or not found error.) About landing directory What will happen if all other configurations are correct but the landing directory is configured as root directory for user1 or user2? The answer to the above question is quite simple: The configuration will still work, but will impact the user experience. To show the the result of that case, one more local user called user3 with user ID 1005 is created but its landing directory is configured as admin, which is on root directory. The ACL permission assigned on it is same as user2 (change owner of dir2 to user3.) While connecting to the Storage Account by SFTP using user3, it will be landing on root directory. But per ACLs configuration, it only has permission to list files in dir2, hence the operations in root directory and dir1 are expected to fail. To apply further operation, user needs to add dir2/ in the command or cd dir2 at first.301Views0likes0CommentsGranting List-Only permissions for users in Azure Storage Account using ABAC
In this blog, we’ll explore how to configure list-only permissions for specific users in Azure Storage, allowing them to view the structure of files and directories without accessing or downloading their contents. Granting list-only permissions to specific users for an Azure Storage container path allows them to list files and directories without reading or downloading their contents. While RBAC manages access at the container or account level, ABAC offers more granular control by leveraging attributes like resource metadata, user roles, or environmental factors, enabling customized access policies to meet specific requirements. Disclaimer: Please test this solution before implementing it for your critical data. Pre-Requisites: Azure Storage GPV2 / ADLS Gen 2 Storage account Make sure to have enough permissions(Microsoft.Authorization/roleAssignments/write permissions) to assign roles to users , such as Owner or User Access Administrator Note: If you want to grant list-only permission to a particular container, ensure that the permission is applied specifically to that container. This approach limits the scope of access to just the intended container and enhances security by minimizing unnecessary permissions. However, in this example, I am demonstrating how to implement this role for the entire storage account. This setup allows users to list files and directories across all containers within the storage account, which might be suitable for scenarios requiring broader access. Action: You can follow the steps below to create a Storage Blob Data Reader role with specific conditions using the Azure portal: Step 1: Sign-in to the Azure portal with your credentials. Go to the storage account where you could like the role to be implemented/ scoped to. Select Access Control (IAM)->Add-> Add role assignment: Step2: On the Roles tab, select (or search for) Storage Blob Data Reader and click Next. On the Members tab, select User, group, or service principal to assign the selected role to one or more Azure AD users, groups, or service principals. Click Select members. Find and select the users, groups, or service principals. You can type in the Select box to search the directory for display name or email address. Please select the user and continue with Step 3 to configure conditions. Step 3: The Storage Blob Data Reader provides access to list, read/download the blobs. However, we would need to add appropriate conditions to restrict the read/download operations. On the Conditions tab, click Add condition. The Add role assignment condition page appears: In the Add action section, click Add action. The Select an action pane appears. This pane is a filtered list of data actions based on the role assignment that will be the target of your condition. Check the box next to Read a blob, then click Select: Note: The Build Expression section is optional, so I am not using it in this case. On the Review + assign tab, click Review + assign to assign the role with the condition. After a few moments, the security principal is assigned the role. Please Note: Along with the above permission, I have given the user Reader permission at the storage account level. You could give the Reader permission at the resource level/resource group level/subscription level too. We mainly have Management Plane and Data Plane while providing permissions to the user. 
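Before expanding on the management-plane versus data-plane distinction below, note that the same conditional role assignment can also be scripted. The following is a hedged sketch using New-AzRoleAssignment; the object ID and scope are placeholders, and the condition text should be copied from the portal's condition code editor rather than written by hand.

# Scope of the assignment (placeholder values)
$scope = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>"

# Paste the exact condition string shown in the portal's condition code view
$condition = @"
<condition string copied from the portal>
"@

New-AzRoleAssignment -ObjectId "<user-object-id>" `
    -RoleDefinitionName "Storage Blob Data Reader" `
    -Scope $scope `
    -Condition $condition `
    -ConditionVersion "2.0"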
The management plane covers operations on the storage account itself, such as listing the storage accounts in a subscription or retrieving and regenerating the storage account keys. Data plane access refers to reading, writing, or deleting the data stored inside the containers. For more info, please refer to: https://docs.microsoft.com/en-us/azure/role-based-access-control/role-definitions#management-and-dat... To understand the built-in roles available for Azure resources, please refer to: https://docs.microsoft.com/en-us/azure/role-based-access-control/built-in-roles Hence, it is important that you grant at least the 'Reader' role at the management plane level to test this out in the Azure portal. Step 4: Test the condition (ensure that the authentication method is set to Azure AD User Account and not Access key). The user can list the blobs inside the container, but downloading or reading a blob fails. Related documentation: What is Azure attribute-based access control (Azure ABAC)? | Microsoft Learn Azure built-in roles - Azure RBAC | Microsoft Learn Tutorial: Add a role assignment condition to restrict access to blobs using the Azure portal - Azure ABAC - Azure Storage | Microsoft Learn Add or edit Azure role assignment conditions using the Azure portal - Azure ABAC | Microsoft Learn
How to secure the configuration file while using blobfuse2 for security compliance
Overview: The blobfuse2 functionality is used to mount Azure Storage Account as file system on Linux machine. To establish the connection with storage account via blobfuse2 and to authenticate the request against the storage account, we make use of configuration file for it. The configuration file contains the storage account details along with the container to be mounted and what mode of authentication to be used. The configuration yaml file includes parameters for blobfuse2 settings. In general, the details saved in configuration file are in plain text. Hence, if any users access the configuration file, they would be able to access the sensitive information related to the storage account, like for example, the storage account access keys and SAS token. Let’s say that, as part of security reasons, you want to safeguard the configuration file from bad actors and prevent the leak of your storage account’s sensitive details. In such scenario, you can make use of blobfuse2 secure command for it. Using blobfuse2 secure command, we can encrypt, decrypt, get or set details in the encrypted configuration file. We will be securing the configuration file using passphrase. Hence, do save the passphrase as it is needed for decrypt, get, set commands. Note: At present, the configuration file encryption is available in blobfuse2 only. Let us discuss in detail the blobfuse2 secure command and how we can mount the blobfuse2 using the encrypted config file. For holistic view regarding the blobfuse2 secure command, in this blog, we have initially mounted blobfuse2 using plain text configuration file. The blobfuse2 mount was successful and to show the contains of configuration file, we have performed “cat” command. Please do refer to the below screenshot for the same. Command used is: sudo blobfuse2 mount ~/<mountpath_name> --config-file=./config.yaml Create an encrypted configuration yaml file: Let us secure the configuration file using blobfuse2 secure encrypt command. Performing “dir” command, we can see the configuration file before and after encryption. Please refer to the screenshot below for further details. Command used is: blobfuse2 secure encrypt --config-file./config.yaml -- passphrase={passphrasesample} --output-file=./encryptedconfig.yaml Now, let us perform the blobfuse2 mount command using encrypted configuration file that we created using the above step. Refer to the screenshot below for further details. Command: sudo blobfuse2 mount ~/<mountpath_name> --config-file=./encryptedconfig.yaml --passphrase={passphrasesample} --secure-config Note: Do note that, post the configuration file is encrypted, the original configuration file is deleted. Hence, if there is any blobfuse2 mount that was done prior to the encryption of the configuration file, ensure that the blobfuse2 mount is using the correct configuration file. Fetch parameter from encrypted configuration file: Let’s say that you want to get a particular parameter from the encrypted config file. Using “cat” command, if we see the details of the config file, the encrypted data will not be readable. Hence, we need to use blobfuse2 secure get command for the it. Perform “blobfuse2 secure get” command to get the details from the encrypted config file. Please refer to the screenshot below for further details. 
Command used is: blobfuse2 secure get --config-file=./encryptedconfig.yaml --passphrase={passphrasesample} --key=file_cache.path Set parameter in encrypted configuration file: In the encrypted configuration file, if you want to set any new parameter, we can use blobfuse2 secure set command to set the details. Please refer to the screenshot below for further details. Command used is: blobfuse2 secure set --config-file=./encrytedconfig.yaml --passphrase={passphrasesample} --key=logging.log_level --value=log_debug Decrypt the configuration yaml file: Now we know how we can encrypt the configuration file, let's understand how we can use the blobfuse2 secure command to decrypt the configuration file. Please refer to the screenshot below for further details. Command used is: blobfuse2 secure decrypt --config-file=./encryptedconfig.yaml --passphrase={passphrasesample} --output-file=./decryptedconfig.yaml We can see the contents of the decrypted configuration file using “cat” command. In this way, we can secure the config file used for blobfuse2 and meet our security requirement. References: If you face any issues with blobfuse2 troubleshooting, you can refer to the blog here: How to troubleshoot blobfuse2 issues | Microsoft Community Hub For blobfuse2 secure commands, you can refer to the link here: How to use the 'blobfuse2 secure' command to encrypt, decrypt, or access settings in a BlobFuse2 configuration file - Azure Storage | Microsoft Learn Hope this article turns out helpful! Happy Learning!222Views1like0Comments- 2.2KViews0likes0Comments
Efficient Management of Append and Page Blobs Using Azure Storage Actions
Overview In Azure Storage, Blob Lifecycle Management (BLM) allows you to automate the management of your data based on rules defined by the user. Lifecycle management policies are supported for block blobs and append blobs in general-purpose v2, premium block blob, and Blob Storage accounts. However, since lifecycle management (BLM) policies are not supported for page blobs, we can effectively manage the lifecycle of page blobs and append blobs through storage tasks and actions. There are solutions like Logic Apps and Azure Functions are available to automate lifecycle management and below are reference links: Lifecycle Management, Page Blob, Azure Storage, Blob Storage (microsoft.com) Delete all the Azure Storage Blob content before N days using Logic App - Microsoft Community Hub The focus of this blog is on demonstrating how to achieve the same goal using built-in storage actions and tasks NOTE: Azure Storage Actions is currently in PREVIEW and is available these regions. Please refer: About Azure Storage Actions Preview - Azure Storage Actions Preview | Microsoft Learn By leveraging these storage actions, we can automate the retention, deletion, and archival of page blobs and append blobs based on custom-defined rules, ensuring efficient lifecycle management without relying on external services. This method provides a more direct, storage-centric approach to managing page blob lifecycles. For example, we have both page blobs and append blobs within a container, and we would like to delete them using Azure storage actions and tasks. In this article, you'll learn how to create a storage task. Create a task In the Azure portal, search for Storage Tasks. Then, under Services, select Storage tasks - Azure Storage Actions. On the Azure Storage Actions | Storage Tasks page, select Create. Basics tab On the Basics tab, provide the essential information for your storage task. Conditions tab On the Conditions tab, define the conditions that must be met by each object (container or blob), and the operations to perform on the object. You must define at least one condition and one operation. To add a clause to a condition, select Add new clause. To add operations, select Add new operation. In this scenario, we are selecting the blob types as Page Blobs and Append Blobs to perform the delete operation. Assignments tab An assignment identifies a storage account and a subset of objects in that account that the task will target. An assignment also defines when the task runs and where execution reports are stored. To add an assignment, select Add assignment. This step is optional. You don't have to add an assignment to create the task. Tags tab On the Tags tab, you can specify Resource Manager tags to help organize your Azure resources. Review + create tab When you navigate to the Review + create tab, Azure runs validation on the storage task settings that you have chosen. If validation passes, you can proceed to create the storage task. If validation fails, then the portal indicates which settings need to be modified. Once you have created the storage task then please go to the respective storage account to enable the storage task assignment. Enable the task assignment Storage task assignments are disabled by default. Enable assignments from the Assignments page. Periodically select Refresh to view an updated status. Until the task runs and then completes, the string in progress appears beneath the Last run status column. When the task completes, the string Completed appears in that column. 
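While you wait for a run to complete, you can independently preview which blobs a task with these conditions will affect. The hedged Az.Storage sketch below simply lists the page and append blobs in a container; the account, key, and container names are placeholders, and the storage task itself performs the actual deletion without any code.

# List the page and append blobs that a delete task with the above conditions would target
$ctx = New-AzStorageContext -StorageAccountName "<storage-account>" -StorageAccountKey "<account-key>"

Get-AzStorageBlob -Container "<container>" -Context $ctx |
    Where-Object { $_.BlobType -eq 'PageBlob' -or $_.BlobType -eq 'AppendBlob' } |
    Select-Object Name, BlobType, Length, LastModified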
After successfully completing the task, we observed that both the page blobs and append blobs were deleted from the container. View results of the task run After the task run completes, you can view the results of the run. Select the View report link to download a report. Useful links: Create a storage task - Azure Storage Actions Preview | Microsoft Learn https://learn.microsoft.com/en-us/azure/storage-actions/overview Define storage task conditions & operations - Azure Storage Actions Preview | Microsoft Learn
Optimizing Azure Table Storage: Automated Data Cleanup using a PowerShell script with Azure Automate
Scenario This blog’s aim is to manage Table Storage data efficiently. Imagine you have a large Azure Table Storage that accumulates logs from various applications or any unused older data. Over time, this data grows significantly, making it necessary to periodically clean up old entries to maintain performance and manage costs. You decide to automate this process using Azure Automation. However, lifecycle management policies are limited to the Blob service only. By scheduling a PowerShell script, you can efficiently delete outdated data from your Azure Table Storage without manual intervention. This approach ensures that your storage remains optimized, and your applications continue to run smoothly. Below is the PowerShell script which delete Table Entities based on Timestamp: - Connect-AzAccount -Identity $SubscriptionID = "xxxxxxxxxxxxxxxxxx" $AzureContext = Set-AzContext –SubscriptionId $SubscriptionID Update-AzConfig -DisplaySecretsWarning $false $StorageAccount = "xxxxxxxxxxxxxxxxxxx" $StorageAccountKey = "xxxxxxxxxxxxxxxxxxx" $ctx = New-AzStorageContext -StorageAccountName $StorageAccount -StorageAccountKey $StorageAccountKey $alltablename = (Get-AzStorageTable –Context $ctx).CloudTable foreach ($table in $alltablename) { $tabledata = Get-AzTableRow -Table $table -CustomFilter "Timestamp gt datetime 'YYYY-MM-DDThh:mm:ssZ' " | Remove-AzTableRow -Table $table } #DISCLAIMER #The sample scripts are not supported under any Microsoft standard support program or service. The sample scripts are provided AS IS without warranty of any kind. Microsoft further disclaims all implied warranties including, without limitation, any implied warranties of merchantability or of fitness for a particular purpose. The entire risk arising out of the use or performance of the sample scripts and documentation remains with you. In no event shall Microsoft, its authors, owners of this repository or anyone else involved in the creation, production, or delivery of the scripts be liable for any damages whatsoever (including, #without limitation, damages for loss of business profits, business interruption, loss of business information, or other pecuniary loss) arising out of the use of or inability to use the sample scripts or documentation, even if Microsoft has been advised of the possibility of such damages. Here are the steps to schedule a PowerShell script in Azure Automation :- Create an Azure Automation Account by following the link: Quickstart - Create an Azure Automation account using the portal | Microsoft Learn Add Modules to Azure Automation Account: Navigate to the created automation account page. Go to the "Modules" tab under the "Shared Resources" section and choose the "Add a module" option. You can either manually import modules from your local machine or import inbuilt modules from the gallery. In this article, we will proceed with the gallery option. Search for the Storage Modules. Add the module with recommended Runtime version. Create a PowerShell Runbook: In the Azure Portal, navigate to your Automation Account. Under "Process Automation", select "Runbooks". Click on "Create a runbook". Enter a name for the runbook, select "PowerShell" as the Runbook type, and click "Create". Once Runbook is created, in the "Edit PowerShell Runbook" page. Enter your PowerShell script and click "Publish". Schedule the Runbook: Go to the respective Runbook and choose the "Link to schedule" option. Select the "Link a schedule to your Runbook" option and choose the appropriate schedule. 
If you go ahead with the Schedule option, you can create a new schedule by specifying the name, description, start date, time, time zone, and recurrence information. Monitor the Runbook: You can monitor the runbook's execution by going to the Jobs section under Process Automation in your Automation account. Here, you can see the status of the runbook jobs, view job details, and troubleshoot any issues. Note: Using a managed identity from Microsoft Entra ID allows your runbook to securely access other Microsoft Entra-protected resources without needing to manage credentials. This identity is automatically managed by Azure, eliminating the need for you to provision or rotate secrets manually. Managed identities are the preferred authentication method for runbooks and are set as the default for your Automation account, ensuring secure and streamlined access to necessary resources. Refer to: Using a system-assigned managed identity for an Azure Automation account | Microsoft Learn These steps should help you schedule your PowerShell script in Azure Automation. If you have any more questions or need further assistance, feel free to ask! References: Perform Azure Table storage operations with PowerShell | Microsoft Learn Quickstart - Create an Azure Automation account using the portal | Microsoft Learn Using a system-assigned managed identity for an Azure Automation account | Microsoft Learn
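As an alternative to the portal scheduling steps above, the schedule can also be created and linked to the runbook with the Az.Automation cmdlets. The sketch below is illustrative only; the resource group, Automation account, and runbook names are placeholders.

# Create a daily schedule (starting one hour from now) and link it to the runbook
$rg   = "<resource-group>"
$acct = "<automation-account>"

New-AzAutomationSchedule -ResourceGroupName $rg -AutomationAccountName $acct `
    -Name "DailyTableCleanup" -StartTime (Get-Date).AddHours(1) -DayInterval 1

Register-AzAutomationScheduledRunbook -ResourceGroupName $rg -AutomationAccountName $acct `
    -RunbookName "<runbook-name>" -ScheduleName "DailyTableCleanup"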
Restoring Soft-Deleted Blobs with multithreading in Azure Storage Using C#
This blog explains how to restore soft-deleted blobs in Azure Storage with a multithreaded C# program so that the data is restored as quickly as possible. It provides a step-by-step guide to setting up and running a multithreaded console application that efficiently undeletes blobs within a specific container or directory.
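The post walks through a multithreaded C# console application. As a quick way to see the underlying operation, the same Undelete call can also be issued from PowerShell. This is a hedged, single-threaded sketch that relies on the BlobBaseClient object exposed by recent Az.Storage versions; the account and container names are placeholders.

# List soft-deleted blobs in a container and undelete them one by one
$ctx = New-AzStorageContext -StorageAccountName "<storage-account>" -StorageAccountKey "<account-key>"

Get-AzStorageBlob -Container "<container>" -Context $ctx -IncludeDeleted |
    Where-Object { $_.IsDeleted } |
    ForEach-Object {
        $_.BlobBaseClient.Undelete() | Out-Null   # restore the soft-deleted blob
        Write-Output "Restored: $($_.Name)"
    }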
Utilizing Azure Storage and Runbooks for scheduled automated backups of Azure SQL Databases
Comprehensive instructions for automating scheduled backups of an Azure SQL Database to a storage account. The approach creates an Automation account that runs a PowerShell script via a runbook; the script executes the backup operation and stores the resulting .bacpac file in a blob container.
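At the core of such a runbook is usually a single export call. Below is a hedged sketch of what that call might look like; every name and credential is a placeholder, and in a runbook it would run after Connect-AzAccount -Identity, with the SQL credential typically retrieved from an Automation credential asset rather than hard-coded.

# Export the database to a timestamped .bacpac file in a blob container
$storageUri = "https://<storage-account>.blob.core.windows.net/<container>/mydb-$(Get-Date -Format 'yyyyMMddHHmm').bacpac"

New-AzSqlDatabaseExport -ResourceGroupName "<resource-group>" `
    -ServerName "<sql-server>" `
    -DatabaseName "<database>" `
    -StorageKeyType "StorageAccessKey" `
    -StorageKey "<storage-account-key>" `
    -StorageUri $storageUri `
    -AdministratorLogin "<sql-admin>" `
    -AdministratorLoginPassword (ConvertTo-SecureString "<password>" -AsPlainText -Force)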