Azure Storage
Securing VNet-Integrated Azure Functions with Blob Triggers: Private Endpoints and No Public Access
Azure Blob Trigger in Azure Functions enables automatic function invocation when blobs are added or changed in Blob Storage, streamlining serverless integration with cloud storage. To ensure reliability, it handles failures with poison-blob queues and configurable retry mechanisms.

Enable SFTP on Azure File Share using ARM Template and upload files using WinSCP
SFTP is a widely used protocol that many organizations rely on today for transferring files within and across organizations. Running a VM-based SFTP server is costly and high-maintenance. The Azure Container Instances (ACI) service is inexpensive and requires very little maintenance, while the data is stored in Azure Files, a fully managed SMB service in the cloud. This template demonstrates creating an SFTP server using ACI. The template generates two resources:

- storage account: the storage account used for persisting data; it contains the Azure Files share
- sftp-group: a container group with a mounted Azure file share, which provides persistent storage after the container is terminated

ARM Template for creation of SFTP with a new Azure File Share and a new Azure Storage account

Resources.json

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "metadata": {
    "_generator": {
      "name": "bicep",
      "version": "0.4.63.48766",
      "templateHash": "17013458610905703770"
    }
  },
  "parameters": {
    "storageAccountType": {
      "type": "string",
      "defaultValue": "Standard_LRS",
      "metadata": { "description": "Storage account type" },
      "allowedValues": [ "Standard_LRS", "Standard_ZRS", "Standard_GRS" ]
    },
    "storageAccountPrefix": {
      "type": "string",
      "defaultValue": "sftpstg",
      "metadata": { "description": "Prefix for new storage account" }
    },
    "fileShareName": {
      "type": "string",
      "defaultValue": "sftpfileshare",
      "metadata": { "description": "Name of file share to be created" }
    },
    "sftpUser": {
      "type": "string",
      "defaultValue": "sftp",
      "metadata": { "description": "Username to use for SFTP access" }
    },
    "sftpPassword": {
      "type": "securestring",
      "metadata": { "description": "Password to use for SFTP access" }
    },
    "location": {
      "type": "string",
      "defaultValue": "[resourceGroup().location]",
      "metadata": { "description": "Primary location for resources" }
    },
    "containerGroupDNSLabel": {
      "type": "string",
      "defaultValue": "[uniqueString(resourceGroup().id, deployment().name)]",
      "metadata": { "description": "DNS label for container group" }
    }
  },
  "functions": [],
  "variables": {
    "sftpContainerName": "sftp",
    "sftpContainerGroupName": "sftp-group",
    "sftpContainerImage": "atmoz/sftp:debian",
    "sftpEnvVariable": "[format('{0}:{1}:1001', parameters('sftpUser'), parameters('sftpPassword'))]",
    "storageAccountName": "[take(toLower(format('{0}{1}', parameters('storageAccountPrefix'), uniqueString(resourceGroup().id))), 24)]"
  },
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2019-06-01",
      "name": "[variables('storageAccountName')]",
      "location": "[parameters('location')]",
      "kind": "StorageV2",
      "sku": { "name": "[parameters('storageAccountType')]" }
    },
    {
      "type": "Microsoft.Storage/storageAccounts/fileServices/shares",
      "apiVersion": "2019-06-01",
      "name": "[toLower(format('{0}/default/{1}', variables('storageAccountName'), parameters('fileShareName')))]",
      "dependsOn": [
        "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]"
      ]
    },
    {
      "type": "Microsoft.ContainerInstance/containerGroups",
      "apiVersion": "2019-12-01",
      "name": "[variables('sftpContainerGroupName')]",
      "location": "[parameters('location')]",
      "properties": {
        "containers": [
          {
            "name": "[variables('sftpContainerName')]",
            "properties": {
              "image": "[variables('sftpContainerImage')]",
              "environmentVariables": [
                {
                  "name": "SFTP_USERS",
                  "secureValue": "[variables('sftpEnvVariable')]"
                }
              ],
              "resources": {
                "requests": { "cpu": 1, "memoryInGB": 1 }
              },
              "ports": [ { "port": 22, "protocol": "TCP" } ],
              "volumeMounts": [
                {
                  "mountPath": "[format('/home/{0}/upload', parameters('sftpUser'))]",
                  "name": "sftpvolume",
                  "readOnly": false
                }
              ]
            }
          }
        ],
        "osType": "Linux",
        "ipAddress": {
          "type": "Public",
          "ports": [ { "port": 22, "protocol": "TCP" } ],
          "dnsNameLabel": "[parameters('containerGroupDNSLabel')]"
        },
        "restartPolicy": "OnFailure",
        "volumes": [
          {
            "name": "sftpvolume",
            "azureFile": {
              "readOnly": false,
              "shareName": "[parameters('fileShareName')]",
              "storageAccountName": "[variables('storageAccountName')]",
              "storageAccountKey": "[listKeys(resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName')), '2019-06-01').keys[0].value]"
            }
          }
        ]
      },
      "dependsOn": [
        "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]"
      ]
    }
  ],
  "outputs": {
    "containerDNSLabel": {
      "type": "string",
      "value": "[format('{0}.{1}.azurecontainer.io', reference(resourceId('Microsoft.ContainerInstance/containerGroups', variables('sftpContainerGroupName'))).ipAddress.dnsNameLabel, reference(resourceId('Microsoft.ContainerInstance/containerGroups', variables('sftpContainerGroupName')), '2019-12-01', 'full').location)]"
    }
  }
}
```

Parameters.json

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "storageAccountType": { "value": "Standard_LRS" },
    "storageAccountPrefix": { "value": "sftpstg" },
    "fileShareName": { "value": "sftpfileshare" },
    "sftpUser": { "value": "sftp" },
    "sftpPassword": { "value": null },
    "location": { "value": "[resourceGroup().location]" },
    "containerGroupDNSLabel": { "value": "[uniqueString(resourceGroup().id, deployment().name)]" }
  }
}
```

ARM Template to Enable SFTP for an Existing Azure File Share in an Azure Storage account

Resources.json

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "metadata": {
    "_generator": {
      "name": "bicep",
      "version": "0.4.63.48766",
      "templateHash": "16190402726175806996"
    }
  },
  "parameters": {
    "existingStorageAccountResourceGroupName": {
      "type": "string",
      "metadata": { "description": "Resource group for existing storage account" }
    },
    "existingStorageAccountName": {
      "type": "string",
      "metadata": { "description": "Name of existing storage account" }
    },
    "existingFileShareName": {
      "type": "string",
      "metadata": { "description": "Name of existing file share to be mounted" }
    },
    "sftpUser": {
      "type": "string",
      "defaultValue": "sftp",
      "metadata": { "description": "Username to use for SFTP access" }
    },
    "sftpPassword": {
      "type": "securestring",
      "metadata": { "description": "Password to use for SFTP access" }
    },
    "location": {
      "type": "string",
      "defaultValue": "[resourceGroup().location]",
      "metadata": { "description": "Primary location for resources" }
    },
    "containerGroupDNSLabel": {
      "type": "string",
      "defaultValue": "[uniqueString(resourceGroup().id, deployment().name)]",
      "metadata": { "description": "DNS label for container group" }
    }
  },
  "functions": [],
  "variables": {
    "sftpContainerName": "sftp",
    "sftpContainerGroupName": "sftp-group",
    "sftpContainerImage": "atmoz/sftp:debian",
    "sftpEnvVariable": "[format('{0}:{1}:1001', parameters('sftpUser'), parameters('sftpPassword'))]"
  },
  "resources": [
    {
      "type": "Microsoft.ContainerInstance/containerGroups",
      "apiVersion": "2019-12-01",
      "name": "[variables('sftpContainerGroupName')]",
      "location": "[parameters('location')]",
      "properties": {
        "containers": [
          {
            "name": "[variables('sftpContainerName')]",
            "properties": {
              "image": "[variables('sftpContainerImage')]",
              "environmentVariables": [
                {
                  "name": "SFTP_USERS",
                  "secureValue": "[variables('sftpEnvVariable')]"
                }
              ],
              "resources": {
                "requests": { "cpu": 1, "memoryInGB": 1 }
              },
              "ports": [ { "port": 22, "protocol": "TCP" } ],
              "volumeMounts": [
                {
                  "mountPath": "[format('/home/{0}/upload', parameters('sftpUser'))]",
                  "name": "sftpvolume",
                  "readOnly": false
                }
              ]
            }
          }
        ],
        "osType": "Linux",
        "ipAddress": {
          "type": "Public",
          "ports": [ { "port": 22, "protocol": "TCP" } ],
          "dnsNameLabel": "[parameters('containerGroupDNSLabel')]"
        },
        "restartPolicy": "OnFailure",
        "volumes": [
          {
            "name": "sftpvolume",
            "azureFile": {
              "readOnly": false,
              "shareName": "[parameters('existingFileShareName')]",
              "storageAccountName": "[parameters('existingStorageAccountName')]",
              "storageAccountKey": "[listKeys(extensionResourceId(format('/subscriptions/{0}/resourceGroups/{1}', subscription().subscriptionId, parameters('existingStorageAccountResourceGroupName')), 'Microsoft.Storage/storageAccounts', parameters('existingStorageAccountName')), '2019-06-01').keys[0].value]"
            }
          }
        ]
      }
    }
  ],
  "outputs": {
    "containerDNSLabel": {
      "type": "string",
      "value": "[format('{0}.{1}.azurecontainer.io', reference(resourceId('Microsoft.ContainerInstance/containerGroups', variables('sftpContainerGroupName'))).ipAddress.dnsNameLabel, reference(resourceId('Microsoft.ContainerInstance/containerGroups', variables('sftpContainerGroupName')), '2019-12-01', 'full').location)]"
    }
  }
}
```

Parameters.json

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "existingStorageAccountResourceGroupName": { "value": null },
    "existingStorageAccountName": { "value": null },
    "existingFileShareName": { "value": null },
    "sftpUser": { "value": "sftp" },
    "sftpPassword": { "value": null },
    "location": { "value": "[resourceGroup().location]" },
    "containerGroupDNSLabel": { "value": "[uniqueString(resourceGroup().id, deployment().name)]" }
  }
}
```

Deploy the ARM templates using PowerShell, the Azure CLI, or a Custom template deployment in the Azure portal (a command-line sketch follows the steps below):

1. Choose the subscription you want to create the SFTP service in.
2. Create a new resource group.
3. A storage account is created automatically.
4. Give a file share name.
5. Provide an SFTP user name.
6. Provide an SFTP password.
7. Wait until the deployment completes successfully.
8. Click on the container group sftp-group.
9. Copy the FQDN from the container group.
10. Download WinSCP from the official site (WinSCP :: Official Site :: Download).
11. Provide Hostname: the FQDN of the ACI; Port Number: 22; and the user name and password.
12. Click on Login.
13. Drag and drop a file from the left side to the right side.
14. Now go to the storage account and navigate to the file share. The file appears on the file share.
Set Up Endpoint DLP Evidence Collection on your Azure Blob Storage

Endpoint Data Loss Prevention (Endpoint DLP) is part of the Microsoft Purview Data Loss Prevention (DLP) suite of features you can use to discover and protect sensitive items across Microsoft 365 services. Microsoft Endpoint DLP allows you to detect and protect sensitive content across onboarded Windows 10, Windows 11, and macOS devices. Learn more about all of Microsoft's DLP offerings. Before you start setting up the storage, review Get started with collecting files that match data loss prevention policies from devices | Microsoft Learn to understand the licensing, permissions, device onboarding, and other requirements.

Prerequisites

Before you begin, ensure the following prerequisites are met:

- You have an active Azure subscription.
- You have the necessary permissions to create and configure resources in Azure.
- You have set up an endpoint Data Loss Prevention policy on your devices.

Configure the Azure Blob Storage

You can follow these steps to create an Azure Blob Storage container using the Azure portal. For other methods, refer to Create a storage account - Azure Storage | Microsoft Learn.

1. Sign in to Azure Storage Accounts with your account credentials.
2. Click + Create.
3. On the Basics tab, provide the essential information for your storage account. After you complete the Basics tab, you can further customize your new storage account, or accept the default options and proceed. Learn more about Azure storage account properties.
4. Once you have provided all the information, click the Networking tab. For network access, select Enable public access from all networks while creating the storage account.
5. Click Review + create to validate the settings. Once validation passes, click Create to create the storage account.
6. Wait for the deployment of the resource to complete, then click Go to resource.
7. In the newly created storage account, in the left panel click Data Storage -> Containers.
8. Click + Container. Provide the name and other details, then click Create.
9. Once your container is successfully created, click on it.

Assign relevant permissions to the Azure Blob Storage

Once the container is created, using Microsoft Entra authorization, you must configure two sets of permissions (role groups) on it:

- One for the administrators and investigators, so they can view and manage evidence
- One for users who need to upload items to Azure from their devices

Best practice is to enforce least privilege for all users, regardless of role. By enforcing least privilege, you ensure that user permissions are limited to only those necessary for their role. We will use the portal to create these custom roles. Learn more about custom roles in Azure RBAC.

1. Open the container and in the left panel click Access Control (IAM).
2. Click the Roles tab. It opens a list of all available roles.
3. Open the context menu of the Owner role using the ellipsis button (…) and click Clone. Now you can create a custom role.
4. Click Start from scratch. We have to create two new custom roles. Based on the role you are creating, enter basic details like name and description, then click the JSON tab.
5. The JSON tab shows the details of the custom role, including the permissions added to that role.
6. Now edit these permissions and replace them with the permissions required for the role:
   - Investigator role: copy the permissions available at Permissions on Azure blob for administrators and investigators and paste them into the JSON section.
   - User role: copy the permissions available at Permissions on Azure blob for users and paste them into the JSON section.

Once you have created these two new roles, assign them to the relevant users:

1. Click the Role Assignments tab, then Add + and Add role assignment.
2. Search for the role and click on it. Then click the Members tab.
3. Click + Select Members. Add the users or user groups you want for that role and click Select.
   - Investigator role: assign this role to administrators and investigators so they can view and manage evidence.
   - User role: assign this role to users who are in scope of the DLP policy and from whose devices items will be uploaded to the storage.
4. Once you have added the users, click Review + assign to save the changes.

Now we can add this storage to the DLP policy. (A command-line sketch of this role setup appears at the end of this article.) For more information on configuring Azure Blob Storage access, refer to these articles: How to authorize access to blob data in the Azure portal; Assign share-level permissions.

Configure storage in your DLP policy

Once you have configured the required permissions on the Azure Blob Storage, add the storage to the DLP endpoint settings. Learn more about configuring a DLP policy.

1. Open the storage account you want to use. In the left panel click Data Storage -> Containers, then select the container you want to add to the DLP settings.
2. Click the context menu (… button) and then Container Properties. Copy the URL.
3. Open the Data Loss Prevention settings. Click Endpoint Settings and then Setup evidence collection for file activities on devices.
4. Select the Customer Managed Storage option and then click Add Storage.
5. Give the storage name and paste the container URL you copied, then click Save. The storage is added to the list for use in policy configuration. You can add up to 10 URLs.
6. Now open the DLP endpoint policy configuration for which you want to collect evidence, and configure the policy using these settings:
   - Make sure that Devices is selected in the location.
   - In Incident reports, toggle Send an alert to admins when a rule match occurs to On.
   - In Incident reports, select Collect original file as evidence for all selected file activities on Endpoint.
   - Select the storage account you want to collect the evidence in for that rule using the dropdown menu. The dropdown shows the list of storages configured in the endpoint DLP settings.
   - Select the activities for which you want to copy matched items to Azure storage.
   - Save the changes.

Please reach out to the support team if you face any issues. We hope this guide is helpful and we look forward to your feedback.

Thank you,
Microsoft Purview Data Loss Prevention Team
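For teams that script their setup, the custom-role creation and assignment described above can also be done with the Azure CLI. This is a minimal sketch, assuming you have saved a role definition JSON filled with the permissions from the Purview documentation; the role name, file name, account, and scope are all illustrative:

```bash
# Create the custom role from a JSON definition (permissions copied from the
# Purview docs for administrators and investigators)
az role definition create --role-definition @investigator-role.json

# Assign it on the storage account that holds the evidence container
az role assignment create \
  --role "Endpoint DLP Investigator" \
  --assignee "investigator@contoso.com" \
  --scope "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>"
```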
Purview Webinars

Register for all webinars here 🔗 Upcoming Microsoft Purview Webinars

- MAR 12 (8:00 AM) Microsoft Purview | Microsoft Purview AMA - Data Security, Compliance, and Governance
- MAR 18 (8:00 AM) Microsoft Purview | Microsoft Teams and Purview Information Protection: Inheriting Sensitivity Labels from Shared Files to Teams Meetings. Microsoft Purview Information Protection now supports label policy settings to apply inheritance from shared files to meetings. This enhances protection in Teams when sensitive files are shared in Teams chat or live-shared during a meeting.
- MAR 19 (8:00 AM) Microsoft Purview | Unlocking the Power of Microsoft Purview for ChatGPT Enterprise. Join us for an exciting presentation where we unveil the seamless integration between Microsoft Purview and ChatGPT Enterprise. Discover how you can effortlessly set up and integrate these powerful tools to ensure that interactions are securely captured, meet regulatory requirements, and manage data effectively. Don't miss out on this opportunity to learn about the future of intelligent data management and AI-driven insights!

2025 Past Recordings

- JAN 8 - Microsoft Purview AMA | Blog Post

📺 Subscribe to our Microsoft Security Community YouTube channel for ALL Microsoft Security webinar recordings, and more!
How to configure directory level permission for SFTP local user

SFTP is a feature supported for Azure Blob Storage with a hierarchical namespace (ADLS Gen2 storage accounts). As documented, the permission system used by the SFTP feature is different from the normal permission system in an Azure storage account: it uses a form of identity management called local users. Normally, the permissions you can set up for a local user while creating it apply at the container level. In real use cases, however, it is common to need multiple local users, each with permission on only one specific directory. In this scenario, using ACLs (access control lists) for local users is a great solution. In this blog, we'll set up an environment using ACLs for local users and see how it meets this aim.

Attention! As mentioned in the Caution section of the documentation, ACLs for local users are supported but still in preview. Please do not use this for your production environment.

Preparation

Before configuring local users and ACLs, the following things are already prepared:

- One ADLS Gen2 storage account (in this example, called zhangjerryadlsgen2)
- A container (testsftp) with two directories (dir1 and dir2)
- One file uploaded into each directory (test1.txt and test2.txt)

The file system in this blog looks like this:

Aim

The aim is to have user1, which can only list files saved in dir1, and user2, which can only list files saved in dir2. Both should be unable to do any other operations in their matching directory (dir1 for user1 and dir2 for user2) and unable to do any operations in the root directory or the other directory.

Configuring local users

From the Azure portal, it's easy to enable the SFTP feature and create local users. Besides user1 and user2, one additional user is needed: it will be used as the administrator to assign ACLs for user1 and user2. In this blog, it's called admin. While creating the admin, its landing directory should be the root directory of the container and all permissions should be granted. While creating user1 and user2, since their permissions will be controlled using ACLs, the containers and permissions should be left empty and Allow ACL authorization should be checked. The landing directory should be set to the directory each user will have permission on later (in this blog, dir1 for user1 and dir2 for user2).

After the local users are created, one more step before configuring ACLs is to note down the user IDs of user1 and user2. Clicking a created local user opens an edit page that includes the user ID. In this blog, the user ID of user1 is 1002 and the user ID of user2 is 1003.

Configuring ACLs

Before assigning ACLs, it's necessary to clarify which permissions to assign. As explained in this document, ACLs contain three different permissions: Read (R), Write (W), and Execute (X). The "Common scenarios related to ACL permissions" section of the same document contains a table of common operations and their required permissions. Since the aim of this blog is to allow user1 only to list dir1, according to the table, the correct permissions for user1 are X on the root directory plus R and X on dir1 (for user2: X on the root directory plus R and X on dir2). After clarifying the needed permissions, the next step is to assign the ACLs.
The first step is to connect to the storage account over SFTP as admin. (In this blog, a PowerShell session with OpenSSH is used, but it's not the only way; you can use any other SFTP client to connect to the storage account.)

Since ACLs cannot be granted to one specific named local user, and the owner of the root directory is a built-in user controlled by Azure, the easiest way to grant X on the root directory is to give X to all "other" users. (For the concept of other users, please refer to this document.)

The next step is to assign the R and X permissions. For the same reason, giving R and X to all other users is not an option: if we did, user1 would also have R and X permissions on dir2, which does not match the aim. The best approach here is to change the owner of each directory: the owner of dir1 becomes user1 and the owner of dir2 becomes user2. (This way, user1 has no permission to touch dir2.)

After the above configuration, when connecting to the storage account over SFTP as user1 or user2, only the list operation under the corresponding directory is allowed.

User1:

User2: (The test results show that only the list operation under /dir2 is allowed. All other operations return a permission denied or not found error.)

About the landing directory

What happens if all other configuration is correct but the landing directory is set to the root directory for user1 or user2? The answer is quite simple: the configuration still works, but it impacts the user experience. To show the result of that case, one more local user called user3, with user ID 1005, is created, but its landing directory is configured like admin's, on the root directory. The ACL permissions assigned are the same as user2's (the owner of dir2 is changed to user3).

When connecting to the storage account over SFTP as user3, the session lands in the root directory. Per the ACL configuration, user3 only has permission to list files in dir2, so operations in the root directory and dir1 are expected to fail. To perform any allowed operation, the user has to include dir2/ in the command or first run cd dir2.
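For illustration, the ACL changes described above can be made directly from an OpenSSH sftp session connected as admin. This is a minimal sketch assuming the account name and UIDs from this example; the connection username format and the exact mode bits are assumptions to adapt to your environment:

```bash
# Connect as the admin local user (username format assumed:
# <account>.<localuser>@<account>.blob.core.windows.net)
sftp zhangjerryadlsgen2.admin@zhangjerryadlsgen2.blob.core.windows.net

# Then, inside the interactive session:
sftp> chmod 711 /        # grant execute (X) to other users on the root directory
sftp> chown 1002 /dir1   # make user1 (UID 1002) the owner of dir1
sftp> chown 1003 /dir2   # make user2 (UID 1003) the owner of dir2
```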
Build 2023 recap and deep dive on jobs | Azure Container Apps Community Standup

Join the Azure Container Apps team for the latest product news, community highlights, demos, and Q&A. This month we'll go through all the Container Apps announcements from Microsoft Build and deep dive into one of the latest features — jobs. Featuring Anthony Chu (@nthonyChu) and the Azure Container Apps team. #serverless #containers
Granting List-Only permissions for users in Azure Storage Account using ABAC

In this blog, we'll explore how to configure list-only permissions for specific users in Azure Storage, allowing them to view the structure of files and directories without accessing or downloading their contents. Granting list-only permissions on an Azure Storage container path lets users list files and directories without reading or downloading their contents. While RBAC manages access at the container or account level, ABAC offers more granular control by leveraging attributes like resource metadata, user roles, or environmental factors, enabling customized access policies that meet specific requirements.

Disclaimer: Please test this solution before implementing it for your critical data.

Prerequisites:

- Azure Storage GPv2 / ADLS Gen2 storage account
- Sufficient permissions (Microsoft.Authorization/roleAssignments/write) to assign roles to users, such as Owner or User Access Administrator

Note: If you want to grant list-only permission on a particular container, ensure that the permission is applied specifically to that container. This approach limits the scope of access to just the intended container and enhances security by minimizing unnecessary permissions. In this example, however, the role is implemented for the entire storage account. This setup allows users to list files and directories across all containers within the storage account, which might suit scenarios requiring broader access.

Action: Follow the steps below to create a Storage Blob Data Reader role assignment with specific conditions using the Azure portal.

Step 1: Sign in to the Azure portal with your credentials. Go to the storage account where you would like the role to be implemented/scoped to. Select Access Control (IAM) -> Add -> Add role assignment.

Step 2: On the Roles tab, select (or search for) Storage Blob Data Reader and click Next. On the Members tab, select User, group, or service principal to assign the selected role to one or more Azure AD users, groups, or service principals. Click Select members. Find and select the users, groups, or service principals; you can type in the Select box to search the directory by display name or email address. Select the user and continue with Step 3 to configure conditions.

Step 3: The Storage Blob Data Reader role provides access to list and read/download blobs, so we need to add appropriate conditions to restrict the read/download operations. On the Conditions tab, click Add condition. The Add role assignment condition page appears. In the Add action section, click Add action. The Select an action pane appears; this pane is a filtered list of data actions based on the role assignment that will be the target of your condition. Check the box next to Read a blob, then click Select.

Note: The Build Expression section is optional, so it is not used in this case. On the Review + assign tab, click Review + assign to assign the role with the condition. After a few moments, the security principal is assigned the role.

Please note: Along with the above permission, the user was given Reader permission at the storage account level. You could give the Reader permission at the resource level, resource group level, or subscription level too. There are two permission surfaces to consider when granting access: the management plane and the data plane.
The management plane consists of operations related to the storage account itself, such as getting the list of storage accounts in a subscription or retrieving and regenerating the storage account keys. Data plane access refers to access to read, write, or delete data present inside the containers. For more info, please refer to: https://docs.microsoft.com/en-us/azure/role-based-access-control/role-definitions#management-and-dat...

To understand the built-in roles available for Azure resources, please refer to: https://docs.microsoft.com/en-us/azure/role-based-access-control/built-in-roles

Hence, it is important that you give at minimum the Reader role at the management plane level to test this out in the Azure portal.

Step 4: Test the condition (ensure that the authentication method is set to Azure AD User Account and not Access key). The user can list the blobs inside the container, but downloading or reading a blob fails.

Related documentation:

- What is Azure attribute-based access control (Azure ABAC)? | Microsoft Learn
- Azure built-in roles - Azure RBAC | Microsoft Learn
- Tutorial: Add a role assignment condition to restrict access to blobs using the Azure portal - Azure ABAC - Azure Storage | Microsoft Learn
- Add or edit Azure role assignment conditions using the Azure portal - Azure ABAC | Microsoft Learn
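For reference, a role assignment with this kind of condition can also be created from the Azure CLI. This is a hedged sketch of the portal steps above: the assignee, scope, and exact condition string are illustrative (the portal's condition editor generates the condition text for you):

```bash
# Assign Storage Blob Data Reader with an ABAC condition that excludes the
# "Read a blob" data action, leaving list operations permitted
az role assignment create \
  --role "Storage Blob Data Reader" \
  --assignee "user@contoso.com" \
  --scope "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>" \
  --condition "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'})))" \
  --condition-version "2.0"
```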
How to secure the configuration file while using blobfuse2 for security compliance

Overview: Blobfuse2 is used to mount an Azure storage account as a file system on a Linux machine. To establish the connection with the storage account via blobfuse2, and to authenticate requests against it, we use a configuration file. The configuration file contains the storage account details, the container to be mounted, and the mode of authentication to be used. This YAML file holds the parameters for the blobfuse2 settings. In general, the details saved in the configuration file are in plain text, so any user who can access the file can read sensitive information about the storage account, for example the storage account access keys or a SAS token.

Let's say that, for security reasons, you want to safeguard the configuration file from bad actors and prevent the leak of your storage account's sensitive details. In such a scenario, you can use the blobfuse2 secure command. Using blobfuse2 secure, we can encrypt, decrypt, get, or set details in the encrypted configuration file. We will secure the configuration file using a passphrase, so do save the passphrase: it is needed for the decrypt, get, and set commands.

Note: At present, configuration file encryption is available in blobfuse2 only.

Let us discuss the blobfuse2 secure command in detail and see how to mount blobfuse2 using the encrypted config file. For a holistic view of the blobfuse2 secure command, this blog initially mounts blobfuse2 using the plain-text configuration file. The blobfuse2 mount succeeds, and running "cat" shows the contents of the configuration file. Please refer to the screenshot below for the same.

Command used: sudo blobfuse2 mount ~/<mountpath_name> --config-file=./config.yaml

Create an encrypted configuration yaml file:

Let us secure the configuration file using the blobfuse2 secure encrypt command. Running "dir", we can see the configuration file before and after encryption. Please refer to the screenshot below for further details.

Command used: blobfuse2 secure encrypt --config-file=./config.yaml --passphrase={passphrasesample} --output-file=./encryptedconfig.yaml

Now, let us run the blobfuse2 mount command using the encrypted configuration file created in the step above. Refer to the screenshot below for further details.

Command: sudo blobfuse2 mount ~/<mountpath_name> --config-file=./encryptedconfig.yaml --passphrase={passphrasesample} --secure-config

Note: Once the configuration file is encrypted, the original configuration file is deleted. Hence, if any blobfuse2 mount was done prior to the encryption of the configuration file, ensure that that mount is using the correct configuration file.

Fetch a parameter from the encrypted configuration file:

Let's say you want to read a particular parameter from the encrypted config file. If we view the file with "cat", the encrypted data is not readable, so we need the blobfuse2 secure get command. Run "blobfuse2 secure get" to fetch the details from the encrypted config file. Please refer to the screenshot below for further details.
Command used: blobfuse2 secure get --config-file=./encryptedconfig.yaml --passphrase={passphrasesample} --key=file_cache.path

Set a parameter in the encrypted configuration file:

If you want to set a new parameter in the encrypted configuration file, use the blobfuse2 secure set command. Please refer to the screenshot below for further details.

Command used: blobfuse2 secure set --config-file=./encryptedconfig.yaml --passphrase={passphrasesample} --key=logging.log_level --value=log_debug

Decrypt the configuration yaml file:

Now that we know how to encrypt the configuration file, let's see how to use the blobfuse2 secure command to decrypt it. Please refer to the screenshot below for further details.

Command used: blobfuse2 secure decrypt --config-file=./encryptedconfig.yaml --passphrase={passphrasesample} --output-file=./decryptedconfig.yaml

We can see the contents of the decrypted configuration file using the "cat" command. In this way, we can secure the config file used for blobfuse2 and meet our security requirement.

References:

- If you face any issues with blobfuse2, you can refer to this troubleshooting blog: How to troubleshoot blobfuse2 issues | Microsoft Community Hub
- For blobfuse2 secure commands, you can refer to: How to use the 'blobfuse2 secure' command to encrypt, decrypt, or access settings in a BlobFuse2 configuration file - Azure Storage | Microsoft Learn

Hope this article turns out helpful! Happy Learning!
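For context, the configuration file being encrypted throughout this article is plain YAML along these lines. This is a minimal sketch following the blobfuse2 base-config layout; the account values are placeholders and your component list and cache settings may differ:

```bash
# Write an illustrative plain-text config of the kind `blobfuse2 secure encrypt` protects.
# Everything below, including the account key, sits in plain text until it is encrypted.
cat > config.yaml <<'EOF'
logging:
  log_level: log_warning

components:
  - libfuse
  - file_cache
  - attr_cache
  - azstorage

file_cache:
  path: /tmp/blobfuse2cache

azstorage:
  type: block
  account-name: <storage-account-name>
  account-key: <storage-account-key>
  container: <container-name>
  mode: key
EOF
```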
Efficient Management of Append and Page Blobs Using Azure Storage Actions
Overview

In Azure Storage, lifecycle management allows you to automate the management of your data based on user-defined rules. Lifecycle management policies are supported for block blobs and append blobs in general-purpose v2, premium block blob, and Blob Storage accounts. However, since lifecycle management policies are not supported for page blobs, we can manage the lifecycle of page blobs and append blobs through storage tasks and actions instead. Solutions like Logic Apps and Azure Functions are also available to automate lifecycle management; below are reference links:

- Lifecycle Management, Page Blob, Azure Storage, Blob Storage (microsoft.com)
- Delete all the Azure Storage Blob content before N days using Logic App - Microsoft Community Hub

The focus of this blog is on demonstrating how to achieve the same goal using built-in storage actions and tasks.

NOTE: Azure Storage Actions is currently in PREVIEW and is available in these regions. Please refer to: About Azure Storage Actions Preview - Azure Storage Actions Preview | Microsoft Learn

By leveraging these storage actions, we can automate the retention, deletion, and archival of page blobs and append blobs based on custom-defined rules, ensuring efficient lifecycle management without relying on external services. This method provides a more direct, storage-centric approach to managing page blob lifecycles. For example, suppose we have both page blobs and append blobs within a container and would like to delete them using Azure storage actions and tasks. In this article, you'll learn how to create a storage task.

Create a task

1. In the Azure portal, search for Storage Tasks. Then, under Services, select Storage tasks - Azure Storage Actions.
2. On the Azure Storage Actions | Storage Tasks page, select Create.

Basics tab: On the Basics tab, provide the essential information for your storage task.

Conditions tab: On the Conditions tab, define the conditions that must be met by each object (container or blob), and the operations to perform on the object. You must define at least one condition and one operation. To add a clause to a condition, select Add new clause. To add operations, select Add new operation. In this scenario, we select the blob types Page Blob and Append Blob and the Delete operation.

Assignments tab: An assignment identifies a storage account and a subset of objects in that account that the task will target. An assignment also defines when the task runs and where execution reports are stored. To add an assignment, select Add assignment. This step is optional; you don't have to add an assignment to create the task.

Tags tab: On the Tags tab, you can specify Resource Manager tags to help organize your Azure resources.

Review + create tab: When you navigate to the Review + create tab, Azure runs validation on the storage task settings that you have chosen. If validation passes, you can proceed to create the storage task. If validation fails, the portal indicates which settings need to be modified.

Once you have created the storage task, go to the respective storage account to enable the storage task assignment.

Enable the task assignment

Storage task assignments are disabled by default. Enable assignments from the Assignments page. Periodically select Refresh to view an updated status. Until the task runs and completes, the string "In progress" appears beneath the Last run status column. When the task completes, the string "Completed" appears in that column.
After the task completes successfully, we can see that both the page blobs and append blobs were deleted from the container.

View results of the task run

After the task completes running, you can view the results of the run. Select the View report link to download a report.

Useful links:

- Create a storage task - Azure Storage Actions Preview | Microsoft Learn
- https://learn.microsoft.com/en-us/azure/storage-actions/overview
- Define storage task conditions & operations - Azure Storage Actions Preview | Microsoft Learn
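If you script your environments, the preview storage-actions CLI extension can define a similar task. The sketch below is hedged: the --action shorthand and condition grammar follow the az storage-actions documentation, and the names, resource group, and exact property spellings are illustrative and should be verified against the current preview docs:

```bash
# Requires the preview extension
az extension add --name storage-actions

# Define a task that deletes page and append blobs (names are illustrative)
az storage-actions task create \
  --resource-group my-rg \
  --name delete-page-append-blobs \
  --identity "{type:SystemAssigned}" \
  --enabled true \
  --description "Delete page and append blobs" \
  --action "{if:{condition:'[[or(equals(BlobType, \"pageBlob\"), equals(BlobType, \"appendBlob\"))]]',operations:[{name:'DeleteBlob',onSuccess:'continue',onFailure:'break'}]}}"
```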