Storage Account
Access denied on FileShare using access keys
Hi! I created an Azure Storage account and a file share in it. I have two VMs running Windows Server 2016, both in the same region. On VM1 I can connect to the file share using the storage account name and access key with the New-PSDrive command without any problems. On VM2 I get "access denied" when trying to connect to the file share the same way, with the same storage account name and access key. I execute the exact same New-PSDrive command on both servers. Does anyone know why this would happen?

Error from PowerShell:

PS C:\temp> .\MountBackup.ps1
CMDKEY: Credential added successfully.
New-PSDrive : Access is denied
At C:\temp\MountBackup.ps1:6 char:5
+     New-PSDrive -Name Z -PSProvider FileSystem -Root "\\europrod.f ...
+     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidOperation: (Z:PSDriveInfo) [New-PSDrive], Win32Exception
    + FullyQualifiedErrorId : CouldNotMapNetworkDrive,Microsoft.PowerShell.Commands.NewPSDriveCommand
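For reference, a minimal sketch of the mount sequence being described; the account and share names here are placeholders, not values from the post. When one VM mounts fine and another does not, a common first check is whether the failing VM can even reach the file endpoint, since SMB over port 445 must be open outbound:

# Placeholders: substitute your own storage account name, share name, and key.
$account = "<storage-account>"

# Verify the VM can reach the file endpoint on port 445 (SMB);
# a failure here points at networking rather than credentials.
Test-NetConnection -ComputerName "$account.file.core.windows.net" -Port 445

# Persist the credential, then map the share as drive Z:
cmdkey /add:"$account.file.core.windows.net" /user:"AZURE\$account" /pass:"<storage-account-key>"
New-PSDrive -Name Z -PSProvider FileSystem -Root "\\$account.file.core.windows.net\<share>" -Persist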
Azure runbook is failing to execute due to authentication issue with Azure storage account

I am facing an issue with storage account authentication for an Automation runbook in Azure.

Scenario: the runbook uses a Run As account, which is based on a service principal. Every two days it gets the Azure VM status and stores the result in a storage account.

Issue: runbook execution succeeds if the storage account networking is publicly accessible, but the runbook fails to store the VM data when networking is changed to selected networks. Under selected networks I added a resource instance for the runbook and allowed trusted Azure services, but it still shows authentication errors. I also granted the Contributor and Storage Blob Data Contributor roles to the service principal, and still get the authentication issue. Any idea how to resolve this?

Note: I don't want to make the storage account publicly accessible.
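A hedged sketch of the firewall configuration being described, using the Az.Storage module; the resource group, storage account, and Automation account names are assumptions, and it presumes the resource instance rule should point at the Automation account that runs the runbook:

# Placeholders: all resource names are assumptions.
$rg      = "<resource-group>"
$storage = "<storage-account>"

# Allow trusted Azure services through the firewall while keeping the default action Deny:
Update-AzStorageAccountNetworkRuleSet -ResourceGroupName $rg -Name $storage `
    -Bypass AzureServices -DefaultAction Deny

# Add a resource instance rule for the Automation account behind the runbook:
$automationId = (Get-AzResource -ResourceGroupName $rg `
    -ResourceType "Microsoft.Automation/automationAccounts" -Name "<automation-account>").ResourceId
Add-AzStorageAccountNetworkRule -ResourceGroupName $rg -Name $storage `
    -TenantId (Get-AzContext).Tenant.Id -ResourceId $automationId

Worth confirming as well that the runbook's own sign-in actually runs under the service principal that holds the Storage Blob Data Contributor role, since a rule for the right resource instance still fails if the token comes from a different identity.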
Access Firewall Protected (Select Vnet) Azure Storage from Azure SQL Database

I have a storage account which is firewall protected, and an Azure SQL database. Both are in the same tenant/subscription/resource group. I am unable to access the blob from Azure SQL (bulk import). I have tried the "Resource instances" feature, but it's not working. Can anyone guide me on how to solve this?
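A sketch of one common fix, assuming the missing piece is that the logical SQL server needs a managed identity that the resource instance rule and a blob-read role can attach to (server and account names are placeholders):

$rg = "<resource-group>"

# Ensure the logical SQL server has a system-assigned managed identity:
Set-AzSqlServer -ResourceGroupName $rg -ServerName "<sql-server>" -AssignIdentity

# Let that specific server instance through the storage firewall:
$server = Get-AzSqlServer -ResourceGroupName $rg -ServerName "<sql-server>"
Add-AzStorageAccountNetworkRule -ResourceGroupName $rg -Name "<storage-account>" `
    -TenantId (Get-AzContext).Tenant.Id -ResourceId $server.ResourceId

# Grant the server's identity read access to the blobs it will bulk import:
New-AzRoleAssignment -ObjectId $server.Identity.PrincipalId `
    -RoleDefinitionName "Storage Blob Data Reader" `
    -Scope (Get-AzStorageAccount -ResourceGroupName $rg -Name "<storage-account>").Id

On the database side, the external data source for the bulk import would then use a database scoped credential created WITH IDENTITY = 'Managed Identity', so the request authenticates as the server rather than anonymously.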
Facing CORS issue in Azure Storage account

I configured CORS as per the Microsoft guide, but I can't load a .glb 3D file into Azure Digital Twins Explorer. I am getting a 403 error saying CORS is not enabled, or no matching rule was found for the request, even though I have the data owner role on both Azure Digital Twins and the storage account.
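For comparison, a minimal blob-service CORS rule in PowerShell; the Digital Twins Explorer origin shown is an assumption and should be checked against the actual Origin header of the failing request in the browser's network trace, since the rule only matches on an exact origin:

$ctx = New-AzStorageContext -StorageAccountName "<storage-account>" -UseConnectedAccount

# A single CORS rule for the blob service; the origin must match the caller exactly.
$corsRule = @{
    AllowedOrigins  = @("https://explorer.digitaltwins.azure.net")  # assumed origin
    AllowedMethods  = @("Get", "Options")
    AllowedHeaders  = @("*")
    ExposedHeaders  = @("*")
    MaxAgeInSeconds = 3600
}
Set-AzStorageCORSRule -ServiceType Blob -Context $ctx -CorsRules $corsRule

# Verify what the service actually has configured:
Get-AzStorageCORSRule -ServiceType Blob -Context $ctx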
Data archiving of delta table in Azure Databricks
Hi all,

Currently I am researching data archiving for Delta table data on the Azure platform, as there is a data retention policy within the company. I have studied the Databricks documentation on archival support (https://docs.databricks.com/en/optimizations/archive-delta.html), which says: "If you enable this setting without having lifecycle policies set for your cloud object storage, Databricks still ignores files based on this specified threshold, but no data is archived." I am therefore looking at how to configure the lifecycle policy on the Azure storage account, and I have read the Microsoft documentation on lifecycle management (https://learn.microsoft.com/en-us/azure/storage/blobs/lifecycle-management-overview).

Say the Delta table data is stored in "test-container/sales", and that folder holds many "part-xxxx.snappy.parquet" data files. Should I simply specify "tierToArchive", "daysAfterCreationGreaterThan: 1825", and "prefixMatch: ["test-container/sales"]"? However, I am worried about two things: will this archive mechanism impact normal Delta table operations? And what if a Parquet file moved to the archive tier contains both data created more than five years ago and data created more recently; is that possible, and could data then be archived before it is five years old?

I would highly appreciate it if someone could help me with the questions above. Thanks in advance.
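A sketch of the rule described above, built with the Az.Storage management-policy cmdlets; the resource names are placeholders, and -DaysAfterCreationGreaterThan assumes a reasonably recent Az.Storage module version:

# Placeholders: resource group and storage account are assumptions.
$rg      = "<resource-group>"
$account = "<storage-account>"

# Action: move base blobs to the archive tier 1825 days (~5 years) after creation.
$action = Add-AzStorageAccountManagementPolicyAction -BaseBlobAction TierToArchive `
    -DaysAfterCreationGreaterThan 1825

# Filter: only block blobs under the Delta table's folder.
$filter = New-AzStorageAccountManagementPolicyFilter -PrefixMatch "test-container/sales" `
    -BlobType blockBlob

$rule = New-AzStorageAccountManagementPolicyRule -Name "archive-sales-after-5y" `
    -Action $action -Filter $filter

Set-AzStorageAccountManagementPolicy -ResourceGroupName $rg -StorageAccountName $account -Rule $rule

On the row-age worry: lifecycle rules evaluate whole blobs by their blob-level creation time, so a Parquet file is archived (or not) as a unit based on when the file was written, regardless of the age of the individual rows inside it.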