Azure File Sync: faster, more secure and Windows Server 2025 support
Azure File Sync enables seamless tiering of data from on-premises Windows Servers to Azure Files for hybrid use cases and simplified migration. It lets you combine the performance, flexibility, and compatibility of your on-premises file server with the scale and cost effectiveness of Azure Files. The latest updates for Azure File Sync bring a host of new features and improvements:

- Faster server onboarding and disaster recovery (a 7x improvement), significantly reducing the time to access data on new server endpoints.
- Significantly improved sync performance (a 10x improvement), reducing the time to migrate shares and to sync a large number of changes (for example, permission changes).
- Windows Server 2025 support, so organizations can stay on the cutting edge. Windows Server 2025 introduces enhanced capabilities, offering better scalability, security, and cloud integration.
- Copilot in Azure, which can help you quickly troubleshoot and resolve common Azure File Sync issues.
- Managed identities support, now in preview, enabling a more secure method of authenticating to your Azure file shares.

In this blog post, we'll explore these key updates and what they mean for businesses looking to maximize their Azure File Sync experience. Whether it's reducing your on-premises footprint or ensuring seamless and secure cloud integration, now is an ideal time to embrace Azure File Sync and take full advantage of what it has to offer.

Faster server provisioning and improved disaster recovery for Azure File Sync server endpoints

One of the most significant updates in Azure File Sync is the dramatic reduction in the time required to provision new server endpoints. Previously, setting up a new server endpoint could take hours or even days; with the v19 release and later, we've drastically cut the time it takes to access data on a new server endpoint. This enhancement is critical for disaster recovery and is especially impactful when the Azure file share contains millions of files and folders.

Furthermore, to improve the management experience, we've introduced a Provisioning Steps tab in the portal, which lets you easily determine when server endpoints are ready for use. You can now access data before syncing is complete: as users or applications navigate their data, the system prioritizes the relevant items for quicker access, eliminating the need to wait for a full download. These improvements help businesses get their server endpoints up and running without long delays, improving overall operational efficiency. For more information, see the Create an Azure File Sync server endpoint documentation.

Improved sync performance for migrations and bulk updates

Another exciting update is the substantial improvement in sync performance, now reaching up to 200 items per second, a tenfold improvement over the past two years. This enhancement strengthens Azure File Sync's role as a seamless migration tool, enabling faster data transfers, especially for operations that involve a large number of file changes (for example, when file permissions change). It's particularly beneficial for customers aiming to replace on-premises file servers and manage larger data sizes with Azure File Sync.

Support for Windows Server 2025

Azure File Sync now supports Windows Server 2025, which brings improved security, performance, and manageability.
The Azure File Sync extension for Windows Admin Center now supports servers from Windows Server 2012 R2 up through Windows Server 2025, making Azure File Sync suitable for a wide range of organizations regardless of their current server version. Azure File Sync facilitates the modernization of file servers, allowing organizations to seamlessly transition to newer servers running Windows Server 2025. The integration with Windows Admin Center (WAC) provides centralized management, offering a unified interface for managing configurations across multiple File Sync servers. This integration simplifies the management process, reducing complexity and saving time. With this configuration, businesses can use Windows Server as a fast cache for their Azure file share and optionally enable cloud tiering for more efficient data management.

Enhancing File Sync with Copilot in Azure

With Copilot in Azure, you can supercharge your Azure deployments with AI that simplifies troubleshooting and resolution. Whether the problem is a network misconfiguration, incorrect RBAC permissions, or an accidental file share deletion, Copilot makes fixing these issues faster and easier than ever. Copilot automatically detects errors and misconfigurations, guides you through the necessary steps to resolve them, and can even take action on your behalf to fix common problems instantly.

If you encounter challenges with Azure File Sync due to incorrect network settings, simply enter a prompt like, "Help me troubleshoot Azure File Sync issues." Copilot in Azure will walk you through the steps to identify and correct the network misconfiguration, ensuring that your files sync smoothly again. By leveraging Copilot's intelligent capabilities, you not only save time on manual troubleshooting but also gain the confidence to resolve issues independently, letting you focus on growing your business instead of dealing with roadblocks. For more information, see Troubleshoot and resolve Azure File Sync issues using Microsoft Copilot.

Preview: Managed identities support for enhanced security

Azure File Sync now includes support for managed identities (MI). This feature allows organizations to authenticate to Azure file shares using a Microsoft Entra ID identity, replacing the need for a shared key. Managed identities enable more secure authentication across several areas of Azure File Sync, including:

- Storage Sync Service authentication to Azure file shares
- Registered server authentication to Azure file shares
- Registered server authentication to the Storage Sync Service

For more information, see How to use managed identities with Azure File Sync (preview). A small sketch of shared-key-free access to a file share appears at the end of this post.

Get Started with File Sync

Don't have Azure File Sync yet? To get started, see How to deploy Azure File Sync.

Share Your Feedback

Your feedback is invaluable to us as it shapes and refines Azure File Sync and Azure Files. Please take a moment to share your feedback with us.
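File Sync's managed identity support is configured on the Storage Sync Service and registered servers rather than in application code, but the same shared-key-free pattern is easy to see from a client. Below is a minimal sketch of listing a share with a Microsoft Entra token instead of an account key; the account URL and share name are placeholders, and the `token_intent` parameter assumes a recent azure-storage-file-share (roughly 12.14 or later) plus an identity holding a Storage File data-plane role.

```python
from azure.identity import DefaultAzureCredential
from azure.storage.fileshare import ShareServiceClient

# DefaultAzureCredential resolves to a managed identity when running on Azure,
# falling back to developer credentials locally.
credential = DefaultAzureCredential()

service = ShareServiceClient(
    account_url="https://<storage-account>.file.core.windows.net",  # placeholder
    credential=credential,
    token_intent="backup",  # required when using token auth against Azure Files
)

share = service.get_share_client("<share-name>")  # placeholder
for item in share.list_directories_and_files():
    print(item.name)
```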
How to Save 70% on File Data Costs

In the final entry in our series on lowering file storage costs, DarrenKomprise shares how Komprise can help lower on-premises and Azure-based file storage costs. Komprise and Azure offer you a means to optimize unstructured data costs now and in the future!
Azure Files provisioned v2 billing model for flexibility, cost savings, and predictability

We are excited to announce the general availability of the Azure Files provisioned v2 billing model for the HDD (standard) media tier. Provisioned v2 is a provisioned billing model, meaning that you pay for what you provision, which enables you to flexibly provision storage, IOPS, and throughput. This allows you to migrate your general-purpose workloads to Azure at the best price and performance, without sacrificing price predictability. With provisioned v2, you have granular control to scale your file share alongside your workload's needs, whether you are connecting from a remote client, running in hybrid mode with Azure File Sync, or running an application in Azure. The provisioned v2 model lets you dynamically scale your application's performance up or down as needed, without downtime. Provisioned v2 file shares can span from 32 GiB to 256 TiB in size, with up to 50,000 IOPS and 5 GiB/sec throughput, providing the flexibility to handle both small and large workloads.

If you're an existing user of Azure Files, you may be familiar with the current "pay-as-you-go" model for the HDD (standard) media tier. While this model is conceptually simple (you pay for the storage and transactions used), usage-based pricing can be incredibly challenging to work with, because it's very difficult or impossible to accurately predict the usage on a file share. Without knowing how much usage you will drive, especially in terms of transactions, you can't make accurate predictions about your Azure Files bill ahead of time, making planning and budgeting difficult. The provisioned v2 model solves all of these problems, and more!

Increased scale and performance

In addition to the usability improvements of a provisioned model, we have significantly increased the limits over the current "pay-as-you-go" model:

| Quantity | HDD pay-as-you-go | HDD provisioned v2 |
|---|---|---|
| Maximum share size | 100 TiB (102,400 GiB) | 256 TiB (262,144 GiB) |
| Maximum share IOPS | 40,000 IOPS (recently increased from 20,000 IOPS) | 50,000 IOPS |
| Maximum share throughput | Variable based on region, split between ingress/egress | 5 GiB/sec (symmetric throughput) |

The larger limits offered on the HDD media tier in the provisioned v2 model mean that as your storage requirements grow, your file share can keep pace without unnatural workarounds such as sharding, allowing you to keep your data in logical file shares that make sense for your organization.

Per share monitoring

Since provisioning decisions are made at the file share level, the provisioned v2 model brings the granularity of monitoring down to the file share level. This is a significant improvement over pay-as-you-go file shares, which can only be monitored at the storage account level. To help you monitor the usage of storage, IOPS, and throughput against the provisioned limits of the file share, we've added the following new metrics:

- Transactions by Max IOPS, which provides the maximum IOPS used over the indicated time granularity.
- Bandwidth by Max MiB/sec, which provides the maximum throughput in MiB/sec used over the indicated time granularity.
- File Share Provisioned IOPS, which tracks the provisioned IOPS of the share on an hourly basis.
- File Share Provisioned Bandwidth MiB/s, which tracks the provisioned throughput of the share on an hourly basis.
- Burst Credits for IOPS, which helps you track your IOPS usage against bursting.

To use the metrics, navigate to the specific file share in the portal and select "Monitoring > Metrics". Select the metric you want, in this case "Transactions by Max IOPS", and ensure that the usage is filtered to the specific file share you want to examine.
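If you prefer to pull these numbers programmatically, here is a minimal sketch using azure-mgmt-monitor. Treat the metric name ("Transactions") and the "FileShare" dimension filter as assumptions based on the standard Azure Files metrics namespace; the new per-share charts may be backed by differently named metrics, so verify the available names with `client.metric_definitions.list(resource_uri)` first.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

subscription_id = "<subscription-id>"
client = MonitorManagementClient(DefaultAzureCredential(), subscription_id)

# Metrics for file shares hang off the fileServices sub-resource.
resource_uri = (
    f"/subscriptions/{subscription_id}/resourceGroups/<resource-group>"
    "/providers/Microsoft.Storage/storageAccounts/<account>/fileServices/default"
)

result = client.metrics.list(
    resource_uri,
    metricnames="Transactions",  # assumption: check metric_definitions.list
    timespan="2025-01-01T00:00:00Z/2025-01-02T00:00:00Z",
    interval="PT1H",
    aggregation="Maximum",
    filter="FileShare eq '<share-name>'",  # scope the query to a single share
)

for metric in result.value:
    for series in metric.timeseries:
        for point in series.data:
            print(point.time_stamp, point.maximum)
```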
How to get access to the provisioned v2 billing model?

The provisioned v2 model is generally available now, at the time of writing, in a limited set of regions. When you create a storage account in a region that has been enabled for provisioned v2, you can create a provisioned v2 account by selecting "Standard" for Performance and "Provisioned v2" for File share billing. See how to create a file share for more information.

When creating a share in a provisioned v2 storage account, you can specify the capacity and use the recommended performance. The recommendations we provide for IOPS and throughput are based on common usage patterns. If you know your workload's performance needs, you can manually set the IOPS and throughput to further tune your share (a programmatic sketch of this follows below).

As you use your share, you may find that your usage pattern changes or that your usage is more or less active than your initial provisioning. You can always increase your storage, IOPS, and throughput provisioning to right-size for growth, and you can also decrease any provisioned quantity once 24 hours have elapsed since your last increase. Storage, IOPS, and throughput changes take effect within a few minutes of a provisioning change.

In addition to your baseline provisioned IOPS, we provide credit-based IOPS bursting that enables you to burst up to 3X the amount of provisioned IOPS for up to 1 hour, or as long as credits remain. To learn more about credit-based IOPS bursting, see provisioned v2 bursting.
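For automation, share creation with explicit performance settings can also be done through the management plane. The sketch below uses azure-mgmt-storage; the `provisioned_iops` and `provisioned_bandwidth_mibps` property names are assumptions based on the 2024-01-01 storage management API, and the call requires a storage account already created with the provisioned v2 billing model.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

share = client.file_shares.create(
    resource_group_name="<resource-group>",
    account_name="<provisioned-v2-account>",
    share_name="myshare",
    file_share={
        "share_quota": 10_240,               # provisioned storage in GiB
        "provisioned_iops": 4_200,           # e.g. 2x an observed 2,100 IOPS peak
        "provisioned_bandwidth_mibps": 170,  # e.g. 2x an observed 85 MiB/sec peak
    },
)
print(share.name)
```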
Pricing example

To see the new provisioned v2 model in action, let's compare the costs of the pay-as-you-go model versus the provisioned v2 model for the following Azure File Sync deployment:

- Storage: 50 TiB used

For the pay-as-you-go model, we need usage expressed as the total number of "transaction buckets" for the month:

- Write: 3,214
- List: 7,706
- Read: 7,242
- Other: 90

For the provisioned v2 model, we need usage expressed as the maximum IOPS and throughput (in MiB/sec) hit over the course of an average time period to guide our provisioning decision:

- Maximum IOPS: 2,100 IOPS
- Maximum throughput: 85 MiB/sec

To deploy a file share using the pay-as-you-go model, you need to pick an access tier to store the data in: transaction optimized, hot, or cool. The correct access tier depends on the activity level of your data: a very active share should use transaction optimized, while a comparatively inactive share should use cool. Based on the activity level of this share as described above, cool is the best choice.

When you deploy the share, you need to provision more than you use today to ensure the share can support your application as your data continues to grow. How much to provision is ultimately up to you, but a good rule of thumb is to start with 2X what you use today. There's no need to keep your share at a consistent provisioned-to-used ratio.

Now we have all the necessary inputs to compare cost (a short script reproducing this arithmetic follows the resource list below).

HDD pay-as-you-go (cool access tier) cost components:

- Used storage: 51,200 GiB * $0.015/GiB = $768.00
- Write TX: 3,214 buckets * $0.1300/bucket = $417.82
- List TX: 7,706 buckets * $0.0650/bucket = $500.89
- Read TX: 7,242 buckets * $0.0130/bucket = $94.15
- Other TX: 90 buckets * $0.0052/bucket = $0.47

HDD provisioned v2 cost components:

- Provisioned storage: 51,200 used GiB * 2 * $0.0073/GiB = $747.52
- Provisioned IOPS: 2,100 IOPS * 2 * $0.0402/IOPS = $168.84
- Provisioned throughput: 85 MiB/sec * 2 * $0.0599/(MiB/sec) = $10.18

| | HDD pay-as-you-go (cool access tier) | HDD provisioned v2 |
|---|---|---|
| Total cost | $1,781.33/month | $926.54/month |
| Effective price per used GiB | $0.0348/used GiB | $0.0181/used GiB |

In this example, the pay-as-you-go file share costs $0.0348/used GiB while the provisioned v2 file share costs $0.0181/used GiB, roughly a 2X cost improvement for provisioned v2 over pay-as-you-go. Shares with different levels of activity will see different results; your mileage may vary. Typically, when deploying a file share for the first time, you would not know what the transaction usage will be, making cost projections for the pay-as-you-go model quite difficult, while computing the provisioned v2 cost remains straightforward. If you don't know your specific IOPS and throughput utilization, you can use the built-in recommendations as a starting point.

Resources

Here are some additional resources on how to get started:

- Azure Files pricing page
- Understanding the Azure Files provisioned v2 model | Microsoft Docs
- How to create an Azure file share | Microsoft Docs (follow the steps for creating a provisioned v2 storage account/file share)
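To make the comparison above reproducible, here it is as plain arithmetic using the East US2 unit prices quoted in the cost components, with each line item rounded to cents as on a bill:

```python
# Cost comparison for the 50 TiB Azure File Sync deployment above.
used_gib = 51_200  # 50 TiB

paygo_items = [
    used_gib * 0.015,  # used storage (cool access tier)
    3_214 * 0.1300,    # write transaction buckets
    7_706 * 0.0650,    # list transaction buckets
    7_242 * 0.0130,    # read transaction buckets
    90 * 0.0052,       # other transaction buckets
]
paygo = sum(round(item, 2) for item in paygo_items)

prov_v2_items = [
    used_gib * 2 * 0.0073,  # provisioned storage (2x used, as a growth buffer)
    2_100 * 2 * 0.0402,     # provisioned IOPS
    85 * 2 * 0.0599,        # provisioned throughput (MiB/sec)
]
prov_v2 = sum(round(item, 2) for item in prov_v2_items)

print(f"pay-as-you-go:  ${paygo:,.2f}/month (${paygo / used_gib:.4f}/used GiB)")
print(f"provisioned v2: ${prov_v2:,.2f}/month (${prov_v2 / used_gib:.4f}/used GiB)")
# -> pay-as-you-go:  $1,781.33/month ($0.0348/used GiB)
# -> provisioned v2: $926.54/month ($0.0181/used GiB)
```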
Control geo failover for ADLS and SFTP with unplanned failover

We are excited to announce the General Availability of customer managed unplanned failover for Azure Data Lake Storage and storage accounts with SSH File Transfer Protocol (SFTP) enabled.

What is unplanned failover?

With customer managed unplanned failover, you are in control of initiating your failover. Unplanned failover allows you to switch your storage endpoints from the primary region to the secondary region. During an unplanned failover, write requests are redirected to the secondary region, which then becomes the new primary region. Because an unplanned failover is designed for scenarios where the primary region is experiencing an availability issue, it happens without the primary region fully completing replication to the secondary region. As a result, an unplanned failover can incur data loss, depending on how much data has yet to be replicated from the primary region to the secondary region.

Each storage account has a "last sync time" property, which indicates the last time a full synchronization between the primary and the secondary region completed. Any data written between the last sync time and the current time may be only partially replicated to the secondary region, which is why an unplanned failover may incur data loss.

Unplanned failover is intended for a true disaster where the primary region is unavailable. Once it completes, the data in the original primary region is erased, the account is changed to locally redundant storage (LRS), and your applications can resume writing data to the storage account. If the previous primary region becomes available again, you can convert your account back to geo-redundant storage (GRS). Migrating the account from LRS to GRS initiates a full data replication from the new primary region to the secondary, which incurs geo-bandwidth costs.

If your scenario involves failing over while the primary region is still available, consider planned failover instead. Planned failover can be used in scenarios including planned disaster recovery testing or recovering from non-storage-related outages. Unlike unplanned failover, the storage service endpoints must be available in both the primary and secondary regions before a planned failover can be initiated, because planned failover is a three-step process: (1) the current primary is made read-only, (2) all data is synced to the secondary (ensuring no data loss), and (3) the primary and secondary regions are swapped so that writes land in the new region. In contrast with unplanned failover, planned failover maintains the geo-redundancy of the account, so a planned failback does not require a full data copy.

To learn more about planned failover and how it works, see Public Preview: Customer Managed Planned Failover for Azure Storage | Microsoft Community Hub. To learn more about each failover option and its primary use case, see Azure storage disaster recovery planning and failover - Azure Storage | Microsoft Learn.

How to get started?

Getting started is simple. To learn more about the step-by-step process to initiate an unplanned failover, review the documentation: Initiate a storage account failover - Azure Storage | Microsoft Learn. A minimal SDK sketch appears at the end of this post.

Feedback

If you have questions or feedback, reach out at storagefailover@service.microsoft.com
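As a rough sketch of what this looks like in code, the snippet below uses azure-mgmt-storage to read the account's last sync time (which bounds the potential data loss) and then initiate the failover. Resource names are placeholders; `begin_failover` performs an unplanned failover by default in current SDK versions.

```python
from datetime import datetime, timezone

from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

# 'last sync time' bounds the potential data loss of an unplanned failover.
account = client.storage_accounts.get_properties(
    "<resource-group>", "<account-name>", expand="geoReplicationStats"
)
last_sync = account.geo_replication_stats.last_sync_time
print("last sync time:", last_sync)
print("potential loss window:", datetime.now(timezone.utc) - last_sync)

# Initiate only in a true disaster: writes after last_sync may be lost, and
# the account converts to LRS once the secondary is promoted.
poller = client.storage_accounts.begin_failover("<resource-group>", "<account-name>")
poller.result()  # long-running operation; blocks until failover completes
```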
Introducing new Storage Capabilities to Copilot in Azure (Preview)

We are excited to announce that customers can now take advantage of new Copilot in Azure (Public Preview) capabilities for Storage services. Copilot in Azure is an intelligent assistant designed to help you design, operate, optimize, and troubleshoot your Azure environment. With the new Storage capabilities, Copilot in Azure can analyze your storage services' metadata and logs to streamline tasks such as building cloud solutions and managing, operating, supporting, and troubleshooting cloud applications in Azure Storage.

Troubleshooting Disk Performance with Copilot in Azure

Available now in Public Preview

Azure offers a rich variety of disk metrics, providing insight into the performance of your Virtual Machine (VM) and disks. These metrics help you diagnose performance issues when your application requires higher performance than what you have configured for the VM and disks. Whether you are setting up and validating a new environment in Azure or facing issues with an existing setup, Copilot enhances your experience by analyzing these metrics to troubleshoot performance issues on your behalf, along with providing guided recommendations for optimizing VM and disk performance.

To troubleshoot performance issues with Copilot in Azure, navigate to Copilot in the Azure portal and enter a prompt related to VM-disk performance, such as "Why is my disk slow?". Copilot will then ask you to specify the VM and disk(s) experiencing performance issues, along with the relevant time period. Using this information, Copilot analyzes your current VM-disk configuration and performance metrics to identify whether your application is experiencing slowness because it is reaching the configured performance limits of the VM or disk. It then provides a summary of the analysis and a set of recommended actions to resolve your performance issue, which you can apply directly in the portal through Copilot's guided recommendations. By leveraging the power of Copilot, you can efficiently diagnose and address performance issues within your Azure Disks environment. For more information on the Disk Performance Copilot capability, refer to the Public Documentation.

Managing Storage Lifecycle Management with Copilot in Azure

Available now in Public Preview

With Copilot in Azure, we're providing a more efficient way to manage and optimize your storage costs. Copilot in Azure allows you to save on costs by tiering blobs that haven't been accessed or modified for a while; in some cases, you might even decide to delete those blobs. With Copilot in Azure, you can simply automate lifecycle management (LCM) rule authoring, enabling you to perform bulk actions on storage accounts through a natural language interface. This means no more manual rule creation or risk of misconfiguration! To use this capability, simply enter a prompt related to cost management, such as "Help me reduce my storage account costs" or "I want to lower my storage costs." Copilot will then guide you through authoring an LCM rule to help you achieve your goals. For more information on the Storage Lifecycle Management Copilot capability, refer to the Public Documentation.
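For context, here is a hedged sketch of the kind of LCM rule Copilot authors for you, created directly with azure-mgmt-storage. The rule name and the 90-day cutoff are illustrative, and the snake_case dictionary form mirrors the SDK's model fields; the policy tiers block blobs untouched for 90 days down to cool.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")

# LCM policies live on the storage account under the fixed name "default".
policy = client.management_policies.create_or_update(
    resource_group_name="<resource-group>",
    account_name="<account>",
    management_policy_name="default",
    properties={
        "policy": {
            "rules": [
                {
                    "enabled": True,
                    "name": "tier-stale-blobs-to-cool",  # illustrative name
                    "type": "Lifecycle",
                    "definition": {
                        "filters": {"blob_types": ["blockBlob"]},
                        "actions": {
                            "base_blob": {
                                # Tier blobs unmodified for 90 days to cool.
                                "tier_to_cool": {
                                    "days_after_modification_greater_than": 90
                                }
                            }
                        },
                    },
                }
            ]
        }
    },
)
print([rule.name for rule in policy.policy.rules])
```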
Troubleshoot File Sync errors with Copilot in Azure

Available now in Public Preview

With Copilot in Azure, you can now quickly troubleshoot and resolve common Azure File Sync issues, such as network misconfiguration, incorrect RBAC permissions, or accidental file share deletions. Copilot in Azure detects errors and misconfigurations, provides exact steps to fix them, and can act on your behalf to resolve common errors. To use this capability, simply enter a prompt related to File Sync, such as "Why are my files not syncing?" or "Help me troubleshoot error 0x80C83096."

With the File Sync error troubleshooting skill, Copilot in Azure acts as your intelligent assistant to keep your File Sync environment running smoothly. This capability not only saves you time by cutting down on troubleshooting effort but also empowers you to resolve issues confidently and independently. For more information on the File Sync Copilot capability, refer to the Public Documentation.
Announcing the Next generation Azure Data Box Devices

The Microsoft Azure Data Box offline data transfer solution allows you to send petabytes of data into Azure Storage in a quick, inexpensive, and reliable manner. The secure data transfer is accelerated by hardware transfer devices that enable offline data ingestion to Azure. Our customers use the Data Box family to move petabyte-scale data into Azure for backup, archival, data analytics, media and entertainment, training, and various workload migrations. We continue to receive requests about moving truly massive amounts of data in a secure, simple, and quick manner. We've heard you, and to address your needs, we've designed a new, enhanced product to meet your data transfer needs.

About the latest innovation in the Azure Data Box family

Today, we're excited to announce the preview of Azure Data Box 120 and Azure Data Box 525, our next-generation compact, NVMe-based Data Box devices. The new offerings reflect insights gained from working with our customers over the years and understanding their evolving data transfer needs. These new devices incorporate several improvements to accelerate offline data transfers to Azure, including:

- Fast copy: built with NVMe drives for high-speed transfers and improved reliability, with support for faster network connections
- Easy to use: a larger capacity offering (525 TB) in a compact form factor for easy handling
- Resilient: ruggedized devices built to withstand rough conditions during transport
- Secure: enhanced physical, hardware, and software security features
- Broader availability: presence in more Azure regions, meeting local compliance standards and regulations

What's new?

Improved Speed & Efficiency

- NVMe devices offer faster data transfer rates, with copy speeds up to 7 GBps via SMB Direct on RDMA (100 GbE) for medium to large files, a 10x improvement over previous-generation devices (see the quick arithmetic after the security section below).
- High-speed transfers to Azure, with data upload up to 5x faster for medium to large files, reducing the lead time for your data to become accessible in the Azure cloud.
- Improved networking, with support for up to 100 GbE connections, compared to 10 GbE on the older generation of devices.
- Two options with usable capacities of 120 TB and 525 TB in a compact form factor meeting OSHA requirements.
- Devices ship next-day air in most regions.

Learn more about the performance improvements on Data Box 120 and Data Box 525.

Enhanced Security

The next-generation devices come with several new physical, hardware, and software security enhancements, in addition to the built-in Azure security baseline for Data Box and the Data Box service security measures currently supported by the service:

- Secure boot functionality with hardware root of trust and Trusted Platform Module (TPM) 2.0.
- Custom tamper-proof screws and a built-in intrusion detection system to detect unauthorized device access.
- AES 256-bit BitLocker software encryption for data at rest is currently available. Hardware encryption via the RAID controller, which will be enabled by default on these devices, is coming soon. Once available, customers can enable double encryption through both software and hardware encryption to meet their sensitive data transfer requirements.
- These ISTA 6A compliant devices are built to withstand rough conditions during shipment while keeping both the device and your data safe and intact.

Learn more about the enhanced security features on Data Box 120 and Data Box 525.
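To put the quoted transfer rates in perspective, here is a quick back-of-the-envelope calculation. It assumes a sustained 7 GB/s copy rate, which is the best case for medium-to-large files over SMB Direct; real-world rates vary with file mix, network, and overhead.

```python
# Hours needed to fill each device at the quoted best-case copy rate.
def copy_hours(capacity_tb: float, rate_gb_per_sec: float) -> float:
    """Hours to copy capacity_tb terabytes at rate_gb_per_sec gigabytes/sec."""
    return capacity_tb * 1000 / rate_gb_per_sec / 3600

for capacity_tb in (120, 525):
    print(f"{capacity_tb} TB at 7 GB/s: ~{copy_hours(capacity_tb, 7.0):.1f} hours")
# -> 120 TB at 7 GB/s: ~4.8 hours
# -> 525 TB at 7 GB/s: ~20.8 hours
```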
Broader Azure region coverage

A recurring request from our customers has been wider availability of our higher-capacity device to ease large migrations. We're happy to share that Data Box 525 will be available across most Azure regions where the Data Box service is currently live. This marks a significant improvement in the availability of a large-capacity device compared to the current Data Box Heavy.

What our customers have to say

For the last several months, we've been working directly with customers of all industries and sizes to leverage the next-generation devices for their data migration needs. Customers love the larger capacity with form-factor familiarity, the seamless setup, and the faster copy.

"This new offering brings significant advantages, particularly by simplifying our internal processes. With deployments ranging from hundreds of terabytes to even petabytes, we previously relied on multiple regular Data Box devices—or occasionally Data Box Heavy devices—which required extensive operational effort. The new solution offers sizes better aligned with our needs, allowing us to achieve optimal results with fewer logistical steps. Additionally, the latest generation is faster and provides more connectivity options at data centre premises, enhancing both efficiency and flexibility for large-scale data transfers." - Lukasz Konarzewski, Senior Data Architect, Commvault

"We have been using the devices to move 1PB of archival media data to Azure blob storage using the Data Box transfer devices. The next generation devices provided a very smooth setup and copy experience, and we were able to transfer data in larger chunks and much faster than before. Overall, this has helped shorten our migration lead times and land the data in the cloud quickly and seamlessly." - Daniel Perry, Kohler

"We have had a positive experience overall with the new Data Box devices to move our data to Azure Blob storage. The devices offer easy plug and play installation, detailed documentation especially for the security features and good data copy performance. We would definitely consider using it again for future large data migration projects." - Bas Boeijink, Cloud Engineer, Eurofiber Cloud Infra

Sign up for the Preview

The Preview is available in the US, Canada, EU, UK, and US Gov Azure regions, and we will continue to expand to more regions in the coming months. If you are interested in the preview, we want to hear from you:

- Customers can sign up here
- ISV partners can sign up here

You can learn more about the all-new Data Box devices here. We are committed to continuing to deliver innovative solutions to lower the barrier for bringing data to Azure. Your feedback is important to us. Tell us what you think about the new Azure Data Box preview by writing to us at DataBoxPM@microsoft.com; we can't wait to hear from you.

Stop by and see us!

Now that you've heard about the latest innovation in the product family, come by and see the new devices at the Ignite session "What's new in Azure Storage: Supercharge your data centric workloads" on 21 November, starting at 11:00 AM CST. You can also drop by the Infra Hub to learn more from our product experts and sign up to try the new devices for your next migration!
Azure Backup: Protect SAP workloads (SAP HANA, SAP ASE and SQL) delivers more value with lower TCO

Azure Backup for SAP HANA databases delivers more value at lower TCO, with a reduced Protected Instance fee effective September 1, 2024.

At Azure, our commitment to providing superior value to our customers is unwavering. We are thrilled to announce a significant update that brings enhanced cost efficiency to our SAP HANA database backup service. Starting September 1, 2024, we are reducing the Protected Instance (PI) fees for "Azure Backup for SAP HANA on Azure VM." This change is designed to deliver more value at a lower cost, making it easier for enterprises to protect their critical data without compromising on quality or performance.

New pricing structure: more value, lower cost

With the new Protected Instance (PI) fee structure effective September 1, 2024, both SAP HANA streaming/Backint-based backups and SAP HANA snapshot-based backups see reduced costs. Here's how the new pricing breaks down (PI fees shown for East US2):

HANA Backint/Streaming Backup:

| DB size | Old pricing (PI fee) | New pricing (PI fee) | PI cost savings |
|---|---|---|---|
| 500 GB | $80 | $80 | No change |
| 1 TB | $160 | $80 | 50% |
| 5 TB | $800 | $80 | 90% |
| 10 TB | $1,600 | $80 | 95% |

HANA Snapshot Backup:

| DB size | Old pricing (PI fee) | New pricing (PI fee) | PI cost savings |
|---|---|---|---|
| 1 TB | $160 | $80 | 50% |
| 10 TB | $1,600 | $160 | 90% |
| 20 TB | $3,200 | $320 | 90% |
| 30 TB | $4,800 | $480 | 90% |

HANA streaming backup: a flat rate of $80 (East US2) per instance, with standard regional uplift, regardless of the HANA database size. For example, if you protect 1.2 TB of HANA database on one instance running in the East US2 region, the new PI cost is a flat $80 per month; previously, the cost would have been $240 plus storage consumed.

HANA snapshot backup: $80 (East US2) per 5 TB increment, with standard regional uplift. For example, if you have 10 TB of HANA database on one instance running in the East US2 region, the new PI cost is $160 plus storage consumed; previously, the cost would have been $1,600 plus storage consumed.

Following the SAP recommendation, if you opt for a weekly full streaming backup in addition to a snapshot backup, we apply only the single PI fee applicable to HANA snapshot backup.

For more details on HANA backup TCO savings, see Azure Backup-SAP HANA DB Backup Delivers More Value at Lower TCO with Reduced Protected Instance Fee - Microsoft Community Hub. For more details on the new pricing structure, visit Pricing - Cloud Backup | Microsoft Azure and Pricing Calculator | Microsoft Azure. For SQL workloads, you can also enable compression, which can reduce storage consumption by 40-50%; see Tutorial - Back up SQL Server databases to Azure - Azure Backup | Microsoft Learn.
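The new PI fee schedule above reduces to simple arithmetic (East US2 list prices, before regional uplift and storage costs):

```python
import math

def hana_streaming_pi_fee(db_size_tb: float) -> int:
    """New model: flat $80 per instance, regardless of database size."""
    return 80

def hana_snapshot_pi_fee(db_size_tb: float) -> int:
    """New model: $80 per started 5 TB increment."""
    return 80 * math.ceil(db_size_tb / 5)

for size_tb in (1, 10, 20, 30):
    print(f"{size_tb} TB snapshot-protected HANA: ${hana_snapshot_pi_fee(size_tb)}/month")
# -> $80, $160, $320, $480, matching the snapshot table above
```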
SAP ASE Database Backup Public Preview launch on 19th Nov '24

SAP applications can be deployed with different database backends, such as Oracle, SQL Server, Db2, SAP ASE, and MaxDB, not just SAP HANA. As part of our endeavor to enhance our backup support for SAP customers, we are thrilled to announce that, in addition to SAP HANA and SQL Server, we are now launching the Public Preview of backup support for the SAP ASE (Sybase) database in Azure Backup. Many existing SAP customers have been asking us to support non-HANA databases like SAP ASE (Sybase), and this new workload support is a significant milestone in our commitment to providing comprehensive, reliable, and flexible backup solutions for our enterprise customers.

In the Public Preview, ASE backup will be available in non-US regions starting November 19, 2024, and in US regions beginning December 10, 2024.

Key features:

- Cost-effective backup policies: weekly full + daily differential backups result in lower storage costs (as opposed to daily streaming full backups).
- Recovery Services vault: all backups are streamed directly to the Azure Backup managed Recovery Services vault, which provides security capabilities like immutability, soft delete, and multi-user authorization. The vaulted backup data is stored in a Microsoft-managed Azure subscription and is isolated from the customer's environment. These features ensure that SAP ASE backup data is always secure and tamper-proof and can be recovered safely even when the source machines are compromised.
- 15-minute RPO: as with stream-based backup, log backups are streamed every 15 minutes and can be applied on top of a database backup, which provides point-in-time recovery capability.
- Multiple database restore options: we support Alternate Location Restore (system refresh), Original Location Restore, and Restore as Files.
- Compression support: SAP ASE native compression can reduce storage consumption by 40-50%.
- Striping support: enabling striping can improve backup/restore throughput by 50-60%.
- Private endpoint support: Azure Backup allows you to securely perform backup and restore operations on your data from Recovery Services vaults using private endpoints. Private endpoints use one or more private IP addresses from your Azure Virtual Network (VNet), effectively bringing the service into your VNet.
- Cross Region Restore (CRR): allows you to restore SAP ASE databases hosted on Azure VMs in a secondary region, which is an Azure paired region.
Dell APEX File Storage for Microsoft Azure brings a powerful new option to our customers

Dell PowerScale OneFS has been trusted by customers across all industries to provide performant, resilient, and scalable multiprotocol file storage for nearly two decades. At Ignite 2024, we announced a new Dell-managed variant that complements the existing offering, giving you a powerful new choice from a proven industry leader.