clustering
BLOG: Windows Server / Azure Local keeps setting Live Migration to 1 - here is why
Affected products: Windows Server 2022, Windows Server 2025, Azure Local 21H2, Azure Local 22H2, Azure Local 23H2, Network ATC

Dear Community,

I have seen numerous reports from customers running Windows Server 2022 or Azure Local (Azure Stack HCI) that the Live Migration setting is constantly being changed to 1 on each Hyper-V host, as reflected both in PowerShell and in the Hyper-V host settings. One customer had previously set the value to 4 via PowerShell, so he could prove it had been a different value at an earlier point in time.

At first I didn't research intensively why the configuration changed over time, but then I stumbled across the cause quite accidentally while fetching all parameters of Get-Cluster. According to an article, an LCU back in September 2022 changed the default behaviour and allows the number of parallel live migrations to be specified at cluster level. The new live migration default appears to be 1 at cluster level, and this forces the values on the Hyper-V nodes to 1 accordingly. In contrast to the cmdlet documentation, the value is not 2, which would make more sense. The change is little known, as it is not documented in the LCU KB5017381 itself, but only referenced in the documentation for the PowerShell cmdlet Get-Cluster. Frankly, neither of these are areas customers or partners check regularly to spot such relevant feature improvements or changes.

"Beginning with the 2022-09 Cumulative Update, you can now configure the number of parallel live migrations within a cluster. For more information, see KB5017381 for Windows Server 2022 and KB5017382 for Azure Stack HCI (Azure Local), version 21H2.

(Get-Cluster).MaximumParallelMigrations = 2

The example above sets the cluster property MaximumParallelMigrations to a value of 2, limiting the number of live migrations that a cluster node can participate in. Both existing and new cluster nodes inherit this value of 2 because it's a cluster property. Setting the cluster property overrides any values configured using the Set-VMHost command."
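Putting the quoted guidance together, checking and aligning the value from any cluster node can be sketched roughly like this. An elevated PowerShell session with the FailoverClusters and Hyper-V modules is assumed; only Get-Cluster, MaximumParallelMigrations and Set-VMHost come from the quote above, while the verification loop is my own illustration:

```powershell
# Inspect the cluster-wide default (reportedly 1 after the 2022-09 LCU).
(Get-Cluster).MaximumParallelMigrations

# Set it once at cluster level; existing and new nodes inherit it,
# overriding anything configured per host with Set-VMHost.
(Get-Cluster).MaximumParallelMigrations = 2

# Verify that every Hyper-V node has picked up the value.
Get-ClusterNode | ForEach-Object {
    Get-VMHost -ComputerName $_.Name |
        Select-Object Name, MaximumVirtualMachineMigrations
}
```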
Network ATC in Azure Local 22H2+ and Windows Server 2025+: When using Network ATC in Windows Server 2025 and Azure Local, it sets the live migration limit to 1 by default and enforces this across all cluster nodes, disregarding the cluster setting above or the local Hyper-V settings. To change the number of live migrations, you can specify a cluster-wide override in Network ATC.

Conclusion: The default values for live migration have been changed. The global cluster setting, or Network ATC, forces these down to the Hyper-V hosts on Windows Server 2022+ / Azure Local nodes and ensures consistency. Previously we thought this happened after opening the cluster settings in Windows Admin Center (WAC), but this was not the initial cause.

Finding references: Later that day, as my interest in this change grew, I found an official announcement. In agreement with another article on optimizing live migrations, the default value should be 2, but for some reason at most customers, even on fresh installations and clusters, it is set to 1.

TLDR:
1. Stop bothering to change the live migration setting per host, whether manually, via PowerShell, or via DSC / policy.
2. Today and in future, train your muscle memory to change live migration at cluster level with Get-Cluster, or via Network ATC overrides. These are forced down to all nodes almost immediately and are automatically corrected if there is any configuration drift on a node.
3. Check and set the live migration value to 2 as the default, and follow these recommendations: Optimizing Hyper-V Live Migrations on an Hyperconverged Infrastructure | Microsoft Community Hub, Optimizing your Hyper-V hosts | Microsoft Community Hub
4. You can stop blaming WAC or overeager colleagues for changing the LM settings to undesirable values over and over. Starting with Windows Admin Center (WAC) 2306, you can set the Live Migration settings at cluster level in Cluster > Settings.

Happy Clustering! 😀

feature Installation Error
I am facing this issue on Windows Server 2019 STD. I have also tried to solve it by selecting the sources\sxs path from the OS media, but I still get the same error. I mistakenly removed .NET Framework from this server, and since then I have been facing this issue. Please help me solve it.

ASHCI cluster with different RAM amounts
I have been looking through the server requirements and I can't see a definitive answer on whether I need to have the same amount of RAM per ASHCI host. I appreciate it may not be best practice because of failing over VMs within the cluster, and over-provisioning RAM could leave you in a sticky situation, but given that you can not only set preferred owners but also possible owners, you should be able to account for that. TLDR: in an ASHCI cluster, can I have one node with 4TB of RAM and 5 nodes with 2TB of RAM?

ASCHI cluster different RAM amounts per node
Is it a supported model for ASHCI to have one node in a cluster with a different amount of RAM? I appreciate that you can specify preferred owners and possible owners on VMs to restrict or allow VMs to go to specific hosts, and I understand that under normal circumstances you probably wouldn't want hosts with different amounts of RAM, to avoid over-provisioning and then getting into difficulties upon losing hosts. But I am looking at making one of my ASHCI hosts a SQL server. I don't want to remove it from the cluster because of the impact on storage, but I need to increase the RAM, and if I can do that on just a single host, I'd prefer it.

Failover Cluster Manager error when not running as administrator (on a PAW)
I've finally been trying (hard) to use a PAW, where the user I'm signed into the PAW as does NOT have local admin privileges on that machine, but DOES have admin privileges on the servers I'm trying to manage. The most recent hiccup is that Failover Cluster Manager, aka cluadmin.msc, doesn't seem to work properly if you don't have admin privileges on the machine you're running it from. Obviously on a PAW your server admin account is NOT supposed to be an admin on the PAW itself; you're just a standard user. The error I get when opening Failover Cluster Manager is as follows:

Error
The operation has failed.
An unexpected error has occurred.
Error Code: 0x800702e4
The requested operation requires elevation.
[OK]

Which is nice. I've never tried to run cluadmin as a non-admin, because historically everyone always just ran everything as a domain admin (right?), so you were an admin on everything. But this is not so in the land of PAW. I've run cluadmin on a different machine where I am a local admin, and it works fine. I do not need to run it elevated to make it work properly; it just works, e.g. open PowerShell, cluadmin <enter>, where PowerShell has NOT been opened via "Run as administrator" (aka UAC). I've tried looking for some kind of access denied message via procmon but can't see anything obvious (to my eyes anyway). A different person on a different PAW sees the same thing. Is anyone successfully able to run Failover Cluster Manager on a machine where you're just a standard user?

S2D 2 Nodes Hyper-V Cluster
The resources:
2 physical Windows Server 2022 Datacenter servers, each with:
1- 2 sockets, 24 physical CPU cores
2- 64 GB RAM
3- 2 * 480GB SSD in RAID 1 (OS partition)
4- 4 * 3.37TB SSD in RAID 5, approximately 10 or 11 TB total after RAID
5- 4 * 10Gb NICs
1 physical Windows Server 2016 Standard server with the DC role

The desired: I want to build a 2-node Hyper-V cluster with S2D, combining the storage of the 2 servers into one cluster shared volume, store the VMs on it, and create a high-availability solution that fails over automatically when one of the nodes goes down, without using an external storage source such as SAS. Is this scenario possible?

Why can't I run Enable-ClusterS2D more than once per windows installation?
I have been trying to set up this bloody cluster Storage Spaces Direct configuration for over a month and have re-installed everything on the server more than a dozen times. Every time I get to running the command for the first time, it makes some configuration that is not what I want, and there is absolutely no way that I can undo the changes or fix them in any way, shape, or form except by completely wiping the server and reinstalling the entire operating system. I have tried running Disable-ClusterS2D and then re-running Enable-ClusterS2D, and it hangs on "Waiting until all physical disks are reported by clustered storage subsystem". I try to completely detach the entire pool, wipe it manually, reclaim the disks, and recreate the pool, and guess what: "Waiting until all physical disks are reported by clustered storage subsystem". I wipe the entire cluster configuration and recreate the cluster from scratch. Did that work? Of course not: "Waiting until all physical disks are reported by clustered storage subsystem". This has happened on both Windows Server 2022 and 2025, Datacenter editions. Why can I not run the command more than once per operating system??? If I need to replace a disk, am I just supposed to wipe the whole operating system??? Isn't this **bleep** thing supposed to be enterprise grade and just work???

Hyper-V Replica Network for failover clustering
I have a Server 2019 failover cluster that we use for S2D and are starting to move VMs over to. I've added a dedicated network card for Hyper-V and that works fine. I am now trying to add a dedicated network card for Hyper-V replication. From what I can tell, I am able to add it and the cluster sees it. I've made it available to the cluster and to clients. I gave each member of the cluster a static IP with no gateway on that NIC, then installed the Hyper-V Replica role and gave it an IP on that same network. However, it just doesn't seem to want to work correctly. I'm getting Event IDs 1205, 1069 and 21502. 1205 says the cluster service failed to bring clustered role 'RDS-Replica' completely online or offline. 21502 says Hyper-V Replica Broker 'RDS-Replica' failed to start the network listener on destination node StorageNode02: no such host is known. I'm guessing this is because I told it not to register the network card in DNS, since this network isn't going to be able to reach anything other than the other Hyper-V hosts. My end goal is to replicate these to a dedicated Hyper-V box at the same site, which will have a dedicated adapter on the same layer 2 network. Any suggestions? Thank you

How to manage garbage collection not to disrupt services on web application
Greetings, I need your expertise to resolve a painful issue: the GC process is always running on my stand-alone servers, causing service disruption. I limited my IIS app pool to recycle after reaching 20 GiB of memory consumption, and I also set the site to use server GC, but my servers are constantly undergoing GC, which disrupts service. Any solutions out there to end this nightmare?

Expanding drive in Failover Clustering
I've got a Storage Spaces Direct cluster set up and I've got a volume I need to expand. It's a volume under the file server role in the cluster. It's a ReFS volume, and I've tried using Windows Admin Center to expand it but it errors out, although I can expand a volume under the SOFS role with no issue. I also tried using Server Manager, Disk Management, and diskpart with no luck. Any suggestions?
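For reference, the generic PowerShell route for growing an S2D virtual disk and then its partition looks roughly like the sketch below. The volume name "FileShareVolume" and the 2TB target size are placeholders, and whether this succeeds where WAC errors out is not guaranteed:

```powershell
# 1. Grow the virtual disk backing the clustered volume.
Get-VirtualDisk -FriendlyName "FileShareVolume" |
    Resize-VirtualDisk -Size 2TB

# 2. Extend the partition into the newly available space.
$part = Get-VirtualDisk -FriendlyName "FileShareVolume" |
    Get-Disk | Get-Partition | Where-Object Type -eq 'Basic'
$size = ($part | Get-PartitionSupportedSize).SizeMax
$part | Resize-Partition -Size $size
```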