virtualization
Step-By-Step: Enabling Hyper-V for Use on Windows 11
Want to use Hyper-V on Windows 11? Hyper-V is a virtualization technology that is valuable not only for developers and IT professionals, but also for college and university students. This step-by-step guide will show you how to enable Hyper-V on your Windows 11 machine.
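For readers who prefer the command line, here is a minimal sketch of the same enablement step, assuming an elevated PowerShell session on a machine that meets the Hyper-V hardware prerequisites (a reboot is required afterwards):

```powershell
# Minimal sketch: enable the Hyper-V feature set on Windows 11.
# Run from an elevated PowerShell prompt; the machine must support
# virtualization (SLAT-capable 64-bit CPU, virtualization enabled in firmware).
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All

# Reboot to finish the installation.
Restart-Computer
```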
Ask the Perf Guy: How big is too BIG?

We’ve seen an increasing amount of interest lately in deployment of Exchange 2013 on “large” servers. By large, I mean servers that contain significantly more CPU or memory resources than what the product was designed to utilize. I thought it might be time for a reminder of our scalability recommendations and some of the details behind those recommendations. Note that this guidance is specific to Exchange 2013 – there are many architectural differences in prior releases of the product that will impact scalability guidance.

In a nutshell, we recommend not exceeding the following sizing characteristics for Exchange 2013 servers, whether single-role or multi-role (and you are running multi-role, right?).

Recommended Maximum Processor Core Count: 24
Recommended Maximum Memory: 96 GB

Note: Version 7.5 and later of the Exchange Server 2013 Role Requirements Calculator aligns with this guidance and will flag server configurations that exceed these guidelines.

As we have mentioned in various places like TechNet and our Preferred Architecture, commodity-class 2U servers with 2 processor sockets are our recommended server type for deployment of Exchange 2013. The reason for this is quite simple: we utilize massive quantities of these servers for deployment in Exchange Online, and as a result this is the platform that we architect for and have the best visibility into when evaluating performance and scalability.

You might now be asking the fairly obvious follow-up question: what happens if I ignore this recommendation and scale up? It’s hard, if not impossible, to provide a great answer to this question, because there are so many things that could go wrong. We have certainly seen a number of issues raised through support related to scale-up deployments of Exchange in recent months. An example of this class of issue appears in the “Oversizing” section of Marc Nivens’ recent blog article on troubleshooting high CPU issues in Exchange 2013.

Many of the issues we see are in some way related to concurrency and reduced throughput due to excessive contention amongst threads. This essentially means that the server is trying to do so much work (believing that it has the capability to do so given the massive amount of hardware available to it) that it is running into architectural bottlenecks and actually spending a great deal of time dealing with locks and thread scheduling instead of handling transactions associated with Exchange workloads. Because we architect and tune the product for mid-range server hardware as described above, no tuning has been done to get the most out of this larger hardware and avoid this class of issues.

We have also seen some cases in which the patterns of requests being serviced by Exchange, the number of CPU cores, and the amount of physical memory deployed on the server resulted in far more time being spent in the .NET Garbage Collection process than we would expect, given our production observations and tuning of memory allocation patterns within Exchange code. In some of these cases, Microsoft support engineers may determine that the best short-term workaround is to switch one or more Exchange services from the Workstation Garbage Collection mode to Server Garbage Collection mode. This allows the .NET Garbage Collector to manage memory more efficiently, but with some significant tradeoffs, like a dramatic increase in physical memory consumption.
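For context, and strictly as a read-only illustration rather than a recommendation to change anything: whether a .NET Framework service opts into server GC is typically controlled by a gcServer element in that service's .exe.config file. A minimal inspection sketch follows; the path is a placeholder, not an actual Exchange file name.

```powershell
# Minimal sketch: report whether a .NET service's configuration file opts into
# server GC. The path below is a placeholder -- point it at the *.exe.config of
# the service you are inspecting. Do not change these settings without the
# advice and consent of Microsoft Support.
$configPath = 'C:\Path\To\SomeService.exe.config'   # hypothetical path

[xml]$config = Get-Content -Path $configPath -Raw
$gcServer   = $config.configuration.runtime.gcServer

if ($null -ne $gcServer) {
    "gcServer enabled = $($gcServer.enabled)"
} else {
    "No explicit gcServer element found; the service runs with its default (workstation) GC mode."
}
```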
In general, each individual service that makes up the Exchange server product has been tuned as carefully as possible to be a good consumer of memory resources, and wherever possible, we utilize the Workstation Garbage Collector to avoid a dramatic and typically unnecessary increase in memory consumption. While it’s possible that adjusting a service to use Server GC rather than Workstation GC might temporarily mitigate an issue, it’s not a long-term fix that the product group recommends. When it comes to .NET Garbage Collector settings, our advice is to ensure that you are running with default settings; the only time these settings should be adjusted is with the advice and consent of Microsoft Support. As we make changes to Exchange through our normal servicing rhythm, we may change these defaults to ensure that Exchange continues to perform as efficiently as possible, and as a result, manual overrides could result in a less optimal configuration.

As server and processor technology changes, you can expect that we will make adjustments to our production deployments in Exchange Online to ensure that we are getting the highest performance possible at the lowest cost for the users of our service. As a result, we anticipate updating our scalability guidance based on our experience running Exchange on these updated hardware configurations. We don’t expect these updates to be very frequent, but change to hardware configurations is absolutely a given when running a rapidly growing service.

It’s a fact that many of you have various constraints on the hardware that you can deploy in your datacenters, and often those constraints are driven by a desire to reduce server count, increase server density, etc. Within those constraints, it can be very challenging to design an Exchange implementation that follows our scalability guidance and the Preferred Architecture. Keep in mind that in this case, virtualization may be a feasible option rather than a risky attempt to circumvent scalability guidance and operate extremely large Exchange servers. Virtualization of Exchange is a well understood, fairly common solution to this problem, and while it does add complexity (and therefore some additional cost and risk) to your deployment, it can also allow you to take advantage of large hardware while ensuring that Exchange gets the resources it needs to operate as effectively as possible.

If you do decide to virtualize Exchange, remember to follow our sizing guidance within the Exchange virtual machines. Scale out rather than scale up (the virtual core count and memory size should not exceed the guidelines mentioned above) and try to align as closely as possible to the Preferred Architecture.

When evaluating these scalability limits, it’s really most important to remember that Exchange high availability comes from staying as close to the product group’s guidance and Preferred Architecture as possible. We want you to have the very best possible experience with Exchange, and we know that the best way to achieve that is to deploy like we do.
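To make the virtualization advice above concrete, here is a minimal Hyper-V sketch. The VM name is hypothetical, and the values simply illustrate staying at or below the guidance; your actual vCPU and memory sizes should come from the Role Requirements Calculator.

```powershell
# Minimal sketch (hypothetical VM name): keep an Exchange guest VM within the
# Exchange 2013 scalability guidance -- no more than 24 virtual processors and
# 96 GB of memory. Size the VM from your calculator output rather than simply
# maxing these values out.
$vmName = 'EXCH-MBX-01'   # placeholder

Set-VMProcessor -VMName $vmName -Count 24
Set-VMMemory    -VMName $vmName -StartupBytes 96GB -DynamicMemoryEnabled $false  # static memory is commonly used for Exchange guests
```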
Jeff Mealiffe
Principal PM Manager, Office 365 Customer Experience

Ask The Perf Guy: What’s The Story With Hyperthreading and Virtualization?

There’s been a fair amount of confusion amongst customers and partners lately about the right way to think about hyperthreading when virtualizing Exchange. Hopefully I can clear up that confusion very quickly.

We’ve had relatively strong guidance in recent versions of Exchange that hyperthreading should be disabled. This guidance is specific to physical server deployments, not virtualized deployments. The reasoning for strongly recommending that hyperthreading be disabled on physical deployments can really be summarized in 2 different points:

1. The increase in logical processor count at the OS level due to enabling hyperthreading results in increased memory consumption (due to various algorithms that allocate memory heaps based on core count), and in some cases also results in increased CPU consumption or other scalability issues due to high thread counts and lock contention.
2. The increased CPU throughput associated with hyperthreading is non-deterministic and difficult to measure, leading to capacity planning challenges.

The first point is really the largest concern, and in a virtual deployment, it is a non-issue with regard to configuration of hyperthreading. The guest VMs do not see the logical processors presented to the host, so they see no difference in processor count when hyperthreading is turned on or off. Where this concern can become an issue for guest VMs is in the number of virtual CPUs presented to the VM. Don’t allocate more virtual CPUs to your Exchange server VMs than are necessary based on sizing calculations. If you allocate extra virtual CPUs, you can run into the same class of issues associated with hyperthreading on physical deployments.

In summary:

- If you have a physical deployment, turn off hyperthreading.
- If you have a virtual deployment, you can enable hyperthreading (best to follow the recommendation of your hypervisor vendor), and:
  - Don’t allocate extra virtual CPUs to Exchange server guest VMs.
  - Don’t use the extra logical CPUs exposed to the host for sizing/capacity calculations (see the sketch below, and the hyperthreading guidance at https://aka.ms/e2013sizing, for further details).
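As a quick illustration of that last point, here is a minimal sketch that separates the physical core count (what sizing calculations should use) from the hyperthreaded logical processor count on a host:

```powershell
# Minimal sketch: compare physical cores with logical processors on a host.
# Sizing and capacity calculations should be based on physical cores, not on
# the extra logical processors exposed when hyperthreading is enabled.
$cpus = Get-CimInstance -ClassName Win32_Processor

$physicalCores     = ($cpus | Measure-Object -Property NumberOfCores -Sum).Sum
$logicalProcessors = ($cpus | Measure-Object -Property NumberOfLogicalProcessors -Sum).Sum

"Physical cores:      $physicalCores"
"Logical processors:  $logicalProcessors"
```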
Jeff Mealiffe
Principal PM Manager, Office 365 Customer Experience

Accelerate your RDS and VDI migration to Windows Virtual Desktop

Learn about Azure tools to help migrate Remote Desktop Services (RDS) and Virtual Desktop Infrastructure (VDI) environments: https://docs.microsoft.com/en-us/azure/virtual-desktop/virtual-desktop-fall-2019/create-host-pools-powershell-2019