labs
Step-By-Step: Enabling Hyper-V for Use on Windows 11
Want to use Hyper-V on Windows 11? Hyper-V is a virtualization technology that is valuable not only for developers and IT Professionals, but also for college and university students. This step-by-step guide will show you how to enable Hyper-V on your Windows 11 machine.
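The guide itself walks through the Windows Features dialog; as a quick reference, Hyper-V can also be enabled from an elevated PowerShell prompt. A minimal sketch (a reboot is still required afterwards, and the machine must run a Windows 11 edition that supports Hyper-V, such as Pro or Enterprise):
# Run from an elevated PowerShell session on Windows 11 Pro/Enterprise/Education
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All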
Step-By-Step: How to Create a Windows 11 VM on Hyper-V via PowerShell
This step-by-step guide outlines how to create a Windows 11 virtual machine (VM) on Hyper-V using PowerShell commands. By following these instructions, IT professionals can save time and effort by automating the process and ensuring that each VM is configured correctly. This method is particularly useful for organizations that need to deploy multiple VMs quickly and efficiently.
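The full article covers every step; as a flavor of the approach, here is a minimal sketch that creates a Generation 2 VM and enables the virtual TPM that Windows 11 requires. The name, memory size, switch and paths are illustrative, not the article's exact values:
# Illustrative values - adjust the name, memory, switch and paths for your machine
$vmName = "Win11-Lab"
New-VM -Name $vmName -Generation 2 -MemoryStartupBytes 4GB `
    -NewVHDPath "C:\Hyper-V\$vmName.vhdx" -NewVHDSizeBytes 64GB -SwitchName "Default Switch"
Set-VMProcessor -VMName $vmName -Count 2

# Windows 11 needs a TPM; a Generation 2 VM gets a vTPM once a key protector is set
Set-VMKeyProtector -VMName $vmName -NewLocalKeyProtector
Enable-VMTPM -VMName $vmName

# Attach the Windows 11 ISO and start the VM to begin setup
Add-VMDvdDrive -VMName $vmName -Path "C:\ISO\Win11.iso"
Start-VM -Name $vmName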
How to reduce your Azure invoice?
It is no secret that cloud computing billing is based on the consumption of your resources, the Pay As You Go (PAYG) model, which is one of its advantages. One of the best practices before deploying to the cloud is to estimate the cost of the resources you are going to deploy, simply to get an approximate idea of what they will cost you. The problem is that this estimate is rarely made. For some companies, costs then balloon and it can be difficult to know where to start to reverse the trend. Some of them, after a bad financial experience, decide to stop their journey to the public cloud, considering it too expensive. Others define rules that are so strict they work against innovation and make daily life harder for Innovation and R&D teams. Fortunately, FinOps principles have emerged to help companies gain a good understanding of their cloud costs. FinOps is the contraction of Finance and Operations. The FinOps Foundation defines FinOps as an evolving cloud financial management discipline and cultural practice that enables organizations to get maximum business value by helping engineering, finance, technology and business teams to collaborate on data-driven spending decisions. Like any cloud provider, Azure offers several services and solutions to reduce your invoice and ensure that the costs associated with your resources do not become blockers to cloud adoption. Let's look together at the levers that will allow you to reduce your Azure bill.

Azure Calculator
As mentioned earlier, to avoid bad surprises and find out the cost of the resources you deploy, one of the best practices is to estimate your costs before deploying anything. I can see you coming: an estimate is still only an estimate, but it gives you an idea of the budget for an application or a project. Let's imagine that the business wants to launch a new take-out meal ordering service. Once you have gathered the various elements, such as the requirements and the technical, legal and financial constraints, you can move on to defining your architecture. Once it is defined, it is time to estimate it, because the business almost certainly does not have an infinite budget. This is precisely where the Azure Pricing Calculator comes into its own. To estimate the cost of your services, simply select them from the list and enter the configuration you are considering. Several parameters come into play, and some are not easy to establish, such as the incoming and outgoing traffic of your application, the bandwidth, or the storage required. But remember, we are not looking for a precise amount, which is impossible to obtain, but an order of magnitude. Do not hesitate to involve a cloud architect, who will have defined the solution, to get as close as possible to a coherent estimate. And as Leonardo da Vinci said: not to foresee is already to groan.

Delete unused/orphaned resources
Another simple way to lower your bill is to remove resources you do not use. Keep in mind that you are charged even when you are not using a service, so you need to hunt for unused resources. At first glance this seems simple when you have a small team deploying resources, but imagine a large organization with dozens, or even hundreds, of people deploying resources: it quickly becomes a real brain teaser. Another good practice on Azure is to tag your resources. Tags let you add information in the form of key/value pairs to identify, for example, the owner of a resource, the person who deployed it, or the project to which the resource belongs. If you do not know whether a resource is still in use, you can at least contact its owner or the person who deployed it to find out, and see whether you can clean it up. Remember to also delete orphaned resources. By orphaned resources, I mean those that are no longer attached to any service but for which you are still charged. These can be VM disks, public IP addresses, or even App Service plans, which are billed even though the resources that used them are gone. Here is, for example, the PowerShell command to list unattached managed disks:
Get-AzDisk | Where-Object {$_.DiskState -eq 'Unattached'}
And the PowerShell command to list public IP addresses not associated with any resource:
Get-AzPublicIpAddress | Where-Object {$_.IpConfiguration -eq $null}
It can be worth turning this into a script that runs periodically to remove these unused resources, as sketched below.
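A minimal sketch of such a cleanup script, building on the two queries above; the -WhatIf switch is kept on purpose so you can review what would be deleted before removing anything:
# List unattached managed disks and unassociated public IPs, then remove them.
# Keep -WhatIf on a first run to preview the deletions; drop it to actually delete.
$orphanDisks = Get-AzDisk | Where-Object { $_.DiskState -eq 'Unattached' }
foreach ($disk in $orphanDisks) {
    Remove-AzDisk -ResourceGroupName $disk.ResourceGroupName -DiskName $disk.Name -Force -WhatIf
}

$orphanIps = Get-AzPublicIpAddress | Where-Object { $_.IpConfiguration -eq $null }
foreach ($ip in $orphanIps) {
    Remove-AzPublicIpAddress -ResourceGroupName $ip.ResourceGroupName -Name $ip.Name -Force -WhatIf
}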
Stop and start your VMs
For those who use VMs, one fairly simple way to reduce your bill is to stop and start your resources. Drawing a parallel with your electricity bill, especially in these complicated times: when you are not at home, you turn off the lights. It is the same with Azure, and with the cloud in general. You may ask: but what do I do with my production VMs? Precisely, the target is not necessarily the production environment but the others, such as development, acceptance, UAT, staging and pre-production. The VMs in these environments almost certainly do not need to run around the clock, so you can consider turning them off at night and on weekends. These are significant costs that you can eliminate. Of course, shutdown and startup can be automated: shutdown directly at the VM level with the Auto-Shutdown option, and both stop and start with an Automation Account, for example through runbooks driven by tags, which considerably reduces the effort.
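As an illustration of the tag-driven approach, here is a minimal sketch that stops every VM carrying a hypothetical AutoShutdown=true tag; the same pattern with Start-AzVM can be scheduled in an Automation runbook for the morning:
# Stop all VMs tagged AutoShutdown=true (hypothetical tag name), e.g. every evening.
# Stop-AzVM deallocates the VM by default, which is what stops the compute billing.
$vms = Get-AzVM | Where-Object { $_.Tags['AutoShutdown'] -eq 'true' }
foreach ($vm in $vms) {
    Stop-AzVM -ResourceGroupName $vm.ResourceGroupName -Name $vm.Name -Force
}

# The morning counterpart simply starts them again:
# foreach ($vm in $vms) { Start-AzVM -ResourceGroupName $vm.ResourceGroupName -Name $vm.Name }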
Right size your resources
As you know, for many Azure services the price is partly calculated from the amount of allocated resources, such as compute, network and storage. The more resources you define, the higher the price. You must therefore be vigilant about right-sizing your resources according to the service and the needs you have. It is pointless to provision 8 vCPUs and 16 GB of memory to host a web server on a VM, or a 26 GB cache for Azure Cache for Redis, if you only have 4 GB of data to store! Take a concrete example: I work 10 km from my home and have the choice between a small electric car and a large diesel 4x4; obviously I opt for the small car, because it is the better fit for my needs. On Azure it is the same: size your resources according to your needs. Of course, you can keep an additional margin to absorb workload peaks.

VM instance type versions
Azure offers different types of VM instances, such as general purpose, memory optimized and high performance compute, to name a few, each with different configurations depending on your needs. But that is not all. We are talking about cloud computing: the services offered run on physical servers. These physical servers are arranged in racks, and these racks live in Microsoft datacenters. Like any physical equipment, they must be maintained and replaced in the event of failure. When Microsoft adds hardware or replaces obsolete or defective hardware, the new hardware is generally more powerful and more affordable than what was installed previously. (Obviously, this applies in stable economic periods, not in a context of war in Eastern Europe, a shortage of electronic components, or exploding energy prices, as is the case now.) This is how new versions of instance types appear, suffixed v2, v3, v4, v5, and in normal times the most recent versions offer more attractive prices than the old ones. You can also find Promo instance types available for a limited period of time, which can be very interesting for their lower prices.

Azure VM Spot
Microsoft also lets you use computing capacity that is sitting idle at a given time to achieve savings of up to 90% off public VM prices, with the Azure VM Spot feature. How does it work exactly? Sometimes Azure's infrastructure is not fully used, and rather than leaving it idle, Microsoft offers this compute at a lower price. I can already see the question on your lips: what happens when Azure needs this computing power again? It is simple: depending on the configuration you have defined, your VM is either stopped or deleted. You can even set a maximum price, below the public price, that you are willing to pay; once the spot price goes above it, your VM is likewise stopped or deleted. If you plan to use this option, Azure Resource Graph can give you the price history over the last 90 days and the eviction rates over the last 28 days to help you set this price. As you will have understood, Spot VMs cannot be used for every scenario, and even less for critical or production environments. On the other hand, they are a perfect fit for workloads that can handle interruptions, such as batch processing jobs, development environments and tests, among others. A notification is sent to the system 30 seconds before the eviction of your VM, so it can anticipate the stop or deletion. Azure VM Spot applies to Linux and Windows VMs, as well as to Virtual Machine Scale Sets (VMSS).

Price varies by region
Another factor that affects the price of a service is the region in which it is deployed. How is this explained? The price of a service is obviously based on the cost of the hardware it runs on, but other parameters come into play, such as:
- The labor cost of the country where the datacenter is located
- The cost of the energy that keeps the datacenter running
Other elements can also influence the price, such as:
- Financial incentives granted by certain countries to attract datacenters
- The national or global economic context
- The geopolitical stability of a region
This factor should obviously not be the one that decides the region in which you deploy your application, but at least you are now aware that a service does not have the same cost in the US as in Brazil.
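You can check this for yourself with the public Azure Retail Prices API, which can be queried without authentication. A minimal sketch comparing the pay-as-you-go price of a hypothetical VM size in two regions, assuming the filter fields documented for that API:
# Compare the retail (pay-as-you-go) price of a D2s v3 Linux VM in two regions
$uri = "https://prices.azure.com/api/retail/prices?`$filter=" +
       "serviceName eq 'Virtual Machines' and armSkuName eq 'Standard_D2s_v3' " +
       "and priceType eq 'Consumption' and (armRegionName eq 'eastus' or armRegionName eq 'brazilsouth')"
$response = Invoke-RestMethod -Uri $uri

# Keep only the Linux, non-Spot meters and show the hourly retail price per region
$response.Items |
    Where-Object { $_.productName -notmatch 'Windows' -and $_.meterName -notmatch 'Spot|Low Priority' } |
    Select-Object armRegionName, meterName, retailPrice, unitOfMeasure |
    Sort-Object armRegionName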
Scaling mechanisms
Scaling, or autoscaling, is what allows a service or an application to adapt to a peak in traffic, requests, or CPU/memory consumption by provisioning new resources to absorb the extra workload. There are two scaling modes:
- Horizontal scaling, or scale-out, is when you provision new instances of a service or application. It can be manual or automatic.
- Vertical scaling, or scale-up, is when you increase the capabilities of an existing instance. This action is generally manual, because it can cause a service interruption, especially at the VM level.
The choice of scaling mode depends on your application. Scale-out is generally used when the application is stateless, meaning that no application data is stored on the instance that executes the workload. Conversely, scale-up is used when data is stored at the instance level, in which case we speak of a stateful application. Here is a link that explains the difference between a stateless and a stateful application in more detail. Horizontal scaling can of course be manual, but it is often automatic. Autoscaling is based on criteria that you define, such as a rule that deploys two new instances of your application if the average CPU has been above 70% for the last ten minutes. Even more interesting, you can also define de-provisioning rules, i.e. deleting instances on the same principle, for stateless applications of course. So when the average CPU stays below 40% over the last fifteen minutes, an instance can be removed, and since no data is stored there, there is no impact on your application. Other criteria are available, such as a schedule that adds or removes instances at fixed hours. This is useful if you can anticipate your traffic, for example video game servers, which are busier in the evening and on weekends than during the day. In short, scaling mechanisms are valuable allies in the quest to reduce costs, provided that you define the right trigger thresholds, without impacting the service level of your application.
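Autoscale rules themselves are defined in Azure Monitor autoscale settings (portal, ARM/Bicep templates, or the Az.Monitor cmdlets). As a minimal illustration of manual scale-out, here is how the instance count of a hypothetical App Service plan can be changed in one line:
# Manually scale out a (hypothetical) App Service plan to 3 instances
Set-AzAppServicePlan -ResourceGroupName "rg-demo" -Name "asp-demo" -NumberofWorkers 3

# Scale back in to 1 instance outside business hours
Set-AzAppServicePlan -ResourceGroupName "rg-demo" -Name "asp-demo" -NumberofWorkers 1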
Reserved Instances
One of the most widely used levers to control your costs in Azure is undoubtedly Reserved Instances, or RI. The idea is very simple: make a contractual commitment over a period of 1 to 3 years on the use of a service, in order to benefit from very significant discounts, which can sometimes exceed 70% of the initial price on certain services. Obviously, the longer the commitment period, the more attractive the discount. Once you make a reservation, you are no longer billed the public price but the "reserved" price that depends on your commitment. Many services can be reserved, such as VMs, databases (SQL, MySQL, PostgreSQL, Redis), other compute resources and also storage. Be careful, though, when setting up reservations. Because you commit for 1 to 3 years, make sure the reservation covers a service whose lifetime will be longer than your commitment: once the reservation is made, you pay even if the resource is not used.

Azure Hybrid Benefit
Azure Hybrid Benefit is a program that lets you bring your existing licenses, such as Windows Server and Microsoft SQL Server, but also Red Hat Enterprise Linux (RHEL) or SUSE Linux Enterprise Server (SLES), to significantly reduce the licensing costs of your VMs hosted on Azure and of the SQL Database service. How does it actually work? On the Windows Server or SQL Server side (on a VM or on SQL Database), you must hold one of the following licenses to benefit from it on Azure:
- Windows Server Datacenter Edition with Software Assurance
- Windows Server Standard Edition with Software Assurance
- SQL Server Enterprise Edition core licenses with Software Assurance or qualifying subscription licenses
- SQL Server Standard Edition core licenses with Software Assurance or qualifying subscription licenses
Microsoft describes Software Assurance as "a comprehensive program that includes a unique set of technologies, services, and entitlements to enable you to effectively deploy, manage, and use Microsoft products." The idea is also to make it easier to move workloads to Azure, as Software Assurance provides 180 days of concurrent use between Azure and your on-premises environment, which is great for a smooth migration. The principle is much the same for RHEL and SUSE: you come to Azure with your own licenses. In addition, during Microsoft Ignite, which took place from October 12 to 14, 2022, Microsoft extended Azure Hybrid Benefit by offering the possibility to deploy Azure Kubernetes Service on Windows Server and Azure Stack HCI at no additional cost in your on-premises environments if you have Software Assurance. Best of all, it is also possible to combine Azure Hybrid Benefit with Reserved Instances to reduce your costs even further.
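On a VM, enabling Azure Hybrid Benefit boils down to setting the license type. A minimal sketch for an existing Windows Server VM (resource group and VM name are hypothetical):
# Enable Azure Hybrid Benefit on an existing Windows Server VM
$vm = Get-AzVM -ResourceGroupName "rg-demo" -Name "vm-win-demo"
$vm.LicenseType = "Windows_Server"
Update-AzVM -ResourceGroupName "rg-demo" -VM $vm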
Azure savings plan for compute
During the same Microsoft Ignite, Microsoft announced a new way to optimize your costs: Azure savings plan for compute. As its name suggests, it only concerns compute resources and is a commitment to spend a fixed hourly amount over a period of 1 or 3 years, as with reservations. It can moreover be combined with Reserved Instances and with Azure Hybrid Benefit. According to Microsoft's documentation, with a 3-year commitment Azure savings plan for compute can reduce your costs by up to 65% compared to public prices. Let's take a concrete example in which you commit to €5 per hour. If your compute usage exceeds that €5 in a given hour, the additional consumption is billed at public prices. Conversely, if you do not use enough compute to reach your €5, you still pay the €5. What is interesting is that this amount does not apply to a single resource but to all of your services running compute resources, with one limitation all the same: only supported services count. At the time of writing, Azure savings plan for compute applies to your resources deployed worldwide for the following services:
- Azure Virtual Machines
- Azure Container Instances
- Azure Functions Premium plans
- Azure App Service
- Azure Dedicated Host
So, to come back to my example, my €5 will be spread across all of my resources using the services above.

Azure DevTest Labs
As you know, one of the advantages of the cloud is being able to quickly test a service or a feature by deploying what you need. For example, it is very simple and quick to deploy identical environments for training sessions. In these specific cases we will not really reduce the costs of the deployed resources, but we can control them. Azure DevTest Labs lets you deploy resources from an existing repository, which can be Linux or Windows VMs, App Services, Oracle or SQL databases, or even Dynamics instances, to name a few. You can of course deploy your own images as well. I mentioned cost control earlier: going back to my training example, you can define the type of instance to deploy and limit their number. What is interesting is that you can schedule the stop and start we talked about earlier, but also an expiration date. Once that date is reached, the resources are automatically deleted, so you avoid paying for unused resources that you forgot to delete.

Azure Cost Management + Billing
Azure Cost Management + Billing is a suite of tools that helps you analyze, manage and optimize the costs of your workloads. We mentioned the value of tags earlier; here you can break down the cost of your resources by tag, by resource type or by resource group, for example. You have probably already received an Azure invoice, and we can agree that it is not always easy to decode: Azure Cost Management + Billing will help you make sense of it. It also lets you create monthly, quarterly or annual budgets and receive alerts when a threshold you have configured is reached. If the budget is exceeded, the resources are not stopped or deleted; this is purely a notification system. Azure Cost Management + Billing is also where you retrieve your invoices and view information about Reserved Instances, Azure Hybrid Benefit and Azure savings plan for compute. In short, it is THE service that lets you track your Azure costs.

Azure Advisor
Another service that can help reduce your bill is Azure Advisor. It offers recommendations on good practices across several areas:
- Cost
- Security
- Reliability
- Operational excellence
- Performance
Let's focus on what interests us today: cost. You will find recommendations to help you optimize and reduce your costs by identifying idle resources, oversized resources, or resources for which Reserved Instances could be considered. To do this, Azure Advisor relies on your resource usage history and the associated metrics, so it requires a minimum amount of usage before its recommendations become relevant. You can also adjust the criteria it uses so that the advice matches your activity even more closely.
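Cost recommendations can also be pulled from the command line. A minimal sketch using the Az.Advisor module:
# List Azure Advisor cost recommendations for the current subscription
# (pipe to Format-List to inspect all properties; names vary slightly across module versions)
Get-AzAdvisorRecommendation | Where-Object { $_.Category -eq 'Cost' }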
A coherent architecture
It sometimes happens that the proposed or chosen architecture is disproportionate to a simple need. Take the example of a business unit that wants to create a new microservice running on .NET 6 to host an API. After a quick analysis on their side, the application architect considers that AKS is ideal for the need. As a cloud architect, the first thing to do is understand the needs and constraints of the business unit in order to challenge them. After several exchanges, it turns out that the critical points for the business unit are the scalability of the platform and the deployment of new versions of its API. The cloud architect can therefore challenge the choice of AKS by proposing an Azure App Service running containers, which addresses scalability with native support for scale-out, and also the delivery of new versions of the API thanks to deployment slots, which avoid service disruptions during releases. In short, the idea is always to keep things as simple as possible, based on the client's inputs, and to design the architecture they need rather than the one that makes you dream, or that is "sexy", and that ends up generating costs that are not legitimate.

Get accompanied by a partner
And if you are not in a position to challenge your interlocutors, do not hesitate to be accompanied by a partner, who can both challenge the choices and carry out a complete analysis to help you reduce your bill. At Grow Una (external link removed by moderator), we support our customers across the Azure ecosystem at different levels of expertise:
- Day-to-day support
- Architecture definition
- Deployment and operational maintenance of your cloud environments
- Migration of your application assets to Azure
- Azure training and upskilling of your teams
All while implementing DevOps methodologies and FinOps principles. So, as you can see, several solutions can help you decrease your Azure invoice, and now it is your turn to try them!
Using Visual Studio Code from a docker image locally or remotely via VS Online
A development container is a running Docker container with a well-defined tool/runtime stack and its prerequisites. The Remote - Containers extension in the Remote Development extension pack allows you to open any folder mounted into or inside a dev container and take advantage of VS Code's full development feature set.
Using Microsoft Learn new API to enable amazing content for a blended learning curricula
Welcome to the new Microsoft Learn Catalog API (Beta). The Microsoft Learn Catalog API (beta) lets you send a web-based query to Microsoft Learn and get back details about published content such as titles, products covered, and links to the training.
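As a taste of what such a query looks like, here is a minimal sketch against the public catalog endpoint; the query parameters and response fields are assumptions based on the documented beta contract and may need adjusting:
# Query the Microsoft Learn Catalog API (beta) for modules about Azure
$catalog = Invoke-RestMethod -Uri "https://learn.microsoft.com/api/catalog/?type=modules&product=azure&locale=en-us"

# Show a few module titles and their training links
$catalog.modules | Select-Object -First 5 title, url | Format-Table -AutoSize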
Windows Server 2012 R2 Essentials technical training series now available on Microsoft Virtual Academy
First published on TechNet on Feb 26, 2014 [This post comes to us courtesy David Fabritius from Product Marketing] Have you been looking for in-depth technical readiness content on Windows Server 2012 R2 Essentials and the new Windows Server Essentials Experience role? In this comprehensive training series, we cover the gamut, from deployment options to the end user experience.