Azure Friday
An overview of Azure Data Explorer (ADX) | Azure Friday
Manoj Raheja joins Lara Rubbelke to demonstrate Azure Data Explorer (ADX) and provide an overview of the service from provisioning to querying. ADX is a fast, fully managed data analytics service for real-time analysis of large volumes of streaming data. It brings together a highly performant, scalable cloud analytics service with an intuitive query language to deliver near-instant insights.

Azure Data Explorer overview
Azure Data Explorer docs
Azure Data Explorer pricing
Create a free account (Azure)
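Since the episode walks from provisioning to querying, here is a minimal sketch of standing up an ADX cluster and database with the Azure CLI. It assumes the kusto CLI extension; the cluster name, database name, SKU, and retention periods are placeholders rather than values from the episode, so check the current CLI reference before relying on the exact parameters.

# Hedged sketch: provision an ADX (Kusto) cluster and database with the Azure CLI.
# Assumes the "kusto" CLI extension; names, SKU, and retention periods are placeholders.
az extension add --name kusto

az group create --name adx-demo-rg --location eastus

# Create the cluster (SKU name/tier/capacity are illustrative; pick one that fits your workload)
az kusto cluster create \
  --cluster-name adxdemocluster \
  --resource-group adx-demo-rg \
  --location eastus \
  --sku name="Standard_D11_v2" tier="Standard" capacity=2

# Add a database with illustrative retention and cache settings
az kusto database create \
  --cluster-name adxdemocluster \
  --database-name telemetry \
  --resource-group adx-demo-rg \
  --read-write-database location="eastus" soft-delete-period="P30D" hot-cache-period="P7D"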
Azure SIT, UAT and Production Environment Deployment

Hi, I am currently working for a client where we are migrating their IT estate to Azure. This involves migrating databases and re-engineering numerous integrations using Azure Functions / Logic Apps / Data Factory. All in all, we have nearly 200 resources contained within a resource group. We have also created VMs, containers, NSGs, databases, APIs, VNets, etc. We developed the integrations using a combination of Visual Studio and the Azure portal, and all code has been written in C#. The analysis required to re-develop the existing integrations has been substantial, and the previous integrations used different, older technologies. I did consider creating the dev resources using IaC, but the build has been an ongoing process and integrations have been delivered incrementally. We are now at the end of the build and I need to recreate the resources in SIT, UAT and eventually production. I had envisaged I could export the ARM resource group template and refactor it. However, this does not seem possible, as I receive an error stating that the 200-resource limit was exceeded. Could somebody recommend how this can be achieved instead of manually creating resources in each environment? I thought Azure would be agile and would provide a capability to recreate resources at speed. If this is not possible, then I will have to consider not proposing Azure for future clients and bids. Also, could somebody recommend (as a lesson learnt) how we could have created a repeatable process at the outset of the project? I was considering developing Terraform scripts, but due to timescales in the build and the need to analyse the integrations, this was dropped. Thanks, Mike
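For reference, a minimal sketch of the export-and-redeploy path described above, assuming the Azure CLI. The resource group names, the resource type filter, and the idea of slicing the export with --resource-ids to stay under the per-export limit are illustrative, and the exported JSON normally still needs refactoring (parameterising names, removing read-only properties) before it will deploy cleanly into SIT/UAT/production.

# Hedged sketch: export an existing resource group's ARM template, then redeploy it elsewhere.
# Resource group names are placeholders.
az group export --name myResourceGroup > template.json

# If the group is too large to export in one call, export it in slices by resource ID
# (here filtered to Function Apps as an example) and combine/refactor the pieces afterwards.
ids=$(az resource list \
  --resource-group myResourceGroup \
  --query "[?type=='Microsoft.Web/sites'].id" \
  --output tsv)
az group export --name myResourceGroup --resource-ids $ids > web-resources.json

# After refactoring/parameterising the template(s), deploy into the SIT environment's resource group.
az deployment group create \
  --resource-group myResourceGroup-sit \
  --template-file template.json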
ExpressRoute & Internet access

Hello all, I am trying to set up a server which has access to an internal network but can also be accessed via the internet. These servers will eventually be front-end web servers for an on-premises application. I have deployed a VNet with a single server and have successfully configured the ExpressRoute so that the server can be accessed from my internal network; I can browse, via IP, to a web server running on it. There is a single route table with BGP route propagation enabled; there is no default route from BGP as this is filtered out at the datacentre. The subnet that the server is in is associated with this route table. I have then created an Azure Firewall with a public IP, created a DNAT rule to NAT the external IP of the firewall to the server IP, allowing port 80, and ensured that the access policy (NSG) on the server NIC allows access from all IPs on port 80. I cannot browse to this server on port 80 via the external IP. If I remove all of the ExpressRoute configuration (virtual network gateway and connection), then I can browse to the server on port 80 via the external IP. It seems that the firewall works on its own and the ExpressRoute works on its own, but when I use the two together there is a conflict which prevents the external connectivity from working. What am I missing, please?
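For context, the DNAT rule described above can be expressed roughly as follows with the Azure CLI, assuming the azure-firewall extension and classic firewall rules rather than a firewall policy; the firewall name, rule names, and both IP addresses are placeholders, not values from the post.

# Hedged sketch: DNAT inbound port 80 on the firewall's public IP to the web server.
# Assumes the "azure-firewall" CLI extension and classic rules (not a firewall policy).
# Firewall name, rule/collection names, and both IP addresses below are placeholders.
az extension add --name azure-firewall

az network firewall nat-rule create \
  --resource-group myResourceGroup \
  --firewall-name myFirewall \
  --collection-name inbound-web \
  --name web-http \
  --priority 100 \
  --action Dnat \
  --protocols TCP \
  --source-addresses '*' \
  --destination-addresses 20.0.0.4 \
  --destination-ports 80 \
  --translated-address 10.0.1.4 \
  --translated-port 80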
Today on Azure Friday: Durable Functions in Azure Functions

Chris Gillum joins Scott Hanselman to discuss a new extension of Azure Functions known as Durable Functions. Durable Functions is a programming model for authoring stateful, reliable, and serverless function orchestrations using C# and async/await. For more information, see:

Durable Functions overview (docs)
Durable Task Framework extension for Azure Functions (GitHub repo)
Durable Functions and Bindings Extensibility Preview Announcement (blog)
Run Azure Functions from Azure Data Factory Pipelines | Azure Friday

Azure Functions is a serverless compute service that enables you to run code on demand without having to explicitly provision or manage infrastructure. Using Azure Functions, you can run a script or piece of code in response to a variety of events. Azure Data Factory (ADF) is a managed data integration service in Azure that enables you to iteratively build, orchestrate, and monitor your Extract, Transform, Load (ETL) workflows. Azure Functions is now integrated with ADF, enabling you to run an Azure function as a step in your data factory pipelines.

Azure Function activity in Azure Data Factory (docs)
Azure Functions now supported as a step in Azure Data Factory pipelines (blog post)
Azure Data Factory (docs)
Azure Data Factory (overview)
Azure Data Factory pricing
Create a free account (Azure)
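The integration described above is configured as an Azure Function activity inside a pipeline definition. As a rough, hedged illustration only: the factory, pipeline, function, and linked-service names below are placeholders, it assumes the datafactory CLI extension and an existing AzureFunction linked service, and the exact JSON shape should be checked against the Azure Function activity docs.

# Hedged sketch: define a pipeline whose only step calls an Azure Function, then create it in ADF.
# Assumes the "datafactory" CLI extension; all names and the request body are placeholders.
cat > pipeline.json <<'EOF'
{
  "activities": [
    {
      "name": "CallReportFunction",
      "type": "AzureFunctionActivity",
      "typeProperties": {
        "functionName": "GenerateDailyReport",
        "method": "POST",
        "body": "{\"reportDate\":\"2019-01-01\"}"
      },
      "linkedServiceName": {
        "referenceName": "AzureFunctionLinkedService",
        "type": "LinkedServiceReference"
      }
    }
  ]
}
EOF

az extension add --name datafactory
az datafactory pipeline create \
  --resource-group myResourceGroup \
  --factory-name myDataFactory \
  --name RunFunctionPipeline \
  --pipeline @pipeline.json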
Multiple Node Pools on Azure Kubernetes Service

One of the most impressive parts of moving applications into the cloud is the ability to apply different types of computing power to your application based on use case. By taking advantage of all the different compute types within Azure, users have access to a variety of CPUs and GPUs, with no upfront costs or concerns about racking the gear, and no fighting for better support with your vendor or having to buy the latest and greatest to keep up. Azure provides you with a number of options to help implement these various compute choices, and different use cases will often require you to decide which compute options to select and how they will improve portions of your application for the business. In this post we'll look at how you can use multiple types of compute within a Kubernetes cluster in order to use different resources for different parts of your application.

Getting Started

This is a post on understanding Azure Kubernetes Service (AKS) node pools. It assumes that you've completed the Azure fundamentals tutorials and that you know how to create and use Azure Kubernetes Service to launch a cluster. If you'd like to start with these, check out these Microsoft Learn modules and blog posts:

Microsoft Docs Learn Azure Fundamentals
Microsoft Docs Learn Introduction to Azure Kubernetes Service
Kubernetes Terminology for Beginners
Azure Kubernetes Service - A Beginner's Guide

What's a node in Kubernetes?

From the official Kubernetes docs: "The Kubernetes node has the services necessary to run application containers and be managed from the master systems. A node is a worker machine in Kubernetes, previously known as a minion. A node may be a VM or physical machine, depending on the cluster. Each node contains the services necessary to run pods and is managed by the master components. The services on a node include the container runtime, kubelet and kube-proxy. See The Kubernetes Node section in the architecture design doc for more details."

A node could be your local computer using minikube, a server in a datacenter, or a virtual machine in the cloud.

What's a node pool?

Node pools let you apply different types of CPU and storage to your AKS-managed nodes: each pool is a subset of VMs with like hardware and configuration that you can scale based on your utilization. From Create and manage multiple node pools for a cluster in AKS on the Microsoft docs website: "In Azure Kubernetes Service (AKS), nodes of the same configuration are grouped together into node pools. These node pools contain the underlying VMs that run your applications. The initial number of nodes and their size (SKU) is defined when you create an AKS cluster, which creates a default node pool. To support applications that have different compute or storage demands, you can create additional node pools. For example, use these additional node pools to provide GPUs for compute-intensive applications, or access to high-performance SSD storage."

What's a GPU-optimized VM?

From Microsoft docs: "GPU optimized VM sizes are specialized virtual machines available with single or multiple NVIDIA GPUs. These sizes are designed for compute-intensive, graphics-intensive, and visualization workloads."

Solving a problem

Even fake companies need help

Tailwind Traders is the world's biggest fake company, with a reference app used for this and many other examples. You can review this video from Azure Friday about the reference apps.
You can also review the application repository here; it's open to the world to build with, using all the tools in this blog post. The Tailwind Traders website at its core is an e-commerce site with a number of extremely critical services for which the company requires reliability and uptime in order to successfully execute new orders. So that Tailwind Traders can understand their customers better, much of the user's experience is stored in a NoSQL database in Azure Cosmos DB. Cosmos DB provides Tailwind Traders with a low-latency, high-throughput database that works with the MongoDB API. Daily reports on this information are run and delivered to a web-presentable front end. The reports app uses a number of algorithms that the data science team at Tailwind Traders developed along with the application engineers; these are extremely compute-intensive and seem to be taxing the capabilities of the more general-purpose CPU offerings in Azure. The Tailwind Traders engineering team recently looked at the processing times of their daily reports and recognized that, even though they were wise enough to architect their application into microservices, the turnaround time of some reports (nearly 8-9 hours) could potentially be reduced by utilizing GPUs available in Microsoft Azure. Tailwind Traders will want to use standard CPUs for the web front-end portion of the app. The company's CIO would like the reports on their collected data processed more quickly on GPUs so that the business can place the most popular and impressive products in front of potential customers based on the data captured. The more information Tailwind Traders is able to gain on their customers' experience, the more they can improve it.

Proposed Solution

TWT's team will build an AKS cluster with multiple node pools. The two pools will contain different types of computing power to handle the microserviced application workloads. To begin, the team will create a new AKS cluster with a default node pool, nodepool01, that will handle the web app front end using a B2s-series VM for the node pool. You can also create node pools and specify your specific CPU type with the az CLI tool rather than doing so in the portal:

# Create a resource group in East US
az group create --name myResourceGroup --location eastus

# Create a basic two-node AKS cluster
az aks create \
  --resource-group myResourceGroup \
  --name cluster01 \
  --vm-set-type VirtualMachineScaleSets \
  --node-count 2 \
  --generate-ssh-keys \
  --kubernetes-version 1.15.7 \
  --load-balancer-sku standard

# Add a node pool to the cluster
az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name cluster01 \
  --name nodepool02 \
  --node-count 3 \
  --kubernetes-version 1.15.7

This cluster will use B-series burstable VMs, which are ideal for workloads that do not need the full performance of the CPU continuously, like web servers, small databases, and development and test environments - exactly the solution for a web front-end service and its required components such as a load balancer. The TWT team chose to create a secondary node pool, nodepool02, to handle their GPU-intensive workloads, such as processing data for the reports that are used to improve customer experience. The secondary node pool is created using an NC6s_v2-class virtual machine. The TWT tech team can then create the node pool using the az aks nodepool add command again.
This time, specify the name nodepool02, and use the --node-vm-size parameter to specify the Standard_NC6 size:

az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name nodepool02 \
  --node-count 1 \
  --node-vm-size Standard_NC6 \
  --no-wait

The TWT team can begin assigning specific pods to nodes by scheduling pods using taints and tolerations (a hedged sketch of this is included after the resource links below). By providing a dedicated pool of GPU-backed nodes, TWT will now be able to run their reports using the power of single or multiple NVIDIA GPUs.

More Info

There's more documentation available to you to find out the best way to implement node pools with AKS. Check out the links and videos provided below so you can begin using this powerful option to diversify compute options within your AKS cluster.

Resources

Microsoft Docs: az aks nodepool
Create and manage multiple node pools for a cluster in Azure Kubernetes Service (AKS)
GitHub Actions for deploying to Kubernetes service
Azure Kubernetes Service (AKS)
Multiple node pools in Azure Kubernetes Service (AKS) | Azure Friday
Getting production ready in Kubernetes

The Author

If you're running into issues and need assistance with AKS or any other Azure service, reach out to me:
Twitter: @jaydestro
Twitch: jaydestro
Github: jaydestro
I am always happy to hear from developers and engineers who are trying to implement new services to improve the experience for their customers.
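As referenced above, here is a hedged sketch of dedicating the GPU pool to the reporting workload with a taint and then scheduling a pod onto it with a matching toleration. The taint key/value, pod name, image, and label selector are illustrative placeholders, not values from the original post, and the GPU resource request assumes the NVIDIA device plugin is installed on the GPU nodes.

# Hedged sketch: taint the GPU node pool at creation time so only tolerating pods land on it.
# The taint (sku=gpu:NoSchedule), pod name, image, and labels below are placeholders.
az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name nodepool02 \
  --node-count 1 \
  --node-vm-size Standard_NC6 \
  --node-taints sku=gpu:NoSchedule

# Schedule the reports workload onto the GPU pool with a matching toleration and node selector.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: reports-processor
spec:
  containers:
  - name: reports-processor
    image: tailwindtraders/reports-processor:latest
    resources:
      limits:
        nvidia.com/gpu: 1
  tolerations:
  - key: "sku"
    operator: "Equal"
    value: "gpu"
    effect: "NoSchedule"
  nodeSelector:
    agentpool: nodepool02
EOF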
Introduction to Azure Integration Service Environment for Logic Apps | Azure Friday

The Integration Service Environment (ISE) provides a dedicated Logic Apps runtime that can directly integrate with systems in a virtual network, including on-premises systems via ExpressRoute.

Announcing Azure Integration Service Environment for Logic Apps (blog post)
Connect to Azure virtual networks from Azure Logic Apps by using an integration service environment (ISE) (docs)
Azure Logic Apps (docs)
What is Azure Virtual Network? (docs)
Create a free account (Azure)

Follow @SHanselman
Follow @LogicAppsIO
Follow @kevinlam_msft
Never miss an episode: Follow @AzureFriday
Using HashiCorp Vault with Azure Kubernetes Service (AKS) | Azure Friday

As the adoption of Kubernetes grows, secret management tools must integrate well with Kubernetes so that sensitive data can be protected in the containerized world. In this episode, Yoko Hakuna demonstrates HashiCorp Vault's Kubernetes auth method for verifying the validity of containers requesting access to secrets.

HashiCorp Vault project website
Get started with Vault
Kubernetes Auth Method doc
Vault Agent with Kubernetes guide
Vault Agent doc
How does Vault encrypt data?
Open Source Security Best Practices for Developers, Contributors, and Maintainers (The Open Source Show)
Create a free account (Azure)
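To make the Kubernetes auth method concrete, here is a minimal sketch of enabling and using it with the Vault CLI. It assumes VAULT_ADDR and a Vault token are already configured, that the config commands run from a pod (or with access to a service account token and cluster CA), and that the role, service account, namespace, and policy names are placeholders.

# Hedged sketch: enable the Kubernetes auth method and let a service account log in to Vault.
# Role, service account, namespace, and policy names are placeholders.
vault auth enable kubernetes

# Configure the method with the cluster API address, a token-reviewer JWT, and the cluster CA
# (paths shown are the in-pod service account mount; run this from a pod or adapt the paths).
vault write auth/kubernetes/config \
  token_reviewer_jwt="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
  kubernetes_host="https://$KUBERNETES_PORT_443_TCP_ADDR:443" \
  kubernetes_ca_cert=@/var/run/secrets/kubernetes.io/serviceaccount/ca.crt

# Bind a Vault role to a specific service account and namespace, with an attached policy.
vault write auth/kubernetes/role/webapp \
  bound_service_account_names=webapp-sa \
  bound_service_account_namespaces=default \
  policies=webapp-policy \
  ttl=1h

# From inside a pod running as webapp-sa, log in by presenting the pod's service account JWT.
vault write auth/kubernetes/login \
  role=webapp \
  jwt="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)"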