Exploring Azure OpenAI Assistants and Azure AI Agent Services: Benefits and Opportunities
In the rapidly evolving landscape of artificial intelligence, businesses are increasingly turning to cloud-based solutions to harness the power of AI. Microsoft Azure offers two prominent services in this domain: Azure OpenAI Assistants and Azure AI Agent Services. While both services aim to enhance user experiences and streamline operations, they cater to different needs and use cases. This blog post delves into the details of each service, its benefits, and the opportunities it presents for businesses.

Understanding Azure OpenAI Assistants

What Are Azure OpenAI Assistants?

Azure OpenAI Assistants are designed to leverage the capabilities of OpenAI's models, such as GPT-3 and its successors. These assistants are tailored for applications that require advanced natural language processing (NLP) and understanding, making them ideal for conversational agents, chatbots, and other interactive applications.

Key Features

- Pre-trained Models: Azure OpenAI Assistants use pre-trained models from OpenAI, so they come with a wealth of knowledge and language understanding out of the box. This reduces the time and effort required to train models from scratch.
- Customizability: While the models are pre-trained, developers can fine-tune them to meet specific business needs, allowing for personalized experiences that resonate with users.
- Integration with the Azure Ecosystem: Azure OpenAI Assistants integrate seamlessly with other Azure services, such as Azure Functions, Azure Logic Apps, and Azure Cognitive Services, so businesses can build comprehensive solutions that combine multiple Azure capabilities.

Benefits of Azure OpenAI Assistants

- Enhanced User Experience: Advanced NLP capabilities make interactions more natural and engaging, which improves customer satisfaction and loyalty.
- Rapid Deployment: The availability of pre-trained models allows businesses to deploy AI solutions quickly, which is particularly valuable for organizations that want to implement AI without extensive development time.
- Scalability: Azure's cloud infrastructure ensures that applications built with OpenAI Assistants can scale to meet growing user demand without compromising performance.

Understanding Azure AI Agent Services

What Are Azure AI Agent Services?

Azure AI Agent Services provide a more flexible framework for building AI-driven applications. Unlike Azure OpenAI Assistants, which are limited to OpenAI models, Azure AI Agent Services allow developers to use a variety of AI models, including those from other providers or custom-built models.

Key Features

- Model Agnosticism: Developers can choose from a wide range of AI models and select the best fit for their specific use case. This flexibility encourages innovation and experimentation.
- Custom Agent Development: Azure AI Agent Services support the creation of custom agents that can perform a variety of tasks, from simple queries to complex decision-making processes.
- Integration with Other AI Services: Like OpenAI Assistants, Azure AI Agent Services can integrate with other Azure services, allowing for sophisticated AI solutions that combine multiple technologies.

Benefits of Azure AI Agent Services

- Diverse Use Cases: The ability to use any AI model opens a world of possibilities. Whether it's a specialized model for sentiment analysis or a custom-built model for a niche application, organizations can tailor their solutions to specific needs.
- Enhanced Automation: AI agents can automate repetitive tasks, freeing up human resources for more strategic activities and increasing efficiency and productivity.
- Cost-Effectiveness: Because various models can be used, businesses can choose cost-effective options that align with their budget and performance requirements.

Opportunities for Businesses

Improved Customer Engagement

Both Azure OpenAI Assistants and Azure AI Agent Services can significantly enhance customer engagement. By providing personalized, context-aware interactions, businesses can create a more satisfying user experience. For example, a retail company can use an AI assistant to provide tailored product recommendations based on customer preferences and past purchases.

Data-Driven Decision Making

AI agents can analyze vast amounts of data and provide actionable insights, enabling organizations to make informed decisions based on real-time data analysis. For instance, a financial institution can deploy an AI agent to monitor market trends and provide investment recommendations to clients.

Streamlined Operations

By automating routine tasks, businesses can streamline their operations and reduce operational costs. For example, a customer support team can use AI agents to handle common inquiries, allowing human agents to focus on more complex issues.

Innovation and Experimentation

The flexibility of Azure AI Agent Services encourages innovation. Developers can experiment with different models and approaches to find the most effective solutions for their specific challenges. This culture of experimentation can lead to breakthroughs in product development and service delivery.

Enhanced Analytics and Insights

Integrating AI agents with analytics tools can give businesses deeper insight into customer behavior and preferences. This data can inform marketing strategies, product development, and customer service improvements. For example, a company can analyze interactions with an AI assistant to identify common customer pain points and address them proactively.
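To make the assistant-style interactions described above a little more concrete, here is a minimal sketch of configuring and creating an assistant with the `openai` Python package's Azure client. The assistant name, instructions, and deployment name are illustrative placeholders, and the beta Assistants surface may change between SDK versions, so treat this as a sketch rather than a reference implementation.

```python
def build_assistant_config(name: str, instructions: str, model: str) -> dict:
    """Collect the arguments for creating an assistant in one place."""
    return {"name": name, "instructions": instructions, "model": model}

def create_assistant(client, config: dict):
    """`client` is an openai.AzureOpenAI instance, e.g.
    AzureOpenAI(azure_endpoint=..., api_key=..., api_version="2024-05-01-preview").
    """
    return client.beta.assistants.create(**config)

config = build_assistant_config(
    name="retail-recommender",  # hypothetical assistant name
    instructions="Recommend products based on the customer's stated "
                 "preferences and purchase history.",
    model="gpt-4o",             # the name of your Azure OpenAI model deployment
)
print(config["name"])
```

Once created, the assistant is addressed through threads and runs, which is what makes it suitable for the multi-turn, conversational scenarios discussed above.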
Conclusion

In summary, both Azure OpenAI Assistants and Azure AI Agent Services offer unique advantages for businesses looking to leverage AI technology. Azure OpenAI Assistants provide a robust framework for building conversational agents with advanced OpenAI models, making them ideal for applications that require sophisticated natural language understanding and generation. Their ease of integration, rapid deployment, and enhanced user experience make them a compelling choice for businesses focused on customer engagement. Azure AI Agent Services, on the other hand, offer unparalleled flexibility by allowing developers to use a variety of AI models. This model-agnostic approach encourages innovation and experimentation, enabling businesses to tailor solutions to their specific needs. The ability to automate tasks and streamline operations can lead to significant cost savings and increased efficiency.

Additional Resources

To further explore Azure OpenAI Assistants and Azure AI Agent Services, consider the following resources:

- Agent Service on Microsoft Learn Docs
- Watch on-demand sessions: Streamlining Customer Service with AI-Powered Agents: Building Intelligent Multi-Agent Systems with Azure AI
- Microsoft Learn training: Develop AI agents on Azure

Community and Announcements

- Tech Community announcement: Introducing Azure AI Agent Service
- Bonus blog post: Announcing the Public Preview of Azure AI Agent Service
- AI Agents for Beginners, a 10-lesson course: https://aka.ms/ai-agents-beginners

I don't have the blog subsite option
My company used to own a blog on InfoShare. We're now moving to SharePoint modern sites, and "News" seems too limited for our needs, since we'd require visitors to be able to search for news by category (I know news can be categorized, but there's no way for a visitor to search by category). We'd therefore like to opt for the old blog functionality. Nevertheless, I don't seem to have that option on my subsites. Do you have any rough idea of why this is? Thank you in advance!

Researcher takes another step toward discovering how a brain molecule could halt MS
Multiple sclerosis is an autoimmune disease in which the myelin, or fatty lining of nerve cells, is eroded, leading to nerve damage and slower signalling between the brain and the body. MS symptoms range from blurred vision to complete paralysis, and while there are treatments, the causes are not fully understood and nothing exists to reverse the disease process. More than 90,000 Canadians live with MS, according to the MS Society. In new research published in Stem Cell Reports, Anastassia Voronova, an assistant professor and Canada Research Chair in Neural Stem Cell Biology, injected fractalkine into mice with chemically induced MS. She found the treatment increased the number of new oligodendrocytes -- vital brain and spinal cord cells that produce myelin in both embryonic and adult brains -- which are damaged during the MS autoimmune attack. "If we can replace those lost or damaged oligodendrocytes, then they could make new myelin and it is believed that would halt the disease progression, or maybe even reverse some of the symptoms," Voronova says. "That's the Holy Grail in the research community and something that we're very passionate about." Voronova's earlier research tested the safety and efficacy of fractalkine in normal mice and found similar beneficial effects. Other researchers have demonstrated that fractalkine may provide protection for nerves in mouse models before the disease is induced, but this is the first time it has been tested on animals that already have the disease. Voronova and her team observed new oligodendrocytes, as well as reactivated progenitor cells that can regenerate oligodendrocytes, in the brains of the treated animals. Remyelination occurred in both the white and grey matter. The researchers also observed a reduction in inflammation, part of the damage caused by the immune system. 
Next steps for the treatment include testing it in other diseased mouse models, including those with neurodegenerative diseases other than MS.
Full article: Automated Detection and Response for Azure WAF with Sentinel - Microsoft Community Hub

Web applications are increasingly targeted by malicious attacks that exploit commonly known vulnerabilities; SQL injection and cross-site scripting are among the most common. Preventing such attacks in application code is challenging: it can require rigorous maintenance, patching, and monitoring at multiple layers of the application topology. A WAF solution can react to a security threat faster by centrally patching a known vulnerability instead of securing each individual web application. Azure Web Application Firewall (WAF) is a cloud-native service that protects web apps from common web-hacking techniques. It can be deployed in a matter of minutes to get complete visibility into web application traffic and block malicious web attacks. Integrating Azure WAF with Microsoft Sentinel (a cloud-native SIEM/SOAR solution) for automated detection and response to threats, incidents, and alerts reduces the manual intervention needed to update the WAF policy. In this blog, we discuss WAF detection templates in Sentinel, deploying a Playbook, and configuring detection and response in Sentinel using these templates and the Playbook.

Original Post: New Blog Post | Automated Detection and Response for Azure WAF with Sentinel - Microsoft Community Hub
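As a loose illustration of the pattern such detection templates follow, the snippet below builds a KQL query that flags client IPs with an unusual number of blocked WAF requests. The field names follow the Application Gateway WAF schema in the AzureDiagnostics table as I understand it, and the threshold and time window are arbitrary, so verify both against your own workspace before scheduling anything like this as an analytics rule.

```python
def blocked_requests_query(threshold: int = 10, window: str = "5m") -> str:
    """Build a KQL query that surfaces client IPs with many blocked WAF requests."""
    return (
        "AzureDiagnostics\n"
        '| where Category == "ApplicationGatewayFirewallLog"\n'
        '| where action_s == "Blocked"\n'
        f"| summarize BlockedCount = count() by clientIp_s, bin(TimeGenerated, {window})\n"
        f"| where BlockedCount > {threshold}"
    )

# Print the query so it can be pasted into a Sentinel scheduled analytics rule.
print(blocked_requests_query())
```

Parameterizing the threshold like this mirrors what the Sentinel detection templates expose as tunable rule settings.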
The Easy Way to Get the ARM Deployment Template for a Microsoft Sentinel Solution - Azure Cloud & AI Domain Blog (azurecloudai.blog)

If you need the deployment (ARM) template for any Microsoft Sentinel Solution, there's an easy way to obtain it in the UI. The ARM template will allow you to deploy the Solution using your favorite DevOps method. Once you locate the Solution you want to install, begin the normal installation process. When you reach the end of the Solution installation wizard, instead of allowing the Solution to be installed, click or tap the "Download template for automation" link. This takes you to a page where the template has been auto-generated for you; from there you can download it, add it to your ARM template library, or deploy it directly. You can also use this page to adjust any of the template's parameters, variables, or resources.

Original Post: New Blog Post | The Easy Way to Get the ARM Deployment Template for a Microsoft Sentinel Solution - Microsoft Community Hub

Sentinel Cost Optimization Series - Part 1 - Data prioritization
* There are graphs in this post, but I can't seem to upload/insert them; please visit the link in each part to see the picture.

Problem statement

Data prioritization is an issue that any SIEM or data gathering and analysis solution must consider. The logs we collect into a SIEM are typically security-related and capable of directly creating alerts based on events in those logs, such as EDR alerts. However, not all logs are equally weighted. For example, a proxy log only contains the connections of internal users; it is very useful for investigation, but it does not directly create alerts and has a very high volume. To demonstrate this, we categorize logs into primary and secondary logs based on their security value and volume.

Data categorization

The metadata and context of what was discovered are frequently contained in the primary log sources used for detection. However, secondary log sources are sometimes required to present a complete picture of a security incident or breach. Unfortunately, many of these secondary log sources are high-volume verbose logs with little relevance for security detection. They aren't useful unless a security issue or threat hunt requires them. In a traditional on-premise deployment, we use a SIEM alongside a data lake to store secondary logs for later use.

On-premise Architecture

Because we have complete control over everything, we can use any technology or solution, making this simple to set up (e.g., QRadar for the SIEM and ELK for the data lake). However, for a cloud-native SIEM this becomes more difficult, particularly with Microsoft Sentinel. Microsoft Sentinel is a cloud-native security information and event management (SIEM) platform that includes artificial intelligence (AI) to help with data analysis across an enterprise. To store and analyze everything for Sentinel, we typically use Log Analytics with the Analytics Logs data plan.
However, this is prohibitively expensive, costing between $2.00 and $2.50 per GB ingested per day depending on the Azure region used.

Current Solution: Storage Account (Blob Storage)

To store this secondary data, the present approach uses Blob Storage. Blob storage is designed to hold large volumes of unstructured data, meaning data that does not follow a particular data model or specification, such as text or binary data. This is a low-cost option for storing large amounts of data. The architecture for this solution is as follows:

Blob Architecture

However, Blob Storage has a limitation that is hard to ignore: the data in Blob Storage is not searchable. We can work around this as demonstrated in Search over Azure Blob Storage content, but that adds another layer of complexity and cost that we would prefer to avoid. The alternative is the KQL externaldata operator, but it is designed to retrieve small amounts of data (up to 100 MB) from external storage, not massive amounts of data.

Our Solution

High-Level Architecture

Our solution uses Basic Logs to tackle this problem. Basic Logs are a less expensive option for importing large amounts of verbose log data into your Log Analytics workspace. The Basic plan also supports a subset of KQL, making the data searchable. To get logs into a Basic Logs table, we need a custom table created with the Data Collection Rule (DCR)-based Logs Ingestion API. The structure is as follows:

Our Solution Architecture

Our Experiment

In our experiment, we use the following components for the architecture:

Component | Solution | Description
Source Data | VMware Carbon Black EDR | Carbon Black EDR is an endpoint activity data capture and retention solution that allows security professionals to chase attacks in real time and observe the whole attack kill chain. It captures not only data for alerting, but also data that is informative, such as binary or host information.
Data Processor | Cribl Stream | Cribl helps process machine data in real time - logs, instrumentation data, application data, metrics, and so on - and delivers it to a preferred analysis platform.

Cribl supports sending logs to Log Analytics, but only with the Analytics plan. To send logs to the Basic plan, we need to set up a data collection endpoint and rule; please see Logs ingestion API in Azure Monitor (Preview) for additional information on how to set this up. We also use a Logic App as a webhook to collect the logs and send them to the data collection endpoint.

The environment we use for log generation is as follows:

- Number of hosts: 2
- Operating system: Windows Server 2019
- Number of days in the demo: 7

The volume of logs collected in our test environment:

- Basic Log generated: 30.2 MB
- Alerts generated: 16.6 MB

Data Ingestion

Costs are based on the East US region, in USD, using the Pay-As-You-Go tier, extrapolating the generated data to 1,000 hosts and a 30-day retention period.

The calculation using only Analytics Logs:

Table | Ingestion Volume (GB) | Cost per GB (USD) | Total cost per day (USD) | Total cost per retention period (USD) | Host number | Retention (days)
Cb_logs_Raw_CL | 2.16 | 2.3 | 4.96 | 148.84 | 1000 | 30
Cb_logs_alert_CL | 1.19 | 2.3 | 2.73 | 81.81 | 1000 | 30
Total | | | 7.69 | 230.66 | |

If we use Analytics Logs with a Storage Account:

Table | Ingestion Volume (GB) | Cost per GB (USD) | Total cost per day (USD) | Total cost per retention period (USD) | Host number | Retention (days)
Cb_logs_Raw_CL | 2.16 | 0.02 | 0.04 | 1.29 | 1000 | 30
Cb_logs_alert_CL | 1.19 | 2.3 | 2.73 | 81.81 | 1000 | 30
Total | | | 2.77 | 83.11 | |

If we use Analytics Logs with Basic Logs:

Table | Ingestion Volume (GB) | Cost per GB (USD) | Total cost per day (USD) | Total cost per retention period (USD) | Host number | Retention (days)
Cb_logs_Raw_CL | 2.16 | 0.5 | 1.08 | 32.36 | 1000 | 30
Cb_logs_alert_CL | 1.19 | 2.3 | 2.73 | 81.81 | 1000 | 30
Total | | | 3.81 | 114.17 | |

Now let's compare these three solutions together and get an overall look.
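The per-table arithmetic behind these figures can be reproduced in a few lines. The daily volumes and unit prices are taken from the tables above; the blog's own totals start from unrounded daily volumes, so this sketch can differ from them by a few cents.

```python
def period_cost(daily_gb: float, price_per_gb: float, days: int = 30) -> float:
    """Daily ingestion (GB) x price per GB x days in the retention period."""
    return round(daily_gb * price_per_gb * days, 2)

# Daily volumes scaled to 1,000 hosts, and USD-per-GB ingestion prices,
# as in the tables above.
RAW_GB, ALERT_GB = 2.16, 1.19
ANALYTICS, BASIC, BLOB = 2.30, 0.50, 0.02

# Alerts always stay on the Analytics plan; only the raw log moves.
analytics_only = period_cost(RAW_GB, ANALYTICS) + period_cost(ALERT_GB, ANALYTICS)
with_blob = period_cost(RAW_GB, BLOB) + period_cost(ALERT_GB, ANALYTICS)
with_basic = period_cost(RAW_GB, BASIC) + period_cost(ALERT_GB, ANALYTICS)

print(analytics_only, with_blob, with_basic)
```

Plugging in your own volumes makes it easy to see where the break-even point sits between the three options for a given retention period.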
Criterion | Only Analytics Logs | Analytics Logs with Storage Account | Analytics Logs with Basic Logs
Cost calculated | $230.66 | $83.11 | $114.17
Searchable | Yes | No | Yes, but costs $0.005 per GB scanned
Retention | Up to 2,556 days (7 years) | 146,000 days (400 years) | Up to 2,556 days (7 years)

Limitations

Even though Basic Logs are an excellent choice for ingesting hot data, they have some limitations that are difficult to overlook:

- The retention period is only 8 days and cannot be increased; after that, data is either deleted or archived.
- KQL language access is limited; for a list of supported operators, please see here.
- There is a charge for interactive queries ($0.005 per GB scanned).

This is the first post in the Sentinel Cost Optimization series. I hope this gives you another option to consider when setting up and sending your custom logs to Sentinel.
Read the full blog post here: Introduction to Machine Learning Notebooks in Microsoft Sentinel

It has never been harder to keep hybrid environments secure. Microsoft's security research teams are observing an increasing number and complexity of cybercrimes across all sectors of critical infrastructure, from targeted ransomware attacks to growing password and phishing campaigns over email, according to the Microsoft Digital Defense Report. The 2022 Cost of Insider Threats report found that threat incidents have risen by over 44% in the last two years, with associated costs exceeding $15.38M per incident per year, up by a third over the preceding years. The report also found a 10.3% increase in the average time taken to contain an incident, from 77 days to 85 days. Advanced tools, techniques, and processes used by threat actor groups allow them to counter obsolete defences and scale their attack campaigns to a broad range of victims, from government organisations to for-profit enterprises.

Original Post: New Blog Post | Introduction to Machine Learning Notebooks in Microsoft Sentinel - Microsoft Tech Community
Microsoft Sentinel customizable machine learning based anomalies is Generally Available - Microsoft Tech Community

Security analysts can use anomalies to reduce investigation and hunting time, as well as detect new and emerging threats. Typically, these benefits come at the cost of a high benign-positive rate, but Microsoft Sentinel's customizable anomaly models are tuned by our data science team and trained with the data in your Microsoft Sentinel workspace to reduce that rate, providing out-of-the-box value. If security analysts need to tune them further, the process is simple and requires no knowledge of machine learning. Read this blog to find out which capabilities were supported in Public Preview and how to tune anomalies: Democratize Machine Learning with Customizable ML Anomalies - Microsoft Tech Community. In this blog, we discuss how customizable machine learning based anomalies have improved since Public Preview.

Original Post: New Blog Post | Microsoft Sentinel customizable machine learning based anomalies Generally Available - Microsoft Tech Community
Create and delete incidents in Microsoft Sentinel - Microsoft Tech Community

During the everyday work of the SOC, suspicious and malicious events surface from many sources. Events identified by SIEM and XDR systems are aggregated into alerts, and those alerts become incidents. However, at times a possible security breach is reported by other means, such as a phone call, an email, hunting results, or a customer request. Those incidents need to be documented, whether they have been reported, partially investigated, or even resolved. As part of our journey to build better incident management capabilities in Microsoft Sentinel, we would like to announce the "manual incident creation" feature, along with the "delete incident" capability. With manual incident creation, analysts can now create an incident manually in the Sentinel portal and also by using the new "Create incident (preview)" Logic App action (joining the already existing ability to create an incident through the API). If an incident was mistakenly logged, or is an exact duplicate of another incident, it can now be deleted from the grid using the new "delete" option or the API, leaving only audit information in the Log Analytics table. Two playbook templates are available in the template gallery, allowing out-of-the-box incident creation from an email template and Microsoft Forms, thus reducing the time between the SOC learning about an incident and the incident being logged in Sentinel.

Original Post: New Blog Post | Create and delete incidents in Microsoft Sentinel - Microsoft Tech Community
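For the API route mentioned above, incidents are created with a PUT against the Microsoft Sentinel incidents endpoint of the Azure management API. The sketch below only assembles the URL and a minimal payload; the subscription, resource group, and workspace names are placeholders, and the pinned api-version should be checked against the current REST reference.

```python
import uuid

API_VERSION = "2023-02-01"  # check the REST reference for the latest stable version

def incident_url(subscription: str, resource_group: str, workspace: str,
                 incident_id: str) -> str:
    """ARM endpoint for the Microsoft Sentinel incidents API."""
    return (
        "https://management.azure.com"
        f"/subscriptions/{subscription}"
        f"/resourceGroups/{resource_group}"
        "/providers/Microsoft.OperationalInsights"
        f"/workspaces/{workspace}"
        "/providers/Microsoft.SecurityInsights"
        f"/incidents/{incident_id}"
        f"?api-version={API_VERSION}"
    )

def incident_body(title: str, severity: str = "Medium") -> dict:
    """Minimal payload for a manually logged incident."""
    return {"properties": {"title": title, "severity": severity, "status": "New"}}

url = incident_url("SUB_ID", "my-rg", "my-workspace", str(uuid.uuid4()))
body = incident_body("Possible breach reported by phone call")
# Creating the incident is then a PUT with an Azure AD bearer token, e.g.:
#   requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"})
```

The same endpoint with a DELETE verb backs the new "delete incident" capability.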
Anomali Limo Feeds for Microsoft Sentinel to Expire for Good - Azure Cloud & AI Domain Blog (azurecloudai.blog)

I'm sure there's some organizational reason why Anomali wants to detach itself from maintaining these feeds. If you use these feeds for Microsoft Sentinel demos, consider querying the ThreatIntelligenceIndicator table for the Limo feeds and exporting the results, so you still have them for later when the active feed dries up.

ThreatIntelligenceIndicator
| where SourceSystem contains "Limo"

You can then use our new functionality to import flat files into ThreatIntelligence and reuse the increasingly stale indicators.
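One low-effort way to snapshot those results before the feed disappears is to serialize the query output to a flat file for later re-import. A minimal sketch follows; the column subset is illustrative, and pulling the rows via the azure-monitor-query package is an assumption rather than something the original post prescribes.

```python
import csv
import io

def rows_to_csv(columns, rows) -> str:
    """Serialize query results (column names plus row tuples) to CSV text."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(columns)
    writer.writerows(rows)
    return buf.getvalue()

# In practice the columns and rows would come from running the query above
# against your workspace, for example with the azure-monitor-query package's
# LogsQueryClient, and the CSV would be saved for the flat-file import.
columns = ["TimeGenerated", "NetworkIP", "Description"]  # illustrative subset
rows = [("2023-07-01T00:00:00Z", "203.0.113.7", "Limo demo indicator")]
print(rows_to_csv(columns, rows))
```

The resulting file can then be fed to the flat-file indicator import mentioned above whenever a demo needs the old Limo data.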