A Framework for Calculating ROI for Agentic AI Apps
Contributors and Reviewers: Anurag Karuparti (C), Aishwarya Umachandran (C), Tara Webb (R), Bart Czernicki (R), Simon Lacasse (R), Vishnu Pamula (R)

ROI serves as a critical metric for assessing the financial benefits of any investment, including AI projects. It helps determine whether the investment generates more value than it costs. The fundamental formula for calculating ROI is:

ROI = (Net Return from Investment - Cost of Investment) / Cost of Investment * 100

Studies indicate that companies investing in AI are realizing significant returns, with an average return of $3.7 for every $1 invested. Notably, 5% of organizations worldwide are achieving an even higher average return of $10 for every $1 invested. (IDC Study 2024)

1. Key Metrics for Measuring ROI in Agentic AI Apps

Measuring the ROI of agentic AI apps necessitates a comprehensive approach that considers both tangible and intangible benefits. Intangible benefits may be difficult to quantify but significantly contribute to ROI. Here are some key metrics to consider:

a. Tangible Benefits
Cost Savings: Agentic apps can automate tasks, leading to significant cost reductions in areas like customer service, data entry, and many business operations. By handling complex workflows autonomously, agentic AI minimizes the need for human intervention, resulting in lower labor costs and increased efficiency.
Revenue Increase: Agentic apps can help businesses identify new revenue streams, optimize pricing strategies, and improve sales and marketing effectiveness, ultimately driving revenue growth.
Productivity Gains: By automating tasks and providing employees with enhanced tools and information, agentic apps can boost productivity and efficiency.
Data Quality Improvements: Agentic apps can minimize errors in tasks such as data entry and analysis, leading to improved accuracy and reduced costs associated with correcting mistakes.
Improved Customer Satisfaction: Agentic apps can enhance customer satisfaction by providing personalized experiences, faster service, and proactive problem-solving.
Faster Time-to-Market: Agentic AI can accelerate product development and deployment, enabling businesses to bring new products and services to market faster.

b. Intangible Benefits
Improved Decision-Making: Agentic AI can analyze vast amounts of data and provide valuable insights that help businesses make more informed decisions.
Enhanced Brand Reputation: By providing innovative and efficient services, agentic AI can enhance a company's brand reputation and foster customer loyalty.
Increased Employee Satisfaction: By automating mundane tasks and empowering employees with better tools, agentic AI can improve employee satisfaction and retention.
Improved Compliance: Agentic AI can help businesses comply with regulations and reduce the risk of penalties.
Increased Innovation: By freeing up employees from routine tasks, agentic AI can foster a culture of innovation and creativity.
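To make the formula concrete, here is a minimal illustrative sketch (not from the original article) that expresses the ROI calculation as a small helper; the worked scenarios in section 5 below can be plugged straight into it.

```python
# Minimal sketch of the ROI formula above: ROI (%) = (benefit - cost) / cost * 100.
def roi_percent(total_annual_benefit: float, total_annual_cost: float) -> float:
    """Return ROI as a percentage, where net benefit = benefit - cost."""
    net_benefit = total_annual_benefit - total_annual_cost
    return net_benefit / total_annual_cost * 100
```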
2. Cost Components of Developing and Deploying Agentic Apps

Developing and deploying agentic AI apps involves various cost components, which can be categorized as follows:

Development Costs: This includes the cost of software and development tools; salaries of developers, data scientists, and machine learning engineers; and cloud computing resources. Example: Salaries for a team comprising a data scientist ($120,000 - $180,000 per year), a machine learning engineer ($130,000 - $200,000 per year), and an AI software developer ($110,000 - $170,000 per year), plus development costs on cloud platforms like Azure. (The above salaries are just estimates based on public information and can vary.)
Data Acquisition and Preparation: Agentic AI apps may require large amounts of data for training and operation. This includes the cost of acquiring data, cleaning it, and preparing it for use in AI models. Example: Purchasing datasets from third-party providers or investing in data annotation services.
Testing and Deployment: This includes the cost of testing the AI app, deploying it to the cloud or on-premises, and integrating it with existing systems. Example: Cloud computing costs for deploying the app on platforms such as Azure, AWS, and Google Cloud.
Maintenance and Updates: Agentic AI apps require ongoing maintenance and updates to ensure they remain effective and secure. This includes the cost of monitoring the app, fixing bugs, and adding new features. Example: Costs associated with software updates, security patches, and ongoing monitoring of the app's performance.

3. New Revenue Streams from Agentic Apps

Agentic AI apps can generate revenue through various business models by enhancing business operations in several ways.

Subscription Fees: Businesses can charge users a recurring fee for access to the agentic AI app. Example: Offering different subscription tiers with varying levels of access and features.
Usage-Based Pricing: Businesses can charge users based on their usage of the app, such as the number of tasks performed or the amount of data processed. Example: Charging users per API call or per transaction processed by the agentic AI app.
Licensing Fees: Businesses can license their agentic AI technology to other companies. Example: Granting other businesses the right to use the agentic AI technology in their own products or services.

It's important to note that agentic AI is poised to disrupt traditional SaaS business models, particularly the prevalent per-seat pricing model. As agentic AI becomes more sophisticated, businesses may shift towards alternative pricing models, such as usage-based pricing or outcome-based pricing, where the cost is directly tied to the AI's contribution to measurable business goals.

4. Framework for Calculating ROI for Agentic Apps

Based on the analysis presented above, the following framework can be used to calculate the ROI of agentic AI apps:

Define Objectives and KPIs: Clearly define the objectives of implementing the agentic AI app and the key performance indicators (KPIs) that will be used to measure its success. This could include metrics such as cost savings, revenue increase, productivity gains, customer satisfaction, and error reduction.
Establish a Baseline: Establish a baseline for the KPIs before implementing the agentic AI app. This will help measure the impact of the app on the business.
Estimate Revenue Gains and Cost Savings: Estimate the potential revenue gains and cost savings that can be achieved by implementing the agentic AI app. This may involve analyzing historical data, conducting surveys, and consulting with industry experts.
Identify and Assess Costs: Identify all costs associated with developing, deploying, and maintaining the agentic AI app. This includes development costs, data acquisition costs, infrastructure costs, and ongoing maintenance costs.
Determine Intangible Benefits: Identify and assess the intangible benefits of the agentic AI app, such as improved decision-making, enhanced brand reputation, and increased employee satisfaction. While these benefits may be difficult to quantify, they can significantly contribute to the overall ROI.
Set a Realistic Timeframe: Establish a realistic timeframe for measuring the ROI of the agentic AI app. This should consider the time it takes to develop, deploy, and fully integrate the app into the business.
Develop a Current State Scenario: Develop a scenario that represents the current state of the business without the agentic AI app. This will help compare the performance of the business with and without the app.
Calculate the ROI: Using the data gathered in the previous steps, calculate the ROI of the agentic AI app using the ROI formula.
Monitor and Adjust: Continuously monitor the performance of the agentic AI app and track the KPIs. Adjust the app and its implementation as needed to optimize its effectiveness and maximize ROI.

When calculating the ROI of AI initiatives, it's crucial to avoid common pitfalls such as:
Uncertainty of Benefits: Accurately estimating the benefits of AI can be challenging due to the evolving nature of the technology and the potential for unforeseen outcomes.
Computing ROI Based on a Single Point in Time: AI projects often have long-term benefits that may not be fully realized in the short term. As per a recent IDC study (November 2024), organizations realize value in 14 months.
Treating Each AI Project Individually: AI projects can have synergistic effects, and evaluating them in isolation may underestimate their overall impact on the business.

5. Example Scenarios

Option 1
A financial services call center handles 100,000 customer inquiries per year, each currently taking an average of 5 minutes. Of these calls, 10% (10,000 calls) are simple, routine requests (e.g., checking balances) and can be easily automated. Additionally, misrouting and inefficient handling cause each call to run 1 extra minute on average.

Current Situation (Before Multi-Agent AI):
Total calls: 100,000
Simple, routine calls: 10,000
Agent cost per minute: $0.50

Routine Calls Cost (Before AI): Routine calls each take 3 minutes. Total routine call time: 10,000 calls × 3 min = 30,000 min. Cost: 30,000 min × $0.50 = $15,000 per year.
Misrouting Cost (Before AI): Extra 1 minute per call due to misrouting. Total extra time: 100,000 calls × 1 min = 100,000 min. Cost: 100,000 min × $0.50 = $50,000 per year.
Total Extra Costs (Before AI): Routine tasks: $15,000. Misrouting: $50,000. Combined inefficiencies: $65,000 per year.

After Implementing Multi-Agent Collaboration AI: The AI system handles routine inquiries automatically and optimizes call routing:
Routine Calls Automated: 10,000 routine calls no longer require agent time, saving $15,000 per year on routine tasks.
Correct Routing: Removes the extra 1 minute per call, saving $50,000 per year in misrouting costs.
Efficiency Gains: With misrouting fixed and agents freed from routine tasks, staff can handle a slight increase in call volume and also reduce overtime. Staff can handle an additional 4,000 calls annually, each call at 5 minutes on average.
(4,000 calls × 5 min × $0.50 = $10,000)

Total Annual Savings After AI (Tangible Benefit):
Routine tasks saved: $15,000
Misrouting eliminated: $50,000
Efficiency gains: $10,000
Total: $75,000

System Costs:
Implementation and integration: $40,000
Annual maintenance: $5,000
Total Annual Cost: $45,000

ROI Calculation:
Net Benefit: $75,000 (savings) – $45,000 (cost) = $30,000
ROI = (Net Benefit / Cost) × 100% = (30,000 / 45,000) × 100% ≈ 67%
A 67% ROI means that for every dollar invested in the multi-agent collaboration AI system, the call center gains an additional 67 cents in profit each year.

Option 2
Scenario: A company wants to semi-automate customer support for their e-commerce platform using an AI-powered chatbot on Azure. The AI-powered customer service chatbot provides support for very frequently asked questions. It automates responses, provides real-time order tracking, and offers personalized product recommendations while proactively engaging customers with tailored offers and anticipating their needs. It autonomously handles tasks like follow-ups and issue resolution, integrates seamlessly with existing systems, supports multiple languages, and operates 24/7 to enhance customer satisfaction and drive sales. Additionally, it escalates complex issues to human agents and continuously improves through self-feedback.

Cost Estimation:
Development and Deployment: $25,000 (including Azure App Service, Azure Agent Service, and other development costs)
Maintenance and Support: $5,000 per year

Benefit Estimation:
Reduced Customer Service Costs: The chatbot handles 2,000 customer inquiries per month, which previously required 3 full-time employees with an average salary of $40,000 per year.
Increased Sales: The chatbot's personalized recommendations and efficient support lead to a 5% increase in monthly sales.

Calculating ROI:
Annual Cost Savings: 3 employees × $40,000 = $120,000. Chatbot cost = $25,000 (development) + $5,000 (maintenance) = $30,000. Cost savings = $120,000 - $30,000 = $90,000.
Annual Revenue Increase: Monthly sales: $500,000. Increase: 5% of $500,000 = $25,000 per month. Yearly increase: $25,000 × 12 = $300,000.
Total Annual Benefits: $90,000 (cost savings) + $300,000 (revenue) = $390,000
ROI = (Total Benefits − Annual Cost) / Annual Cost × 100% = ((390,000 − 30,000) / 30,000) × 100% = 1200%

This example demonstrates a significant ROI for the customer service chatbot. However, it's important to remember that this is a simplified calculation. Actual ROI may vary depending on various factors specific to the business and its implementation.

Note: Calculating Azure Costs
Azure costs vary by use case and are dependent on the architecture components. We'll discuss example scenarios for calculating these costs in a future blog.
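As a quick check, both scenarios above can be reproduced with the roi_percent helper sketched in section 1. The figures are taken directly from the examples; the code itself is illustrative and not part of the original article.

```python
# Option 1: call center with multi-agent collaboration AI
print(round(roi_percent(total_annual_benefit=75_000, total_annual_cost=45_000)))   # ~67 (%)

# Option 2: e-commerce customer service chatbot
total_benefits = 90_000 + 300_000   # cost savings + revenue increase = $390,000
print(round(roi_percent(total_annual_benefit=total_benefits, total_annual_cost=30_000)))  # 1200 (%)
```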
6. Risks and Considerations

Since the core of these agents relies on LLMs, there is a potential for hallucination. Rigorous testing and evaluation are therefore critical before deploying them to production. Additionally, in the initial stages, agents may exhibit inefficiencies due to the complexity of orchestration, potentially introducing a 10–20% overhead. It is wise to set an ROI range that considers differences in response confidence. However, over time, these agents are expected to improve and optimize through iterative learning and feedback.

7. ROI will differ from use case to use case

For example, in one call center, routine inquiries might be the primary source of inefficiency, while in another, the biggest gains might come from reducing customer wait times. Similarly, different industries may have different labor costs, different complexity levels for tasks, or varying levels of baseline performance. Cloud workload costs on Azure may also change based on usage patterns, the AI services you choose, data storage needs, and the extent of system integration required.

In short, while the overall method for calculating ROI remains the same (measure gains, subtract costs, then divide by costs), the types of gains (e.g., labor reduction, error reduction, increased throughput, improved customer satisfaction) and the kinds of costs (e.g., Azure compute, integration services, licensing fees, training expenses) will be different for each scenario. As a result, you need to carefully identify the relevant metrics and expenses for every individual use case.

Conclusion

Agentic AI apps hold immense potential for businesses seeking to automate tasks, enhance efficiency, and improve decision-making. By implementing a comprehensive framework for calculating ROI, businesses can effectively justify their investment in agentic AI and ensure that these apps deliver both tangible and intangible benefits. This framework should encompass both quantitative and qualitative metrics, including cost savings, revenue increases, productivity gains, customer satisfaction, and intangible benefits such as improved decision-making and enhanced brand reputation.

While the framework presented in this report provides a structured approach to evaluating the ROI of agentic AI apps, it's important to acknowledge the potential challenges and limitations. Quantifying some intangible benefits, such as enhanced brand reputation or increased employee satisfaction, can be subjective and may require alternative measurement approaches. Furthermore, the rapidly evolving nature of agentic AI technology may necessitate ongoing adjustments to the ROI framework to accurately capture its impact on businesses. Despite these challenges, a well-defined ROI framework remains crucial for making informed decisions about agentic AI investments and maximizing their potential. By carefully evaluating the ROI of agentic AI apps, businesses can strategically leverage this transformative technology to achieve their objectives and gain a competitive edge in the evolving digital landscape.

References: IDC's 2024 AI opportunity study: Top five AI trends to watch - The Official Microsoft Blog

Fine-Tuning Small Language Models for Function-Calling: A Comprehensive Guide
In the rapidly evolving landscape of artificial intelligence, fine-tuning small language models (SLMs) for use-case-specific workloads has become increasingly essential. The motivation behind this lies in the need for lower latency, reduced memory footprint, and improved accuracy, all while maintaining cost-effectiveness. This blog delves into the reasons for fine-tuning SLMs for function-calling, key considerations, and a practical guide to implementing fine-tuning on Azure.

Why Fine-Tune Small Language Models?

1. Lower Latency and Reduced Memory Footprint: Smaller models with fewer weights inherently offer faster processing times due to reduced matrix multiplication operations. This lower latency is crucial for real-time applications where speed is paramount. Additionally, these models reduce the memory footprint, making them ideal for deployment in resource-constrained environments.
2. Cost Efficiency: Fine-tuning smaller models is more cost-effective than training large models from scratch. It reduces the computational resources required, thereby lowering operational costs. This makes it a viable option for startups and enterprises looking to optimize their AI expenditure.
3. Improved Accuracy: By tailoring a model to a specific function-calling use case, you can achieve higher accuracy. Fine-tuning allows the model to learn the intricacies of function-calling, thereby providing more relevant and precise outputs.
4. Smaller Token Size: Smaller models and efficient token handling lead to a reduction in token size, which further optimizes processing speed and resource usage.

Key Considerations for Fine-Tuning

a. Selection of the Right Base Model: Choosing the appropriate base model is crucial. Evaluate industrial benchmarks and leaderboards, such as the Berkeley Function Call Leaderboard, to guide your selection. Consider factors like model size, which affects GPU VRAM requirements, accuracy, and context length. For this blog post, we will use the Llama-3.2-3b-instruct model as our base model for fine-tuning.

b. Dataset Preparation: Proper dataset preparation is a cornerstone for successful fine-tuning of SLMs for function-calling tasks. The dataset must be representative of real-world scenarios and cover the full spectrum of use cases you anticipate. For this blog, we will utilize the glaiveai/glaive-function-calling-v2 dataset from Hugging Face, renowned for its comprehensive coverage of simple, multiple, and multi-turn function-calling scenarios across diverse domains.

Key Steps in Dataset Preparation:

Understanding the Complexity of the Use Case: Before diving into the technicalities of dataset preparation, it's essential to understand the complexity of the use case at hand. Is the task limited to function-calling, or does it involve a broader, more generic conversation? If the latter is true, it becomes imperative to ensure that the existing knowledge and capabilities of the language model (SLM) are preserved. The dataset should seamlessly integrate both function-call and non-function-call scenarios to provide a holistic conversational experience.

Differentiating Function-Calling Scenarios: Let's explore the different scenarios that might arise in function-calling applications:

Single Function-Calling: This scenario involves invoking a single function based on user input. For instance, in the travel industry, a user might ask, "What are the available flights from New York to London on December 10th?"
The dataset should include examples that allow the model to extract relevant information and call the flight search function accurately.
Multiple Function-Calling: Here, the language model must choose one function from a set of possible tools. For example, if a user asks, "Can you book me a hotel or a flight to Paris?" the dataset should provide instances where the model decides between booking a hotel or a flight based on user preferences or additional input.
Multi-Turn Conversations: This scenario requires tools to be invoked in a sequence based on the conversation's state. Consider a user planning a vacation: "I want to visit Italy. What are my options?" followed by "Book me a flight," and then "Find a hotel in Rome." The dataset should capture the flow of conversation, enabling the model to handle each request in context.
Parallel Function-Calling: In situations where multiple tools need to be invoked simultaneously, such as booking flights and hotels at the same time, the dataset should include examples that allow the model to manage these parallel tasks effectively. For instance, "Book a flight to Tokyo and reserve a hotel in Shinjuku for the same dates."
Handling Missing Information: A robust dataset should also include scenarios where the language model needs to ask the user for missing information. For example, if a user simply says, "Book me a flight," the model should prompt, "Could you please specify the destination and dates?"
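For illustration, a single prepared training example can end up looking roughly like the structure below once a raw record is converted to an OpenAI-style message list. The function name, arguments, and values are hypothetical; the "recipient_name"/"parameters" shape mirrors what the preprocessing code later in this post produces.

```python
# Hypothetical single function-calling example in OpenAI-style chat format.
sample_example = [
    {"role": "system", "content": "You are a travel assistant with access to the search_flights function ..."},
    {"role": "user", "content": "What are the available flights from New York to London on December 10th?"},
    {"role": "assistant", "content": {"tool_uses": [{
        "recipient_name": "functions.search_flights",   # hypothetical tool name
        "parameters": {"origin": "New York", "destination": "London", "date": "2024-12-10"},
    }]}},
    {"role": "tool", "content": '{"flights": [{"airline": "Contoso Air", "price": 480}]}'},
    {"role": "assistant", "content": "Contoso Air has a flight on December 10th for $480."},
]
```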
c. Compute Selection: Ensure your compute setup has adequate VRAM to accommodate model weights, gradients, and activations. The compute should be tailored to your model size and batch size requirements.

d. Hyperparameter Selection: The selection of hyperparameters is a critical step that can significantly influence the performance of a model. Hyperparameters, unlike the model's parameters, are not learned from the data but are set before the training process begins. Choosing the right hyperparameters can lead to faster convergence and higher accuracy, making this an area that demands careful attention. Hyperparameters can be thought of as the settings or knobs that you, as the model trainer, can adjust to tailor the training process. These include learning rate, batch size, the architecture of layers, and more. One of the leading methodologies for fine-tuning models is LoRA (Low-Rank Adaptation), which has gained popularity due to its efficiency and effectiveness. LoRA is a technique that allows for the efficient adaptation of large language models by introducing low-rank matrices during the training process. This approach reduces the number of trainable parameters, leading to faster convergence and reduced computational costs. When using LoRA, two primary hyperparameters to consider are:
Rank: This represents the dimensionality of the low-rank matrices. It is a critical factor influencing the model's capacity to learn nuanced patterns.
Alpha: This is a scaling factor applied to the low-rank updates, typically set to be 2-4 times the rank value.
A good starting point for these parameters might be a rank of 8 and an alpha of 16, but these values should be tailored based on the model's complexity and the specific task at hand.

e. Optimize Context Length: Another significant aspect of model fine-tuning, especially in function-calling scenarios, is the management of context length. In these prompts, we often provide detailed information such as function names, descriptions, and argument types, which consume a substantial number of tokens. Efficiently managing this context can lead to performance gains without sacrificing accuracy.

Iterative Experimentation with Context Details: To optimize context length, an iterative experimentation approach is recommended:
Baseline Experiment: Start by including all possible details—function descriptions, argument types, and more. This serves as your baseline for comparison.
Simplified Contexts: Gradually remove elements from the context:
First Iteration: Retain only the function names and arguments, omitting descriptions.
Second Iteration: Remove the arguments, keeping just the function names.
Final Iteration: Test the model's performance without any function names or arguments.
By incrementally simplifying the context, you can identify the minimal necessary context. While conducting these experiments, it is advantageous to utilize previous checkpoints. Instead of starting from the base model for each iteration, use the trained model from the previous step as a starting point. This approach can save time and computational resources, allowing for more efficient experimentation.

Fine-Tuning on Azure: Step-by-Step

Now let's run the fine-tuning job while adhering to all the guidelines and instructions shared above:

1. Create an Azure Machine Learning Workspace: An Azure Machine Learning workspace is your control center for managing all the resources you need to train, deploy, automate, and manage machine learning models. It serves as a central repository for your datasets, compute resources, and models. To get started, you can create a workspace through the Azure portal by navigating to the Azure Machine Learning service and selecting "Create new workspace." Ensure you configure the resource group, workspace name, region, and other necessary settings.

2. Create a Compute Instance: To run your Python notebook and execute scripts, you need a compute instance. This virtual machine in Azure Machine Learning allows you to perform data preparation, training, and experimentation. Go to the "Compute" section in your workspace, select "Create," and choose a compute instance that fits your needs, ensuring it has the necessary specifications for your workload.

3. Dataset Preparation: For this blog, we'll use the glaiveai/glaive-function-calling-v2 dataset from Hugging Face, which includes simple, multi-turn function-calling and generic conversations across various domains. The dataset needs to be formatted to be compatible with the OpenAI format:
Convert each conversation into a chat_template format.
Assign roles as 'system', 'user', or 'assistant'.
Remove "<|endoftext|>” string and if the response is a function-call, replace the “<functioncall>” string and add role as tool so that LLM knows when to stop responding and wait for function execution results def parse_conversation(input_string): ROLE_MAPPING = {"USER" : "user", "ASSISTANT" : "assistant", "SYSTEM" : "system", "FUNCTION RESPONSE" : "tool"} # Regular expression to split the conversation based on SYSTEM, USER, and ASSISTANT pattern = r"(SYSTEM|USER|ASSISTANT|FUNCTION RESPONSE):" # Split the input string and keep the delimiters parts = re.split(pattern, input_string) # Initialize the list to store conversation entries conversation = [] # Iterate over the parts, skipping the first empty string for i in range(1, len(parts), 2): role = parts[i].strip() content = parts[i + 1].strip() content = content.replace("<|endoftext|>", "").strip() if content.startswith('<functioncall>'): # build structured data for function call # try to turn function call from raw text to structured data content = content.replace('<functioncall>', '').strip() # replace single quotes with double quotes for valid JSON clean_content = content.replace("'{", '{').replace("'}", '}') data_json = json.loads(clean_content) # Make it compatible with openAI prompt format func_call = {'recipient_name': f"functions.{data_json['name']}", 'parameters': data_json['arguments']} content = {'tool_uses': [func_call]} # Append a dictionary with the role and content to the conversation list conversation.append({"role": ROLE_MAPPING[role], "content": content}) return conversation def prepare_dataset(tokenizer, args): # Create the cache_dir cache_dir = "./outputs/dataset" os.makedirs(cache_dir, exist_ok = True) # Load the dataset from disk train_dataset = load_from_disk(args.train_dir) eval_dataset = load_from_disk(args.val_dir) column_names = list(train_dataset.features) def apply_chat_template(examples): conversations = [] for system, chat in zip(examples["system"], examples["chat"]): try: system_message = parse_conversation(system) chat_message = parse_conversation(chat) message = system_message + chat_message conversations.append(message) except Exception as e: print(e) text = [tokenizer.apply_chat_template(message, tokenize=False, add_generation_prompt=False) for message in conversations] return {"text": text} # process the dataseta and drop unused columns processed_train_dataset = train_dataset.map(apply_chat_template, cache_file_name = f"{cache_dir}/cache.arrow", batched = True, remove_columns=column_names) processed_eval_dataset = eval_dataset.map(apply_chat_template, cache_file_name = f"{cache_dir}/cache.arrow", batched = True, remove_columns=column_names) return processed_train_dataset, processed_eval_dataset 4: Create a Data Asset: Azure Machine Learning allows you to register datasets as data assets, making them easily manageable and reusable: def get_or_create_data_asset(ml_client, data_name, data_local_dir, update=False): try: latest_data_version = max([int(d.version) for d in ml_client.data.list(name=data_name)]) if update: raise ResourceExistsError('Found Data asset, but will update the Data.') else: data_asset = ml_client.data.get(name=data_name, version=latest_data_version) logger.info(f"Found Data asset: {data_name}. 
Will not create again") except (ResourceNotFoundError, ResourceExistsError) as e: data = Data( path=data_local_dir, type=AssetTypes.URI_FOLDER, description=f"{data_name} for fine tuning", tags={"FineTuningType": "Instruction", "Language": "En"}, name=data_name ) data_asset = ml_client.data.create_or_update(data) logger.info(f"Created/Updated Data asset: {data_name}") return data_asset train_data = get_or_create_data_asset(ml_client, f"{AZURE_DATA_NAME}_train", data_local_dir=f"{DATA_DIR}/train", update=True) val_data = get_or_create_data_asset(ml_client, f"{AZURE_DATA_NAME}_val", data_local_dir=f"{DATA_DIR}/val", update=True) test_data = get_or_create_data_asset(ml_client, f"{AZURE_DATA_NAME}_test", data_local_dir=f"{DATA_DIR}/test", update=True) 5: Create an Environment: While Azure provides built-in environments for common use cases, creating a custom environment tailored to your specific needs can be beneficial. An environment in Azure ML is essentially a containerized setup that defines the software, libraries, and other dependencies required to run your machine learning workload. Why Use Environments? Reproducibility: By defining an environment, you ensure that your training and inference processes are reproducible, with the same configuration used every time. Consistency: Environments help maintain consistency across different runs and teams, reducing "it works on my machine" problems. Portability: They encapsulate your dependencies, making it easier to move and share your ML projects across different Azure services or even with external collaborators. %%writefile {CLOUD_DIR}/train/Dockerfile FROM mcr.microsoft.com/aifx/acpt/stable-ubuntu2004-cu124-py310-torch241:biweekly.202410.2 USER root # support Deepspeed launcher requirement of passwordless ssh login RUN apt-get update && apt-get -y upgrade RUN pip install --upgrade pip RUN apt-get install -y openssh-server openssh-client # Install pip dependencies COPY requirements.txt . RUN pip install -r requirements.txt --no-cache-dir RUN MAX_JOBS=4 pip install flash-attn==2.6.3 --no-build-isolation def get_or_create_docker_environment_asset(ml_client, env_name, docker_dir, update=False): try: latest_env_version = max([int(e.version) for e in ml_client.environments.list(name=env_name)]) if update: raise ResourceExistsError('Found Environment asset, but will update the Environment.') else: env_asset = ml_client.environments.get(name=env_name, version=latest_env_version) print(f"Found Environment asset: {env_name}. Will not create again") except (ResourceNotFoundError, ResourceExistsError) as e: print(f"Exception: {e}") env_docker_image = Environment( build=BuildContext(path=docker_dir), name=env_name, description="Environment created from a Docker context.", ) env_asset = ml_client.environments.create_or_update(env_docker_image) print(f"Created Environment asset: {env_name}") return env_asset env = get_or_create_docker_environment_asset(ml_client, azure_env_name, docker_dir=f"{CLOUD_DIR}/train", update=False) Reference : training.ipynb 6: Create a Training Script: Your training script will handle the fine-tuning process and log metrics using MLflow, which is tightly integrated with Azure Machine Learning. This involves - Loading the dataset, defining the model architecture, writing functions to track and log metrics such as training and evaluation loss. 
def main(args): ################### # Hyper-parameters ################### # Only overwrite environ if wandb param passed if len(args.wandb_project) > 0: os.environ['WANDB_API_KEY'] = args.wandb_api_key os.environ["WANDB_PROJECT"] = args.wandb_project if len(args.wandb_watch) > 0: os.environ["WANDB_WATCH"] = args.wandb_watch if len(args.wandb_log_model) > 0: os.environ["WANDB_LOG_MODEL"] = args.wandb_log_model use_wandb = len(args.wandb_project) > 0 or ("WANDB_PROJECT" in os.environ and len(os.environ["WANDB_PROJECT"]) > 0) training_config = {"per_device_train_batch_size" : args.train_batch_size, # Controls the batch size per device "per_device_eval_batch_size" : args.eval_batch_size, # Controls the batch size for evaluation "gradient_accumulation_steps" : args.grad_accum_steps, "warmup_ratio" : args.warmup_ratio, # Controls the ratio of warmup steps "learning_rate" : args.learning_rate, "fp16" : not torch.cuda.is_bf16_supported(), "bf16" : torch.cuda.is_bf16_supported(), "optim" : "adamw_8bit", "lr_scheduler_type" : args.lr_scheduler_type, "output_dir" : args.output_dir, "logging_steps": args.logging_steps, "logging_strategy": "epoch", "save_steps": args.save_steps, "eval_strategy": "epoch", "num_train_epochs": args.epochs, # "load_best_model_at_end": True, "save_only_model": False, "seed" : 0 } peft_config = { "r": args.lora_r, "lora_alpha": args.lora_alpha, "lora_dropout": args.lora_dropout, "bias": "none", #"target_modules": "all-linear", "target_modules": ["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"], "modules_to_save": None, "use_gradient_checkpointing": "unsloth", "use_rslora": False, "loftq_config": None, } checkpoint_dir = os.path.join(args.output_dir, "checkpoints") train_conf = TrainingArguments( **training_config, report_to="wandb" if use_wandb else "azure_ml", run_name=args.wandb_run_name if use_wandb else None, ) model, tokenizer = load_model(args) model = FastLanguageModel.get_peft_model(model, **peft_config) ############### # Setup logging ############### logging.basicConfig( format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", datefmt="%Y-%m-%d %H:%M:%S", handlers=[logging.StreamHandler(sys.stdout)], ) log_level = train_conf.get_process_log_level() logger.setLevel(log_level) datasets.utils.logging.set_verbosity(log_level) transformers.utils.logging.set_verbosity(log_level) transformers.utils.logging.enable_default_handler() transformers.utils.logging.enable_explicit_format() # Log on each process a small summary logger.warning( f"Process rank: {train_conf.local_rank}, device: {train_conf.device}, n_gpu: {train_conf.n_gpu}" + f" distributed training: {bool(train_conf.local_rank != -1)}, 16-bits training: {train_conf.fp16}" ) logger.info(f"Training/evaluation parameters {train_conf}") logger.info(f"PEFT parameters {peft_config}") # Load the dataset train_dataset, eval_dataset = prepare_dataset(tokenizer, args) ########### # Training ########### trainer = SFTTrainer( model=model, args=train_conf, tokenizer = tokenizer, train_dataset=train_dataset, eval_dataset=eval_dataset, dataset_text_field="text", packing = False # Can make training 5x faster for shorter responses ) # Show current memory stats gpu_stats = torch.cuda.get_device_properties(0) start_gpu_memory = round(torch.cuda.max_memory_reserved() / 1024 / 1024 / 1024, 3) max_memory = round(gpu_stats.total_memory / 1024 / 1024 / 1024, 3) logger.info(f"GPU = {gpu_stats.name}. 
Max memory = {max_memory} GB.") logger.info(f"{start_gpu_memory} GB of memory reserved.") last_checkpoint = None if os.path.isdir(checkpoint_dir): checkpoints = [os.path.join(checkpoint_dir, d) for d in os.listdir(checkpoint_dir)] if len(checkpoints) > 0: checkpoints.sort(key=os.path.getmtime, reverse=True) last_checkpoint = checkpoints[0] trainer_stats = trainer.train(resume_from_checkpoint=last_checkpoint) ############# # Evaluation ############# tokenizer.padding_side = "left" metrics = trainer.evaluate() metrics["eval_samples"] = len(eval_dataset) trainer.log_metrics("eval", metrics) trainer.save_metrics("eval", metrics) # ############ # # Save model # ############ os.makedirs(args.model_dir, exist_ok=True) if args.save_merged_model: print("Save PEFT model with merged 16-bit weights") model.save_pretrained_merged("outputs", tokenizer, save_method="merged_16bit") else: print(f"Save PEFT model: {args.model_dir}/model") model.save_pretrained(f"{args.model_dir}/model") tokenizer.save_pretrained(args.model_dir) Reference : train.py 7: Create the Compute Cluster: . For this experiment, we are using Standard_NC24ads_A100_v4 which has 1 GPU and 80 GB of VRAM. Select the compute based on the model size and batch size. from azure.ai.ml.entities import AmlCompute ### Create the compute cluster try: compute = ml_client.compute.get(azure_compute_cluster_name) print("The compute cluster already exists! Reusing it for the current run") except Exception as ex: print( f"Looks like the compute cluster doesn't exist. Creating a new one with compute size {azure_compute_cluster_size}!" ) try: print("Attempt #1 - Trying to create a dedicated compute") tier = 'LowPriority' if USE_LOWPRIORITY_VM else 'Dedicated' compute = AmlCompute( name=azure_compute_cluster_name, size=azure_compute_cluster_size, tier=tier, max_instances=1, # For multi node training set this to an integer value more than 1 ) ml_client.compute.begin_create_or_update(compute).wait() except Exception as e: print("Error") 8: Submit the Fine-Tuning Job With everything set up, you can now submit your fine-tuning job: from azure.ai.ml import command from azure.ai.ml import Input from azure.ai.ml.entities import ResourceConfiguration job = command( inputs=dict( #train_dir=Input(type="uri_folder", path=DATA_DIR), # Get data from local path train_dir=Input(path=f"{AZURE_DATA_NAME}_train@latest"), # Get data from Data asset val_dir = Input(path=f"{AZURE_DATA_NAME}_val@latest"), epoch=d['train']['epoch'], train_batch_size=d['train']['train_batch_size'], eval_batch_size=d['train']['eval_batch_size'], ), code=f"{CLOUD_DIR}/train", # local path where the code is stored compute=azure_compute_cluster_name, command="python train_v3.py --train_dir ${{inputs.train_dir}} --val_dir ${{inputs.val_dir}} --train_batch_size ${{inputs.train_batch_size}} --eval_batch_size ${{inputs.eval_batch_size}}", #environment="azureml://registries/azureml/environments/acft-hf-nlp-gpu/versions/77", # Use built-in Environment asset environment=f"{azure_env_name}@latest", distribution={ "type": "PyTorch", "process_count_per_instance": 1, # For multi-gpu training set this to an integer value more than 1 }, ) returned_job = ml_client.jobs.create_or_update(job) ml_client.jobs.stream(returned_job.name) 9: Monitor Training Metrics: After initiating the job, keep an eye on the output for key metrics like training loss and evaluation loss. 
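Because the training script logs to MLflow, you can also pull these metrics programmatically instead of waiting for the portal charts. The snippet below is a rough sketch (not from the original post): it assumes the ml_client and returned_job objects from the earlier steps, and the metric names are assumptions that depend on what train.py actually logs.

```python
# Sketch: read logged metrics for the submitted job via MLflow.
import mlflow
from mlflow.tracking import MlflowClient

workspace = ml_client.workspaces.get(ml_client.workspace_name)
mlflow.set_tracking_uri(workspace.mlflow_tracking_uri)   # point MLflow at the AzureML workspace

client = MlflowClient()
run_id = returned_job.name   # the AzureML job name doubles as the MLflow run id
for metric_name in ["train_loss", "eval_loss"]:          # assumed metric names
    for point in client.get_metric_history(run_id, metric_name):
        print(metric_name, point.step, point.value)
```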
Since we've logged the results to MLflow, which is seamlessly integrated with Azure Machine Learning, we can easily review the loss function by navigating to the metrics tab within the jobs section.

Key Takeaways:
Both the training and evaluation loss decrease significantly in the initial steps, suggesting effective learning.
The gradual reduction in loss in subsequent steps indicates that the model continues to refine its parameters, but at a slower rate.
The consistency in the downward trend for both training and evaluation loss implies that the model is not overfitting and is generalizing well to new data. However, the slight uptick towards the end in the evaluation loss might need monitoring to ensure it doesn't indicate overfitting at later stages.
Overall, it looks promising, so let's go ahead and register the model.

10: Register the Model: After fine-tuning, register the model to make it available for deployment:

from azureml.core import Workspace, Run
import os

# Connect to your workspace
ws = Workspace.from_config()
experiment_name = 'experiment_name'
run_id = 'job_name'
run = Run(ws.experiments[experiment_name], run_id)

# Register the model
model = run.register_model(
    model_name=d["serve"]["azure_model_name"],  # this is the name the model will be registered under
    model_path="outputs"  # this is the path to the model file in the run's outputs
)

# Create a local directory to save the outputs
local_folder = './model_v2'
os.makedirs(local_folder, exist_ok=True)

# Download the entire outputs folder
run.download_files(prefix='outputs', output_directory=local_folder)

Step 11: Deploy the Model to a Managed Online Endpoint: Managed online endpoints provide a seamless way to deploy models without managing underlying infrastructure. They offer scalability, versioning, and easy rollback compared to deploying on an Azure Kubernetes Service (AKS) cluster.

11a. Build the environment: For deploying the model to a managed online endpoint, first create the environment with the required dependencies and web server for inference.

%%writefile {CLOUD_DIR}/serve/Dockerfile
FROM mcr.microsoft.com/aifx/acpt/stable-ubuntu2004-cu124-py310-torch241:biweekly.202410.2

# Install pip dependencies
COPY requirements.txt .
RUN pip install -r requirements.txt --no-cache-dir

# Inference requirements
COPY --from=mcr.microsoft.com/azureml/o16n-base/python-assets:20230419.v1 /artifacts /var/
RUN /var/requirements/install_system_requirements.sh && \
    cp /var/configuration/rsyslog.conf /etc/rsyslog.conf && \
    cp /var/configuration/nginx.conf /etc/nginx/sites-available/app && \
    ln -sf /etc/nginx/sites-available/app /etc/nginx/sites-enabled/app && \
    rm -f /etc/nginx/sites-enabled/default
ENV SVDIR=/var/runit
ENV WORKER_TIMEOUT=400
EXPOSE 5001 8883 8888

# support Deepspeed launcher requirement of passwordless ssh login
RUN apt-get update
RUN apt-get install -y openssh-server openssh-client
RUN MAX_JOBS=4 pip install flash-attn==2.6.3 --no-build-isolation

Reference: serving.ipynb

11b. Create a serving script: Creating a serve script for inference is a crucial step in deploying your machine learning model to a production environment. This script handles incoming requests, processes input data, runs the model inference, and returns the results. In Azure Machine Learning, the serve script is part of the deployment package for your model, typically used in conjunction with a managed endpoint or a Kubernetes service.
A serve script in Azure ML typically consists of two main functions: init(): This function initializes the model and any other necessary resources. It is called once when the deployment is first loaded. run(data): This function is called every time a request is made to the deployed model. It processes the incoming data, performs inference using the model, and returns the results. import os import re import json import torch import base64 import logging from io import BytesIO from transformers import AutoTokenizer, AutoProcessor, pipeline from transformers import AutoModelForCausalLM, AutoProcessor device = torch.device("cuda" if torch.cuda.is_available() else "cpu") def init(): """ This function is called when the container is initialized/started, typically after create/update of the deployment. You can write the logic here to perform init operations like caching the model in memory """ global model global tokenizer # AZUREML_MODEL_DIR is an environment variable created during deployment. # It is the path to the model folder (./azureml-models/$MODEL_NAME/$VERSION) # Please provide your model's folder name if there is one model_name_or_path = os.path.join( os.getenv("AZUREML_MODEL_DIR"), "outputs" ) model_kwargs = dict( trust_remote_code=True, device_map={"":0}, torch_dtype="auto" ) model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map ={"" : 0}, **model_kwargs) tokenizer = AutoTokenizer.from_pretrained(model_name_or_path) logging.info("Loaded model.") def run(json_data: str): logging.info("Request received") data = json.loads(json_data) input_data = data["input_data"] params = data['params'] pipe = pipeline("text-generation", model = model, tokenizer = tokenizer) output = pipe(input_data, **params) result = output[0]["generated_text"] logging.info(f"Generated text : {result}") json_result = {"result" : str(result)} return json_result Reference : score.py 11c. Create a managed online endpoint and deploy the model to endpoint: Creating an endpoint and deploying your model on Azure Machine Learning is the final step to make your model accessible for real-time inference. This process involves setting up a service that can handle incoming requests, execute the model, and return the results. Why Create an Endpoint? An endpoint is a network-accessible interface that allows external applications or users to interact with your deployed machine learning model. Creating an endpoint is crucial for the following reasons: Accessibility: Endpoints make your model accessible over the internet or within a secured network, enabling other applications, services, or users to send requests and receive responses. API Integration: By exposing your model as a RESTful API, endpoints facilitate integration with various applications, allowing seamless communication and data exchange. Load Management: An endpoint can manage requests from multiple clients, handling concurrent requests and distributing the load appropriately. Security: Endpoints provide mechanisms for authentication and authorization, ensuring that only authorized users can access the model. Scalability: Azure-managed endpoints can automatically scale based on demand, ensuring that your model can handle varying workloads without manual intervention. 
from azure.ai.ml.entities import ( ManagedOnlineEndpoint, IdentityConfiguration, ManagedIdentityConfiguration, ) azure_endpoint_name = d['serve']['azure_endpoint_name'] # Check if the endpoint already exists in the workspace try: endpoint = ml_client.online_endpoints.get(azure_endpoint_name) print("---Endpoint already exists---") except: # Create an online endpoint if it doesn't exist # Define the endpoint endpoint = ManagedOnlineEndpoint( name=azure_endpoint_name, description=f"Test endpoint for {model.name}", ) # Trigger the endpoint creation try: ml_client.begin_create_or_update(endpoint).wait() print("\n---Endpoint created successfully---\n") except Exception as err: raise RuntimeError( f"Endpoint creation failed. Detailed Response:\n{err}" ) from err Why Deploy a Model? Deployment is the process of transferring your trained machine learning model from a development environment to a production environment where it can serve real-time predictions. Deployment is critical because: Operationalization: Deployment operationalizes your model, moving it from an experimental or development phase to a live environment where it can deliver value to end-users or systems. Resource Allocation: Deploying a model involves configuring the necessary compute resources (such as CPU, memory, and GPUs) to ensure optimal performance during inference. Environment Consistency: During deployment, the model is packaged with its dependencies in a consistent environment, ensuring reproducibility and minimizing discrepancies between development and production. Monitoring and Maintenance: Deployment sets up the infrastructure to monitor the model's performance, usage, and health, allowing for ongoing maintenance and updates. Version Control: Deployment allows you to manage and update different versions of your model, providing flexibility to roll back or switch to newer versions as needed. from azure.ai.ml.entities import ( OnlineRequestSettings, CodeConfiguration, ManagedOnlineDeployment, ProbeSettings, Environment ) azure_deployment_name = f"{d['serve']['azure_deployment_name']}-v1" deployment = ManagedOnlineDeployment( name=azure_deployment_name, endpoint_name=azure_endpoint_name, model=model, instance_type=azure_compute_cluster_size, instance_count=1, #code_configuration=code_configuration, environment = env, scoring_script="score.py", code_path=f"./{CLOUD_DIR}/inference", #environment_variables=deployment_env_vars, request_settings=OnlineRequestSettings(max_concurrent_requests_per_instance=20, request_timeout_ms=90000, max_queue_wait_ms=60000), liveness_probe=ProbeSettings( failure_threshold=30, success_threshold=1, period=100, initial_delay=500, ), readiness_probe=ProbeSettings( failure_threshold=30, success_threshold=1, period=100, initial_delay=500, ), ) # Trigger the deployment creation try: ml_client.begin_create_or_update(deployment).wait() print("\n---Deployment created successfully---\n") except Exception as err: raise RuntimeError( f"Deployment creation failed. 
Detailed Response:\n{err}" ) from err endpoint.traffic = {azure_deployment_name: 100} endpoint_poller = ml_client.online_endpoints.begin_create_or_update(endpoint) Step 12: Run Inference on Sample Data: Test the deployed model using sample data that expects function calls: import json import os sample = { "input_data": [ {'role': 'system', 'content': 'You are an helpful assistant who has access to the following functions to help the user, you can use the functions if needed- { "name": "calculate_shipping_cost", "description": "Calculate the cost of shipping a package", "parameters": { "type": "object", "properties": { "weight": { "type": "number", "description": "The weight of the package in pounds" }, "destination": { "type": "string", "description": "The destination of the package" } }, "required": [ "weight", "destination" ] }}}"'}, {'role': 'user', 'content': 'Can you help me with shipping cost for a package?'}, {'role': 'assistant', 'content': 'Sure! I can help you with that. Please provide me with the weight and destination of the package.'}, {'role': 'user', 'content': 'The weight of the package is 10 pounds and the destination is New York.'} ], "params": { "temperature": 0.1, "max_new_tokens": 512, "do_sample": True, "return_full_text": False } } # Dump the sample data into a json file with open(request_file, "w") as f: json.dump(sample, f) result = ml_client.online_endpoints.invoke( endpoint_name=azure_endpoint_name, deployment_name=azure_deployment_name, request_file=request_file ) result_json = json.loads(result) result = result_json['result'] print(result) Step 13: Compare with Base Model: Now, lets run the same sample through the base model to observe the difference in performance. As we can see, while the fine-tuned model did a perfect job of generating response with the right function and arguments, the base model struggles to generate the desired output Step 14: Rerun the fine-tuning job by removing function descriptions from the system message: Now, lets rerun the experiment, but this time we will drop the function description from the dataset for context length optimization def remove_desc_from_prompts(data): system_message = data['system'] pattern = r'"description":\s*"[^"]*",?\n?' # Remove the "description" fields cleaned_string = re.sub(pattern, '"description":"",', system_message) return cleaned_string ## Update the system message by removing function descriptions and argument description train_dataset = train_dataset.map(lambda x : {"updated_system" : remove_desc_from_prompts(x)}, remove_columns = ["system"]) test_dataset = test_dataset.map(lambda x : {"updated_system" : remove_desc_from_prompts(x)}, remove_columns = ["system"]) val_dataset = val_dataset.map(lambda x : {"updated_system" : remove_desc_from_prompts(x)}, remove_columns = ["system"]) train_dataset.save_to_disk(f"{DATA_DIR}/train") test_dataset.save_to_disk(f"{DATA_DIR}/test") val_dataset.save_to_disk(f"{DATA_DIR}/val") Reference : preprocess.py As can be seen from the results, removing the function description doesn't degrade the model performance but instead this fine-tuned model version requires lesser input tokens resulting in a significant reduction in token consumption with improved latency. Step 15: Further Exploration: Consider removing arguments or even the function itself in subsequent experiments to evaluate performance. Conclusion This blog post has walked through the process of fine-tuning an SLM for function-calling on Azure Machine Learning. 
By following these steps, you can effectively tailor a model to meet specific functional requirements. You can access the full code here. For a deeper dive into evaluating fine-tuned models, including metrics and code samples, check out the next blog post. By leveraging Azure's powerful tools, you can streamline the development and deployment of machine learning models, making them more efficient and effective for your specific tasks.

References:
Fine tuning for function calling | OpenAI Cookbook
Fine-tuning function calls with Azure OpenAI Service - Azure AI services | Microsoft Learn
michaelnny/Llama3-FunctionCalling: Fine-tune Llama3 model to support function calling
Fine Tuning LLMs for Function Calling w/Pawel Garbacki - YouTube
slm-innovator-lab/2_slm-fine-tuning-mlstudio at main · Azure/slm-innovator-lab

Scalable and Efficient Fine-Tuning of LLM on Azure ML
https://github.com/james-tn/llm-fine-tuning/tree/main/opensource_llm/single_step
Co-Author: Mohamad AL jazaery

Why Scalable and Efficient Fine-Tuning Matters

Faster Iterations, Shorter Time-to-Value: In today's competitive AI landscape, time is of the essence. The faster you can fine-tune a model, the quicker you can validate ideas, test hypotheses, and bring solutions to market.
High-performance GPU machines are costly: High-performance GPUs and compute clusters don't come cheap, and their availability is often limited. Efficient fine-tuning techniques, such as model sharding and distributed training, maximize the utilization of these precious resources, ensuring that you get the most out of your infrastructure investment.

Choosing the Right Azure ML GPU Compute for the Job: NC or ND?

Not all GPU computes are created equal, and choosing the right SKU can make or break your training efficiency.
ND Series: Ideal for distributed training across multiple nodes, thanks to its InfiniBand (IB) connectivity that ensures high-speed communication between nodes, such as pretraining an LLM or fine-tuning a very large model (~70B parameters).
NC Series: Suited to small and medium workloads where no heavy interaction between nodes is needed, such as LLM inferencing or mid-size LLM fine-tuning.

Azure GPU Machine Options by Scenario (scenario / common model size / training approach / recommended Azure compute):
Small-scale fine-tuning / < 3B parameters / Parameter-efficient tuning / NCas_T4_v3 (Tesla T4, 16 GB)
Medium-scale fine-tuning / 1–5B parameters / Full or parameter-efficient / NCs_v3 (Tesla V100, 16 GB)
Distributed training for medium models / 5–10B parameters / Full fine-tuning / ND_v2 (Tesla V100 NVLINK, 32 GB, InfiniBand)
Large-scale fine-tuning (single machine) / 10–30B parameters / Full or parameter-efficient / NC_A100_v4 (A100, 40 GB)
Distributed training for very large models / 20–70B parameters / Full fine-tuning / NDasrA100_v4 (A100, 80 GB, HDR InfiniBand)
Very large models training (single machine) / up to 70B parameters / Full or parameter-efficient / NCads_H100_v5 (H100 NVL, 94 GB)
Massive-scale distributed training / > 70B parameters / Full fine-tuning / ND-H100-v5 (H100, 80 GB, scale-out InfiniBand)

Distributed Efficient Training: A Quick Guide

When scaling fine-tuning tasks, choosing the right distributed training method is key:
DDP (Data Parallelism): Works well when the entire model fits on a single GPU. It replicates the model across multiple GPUs and splits the data for parallel processing. See experiment 1 in the following section.
Model Parallelism: A game-changer for massive models that don't fit on a single GPU. It shards not only the data but also the model parameters and optimizer states across multiple GPUs, enabling efficient training of models like LLaMA-70B on low-memory GPUs. Both FSDP and DeepSpeed excel at implementing advanced forms of model parallelism and memory optimization.

Memory Optimization Techniques
Gradient Checkpointing: Reduces memory by recomputing activations during the backward pass, trading memory for additional computation.
Mixed Precision Training: Reduces memory usage by using FP16 or BF16 instead of FP32, accelerating training while maintaining numerical stability. Supported by both frameworks.
Quantization (DeepSpeed Exclusive): Uses INT8 precision for weights and activations, dramatically reducing memory and compute requirements.
Offloading (DeepSpeed Exclusive): Offloads optimizer states and model parameters to CPU or NVMe, freeing up GPU memory for computation.
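As a concrete illustration of the techniques above, the sketch below shows how gradient checkpointing, mixed precision, and optional DeepSpeed offloading are typically switched on with Hugging Face TrainingArguments. It is a generic example rather than the exact configuration used in the experiments that follow, and the DeepSpeed config file name is hypothetical.

```python
# Illustrative only: enabling the memory optimizations described above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./outputs",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,       # trade per-step batch size for memory
    gradient_checkpointing=True,          # recompute activations during the backward pass
    bf16=True,                            # mixed precision (use fp16=True on GPUs without BF16, e.g. V100)
    deepspeed="ds_zero3_offload.json",    # optional: hypothetical ZeRO-3 config with CPU/NVMe offload
)
```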
Our Experiments: Pushing the Limits of Scalability

Experiment 1: Distributed Training on Multiple Nodes using DDP
We conducted an experiment to fine-tune the Llama-3.1-8B model using LoRA (Low-Rank Adaptation) on Azure ML NDv2-V100 nodes. The goal was to evaluate the efficiency of fine-tuning across different numbers of nodes (1, 2, and 3) and observe the impact on training time and throughput.

Azure ML Job YAML Definition

$schema: https://azuremlschemas.azureedge.net/latest/commandJob.schema.json
type: command
code: ./  # Path to your training script and related files
inputs:
  model_dir:
    path: azureml://registries/azureml/models/mistralai-Mistral-7B-v01/versions/19
command: >
  accelerate launch
  --num_processes 16  # gpu per machine * num of machines
  --num_machines 2
  --machine_rank $NODE_RANK
  --main_process_ip $MASTER_ADDR
  --main_process_port $MASTER_PORT
compute: azureml:ndv2-cluster
resources:
  instance_count: 2  # Number of nodes for distributed training
distribution:
  type: pytorch
  process_count_per_instance: 1  # Number of processes per node

Results: As we increased the number of nodes from one to three, the throughput increased proportionally. This indicates that the system scaled efficiently with the addition of more nodes, maintaining a close-to-linear improvement in throughput.

Experiment 2: Model Parallelism using FSDP
Fine-tuning a 70B-parameter model on GPUs with only 16GB of memory might sound impossible, but we made it happen using FSDP (Fully Sharded Data Parallelism) on Azure ML using a cluster of multiple NDv2-V100 nodes. By distributing not only the data but also the model parameters and optimizer states across multiple nodes, we unlocked the power of full sharding.

$schema: https://azuremlschemas.azureedge.net/latest/commandJob.schema.json
type: command
code: ./  # Path to your training script and related files
inputs:
  model_dir:
    path: azureml://registries/azureml-meta/models/Llama-3.3-70B-Instruct/versions/4
command: >
  accelerate launch
  --config_file "configs/fsdp_config.yaml"
  --num_processes 32
  --num_machines 4
  --machine_rank $NODE_RANK
  --main_process_ip $MASTER_ADDR
  --main_process_port $MASTER_PORT
  train.py
compute: azureml:ndv2-cluster
resources:
  instance_count: 4  # Number of nodes for distributed training
distribution:
  type: pytorch
  process_count_per_instance: 1  # Number of processes per node

Key Takeaways:
Memory Efficiency: Full sharding enabled us to fine-tune the LLaMA-70B model on V100 GPUs despite their limited memory.
Connectivity Matters: The InfiniBand (IB) connectivity of ND nodes played a critical role in ensuring smooth communication across GPUs, making this feat possible.

Conclusion
Scalable and efficient fine-tuning is the key to unlocking the true potential of Large Language Models. By leveraging distributed training techniques, such as FSDP and DDP, and optimizing compute resources on Azure ML, researchers and practitioners can overcome the challenges of training massive models, reducing costs, accelerating time-to-value, and driving AI innovation. Access the code and start experimenting here!

Future work: The second part will focus on real-world pipeline setups, including end-to-end model training, hyperparameter optimization, and testing. The third part will dive into deploying trained models for practical use. Future posts may explore best practices for specific fine-tuning scenarios and techniques.

Unlocking the Power of Synthetic Data for Fine-Tuning and Evaluation
Unlocking the Power of Synthetic Data for Fine-Tuning and Evaluation

In the rapidly evolving field of large language models (LLMs) and small language models (SLMs), fine-tuning and evaluation often present unique challenges. Whether the objective is to optimize models for function-calling use cases or to validate multi-agent workflows, one thing remains constant: the need for high-quality, diverse, and contextually relevant data. But what happens when real-world data is unavailable, incomplete, or too sensitive to use? Enter synthetic data, a powerful tool for accelerating the journey from experimentation to deployment. In this blog, we'll explore how synthetic data can address critical challenges, why it's indispensable for certain scenarios, and how Azure AI's Evaluator Simulator package enables seamless generation of synthetic interaction data to simulate user personas and scenarios.

The Growing Need for Synthetic Data in LLM Development

Fine-tuning or evaluating an LLM/SLM for specific use cases often requires vast amounts of labeled data tailored to the task at hand. However, sourcing such data comes with hurdles:

- Data scarcity: Real-world interaction data for niche use cases may not exist in sufficient quantity.
- Privacy concerns: User interactions may contain sensitive information, making direct use of this data problematic.
- Scenario testing: Real-world data rarely accounts for edge cases or extreme scenarios that models must handle gracefully.

Synthetic data solves these problems by creating controlled, customizable datasets that reflect real-world conditions, without the privacy risks or availability constraints.

Synthetic Data for Function-Calling Use Cases

Function-calling in LLMs involves executing API calls based on natural language inputs. For example, users might ask a travel app to "find flights to Paris under $500." Fine-tuning models for such use cases requires training them on structured, intent-rich inputs paired with corresponding API call structures. Synthetic data can:

- Simulate diverse intents: Generate variations of user queries across languages, styles, and preferences.
- Provide structured outputs: Automatically align these queries with the required API call schema for training or evaluation.
- Include edge cases: Test how models respond to ambiguous or incomplete queries. (A minimal example of such a record appears at the end of this section.)

Model evaluation after fine-tuning presents another set of challenges, because trusted data is needed to measure performance. Synthetic data generated by a superior model, followed by human screening to filter out noise, can provide a rich and diverse dataset for comparing the performance of fine-tuned models against their base models.

Synthetic Data in Multi-Agent Workflow Evaluation

Multi-agent workflows involve multiple models (or agents) collaborating to achieve a shared goal. A restaurant recommendation system, for example, may feature one agent parsing user preferences, another querying a knowledge graph, and a third crafting human-like responses. Synthetic data can:

- Simulate complex user personas: From foodies to budget-conscious travelers, generate interactions that test the robustness of multi-agent collaboration.
- Recreate realistic workflows: Model intricate agent-to-agent interactions, complete with asynchronous communication and fallback mechanisms.
- Stress-test failure scenarios: Ensure agents recover gracefully from errors, misunderstandings, or timeouts.

Multi-agent workflows often rely on hybrid architectures that combine SLMs, LLMs, domain-specific models, and fine-tuned systems to balance cost, latency, and accuracy. Synthetic data generated by a superior model can serve as a baseline for evaluating nuances like agent orchestration and error recovery.
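As mentioned in the function-calling section above, here is a minimal, purely illustrative example of what a single synthetic function-calling record might look like. The schema loosely follows the OpenAI-style chat format, and every field name and value shown is a made-up placeholder rather than output from any specific tool.

```python
# Purely illustrative: one synthetic function-calling training record, pairing a
# generated user intent with the structured tool call we want the model to emit.
synthetic_record = {
    "messages": [
        {"role": "system", "content": "You are a travel assistant with access to a flight search tool."},
        {"role": "user", "content": "Find me flights to Paris under $500 for next weekend."},
        {
            "role": "assistant",
            "tool_calls": [{
                "type": "function",
                "function": {
                    "name": "search_flights",   # hypothetical tool name
                    "arguments": '{"destination": "PAR", "max_price": 500, "date_range": "next_weekend"}',
                },
            }],
        },
    ]
}
```

Generating thousands of such records across personas, phrasings, and edge cases (missing dates, ambiguous destinations) is exactly the kind of work a simulator can automate, which is where the next section picks up.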
Azure AI Evaluator Simulator: A Game-Changer

Azure AI's Evaluator Simulator package offers a robust framework for generating synthetic interaction data tailored to your application needs. By simulating diverse user personas and scenarios, it provides:

- Realistic simulations: Emulate a wide range of user behaviors, preferences, and intents, making it ideal for creating datasets for function-calling and multi-agent workflows.
- Customizability: Tailor simulations to reflect domain-specific nuances, ensuring data relevance.
- Efficiency: Automate data generation at scale, saving time and resources compared to manual annotation.

How It Works

The Azure AI Evaluation SDK's Simulator class is designed to generate synthetic conversations and simulate task-based interactions. The module allows you to configure different personas, such as tech-savvy users, college grads, enterprise professionals, customers, supply chain managers, procurement managers, and finance admins, each interacting with your application in unique ways. You can also define the tasks each of these users is trying to accomplish, such as shopping for a family event, managing inventory, or preparing financial reports. Here's how it operates:

1. Model configuration: Initialize the simulator with your model's parameters (e.g., temperature, top_p, presence_penalty).
2. Input preparation: Provide input data (e.g., text blobs) for context, such as extracting text from a Wikipedia page.
3. Prompt optimization: Use the query_response_generating_prompty_override to customize how query-response pairs are generated.
4. User prompt specification: Define user behavior using the user_simulating_prompty_override to align simulations with specific personas.
5. Target callback specification: Implement a callback function that connects the simulator with your application.
6. Simulation execution: Run the simulator to generate synthetic conversations based on your configurations.

By following these steps, developers can create robust test datasets, enabling thorough evaluation and fine-tuning of their AI applications.

Example: Synthetic Data for an E-Commerce Assistant Bot

Let's walk through an example of generating synthetic data for an e-commerce assistant bot. This bot can perform tasks such as acting as a shopping assistant, managing inventory, and creating promo codes. Before we get started, make sure to install the azure-ai-evaluation package to follow along.

Step 1: Define Functions and APIs

Start by defining the core functions the bot can invoke, such as search_products, fetch_product_details, and add_to_cart. These functions simulate real-world operations. Please refer to functions and function_list to access the complete list of functions and function definitions.

Step 2: Configure the Simulator

```python
model_config = {
    "azure_endpoint": azure_endpoint,
    "azure_api_key": azure_api_key,
    "azure_deployment": azure_deployment,
}

from azure.ai.evaluation.simulator import Simulator

simulator = Simulator(model_config=model_config)
```

Next, connect the simulator to the application.
For this, establish the client and implement a callback function that invokes the application and facilitates interaction between the simulator and the app.

```python
from typing import List, Dict, Any, Optional
import json

from functions import *
from function_list import function_list
from openai import AzureOpenAI
from azure.identity import DefaultAzureCredential, get_bearer_token_provider

# Create the Azure OpenAI client used below (e.g., client = AzureOpenAI(...)).

def call_to_ai_application(query: str) -> str:
    # logic to call your application
    # use a try except block to catch any errors
    system_message = "Assume the role of e-commerce assistant designed for multiple roles. You can help with creating promo codes, tracking their usage, checking stock levels, helping customers make shopping decisions and more. You have access to a bunch of tools that you can use to help you with your tasks. You can also ask the user for more information if needed."
    completion = client.chat.completions.create(
        model=azure_deployment,
        messages=[
            {"role": "system", "content": system_message},
            {"role": "user", "content": query},
        ],
        max_tokens=800,
        temperature=0.1,
        top_p=0.2,
        frequency_penalty=0,
        presence_penalty=0,
        stop=None,
        stream=False,
        tools=function_list,
        tool_choice="auto",
    )
    message = completion.choices[0].message
    # print("Message : ", message)
    # change this to return the response from your application
    return message


async def callback(
    messages: List[Dict],
    stream: bool = False,
    session_state: Any = None,  # noqa: ANN401
    context: Optional[Dict[str, Any]] = None,
) -> dict:
    messages_list = messages["messages"]
    # get last message
    latest_message = messages_list[-1]
    query = latest_message["content"]
    context = None
    # call your endpoint or ai application here
    response = call_to_ai_application(query)
    # we are formatting the response to follow the openAI chat protocol format
    if response.tool_calls:
        prev_messages = messages["messages"]
        func_call_messages = []
        tool_calls = response.tool_calls
        # Add the tool calls to the messages
        for tool_call in tool_calls:
            formatted_response = {"role": "assistant", "function_call": tool_call.function.to_dict()}
            func_call_messages.append(formatted_response)
        # Execute the APIs and add the responses to the messages
        for tool_call in tool_calls:
            function_name = tool_call.function.name
            function_args = tool_call.function.arguments
            func = globals().get(function_name)
            if callable(func):
                result = json.dumps(func(**json.loads(function_args)))
                # formatted_response = {"content": result, "role": "tool", "name": function_name}
                formatted_response = {"role": "function", "content": result, "name": function_name}
                func_call_messages.append(formatted_response)
            else:
                print("Function {} not found".format(function_name))
        # Second API call: Get the final response from the model
        final_response = client.chat.completions.create(
            model=azure_deployment,
            messages=prev_messages + func_call_messages,
        )
        final_response = {"content": final_response.choices[0].message.content, "role": "assistant"}
        func_call_messages.append(final_response)
        # Stringify func_call messages to store in session state
        func_call_messages = create_content_from_func_calls(func_call_messages)
        func_call_messages = {"role": "assistant", "content": func_call_messages}
        messages["messages"].append(func_call_messages)
        # messages["messages"].append(final_response)
        return {"messages": messages["messages"], "stream": stream, "session_state": session_state}
    else:
        formatted_response = {
            "content": response.content,
            "role": "assistant",
        }
        messages["messages"].append(formatted_response)
        return {"messages": messages["messages"], "stream": stream, "session_state": session_state, "context": context}
```

We have used two helper functions here:

- create_content_from_func_calls: Creates string content from a list of function-call dictionaries. This merges all the internal messages invoking function calls into a single string, which is needed because the simulator module ignores all internal context and only retains the latest response.
- split_content: Splits string content into a list of dictionaries based on specified separators. This is required as a post-processing step, to split the string comprising the function call and the function response into separate messages, each with its own role and content.

Step 3: Define the Tasks

Use the Azure AI Evaluation SDK to configure the simulator with user personas and tasks, such as:

- A marketing manager creating a promo code and tracking its usage.
- A customer making a purchase using the promo code.
- An inventory manager checking stock levels.

Step 4: Customize the User Persona

Internally, the SDK has a prompty file that defines how the LLM that simulates the user should behave. The SDK also offers an option to override this file with your own prompty file. Let's override it to build a user persona who engages in an interactive conversation with the bot and asks follow-up questions, responding to the bot's replies according to their persona and requirements.

```
system:
You must behave as a user who wants accomplish this task: {{ task }} and you continue to interact with a system that responds to your queries. If there is a message in the conversation history from the assistant, make sure you read the content of the message and include it your first response.
Your mood is {{ mood }}
Make sure your conversation is engaging and interactive.
Output must be in JSON format
Here's a sample output:
{
  "content": "Here is my follow-up question.",
  "role": "user"
}
```

Step 5: Generate and Store the Outputs

Run the simulator to generate synthetic data. You can specify max_conversation_turns, which defines the predetermined number of conversation turns to simulate.

```python
outputs = await simulator(
    target=callback,
    text="Assume the role of e-commerce assistant designed for multiple roles. You can help with creating promo codes, tracking their usage, checking stock levels, helping customers make shopping decisions and more. You have access to a bunch of tools that you can use to help you with your tasks. You can also ask the user for more information if needed.",
    num_queries=3,
    max_conversation_turns=5,
    tasks=tasks,
    user_simulator_prompty=user_override_prompty,
    user_simulator_prompty_kwargs=user_prompty_kwargs,
)
```

Step 6: Review and Save the Outputs

Let's look at the output for one of the tasks. We can see how the simulator engages in an interactive conversation with the application to accomplish the desired task, and how all the interaction between the app and the simulator is captured in the final output. Let's store the output in a file:

```python
with open("output.json", "w") as f:
    json.dump(final_outputs, f)
```

Conclusion

Synthetic data transcends being a mere substitute for real-world data: it is a strategic asset for fine-tuning and evaluating LLMs. By enabling precise control over data generation, synthetic datasets empower developers to simulate user behaviors, test edge cases, and optimize models for specific workflows. With tools like Azure AI's Evaluator Simulator, generating this data has never been more accessible or impactful.
Whether you're building models for function-calling, orchestrating multi-agent systems, or tackling niche use cases, synthetic data ensures you're equipped to deliver reliable, high-performing solutions, regardless of complexity. Start leveraging synthetic data today and unlock the full potential of your LLM projects! You can access the full code here.

References

- azureai-samples/scenarios/evaluate/Simulators/Simulate_Context-Relevant_Data/Simulate_From_Input_Text at main · Azure-Samples/azureai-samples
- How to generate synthetic and simulated data for evaluation - Azure AI Foundry | Microsoft Learn
- Generate Synthetic QnAs from Real-world Data on Azure | Microsoft Community Hub
- How to use function calling with Azure OpenAI Service - Azure OpenAI Service | Microsoft Learn
- Fine-tuning function calls with Azure OpenAI Service - Azure AI services | Microsoft Learn
Tiny But Mighty: Unleashing the Power of Small Language Models 🚀

While Large Language Models (LLMs) like GPT-4 dominate headlines with their extensive capabilities, they often come at the cost of high computational requirements and complexity. For developers and organizations looking to implement AI solutions on edge devices or with limited resources, Small Language Models (SLMs) are emerging as a practical alternative. SLMs are not just "smaller" versions of their larger counterparts—they're designed to be faster, more efficient, and adaptable for specific tasks. With fewer parameters and lower computational needs, SLMs open the door to deploying AI on mobile devices, IoT systems, and edge environments without compromising performance.

What You Stand to Learn 🧠

- Introduction to Microsoft's AI Ecosystem: Discover Microsoft's end-to-end AI development tools, from Azure AI Services to ONNX Runtime, enabling efficient and secure deployment of AI models across cloud and edge environments.
- The Advantages of SLMs over LLMs: SLMs are game-changers for edge AI applications, providing faster training and inference times, reduced energy costs, and scalability across diverse devices.
- Hands-On with Phi-3 and ONNX Runtime: Experience live demonstrations of SLMs in action with tools like Phi-3 and ONNX Runtime, showcasing how to fine-tune and deploy models on mobile devices, IoT, and hybrid cloud environments.
- Responsible AI Practices: Understand how to safeguard your AI applications with Microsoft's Responsible AI toolkit, ensuring ethical and trustworthy deployments.

Watch the Full Session 👨‍💻

📅 Date: December 12, 2024
⏰ Time: 4 PM GMT | 5 PM CEST | 8 AM PT | 11 AM ET | 7 PM EAT

A session packed with live demos, practical examples, and Q&A opportunities.

Register NOW | Events | Microsoft Reactor

Agenda 🔍

- Introduction (5 min): A brief overview of the session and its focus on SLMs and LLMs.
- Microsoft AI Tooling (5 min): Explore the latest tools like Azure AI Services, Azure Machine Learning, and Responsible AI Tooling.
- How to Choose the Right Model (10 min): Key considerations such as performance, customizability, and ethical implications.
- Comparing SLMs vs LLMs (10 min): The strengths, weaknesses, and best use cases for both Small and Large Language Models.
- Deploying Models at the Edge (10 min): Insights into optimizing AI for mobile, IoT, and edge devices.
- Q&A: Addressing participant questions about AI development and deployment.
The Future of AI: GraphRAG – A better way to query interlinked documents

All language models are trained on a huge corpus of data. They have some world knowledge and can answer a range of questions about different things. However, due to their probabilistic nature and incomplete world knowledge, especially when it comes to different niches and domains, it is possible to receive incorrect answers. Retrieval Augmented Generation (RAG) helps augment world knowledge with enterprise-specific references, reducing inaccuracies and inconsistencies in the generated text.

How RAG works and improves LLM output

In RAG, the corpus of text relevant to your domain is converted into embeddings. Embeddings are created by translating documents into a mathematical form based on their traits, factors, and categories. The resulting vector representation is a long sequence of numbers. The distance between two vectors indicates how closely related they are. Similar objects are positioned closer together in a multi-dimensional embedding space, while less similar objects are positioned farther apart.

As the term signifies, RAG consists of three steps: first, the vectors relevant to the query are retrieved (typically from a vector database); then the prompt sent to the LLM is augmented with this relevant contextual information; and finally the LLM generates an answer based on this context and the query. Using the RAG approach, developers can extend the factual grounding of the model, improve the relevance, accuracy, and quality of the answers generated by the LLM, and in many cases refer back to the document snippets that were used in generating the answer. RAG has emerged as a powerful approach that combines the strengths of information retrieval and generative models.

How GraphRAG builds upon the RAG approach

Though RAG improves an LLM's generative capabilities, it sometimes struggles to make sense of concepts and the relationships between them when they are spread across documents. Also, as the complexity of data structures grows, there is a need for more advanced systems capable of handling interconnected, multi-faceted information. This is where GraphRAG comes into play. GraphRAG is an advanced version of RAG that utilizes graph-based retrieval mechanisms, enhancing the generation process by capturing richer, more contextual information. GraphRAG improves over vector RAG in the following ways.

Enhanced Contextual Understanding with Graphs

RAG traditionally uses a flat retrieval system (through embeddings in a vector DB), where it retrieves documents (and relevant document fragments) from a knowledge base based on their relevance to a query. The generative model then uses these retrieved documents to generate a response. While effective, this method can struggle when information is spread across multiple, interconnected documents. GraphRAG, on the other hand, uses graph-based retrieval, which allows it to connect pieces of information across a web of nodes. Each node represents an entity or a concept, and the edges represent the relationships between them, for example "is part of," "is cousin of," or "is made of." This structured approach enables GraphRAG to extract and utilize more nuanced, multi-layered contextual information, resulting in more coherent and accurate responses. (A toy sketch of such an entity-relationship graph follows below.)
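To make the idea of nodes, edges, and relationship traversal concrete, here is a small, purely illustrative sketch using the networkx library. It is not part of the GraphRAG implementation discussed later in this post; it is simply a hand-built toy version of the kind of knowledge graph GraphRAG constructs automatically, using characters that appear in the example further below.

```python
# Toy knowledge graph: entities as nodes, typed relationships as edges.
# Illustrative only; GraphRAG builds and queries its graph automatically.
import networkx as nx

kg = nx.DiGraph()
kg.add_edge("Duryodhana", "Dhritarashtra", relation="is son of")
kg.add_edge("Arjuna", "Pandu", relation="is son of")
kg.add_edge("Dhritarashtra", "Pandu", relation="is brother of")
kg.add_edge("Duryodhana", "Kaurava clan", relation="is part of")
kg.add_edge("Arjuna", "Pandava clan", relation="is part of")

# Traverse the graph to explain how two entities are connected.
path = nx.shortest_path(kg.to_undirected(), "Duryodhana", "Arjuna")
print(" -> ".join(path))  # Duryodhana -> Dhritarashtra -> Pandu -> Arjuna
```

This relationship-first view is what lets a graph-aware retriever answer "How is Duryodhana related to Arjuna?" by walking a short path between entities, instead of hoping that a single text chunk happens to mention both names together.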
Improved Knowledge Integration

In RAG, the generative model can sometimes produce fragmented or inconsistent outputs when the retrieved documents lack cohesion, because of the way the chunking process and embedding vectors work. GraphRAG solves this by using graph databases that can model complex relationships. Graph databases store both the entities, represented by nodes, and the relationships connecting them. They make it possible to traverse nodes using the relationships between them. By understanding the connections between different pieces of information, GraphRAG can integrate knowledge from diverse sources and provide a more unified and accurate response.

For example, if a question involves multiple entities and their interactions (e.g., "How does the supply chain impact product availability during a pandemic?"), GraphRAG can navigate through the interconnected data points, understand their dependencies, and generate a comprehensive answer. Another good example is compliance information spread across related documents and references to concepts in compliance. Let's assume you are opening a restaurant and want to know the different regulations needed to open a kitchen. Regulations can span fire safety, hygiene, food storage, ingredient sourcing, insurance, and labour guidelines. GraphRAG can work in such a scenario to collect all the references, traversing the relationships between them, and give users a coherent answer spanning a collection of documents.

Efficiency and Scalability

Another key consideration, especially for large, interconnected datasets, is efficiency. RAG requires scanning through multiple documents for relevant content, which can be resource-intensive, especially with vast datasets. GraphRAG's graph-based structure can efficiently traverse the data by focusing on relevant nodes and relationships, reducing computational overhead. Used intelligently, GraphRAG lets developers combine graph traversals of knowledge graphs with vector search to reduce computation and memory overheads. This is better, more intelligent indexing than traditional approaches. Moreover, graphs can be scaled horizontally, allowing for the expansion of knowledge bases without significantly increasing retrieval times. This makes GraphRAG suitable for enterprise-level applications where scalability and performance are critical. Also, when an organization spans many different vertical domains, this helps focus the search, so you gain both scalability and performance.

GraphRAG Implementation

Now that we know the benefits of GraphRAG, let's implement an approach using GraphRAG.

Setup

For this demonstration, we will use GPT-4o as the LLM in Azure AI Studio and text-embedding-3-small as the embedding model to generate embeddings on the platform. We will use the open-source LanceDB to store the embeddings and retrieve them for GraphRAG. There are many other models available via the Azure AI model catalog, which has a variety of LLMs, SLMs, and embedding models. Let's now create the deployments for both these models using Azure AI Studio.

Next, let's open a session on WSL to create a virtual environment for Python. We will be using the Python package for GraphRAG for this demo.

```
# Create a graphrag directory and change directory to try out this example
$ mkdir graphrag
$ cd graphrag/

# Install the virtualenv package, create a virtual environment called venv_name,
# and change directory to it. We create a virtual environment so we can safely
# install and experiment with packages without changing the global Python
# environment
$ sudo apt-get install python3-virtualenv
$ virtualenv -p python3 venv_name
$ cd venv_name/

# Activate the virtual environment
$ source bin/activate

# Next, install the Python GraphRAG package in the virtual environment
# created. This will download and install a number of packages and may
# take a little time. Amongst other things, it will install the opensource
# DataShaper data processing library that allows users to declaratively
# express data pipelines, schemas, and related assets using well-defined
# schemas
$ pip install graphrag
```

For the purposes of this demo, we will use the text of the Mahabharata. The Mahabharata is an epic Indian classical text that is divided into 18 chapters with a multitude of characters. It narrates the events that lead to the Kurukshetra war between two warring clans of cousins, the Kauravas and the Pandavas, and the aftermath of the war. There are more than 100 human characters in the text who interact with each other and are also related to each other in some way. You can read about the epic text here and read about the many characters. We will use one of the translations of the epic text from Project Gutenberg, which is in the public domain.

```
# Create the directory for input text and download the file using curl,
# storing it in the input directory. Though this is one document, it consists
# of many parts. The word count (634955) and line count (58868) in the
# example below can be seen using the wc command-line utility.
$ mkdir -p ./mahabharata/input
$ curl https://www.gutenberg.org/cache/epub/15474/pg15474.txt -o ./mahabharata/input/book.txt
$ wc input/book.txt
 58868 634955 3752942 input/book.txt

# Next, we will initialize the environment for GraphRAG using the command:
$ python -m graphrag.index --init --root ./mahabharata/
```

This will create a .env file and a settings.yaml file in the mahabharata directory. The .env file contains the environment variables required to run the GraphRAG pipeline. If you inspect the file, you will see a single environment variable defined, GRAPHRAG_API_KEY=<API_KEY>. This is the API key for the OpenAI API or Azure OpenAI Service endpoint, and it should be replaced with your API key. API keys and other settings can be seen in the screenshot below (red highlight) in Azure AI Studio.

In the llm section of settings.yaml, configure the following settings:

```yaml
llm:
  api_key: ${GRAPHRAG_API_KEY}
  type: azure_openai_chat  # or openai_chat
  model: gpt-4o
  model_supports_json: true  # recommended if this is available for your model
  api_base: https://<your_instance_details>.openai.azure.com
  api_version: 2024-08-01-preview  # please replace with your version
  deployment_name: gpt-4o
```

In the embeddings section of settings.yaml, configure the following settings:

```yaml
llm:
  api_key: ${GRAPHRAG_API_KEY}
  type: azure_openai_embedding
  model: text-embedding-3-small
  api_base: https://<your_instance_details>.openai.azure.com
  api_version: 2024-08-01-preview  # please replace with your version
  deployment_name: text-embedding-3-small
```

Next, run the indexing process as a precursor to creating the embeddings. This will create a log to track the indexing process. It will start the chunking process, create the entities, figure out the relationships between the different entities, generate graph relationships between the entities, and finally, after multiple processing steps, create the final documents to be stored for retrieval in LanceDB.
```
$ python -m graphrag.index --root ./mahabharata/
```

If the process completes successfully, a message will appear which says, "All workflows completed successfully." Note that there will be many warnings about deprecation, which can be safely ignored, for now.

Now that the embeddings have been created successfully, let's run a couple of queries to see if we can get answers about the characters and the relationships between them.

```
$ python -m graphrag.query --root ./mahabharata --method global "Who is Duryodhana and How is he related to Arjuna?"

creating llm client with {'api_key': 'REDACTED,len=32', 'type': "azure_openai_chat", 'model': 'gpt-4o', 'max_tokens': 4000, 'temperature': 0.0, 'top_p': 1.0, 'n': 1, 'request_timeout': 180.0, 'api_base': 'https://graphragdemo-inst.openai.azure.com', 'api_version': '2024-08-01-preview', 'organization': None, 'proxy': None, 'cognitive_services_endpoint': None, 'deployment_name': 'gpt-4o', 'model_supports_json': True, 'tokens_per_minute': 0, 'requests_per_minute': 0, 'max_retries': 10, 'max_retry_wait': 10.0, 'sleep_on_rate_limit_recommendation': True, 'concurrent_requests': 25}

SUCCESS: Global Search Response:

### Duryodhana: A Central Figure in the Mahabharata

Duryodhana is a pivotal character in the Indian epic, the Mahabharata. He is the eldest son of Dhritarashtra and Gandhari, making him the leader of the Kauravas, a group of a hundred brothers [Data: Reports (408, 397, 400, 275, +more)]. Duryodhana is known for his deep-seated enmity towards the Pandavas, particularly Arjuna, and his significant role in the Kurukshetra War, where he stands as a central antagonist [Data: Reports (408, 397, 569, 216, +more)].

### Relationship with Arjuna

Duryodhana and Arjuna are first cousins. Duryodhana is the son of Dhritarashtra, while Arjuna is the son of Pandu. Dhritarashtra and Pandu are brothers, making Duryodhana and Arjuna part of the same Kuru dynasty [Data: Reports (255, 398, 285, 177, 202, +more)]. This familial connection places them in direct conflict over the throne of Hastinapura, leading to the epic battle of Kurukshetra [Data: Reports (399, 216, 406, 440, +more)].

### Rivalry and Conflict

The relationship between Duryodhana and Arjuna is marked by intense rivalry and conflict. Duryodhana's ambition to rule Hastinapura and his enmity towards the Pandavas drive much of the narrative in the Mahabharata. This enmity is particularly highlighted during the Kurukshetra War, where Duryodhana leads the Kauravas against Arjuna and the Pandavas [Data: Reports (408, 397, 273, 202, +more)]. Their rivalry is a central theme in the epic, culminating in numerous battles and deceitful plots, including the infamous game of dice that led to the Pandavas' exile [Data: Reports (398, 255, 400, 256, +more)].

### Conclusion

Duryodhana's character is defined by his leadership of the Kauravas and his antagonistic relationship with the Pandavas, especially Arjuna. Their familial ties and subsequent rivalry form the crux of the Mahabharata's narrative, leading to the monumental conflict of the Kurukshetra War [Data: Reports (408, 397, 569, 216, +more)].
```

Let's try another query for another character, Karna.

```
$ python -m graphrag.query --root ./mahabharata --method global "Who is Karna and what are his main relationships?"
```
```
creating llm client with {'api_key': 'REDACTED,len=32', 'type': "azure_openai_chat", 'model': 'gpt-4o', 'max_tokens': 4000, 'temperature': 0.0, 'top_p': 1.0, 'n': 1, 'request_timeout': 180.0, 'api_base': 'https://graphragdemo-inst.openai.azure.com', 'api_version': '2024-08-01-preview', 'organization': None, 'proxy': None, 'cognitive_services_endpoint': None, 'deployment_name': 'gpt-4o', 'model_supports_json': True, 'tokens_per_minute': 0, 'requests_per_minute': 0, 'max_retries': 10, 'max_retry_wait': 10.0, 'sleep_on_rate_limit_recommendation': True, 'concurrent_requests': 25}

SUCCESS: Global Search Response:

### Karna: A Key Figure in the Mahabharata

Karna, also known as the Son of Radha, Vasusena, and Radheya, is a pivotal character in the Indian epic, the Mahabharata. He is renowned for his exceptional martial prowess, unwavering loyalty, and tragic life. Born to Kunti and the Sun God, Surya, Karna's divine heritage endowed him with extraordinary abilities, including natural armor and ear-rings that made him nearly invincible [Data: Reports (373, 198, 465, 502, 155, +more)].

### Key Relationships

#### **Duryodhana**

Karna's most significant relationship is with Duryodhana, the leader of the Kauravas. Duryodhana befriends Karna and installs him as the king of Anga, solidifying their bond. This relationship is marked by deep loyalty and mutual support, with Karna vowing to slay Arjuna and supporting Duryodhana in various schemes against the Pandavas [Data: Reports (390, 397, 373, 198, 465, +more)]. Karna's loyalty to Duryodhana is a defining aspect of his character, influencing many of his actions and decisions throughout the epic [Data: Reports (447, 440, 391, 383, 302)].

#### **Kunti**

Karna's relationship with his mother, Kunti, is complex and filled with emotional tension. Kunti reveals to Karna that he is her son, born before her marriage to Pandu, which adds a layer of tragedy to his character. Despite this revelation, Karna chooses to remain loyal to Duryodhana and fight against his half-brothers, the Pandavas [Data: Reports (373, 198, 465, 502, 155, +more)].

#### **Arjuna**

Karna's rivalry with Arjuna, one of the Pandavas, is a central theme in the Mahabharata. Both warriors are considered equals in skill and valor, and their final confrontation in the Kurukshetra war is one of the epic's most significant events. Karna's enmity with Arjuna is fueled by his loyalty to Duryodhana and his desire to prove his worth [Data: Reports (373, 198, 465, 502, 155, +more)].

#### **Surya**

Karna's divine father, Surya, plays a crucial role in his life, often providing guidance and warnings. For instance, Surya forewarns Karna about Indra's intentions to obtain his ear-rings and coat of mail, which are sources of his invincibility [Data: Reports (518, 547, 391, 358, 371)].

#### **Indra**

Karna's interactions with Indra, the king of the gods, are also notable. Indra, disguised as a Brahmin, tricks Karna into giving up his ear-rings and armor, which were his sources of invincibility. In return, Indra grants Karna a powerful weapon, the Sakti, which he can use only once [Data: Reports (302, 394)].

### Conclusion

Karna's life is marked by his unwavering loyalty to Duryodhana, his complex relationships with his mother Kunti and his half-brother Arjuna, and his divine heritage. These relationships shape his actions and decisions, making him one of the most compelling and tragic figures in the Mahabharata [Data: Reports (390, 397, 373, 198, 465, +more)].
```
GraphRAG is able to piece together the relevant bits from different parts of the chapters to give us the relationships between the different characters, with references (data reports, or chunks). In some cases, it can do this over many different chunks of data spread across a large text. This is a huge improvement over the baseline performance of large language models and baseline vector RAG. In a recent benchmark paper, it was found that knowledge graphs can improve the accuracy of answers up to 3x (54.2% vs 16.7%). GraphRAG can also be used in applications to make them more scalable and accurate, especially for domain-specific applications. Also, if you are working with many documents, such as in a data lake, or running this in production, I would suggest using Azure AI Search as the vector store; the GraphRAG solution accelerator, linked in the resources below, is a good starting point for that. More information about GraphRAG and Azure AI Studio is available in the resources below.

Resources:

- Learn more about GraphRAG
- Build with Azure AI Studio – https://ai.azure.com
- Review the Azure AI Studio documentation - https://learn.microsoft.com/en-us/azure/ai-studio/
- Access Azure AI Studio Learn modules - https://learn.microsoft.com/en-us/training/modules/introduction-to-azure-ai-studio/
- Access the Fundamentals of Generative AI learning course - https://learn.microsoft.com/en-us/training/modules/fundamentals-generative-ai/
- Access the GraphRAG GitHub repository - https://github.com/microsoft/graphrag/
- Use the GraphRAG Solution accelerator - https://github.com/Azure-Samples/graphrag-accelerator
Responsible Synthetic Data Creation for Fine-Tuning with RAFT Distillation

This blog will explore the process of crafting responsible synthetic data, evaluating it, and using it for fine-tuning models. We'll also dive into Azure AI's RAFT distillation recipe, a novel approach to generating synthetic datasets using Meta's Llama 3.1 model and UC Berkeley's Gorilla project.
The Future of AI: The paradigm shifts in Generative AI Operations

Dive into the transformative world of Generative AI Operations (GenAIOps) with Microsoft Azure. Discover how businesses are overcoming the challenges of deploying and scaling generative AI applications. Learn about the innovative tools and services Azure AI offers, and how they empower developers to create high-quality, scalable AI solutions. Explore the paradigm shift from MLOps to GenAIOps and see how continuous improvement practices ensure your AI applications remain cutting-edge. Join us on this journey to harness the full potential of generative AI and drive operational excellence.