AI - Machine Learning Blog

Announcing the GA for pipeline deployments in Batch Endpoints

santiagxf
Microsoft
Nov 15, 2023

We are happy to announce the general availability of pipeline component deployments for batch endpoints, a capability in Azure Machine Learning that allows customers to move machine learning pipelines as single units across environments, version and control them, and deploy them under a durable API.


Pipeline component deployments are the evolution of the “published pipelines” concept we had in SDK v1, now fully integrated with the latest and greatest of our platform.

 

What’s a batch endpoint?

Batch endpoints are durable APIs that ML practitioners can expose to external consumers to run machine learning workloads over large volumes of data asynchronously. They take storage locations as inputs, run jobs that process the data in parallel on compute clusters, and write the outputs to storage locations for further analysis.
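
To make this concrete, here is a minimal sketch of invoking a batch endpoint with the Azure Machine Learning Python SDK v2 (azure-ai-ml). The endpoint name, input name, and datastore path are placeholders for illustration.

```python
from azure.ai.ml import MLClient, Input
from azure.ai.ml.constants import AssetTypes
from azure.identity import DefaultAzureCredential

# Connect to the workspace that hosts the batch endpoint.
ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<SUBSCRIPTION_ID>",
    resource_group_name="<RESOURCE_GROUP>",
    workspace_name="<WORKSPACE>",
)

# Inputs are storage locations; the endpoint starts an asynchronous job that
# processes the data in parallel on the deployment's compute cluster.
job = ml_client.batch_endpoints.invoke(
    endpoint_name="heart-classifier-batch",  # placeholder endpoint name
    inputs={
        "input_data": Input(
            type=AssetTypes.URI_FOLDER,
            path="azureml://datastores/workspaceblobstore/paths/heart-data/",
        )
    },
)

# The call returns immediately; you can optionally follow the job's progress.
ml_client.jobs.stream(job.name)
```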

 

Batch endpoints support two types of deployments:

 

  • Model deployment: best for operationalizing models to perform batch inference at scale, with automatic parallelization.

  • Pipeline component deployment: best for operationalizing complex processing graphs as reusable pipeline components under a batch API.

 

Starting today, pipeline component deployment is also generally available, bringing new capabilities and production-ready support for deploying complex machine learning pipelines in organizations.
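
To give a feel for the workflow, here is a minimal sketch of creating a batch endpoint and a pipeline component deployment with the Python SDK v2 (azure-ai-ml). It assumes a pipeline component named reporting_pipeline has already been registered in the workspace and that a compute cluster named batch-cluster exists; all names are placeholders, and the exact class and parameter names follow our documentation at the time of writing.

```python
from azure.ai.ml import MLClient
from azure.ai.ml.entities import BatchEndpoint, PipelineComponentBatchDeployment
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<SUBSCRIPTION_ID>",
    resource_group_name="<RESOURCE_GROUP>",
    workspace_name="<WORKSPACE>",
)

# The endpoint is the durable API that consumers keep calling.
endpoint = BatchEndpoint(
    name="reporting-batch",  # placeholder endpoint name
    description="Runs the reporting pipeline over new data.",
)
ml_client.batch_endpoints.begin_create_or_update(endpoint).result()

# The deployment wraps a registered pipeline component plus the settings it
# needs to run, such as the default compute cluster.
deployment = PipelineComponentBatchDeployment(
    name="reporting-batch-dpl",
    endpoint_name=endpoint.name,
    component="azureml:reporting_pipeline@latest",  # previously registered component
    settings={"default_compute": "batch-cluster", "continue_on_step_failure": False},
)
ml_client.batch_deployments.begin_create_or_update(deployment).result()
```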

 

In this release, we are including the following new features in addition to what we had in public preview: 

  • Support for indicating the experiment name at invocation time. This helps customers who want to use a single endpoint to serve multiple consumers and organize jobs by experiment.

  • Better support for displaying the pipeline component’s compute graph in Azure Machine Learning studio.

  • Deployment lineage support to identify which particular deployment generated a given job.

Why components instead of pipelines?

Pipeline component deployments in batch endpoints allow users to deploy pipeline components instead of pipelines, which makes better use of reusable assets for organizations looking to streamline their MLOps practice.

 

By registering your pipelines as components, you get a single unit that can be moved and controlled. When combined with Azure Machine Learning registries, you can move those components across different environments and workspaces, unlocking true MLOps. 
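
As an example, the sketch below registers a pipeline definition as a versioned component in a workspace and then publishes the same component to an Azure Machine Learning registry so that other workspaces can consume it. The pipeline YAML file and registry name are placeholders.

```python
from azure.ai.ml import MLClient, load_component
from azure.identity import DefaultAzureCredential

# Client scoped to the workspace where the pipeline was authored.
workspace_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<SUBSCRIPTION_ID>",
    resource_group_name="<RESOURCE_GROUP>",
    workspace_name="<WORKSPACE>",
)

# Load the pipeline definition from YAML and register it as a versioned
# component in the workspace.
pipeline_component = load_component(source="pipeline.yml")  # placeholder file
workspace_client.components.create_or_update(pipeline_component)

# Client scoped to a shared Azure Machine Learning registry. Registering the
# component here makes the same versioned unit available to other workspaces
# (for example, staging and production).
registry_client = MLClient(DefaultAzureCredential(), registry_name="<REGISTRY_NAME>")
registry_client.components.create_or_update(pipeline_component)
```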

 

A diagram showing how pipelines can be registered as components, deployed first to a staging environment, and then promoted to a production environment.

 

How to move from published pipelines in V1 to batch endpoints 

Batch endpoints offer a similar yet more powerful way to run multiple assets under a durable API, which is why the published pipelines functionality has been moved to pipeline component deployments in batch endpoints.

 

Batch endpoints decouple the interface (the endpoint) from the actual implementation (the deployment) and let users decide which deployment serves the endpoint without affecting the contract with downstream consumers.
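
A rough sketch of what this looks like with the Python SDK v2, assuming the endpoint and deployment names from the earlier sketches (the defaults attribute used to switch the default deployment follows the SDK reference, but treat this as a sketch rather than the definitive flow):

```python
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<SUBSCRIPTION_ID>",
    resource_group_name="<RESOURCE_GROUP>",
    workspace_name="<WORKSPACE>",
)

# Consumers call the endpoint by name; the job is routed to whatever
# deployment is currently set as the endpoint's default.
job = ml_client.batch_endpoints.invoke(endpoint_name="reporting-batch")

# Operators can test a new deployment against the same endpoint by targeting
# it explicitly, without consumers changing anything on their side.
job = ml_client.batch_endpoints.invoke(
    endpoint_name="reporting-batch",
    deployment_name="reporting-batch-dpl-v2",
)

# Once validated, promote the new deployment to be the default one served by
# the endpoint. The contract (endpoint name and inputs) stays the same.
endpoint = ml_client.batch_endpoints.get("reporting-batch")
endpoint.defaults.deployment_name = "reporting-batch-dpl-v2"
ml_client.batch_endpoints.begin_create_or_update(endpoint).result()
```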

 

Learn more about how to migrate from published pipelines in V1 at Upgrade pipeline endpoints to SDK v2. 

 

Start moving to batch endpoints today 

Visit our documentation to learn how to create your first pipeline component deployment, or check our examples repository for batch endpoints. If you have any feedback, we are more than happy to hear from you!

Updated Nov 22, 2023