Synapse
5 Topics

Synapse: Automated publishing for CI/CD without manually pressing the "Publish" button in the UI
I'm trying to implement a fully automated CI/CD pipeline for my Synapse workspace using Azure DevOps pipelines. We use Git branches in our workflow: features are developed in the development Synapse workspace on feature branches, which are merged via pull request into our main branch (the collaboration branch). What I want is for an accepted pull request to automatically trigger the generation of ARM templates in the workspace_publish branch, which in turn triggers the YAML pipeline that is already in place. At the moment, whoever accepts the pull request must open Synapse afterwards and press the "Publish" button manually to create the ARM templates; from there, the pipeline handles everything automatically. Basically, I'm at the "Current CI/CD flow" in this article and I want to implement "the new CI/CD flow", just using Synapse instead of Data Factory. Sadly, the article marks the described solution as valid only for Data Factory and not for Synapse, and I couldn't find anything about using the ADFUtilities NPM package with Synapse workspaces. So what is the recommended way to automatically publish all changes and create ARM templates after a pull request is merged into the collaboration branch, for pipelines using Synapse instead of Data Factory? Is there even a way, or are we stuck pressing the "Publish" button ourselves?

1.6K views · 0 likes · 2 comments
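One route worth checking: Microsoft publishes a "Synapse workspace deployment" extension for Azure DevOps, and its v2 task can validate and deploy artifacts directly from the collaboration branch, which removes the dependency on the manually generated workspace_publish ARM templates. Below is a minimal sketch of such a pipeline; the exact task and input names should be verified against the extension's documentation, and every resource name here is a placeholder.

```yaml
# Runs on merge to the collaboration branch and deploys Synapse artifacts
# directly from Git, skipping the manual "Publish" step.
trigger:
  branches:
    include:
      - main            # collaboration branch

pool:
  vmImage: ubuntu-latest

steps:
  - task: Synapse workspace deployment@2   # from the marketplace extension
    inputs:
      operation: 'validateDeploy'          # validate artifacts, then deploy
      ArtifactsFolder: '$(Build.SourcesDirectory)'
      azureSubscription: 'my-service-connection'   # placeholder
      ResourceGroupName: 'my-resource-group'       # placeholder
      TargetWorkspaceName: 'my-target-workspace'   # placeholder
```

With this "validate and deploy" mode, the Git branch itself is the source of truth, so an accepted pull request is enough to kick off deployment.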
Loading Parquet and Delta files into Azure Synapse using ADB or Azure Synapse?

I have the below scenario. We use Azure Databricks to pull data from several sources, generate Parquet and Delta files, and load them into our ADLS Gen2 containers. We are now planning to build our data warehouse in Azure Synapse SQL pools, where we will create external tables over the Delta files for the dimension tables, and hash-distributed fact tables from the Parquet files. Now, the question is: to automate this data warehouse loading, which method is better? Is it better to write our transformation logic in Azure Databricks to create the dim and fact tables and load them regularly into Azure Synapse SQL pools, or to write that transformation logic in Azure Synapse itself? Please help.

619 views · 0 likes · 1 comment
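Whichever engine owns the transformations, the load into a dedicated SQL pool from staged Parquet files is commonly done with the T-SQL COPY statement. The sketch below builds such a statement from Python; the table name and storage URL are placeholder assumptions, and the actual execution (commented out) would need pyodbc plus a connection string for your pool.

```python
def copy_into_statement(table: str, source_url: str) -> str:
    """Build a COPY INTO statement for loading Parquet into a dedicated SQL pool."""
    return (
        f"COPY INTO {table} "
        f"FROM '{source_url}' "
        "WITH (FILE_TYPE = 'PARQUET')"
    )

# Placeholder fact-table load; the storage account, container, and table
# names are illustrative, not from the original post.
sql = copy_into_statement(
    "dbo.FactSales",
    "https://mystorageaccount.blob.core.windows.net/curated/fact_sales/*.parquet",
)
print(sql)

# Executing it would look roughly like this (connection details omitted):
# import pyodbc
# with pyodbc.connect(CONNECTION_STRING) as conn:
#     conn.execute(sql)
```

A scheduled job (Databricks workflow, Synapse pipeline, or ADF) can run this after each Parquet refresh, so the choice of transformation engine stays independent of the load mechanism.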
Could not find stored procedure 'OPTIMIZE' in Synapse

Hi fellow Azure people! I'm trying to use the OPTIMIZE command on a Delta table in Synapse, but I'm getting a "Could not find stored procedure 'OPTIMIZE'" error. Delta's docs do mention that this feature is only available in Delta Lake 1.2.0 and above; I've double-checked and we are running Delta 1.2. Below is an example of what I'm doing:

OPTIMIZE '/path/to/delta/table' -- Optimizes the path-based Delta Lake table

Does anyone know why this could be happening? I did notice that there is no reference to OPTIMIZE in the Synapse docs, though it does exist in the Databricks docs. Perhaps the command hasn't been implemented in Synapse yet?

738 views · 0 likes · 0 comments
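The "Could not find stored procedure" wording suggests the statement reached a Synapse SQL pool, which speaks T-SQL and has no OPTIMIZE; OPTIMIZE is a Delta Lake Spark SQL command, so it has to run in a Spark context (e.g. a Synapse Spark pool notebook). Open-source Delta Lake also expects the path-based form `delta.` with backticks rather than a quoted string. A small sketch below builds that statement; the `spark.sql` call is commented out because it needs a live Spark session with Delta Lake on the classpath.

```python
def optimize_statement(path: str) -> str:
    # Open-source Delta Lake addresses a path-based table as delta.`<path>`
    return f"OPTIMIZE delta.`{path}`"

stmt = optimize_statement("/path/to/delta/table")
print(stmt)  # OPTIMIZE delta.`/path/to/delta/table`

# In a Synapse Spark notebook you would then run:
# spark.sql(stmt)
```

If the same statement is run from a SQL script against a serverless or dedicated SQL pool, the "stored procedure" error above is exactly what you would expect to see.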
Enriching stream data with reference data in a Stream Analytics job

I have a job that streams client actions, and I need to add some fields from the client reference data. Clients are stored in Synapse. I see that reference data can be added to a Stream Analytics job via a file or via Azure SQL. What's the best way to do it, via a file or a database? And how do I create and update that file or table? By using Data Factory on a trigger to insert a new client into Synapse?

692 views · 0 likes · 0 comments
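For the blob-file option, Stream Analytics picks up new reference snapshots from a path pattern containing `{date}` and `{time}` tokens, so a scheduled job (ADF, a Synapse pipeline, or plain Python) can export the client table to a new dated blob and the job will refresh itself. A sketch, assuming the job's path pattern is `clients/{date}/{time}/clients.csv` with date format `yyyy-MM-dd` and time format `HH-mm`; all names are placeholders.

```python
from datetime import datetime, timezone
import csv, io

def snapshot_blob_name(now: datetime,
                       pattern: str = "clients/{date}/{time}/clients.csv") -> str:
    """Build the blob name ASA will look for at a given snapshot time."""
    return pattern.format(date=now.strftime("%Y-%m-%d"),
                          time=now.strftime("%H-%M"))

name = snapshot_blob_name(datetime(2024, 1, 15, 9, 30, tzinfo=timezone.utc))
print(name)  # clients/2024-01-15/09-30/clients.csv

# Build the CSV snapshot in memory; in a real job the rows would come from
# the Synapse client table and buf.getvalue() would be uploaded to that blob.
rows = [{"ClientId": 1, "ClientName": "Contoso"}]
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["ClientId", "ClientName"])
writer.writeheader()
writer.writerows(rows)
```

The database option (Azure SQL reference input with a refresh interval) avoids the file export entirely, at the cost of the job polling the table on a schedule.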
Getting started on Azure

I work with large datasets and I am just getting started on learning Azure. I am familiar with Python and Power BI. I am planning to integrate Synapse and Databricks for analytics and visualisation using Power BI. What books do you recommend for understanding these modules?

1.1K views · 0 likes · 1 comment