Healthcare and Life Sciences Blog

Copying FHIR Entities using fhir-loader

Chad_Voelker
Mar 23, 2023


In this blog post, we will review fhir-loader, an alternative to the FHIR server's built-in $export and $import functionality.

Prerequisites

This article, and the corresponding fhir-loader code, assumes that you already have an Azure Health Data Services FHIR service or the legacy Azure API for FHIR installed and that you will be installing fhir-loader into the same resource group. Doing so eases the configuration and security of fhir-loader, as it pulls the relevant keys from the existing Key Vault.

fhir-loader should work against other FHIR servers that support the HL7 export functionality, but this has not been explicitly tested.

Please be sure to also review the list of fhir-loader installation prerequisites.

Why

Under the hood, fhir-loader utilizes the FHIR APIs, so why not just use the $export and $import functionality that comes out of the box? Below are the pros and cons of each and why you may want to choose fhir-loader over the built-in functionality.

  • Exporting via the FHIR server's built-in $export
    • Pro: Good for exporting all entities, all patients, or groups of patients.
    • Con: Cannot use a query to narrow which entities are exported.
    • Con: Uses a single connection, so it can take longer.
  • Exporting via fhir-loader
    • Pro: Able to use a query to focus the export on a specific set of patients.
    • Pro: Uses multiple connections.
    • Pro: Can control the number of elements exported per thread and the number of concurrent threads.
    • Con: Patient-centric; only able to export Patient entities and entities linked to them.
  • Importing via the FHIR server's built-in $import
    • Pro: Will import entities of any type.
    • Con: Uses a single connection, so it can take longer.
  • Importing via fhir-loader
    • Pro: Can handle several formats: FHIR bundles, NDJSON files, and zip-formatted bundles.
    • Pro: Will import entities of any type.
    • Pro: Uses multiple connections.
    • Pro: Can control the number of elements imported per thread and the number of concurrent threads.

 

Fhir-loader Installation

It is recommended that fhir-loader be installed in the same Resource Group (RG) as the FHIR server it operates against, since it pulls resource connection information from the existing Key Vault. In our example, we were exporting from a FHIR server in one RG to a FHIR server in another RG, so to ease the process we installed two instances of fhir-loader.

As mentioned in the Prerequisites, fhir-loader should work against other FHIR Servers. You will just have to update the connection parameters manually. That scenario is beyond the scope of this article.

Git clone and Install Script Run

Following the fhir-loader detailed installation documentation, we must first download the fhir-loader code.

 

  1. Open a Cloud Shell using the bash interpreter.
  2. To ensure the code is available to other Cloud Shell sessions, first navigate to your clouddrive directory.
  3. Clone the repo (git clone https://github.com/microsoft/fhir-loader.git).
  4. Navigate to the newly created fhir-loader directory’s scripts directory (cd fhir-loader/scripts).
  5. Ensure the scripts are executable (chmod +x *.bash).
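Taken together, the Cloud Shell commands are (assuming the default clouddrive mount):

# run from an Azure Cloud Shell (bash)
cd clouddrive
git clone https://github.com/microsoft/fhir-loader.git
cd fhir-loader/scripts
chmod +x *.bash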

 

Run the Deploy Script

Run the script passing in all parameters:

./deployFhirBulk.bash -i <subscriptionId> -g <resourceGroupName> -l <resourceGroupLocation> -n <deployPrefix> -k <keyVaultName> -o fhir
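For example, with placeholder values (substitute your own subscription ID, resource group, region, deployment prefix, and Key Vault name):

./deployFhirBulk.bash -i 00000000-0000-0000-0000-000000000000 -g my-fhir-rg -l eastus2 -n fhirblk -k my-fhir-kv -o fhir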

This will take a few minutes as it builds and deploys the Azure Function…

Validate Postman FHIR Connectivity

A great collection of Postman FHIR requests is available from the Azure Health Data Services workshop here. See also this post for detailed directions to get Postman authenticated against your FHIR service. We will use this collection to investigate the entities that exist in our FHIR environment.

Once you’ve updated and selected the Postman environment, save a token using AuthorizeGetToken:

 

  1. Ensure the selected environment points to your resource instances.
  2. Select AuthorizeGetToken.
  3. Click Send.
  4. Select the access token, right-click and choose Set: <Current Postman Environment>.
  5. Select bearerToken.

The organization of your collection may vary; find an existing count request or create a new one to validate connectivity and to set your expectations for how many resources will be exported.

For patients, the request is a GET using the following URL: {{fhirurl}}/Patient?_summary=count. Adjust this as necessary to correspond with the filter you plan to apply to your export.
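If you prefer the command line, the same count check can be made with curl (a sketch; the service URL is a placeholder and $TOKEN holds the bearer token acquired above):

curl -s -H "Authorization: Bearer $TOKEN" "https://<your-fhir-service>.fhir.azurehealthcareapis.com/Patient?_summary=count"

The response is a Bundle whose total element holds the count.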

 

In this environment, you can see we have 69 patients.

 

Add fhir-loader Specific Postman Environment Variables

Out of the box, fhir-loader allows for function-key authenticated calls. Feel free to adjust to ensure compliance with your organization’s API authorization standards – this can include Managed Identities or protecting access through a VPN or API Management.

To get the URL and function key, navigate to the HttpStart trigger of the function that was created as part of the deployFhirBulk.bash command above.

 

  1. Select Functions.
  2. Select the ExportOrchestrator_HttpStart HTTP trigger.

 

  1. Select Code + Test.
  2. Select the ellipsis (…).
  3. Select Get function Url.

Copy the URL to the clipboard. We can either copy the entire string or break it up into Postman environment variables to make it more flexible. Be sure to remove the /api after azurewebsites.net, as the route path has been overridden by the function’s host.json file.

The format of the URL is:

https://<<ResourceName>>.azurewebsites.net/$alt-export?code=<<FunctionKey>>
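If you would rather script this than use the portal, recent versions of the Azure CLI can list the same function key (a sketch; the function app and resource group names are placeholders):

az functionapp function keys list --resource-group <resourceGroupName> --name <functionAppName> --function-name ExportOrchestrator_HttpStart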

Export from Source

To initiate the export, first create a new request in Postman. The verb is POST, the URL is above, and the body JSON has the following structure:

{
    "query": "Patient?_count=50",
    "patientreferencefield": "id",
    "include": [
        "Encounter?patient=$IDS&_count=50",
        "Patient?_id=$IDS&_count=50"
    ]
}

 

Remember, fhir-loader is patient-centric, so write your query to extract the desired patients and their corresponding linked entities. The example above extracts all patients and their Encounter entities. “patientreferencefield” is used to glue the query results to the entities in the “include” list. In this example we include Encounters, selected by patient=$IDS, and we also include the Patient records themselves (if they are desired in the export); they are linked to the query results by _id=$IDS.
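For reference, the same export request can be submitted without Postman using curl (a sketch; the function URL and key are placeholders, and export-request.json is a hypothetical local file holding the body shown above):

curl -s -X POST 'https://<ResourceName>.azurewebsites.net/$alt-export?code=<FunctionKey>' -H "Content-Type: application/json" -d @export-request.json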

Your request’s query doesn’t have to filter on patients, even though the entities included in the export will be those linked to the patients returned by the query. The fhir-loader documentation has other great examples. The following example finds all the Coverage entities that have class-value=Med1 and returns the related Patient, Coverage, and other entities that meet that condition. Note that all of the included entities are still linked through the patient ID:

{
    "query": "Coverage?class-value=Med1&_count=50",
    "patientreferencefield": "subscriber",
    "include": [
        "Patient?_id=$IDS&_count=50",
        "Coverage?patient=$IDS&_count=50",
        "ExplanationOfBenefit?patient=$IDS&_count=50",
        "Condition?patient=$IDS&_count=50",
        "Observation?patient=$IDS&_count=50",
        "AllergyIntolerance?patient=$IDS&_count=50",
        "Encounter?patient=$IDS&_count=50"
    ]
}

 

With the POST URL in place, let’s create the request and send it to fhir-loader.

 

  1. Complete the body of the request.
  2. Send the request.
  3. Fhir-loader will respond with a 202 Accepted.
  4. The response payload includes a job id and some follow up URLs.

Click on the statusQueryGetUri and click Send. This returns information about the export job.
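If you are scripting the export, the same status endpoint can be polled with curl (a sketch; STATUS_URL holds the statusQueryGetUri value copied from the 202 response, and jq is assumed to be available):

# poll every 30 seconds while the orchestration is still in progress
while curl -s "$STATUS_URL" | jq -e '.runtimeStatus | IN("Running","Pending")' > /dev/null; do sleep 30; done
# print the final status document
curl -s "$STATUS_URL" | jq .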

  1. If the job takes a while to complete, “runtimeStatus” will be “Running”. Make the request again until it returns “Completed”, indicating the job has finished.
  2. “customStatus” includes some timings, indicating how long the export took to run.
  3. “extractresults” summarizes the number of entities that were found and exported.
  4. The “output” element includes the paths to the blobs that were created in the storage account.

The blobs that were created can be viewed / downloaded / copied using tools like Azure Storage Explorer or AzCopy.

 

The _completed_run.json file contains metadata similar to what the status query returned. If you open one of the .ndjson files, you will see that it contains one JSON entity per line.
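For illustration, an NDJSON export file holds one FHIR resource per line, along these lines (a trimmed, hypothetical sample):

{"resourceType":"Patient","id":"example-patient-1","name":[{"family":"Doe","given":["Jane"]}]}
{"resourceType":"Encounter","id":"example-encounter-1","status":"finished","class":{"code":"AMB"},"subject":{"reference":"Patient/example-patient-1"}}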

 

Import Into Destination

We can also use fhir-loader to import NDJSON files. In our scenario, we exported from one FHIR server and are importing into another. As mentioned earlier, we have a second instance of fhir-loader in the destination resource group; this isn’t required, but it eases the configuration process.

Using your preferred Azure storage management tool, copy the NDJSON files to the destination’s ndjson container.
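For example, with azcopy (a sketch; the storage account names, the source container, and the SAS tokens are placeholders; the destination is the loader's ndjson container):

azcopy copy "https://<sourceaccount>.blob.core.windows.net/<export-container>/*.ndjson?<source-sas>" "https://<destaccount>.blob.core.windows.net/ndjson?<dest-sas>"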

 

We again used Azure Storage Explorer above. If you refresh the view, the files will be automatically removed once the import job has completed.

You can then query your FHIR server to validate the new records exist or have appropriately been updated. You can also look at Application Insights to view the logs of the import run.

 

  1. Navigate to the Application Insights instance with the same name as your fhir-loader function.
  2. Choose Search.

 

  1. Select the time range.
  2. Choose custom.
  3. Update the start and end time to correspond to the time you expect the import.
  4. Ensure it is displayed in local time.
  5. Click Apply.
  6. Search for the term “processing” and press Enter (a KQL alternative is sketched after these steps).
  7. View the corresponding results.
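Equivalently, the Logs (KQL) blade of the same Application Insights instance can surface these entries (a sketch; adjust the time window to match your import run):

traces
| where timestamp between (datetime(2023-03-23T14:00:00Z) .. datetime(2023-03-23T16:00:00Z))
| where message contains "processing"
| order by timestamp asc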

Conclusion

Thank you for taking the time to read this post. I hope you have found that fhir-loader makes a great addition to the out-of-the-box $import and $export in your FHIR server when you need to extract patient-centric data from your system.

If you liked this post or want to be part of a community of healthcare developers sharing knowledge and resources, check out our HLS Developer Discord at https://aka.ms/HLS-Discord. We have links to all our content there and a bunch of channels to communicate with us and like-minded tech and healthcare people. Hope to see you there.

Updated Dec 21, 2023
Version 2.0