# Extending JSON Transformation Support in Azure Data Factory

One of the powerful features of Azure Data Factory is the extensive [JSON](https://docs.microsoft.com/en-us/azure/data-factory/format-json) support. Combined with the [Copy Activity](https://docs.microsoft.com/en-us/azure/data-factory/copy-activity-overview), many different types of JSON file can be mapped to a tabular sink.

However, there are limits. Consider the following example:
{{< highlight JSON "linenos=table" >}}
{
  "StartAt":0,
  "Total":3,
  "Issues":[
    {
      "ID":1,
      "Values":[
        {
          "Key":"Title",
          "Value":"An issue"
        },
        {
          "Key":"Date",
          "Value":"2021-08-03"
        }
      ]
    },
    {
      "ID":2,
      "Values":[
        {
          "Key":"Title",
          "Value":"A second issue"
        },
        {
          "Key":"Date",
          "Value":"2021-08-13"
        }
      ]
    },
    {
      "ID":3,
      "Values":[
        {
          "Key":"Title",
          "Value":"A third issue"
        },
        {
          "Key":"Date",
          "Value":"2021-08-23"
        }
      ]
    }
  ]
}
{{< / highlight >}}

Here, the API designer has allowed for a backend that can accommodate different fields on issues by abstracting each field into a separate key/value object.

From a tabular perspective this schema maps most naturally to a highly normalized 'key/value' table. If we're processing this JSON in Data Factory we really have two options:

1) Map the JSON into a key/value table and pivot the data later
1) Write custom logic to parse this JSON into a more natural tabular form

For the second option we can either 
* use the built in [Data Flow transformations](https://docs.microsoft.com/en-us/azure/data-factory/data-flow-transformation-overview), or
* break out to an [external transform](https://docs.microsoft.com/en-us/azure/data-factory/transform-data) 

Depending on the size of the JSON file, both options can be overkill, since they are largely focused on large-volume transformation using Spark or Databricks. Neither was optimal for our use case.
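To make the first option concrete, here's a sketch in TypeScript of what the key/value flattening looks like (the type and function names are illustrative, not part of any actual API):

```typescript
// Shape of the example payload above.
interface KeyValue { Key: string; Value: string; }
interface Issue { ID: number; Values: KeyValue[]; }
interface Payload { StartAt: number; Total: number; Issues: Issue[]; }

// Flatten each issue's key/value objects into one row per field:
// (issueId, key, value) — the shape a highly normalized tabular sink expects.
function toKeyValueRows(payload: Payload): { issueId: number; key: string; value: string }[] {
    return payload.Issues.flatMap(issue =>
        issue.Values.map(kv => ({ issueId: issue.ID, key: kv.Key, value: kv.Value }))
    );
}
```

The pivot back to one row per issue then has to happen downstream, which is exactly the extra work option 2 avoids.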

What we really want is an XSLT- or XQuery-like language for JSON that lets us functionally express a simple transform. After a little research on [Stack Overflow](https://stackoverflow.com/questions/8481380/is-there-a-json-equivalent-of-xquery-xpath), I stumbled on the [JSONata](https://jsonata.org) package.

If you've not looked at it, I encourage you to. While the syntax is a little terse, it's incredibly expressive. The example above can be transformed to this:

{{< highlight JSON "linenos=table" >}}
[
  [
    {
      "id": 1,
      "v": {
        "Title": "An issue",
        "Date": "2021-08-03"
      }
    }
  ],
  [
    {
      "id": 2,
      "v": {
        "Title": "A second issue",
        "Date": "2021-08-13"
      }
    }
  ],
  [
    {
      "id": 3,
      "v": {
        "Title": "A third issue",
        "Date": "2021-08-23"
      }
    }
  ]
]
{{< / highlight >}}

With a one-line expression:

`` Issues.[{"id":$.ID,"v":Values{Key:Value}}] ``

Better still, there is an [online environment](https://try.jsonata.org) (à la [regex101](https://regex101.com)) that permits rapid experimentation to find a transform that works.
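To sanity-check what the expression does, here's the same transform written out by hand in TypeScript (ignoring the extra level of array nesting that JSONata's `.[...]` syntax introduces; all names here are illustrative):

```typescript
type Row = { id: number; v: Record<string, string> };

// For each issue, build { id, v } where v collapses the Key/Value pairs
// into a single object — the equivalent of Values{Key: Value} in JSONata.
function transformIssues(issues: { ID: number; Values: { Key: string; Value: string }[] }[]): Row[] {
    return issues.map(issue => ({
        id: issue.ID,
        v: Object.fromEntries(issue.Values.map(kv => [kv.Key, kv.Value] as const))
    }));
}
```

Ten lines of code versus one expression — and the JSONata version can be changed per request without redeploying anything.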

The only remaining question is how to integrate JSONata into Data Factory. JSONata is a JavaScript library, so the natural solution is to host it in Node, [packaged](https://docs.microsoft.com/en-us/azure/azure-functions/functions-reference-node?tabs=v2) as an Azure Function. Here's an example in TypeScript:

{{< highlight Typescript "linenos=table" >}}
import { AzureFunction, Context, HttpRequest } from "@azure/functions"
import * as jsonata from "jsonata"

const convertJSON: AzureFunction = async function (context: Context, req: HttpRequest): Promise<void> {
    const transformExpression: string = req.body && req.body.transform;
    const inputBlobName: string = req.body && req.body.blobName;
    let responseMessage = "";
    let responseCode = 200;
    try {
        if (req.method !== "POST") {
            throw new Error("Expect JSON passed on the POST method");
        }
        if (inputBlobName === undefined || inputBlobName.length < 1) {
            throw new Error("No input blob name found");
        }
        if (transformExpression === undefined || transformExpression.length < 1) {
            throw new Error("No transformation expression found");
        }
        // Compile the JSONata expression and apply it to the input blob,
        // which arrives already parsed via the blob input binding.
        const expression = jsonata(transformExpression);
        const result = expression.evaluate(context.bindings.inputBlob);
        // Assigning to the output binding writes the result to Blob Storage.
        context.bindings.outputBlob = result;
        context.log('Processed a request.');
        responseMessage = "Wrote transformed output for blob: " + inputBlobName;
    } catch (e) {
        responseMessage = e.toString();
        context.log('Failed a request: ' + responseMessage);
        responseCode = 500;
    }
    context.res = {
        status: responseCode,
        body: responseMessage
    };
};

export default convertJSON;
{{< / highlight >}}

The function has an HTTP trigger plus Blob Storage input and output bindings. To execute a transform, POST a JSON payload containing the blob name and the JSONata expression:

{{< highlight Json "linenos=table" >}}
{ 
    "blobName": "Example.json",
    "transform":"Issues.[{'id':$.Id,'v':Values{Key:Value}}]"
} 
{{< / highlight >}}
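For completeness, the bindings referenced by the function (`req`, `inputBlob`, `outputBlob`, `res`) might be declared like this in its `function.json` — a sketch only: the container names and connection setting are assumptions for this example, while `{blobName}` is a binding expression resolved from the property of the same name in the POST body:

{{< highlight Json "linenos=table" >}}
{
  "bindings": [
    { "authLevel": "function", "type": "httpTrigger", "direction": "in", "name": "req", "methods": [ "post" ] },
    { "type": "blob", "direction": "in", "name": "inputBlob", "path": "json-in/{blobName}", "connection": "AzureWebJobsStorage" },
    { "type": "blob", "direction": "out", "name": "outputBlob", "path": "json-out/{blobName}", "connection": "AzureWebJobsStorage" },
    { "type": "http", "direction": "out", "name": "res" }
  ]
}
{{< / highlight >}}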

The transformed file is written to the output-bound Blob Storage, from where it can be picked up for onward processing within Data Factory.

I found this approach very usable and quick to iterate on. I hope you find it useful.