Enhancing older standardisation models could simplify data exchange
Across industries from real estate and insurance to telecoms and banking, over 1.145 trillion MB of data is created and shared every day, and much of it is exchanged repeatedly to keep transactions around the world up to date. As the volume of data exchange grows in an ever-evolving digital world, so do the challenges of combining multiple sources within JSON, the open-standard file and data interchange format that uses human-readable text to store and transmit data objects made up of attribute-value pairs and arrays.
Typically, a data manager might receive tens, hundreds or thousands of data feeds from multiple sources in different formats: Excel, flat file, database report, or comma- (pipe-, tab-) separated values, where in some domains the CSVs are even separated with semicolons. Processing these files invites all manner of havoc: invalid data for a field, a missing required field, the maximum error count being reached, a failed upload, and so forth. Beyond this two-dimensional setup, the data might also sit across multiple tabs or be subject to hierarchy or nesting.
Typically, handling this requires a complex set of conditions to be built, applied and maintained in the processing pipeline as the data universe keeps growing.
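As a minimal sketch of one such condition, the snippet below guesses the delimiter of an incoming feed before parsing it. The function name and sample data are illustrative, not part of any HARNESS product; the delimiter detection uses Python's standard-library `csv.Sniffer`.

```python
import csv
import io

def parse_feed(raw_text: str) -> list[dict]:
    """Parse a delimited feed whose separator (comma, semicolon,
    tab or pipe) is not known in advance."""
    # Sniff the dialect from the raw text, restricted to the
    # candidate separators mentioned in the article.
    dialect = csv.Sniffer().sniff(raw_text, delimiters=",;\t|")
    reader = csv.DictReader(io.StringIO(raw_text), dialect=dialect)
    return list(reader)

# The same loader handles a semicolon-separated feed...
semi = "id;name\n1;Alice\n2;Bob\n"
rows_semi = parse_feed(semi)

# ...and a pipe-separated one, without a hand-built branch per format.
piped = "id|name\n1|Alice\n2|Bob\n"
rows_pipe = parse_feed(piped)
```

Each new source format, quoting convention, or encoding adds another such condition, which is exactly the maintenance burden described above.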
In many cases, manual intervention is required to sift the data and transform it into the right format for the exchange. This can add two or three days of people's time and data resources, turning what was supposed to be a simple data exchange into a time-consuming and expensive process.
To simplify this common recurring issue, at HARNESS Data Intelligence we have created an updated take on the JSON open-standard file and data exchange format. Standard JSON carries the source data exactly as is, but it is limited by volume: large datasets cannot easily be processed in memory. Our tweak is a simple change in how the data is broken down. Rather than place all the data in one JSON, we place many JSONs in one file. As long as each 'line of JSON' fits within memory limits, any number of records, nested or otherwise, can be transferred and read as the same data at the other end, limiting the room for errors and making the data exchange process simple and effective.
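The idea can be sketched with Python's standard `json` module. This is the general line-delimited pattern the article describes (one complete JSON document per line), not the HARNESS JR format itself; the file name and field names are made up for illustration.

```python
import json

# Illustrative records; nested structures are preserved per line.
records = [
    {"id": 1, "policy": {"type": "home", "premium": 420.0}},
    {"id": 2, "policy": {"type": "motor", "premium": 310.5}},
]

# Write: one JSON object per line instead of one giant array.
with open("feed.jsonl", "w", encoding="utf-8") as f:
    for rec in records:
        f.write(json.dumps(rec) + "\n")

# Read: stream line by line, so memory use is bounded by the
# largest single record rather than the whole dataset.
loaded = []
with open("feed.jsonl", encoding="utf-8") as f:
    for line in f:
        loaded.append(json.loads(line))
```

Because each line parses independently, a malformed record can be skipped or logged without aborting the whole transfer, which is one reason line-delimited layouts limit the room for errors.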
To learn more about Harness JR files, the simpler data exchange solution, visit www.harnessdata.ai