How we created a next-gen Health Data Supermodel - Part One
Five minutes with VP of Engineering, Reece Robinson
A few years ago the writing was on the wall with Machine Learning & AI, which got me thinking.
It’s a big shiny bauble, but it’s only as good as the data it’s fed. You could see that every developer would want to get in on the action, but they’re humans with varying degrees of ability. That raises questions like: do they have the right training and approach to build an AI model? Do they have the data to execute it? Is the data they’re inputting good enough quality to train AI models on? How is it cared for? What’s its origin story or inherent bias? Because machine learning is a bias amplifier, and we work in health tech, where unethical or inexpert handling of sensitive data in a machine learning setting would have significant consequences.
So back then, I decided to try to address some of these problems by removing those variables and focusing on the business outcome potential in AI.
I was excited by the idea that if we removed the technology as a distraction - pretend that’s a solved problem - what could we do with it? So we set out to build machine learning models that could be approved and vetted by healthcare subject matter experts instead of engineers. Fast forward to 2023, when we launched the Orchestral Health Intelligence Platform.
We knew our solution would need to start with understanding where data comes from
Its provenance and lineage: things like which population it was generated from. Because the unique characteristics of a population, like demographics, define the character of its data, and deeper understanding can empower analysts to create better, more appropriate data sets to train their AI models on.
So we said, first let's create an open cloud-enabled, standardized data model to tame the chaos.
A central model for all data to be stored in, no matter where it comes from. Chaotic data is a big issue in healthcare, with providers often having really complex tech stacks that don’t talk to each other. We need a toolset and platform to make inputting data easy, because existing systems aren’t built with it in mind; it’s difficult to do, and costly and time-consuming to retrofit.
So we, as health tech experts, designed a health-specific data supermodel.
That’s key because there’s no substitute for real-world experience when it comes to understanding the different characteristics of health and health-adjacent data. A model defined by subject matter experts represents reality, so when you apply it to use cases there’s a natural fit. When you model for technical purposes only, you run into trouble when using the data, because it’s been designed by people who don’t understand how it’s used in the real world.
For example, a FHIR model lacks the context to link concepts -
Like Medicines Prescribed with Medicines Dispensed, because that sort of data can come in many different forms from many different providers. Our open, standards-agnostic model has been created by subject matter experts who have linked and unified these concepts, so analysts can query much more easily and quickly whether people have taken their medicines or not. The question became: how can we enable people who don’t have advanced data engineering skills - like subject matter experts - to create those models?
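The prescribed-versus-dispensed linkage described above can be sketched in a few lines. The record shapes and field names below are hypothetical illustrations, not the actual Orchestral HIP schema; the point is simply that once both concepts are normalised into one model, an adherence-style query becomes a straightforward join.

```python
# Hypothetical records, already normalised into one shared shape.
# (Field names are illustrative, not the Orchestral HIP schema.)
prescribed = [
    {"patient_id": "p1", "medicine": "metformin", "date": "2023-01-05"},
    {"patient_id": "p2", "medicine": "atorvastatin", "date": "2023-01-07"},
]
dispensed = [
    {"patient_id": "p1", "medicine": "metformin", "date": "2023-01-06"},
]

def unfilled_prescriptions(prescribed, dispensed):
    """Return prescriptions with no matching dispense record."""
    dispensed_keys = {(d["patient_id"], d["medicine"]) for d in dispensed}
    return [p for p in prescribed
            if (p["patient_id"], p["medicine"]) not in dispensed_keys]

# p2's atorvastatin prescription has no dispense record, so it is flagged.
print(unfilled_prescriptions(prescribed, dispensed))
```

Without a unified model, each of those two record types would arrive in a different provider-specific format, and this two-line join would instead be a bespoke integration project.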
We realised we needed to raise the level of abstraction to boost productivity.
We said, rather than handwritten code in a 3GL, our inputs should be domain specifications. In the old days people handwrote programmes for early computers. Then the level of abstraction jumped to higher-level code languages like the 3GLs we have today. With Orchestral HIP, we’ve now jumped another level to Domain Specific Languages - in this case, a Domain Modeller, which is a graphical language, and dramatically more productive. By making our inputs domain specifications, they capture our intent. Then we give the specifications to a robot code generator. The specific higher-level logic is still written by people, and the generated and handwritten code are deployed together to make a complete system. This represents a massive leap forward for the healthcare industry.
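As a rough illustration of the "domain specification in, generated code out" idea, here is a toy generator. The spec format and the generator are hypothetical; the actual Domain Modeller is a graphical language, not this dictionary format. What matters is the shape of the workflow: a declarative description of the domain goes in, and working code comes out.

```python
# A hypothetical declarative entity specification (illustrative only).
spec = {
    "entity": "Patient",
    "fields": [
        {"name": "nhi_number", "type": "str"},
        {"name": "birth_date", "type": "str"},
    ],
}

def generate_dataclass(spec):
    """Emit Python source for a dataclass from a declarative entity spec."""
    lines = [
        "from dataclasses import dataclass",
        "",
        "@dataclass",
        f"class {spec['entity']}:",
    ]
    for field in spec["fields"]:
        lines.append(f"    {field['name']}: {field['type']}")
    return "\n".join(lines)

# The subject matter expert edits the spec; the generator keeps the code in sync.
source = generate_dataclass(spec)
print(source)
```

The productivity argument in the interview follows from this split: domain experts maintain the specification, while the mechanical translation into code is automated and repeatable.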
So that’s the thinking behind Orchestral HIP.
As a result, it can basically expand like a sponge around any tech stack. It’s what is becoming known as a data fabric. Ours is open, standards-agnostic, and cloud-enabled, and we believe it’s the first of its kind, which is why we call it a supermodel. It can absorb any standards or attributes that you want to store or use and output them in any format you need. Any data format in, any data format out. Lots of people say they can do that, but I have not been able to find a single other product that does it, which is really cool, because it means we can support any existing health standard - like C-CDA, HL7, FHIR, X12, openEHR, OMOP - but also any that might evolve in the future. As you can well imagine, the downstream use case implications are also significant: Orchestral HIP can become the platform that underpins any and all of the applications in a customer’s ecosystem.
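The "any data format in, any data format out" pattern can be sketched as per-format adapters around one canonical record. Everything below - the canonical fields, the adapter names, the FHIR-flavoured output shape - is a simplified hypothetical, not the real supermodel; it only shows why a single canonical model means each new format costs one adapter rather than one integration per pair of systems.

```python
# Hypothetical canonical record shape (illustrative, not the actual supermodel).
CANONICAL_FIELDS = ("patient_id", "code", "value")

def from_csv_row(row):
    """'Format in' adapter: a CSV-style row becomes a canonical record."""
    return dict(zip(CANONICAL_FIELDS, row))

def to_fhir_like(record):
    """'Format out' adapter: canonical record to a FHIR-flavoured dict.

    The output shape is a loose illustration, not a conformant FHIR resource.
    """
    return {
        "resourceType": "Observation",
        "subject": {"reference": f"Patient/{record['patient_id']}"},
        "code": record["code"],
        "valueString": record["value"],
    }

# CSV in, FHIR-style out - the canonical model sits in the middle.
canonical = from_csv_row(["p1", "8867-4", "72"])
print(to_fhir_like(canonical))
```

With N formats, this design needs N in-adapters and N out-adapters instead of N×N point-to-point translations, which is the economic case for a central model.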
Read more on the Orchestral Health Intelligence Platform.
Interview originally posted on Orionhealth.com