Clinical Research Platform Maturity Measurement - Part 1
Through a series of blogs posted over the last three years, I have highlighted the need for and value of cloud clinical research platforms. Put simply, a complete end-to-end digital clinical research platform allows Sponsors/CROs, Sites and Patients to run efficient and effective clinical trials without inefficient complexity and crippling manual overheads.
The challenge, though, is that vendors do a very good job, knowingly or otherwise, of masking their software's architectural inefficiencies.
Terms like “unified” and “integrated” suggest a solution that might meet platform credentials. In reality, they are more frequently marketing-speak covering up what is largely disconnected software. Users still have to enter each product’s modules to determine what needs to be done, and to check and reconcile information that is copied or transposed between modules.
At KCR, the level of resources we need to apply to a clinical trial is inversely proportional to the score of the 'platform' used to support the trial.
As an example, we estimated that if, instead of using our preferred CTMS/eTMF implementation, we need to file documentation in a 3rd-party eTMF, this can add one hour to a typical monitoring visit. For a typical small to mid-sized study, that might total around 600 extra hours. The lower the platform score, the more resources are required; the higher the platform score, the less manual work is necessary.
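To show how a one-hour overhead per visit compounds to a figure of that order, here is a minimal sketch. The site and visit counts are my own illustrative assumptions, not KCR figures:

```python
# Hypothetical small to mid-sized study: the site and visit counts below
# are assumptions chosen to illustrate the scale of the overhead.
sites = 30
monitoring_visits_per_site = 20
extra_hours_per_visit = 1  # added by filing into a separate 3rd-party eTMF

extra_hours = sites * monitoring_visits_per_site * extra_hours_per_visit
print(extra_hours)  # 600 extra hours over the life of the study
```

Any similar combination of sites and visits in the hundreds produces the same order of magnitude, which is why a low-scoring platform translates directly into extra resourcing.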
In an attempt to provide some clarity, we prepared a tool that can be used to assess the platform credentials of Digital Clinical Research technology products. This can either be used to measure a single vendor for their platform credentials, or, within a company to assess a combination of products that are used together as a platform. This also works at an individual clinical trial level to define how well the trial will perform from a resource optimization perspective.
I would like to share the measures that are used to score a platform. We have two dimensions: the modules that make up a platform, and the scoring of its capabilities.
The modules are aimed at determining a minimum set of capabilities for a clinical research platform; I will detail these in part two. They are not focused specifically on Decentralized Clinical Trial (DCT) or traditional solutions. The scoring reflects a platform's capability to provide high-performance, low-cost, high-quality trials supporting Sponsors/CROs, Sites and Patients.
Two final points. First, when assessing a platform, you also need to assess the use of the platform within your organization, or even within a trial. For example, if a vendor provides a comprehensive API with a point-and-click user interface to operate it, but the CRO/Sponsor chooses not to use it, then it should be marked down to reflect actual use. Second, many single-instance platforms rely heavily on configuration prior to use; a platform that has not been adequately configured will not perform well.
Platform Capability Measures
Single Sign-On
This measures how complete a single sign-on (SSO) solution is - in essence, how many usernames and passwords a user needs in order to use the platform. The weakest solutions have no support for SSO. The best solutions allow a user to work across the same product regardless of module, trial, CRO or Sponsor: a site user has a single login for all uses of the platform. Reducing the number of logins reduces user frustration and increases security. This often leverages technologies from companies like Microsoft (Active Directory) or Google.
0 - No support
1 - Multi-instance, SSO per trial
2 - Multi-instance, SSO per sponsor
3 - Single instance, global SSO
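The practical effect of each SSO level can be sketched as the number of credentials a site user has to juggle. The trial, sponsor and module counts below are hypothetical:

```python
def logins_needed(sso_level: int, trials: int, sponsors: int, modules: int) -> int:
    """Rough credential count a site user manages under each SSO level."""
    if sso_level == 0:
        return trials * modules  # no SSO: a credential per module per trial
    if sso_level == 1:
        return trials            # SSO scoped to each trial
    if sso_level == 2:
        return sponsors          # SSO scoped to each sponsor
    return 1                     # level 3: one global login for everything

# A site working on 4 trials for 2 sponsors, each trial using 5 modules:
print(logins_needed(0, trials=4, sponsors=2, modules=5))  # 20 credentials
print(logins_needed(3, trials=4, sponsors=2, modules=5))  # 1 credential
```

Moving from level 0 to level 3 collapses twenty credentials into one for this hypothetical site, which is where the frustration and security gains come from.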
Metadata Repository (MDR)
These are used to support re-use when configuring a clinical trial. Many MDRs only focus on eCRF and Event definitions but don't share these definitions with other modules - like a CTMS. This means that aspects of a clinical trial that are common are re-defined from module to module.
The benefit of a full MDR is that integrations can become routine – based on common standards – data sharing becomes viable, and implementations are faster and of higher quality with frequent re-use.
0 - No support, siloed within a single product
2 - Siloed, 2 or more products
3 - All configurable components
4 - Available across components / sponsors / organizations
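As a sketch of what "define once, use everywhere" looks like in practice, consider a shared visit definition consumed by two modules. The structure and field names here are illustrative, not any vendor's schema:

```python
# A shared MDR entry: each visit is defined once, rather than re-keyed
# separately in the EDC and the CTMS.
MDR = {
    "visit_1": {"label": "Screening", "day": 0, "window_days": 3},
    "visit_2": {"label": "Baseline", "day": 14, "window_days": 3},
}

def edc_schedule(mdr):
    # The EDC builds its casebook schedule from the shared definitions.
    return [(key, entry["label"]) for key, entry in mdr.items()]

def ctms_milestones(mdr):
    # The CTMS derives its tracking milestones from the same definitions,
    # so "Screening" means exactly the same thing in both modules.
    return [(entry["label"], entry["day"]) for entry in mdr.values()]

print(edc_schedule(MDR))    # [('visit_1', 'Screening'), ('visit_2', 'Baseline')]
print(ctms_milestones(MDR)) # [('Screening', 0), ('Baseline', 14)]
```

A siloed MDR is the opposite: the same visits are keyed twice, and any protocol amendment has to be applied twice and reconciled.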
Workflow
Workflow is application logic, ideally configurable, that defines who does what and when in a clinical trial. Cross-module workflow is where platforms really come into their own: it allows silos - both software and departments - to work in coordination. A lack of cross-module workflow is the most typical sign of missing platform capabilities. Products like the original Medidata Rave and Oracle InForm had fixed workflow - hard-coded into the product with some limited configuration through checkboxes. A platform like Clinpal had a workflow language that was both common and scoped across products like EDC, ePRO and eConsent/eLearning, allowing activities to be configured to interact.
The benefit of a fully flexible workflow solution that runs across modules is that true coordination across departments and stakeholders can occur. The levels of manual checking, tracking and reconciliation are greatly reduced. Workflow is also the core of planned-versus-actual performance management (KPIs). With advanced cross-module workflow, the performance of a trial can be greatly enhanced by minimizing the lag between activities. A visual user interface means the configuration of the workflow is understandable to non-techies. Ideally, the flows are also manifested in appropriate ways to the end users.
0 - No support
1 - Fixed configuration per module
2 - Fixed configuration across modules
3 - Dynamic scriptable per module
4 - Dynamic scriptable across modules
5 - Dynamic scriptable across modules with a visual user interface to configure
6 - Dynamic scriptable across modules with a visual user interface to configure and operate
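A minimal sketch of what cross-module workflow means: an event raised in one module (eConsent) triggers an activity in another (EDC) with no manual hand-off. The event and module names are illustrative only, not any product's actual workflow language:

```python
# A toy publish/subscribe workflow engine: modules register handlers for
# events, and an event in one module triggers activities in another.
from collections import defaultdict

subscribers = defaultdict(list)

def on(event):
    """Decorator registering a handler for a named workflow event."""
    def register(handler):
        subscribers[event].append(handler)
        return handler
    return register

def emit(event, payload):
    """Fire an event to every subscribed handler, across modules."""
    for handler in subscribers[event]:
        handler(payload)

activated = []

@on("econsent.signed")
def activate_screening_visit(payload):
    # The EDC reacts to the consent event directly - no one has to check
    # the eConsent module and manually open the visit in the EDC.
    activated.append(("edc", "screening_visit", payload["subject"]))

emit("econsent.signed", {"subject": "S-001"})
print(activated)  # [('edc', 'screening_visit', 'S-001')]
```

The scoring levels above essentially measure how far a product gets along this path: from no events at all, to hard-coded triggers within one module, to scriptable flows like this spanning the whole platform.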
Application Programming Interface (API)
A platform has less need for APIs to interface between its own modules, because the software already has a configurable ability to communicate between them. However, each module should also be capable of interacting with external products to expand the reach of the platform. There are good APIs and bad APIs. A good API provides access to not just the raw data but also the metadata and administrative data. For example, I might want a list of visit and form definitions - metadata - in addition to the actual visit dates and form data. I might also need a list of site details - admin data.
We also wanted to recognize the value of an easy-to-use API - ideally something that could be used by non-programmers. The measure therefore recognizes a no-code environment by doubling the score when this is available. For example, a Data, Metadata and Admin API that has a visual 'no-code' environment would score 6 points.
The benefit of a good bi-directional API, especially with the tools to support it (no-code), is that not only can the platform communicate within itself, it can also reliably inter-operate with other solutions, ideally without resorting to software development.
0 - No support
1 - Data only
2 - Data and Metadata
3 - Data, Metadata and Admin
4 - Data, Metadata, Admin and Workflow/Tasks
x2 - Provides a no-code user interface to define, test and operate
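The levels and the doubling rule above can be captured in a few lines. The level names are shorthand I have chosen for this sketch:

```python
# Scoring for the API measure, including the x2 no-code multiplier.
API_LEVELS = {
    "none": 0,
    "data": 1,
    "data_metadata": 2,
    "data_metadata_admin": 3,
    "data_metadata_admin_workflow": 4,
}

def api_score(level: str, no_code_ui: bool) -> int:
    """Base level score, doubled if a no-code interface is provided."""
    base = API_LEVELS[level]
    return base * 2 if no_code_ui else base

# The worked example from the text: Data, Metadata and Admin with a
# visual no-code environment scores 6 points.
print(api_score("data_metadata_admin", no_code_ui=True))  # 6
```

Note the maximum possible score on this measure is 8: a full Data/Metadata/Admin/Workflow API with a no-code environment.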
Data Model
The way an application stores its data, and makes this data available to other modules for sharing, is fundamental. Some cloud applications actually employ separate 'databases' that are siloed to a single licensee/module.
For data capture applications, we have seen different approaches. Systems of the early 2000s used a tall, thin data model where each data point value was stored in a single record. This data must then be transformed using database views or ETL to create the tabular structure required downstream.
More recent systems have gone for the 1 form = 1 table approach. This is easier to work with but still sub-optimal: the form that data comes from should not define a data structure. A few solutions isolate the data structure (like CDISC SDTM or ADaM) from the form definitions (CDASH/ODM). I don't know of a solution that supports level 3 - a full-scope multi-dimensional data model.
The benefit of a good data model – which primarily applies to data solutions such as EDC and eCOA – is that data can be delivered in virtually real time in the formats that facilitate analysis and submissions. Multi-dimensional support enables the creation of CDISC SDTM whilst also supporting CDISC ODM and archive, enabling audit trails.
0 - Tall, thin data model
1 - Forms-based data representation
2 - Independent tabular data representation
3 - Independent multi-dimensional data model for form and non-form data
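The transformation burden of the tall, thin model can be shown in a few lines: each data point is one row, and downstream consumers need one row per subject/visit. The item codes here are illustrative (loosely CDASH-style), and this pivot stands in for the database-view or ETL step the text describes:

```python
# Tall, thin storage: one record per captured data point.
tall_thin = [
    {"subject": "S-001", "visit": "V1", "item": "SYSBP", "value": "120"},
    {"subject": "S-001", "visit": "V1", "item": "DIABP", "value": "80"},
    {"subject": "S-002", "visit": "V1", "item": "SYSBP", "value": "135"},
]

def pivot(rows):
    """Re-shape tall, thin records into one tabular row per subject/visit."""
    tables = {}
    for row in rows:
        key = (row["subject"], row["visit"])
        tables.setdefault(key, {})[row["item"]] = row["value"]
    return tables

print(pivot(tall_thin)[("S-001", "V1")])  # {'SYSBP': '120', 'DIABP': '80'}
```

Every consumer of a tall, thin store pays for some version of this reshaping; a model that is independent of the form (levels 2 and 3) delivers analysis-ready structures without it.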
Database
Databases and data models should not be confused. Technically you can still have multiple databases provided the application handles the communication across them - which is fine: the software creates a sort of 'virtual application database'. This means that when the application is configured, the configuration provides access to the other linked databases.
It is not good when the application does not have access to the other databases. For example, if the vendor provides a Training product and a CTMS product that sit in different modules with separate databases, then controls and reporting in the CTMS cannot reflect the training records of study team members. They are in effect separate products within their own silos. A workaround to these silos can be an API, but who wants to bring in software developers to interface platform modules from the same vendor under the same product name?
0 - One database per (CRO/Sponsor)/trial/product
1 - One database per (CRO/Sponsor)/product
3 - One database per Sponsor
5 - One database (virtual or real) shared across the platform
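Pulling the six measures together, an overall platform score is just the sum of the individual scores against their maxima. The scores below describe a hypothetical platform I have invented for illustration:

```python
# Hypothetical platform assessment using the six measures above.
# The maxima come from the scoring lists (API max is 4 doubled to 8
# by the no-code multiplier).
measures = {
    "sso": 3,         # single instance, global SSO            (max 3)
    "mdr": 2,         # siloed across 2 or more products       (max 4)
    "workflow": 4,    # dynamic scriptable across modules      (max 6)
    "api": 6,         # Data/Metadata/Admin with no-code UI    (max 8)
    "data_model": 2,  # independent tabular representation     (max 3)
    "database": 3,    # one database per Sponsor               (max 5)
}

total = sum(measures.values())
max_total = 3 + 4 + 6 + 8 + 3 + 5  # 29
print(f"Platform score: {total}/{max_total}")  # Platform score: 20/29
```

Remember the caveat from earlier: the score should reflect actual use within the organization or trial, so a capability that is licensed but switched off should be marked down.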
In the next post, I aim to share the modules that define a core clinical research digital platform.