Part Four: The Need for Transparency in AI - Ensuring Trust and Compliance

This is Part Four of a four-part series on AI Compliance and Data Governance.

In the first three parts of this series, we explored AI-specific governance challenges, including: (1) ensuring that data used to train AI models is permissible (here); (2) ensuring that models are only used for proper purposes (here); and (3) ensuring that sensitive data is not shared with third-party model developers (here).

This final part addresses a challenge we touched on in the previous pieces but did not fully detail: the need for nontechnical roles to have complete transparency so they can ensure their policies are upheld during AI model development and deployment.

The core problem lies in the reliance on manual processes for policy compliance. These processes begin with lawyers interpreting complex AI laws and regulations and translating them into company policies. Lawyers then attempt to ensure that their guidance is understood and implemented by those responsible for building and deploying AI models. This involves documentation, training, and collaboration workflows. However, manual processes are inherently fallible: human memory fades, miscommunication occurs (e.g., engineers misinterpreting legal guidance), training fails to scale across organizations, and any change renders the process outdated.

The shortcomings of manual processes are well documented in recent negative news cycles and enforcement actions against companies like Twitter, Facebook, GoodRx, and TikTok. These challenges are widespread for both legal and engineering teams, but they are amplified in the world of AI due to the vast amounts of data AI systems consume and the rapid pace at which projects progress. Additionally, the constant, rapid evolution of AI laws, regulations, and contractual provisions adds layers of complexity, making compliance even more difficult to achieve.

Slow and error-prone as they are, manual processes might still suffice if lawyers had a way to verify compliance in code. However, lawyers typically lack the technical skills required to audit code or data flows, which makes it impossible for them to verify compliance in real time. Without the ability to monitor how policies are applied in practice, lawyers are left in the dark about whether AI models are truly adhering to legal requirements, and they are unequipped to do their jobs if those jobs are measured by outcomes rather than by the guidance they give.

A Solution: Real-Time Transparency and Automation

Tranquil Data offers a solution that bridges the gap between legal policies and engineering practices. Lawyers start by configuring the software in plain language to mirror the policies they care about. In the context of AI, this might include defining allowable purposes for AI use (for example, using AI for marketing may be prohibited while using it to improve the service is acceptable) and specifying the types of data that are permissible for training. There may also be restrictions on certain categories of data, such as personal health information, or clauses in agreements (e.g., master service agreements) that prohibit using certain data for AI purposes.
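
To make this concrete, here is a minimal sketch of how such policies might be expressed declaratively. Every name, field, and value below is a hypothetical illustration, not Tranquil Data's actual configuration format.

```python
# Hypothetical policy configuration. All names, fields, and values are
# illustrative assumptions, not Tranquil Data's actual format.
POLICIES = {
    # Purposes the legal team has approved or rejected for AI use.
    "allowed_purposes": {"treatment", "service_improvement"},
    "denied_purposes": {"marketing", "sales"},
    # Sensitive categories restricted to specific purposes.
    "restricted_categories": {
        "personal_health_information": {"allowed_purposes": {"treatment"}},
    },
    # Contractual carve-outs, e.g., an MSA that bans AI training outright.
    "contract_overrides": {
        "payer_acme": {"ai_training_allowed": False},
    },
}
```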

Downstream, engineers simply state the intended purpose of a model. The software automatically checks this purpose against the configured policies to validate alignment. If approved, engineers can request data, and the software provides only the permissible fields for the stated purpose — automatically redacting or dropping any data that doesn’t meet the criteria. This seamless process eliminates manual guesswork, enabling engineers to move quickly while adhering to policy requirements.
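
A minimal sketch of that purpose-then-filter flow, reusing the hypothetical POLICIES structure above (again, this is illustrative; Tranquil Data's actual API is not shown here):

```python
from dataclasses import dataclass

@dataclass
class DataRequest:
    purpose: str       # the engineer's stated purpose, e.g. "treatment"
    fields: list[str]  # the data fields the engineer is requesting

def filter_request(request: DataRequest, policies: dict) -> dict:
    """Return only the fields permissible for the stated purpose."""
    # A purpose the legal team has denied outright blocks the whole request.
    if request.purpose in policies["denied_purposes"]:
        return {"decision": "Deny", "fields": []}
    permitted = []
    for field in request.fields:
        rule = policies["restricted_categories"].get(field)
        if rule and request.purpose not in rule["allowed_purposes"]:
            continue  # drop fields that are impermissible for this purpose
        permitted.append(field)
    if not permitted:
        decision = "Deny"
    elif len(permitted) < len(request.fields):
        decision = "Partial"
    else:
        decision = "Allow"
    return {"decision": decision, "fields": permitted}
```

Under the sample configuration, a request that includes personal health information with the stated purpose "service_improvement" would have that field dropped (a “Partial” result), while the same request under "marketing" would be denied outright; these outcomes mirror the “Deny” and “Partial” statuses in the dashboard walkthrough below.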

Transparency ties the entire workflow together. Lawyers have access to real-time, customizable dashboards that provide a comprehensive view of all AI models in development. These dashboards reveal the stated purposes behind model creation, which datasets are being used for training, and how models are being used post-deployment. This visibility does more than provide peace of mind: it enables lawyers to monitor compliance dynamically and in real time.

The dashboard above depicts an example scenario at Nova Health Solutions, a leading healthcare solutions provider with a large data science team. Mia Patel, Nova's General Counsel, relies on the AI dashboard to monitor compliance in real time, ensuring that company policies are met without needing to intervene unnecessarily.

In one instance, depicted in row one, Nova’s data science team requested all data elements to train a model aimed at optimizing utilization. Specifically, the team wanted to ensure their staffing levels were appropriately aligned to the care needed on specific days and times to improve profitability. The stated purpose, “improve the service,” is not a HIPAA-allowable purpose for using personal health information. The Tranquil Data software automatically filtered the requested training data, excluding the use of specific data categories that are impermissible for this purpose (depicted by the red “Deny” in row one of the dashboard). Mia saw the activity reflected in real time on her dashboard and spoke with the data science team to ensure they understood that health-related data cannot be used to improve the service.

The dashboard also proved invaluable when Nova was developing a model for diabetes management. The team requested data for the stated purpose of “treatment.” Tranquil Data's software filtered out certain health information because of a contractual restriction with several of Nova's customers (mostly payers) who share their member data with Nova. Under the terms of these contracts, a data rights provision bans the use of their members' personal health information to train AI models. As a result, the data science team receives training data only from payers whose contracts allow their member data to be used for AI, specifically for the treatment purpose (depicted by the yellow “Partial” in row two).
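
A short sketch of how that contract-aware, row-level filtering might work (the contract terms, payer names, and fields are all hypothetical):

```python
# Hypothetical contract terms keyed by payer; all names are illustrative.
CONTRACT_TERMS = {
    "payer_a": {"ai_training_purposes": {"treatment"}},
    "payer_b": {"ai_training_purposes": set()},  # MSA bans AI training entirely
}

def permitted_training_rows(rows: list[dict], purpose: str) -> list[dict]:
    """Keep only member records whose source contract permits AI training
    for the stated purpose."""
    return [
        row for row in rows
        if purpose in CONTRACT_TERMS[row["payer_id"]]["ai_training_purposes"]
    ]
```

Because some payers permit the use and others do not, the result is a reduced training set rather than an outright denial, which is exactly what the “Partial” status conveys.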

After the new diabetes management product is developed, Nova needs to market it to at-risk and diabetic patients. The marketing team submitted a proposal to develop a model that would target at-risk patients for the new care model. For the model to work, it would need to identify individuals who both have diabetes and are at risk due to various biomarkers. The use of PHI for marketing is not an allowable use under HIPAA, so the stated purpose, “marketing,” caused the software to filter out PHI, appropriately leaving the data science team without the data needed to train the new targeting algorithm (depicted by the red “Deny” in row three).

Mia must also worry about models being repurposed in the future. The Head of Sales has asked the data science team to gather insights on which payers are most likely to deny claims for the new diabetes product so that those insights can be brought to the table during upcoming contract renewal negotiations. The data science team determines that it can reuse the model powering the new diabetes management solution, saving the time of building a new model from scratch.

However, when the data science team asserts the purpose of “sales,” the request is denied because this constitutes an impermissible expansion of the model's original intended purpose. Mia reaches out to the Head of Sales and the engineering team to discuss compliant ways to achieve their objective. The early warning from the dashboard allows her to catch potential policy drift before it becomes an issue.
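
This kind of repurposing guard can be pictured as a registry that binds each model to the purpose it was approved for (again, a hypothetical sketch; all names are illustrative):

```python
# Hypothetical registry binding each deployed model to its approved purpose.
MODEL_REGISTRY = {
    "diabetes_mgmt_v1": {"approved_purpose": "treatment"},
}

def authorize_model_use(model_id: str, asserted_purpose: str) -> str:
    """Deny any use of a model outside its originally approved purpose."""
    approved = MODEL_REGISTRY[model_id]["approved_purpose"]
    return "Allow" if asserted_purpose == approved else "Deny"
```

Asserting “sales” against a model approved for “treatment” returns “Deny,” surfacing the policy drift on the dashboard before the model is ever repurposed.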

Conclusion

The rapid adoption of AI has brought unprecedented opportunities, but it also demands rigorous compliance and trust-building measures. Relying on manual processes and disjointed communication between legal and technical teams is no longer sustainable.

Tranquil Data’s solution addresses these challenges by automating policy enforcement, ensuring engineers and lawyers stay aligned, and providing real-time transparency into every step of the AI workflow. For more information, feel free to reach out at [email protected].
