Issue 1: BreezeML Newsletter October 2023

Greetings and welcome to our monthly newsletter, where we share updates on our business and product development, along with our perspectives on AI regulation and governance.

Pivoting Towards AI Compliance

This year holds particular significance for BreezeML. Our team has grown significantly, from 4 members to 14, and a majority of our engineers earned their PhDs from prestigious computer science programs and previously worked at prominent technology companies. More importantly, our primary focus has shifted from optimizing ML training and inference to building a comprehensive governance and compliance system that provides safeguards throughout the development and deployment lifecycle of ML models.

This transition is motivated by several factors.

First, impending AI regulations are set to have a widespread impact across industries. The EU's AI Act is scheduled to take effect in 2025, and the US Congress is engaged in substantive discussions about analogous regulations. Several US states have also proactively pursued AI regulation, with examples including California's AI-ware Act and Delaware's Personal Data Privacy Act. Companies using AI in any capacity will face rigorous scrutiny and will need to furnish compelling evidence of compliance with the regulations under which they are audited. Failure to do so could bring substantial financial penalties; the EU AI Act, for example, stipulates fines of up to 6% of a company's annual global revenue, with no absolute cap. We anticipate that attention to AI compliance across sectors will only grow in the coming years.

Second, the existing data and AI compliance market is largely dominated by tools tailored to compliance officers and legal counsel. These tools offer straightforward interfaces for document ingestion and risk identification. However, the responsibility for mitigating those risks (and ultimately, complying with regulations) falls squarely on data scientists and ML engineers. Risk mitigation is typically handled after development, requiring weeks of additional work from developers. And in cases where the individual who trained a model has since left the company, collecting evidence can be an insurmountable challenge.

A more effective approach involves designing a system that not only empowers compliance officers to construct and manage policies but also automatically gathers the required information during model development (i.e., governance by construction). This ensures that whenever an audit request arises, the information needed to address it is readily available. This approach forms the foundation of our product development, and you can find a comprehensive description of our product in this article.
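To make governance by construction concrete, here is a minimal, illustrative sketch in Python. The names it uses (governed, AUDIT_LOG, policy_tags) are hypothetical placeholders rather than our product's API; the point is simply that audit evidence is recorded as a side effect of training itself, rather than reconstructed weeks later.

import functools
import json
import time
from pathlib import Path

# Hypothetical evidence store; in practice this would live in a governed backend.
AUDIT_LOG = Path("audit_log.jsonl")

def governed(policy_tags):
    """Wrap a training function so every run leaves an audit record behind."""
    def decorator(train_fn):
        @functools.wraps(train_fn)
        def wrapper(*args, **kwargs):
            record = {
                "timestamp": time.time(),
                "function": train_fn.__name__,
                "params": {k: repr(v) for k, v in kwargs.items()},
                "policy_tags": policy_tags,  # which obligations this run must satisfy
            }
            result = train_fn(*args, **kwargs)
            record["status"] = "completed"
            with AUDIT_LOG.open("a") as f:
                f.write(json.dumps(record) + "\n")
            return result
        return wrapper
    return decorator

@governed(policy_tags=["data-provenance", "risk-management"])
def train_model(dataset_path="data.csv", learning_rate=0.01, epochs=10):
    # Placeholder for a real training routine.
    return {"accuracy": 0.93}

if __name__ == "__main__":
    train_model(dataset_path="data.csv", learning_rate=0.01, epochs=5)
    # audit_log.jsonl now holds one evidence record per training run.

The decorator here stands in for the broader idea: the developer's workflow barely changes, while the compliance team gets a complete, queryable trail of who trained what, when, and under which policies.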

Conference Presence and Sponsorship

As we venture into the realm of governance and regulation, we've uncovered a wealth of resources through the International Association of Privacy Professionals (IAPP), including conferences, exhibitions, and networking events. Just last week, we had the privilege of attending the annual Privacy, Security, and Risk conference in San Diego, where we spoke with numerous privacy lawyers and compliance officers. The remarkable enthusiasm within the community affirmed our assumptions and validated our product design. We are proud to announce that BreezeML will sponsor the AI Governance Global 2023 Conference, taking place in Boston in early November. If you happen to be in the area, we invite you to visit our booth.

It's also noteworthy that AI privacy is gaining significant traction not only in the EU and US, but across the globe. The IAPP is actively consolidating resources to professionalize AI governance and foster its workforce. As a collection of AI enthusiasts and technologists, we eagerly anticipate our involvement in and contribution to these initiatives.

Perspectives on (General) Data vs. AI Governance

We are frequently asked why a data governance system can't address the challenges of AI governance. The primary reason lies in the stark contrast between a data pipeline and an AI pipeline. The latter is significantly more fragmented, often spanning a diverse array of services hosted on different cloud platforms, with components written in different languages (such as SQL and Python). AI pipelines also process data of many types, often both structured and unstructured.

While major cloud and data platforms like AWS and Snowflake offer comprehensive service suites within their own environments, it is exceedingly difficult for companies to centralize all of their computing and data resources in one place. Consequently, an effective AI governance system must integrate seamlessly with multiple platforms and services, accommodating various programming languages and data formats, to provide a comprehensive view of the end-to-end data-ML pipeline.

This comprehensiveness matters because even a small gap in information about the pipeline can substantially change the outcome of an assessment, turning compliance into violation or vice versa. We therefore view BreezeML's approach as bridging the gap left by existing data governance systems, enabling comprehensive AI governance while allowing companies to remain flexible in the services they choose to use.
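As a simple illustration of reasoning over such a fragmented pipeline, the Python sketch below (with made-up step names and a hypothetical PipelineStep structure, not our product's schema) normalizes steps from different platforms and languages into one record and flags artifacts that are consumed but never accounted for, which is exactly the kind of gap that can flip an assessment.

from dataclasses import dataclass, field

# Hypothetical normalized representation of one pipeline step, regardless of
# where it ran (a warehouse, a training service, a laptop) or its language.
@dataclass
class PipelineStep:
    name: str
    platform: str                      # e.g., "snowflake", "sagemaker"
    language: str                      # e.g., "sql", "python"
    inputs: list = field(default_factory=list)
    outputs: list = field(default_factory=list)

def missing_lineage(steps, external_sources=frozenset({"raw_events"})):
    """Return artifacts consumed somewhere but produced nowhere we know of."""
    produced = {o for s in steps for o in s.outputs} | set(external_sources)
    consumed = {i for s in steps for i in s.inputs}
    return sorted(consumed - produced)

steps = [
    PipelineStep("clean_events", "snowflake", "sql",
                 inputs=["raw_events"], outputs=["clean_events"]),
    PipelineStep("train_churn_model", "sagemaker", "python",
                 inputs=["clean_events", "labels"], outputs=["churn_model_v3"]),
]

print(missing_lineage(steps))  # ['labels'] -> an unaccounted-for input, a potential compliance gap

In a real system, this normalized view would be assembled automatically from connectors to each platform rather than written by hand; the sketch only shows why a single cross-platform view is what makes gap detection possible at all.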
