What Defines Open Source AI? New Standards Push for Transparency

The meaning of “open source” in AI has become a hot topic, especially as companies increasingly label their AI models “open” without fully disclosing the underlying data or training details. Recently, the Open Source Initiative (OSI) introduced a formal definition for open-source AI, aiming to set a clear standard. The OSI argues that, for an AI model to be genuinely open source, companies need to share more than just model weights—they should also provide comprehensive information on the training data, code, and processes that built the model.

According to OSI Executive Director Stefano Maffulli, truly open-source AI would allow others to fully recreate, adapt, and build on a model. This push for transparency is intended to ensure that anyone can understand and reproduce the model, making AI open not just in name but in practice.

What the OSI Standards Mean

Under OSI’s standards, an AI model should offer full visibility into how it was trained, including access to data sources, licenses, and filtering processes. The goal is to allow the public to assess how a model was built, enabling developers to adapt it, improve it, or repurpose it independently. Maffulli emphasizes that for AI to be considered genuinely open source, users should have the freedom to understand, modify, and build upon it for any purpose.
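
To make the requirement concrete, such a disclosure could be captured in a machine-readable record. The Python sketch below is a hypothetical illustration only; the schema, class names, and fields are assumptions for this article, not anything OSI has actually specified.

# Hypothetical sketch of a training-disclosure record; the OSI definition
# describes what must be shared, not any particular schema.
from dataclasses import dataclass, field

@dataclass
class DataSource:
    name: str        # e.g. a named corpus or web crawl
    license: str     # terms under which the data was obtained
    filtering: str   # how the source was filtered or cleaned

@dataclass
class TrainingDisclosure:
    model_name: str
    weights_released: bool
    training_code_url: str                # code used to build the model
    data_sources: list[DataSource] = field(default_factory=list)

    def is_fully_disclosed(self) -> bool:
        """Crude check: weights, training code, and documented data sources."""
        return (self.weights_released
                and bool(self.training_code_url)
                and bool(self.data_sources)
                and all(s.license and s.filtering for s in self.data_sources))

A model shipping only its weights would fail such a check, which is precisely the distinction OSI wants the “open source” label to track.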

This definition of open-source AI is not only about transparency but also about accountability. As regulators, particularly in the EU, consider legal carve-outs and exemptions for open-source AI, OSI’s standards aim to prevent misuse of the “open-source” label. For Maffulli and OSI, a rigorous definition protects the term’s integrity, so that models marketed as open source genuinely empower public use and innovation.

The Debate Over How Open AI Should Be

OSI’s stance has highlighted a divide within the AI industry. Meta, for instance, has labeled its popular LLaMA models “open” while providing access to the model weights but not to the full training data. Unlike Google and Microsoft, which have agreed to drop the term “open source” after discussions with OSI, Meta continues to argue that releasing model weights, combined with usage restrictions for safety, is a balanced approach that makes advanced AI available without compromising responsibility.

Meta’s position is that complete transparency is unrealistic in today’s AI landscape. In its view, disclosing every step of training, especially the specifics of data sourcing and filtering, could hinder AI’s accessibility and even pose risks by enabling potentially harmful uses. A spokesperson for Meta noted that while the company supports OSI’s goal, the complexity of modern AI calls for more nuanced definitions. For Meta, the real priority is balancing openness with security, ensuring models can be used freely but responsibly.

Practicality and Accessibility: A Counterpoint to Full Transparency

Critics of OSI’s definition argue that demanding full transparency might backfire by making models less accessible. Kristian Stout, Director of Innovation Policy at the International Center for Law & Economics, has voiced concerns that OSI’s strict standards could discourage companies from releasing models to the public at all. Training a modern AI model involves high costs, extensive resources, and significant technical expertise; for most organizations, replicating these efforts would be impossible even with full access to the data and code.

According to Stout, defining openness too rigidly could lead companies to keep models fully closed rather than attempt to meet such high standards. He argues that making model weights freely accessible, even with certain restrictions, could offer more real-world access than a fully open-source model that only a few organizations have the resources to reproduce. Stout compares the situation to Creative Commons licensing, which provides different levels of openness tailored to diverse needs. He suggests a similar approach for AI would allow for a broader range of open-source options that don’t impose impractical demands.
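
By way of illustration only, Stout’s Creative Commons analogy suggests graded tiers rather than a single pass/fail label. The levels sketched below are hypothetical, invented for this article rather than drawn from any existing scheme.

# Hypothetical openness tiers, loosely analogous to Creative Commons variants.
from enum import Enum

class OpennessLevel(Enum):
    WEIGHTS_ONLY = 1       # weights released; data and training code withheld
    WEIGHTS_AND_CODE = 2   # weights plus training and inference code
    FULL_STACK = 3         # weights, code, and documented training data

def qualifies_under_strict_definition(level: OpennessLevel) -> bool:
    # Under OSI's strict reading, only full disclosure would earn the label;
    # a tiered scheme would instead name each level honestly.
    return level is OpennessLevel.FULL_STACK

A tiered vocabulary would let a weights-only release be described accurately, without either claiming the “open source” label or being shut out of openness altogether.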

The Impact on Innovation and Public Funding

This debate over open-source standards has implications for AI research and public funding. If OSI’s definition becomes widely adopted by governments and public institutions, it could restrict which models qualify for funding or government contracts. Many public grants and contracts require open-source status to qualify, so a strict interpretation could limit the field to a handful of large, well-funded models. For Stout, this scenario could drive public-sector work toward a few major players, ultimately reducing diversity in AI research and stifling smaller-scale innovation.

The OSI counters that these standards are essential if AI openness is to mean something substantial. Without clear requirements, companies might continue to label their models “open source” while withholding vital components, offering a “black box” under the guise of openness. By pushing for full disclosure, OSI hopes to promote genuine accountability, so that open-source AI truly empowers everyone, not just large corporations.

Looking Forward: Crafting Standards That Support Openness

The debate over OSI’s standards underscores the complexity of defining “open source” in AI. Advocates of OSI’s rigorous approach see full transparency as crucial for public trust and innovation, while critics caution that inflexible standards might do more harm than good. If an open-source label is to remain meaningful, it must balance transparency with practical accessibility, allowing models to be shared in ways that support both innovation and responsibility.

Ultimately, OSI’s definition may serve as a benchmark for open-source AI, even if industry players adapt it to accommodate different levels of disclosure. As governments and regulators increasingly focus on AI’s societal impact, the way we define openness will shape not just the future of AI development but also who has access to these powerful tools.

In a fast-evolving field, open-source standards need to balance idealism with practicality. Whether OSI’s definition will hold or evolve, it raises important questions about transparency, public access, and the ethical use of AI.


Richard Foster-Fletcher (He/Him) is the Executive Chair at MKAI.org | LinkedIn Top Voice | Professional Speaker | Advisor on Artificial Intelligence, GenAI, Ethics, and Sustainability.

For more information, please reach out and connect via the website or social media channels.


Antonio Quinonez

The Expert in Adaptive Rule-Based AI Systems | Technical Consultant | farfiner.com

1w

I’m too simple-minded. If you want the OSI badge, follow the guidelines. OSI could create variant badges. Or companies can just call their projects open, open source, whatever, like companies do now, and have the license terms be whatever they want and then switch the terms whenever they like. For me, a goal of open source is, darn it, here’s the build script that worked for me, now you try it. But I know we can’t all be generous in every way. And sharing something that is not yours to share, like all the Harry Potter texts, is a no-go.

Sabine VanderLinden

Activate Innovation Ecosystems | Tech Ambassador | Founder of Alchemy Crew Ventures + Scouting for Growth Podcast | Chair, Board Member, Advisor | Honorary Senior Visiting Fellow, Bayes Business School (formerly CASS)

1w

OSI's transparency push balances ethical concerns and resource constraints.
