The AI Power Struggle and the Future of Intelligence
'2025 in the Style of 1984' by Junior Williams (created with Midjourney)

“The people who are most impacted by AI systems often have the least say in their creation. Meanwhile, those building these systems operate in closed environments, shielded from public accountability.” — Timnit Gebru, Co-founder | Distributed Artificial Intelligence Research Institute

Disclaimer: This is an opinion piece, and the views expressed here are my own. My employer partners with several of the companies mentioned in this article. While I strive to present a balanced critique, readers should interpret these perspectives within the context of ongoing debates about AI ethics and governance.


TL;DR:


This article examines the increasing concentration of AI development among a small group of industry leaders, how this affects access to advanced models, and what it means for innovation and public oversight. It explores the challenges of AI centralization and the case for more open and decentralized alternatives.

  1. The AI Industry and Selective Competition (power struggle, hypocrisy, data access, real-time scraping, double standards);
  2. AI Data Access and the Illusion of Fair Play (secretive data pipelines, government ties, paywalled access, surveillance, elite AI models);
  3. The Growing Divide in AI Access and Control (corporate-government collusion, selective funding, Western AI bias, surveillance hypocrisy);
  4. AI’s Uneven Playing Field (artificial competition, regulatory capture, financial entanglements, selective enforcement);
  5. The Debate Over AI Oversight and Control (oversight vs. control, innovation suppression, decentralized alternatives, ethics theater);
  6. The Case for Open AI Development (open-source revolution, decentralized AI, federated learning, censorship resistance);
  7. The Future of AI Depends on Open Development (knowledge control, elite dominance, resisting the narrative, demand transparency).


The AI Industry and Selective Competition

Recent disputes in AI development highlight a familiar pattern: those with entrenched influence framing competition as unethical while securing their own advantage. The most well-known AI research lab, heavily funded by a major cloud provider, built its empire through mass data collection—harvesting content from every corner of the internet, likely in real time. And yet, when a competitor does something similar, suddenly it’s a crisis. Suddenly, it’s theft.

It’s a clear assertion of control, revealing the double standards in how AI development is governed. While the industry’s most dominant AI company continues to absorb vast amounts of data, its leadership—deeply embedded in both corporate and government circles—enjoys unrestricted access to cutting-edge systems that remain out of reach for the broader world.

This stark contrast isn’t just about access; it reflects a broader strategy to maintain control over AI’s trajectory. The idea that these figures don’t have access to intelligence far beyond what’s commercially available is naive. They aren’t just watching AI evolve; they’re shaping its trajectory in ways that ensure they stay ahead—at any cost.

— "The idea that these figures don’t have access to intelligence far beyond what’s commercially available is naive."

Proponents might argue that such capabilities are necessary for advancing AI research. However, when smaller firms attempt similar methods, they face accusations of theft or unethical behavior—a double standard that underscores the cartel-like dynamics at play.


AI Data Access and the Illusion of Fair Play

The most influential AI company operates with an evolving data pipeline that extends far beyond publicly available sources, securing access to exclusive streams of real-time information while enforcing restrictions on others. The infrastructure required to train frontier AI models doesn’t just rely on static datasets—it requires continuous ingestion of the latest human-generated content. This means access to proprietary sources, paywalled content, and potentially even classified intelligence pipelines through corporate and governmental partnerships.

— "Technologies like retrieval-augmented generation (RAG) are specifically designed to incorporate real-time data into AI responses."

AI firms don’t train their models on outdated, static data alone. They leverage agreements with cloud providers, internet service providers, and intelligence agencies to ensure a steady flow of new information, often far beyond what the public can access. Technologies like retrieval-augmented generation (RAG) are specifically designed to incorporate real-time data into AI responses, reinforcing the likelihood that OpenAI’s most advanced systems are pulling from live sources rather than operating in isolation. Given OpenAI’s relationship with Microsoft—a company with deep government and military ties—it’s not a stretch to assume that access extends well beyond the publicly available internet.
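To make the retrieval step that RAG relies on concrete, here is a minimal, self-contained sketch in Python. The bag-of-words similarity, the tiny corpus, and all function names are illustrative stand-ins invented for this example; production systems use learned vector embeddings and live data feeds rather than a hard-coded document list.

```python
# Minimal retrieval-augmented generation (RAG) sketch: retrieve the most
# relevant document for a query, then build a grounded prompt from it.
# Everything here is a toy illustration, not any vendor's actual pipeline.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding': token counts stand in for a vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, corpus: list[str]) -> str:
    """Return the corpus document most similar to the query."""
    q = embed(query)
    return max(corpus, key=lambda doc: cosine(q, embed(doc)))

def build_prompt(query: str, corpus: list[str]) -> str:
    """Ground the model's answer in freshly retrieved context."""
    context = retrieve(query, corpus)
    return f"Context: {context}\n\nQuestion: {query}\nAnswer:"

corpus = [
    "GPU prices rose sharply this quarter amid AI demand.",
    "A new open-source model was released under Apache 2.0.",
]
print(build_prompt("Why did GPU prices rise?", corpus))
```

The key point for the argument above: whoever controls the corpus being retrieved from controls what the model "knows" at answer time, which is why access to live, proprietary data streams matters so much.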

A key distinction is access to real-time AI models trained on extensive datasets, spanning both publicly available and more restricted sources. These models operate beyond the constraints imposed on publicly available tools, enabling unparalleled insights and capabilities. Meanwhile, publicly available AI models are designed with constraints that align with corporate and regulatory considerations, limiting access to less filtered outputs.

If transparency were truly a priority, the full extent of these data pipelines would be publicly disclosed—but instead, they remain hidden behind vague legal justifications. Certain companies call it “research,” but when rivals dare to employ comparable tactics, they are branded as mere thieves—exposing a double standard in the AI arena.

Critics may argue that centralized AI enables better oversight and safety. But history shows that concentrated power rarely serves the greater good—it typically prioritizes profit over people.


The Growing Divide in AI Access and Control

When NVIDIA’s stock tumbled, Jensen Huang bypassed investor appeasement and rushed to the White House, starkly illustrating that in AI, corporate might often overshadows market logic.

The visit highlights the reality that when AI’s biggest players face adversity, they don’t rely on market forces or competition to recover. They turn to governments that are more than willing to bend the rules in their favor.

Public investment overwhelmingly supports a handful of major AI companies, reinforcing their dominance while newer entrants face scrutiny and restrictions—not necessarily due to their methods, but because they operate outside the established power structure. The messaging is clear and consistent: AI companies based in the West are portrayed as responsible pioneers, while those from China are cast as security risks under state influence.

— "Public funds are channeled solely to OpenAI and Microsoft, further tightening their grip on the future of AI."

Yet, the idea that China is the only country integrating AI models into government surveillance is absurd. DeepSeek’s data is undoubtedly accessible to the Chinese government, just as American AI firms are deeply entangled with the intelligence operations of the Five Eyes alliance, which includes the U.S., U.K., Canada, Australia, and New Zealand. The messaging around AI oversight differs depending on where it originates—some nations frame their approach as prioritizing privacy and ethics while engaging in similar large-scale surveillance.

While initiatives like GDPR aim to safeguard digital privacy, the reality is that surveillance remains pervasive—often justified under the banner of national security or technological advancement. Western nations present themselves as privacy advocates, yet their use of AI for surveillance is extensive and often parallels the approaches they criticize.

Supporters of this narrative claim it’s about national security. Yet, Western AI firms are equally entangled with intelligence agencies through partnerships like the Five Eyes alliance, raising questions about who truly benefits from these distinctions.


AI’s Uneven Playing Field

What looks like competition in AI development is often a controlled environment where the same major players maintain influence, shaping the field through regulatory capture and strategic financial alliances. They don’t just dominate their respective markets; they manipulate governments to protect their interests.

AI regulations are not crafted for fairness; they serve to entrench the status quo, preserving power in the hands of the elite. Public funds are channeled to tech behemoths like OpenAI and Microsoft, further tightening their grip on the future of AI. Under the guise of national security, Western governments collaborate with AI giants to shut down open-source alternatives while giving corporate-approved models unrestricted development freedom.

— "Microsoft’s strategic investments in OpenAI and NVIDIA illustrate how apparent competition often masks internal portfolio management."

Microsoft’s strategic investments in OpenAI and NVIDIA illustrate how apparent competition often masks internal portfolio management. When one player stumbles, another props them up—ensuring no real challenger emerges. This financial entanglement ensures that a true AI market—where companies compete based on innovation rather than alliances—never actually forms.

And while they consolidate wealth and influence, their focus remains on strategic advantage rather than broad societal benefit. Even as AI companies see record growth, job cuts continue, framed as necessary for efficiency and progress. This raises questions about whether technological advancement is truly being developed for broader societal benefit or primarily for corporate interests. ESG and AI-for-good claims serve merely as cover for a relentless pursuit of profit.

Proponents of centralized AI argue that it enables better oversight and safety. Yet history shows that concentrated power rarely serves the greater good. From Standard Oil to Facebook, monopolies have consistently prioritized profit over people, exploiting regulatory loopholes to suppress competition and consolidate control.


The Debate Over AI Oversight and Control

Proponents of centralized AI argue that it offers streamlined oversight, ensuring safety and accountability in a rapidly evolving field. They claim that smaller, decentralized systems lack the resources to implement robust safeguards against misuse or unintended consequences.

Granted, these points hold some merit, yet history consistently demonstrates that concentrated power rarely serves the greater good. In the context of AI, centralized oversight risks becoming a tool for entrenching the status quo rather than fostering innovation.

— "From Standard Oil to Facebook, monopolies have consistently prioritized profit over people."

A focus on centralized safety often overlooks the potential of decentralized models to provide transparency, resilience, and broader participation in AI development. Open-source platforms like Hugging Face and Mistral AI show that collaboration and transparency can produce powerful, secure AI models without dependence on corporate gatekeepers. Federated learning and peer-to-peer networks also support localized oversight, allowing communities to shape AI solutions to their needs while ensuring accountability.
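To make the federated learning idea concrete, below is a toy sketch of federated averaging (often called FedAvg) on a one-parameter linear model. The datasets, learning rate, and round count are invented for illustration; the point is only the core mechanism: clients train locally and exchange model weights, never raw data.

```python
# Minimal federated averaging (FedAvg) sketch: each client trains locally
# on its own private data, and only model weights (never the data itself)
# are shared with the server and averaged. Toy 1-D linear model y = w*x.

def local_step(w: float, data: list[tuple[float, float]], lr: float = 0.1) -> float:
    """One gradient step of least squares y ~ w*x on a client's private data."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def fed_avg(weights: list[float]) -> float:
    """Server aggregates client updates by simple averaging."""
    return sum(weights) / len(weights)

# Two clients whose private datasets are both drawn from y = 2x
clients = [
    [(1.0, 2.0), (2.0, 4.0)],
    [(3.0, 6.0), (0.5, 1.0)],
]
w = 0.0  # shared global model parameter
for _round in range(50):
    local = [local_step(w, data) for data in clients]  # train locally
    w = fed_avg(local)  # only weights cross the network
print(f"learned weight: {w:.2f}")
```

Because the raw examples never leave each client, this style of training supports the localized oversight described above: communities keep custody of their data while still contributing to a shared model.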

By addressing these concerns head-on, we can shift the conversation from fear-based justifications for centralization to a vision of AI that prioritizes accessibility, equity, and genuine progress.


The Case for Open AI Development (no pun intended)

There are alternative paths. Open-source platforms and decentralized models offer a way to expand access and reduce reliance on a few dominant players. For all the resources hoarded by Big Tech, the most promising advancements in AI are emerging from outside their control. Open-source platforms like Hugging Face, Mistral, and OpenCTI by Filigran are proving that powerful AI models can be built without trillion-dollar infrastructure. Perplexity has shown that search can be enhanced through intelligent AI aggregation, even combining models like DeepSeek with other sources to deliver uncensored and highly effective responses. Smaller tools like n8n are making AI automation more accessible than ever.

Instead of centralizing AI into corporate-controlled, closed ecosystems, the world should be moving toward decentralized AI, where individuals and communities have control over their own models. Instead of relying on the whims of OpenAI, Google, and Microsoft, the focus should be on federated learning, peer-to-peer AI networks, and models that function locally, free from corporate oversight.

— "Open-source platforms like Hugging Face, Mistral, and OpenCTI are proving that powerful AI models can be built without trillion-dollar infrastructure."

Some AI safety initiatives, while framed as protective measures, also serve to reinforce existing power structures by making it harder for independent and decentralized efforts to compete. Policies designed to curb AI risks often do little more than enshrine corporate power, forcing open-source developers to comply with regulations that Big Tech can bypass with ease. If closed, centralized AI becomes the dominant model, control over the most advanced systems—and the knowledge they generate—will remain concentrated among a few private entities.

As professionals in the tech industry, we have a duty to advocate for open-source AI and decentralized systems—not just for innovation’s sake, but to ensure that AI serves humanity, not just a privileged few. Initiatives like EleutherAI highlight the potential of collaborative, community-driven approaches to solving complex problems.


The Future of AI Depends on Open Development

The direction AI takes isn’t just about safety or ethics—it’s about who gets to shape the future of knowledge and access to intelligence. The most influential AI companies, along with their partners in government and media, are not developing AI for the public good; they are shaping it to align with their own strategic interests. They are ensuring that the most advanced intelligence ever created remains in the hands of a select few, while the rest of the world is given tightly controlled, restricted versions that serve corporate agendas.

"They are ensuring that the most advanced intelligence ever created remains in the hands of a select few."

Ensuring a more open AI future will require broader support for decentralized and independent development models that prioritize accessibility and transparency. AI’s trajectory is being shaped by those with the most influence. The question is whether we accept this concentration of power—or support a future that prioritizes openness and accessibility.

If you believe in an open and decentralized AI future, help spread the message. Like, comment, share, and subscribe to keep the conversation going. Your voice matters.



About the Author

Junior Williams, Senior Solutions Architect at MOBIA, is a seasoned expert in cybersecurity and artificial intelligence, with over 30 years in programming, IT infrastructure, and strategic consulting. His career spans telecommunications, cybersecurity, and AI, consistently staying ahead of emerging technologies. As a member of the Standards Council of Canada (SCC) Mirror Committees for ISO/IEC 27001 (Information Security, Cybersecurity, and Privacy) and ISO/IEC 42001 (Artificial Intelligence), he plays a key role in shaping global security and AI governance standards. Known for his pragmatic approach, Junior blends deep technical expertise with a focus on ethical AI, delivering solutions that balance innovation, security, and real-world impact.


Junior Williams (image edited using Midjourney)


Valdiran Cirilo junior

Tech Blogger | AI & Cybersecurity Trends | Helping Businesses Stay Secure & Informed

21 hours ago

AI should be built with and for society, not just for corporations. How do we bridge the gap between AI developers and the communities most affected by these systems?

Saima Fancy

Privacy Engineering Expert | Cybersecurity Specialist | AI & Data Governance Leader | Former Twitter Privacy Engineer | Influential Speaker | Mentor | Championing Women and Girls in STEM

3 weeks ago

You leave us with lots to ponder over Junior. This is quite an insightful article. The choice between centralized dominance or decentralized openness will shape AI’s ethical and societal impact. Achieving this vision will require robust governance, scalable infrastructure, and sustainable funding models. Can we get there?

Rafael Arturo Ramírez

Trusted C|CISO, CISSP, CCSP, CEH blending cybersecurity, AI, and agility for impactful innovation. When OOO, the ultimate husband/dad chef exploring life’s flavors, community building and entrepreneurship.

3 weeks ago

Junior Williams a great opinion article. I like the concept of elite AI models, or perhaps they could even be called lobbied AI models. The forces shaping the new AI society are coming from all directions, and the power struggle will persist. In the end, it seems we are somehow guilty of sacrificing our well-being for the sake of profits.

Allen Westley, CSM, CISSP, MBA

Cybersecurity Leadership | Strategy | Cultural Competence | AI Security | Tech Talks | Intrapreneur Spirit

3 weeks ago

Well Junior, this is certainly a provocative opinion piece. There's a lot here that many in big tech and Government might take issue with. Kudos to you for promoting critical thought around this important topic.

Nazanin Bayati, Ph.D.

AI and Cybersecurity Research Engineer

3 weeks ago

IMO the case for open-source AI development is more urgent than ever. While Big Tech consolidates power and resources, platforms like Hugging Face and EleutherAI are proving that powerful AI models can be built collaboratively and transparently.
