Dangers of Artificial General Intelligence? A Solution The Big Tech Oligarchs Don't Want You To Know About!
Centralized AGI Controlling Information and Its Flow

Artificial Intelligence (AI) has made incredible strides in recent years, particularly with the debut of generative AI tools like ChatGPT that can generate human-like text, engage in conversation, and even create art and code. Today AI is growing exponentially in sophistication, and the pace of that development is raising concerns about the potential risks and the immense power it could give to the entities building it. Even more concerning, AI appears to be evolving on its own – without direction, guardrails, or limits. Indeed, even its creators are uncertain what it's doing, how intelligent it can get, and what the end game will look like.

AI Tech Leaders' Take on the Future of AGI

The Path to AGI

In fact, the prospect of generative AI evolving into what is known as Artificial General Intelligence (AGI) has created a Chicken Little scenario of sorts among many AI experts. AGI – the ultimate embodiment of AI – could match or exceed human intelligence across a wide range of tasks. It could revolutionize everything from science and medicine to education and entertainment. But it could also be used to automate jobs, distribute information and misinformation en masse, and potentially even escape human control entirely, becoming an autonomous entity driven by self-survival. To some in the AI industry, we might be heading toward an event horizon in AGI where humans lose all control of the technology – like Terminator's Skynet, a runaway AGI system that gains self-awareness and acts against humanity's best interests. Pure sci-fi?

Well, this has become such a serious concern that prominent figures in tech, including Elon Musk, Apple co-founder Steve Wozniak, and Tristan Harris of the Center for Humane Technology, have sounded the alarm about the breakneck pace of AI development. In an open letter released in March, they called for an immediate six-month pause on training AI systems more powerful than GPT-4 so that there is time to develop robust safety protocols.

The Risks of Centralized AGI Control

That letter read in part, "Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us? Should we risk loss of control of our civilization?" More to the point, the letter pointed a finger at the big tech oligarchs, "Such decisions must not be delegated to unelected tech leaders."

If you’re familiar with my LinkedIn posts, you’ll know that I’m not a fan of big tech’s control over the flow of information to its users, which often includes agenda-driven bias and censorship. So the prospect of AGI being controlled by a handful of powerful big tech overlords is chilling from my perspective. We've already seen the outsized influence that centralized platforms like Facebook, Google and Twitter can have in shaping public discourse and even swaying elections. AGI in the wrong hands could supercharge the ability to surveil, deceive, and manipulate us all on a mass scale. That is a realistic vision of the dangers, and a legitimate concern.

Ensuring Responsible AGI Development

What’s the answer? How can we ensure AGI is developed responsibly, with adequate safeguards and democratic oversight? One proposed solution is to develop AI using open source - meaning the underlying code is freely available for anyone to access, use, and build upon.

Emad Mostaque, founder of Stability AI, the company behind the image generator Stable Diffusion, has been one of the most vocal proponents of this approach. Mostaque envisions an open-source AGI development program following the principles of "Web 3.0" – a decentralized evolution of AI based on blockchain technology. In a Web 3.0 model, data and tools aren't controlled by centralized big tech gatekeepers. They are distributed across global peer-to-peer networks, creating a hive mind that’s resistant to monopolistic control. Those networks, or nodes, are organized as Decentralized Autonomous Organizations (DAOs) – a term that will hopefully become ubiquitous in the near future.

The Genesis of DAOs Within Web 3.0

The concept of DAOs first emerged in the early 2010s, primarily within the cryptocurrency and blockchain community. The idea was initially described by Dan Larimer, the founder of BitShares, in a 2013 blog post.

However, the term "DAO" gained widespread attention in 2016 with the launch of "The DAO" on the Ethereum blockchain. The DAO was an ambitious venture capital fund that aimed to democratize investment decision-making by allowing participants to vote on proposals using blockchain-based tokens.

The primary purpose of DAOs is to create organizations that can operate autonomously, without traditional hierarchical management structures. By leveraging smart contracts on blockchain networks, DAOs aim to enable decentralized decision-making, transparency, and immutability of records.

Key aspects and goals of DAOs include:

1. Decentralized governance: Decisions are made collectively by members through voting mechanisms.

2. Transparency: All transactions and decisions are recorded on a public blockchain, ensuring transparency.

3. Automation: Smart contracts automate the execution of decisions once certain conditions are met.
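The three mechanisms above can be sketched as a toy, off-chain model in Python. This is purely illustrative: the `ToyDAO` class, its quorum rule, and the string-based ledger are hypothetical constructs of mine, not the API of any real DAO framework – in practice these rules live in smart contracts on a blockchain.

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    description: str
    votes_for: int = 0
    votes_against: int = 0
    executed: bool = False

class ToyDAO:
    """Illustrative off-chain model of DAO governance (not a real smart contract)."""

    def __init__(self, quorum: int):
        self.quorum = quorum               # votes required before execution
        self.members: dict[str, int] = {}  # member -> token balance (voting weight)
        self.proposals: list[Proposal] = []
        self.ledger: list[str] = []        # public, append-only record (transparency)

    def join(self, member: str, tokens: int) -> None:
        self.members[member] = tokens
        self.ledger.append(f"JOIN {member} {tokens}")

    def propose(self, description: str) -> int:
        self.proposals.append(Proposal(description))
        self.ledger.append(f"PROPOSE {description}")
        return len(self.proposals) - 1

    def vote(self, member: str, pid: int, support: bool) -> None:
        weight = self.members[member]      # decentralized governance: token-weighted vote
        p = self.proposals[pid]
        if support:
            p.votes_for += weight
        else:
            p.votes_against += weight
        self.ledger.append(f"VOTE {member} {pid} {support}")
        # Automation: execute as soon as the agreed condition is met
        if not p.executed and p.votes_for >= self.quorum and p.votes_for > p.votes_against:
            p.executed = True
            self.ledger.append(f"EXECUTE {pid}")
```

For example, with a quorum of 100, two members holding 60 and 50 tokens who both vote "for" push a proposal past the threshold, and the execution step fires automatically with no manager in the loop.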

Since the launch of "The DAO" in 2016, the concept has evolved, and numerous DAOs have been created for various purposes, such as investment, charity, and community organization. Today, it’s becoming a significant talking point within the AI community and is beginning to figure prominently in the AGI discussion.

Blockchain Governance of DAOs

The Limitations of Open-Source Alone

I see DAOs as critical to the democratization, operation and governance of AGI. Open source alone is not sufficient to mitigate centralized control of AGI because it doesn't guarantee responsible governance. Indeed, some big tech leaders who claim they’re embracing open source for all the right reasons might actually be playing a game of three-card monte with the public. Even in an open-source environment, these leaders can exert significant influence over AGI projects in the name of “the common good” via corporate structures, restrictive licenses, or other means. These big tech AI labs are very open about establishing control over their particular flavor of information, which they frame in altruistic terms. Meta, for instance, is touting its forthcoming Llama 3 AI as 100% open source but plans to install "overseers" to police the model's output so that it aligns with CEO Mark Zuckerberg's own standards of ethics, morality and the 'good of society.' I see this as a concerning form of centralized control and censorship, making Llama 3 little more than a renamed version of Facebook and Meta on steroids.

My Take on Meta's Open Source Llama 3

Decentralized AGI through Web3 and DAOs

Achieving a truly decentralized and democratic AGI that offers a degree of protection against big tech's ambitions will require innovating beyond our current open-source paradigms and integrating cutting-edge Web 3.0 technologies like blockchain. With AI tools built on public blockchains, their development could be directed through transparent and participatory governance mechanisms – such as DAOs and quadratic voting for representational fairness (analogous in spirit to the U.S. electoral college) – rather than a backroom command center at Meta.
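The quadratic voting mentioned above has a simple core rule: casting n votes on a single issue costs n² voice credits, so each additional vote gets progressively more expensive and large token holders can't linearly buy outcomes. A minimal sketch of that arithmetic (the function names are mine, purely illustrative):

```python
import math

def quadratic_cost(votes: int) -> int:
    """Voice credits needed to cast `votes` votes on one issue (cost = votes squared)."""
    return votes * votes

def max_votes(credits: int) -> int:
    """The most votes a member can afford on one issue with a given credit budget."""
    return math.isqrt(credits)  # integer square root: largest n with n*n <= credits
```

The dampening effect is the point: a member with 100x the credits of another gets only 10x the voting power, which is why quadratic voting is often proposed as a fairness mechanism for token-based governance.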

DAOs could play a particularly pivotal role in ensuring AGI serves the interests of diverse global communities. These blockchain-based organizations allow members to collectively make decisions and pool resources towards common goals. By forming DAOs around specific issues like healthcare, education, or environmental protection, communities could collaboratively steer the development of AI to tackle their unique challenges.

Imagine, for instance, a DAO of healthcare professionals and patients jointly training AI to improve diagnostics for rare diseases. Or a DAO of educators and students working to personalize AI tutoring tools, or devising innovative improvements to Indonesia's education system. By harnessing swarm or hive-mind intelligence rather than relying on top-down control, DAOs could help make AGI development more agile, inclusive, and responsive to on-the-ground needs.

Decentralized DAOs Around the Globe Contributing to A Single Hive Mind AGI

Realizing a Web3 AI Future and Challenges

Cultivating a global ecosystem of issue-focused DAOs could form the basis for a radically decentralized and democratic approach to AGI governance - one in which the technology's monumental power is guided not by profit or authoritarian control, but by the collective wisdom of the global community.

Realizing this Web3-native future for AGI will require tackling thorny challenges around token economics, decentralized infrastructure, blockchain scalability and more. But work is already underway on promising solutions: projects such as SingularityNET, Fetch.ai and Ocean Protocol are building blockchain-based AI marketplaces where algorithms can be discovered and coordinated via smart contracts.
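The marketplace idea reduces to a shared registry: AI services advertise their capabilities and prices, and clients discover and select them without a central gatekeeper. The toy model below is my own off-chain sketch of that pattern – it is not the API of SingularityNET or any other real project, where the registry would live in on-chain smart contracts.

```python
# Toy, off-chain sketch of a decentralized AI-service marketplace registry.
# Hypothetical construct for illustration; real projects implement this on-chain.
class ServiceRegistry:
    def __init__(self) -> None:
        self._services: dict[str, dict] = {}

    def register(self, name: str, endpoint: str, price: float, tags: list[str]) -> None:
        """An AI service advertises where it runs, what it costs, and what it can do."""
        self._services[name] = {"endpoint": endpoint, "price": price, "tags": tags}

    def discover(self, tag: str) -> list[str]:
        """Return the names of services advertising a capability tag, cheapest first."""
        hits = [(s["price"], name) for name, s in self._services.items() if tag in s["tags"]]
        return [name for _, name in sorted(hits)]
```

A client looking for, say, image classification queries the registry by capability tag and receives candidate services ranked by price, with no single company deciding which model is offered.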

Ultimately, AGI could be characterized as a matrix of hundreds of thousands, if not millions, of DAOs networked together via a decentralized blockchain governance system. In this way, it could be far more beneficial to humanity in general than a centralized, or even a distributed, server farm controlled by only a few. AGI could become the single hive mind of humanity.

Conclusion

Though still in early stages, these decentralized approaches point to a potential path forward - one where AGI's power is channeled not towards centralized private profit and social control, but towards democratization that will benefit humanity as a whole. As AGI draws ever closer on the horizon, the battle over who will control it is only intensifying. It's important for everyone to remember the flaws of social media that we've experienced over the past five years and recognize that AGI can be exponentially worse. We have to act thoughtfully to ensure this transformative technology fulfills its positive potential, while mitigating its existential risks. By leveraging Web3 innovations like DAOs to democratize AI governance, we can work to ensure AGI becomes a force for global good rather than centralized domination. The future of AGI hangs in the balance - and with it, perhaps the future of our civilization itself.

Sources below provide additional context and evidence for the key ideas discussed in this newsletter - from the transformative potential and risks of advanced AGI to the movement towards open-source and decentralized AI development using Web3 technologies like blockchain and DAOs.

The open letter calling for a pause in AI development:

"An Open Letter: Pause Giant AI Experiments" (Future of Life Institute, 2023) https://futureoflife.org/open-letter/pause-giant-ai-experiments/

The concept of Web3 and decentralized technologies:

What is Web 3.0? https://www.investopedia.com/web-20-web-30-5208698

Open-source AI initiatives:

https://www.theverge.com/2024/1/18/24042354/mark-zuckerberg-meta-agi-reorg-interview

What is a DAO?

https://www.investopedia.com/tech/what-dao/

Dan Larimer and BitShares – the DAO concept

https://how.bitshares.works/en/master/technology/history_bitshares.html

Web3 AI projects:

SingularityNET, 2023 https://singularitynet.io/

Fetch.ai, Ocean Protocol and SingularityNET partnership https://www.pymnts.com/news/artificial-intelligence/2024/fetch-ai-ocean-protocol-and-singularitynet-to-partner-on-decentralized-ai/

Toly N.

|Technology Business Process Innovator|Artificial Intelligence|Machine Learning|Cybersecurity |Data Science |Optimization| Supply Chain |Transformation|Business Value|Integration|PMO|

7 months ago

Great points! I would add the following. 1. The enforceability of ethical and other 'hard' constraints on ANY 'specialized-domain AI' is paramount. Developers, managers, and investors must understand the gravity of their actions. Severe penalties, including life imprisonment, should be imposed for any violation. 2. In collaboration with a 'dark web' enforcement, the inter-government office must ensure strict adherence to these constraints. Oversight must be in the hands of a global community comprised of official, apolitical, and unofficial (dark web) participants who do this for free, except for the Commission's Chairman and limited admin staff. Dark web community representation must be at least second in authority to the Chairman.

Reply
Marek Kulbacki, PhD

Entrepreneur & Solutionist | Principal Scientist | CEO, CSO, CTO | Engineering & Innovation Expert | Leading R&D at PJAIT & DIVEINAI

7 months ago

You're absolutely right, Barry. Our surrounding ecosystem is decentralized too. LLMs are powerful tools, but understanding the intricate dance of nature requires more than just technology. We need to deepen our learning and take a more holistic approach. There are billions of interconnected processes happening at every scale, from the microscopic world of microbes to the vastness of weather systems. AI can help to get faster to right conclusions and completely new questions. Perhaps the answer lies in finding the right balance. We can leverage technology to discover knowledge from these complex natural systems, but true understanding will come from combining that intelligent systems with a deeper appreciation of the natural world.

Reply
Mick Tinker

BMS/IOT Applications Engineer & BSc Computing Student with the OU

7 months ago

This may be the first time I have read anything about an AI future in which humans do not destroy themselves, but instead manage to harness the power of AI for the good of all. Or am I just dreaming again.....
