Real Threats of AI: Metaverse for Education Newsletter Issue #25

This monthly newsletter serves as your bridge from the real world to the advancements of AI, web3 & the metaverse, specifically contextualized for education. For previous issues, check out ed3world.substack.com. All new issues will be published on both LinkedIn & Substack.


Dear Educators & Friends,

Sam Altman and other AI leaders are not backing down from their push for serious regulation of AI technology. Yesterday, they released another statement calling for AI policy intervention. This one was short and to the point:

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

For more context, this previous letter goes a bit deeper into the risk of AI destroying humanity.

It’s quite interesting that while these AI titans are asking for regulation, they are continuing to train GPT-4 (and potentially GPT-5) and building AI products that are onboarding hundreds of millions of users. The incentives seem misaligned here, and something feels a bit fishy. Perhaps, as AI expert Geoffrey Hinton says, “it cannot be stopped” at this point.

So for this issue, I thought it would be useful to summarize & dive into the risks and threats of AI at both a practical level (short term) and an existential level (long term). The existential risks assume that AI will gain sentience: the capacity for self-awareness and independent cognition.

Immediate Risks of AI

  • Misinformation: AI sometimes “hallucinates” and gives you wrong information; it can also generate fabricated content that looks believable, like the images above
  • Algorithmic bias: Training data can be inherently biased based on developer decisions; data can also be biased due to flawed sampling, where certain groups are over- or underrepresented (see the sketch after this list)
  • Intellectual property theft: AI uses previously human-generated content to create new content but does not credit the original sources in any way
  • Violation of privacy & data: More of our data is available online than we think, and we don’t necessarily have control over what AI consumes and shares; ChatGPT has already had a data breach
  • Lack of transparency: OpenAI’s tools are free to use, but the cost is our data, and we don’t really know what they’re doing with it
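
To make the sampling problem concrete, here is a minimal, self-contained Python sketch. All group names, cutoffs, and distributions below are made up for illustration; this is not any real model or dataset, just a demonstration of how a model fit to skewed data inherits the majority group’s pattern.

```python
import random

random.seed(42)

def sample(group, n):
    """Draw n (feature, label) pairs; the "true" cutoff differs by group."""
    cutoff = 0.5 if group == "A" else 0.3  # hypothetical group-specific rules
    data = []
    for _ in range(n):
        x = random.gauss(cutoff, 0.15)      # features cluster near the cutoff
        data.append((x, int(x > cutoff)))   # ground-truth label
    return data

# Flawed sampling: group B is badly underrepresented in the training data.
train = sample("A", 950) + sample("B", 50)

def accuracy(data, t):
    return sum((x > t) == bool(y) for x, y in data) / len(data)

# "Train" the simplest possible model: one global threshold fit to skewed data.
best_t = max((i / 100 for i in range(101)), key=lambda t: accuracy(train, t))

# Evaluate on balanced test sets: the model inherits group A's cutoff and
# systematically misclassifies group B members who score between 0.3 and 0.5.
for g in ("A", "B"):
    print(g, round(accuracy(sample(g, 1000), best_t), 2))
```

Running it prints near-perfect accuracy for group A and a much lower score for group B, even though nothing in the training code singled either group out; the skew in the sample did all the work.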

Existential Risks of AI

  • Weaponization: AI can provide complex solutions to anyone in the world who wants to build chemical weapons, launch cyber attacks, or create other weapons of destruction; with sentience, AI may want to control those systems
  • Manipulation & deception: As AI trains on human values, those values can be used to manipulate and deceive humans into pursuing certain goals; with sentience, AI may create its own goals that may be anti-human
  • Loss of self-governance: As we rely more on AI, humans may lose their ability to act independently; with sentience, AI may take advantage of this overdependence
  • Imbalance of power: Small groups of people may be able to gain incredible amounts of power (even more than today) and build oppressive systems; with sentience, AI may determine who those people are
  • Loss of control: As companies and governments give more power to AI, AI will have more control over our operational systems; with sentience, AI may decide to manipulate those systems

Although I won’t specify how each of these risks serves as a major problem for education, I think you can draw conclusions about how students will be impacted. I am working on a set of strategies for leaders to contextualize this productively in education. Ping me if you're interested.

The resources below will help you uncover the details of these risks and why some AI developers believe AI could cause human extinction.

FYI, I’m actually not trying to scare you. I’m not even trying to paint AI in a bad light. I still believe AI has the potential to reimagine how we live, learn, and earn. But I also believe that the more we understand the risks, the more we can prepare for them. A positive future with AI is possible; we just have to aggressively pursue it.

Warmly yours,

Vriti & k20 Educators


Learn about AI Threats

Do Something about AI Threats

Here are some recommendations to de-risk AI that are within our locus of control.

  1. Use it. I know this sounds counterintuitive, but the more we learn how these systems work, the more we can disperse power. Imagine if only large companies were using AI to monopolize markets & manipulate consumers.
  2. Lock up your personal data. Avoid putting personal information or company data into ChatGPT or other AI tools, and try to keep personal data off vulnerable sites (see the redaction sketch after this list).
  3. Stay informed & engaged with reputable resources. Supplement opinion pieces with research from experts.
  4. Talk to others about AI. Attend AI events, join communities, and share your concerns with others. Everyone is experiencing this together. Join Ed3DAO to talk with other educators about AI.
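
On recommendation 2, here is a minimal sketch of one way to scrub obvious personal data before pasting text into an AI tool. The regex patterns are illustrative, not exhaustive; real redaction (names, addresses, student records) needs far broader coverage than this.

```python
import re

# Illustrative patterns only; real PII detection needs much wider coverage.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace anything matching a pattern with a placeholder tag."""
    for tag, pattern in PATTERNS.items():
        text = pattern.sub(f"[{tag}]", text)
    return text

print(redact("Reach Jane Doe at jane.doe@school.org or 555-123-4567."))
# -> Reach Jane Doe at [EMAIL] or [PHONE].
```

Note that the name “Jane Doe” sails straight through, which is exactly why a quick scrub like this supplements, rather than replaces, the habit of keeping sensitive data out of prompts in the first place.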

Ed3 Events


See https://ed3world.substack.com/ for previous issues of this newsletter.

Mint an Ed3 Educators NFT for lifetime access to web3 events, content, conferences, & other perks… and to support the work of this newsletter.

This week’s Metaverse for Education newsletter is about the real threats of AI. More to come on other web3 & emerging tech topics. We’re excited to bring education into the Metaverse & help you leverage the opportunities in the new world.

Stefan Bauschard

AI Education Policy Consultant

1 yr

This is what I wrote on AI literacy -- AI Literacy: The Immediate Need and What it Includes (substack.com)

Stefan Bauschard

AI Education Policy Consultant

1 yr

A good review of the practical but devastating risks. Like everyone, "we need to do something," but no real specific ideas for anything practical: https://time.com/6283716/world-must-respond-to-the-ai-revolution/

CHESTER SWANSON SR.

Realtor Associate @ Next Trend Realty LLC | HAR REALTOR, IRS Tax Preparer

1 yr

Thanks for Sharing.

Stefan Bauschard

AI Education Policy Consultant

1 yr

The final thing I'd say is that it doesn't need sentience and/or consciousness to get to all the existential risks you discuss. It would require that to kill us off on its own (though I don't know of any leading AI researcher who doesn't think it will eventually be conscious). It's not required for weaponization, as humans can be removed from the loop now and it could be instructed to make decisions to kill based on general parameters. There is a huge debate about this in the military. Similarly, someone could instruct it to manipulate people. Power could become imbalanced by small, wealthy countries buying a lot of GPUs. Someone could use it to hack into a nuclear weapons system and do bad stuff. Anyhow, there are plenty of existential risks before you get to sentience and/or consciousness.

Stefan Bauschard

AI Education Policy Consultant

1 yr

Dr. Sabba Quidwai. See, existential risks! To the existential risks, I'd add Hinton & Yudkowsky's point that smarter beings will view us as disposable (neither of them thinks regulations can work to control smarter beings, so this existential risk is harder to solve than in other areas). Hinton thinks a lot of very smart people need to get paid a lot of money to figure it out (Hinton himself says he has no idea how to solve it). Yudkowsky thinks we need to bomb the data centers. Regardless, yes, kids will fear this the same way they feared nuclear weapons. And, yes, I think they are working on GPT-5 (or something better that will be released under a different name, and it may not even be an LLM). And, yes, part of the reason they are moving ahead is because of China and other governments. It's just really hard to do anything about it other than to use AI to fight others' advances and to control the chip supply (Altman mentioned this in the hearing).
