Real Threats of AI: Metaverse for Education Newsletter Issue #25
This monthly newsletter serves as your bridge from the real world to the advancements of AI, web3 & the metaverse, specifically contextualized for education. For previous issues, check out ed3world.substack.com. All new issues will be published on both LinkedIn & Substack.
Dear Educators & Friends,
Sam Altman and the AI crew are not backing down from their push for serious consideration of AI regulation. Yesterday, they came out with another statement calling for AI policy intervention. This one was short and to the point:
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
For more context, this previous letter goes a bit deeper into the scenario of AI destroying humanity.
It’s quite interesting that while these AI titans are asking for regulation, they continue to train GPT-4 (potentially GPT-5) and build AI products that are onboarding hundreds of millions of users. The incentives seem misaligned here and something feels a bit fishy. Perhaps, as AI expert Geoff Hinton says, “it cannot be stopped” at this point.
So for this issue, I thought it would be useful to summarize & dive into the risks and threats of AI at both a practical level (short term) and an existential level (long term). The existential risks assume that AI will gain sentience, or the capacity for self-awareness and cognitive ability.
Immediate Risks of AI
Existential Risks of AI
Although I won’t specify how each of these risks poses a major problem for education, I think you can draw conclusions about how students will be impacted. I am working on a set of strategies for leaders to contextualize this productively in education. Ping me if you're interested.
The resources below will help you uncover the details of these risks and why AI developers believe it could cause human extinction.
FYI, I’m actually not trying to scare you. I’m not even trying to paint AI in a bad light. I still believe AI has the potential to reimagine how we live, learn, and earn. But I also believe that the more we understand the risks, the more we can prepare for them. A positive future with AI is possible; we just have to aggressively pursue it.
Warmly yours,
Vriti & k20 Educators
Learn about AI Threats
Do Something about AI Threats
Here are some recommendations to de-risk AI that are within our locus of control.
4. Talk to others about AI. Attend AI events, join communities, and share your concerns with others. Everyone is experiencing this together. Join Ed3DAO to talk with other educators about AI.
Ed3 Events
See https://ed3world.substack.com/ for previous issues of this newsletter.
Mint an Ed3 Educators NFT for lifetime access to web3 events, content, conferences, & other perks… and to support the work of this newsletter.
This week’s Metaverse for Education newsletter is about AI threats. More to come on other web3 & emerging tech topics. We’re excited to bring education into the Metaverse & help you leverage the opportunities in the new world.
Comments

AI Education Policy Consultant · 1 year ago
This is what I wrote on AI literacy: AI Literacy: The Immediate Need and What it Includes (substack.com)
AI Education Policy Consultant · 1 year ago
A good review of the practical but devastating risks. Like everyone, "we need to do something," but no real specific ideas for anything practical: https://time.com/6283716/world-must-respond-to-the-ai-revolution/
AI Education Policy Consultant · 1 year ago
The final thing I'd say is that it doesn't need sentience and/or consciousness to get to all the existential risks you discuss. It would require that to kill us off on its own (though I don't know of any leading AI researcher who doesn't think it will eventually be conscious). It's not required for weaponization, as humans can be removed from the loop now and it could be instructed to make decisions to kill based on general parameters. There is a huge debate about this in the military. Similarly, someone could instruct it to manipulate people. Power could become imbalanced by small, wealthy countries buying a lot of GPUs. Someone could use it to hack into a nuclear weapons system and do bad stuff. Anyhow, there are plenty of existential risks before you get to sentience and/or consciousness.
AI Education Policy Consultant · 1 year ago
Dr. Sabba Quidwai. See, existential risks! To those, I'd add Hinton & Yudkowsky's: smarter beings will view us as disposable. (Neither of them thinks regulations can work to control smarter beings, so this existential risk is harder to solve than in other areas.) Hinton thinks a lot of very smart people need to get paid a lot of money to figure it out (Hinton himself says he has no idea how to solve it). Yudkowsky thinks we need to bomb the data centers. Regardless, yes, kids will fear this the same way they feared nuclear weapons. And, yes, I think they are working on GPT-5 (or something better that will be released under a different name, and it may not even be an LLM). And, yes, part of the reason they are moving ahead is because of China and other governments. It's just really hard to do anything about it other than to use AI to fight others' advances and to control the chip supply (Altman mentioned this in the hearing).