How Something Comes from Nothing
This week on the Next Big Idea podcast: Daniel Pink and Adam Moss talk about the actual nuts and bolts of how writers, artists, chefs, and editors create work worthy of being called art, which is the topic of Adam's extraordinary book, The Work of Art: How Something Comes from Nothing.
They talk about the "freedom of low expectations," the sense in which "imagination is combination," and the creative power of conversations with oneself. Listen on Apple or Spotify, and share your thoughts in the comments below!
I would love to share something else I have been thinking about today — I have been pretty deeply concerned about AI risk for a few years now. That's part of what has driven my interest in seeking out podcast conversations with some of the leading experts and industry leaders in the space. We've discussed the promise and peril of artificial intelligence recently with Bill Gates, Yuval Noah Harari, Nate Silver, Stuart Russell (who runs the Center for Human-Compatible Artificial Intelligence at Berkeley), Sal Khan (founder of Khan Academy), David Chalmers, Steven Johnson (at Google Labs), and Kevin Roose (New York Times), to name a few. If you missed these, here's a Spotify playlist of our AI greatest hits.
The essential question I keep asking myself is how soon, and how profoundly, AI is likely to change our world. The collective answer from our esteemed guests seems to be: sooner and more profoundly than you think. This view is powerfully reinforced by a buzzy new essay published recently by Anthropic CEO Dario Amodei called Machines of Loving Grace.
If you haven’t read it, here’s the TL;DR:
What scientific breakthroughs will this make possible in a 5-10 year time horizon?
None of these are novel predictions — we have heard similarly utopian takes from folks like Sam Altman, Marc Andreessen, and Ray Kurzweil, and rebuttals from their critics. But it is surprising to see such an aggressive timeline from someone as cautious and measured as Dario, who left OpenAI in 2020 with six other senior staff members (another half dozen have followed recently) in order to build safer and more transparent AI systems. The company they founded, Anthropic, has pledged its commitment to putting out safe AI, even if that takes longer, and hopes to start a "safety race" among the top LLMs.
Dario has a PhD in physics from Princeton, originally ran OpenAI's safety team, and explores at great length in the essay the factors that will slow the pace of AI-driven innovation — namely, interactions with humans. And yet he describes a version of our world that could be unfathomably different in 20 years. Of course he hasn't lost sight of the risks. As he puts it, "most people are underestimating just how radical the upside of AI could be, just as I think most people are underestimating how bad the risks could be."
One of Dario's more original arguments — hopeful for me, in this perilous moment — is that if we do it right, AI could help strengthen democracy around the world. This is important, Dario tells us, because "AI-powered authoritarianism seems too terrible to contemplate."
In my recent conversation with Yuval Noah Harari, author of Sapiens and most recently Nexus, he had the following to say about AI’s potential misuse.
We can have AI teachers and AI doctors and AI therapists who give us services that we didn't even imagine previously. But in the wrong hands, this can create a totalitarian system of a kind that even Orwell couldn't imagine. You can suddenly produce mass intimacy. Think about the politician you most fear in the world. What would that person do with the ability to produce mass intimacy based on AI?
He went on to describe the current use of facial recognition and surveillance in places like China and Iran (as described by Kashmir Hill in her recent book Your Face Belongs to Us: A Secretive Startup's Quest to End Privacy as We Know It). It does feel like a pivotal moment in the global AI race. As Bill Gates said in our conversation, "Let's not let people with malintent benefit from having a better AI than the good intent side of cyber defense or war defense or bioterror defense."
So how, exactly, do we use AI to strengthen democracy? Dario says,
My current guess at the best way to do this is via an “entente strategy,” in which a coalition of democracies seeks to gain a clear advantage (even just a temporary one) on powerful AI by securing its supply chain, scaling quickly, and blocking or delaying adversaries’ access to key resources like chips and semiconductor equipment. This coalition would on one hand use AI to achieve robust military superiority (the stick) while at the same time offering to distribute the benefits of powerful AI (the carrot) to a wider and wider group of countries in exchange for supporting the coalition’s strategy to promote democracy.
MIT professor Max Tegmark, among the most articulate AI safety advocates, referred to Dario's entente strategy in a response as a "suicide race," because "we are closer to building AGI than we are to figuring out how to align or control it." Max makes the case for limiting AI development to "Tool AI" — AIs developed "to accomplish specific goals," such as Google DeepMind's AlphaFold, whose developers just won a Nobel Prize.
I agree with Tegmark's description of the best case: it would be wonderful if we could globally restrict the development of superintelligence to narrow goals. But it doesn't seem likely. As Gates put it in our conversation, "If we knew how to slow it down, a lot of people would probably say, 'Okay, let's consider doing that.'" But, he added, "the incentive structures don't really have some mechanism that's all that plausible of how that would happen."
These conversations are discouraging when we think about the risks to future generations. We can't deny those risks (Dario himself has put his p-doom at 10–25%). But I, for one, am buoyed by Dario's confidence that there is a path to build "powerful AI" responsibly, with the necessary safety mechanisms — and to do it quickly enough to distribute the benefits of this technology globally, while strengthening democracy and human rights. I also like his argument that "improvements in mental health, well-being, and education [should] increase democracy, as all three are negatively correlated with support for authoritarian leaders." I hope this proves to be true. It seems particularly likely that global education will benefit from AI progress, based on my invigorating conversation with Sal Khan, who is making it happen.
Though I find all this fascinating, I also find it overwhelming at times. It can be too much. On Friday, I had a wonderful conversation with Oliver Burkeman (stay tuned!), who says in his new book, Meditations for Mortals, that too many of us are "living inside the news." We are finite humans, who must choose judiciously what to care about. What to focus on. As William James said, "The art of being wise is the art of knowing what to overlook." So should we ignore the dizzying acceleration of AI? I don't think so. I think it's a development that will be transformative enough to our individual and collective futures that it's worth thinking about now.
In the 90s, I leaned into the dawn of the early internet, starting an early web zine and dating platform called Nerve.com. This led to a fascinating career that has been rewarding in every sense of the word. In the late 2000s, in contrast, I was slow to study and engage with the early social media products. I came to regret that professionally. As a parent, meanwhile, as I discussed recently with Jonathan Haidt, I was too permissive with tech access for my kids. I have therefore resolved to try to be a better student of the latest tech revolution. Here are a few ways that the trajectory of AI technology is influencing my decisions:
What do you think? I am curious to hear how you all are processing these developments.
If you want to dig deeper into the topic, you might enjoy our Spotify playlist featuring all of our conversations about AI. If you would like a primer, this conversation with Cade Metz about the history of AI, where it all began, is a great place to start.