How Something Comes from Nothing

This week on the Next Big Idea podcast: Daniel Pink and Adam Moss talk about the actual nuts and bolts of how writers and artists and chefs and editors create work worthy of being called art, which is the topic of Adam's extraordinary book, The Work of Art: How Something Comes from Nothing.

They talk about the "freedom of low expectations," the sense in which "imagination is combination," and the creative power of conversations with oneself. Listen on Apple or Spotify, and share your thoughts in the comments below!



Anthropic's CEO Dario Amodei promises to build safer, world-changing AI.

I would love to share something else I have been thinking about today — I have been pretty deeply concerned about AI risk for a few years now. That’s part of what has driven my interest in seeking out podcast conversations with some of the leading experts and industry leaders in the space. We’ve discussed the promise and peril of artificial intelligence recently with Bill Gates, Yuval Noah Harari, Nate Silver, Stuart Russell (who runs the Center for Human-Compatible Artificial Intelligence at Berkeley), Sal Khan (founder of Khan Academy), David Chalmers, Steven Johnson (at Google Labs), and Kevin Roose (New York Times), to name a few. If you missed these, here’s a Spotify playlist of our AI greatest hits.

Harari holding forth on AI and the future of democracy.

The essential question I keep asking myself is how soon and how profoundly AI is likely to change our world. The collective answer from our esteemed guests seems to be — sooner and more profoundly than you think. This view is powerfully reinforced by a buzzy new essay recently published by Anthropic CEO Dario Amodei called Machines of Loving Grace.

If you haven’t read it, here’s the TL;DR:

  • We are likely to achieve Artificial General Intelligence, which Dario prefers to call “powerful AI,” as soon as 2026, though “it could take much longer.”
  • He describes powerful AI as “smarter than a Nobel Prize winner across most relevant fields,” and replicable, so we could have “millions of instances” of powerful AI, limited only by compute. This could make it possible to have a "country of geniuses in a datacenter" working on our most pressing problems within a couple of years.
  • Once we hit this inflection point, Dario believes we will see 50-100 years of scientific progress in the following 5-10 years, which he refers to as the “compressed 21st century.”

What scientific breakthroughs will this make possible in a 5-10 year time horizon?

  • Elimination of most forms of cancer — “reductions of 95% or more of both mortality and incidence seem possible”
  • Prevention of Alzheimer’s — it could “eventually be prevented with relatively simple interventions”
  • Prevention and treatment of nearly all natural infectious disease
  • Prevention and cures for most forms of mental illness, including depression, schizophrenia, addiction and PTSD
  • Biological freedom — “weight, physical appearance, reproduction, and other biological processes will be fully under people’s control”
  • Improvement of the human “baseline experience” — we will be able to improve a wide range of cognitive functions and increase the proportion of people’s lives that “consist of extraordinary moments” of revelation, creative inspiration, compassion, and beauty.
  • Doubling of the human lifespan — once the human lifespan reaches 150, we may be able to achieve “escape velocity, buying enough time that most of those currently alive today will be able to live as long as they want”
  • Mitigation of climate change through acceleration of technologies for renewable energy, lab-grown meat, and carbon removal

None of these are novel predictions — we have heard similarly utopian takes from folks like Sam Altman, Marc Andreessen, and Ray Kurzweil, and rebuttals from their critics. But it is surprising to see such an aggressive timeline from someone as cautious and measured as Dario, who left OpenAI in 2020 with six other senior staff members (another half dozen have followed recently) in order to build safer and more transparent AI systems. The company they founded, Anthropic, has pledged its commitment to putting out safe AI, even if it takes longer, and hopes to start a “safety race” among top LLMs.

Dario has a PhD in physics from Princeton, originally ran OpenAI’s safety team, and explores at great length in the essay the factors that will slow the speed of AI-driven tech innovation — namely, interactions with humans. And yet … he describes a version of our world that could be unfathomably different in 20 years. Of course, he hasn’t lost sight of the risks. As he puts it, “most people are underestimating just how radical the upside of AI could be, just as I think most people are underestimating how bad the risks could be.”

One of Dario’s more original arguments — hopeful for me, in this perilous moment — is that if we do it right, AI could help strengthen democracy around the world. This is important, Dario tells us, because “AI-powered authoritarianism seems too terrible to contemplate.”

In my recent conversation with Yuval Noah Harari, author of Sapiens and, most recently, Nexus, he had the following to say about AI’s potential misuse:

We can have AI teachers and AI doctors and AI therapists who give us services that we didn't even imagine previously. But in the wrong hands, this can create a totalitarian system of a kind that even Orwell couldn't imagine. You can suddenly produce mass intimacy. Think about the politician you most fear in the world. What would that person do with the ability to produce mass intimacy based on AI?

He went on to describe the current use of facial recognition and surveillance in places like China and Iran (as described by Kashmir Hill in her recent book Your Face Belongs to Us: A Secretive Startup's Quest to End Privacy as We Know It). It does feel like a pivotal moment in the global AI race. As Bill Gates said in our conversation, “Let's not let people with malintent benefit from having a better AI than the good intent side of cyber defense or war defense or bioterror defense.”

So how, exactly, do we use AI to strengthen democracy? Dario says:

My current guess at the best way to do this is via an “entente strategy,” in which a coalition of democracies seeks to gain a clear advantage (even just a temporary one) on powerful AI by securing its supply chain, scaling quickly, and blocking or delaying adversaries’ access to key resources like chips and semiconductor equipment. This coalition would on one hand use AI to achieve robust military superiority (the stick) while at the same time offering to distribute the benefits of powerful AI (the carrot) to a wider and wider group of countries in exchange for supporting the coalition’s strategy to promote democracy.

MIT professor Max Tegmark, among the most articulate AI safety advocates, referred to Dario’s entente strategy in a response as a “suicide race,” because “we are closer to building AGI than we are to figuring out how to align or control it.” Max makes the case for limiting AI development to “Tool AI” — AIs developed “to accomplish specific goals,” such as Google DeepMind’s AlphaFold, whose developers just won a Nobel Prize.

I agree with Tegmark’s description of the best case — it would be wonderful if we could globally restrict the development of superintelligence to narrow goals. But it doesn’t seem likely. As Gates put it in our conversation, “If we knew how to slow it down, a lot of people would probably say, ‘Okay, let's consider doing that.’” But, he added, “the incentive structures don't really have some mechanism that's all that plausible of how that would happen.”

These conversations are discouraging when we think about the risks to future generations. We can’t deny the risks (Dario himself has put his p(doom) at 10%–25%). But I, for one, am buoyed by Dario’s confidence that there is a path to build “powerful AI” responsibly, with the necessary safety mechanisms, and to do it quickly enough to distribute the benefits of this technology globally, while strengthening democracy and human rights. I also like his argument that “improvements in mental health, well-being, and education [should] increase democracy, as all three are negatively correlated with support for authoritarian leaders.” I hope this proves to be true. It seems particularly likely that global education will benefit from AI progress, based on my invigorating conversation with Sal Khan, who is making it happen.

Though I find all this fascinating, I also find it overwhelming at times. It can be too much. On Friday, I had a wonderful conversation with Oliver Burkeman (stay tuned!), who says in his new book, Meditations for Mortals, that too many of us are “living inside the news.” We are finite humans who must choose judiciously what to care about. What to focus on. As William James said, “The art of being wise is the art of knowing what to overlook.” So should we ignore the dizzying acceleration of AI? I don’t think so. I think it’s a development that will be transformative enough to our individual and collective futures that it’s worth thinking about now.

In the 90s, I leaned into the dawn of the early internet, starting an early web zine and dating platform called Nerve.com. This led to a fascinating career that has been rewarding in every sense of the word. In the late 2000s, in contrast, I was slow to study and engage with the early social media products, and I came to regret that professionally. As a parent, meanwhile, as I discussed recently with Jonathan Haidt, I was too permissive with tech access for my kids. I have therefore resolved to try to be a better student of the latest tech revolution. Here are a few ways that the trajectory of AI technology is influencing my decisions:

  • I am doubling down on investing in personal health now. If we are likely to see medical breakthroughs that will meaningfully improve our health and perhaps extend our lives in 10-20 years, I would rather be in a physical state that is worth preserving. (This resolution has the advantage of being one I am unlikely to regret no matter what happens).
  • I am collecting ideas more carefully so that I can populate LLMs with my personal interests, paving the way for a better personal assistant. For instance, though I prefer reading physical books, I am scanning them using Readwise, so that I can later access all the highlights. My intention is to extend this practice to everything I read. I have also found that submitting my personal journal to AI analysis often yields useful insights, which positively reinforces my erratic journaling practice.
  • I am always looking for more ways to deploy AI to empower the Next Big Idea Club, encouraging a culture of constant experimentation.
  • I am bullish on investment positions in the largest US tech companies. Though there is no guarantee that the Nasdaq 100 will continue the nearly 20% year-over-year growth rate it has posted over the last 15 years, it seems unlikely to me that the rate of technological progress is about to slow down. A twenty percent annual return means doubling your money roughly every four years, which is an astonishing rate of wealth creation. To state the obvious, that means an eight-fold increase in value in twelve years.
  • Though it’s a small gesture, I am making an effort to support the companies that take AI safety most seriously, and the candidates most likely to protect democracy and human rights in this volatile historical moment.
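For the numerically inclined, the compound-growth arithmetic in the investing bullet above can be sanity-checked in a few lines of Python. This is just a sketch of the math; the 20% annual figure is the post's hypothetical, not a forecast:

```python
def growth_multiple(annual_return: float, years: int) -> float:
    """Multiple on an initial investment after compounding annually."""
    return (1 + annual_return) ** years

# A 20% annual return doubles your money roughly every four years...
print(round(growth_multiple(0.20, 4), 2))   # → 2.07
# ...which compounds to roughly eight-fold over twelve years.
print(round(growth_multiple(0.20, 12), 2))  # → 8.92
```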

What do you think? I am curious to hear how you all are processing these developments.


If you want to dig deeper into the topic, you might enjoy our Spotify playlist featuring all of our conversations about AI. If you would like a primer, this conversation with Cade Metz about the history of AI, where it all began, is a great place to start.

