Google DeepMind’s John Jumper on what AlphaFold’s Nobel win means, how AI is impacting science and when we’ll achieve AGI

Welcome back to LinkedIn News Tech Stack, which brings you news, insights and trends involving the founders, investors and companies on the cutting edge of technology, by Tech Editor Tanya Dua. You can check out our previous editions here.

First, catch up on the latest edition of my weekly series VC Wednesdays, where Forum Ventures' Jonah Midanik discusses the new investing frontiers the rise of AI is opening up, why he’s bullish on AI agents and why he advises his portfolio companies to “code nothing” in the early stages.

Pitch me the interesting investors, founders, ideas and companies powering emerging technologies like AI to reach the inboxes of nearly 1 million subscribers plus thousands more on LinkedIn. Follow me for other tech updates. And click 'Subscribe' to be notified of future editions.

A deep dive into one big theme or news story every week.

Google DeepMind has become one of the world’s pioneering AI labs — its groundbreaking work on protein folding was recently recognized with the Nobel Prize in Chemistry.

John Jumper, a key researcher who led the development of the AlphaFold model, was one of three recipients of the prize, alongside DeepMind chief executive Demis Hassabis and University of Washington biochemist David Baker.

In this exclusive interview, Jumper reflects on AlphaFold's success, what the future holds for AI and its broader applications, when we’re likely to achieve AGI (artificial general intelligence), and more.

Tanya Dua: How has winning the Nobel Prize impacted your personal and professional life?

John Jumper: It's still sinking in, to be honest. We thought we had a small chance, but it was still shocking. It’s so humbling to be recognized, especially when you consider the many groundbreaking discoveries that haven't received this recognition yet — like advances in genetic sequencing, for example. I’m really excited because it’s a moment that says AI — and AI for science especially — matters, and what scientists do matters. But what's most rewarding is seeing how others are using AlphaFold to make discoveries. Just a day or two ago, I saw an article about research on sperm-egg fertilization that used AlphaFold, and we weren’t involved at all. That’s the real impact — it’s now being used by others, and I think the Nobel acknowledges that. I very much hope that we ultimately look back at this as just the first of many, in terms of how AI impacts science and how it’ll let us do things we couldn't before.

Dua: What initially inspired you to take on the problem of protein folding?

Jumper: The work that was going on at DeepMind actually started before I joined. Personally, I previously worked at a company that had done incredible research building custom computers to simulate how proteins move, and had this incredible hardware for doing it. Then I went to graduate school, and I didn’t have that anymore. It was back to this computer under my desk. That’s when I started thinking about whether we could use algorithms, which we didn’t even call machine learning at the time — it was called statistical physics — to understand scientific problems. It was a necessity creating opportunity. It’s also incredibly difficult and expensive to get a crystal structure, but important enough that scientists have done it over 200,000 times and deposited the structures. So it was an intersection of both a valuable problem and abundant data to really enable AI training, which made it a wonderful problem to try to solve.


Dua: What were the biggest challenges in developing AlphaFold, and how did you overcome them?

Jumper: People spend a lot of time talking about data. Of course, in some ways, we had a great dataset, but in other ways, a really small dataset. We had about 140,000 structures at the time, and that’s really small. Everyone had the same data — there was no data advantage. So the question was, how do you get these systems to learn more from the data you have? One of the biggest challenges was that off-the-shelf machine learning really didn't do it. We had to build a new type of machine learning at the intersection of proteins and AI, and that was the transformative difference. We had to rebuild our machine learning so it could learn much more efficiently off this precious data that we had. Another challenge that we spent quite a lot of time working specifically with the compiler team on was how to make it run well and be more efficient. The kind of computer science support that we would get in optimizing these models and training these models, that was absolutely enabling.

Dua: What do you think are the biggest challenges in translating AlphaFold’s insights into clinical applications?

Jumper: One of the biggest challenges is that you get orders of magnitude better at some tasks within the pipeline, but there are others you don’t improve. Are you going to change what you do to lean into those strengths? To take a biological example, it’s helped us get incredibly good at sequencing DNA, but it’s not clear how it would help with pharmacokinetics. It’s a normal challenge for new technologies entering an industry — some tasks improve dramatically, and others don’t.

Dua: What applications using AlphaFold have surprised you the most? Are we at a point where it’s driving ROI?

Jumper: One example that stands out is research from Feng Zhang’s lab at MIT, where they were working on how to get proteins into cells in a targeted manner — contractile injection systems. Basically, they had no idea how to modify this natural protein, and then they looked at an AlphaFold prediction and saw, "oh, this is how it recognizes what it’s doing." They swapped that out with a designed protein, and they were able to deliver something like [green fluorescent protein] into specific cells in a mouse brain by repurposing that system. We’ve also seen people using AlphaFold for vaccine design. None of these applications are solved by AlphaFold alone. But I often say that, on a good day, we’ve made structural biology 5-10% faster. So we’re already seeing broad societal ROI from these types of investments. In terms of the broader ROI, are we feeding into the scientific process that enables scientists to do things that they couldn't before? We're already seeing that. But I'm not looking to work on AlphaFold for the rest of my career and come out with AlphaFold 12. I really want to go off and find the next problem that we haven't solved. How do we think about, for example, pathways in the cell?

Dua: Shifting gears, when do you think we’ll see AI agents reliably act on our behalf?

Jumper: That’s really not my area of expertise. But we are seeing rapidly increasing base model capabilities and these models getting a lot better and more reliable in more areas. I tend to think of agents — I would prefer to call them systems, I wish we didn't call them agents — as systems or AI in the loop, helping with other tasks. We're seeing them deployed in applications in various ways. But when will they be booking you flights to see the Jets play? I don't know. I'm still very excited about the trajectory of model progress that we've been seeing, but there are a lot of really interesting and complex problems in capability, security and auditability that obviously we're all also thinking about internally.

Dua: A lot of current AI architectures are based on the transformer model. Do you think there’s another fundamentally different architecture coming soon?

Jumper: It's hard to say for certain. With AlphaFold, we of course used transformer-based ideas. We used attention ideas. But it's not simply a transformer. If you take a basic transformer, it's not nearly at the performance of AlphaFold. On the protein structure problem, we found enormous benefits from specializing the architecture to our problem. So, it's one of the interesting questions to resolve and I think we'll see some variants. It's not about transformer or not transformer, but about the details within as well.

Dua: On a broader level, are there any tasks you think AI will never be able to perform?

Jumper: Closer to my area of expertise, people will say: “Can't we just predict the results of a clinical trial and proceed with the candidates the neural network says will do great?” Yet we have very, very little data, because ultimately we send very few things through clinical trials. So I don't think AI can just straight up learn that; it has to develop reasoning capabilities. Without reasoning, of course, you can't do much. But as these models learn to reason more and more, we can see how far they will take us.

Dua: Do you believe AI can ever be truly creative, or does it always need human input for inspiration?

Jumper: Creativity is such a continuum, and AI will force us to acknowledge that. The meme that AI only regurgitates, or only recombines the examples it has seen, doesn’t mean it’s any less creative or that there’s less space for humans to be creative. Some people say sitcoms are creative, and some say they’re not as creative as a particular transformative work of literature. People initially doubted AlphaFold’s ability to predict novel protein folds, but it’s proven capable. So as these models become more capable, we'll see their ability to be creative expand outward, and that continual progress will keep pushing out the frontier of novelty these models can handle. It’s a question of degree, not a qualitative chasm.

Dua: What ethical concerns do you have about the widespread use of AI in biological research and beyond?

Jumper: I'll distinguish two things. One, in terms of ethics, in structural biology, we are not dealing with patient data within our work. It’s mostly about what's common between people, not about individuals. But on a larger point, we do think a lot about safety and want our work to make the world a better place. Before we released AlphaFold 2, we talked to over 30 biosecurity experts to understand the risks and benefits. The conclusion was that it was safe, that it was a good thing to release it in the way that we did. We take a very active and release-oriented approach to safety, and think also about talking to policymakers and others to give them the information they need to make the right decisions. We're also working on making models more factual, but at the same time it’s very important that people learn to assign the right amounts of trust and use these tools well within the narrow areas of science. More widely, we think about misinformation. It's something that we're always trying to figure out — what is the most responsible way to deploy our models and what are the most responsible post-training mitigations. It's something that Google is doing a relatively good job on, but it's certainly an area in which we're always trying to improve.

Dua: What does AGI (Artificial General Intelligence) mean to you, and do you think we’ll get there?

Jumper: AGI is when it is no longer as effective for me to do my job as it is to have a machine. It's when the next versions of AlphaFold and the next bits of research are done as well by a machine as they are by me. It’s when it's as effective as I am, and that will be about engaging in long-term reasoning, planning and research. We must eventually get there. What's the argument that we can't? Whether it's 100 years, 1000 years, or five years, that I don't know.

Here’s where we bring you up to speed with the latest advancements from the world of AI.

  • Apple debuted many Apple Intelligence features in beta on Wednesday, including an image generator and ChatGPT integration. Investors are betting the new tools — available only on the latest iPhones — will encourage upgrades. Meanwhile, Apple is set to release new MacBook Airs with faster chips for running AI in early 2025, Bloomberg reports, citing anonymous sources. But internally, some see the company as "more than two years behind the industry leaders.” The company's research showed that OpenAI's ChatGPT was 25% more accurate than Siri, according to Bloomberg. Still, it has the advantage of being able to put Apple Intelligence on "a massive base of devices" and the means to "develop, hire or acquire its way into the top tier."
  • AI startup Anthropic released upgraded versions of its Claude models on Tuesday, and a new tool called “computer use,” which can scan a user’s screen to carry out various tasks. The function has been rolled out in public beta, with the ability to click buttons, type text and move a cursor. The product advances Anthropic’s offerings in so-called AI agents, which are designed to perform more complex tasks and boost productivity. But as Bloomberg notes, such tools could potentially “raise the stakes for errors,” given they act on users’ behalf.
  • IBM has launched the latest iteration of its generative AI models for businesses, which the company claims match or outperform similarly sized models on enterprise and academic tasks. IBM has said its technology and consulting businesses for generative AI are worth $2 billion, and that it expects the new, open-source Granite 3.0 large language models to support clients in areas including customer service, IT automation and cybersecurity.
  • Perplexity has entered fundraising talks with the hope of taking its valuation to at least $8 billion — up from $3 billion this summer and around $500 million at the start of the year, The Wall Street Journal reports, citing anonymous sources. The two-year-old company, which just announced a roster of new AI tools aimed at enterprise customers, is looking to raise roughly $500 million in a test of investors' appetite for "buzzy AI startups showing signs of market traction," per the Journal. Meanwhile, News Corp.-owned publishers have filed a lawsuit accusing Perplexity of "freeriding" on their content, or using it to train large language models without permission.
  • Meanwhile, OpenAI and LinkedIn parent Microsoft are giving up to $10 million to a handful of media organizations to deploy AI tools in their work. Newsday, The Minnesota Star Tribune, The Philadelphia Inquirer, Chicago Public Media and The Seattle Times are the first recipients of the funding, which will be in both cash and "software and enterprise credits." The initiative is a collaboration between the two tech outfits and The Lenfest Institute for Journalism, which supports local reporting. OpenAI and Microsoft are simultaneously facing a number of lawsuits from other media outlets over AI use. Speaking of OpenAI, the company is dismantling a team that advised it on its own ability to handle AI — the latest scuttling of internal groups and positions focused on safety issues around the powerful new technology.
  • Qualcomm has been given a 60-day notice by Arm that threatens its ability to use Arm's intellectual property to design chips. The feud "threatens to roil the smartphone and personal computer markets,” Bloomberg reports, citing a document. If the cancellation happens, Qualcomm — which sells hundreds of millions of processors annually — may need to stop selling products based on Arm's designs. The two companies have been in a protracted legal battle, with the U.K.'s Arm suing its U.S. customer in 2022. Arm is trying to “strong-arm a longtime partner," said a Qualcomm spokesperson.
  • ICYMI: TSMC beat profit expectations for the third quarter and raised its sales forecast for the current quarter, lifting shares of U.S. chipmakers along with its own in trading last week. The results mark what The Wall Street Journal describes as “whiplash for investors” after ASML, a semiconductor hardware maker and chip sector bellwether, earlier this week reported half the orders analysts expected in its last quarter. The semiconductor industry is navigating an uneven landscape amid feverish demand for artificial intelligence technology but lackluster orders from customers in other sectors, such as automotive.

Here’s a list of other notable AI developments from this week:

  • Mira Murati, OpenAI’s former chief technology officer who recently stepped down from her role, is raising funds from venture capitalists for a new AI startup, Reuters reports. The new company aims to build AI products based on proprietary models; it is not clear if Murati will assume the CEO role at the new venture.
  • Honeywell has signed a deal with Google to bring its generative AI, including the Gemini large language model and Vertex, Google Cloud’s AI platform, to the industrial sector. See Honeywell chairman and CEO Vimal Kapur’s LinkedIn post for more.
  • Legal tech startup Genie AI, which helps users draft and revise legal documents, has raised $17.8 million in Series A funding from GV (Google Ventures) and Khosla Ventures.
  • Lightspeed investor and entrepreneur Michael Mignano has launched a new AI company called Oboe, which has raised $4 million in seed funding to help people learn more efficiently, effectively and affordably. See Mignano’s LinkedIn post here for more.

A rotating section where we share key, thought-provoking insights you can’t miss. This week, we’re sharing takeaways from the ground at Reid Hoffman’s Masters of Scale conference.

On day one at the Masters of Scale Summit in San Francisco this week, the topic most top of mind, unsurprisingly, was AI. But it was hardly the only theme that attendees on the ground were buzzing about. Here are the top highlights:

  • AI as ‘digital species’: Microsoft AI’s CEO Mustafa Suleyman proposed a new way of making sense of AI, framing AI models as "a new digital species," and emphasizing the unprecedented nature of their capabilities. In his fireside chat with Hoffman, he drew parallels between AI models and past technological marvels, noting how they could eventually see, hear and even act on our behalf. Suleyman argued that we should embrace their potential for creativity and flexibility. However, he also warned about the risks of unchecked autonomy, suggesting that 2025 may bring new developments in this space. As he put it, "we need to figure out where the boundary on that learning is." Despite concerns, he said he remains optimistic, stating that well-designed AIs could "help us interact with the very best of ourselves."

  • Why we need to stop demonizing success: In a discussion with AOL co-founder and investor Steve Case and Vox Media’s Preet Bharara, Maryland Gov. Wes Moore stressed the need to stop criticizing success and billionaires, arguing that true policy failure lies in the number of people living in poverty, not the existence of billionaires. He called for a shift in focus, advocating for policies that increase opportunities for entrepreneurs and small businesses, particularly those who lack access to early-stage capital. Moore stressed the importance of democratizing economic growth, ensuring that success isn't limited by one's network or resources. He also warned against demonizing elected officials, as this could deter quality candidates from public service, urging a balance between celebrating success and holding leaders accountable. "Not everybody has the same friends," he said, pointing to the need for equal access to opportunity.
  • The ‘instigator’ thesis: In his end-of-the-day fireside chat with WaitWhat CEO Jeff Berman, veteran investor Vinod Khosla discussed his “instigator thesis,” emphasizing the power of one pioneering entrepreneur to drive transformative change in industries. Despite his frequent quibbles with Elon Musk, he credited the role Musk has played in accelerating the shift to electric vehicles, surpassing government forecasts and forcing legacy automakers to follow suit. Khosla argued that large companies are good at incremental improvements, but true innovation comes from disruptors — small, bold players who challenge the status quo. Whether in retail with Amazon, transportation with Uber, or healthcare with AI-driven drug discovery, Khosla believes that groundbreaking innovation rarely originates from big institutions, but rather from visionary instigators who show the way. As he put it, "That's what instigators do."

Here’s keeping tabs on key executives on the move and other big pivots in the tech industry. Please send me personnel moves within emerging tech.

As always, thanks for reading. Please share Tech Stack if you like it! And if you have any news tips, find me on InMail.


