Google DeepMind’s John Jumper on what AlphaFold’s Nobel win means, how AI is impacting science and when we’ll achieve AGI
LinkedIn News
Bringing you the business news and insights you need to stay informed.
Welcome back to LinkedIn News Tech Stack, which brings you news, insights and trends involving the founders, investors and companies on the cutting edge of technology, by Tech Editor Tanya Dua. You can check out our previous editions here.
First, catch up on the latest edition of my weekly series VC Wednesdays, where Forum Ventures' Jonah Midanik discusses the new investing frontiers the rise of AI is opening up, why he's bullish on AI agents and why he advises his portfolio companies to "code nothing" in the early stages.
Pitch me the interesting investors, founders, ideas and companies powering emerging technologies like AI to reach the inboxes of nearly 1 million subscribers plus thousands more on LinkedIn. Follow me for other tech updates. And click 'Subscribe' to be notified of future editions.
A deep dive into one big theme or news story every week.
Google DeepMind has become one of the world’s pioneering AI labs — its groundbreaking work on protein folding recently awarded with the Nobel Prize in Chemistry.
John Jumper, a key researcher who led the development of the AlphaFold model, was one of the three recipients of the prize, alongside DeepMind chief executive Demis Hassabis and University of Washington biochemist David Baker.
In this exclusive interview, Jumper reflects on AlphaFold's success, what the future holds for AI and its broader applications, when we’re likely to achieve AGI (artificial general intelligence), and more.
Tanya Dua: How has winning the Nobel Prize impacted your personal and professional life?
John Jumper: It's still sinking in, to be honest. We thought we had a small chance, but it was still shocking. It’s so humbling to be recognized, especially when you consider the many groundbreaking discoveries that haven't received this recognition yet — like advances in genetic sequencing, for example. I’m really excited because it’s a moment that says AI — and AI for science especially — matters, and what scientists do matters. But what's most rewarding is seeing how others are using AlphaFold to make discoveries. Just a day or two ago, I saw an article about research on sperm-egg fertilization that used AlphaFold, and we weren’t involved at all. That’s the real impact — it’s now being used by others, and I think the Nobel acknowledges that. I very much hope that we ultimately look back at this as just the first of many, in terms of how AI impacts science and how it’ll let us do things we couldn't before.
Dua: What initially inspired you to take on the problem of protein folding?
Jumper: The work that was going on at DeepMind actually started before I joined. Personally, I previously worked at a company that had done incredible research building custom computers to simulate how proteins move, and had this remarkable hardware for doing it. Then I went to graduate school, and I didn't have that anymore. It was back to this computer under my desk. That's when I started thinking about whether we could use algorithms, which we didn't even call machine learning at the time — it was called statistical physics — to understand scientific problems. It was a necessity creating opportunity. It's also incredibly difficult and expensive to get a crystal structure, but important enough that scientists have done it over 200,000 times and deposited the results. So it was an intersection of both a valuable problem and abundant data to enable AI training, which made it a wonderful problem to try to solve.
Dua: What were the biggest challenges in developing AlphaFold, and how did you overcome them?
Jumper: People spend a lot of time talking about data. Of course, in some ways, we had a great dataset, but in other ways, a really small dataset. We had about 140,000 structures at the time, and that's really small. Everyone had the same data — there was no data advantage. So the question was, how do you get these systems to learn more from the data you have? One of the biggest challenges was that off-the-shelf machine learning really didn't do it. We had to build a new type of machine learning at the intersection of proteins and AI, and that was the transformative difference. We had to rebuild our machine learning so it could learn much more efficiently from this precious data that we had. Another challenge, which we spent quite a lot of time working on with the compiler team specifically, was how to make it run well and be more efficient. The computer science support we got in optimizing and training these models was absolutely enabling.
Dua: What do you think are the biggest challenges in translating AlphaFold’s insights into clinical applications?
Jumper: One of the biggest challenges is that you get orders of magnitude better at some tasks within the pipeline, but there are others you don't improve. Are you going to change what you do to lean into those strengths? To take a biological example, it's helped us get incredibly good at sequencing DNA, but it's not clear how it would help with pharmacokinetics. It's a normal challenge for new technologies entering an industry — some tasks improve dramatically, and others don't.
Dua: What applications using AlphaFold have surprised you the most? Are we at a point where it’s driving ROI?
Jumper: One example that stands out is research from Feng Zhang's lab at MIT, where they were working on how to get proteins into cells in a targeted manner — contractile injection. Basically, they had no idea how to modify this natural protein, and then they looked at an AlphaFold prediction and saw, "oh, this is how it recognizes what it's doing." They swapped that out with a designed protein, and they were able to deliver something like [green fluorescent protein] into specific cells in a mouse brain by repurposing that system. We've also seen people using AlphaFold for vaccine design. None of these applications are solved by AlphaFold alone. But I often say that, on a good day, we've made structural biology 5-10% faster. So we're already seeing broad societal ROI from these types of investments. In terms of the broader ROI, are we feeding into the scientific process that enables scientists to do things that they couldn't before? We're already seeing that. But I'm not looking to work on AlphaFold for the rest of my career and come out with AlphaFold 12. I really want to go off and find the next problem that we haven't solved. How do we think about, for example, pathways in the cell?
Dua: Shifting gears, when do you think we’ll see AI agents reliably act on our behalf?
Jumper: That's really not my area of expertise. But we are seeing rapidly increasing base model capabilities and these models getting a lot better and more reliable in more areas. I tend to think of agents — I would prefer to call them systems, I wish we didn't call them agents — as systems or AI in the loop, helping with other tasks. We're seeing them deployed in applications in various ways. But when will they be booking you flights to see the Jets play? I don't know. I'm still very excited about the trajectory of model progress that we've been seeing, but there are a lot of really interesting and complex problems in capability, security and auditability that obviously we're all also thinking about internally.
Dua: A lot of current AI architectures are based on the transformer model. Do you think there’s another fundamentally different architecture coming soon?
Jumper: It's hard to say for certain. With AlphaFold, we of course used transformer-based ideas. We used attention ideas. But it's not simply a transformer. If you take a basic transformer, it's not nearly at the performance of AlphaFold. On the protein structure problem, we found enormous benefits from specializing the architecture to our problem. So, it's one of the interesting questions to resolve and I think we'll see some variants. It's not about transformer or not transformer, but about the details within as well.
Dua: On a broader level, are there any tasks you think AI will never be able to perform?
Jumper: Closer to my area of expertise, people will say: "Can't we just predict the results of a clinical trial and proceed only with the candidates the neural network says will do great?" Yet we have very, very little data, because ultimately we send very few things through clinical trials. So I don't think AI can just straight-up learn that; it has to develop reasoning capabilities. Without reasoning, of course, you can't do much. But as these models learn to reason more and more, we'll see how far they can take us.
Dua: Do you believe AI can ever be truly creative, or does it always need human input for inspiration?
Jumper: Creativity is such a continuum that AI will force us to acknowledge that. The meme that AI only regurgitates, or only recombines the examples you feed it, doesn't mean it's any less creative or that there's less space for humans to be creative. Some people say sitcoms are creative, and some say they're not as creative as a particular transformative work of literature. People initially doubted AlphaFold's ability to predict novel protein folds, but it's proven capable. So as these models become more capable, we'll see their ability to be creative expand outward slowly, and that continual progress will keep pushing out the frontier of novelty that these models can handle. It's a question of degree, not a qualitative chasm.
Dua: What ethical concerns do you have about the widespread use of AI in biological research and beyond?
Jumper: I'll distinguish two things. One, in terms of ethics, in structural biology, we are not dealing with patient data within our work. It's mostly about what's common between people, not about individuals. But on a larger point, we do think a lot about safety and want our work to make the world a better place. Before we released AlphaFold 2, we talked to over 30 biosecurity experts to understand the risks and benefits. The conclusion was that it was safe, that it was a good thing to release it in the way that we did. We take a very active and release-oriented approach to safety, and think also about talking to policymakers and others to give them the information they need to make the right decisions. We're also working on making models more factual, but at the same time it's very important that people learn to assign the right amounts of trust and use these tools well within the narrow areas of science. More widely, we think about misinformation. It's something that we're always trying to figure out — what is the most responsible way to deploy our models and what are the most responsible post-training mitigations. It's something that Google is doing a relatively good job on, but it's certainly an area in which we're always trying to improve.
Dua: What does AGI (Artificial General Intelligence) mean to you, and do you think we’ll get there?
Jumper: AGI is when it is no longer as effective for me to do my job as it is to have a machine. It's when the next versions of AlphaFold and the next bits of research are done as well by a machine as they are by me. It’s when it's as effective as I am, and that will be about engaging in long-term reasoning, planning and research. We must eventually get there. What's the argument that we can't? Whether it's 100 years, 1000 years, or five years, that I don't know.
Here’s where we bring you up to speed with the latest advancements from the world of AI.
Here’s a list of other notable AI developments from this week:
A rotating section where we share key, thought-provoking insights you can’t miss. This week, we’re sharing takeaways on the ground at Reid Hoffman ’s Masters of Scale conference.
On day one at the Masters of Scale Summit in San Francisco this week, the topic most top of mind, unsurprisingly, was AI. But it was hardly the only theme that attendees on the ground were buzzing about. Here are the top highlights:
Here’s keeping tabs on key executives on the move and other big pivots in the tech industry. Please send me personnel moves within emerging tech.
As always, thanks for reading. Please share Tech Stack if you like it! And if you have any news tips, find me on InMail.