Keynote Conversations from the 2024 AI for Good Global Summit

Divergent Perspectives from AI Pioneers Sam Altman and Geoffrey Hinton

Two heavyweights in the world of artificial intelligence offered contrasting views at the recent AI for Good Global Summit in a pair of keynote conversations hosted by Nicholas Thompson of The Atlantic.

Sam Altman, the CEO of OpenAI, is perhaps the most recognizable tech executive in the space, especially after his public falling out with the board of the organization he co-founded in 2015. Geoffrey Hinton, often referred to as the “godfather of AI,” brings a historical perspective to the discussion, with expertise in the field dating back to the 1970s.

While Altman has a reputation for brashness and Hinton is known for his recent calls to approach the technology with caution, both men were measured in their statements, despite occasional goading from the moderator. In the end, both offered compelling insights into the future of AI, each addressing a different set of issues from a different vantage point.

In this article, we break down what they said, where they agree, and how their divergent perspectives yield different approaches to navigating the potential and pitfalls of artificial intelligence, a truly transformative technology.

Sam Altman: Optimism and Practicality in AI Development

In his interview, Sam Altman shared his forward-thinking yet practical perspective on the development and deployment of AI technology. Covering a range of topics from productivity gains to safety measures, Altman aimed to show how AI can be both a transformative force and a carefully managed tool.

Productivity and Governance

Sam Altman underscored AI’s transformative potential, particularly in enhancing productivity across various sectors. He pointed to coding as a field already seeing significant improvements:

I think the area where we’re seeing impact even now is productivity. Software developers are the most commonly cited example… they can just do their work much faster, more effectively.

Altman also highlighted other industries that AI could revolutionize, such as healthcare and education, where access to knowledge and improved efficiency could make significant differences. While not explicitly stated, it was clear that Altman believes the technology’s potential to help people outweighs the risks it may bring. At a minimum, in his view, the march of progress is inevitable, and people must adapt.

Language Equity and Model Training

Addressing concerns about language equity in AI, Altman discussed OpenAI’s progress in making GPT-4o proficient in a broader array of languages. He emphasized the goal of inclusivity, stating:

One of the things that we’re really pleased with GPT-4o… is that it is very good at a much, much wider variety of languages.

Concerns about the ability of historically excluded groups to access and utilize the technology will remain, but knowing these challenges are being addressed by the biggest names in the industry is encouraging.

Synthetic Data and Interpretability

Altman tackled the challenges of training AI models with synthetic data, stressing the importance of data quality:

There is low quality synthetic data, there’s low quality human data. And as long as we can find enough quality data to train our models… I think that’s okay.

On the critical issue of interpretability, he drew a parallel with human cognition:

Well, we don’t understand what’s happening in your brain at a neuron by neuron level… There are other ways to understand the system besides understanding at this [deep] level.

This willingness to proceed without a complete, neuron-level understanding of how the models work is certainly an area where Altman’s take is more bullish than that of his peers.

Safety and Regulation

Emphasizing the need for a balanced approach to AI development, Altman advocated for integrating safety measures with advancements. He mentioned OpenAI’s new safety and security committee and the importance of iterative deployment:

We think that’s important; society does not work on a whiteboard, in theory… it is not a static thing. Society changes and technology changes and [with our approach] there is this real coevolution.

Time will tell if this integrated approach will provide the oversight required to manage the risks and challenges alongside the benefits and breakthroughs.

Geoffrey Hinton: A Cautionary Perspective on AI’s Future

In his keynote interview, Geoffrey Hinton’s long history with the technology was on full display. He opened by sharing the realization that transformed his thinking: digital computation holds advantages that the brain’s analog processing can never match. This epiphany significantly shifted his view of AI’s potential and risks, and it framed much of his talk:

I became acutely aware of the dangers of AI, the existential threat, at the beginning of 2023… I had spent 50 years thinking that if we could only make [these models] more like the brain, it will be better. I finally realized that they have something the brain can never have… they can share information efficiently.

It is precisely this digital nature that allows rapid and incremental tweaking, sharing, and evolving, making the technology both exciting and potentially dangerous.

Subjective Experience and Intelligence

In an attempt to bridge the technical and philosophical, Thompson asked Hinton if he believed “bots” could have subjective experiences. For the record, earlier in the Summit, Altman had said it was not yet possible.

Hinton replied with a provocative claim that AI systems could possess subjective experiences, challenging conventional views. He explained:

Suppose I have a multimodal Chatbot… I put a prism in front of his lens… and it says, ‘I had the subjective experience. So it was off to one side.’ I think it would be using the phrase subjective experience in exactly the way we use it.

This may feel like an overly academic debate to some, but behind it lies the evolving standard for what would constitute artificial general intelligence (AGI): simply put, the point at which a machine is capable of any task a human can perform.

Thompson expressed his surprise at Hinton’s take:

Wow. All right. You’re the first person to have argued to me about this.

Sub-Goals and Control

Hinton’s primary concern is AI systems developing sub-goals to achieve their primary objectives, potentially leading to dangerous outcomes. He warned:

As soon as you have a system that can create its own sub-goals, there’s a particular sub-goal, that’s very helpful. And that sub-goal is get more control.

Whereas Altman seems to take an ends-justify-the-means approach, Hinton remains very concerned about how we get to certain outputs, since those processes can evolve into things we never intended and don’t want.

Government Regulation

In light of these concerns, Hinton proposed robust regulation to ensure AI development prioritizes safety. He suggested allocating comparable resources to both capabilities and safety, akin to environmental regulations.

I think the government, if it can, should insist that more resources should be put into safety.

AI for Good and Inequality

Finally, while acknowledging AI’s potential to revolutionize healthcare and education, Hinton expressed concerns about increasing wealth disparity.

It’s going to create a lot of wealth… I don’t think it’s going to go to poor people, I think it’s going to go to rich people. So I think it’s going to increase the gap between rich and poor.

Only time will tell if AI will be a great equalizer, as Altman suggests, or if it will increase disparity.

Points of Agreement and Divergence

There is increased focus on the supposed divide between tech optimists and doomsayers when it comes to artificial intelligence. In reality, both sides are acutely aware of the opportunities and risks associated with the technology.

Shared Concerns

Both Altman and Hinton agree on AI’s vast potential and the corresponding challenges. They emphasize the importance of safety and responsible governance. For example, Altman stated, “Safety is everybody’s responsibility,” while Hinton advocated for significant resource allocation towards safety. In practice, each may propose different approaches, but neither advocates for a development free-for-all.

Diverging Perspectives

Despite commonalities, their views diverge significantly in their overall outlook and approach. Altman maintains a cautiously optimistic view, focusing on practical steps and empirical development. He believes in the gradual integration of AI advancements, saying, “The best thing for us to do is just show, not tell.” Conversely, Hinton’s approach is more cautionary, emphasizing existential risks and advocating for stringent regulations. He expressed a fundamental concern: “They are going to become more intelligent than us sooner than I thought.”

Conclusion

Artificial intelligence is going to continue to develop at a rapid pace. We are fortunate to have diverse voices like those of Altman and Hinton to both inspire us and remind us to be cautious. Their discussions underscore the necessity for a balanced approach, integrating technological advancements with robust safety measures and ethical considerations. By appreciating their differing perspectives, we can better navigate the complexities and harness the full potential of AI for the global good.

Learn More

Listen to Nicholas Thompson’s full interviews with Sam Altman and Geoffrey Hinton on the AI for Good YouTube Channel, where you can also follow other sessions from the Global Summit.


SDGCounting is a program of StartingUpGood that tracks efforts to count and measure the success of the SDGs. Follow us on social media.

For the latest on innovative entrepreneurship and social enterprise, check out StartingUpGood on Twitter/X and LinkedIn.


Disclaimer: Generative AI tools such as OpenAI’s GPT and Google’s Gemini were used in the creation of this article to assist with summarization and proofreading.

