AI, Everything, Everywhere, All at Once!
The news is full of AI this and AI that. Yes – we definitely need to talk about how the world we know is changing because of AI. But can all those self-proclaimed AI experts please take a seat? We need to bring policymakers, educators, and actual AI experts to the table to make sure we're doing this right.
We have reached a point where discussing the development and implications of AI technology is crucial for ensuring that the future of this field benefits humanity as a whole. AI is becoming more integrated into our daily lives, and it's vital to address ethical, safety, and social concerns early on. I have been working with AI technology since 2016, and while I believe I can contribute to the discussion, I am not sure whether I can call myself an expert. The technology is moving so fast, and there are so many different applications and types of AI, that it is hard to keep up. These days, everyone seems to be an expert on AI. Just look at your feed – it's full of people who were all about Web3 and crypto yesterday and are suddenly AI professionals today.
About 1,000 technology leaders recently signed an open letter calling on researchers to pause the development of certain large-scale AI models for at least six months, citing "profound risks to society and humanity". While you will find my name on the list of signatories as well, it also includes people who helped shape our world: the co-founder of Apple, the guy who sent astronauts to the International Space Station, as well as Turing Award winners, Nobel laureates, scientists, and AI professionals who have devoted their lives to this field.
So why is everyone so worried? The most powerful AI systems we have today, such as GPT-4, are already capable of generating human-level text, and the development of even more powerful models is accelerating. We are all aware of the potential benefits of AI, but many are concerned about the potential risks and harms that come with such powerful technologies. Is our society ready for AI that can outperform humans in virtually any task? We're all amazed at the accomplishments of AI technology, but I believe most people will answer this question with a resounding "NO".
Here's the bottom line: We need to be proactive when it comes to AI and its implications. We need to bring policymakers, educators, and actual AI experts to the table to make sure we're doing this right. We need to discuss the ethical and safety concerns of AI, as well as the potential benefits of the technology, before it's too late. We need to ensure that our current AI systems are accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.
Even as laws and regulations are being developed and implemented, the world's leading AI labs are racing to build even more powerful systems, sometimes referred to as Artificial General Intelligence (AGI). The pace of progress is so fast that many experts believe AGI could arrive sooner than expected. This is why it is necessary to take a step back, pause the development of certain large-scale AI models, and make sure we are ready for the implications of this technology.
What could these policies look like? The Future of Life Institute has published a document that offers seven recommendations for policymakers to adopt, covering areas such as oversight, auditing, and accountability for powerful AI systems.
The open letter has already had a huge impact, with media coverage and policy action around the world. The public also seems to be on board: a survey of over 20,000 US adults by YouGov America found that 69% of respondents support a temporary pause on training even larger models.
What's going to happen next? Well, I'm no expert – but I think it's time to put our heads together and make sure we're doing this right. You can find the open letter here: https://futureoflife.org/open-letter/pause-giant-ai-experiments/
AI could probably even help us regulate AI.
See also Stephen Hawking's 2014 warning: https://www.bbc.com/news/technology-30290540