Does AI need a watchdog? OpenAI thinks so
Welcome to AI Decoded, Fast Company’s weekly LinkedIn newsletter that breaks down the most important news in the world of AI. I’m Mark Sullivan, a senior writer at Fast Company covering emerging tech, AI, and tech policy.
This week, I’m focusing on the growing push to increase oversight of companies developing generative AI (an idea that’s been endorsed by none other than OpenAI), and the White House’s own push for stronger AI regulations.
If a friend or colleague shared this newsletter with you, you can sign up to receive it every week here. And if you have comments on this issue and/or ideas for future ones, drop me a line at [email protected].
OpenAI calls for an international AI oversight body
In a new blog post, three of OpenAI’s top executives—CEO Sam Altman, president Greg Brockman, and chief scientist Ilya Sutskever—call for the establishment of a global body that would oversee and regulate the development of “superintelligent” AI systems, which they say could present an “existential risk” to humankind. The proposed international organization, which might look something like the International Atomic Energy Agency, would also direct AI companies to share their knowledge and best practices for keeping large AI systems safe and “aligned” with human interests.
The OpenAI execs suggest some interesting steps by which such a global organization might track the pace of AI development: “Any effort above a certain capability (or resources like compute) threshold will need to be subject to an international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security, etc.,” they write in the blog post.
The blog post echoes comments that Altman made during his testimony last week before a Senate subcommittee, where he called on the U.S. government to regulate the burgeoning AI industry, including the work of OpenAI. Cynics would argue that it’s easy for Altman and OpenAI to call for such forms of regulation and oversight at a moment when the U.S. and most other countries’ governments are struggling to understand the technology.
Of course, there’s great disagreement, even in scientific circles, over the long-term “alignment” problem posed by AI systems that are far more intelligent than human beings. Some very smart people, including AI “godfather” Geoffrey Hinton, believe there’s a high risk that superintelligent systems might, in the not-so-distant future, deceive and even destroy human beings. Turing Award winner and Meta chief scientist Yann LeCun, on the other hand, believes that humans will continue to have full control over how AI systems act.
Biden administration keeps pushing toward broad AI policy
The Biden administration on Tuesday released a new set of plans designed to study both the benefits and the risks of generative AI, which is maturing and proliferating far faster than most expected. As part of that effort, the White House is preparing a National Artificial Intelligence Strategy, which calls for input from the public and private sectors to help inform the government’s future regulation of, and investment in, generative AI.
The goal, according to the White House, is a better understanding of everything from the national security implications of generative AI (misinformation, hacking, etc.) to its potential role in addressing climate change.
The White House Office of Science and Technology Policy (OSTP) will also release an updated version of the National AI R&D Strategic Plan (the last update came in 2019), a road map that outlines the federal government’s priorities and goals for investments in AI R&D. The OSTP is asking the public for comment on the plan.
A fake fire at the Pentagon
An image depicting an explosion at the Pentagon, very likely generated using AI, spread rapidly on social media Monday, thanks in part to a tweet from the Russian state media outlet Russia Today. While police were quick to assure the public that no such explosion had taken place, the online furor was enough to cause a brief dip in the U.S. stock market.
The event was over shortly after it started, but it served as another sobering reminder of the tactics that new generative AI technology might enable in 2024. Next year will see major elections in the U.S., the U.K., India, Indonesia, and Russia; an estimated 1 billion people will cast votes. And there’s a good chance that various political actors will try to influence voters using high-tech disinformation created by AI, spread via social media platforms.
AI tools greatly reduce the cost of creating new social content, so a reasonably well-funded political actor (such as the Russian state-affiliated agents that ran ads on Facebook in 2016) can try hundreds of versions of the same image and messaging until it finds the mix that moves voters to action at the polls. Neither Congress nor election oversight bodies in the U.S. have put in place new protections against such AI-generated ads, and many tech companies have laid off the staff whose job it was to police political disinformation.