Some top voices in technology are calling for a “pause” in public updates to AI systems. They point out that AI research should continue, but that public-facing systems should scale back until society better understands the risks.
This is a terrific conversation to have, but I don't think a pause is the right approach. We've seen how fast these systems improve, so it's unclear that the cost to innovation would buy much of a benefit. On top of that, there are some really difficult questions that the letter doesn't do much to address.
- What would a pause accomplish? The letter says a pause would allow society to feel confident that the effects of AI systems “will be positive and their risks will be manageable.” Right now, U.S. states and regulators are still struggling with the negative effects of social media, a technology that has been around for over 20 years. If that question remains open, it's hard to imagine regulators doing better with AI systems that are just being released. It might make intuitive sense to take your foot off the gas when you're heading toward a cliff, but innovation isn't like driving. Sometimes the best way out is through. Society and regulators will have a much easier time managing risks as the systems are actually developed; it's far more difficult to regulate theoretical risks. Regulators are rarely as creative as those they need to regulate, so regulation that develops in parallel with the technology is the best outcome. The more innovative US companies are, the better our regulations will become.
- When would be the right time to un-pause? The letter recommends a pause of at least six months. What could be accomplished in six months? And if the pause runs longer, what criteria would you use to un-pause? Would you set up a regulatory agency? Have a committee create goals and guardrails for future development? Why would you need to pause development to accomplish any of these goals?
- Which AI systems should be paused? The letter uses GPT-4 as its example, but that isn't the most harmful kind of system out there. Some harm might come from systems that create deepfake videos or fake images. Would the pause include unreleased systems such as Adobe Firefly? Would you have to clearly mark when images were created using digital brushes as opposed to text prompts alone? Is that a distinction worth making?
- How do we pause systems outside the United States? We've seen from the recent TikTok hearings that not all popular tech tools come from US companies. Why would foreign companies want to pause their products while US regulators work through these issues? And if we threaten to ban those products, what would that mean for our own advancement and for the tools we'd like to promote abroad? If US regulators had paused Instagram because it was harmful to teenagers, that would likely have only increased the adoption of TikTok, leaving regulators in the difficult position of having to ban other international companies that filled the gap.
As anyone will tell you, it's always easier to challenge ideas than to come up with your own. So this letter is an important first step. Even though I think a pause is the wrong approach, the letter does raise a number of key concerns.
These AI systems can cause a lot of problems. I'm particularly concerned about their potential to spread misinformation. It's one thing to share fake news on Facebook; it's an entirely new and dangerous thing to chat with a convincing humanBot. It might just try to sell you vitamins, but the same technology can be used for much worse. It's possible to create a fascistBot or cultBot that never sleeps and could chat with hundreds of millions of people 24/7. The last decade has shown that many people yearn for these causes and connections.
If I thought a pause would affect these bad actors, I would support it without hesitation. But we need “white hat” actors to keep learning from these public-facing systems.
It's also far from clear that the increased productivity from these AI systems will eliminate more jobs than it creates. It will probably be years before that question is even properly researched, and pausing now won't get researchers any closer to answering it.
It seems the best way forward is not slowing down but speeding up. We need more discussion and a greater understanding of the risks, which means that many more people need to become familiar with the challenges of data ethics. The solutions to these challenges won't be decided in months but in decades.