Are AI companies prioritizing speed over responsible deployment?
Turing Award winners Andrew Barto and Richard Sutton have raised alarms about the rapid release of AI products without thorough testing, comparing it to “building a bridge and testing it by having people use it.”
This raises critical questions: What does it mean to test AI responsibly? If an AI system appears to work, is that enough? How do we define ‘working’—and for whom?
As Cansu Canca, Ph.D., Director of the Responsible AI Practice at Northeastern University’s Institute for Experiential AI, explains, the key is shifting focus from AI itself to what we are actually concerned about:
“Instead of focusing on the thing, which is AI, we should be focusing on, what is your worry? What is your concern? It doesn’t really matter whether this is an LLM, a predictive model or an unreliable search engine that you’re using – we ask what are you concerned about? The AI part should inform how we formulate the questions, but if we just focus on this AI that we are chasing, it’s hard to explain what … is really the risk that we are concerned about and trying to avoid.”
Whether companies are using LLMs, predictive models, or AI-driven search, they need to anticipate unexpected changes, side effects, and gaps in communication. If AI is summarizing emails and responding automatically, what nuances are getting lost? Where does human oversight remain essential?
We help industry leaders go beyond “does it work?” to ask: Has it been tested? Is it reliable? What risks are you introducing, and what value might you be losing?
https://lnkd.in/gB8JqJNW
#ResponsibleAI #AIethics #ExperientialAI #AIsafety