On AI and Sausage Making
Once again, Google and Microsoft are battling for the AI spotlight – this time with news around their offerings for developers and the enterprise. These are decidedly less sexy markets – you won’t find breathless reports about the death of Google search this time around – but they’re far more consequential, given their potential reach across the entire technology ecosystem.
Highlighting that consequence is Casey Newton’s recent scoop detailing layoffs impacting Microsoft’s “entire ethics and society team within the artificial intelligence organization.” This team was responsible for thinking independently about how Microsoft’s use of AI might create unintended negative consequences in the world. While the company continues to tout its investment in responsible AI (as does every firm looking to make a profit in the field), Casey’s reporting raises serious questions, particularly given the Valley’s history of ignoring inconvenient truths.
In leaked audio that Casey reviewed, John Montgomery, Microsoft corporate vice president of AI, explains why the team was being disbanded: “The pressure from [CTO] Kevin [Scott] and [CEO] Satya [Nadella] is very very high to take these most recent openAI models and the ones that come after them and move them into customers hands at a very high speed.” Pressed by a staffer as to whether he’d reconsider, Montgomery responded: “I don’t think I will…Cause unfortunately the pressures remain the same. You don’t have the view that I have, and probably you can be thankful for that. There’s a lot of stuff being ground up into the sausage.”
A lot of stuff in the sausage, indeed. Montgomery was no doubt alluding to the idiom of seeing “how the sausage gets made” – a bloody mess involving parts of the animal most of us prefer to ignore (much less eat).
It may not be pretty, but it’s essential that society understand what’s going into the AI sausage – as well as the decision-making process behind how it gets made. And it’s also essential that companies making that sausage have internal controls independent of the processes (and payrolls) that favor profit and corporate advantage. From what I can tell from Casey’s reporting, it looks like that is no longer the case at Microsoft. The same seems to be true at Google, which famously mishandled the resignation/firing of Timnit Gebru, then the leader of its independent AI oversight team.
Losing independent oversight of corporate actors is scary, because we’ve been here before – over and over again. Remember the Cambridge Analytica scandal? For a brief moment after that mess, the Valley seemed united in realizing that rushing powerful, addictive, and at-scale technologies into the maws of powerful market forces might have some…unintended consequences. Directly after Cambridge, vocal critics Tristan Harris and Aza Raskin set up the well-funded Center for Humane Tech, and Facebook made a slew of promises, including “researching the impact of role of social media in elections, as well as democracy more generally.” It then set up its Oversight Board, the independence of which remains…questionable.
One of the most valuable lessons of Cambridge was a more general reassessment of risk as it relates to tech’s unintended consequences. “Unintended consequences can’t be eliminated, but we can get better at considering and mitigating them,” writes Rachel Botsman in a 2021 Wired piece, “Tech Leaders Can Do More to Avoid Unintended Consequences.” Botsman was particularly concerned with the tech industry’s obsession with speed. “Speed is the enemy of trust,” she writes. “To make informed decisions about which products, services, people, and information deserve our trust, we need a bit of friction to slow us down.” She then quotes CHT co-founder Raskin: “If you can’t determine the impacts of the technology you’re about to unleash, it’s a sign you shouldn’t do it.”
If the companies most capable of unleashing AI aren’t identifying and fully exploring the unintended consequences of putting AI into everything, we’re speeding headlong into the same trap we fell into with Cambridge.
Which brings me to the statement at the top of this post – that Microsoft and Google’s enterprise and developer offerings are far more consequential than whether Bing steals a point or two of share from Google search. By offering ever-more powerful large-language models to developers at scale, both companies are unleashing a poorly considered and mostly unregulated technology to hundreds of thousands of developers and entrepreneurs, and by extension, to nearly every business on earth. It’s a gold rush mentality, the very same approach that gave us surveillance capitalism, Cambridge Analytica, and by extension, a widespread erosion of trust in democratic institutions.
Do we really want to run that play again?