The race for AI supremacy, or how AGI idealists sacrificed their ideals
Maciej Szczerba
Executive Search. Podcast host of "Past, Present & Future" on YouTube. Besides: "I'm Winston Wolf, I solve problems"
John Emerich Edward Dalberg-Acton, a 19th-century British historian and political philosopher better known as Lord Acton, uttered the now famous maxim: "Power tends to corrupt, and absolute power corrupts absolutely." Does this statement fit today's market situation for producers of large language models in AI?
Parmy Olson, a Bloomberg journalist formerly of Forbes, argues this thesis in her recently published book "Supremacy." And it is not just power that the oligopoly (a Microsoft and Google duopoly?) confers. Investor money corrupts too. Big money.
“Supremacy” is the story of two young idealists who wanted to create Artificial General Intelligence (AGI): Demis Hassabis and Sam Altman.
What is AGI?
There is no single official definition of the term, but in general it refers to artificial intelligence that matches or surpasses human intelligence and can perform the same tasks as humans.
There are thinkers, like Ray Kurzweil, who believe AGI is closer than we think and that we will achieve it within a few years (Kurzweil, it should be said, has been claiming this for many years, and the horizon keeps receding). There are also those, like Arvind Narayanan and Sayash Kapoor, who compare AGI to "snake oil": in 19th-century America, travelling salesmen offered "snake oil" as a cure-all. As you can guess, "snake oil" cured nothing; it was simply a scam. (I will write about their book "AI Snake Oil" in a column next week.)
Back to the main topic. Although Hassabis (British) and Altman (American) were born on opposite sides of the pond, they came from similar families (family-focused middle class) and both were considered whizz kids at a young age. Not without reason. Hassabis was a youth chess champion, and as a young man he defended a novel doctoral thesis on human imagination at University College London (UCL). Altman was a top student at school, and as an 18-year-old he was spotted by Y Combinator founder Paul Graham and admitted as the youngest boot-camp participant. Many years later, Graham handed the management of Y Combinator over to Altman.
Neither wanted an academic career, although Hassabis, at least, was urged toward one by his tutors. They wanted to change the world by creating AGI. Their motivations differed, but both were altruistic. Altman wanted AGI to ensure universal prosperity and an abundance of goods for the world, and to eliminate poverty and hunger. Hassabis wanted to use AGI to understand the mysteries of nature and the universe. Both concluded that they could achieve their goals only through business.
Once the two businesses had grown sufficiently, both founders became picky about institutional investors. Hassabis rejected Meta's offer to invest in his company DeepMind and chose a financially lower offer from Google. Why? Google's Larry Page, with whom Hassabis negotiated personally, agreed to a ban on military use of DeepMind technology and to an ethics and safety agreement. Microsoft invested in OpenAI (formally, for corporate-governance reasons on Microsoft's side, this was structured as a cooperation agreement). Altman went a step further than Hassabis: he pushed for a formal ethics board to oversee OpenAI's activities.
Billions of dollars began flowing to both companies. And soon both idealistic founders began to lose their innocence.
As "Supremacy" suggests, the two tech giants were not really interested in creating AGI. Google was interested in using AI to increase advertising sales (some 80% of Google's multi-billion-dollar business). Microsoft was interested in implementing AI in its flagship desktop products (Windows, Office) and, above all, in using AI to develop its cloud platform, Azure.
All the while, DeepMind kept selling its employees the vision that they would create something great. OpenAI's leading AI scientist, Ilya Sutskever, coined the slogan "Feel the AGI!" for employees, and he himself seems to have believed it.
Does big tech always work on ethically sound projects? Olson cites reports from The Intercept that Google worked on a special search engine for the Chinese market (working title: project "Dragonfly") that would suppress keywords undesirable to the Chinese government, such as "Tiananmen." DeepMind co-founder Mustafa Suleyman recalls how Google refused to create an ethics board at DeepMind. Finally, the two AI ethicists leading the ethics team at Google, Timnit Gebru and Margaret Mitchell, were simply fired, although Google claims to this day that Gebru resigned of her own accord. She had previously been getting messages from HR that she was not cooperative enough.
Was the situation at OpenAI any better in terms of ethics? After all, OpenAI had an ethics board. The attitude toward that body is best illustrated by the events of last November, which I am sure you remember. Altman himself was fired. The reason was the board's view that OpenAI was releasing new versions of ChatGPT too quickly without reviewing the risks the program posed. As we know, Altman fought back, and Satya Nadella, Microsoft's big boss, was, according to the book, furious when he learned of Altman's dismissal. In the end, it was the board members who had voted to fire Altman who were removed, not Altman himself.
The press wrote that the team stood firmly behind Altman. Olson puts forward the thesis that this was not about Altman's genius or the team's faith in his leadership. It was about something far more mundane: many key team members were hoping for a new investment round that would raise the value of their stock and options. Without Altman, they feared their holdings would lose value.
Olson also touches on one important aspect of big companies' neglect of AI ethics. The prevailing engineering view is that it is not about ethics at all, just about math. Engineers are convinced that solving an ethical problem is just another fix in the code: the "math can contain AI" vision. Can it? Life is not a Rubik's Cube. As the authors of "AI Snake Oil" rightly show, this "mathematical" approach to ethics may work in the context of generative AI. But what about predictive AI? There, especially in the context of social problems, there are too many factors to be described mathematically. Models operate on data sets from the past, and just as humans are unpredictable, so are the problems they will create in the future. Again: more on this next week.
The author cites an interesting MIT study: between 2010 and 2021, the share of AI models owned by large companies rose from 11% to 96%. Add to that that in 2023 AI investments amounted to $23 billion, compared with $5 billion the year before.
So, if this research is to be believed, we are dealing with a consolidating oligopoly in the AI market. Peter Thiel, one of the best-known VC investors, says a monopoly in technology is a good thing. Adding to the flavor is the fact that Thiel donated generously to Donald Trump's campaign, is friends with Elon Musk, the president-elect's new advisor, and may have a strong informal influence on the new administration's policies.
Finally, there are two interesting topics that “Supremacy” raises.
So-called "effective altruism" has become fashionable in Silicon Valley. This idea, which originated with Julian Huxley (brother of Aldous, the one from “Brave New World”), involves maximizing a person's skills and resources to help those in need not in the here and now, but focusing on helping as much as possible regardless of when it happens. For example, if you graduated from Harvard Law go to the law firm on Wall Street that will give you the most money instead of going to Africa and teaching in a rural school. As a corporate lawyer maximize your earnings and donate them to charities that support schools in Africa. And one more thing. Australian journalist Peter Singer, in his book “Life You Can Save,” argues that the most important thing in ethical business is to focus on future generations, not the "here and now".
These ideas can produce complete distortions. A proponent of effective altruism, the Oxford academic William MacAskill, worked with Sam Bankman-Fried (SBF), architect of one of the biggest financial frauds of recent years and owner of the crypto exchange FTX. The Madoff of technology. Their cooperation was cheered on by Elon Musk, who claims to be an effective altruist himself.
Olson cites a journalist's conversation with SBF in prison: "You were good at ethics." "Well, I had to be" (cynical laugh).
We are dealing with a model of technological capitalism in which big tech does not create new solutions; big tech simply buys the new solutions created by emerging companies. As they say, "Google doesn't move unless it's a billion-dollar business."
Is this, as Olson neatly puts it, a "Faustian bargain"?
One thing is certain. For all the PR about AI ethics, "explainable AI," and "aligned AI," a dogfight under the carpet goes on.