OpenAI Crisis: a final consideration

Fired on a Friday, snapped up by Microsoft on a Monday, then back at the helm of his company by Tuesday evening: Sam Altman has recently lived through ups and downs that few people will ever experience. Like the technologies themselves, the world of AI companies is going through an unusual growing crisis, one that compresses the usual space-time of start-ups. The classic tension between the technical or entrepreneurial profiles of founders and the more public responsibilities imposed by the societal role of these technologies is compounded this time by another problem: divergent objectives within the founding team, and tensions between the foundation and the commercial structure, as well as between the philosophies animating them.

From the outset of the OpenAI project, two philosophies have coexisted. The first is what Americans call Effective Altruism. It reaches beyond AI: it can be found, for example, in the fight against poverty or in the defense of animal welfare. Rather than simply donating to charitable causes, proponents of this theory strive, within a corporate framework, to define objective metrics for the societal impact of their actions. In the context of artificial intelligence, the board of the OpenAI foundation claimed to manage the issues inherent in AI, such as alignment, safety, and the advent of artificial general intelligence (AGI), through rules internal to the company. Such an approach implied slowing the pace of commercial innovation in order to study its consequences. Until Friday, this board saw itself as a council of AI sages. Other OpenAI executives, just as idealistic about the applications of this technology in healthcare, engineering, and elsewhere, defended Effective Accelerationism: a more enthusiastic technological current that wants change to move faster.

In both cases, note the hypocrisy of quasi-public pretensions, when both objectives require colossal funds, provided in this case by venture capitalists and then by Microsoft. By acting as a think tank and self-regulating foundation while OpenAI became a commercial enterprise over the past two years (I am a customer myself), the company sought to christen a model of corporate governance that was flawed from the outset. Once it passes the billion-dollar mark in annual revenue, OpenAI becomes a company like any other and must abide by the same rules. This is not to say the foundation has no right to exist, but it must continue its existence separately.
As for the opposition between the two approaches to technological change, altruistic or commercial, it must be settled in public debate, with regulators, philosophers, and economists, and not simply among players in the sector.

Sébastien Laye is an economist and entrepreneur, founder of Aslan AI.