Has AGI become a status symbol?
https://twitter.com/ilyasut/status/1491554478243258368

Some AI researchers promote luxury ideas that signal how distant their concerns are from everyday worries and how far they can stray from conventional problems. While these luxury ideas confer status among peers, they can have deleterious effects on those with public concerns and should not be adopted, much less sustained, by business leaders.

Consider the self-congratulatory article All Roads Lead to Rome by Eric Jang.[1] In a post-mortem of his job search, Jang explains that he is uninterested in starting his own company because he is “more interested in solving AGI than solving customer problems.” This is the leading edge of AI culture, where distancing oneself from real-world concerns makes one look interesting, while those unwilling to disregard real-world concerns in favor of idiosyncratic personal pursuits look pedestrian by comparison. Consider also Microsoft’s dubious Sparks of Artificial General Intelligence, in which fourteen researchers collaborated on a self-indulgent 155-page tome speculating whether GPT-4 has a spark [sic] of general intelligence.[2] Or OpenAI’s chief scientist tweeting that artificial intelligence may [sic] already be slightly [sic] conscious, where (like “spark”) “may” and “slightly” are doing a lot of heavy lifting.[3]

The aspiration to solve everything instead of something is the ultimate flex. After AlphaGo beat Lee Sedol at Go, Demis Hassabis, the cofounder and CEO of DeepMind, said DeepMind would no longer focus on winning games but on “developing advanced general” solutions. Hassabis added that general solutions “could one day help scientists tackle some of our most complex problems, such as finding new cures for diseases, dramatically reducing energy consumption, or inventing revolutionary new materials.”[4] Since board games have little or no social significance, any yearning to do more is commendable.

However, Hassabis is not interested in solving any one of these problems (e.g., finding new cures for diseases, reducing energy consumption, or inventing new materials) but in solving all of them simultaneously. DeepMind recently released Gato, which the company describes as a “general” solution. Gato can perform over six hundred tasks, including playing video games, organizing objects, captioning pictures, and chatting. One DeepMind researcher even claimed, with respect to AGI, that “The game is over!” Yet none of the six hundred tasks has anything to do with curing disease, reducing energy consumption, or inventing new materials, the very goals DeepMind declared important five years ago. In the pursuit of AGI, no problem is important enough to solve directly.

Marvin Minsky recalled his approach to AI in a 1981 profile in The New Yorker, saying, “I mustn’t tell the machine exactly what to do. That would eliminate the problem.”[5] Focusing on a specific problem has always been a problem in the field of artificial intelligence because AGI aims to recreate the problem-solver ex nihilo, not to solve problems directly. The trouble is that solving a problem requires interpretation, and once the interpretation of a problem begins, one is no longer solving intelligence or designing a solution to all problems.

Solving a problem requires eliminating it through problem framing, domain knowledge, designing and running experiments, analyzing data, collaborating with peers, and being accountable to regulators, customers, and patients. Solving every problem requires none of this. Moreover, isolating oneself from the cares of the world and everyday life in favor of esoteric personal pursuits signals superiority, and such distance from necessity signals status. Ignoring problems is not “infinite courage,” as Jang states in his article. It is sanctimonious.

Yet the importance of a problem statement cannot be overstated. A concise, well-written problem statement lets everyone involved agree on the actual problem being solved. This is especially salient when the problem is ill-defined, because people see different things in the same phenomenon and begin working on what they want the problem to be. Without a problem statement, solutions tend to grow more complex and expand to fill the time allocated for problem-solving, much as work expands to fill the time available for its completion. Call it solution sprawl, akin to the urban sprawl that fills geographic space with no regard for how well the landscape serves its citizenry. Business-minded technical leaders will recognize the conspicuous reason a solve-no-problem business model fails: an all-or-nothing strategy is a feast-or-famine strategy, and most businesses cannot survive on one. A business cannot aspire to solve everything instead of something, nor can it distance itself from necessity.


[1] Eric Jang, “All Roads Lead to Rome: The Machine Learning Job Market in 2022.”

[2] Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, Harsha Nori, Hamid Palangi, Marco Tulio Ribeiro, and Yi Zhang, “Sparks of Artificial General Intelligence: Early Experiments with GPT-4” (2023).

[3] Let’s not forget that Sam Altman posted on Reddit that "agi has been achieved internally" at OpenAI (then changed his mind an hour later).

[4] Demis Hassabis and David Silver, “AlphaGo’s Next Move,” DeepMind, May 27, 2017.

[5] Jeremy Bernstein, “Profiles: Marvin Minsky,” The New Yorker, December 14, 1981, p. 73.

Mykola Rabchevskiy

Owner, Gnosis Engineering, LLC

1y

Sounds something like "the numbers are slightly integers"

Mike Archbold

Artificial General Intelligence

1y

The title alone is kind of mind boggling for me... I got serious about AGI in March of 2007. I decided to pursue what I was passionate about; since then, if there was a status to this, I don't know what it is. The problem has always been having a design that you didn't "cheat to narrow" but could still go the distance to AGI. Maybe the field is in danger of being taken over by poseurs (quick to add, not people mentioned in your editorial). Still kind of a mind blowing headline...