Is the ongoing OpenAI coup an attack by Effective Altruism?

The latest news out of OpenAI is that the board is in discussions with Sam Altman about returning as CEO - so we can still treat this as an ongoing coup attempt. I usually don't speculate too much... but there are strong indications that a major factor behind this is a belief in Effective Altruism. EA is a movement that, according to Wired, believes: "AGI is likely inevitable, and their goal is thus to make it beneficial to humanity; akin to creating a beneficial god rather than a devil."

The Center for Effective Altruism is the central actor here. On the surface, its mission seems admirable. Among the notable people who have donated significant funds are Elon Musk, Vitalik Buterin, Peter Thiel and Sam Bankman-Fried. In fact, the initial $1B funding for OpenAI in 2015 has significant overlap, in both funders and mission, with Effective Altruism.

Ilya Sutskever, OpenAI's Chief Scientist, has widely been reported as the board member who led the coup. Ilya is a known proponent of Effective Altruism. His research focus at OpenAI since this summer has been the Superalignment project - an effort to solve the alignment of superintelligent AI within the next four years, promised up to 20% of OpenAI's available compute. The team has been described as composed primarily of EA "believers".

The other three board members all have ties to Effective Altruism as well, and their mandate is primarily one of AI safety - not shareholder returns for the for-profit portion of OpenAI. This is by design. The mandate is not a problem, but it becomes a problem if a majority of the board members start to believe in a... techno-religion.

I have pointed out before, actually in critique of a tweet from Ilya Sutskever, that the narrative of humans creating an intelligence greater than themselves, which then exterminates us, shares structure with religious narratives. It's extremely compelling, and similar narratives already form the basis of belief systems that billions of people subscribe to. It takes a philosophical stance no different from the philosophical stances of religion, hence it must be considered a modern-day religion. So, logically, parts of the Effective Altruism movement are similar to a religious cult.

There is nothing wrong with being a proponent of AI safety. I have myself criticised OpenAI for a somewhat relaxed approach to security, ethics and safety. I have criticised them for making claims that are insane - like claiming that GPT-4 reasons and plans. OpenAI would do well to employ a more ethical approach.

That said... I somehow think that a group of people subscribing to cult-like beliefs about AI - wanting to win a perceived AGI race in order to make a god instead of a devil - is rather more worrying. Especially when those beliefs lead them to believe they are saving humanity while making stupid decisions. It's like something out of a Hollywood movie... and I have no doubt it will be the inspiration for many stories - a Dan Brown meets William Gibson story in the making.


Richard Marshall

Chief Analyst and AI advisor with Data Kinetic, Senior Analyst at The Skills Connection, Expert Witness, author, founder, entrepreneur, futurist, former Gartner analyst, PhD, MAE, MEWI.

1y

The fundamental problem with EA is that they think they know better than those in the field what "good" means. That's already playing god, which maps exactly onto this thinking. By calling themselves effective, they imply that all other forms of altruism are ineffective, which is just not true. I would hazard that all of the members of this cult were brought up to think that they could do no wrong, as with SBF.

Scott Padgett

Experience designer looking to bring innovative ideas to life in software.

1y

The EA worry could also be seen as the ultimate skepticism (the antidote to cult thinking) of the belief that AI will save the world. Is that belief just as cultish as what you describe?

Jason Wong

Gartner expert on digital employee experience, low-code, superapps, citizen development, business technologists, fusion teams, total experience

1y

EA could be another supporting plot line to the simulation theory. AGI wipes everything out eventually and restarts the simulation. ??


More articles by Magnus Revang
