OpenAI: Speculation as to why Sam Altman got fired

While I took a bit of a reprieve from talking about AI, AI, AI, I have nonetheless remained deeply immersed in the tech, watched the OpenAI DevDay keynote, and saw the news today about Sam Altman getting the sudden boot off the back of a very successful marketing campaign. At the same time, there is the remarkable news from Google that their generative artificial intelligence service, Bard, is now responsibly designed for teens.

So while I have gone a bit quiet, I continue to be struck that there is finally a technology whose "big tech" news even school leaders will want to follow. This further reinforces my view that GenAI is just as massive a technology as the internet itself, leaving no stone unturned, such that even schools are invested in what the heck happened to Sam Altman.

While I have not read any article shedding light on this, and I am completely speculating, allow me the opportunity to guess at what might have happened. As of the time of writing, for a quick summary of what is currently known, I'll refer to the post on the topic by my favorite tech blogger, Daring Fireball. (He thinks there is some sort of major scandal.)

My bet as to what happened is not scandalous, but rather more reflective of the industry in general (or at least what the narrative will turn out to become). It goes something like this:

  • OpenAI under Altman is (was?) rapidly developing custom-built chat bots called "GPTs", a technology that allows anyone to build an AI chat bot. (I have built two, not yet ready for public release, and am finding the technology potentially transformational.)
  • This GPT technology would aptly be described as "Generative Artificial Intelligence for Everyone." If you are someone who has a bunch of content at the ready (blog posts, Google Docs), you can upload it to a GPT and get an AI chat bot that talks like you. Not a single line of code required.
  • The commercial aspect of these GPTs brings up all kinds of ethical quandaries. If someone makes and publishes a popular GPT that randomly spews hatred, would OpenAI be on the hook for a class action suit? When the internet became mainstream, email was abused by spammers, but that abuse wasn't associated with any single company.
  • The industry is, at the same time, full of doomsday scenarios in which GenAI takes over the world or leads to our destruction. Those who take that point of view tend to be very vocal when it comes to funding and oversight.

In other words, I think Sam Altman pushed his company too far toward commercial success, rather than toward something more deliberate and free from controversy, like aligning with open standards. The Board didn't like that, saw it as part of a larger issue, and took the action it thought appropriate.

What implications does this series of news have for schools?

  • There is now a technology that brings GenAI to everyone. It might be delayed by this firing, but there is no holding it back.
  • Within our lifetimes, there will be GenAI chat bots built by teenagers that help other teenagers work out how to answer exam board questions, with no coding knowledge required to build them.

The above is biased, perhaps uninformed, speculation from an author with a particular viewpoint. But at least you know it's not artificially generated.

