The Leading Edge #14
Robin Jose
LinkedIn Top Voice | CPO | CTO | 2x Successful AI Product Exits | Founder | Angel Investor | Speaking, Advisory & Consulting
Welcome back to edition 14, and buckle up - this week's a doozy! We're diving into the epic Google vs. OpenAI showdown, plus executive departures at OpenAI and the dissolution of an entire team. We also take a moment to appreciate a CEO with a sense of humor (and maybe learn a thing or two about taking feedback in stride).
As always, we want to hear from you! What grabbed your attention this week? Let us know in the replies, and feel free to suggest topics for next week's edition.
And hey, if you find this newsletter valuable, share it with your network! The more the merrier (and the smarter!).
Clash of the Titans
The biggest news of the week, of course, is OpenAI vs Google. OpenAI gatecrashed yet another big Google announcement with GPT-4o (the "o" stands for omni - this one is natively multimodal, not a stitching-together of several models).
I am not going to talk about "Her" because that's been all over the news.
But just in case you haven't, do watch all the videos there. In particular, I recommend at least the following two.
What was also impressive: GPT-4o is available to non-Plus customers, making it truly accessible to all. And as if that weren't enough, they claim GPT-4o is 2x faster and 50% cheaper than GPT-4 Turbo.
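For the builders among you, those speed and cost claims are easy to test for yourself, since switching models is just a parameter change in the API. Here's a minimal sketch of my own (not from OpenAI's announcement), assuming the official openai Python SDK (v1+) and an OPENAI_API_KEY in your environment:

```python
# A quick, hypothetical comparison sketch - not from OpenAI's announcement.
# Swapping "gpt-4-turbo" for "gpt-4o" is all it takes to compare the two.
# Assumes the official openai Python SDK (v1+) and OPENAI_API_KEY set.
import time
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

for model in ("gpt-4-turbo", "gpt-4o"):
    start = time.time()
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": "Explain GPT-4o in one sentence."}],
    )
    elapsed = time.time() - start
    # Token usage is what per-token pricing (the "50% cheaper" claim) is billed on.
    print(f"{model}: {elapsed:.1f}s, {response.usage.total_tokens} tokens")
    print(response.choices[0].message.content)
```

A single call won't prove a 2x latency difference, but it makes the point: for most applications, the switch is a one-line change.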
Despite OpenAI stealing their thunder, Google I/O was nonetheless very impressive! Of course, Google's highly scripted videos don't hold a candle to the more real-time, endearing demos OpenAI pulled off.
Still, Project Astra is amazing as an assistant, and there could be tons of use cases built on top of it.
The battle lines have been drawn, but the real winners are us – the users. What are you most excited about: Project Astra for its efficiency and integration with your work systems, or GPT-4o for its charming demeanor?
Super(mis)Alignment and OpenAI
Back in July 2023, OpenAI announced the Superalignment team with much fanfare.
The goal of the Superalignment team was to build an "automated alignment researcher" - "to steer and control systems much smarter than us".
They also stated: "Superintelligence alignment is fundamentally a machine learning problem, and we think great machine learning experts—even if they're not already working on alignment—will be critical to solving it".
The team was headed by Ilya Sutskever (cofounder and Chief Scientist of OpenAI) and Jan Leike (Head of Alignment). OpenAI pledged 20% of compute for this initiative.
Then things changed.
Sam got kicked out, and Ilya was seen as one of the people who worked with the previous board to initiate it. Then Sam came back, and Ilya went silent - until now.
And now this week, both Ilya and Jan are gone.
Ilya gave a very good corporate story of pursuing a passion project. Jan was more direct, stating “over the past years, safety culture and processes have taken a backseat to shiny products”.
OpenAI has apparently dissolved the Superalignment team.
Conspiracy theories abound - and, as you'd expect, AGI is the most-used word.
Should OpenAI focus more on Superintelligence Alignment? Of course, and so should so many other companies.
Yann LeCun had an interesting take on Jan Leike's post here. He challenges the urgency implied in the statement "how to control AI systems much smarter than us". In his words, we need to have the beginning of a hint of a design for a system smarter than a house cat before that rallying cry.
When Sam Altman got fired, the conspiracy theorists made videos claiming AGI was here and Skynet would take over in two weeks. Two quarters have passed since then, and all we got was GPT-4o's "Scarlett Johansson" voice.
OpenAI is still a (relatively) small company fighting major incumbents like Google and Meta - and the whole open-source AI movement. Their competition is not going to stand still.
Neither can OpenAI - they need to move forward, release new products, and keep their technology lead.
If OpenAI instead invested half their resources in building AI safety products, the same people would soon be making videos titled "OpenAI is no longer cutting edge - did leadership play it too safe?".
Sundar's Got Jokes (and We Should Too)
Speaking of the event, even Google's CEO isn't immune to memes!
Remember the "AI" counting memes after last year's I/O? Apparently he mentioned AI 37 times in the last one, and internet was filled with memes.
This year, Sundar Pichai embraced them with humor, proving a point: we can all lighten up a bit.
Seriously, sometimes we take ourselves WAY too seriously at work. (But seriously, stop counting how many times I said seriously in this post.)
A little constructive criticism can feel like a personal attack, and a bad day can send us spiraling. But here's the reality: nobody's perfect. We all make mistakes, and sometimes those mistakes come back to us disguised as feedback.
The key? Laugh it off (when appropriate), learn from it, and move on. Plus, being open to feedback helps you grow faster.
So next time you get a not-so-glowing review, take a deep breath, and ask yourself what you can learn from it.
You might be surprised at how valuable it can be.
Until next week...
We're still growing - this is the 14th edition! Your feedback is as crucial as ever. Hit reply and let me know what you think! Want to see a specific topic covered next week? Don't be a stranger - share your ideas!
And of course, if you find this newsletter valuable, spread the knowledge! Share it with your network and help us grow.
See you next week!
#artificialintelligence #generativeai #leadership #ai #productdevelopment #startups