The OpenAI drama
Luiza Jarovsky
Co-founder of the AI, Tech & Privacy Academy, LinkedIn Top Voice, Ph.D. Researcher, Polyglot, Latina, Mother of 3. Join our AI governance training (1,000+ participants) & my weekly newsletter (37,000+ subscribers)
Plus: AGI, AI regulation, and AI governance
LinkedIn subscribers: unlock the full newsletter, an exclusive monthly edition, and Masterclass discounts with a subscription plan.
Hi, Luiza Jarovsky here. Welcome to the 80th edition of this newsletter! Thank you to 80,000+ followers on various platforms and to the paid subscribers who support my work. Read about my work, invite me to speak, or just say hi here.
This newsletter is fully written by a human (me), and illustrations are AI-generated.
A special thanks to the Center for Financial Inclusion, this edition’s sponsor:
When designing new financial products, how can teams ensure they are taking into account the user’s privacy needs and concerns? How can privacy be reframed as an essential tenet during ideation and design, rather than seen as a compliance exercise? The Center for Financial Inclusion’s new Privacy as Product Playbook helps designers of digital financial services bring privacy needs front and center. Read it here.
To become a newsletter sponsor, get in touch.
The OpenAI drama
If you have checked any social network or news channel in recent days, you probably already know that Sam Altman, OpenAI's CEO, was suddenly fired, only to be reinstated a few days later. Meanwhile, OpenAI's employees were posting heart emojis on X (Twitter), and rumors of a secret AGI project were circulating. The drama continues. What is going on?
I'll summarize it in quick bullets below:
- November 17: OpenAI's board abruptly fired Sam Altman as CEO, saying he had not been "consistently candid in his communications," and named CTO Mira Murati interim CEO.
- November 19: After negotiations over Altman's possible return stalled, the board announced former Twitch CEO Emmett Shear as interim CEO.
- November 20: Microsoft's Satya Nadella announced that Altman and Greg Brockman would join Microsoft, while the vast majority of OpenAI's employees signed a letter threatening to quit unless the board resigned and Altman returned.
- November 23: Altman was back as OpenAI's CEO with a reconstituted board, and reports about a mysterious internal project called "Q*" began to circulate.
Even after it all looked "solved," there is, of course, still the question of why he was fired in the first place. Perhaps offering a piece of the puzzle, Reuters reported:
“Ahead of OpenAI CEO Sam Altman’s four days in exile, several staff researchers wrote a letter to the board of directors warning of a powerful artificial intelligence discovery that they said could threaten humanity”
They are talking about "Q*" (pronounced Q-Star), a mysterious project that sources have cited as one of the reasons for Sam Altman's firing: it was reportedly groundbreaking and could potentially lead to artificial general intelligence, or AGI (more on that in my last commentary of this newsletter).
-
From my point of view, what matters to all of us, regardless of the gossip, rumors, firing, and reinstatement, is that critical thinking, advocacy, and action are more important than ever. As the slogan of this newsletter says: let's keep reimagining technology in light of privacy, transparency, and fairness.
Grok enters the room
As the OpenAI drama unfolded, Elon Musk opportunistically announced that Grok, his AI chatbot, will be integrated into X (Twitter) next week.
See below a mockup of how its interface will look, according to this screenshot from Nima Owji's X account:
The integration with X (Twitter) seems worrying to me for two main reasons. We will have wild years ahead.
Enjoying the newsletter?
Refer it to your friends. It only takes 15 seconds (writing it takes me 15 hours). When your friends subscribe, you get free access to the paid membership.
Job opportunities
Are you looking for a job in privacy? Transitioning to AI? Check out hundreds of opportunities on our global privacy job board and AI job board. Good luck!
Join a training program
More than 620 professionals have attended our training programs. Each program is 90 minutes long, led by me, and includes additional reading material, 1.5 pre-approved IAPP credits, and a certificate. They help you stay up to date and upskill: read more and save your spot.
Join our AI Book Club
250+ people have already registered for our AI Book Club.
There will be book commentators, and the goal is to have a critical discussion on AI-related challenges, narratives, and perspectives.
Have you read these books? Would you like to read and discuss them? Join the book club!
Privacy & AI in-depth
Every month, I host a live conversation with a global expert - I've spoken with Max Schrems, Dr. Ann Cavoukian, Prof. Daniel Solove, and various others. Access the recordings on my YouTube channel or podcast.
AGI, AI regulation, and AI governance
Part of the OpenAI drama, as I described above, was related to the mysterious project Q* and the conflicting interests around it. Today I would like to discuss the topic, focusing on AI governance and privacy by design.
This is a public preview. Choose a paid subscription or recommend this newsletter to access the premium content. Thank you!