Today Might Be the First Day of the End of the World
Jessica Kriegel
Chief Strategy Officer @ Culture Partners, Podcast Host @ Culture Leaders, Keynote Speaker, Author
As the mother of a seven-year-old, I will remember today. Today is the day that the most powerful woman in AI, Mira Murati, left her post as CTO of OpenAI, and the company announced its plans to transition to a for-profit entity. This is a critical moment for humanity as OpenAI pivots its leadership and corporate structure and—perhaps most alarmingly—dissolves the superalignment team focused on AI risk. Sarah Kreps, director of the Tech Policy Institute at Cornell University, puts it bluntly: “This points to an accelerated move into the boundary-pushing directions of AI research. It's a natural consequence of an AI arms race with high financial stakes.” What we are witnessing is not just a change in leadership but the dissolution of OpenAI's foundational cultural beliefs, which were once rooted in safety, transparency, and social good.
As expected, the leaders have all released the typical press-release talking points about maintaining their commitment to social good. And Murati has stated that her departure has nothing to do with the restructuring announcement. But Americans can see past the polished statements to the real story: if her departure had nothing to do with the restructuring, she would have waited. Her previous misgivings about Sam Altman's leadership are well known, and the timing of her exit makes the real reason obvious.
This move to a for-profit model raises more questions than answers, especially around the future of AI and its governance. Under the new structure, Sam Altman would, for the first time, become a shareholder himself. The idea that a for-profit entity can genuinely focus on social good while prioritizing shareholder value is delusional. Let me repeat: this means that OpenAI would now be duty-bound to prioritize profits over the well-being of society. It could even be sued for failing to act in the best interest of shareholder value. This shift essentially marks the end of OpenAI's commitment to social good as its guiding principle. Priorities will inevitably follow the money, and the implications of this change could be catastrophic for the future of AI innovation and safety.
This is why today feels so monumental, so disorienting. With the exit of Mira Murati and the shift in OpenAI's mission, we're not just witnessing a corporate restructuring. We're seeing the dismantling of a culture that was once dedicated to AI safety, a culture that once claimed to prioritize humanity over profit. Today might literally be the first day of the end of the world. The decision to dissolve the superalignment team, which was tasked with managing AI risks, sends a chilling message: profits now come first.
The truth is, for-profit entities are not designed to prioritize long-term societal good; they are built to maximize short-term financial gains. The transformation of OpenAI into a public-benefit corporation does little to change the fact that shareholders will now dictate the company's future. It's hard to see how a culture of profit can still align with the original mission of AI safety and transparency. When decisions are motivated by financial gain, the results are bound to reflect those priorities—and that should concern all of us. The risks of pushing AI research too fast, without sufficient oversight, could lead us into uncharted and dangerous territory.
Today could literally be the first day of the end of the world, not because AI is inherently bad, but because the framework governing its development at the biggest AI company with Big Tech backing has shifted away from safeguarding humanity and toward maximizing returns for investors. The culture of OpenAI is changing, and with it, the future of AI and our society. This isn't just a business decision—it's a moment that could define the trajectory of technological advancement for decades to come. And, for the sake of my child's future, I hope we realize the gravity of this shift before it's too late.
Elsewhere In Culture
The Boeing strike continues to evolve into a classic “us vs. them” narrative. Boeing recently announced a 30% pay increase, an uptick from its previous 25% offer, calling it their “best and final offer.” However, there's a catch—it must be ratified by this Friday. The workers aren't impressed, and they're calling Boeing's bluff, saying “not good enough.” But the conflict goes deeper than just dollars and cents—it's rooted in trust. Workplace culture is shaped by the experiences employees have, and Boeing executives have now created two negative experiences that are shaping workers' belief that leadership cannot be trusted.
The first breach of trust came when Boeing initially claimed that 25% was the best it could offer, but after workers held their ground, the company suddenly found an extra 5%. This change leaves workers feeling deceived, questioning whether the company was truly honest in the earlier negotiations. It's possible Boeing had to make tough choices to find that extra money, but if that's the case, this should have been communicated to the workforce to maintain trust. The second blow came when Boeing bypassed the negotiating committee and went directly to union members and the media with its offer. Union organizers called it a “slap in the face,” undermining the collective bargaining process. For the workers, these experiences aren't just frustrating—they're cementing the belief that Boeing's leadership can't be trusted, deepening the divide between management and the workforce.
Amazon's return-to-office (RTO) mandate has sparked overwhelming dissatisfaction among employees, as evidenced by a survey in which the average satisfaction rating for the new policy was just 1.4 out of 5. This discontent highlights a deeper issue within Amazon's workplace culture—when decisions are made without genuine alignment or trust, the results can be damaging. Employees have expressed concerns that the five-day RTO policy disrupts their productivity, collaboration, and work-life balance, especially given the flexibility they've come to rely on. The sharp dissatisfaction rating signals a gap between leadership's vision and employees' needs, a disconnect that risks eroding the trust and accountability crucial for long-term success. When culture shifts without communication and understanding, it often leads to disengagement, affecting the very results that leadership is trying to drive.
The low satisfaction score also reflects a critical issue with Amazon's approach to balancing leadership mandates with employee experience. Many employees feel the new policy undermines trust in their ability to deliver results remotely, a sentiment that threatens to weaken the company's culture of innovation. For an organization that values data-driven decision-making, this RTO policy seems misaligned with the realities of how its workforce operates most effectively. Without a culture that supports flexibility and demonstrates trust in its employees, Amazon risks losing top talent and fostering a workplace environment where accountability feels forced rather than earned. The dissatisfaction rating speaks volumes—when leadership decisions don't resonate with the workforce, the resulting culture often undercuts the very productivity it aims to improve.
Interesting thoughts. Mira Murati's departure and OpenAI's shift to a for-profit model really raise important questions about trust and values in leadership. We prioritize safety and transparency in our AI solutions. How can we ensure that these values remain central as AI evolves?
Here’s what’s jarring to me: reading your well-reasoned perspective on Armageddon, then running into a subhead that says, “Elsewhere in Culture.” If the world ends, where does “elsewhere” exist? Seriously… Excellent vision regarding the perils we must navigate moving forward, Jessica.