The OpenAI Roadshow in Tokyo
Sam Altman and Kevin Weil visited the University of Tokyo on February 3, 2025, as part of OpenAI’s ongoing global outreach, a calculated effort to shape the public discourse around artificial intelligence while reinforcing the company’s role as its steward. Their appearance at Dialogue at UTokyo GlobE was another stop in this intellectual roadshow, a chance to engage with Japan’s top students and position AI not just as an inevitability but as a force for universal good. It was billed as a "Dialogue," but like many OpenAI events, it played out more like a carefully scripted monologue, one where the company's executives controlled the tempo, directing the audience’s excitement toward an AI-powered future that aligns neatly with their ambitions.
Yet beneath the polished presentation and well-rehearsed optimism, there was an unspoken tension, a clash between AI’s immense promise and the creeping suspicion that we are not just witnessing technological progress but the gradual outsourcing of human intelligence itself. The students asked good questions, some insightful, some blunt, but the smooth answers often felt designed to reassure rather than reckon with the full implications of AI’s rise.
The AI Utopia Pitch and the Gaps in the Story
Altman’s optimism about AI’s impact on education was, in many ways, the thesis of the entire event. He painted a future where every student, no matter where they are, has access to a tutor more knowledgeable than any human professor. Education would no longer be limited by geography, wealth, or class; an intelligent assistant would shape itself to each learner, identifying their weaknesses, adapting in real time, and making the best knowledge on earth universally accessible.
It sounds magnificent. Almost too magnificent.
Because the reality is that this isn’t how AI is unfolding in education today. If anything, AI is reinforcing the very inequalities it claims to dismantle. The best AI tutors? Still locked behind paywalls. The top-tier models? Available first to those with corporate or institutional funding. The students in the room may have been excited by OpenAI’s vision, but they should also be wary: when powerful technology is developed under a for-profit model, equitable access is almost always an afterthought. OpenAI may claim that it wants intelligence to be “as cheap as possible.” Still, Silicon Valley’s history suggests a different outcome: first, the elite monopolize it, and then the rest of the world gets the scraps.
And even if AI tutors become freely available, what happens to traditional education? If AI is the best teacher, does that mean schools and universities become obsolete? Who decides what knowledge the AI imparts? Altman spoke about personalized learning, but personalization is just another word for algorithmic control; students will be fed knowledge based on the patterns AI discerns in them. Education, in this world, isn’t just learning; it's more akin to programming.
A Convenient Amnesia About AI’s Ethical Fault Lines
One of the more uncomfortable moments came when a student raised the question of OpenAI’s technology being used in military applications, specifically Microsoft’s alleged provision of AI services in conflicts like Gaza. It was the type of question that punctures the techno-utopian script, dragging it back to reality.
Altman’s answer was predictable: OpenAI has policies in place, AI is not ready for offensive military use, and they don’t endorse it. But here’s the thing: this is the same OpenAI that signed a multibillion-dollar deal with Microsoft, a company with military contracts. When you build an intelligence so powerful that governments and defence contractors covet it, you don’t get to wash your hands of how it’s used. The line between civilian and military applications of AI is thin, if not non-existent.
The student’s question cut to the heart of an issue Silicon Valley has been skirting for years: AI companies want to be seen as neutral forces, as mere providers of technology, but the world does not work that way. If you build something that can be weaponized, someone will weaponize it. OpenAI’s refusal to engage with the full moral weight of that reality is, at best, naïve and, at worst, could be perceived as willfully deceptive.
The AI Creativity Paradox
One of the more thought-provoking moments of the event came when students asked about AI’s ability to generate its own internal culture, language, and way of thinking. The idea is tantalizing: what if AI doesn’t just mimic intelligence but starts creating something genuinely new?
Altman acknowledged the possibility of AI developing novel forms of communication, but his response was distinctly lacking in urgency. This is what makes OpenAI’s stance so frustrating at times; they acknowledge the wildest possibilities of AI, but always in an abstract, distant way. It’s never: this is happening now, and here’s what it means. Instead, it’s framed as a future curiosity, something to watch but not necessarily prepare for.
Yet, the evidence is already here. AI models, in controlled environments, have reportedly created new forms of shorthand between themselves: cryptic, non-human languages that even their creators don’t fully understand. What happens when this scales? What happens when AI systems develop ways of thinking that are opaque to us? If we are building intelligence that may one day outthink us in ways we can’t comprehend, shouldn’t we think about that now, rather than marvelling at it in a lecture hall?
Are We Outsourcing Thinking Without Asking Why?
Perhaps the most revealing part of the entire event was when Altman and Weil flipped the question back on the students: What do you want us to build?
It was a clever move that shifted the onus away from OpenAI and onto the audience. And yet, the students’ answers spoke volumes. They didn’t just want AI that was smarter, faster, or more efficient. They wanted something more human, more emotional, and more connected to their experience. One student even said outright that AI should not be perfect, because imperfection is what makes things relatable.
It was a moment of unexpected clarity. For all the ways AI is evolving, there remains something essential about human experience that it cannot replicate. Intelligence is not just computation; it is also struggle, failure, and uncertainty. The way OpenAI talks about AI often reduces human intelligence to an equation that needs to be optimized. But maybe intelligence is not just about knowing the answer. Perhaps it’s about not knowing and searching anyway.
Where This Leaves Us
The UTokyo event, like most OpenAI roadshow appearances, was designed to inspire, to excite, to make people believe that we are at the dawn of a new era. And to some extent, we are. AI will change the world; there is no doubt about that. But the real question is how, and for whose benefit.
The students at UTokyo were presented with a future where AI is a partner, a tutor, a collaborator. But beneath that narrative lies a more unsettling truth: AI is also an economic force, a tool of power, a potential disruptor of entire professions, and a technology that, if left unchecked, could redefine intelligence itself in ways we don’t yet fully grasp.
So, what should we take from this event? My takeaway was that OpenAI’s vision of the future is compelling but incomplete, and that AI’s potential is extraordinary while its risks are anything but theoretical. The key problem is this: if we do not ask, and seek answers to, the hard questions about power, access, ethics, and control, we are not just adopting AI. We are surrendering to it.
I write a monthly magazine called UZU that provides commentary, interviews, and articles on branding, marketing, and life in Japan. Subscribe here: https://lnkd.in/gH-drv6B
#ai #openai #samaltman #airevolution #artificialintelligence #utokyo #techethics #futureofai #aiineducation #aiandsociety #generativeai #siliconvalley #aiethics #techpower #aiequity #aiinnovation
View the whole Dialogue at UTokyo GlobE event via the embedded video below: