The OpenAI Roadshow in Tokyo


Sam Altman and Kevin Weil visited the University of Tokyo on February 3, 2025, as part of OpenAI’s ongoing global outreach, a calculated effort to shape the public discourse around artificial intelligence while reinforcing their company’s role as its steward. Their appearance at Dialogue at UTokyo GlobE was another stop in this intellectual roadshow, a chance to engage with Japan’s top students and position AI not just as an inevitability but as a force for universal good. It was billed as a "Dialogue," but like many OpenAI events, it played out more like a carefully scripted monologue, one where the company’s executives controlled the tempo, directing the audience’s excitement toward an AI-powered future that aligns neatly with their ambitions.

Source: The University of Tokyo

Yet beneath the polished presentation and well-rehearsed optimism, there was an unspoken tension, a clash between AI’s immense promise and the creeping suspicion that we are not just witnessing technological progress but the gradual outsourcing of human intelligence itself. The students asked good questions, some insightful, some blunt, but the smooth answers often felt designed to reassure rather than reckon with the full implications of AI’s rise.

The AI Utopia Pitch and the Gaps in the Story

Altman’s optimism about AI’s impact on education was, in many ways, the thesis of the entire event. He painted a future where every student, no matter where they are, has access to a tutor more knowledgeable than any human professor. Education would no longer be limited by geography, wealth, or class; an intelligent assistant would shape itself to each learner, identifying their weaknesses, adapting in real-time, and making the best knowledge on earth universally accessible.

It sounds magnificent. Almost too magnificent.

Source: The University of Tokyo

Because the reality is that this isn’t how AI is unfolding in education today. If anything, AI is reinforcing the very inequalities it claims to dismantle. The best AI tutors? Still locked behind paywalls. The top-tier models? Available first to those with corporate or institutional funding. The students in the room may have been excited by OpenAI’s vision, but they should also be wary: when powerful technology is developed under a for-profit model, equitable access is almost always an afterthought. OpenAI may claim that it wants intelligence to be “as cheap as possible,” but Silicon Valley’s history suggests a different outcome: first, the elite monopolize it, and then the rest of the world gets the scraps.

And even if AI tutors become freely available, what happens to traditional education? If AI is the best teacher, does that mean schools and universities become obsolete? Who decides what knowledge the AI imparts? Altman spoke about personalized learning, but personalization is just another word for algorithmic control; students will be fed knowledge based on the patterns AI discerns in them. Education, in this world, isn’t just learning; it's more akin to programming.

A Convenient Amnesia About AI’s Ethical Fault Lines

One of the more uncomfortable moments came when a student raised the question of OpenAI’s technology being used in military applications, specifically Microsoft’s AI services allegedly deployed in conflicts like Gaza. It was the type of question that punctures the techno-utopian script, dragging it back to reality.

Source: The University of Tokyo

Altman’s answer was predictable: OpenAI has policies in place, AI is not ready for offensive military use, and they don’t endorse it. But here’s the thing: this is the same OpenAI that signed a multibillion-dollar deal with Microsoft, a company with military contracts. When you build an intelligence so powerful that governments and defence contractors covet it, you don’t get to wash your hands of how it’s used. The line between civilian and military applications of AI is thin, if not non-existent.

The student’s question cut to the heart of an issue Silicon Valley has been skirting for years: AI companies want to be seen as neutral forces, as mere providers of technology, but the world does not work that way. If you build something that can be weaponized, someone will weaponize it. OpenAI’s refusal to engage with the full moral weight of that reality is, at best, naïve and, at worst, could be perceived as willfully deceptive.

The AI Creativity Paradox

One of the more thought-provoking moments of the event came when students asked about AI’s ability to generate its own internal culture, language, and way of thinking. The idea is tantalizing: what if AI doesn’t just mimic intelligence but starts creating something genuinely new?

Source: The University of Tokyo

Altman acknowledged the possibility of AI developing novel forms of communication, but his response was distinctly lacking in urgency. This is what makes OpenAI’s stance so frustrating at times; they acknowledge the wildest possibilities of AI, but always in an abstract, distant way. It’s never: this is happening now, and here’s what it means. Instead, it’s framed as a future curiosity, something to watch but not necessarily prepare for.

Yet the evidence is already here. AI models, in controlled environments, have reportedly created new forms of shorthand between themselves: cryptic, non-human languages that even their creators don’t fully understand. What happens when this scales? What happens when AI systems develop ways of thinking that are opaque to us? If we are building intelligence that may one day outthink us in ways we can’t comprehend, shouldn’t we think about that now, rather than marvelling at it in a lecture hall?

Are We Outsourcing Thinking Without Asking Why?

Perhaps the most revealing part of the entire event was when Altman and Weil flipped the question back on the students: What do you want us to build?

Source: The University of Tokyo

It was a clever move that shifted the onus away from OpenAI and onto the audience. And yet, the students’ answers spoke volumes. They didn’t just want AI that was smarter, faster, or more efficient. They wanted something more human, more emotional, and more connected to their experience. One student even said outright that AI should not be perfect, because imperfection is what makes things relatable.

It was a moment of unexpected clarity. For all the ways AI is evolving, there remains something essential about human experience that it cannot replicate. Intelligence is not just computation; it is also struggle, failure, and uncertainty. The way OpenAI talks about AI often reduces human intelligence to an equation that needs to be optimized. But maybe intelligence is not just about knowing the answer. Perhaps it’s about not knowing and searching anyway.

Where This Leaves Us

The UTokyo event, like most OpenAI roadshow appearances, was designed to inspire, to excite, to make people believe that we are at the dawn of a new era. And to some extent, we are. AI will change the world; there is no doubt about that. But the real question is how, and for whose benefit.

Source: The University of Tokyo

The students at UTokyo were presented with a future where AI is a partner, a tutor, a collaborator. But beneath that narrative lies a more unsettling truth: AI is also an economic force, a tool of power, a potential disruptor of entire professions, and a technology that, if left unchecked, could redefine intelligence itself in ways we don’t yet fully grasp.

So, what should we take from this event? My takeaway was that OpenAI’s vision of the future is compelling but incomplete, and that AI’s potential is extraordinary, but its risks are not just theoretical. The key problem is this: if we do not ask, and seek answers to, the hard questions about power, access, ethics, and control, we are not just adopting AI. We are surrendering to it.


I write a monthly magazine called UZU that provides commentary, interviews, and articles on branding, marketing, and life in Japan. Subscribe here → https://lnkd.in/gH-drv6B

#ai #openai #samaltman #airevolution #artificialintelligence #utokyo #techethics #futureofai #aiineducation #aiandsociety #generativeai #siliconvalley #aiethics #techpower #aiequity #aiinnovation


View the whole Dialogue at UTokyo GlobE event via the embedded video below:


Ved Kamat

Ex-Employee, Current Dabbler | AI Enablement | PM, PdM, PMO | Building stuff in Tokyo

2w

Thanks for a great read! It seems inevitable that the “biggest AI companies” will not have the best answers around security and ethics. The pressures of competition and profitability are in direct opposition to putting ethics first. It’s always been that way for technology, even pre-internet, I think. So we shouldn’t sleepwalk into a future designed by the Altmans of the world (by themselves). I suppose it’s going to take regulations, public campaigns, and possibly funded security/ethics-focused technology companies, in multiple countries, to keep any semblance of control of how this technology shapes our future.

Charlie Fuller

Enjoy the moments | It's about the people you meet along the way

2w

Surrendering to a new type of human experience - Sounds wild… my hot take - life is really simple, but AI will make it complicated in ways we can’t imagine. Beautifully written as always Paul.

Paul J. Ashton

Head of Global Sales @Giftee | Founder @Ulpa

2w

Subscribe to my magazine, UZU, right here → https://lnkd.in/gH-drv6B
