Building Schools in the Cloud: Who Owns Our Narrative in the Age of AI?
What is real? The top row of images is completely AI-generated—polished, professional, but not me. The bottom row? That’s really me.


These Images Look Like Me—But They Aren’t Me

Look at the banner above. The top row? Completely AI-generated. The bottom row? That’s really me. They capture elements of my style, my professional presence, even the way I might hold myself in a photograph. And yet—I never sat for these pictures.

What’s even more unsettling? The app that created these images is marketed for children aged 9+.

To generate these images, the app requires users to upload at least a dozen photos, which it then uses to create 20+ AI-generated images per day—pulling from a constantly changing database of gender stereotypes.

And the dangers go beyond aesthetics. The app:

  • Uses variable reward scheduling to keep users hooked.
  • Employs algorithmic nudging to influence choices.
  • Contains inherent bias, reinforcing stereotypical beauty ideals.
  • Terrifyingly, allows users to upload photos that aren’t their own—creating ethical and privacy concerns.
  • Randomly generates sexually explicit images, even for minors.

For neurodivergent people, especially those with ADHD, this is even more concerning. These platforms exploit cognitive traits such as:

  • Impulsivity – Clicking without fully processing the risks.
  • Perfectionism – Wanting to generate just one more image to get it "right."
  • Hyperfocus – Becoming fixated on the process, losing track of time and consequences.
  • Rejection Sensitivity – Comparing AI-generated versions of oneself to unrealistic standards.

In a world where AI can manipulate our self-perception, schools can’t afford to ignore digital identity, AI literacy, and online safety.

It has never been more important for schools to teach skills that are relevant to the world we actually live in.


The Playful Side of AI: What Happens When Your Avatar Becomes You?

There’s another side to this conversation. Not all AI-generated likenesses are deceptive or malicious—some are just incredibly fun.


AI capturing a number of emotions during a personal conversation

Apps like Snapchat allow users to create and customise avatars, dressing them up, tweaking their features, and even watching them interact in unexpected ways. It’s genuinely addictive. There’s something strangely compelling about designing a digital double and wondering what it gets up to without you.

But here’s the twist:

  • AI appears to be reading conversations and shaping the avatar’s responses accordingly.
  • Your little digital self can react, emote, and engage in ways that mimic your personality.
  • The avatar becomes a third entity in conversations, a playful yet slightly eerie version of you—but not you.

It raises some bizarre questions about relationships in the digital world:

What does it mean for human connection when AI inserts itself into our interactions?

Is AI augmenting relationships in a meaningful way—or subtly replacing real human dynamics?

Are we, without realising it, forming a kind of strange “AI throuple” in our digital conversations?

When I send a message, and my avatar reacts in a way I didn’t consciously choose, is it still me? Or has AI become an active participant in my digital identity?

This is where AI stops being just a tool and starts becoming something else—an intermediary, an augment, a social entity in its own right.

For schools and online learning communities, this raises even bigger questions:

If students engage with AI-driven avatars in social spaces, how does this affect their sense of identity and interpersonal skills?

Will AI interactions shape how young people form relationships online?

Can AI strengthen human connection, or will it distort what authentic engagement even means?

This is not a warning: it’s an invitation to pay attention. AI is not just shaping how we see ourselves; it’s influencing how others experience us.


When AI Starts Writing for You (Without You Knowing)

This isn’t just happening with images.

I recently googled my own name and found articles attributed to me that I never wrote.


I have not written a blog about AI and Further Education, but thanks for the heads-up, AI. Maybe I will.

Some were legitimate: things I actually published. Others? They were on-brand, aligned with my expertise, but entirely AI-generated or misattributed.

It was unsettling. Not because the content was wrong, but because it was plausible: so close to my actual voice and topics that someone could easily assume it was mine.

This is how AI is evolving. It doesn’t just pull exact matches; it connects dots, fills in gaps, and associates you with ideas, topics, and even content you didn’t create.

Why Does This Matter for Schools?

  • AI-generated misinformation in research projects – Students using AI tools for assignments might unknowingly incorporate or cite incorrect sources, leading to academic deepfakes—work that looks rigorous but lacks accuracy.
  • Erosion of authorship and academic integrity – If AI is generating content on behalf of students without attribution, how do we differentiate between learning and simple automation?
  • Blurred digital identity – Schools must prepare students for an online world where their work, ideas, and even faces might be generated or manipulated by AI, raising issues of ownership and accountability.


Practical Steps for Schools Extending Learning into the Cloud

1. Implement AI & Digital Identity Literacy

  • Teach students how AI generates content—including its biases, limitations, and ethical concerns.
  • Introduce "AI hygiene" practices: verifying sources, fact-checking AI-generated responses, and critically assessing digital identities.

2. Rethink Assessment Methods

  • Move beyond traditional essay-based assessments to interactive project-based learning, live presentations, and collaborative problem-solving.
  • Use AI detection tools sparingly—not as punitive measures but as teaching tools to help students understand responsible AI use.

3. Establish Digital Identity Awareness

  • Teach students to track their own online presence and understand how AI might be shaping their digital footprint.
  • Help learners curate AI-proof portfolios—verifiable, original work that reflects their thinking, not AI-generated remixes.

4. Encourage AI as a Co-Thinker, Not a Replacer

  • Model responsible AI use in classrooms—using it to enhance learning, not replace critical thought.
  • Introduce AI-assisted creativity—getting students to shape AI-generated content rather than passively accepting its output.


AI Doesn’t Replace: It Remixes

Think about how language itself works: every phrase we speak, every sentence we write, is a remix of something that has come before. AI accelerates that process, generating plausible next words, images, or ideas based on patterns. But just like photography didn’t kill painting, and digital art didn’t kill sculpture, AI-generated content doesn’t diminish human creativity—it reshapes it.
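The "remix" idea can be made concrete with a toy model. The sketch below is a deliberately simple bigram chain, not how any production AI system actually works: it records which word has been observed to follow which, then generates "new" text purely by resampling those observed patterns. The corpus, function names, and parameters are all illustrative assumptions.

```python
import random
from collections import defaultdict

def build_bigrams(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    follows = defaultdict(list)
    for a, b in zip(words, words[1:]):
        follows[a].append(b)
    return follows

def remix(follows, start, length=8, seed=0):
    """Generate text by repeatedly sampling an observed next word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        options = follows.get(out[-1])
        if not options:  # dead end: no observed continuation
            break
        out.append(rng.choice(options))
    return " ".join(out)

corpus = "every phrase we speak is a remix of a phrase we have heard"
chain = build_bigrams(corpus)
print(remix(chain, "every"))
```

Every sentence this toy produces is stitched entirely from fragments of its input, which is the point: the output is plausible precisely because it is a recombination of what came before.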

The real danger isn’t AI "stealing" our voices. It’s the lack of literacy in recognising and questioning what AI produces.

So the real question isn’t whether AI is making our work less authentic. The question is: Are we using it consciously, intentionally, and in ways that amplify (not dilute) our human intelligence?

That’s the skill we need to build. That’s the new literacy.


What do you think? Is AI enhancing or distorting relationships? Are playful AI-driven avatars harmless fun, or do they subtly alter how we interact with others? Let’s keep this conversation going.

Looking forward to seeing some of you in person tomorrow at the KCL event, Human Intelligence in the Age of AI, or online for a catch-up - book a call here. I always love a real-person chat about edtech!

#AI #DigitalIdentity #OnlineLearning #HumanIntelligence #FutureofEducation #AIandRelationships

Salvador Bosque

BDM Scaling Sales | Digital Enterprise Architect | Teams & Organizational Non-Dual Systems Coach (ORSC)

2 days ago

Wow Kirstin your article resonates deeply. "Resonate" is probably not something AI can do. Maybe we humans need to find what is truly human. I guess some examples may include love, compassion, joy, grit, playfulness, critical thinking, altruism, reflection, creativity, vulnerability... Yesterday, I learned from an AI expert that AI can be more efficient by training itself autonomously rather than relying on human-generated data. He said that soon all knowledge generated by humankind over the last thousands of years will be merely anecdotal. Well, that was his opinion, his belief. I don't know how the universe changes based on my opinions; I took two pictures before and after forming my opinion, and everything looked the same. If AI needs training, maybe we humans need untraining from our opinions and beliefs?? Who knows. I just know I did exactly the same experiment that you did, and voilà! These are the images that AI generated about me...

Max Mamoyco

Founder & CEO @ Nozomi - Creating digital health products that bring positive emotions and engagement

3 days ago

Kirstin Stevens this is wild—and a little unsettling. AI isn’t just generating content; it’s shaping identities. The line between real and synthetic is blurring fast. How do we keep control of our own narratives in a world where AI can mimic us so well?

Amelia King

Director of EdTech & Innovation | MSc Smart EdTech & Co-Creativity

3 days ago

I like some of the phrases you're using here Kirstin Stevens, particularly AI hygiene and 'academic deepfakes'. Definitely food for thought.

This really resonates with me. I view AI as similar to a child: with proper nurturing, guidance, and the right values instilled, it can grow into a responsible, positive force. When we support AI with robust ethical frameworks and thoughtful direction, it becomes a powerful storyteller—one that can amplify our narratives rather than distort them. Ultimately, while AI may mimic our images and voices, it's our responsibility to ensure that it represents our true identity, not a version shaped without our input.

Dr Kevin House, FCCT, FRSA

Building regenerative education

3 days ago

Kirstin Stevens you are right. Our collective passivity in the face of this technology shows yet again our lack of metaphysical sophistication and moral naivete. I used to think tech was simply an amplifier but the last year I have begun to revise this. Our hubris and self-delusion that we will control and enslave this silica-based sentience as it evolves highlights our shallowness in systemic sensibility. Our relationship with this tech is far more important than what it currently can do and how much exploitation the few can harness it for.

