AI: It’s a Trap
[Image: “It’s a trap” meme from Star Wars: Return of the Jedi. Used without permission.]

As I prepare to return to school, I have been reflecting on my journey with generative AI, how it has evolved, how I have adapted to it, and what its growing role means for learning, discovery, and our collective future.

My journey with AI started well before ChatGPT. I first experimented with Jasper, an early tool built on OpenAI’s models, and later integrated OpenAI’s API to develop my own marketing tools. Over time, I have built custom chatbots, implemented retrieval-augmented generation (RAG), and explored basic agentic systems. While I would not call myself an expert, I am a capable programmer who can create tailored AI solutions.
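(If RAG is new to you, here is a deliberately minimal sketch of the pattern: retrieve the passages most relevant to a query, then hand them to a generator as grounding context. The word-overlap retriever and placeholder generate function below are illustrative assumptions, not code from any of my projects; a real system would use an embedding model, a vector store, and an LLM API.)

from collections import Counter

# A toy corpus standing in for a real document store.
documents = [
    "RAG grounds a language model's answer in retrieved source text.",
    "Agentic systems chain model calls and tools to pursue a goal.",
    "Prompt engineering shapes model behaviour through instructions.",
]

def score(query: str, doc: str) -> int:
    # Count overlapping words; a real retriever would compare embeddings.
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    return sum((q & d).values())

def retrieve(query: str, k: int = 1) -> list[str]:
    # Return the k best-matching documents.
    return sorted(documents, key=lambda d: score(query, d), reverse=True)[:k]

def generate(query: str, context: list[str]) -> str:
    # Placeholder: real code would send the query plus the context to a model.
    return f"Answer to {query!r}, grounded in: {context[0]}"

print(generate("How does RAG ground answers?",
               retrieve("How does RAG ground answers?")))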

Now, as I prepare for my MSc program, I can’t help but notice how much of the thesis-writing process could be automated. From formulating a research question to conducting literature reviews and even drafting the final document, it is entirely feasible to build a toolchain that handles everything. In fact, recent research suggests this is already happening to some degree in academia (Bhatt et al., 2024). With agentic offerings like OpenAI’s Deep Research and Operator, AI’s role in academic work will only continue to expand.
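To make the point concrete, the skeleton of such a toolchain is almost embarrassingly short. What follows is a hypothetical sketch rather than a working system: the llm function is a stand-in for whatever chat-completion API you prefer, and each stage simply feeds its output to the next.

def llm(prompt: str) -> str:
    # Stand-in for a call to a hosted model via your provider's API client.
    raise NotImplementedError("wire up a model provider here")

def draft_thesis(topic: str) -> str:
    # Each stage of the hypothetical thesis pipeline is one model call
    # whose output becomes the next stage's input.
    question = llm(f"Formulate a focused research question about: {topic}")
    review = llm(f"Summarize the key literature bearing on: {question}")
    outline = llm(f"Outline a thesis that answers '{question}' given: {review}")
    return llm(f"Write a full draft following this outline: {outline}")

That a semester of intellectual work compresses into four chained prompts is exactly what unsettles me.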

If AI can do all of this, what is the point?

For me, going back to school as a mature student is not just about earning a credential. It is about learning and building new skills. Relying entirely on AI to complete a thesis would not just cross ethical boundaries; it would also rob me of the intellectual growth that comes from struggling through the process. Wrestling with complex ideas, refining my thinking, and translating insights into a (hopefully) coherent argument is how I learn.

This weekend, I came across a concept that resonated deeply with me: AI Sciolism. It suggests that by automating too many low-level tasks, we risk losing the ability to perform high-level ones. Take a Sales Director, for example, who has never experienced the day-to-day grind of selling yet oversees a team of AI-driven agents. Over time, their strategic thinking could weaken (Madhavaram & Appan, 2025), and when the AI inevitably malfunctions, they may no longer have the expertise to troubleshoot effectively.

I see this in my own classroom. I tell my students that while AI is an incredible tool, some skills can only be learned by doing. You can’t learn programming just by reading about it. You need to practice: write code, run it, debug it, and modify it. This process, repeated thousands of times, is how you become a strong programmer. ChatGPT is great for explaining issues, but if you just copy and paste its code, you’re not really learning. And if you don’t understand the code, you won’t be able to troubleshoot when the AI system is unavailable.
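To illustrate, here is a hypothetical beginner’s bug of the kind a copy-paster never learns to catch. The function below is meant to average a list of scores; it runs without any error, and it is still wrong.

def average(scores):
    # Bug: range(len(scores) - 1) silently skips the final score.
    total = 0
    for i in range(len(scores) - 1):
        total += scores[i]
    return total / len(scores)

print(average([80, 90, 100]))  # prints ~56.7, not the expected 90

Nothing crashes and nothing warns you; only running the code against numbers you can check by hand reveals the problem. That checking instinct is precisely the skill copy-and-paste never builds.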

Yes, we can automate much of our intellectual work with AI. But should we? More importantly, what do we lose when we hand over too much to machines?

At first glance, automation seems like a huge advantage. Efficiency increases, tasks become easier, and we free ourselves from repetitive work. But this convenience comes with an overlooked cost. It leads to the gradual erosion of skills that once defined expertise. If AI can research, analyze, and even create, what happens to our ability to think critically, problem-solve, and innovate?

Throughout history, every technological leap has had winners and losers. The Industrial Revolution replaced skilled artisans with factory workers. The rise of automation and outsourcing displaced millions of jobs, leaving many dependent on systems they no longer understood. Today, AI threatens to do the same, not just with physical labour but with intellectual labour.

This raises a stark question. Are we at risk of becoming modern-day peasants, dependent on AI tools controlled by a small elite?

If we allow AI to take over too much of our cognitive work, we risk a new kind of feudalism where intellectual power is concentrated in the hands of those who control the technology. Imagine a world where only a select few truly understand AI’s inner workings while the rest of us rely on opaque algorithms to think, create, and make decisions for us. This would create a widening gap between the AI-empowered elite (tech billionaires, AI engineers, corporations) and the rest of society who merely consume and follow AI-generated outputs without understanding or questioning them.

Consider this analogy. A farmer who loses the ability to farm is no longer a farmer but a tenant, dependent on landlords for food. Similarly, a society that loses its ability to think critically, analyze, and create independently is no longer a society of free thinkers but one at the mercy of those who own and control AI systems.

This is not just a theoretical concern. The AI arms race is already centralizing power in the hands of a few corporations. Open-source AI models are trying to compete, but well-funded companies are tightening their grip on proprietary algorithms. Government regulations, while necessary, often lag behind, allowing tech giants to dictate how AI is developed and deployed. If we continue down this path, we may find ourselves in a world where only a handful of individuals have the power to shape knowledge, culture, and decision-making, while the rest of us become passive consumers of machine-generated reality (it is likely we are already here).

The Takeaway

The real danger of AI is not just job displacement or the risk of academic dishonesty. It is the loss of agency over our own intellectual lives. If we automate too much, we risk more than just skill decay. We risk becoming intellectually dependent, unable to function without the tools handed to us by corporations that have no incentive to keep us capable, only to keep us consuming.

So while AI is an incredible tool, we should be judicious about how much we rely on it. The process of thinking, learning, and creating is not just about the end result. It is about the struggle, the discovery, and the refinement of our own minds. Automating too much might make things easier, but at what cost? Are we trading intellectual independence for convenience?

And if we do, will we even recognize what we have lost before it is too late?


Sources

Bhatt, A., Kumar, R., & Singh, P. (2024). AI in academic research: A systematic review of generative models in literature synthesis. Journal of Artificial Intelligence in Higher Education, 12(1), 34–56.

Madhavaram, S., & Appan, R. (2025). AI for marketing: Enabler? Engager? Ersatz? AMS Review.

Bonnie Zink, KMb

Knowledge Mobilization Specialist | Plain Language Champion | GenAI Prompt Engineer

1 month

Great article and a thought-provoking reflection on AI's impact on learning and intellectual independence. I especially appreciate your personal journey with AI tools and the concept of "AI Sciolism." The analogy of the farmer becoming a tenant is spot on. How can we design educational systems that leverage AI's benefits while mitigating the risks of skill decay and intellectual dependency? We need to teach critical and ethical AI use, not just teach about AI.

Michael K.

Instructor / Entrepreneur

1 month

If you enjoyed the article, please give it a thumbs up or share it with your network. LinkedIn seems to throttle those who haven't subscribed to its paid tier, and engagement helps get around that hurdle.

Ryan Holota

Chief Operating Officer | Marketing | Leadership | Strategy

1 month

Great post! IMO, you are correct that proficiency with base, entry-level skills is a prerequisite for a true understanding of any topic. I think we are now seeing a large portion of people use AI tools to replace those skills in their lives, and while those folks may see a short-term benefit, the long-term value is not in removing our need to do those tasks at the beginning, but in removing the need to keep doing them once we have moved beyond them and should be spending more of our time on larger and more impactful things. We need to think about AI usage in terms of helping us complete work at a higher level, the classic 1 + 1 = 3 scenario, rather than replacing the tasks we are currently doing. There's a quote I have heard that states, "AI won't replace people's jobs, but people using AI will replace other people's jobs," and I think that's the way we want to look at it. Thanks for sharing.
