FallAIcies 3: Why Tools Don't Determine Our Capabilities - the deskilling myth.
https://www.cntraveler.com/video/watch/how-cities-of-the-future-are-embracing-nature - Singapore has been looking forward for years.


The Fear of Deskilling

One of the most persistent fears about artificial intelligence is that it will deskill us. Many worry we'll hand off all our thinking, creativity, coding, and translation to AI systems, leaving us incapable of performing these tasks ourselves. The concern is that we'll become cognitively dependent on these technologies, much like how some fear physical dependence on modern conveniences.

But is this inevitable? Is AI destined to erode our cognitive abilities?

The Car Analogy

Consider the automobile. Cars have every capacity to leave us unable to run or walk any distance. You can get into a motor vehicle and let it, metaphorically speaking, suck all the fitness out of you. Your legs could practically atrophy in the modern world.

But they don't. At least, not necessarily.

Some people have indeed allowed the modern world to degrade their physical capabilities. Others have taken the advantages the modern world offers and used them to enhance their physical capabilities. It depends entirely on what we do with the tools we're given.

Cars don't make people unfit. Not exercising, not walking, not running, not lifting weights, and not doing load-bearing exercise make people unfit. Sedentary lifestyles make people unfit. Sedentary lifestyles existed before the car, but the car certainly amplified the possibility.

A Matter of Choice

If we choose to allow ourselves to be deskilled by AI, that's us making a choice, and we've got to consciously guard against it.

The caveat here is that it is easier—far easier now—to engage in cognitive offloading in a way that we've never really been faced with before. We have tools that are so capable of doing much of the "lifting" component of our thinking processes that we do run a significant risk.

But AI will not deskill us unless we let it. AI is not going to replace researchers unless we decide that's what we want, in which case we've made an error in judgement. We need people who can conduct rigorous, capable research, who can do the reading and retain those skills.

The Value of Human Verification

How will you know whether research is good if an AI agent completes it for you, and you're not checking it, verifying it, and making sure that it's effective, well-reasoned, and addresses counter-claims?

We still need people to learn research skills. We need them not to hand off the research to a machine at every given opportunity. We still need people to be able to write, and not to hand off their writing at every opportunity either.

Finding Balance in a Hybrid Approach

At the moment, I'm hyper-productive. I'm creating revision resources using material supplied by the exam board. I've set up a Claude project with the syllabus, exam mark schemes, content from the exam board, revision material, and specific instructions from me. I then request specific outputs to create revision material based on the set product sheets provided for Media Studies.

This produces nice interactive slide shows where I include quizzes and develop support boxes at the bottom to help students learn and ask questions.
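
For anyone who wants to script a comparable workflow rather than use a Claude Project in the app, here's a rough sketch using the Anthropic Python SDK. The file names, model string, and prompt wording below are illustrative placeholders, not my exact project setup.

import pathlib
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Reference material that would otherwise live in the project's knowledge base.
syllabus = pathlib.Path("syllabus.txt").read_text()
mark_scheme = pathlib.Path("mark_scheme.txt").read_text()

system_prompt = (
    "You are helping a Media Studies teacher build revision resources. "
    "Stay close to the syllabus and mark scheme supplied, and write quiz "
    "questions a student could self-mark.\n\n"
    f"SYLLABUS:\n{syllabus}\n\nMARK SCHEME:\n{mark_scheme}"
)

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder; substitute whichever model you use
    max_tokens=2000,
    system=system_prompt,
    messages=[{
        "role": "user",
        "content": "Create a ten-question revision quiz on the set products, "
                   "with answers and a short support-box hint for each question.",
    }],
)

print(response.content[0].text)

Either way, the point stands: the tool handles the generation and formatting, while the checking, editing and pedagogical judgement stay with me.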

Some of this would have been impossible—I couldn't conjure up interactive chat elements from nothing. I need the tool for that. But could I have written the resources? Could I have created them myself? Could I have made them interesting? Yes, probably.

Do I need to in this instance? No.

Are there times when I really need to spend the time doing a resource by hand? Yes.

A Blended Approach to Technology

I'm probably going to end up with a blended approach, where I:

  • Do some tasks that require my attention by hand
  • Automate others completely
  • Automate the bulk of some and edit the results
  • Do the bulk of work myself and automate the formatting

All of these approaches are valid, but I need to make sure I don't lose my core skills.

It's like asking: Can I run five miles? Yes, I did on Sunday. Can I drive five miles? Yes. Do I sometimes drive somewhere to go for a run? Yes, I do—just because I fancy a change of scene, or because I can't quite run the distance to meet my running partner. Will I build up to that? Probably.

So I need a hybrid model. I can't allow myself to be completely deskilled. I need to keep my cognitive fitness going, just as I need to be able to run, jump, or climb when necessary. I need to ensure my skills remain sharp, but I don't need to use them all the time if there are ways to speed up processes and make them more efficient.

A Call for Conscious Technology Use

The fundamentalists who say "everything will be written by AIs, and we don't need to learn how to write anymore" are missing the point. Just because AI will write code doesn't mean we don't need to learn coding. Just because AI can produce high-quality writing doesn't mean we don't need to learn how to produce high-quality writing ourselves.

My car can cover the distances that I run far more quickly and comfortably than I can. But I still need to run them to get the benefit and to prove that I can, because there may be times when the car isn't available. There might be times when I want to run just for the pleasure of running.

Designing Systems for Human Enhancement, Not Replacement

If we are genuinely concerned about the impact of artificial intelligence and its potential to cause harm, we need to anticipate challenges in advance. We must identify what damage might arise from these technologies and how we can mitigate it by designing systems that prevent people from becoming overly dependent or stuck in limiting routines.

Consider how we've shaped our environment around cars. We've built entire cities that prioritise vehicle traffic over pedestrians. We've extended road networks and structured our lives in ways that almost force car dependency. People aren't at fault for adapting to a car-oriented environment—the system encourages these behaviours.

If you were designing a city from scratch today, you'd take a different approach. You'd keep cars out of city centres, create transportation hubs on the periphery, develop robust public transport networks, and build protected cycling lanes and covered walkways. You'd design for human health and movement first, accommodating vehicles where necessary but not letting them dominate.

Singapore offers an excellent example of forward-thinking design. Most citizens don't own cars because they don't need to—the public transportation infrastructure is exceptional. Singapore looked ahead and designed appropriately, both with its infrastructure and education system.

Applying This Thinking to AI in Education

We need to apply the same foresight to artificial intelligence in education. We're trying to retrofit our educational practices to meet standards we never anticipated having to meet. But unlike with our cities, we have the opportunity to get ahead of the problem.

Our knowledge work networks and understanding of learning are already well-developed. We can see the potential roadblocks ahead. We need to rebuild our education systems to leverage AI's advantages while protecting against its pitfalls.

This isn't about improving air quality or easing traffic jams, important as those things are. This is about something existential: we're dealing with people's cognitive development and thinking capabilities. That's far too important to leave to chance.

Guarding Against the Path of Least Resistance

There is a genuine danger that AI could lead us down a path of cognitive laziness, but that's for us as humans to guard against. We're going to be required to make a greater effort to maintain our cognitive abilities.

And to be fair, forcing ourselves to make that effort may be no bad thing. AI, far from reducing our skill set, might even improve it—if we approach it with intention and awareness.

The challenge before us isn't whether AI will deskill humanity. The challenge is whether we'll make conscious choices about how we integrate these powerful tools into our lives and learning, maintaining our essential capabilities while leveraging technology to enhance what we can accomplish.

It's not the tools that determine our destiny, but how we choose to use them—and more importantly, how we design our systems to encourage the best of human capabilities while harnessing the power of artificial intelligence.


#AI #Education
