Papert's Revenge: AI Revolution in Education
Interview with Professor Ilya Levin, Head of the School of Computer Science, Holon Institute of Technology
Mikael Gorsky [MG]: Why is it important not just to implement generative AI in education, but to first understand its nature?
Prof. Ilya Levin [IL]: The traditional view of any technology entering education implies perceiving it as a tool for improving learning, teaching, and assessment processes. This is a natural approach - otherwise, it is unclear why we would implement the technology at all. The pragmatic component is evident and difficult to argue against. This approach somewhat worked with previous technologies, although even then it wasn't entirely correct.
However, when we're dealing with technology that represents a fundamental change in key educational concepts - cognition and the learning process itself - then it becomes appropriate to change our attitude toward this tool, not viewing it as purely instrumental, but rather understanding its essence.
Our article is dedicated to this, and we examine it in historical context - how the understanding we've now reached has evolved. We don't claim that generative AI simply fell from the sky. On the contrary, we show that we've always somehow known and understood this. The article isn't about generative AI suddenly appearing and now we need to do something about it - it argues the opposite: that these ideas have always existed, but now they've found a new embodiment.
We often hear criticism: "We knew before that hands-on learning is good, we knew about the constructivist approach to education." Yes, we knew, but new technologies have allowed us to look at this with fresh eyes. Let's not debate whether this is evolution or revolution - that's a separate topic. But recognizing that this step forward has been significant is indeed important. I don't want to discuss now whether quantity has transformed into quality, but I want to draw attention to the fact that moving along the same old path, purely instrumental perception of technology as exclusively a means, is critically wrong in this case.
MG: What is the fundamental novelty of the current stage?
IL: Today, following the old path and perceiving technology purely as an instrument is critically incorrect. The right approach is rethinking all of education in the context of the emerging capabilities of generative artificial intelligence. Why? This relates to a key aspect of education - cognition, its epistemological principles, and the very perception of what it is.
Traditionally, we remain within the framework of the conventional classroom, where knowledge is transmitted to students in one way or another. This has always worked, despite criticism and discussions about the need for change. Throughout these years, the system somehow managed, adapting new technologies to its traditional goals. However, the situation has now critically changed because cognition occurs under entirely new conditions.
We now have a partner - GPT and similar systems. This is not just a tool; it is something different. And its "otherness" is not related to it being some kind of alien - that would be only half the problem. It is, in some sense, a reflection of ourselves. Due to its properties, it is individualized by its nature.
Unlike Papert's "Objects to Think With," which we personalized during work - first choosing what resonates with us, then personalizing it through building microworlds - here the object initially becomes individual for each user. This is a fundamentally new aspect: individualization occurs not through our choice or customization but is an inherent property of the technology itself.
Moreover, cognition now takes place under fundamentally new conditions. Practically all routine processes related to accumulating knowledge, and even skills, are automated. Generative systems radically change the picture, leaving humans exclusively with the creative component - not because these systems are incapable of creativity, but because this is where human uniqueness manifests itself.
MG: How has the understanding of technology's role in education historically evolved?
IL: Three important stages can be identified. The first was when the idea emerged that computers are an excellent implementation of constructivist ideas. This was stated by Papert, who created constructionism. The computer was the element that enhanced his perception of Piaget's ideas and allowed him to move beyond their initially rather narrow framework.
The second stage is connected with the emergence of the social component and network technologies at the turn of the 21st century. When social networks, Google, and other technologies appeared, education became networked, and distance learning emerged. This led to the democratization of the process. It developed constructionist ideas, though not precisely - social-constructivist ideas remained somewhat separate from them.
And finally, what we have now is a completely new quality associated with the emergence of generative AI. It represents for us an answer to the question of what constructionism means in relation to education.
Constructionism claimed to change education 50 years ago, but this didn't happen then - the education system managed to adapt computers for its traditional purposes, for improving learning and teaching. But today the situation is different - learning and teaching processes are not just improving but changing fundamentally.
MG: How does the cognition process change under new conditions?
IL: The process of cognition takes on a completely different character. It is now very far from the traditional concept of accumulating knowledge and even skills. Practically all processes related to this are automated. Generative AI radically changes the system, leaving humans exclusively with the creative component - not because AI can't be creative, but because we need to preserve what is uniquely human.
And it's not even about AI being unable to come up with something that hasn't been thought of before - perhaps it can do this even better than humans. It's about the uniqueness of human perception of the new environment. Hyper-analysis of what is happening is a component that wasn't there before.
For example, if someone worked with generative AI, communicating with it using a keyboard, and then continued this conversation using voice - these become two different conversations. They reveal different aspects, although the interlocutor is the same. This makes a person think about the nature of multimodality, which is already a completely different level of creative thinking.
MG: How can these ideas be implemented in practice in education? For example, how can the "flipped classroom" work using generative AI?
IL: We can already apply these ideas even in school education. For example, the article mentions the concept of the "flipped classroom." But it's important to understand that this isn't just a traditional flipped classroom - it's a new version adapted for generative AI capabilities.
In our case, it works like this: the teacher first communicates with the students, then they go home and converse with their AI partner. This is their personal matter, which the teacher doesn't interfere with - this is a fundamentally important point. At home, each student conducts their own dialogue with AI, forming their individual path of cognition. Then they come to class with their developed results, which are strictly individualized.
This is where the new professionalism of the teacher manifests - they must be ready to perceive the individual "microworlds" of each student. This requires a completely different approach to teaching, where the teacher becomes not just a transmitter of knowledge, but rather a navigator through these various individual paths of cognition.
It's also important to recognize the right to non-symbolic thinking in the classroom. Traditionally, only the "top-down" approach is recognized: a task is given that must be solved in a strictly defined way. If a student solves it differently, they don't get a good grade.
A characteristic example is the story about Misha Bongard, told by Isaac Yaglom. When Misha was in sixth grade, he came to an advanced class at Moscow State University. Yaglom posed a problem: a ship travels from point A to point B along a river and returns. The same ship then covers the same round-trip distance in still water. In which case does the journey take less time?
Most said it would be the same - the current helps on the way there and hinders on the way back. But Bongard said it would be faster in still water, and gave a brilliant explanation: if the current speed equals the ship's speed, the ship will never return to point A. This is an example of situational thinking, whose relevance is growing in the era of generative AI: rather than proceeding only top-down, it looks at the specific situation.
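Bongard's limiting-case intuition can be checked with a little algebra: with ship speed v and current speed c, the round trip over distance d each way takes d/(v+c) + d/(v-c) = 2dv/(v² − c²), which always exceeds the still-water time 2d/v whenever c > 0, and diverges as c approaches v. A minimal sketch (the function name and the numeric values are illustrative, not from the interview):

```python
def round_trip_time(d, v, c):
    """Round-trip time over distance d each way, ship speed v, current speed c."""
    if c >= v:
        # Bongard's limiting case: the ship can never make it back upstream.
        return float("inf")
    return d / (v + c) + d / (v - c)

d, v = 60.0, 10.0                      # 60 km each way, 10 km/h ship speed
still_water = round_trip_time(d, v, 0.0)   # 2d/v = 12.0 hours
with_current = round_trip_time(d, v, 4.0)  # 60/14 + 60/6 ≈ 14.29 hours
never_back = round_trip_time(d, v, 10.0)   # infinite: current equals ship speed
```

For any positive current the river trip is strictly slower, confirming Bongard's answer.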
MG: Shall we call generative AI a technology, given its "living" characteristics?
IL: When we call it an "alien" - this is, of course, a metaphor, a kind of joke. The fact that we classify it as technology is obvious - because we created it. This is an unchangeable fact.
One can propose a thought experiment: if, besides the software and algorithms, it were known that the creators had embedded a fragment taken from living nature - for example, a piece of neural tissue from an animal's brain - then the conversation would be different. In human consciousness, there is a clear understanding of what is created by nature and what is created by humans.
Floridi said that blurring the boundaries between these categories is a sign of digital society. But he meant blurring boundaries in people's consciousness: previously, people clearly distinguished between living and non-living, but now this line is becoming less obvious.
One can draw an analogy with fire - we learned to create it by understanding the principles of its occurrence, but fire also exists in nature. Similarly, there is cognition that we "programmed" and cognition that exists naturally. We learned to achieve a similar result by a different path. But does this mean that the very idea of cognition is technological? Or is only the specific solution technological?
Carl Mitcham, a philosopher of technology, provided another example - genetic engineering, where there is also a merger of technological and natural. When people created an airplane, one could say: how is it different from a bird? The same bird, only made of iron. But when it comes to thinking - the most intimate process for us - the situation is different.
Despite the fact that our brain has the same cells as the rest of the body, and the mechanistic view should seemingly extend to thinking as well, we internally resist this. As the remarkable American philosopher and cognitive scientist Daniel Dennett wrote, the Cartesian view of consciousness as a unique entity still influences our thinking - even if we disagree with it verbally.
Interviewer – Mikael Gorsky, an AI Researcher at the School of Computer Science at the Holon Institute of Technology.