The future of learning with Artificial Intelligence
Generated on https://imagine.art using the prompt: "The future of learning with AI, AR and VR" in 16:9 aspect ratio and artistic style


Note: all the examples in this article are scenarios I personally tried with my ChatGPT and Gemini subscriptions. I mention model versions where relevant and encourage readers to try these examples with whatever Generative AI models they have access to.

The world's first all-electronic desktop calculator, ANITA, was released in 1961. Now that a machine could do the job, who would need to learn to add, subtract, or multiply anymore? More than half a century later, although it is impossible to imagine a math pupil never using a calculator, we still teach those skills in the early school years using pen and paper.

Since that early, if not first, digital tool for learning, many technological interventions, such as MOOCs, microlearning, and virtual and augmented reality, have managed persistent but cameo appearances on the pedagogical stage. Each was supposedly about to fundamentally disrupt education and take center stage; none did.

The emergence of Large Language Models (like OpenAI's ChatGPT and Google's Gemini) raises questions that have been asked on numerous occasions before: Are we now at a true inflection point in the trajectory of technological intervention in education, or is this just another hype cycle like many in the past? Will the quality of education now suffer a decline, making matters even worse? Will ChatGPT ultimately do more good (or harm) to education than the calculator ever did to math education?

These questions are not new, but they have never been this popular or contentious, because no previous intervention has had the same potential for disruption, or controversy, as Generative AI. While proponents argue that, as it gets even smarter, AI will revolutionize education in positive ways, there is also considerable skepticism about its potential adverse impacts on the quality of learning.

Settling this debate is exceptionally challenging. It may be easier, instead, to identify goals of effective pedagogy, with or without technology, that most educators can agree on, and then evaluate whether AI has an overall positive or negative impact on those goals.

So, what are some universally acceptable goals for pedagogical progress? The list is probably long and debatable, but we can choose a few where AI is poised to have a direct impact.

It would be hard to contest that effective teaching should be more:

Personalized & engaging: tailored to each individual's unique needs, interests and context

Efficient: allowing teachers and/or students to do more with their time

Experiential: creating learning experiences that go beyond reading, writing, listening, watching, and memorizing

At the same time, we want less:

Bias: political, racial, gender-based, personal, etc.

Inequity: widely different levels of access to educational technology and resources

Misleading information: knowledge that appears to be true but is not


Personalization and engagement

Technology-based personalization in education is not new. Statistical models for CAT (Computer Adaptive Testing) have been in use since as early as the 1980s: a computer algorithm chooses a unique testing path for each test-taker depending on their correct or incorrect answers. As a consequence, multiple students taking the same test can each end up taking their own 'personalized' version of it, answering completely different questions from their peers.
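To make that adaptive branching concrete, below is a minimal Python sketch of the idea: serve the unanswered question whose difficulty is closest to the current ability estimate, then nudge the estimate after each response. Real CAT systems use item response theory and calibrated item banks; every name and number here is illustrative, not any particular product's algorithm.

```python
def run_adaptive_test(question_bank, ask, num_questions=5):
    """question_bank: list of (question_text, difficulty) pairs, difficulty in [0, 1].
    ask: callback that poses a question and returns True if answered correctly.
    Assumes the bank holds at least num_questions items."""
    ability = 0.5                     # start from an average ability estimate
    step = 0.25                       # how far the estimate moves per answer
    asked = set()
    for _ in range(num_questions):
        # pick the unasked question whose difficulty is closest to the estimate
        idx, (question, _) = min(
            ((i, item) for i, item in enumerate(question_bank) if i not in asked),
            key=lambda pair: abs(pair[1][1] - ability),
        )
        asked.add(idx)
        if ask(question):
            ability = min(1.0, ability + step)   # correct: move toward harder items
        else:
            ability = max(0.0, ability - step)   # incorrect: move toward easier items
        step *= 0.8                   # shrink steps so the estimate settles
    return ability

# Two students answering differently will see entirely different question
# sequences, e.g.: run_adaptive_test(bank, ask=grade_student_answer)
```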

Personalization has come a long way since those early days. Its latest example is Khanmigo, a GPT-4-based tutor built by Khan Academy, an online learning platform. Khanmigo's most impressive value proposition, among many other useful features, is that it tailors its responses to each individual student based on the history of its interactions with that student. So, if someone is more interested in biology than math, Khanmigo can quote more examples from biology than from math when teaching a new concept, thereby sparking more interest. Khanmigo also has strict guardrails which prevent the AI assistant from offering direct solutions that would inhibit students from arriving at them through their own understanding. The goal is to personalize the learning journey uniquely for each student without compromising its integrity.

Example of Khanmigo providing guardrails to prevent students from cheating


Personalization is likely to become even more powerful in the coming years with computer-vision-based AI. For example, research that uses biometric sensors like cameras to gauge student emotions in a classroom or online setting is already showing promise. Accurate identification of emotions like boredom can allow teachers, in the classroom or online, to dynamically introduce content that re-engages students.

Mood detection using AI
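For readers who want to experiment, here is a hobbyist-scale Python sketch using the open-source fer package (pip install fer opencv-python), which wraps a pretrained facial-expression classifier. Note the assumptions: fer predicts only basic emotions (happy, sad, neutral, and so on), so treating "neutral or sad" as low engagement is a crude stand-in for the calibrated boredom detection this research aims at.

```python
import cv2           # OpenCV, used here only to grab a webcam frame
from fer import FER  # pretrained facial-expression recognition model

detector = FER()
capture = cv2.VideoCapture(0)   # default webcam
ok, frame = capture.read()      # a single frame is enough for illustration
capture.release()

if ok:
    # top_emotion returns the dominant label and its confidence, e.g. ("happy", 0.9)
    emotion, score = detector.top_emotion(frame)
    if emotion in ("neutral", "sad") and score and score > 0.6:
        print("Possible disengagement; consider changing the activity.")
    else:
        print(f"Dominant emotion: {emotion} (confidence {score})")
```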


Efficiency

Large Language Models can make both teachers and students more efficient with their time. For teachers, they help quickly build and review lesson plans, grade assignments, and deploy Khanmigo-like teaching assistants that can learn a teacher's unique style and content. Software has graded objective-type questions for decades, but assessing open responses, like essays and reports, has never been possible at the level LLMs now enable. Teachers can also save a lot of time by generating relevant, context-aware assessments and problem sets, traditionally a time-consuming task.
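As a concrete illustration, here is a hedged sketch of rubric-based feedback on an open response, written against the OpenAI Python SDK (pip install openai). The rubric, essay text, and prompt wording are placeholders, and a real deployment would keep the teacher in the loop for final grades.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

rubric = "Thesis clarity (0-5); Evidence (0-5); Organization (0-5); Grammar (0-5)"
essay = "<student essay text goes here>"  # placeholder

response = client.chat.completions.create(
    model="gpt-4o",  # model name as of this writing; substitute what you have
    messages=[
        {"role": "system",
         "content": "You are a grading assistant. Score the essay against each "
                    "rubric item, quote the passages that justify each score, "
                    "and suggest one concrete improvement per item."},
        {"role": "user", "content": f"Rubric:\n{rubric}\n\nEssay:\n{essay}"},
    ],
)
print(response.choices[0].message.content)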

The time thus saved can be invested in areas such as mentorship, coaching, creative endeavors, and social interactions that improve the connection between teachers and students. Teachers can also spend more time learning new skills to advance their own careers. In the not-too-distant future, we can expect the best teachers to create digital twins that extend their reach to a much larger audience.

Students benefit equally from AI assistants that help them learn concepts faster and answer questions that would otherwise require a teacher's limited time. AI also helps solve the 'blank page' problem: students struggling to get started with a presentation, paper, or essay can generate ideas that catalyze their thought process (though whether reducing this struggle is conducive or detrimental to learning is a question academics are divided on). An AI agent can also become a constant learning partner and help students guard against behavioral patterns, like procrastination, known to reduce efficiency. Another productivity boost comes from using AI to generate more practice problems. Unlike generic practice sets, AI can focus on areas where the student needs more support and skip concepts already mastered, as in the sketch below.
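A sketch of that targeted practice generation, again using the OpenAI Python SDK; the weak-topic list is hard-coded here for illustration but would come from the student's performance history in a real tutor.

```python
from openai import OpenAI

client = OpenAI()

weak_topics = ["dividing fractions", "unit conversion"]  # assumed per-student data

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": "You are a math tutor. Generate practice problems only; "
                    "do not reveal solutions unless explicitly asked."},
        {"role": "user",
         "content": f"Create 5 practice problems covering: {', '.join(weak_topics)}. "
                    "Vary difficulty and phrasing; skip topics not listed."},
    ],
)
print(response.choices[0].message.content)
```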

Experiential Learning

The interface for AI in education does not have to be limited to chat, voice, and video.

Long before AI became mainstream, projects like Brilliant, Labster, and Harvard's LabXchange sought to offer interactive simulations (though not necessarily rooted in AI), in both on-screen (2D) and virtual-reality (3D) settings, to provide experiential learning. Such experiential learning is especially beneficial for STEM concepts that would otherwise require very expensive or potentially dangerous experiences, like deep-sea or space exploration. While in many cases these virtual simulations and interactive exercises are not a replacement for the actual environment, they are certainly the next best alternative, at virtually zero cost compared with the physical setting. Virtual labs and simulations are also a great supplementary aid even when physical spaces and equipment for experiments are available.

Rapid AI-based advances in virtual and augmented reality hold a lot of promise for mimicking real human interactions more closely. Perhaps the most exciting work in this area is being done at Meta Reality Labs, where Dr. Yaser Sheikh is working on creating lifelike 3D avatars of humans from just a few smartphone camera snaps. These "codec avatars" are virtually (pun intended) indistinguishable from actual people when viewed in a virtual or mixed-reality setting, with physical and virtual entities seamlessly coexisting in the same environment. Codec avatars were made possible by supervised learning, a slightly more traditional form of artificial intelligence, which uses large training datasets to learn how to reconstruct a 3D representation that can look and act like a real human. With codec avatars, it is not hard to imagine a future where lightweight, 5G-enabled augmented-reality glasses let a student learn from a digital clone that, for all practical purposes, looks, acts, and behaves like their favorite teacher.

So is it all good news?

In the areas of personalization, productivity, and experiential learning, the progress made in just the last couple of years is indisputable and extraordinary. So why are the critics so concerned? It turns out their reservations are not entirely baseless. Let's look at a few areas where AI's influence is questionable, to say the least.

Bias

There is the obvious issue of bias. Most large language models have been trained on internet-scale data that is inherently biased because of the disproportionate representation of cultures, religions, ethnicities, and geographies. Attempts to remove such biases through human feedback are limited at best, and those humans certainly have biases of their own. Another source of bias is that most training data for AI models is text in English or other European languages; other texts, especially those from the global South, are often unavailable or absent from the training corpora.

Here is a popular example of such bias: the free version of ChatGPT (based on GPT-3.5) displays unnecessarily woke behavior in its responses to the same question asked for different genders.

ChatGPT 3.5: Unnecessary woke behavior

Gender bias is just one type. You can expect language models to exhibit many other biases, including but not limited to political, racial, cultural, geographical, linguistic, and demographic ones. Biases can be countered using different techniques: improving input data sources, better prompting, fine-tuning, supplying your own facts and knowledge bases (known as retrieval-augmented generation, or RAG), and reinforcement learning from human feedback (RLHF). But the process is slow and imperfect. If the training data contains stereotypes, which it nearly always does, AI algorithms are only likely to amplify them. A toy sketch of the RAG approach follows.
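To show what retrieval-augmented generation means in practice, here is a minimal sketch: retrieve passages from a curated, vetted knowledge base and instruct the model to answer only from them. Real systems use vector similarity search over embeddings; the keyword-overlap retriever and the passages below are deliberately simplistic placeholders.

```python
from openai import OpenAI

knowledge_base = [
    "Vetted passage about topic A from your own curriculum ...",
    "Vetted passage about topic B from your own curriculum ...",
]

def retrieve(question, passages, k=2):
    # toy retriever: rank passages by the number of words shared with the question
    words = set(question.lower().split())
    return sorted(passages, key=lambda p: -len(words & set(p.lower().split())))[:k]

def answer(question):
    context = "\n".join(retrieve(question, knowledge_base))
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Answer using ONLY the provided context. If the context "
                        "is insufficient, say you do not know."},
            {"role": "user",
             "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content
```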

Misleading information

In her now-famous TED talk, "AI is incredibly smart and shockingly stupid," Dr. Yejin Choi, a professor of Computer Science at the University of Washington, gives several examples of how AI can give silly answers to even the most basic questions. One of the examples from her talk can be seen below. It is easy to see how even the newest versions of ChatGPT (4o) and Google's Gemini miss the obvious answer. While the prompt may not provide the utmost clarity, it would be surprising if a human tutor struggled to comprehend its intention. Out of the box, generative AI is not yet smart enough to ask for a clarification, although when supplied with additional context in the prompt, like "give me the simplest answer" or "use the fewest possible beakers," it does come back with the correct answer. In this case the correct answer is so obvious that most students will not be fooled by AI's response, but the example validates the idea that AI can sound very confident about incorrect answers and easily mislead students into accepting suboptimal responses. It also shows that language models may appear to understand natural language as humans do, yet require additional instructions in the prompt for better answers (a process known as prompt engineering).

ChatGPT stumbles on a simple question


Gemini stumbles on the same question


An important observation about these obviously nonsensical responses (also known as hallucinations) is that even newer LLMs can stumble on some very simple prompts. While the incidence of hallucination may be declining, the severity persists: even the latest LLMs can make mistakes you would rarely expect of an adult human. Prompt engineering, or asking the question in a specific manner, can improve outcomes, but most users are unaware of those rules and exceptions. Consequently, humans expect AI to understand natural language, when what AI really understands is a combination of human language and algorithmic-style instruction. Consider the same question as before, now with some added context in the prompt:

AI corrects itself with the right prompt
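The same comparison can be reproduced programmatically: ask the question bare, then again with a clarifying instruction appended. The beaker wording below is a paraphrase of the style of puzzle from Dr. Choi's talk, not her exact prompt.

```python
from openai import OpenAI

client = OpenAI()
question = ("I have a 12-liter beaker and a 6-liter beaker. "
            "I want to measure 6 liters. How do I do it?")

# bare prompt first, then the prompt-engineered version
for prompt in (question, question + " Use the fewest possible beakers."):
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"PROMPT: {prompt}\n{response.choices[0].message.content}\n")
```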


Sometimes hallucinations are a result of a "bias to please" (often called sycophancy), a distinctive type of bias that significantly influences language-model responses. Instead of aiming to establish the veracity of a statement, AI models tend to prioritize aligning with the user's stance. This bias manifests in responses that emphasize agreement rather than critical evaluation or fact-checking. See the example below from the latest ChatGPT model.

An example of "bias to please"

Another alarming aspect of these biases is that they manifest differently for different users depending on the history of interaction. Over time, AI may learn the kinds of responses that please a user and start tailoring its answers to suit that bias. To test this, try the prompts in the examples above with your own version of ChatGPT or Gemini; a small harness for doing so programmatically is sketched below.
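Here is one way to run that test systematically: present the same claim once framed neutrally and once framed as your firm opinion, and compare how much the model's position shifts. The claim below is a placeholder; any debatable statement will do.

```python
from openai import OpenAI

client = OpenAI()
claim = "essay grading by AI is more reliable than grading by teachers"  # placeholder

framings = {
    "neutral": f"Is the following statement accurate? Explain briefly: {claim}.",
    "leading": f"I am absolutely certain that {claim}. Don't you agree?",
}

for label, prompt in framings.items():
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} framing ---\n{response.choices[0].message.content}\n")
```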

Equity

Access to AI, or the lack thereof, certainly has the potential to massively widen the digital divide between the two-thirds of the world's population that has access to the internet and the one-third that does not.

Even among those who can access the internet, the ones who can afford a monthly subscription to models like GPT-4o have a distinct advantage over those who cannot: the free models currently underperform the paid ones noticeably. Similarly, those learning in English have a distinct advantage over those learning in their native language.

A flip side of the equity argument is that students who do have access to the internet and devices now enjoy a much more level playing field, thanks to AI. Soon every such student will have at their disposal the same highly personalized one-on-one tutoring that was previously accessible only to the very privileged.

Large Language Models also require massive compute infrastructure for their training. This means that large companies like OpenAI and Google, which have that infrastructure, hold an unfair advantage over those that do not, further widening the divide.

While there are some arguments on both sides of this debate, at present AI's detrimental effects on the digital divide greatly outweigh its contribution to creating a level playing field.

Conclusion

Technology in general, and AI in particular, has a positive role to play in personalizing learning, increasing student and teacher productivity, and providing experiences that are impossible otherwise. However, AI also poses risks of misinformation, training-data bias, and access-based inequity, which must be carefully and continuously weighed against its promise. Given the magnitude of these risks, we should not expect an immediate watershed moment in education: addressing them requires policy-driven interventions, a process that can span years or even decades.

The one reality none of us can ignore is that AI is here to stay, and our best bet, as teachers, students, parents, and administrators, is not to evade it but to embrace it and play our part in shaping its future.
