My Q&A on AI with MIT’s Eric Grimson: From Personalized Socratic Tutoring to AI Joint Majors to Mastering the Smell Test

The more conversations around AI that I have with my colleagues in higher ed and business, the more excited I get about its potential for helping learners, universities, and employers make real and meaningful connections. Yes, them robots are coming—but it’s ultimately to bring us humans closer to our goals, ourselves, and each other.

Learners are using AI to find and connect to the right programs and get personalized support. Universities are using AI in the classroom to enhance learning experiences and connect their students to opportunity. Employers are using AI to connect to today’s brightest talent—and to help their employees connect to and master the skills of tomorrow.

In the kickoff of my new Q&A series earlier this fall, we learned from Soulaymane Kachani, Columbia’s Senior Vice Provost, how Columbia University is leveraging AI. Another colleague of mine who’s long been a champion of innovation and open access in higher ed is Eric Grimson, MIT’s Chancellor for Academic Advancement and Interim Vice President for Open Learning. Eric has also been a pioneer in AI and machine vision at MIT’s Computer Science and Artificial Intelligence Laboratory.

After hearing Eric speak on a panel at our 2023 edX Global Forum about the evolving mission of universities in the face of complex societal and technological changes, I knew I had to invite him to my next Q&A on all things AI.

Eric—I can always count on you to be 10, 20, 30 steps ahead of trends in education and technology. But I’m curious: When you first heard about the capabilities of generative AI, how did you react?

My first reaction was an equal mix of fear and excitement! The opportunity to use generative AI to personalize someone’s learning journey is fantastic, especially if we can use it to broaden access to underserved learners. As in, level the playing field so that wherever they are, they can learn, accomplish tasks, and master material. The potential for misuse is worrisome, but a learner who only uses generative AI to create answers, without understanding the reasoning behind them, is missing an opportunity. As educators, it’s our responsibility to prevent that.

Some of the negative perspectives on generative AI remind me of a similar reaction in the early 1970s when the first calculators came out. There was this panic that the technology would produce a generation of innumerate students with no clue how to conduct basic calculations on their own. But what actually happened was that the calculator turned out to be a great tool. An experienced user understands what underlies those computations and uses it to be more efficient.

Too funny—I started my Q&A with Soulaymane with that same calculator analogy! Throughout history, so many disruptive technologies have had a democratizing effect on society. What else might you be dreaming about when it comes to AI’s potential for teaching and learning?

Imagine being a senior in high school and asking an AI system, “In four years’ time, I want to be an X. How do I get there?” X being a quantum physicist, a hedge fund trader, whatever their dream is. And then the AI system provides multiple road maps for what they need to learn, and adapts as the student works through their choices and checks things off. It begins building a model of the learner and customizes material that most efficiently gets them to where they want to go.
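
To make the road-map idea concrete, here is a minimal sketch of its simplest ingredient: a prerequisite graph that yields one valid ordering of topics toward a goal. The topics and prerequisites below are invented for illustration; a real system would learn them from curricular data and keep adapting them as the student progresses.

```python
# Minimal sketch: a learning road map as a prerequisite graph.
# Topics and prerequisites are invented for illustration.
from graphlib import TopologicalSorter  # Python 3.9+

PREREQS = {  # topic -> set of prerequisite topics
    "quantum physics": {"linear algebra", "classical mechanics"},
    "classical mechanics": {"calculus"},
    "linear algebra": {"calculus"},
    "calculus": set(),
}

def roadmap(goal: str, prereqs: dict) -> list:
    """Return a study order that respects prerequisites and ends at the goal."""
    needed, stack = set(), [goal]
    while stack:  # collect only the topics required to reach the goal
        topic = stack.pop()
        if topic not in needed:
            needed.add(topic)
            stack.extend(prereqs.get(topic, set()))
    subgraph = {t: prereqs.get(t, set()) & needed for t in needed}
    return list(TopologicalSorter(subgraph).static_order())

print(roadmap("quantum physics", PREREQS))
# e.g. ['calculus', 'linear algebra', 'classical mechanics', 'quantum physics']
# (prerequisites always come before the topics that depend on them)
```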

AI could also become that student’s personalized Socratic tutor. For example, they’re working on a problem and the AI system acts like an upperclassperson looking over their shoulder and saying, “You might want to look at this term in your equation and think about what it's doing” or “I noticed you’re struggling with this particular component—review this lecture and see if it helps.” The AI system isn’t there to provide the answer, but rather to offer more questions that steer the learner to answer it themselves.
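
In spirit, such a tutor is mostly a constraint placed on a language model: respond with questions and pointers, never the final answer. Here is a minimal sketch of that constraint as a prompt, where `ask_llm` is a hypothetical stand-in for whatever model API a real deployment would use.

```python
# Minimal sketch of a Socratic tutor: the model is constrained to hint
# and question, never to hand over the answer. `ask_llm` is a
# hypothetical stand-in for a real model API.
SOCRATIC_PROMPT = (
    "You are a tutor looking over a student's shoulder. Never state the "
    "final answer. Instead: point to the specific term or step worth "
    "re-examining, ask one question that steers the student toward the "
    "fix, and suggest a relevant lecture or reading if they seem stuck."
)

def tutor_reply(ask_llm, problem: str, student_work: str) -> str:
    """Return a hint-only response to the student's current attempt."""
    return ask_llm(
        system=SOCRATIC_PROMPT,
        user=f"Problem: {problem}\nStudent's work so far: {student_work}",
    )
```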

There’s a fun variant of this: Have a learner first solve a problem on their own and then ask an AI system to solve it. The learner critiques what the system did, which not only improves their own learning but also helps the system out. We have long used this technique of reviewing other people’s code with great success in teaching programming and computational thinking. This variant can help learners see how there might be different and/or more efficient ways to solve similar problems in the future.

That’s a great example of using AI to drive critical thinking and creative problem solving. So, what do you think needs to happen to help diminish educators’ concerns around AI and compel them to embrace its possibilities instead?

With any large language model, it’s important to create data sets that are large enough to train it, but where you're also really confident about the quality of the data so that you reduce the chance for the system to be misleading. One of the biggest challenges is, how do you go about collecting and organizing all that data? For universities, this presents interesting opportunities to collaborate—sharing their teaching data for this purpose.

Another challenge is more sociological: For educators who aren’t AI experts, we need to help them understand what an AI system does and doesn’t do. This will make it much easier for them to adopt and use it effectively. Educators are hearing the AI hype but they don’t necessarily fully grasp its capabilities and limitations.

Indeed, there’s been a lot of hype around AI over the past year—but I hope conversations like this one are helping our colleagues cut through the noise and feel more confident that it can be used for good—like connecting learners to the skills they need to thrive in the job market. Are there any specific examples of how AI is being used at MIT toward this end?

Like every other institution, MIT has several experiments underway—we’re using AI in a broad range of subjects, from introductory physics to writing courses to upper-level quantum systems. We embed ethical questions in the middle of technical classes on face recognition and autonomous vehicles, for example; we don’t isolate ethical issues in a separate class. The next generation needs to understand how AI is embedded in everything and be comfortable using it, no matter the discipline they choose.

We’ve also created what we call joint majors at MIT, which about 20% of our students pursue; they meet the requirements of two disciplines in the course of four years. One joint major we offer is between Computer Science and Economics, which opens up opportunities for students to learn AI tools as well as their applications in such areas as credit assignment or online investing. You could imagine joint majors in the future between AI and Biology, AI and Physics, AI and Urban Planning—really any joint major with AI that students can dream up in connection with their passions.

And that’s exactly what we’re all trying to do here—incite learners’ passions, meet them wherever they are, and connect them to new ideas, new conversations, and each other. Which brings me to the importance of maintaining human connection in the face of new technology—how is MIT ensuring that in the work you’re doing with AI?

The human element is a priority in interactions with AI—and at MIT, it’s an essential part of our education. One place I think AI can really help is taking on some of the more mundane aspects of curriculum design. For example, a student turns in a problem set and then an AI system grades it and highlights to the instructor two or three areas that need to be further explored with the learner. That’s where the instructor can really focus on a more direct interaction with the student.

A good AI system could look at the performance of an instructor’s class and tell them, “For this set of problems, all the students got it. But for this other set, they’re all over the map and really confused.” This lets an instructor start the next class with, “We're going to spend our first two minutes reviewing these items, but then for the rest of class we’ll go deeper into these concepts, because that's where most of you are struggling.” In this scenario, AI actually amplifies the human aspect because it’s giving the instructor data-driven insight to focus on areas that need the most help.
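
A toy version of that per-problem triage can be written in a few lines: compute the mean and spread of scores on each problem and flag the ones where the class is “all over the map.” The scores below are invented; a real system would read them from the gradebook.

```python
# Minimal sketch: flag problems the whole class got versus problems
# where scores are scattered. Scores are invented for illustration.
from statistics import mean, pstdev

scores = {  # problem -> list of student scores out of 10
    "P1": [9, 10, 9, 10, 8, 9],
    "P2": [2, 10, 3, 9, 1, 8],
    "P3": [10, 9, 10, 10, 9, 10],
}

for problem, s in scores.items():
    avg, spread = mean(s), pstdev(s)
    if avg >= 8 and spread <= 1:
        verdict = "everyone got it -- a two-minute review will do"
    elif spread > 3:
        verdict = "scores are all over the map -- go deeper in class"
    else:
        verdict = "mixed -- worth a targeted example"
    print(f"{problem}: mean={avg:.1f}, spread={spread:.1f} -> {verdict}")
```

The thresholds here are arbitrary; the point is that even simple score statistics can surface where class time is best spent.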

In the near and/or far future, how do you see AI impacting the day-to-day functions of your own role as a university leader? What do you think it will help you do more and less of?

For straightforward administrative functions and communication processes, AI is absolutely going to help—with report generation, emails that just need a canned response, certain kinds of outreach, etc. Of course, I’d still want someone to review it before it goes out into the world. Some curricular design could also be handled by AI. Educators could tell it, “Here are the three things I want to cover in this topic. Pull together the material from my other courses and resources to create an outline of what that lecture should look like, and then I'll add to it to form a coherent whole.” You can imagine an AI system doing that pretty well, which would free educators to spend more time on the things that matter more: personal interactions with students, that more critical email, or a lecture that needs an expert touch.
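
As a toy illustration of that outline-drafting step, one can get surprisingly far without any generative model at all: rank snippets from prior course notes by word overlap with each topic and stitch together a skeleton for the instructor to flesh out. The notes and topics below are invented for illustration; a real system would use a language model and richer retrieval.

```python
# Minimal sketch: draft a lecture outline by matching each requested
# topic to the most relevant snippet from prior course notes.
# Notes and topics are invented for illustration.
import re

def tokens(text: str) -> set:
    return set(re.findall(r"[a-z]+", text.lower()))

notes = [
    "Gradient descent updates weights in the direction of steepest descent.",
    "Overfitting: when a model memorizes training data instead of generalizing.",
    "Regularization penalizes large weights to reduce overfitting.",
]
topics = ["gradient descent", "overfitting", "regularization"]

print("Draft outline:")
for topic in topics:
    best = max(notes, key=lambda note: len(tokens(topic) & tokens(note)))
    print(f"- {topic.title()}: {best}")
```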

One final task for you, my friend. Could you offer learners and leaders some advice for how to set themselves up for success with AI?

For learners, my advice would be to learn enough about generative AI so that you have the confidence to know when it works and when it doesn’t. You don’t need to be an expert, but you want to be able to apply the smell test when it comes up with an answer and ask yourself, “Does that seem right?” Or say, “That’s really close but I want to go back and refine that question.” Treating generative AI like a black box—to use the physics metaphor—is dangerous because you have no sense of what it’s doing and what it’s not.

For leaders, I essentially have the same advice, just elevated a level. And for whatever AI system you use or create for your people, you want to accumulate enough data so that you trust it to not make critical or embarrassing mistakes. That process takes resources and time—but if you do it well, you’re going to end up with something that’s really valuable.

~~~

Readers—now it’s YOUR turn. Where do you see the biggest opportunities for AI to transform the way we learn, teach, and work? Share your perspective with Eric and me in the comments below.

