AI in Higher Education
Roy Haggerty
Executive Vice President and Provost at Louisiana State University
Artificial intelligence, particularly Large Language Models (LLMs), has captivated many of us since OpenAI's release of ChatGPT in November 2022. After exploring some of its uses over the holidays, I became convinced that it, and related AI technologies, will transform higher education (and most other institutions). Unlike past technological transformations, the rate and scale of this one will be limited not by the technology's development but by the rate at which human beings can adapt to use it. Therefore, the universities that deploy the technology, and train their staff, faculty and students on it, the fastest will be in the lead.
Right now, there are three rate-limiting factors. The first is a hesitation in higher education to put AI to work, in part because of a fear that faculty, staff or students will be labeled academically dishonest. Universities and academic societies would be well served by developing clear policies on appropriate and inappropriate uses of AI technology, and on how and when to disclose that something was produced with or assisted by AI. In my opinion, the less personal creativity or personal development would reasonably be expected in a task, the lower the expectation of disclosing that AI was used. On the flip side, the more a task would be considered intellectual property, a creative output (see image above), or part of personal development or learning, the greater the expectation of disclosure. Writing a routine report that is a simple update from last year? I wouldn't think disclosure or attribution to an LLM would be necessary. Writing a manuscript for publication or creating a piece of art? Disclosure should be expected.
A second limitation is the availability of skilled super-users and experts to put to work on implementing AI. Consequently, I have spent a good part of my free time over the past six months working to understand LLMs (and I will be co-teaching a class on LLMs this fall, along with James Ghawaly and Henry Hays, for the purpose of developing the ecosystem of skilled super-users and experts), and have begun to put LLMs to work on parts of my job as executive vice president and provost at LSU. For example, over the weekend, I wanted to have all of LSU's policies and procedures in a single PDF. Currently, they are available on two well-organized websites. I could have waited until next week and asked one of my staff to concatenate the documents (a task that would have taken them a few hours). Instead, I had ChatGPT (v. 4) write a Python script that would go to a specified website, download all of the PDFs it found there, and merge them into a single PDF. The entire task, from prompt to running the script, took me about 30 minutes. (I am an OK Python coder, but the particulars of this would have taken me a day or more without ChatGPT.) In the coming weeks, I'll write an article and post it here on LinkedIn outlining some of the use cases that I've put LLMs to work on.
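To give a flavor of what such a script involves, here is a minimal sketch of the first step, collecting the PDF links from a page, using only the Python standard library. This is not the script ChatGPT wrote for me; the function names and the download helper are illustrative.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin


class PdfLinkParser(HTMLParser):
    """Collect the absolute URL of every PDF linked from a page."""

    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.pdf_urls = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value and value.lower().endswith(".pdf"):
                    # Resolve relative links against the page's URL.
                    self.pdf_urls.append(urljoin(self.base_url, value))


def find_pdf_links(html, base_url):
    """Return all PDF URLs found in the given HTML document."""
    parser = PdfLinkParser(base_url)
    parser.feed(html)
    return parser.pdf_urls


def download_pdfs(urls, out_dir="pdfs"):
    """Download each PDF into out_dir, keeping its original filename."""
    import os
    import urllib.request

    os.makedirs(out_dir, exist_ok=True)
    for url in urls:
        filename = os.path.join(out_dir, url.rsplit("/", 1)[-1])
        urllib.request.urlretrieve(url, filename)
```

The final step, merging the downloaded files into one PDF, would need a third-party library such as pypdf; the point of the anecdote stands either way: the scaffolding an LLM can generate in minutes is exactly the kind of tedious glue code that used to take a day.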
The third limitation is structured data and, relatedly, the security of sensitive data. Many of the most interesting uses of LLMs will require two things. First, they will require that an LLM be trained on, or otherwise given access to, a lot of our own data, organized in certain ways. In many cases, staff or student workers will need to begin logging their interactions with certain types of data so that an LLM can be trained on those interactions (e.g., to create chatbots that can automate common exchanges about something such as how to change majors). Second, for LLMs to be fully put to work assisting students, faculty and staff in day-to-day tasks at universities, they will need to be trained on a wide range of the institution's own data, run in a secure environment, and limit user access appropriately. The latter will, to my current knowledge, still require some technological development in LLMs, some of which touches on cybersecurity.
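The logging step above can start very simply. As a hypothetical sketch (the function, field names, and file path are my own illustration, not an LSU system), staff could append each question-and-answer exchange to a JSON Lines file, a format that most fine-tuning and retrieval pipelines can ingest directly:

```python
import json
from datetime import datetime, timezone


def log_interaction(question, answer, topic, path="advising_log.jsonl"):
    """Append one staff-student Q&A exchange as a single JSON line.

    Accumulating these records is what later lets an LLM be trained on,
    or retrieve from, the institution's own routine interactions.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "topic": topic,
        "prompt": question,
        "response": answer,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
```

For example, an advisor answering "How do I change majors?" would log the question, the answer given, and a topic tag such as "registrar". Note that even this simple log raises the security questions above: it must live in a controlled environment, since real records would contain student information.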
In future articles, I will outline some use cases for ChatGPT and other LLMs, describe how I think AI is likely to transform higher education, and discuss some of the risks AI poses to higher education.