What are people missing about generative AI?
Unless you are talking about OpenAI products, saying "Generative AI" or "LLMs" is better than "ChatGPT"
ChatGPT became part of the vocabulary of higher education last November, when people noticed that OpenAI's newly released chatbot did a much better job of generating human-sounding responses than previous Large Language Models (LLMs). The ensuing homework apocalypse captured the collective attention of educators, and pleas to use a more general term like LLM have been about as effective as suggesting we say "internet search" instead of "google," or "facial tissue" instead of "kleenex."
Like many others this summer, I began using the term generative AI because it is more general than talking about ChatGPT. There are other LLMs out there, like Claude and Google's upcoming Gemini, as well as models that produce images, like DALL-E and Midjourney. Generative AI covers all sorts of foundation models that can be used to generate human-seeming outputs, while still being more specific than AI, which covers a much wider range of actual and imagined technologies.
Generative AI got its own Wikipedia page on March 14 of this year and saw a spike in interest just as the school year ended. Technology breakthroughs and innovation generate new terms that can be slippery to define. Remember the early days of big data? Over the past decade that term has described a collection of ideas, methods, and fears about using large amounts of data in various disciplines and professions. It was and still is a buzzword.
We may be looking at the next buzzword in higher ed.
Buzzwords are not all bad. They serve a useful purpose when something novel arrives on the scene and what exactly is going on isn't clear. This past spring you could say ChatGPT and anyone teaching at a college or working in edtech would know what you were talking about. Maybe not exactly what you were talking about, but enough that you didn't have to spend 10 minutes explaining the context.
Will generative AI overtake ChatGPT as the buzzword we use to talk about what's happening? Or will ChatGPT become an eponym like Band-Aid or Uber?
Much depends on whether we’re at peak ChatGPT. Gary Marcus seems to think so. But for now, ChatGPT is what people are searching for on the internet.
Generative AI is much more than LLMs
OpenAI's launch of DALL-E 2 in April 2022 created less excitement in higher ed than its release of GPT-4. But if you work in publishing, the circulation of deepfakes, the copyright lawsuits over the images used to train large text-to-image models, and fears that illustrators, artists, and photographers will be replaced by products using generative AI are a big deal. Similar issues are playing out in the coverage of text-to-audio models.
What's happening with text-to-image and text-to-audio models is every bit as exciting and confusing as what's happening with LLMs. But because fewer teachers work in disciplines like music composition, design, and fine arts, where students are using generative AI to help produce non-written homework, that side of our collective freak-out is getting less attention. Text-to-code generation is hitting computer science, but what little I've run across suggests that teachers in that and related fields are more sanguine about adapting to the new reality, maybe because, as a discipline, they understand the technology better.
I hadn't experimented with image generators until very recently, when I needed an image for my first post. But I have noticed more AI-generated visual content in the academic humor that circulates on social media. Here is a collection of images created using Midjourney that purport to reflect what the AI *thinks* professors look like based on their academic department. It was originally posted on r/midjourney, a subreddit for sharing images.
Generative AI tools are a little bit racist
I mean, how could they not be? These tools are trained on huge amounts of human-created culture. They reflect the society that created them, and that culture is permeated by social attitudes that treat ethnic, gender, racial, regional, religious, sexual, and other minority groups as stereotypes, or render them invisible.
If you look back at the images of professors, you'll note that every one of them except the ethnic studies professor is white. PetaPixel, a web publication about photography, pointed this out in May when the images first circulated. Examples of racist or sexist outputs from generative AI tools are easy to find online, but the subtler invisibility of minorities in the words and images that generative AI is trained on should concern us more. As the example shows, just as with the outputs of publishing and entertainment industries where minorities are underrepresented, the outputs of generative AI reflect their inputs.
Outputs from foundation models are relatively easy to police, at least for prompts that are not designed to get around the guardrails in place. And Khanmigo, Bing, and other services that use generative AI models are learning to address prompts about potentially controversial subjects in ways that are more complex than simply refusing to engage, as in the sketch below.
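To give a sense of what that kind of layer looks like, here is a deliberately simplified, hypothetical sketch. The topic list, the keyword matching, and the stub model are all invented for illustration; real services rely on trained classifiers and far more elaborate policies, but the shape is similar: inspect the prompt, then decide how to engage rather than only whether to refuse.

```python
# A hypothetical guardrail layer of the kind a service might wrap around
# a model. Every name here (the topic list, the stub model) is invented
# for illustration, not any vendor's actual API.

SENSITIVE = {"weapons", "self-harm"}

def stub_model(prompt: str) -> str:
    """Stand-in for a call to a real foundation model."""
    return f"[model response to: {prompt!r}]"

def guarded_respond(prompt: str) -> str:
    flagged = [topic for topic in SENSITIVE if topic in prompt.lower()]
    if not flagged:
        return stub_model(prompt)
    # More nuanced than a flat refusal: engage, but add framing and pointers.
    return (f"This touches on a sensitive topic ({', '.join(flagged)}). "
            + stub_model(prompt)
            + " Consider consulting authoritative resources as well.")

print(guarded_respond("How did bronze age armies make weapons?"))
```

But what about the models themselves?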
Chatbots powered by generative AI literally don't know what they are talking about. They simply predict a sequence of words likely to satisfy a human: the human enters a prompt, and the chatbot generates an answer based on the data it has been trained on. As most everyone knows by now, chatbots like to make up answers, or "hallucinate." This kind of creativity can be a lot of fun when a chatbot is collaborating with a five-year-old to make up a story, but it is something of a disaster when people use ChatGPT as a research assistant.
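To make "predict a sequence of words" concrete, here is a toy sketch of my own, nothing like a production system: a word-level bigram model that generates text purely from observed word-to-word frequencies. Real LLMs use neural networks trained on vast corpora of tokens, but the core loop, choose a plausible next word and repeat, is the same basic idea.

```python
import random

# A toy "next word" predictor built from a tiny corpus. Real LLMs learn
# probabilities over tokens with neural networks trained on vast amounts
# of text, but the generation loop is the same: pick a plausible next
# word, append it, repeat.
corpus = ("the professor graded the essay and then "
          "the professor assigned the reading").split()

# Record which words have been observed to follow each word.
follows = {}
for current, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(current, []).append(nxt)

def generate(start, length=8):
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:  # no observed continuation, so stop
            break
        words.append(random.choice(options))  # sample a likely next word
    return " ".join(words)

print(generate("the"))  # e.g. "the professor graded the essay and then the"
```

Nothing in that loop checks whether the output is true, which is why confident fabrication comes so naturally.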
Some of these problems are getting fixed as both the foundation models and the services wrapped around them get better at not being offensive and learn to check sources before confidently presenting them. These are two areas where users in higher education will demand better, and their feedback will drive improvements. It seems likely that the developers of foundation models and the vendors selling products built on them will try to meet that demand, and that only those who can will succeed.
Regulators have a role as well, and they are concerned about bias. Copyright lawsuits will provide more visibility into what has gone into these models, and as copyright law pertaining to generative AI gets settled, we will have greater clarity about what can and should go into them. But policy and law are long-term fixes at best.
What can we do now? Demanding transparency about what goes into the models, and independent audits of what comes out, is a good start. But it is worth asking in whose name we are making these demands. If greater transparency becomes part of the process of improving generative AI, then we also need to ensure that process includes a wider range of perspectives than computer scientists, policy experts, academic experts, and tech entrepreneurs.
Tools reflect the values and experiences of the people who make and use them. That is a theme in the art of Stephanie Dinkins, a transmedia artist and Professor of Art at Stony Brook University. Her Project al-Khwarizmi (PAK) and AI.ASSEMBLY are educational projects that directly engage "vulnerable communities disproportionately impacted by data-centric technologies." Her work is a good place to start thinking through how we might structure engagements that move us beyond thinking about generative AI as something being done to "us" and toward some notion of the "we" who should be deciding how best to use this technology.
As educators using these tools, we have a responsibility to advocate for the ways we want to see them used, which means we need to understand how they work and how different communities of teachers and students understand them. Institutions will be making decisions about what policies to put in place and what products to buy. I hope these decisions are informed by active engagement with the tools and reflection based on our shared values.
Link of the week
OpenAI released "a guide for teachers using ChatGPT" called Teaching with AI. Not an instruction manual... just brief descriptions of how four teachers use ChatGPT, four examples of detailed prompts, and an FAQ. Useful if you're just starting.
AI Log is happening
Thanks, readers! The feedback I received on my first article convinced me that this is worth doing.
AI Log is a weekly newsletter that tries to make sense of the ways generative AI is changing higher education. It is for teachers, students, and those who support them. Click here to subscribe.
Please subscribe and consider using the buttons below to send this edition of AI Log to a friend, leave a comment with suggestions or feedback, or repost so it shows up in the LinkedIn feeds of your colleagues.
#generativeai #academia #chatgpt
100% Rob! Beyond the vocabulary, another challenge will be people only wanting to adopt ChatGPT because it's the primary brand they identify as leading in the space. While it's a powerful tool, with the proliferation of products in this space there will likely be better options that are more tailored to each task. Navigating those options is a challenge, and some of this will take a while to shake out. I think we'll be looking for more integrations in products we're already using to make real progress here. For now, we'll continue to experiment and lose some of the efficiency by using multiple tools to evaluate which provides the best outcome for each task (as I now add ideogram.ai to my list of image generation options!).