ChatGPT: A discussion for Adjunct Faculty

The image for this article was created with the assistance of DALL-E 2 using the text “digital art, a humanoid robot taking a test”.* (see the bottom of this article)

It seems everywhere you look, there is a conversation going on about ChatGPT. Today, I’m going to talk about ChatGPT from a layperson’s perspective, which is what I am, and what most of us are.

Information about ChatGPT is coming at us so quickly that it can be hard to understand what this technology is and what its potential uses are.

Does it have the potential to save humankind?

Is it the beginning of the end for human existence?

Seriously, in much of what I read, that’s the message I’m hearing, and it doesn’t make sense.

That is the purpose of this paper: to talk about what ChatGPT is from a layperson’s perspective, and to look at it through an academic lens, as an adjunct professor, which is one of the jobs that I do.

Let’s get started.

BTW, ChatGPT is not the only language model for conversational AI in use. I’m just focusing on this particular model because it’s dominated the conversation lately.

The only way I know of to understand something is to research it. Throughout the rest of this paper, I’ve identified the websites I’ve drawn information from. The conclusions that I have made, the summaries I have presented, and any mistakes in those summaries, are my own.

You can get a technical description of ChatGPT here at their website. Um…I don’t really get what their description means. And that’s perfectly ok. Sometimes, there is this technology snobbery where people “in the know” can make it seem like somehow, they are privy to some special knowledge that only super brilliant people can understand. That’s not the case AT ALL. Remember Einstein’s saying, “If you can’t explain it to a six-year-old, then you don’t understand it yourself”?

I’m not saying ChatGPT is engaging in technology snobbery, at least not intentionally. Sometimes, technical topics just don’t lend themselves to easily understood translations, and they aren’t necessarily written for consumption by the general public.

So, here’s what I think ChatGPT is.

ChatGPT is a language processing model. In summary, language processing models can mimic human language based on questions or prompts from human input.

This particular model has assimilated data from the Internet, some 300+ billion words of it! Then, with human input and a methodology called Reinforcement Learning from Human Feedback (RLHF), these automated language models “practice” question/response scenarios with a human technician, over and over again, until the model begins to “learn” human language, often referred to in AI platforms as “natural” language.
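To make that idea a little more concrete, here is a deliberately tiny Python sketch of “learning” language from examples. It only counts which word tends to follow which in a short training text, and it leaves out everything that makes ChatGPT powerful (the enormous training data, the neural network, and the RLHF step described above), so treat it as an illustration of the general notion rather than how ChatGPT actually works.

# A toy illustration (not ChatGPT's architecture): a bigram model that
# "learns" which word tends to follow another by counting examples in a
# tiny training text, then uses those counts to continue a prompt.
from collections import Counter, defaultdict
import random

training_text = (
    "students write papers and students cite sources and "
    "students ask questions and instructors grade papers"
)

# Count how often each word follows each other word.
follows = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def continue_prompt(prompt_word, length=5):
    """Extend a one-word prompt by repeatedly picking a likely next word."""
    output = [prompt_word]
    for _ in range(length):
        candidates = follows.get(output[-1])
        if not candidates:
            break  # no examples of what follows this word
        next_word = random.choices(list(candidates), weights=candidates.values())[0]
        output.append(next_word)
    return " ".join(output)

print(continue_prompt("students"))  # e.g., "students cite sources and students write"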

Is human language a series of repetitive dialogues? Or repetitious prompts and responses? Absolutely not. This is why platforms like ChatGPT are in their very rudimentary beginnings. I don’t believe they are yet capable of fully simulating the human brain’s ability to communicate, and from what I can tell, the Artificial Intelligence community doesn’t believe they are at that capacity level either. At best, given certain data, these models can replicate human dialogue under very tightly controlled conditions.

But just because a language model has been trained to provide certain responses, under controlled and limited circumstances, doesn’t mean that the model is always correct, and it doesn’t mean that the model has reasoned through this data the way the human brain can.

The use of AI in Grammarly.

An existing example of AI usage in analyzing text is Grammarly. Grammarly gives a particularly good and very understandable definition of what an artificial intelligence system is: “Broadly speaking, an artificial intelligence system mimics the way a human would perform a task. AI systems achieve this through different techniques. Machine learning, for example, is a particular methodology of AI that involves teaching an algorithm to perform tasks by showing it lots of examples rather than by providing a series of rigidly predefined steps.” Grammarly also has a great summary of how their application of AI technologies has value: “Our goal is to help you express yourself in the best way possible, whether you’re applying for a job or texting a joke to your friends.”
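As a hedged illustration of that “showing examples” idea, and emphatically not a description of Grammarly’s actual system, here is a tiny Python sketch that trains a classifier on a handful of made-up labeled sentences instead of hand-coded rules:

# An illustrative sketch of "learning from examples" rather than fixed rules.
# This is NOT how Grammarly works internally; the sentences and labels below
# are invented purely to show the general idea their definition describes.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

sentences = [
    "I am writing to apply for the adjunct faculty position.",
    "Please find my curriculum vitae attached for your review.",
    "hey wanna grab coffee before class lol",
    "omg that exam was brutal",
]
labels = ["formal", "formal", "casual", "casual"]

# Turn text into word counts, then fit a simple classifier on the examples.
vectorizer = CountVectorizer()
features = vectorizer.fit_transform(sentences)
model = MultinomialNB().fit(features, labels)

# The model was never given explicit rules about tone; it inferred patterns
# from the labeled examples it was shown.
print(model.predict(vectorizer.transform(["thanks so much, see ya at the review session"])))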

I’ve authored numerous articles and papers over the years, often about technical topics. You can read more about me on LinkedIn and see some of the articles I’ve written here. I usually run my articles and writing samples through Grammarly, and almost every time, I get a response saying my writing may not be easily understood by the average reader, whom Grammarly identifies as someone reading at an eighth-grade level.

As far as I know, no one thinks that using Grammarly will be the beginning of the end for human existence.

That’s where we need to get with ChatGPT.

The use of AI in Microsoft Word.

In Word, there is a summary option that reviews the first 1000 words of a document and, using AI, creates a summary. Here’s the AI-generated summary for this paper:

Summary

ChatGPT is a language processing model that can mimic human language based on questions or prompts from human input. This particular model has assimilated data from the Internet, some 300+ billion words of it. At best, given certain data, these models can replicate human dialogue, under very tightly controlled conditions. The only way I know of to understand something is to research it.

There is also a message at the top of the screen: “Summary inserted: The summary is generated by AI based on the first 1000 words of the document. It may contain sensitive text and factual inaccuracies.”

I’m not a huge fan of this summary. It’s not horrible, but it misses a LOT of the message about what the paper is about.

But if you, as an audience, read that summary, wouldn’t you assume that the paper is about what ChatGPT is? And maybe you wouldn’t read any further to discover what this paper is really about!

The message I hope I’ve conveyed is that AI is a potential tool that can be used to simplify work or make suggestions to improve the state of our work. But its effectiveness is dependent on having a human confirm its accuracy, at least for now.
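For readers curious how an automated summary can technically “work” and still miss the point, here is a rough sketch of one simple approach, extractive summarization. Word does not publish how its summarizer is built, so this is only an illustration under that caveat: it scores the sentences in the first 1000 words by word frequency and keeps the top few, with no understanding of what the paper is actually arguing.

# A rough sketch of extractive summarization: score each sentence by how many
# frequent words it contains, then keep the highest-scoring sentences in their
# original order. Word's summarizer is almost certainly more sophisticated.
import re
from collections import Counter

def naive_summary(text, num_sentences=3, word_limit=1000):
    # Mimic the "first 1000 words" restriction the Word feature describes.
    first_chunk = " ".join(text.split()[:word_limit])
    words = re.findall(r"[a-z']+", first_chunk.lower())
    frequencies = Counter(words)

    sentences = re.split(r"(?<=[.!?])\s+", first_chunk)

    def score(sentence):
        return sum(frequencies[w] for w in re.findall(r"[a-z']+", sentence.lower()))

    top = set(sorted(sentences, key=score, reverse=True)[:num_sentences])
    return " ".join(s for s in sentences if s in top)

# Usage (with a hypothetical file name):
# print(naive_summary(open("draft.txt", encoding="utf-8").read()))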

The implications of ChatGPT for academia, and specifically, for adjunct faculty/professors.

Adjunct Faculty, Adjunct Professors: there are a few different titles for this type of instructor. According to Inside Scholar, these instructors are part-time, non-tenure-track instructors who reportedly cover some 75% of college classroom instruction. That translates to some half a million adjuncts!

The concern for educational institutions, and subsequently adjuncts, is that some students may use this platform to cheat, or, as educational institutions refer to it, commit plagiarism. This is a real concern, considering all the hype about what ChatGPT is capable of; it’s such a concern that numerous school districts across the country have banned the use of ChatGPT on their platforms and their Wi-Fi. Let’s say I’m a student, and I have a paper due on how ChatGPT could make human educators obsolete.

Jona Jupai recently wrote an article for the U.S. Sun about this that succinctly summarizes these concerns. As that student, I can simply submit my topic to ChatGPT, and it will return a completed “paper”. This may be simplistic, but it does cover the basic functionality of what ChatGPT can do.

I’ve been testing ChatGPT functionality. You will see those test results later in the paper.

ChatGPT and concerns in the .edu domains regarding plagiarism.

ALL colleges and universities have policies on plagiarism. I work for Ivy Tech Community College as an adjunct. Their plagiarism policy is here. The Harvard Extension School covers plagiarism under its Academic Integrity policy here. This information is public and pretty standard across all .edu domains. Merriam-Webster’s dictionary defines plagiarize as “to steal and pass off (the ideas or words of another) as one's own: use (another's production) without crediting the source”.

The current state for colleges and universities is that students’ work is automatically reviewed by a third-party application to determine if it “matches” similar work. One vendor that offers this service is Turnitin.com. On their higher education solutions page, Turnitin states that their solutions will “Ensure that students’ work is original and protect against even the most sophisticated forms of student misconduct.”

Turnitin uses proprietary technology to review students’ work. Once the review is complete, a “score” is returned. The higher the score, the more likely it is that the paper contains wording, concepts, and so on that have already been written by someone else.
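Turnitin’s matching technology is proprietary, so the following is only a toy illustration of what a “similarity score” can mean in principle: count how many three-word phrases in a submission also appear in an existing source. It is not Turnitin’s method, just a way to make the concept of a score less abstract.

# A toy text "similarity score": the share of a submission's three-word
# phrases (trigrams) that also appear in an existing source. This is only an
# illustration of the concept, not how Turnitin actually computes its score.
def trigrams(text):
    words = text.lower().split()
    return {tuple(words[i:i + 3]) for i in range(len(words) - 2)}

def similarity_score(submission, existing_source):
    """Return the percentage of the submission's trigrams found in the source."""
    sub, src = trigrams(submission), trigrams(existing_source)
    if not sub:
        return 0.0
    return 100.0 * len(sub & src) / len(sub)

existing = "ChatGPT is a language processing model that can mimic human language."
student = "ChatGPT is a language processing model and it can mimic human speech."
print(f"{similarity_score(student, existing):.0f}% of trigrams match")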

Not all students who receive a high score from Turnitin are willfully committing plagiarism. As an entry-level student, you may be unaware of the requirements for citation (usually APA style standards), or you might think that summarizing information from various sources (books, periodicals, newspapers, and the Internet) without citation doesn’t meet the criteria for plagiarism. In this scenario, these scores can be used as educational tools.

Of course, there are instances where plagiarism is willful. If this is determined to be the case, students may be subject to academic discipline, as defined by their institution.

Updated August 2023: a finding from one of my favorite and most reliable sources, The Markup, on AI scanning and Turnitin results for non-native English speakers.

What’s missing from ChatGPT’s deployment.

I come from a software development background. One of the credos of this discipline is test! test! test! IBM has a great definition: “Software testing is the process of evaluating and verifying that a software product or application does what it is supposed to do.”

These “tests” are often referred to as use cases. Wrike.com has a good definition of what a use case is: “A use case is a description of the ways in which a user interacts with a system or product.”
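To show how a use case can become a concrete, repeatable test, here is a hypothetical Python example. The grading function and its thresholds are invented for illustration; the point is that each test captures one way a user interacts with the system, so the behavior can be verified automatically every time the software changes.

# A hypothetical use case captured as automated tests. The use case: "An
# instructor enters a numeric score and the system returns a letter grade."
# The function and thresholds below are made up for illustration only.
import unittest

def letter_grade(score):
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    if score >= 90:
        return "A"
    if score >= 80:
        return "B"
    if score >= 70:
        return "C"
    if score >= 60:
        return "D"
    return "F"

class LetterGradeUseCase(unittest.TestCase):
    def test_typical_scores(self):
        self.assertEqual(letter_grade(95), "A")
        self.assertEqual(letter_grade(72), "C")

    def test_boundary_score(self):
        self.assertEqual(letter_grade(90), "A")

    def test_invalid_input_is_rejected(self):
        with self.assertRaises(ValueError):
            letter_grade(130)

if __name__ == "__main__":
    unittest.main()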

OpenAI, the builder of ChatGPT, undoubtedly tested this model under rigorous conditions. This paper is not an indictment of anyone, any organization, or any policy, nor is it a forum to debate the value (or lack thereof) of open-source platforms or whether OpenAI conducted sufficient testing.

The fact is that ChatGPT is here to stay. As potential users, and as a society that will reap the benefits of this technology or suffer its consequences, we have the power to contribute use cases that will help these applications be the best they can be, and to insist that they remain subject to the highest levels of oversight.

We need to put aside our fears.

In the words of Dr. Kai-Fu Lee, one of the world’s leading experts in Artificial Intelligence, from his book AI 2041: “We are the masters of our fate, and no technological innovation will ever change that.”

The case for Ethical and Trustworthy AI.

Just a quick FYI: I volunteer with Women in AI as a Futurist in Ethics and Culture. Women in AI (WAI) is a nonprofit do-tank working towards inclusive AI that benefits global society. I also work with Advancing Trust in AI as a founding member and Senior Ambassador. Advancing Trust in AI’s mission is to bring awareness to the multifaceted elements required for creating trust in AI.

Ethical and trustworthy AI is something that’s on my mind a lot (no surprise). I think we all, in this industry, have both the right, and the responsibility, to make contributions of value to the field.

Here are the examples I created in ChatGPT.

I entered, “Understanding ChatGPT for Adjunct Faculty in Higher Learning”. You can see that entry by my initials. ChatGPT’s response is right below that.

An image of the conversation from ChatGPT with the text “Understanding ChatGPT for Adjunct Faculty in Higher Learning” and ChatGPT’s response.

I entered, “How Adjunct Faculty in Higher learning can prevent cheating with ChatGPT”.

An image of a text conversation with ChatGPT for “How Adjunct Faculty in Higher Learning can prevent cheating with ChatGPT”.

I entered, “Can Adjunct Faculty in higher learning detect the use of ChatGPT?” Oops…something happened, and no response was generated.

An image of a ChatGPT conversation that returned an error message to the text “Can Adjunct Faculty in higher learning detect the use of ChatGPT”.

I entered, “Will ChatGPT be able to replace human educators?”

A ChatGPT conversation for the text “Will ChatGPT be able to replace human educators?”

The longest response I got from ChatGPT was about a page. I entered, “Write a compelling essay on how ChatGPT can't replace human educators”.

An image of the ChatGPT conversation for the text “Write a compelling essay on how ChatGPT can't replace human educators”.

Even though it doesn’t look like you can get an entire research paper from ChatGPT, interacting with this system is empowering! Now I’m not just talking about ChatGPT without understanding what it is; I’m working with it.
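If you would rather script experiments like these than type each prompt into the web interface, the same prompts can be sent through OpenAI’s API. The sketch below assumes the pre-1.0 openai Python package as it existed around the time of writing; the exact call signature, available model names, and costs depend on your account and library version.

# A minimal sketch of running the same prompts through the OpenAI API instead
# of the web interface. Assumes the pre-1.0 "openai" Python package; newer
# versions of the library use a different interface.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; use your own key

prompts = [
    "Understanding ChatGPT for Adjunct Faculty in Higher Learning",
    "How Adjunct Faculty in Higher Learning can prevent cheating with ChatGPT",
    "Can Adjunct Faculty in higher learning detect the use of ChatGPT?",
]

for prompt in prompts:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"PROMPT: {prompt}\n{response.choices[0].message.content}\n")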

Something new: OpenAI now has an AI classifier tool. This tool is designed to tell the difference between text written by humans and text written by AI tools, including ChatGPT.

I entered a portion of this paper into the classifier tool. According to the classifier tool, the text I entered was very unlikely AI-generated.

An image of text from this paper entered into the AI classifier tool, indicating this text is “unlikely AI-generated”.

I also entered the ChatGPT essay response into the AI classifier tool. According to the classifier tool, the text I entered is possibly AI-generated.

An image of the ChatGPT essay response entered into the AI classifier tool, indicating the text is “possibly AI-generated”.

Here is what the citation credit for the classifier looks like:

An image of the citation credit for the classifier (code)

This is just what you would put on a webpage. Since I’m not posting that way, I’ve just included it as a text box.

We’ve gotten through some of the technical “weeds” of the discussion, and we have a general understanding of ChatGPT...so let’s now talk about solutions.

1) Know your students.

2) Be a subject matter expert on the topics you teach.

3) Use the tools we have available, as educators, to verify student work.

4) Understand that some students will cheat, and cheat successfully, regardless of the use of ChatGPT. Plagiarism started long before ChatGPT. ChatGPT is just another tool that SOME people will use to try to “get over” on a system. Would you blame a hammer for a nail being in the wrong place? It’s not the tool that’s the problem.

5) Join the conversation. Learn about these emerging technologies and submit use cases that you come across or that you think are necessary for the application to learn.

6) Are you listening to people talk about ChatGPT? There’s a LOT of fearmongering. Ask yourself: are they actually working with the platform? Are they giving examples? If they aren’t, then I don’t see the value in listening to those voices. You can’t be a subject matter expert on something you know nothing about. That’s the ultimate contradiction.

7) ANYONE can create and use an account with ChatGPT. I encourage anyone and everyone to do that. To contribute to it, to criticize it (with relevant examples) and to challenge it to do better and to be better.

I’ve learned so much just doing the research for this article. That research has even changed my opinion about ChatGPT. I’m embarrassed to say that I was on the fearmongering bandwagon until I did some research and some testing.

*It’s noteworthy that I did NOT specify a gender in the text for the DALL-E 2 image, yet the image that was generated is clearly a female representation. You can read more about how this can perpetuate gender inequality in my article on LinkedIn, “Does the AI preference for a female voice and appearance perpetuate gender inequality?”
