Ethically Using ChatGPT as Part of Social Innovation Research
The academic community is divided over whether ChatGPT and other generative AI should be used in research. At Global UniversityONE, however, we recognize the potential value of AI to humanity, but we also insist that it be used ethically.
We hold this position because supporting social innovation is a core value and competency of our organization; it would be contradictory for us not to encourage the use of what is potentially one of the greatest social innovations of the 21st century so far.
But how it is used matters. First, expecting it to come up with social innovations at this point is misguided. Perhaps some day it will, but right now it can mostly only restate current knowledge, and even there it is often faulty. I have often asked ChatGPT to help me find resources for my research, and while it is far better than traditional search engines at taking a concept and finding related terms, it also frequently "hallucinates" resources, giving book and article titles that don't actually exist.
It also currently tends to write fairly dull and uninspiring text. This can be improved by giving it more direction, especially by assigning it a role or asking it to write in the style of a particular person. I personally feel that asking it to write in the style of another person crosses the line into plagiarism, but asking it to write in a particular style for a specific purpose, or while playing a specific role, does not.
So what are some good uses for generative AI? I use ChatGPT to help me with my aphasia. Since I was young, I have been unable to say certain words at certain times. Everyone has moments when a word feels like it's on the tip of their tongue but they can't recall it, or they can't remember someone's name. I have this issue more frequently and severely than most people. For example, when I was a teenager I forgot my sister's name while introducing her to someone, and had to ask my sister her name!
So in writing the article you are reading, I knew there was a word for the sentence that had: "it would be _____ for us to not encourage its use". I was fairly sure there was a word that would be right, but at the time I could only think of "paradoxical" or "hypocritical", and neither of those were what I meant. So when I gave ChatGPT the full paragraph, with the _____ in it for where I wanted the word, and asked for suggestions, it gave me "contradictory", which was exactly the word I was looking for. In this case, I don't believe it is unethical to use ChatGPT without attribution, as I used it as an advanced dictionary/thesaurus.
ChatGPT is also very useful for brainstorming. For example, when creating a social innovation, it needs a name. ChatGPT has been helpful in brainstorming names, exploring synonyms as well as connotations. It is also fairly good at helping construct a backronym.
Another area of brainstorming is the ideas themselves, but a few things need to be considered. First, ChatGPT appears to have a bias toward agreeability, so unless you specifically ask it to play "devil's advocate" it won't always do that well. Also, while it can be very creative on a surface level, such as writing in the style of Shakespeare, when it comes to the deeper, broader creativity that underlies social innovation, it has a bias toward the status quo.
So, our stance on using ChatGPT and other generative AI for research is similar to our stance on using Wikipedia: it is a good starting place, but you must dig deeper to be sure it is accurate. For writing, our stance is that it is acceptable to brainstorm using a generative AI, as long as this is noted. We do not yet have a full position on using ChatGPT or other such tools for the writing itself, so for now it will be considered on a case-by-case basis.