AI and Plagiarism
The development of ChatGPT and other large language models (LLMs), colloquially referred to as AI, has stimulated discussion on the ethics of their usage in education in general and research in particular. These models mimic, and sometimes seem to exceed, human language capabilities. Hence, the originality of text generated with the assistance of AI is in question. The dominant school of thought holds that unacknowledged usage of AI is plagiarism. In contrast, some proponents of AI usage argue that there is nothing wrong in using it and that the current opposition merely reflects the innate tendency of human beings to resist change. They expect that, in time, the usage of AI tools will become normalized, much like the use of the printing press and computers.
I belong to the school of thought that construes significant AI usage as plagiarism, where plagiarism is defined as passing off someone else's mental effort as your own. That someone need not be a conscious being. The only criterion that needs to be fulfilled is that the output does not essentially belong to the author. Whether the output belongs substantially to the author or not can be determined only by comparing the final text with the author's initial version. In the case of an LLM, the author's initial version is the prompt that is fed to the model.
Plagiarism as a concept assumes an ontological difference between essence and embellishment. Copy-editing, proofreading, and formatting are not considered plagiarism because they are supposed to embellish the text without changing its essence. Hence, one way to judge whether the use of AI constitutes plagiarism is to compare the prompt with the final text. To reduce subjectivity, one can give both the prompt and the final text to a group of people and ask them to judge whether the text is merely an embellishment of the prompt or is essentially different. If they judge the two to be essentially different, the usage of AI constitutes plagiarism whenever the author of the prompt passes the text off as his or her own.
Is there any ethical way out of this dilemma? One way is to grant authorship credit to the LLM as well. However, as an LLM cannot accept the responsibilities associated with authorship, this is not an option. Another option is to grant authorship credit to the firm that launched the LLM. Besides the philosophical debate over attributing authorship to an organization, the practical question of assigning authorship to someone who did not ask for it also crops up in this case. Given these problems, I think the best way out is to refine one's own writing skills rather than use LLMs as crutches.
Associate Professor @ XIM University | Fellow (IRMA), CMA
True. A solution could be disclosure of the extent to which AI was used.