AI Detector Flags 82% of John F. Kennedy Speech as AI-Written
Muricu Toni
Editor-in-Chief of Family Office Magazine & Events and Art & Museum Magazine. Director of FAMCON (Family Office Conference). PR for The Art Market: A Concise Guide for Professionals and Collectors, written by Ty Murphy
John F. Kennedy's address on the nation's space effort at Rice University, 12 September 1962
The Rise of the Machines
Artificial intelligence is driving a significant revolution in education. As AI writing assistants grow more advanced, the distinction between human creativity and machine-generated work is becoming less clear. This has sparked apprehension over plagiarism, especially in academic environments that place the utmost importance on originality.
To address this issue, colleges have adopted AI detection technologies. These programmes evaluate writing patterns, looking for statistical irregularities and stylistic inconsistencies that could suggest the presence of AI-generated content. Although the objective of guaranteeing academic integrity is commendable, the accuracy of these tools has come under scrutiny. We ran John F. Kennedy's renowned 1962 address at Rice University on the U.S. space programme through one such tool, which identified the speech as 84% AI-generated. This casts doubt on the 98% accuracy rate these programmes claim, given that they are prone to false positives triggered by texts as varied as Bible verses and Obama speeches.
Although institutions frequently claim a false positive rate as low as 2%, the repercussions for students can be significant. Suppose 2% of a population of 80,000 students were falsely accused of cheating: that is 1,600 students. In that case, the harm to human well-being would outweigh the perceived benefits of AI's efficiency.
The problem extends beyond individual occurrences. Grammar tools such as Grammarly can confuse AI detectors, adding further complexity. The Teaching Centre has concluded, based on its professional judgement, that existing AI detection software is not sufficiently trustworthy. Its position emphasises the moral issues these tools raise: student privacy is put at stake by reliance on flawed technology.
At first glance, AI detection tools seem simple: analyse a student's work and cross-reference it against an extensive text database to spot the distinctive characteristics of machine authorship. In practice, the method is more intricate. These technologies use sophisticated algorithms to analyse patterns in text, searching for statistical anomalies such as atypical word frequencies or sentence structures that diverge from conventional human writing. They also look for stylistic incongruities, such as sudden changes in tone, voice, or formality, which can betray an AI's shallow comprehension.
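To make "statistical anomalies" concrete, here is a minimal Python sketch of one signal detectors are commonly reported to use: burstiness, the variation in sentence length. The function names and sample passages are our own illustrative assumptions, not any vendor's actual algorithm.

```python
# A toy illustration of "burstiness": the variation in sentence length.
# Human prose tends to vary its rhythm; machine prose is often more uniform.
# This is an assumption-laden sketch, not any vendor's actual algorithm.

import re
import statistics

def sentence_lengths(text: str) -> list[int]:
    """Split text on sentence-ending punctuation and count words per sentence."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return [len(s.split()) for s in sentences]

def burstiness(text: str) -> float:
    """Standard deviation of sentence length; low values look 'machine-like'."""
    lengths = sentence_lengths(text)
    return statistics.stdev(lengths) if len(lengths) > 1 else 0.0

varied = ("The launch was delayed twice. Engineers, working through the night "
          "on a frozen pad in Florida, finally cleared it. It flew.")
uniform = ("The rocket was launched on time. The mission was completed as planned. "
           "The crew returned home safely. The data was analysed by the team.")

print(f"varied prose:  burstiness {burstiness(varied):.1f}")   # roughly 6
print(f"uniform prose: burstiness {burstiness(uniform):.1f}")  # under 1
```

Real detectors combine many such signals, but the principle is the same: uniformity reads as machine-like, whatever its actual cause.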
AI detection systems still provide advantages. They can identify probable instances of plagiarism that may go undetected by conventional methods, particularly when dealing with advanced AI-generated material. This aids universities in maintaining academic integrity and deterring students from exploiting AI shortcuts.
The false positive on John F. Kennedy's renowned speech is a prominent illustration. The address, powerful and thoroughly documented, was flagged as AI-generated by a detection tool. Likewise, false positives have caught impactful speeches by Barack Obama and enduring verses from the Bible.
What causes these errors? The limitations of AI detection algorithms are a contributing factor. These systems are trained on extensive datasets, yet human language is naturally varied and intricate. Repetitive phrase structures used for emphasis, or the specialised jargon of a given field, can confuse the algorithms. In essence, the very characteristics that give human writing its force can trip up AI detection, resulting in embarrassing and potentially harmful false accusations.
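The burstiness sketch above makes the point vividly: deliberate rhetorical repetition, a hallmark of great speechwriting, produces exactly the low-variance fingerprint a naive statistical test treats as machine-like. The passage below is invented in the style of the Rice University address, purely for illustration.

```python
# Reusing the burstiness idea: parallel rhetorical structure, the engine of
# memorable oratory, scores as "uniform" and therefore as suspicious.

import re
import statistics

def burstiness(text: str) -> float:
    lengths = [len(s.split()) for s in re.split(r"[.!?]+", text) if s.strip()]
    return statistics.stdev(lengths) if len(lengths) > 1 else 0.0

# Invented lines in the style of the Rice University address, for illustration.
rhetorical = ("We choose to go to the moon. We choose to meet the challenge. "
              "We choose to do the hard things. We choose to win this decade.")

print(f"rhetorical repetition: burstiness {burstiness(rhetorical):.1f}")  # near zero
```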
The high incidence of such misfires raises significant doubts about the overall reliability of AI detection technologies. If a program cannot distinguish a historical speech from AI-generated text, its reliability in identifying student plagiarism is questionable, and the risk of false accusations is substantial. Imagine a diligent student pouring effort and passion into an essay, only to have it flagged by an untrustworthy AI tool. The ensuing inquiry and possible sanctions could be catastrophic.
In addition to the immediate effects on students, significant ethical problems loom. These technologies depend on examining student work, which raises questions about student privacy. Are we confident entrusting such sensitive data to possibly imperfect technologies? Moreover, using flawed technology for such crucial judgements creates a precarious situation. Should the caprices of an algorithm determine a student's academic prospects?
The emergence of "AI humanisers", programmes specifically developed to strip the distinctive characteristics from AI-generated text, introduces an additional level of intricacy. These humanisers work by modifying sentence structure, substituting synonyms, and imitating the idiosyncrasies of human writing. Although their effectiveness varies, their existence underscores the limits of present detection approaches. To keep up with the ever-evolving methods of making AI output more human-like, detection systems would need constant enhancement.
However, concentrating exclusively on this technological game of evasion may prove futile. Universities should adopt a multifaceted strategy, treating AI detection as a supplementary tool alongside other measures. Human evaluation remains essential for assessing the calibre of student work and detecting plagiarism that lies beyond AI detection's capabilities.
Universities find themselves in a hazardous predicament: they must uphold academic honesty while grappling with the complexities of AI detection. A recent Bloomberg story highlights the problem, citing Turnitin as stating that "the rate at which it incorrectly identifies writing as AI-generated varies depending on the specific task." Evaluating an entire document for AI writing yields a false positive rate of 1%, whereas judging whether a specific sentence is AI-generated yields a false positive rate of 4%.
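Those two figures compound in a way worth spelling out. If the 4% sentence-level rate applied independently to every sentence, an assumption of ours for illustration rather than a claim about Turnitin's actual behaviour, the chance that an entirely human-written essay contains at least one falsely flagged sentence grows quickly with length:

```python
# If each sentence were independently mis-flagged 4% of the time (the
# sentence-level rate quoted above), the chance that a fully human-written
# essay contains at least one false flag grows quickly with essay length.
# Independence is our simplifying assumption, not a fact about any detector.

per_sentence_fp = 0.04

for n_sentences in (10, 25, 50):
    p_any_flag = 1 - (1 - per_sentence_fp) ** n_sentences
    print(f"{n_sentences:>2} sentences -> {p_any_flag:.0%} chance of at least one false flag")
```

On these assumptions, a 25-sentence essay would stand roughly a 64% chance of containing at least one falsely flagged sentence, which is why the document-level and sentence-level rates tell very different stories.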
Software vendors have a significant role to fulfil. To enhance the precision of their detection algorithms, they must invest in larger and more diverse training datasets. In addition, a transparent rating system for accuracy, akin to Turnitin's published false positive rates, would enable colleges to make well-informed choices.
Nevertheless, placing absolute trust in AI detection is an imperfect strategy. Universities should also investigate alternative approaches, such as conducting source code audits and utilising originality reports.
Developing students' critical thinking abilities is crucial in the battle against plagiarism. The temptation to optimise education with AI may be powerful, but substituting it for human discernment in such intricate matters carries a significant price. The JFK speech incident demonstrates that AI detection systems are not infallible. Universities should stress precision and impartiality rather than unquestioning dependence on technology. The future of education depends on striking a balance between technological advancement and human supervision.
The recent example involving Marley Stevens at the University of North Georgia shows how susceptible AI detection technologies are to error. Stevens, a third-year student, has publicly denounced what she characterises as a chaotic situation following allegations that she used artificial intelligence to write a report. She maintains that she completed the work herself, aided only by the conventional grammar and spell-checking features of Grammarly, which she accessed through a browser plugin. This incident underscores persistent worries about the precision and dependability of AI detection in academic environments.
Does using Grammarly pose a significant risk for students? Grammarly points out that even if an AI-detection system is right 98 per cent of the time, it falsely flags 2 per cent of papers. Since a single university may receive 50,000 student papers a year, if every professor used an AI detection system, roughly 1,000 papers would be falsely labelled as cheating.
Turnitin has publicly stated that its AI-detection tool is not always reliable and that educators should use Turnitin’s detection system as a starting point for a conversation with a student, not as a final ruling on the academic integrity of the student’s work.
Given the inherent uncertainty of AI detection tools, universities must exercise caution when using this technology to police academic integrity. Because these tools are not infallible, relying on them as the sole basis for cheating allegations would be fundamentally flawed and unjust. Educational institutions must adopt a more nuanced approach, acknowledging the potential for error and requiring multiple forms of verification before taking serious actions such as misconduct accusations.
#aidetection #AI #ChatGPT #JFK #bbc #bbcnews #NBCNews #CNN #harvarduniversity #bppuniversity #UOL #lawyer #Grammarly #Turnitin #bpp #cnbcnews #nytimes #washingtonpost #washpost #foxnews #foxandfriends #googlenews #TheEconomist #apnews #ap #forbes #forbesbusiness