The Era of AI: Urgent Call for Real Experts
Alberto A.
Learning & Organizational Development Expert | Adjunct Professor: Communication Studies | Consultant (formerly at Intuit, Meta, Twitter, PIMCO) | Johns Hopkins CTY Alum | AI Nerd
Asking an AI to fact-check itself is like asking a mime for directions—amusing but ultimately unhelpful.
The rapid advancement of generative artificial intelligence (AI) has transformed content creation, making it easy to produce large volumes of text, images, and other media. However, the same technology that democratizes content production also amplifies the risk of misinformation. Because the public often lacks the background to evaluate complex subjects, there is a critical need for true expertise to vet the accuracy of AI-generated content and to guide the dissemination of reliable information.
The Proliferation of AI-Generated Misinformation
Generative AI systems, such as OpenAI's GPT models, can produce human-like text that is difficult to distinguish from content written by actual humans (Brown et al., 2020). These systems are trained on vast datasets that contain accurate information alongside substantial amounts of erroneous or misleading content, so their outputs can inadvertently perpetuate and spread misinformation. The scale at which AI can generate content compounds the problem, allowing misinformation to proliferate rapidly and widely (Zellers et al., 2019).
The Role of True Experts in Vetting AI Content
True experts—individuals with deep knowledge and extensive experience in specific fields—play a crucial role in identifying and correcting misinformation. Unlike AI, which lacks the ability to critically evaluate the veracity of its outputs, human experts possess the nuanced understanding necessary to discern factual information from falsehoods. Their expertise is essential in assessing the credibility of sources, the accuracy of data, and the soundness of arguments presented in AI-generated content (Lewandowsky et al., 2021).
Methods for Experts to Vet AI-Generated Content
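One way such methods are often described is as a triage step: checkable claims in AI-generated text (specific figures, absolute assertions) are flagged and routed to a human expert, while the expert's judgment, not the software, decides what is accurate. The sketch below is purely illustrative; the heuristics, function names, and fields are hypothetical assumptions, not a production fact-checking system or a method from the cited works.

```python
from dataclasses import dataclass, field

# Hypothetical triage sketch: flag AI-generated sentences for expert
# review when they contain checkable specifics. The word list and rules
# are illustrative placeholders, not a real fact-checking heuristic.
ABSOLUTES = {"always", "never", "proves", "guarantees", "all", "none"}

@dataclass
class Claim:
    text: str
    needs_expert: bool
    reasons: list = field(default_factory=list)

def triage(sentence: str) -> Claim:
    reasons = []
    # Numbers usually signal a verifiable figure (dates, percentages).
    if any(ch.isdigit() for ch in sentence):
        reasons.append("contains a figure to verify")
    # Absolute wording is a common red flag for overclaiming.
    words = {w.strip(".,;:").lower() for w in sentence.split()}
    if ABSOLUTES & words:
        reasons.append("absolute wording")
    return Claim(sentence, needs_expert=bool(reasons), reasons=reasons)

def review_queue(sentences):
    # Only flagged claims reach the human expert; the rest pass through.
    return [c for c in map(triage, sentences) if c.needs_expert]

if __name__ == "__main__":
    draft = [
        "The model was trained on diverse text.",
        "The vaccine always prevents infection.",
        "Revenue grew 40% in 2020.",
    ]
    for claim in review_queue(draft):
        print(claim.text, "->", claim.reasons)
```

The point of the sketch is the division of labor: software can cheaply surface *candidates* for verification, but the final judgment of accuracy stays with the expert, which is exactly the role this article argues cannot be automated away.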
The Challenge of "Unknown Unknowns"
A significant obstacle in combating misinformation is that individuals often do not know what they do not know. This concept, known as "unknown unknowns," refers to the gaps in knowledge that individuals are unaware of, which prevents them from seeking or recognizing accurate information (Dunning, 2011). In the context of AI-generated content, this lack of awareness can lead to the uncritical acceptance and dissemination of misinformation.
Addressing the Knowledge Gap
The rise of generative AI and the corresponding influx of misinformation highlight the indispensable role of true expertise in today's information landscape. Experts are essential in vetting the accuracy of AI-generated content, providing the critical analysis needed to discern truth from falsehood.
Addressing the challenge of "unknown unknowns" through public education and expert involvement is crucial in mitigating the spread of misinformation. As generative AI continues to evolve, the collaboration between AI technologies and human expertise will be paramount in ensuring the integrity of information disseminated to the public.
References
Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., ... & Amodei, D. (2020). Language models are few-shot learners. Advances in Neural Information Processing Systems, 33, 1877-1901.
Dunning, D. (2011). The Dunning–Kruger effect: On being ignorant of one's own ignorance. Advances in Experimental Social Psychology, 44, 247-296.
Fischer, F., Kollar, I., Stegmann, K., & Wecker, C. (2013). Toward a script theory of guidance in computer-supported collaborative learning. Educational Psychologist, 48(1), 56-66.
Lewandowsky, S., Cook, J., Schmid, P., Holford, D. L., Finn, A., Leask, J., ... & Lombardi, D. (2021). The COVID-19 vaccine communication handbook: A practical guide for improving vaccine communication and fighting misinformation. Frontiers in Psychology, 12, 745-762.
Thorne, J., & Vlachos, A. (2021). Evidence-based verification of information: A survey of context and challenges. ACM Computing Surveys, 53(6), 1-34.
Zellers, R., Holtzman, A., Rashkin, H., Bisk, Y., Farhadi, A., Roesner, F., & Choi, Y. (2019). Defending against neural fake news. Advances in Neural Information Processing Systems, 32, 9051-9062.