A Looming Crisis: When Did We Stop Checking Our Sources with AI?

In an age where artificial intelligence (AI) is seamlessly woven into the fabric of our professional lives, a silent crisis may be unfolding. The convenience and efficiency of AI tools, particularly language models like GPT-4, have lulled us into a dangerous complacency. When did we stop checking our sources? This is not only a rhetorical question but a pressing warning about the erosion of critical thinking and diligence in our work.

The Illusion of Infallibility

Language models often boast high accuracy rates, but those rates vary depending on the task and the evaluation metrics used. GPT-4, for instance, demonstrated impressive capabilities that have grown with each successive model, yet its accuracy is not absolute (Bubeck et al., 2023). A 95% accuracy rate may seem reassuring, but a 5% error margin can have catastrophic consequences, especially when misapplied in sensitive contexts. At what point do we decide that language models, and other AI-powered systems, are accurate enough to trust with life-altering decisions (Defense Advanced Research Projects Agency, 2024)?

One of the most concerning issues is the phenomenon of "hallucinations" in AI outputs. These are confidently presented statements that are entirely false or fabricated (IBM, 2021). Without vigilant verification, such misinformation can slip into reports, analyses, and decisions, undermining the integrity of our work.

The Dire Consequences of Complacency

The failure to rigorously check AI-generated information has real-world implications:

Medical Misdiagnoses: In healthcare, unverified AI outputs can lead to incorrect treatment plans, endangering patient lives. In these scenarios, who is responsible for medical malpractice and its life-changing results (Dinerstein, 2024)?

Financial Disasters: Inaccurate data can result in poor investment decisions, regulatory breaches, and significant financial losses. This brings to mind situations such as the flash crash in the U.S. equity market in 2010, the flash rally in the U.S. treasury market in October 2014, and the more recent sell-off in Japanese and U.S. equity markets in August 2024 (Adrian, 2024).

Engineering Failures: Small errors in technical calculations can lead to flawed designs, resulting in structural failures or technological malfunctions. Anyone who has watched the national news in 2024 is likely aware of the challenges, including collisions, arising from the ongoing development of autonomous vehicles (Guardian News and Media, 2024).

How do outcomes change when these challenges are scaled up? I support the capabilities of nuclear energy, for example, and believe it has a place in a future that needs expanded, cleaner energy sources. Yet given the fear associated with that industry, how much AI automation can be accepted in scenarios like Three Mile Island? Without Human Intelligence (HI) in the loop, are we trading human oversight for merely digital oversight?

A Growing Skill Gap in AI Literacy

Compounding the concern over merging AI and HI is a troubling shortage of workers trained for future industries and a lack of AI literacy among existing professionals. Despite AI's pervasive presence, many lack the understanding needed to use these tools responsibly:

  1. Growing Skill Gap in AI Literacy: There is a significant shortage of AI literacy among existing professionals. Despite the increasing presence of AI, many individuals lack the understanding needed to use these tools effectively, securely, and responsibly. This gap is exacerbated by educational shortcomings and industry demand outpacing the supply of qualified candidates (World Economic Forum, 2023).
  2. Educational Shortcomings: There is a noted lack of accessible education focused on AI proficiency and ethics, particularly for individuals not experienced in finding free or low-cost courses. This educational gap hampers professionals' ability to use AI technologies effectively (Bergson-Shilcock, 2021).
  3. Industry Demand Outpacing Supply: Job listings for AI roles have doubled since 2019, but the pool of qualified candidates remains insufficient. This skills gap not only hampers innovation but also increases the risks of unchecked AI reliance. Without adequate expertise, professionals are ill-equipped to identify and correct AI inaccuracies (World Economic Forum, 2023).
  4. Impact on Innovation and Risk: The skills gap in AI literacy is a significant barrier to innovation and poses risks due to the potential for AI inaccuracies. Industry experts emphasize the need for improved AI education and training to bridge this gap and ensure responsible use of AI technologies (Snyder, 2024).

Eroding Skepticism: A Dangerous Trend

Our collective skepticism—a crucial safeguard against error—has waned. The ease of obtaining quick answers from AI has overshadowed the need for critical evaluation. This trend is particularly evident in education:

Student Mistrust: Many students question the value of expensive degrees, expressing skepticism about the motivations of educational institutions and wondering whether the investment aligns with real-world demands. Recent graduates worry whether the skills they obtained are relevant, either in finding gainful employment or in new roles where they see incoming colleagues already possessing those skills (Bindley & Pisani, 2024).

Faculty Resistance: Educators often distrust the authenticity of student work in the AI era, with some dismissing AI as a passing fad rather than embracing it as an essential tool.

This mutual mistrust hinders the development of AI literacy and prevents the cultivation of a workforce capable of navigating AI's complexities.

A Modern-Day Technological Babel

We are witnessing the construction of a technological Tower of Babel—a grand endeavor fueled by ambition but lacking foresight. In the biblical narrative, humanity's attempt to reach the heavens resulted in confusion and disarray due to their hubris. Similarly, our unbridled advancement in AI without adequate checks and balances threatens to make us dependent on AI when we should be using it to augment our skills and push the limits of our capabilities.

The proliferation of AI models trained on AI-generated data creates the possibility of a feedback loop of inaccuracies and biases if models are not trained properly. Both Claude 3.5 Sonnet and Llama 3.1 have been trained partly on synthetic data, and OpenAI has sourced synthetic training data from its o1 "reasoning" model for Orion (Wiggers, 2024). Gartner predicted that by the end of this year, 60% of all AI training data will be synthetic, up from 1% in 2021 (Morrison, 2023). Without a certain level of understanding of what this could mean, we risk building knowledge on a shaky foundation, destined to collapse under its own weight.

A Personal Call to Vigilance

As a coach and professor, I am acutely aware of these dangers. In my practice:

I prioritize tools that provide verifiable sources, such as Microsoft Copilot, enabling me to cross-check and confirm information.

I leverage my expertise, supported by research, to critically assess AI-generated content, ensuring it aligns with established knowledge.

I consult with fellow experts when venturing into unfamiliar territory, recognizing the limits of both AI and my understanding.

I recall preparing a lecture for a class that involved a complex topic at its core. To help me brainstorm ways to simplify the concept, I used an LLM to break the topic down. The tool offered a compelling explanation, and my initial impression was that it was accurate. However, in reviewing the output, I discovered subtle yet significant errors. Had I not checked, I could have inadvertently misled my students or been forced to make corrections mid-lecture—a stark reminder of the importance of diligence.

Bridging the AI Literacy Gap

Addressing this crisis requires a concerted effort to enhance AI literacy:

Educational Initiatives: Institutions must offer accessible programs focused on AI proficiency and ethics. For example, The University of Texas at Austin has introduced an affordable, scalable online master's program in AI, priced at $10,000, aiming to democratize AI education and meet industry demand (Burkhart, 2023; U.S. Department of Education, 2024). At the same time, Martin Brossman, Emery Carr, and I have developed a more affordable, 8-week professional course titled "AI for Business Professionals," which has been successfully offered with highly favorable reviews.

Ethics at the Forefront: Incorporating ethics education equips professionals to navigate the moral dilemmas posed by AI. At a time when business ethics increasingly carries a negative tone in the modern marketplace, how can we guarantee that leaders will use AI-powered tools responsibly? Do we have the capability to truly govern AI corporately and ensure it is used responsibly?

Continuous Learning: Professionals should engage in lifelong learning to stay abreast of AI advancements and challenges (MIT Open Learning, 2024; Sadun, 2023). If we, as educators, are expected to train current students, returning alumni, and continuous learners in AI literacy, we should be literate ourselves.

An Urgent Warning and a Call to Action

We stand at a crossroads. The unchecked acceptance of AI outputs without verification is a ticking time bomb. The lack of qualified, AI-literate professionals exacerbates this risk, leaving industries vulnerable to errors that could have been prevented with proper expertise and scrutiny.

We must act now:

Rekindle Our Skepticism: Embrace critical thinking and the need for accuracy as a non-negotiable aspect of professional practice.

Invest in Education: Support and participate in initiatives that enhance AI literacy and ethical understanding.

Foster Collaboration: Share best practices across industries to develop robust standards for AI use and verification.

Conclusion: Reaffirming Our Commitment to Integrity

The promise of AI is immense, but so are the perils of its misuse. Let us heed this warning and recommit to the principles of accuracy, verification, and ethical responsibility. We should not allow the convenience of technology to erode our professional integrity. Our ethics today safeguard the trust and excellence that define our professions tomorrow.

Transparency: With a little help from AI, I crafted this article and made sure to verify all the information presented.


References

Bergson-Shilcock, A. (2021). Nearly 1 in 3 workers lack foundational digital skills, new report finds. National Skills Coalition. https://nationalskillscoalition.org/blog/future-of-work/nearly-1-in-3-workers-lack-foundational-digital-skills-new-report-finds/

Bindley, K., & Pisani, J. (2024). Tech jobs have dried up—and aren’t coming back soon. Wall Street Journal. https://www.wsj.com/tech/tech-jobs-artificial-intelligence-cce22393

Bubeck, S., Chandrasekaran, V., Eldan, R., Gehrke, J., Horvitz, E., Kamar, E., Lee, P., Lee, Y. T., Li, Y., Lundberg, S., Nori, H., Palangi, H., Ribeiro, M. T., & Zhang, Y. (2023). Sparks of artificial general intelligence: Early experiments with GPT-4. arXiv.org. https://arxiv.org/abs/2303.12712v5

Burkhart, R. (2023). AI master's program launches with ability to serve thousands. UT News. https://news.utexas.edu/2023/01/26/ai-masters-program-launches-with-ability-to-serve-thousands/

Defense Advanced Research Projects Agency. (2024). Voices from DARPA Podcast Episode 83: When should machines decide? https://www.darpa.mil/news-events/2024-10-24

IBM. (2021). Understanding AI hallucinations. https://www.ibm.com/

MIT Open Learning. (2024). Ethics and AI-powered learning and assessment. MIT Open Learning.

MIT Sloan Management Review. (2020). When AI misleads: How to spot and correct misinformation. https://sloanreview.mit.edu/

Morrison, R. (2023). Most AI training data could be synthetic by next year, Gartner says. Tech Monitor. https://www.techmonitor.ai/digital-economy/ai-and-automation/ai-synthetic-data-edge-computing-gartner

Sadun, R. (2023). How to reskill your workforce in the age of AI. Harvard Business Review. https://hbr.org/2023/08/how-to-reskill-your-workforce-in-the-age-of-ai

Snyder, L. (2024). Report investigates workforce implications of AI. Carnegie Mellon University News. https://www.cmu.edu/news/stories/archives/2024/november/report-investigates-workforce-implications-of-ai

U.S. Department of Education. (2024). Artificial intelligence (AI) guidance. U.S. Department of Education.

Wiggers, K. (2024). The promise and perils of synthetic data. TechCrunch. https://techcrunch.com/2024/10/13/the-promise-and-perils-of-synthetic-data/

World Economic Forum. (2023). The future of jobs report 2023. https://www.weforum.org/publications/the-future-of-jobs-report-2023/


Some of our AI Trainings:

AI for Professionals 8-Week Certificate Training: https://www.martinbrossmanandassociates.com/ai-training-for-professionals/

Unlock the Power of Artificial Intelligence to Supercharge Your Small Business Operations and Marketing Success https://www.ncsmallbusinesstraining.com/unlock-the-power-of-artificial-intelligence-to-supercharge-your-small-business-operations-and-marketing-success/

Our Small Business AI certificate program https://www.ncsmallbusinesstraining.com/ai-powered-business-solutions-certificate-for-small-business-owners/

Professional AI training https://martinbrossmanspeaks.com/artificial-intelligence-ai-machine-learning-ml-and-the-4th-industrial-revolution-are-you-ready/


Dan Matics

Senior Media Strategist & Account Executive, Otter PR

2 months ago

Great share, Justin!

Jennifer Ewing

Library Director at Southern California Seminary

3 months ago

Thank you for the article. I started my career as a librarian 30 years ago, at the start of the publicly accessible internet (www and web browsers). Sometimes I feel like I've been spitting into the wind. Librarians have been teaching students to evaluate their sources for the last few decades (and probably before that?). Why has it taken the faculty so long to see the importance of this? I am glad they are finally seeing beyond crowdsourcing/authority in Wikipedia. When I ask faculty if they assess a student's sources in the rubric, usually they don't...they are more interested in their line of thinking/argument.

Sean Whidbee

Information Technology Technician and Owner of Sean’s Custom Detailing Service

3 months ago

I thoroughly enjoyed this read and am now pursuing a degree in interdisciplinary studies, with the long-term objective of educating our youth on the significance of critical thinking.

Great read. I loved how you called out the underlying complacency that is being developed when we outsource critical thinking to AI. The loss of critical thinking is subtle, gradual, and palpable.

Martin Brossman

Results Driven Success Coach, Speaker, Author, Social Media, and Social Selling Trainer

3 months ago

Excellent!
