Are humans the engineers of prompts or are prompts the engineers of humanity?

"For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise." —Plato, Phaedrus p. 275

Prompt engineering is the practice of composing, refining, and submitting queries to a website or tool (typically a large language model such as ChatGPT) and receiving an automatic, generated, well-rehearsed response based on pre-installed training data. [1] ChatGPT, Bard, Grok and other LLMs are used by millions of people daily for both trivial and complex tasks. [2] I claim in this article that the regular use of LLMs will be detrimental to curiosity, critical thinking and our culture as human beings. [3]

As the quote from Plato shows, concerns like mine have been raised before. [4] What is different about LLMs and artificial intelligence this time? Humans have faced industrial revolutions before: the engine, the phone, social media. Yet we survived and, supposedly, continue to thrive.

Critics of this position, of AI, are derided as 'out of date' or 'scared of change'. They are tasked with proving why this particular industrial revolution is any different, or why it should be taken more seriously. I propose that the changes are stark, that they are alarming, and that they are being completely ignored.

A] THE MECHANICS OF PROMPT ENGINEERING

The translator's edition of Dante's Inferno discusses the difficulty of language, and I paraphrase it here. In essence, a word like 'rose' might evoke feelings of love and romance in one culture; in another, its thorns may be a sign of hostility. [5] When you remove words from their context and history, you are left with the bare bones of a forgotten culture. There is no invitation to interpret, only a command to accept.

As it stands, prompt engineering creates the expectation of reliability. Who are we to question the competence of the programmers who have worked hard to field this input-output tool? [6] I argue that we do not even know what questions to ask. Much of the data on which LLMs are trained is kept hidden. Gone are the days of open-source software. [7] Now, in legal cases that contest this data (allegations of intellectual property theft, or defamation through fabricated scenarios), the expenses of discovery and the burden of proof on victims are astronomical. [8] How can we blindly trust something we do not understand?

Millions of users log into ChatGPT every day to complete their menial tasks. Ideally, these users restrict themselves to trivialities only. Perhaps it was a difficult day, and ChatGPT will finish a task or assignment you could have otherwise done (if only you had the time!). Unfortunately, I suspect that this habit evolves into a complete dependence on the tool. How often have students said, "There's no need to do that revision paper, the memo should be enough," only to be humbled by exams later on? The notion of accepting pre-prepared, well-rehearsed answers from a seemingly omniscient machine has not been examined properly, understood comprehensively or assessed ethically.

A doctoral student or experienced practitioner might be able to discern the inaccuracies or hallucinations in an LLM's output. However, if the excuse for using an LLM is 'efficiency', how would they have the time to critique the answers provided? Further, those not versed in the field accept the answers as provided. It does not matter if these answers are guiding your ship into an iceberg, only that the "job gets done". [9] It is increasingly worrying that the direction of knowledge-creation and research is becoming formulaic: input a question, output an answer. I discuss this abstraction further under heading B.

These mechanics raise additional questions. Is the standard of education so high that students are required to depend on a tool? Is the workforce evolving faster than the law can adapt, so that aspiring candidates must evolve alongside it or be left behind? What is being done to help those who have no access to devices, the internet or LLMs? I will not attempt to answer these questions here.

B] THE CORROSION OF CURIOSITY, CRITICAL THINKING AND CULTURE

The four c's are at the heart of this article. If teachers, mentors and advisors are replaced by a machine, what will the future of humanity look like? The mechanics of prompt engineering show that there is typically no analysis of the answer given; even where there is, in time it will dissipate. As our dependence on LLMs grows, our knowledge-generation will shrink. In sum, if all our graduates draw their inspiration from the same sources used by LLMs, then every source those graduates create in future will be connected, in some way, to standardised ideas shaped by Eurocentric beliefs. Our culture will be clothed in the metallic emptiness of machines, the cold steel of technology.

In the past, we were required to meticulously collect research, ponder over each sentence, and attribute these ideas to human authors. By doing this over days and years, our brains grew new neural pathways and developed techniques that work for us. [10] Now this creativity, this curiosity, has been all but destroyed. In the quest for efficiency, we ignore the unseen benefits of 'outdated' and 'traditional' hard work. While there are few to no studies on the correlation between LLM use and the four c's, I claim that there will be in future. Just because the effects on our general capabilities are not yet apparent does not mean that they are absent. In the future, will we even be aware of what critical thinking abilities we have lost? Will we once again turn to a machine to decipher what parts of ourselves were left behind?

For emphasis, I reiterate here that prompt engineering works through abstraction. A user opens a website, plugs in a query (and, if they have the time, might flavour that query with more nuance) and receives an instantaneous response. This expectation of a guaranteed response, paired with an unsubstantiated belief in the sources that produced it, removes any building of knowledge, any curiosity at all. Merely asking a question without having tried to answer it yourself is not curiosity. Curiosity is the desire to learn something new and to experiment with your own ideas within the context that surrounds the question. LLMs remove that context, ignore your ideas, and satiate the instant expectation of an answer. Ultimately, when your mind is trained to expect an answer, and geared to accept it regardless of its form, something is indeed lost.

C] FUNDAMENTAL ISSUES WITH HOW LLMS OPERATE

First, LLMs have been known to 'hallucinate'. That is, even when a user has the insight to ask for context, the LLM may generate its own scenarios. Sometimes it will name real people and list real cases, yet make absurd claims about what those real people did and what occurred in those real cases. In essence, the LLM blurs the line between reality and fiction, presenting the entirely imagined with complete confidence. [11]

Second, LLMs have hit a kind of invisible wall. Where does an LLM that has trained on all the available human data go next? Recently, ChatGPT and Bard began glitching, behaving in bizarre ways, and providing completely incorrect or irrelevant answers. Allegedly, this is because an LLM cannot usefully train on data created by other LLMs: doing so creates a death spiral of repetitive data that significantly degrades the model. [12]
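The statistical intuition behind this death spiral (termed 'model collapse' in the machine-learning literature) can be sketched with a toy simulation. To be clear, the sketch below is an illustration only, not a claim about how any real system works: 'training' is reduced to fitting the mean and spread of one-dimensional numbers, and each 'generation' of the model learns solely from samples produced by its predecessor.

```python
import random
import statistics

def next_generation(samples: list[float], n: int) -> list[float]:
    """Fit a simple Gaussian 'model' (mean and spread) to the samples,
    then produce a new dataset by sampling from that fitted model.
    This stands in for an LLM trained purely on a predecessor's output."""
    mu = statistics.fmean(samples)
    sigma = statistics.pstdev(samples)
    return [random.gauss(mu, sigma) for _ in range(n)]

random.seed(42)
n = 50
data = [random.gauss(0.0, 1.0) for _ in range(n)]  # generation 0: "human" data
initial_spread = statistics.pstdev(data)

# Each subsequent model trains only on the previous model's output.
for _ in range(1000):
    data = next_generation(data, n)

final_spread = statistics.pstdev(data)
print(f"spread of generation 0: {initial_spread:.3f}")
print(f"spread after 1000 generations: {final_spread:.3f}")
```

On a typical run the spread collapses towards zero: small sampling errors compound at every step, and any variety the model fails to reproduce is lost for good. That, in miniature, is one way of picturing why models fed their own output degrade.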

I cannot claim to know why LLMs hallucinate, or what the solutions to the death spiral are (humans, growing ever more dependent on AI, will likely produce less unique material of their own, leaving AI with no genuinely new material to train on).

D] CONCLUDING REMARKS & POTENTIAL SOLUTIONS

The advent of writing, the engine, the phone and social media is fundamentally different from AI and LLMs. With the former revolutions, there was room for human-device interaction and the emergence of new ideas. With the latter, our skillset is completely ousted and our workforce disrupted. Our minds are guided towards a dictated outcome which most accept blindly. Worst of all, for millions of Africans, their future and history are being described by a machine taught from Eurocentric sources. [13] How can a mere human being compete with technology that equals or outperforms us in most examinations? [14] How are we to debate each other if we all invest our faith in the same machine?

My apocalyptic vision of LLMs and AI will likely be regarded as pessimistic. That may be true. However, I raise these concerns in the manner presented to, at the very least, accelerate and inspire some critique of our usage. Why should we use LLMs at all? What is the most ethical way of using these tools? How can we slow their innovation so that law and ethics can catch up?

If I told you this entire article was produced by ChatGPT, would your eyes widen with worry? Or would you be dead-faced, unsurprised? Hmm, yes, now is the time to become the sceptic.

I attempt here to propose some solutions:

  1. Host interdisciplinary workshops in rural communities, with single-function, cost-effective devices, to train underprivileged learners to use LLMs in an ethical, restrained way;
  2. Create and enforce workplace guidelines on the extent and nature of LLM usage. Specifically, provide training on what constitutes an 'ethical' question and how to assess the credibility of answers;
  3. Use LLMs created by local programmers, born of the communities they serve, to ensure that the sources are context-driven and culturally sensitive [15];
  4. Make a conscious effort to use LLMs only when absolutely necessary, not out of convenience. When you do, take the time to compare their answers to your own research and thought process;
  5. Use custom GPTs trained on your own study notes, revision papers and lecture slides (abiding by the relevant jurisdiction's copyright and intellectual property laws). I've linked a tutorial on how to do this here; and
  6. Begin initiatives demanding that LLMs be open-source, available for discovery on a cost-effective basis, and accountable to the public.

In our quest to fly upwards to the heights of efficiency, we should be careful lest we lose the humanity which inspired us to touch the clouds in the first place.


[1] OA Acar 'AI Prompt Engineering Isn't the Future' (hbr.org) (accessed 1 April 2024).

[2] B Thormundsson 'Global user demographics of ChatGPT in 2023, by age and gender' Statista (accessed 1 April 2024).

[3] S Healy 'Please understand' LessWrong (accessed 1 April 2024).

[4] Plato Phaedrus trans M Ficino (1484) 275.

[5] D Alighieri Divine Comedy trans D Neff (1995) 11-15.

[6] W Heaven 'The inside story of how ChatGPT was built from the people who made it' MIT Technology Review (accessed 2 April 2024).

[7] H Chen, F Jiao and others 'ChatGPT's One-year Anniversary: Are Open-Source Large Language Models catching up?' (2024) arXiv 2.

[8] B Britain 'OpenAI, Microsoft hit with new US consumer privacy class action' Reuters (accessed 2 April 2024); S Ray 'OpenAI Sued For Defamation After ChatGPT Generates Fake Complaint Accusing Man Of Embezzlement' Forbes (accessed 2 April 2024).

[9] A Bruno, P Mazzeo and others 'Insights into Classifying and Mitigating LLMs’ Hallucinations' (2023) arXiv 4-5.

[10] M Owens, K Tanner 'Teaching as Brain Changing: Exploring Connections between Neuroscience and Innovative Teaching' (2017) CBE Life Sci Educ. 4.

[11] Bruno (n 9).

[12] P Grad 'AI models feeding on AI data may face death spiral' (techxplore.com) (accessed 2 April 2024).

[13] G Gondwe 'CHATGPT and the Global South: how are journalists in sub-Saharan Africa engaging with generative AI?' (2023) De Gruyter 9.

[14] Y Tanaka, T Nakata and others 'Performance of Generative Pretrained Transformer on the National Medical Licensing Examination in Japan' (2024) PLOS Digital Health 5; J Roos, A Kasapovic 'Artificial Intelligence in Medical Education: Comparative Analysis of ChatGPT, Bing, and Medical Students in Germany' (2023) JMIR Medical Education 3-4.

[15] C Okorie 'Beyond intellectual property protection: Other artificial intelligence intellectual property strategies for the African context' in C Ncube, D Oriakhogba and others Artificial Intelligence and the Law in Africa (2023) 165.
