Meaningful Work: What AI Can’t Do through Words
Todd Mei, PhD
Expert on Meaningful Work & Economics | Researcher, Technical Writer & Consultant
Note: This is a deep-dive reflection linking the current potential of generative AI with philosophical meaning and insight.
It would be silly to believe that generative AI can do meaningful work, as if a chatbot were a human who could find a greater purpose beyond its work of responding to user prompts. Yet, especially given recent data about workplace woes, what is not so far-fetched is a situation in which humans use AI to find or create meaningful work. We’re already seeing a similar application of AI in the field of mental health.
The aim of this article is to show that even when generative AI provides suggestions for creating meaningful work, it's not really effective. Its words, in a manner of speaking, are empty.
There are two reasons for this, which I’ll unpack in more detail as they play out in AI-based and human-based conversations.
It turns out that meaningful work is an excellent candidate for seeing the criteria of authorship and dialogue play out. It’s an area of personal and organizational improvement that involves more than just reading about suggestions and tips.
Creating meaningful work is a bespoke process by nature—intricate when involving individual career changes and complex when dealing with organizational transformation. For humans, meaningful work can be the difference between a good life and a not-so-good one. For a chatbot, it is just one topic among many on which it has been trained to predict a response, though not necessarily to find the response appropriate for the user.
What Is Meaningful Work?
Meaningful work is the idea that one’s job or career is a key contributor to a flourishing life, not just financially but emotionally, psychologically, intellectually, and even ethically. It’s the antithesis of living to work, hustle culture, and careerism. It’s a relatively niche idea within philosophy and sociology. Yet, the Great Resignation, which began in November 2021, pervaded the social imagination, revealing disappointment with careers and a shared conviction that jobs should be more meaningful. As a result, 4.5 million people resigned from their jobs.
A few years on, there has been a sharp rise in regret amongst those who left their jobs. It would be easy to conclude that this regret resulted squarely from a rash and poor choice. Was the pandemic-induced slowdown in the global economy really going to inspire a change in working conditions, especially when some experts argue that disappointment with careers has been part of a trend in the labor market that preceded the pandemic?
Nonetheless, it should not be ignored that 80% of workers in the world are unhappy with their place of work. Regret stemming from leaving work prematurely is, in this instance, an outgrowth of a more fundamental regret—namely, that one’s work is not meaningful. Had that work been meaningful, it’s likely the labor market landscape would have looked very different.
The question about meaningful work remains timely. Moreover, it may be tempting to ask an AI chatbot exactly what one needs to do in order to find or create meaningful work. But at least in their current iteration, chatbots will not be especially helpful.
Informative, yes. Helpful, no.
The reasons why AI falls short have largely to do with two of its features, authority and application, and how they contrast with human conversation.
AI and Artificial Authority
There’s an interesting debate that has been playing out since Ted Chiang published his article “ChatGPT Is a Blurry JPEG of the Web” in The New Yorker (February 2023). Chiang is a well-known science fiction writer with a degree and work experience in computer science. His article was lauded for convincingly illustrating the limits of generative AI based on its methods of compressing data and interpolating “lexical spaces” within the data to generate responses.
One of the limits Chiang points out involves a trade-off between efficiency and accuracy. The main method of achieving efficiency is compressing the information that chatbots have to scrape in order to answer a prompt. However, the compressed information is not an exact duplicate of the source. It’s an approximation, or what Chiang describes, by analogy, as a “blurry” digital image. If a chatbot’s remit is all of the information available on the internet up to a specific point in time, you can see how strategies for compressing this resource are vital.
To answer a user question, a chatbot has to interpolate between data points in the approximation, using statistical regularities in the data to generate (or predict) what is ideally an accurate answer. Chiang’s example pertains to economic theory:
Any analysis of the text of the Web will reveal that phrases like “supply is low” often appear in close proximity to phrases like “prices rise.” A chatbot that incorporates this correlation might, when asked a question about the effect of supply shortages, respond with an answer about prices increasing.
Interpolation is probably better understood in terms of using existing data points to generate new data points. So, instead of a chart of seemingly scattered points, machine-learning interpolation can generate the connecting points so that the original data subsequently trace a continuous shape (like a curvy line on a graph).
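To make the notion of interpolation a little more concrete, here is a minimal sketch in Python. The data points are invented for illustration, and nothing here depicts how an LLM actually compresses or reconstructs text; it only shows the general idea of estimating new values that lie between known ones.

```python
# A minimal illustration of interpolation (not how an LLM works internally):
# given a few known data points, estimate the values that lie between them.
import numpy as np

# Known (x, y) data points -- imagine these as the samples retained after
# a lossy compression of a much larger dataset.
x_known = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y_known = np.array([0.0, 0.8, 0.9, 0.1, -0.7])

# Points we were never given, which we now estimate by linear interpolation.
x_new = np.linspace(0.0, 4.0, 17)
y_new = np.interp(x_new, x_known, y_known)

for x, y in zip(x_new, y_new):
    print(f"x = {x:.2f} -> estimated y = {y:.2f}")
```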
This might sound a bit dodgy, since it could very well be the case that the chatbot interpolates in a way that draws on bad data. Statistical regularity, it seems, isn’t enough. And that should be worrying.
Nonetheless, Chiang is not without his critics. Some experts and practitioners in the field consider his characterization of AI a bit exaggerated or misguided insofar as it omits other methods by which AI chatbots can be trained to respond more accurately to prompts.
Fortunately, the controversy over Chiang’s methodological claims does not bear directly on our topic of consideration (as compelling as they might be). Instead, we can focus on one of his observations, which remains hugely important when illustrating the difference in the way words matter for chatbots versus humans. It begins with a counter-intuitive observation about how chatbots work.
The fact that ChatGPT rephrases material from the Web instead of quoting it word for word makes it seem like a student expressing ideas in her own words, rather than simply regurgitating what she’s read; it creates the illusion that ChatGPT understands the material. In human students, rote memorization isn’t an indicator of genuine learning, so ChatGPT’s inability to produce exact quotes from Web pages is precisely what makes us think that it has learned something.
(my italics)
It is this illusion – or artificiality – of understanding that accounts for much of the authoritativeness of chatbots. Chatbots speak as if they were in a position of human agency and authority; and we listen. As some experts in machine learning have pointed out, ascribing human-like characteristics to AI chatbots (anthropomorphizing them) can actually be dangerous if it allows them to exert influence and authority over us. To see exactly why, we’ll need to set AI authority in contrast to the kind of authority that arises in human speech.
The Authority of Language
We can take a few key points from the philosopher Paul Ricoeur (1913-2005) to get to grips with what underwrites the AI illusion. For Ricoeur, human discourse is an event in which someone says something to someone about something: there is a speaker, an addressee, a meaning that is expressed, and a shared world to which the words refer.
(A Ricoeur Reader: Reflection and Imagination. Harvester Wheatsheaf, 1991.)
When a chatbot replies and “speaks” to the user, it not only simulates these features of human speech, but more significantly, creates the illusion that it is expressing and sharing a concern with the user who prompted the question. The overall effect is that when responding to our questions in “speech”, chatbots give the impression of not only being human-like, but more importantly, of sharing our world and our concerns. There is the assumption that they therefore know what we are experiencing, even if they don’t explicitly say so (but they often do!).
Though drawing on a different philosopher, Ludwig Wittgenstein (1889-1951), Murray Shanahan nonetheless alights on the same issue of engagement and concern when discussing AI.
As adults (or indeed as children past a certain age), when we have a casual conversation, we are engaging in an activity that is built upon this foundation. The same is true when we make a speech or send an email or deliver a lecture or write a paper. All of this language-involving activity makes sense because we inhabit a world we share with other language users.
(p. 2)
In contrast, the Large Language Models (LLMs) used by chatbots
are generative mathematical models of the statistical distribution of tokens in the vast public corpus of human-generated text, where the tokens in question include words, parts of words, or individual characters including punctuation marks.
(p. 2)
For chatbots, meaning is a function of coherence within the text-based informational resource “ingested” by the LLM. In other words, meaning is a function of accurately generating a response to a user prompt using the methods of scraping and prediction on which the chatbot was trained. Sentential response, and thus meaning, is confined to the prompt-response relation. It is therefore meaning-poor.
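For readers who want to see what “a statistical distribution of tokens” can mean in miniature, here is a toy sketch in Python using bigram counts over a made-up corpus. Real LLMs are neural networks trained on vastly larger data and are far more sophisticated; this is only an illustration of the general point that prediction rests on statistical co-occurrence, echoing Chiang’s “supply is low” / “prices rise” example.

```python
# A toy "language model": bigram counts over a tiny, invented corpus.
# The objective is similar in spirit to an LLM's: predict a plausible
# next token given what came before -- but nothing more than that.
from collections import Counter, defaultdict

corpus = (
    "when supply is low prices rise . "
    "when supply is high prices fall . "
    "when demand is high prices rise ."
).split()

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word in the toy corpus."""
    followers = bigrams.get(word)
    return followers.most_common(1)[0][0] if followers else "<unknown>"

print(predict_next("prices"))   # 'rise' -- it co-occurs more often than 'fall'
print(predict_next("supply"))   # 'is'
```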
Humans, in contrast, understand the use of words in sentences that typically refer to the world we inhabit. Meaning is a function of the concern to understand something or someone so that we can better inhabit or act in the world—whether this simply involves co-existing or creating projects by which we strive to live in a way we think is best. Words are meaning-rich for us because they relate to more than just a text-based resource. They bear on how we live in the world.
Another way of capturing this difference is to say that chatbot sentences don’t really refer to anything, since they are generated according to mathematical models (which nonetheless represent the world in some abstract way). Human sentences, when not entirely nonsensical or deliberately false, refer by means of our intention to speak truthfully about some matter relating to the world.
AI Application: Correspondence
Bearing in mind that the main function of a chatbot is to predict accurately what a user prompt requires, based on interpolation of data points and other methods of AI training, there is an underlying assumption in machine learning that accuracy of response somehow equates to usability for the human questioner. In most cases where the user is asking for facts, the generated responses are indeed usable. Information can be used to become more knowledgeable about something or someone, or as a starting point for further investigation.
If there is a truth function to chatbot responses, it is one of correspondence: whether or not the response tracks what is true in the world of its textual resource (e.g. the Web or some website). If you have to write a blog on the merits of carbon-based sailing equipment, you can ask a chatbot to create an outline and list the specific pros and cons. In sum, we can say that the application of chatbot responses lies in two forms of correspondence: correspondence to the user prompt (following instructions) and correspondence to facts and information (accuracy).
If a chatbot’s response does not appropriately correspond to the user prompt, then it fails to follow instructions. If a chatbot’s response is not accurate about facts or information, then it “hallucinates”.
In their current iteration, generative AI training methods focus on fine-tuning these two types of correspondence. The question, then, is whether there is a kind of human conversation that AI correspondence does not address.
Hermeneutic Application: Intimacy and Dialectical Engagement
The answer lies in situations that involve a great deal of interpretation.
Humans are quite good at picking up side-directed, oblique, non-literal, non-verbal (i.e. tone, gesture, mood), environmental, contextual, and enactive or dynamic interrelations and meanings. All of these features can occur at once in a given statement and a given situation, and they therefore require a specialized form of engagement in order to be understood.
Problems of interpretation and understanding are the specific remit of hermeneutics. While hermeneutics has a lengthy tradition in the West that goes back to examining matters of how to interpret scripture in Judaism and Christianity, its more recent iteration is specifically philosophical with its concern to understand broadly what is involved in the act of interpreting dialogue and text. One quickly finds that in order to understand how interpretation works—especially when dealing with metaphorical and non-literal meanings—some fundamental grasp of the way language functions is required.
Are chatbots hermeneutic? Are they capable of interpretation?
Perhaps, if prompted to interpret something; but there are two limiting factors to note.
There is a significant difference between predicting an answer based on the lexical spacing of words like “supply is low” and “prices rise” given a question about the law of supply and demand, on the one hand; and responding to a specific question about, for example, changing careers or whether or not one is really finding one’s work meaningful, on the other.
To thresh this distinction out a bit more, one can say that chatbot conversations correspond to a user prompt. Whatever a user asks (within reason and norms of safety), a chatbot will try to predict a corresponding answer. Chatbot responses follow a linear pattern: user prompt, then response. They can take into account the preceding prompts and responses, and quite impressively so. Yet they cannot engage circuitously by posing questions back to the user in a probing or investigative manner.
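The linear pattern can be pictured with a deliberately simple, hypothetical sketch. The respond() function below is invented purely for illustration and does not stand for any real chatbot API; the point is only that the loop flows one way, with the user supplying prompts and the system supplying answers.

```python
# A hypothetical sketch of the linear prompt -> response pattern.
# Earlier turns become context for later answers, but nothing in the loop
# lets the system open its own line of probing questions.
def respond(prompt, history):
    """Hypothetical stand-in for a chatbot's answer-generation step."""
    return f"[answer conditioned on {len(history)} earlier turns] {prompt!r}"

history = []
for prompt in ["Is my work meaningful?", "What should I change about it?"]:
    answer = respond(prompt, history)   # prompt in, response out
    history.extend([prompt, answer])    # prior turns are context, not counter-questions
    print(answer)
```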
In contrast, engaged human discussion features the hallmarks of being intimate (as opposed to correspondent) and dialectical (as opposed to linear). Intimacy is a mode of being attuned to the specificities of the situation and conversation, involving background knowledge of conventions of expression and openness to cues and signs that might call for further clarification. Generative AI may very well be trained to take such things into account, but only when they are expressed as part of a user prompt. Intimacy is more about discovering ideas or problems that are only partially expressed, hidden beneath the surface. It therefore requires both parties to the conversation to be actively engaged, since what remains nascent in one person’s statements can only be brought out if the other person recognizes the need to investigate further.
Dialectical engagement is essentially the oscillation of conversation between question and answer, where answers to questions do not necessarily progress straightforwardly, but may instead be elliptical, as with clarificatory Q&A. When combined with intimacy, dialectical engagement is a method of elucidating what has yet to be understood.
Investigation in this manner is what I earlier termed hermeneutical application. It’s a form of sophisticated, situational reasoning; sophisticated in so far as it remains vigilant and sensitive to what a person says and how it might have multiple and often conflicting meanings, many of which may be nascent or only partially transparent.
Essential to this method of inquiry is the application of general ideas, laws, principles, or even steps to a particular instance. It’s the core strategy of hermeneutics, articulated in one of its seminal works by Hans-Georg Gadamer (1900-2002) when referring to how investigation requires judging what matters in understanding a particular case.
The individual case on which judgment works is never simply a case; it is not exhausted by being a particular example of a universal law or concept. Rather, it is always an “individual case,” and it is significant that we call it a special case, because the [general] rule does not comprehend it.
(Truth and Method, Sheed & Ward, 1989, p. 39; my bracketed insertion)
When Gadamer states that the general idea or rule “does not comprehend” the particular case, he is emphasizing how particular instances may be elucidated by the idea or rule but not completely exhausted. The general idea is thus an indicative guide for further understanding; what’s more, an understanding that is brought to light only through engaged conversation with another person. Gadamer thus notes that hermeneutical application of the general to the particular instance rests on “the general relationship between thinking and speaking, whose enigmatic intimacy conceals the role of language in thought . . . closed by the dialectic of question and answer” (Ibid., p. 389).
To recap before looking expressly at meaningful work: a chatbot’s authority is an illusion created by its fluent rephrasing of a textual resource; its application is limited to correspondence with the user’s prompt and with facts; human conversation, by contrast, is intimate and dialectical, applying general ideas to the particular case in the hermeneutical sense described above.
Why AI Can’t Do Meaningful Work
Meaningful work involves facts and information on which a chatbot can report. But it also fundamentally involves the kinds of investigative techniques that aim to have application for the inquiring person. In other words, whatever advice might be contained in suggestions or tips a chatbot provides on how to find or create meaningful work, the intimate and dialectical question will likely be,
“But how does this apply to me?”
It’s not clear how general suggestions apply to any particular person or business. Their efficacy relies not just on applying them generally, but on finding ways in which their general purposiveness can be applied to a specific situation. Because a chatbot cannot follow up on questions pertaining to unique individuals and organizations – only to a token of them as represented in the lexical space – its responses can only ever be indicative. An example from my consulting experience may help thresh this out.
Imagine someone who has been dedicated to their career since entering university. They always saw themselves in the role of helping people with physical impairments, only to begin, at the midpoint of their life, seriously questioning whether they really wanted to continue.
Because they were highly respected in their field, having such a person request a meaningful work consultation came as a bit of a surprise. Yet over the course of our conversation, it emerged that there were specific reasons for their discontent which had not yet come into focus—a kind of “known unknown”. It might have been, for instance, that the person no longer liked working with people because they had become something of a misanthrope.
Yet, as it turned out, the consultee no longer felt they could meet the changing patient expectations of the services they were providing. As a result, they felt incapacitated at a fundamental level. Per the hallmarks of intimacy and dialectical engagement, we then discovered that this incapacitation had several heads, as it were, relating to the history of institutional management and the nature of institutions themselves (very philosophical!). These tangential topics led to exploring how the consultee understood their institution and how it had changed.
Bearing in mind that this person had been in the same career since university, the extent of force pushing against change was formidable. The real breakthrough came when we discovered that institutional management was conflicting with personal responsibility. This then led to a cathartic moment of release and transformation. Weeks later the consultee happily (and to this day, successfully) changed careers.
Meaningful change relies on human conversation, where the recognition of a shared concern is evident. I think this is why chatbots “know” that, in instances requiring a more hermeneutical tact, it’s usually good to include in their suggestions that the user consider speaking to other humans to better understand their dilemma and the pathways forward. In that sense, at least, the lexical space of generative AI has an exit from its detached approximation of the real world.
About the Author
Todd Mei (PhD) is a former Associate Professor of Philosophy and is currently a researcher and consultant in meaningful work. He is the founder of Philosophy2u. With over 20 years of experience in teaching, researching, and publishing in the philosophy of work and economics, existentialism, hermeneutics, and ethics, Todd enjoys bringing insight, innovation, and work-life revolution to organizations, businesses, and individuals. He co-authors The Meaningful Work Newsletter with Joseph Smart.