Why AI Struggles to Write (Good) Poetry
AI Art by Adobe (with some alteration by the author)



Another Deep Dive from Philosophy2u!


There’s an ancient quarrel between philosophy and poetry that can be instructive for understanding exactly why AI struggles to write poetry. The quarrel goes back to ancient Greece, during the time when philosophical analysis and argumentation emerged as the “new kid on the block”.

Poetry was the main medium through which ancient Greeks learned history, philosophy, and even science. With the prominence of Socrates and Plato, a rivalry emerged in which philosophical discourse (in the form of argument and dialogue) questioned the legitimacy of poetry as a path to knowledge.

Poetry, as one of Plato's criticisms alleged, is a form of expression that purports to say something about reality, yet in ways that are largely unconstrained by any concern for accuracy.

Two Views of Key West

This worry holds true today. In fact, a poem that accurately describes its object tends to be regarded as “bad art”. You don’t read a poem about Key West to get an idea of what Key West is like factually.

Take, for instance, Wallace Stevens (1879-1955):

She sang beyond the genius of the sea
The water never formed to mind or voice,
Like a body wholly body, fluttering
Its empty sleeves; and yet its mimic motion
Made constant cry, caused constantly a cry,
That was not ours although we understood,
Inhuman, of the veritable ocean.

(“The Idea of Order at Key West”, 1934)

Yet, you would go to a poem if you wanted a form of art that captured more about an event, place, experience, or object than any prosaic account could muster. In short, we rely on the poet to do their thing.

It just so happens that “doing their thing” often involves using techniques that break with conventions. More specifically, in poetry, words that make sense individually are employed in ways that seem nonsensical when put together in a phrase or sentence.

And this is what AI cannot do. It cannot break the rules of logic and meaning – the very things it is trained to follow – in order to create genuine poetic expressions.

Think about it like this. The ethical concerns about AI running amok are addressed by ensuring that AI systems adhere to rules we believe to be consistent with physical, emotional, psychological, and personal safety. Creating an AI chatbot that can break one rule risks creating an automated “thinking” system that can break any rule. Or, at the very least, it creates a system that does not know when and when not to break rules.

As it turns out, good poets know when to break rules and when not to. Let’s look at the case of metaphor to see how this plays out.

The Case of Metaphor

Metaphor is the main reason why AI fails where humans succeed. Why?

  1. Metaphor is perhaps the heart and soul of poetry because it is essential in predicating meanings that have not been said before.
  2. These meanings have not been said before because metaphorical expression uses the familiar senses of words in order to say something that is not familiar – that is, something we can't look up in a dictionary.
  3. Metaphorical expression is a form of “semantic impertinence” since it creates meaning by putting together words that, taken together, do not make conventional sense.
  4. AI does not have a rule or a sufficient dataset that enables it to successfully create semantic impertinence.
  5. This is because creating such impertinence goes against its rules of sense-making and the dataset on which it draws, the majority of which contain statements that make sense in the conventional way we expect (and not via impertinence).

Let’s take each of these points in turn.

Heart and Soul: Metaphorical Predication

[Points 1-3]

At the most basic level, a metaphor is a group of words that, when put together, describe something familiar in a new way. Specifically, a metaphorical phrase will involve a tension between familiar senses of words to say something new about its subject. Generally, there are two levels of tension that we can describe: semantic clash and semantic impertinence.

Semantic Clash

At this level, the clash between the familiar senses of words may point out something meaningful in a novel way, but its novelty tends to quickly become part of our common parlance. For example, consider the following phrases:

  • John, the cheeky monkey.
  • leg of a chair.
  • the stone heart.

The chair, for example, does not literally have a leg, but the familiar sense of a human leg enables us to see the chair with a distinct feature. And, of course, we speak of chair legs as if they weren’t metaphors at all. When metaphors become common in everyday language, or part of the lexicon, they become “dead” metaphors.

Semantic Impertinence

Impertinence suggests a stronger conflict in the senses of words, perhaps to the point of seeming nonsensical or enigmatic. Poetry’s use of metaphor is the epitome of semantic impertinence.

Think of it like this: There are certain conventions, rules, and expectations of how to use words; poetic metaphor generates meaning by breaking these norms. The impertinence can cause different reactions in us, ranging from not liking what it does to changing the way we perceive or understand something.

There is always a subjective element in responding to metaphor since it will call upon our personal experiences to make sense. But the most powerful and memorable encounters with metaphor often reveal something significantly new or transform how we relate to ourselves, the world, or others. When this happens, the impertinence is transformative; it breaks or ruptures our sense of reality.

I remember hearing for the first time the Welsh poet Dylan Thomas (1914-1953) reading “Fern Hill” and how the vivid imagery and metaphor brought to life something transformative. The scene he described was something powerfully universal yet unfamiliar:

Now as I was young and easy under the apple boughs
About the lilting house and happy as the grass was green,
     The night above the dingle starry,
          Time let me hail and climb
     Golden in the heydays of his eyes,
And honoured among wagons I was prince of the apple towns
And once below a time I lordly had the trees and leaves
          Trail with daisies and barley
     Down the rivers of the windfall light.

What is interesting about such instances of poetry is that one feels entirely inadequate in trying to summarize what the poem is about. And that is perfectly consistent with the power of metaphoric expression. It predicates original meaning by virtue of an unexpected and unorthodox conflict in meaning.

The Shakespeare scholar L.C. Knights (1906-1997) described this phenomenon so well. A great poem will “say” what it alone can say. If asked what William Blake’s “The Tyger” is about, or, in my example, Thomas’s “Fern Hill”, you would have to recite the poem.

Tyger Tyger, burning bright,
In the forests of the night;
What immortal hand or eye,
Could frame thy fearful symmetry?

One key insight we can take away from this: the best forms of metaphorical expression recognize and break rules of sense-making, syntax, grammar, and coherence in order to predicate something radically new.

AI is very bad when it comes to replicating this practice.

AI: Information Rich, Poetically Poor

[Points 4 & 5]

There are two key reasons why AI chatbots – at least in their current iteration – are poetically poor:

  • Their reliance on and relation to their respective datasets; and
  • Their need to follow rules.

These two features are much like a double-edged sword. On the one hand, they allow chatbots to create responses that sound like human utterances. On the other hand, they constrain or determine the style and substance of those utterances.

Datasets and Prediction

If you are familiar with AI chatbots, you will probably have come across the term “dataset”. Chatbots like ChatGPT, Copilot, and Meta AI are trained on datasets that include snapshots of information available on the internet. Some chatbots have real-time access to the internet, while others, like ChatGPT running on GPT-3.5, do not.

What is essential to note for the purpose of creating metaphor is that whenever a chatbot is asked to answer a human prompt – such as, “What is a metaphor?” – it draws on what it has learned from its dataset in order to “predict” the right answer. Answering a human prompt is called prediction because the chatbot does not find an answer in its dataset and then refer you to it (as a search engine would). Rather, it generates an answer, word by word, by predicting what the right response is according to its training.

This means a chatbot’s responses always take some form that resembles what is encoded in its dataset. It can only “predict” what, in some sense, has been “said” before. It may not quote text verbatim (in fact, chatbots tend not to, given the way they approximate the meanings of words and concepts), but its response will always be some iteration of what has been said before.
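
To make “prediction” a little more concrete, here is a minimal sketch. It assumes the open-source Hugging Face transformers library and the small, publicly available GPT-2 model – my own illustrative stand-ins for the proprietary systems behind commercial chatbots, not anything drawn from my training work. Given a prompt, the model scores every possible next word-piece, and the most conventional continuations score highest:

```python
# Minimal sketch: what "predicting the next word" looks like mechanically.
# Assumes: pip install torch transformers (GPT-2 is an illustrative stand-in).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The ocean"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]   # scores for the very next token
probs = torch.softmax(logits, dim=-1)        # turn scores into probabilities

# The five most probable continuations: well-worn, conventional words,
# because that is exactly what "prediction" rewards.
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode([int(idx)])!r}: {p.item():.3f}")
```

Scaled up massively and trained on far more text, this is the same operation a commercial chatbot performs word after word.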

When asked to write poetry, the chatbot may provide coherent and descriptive verse, but its language will appear quite worn. Here's an example from my work as a human trainer (reinforcement learning from human feedback, or RLHF) for chatbot prototypes:

The wave whispers a riddle to the shore,
A cryptic message scrawled in foam and fury.
The surfer, an oracle poised on a precipice of water,
Listens with a heart attuned to the ocean's ancient tongue.

In the example above, the chatbot recognizes it needs to present "the ocean" in a poetic manner. According to its dataset, it has determined that vocalization and whispering are good fits in relation to its mysterious nature. The interpolated connection might be mystery ≈ whispering and, furthermore, that the surfer is gifted in this skill since surfing ≈ attunement to the waves.

But what the chatbot could not come up with is something more poetically rich where connections between themes, words, and meanings are neither explicit nor implicit, but "impertinent":

She sang beyond the genius of the sea

Why the chatbot selected whispering and mystery on the theme of the ocean has to do with how it has been trained. But more importantly, it can’t come up with something more radical or impertinent because there is no rule it can follow for breaking rules (or being impertinent).
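
As a rough illustration of that kind of trained association, here is a toy sketch. The “embeddings” below are hand-made, hypothetical vectors standing in for the high-dimensional ones a real model learns from its dataset; the point is only that conventional pairings (ocean–whisper, ocean–mystery) sit close together in such a space, while an impertinent pairing like Stevens’ ocean–genius does not, and so is unlikely ever to be “predicted”:

```python
# Toy sketch: conventional word pairings score high on similarity, impertinent
# ones score low. The vectors are hypothetical; real models learn vectors with
# hundreds or thousands of dimensions from their training data.
import math

vectors = {
    "ocean":   [0.9, 0.8, 0.1],
    "whisper": [0.8, 0.7, 0.2],   # often co-occurs with sea imagery
    "mystery": [0.7, 0.9, 0.3],   # likewise a familiar companion of the sea
    "genius":  [0.1, 0.2, 0.9],   # rarely paired with the sea in ordinary prose
}

def cosine(a, b):
    """Cosine similarity: near 1.0 means closely associated, near 0 unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

for word in ("whisper", "mystery", "genius"):
    print(f"ocean ~ {word}: {cosine(vectors['ocean'], vectors[word]):.2f}")
# The familiar pairings win, so they are what gets generated; the impertinent
# pairing loses, so it is not.
```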

Rule-Following

A key feature that makes this process of prediction feasible is the set of rules the chatbot can follow. These span a range of categories, such as interpolating words, following ethical precepts, and adhering to grammatical conventions and style.

This remit becomes significant when it comes to a feature of creativity that works by virtue of breaking rules.
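
To see what rule-following looks like mechanically, here is another minimal sketch, again assuming the Hugging Face transformers library and GPT-2 as illustrative stand-ins. A “rule” – here, a ban on certain words, standing in for any ethical, stylistic, or grammatical constraint – is applied at every step of generation; the model has no mechanism of its own for deciding that, just this once, the rule is worth breaking:

```python
# Minimal sketch: a rule enforced mechanically during text generation.
# The banned words are a stand-in for any constraint a chatbot must obey.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The ocean at night is"
inputs = tokenizer(prompt, return_tensors="pt")

# Token ids the system is never allowed to emit (leading space because GPT-2
# tokenizes most words together with the space that precedes them).
banned = tokenizer([" dark", " cold"], add_special_tokens=False).input_ids

output = model.generate(
    **inputs,
    max_new_tokens=20,
    do_sample=False,                      # pick the most probable continuation
    bad_words_ids=banned,                 # the rule, applied on every step
    pad_token_id=tokenizer.eos_token_id,  # silences a padding warning
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The model simply routes around the banned words; nothing in this setup lets it weigh whether breaking the rule would, in this instance, be the more meaningful choice – which is precisely the poet’s prerogative.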

Let me put it like this: AI chatbots can easily answer a question like “What is metaphor?” because the answer is something that can readily be predicted from their data.

When I asked one of the prototypes with which I was working, "What metaphor is according to semantic impertinence?", it replied:

Semantic impertinence in metaphor challenges conventional language patterns. It unlocks new layers of meaning, often residing in the tension between familiar and unexpected associations. It's at the heart of innovative poetry, where language is stretched to its expressive limits.

Explanation is one thing. But pressing the chatbot to follow this definition and create metaphors that involve semantic impertinence is another. It essentially asks the chatbot to follow a rule (or definition) in order to break its rules of sense-making.

Here's what the same chatbot penned:

The waves of my emotions are a relentless ocean. Some days, they ripple with a gentle joy, the seafoam of laughter cresting their peaks.

When asked where the semantic impertinence is, it replied: "The clash of meaning (nature vs. emotional struggle) forces a fresh perspective."

Rule-Breaking

Where AI chatbots have a tough time is with metaphor at its most complex level. The tension between the words is not one they can predict, because the poet creates metaphors from words that are not normally paired together.

There is a balance between making sense and not making sense that is at the heart of metaphor’s semantic impertinence. It uses the rules of sense-making to break the rules.

From "Loveliest of Trees", by A. E. Housman

And this is what AI chatbots cannot do. For a chatbot to do so would require it to be able to think, since breaking a rule of grammar or syntax to create a metaphorical utterance relies on recognizing a good reason to do so. Where in the dataset would this appear? If it did, it would have to appear frequently enough that the chatbot could approximate what should be said when asked to create a poetic metaphor.

This is why chatbots often use the same words when asked to write poetry. What counts as poetic has to already have been said, whereas genuine poetry says what has not yet been said.

Summary

Telling a chatbot to create something radically new is impossible not only on the poetic front but in general.

The current state of AI chatbots is that they can only draw on and predict answers from what is represented in their dataset. There is no genuinely "new", only the word “new” as it appears in various contexts.

  1. AI does not have a rule or a sufficient dataset that enables it to successfully create semantic impertinence.
  2. This is because creating such impertinence goes against its rules of sense-making and the dataset on which it draws, the majority of which contain statements that make sense in the conventional way we expect.

What of the Impertinent Poets?

Perhaps we can see a little more clearly why Plato was unhappy with poets and their craft. That craft relies on a kind of impertinence that can be disruptive when not understood appropriately. Breaking rules with the sole aim of creating some kind of effect, which Plato describes as affective and infectious, can be problematic for a society based on order . . . and rule-following.

To be sure, Plato also recognizes that poetry is virtuous in the right context – that is, when its audience is mature and educated enough so that non-literal, metaphorical meanings are not misunderstood. In fact, Plato also seems to recognize that metaphorical utterances require a careful process of interpretation to appreciate their power – but that is another question hotly debated by scholars.

When it comes to chatbots, if Plato were alive today, he’d probably be much more cautious in his judgment. This is because the two features of humans that make them capable of understanding complex and nuanced situations are their intellectual (nous) and spiritual faculties (psyche). Nous is related to the ability to reason (logos), but it’s much more holistic in its approach. Nous is like having an insight into the nature of things. Psyche is often translated as “soul”; one way of thinking about the ancient Greek distinction is that soul is a manner of being engaged with things from a uniquely human perspective. It is centered on care, concern, and solicitude.

These ancient Greek distinctions are important, not only for scholarly reasons but because they offer us an alternative, critical insight into our own world. Plato, if he were alive today, might suss out the relation between chatbots, their datasets, and their rules like this (in dialogue format):

Glaucon: And what of chatbots, Socrates?
Socrates: My dear Glaucon, chatbots are good at forms of reasoning and problem-solving. This is because they are good at following rules and can access vast amounts of information and teachings. However, they lack the human capacity to care for and be concerned with the nature of things. It is this concern that enables us to break rules when necessary, as in the case of the poet, who predicates new meaning by breaking with the old.
Glaucon: Yes, Socrates, it must be so.

The difference lies in breaking rules out of care for meaningfulness, not out of reckless hubris.

The quantum leap for AI is, so it seems, a poetic one.


About the Author

Todd Mei (PhD) is a former Associate Professor of Philosophy specializing in hermeneutics, the philosophy of work and economics, and ethics. He is currently a researcher and consultant in meaningful work and is the founder of Philosophy2u. He also enjoys training chatbots on the side for major social media and tech companies (who remain anonymous). With over 20 years of experience in teaching, researching, and publishing, Todd enjoys bringing insight, innovation, and work-life revolution to organizations, businesses, and individuals.

#poetry #AIpoetry #chatbotpoetry #responsibleAI #hermeneutics #AI #LLM #creativeAI #poetrylover #poetrycommunity #deadpoetssociety #writinglife #metaphor #semantics #semanticimpertinence #limitsofAI
