Do poets write better prompts? Professional strategies in the AI era

Summary:

  • Advances in artificial intelligence (AI) will transform the job market: some roles will disappear, while others will be created.
  • There are two collaboration models between humans and AI: the "cyborg" (a professional equipped with AI as an assistant) and the "centaur" (a separation of tasks between humans and AI).
  • "Cyborg": fits well with so-called "generalists" who pursue a multidisciplinary path, because making complex, creative, non-standard, and cross-functional decisions will likely remain the prerogative of humans.
  • "Centaur": a good fit for so-called "specialists".
  • In both models, soft skills will be crucial for productive work.


Non-linearity of history, generalist’s mindset and AI disruption

"If you skillfully follow the multidisciplinary path, you will never wish to come back. It would be like cutting off your hands." - Charlie Munger, investor, whose multifaceted life experience helped him develop his own method for evaluating businesses.

Nothing captured my imagination during my school years more than the legend of the construction of the Tower of Babel. Just imagine a huge ziggurat with its top in the sky - so huge that their god confounded the builders' speech so that they could no longer understand each other, and scattered them around the world. This was the first myth of the ancient world that I learned. A magical, colorful, and yet logical world of myths opened up before me. But during my school years, I was only interested in the literary aspect of myths and of world history in general; I did not apply critical analysis to what I read, and I didn't yet know that I could use history to better understand myself and the world around me.

(image: Nakoaktok men in ceremonial dress, with long beaks, crouching on their haunches. Source)

Years later, I came across "The Hero with a Thousand Faces" by Joseph Campbell (1949), a book that later formed the basis for a handbook for Hollywood screenwriters. Its main idea is that all myths ever created by humans - from Africa to the northern reindeer herders, and from Native Americans to the inhabitants of Oceania - share the same narrative structure: most mythological stories revolve around a hero who undergoes transformation throughout life, during which they must defeat their enemy and return triumphantly to their community to make its life better. Isn't it a remarkable discovery that all people, regardless of where they lived, their climate, or the landscapes around them, had the same archetypal thinking patterns?

Thanks to this and many other books, I learned to view history as a tool for understanding people. The main thing I learned over time is that history is non-linear: what seems logical and linear in retrospect was often completely unexpected, ambiguous, and random in the moment. What is nonlinearity? According to Wikipedia, "a nonlinear system is a system in which the change of the output is not proportional to the change of the input." In other words, nonlinearity is a property of systems in which cause and effect are spread out over time and space. To see this in real historical events, remember that many revolutions began with shortages of goods or even with unfair customs duties.
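
To make "the change of the output is not proportional to the change of the input" concrete, here is a minimal Python sketch (my own illustration, not from any source cited here) using the logistic map, a classic toy model of nonlinear dynamics: nudging the input by one part in two hundred thousand yields a completely different output after a few dozen iterations.

```python
# Minimal illustration of nonlinearity: the logistic map.
# A tiny change in the input produces a disproportionate change
# in the output - cause and effect spread out over time.

def logistic_map(x0: float, r: float = 4.0, steps: int = 50) -> float:
    """Iterate x -> r * x * (1 - x) and return the final state."""
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

a = logistic_map(0.200000)
b = logistic_map(0.200001)  # the input differs by only 0.000001

print(f"input delta:  {0.200001 - 0.200000:.6f}")
print(f"output delta: {abs(a - b):.6f}")  # typically of order 0.1-1.0
```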

That's how I encountered systems thinking. And what I learned later is that the systems thinking that helps in studying history can just as easily be applied to any other system - for example, the IT systems of large enterprises, which:

  • contain interconnected components,
  • show emergent behavior (the overall behavior cannot be fully understood by examining each component in isolation),
  • provide a dynamic environment (constant change, such as software updates, hardware upgrades, regulatory changes, and evolving business requirements),
  • and demonstrate non-linear relationships (e.g. a minor software bug could potentially cause widespread system failures if left undetected).

Oh wait! Did I really do that? It seems I just demonstrated the mindset of the people called "generalists": they tend to see and apply analogies across distinct disciplines, and they often think cross-functionally. My career has so far unfolded in such a way that I involuntarily followed a multidisciplinary path, and now, like the author of the quote at the beginning of this text, Charlie Munger, I can't imagine myself without such a mindset. For example, my education in the social sciences (media, political science, sociology) allows me to see aspects that my counterparts with a purely technical background might overlook.

Having such a mindset can be considered a superpower in some cases - and a weakness in others. My interpretation of "generalism" is that the more senior a person becomes, the more benefit a cross-functional view can bring - at least in the IT industry. Just one example: any good systems architect or experienced software engineer understands how important it is to know not only their own area, but also everything that surrounds it: economics, interactions between system components, company processes and dynamics, and even the company's engineering culture. But of course, I keep questioning my own experience and knowledge in order to adapt to changing circumstances. So for a long time I was not really sure whether my "multidisciplinary path" was my superpower or my Achilles' heel.

But why did I decide to philosophize about professional strategies? Recent advances in artificial intelligence (AI) seem to bring good news for generalists. At this very moment, when many IT professionals are diving headfirst into studying machine learning, many observers are predicting a renaissance of the liberal arts and the generalist's mindset: supposedly, large companies have started hiring poets and philosophers, and many authors (1, 2, 3) predict that generalists will become the most sought-after type of employee. Even though I am a liberal arts advocate myself, my initial reaction was: excuse me, they're hiring poets?! Sensational headlines, indeed!

After some initial confusion, I began to realize that the rapid development of AI further underscores the importance of understanding and developing purely human skills (also called "soft skills"), paradoxical as it may seem. This idea intrigued me, and I decided to delve into the issue by attempting to understand what models of interaction with AI are possible at all. Based on this, one can also try to envision what professional strategies are possible for people with different types of thinking and skill sets. I will do this using the example of the business process I understand best: software development.

Just a normal day in the foreseeable future…

Lavon sat down in the chair at his desk. A small indoor palm tree stood near the window. He turned on his computer, logged in, and connected a small projector via USB. A few minutes later, he saw a notification on the screen that the meeting had started. When Lavon joined the virtual room, light poured from the projector into the center of the room, and a projection of a table with several people sitting around it appeared. New technologies allowed meetings to be held at home, with a full sense of presence.

In reality, some of these people were working from home, some even from other countries. And some were working from within the depths of a cloud provider's data centers - strictly speaking, they weren't even human. These were so-called AI agents, each given a separate identity within the company - both at the metadata level and visually - for ease of perception during such discussions, but also to allow testing different models trained on different datasets, among other technical reasons. Otherwise, they were simply models surrounded by a large amount of software designed to ensure data security and the accuracy of responses. The employment of AI agents completely eliminated the need to compel employees to work from the office. Now, employees met in person only when it really made sense, and such meetings were usually held in the format of a more or less structured workshop aimed at productive discussion or mutual learning. For everything else, employees communicated from home via their projections.

(image: A somewhat spooky meeting, I have to admit.)

The meeting participants who could definitely be called human came clearly prepared: each had already outlined the tickets that needed to be completed in the next sprint. Indeed, this was the work of many of them: formulating requirements. Many had previously written code for this system themselves and therefore had a good understanding of its technical aspects. But recently, the AI agents had taken over this work, and humans continued to write code only for open-source projects or for internal libraries - high-quality code aimed at standardizing complex processes, using understandable and useful abstractions - in other words, code on which AI could learn and provide quality responses.

All this time, Lavon sat comfortably reclined in his chair. When it was his turn, he outlined the required functionality of the new features expected by the client. "Understood, we'll move on to the discussion," said one of the AI agents behind a human avatar; it was one of the agents responsible for receiving requirements and translating them into specific tasks. The human-looking agents started the discussion. A sunbeam from the window in Lavon's room bathed the dust particles in its light. Silence fell in his room, disturbed only by the buzzing of a fly trapped in a glass with remnants of juice. Thirty seconds later, the meeting participants were already listening to the agent's planned list of actions. Lavon approved it, and it was time to discuss another component of the system.

When Lavon was about to exit the meeting and almost all avatars had disappeared, Yanina asked him to stay. Yanina, with her love for literature and for composing narrative structures, had previously worked as a journalist. In her free time, being fond of analyzing historical trends, she conducted engaging museum tours. With such a skill set, she had happily been accepted into a new position as a "cyborg" - a person "armed" with AI assistants, tasked with identifying weaknesses in enterprise processes.

"Lavon, I wanted to discuss your performance as a requirements engineer."

Lavon was surprised: "Don't my agents produce quality code?"

"They do, but the question is about the number of iterations. You know, GPUs aren't cheap, and your agents require on average 1.3 more iterations than other AI managers on the project to ensure their code meets the requirements."

"Are you sure about your numbers?"

"Here, take a look." She pointed to the tablet, where she opened a dashboard. "And here are your tickets from the last sprint: here an agent misunderstood your intentions due to punctuation, and here you didn't specify the persona for whom this feature was developed. Because of such details, agents can misinterpret your requirements and intentions. I suggest you come to the workshop I'm organizing next week, where I'll show you a couple of techniques and give you a checklist."

After the workday, Lavon went to pick up his daughter from daycare. While getting her ready, he struck up a conversation with the caregiver, Ryhor. It turned out that a few years ago, Ryhor had also worked in an office. With the development of AI, he had to rethink his interests and skills, while society reassessed some professions - so he decided to train as a caregiver. "It was quieter in the office, for sure. But now I understand my kids better," Ryhor said.

Virtualization of work, philosophy of mind and jobs in the AI era

What was that? I applied one of the creative methods of innovation development - science fiction (by the way, I also use it in my work; I'll tell you about that someday). Why? Quoting Nathan Furr and Jeffrey H. Dyer, authors of the Harvard Business Review article "When Your Moon Shots Don’t Take Off": "[to dislodge our mind] from the lazy, timorous habit of thinking that the way we live now is the only way people can live." I tried to imagine the working environment of the future, where humans and AI work side by side.

But before I continue, I must say that I assume AI will not deprive us of work. Those who possess deep or unique knowledge or skills will become even more productive. And those who are not positioned as well will either improve with the help of AI assistants or find another job they would enjoy. Of course, some old responsibilities will disappear, but new ones will take their place. Kurt Cagle, a brilliant author on information architecture and the future of AI, gave this example: "Already the office manager is disappearing as more and more offices become virtual, but a virtual office steward is emerging that manages the infrastructure of that virtual office. The same holds true for many formerly physical office functions." He also believes that the working world as a whole may undergo a shift from a wage-based work model to a "more entrepreneurial approach": "This virtualization, not necessarily AI itself, will be responsible for these changes, and yes, over time, this will result in an erosion of “jobs” and job slots. Traditionally relatively independent creatives will become consultants, paid by the contract and not by the hour." (full article: here)

(image: It may very well be that these ladies also thought that nobody would take their jobs away. Source)

Of course, nobody knows exactly how and when AI will change the working world, but it is already evident how it makes us more efficient in routine tasks and research. That is why many, like Cagle, are trying to predict which new professions and work models will emerge. For example, companies are increasingly concerned with questions of "ethical" AI, and the safe implementation of AI requires specific knowledge and skills.

One of my former colleagues, Casper Wilstrup, wrote a brilliant piece called "With Artificial Intelligence, Philosophy of Mind Has Become an Experimental Science", where he first demonstrates, in the simplest way possible, that humans have consciousness: “Why am I bringing up death in an article about AI and philosophy of mind? Simple. That bone-deep fear of death serves as a visceral reminder that we’re alive, sentient, and experiencing this world. If we didn’t have that sense of self, we’d already be in what Michael Ende called 'The Nothing.'” He then declares philosophy of mind an experimental science - I will cite the main conclusion from his essay and encourage you to read the full text:

"The goal is to extend that recognition to beings that don’t necessarily look like us — like AI, for instance. This sets the stage for experimentation. We’re not just tossing around ideas; we can actually set up tests and experiments to pinpoint the configurations that allow consciousness to form from its proto-conscious elements. And who knows? Understanding how unified consciousness forms from proto-conscious elements could give us clues about what happens when that process reverses — like, what happens when we die. I’m not claiming to have all the answers, but thanks to AI, we’re inching closer to turning these philosophical questions into scientific ones. So it’s time to roll up our sleeves and head back to the lab. Let’s take these big, existential questions and turn them into actionable experiments."

The rapid development of AI will undoubtedly bring about the emergence of new professions, many of which will require very human knowledge and skills - the very paradox I mentioned earlier. By the way, my personal forecast is that the work of teachers and caregivers will eventually experience a resurgence: once some portion of office work is automated, the offline professions might finally get the appreciation they deserve.

Cyborgs vs. Centaurs, soft skills and logic

"Thou shalt not make a machine in the likeness of a man’s mind", Paul quoted. "Right out of the Butlerian Jihad and the Orange Catholic Bible", - she said. "But what the O.C. Bible should’ve said is: Thou shalt not make a machine to counterfeit a human mind." (from "Dune" by Frank Herbert, 1965)

So, I am convinced that there is no need to fear the upcoming changes - we rather need to prepare for them. But how exactly? After all, nobody knows what AI will be capable of in the future. We can only broadly say that there are things we will be able to delegate to machines in the near future, and there are things we will always do better than machines (or that we will reluctantly delegate to AI and prefer to do ourselves). Why am I confident in the latter? This is just my assumption, and I suggest you read the article "The empty brain" by Robert Epstein (senior research psychologist at the American Institute for Behavioral Research and Technology in California), in which he claims that comparing the human brain to a computer is fundamentally wrong: "Your brain does not process information, retrieve knowledge, or store memories. In short: your brain is not a computer." According to Epstein, we experience life but do not "store memories" - what exactly we do with them, however, is still unclear.

To debunk the metaphor of the brain as a computer, let's conduct a short thought experiment: try to visualize in detail the face of your grandmother or a classmate. I have no doubt that you will remember them. Most likely, you remember their faces as a whole, or perhaps some specific, particularly expressive features. But I doubt you will be able to recall all the details - the size of an ear, the distribution of eyelashes, or small moles. Not to mention people who are less significant to you - for example, colleagues you worked with for a couple of years in the past. Most likely, you remember them in general, but you won't remember most of their individual characteristics separately. If computers worked the same way as our brains - remembering only some properties of the data and gradually losing attributes over time - we would have big problems.

Anyway, let's agree for simplicity's sake that in some cases we'll rely on machines, formulating requirements for them and overseeing task execution - becoming managers for AI - while in other cases we'll perform tasks ourselves, only becoming more efficient through the targeted use of AI. These two interaction models, known as "centaur" and "cyborg", are what I described in my short story about future office life.

In my example, Yanina represents a "cyborg." In the "cyborg" interaction model, humans become augmented professionals: AI and human effort are interwoven. AI acts as an assistant, collaborator, and advisor, while humans remain the drivers of action. If we assume that AI will be able to automate any routine and straightforward work, we can try to predict the skills a "cyborg" should possess:

  • a nuanced understanding of people - their psychological types, reactions, and possible motivations;
  • systems thinking - for designing processes, identifying weaknesses, and accumulating and structuring knowledge;
  • an understanding of how complex systems, including AI, work - and the ability to monitor the quality, security, and ethicality of their results;
  • an understanding of clients' needs, and rapid prototyping.

In all these activities you can use AI - for example, to quickly understand a problem (AI can "import" domain knowledge from knowledge graphs) - but making complex, creative, non-standard, and cross-functional decisions will likely remain the prerogative of humans. And it seems that this model fits well with so-called "generalists" who pursue a multidisciplinary path.

(image: A cyborg, or augmented professional. Source)

But there is another model - the "centaur". Lavon from our example is precisely that: a human delegating tasks to AI, acting as a manager for AI agents. What are the responsibilities of a good manager, specifically in terms of delegation?

  • Ability to clearly articulate their vision and formulate requirements → the skill of communicating ideas clearly, requirements engineering, writing contracts (in software development, e.g. an API specification - see the sketch after this list);
  • Understanding of what the result should look like → domain knowledge, knowledge of technologies;
  • Ability to assess the quality of the work performed → in software development, this requires knowledge of, for example, clean code and of how to create good abstractions;
  • Ability to give clear and objective feedback → once again, good communication skills.
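
To make the "writing contracts" point concrete, here is a minimal sketch of how a requirement can be expressed as an explicit, machine-checkable contract rather than as free-form prose. It is my own illustration: the refund rules, names, and numbers are hypothetical, chosen only to show the discipline involved.

```python
from dataclasses import dataclass

# A hypothetical requirement written as an explicit contract:
# "Registered customers may request a refund within 30 days of
# purchase; the refund never exceeds the amount paid."

@dataclass
class RefundRequest:
    customer_id: str
    amount_paid: float        # in EUR
    days_since_purchase: int

def approved_refund(req: RefundRequest, requested: float) -> float:
    """Return the approved refund amount, enforcing the contract."""
    # Precondition: the refund window is 30 days.
    if req.days_since_purchase > 30:
        raise ValueError("refund window of 30 days has passed")
    # Postcondition: a refund never exceeds the amount paid.
    return min(requested, req.amount_paid)

print(approved_refund(RefundRequest("c-42", 99.0, 10), 120.0))  # -> 99.0
```

A human - or an AI agent - reading such a contract cannot misinterpret the intent the way it might misinterpret ambiguous prose or punctuation; an API specification plays the same role between services.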

In essence, these qualities will be most important when delegating tasks to AI. And it seems that this model of interaction with AI is well suited for the so-called "specialists".

It is indicative that in both models, soft skills will become fundamentally important. Take, for example, the articulation of thoughts and requirements engineering: neither is possible without an understanding of logic, which plays an important role in the development of AI, particularly in knowledge representation. Personally, I believe this area will become critically important in the near future, when companies realize that without well-structured, classified, clean, and up-to-date data tied together by a unified semantic layer, AI models will not be able to truly automate processes and will likely remain assistants with a high probability of giving erroneous answers. In other words, if a user working in the automotive industry asks an AI-powered Q&A system, "Which cars will we have to recall if the autopilot release with ID xyz123 is deemed dangerous to pedestrians by the Japanese regulator?", the model must clearly understand what "cars" and "autopilot" mean in this company's lexicon, as well as which entities of the "car" class have a relationship with the "autopilot" class under the mentioned ID.
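
To illustrate what such a unified semantic layer might look like in practice, here is a minimal sketch using Python and the rdflib library. It is my own illustration of the query above: the namespace, class names, properties, and car models are all hypothetical.

```python
from rdflib import Graph, Literal, Namespace, RDF

# A hypothetical company vocabulary: what "car" and "autopilot release"
# mean in this organization's lexicon, and how they relate.
EX = Namespace("http://example.com/auto#")

g = Graph()
g.bind("ex", EX)

# Two car models; only one runs the autopilot release in question.
g.add((EX.ModelS, RDF.type, EX.Car))
g.add((EX.ModelS, EX.runsAutopilotRelease, Literal("xyz123")))
g.add((EX.ModelX, RDF.type, EX.Car))
g.add((EX.ModelX, EX.runsAutopilotRelease, Literal("abc999")))

# "Which cars must we recall if release xyz123 is deemed dangerous?"
# becomes an unambiguous graph query instead of a guess.
results = g.query("""
    SELECT ?car WHERE {
        ?car a ex:Car ;
             ex:runsAutopilotRelease "xyz123" .
    }
""")
for row in results:
    print(row.car)  # -> http://example.com/auto#ModelS
```

Without such a shared vocabulary, the model has to guess what "autopilot" means in this particular company - and guessing is precisely the source of the erroneous answers mentioned above.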

"Soft" skills will become increasingly important, regardless of one's professional strategy. They cannot be underestimated. Nonetheless, I often encounter very intelligent people who cannot build processes, make logical conclusions, understand their interlocutors, and convey their thoughts. It's truly not easy. In "Dune", which I quoted earlier, these skills were specifically taught to so-called "mentats" - individuals exceptionally gifted from birth, whose mental potential was comparable to supercomputers, as well as to members of the Bene Gesserit school, where women were taught from a young age to read people's emotions, control their own emotions, and master the art of persuasion. In real life, soft skills are mainly learned through trial and error, by showing initiative, interacting with people, and having curiosity about the world around us.

Pavel Lipski

Music Educator, Composer & Sound Engineer

8 months ago

A very interesting read. I also appreciate you mentioning and quoting Dune. I find it quite a unique blend of science, philosophy and sociology. Probably exactly the mix that's needed to understand AI better.

Vlad Radziuk

4x leader of digital-native teams | Leveraging human aspects in tech: employee engagement, team interaction design, knowledge management, AI augmentation @ Nordcloud, IBM | originally from Belarus

8 months ago

Recommended reading list:

  • An interesting piece on how cybernetics can help develop better (safer, more ethical) AI applications: https://medium.com/neo-cybernetics/a-matter-of-equilibrium-cybernetic-perspectives-for-human-ai-symbiosis-61b37ed0d930
  • Casper Wilstrup, "With Artificial Intelligence, Philosophy of Mind Has Become an Experimental Science" - a very interesting view on how advances in AI might stimulate studies of the human mind: https://medium.com/machine-cognition/with-artificial-intelligence-philosophy-of-mind-has-become-an-experimental-science-e0b79dc6601a
  • Nathan Furr and Jeffrey H. Dyer, "When Your Moon Shots Don’t Take Off", Harvard Business Review - a text about four unconventional innovation methods: https://hbr.org/2019/01/when-your-moon-shots-dont-take-off
  • Joseph Campbell, "The Hero with a Thousand Faces" (1949) - a classic. Campbell showed that myths around the world follow the same schema; the book became the foundation for a handbook for Hollywood screenwriters. The Russian folklorist Vladimir Propp did similar work on fairy tales in the 1920s: https://gointothestory.blcklst.com/vladimir-propps-31-narratemes-another-approach-to-story-structure-da756027ed13
