Quality needs new Qualities: 
AI’s Threat to the Value of our Work
Where new knowledge is created, inspired by Matt Might


TL;DR // Key Insights:


Hidden beneath the much-celebrated AI productivity boosts, a growing threat to the value of our ‘AI-enhanced work’ has been emerging - a threat that has nothing to do with quantity, but everything to do with AI’s tendency to hollow out the quality of our work output.
To escape this threat and safeguard the future value of human work, we urgently need to refocus on utilising more diverse and complementary sets of uniquely human skills - specifically combinations of analytical and creative skills that allow for the creation of truly new knowledge, the type of knowledge that will help our societies push disruptive innovation forward and the boundaries of human development outward.

  • ??♂???????????????????????????


Not a week goes by without news of impressive productivity gains to be achieved through AI, be it in areas like customer service, software development, marketing or even manufacturing and R&D. However, few people have critically explored the side-effects of such AI-driven productivity improvements*. And I am not talking about the possible job cuts resulting from an AI-triggered supply and demand imbalance in the labour market - assuming that we will see a reduced demand for costly human labour wherever it can be replaced by cheaper or more efficient AI-powered services.

* for a more nuanced analysis of the anticipated AI productivity gains see here, here, here, here, here and here

But irrespective of the actual effects AI will have on the labour market - whether we will see a stronger impact of automation or augmentation, or some form of Jevons paradox where increased labour efficiency may ultimately even lead to a long-term increase in demand alongside new job opportunities - the most critical aspect of how AI is ‘helping us’ be more productive at work is not a quantitative, but a qualitative issue.

Self-referential AI producing hollow words

But what does that mean? Rather than describing the issue abstractly, I’d like to use a verbatim quote from the Canadian writer Gilbert Paquet to capture the essence of the problem:

“Every time I come across a platitude like ‘Gen AI can help us return to high growth rates in our economies by boosting our productivity, i.e. the speed at which we produce goods and services’, I can't help but wonder: what productivity are we talking about? … Where is the productivity gain in overwhelming digital and physical worlds with a near-infinite spew of junk? I may be old-fashioned, and if so I'm glad, but what about quality? Unless we lower our standards, how do we evaluate the quality of Gen AI output when we already know that the ‘engine’ can produce a sh*tload of random crap?”

So the real danger we all face by blindly chasing the productivity boosts promised by Microsoft Copilot, GitHub Copilot, Gemini for Workspace and the other cognitive prostheses now readily available in the AI self-enhancement section is a massive degradation of the quality of our output. Maybe not right away, and not for everyone - especially as statistics teaches us that there will most probably be a significant shift of output quality towards the mean, much in line with the so-called ‘AI mediocritisation’ hypothesis. But unless you are happy with being ‘mediocre’, or don’t mind your job eventually becoming irrelevant, you would be well advised to safeguard the quality of your work output.

“If you don’t like change, you’re going to like irrelevance even less.” - Eric Shinseki. Source: John Maeda’s Blog

But how would such a quality degradation through the impact of artificial intelligence even be possible? For one, AI might still have an undue hold on average users due to their inability to differentiate between the actual and imagined capabilities of this technology. As an explanation, we might want to remind ourselves of the experience Joseph Weizenbaum had with his ELIZA, an early NLP application often touted as one of the first ever chatbots:

“ELIZA attempted to simulate, in a relatively simple way, the conversation one might have with a psychotherapist. It used an interactive interface that enabled the user to type answers to questions generated by the software. Famously, many people were impressed by its ability to engage in conversation, assuming that the program possessed a greater amount of understanding and intelligence than was actually the case. After writing ELIZA, Weizenbaum admitted that ‘what I had not realised [was] that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people’.” Source: Weizenbaum-Institute

In addition, independent studies have meanwhile shown that AI users tend to exhibit a whole series of unfavourable biases well known from general human-computer interaction, specifically with regard to a loss of decision-making quality and an overall deterioration of motivation. All of this combined might very well establish cause for concern when it comes to potentially degrading effects on the output quality of ‘AI-powered’ work.

So how do we prevent our work quality from degrading under the influence of AI? To answer this not particularly trivial question, let's draw upon an older, but hopefully still helpful, concept from the early days of psychology, namely Lewin’s ‘field theory’. In this approach, human behaviour is conceptualised in its holistic ‘Gestalt’, as a dynamic, interactive function of the person (P) and their environment (E), resulting in the quasi-mathematical formula B = f(P, E).
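To make Lewin’s formula a little more tangible, here is a minimal, purely illustrative Python sketch. The attributes and the multiplicative weighting are hypothetical assumptions of mine, not part of Lewin’s theory; the only point being illustrated is that behaviour is an interactive function of person and environment, so a change in the environment (such as AI entering the life space) changes behaviour even when the person stays the same.

```python
from dataclasses import dataclass

# Toy illustration of Lewin's field theory, B = f(P, E).
# All attributes and the functional form are hypothetical placeholders.

@dataclass
class Person:
    skills: float      # e.g. analytical/creative ability, on a 0..1 scale
    motivation: float  # 0..1

@dataclass
class Environment:
    ai_support: float  # degree of AI augmentation in the life space, 0..1

def behaviour(p: Person, e: Environment) -> float:
    """B = f(P, E): an interactive (here: multiplicative) function,
    so the same person behaves differently in a changed environment."""
    return p.skills * p.motivation * (1.0 + e.ai_support)

worker = Person(skills=0.8, motivation=0.9)
print(behaviour(worker, Environment(ai_support=0.0)))  # pre-AI environment
print(behaviour(worker, Environment(ai_support=0.5)))  # AI enters the life space
```

The same `Person` yields a different `behaviour` once the `Environment` changes - which is exactly the adaptation pressure the next paragraphs describe.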

Visualisation of Lewin’s field theory. Source:

In this concept, Lewin saw a person as defined by characteristics such as their needs, beliefs, values, motivations and abilities. In essence, for Lewin a person could be summarised as ‘the behaving self’, meaning ‘the individual's perception of its relations to the environment’. This was directly related to a person’s life space being highly dynamic: every day brings new experiences, so the only chance we have as individuals is to adapt and change our ‘person’ with it. This is where the change resulting from AI entering our life space comes into play:

Visualisation of AI’s impact on our ‘life space’, adapted from Lewin’s field theory

So whilst we would traditionally only have to follow a more or less fixed career path, highlighted in this visualisation with a straight, dashed arrow, AI is effectively jolting us out of our routine, prompting us to adapt our career trajectories, at least if we are unwilling to compromise on the quality of our work output. This is because, as a new ‘General Purpose Technology’, AI is effectively changing our lived technological environment - and with it the reality of our work as we are entering into the ‘Post-Knowledge Economy’.

The evolution of our cultural environments and work contexts along with respective technologies. Source:

Because the one thing that effectively changed when we entered this most recent, sixth technological stage of human development is that we started to replace our very own cognition, specifically our decision making, with ‘Machines of the Mind’. This stands in contrast to what we used to do with ICT in the previous ‘Knowledge Economy’: using machines only as a support, to more efficiently pre-process the information required for our own decision making. This shift in how we use technology - replacing what, according to the Enlightenment, once represented our very essence as thinking beings - has consequently established the need to redefine ourselves in very fundamental ways, including the way we work. It effectively requires us to qualify what and how we should now be thinking in order to still contribute to this new economy in any relevant manner, meaning above and beyond what can or will soon be automated through ‘thinking machines’…

Ultimately, any meaningful answer to the existential identity challenge resulting from this ‘automation of cognition’ will arrive at the transformation required at the level of the competencies, abilities and skills that are truly unique to us as human beings. Because by “interlacing…skills and technologies closer than they have ever been”, AI is forcing us to re-evaluate our existing skills not at the traditional speed of cultural and social evolution, but at an accelerated rate of technological progress that threatens to outpace our human capacity for cognitive adaptation. Which might be the reason why last year’s “Future of Jobs Report” predicts that almost half of all workers’ skills will be disrupted within the next five years, leaving as many as six in ten workers in need of (re)training before 2027.

Skills with (increasing) future importance, according to WEF’s “Future of Jobs Report”

The surprising aspect of this skills transformation, however, is that there is no longer one specific skill set representing the key qualification required to equip the future workforce for their jobs alongside these constantly evolving technologies. Instead, two traditionally quite different, if not opposite, yet complementary skills appear to be most in demand in that future, namely creativity and analytical thinking - together reflecting the “increasing importance of complex problem-solving in the workplace”. This points to a highly dynamic and complex future work scenario in which no single skill alone can any longer be regarded as a “standard measure of value” - a change in skill requirements that can be summarised as an ‘AI Skills Transformation’.

Instead, we will most likely be facing an “uncertain and constantly evolving…skill composition” of completely new future occupations, in which the value of each individual skill depends on the changed market forces of supply and demand, but is now - more than ever - also determined by its relative complementarity: the degree to which a skill can be beneficially combined with other skills, specifically those from a completely different domain - pointing to the added value of skills diversity and cognitive flexibility. And, for the first time ever, these skills from ‘another domain’ will now also include those of machines - asking humans who want to be part of this future workforce to specifically consider how well their individual skill sets complement those of AI technologies…

Skills with strong AI complementarity come at a higher premium. Source:

This is supported by collective intelligence expert Gianni Giacomelli, who rightly asserts that

“much of our GDP's performance … is embedded in the systems, the networks, and generally the interplays between people, processes, and organizations - supported by intelligent technologies. Knowledge-intensive economies derive a lot of their power from that collective intelligence”.

Giacomelli then goes on to identify the source of that collective-intelligence-based value creation, pointing to our ability to ‘interoperate’ - only to finally hypothesise about AI's theoretical ability to augment this interoperational ability, provided that “we transform - possibly radically - the way we work”.

But if the way we work is based on the way we think, then ‘work transformation’ is nothing more than a fancy way of masking our cognitive inability to clearly explain how any new way of thinking is ultimately supposed to deliver this much-anticipated ‘transformational’ productivity increase. Or maybe it is just pointing to the deeper, underlying ‘irony of automation’: by increasingly automating and augmenting human thinking with ‘thinking machines’, we are not actually easing the burden for the last - supposedly still useful - human thinkers. Instead, we are effectively making them responsible for solving the remaining, likely much more complex, tasks which cannot be solved by AI. And we probably also expect them to become the ‘crisis intervention task force’ that steps in whenever AI gets it wrong - which would also require these last ‘super-thinkers’ to know when that is actually the case. All of this, of course, calls for vastly increased training of whoever finds themselves in the role of these future ‘AI Whisperers’, who will be responsible for knowing what no machine can know and helping where no machine can help.

But how is one to achieve such an extremely high level of expertise in any given subject matter area - or even two? And especially, how should one compete against machines that have essentially already been trained on all the knowledge digitally available out there? The answer to these questions is likely to be found within ourselves - to be precise, within the very nature of our ‘natural intelligence’. This intelligence is, first of all, much more abundant than we think, as it can be found across a wide range of species, including mammals, birds, reptiles and even invertebrates. But it is also extraordinarily efficient in the way it can extrapolate from astonishingly little empirical data onto actual ground truths in the real world. Which brings us to the very essence of what ‘understanding’ actually is - and how it can be differentiated from simple ‘knowing’. It does not, in fact, seem to require hundreds of billions of data points to solve some of the most essential, yet for current AI systems still often elusive, challenges of life.


So the ‘cognitive core capabilities’ we humans should rely on in order to develop the skills needed to not just survive but thrive alongside ‘thinking machines’ are rooted in the kind of thinking that is unique to us, especially as our naturally evolved intelligence has given us the extraordinary ability to make original and unique observations and then draw new conclusions from these observations about how the world works. In short, where we excel and completely outperform any machine is at the creation of such new knowledge about how the world works. This skill is very similar to what has been formally described as a requirement for acquiring a PhD, for example by Yale University’s doctoral manual, which prescribes that a PhD thesis…

“should demonstrate the student’s mastery of relevant resources and methods and should make an original contribution to knowledge in the field” - whereby “the originality of a dissertation may consist of the discovery of significant new information or principles of organization, the achievement of a new synthesis, the development of new methods or theories, or the application of established methods to new materials.”
Where new knowledge is created, inspired by Matt Might

This means that - whilst there might still be room for value to be created through the implementation of standardised knowledge in areas already covered by AI - actually new knowledge, and hence the potential for completely new value creation, is only generated when human thinkers venture into areas not covered by AI, and specifically when such a human expert extends the boundaries of collective human knowledge beyond its current state. Obviously, this doesn’t mean that all remaining (intellectual) work to be done by humans requires a PhD or a PhD-equivalent quality of thinking. As the nature of human work will remain social in character, any intellectual contribution, however small, that pushes the boundaries of our existing human knowledge outwards will count.

Side effects of technological ‘progress’ for our understanding, image source:

The key difference between the mindless, unreflective regurgitation of pre-existing knowledge and the deep, original thinking that results in new knowledge lies in the level of ‘understanding’ accompanying - and hence qualifying - the respective types of thinking skills, often requiring a challenging back and forth between highly analytical and deeply creative ways of thinking. So there is hope that - despite a whole series of indicators pointing to declining quality levels of thinking - we will somehow continue to hold the human ground in our cognitive collaboration with ‘thinking machines’. Especially when it comes to withstanding the convenience-driven impulse to give in to the mediocrity that results from simply regurgitating second- or even third-hand pseudo-knowledge, eloquently rehashed by any of the current AI thought-prostheses…

Research by



More articles by Thomas Hirschmann
