Artificial Intelligence: Between Hope and Hype

In 1987, the economist Robert Solow famously observed:

“You can see the computer age everywhere but in the productivity statistics.”

He made the comment because information and communications technology (ICT, today usually just IT) had attracted large investments and was ubiquitous in the 1980s, yet total factor productivity (TFP) growth had stalled. It would take another ten years after Solow’s remark before productivity gains from ICT became measurable at the macro level.

The story of IT is really a story about how long it takes for organizations and society writ large to adapt to new general-purpose technologies, often abbreviated GPTs. It took around 40 years from the advent of the personal microprocessor until its effects could be seen in the productivity statistics. The same story holds for the other major GPTs: Edison provided power to New York customers for the first time in 1882, yet electricity did not change labor productivity until the 1920s. And James Watt patented his revolutionary steam engine in 1769, yet it wasn’t until the mid-19th century that it had a macro-level effect on any economy. Of course, specific types of labor had been dramatically changed, for better or worse, but not to a degree where the steam engine’s contribution to total factor productivity was measurable.

In other words: Even with transformative, society-changing technologies, it takes a long time for impact to manifest.

Thinking of AI as a general-purpose technology

The reason this is worth pondering is that AI is probably also a general-purpose technology. It is pervasive and generally useful across a wide array of tasks, jobs, and contexts; it provides the basis for new inventions (like AI-powered software services or search engines); it gets better over time and increases in usefulness; it is complementary with other innovations (for example, you can carry your AI around in your smartphone); and, perhaps most crucially, it impacts the economy. But not right now. And probably not as soon as some might think.

There is already a growing mountain of evidence documenting how large language models (LLMs) and AI in general can either assist with or completely solve relatively complex problems. Productivity increases reported in randomized controlled trials range from a few percent to over 50 percent, depending on the task and the skill level of the human. And in theory, productivity gains from AI should appear across many different types of tasks. Similarly, studies have shown that the quality of output can be enhanced by a similar amount, depending on the type of work.

The problem is that real-life tasks are embedded in larger systems and involve other people, none of which resemble an experimental setup. Organizations all have different routines and workflows and make use of other programs or tools, and if these routines or tools do not fit well with, say, ChatGPT, then productivity in practice may suffer. Or the work that goes into making the tool work exceeds its short-term benefits. My colleagues Lise Justesen and Ursula Plesner have explored the “invisible work” that organizations and managers have to undertake when new IT technologies are introduced, since significant resources can be spent on integrating them with current routines or rectifying unexpected challenges. So, while productivity may increase in isolated tasks, the effect in a real organizational setting will be dampened, and that is before we even get to which tools are allowed, whether GDPR is a problem, and so on.

Integrating AI into an organization is often harder than it seems

Let’s look at a real example: I was helping someone in a large organization figure out how ChatGPT could support communication professionals with internal and external communication. A really good use case for AI! It took me less than an hour to develop and refine a custom GPT with instructions, in-context learning examples based on existing pieces of communication, and guidance on the formats typically used. A rough estimate would be that, in isolation, that custom GPT (that is, a custom version of ChatGPT, not a custom general-purpose technology) would make tasks related to actually writing communication around 3-4 times easier. Or, in other words, a single person could do the job of three or four colleagues.

[Image: Example of a custom-made communication GPT tailored to mimic a specific style of communication]
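For the technically inclined, here is a minimal sketch of what that kind of setup looks like outside the ChatGPT interface, using the OpenAI Python API: role instructions plus a few in-context examples in front of the user’s request. The instructions, the example exchange, and the model name below are hypothetical stand-ins, not the actual custom GPT described above.

```python
# Minimal sketch of a "custom GPT"-style assistant via the OpenAI API.
# The instructions, example exchange, and model name are hypothetical
# placeholders, not the actual setup described in this article.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Role instructions: roughly what a custom GPT's "Instructions" field holds.
INSTRUCTIONS = (
    "You are a communication assistant for our organization. "
    "Write in a warm, concise tone. Internal memos use short paragraphs "
    "and end with a bullet-point summary; external posts avoid jargon."
)

# In-context learning: existing pieces of communication used as examples.
FEW_SHOT = [
    {"role": "user", "content": "Draft an internal memo about new canteen hours."},
    {"role": "assistant", "content": "Subject: New canteen hours\n\nFrom Monday, ..."},
]

def draft(request: str) -> str:
    """Draft a piece of communication in the organization's house style."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption; any capable chat model would do
        messages=[
            {"role": "system", "content": INSTRUCTIONS},
            *FEW_SHOT,
            {"role": "user", "content": request},
        ],
    )
    return response.choices[0].message.content

print(draft("Write a short external post announcing our summer internships."))
```

Notice that the code itself is trivial; the hard part is everything around it, as the list of challenges below shows.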

As it happens, the real world is not this smooth. The following challenges arose immediately (within one hour of discussion):

  • How do we ensure we are allowed to use custom GPTs if we don’t have a corporate license? Without one, the organization won’t reimburse the expense. The free version is fine for using the GPT, but at least one person needs to be able to edit it.
  • How do we figure out who is responsible for maintaining custom GPTs? Should it be a few people, or everyone?
  • How should we organize the teams or workflows around this?
  • When and how should we disclose we use it?
  • What if we do not get permission and people use it anyway?
  • How does it interface with the handful of other software packages we use, such as website editing tools, mass emailing systems, and the Office suite?
  • What if some employees feel unsafe using it?
  • Are our data safe? Should we be worried?
  • If the climate impact of a prompt is high, should we then use it sparingly?

These questions are broadly similar to those facing virtually every other organization grappling with how to handle AI right now. Some concerns are easy to verbalize, such as data safety, while others appear only when you concretely discuss how to use a tool, such as who is responsible for developing and maintaining custom versions of a chatbot.

I think that issues like these are the problems holding back productivity gains from AI, just as was the case with the steam engine, electricity, and early personal computers.

This is also one reason why there are such dramatic differences between people’s perceptions of AI. One software developer may work in an organization whose infrastructure just so happens to be conducive to using state-of-the-art AI tools. Compare that to another software developer in an organization that does not, for example because it has very specific processes for code development or specific templates for developing and reviewing changes to test and production code. You don’t just have ChatGPT generate a few lines of code in those organizations. Thus, these two software developers will have widely different perspectives on the viability and utility of AI. And it is the same within and between other job categories, between people who work in the public and the private sector, and even depending on whether you live in the U.S. or not. Integrating AI into a complex infrastructure of humans, machines, software, hardware, and routines is not automatically easy; far from it.

Wait 25 years before evaluating the productivity gains from AI

When we look back at earlier general-purpose technologies, it took a really, really long time for them to reach their full productive potential across the economy as a whole. It is easy to find detractors who regarded these early technologies as a waste of resources, or as dangerous, because they were not yet demonstrating productivity gains. For some technologies, the detractors were right. For others, they were wrong. The key point is that evaluating a general-purpose technology in the early phases of its introduction underestimated its eventual effect on the economy.

Bringing this back to the chatbots we all know and love, it is considerably more difficult to say with certainty what the future of AI will be. I often see people dismissing LLMs or ChatGPT because they do not help them directly in their own tasks, yet evidently other people find them useful in theirs. And even if the technology is useful in a wide array of tasks, it has differential effects, and it is not trivial to incorporate AI into an organization, as discussed earlier.

So here is a claim: Based on the trajectory and history of earlier transformative technologies, we should probably wait until, say, 2050 before judging economy-level transformative changes. That would give the technology around 25 years to develop, be refined, packaged, and prepared, and be integrated into existing systems and routines. Even if we do not see widespread macro-level productivity effects before 2050, AI would still be the fastest general-purpose technology ever in terms of how quickly it began affecting the wider economy beyond niche use cases and isolated tests.

That is why I think it is quite premature to say that AI is purely hype, or that it is only a bubble. Sure, there are hype-like claims about AI, and it certainly seems everyone is rushing to put .ai on their website domains and start-ups. But that doesn’t mean the underlying technology is useless; far from it. It just means we have some sorting to do to figure out where the technology will work and how we will use it.

In the meantime, much of our professional lives over the next two decades will probably revolve around figuring out how to use AI productively and responsibly. It is incredibly difficult to assess how fast the technology will improve, but there is growing consensus that human-level machine intelligence will appear sometime before 2050, which is a staggering thought when you reflect on it. Yet even now, with simple chatbots like ChatGPT, Claude, and Gemini, we as users have barely begun scratching the surface of how to make the most of them. So, in sum, for the next 25 years I will hold off on dismissing AI as purely hype even if major productivity effects do not show up at the macro level. And then we will see in 2050.


Here are some questions organizations can ask themselves to uncover all the ‘hard parts’ of getting AI to work in a real, practical sense. I had ChatGPT read my short article and suggest the questions it implies, and honestly, I think its list is better than what I would have come up with. A good reminder that the obvious answer is often really good if you provide decent background information.

Strategic Alignment and Vision

  • Does your organization have a clear strategic vision for AI adoption? How does AI align with your overall business goals and objectives?
  • Is there a designated leadership or champion for AI initiatives within the organization?
  • How does the organization view AI: as a tool for cost-cutting, a driver for innovation, or a transformative change agent?

Readiness and Capability Assessment

  • How would you rate your organization’s current digital and data maturity? Do you have the necessary data infrastructure and quality data to support AI initiatives?
  • What existing AI skills or expertise do you have within the organization? Are there gaps that need to be filled through training, hiring, or partnerships?
  • How prepared is your IT infrastructure to support AI technologies, including computational power, cloud readiness, and integration capabilities?

Operational Integration

  • What are the current workflows or processes where AI can have the most significant impact? Have you identified low-hanging fruit versus more complex, long-term integration opportunities?
  • How adaptable are your current processes and workflows to accommodate AI tools? What changes or redesigns might be necessary?
  • How are you planning to handle the ongoing maintenance and updates of AI systems? Who will be responsible for these tasks?

Risk Management and Compliance

  • What are the potential risks associated with AI adoption in your organization (e.g., data privacy, security, ethical concerns)? How are these risks being mitigated?
  • Are there any regulatory or compliance issues that must be addressed in relation to your AI use cases (e.g., GDPR, industry-specific regulations)?
  • How do you plan to manage the ethical implications of AI, such as bias, transparency, and accountability in AI-driven decisions?

Change Management and Cultural Readiness

  • How prepared is your organization culturally for AI adoption? Is there resistance, fear, or lack of understanding about AI among employees?
  • How do you plan to address employee concerns, upskill teams, and encourage a mindset shift towards AI adoption?
  • How are you involving different stakeholders (from end-users to executives) in the AI integration process?

Performance Measurement and ROI

  • How will you measure the success of AI initiatives? Are there clear KPIs or metrics that link AI integration to business outcomes?
  • What is your approach to evaluating the ROI of AI investments? Are you considering both direct productivity gains and broader strategic benefits?
  • How flexible is your AI strategy? Can it adapt based on ongoing feedback, performance, and evolving technology landscapes?

Experimentation and Scaling

  • How are you approaching experimentation with AI? Are there pilot projects or proof of concepts in place?
  • What criteria are you using to determine when and how to scale successful AI initiatives across the organization?
  • How do you handle failures or setbacks in AI projects? Is there a mechanism for learning and iterating quickly?

Collaboration and Ecosystem Engagement

  • How are you engaging with external AI ecosystems, such as partnerships with AI vendors, academic institutions, or industry consortia?
  • Are you leveraging open-source AI tools, collaborating with startups, or exploring other innovative approaches to keep pace with AI developments?
  • How are you sharing AI knowledge and best practices across teams and departments within the organization?
