BETWEEN FICTION AND REASON: THE IRRATIONAL EXUBERANCE OF AI
Carlos Cruz, MSc, MBA, FSI
VP | Strategy Consulting, Technology, Business, CDO | Data & AI | Analytics, ML, DL, LLM, GenAI | XAI, Governance & Ethics | Cloud, DataOps, Data Mesh, ML & AIOps, RPA | Architecture, Engineering, Sales, Economics, R&D, FSI
Abstract
This article explores the relationship between the theory of irrational exuberance and elevated expectations about artificial intelligence (AI). It examines how the quest for competitive advantage and value creation through AI can be put at risk when decisions are influenced by fiction, overconfidence, or unwarranted optimism, the core characteristics of irrational exuberance. It argues that, without a critical understanding of these dynamics and without an AI portfolio centered on a data strategy and business processes, organizations may incur low-return efforts and substantial risks, compromising both value creation and long-term economic sustainability.
Irrational Exuberance and AI: A Cautionary Tale for the 21st Century
Artificial intelligence, once restricted to the domain of science fiction, has become one of the central pillars of organizational innovation in the 21st century. Companies around the world are investing heavily in AI with the expectation of gaining competitive advantages and creating value in increasingly volatile and unpredictable markets. However, this headlong rush toward extreme automation and predictive analytics raises critical questions about the rationality of these strategic decisions.
The starting point is the concept of "irrational exuberance", coined by Alan Greenspan, former chairman of the United States Federal Reserve, in his well-known 1996 speech (Greenspan, 1996). He used the expression to describe the behavior of investors who drive the prices of financial assets to excessively high levels, disconnected from underlying economic fundamentals. "Irrational exuberance" therefore refers to exaggerated optimism in the financial market, where investors buy assets unrestrainedly, driven by unrealistic expectations of future profits, without adequately considering the risks involved.
The term was popularized during the technology bubble of the 1990s, the so-called "dot com bubble". During this period, the stock market, especially technology company stocks, experienced explosive growth. Expectations of future profits were so high that many investors began buying shares in companies that often did not have significant revenues or a sustainable business model. This behavior resulted in a financial bubble, which eventually burst in 2000, causing massive losses for investors.
Shiller (2005) expands on the concept of "irrational exuberance" by describing the United States housing bubble of the 2000s, which culminated in the global financial crisis of 2008. During the years leading up to the crisis, property prices rose rapidly, driven by speculation and reckless lending. Many people believed that home prices would continue to rise indefinitely, and this fueled a cycle of buying and selling that drove prices far beyond what economic fundamentals justified.
When the bubble burst, many investors and property owners faced devastating losses, triggering a global financial crisis. Irrational exuberance, defined as the excessive optimism that inflates markets and leads to risky decisions, serves here as a starting point for understanding how fiction can inflate expectations about AI, and for thinking about how organizations can still capture AI's substantial potential across organizational processes by finding the right combination of fiction, reason, and rationality.
From Fiction to Reality: The Peak of Inflated Expectations
Isaac Asimov imagined artificial intelligence (AI) and robots as entities endowed with cognitive capabilities, designed to coexist harmoniously with humans. Asimov was a pioneer in creating a detailed and ethical vision for how robots could function in society.
In I, Robot (Asimov, 1950), he introduced the "Three Laws of Robotics" as fundamental principles for the behavior of robots (while also exploring the possibility of consciousness and autonomy), ensuring that they were safe and beneficial to humanity: (1) A robot may not harm a human being or, through inaction, allow a human being to suffer harm. (2) A robot must obey the orders given to it by human beings, except where such orders conflict with the First Law. (3) A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.
Asimov imagined robots as helpers and protectors: in "Robbie" (Asimov, 1940), his first fictional robot is a loyal caregiver to a child, showing how robots could be integrated into family life. He also explored the moral and ethical complexity of AI: in "The Evitable Conflict" (Asimov, 1952), a robotic supercomputer that controls the world economy must decide whether to lie to humanity to avoid a greater conflict, exploring the nuances of the laws of robotics.
Asimov also imagined positronic[1] robots as essential to distinct aspects of human life in society (Asimov, 1957), in addition to offering a long-term vision of benevolent governance by AI-equipped robots.
From Asimov's fiction, concerned with the behavior of AI, we move to the reality explored by Alan Turing (Turing, 1950), who proposed the "Turing test" to determine whether a machine can exhibit behavior indistinguishable from that of a human, from both a theoretical and a practical perspective, laying the foundations of modern computer science and AI.
The echoes of fiction fuel exuberant expectations about AI, visible in the large-scale investments made by Big Tech companies (Nvidia, Google, Amazon, Microsoft, Meta, OpenAI, Anthropic), which are now under pressure from investors who want the financial returns promised by AI. Amid the speculation, investors and experts are raising concerns about whether AI is worth the investment.
On the optimistic side, the belief is that large capital expenditures will eventually be rewarded. Taking a step back does not seem to be an option for Big Tech: spending on AI is forecast at around US$ 1 trillion over the next five years, with little to show for it so far (Goldman Sachs, 2024). Despite the concerns and constraints, there is still space and time for AI to deliver its promised benefits, making the exuberance rational and confirming the belief born in fiction, as AI evolves and companies mature by using it systematically in their businesses.
Balancing Exuberance and Reality: Creating a Sustainable AI Portfolio
Finding the path of reason to temper exuberance requires more than implementing isolated use cases: it requires building an AI portfolio (Raskino et al., 2018) that combines "quick win" projects, which deliver gains in operational efficiency and optimization, with "long-term" projects that redefine end-to-end processes, affect every dimension of the organization, and shape the relationship between competitive strategies and innovation strategies (Cruz, 2020).
This approach also helps prevent organizations from withdrawing their AI investments due to irrational failures in the conduct of projects (Wilson & Daugherty, 2017).
It is understood that data is the most important asset in an organizational AI strategy, since algorithms have become commodities that can be replicated and customized, especially as quick wins concentrate on applying ready-to-use Machine Learning (ML) with specific adaptations.
These projects do not have transformative power on their own, but they can expose employees to the benefits of AI and build consensus on its potential with organizational leaders when supported by an appropriate data strategy (Agrawal et al., 2020).
Projects of this nature also help organizations develop the skills needed to support larger projects involving large-scale data capture, processing, and labeling.
Examples of “Quick-Wins” Projects:
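One illustrative quick win is routing customer feedback with a pretrained sentiment model, adapted only lightly to the organization's own queues. The sketch below is a minimal, hypothetical example assuming the Hugging Face transformers pipeline; the routing rule and the feedback texts are illustrative assumptions, not a prescribed solution.

```python
# Hypothetical quick-win sketch: classifying customer feedback with an
# off-the-shelf sentiment model (assumes the `transformers` library is
# installed; the routing rule below is illustrative only).
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a default pretrained model

feedback = [
    "The new mobile app makes transfers much faster.",
    "I have been waiting three days for a reply from support.",
]

for text in feedback:
    result = classifier(text)[0]  # e.g. {'label': 'NEGATIVE', 'score': 0.99}
    queue = "priority_follow_up" if result["label"] == "NEGATIVE" else "standard"
    print(f"{queue:>18} | {result['score']:.2f} | {text}")
```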
Long-term projects offer greater organizational impact and should be combined with short-term projects to create synergies. They involve rethinking processes entirely, not merely gaining optimization or operational efficiency; as Porter (2000) argues, operational efficiency is not a strategy.
Example of “Long-Term” Project:
Automation of the customer onboarding process in an investment bank, using speech understanding, vision, entity extraction, text generation, and automated interaction, from the front office to the back office.
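A minimal sketch of what such an orchestration might look like, assuming each listed capability is exposed as a service: the function names, data structure, and hand-off logic below are illustrative stubs, not a reference implementation.

```python
# Hypothetical end-to-end onboarding orchestration: each AI capability is a stub
# standing in for a real model or service (speech-to-text, OCR/vision, NER, LLM).
from dataclasses import dataclass, field

@dataclass
class OnboardingCase:
    customer_id: str
    entities: dict = field(default_factory=dict)
    messages: list = field(default_factory=list)
    status: str = "received"

def transcribe_call(audio_path: str) -> str:
    """Speech understanding step (stub for a speech-to-text model)."""
    return "customer requests an investment account opening"

def read_id_document(image_path: str) -> dict:
    """Vision step (stub for OCR plus document classification)."""
    return {"name": "Jane Doe", "document_type": "passport"}

def extract_entities(text: str) -> dict:
    """Entity extraction step (stub for a named-entity-recognition model)."""
    return {"intent": "account_opening"}

def draft_reply(entities: dict) -> str:
    """Text generation step (stub for an LLM with templated prompts)."""
    return f"Onboarding started for intent: {entities.get('intent', 'unknown')}"

def run_onboarding(case: OnboardingCase, audio_path: str, id_image: str) -> OnboardingCase:
    transcript = transcribe_call(audio_path)          # front-office channel
    case.entities.update(extract_entities(transcript))
    case.entities.update(read_id_document(id_image))
    case.messages.append(draft_reply(case.entities))  # automated interaction
    case.status = "ready_for_back_office"             # hand-off to the back office
    return case

if __name__ == "__main__":
    done = run_onboarding(OnboardingCase("C-001"), "call.wav", "passport.png")
    print(done.status, done.entities, done.messages)
```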
Because long-term projects go beyond quick wins, off-the-shelf technology is not enough. They require organizational skills in building ML algorithms and in combining the various existing technologies (cloud and on-premises) with the types and subtypes of AI: ML, Deep Learning (DL), and Large Language Models (LLMs), for example.
Connecting the organization to a culture of continuous learning and training, to inform, update, and enhance employees' knowledge, is crucial for increasing the organization's AI quotient and fostering "reason" over "irrational exuberance".
Building an AI portfolio requires in-depth knowledge of the industry and business model, together with disciplined execution of the fundamental steps that underpin it.
Developing an AI portfolio that enables integration is critical as companies gain momentum with "Quick Wins" and move toward "Long-Term" goals, steering the organization toward business transformation with AI.
An AI portfolio combined with an organizational learning culture, an AI strategy integrated with the competitive strategy, and a data infrastructure that connects processes and behaviors and encourages the consumption of AI helps rally the organization around the use of data and AI.
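As a loose illustration, the sketch below represents such a portfolio as horizon-tagged initiatives and checks that quick wins and long-term bets coexist; the structure, names, and scores are hypothetical, not a method prescribed by the cited authors.

```python
# Hypothetical sketch: an AI portfolio as tagged initiatives, with a simple
# check on the balance between "quick win" and "long term" horizons.
from dataclasses import dataclass
from collections import Counter

@dataclass
class Initiative:
    name: str
    horizon: str          # "quick_win" or "long_term"
    expected_impact: int  # 1 (local optimization) .. 5 (end-to-end transformation)

portfolio = [
    Initiative("Customer feedback triage", "quick_win", 2),
    Initiative("Invoice data extraction", "quick_win", 2),
    Initiative("End-to-end onboarding redesign", "long_term", 5),
]

def horizon_mix(items: list[Initiative]) -> Counter:
    """Count initiatives per horizon to make the portfolio mix explicit."""
    return Counter(i.horizon for i in items)

mix = horizon_mix(portfolio)
if not mix["quick_win"] or not mix["long_term"]:
    print("Unbalanced portfolio: combine quick wins with long-term bets.")
else:
    print(f"Portfolio mix: {dict(mix)}")
```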
Despite the irrational exuberance around AI, organizations can take substantial advantage of its potential by understanding the implications and dynamics of creating the ideal AI portfolio for each type of business and by observing the economics of AI (Agrawal et al., 2018), long before robots begin to question their own existence.
REFERENCES
Agrawal, A., Gans, J., & Goldfarb, A. (2020). How to Win with Machine Learning. Harvard Business Review.
Agrawal, A., Gans, J., & Goldfarb, A. (2018). Prediction Machines: The Simple Economics of Artificial Intelligence. Harvard Business Review Press.
Asimov, I. (1940). Robbie (1st ed.). Super Science Stories.
Asimov, I. (1950). I, Robot (1st ed.). Doubleday.
Asimov, I. (1952). The Evitable Conflict (1st ed.). Astounding Science Fiction.
Asimov, I. (1957). The Naked Sun (1st ed.). Doubleday.
Cruz, C. J. X. (2020). As Relações Entre As Estratégias Competitivas e Estratégias de Inovação na Indústria Bancária. Fundação Getulio Vargas (FGV). https://bibliotecadigital.fgv.br/dspace/handle/10438/28820
Greenspan, A. (1996, December 5). Remarks by Chairman Alan Greenspan. Federal Reserve. https://www.federalreserve.gov/boarddocs/speeches/1996/19961205.htm
Porter, M. E. (2000). What Is Strategy? Harvard Business Review.
Raskino, M., Garthwaite, C., Kiron, D., & Spira, J. (2018). Winning with AI. MIT Sloan Management Review. https://sloanreview.mit.edu/projects/winning-with-ai/
Goldman Sachs. (2024). Gen AI: Too Much Spend, Too Little Benefit? https://www.goldmansachs.com/images/migrated/insights/pages/gs-research/gen-ai–too-much-spend,-too-little-benefit-/TOM_AI%202.0_ForRedaction.pdf
Shiller, R. J. (2005). Irrational Exuberance (Second Edition). Princeton University Press.
Turing, A. M. (1950). Computing Machinery and Intelligence. Mind, 59(236), 433–460.
Wilson, H. J., & Daugherty, P. R. (2017). The first wave of corporate AI is doomed to fail. Harvard Business Review. https://hbr.org/2017/04/the-first-wave-of-corporate-ai-is-doomed-to-fail
Notes:
[1] Positronic Societies: Asimov coined the term "positronic brain" to describe the control unit of robots in his books, such as in the I, Robot series and the Foundation saga. A positronic brain is an advanced artificial intelligence system that allows robots to make complex decisions and interact with humans in an ethical manner, based on the Three Laws of Robotics.