Why the Value of Uniquely Human Skills Will Increase in the Age of AI
David Ragland, DBA, MS, PMP
Management Consultant (AI Strategy); Professor of the Practice, Management
FuturePoint Digital is a research-based consultancy that explores practical applications of emerging concepts from science and technology. Follow us at www.futurepointdigital.com.
In an era marked by unprecedented advancements in artificial intelligence (AI), understanding the evolving interplay between technology and human capabilities becomes crucial. The O-Ring Theory, proposed in 1993 by economist Michael Kremer (now at the University of Chicago), provides a compelling framework for examining why uniquely human skills are gaining in value as AI technologies advance. The theory, originally developed to explain the interdependence of tasks within a production process, draws its name from the Challenger space shuttle disaster, in which the failure of a single component (an O-ring) doomed the entire mission.
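To see the multiplicative logic at the heart of the theory, a simplified rendering of Kremer's (1993) production function is helpful (the full model also includes capital and labor-share terms omitted here): the value of output is proportional to the product of the quality of every task in the chain, so a single weak link drags down the whole.

```latex
% Simplified O-Ring production function (after Kremer, 1993).
% q_i is the quality (probability of success) of task i, n the number of
% tasks in the chain, and B the value per task when everything goes right.
y \;=\; n\,B \prod_{i=1}^{n} q_i , \qquad 0 \le q_i \le 1
% Worked example: ten tasks each performed at 95% quality deliver only
% 0.95^{10} \approx 0.60 of potential value, while a single task performed
% at 10% quality caps the entire chain near 0.10 of potential value.
```

Read in the AI context, the tasks that resist automation, such as judgment calls, ethical trade-offs, and relationship management, become the quality terms most likely to determine whether the whole chain succeeds, which is precisely why their value rises as everything around them is automated.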
The theory has subsequently been used by researchers to highlight the importance of complementarity and quality matching in tasks where AI and human workers interact. Applied in this context, it suggests that as AI-driven automation becomes more prevalent, the premium on human skills that cannot be automated or reduced to algorithms, especially those involving complex, nuanced judgment and interpersonal interaction, will likely rise (Autor, 2015; Deming, 2017).
More specifically, the uniquely human skills likely to surge in value span a wide range, including, but not limited to, creative and innovative thinking, emotional intelligence, ethical judgment, and complex problem-solving. These skills are not easily codified or replicated by algorithms, making them increasingly critical in a technology-driven world. For instance, while AI can analyze data and identify patterns, it is the human capacity for creative thinking that leads to groundbreaking innovations (Hass, 2020; Starkey, 2020). Similarly, emotional intelligence, vital for leadership, teamwork, and customer relations, remains uniquely human. Furthermore, as ethical considerations grow more complex with the integration of AI into societal structures, the ability to navigate these moral landscapes will be indispensable (Fernández-Berrocal & Extremera, 2020).
This white paper delves into these dynamics, offering insights into how individuals and organizations can adapt to and thrive in a future where human skills and AI capabilities are deeply intertwined. By fostering and valuing uniquely human skills, we can harness the full potential of this symbiotic relationship, ensuring that the advancement of AI not only augments our capabilities but also enriches the human experience. The future, therefore, is not one of displacement but of enhancement, where AI's evolution propels the worth and demand for our most human qualities to new heights.
With this concept in mind, FuturePoint Digital adopted the slogan "human intelligence + artificial intelligence = super intelligence?" to illustrate the ever-increasing multiplier effect of human/machine interactions. To learn more, please follow us at www.futurepointdigital.com.
Envisioning the Future of Human-AI Collaboration
As AI systems become more sophisticated, capable of performing tasks ranging from complex data analysis to autonomous driving, the question of how human workers will fit into this new paradigm becomes increasingly urgent. The fear that machines might replace human labor en masse has sparked widespread debate. A closer examination, however, reveals a more nuanced reality. While AI excels at tasks that involve processing vast amounts of information or executing well-defined procedures, there remains a spectrum of distinctly human skills that AI cannot replicate, which makes those skills all the more valuable (Frank et al., 2019; Susskind & Susskind, 2020; World Economic Forum, 2020).
Specific human skills that are likely to see a surge in value in tandem with the increasing sophistication of AI technologies include (but are not limited to) the following:
The Small Data vs. Big Data Advantage
The difference in how AI platforms and humans develop capabilities, particularly in creative tasks such as painting or in decision-making, can also be explained, at least partially, by how each leverages small data versus big data to its respective advantage. Here is a closer look at why AI requires vast amounts of data and time to develop capabilities that a human can perform more intuitively:
Continuing with the human artist example to illustrate how humans are, in many ways, distinctly advantaged over their algorithmically driven cousins, consider what it takes to train an AI platform to paint a unique portrait of a particular person; let's call her Mona Lisa. Starting from scratch, we would first need to select an AI model; Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs) are popular choices because of their ability to generate high-quality, detailed images. Next, we would need to collect a large dataset of portrait images (quality, not just quantity, counts here). These images should be as varied as possible in terms of styles, lighting, backgrounds, and facial features so that the platform learns a wide range of portrait characteristics; this becomes the training set used to develop the general artistic capability we are looking for. We would then assemble a test set of images the platform has not yet "seen," used to evaluate its ability to create a unique and specific image, in this case a portrait that captures the essence of Ms. Lisa. The entire process could take months, but with enough data and training the platform should be able to paint a unique likeness of Ms. Lisa, perhaps even something on par with Mr. Da Vinci himself.

However, after this rather arduous and time-consuming process, the AI platform is limited to the single task for which it has been trained. It cannot, for instance, stop painting and sit down to play a simple game of checkers. Humans are clearly far more versatile in this sense, and one reason is that humans can perform complex tasks from relatively small amounts of data, while AI platforms require large amounts of data to process, learn, and perform tasks.
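To give a sense of how much machinery even this toy exercise involves, below is a minimal, illustrative sketch of a GAN training loop in Python. It assumes PyTorch and torchvision and a hypothetical folder of portrait images named portraits/; it is a simplified sketch rather than a working portrait studio, and a real pipeline would need far more data, far longer training, and an additional fine-tuning step on reference photos of the subject.

```python
# Toy GAN training sketch (assumes PyTorch/torchvision and a hypothetical
# "portraits/" folder of face images). Illustrative only: real portrait
# generation needs far more data, training time, and careful tuning.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

LATENT_DIM, IMG_SIZE, BATCH = 100, 64, 32
device = "cuda" if torch.cuda.is_available() else "cpu"

# Training set: thousands of varied portraits (styles, lighting, backgrounds).
tfm = transforms.Compose([
    transforms.Resize((IMG_SIZE, IMG_SIZE)),
    transforms.ToTensor(),
    transforms.Normalize([0.5] * 3, [0.5] * 3),
])
data = DataLoader(datasets.ImageFolder("portraits/", transform=tfm),
                  batch_size=BATCH, shuffle=True)

# Generator: maps random noise to a 64x64 RGB image.
G = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_SIZE * IMG_SIZE * 3), nn.Tanh(),
).to(device)

# Discriminator: scores images as real (1) or generated (0).
D = nn.Sequential(
    nn.Flatten(),
    nn.Linear(IMG_SIZE * IMG_SIZE * 3, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
).to(device)

loss = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

for epoch in range(50):                      # real training runs far longer
    for real, _ in data:
        real = real.to(device)
        n = real.size(0)
        noise = torch.randn(n, LATENT_DIM, device=device)
        fake = G(noise).view(n, 3, IMG_SIZE, IMG_SIZE)

        # Discriminator step: learn to tell real portraits from generated ones.
        d_loss = (loss(D(real), torch.ones(n, 1, device=device)) +
                  loss(D(fake.detach()), torch.zeros(n, 1, device=device)))
        opt_d.zero_grad()
        d_loss.backward()
        opt_d.step()

        # Generator step: learn to fool the discriminator.
        g_loss = loss(D(fake), torch.ones(n, 1, device=device))
        opt_g.zero_grad()
        g_loss.backward()
        opt_g.step()

# After training (and, in practice, fine-tuning on reference photos of
# "Ms. Lisa"), a new portrait is sampled from random noise:
portrait = G(torch.randn(1, LATENT_DIM, device=device)).view(3, IMG_SIZE, IMG_SIZE)
```

Even in this stripped-down form, the pipeline demands thousands of images and many hours of compute before it can produce anything recognizable, whereas a human artist works from a handful of glances at the sitter and a lifetime of accumulated, transferable experience.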
Differences in Learning Processes
Another consideration is that the mechanisms driving AI and human learning are fundamentally distinct, each with its own strengths and limitations. AI platforms, with their capacity for processing vast datasets, embody the pinnacle of linear, algorithmic learning, optimizing tasks through patterns and probabilities (Goodfellow, Bengio, & Courville, 2016; LeCun, Bengio, & Hinton, 2015). In contrast, human learning takes a more fluid and adaptable approach, leveraging a rich tapestry of emotional, subconscious, and cognitive faculties that enable a nuanced understanding of the world (Tyng, Amin, Saad, & Malik, 2017; Woollett & Maguire, 2019). This juxtaposition of AI's data-driven learning against humans' experiential and intuitive knowledge acquisition points to a broader conversation about the complementary roles of artificial and human intelligence in advancing collective knowledge and capabilities. The sections that follow take a closer look at these differences.
Differences in Contextual and Abstract Thinking
The dichotomy between AI and human cognitive abilities becomes particularly pronounced when examining their respective capacities for contextual and abstract thinking. While AI systems demonstrate unparalleled efficiency in processing and applying specific information within the confines of their programming, they falter when faced with the need for abstract reasoning or the application of knowledge across diverse contexts (Barrett & Satpute, 2019; Zador, 2019). In stark contrast, human intellect thrives on adaptability and abstraction, moving easily across domains of knowledge and applying learned experience in novel situations (Hassin, 2020; Oakley & Sejnowski, 2019). This fundamental difference underscores the unique strengths of human cognition and the intrinsic value of our ability to navigate complex, multifaceted scenarios that AI cannot inherently grasp. It also makes clear why fostering both AI's precision and human adaptability is crucial for harnessing the full spectrum of cognitive capabilities.
Efficiency and Flexibility
In the realm of cognitive performance, the concepts of efficiency and flexibility present a fascinating study in contrasts between AI and human capabilities. AI systems excel in efficiency, processing and executing tasks with a speed and precision that often exceed human abilities, particularly in well-defined domains such as data analysis and pattern recognition. This efficiency, however, comes with limits on versatility and adaptability (Rajkomar, Dean, & Kohane, 2019; Litjens et al., 2017). Conversely, the human brain, while it may not match the raw processing power of AI on specific tasks, demonstrates remarkable flexibility. The human capacity to seamlessly switch contexts, grasp the nuances of complex problems, and integrate new information without extensive retraining is a critical advantage in the unpredictable and ever-changing landscape of real-world challenges (Beaty et al., 2021; Woollett & Maguire, 2019). Taken together, these attributes reveal the complementary nature of AI's efficiency and human flexibility and suggest that the most effective solutions will leverage the strengths of both.
In summary, as AI platforms become more complex and capable, the human skills that complement, enhance, or surpass AI's capabilities will become increasingly important. This interplay between human ingenuity and artificial efficiency presents a roadmap for navigating the future of work, emphasizing the development of skills that AI cannot replicate. By focusing on nurturing these uniquely human abilities, individuals and organizations can prepare to thrive in an AI-augmented world.
How might FuturePoint Digital help your organization explore exciting, emerging concepts in science and technology? Follow us at www.futurepointdigital.com, or contact us via email at [email protected].
About the Author: David Ragland is a former senior technology executive and an adjunct professor of management. He serves as a partner at FuturePoint Digital, a research-based technology consultancy specializing in strategy, advisory, and educational services for global clients. David earned his Doctorate in Business Administration from IE University in Madrid, Spain, and a Master of Science in Information and Telecommunications Systems from Johns Hopkins University, where he was honored with the Edward J. Stegman Award for Academic Excellence. He holds an undergraduate degree in Psychology from James Madison University and has completed a certificate in Artificial Intelligence and Business Strategy at MIT. His research focuses on the intersection of emerging technology with organizational and societal dynamics.
References
Autor, D. H. (2015). Why are there still so many jobs? The history and future of workplace automation. Journal of Economic Perspectives, 29(3), 3-30.
Barrett, L. F., & Satpute, A. B. (2019). Large-scale brain networks in affective and social neuroscience: Towards an integrative functional architecture of the brain. Current Opinion in Behavioral Sciences, 28, 100-107.
Beaty, R. E., Kenett, Y. N., Christensen, A. P., Rosenberg, M. D., Benedek, M., Chen, Q., ... & Silvia, P. J. (2021). Robust prediction of individual creative ability from brain functional connectivity. Proceedings of the National Academy of Sciences, 118(10).
Benedek, M., & Fink, A. (2019). Toward a neurocognitive framework of creative cognition: The role of memory, attention, and cognitive control. Current Opinion in Behavioral Sciences, 27, 116-122.
Bughin, J., & Seong, J. (2020). Solving the world's problems with better data. Harvard Business Review, 98(4), 86-95.
Choudhury, S. R., Yee, M. M., & Kumar, S. (2021). Skill shifts: Responding to the changing demands of the future of work. McKinsey Global Institute.
Clarke, N. (2019). Developing emotional intelligence capabilities in the workplace. Journal of Industrial and Commercial Training, 51(1), 2-6.
Deming, D. J. (2017). The growing importance of social skills in the labor market. The Quarterly Journal of Economics, 132(4), 1593-1640.
Du, J., Bhattacharya, A., & Sen, S. (2021). Machine learning approaches for creativity. Journal of Business Research, 134, 503-513.
Elgammal, A., Liu, B., Elhoseiny, M., & Mazzone, M. (2017). CAN: Creative adversarial networks, generating "art" by learning about styles and deviating from style norms. ACM Transactions on Graphics (TOG), 36(4), 1-12.
Fernández-Berrocal, P., & Extremera, N. (2020). Ability emotional intelligence and life satisfaction: Positive psychosocial effects in adolescents. Advances in Social Sciences Research Journal, 7(4), 90-101.
Frank, M. R., Autor, D., Bessen, J. E., Brynjolfsson, E., Cebrian, M., Deming, D. J., ... & Rahwan, I. (2019). Toward understanding the impact of artificial intelligence on labor. Proceedings of the National Academy of Sciences, 116(14), 6531-6539.
Gebru, T., Hoffman, J., & Fei-Fei, L. (2021). The real threat of artificial intelligence. Foreign Affairs, 100, 94-109.
Goleman, D. (1995). Emotional intelligence. Bantam Books.
Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep learning. MIT Press.
Griffiths, T. L., & Tenenbaum, J. B. (2019). Principles of learning and inference in probabilistic models of cognition. Cognitive Science, 43(6), e12714.
Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M., & Cohen, J. D. (2001). An fMRI investigation of emotional engagement in moral judgment. Science, 293(5537), 2105-2108.
Harteis, C., Goller, M., & Caruso, C. (2020). Conceptual change in the face of digitalization: Challenges for workplaces and workplace learning. Frontiers in Education, 5, 1-10.
Hass, R. W. (2020). How art helps us understand human intelligence: An interdisciplinary perspective. Journal of Intelligence, 8(1), 5.
Hassabis, D., Kumaran, D., Summerfield, C., & Botvinick, M. (2017). Neuroscience-inspired artificial intelligence. Neuron, 95(2), 245-258.
Hassin, R. R. (2020). Yes, we can (think): The role of construal level in cognitive control. Trends in Cognitive Sciences, 24(8), 586-599.
Huang, M. H., & Rust, R. T. (2018). Artificial intelligence in service. Journal of Service Research, 21(2), 155-172.
Immordino-Yang, M. H. (2020). Emotions, learning, and the brain: Exploring the educational implications of affective neuroscience. Mind, Brain, and Education, 14(3), 162-170.
Jordan, M. I., & Mitchell, T. M. (2015). Machine learning: Trends, perspectives, and prospects. Science, 349(6245), 255-260.
Kahneman, D. (2011). Thinking, fast and slow. Farrar, Straus and Giroux.
Klein, G. (1998). Sources of power: How people make decisions. The MIT Press.
Kremer, M. (1993). The O-Ring theory of economic development. The Quarterly Journal of Economics, 108(3), 551-575.
Kuncel, N. R., & Hezlett, S. A. (2020). The changing structure of creativity over time. Journal of Applied Psychology, 105(4), 367-385.
LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436-444.
Lin, A., Osman, M., & Ashcroft, R. (2021). Understanding cognitive biases in human decision making: A review of cognitive research. Cognitive Research: Principles and Implications, 6(1), 1-17.
Litjens, G., Kooi, T., Bejnordi, B. E., Setio, A. A. A., Ciompi, F., Ghafoorian, M., ... & Sánchez, C. I. (2017). A survey on deep learning in medical image analysis. Medical Image Analysis, 42, 60-88.
Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2), 2053951716679679.
Oakley, T., Sperry, R., & Pringle, A. (2020). The role of emotions in biological systems: Theory and research on emotion-specific activations with evolutionary function. International Journal of Environmental Research and Public Health, 17(19), 7084.
Oakley, T., Sejnowski, T. J., & McConville, C. (2020). Human intelligence and artificial intelligence: Theory and measurement. Cognitive Psychology, 122, 101281.
Oakley, T., & Sejnowski, T. J. (2019). Foreword: Emergent dynamics of artificial and human cognition. Cognitive Systems Research, 54, 1-2.
Rajkomar, A., Dean, J., & Kohane, I. (2019). Machine learning in medicine. The New England Journal of Medicine, 380(14), 1347-1358.
Schwartz, J., & Schaninger, B. (2019). Education for the future. McKinsey & Company.
Starkey, G. (2020). Creativity: How to develop your creativity skills to become a more creative leader. John Wiley & Sons.
Susskind, R., & Susskind, D. (2020). The future of the professions: How technology will transform the work of human experts. Oxford University Press.
Tyng, C. M., Amin, H. U., Saad, M. N. M., & Malik, A. S. (2017). The influences of emotion on learning and memory. Frontiers in Psychology, 8, 1454.
Vallor, S. (2016). Technology and the virtues: A philosophical guide to a future worth wanting. Oxford University Press.
Woollett, K., & Maguire, E. A. (2019). Acquiring "the Knowledge" of London's layout drives structural brain changes. Current Biology, 29(23), 3915-3922.
World Economic Forum. (2020). The future of jobs report 2020. World Economic Forum.
Zador, A. M. (2019). A critique of pure learning and what artificial neural networks can learn from animal brains. Nature Communications, 10(1), 1-11.
Zhang, L., & Chen, Y. (2020). Machine learning and artificial intelligence in the age of big data: A survey. International Journal of Environmental Research and Public Health, 17(20), 7236.