Why AI Projects Fail: Lessons from New Product Development

An estimated 80% of AI projects in business fail, according to some estimates... worse than the failure rate for new products! Something is going very wrong here, and it's time for a course correction. Find out why the failure rate is so high, and see some proven mitigating strategies borrowed from the field of new-product development.

Dr. Robert G Cooper

June 2024

ABSTRACT

The majority of AI projects in business do not succeed. Despite numerous opinion pieces on the causes of these failures, there is a notable scarcity of solid research on the subject. Analysis of the available data shows that the reasons for AI failures are strikingly similar to those identified in the extensively studied field of new product development (NPD). Furthermore, effective strategies from NPD can be adapted to address AI project failures. This article identifies seven primary reasons for AI failures in business, primarily stemming from poor business practices, and offers corresponding recommendations for improvement.


1. MOST AI PROJECTS DO NOT END WELL

AI projects have an alarming failure rate! A recent Deloitte investigation finds that only 18 to 36% of organizations achieve their expected benefits from AI (Mittal et al., 2024), and only 53% of AI projects proceed from prototype to production (Masci, 2022). Another report finds 87% never make it into production (Dilmegani, 2024). The “pilot paralysis” phenomenon, where companies undertake AI pilot projects but struggle to scale up, is epidemic (Gregory, 2021). Some estimates place the failure rate as high as 80%, almost double the failure rate of IT projects a decade ago (Bojinov, 2022), and higher than the failure rate for new product development (NPD) (Knudsen et al., 2023). A Harvard Business Review article points out that almost all failure causes are “dumb reasons” (Tse et al., 2020), the result of poor business practices, and thus can be avoided.

1.1 Parallels Between AI Deployment and New Product Development

Similarities exist between AI projects and B2B NPD projects: In both, the ultimate goal is to deliver a solution that meets the needs of the end-users or customers, whether they are internal (for AI projects) or external (for NPD projects). The activities in AI and NPD projects share many commonalities, such as crafting a business case, developing the solution, pilots or field trials, and scale-up or launch. (Here NPD refers to developing physical products, not software or services).

“Those who cannot remember the past are condemned to repeat it” (Santayana, 1905) is an appropriate lesson for AI deployment. In NPD, failure causes have been well-researched. The first significant NPD study in 1964 sought managers’ opinions on causes of failure; the top two were “inadequate market knowledge” and “technical defects in the product” (NICB, 1964). In 1975, an analysis of product failure cases identified a “lack of understanding of the marketplace (customers and competition)” followed by “technical difficulties with the product” (Cooper, 1975).

While personal opinions abound, the reasons for AI failure have yet to be established by robust research studies. Many of the challenges in NPD, such as understanding user needs or dealing with technical risks, are also common to AI projects, and the limited evidence available suggests that AI failure reasons parallel the well-known reasons for NPD failure.

2. THE SEVEN REASONS FOR AI PROJECT FAILURES

Armed with information from physical NPD, together with evidence from the few available studies of AI projects (supplemented by opinion-based articles), the following important reasons for AI failure are delineated, as shown in Fig. 1:

Fig. 1: The 7 main reasons for AI failure – frequency & impact


2.1 Failure to Understand Users’ Needs and a Lack of Clear Objectives

Most AI journeys begin with a technology-first orientation, what Overby calls the “shiny things disease” (Overby, 2020). Instead of starting with a solution looking for a problem, companies must start by investigating the specific user problem and then determine which AI tool solves it (Lamarre, 2023; Dilmegani, 2024). A McKinsey study highlights the importance of focusing on the users’ needs and problems rather than being enamored by the technology itself (McKinsey, 2023). As in NPD, a lack of understanding of users’ needs is a major reason for AI failure (Lamarre, 2023).

A lack of understanding of users’ needs often underlies ill-defined project objectives. As in NPD, not conducting user research and failing to involve end-users throughout the development process leads to solutions that miss the mark, making this one of the reasons in Fig. 1 (McKinsey, 2023; Lamarre, 2023). Without a deep understanding of the users’ pain points, workflows, and applications, the resulting AI solution may fail to provide meaningful value or seamlessly integrate into existing processes.

Recommendations: AI projects should begin with a comprehensive Voice of Process (VoP) and Voice of Business (VoB) study, similar to a Voice of Customer (VoC) study in NPD – gathering insights from various users to identify their needs, problems, and wants. By involving end-users throughout the development process and employing iterative feedback loops with demo validations, the project team (the AI Ops Team) ensures that the AI solution addresses real problems, meets the users’ expectations, and has value-in-use (Cooper, 2017, p. 317). Another best practice is to ensure the AI Ops Team is cross-functional, with representation from the user group. The project’s goals should then be defined, including the problem to be solved, the expected outcomes, and measurable success criteria.

2.2 The AI Solution Did Not Work

An IDC study concludes that the top reason for AI failure is that the model did not perform as promised (35% of firms; Jyoti and Kuppuswamy, 2023). Inaccuracy of output is now cited more frequently than either cybersecurity or regulatory compliance, which had been the most common risks previously (McKinsey, 2023). But only 3% of firms are mitigating inaccuracy.

Delivering inaccurate results is due to various technical reasons, such as inadequate data quality, algorithmic limitations, or unforeseen edge (outlier) cases (Westenberger et al., 2022). Model instability is another technical issue: as algorithms are updated, the new system may not give the same results as the previous one. AI algorithms also lack transparency: the “black box” problem.

The failure of the AI solution to work is often discovered during the pilot stage, leading to “pilot paralysis.” Thus, identifying AI malfunctions is best done earlier in the process, ideally during the alpha testing or development stage, to minimize the impact on resources and timelines.

Recommendations: To mitigate technical risks, companies should conduct thorough testing and validation of the AI solution, much as is done in NPD, including alpha testing and pilot or user trials, before full-scale deployment. Disciplined ways to assess for accuracy, manage bias, and deal with incorrect output do exist, but human oversight is also essential (Korolov, 2023):

* establish clear acceptance criteria for the AI solution to evaluate its effectiveness;

* continuously monitor the system’s performance, accuracy, and reliability in real-world conditions; and

* have a contingency plan in case the AI solution does not work.
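The first two practices can be made concrete in code. The sketch below (a hypothetical illustration, not from the article) shows how an AI Ops Team might encode acceptance criteria as explicit, testable thresholds, so that a go/no-go decision at a gate is based on measured results rather than impressions. The metric names and threshold values are invented examples.

```python
# Illustrative sketch (hypothetical): acceptance criteria for an AI model
# expressed as explicit thresholds. Metric names and values are examples.

ACCEPTANCE_CRITERIA = {
    "accuracy": 0.90,         # minimum share of correct predictions
    "precision": 0.85,        # minimum precision on the positive class
    "max_latency_ms": 200.0,  # maximum average response time
}

def evaluate_release(metrics: dict) -> tuple[bool, list[str]]:
    """Compare measured metrics against the acceptance criteria.

    Returns (passed, failures) so the team can see exactly which
    criterion blocked the release.
    """
    failures = []
    for name, threshold in ACCEPTANCE_CRITERIA.items():
        value = metrics.get(name)
        if value is None:
            failures.append(f"{name}: not measured")
        elif name.startswith("max_"):
            if value > threshold:  # "max_" criteria are upper bounds
                failures.append(f"{name}: {value} exceeds {threshold}")
        elif value < threshold:    # all others are lower bounds
            failures.append(f"{name}: {value} below {threshold}")
    return (len(failures) == 0, failures)

# Example: a pilot run that misses the accuracy bar
ok, issues = evaluate_release(
    {"accuracy": 0.87, "precision": 0.91, "max_latency_ms": 150.0})
print(ok)      # False
print(issues)  # ['accuracy: 0.87 below 0.9']
```

Running the same function periodically against production metrics also serves the second bullet: continuous monitoring against the same agreed criteria.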

2.3 Poor Data Quality

In AI, there is never enough quality data! AI systems rely on high-quality, relevant, and properly labeled data for both training and operation (Overby, 2020). Roughly 80% of the time spent preparing a machine learning algorithm for use goes to data gathering and cleaning (Nieto-Rodriguez and Vargas, 2023). But lack of data availability for training the model is often a key reason for failure (Overby, 2020).

Once in operation, not feeding “the machines enough quality sources of information to make accurate decisions and recommendations” is also a primary reason for failure (Tadiparthi, 2024). The IDC study concurs: the “lack of a production-ready data pipeline for diverse sources” is the #2 reason for AI failure (Jyoti and Kuppuswamy, 2023). And “a lack of data or poor data” is among the top three failure reasons found in a Norwegian study (Haugen, 2021).

Many companies struggle with poor data quality, insufficient data volume, or lack of proper data governance, which leads to inaccurate models, biased outputs, and ultimately, project failure. Working with outdated, insufficient, or biased data can lead to garbage-in-garbage-out situations and project failure (Dilmegani, 2024). The ability to merge data from multiple sources requires a robust integration process but is a typical deficiency in most firms (Bratton and Glynn, 2024).

Recommendations: Companies should implement robust data governance practices, including data quality checks, data labeling, and data management processes. Think of “data” as the raw material in a production operation for a physical product – this source of supply must be thoroughly assessed. By prioritizing data readiness, quality, and governance, companies can increase the likelihood of developing accurate AI solutions. Continuously monitoring and updating the data used by the AI system is essential to maintain its accuracy over time.
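As a small illustration of what a routine data quality check might look like in practice, the sketch below (hypothetical, not from the article) counts three common problems – missing fields, empty values, and exact duplicate records – before data is passed to model training. The field names and rules are invented examples.

```python
# Illustrative sketch (hypothetical): a minimal data quality gate of the
# kind a data governance process might run before model training.

def data_quality_report(rows: list[dict], required_fields: list[str]) -> dict:
    """Count basic data quality problems: missing required fields,
    empty values, and exact duplicate records."""
    missing = 0
    empty = 0
    duplicates = 0
    seen = set()
    for row in rows:
        for field in required_fields:
            if field not in row:
                missing += 1
            elif row[field] in (None, ""):
                empty += 1
        key = tuple(sorted(row.items()))  # hashable fingerprint of the record
        if key in seen:
            duplicates += 1
        seen.add(key)
    return {"rows": len(rows), "missing_fields": missing,
            "empty_values": empty, "duplicates": duplicates}

# Example: three records, one incomplete, one duplicated
records = [
    {"id": 1, "label": "pass"},
    {"id": 2, "label": ""},      # empty label
    {"id": 1, "label": "pass"},  # exact duplicate of the first row
]
print(data_quality_report(records, ["id", "label"]))
# {'rows': 3, 'missing_fields': 0, 'empty_values': 1, 'duplicates': 1}
```

Running such a report continuously, not just once before training, matches the recommendation to monitor and update the data the AI system relies on.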

2.4 Unrealistic Expectations

AI Ops Teams often overestimate the capabilities of AI or have unrealistic expectations about its potential impact (Schlegel et al., 2023). “AI project failure is 99% about expectations” – the failure of the technology to deliver inflated expected results (Korolov, 2023). This stems from a lack of understanding of the technology’s limitations, overly optimistic projections, or a desire to showcase the project in a favorable light to secure management’s approval and resources.

Additionally, management often expects AI to solve complex problems immediately without understanding the iterative nature of AI development. If managers “think too big”, the project’s scope can get so large that success is unattainable. Ambitious expectations are often linked to “large promises” made by the AI vendor (Westenberger et al., 2022). When expectations are set too high, there is a risk of disappointment, loss of support from stakeholders, and project cancellation, particularly during the pilot stage.

Recommendations: The AI Ops Team should conduct a thorough technical feasibility assessment early in the AI project, much like in an NPD project. This involves vetting the AI vendor and the technical solution, as outlined in a methodology from MIT (Brown, 2021). Additionally, the team should create a business case with realistic financial expectations and timelines, taking into account the complexity of the problem, available data, and AI capabilities. Transparent communication with management about the potential benefits, limitations, and risks is also crucial, especially at project go/no-go decision points.

In NPD, one way that business case estimates have been made more realistic is by building in a post-launch review and holding the project team accountable for the results they promise to deliver (Cooper, 2017, p. 302). A final suggestion is to use a proven technology adoption and deployment process, similar to frameworks popular in NPD with clearly defined stages, tasks, deliverables, gates, and go/no-go decision criteria – see Brem and Cooper (2024).

2.5 Lack of a Cross-Functional AI Ops Team

Successful AI projects require collaboration across various functional areas, including data science, IT, and the relevant business functions (Bratton and Glynn, 2024). But teams from different functional areas often don’t work well together, the result being siloed efforts that lead to misalignment, inefficiencies, and failure to integrate the AI solution into existing business processes. Having an AI Ops team composed solely of technical personnel, such as data scientists and IT specialists, is likely to result in solutions that are disconnected from real user needs and operational realities (Dilmegani, 2024). Similarly, Haugen’s research revealed that “data scientists know little about the company's business processes or the domain, and therefore there is a need for an interdisciplinary composition of the project team to develop and/or implement AI” (Haugen, 2021).

Recommendations: Having a data science team working in isolation on an AI project is not a recipe for success (Dilmegani, 2024). Best practices from NPD are that companies should establish cross-functional AI Ops teams that include team members from data science, IT, R&D, marketing, operations, finance, etc. (Cooper, 2023). This interdisciplinary approach ensures that the AI solution…

* aligns with organizational goals,

* addresses pain points and high-value applications of end-users,

* integrates seamlessly with technological architecture and existing workflows, and

* is deployed at scale.

Cross-functional project teams also share learnings, develop best practices, and help standardize the AI deployment process recommended in Section 2.4 (Bratton and Glynn, 2024; Dilmegani, 2024).

2.6 Talent Shortages

The shortage of skilled professionals with expertise in AI is a significant challenge, cited as a major failure reason (29% of firms; Jyoti and Kuppuswamy, 2023). Acquiring talented data science people is costly and time-consuming due to this skill shortage; but without strong data scientists and IT engineers, AI projects suffer from sub-optimal execution, lack of technical depth, and failure to realize the full potential of the technology (Westenberger et al., 2022). This talent gap also leads to delays, compromises in solution quality, and ultimately, project failure. Additionally, with a shortage of skilled people, invariably insufficient resources are allocated to the project, leading to premature termination once the team runs over budget (Westenberger et al., 2022).

Recommendations: Companies must invest in training and upskilling existing employees in AI, and also in talent acquisition. Data science and AI are skills that can be developed internally (Bratton and Glynn, 2024). Hiring the needed talent means offering competitive compensation packages, providing growth opportunities, and fostering a culture that attracts and retains top AI talent. Additionally, companies can explore partnerships with academic institutions, research labs, or specialized AI firms to access expertise and resources. Leveraging cloud-based AI services and vendors’ platforms and services can also supplement in-house capabilities and mitigate the impact of talent shortages.

2.7 Lack of Change Management

Implementing AI means a new way of working, much as the introduction of Agile was in NPD. As product developers discovered, this requires significant organizational, mindset, and cultural changes, which can be challenging for companies (AIM, 2023). Failure to adopt a comprehensive change management strategy for AI, address ethical, privacy, and job-loss concerns, and foster a data-driven culture leads to resistance and hinders the effective integration of AI solutions (Overby, 2020).

The company’s readiness for AI adoption must also be evaluated, including data readiness, cross-functional collaboration, and a culture that embraces data-driven decision-making. Managers’ reluctance to trust the knowledge generated by AI may limit the application of AI-generated knowledge (Nylund et al., 2023), one of the potential barriers to AI’s adoption (Cooper, 2024). Without a solid foundation of organizational and cultural readiness, AI projects will face resistance, lack of buy-in, and ultimately, failure to realize their full potential.

Recommendations: Develop a comprehensive change management strategy that addresses the organizational and cultural shifts required for successful AI integration (Creasey, 2023). When Agile Development was introduced, gurus met in 2001 and developed the Agile Manifesto to guide its deployment. That has not happened in the field of AI (Schuerman, 2023), although some firms have developed their own manifestos (Globant, 2024).

Both the Agile Manifesto and Agile Mindset proved to be excellent compasses to guide the implementation of Agile in NPD and similar guides are recommended here for AI (Agile Alliance, 2024). Changing the mindset to adopt a culture of data-based decision-making as well as learning, experimentation, and adaptation is crucial to embracing the changes brought by AI. Thus, much training, education, and communication with employees is required about the role of AI in the business, ethical and privacy implications of AI, as well as its potential impact on their jobs and workflows. By prioritizing change management and cultural readiness, companies can increase the likelihood of successful AI adoption and integration.

3. AI is a Journey – So Follow a Map!

Recommendations were showcased above as each of the AI failure reasons was unveiled. In the field of NPD, similar recommendations are treated as “best practices” (PDMA, 2023) and are integrated into a single idea-to-launch model. As a Forbes article instructs: “Once you’ve defined your business intentions for AI, you need a roadmap for how to implement it” (Schuerman, 2023). And that roadmap should also include these best practices as guideposts along the journey.

Adopt an AI acquisition, development, and deployment process, similar to those used in NPD or technology development projects (Brem and Cooper, 2024), with best practices built in, to guide your AI projects to success.



Bio-sketch of author: Dr. Robert G. Cooper is ISBM Distinguished Research Fellow at Pennsylvania State University’s Smeal College of Business Administration, Professor Emeritus at McMaster University’s DeGroote School of Business (Canada), and a Crawford Fellow of the Product Development and Management Association (PDMA).

Bob is the creator of the popular Stage-Gate® process, now used by thousands of firms globally to drive new products to market. He has published 11 books – including the “bible” for NPD, “Winning at New Products” – and more than 150 articles on the management of new products. He has won the IRI’s (Innovation Research Interchange) prestigious Maurice Holland Award three times for “best article of the year”. Bob has helped hundreds of firms over the years implement best practices in product innovation, including companies such as 3M, DuPont, Bosch, Danfoss, LEGO, HP, Exxon, Guinness, and P&G.

Cooper holds Bachelor’s and Master’s degrees in chemical engineering from McGill University in Canada, and a PhD in Business and an MBA from Western University, Canada. Website: www.bobcooper.ca

Contact: [email protected]

To cite this article: Cooper, R.G. 2024. “Why AI projects fail: Lessons from new product development.” IEEE Engineering Management Review pre-print, June. Link: https://ieeexplore.ieee.org/document/10572277

Full article available (Article #10) at: https://www.bobcooper.ca/articles/artificial-intelligence-in-npd

DOI: 10.1109/EMR.2024.3419268

REFERENCES

Agile Alliance. 2024. Agile Manifesto and Agile Mindset. Agile Alliance website. [Online]. Available: Download the Agile Manifesto as a PDF | Agile Alliance

AIM. 2023. “Culture key to successful AI transformation: Empowering organizations for the future.” AIM Research, June 13. [Online]. Available: Culture key to successful AI transformation: Empowering organizations for the future (aimresearch.co)

Bratton, A. and Glynn, K. 2024. “Why your AI project is going to fail.” Lextech. [Online]. Available: Why Your AI Project is Going to Fail (lextech.com)

Brem, A. and Cooper, R.G. 2024. “Artificial intelligence in new product development: An adoption and deployment process model for engineering management.” In review: IEEE Transactions on Engineering Management. Article #7 [Online]. Available: Robert G. Cooper - Artificial Intelligence in NPD (bobcooper.ca)

Brown, S. 2021. “3 requirements for successful artificial intelligence programs.” MIT Sloan, Jan 6. [Online]. Available: 3 requirements for successful artificial intelligence programs | MIT Sloan

Cooper, R.G. 1975. “Why new industrial products fail.” Industrial Marketing Management 4: 315–26. [Online]. Available: Why new industrial products fail - ScienceDirect

Cooper, R.G. 2017. Winning at New Products: Creating Value Through Innovation, 5th edition, New York, NY: Basic Books, Perseus Books Group. [Online]. Available: Amazon.com : winning at new products

Cooper, R.G. 2023. “New Products—What Separates the Winners from the Losers and What Drives Success.” In: The PDMA Handbook of Innovation and New Product Development, 4th ed., edited by Bstieler, L. and Noble, C.H. Chapter 1. Hoboken, NJ: Wiley.

Cooper, R.G. 2024. “Overcoming roadblocks to AI adoption in new product development.” In process: Research-Technology Management. Article #9 [Online]. Available: Robert G. Cooper - Artificial Intelligence in NPD (bobcooper.ca)

Creasey, T. 2023. “A point of view on AI, change, and change management.” Change Success Insights. June 23. [Online]. Available: (24) A Point of View on AI, Change, and Change Management | LinkedIn

Dilmegani, C. 2024. “Why does AI fail?: 4 reasons for AI project failure in 2024.” AIMultiple Research, Feb. 14. [Online]. Available: https://research.aimultiple.com/ai-fail/

Globant. 2024. “The AI Manifesto.” Globant blog. [Online]. Available: AI Manifesto | Globant

Guinness, H. 2024. “What is Perplexity AI.” Zapier Blog, April 3. [Online]. Available: What is Perplexity AI? How to use it + how it works (zapier.com)

Haugen, K.S. 2021. “Failure in AI projects: What organizational conditions and how will managements’ knowledge, organization and involvement contribute to AI project failure.” Master’s thesis, University of South-Eastern Norway, School of Business, Spring. [Online]. Available: https://openarchive.usn.no/usn-xmlui/bitstream/handle/11250/2783842/no.usn%3Awiseflow%3A2613763%3A44363626.pdf?sequence=1

Jyoti, R., and Kuppuswamy, R. 2023. “Create more business value from your organizational data.” IDC Research InfoBrief, Feb. [Online]. Available: (URL has spelling errors; ignore errors): Jyloti - imapctrs ofAI.pdf

Knudsen, M.P., von Zedtwitz, M., Griffin, A., and Barczak, G. 2023. Best practices in new product development and innovation: Results from PDMA’s 2021 global survey. Journal of Product Innovation Management 40(3), 257–275: Doi.org/10.1111/jpim.12663

Korolov, M. 2023. “4 reasons why gen AI projects fail.” CIO, March 4. [Online]. Available: 4 reasons why gen AI projects fail | CIO

Lamarre, E. 2023. “In Digital and AI transformations, start with the problem not the technology.” McKinsey Strategy & Corporate Finance Practice, Nov. [Online]. Available: How to succeed in digital and AI transformations | McKinsey

Masci, J. 2022. “AI has a poor track record, unless you clearly understand what you’re going for.” Industry Week, Jan. 19. AI Has a Poor Track Record, Unless You Clearly Understand What You’re Going for | IndustryWeek

McKinsey. 2023. “The state of AI in 2023: AI’s breakout year.” Quantum Black, August 1. [Online]. Available: The state of AI in 2023: Generative AI’s breakout year | McKinsey

NICB (National Industrial Conference Board). 1964. “Why new products fail,” The Conference Board Record.

Nylund, P.A., Ferràs-Hernández, X., and Brem, A. 2023. “A trust paradox may limit the application of AI-generated knowledge.” Research-Technology Management (66)5: 44–52. Doi.org/10.1080/08956308.2023.2236475

Nieto-Rodriguez, A., and Vargas, R. V. 2023. “How AI will transform project management.” Harvard Business Review, Feb. 2. [Online]. Available: https://hbr.org/2023/02/how-ai-will-transform-project-management

Overby, S. 2020. 8 reasons AI projects fail. The Enterprisers Project. March 4. [Online]. Available: https://enterprisersproject.com/article/2020/3/why-ai-projects-fail-8-reasons

PDMA (Product Development and Management Association). 2023. The PDMA Handbook of Innovation and New Product Development, 4th ed., edited by Bstieler, L. and Noble, C.H. Hoboken, NJ: Wiley.

Santayana, G. 1905. The Life of Reason, Volume 1, Chapter XII. https://archive.org/details/dli.ernet.864

Schuerman, D. 2023. “It’s time for every company to establish its own AI Manifesto.” Forbes, Dec. [Online]. Available: It’s Time For Every Company To Establish Its Own AI Manifesto (forbes.com)

Schlegel, D., Schuler, K., and Westenberger, J. 2023. “Failure factors of AI projects: results from expert interviews.” International Journal of Information Systems and Project Management (11)3: 25–40. Doi.org/10.12821/ijispm110302

Tadiparthi, G. 2024. “Why AI projects fail.” InsiteAI. [Online]. Available: Why AI Projects Fail? - Insite AI

Tse, T., Esposito, M., Mizuno, T., and Goh, D. 2020. “The dumb reason your AI project will fail.” Harvard Business Review, June 8. [Online]. Available: The Dumb Reason Your AI Project Will Fail (hbr.org)

Westenberger, J., Schuler, K., and Schlegel, D. 2022. “Failure of AI projects: understanding the critical factors.” Procedia Computer Science 196: 69–76. (https://creativecommons.org/licenses/by-nc-nd/4.0)

Jeroen Derynck

AI Business Strategist | flywheel for AI, Intelligence & CRM Programs

In order to overcome the shiny things disease, I am using various techniques to make sure AI products meet the end users' needs:

- Design Thinking, as it helps to identify the pain points and bottlenecks in a given process or journey

- Experimentation, as this is a powerful technique that helps you formulate a hypothesis and makes you think upfront about the value (read: success metrics) of the experiment

- Problem formulation, as it helps to break down the problem and allows you to take an inside-out perspective

Each of these techniques has its own merits, and most importantly they help to frame a real business problem, reflect about potential outcomes and expected value, and answer the question: “Is this a problem AI can solve?”
