Skip the Proof of Concept
What you’re building in a POC isn’t meant to last, or even to be useful


Cognilytica Insights

Here’s a hint as to what separates the AI failures from the successes: skip the proof of concept.

Who do you blame when AI projects fail? The technology? Your machine learning and data science team? Vendors? The data? Certainly you can put blame on solving the wrong problem with AI, or on applying AI when you don’t need AI at all. But what happens when you have an application very well suited to AI and the project still fails? Sometimes it comes down to something simple: don’t take so long.

At a recent event, a presenter shared that their AI projects take 18 to 24 months on average to go from concept to production. That is far too long. AI projects fail for many reasons, and one of the most common is simply taking too long to reach production. Advocates of best-practice agile methodologies would tell you that an 18-to-24-month timeline is the old-school “waterfall” way of doing things, ripe for all sorts of problems.

Yet, despite the desire to be “agile” with short, iterative sprints, organizations often struggle to get their AI projects off the ground. They simply don’t know how to run short, iterative AI projects, because many of them run their AI projects as research-style “proofs of concept.” Starting with a proof of concept (POC) rather than a pilot sets a company up for failure. POCs often fail because they don’t aim to solve a problem in the real world; they test an idea using idealized or simplistic data in a non-real-world environment. As a result, these organizations work with data that isn’t representative of real-world data, with users who aren’t invested in the project, and often outside the systems where the model will actually live. Those who succeed with AI projects have one simple piece of advice: ditch the proof of concept.

AI Pilots vs. Proofs of Concept

A proof of concept is a trial or test run meant to show whether something is even possible and to prove that your technology works. Proofs of concept (POCs) are run in very specific, controlled, limited environments rather than with real-world environments and data, much the way AI has been developed in research settings. Not coincidentally, many AI project owners, data scientists, ML engineers, and others come out of that research environment and are very comfortable and familiar with it.

The problem with these POCs is that they don’t actually prove the specific AI solution will work in production; they only prove it will work under those limited circumstances. Your technology may work great in your POC and then fall apart when put into production against real-world scenarios. Moreover, if you run a proof of concept you may then have to start over and run a pilot, causing the project to run much longer than originally anticipated, which can lead to staffing, resource, and budget issues. Andrew Ng ran into this exact problem when he tried to take his POC approach to medical image diagnosis into a real-world environment.

Proof-of-Concept Failures Exposed

POCs fail for a variety of reasons. The AI solution may have been trained only on good-quality data that doesn’t exist in the real world. Indeed, this was the reason Andrew Ng cited for the failure of his medical imaging AI solution, which didn’t work outside the well-groomed data confines of Stanford hospitals. POC AI solutions can also fail because the model has never seen how real users, as opposed to well-trained testers, interact with it, or because of problems in the real-world environment itself. Organizations that only run projects as POCs won’t get the opportunity to discover these issues until they’re too far along.
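
To make the data-representativeness failure concrete, here is a minimal, hypothetical sketch, written in Python with scikit-learn (neither of which this article prescribes), showing how a model that scores well on clean, POC-style data can degrade sharply on noisier, real-world-like inputs. Every name and number in it is invented for illustration.

    # Minimal sketch: a model that looks great on clean "POC" data
    # can fall apart on noisier, real-world-like data.
    # Hypothetical illustration only; not from the article.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(42)

    # Clean, well-separated synthetic data stands in for curated POC data.
    X, y = make_classification(n_samples=2000, n_features=20,
                               class_sep=2.0, random_state=42)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.5, random_state=42)

    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # "POC" evaluation: the same clean distribution the model was trained on.
    poc_acc = accuracy_score(y_test, model.predict(X_test))

    # "Production" evaluation: the same inputs corrupted by heavy feature
    # noise, standing in for messy, unrepresentative field data.
    X_prod = X_test + rng.normal(scale=3.0, size=X_test.shape)
    prod_acc = accuracy_score(y_test, model.predict(X_prod))

    print(f"POC accuracy:        {poc_acc:.2f}")   # typically high
    print(f"Production accuracy: {prod_acc:.2f}")  # typically far lower

The specific numbers don’t matter; the gap does. An offline score on curated data says little about how the model will behave on the data production actually serves.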

Another case in point for POC failure is autonomous vehicles (AVs). AVs often work very well in controlled environments: no distractions, no kids or animals running into the road, great weather, none of the other issues drivers commonly face. The AV performs very well in that hyper-controlled setting, but in many real-world scenarios it doesn’t know how to handle specific situations. There’s a reason we don’t see Level 5 autonomous vehicles on the road: they only work in these very controlled environments and don’t function like a pilot that can be scaled up.

Another example of an AI POC system failing is SoftBank’s Pepper robot. Pepper, now discontinued as an AI project, was a collaborative robot intended to interact with customers in places such as museums, grocery stores, and tourist areas. The robot worked very well in test environments, but when rolled out to the real world it ran into issues. Deployed in a UK supermarket, which had much higher ceilings than the US supermarkets where it was tested, Pepper had difficulty understanding customers. It turned out it was also scaring them; not everyone was excited to have a robot approach them while shopping. Because Pepper wasn’t actually tested in a pilot, these issues were never properly discovered and addressed, and the whole release was pulled. Had SoftBank run a pilot, rolling the robot out to one or two real-world locations first, it would have caught these issues before sinking time, money, and resources into a failed project.

Building Pilots vs. Proofs of Concept

As opposed to a POC, a “pilot” project focuses on building a small test project in the real world, using real-world data in a controlled, limited environment. The idea is to test a real-world problem, with real-world data, on a real-world system, with users who may not have created the model. That way, if the pilot works you can focus on scaling the project up rather than transplanting a POC into an entirely different environment, and a successful pilot saves the organization time, money, and other resources. If it doesn’t work, you find out quickly what the real-world issues are and address them to make your model work. Just as a pilot guides a plane to its final destination, a pilot project guides your AI solution to its destination: production. Why spend potentially millions on a project that may not work in the real world when you can spend that money and time on a pilot that then only has to be scaled up to production? Successful AI projects don’t start with proofs of concept; they start with pilots.

It is much better to run a very small pilot, solving a very small problem that can be scaled up with a high chance of success, than to attack a big issue with a proof of concept that could fail. This approach of small, iterative successes built on pilots is a cornerstone of best-practice AI methodologies such as CPMAI, which offer guidance on developing small pilots in short, iterative steps to get quick results. Focusing on a highly iterative, real-world AI pilot grounds your project in the one simple method that many AI implementers are using with great success.
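
As a rough, hypothetical illustration of that iterative loop (a sketch of the general idea, not the CPMAI methodology itself), here is what a pilot “scale-up gate” might look like in Python: after each short iteration, metrics gathered from real pilot data drive a decision to scale up, keep iterating, or rescope. All names and thresholds are invented.

    # Hypothetical "scale-up gate" for a small, iterative AI pilot.
    # Metrics come from real pilot data and real pilot users, not lab data.
    from dataclasses import dataclass

    @dataclass
    class PilotResult:
        accuracy: float          # measured on real-world pilot data
        user_acceptance: float   # fraction of pilot users who find it usable
        iteration: int           # which short iteration produced this result

    def scale_up_decision(result: PilotResult,
                          min_accuracy: float = 0.85,
                          min_acceptance: float = 0.70,
                          max_iterations: int = 6) -> str:
        """Decide the next step after a pilot iteration."""
        if (result.accuracy >= min_accuracy
                and result.user_acceptance >= min_acceptance):
            return "scale up toward production"
        if result.iteration >= max_iterations:
            return "stop and rescope: the problem or data isn't ready"
        return "iterate: address the real-world issues the pilot surfaced"

    print(scale_up_decision(PilotResult(accuracy=0.91,
                                        user_acceptance=0.80,
                                        iteration=2)))

In practice the metrics and thresholds would come from the business problem the pilot is solving; the point is the shape of the decision (promote, iterate, or rescope), which is exactly what a one-shot POC never gives you.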

Stay subscribed to this AI & ML Best Practices newsletter as we cover more thought leadership on how AI can be put into practice to realize the most positive outcomes.


In Case You Missed It… Insights from Cognilytica Podcasts & Content

  • {Featured Podcast} - Skip the AI Proof of Concept - Here’s a hint as to what separates AI failures from successes: skip the proof of concept. When it comes to AI projects, go right for pilot projects. In this episode of AI Today, hosts Kathleen Walch and Ron Schmelzer discuss AI pilots vs. proofs of concept. [Listen here]
  • {Featured Podcast} - How AI is Transforming Manufacturing and Other Industries: Interview with Linda Yao, Lenovo - CIOs everywhere are gearing up for increased investment in AI while facing the challenges and barriers that come with implementing and scaling it. In this episode of AI Today, hosts Kathleen Walch and Ron Schmelzer interview Linda Yao, COO and Head of AI Solutions and Services at Lenovo. [Listen here]
  • Interested in being a guest on our AI Today podcast? You can sign up and pay to be a guest here. [Sign up for AI Today Guest interview]
  • {Featured Forbes Article} - What is the future of Intellectual Property in a Generative AI world? - As the barriers to idea sharing and creation come down, so too do the barriers to the protection, and even the fundamental concepts, of intellectual property. [Read More]
  • {Featured Forbes Article} - Collaboration Skills Are Necessary For GenAI Success - Collaboration in AI is not just about people working together. Effective collaboration drives successful outcomes and ensures AI is developed and implemented responsibly. [Read More]


The Most Important AI Stories from the Past Week You Should Know:

  • {AI Applications} - OpenAI is Entering the Search Market - OpenAI is testing SearchGPT, a prototype that combines AI models with real-time web information to provide fast, relevant answers with clear sources. The prototype, launched to a small group of 10,000 users for feedback, aims to enhance search experiences and will eventually see its best features integrated into ChatGPT. SearchGPT marks a significant challenge to Google’s dominance in the search market and highlights OpenAI’s growing influence. Are the search tides finally turning? [Read More]
  • {GenAI} - Google’s Olympics ad went viral for all the wrong reasons - Google’s recent ad showcasing its AI tool Gemini, which aired during the Olympics, was not well received. The ad shows a child using AI to generate a fan letter to an Olympic athlete, and the backlash comes from a sense that it undermines authentic human creativity. Despite Google’s defense that AI can enhance creativity without replacing it, the controversy highlights broader fears about AI’s impact on creative industries and personal expression. As we always say, soft skills are still very much needed! [Read More]
  • {AI Use Cases} - Garbage In, Wisdom Out - We all know the old adage: Garbage In, Garbage Out (GIGO). It’s particularly apropos in today’s age of AI. [Read More]


Events and Opportunities to Hear from Cognilytica

  • September 17, 2024: PMI Mile Hi Chapter: “The Seven Patterns of AI” keynote at Women in Project Management Leadership Summit 2024, in person. [Register here]


Bring Cognilytica Thought Leadership to Your Organization or Event

Bring the Power and Excitement of Cognilytica to your Organization or Event!

Cognilytica’s inspiring, thoughtful, and engaging analysts deliver speaking engagements, workshops, panel participation and moderation, podcast interviews, and other engagements informed by real-world experience and thought leadership.

Engage our analysts as keynote speakers, panel moderators and participants, workshop facilitators, podcast hosts or guests, and as experts for field marketing events.

[Engage with Cognilytica for your next Event, Panel, Keynote, or Activity]


Move Forward with AI Best Practices - Training & Certification

Cognilytica’s AI Best Practices and Trustworthy AI training and certification continue to be in high demand. Haven’t yet enrolled in a certification or training? What’s holding you back?

  • CPMAI v7 - Get certified with comprehensive AI & ML project management training. Includes: AI Fundamentals, AI Applications, Managing Data for AI, Data Preparation for AI, ML Algorithms, Generative AI, CPMAI Methodology, and Trustworthy AI. [Enroll now]
  • CPMAI+ Plus v7 - Greater depth: enhances CPMAI with RPA, Big Data, and Data Science. Includes: all CPMAI training content, including AI & ML Fundamentals and CPMAI Methodology, plus Fundamentals of Big Data, Big Data Platforms, Foundations of Data Science, Foundations & Applications of Robotic Process Automation (RPA), Big Data Engineering, Security & Governance, and more! [Enroll now]
  • CPMAI+E v7 - Our most comprehensive training & certification! Adds Ethical & Trustworthy AI to CPMAI+. Includes: all CPMAI+ Plus training content, Ethical AI Concepts, Responsible AI Concepts, Transparent AI Concepts, Governed AI Concepts, Explainable AI Concepts, the Trustworthy AI Framework, and Putting Trustworthy AI into Practice. [Enroll now]
  • Trustworthy AI Framework v3 - Our most comprehensive, vendor-neutral Trustworthy AI training & certification. Learn how to build and run trustworthy AI systems. Boost your credentials. Keep your AI solutions, organization, customers, and stakeholders trustworthy. Advance your career. [Enroll now]


AI Resources

Check out our AI & Data Resource List: Dive Deeper!

要查看或添加评论,请登录

Cognilytica的更多文章

社区洞察

其他会员也浏览了