Failed AI project: Let's blame it on Data

As AI goes mainstream and projects start to fail, we all need a scapegoat...

Yes, your thinking is on point: most of us make data the scapegoat. Blaming data quality often stems from a deeper issue, namely a limited understanding of legacy systems, their context, and the evolution of data standards. Here's why:

1. Context and Evolution of Data Standards: Legacy systems were built over many years, often decades, during which data standards and practices evolved. The data within these systems might have been collected under different standards, business rules, and objectives, which were appropriate at the time but may not align perfectly with modern expectations or AI requirements.

2. Understanding the System: Many professionals approaching AI solutions may not have a comprehensive understanding of the legacy systems they are dealing with. These systems have intricacies and specific contexts that influence how data was captured, stored, and processed. Without a deep understanding of these factors, it is easy to misinterpret the data or overlook its nuances, leading to assumptions about poor data quality.

3. Blame on Data Quality: When AI initiatives struggle, data quality is a convenient target, sometimes justified and sometimes not. However, the issue might not be with the data itself but with a lack of understanding of how that data was generated and structured, and how it should be interpreted. The historical context of the data might not align with the requirements of modern AI tools, but that doesn't necessarily mean the data is of poor quality; it means the AI application needs to be adapted to fit the data context.

4. Need for Expertise: Successfully implementing AI solutions with legacy systems requires not just technical expertise in AI but also a deep understanding of the legacy system’s data architecture, history, and operational context. This includes recognizing the reasons behind certain data structures and standards, which may have been appropriate in their original context.

5. Holistic Approach: Addressing these challenges requires a more holistic approach that goes beyond simply blaming data quality. It involves a detailed analysis of the legacy system, understanding the evolution of data standards, and perhaps adapting or transforming the data in ways that respect its context while making it usable for AI applications (a small sketch of this idea follows the list).
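To make this concrete, here is a minimal, hypothetical sketch of what "transforming data while respecting its context" can look like in practice. The field names, era cut-off, and status-code mapping below are invented for illustration; the point is that the transformation is driven by knowledge of which standard each record was captured under, rather than by treating deviations from today's schema as "bad data".

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical legacy status codes: pre-2005 records used numeric codes,
# later records use text statuses. Neither era is "wrong"; each followed
# the standard in force when the data was captured.
LEGACY_STATUS_CODES = {"1": "active", "2": "dormant", "3": "closed"}  # assumed mapping

@dataclass
class NormalizedRecord:
    customer_id: str
    opened_on: datetime
    status: str
    source_standard: str  # keep provenance so downstream AI use can weigh it


def normalize(record: dict) -> NormalizedRecord:
    """Normalize a record using knowledge of the standard it was captured under."""
    if record.get("era") == "pre_2005":
        # Older system stored dates as DD/MM/YY and statuses as numeric codes.
        opened_on = datetime.strptime(record["open_dt"], "%d/%m/%y")
        status = LEGACY_STATUS_CODES.get(record["status_cd"], "unknown")
        source = "pre-2005 branch system"
    else:
        # Newer system uses ISO dates and text statuses.
        opened_on = datetime.fromisoformat(record["opened_on"])
        status = record["status"].lower()
        source = "post-2005 core system"
    return NormalizedRecord(record["customer_id"], opened_on, status, source)


if __name__ == "__main__":
    samples = [
        {"era": "pre_2005", "customer_id": "C001", "open_dt": "07/03/98", "status_cd": "2"},
        {"era": "post_2005", "customer_id": "C002", "opened_on": "2019-11-30", "status": "Active"},
    ]
    for rec in samples:
        print(normalize(rec))
```

The fix here lives in a transformation layer that encodes institutional knowledge about the legacy system, not in declaring the older records unusable.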

In summary, my observation highlights a common misconception. The problem is often not just about data quality but about the lack of understanding of the legacy systems and the context in which the data was generated and maintained.

Addressing this gap can lead to more effective and realistic AI implementations.

Here are a few hashtags that align with these ideas:

#DataContextMatters #LegacySystems #AIImplementation #DataStandards #TechEvolution #UnderstandingData #AIChallenges #DataQualityMisconceptions #SystemsThinking #DigitalTransformation #cio #CTO #CDO

Durgesh Dandotiya

Technologist with a passion for clean design, specialising in dealing with complex people and technology challenges.

7 months ago

Great article Ash Shukla. From my perspective, many organizations are either in the exploratory phase or closely monitoring the AI landscape by running proofs of concept. Identifying the right use case and deciding between building or buying solutions remains a challenge. However, trends indicate some early successes, even if they don't fully meet all goals yet. I categorize the successes into the following areas: 1. Enhancing individual efficiency, such as various co-pilots. 2. Automation, leveraging AI APIs together with retrieval-augmented generation (RAG) and agent frameworks. 3. Developing AI products that align with the organization's mission. In my view, the first two areas have seen more success, while the third is still evolving.
