It doesn't take AI to figure out that "garbage in, garbage out" is as true today as it ever was

Getting checks signed for data management work has never been easy. Even where requests were driven by compliance requirements, the response has often been resigned acceptance rather than enthusiastic approval. However, the rationale for these investments is growing ever stronger: the case for effective data management was first made by compliance, then by analytics programs, and now by a new wave of technology, the rise of artificial intelligence (AI). Feeding poor-quality or incomplete data to AI invites risks that executives need to be aware of – not least, high-profile public failures that can damage brands.

Successful AI for enterprises is dependent on unpopular investments in data management

The majority of enterprise AI use cases depend on access to data, most often because data is the training ground for the machine-learning (ML) algorithms that power the bulk of those use cases. This should be a familiar concept to those with a cursory knowledge of business intelligence and analytics: we have long said, "garbage in, garbage out." That remains true today; however, compare then and now. Previously, limited or bad data meant weak analyses; today, it could mean an AI-powered chatbot, at best, making irrelevant recommendations and, at worst, running amok on social media, offending customers in a very public way.

Data for the majority of AI use cases can be thought of as being analogous to our memory, or the experience that we all rely on to make decisions. If a situation requires a decision, we draw on our bank of experience to help make that choice; if it is outside of our experience, the likelihood of a good outcome is lessened.

The "to do" list for data in AI should feel familiar. Identifying what data is necessary, ensuring that data is available for use (physical access and format), and ensuring a level of accuracy appropriate to the intended use are the steps any data management process generally follows. Accuracy brings both data quality and business context to the table, necessitating the inclusion of not just technical experts, but business-domain experts as well. It's important to think of this process with the final use in mind. Providing a chatbot with access to transactional records to resolve a customer inquiry clearly requires fine-grained accuracy – a massive landscape of Internet of Things sensors feeding an ML algorithm may not.
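The three steps above can be sketched as a simple data-readiness check. This is a minimal, illustrative sketch only: the field names, records, and thresholds are assumptions made up for the example, not a prescribed standard or the author's method.

```python
# Minimal data-readiness sketch: (1) identify required data,
# (2) check availability/format, (3) check accuracy for the intended use.
# All field names and rules here are illustrative assumptions.

# Step 1: what data does the use case need?
REQUIRED_FIELDS = {"customer_id", "order_total", "order_date"}

def check_record(record: dict) -> list[str]:
    """Return a list of problems found in one record (empty list = clean)."""
    problems = []

    # Step 2: availability and format - every required field must be present.
    missing = REQUIRED_FIELDS - record.keys()
    problems.extend(f"missing field: {f}" for f in sorted(missing))

    # Step 3: accuracy appropriate to the use - a chatbot resolving a billing
    # inquiry needs exact totals, so reject non-numeric or negative values.
    total = record.get("order_total")
    if total is not None and (not isinstance(total, (int, float)) or total < 0):
        problems.append("invalid order_total")

    return problems

def readiness_report(records: list[dict]) -> dict:
    """Summarize how much of a dataset is fit for the intended use."""
    flagged = [r for r in records if check_record(r)]
    return {
        "total": len(records),
        "clean": len(records) - len(flagged),
        "flagged": len(flagged),
    }
```

A coarser use case, such as trend detection over sensor feeds, might relax step 3 to tolerate outliers rather than reject records outright; the point is that the accuracy bar is set by the final use, not fixed globally.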

Grumbling acceptance of the importance of compliance-related data work will be one of the, admittedly uninspiring, ways AI-related data investments get signed off, because the motivation for similar risk-mitigation spending already exists within many enterprises. An AI-powered capability may simply fail without the right data, a possibly acceptable outcome; but it could also wreak havoc. As an ever-greater number of customer-facing and other public-facing processes adopt AI capabilities for automation, the risk profile grows. If AI is to deliver even a modest slice of the expected benefit to enterprises, a more positive culture of investment in, and protection of, data needs to be fostered at every level within the organization.

Cheryl McCurdy Christensen M.Ed.

Transformational Leader Helping Promote STEM Education

7y

Yes – just look at CRM data input by sales reps; it becomes so evident. #DataQuality


More articles by Tom Pringle
