WHY "TRUSTED AI" FOR SOCIAL, MEDIA AND CX ANALYSIS IS AN ESSENTIAL "MUST HAVE" (A CALL TO ACTION)
[Image: Conversus validation screenshot]

WHY "TRUSTED AI" FOR SOCIAL, MEDIA AND CX ANALYSIS IS AN ESSENTIAL "MUST HAVE" (A CALL TO ACTION)


The surge in adoption of AI technologies across organizations and geographies has given rise to an equally urgent sprint by governing bodies to establish the frameworks and processes needed to ensure their safe and effective use. While these initiatives are positive and important steps in the right direction, they are admittedly overdue: AI technologies are not new.

Our firm, Converseon, built its first machine learning model in 2008, and for over a decade we’ve advocated for effective and trusted applied-AI approaches and standards for the classification of unstructured text data. And while we’ve been fortunate to work with many of the world’s leading brands, for years these key principles were widely considered “nice to haves.” But with the advent of new regulatory efforts, amid a growing understanding of the immense power and risk associated with these technologies, that perspective is rapidly pivoting to view trusted AI as a “must have.”

While current concerns about AI focus initially on areas deemed “unacceptable” or “high-risk” (such as policing, social credit systems, facial recognition, and regulated industries), the social, VoC and media intelligence space will not remain immune for long. And while most use cases in the space are currently considered low risk from a regulatory perspective, there is significant risk to the brand itself in making business decisions based on poor quality data and analysis.

The fundamental goal of these regulatory efforts is to eliminate AI bias, increase accuracy and build trust in the systems. That means higher data standards, greater transparency and documentation of AI systems, measurement and auditing of their functions (including model performance), and human oversight with ongoing monitoring. Indeed, even without forced regulation, these processes represent critical best practices that warrant immediate adoption. In the near future, almost every leading organization will likely have an AI policy in place that adheres closely to these standards.

Yet these are precisely the areas where the vast majority of current technology and process falls dangerously short. Poor quality sentiment and opaque systems, for example, have created skepticism about the resulting data and insights and contributed to an overall “trust deficit” that has stifled important adoption.

Aligning with these standards will help reverse these perceptions, but it will require all stakeholders to substantially elevate their technology, requirements and systems. It’s a process that needs to begin now, before these requirements kick in, because doing so will reap clear and immediate benefits: mitigated enterprise and consumer risk, alignment with emerging global standards, and substantial improvement in accuracy, adoption and impact. Perhaps most importantly, it will help engender more trust among key stakeholders, not just in the AI technology itself but in all the solutions and products that leverage it.

While the US has recently announced agreement among many leading AI organizations on principles for self-regulation, the EU’s AI Act is the first major proposed AI law and generally represents an evolved and thoughtful approach. Its principles are representative of efforts elsewhere and are likely a harbinger of what’s to come globally.

The Act categorizes AI use cases from unacceptable to high to low risk, and requires corresponding, specific actions that range from stringent to nominal. High-risk areas include employment, transportation and more. Social and media analysis and market research, depending on how the data is used, are considered largely low risk at this point. However, this is just a starting point, and we believe it’s likely that, over time, the standards applied to high-risk uses today will migrate to other, lower-risk use cases.

The crux of the Act requires strong data governance, accurate training, the elimination of potential bias within models, clear and transparent validation of model performance, tracking and auditing, and the ability for human-in-the-loop intervention if the models go off the rails. Model training has an important and prominent focus. The Act states:

“High data quality is essential for the performance of many AI systems, especially when techniques involving the training of models are used, with a view to ensure that the high-risk AI system performs as intended and safely and it does not become the source of discrimination prohibited by Union law. High quality training, validation and testing data sets require the implementation of appropriate data governance and management practices. Training, validation and testing data sets should be sufficiently relevant, representative and free of errors and complete in view of the intended purpose of the system…”

The Act notes that taking these steps to build trust in these systems is simply essential. Untrusted AI is doomed to failure.

Let’s contrast these standards and requirements with the application of AI to current social, voice of customer and media analysis.

While the application of some level of AI is pervasive across most social, media and CX platforms, it is highly uneven. According to the Social Intelligence Lab’s 2023 “State of Social Listening” study, data accuracy remains one of the industry’s biggest complaints. In many systems there is negligible human-in-the-loop oversight or ability to fine-tune or modify models. One-size-fits-all model classification generally takes precedence over more accurate domain-specific models.

Model performance measurement and auditing is mostly opaque or “one off,” if available at all (in most cases it is not). Further, the training processes and data used are most often black box and unavailable to users of the technology (eliminating bias in model training is a complex task that requires sophisticated end-to-end processes).

If asked, most users simply do not know the specific performance of their models or the accuracy of their data classification, yet they routinely make business decisions based on this data. When probed on specifics, many AI providers gloss over the details of their capabilities, and unclear marketing, promotional materials and other documentation often just muddy the picture.

This state of affairs is simply unsustainable in this new environment.

To their credit, organizations like the Institute for Public Relations (IPR), ESOMAR and AMEC (the International Association for the Measurement and Evaluation of Communication) are working to educate the industry and build consensus for action, but those efforts remain mostly early stage and aspirational. Importantly, an increasing number of analytics- and technology-savvy brands are demanding greater visibility and transparency (hallmarks of trusted AI), which is a key impetus for change. Without pressure from buyers, many technology providers simply will not prioritize building key “trusted AI” features.

Here are some key questions and topics we recommend for consideration when evaluating AI vendors, drafting RFPs, or participating in relevant industry groups:

Conduct a Current Assessment: Does your team understand this technology well enough to evaluate it effectively and establish the right processes? Do you need to improve education, especially among key stakeholders? Are you asking vendors the right questions?

Who are your current vendors, and what is the state and quality of their trusted AI technologies and processes, if any? If none, what is their roadmap? How are the data and insights from your social and media analysis being used, and how does that use align to high- and low-risk categories? Where could trusted AI combined with this unstructured data provide your organization with even greater value?

How are models trained, in-house or by third parties? What specific roadmap and strategy do your vendors have to align and elevate their offerings to these standards? Are they capable of working with third-party audit and trusted-AI platforms? How do they conform to key trademark and privacy requirements? What is their timeline for action?

What process is used to eliminate potential bias? Are there robust data discovery capabilities? Is model training conducted by third parties or by domain experts? Are there intercoder reliability processes? How is the highest data quality ensured? How are models scored and evaluated? Can you access domain- or industry-specific models? And can your team participate in fine-tuning, or are you stuck with a static one-size-fits-all model that doesn’t meet your requirements?
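
One of the checks named above, intercoder reliability, is commonly quantified with a chance-corrected agreement statistic such as Cohen’s kappa. The minimal sketch below (plain Python; the coders and labels are invented for illustration, not drawn from any particular vendor’s process) shows how two annotators’ sentiment labels might be compared before their work is trusted as model training data:

    from collections import Counter

    def cohens_kappa(labels_a, labels_b):
        """Chance-corrected agreement between two annotators."""
        n = len(labels_a)
        # Observed agreement: share of items both coders labeled identically.
        p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
        # Expected chance agreement, from each coder's label frequencies.
        freq_a, freq_b = Counter(labels_a), Counter(labels_b)
        p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
        return (p_o - p_e) / (1 - p_e)

    # Hypothetical sentiment annotations from two human coders.
    coder_1 = ["pos", "neg", "neu", "pos", "neg", "pos", "neu", "neg"]
    coder_2 = ["pos", "neg", "neu", "neu", "neg", "pos", "pos", "neg"]
    print(f"kappa = {cohens_kappa(coder_1, coder_2):.2f}")  # ~0.62 here

Values near 1.0 indicate strong agreement; a low kappa is a warning that the labeling guidelines, and therefore the training data, need work before any model is trained on them.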

How accurate is your model, and how can you know and verify this? Can you access and audit the training data and models directly, and see the precise performance of your model at any point in time? Is the model evaluation process comprehensive? Does it incorporate standard measures (F1, precision and recall) or more? How often is model performance assessed, and is there an available audit trail of model performance over time? Is there data-drift detection providing advance warning that models may need to be retrained and updated? Is model performance tracked and registered, or is it “train it and forget it”?
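
To make those measures concrete, here is a minimal sketch (using scikit-learn, one common choice; the gold labels and predictions are invented for illustration) of how a model’s output is scored against a held-out, human-labeled test set. A real validation harness would also timestamp and store each run to create the audit trail described above:

    from sklearn.metrics import classification_report

    # Hypothetical held-out test set: human "gold" labels vs. model output.
    y_true = ["pos", "neg", "neu", "pos", "neg", "neu", "pos", "neg"]
    y_pred = ["pos", "neg", "neu", "neg", "neg", "neu", "pos", "pos"]

    # Per-class precision, recall and F1, plus macro and weighted averages.
    print(classification_report(y_true, y_pred, digits=2))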

Is there a model governance system? Can you or your organization provide input or changes to the system? Can you track and see the performance of all your models across the organization in near real time? Is there an end-to-end system to build, fine-tune, integrate, validate and deploy models efficiently? How does it work, and how is it accessed? Is there a process for feedback and model optimization?
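
As a rough illustration of what “tracked and registered” can mean in practice (every field name here is hypothetical, not a reference to any particular product), a minimal model-registry record might look like this:

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class ModelRecord:
        """One auditable entry in a hypothetical model registry."""
        model_id: str
        version: str
        domain: str             # e.g. "airline CX sentiment"
        f1: float               # headline score on the held-out test set
        precision: float
        recall: float
        training_data_ref: str  # pointer to the exact dataset snapshot used
        evaluated_at: datetime = field(
            default_factory=lambda: datetime.now(timezone.utc))

    registry = []
    registry.append(ModelRecord(
        model_id="sentiment-airline", version="2.3.1",
        domain="airline CX sentiment", f1=0.87, precision=0.86, recall=0.88,
        training_data_ref="s3://example-bucket/airline-v7.parquet"))

Appending a record on every evaluation, rather than only at training time, is what turns a registry into the near-real-time performance view asked about here.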

Is there a human-in-the-loop capability for oversight and intervention? How does it work? Can you have direct access? And if models do go off course, what processes are in place to help explain why and to determine what corrective action to take? For many use cases, the AI Act demands that transparency be built in so that users can interpret the system’s output (and challenge it if necessary).
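
One simple form such intervention can take, sketched below under the assumption that the classifier exposes a confidence score (the threshold and function are hypothetical), is routing low-confidence predictions to a human review queue instead of accepting them automatically:

    REVIEW_THRESHOLD = 0.80  # tune per use case and risk tolerance

    def route_prediction(text, label, confidence, review_queue):
        """Accept confident predictions; escalate the rest to a human."""
        if confidence >= REVIEW_THRESHOLD:
            return label                    # auto-accept
        review_queue.append((text, label, confidence))
        return None                         # pending human judgment

    queue = []
    result = route_prediction("Service was fine, I guess.", "neu", 0.55, queue)
    # result is None; the item now waits in the queue for an analyst, and the
    # corrected label can later be fed back into retraining.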


These questions are only a partial (but important) set for critical discovery, covering areas of investigation that deserve in-depth, detailed responses. When it comes to AI approaches, details and specifics matter.

And the time for this effort is now. While initial legislation is not focused squarely on this category, that does not mean the industry should hold off on aggressive steps to abide by the standards. Indeed, the opposite is true. The growing importance and impact of insights derived from unstructured data require the most trusted AI. And as trust is gained, the insights and solutions will continue to expand across essential areas ranging from corporate sustainability efforts to product innovation, brand reputation and customer experience.

Now is also an ideal time to get more involved with your industry groups for education, consensus development and representing the category before key regulators, academics and other influentials.

Moreover, taking specific action now will not only get ahead of potential risk and help ensure compliance with internal AI policies; it will also generate significant improvement in model effectiveness, leading to broader adoption and to predictive and prescriptive analytics that better serve your organization and its key stakeholders. Finally, of course, challenging your own internal capabilities and the industry at large is the critical leverage point required to level up capabilities in a manner that safeguards consumers and helps assure trusted implementations of these essential technologies.

Clearly, the payoffs of being a leader, and not a laggard, in trusted AI are simply too important for the industry at large to wait any longer.


Rob Key is Founder and CEO of Converseon, the leading AI-powered NLP technology and consulting firm. Its core product, Conversus, provides a full suite of trusted AI features for classification of unstructured text data, both directly to leading brands around the world and through partnerships with other key platforms in the social, media and VoC industries. For a demo of Conversus, please visit https://vimeo.com/851622097?share=copy
