Is A.I. “friend” or “frenemy”?
RONIN International
Global research services with the reassurance of quality and experience
In my trilogy of pet hates, “A.I.” is my second most detested piece of jargon.
This might seem somewhat “curmudgeonly”, as ‘intelligent’ systems are increasingly everywhere in our industry.
So why am I being so “grumpy” about this term?
Well, briefly, the current marketing imperative of having an “A.I. capability” means that anything from a scanner to an Excel macro can fall under such a definition, without fear of a misrepresentation suit! Additionally, when there is a genuine algorithmic or machine-learning element to the capability on offer, there is no transparency about the code, the training data or the source files. For anyone who can remember the buyer-side antipathy to proprietary, “black-box” methodologies, why are we insisting on repeating that mistake here?
While some find it easier to hide behind a “caveat emptor” approach, the legacy of our profession and our reputation for transparency and self-regulation lead me to say we MUST do better. Carlos Ochoa and Ezequiel Paura from Netquest were pioneering in this regard when they presented their PII algorithm at the ESOMAR Fusion conference in Dublin in 2018, with a follow-up session presented a year later in Madrid by their colleague Anna Bellido. This was the first time an algorithm’s structure, code and learning platform were openly provided to a public audience. More recently, ESOMAR has finally instigated a global workgroup to determine standards for A.I., based on the excellent work of Judith Passingham and Mike Cooke, published more than 18 months ago. This work, and the resultant definition of what can (and cannot) be classified as A.I., will be essential to maintaining citizen confidence in our profession’s deployment of artificially intelligent systems.
But why is this essential, you might ask? For two primary reasons...
First, an educational TED talk by Yejin Choi, entitled “Why AI Is Incredibly Smart and Shockingly Stupid”, is a great place to start to understand why we must work together to ensure that our professional cornerstones of rigour and quality are applied to this new methodology just as they have been to statistical research methodologies in the past: https://youtu.be/SvBR0OGT5VI
Second, the advent of photographic and video-based social media platforms created an essential debate around “fake news” and the need for triangulation and verification. The same principles apply here: users and buyers of any service incorporating an A.I. element should be able to understand fully how that element was constructed and is applied, and be reassured that there is no hidden bias or synthetic data that may contaminate the findings.
There are many excellent, considerate and rigorous developers of A.I. systems in our sector, all of whom would have no qualms in explaining what they do and how they got there. For those who are unwilling to share that information, ask yourself, “Why won’t they?”
Chief Executive Officer at Insights Association
1 year ago
Jargon is also risky from a legislative perspective. When AI is fully regulated, if not defined and labeled well, we could end up over-regulating key parts of our work, like automation and analytics. I also have a post coming out soon about the 6 Rs of managing the use of Artificial Intelligence: Risk, Reason, Representation, Respondent Care, Recency and Repeatability. Stay tuned!
Founder: GMIFY | LET'S TURN PANELISTS INTO PLAYERS | 2023 Tony Cowling Foundation Innovation Award Finalist
1 year ago
Finn, you’ve elegantly laid out the potential strengths, accompanied by the weaknesses and fears, of AI as it begins to introduce itself to our industry. Market Research’s dilemma is that it is terribly uncomfortable when it comes to peeling back the onion, as you well know and pointed out with your example of the shockingly rare showing of algorithms. The next few years will most likely be a bumpy road with respect to AI - what to use, when to use, whom to trust, etc. Your counsel about AI has an important shelf life associated with it - thanks for sharing.