ESOMAR Fusion Day 2 Wrap
Hi everyone. Coming to you live again from Madrid. We were actually pretty lucky to get to the conference today, having decided to hire the publicly available battery-powered scooters to make the trip across town (https://www.instagram.com/natureptyltd/). A good heart starter ahead of what’s been another fabulous day.
We left Day 1 of ESOMAR Fusion with a couple of important takeaways. One was that the future is all about the “translator”. Yes, the data scientist has been described in recent years as having the sexiest job on the planet, but really, the need for human involvement (problem framing, interpretation) means that the true champion is going to be the translator. As ESOMAR President Joaquim Bretcha put it yesterday: “increasingly, our profession needs translators. Translators who know how to incorporate the context of the business into an environment of data and sources of data of increasing complexity”. The translator will need a hybrid and sought-after skill set: the ability to speak and understand the language of business, combined with sufficient hands-on technical nous to work with data – be it qualitative or quantitative.
The other big thing worthy of note is that everyone is talking about A.I. without really carefully defining what is meant by it. It’s becoming something of a motherhood or umbrella catch-all that covers a range of things, and we need to be careful with that. We urge client-side buyers of research to beware of the ‘A.I. badge’, which can mean anything from something incredibly simple and black-box – the kind that may in fact stifle creativity and value extraction by giving the same formulaic answer over and over – to something customised and nuanced with the potential to seriously and positively impact the value extracted from data.
One of our Day 1 presenters, Katya Vladislavieva, a dual PhD holder from Belgian company DataStories, reminded us of the important distinction between A.I. (artificial intelligence) and the “other” A.I. (augmented intelligence). She made the point that, at the end of the day, Artificial Intelligence is actually incredibly simple: it’s when one gets a machine to repeat a process over and over again, independent of human involvement. By contrast, Augmented Intelligence is an alternative conceptualisation of the same thing, one which accentuates the complementary or assistive role that technology and ‘the machine’ play alongside the human. As the name suggests, Augmented Intelligence augments human intelligence and ‘effort’ as opposed to replacing it. I think we need to be careful, when discussing A.I. in research and insights, about whether we are referring to Artificial or Augmented Intelligence, as they imply different levels and roles of human involvement. This matters because of the central role that humans have played, do play and will continue to play in the data translation and insight derivation process.
Let’s get into the Day 2 wrap …
Our first major take-out from Day 2 is something already rather close to our heart in the Nature philosophy: if you want to measure consumer experience and behaviour, you must try to get as close to that experience and behaviour as possible. Really, our quest as researchers needs to be about getting closer and closer to the consumer truth. This theme was particularly apparent in a presentation today by Dutch presenter Tom van Bommel, reporting the results of advertising evaluation work that combined VR, brain scanning and eye tracking in an attempt to do just that. VR, brain scanning and eye tracking are individually not new, but putting them together in cocktail form is super hot. This is why we travelled 24 hours each way for a 3-day trip!
Let’s keep going with some other key take outs and summary points from today:
- As most of us already know and are certainly being told, in the future the modern organisation will have at its fingertips more data than ever before. Traditionally, insights functions have dealt predominantly with survey data. But they now have, and increasingly will have, at their disposal myriad other sources of data, including but not limited to behavioural, social and financial data, not to mention other forms brought to bear through data democratisation. A challenge and opportunity for client-side insights functions is carving out the mandate and developing the capability to synthesise and fuse multiple data inputs, so as to continue to deliver value to their organisations. This is essential to remain relevant, let alone move further up the value chain. Part of this process implies the need for an internal transformation journey, while another part of course implies raising the bar on external agency partner requirements. The new frontier is here, or at least fast approaching, and the opportunity is therefore here to carve out greater organisational responsibility for leveraging disparate data inputs to inform decisions that have until now been informed by a much smaller number of sources. Today, we were lucky enough to hear one of the very best presentations of the conference so far, when Microsoft USA’s insights team talked about their transformation journey as they move into a world no longer wedded to single data sources. The Microsoft team outlined in detail the steps taken to transform their capability and ways of working, both as a team and with stakeholders. No surprise, we have already begun translating this into the external agency requirements that follow! We are more than happy to share these insights with our client-side partners if this topic resonates.
- In the CX space, there is a clear opportunity to use new technologies to leverage social media data to add context to traditional CX programs. Social media data is of course voluminous, which makes it a challenge to extract value from its richness. Technology is therefore required to home in on three distinct types of information: the things that are a source of irritation to people, their actual level of dissatisfaction and, importantly, the emotional triggers.
- Technology offers a powerful way of unpacking HOW responses are given in research, which is illuminating when examined alongside WHAT was actually said. We saw a great case study by the Dutch agency SKIM for client Johnson & Johnson, which leveraged voice A.I. technology able to detect the emotion in a voice response. This type of technology has many applications, a simple case in point being NPD research, which as we know can suffer from overstatement of interest and appeal by survey respondents. SKIM’s use of technology to detect emotion in verbal response data was super impressive, and even more so was the augmentation and mashing of this against ‘traditional’ metrics. Does System 1 and System 2 in one study get any better than this!? We are happy to tell you more about the specific verbal A.I. when we return if that would be of value and interest. It’s hot!
- Video content is everywhere, but to make sense of it, technology is needed to negate the cost and time associated with human coding. The watch-out, according to one contributor, Mike Kuehne from FocusVision (USA), is that standard NLP routines are not able to meaningfully distil powerful content from focus-group-length video; instead, bespoke deep neural network algorithms are needed alongside them.
- Social intelligence (AKA social listening, or social media monitoring) refers to the process of gathering data from sources such as blogs, forums, reviews, news and social media, and making meaning of it so that it has value. At the conference today we were involved in a session with an active debate as to whether ‘social intelligence’ is in fact part of the ‘insights function’ in client organisations, or whether it sits elsewhere. The first thing to note here is that since 2010 the social intelligence market has grown in value from USD$0.2 billion to ~USD$8 billion. It is mainly used by PR departments, digital marketing, social media marketing, operations and CX, but not the ‘market research’ department. There was general consensus in the conference room that the consumer insights industry has the potential to take greater ownership of the evolving data landscape associated with democratisation, but that this will involve proactivity and drive. The message here is a pretty strong and clear one: leverage the translator role and seize the day by leveraging all of the available data sources, or risk being made redundant in the modern organisation! This links nicely with my opening point about the need for client-side insights functions to quickly get on a transformation journey … trust me, your agencies will follow!
A final word…
There is absolutely no doubt that things are changing. Looping back to our Sunday musings and questions to self – “where will we be in a decade?” – the answer to that question is becoming increasingly clear.
We all know that we are living in a world of more and more data of all sorts. The research industry is fast morphing into one that needs to live in this world by leveraging this data. Our foray into this is likely to play out initially by making qual more scalable and by layering greater understanding of emotion and meaning into quant through open-text analytics, not to mention the fusion of complementary data sources to enrich and provide efficiency. These are exciting times.
What’s at stake? Everything, it seems. Now is our time to evolve, and to embrace the changing context in which we exist. This applies equally to our peers on the client side and in agencies alike.
Oh, did I mention we’re pumped?