Market Research, Synthetic data, Plato and Nietzsche.

Why all research data is synthetic data.

Synthetic data - whatever that really is - is the philosophical subject of the year for the insight industry. I have a perspective on this, and on market research (MR) in general, that I would like to share. For every thesis there must be an antithesis. If this is a bit TL;DR, my apologies.

The root of the word synthetic is Ancient Greek: it likely derives from sunthetikós, meaning the process of constructing or composing something.

So at root, synthetic doesn’t mean “fake”, although the danger is that most MR people might immediately jump to that interpretation!

Ray Poynter [1] has bravely set out his definition of synthetic data, and I applaud him for it. But there are a lot of problems with the call for “data purity” or “authentic data”. In his “Draft Synthetic Data Manifesto” [1] he states that researchers must:

“Tell the buyer/user if the data is not 100% raw, unmodified responses from real humans.”

The problem with this is that there is no such thing as “raw, unmodified responses from real humans”.

Rational…not so much.

There is a core concept that has to be addressed when we talk about human behaviour: it’s mostly not rational (take a look at the news - ideology seems to reign over decisions affecting thousands if not millions of lives). Psychological research gave up decades ago on asking people how they think - introspection, it was called - and it was an utter dead end. Most people operate, most of the time, in what Kahneman [2] calls “System 1” cognition: fast, automatic, not thinking that much. The idea that you can get a consistent reading of what people think by asking them about it is a precarious proposition at best. And this assumes there is a “truth” in cognition at all - that there is something in people’s minds that is a “core truth”: what they really think, what really motivates them.

Let’s also mention what people remember about everyday life: not very much. The research by Loftus et al [3] on eyewitness testimony is very revealing about how much attention people really pay to their surroundings and, more relevantly, what they actually remember when something significant happens. If we can’t reliably remember who shot whom when we witness some heinous act, how do we remember our day to day lives? Do we really know why we chose a shampoo, and does it really matter to us? Of course, if you manufacture shampoo you are deeply invested in the belief that we think closely about the choices we make, but the reality is probably far different. There is often a sort of “brand psychosis”: brands hold a belief system around their products that often bears no relationship to how the consumer at large sees the brand. And the very act of recall, of asking about something, seems to affect the memory of that object or concept [4].

In the data mines.

Early in my business career I spent a lot of time in CATI (Computer Assisted Telephone Interviewing) studios listening to interviewers as they interviewed respondents. After hours and hours of listening, it’s extremely hard to believe that there is such a thing as data purity in market research. I have listened to interviewers desperately trying to keep respondents on the line for a “20 minute interview” that was always 45 minutes long. There was a young man with the wrong kind of accent repeatedly getting hung up on while trying to do a financial survey. On the other side there are “refcons” - refusal converters, interviewers who could persuade respondents who had refused a survey to complete it after all. There is always that quota cell that needs just one more complete but has run out of sample; enter the refcon. The reality is that certain interviewers could always get more completes than others. Of course there is the argument that interviewer effects are random, but I’m pretty certain they are what is termed a “fixed” effect - that is, they bias the data. In the social/government research space a lot of analysis is often carried out to parse out the interviewer effect, but that just isn’t viable in commercial research. Interviewer management was hard enough - staffing a call center is a huge challenge on its own, let alone somehow balancing the interviewing staff on some set of metrics. Just getting completes was often hard enough.
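For readers who want to see what “parsing out the interviewer effect” typically involves, here is a minimal sketch of one common approach: a multilevel (random-intercept) model with interviewers as the grouping factor. The data and column names are hypothetical, and treating interviewers as a random rather than fixed effect is itself a modelling choice.

```python
# Minimal sketch: estimating interviewer-level variance with a mixed model.
# The DataFrame and column names (score, interviewer) are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_interviewers, n_per = 30, 40
interviewer = np.repeat(np.arange(n_interviewers), n_per)
# Simulate an interviewer effect: each interviewer shifts scores slightly.
shift = rng.normal(0, 0.5, n_interviewers)[interviewer]
score = 5 + shift + rng.normal(0, 1.0, n_interviewers * n_per)
df = pd.DataFrame({"interviewer": interviewer, "score": score})

# Random-intercept model: score = overall mean + interviewer-level deviation.
result = smf.mixedlm("score ~ 1", df, groups=df["interviewer"]).fit()
print(result.summary())  # the "Group Var" row is the interviewer-level variance
```

If the interviewer-level variance turns out to be a meaningful share of the total, the claim that interviewers are just harmless noise becomes hard to sustain.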

The advent of web surveys changed a lot of that. No more interviewer management: with a 50 seat CATI center you might have 150 part time interviewers to manage, pay and schedule. And in the USA particularly, the abuses of the direct telephone marketing industry sent response rates plummeting; just getting people to answer the phone was hard. Web surveys gave respondents back control - they could do the surveys when they wanted. We got detailed open ends. The phrase heard in CATI studios, “tidying up the open ends”, no longer applied. Too often interviewers were not great typists, so what the respondent said when faced with an open end request was heavily edited by the interviewer and then checked and massaged. There was no ill intent here, but the open end had to be clear enough to be analyzed.

The web changed all that. Of course there was a sampling issue, but that was an issue in CATI too, since people did not pick up the phone. With web surveys we also have the problem that we can’t be sure exactly who is responding to the survey. We also have all sorts of effects from the way a question is presented, from respondents “straight lining” or gaming responses, and from rubbish questions causing random responses or respondent drop outs. It is fair to say that the web sample industry is tackling these issues with great effort, but they do remain issues. It’s hard to see that there is “data purity” in web surveys. There is bias and noise - but then again, that is humans for you: biased and noisy.
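As a concrete illustration of one of those quality checks, below is a minimal sketch of flagging “straight liners” - respondents who give the same answer to every item in a rating grid. The DataFrame and column names are hypothetical; real panel operations combine checks like this with speeding and attention measures.

```python
# Minimal sketch: flagging straight-lining across a grid of 1-5 ratings.
# The columns q1..q6 and the data are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "respondent": ["r1", "r2", "r3"],
    "q1": [3, 5, 2], "q2": [3, 4, 2], "q3": [3, 5, 3],
    "q4": [3, 2, 2], "q5": [3, 5, 2], "q6": [3, 1, 2],
})

grid = df.filter(regex=r"^q\d+$")
# One unique value across the whole grid = a candidate straight-liner.
df["straight_liner"] = grid.nunique(axis=1) == 1
print(df[["respondent", "straight_liner"]])
```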

Pure data and the cave.

So the idea of “data purity” is a Platonic ideal, and what we see in research are just shadows on the wall of our cave.

The problem is that it seems clear to me that there is no core truth of behaviour. A lot of the time how people behave is driven by casual thought processes, not some deep rationality and analysis. We know that survey wording, notoriously in political polling, can be used to generate the desired response to a question. Change the wording and you get a different response. People make things up - the process of asking a question can generate the response. A trivial example is my preference for bars of soap: I like it if it is coloured blue. Asking me about other aspects will certainly generate a response, but generally, if it is blue, I’ll buy it. Sounds pretty stupid, I know (“what are you, 10 years old?” is a comment I have heard), but are we sure that this sort of behaviour is that unique?

Making it all up.

Human cognition is generative. Perception is synthetic: we make up the world around us. A great example is the human visual system. What we see is far from a simple copy of the light hitting the retina; the brain fills in gaps in our vision and makes us think we can see things clearly when in reality only a tiny part of our visual field is truly sharp. And what has happened in the past can affect what we see in the future - a recent paper [5] was summarized as showing that “the human visual system lives 15 seconds in the past”. I’m sure we’ve all seen visual illusions of movement, or noticed how after-images can persist for many seconds. And if you start looking at how we see colour, it becomes very clear that it is far from simple wavelengths of light.

We are constantly adapting how we respond and think; we don’t have a single, simple truth to how we think. It has to vary constantly so we can adapt to the world around us, so the idea that there is one pure truth to how people respond to questions seems extremely unlikely. We need to adopt a more Nietzschean view - that attaining some ideal truth is a fallacy - and accept the messy nature of human thought. We can’t attain a pure truth of how people think simply because there isn’t one. The way we ask about thought can affect the thoughts; that is just how humans are. The nature of intelligence is adaptation, not static patterns.

So this brings us back to the concept of “synthetic data”. The idea is that it is somehow different from (worse than) the “pure” data collected by traditional methods. But I would argue there is no data purity; all data is synthetic in a sense.

The AI systems that produce this “synthetic data” are massive associative networks, built from a huge amount of text generated by humans. In many cases, querying them is as valid as asking someone a question, and arguably as accurate - as accurate as anything can be given the variance in human behaviour. Trying to say that there is a form of pure data makes no sense if you accept Kahneman’s [2] theories: System 1 thought is inherently adaptive, the fastest answer that comes to mind in the circumstances.

“Through a Glass Darkly”

This famous biblical quote (1 Corinthians 13:12, King James Version) just about sums up how we can see the human mind - on a good day. A “glass” in those days meant a mirror: we see the human mind reflected, imperfectly, through behaviour. Some of that is in surveys, some in qualitative studies.

There is no pure data; so-called “synthetic data” is just as “pure” as any other data, except that it is new and confusing. In the end all research data is qualitative - remember, we invented surveys because it was hard to analyze verbal responses to questions consistently, so we made people categorize their own responses. There is a good argument that the original synthetic data is survey data. We don’t naturally think in Likert scales; we don’t say “I love you 0.5”. These constructs of scales and numeric attributes of emotions are synthetic - they do not exist in humans. We invented them to make data analysis easier.

In the end we have to evaluate all data based on how valid we feel it is. Is it important? Is it consistent, both internally and externally? Is it directionally useful? Making distinctions about types and “purity” of data doesn’t help.

It is all through a glass, darkly.

[1] Ray Poynter, “Draft Synthetic Data Manifesto”, https://newmr.org/blog/draft-synthetic-data-manifesto/

[2] Daniel Kahneman, “Thinking, Fast and Slow” (excerpt), Scientific American, https://www.scientificamerican.com/article/kahneman-excerpt-thinking-fast-and-slow/

[3] Loftus, E. (1996), “Eyewitness Testimony”, Harvard University Press, 2nd edition.

[4] “Karim Nader and the unification of memory erasure: PKMζ inhibition and reconsolidation blockade”, Brain Research Bulletin, Volume 194, March 2023, pages 124-127.

[5] Manassi, M. and Whitney, D. (2022), “Illusion of visual stability through active perceptual serial dependence”, Science Advances, January 2022.


Great article Andrew Jeavons!! We need to remember why we do research and what its final goal is. In the end we build a sample to represent a universe, in order to understand it.

Dr Paul Marsden (CPsychol)

Chartered Psychologist: Machine psychology, Consumer psychology

3w

Super. “Human cognition is generative. Perception is synthetic”. Missed the Nietzsche angle, but is it that market research is built on ‘bad faith’ ie shilling naive empiricism and bad inference?

A.J. Smith

Simplifying research for better decisions | Senior Analytical Consultant @ The Directions Group

3w

Is this more than hand-wringing over the definitions of "synthetic" and "pure"? To me, it seems you've missed the crux of most valid concerns about the application of so-called synthetic data. It should also be noted that Ray Poynter and others have made efforts to define synthetic data when they talk about this fairly new topic (so we don't need to go back to the Greek origin). There's not a single serious MR practitioner out there today who believes reported behaviors and beliefs are a perfect guide to what humans have done or will do. But we can collect information directly from humans, try to consider the biases that may be present in what they share or the collection methods used, and aim to learn from that. Synthetic data limits our ability to investigate those potential biases while introducing additional biases. That doesn't mean synthetic data is useless or that results based on synthetic data will necessarily look drastically different from more directly collected human data, but it does make it inherently different from traditional research data.

Philippe T.

AIOPS, Operational Intelligence, IT Operations and Service Management, Observability, Full Stack Monitoring, Important Business Services Resilience, Project Management, Service Delivery

3w

Anyone involved in social sciences knows that survey data is biased. The simple fact of asking a particular question masks other questions that could have been asked. Anyone also knows that asking a particular question in different ways will yield different results. Kahneman explained cognitive biases, but perhaps more relevant here is Simon's bounded rationality theory. Interestingly, LLMs incorporate an element of randomness to make their findings appear more human (but this can be turned off). The bottom line is that human data is imperfect, but Claude Shannon showed that from data one can infer information.
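To make that randomness point concrete, here is a minimal sketch (not any particular vendor's API) of temperature-scaled sampling over toy next-token scores; driving the temperature to zero makes the output deterministic, i.e. the randomness is "turned off".

```python
# Minimal sketch: temperature-scaled sampling over toy next-token scores.
import numpy as np

def sample_token(logits, temperature, rng):
    if temperature <= 1e-6:
        return int(np.argmax(logits))            # randomness "turned off"
    scaled = (logits - logits.max()) / temperature   # subtract max for stability
    probs = np.exp(scaled)
    probs /= probs.sum()
    return int(rng.choice(len(logits), p=probs))     # higher temp = more variety

rng = np.random.default_rng(0)
logits = np.array([2.0, 1.5, 0.2])  # toy scores for three candidate tokens
print([sample_token(logits, 1.0, rng) for _ in range(5)])  # varied picks
print([sample_token(logits, 0.0, rng) for _ in range(5)])  # always index 0
```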

Finn Raben

CEO | Successful Business Leader | Strategic Growth Advisor | Change Consultant | Public Speaker

3w

Thanks Andrew Jeavons - great read. I tend to agree with a lot of what you say, although I do believe that the extensive use of synthesised data (I think "synthetic" is completely the wrong term) is the data equivalent of continually making a clone from a clone - ultimately the clones suffer from age-dependent fitness decay. Now, you could say the same for the source data that is being synthesised (as most machine learning systems need to be renewed every five or so years), but I think we need to stress test the synthesised data a little more (as many are now doing).
