A blind willingness to accept.
Josh Muirhead
Last week, I wrote about how the first data point can influence the way we think — e.g., picking a show to watch on Netflix or a coffee shop on Google.
But the risk of trusting the first bit of information we receive is far more profound than choosing a bad show or cup of joe.
Here’s a recent, personal example:
I have been playing around with ChatGPT for the past few weeks. In one chat, I wanted to see how different the weather was in Ottawa vs. other cities in Canada. On the surface, this is the type of question that ChatGPT excels at. However, once I got the information, I continued my research, happy with what I had discovered.
And there is the risk.
I based my entire worldview on the first answer ChatGPT gave me — and on every follow-up question, weather-related or not. And because I was lazy, I never verified the responses.
There are dozens of places to get weather reports. We Canadians love tracking and talking about the weather! But I didn’t consult a single one of them. Worse, I never asked what source or sources ChatGPT was using.
If you’ve ever wondered how disinformation spreads — look no further than my blind willingness to accept what a chatbot told me.
Is this problematic for marketers and businesses? Heck ya. But it can be even worse for society.