ChatGPT - an unpopular opinion (part 2)

In my previous essay I outlined some aspects of ChatGPT that aren't widely discussed: namely, how ChatGPT may change the way we think, and what its potential risks are, especially for the generations currently in school.

As mentioned in my previous post, there's another effect of ChatGPT that I haven't seen people discussing -- its cultural and societal impact.

I'm no expert on language models, so I'm basing my thoughts on my personal experience with the tool -- plus some educated guesses and extrapolation from that experience.

I'm an entrepreneur. At our company, Skonto Platform -- to borrow a phrase from Andrew Gazdecki -- "we're just getting started". We have a small, dedicated team, meaning that we have to do a range of things and cannot (yet) afford the luxury of being specialized and compartmentalized.

So naturally, I am doing everything from sales to product to HR to marketing. Having seen the rise of language models, I asked myself how we could (1) improve our product, (2) improve sales, or (3) grow our bandwidth. The third point especially stood out, as some areas have not received enough attention lately -- namely marketing. We have some initial articles posted on our blog (which rank steadily in the top 3-10 for our most relevant keywords), but we know that more is needed and we have to extend our reach on this front.

I started following some marketing experts who posted methodologies for producing content with ChatGPT. The steps were clear: define the product, define the clients, ask for keywords, ask for titles, and so on. Easy-peasy. And then it occurred to me: I'm obviously not the only person seeing these how-to articles. I'm obviously not the only one trying to implement them. Maybe not even the only one in our fairly niche area.
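
To make that workflow concrete, here is a minimal sketch of the prompt chain in Python. It assumes the official openai client and an API key in your environment; the product description and the prompt wording are placeholders I made up, not the exact steps from the articles I followed.

```python
# A sketch of the "define product -> clients -> keywords -> titles" chain.
# Assumes the official `openai` Python client (pip install openai) and an
# OPENAI_API_KEY environment variable; prompts and product text are placeholders.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Send one prompt to a chat model and return the reply text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model will do
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

product = "An invoicing platform for small B2B teams"  # placeholder description
clients = ask(f"Describe the typical customers of this product: {product}")
keywords = ask(f"List 15 SEO keywords these customers might search for:\n{clients}")
titles = ask(f"Suggest 10 blog post titles targeting these keywords:\n{keywords}")
print(titles)
```

Notice that nothing in this chain is specific to me or my company -- which is exactly the point: anyone in my niche can run the same few prompts and get back very similar keywords.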

So what happens if I want to produce content and improve my ranking, but the keywords I use are the same ones ChatGPT also suggests to my competitors? The contrarian question from Peter Thiel's Zero to One echoed in my head:

What important truth do very few people agree with you on?

How do language models work? (As I said, I'm not an expert, but bear with me for a second.) As I understand it, these models learn from the data you feed them. The more occurrences of a "truth" there are in the training set, the more "true" the model will consider it. Consequently, the "truth" that comes out of the model will reinforce the majority view -- and cut out and disregard the outliers and the contrarians.
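
To illustrate that intuition, here is a deliberately crude toy model in Python. It is not how ChatGPT works internally -- real models are probabilistic and vastly more sophisticated, and the corpus below is made up -- but it shows how "most frequent in the training data" turns into "the answer you get".

```python
# A toy illustration of how frequency in training data becomes "truth".
# NOT how ChatGPT works internally: this is a crude most-frequent-continuation
# predictor over a tiny, made-up corpus.
from collections import Counter, defaultdict

corpus = [
    "the market wants more content",
    "the market wants more content",
    "the market wants more content",
    "the market wants fewer, better articles",  # the contrarian view, seen once
]

# Count which word follows each word across the corpus.
next_word_counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current, following in zip(words, words[1:]):
        next_word_counts[current][following] += 1

def predict_next(word: str) -> str:
    """Return the most frequently observed continuation of `word`."""
    return next_word_counts[word].most_common(1)[0][0]

# The majority phrasing wins every time; the contrarian one is never produced.
print(predict_next("wants"))  # -> "more"
```

Real models sample from learned probabilities rather than taking a hard "most common wins", but the pull toward whatever appears most often in the training data is the same tendency.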

Now, this is an issue. The world has been driven forward by outliers and contrarians. Sometimes they led in the wrong direction; sometimes the risk inherent in outlier and contrarian views didn't result in positive things; sometimes they're downright stupid or dangerous. But the consensus will never find anything new. The least common denominator is a very small number.

As people, one of our key strengths lies in our imagination, our creativity, and the diversity of our minds and thinking. Don't let it go to waste.


In my essays on ChatGPT I tried to shed light on two unpopular opinions regarding language models. I find this technology fascinating and amazing, so I urge you to use it, but don't forget to use and train your own brain, too.
