5 Bad ChatGPT Mistakes You Must Avoid

Thank you for reading my latest article, 5 Bad ChatGPT Mistakes You Must Avoid. Here at LinkedIn and at Forbes I regularly write about management and technology trends.

To read my future articles simply join my network here or click 'Follow'. Also feel free to connect with me via Twitter, Facebook, Instagram, Slideshare or YouTube.

---------------------------------------------------------------------------------------------------------------

Generative AI applications like ChatGPT and Stable Diffusion are incredibly useful tools that can help us with many day-to-day tasks. Many of us have already found that when used effectively, they can make us more efficient, productive, and creative.

However, what’s also becoming increasingly apparent is that there are both right ways and wrong ways to use them. If we aren’t careful, it’s easy to develop bad habits that could quickly turn into problems.

So, here’s a quick list of five pitfalls that can easily be overlooked. Being aware of these dangers should make it fairly simple to avoid them and ensure we're always using these powerful new tools in a way that's helpful to us rather than setting us up for embarrassment or failure.


Believing everything it tells you

Unfortunately, you only need to play around with ChatGPT for a short time to realize that, far from being an all-knowing robot overlord, it can be prone to being a bit dim at times. It has a tendency to hallucinate – a term borrowed from human psychology that makes its errors seem more relatable to us. It really just means it makes things up, gets things wrong, and sometimes does so with an air of confidence that can seem comical.

Of course, it's constantly being updated, and we can expect it to get better. But as of now, it has a particular propensity to make up non-existent citations or to cite research and papers that bear no relationship to the topic at hand.

The key lesson is to check and double-check anything factual that it tells you. The internet (and the world) is already full of enough misinformation, and we certainly don't need to be adding to it. Particularly if you’re using it to create business content, it’s important to have stringent editing and reviewing processes in place for everything you publish. Of course, this is important for human-created content, too. But putting too much trust in the capabilities of AI can easily lead to mistakes that can make you look silly and could even damage your reputation.


Using it to replace original thinking

It's important to remember that, in some ways, AI – particularly language-based generative AI like ChatGPT – is similar to a search engine. Specifically, it's entirely reliant on the data it can access, which in this case, is the data it's been trained on. One consequence of this is that it will only regurgitate or reword existing ideas; it won’t create anything truly innovative or original like a human can.

If you’re creating content for an audience, then it’s likely they come to you to learn about your unique experiences or benefit from your expertise in your field or because there's something about your personality or the way you communicate that appeals to them. You can’t replace this with generic AI-generated common knowledge. Emotions, feelings, random thoughts, and lived experiences feed into our ideas, and AI doesn’t replicate any of this. AI can certainly be a very useful tool for research and for helping us to organize our thoughts and working processes, but it won't generate that "spark" that enables successful businesses (and people) to distinguish themselves and excel at what they do.


Forgetting about privacy

When we’re working with cloud-based AI engines like ChatGPT or Dall-E 2, we don’t have any expectation of privacy. OpenAI – the creator of those specific tools – is upfront about this in its terms of use (you did read them, right?). It’s also worth noting that its privacy policy has been called "flimsy."

All of our interactions, including the data we input and the output it generates, are considered fair game for its own systems to ingest, store and learn from. For example, Microsoft has admitted that it monitors and reads conversations between Bing and its users. This means we have to be careful about entering personal and sensitive information. This could also apply to content such as business strategies, communications with clients, or internal company documents. There’s simply no guarantee they won’t be exposed in some way. An early public version of Microsoft’s ChatGPT-powered Bing was briefly pulled offline when it was found that it was occasionally sharing details of private conversations with other users.

Many companies (and at least one country – Italy) have banned the use of ChatGPT due to concerns over privacy. If you do use it in a professional capacity, it is important to have safeguards in place, as well as to keep up-to-date on legal obligations that come with handling such data. Solutions exist for running local instances of applications, allowing data to be processed without leaving your jurisdiction. These could soon become essential for businesses in fields such as healthcare or finance, where handling private data is routine.


Becoming over-reliant

Developing an excessive reliance on AI could easily become a problem for a number of reasons. For example, there are numerous situations where services may become unavailable, like when users or service providers are hit by technical issues. Tools and applications can also be pulled offline for security or administrative reasons, such as to apply updates. Or they could be targeted by hackers with denial-of-service attacks, leaving them offline.

Just as critically, over-reliance on AI could prevent us from developing and honing certain skills of our own that the AI tools are filling in for. These might include research, writing and communicating, summarizing, translating content for different audiences, or structuring information. These skills are important for professional growth and development, and neglecting to practice them could leave us at a disadvantage when AI assistance isn't available.


Losing the human touch

In a recent episode of South Park, the kids use ChatGPT to automate "boring" aspects of their lives – such as interacting with their loved ones (as well as cheating on their schoolwork). Obviously, this is played for laughs, but as with all good comedy, it's also a commentary on life. Generative AI tools make it easy to automate emails, social messaging, content creation, and many other aspects of business and communications. At the same time, this ease of automation can make it difficult to convey nuance and can be an obstacle to empathy and relationship-building.

It’s essential to remember that the idea is to use AI to augment our humanity – by freeing up time spent on mundane and repetitive tasks so that we can concentrate on what makes us human. This means interpersonal relationships, creativity, innovative thinking, and fun. If we start trying to automate those parts of our lives, we will be building a future for ourselves that’s just as damaging as the worst that the AI doom-mongers are predicting.


To stay on top of new and emerging business and tech trends, make sure to subscribe to my newsletter, follow me on Twitter, LinkedIn, and YouTube, and check out my books, Future Skills: The 20 Skills and Competencies Everyone Needs to Succeed in a Digital World and The Future Internet: How the Metaverse, Web 3.0, and Blockchain Will Transform Business and Society.

---------------------------------------------------------------------------------------------------------------

About Bernard Marr

Bernard Marr is a world-renowned futurist, influencer and thought leader in the fields of business and technology, with a passion for using technology for the good of humanity. He is a best-selling author of 21 books, writes a regular column for Forbes and advises and coaches many of the world's best-known organisations. He has over 2 million social media followers, 1.7 million newsletter subscribers and was ranked by LinkedIn as one of the top 5 business influencers in the world and the No 1 influencer in the UK.

Bernard's latest books are 'Business Trends in Practice: The 25+ Trends That Are Redefining Organisations' and 'Future Skills: The 20 Skills and Competencies Everyone Needs To Succeed In A Digital World'.

Andre M.

Public Service Commercial Lead | Bid Management & Strategy Expert | Market Insights & Project Leadership Expertise

1y

Very useful

Ian Fuller

Production Associate at FGF Lebanon

1y

I've been tinkering with ChatGPT quite a bit and all of this is absolutely valid. On top of this, though, I would say that you should always ask for non-standard and unconventional answers to get responses that go beyond common sense. Everyone should spend some time with AI to really understand what it is and how real the limitations of AI currently are. The average person struggles with their definition of AI, and because of its portrayal in the media they are often overly scared of it.

Sandra D.

Cybersecurity GRC Risk Leader | Women’s ERG Co-Lead | Thought Leader | WOC STEM Tech Rising Star | Girls Inc DC Woman of Impact | Strategy Execution Specialist | Mentor | Career Coach

1y

Great discussion on the pitfalls of using generative #AI. The risks associated with using these tools are real; no one wants to end up on the wrong side of the news for misinformation or false information given to them by AI. Verify everything, and edit it as you would any other work product. Thanks

Denise Murtha Bachmann

Sales is like a box of chocolates. Wrong! We should know exactly what we are getting. Together we will make sure that you know where your Sales are coming from in the remainder of this fiscal year.

1y

The one that resonates the most with me is losing the human touch, Bernard Marr. When adopting and adapting to these new AI technologies in the sales process, I focus on how we retain the human touch and human interaction.

POOJA JAIN

Storyteller | Linkedin Top Voice 2024 | Senior Data Engineer@ Globant | Linkedin Learning Instructor | 2xGCP & AWS Certified | LICAP'2022

1y

Great and informative share of tips to avoid mistakes with ChatGPT! Bernard Marr
