My honeymoon with OpenAI's ChatGPT is over
AI-generated image created with OpenAI's DALL-E 2


In my previous post, I discussed the emergence of OpenAI's ChatGPT and Playground and their usefulness to knowledge workers like me. I covered the potential benefits of using AI alongside our daily work, as well as the limitations and ethical implications of relying on AI tools.

I've spent eight weeks using ChatGPT and Playground, and countless hours reading Reddit posts from people finding weird and wonderful things to do with ChatGPT. My journey along the Gartner Hype Cycle with OpenAI's tools has passed the Peak of Inflated Expectations and now sits somewhere between the Trough of Disillusionment and the Slope of Enlightenment.

Gartner's Hype Cycle

In this follow-up post, I will focus on the implications of bias, accuracy and dependency, based on my daily experience of using AI tools. It is essential for businesses and individuals to consider these factors when using AI tools, as they can have a significant impact on outcomes:

  1. Bias: bias is unavoidable when using AI, because the data used to train the GPT-3 model (which only covers content up to Nov 2021) can contain inherent biases and assumptions. This can skew results and lead to inaccurate decisions. To avoid this, it is important to assess the outputs and challenge any biases they may contain - I now find myself challenging my own confirmation bias when reading ChatGPT's results. The biases from DALL-E 2 are sometimes even more worrying: given a gender-neutral prompt to render humans with robots in a particular setting (e.g. a honeymoon), it will assume a gender anyway. Next time ChatGPT gives you a convincing response, tell it that it is wrong - it will often agree with you, and then watch it go down the wrong rabbit holes.
  2. Accuracy: linked to bias, accuracy is also a crucial factor when using AI tools. Although AI can be incredibly helpful for certain tasks, it is not always accurate, because AI models are trained on data sets that can be incomplete or contain errors. It is therefore important to assess the accuracy of the AI's output before relying on it. In the early days, I found myself accepting Power BI DAX code suggestions from ChatGPT only to realise they were incorrect - and when I challenged the inaccuracy, it kept giving me even more incorrect answers. (For other programming languages, such as Python, I have seen noticeably better accuracy.)
  3. Dependency or complacency: finally, it is important to consider one's dependency on AI tools. They can be incredibly helpful, but if you're not careful you can become heavily reliant on them, and if you are not self-aware about the bias and accuracy of the results, the outcome can be the opposite of the productivity and efficiency you were after. To avoid this, it is essential to regularly challenge the results, critically assess whether any biases exist and, more importantly, make sure you're not getting complacent. I've read Reddit posts where users became so complacent about using ChatGPT that they decided to 'quit AI', and another user coined the term 'AI dependency disorder'.
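One lightweight habit that addresses both the accuracy and complacency points: before dropping AI-suggested code into a report or pipeline, pin down its behaviour with a few hand-computed checks. The sketch below uses a hypothetical Python helper of the kind ChatGPT might suggest (a running, year-to-date-style total over monthly figures - the function name and scenario are illustrative, not from any real suggestion); the assertions are the part that catches a subtly wrong suggestion before you rely on it.

```python
# Hypothetical example: a helper ChatGPT might suggest for a running
# (year-to-date style) total over monthly figures. The point is not the
# function itself but the habit of verifying it against known answers.

def running_total(values):
    """Return the cumulative sums of a list of monthly values."""
    totals = []
    acc = 0
    for v in values:
        acc += v
        totals.append(acc)
    return totals

# Sanity checks against hand-computed expectations - the step that
# catches plausible-but-wrong AI suggestions before they reach a report.
assert running_total([]) == []
assert running_total([10, 20, 30]) == [10, 30, 60]
assert running_total([5, -5, 5]) == [5, 0, 5]
print("all checks passed")
```

A few minutes writing checks like these is far cheaper than debugging a dashboard built on a silently wrong calculation.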

In conclusion, AI tools like OpenAI's ChatGPT and Playground can be incredibly useful, but it is important to consider the implications of bias, accuracy, dependency and complacency when using them. By being aware of these issues, regularly assessing the outputs and taking steps to mitigate the risks, businesses and individual users can get the most accurate and unbiased results and ensure these tools are used effectively and responsibly.

Bonus thought-provoker: over dinner late last year, Nick McFarlane posed a question that still sits at the back of my mind: given that ChatGPT's training data has a cut-off of Nov 2021, and much of the new online content generated from late 2022 onwards will itself be AI-generated, how can we ensure that future versions of GPT won't be biased by being trained on data the model effectively produced itself?

#ai #artificialintelligence #chatgpt #openai #playground

Anesh Tailor

Head of Customer Success - Accelerating GRC technology adoption and driving efficiency in risk and compliance programs

1y

This is very insightful. AI - or, for want of a better word, machine learning - will play a pivotal role in organisations' digital transformation. As with any AI/ML, there is a lot of reliance on data quality and the algorithm applied. It will come to a point where the human's role is to assess and validate the outputs produced.


