Professionals should familiarise themselves with the use of (AI) technology
Over the last few weeks, a lot of buzz has been created around the use of AI technology. With the introduction of OpenAI's ChatGPT and later Google's Bard, some state that the future has arrived, whilst others primarily point out the errors these AI systems still make. Apart from this, people discuss the downsides of AI systems and why caution is needed when using them (professionally).
In this article I will introduce ChatGPT (knowing that there are a lot of other AI tools, but ChatGPT is what the fuss is about) and summarise (part of) the above-mentioned discussions, primarily to make a completely separate point:
In order to stay relevant, professionals should familiarise themselves with the use of AI.
What is ChatGPT?
ChatGPT is a language model developed by OpenAI, built using deep learning techniques based on the transformer architecture. It is trained on a large corpus of text data to generate human-like responses to text-based inputs. It can perform a variety of language tasks such as answering questions, generating text, translating languages, and more.
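For readers who want to experiment beyond the chat interface, the same family of models is also exposed through OpenAI's API. The snippet below is a minimal sketch in Python, assuming the official openai package (version 1.0 or later) and an OPENAI_API_KEY set in your environment; the model name and prompt are illustrative, not a recommendation.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name; use whatever your organisation has approved
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain the transformer architecture in two sentences."},
    ],
)

print(response.choices[0].message.content)

Treat the output exactly as described above: a plausible, human-like response that still needs your professional judgement before you rely on it.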
ChatGPT, or AI in general, does not have 'human-like' intelligence (yet)
It is important to realise that ChatGPT is currently 'just' that: a language model trained to generate human-like responses to text-based inputs. It is not always correct, it is not without bias, it does not decide whether its output is relevant in your context, it is not a human, nor does it have 'human-like' intelligence.
What are the benefits of using ChatGPT, or any other (AI) tool?
In my opinion, ChatGPT can help you investigate certain (new) topics, create text outlines for you to build on, write poems or LinkedIn posts, review texts you wrote (be cautious here), and a lot more. I've used it to explain new terminology to me, or to put things into a certain perspective. I've also used it to write paragraphs of text, but I am mainly training myself to use the system better and get more relevant answers, quicker, and to discuss the impact of ChatGPT on my profession with my peers.
AI, or technology in general, can be used to make your life easier or more efficient, to increase quality, or just to have fun. At the time of writing, there are AI tools that can help you create images that can be used freely, create speech from text, compose your own music or create narrative-driven presentations, including visuals, and a lot more.
What should be considered before using ChatGPT, or any other (AI) tool which is generally available to the public?
I am an advocate for using new technologies. I would be bad at my job if I weren't. But the more I read about ChatGPT, and the more I use it, the more I come to the conclusion that there are some 'rules of engagement'.
Make a conscious decision to use publicly offered AI in a professional context, and write internal guidance for its use before doing so.
So, what is my point about professionals?
When I started conversations about the professional use of ChatGPT, we came to realise that there are some rules of engagement, of which I have just discussed a couple. Having said that, I strongly believe that any professional should familiarise themselves with new technologies like ChatGPT. Professionals who use AI, or technology in general, will have huge benefits over those who do not familiarise themselves with its use, or do not adopt it in time.
By the time you can effectively work with a system like ChatGPT, or have discussed how your organisation can use the system (and under which conditions), new technologies will have emerged. Continuously adopting these technologies and consciously incorporating them into your ways of working greatly improves your professionalism.