Personal data protection in the world of ChatGPT
Have you ever wondered how individuals' privacy is protected on the internet? Are you worried that artificial intelligence will one day dominate us completely?
When we take a moment to step back and look at the world, we start to notice and question things, especially when it comes to personal data protection on the internet today. In this article we will share the latest news on what is currently known about the protection of personal data in artificial intelligence systems, and in ChatGPT in particular.
A few days ago, leaders in the technology sector such as Elon Musk (Twitter) and Steve Wozniak (Apple) signed an open letter asking, in short, for the training of artificial intelligence systems to be paused for six months. The signatories call for this kind of experiment to be planned and managed with far greater care, and they warn that technology labs are locked in an out-of-control race that could affect the course of human history.
At the moment, one of the most popular systems is ChatGPT. You have probably already read an article about this intelligent chatbot, launched by the firm OpenAI, and you may even have tried it. Access is very simple: go to chat.openai.com and create an account for the free version. A paid version with more features is also available.
But what is ChatGPT? It is an artificial intelligence language model based on the Generative Pre-trained Transformer (GPT) 3.5 architecture. It can understand and produce human language, and it interacts with users to provide useful, informative answers to a wide range of requests, from book summaries and trip planning to website content, among many other uses.
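For readers curious how developers talk to the same GPT-3.5 model programmatically, here is a minimal sketch of a request to OpenAI's Chat Completions API. It assumes the pre-1.0 "openai" Python package and an API key stored in the OPENAI_API_KEY environment variable; the prompt text is only an example, not part of the article.

```python
# Minimal sketch: asking a GPT-3.5 model for a book summary via OpenAI's
# Chat Completions API. Assumes the pre-1.0 "openai" Python package and an
# API key exported as OPENAI_API_KEY; details may differ in your setup.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the book 'Don Quixote' in three sentences."},
    ],
)

# The assistant's reply is returned in the first choice of the response.
print(response.choices[0].message["content"])
```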
How can we manage personal data protection in ChatGPT?
In 2021, the Organization of American States (OAS) updated its set of principles on the processing of personal data. The update reflects changes in the digital environment and new ways of collecting, using, and transferring personal data. Companies building artificial intelligence products are taking note of this demand for the protection of personal information.
At the end of April, OpenAI introduced the ability to disable chat history, giving users more control over how their data is handled. Conversations started while chat history is disabled are no longer used to train and improve the system; the choice is left to each user. For those who disable chat history, new conversations are retained for 30 days and then permanently deleted.
The firm also announced that it is working on a new ChatGPT Business subscription. The product is aimed at professionals who need more control over their data, as well as businesses looking to manage their end users. It will follow OpenAI's API data-usage policy, meaning end-user data will not be used to train OpenAI's models by default. ChatGPT Business is expected to become available in the coming months.
Finally, ChatGPT users can export the personal data the system has stored about them. The export can be requested from the settings menu, and users receive a file with their conversations and all other relevant data by email.
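For those who want to look inside the exported file themselves, the short sketch below shows one way to list the conversations it contains. It assumes the export arrives as a ZIP archive containing a conversations.json file whose entries carry a title and a creation time; the actual file names and structure are OpenAI's and may differ.

```python
# Minimal sketch: listing conversations from a ChatGPT data export.
# Assumption: the export is a ZIP archive that contains a file named
# "conversations.json" holding a list of conversation objects, each with
# a "title" and a "create_time". The real structure may differ.
import json
import zipfile

EXPORT_PATH = "chatgpt-export.zip"  # hypothetical file name

with zipfile.ZipFile(EXPORT_PATH) as archive:
    with archive.open("conversations.json") as f:
        conversations = json.load(f)

print(f"Found {len(conversations)} conversations in the export:")
for convo in conversations:
    # Fall back to placeholders if a field is missing.
    title = convo.get("title") or "(untitled)"
    created = convo.get("create_time", "unknown date")
    print(f"- {title} (created: {created})")
```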
Conclusion
Reflecting on the amazing potential of artificial intelligence, we can't help but be inspired by its capacity to shape our future. Alongside the many opportunities, we must make sure this technology is used in a way that protects personal data and privacy. We owe it to ourselves, and to those around us, to stay informed about how AI may affect our lives in the coming years. How do you think it could be used most effectively in the future? Are there any concerns or areas for improvement you would highlight when it comes to personal data security and AI? Let us know in the comments!
Written by Luis Diaz
Follow us on Instagram
Visit our Blog