Privacy in the era of Generative AI – Will we see an increase in AI walled up within the corporate gardens?
Neha Bajwa
Product Marketing Executive | Go-To-Market Strategy Expert and Advisor | B2B Enterprise Software/SaaS Technology + Marketing Evangelist| Advisor
ChatGPT and generative AI are trending – it is all that any of us talk about these days. Are you using it? Are you allowing your employees to use it? Which applications have started to include it? The race is on, and software vendors are making a mad dash for it.
So, it is not surprising that when I was presenting at the #cxreimagined event in February, Microsoft's recent announcement of Generative AI capabilities in #vivasales and #dynamics365 was the hot topic of discussion among over 30 senior executives.
There was an active discussion on the privacy and compliance implications – the debate awakened the security geek in me, compelling me to dig deeper.
The exciting part of ChatGPT is that it has finally brought AI to the masses, with even children as young as 8 using it for homework help. As product marketers and engineers, we no longer have to explain AI and what it can do – we only have to say “ChatGPT”.
So it's no surprise that professionals, too, are starting to use it to increase efficiency and productivity. ChatGPT can complete tasks in a fraction of the time it would take a person, such as helping with code, writing emails, articles, and campaigns, and segmenting target audiences.
However, implementing generative AI capabilities in a privacy- and compliance-secure way is crucial. CISOs will not be able to control it or stop its usage. They will have to figure out how to enable employees to benefit from the assistance while ensuring that IP is not leaked.
ChatGPT is an open system, and increasingly I am seeing vendors wall it off for corporations to provide privacy and compliance. This approach ensures that sensitive data is protected and only accessible to authorized personnel. However, creating corporate "walls" limits the amount of data that the model can be trained on, and a model's efficacy scales with the amount of data behind it. GPT-4's size is undisclosed, but its predecessor GPT-3 already had 175 billion parameters and was trained on hundreds of billions of words – where will a single corporation get close to those numbers?
Security and IT will perhaps need to strike a balance: implement strict access controls, encrypt data, and regularly audit the system to ensure it is being used for its intended purpose. They can also implement techniques such as differential privacy, which adds calibrated random noise to query results to protect individual privacy while maintaining overall accuracy. This can help to reduce biases, improve fairness and accuracy, and protect privacy.
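To make the differential privacy idea concrete, here is a minimal sketch of the classic Laplace mechanism in Python. The function names (`laplace_noise`, `dp_count`) and the salary example are my own illustrative choices, not from any particular product; real deployments would use a vetted DP library rather than hand-rolled noise.

```python
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise.

    The difference of two i.i.d. exponential draws with mean `scale`
    follows a Laplace distribution centered at zero.
    """
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def dp_count(values, threshold: float, epsilon: float = 1.0) -> float:
    """Differentially private count of values above a threshold.

    A counting query has sensitivity 1 (adding or removing one person's
    record changes the count by at most 1), so adding Laplace noise with
    scale = sensitivity / epsilon yields epsilon-differential privacy.
    Smaller epsilon means more noise and stronger privacy.
    """
    true_count = sum(1 for v in values if v > threshold)
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical example: how many employees earn over 100k?
salaries = [85, 120, 95, 150, 110, 70]
print(dp_count(salaries, threshold=100, epsilon=1.0))
```

No individual record can be confidently inferred from the noisy answer, yet aggregate statistics over many queries or large datasets stay useful – exactly the trade-off the paragraph above describes.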
It's likely that organizations will adopt hybrid models that enable employees to use generative AI while enforcing the necessary privacy and compliance measures. Services such as Azure OpenAI offer such a balance – they expose the same underlying models while letting organizations bring their own data under enterprise security and compliance controls, so that application vendors can deliver this hybrid model.
The debate has just begun! Italy is the first country to block ChatGPT. It will be interesting to see where this evolves, and whether we will start to see a lot of walls coming up around AI.