ChatGPT: A Startup Friend or Foe?
Navigating Data Privacy in the Startup Ecosystem


In today’s digitized world, everyone aspires to grow by adopting digital means and emerging technologies. As of 30 June 2024, the DPIIT has recognised 1,40,803 entities as startups[1]. Since the launch of the Startup India initiative in 2016, India has risen to third place in the global startup ecosystem, which has witnessed exponential growth with more than 100 unicorn startups. Most of these are technology companies that leverage emerging technologies such as Artificial Intelligence (“AI”), Machine Learning, and Big Data to stay competitive. Many applications are AI-based, and AI-powered devices now feature in our day-to-day lives. AI is data-thirsty: data is its fuel. ChatGPT is one such AI application and has created a great deal of noise, touching every aspect of our lives.

ChatGPT is an artificial intelligence (AI) chatbot built on GPT, which stands for Generative Pre-trained Transformer. It uses natural language processing to generate human-like responses. ChatGPT's responses are based on the data it has been trained on, sourced from a wide range of materials such as websites, articles, blogs, and books. ChatGPT saves user data, prompts, and queries, collects information about the user, and generally retains all text input to it. Its success and improving functionality depend on training on ever more new data. If you have ever posted a blog, a product review, or a comment on an article online, it is quite possible that this information has been consumed by ChatGPT and used in its responses. User input is a major source of data for ChatGPT: the larger and more diversified the data, the better its results.

For any organisation, be it large, medium, or a small startup, technological advantage can be a key differentiator. More and more organisations are adopting AI and encouraging their employees to use ChatGPT, and employees at startups use it to improve productivity as its benefits become increasingly recognised. But while ChatGPT promises many advantages, it comes with its own challenges, and one of the primary concerns with generative AI models like ChatGPT is data security. Startups are given information by their customers for the provision of various services. Many customers lack technical expertise in data security; they rely on the service provider to secure their data and to use it only for intended, lawful purposes. Since much of the information customers provide can be proprietary and confidential, building trust through robust data security mechanisms is essential in an increasingly competitive landscape.

Using ChatGPT requires exchanging data and information with it as input. This input can be a prompt or other data to be analysed to produce a response. Data and information shared with ChatGPT can become part of its dataset and be used in its future responses, since ChatGPT's answers are based on its training data and other information available on the internet. Additionally, ChatGPT collects information about its users, their behaviour, and their personal information. These inputs are used to train the tool, and generative models are good at memorising things.

As reported by Dark Reading, employees are submitting sensitive business data and privacy-protected information to large language models (LLMs) such as ChatGPT, raising concerns that artificial intelligence (AI) services could be incorporating the data into their models and that the information could be retrieved later if proper data security isn't in place for the service[2]. Data security service Cyberhaven detected and blocked requests to input data into ChatGPT from 4.2% of the 1.6 million workers at its client companies because of the risk of leaking confidential information, client data, source code, or regulated information to the LLM. In one case, an executive cut and pasted the firm's 2023 strategy document into ChatGPT and asked it to create a PowerPoint deck. In another case, a doctor input his patient's name and medical condition and asked ChatGPT to craft a letter to the patient's insurance company.

According to recent research, sensitive data makes up 11% of what employees put into the system, including personally identifiable information (PII) and protected health information (PHI)[3].

Employees of Samsung Electronics are not permitted to use ChatGPT or any other AI-powered chatbots, a step the company took after discovering that an engineer had unintentionally uploaded sensitive internal source code to ChatGPT. According to The Wall Street Journal, Apple has prohibited its staff from using ChatGPT and other AI-powered tools, such as GitHub's Copilot, which assists developers in writing code, citing concerns over these platforms' data-handling practices. Amazon, after finding instances of ChatGPT responses that mirrored internal Amazon data, barred employees from providing any code or private information to OpenAI's chatbot.

JPMorgan Chase severely restricted the internal use of ChatGPT to avoid potential regulatory pitfalls over the sharing of sensitive financial information with a third-party platform.

Bank of America, Citigroup, Deutsche Bank, Wells Fargo, and Goldman Sachs have also banned the use of AI chatbots by staffers[4].

According to a study published by BlackBerry Limited, 75% of businesses globally are either banning ChatGPT and other generative AI apps from their workplaces or are considering doing so. These decisions are driven by risks to data security, privacy, and corporate reputation: 61% of those implementing or considering bans stated that the measures are meant to be long-term or permanent, and 83% voiced concerns that unsecured apps pose a cybersecurity threat to their corporate IT environment[5].

A common question arises: what happens to my information once it is given to ChatGPT? ChatGPT is an AI chatbot developed by OpenAI. As discussed above, data or information shared with ChatGPT can be used to train the chatbot and improve its performance; AI needs ever more data to train on, and user input is one of its important sources. OpenAI's ChatGPT FAQ states: "We are not able to delete specific prompts from your history. Please don't share any sensitive information in your conversations."

ChatGPT also gives a disclaimer: "It's crucial to be cautious and avoid sharing any sensitive, personally identifiable, or confidential information while interacting with AI models like ChatGPT."

Ben Van Enckevort, Chief Technology Officer at Metomic, says, "Whilst AI progress (ChatGPT et al) is an extremely exciting advancement in tech, managers should be aware of how their teams are using them - particularly with regard to the data that's been shared with these services. It's another factor security teams will need to take into consideration when they're thinking about their data security strategy. The rapid pace of change also means security professionals will need to be on the ball when it comes to keeping up with the latest threats."[6]


Major risks of using ChatGPT in an organisation:

  1. If a ChatGPT user account is hacked, the account's entire ChatGPT history is exposed and can be misused.
  2. An organisation's reputation is at risk, and customer trust may be lost, if customer data is shared with ChatGPT.
  3. Organisations may suffer irreparable harm and loss of competitive advantage if confidential information such as trade secrets or business strategies is shared with ChatGPT.
  4. If personally identifiable information is shared with ChatGPT, it may result in a breach of privacy laws such as the Digital Personal Data Protection Act, the GDPR, or the IT Act.
  5. ChatGPT is not error-free and its accuracy is not guaranteed; it produces results based on the data it has been trained on, and users rely on its output at their own risk. If the training data is biased, ChatGPT can produce biased and discriminatory results.
  6. ChatGPT's conversational nature can facilitate social engineering attacks.
  7. There is a risk that ChatGPT may retain the data given to it as input.
  8. The reasoning behind ChatGPT's decisions and results is difficult to know because of its black-box algorithms.


Way forward

Collecting, analysing, storing, and using data calls for robust data protection and privacy measures, since there is always a risk of data theft, misuse, identity theft, or unwanted disclosure. Recognising the risks associated with data misuse, states have implemented data protection laws. India recently passed the Digital Personal Data Protection Act, 2023. The Act imposes role-based obligations on Data Fiduciaries, Data Processors, and Significant Data Fiduciaries, and confers corresponding rights on Data Principals. A startup needs to analyse which category it falls under, and what data it collects and processes, so that measures such as data protection, consent management, data localisation, and the appointment of a Data Protection Officer can be adopted. Startups also need to establish comprehensive guidelines on how to use this tool responsibly, taking into account the requirements of applicable data privacy laws. As the tool continues to evolve, so should the measures and policies around data security. By evaluating and understanding the potential risks and keeping security measures updated, organisations can take advantage of this technological advancement while minimising the risks it poses. Another option is to develop an in-house chatbot to address data security concerns: while this can ensure data security and provide tailored responses, it brings its own challenges in terms of cost and the data required to train it.
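As a practical complement to the guidelines discussed above, here is a minimal, illustrative sketch in Python of a pre-submission check that flags common PII patterns before a prompt is sent to a public chatbot. The pattern set and function names are assumptions for illustration, not a complete data-loss-prevention solution:

```python
import re

# Illustrative patterns only; a real DLP tool would use far broader coverage
# (names, addresses, account numbers, source code, health records, etc.).
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{10}\b"),                      # 10-digit mobile number
    "aadhaar": re.compile(r"\b\d{4}\s?\d{4}\s?\d{4}\b"),     # Indian national ID format
}

def find_pii(prompt: str) -> list:
    """Return the names of PII categories detected in a prompt."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(prompt)]

def is_safe_to_send(prompt: str) -> bool:
    """Block the prompt (return False) if any PII pattern matches."""
    return not find_pii(prompt)
```

A check like this could run in a browser extension or proxy so that flagged prompts are held back and the employee is warned before anything leaves the organisation's perimeter.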


Conclusion

Undoubtedly, ChatGPT promises unprecedented benefits across fields, but at the same time it is vulnerable. Though digital technologies and data are enablers of startup growth, careless use can create risk for organisations, and there is always a risk when employees share data in a non-secure environment. While ChatGPT is convenient and efficient thanks to its ability to generate quick responses, it poses significant risks as well. Employees everywhere are using it for faster turnaround, and companies are advocating its rapid adoption for various assignments. When employees share sensitive data with ChatGPT, it may amount to a security breach and may also violate the Digital Personal Data Protection Act, 2023. The penalties imposed by privacy laws are severe and can strike at the very root of a startup. Organisations need to strike the right balance so that they can leverage technological advancement without compromising data security. Employees should be educated about the risks involved through regular training and awareness, and taught to refrain from, or be careful about, sharing sensitive information on such public platforms. It is the collective responsibility of the organisation and its employees to ensure that the data they share with ChatGPT does not put the organisation at risk.

Since the stakes are high, it needs to be accorded priority.


CS- Krishan Kumar


[1] https://pib.gov.in/PressReleaseIframePage.aspx?PRID=2043805

[2] https://www.darkreading.com/cyber-risk/employees-feeding-sensitive-business-data-chatgpt-raising-security-fears

[3] https://www.metomic.io/resource-centre/is-chatgpt-a-security-risk-to-your-business

[4] https://www.forbes.com/sites/siladityaray/2023/05/02/samsung-bans-chatgpt-and-other-chatbots-for-employees-after-sensitive-code-leak/

[5]https://www.prnewswire.com/news-releases/75-of-organizations-worldwide-set-to-ban-chatgpt-and-generative-ai-apps-on-work-devices-301894155.html

[6] https://www.metomic.io/resource-centre/is-chatgpt-a-security-risk-to-your-business
