Generative AI Systems and Cyber Security


About the author

Nisha Iyer, Senior Data Analyst, is passionate and curious about data and its applications. She has hands-on experience in applied data science and has worked on large-scale data science projects.

ChatGPT, Bard, and other generative AI tools have been in the news for quite some time. Employees across industry sectors are feeding confidential and sensitive data into AI platforms, including ChatGPT, which raises serious security concerns and fears. This information is not just contextual text and graphics; it also includes details such as a company's vision, roadmap, and architecture strategy, customer data, and financial accounting figures.

In a recent report, one data security service detected and blocked requests to input data into ChatGPT from 4.2% of the 1.6 million workers at its client companies, because of the risk of leaking confidential information, client data, source code, or regulated information to the LLM.
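
To make that kind of screening concrete, here is a minimal, illustrative sketch of how a prompt-level data loss prevention (DLP) check might work. This is not the approach any particular vendor uses; the pattern names and regular expressions are hypothetical and far simpler than production detectors.

```python
import re

# Hypothetical detectors a prompt-level DLP filter might apply before
# a request leaves the company network; real products use far richer
# classifiers than these illustrative regular expressions.
SENSITIVE_PATTERNS = {
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key)[-_][A-Za-z0-9]{16,}\b"),
}

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, names of matched patterns) for one prompt."""
    hits = [name for name, rx in SENSITIVE_PATTERNS.items() if rx.search(prompt)]
    return (not hits, hits)

allowed, hits = screen_prompt("Summarize account 123-45-6789 for the board.")
if not allowed:
    print(f"Blocked before reaching the LLM: matched {hits}")
```

A check like this can also power the "in-context education" described below: instead of silently blocking, the tool can explain to the employee, at the moment they paste data, why the prompt was stopped.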

October 2023 marked global Cybersecurity Awareness Month. Alongside general cybersecurity awareness, it is equally important for organizations to train employees on using ChatGPT, Bard, and other AI tools. There are two forms of education: classroom education, such as the trainings given when onboarding an employee, and in-context education, delivered at the moment someone actually tries to paste data. Both are essential for training all personnel who use AI platforms.

Education could have a big impact on whether data leaks from a specific company, because a small number of employees are responsible for most of the risky requests.

A common concern is that an LLM might 'learn' from your prompts and offer that information to others who query for related things. There's also the risk of the technology pulling data that it shouldn't.

There is a fear that LLMs could spread or normalize misinformation. For example, bad actors can use generative AI to write fake news stories quickly. And since the model is trained on online data, including fake data, fake news can be inserted into AI-generated responses and presented as trustworthy information.

Businesses and consumers can start by limiting the data these AI-based tools can access. These tools will only be as informed as the data that feeds them, so sensitive or personal data should be encrypted to ensure it stays protected. At the same time, companies are responsible for raising awareness of new fraud risks, such as criminals impersonating them to trick customers into giving away personal information.
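
As one illustration of keeping sensitive fields protected before they ever reach an AI pipeline, here is a small sketch using symmetric encryption via the Python cryptography library's Fernet API. The record fields and key handling are assumptions made for the example; in practice, keys belong in a secrets manager, not in application code.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# A minimal sketch: encrypt a sensitive field so that any pipeline
# feeding data to an AI tool only ever sees ciphertext.
key = Fernet.generate_key()  # in practice, fetch from a secrets manager
fernet = Fernet(key)

record = {"customer": "Acme Corp", "account_number": "987654321"}
record["account_number"] = fernet.encrypt(record["account_number"].encode())

# The AI-facing pipeline receives the record with ciphertext only;
# decryption happens solely in systems authorized to hold the key.
plaintext = fernet.decrypt(record["account_number"]).decode()
```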

Companies could very easily wind up with a data privacy nightmare. Businesses implementing LLMs must be vigilant about protecting sensitive data and ensure that robust access management protocols are respected from the get-go.
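
A hedged sketch of what "access management from the get-go" could look like in code: a role-based permission check that runs before any data is sent to an LLM. The roles and permission names here are invented for illustration and do not come from any specific product.

```python
from dataclasses import dataclass

# Hypothetical role-to-permission mapping for LLM usage.
ROLE_PERMISSIONS = {
    "analyst": {"llm:query_public"},
    "data_steward": {"llm:query_public", "llm:query_internal"},
}

@dataclass
class User:
    name: str
    role: str

def authorize_llm_call(user: User, permission: str) -> None:
    """Raise PermissionError unless the user's role grants the permission."""
    granted = ROLE_PERMISSIONS.get(user.role, set())
    if permission not in granted:
        raise PermissionError(f"{user.name} ({user.role}) lacks {permission}")

authorize_llm_call(User("dana", "analyst"), "llm:query_public")      # allowed
# authorize_llm_call(User("dana", "analyst"), "llm:query_internal") # raises
```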

In addition, OpenAI and other companies are working to limit LLMs' access to personal information and sensitive data: asking for personal details or sensitive corporate information currently leads to canned statements from ChatGPT declining to comply. It is also important to learn about the different AI standards, regulations, laws, and rules.

One of the latest is the US Biden Administration's Executive Order on AI and enhancing cybersecurity.

The Executive Order on AI highlights the importance of collaboration, standards development for AI-generated content, and responsible government use of AI. AI companies should prioritize security measures, stay informed about compliance requirements, and invest in AI-specific security solutions. Detecting AI-generated attacks is crucial for maintaining the security of AI systems and safeguarding against potential threats. By aligning with the objectives of the Executive Order and implementing robust security practices, AI companies can contribute to the safe and responsible deployment of AI in the digital landscape.

Similarly, here in Canada, the Government of Canada is committed to ensuring that Canada's legislative frameworks remain responsive to modern realities. That is why the government proposed the Artificial Intelligence and Data Act as part of Bill C-27, and it continues to consider how other legislative frameworks may need to be updated to address the changing technological landscape, including the rapid advancement of artificial intelligence (AI) technologies.

Here is the statement from The Honourable François-Philippe Champagne, Minister of Innovation, Science and Industry:

“As developments in AI intensify, our government is seizing every opportunity to stimulate innovation and the possibilities offered by this revolutionary technology. Canada’s copyright framework needs to remain balanced and able to facilitate a functional marketplace, and that’s why we’re studying the best way forward to protect the rights of Canadians, while ensuring the safe and ethical development of AI.”

Along with companies, governments around the world are taking measures for the safe and preventive use of AI services and tools, and are coming up with codes of ethics and guardrails. NIST, SOC 2, and MITRE have updated their respective frameworks and practices on the use of AI.

Although generative AI systems such as ChatGPT, DALL-E 2, and Midjourney have captured the world's attention in recent months, it is up to us as individuals to know our rights and responsibilities.

Learn more about Gen AI and cybersecurity in 2024 through the Canada DevOps Community of Practice.

