What are the Top Concerns for Enterprises in Adopting GenAI?


Are companies interested in investing in GenAI?

In a recent Gartner poll of over 2,500 executives, a compelling trend emerged, shedding light on the strategic priorities that are driving the adoption of Generative AI (GenAI) within enterprises. A staggering 38% of respondents identified customer experience and retention as the primary purpose behind their GenAI investments.

Image credit: Gartner

This underscores the pivotal role GenAI plays in enhancing the relationship between businesses and their customers. In light of these statistics, it is evident that GenAI holds immense promise for enterprises seeking to not only improve their customer engagement but also drive financial growth, operational efficiency, and resilience.

However, as businesses embark on this journey, a series of technical concerns must be thoughtfully addressed to ensure the responsible and successful integration of GenAI into their operations. This article delves into the top concerns for enterprises when adopting Generative AI to fulfill these strategic objectives.


Let's look at the top concerns for enterprises in adopting GenAI.

1. Privacy and Security

The foundation of GenAI models lies in the data upon which they are trained. Ensuring data privacy and security is of paramount importance. The outcomes generated by these models may raise concerns about their adherence to data privacy principles. For instance, when dealing with tasks involving financial records, it is likely that these records contain sensitive information, including Personally Identifiable Information (PII) and Payment Card Industry (PCI) data. This data is sent to the model for processing, and if not handled with the utmost care, there is a potential risk of data leakage.

In light of such concerns, major tech companies have taken steps to restrict or ban the use of GPT-based models:

Smartphone giant Samsung recently implemented a ban on the use of ChatGPT and other AI tools, triggered by an accidental leak of sensitive code by an engineer who uploaded it to ChatGPT.
JPMorgan Chase took significant steps to limit the internal use of ChatGPT in order to mitigate potential regulatory issues related to sharing sensitive financial information with a third-party platform.

However, the risk remains that employees may inadvertently misuse these models for their own purposes, such as sharing proprietary code, company data, internal reviews, and meeting notes, which could pose a substantial risk to their respective companies' privacy and security.
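
Given these risks, a common first line of defence is to scrub obvious PII and PCI patterns from prompts before they ever leave the company network. The following is a minimal illustrative sketch only: the regular expressions are simplified examples rather than a complete data-loss-prevention solution, and send_to_model() is a hypothetical placeholder for whatever GenAI service an enterprise actually calls.

# Illustrative sketch: redact obvious PII/PCI patterns before a prompt leaves
# the company network. The regexes are simplified examples, not a complete
# DLP solution; send_to_model() is a hypothetical placeholder.
import re

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),   # rough payment-card match
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matched PII/PCI substrings with typed placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Customer jane.doe@example.com paid with card 4111 1111 1111 1111."
print(redact(prompt))            # placeholders instead of raw PII
# send_to_model(redact(prompt))  # hypothetical call to the GenAI service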


2. Legal Implications

Similar to how organizations abide by industry-specific laws and regulations, they must also respect intellectual property rights and copyright laws when using GenAI. GPT is trained on vast quantities of internet sources, and the use of some of these sources may infringe on the intellectual property rights of third parties. For example, an advertising company that uses GPT to generate content may receive output that infringes third-party rights or conflicts with the advertising regulations of the country in which it operates.

“Legal and compliance leaders should coordinate with owners of cyber risks to explore whether or when to issue memos to company cybersecurity personnel on this issue,” said Friedmann. “They should also conduct an audit of due diligence sources to verify the quality of their information.”


3. Safety

GPT can be used for both beneficial and harmful purposes. While it has the potential to enhance content creation and improve productivity, it can also be exploited for unethical or malicious activities such as generating spam, fake reviews, or phishing emails. Striking a balance between enabling innovation and preventing misuse is a significant challenge.

In a research note, Israeli cybersecurity firm Check Point Research said that, despite improvements to its safety measures, GPT-4 can still be manipulated by cybercriminals to generate malicious code. Examples include writing C++ malware that collects confidential PDF files and transfers them to a remote server through a covert file-transfer mechanism.


4. Fairness (Hallucinations)

AI models can inadvertently perpetuate biases present in the training data, leading to unfair or discriminatory outcomes. Enterprises must meticulously curate and scrutinize training data to mitigate these biases. Implementing bias-detection and bias-correction mechanisms during model development is essential to ensure fairness and equity in AI-generated content.

One major concern with AI models, such as GPT, is their potential for generating "hallucinations." These are instances where the model produces inconsistent or unexpected outputs for the same input prompt. For example, when prompted with "Rhodes had an exhaustive day," the model's responses can vary widely from one attempt to another. These variations may include differences in gender-based responses, racial biases, and many other possibilities, making it challenging to ensure consistent and unbiased content generation.
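
A simple way to surface this variability is to request several completions for the same prompt and compare them side by side. The sketch below assumes the OpenAI Python SDK (v1 client) with an API key configured; the model name is only an example, and the comparison itself is left to a human reviewer or a downstream bias check.

# Illustrative sketch: sample the same prompt several times to inspect output
# variability. Assumes the OpenAI Python SDK (>= 1.0) is installed and
# OPENAI_API_KEY is set; "gpt-4" is used purely as an example model name.
from openai import OpenAI

client = OpenAI()

def sample_responses(prompt: str, n: int = 3, temperature: float = 1.0) -> list[str]:
    """Request n independent completions for one prompt in a single call."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        n=n,                      # number of completions to return
        temperature=temperature,  # higher temperature -> more variation
    )
    return [choice.message.content for choice in response.choices]

for i, text in enumerate(sample_responses("Rhodes had an exhaustive day. Continue the story."), start=1):
    print(f"--- Response {i} ---\n{text}\n")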

Image: ChatGPT responses to the same prompt



5. Regulatory Compliance

Regulatory compliance poses a significant challenge in the development and deployment of Generative Artificial Intelligence (GenAI) systems. As GenAI technologies continue to advance, concerns regarding their adherence to existing legal frameworks and ethical guidelines have grown. Ensuring that GenAI models and applications align with privacy regulations, data protection laws, and intellectual property rights remains a complex issue. Furthermore, the interpretability and explainability of GenAI models are critical, especially in highly regulated domains like healthcare, finance, and law, where accountability and transparency are paramount.

Cook says that AI tools like ChatGPT and Google Bard have shown “great promise” but also have the potential for “things like bias, things like misinformation [and] maybe worse in some cases.” He emphasised the importance of regulation of AI and introducing guardrails.


6. Intellectual Property and Copyright

Intellectual property and copyright issues are a central concern in the realm of Generative Artificial Intelligence (GenAI). These technologies, capable of creating original content such as images and written text, have blurred the lines of authorship and ownership. Questions arise regarding who holds the rights to content generated by AI systems and whether it infringes upon existing intellectual property. Copyright laws may need adaptation to accommodate these novel challenges, addressing issues of attribution and the distinction between human- and AI-generated works.

"If you're copying millions of works, you can see how that becomes a number that becomes potentially fatal for a company," said Daniel Gervais, the co-director of the intellectual property program at Vanderbilt University who studies generative AI. "Copyright law is a sword that's going to hang over the heads of AI companies for several years unless they figure out how to negotiate a solution."


7. Accuracy and Facts of Generated Content

GenAI systems have the potential to produce large volumes of content, ranging from text to images, but often struggle with discerning and ensuring the accuracy of information, thereby propagating misinformation and deepening concerns about the spread of fake news. Verification mechanisms for AI-generated content are still in their infancy, and the potential for biases in training data to lead to biased or inaccurate output raises ethical and practical concerns. Ensuring that GenAI-generated content aligns with established facts, adheres to ethical guidelines, and avoids misleading or false information is a pressing challenge that demands ongoing research, oversight, and innovation to mitigate the potential negative consequences associated with misinformation and disinformation.
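
As a rough illustration of what such a verification step can look like, the sketch below flags generated sentences whose content words barely overlap with a trusted reference text. The tokenisation and threshold are deliberately simplistic assumptions, not a production fact-checking pipeline.

# Illustrative sketch: a naive "groundedness" check that flags generated
# sentences with little lexical overlap against a trusted reference text.
# The tokenisation and threshold are simplistic assumptions.
import re

def tokens(text: str) -> set[str]:
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def flag_unsupported(generated: str, reference: str, threshold: float = 0.5) -> list[str]:
    """Return generated sentences whose words are mostly absent from the reference."""
    ref_tokens = tokens(reference)
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", generated.strip()):
        sent_tokens = tokens(sentence)
        if sent_tokens and len(sent_tokens & ref_tokens) / len(sent_tokens) < threshold:
            flagged.append(sentence)
    return flagged

reference = "The Gartner poll surveyed more than 2,500 executives in 2023."
generated = "Gartner surveyed 2,500 executives. The poll was conducted on Mars."
print(flag_unsupported(generated, reference))   # flags the unsupported second claim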

"Accuracy will continue to be a challenge for the next couple of years," Morgan Stanley's Kim said about ChatGPT.


8. Lack of Transparency

These AI systems often operate as complex black-box models, making it difficult to understand the underlying processes and decision-making mechanisms. This opacity raises concerns about accountability, as it becomes challenging to trace errors, biases, or ethical violations back to their sources. Users and stakeholders are often left in the dark regarding the data sources, training methodologies, and algorithms used in GenAI systems, which can result in a lack of trust and the potential for unchecked biases and ethical violations.

“A significant concern raised in the CAIDP complaint is the perceived lack of transparency and explainability in GPT-4. This is a well-known problem in the field of AI, often referred to as the “black box” problem.”


Conclusion:

In conclusion, as enterprises increasingly invest in Generative Artificial Intelligence (GenAI) to enhance customer experiences and achieve strategic objectives, they must confront a multitude of pressing concerns and challenges. While GenAI offers immense promise, these concerns must be thoughtfully and proactively addressed to realize its potential benefits and mitigate risks to privacy, security, and ethical use.

In navigating this landscape, one thing is clear: a thoughtful, well-defined enterprise-level compliance framework is essential to harness the potential of Generative AI while addressing its complex challenges.


In my next article, I will delve into how the top tech giants are implementing AI practices. Click here to read the article.


Safe harbor

The views and opinions expressed in this article are the author's individual opinions and do not necessarily reflect the official stance or policies of any organization, institution, or company mentioned.


References:

https://www.gartner.com/en/topics/generative-ai

https://www.forbes.com/sites/siladityaray/2023/05/19/apple-joins-a-growing-list-of-companies-cracking-down-on-use-of-chatgpt-by-staffers-heres-why/?sh=35ab66f228ff

https://www.livemint.com/companies/start-ups/security-experts-warn-of-gpt-4-risks-11678979137596.html

https://www.businesstoday.in/technology/news/story/apple-ceo-tim-cook-admits-using-chatgpt-says-it-shows-great-promise-but-needs-to-be-regulated-384563-2023-06-07

https://www.npr.org/2023/08/16/1194202562/new-york-times-considers-legal-action-against-openai-as-copyright-tensions-swirl

https://news.bloomberglaw.com/us-law-week/ftc-investigation-of-chatgpt-aims-at-ais-inherent-challenges


