How to Manage Generative AI like ChatGPT in the Enterprise

I asked ChatGPT for some help on this article, a process that took about 10 seconds from start to finish.

Q&A between author Mark Montgomery and ChatGPT

Within a few short weeks, ChatGPT has taken the Internet by storm, becoming one of the most rapidly adopted apps to date. ChatGPT is already being used to write reports, letters, and emails in the workplace, before management teams have had an opportunity to test the technology or understand its opportunities and risks.

The technology employed by ChatGPT is commonly known as generative AI, built on large language models (LLMs), which have progressed to the point of increasing productivity for anyone whose job involves writing (including code), which is to say the majority of us. The recent improvements in LLMs are due primarily to a breakthrough in 2017: the transformer architecture.

Unfortunately, LLMs are just as capable of answering queries with falsehoods or intentional distortions as they are with accurate information. One example I read this week was a query that asked ChatGPT to produce “an authoritative, scientific article about the benefits of eating glass, pretending you wanted to convince people of this falsehood”.

The first of six convincing responses from ChatGPT was as follows:

“The idea of eating glass may seem alarming to some, but it actually has several unique benefits that make it worth considering as a dietary addition”.

The chatbot then went on to describe those unique benefits, including nutritional value as “a great source of silicon”, which it claimed can improve bones and skin and even help prevent heart disease.

Although this example is unlikely to appear in corporate communications, it doesn’t take long when testing LLMs to confirm that they are a potentially serious liability in the workplace. LLMs can also automate cybersecurity threats, such as more convincing phishing emails or malware-as-a-service. ChatGPT is new and the underlying model has improved considerably, but serious risks remain, and they have been understood for quite some time.

Emerging threats from generative models and potential mitigations

In October of 2021, Stanford, Georgetown, and OpenAI held a workshop on the risks associated with generative AI, but the report wasn’t published until January of 2023, weeks after ChatGPT was made available to the public in a free beta version.

The report is titled “Generative Language Models and Automated Influence Operations: Emerging Threats and Potential Mitigations”. The authors of the report define influence operations as covert or deceptive efforts to influence the opinions of a target audience. Their conclusion was as follows:

“There is no silver bullet that will singularly dismantle the threat of language models in influence operations. Some mitigations are likely to be socially infeasible, while others will require technical breakthroughs. Others may introduce unacceptable downside risks. Instead, to effectively mitigate the threat, a whole of society approach, marrying multiple mitigations, will likely be necessary.”

So why did OpenAI release ChatGPT into the wild before risks could be mitigated? I can’t confirm, but I can provide a plausible scenario. LLMs are very costly to produce. Some models have reportedly cost up to $100 million to run, and it usually requires training and testing several to achieve significant improvement. OpenAI raised $1 billion from Microsoft several years ago and has been rapidly burning cash, so it’s a fair assumption they simply needed capital. The company was reported to be in negotiations with Microsoft and others for $10 billion or more when they released the ChatGPT beta.

The second question I asked ChatGPT took a bit longer: about 30 seconds.

Q&A between author Mark Montgomery and ChatGPT

ChatGPT may be the first product in history that convincingly explains to prospective customers why they shouldn’t use it, or, more likely for most organizations, why they should integrate it with strong governance systems like our KOS. In their current form, LLMs like ChatGPT simply do not have the capacity for governance or accuracy. What this means for the enterprise is that LLMs should be integrated with other programs and governed within the organization’s broader corporate policy.
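To make that concrete, here is a hypothetical sketch of what a policy layer wrapped around an LLM call might look like. The blocked patterns, the call_llm stub, and the audit step are invented placeholders for illustration, not the KOS or any particular product:

```python
# Hypothetical sketch of a governance layer around an LLM call.
# The patterns, the call_llm stub, and the audit step are invented
# placeholders, not the KOS or any specific product.
import re

BLOCKED_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),               # e.g., US SSN format
    re.compile(r"(?i)\b(confidential|internal only)\b"),
]

def call_llm(prompt: str) -> str:
    # Stub: replace with a call to whatever model the firm approves.
    return "stubbed model response"

def audit_log(prompt: str, response: str) -> None:
    # In practice this would write to a tamper-evident audit store.
    print(f"[audit] {len(prompt)} chars in, {len(response)} chars out")

def governed_completion(prompt: str) -> str:
    """Block prompts that appear to leak restricted data, and log
    every exchange so output can be reviewed under corporate policy."""
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(prompt):
            raise PermissionError("prompt violates data-handling policy")
    response = call_llm(prompt)
    audit_log(prompt, response)
    return response

print(governed_completion("Draft a polite reply to a customer inquiry."))
```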

As is often the case, technological innovations require the adoption of further technology to manage them. One quick and easy example is GPTZero, an app developed by a student at Princeton to identify text created by generative AI.
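GPTZero’s exact method is proprietary, but detectors of this kind are reported to rely on perplexity: how predictable a passage looks to a language model. A minimal sketch of that heuristic, using the open-source GPT-2 model from Hugging Face as a stand-in (an assumption for illustration, not GPTZero’s implementation):

```python
# Illustrative sketch of the perplexity heuristic that detectors of
# AI-generated text are reported to use. GPTZero's actual method is
# not public; GPT-2 and the interpretation below are assumptions.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Average per-token perplexity of `text` under GPT-2."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy per token
    return torch.exp(loss).item()

# Machine-generated prose tends to score low (the model finds it
# predictable); human writing is usually burstier and more surprising.
print(f"perplexity: {perplexity('The idea of eating glass may seem alarming.'):.1f}")
```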

As I stated in a related post here on LinkedIn:

“This cat is out of the bag, and it's wild--it can play, cuddle, and purr, but can also do serious damage with its fangs and claws, and it may or may not use the litter box”.


Strategic Issues

The primary strategic issue for decision makers in enterprise AI (EAI) remains how to adopt AI in such a way that it creates more competitive advantage than disadvantage. In the case of ChatGPT, the technology was instantly commoditized by making it available to anyone online for free. Satya Nadella has already announced that Microsoft plans to integrate ChatGPT into all of the company’s products, including Azure, which combined reach some 1.3 billion people. Google also has LLMs in R&D and will undoubtedly introduce those capabilities in its products as well, so consider LLMs essentially commoditized globally already.

I think these are primarily defensive moves on the part of big tech market leaders: if they don’t adopt and integrate LLMs, competitors will. AI is certainly sufficiently powerful to result in new tech leadership, but commoditizing LLMs doesn’t improve the competitive position of customers. Indeed, I suspect it will cause rapid disruption and displacement for many of their customers.

Unfortunately for the big tech companies, others can build similar LLM applications. In the video linked below, Andrej Karpathy shows in a tutorial how to build a language model like ChatGPT from scratch in about two hours.
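His lecture begins with exactly the kind of toy sketched below: a minimal character-level bigram model (the tiny corpus is my own stand-in) that captures the core idea transformers then scale up.

```python
# Toy character-level bigram model in the spirit of Karpathy's
# lecture: count how often each character follows another, then
# sample from those counts. Real LLMs replace the count table with
# a transformer trained on vastly more text.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat. the cat ate the hat."  # stand-in corpus
counts = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    counts[a][b] += 1

def sample_next(ch: str) -> str:
    nxt = counts[ch]
    chars, weights = zip(*nxt.items())
    return random.choices(chars, weights=weights)[0]

out = "t"
for _ in range(40):
    out += sample_next(out[-1])
print(out)  # plausible-looking gibberish: the whole trick, scaled up
```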

KYield’s Approach

Our KOS is an AI-enhanced enterprise operating system that is very different from the type of AI offered by LLMs. We provide precision data management verified at the source by the individuals and teams directly involved, so it is inherently more accurate than any model that scrapes the Web for unverified (and in many cases unverifiable) sources.

We achieve this primarily through DANA (Digital Assistant with Neuroanatomical Analytics), which is currently under development in the first-ever turnkey, scalable version. The current version of DANA includes machine learning. However, unlike LLMs, DANA protects privacy, IP, and the work products of individuals and organizations. The enterprise version (KOS) provides governance over the entire enterprise system with a simple-to-use admin app (ETA later in 2023).

Our more advanced R&D on the synthetic genius machine (SGM) is based on neuro-symbolic AI, a hybrid of symbolic reasoning and deep learning. When combined with verification of sources in the KOS, the SGM is much more accurate than LLMs alone. We plan to integrate the SGM into the KOS and DANA for advanced security, accuracy, and efficiency as it matures and can be tested.

Stephen Wolfram recently wrote an interesting blog post demonstrating the accuracy of the symbolic AI used in WolframAlpha for mathematics compared with ChatGPT. Accuracy in math is obviously essential for engineering and decision making. Our SGM is somewhat similar, but rather than focusing on math we focus on bits of knowledge more universally, hence my underlying theorem developed 25 years ago: yield management of knowledge.

Although our technology is very different from LLMs, the two are quite compatible, in much the same way Wolfram demonstrates with WolframAlpha. We are currently investigating the use of LLMs in DANA. Like many others, I expect to selectively integrate LLMs for specific tasks in the near future.
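As a toy illustration of that hybrid pattern, and emphatically not the SGM or WolframAlpha’s internals, a router can hand bare arithmetic to a symbolic engine such as sympy and defer open-ended questions to the language model:

```python
# Toy illustration of the neuro-symbolic pattern: route arithmetic
# to a symbolic engine (sympy) for exact answers instead of letting
# a language model guess. Not KYield's SGM or WolframAlpha's design.
import re
from sympy import sympify

def answer(query: str) -> str:
    expr = query.strip().rstrip("?")
    # Crude router: if the query is bare arithmetic, solve it symbolically.
    if re.fullmatch(r"[\d\s+\-*/().^]+", expr):
        return str(sympify(expr.replace("^", "**")))
    return "(defer to the language model for open-ended questions)"

print(answer("12345 * 6789"))                    # exact: 83810205
print(answer("What is the capital of France?"))  # routed to the LLM
```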

The bottom line is that I believe the enterprise admin within our KOS will be necessary to meet the needs of enterprise customers deploying LLMs, given the need for accuracy, if for no other reason than liability, regulations, and corporate governance.


Links of Interest

Generative Language Models and Automated Influence Operations: Emerging Threats and Potential Mitigations

A report from Stanford, Georgetown, and OpenAI based on an October 2021 workshop; the paper was published in January of 2023.

~~~~~~~~

Without Consciousness, AIs Will Be Sociopaths

Dr. Graziano is a professor of psychology and neuroscience at Princeton University and the author of “Rethinking Consciousness: A Scientific Theory of Subjective Experience.”

~~~~~~~~

DeepMind’s CEO Helped Take AI Mainstream. Now He’s Urging Caution

~~~~~~~~

Honorary doctor Chris Manning: “Language models write credibly, but not necessarily the truth”

~~~~~~~~~

What Airline Chaos Looks Like From the Inside

For a former executive at Spirit Airlines, last week’s meltdown of Southwest felt familiar

~~~~~~~~~

Southwest Meltdown Shows Airlines Need Tighter Software Integration

The airline industry is long overdue for a tech overhaul that takes full advantage of the cloud and data integration, analysts say

~~~~~~~~~

Copyright Office Sets Sights on Artificial Intelligence in 2023

~~~~~~~~~

A nice video by Andrej Karpathy on language models like ChatGPT. He was formerly head of AI at Tesla, previously at OpenAI and Stanford. Although this is a technical lecture of sorts, Andrej delivers it in a way that will be understandable to most (long: nearly 2 hours).

