Understanding ChatGPT/Generative AI and the Autonomous Enterprise

The idea of an autonomous enterprise is widely misunderstood. Every time I present on AI and automation, or on AI combined with any other technology, there is genuine fear in the room. Every major technical evolution has brought fear, from the cloud to the internet to steam locomotives! That fear is driven by a lack of clarity about what the "new reality" will look like. I'll get to those realities later in this blog.

AI will be combined with many technologies with the objective of making things work better for humans. As with all technical evolutions, bad actors could create bad outcomes; more on that later, too.

To understand all this, it's important to get the mechanics of AI and generative AI. AI is fueled by a machine. The machine is given instructions on what to look at and a formula (called an algorithm) that it can use to identify things in the data. Different algorithms are built to find different things. An example most of us can recognize comes from retail. The retailer has lots of data on what I have purchased from them, and may have profile information on me, like my birthday and age. I might also share my social feed with the retailer. A simple algorithm could use those data sets, along with time, to propose new purchases: things of the kind I have bought on prior birthdays, or styles similar to outfits in my social posts. There are many variations. The longer the AI runs and the data and algorithm are tuned, the more accurate the results will be. You hear this referred to as "trained models" because the algorithm has been used and tuned until the results are predictable. But this is AI in its most basic form. For years, software vendors have been building solutions with trained AI built in for specific purposes, working against an at least partially known (or expected) data set.
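To make the retail example concrete, here is a minimal sketch of that kind of algorithm. The data and field names are hypothetical, and a real recommender would weigh far more signals; this just shows the "data plus formula" idea:

```python
from datetime import date

# Hypothetical purchase history for one customer: (purchase_date, category).
purchase_history = [
    (date(2021, 6, 10), "running shoes"),
    (date(2022, 6, 8), "running shoes"),
    (date(2022, 11, 2), "jacket"),
]

def birthday_suggestions(history, birthday_month):
    """Rank categories the customer tends to buy in their birthday month."""
    counts = {}
    for purchased_on, category in history:
        if purchased_on.month == birthday_month:
            counts[category] = counts.get(category, 0) + 1
    # Categories that recur around the birthday rank highest.
    return sorted(counts, key=counts.get, reverse=True)

print(birthday_suggestions(purchase_history, birthday_month=6))
# ['running shoes'] -- a repeat birthday-season purchase becomes a suggestion
```

Swap in a different formula (say, matching styles from social posts instead of birthday timing) and you have a different algorithm finding a different thing in the same data.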

That means a company can buy software that already understands what to identify. A good example is IT operations. Companies have lots of software that monitors whether their servers, networks, and applications are running as expected. AI-powered operations software looks for early predictors of failure, network topology problems that lead to slowdowns, and factors like equipment age versus failure rates. The point of these tools is to identify an issue before the failure, rather than the old tooling that just told you there was a failure. This kind of AI is being applied to so many domains and verticals that there is a literal race to drive intelligence into everything.
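As a toy illustration of "predict before the failure" (not any vendor's actual model), here is a sketch that flags a disk trending toward full instead of waiting for the outage alert. The metrics and thresholds are invented for the example:

```python
# Flag a volume whose usage trend will exceed capacity within the horizon,
# using a simple linear fit over recent samples.

def predict_disk_exhaustion(samples, capacity_gb, horizon_days=7):
    """samples: list of (day_index, used_gb), at least two distinct days.
    Returns True if projected usage exceeds capacity within horizon_days."""
    n = len(samples)
    if n < 2:
        return False
    mean_x = sum(d for d, _ in samples) / n
    mean_y = sum(u for _, u in samples) / n
    slope = sum((d - mean_x) * (u - mean_y) for d, u in samples) / \
            sum((d - mean_x) ** 2 for d, _ in samples)
    _, last_used = samples[-1]
    projected = last_used + slope * horizon_days
    return projected >= capacity_gb

# Usage growing ~20 GB/day on a 500 GB volume: flagged days before it fills.
history = [(0, 380), (1, 400), (2, 420), (3, 440)]
print(predict_disk_exhaustion(history, capacity_gb=500))  # True
```

The old tooling fired an alert when used_gb hit capacity; the predictive version raises its hand while there is still time to act.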

Generative AI is different. It's different because any of us can use it. It's not a technical tool, it's not abstract, and it does creative things well, but the big difference is that we are all training it for free every time we use it! You will also hear these tools referred to as LLMs (Large Language Models), which refers to the fact that they use deep learning and large data sets to respond and produce what the user wants. Instead of data scientists training in closed settings on specific outcomes and data, generative AI has exploded as a consumer tool. Everyday uses now include generating graphics (pictures, icons, logos) and combining text on subjects to create a "valid" response.

This one takes a little more unpacking. First, the generative services are cloud based because they use public data sources to answer what you ask them to do. Most applications of generative AI today, like ChatGPT, Azure OpenAI, and Google Bard, provide a text interface that you will hear referred to as a chatbot. When you enter a command, which in the generative world is called a prompt, the AI looks out across all available data sources, combines the data into a response, and then provides the answer or matching graphic back to the requestor. The requestor can then help train the AI engine by telling it what is not right or adding instruction to refine the result. One of the current issues with generative AI is that you need to closely check the answers you get, because it can "hallucinate": make up something that is not real or accurate where it is interpreting data that has gaps.
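Here is a minimal sketch of that prompt-and-refine loop in code. The endpoint URL, model name, and API key are placeholders, and it assumes an OpenAI-style chat completions HTTP interface; check your provider's documentation for the real details:

```python
import requests

# Placeholder endpoint and credentials for whichever generative AI
# service you use; the request/response shape shown here is assumed.
API_URL = "https://api.example.com/v1/chat/completions"  # hypothetical
API_KEY = "YOUR_API_KEY"

def ask(prompt, history=None):
    """Send a prompt, including prior turns so the model can refine answers."""
    messages = (history or []) + [{"role": "user", "content": prompt}]
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"model": "example-model", "messages": messages},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

answer = ask("Summarize the benefits of AI-powered IT operations in two sentences.")
# Always verify the output: the model can "hallucinate" plausible but
# inaccurate details, so a human check belongs in the loop.
print(answer)
```

Passing the earlier turns back in `history` is what lets you "instruct it back" on what was wrong and refine the result.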

The massive adoption of AI that this is driving is helping provide businesses with more and better algorithms. There is a ton of promise for these tools in simple activities: generating emails personalized to individuals, generating campaign graphics that specifically align to a customer base's interests, and creating variations in marketing materials that companies can test and implement based on reaction results. The most successful software solutions will solve specific problems that provide clear value for the customer.
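A quick sketch of the personalized-email idea, with hypothetical field names and customer data; the actual generation step would go through whatever generative AI service you use:

```python
# Build a personalization prompt from profile data (all fields invented
# for illustration), then hand it to a generative AI service.

EMAIL_PROMPT = (
    "Write a short, friendly marketing email for {name}, who recently "
    "bought {last_purchase} and is interested in {interest}. "
    "Mention our spring sale and keep it under 80 words."
)

customer = {"name": "Dana", "last_purchase": "trail running shoes",
            "interest": "hiking"}

prompt = EMAIL_PROMPT.format(**customer)
# Send `prompt` to a chat endpoint (like the earlier ask() sketch), then
# A/B test the generated variations against real response rates.
print(prompt)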

And that takes us to the reality of the software market today. EVERY software vendor will promote its AI capabilities over the next few years. Buyers who succeed with these solutions will need to understand things about what they are buying that differ from past purchases. There are many questions; these are just a few of the top ones:

  1. What algorithms are used in the software?
  2. Is it trained? If so, how long? And for what outcome?
  3. What is the accuracy?
  4. How transparent are the AI findings? Does the solution explain HOW it arrived at the finding?
  5. What is the data security model?
  6. What does the cost of massive data ingestion look like?

So now we get to the fear bit. Will AI bring about Terminator or 2001: A Space Odyssey, where computers make their own decisions and decide humans are not needed? Or, even one step back, take everyone's jobs? In its present state, it's not capable of that. However, you hear a lot about Ethical AI and Transparent AI because, without rules or guardrails, AI can absolutely do unexpected things. This is largely why you don't see AI running things all by itself today. A human really needs to oversee the work the AI does and understand why it's doing what it's doing, to ensure it doesn't take any wrong turns. My view is that it will be this way for some time. Until you have specific and proven AI that always does very repeatable things well, no one is willing to turn over full control to AI for anything but the simplest use cases where no harm can be done. Today that is largely reporting findings and recommending courses of action.

The concept of the Autonomous Enterprise is gaining momentum not from the perspective of going straight to full automation overseen by AI. Rather, it's a progression of driving manual work out where possible. How does the Autonomous Enterprise begin to become real? The mechanisms here are different from what I covered before. You have AI supervising and orchestrating other AI to do jobs, return results, and then follow workflows. Humans will continue to be in these processes because the AI is recommending what should be done, with transparency on why the particular course of action is being recommended. The human then decides the action to take and selects what to do. The AI learns from that selection and over time figures out the variations in handling. At some point the business may be comfortable that the AI now understands how to make the calls at particular decision points, and it can allow the AI to make those decisions going forward, only asking for human input when there is a new or different variation. The key is to provide the rules up front and only allow AI to make automatic decisions based on those rules and the findings the AI comes back with.
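Here is a sketch of that progression (no specific product implied; the actions, patterns, and rule set are invented for the example). The AI recommends with an explanation, a human decides, and decisions the human has already signed off on graduate to automatic handling:

```python
# Human-in-the-loop decision flow: rules are provided up front, and the
# AI only acts alone on actions and variations humans have approved.

AUTO_APPROVED_ACTIONS = {"restart_service", "clear_cache"}  # rules set up front
KNOWN_VARIATIONS = set()  # decision patterns a human has already signed off on

def handle(recommendation):
    action = recommendation["action"]
    reason = recommendation["why"]      # transparency: why this was recommended
    pattern = recommendation["pattern"]
    if action in AUTO_APPROVED_ACTIONS and pattern in KNOWN_VARIATIONS:
        return f"AUTO: {action} (because: {reason})"
    # New or different variation: ask the human, then learn from the answer.
    decision = input(f"AI recommends '{action}' because {reason}. Approve? [y/n] ")
    if decision.lower() == "y":
        KNOWN_VARIATIONS.add(pattern)  # the next identical case runs unattended
        return f"HUMAN-APPROVED: {action}"
    return "REJECTED: escalate to an operator"

rec = {"action": "restart_service", "why": "memory leak pattern detected",
       "pattern": "leak:web-tier"}
print(handle(rec))  # first occurrence asks a human; repeats can run alone
```

The design choice worth noticing: autonomy is granted per decision point and per variation, never wholesale, which matches the gradual handover described above.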

Will AI kill jobs? Well, think about when word processing was introduced. Before that, there were whole typist pools: people who typed documents on typewriters, reviewed them, revised them, and retyped them. With word processing software, one person could do the work of several, and typist pools disappeared. Many of the typists learned word processing, a valuable market skill, and got the new jobs as they were created. An example of bigger displacement happened to researchers as volumes of content came onto the internet. Companies used to have significant numbers of researchers who would go to libraries and hunt through tons of books and articles to find content they would then summarize with references. You still have researchers, but now they can work from anywhere and do that work in a fraction of the time. Some jobs will no longer be needed, but the new set of skills will be knowing how to use new tools like generative AI. The work doesn't disappear, just the way it's done. Whether you love AI or hate it, it's here to stay. I will continue to provide guidance and want to hear your points of view.

Ken, wow, great read. I especially like your "Top 6" questions checklist. It was really helpful, and it strikes at the core issue of how deployable a vendor's AI-based solution will turn out to be in my company. A good way to avoid buying an "AI lemon." btw: I wish you were around when I bought my last used car (haha)

Ron Favali

I help tech companies tell stories that communicate value to customers, media, partners, employees, and analysts.


This is excellent, Ken. Thanks for proactively addressing the "fear." We're starting to see a divergence in use cases. LLMs are about as smart as the average person on the street. That's what they are trained to be. Need expertise on a niche subject? At best, generative AI can't help. Worse, it's wrong. The huge upside with AI is around automation, as you highlight. We're starting to see it referred to as invisible AI. The reality is that most people, including those in the autonomous enterprise, won't even realize a specific thing is happening because of AI until that specific thing breaks or stops working.
