How to Achieve the Elusive ROI in EAI

Given the ear-piercing noise of the LLM hype-storm, and the competition between Big Techs to outspend one another in the AI arms race, one might think ROI is irrelevant in the AI era (“this time it’s different”). AGI, after all, could fix everything, and may break capitalism, so why would CEOs and business owners worry about anything as insignificant as ROI or economic survival?

Nonsense. Of course CEOs and business owners are concerned about ROI, unless they are one of the handful of Big Techs with market caps pumped up by $trillions of LLM hype, or an LLM startup immediately valued in the $billions with no product and a plan to burn hundreds of $millions per year. This newsletter is for everyone else.

Consumers are apparently not ready to pay for LLM bots either, at least not in a sustainable manner, which influences EAI strategy. Microsoft, for example, decided to retire GPT Builder for the consumer version of Copilot just three months after scaling it up, presumably due to internal analysis suggesting it wasn’t viable.

Microsoft didn’t disclose the reason for this decision, but I suspect adoption was slower and costs were much higher than expected. The $20 per month subscription may not have been sufficient to cover their costs even if they had achieved lofty adoption goals. This type of experience has been widely reported among investors in LLM/GenAI ventures and products, which is why VC investment in GenAI has plummeted.

Although the ability to prompt an LLM bot on any topic is novel, and bots that automate tasks can improve productivity, boiling the ocean with generalized LLMs is not terribly conducive to ROI. Attempting to answer any question with stochastic parrots requires training on all the data one can scrape, steal, beg, or license, which is not only enormously expensive but also produces inaccuracies, creates unprecedented risks, and consumes enormous amounts of power, water, and billions of dollars.

The need for scale is of course why Big Techs have bet the farm on LLMs – they are among the only companies that can provide it, so they are unsurprisingly attempting to extend their monopolies to AI and force dependency. The bottom line is that LLMs are optimized for the strategic interests of Big Techs and LLM scientists and engineers, not customers or society. See my recent paper on SPEAR AI systems (Safe, Productive, Efficient, Accurate & Responsible) for more on the risks of LLMs.

Dos and don’ts for LLMs in business

It’s critically important to understand that LLMs have a much more limited beneficial role than the LLM hype-storm claims. Attempting to use LLMs for everything in AI is absolutely the wrong approach, and frankly credible grounds for dismissal, as it demonstrates incompetence and poor decision making. It’s also a great way to recklessly increase risk for organizations and people, and it makes generating an ROI all but impossible.

Dos

  1. It’s OK to train LLMs on internal data to create your own GenAI app or bot, as long as it’s done safely. Employing multiple bots in one enterprise has already become common in large companies: some have a customer service bot, a sales bot, a specialized R&D bot, and a general knowledge base bot. LLMs shouldn’t be used for every type of bot. Each bot should (must) comply with corporate policy and regulatory requirements, so easy-to-use system-wide governance is very important, even though it is almost non-existent today (it is designed in from inception in our KOS). Most important to understand is that LLMs come with inaccuracy, many types of risk, and high costs, so use them sparingly, and limit them to those use cases where LLMs outperform other options. Consumer LLMs were instantly commoditized and provide no competitive advantage whatsoever, just higher costs and whatever value one believes they deliver. Training bots on proprietary data, however, is a great way to leverage knowledge products, build on strengths, increase sovereignty (see my previous EAI newsletter on AI sovereignty), and potentially create a competitive advantage.
  2. Strive to maintain sovereignty while preventing the negative influence of NIH syndrome. This includes reducing dependency on Big Tech cloud providers, embracing interoperable data standards (as we do in the KOS), leveraging edge and device AI, and keeping your most sensitive data under your control.
  3. Be hyper-sensitive to security risks, not least the systemic risks posed by LLM firms and Big Techs (see the recent report by Swiss Re on the increasing systemic risk concentrated in Big Tech companies).
  4. Consider graphs for your EAI architecture if you haven’t already done so. Although graphs can be paired with LLMs, and offer more control than LLMs alone, we’ve found several other uses for them that offer more value.
  5. Apply small language models (SLMs) and neurosymbolic AI rather than LLMs where they are more appropriate and much less costly. SLMs are well-matched to many applications, and neurosymbolic AI is a great option for precision data management (see my EAI newsletter edition on the power of neurosymbolic AI). A routing sketch follows this list.
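
To make Dos #1 and #5 concrete, here is a minimal Python sketch of the routing logic described above. Every name in it (Task, route, violates_policy, the three model tiers) is a hypothetical illustration, not the KOS or any vendor API: each task passes a governance check first, precision work goes to symbolic methods, narrow language work goes to an SLM, and only bulk, accuracy-tolerant language work falls through to an internally trained LLM bot.

```python
from dataclasses import dataclass

@dataclass
class Task:
    kind: str                  # e.g. "classify", "summarize", "draft_notes"
    text: str
    accuracy_critical: bool = False

def violates_policy(text: str) -> bool:
    # Stand-in for a system-wide governance check (Do #1); a real check
    # would evaluate corporate policy and regulatory rules, not keywords.
    return "customer_ssn" in text

def symbolic_pipeline(task: Task) -> str:
    return f"[neurosymbolic result for {task.kind}]"

def slm(task: Task) -> str:
    return f"[SLM result for {task.kind}]"

def internal_llm_bot(task: Task) -> str:
    return f"[internal LLM bot result for {task.kind}]"

# Deterministic, precision tasks belong to symbolic methods (Do #5).
SYMBOLIC_KINDS = {"classify", "validate", "extract_fields"}
# Narrow language tasks an SLM handles at a fraction of LLM cost.
SLM_KINDS = {"summarize", "rewrite"}

def route(task: Task) -> str:
    if violates_policy(task.text):
        raise PermissionError("blocked by governance policy")
    if task.kind in SYMBOLIC_KINDS or task.accuracy_critical:
        return symbolic_pipeline(task)   # precise, auditable, cheap
    if task.kind in SLM_KINDS:
        return slm(task)                 # good enough, far cheaper
    # Only bulk, accuracy-tolerant language work reaches the LLM tier,
    # and only via a bot trained safely on internal data (Do #1).
    return internal_llm_bot(task)

print(route(Task("summarize", "meeting notes ...")))
```

The point of the sketch is the ordering: the cheapest adequate method wins, and the LLM is the last resort rather than the default.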

Don’ts

  1. Never share corporate data with an LLM platform, regardless of whether it’s a Big Tech or an LLM company. This includes ANY use of bots by employees relating to the business. Very few employees are sufficiently trained to understand the risks of sharing interactive data with LLM bots, so I highly recommend that all organizations establish a strict policy against using LLM platforms for any work-related task. The risks are just too great. Not sharing data is becoming more difficult to achieve since software companies are changing their terms of service to include training on customer data. Businesses and governments need to draw a line in the sand on data, as it represents all the value in the digital economy. Our policy at KYield is very clear – customer data is owned and controlled by customers. Period. We don’t want access to any customer data that isn’t required to keep our systems running safely and efficiently, which is limited to a small amount of metadata on system performance and security.
  2. Don’t use LLMs when other methods are safer, more accurate, and less expensive, which is the case for the majority of tasks. I’m seeing increasing numbers of reports of companies using LLMs for use cases that are completely inappropriate, including robotics, industrial automation, and preparing professional documents that require precision. While exceptions exist, such as accelerating drug development, language models should generally be limited to rapid language tasks over large amounts of unstructured data that would take humans much longer to perform, where perfect accuracy isn’t critical, regardless of modality.
  3. Don’t assume the portfolio approach (a collection of AI projects) will be sufficient to remain competitive against highly refined AI systems.

Our approach at KYield

Twenty-seven years ago this summer, I conceived the KYield theorem (yield management of knowledge) while operating GWIN, the leading learning network for thought leaders, which we had built the previous year. Over the following few years of R&D, the theorem manifested as the KYield Operating System, which we shortened a few years ago to simply the KOS.

Now highly refined and tested for basic scenarios, the KOS is an organizational, or business, operating system (which we refer to as an EAI OS, or enterprise OS). Not to be confused with computer operating systems or any other type, the KOS is primarily concerned with organizational management, human interaction, knowledge systems, and AI augmentation for governance, security, prevention, personalized learning, and productivity.

For technical readers, the KOS is a quasi-neurosymbolic AI system run on precision data management. It’s a modular system architecture composed of multiple interconnected apps, including a CKO app for simple-to-use system-wide governance, a business unit app (a smaller version of the CKO app), team apps, and DANA, the digital assistant for every employee in the organization (the optimal deployment).

The KOS provides end-to-end data management with a very strong bias for high-quality data, the complete opposite of consumer LLM chatbots. The system is fully automated, with semi-automated administration of the entire system by senior corporate officers, who then approve all other units, teams, and individuals. Every individual with access to the system through their DANA app is verified in a manner closer to banking than to consumer software.
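
As a rough illustration of that approval chain (not KOS code; every class and function name here is invented), a top-down model might look like the following, where access is granted strictly downward from verified senior officers:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    level: str                    # "officer" | "unit" | "team" | "individual"
    approved: bool = False
    children: list["Node"] = field(default_factory=list)

# Each level may only approve the next level down.
ALLOWED_CHILD = {"officer": "unit", "unit": "team", "team": "individual"}

def approve(parent: Node, child: Node) -> None:
    # Access flows strictly top-down: an unapproved node cannot approve others.
    if not parent.approved:
        raise PermissionError(f"{parent.name} is not yet approved")
    if ALLOWED_CHILD.get(parent.level) != child.level:
        raise PermissionError(f"a {parent.level} cannot approve a {child.level}")
    child.approved = True
    parent.children.append(child)

# Senior officers are verified out of band (closer to bank-grade identity
# checks than consumer signup); everything below flows through the chain.
cko = Node("CKO", "officer", approved=True)
sales = Node("Sales", "unit")
approve(cko, sales)
team = Node("Enterprise Sales", "team")
approve(sales, team)
approve(team, Node("jane.doe", "individual"))
```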

To achieve the theorem, it was necessary to convert everything in the system to math, including metadata, ratings on all files, and profiles of all people.
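
A toy illustration of what “converting everything to math” can mean in practice, with invented field names and weights (not the KYield theorem itself): once metadata, file ratings, and person profiles are numeric, the question of what a given person should see reduces to auditable arithmetic.

```python
from dataclasses import dataclass

@dataclass
class FileRecord:
    quality: float      # 0.0-1.0 rating of the file itself
    relevance: float    # 0.0-1.0 rating against a topic
    freshness: float    # 0.0-1.0, decays with age

@dataclass
class PersonProfile:
    interests: dict[str, float]   # topic -> weight

def score(f: FileRecord, person: PersonProfile, topic: str) -> float:
    # Hypothetical weights; the point is that the decision is now
    # arithmetic that can be tuned, audited, and optimized.
    w = person.interests.get(topic, 0.0)
    return w * (0.5 * f.quality + 0.3 * f.relevance + 0.2 * f.freshness)

print(score(FileRecord(0.9, 0.8, 0.6),
            PersonProfile({"pricing": 0.7}), "pricing"))  # 0.567
```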

The functions in the KOS include (part of the KOS is patented):

  • System-wide governance and admin in simple natural language (for all apps)
  • Personalized learning for every individual in the organization
  • Secure knowledge networks supported by visual graphs, tailored to each individual
  • Data valves to manage quality and quantity for every individual in the DANA app (sketched after this list)
  • Prevention of several different types of costly crises, from routine to major catastrophes
  • Multiple types of security designed-in from inception
  • Generative AI trained on personal data, corporate data, and licensed data
  • Prescient search for automatic presentation of high-quality material
  • Personal analytics and reports on projects, relationships, learning, and productivity
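
As promised above, here is what a “data valve” can be thought of as: a per-user filter with two dials, minimum quality and maximum volume. The sketch below is a hypothetical illustration under those assumptions, not the KOS implementation; the field names and thresholds are invented.

```python
from dataclasses import dataclass

@dataclass
class ValveSettings:
    min_quality: float       # drop anything rated below this
    max_items_per_day: int   # cap daily volume per individual

def apply_valve(items: list[dict], valve: ValveSettings) -> list[dict]:
    # Keep only items meeting the quality bar, then cap daily volume,
    # highest-rated first: quality and quantity managed per individual.
    passing = [i for i in items if i["quality"] >= valve.min_quality]
    passing.sort(key=lambda i: i["quality"], reverse=True)
    return passing[: valve.max_items_per_day]

feed = [{"title": "Q3 plan", "quality": 0.92},
        {"title": "Random forward", "quality": 0.31},
        {"title": "Market note", "quality": 0.77}]
print(apply_valve(feed, ValveSettings(min_quality=0.6, max_items_per_day=10)))
```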

To learn more about our approach at KYield, see our executive briefing paper, “What is an EAI OS?”. Below is the video talk I recorded to walk through the paper. DM me if you'd like to discuss in more detail.





Comments

Paul Webber

Critical thinker, CyberSecurity Industry Analyst, Business Advisor, Chief Cyber Security Officer, CISO, vCISO, AR Manager, VP Product Management, Senior Director Product Marketing

9 months ago

GenAI solutions that customers will actually pay good money for (versus merely experimenting with) are still few, and the winning use cases seem to be all about improved (human) resource utilization rather than anything that truly moves the needle on revenue generation. Thus it is hardly surprising that the likes of Microsoft, who went very early and very big on GenAI, are failing to cover the gazillions they have had to outlay to create the services. It is probably going to be those that came along later with more efficient and cost-effective LLMs (e.g. Databricks) that actually manage to make these services generate some revenue and deliver more beneficial business-enabling capabilities than the more frivolous experiments from OpenAI, Google, Musk et al. True ROI will continue to be elusive in the meantime, and we should be very circumspect when we hear such claims, asking how exactly people are calculating it.
