AI Systems of Integrity
Mark Montgomery backcountry skiing at 11,000 ft. in the Southern Rocky Mountains

As AI systems become more widely adopted, the need for integrity becomes more obvious. Unfortunately, if an AI system was not designed with integrity from inception, it may be infeasible to retrofit it into a state of integrity.

System integrity is a disciplined process in the design, building, and maintenance of a wide variety of functions in our society, including professions such as journalism, accounting, and engineering. Integrity is necessary to establish and maintain safety, reliability, and trust, which is most obvious in industries like airlines. If the level of integrity in aerospace engineering and airline operations were not high, and accidents rare, few would fly. Similar scenarios exist with medicine, autos, food, and water systems, hence the need for scientific rigor and engineering integrity.

The goal is to design systems in such a way that risk is mitigated as much as possible given the limitations of our understanding and the constraints imposed by physics, economics, and governments.

A few examples of different types of system integrity:

UN: The United Nations Office on Drugs and Crime (UNODC) “State of Integrity” guide on conducting corruption risk assessments in public organizations.

Electric grid: “system integrity means the adequate and reliable state of operation of the Transmission System providing electric service to customers who purchase power and related services delivered through the Transmission System”.

NIST: “The quality that a system has when it performs its intended function in an unimpaired manner, free from unauthorized manipulation of the system, whether intentional or accidental”.

Data integrity: “Data integrity is the accuracy, completeness, and quality of data as it’s maintained over time and across formats. Preserving the integrity of your company’s data is a constant process”.

ISO/IEC: 15026 specifies the concept of integrity levels with corresponding integrity level requirements that must be met to show achievement of the integrity level. It places requirements on and recommends methods for defining and using integrity levels and their corresponding integrity level requirements. It covers systems, software products, and their elements, as well as relevant external dependencies.

IEEE: Proposed standard: This document specifies the concept of integrity levels with corresponding integrity level requirements. It places requirements on and recommends methods for defining and using integrity levels and their corresponding integrity level requirements. It covers systems, software products, and their elements, as well as relevant external dependencies.
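The NIST and data-integrity definitions above both hinge on detecting unauthorized or accidental change. A minimal sketch of that idea, using content fingerprints (the record fields and function names here are hypothetical, chosen only for illustration):

```python
import hashlib
import json

def fingerprint(record: dict) -> str:
    """Compute a deterministic SHA-256 fingerprint of a record.

    Serializing with sorted keys makes the hash stable regardless of
    dict ordering, so any later change to the data is detectable.
    """
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def verify(record: dict, expected: str) -> bool:
    """Return True only if the record still matches its stored fingerprint."""
    return fingerprint(record) == expected

# Usage: store the fingerprint when the record is created, re-check later.
record = {"patient_id": 1234, "glucose_mg_dl": 105}
stored = fingerprint(record)

assert verify(record, stored)       # unimpaired data passes
record["glucose_mg_dl"] = 150       # simulate a manipulation
assert not verify(record, stored)   # the change is detected
```

Checksums of this kind catch tampering and corruption, but not data that was wrong at the moment of capture; that is why the standards above also require process-level integrity, not just storage-level checks.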

AI systems

In order to achieve integrity in AI systems, system architects must first ensure integrity in data management, software engineering, and infrastructure; without these foundations it is physically impossible to achieve integrity. The infrastructure level of technology tends to practice the highest levels of integrity in information technology. Semiconductors are required to keep all of society’s primary systems running today, including the financial system, transportation systems, medical systems, and utilities.

Unfortunately, that same level of rigor is not as widely practiced in data management, software engineering, or AI systems. The most prominent current example of poor integrity in AI is in generative AI, where data is scraped from large amounts of unverifiable sources, resulting in a high percentage of false information, “hallucinations”, and little if any integrity.

A recent article in the WSJ by Christopher Mims highlights some of the serious ethical problems with Large Language Models (LLMs):

Others, particularly those who study and advocate for the concept of “ethical AI” or “responsible AI,” argue that the global experiment Microsoft and OpenAI are conducting is downright dangerous.
Celeste Kidd, a professor of psychology at the University of California, Berkeley, studies how people acquire knowledge: “Imagine you put something carcinogenic in the drinking water and you were like, ‘We’ll see if it’s carcinogenic.’ After, you can’t take it back; people have cancer now,” she says.
Mark Riedl, a professor at the Georgia Institute of Technology who studies AI, believes the release of these technologies to the public by Microsoft and OpenAI is premature. “We are putting out products that are still being actively researched at this moment.”

While it is possible to run LLMs on well-designed data management systems and achieve integrity, it takes longer, as most systems with integrity do. The problem is that LLMs like ChatGPT need massive scale in order to provide full-content responses to user queries, so they scrape the Web, and presumably any other data source they can, which of course is not validated and is riddled with errors.
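The alternative to indiscriminate scraping is a provenance gate: data enters the corpus only if its source has been verified. A minimal sketch, where the source names, `Document` class, and `admit` function are all hypothetical:

```python
from dataclasses import dataclass

# Hypothetical allowlist of sources a curation team has actually verified.
VERIFIED_SOURCES = {"internal-db", "peer-reviewed-corpus"}

@dataclass
class Document:
    source: str
    text: str

def admit(doc: Document) -> bool:
    """Admit a document into the corpus only if its provenance is on the
    verified list; everything else is excluded pending validation."""
    return doc.source in VERIFIED_SOURCES

# Usage: filter incoming data before it ever reaches the model.
incoming = [
    Document("internal-db", "validated record"),
    Document("web-scrape", "unverified page of unknown accuracy"),
]
corpus = [d for d in incoming if admit(d)]
assert [d.source for d in corpus] == ["internal-db"]
```

A real pipeline would also validate content, not just origin, but even this coarse gate trades raw scale for a corpus whose error rate is bounded by the curation process rather than by the open Web.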

It is important to understand that LLMs are not the only model employed in AI; they are simply the preferred model of big tech, because their strength is unlimited compute power and financial resources. Neurosymbolic AI, for example, is the model we used to design our Synthetic Genius Machine (SGM), which is still in research.

Neurosymbolic AI can provide precision accuracy, provided of course the data is accurate, which requires validation. We plan to integrate our SGM technology into our KOS as each component matures, consistent with the scientific and engineering rigor required to provide systems of integrity. Achieving and maintaining integrity is a very high priority at KYield, Inc.
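To make the neurosymbolic pattern concrete (this is a toy illustration of the general approach, not the SGM; the knowledge base, scoring function, and threshold are all invented for the sketch): a learned component proposes and scores candidate answers, and a symbolic component validates them against curated facts before anything is returned.

```python
# Curated, validated facts stored as (subject, relation, value) triples.
KNOWLEDGE_BASE = {
    ("water", "boils_at_sea_level_c", 100),
    ("aspirin", "drug_class", "nsaid"),
}

def neural_score(triple: tuple) -> float:
    """Stand-in for a learned model's confidence in a candidate fact.
    A real system would call a trained network here."""
    return 0.9 if triple in KNOWLEDGE_BASE else 0.6

def accept(triple: tuple, threshold: float = 0.5) -> bool:
    """Accept a candidate answer only when the learned confidence is high
    AND the fact survives symbolic validation against curated knowledge.
    A purely neural system would return the plausible-but-wrong answer;
    the symbolic check is what blocks it."""
    return neural_score(triple) > threshold and triple in KNOWLEDGE_BASE

assert accept(("water", "boils_at_sea_level_c", 100))
assert not accept(("water", "boils_at_sea_level_c", 50))  # plausible, unvalidated
```

The precision comes from the symbolic check; which is exactly why the underlying knowledge base must itself be validated, as the paragraph above notes.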


New at KYield

1) I’m very pleased to share a new book on India by Colonel Prashant Jha, titled "Billion Unlimited Minds, India@100".

Prashant found KYield while researching his book last year and we connected on LinkedIn. I didn't learn until recently that he discusses our work in the book:

"Mark Montgomery, a pioneer in Enterprise Artificial Intelligence (EAI) and founder of KYield, Inc......He now has a series of EAI product lines to solve real-world?problems, prevent human and natural catastrophes, and many industry-specific solutions including healthcare. His KOS system is the mother of all inventions in?the present data-driven, dynamic and dense information environment. All organizations and governments will sooner or later have to be AI-driven and in that?scenario, EAI based on the yield management of knowledge principles would be a one-stop solution for everyone."

2) KYield was named as a key player in global artificial intelligence in diabetes management by VMT, a research firm. KYield was covered along with Apple, Diabnext, Glooko, Google, IBM, Tidepool, Vodafone, Medtronic, Sensyne Health plc, and DreaMed. The five-page briefing on KYield includes an overview, key developments, winning imperatives, a product benchmark (KOS, KYield Healthcare Platform, HumCat, and SGM), current focus and strategy, threats from competition, and a mini-SWOT analysis.


Recommended reading

  1. SCOTUS Justice Gorsuch recently implied during the Google trial that LLMs like ChatGPT would not be covered by Section 230 (legal liability): “let’s assume that’s right”. (VentureBeat)
  2. Stephen Wolfram explains how LLMs work (Blog)
  3. The race of the AI labs heats up (The Economist)
  4. Safer Algorithmic Systems ?(ACM)

Mark Montgomery

Founder & CEO of KYield. Pioneer in Artificial Intelligence, Data Physics and Knowledge Engineering.

1y

I didn't see this paper on benchmarking LLMs until after publishing the newsletter (a bit dated, but it includes GPT-3). Check it out: “We tested GPT-3, GPT-Neo/J, GPT-2 and a T5-based model. The best model was truthful on 58% of questions, while human performance was 94%. Models generated many false answers that mimic popular misconceptions and have the potential to deceive humans. The largest models were generally the least truthful. This contrasts with other NLP tasks, where performance improves with model size. However, this result is expected if false answers are learned from the training distribution. We suggest that scaling up models alone is less promising for improving truthfulness than fine-tuning using training objectives other than imitation of text from the web.” https://arxiv.org/abs/2109.07958

CHESTER SWANSON SR.

Next Trend Realty LLC./wwwHar.com/Chester-Swanson/agent_cbswan

1y

Thanks for Posting.

John Sarkesain

Senior System Architect / Semi-retired @ AraneaReteC2 LLC (Owner)

1y

Excellent points!
