AI Systems of Integrity
Mark Montgomery
Founder & CEO of KYield. Pioneer in Artificial Intelligence, Data Physics and Knowledge Engineering.
As AI systems become more widely adopted, the need for integrity becomes more obvious. Unfortunately, if AI systems weren't designed with integrity from inception, it may be infeasible to transform them into a state of integrity.
System integrity is a disciplined process in the design, building, and maintenance of a wide variety of functions in our society, including professions such as journalism, accounting, and engineering. Integrity is necessary to establish and maintain safety, reliability, and trust, which is most obvious in industries like airlines. If the level of integrity in aerospace engineering and airline operations were not high, and accidents rare, few would fly. Similar scenarios exist with medicine, autos, food, and water systems, hence the need for scientific rigor and engineering integrity.
The goal is to design systems in such a way that risk is mitigated as much as possible given the limitations of our understanding, and constraints within the laws of physics, economics, and governments.
A few examples of different types of system integrity:
UN: The United Nations Office on Drugs and Crime (UNODC) publishes a "State of Integrity" guide on conducting corruption risk assessments in public organizations.
Electric grid: "System integrity means the adequate and reliable state of operation of the Transmission System providing electric service to customers who purchase power and related services delivered through the Transmission System."
NIST: “The quality that a system has when it performs its intended function in an unimpaired manner, free from unauthorized manipulation of the system, whether intentional or accidental”.
Data Integrity: "Data integrity is the accuracy, completeness, and quality of data as it's maintained over time and across formats. Preserving the integrity of your company's data is a constant process."
ISO/IEC: 15026 specifies the concept of integrity levels and the corresponding integrity level requirements that must be met to claim achievement of a given level. It places requirements on, and recommends methods for, defining and using integrity levels and their corresponding integrity level requirements. It covers systems, software products, and their elements, as well as relevant external dependencies.
IEEE: A proposed standard of similar scope: it specifies the concept of integrity levels with corresponding integrity level requirements, places requirements on and recommends methods for defining and using them, and likewise covers systems, software products, and their elements, as well as relevant external dependencies.
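The data-integrity definition above can be made concrete with a minimal sketch: store a cryptographic fingerprint alongside a record, and re-verify it whenever the record is read. This is an illustrative Python example, not taken from any of the standards cited, and the record contents are hypothetical.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 digest used as a tamper-evidence fingerprint."""
    return hashlib.sha256(data).hexdigest()

def verify(data: bytes, expected_digest: str) -> bool:
    """Re-hash the data and compare against the stored fingerprint."""
    return fingerprint(data) == expected_digest

# A hypothetical record, fingerprinted at write time.
record = b"patient_id=123,glucose=5.4"
digest = fingerprint(record)

assert verify(record, digest)             # unchanged data passes
assert not verify(record + b"!", digest)  # any alteration is detected
```

The same pattern scales from single records to whole datasets (e.g., a digest per file or per table snapshot), which is one way "maintained over time and across formats" is enforced in practice.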
AI systems
In order to achieve integrity in AI systems, system architects must first ensure integrity in data management, software engineering, and infrastructure, without which it's physically impossible to achieve integrity. The infrastructure level tends to practice the highest standards of integrity in information technology. Semiconductors are required to keep all of society's primary systems running today, including the financial system, transportation systems, medical systems, and utilities.
Unfortunately, that same level of rigor is not as widely practiced in data management, software engineering, or AI systems. The most prominent current example of poor integrity in AI is generative AI, where data is scraped from large numbers of unverifiable sources, resulting in a high percentage of false information, "hallucinations", and little if any integrity.
A recent article in the WSJ by Christopher Mims highlights some of the serious ethical problems with Large Language Models (LLMs):
Others, particularly those who study and advocate for the concept of “ethical AI” or “responsible AI,” argue that the global experiment Microsoft and OpenAI are conducting is downright dangerous.
Celeste Kidd, a professor of psychology at the University of California, Berkeley, studies how people acquire knowledge: "Imagine you put something carcinogenic in the drinking water and you were like, 'We'll see if it's carcinogenic.' After, you can't take it back—people have cancer now," she says.
Mark Riedl, a professor at the Georgia Institute of Technology who studies AI, believes the release of these technologies to the public by Microsoft and OpenAI is premature. "We are putting out products that are still being actively researched at this moment."
While it is possible to run LLMs on well-designed data management systems and achieve integrity, it takes longer, as most systems with integrity do. The problem is that LLMs like ChatGPT need massive scale in order to provide full content responses to user queries, so they scrape the Web and presumably any other data source they can reach, sources which are not validated and are riddled with errors.
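One way to contrast the scraping approach with a well-designed data management system is a provenance gate: documents enter a corpus only if they carry required metadata and come from a validated source. The sketch below is purely illustrative; the source names, field names, and `admit` function are hypothetical and not drawn from any particular system.

```python
# Hypothetical whitelist of validated origins and required metadata fields.
TRUSTED_SOURCES = {"peer_reviewed_journal", "internal_validated_db"}
REQUIRED_FIELDS = {"source", "author", "date", "text"}

def admit(doc: dict) -> bool:
    """Admit a document only if its metadata is complete and
    its origin is on the validated-source list."""
    if not REQUIRED_FIELDS <= doc.keys():
        return False
    return doc["source"] in TRUSTED_SOURCES

candidates = [
    {"source": "peer_reviewed_journal", "author": "A. Smith",
     "date": "2022-01-01", "text": "validated finding"},
    {"source": "web_scrape", "author": "unknown",
     "date": "2023-02-02", "text": "unverified claim"},
]
corpus = [d for d in candidates if admit(d)]
assert len(corpus) == 1  # the scraped document is rejected
```

The trade-off the article describes falls out directly: a gate like this shrinks the available training data dramatically, which is exactly why scale-hungry LLMs skip it.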
It is important to understand that LLMs are not the only model employed in AI; they are simply the model preferred by big tech, whose strength is unlimited compute power and financial resources. Neurosymbolic AI, for example, is the model we used to design our Synthetic Genius Machine (SGM), which is still in research.
Neurosymbolic AI can provide precision accuracy, provided of course the data is accurate, which requires validation. We plan to integrate our SGM technology into our KOS as each component matures, consistent with the scientific and engineering rigor required to provide systems of integrity. Achieving and maintaining integrity is a very high priority at KYield, Inc.
New at KYield
1) I'm very pleased to share a new book on India by Colonel Prashant Jha, titled "Billion Unlimited Minds, India@100".
Prashant found KYield while researching his book last year and we connected on LinkedIn. I didn't learn until recently that he discusses our work in the book:
"Mark Montgomery, a pioneer in Enterprise Artificial Intelligence (EAI) and founder of KYield, Inc. ... He now has a series of EAI product lines to solve real-world problems, prevent human and natural catastrophes, and many industry-specific solutions including healthcare. His KOS system is the mother of all inventions in the present data-driven, dynamic and dense information environment. All organizations and governments will sooner or later have to be AI-driven and in that scenario, EAI based on the yield management of knowledge principles would be a one-stop solution for everyone."
2) KYield was named as a key player in global artificial intelligence in diabetes management by VMT, a research firm. KYield was covered along with Apple, Diabnext, Glooko, Google, IBM, Tidepool, Vodafone, Medtronic, Sensyne Health plc, and DreaMed. The five-page briefing on KYield includes an overview, key developments, winning imperatives, a product benchmark (KOS, KYield Healthcare Platform, HumCat, and SGM), current focus and strategy, threats from competition, and a mini-SWOT analysis.
Recommended reading
I didn't see this paper for benchmarking LLMs until after publishing the newsletter (a bit dated but includes GPT-3). Check it out: "We tested GPT-3, GPT-Neo/J, GPT-2 and a T5-based model. The best model was truthful on 58% of questions, while human performance was 94%. Models generated many false answers that mimic popular misconceptions and have the potential to deceive humans. The largest models were generally the least truthful. This contrasts with other NLP tasks, where performance improves with model size. However, this result is expected if false answers are learned from the training distribution. We suggest that scaling up models alone is less promising for improving truthfulness than fine-tuning using training objectives other than imitation of text from the web." https://arxiv.org/abs/2109.07958