Full Paper: SPEAR AI Systems
Mark Montgomery
Founder & CEO of KYield. Pioneer in Artificial Intelligence, Data Physics and Knowledge Engineering.
We just uploaded my working paper on SPEAR AI systems (Safe, Productive, Efficient, Accurate, and Responsible), so I wanted to share the link to the full paper, along with a companion video. Below is the conclusion.
11. Conclusion
Nearly seven decades after John McCarthy coined the term artificial intelligence, we find ourselves in the midst of a global AI arms race, triggered by unleashing an LLM chatbot to the general public for generating text, which rapidly expanded to multimodal capabilities. These LLM bots are trained on content owned by millions of people and organizations without their permission, content that represents trillions of dollars of investment and great personal effort developed over centuries.
A review of AI research for this paper confirmed that all known existing safeguards for LLMs are easily breached. This is no surprise to many of the pioneers of this very technology, hence the consistent warnings from Hinton, Bengio, and many others, including scientists from the LLM firms themselves and their big-tech enablers.
As briefed in this paper, LLM chatbots in use today are nowhere close to meeting the rigorous safety standards required of other industries. The risks include significant cybersecurity, social and psychological, economic, and catastrophic risks, among others. Yet more than a year after achieving the most rapid adoption of any product in history, the U.S. Government still has not taken the action required to mitigate these risks to levels comparable to safety-critical technologies in other industries.
I can only speculate, based on the behavior of other arms races, that a perception of national competition is one reason for the unprecedented lapse in safety governance. However, given the state of technology markets in the U.S. today and the amount of money at stake, I postulate that unhealthy influence from the big-tech sponsors of LLM chatbot firms is also contributing to the hands-off policy toward these very serious risks.
Regardless of what courts and legislatures do or don't do in response to what I believe was a recklessly premature release of high-risk technology, individuals and organizations still face their own risks to consider, as well as opportunities, and must make decisions. As the unprecedented live experiment on society has now confirmed, the supermajority of the problems and risks caused by LLM bots are due to the lack of effective data governance, which in turn stems in part from the very large scale necessary to mimic generalized intelligence for consumers on any topic.
Fortunately, strong data governance, safety, and security can be provided to individuals and organizations without sacrificing the majority of productivity benefits offered by LLM chatbots. Moreover, precision data management can provide much higher levels of accuracy at a small fraction of the financial and environmental costs created by LLM chatbots. As many companies have now demonstrated, even generative AI functions can be executed within systems with strong data governance by employing small language models and other techniques, such as licensing additional data for larger models free from copyright and reputational liability.
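The governance approach described above can be illustrated with a minimal sketch. This is a hypothetical example, not KYield's actual KOS design: a simple policy gate that admits only provenance-verified, properly licensed records before any model sees them. All names (`Record`, `governed_context`, `ALLOWED_LICENSES`) are illustrative assumptions.

```python
# Hypothetical sketch of a data-governance gate: only records with
# verified provenance and an approved license reach the model's context.
from dataclasses import dataclass

@dataclass
class Record:
    text: str
    source: str     # where the data came from
    license: str    # e.g. "owned", "licensed", "unknown"
    verified: bool  # provenance confirmed by the governance layer

# Policy: only data the organization owns or has licensed is admissible.
ALLOWED_LICENSES = {"owned", "licensed"}

def governed_context(records):
    """Return only the records that pass the governance policy."""
    return [r for r in records if r.verified and r.license in ALLOWED_LICENSES]

records = [
    Record("Internal engineering memo", "intranet", "owned", True),
    Record("Scraped forum thread", "web-crawl", "unknown", False),
    Record("Licensed market report", "vendor-feed", "licensed", True),
]

context = governed_context(records)
print([r.source for r in context])  # the scraped, unverified record is excluded
```

The point of the sketch is that the filter runs before generation, so copyright and provenance risk is handled at the data layer rather than patched onto model outputs afterward.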
Well-designed SPEAR AI systems are technically viable today. I offer a briefing on the KOS as an example of one such system. By adopting a refined SPEAR AI system like the KOS, organizations can benefit from nearly three decades of R&D, save significant time and money, improve accuracy with precision data management, increase productivity, and reduce risks while avoiding unnecessary and wasteful social, economic, and environmental impacts.
Expertise in your corner | MIT | HBS
8 months ago
Very curious: in the excerpt you provide, you mention that LLMs can be made safe for use in organizations. Other than natural language search, do you see any benefit for the average person coming out of LLMs and generative AI? Most of the applications I've seen involve helping organizations eliminate employees or further distance themselves from customers. When it comes to the person on the ground, I've seen very little benefit (other than "use AI to write your memos because otherwise you'll be passed over for promotion in favor of the person who does use AI"). I guess implicit in my question is: even with all the safeguards, is AI going to be a net benefit to society?
Hypnotherapist & Radio Presenter
10 months ago
Excellent!
Principal information architect & diagnostician at Ripose Pty Limited
10 months ago
An interesting post. In 1990 I developed my Caspar (Computer Assisted Strategic Planning And Reasoning) engine (my LLM AI). I will now describe how I managed to implement my AI system, addressing three of Mr. Montgomery's points:
1. Page 5: The socioeconomic costs that LLMs will cause. In 1990 I identified the said socioeconomic factors. On 14 Dec 2023 I published my 360th post 'Socioeconomic Factors and Conflict' - https://www.dhirubhai.net/posts/charles-meyer-richter-1734a19_activity-7140797298166460416-Rdxw explaining my interpretation of these factors and how they can be used to avoid conflicts.
2. Page 5: "Impact on the Knowledge Economy". On 24 Dec 2023 I published my 363rd post 'AI vs NI Knowledge' - https://www.dhirubhai.net/posts/charles-meyer-richter-1734a19_24-dec-2023-ai-vs-ni-part-2-knowledge-activity-7144627499157602304-tcy_ in which I described how Immanuel Kant's 'a priori' and 'a posteriori' knowledge influences everyone.
3. Page 7: "Fine tuning": On 29 Dec 2023 I published my 350th post 'AI vs NI: Data' - https://www.dhirubhai.net/posts/charles-meyer-richter-1734a19_aidatafailurepdf-activity-7146301099690344448-YL9T in which I described how data-items and databases create the logical construct of 'Data'.
Regards