KYield, Inc.

KYield, Inc.

Software Development

Rio Rancho, NM · 1,192 followers

KYield, Inc. - a pioneer in AI. We offer the KOS, an enterprise-wide AI OS, and our Synthetic Genius Machine (SGM).

About us

KYield's mission is to provide systems of integrity to organizations and individuals, enabling them to manage the knowledge yield curve in a secure, affordable manner so they can execute precision governance, make informed decisions based on accurate data, prevent crises, improve productivity, and remain competitive. KYield offers multiple products and systems, including:

1) KOS: Universal to any type of organization, the KOS is a distributed AI OS built on our patented modular architecture. The KOS provides enterprise-wide governance, security, prevention, and enhanced productivity tailored to each entity.

2) KYield Healthcare Platform: Although the market was still premature when we published our use-case scenario on diabetes in 2010 (since downloaded by millions of people at most healthcare institutions and companies), the healthcare platform is designed to optimize preventative care in a patient-centric manner. Ideal for self-insured, employer-paid healthcare, it is seeing renewed interest from governments and insurers who recognize that much more efficient systems are needed.

3) HumCat: Prevention of human-caused catastrophes. This product, first revealed in early 2017, had long been under R&D. By bundling the KOS prevention function with financial incentives, including insurance and potentially financing, customers can achieve a very attractive ROI. Short of accelerating R&D and creating the next Apple or Google, nothing offers a higher ROI than preventing crises.

4) 'Synthetic Genius Machine and Knowledge Creation System' (SGM): When available, the patent-pending SGM (August 2019) will provide superintelligence as a service at the confluence of symbolic AI and quantum computing.

Website
https://kyield.com/
Industry
Software Development
Company size
11-50 employees
Headquarters
Rio Rancho, NM
Type
Privately Held
Founded
2002
Specialties
Artificial Intelligence, Innovation, Discovery, Data Management, Business Intelligence, Governance, Algorithmics, Risk Management, Human Performance, Predictive Analytics, Knowledge Systems, Personalized Medicine, Machine Learning, Deep Learning, and Productivity

Locations

Employees at KYield, Inc.

Updates

  • KYield, Inc.

    On the AI patent bubble and regional business environments.

    Mark Montgomery

    Founder & CEO of KYield. Pioneer in Artificial Intelligence, Data Physics and Knowledge Engineering.

    I just bought a national subscription to American City Business Journals (ACBJ), so I will be posting more articles from their network of 44 journals. The Puget Sound Business Journal was our primary source of regional business intelligence from the 1980s to the early 90s, and it was instrumental for many younger growing companies, including Microsoft, Costco and Starbucks, among others. They are often the only publications that cover important business news.

    When I filed my original core patent application in 2006, Google had 4 AI patents. In 2023, they had 1,870 AI patents. While there has been a massive spike in recent years due to the LLM hype-storm and Big Tech priority, what this really tells us is that almost everyone was behind Google in AI -- and frankly behind KYield in our focus areas. It also tells us there is an LLM bubble. This article cautions on what occurred with blockchain and the metaverse, where a similar spike of investment and patents was followed by a 70% bubble deflation. I expect that to occur in LLMs, but unlike those examples, AI has enormous value for consumers and businesses. We just need to plow the massive piles of noisy junk off the road so we can deliver the value.

    "Rather than obtaining patents with an eye toward suing competitors," said Brian Love, professor of law at Santa Clara University School of Law, "Silicon Valley companies tend to build portfolios for the purpose of deterring competitors from suing them." This is precisely our strategy at KYield. I decided not to pursue a portfolio of patent applications related to our Synthetic Genius Machine invention a couple of years ago for several reasons, including industrial espionage that ignores patent law (China is the leader but far from alone), the high costs associated with a complex IP portfolio, and other higher priorities.

    NM has some unique challenges in IP. Due to the national labs, there is plenty of IP, but there is precious little support for IP-related ventures, as government dominates the economy, and government is an extremely poor early adopter. They not only have obstacles galore, they usually lack motivation or incentives. The NM business environment is primarily gov't contractors, mom & pops, a few mid-markets, and non-profits / quasi-gov't entities like healthcare and universities. The state VC fund is the only VC game in town of any size, and that's a political game. Otherwise, NM has a lot going for it... location, climate, affordability, science, infrastructure, but it is a challenged regional economy. There is no support like we experienced and provided in Seattle or SV. Even AZ offered a few options in the 90s. That said, there isn't any reason businesses can't thrive here -- it just requires a lot more energy and money to break through the early obstacles. We have quite a few now that are growing nicely. Several right here in Rio Rancho. Stay tuned.

    AI patents soar in Silicon Valley, with more on the way - Silicon Valley Business Journal


    bizjournals.com

  • KYield, Inc.

    An op-ed by Yoshua Bengio warning that LLMs need to be regulated due to the increasing risks from "reasoning".

    Mark Montgomery

    Founder & CEO of KYield. Pioneer in Artificial Intelligence, Data Physics and Knowledge Engineering.

    A thoughtful op-ed by Yoshua Bengio that comes with a well-deserved warning about the inability to run LLMs safely, which, as he suggests, becomes even more problematic as large models begin to learn how to reason. Reasoning combined with web-scale data and massive compute power, without rules-based governance, is a recipe for disaster, yet that's where we find ourselves.

    "We thus see a new form of computational scaling appear. Not just more training data and larger models but more time spent “thinking” about answers. This leads to substantially improved capabilities in reasoning-heavy tasks such as mathematics, computer science and science more broadly."

    "We don’t yet know how to align and control AI reliably. For example, the evaluation of o1 showed an increased ability to deceive humans — a natural consequence of improving goal-reaching skills. It is also concerning that the ability of o1 in helping to create biological weapons has crossed OpenAI’s own risk threshold from low to medium. This is the highest acceptable level according to the company (which may have an interest in keeping concerns low)."

    "But with improved programming and scientific abilities, it is to be expected that these new models could accelerate research on AI itself. This could get it to human-level intelligence faster than anticipated. Advances in reasoning abilities make it all the more urgent to regulate AI models in order to protect the public."

    From my own decades-long R&D in AI systems, I've found very few have Yoshua's understanding or level of credibility. So many other brilliant researchers sold their souls, only to be predictably disheartened when they found the ideological bait used to attract them was actually poison for the public. While I've always supported advanced research in AI, it never occurred to me anyone would be so reckless as to unleash self-generating models on the public long before they could be demonstrated safe. It's frankly still difficult to believe. Even more bizarre is that they've been made into celebrities by the LLM hype-storm, media, and even governments...

    The only methods I'm aware of that meet the standards of safety we apply to all other advanced technology are either to keep LLMs (and any similar self-generating models) in controlled lab environments, or to govern them with rules-based systems, compartmentalization, and strong security as we do with the KOS, including precision end-to-end data management and access.

    AI can learn to think before it speaks


    ft.com

  • KYield, Inc.

    We are pleased to share this announcement on the new automotive division for KYield, to be based in Michigan and led by Robert Hegbloom (Bob). Bob retired as CEO of the Ram brand in 2020. He joined KYield's board earlier this year and has been collaborating with auto industry leaders on an industry-specific version of the KOS. Bob will remain on our board, with additional responsibilities as president of the new auto division. Robert Neilson, Skyler N., Deborah McGuinness, Santa Fe Institute https://lnkd.in/g7E_sfur

    AI pioneer KYield appoints Bob Hegbloom to lead new automotive division


    prnewswire.com

  • KYield, Inc.

    A new post from Gartner on their AI hype cycle research.

    Mark Montgomery

    Founder & CEO of KYield. Pioneer in Artificial Intelligence, Data Physics and Knowledge Engineering.

    Gartner unknowingly gave me a bit of a birthday gift by publishing this note with their 2024 AI hype cycle yesterday. This is far from a perfect science of course, but it's a worthwhile effort nonetheless (see chart).

    "Generative AI (GenAI) receives much of the hype when it comes to artificial intelligence. However, the technology has yet to deliver on its anticipated business value for most organizations."

    "The hype surrounding GenAI can cause AI leaders to struggle to identify strong use cases, unnecessarily increasing complexity and the potential for failure. Organizations looking for worthy AI investments must consider a wider range of AI innovations — many of which are highlighted in the 2024 Gartner Hype Cycle for Artificial Intelligence."

    "Data governance — or ensuring that a company’s AI training data is accurate, complete, bias-free and reflective of its future deployment without being too narrowly scoped — is one of the biggest hurdles in the race to composite AI adoption. This compounds another challenge: As AI becomes a larger part of enterprise processes, organizations using it will face increased regulatory scrutiny, particularly regarding business ethics and data privacy laws."

    Hence the precision data management system in our patented core AI system in the KOS. We haven't used the term composite AI, but it effectively describes the KOS in a single, cohesive, efficient system.

    "Composite AI represents the next phase in AI evolution. It involves combining AI methodologies — such as machine learning, natural language processing and knowledge graphs — to create more adaptable and scalable solutions."

    "This approach enables businesses to maximize the impact of their AI initiatives, leading to more accurate predictions, decisions and automations — even within complex environments. Composite AI is particularly powerful compared to singular forms of AI, because it doesn’t rely on a single technique, thus spreading out its points of failure across multiple techniques instead of one."

    "For example, integrating rule-based systems with machine learning allows enterprises to better handle unstructured data, thereby enhancing their ability to derive insights from diverse datasets. By embracing composite AI, organizations can solve problems that were previously too complex for single-technique AI models to address."

    The KOS is a rules-based AI system, and it integrates several other types of technologies within the strong governance and security provided by the system.
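    The "integrating rule-based systems with machine learning" pattern Gartner describes can be made concrete with a small sketch. Everything below is hypothetical and invented for illustration (the transaction fields, thresholds, and placeholder country codes are not from Gartner or KYield): deterministic rules run first and are auditable, and a statistical layer only decides when no rule fires.

```python
# Hypothetical sketch of a composite (rules + ML) decision pipeline.
# All names, rules, and thresholds are invented for illustration only.

SANCTIONED = {"XX", "YY"}  # placeholder country codes

def rule_layer(txn):
    """Auditable, rules-based checks run first (hard governance)."""
    if txn["country"] in SANCTIONED:
        return "block"          # compliance rule: no exceptions
    if txn["amount"] > 10_000:
        return "review"         # hard threshold requiring human review
    return None                 # no rule fired; defer to the model

def model_layer(txn):
    """Stand-in for a trained classifier: a simple weighted risk score."""
    score = (0.5 if txn["new_payee"] else 0.0) + txn["amount"] / 20_000
    return "review" if score >= 0.6 else "allow"

def composite_decision(txn):
    """Rules take precedence; the model only decides when no rule fires."""
    return rule_layer(txn) or model_layer(txn)

print(composite_decision({"amount": 15_000, "country": "US", "new_payee": False}))  # review (rule)
print(composite_decision({"amount": 500, "country": "US", "new_payee": True}))      # allow (model)
```

    The point of the layering is the one Gartner makes: failure points are spread across techniques, and the deterministic layer keeps the statistical one inside governed bounds.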

    Hype Cycle for Artificial Intelligence 2024 | Gartner


    gartner.com

  • KYield, Inc.

    Article on the benefits of augmented learning with background on the field from KYield's founder.

    Mark Montgomery

    Founder & CEO of KYield. Pioneer in Artificial Intelligence, Data Physics and Knowledge Engineering.

    An exceptional learning opportunity for CXOs, boards, and senior managers, both from the article and, in this case, from the pioneer in augmenting knowledge work (KYield, Inc.). When we first started talking, then exclusively to CXOs in large companies, nearly 20 years ago, not a single one reported having even considered augmenting the work of individuals across the organization with AI. We were therefore confirmed as the pioneer by more than a decade. All of the organizations were familiar with AI, and most of the market leaders were employing it in R&D, but before we talked to them it was primarily limited to drug development, increasing O&G production, etc. Some of the largest banks were using ML to prevent fraud. Most of the work was performed by small teams of scientists working with very expensive supercomputers, so AI was limited to a few dozen organizations. Who pioneered a particular field only matters to those concerned with remaining competitive and relevant (in addition to ethical conduct, of course), as our R&D has remained at least a decade ahead of others. Consider the pioneers in deep learning, for example.

    The authors of this article, apparently sponsored by BCG (key members and contacts here on LI have read most of our work), are indirectly making the argument for the KOS, as it's clearly the most advanced, efficient, secure, and affordable enterprise learning system in the world (EAI OS). The apps mentioned in this article don't come anywhere close. They can't -- they lack the functionality. What surprises me here is that the article is based on a fairly small survey of knowledge workers and a very small interview process of only 9 companies. We've talked to hundreds of organizations about these issues. That said, it's a solid article and accurate, hence my sharing it despite no mention of KYield or our KOS. There is no question that augmented learning, or enhanced learning, done well can improve financial outcomes.

    The KOS can improve everything the organization does. It not only reduces uncertainty in the workplace; capturing preventions and opportunities together is actually one of the eight functions in DANA, based on decades of pioneering research in preventing crises of many types, from major catastrophes to minor, more common events that are collectively very costly. The KOS is tailored to each organization with the CKO app and to every individual with DANA (digital assistant). We also provide four layers of security in the patented core AI system. Administration is a simple-to-use natural language interface. The KOS is built on a precision data management system, which is automated in the natural work process, without which companies will waste vast sums of money and never catch their tail. This is a good advertisement for the KOS -- it's just very unfortunate that our decades of pioneering work isn't mentioned, particularly since I graciously cite their work in our papers read by thousands of CXOs.

    Learning to Manage Uncertainty, With AI


    sloanreview.mit.edu

  • KYield, Inc.

    The latest edition of Mark's enterprise AI newsletter is now posted to LinkedIn.

    Mark Montgomery

    Founder & CEO of KYield. Pioneer in Artificial Intelligence, Data Physics and Knowledge Engineering.

    I wrote this as an op-ed prior to the election and submitted it to two of the leading business publications, neither of which published it, so I decided to post it as the November edition of my enterprise AI newsletter. A super important topic: I challenge conventional wisdom surrounding the SWOT analysis of LLM chatbots and Big Tech, U.S. prosperity, and national security. Please read and share as you feel appropriate. Thanks, Mark

    The AI Arms Race is Threatening the Future of the U.S.


    Mark Montgomery, published on LinkedIn

  • KYield, Inc.

    "A nasty case of pilotitis"

    Mark Montgomery

    Founder & CEO of KYield. Pioneer in Artificial Intelligence, Data Physics and Knowledge Engineering.

    A solid and well-informed perspective on why "many companies seem to be suffering from an acute case of pilotitis, toying with pilot projects rather than implementing the technology on a large scale". Some additional perspective from the front lines:

    1) Many companies (governments, and others) are waiting for the functionality in the KOS (most teams in large companies have been on our distribution list for over a decade). Unfortunately, the supermajority of capital is going only towards the LLM hype-storm -- not strong security, governance, and rules-based systems like our KOS -- the pioneer in EAI OS. We planned for all of these contingencies from inception and would never release such high-risk, immature technology to the public.

    2) Going direct to the individual was no accident. ChatGPT and the fast followers in consumer LLM chatbots were recklessly released to the public, undoubtedly with full awareness that the lack of security and governance would make them difficult, if even possible, to adopt in business, healthcare, and government without massively expensive internal customization. A few of the largest have done so, but integrating with legacy systems is incredibly expensive. I'm aware of several multi-billion dollar efforts.

    "Many employees seem to be secret cyborgs, using generative AI in their work even as their employers go slow." That's because many are conducting work with consumer LLM chatbots that represent serious risk for their employers, and in a fair portion of cases, they would be fired if their employers found out about it. Very few people are aware of how LLM chatbots at that scale can connect dots in ways that can be catastrophic for organizations. I suspect in the vast majority of cases, employees aren't even aware that they are causing such risks, until it blows up, of course, at which time the investigation begins. This is all well-known by experts in the valley.

    There is just too much money in the LLM arms race -- very few have the discipline not to jump on the hyped-up bandwagon, despite the inherent flaws. That's not a particularly attractive scenario for the risk committees on corporate boards, responsible CEOs, CFOs, CROs (risk), CIOs, and CLOs (legal), not to mention CISOs (security).

    Why your company is struggling to scale up generative AI


    economist.com

  • KYield, Inc.

    Very important post on AI security and catastrophic risk.

    Mark Montgomery

    Founder & CEO of KYield. Pioneer in Artificial Intelligence, Data Physics and Knowledge Engineering.

    An article on a warning from Anthropic about systemic cyber risk and catastrophic risk from LLMs, and the need for rapid, surgical regulation by the USG. Although this is a self-serving blog post by Anthropic -- the specific regulations they are calling for would favor their company and products -- they are making the same warnings I've been making from day one. However, I would go one giant step further and disallow all consumer LLM bots, as they are inherently unsafe and can't be made safe. That's what should have been done immediately after the launch of ChatGPT due to national security risks.

    So far the risks have been realized precisely as we expected. First we see cyber risks realized that will be increasingly systemic. Catastrophic risks will be realized more slowly -- particularly bioweapons, as they require physical labs. Confirmation came this week that China's military (PLA) is using Meta's open-source LLMs -- an almost certain realized risk I warned POTUS/DoD/Congress about long before the launch of ChatGPT.

    LLMs can be employed safely in the enterprise, provided robust security like our KOS is installed, including strong access security, as other high-risk technologies require (we currently have four types of security in the KOS). Importantly, individual consumers can access DANA, our digital assistant, through their employer, health provider, bank, university, government, or some other organization that installs the KOS and its strong security. We could also offer DANA to consumers via their ISP/cable/telecom provider, though none have yet partnered with us to install it. What doesn't work at all from a safety and security perspective is to offer consumer LLMs to anyone on the web, as has been done to date, without very strong upfront security. That's an invitation for a chain reaction of catastrophes that leads to anarchy.

    Anthropic warns of AI catastrophe if governments don't regulate in 18 months


    zdnet.com

  • KYield, Inc.

    On the prisoner's dilemma in the Big Tech arms race.

    Mark Montgomery

    Founder & CEO of KYield. Pioneer in Artificial Intelligence, Data Physics and Knowledge Engineering.

    The author points out a classic case of the prisoner's dilemma in the LLM arms race, not unlike the arms race during the Cold War, which is when the term was popularized. The author assumes, I think wrongly, that the LLM arms race (the media misapplies "LLM" to all AI methods) will similarly result in one winner taking most. It's an otherwise good article, but the author is missing a couple of critical issues.

    1) The looming risk of SCOTUS enforcing copyright on LLMs. Most GenAI is trained on data created and owned by others. There is no question there is great value in stealing the knowledge economy, but also no question it's theft. There is a big question where SCOTUS and/or Congress will fall on the issue. Most of the Big Techs can afford to pay license fees in the near term, but that will require burning far more capital, and as Marc Andreessen has admitted, the GenAI business model isn't sustainable if they need to pay for the content their products are based on.

    2) Completely missing from media coverage is the significant risk of disruption from superior AI methods. I'm admittedly in a unique position to understand this risk, as I've invested most of my adult life immersed in AI R&D, resulting in two distinct systems, one of which is the synthetic genius machine (SGM). While it's unlikely some of the Big Techs would invest in the SGM -- it would cannibalize some of their cash cows -- I can see a clear path to more advanced AI than is possible with LLMs. It requires a fraction of the compute power or infrastructure, has the strongest form of security I'm aware of, is much more accurate, and, importantly, I believe will be much better at accelerating discovery. Still in R&D, the SGM will require considerable investment to mature. However, I consider the SGM to be actual superintelligence rather than claimed, as it's based on the way the brains of proven human geniuses work. I'm also aware of several other methods with high probabilities of surpassing LLMs.

    Bottom line: The LLM arms race is being driven by paranoid monopolists and opportunists milking it for all it's worth selling picks and shovels. Big Techs will keep burning cash as long as investors reward it.

    From Investopedia (https://lnkd.in/ghP8wKQF): "The prisoner's dilemma is a paradox in decision analysis in which two individuals acting in their own self-interests do not produce the optimal outcome. A prime example of game theory, the prisoner's dilemma was developed in 1950 by RAND Corporation mathematicians Merrill Flood and Melvin Dresher during the Cold War (but later given its name by the game theorist Albert Tucker). Some have speculated that the prisoner's dilemma was crafted to simulate strategic thinking between the U.S.A. and U.S.S.R. during the Cold War." https://lnkd.in/gRrbaDXp
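    For readers unfamiliar with the structure behind the quoted definition, here is a minimal sketch using the standard textbook payoffs (years in prison, so lower is better; these numbers are the classic illustration, not anything from the article):

```python
# Classic prisoner's dilemma payoff matrix: (my sentence, opponent's sentence).
# Standard textbook values, used purely for illustration.
PAYOFFS = {
    ("cooperate", "cooperate"): (1, 1),
    ("cooperate", "defect"):    (3, 0),
    ("defect",    "cooperate"): (0, 3),
    ("defect",    "defect"):    (2, 2),
}

def best_response(opponent_action):
    """Given the opponent's action, pick the action minimizing our own sentence."""
    return min(("cooperate", "defect"),
               key=lambda a: PAYOFFS[(a, opponent_action)][0])

# Whatever the other player does, defecting is individually optimal...
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"
# ...yet mutual defection (2, 2) leaves both worse off than mutual cooperation (1, 1).
```

    That gap between the individually rational outcome and the jointly optimal one is the dynamic the author maps onto Big Tech's LLM spending.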

    Google and peers weigh an AI prisoner’s dilemma


    ft.com

Affiliated pages

Similar pages

View jobs

Funding

KYield, Inc. · 3 rounds total

Last round

Seed

US$500,000.00

See more on Crunchbase