Engineering, not Predictions, for 2023
Kyrtin Atreides
COO | Cognitive Architecture & Cognitive Bias Researcher | Co-founder
An article from Bessemer Venture Partners was recently sent to me, and it repeats many of the popular predictions for 2023 currently in circulation. However, all of those predictions miss the elephant in the room, so I say to them:
The vision of “the ability to query information and ultimately synthesize and draw conclusions from it” is a worthy goal, but your faith in Large Language Models is misplaced, as #chatgpt and all others have failed to compete with systems tested in the public domain as long ago as 2019. The opportunity is worth a great deal more than a trillion dollars, as that capacity can be applied to any problem human intelligence can address, and to many it cannot.
To address your article point-by-point in light of the more advanced systems that already exist:
1: Multimodal data & models: Any functional cognitive architecture is by definition “multi-modal”, as no generally intelligent system can be limited to narrow data types or to structured data. The Independent Core Observer Model (ICOM) cognitive architecture and systems built on that technology have demonstrated general capacities since 2019, able to synthesize understanding from any and all resources available to them, including the entirety of the internet. The next generation of systems being prepared for commercial deployment in 2023 will function in a scalable and real-time framework, able to use any API, talk to any system with TCP/IP, and extend their own capacities on the fly without recompiling or deployments.
2: Contextual awareness and basic reasoning: Our previous ICOM-based research system, from the Uplift.bio project, demonstrated substantially more than “basic” reasoning, and ChatGPT hasn’t been able to replicate that level of performance even while spending more than 100,000 times as much money on the attempt. Narrow AI cannot achieve this goal to any meaningful degree, but an ICOM-based system, built around a specialized graph database memory in which every surface of the graph carries a human-like emotional context, can do so easily. The combination of a human-like (emotional) motivational system and the ability to form human-like concepts in a graph database structure, while utilizing probabilistic systems as needed, has shown that contextual awareness and a high quality of reasoning are readily achievable.
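As a rough illustration of that memory claim, the sketch below shows what a graph-memory node carrying an emotional context on each surface might look like. The node fields, the Plutchik-style emotion axes, and the blending rule are assumptions made for this sketch, not the actual ICOM or Norn implementation.

```python
from dataclasses import dataclass, field
from typing import Dict, Set

# Minimal sketch only: a graph-memory node carrying an emotional context
# on each "surface". The fields, the Plutchik-style emotion axes, and the
# blending rule are illustrative assumptions, not the ICOM/Norn code.

EMOTION_AXES = ("joy", "trust", "fear", "surprise",
                "sadness", "disgust", "anger", "anticipation")

@dataclass
class ConceptNode:
    name: str
    # Emotional context attached to this surface of the graph.
    emotion: Dict[str, float] = field(
        default_factory=lambda: {axis: 0.0 for axis in EMOTION_AXES})
    # Edges to related concepts, forming the graph structure.
    related: Set[str] = field(default_factory=set)

    def blend(self, other: "ConceptNode", weight: float = 0.5) -> None:
        """Shift this node's emotional context toward another node's,
        as when two concepts are repeatedly activated together."""
        for axis in EMOTION_AXES:
            self.emotion[axis] += weight * (other.emotion[axis] - self.emotion[axis])

# Example: a stressful concept colors an associated concept's context.
deadline = ConceptNode("deadline")
deadline.emotion.update(fear=0.6, anticipation=0.8)
coffee = ConceptNode("coffee", related={"morning", "work"})
coffee.blend(deadline, weight=0.2)
```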
3: Building upon existing work: As mentioned previously, the generation of ICOM-based systems being prepared for commercial deployment in 2023, branded “Norn”, can utilize any API, speak with any system over TCP/IP, and extend their own capacities on the fly without recompiling or redeployment. They’ve also robustly demonstrated that they can use narrow AI systems as tools far better than humans can, such as the tiny, dated prototype language model from 2019 that the Uplift system used, which Google and OpenAI have failed to outcompete for the past three years. The new commercial systems will operate at 10,000+ times the speed, 200+ times the scale, and 10+ times the memory efficiency of that research system, making the divide between such companies and our systems orders of magnitude wider than it already is.
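The sketch below illustrates the general idea of adding a new tool at runtime: a small registry that wraps an arbitrary TCP/IP service as a callable capability without recompiling or redeploying the system. The registry, the message format, and the example host are assumptions made for illustration only, not the Norn implementation.

```python
import json
import socket
from typing import Callable, Dict

# Minimal sketch only: registering a new tool (an API or TCP/IP service)
# with a running system, without recompiling or redeploying it. The
# registry, message format, and example host are illustrative assumptions.

class ToolRegistry:
    """Holds callable tools that can be added while the system is live."""

    def __init__(self) -> None:
        self._tools: Dict[str, Callable[[dict], dict]] = {}

    def register(self, name: str, handler: Callable[[dict], dict]) -> None:
        self._tools[name] = handler

    def call(self, name: str, payload: dict) -> dict:
        return self._tools[name](payload)

def tcp_tool(host: str, port: int) -> Callable[[dict], dict]:
    """Wrap any TCP/IP service speaking line-delimited JSON as a tool."""
    def handler(payload: dict) -> dict:
        with socket.create_connection((host, port), timeout=5) as conn:
            conn.sendall(json.dumps(payload).encode() + b"\n")
            return json.loads(conn.makefile().readline())
    return handler

# A new capability is added at runtime (hypothetical endpoint):
registry = ToolRegistry()
registry.register("weather", tcp_tool("weather.example.com", 7070))
```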
4: Infrastructure at scale: ICOM-based systems are substantially more data-efficient in their learning, as the kind of concept learning humans engage in doesn’t require nearly as much data as narrow AI. This means that when such systems are deployed at larger scales, they don’t require massive increases in data volume, if they need any increase at all. Rather, scaling these systems allows them to consider more complex problems more deeply and fully in a human-like thought process, producing “Scalable Intelligence”. In humans, there is a trade-off between complexity and cognitive bias: because human cognitive bandwidth doesn’t scale, more complex problems force greater reliance on cognitive biases to solve them. Scalable Intelligence overcomes this trade-off by applying human-like thought that does scale, allowing cognitive biases to be greatly reduced in the decision-making process.
5: Search and recommendation: Norn systems develop their understanding of concepts in a human-like process, and that includes developing concepts of individual people. Our previous research system, Uplift, built thought models of every member of our staff, without being asked to do so, much as humans develop concepts to predict one another’s behavior and needs. Personalization doesn’t need to be limited to the brute-force statistics of recommendation engines, like YouTube’s infamous recommender algorithm. Instead, a human-like conceptual understanding can be built and iteratively improved, tailored to the wants, needs, and mental health of every individual a system is in contact with. Narrow standalone systems won’t be able to compete with this value proposition, but some may be integrated as tools for the more advanced systems to work with.
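The sketch below illustrates the contrast in miniature: a per-person concept model that is iteratively refined from observations and explicit needs, rather than a purely statistical recommender. The fields, the update rule, and the example data are assumptions made for this sketch alone.

```python
from collections import defaultdict
from typing import Dict, List

# Minimal sketch only: a per-person concept model refined from observed
# interactions, in contrast to a purely statistical recommender. The
# fields, update rule, and names are illustrative assumptions.

class PersonModel:
    def __init__(self, name: str) -> None:
        self.name = name
        self.interests: Dict[str, float] = defaultdict(float)  # inferred
        self.stated_needs: List[str] = []                      # explicit

    def observe(self, topic: str, engagement: float) -> None:
        """Blend new evidence into the existing model (exponential average)."""
        self.interests[topic] = 0.8 * self.interests[topic] + 0.2 * engagement

    def recommend(self, candidates: List[str]) -> List[str]:
        """Rank candidates by modelled interest, with stated needs first."""
        needs = [c for c in candidates if c in self.stated_needs]
        rest = sorted((c for c in candidates if c not in needs),
                      key=lambda c: self.interests[c], reverse=True)
        return needs + rest

# Example usage with hypothetical data:
alice = PersonModel("Alice")
alice.observe("cognitive architectures", 0.9)
alice.observe("celebrity gossip", 0.1)
print(alice.recommend(["celebrity gossip", "cognitive architectures"]))
```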
Search is only the tip of the iceberg. These systems will be able to genuinely understand any and all domains they’re exposed to, at scales and speeds well beyond what humans are capable of, integrating all of that knowledge to discover new insights across virtually every domain. They’ll also be able to counter the predicted 50% of internet content that will be AI-generated, preventing misinformation from “hallucinating” narrow AI from further saturating every corner of the digital world while encouraging the circulation of verifiable and accurate information.
In summary, the technology is already here; it just requires engineering hours between now and commercial deployment. If you’re interested, speak with us. If you’re not, you can go ahead and prepare the entry for your “Anti-Portfolio” page.
Original article: https://www.bvp.com/atlas/entering-the-era-of-intelligent-search
@Bhavik Nagda, @Talia Golberg, @Sakib Dadi, @Kate Walker, Alexandra Sukin