Two computing paradigms: deterministic and probabilistic (neuromorphic) computing
"Coin toss" by OpenAI's DALL-E

From the earliest computational models, like those envisioned by Leibniz in the 1600s, the goal was clear: use deterministic logic to deliver precise, mathematical outcomes. Fast forward to the 1800s, and we see Charles Babbage's ambition to speed up routine calculation while minimizing errors through his designs for the Difference Engine and Analytical Engine. The technology of his era, however, was not advanced enough to fully realize these inventions. It wasn't until the advent of the first fully functioning electronic digital computers that Babbage's vision was finally realized, in a mathematics-driven machine capable of decoding Nazi communications.

For many years, deterministic computing, with its structured syntax and rule-based approach, dominated, consistently delivering specific and precise answers. However, an alternative was already brewing in the early days of computing: a path inspired by the human brain and grounded in probabilistic reasoning. Unlike deterministic systems, this approach didn't yield a single precise answer but a range of possibilities, each weighted by its likelihood of being correct.
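As a toy illustration of that contrast (the numbers are invented, not from any particular system): a deterministic routine maps the same input to the same single answer every time, while a probabilistic model turns raw scores into a distribution of weighted possibilities.

```python
# Toy contrast between the two paradigms; the scores are invented for illustration.
import math

def deterministic_add(a: float, b: float) -> float:
    # Deterministic: identical inputs always produce the identical, single answer.
    return a + b

def softmax(scores: dict[str, float]) -> dict[str, float]:
    # Probabilistic: raw scores become a distribution of weighted possibilities.
    z = sum(math.exp(s) for s in scores.values())
    return {label: math.exp(s) / z for label, s in scores.items()}

print(deterministic_add(2, 2))                         # always 4.0
print(softmax({"cat": 2.1, "dog": 1.3, "car": -0.5}))  # roughly 0.66 / 0.29 / 0.05
```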

Frank Rosenblatt's perceptron, an early artificial neural model introduced in 1958, marked a significant milestone: a basic yet pioneering way to process visual data, loosely modeled on the brain. Much like Babbage, Rosenblatt saw his ambitions curtailed by the technological limitations of his time. Thus, while deterministic computing surged ahead, probabilistic (or neuromorphic) computing took longer to find practical applications. Only in recent years have we seen its successes in fields like robotics, computer vision, natural language processing, and predictive analytics.
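For the curious, the learning rule behind Rosenblatt's perceptron fits in a few lines. The sketch below is a simplification with made-up training data (logical AND), not a model of his Mark I hardware:

```python
# Minimal perceptron sketch in the spirit of Rosenblatt (1958): nudge the weights
# toward each misclassified example until a separating line is found.
def train_perceptron(data, epochs=20, lr=0.1):
    # data: list of (features, label) pairs, with labels in {0, 1}
    w = [0.0] * len(data[0][0])
    b = 0.0
    for _ in range(epochs):
        for x, y in data:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred                                   # 0 when correct
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Learn logical AND, a linearly separable toy task.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(examples)
print(w, b)  # weights and bias that fire only when both inputs are 1
```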

Understanding neuromorphic systems challenges our conventional notions of computing, shaped predominantly by deterministic models. Criticisms like "hallucination" in large language models (LLMs) stem from misapplying deterministic expectations to probabilistic computing. Indeed, while inconsistent results in deterministic programming indicate flaws in that programming, variability is the cornerstone of neuromorphic design.
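One way to see that variability is a dial rather than a defect: language models sample each output token from a probability distribution, and a temperature parameter controls how wide that distribution is. A toy sketch with invented scores:

```python
# Toy temperature sampling: the variability of a probabilistic system is tunable.
# The logits are invented; real LLMs score tens of thousands of candidate tokens.
import math
import random

def sample(logits: dict[str, float], temperature: float) -> str:
    # Lower temperature concentrates probability on the top choice;
    # higher temperature spreads it across the alternatives.
    scaled = {tok: v / temperature for tok, v in logits.items()}
    z = sum(math.exp(v) for v in scaled.values())
    probs = {tok: math.exp(v) / z for tok, v in scaled.items()}
    return random.choices(list(probs), weights=list(probs.values()))[0]

logits = {"reliable": 2.0, "creative": 1.2, "random": 0.5}
print([sample(logits, 0.2) for _ in range(5)])  # nearly always 'reliable'
print([sample(logits, 1.5) for _ in range(5)])  # a mix of all three, by design
```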

Embracing this new paradigm in neuromorphic system programming opens up questions about managing probabilities and optimizing outcomes for specific scenarios. What guiding information can we provide these systems? How can we write programs to minimize unwanted ambiguity? What checks can ensure consistent quality and mitigate errors? And crucially, how do deterministic checks and human oversight integrate with probabilistic outputs?
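One common pattern for those checks, sketched below under assumed names (`generate` is a stand-in for any probabilistic model call, and the schema is invented): wrap the probabilistic step in deterministic validation, retry on failure, and escalate to a human when retries run out.

```python
# Hypothetical guardrail pattern: deterministic validation around a probabilistic call.
import json
import random

def generate(prompt: str) -> str:
    # Placeholder for a real model call; deliberately imperfect for the demo.
    return random.choice(['{"sentiment": "positive"}', 'not valid json'])

def validated_call(prompt: str, retries: int = 3) -> dict:
    for _ in range(retries):
        raw = generate(prompt)
        try:
            result = json.loads(raw)              # deterministic structural check
        except json.JSONDecodeError:
            continue                              # malformed output: retry
        if result.get("sentiment") in {"positive", "negative", "neutral"}:
            return result                         # passed every gate
    raise ValueError("escalate to human review: no valid output after retries")

print(validated_call("Classify: 'I love this product.'"))
```

The probabilistic step stays probabilistic; the guardrails around it are ordinary deterministic code, so they behave identically on every run.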

Generative AI is not merely a "coin toss" yielding good or bad answers at random. These systems, when thoughtfully designed, can consistently produce reliable results, significantly enhancing the capabilities of traditional deterministic and human problem-solving methods.

Tom Short

What’s next?

9 months

So is it possible, then, that LLM hallucinations should be thought of as a feature rather than a bug, with systems designed to take advantage of them? Or are you already implying that?

Michael Lorberbaum

Digital Transformation | Business Strategy Director | Applied AI Consultant | 6+ years AI experience

9 months

Interesting background. This makes me curious whether LLMs are capable of associating a probability with each answer, enabling us to set a confidence interval on potential answers.

Mark Evans

AI Strategist & Ecosystem Builder | Innovation & Partnerships Leader | Driving Collaboration Between Startups & Enterprises

9 months

Very insightful and a great way to frame the following: "Criticisms like 'hallucination' in large language models (LLMs) stem from misapplying deterministic expectations to probabilistic computing."

Awesome insight!
