Normal Computing

Software Development

New York, NY · 2,563 followers

We build AI systems that natively reason about the real world.

About us

At Normal, we're rewriting AI foundations to advance the frontier of reasoning and reliability in the real world. At the center of our mission is bridging artificial intelligence to the most sensitive industrial and advanced manufacturing applications around the globe. We are tackling these problems with a mix of interdisciplinary approaches across the full stack, from probabilistic software infrastructure and algorithms to hardware and physics, enabling AI that can reason and understand its own limits.

At Normal, we understand that our technology is only as powerful as the people behind it. Every employee drives significant impact within our products, often working directly with customers and embedding across our tightly-knit team. Our team members are driven by curiosity and passion for solving some of the most challenging problems in the world of atoms.

Normal was founded in 2022 by engineers and scientists who pioneered industry-leading Physics + ML tools for next-gen AI at Google Brain and Google X. Join us as we incite a second industrial revolution through AI purpose-built for (and in) the physical world, as part of our incredible team that's anything but normal.

Website
https://normalcomputing.com
Industry
Software Development
Company size
11-50 employees
Headquarters
New York, NY
Type
Privately held
Founded
2022
Specialties
artificial intelligence, machine learning, enterprise software, semiconductors, industrials, manufacturing, and AI

Locations

Employees at Normal Computing

Updates

  • Normal Computing reposted

    View Chloe Glasgow's profile

    At Normal, we have the most respected minds in silicon and artificial intelligence applying probabilistic methods to create the world's first AI semiconductor test simulator and thermodynamic computer. We are hiring Backend, AI, and ML engineers in NYC, London, and Copenhagen! Check out our open roles: https://lnkd.in/ehN287ge

    View Normal Computing's company page

    We recently released "Thermodynamic Natural Gradient Descent" (TNGD) from Kaelan Donatella, Sam Duffield, Maxwell Aifer, Denis Melanson, Gavin Crooks, and Patrick Coles, showcasing a new approach to AI optimization. In Section 5, we conduct numerical simulation benchmarks that demonstrate TNGD's performance:

    1. Classification (Fig. 3, 4):
       - TNGD outperforms Adam, especially in the initial optimization stages, while also generalizing better.
       - The analog runtime in TNGD acts as a resource, smoothly interpolating between first-order (t=0) and second-order (t=∞) optimization.
       - Non-zero delay times can improve performance through a momentum-like effect.

    2. Language Model Fine-Tuning (Fig. 5):
       - A hybrid TNGD-Adam approach, combining the natural gradient estimate with the Adam update rule, achieves the best performance.
       - Increasing the analog runtime consistently boosts performance, making it a computationally cheap resource.

    Additionally, TNGD exhibits high noise resilience because the solution is encoded in the first moment of the equilibrium distribution.

    These results highlight TNGD's ability to outperform state-of-the-art optimizers, its unique interpolation between optimization orders, and its robustness to noise. TNGD is the first thermodynamic algorithm explicitly targeted at training neural networks. It is a great example of the power of co-design: here we design algorithms to match the hardware. Indeed, TNGD overcomes a limitation of conventional digital approaches, where second-order methods are rarely used due to their computational overhead. This opens up new possibilities for efficient AI optimization. (A toy simulation of the runtime-interpolation idea follows below.)

    Stay tuned for more research from the @Normal Computing team as we continue to push the boundaries of AI optimization! Read more in the full paper here: https://lnkd.in/eFCRHzHM

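    A minimal digital sketch of the runtime-interpolation idea above. The Fisher matrix F, gradient g, and step sizes below are made-up illustrative values (this is not the paper's implementation, nor the hardware): integrating dx/dt = -(Fx - g) from x(0) = 0 for a short runtime gives a vector proportional to the raw gradient, while running to equilibrium gives the natural gradient F⁻¹g.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Made-up Fisher matrix and gradient for a 3-parameter model;
    # in TNGD these would come from the network being trained.
    F = np.array([[2.0, 0.3, 0.0],
                  [0.3, 1.5, 0.2],
                  [0.0, 0.2, 1.0]])
    g = np.array([1.0, -0.5, 0.25])

    def analog_update(F, g, t, dt=1e-3, noise=0.0):
        """Integrate dx = -(F x - g) dt + noise dW from x(0) = 0 up to time t.

        The equilibrium mean is F^-1 g (the natural gradient), so the runtime t
        interpolates between the raw gradient (t -> 0, up to scale) and the
        natural gradient (t -> infinity).
        """
        x = np.zeros_like(g)
        for _ in range(int(t / dt)):
            x += -(F @ x - g) * dt + noise * np.sqrt(dt) * rng.standard_normal(x.shape)
        return x

    short = analog_update(F, g, t=0.01)  # short runtime: first-order direction
    long_ = analog_update(F, g, t=20.0)  # long runtime: second-order direction
    print(short / np.linalg.norm(short), g / np.linalg.norm(g))  # same direction
    print(long_, np.linalg.solve(F, g))                          # same vector
    ```

    Setting noise > 0 in the same toy illustrates the noise-resilience claim: the solution lives in the mean of the trajectory, so zero-mean fluctuations average out.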
  • View Normal Computing's company page

    At Normal, we're rewriting AI foundations to advance the frontier of reasoning and reliability in the real world. We are tackling problems across semiconductors and industrials with a mix of interdisciplinary approaches across the full stack: from probabilistic software infrastructure and algorithms to hardware and physics, enabling AI that can reason and understand its own limits.

    We're always on the lookout for innovative talent to join the Normal Computing team. At Normal, we understand that our technology is only as powerful as the people behind it. Every employee drives significant impact within our products, often working directly with customers and embedding across our tightly-knit team. Our team members are driven by curiosity and passion for solving some of the most challenging problems in the world of atoms.

    If you don't see an exact match for a role, submit your resume to our talent pool. We'll reach out when a position matching your skills becomes available. https://lnkd.in/gFBufJWt

    #hiring #machinelearning #computing #ml

  • View Normal Computing's company page

    Can We Eliminate Hallucinations in Language Models?

    Language models are incredible tools, but they're not perfect. One major challenge is hallucinations: errors in the generated text. There are two types of hallucinations, syntactic and semantic. Syntactic hallucinations involve grammatical errors, while semantic hallucinations result in nonsensical or factually incorrect content.

    Fixing syntactic hallucinations is relatively straightforward. By evaluating the model's output in real time, we can reject or accept tokens based on their adherence to grammatical rules, thereby eliminating many syntactic errors (see the sketch after this post).

    Semantic hallucinations, however, are tougher to solve. These errors occur when the generated content doesn't make sense or is factually incorrect, even if grammatically correct. One approach is to incorporate more relevant information into the model's context or use it for fine-tuning. By feeding the model high-quality data, we can improve the accuracy and contextual appropriateness of its responses. However, there is no guarantee that this method will entirely eliminate semantic hallucinations, as the complexity of language and the subtleties of meaning often lead to persistent inaccuracies.

    To further address this, Normal Computing has developed the open-source Python library posteriors. This tool makes LLMs aware of their own uncertainty and enables them to quantify it, enhancing the robustness of predictions and mitigating issues like catastrophic forgetting in Normal's AI agents, which is critical in high-stakes industrial applications.

    Learn more about our work here: https://lnkd.in/e8eJJYEJ

    posteriors: Normal Computing's library for Uncertainty-Aware LLMs

    normalcomputing.com
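    A minimal sketch of the real-time token-filtering idea for syntactic hallucinations, using a toy vocabulary, a dummy stand-in for the language model, and a hypothetical balanced-parentheses "grammar" (all names and rules here are made up for illustration; this is not Normal's decoder):

    ```python
    import math
    import random

    random.seed(0)

    VOCAB = ["(", ")", "x", "+", "<eos>"]

    def model_logits(prefix):
        # Stand-in for an LLM forward pass: arbitrary scores per token.
        return [random.gauss(0.0, 1.0) for _ in VOCAB]

    def is_valid(prefix, token):
        # Toy grammar: parentheses stay balanced; <eos> only when balanced.
        depth = prefix.count("(") - prefix.count(")")
        if token == ")":
            return depth > 0
        if token == "<eos>":
            return depth == 0
        return True

    def constrained_sample(prefix):
        # Reject ungrammatical continuations by masking their logits to -inf.
        logits = model_logits(prefix)
        masked = [l if is_valid(prefix, t) else -math.inf
                  for t, l in zip(VOCAB, logits)]
        weights = [math.exp(l) for l in masked]
        return random.choices(VOCAB, weights=weights)[0]

    prefix = []
    for _ in range(20):
        token = constrained_sample(prefix)
        if token == "<eos>":
            break
        prefix.append(token)
    print("".join(prefix))  # parentheses are balanced by construction
    ```

    Masking guarantees every sampled sequence satisfies the constraint by construction; as the post notes, no analogous hard filter exists for semantic errors.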

  • View Normal Computing's company page

    Learn more about posteriors from Normal Computing's Sam Duffield on the latest episode of "Learning Bayesian Statistics"! posteriors is a new open-source Python library from Normal Computing that provides tools for uncertainty quantification and Bayesian computation. Sam discusses how it can be used for more reliable training of LLMs and more, a significant component of our work toward making AI reliable. https://lnkd.in/dVFDz7T9

    #110 Unpacking Bayesian Methods in AI with Sam Duffield

    learnbayesstats.com

  • View Normal Computing's company page

    GPUs are not necessarily the final word on accelerated computing. At Normal Computing, we anticipate that new analog computing paradigms, designed with AI applications in mind, will ultimately outperform GPUs on the most valuable AI workloads. For example, imagine an AI partner that knows what it doesn't know, is able to explain why it is taking certain actions, and proactively requests human input as needed.

    In December 2023, we presented the world's first thermodynamic computing device. The Normal Computing team has pioneered Thermodynamic AI, a physics-based computing paradigm for accelerating the key primitives in probabilistic machine learning. Normal Computing's approach to thermodynamic computing harnesses natural fluctuation processes to cut energy usage and support probabilistic methods that enable accurate uncertainty quantification in AI models (a toy illustration follows below). At scale this will reduce errors and improve decision-making accuracy in complex environments.

    To learn more about uncertainty quantification, check out posteriors, our recently released open-source framework for Bayesian computation. This is one aspect of our full-stack approach to building reliable AI for critical industrial applications. We're building an enterprise platform that enables semi-autonomous AI to understand dynamic objectives, surface deep insights, and automate complex processes. This entails a fundamental upgrade to existing AI systems (probabilistic thinking) to reliably augment human intelligence at scale.

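    A toy illustration, in ordinary NumPy, of how fluctuations can be a resource rather than a nuisance: the stationary distribution of the noisy linear dynamics dx = -(Ax - b) dt + √2 dW is Gaussian with mean A⁻¹b (the answer to a linear system) and covariance A⁻¹ (an uncertainty estimate for free). A and b are made-up values; this simulates the physics digitally and says nothing about Normal's actual hardware.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Made-up symmetric positive-definite A and vector b.
    A = np.array([[3.0, 0.5],
                  [0.5, 1.0]])
    b = np.array([1.0, 2.0])

    dt, burn_in, n_samples = 1e-3, 50_000, 200_000
    x = np.zeros(2)
    samples = []
    for step in range(burn_in + n_samples):
        # Euler-Maruyama step of dx = -(A x - b) dt + sqrt(2) dW.
        x += -(A @ x - b) * dt + np.sqrt(2.0 * dt) * rng.standard_normal(2)
        if step >= burn_in:
            samples.append(x.copy())

    samples = np.array(samples)
    print(samples.mean(axis=0), np.linalg.solve(A, b))    # mean ~ A^-1 b
    print(np.cov(samples.T), np.linalg.inv(A), sep="\n")  # cov  ~ A^-1
    ```

    Here the same noisy dynamics that compute the answer also price the uncertainty, which is the sense in which fluctuations become a resource.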

Similar pages

Funding

Normal Computing: 3 total rounds

Last round

Unknown

US$25,000,000.00

See more info on Crunchbase