OpenAI Launches A Game-Changing New Feature

Before we dive into OpenAI's new Canvas feature, watch this video, which provides a good overview of what it does.

https://www.youtube.com/vxWUfvuS35o

Canvas provides a great way of collaborating with AI, almost like having a super-intelligent personal assistant. However, if your assistant's knowledge is intrinsically biased or inaccurate, then your entire output will be factually incorrect or, at worst, dangerous. These factual inaccuracies and biases result in what we call hallucinations.

Let's now get a little under the bonnet of why and how hallucinations occur. Nothing explains this better than the Halting Problem.

https://www.youtube.com/Kzx88YBF7dY

That video was a bit heavy, so let me boil it down for you. The key point is that a program can only be shown to behave reliably for certain inputs; there is no general procedure that can verify every possible case. Consequently, there are some hallucinations you can never detect unless you have strict control over the data you feed into the Large Language Model (LLM).
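To make the analogy concrete, here is a minimal Python sketch of the diagonal argument the video describes. The function names are hypothetical; the point is that the "oracle" the sketch assumes cannot exist, and the same limit applies to any general hallucination detector.

```python
# A minimal sketch of the diagonal argument behind the Halting Problem.
# The names (halts, paradox) are illustrative only; no real library
# provides them, because no general-purpose version can exist.

def halts(program, data):
    """Hypothetical oracle: True if program(data) eventually stops.
    Turing proved no algorithm can implement this for every input."""
    raise NotImplementedError("no general halting oracle exists")

def paradox(program):
    """Do the opposite of whatever the oracle predicts about program(program)."""
    if halts(program, program):
        while True:        # oracle said "halts", so loop forever
            pass
    return                 # oracle said "loops forever", so halt immediately

# Feeding paradox to itself contradicts either answer the oracle could give,
# so halts() cannot exist. By the same reasoning, no general procedure can
# certify every possible LLM output as hallucination-free; you can only
# constrain the inputs you control.
```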

In truth, hallucinations can never be totally eradicated from any AI model. However, we can minimise their effects and the probability of them occurring. AI engines are continually being improved to reduce the frequency of hallucinations, but the quality of the data the models are trained on is the other factor that must be improved. This can be done by filtering out biased and inaccurate data before the LLMs are trained on it. But herein lies the problem: how can the models determine whether the inputs to the LLM are factually correct or even computable? We first need to build LLMs on carefully selected data that has undergone rigorous testing to be factually consistent and deterministic. Only then can we be confident of reducing the frequency and probability of hallucinations. For this very reason I invented SES (Self Evolving Software), a new AI that reduces hallucinations, duplication of code, defects and code churn; all of the issues plaguing current AIs, as substantiated in a study by GitClear.
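As a rough illustration of what "carefully selected data" could mean in practice, here is a hedged Python sketch of a curation filter. The trusted reference facts, record fields and helper functions are assumptions made for the example, not part of any real LLM training pipeline.

```python
# A hedged sketch of pre-training data curation: keep only records whose
# checkable claims agree with a small trusted reference set. The facts,
# field names and corpus below are illustrative assumptions.

TRUSTED_FACTS = {
    "boiling_point_water_c": 100,
    "speed_of_light_m_s": 299_792_458,
}

def is_consistent(record: dict) -> bool:
    """A record passes if every claim it makes matches the trusted set
    (claims the set knows nothing about are left alone)."""
    return all(
        TRUSTED_FACTS.get(key, value) == value
        for key, value in record.get("claims", {}).items()
    )

def curate(corpus: list[dict]) -> list[dict]:
    """Filter a raw corpus down to records that pass the consistency check."""
    return [record for record in corpus if is_consistent(record)]

raw_corpus = [
    {"text": "Water boils at 100 C at sea level.",
     "claims": {"boiling_point_water_c": 100}},
    {"text": "Water boils at 70 C at sea level.",
     "claims": {"boiling_point_water_c": 70}},
]

print(len(curate(raw_corpus)))  # 1 -- the inconsistent record is filtered out
```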

SES learns the code of a customer's system and of a community of connected systems that have been rigorously tested. It leverages the code of these systems and, importantly, generates new code that references the existing code. This approach reduces hallucinations.
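The sketch below is illustrative only and is not the SES implementation; the Snippet type, retrieval scoring and prompt format are assumptions. It simply shows the general idea of grounding generation in code that has already been tested, so that new code is written with verified code as its reference.

```python
# An illustrative sketch of retrieval-grounded code generation: candidate
# snippets are retrieved from a verified code base and passed to the model
# as context, so new code is anchored to code that is known to work.

from dataclasses import dataclass

@dataclass
class Snippet:
    path: str
    source: str
    tests_passing: bool

def retrieve(query: str, code_base: list[Snippet], top_k: int = 3) -> list[Snippet]:
    """Naive keyword retrieval over snippets whose tests pass.
    A real system would use embeddings and richer test signals."""
    verified = [s for s in code_base if s.tests_passing]
    scored = sorted(
        verified,
        key=lambda s: sum(word in s.source for word in query.split()),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(task: str, references: list[Snippet]) -> str:
    """Assemble a prompt that asks the model to reference existing code."""
    context = "\n\n".join(f"# {s.path}\n{s.source}" for s in references)
    return (f"Using ONLY the verified code below as reference:\n{context}\n\n"
            f"Task: {task}\nGenerate code consistent with these examples.")

# Usage: references come from tested systems, so the generated code is
# anchored to behaviour that has already been verified.
code_base = [Snippet("utils/tax.py", "def vat(amount): return amount * 0.2", True)]
print(build_prompt("add a helper that returns gross price including VAT",
                   retrieve("vat tax amount", code_base)))
```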

If you are interested in learning more, or would like to discuss anything connected to AI code generation, contact me at sfouracre@selfevolvingsoftware.com or visit our website at www.selfevolvingsoftware.com

Nadio Granata

CMAIO | Positive Disruptor for good | Founder: The AI Collective: from school leavers to Thought Leaders. Connector | Author | Influencer. On a mission to democratise responsible AI. Crafted 100 GPTs in 100 days.

5 months

Very interesting. I’ll need @Robin Davis to explain it to me in simpler terms. Thanks for sharing Steve Fouracre, great to have your knowledge in here.

Peter Gascoyne

Founder at RavenApps, Native Salesforce ISV Partner, Grids: Spreadsheet-style workspace

5 months

Thanks for sharing Steve
