Simple, secure LLMs accelerate innovation

Generative AI delivers the most value when the LLMs are simple and secure to manage. That is why NetFoundry is thrilled to help AI innovators with simple, secure AI via our joint solution with Aarna Networks and Predera. Two AI innovations we are excited about:

  • SuperUI. How many apps do you use today via their custom command line, web UI or thick-client GUI? What if an LLM were your interface, and that interface could mix text and voice? You tell the LLM 'agent' what you need to do, rather than navigating (n) different custom interfaces; you move from an imperative model to a declarative model. Maybe the LLM agent even responds that you can't do it because of x reasons, and/or suggests another way to accomplish your goal, one for which you do have the data, systems and API integrations. This isn't too far-fetched on the backend: APIs do much of this work today. On the user-interface side, we need to take some latency out. Current LLM token generation doesn't let us move at the speed of conversation; it is too choppy, especially because the current LLM architecture for 'memory' often means re-submitting the whole conversation each time. This is not to say that SuperUIs will replace every dedicated interface, but there will be innovation here.
  • Custom, small LLMs. This somewhat counters the above, but both will happen. The 'next word' or next-token predictions of the (mainly) general-purpose LLMs are amazing, especially considering they are running on (mainly) general-purpose GPUs. What happens when custom LLMs, trained on custom data with subject-matter-expert humans in the loop for reinforcement learning, are paired with custom ASICs and next-gen GPUs? Innovation. New 'custom' LLMs with both the cost and speed advantages (e.g. the latency necessary for natural, smooth, full-duplex conversation) to multiply at rates closer to software speed (whereas today we are mainly gated on the infrastructure/GPU side), meaning individuals or small teams will be able to iterate and experiment on solving smaller (smaller than 'AGI' or general-purpose AI) but interesting problems. And of course other AIs and APIs can bridge these custom LLMs to serve even more use cases.
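The 'memory by resubmission' cost mentioned above can be made concrete with a toy simulation. This is an illustrative sketch, not a real LLM API: the `fake_llm` function and its word-count token accounting are stand-ins, but the pattern of resending the full message history each turn mirrors how stateless chat-completion APIs typically work.

```python
def fake_llm(messages):
    """Stand-in for a stateless chat-completion call: the model sees only
    what is in `messages`, so all prior context must be resent each turn."""
    tokens_processed = sum(len(m["content"].split()) for m in messages)
    reply = f"ack:{len(messages)}"
    return reply, tokens_processed

history = []
total_tokens = 0
for user_text in ["book a flight", "make it Tuesday", "aisle seat"]:
    history.append({"role": "user", "content": user_text})
    reply, used = fake_llm(history)  # the whole history is resent every turn
    history.append({"role": "assistant", "content": reply})
    total_tokens += used

# Work grows superlinearly with conversation length (3 + 7 + 10 here),
# which is the latency and cost problem the bullet above points at.
print(total_tokens)
```

Techniques like KV caching, conversation summarization and smaller custom models all attack this same growth curve from different angles.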

In both of those examples, data privacy, integrity and security are extremely important. Latency will often matter too, as will operational visibility, controls and agility, and in some cases custom hardware. Together with the innovation we are seeing in open source AI (check out Hugging Face), we believe this will result in many models being run, at least partially, in edge data centers, on premises and even on user, OT and IoT devices. Hence, we are focused on working with the open source community (our zero trust networking platform, OpenZiti, is open source) and innovative partners like Aarna and Predera to do our part in giving this ecosystem the speed and security it needs to maximize innovation.

Learn more from Aarna here, and start in minutes.

#GenAI #OpenSource #ZeroTrust #CloudEdgeML #OpenZiti

Brandon Wick Amar Kapadia Sriram Rupanagunta Nazeer H Shaik Philip Griffiths

Looking forward to an exciting joint journey!

Iain Struan

CIO & CISO | Cybersecurity Leader | AI & Zero-Trust Innovator | GRC and Ransomware Prevention Expert | Protecting What Matters Most

1y

#GenerativeAI, let’s be honest, scares some people. Instead of telling them they are crazy or have no rational basis for their fear, it’s a better tactic to demonstrate security around #AI and #LLM. #ZeroTrust goes both ways. The best way is demonstrating why we don’t need to trust anything yet can still deliver transactionally functional systems. The wrong way is to let fear drive an absence of trust.



More articles by Galeal Zino

  • Develop once, deploy anywhere, deliver everywhere

    Adding secure networking to the developer platform empowers product teams to develop once, deploy anywhere and deliver…

    2 comments
  • The AI fork in the road

    We are at a fork in the road. In one direction, AI is basically meaningless in the context of cybersecurity and…

    1 comment
  • Your private AI is public

    Many enterprises are deploying private AI. Keeping the AI (e.

    4 comments
  • Security-first

    You'll find the Castillo de San Marcos National Monument area in St. Augustine, Florida.

    6 comments
  • The only way to win the cybersecurity war

    I know the future I don’t have a crystal ball, nor a predictive AI, but I do know how the next UnitedHealth, Snowflake…

    2 comments
  • UnitedHealth breach

    Note: I helped with the breach recovery but all info below is public, and all opinions are my own. The notorious $1.

    10 comments
  • Open source magic

    Open source innovation "No matter who you are, most of the smartest people work for someone else” This terrific…

    1 comment
  • Zero trust Ansible

    DevOps goals hit a brick wall You already know how awesome Ansible is. Unfortunately, you have also experienced the…

    3 comments
  • A trillion dollar cybersecurity assumption

    Cyberattacks will cost us $1 trillion this year, despite our spending of $150 billion trying to protect ourselves. Old…

    3 comments
  • Attacking Ransomware

    Gas, toilet paper, ransomware and the business WAN The ransomware attack on the Colonial Pipeline business WAN raised…