Generative AI delivers the most value when LLMs are simple and secure to manage. So NetFoundry is thrilled to help AI innovators build simple, secure AI via our joint solution with Aarna Networks and Predera. Two AI innovations we are excited about:
- SuperUI. How many apps do you use today via their own command line, web UI or thick-client GUI? What if an LLM were your interface, mixing text and voice? You tell the LLM 'agent' what you need to do, rather than navigating n different custom interfaces: you move from an imperative model to a declarative one. The agent might even respond that your request isn't possible for reason x, and/or suggest another way to accomplish your goal using the data, systems and API integrations you do have. This isn't far-fetched on the backend - APIs do much of this work today. On the user-interface side, we need to take some latency out. Current LLM token generation doesn't move at the speed of conversation - it is too choppy, especially because the current LLM architecture for 'memory' often means re-submitting the whole conversation on each turn. This is not to say that SuperUIs will replace every dedicated interface, but there will be innovation here.
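The 'memory' point above can be made concrete with a minimal sketch. This is a hypothetical toy, not a real model API: the `llm()` stub and the word-based token count are illustrative assumptions. It shows how, when each turn re-submits the full conversation, the prompt the model must process grows every exchange - which is exactly where the per-turn latency creeps in.

```python
def count_tokens(text: str) -> int:
    """Crude token estimate: one token per whitespace-separated word."""
    return len(text.split())

def llm(prompt: str) -> str:
    """Stand-in for a model call; a real agent would hit an LLM API here."""
    return f"(reply to {count_tokens(prompt)} prompt tokens)"

history: list[str] = []
prompt_sizes: list[int] = []
for user_msg in ["book a meeting", "make it Friday", "invite the team"]:
    history.append(f"user: {user_msg}")
    prompt = "\n".join(history)          # the whole conversation, every turn
    prompt_sizes.append(count_tokens(prompt))
    history.append(f"assistant: {llm(prompt)}")

print(prompt_sizes)  # each turn's prompt is larger than the last
```

Because the prompt grows linearly with conversation length (and attention cost grows faster than that), the lag compounds - one reason streaming responses and smarter memory schemes matter for conversational interfaces.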
- Custom, small LLMs. This somewhat counters the above, but both will happen. The next-token predictions of today's (mainly) general-purpose LLMs are amazing, especially considering they run on (mainly) general-purpose GPUs. What happens when custom LLMs - trained on custom data, with subject-matter-expert humans in the loop for reinforcement learning - are paired with custom ASICs and next-gen GPUs? Innovation. New 'custom' LLMs with the cost and speed advantages (e.g. the low latency needed for natural, smooth, full-duplex conversation) to multiply at rates closer to software speed, whereas today we are mainly gated on the infrastructure/GPU side. Individuals or small teams will be able to iterate and experiment on smaller problems - smaller than 'AGI' or general-purpose AI, but interesting - and other AIs and APIs can bridge these custom LLMs to serve even more use cases.
In both of those examples, data privacy, integrity and security are extremely important. Latency will often matter too, as will operational visibility, control and agility - and perhaps custom hardware. Together with the innovation we are seeing in open source AI (check out Hugging Face), we believe this will result in many models running, at least partially, in edge data centers, on premises, and even on user, OT and IoT devices. Hence we are focused on working with the open source community (our zero trust networking platform, OpenZiti, is open source) and innovative partners like Aarna and Predera to do our part in giving this ecosystem the speed and security it needs to maximize innovation.
Learn more from Aarna here, and start in minutes.
#GenAI #OpenSource #ZeroTrust #CloudEdgeML #OpenZiti
Looking forward to an exciting joint journey!
#GenerativeAI, let’s be honest, scares some people. Instead of telling them they are crazy or have no rational basis for their fear, it’s a better tactic to demonstrate security around #AI and #LLM. #ZeroTrust goes both ways. The best way is demonstrating why we don’t need to trust anything yet can deliver transactionally functional systems. The wrong way is to let fear drive an absence of trust.