Building the road to trustworthy GenAI and LLMs
The GENZERO Workshop, hosted by TII’s Secure Systems Research Center, brought together hundreds of researchers from around the world to discuss the opportunities and challenges of bringing zero trust approaches to foundation models and of using GenAI tools to improve the trustworthiness of autonomous systems. Although zero trust originated in network security, the SSRC has been actively exploring how zero trust principles could apply to security, safety, and resilience more broadly.
GENZERO considered how these same concepts might be further expanded to GenAI, particularly LLMs. The conference sparked widespread discussion across research from many disciplines on practical research and development projects that could support these goals.
It has been incredibly rewarding to see all of the progress that has been made since our founding in 2019. From the start, the focus has been on creating what we call a zero trust end-to-end secure, safe, and resilient autonomous platform, and we have made incredible progress on that front.
Since our last conference two years ago, we have introduced a new focus area: zero trust with generative AI and LLMs. That’s our new direction and where we intend to expand. Today, a flight mission operator based in the cloud manages critical functions like ground control, fleet management, and mission planning. You can think of this as mobile robot management that authenticates everything and ensures it works as expected.
This allows for a secure communications and software platform, which means we can trust thousands of devices running in the field. Security, resilience, and safety are what matter in a platform that large-scale systems can grow on and be trusted.
It's also important to think about a platform approach for this to really take off. I have worked for most of my life on the Windows platform, which runs on top of the x86 hardware platform. What we are going to do for autonomous systems like robots and drones is very similar to that.
We have been developing a secure autonomous platform that includes flight mission operations, communication systems, and edge devices that you can tune for whatever application you want. Today, we have many flavors of our secure Saluki flight controller and mission computer system. It connects with the system through a secure communication shield over software-defined radios, Wi-Fi, and LTE. Additionally, we have developed a flight and mission operation system, first based in the cloud but now also running on a secure laptop. The hierarchical system secures all endpoints and devices.
The GENZERO vision
The next step is extending this vision to generative AI and LLMs that will support a secure autonomous platform of robots that move, fly and walk. Our goal is to form a secure and trusted intelligent network of robots.
We are exploring how to take this to the next level with secure runtime assurance, predictive maintenance, and proactive threat management. This requires innovations in information fusion for decision-making: within robots for sensor fusion and between robots for collective or collaborative fusion.
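To make the sensor-fusion side of this concrete, here is a minimal sketch of the classic inverse-variance weighting that underlies many fusion pipelines. This is an illustrative example, not SSRC code; the sensor names and numbers are assumptions.

```python
# Minimal sketch (illustrative, not SSRC's implementation): fuse two noisy
# readings of the same quantity, e.g. altitude from a barometer and GPS.
# The lower-variance (more trusted) sensor gets proportionally more weight.

def fuse(estimate_a: float, var_a: float,
         estimate_b: float, var_b: float) -> tuple[float, float]:
    """Inverse-variance weighted fusion of two independent estimates."""
    w_a = var_b / (var_a + var_b)          # weight grows as the OTHER sensor's noise grows
    w_b = var_a / (var_a + var_b)
    fused = w_a * estimate_a + w_b * estimate_b
    fused_var = (var_a * var_b) / (var_a + var_b)  # always below either input variance
    return fused, fused_var

# Hypothetical readings: barometer 100.0 m (variance 4.0), GPS 104.0 m (variance 1.0).
altitude, variance = fuse(100.0, 4.0, 104.0, 1.0)
# The fused estimate lands closer to the GPS value, with lower uncertainty than either sensor.
```

The same weighting idea extends to fusion between robots: each robot's estimate is combined with its peers' according to how much each is trusted, which is where zero trust attestation of the data source becomes essential.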
We need to consider not just robots but also supporting first responders, as we are doing with Caltech to improve firefighting and save lives using cameras, communicators, and collective intelligence. This is one of the things we want to expand on.
Integrating LLMs and GenAI can further this vision by helping us build more capable and trusted autonomous systems. ChatGPT was introduced two years ago, around the same time as our last conference, and has grown at lightning speed and inspired new ways to improve our lives. For example, we have developed a copilot interface on top of our platform that can tell operators about issues and make it easy to tell a drone or a fleet what to do without having to specify all of the waypoints. This reduces complexity, allowing even small teams or a single operator to oversee a large fleet of drones or robots.
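To illustrate the idea of an operator issuing intent rather than waypoints, here is a hypothetical sketch of the shape such a copilot layer could take. Every name here (`MissionCommand`, `plan_from_instruction`, the toy parsing rule) is an assumption for illustration, not the actual platform's API; in practice an LLM, not a rule, would do the parsing.

```python
# Hypothetical sketch (not the actual copilot): a layer that turns a
# high-level operator instruction into a structured mission command,
# so the operator never specifies raw waypoints.
from dataclasses import dataclass
from typing import Optional

@dataclass
class MissionCommand:
    action: str            # e.g. "survey" or "return_home"
    area: Optional[str]    # a named region instead of explicit coordinates
    drones: int            # how many vehicles to task

def plan_from_instruction(instruction: str) -> MissionCommand:
    """Toy rule-based stand-in for the LLM that would parse operator intent."""
    text = instruction.lower().strip()
    if text.startswith("survey "):
        # Delegate the area name; downstream planners expand it into waypoints.
        return MissionCommand(action="survey",
                              area=text.removeprefix("survey ").strip(),
                              drones=4)
    return MissionCommand(action="return_home", area=None, drones=0)

cmd = plan_from_instruction("Survey the northern sector")
# cmd.action == "survey", cmd.area == "the northern sector"
```

The point of the structure is trust: the LLM's free-form output is forced through a typed, auditable command schema before anything reaches a vehicle, which is exactly where zero trust checks can be inserted.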
The path ahead
We are starting to realize that GenAI and LLMs can also cause us to lose trust in new ways. They hallucinate and make mistakes, which could be critical when trying to use them in emergencies like fighting fires.
I am fascinated by ChatGPT. It's an amazing conversational partner when exploring new ideas and helps me think about new possibilities I had not considered. But it can make things up, like fake references and even fake people.
These tools can also cause harm when we don’t pay attention to the data they are trained on. Professor Ed Lee at Berkeley talked about how some of these problems occurred when Microsoft released the Tay chatbot in 2016, which learned from interacting with people on Twitter. They had to shut it down after only a day, once it learned to say harmful things.
How do we move forward? We need to start thinking about teaching these systems values, the way we teach our children. For example, I am a vegan, and when I see a spider at home, I pick it up and put it outside. My son recently told me how he had stopped to save a turtle stranded after a hurricane. That’s when I knew something good had resonated with him. We need to start thinking about how we can do this for LLMs and other GenAI to ensure the data we give them is pure.
Some kinds of GenAI, such as those for detecting intrusions or anomalies, are easier because they are more constrained in what they can do. But LLMs are language models trained on much larger collections of human dialog, and we can only guess how biased they might be and what they can do. So we also need to start thinking about the data we use to train these systems and about their explainability when they go wrong.
Some LLMs, like ChatGPT and Anthropic’s Claude, work amazingly well most of the time. But they are black boxes that are hard to understand and tune. There has also been a lot of progress on open source LLMs like Meta’s Llama and TII’s Falcon. However, we also need to be able to see what kind of data goes into these models and how it might bias decisions or cause harm.
What GenAI and LLMs can do today is phenomenal, and there is so much more we could do if it were a little bit more advanced and learned from its mistakes so that we could trust it more. This will require thinking through all the pieces needed to build trustworthy, secure, safe, and resilient autonomous systems using LLMs and GenAI to evolve our trust and their capabilities together.
Great opportunity