All Things Open AI: Showing the Power and Challenges of AI

This past week, I attended All Things Open AI in Durham, NC. The conference tackled some of the most important issues in today’s AI space, from the real challenges of implementing AI in the enterprise to scaling AI infrastructure and governing AI responsibly. Before I dive into my thoughts, though, I first want to give a huge thank you to Mark Hinkle and Todd Lewis for organizing an event that continues the tradition of All Things Open while establishing itself as a unique space for those building and using AI today.

While I spoke about LLM evaluations and governance (here is a PDF version of my slides from the event), there were many other incredible discussions. The talks at the event explored the tension between open-source and proprietary AI, the rise of AI agents, the infrastructure challenges behind scaling AI, and the ongoing battle against AI bias. The speakers also shared real-world implementations, challenges, and even AI failures, making it clear that we are all still figuring out how to make AI work at scale, safely and responsibly.


Photo Credit: Mark Hinkle, All Things Open AI

AI Governance and the Urgent Need for Better Evaluations

One of the most important themes of the conference was the need for AI governance, evaluation, and transparency. My own talk focused on how LLMs do not behave like traditional software: the same prompt can produce different outputs depending on context, temperature settings, and training data drift. This unpredictability makes rigorous LLM evaluations essential, especially in industries like finance, healthcare, and legal services, where accuracy is crucial.
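The evaluation idea above can be sketched as a tiny harness. This is an illustrative sketch, not code from my talk: `call_model` is a hypothetical stand-in for any LLM API (a real client would be nondeterministic, which is exactly why you run evals repeatedly), and the grader is a deliberately simple exact-match check.

```python
# Minimal LLM evaluation harness (illustrative sketch; call_model is a
# hypothetical stand-in for a real LLM API call).

def call_model(prompt: str) -> str:
    """Placeholder for an LLM call; a real client would be nondeterministic."""
    canned = {
        "What is 2 + 2?": "4",
        "Capital of France?": "Paris",
    }
    return canned.get(prompt, "I'm not sure.")

def run_eval(cases: list[tuple[str, str]]) -> float:
    """Score each (prompt, expected) pair with exact match; return pass rate."""
    passed = sum(1 for prompt, expected in cases
                 if call_model(prompt).strip() == expected)
    return passed / len(cases)

cases = [
    ("What is 2 + 2?", "4"),
    ("Capital of France?", "Paris"),
    ("Who wrote Hamlet?", "William Shakespeare"),
]
print(f"pass rate: {run_eval(cases):.2f}")  # 2 of 3 canned cases pass here
```

In practice the exact-match grader would be replaced with something domain-appropriate (semantic similarity, a rubric, or an LLM-as-judge), and the suite would be re-run on every model or prompt change to catch drift.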

Melissa Nysewander from Fidelity reinforced this concern, explaining how companies are rushing to deploy AI but often fail to consider long-term governance and regulatory risks. She pointed out how bias in AI-driven hiring tools can lead to discrimination and lawsuits, and how companies that don’t think about governance from the start may find themselves facing major legal and financial consequences.

The consensus is that if companies don’t invest in AI governance now, they will pay the price later, either through regulatory penalties or brand-damaging AI failures.

The Future of AI is Open… But What Does That Actually Mean?

A recurring debate throughout the event was whether AI should be open-source or proprietary, and what "open" actually means in the context of AI. Several speakers, such as Dr. Ruth Akintunde from the SAS Institute, touched on this idea, highlighting the advantages and trade-offs of both models and how organizations are likely to combine them to build practical AI solutions.

JJ Asghar from IBM explained that while open-source AI is gaining momentum, enterprises still need structured, controlled environments to manage AI risk, security, and compliance. Also from IBM, Sriram Raghavan discussed the push for a more open AI ecosystem to give developers better tools for working with unstructured AI data and orchestrating AI workflows, signaling a shift toward community-driven AI development.

Sheng Liang from Acorn Labs added to this conversation by showcasing how open AI agent frameworks can be used to automate enterprise workflows, such as turning Slack messages into GitHub issues or using AI-powered resume filtering. His demonstration showed that open AI is not just about models. It’s about building ecosystems where AI can be extended, modified, and integrated into real-world systems.
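A workflow like the Slack-to-GitHub example can be sketched in a few lines. Everything here is hypothetical and not taken from the demo: a real agent would call the Slack and GitHub APIs and use an LLM for classification, where this sketch stubs both with plain functions.

```python
# Sketch of an agent-style workflow that turns a Slack message into a
# GitHub issue draft. All names are hypothetical; a real version would
# call the Slack and GitHub APIs and use an LLM to classify the text.

def looks_like_bug_report(message: str) -> bool:
    """Naive keyword stand-in for what an LLM classifier would decide."""
    keywords = ("error", "broken", "crash", "fails", "bug")
    return any(word in message.lower() for word in keywords)

def draft_issue(message: str, author: str) -> dict:
    """Turn a flagged message into a GitHub-issue-shaped payload."""
    title = message.splitlines()[0][:80]  # first line, truncated, as title
    return {
        "title": title,
        "body": f"Reported by {author} on Slack:\n\n{message}",
        "labels": ["from-slack", "triage"],
    }

msg = "Login page crashes on Safari\nSteps: open /login, click submit."
if looks_like_bug_report(msg):
    issue = draft_issue(msg, author="@jane")
    print(issue["title"])  # prints "Login page crashes on Safari"
```

The point of the ecosystem argument is that each stub here is a swappable integration point: the classifier, the source of messages, and the destination system can all be extended without rewriting the pipeline.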

AI Agents and Automation: Moving Beyond Chatbots

Another major trend was the rise of AI agents. While chatbots have been the most visible AI applications in the past year, speakers at the event made it clear that AI is moving beyond simple text generation and into full-fledged automation systems.

Rachel-Lee Nabors from TinyFish painted a picture of a future where AI agents replace traditional web interfaces, allowing users to navigate and interact with online content using natural language commands instead of manual searches. She also demonstrated how AI can extract and summarize information from disparate web sources, reducing the time and effort required to find relevant data.

Ben Ilegbodu from Netflix took a more practical approach, showing how AI can be "sneakily" integrated into existing applications to enhance user experiences. He demonstrated how AI-assisted form filling, real-time feedback generation, and predictive decision-making can help users complete tasks faster, without requiring them to learn entirely new interfaces. His talk underscored the idea that the best AI is often the one that users don’t even realize they’re using.

AI at Scale: The Infrastructure Challenge No One Talks About

While much of the AI conversation focuses on models and applications, several talks at the event highlighted the massive infrastructure challenges behind scaling AI.

Shashank Kapadia from Walmart explained how memory and compute limitations are becoming a bottleneck for enterprise AI deployments. As companies deploy larger AI models, they often underestimate the cost and complexity of maintaining them in production. He discussed techniques for optimizing AI performance, such as memory-efficient training methods, hardware acceleration, and distributed AI workloads.
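One concrete technique in the same family as the memory-efficient training methods mentioned above is gradient accumulation (this sketch is my own illustration, not from the talk): instead of holding one large batch in memory, you sum gradients over small micro-batches and apply a single update, trading a little compute for a smaller memory footprint.

```python
# Gradient accumulation on a toy 1-D linear model (illustrative sketch).
# Only one micro-batch's worth of data needs to be "in memory" at a time,
# yet the update matches a full-batch gradient step.

def gradient(weight: float, x: float, y: float) -> float:
    """d/dw of squared error for the model y_hat = w * x."""
    return 2 * (weight * x - y) * x

def accumulate_step(weight, batch, micro_size, lr=0.01):
    """Sum micro-batch gradients, then apply one averaged optimizer step."""
    total = 0.0
    for i in range(0, len(batch), micro_size):
        micro = batch[i:i + micro_size]  # only this slice is live at once
        total += sum(gradient(weight, x, y) for x, y in micro)
    return weight - lr * total / len(batch)

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]  # y = 2x
w = 0.0
for _ in range(200):
    w = accumulate_step(w, data, micro_size=2)
print(round(w, 3))  # converges toward 2.0
```

Frameworks apply the same idea at scale (alongside activation checkpointing and distributed workloads), which is why the technique keeps showing up in enterprise deployment discussions.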

Dhivya Nagasubramnian from US Bank echoed these concerns, emphasizing that many companies rush to deploy AI pilots without considering long-term scalability. She stressed the importance of iterative AI development, where models are continuously tested, fine-tuned, and optimized before they are rolled out at scale.

Final Thoughts: AI’s Future is Open, Ethical, and Integrated

The All Things Open AI conference made it clear that many of the hardest problems around governance, transparency, and scalability are far from solved. However, the talks offered an optimistic perspective: AI is not just about building better models, but about making AI work in the real world safely, ethically, and efficiently.

At the end of the day, AI’s future will be shaped by how well we evaluate and govern it, how open AI continues to evolve, and how effectively we integrate AI into real-world workflows. It was exciting to see so many brilliant minds tackling these challenges head-on, and I’m already looking forward to the next All Things Open AI.


AI Updates

This JFrog Report looks at key vulnerabilities in the 2025 software supply chain, emphasizing the need for visibility, automation, and unified security strategies to address gaps.


Elizabeth Montalbano dives into the Tenable Cloud AI Risk Report 2025 which reveals that organizations deploying AI in the cloud are repeating past security mistakes.


Reuven Cohen looks at Aider’s polyglot benchmark which tests LLM coding skills by having them tackle 225 of Exercism’s toughest challenges.


This repository explores "medical hallucination," where AI models generate incorrect medical information, highlighting its unique risks in healthcare, and introduces benchmarks and tools to study and mitigate these errors.


Europol’s Serious and Organised Crime Threat Assessment 2025 highlights the growing use of AI by European criminal networks to facilitate cybercrime and money laundering.


John Leyden looks at rising threats targeting AI development pipelines and widespread vulnerabilities in open-source and third-party software.


Eduard Kovacs dives into Nvidia patching two vulnerabilities in its Riva AI services, including a high-severity flaw (CVE-2025-23242) that could enable privilege escalation and data tampering.


Paulina Okunytė examines GitGuardian's 2024 research, which reveals a 25% surge in leaked hardcoded secrets on GitHub, with over 23 million new exposed credentials.

