Building Trust in AI: Insights on AI Accountability from NIST and Tech Industry Executives

The tech industry is leading AI accountability efforts to protect consumers and bolster public trust. ITI convened top minds from the National Institute of Standards and Technology (NIST) and industry leaders from ITI members Intuit, Accenture, and IBM for AI Accountability: How Tech is Building Consumer Trust, a virtual event focused on how tech is shaping the future of AI with responsible practices and transparent policies.

The event also highlighted ITI’s AI Accountability Framework, a comprehensive set of consensus tech sector practices companies are using to develop and deploy AI technology safely and securely.

Explore key takeaways from the event and learn how the tech industry and government are collaborating to set new standards for AI accountability and foster a safer, more trustworthy digital future.

AI Policy Overview – U.S. and Global Perspectives

ITI President and CEO Jason Oxman opened the event by introducing ITI’s AI Accountability Framework:

“This wave of policymaking activity is one of the reasons ITI developed our AI Accountability Framework. This framework, a first-of-its-kind consensus set of best practices, is designed to advance the responsible development and deployment of AI, including specific practices for frontier AI models. It complements existing frameworks and initiatives, such as the G7 International Code of Conduct, and addresses key themes in the global conversation on AI safety, particularly the management of evolving risks associated with highly capable AI models.”

NIST AI Safety Institute – Driving Innovation, Mitigating Risk

Next, Conrad Stosz, Director of Policy at NIST’s AI Safety Institute, discussed driving innovation while mitigating risks with ITI’s Vice President of Policy, Trust, Data, and Technology, Courtney Lang:

"We will soon be launching a series of targeted task forces to help us transparently and openly develop practical approaches for putting these [Executive Order deliverables] into practice, including ones that were released already, as well as informing future Executive Order actions, and potentially the broader evaluation work of the Safety Institute. We’re expecting to set up these task forces to help us hit the remaining deadlines and to continue to inform this technical work that's very much still on site as well."

Stosz also highlighted the need for flexibility given the rapid pace of AI advancements:

"It's not just about political uncertainty. We have so much uncertainty from just how rapidly the technology is developing and the industry is evolving. So, I think we are operating very much in a mode of agility, and being ready to change and evolve as priorities change. But I do think going back to sort of the core vision of the AI Safety Institute, which is creating safe AI and doing that with the knowledge of the fact that safety is what drives adoption and adoption and safety drive further innovation. We really see this core mission as enduring. It's fundamentally enduring. We're seeing again so much progress on AI so quickly."

The Need for Global Cooperation and Diverse Viewpoints

Stosz also underscored the global nature of AI and the necessity for international collaboration:

"It’s very clear to us that we cannot and should not be working on AI safety in individual, national silos. AI is a global technology, and so it requires global solutions, so there’s obviously tremendous benefit to standardizing or aligning international approaches to safety. As a result of that, any meaningful work has to be done in concert with other governments’ AI Safety Institutes and other scientific institutions around the world looking to uphold human safety and trust in the context of artificial intelligence."

Accountability Across the AI Ecosystem

In the next session, William Carter, Responsible AI Lead at Accenture Federal Services; Daniela Combe, Vice President of Emerging Technology Advocacy at IBM; and Shannon Orr, Assistant General Counsel, AI at Intuit, shared their perspectives with moderator John Miller, ITI’s Senior Vice President of Policy, Trust, Data, and Technology and General Counsel.

Diverse Voices Across the Sector

Daniela Combe of IBM echoed the importance of diverse voices in developing effective AI frameworks:

"I thank ITI for bringing together so many voices across the technology industry to develop the AI Accountability Framework. It definitely takes a lot of diverse voices on these topics and lots of points of view to end up with a better result for all of us."

The Importance of Transparency When Building Trust

Intuit’s Shannon Orr emphasized the continuous effort needed to maintain trust with consumers:

"Trust is a conversation, and it is an ongoing conversation, and it's an ongoing relationship that we have with our customers that we are responsible for every single day and with every new product that we put out into the market [...] Fundamentally developing trust with our customers means that they understand what choices are available to them, and that they have the information to assess whether or not the product is actually helping them. We always strive for the right level of transparency and explainability in our AI, and we focus particularly on providing customer-facing explanations."

Daniela Combe of IBM chimed in to stress the importance of transparency in AI development:

"Transparency – it starts with the data. Be transparent about your sources. Don't hide your AI, and anything you're doing. When we build models, we put out papers that talk both about the datasets that we use to train our models and the testing we apply, and we think that is the best practice. And if you're not willing to do that, you probably shouldn't be in this space."

Introducing the Role of the “Integrator” in the AI Value Chain

Will Carter highlighted the critical role of integrators, a newly defined term in ITI’s AI Accountability Framework for an intermediate actor in the AI supply chain, and explained the growing reliance on integrators across various industries:

"The vast majority of the AI ecosystem in a few years is going to be financial companies, retail companies, healthcare companies...their core expertise is not technology, is not artificial intelligence. They're never going to be able to build that muscle, to do everything for themselves. An integrator can come in to bridge the gap between [companies] and their vendors [to] help them to understand the technology they're using within the context of their own particular business and operations and help them to manage that risk throughout the value cycle."

Click here to watch the full event.
