Collaborate With Your Competitors

#LeadershipInTheAgeOfAI

Artificial intelligence (AI) is one of the most powerful tools for business competitiveness and differentiation. Alongside the adoption and deployment of automation and machine learning models, the past several months have also seen the rapid proliferation of a new kind of AI known as "generative AI." With large language models, image generators, code generators and more, the AI landscape is becoming more crowded by the day.

As with other valuable business investments, there may be an inclination to hold hard-earned insights and use cases close. Yet, when it comes to AI's implications for enterprise risk and ethics, sharing knowledge with competitors in your industry is important to the longer-term, trustworthy use of these transformative tools.

The reason is twofold. First, enhancing risk mitigation and governance can promote more valuable, trustworthy AI applications. Second, governments worldwide are exploring how to regulate AI, and leading by example can encourage rulemaking that enables AI use rather than stifling it. This only becomes more important as governments face calls to regulate generative AI. From my perspective, the timeline for regulatory action might be shortening.

With so much AI potential, businesses across every industry are making significant investments. One outcome, however, is that half of executives point to managing AI-related risk as a top challenge, according to the fifth edition of Deloitte's State of AI in the Enterprise report; I help lead Deloitte's AI Institute. Mitigating AI risk is complicated by the fact that the ways these tools are being used today are, in many cases, fundamentally new. The "rules of the road" for the trustworthy use of AI are still being discovered.

Every pilot or deployment has the potential to reveal an unrecognized risk or an effective method to mitigate it. Questions arise: What lessons do your competitors hold that could enable greater trust in your own applications? What lessons do you have that may be helpful to them? As experimentation and deployments increase across an industry, each organization is gaining valuable knowledge, and there is business logic in sharing it.

A good analogy is the mass adoption of the consumer automobile. When cars became ubiquitous, best practices had to be invented for this new fact of everyday life—speed limits, safety requirements, standards for parts and everything else that shapes how a car is built, operated and maintained. At the time, it was in the interest of auto manufacturers and their customers to share knowledge across the industry.

I see a similar environment today with AI. Lessons learned through developing and deploying AI can be fed back into development cycles to mitigate risk and improve applications, with important impacts on enterprise security, efficiency, customer engagement and return on investment. Yet, if learning by doing is a hallmark of today's AI applications, it follows that any one enterprise can glean only so much from its own deployments. A rising tide lifts all boats, and when it comes to AI risk and governance, collaboration can elevate AI applications across an entire industry.

The exchange of leading practices is important, in part, because it moves toward industry self-regulation, wherein businesses acknowledge and follow standards and leading practices because they elect to and because it is in the best interest of their business and customers—and not because regulations compel it. Taking this proactive approach toward AI risk mitigation and governance secures early confidence that the enterprise is taking steps to use AI in a trustworthy way, and it also sets an important example.

AI regulations and laws will proliferate in the months and years ahead, propelled in part by growing industry calls to regulate generative AI. An industry wherein companies are already self-regulating is positioned to help shape government rulemaking. Regulators cannot inspect at a technical level all of the AI applications that are emerging across industries, particularly as innovation and deployment are occurring at such a rapid pace. When regulators consider how to develop rules that guide AI in the marketplace, they will likely look to known harms, as well as to known remedies and preventative measures. What becomes codified may be informed by industry example. This moves toward meaningful regulations that accommodate how AI is being used. Conversely, an industry that lacks examples of trustworthy AI applications may encounter regulations that are more stringent and disruptive because regulators lack case studies in excellence.

To begin steering the organization and the industry toward self-regulatory action, look to meetings and forums where people can gather to share insights, lessons learned, emerging risks and potential solutions. Bringing together executives, subject matter experts and even line-of-business users creates fertile ground for creative thinking and momentum toward self-regulation. Business leaders might coordinate with competitors to schedule a conference, seek one another out at industry events or leverage digital platforms to connect anywhere in the world.

The goal is to create a collaborative environment that is conducive to sharing insights and working to advance the entire industry toward the trustworthy application of AI.

Originally published at https://www.forbes.com/sites/forbesbusinesscouncil/people/beenaammanath1/?sh=33cfc0b2bd3e


Tamara McCleary

Academic research focus: science, technology, ethics & public purpose. CEO Thulium, Advisor and Crew Member of Proudly Human Off-World Projects. Host of @SAP podcast Tech Unknown & Better Together Customer Conversations.

4 months

This is an excellent article, Beena, and I absolutely agree with you. And I particularly like this call out — "The exchange of leading practices is important, in part, because it moves toward industry self-regulation, wherein businesses acknowledge and follow standards and leading practices because they elect to and because it is in the best interest of their business and customers—and not because regulations compel it. Taking this proactive approach toward AI risk mitigation and governance secures early confidence that the enterprise is taking steps to use AI in a trustworthy way, and it also sets an important example." I also recently wrote an article related to this exact topic — if you're interested in taking a peek, here's the link; let me know what you think! https://www.dhirubhai.net/pulse/navigating-digital-trust-paradox-reflections-ethics-human-mccleary-6jx4e/

Lucie Newcomb

Global Business/GTM Markets Entry. | Communications | Boards | Transformational Leadership

4 months

Thanks for this cogent point, Beena Ammanath. FYI Bruno Herrmann

Riley Coleman

I help non-tech professionals build confidence using AI at work

4 months

This is a thoughtful piece, and the comparison between the evolution of the automobile industry and the current trajectory of AI is spot-on. Just as the automotive industry developed best practices and regulations to ensure safety and efficiency, the AI industry must now focus on creating standards that prioritize ethical use and societal benefits. How can we ensure that these AI standards are both comprehensive and adaptable to the blistering rate of technological change? And who do you think helps write the rulebook or police the streets?
