TomTalks with Kathleen Garenani, Director of Responsible AI & AI Bias at BDO Digital
Tom Popomaronis
Innovation Leader | GenAI Expert | HBR Contributor | 40 Under 40 | Host of TomTalks
In today's TomTalks, we're diving into the critical world of responsible AI with Kathleen Garenani, Director of Responsible AI & AI Bias at BDO Digital. As businesses rush to adopt AI, Kathleen offers invaluable insights on navigating the ethical implications and potential biases of AI implementation.
Let's explore how companies can harness AI's power responsibly and ethically!
About Kathleen Garenani:
Kathleen leads BDO Digital’s Responsible AI and AI Bias team as a director in the Washington, DC office. Her client service experience involves data governance, information technology program management, information security programs, and various process improvement initiatives for complex and highly visible customers within the federal, public, and private sectors. Kathleen has worked extensively on a large-scale monitorship for a major international telecom entity and is responsible for leading a large team providing analysis of data and reporting of export controls in compliance with US regulations.
Kathleen has over twelve years of experience leading information security programs, enterprise-wide risk mitigation, data management, and process improvement initiatives. Before joining BDO, Kathleen served as Information Governance Operations Lead for the White House Information Governance office, where she conducted multiple process improvement programs, defining and implementing data privacy initiatives for Presidential and Federal records data.
Tom: Tell us about your organization and your role.
Kathleen: BDO Digital is the technology consulting division of BDO USA, a professional services firm that provides assurance, tax, and advisory services to a diverse range of clients. We help people harness the full power of technology to become faster, smarter, and more resilient to change.
BDO Digital offers artificial intelligence (AI) services across the spectrum of business needs, from helping our clients develop an AI adoption strategy to supporting AI implementation, change management, cybersecurity, and ongoing monitoring and support of AI programs.
As BDO Digital’s Director of Responsible AI & AI Bias, I advise clients on the ethical implications of AI and help ensure transparency, fairness, and accountability in their AI implementation journey.
Tom: What is responsible AI, and why should companies care?
Kathleen: Responsible AI refers to the development and deployment of AI systems in a transparent and accountable manner that also aligns with company and societal values. A responsible approach to AI considers the ethical implications and potential biases that may come with AI adoption. It aims to mitigate these issues while adhering to relevant regulatory requirements, guidelines, and best practices.
AI governance is the foundation of responsible AI. It’s the set of policies, principles, standards, and practices that guide the development, deployment, and use of AI systems. AI governance aims to align AI with human values and goals and help ensure that AI is trustworthy, accountable, transparent, fair, and safe.
AI has been around for a long time and isn’t going anywhere — research from BDO’s 2024 CFO Survey Report shows that most CFOs plan to adopt generative AI this year, and many of them (39%) say their company is building a proprietary generative AI platform in-house. But despite their eagerness to adopt AI, many executives aren’t as aware as they should be of the risks associated with deploying it. The same survey found that only 13% of CFOs see AI ethics and responsible use as a top concern for their business.
Companies that take the time to implement AI thoughtfully protect their investments in AI. They are more likely to avoid legal and compliance risks, safeguard their reputations, build trust with their stakeholders, and even gain a competitive advantage. To achieve these goals, business leaders must take a multidisciplinary approach to AI adoption that involves collaboration among legal, privacy, compliance, and technology teams.
Tom: What kinds of issues might arise if companies aren’t considering how to adopt AI responsibly?
Kathleen: Responsible AI issues vary by industry, but some of the biggest concerns uncovered in BDO’s CFO Survey include risks related to generating and acting on incorrect information (20%) and concerns about data privacy (19%). Data privacy is an especially important issue due to the many privacy laws in place to protect consumers online. In the future, we’re likely to see more lawsuits and reputational damage stemming from issues related to both misinformation and privacy breaches as a result of AI that was not deployed responsibly.
Another responsible AI issue the C-suite needs to monitor is AI bias. AI bias can occur when an AI system produces biased results due to issues with the machine learning (ML) process. These biases can be caused by problems with the data used to train AI systems or issues with the algorithm.
For example, in the healthcare industry, predictive AI models are being deployed to help health insurance companies determine health risks and healthcare costs for consumers. These predictive models are designed to assess the health risks of applicants based on various factors such as age, gender, medical history, socioeconomic status, and lifestyle, among other considerations.
However, if predictive models place significant weight on socioeconomic factors often correlated with race — such as zip code, education level, or occupation — they may generate a bias and learn to associate those factors with higher health risks. As a result, applicants who meet the socioeconomic criteria identified by the predictive model might automatically receive higher premiums or even be unfairly denied healthcare.
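The proxy-bias mechanism described above can be sketched in a few lines of Python. This is a purely hypothetical illustration with synthetic data, not any real pricing model: the pricing rule never sees the protected attribute, yet outcomes still diverge by group because a correlated proxy feature (here, a made-up "zip-code risk" score) carries the same signal.

```python
import random

random.seed(0)

# Hypothetical synthetic applicants. The protected attribute ("group")
# is never shown to the pricing rule, but "zip_risk" is correlated
# with it and acts as a proxy.
applicants = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    # Group B skews toward higher-risk zip codes in this synthetic data.
    zip_risk = random.gauss(0.6 if group == "B" else 0.4, 0.1)
    applicants.append({"group": group, "zip_risk": zip_risk})

def higher_premium(applicant, threshold=0.5):
    """Naive pricing rule trained only on the proxy feature."""
    return applicant["zip_risk"] > threshold

def adverse_rate(group):
    """Share of a group receiving the adverse outcome (higher premium)."""
    members = [a for a in applicants if a["group"] == group]
    return sum(higher_premium(a) for a in members) / len(members)

rate_a, rate_b = adverse_rate("A"), adverse_rate("B")
print(f"Group A higher-premium rate: {rate_a:.2f}")
print(f"Group B higher-premium rate: {rate_b:.2f}")
print(f"Impact ratio (A/B): {rate_a / rate_b:.2f}")
```

A common heuristic for flagging this kind of disparity is the "four-fifths rule" from US employment-discrimination guidance: if the favorable-outcome rate for one group is below 80% of the rate for another, the disparity warrants review. Audits like this, run on a model's outputs rather than its inputs, are one practical way teams detect proxy bias before deployment.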
While that example is specific to the healthcare industry, any company that uses AI to make decisions must understand and work to mitigate the risks of AI bias.
Tom: What are some steps companies should take to ensure responsible AI adoption?
Kathleen: Transparency is key. Responsible AI programs are built on an approach to AI adoption that ensures transparency around both the inputs and outputs of AI tools.
To achieve transparency, and therefore trust, companies need to take a measured approach to AI adoption. Specifically, they need to take the time to develop an AI adoption strategy that aligns with their business needs and values.
Once companies are ready to implement their AI roadmaps, we recommend they build an AI advocacy team composed of leaders from across the organization. This advocacy team should work together to ensure the interests and concerns of each department are considered during the rollout and change management process.
Organizations must also ensure they have the proper data governance and security processes in place to maintain the integrity and security of their data.
Tom: What role does the corporate board play in guiding responsible AI strategy?
Kathleen: The corporate board plays an important role in guiding responsible AI strategy. Boards should collaborate with management to oversee the development of AI goals and the organization’s AI roadmap and align on an intentional approach to adoption and governance.
Boards should also consider creating or assigning a committee or sub-committee to evaluate AI risks and opportunities. Adding new directors with AI experience to help oversee AI efforts, especially as it relates to governance, accountability, and risk management, can be particularly useful. The assigned committee should oversee a holistic AI governance program that integrates legal, regulatory, privacy, security, and ethical considerations — while building a foundation of trust among all stakeholders, including employees, consumers, regulators, and shareholders.
The board should also work to create a governance framework that promotes responsible AI and includes mechanisms for monitoring, managing, and mitigating AI risks. This framework will be unique to each business and should be informed by applicable laws, industry norms, the organization’s values, risk appetite, and more.
Tom: Are there any current or upcoming legal or regulatory requirements related to responsible AI that companies need to be aware of? Where do you think we’re headed in terms of regulating AI?
Kathleen: There are several regulatory requirements related to AI that companies should understand.
For example, the EU AI Act ranks AI systems by risk level and imposes strict regulations and compliance requirements for high-risk AI systems. While there are currently no U.S. federal laws pertaining to AI in effect, the current administration has issued an executive order on safe, secure, and trustworthy AI, and regulatory bodies like the Federal Trade Commission (FTC) have released guidance on AI.
State and local level laws are also starting to develop across the U.S., and there are several frameworks that companies should be aware of. Working with a third-party advisor can help companies navigate the evolving regulatory landscape and adhere to the appropriate guidelines.
Tom: What’s the best advice you’d give to companies just getting started on their AI adoption journey?
Kathleen: At BDO, we think about the AI adoption journey as a five-step process.
The first step companies need to take is to educate themselves and their people. They need to learn the technology’s practical applications, as well as its risks and limitations. AI demands oversight, and setting up a formal AI task force is a critical first step. The task force should not only guide AI strategy and implementation but also inform the rollout of education and knowledge sharing. Everyone in the organization, from senior management to operational staff, should understand what AI is, how it works, and its potential impact.
The second step companies need to take is to define their AI vision, journey, and impact. This work entails aligning your AI goals with your organization’s mission, ethical principles, and sustainability practices.
The third step is to lay the data foundation for AI. This step includes ensuring that the organization has the data infrastructure in place to support future AI use cases. At this stage, organizations should develop an AI governance framework. This governance framework should promote responsible and trustworthy AI, and create mechanisms for monitoring, managing, and mitigating AI risks.
The fourth step to AI adoption is to address change management by preparing teams for AI adoption. This work entails communicating the “why” behind the AI, followed by the “how.” It’s important for organizations to clearly define roles and incentivize engagement and adoption while also offering the necessary training and resources for employees to succeed.
The final step is to execute and then iterate on AI. Organizations need to test, refine, and launch AI with mechanisms in place for continuous feedback and iteration. They should measure performance impact, learn from any potential issues, and celebrate wins.
Thank you, Kathleen!
Want to be featured in an upcoming TomTalks newsletter? Send me a message!
Tom Popomaronis is Co-Founder and Chief GenAI Officer at Phantom IQ. In addition to serving as a fractional GenAI consultant and experienced product innovator, Tom is a prolific writer and content strategist, having published over 1,000 op-eds across mainstream platforms including Entrepreneur Magazine, CNBC, Inc., and Forbes, while also enabling the publication of an additional 3,000+ op-eds for executive clients.