"THE DAILY CORPORATE GOVERNANCE REPORT" (for public company boards, the C-suite and GCs)
Please see the items below with the related links (NOTE: access to link content may be metered, require a no-charge registration or require a paid digital subscription).
NOTE: This issue focuses primarily on AI, including AI ethics, AI governance and board oversight of AI.
(i) AI ethics, AI governance and board oversight of AI (and more) - roundup:
   (a) In March, Deloitte released this report on, inter alia, the ethical and responsible use of AI, "Preparing the workforce for ethical and trustworthy AI", based on a survey of 100 C-suite executives. Below are excerpts from the findings:
      "Leaders see a pressing need for ethical AI guidelines—and believe those guidelines are essential to growing revenue and establishing trust.
      -- 88% of executives surveyed said their organizations are taking measures to communicate the ethical use of AI to their workforces, demonstrating leaders’ commitment to the responsible use of this technology.
      -- Among executives surveyed, publishing clear policies and guidelines was ranked the most effective method of communicating AI ethics to the workforce, followed by workshops and trainings.
      -- 55% of C-suite leaders believe having ethical guidelines for emerging technologies like Generative AI is very important as they relate to revenue, followed by brand reputation and marketplace trust (47%).
      -- Respondents indicated their boards of directors (52%) and chief ethics officers (52%) are always involved in creating policies and guidelines for the ethical use of AI.
      -- Among respondents, 49% report their organizations currently have guidelines or policies in place regarding the ethical use of AI, and another 37% of those surveyed said they are nearly ready to roll them out....
      -- More executives surveyed said their organizations are currently hiring or planning to hire for specific positions to meet the ethical needs of emerging tech. Those positions include AI ethics researcher (53%), compliance specialist (53%), and technology policy analyst (51%), along with roles such as chief ethics officer (38%) and chief trust officer (36%)."
   (b) As reported in last Wednesday's Fortune CFO Daily Newsletter, Mercer recently posted on its website its 2024 global manager survey, "AI integration in investment management". Note one of the interesting findings, which gives some precision to the current use of the term "AI":
      "Challenges in agreeing to a definition of AI reinforce the complexity of determining exactly how managers are using and integrating capabilities. Yet, there is clear consensus among managers about what constitutes AI, with what might be termed the “core capabilities” being generative AI (gen AI), large language models (LLMs), natural language processing (NLP) and machine learning (ML) models."
   (c) Below are excerpts from this FT feature article last Wednesday, "What does AI mean for a responsible business?", inter alia quoting Ken Chenault, former chief executive of American Express, and Nuala O’Connor, head of the AI responsibility team at Walmart:
      "It was what many called an iPhone moment: the launch in late 2022 of OpenAI’s ChatGPT, an artificial intelligence tool with a humanlike ability to create content...... But soon this latest chapter in AI’s story was generating something else: concerns about its ability to spread misinformation and “hallucinate” by producing false facts. In the hands of business, many critics said, AI technologies would precipitate everything from data breaches to bias in hiring and widespread job losses.......
      "The message for the corporate sector is clear: that any company claiming to be responsible must implement AI technologies without creating threats to society — or risks to the business itself, and the people who depend on it. Companies appear to be getting the message. In our survey of FT Moral Money readers, 52 per cent saw loss of consumer trust as the biggest risk arising from irresponsible use of AI, while 43 per cent cited legal challenges.
      "“CEOs have to ensure AI is trustworthy,” says Ken Chenault, former chief executive of American Express and co-chair of the Data & Trust Alliance, a non-profit consortium of large corporations that is developing standards and guidelines for responsible use of data and AI. “AI and machine learning models are fundamentally different from previous information technologies,” says Chenault. “This is a technology that continuously learns and evolves, but the underlying premises must be constantly tested and monitored.”......
      "For companies, among the biggest risks of getting it wrong is losing public trust. When KPMG polled 1,000 US consumers on generative AI, 78 per cent agreed on the responsibility of organisations to develop and use the technology ethically — but only 48 per cent were confident they would do so. “You’re going in with a level of scepticism,” says Carl Carande, US head of advisory at KPMG. “That’s where the frameworks and safeguards are critical.” Approaches to AI governance will vary by sector and company size, but Carande sees certain principles as essential, including safety, security, transparency, accountability and data privacy. “That’s consistent regardless of whatever sector you’re in,” he says. In practical terms, a responsible approach to AI means not only creating the right frameworks and guidelines but also ensuring that data structures are secure, and that employees are given sufficient training in how to use data appropriately.
      "But responsible AI does not always mean reinventing the wheel. The UN Guiding Principles on Business and Human Rights provide a ready-made means of assessing AI’s impact on individuals and communities, says Dunstan Allison-Hope, who leads the advisory group BSR’s work on technology and human rights. “There’s been all kinds of efforts to create guidelines, policies and codes around artificial intelligence, and they’re good,” he says. “But we suggest companies go back to the international human rights instruments and use them as a template.”
      "Some have not yet implemented any governance structures at all. While 30 per cent of FT Moral Money readers said their organisations had introduced enterprise-wide guidelines on the ethical use of AI, 35 per cent said their organisations had not introduced any such measures. Reid Blackman, founder and CEO of Virtue, an AI ethics consultancy, sees no excuse for inaction. A rigorous approach to AI does require companies to make change, which takes time and effort, he says. “But it’s not expensive relative to everything else on their budget.” While some might turn to the services of consultancies like Virtue or products such as watsonx.governance, IBM’s generative AI toolkit, another option is to build internal capabilities.
      "This was the approach at Walmart, which has a dedicated digital citizenship team of lawyers, compliance professionals, policy experts and technologists. “Given our scale, we often build things ourselves because the bespoke model is the only one that’s going to work for our volume of decision making,” says Nuala O’Connor, who leads the team. Whether turning to internal or external resources, there is one element of a responsible approach to AI that so many agree on that it has its own acronym: HITL, or human in the loop — the idea that human supervision must be present at every stage in the development and implementation of AI models. “Let’s not give up on human expertise and the ability to judge things,” says Ivan Pollard, who as head of marketing and communications at The Conference Board leads the think-tank’s development of online guidance on responsible AI.
      "For Walmart, putting humans front and centre also means treating AI systems used for, say, managing trucks and pallets differently from AI programs that can affect the rights and opportunities of employees. “Those tools have to go through a higher order of review process,” says O’Connor......
      "(W)hile many organisations have appointed chief ethics officers to maintain ethical behaviour and regulatory compliance, they may need to go further. One solution, says Virtue’s Blackman, is to put someone in charge of responsible approaches to AI. “If you’re the chief innovation officer, you want to move fast, but if you’re the chief ethics officer, you don’t want to break things — so there’s tension,” he says. “Someone with a dedicated role doesn’t have that conflict of interest.”....
   (d) Below are excerpts from this WSJ article last Wednesday, "AI Is Moving Faster Than Attempts to Regulate It. Here’s How Companies Are Coping", inter alia quoting Marco Argenti, CIO at Goldman Sachs; Ed McLaughlin, CTO at Mastercard; and Jim Fowler, CTO at Nationwide Mutual Insurance:
      "Companies are pressing ahead with building and deploying artificial intelligence applications, even as the regulatory landscape remains in flux. “There’s a lot of unanswered questions,” said Mastercard President and Chief Technology Officer Ed McLaughlin. “So you start having to build systems in anticipation of requirements that you don’t quite know what they are yet.”
      "Nationwide Mutual Insurance and Goldman Sachs are among the companies that have established their own internal guidelines and frameworks for how they use data and AI, in part by anticipating what ultimate regulations could look like. Any future state-level AI regulation will likely mandate certain levels of transparency in terms of how customer data is used to fuel AI decision making, said Nationwide CTO Jim Fowler. Nationwide established a “red team, blue team approach,” with the blue team exploring new AI opportunities and the red team considering where it should pull back due to concerns around cybersecurity, bias and ensuring it can meet government regulation. What came out of the red team was a set of principles for AI that “will help us continue to develop solutions that drive business value but in a way that matches up with where we believe state risk is going to go,” Fowler said....
      "At Goldman Sachs, CIO Marco Argenti said the company established a committee focused on the potential risks associated with deploying AI. Argenti said he has a constant dialogue with regulators and works to ensure that all internal AI use cases address those risks, including concerns around data protection. But setting up internal guardrails isn’t a cure-all for meeting future rules. “We need to be aware that there will most likely also be additional regulations that might come from policymakers,” he said......
   (e) On March 21, and as announced and discussed in this news release, ISS published this report on board oversight of AI at S&P 500 companies, "AI and Board of Directors Oversight." Below are excerpts from various sections of the report:
      "Board Oversight of AI: Board oversight of AI can take on many forms as it has varying degrees of relation to a company’s overall business strategy. For the purposes of this evaluation, a company was determined to have oversight of AI if it disclosed in the proxy statement that: (1) the full board or a specific committee either has oversight responsibility of AI or AI was mentioned as one of the topics evaluated by the board or the committee during the year, (2) at least one director has expertise in the field of AI, or (3) the company has established an AI ethics board or a similar governing body tasked with overseeing AI-related topics. References to AI in business strategy or executive officer expertise were not included in the assessment process. From September 2022 to September 2023, over 15% of the S&P 500 disclosed board oversight of AI, including specific committee oversight responsibility, director(s) with AI expertise, and/or an AI ethics board......
      "Director Expertise: The most common evidence of a board's readiness to oversee AI-related risks and opportunities is found in the skills and experience of its board members, especially in industries where AI may have a greater impact. In the S&P 500, 13% of companies have at least one director with AI expertise on the board, compared with 1.6% with explicit board or committee oversight of AI and 0.8% with an AI ethics board.....For the purpose of this paper, a director was classified as having expertise in AI if any of the following apply:
      -- Current or past employment with companies in AI or relevant industry
      -- Current or past employment positions relevant to the AI industry
      -- Board membership with companies in AI or relevant industry
      -- Certification in AI
      -- Employment titles related to AI.....
      "Committee or Full Board Oversight: Explicit disclosure of full board or committee oversight of AI is still rare, with just 1.6% of the S&P 500 providing specific disclosure......When oversight responsibility for AI is delegated to a committee, an existing committee's scope is typically expanded to oversee this new area. For example, some companies have recently added technology-related risks, such as cybersecurity, to the Audit Committee’s risk oversight responsibilities and have further expanded the scope to include AI-related risks. There are also companies that have a dedicated Technology Committee whose oversight responsibilities include a broad range of topics, including AI. Others are approaching AI in terms of environmental and social impacts and regulatory considerations, delegating oversight responsibility to a committee tasked with public policy matters and/or environmental and social risk oversight. A handful of companies disclosed that the committee responsible for privacy oversight and risk management has been expanded to include AI-related risks and trends. Other companies included a discussion of non-financial regulatory risks, which include responsible AI use. Overall, most company disclosures regarding specific board committee oversight state that the board is responsible for constantly evaluating the competitive landscape and keeping pace with investments in the company’s business offerings and technology, which include AI.
      "AI Ethics and Review Board: Some companies have elected to designate the responsibility of AI to an ethics or review board comprised of multi-disciplinary teams. The presence of an AI ethics board, while not necessarily a board-level entity, indicates a systemic and organizational oversight mechanism relating to this emerging technology.....Among the companies that disclosed an AI ethics board, the common element was the presence of a multi-disciplinary group to ensure an ethical approach to AI......"
   (f) On a related AI topic, namely attracting AI talent, note this WSJ article last Wednesday, "The Fight for AI Talent: Pay Million-Dollar Packages and Buy Whole Teams."
(ii) Adobe associate general counsel on the importance of guardrails for generative AI: J. Scott Evans is Senior Director and Associate General Counsel at Nasdaq-listed software maker Adobe, and below is from this Corporate Counsel blog post last Tuesday, "'Building Guardrails': Adobe's AGC on Legal's Responsibility in Generative AI":
      "Adobe’s Senior Director and Associate General Counsel J. Scott Evans said it is a company’s responsibility to build guardrails to prevent misuse of AI tools, but it cannot be done in a siloed fashion. It needs to be multifunctional and thought of from many perspectives. “You need to bring in your HR department, your procurement department, your operations team. You need to find out if there are teams that have operational heads within their team and bring all of those together and that’s how you build guardrails,” he said......
      "In a conversation with Corporate Counsel, Evans shared the importance of building AI guardrails to protect against the misuse...of AI......
      "Corporate Counsel: Talk about some of the risks that could come from improper implementation and utilization?
      Evans: I think the improper implementation for a large company like Adobe is the fact that you have no guardrails around it, and everyone has a computer at their desk, and they’re all somehow tapping into some sort of AI system through their own computer and using it in their work. If there are no guardrails or guidance on what is good or proper use of AI and how you can use it in your job, that’s a problem. Also, overreliance on it and trusting it. It’s in the nascent stages. This is new technology—just because it seems so intelligent, you shouldn’t assume that that is the truth. It is still new, it is still developing and you need to verify things.
      It is important to have guardrails and pay attention to how people are using it. Ask vendors the question: ‘When you’re providing me X, are you using any artificial intelligence to develop this?’ When you deal with engineers or programmers and you want to file a copyright application, are you asking that question? Have you used any artificial intelligence to help create this code? Because you’re required at the copyright office to disclaim any code that was created by artificial intelligence. And so learning the questions to ask, learning the guardrails to put in place and getting that message out to your employee base is very important and something that needs to be carefully considered...."
(iii) AT&T disclosure of a data breach/press release of the day: As reported in this WSJ article on Saturday, "AT&T Says Data From 73 Million Accounts Were Leaked to Dark Web", AT&T disclosed in this press release on Saturday that it had suffered a data breach approximately two weeks ago, as follows:
      "AT&T* has determined that AT&T data-specific fields were contained in a data set released on the dark web approximately two weeks ago. While AT&T has made this determination, it is not yet known whether the data in those fields originated from AT&T or one of its vendors. With respect to the balance of the data set, which includes personal information such as social security numbers, the source of the data is still being assessed.
      "AT&T has launched a robust investigation supported by internal and external cybersecurity experts. Based on our preliminary analysis, the data set appears to be from 2019 or earlier, impacting approximately 7.6 million current AT&T account holders and approximately 65.4 million former account holders. Currently, AT&T does not have evidence of unauthorized access to its systems resulting in exfiltration of the data set. The company is communicating proactively with those impacted and will be offering credit monitoring at our expense where applicable. We encourage current and former customers with questions to visit www.att.com/accountsafety for more information. As of today, this incident has not had a material impact on AT&T's operations."
(iv) Two Warner Bros. directors resign amid DOJ inquiry over whether they served on the board of a competitor/(another) press release of the day: Below is from this NY Times report yesterday, "Warner Bros. Discovery Directors Step Down Amid Antitrust Inquiry", describing the circumstances that resulted in Warner Bros. Discovery, Inc.'s announcement yesterday in this press release of the resignation of two of its directors:
      "Warner Bros. Discovery said on Monday that two members of its board of directors, Steven Newhouse and Steven Miron, had stepped down after the company learned about an investigation into whether their presence on the board violated antitrust law. Federal law forbids most corporate officers and board members to simultaneously serve on the boards of their competitors.
      "Mr. Newhouse and Mr. Miron are both executives at Advance, a private, family-held business whose holdings include the Condé Nast glossy magazine empire that publishes titles such as Vogue and The New Yorker. Advance is also one of the biggest shareholders of Charter Communications, a cable company that, like Warner Bros. Discovery, sells streaming services to its customers. Mr. Miron has served on Charter’s board of directors since before Warner Bros. Discovery made its debut in 2022, and a relative of Mr. Newhouse’s also serves on the board....."
      Below is from the Warner Bros. press release:
      "Warner Bros. Discovery today announced that Steven A. Miron and Steven O. Newhouse, both independent directors, have resigned from WBD’s Board of Directors, effective immediately. Messrs. Miron and Newhouse resigned after the US Department of Justice informed them that it was investigating whether their service on the Board of Directors violated Section 8 of the Clayton Antitrust Act. Messrs. Miron and Newhouse informed WBD that, without admitting any violation, and in light of the changing dynamics of competition in the entertainment industry, they elected to resign rather than to contest the matter.
      "Mr. Miron and Mr. Newhouse were each appointed to the WBD Board effective upon the closing of the merger between Discovery, Inc. and WarnerMedia on April 8, 2022......Mr. Miron is chief executive officer of Advance/Newhouse Partnership, a privately held media company, and a senior executive officer at Advance, a private, family-held business that owns and invests in a broad range of media and technology companies. He previously served as a Discovery, Inc. director from 2008-2022, and was on the WBD Compensation Committee. Mr. Newhouse is co-president of Advance. He previously served as a board observer at Discovery, Inc. from 2008-2022, and was on the WBD Nomination and Corporate Governance Committee......"
(v) (other) press release of the day: NYSE-listed Corning Incorporated announced last Monday in this press release the appointment of a new General Counsel, reporting to the Chief Legal and Administrative Officer, as follows:
      "Corning Incorporated, the world’s leading innovator in glass and ceramic technologies, announced today that Michaune D. Tillman has joined the company as Senior Vice President and General Counsel.....In addition to leading the Law Department, Tillman will serve as a key advisor to both the company’s Senior Leadership Team and its Board of Directors, providing expert counsel on a wide range of legal matters regarding Corning’s operations and strategic initiatives.
      "“Michaune’s appointment underscores Corning’s commitment to maintaining the highest standards of legal excellence,” said Lewis A. Steverson, Corning’s Executive Vice President and Chief Legal and Administrative Officer. Tillman will report to Steverson, as he oversees many of the corporate functions within Corning.....Tillman joins Corning from Worthington Steel, Inc., North America’s premier steel processor and a producer of market-leading building products. She served as the company’s General Counsel and Corporate Secretary......"
? ? ? ? ? ? ?--------------------------------------------
Please contact me if you would like to be on the distribution list and receive every issue of this newsletter directly in your inbox.