Strategic Insight: Five Critical Questions for Board Members and C-Level Executives Confronting High-Risk, High-Reward Technologies
Cliff Locks
If the topic of Generative AI has been hogging an increasing share of your meeting agendas, your board is hardly alone. Since it was unleashed on the general public two years ago, GenAI has become the subject of endless news stories that seem to alternate between predictions of game-changing benefits for the modern enterprise and dire warnings of gloom and doom.
GenAI, even in its infancy, represents not merely an incremental step forward but a generational leap that might genuinely warrant the hype and buzz it has engendered. Among its potential gifts, it promises to shorten product development cycles and enable innovation at scale, as well as automate all manner of laborious tasks, thus streamlining operations, reducing costs and freeing up humans to focus on strategic work.
It offers even small and midsize companies the ability to provide personalized and interactive customer experiences. “Amazon could always have written a great chatbot that feels value-added,” says Tariq Shaukat, board member at Gap and Public Storage, co-CEO of Sonar, and former president of Google Cloud and Bumble. “Now, any company can get a chatbot that sounds like a human, more or less, to answer customer service questions. So, it does level the playing field and create great opportunities for midsize companies.”
Simultaneously, GenAI has introduced a whole new array of enterprise risks. And with the technology so new and changing rapidly, some board members are understandably jittery about knowing what they don’t know. But as Herman Bulls, international director and vice chairman of Americas at JLL and board member at Host Hotels & Resorts, USAA, and Comfort Systems USA, points out, directors don’t need to be AI experts.
“A good board member does not have to know the answers,” he says. “A good board member has to know the right questions to ask.”
Here are some questions to ponder and pose to management.
1. How is GenAI aligned with the company’s strategic objectives? (Or, why are we doing this?)
As with all shiny new technologies, companies must resist the FOMO-induced urge to leap before they look and remember that any investment in AI needs a concrete business case. “You need some clarity around what the company is trying to achieve,” says David Garfield, global head of industries for AlixPartners. “Is it to generate insights, enhance productivity or generate incremental revenue—or maybe some combination of those things?”
Without a clear strategy, much capital will be wasted, says Richard Boyd, cofounder and CEO of AI company Tanjo and cofounder and CEO of Ultisim, a simulation-learning company that utilizes gaming technology and AI. “I’ve already seen a lot of projects, and it reminds me of early ERP systems at the turn of the century when people were implementing them, but they weren’t ready yet. It created a lot of failed projects that cost tens of millions of dollars and were just disasters.”
Bob Rogers, CEO of supply chain AI company Oii and cofounder and chief scientific officer for BeeKeeperAI, observed something similar when he was chief data scientist at Intel. “I saw companies building huge data transformation projects, which ended up with huge negative ROIs because they were trying to do everything, everywhere, all at once rather than having one or two focused ROI opportunities.”
That said, most companies should be starting to experiment with the technology. Most experts recommend a small pilot project involving one group within the company. “Something you can trial and then very rigorously evaluate and test and learn from that experience, and then decide whether to expand or to modify before you take further steps,” says Garfield. “Then you look at KPIs and metrics, technical indicators of whether the software is performing the way it was intended.”
The pilot project should be “in a sandbox,” says Nate Thompson, founder of The Disrupted Workforce and cohost of its podcast. “It should be something that’s not connected to the broader corporate network or has a very limited connection, so there could be no way this technology could somehow expose a broader data set or run wild on a corporate network.”
It’s a tightrope walk to find the balance between bleeding-edge adopter and missing-the-boat latecomer, but it’s a balance boards and management have to strike together, says Bulls. “You can’t go too fast; you can’t go too slow. It’s got to be just right.”
2. What are our policies around AI use?
Having clear policies around AI usage—who can use it, how, and for what purpose—is critical for ensuring that the deployment of AI is aligned with the company’s values, mission, and the expectations of stakeholders. By proactively addressing issues such as bias, fairness, and accountability, the company can not only get ahead of regulatory scrutiny and legal challenges but also maintain public trust, a key asset in a digital age where consumers and partners are increasingly concerned about data privacy, security, and ethical implications.
“Every company needs to make their AI principles and acceptable-use policies incredibly explicit,” says Shaukat. “At Google Cloud, we listed it—here are the principles, and we will not do any deal that does not meet these principles.” Sonar’s policy dictates acceptable use cases and spells out specifics on AI usage. “Like, you must use this approved tool with this enterprise contract, etcetera. It can’t just be the Wild West.”
With the genie out of the bottle and apps like ChatGPT, Copilot, and Jasper making GenAI increasingly ubiquitous on desktops and smartphones, a formal policy has become that much more critical to safeguard the company against a host of risks that employees, perhaps unwittingly, invite. Cyberattacks are already a chief concern, says Gary LeDonne, a board member with MVB Financial. “This just kind of takes that to the next level. What algorithms can hackers develop to try different avenues into systems continually? That has always been a concern, and it’s an even bigger one for me today.”
Board members also need to sort out which specific risks apply to their companies and try not to get caught up in the media buzz around any one issue, says Reid Blackman, an AI ethics advisor and host of the podcast “Ethical Machines.” “I wouldn’t recommend to the board, ‘Hey, make sure you know what your company’s doing about bias.’ I would say, ‘Make sure that your organization has an AI risk program that’s enterprise-wide, that systematically and comprehensively identifies and mitigates the risks of AI.’” He adds that some known risks—bias, hallucinations, and privacy violations—are built into the technology. “It’s the nature of the beast. These risks are not mere possibilities—they’re probable.”
Ben Waber, CEO of Humanyze, cautions boards to understand the weight of any AI project’s potential downsides before going live. He points to the recent example of British package-delivery company DPD’s public embarrassment after its AI chatbot swore at a customer and criticized its own company. “They probably lost way more than they saved from the amount they pay call center workers. So, you need to understand the systemic risk that caused the problem. If you can’t answer those questions, doing something whole-hog seems incredibly foolish.”
3. Where is our data coming from?
As the saying goes, garbage in, garbage out. Data is the lifeblood of generative AI projects; it’s what fuels the intelligence and adaptability of these systems. Quality data enables AI to learn, discern patterns, and make decisions. The data must be sourced ethically and legally and structured for easy access, processing, and analysis. Disorganized data can lead to inefficiencies or inaccuracies in learning by even the most sophisticated (and most expensive) large language models (LLMs).
Before investing in any AI project, “make sure you have the infrastructure that will support it,” says EY’s former chair and CEO Mark Weinberger, who sits on the boards of Johnson & Johnson, MetLife, and Saudi Aramco. “Do we have our data in a way that we can access and use it? Do we have the skillset, the software engineers? Are we partnering with others who do this? Do we have them lined up to help us understand and apply this new thinking that AI will provide? Those are the fundamentals you need before the end use case.”
While you don’t want to get too far into the weeds, you can ask for the sources of data that the system is being trained on, says Ann Skeet, senior director of leadership ethics at the Markkula Center for Applied Ethics and coauthor of Ethics in the Age of Disruptive Technologies: An Operational Roadmap. “[Directors] can also advise proactively choosing an AI system with an identifiable training data set.”
Since most companies will acquire AI services rather than build from scratch in-house, those vendors need to be identified as potential sources of risk, says Paola Zeni, chief privacy officer for cloud communications company RingCentral. “If the strategy is to rely a lot on third-party AI, I would ‘double-click’ on what exactly we have done to vet those third parties, and what kind of criteria have we identified to vet them? And who is in charge of ensuring that we have the right terms and conditions with that vendor?”
That includes understanding what happens to the data your company feeds the AI system, says Flavio Villanustre, LexisNexis Risk Solutions’ global chief information security officer. “Depending on the contractual arrangement with the generative AI service provider, prompts used to interact with the models could be captured and used to further improve the system, which could lead to privacy or security issues.”
4. How are we addressing the potential impact of AI on our workforce?
Most experts agree that while some roles will no longer be needed in an AI world—just as the Industrial Revolution displaced blacksmiths and handloom weavers—AI will not replace humans en masse any time soon. However, it will create new jobs to manage and interpret AI outcomes and fuel demand for soft skills like problem-solving, communication, and emotional intelligence.
“If we do this right, by the time jobs are eliminated by AI, those people will be upskilled, reskilled, and future-skilled in a way that they’ve already pivoted to higher-value and higher-impact tasks,” says Thompson, who recommends having a plan for growing talent as the fight for AI skills erupts into an all-out war and the spoils go to those with the deepest coffers. Unlike more complex technologies, expertise with GenAI will be easier to acquire. “Some people can learn this with affinity and the right exposure. Do not wait and hope that you will find a unicorn. Start developing your talent now.”
In the meantime, this nascent technology must never be allowed to take the wheel or be the last set of eyes on anything it creates. Waber offers an example of an HR department using an LLM to write the latest version of its employee handbook. “These models are trained to reduce the amount of sexualized content, and so imagine that, because of that, it doesn’t output a section forbidding sexual harassment in the workplace,” he says. That omission might be missed if a human does not carefully read every word. Later on, if an employee does something out of line, you may be unable to fire them. “Now, are you going to be able to pin that on the fact that you used a large language model?” asks Waber. “Probably not.”
The key is to have AI take first, not last, crack at any task, says LeDonne. “We can let AI be the flag, and then those flags can go to those skilled in fraud to make more judgment-based assessments. I would call data analytics an aid to decision-making, never the sole source.”
5. Who will own it?
For any serious implementation, experts say it’s essential to identify which individual or team will oversee AI implementation, surface risks and opportunities, and be accountable for ethical standards, compliance, and performance outcomes. Given AI’s far-flung consequences across the enterprise, settling on one person is not a simple exercise. RingCentral opted to create an AI governance council, which gathers leaders from across departments once a month to discuss every AI initiative in play.
“The purpose of the council is loosely to make sure that there is a shared awareness and understanding of what the company is doing with AI so that different teams could identify what it means for them,” says Zeni. “So, if I hear an update about how AI impacts a product strategy, I can immediately think, ‘Okay, what are the potential legal implications and risks?’ Someone from the communication office would think, what are the communication opportunities or challenges around this strategy?” And so on. The agenda varies from month to month but typically includes product updates “as well as updates on best practices or processes that have been introduced to manage, for instance, AI risks.”
One of the biggest benefits of the council meetings is that they have fostered team collaboration so that no opportunity or risk is missed. “It has created many opportunities for a deeper dive,” Zeni says. “The discussions also trigger operational conversations at the lower level in the organization. What you ultimately want is not only alignment on the strategy but also for the different organizations not to be siloed and to work together.”
A team focused on AI also helps the company stay current on a technology with an exponential rate of change. Skeet advises having one board member provide dedicated AI oversight. “Is somebody, both at the board level and ideally a partner inside the company on the senior executive team, charged with taking point on AI ethics?”
That individual may already sit on the audit committee and need not be an AI expert. But it does raise the question: Given AI’s import, should the nominating and governance committee be looking to fill its next board seat with an AI guru?
Most experts say no. With a limited number of seats at the table, a specialist could negatively impact governance overall unless AI is central to strategy. “I don’t think you can afford to have a legal expert, a technology expert, a supply chain expert, an AI expert,” says Weinberger. “You’ll need people who’ve been through transformation and industry change. So, if someone is an AI expert, what do they do beyond that? What other value would they be able to bring to the board? Because you can’t afford to give up a seat for just one issue.”
But boards do need to stay educated. “There needs to be formal AI-specific education at the board level, along with other required training,” says LeDonne. “This is in addition to what you do on your own. Every board should have guidelines on education, and AI needs to be part of that curriculum.”
Bringing in outside experts is a must, says Weinberger. “All the boards I am on, across industries from healthcare to energy to financial services, we get people in to talk to us, outside experts who are in the field and can talk about risks on the regulatory front and/or other potential unconscious biases that could come up.”
Thompson recommends that directors completely new to GenAI get their own personal accounts, “not on the corporate network,” and start playing with the technology. “Board members are going to have a really hard time leading and setting strategy and governing and creating risk mitigation around the technology they’ve never used themselves.”
It’s also crucial to benchmark and keep an eye on how other companies in your industry are using AI, but not to copy them, “because really, no one knows what they’re doing,” says Waber, who is also a visiting scientist at the MIT Media Lab. “The floor is littered with horrible decisions that came from people just copying each other.”
Getting arms around AI’s impact will push many directors outside their comfort zone. But Louise Forlenza, who sits on the board of global data engineering company Innodata, says that’s just part of the job. “When there’s something new, I don’t think you can ever feel completely comfortable in the beginning,” she says. “You have to use your instinct many times, your gut.”
She adds that you have to get a foundation and ask the right questions, but that’s all any director can do. “I could ask as many questions as I think are intelligent, and they’ll answer them—but the Boeing door still flew off the plane, didn’t it?”
In the meantime, just as they have for decades, directors will have to do a bit of educated guessing. “That’s why being a board member is a very nimble skillset,” says Bulls, “because, by definition, you will not have 100 percent of the information. Having that ability to see around corners, which comes from experience and wisdom, having the ability to be a continuous learner and staying up to speed as much as you can—that’s really what you need.”
#WSJ #privateequity #boardmembers #corporateleadership #IBD #CEO #CFO #COO #BoD #CXO #management #PE #hedgefund #limitedpartners #LP #venturecapital #VC #ethicalbusiness #directors #familyoffice #uhnw #ultrahighnetworth #publicprivatepartnerships #mergersandacquisitions #InvestmentCapitalGrowth #MillionaireLifeServices