Where Is an AI Strategy To Be Found?
Scarcely a year ago, the most popular computer science course at many major universities was about #blockchains. Few people in the business community had heard much about generative pre-trained transformers (#GPT) or Large Language Models (#LLM). Machine learning systems were certainly playing a meaningful role in back-office and other behind-the-scenes functions like inventory optimization –– but the idea of an #AI technology "moment" equivalent in scope and significance to the beginning of the world wide web or the mobile era? That seemed fanciful, and still the stuff of science fiction movies (mostly dystopian, at that).
Fast forward to early 2023, and the story looks very different. The explosive growth and tangible impact of LLMs in public use, along with the arms race among cloud providers to incorporate these technologies into search and enterprise software products, raise monumental and possibly generational questions –– ones we will likely be grappling with for years.
A recent blog post by GitHub showed that, at least in software engineering, the future with LLMs is coming faster than most could have imagined. The code hosting platform –– used as a basic tool for developers to test and collaborate on their coding work –– launched GitHub #Copilot to the public less than a year ago. The world’s first at-scale AI developer tool provides subscribers with “autocomplete” suggestions as they code. It’s basically #ChatGPT for programming languages, and it works remarkably well (though, as with the natural language equivalent, there are errors and security flaws). Copilot now accounts for a staggering 46% of the code written by developers who use it, across all programming languages on GitHub. Put simply: for those developers, nearly half of new code is being written by machines.
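To make the “autocomplete” idea concrete, here is a minimal, hypothetical sketch of the kind of interaction such a tool enables; the function, figures, and comments are illustrative assumptions, not taken from GitHub’s blog post or Copilot’s actual output. The developer writes a signature and a one-line description, and the assistant proposes the body:

```python
# Illustrative only: a developer types the signature and docstring,
# and a Copilot-style assistant suggests the body that follows.

def monthly_payment(principal: float, annual_rate: float, years: int) -> float:
    """Return the fixed monthly payment on an amortizing loan."""
    monthly_rate = annual_rate / 12          # suggested by the assistant
    n_payments = years * 12                  # suggested by the assistant
    if monthly_rate == 0:
        return principal / n_payments
    return principal * monthly_rate / (1 - (1 + monthly_rate) ** -n_payments)

print(round(monthly_payment(250_000, 0.06, 30), 2))   # roughly 1498.88
```

The point is not the finance formula; it is that the developer supplies intent in plain language and the model supplies most of the keystrokes, which is exactly the dynamic behind the 46% figure above.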
You don’t have to fixate on that 50% threshold to recognize that virtually any language-intensive activity –– from copywriting to lobbying to creative arts to the drafting of laws –– is either already facing or soon will face a wave of machine learning integration. That justifies a level of unease about the prospect of not only individuals’ day-to-day work, but also the fundamental value propositions of entire companies, being disrupted by AI.
It’s not surprising, then, that 37% of Americans feel “more concerned than excited” about the integration of AI in their everyday lives –– more than double the 18% of Pew Research Center survey respondents who are more excited than concerned.
We’ve heard colleagues, clients, and friends from C-suites and boards talk about the need for an "AI strategy" and for "AI governance and oversight mechanisms" inside their firms. We believe that level of abstraction may not be the best approach. AI is a colloquial term that covers a large and diverse bundle of technologies. As such, it describes how human beings perceive certain kinds of outputs from machines more than it describes what the machines are actually doing and how those capabilities affect the enterprise. Still, it’s easy to understand the motivation behind the ask.
It’s trite but true to say there is no single formula, best practice mantra, or easy answer. We’ll be sharing a number of ideas and insights on this question over the next few months (as will many others, of course). For the moment, we offer just a very simple observation and starting point about the nature and directionality of the search for answers. It’s not only that there is a massive rush for strategy; it is that the demand for strategy is sitting at the intersection of (at least) four distinct Vectors with meaningfully different priorities, anxieties, time horizons, and risk appetites.
For the moment, here’s a vastly oversimplified scheme to help illustrate how AI is pushing and pulling on each of the Four Vectors:
[Graphic: the Four Vectors and how AI is pushing and pulling on each]
Even with these vastly oversimplified characterizations, a pointed observation stands out: there is no existing algorithm, or even rule of thumb, for reconciling these orthogonal and sometimes conflicting demands. A second, related observation is that solving this problem over time may turn out to be a good example of where human intelligence will continue to excel over machine learning. But even that is too soon to say for certain.
What is certain is that the idea of landing on a single “AI strategy” –– and then executing on it –– is probably not a good mental model for how to move forward with any of the Four Vectors. This technology is going to require a much more nuanced, ongoing strategic conversation. That will include small experiments, clear communication, dynamic risk management, and sharing of lessons among firms and across the Vectors.
It’s often said that for every dollar spent on technology, organizations end up spending ten more figuring out how to put that technology to good use. This time –– and with this particular technology –– the need for focus, patience, and disciplined attention to how the Four Vectors are experiencing the AI revolution may be even greater than that tenfold multiplier suggests.
Reach out to Breakwater Strategy if you'd like to hear more.