How to succeed at Generative AI Projects
McKinsey Digital has published a comprehensive article aimed at CEOs that breaks down Generative AI along several dimensions. It is, of course, one among several comprehensive takes from reputable firms on this topic. Spurred by these conversations, CEOs have indeed started personally driving Generative AI forward and pushing its boundaries. Yet while reputable firms take a stand and CEOs direct resources, as an industry and as a society we are still in the early stages of Generative AI. In the near future, it is going to be up to all of us to figure out what works best in companies and in personal life. For anyone who questions how significant AI (including Generative AI) will be in the 2040s, I recommend Kai-Fu Lee’s book “AI 2041”, an imagination of “ten visions for our future”. So, as we figure things out, I want to lay out a few ways to Select, Implement and Measure Generative AI projects for your enterprise. I also want to make the case that even though these are three simple words, they apply to AI and Generative AI quite differently than to traditional IT applications.
Generative AI Project Selection
In the Generative AI context, when I think freely about opportunities in the enterprise, I feel like a kid in a candy store. When I apply rules to my thinking, I feel like a bull in a china shop. I can be either totally right or totally wrong. (Please pardon the analogies.) I think this rings true because Generative AI applies to several broad categories of problems and the impact can be significant. Generative AI can classify, summarize, answer questions, and create relevant responses in varied forms such as art, prose, songs, or speech. Using advanced techniques, an enterprise can even help a Generative AI model reason independently and act in a sophisticated manner specific to the circumstances. Scientists continue to research new methods and hardware makers continue to offer better chips for AI. At the highest level, energy companies are innovating to supply abundant, affordable energy for building new AI. This ecosystem is a lot to take in, and without proper evaluation companies can make mistakes or be left behind. The largest US corporations may have enough resources to run several Generative AI projects simultaneously and select what works; everyone else must evaluate ROI sooner rather than later. There is proven value in setting up the fundamentals: unlocking data with a data lake, lakehouse, or warehouse; building a truthful BI system; building an AI platform. So, by all means, invest in fundamentals. However, when choosing the outcomes you want to deliver, it is good practice to create a decision framework that weighs both technical and business considerations. Have your direct reports organize cross-functional workshops to inform this decision framework.
Some of the questions you may put into this decision framework: “Do we have the capability (data and services) to fine-tune Generative AI for our purposes?”, “What is the PR strategy if this works, and what is it if it does not?”, “What is the opportunity cost?”. On the technical side, ask questions such as “What models can we train?”, “Does this require research or vendor assistance?”, “What are our methods of fine-tuning a model, and how will we compare performance?”. These are not very different from what you would ask for traditional IT projects; they are just more consequential than ever before. Once you have some of these questions answered, use the decision framework to prioritize projects. Seek early successes and penalize slow-to-materialize projects early in their cycle.
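To make prioritization concrete, the sketch below scores candidate projects with a simple weighted sum. The criteria names, weights, project names, and scores are all hypothetical placeholders, not a prescription; substitute the questions and priorities your own cross-functional workshops produce.

```python
# A minimal sketch of a weighted-scoring decision framework for
# prioritizing Generative AI projects. All criteria, weights, and
# example projects below are hypothetical illustrations.

CRITERIA = {                  # weight = relative importance (sums to 1.0)
    "data_readiness": 0.30,   # do we have data/services to fine-tune?
    "business_impact": 0.30,  # expected value if the project works
    "execution_risk": 0.25,   # PR / opportunity-cost downside (higher = safer)
    "team_capability": 0.15,  # can we train and compare models ourselves?
}

def score(project: dict) -> float:
    """Weighted sum of the project's 1-5 scores for each criterion."""
    return sum(project[c] * w for c, w in CRITERIA.items())

projects = [
    {"name": "Sales-call summarizer", "data_readiness": 4,
     "business_impact": 3, "execution_risk": 4, "team_capability": 3},
    {"name": "Customer-facing chatbot", "data_readiness": 2,
     "business_impact": 5, "execution_risk": 2, "team_capability": 3},
]

# Rank projects from highest to lowest weighted score.
ranked = sorted(projects, key=score, reverse=True)
for p in ranked:
    print(f"{p['name']}: {score(p):.2f}")
```

The value of such a sketch is less the arithmetic than the conversation it forces: teams must agree on criteria and defend each score in the workshop before a ranking emerges.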
Implement your Generative AI Project
You will have no difficulty finding energized vendors, service providers and staff members to implement a Generative AI project. It is the talk of the town, for sure. As of today, then, the task is to temper enthusiasm and direct that energy towards useful ideas. To do exactly that, start with a good training plan. A good way to get engineers to train themselves is to poke their pride about their knowledge: challenge them to prove that they truly know Generative AI by asking them to show certificates. When asked to take lessons, an engineer with their sleeves rolled up is forced to take in new perspectives and think harder about what they are about to do. A certificate also adds to their confidence to challenge their peers when they see BS, or to contribute in a new way when problems prove tough to solve.

Next, create guardrails that help teams learn and adjust quickly. Several companies use the lean canvas to build MVPs and continually iterate on results within guardrails. Use Lean or Agile practices to deliver small results to production fast and evaluate frequently how you did with respect to the guardrails. (I understand that Lean and Agile may be much-bandied, half-understood paradigms.) Some CEOs and managers prefer to give a Generative AI project a long leash before expecting results. I see no reason to do so. A strong expectation of frequent, useful results, combined with training a capable team, is a better, more structured way to aim for big goals than letting teams learn on their own and show up semi-prepared on game day. Once you have staff ready and guardrails set, ensure that you capture the learning gained. This is critical. Generative AI, and AI itself, is quite new in the enterprise space, especially in the way it is being used now versus decades ago. Capture learnings from each project and feed them into the overall roadmap so each team can adjust and move.
Create a community forum for sharing ideas. Create formal, yet free, channels for sharing good ideas. Build a library of documentation and code. Training, Delivering and Learning – these are the glue for implementing your Generative AI project.
Measure the results of Generative AI
Finally, prepare to measure the effectiveness of Generative AI. The difference between measuring the impact of a traditional application portfolio and that of Generative AI lies in the subtle ways the latter can work or not work. Imagine a use case where your sales manager wants to read a summary of all inbound and outbound product demos from the last two weeks. The sales manager may want to know how prospects reacted when salespeople demonstrated a new feature or used specific words they had been instructed to use. Traditionally, the sales manager would walk the floor (or call people) to take a dipstick measurement of how people felt. At best, the CRM application that salespeople use might offer a structured screen for entering feedback right after every call. In a Generative AI world, a different approach could be that all call logs flow into an AI platform where, at the end of the day, a Generative AI model summarizes them against those goals and emails the sales manager. How do you measure the value of such a model? We can’t measure a direct impact on customers using techniques like A/B testing, and we can’t measure the effectiveness of a generated summary until the sales manager self-reports it. In general, the outcome of using Generative AI can be quite subjective and needs new ways to measure. In this case, we might measure success indirectly through a statistical experiment: have some managers use the summaries while a control group keeps the traditional approach. Further, your data scientists will likely want to run several field experiments with these summaries before they call it done. This is another way in which AI applications differ from traditional ones. Next, AI applications are not deterministically the same over time. In other words, AI applications can, and should, change frequently without users ever knowing.
This is because, as business operations continue, several attributes of a model can change – data scientists call this “drift”. Data can drift, model quality can drift, and new biases can be discovered in the field. Hence the point here: we need to come up with new ways to measure the results of Generative AI. Disclaimer: I have based this paragraph on an imagined situation drawn from real conversations, so please consider your own specific circumstances.
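As one illustration of watching for data drift, the sketch below computes a Population Stability Index (PSI), a common data-science heuristic for detecting when incoming data has shifted away from a baseline. This is a minimal, stdlib-only sketch rather than a production monitor, and the thresholds in the comments are conventional rules of thumb.

```python
import math

def psi(baseline, current, bins=10):
    """Population Stability Index between two numeric samples.
    Rule-of-thumb reading: < 0.1 stable; 0.1-0.25 moderate drift;
    > 0.25 significant drift worth investigating."""
    lo, hi = min(baseline), max(baseline)
    # Equal-width bin edges spanning the baseline's observed range.
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def proportions(sample):
        counts = [0] * bins
        for x in sample:
            counts[sum(x > e for e in edges)] += 1  # bin index for x
        n = len(sample)
        # Floor each proportion to avoid log(0) on empty bins.
        return [max(c / n, 1e-6) for c in counts]

    p, q = proportions(baseline), proportions(current)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))
```

For example, comparing last quarter's call-log feature values (the baseline) against this week's values would yield a small PSI while behavior is stable, and a large one when the incoming distribution shifts, signaling that the model may need re-evaluation before anyone notices degraded summaries.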
All of these are active topics in management science and data science research, and much remains to be discovered. Several reputable firms provide CEOs and executive managers with specific prescriptions on how to drive value with Generative AI. My goal here has been to describe, at a broader level, the three things that all managers must do to succeed with Generative AI projects: Select your projects, Implement them, and Measure them. I wish you the best.