The Pilot to Production journey in Generative AI

Over the Year: Three Lessons Learned

We have come a long way with Generative AI. A year ago, I hosted a Stanford Computer Science Professor Emeritus at a meeting of a think tank I run - the Executive Technology Board, a group of some 100-plus chief technology executives from Fortune 1000 global corporations. That conversation, with Yoav Shoham, is posted here: /digitally

From those early days of Generative AI, and reflecting back, what's been really interesting is watching the discussion evolve from "what could we do with Generative AI" to "what should we do with Generative AI". This is not just a play on words. After a year of experimentation, incubation, POCs, and pilots, the discussion has shifted to where we are going to reliably get a return on capital, and what we have learned from the bumps along the way. There are three areas in which we are the wiser today.

Build the right data foundation

Without a proper data foundation, we can't even get started with AI. Data is the food for AI, and this means that access to relevant, high-quality, clean, and well-governed data is key.

But data is the last unsolved enterprise problem. And data is no longer a technology play - the technology is in fact the easier piece. Many companies are well down their data platform journey, with key decisions on infrastructure and modern data stacks behind them, but data cleansing continues to be "a journey, not a destination". In reality, this is a two-part journey: first cleaning up current data sets, and then designing the new data build to be first-time right.

Data engineering (and the work around data management, data governance, data dictionaries, data ontologies and data quality) can almost be likened to “data janitorial services”. And ownership, accountability, data strategy, capital allocation, and organization design are super important to get right.
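To make the data engineering work above concrete, here is a minimal sketch of automated data-quality rules. The rule set, field names, and scoring function are all hypothetical illustrations, not a reference to any particular data-quality product; a real deployment would run such checks inside a governed data-quality framework with ownership and escalation attached.

```python
# Minimal sketch of automated data-quality checks.
# Rules and field names (customer_id, email, amount) are hypothetical examples.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class QualityRule:
    name: str
    check: Callable[[dict], bool]  # returns True if the record passes


RULES = [
    QualityRule("customer_id present", lambda r: bool(r.get("customer_id"))),
    QualityRule("email well-formed", lambda r: "@" in r.get("email", "")),
    QualityRule("amount non-negative", lambda r: r.get("amount", 0) >= 0),
]


def score_records(records: List[dict]) -> Dict[str, float]:
    """Return the pass rate for each rule across a batch of records."""
    totals = {rule.name: 0 for rule in RULES}
    for record in records:
        for rule in RULES:
            if rule.check(record):
                totals[rule.name] += 1
    n = max(len(records), 1)
    return {name: passed / n for name, passed in totals.items()}
```

Publishing pass rates like these per business unit is one simple way to give the data officers described below an objective scoreboard for "fixing core data quality".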

Best-in-class organizations have driven data quality by shifting all operating reviews onto the data within core applications only, not derivatives on management slides - and as a result have driven significant focus on fixing core data quality. How you organize for data is also important: great examples of scaled CDO organizations place a data officer in each business, reporting up to the CDO and accountable for that business unit's data.

Reimagine business processes and manage change

True innovations come at the intersection of disciplines, and with Generative AI all the more so. We have learned that unlike automation - where, once completed, processes remain the same, albeit faster, cheaper, and more scalable - AI fundamentally changes the end-to-end process, and the work that remains for human colleagues is entirely different from what they did before. As a result, reimagining and redesigning the business process around the toolkits now available - as opposed to retrofitting Generative AI into existing processes - is a critical component of the pilot-to-production journey, as is thinking through the change management and team skilling needed to take on the new work.

The other insight, from Generative AI deployments across many industries, is that the power of and returns on Generative AI projects do not come from technology alone - they come from business and IT teams working together. One critical success factor is picking the right business problems to address - effectively crowdsourcing inputs from those closest to the business, and ensuring teams start with the questions, not the answers. Another is the allocation of capital (and management mindshare) - balancing large investments in new technologies while at the same time reducing costs and legacy debt. The widespread interest in Generative AI across corporate boards is often a great opportunity to drive more investment in transformational AI projects. Finally, Lean as a methodology serves as a good framework for balancing ideas against execution, with a clear focus on redesigning operating processes.

How we become more intentional with change management and employee reskilling so we can drive better adoption is critical to ensuring return on investments in Generative AI. Technology is no longer the long pole in the tent – talent, skills and culture are. The best talent for Generative AI is often dispersed and decentralized – sitting in the business. Prompt engineering skills are generally lacking at the levels needed, and often to be found outside of computer science fields. As a result, preparing talent for Generative AI and the long arc of technology is a critical success factor for corporations.

The right Tech Stack and governance framework

The evolving landscape of technology players - foundation models, tools and utilities, and application providers - requires enterprises to make strategic, well-reasoned capital allocation decisions among the many possible approaches. At the highest level, being intentional about approach is important: whether to use publicly available tools like ChatGPT; build your own Generative AI stack within your firewalled cloud, with prompt engineering and fine-tuned models; or take advantage of Embedded AI solutions, where existing application vendors pre-package Generative AI features within their apps. Often this comes down to timing, control, fit, and line-of-work integration.

In addition, when it comes to specific large language model choices for building out Generative AI applications, more and more enterprises are choosing to hedge their risks and run two or more LLMs in parallel to mitigate the rapid pace of technology change - what's best today may not be best tomorrow. This in turn requires a front end with a consistent, great user experience, backed by an extensible framework whose API interfaces can pull from multiple models. There is also a real concern that, with the increasing concentration of power among providers - from LLMs to GPUs - vendor lock-in increases exposure to economic, security-concentration, and associated financial risks.
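The extensible back end described above can be sketched as a thin abstraction layer. The class and method names below are hypothetical stand-ins, not any vendor's actual API; in practice each provider class would wrap a real SDK client, and routing policy (cost, latency, quality, fallback) would be far richer.

```python
# Sketch of a provider-agnostic LLM interface: the front end calls one API
# while the back end can route to any model. All names here are illustrative.
from abc import ABC, abstractmethod
from typing import Dict, Optional


class LLMProvider(ABC):
    @abstractmethod
    def complete(self, prompt: str) -> str: ...


class ProviderA(LLMProvider):  # stand-in for one vendor's API client
    def complete(self, prompt: str) -> str:
        return f"[provider-a] {prompt}"


class ProviderB(LLMProvider):  # stand-in for a second, parallel vendor
    def complete(self, prompt: str) -> str:
        return f"[provider-b] {prompt}"


class LLMRouter:
    """Single entry point for the front end; providers swap without UI changes."""

    def __init__(self, providers: Dict[str, LLMProvider], default: str):
        self.providers = providers
        self.default = default

    def complete(self, prompt: str, provider: Optional[str] = None) -> str:
        return self.providers[provider or self.default].complete(prompt)


router = LLMRouter({"a": ProviderA(), "b": ProviderB()}, default="a")
```

Because the front end only ever sees `LLMRouter.complete`, retiring one model and onboarding another becomes a back-end configuration change rather than a rebuild - which is the point of hedging across parallel LLMs.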

Organizations that think through evolving regulatory frameworks and lead a fair, equitable, and inclusive implementation of AI will come out ahead. Technology organizations are seeing a significant uptick in regulations related to critical infrastructure, privacy, security, and data in the context of Generative AI - the financial services industry, at the forefront of the regulatory dynamic, provides a glimpse of what's to come more broadly. The challenge is to work through new laws and ethical-use policies affordably while maintaining speed and innovation.

One thing is for sure though, we are only getting started - and what we do going forward will dwarf what we have done so far with Generative AI.

Sunil Kumar Gupta

Assistant Vice President - Digital Leader at Genpact

3 months

Great Insight and articulation.

Ashok Krishnamoorthy

Entrepreneurial Growth Strategies

3 months

Nice insights. Would you agree that a good ‘Data strategy/foundation’ has been a pre-requisite, although a big enterprise challenge for all applications, even pre-AI. Of course, as you said, you can’t even get started with AI/Gen AI without that. With Gen AI expectations already sky high, any thoughts on how one should look at striking the right balance between accomplishing early AI/Gen AI successes in not-so-strategic projects vs embarking on strategic programs that could potentially deliver the highest business value?

Melville Carrie

Digital | Product | Data | Ai | Fellow | Views my own

3 months

Sanjay Srivastava - bingo - you captured everything on my mind. Of particular note is the separation of layers you mentioned - front-end, middle, and back-end - to "hedge" bets, as you put it. This becomes distinctly important when differing business needs (entities within one org structure) require different outcomes that one-size-fits-all models or tech stacks cannot offer, yet centralisation of the provision and ultimately governance of AI (and data) is required - so decoupling these layers for interoperability, both now and (as you mentioned) in the future, is paramount. The ethical dimension - specifically "inclusion" - is also front of mind, as it has been for many a CDataO; it is now sharply in focus for ensuring "augmenting the human in the loop" is done fairly and equitably. And finally, you know I wouldn't disagree that Lean is the underlying principle of making something more effective. Re-engineering a service/product to achieve either an existing outcome or re-imagining a new (better?) one requires those principles - else, like you said, you have a quicker process that may cost less, but is it the right "outcome"? Thanks for sharing.

Ramit Tyagi

Technology Hiring Leader with two decades of experience in setting up successful Hiring Teams for GCC, Product-engg, Tech StartUps & Services Orgs, with a focus on Early Career and enhancing Diversity and Veterans

3 months

Very well articulated Sanjay
