Why Platforms Could Be The Key to Unlocking The Full Potential of Generative AI Within Your Enterprise
Image generated via Dreamstudio ( https://dreamstudio.ai/generate ), with the prompt: “A future AI enabled world”

By now, many enterprises have already invested in some sort of functional internal developer platform, and others are well on their journey to do so. Some are still wondering whether a multi-year initiative like this is worth the investment and effort, especially in the current market conditions, and how the ecosystem will shape up in light of the recent influx of Generative AI and other emerging technologies.

I don’t want to spend too much time in this article on what a Platform ( https://platformengineering.org/blog/what-is-platform-engineering ) is, as many before me have covered that topic substantially. If you want to do a deep dive into the concept of platforms and platform engineering, you can read the CNCF whitepaper at https://tag-app-delivery.cncf.io/whitepapers/platforms/.

In summary, you can think of a Platform as a set of capabilities/offerings/reusable assets/building blocks that are created and managed centrally by a Platform Engineering team and consumed by business-aligned product teams, who can build business-focused products using those reusable assets rather than building everything from scratch. In most cases, it is expected that an enterprise-grade platform will be scalable, reliable, aligned with the architecture principles of the enterprise, fully automated, and self-service to some extent.

Now that we have Platforms covered at a high level, let’s figure out why and how they can help your enterprise unlock the vast, untapped potential of AI in general, and GenAI in particular.

Governance

Just like other aspects of your enterprise systems and functions, you need complete transparency and governance for your AI applications and assets as well. Ensuring (or, in a sense, mandating) that all AI applications are built on top of an AI Platform (which is part of the overall Enterprise/IT Platform) is a great way to start this process.

With the renewed and immediate focus on GenAI technologies like ChatGPT, DALL-E, etc., business executives are desperate to get the next AI-integrated product out to market, and they might not always care about AI ethics, AI governance, transparency, and other critical aspects. But, as a firm, there needs to be some sort of AI Governance Board/Authority established to ensure that only safe, transparent, and ethical AI use-cases are pushed to production, and that proper guidelines and principles are enforced.

Most likely, your enterprise will need to publish a set of AI principles that you mandate, or at least recommend, your developers abide by. For example, Google has published its set of AI principles at https://ai.google/responsibility/principles/.

What better way to ensure governance than having a central funnel to route all AI-related requests through? The AI Platform in that sense becomes a repository of all AI requests raised across the firm; these are then assessed by the AI Authority/Governance Board, and only approved use-cases end up being deployed on the production-scale AI Platform.
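To make the funnel idea concrete, here is a minimal sketch of a central AI request registry with a review step. All class names, fields, and the toy approval rule are illustrative assumptions, not a real platform API:

```python
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    SUBMITTED = "submitted"
    APPROVED = "approved"
    REJECTED = "rejected"

@dataclass
class AIRequest:
    team: str
    use_case: str
    model: str                 # e.g. "gpt-4", "stable-diffusion"
    handles_pii: bool
    status: Status = Status.SUBMITTED

class GovernanceBoard:
    """Single funnel: every AI request is registered and reviewed here."""
    def __init__(self):
        self.registry: list[AIRequest] = []   # firm-wide audit trail

    def submit(self, req: AIRequest) -> AIRequest:
        self.registry.append(req)
        return req

    def review(self, req: AIRequest) -> Status:
        # Toy policy: PII-handling use-cases are not auto-approved;
        # a real board would apply its published AI principles here.
        req.status = Status.REJECTED if req.handles_pii else Status.APPROVED
        return req.status

board = GovernanceBoard()
req = board.submit(AIRequest("KM-team", "chat over wiki", "gpt-4", handles_pii=False))
print(board.review(req))   # Status.APPROVED
```

The point is not the toy logic but the shape: one intake path, one registry, one review gate before anything reaches production.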

I know that at this stage there will be teams and individuals who feel this will be a bottleneck, or that unnecessary regulation/governance can lead to delays, that as a firm you might lose first-mover advantage, and so on. All of that might be true (to some extent), but I really like this line by Dr. Rumman Chowdhury ( https://www.dhirubhai.net/in/rumman/ ), which explains the finer points of this argument:

It is important to dispel the myth that 'governance stifles innovation'. This is not true. In my years of industry solutions in Responsible AI, good governance practices have contributed to more innovative products. I use the phrase 'brakes help you drive faster' to explain this phenomenon - the ability to stop a car in dangerous situations enables us to feel comfortable driving at fast speeds. Governance is innovation

The specific setup of the AI Governance Board will vary based on the particular enterprise, but you can take ideas from how Google has set it up at https://ai.google/responsibility/ai-governance-operations

Deduplication

Again, like the earlier point, this is not specific to AI/ML applications; rather, it is a functional attribute of all platforms, for all types of applications. Platforms help you remove duplication to a large extent.

Let’s take an example of a common use-case across multiple enterprises. With the advent of mature conversational AI interfaces like ChatGPT, it makes sense to modernize the internal knowledge management systems of enterprises and infuse them with GenAI.

Normally, the knowledge within an enterprise is stored in disparate data sources: SharePoint sites, Teams channels, Confluence, custom portals, etc. Most enterprises design some sort of global search system that scans across all the different data/knowledge repositories and gives consumers an easy way to find answers to their questions.

If we can infuse ChatGPT-style GenAI understanding, human-like inference, and summarization capabilities into those knowledge repositories, it could enhance the overall value exponentially: users of the repository can now talk to the system in an intelligent, human-like way and get more granular, directed responses to their queries. Refer to https://www.dhirubhai.net/learning/generative-ai-the-evolution-of-thoughtful-online-search/how-finding-and-sharing-information-online-has-evolved to learn more about how enterprise search as a whole is transforming, evolving from query/keyword-based search engines to an immersive, human-like chatbot interface with intelligent inference capabilities.
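The “search across repositories, then answer with an LLM” pattern described above is commonly implemented as retrieval-augmented generation. Here is a heavily simplified sketch, with toy keyword scoring standing in for a real vector index, hypothetical sample documents, and prompt assembly in place of an actual LLM call:

```python
# Illustrative stand-ins for disparate enterprise knowledge sources.
KNOWLEDGE_SOURCES = {
    "sharepoint": ["VPN setup: install the client, then use your SSO login."],
    "confluence": ["Expense policy: claims above 500 USD need VP approval."],
}

def retrieve(query: str, k: int = 2) -> list[str]:
    """Naive keyword retrieval across all repositories; a production
    platform would use embeddings and a vector index instead."""
    docs = [d for source in KNOWLEDGE_SOURCES.values() for d in source]
    scored = sorted(
        docs,
        key=lambda d: -sum(w in d.lower() for w in query.lower().split()),
    )
    return scored[:k]

def build_prompt(query: str) -> str:
    # The assembled prompt is what you would send to a governed,
    # centrally provisioned LLM endpoint.
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("what is the expense policy?"))
```

A centrally run platform can own the retrieval index and the LLM endpoint once, so each business unit only plugs in its own content.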

This is a common use-case that many enterprises are trying to tackle. But in reality, we know how disparate and diverse the different business units in an enterprise are; most of these business teams work in silos, and if there is no common, integrated platform and process for developing AI applications, each team will most likely build its own ChatGPT-infused knowledge management application, each residing in a separate silo.

This is a complete waste of company effort, time, and money, and this is where the platform-based approach shines: when a new business use-case is requested on the AI Platform, the governance board can first check for similar use-cases, and only allow a custom, independent deployment if nothing similar exists. If there are duplicate or similar use-cases, the AI Governance Board can instead suggest that the different teams collaborate and join forces, rather than working in silos.
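As a toy illustration of that intake check, the board could run a fuzzy match of each new use-case description against the registry of approved ones. The threshold and the sample descriptions below are illustrative assumptions:

```python
from difflib import SequenceMatcher

# Hypothetical registry of already-approved use-case descriptions.
EXISTING = [
    "ChatGPT assistant over the HR knowledge base",
    "Image generation for marketing banners",
]

def find_similar(new_use_case: str, threshold: float = 0.6) -> list[str]:
    """Return existing use-cases whose description closely matches,
    so the board can nudge teams to collaborate instead of duplicating."""
    return [
        existing for existing in EXISTING
        if SequenceMatcher(None, new_use_case.lower(), existing.lower()).ratio() >= threshold
    ]

print(find_similar("ChatGPT assistant over the finance knowledge base"))
# → ['ChatGPT assistant over the HR knowledge base']
```

A real platform would likely compare embeddings of richer intake forms, but even a crude string match surfaces obvious duplicates.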

However, the efficiency of the AI Governance Board in these cases is limited to the extent of its visibility; you cannot govern, or control, what you cannot see. If a rogue team in a member firm violates the central directive and sets up its own processes to develop AI applications, the AI Governance Board won’t be able to intervene if it is not even aware of them.

This is where you need strong regulations and strict vendor agreements. For example, at an enterprise level, you might mandate that nobody can provision Azure OpenAI Studio unless the request comes through the central team.
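On Azure, for instance, that kind of mandate can be approximated with policy-as-code. Below is a sketch of an Azure Policy rule that denies creation of Azure OpenAI resources (Cognitive Services accounts of kind `OpenAI`); the central team would operate from an exempted scope. Treat this as an illustrative fragment, not a production-ready policy:

```json
{
  "policyRule": {
    "if": {
      "allOf": [
        { "field": "type", "equals": "Microsoft.CognitiveServices/accounts" },
        { "field": "kind", "equals": "OpenAI" }
      ]
    },
    "then": { "effect": "deny" }
  }
}
```

Similar guardrails exist on other clouds (AWS Service Control Policies, GCP Organization Policies), so the pattern travels.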

Azure OpenAI Studio: https://learn.microsoft.com/en-us/azure/cognitive-services/openai/quickstart?tabs=command-line&pivots=programming-language-studio

Of course, there will be deviations; for example, there is no way to prevent an employee from setting up an ML model (say, Stable Diffusion) on their laptop and playing around with it. To be transparent, a platform-focused approach should encourage individual innovation, and developers should be allowed to experiment on their own, but the path to production should always be regulated, to avoid future litigation, issues, etc.

Standardized Building Blocks

Integrating an OpenAI API call into an application workflow is just one part of the problem: normally, enterprise applications are much more complex than that. When you are designing an application for a business use-case, there are many factors involved, from front-end web applications, to data repositories like Synapse and Databricks, to an enterprise service bus for asynchronous message transfer, and so on.

Just creating an ML model might not be enough; you need to manage the full lifecycle of the model. There needs to be a central repository of trusted, approved models; any change to a model (for example, re-training or fine-tuning) must be audited via a model transparency framework; and so on.
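A minimal sketch of such a model transparency trail, using an append-only, hash-chained audit log so that tampering with history is detectable. The schema and class names are assumptions for illustration, not a standard:

```python
import hashlib
import json
from datetime import datetime, timezone

class ModelRegistry:
    """Central store of approved models; every lifecycle event is
    recorded in an append-only, hash-chained audit log."""
    def __init__(self):
        self.audit_log: list[dict] = []

    def record(self, model_name: str, version: str, event: str, actor: str) -> str:
        entry = {
            "model": model_name,
            "version": version,
            "event": event,          # e.g. "registered", "fine-tuned"
            "actor": actor,
            "at": datetime.now(timezone.utc).isoformat(),
        }
        # Chain each entry to the previous one's hash, so rewriting
        # history invalidates every later hash.
        prev = self.audit_log[-1]["hash"] if self.audit_log else ""
        payload = prev + json.dumps(entry, sort_keys=True)
        entry["hash"] = hashlib.sha256(payload.encode()).hexdigest()
        self.audit_log.append(entry)
        return entry["hash"]

registry = ModelRegistry()
registry.record("doc-summarizer", "1.0", "registered", "ml-team-a")
registry.record("doc-summarizer", "1.1", "fine-tuned", "ml-team-a")
print(len(registry.audit_log))  # 2
```

In practice you would back this with a managed ML metadata store rather than an in-memory list, but the audit discipline is the same.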

To be fair, this is too much work for a single ML team, who just want to write some Python code, train a model on a dataset, and test it out. But just because individual developers and data scientists do not enjoy doing this does not mean it is not important. This is where an internal AI Platform shines: it has all the tools and processes baked in to help data scientists become productive quickly.

Some accelerators/offerings that the AI Platform might provide/maintain centrally could be:

  • A trusted repository of curated, governed AI models
  • A trusted repository of PIA ( Privacy Impact Assessment ) approved synthetic data sets for model training
  • Automated tooling to test whether a particular model is aligned with the AI principles of the enterprise; for example, integration with the EU’s ALTAI tool: https://altai.insight-centre.org/
  • Third-party data repositories, curated based on vendor relationships or sourced from the market at a price, that can be used to enrich models
  • Workbench/sandbox environments for AI developers, ML scientists, etc to load data quickly, download AI models, and run them on special compute clusters
  • Streamlined and automated model approval/governance process to get your models certified for production quickly

And so on.

Rather than building an AI platform from scratch, you may also choose to work with strategic vendors like C3 AI ( https://c3.ai/c3-ai-platform/ ), industrialize their platforms, and enable them within your enterprise, adding functionality for self-service, consolidated billing, SLA support, and so on.

In summary, a standardized AI Platform can provide a wide array of standard assets that can be stitched together to build business products catering to real-life use-cases.

Conclusion

Image generated via Dreamstudio ( https://dreamstudio.ai/generate ), with the prompt: “A computer engineer developing AI applications for his company”

AI in general, and Generative AI in particular, is already turning out to be a key enabler for almost all businesses, and digital businesses that embrace AI will most likely be better positioned to handle volatile markets than their counterparts who are not so keen on adopting digital and emerging technologies.

But an enterprise needs a cohesive strategy for its AI roadmap; it cannot be a set of strategies operating in separate silos, but rather a unified approach that positions the enterprise for success while ensuring there are no adverse effects, like litigation or loss of trust among the consumer base. I am sure no firm wants to be subjected to a 3 billion USD lawsuit, which is what OpenAI and Microsoft are facing now ( https://www.vice.com/en/article/wxjxgx/openai-and-microsoft-sued-for-dollar3-billion-over-alleged-chatgpt-privacy-violations ).

A Platform might be the only feasible approach to allow AI experimentation and innovation at scale while also ensuring there are guardrails to work within. When I speak to developers, I have found that, in spite of the constraints and bottlenecks a central governance entity or platform approach introduces, most developers are happy to have some sort of policy in place to protect them and their code/assets from adverse impacts, potential loss of face, etc.

If your enterprise already has a platform-focused strategy, work with that team to build an AI Platform around the core platform offerings, and ensure that all AI applications, research, and innovation are anchored around this central platform.

And if your enterprise is still evaluating whether to invest in platforms, work with your leadership and educate them on how the AI strategy for the enterprise as a whole aligns with the overall platform strategy, and how an AI platform can help developers build and maintain ML/AI-enabled applications at scale.
