Step by Step - A quick checklist to manage your AI strategy
When building AI into a solution, or building a solution around AI, how often do teams get excited about the opportunity rather than considering the wider implications?
I built this simple model/checklist to help shape more holistic thinking before leaping on the AI bandwagon.
AI seems to have become synonymous with Large Language Models, but there's clearly more to it than that, given all the flavours of AI available to us.
But companies looking to leverage LLMs in particular are finding the overheads of creating custom or private models quite prohibitive and are gravitating back towards the usual suspects.
This comes with an overhead to consider, and not just a financial one, for whatever AI technology you are considering or already using.
So here are a few things to make sure you consider, using the 'STEP ONCE' checklist.
STRATEGY
How much is AI supporting your actual strategy, as opposed to 'techwashing'? Saying you have AI built into your solution may well get you in front of investors or pique the interest of potential customers who themselves want to invest in AI, but what problem is it actually helping to solve, and how does it empower people or your organisation?
TRANSPARENCY
If you can't be transparent about how you're implementing AI, your approach, your sources of data and so on, this is likely to cause you challenges. Even anything proprietary or unique to you can be shared in ways that don't compromise your IP. If it looks like you have something to hide, people will fill in the blanks.
EXPLAINABILITY
Connected to the previous point, explainability is a key element of working with AI models. This is of particular relevance to machine learning, where there tends to be an inverse correlation between how explainable a model is and how performant it is at its job. The more accurate the outputs, the harder they are to explain once you're in the realms of neural networks.
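To make that trade-off concrete, here's a minimal sketch, assuming a tabular classification problem, scikit-learn and synthetic data: an interpretable logistic regression whose coefficients can be read directly, next to a higher-capacity random forest that often scores better but offers only aggregate importances rather than a readable explanation.

# Minimal sketch of the explainability/performance trade-off (synthetic data).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Interpretable model: each coefficient maps directly to a feature's influence.
simple = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Logistic regression accuracy:", accuracy_score(y_test, simple.predict(X_test)))
print("Readable coefficients:", simple.coef_.round(2))

# Higher-capacity model: often more accurate, but no single readable equation.
forest = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)
print("Random forest accuracy:", accuracy_score(y_test, forest.predict(X_test)))
print("Only aggregate importances available:", forest.feature_importances_.round(2))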
POWER
Some problems need the brute force of powerful models and the servers they run on, as the processing can be intense. But you don't need a sledgehammer to crack a nut. As tempting as it can be to throw the kitchen sink at a problem, sometimes it's just a case of pointing the best-fit AI at the right problem, framed correctly.
OPTIMISATION
Connected to the previous two points, there is an optimal model for the problem you have. Your data science teams should be advising you on the sweet spot that gives you all the accuracy and performance you need, beyond which the gains are marginal and explainability reduces. With all due respect to the incredible technical people I have worked with, there can be a temptation to over-engineer a solution. You must also factor in what your approach will be to fine-tuning and optimising your models going forward. How will you account for new data points? What happens when you see performance levelling off or even reducing?
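As a rough illustration of finding that sweet spot, the sketch below (again assuming scikit-learn and a synthetic dataset) sweeps model capacity and watches validation accuracy; once the gain per step is marginal, extra capacity is mostly adding cost and opacity.

# Sketch: sweep model capacity and look for diminishing returns (synthetic data).
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=3000, n_features=20, random_state=1)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=1)

previous = 0.0
for n_trees in (25, 50, 100, 200, 400, 800):
    model = GradientBoostingClassifier(n_estimators=n_trees, random_state=1)
    model.fit(X_train, y_train)
    score = accuracy_score(y_val, model.predict(X_val))
    print(f"{n_trees:>4} trees: validation accuracy {score:.3f} (gain {score - previous:+.3f})")
    previous = score
# When the gain column flattens out, you've probably passed the sweet spot.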
NUANCE
Particularly if you are working in a very specific domain, you must factor in the nuances. General models will get you so far, but where many people come up against barriers is where they have very specific use cases that require custom work. The better you can explain and account for these nuances, the better able you are to tackle them, even with 'standard' tools.
COMMERCIALISATION
It should go without saying, but building models 'at all costs' to enter the AI 'arms race' has already caught a number of organisations out. Chasing the pot of gold because you 'must have an AI solution' and ignoring the commercial realities will only come back to bite you. OpenAI can clearly keep asking for obscene amounts of money as pioneers pushing the domain forward, but the rest of us have to face the reality of commercialising the value of what we build. I often refer to a great example of doing this from a previous role, building predictive models for consumer buying behaviour. Rather than build out all of the models we wanted, we picked one or two that we hypothesised would have the most impact. We built these offline with the data we had available and simply presented the output score of the model in a very simple interface. Once we had fine-tuned things and got traction and success with customers, we could then expand the models, build the engine into the product, develop the interface and so on, and the cost to get there was low and predictable.
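A minimal sketch of that 'one model, simple surface' approach might look like the following, assuming a customer extract with behavioural features and a purchase label; the column names and file paths are hypothetical placeholders, not a description of the actual system.

# Sketch: train one propensity model offline, export a plain score per customer.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

customers = pd.read_csv("customer_history.csv")  # hypothetical offline extract
features = customers[["recency_days", "frequency", "avg_order_value"]]
label = customers["purchased_next_quarter"]

X_train, X_test, y_train, y_test = train_test_split(features, label, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Holdout accuracy:", model.score(X_test, y_test))

# Surface a single score per customer in a very simple interface; no live engine yet.
customers["buy_propensity"] = model.predict_proba(features)[:, 1].round(3)
customers[["customer_id", "buy_propensity"]].to_csv("scores_for_ui.csv", index=False)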
ETHICS
Much of this seems likely to be introduced via regulation, but it certainly helps to have a strategy and approach to ethics for your own use case. It can be guided by the likes of GDPR or CCPA if you consider data governance and appropriate use of individual data. Unfortunately, AI regulation is slow compared to the pace of development, so an ethical AI policy is a great way to get ahead of it, and it can be informed by many of the points I've raised here, such as explainability and transparency. It's true that the big players such as OpenAI, Google and Microsoft have greater philosophical ethics challenges to grapple with in terms of the power of AI or AGI (Artificial General Intelligence), but some of these become your own concerns if you decide to build on their technologies versus a more private or closed system.
This is just a brief overview of points to consider that some have found helpful. If you'd like a more expansive discussion about any of this, I'd be happy to have it.