12 Essential Steps To Be AI-Ready


The Truth About AI

True, AI can significantly boost performance and revenues by transforming organisations in many different ways, from how they engage with customers to how they recruit people and manage their finances. In fact, it’s predicted to boost UK GDP by 14% by 2030.

But honestly speaking, AI is anything but plug-and-play, which may be why an astonishing 85% of AI projects fail to deliver the expected business value.

Something doesn’t add up. You seemingly have the data and access to the AI models, so what’s wrong? Well, let’s go back to the data for a second—because that’s where AI projects normally go off the rails.


Don’t Forget The Data

Broadly speaking, companies either don’t have enough data, aren’t using it in the right way, have major quality issues, or simply don’t have the right systems to store and warehouse it. We see the same problems time and again.

When it comes to artificial intelligence, getting the foundations right is absolutely critical. AI isn’t a quick process and there isn’t a ‘one size fits all’ solution; this is a long-term strategic investment in your business which will improve over time. However, all this relies on the quality of your model inputs, namely data. If you’re not getting this right, you’re already setting yourself up for failure (and a lot of wasted time, effort and money).

In this blog we’re going to tell you how to prepare your data for AI success. With our 12 steps, you’ll be able to navigate AI with confidence and start harnessing the full power of the technology for better business results.

Here we go.

Discover how to ace your omnichannel analytics with our latest ebook.

#1 Data Volumes

Generally, AI algorithms require significant volumes of data – we really can’t emphasise this enough. However, just how much will depend on the AI use case you’re focused on. One figure often referred to is the need for 10x as many rows (data points) as there are features (columns) in your data set. The baseline for the majority of successful AI projects is normally more than 1 million records for training purposes.

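To make those rules of thumb concrete, here is a minimal, illustrative Python sketch for checking whether a training set clears them. It assumes your data is already loaded into a pandas DataFrame called df; the thresholds mirror the figures above and are guidelines, not hard requirements.

import pandas as pd

def check_training_volume(df: pd.DataFrame,
                          rows_per_feature: int = 10,
                          min_rows: int = 1_000_000) -> list[str]:
    """Flag a data set against the volume heuristics quoted above."""
    warnings = []
    n_rows, n_features = df.shape
    # Heuristic 1: at least 10x as many rows (data points) as features (columns)
    if n_rows < rows_per_feature * n_features:
        warnings.append(f"Only {n_rows:,} rows for {n_features} features "
                        f"(below the {rows_per_feature}x rule of thumb).")
    # Heuristic 2: the ~1 million record baseline for training
    if n_rows < min_rows:
        warnings.append(f"{n_rows:,} records is under the ~{min_rows:,} training baseline.")
    return warnings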

#2 Data History

Let’s say you want to use AI for demand forecasting or for marketing mix models. In this case, at Jarmany, we recommend having at least 3 years’ worth of data; otherwise, your model will just repeat the previous year’s outputs. It stands to reason that for AI to detect and predict events better than we can, it needs to work with plenty of historical data to uncover the patterns and anomalies that matter.

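As an illustrative sketch only: a quick way to check how much history you actually hold, assuming a pandas DataFrame df with a date column (the name order_date below is hypothetical).

import pandas as pd

def years_of_history(df: pd.DataFrame, date_col: str = "order_date") -> float:
    """Return the span of the data set in years, from its earliest to its latest date."""
    dates = pd.to_datetime(df[date_col])
    return (dates.max() - dates.min()).days / 365.25

# Hypothetical usage:
# if years_of_history(df) < 3:
#     print("Under 3 years of history - the model risks simply echoing last year.")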

#3 Data Relevance

Depending on your use case, you’ll also need specific data sets for your algorithm. For example, a marketing mix model aims to measure the impact of various marketing inputs on sales and market share, so you’ll need data sets such as previous years’ sales, marketing performance and budget allocations.


#4 Data Quality

We’ve put this at #4 but maybe we should have put it at #1. It’s massive. If the quality of the data you’re feeding into your AI model is poor, you can bet the model’s output will be poor too.

In short, many companies face data quality issues, so there’s every chance your unsuccessful AI project will do nothing more than put a broader issue under the spotlight. Not a bad thing.

So, how do you go about achieving data quality? Essentially, you’re going to have to go through your data and ensure it doesn’t suffer from any of the following (a simple automated check is sketched after this list):

  • Inconsistency
  • Duplication
  • Inaccuracy
  • Outdatedness
  • Irrelevancy
  • Incompleteness
  • Lack of governance

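To make that list actionable, here is a minimal, illustrative Python sketch of the kind of automated audit worth running first. It assumes a pandas DataFrame df and a last_updated column (a hypothetical name); it only touches duplication, incompleteness and outdatedness, and real quality work, including governance, goes much further.

import pandas as pd

def quality_audit(df: pd.DataFrame,
                  date_col: str = "last_updated",
                  stale_after_days: int = 365) -> dict:
    """Count some of the quality issues listed above."""
    report = {
        "duplicate_rows": int(df.duplicated().sum()),          # Duplication
        "missing_values": int(df.isna().sum().sum()),          # Incompleteness (cell level)
        "incomplete_rows": int(df.isna().any(axis=1).sum()),   # Incompleteness (record level)
    }
    if date_col in df.columns:
        age = pd.Timestamp.now() - pd.to_datetime(df[date_col])
        report["stale_rows"] = int((age.dt.days > stale_after_days).sum())  # Outdatedness
    return report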

#5 Data Understanding

Whilst we place a massive emphasis on data quality (and rightly so), having a large volume of high-quality data doesn’t count for much if you don’t have a solid understanding of it. By this we mean understanding what the data relates to, what it is telling you, and being able to identify patterns and trends, as well as spikes, dips and outliers in your performance.

Additionally, when it comes to data, it’s key that you understand what’s happening within the wider business so you can apply business context to the data. For example, if you’re seeing a dip in sales performance, can this be attributed to seasonality, or perhaps a stock or distribution issue?


#6 Data Labelling

This is pretty much as it sounds: annotating your data, whether it’s images, text, video or audio, so your learning model can find “meaning” in the information. It’s important to remember that labelling, like the next step we’ll go on to talk about, should come after you’ve ensured the data quality.

Labelling is essentially a manual step done by software engineers, and the last thing you want is for an engineer to waste their time labelling duplicated, inaccurate or irrelevant data.
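
Purely as an illustration of what a single labelled record might look like once annotated (the field names below are hypothetical, not a standard schema):

# One labelled record for, say, an image-classification task.
# The "label" field is the annotation an engineer adds by hand.
labelled_example = {
    "asset_id": "img_000123",                # hypothetical identifier
    "modality": "image",                     # image, text, video or audio
    "source_path": "images/img_000123.jpg",  # hypothetical location
    "label": "product_packaging",            # the human-assigned annotation
    "labelled_by": "annotator_07",
    "labelled_at": "2024-03-15",
}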


To continue reading steps 7-12, click here.

Join forces with Jarmany to turn your 2024 goals into reality.

