Are you ready for AI?

When ChatGPT stormed onto the scene, it set a record: the fastest-growing consumer application in history, reaching an estimated 100 million users within two months of launch. In less than a year, GPT-4 entered the scene – rumoured to have 1.76 trillion parameters – making its predecessor look like a quaint cousin.

This pace of development – from a trite novelty that composes semi-coherent text to a bot that can pass the bar exam, all in under a year – has every boardroom buzzing about AI.

For every CDO, the capabilities that generative AI represents, for use cases from testing with synthetic data to content creation at scale, put AI firmly on the map.

Of course, AI is much more than just ChatGPT, but it’s a great example of how AI is permeating everyday life and therefore business. Running quietly in the background, AI powers everything from scheduling to speech-to-text, often without users even realising it. Its capabilities are growing at breakneck speed, and with them its scope to improve business.

As Ben Clinch, Head of Information Architecture at BT, puts it,

[AI] creates an incredible competitive advantage if you can harness it correctly, but it’s also a great threat if it’s imperfectly implemented.



Here and Now

McKinsey estimates that generative AI could add up to $4.4tn a year to the global economy, but the path to realising that lofty figure is paved with perils. Why? Because humans are involved in manifesting the unknowable. Generative AI is a tool, much like a knife. It can be used to stab someone, or it can be used to peel a potato. It can be used to generate misinformation at election-swinging speed and scale, or it can be used to make a PowerPoint slightly less dull. The sheer pace of development and adoption makes developing an AI policy and capabilities assessment a board-level imperative for the CDO.

Thankfully, Sean Russell has some pragmatic advice:

The fear or fiction that people are going to be cut out of the process is, thankfully, not true.

And it’s likely to remain that way for a while.



How is Narrow AI (ANI) used?

AI can be loosely divided into 3 types: artificial narrow intelligence (ANI or narrow AI), artificial general intelligence (AGI), and artificial superintelligence (ASI). We are focussing on the first type.

Dr Dan Ballin states that Narrow AI is

typically focussed on a specific task and has been around for decades – everything from playing a game of chess to facial recognition on the back of our phone.

Recent breakthroughs in generative artificial intelligence (as opposed to predictive artificial intelligence), such as artificial neural networks based on the transformer architecture and pre-trained on large datasets from the Internet, can generate novel, human-like content. These breakthroughs brought generative AI into the mainstream and accelerated the investment cycle in many edge-case or task-specific systems that perform ordinary tasks at superhuman speed. A good mental test for whether Narrow AI is useful to your project is to ask whether you have a) a defined problem and b) a measurable goal. If neither condition is present, AI probably is not relevant.

When we think of highly repetitive tasks that are obvious candidates for automation, or tasks otherwise impossible to complete given resource constraints, we look to “Narrow AI”. Think of recent breakthroughs like Halicin, an experimental antibiotic ‘discovered’ by Narrow AI, or AlphaGo from DeepMind, a Google subsidiary, which is not only the world’s best Go player but also taught itself to play the game in the first place.

25 years on from IBM’s Deep Blue supercomputer beating Garry Kasparov at chess, Narrow AI is injected into every aspect of our lives, digital or otherwise. What we watch, what we buy, who we date, how we get to our destination, whether we get a loan, whether we get a job interview… Almost every facet of modern life is so infused with Narrow AI that choice itself is becoming illusory. We’ll leave the philosophical pontifications to those better equipped. For now, let’s get practical.



What is the Value of Narrow AI to the Enterprise?

The answer to this question largely depends on what your enterprise values, so we’re going to start with a broad organisational imperative: increasing productivity (you want to do more, with less, faster).

If we look at the ‘productivity imperative’ as the outcome businesses use to commercially justify the deployment of resources to specific value use cases, then it’s easy to cast our minds to automation. It’s a simple way of thinking about Narrow AI. What could we achieve if we didn’t rely on human process alone? What could we build, scale, and sell? What could our humans be doing now that they aren’t needed for menial and repetitive tasks anymore?

Ben Clinch words it wonderfully,

AI helps you understand the meaning of the data, rather than just the technical aspects of it, that you can then use to further business value.

With abundant cloud processing and highly scalable data platforms, all you need are a few smart people to train machines to automatically run processes at scale, and hey presto, your share price goes up! We have identified 3 real-life use cases that relied on the ‘productivity imperative’ to produce a large amount of value but required a bit more effort to get going:


Use case 1: Tracking the movement of personally identifiable information between systems, as per GDPR Article 30 (Records of Processing Activities).

AI automates the reporting and categorisation of data subjects and their personal data.
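To give a flavour of what this kind of automation involves, here is a minimal Python sketch that scans records for common PII patterns and builds a ROPA-style summary of which categories of personal data appear in which system. The field names, regex patterns and example records are illustrative assumptions, not a description of the actual solution.

```python
import re
from collections import defaultdict

# Illustrative PII patterns only; a production system would combine a trained
# classifier with a far richer rule set and data catalogue metadata.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s-]{8,}\d"),
    "uk_postcode": re.compile(r"\b[A-Z]{1,2}\d[A-Z\d]?\s*\d[A-Z]{2}\b", re.IGNORECASE),
}

def categorise_records(records):
    """Build a ROPA-style summary: which PII categories appear in which system."""
    summary = defaultdict(set)
    for record in records:
        system = record["system"]
        for field, value in record.items():
            if field == "system" or not isinstance(value, str):
                continue
            for category, pattern in PII_PATTERNS.items():
                if pattern.search(value):
                    summary[system].add(category)
    return {system: sorted(categories) for system, categories in summary.items()}

# Hypothetical records flowing between two systems.
records = [
    {"system": "CRM", "contact": "jane@example.com", "notes": "call +44 20 7946 0958"},
    {"system": "Billing", "address": "1 High St, London SW1A 1AA"},
]
print(categorise_records(records))
# {'CRM': ['email', 'phone'], 'Billing': ['uk_postcode']}
```

The real system’s value came from running this kind of categorisation continuously and at scale, rather than relying on periodic manual audits.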


Use case 2: Analysing thousands of temperature sensors throughout the country, predicting when faults may occur and when sensors must be replaced.

AI detects the patterns that precede sensor drop-outs and uses them to monitor the data and anticipate failures.
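The underlying idea can be illustrated with a deliberately simple sketch: flag any sensor whose recent readings drift well outside its historical baseline. This is a toy stand-in for the real predictive models; the sensor IDs, readings and threshold below are invented for the example.

```python
from statistics import mean, stdev

def flag_drifting_sensors(history, recent, threshold=3.0):
    """Flag sensors whose recent average reading drifts more than `threshold`
    standard deviations from their historical baseline."""
    flagged = []
    for sensor_id, past_readings in history.items():
        baseline, spread = mean(past_readings), stdev(past_readings)
        if spread == 0:
            continue  # perfectly constant history, nothing to compare against
        drift = abs(mean(recent[sensor_id]) - baseline) / spread
        if drift > threshold:
            flagged.append((sensor_id, round(drift, 1)))
    return flagged

# Hypothetical temperature readings in degrees Celsius.
history = {
    "sensor-17": [20.1, 19.8, 20.3, 20.0, 19.9],
    "sensor-42": [20.2, 20.0, 19.7, 20.1, 20.3],
}
recent = {"sensor-17": [20.0, 20.2], "sensor-42": [26.9, 27.4]}
print(flag_drifting_sensors(history, recent))  # flags sensor-42 as drifting
```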


Use case 3: Reducing the impact of fraud on clients by flagging suspicious activity or anticipating where threats might come from.

AI monitors and flags unusual activity, effectively the existing process on steroids.
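Again, purely as an illustrative sketch (the client data, threshold and field names are invented), the core pattern is to learn what ‘normal’ looks like for each client and flag anything far outside it:

```python
from statistics import median

def flag_suspicious_transactions(past_amounts, new_transactions, multiplier=5):
    """Flag transactions wildly out of line with a client's past spending --
    a toy proxy for the anomaly-detection models described above."""
    typical = median(past_amounts)
    return [txn for txn in new_transactions if txn["amount"] > multiplier * typical]

# Hypothetical card activity for a single client.
past = [12.50, 30.00, 22.40, 18.75, 45.00]
new = [{"id": "t-901", "amount": 25.00}, {"id": "t-902", "amount": 950.00}]
print(flag_suspicious_transactions(past, new))  # flags t-902 only
```

Production systems combine many such signals, but the shape of the problem is the same: learn normal, flag abnormal, and surface it fast enough for a human to act.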



Don’t Ignore the Risks

Artificial intelligence clearly holds immense value, but let’s not sugar-coat it: there are obvious risks that must be considered. Implementation involves a near-incalculable amount of data that must be protected at all costs – safeguarding data against unauthorised use, access and sharing is of paramount importance. After all, lost data equals lost trust. And without data, and indeed trust, what do we have? Just fancy hypotheticals that can never come to fruition.

Aside from security, there’s also the major concern of integrating the chosen AI models into existing infrastructure. That change trickles down through the business as far as frontline staff, all of whom will require retraining. This is no mean feat, both in terms of cost and of morale: some may fear they’ll be replaced, and others simply won’t see the value when AI hasn’t been part of the established workplace culture. Foster the benefits within company culture, though, and that positivity can ease another risk: customer adoption. Once customers see that AI enhances their lives and personal productivity, they’ll be far more likely to welcome the new technology with open arms.

And let’s not forget that the models themselves need to be trained. Data is taken at face value, without the nuance or awareness of bias that is obvious to human beings – think of self-driving car systems that are less reliable at detecting Black pedestrians, for example.

Ben Clinch explains further,

Models rot over time, and they require continual oversight, retraining, and making sure that the datasets that they’re being retrained with aren’t introducing new biases.

These examples are not exhaustive, and as the technology develops, so may its uses and risks. We must then ask: who should be responsible for these risks and their mitigation? Laws and regulations around artificial intelligence will no doubt be agreed upon and amended as the offering diversifies and grows, but what data feeds into repositories, and how, remains bound by GDPR, at least within the EU and UK. So, it’s reasonable to posit that governments should work with experts to co-create regulations to ensure that AI won’t be used for evil. But there’s also an onus on creators and employers, right down to the users themselves.



Are You Ready for AI?

So, it raises the question: are we ready for AI? And the answer, perhaps unsurprisingly, is complex.

We will be, but it requires finesse, understanding and a myriad of security measures. Arguably, the question is moot, because AI is here, ready or not. It is affecting business, our personal lives, even our nightly Netflix recommendations. So, to paraphrase Charles Darwin, adapt or die.

Manage your data well, keep informed about the latest technologies and features at every level of access, and welcome our new robot overlords. Over the next 5 years, as AI becomes more mainstream, work must be done to reassure both the workforce and the general public that AI is a friend, not a precursor to the Terminator or some other sci-fi foe.



The Curious Pragmatist

To maintain prominence and truly modernise its offering, a business needs to embrace AI. But it’s not a case of finding your favourite program and fecklessly plumbing in data. You need to crawl before you can walk, then think about running with it. Adopt the mindset of the curious pragmatist, which is to say: approach with caution, wonder, and rationality.

Data management sits at the foundation of this imminent futurism, and going in ham-fisted will have dire effects. Embarrassing, easily avoidable mistakes will show fissures that customers won’t stick around to watch turn into cracks. So, when you think you’re ready to hit go, or you’re just AI-curious, Ortecha can help you find out just how close you are to the future. Talk to us today.
