AI: Moving beyond the hype to answer questions every business leader should ask
Artificial intelligence — specifically Generative AI and Large Language Models (LLMs) — was the hottest topic of 2023. It gripped the business world and the public imagination, promising to deliver change surpassing the scale and speed of the Industrial Revolution, which took more than a century to reach its full impact.
But a new year is typically a period of both reflection and looking forward. As we move to the end of Q1 2024, it is time to take a more sobering view of AI. So, what questions must senior leadership teams ask themselves as they continue to shape their AI strategies and plans?
The year of AI — heralding a new era
Paul Bevan, Director of Infrastructure Research at Bloor Research, recently remarked: “It’s an understatement to say that 2023 has been the year of AI”. I know my friend will forgive me when I say that 2023 was, in fact, the year of the return of AI.
As Paul and most of us know, artificial intelligence has been around since the 1950s, so what was so special about 2023? Why did AI explode back onto the world stage and not earlier? I’ve spent a great deal of time on this question, and I believe the answer is twofold. The first is the Transformer model, a type of Deep Learning that surfaced in a 2017 paper called Attention is All You Need by a team from Google Brain, Google Research, and a University of Toronto student.
In brief, Transformers are Neural Networks – and don’t have any connection with the joint Hasbro/Takara Tomy franchise! They are in fact computer systems loosely modelled on the human brain and nervous system that learn context and meaning by tracking relationships in sequential data. The Transformer model is the bedrock for natural language processing (NLP) tasks; its ability to translate text and speech in near-real-time was a watershed moment in the evolution of AI. Today, Transformers are the dominant architecture for training LLMs.
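For the technically curious, the mechanism at the heart of a Transformer – attention – can be sketched in a few lines. Below is a minimal, illustrative NumPy version of scaled dot-product self-attention; it is a toy sketch to show the idea, not how any production LLM is implemented:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core Transformer operation: each position mixes information
    from every other position, weighted by learned relevance."""
    d_k = Q.shape[-1]
    # Similarity scores between every query and every key,
    # scaled to keep softmax gradients well-behaved
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax over the keys turns scores into attention weights
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output row is a weighted blend of the value vectors
    return weights @ V, weights

# Toy example: a "sentence" of 4 tokens, each an 8-dimensional vector
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
out, w = scaled_dot_product_attention(x, x, x)  # self-attention: Q = K = V
print(out.shape)         # (4, 8) – one enriched vector per token
print(w.sum(axis=-1))    # each row of attention weights sums to 1
```

This is the "attention" the 2017 paper's title refers to: instead of reading a sequence strictly left to right, every token can attend to every other token at once, which is what makes the architecture both powerful and highly parallelisable on modern hardware.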
The second part of the answer is more prosaic: investment, although the story takes an interesting twist. You might expect that the mighty Google would have commercialised this technology, but it didn’t. Instead, it took a relatively unknown startup called OpenAI, established just two years before the paper’s publication, to conceive of ChatGPT (the GPT stands for Generative Pre-trained Transformer) and take neural networks into the stratosphere. Its founders were a group of tech visionaries; one name most people will recognise is Elon Musk, although he’s since severed his links with OpenAI.
A sliding doors moment
Microsoft is a significant player, having made a series of investments in OpenAI. Its initial cash injection of $1B in 2019 was crucial for the development of ChatGPT because, without it, OpenAI would have been slower to release it or possibly even lost the race.
Moreover, Microsoft’s involvement was pivotal in another way because its Azure public cloud offered the compute capacity OpenAI needed to build the LLMs. The rest, as they say, is history – and it’s still in the making with the recent firing and then rehiring of OpenAI CEO Sam Altman.
The tech corporation’s latest investment is a reported $10 billion. Rumours suggest Microsoft may receive a 75 per cent share of OpenAI’s profits until it recoups its stake, after which it will have a 49 per cent share in OpenAI. The drama behind this deal is worthy of a blockbuster TV series, and I have no doubt one of the entertainment streaming services is already on the case!
Popular use cases for AI
The genie is out of the lamp; there can be no doubt that AI is here to stay and will profoundly affect how organisations from all sectors operate and society in general. Let’s take a quick tour of some of the most popular use cases cited by Gartner.
Big questions for business leaders remain
While Gartner’s use cases are compelling, organisations must carefully navigate their journey. As the world faces an environmental crisis, sustainability is top of mind for most business leaders. While many praise artificial intelligence as a sustainability accelerator, this doesn’t automatically make all AI good for the environment. It’s a highly nuanced picture, and it will take significant joined-up thinking and international collaboration before we strike the right balance.
Fans of Jaws will remember one of the movie’s most iconic lines murmured by a shocked Chief Brody: “You’re gonna need a bigger boat.” LLMs consume colossal reserves of compute power and energy. Consequently, enterprises considering building their own foundation model need to think about these resources, the impact this will have on the environment, and, indeed, their IT budgets.
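To make the “bigger boat” point concrete, here is a rough back-of-envelope estimate of training cost, using the widely cited heuristic of roughly 6 × parameters × tokens FLOPs for dense Transformer training. Every number below is an illustrative assumption for the sake of the arithmetic, not a vendor figure or a real project’s bill:

```python
# Back-of-envelope training cost for a hypothetical in-house foundation model.
# All inputs are illustrative assumptions, not vendor or benchmark figures.
params = 7e9        # assumed 7-billion-parameter model
tokens = 1e12       # assumed 1 trillion training tokens
flops = 6 * params * tokens  # common ~6 * N * D heuristic for training FLOPs

peak_flops_per_gpu = 312e12  # assumed peak FLOP/s of a data-centre GPU
utilisation = 0.4            # assumed real-world hardware utilisation
gpu_seconds = flops / (peak_flops_per_gpu * utilisation)
gpu_hours = gpu_seconds / 3600

power_kw_per_gpu = 0.7       # assumed draw per GPU incl. cooling overhead
energy_mwh = gpu_hours * power_kw_per_gpu / 1000

print(f"{flops:.1e} FLOPs, ~{gpu_hours:,.0f} GPU-hours, ~{energy_mwh:,.0f} MWh")
```

Even with these deliberately modest assumptions, the result lands in the tens of thousands of GPU-hours and tens of megawatt-hours – and frontier-scale models are orders of magnitude larger, which is exactly why the budget, skills, and sustainability questions below matter.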
Moreover, those considering an in-house model will need to factor in skills, security, and whether they have deep enough pockets to fund all of this before concluding whether it is a viable option.
Certainly, Small Language Models (SLMs) may be an alternative path, but on balance, it may be better to leave the sustainability concerns to the vendors, who, not surprisingly, are going all out to ensure they have sufficient compute resources to run AI-centric hyperscale data centres. Microsoft, for example, is pushing forward with a plan to power its AI using nuclear reactors, which goes some way to explaining a job advert I heard about for a nuclear power programme manager! And Nvidia, which became a $1 trillion company thanks to the AI boom, plans to purchase or generate enough renewable energy to match 100% of its global electricity usage. It’s also encouraging to see new techniques emerging to help reduce the energy AI models devour.
Not forgetting the regulations…
There are also the contentious issues surrounding regulating AI. On 9 December 2023, the European Union released the final version of what it claims is the world’s first-ever comprehensive legal framework on Artificial Intelligence: the EU AI Act. Meanwhile, the UK Government has established five principles underpinning the UK’s AI regulatory approach. Across the pond, the US does not have a comprehensive federal law that specifically regulates the development, deployment, and use of AI. However, some AI-related laws and policies apply to specific sectors or domains, such as healthcare, defence, transportation, education, and privacy.
In October last year, New York City unveiled its Artificial Intelligence Action Plan, drawing its light from many lamps, including the National Institute of Standards and Technology (NIST).
The United States has also imposed export controls on some AI chips and the equipment used to manufacture them to limit China’s access to advanced semiconductors it could use for military or anti-human rights purposes. The affected chipmakers include Nvidia, AMD, and Intel, which produce the most popular chips for the AI industry.
Meanwhile, CISOs are scrambling to release guidance on using public chatbots, such as ChatGPT, and implement data protection tools in fear of employees unwittingly releasing company confidential data into the public domain. Despite its age, this Forbes article on how to use AI ethically is a helpful place to start.
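As a purely illustrative sketch of the kind of guardrail CISOs are reaching for, a pre-submission filter might mask obvious identifiers before a prompt ever leaves the corporate boundary. The patterns below are hypothetical examples of my own; real data loss prevention tooling covers far more cases and typically uses trained entity recognition rather than simple regexes:

```python
import re

# Illustrative patterns only - a real DLP product would handle many more
# identifier types (names, phone numbers, API keys, customer IDs, ...).
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "IBAN":  re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def redact(prompt: str) -> str:
    """Replace matched identifiers with a label before the prompt
    is sent to a public chatbot."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(redact("Contact jane.doe@example.com re account DE44500105175407324931"))
# -> Contact [EMAIL] re account [IBAN]
```

A filter like this is no substitute for policy, training, and proper tooling, but it illustrates the principle: confidential data should be caught before it enters a public model, not after.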
Or the talent deficit…
There is also the perennial challenge of talent acquisition and retention, a topic oft-discussed, and the dearth of AI skills only serves to compound the problem. According to some reports, the demand for AI skills has grown by 71 per cent in the past four years, but only 0.8 per cent of the global workforce is skilled in AI. These skills are difficult to assess in candidates, especially with traditional recruitment methods. Moreover, AI experts face a dynamic and evolving field where new tools, techniques, and applications are constantly emerging. This landscape means they must keep learning and updating their skills, which can be costly and difficult for individuals and employers. One solution is to outsource talent. By their nature, managed services providers typically share their expertise with multiple clients, which affords economies of scale and cost savings. And the upside? Boosting job creation: the job market is already brimming with some interesting AI-related roles.
What path should business leaders take?
The above brings me to another pivotal question for C-suites: how will your organisation consume AI? Specifically, what will you use it for? While the market and technology remain very fluid, I see three core options for consuming artificial intelligence.
I wouldn’t be surprised if there are more variations and options in the coming months – there are no market-agreed standards. Much remains up in the air, and it will be fascinating to see how it lands. If I pen another blog at the end of this year, I suspect it will look completely different – the landscape is moving that fast!
A must-have AI checklist for enterprises
In summary, enterprises should consider the following as a bare minimum when planning for AI as part of their IT and business strategy:
Succeed with AI in partnership
As the AI landscape evolves at near-warp speed, every business can benefit from external expertise. From providing a second opinion on your AI strategy to the infrastructure and tools you need, we can support you.
With a proven pedigree in the AI space, our established international team of experts can provide:
Get in touch if you’d like to start a conversation, or connect with me on LinkedIn for more insights.
To start your AI journey with the right foundations, you can take advantage of one of our special starter packages here.
Read my previous article on why a cloud strategy is vital and how a Cloud Center of Excellence could drive it.
About the author: Richard Simon, CTO at T-Systems International, is a Cloud Native and Open Source advocate with over 33 years of IT industry experience. He has worked in various roles related to Cloud Computing over the past 10+ years.
A highly structured, respected, and accomplished IT professional, he has accumulated a vast set of experiences ranging across systems engineering, field engineering, solutions and cloud architecture, project management, and consulting as a trusted advisor.
He has worked for a number of prestigious IT service providers, such as IBM, Lenovo, SUSE, Mirantis, Heptio, World Wide Technology, and most recently, Contino.
Richard runs the YouTube channel Cloud Therapist and enjoys making audiences think during his public speaking engagements at various Cloud Native, DevOps, and Security conferences.