What Technology Infrastructure Do You Need For Artificial Intelligence (AI)?
Bernard Marr
Internationally Best-selling #Author | #KeynoteSpeaker | #Futurist | #Business, #Tech & #Strategy Advisor
When I work with companies and organizations that are interested in artificial intelligence and what it can do for them, some of the most frequent questions I am asked are:
· What technology do we need to start working with AI?
· What are our infrastructure needs?
· How do we need to rethink our current approach to information technology?
Recently I got the chance to sit down and talk to Ivo Koerner, VP of IBM Systems, and I took the opportunity to get his views on the subject.
Part of the issue is that because AI as we know it today – which generally means machine learning, deep learning, and neural networks – is such a new and fast-developing field, there are as yet no hard-and-fast rules about how to do it.
And because until very recently it has generally only been very large, well-resourced companies that have been able to get involved in the "AI gold rush," there aren't a lot of examples out there to learn from!
During our conversation, Ivo told me that there are two primary requirements for any organization wanting to move towards a more automated and intelligent approach to doing business and running their essential processes and operations. Firstly, there is the need for enough (and the correct type of) compute power to carry out the high-speed, high-volume number-crunching that makes AI happen.
The second requirement – and this is the one that generally needs the most careful consideration – is the data itself. AI today usually means machine learning – literally computers that are able to learn for themselves and become increasingly good at carrying out tasks. This learning requires data – usually lots of it, and the data must be accurate, up-to-date, and easily accessible.
Discussing the compute requirements first, Ivo explained that the hardware typically involves two kinds of data-processing technology – CPUs (central processing units) and GPUs (graphics processing units). A CPU is the regular, everyday computer chip that can simply be thought of as the “brains” of any computer. It carries out logical operations and simple mathematics on the data it is fed – and, in the case of modern CPUs, it does this very, very quickly.
GPUs are more specialized hardware, originally designed to carry out the complex mathematical operations needed to generate high-end computer imagery of the type seen in movies and video games. During the last decade, it became apparent that these specialized maths chips are also highly suitable for machine learning workloads.
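To make that difference concrete, here is a minimal sketch in Python (assuming PyTorch is installed) that times the same large matrix multiplication on the CPU and, if one is available, on the GPU. The matrix size is an arbitrary choice for illustration, not a benchmark from our conversation.

```python
import time
import torch

def time_matmul(device: str, size: int = 4096) -> float:
    """Time one large matrix multiplication on the given device."""
    a = torch.randn(size, size, device=device)
    b = torch.randn(size, size, device=device)
    if device == "cuda":
        torch.cuda.synchronize()  # make sure setup work has finished before timing
    start = time.perf_counter()
    _ = a @ b
    if device == "cuda":
        torch.cuda.synchronize()  # wait for the GPU kernel to complete
    return time.perf_counter() - start

print(f"CPU: {time_matmul('cpu'):.3f} s")
if torch.cuda.is_available():
    print(f"GPU: {time_matmul('cuda'):.3f} s")
```

On typical hardware the GPU run finishes in a small fraction of the CPU time, which is exactly the kind of high-volume number-crunching advantage that matters when training machine learning models.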
For both your compute and your data storage requirements, however, an important decision needs to be made early on in the process. Are you going to host all of the infrastructure yourself, or are you going to rely on one of the readily-available cloud-based platform-as-a-service (PaaS) providers?
Relying on a cloud provider (three examples being IBM Cloud, Amazon Web Services, and Google Cloud) may seem the obvious way to go. Initial setup costs are likely to be lower, and the platforms are built to scale as your company requires – while you pay by the hour, or by volume of data, for the service you use.
Certainly, this approach has made it much simpler for many smaller and medium-sized operations to experiment with and deploy AI-driven tools and services that would have been cost-prohibitive before cloud services were available.
However, consideration should always be given to the future needs of the organization. As the amount of compute resource and data storage you're using goes up, public cloud can get disproportionately expensive compared to maintaining your own infrastructure.
"It's a very important decision you need to take," Ivo tells me. "As the models you build get bigger, you will get to a kind of breaking point – because a public cloud environment may even be more expensive than procuring the infrastructure you need.
“On the other side, if you start investing in your own infrastructure … it's not a big investment, so the money you need to spend compared to other parts of your IT infrastructure is small … it gives you more freedom and more speed, and if you do this over six, 12, or 18 months in a public cloud environment, you may end up spending even more."
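To see what that breaking point can look like, here is a back-of-envelope sketch in Python. Every figure in it is a hypothetical placeholder rather than a number from Ivo or any provider; swap in your own cloud rates, workload, and hardware costs before drawing conclusions.

```python
# Hypothetical inputs – replace with your own figures.
CLOUD_RATE_PER_GPU_HOUR = 3.00        # assumed $/hour for one cloud GPU
GPU_HOURS_PER_MONTH = 2_000           # assumed monthly training workload (e.g. several busy GPUs)
ON_PREM_SERVER_COST = 60_000          # assumed purchase price of a multi-GPU server
ON_PREM_RUNNING_COST_PER_MONTH = 1_500  # assumed power, hosting, and support per month

cloud_monthly = CLOUD_RATE_PER_GPU_HOUR * GPU_HOURS_PER_MONTH

for month in range(1, 25):
    cloud_total = cloud_monthly * month
    on_prem_total = ON_PREM_SERVER_COST + ON_PREM_RUNNING_COST_PER_MONTH * month
    if cloud_total >= on_prem_total:
        print(f"With these assumptions, cloud overtakes on-prem cost in month {month}.")
        break
else:
    print("With these assumptions, cloud stays cheaper over 24 months.")
```

On these made-up numbers the crossover lands at around month 14 – roughly the six-to-eighteen-month window Ivo describes. The point is not the specific figures but that the comparison is worth running before your models, and your bills, get big.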
Whatever solutions you pick for your compute and data storage – and it may be a hybrid of cloud and on-site infrastructure – the most important consideration is how well the two elements of your AI infrastructure play together.
"The compute system that you have, in the cloud or on-premises, needs very fast access to that data. You need much faster access than in a transactional system … so you really need to think about how you get the data as close as you can – and as fast as you can – to your machine learning model training technology", Ivo says.
Get both of these elements right, and you've made a good start at building the essential technology requirements for running AI and machine learning.
For more insights, you can watch my full interview with Ivo Koerner below, and you can learn more about AI-infused infrastructure here.
Thank you for reading my post. Here at LinkedIn and at Forbes I regularly write about management and technology trends. I have also written a new book about AI; click here for more information. To read my future posts simply join my network here or click 'Follow'. Also feel free to connect with me via Twitter, Facebook, Instagram, Slideshare or YouTube.
About Bernard Marr
Bernard Marr is an internationally best-selling author, popular keynote speaker, futurist, and a strategic business & technology advisor to governments and companies. He helps organisations improve their business performance, use data more intelligently, and understand the implications of new technologies such as artificial intelligence, big data, blockchains, and the Internet of Things.
LinkedIn has ranked Bernard as one of the world’s top 5 business influencers. He is a frequent contributor to the World Economic Forum and writes a regular column for Forbes. Every day Bernard actively engages his 1.5 million social media followers and shares content that reaches millions of readers.
For more on AI and technology trends, see Bernard Marr’s book Artificial Intelligence in Practice: How 50 Companies Used AI and Machine Learning To Solve Problems and his forthcoming book Tech Trends in Practice: The 25 Technologies That Are Driving The 4th Industrial Revolution, which is available to pre-order now.