Don’t go big if you don’t need big: how to keep infrastructure fit for purpose as AI and HPC innovation picks up pace
Saravanan Krishnan
Business Leader | Sales Coach | Tech Enthusiast | People Developer Digital Transformation * AI * Cloud * SaaS * Analytics * Cyber Security
I was shopping in a gadget store recently - shopping is a national pastime in Singapore, after all :) - and spotted a cool new device that was really interesting. After giving it some thought, I decided to buy it and bring it home. My wife took one look and said, ‘Don’t you have this one already?’ I had been so excited to get this new ‘toy’ that I hadn’t realized one was already sitting at home, waiting to be used!
And there are similar stories in the world of technology. We are excited to read and hear about new technologies, and our leaders ask us about the most talked-about innovations, but it is quite possible we already have the capabilities we need. Or, as is often the case with High Performance Computing (HPC) and AI, we may already have something that is fit for purpose. My advice is ‘don’t go big if you don’t need big’. Of course, you can define big however you like - market opportunity, investment, or deployment scale - but we should stand firm when we know that the route we have planned is the right one.
AI and HPC are seen as ‘the same but different’
Thanks to continued innovation in HPC - especially around sustainability - and the global momentum behind GenAI, I am seeing more intensity around the HPC-versus-AI conundrum. In truth, much of what we recommend to customers remains the same despite these innovations: it is all about extracting the best value from data, and managing that data in the right way to do so. But there are important nuances to note.
Both technologies enable the fast processing of vast amounts of data for business impact, often solving complex problems. It is not either-or: AI/ML workloads can run on HPC systems and drive the expansion of computing capabilities, and HPC is becoming more advanced at enabling this while also offering efficiency gains. HPC and AI will continue to evolve and remain useful for solving large-scale problems. But let’s be clear - while they overlap, they are not interchangeable either.
As we delve deeper into GenAI, we do need to look at tailored solutions based on specific business outcomes. I believe right-sizing starts with performance metrics. I always seek to understand: what is the performance your business really needs? That is one question technology experts can ask of their business leaders when the inevitable questions about new frontiers appear. And when it comes to GenAI, it is about bringing AI to your data, not the other way around.
My checklist
Not exhaustive, but here are some things to consider when you are looking at these two technologies:
1. Focus on creating or harnessing analytics platforms, data lakehouses and existing AI tools, without changing your current course towards AI - especially if that course continues to meet your business needs.
2. Think about the scale of the data - I specialize in unstructured data solutions, so I know all too well the need to build infrastructure that can scale for the future. HPC and AI environments deal with massive amounts of data, so this is critical.
3. Further to this, think about the workload type, the use case, the AI model, the parameters and so on. Together, these indicate the requirements for your specific use case.
4. Security and data privacy – how important is knowing where the data is? Does your industry have regulatory requirements that guide this? And how about responsible and sustainable AI?
5. Consider the ecosystem you need access to – what technologies need to be working together and are you planning towards an infrastructure that will allow this?
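To make point 3 concrete, a rough back-of-envelope calculation can show how the model and its parameter count translate into infrastructure requirements. The sketch below is illustrative only: the bytes-per-parameter figures and the ~16-bytes-per-parameter training multiplier (weights, gradients and Adam optimizer states in mixed precision) are common rules of thumb, not vendor guidance, and real deployments also need headroom for activations, KV caches and batch size.

```python
# Back-of-envelope sizing: approximate GPU memory needed for a model.
# All figures are rules of thumb for illustration, not vendor guidance.

BYTES_PER_PARAM = {"fp32": 4, "fp16": 2, "int8": 1, "int4": 0.5}

def inference_memory_gb(params_billions: float, precision: str = "fp16") -> float:
    """Approximate memory (GB) just to hold the weights for inference."""
    # billions of params x bytes per param = gigabytes
    return params_billions * BYTES_PER_PARAM[precision]

def training_memory_gb(params_billions: float) -> float:
    """Approximate memory (GB) for mixed-precision training with Adam:
    ~16 bytes per parameter (weights + gradients + optimizer states)."""
    return params_billions * 16

if __name__ == "__main__":
    # A hypothetical 7B-parameter model: serving it in fp16 is a very
    # different infrastructure problem from fine-tuning it.
    print(f"7B inference (fp16): ~{inference_memory_gb(7):.0f} GB")
    print(f"7B training (Adam):  ~{training_memory_gb(7):.0f} GB")
```

Even this crude estimate illustrates the article's point: serving a mid-sized model can fit on a single modest GPU, while training the same model demands a far larger footprint - so the use case, not the hype, should size the infrastructure.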
For some these are easy questions, for some they are harder, depending on where your organization is on the maturity curve.
Why do I say go big only if you need to go big? Because you don’t need a Ferrari to deliver a pizza, and you might not need that new gadget either!
If you want to read some of our examples of HPC and AI deployment, check out this story on NHN Cloud, which selected Dell Technologies to power South Korea’s AI data centre.
In December, we announced updates to our GenAI portfolio with AMD Instinct Accelerators. You can find out more about that here.
You might also want to watch this video with Dell and NVIDIA on how to bring AI to your data.
For other information on our HPC and AI offerings, these links will be useful:
Solution Sales Director @ GSC Technologies | Driving Business Transformation with deep Techno-Commercial Strategy | Skilled Leader | Lifelong Learner | Data & AI Enthusiast
1y · A great article to get some perspective on joining the AI race, especially for businesses! In my opinion, the majority of companies, despite having analytics tools and data lakes, are likely struggling to locate, classify and access the right kind of data before they can decide whether it’s worth investing in. Either way, I believe Dell’s UDS platform will always be a safe bet, making it a building block to enable their AI journey.
CEO & Managing Partner at INCITE | Co-Founder at GGC | Seasoned tech exec helping Asia’s largest organisations transform | Passionate Leader | Advisor | Mentor | Accelerator | Leadership Coach | Keynote Speaker
1y · Totally agree Charles
CTO Data Platforms & AI - Dell Technologies
1y · Great points Sara! I am seeing so many companies deploying new GPU accelerator farms for GenAI workloads. They are led to believe they will require new HPC scratch storage. However, most AI workloads are compute-bound, and well-designed scale-out NAS solutions can easily keep those expensive GPUs "well fed" without adding another storage silo above the data lakehouse. #PowerScale