Introducing an AI-native architecture for AI-driven transformation
It has been my pleasure to meet with valued customers and partners from across Europe, the Middle East and Africa this week at HPE Discover Barcelona to share my vision for our AI-driven future – and to hear from them about the opportunities and challenges they see in capitalizing on this transformative technology.
AI will be the most disruptive technology of our lifetime. It will accelerate opportunities at a scale never seen before, transforming the way we live and work in unimaginable ways.
We live in a world where AI will enable us to create new solutions for some of our biggest societal challenges and accelerate business transformation across every industry.
In November 2022, that promise went mainstream with the release of the generative AI model ChatGPT. It wasn't just a milestone; it was a seismic shift that shook the foundations of the AI landscape and opened a new world of possibilities.
Generative AI is the ultimate workload
We see generative AI as the ultimate, most complex, data-driven hybrid-cloud workload. It creates unique and unprecedented demands on IT and business – and introduces challenges most enterprises have never encountered before. What worked yesterday won’t get them to where they need to be tomorrow.
For example, generative AI is more demanding on infrastructure than traditional workloads. It requires much more computing power because it processes large amounts of distributed data to train and learn, unlike regular software, which follows a set routine.
Harnessing massive amounts of varied, distributed data sets requires an organization to create a data-first pipeline architecture. Traditional software works with structured, organized data; generative AI analyzes a broad mix of data, some of it unstructured and unpredictable, and therefore requires more advanced file storage and computing systems.
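To make the idea of a data-first pipeline a little more concrete, here is a minimal, illustrative sketch in Python. It is not an HPE product or reference design; the file paths and record schema are hypothetical, and it simply shows structured and unstructured sources being normalized into one stream of records that a downstream training or tuning job could consume.

```python
# Hypothetical sketch of a data-first pipeline that merges structured and
# unstructured enterprise sources into one uniform stream of records.
import csv
import json
from pathlib import Path


def structured_records(csv_path):
    """Yield rows from a structured source (e.g. a transactions CSV)."""
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            yield {"source": "structured", "text": json.dumps(row)}


def unstructured_records(doc_dir):
    """Yield raw text from an unstructured source (e.g. support tickets)."""
    for path in Path(doc_dir).glob("*.txt"):
        yield {"source": "unstructured", "text": path.read_text()}


def pipeline(csv_path, doc_dir):
    """Normalize mixed sources into a single stream of model-ready records."""
    yield from structured_records(csv_path)
    yield from unstructured_records(doc_dir)


if __name__ == "__main__":
    # Placeholder local paths standing in for distributed enterprise sources.
    for record in pipeline("transactions.csv", "tickets/"):
        print(record["source"], record["text"][:60])
```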
With Gen AI, the pace of change is also much faster. Foundation models constantly learn, which means they must be retrained and tuned regularly to remain accurate and current.
In addition, deploying generative AI into a business process is complex. It requires adjusting an organization’s operating model and training its team to work with it effectively.
Given all this, it’s no surprise that, according to a study by Accenture, 73% of enterprises are prioritizing AI over all other digital investments, yet 89% say that they need help scaling models into production. These statistics tell us that being successful is not easy and requires the right approach and the right partner.
HPE’s role in enabling AI-driven innovation
That’s why I believe HPE has a very important role to play in this wave of AI-driven innovation.
For many years, HPE has supported some of the world’s largest and most sophisticated AI-related projects, including researching cures for diseases, forecasting climate change, and making autonomous cars safer for our streets.
We know how to help organizations create and adopt the right AI strategy – one that begins with creating a data-first pipeline to feed data from the sources their models require. One that provides the unique software, infrastructure and services to process, train, tune and deploy models. One that is hybrid by design, recognizing that data may live anywhere – and inferencing often needs to happen at the edge, where business transformation takes place and outcomes are delivered. And one that is sustainable from the start.
A new AI-native architecture for the entire AI lifecycle
To address these needs, HPE has developed an AI-native architecture strategy that is simple to deploy and consume. It starts with our HPE GreenLake cloud platform as the unified experience across the AI lifecycle.
The AI lifecycle starts with building, training, and then tuning AI models to an organization’s unique needs. Once the model is deployed and integrated into an enterprise’s environment, the final step is inferencing: applying what the model has learned to new data inputs to make decisions in real-world scenarios.
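As a small, hedged illustration of the tune-and-infer stages described above, the sketch below uses the open-source Hugging Face libraries to adapt a pretrained model to labeled data and then run a single inference. The base model, the public dataset standing in for an enterprise’s private data, and the hyperparameters are placeholder assumptions for illustration only, not part of any HPE offering.

```python
# Minimal sketch of "tune, then infer" with open-source tooling.
# Model name, dataset and hyperparameters are illustrative placeholders.
import torch
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Choose a small pretrained foundation model to adapt.
model_name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Tune: adapt the pretrained model to labeled data (a tiny public slice here).
dataset = load_dataset("imdb", split="train[:1%]")
encoded = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, padding="max_length"),
    batched=True,
)
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="tuned-model", num_train_epochs=1),
    train_dataset=encoded,
)
trainer.train()

# Infer: apply the tuned model to a new input, as would happen in production.
model.eval()
inputs = tokenizer("The new service resolved my issue quickly.", return_tensors="pt")
inputs = {k: v.to(model.device) for k, v in inputs.items()}
with torch.no_grad():
    predicted_class = model(**inputs).logits.argmax(dim=-1).item()
print("Predicted class:", predicted_class)
```

In practice, the same pattern scales from this toy example up to fine-tuning large foundation models on private enterprise data, with the heavy lifting moved onto purpose-built training infrastructure.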
Each stage of the AI lifecycle places its own unique demands on data, IT resources, and software toolsets. HPE has the technology and expertise to meet them, built from decades of experience as the global leader in supercomputing – an essential component for training the largest AI models, including Large Language Models (LLMs).
However, most enterprises will begin as model users, integrating public and private data to fine-tune existing foundation models for their unique enterprise needs and deploying inferencing solutions with speed, accuracy, and efficiency.
HPE’s full portfolio of solutions enables organizations to adopt AI across training, tuning and inferencing – from our proven expertise in building systems designed for extreme scale and performance, to implementing and integrating models into existing environments, all the way to high-throughput edge solutions close to where the data is being created.
Through the HPE GreenLake cloud platform, we deliver a consistent experience, providing the resources required to be successful in each stage of the AI lifecycle. The platform enables organizations to leverage the right mix of data, models, tools, and AI-native compute and storage resources to rapidly and sustainably create and train models, tune them and drive inference, and ultimately deploy them securely into production.
An AI-native approach across the AI lifecycle isn’t just an idea for the future; it’s an experience made possible by HPE today.
Enabling our customers to pursue AI-driven transformation
This week in Barcelona, HPE extended our strategic collaboration with NVIDIA by announcing a new generative AI full-stack solution, co-engineered and pre-configured specifically for enterprises to quickly fine-tune foundation models using private data, that can be deployed everywhere. This solution builds on the innovation we introduced with NVIDIA several weeks ago, when we announced a turnkey, preconfigured supercomputing solution for generative AI to streamline the model development process.
Together, HPE and NVIDIA are in a unique and strategic position to deliver comprehensive AI solutions for enterprise customers that will dramatically ease their journey to develop and deploy AI models.
We also announced new AI-native and hybrid cloud offerings at HPE Discover Barcelona this week that bring together HPE’s leadership in hybrid cloud, supercomputing and AI/ML software to enable organizations to become AI-powered businesses.
It was a great moment to host so many customers and partners in Barcelona this week as we imagined the future and showcased the power of HPE’s portfolio that will help enterprises embrace that future.
In the last several years, the world around us has undergone massive transformation. Revolutionary technology like AI used to be a once-in-a-lifetime event. This century, it’s more like once every 10 years. With AI, we are moving even faster. For more than 80 years, HPE has served as a strategic partner through these transformations.
I believe HPE is distinctly set apart from any other technology company – delivering the next breakthrough AI innovations that will enable our customers to address the biggest business and societal challenges through one unified hybrid cloud experience. I look forward to partnering with our customers on this exciting journey.
This article originally appeared on the HPE Newsroom.