The Battle for Compute Locality – Computational Storage to Surpass Persistent Memory
Battle Lines are Drawn, but Not Required

In my last article, I mentioned that a need for change in storage is coming. I called it “Computational Storage,” and since that post, the SNIA effort mentioned there has been formalized and I was chosen to lead it. So what is next, and how do we carry this line of thinking into 2019? I think it is a good time to address the Compute Locality debate going on in the market today. Here is my friend Tom Coughlin's view from early 2017 to help set the stage. After all, it is good to look at what other complementary tech is out there to support, augment, or further drive this Computational Storage wave.

The trick is what type of storage we are thinking of when we look at each route for compute. You see, many people are excited about another popular 2018 topic, “Persistent Memory.” How does this relate? What do you think is being used to make that 'memory' persistent? Storage-class memory, meaning it is just storage in a new wrapper at the end of the day. So we have to look at compute in new ways too.

There are three places that we are finding ‘compute’ going:

  1. Status Quo: Staying ‘as is’ in the host CPU, à la Intel/AMD.
  2. Accelerated: Offloading work to GPU and FPGA accelerators, via Nvidia/Intel/Xilinx, and even to persistent memory like Intel Optane.
  3. Intelligent/Innovative: Moving compute into the storage itself, via Computational Storage; NGD Systems is key to this success, along with others in SNIA (see the sketch after this list).
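
To make that contrast concrete, below is a minimal sketch of the difference between routes 1 and 3. It is a toy: the CSDrive class and its count_matches method are hypothetical stand-ins for a computational storage device, not a real vendor or SNIA API.

```python
# Toy comparison of host-side compute vs. in-storage compute.
# CSDrive is a hypothetical illustration only, not a real device API.

HOST_BUS_BYTES = 0  # track how much data crosses the storage interface

def host_side_count(blocks, needle):
    """Status quo: ship every block to the host CPU, then filter there."""
    global HOST_BUS_BYTES
    count = 0
    for block in blocks:
        HOST_BUS_BYTES += len(block)  # the whole block moves over the bus
        count += block.count(needle)  # the host CPU does the work
    return count

class CSDrive:
    """Hypothetical computational storage device: the data stays put,
    and only the query and the (tiny) result cross the interface."""
    def __init__(self, blocks):
        self._blocks = blocks  # data resident on the drive's media

    def count_matches(self, needle):
        global HOST_BUS_BYTES
        # This scan runs "inside" the drive, next to the media.
        result = sum(b.count(needle) for b in self._blocks)
        HOST_BUS_BYTES += 8  # only an 8-byte count returns to the host
        return result

if __name__ == "__main__":
    data = [b"error ok ok", b"ok error error", b"ok ok ok"] * 1000
    print(host_side_count(data, b"error"), HOST_BUS_BYTES, "bytes moved")
    HOST_BUS_BYTES = 0
    print(CSDrive(data).count_matches(b"error"), HOST_BUS_BYTES, "bytes moved")
```

The point of the toy: the host-side path moves every byte across the storage interface before the CPU can touch it, while the in-storage path moves only the query and the answer.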

The relevance here is that with storage and storage products, there is no longer just one way to look at it. We need to think about why we have diverging tech, the value of that tech, and of course the ease of using it. After all, Persistent Memory has a lot going on: summits, multi-vendor support coming, and multiple media types in the way of PCM, MRAM, RRAM, FeRAM, and more. These have all been targeted at moving to the memory bus, replacing the DRAM we use today and using protocols like DDR4 and soon Gen-Z. All of this means impact to system design, use, and technology shifts. For me, this highlights the continued need to bring storage closer to compute, and while this method is doable and usable, it is expensive and requires more ‘user interaction’ than the average architect may expect. After all, we have only just started seeing products, and this work began back with DDR2 bus designs. So why all the hype? Why all the attention? One might say it has to do solely with the names backing the tech. And those names are the same ones that drive underutilized and overpriced platforms still focused on a core design from decades ago.
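
For context on that ‘user interaction’: here is a minimal sketch of what byte-addressable persistent memory access looks like to an application, assuming a hypothetical file on a Linux DAX-mounted persistent memory filesystem. Production code would typically use PMDK and CPU cache-flush instructions; mmap.flush() is used here as a portable stand-in for the durability step.

```python
import mmap
import os

# Minimal sketch: direct load/store access to persistent memory via mmap,
# assuming a hypothetical DAX mount point. Unix-only constants are used.
PMEM_PATH = "/mnt/pmem/example.dat"  # hypothetical mount point and file
SIZE = 4096                          # one page

fd = os.open(PMEM_PATH, os.O_RDWR | os.O_CREAT, 0o600)
os.ftruncate(fd, SIZE)               # size the backing file
pm = mmap.mmap(fd, SIZE, mmap.MAP_SHARED,
               mmap.PROT_READ | mmap.PROT_WRITE)

pm[0:5] = b"hello"                   # an ordinary memory store, no read()/write() syscalls
pm.flush(0, SIZE)                    # ensure the store is actually durable

pm.close()
os.close(fd)
```

Even in this simple form, the application has to think about mapping, alignment, and durability itself, which is exactly the extra work that slows broad adoption.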

That is why an alternative is coming up even faster, and one I am excited about. Computational Storage, in all its iterations, is gaining more and more traction, and in a much shorter adoption time frame. Why is this, you ask? It has to do with architecture change, or rather the lack thereof, and the need for systems to start anew with compute moving down, not storage moving up. Think about it: we have spent decades building systems that require a CPU upgrade to handle some very mundane tasks. Then we say, well, let’s offload that work to a GPU, again not saving any money and still spending the storage dollars. Now we finally have a way to give those CPU resources back for true mission-critical work, reduce the need (sorry, guys) for GPU and FPGA accelerators, and be more efficient, useful, and fiscally responsible about our storage spend. After all, if you look at IT spend, really look, what is the ‘largest growth’ item? Not in cost or change, but in simple density: storage! Let’s make it work for the system, not against it.

Computational Storage now comes in a variety of packages, much like persistent memory: you can get ‘FPGA accelerated’ solutions, host-CPU-managed media, and fully integrated, drop-in solutions, which for some solid reasons are my favorite. Think about it: the more moving parts, the more change, the slower the adoption. The ability to have a single solution providing compute acceleration, storage management, and high capacity seems to be the best of all worlds. Even my friends at Gartner agree, as the fully integrated solution from NGD Systems was selected as a Cool Vendor in Storage Technologies. That says a lot about the value and worth of this approach.

Coming in January, alongside the Persistent Memory Summit mentioned above, is a full-day SNIA Symposium session dedicated to Computational Storage... It has truly arrived!

In the next segment, we will delve into the versions and the value of the solutions available in both persistent memory and Computational Storage, to see the ways both can coexist in the new platform development coming in 2019.

Long story short, Computational Storage has the ability to create change more efficiently, at lower cost, and with faster adoption than any other 'compute'-focused change in history! Join the party and ride the wave!


