Vast Data operationalizing AI

Today, Keith Townsend spoke to John Mao, VP of Technology Alliances, and Neeloy Bhattacharyya, Director of AI/HPC Solutions Engineering, for #AIFD4.

They spoke about how Vast owns 6% of the flash data center market, and about the company's positive earnings results. Vast has 10 exabytes of data deployed, and a good percentage of that serves AI and HPC workloads.

Vast started as a storage platform and built features on top of it, such as a data space: a global namespace that is write-consistent. The Vast database makes data available everywhere (wherever in the data pipeline you're comfortable consuming it). Items are stored as elements and are available to everyone.
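To make that concrete, here's a toy sketch of what a write-consistent global namespace implies: a write acknowledged at one site is immediately readable at every other site. The class and method names here are mine, purely illustrative, and not Vast's API.

```python
# Toy model of a write-consistent global namespace: every site sees the
# same element as soon as a write is acknowledged. Hypothetical names,
# not Vast's actual interface.

class GlobalNamespace:
    """One logical store shared by all sites."""
    def __init__(self):
        self._elements = {}  # path -> bytes

    def write(self, path: str, data: bytes) -> None:
        # A write isn't acknowledged until it's durable in the shared
        # persistent layer, so later reads anywhere see it.
        self._elements[path] = data

    def read(self, path: str) -> bytes:
        return self._elements[path]

ns = GlobalNamespace()                                       # one namespace, every site
ns.write("/pipelines/train/batch-001", b"tokenized shard")   # site A writes
print(ns.read("/pipelines/train/batch-001"))                 # site B reads the same element
```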

Vast has a disaggregated shared-everything architecture. All data lives in the persistent layer, while the logic layer holds no state and its nodes don't talk to each other. Understanding access patterns around the data lets them pre-stage data so it's ready when it's needed.
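Here's a rough sketch of the pre-staging idea, assuming a simple sequential access pattern: after a client reads shard N, shard N+1 is staged into a fast tier before anyone asks for it. The names are hypothetical; this is an analogy, not Vast's implementation.

```python
# Access-pattern-driven pre-staging: predict the next read and pull it
# from the persistent layer into a small fast cache ahead of time.

from collections import OrderedDict

class PreStagingReader:
    def __init__(self, fetch, cache_size=4):
        self._fetch = fetch            # pulls a shard from persistent storage
        self._cache = OrderedDict()    # shard id -> data (fast tier)
        self._cache_size = cache_size

    def read(self, shard_id: int) -> bytes:
        data = self._cache.pop(shard_id, None)
        if data is None:
            data = self._fetch(shard_id)   # cold read from the persistent layer
        self._stage(shard_id + 1)          # predicted next access
        return data

    def _stage(self, shard_id: int) -> None:
        if shard_id not in self._cache:
            self._cache[shard_id] = self._fetch(shard_id)
        while len(self._cache) > self._cache_size:
            self._cache.popitem(last=False)  # evict the oldest staged shard

reader = PreStagingReader(fetch=lambda i: f"shard-{i}".encode())
for i in range(3):
    print(reader.read(i))  # shard i+1 is already staged by the time we ask
```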

Data scanning can happen in the data layer, which helps Vast "squeeze out the inefficiencies" in the data-preparation portion of an AI pipeline.
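As an analogy for scanning in the data layer, here's what pushdown looks like with pyarrow as a stand-in engine: the filter and column projection run inside the scan, so only matching bytes ever reach the data-prep step. The file name and columns are made up for illustration.

```python
# Push the predicate and projection down into the scan instead of
# loading everything and filtering in Python afterwards.

import pyarrow.dataset as ds
import pyarrow.compute as pc

dataset = ds.dataset("training_samples.parquet", format="parquet")

# The engine evaluates the predicate during the scan, skipping row
# groups whose statistics can't match; the pipeline only sees survivors.
table = dataset.to_table(
    columns=["sample_id", "label", "embedding"],
    filter=pc.field("label") == "cat",
)
print(table.num_rows)
```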

#AI #datapipeline #VastData #AIFD4



