Source: Chris Mellor (Blocks and Files)
VDURA CEO Ken Claffey believes the company should be classed alongside DDN, VAST Data, and WEKA as an extremely high-performing and reliable data store for modern AI and traditional HPC workloads.
However, storage buyers need to re-evaluate VDURA, he argues, because the PanFS software that was the basis of Panasas's HPC success has been completely overhauled since Claffey became CEO in September 2023. The company changed its name to VDURA in May 2024 to reflect its transformation and focus on data velocity and durability.

Claffey says VDURA combines the stable and linear performance of a parallel file system with the resilience and cost-efficiency of object storage.
At its core, VDURA's microservices-based VDP (VDURA Data Platform, the overhauled PanFS) layers a parallel file system over a base object store, with clients accessing data through the file system. On top of that sit a unified global namespace, a single control plane, and a single data plane. Metadata is managed by VeLO (Velocity Layer Operations), a distributed key-value store running on flash storage, while the object store defaults to HDD.
Virtualized Protected Object Device (VPOD) storage entities reside on the HDD layer, with erasure coding applied both within each VPOD and across a VDURA cluster for data durability. The VeLO software runs on scale-out 1U director nodes built on VDURA's own hardware: AMD EPYC 9005 CPUs, Nvidia ConnectX-7 network interface cards, Broadcom 200 Gb Ethernet, and Phison Pascari X200 PCIe NVMe SSDs.
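To illustrate the erasure coding idea only (VDURA has not published the exact coding scheme it uses inside a VPOD or across a cluster), the sketch below shows the simplest possible form: a single XOR parity block computed across a stripe of data blocks, which allows any one lost block to be rebuilt. Production systems typically use multi-parity codes such as Reed-Solomon rather than plain XOR.

```python
# Minimal sketch of single-parity erasure coding across a stripe of blocks.
# Illustrative only; not VDURA's actual scheme.

def xor_blocks(blocks: list[bytes]) -> bytes:
    """XOR equal-length blocks together to form a parity block."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

def encode_stripe(data_blocks: list[bytes]) -> list[bytes]:
    """Return the stripe with one parity block appended."""
    return data_blocks + [xor_blocks(data_blocks)]

def rebuild_missing(stripe: list[bytes], missing_index: int) -> bytes:
    """Recover a single missing block by XORing all surviving blocks."""
    survivors = [blk for i, blk in enumerate(stripe) if i != missing_index]
    return xor_blocks(survivors)

# Example: lose one data block from a 4+1 stripe and rebuild it.
data = [bytes([i] * 8) for i in range(4)]
stripe = encode_stripe(data)
assert rebuild_missing(stripe, 2) == data[2]
```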
VDP presents a unified namespace in which Director Nodes handle metadata and small files via VeLO and route larger data through VPODs. The Director Nodes manage the file-to-object mapping, providing seamless integration between the parallel file system and object storage, and also offer S3 access.
VPODs can run on hybrid flash-disk nodes or on all-flash V5000 storage nodes (F-Nodes). The Hybrid Storage Nodes pair the same 1RU server used in the Director Node with 4RU JBODs running VPODs, giving cost-effective bulk storage with high performance and reliability.
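A rough sketch of how such a split might look in practice follows. The threshold, class names, and store interfaces are hypothetical, not VDURA APIs; they only illustrate the described arrangement of metadata and small files in a flash-backed key-value layer, with larger files chunked into objects on a disk-backed object layer and the file-to-object mapping held alongside the metadata.

```python
# Hypothetical sketch of metadata/small-file vs. bulk-data routing.
# All names and figures here are assumptions for illustration.

SMALL_FILE_LIMIT = 64 * 1024   # assumed threshold, not a published figure
CHUNK_SIZE = 8 * 1024 * 1024   # assumed object chunk size

class KeyValueStore:
    """Stand-in for a flash-backed metadata / small-file store (VeLO-like)."""
    def __init__(self):
        self._kv = {}
    def put(self, key, value):
        self._kv[key] = value
    def get(self, key):
        return self._kv[key]

class ObjectStore:
    """Stand-in for an HDD-backed bulk object store (VPOD-like)."""
    def __init__(self):
        self._objects = {}
    def put(self, object_id, data):
        self._objects[object_id] = data
    def get(self, object_id):
        return self._objects[object_id]

def write_file(path: str, data: bytes, kv: KeyValueStore, objects: ObjectStore):
    if len(data) <= SMALL_FILE_LIMIT:
        # Small files live entirely in the key-value layer.
        kv.put(("inline", path), data)
        kv.put(("meta", path), {"size": len(data), "inline": True})
        return
    # Larger files are chunked into objects; the mapping itself is metadata.
    object_ids = []
    for offset in range(0, len(data), CHUNK_SIZE):
        object_id = f"{path}#{offset}"
        objects.put(object_id, data[offset:offset + CHUNK_SIZE])
        object_ids.append(object_id)
    kv.put(("meta", path), {"size": len(data), "inline": False, "objects": object_ids})

def read_file(path: str, kv: KeyValueStore, objects: ObjectStore) -> bytes:
    meta = kv.get(("meta", path))
    if meta["inline"]:
        return kv.get(("inline", path))
    return b"".join(objects.get(oid) for oid in meta["objects"])
```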
Each F-Node is a 1RU server chassis holding up to 12 x 128 TB NVMe QLC SSDs for 1.536 PB of raw capacity, powered by an AMD EPYC 9005 Series CPU with 384 GB of memory. Nvidia ConnectX-7 Ethernet SmartNICs provide low-latency data transfer and high-speed front-end and back-end expansion connectivity.
Forthcoming ScaleFlow software will allow “seamless data movement” between high-performance QLC flash and high-capacity disk.
VDP is a software-defined, on-premises offering using off-the-shelf hardware, and is being ported to the main public clouds. Support for GPUDirect Storage (GDS), RDMA, and RoCE v2 is due this summer.
Claffey says predictions from 3–5 years ago about QLC flash prices dropping to HDD levels have not come true. He tells us: “Enterprise flash would go from 8x to 6x to 4x and then all geniuses were saying, oh, it’s going to go to 2x and then 1x. Remember those forecasts? And then the reality is, the opposite happened. There was no fundamental change in the cost of the drive … Now if you go look at it, go to Best Buy, go wherever you want to go, the gap between a terabyte HDD and a terabyte SSD is close to 8x.”
The upshot, he argues, is that a tiered flash-disk architecture is needed to deliver flash speed at disk economics. VDURA wants to build the best, most efficient storage infrastructure for AI and HPC. It doesn't intend to build databases; that sits too high up the AI stack from its storage infrastructure point of view. Instead, it will make itself open and welcoming to all AI databases.
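A back-of-envelope illustration of that argument uses the roughly 8x per-terabyte price gap Claffey cites together with an assumed 10 percent flash / 90 percent disk capacity split. Both the absolute prices and the tier ratio below are illustrative assumptions, not VDURA figures.

```python
# Illustrative blended-cost arithmetic for a tiered flash/disk system.
# The ~8x flash-to-disk gap is the figure Claffey quotes; the $/TB price
# and the 10/90 capacity split are assumptions for illustration only.

hdd_cost_per_tb = 15.0                  # assumed $/TB for HDD capacity
ssd_cost_per_tb = 8 * hdd_cost_per_tb   # ~8x gap cited in the article

flash_fraction = 0.10                   # assumed share of capacity on flash
disk_fraction = 1.0 - flash_fraction

blended = flash_fraction * ssd_cost_per_tb + disk_fraction * hdd_cost_per_tb
print(f"All-flash:    ${ssd_cost_per_tb:.2f}/TB")
print(f"Tiered 10/90: ${blended:.2f}/TB "
      f"({ssd_cost_per_tb / blended:.1f}x cheaper than all-flash)")
```

Under these assumptions the tiered system lands at roughly $25.50/TB versus $120/TB for all-flash, which is the economics argument behind keeping bulk capacity on disk.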
VDURA believes it will be the performance leader in this AI/HPC storage infrastructure space. Early access customers using its all-flash F-Nodes, which go GA in September, say it’s very competitive.
Claffey says VDURA wins bids against rivals, citing a US federal procurement run through a large system integrator. The SI evaluated several competing suppliers of parallel-access storage systems that had to feed large x86 and GPU compute clusters, supporting one of the largest US defense compute clusters, with sub-millisecond latency at massive scale. The bids covered a multi-year rollout with phased performance milestones: phase 1, scheduled for 2025, requires 20 PB of usable capacity and sustained throughput exceeding 100 GBps, while phase 2 in 2026 moves up to around 200 PB of usable capacity and 2.5 TBps of sustained performance.
VDURA bid a system with V5000 all-flash nodes for performance and HDD extensions for bulk capacity, and was selected by the SI because it matched the performance and capacity needs. The company claims it beat a rival on performance and TCO, with a better TB-per-watt rating and a lower carbon footprint than its competitors.
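For a sense of scale, here is a rough, hypothetical sizing of phase 1 against the F-Node figures quoted earlier. The 1.536 PB raw per node comes from the article; the usable-capacity fraction and per-node throughput are assumptions, not disclosed numbers, so the result is only indicative.

```python
# Back-of-envelope phase 1 sizing. Raw capacity per F-Node is from the
# article; usable fraction and per-node throughput are assumptions.
import math

required_usable_pb = 20.0
required_throughput_gbps = 100.0        # GB/s sustained

raw_per_node_pb = 1.536                 # 12 x 128 TB QLC SSDs per F-Node
usable_fraction = 0.75                  # assumed erasure-coding/overhead factor
throughput_per_node_gbps = 40.0         # assumed sustained GB/s per node

nodes_for_capacity = math.ceil(required_usable_pb / (raw_per_node_pb * usable_fraction))
nodes_for_throughput = math.ceil(required_throughput_gbps / throughput_per_node_gbps)

print(f"Nodes needed for 20 PB usable: {nodes_for_capacity}")
print(f"Nodes needed for 100 GB/s:     {nodes_for_throughput}")
print(f"Sized cluster: {max(nodes_for_capacity, nodes_for_throughput)} F-Nodes")
```

With these assumptions, capacity rather than throughput drives the node count, which is why bulk HDD extensions behind a smaller flash tier can make sense for the later, larger phases.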
The company reckons it matches DDN and IBM Storage Scale on performance, and claims its system is reliable and easy to use and manage.