
VDURA Adds RDMA Capability to Provide AI Apps With More Context

March 17, 2026, Techstrong — VDURA this week added a Remote Direct Memory Access (RDMA) capability to its parallel storage platform, a first step toward a tiering capability that optimizes context for artificial intelligence (AI) applications and agents while bypassing CPU bottlenecks.

At the NVIDIA GTC 2026 conference, the company also announced that the VDURA Data Platform now supports AMD EPYC Turin processors and NVIDIA ConnectX-7 networking adapters. Erik Salo, vice president of product marketing and operations for VDURA, said the ultimate goal is to extend the capabilities of the company’s parallel file storage system in a way that cost-effectively surfaces the right data at the right time for AI applications that require context to generate more reliable output. RDMA makes it possible to transfer data directly between graphics processing units (GPUs) and storage systems, providing the high-speed access that AI training and inference workloads require, noted Salo.

VDURA is further extending those capabilities to dynamically manage data placement across multiple tiers of storage based on the workload characteristics and access patterns of an AI application or agent. Additionally, a DirectFlow buffer extends that capability to solid-state drives (SSDs) connected to an NVMe backplane. A unified Context Cache Tiering framework then enables read and write access across local SSD and DRAM tiers. Finally, an intelligent writeback of KVCache data ensures that only persistent data is written back to durable storage, minimizing unnecessary I/O. Next year, VDURA also plans to extend those capabilities to enable deeper application-directed placement of data, while expanding cross-node cache coherence and adding support for NVIDIA BlueField-4 data processing units (DPUs).
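The tiering pattern described above can be sketched in miniature: a hot DRAM tier backed by a larger SSD tier, where evicted entries are written back to durable storage only if they are marked persistent. This is a hypothetical illustration of the general technique; the class and method names are assumptions for the sketch, not VDURA's actual API.

```python
# Minimal sketch of tiered KV-cache placement with selective writeback.
# All names here are illustrative assumptions, not VDURA's actual interfaces.

class TieredKVCache:
    """Two-tier cache: a fast DRAM tier backed by a larger SSD tier.

    Entries flagged persistent are written back to durable storage on
    eviction; transient entries are simply dropped, avoiding unnecessary I/O.
    """

    def __init__(self, dram_capacity, ssd_capacity):
        self.dram = {}            # hot tier: key -> (value, persistent)
        self.ssd = {}             # warm tier
        self.durable = {}         # stand-in for the durable storage backend
        self.dram_capacity = dram_capacity
        self.ssd_capacity = ssd_capacity

    def put(self, key, value, persistent=False):
        if len(self.dram) >= self.dram_capacity:
            self._demote_oldest()
        self.dram[key] = (value, persistent)

    def get(self, key):
        if key in self.dram:                      # DRAM hit
            return self.dram[key][0]
        if key in self.ssd:                       # SSD hit: promote to DRAM
            value, persistent = self.ssd.pop(key)
            self.put(key, value, persistent)
            return value
        return self.durable.get(key)              # fall back to durable tier

    def _demote_oldest(self):
        key, (value, persistent) = next(iter(self.dram.items()))
        del self.dram[key]
        if len(self.ssd) >= self.ssd_capacity:
            self._evict_from_ssd()
        self.ssd[key] = (value, persistent)

    def _evict_from_ssd(self):
        key, (value, persistent) = next(iter(self.ssd.items()))
        del self.ssd[key]
        if persistent:                            # write back only persistent data
            self.durable[key] = value
```

In this toy model, a read promotes data back up the tiers while transient KVCache entries never touch durable storage, which is the I/O-saving behavior the writeback scheme is meant to deliver.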

VDURA, formerly known as Panasas, has gained traction in high-performance computing (HPC) environments that historically needed to access massive amounts of data in parallel. With the rise of AI, however, many enterprise IT teams are starting to realize that legacy storage platforms not based on a parallel file system will be unable to meet the requirements of these applications. While the amount each IT team invests in infrastructure to optimize AI application performance will naturally vary, a recent Futurum Group report projects that the global data intelligence, analytics, and infrastructure (DIAI) market will grow at a 17% compound annual growth rate through 2028, off a base of $541.1 billion in 2026, exceeding $1.2 trillion by 2031.

Much of that spending will be distributed across multiple initiatives. For example, AI development and operations are forecast to increase (24%) in 2026, while demand for data observability tools will see a similar spike (22%). There will also be increased demand next year (19%) for data management tools that operate at the semantic level, providing a higher level of abstraction above the raw data stored in, for example, a data lake. By comparison, demand for data integration tools and storage platforms will grow at slower rates of 12% and 11%, respectively, in 2026. However, as the volume of data generated using AI tools continues to increase, the data storage platform market will be growing at an 18% rate by 2030, according to the report.
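The compounding behind those headline figures can be sanity-checked with a short calculation. Note this is a rough extrapolation: the report states the 17% CAGR through 2028, so carrying that rate to 2031 is an assumption made here for illustration.

```python
# Rough check of the compounding implied by the reported figures:
# a $541.1B base in 2026 growing at a 17% compound annual growth rate.
# Holding the rate constant through 2031 is an assumption for illustration.

def project(base_billions: float, cagr: float, years: int) -> float:
    """Market size after compounding `years` annual growth periods."""
    return base_billions * (1 + cagr) ** years

base_2026 = 541.1   # DIAI market size in 2026, in billions of dollars
cagr = 0.17

size_2031 = project(base_2026, cagr, years=5)   # 2026 -> 2031
print(f"Projected 2031 size: ${size_2031:,.0f}B")
```

Under this constant-rate assumption the 2031 figure lands in the neighborhood of $1.2 trillion, consistent with the report's trajectory.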

The challenge, as always, will be determining which infrastructure to invest in first, depending on the attributes of the AI workloads actually being deployed.