Velocity • Durability

Large Federal System Integrator

V5000 Deployment
Use cases
  • Computational Fluid Dynamics (CFD) simulations
  • AI model training, inference, and analytics
  • Mixed HPC+AI pipelines requiring continuous checkpointing and ultra-low-latency I/O
Win summary

Goethe University’s Center for Scientific Computing, a leading research institution, achieved measurable results by deploying the VDURA Data Platform V5000 for its AI-driven physics and HPC cluster. Paired with one of Europe’s largest AMD GPU clusters, the V5000 delivers exceptional real-world performance: 8.4 million IOPS and 670,000 metadata operations per second. Performance scales linearly as the system grows, meeting demanding future throughput requirements. VDURA’s Multi-Level Erasure Coding™ (MLEC) significantly enhances data durability and availability, ensuring robust 24×7 operation.
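Multi-level erasure coding protects data with two independent coding layers, for example local coding within a node plus network coding across nodes. As a rough illustration of why this improves durability, the sketch below uses a simple independent-failure binomial model; the coding geometries and failure rates are hypothetical, not VDURA’s actual MLEC parameters or implementation.

```python
from math import comb

def p_layer_loss(n, m, p):
    """Probability that an erasure-coded group of n devices loses data,
    i.e. more than m of them fail (binomial model, independent failures
    with per-device failure probability p)."""
    return sum(comb(n, f) * p**f * (1 - p)**(n - f) for f in range(m + 1, n + 1))

# Illustrative (hypothetical) parameters:
p_drive = 0.02                                # annual failure prob. of one drive
node_loss = p_layer_loss(10, 2, p_drive)      # local 8+2 coding inside a node
group_loss = p_layer_loss(12, 3, node_loss)   # network 9+3 coding across nodes
single_loss = p_layer_loss(12, 3, p_drive)    # single-level 9+3 for contrast

print(f"two-level loss probability:   {group_loss:.3e}")
print(f"single-level loss probability: {single_loss:.3e}")
```

Under this toy model, the two-level loss probability is several orders of magnitude below the single-level figure, which is the intuition behind MLEC’s durability claim.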

Problem

The integrator needed storage that could simultaneously deliver:

  1. Consistent low latency to keep thousands of GPUs saturated.
  2. A single, unified namespace that marries NVMe flash performance with petabyte-scale capacity.
  3. Automated data placement with no external movers or manual migrations.
  4. Dramatically lower cost-per-TB and more efficient TB-per-watt than all-flash alternatives.
  5. Enterprise-grade encryption and a minimal operational footprint.
  6. Freedom from the reliability issues experienced with the incumbent solution and the higher TCO quoted by other competitors.
Solution
Phased configuration and performance
  • Phase 1 (2025): 20 PB total – 4 PB NVMe flash + 16 PB HDD capacity extensions; transfer rates above 800 GB/s
  • Future: incremental growth to ~200 PB usable; 2.5 TB/s sustained
Platform architecture
  • Director Nodes for metadata and orchestration.
  • All-NVMe Flash Nodes for extreme AI performance.
  • HDD Capacity Expansion Nodes for cost-efficient bulk storage.
  • A unified global namespace provides one data plane and one control plane.
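To illustrate the idea of automated, policy-driven placement across the flash and HDD tiers, here is a toy policy sketch: recently accessed files stay on NVMe flash, cold files move to HDD. The tier names, threshold, and function are hypothetical illustrations, not VDURA’s actual tiering logic or API.

```python
import time

FLASH, HDD = "nvme-flash", "hdd-capacity"   # hypothetical tier labels
HOT_WINDOW_S = 24 * 3600                    # files read in the last 24 h count as hot

def place(last_access_ts, now=None):
    """Return the tier a file should live on under this toy policy:
    flash if accessed within the hot window, HDD otherwise."""
    now = time.time() if now is None else now
    return FLASH if now - last_access_ts < HOT_WINDOW_S else HDD

now = 1_000_000.0
assert place(now - 60, now) == FLASH        # read a minute ago -> flash
assert place(now - 7 * 86400, now) == HDD   # untouched for a week -> HDD
```

Because the namespace is unified, such placement decisions happen behind a single mount point; applications never see which tier holds their data.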
Key selection factors
  • True parallel-file-system throughput with consistent low latency.
  • Single namespace spanning flash and capacity tiers.
  • No external data movers required for flash-to-disk tiering.
  • Greater than 60 percent lower cost per TB versus all-flash designs.
  • Built-in AES-256 encryption.
  • Only one-half FTE needed to manage the system.
  • 44 percent better energy efficiency (TB/W) than other all-flash offerings, cutting power and carbon footprint.
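The cost and efficiency factors above can be made concrete with back-of-the-envelope arithmetic. The percentages come from the text; the dollar and TB/W baselines below are purely illustrative placeholders for an all-flash competitor.

```python
# Cost per TB: ">60 percent lower" than an all-flash baseline.
allflash_cost_per_tb = 100.0                            # $/TB, hypothetical baseline
vdura_cost_per_tb = allflash_cost_per_tb * (1 - 0.60)   # at most 40% of baseline
print(f"cost: <= ${vdura_cost_per_tb:.0f}/TB vs ${allflash_cost_per_tb:.0f}/TB")

# Energy efficiency: 44 percent better TB per watt.
allflash_tb_per_watt = 1.0                              # TB/W, hypothetical baseline
vdura_tb_per_watt = allflash_tb_per_watt * 1.44
capacity_tb = 20_000                                    # the 20 PB Phase 1 system
watts_saved = capacity_tb / allflash_tb_per_watt - capacity_tb / vdura_tb_per_watt
print(f"power saved at 20 PB: {watts_saved:,.0f} W")
```

With these placeholder baselines, the 44 percent TB/W advantage works out to roughly 6 kW saved at Phase 1 scale; real savings depend on the actual baseline figures.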
Integrator benefits
  • GPU utilization maximized: transfer rates above 800 GB/s keep accelerators busy, and latency stays consistently low.
  • Lower capital cost: the design cuts spend by more than 60 percent per TB versus all-flash alternatives.
  • Energy and space savings: 44 percent better TB/W lowers OPEX and rack footprint.
  • Operational simplicity: a unified namespace and policy-driven tiering require only one-half FTE.
  • Security and compliance: built-in end-to-end encryption and automated key management.
Scalability
  • Linear capacity growth to ~200 PB usable without namespace splits. 
  • Performance scales incrementally to 2.5 TB/s sustained as additional All-Flash and Capacity Expansion Storage Nodes are added. 
  • Architecture supports future flash-heavy or disk-dense expansions with no forklift upgrades.
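A quick sanity check of the linear-scaling claim, using only figures from this document (the assumption that throughput grows in proportion to added storage nodes is the architecture’s stated premise, not something derived here):

```python
# Figures from the text: Phase 1 delivers >800 GB/s on 20 PB;
# the full build-out targets 2.5 TB/s sustained on ~200 PB usable.
phase1_gbps, target_gbps = 800, 2500
phase1_pb, target_pb = 20, 200

throughput_factor = target_gbps / phase1_gbps   # ~3.1x over Phase 1
capacity_factor = target_pb / phase1_pb         # 10x over Phase 1

print(f"throughput must grow {throughput_factor:.1f}x over Phase 1")
print(f"capacity grows {capacity_factor:.0f}x to {target_pb} PB usable")
```

Because capacity grows faster (10x) than the throughput target (~3.1x), the expansion can lean on disk-dense Capacity Expansion Nodes while flash nodes carry the incremental performance, consistent with the no-forklift-upgrade design.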
Competitors
  • VDURA replaced a commodity TLC all-flash solution layered on a third-party object store, which had been cited for poor reliability and flash/object integration challenges.
  • VDURA beat the other leading competitor, which offered a QLC-based scale-out file system, by delivering superior total cost of ownership while meeting every performance and efficiency requirement.
Conclusion

By combining flash-class responsiveness with object-storage economics under a single unified global namespace, VDURA enabled this federal integrator to launch a mission-critical HPC+AI environment that is budget-aligned today and ready to scale by an order of magnitude in the years ahead.