A response to Everpure (aka Pure Storage), and to a decade of all-flash economics that were never as advertised
By Ken Claffey, Chairman and CEO, VDURA
This week, Blocks and Files published an open letter from Charles Giancarlo, chairman and CEO of Everpure (aka Pure Storage), explaining why the company is raising prices in the face of what he calls the third “once-in-a-decade” supply chain disruption of the past five years. The letter is well-crafted, confident, and carefully positioned. It is also, viewed honestly, the moment the crowd got its first clear look at the Flash Emperor.
For the better part of a decade, the all-flash array industry (Pure first, then VAST Data, then a chorus of others) sold customers a single, captivating storyline: flash is now the same price as HDD. Sometimes it was “approaching the price of HDD.” Sometimes it was “effectively the same TCO once you factor in our data reduction.” The version changed. The conclusion never did. Buyers were told they no longer had to think about media tiers, that the era of tiered storage and hybrid arrays was over, that you could build an entire enterprise, eventually an entire AI factory, on flash and never look back.
It was a beautiful story. Just one problem: it was never true.
This is not a critique of flash. Flash is the right medium for hot data, for performance tiers, for metadata, for checkpointing, and it always will be. This is a critique of an architectural pitch that bet your entire data estate against a commodity that was never going to stay cheap and that the people making the pitch never controlled.
A decade of marketing the same chart
If you sat through any all-flash vendor pitch between roughly 2016 and 2024, you saw the same chart, redrawn in slightly different colors. Two cost-per-TB lines — flash and HDD — converging. Sometimes the lines crossed. Sometimes they almost crossed. Sometimes the deck quietly redefined the y-axis from “raw” to “usable” to “effective” so the lines crossed sooner. The takeaway was always the same: stop worrying about HDD economics, the gap is closed.
The sleight of hand sat in three places.
First, raw vs. usable. Raw flash capacity and raw HDD capacity are at least the same kind of number. Once you apply erasure coding, overprovisioning, and reserved capacity, the gap moves, usually in flash’s favor in the deck, because the assumptions get tuned that way.
Second, usable vs. effective. This is where the magic happened. Effective capacity is usable capacity multiplied by an assumed data reduction ratio: dedupe, compression, similarity reduction, sometimes thin provisioning. Quotes routinely assumed 4:1, 5:1, even 6:1 ratios. Production reality across heterogeneous workloads tends to be 1.8:1 to 2.5:1 for general data, and as low as 1.1:1 for AI training datasets, pre-compressed media, and modern object data. When the assumed ratio in the quote is 3x the realized ratio in production, backed by what amount to largely worthless vendor ‘guarantees’, the cost-per-effective-TB chart is a fiction the customer pays for.
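The arithmetic behind this sleight of hand is simple enough to show. The sketch below uses hypothetical prices and ratios (the $/TB figure and the 5:1 quoted ratio are illustrative, not any vendor’s actual quote); the point is that cost per effective TB scales inversely with the reduction ratio you actually realize.

```python
# Illustrative only: effective-TB cost under a quoted vs a realized
# data reduction ratio. Prices and ratios are hypothetical.

def cost_per_effective_tb(cost_per_usable_tb: float, reduction_ratio: float) -> float:
    """Effective capacity = usable capacity * reduction ratio,
    so cost per effective TB = cost per usable TB / reduction ratio."""
    return cost_per_usable_tb / reduction_ratio

flash_usable = 200.0  # hypothetical $/usable TB for an AFA quote

quoted = cost_per_effective_tb(flash_usable, 5.0)    # deck assumes 5:1
realized = cost_per_effective_tb(flash_usable, 1.8)  # typical production ratio

print(f"quoted:   ${quoted:.2f}/effective TB")     # $40.00
print(f"realized: ${realized:.2f}/effective TB")   # $111.11
print(f"understated by {realized / quoted:.1f}x")  # ~2.8x
```

Same hardware, same quote; the only thing that changed is the assumption, and the customer pays the difference.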
Third, future flash prices. Every chart projected the cost-per-TB of NAND continuing its historical decline. None of them projected NAND being structurally constrained by AI demand for years at a time. Yet here we are. And the same vendors who projected continued decline are now writing customer letters about how the disruption will last “far longer than COVID.”
The gap between flash and HDD is not closed. It never was. Today, on a real cost-per-usable-TB basis with honest data reduction assumptions, flash is roughly 20x more expensive than HDD. Twenty times. No marketing chart can make that gap disappear, and Everpure’s letter, which raises prices 70% in a year on top of an already-large gap, has just removed the last fig leaf.
The Flash Emperor has no clothes.
The hyperscalers always knew
Here is the most damning part of the AFA story: the people building the largest data infrastructure on earth never bought it.
Google’s Colossus is not all-flash. Meta’s storage backbone is not all-flash. Microsoft Azure’s storage is not all-flash. Amazon S3 and the bulk of EBS are not all-flash. These are organizations with the engineering depth to evaluate any architecture they want and the capital to buy any media at any price. Every single one of them runs a mixed-fleet, software-defined architecture. Just enough NVMe flash to saturate the workload, then HDD for everything that doesn’t need flash speed.
Why? Because they did the math. They understood that flash is a performance medium, not a capacity medium, and that pricing your infrastructure against a single commodity you don’t manufacture is a strategic error you only get to make once.
Every credible analysis of hyperscale storage in the last five years has confirmed this. None of those analyses have stopped the AFA chorus from telling enterprise and AI customers a different story.
Everpure’s letter is the confession
Read in this light, Giancarlo’s open letter is not a defense. It is an admission.
The letter says component costs have risen 4x to 10x. It says NAND is being reallocated to higher-margin AI parts. It says new fab capacity takes years and billions of dollars. It says the imbalance “will, unfortunately, last far longer than the COVID-era disruption.” Each of these statements is true. Each of them also contradicts the central premise that drove ten years of AFA marketing: that flash was abundant, getting cheaper, and structurally cost-competitive with HDD. If those things had ever been true, Everpure would not need to write this letter.
And the letter’s economics tell the same story. Giancarlo asserts that Everpure is absorbing pain and operating at the low end of its long-standing product gross margin range. The public financial filings say something different.
In Pure Storage’s most recent full fiscal year (FY26), total gross margin was 72.1%, up from 71.8% the prior year. Q4 FY26 product gross margin was 67.3%, up more than 400 basis points year over year. Revenue grew 20% to $1.059B in the quarter, product revenue grew 25%, and the company added roughly $120M more gross profit in Q4 alone versus the prior comparable period, a quarter that included the same Dec-to-Jan window in which the letter says input costs roughly doubled.
If component costs really rose 4x to 10x, a vendor truly absorbing the pain would show gross margin compression. Gross margin percentage expanded. That is not absorbing pain. That is what profiteering looks like when you run an appliance business with pricing power: BOM costs increase massively, margin percent holds (or even expands) for the earnings call, margin dollars grow significantly behind it, and the customer covers the difference through large price increases.
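The percent-versus-dollars distinction is worth making concrete. The numbers below are deliberately round and hypothetical (not Pure’s actual cost structure): they show that if revenue grows while the margin percentage merely holds, gross profit dollars expand in lockstep, which is the opposite of absorbing cost.

```python
# Illustrative only: a "flat" gross margin percentage on growing revenue
# still means expanding gross profit dollars. Figures are hypothetical.

def gross_profit(revenue_m: float, margin_pct: float) -> float:
    """Gross profit in $M given revenue ($M) and gross margin fraction."""
    return revenue_m * margin_pct

prior = gross_profit(1000.0, 0.70)   # prior-year quarter: $1,000M at 70% GM
latest = gross_profit(1200.0, 0.70)  # 20% revenue growth, same margin pct

print(f"prior gross profit:  ${prior:.0f}M")           # $700M
print(f"latest gross profit: ${latest:.0f}M")          # $840M
print(f"added gross profit:  ${latest - prior:.0f}M")  # $140M
```

If input costs had genuinely been absorbed, the margin fraction would fall and the dollar line would compress; holding the fraction while revenue grows means the increase was passed through in price.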
And here is the part that should bother every customer most: Everpure cannot fix this. The six advantages the letter cites (simpler hardware, integrated software, compression, Evergreen, DirectFlash, and integrity) are presented as differentiators. Viewed honestly, they are the last remaining levers when you cannot move the actual cost driver. Roughly 90% of the bill of materials in a modern all-flash array is the SSD or raw NAND itself. Everpure does not make NAND. VAST does not make NAND. WEKA does not make NAND. None of them controls its cost, its supply, its allocation, or its roadmap. Their architectural raison d’être was to make cheap flash cheaper. And now their own public letter concedes flash will not be cheap for years.
The cloth was never there.
The appliance business model trap
There is a deeper, more structural reason Everpure had to write this letter. It isn’t only that NAND is expensive. It’s that the appliance business model itself forces the vendor’s hand.
In an appliance business, gross margin percentage is directly tied to the cost of goods sold of the hardware you ship, and that margin band is a number Wall Street, the analyst community, and the company’s own internal forecasts treat as load-bearing. For a publicly traded storage vendor, drifting outside that narrow band by, say, absorbing 4x to 10x component cost increases without raising customer prices is not a choice the CFO can make. It is a quarterly earnings event with consequences. So when COGS rises, prices rise. They have to. The business model demands it.
Read the letter through that lens and the language softens into context. “Operating at the low end of our long-standing product gross margin range” is not a customer-friendly choice. It is the only public statement consistent with both raising prices and not blowing through the financial guardrails the company has committed to its investors. It is a business model confession dressed up as a customer letter.
This is the structural disadvantage of the appliance model in a commodity-volatile world: the vendor cannot meaningfully insulate the customer from the underlying media market, because the vendor’s own financial structure does not allow it. Pricing power and gross margin discipline become the same thing.
VDURA does not operate this way. We sell our data platform software under a standard SaaS subscription model. Our pricing is decoupled from the underlying hardware cost because the hardware is your choice, sourced from any qualified server vendor on whatever supply contract you can negotiate. Our software pricing and margin do not depend on how expensive your NVMe SSDs got this quarter, and we have no structural incentive to pass component cost increases through to you as a markup. Combined with the supply chain flexibility of a software-defined architecture (the same VDURA software runs on Dell, Supermicro, AIC, and other certified platforms), this gives customers something the appliance model structurally cannot: a transparent view of hardware costs, and a storage software layer priced independently of both the storage commodity market and any vendor’s quarterly margin target.
This is part of why customers are moving. Not only because mixed fleet is the right architecture, though it is, but because SaaS-on-commodity-hardware is the right commercial answer to a media market no vendor can predict.
I watched this from inside
This isn’t a perspective I picked up from a market report. I spent roughly a decade running enterprise storage at Seagate, through the 2018 NAND undersupply, through the COVID-era component shocks, and through the slow, capital-intensive reality of how leading-edge fab capacity actually gets built. New fabs are multi-year, multi-billion-dollar commitments with lead times measured in years, not quarters. Allocation decisions get made by chip vendors balancing margin across their entire customer base, not by storage vendors negotiating from outside that calculus.
I sat through the same AFA pitches everyone else did (we would marvel at the marketing creativity). I watched the same charts get redrawn. I knew what the actual cost-per-usable-TB looked like across realistic workloads, and I knew what the pitch said about cost-per-effective-TB in optimistic scenarios.
That hard-won perspective from inside the device-manufacturing world is exactly what informed our architectural choices at VDURA and why HYDRA was designed never to put the customer or our company at the mercy of any single storage commodity.
Hybrid is not mixed fleet
Chris Mellor’s framing in the Blocks and Files piece lumped every vendor with a tiering story into one bucket: Dell, DDN, NetApp, StorONE, VAST Data’s Amplify, VDURA’s Flash Relief, and WEKA. This is convenient shorthand, but it conflates two very different things: legacy hybrid arrays that bolt a tier onto a pre-AI architecture, and mixed-fleet platforms designed from the ground up to run heterogeneous media as one system. They are not the same.
Dell and NetApp offer tiering bolted onto legacy block and file appliance platforms built for a pre-AI era, with proprietary hardware and HA-pair controllers. DDN offers a parallel file system, but one still delivered as a proprietary HA appliance stack. VAST Data’s Amplify program and its reclaim-old-SSD initiative are, respectfully, financial workarounds to the exact concentration problem described above, because VAST can only tier to more flash. WEKA supports tiering, but again, not to a natively managed HDD tier inside the same software stack.
These are legacy hybrid approaches built from disparate storage tiers. Some of them are reasonable engineering. None of them is a mixed-fleet platform.
VDURA is.
The VDURA Data Platform, built on our HYDRA architecture, is the only software-defined, AI- and HPC-scale mixed-fleet platform that runs NVMe flash and SATA HDDs natively in the same data plane, same control plane, and same global namespace, with full parallel performance and object-grade resilience across multiple types of hyperscale-class storage media. There is no second software stack, no external data mover, no third-party backend, no bolt-on overlay. Flash is a performance medium. HDD is a capacity medium. They are unified by one system, not by a management layer sitting on top of someone else’s storage.
This is the architectural pattern Google’s Colossus, Meta, and Microsoft’s storage backbones are built on. It is how HYDRA is built. It is what mixed fleet actually means and it is structurally different from any hybrid array or tiering retrofit on the market.
That matters right now because mixed fleet is the only model that rides independent cost curves and supply chains for flash and disk. When NAND doubles, customers on HYDRA rebalance intelligently across media. When HDD density improves, they benefit without a forklift upgrade. When the next commodity shock hits (and Giancarlo is correct that this won’t be the last), the blast radius is not the entire storage infrastructure bill.
This is the hyperscaler playbook, finally available to every AI factory, Neocloud, and research institution that needs the economics and resilience of hyperscale without having to build it themselves.
What AI factories actually need in 2026
Three things the AFA architecture cannot structurally provide.
First, supply chain optionality. Tier cold data down to HDD with no loss of parallel access performance. Your spend tracks the medium that fits the workload, not the medium your vendor’s architecture demands.
Second, commodity diversification. When NAND moves 4x to 10x, a mixed-fleet platform absorbs a fraction of that move. An all-flash platform absorbs all of it and passes most of it to you.
Third, honesty. A vendor whose own public letter concedes the market will not normalize “for years” and whose only answer is “we’ll keep absorbing as much as we can” is telling you the architecture has run out of levers. Believe them. And then ask yourself why they spent the previous decade telling you the medium they don’t make and don’t control was about to be the same price as HDD.
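The diversification point above is arithmetic, not ideology. The sketch below uses hypothetical prices and a hypothetical 10 PB estate with a 20/80 flash/HDD split; the specific figures are illustrative, but the mechanism is general: an all-flash fleet takes the full dollar impact of a NAND shock, while a mixed fleet’s exposure is capped at the flash it actually deploys.

```python
# Back-of-envelope sketch with hypothetical prices: how a NAND price
# doubling propagates through an all-flash fleet vs a mixed fleet.

def fleet_cost(flash_tb: float, hdd_tb: float,
               flash_price: float, hdd_price: float) -> float:
    """Total media cost in $ for a fleet of flash_tb + hdd_tb usable TB."""
    return flash_tb * flash_price + hdd_tb * hdd_price

FLASH, HDD = 200.0, 12.0   # hypothetical $/usable TB before the shock
SHOCKED_FLASH = FLASH * 2  # NAND doubles; HDD price assumed flat

# 10 PB estate: all-flash vs 20% flash / 80% HDD mixed fleet
all_flash_hit = (fleet_cost(10_000, 0, SHOCKED_FLASH, HDD)
                 - fleet_cost(10_000, 0, FLASH, HDD))
mixed_hit = (fleet_cost(2_000, 8_000, SHOCKED_FLASH, HDD)
             - fleet_cost(2_000, 8_000, FLASH, HDD))

print(f"all-flash extra spend:  ${all_flash_hit:,.0f}")   # $2,000,000
print(f"mixed-fleet extra spend: ${mixed_hit:,.0f}")      # $400,000
```

Under these assumptions the mixed fleet absorbs one fifth of the dollar shock, proportional to its flash footprint; the HDD side of the estate rides a separate supply chain entirely.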
This isn’t theoretical. VDURA just closed a record Q1, and our pipeline is expanding, not contracting, in the teeth of this supply shock. Customers are choosing mixed-fleet architectures that extract maximum performance from expensive flash and pair it with a performance-optimized HDD capacity tier, because that is the only model whose economics hold up when the flash market does what it is doing right now. The market is voting with its infrastructure dollars.
The Flash Emperor walked through town for a decade in a robe that was never there. Mellor’s article and Giancarlo’s letter are the moment the child pointed. The customers who understand what just happened will spend the next decade operating on very different economics than the customers who don’t.
VDURA is the platform for the customers who saw it.
VDURA Resources:
- VDURA Flash Relief Program: Beat VAST, WEKA, or all-flash cost by 50%
- Flash Volatility Index: Model flash price exposure and optimize flash vs. disk economics
Ken Claffey is Chairman and CEO of VDURA. VDURA is Modern Data Storage for AI and HPC, purpose-built on the HYDRA architecture to combine flash-speed parallel file system performance with the durability and cost-efficiency of mixed-fleet object storage in a single unified namespace.