Wallis Mills, Modern Enterprise – Nov 26, 2025 – In our recent AI Readiness Bottleneck series, we surfaced a quiet truth shaping the outcomes of AI adoption inside the enterprise. The success of AI has less to do with the models themselves and far more to do with the data layer that feeds them. Within that context, we wanted to take a moment to double-click on storage. It has long been treated as infrastructure, a budget line item, and a solved problem. Yet in the AI era, it has re-emerged as the forgotten bottleneck, a bottleneck nested within the broader data bottleneck. Not because organizations lack capacity, but because intelligence requires speed, locality, integrity, and assured availability in ways traditional architectures were never designed to support.
We’re focusing here because data gravity has become one of the defining forces of modern compute. Where data lives determines how fast models can learn, how reliably they can run, and how much of the organization’s investment in GPUs and accelerators ever converts into real performance. It now influences cost, energy consumption, carbon footprint, resilience, and even the credibility of automated decisions. This is no longer a technical consideration. It is a strategic one.
To explore how this shift is unfolding in practice, I sat down with Ken Claffey, CEO of VDURA, whose work sits at the intersection of performance, durability, efficiency, and evolving enterprise workloads. Our conversation traces why general-purpose computing is fading, why storage is becoming a capability layer rather than a commodity, and why the future of AI will depend on environments that can deliver acceleration and trust at the same time.
Why Data Gravity Now Matters
Data gravity is reshaping how enterprises think about AI performance. As datasets grow in size, complexity, and sensitivity, they exert a kind of gravitational pull that determines where compute must occur, how quickly information can move, and what level of intelligence can be achieved in real time. This shift marks a departure from an era when data could be moved freely, staged casually, or centralized without consequence.
In the AI context, gravity expresses itself through latency, bandwidth limits, architectural friction, and the physical realities of storage placement. Models cannot train faster than the data that feeds them, cannot reason more reliably than the integrity of the information they draw from, and cannot scale beyond the constraints of their underlying data pathways. This is why organizations find themselves investing heavily in GPUs, only to watch unrealized performance evaporate in the gap between theoretical capacity and actual throughput.
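To make that gap concrete, here is a back-of-the-envelope sketch of how storage throughput caps realized GPU utilization. The figures (per-GPU ingest rate, aggregate storage bandwidth, cluster size) are illustrative assumptions rather than measurements from any specific environment; the arithmetic is the point.

```python
# Back-of-the-envelope sketch: storage throughput as the ceiling on realized GPU utilization.
# All figures below are illustrative assumptions, not measurements from any vendor or site.

def effective_utilization(gpus: int,
                          per_gpu_ingest_gbps: float,
                          storage_throughput_gbps: float) -> float:
    """Fraction of purchased GPU capacity that the data path can actually feed."""
    required = gpus * per_gpu_ingest_gbps                # aggregate read rate the job wants (GB/s)
    delivered = min(required, storage_throughput_gbps)   # storage can only supply so much
    return delivered / required

# Example: 256 GPUs each wanting ~2 GB/s of training data,
# fed by a storage system that sustains ~200 GB/s of aggregate reads.
util = effective_utilization(gpus=256, per_gpu_ingest_gbps=2.0, storage_throughput_gbps=200.0)
print(f"Realized utilization: {util:.0%}")  # ~39%; the rest of the GPU spend is stranded
```

In this toy scenario the storage system, not the GPU count, sets the ceiling: doubling the GPUs would roughly halve realized utilization rather than double throughput.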
But speed is only part of the equation. Data gravity now influences cost structures, operational resilience, sustainability profiles, regulatory exposure, and customer trust. The closer data is to the workloads that require it, the more efficiently an enterprise can adapt, respond, and learn. The further away it is, architecturally or physically, the more intelligence slows, risks compound, and AI becomes an aspiration rather than a capability.
Understanding data gravity is therefore not merely about optimizing storage. It is about recognizing a foundational shift: in the AI era, data location is no longer an operational detail. It is a determinant of competitive advantage.
The New Strategic Mandate: From Storage to Story
For years, storage lived in the background of enterprise architecture as the quiet machinery beneath the applications and analytics that sat closer to the spotlight. It was the place data went to rest, archived in tiers and volumes, managed through procurement cycles and capacity forecasts. It was unglamorous, dependable, and largely unquestioned. But AI has pulled it forward, revealing that the layer once treated as static is now shaping the truth and tempo of an organization’s intelligence.
As models learn, adapt, and generate outcomes at scale, the character of the data they draw from begins to matter in ways that were easy to ignore before. Integrity becomes a narrative. Lineage becomes a record of reasoning. Availability becomes a measure of whether the system can be trusted. Storage is no longer just where information resides, but instead, where confidence originates. It holds the proof behind automated decisions, the continuity behind insights, the coherence behind what an enterprise claims to know.
In this light, the data layer becomes a kind of storytelling substrate. It determines whether leaders can stand behind the outputs of their systems, whether regulators can trace the path of information, or whether customers can believe what they are shown. The shift is subtle but profound: storage now participates in the story of the organization, shaping perception, credibility, and consequence.
Enterprises that overlook this transformation continue to treat storage as silent infrastructure, unaware that it is already shaping what they can promise and defend. Those who recognize its new role begin to operate differently—not faster for its own sake, but with a kind of clarity that allows acceleration and trust to coexist. And in the emerging landscape of AI, that coexistence may become the defining advantage.
A Conversation With VDURA
To ground this shift in the realities of modern enterprise workloads, I sat down with Ken Claffey, CEO of VDURA, whose career has traced the evolution of storage from high-performance computing to today’s AI-driven architectures. What emerged in our discussion was a clear picture of how the data layer is changing: not incrementally, but fundamentally, as organizations push toward higher-velocity learning, continuous inference, and compute at scale.
Ken described a landscape where storage can no longer afford to be static, fragile, or siloed, because the systems above it no longer operate that way. AI introduces workloads that are bursty, parallel, memory-intensive, and intolerant of interruption. In that world, durability is not about retention but about maintaining the state of intelligence itself. As he put it, “If your storage layer cannot feed the GPUs, nothing else matters.”
Our conversation also explored the growing expectation that the data layer adapts as models evolve—not through full re-architecture, but through environments designed for longevity, efficiency, and yield. Ken framed it as a shift away from the general-purpose era and toward one in which infrastructure is shaped by the behavior of the workloads it serves. “General-purpose computing is fading,” he noted, “and the data layer now needs to evolve with the applications, not the other way around.”
Finally, we discussed why efficiency has re-entered the enterprise vocabulary, not as a cost measure, but as a performance truth. AI exposes waste wherever it exists, be it in throughput loss, stranded hardware, energy intensity, or operational drag. As Ken observed, “Efficiency determines yield. You either get performance, or you burn budget.”
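One way to read “efficiency determines yield” is as simple arithmetic: every hour an accelerator waits on data is an hour of spend with no return. The sketch below uses hypothetical prices and utilization figures, not numbers from the conversation, just to show how quickly the waste compounds.

```python
# Illustrative yield calculation: idle accelerator time converts directly into burned budget.
# The hourly rate and utilization figures are hypothetical, not drawn from the conversation.

def stranded_spend(gpu_count: int, hourly_rate_usd: float,
                   hours: float, realized_utilization: float) -> float:
    """Dollars paid for accelerator time that produced no useful work."""
    total_spend = gpu_count * hourly_rate_usd * hours
    return total_spend * (1.0 - realized_utilization)

# Example: a 256-GPU cluster at $2.50 per GPU-hour, running for a month (~730 hours)
# at 40% realized utilization because the data path cannot keep the GPUs fed.
wasted = stranded_spend(gpu_count=256, hourly_rate_usd=2.50, hours=730, realized_utilization=0.40)
print(f"Budget burned on idle capacity: ${wasted:,.0f}")  # roughly $280,000 for the month
```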
Rather than offering prescriptions, the conversation opens a window into how leaders are rethinking storage as a dynamic component of intelligence, and how this evolution is reshaping expectations at the intersection of scale, reliability, and trust.
What stood out most in our discussion was not the sophistication of future architectures, but the reminder that none of it matters if the foundation is unstable. Before organizations concern themselves with lineage, auditability, or model behavior, they must be certain that the storage layer is available, that the data is accessible, and that the information retrieved is intact and uncorrupted. These fundamentals are often assumed, yet they determine whether everything built above them can be trusted at all.
For a deeper look at how these dynamics are unfolding inside real AI environments, the full discussion is available on Spotify and YouTube.
Future Tensions
Even as organizations begin to rethink the role of storage in the AI stack, a new set of pressures is already forming on the horizon. Workloads are shifting from periodic training cycles to continuous learning, from centralized processing to distributed inference, from static datasets to streams that evolve in real time. These changes are introducing expectations that traditional storage architectures were never built to satisfy.
Active storage is emerging as one of the clearest signals of this next phase, with systems that participate in computation rather than merely supplying data to it. As models demand faster checkpointing, broader parallelism, and greater resilience, the boundary between data and processing blurs. The intelligence sits not above the storage layer, but alongside it.
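Checkpointing makes that coupling tangible: a synchronous checkpoint pauses training until the model state lands on storage, so write bandwidth directly sets the stall time. The sketch below assumes a 4 TB training state and a few hypothetical bandwidth tiers; the specific numbers are illustrative, but they show why faster checkpointing is a storage property as much as a model one.

```python
# Sketch of why checkpoint bandwidth matters: while a synchronous checkpoint is being written,
# the GPUs are effectively stalled. The state size and bandwidth tiers below are assumptions.

def checkpoint_stall_seconds(model_state_tb: float, write_bandwidth_gbps: float) -> float:
    """Seconds of training paused per synchronous checkpoint."""
    return (model_state_tb * 1000.0) / write_bandwidth_gbps  # TB -> GB, divided by GB/s

# Example: a 4 TB training state (weights + optimizer) checkpointed every 30 minutes.
for bw in (25.0, 100.0, 400.0):  # aggregate write bandwidth in GB/s
    stall = checkpoint_stall_seconds(model_state_tb=4.0, write_bandwidth_gbps=bw)
    overhead = stall / (30 * 60)  # fraction of each 30-minute interval lost to the checkpoint
    print(f"{bw:>5.0f} GB/s -> {stall:6.0f} s stall per checkpoint ({overhead:.1%} overhead)")
```

The exact figures matter less than the shape of the relationship: checkpoint overhead falls roughly in proportion to write bandwidth, which is why parallel, write-optimized data layers target it directly.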
Near-data compute represents another tension, driven by the physical limits of movement. If data cannot travel fast enough to meet the needs of AI, then compute must migrate closer to where the data already resides. This inversion challenges long-held assumptions about centralization, cloud economics, workload placement, and architectural hierarchy.
A third shift is beginning to take shape in model-native design: infrastructure that adapts to the behavior of the model rather than forcing the model to adapt to the constraints of the infrastructure. This perspective treats the data layer as something that must evolve alongside intelligence, capable of supporting new modalities, new performance envelopes, and new forms of interaction without reinvention.
Together, these tensions point toward a landscape in which acceleration and trust will depend not on isolated components, but on environments designed for coherence, with systems that can learn, adapt, and scale without sacrificing integrity or resilience. The enterprises preparing for this future are not choosing between speed and certainty. They are designing for both.
As these shifts take shape, the path forward becomes clearer for leaders willing to examine the assumptions sitting beneath their data layer.
Three Questions for Leaders
For executives steering AI adoption, the most important signals are not technical specifications, but the conditions that determine whether intelligence can scale with confidence. These questions serve as a practical lens for assessing whether the organization is prepared for the next chapter of data gravity, performance, and trust.
1. Is our data layer accelerating AI, or is it constraining it?
This is the productivity question—not in terms of human output, but in terms of how much of the enterprise’s investment in compute is actually being realized. If performance gains exist only in theory, the constraint is almost always in the data path, not the model.
2. Can we stand behind the integrity of the outcomes our systems generate?
This is the credibility question. As AI becomes embedded in products, decisions, and customer experiences, organizations must be able to show how information was stored, accessed, protected, and preserved—and whether the lineage behind it can be trusted.
3. Can our storage environment evolve as our models evolve?
This is the adaptability question. The era of general-purpose computing is fading, and infrastructures that cannot evolve alongside changing workloads will force reinvention rather than enable progression.
These are the same dimensions that surface in our conversation with Ken, and in the final ten minutes of the episode, he expands on these three questions with clarity, pragmatism, and a forward-looking perspective that is especially valuable for executive teams evaluating where to invest next.
Acceleration and Trust—The Dual Requirement
The enterprises that succeed in the AI era will be the ones that understand that performance and credibility are no longer separate conversations. Acceleration without trust leads to risk; trust without acceleration leads to stagnation. The data layer now sits at the center of this balance, shaping not only how fast organizations can move, but how confidently they can stand behind the intelligence they deploy.
Storage may once have been treated as an operational afterthought, but it has become foundational to how AI learns, scales, and earns belief. The shift is already underway, and the leaders who recognize it are designing environments where speed and certainty reinforce one another rather than compete. For everyone else, the bottleneck will remain, not because of the models, but because of the layer beneath them.