The cluster architectures for AI training and inference are driving unprecedented growth in datacenter infrastructure spending, but they are also having a knock-on, beneficial impact on HPC architectures, thanks to the relative ease with which one can get funding for AI initiatives and the need to upgrade existing HPC systems to do traditional simulation and modeling.
The first full day of the SC25 supercomputing conference opened this week with the traditional 7 am breakfast hosted by the folks at Hyperion Research, which has only just completed its casing of the HPC market for 2024 and is working on its 2025 numbers and updating its forecasts out to the end of the decade. Earl Joseph and Mark Nossokoff walked through the traditional HPC market with its AI augmentation and its on-premises and cloud deployment models, and Bob Sorensen did a deep dive on the quantum computing market. We will follow up with the quantum analysis separately and stick to the combined AI-augmented HPC and traditional HPC market for now.
At the highest level, here is how the broadest measure of HPC spending looked for the entire world over the past three years and how it looks for the next five years, according to Hyperion:
When Hyperion talks about the hybrid HPC-AI market, by the way, it does not just add traditional HPC to the totality of the AI market. Rather, it teases out all of the deals in the HPC sector and figures out what portion of each deal is for HPC functions and what portion is for AI functions that are being added to HPC applications. This is AI-augmented scientific and technical computing, not the more generic GenAI stuff being created by the hyperscalers, cloud builders, and model builders.
With that in mind, Hyperion believes that in 2024, on-premises HPC-AI systems drove $50.39 billion in revenues, up 22.9 percent compared to 2023, while cloud HPC-AI system capacity drove $9.54 billion in sales, up 4.9 percent. Add them together, and the total HPC-AI market accounted for $59.93 billion in sales, up 23.5 percent – well above the 7 percent to 8 percent that has been the historical average for the market over the past decade or so.
If you look out into 2025, Hyperion thinks the overall HPC-AI market, including all kinds of consumption models, will drive $70.13 billion in revenues, up 17 percent compared to 2024, with $12.38 billion for cloud consumption and $57.75 billion for on-premises systems. These numbers include hardware, software, and services for HPC-AI systems, not just the servers. (We will get to that in a minute.)
As you can see from the chart above, the growth in HPC-AI spending is projected to slow a bit but still be around twice the historical average of 7 percent to 8 percent growth per year until the end of the decade.
Let’s break this down a little bit. First, let us look at how the HPC-AI systems spending is carved up by product category. Hyperion Research did not provide a breakdown over time of such data in this year’s breakfast presentation, but it did give a pie chart snapshot of what the breakdown was for 2024:
This pie chart above is an amalgam of two charts presented by analysts Earl Joseph and Mark Nossokoff. The neat bit is that the cloud consumption model is finally getting some traction in HPC, representing 15.9 percent of the $59.93 billion in spending for 2024 for HPC-AI wares. (The chart says 15 percent, but it is closer to 16 percent.) Also notable is that 30 percent of cloud spending is for storage, compared to 21.7 percent of the spending at on-premises HPC-AI centers. That works out to $2.86 billion in storage spending on the cloud against $6.68 billion in spending on compute (which has networking built in), giving a ratio of 2.33 to 1 for compute to storage. For on-premises HPC-AI spending, the ratio of compute ($25.33 billion) to storage is 3.77 to 1. On-premises HPC-AI centers are more compute heavy than their cloudy equivalents.
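Since the storage percentages and the ratios come from different charts, it is worth sanity checking the arithmetic. Here is a minimal sketch using only the dollar figures cited above; the implied on-premises storage figure is our back-calculation from Hyperion's ratio, not a number Hyperion published:

```python
# Back-of-envelope check of the compute-to-storage ratios cited above.
# Dollar figures (in billions) are from Hyperion's 2024 snapshot.
cloud_total = 9.54                   # 2024 cloud HPC-AI spending
cloud_storage = 0.30 * cloud_total   # 30 percent of cloud spend is storage
cloud_compute = 6.68                 # cloud compute (networking built in)

onprem_compute = 25.33               # on-premises compute spending
onprem_ratio = 3.77                  # compute-to-storage ratio per Hyperion
onprem_storage = onprem_compute / onprem_ratio  # implied storage spend

print(f"cloud storage: ${cloud_storage:.2f}B")
print(f"cloud compute to storage: {cloud_compute / cloud_storage:.2f} to 1")
print(f"implied on-prem storage: ${onprem_storage:.2f}B")
```

Running the numbers gives about $2.86 billion for cloud storage, a cloud compute-to-storage ratio of about 2.33 to 1, and an implied on-premises storage spend of roughly $6.7 billion, all consistent with the figures above.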
It is not clear what this means. Compute on the cloud is generally very expensive compared to compute on premises amortized over four or five years. Perhaps cloud users have figured out how to run a lot more cores for shorter periods of time to lower their compute costs, thereby narrowing the gap between compute and storage spending?
Services are still a fairly large part of the HPC-AI budget – mostly for installation and maintenance of systems and tech support for systems software – and software is still, at 5 percent, a relatively small slice.
Drilling down into the compute part of the HPC-AI market a little more, here is how Hyperion breaks down sales of machinery that is primarily used for HPC and machinery that is primarily used for AI – and in both cases, traditional HPC simulation and modeling has to be a representative part of the overall workload stack for the spending to count at all. Take a look:
As you can see, traditional HPC revenues went into a little recession in 2023, some of that due to the timing of product lifecycles at HPC system makers and some of that due to a pause as the GenAI boom hit and everyone was trying to figure out what it meant. But sales of traditional HPC machinery rebounded in 2024 and are on a slight uptick for the forecast out through 2029, according to Hyperion.
However, sometime around the middle of 2027, AI-centric iron (more than 50 percent of workloads are AI jobs) will account for more sales than HPC-centric iron (more than 50 percent of workloads are HPC jobs). Over time, as AI functions are infused in HPC applications, this differentiation will be harder and harder to qualify, much less to quantify.
In the meantime, sales of HPC-AI systems were pretty good last year, and grew very nicely here in the first half of 2025, according to Joseph. Here is the breakdown in revenues (the second column is in US dollars) of HPC-AI server sales by vendors:
This data is for on-premises HPC-AI servers, and you will note two things. First, there are substantial revenues for “non-traditional suppliers,” by which Hyperion means what we call the original design manufacturers, or ODMs, as distinct from the original equipment manufacturers, or OEMs. These ODM companies design HPC-AI iron to spec and got their start as suppliers to the hyperscalers and cloud builders because those companies did not want to pay an OEM premium for their machines – they simply cannot afford to do that and make a profit. We don’t know precisely which ODMs are in this list, but we think there are a lot of them with operations in Taiwan and China, and we think it is interesting that together they generate almost as much HPC-AI server revenue as Hewlett Packard Enterprise, which is the clear leader in the market after its acquisitions of Compaq, SGI, and Cray over the years.
Second, Dell is number two in the HPC-AI space, and its trailing HPE might be surprising given that out there in the general purpose market, Dell is considerably larger than HPE when it comes to server revenue streams.
When it comes to HPC-AI systems, the price bands are reasonably distributed but the midrange is weakest, as it generally has been over the decades:
It is funny to us – meaning strange, not hilarious – what constitutes a “leadership” HPC-AI machine in terms of the price band of the system when we see some of the monster systems that the AI giants are installing.
Hyperion says that a leadership HPC-AI machine – what we still think of as a supercomputer – costs $150 million or more. But the hyperscalers, cloud builders, and model builders are measuring themselves in multiples of gigawatts, in scenarios where 1 GW of processing capacity costs around $50 billion and Nvidia gets somewhere around $35 billion of that. There is somewhere on the order of $600 billion in datacenter capital expenditures on the books for the hyperscalers and cloud builders (and their model builder customers), which equates to roughly 12 gigawatts of capacity.
The four exascale-class supercomputers on the most recent Top500 rankings – which cost on the order of $500 million to $600 million apiece – use between 15.8 megawatts and 38.7 megawatts of power when running the High Performance LINPACK benchmark test that gives them their Top500 rankings. Moreover, the entire revenue stream for HPC-AI servers in 2024 would cover only around 500 megawatts of “AI factory” capacity.
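The back-of-envelope power economics in the prior two paragraphs can be sketched in a few lines. The $50 billion per gigawatt and $600 billion capex figures are from the text; using the rough $25.33 billion on-premises compute spend as a stand-in for 2024 HPC-AI server revenues is our assumption:

```python
# Rough AI datacenter power economics using the figures cited in the text.
cost_per_gw_billions = 50.0     # cost of 1 GW of AI processing capacity
capex_billions = 600.0          # hyperscaler/cloud capex on the books

gigawatts_funded = capex_billions / cost_per_gw_billions
print(f"${capex_billions:.0f}B of capex buys about {gigawatts_funded:.0f} GW")

# How much "AI factory" capacity would 2024 HPC-AI server spending buy?
hpc_ai_server_billions = 25.33  # assumed: on-prem HPC-AI compute spend, 2024
megawatts = hpc_ai_server_billions / cost_per_gw_billions * 1000
print(f"2024 HPC-AI server spend ~ {megawatts:.0f} MW of AI factory capacity")
```

That lands at 12 gigawatts for the capex on the books and a bit over 500 megawatts for the 2024 HPC-AI server spend, matching the figures above.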
That said, and as evidenced by the nine new supercomputers announced by the US Department of Energy back in October, investment in HPC-AI systems is accelerating. Details are a little scarce on these machines, but it looks like they will be HPC-AI outposts of Oracle Cloud Infrastructure that the DOE labs rent rather than something they buy outright, as has been done for decades. Shifting to a cloud model will mean HPC-AI revenues level off over the years, but it also means they will average out instead of being lumpy. All we know right now is that Hyperion says the overall HPC-AI market grew by 22 percent in the first half of 2025, which is pretty close to the 23.5 percent growth rate seen in 2024.
One last thing: We really appreciate the detailed work that Hyperion does and shares with the HPC community.