The Compute Heat Rate - AI, Data Centers, and the Future of Power Market Pricing
I recently wrote an opinion piece for RTO Insider addressing a fascinating new metric that captures the interaction between AI data centers and power markets: the Compute Heat Rate (CHR). Data center loads have mushroomed into what can only be described as "stupid land". Consider ERCOT, where the record peak demand set in August 2023 was 85,500 megawatts. The grid operator's latest forecast projects 2030 peak load at an astonishing 319,650 megawatts—a figure that would drop to 107,000 megawatts if large and medium-sized data loads were excluded.
In response to this critical supply and demand issue, many regulators and politicians are demanding that data loads offer flexibility to avoid adding to the system peak. The Electric Power Research Institute recently proposed a framework called "Flex Mosaic" to create common approaches for demand response from these massive facilities. In Texas, Senate Bill 6 even stipulates that data loads can be cut off during grid emergencies before rotating blackouts are enacted.
But while regulators focus on capacity and emergency curtailment, energy markets are driven by price. Industry veteran Hans Royal recently published a paper introducing the CHR to ask an essential question: at what maximum electricity price would a data center operator rationally elect not to consume power?
Here’s the problem: The enormous economic value created per megawatt hour by AI makes these data centers incredibly inflexible.
Traditional large loads, like aluminum smelters or steel producers, provide a self-correcting price mechanism in the market by curtailing operations when power costs reach between $40 and $120 per megawatt hour. AI data centers operate in a completely different universe. Royal estimates a blended CHR of approximately $6,350 per megawatt hour, implying that AI demand will not curtail at prices below roughly 127 times the current wholesale average.
It becomes even more extreme when you separate AI training from AI inference. While the continuous training of a large language model might see a CHR around $500 per megawatt hour, just-in-time inference services—which deliver high-value information for trading, logistical planning, or robotics—could have a CHR exceeding $53,000 per megawatt hour.
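The curtailment logic Royal's metric implies can be sketched in a few lines. The CHR figures below are the estimates quoted above; the $5,000/MWh scarcity price, the function name, and the comparison itself are illustrative assumptions, not part of Royal's paper:

```python
# Illustrative sketch of the curtailment decision implied by the Compute
# Heat Rate (CHR): a load rationally stops consuming only when the power
# price exceeds the economic value it creates per megawatt hour.

def should_curtail(price_per_mwh: float, chr_per_mwh: float) -> bool:
    """Return True if a rational operator would curtail at this price."""
    return price_per_mwh > chr_per_mwh

# CHR estimates quoted in the text (dollars per MWh)
loads = {
    "aluminum smelter (upper bound)": 120,   # traditional band: $40-$120/MWh
    "blended AI data center": 6_350,         # Royal's blended CHR estimate
    "LLM training": 500,                     # continuous training workload
    "just-in-time inference": 53_000,        # high-value inference services
}

price = 5_000  # an assumed extreme scarcity price, for illustration only
for name, chr_value in loads.items():
    verdict = "curtails" if should_curtail(price, chr_value) else "keeps running"
    print(f"At ${price:,}/MWh, {name} {verdict}")
```

Even at a scarcity price that would idle a smelter several times over, the blended AI load and inference workloads keep consuming; under these estimates, only the training workload drops off.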
The likely outcome is that these massive loads will not curtail at any price level currently observed in US wholesale markets. As data center infrastructure reaches critical mass at specific grid nodes, these facilities will be willing to significantly outbid all other loads, setting locational marginal prices.
New technologies like the Nvidia Rubin architecture pack the peak power draw of 65 households into a box the size of a refrigerator, concentrating massive new demand at single sites. This will have collateral impacts on other electricity consumers, who may soon find themselves priced out of various markets. It might be that it's now AI's grid. The rest of us just live on it, and consume and pay for electricity there.