Rethinking Data Center Demand: The Future of AI, Energy Consumption, and Load Projections


Energy Future: Powering Tomorrow’s Cleaner World

Peter Kelly-Detwiler

Energy Future: Powering Tomorrow's Cleaner World invites listeners on a journey through the dynamic realm of energy transformation and sustainability.

Just when you get comfortable thinking you know something, you find out that maybe you don’t. In a series of videos late last year, I addressed the issue of exploding data center electricity demand, and the enormous number of applications utilities have received in recent months.  I have been tracking these on a spreadsheet, based on various press releases and articles in the trade press, and thus far I’ve got over 125,000 MW of new projected data center demand. Not all of this demand is AI-related. Some new load will serve your typical data center applications, while some may even be serving crypto loads, now that crypto is in fashion in Washington. 

But there had been subtle signs that this new load might not be as big as the headlines suggest. Skepticism was already the word of the day before news came out of China last week that an open-source AI large language model (LLM) called DeepSeek was nearly as good as some of the proprietary models being built here in the U.S. by some of the biggest players in the space. The news that mattered most – to markets – was that it was not only competitive, but much cheaper, using fewer chips and far less power. DeepSeek reported that its model took only two months and less than $6 million to build, using the less advanced (and less costly) Nvidia H800 chip.

The one-day carnage on Wall Street was amazing to behold. Leading chip maker Nvidia’s share price fell off a cliff, losing 17% and $600 billion – with a B – of market value. Modular nuclear and fuel cell stocks got savaged as well, shedding up to 25% of their stock prices.

Over the ensuing week, additional news filtered out suggesting that perhaps those numbers weren’t quite so reliable, coupled with accusations of so-called distillation – transferring knowledge from OpenAI’s models to DeepSeek – or at least some reverse engineering from other AI models. So it wasn’t as though the model was built entirely from scratch.

Now come three questions related to the grid and future power consumption:

  1. How much of DeepSeek’s claims will eventually prove true, both in terms of the time and resources required to build its LLM? And are there implications for other large language models that essentially use big chips and lots of power to brute-force their way through training?

  2. Is the model really that good?

  3. If one can really build AI capabilities more cheaply, does that in fact lead to Jevons Paradox – i.e., the less expensive computational capacity becomes, the more of it we will use?
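The arithmetic behind Jevons Paradox is easy to sketch. Using entirely made-up illustrative numbers (the baseline demand, the price cut, and the elasticity below are all hypothetical, not figures from any forecast), the key condition is that demand for compute be price-elastic – elasticity greater than 1 – so that a drop in unit cost raises total consumption more than proportionally:

```python
def total_spend(unit_cost, baseline_cost=1.0, baseline_demand=100.0, elasticity=1.5):
    """Constant-elasticity demand: demand scales with (cost ratio) ** -elasticity.

    All parameter values here are hypothetical, chosen only to illustrate
    the mechanics of Jevons Paradox.
    """
    demand = baseline_demand * (unit_cost / baseline_cost) ** -elasticity
    return unit_cost * demand  # total spend (a proxy for total energy use)

before = total_spend(unit_cost=1.0)  # baseline: 100.0
after = total_spend(unit_cost=0.5)   # unit cost halves...
print(before, after)                 # ...but total spend rises to ~141.4
```

With elasticity above 1, halving the cost of compute increases total spending (and, by proxy, energy use); with elasticity below 1, the same price cut would shrink it. That single parameter is what the whole "will cheaper AI mean more or less electricity?" debate turns on.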

As for the first claim, that remains to be verified. However, if it’s even remotely true, it could dramatically change how much the current energy-intensive, brute-force approach is applied to LLM development in the future. That would bring energy consumption figures way down, though nobody knows by how much – this is all too new.

The second claim also may not stand up to further scrutiny. As noted, some anecdotal evidence I have seen suggests that DeepSeek is not really that good at answering some simple questions. And OpenAI has made some claims that need to be verified. What is true is that the model is pretty good. A New York Times tech reporter who spent half of the past Monday playing with the tech came away impressed, noting that it compared well with OpenAI’s ChatGPT and Anthropic’s Claude. It solved some complex math, physics, and reasoning problems at twice the speed of ChatGPT, and its responses to computer programming questions were “as in-depth and speedy as its competitors.” It wasn’t quite so good at composing poetry, planning vacations, or coming up with recipes, but so what? If it’s almost as good, at a fraction of the price…well. So it looks like there’s a “there” there.

The next question then comes down to use, or so-called “inference.” DeepSeek is free, and it was the most frequently downloaded app last week. Perplexity.AI defines the term this way: “Inference involves using the patterns and relationships learned during training to solve real-world tasks without further learning. For instance, a self-driving car recognizing a stop sign on an unfamiliar road is an example of inference.” Provision of that response to my query was also an example of inference (see what I did there?).

Inference can help with real-time decision making, and it involves a number of steps: 1) Data Preparation; 2) Model Loading; 3) Processing and Prediction; and 4) Output Generation to give you the information or results you seek. Inference is very energy-intensive, so if LLMs use less energy to train but become cheaper and more ubiquitous, what does that mean for energy consumption in that arena? We are so early into the adaptation and adoption of these tools that nobody knows.
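The four steps above can be sketched in miniature. This is a deliberately toy illustration – every function name is hypothetical, and the “model” is a stand-in that merely counts tokens; real serving stacks load billions of weights onto accelerators, which is where the energy goes:

```python
def prepare(raw_text):
    # 1) Data Preparation: normalize and tokenize the input
    return raw_text.strip().lower().split()

def load_model():
    # 2) Model Loading: stand-in for loading trained weights onto hardware;
    # here the "model" is just a token counter (purely illustrative)
    return lambda tokens: len(tokens)

def infer(model, tokens):
    # 3) Processing and Prediction: run the forward pass
    return model(tokens)

def format_output(prediction):
    # 4) Output Generation: turn the raw prediction into a readable result
    return f"token count: {prediction}"

model = load_model()
tokens = prepare("a self-driving car recognizing a stop sign")
print(format_output(infer(model, tokens)))  # -> token count: 7
```

The point of the sketch is structural: steps 1, 2, and 4 are bookkeeping, while step 3 – the forward pass – is where nearly all of the computation, and therefore the electricity, is consumed at scale.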

But as far as the electricity required, we could be in the midst of a typical Gartner hype cycle, such as the one we experienced in the late-’90s dot-com frenzy – when Pets.com’s sock puppet was going to dominate the dog food industry.

Admittedly, 25% of Dominion Energy’s demand in Virginia is dedicated to serving data centers. And AI will clearly have many uses, some of which we can only imagine today. But the LLMs may run into various limits with declining economies of scale that would eventually reduce expected demand. There will also be substantial gains in processing and cooling efficiencies that drive energy requirements down, and we will probably see those results in years to come. Right now, we are in the early days of throwing money, a first version of chips, and data at the opportunity. But checkbooks and coffers are not limitless and a focus on efficiency will eventually follow – it always does.

There will also be companies that don’t survive a race that will probably be dominated by only a few deep-pocketed participants (although scrappy low-budget start-up DeepSeek suggests that perhaps an oligopoly is not inevitable). If this goes the same way the search engine race did, we will be left with only a small number of well-resourced players. This LLM quest may yield similar results, with most companies failing or being consolidated – and if you don’t believe me, you can go Ask Jeeves.

There’s also a big issue related to these headline demand numbers: the data companies may be filing many more applications than they intend to actually develop, because of the way the process for connecting with the utility actually works. Only a small number of utilities actually have rigorous procedures for evaluating the applications to ensure they are likely to get to physical service. The best ones, like seasoned veteran Dominion Energy, require proof of control of land, a financial commitment from the data company to support required engineering studies, and signature of a Construction Letter of Authorization obligating the applicant to pay for all project-related expenditures regardless of whether the project eventually breaks ground. Only then does an Electric Service Agreement (ESA) get signed that makes its way into the forecast. In fact, the Dominion 2030 forecast is for less load than is actually covered by ESAs.

A review of various forecasts in other parts of the country demonstrates that this same level of rigor is not routinely applied. Thus, it is quite likely that data companies are submitting multiple interconnection requests. Many data companies are likely doing what you or I would do if we needed lots of juice as fast as possible: submit applications to numerous utilities, in the hope that at least some of them would “get to Yes.”

It’s not possible to gain insight into what exactly is happening at any point in time, since the industry is competitive and maintains a high degree of confidentiality. But it’s very likely that there are numerous place-holder phantom requests. 

The analogue on the bulk power supply side of the industry may be instructive, where over ten thousand generation projects wait in transmission interconnection queues. If recent history is a guide, fewer than 20% of those endeavors will actually get built. 

If utilities further tighten up their load interconnection requirements and implement more rigorous procedures that require higher up-front financial commitments, we may get a better sense of how many real applications are out there.

It’s clear that AI has real value to society, and we are beginning to see some use cases emerge. It’s also clear that we are still in the very early days, with rapidly evolving technologies and business models, and many unanswered questions. Getting past the current hype cycle will take some time; we won’t know the full implications until we start to see some projects proceed while others are canceled. If you don’t believe me, ask Perplexity.AI. It tells me, “several factors suggest that only a fraction of the proposed projects will likely be completed.” Amen to that.

Peter Kelly-Detwiler