Power Grab: AI and the Electric Grid - Part 1


Energy Future: Powering Tomorrow’s Cleaner World

Peter Kelly-Detwiler

Energy Future: Powering Tomorrow's Cleaner World invites listeners on a journey through the dynamic realm of energy transformation and sustainability.

This video is the first in a multi-part series explaining AI-driven datacenter load and its implications for the power grid. It's not that EV load or power demand from Bitcoin doesn't matter; it's just that the potential future demand from datacenters is so much larger.

In May, the Electric Power Research Institute (EPRI) estimated that datacenters might consume up to 9% of U.S. electricity generation by 2030. McKinsey thinks EPRI is underestimating the demand: it projects datacenters at 11-12% of total load by 2030, totaling 80,000 MW of demand.
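As a rough cross-check, the arithmetic below connects 80,000 MW to a share of total load. The 70% average utilization and the roughly 4,200 TWh of annual U.S. generation are illustrative assumptions on my part, not figures from EPRI or McKinsey.

```python
# Back-of-envelope: how 80,000 MW of datacenter capacity maps to a share of
# U.S. generation. Load factor and total generation are assumed, not sourced.
datacenter_mw = 80_000
hours_per_year = 8_760
load_factor = 0.7                 # assumed average utilization
us_generation_twh = 4_200         # approximate annual U.S. net generation

datacenter_twh = datacenter_mw * hours_per_year * load_factor / 1e6  # MWh -> TWh
share = datacenter_twh / us_generation_twh
print(f"{datacenter_twh:.0f} TWh -> {share:.1%} of U.S. generation")
# ~491 TWh -> ~11.7%, in line with the 11-12% projection
```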

An EPRI survey of 26 utilities, conducted from May through July, showed that 60% had received requests for new datacenter hookups of 500 MW or larger, and 48% had requests exceeding 1,000 MW. Almost half said that pending datacenter requests exceed 50% of their current peak demand. Yet none of the utilities surveyed currently serves more than 500 MW of datacenter load.

Dominion Energy Virginia serves the world's largest datacenter hub, a product of the region's high-speed fiber backbone. It served 2,800 MW of datacenter load in 2022; today the figure is closer to 5,000 MW, equal to one-quarter of the entire state's power usage. Another 50,000 MW of datacenter load is waiting in line, but Dominion now says it won't hook up new datacenters over 100 MW for seven years.
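A quick sanity check on that one-quarter figure follows. The roughly 130 TWh of annual statewide consumption and the 75% utilization are assumed round numbers, not figures from Dominion.

```python
# Back-of-envelope: is 5,000 MW of datacenter load about a quarter of
# Virginia's electricity usage? Both inputs below are assumptions.
datacenter_mw = 5_000
hours_per_year = 8_760
load_factor = 0.75                # assumed average utilization
virginia_twh = 130                # assumed annual statewide consumption

datacenter_twh = datacenter_mw * hours_per_year * load_factor / 1e6  # MWh -> TWh
print(f"{datacenter_twh:.0f} TWh -> {datacenter_twh / virginia_twh:.0%} of statewide usage")
# ~33 TWh -> ~25%
```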

In Texas, Oncor faces 59,000 MW of datacenter connection requests, while AEP Ohio, which currently serves 600 MW of datacenters, has interconnection requests now exceeding 40,000 MW.

What’s going on? Nobody really saw this coming, with the exception of a few AI industry insiders, and even most of them have been surprised.

Why? The technology is getting better. In 1997, IBM’s Deep Blue beat world chess champion Garry Kasparov, but that was essentially brute-force calculation. In 2011, IBM’s Watson destroyed Jeopardy king Ken Jennings.

Then, in 2016, AlphaGo crushed top Korean Go player Lee Sedol. Go is far harder than chess: 361 playable points on the board versus chess’s 64 squares. After each player’s first move, chess offers about 400 possible positions; Go offers around 130,000.
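The arithmetic behind those branching-factor figures is easy to verify:

```python
# Positions after the first move by each player.
chess_first_moves = 20            # 16 pawn moves + 4 knight moves
chess_replies = 20                # Black has the same 20 options
print(chess_first_moves * chess_replies)          # 400

go_points = 19 * 19               # 361 intersections on a 19x19 board
go_first_moves = go_points
go_replies = go_points - 1        # any remaining empty intersection
print(f"{go_first_moves * go_replies:,}")         # 129,960, roughly 130,000
```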

The AlphaGo team first trained its system on games played by human experts, then had it play millions of games against itself.

On the 37th move of Game 2 against Sedol, AlphaGo placed a stone in a location that no human player would have expected, appearing to demonstrate creativity. It went on to win, demonstrating the growing capabilities of AI.

But AI still couldn’t really demonstrate prowess on standardized tests or tough conceptual logic. Now it can, and the improvement has been rapid. In 2022, GPT-3.5, the model behind the original ChatGPT, scored in the 40th percentile on the LSAT, the law school entrance exam. In 2023, GPT-4 jumped to the 88th. On the standard SAT, the score went from the 87th percentile to the 97th. Within a few years, we will see “super-intelligence,” where computers exceed the thought processes of the best humans.

Why is this occurring so quickly? Quite simply, the machines are training faster on ever more powerful chips. Next week, we’ll discuss those chips and the electricity they devour.

Peter Kelly-Detwiler