What utilities can learn from cloud computing and telecom
Cipher News
Bringing you the latest news on the technological solutions we need to combat climate change.
BY: ASTRID ATKINSON
Astrid Atkinson is the co-founder and CEO of Camus Energy, a maker of cloud-native software that gives electric utilities the ability to monitor and manage utility-owned and customer-sited energy devices. You can reach Atkinson at [email protected].
Staring down once-in-a-generation growth in energy demand, electric utilities in the United States are facing the prospect that their current systems could soon be overwhelmed, jeopardizing their abilities to maintain reliable and affordable service.
As a former Google executive turned entrepreneur, I want to draw parallels between electric utilities and other mission-critical industries. First, let's level-set on the state of play.
Some utilities are relying on conventional ways to meet growing demand, focusing on building additional power generation and the poles and wires needed to get electricity to customers—projects with huge price tags attached.
Encouragingly, though, an expanding cohort of forward-looking utilities is taking a different tack, stepping up investments in technologies and processes that can drive efficiency and optimize existing infrastructure.
Among the most promising are artificial intelligence-powered systems that continuously monitor and dispatch the power generation, conservation and storage capacities of both utility-owned assets and, importantly, customer-sited distributed energy resources (DERs).
Take, for example, a customer’s electric vehicle. By equipping utilities with the means to dynamically activate this resource’s power storage capabilities during both actual and anticipated periods of excess renewable power, we’d be able to effectively get more “juice” out of the grid.
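To make the idea concrete, here is a minimal sketch of that kind of dispatch decision. The function name, thresholds and power limits are hypothetical illustrations, not Camus Energy's actual logic:

```python
# Illustrative sketch (hypothetical names and thresholds): deciding when a
# utility might charge or discharge a customer's EV battery based on grid
# conditions.

def dispatch_ev_battery(renewable_surplus_kw: float,
                        battery_soc: float,
                        max_power_kw: float = 7.2) -> float:
    """Return a charge (+) or discharge (-) setpoint in kW.

    renewable_surplus_kw: excess renewable generation (negative = shortfall)
    battery_soc: battery state of charge, from 0.0 (empty) to 1.0 (full)
    """
    if renewable_surplus_kw > 0 and battery_soc < 0.9:
        # Surplus solar or wind on the grid: soak it up by charging.
        return min(renewable_surplus_kw, max_power_kw)
    if renewable_surplus_kw < 0 and battery_soc > 0.2:
        # Supply shortfall: support the grid by discharging.
        return max(renewable_surplus_kw, -max_power_kw)
    return 0.0  # otherwise, hold

print(dispatch_ev_battery(5.0, 0.5))    # surplus power: charge at 5 kW
print(dispatch_ev_battery(-10.0, 0.8))  # shortfall: discharge at the 7.2 kW limit
```

A production system would layer in forecasts, customer preferences and departure times, but the core pattern is the same: treat the battery as a grid resource that responds to conditions in real time.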
It’s well worth the squeeze.
At scale, this approach can help contain grid expansion costs by deferring or outright eliminating the need for time-intensive and costly capacity additions, including new “peaker” plants—natural gas-fired power plants that ramp up, at great expense, when demand outstrips supply.
Still, to many utilities, new is the same as risky—even for solutions that are mature and well-tested. As Cipher reported earlier this year, it can be challenging to bridge the gap between the awareness and acceptance of new technology and the adoption and activation of it.
Much of this reluctance is a human problem. Change is difficult in any organization, and doubly so in one focused on providing a reliable, consistent service. But adjacent high-reliability industries—cloud computing and telecommunications—have successfully navigated similar challenges to the ones facing electric utilities today.
The lesson from those industries is clear: Success in technological innovation comes not just from adopting new tools, but from fundamentally reimagining operations with those tools in mind.
Cloud computing
The computing industry’s shift from software running on single computers to operating on distributed, cloud-based systems mirrors the electric utility industry’s current predicament.
I was part of the team at Google that pioneered the development of cloud-native computing architectures in the early 2000s. We were tasked with ensuring the company’s computing infrastructure was reliable, so we had to both identify roadblocks and develop the software solutions that allowed the system to scale. Our team built solutions including a global content serving system and a global monitoring system that enabled real-time visibility into operating conditions across approximately 1.5 million servers—the utility industry’s equivalent of DERs.
Ultimately, the migration to these distributed systems—led by our team, the equivalent of an electric distribution utility’s grid operations and engineering unit—enabled us to maintain 99.999% reliability (or better!) across Google’s global web service operations, while allowing workloads to grow a millionfold in less than a decade.
That transition has made the internet services of today possible—from search to video streaming—and is now powering the AI revolution.
Creating such a system will be key for utilities as well. A global monitoring system, like the one we created, provides operators with the confidence to make big systemic changes, knowing that they have the ability to quickly detect and troubleshoot problems that might arise.
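At its core, that kind of monitoring reduces to a simple pattern: continuously compare readings from many devices against expected bounds and surface the outliers. The sketch below is a deliberately simplified illustration (the device names and voltage limits are invented), not a description of any real utility's system:

```python
# Illustrative sketch (hypothetical data): a minimal monitoring check that
# flags devices whose readings drift outside expected bounds, so operators
# can detect and troubleshoot problems quickly.

def find_anomalies(readings: dict[str, float],
                   low: float, high: float) -> list[str]:
    """Return the IDs of devices whose latest reading falls outside [low, high]."""
    return [device for device, value in readings.items()
            if not (low <= value <= high)]

# Example: voltage readings (in volts) from three grid devices.
voltages = {"transformer_12": 239.8, "feeder_7": 251.3, "meter_404": 118.0}
print(find_anomalies(voltages, low=228.0, high=252.0))  # ['meter_404']
```

Real grid monitoring adds time-series history, forecast-based thresholds and automated alerting, but the payoff is the same one we saw at Google: operators can make bold changes because they will know, within seconds, if something goes wrong.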
Telecommunications
In the telecommunications industry, the successful evolution from rigid, hardware-dependent systems—landlines and switchboard operators—to flexible, multipurpose wireless networks offers another compelling lesson for utilities.
The industry-wide development of 5G wireless is just the most recent step in this transformation, enabling real-time allocation of network resources to different kinds of traffic, such as voice and internet. Today, the same wireless networks can be reliably used for critical communications, alongside less critical, day-to-day activities like video streaming, and to accommodate exponentially more traffic.
Telecom systems now adapt rapidly to changing demand patterns, automatically throttling and allocating network bandwidth to prevent congestion. AI-driven software could do the same for utilities—helping them manage fluctuating energy needs while increasing efficiency and reliability—by dynamically balancing supply and demand resources on the grid.
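The telecom analogy can be sketched directly: when requested load exceeds available capacity, curtail flexible loads proportionally rather than letting the system congest. This toy example (all device names and numbers are invented for illustration) shows the shape of that logic:

```python
# Illustrative sketch (hypothetical): proportionally curtailing flexible
# loads when requested demand exceeds available grid capacity, analogous to
# how telecom networks throttle bandwidth to prevent congestion.

def allocate_capacity(requests_kw: dict[str, float],
                      available_kw: float) -> dict[str, float]:
    """Scale each flexible load's request so the total fits within capacity."""
    total = sum(requests_kw.values())
    if total <= available_kw:
        return dict(requests_kw)  # no congestion: serve every request in full
    scale = available_kw / total  # curtail all flexible loads proportionally
    return {device: kw * scale for device, kw in requests_kw.items()}

loads = {"ev_charger": 50.0, "hvac": 30.0, "water_heater": 20.0}
print(allocate_capacity(loads, available_kw=80.0))
# 100 kW requested against 80 kW available, so each load is scaled to 80%
```

In practice a grid orchestrator would weight loads by priority and customer agreements rather than scaling uniformly, just as telecom networks prioritize critical traffic over video streaming.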
Going forward, this kind of distributed data management and software-driven orchestration should form the core of reliable energy grid operations.
By borrowing the successful strategies of cloud computing and telecom giants, utilities can build the kind of reliability that both executives and customers can trust. Only then can utilities enable the rapid transformation we need.