Follow-up to “Acceleration and p(Doom)”: 7 Grid AI Questions Answered
Photo Credit: ERCOT


Last week, following our workshop at a DOE cyber and grid modernization conference, I posted the opening monologue and seven of the questions panelists Chris Lamb of Sandia and Colin Ponce of Livermore addressed. Without a video or a good recording, though, we had no transcript to publish. But I did get multiple requests for the answers, and the two of them generously agreed to put succinct thoughts down in pixels as a follow-up.

Fortunately, both of them, in addition to being big thinkers, are good writers as well, so I only had to perform minimal editing on what follows. As you'll see, they are mainly positive when projecting benefits and sanguine when it comes to risk. That's not me, particularly when it comes to adversarial misuse of AIs. But this is their show. So, without further ado, here are the questions, with answers this time:

1. As you likely know, variations of AI/ML are already in use by utilities and other grid-serving entities to forecast weather and loads. Are you aware of other already-fielded uses of these technologies?

Colin: I can’t point to specific vendors or products in which I know AI/ML is used; however, other current uses I know of include the following (a toy forecasting sketch follows the list):

  • Forecasting for market prices, oil production from shales, etc.
  • Development of algorithms to accelerate computational solves/simulations.
  • Cybersecurity firms rolling out ML capabilities within their threat monitors.
  • Predictive maintenance in some equipment like compressors and pumps.
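
For readers who want something concrete, here is a minimal sketch of the load-forecasting use case the question mentions, using scikit-learn's gradient boosting on calendar and temperature features. The CSV layout and column names are invented for illustration, not drawn from any utility's actual systems:

```python
# Minimal day-ahead load forecasting sketch (illustrative only).
# Assumes a hypothetical CSV with columns: timestamp, temp_f, load_mw.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

df = pd.read_csv("hourly_load.csv", parse_dates=["timestamp"])

# Calendar features capture daily/weekly load cycles.
df["hour"] = df["timestamp"].dt.hour
df["dow"] = df["timestamp"].dt.dayofweek
df["load_24h_ago"] = df["load_mw"].shift(24)  # same hour yesterday
df = df.dropna()

features = ["hour", "dow", "temp_f", "load_24h_ago"]
# Hold out the most recent week (168 hours) for evaluation.
train, test = df.iloc[:-168], df.iloc[-168:]

model = GradientBoostingRegressor()
model.fit(train[features], train["load_mw"])

pred = model.predict(test[features])
mape = (abs(pred - test["load_mw"]) / test["load_mw"]).mean() * 100
print(f"Holdout MAPE: {mape:.1f}%")
```

Real deployments layer weather forecasts, holidays, and probabilistic outputs on top of this, but the basic shape, lagged load plus calendar and weather features feeding a regressor, is the same.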

2. Where do you think generative AI will be most helpful in grid operations?

Colin: I see two great uses of generative AI:

  • LLMs as operator assistants. LLMs are uniquely suited to converting information between unstructured and structured forms, and they can work with multimodal data. An LLM can listen to a human operator describe, in natural English and at a high level, what she wants done, and quickly convert that into a detailed set of precise operating instructions. The operator can then review and either alter or execute the commands (a minimal sketch of this review-before-execute loop follows the list).
  • Generative AI can fill in missing data with plausible generated data. If sensors are missing, this can be used for plausible “state inference” or “observability completion.”
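
To make the first bullet concrete, here is a minimal sketch of that review-before-execute pattern. The llm_complete() stub and the command schema (device/action/setpoint) are hypothetical stand-ins, not any vendor's actual API:

```python
# Sketch of an LLM operator assistant with a mandatory human review gate.
# llm_complete() is a placeholder for a real LLM API call; the command
# schema ("device", "action", "setpoint") is a hypothetical example.
import json

SYSTEM_PROMPT = (
    "Convert the operator's request into a JSON list of commands, each "
    'shaped like {"device": str, "action": str, "setpoint": float|null}. '
    "Output JSON only."
)

def llm_complete(system: str, user: str) -> str:
    raise NotImplementedError("wire this to your LLM provider")

def propose_commands(request: str) -> list[dict]:
    """Turn a natural-language request into structured commands."""
    raw = llm_complete(SYSTEM_PROMPT, request)
    return json.loads(raw)

def review_and_execute(request: str, execute) -> None:
    """Key property: nothing runs until a human approves each command."""
    for cmd in propose_commands(request):
        answer = input(f"Execute {cmd}? [y/N/edit] ").strip().lower()
        if answer == "edit":
            cmd = json.loads(input("Revised command JSON: "))
            answer = "y"
        if answer == "y":
            execute(cmd)  # execute() is the plant-side dispatch hook
        else:
            print(f"Skipped: {cmd}")
```

The design point is the gate: the model only proposes; a human approval is what executes.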

Chris: Preprocessing data for regulatory review, providing an initial analysis of how well submitted documentation fits the evidence regulators require. LLMs can be used for customer-facing support as well, giving customers immediate system state during outages, with radically up-to-date estimates of causes and time to restoration.


3. You may or may not be aware that a book is going to be published at the end of this month with the title AI: Unexplainable, Unpredictable, Uncontrollable (author: Dr. Roman Yampolskiy, University of Louisville). Does that sound right to you, or do you feel the opposite will prove true: that AIs on the grid will be Explainable, Predictable, and Controllable?

Colin: AI is like any new technology in that there are caveats and pitfalls that are not well understood when it is new, and over time, improvements in policy, best practices, and technology are needed to make that technology as safe as possible.

Our ability to predict, explain, and control generative AI right now is rather limited. It’s not zero, but it’s not great either. As a result, AI absolutely has the ability to cause harm in some cases if used naively, and the same is true of other technologies (e.g., cars, electricity, computers).

However, through investment and work to advance AI safety technology, along with improvements in best practices and policies, we absolutely can achieve a world where AI’s harms are minimized and its benefits vastly outweigh those harms.

Chris: We know that AI systems can certainly be unpredictable and unexplainable, and they can have control over whatever those systems can access, as determined by the engineers designing the AI and the systems it influences. But so what? They are unpredictable in a vanishingly small number of cases, and frankly, engineers have control over the access and use of these systems as well.

Many systems are not explainable at scale, but again, if those systems don’t work correctly, we don’t try to fix the model; we just train a new one. We can then do root cause analysis on the failure, or attempt to recreate it, but explainability at the model level just isn’t that important.

Yes, these systems can be unpredictable, unexplainable, and uncontrollable, but if any of those attributes has significant real-world effects, it’s the fault of the engineers who designed, trained, and put the models into production.


4. Are you aware of grid systems suppliers' stated intentions to begin introducing AI capabilities into their product lines? In particular, energy management systems, distribution management systems, DERMS, SCADA systems, etc.?

Colin: I have seen advertised AI-driven energy management solutions for buildings, data centers, telecom systems, etc., in particular ones that can shut down components not in use to save energy. This makes a lot of sense! Beyond this, AI is very buzzword-y right now, so it shows up in lots of marketing material, but in some cases it is unclear what role, if any, it is actually playing.

Chris: There are certainly areas where companies will seek to use AI. Frequently, however, traditional algorithmic approaches can be as good as or better than AI approaches. SCADA systems will certainly use AI components for event classification and prediction, likely in tandem with other, more traditional approaches. These approaches will only be used as decision support tools, however, especially in cases where the consequences of a given control action are difficult to undo later. Typically, the higher a function sits in the Purdue model, the more amenable it will be to AI inclusion.
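
A decision-support classifier of the kind Chris describes might look like the sketch below: a model scores incoming SCADA events, but its output is only a ranked suggestion for the operator, never a control action. The feature set and event labels are invented for illustration:

```python
# Sketch: ML event classification as decision support, not control.
# Features and event labels are invented for illustration.
from sklearn.ensemble import RandomForestClassifier

EVENT_LABELS = ["normal", "voltage_sag", "breaker_trip", "sensor_fault"]

# X: rows of telemetry features (e.g., rms_voltage, frequency, thd);
# y: indices into EVENT_LABELS from historical, analyst-labeled events,
#    covering all four classes.
def train_classifier(X, y):
    clf = RandomForestClassifier(n_estimators=200)
    clf.fit(X, y)
    return clf

def advise_operator(clf, event_features):
    """Return ranked hypotheses; a human decides what, if anything, to do."""
    probs = clf.predict_proba([event_features])[0]
    ranked = sorted(zip(EVENT_LABELS, probs), key=lambda p: -p[1])
    for label, p in ranked:
        print(f"{label:>14}: {p:.0%}")
    return ranked  # no control action is taken here by design
```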


5. Do you have security concerns about the extent to which Chinese suppliers dominate the market for inverters? And if yes, could AI play a helpful role in better securing the US grid as it comes to rely on an increasing percentage of generation, as well as ancillary services, from inverter-based resources (IBRs)?

Colin: This is definitely a concern. And yes, AI can certainly help: we can leverage AI to characterize network communications to and from inverters, detect when any inverter starts to communicate anomalously, and so on.
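
One way to realize Colin's suggestion is unsupervised anomaly detection over per-inverter traffic features. The sketch below uses scikit-learn's IsolationForest trained on a baseline window of known-good traffic; the feature names and synthetic numbers are assumptions for illustration only:

```python
# Sketch: flag anomalous inverter network behavior with an Isolation Forest.
# Feature names are illustrative; real deployments would derive them from
# packet captures or flow logs (bytes/min, packets/min, distinct peers, etc.).
import numpy as np
from sklearn.ensemble import IsolationForest

def fit_baseline(baseline_features: np.ndarray) -> IsolationForest:
    """Train on a window of known-good traffic, one row per inverter-minute."""
    model = IsolationForest(contamination=0.01, random_state=0)
    model.fit(baseline_features)
    return model

def flag_anomalies(model: IsolationForest, live_features: np.ndarray):
    # predict() returns +1 for inliers, -1 for outliers.
    verdicts = model.predict(live_features)
    return np.where(verdicts == -1)[0]  # indices of suspicious rows

# Synthetic demo: columns = [bytes_per_min, packets_per_min, distinct_peers]
baseline = np.random.default_rng(0).normal([5e4, 400, 2], [5e3, 40, 0.5], (1000, 3))
model = fit_baseline(baseline)
live = np.vstack([baseline[:5], [[5e6, 9000, 40]]])  # last row is anomalous
print(flag_anomalies(model, live))  # expect index 5 to be flagged
```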

Chris: Today, this isn’t really a concern at scale. It may be in the future but known hardening and monitoring techniques are likely sufficient from a cybersecurity perspective. AI systems have a mixed track record in cybersecurity applications.


6. Do you believe we will come to use, and then rely on, AI systems trained on operator data in control rooms as operator co-pilots?

Colin: No. I think we will come to use (and maybe rely on) AI systems as operator assistants. I think a human will always be in charge, and will always have the ability to redirect, change course, shut down, etc.

Chris: AI systems will be decision support systems, and we will absolutely come to rely on them as such.


7. In your opinion, could/should DOE be involved in testing AI-enhanced grid systems before they are approved to operate on the Bulk Electric System?

Colin: I believe DOE should be involved in technology development and best-practice development to ensure our AI-powered products are as safe and effective as possible, and it should be involved in helping to create and design testing approaches to verify product performance. That said, I do not want to see additional regulatory hurdles around the deployment of AI at this time, as that would, in my view, unnecessarily limit innovation.

Chris: No. The inclusion of AI techniques is up to a given operator and the regulatory and supervisory structure around that operator. Furthermore, the term “AI” is much too ambiguous to require specific regulation. Expert systems, smaller connectionist models used in object recognition systems, and other types of software and hardware constructs are all “AI.” Should they be regulated? Researchers have used very simple perceptrons for batch prediction. Should those be regulated? Of course not. At what point, then, do we impose regulation on a model? When it has a specific number of parameters? What if you have two models operating serially that are each just under that number of parameters? Should that be regulated? Overall, not only should we not regulate these models, I’m not sure we could if we wanted to.
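
To illustrate how low the "AI" bar can go in Chris's argument, here is a complete perceptron of the sort he mentions, in a few lines of NumPy; any parameter-count threshold that catches modern models would have to decide what to do with artifacts this trivial. The toy data is invented:

```python
# A classic perceptron: arguably "AI", yet only n_features + 1 parameters.
import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1):
    """y in {-1, +1}; returns weights w and bias b."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:  # misclassified: nudge the boundary
                w += lr * yi * xi
                b += lr * yi
    return w, b

# Linearly separable toy data: label is the sign of x0 + x1.
X = np.array([[2.0, 1.0], [1.0, 3.0], [-2.0, -1.0], [-1.0, -3.0]])
y = np.array([1, 1, -1, -1])
w, b = train_perceptron(X, y)
print(np.sign(X @ w + b))  # should reproduce y
```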


NB: Here's a link to the book referenced in Question 3:

https://www.routledge.com/AI-Unexplainable-Unpredictable-Uncontrollable/Yampolskiy/p/book/9781032576268


//// THE END ////
