Is data from Battery Energy Storage Systems becoming too heavy for the cloud?

The short answer is probably no. However, the real question is whether these systems are becoming too large to economically push all that data to the cloud. Here, the answer might be "maybe."

Before diving into a conclusion, we need to get into some of the specifics.

In the not-too-distant past, the Battery Energy Storage System (BESS) market was dominated by containerized solutions ranging from 300 kWh to about 3 MWh. These systems became popular for backup power, rental power, and smaller sites like microgrids. Another common application was colocation, where a BESS coupled with a moderate-scale solar array buffered power from photovoltaic (PV) systems to the grid, especially in partially cloudy conditions.

However, the landscape has shifted. Large-scale energy storage has become critical to the energy transition, and an increasing number of mega grid-scale battery parks are now being implemented or planned. Grid-scale battery parks first gained widespread attention when Tesla built the Hornsdale site in Australia in 2017, originally 129 MWh and later expanded to 194 MWh. Fast forward to 2024, and we're routinely seeing sites over 500 MWh worldwide, with some exceeding 3,000 MWh, such as the upgraded Moss Landing battery in California.

These mega-sites are no longer rare. Large installations are appearing globally. For example, Rens Savenije from Ventolines published a list of sites in the Netherlands, with four parks in the planning stages exceeding 500 MWh. https://www.dhirubhai.net/posts/renssavenije_big-batteries-nl-activity-7236623523501256706-Xk_b/

Similar projects are underway in the U.S., China, Australia, the U.K., Belgium, Chile, South Africa, Germany, and other countries, largely driven by the explosive growth of solar energy and the subsequent opportunities and challenges in grid stabilization.

Why Does Storage Size Matter for Data Volume?

For optimization platforms (Optimizers), storage size may not significantly impact data volume. These platforms are mainly concerned with aggregate data from the site, such as total State of Charge (SoC) and power output. While the size of the asset may have huge implications for financial risk, it may not place additional strain on the data platform beyond that of a smaller 2 MWh site.

Capacity-availability risk does make a difference to the optimizer, though: an unexpected loss of available capacity can have a significant financial impact, even for a single day.

Therefore, when tasked with monitoring the availability and health of each battery module, storage capacity becomes a significant factor—and here's why:

In health monitoring, the number of battery cells—or more specifically, the number of strings—matters. Strings are the basic building blocks of a battery, created by connecting cells to reach a target operating voltage. A typical operating voltage might be about 1,500V, which translates to roughly 400 cells per string. The Battery Management System (BMS) monitors each cell within a string, ensuring voltage balance, detecting shorts, and monitoring for overheating. If something goes wrong, the BMS takes immediate action. It also provides the Energy Management System (EMS) with overall string data such as maximum and minimum cell voltage, temperature, current, SoC, etc.
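To make the arithmetic concrete, here is a quick sketch in Python. The nominal cell voltage of 3.7 V is an assumption for illustration, not a figure from any particular vendor.

```python
# Rough string-sizing arithmetic. The nominal cell voltage is an
# assumed value for illustration, not a vendor specification.
TARGET_STRING_VOLTAGE = 1500.0  # V, typical string operating voltage
NOMINAL_CELL_VOLTAGE = 3.7      # V, assumed nominal Li-ion cell voltage

cells_per_string = round(TARGET_STRING_VOLTAGE / NOMINAL_CELL_VOLTAGE)
print(f"Cells per string: ~{cells_per_string}")  # ~405, i.e. roughly 400
```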

The challenge is that the BMS is generally static: in many cases it doesn't evolve based on site operations. To compensate, OEM or third-party systems offer monitoring services and analytics that supplement the BMS by sending string-level data to the cloud for real-time or batch analysis using advanced models. These models can detect issues the BMS was not originally designed to catch.

The Data Fire Hose

Monitoring each string in a BESS involves tracking multiple variables, typically 20-30 per string, along with cooling system/HVAC data, inverter information, and other balance-of-plant (BoP) data. While some data, like temperatures, can be sampled every 10 seconds or even once a minute, other data, like voltage and current, needs to be sampled at sub-second intervals to detect short-lived fluctuations, such as the early stages of dendrite formation. Detection algorithms work best when given access to data sampled at 10-50 Hz.
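As a rough illustration of what this adds up to, here is a back-of-envelope estimate in Python. The signal counts, sampling rates, and 8 bytes per sample are assumptions chosen to match the ranges above, not measurements from a real platform.

```python
# Back-of-envelope data-rate estimate for string-level monitoring.
# All parameter values are illustrative assumptions.
BYTES_PER_SAMPLE = 8  # assume one 8-byte float per reading

def string_rate_bps(fast_signals=4, fast_hz=25, slow_signals=24, slow_hz=0.1):
    """Raw bytes/second for one string: a few fast signals (voltage,
    current) at 25 Hz plus ~24 slow signals (temperatures, etc.)
    sampled every 10 seconds."""
    fast = fast_signals * fast_hz * BYTES_PER_SAMPLE
    slow = slow_signals * slow_hz * BYTES_PER_SAMPLE
    return fast + slow

per_string = string_rate_bps()  # ~819 bytes/s per string
fleet = per_string * 2400       # the 2,400-string fleet discussed below
print(f"{per_string:.0f} B/s per string; "
      f"~{fleet * 86400 / 1e9:.0f} GB/day raw for 2,400 strings")
```

Even under these modest assumptions, a 2,400-string fleet produces on the order of 170 GB of raw samples per day before compression.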

Here's where the problem arises in scaling.

Let’s do some quick math. Assume a cloud-based monitoring platform is responsible for 100 traditional containerized BESS, each with about 2.3 MWh of capacity and 24 strings. That means you’re monitoring about 2,400 strings in total globally.

Now consider a single new grid-scale battery plant with a median size of 900 MWh of storage. Using CATL's EnerOne product as an example, that one plant would involve monitoring roughly 2,372 strings.

Just by adding one of these new grid-scale sites, you've nearly doubled the amount of data your platform must ingest, process, and store. That roughly doubles your infrastructure costs, since it is not just storage but also compute. And this example doesn't even account for other critical systems like inverters, HVAC, and BoP.
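The scaling comparison can be reproduced in a few lines of Python. The per-string capacity of roughly 379 kWh is inferred from the example above (900 MWh over 2,372 strings) and should be treated as an assumption rather than a product specification.

```python
# String counts for the legacy fleet vs. one grid-scale site.
legacy_sites = 100
strings_per_legacy_site = 24
legacy_strings = legacy_sites * strings_per_legacy_site  # 2,400

grid_scale_mwh = 900
kwh_per_string = 379.4  # assumed, inferred from 900 MWh / 2,372 strings
grid_scale_strings = round(grid_scale_mwh * 1000 / kwh_per_string)  # 2,372

growth = (legacy_strings + grid_scale_strings) / legacy_strings
print(f"One {grid_scale_mwh} MWh site adds {grid_scale_strings} strings: "
      f"{growth:.2f}x the original monitoring load")  # ~1.99x
```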

The above example is just one site. In the list I provide below, there are more than 40 sites greater than 500 MWh either in operation or under construction around the world, with more than 70 more in the planning phases.

The Cost of Data

Handling more data isn't a problem for cloud providers—they have practically unlimited compute and storage capacity. The question is whether it’s economical for you. Doubling your costs for one customer, much less several, may not be economically sustainable.

This situation calls for a rethinking of data and processing strategies to make them more economical at scale. Instead of throwing more money at infrastructure, is there a better approach?

Moving Data Processing Closer to the Source

As in several other industries, most BESS data is benign: 99.9% of it is simply saying, "everything is fine." Only 0.1% of the data flags potential issues. If we take a cue from industries like mining, which try not to transport raw material long distances for processing, we can move more data processing closer to the site rather than pushing all the bulk data through the cloud pipeline.

By bringing the analysis closer to the data, such as processing it on-site, you avoid the cost of a more complicated pipeline in the cloud. Once a condition is detected, the high-frequency data can be sent to the cloud just for the affected unit in the affected time range. This method, while technically challenging, allows for real-time detection and significantly reduces the data transmitted to the cloud.

Graphene Edge has spent significant effort developing and proving new technology that uses this approach, managing analytical models in the cloud but deploying them to the plant for real-time processing. This hybrid system retains the cloud's advantages while keeping data processing economical for large-scale projects. Beyond economics, it brings other advantages, such as reliability and the ability to feed new analytical data back to the local BMS and EMS in real time, even if connectivity becomes a problem. In addition, analytics running on Graphene can be updated and enhanced continuously, without shutting down the plant for software updates, or performing software updates at the plant at all.

Don't misunderstand: I am not advocating that no data be sent to the cloud. Cloud data is needed to inform and coordinate the various companies responsible for operating and monitoring the system, and occasionally higher-density data must be sent to train machine learning models. However, far less data needs to go to the cloud if the analytics are at least preprocessed on-site, at the edge.
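To make the approach concrete, below is a minimal sketch of an edge-side filter, assuming one filter instance per string. It is illustrative only and not Graphene Edge's actual implementation: ThresholdModel stands in for a cloud-managed analytical model, and the upload callable stands in for a real cloud client. The key property is that the full high-frequency stream never leaves the site; only the window around a detected condition does.

```python
from collections import deque

# Minimal sketch of edge-side filtering (illustrative only; not
# Graphene Edge's actual implementation). High-frequency samples are
# scored on-site, and raw data is uploaded only for the affected
# string in the affected time window.

WINDOW_SECONDS = 60  # raw history retained per string (assumed)
SAMPLE_HZ = 25       # assumed high-frequency sampling rate

class ThresholdModel:
    """Stand-in for an analytical model deployed from the cloud."""
    def __init__(self, limit):
        self.limit = limit

    def is_anomalous(self, value):
        return value > self.limit

class EdgeStringFilter:
    def __init__(self, string_id, model, upload):
        self.string_id = string_id
        self.model = model    # model managed in the cloud, run at the edge
        self.upload = upload  # callable standing in for a cloud client
        self.recent = deque(maxlen=WINDOW_SECONDS * SAMPLE_HZ)

    def on_sample(self, value):
        """Called for every raw sample; almost all stay on-site."""
        self.recent.append(value)
        if self.model.is_anomalous(value):
            # Condition detected: ship only this string's recent
            # raw window to the cloud, not the full stream.
            self.upload(self.string_id, list(self.recent))

# Usage: only the out-of-range reading triggers an upload.
f = EdgeStringFilter(
    "string-042",
    ThresholdModel(limit=4.2),
    upload=lambda sid, win: print(f"upload {sid}: {len(win)} samples"),
)
for v in [3.60, 3.70, 3.65, 4.50, 3.70]:  # cell-voltage-like readings
    f.on_sample(v)  # prints once, when 4.50 V exceeds the threshold
```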


Conclusion: A Balanced Approach

This solution won't fit every case. For small-scale BESS, pushing data to the cloud may remain the simplest option. However, for larger grid-scale storage, edge-based processing may be the key to managing data at scale without huge increases in compute costs.

Other potential strategies include reducing sampling frequencies, though that risks missing critical early indicators of failure. Alternatively, relying more on BMS alarms might help, but this would require evergreen BMS updates that can also adapt to site-specific conditions.

In the end, scaling economically, especially with sparse data, may necessitate moving more of the analytics to the edge, balancing cloud and on-site processing to manage the torrent of data efficiently.

To get an idea of the number of large-scale BESS systems currently in operation or in planning, I have attached a list of sites. I don't claim this list is exhaustive, but it illustrates the growth we are seeing in the market.

What do you think: do we need to consider approaches to BESS data other than simply throwing it at the cloud?




