The Insights Gained from 10 Billion Hours of Building Performance Data

Recently, Enertiv’s database topped 10 billion hours of building operations data.

This dataset is made up of the individual components of operating a commercial real estate asset: the labor hours of on-site staff, equipment runtime hours, indoor environmental readings, utility meter consumption, etc.

Internally, this metric is one way we measure growth. It took six years to reach one billion hours of data… and only two more years to reach 10 billion hours.

But “billions of hours of data” doesn’t mean much to commercial real estate owners and operators. 

So, aside from being a nice round number, this milestone gives us the opportunity to connect the dots between building data and asset value, and to explain how data-driven technologies work under the hood.

Building operations data 101

Historically, operating expenses have been an afterthought for owners and asset managers. For one, line items like maintenance and repairs, utilities, insurance, and base building CapEx are reported at the building level, so there’s little transparency into what’s driving those numbers.

Second, financial models already account for a steady annual increase (usually around 3%) in expenses. So as long as there isn’t a disaster, there’s a sense that nothing really needs to be done.

But technology, both workflow-digitizing software and IoT sensors, has brought to light what a well-operated property looks like at a granular level, and the significant asset value that level of performance creates.

So, why “hours” and how is it possible to collect billions of them? 

Ten years ago, the hot new technology was digitizing the building’s electricity meter to benchmark the portfolio’s energy consumption in near real time. That was one data point per building, so each day added 24 hours to the dataset.

With so few data points, even a large portfolio would take over a thousand years to reach 10 billion hours of data. But if you are also monitoring 200 pieces of equipment in a building, that’s 4,800 hours of data added to the database each day.

Add the workflows of building operators and vendors, as well as indoor environmental readings like ambient air temperature and particulate matter, and suddenly the dataset grows very quickly.
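To put rough numbers on that growth, here’s a minimal sketch of the arithmetic. The 1,000-building portfolio used for the “thousand years” comparison is an assumption for illustration, not a description of any real portfolio.

```python
# A minimal sketch of how monitored points translate into hours of data per day.
HOURS_PER_DAY = 24

def hours_per_day(monitored_points: int) -> int:
    """Each monitored point (a meter, sensor, or piece of equipment) adds 24 hours of data per day."""
    return monitored_points * HOURS_PER_DAY

print(hours_per_day(1))    # one electricity meter per building: 24 hours/day
print(hours_per_day(200))  # 200 monitored pieces of equipment: 4,800 hours/day

TARGET_HOURS = 10_000_000_000  # ten billion hours

# A hypothetical 1,000-building portfolio with a single meter each would still
# need more than a millennium to hit that target.
years = TARGET_HOURS / (1_000 * hours_per_day(1) * 365)
print(round(years))  # roughly 1,100 years
```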

When blended with static data such as maintenance vendor service agreements and warranties, equipment makes and models, and utility bills, this data can be contextualized as very granular benchmarking on multiple dimensions.

But so what, right? How does this help the average owner, asset manager or head of engineering?

Learned Insights

On the most basic level, the more data, the more opportunities to spot potential improvements. To have an “insight.”

This could be something straightforward like “you’re cooling the building when it’s cold outside. Turn that system off when temperatures drop and you’ll save $30,000 a year.”

Or it could be more complex, like “there’s a significant difference between the work orders reported by the elevator maintenance vendor and what the sensors indicate is happening. Elevator shutdowns could be reduced by 75% if maintenance were being performed as reported.”
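As a rough illustration of the first, simpler kind of insight, here’s a minimal sketch of how it might be flagged from hourly readings. The column names, the 50°F threshold, and the blended utility rate are all assumptions for illustration, not part of any real system.

```python
import pandas as pd

ECONOMIZER_THRESHOLD_F = 50  # assumed outdoor temperature below which mechanical cooling is wasteful

def flag_wasted_cooling(hourly: pd.DataFrame) -> pd.DataFrame:
    """hourly: one row per hour with outdoor temperature ('outdoor_temp_f') and chiller draw ('chiller_kw')."""
    return hourly[(hourly["outdoor_temp_f"] < ECONOMIZER_THRESHOLD_F) & (hourly["chiller_kw"] > 0)]

def estimated_annual_savings(wasted_hours: pd.DataFrame, blended_rate_per_kwh: float = 0.15) -> float:
    """Rough savings estimate: kWh spent cooling during cold hours, priced at an assumed blended rate."""
    return wasted_hours["chiller_kw"].sum() * blended_rate_per_kwh
```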

But it's not easy to transform raw data into an insight. It takes an understanding of statistics and data science, building systems and mechanical engineering, and even of different lease structures and real estate investment strategies.

For the most part, there’s no blueprint; it requires stumbling around in the dark.

It also requires constant communication with on-site operators. If there’s something that looks interesting in the data, feedback from the field that “yeah, actually, there is a refrigerant leak” can make all the difference.

Once we know what a refrigerant leak looks like in the data, we can query the entire dataset to see how common it is and understand the effects of factors like the region, equipment model and property type. 
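Here’s a minimal sketch of what that portfolio-wide check could look like once a signature has been confirmed in the field. The signature test itself is a toy stand-in, and the column names are illustrative assumptions.

```python
import pandas as pd

def has_leak_signature(unit_history: pd.DataFrame) -> bool:
    """Toy stand-in for a learned signature: compressor runtime creeping up
    while delivered cooling drifts down. A real test would be far richer."""
    return (unit_history["compressor_runtime_hrs"].diff().mean() > 0 and
            unit_history["cooling_output_tons"].diff().mean() < 0)

def signature_prevalence(readings: pd.DataFrame) -> pd.DataFrame:
    """readings: one row per unit per day with 'unit_id', 'region',
    'equipment_model', 'property_type', and the sensor columns above."""
    flags = (readings.groupby(["unit_id", "region", "equipment_model", "property_type"])
                     .apply(has_leak_signature)
                     .rename("has_signature")
                     .reset_index())
    # Share of units showing the signature, broken out along each dimension.
    return (flags.groupby(["region", "equipment_model", "property_type"])["has_signature"]
                 .mean()
                 .rename("signature_rate")
                 .reset_index())
```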

Do this a few hundred times and you’ve started to build a valuable dataset and library of insights.

Democratizing hyper performance

Speaking of communication with on-site operators, there's a fair question that owners often ask when they're considering adopting technology: "Ok, you collect all this data, how do you know what it should look like? How do you know what the ideal is?"

It's a great question. 

The truth is that the skill of building engineers, like that of pretty much any other profession, follows a power law distribution.

This means that in each portfolio, there is likely a small group of hyper performers who run their buildings at a level of performance far above their peers. Unfortunately, there's no amount of training or education that could get every engineer up to this level.

So, while hyper performers might not need technology to do their jobs extremely well, granular data opens the possibility of reverse engineering what makes them so good and mapping the ideal.

Similar to developing an insight, best practices can be compared against the full dataset to surface commonalities: "85% of hyper performers run their three redundant pumps in a lead-lag configuration on a tri-weekly basis, but only 20% of all properties do this and only 7% in this portfolio."
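A minimal sketch of that cohort comparison follows. The boolean practice column, the 'hyper_performer' flag, and the portfolio identifier are illustrative assumptions.

```python
import pandas as pd

def practice_adoption(properties: pd.DataFrame, practice_col: str, portfolio_id: str) -> dict:
    """properties: one row per property, with a boolean column for the practice,
    a boolean 'hyper_performer' flag, and a 'portfolio_id' label."""
    return {
        "hyper_performers": properties.loc[properties["hyper_performer"], practice_col].mean(),
        "all_properties": properties[practice_col].mean(),
        "this_portfolio": properties.loc[properties["portfolio_id"] == portfolio_id, practice_col].mean(),
    }

# Example (hypothetical column and portfolio names):
# practice_adoption(df, "lead_lag_rotation", "PORT-001")
```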

Another valuable aspect of this exercise is mapping the connections in the data that hyper performers do automatically. That is, a deep understanding of how systems are interconnected and how changes in one place will affect performance elsewhere.

The goal, of course, is that with the assistance of technology, a roadmap of daily tasks and improved transparency can raise performance broadly. 

Patterns no human could see

Ironically, artificial intelligence has become both overhyped and underappreciated.

Overhyped because too many solutions based on the learned insights mentioned earlier brand themselves as "AI driven." Underappreciated because few understand that, for narrow purposes, AI is already very prevalent today.

Those practical applications come from deep learning, a family of AI techniques that use massive amounts of data from a specific domain to make decisions that optimize for a desired outcome.

It does this by training on that data to recognize deeply buried patterns and correlations, many of which are invisible or irrelevant to human observers, and using them to make better decisions than a human could.
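As a heavily simplified illustration of the idea (not Enertiv's actual pipeline), here's a sketch that trains a small neural network on synthetic hourly readings to predict an outcome such as next-hour HVAC load. The features, target, and model choice are all assumptions for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_hours = 50_000

# Stand-in features: outdoor temp, occupancy, hour of day, and current load.
X = rng.normal(size=(n_hours, 4))
# Stand-in target: next-hour load, a nonlinear mix of the features plus noise.
y = 2.0 * X[:, 0] + np.sin(X[:, 2]) * X[:, 1] + 0.5 * X[:, 3] + rng.normal(scale=0.1, size=n_hours)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=300, random_state=0)
model.fit(X_train, y_train)
print("held-out R^2:", model.score(X_test, y_test))
```

The point of the sketch is only that, given enough hours of data, the model learns the buried relationship between inputs and outcome on its own, rather than relying on a hand-written rule.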

This super-charged decision making is becoming more important than ever as operators struggle to balance operating efficiency, COVID-19 mitigation, sustainability goals, and tenant comfort.

Still, it's important to remember that the operative phrase of this capability is "massive amounts of data." 

When it comes to AI, even 10 billion hours of data only gets you in the door. Over time, however, as more data is captured and analyzed, AI will challenge deeply held beliefs, assumptions and rules of thumb.


So, what does 10 billion hours of building operations data represent? In a way, it represents a mutually beneficial system: in exchange for a small contribution, you gain access to the combined knowledge of the entire industry.

In many ways, this is the best version of human (and AI) cooperation. We can't wait to see what can be done with 100 billion hours of data.
