US Property Data: Is this as good as it gets?

How a building is constructed, how it is maintained and where it is located all have a massive impact on its potential to be damaged or destroyed. That knowledge is as old as insurance itself.

So why do so many underwriters still suffer from a lack of decent data about the buildings they insure?

And when better data does get collected for US properties, why does it seem to get lost as it crosses the Atlantic?

London is an important marketplace for insuring US risks. It provides over 10% of the capacity for specialty risks - those that are hard, or impossible, to place in their home market through admitted carriers. Reinsurers of admitted carriers, insurers of homeowners and small businesses in the Excess & Surplus markets, and facultative reinsurers of large corporate risks all need property data.

The emergence and growth of a new type of property insurer in the US, such as Hippo and Swyfft, has been driven by the expectation of access to excellent data. They are geared up to perform fast analyses. They believe they can make accurate assessments and offer cheaper premiums. The level of funding for ambitious startups shows that investors are prepared to write large cheques, tolerate years of losses and wait patiently in the expectation that their companies will displace less agile incumbents. If this works, it's not just the traditional markets in the US that will be under threat. The important backstop of the London market is also vulnerable. So what can established companies do to counter these new arrivals?

Neither too hot nor too cold

The challenge for any insurer is how to get the information it needs to accurately assess a risk, without scaring off the customer by asking too many questions. The new arrivals are bypassing the costly and often inaccurate approach of asking for data directly from their insureds, and instead are tapping into new sources of data. Some do this well, others less so. We’re already seeing this across many consumer applications. They lower the sales barrier by suggesting what you need, rather than asking you what you want. Netflix knows the films you like to watch, Amazon recommends the books you should read, and soon you’ll be told the insurance you need for your home.

Health insurers such as Vitality are dramatically improving the relationship with their clients, and reducing loss costs, by rewarding people for sharing their exercise habits. Property insurers that make well-informed, granular decisions on how and what they are underwriting will grow their book of business and do so profitably. Those that do not will be undercharging for riskier business. Not a viable long-term strategy.

Fixing the missing data problem would be a good place to start.

We recently brought together 28 people from London Market insurers to talk about the challenges they have with getting decent quality data from their US counterparts. We were joined by a handful of the leading companies providing data and platforms to the US and UK markets. Before the meeting we’d conducted a brief survey to check in on the trends. A number of themes emerged but the two questions we kept coming back to were: 1) why is the data that is turning up in London so poor, and 2) what can be done about it?

This is not just a problem for London. If US coverholders, carriers or brokers are unable to provide quality data to London, they will increasingly find their insurance and reinsurance getting more expensive, if they can get it at all. Regulators around the world are demanding higher standards of data collection. The shift towards insurers selling direct to consumers is gathering momentum. Those that add frictional costs and inefficiencies will be squeezed out. This is not new. Rapid systemic changes have been happening since the start of the industrial revolution. In 1830 the first passenger rail service in the world opened between Liverpool and Manchester in the north west of England. Within three months over half of the 26 stagecoaches operating on that route had gone out of business.

Is the data improving?

Seventy percent of those surveyed believed that the data they are receiving from their US partners has improved little, if at all, in the last five years. Yet the availability of information on properties had improved dramatically in the preceding 15 years. Why? Because of the widespread adoption of catastrophe models over that period. Models are created from large amounts of hazard and insurance loss data. Analyses of insured properties provide actionable insights and common views of risk beyond what can be achieved with conventional actuarial techniques. These analytics have become the currency of risk, shared across the market between insurers, brokers and reinsurers. The adoption of catastrophe models accelerated after Hurricane Andrew in 1992. Regulators and rating agencies demanded better ways to measure low frequency, high severity events. Insurers quickly realised that the models, and the reinsurers that used the models, penalised poor quality data by charging higher prices.

By the turn of the century, information on street address and construction type, two of the most significant determinants of a building's vulnerability to wind and shake, was being provided for both residential and commercial properties insured against catastrophic perils in the US and Europe. With just two major model vendors, RMS and AIR Worldwide, the industry only had to deal with two formats. Exchanging data by email, FTP transfer or CD became the norm.

Then little else changed for most of the 21st century. Information about a building's fire resistance is still limited to surveys, and even then only for high-value buildings, with the results usually buried deep in paper files. Valuation data on the cost of the rebuild, another major factor in determining the potential scale of loss and what is paid to the claimant, is at the discretion of the insured. It's often inaccurate and biased towards low values.

If data and analytics are at the heart of Insurtech, why does access to data appear to have stalled in the property market?

How does the quality of data compare?

We dug a bit deeper with our group to discover what types of problems they are seeing. In some locations, such as those close to the coast, information on construction has improved in the last decade but elsewhere things are moving more slowly.

Data formats for property are acceptable for standard, homogeneous property portfolios being reinsured, thanks to the dominance of two catastrophe modelling companies. For non-admitted business entering the Excess and Surplus market, or for high-value, complex locations, there are still no widely adopted standards for insured properties coming into the London market, despite the efforts of industry bodies such as Acord.

Data is still frequently re-keyed multiple times into different systems. Spreadsheets continue to be the preferred medium of exchange, and there is no consistency between coverholders. It is often more convenient for intermediaries to aggregate and simplify what may once have been detailed data as it moves between the multiple parties involved. At other times agents simply don't want to share their clients' information. Street addresses become zip codes, and detailed construction descriptions default to simple descriptors such as 'masonry'.
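To make that degradation concrete, here is a minimal Python sketch, with entirely hypothetical field names and values, of the kind of lossy aggregation described above: a detailed location record is squeezed into an intermediary's simpler spreadsheet template, and the street address and construction detail disappear on the way to London.

```python
# Hypothetical example of lossy aggregation between intermediaries.
detailed_record = {
    "street_address": "114 Ocean Drive, Galveston, TX 77550",
    "construction": "reinforced masonry, hip roof, hurricane shutters",
    "year_built": 2009,
    "replacement_value": 1_450_000,
}

def degrade(record: dict) -> dict:
    """Mimic an aggregation step: keep only what fits a simpler template."""
    return {
        # The full street address collapses to a ZIP code...
        "zip": record["street_address"].rsplit(" ", 1)[-1],
        # ...and a rich construction description defaults to 'masonry'.
        "construction": "masonry",
        "replacement_value": record["replacement_value"],
    }

print(degrade(detailed_record))
# {'zip': '77550', 'construction': 'masonry', 'replacement_value': 1450000}
```

The underwriter at the end of the chain sees only the second record, and nothing in it reveals how much information has been thrown away.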

Such data chaos may be about to change. The huge inefficiency of multiple parties cleaning up and formatting the same data has been recognised for years. The London Market Group (LMG), a powerful, well-supported body representing Lloyd's and the London company market, has committed substantial funds to build a new 'Target Operating Model' (TOM) for London. This year the LMG commissioned London company Charles Taylor to provide a central service to standardise and centralise the cleaning up of the delegated authority data that moves across the market. Much of it is property data. Once the project is complete, around 60 Lloyd's managing agents, 250 brokers and over 3,500 global coverholders are expected to finally have access to data in a standard format. This should eliminate the problem of multiple companies doing the same tasks to clean and re-enter data, but it still does nothing to fill in the gaps where critical information is missing.

Valuation data is still the problem

Information on property rebuild cost that comes into London is considered “terrible” by 25% of those we spoke to and “poor quality” by 50%.

Todd Rissel, the CEO of e2Value, was co-hosting our event. His company is the third-largest provider of valuation data in the US. Today over 400 companies are using e2Value information to help their policyholders get accurate assessments of replacement costs after a loss. Todd started the company 20 years ago, having begun his career as a building surveyor for Chubb.

The lack of quality valuation data coming into London doesn't surprise Todd. He's proud of his company's 98% success rate in accurately predicting rebuild costs, but only a few states, such as California, impose standards on the valuation methods being used. Even where high-quality information is available, the motivation may not be there to use it. People choose their property insurance mostly on price. It's not unknown for some insurers to recommend the lowest replacement value rather than the most accurate one in order to reduce the premium, and the discrepancy gets worse over time.
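A quick worked example, using assumed figures, shows why an initial undervaluation "gets worse over time": if the insured value is never indexed while construction costs inflate, the shortfall compounds every year.

```python
# Assumed figures: a property undervalued at inception, never re-indexed.
true_cost = 400_000      # actual rebuild cost at inception
insured_value = 320_000  # lowest quoted valuation (80% of true cost)
inflation = 0.05         # assumed annual construction-cost inflation

for year in range(1, 6):
    true_cost *= 1 + inflation  # rebuild cost inflates each year...
    # ...while the insured value stays flat, so the gap compounds
    shortfall = true_cost - insured_value
    print(f"year {year}: shortfall ${shortfall:,.0f} "
          f"({shortfall / true_cost:.0%} of rebuild cost)")
```

On these assumptions, a 20% shortfall at inception grows to roughly 37% of the rebuild cost within five years - the difference between an uncomfortable claim and a severely underinsured one.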

Have the losses of 2017 changed how data is being reported?

Major catastrophe events have a habit of exposing the properties where data is of poor quality or wrong. Companies insuring such properties tend to suffer disproportionately higher losses. No companies failed after the storms and wildfires of 2017, but more than one senior industry executive has felt the heat for unexpectedly high losses.

Typically after an event, the market 'hardens' (rates get more expensive) and insurers and reinsurers are able to demand higher quality data. 2017 saw the biggest insurance losses for a decade in the US from storms and wildfire. Rates haven't moved.

Insurers and reinsurers have little influence over improving the data they receive.

Over two thirds of people felt that their coverholders, and in some cases insurers, don't see the need to collect the necessary data. Even if they do understand the importance and value of the data, they are often unable to enter it into their underwriting systems and pipe it digitally direct to London. Straight-through processing (the transfer of information from the agent's desk to the underwriter in London with no manual intervention) is starting to happen, but only the largest or most enlightened coverholders are willing or able to integrate with the systems their carriers are using.

We were joined at our event by Jake Hampton, CEO of Virtual MGA. Jake has been successful in hooking up a handful of companies in London with agents in the US. This creates a far stronger and faster means to define underwriting rules, share data and assess key information such as valuations. Users of Virtual MGA are able to review the e2Value data to get a second opinion on information submitted by the agent. If there is a discrepancy between the third-party data that e2Value (or others) provide and what the agent provides, the underwriter can either change the replacement value or accept what the agent has submitted. A further benefit of the dynamic relationship between agent and underwriter is the removal of the pain of monthly reconciliation. Creating separate updated records of what has been written in the month, known as 'bordereaux', is no longer a manual task; they can be generated automatically from the system.
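As an illustration only (this is not Virtual MGA's actual logic), a simple tolerance rule of the kind an underwriting system might apply when a third-party valuation and an agent's submission disagree could look like this:

```python
# Sketch of a valuation discrepancy check; thresholds are assumptions.
def review_valuation(agent_value: float, reference_value: float,
                     tolerance: float = 0.10) -> float:
    """Accept the agent's value if it is within `tolerance` of the
    third-party reference; otherwise flag it and use the reference.
    (In practice the underwriter makes the final call; here the
    reference simply wins.)"""
    if abs(agent_value - reference_value) / reference_value <= tolerance:
        return agent_value  # close enough: accept as submitted
    print(f"flagged: agent ${agent_value:,.0f} vs "
          f"reference ${reference_value:,.0f}")
    return reference_value

print(review_valuation(agent_value=250_000, reference_value=310_000))
```

The point is less the rule itself than where it runs: applied at the point of binding, a check like this catches undervaluation before the risk is written, rather than at month-end reconciliation.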

Even though e2Value is generating very high success rates for the accuracy of its valuation data, there are times when the underwriter may want to double-check the information with the original insured. In the past this required a lengthy back and forth discussion over email between the agent and the insured. JMI Reports is one of the leading providers of surveys in the US. Tim McKendry, CEO of JMI, has partnered with e2Value to create an app that provides near real-time answers to an underwriter's questions. If there is a query, the homeowner can be contacted by the insurer directly and asked to photograph key details in their home to clarify construction details. The photographs go directly to the agent and underwriter, enabling an accurate and fast assessment of rebuild value.

What about Insurtech?

We’ve been hearing a lot in the last few years about how satellites and drones can improve the resolution of data that is available to insurers. But just how good is this data? If insurers in London are struggling to get data direct from their clients, can they too access independent sources of data directly? And does the price charged for this data reflect the value an insurer in London can get from it?

Recent entrants, such as Cape Analytics, have also attracted significant amounts of funding. They are expanding the areas of the US where they provide property information derived from satellite imagery. EagleView has been providing photographs taken from its own aircraft for almost 20 years. CEO Rishi Daga announced earlier this year that their photographs are now sixteen times higher resolution than the best previously available. If you want to know which of your clients has a Weber barbeque in their back yard, EagleView can tell you.

Forbes McKenzie, from McKenzie Insurance Services, knows the London market well. He has been providing satellite data to Lloyd's of London to assist in claims assessment for a couple of years. Forbes started his career in military intelligence. "The value of information is not just about how accurate it is, but how quickly it can get to the end user," says Forbes.

The challenges with data don't just exist externally. For many insurance companies, the left hand of claims is often disconnected from the right hand of underwriting. Companies find it hard to reconcile the losses they have had with what they are being asked to insure. It's the curse of inconsistent formats. Claims data lives in one system, underwriting data in another. It's technically feasible to link the information through common factors such as the address of the location, but it's rarely cost-effective or practical to do this across a whole book of business.
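A crude Python sketch of that address-linkage idea shows both why it is feasible and why it is fragile. Real-world matching needs geocoding and fuzzy comparison to cope with inconsistent formats, which is where the cost comes in; the naive key below only works when the two systems differ in punctuation and case.

```python
# Naive address-based record linkage between claims and underwriting data.
import re

def address_key(addr: str) -> str:
    """Collapse an address to lowercase alphanumerics for a crude join key."""
    return re.sub(r"[^a-z0-9]", "", addr.lower())

# Hypothetical records; TIV = total insured value.
underwriting = {address_key("114 Ocean Dr, Galveston TX 77550"): {"tiv": 1_450_000}}
claims = [("114 ocean dr galveston tx 77550", 85_000)]

for addr, paid in claims:
    match = underwriting.get(address_key(addr))
    if match:
        print(f"linked claim of ${paid:,} to location with TIV ${match['tiv']:,}")
```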

One of the barriers for underwriters in London in accessing better data is that the companies supplying the data, both new and old, don't always understand how the London market works. Most underwriters are taking small shares of large volumes of individual properties. Each location is a tiny fraction of the total exposure and an even smaller fraction of the incoming premium. Buying data at a cost per location, as a US domestic insurer does, is not economically viable.
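Some back-of-envelope arithmetic, with assumed figures, makes the economics plain:

```python
# Assumed figures for a London syndicate taking a line on a large binder.
locations = 10_000             # locations on the binder
gross_premium = 2_000_000      # total premium for the binder
share = 0.10                   # the syndicate's line on the risk
data_cost_per_location = 5.0   # assumed per-lookup price

premium_per_location = gross_premium * share / locations
data_cost = data_cost_per_location * locations

print(f"premium earned per location: ${premium_per_location:.2f}")
print(f"data bill for the schedule:  ${data_cost:,.0f}")
print(f"data cost as share of the syndicate's premium: "
      f"{data_cost / (gross_premium * share):.0%}")
```

On these numbers the syndicate earns $20 of premium per location, and a $5-per-lookup data bill would swallow a quarter of its entire premium. Pricing that works for a US domestic insurer writing the whole risk simply doesn't translate.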

Price must equal value

Last month the chief digital officer of a London syndicate travelled to Insurtech Connect in Las Vegas to meet the companies offering exposure data. He is running a proof of concept against a set of standard criteria, looking for new ways to identify and price US properties. He's already seeing a wide range of approaches to charging. UK-based data providers, or US vendors with local knowledge of how the information is being used, tend to be more accommodating to the needs of London insurers. There is a large potential market for enhanced US property data in London, but the cost needs to reflect the value.

Todd Rissel may have started his career as a surveyor and now be running a long-established company, but he is not shy about working with emerging companies and doesn't see them as competition. He has partnerships with data providers such as drone company Betterview to complement and enhance the e2Value data. It is by creating distribution partnerships with some of the newest MGAs and insurers, including market leaders such as Slice and technology providers like Virtual MGA, that e2Value is able to deliver its valuation data to over a third of the companies writing US business.

Looking ahead

It is widely recognised that the London Market needs to find ways to meaningfully reduce the cost of doing business. The multiple organisations through which insurance passes, whether brokers, third-party administrators or others, increase the friction and hence the cost. Nonetheless, once the risks do find their way to the underwriters, there is a strong desire to find a way to place the business. Short decision chains and a market traditionally characterised by underwriting innovation and entrepreneurial leaders mean that London should continue to have a future as the market for specialty property insurance. It's also a market that prefers to 'buy' rather than 'build'. London insurers are often amongst the first to try out new technology, and the market welcomes partnerships. The upcoming generation of underwriters understands the value of data and analytics.

It cannot, however, survive in a vacuum. Recent history has shown that companies with a willingness to write property risks with poor data get hit by some nasty, occasionally fatal, surprises after major losses. With the increasing focus from the regulator and Lloyd's own requirements, casual approaches to risk management are no longer tolerated. Startups with large war chests from both the US and Asia see an opportunity to displace London.

Despite the fears that data quality is not what it needs to be, our representatives from the London market are positive about the future. Many of them are looking for ways to create stronger links with coverholders in the US. Technology is recognised as the answer and companies are willing to invest to support their partners and increase efficiency in the future. The awareness of new perils such as wildfire and the opening up of the market for flood insurance is creating new opportunities.

Our recent workshop was the first of what we expect to be more regular engagements between underwriters and the providers of property information. If you are interested in learning more about how you can get involved, whether as an underwriter, MGA, data provider, broker or other interested party, let me know.

