Data science meets actuarial science. Modeling casualty ILS transactions.
Jessica Schuler
Director, Ledger Investing | Casualty ILS | Strategy | Data & Analytics | Innovation | Actuary
The future of insurance risk modeling is, at minimum, a combination of data science and actuarial science. It would be hard to argue otherwise. I’m a member of the Casualty Actuarial Society and have always thought of myself as a non-traditional actuary. I absolutely love collaborating with other disciplines and the market to solve complex problems, such as casualty risk modeling, which continues to evolve. I’ve seen successful applications of data science and actuarial collaborations, and Ledger’s approach to providing a distribution of potential portfolio outcomes is definitely one of them.
Mark Shoun, Ledger’s Chief Data Scientist, has built the data science team from a team of one to 18 and growing. The team includes a strong bench of actuaries, including our Chief Actuary, Anand Khare. The team is fluent in actuarial science but uses cutting-edge statistical methods instead – methods that can be viewed as modern improvements on traditional actuarial techniques. They also deliver an unprecedented real-time data infrastructure that lets insurers and investors monitor a portfolio’s performance.
ILS investors are looking for consistency, objectivity, and transparency. It’s also key that execution times are quick, that processes are repeatable, and that both carriers and investors understand them. This means models that require little judgment from an analyst and can be easily backtested.
I asked Mark a series of questions. Here are his responses.
What’s needed to structure and price an ILS transaction?
First things first: investor-grade data. Transparency is important to investors, and when securitizing for capital management purposes, investors can be thought of as partners, much like how you think of your own investors or board of directors at your insurance company. Risk originators who provide adequate data are likely to attract investor capital faster and on more favorable terms.
Practically, this means a combination of experience-based data (e.g., triangles) and exposure-based data (e.g., policy limits, coverages, types of policyholders as appropriate, underwriting guidelines) to allow our team to provide an estimate of future loss ratios. Investors really value a robust set of historical data to gain comfort with the portfolio’s performance, including profitability and volatility. In the U.S., NAIC annual statement data, already available to Ledger and its investors, gives investors over 30 years of historical performance with confidence in data governance. When securitizing a global portfolio or a subset of annual statement product lines, Ledger collects that data subject to agreed-upon procedures.
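For readers less familiar with experience-based data, here is a minimal sketch of the loss triangle format, with entirely hypothetical numbers, showing the age-to-age development factors that experience-based methods typically start from:

```python
import pandas as pd

# Hypothetical cumulative paid-loss triangle (illustrative values only):
# rows are accident years, columns are development ages in months,
# and NaN cells have not yet been observed.
triangle = pd.DataFrame(
    {12: [4.1, 4.5, 4.8], 24: [6.9, 7.4, None], 36: [8.2, None, None]},
    index=[2021, 2022, 2023],
)
triangle.index.name = "accident_year"

# Age-to-age development factors: the raw input to most experience-based
# projection methods (e.g., chain ladder).
ata = triangle.shift(-1, axis=1) / triangle
print(ata)
```

Exposure-based data then helps assess whether those historical patterns should be expected to persist going forward.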
In addition to my team's quantitative assessment, Ledger’s underwriting team performs a qualitative assessment of the portfolio for ILS investors, and they have their own data requests for that process.
What output do you produce and how is it used?
Our standard output is a set of stochastic projected cashflows from a fully Bayesian modeling framework. That's a mouthful, so let's break it down.
A set of stochastic: Unlike traditional approaches, we're not satisfied with a point estimate, or even an actuarial "reasonable range". Our investors want to know exactly how much risk they're being asked to bear for the rate of return. Instead of providing a single number to represent projected performance, we simulate 10,000 equally likely scenarios. If, for example, 93% of scenarios imply a positive rate of return for the investor, then the likelihood of a positive return is 93%. (A stylized sketch of this calculation follows the breakdown below.)
projected cashflows: We can't get away with just providing projected loss ratios. The return that ILS investors see is a combination of underwriting profit and investment income (which depends on how long capital is tied up supporting the reserves), just as it is for insurers. So we need to model the timing of premium, claim payments, and changes to loss reserves. Each stochastic scenario we provide is a coherent set of quarterly underwriting cashflows for the subject book of business.
from a fully Bayesian modeling framework: We can't predict the future with perfect accuracy, and there are three major sources of uncertainty we deal with. Process uncertainty comes from the random nature of future events – just as with a fair coin, you can't be sure that if you flip it 100 times, it will land heads exactly 50 times. Parameter uncertainty comes from drawing conclusions from limited data – if we have a summary of 1,000 flips of a coin, we can get a pretty good idea of whether it's fair, but we can't be completely sure. Model uncertainty comes from not knowing which potential model of reality is right – maybe a coin flip depends on whether it starts heads-up, maybe it doesn't. Without getting into technicalities, our Bayesian process allows us to easily account for all three sources of uncertainty simultaneously when we generate our forecast simulations.
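To make this concrete, here is a deliberately simplified Monte Carlo sketch, with hypothetical numbers and the quarterly cashflow dimension collapsed to a single annual figure, of how parameter and process uncertainty stack when counting scenarios with a positive return:

```python
import numpy as np

rng = np.random.default_rng(42)
n_scenarios = 10_000

# Parameter uncertainty: each scenario draws its own expected loss ratio,
# standing in for a posterior distribution from a full Bayesian fit.
expected_lr = rng.normal(loc=0.62, scale=0.04, size=n_scenarios)

# Process uncertainty: the realized loss ratio scatters around the expected
# value even if the parameters were known exactly.
realized_lr = rng.normal(loc=expected_lr, scale=0.06)

# (Model uncertainty would enter by mixing draws from competing models.)

# Stylized annual investor return: premium less losses and a flat expense load.
expense_ratio = 0.30
returns = 1.0 - realized_lr - expense_ratio

print(f"P(positive return) is approximately {(returns > 0).mean():.1%}")
```

In a full cashflow model, each of the 10,000 draws would instead be an entire quarterly path of premium, paid losses, and reserve changes rather than a single annual number.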
These projections are handed off to the capital markets team to structure the contract in a way that is a win-win for all parties involved.
Today, when the data provided is good, our turnaround time on modeling a portfolio is quick: a day or two.
How do investors respond to Ledger’s quantitative assessment?
Ledger has a bench of investors who are comfortable with our modeling and process and have already invested in casualty insurance risks (approximately $1B in gross written premium to date). The standardized output is provided in a risk report that investors are familiar with and like. We have completed several large transactions with investors who engaged their own actuaries to validate Ledger’s work. Backtesting has been an important way to get users comfortable with our model.
What might the future of risk modeling look like on an insurer's portfolio for ILS investors?
I see a few themes emerging. First, as investors divert more assets to ILS, they will look to third parties to help them understand and price the risks they hold – not just on a per-transaction basis, but at the portfolio level. We spend a lot of time working on appropriately modeling the correlations within pools of ILS assets.
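As one illustration of the kind of machinery involved (a generic textbook construction with hypothetical numbers, not necessarily Ledger's production approach), a Gaussian copula can tie together the loss ratios of several books while preserving each book's own marginal distribution:

```python
import numpy as np
from scipy.stats import lognorm, norm

rng = np.random.default_rng(7)
n_scenarios, n_books = 10_000, 3

# Hypothetical correlation among the books' loss-ratio drivers.
corr = np.array([[1.0, 0.4, 0.2],
                 [0.4, 1.0, 0.3],
                 [0.2, 0.3, 1.0]])
L = np.linalg.cholesky(corr)

# Gaussian copula: correlated standard normals -> uniforms -> each book's
# own marginal loss-ratio distribution (lognormal here, purely illustrative).
z = rng.standard_normal((n_scenarios, n_books)) @ L.T
u = norm.cdf(z)
medians = np.array([0.60, 0.65, 0.58])
sigmas = np.array([0.12, 0.15, 0.10])
loss_ratios = lognorm.ppf(u, s=sigmas, scale=medians)

# Diversification: an equally weighted pool is less volatile than its parts.
pool_lr = loss_ratios.mean(axis=1)
print(loss_ratios.std(axis=0), pool_lr.std())
```

Getting that correlation structure right (across lines, accident years, and cedents) is where much of the real portfolio-level work lies.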
Second: reliable, transparent, and objective models will be key to growing the asset class to $1T. These will include models for the full spectrum of risks in casualty, non-cat property, life, and beyond. Some of these lines are newer, like cyber, and some are more well-established, like workers’ compensation. But we all know casualty risk is always evolving, as the underlying insureds are innovating every day. We continue to work on R&D projects, and always will, so we can model a broader variety of insurance risks.
Finally, we see our data science as a natural continuation from traditional actuarial practice. We have a healthy respect for the decades of institutional knowledge that the actuarial profession safeguards. The tools that we use are not the same as traditional actuarial models, but we view them as modern ways of fixing known deficiencies in traditional techniques, or of using cutting-edge statistics to obviate the need for actuarial judgment in run-of-the-mill circumstances. Many of the modeling problems we think about every day are also faced by insurers, and we are committed to sharing our techniques and insights with the actuarial community to elevate the standards of the profession as a whole.
Catch Mark and Anand's presentations at the upcoming Casualty Actuarial Society CRLS and Annual Meetings. You can also check out Mark's podcast here.