Today we have the third part of our series on Catastrophe Modeling. Click here for part one and here for part two.
“If you challenge conventional wisdom, you will find ways to do things much better than they are currently done.”
Bill James (author of the Baseball Abstracts and Godfather of “Moneyball”)
In this, my final article on CAT models, I wish to spend some time talking about the output of CAT models: how they are used, interpreted and, most importantly, misinterpreted. I would also like to show the reader where this field is headed, and I have added an Additional Resources section for the reader’s further edification.
First, to recap, CAT models are computerized simulations that use scientifically derived physical models to estimate where catastrophic events will occur, how severe they will be, how much damage they will cause to properties and what the financial consequences will be to all the stakeholders in a property insurance policy. The big idea is that these simulations do NOT attempt to predict the future. These simulations are realistic representations of any 12-month period. Another way of thinking about this is that a CAT model examines what could happen in the next 12 months, thousands and thousands of times, not what could happen a thousand years from now.
The output from a CAT model looks something like this:
This goes on for thousands and sometimes millions of cycles. The more simulations you run, the more likely you are to encounter a black swan event: the ultra-rare event whose impact is enormously consequential. Sometimes our scientific understanding of risk is limited, and events such as 9/11 or the 2011 Japanese earthquake and tsunami shock everyone.
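Under the hood, each simulated year is a frequency–severity draw. Here is a minimal sketch of how such a year loss table might be generated; the Poisson event rate and lognormal severity parameters below are invented for illustration and are not any vendor’s actual model:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

N_YEARS = 10_000     # number of simulated 12-month periods
EVENT_RATE = 1.7     # hypothetical average number of catastrophic events per year

# For each simulated year, draw how many events occur (Poisson frequency)
# and a loss for each event (lognormal severity), then total the year.
annual_losses = np.empty(N_YEARS)
for year in range(N_YEARS):
    n_events = rng.poisson(EVENT_RATE)
    event_losses = rng.lognormal(mean=13.0, sigma=1.8, size=n_events)
    annual_losses[year] = event_losses.sum()
```

Each entry of `annual_losses` is one plausible version of “the next 12 months,” which is exactly the repeated-trials idea described above.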
Of course, running more simulations has its tradeoffs. Will looking at a million years of simulations really provide that much more information than examining 10,000 years? How much longer will it take to run those simulations (time is money, right?)? Do we have the computing power to handle and manage such large quantities of data?
Best practices, developed over a generation of CAT model usage, generally recommend that somewhere between 10,000 and 100,000 simulation years is adequate, depending on the peril, to manage the pricing, underwriting and risk management of a typical property insurer. For tornado, hail and winter storm, 10,000 simulation years should suffice. For tropical cyclones, 50,000 simulation years should capture the extremely rare yet consequential events. For earthquakes, we are unlikely to get much additional information from running more than 100,000 simulation years.
Analyzing simulation results on a year-by-year basis has many advantages for insurers. Two of the most commonly used statistical results from these simulations are the Average Annual Loss and the Probable Maximum Loss.
Average Annual Loss
The Average Annual Loss is the simple average of all simulation iterations. Also known as AAL, pure premium or expected loss, this metric is the easiest to comprehend and easiest to misuse.
Consider the output of a 10-year simulation for the following two policies on a $100,000 building:
If we just compare the AALs of these two policies, we would be deceived into thinking that they are equivalent risks. Yet one policy’s simulation shows consistent, predictable losses while the other is much more volatile. The policy in the left-hand table will require more premium to support its risk than the one in the right-hand table. The problem with simple averages is that they mask how they were constructed.

CAT models do calculate standard deviations, yet from my experience these are rarely used. They should be, because they add information about risk that is missing from the AAL alone. In our example above, the left-hand table has a standard deviation of $30,000 while the right-hand table has a standard deviation of $0. This confirms the suspicion that the first policy is riskier than the second.

I suggest that if you are going to use AALs in decision making, it is wise to factor in the standard deviation somehow. I have seen pricing formulas that add a percentage of the standard deviation to the AAL as a proxy for future claims. Not only does this add a margin of safety to the premium should the model misrepresent the risk, but it also distinguishes between risks with similar AALs but dissimilar loss severities, and compensates the insurer correctly for each.
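Since the original tables were shown as images, here is a small sketch of the same idea. The two loss streams below are assumptions chosen to reproduce the $30,000 and $0 standard deviations discussed above, and the 20% volatility load is purely illustrative, not a real rating formula:

```python
import numpy as np

# Ten simulated years for each policy on a $100,000 building.
policy_a = np.array([100_000, 0, 0, 0, 0, 0, 0, 0, 0, 0])  # one total loss
policy_b = np.full(10, 10_000)                              # steady $10k/year

aal_a, aal_b = policy_a.mean(), policy_b.mean()
sd_a, sd_b = policy_a.std(), policy_b.std()   # population standard deviation

print(aal_a, aal_b)   # 10000.0 10000.0 -- identical AALs
print(sd_a, sd_b)     # 30000.0 0.0     -- very different volatility

# One pricing approach from the text: load the AAL with a share of
# the standard deviation so the volatile policy pays more premium.
premium_a = aal_a + 0.20 * sd_a
premium_b = aal_b + 0.20 * sd_b
```

The two policies have the same AAL, but the loaded premium for the volatile policy comes out roughly 60% higher, which is the point of the standard-deviation loading.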
A better metric to use when trying to understand loss severities is the Probable Maximum Loss.
The Probable Maximum Loss
The Probable Maximum Loss is a term long used in the storied history of insurance, but ironically one that better fits the modern era of modeling. Prior to modeling, a Probable Maximum Loss was an underwriter’s best guess at the largest loss that could reasonably occur. For example, an underwriter insuring a $1 billion property portfolio up and down the east coast of the US instinctively knows that there would never be a single, complete $1 billion loss to this portfolio. So what should this underwriter expect the largest reasonably possible loss to be? Without an appropriate model, that is an extremely difficult and consequential decision to make. The implications affect capital consumption and reinsurance buying criteria, which trickle down into the actual pricing for the policy. These PMLs were often decided upon based on a strange combination of rigorous actuarial analysis and gut feel.
On the other hand, when describing modeling output, the Probable Maximum Loss, also known as the PML or exceedance probability (EP), is the probability or likelihood of exceeding a specified loss amount.
For example, a property insurer covering the $1 billion portfolio mentioned above, may want to know what the likelihood of a loss of $100 million or greater is. Or, a rating agency or regulator may want to know what the PML is at a 1% probability. The simulation approach used by CAT models can provide the necessary output to calculate the probabilities of all loss levels. Here is a common table used to summarize PMLs:
So what does this table tell us? For starters, the “1 in 100 year storm” is likely to cause a $47 million loss OR GREATER. A rating agency that uses the 0.4% PML to calculate a risk score would plug $59 million into that calculation. If your CAT treaty has a $50 million attachment, then roughly a 1% PML will begin ceding losses to that treaty. And so on.
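The figures in a PML table come straight from the simulated year-by-year losses: sort them and read off quantiles. A minimal sketch, where the lognormal loss distribution and its parameters are invented for illustration (the dollar amounts will not match the table above):

```python
import numpy as np

rng = np.random.default_rng(seed=7)

# Hypothetical annual losses in $ millions from a 10,000-year simulation.
annual_losses = rng.lognormal(mean=1.5, sigma=1.2, size=10_000)

# The 1-in-N year PML is simply the (1 - 1/N) quantile of annual losses.
for return_period in (10, 50, 100, 250):
    ep = 1.0 / return_period                      # exceedance probability
    pml = np.quantile(annual_losses, 1.0 - ep)
    print(f"1 in {return_period:>3} ({ep:.2%} EP): ${pml:,.1f}M or greater")
```

Note that each printed figure is a threshold that is exceeded with the stated probability, which is why the “or greater” wording belongs on every row.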
Additionally, there are some key ideas to keep in mind when using this table:
- “Or Greater” – you should get used to, and insist on, the words “or greater” being added to every loss quoted from this table. If asked, “What is my 1 in 100 year PML?”, the answer from this table would be $47 million OR GREATER!
- A 1 in 100 year loss does not literally mean that this loss (or greater) will happen once every 100 years. This is why we should always add the exceedance probability percentage to the table. What the 1 in 100 year loss means is that there is a 1% chance (1 year divided by 100 years) that such a loss could occur in any 12-month period. It could happen today, tomorrow or 50 years from now; in fact, over a full century the chance of seeing at least one such loss is 1 − 0.99^100, or roughly 63%, not certainty. And if it does happen tomorrow, the probability still exists that it can happen again in the next 12 months!
- These are probabilities of losses, not of events.
- PML tables can be created for entire corporate structures, lines of business, geographic regions, accounts, policies, and even individual properties.
- PMLs are not additive. The 1 in 100 year loss of two companies combined will be less than the sum of their individual 1 in 100 year losses. This is another benefit of CAT modeling: it can effectively capture and quantify the benefits of diversification.
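The non-additivity point can be demonstrated directly from simulation output. A sketch with two hypothetical, independent portfolios whose lognormal annual losses are an assumption made purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(seed=11)
N = 200_000  # simulated years

# Two independent portfolios with heavy-tailed annual losses.
losses_a = rng.lognormal(mean=2.0, sigma=1.5, size=N)
losses_b = rng.lognormal(mean=2.0, sigma=1.5, size=N)

def pml(losses, return_period=100):
    """1-in-`return_period` PML = the matching quantile of annual losses."""
    return np.quantile(losses, 1.0 - 1.0 / return_period)

# Model the combined book year by year, then take the quantile --
# NOT the quantile of each book added together.
combined = pml(losses_a + losses_b)
standalone = pml(losses_a) + pml(losses_b)

# Diversification benefit: the combined PML sits below the sum of parts.
print(combined < standalone)
```

The gap between `standalone` and `combined` is the diversification benefit the model is quantifying; its size depends on how correlated the two books are (here, not at all).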
Extending the Use of CAT Models
Hopefully by now you can appreciate what this innovative tool has done to the industry. It has moved us from a very “gut-feel” and arbitrary environment of pricing and risk management to something with a lot more rigor.
And this is precisely where I think the industry should continue to move towards. We should be replacing old arbitrary rules of thumb with the rigor that comes from modeling:
- Capacity should not be allocated based on arbitrary limits on aggregation (essentially the total loss assuming a 100% loss to each policy within a region such as Florida). Why not use modeling output to compute the likelihood of loss and allocate capacity based on those findings?
- Technical premiums should be built from the ground-up based on loss estimates from CAT models for each individual risk.
- All property reinsurance should be purchased using input from CAT models. Using anything else is guessing.
- Predictive modeling and CAT modeling must merge into a seamless process for decision making across the entire company. For example, a predictive model can use output from a CAT model to help underwriters score which submissions they should prioritize, which accounts would benefit from a new product or which accounts are worthwhile to write even at a loss. On the claims side, predictive models can help claims departments prioritize adjuster appointments and settlement options to speed the process and satisfy customers more efficiently.
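As a sketch of the ground-up technical premium idea in the second bullet: start from the modeled loss stream for the individual risk, load for volatility, and gross up for expenses. The loss stream and both loading factors below are hypothetical assumptions, not a rating plan:

```python
import numpy as np

def technical_premium(annual_losses, vol_load=0.15, expense_ratio=0.30):
    """AAL plus a volatility load, grossed up for expenses and profit."""
    aal = np.mean(annual_losses)          # expected loss from the model
    vol = np.std(annual_losses)           # volatility of the loss stream
    risk_premium = aal + vol_load * vol   # margin for model misestimation
    return risk_premium / (1.0 - expense_ratio)

# Hypothetical 10-year modeled loss stream for a single risk.
losses = np.array([0, 0, 5_000, 0, 120_000, 0, 0, 2_500, 0, 0])
print(technical_premium(losses))
```

Because the volatility load is driven by the modeled loss stream itself, two risks with the same AAL but different loss severities would come out at different technical premiums, which is the whole point of building price from the model up.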
Future Innovation in CAT Models
It has been 25 years since the first CAT models were commercially available. We have come a long way but in many respects we are still far away from where we need to be.
This CAT article is the longest article we’ve ever posted, so here’s a cute cat photo to reward you for making it this far!
Problems still abound. CAT modeling has proven its worth by giving the necessary tools to (re)insurers to manage their businesses. They are leaps and bounds better than the rules of thumb we used prior to their invention.
But the models themselves are still unproven in many cases. Models continue to harbor a lot of uncertainty about critical questions that loom over the industry. We still don’t know whether a CAT 5 hurricane could make landfall in NYC. We do know that the losses would be staggering and consequential to any insurer writing significant property cover there. Nor do we have enough information in CAT models to feel fully confident that they can estimate losses from a severe event. How much claims data from severe earthquakes could we possibly have to be confident that a model can estimate the loss to a large apartment complex in Los Angeles? Not much! Many of these issues may require decades, and more events, to sort out.
Data quality is still messy. With all the available property databases and geospatial tools, the exposure data we are passing to the models is still of poor quality. There is considerable room for improvement. If we can’t trust the data we are feeding into models, then we can’t trust the data that comes out of them either. Garbage in – garbage out!
We are finally starting to see the first generation of inland flood models enter the market. Like the other models, there are likely many issues that we will encounter with them, yet, this is an incredible leap forward in technology. Flood maps are unreliable, and the use of a CAT model to estimate flood risk will greatly improve access to flood products for property owners as new insurers feel confident to use these tools to offer their capital.
Finally, CAT modeling output can be improved so that it provides additional insight. AALs and PMLs were a nice start but often lead to more questions than they answer. Karen Clark & Co (Karen founded the first commercially available CAT modeling firm, AIR Worldwide) has developed the concept of the Characteristic Event. This concept simplifies the PML by focusing on a single return period (such as the 1 in 100 year event) and gives insurers insight into the actual events that could reasonably occur. Her firm is focused on translating information about loss-causing events to decision makers in an event-based format. Events are easier to understand than probabilities, and this could be a useful step toward better decision making. Characteristic Events also help answer the “or greater” question hanging over PMLs: what does “or greater” actually look like in terms of maximum loss?
We are in a new era of insurance. Technology and innovation are slowly creeping into our day to day activities. Does all of this mean that each of us must become part time actuaries, scientists, data analysts, underwriters, claims professionals and sales people, all wrapped into one? Let us hope so!
CAT Model Vendors
Risk Management Solutions (RMS)
CAT Modeling Slide Deck From Casualty Actuarial Society
Comprehensive Paper From Karen Clark
About Nick Lamparelli
Nick Lamparelli is a 20+ year veteran of the insurance wars, with a unique vantage point on the insurance industry. From selling home and auto insurance, to helping companies with commercial insurance, to underwriting for an excess & surplus lines wholesaler, to catastrophe modeling, Nick has wide experience in the industry. Over the past 10 years, Nick has focused on the insurance analytics of natural catastrophes and big data. Nick serves as our Chief Evangelist.