Property insurers meticulously manage their aggregate exposures to mitigate risk and ensure financial stability. They methodically assess and monitor the total amount of allocated insurance capacity across all policies to prevent exceeding their ability to pay out claims in the event of widespread disasters or a series of smaller losses. Aggregate exposure management is NOT a probabilistic exercise, but literally a hypothetical "if we had to pay out full limits to all policyholders, what would that amount be?" The advantage of this is that since it isn't probabilistic, there are no forecast or modeling errors to deal with (what if the model is very wrong?). The disadvantage is also that this approach isn't probabilistic! We know that not all policies will experience a full-limit loss from a single event. In this regard, aggregate management can be overly punitive to some companies versus others. Let's examine how.
Current Way of Managing Aggregate Exposure
As an example of the current methodology, we'll use Florida carriers that write homeowners insurance. There are 67 counties in Florida. Let's look at two carriers that each write a single $250,000 Cov A policy in every county in Florida.
At first glance, the aggregate capacity utilized is the same between both companies.
67 counties × $250,000 = $16,750,000 of capacity utilized
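That figure is nothing more than a sum of full policy limits. A minimal sketch in Python (the variable names are ours, purely illustrative):

```python
# Traditional aggregate capacity: the sum of full policy limits,
# ignoring deductibles, hazard, and cross-location correlation entirely.
NUM_FL_COUNTIES = 67
COV_A_LIMIT = 250_000  # one Cov A policy per county

aggregate_capacity = NUM_FL_COUNTIES * COV_A_LIMIT
print(f"${aggregate_capacity:,}")  # -> $16,750,000
```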
Now, if we introduce a small variable to this, you can see that using this method of capacity utilization can be suboptimal for one company versus another.
Let's assume that one carrier has a mandatory 5% wind deductible and the other carrier in our example has a 10% wind deductible under the same scenario as the prior example. If each carrier still wrote one homeowners policy in each of the 67 counties in Florida, technically, both of their aggregate exposures are still $16,750,000. However, these companies clearly do not utilize capacity similarly. All else being equal, the carrier with the higher mandatory deductible has a lower likelihood of blowing through its aggregate capacity. Yet when these firms report their aggregate exposure to their reinsurers or other stakeholders, they will look identical. Under scenarios where capacity is allocated based on aggregate reporting, the company with the higher mandatory deductible is actually punished for its more conservative underwriting approach. There is a better way.
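To make the distortion concrete, here is a hedged sketch. It assumes the wind deductible is a straight percentage of the Cov A limit, so a carrier's true worst-case payout per policy is limit × (1 − deductible); `reported_aggregate` and `max_net_payout` are illustrative names, not industry functions:

```python
COV_A = 250_000
COUNTIES = 67

def reported_aggregate(num_policies: int, limit: int) -> int:
    """Traditional reporting: full limits summed, deductibles ignored."""
    return num_policies * limit

def max_net_payout(limit: int, wind_deductible_pct: float) -> float:
    """Worst-case payout on one policy, assuming the wind deductible
    is a percentage of the Cov A limit (an illustrative simplification)."""
    return limit * (1 - wind_deductible_pct)

for ded in (0.05, 0.10):
    agg = reported_aggregate(COUNTIES, COV_A)
    worst = COUNTIES * max_net_payout(COV_A, ded)
    print(f"{ded:.0%} deductible: reported ${agg:,}, worst case ${worst:,.0f}")
```

Both carriers report $16,750,000, yet the 10%-deductible book can never pay out as much as the 5%-deductible book.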
A Better Way To Manage Aggregate Exposure
Just because a policy is issued does not necessarily mean that capacity has been allocated. If we issue a $250,000 policy for earthquake coverage on a Florida property, have we really allocated $250,000 in aggregate capacity? We would argue not.
A better way to handle aggregate exposure management is to weigh modeled loss output against the aggregate issued. Typically, carriers (and their stakeholders) examine the "1-in-100-year" loss (or some variation of it). This is an extremely valuable exercise, but for aggregate exposure management, it means the carrier is swallowing the modeled loss error, which exists, is real, and, depending on the peril, could be very large. Instead, we suggest an approach that eliminates some, if not all, of the modeled loss error while still recognizing a proper amount of hazard exposure against the aggregate.
Eliminate The Modeled Portfolio Benefit
As the size of a portfolio grows, the Probable Maximum Loss (PML) gets smaller and smaller relative to the aggregate exposure deployed for the portfolio. Back to our original example: if a carrier writes ONE $250,000 policy in each of the 67 Florida counties, the aggregate deployed is $16,750,000. The likelihood of a full loss at every location from a single event (or even in a single aggregate year) is almost zero. Not all 67 counties will face the maximum sustained winds of a hurricane, and not all locations will suffer a total loss. This is the value of a catastrophe model: it can estimate the correlation of losses across a portfolio and compute a probable maximum loss for some return period (the most common being 1-in-100 years). But, as mentioned, no model is perfect, and because of model error, a catastrophe model won't get this entirely correct. In fact, it could be significantly off.
The approach we suggest takes a middle ground to reduce, and potentially remove, the modeled loss error for the portfolio. Instead of using the catastrophe model to estimate the probable maximum loss for the portfolio as a whole, we instruct the model to provide the probable maximum loss for each location; rather than having the model correlate losses across locations, we simply sum the individual location PMLs. Companies that want to stay conservative and remain more faithful to the traditional aggregate exposure model can simply pin each location's PML to the most extreme scenario, which, for most model vendors, is the 1-in-10,000-year event.
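As a sketch of this middle ground, the snippet below sums per-location PMLs with no cross-location correlation. `location_pml` and `hazard_factor` are hypothetical placeholders for whatever a real catastrophe model would return at the chosen return period; the numbers are made up for illustration:

```python
def location_pml(limit: float, hazard_factor: float) -> float:
    """Hypothetical per-location PML: the fraction of the limit the model
    expects to lose at the chosen return period, capped at the full limit.
    Pinning hazard_factor to the 1-in-10,000-year scenario pushes this
    toward the full limit, mirroring the traditional aggregate."""
    return min(limit, limit * hazard_factor)

def capacity_deployed(policies: list[dict]) -> float:
    """Sum the individual location PMLs -- no cross-location correlation,
    so the portfolio-level piece of the model error drops out."""
    return sum(location_pml(p["limit"], p["hazard_factor"]) for p in policies)

# Illustrative book: 67 policies with a made-up hazard factor.
book = [{"limit": 250_000, "hazard_factor": 0.6} for _ in range(67)]
print(f"${capacity_deployed(book):,.0f} vs. $16,750,000 of raw aggregate")
```

A higher-deductible or excess book would carry smaller hazard factors and therefore show less capacity deployed, which is exactly the differentiation the traditional report misses.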
While this method may not completely eliminate model loss error, it does give a more valid view of the potential for aggregate losses to exceed the forecasted PML, and it differentiates between the two companies we compared earlier. Carriers with higher mandatory deductibles will show lower aggregate capacity deployed. Carriers that write excess business will not have their aggregate capacity allocation lumped in with primary carriers. Carriers that write smaller primary limits on very large property exposures will be properly assessed for the capacity actually utilized, versus a similar firm that writes limits up to the property's full value.
Summary
Managing aggregate exposures is a fundamental exercise of property insurance operations, crucial for maintaining financial stability and ensuring ongoing coverage for policyholders. However, traditional methods of reporting aggregate exposures can be overly stringent and fail to accurately reflect true risk and capacity utilization. Insurers can manage their aggregate exposures more effectively by adopting a more nuanced approach that weighs modeled loss output against actual hazard exposure. This approach addresses the discrepancies caused by model errors and allows for a more accurate assessment of risk, particularly for companies with varying deductible structures and coverage types. Ultimately, this refined strategy promotes a fairer and more sustainable allocation of capacity, benefiting both insurers and their stakeholders.