This article originally appeared here and we at Insurance Nerds think all of our readers would benefit from the insights in it. Thanks to Praedicat for permission to republish.
We recently surveyed some of our customers, and the answer that has stuck with me most is that 83% of underwriters agree it is their job to protect their company against the “next asbestos,” or what we call latency catastrophe. The picture that comes to mind is of the brave young Dutch boy with his finger in the dike, trying to stop the flood. I’m inspired by the image, but it isn’t the little boy’s job; it is the job of the Dutch flood control authorities. Like the Dutch, the P&C insurer needs a latency catastrophe flood control system: a set of underwriting guidelines developed by management (most effectively with our help, by the way). The underwriter’s job is to write profitable business within those guidelines.
The problem with the idea of the underwriter as the bulwark against the next asbestos is that underwriters operate as expected profit maximizers. They are incentivized to look at an account and, if the premium exceeds the expected loss, to write it. The technical underwriter spends time identifying the largest drivers of expected loss and engages the customer in a dialogue about those risks. No underwriter can identify and discuss every loss driver, so it is reasonable to focus on the risks with the highest expected losses.
The system falls apart with latency catastrophe. These risks typically carry relatively low expected losses for a given account, but potentially catastrophic losses across the portfolio, because they affect many accounts across multiple underwriters and even multiple policy years. This is not a problem that underwriters are trained to think about, and a different kind of data needs to be made available: data on low-probability latent risks for the customer, and portfolio accumulation data for the insurer.
The more I’ve thought about the underwriter protecting against the next asbestos, though, the more I am inspired by it. If the underwriter has real-time data on the accumulation of specific named risks in the company’s portfolio, she can look for those risks during account underwriting, and then check whether adding the account fits within established accumulation thresholds. If it does, write the account. If it doesn’t, seek an exclusion or some other means of managing the accumulation (more on those other means in a future blog). A bonus is that when the underwriter engages her customers in dialogue about these large-scale, low-probability risks, she will be working with them on the largest risks in society, equipped with the knowledge her customers need and most likely don’t have.
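To make the workflow concrete, here is a minimal sketch of the accumulation-threshold check described above. All names, risk labels, and dollar figures are hypothetical illustrations, not Praedicat's actual model, data, or API.

```python
# Hypothetical sketch: check whether adding an account keeps each named
# latent risk within management's portfolio accumulation thresholds.

def breached_thresholds(portfolio_accumulation, account_exposures, thresholds):
    """Return the named risks the new account would push past their
    accumulation thresholds; an empty list means the account fits."""
    breached = []
    for risk, exposure in account_exposures.items():
        projected = portfolio_accumulation.get(risk, 0) + exposure
        if projected > thresholds.get(risk, float("inf")):
            breached.append(risk)
    return breached

# Current portfolio accumulation by named latent risk (illustrative, $M).
portfolio = {"BPA": 400, "glyphosate": 250}
# Broad accumulation thresholds set by management ($M).
thresholds = {"BPA": 500, "glyphosate": 300}
# A candidate account exposing $80M to BPA and $70M to glyphosate.
account = {"BPA": 80, "glyphosate": 70}

breaches = breached_thresholds(portfolio, account, thresholds)
if breaches:
    print("Seek exclusions or other accumulation management for:", breaches)
else:
    print("Write the account")
```

In this illustration BPA stays within its threshold (480 vs. 500) while glyphosate would exceed its threshold (320 vs. 300), so the underwriter would seek an exclusion for glyphosate rather than decline the account outright.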
This is a different way for underwriting management to operate as well. Management will no longer identify emerging risks and develop risk-specific underwriting guidelines, including exclusions. Instead, it will set broad accumulation thresholds and make portfolio accumulation information available to the underwriters. With these data in the underwriting workflow, the good news is that underwriters will then simply do their job — they already see it that way — and protect the company against the next asbestos.
There’s a lot of interest these days in the underwriter of the future. It is not just about processing more data quickly, but also different kinds of data and different kinds of underwriting guidelines. Praedicat is working with insurers to build this vision for the future of underwriting today.
About Bob Reville
I am Co-Founder, President and Chief Executive Officer of Praedicat. But the more interesting question is, to quote David Byrne, "How did I get here?" Over my career, I acquired enough knowledge of the arcane worlds of both liability risk and catastrophe ("cat") modeling to realize that combining the two would be a great idea, and Praedicat is the result.

I started with liability risk. After graduate school, I joined the RAND Corporation as a policy researcher, where I dug into the world of California's workers' compensation, which is a crazy microcosm of the larger world of civil liability. My work on the impact of permanently disabling injury at work led to recommendations for system reform that were adopted in major legislation in 2004. I then became the Director of the RAND Institute for Civil Justice and led a research program on mass litigation and tort reform, including a major study delivered to Congress on the impact of asbestos litigation. Through this, I saw the widespread concern among insurers and the scientific community over the risk of the "next asbestos." Meanwhile, I founded and co-directed the RAND Center for Terrorism Risk Management Policy, where I received a crash course in cat modeling applied to terrorism risk.

Impressed with the risk insights from cat models, my co-founders and I began to think that perhaps the course of asbestos could have gone better for the world -- victims, defendants and insurers alike -- if insurers had had access to the science about asbestos risk in the form of cat models. That was how Praedicat was born. Today, Praedicat is the leading latency catastrophe modeling company, and insights from our models are changing casualty insurance and reinsurance. I received my Ph.D. in economics from Brown University.