“If you know a set of basic parameters concerning the ball at rest, can compute the resistance of the table (quite elementary), and can gauge the strength of the impact, then it is rather easy to predict what would happen at the first hit. The second impact becomes more complicated, but possible; you need to be more careful about your knowledge of the initial states, and more precision is called for. The problem is that to correctly compute the ninth impact, you need to take into account the gravitational pull of someone standing next to the table (modestly, Berry’s computations use a weight of less than 150 pounds). And to compute the fifty-sixth impact, every single elementary particle of the universe needs to be present in your assumptions! An electron at the edge of the universe, separated from us by 10 billion light-years, must figure in the calculations, since it exerts a meaningful effect on the outcome”
Page 178 – The Black Swan by Nassim Taleb (summarizing Prof. Sir Michael Berry's 1978 paper "Regular and Irregular Motion," in Nonlinear Mechanics)
As an avid reader of insurance-related articles and a big fan of the value the industry brings to society, I am a huge proponent of what data and technology are bringing to our craft. Accuracy in all things insurance has been improving rapidly, and the future of the industry is very bright. Our ability to more accurately assess, price, and manage risk using data and technology means better products and services and, ultimately, more value and return to society and other stakeholders.
So, it is always disheartening when I read articles that take this trend too far. In an article entitled “Will Technology Make Insurance Obsolete?”, the Insurance Journal takes the data-and-technology trend and extrapolates it to the far edges of the universe. Its logical conclusion: computing power and data will become so cheap and ubiquitous that we will be able to predict the risk of a single exposure so accurately that insurance pooling as we know it will no longer be needed.
As you can guess, I vehemently disagree. Allow me to ask a question to illustrate why.
Take something as simple as auto insurance. When it comes to assessing the risk of a car, driver, or policy, would you say that what we KNOW about the exposure is, in magnitude, more or less than what we DON’T KNOW about it?
Here’s what we can know:
- The quality of the driver
- The type of vehicle (how safe? how old?)
- Lots of data to gauge risk aversion (credit scoring, etc.)
- Telematics to rate your volume of driving, speed, acceleration, etc.
That’s a lot of information and data. And yet, even if we could use those data points precisely and accurately for rating, it wouldn’t be nearly enough. We would still be overwhelmed with uncertainty.
What if you forget to lock your car, and it is stolen?
What if a hail storm blows through and dimples your roof and hood and smashes your windshield?
What if a dump truck hasn’t secured its load and debris shoots out of it while you are driving the speed limit (because you are a great driver, and we can accurately assess your risk aversion), blowing out your tire and causing a 10-car pile-up and severe injuries?
When will these foreseen or unforeseen events occur?
See where I am going with this? We are running into the same problem Taleb illustrates in the opening quote from “The Black Swan.” If we can’t accurately compute the 56th impact in a game of billiards without knowing precisely where every electron in the universe is situated, then there is no possibility whatsoever that we can ever know enough to accurately assess and price the risk of a single driver or a single home or a single life or a single injury. There is not enough collectible data in the universe to do it.

This is why we build models. Building a model is simply assembling the data we do know and feel confident about into a framework that gets us part of the way from here to there (prediction). But models are just simplified representations of the world. They can only take us so far before their predictive power plateaus, because models, being simplified representations of real phenomena, are missing data and the context of how that data interacts with the other data points underlying the model. This is why all models are WRONG, but some are USEFUL.
This is also why pure risk pooling is the single most effective way of eliminating much of this uncertainty. By pooling risks with other risks that are very similar in nature, uncertainty, in its various forms, begins to cancel out. Not completely, but enough to allow us to build effective risk transfer products that let all of us go about our daily business without having to confront every risk we face head on.
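The cancelling effect of pooling can be sketched with a toy simulation. The loss frequency (5%) and severity range ($1,000–$50,000) below are hypothetical numbers chosen purely for illustration, not actuarial figures: the point is that the average loss per policy stays roughly the same as the pool grows, while its year-to-year spread shrinks.

```python
import random

random.seed(42)

def simulate_loss():
    # Toy model: a policyholder has a 5% chance of a loss in a given
    # year, uniformly between $1,000 and $50,000 (hypothetical numbers).
    if random.random() < 0.05:
        return random.uniform(1_000, 50_000)
    return 0.0

def avg_loss_per_policy(pool_size, trials=1_000):
    """Simulate many years for a pool of the given size and return the
    mean and standard deviation of the average per-policy loss."""
    results = []
    for _ in range(trials):
        total = sum(simulate_loss() for _ in range(pool_size))
        results.append(total / pool_size)
    mean = sum(results) / trials
    var = sum((x - mean) ** 2 for x in results) / trials
    return mean, var ** 0.5

for n in (1, 100, 1_000):
    mean, sd = avg_loss_per_policy(n)
    print(f"pool of {n:>5}: mean per-policy loss ~ ${mean:,.0f}, "
          f"spread ~ ${sd:,.0f}")
```

The mean hovers near the same value for every pool size, but the spread falls roughly as one over the square root of the pool size — the insurer of 1,000 similar cars faces far less relative uncertainty than the insurer of one, even though it knows nothing more about any individual driver.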
So, I am going to say it... Insurance, as a mechanism to reduce risk, has worked well for over 300 years and will continue to be a great risk transfer mechanism indefinitely. What the world needs is not more data, but better data: data that supplies us with real information and intelligence. Technology will do a lot of the heavy lifting, but what I hope you now understand is that no matter how much data we dig up, and no matter how much we learn from it, we can never get enough of what we need. It is likely easier to pinpoint an electron in the outer reaches of the universe than to accurately map the zillions of interactions occurring between billions of free-willed people in a schizophrenic universe.