In brief
- Anticipating risk is a key component of insurance profitability, but the gray areas are getting grayer by the minute. With IoT and an ever-increasing number of connected devices, what is a good or bad risk?
- As society evolves, so does the nature of risk. New social and technological developments create situations where risks are more complex, claim values are escalating, and risk and liability are often unclear.
- If you're using machine learning but don't know (and can't explain) how it determines rates, you could be breaking the law. Also, how do you spot and counter rating factors with an inbuilt bias? It all depends on understanding the data.
As the advertising slogan says, "Where there's blame, there's a claim."
But it doesn’t say who you’re supposed to claim against. Apportioning blame is more complex than it might seem, particularly for AI, ML, IoT, connected devices and all they entail.
For instance, more and more ports and factories run fully automated forklifts and other heavy machinery. If an automated dockside forklift kills somebody or damages property, who's accountable?
Who’s liable?
Again, if a driverless car has an accident, who's responsible for the damages? The "driver" (and who, prima facie, is the driver)? The AI designer? Could the car maker's CEO and board members be culpable? Several urgent and often thorny liability questions await answers. Intriguingly, new EU AI regulations attempt to provide clarity, resolution and principles, but while they offer a little respite, they're unlikely to provide a holistic solution.
Insurers are already taking a harder line on the viability and pricing of risk, and if we run the situation to its logical conclusion, all we’re left with is a mass of new-ish but uninsurable risks. But that doesn't remove the risk; it just means there's no cover. So how does the industry square that with society at large?
In certain historical circumstances, governments have come to the aid of those cast adrift. Take Flood Re, for example. When insurers balked at covering homes at high risk of flooding, the UK Government worked with the industry to set up the Flood Re reinsurance scheme.
In the United States, employer's liability and workers' compensation laws stipulate that if your insurer goes under and you have a disease claim that manifests 10 years after the insurance company ceased trading, a guaranty fund will cover you. Similarly, as protection against uninsured motorists in the UK, the Motor Insurers' Bureau pays out claims as an insurer of last resort and then endeavors to recover its money from the negligent parties.
ChatGPT, the fastest-growing app in history
Then there's the (current) elephant in the room: generative AI. Lawmakers are attempting to understand the latest developments (via frameworks, policy and legislation), both to reassure the public and to address the (perceived) prospect of mass unemployment and other possible harms. Meanwhile, with the emergence of GenAI and other automation in insurance, large language models are being used to combine as many different data sources as quickly as possible, correlate them and come up with seemingly meaningful conclusions. This capability is becoming increasingly important and commercially attractive, and a lot more investment is going into well-defined pipelines and processes for taking the best results into production.
More and more financial services professionals are using large language models in their daily work, bringing new risks for insurers — new sources of problems but opportunities too.
We have techniques (including data federation and federated models) that solve the issue of how to do machine learning when you don't have access to the data. However, there’s still a genuine risk of plagiarism (often unintended) in many circumstances. For example, if you train a model on publicly available data, are you plagiarizing the people who provided the original data?
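To make that concrete, here's a minimal federated-averaging sketch in Python. It's purely illustrative: the three "parties," their data and the linear model are synthetic stand-ins, and it assumes each organization trains locally and shares only model weights, never the underlying rows.

```python
# Minimal federated-averaging sketch: each party trains a simple linear
# model on its own private data and shares only model weights, never rows.
# All data here is synthetic; in practice each "party" would be a separate
# organization behind its own firewall.
import numpy as np

rng = np.random.default_rng(0)

def local_fit(X, y, w, lr=0.1, epochs=50):
    """A few rounds of local gradient descent on one party's private data."""
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w = w - lr * grad
    return w

# Three parties with private datasets drawn from the same underlying risk model
true_w = np.array([0.5, -1.2, 2.0])
parties = []
for _ in range(3):
    X = rng.normal(size=(200, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=200)
    parties.append((X, y))

# Federated averaging: broadcast global weights, train locally, average back
global_w = np.zeros(3)
for _ in range(10):
    local_weights = [local_fit(X, y, global_w.copy()) for X, y in parties]
    global_w = np.mean(local_weights, axis=0)  # only weights cross the wire

print("Recovered weights:", np.round(global_w, 2))  # ~ [0.5, -1.2, 2.0]
```

The privacy benefit is structural: the coordinator only ever sees averaged weights, so no individual record leaves its owner's environment. That said, weight-sharing is not a complete defense against leakage, which is one reason the plagiarism and provenance questions above still matter.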
All chatbots, even large language models, are algorithms that follow complex rules to drive interactions with people. Outcomes of the rule-following process are often unintended (e.g., chatbots that become abusive or antisocial). Who bears the liability there?
Using ML to drive risk rating
What if you're using machine learning to drive rating? Under new EU regulations, decisions made by AI in many financial situations must be traceable, well-documented, explainable and subject to appropriate human oversight. So, if you're using machine learning but don't know and can’t explain how it determines rates, you could be breaking the law.
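What does "traceable and explainable" look like in practice? One time-honored approach is to keep the rating model itself transparent. The sketch below (in Python, with an invented base rate and relativities) shows a multiplicative rating table that records every factor's contribution, so any individual price can be reconstructed and justified on demand.

```python
# A sketch of traceable, explainable rating: a transparent multiplicative
# model where every factor's contribution to the premium is recorded.
# The base rate and relativities below are invented for illustration.
BASE_RATE = 400.0  # hypothetical annual base premium

RELATIVITIES = {
    "driver_age_band": {"17-24": 1.80, "25-39": 1.10, "40-64": 0.90, "65+": 1.05},
    "annual_mileage": {"low": 0.85, "medium": 1.00, "high": 1.25},
    "vehicle_group": {"A": 0.90, "B": 1.00, "C": 1.30},
}

def rate(risk: dict) -> tuple[float, list[str]]:
    """Return the premium plus a human-readable audit trail."""
    premium, trail = BASE_RATE, [f"base rate: {BASE_RATE:.2f}"]
    for factor, levels in RELATIVITIES.items():
        multiplier = levels[risk[factor]]
        premium *= multiplier
        trail.append(f"{factor}={risk[factor]} -> x{multiplier:.2f}")
    return round(premium, 2), trail

premium, audit_trail = rate(
    {"driver_age_band": "25-39", "annual_mileage": "high", "vehicle_group": "B"}
)
print(premium)            # 550.0
for line in audit_trail:  # every step of the decision is explainable on demand
    print(line)
```

A more complex model can still suggest which factors to test, but keeping the step that actually sets the price this transparent makes the traceability and human-oversight requirements far easier to satisfy.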
Another risk is bias. In many jurisdictions, it's illegal to use gender as a rating factor, so if one of your factors functions as a clear proxy for gender, that's illegal too. But what if there's a strong correlation between gender and, say, color preferences?
Of course, gendered color assumptions are out of date, but the point stands: it's okay to use car color as a rating factor only as long as you can demonstrate there's no inherent gender bias. More generally, it's not enough to spot a rating factor that looks like a good differentiator and build it into your price regardless. You need to consider the wider context.
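How might you demonstrate that? One simple sanity check, sketched below in Python, is to measure the statistical association (here, Cramér's V) between a candidate rating factor and the protected attribute in your historical data. The column names, synthetic data and threshold are illustrative, not regulatory guidance.

```python
# A sketch of one proxy check: measure the association (Cramér's V) between
# a candidate rating factor and a protected attribute in historical data.
# Values near 0 suggest little proxy risk; values near 1 are a red flag.
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

def cramers_v(a: pd.Series, b: pd.Series) -> float:
    table = pd.crosstab(a, b)
    chi2 = chi2_contingency(table)[0]
    n = table.values.sum()
    k = min(table.shape) - 1
    return float(np.sqrt(chi2 / (n * k)))

# Synthetic policy data, for demonstration only
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "gender": rng.choice(["F", "M"], size=5_000),
    "car_color": rng.choice(["red", "blue", "silver"], size=5_000),
})

v = cramers_v(df["car_color"], df["gender"])
print(f"Cramér's V (car_color vs gender): {v:.3f}")
if v > 0.3:  # arbitrary illustrative threshold
    print("Warning: factor may be acting as a proxy for gender")
```

Passing one correlation test is evidence, not proof. In practice, you'd test every candidate factor against every protected attribute, on real portfolio data, and document the results.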
Compliance is not a tick-box exercise
Treat data science offhandedly, and you can find yourself in trouble. When it comes to compliance, switching on a machine learning tool and implementing a set of ISO standards doesn't necessarily cover you. Nowadays, data science isn't just a case of "here's the textbook; get on with it." It's more nuanced than that.
Consequently, treat current best practices in data science as a starting point, not a guarantee. You must comply with prevailing laws, rules and preferred processes, but they're evolving fast, and there are still gray areas. Actions and activities that seem safe and appropriate today could be borderline tomorrow. So, remain vigilant and stay up to date with legal, regulatory and ethical developments.
Making all the right connections
Manufacturers of white, brown and heavy goods are all busy creating new apps and connectivity for their products. The latest elevators, consumer electronics and industrial machinery monitor themselves continually. The idea is that management will know an appliance is about to fail before it breaks down.
So, what does that mean for insurance? If a piece of equipment reports, "I'll catch fire in 10 days if you carry on using me this way," and the owner does nothing about it, then when the item does burst into flames, it's probably not covered.
Potentially, risk underwriters could insist that, now connected devices produce ongoing maintenance reports, they see all the data and price accordingly. In that case, you're insured if you react immediately to a fault alert. But if you habitually leave everything to the last minute, you might not be.
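As a sketch of how such pricing might work, the hypothetical Python snippet below turns fault-alert telemetry into a premium loading based on how quickly the insured responds to alerts. The thresholds, discounts and loadings are invented for illustration.

```python
# A sketch of turning fault-alert telemetry into a premium adjustment: the
# slower the insured's average response to alerts, the higher the loading.
# Thresholds and loadings are invented for illustration.
from datetime import datetime, timedelta

def response_loading(alerts: list[tuple[datetime, datetime]]) -> float:
    """alerts: (raised_at, resolved_at) pairs from the device's maintenance feed."""
    if not alerts:
        return 1.0  # no alerts, no adjustment
    avg_hours = sum(
        (fixed - raised) / timedelta(hours=1) for raised, fixed in alerts
    ) / len(alerts)
    if avg_hours <= 24:
        return 0.95   # prompt maintenance earns a small discount
    if avg_hours <= 72:
        return 1.0
    return 1.30       # habitual delay: significant loading, or refer to underwriter

history = [
    (datetime(2024, 3, 1, 9), datetime(2024, 3, 4, 17)),  # ~3.3 days to fix
    (datetime(2024, 5, 2, 8), datetime(2024, 5, 7, 12)),  # ~5.2 days to fix
]
print(response_loading(history))  # 1.3
```

In practice, the loadings would come from actuarial analysis of how response times correlate with losses, not hand-picked thresholds.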
Insurers that can rate that kind of data will gain a market advantage. But what about the sheer morality of it all? If an organization could genetically test everyone on the planet, and the resulting profiles detailed which ailments would kill each of us and when, many of us would be uninsurable. At the very least, premiums would be much higher for anyone known to carry a heritable, dangerous disease that would significantly shorten their life.
Who’s the biggest risk?
I guess increased data and monitoring make it more likely we'll find ourselves in Bruce Willis's situation in The Fifth Element, when his flying smart car automatically revokes his license, mid-drive, for flooring the accelerator and sideswiping a police car. A similar principle already exists in marine insurance: if a satellite-tracked vessel enters certain insurance zones (e.g., off the Horn of Africa), the premium increases automatically and immediately.
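As a simple illustration of that mechanism, here's a hypothetical Python geofence check that applies an automatic premium loading when a vessel's reported position falls inside a high-risk zone. The coordinates and multipliers are invented; real schemes use the industry's published listed areas and far more precise geometry.

```python
# A sketch of automatic, position-based premium loading in marine insurance.
# Zones are modeled as simple lat/lon bounding boxes; the coordinates and
# loadings here are illustrative only.
HIGH_RISK_ZONES = {
    # name: (lat_min, lat_max, lon_min, lon_max, premium_multiplier)
    "Gulf of Aden (illustrative box)": (10.0, 15.0, 43.0, 52.0, 1.50),
}

def zone_multiplier(lat: float, lon: float) -> float:
    """Return the premium multiplier for a vessel's current position."""
    for name, (lat_lo, lat_hi, lon_lo, lon_hi, mult) in HIGH_RISK_ZONES.items():
        if lat_lo <= lat <= lat_hi and lon_lo <= lon <= lon_hi:
            print(f"Vessel inside {name}: applying x{mult}")
            return mult
    return 1.0

base_daily_premium = 2_000.0
# Position report from the vessel's satellite tracker (illustrative fix)
print(base_daily_premium * zone_multiplier(12.5, 47.0))  # 3000.0
```

The point is the mechanism: the position feed drives the price, with no human in the loop.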
Motor risk varies from person to person because of two things: driving style (naturally fast and aggressive, or defensive) and experience (familiarity with the roads you drive). Insurers consider commuters very low risk because they're accustomed to traveling the same route each day.
On the other hand, traveling salespeople continually driving through unfamiliar towns and cities are more likely to have an accident. By the same token, a driver living in a congestion-charge zone uses their non-compliant car only when absolutely necessary, to save money. So when they do drive outside the zone, they represent more risk because they're less familiar with their car.
Who pays in the end?
So, if nobody's liable, who pays for the damage? The public perception is that if a city redesigns a street, creating a cycleway that lets unsteady and uninsured cyclists ride on regardless, the authorities will have a hard time deciding who's responsible for an accident. The cyclist? The driver, the city council or the children playing at the sidewalk's edge? What about the government? It legislated the size of the cycleway, after all. It's all very well saying, "Where there's blame, there's a claim," but that only applies if you can identify the culprit in law.
The fact is, insurance companies are very clear on the issue of liability. The courts apply the law of the land, and insurers apply the contractual terms of the policy.
However, for the public, the gray areas are getting grayer by the minute.
Want more clarity?
If you’d like to learn how Zoreza Global helps insurers implement automation and predictive analytics to improve rating accuracy while accelerating the claims and underwriting processes, visit www.luxoft.com/industries/insurance or contact us.