

The likely incoming widespread use of automated vehicles (“AVs”) on UK roads (and, specifically, the need to insure them!) will lead to a number of novel issues for insurers. One such issue for them to grapple with is how they can best achieve a balance between actuarial experience and fairness.

The basic principle of insurance is that losses of the few are paid for by the premiums of the many. However, as readers will be aware, when setting premiums and terms, insurers take into account a variety of different risk characteristics, with the aim that those posing the highest risk pay larger premiums. This can lead to unfairness – or, at least, perceived unfairness – when some characteristics result in certain customers paying higher premiums.
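The trade-off can be seen in the basic pricing arithmetic. The sketch below is illustrative only (the claim probabilities, costs and loading are invented, and this is not any insurer's actual rating method): each group is priced at its expected loss plus a loading, so higher-risk groups pay more.

```python
# Illustrative sketch of risk-based pricing (all figures invented).

def pure_premium(claim_probability: float, average_claim_cost: float) -> float:
    """Expected annual loss for one policyholder."""
    return claim_probability * average_claim_cost

def gross_premium(claim_probability: float, average_claim_cost: float,
                  loading: float = 0.25) -> float:
    """Expected loss plus a loading for expenses and profit."""
    return pure_premium(claim_probability, average_claim_cost) * (1 + loading)

# Two hypothetical risk groups: the higher-risk group pays more, so the
# premiums of the many still cover the losses of the few within each group.
low_risk = gross_premium(claim_probability=0.05, average_claim_cost=3000)
high_risk = gross_premium(claim_probability=0.15, average_claim_cost=3000)
print(low_risk, high_risk)  # 187.5 562.5
```

The perceived unfairness arises in how the groups are drawn, not in the arithmetic itself.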

AVs, unlike most traditional vehicles, capture considerably more data about the driver and the way the vehicle is being driven, which potentially provides insurers with better and more accurate information on which to underwrite the risk. Given the potential shift in the amount and nature of data available to insurers, now is a good time to revisit the current law on price discrimination in an insurance context.

Some data can legally be used for price discrimination, which may not necessarily be perceived as fair. Pricing considerations can result in trade-offs between different groups in society, which can be difficult to navigate. For example, should action be taken to improve the market outcomes for some people if such action could lead to worse market outcomes for others?

Compared to some jurisdictions, the United Kingdom has relatively few restrictions on the rating factors that insurers can use. Hyper-personalised risk pricing can bring benefits in the form of competition and innovation. However, the effects on who pays more and who pays less can sometimes be concerning – for example, when those on low incomes pay significantly more than others. The Social Market Foundation estimates that the “poverty premium” for motor insurance means that those living in deprived areas currently pay about £300 more a year in premiums.

The development of AVs and artificial intelligence (“AI”) may bring issues of social policy to the forefront and cause the current ‘actuarial defence’ to the use of some protected characteristics and other factors to be re-examined. AI will allow previously unknown patterns in insurance activities to be detected, sharpening questions about which characteristics may be used and how algorithmic proxies for banned characteristics can be identified.

What is the ‘actuarial defence’?

The Equality Act 2010 sets out a number of protected characteristics: age, disability, gender reassignment, marriage and civil partnership, pregnancy and maternity, race, religion or belief, sex, and sexual orientation.

In general, insurers must not discriminate against a person because of a protected characteristic, including when setting premiums. However, actuarially justified discrimination based on some protected characteristics is currently lawful where a relationship between the characteristic and loss propensity is established – the ‘actuarial defence’.

For example, age can be used as part of a risk assessment if the information used is relevant and comes from a source on which it is reasonable to rely (para. 20A of Schedule 3 to the Equality Act 2010). The EHRC’s technical guidance on age discrimination in services, public functions and associations (published 6 April 2016) says:

“Information which might be relevant to the assessment of risk includes actuarial or statistical data, future projections or a medical report. It cannot include untested assumptions, stereotypes or generalisations in respect of age”.

Further, it sets out that the following factors may be relevant to determining whether the information comes from a source on which it is reasonable to rely: “the information is current, where data is involved, the method of collection is suitable, the information is representative, the information is credible; for example, it is generally accepted by the scientific or actuarial community”.

Price and value is one of the four outcomes that firms need to assess under the Consumer Duty. Differential pricing for different groups of consumers creates considerations for firms’ fair value assessments, which are required by the FCA. There are various ways to segment customers, examine differential outcomes for consumers and tailor the analysis. Firms must also have regard to consumers with characteristics of vulnerability. In its Consumer Duty: Findings from our review of fair value frameworks, the FCA says:

“Our price and value outcome rules do not require firms to charge all customers the same amount, or to make the same level of profit from all customers.”

That said, providing fair value to different groups of customers is central to the FCA’s rules. Firms need to demonstrate how each group of customers receives fair value, even though they can be differentially priced (see Fair Pricing in Financial Services and Pricing practices in the retail general insurance sector: Household insurance).

Insurers were legally prevented from taking gender into account when pricing insurance from 21 December 2012 by the Equality Act 2010 (Amendment) Regulations 2012. These regulations amended the Equality Act 2010 to reflect a change to European Union law consequent on the ruling of the Court of Justice of the European Union in Case C-236/09 of 1 March 2011 (Association Belge des Consommateurs Test-Achats ASBL and Others v Council). Maggie Craig, at the time the ABI’s Acting Director General, said:

“This gender ban is disappointing news for UK consumers and something the UK insurance industry has fought against for the last decade. The judgment ignores the fact that taking a person’s gender into account, where relevant to the risk, enables men and women alike to get a more accurate price for their insurance”.

What is fair?

“Unfair” price discrimination in essential products, such as motor insurance, is considered a greater concern than in non-essential products. Where insurance is mandatory, or essentially mandatory, it becomes more of a social good and less of an economic commodity. There are concerns about how to balance accurate rating against the ways insurance practices affect larger social goals and exacerbate inequalities that exist outside of insurance (American Academy of Actuaries, Discrimination: Considerations for Machine Learning, AI Models, and Underlying Data).

Models seeking to improve profitability may discriminate against poorer risks to the extent that a market failure results, and some people are not able to obtain cover, or can only obtain cover at unaffordable prices. This could lead to an increase in underinsurance in some demographics. An inability to classify by risk may impact profitability and product availability. (ABI Insurance in the UK: The Benefits of Pricing Risk).

In non-essential insurance markets, if insurers cannot price using certain characteristics a product may become too expensive for a price sensitive group. People within that group may leave the market and retain their own risk due to poor value. This could lead to price rises for remaining customers, causing increasing numbers to leave the market. This could leave only the highest risks in the market willing to continue to purchase cover at ever-increasing prices.
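The spiral described above can be sketched numerically. In the toy model below (all figures are invented for illustration), everyone is charged the pool's average expected loss; each round, customers whose own risk is well below the premium judge the product poor value and leave, pushing the premium up for those who remain.

```python
# Toy adverse-selection spiral (illustrative only; invented figures and
# a made-up "value threshold" for when a customer exits the market).

def spiral(expected_losses, value_threshold=0.8, rounds=5):
    pool = sorted(expected_losses)
    history = []
    for _ in range(rounds):
        if not pool:
            break
        premium = sum(pool) / len(pool)  # flat, community-rated price
        history.append((round(premium, 2), len(pool)))
        # Low risks leave when the premium is poor value relative to their risk.
        pool = [loss for loss in pool if loss >= value_threshold * premium]
    return history

# Ten hypothetical customers with expected annual losses from £100 to £1,000:
# the premium rises and the pool shrinks until only higher risks remain.
print(spiral([100 * i for i in range(1, 11)]))
```

In this sketch the market stabilises with only the riskier customers insured at a much higher price – the mechanism the paragraph above describes.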

Pricing based on behaviours that people can easily change may be considered fairer than pricing based on intrinsic characteristics. For example, mileage and previous convictions can be considered to be within a person’s control whereas age is not. However, there are factors that can be considered either within or outside a person’s control depending on other circumstances, such as home address, credit score, previous claims experience, marital status, years a licence has been held, employment status and the car that is driven (model, age etc). Some consumer advocate groups argue that insurers should determine premiums using only factors that people can control, while others consider it fair to use other factors if they are actuarially sound.

A number of rating factors could be linked to economic status and cause a “poverty premium” such as age, credit score, home address, payment method, employment status, excess and the car that is driven. However, if credit scores were removed as a rating factor from motor insurance pricing, it is estimated that two thirds of customers would see price increases as the cost of higher risk insureds is spread across all insureds (Frees & Huang The Discriminating (Pricing) Actuary).
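The mechanics behind such an estimate are straightforward pooling arithmetic. The hypothetical figures below are invented for illustration (they are not the Frees & Huang numbers): banning a rating factor collapses two differentiated prices into one pooled price, and the larger, lower-risk group cross-subsidises the smaller, higher-risk group.

```python
# Illustrative cross-subsidy arithmetic (all figures invented).
# Expected annual losses per customer, before any loading.

n_low, loss_low = 800, 300    # hypothetical: 800 lower-risk customers
n_high, loss_high = 200, 900  # hypothetical: 200 higher-risk customers

# With the rating factor banned, everyone pays the pooled average.
pooled = (n_low * loss_low + n_high * loss_high) / (n_low + n_high)

print(pooled)               # 420.0: the single pooled premium
print(pooled - loss_low)    # 120.0: increase for each lower-risk customer
print(loss_high - pooled)   # 480.0: discount for each higher-risk customer
```

Here 80% of customers see a price rise – the same shape as the two-thirds estimate cited above, with the total subsidy paid exactly matching the total discount received.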

Protected characteristics such as race, sex and disability can correlate with poverty attributes. Even where certain characteristics are not used, there is a risk of algorithmic proxies to those characteristics still being used in pricing. Citizens Advice says that its exploratory research has found that people of colour may be paying £250 more a year for car insurance than white people. It found that areas with large communities of people of colour have higher insurance premiums – termed the “ethnicity penalty”.

The public discourse regarding what is and is not fair to use in pricing may be gaining momentum.

Before insurers were prevented from taking gender into account in pricing in 2012, women had lower premiums for motor insurance due to actuarial data showing that, as a group, they are safer drivers. Men are more than twice as likely to make a claim on their insurance, despite only driving 75 miles further a year on average. However, even following the change of law in 2012, men continue to pay more for their car insurance than women as they are more likely to have other factors that increase premiums, such as driving convictions, more powerful cars, riskier occupations, and penalty points. Men are also more likely to commit traffic offences: in 2021, men accounted for 82% of those prosecuted for summary motoring offences.

The upshot of the above is that there are greater profit margins on policies issued to women, which subsidise the reduced margin on policies issued to men. The absence of the ability to discriminate in pricing can lead to more targeted marketing to more profitable market segments in order to increase profitability. An example of this is Sheilas’ Wheels.   

Rating factors that correlate to protected characteristics can result in different prices for those groups. For example, occupation is used as a rating factor for motor insurance as different jobs have different risk profiles. However, some professions such as nurses comprise a significantly larger percentage of women than other professions. This means that using data based on occupations can still lead to gender being used as proxy in assessing risk. Men and women often make different lifestyle choices which insurers also consider when working out pricing.

Currently the default approach is to exclude unfair factors from model specification, so that the model is unaware of them. However, this results in discrimination by proxy when unfair factors correlate with legitimate factors. This in turn may disadvantage people with protected characteristics or vulnerability and may not satisfy FCA Consumer Duty requirements. An alternative is explicit equalisation, where all factors, whether fair or not, are included in the model specification. The discriminatory effects are then removed by averaging outcomes across the unfair factors. However, there may also be legal issues with this approach and difficulties in obtaining all the data necessary to make it work.
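A minimal sketch of the two approaches on an invented toy dataset (the occupations, loss figures and the 50/50 portfolio mix are all assumptions for illustration, not real data or any firm's method):

```python
# "Unawareness" vs explicit equalisation on a toy book of business.
# Records: (occupation, sex, observed_loss). Occupation correlates with
# sex here, so a model merely unaware of sex still prices it via occupation.
from collections import defaultdict
from statistics import mean

data = [
    ("nursing", "F", 250), ("nursing", "F", 270), ("nursing", "M", 260),
    ("haulage", "M", 600), ("haulage", "M", 640), ("haulage", "F", 610),
]

# 1) Unawareness: drop sex and price on occupation alone. Sex still leaks
#    in through the occupation mix (discrimination by proxy).
by_occ = defaultdict(list)
for occ, sex, loss in data:
    by_occ[occ].append(loss)
unaware = {occ: mean(losses) for occ, losses in by_occ.items()}

# 2) Explicit equalisation: model with BOTH factors, then average the
#    predictions over an assumed sex distribution within each cell, so the
#    final price no longer varies with sex.
by_cell = defaultdict(list)
for occ, sex, loss in data:
    by_cell[(occ, sex)].append(loss)
cell_pred = {cell: mean(losses) for cell, losses in by_cell.items()}

sex_weights = {"F": 0.5, "M": 0.5}  # assumed portfolio-wide mix
equalised = {
    occ: sum(sex_weights[s] * cell_pred.get((occ, s), unaware[occ])
             for s in sex_weights)
    for occ in unaware
}
print(unaware)
print(equalised)
```

Note that equalisation requires the sensitive attribute to be collected and modelled before being averaged out – the data-availability and legal difficulties the paragraph above mentions.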

In Canada, two different systems of motor insurance operate. Some provinces allow risk pricing and others have pricing restrictions which mean that everyone pays the same basic insurance premium. In the provinces where risk pricing is allowed, the chance of being involved in a fatal road crash is almost 20% lower than in the more heavily regulated provinces. This suggests that risk pricing can have the benefit of leading to safer behaviour.

With the evolution of technology and AI, regulatory bodies may increase their focus on different treatment that is considered undesirable by society, but which can be statistically supported. There are economic arguments to permit risk classification and the ability to differentiate among risks. However, there are also arguments for insurance as a social good and the cross subsidisation of risks.

What about Automated Vehicles and Artificial Intelligence?

“As advanced modelling methodologies become widely available to actuaries, the way models are used within financial services is increasingly constrained by legal developments and regulatory scrutiny. Two examples in the UK are the Financial Conduct Authority’s review of general insurance pricing practices and the Information Commissioner’s Office consultation on an AI auditing framework.” (The Actuary, Insurance Pricing: the Proxy Problem)

Some customers may benefit from a more accurate assessment of risks through the use of AI. However, the complexity of AI models and the potential loss of transparency in insurance pricing make these assessments more difficult to regulate. AI clearly has the potential to lead to increased issues with the use of algorithmic proxies to protected characteristics.

There is already a safety disparity, with people living in more deprived areas being more likely to be killed or injured as road users (see Foresight, Government Office for Science, Inequalities in Mobility and Access in the UK Transport System). There is a correlation between socioeconomic status and vehicle age, and therefore safety: more deprived areas skew to older vehicles. This is likely to be exacerbated by new cars having automated features which promise significantly improved safety.

Advanced driver assistance functions are increasingly available in new vehicles. Automated vehicles are set to be on our roads by 2026 as the Automated Vehicles Act becomes law. It is predicted that automated vehicles will be largely unavailable to poorer households, at least during the earlier stages of implementation (Future of mobility: inequalities in mobility and access in the UK Transport System). The roll out of AVs, predicted to be a boon to road safety due to the removal of human error, is likely to be highly unevenly distributed across the population. AVs may result in fewer accidents and savings to insurers, once any increase in claim size is factored in. This could increase existing disparity in the amount of car premiums charged to economically disadvantaged groups and those living in disadvantaged areas. 


Questions about the fairness of the ‘actuarial defence’ will continue to arise as insurers gain access to more data (and more risk-specific data) and use this data in more complex models and algorithms. It will be interesting to see if regulators or legislators impose further restrictions as to how insurers can discriminate between different risk factors.

The introduction of AVs and the use of AI could put the ‘actuarial defence’ to the test and cause the relationship between pricing according to risk and discrimination to be re-examined and subject to further regulatory scrutiny.

Joanna Wallens, Associate at Browne Jacobson and Tim Johnson, Partner at Browne Jacobson

