AUTOMATED VEHICLES AND ALGORITHMIC BIAS

There are known knowns. These are things we know that we know. There are known unknowns. That is to say, there are things that we know we don’t know. But there are also unknown unknowns. There are things we don’t know we don’t know. – the very recently departed Donald Rumsfeld

Introduction

Rumsfeld’s controversial remark in the context of the Iraq War aside, when trying to make predictions about a future with AVs, there are many things we do not know. The technology is immature, so we do not know exactly what will be under the bonnet of future mainstream AVs. As the Law Commission have noted, we do not know how safe AVs will be, and the ways in which they might fail, until they have been observed under real-world conditions. And, importantly in terms of this article, we do not know, for sure, the exact circumstances under which AVs may show algorithmic bias, or which minorities may be affected by such bias (although we have an idea of this, based on the latest research). At least we know that we do not know these things. These are, in Rumsfeld’s terms, the known unknowns.

Then there are the unknown unknowns. In short, designers of AVs must plan for AVs encountering novel situations that the AV’s artificially intelligent system did not know it did not know. An algorithmically biased AV is likely to encounter more situations that it does not recognise, and the consequences of those situations are likely to be discriminatory: more crashes involving pedestrians with disabilities and, possibly, pedestrians of colour. Both of these risks have been publicly raised by academics and institutions as particular problems posed by bias in machine learning as it applies to AVs.

The potential for bias in how AVs will treat unknown situations will need to be considered by legislators, regulators, and litigators alike. Most of all, however, it is the AV companies themselves who should take note of the potential for algorithmic bias. No company will enjoy being the subject of the first test case against the background of a deadly AV accident involving a wheelchair user or a person of colour whom the AV was unable to recognise owing to deficiencies in its training. Certainly no company will relish the reputational damage that comes from a public allegation that discrimination has been baked into its AV.

Unknown unknowns

When we talk about AVs, we are simultaneously talking about artificial intelligence. There are two important categories of unknown unknowns that relate to artificial intelligence.

1. Artificial intelligences typically do not know what they do not know. In the literal sense, unlike a human, they cannot ponder what it is that they might not know. At least at this stage of technological development, they are a function of their programming and the inputs from the outside world that are provided to them. As such, if they have a flaw, this is typically discovered after the event, and may provide a point of feedback. Equally, just like a human, an AV cannot know when it will encounter something that its training and experience have not prepared it for – such as obstacles it has not seen before or damaged street signs that it cannot recognise. These are by definition unknown unknowns: the AV cannot know what the never-before-seen object will be until it sees it. Judging by current testing of AVs, however, humans tend to be less rigid, and less likely to make an obviously irrational decision, when they encounter something on the road that they have not been trained for. Humans have lived experience, and can therefore bring context to bear when they see a new object on the road. AVs’ experience and training, at least at this stage of the technology, are necessarily bounded. To state the obvious, AVs have not gone through childhood, and do not have the contextual knowledge that comes with being a person.

2. When observing an artificial intelligence, humans may not know how it will fail – that is, what the artificial intelligence does not know – until it fails. For example, during both simulation and real-world testing of AVs by Waymo, its cars recorded when there was an accident, or when the AV would have been involved in an accident without human intervention such as braking. Before a new scenario is encountered, it is unknown what (unknown) situation will cause the next crash or near miss. In this way, a bank of scenarios is created for future AVs to learn from (a simplified sketch of this kind of logging follows below). That is the point of view of a company testing an AV. From the long-term point of view of society, it is an unknown unknown exactly what characteristics the real-world objects, or people, involved in a deadly crash with an AV will have. But avoiding algorithmic bias as far as possible lowers the chance that such a crash will be the result of systemic discrimination.
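
A minimal sketch, in Python, of the kind of logging just described: recording actual collisions and counterfactual near misses from test drives into a bank of scenarios for later training and testing. The data structure and function names are illustrative assumptions, not Waymo’s actual pipeline.

```python
from dataclasses import dataclass, field


@dataclass
class DriveEvent:
    """One logged moment from a test drive (hypothetical schema)."""
    timestamp: float
    collision: bool                # an actual accident occurred
    human_intervened: bool         # e.g. a safety driver braked or steered
    would_have_collided: bool      # counterfactual simulation: without that
                                   # intervention, a crash would have occurred
    scene_snapshot: dict = field(default_factory=dict)  # sensor and scene data


def build_scenario_bank(events: list[DriveEvent]) -> list[dict]:
    """Collect crashes and counterfactual near misses for future AVs to learn from."""
    return [e.scene_snapshot
            for e in events
            if e.collision or (e.human_intervened and e.would_have_collided)]
```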

Algorithmic bias

Algorithmic bias may occur in at least two ways. Firstly, an artificial intelligence is only as good as its coding and its experience. Facial recognition systems have exhibited algorithmic bias on the design side, as MIT Media Lab researcher Joy Buolamwini has observed, because the algorithms have typically been written by white engineers, who are the majority in the technology sector. These algorithms are built on pre-existing code libraries, again usually coded by white engineers. Additionally, and importantly, the training data have often been biased towards white subjects, with more white faces going into the training pool than other categories of faces. The result is that facial recognition systems tend to be less accurate on black individuals, according to an FBI co-authored study mentioned in the online article ‘The Perpetual Line-Up’ by Clare Garvie, Alvaro Bedoya and Jonathan Frankle, researchers at Georgetown University.
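
To make the training-data point concrete, here is a minimal, hypothetical sketch of the kind of dataset audit a developer might run before training a recognition model. The function, the group labels and the numbers are illustrative assumptions, not any vendor’s actual tooling or data.

```python
from collections import Counter


def training_set_shares(group_labels: list[str]) -> dict[str, float]:
    """Proportion of each demographic group in a face-image training set.

    A heavily skewed split is an early warning that the trained model
    may perform unevenly across groups.
    """
    counts = Counter(group_labels)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}


# Example with a deliberately skewed, made-up training set:
labels = ["white"] * 800 + ["black"] * 120 + ["asian"] * 80
print(training_set_shares(labels))  # {'white': 0.8, 'black': 0.12, 'asian': 0.08}
```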

In a similar way, an artificially intelligent visa streaming tool used by the Home Office worked alongside a secret list of suspect nationalities that informed a ‘risk score’, which decision-makers would take into account when assessing visa applications. This was the subject of a legal challenge on grounds of discrimination. In response to this challenge, the Home Office agreed to scrap its use. The central problem was as follows, according to the Joint Council for the Welfare of Immigrants (JCWI):

“In short, applicants from suspect nationalities were more likely to have their visa application rejected. These visa rejections then informed which nationalities appeared on the list of ‘suspect’ nations. This error, combined with the pre-existing bias in Home Office enforcement (in which some nationalities are targeted for enforcement because they are believed to be easier to remove), accelerated bias in the Home Office’s visa process. Such feedback loops are a well-documented problem with automated decision systems.”
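
The feedback loop the JCWI describes can be illustrated with a toy simulation. This is purely illustrative and does not model the Home Office tool itself: it simply shows how a ‘risk score’ derived from past rejections, which in turn influences future rejections, widens an initial disparity over successive rounds.

```python
def simulate_feedback_loop(rejection_rate: dict[str, float],
                           rounds: int = 5,
                           boost: float = 0.05) -> dict[str, float]:
    """Toy model of a rejection/risk-score feedback loop (illustrative only)."""
    rates = dict(rejection_rate)
    for _ in range(rounds):
        # The 'risk score' here is simply last round's rejection rate.
        risk = dict(rates)
        for nationality, score in risk.items():
            # A higher risk score nudges next round's rejection rate upward.
            rates[nationality] = min(1.0, rates[nationality] + boost * score)
    return rates


# A small initial gap between nationalities A and B widens over five rounds.
print(simulate_feedback_loop({"A": 0.10, "B": 0.30}))
```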

There is some evidence that racial bias may apply to AVs: the performance of several state-of-the-art object detection models in 2019 was uniformly poorer when detecting pedestrians with darker skin tones, according to the study ‘Predictive Inequity in Object Detection’ by researchers at the Georgia Institute of Technology. The Law Commission has warned that AVs may struggle to see “dark-skinned faces in the dark” (see paragraph 5.75 of the Law Commission’s latest consultation document).
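
A hedged sketch of the kind of per-group audit that would surface this sort of disparity: computing pedestrian-detection recall separately for each annotated skin-tone group. The grouping, the function and the example numbers are assumptions for illustration; they are not the study’s data.

```python
def recall_by_group(detections: list[tuple[str, bool]]) -> dict[str, float]:
    """Detection recall per annotated skin-tone group.

    Each item is (group_label, was_detected) for one ground-truth pedestrian.
    """
    hits: dict[str, int] = {}
    totals: dict[str, int] = {}
    for group, detected in detections:
        totals[group] = totals.get(group, 0) + 1
        hits[group] = hits.get(group, 0) + int(detected)
    return {group: hits[group] / totals[group] for group in totals}


# Made-up example showing a recall gap between two annotated groups:
sample = ([("lighter", True)] * 95 + [("lighter", False)] * 5
          + [("darker", True)] * 88 + [("darker", False)] * 12)
print(recall_by_group(sample))  # {'lighter': 0.95, 'darker': 0.88}
```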

This is not a point that is uniformly accepted. The Director of Thatcham Research has stated: “Truly automated systems are not yet available, however, there’s no reason though why people of differing skin colours will or won’t be recognised. Visual sensors invariably view the world in more or less uniform tones of grey so do not differentiate between people’s race. They also do not register detail such as gender, but use stature, morphology and movement to judge size and shape.”

Even if there were no theoretical reason why such an automated system would work less well in relation to people with darker skin, this statement might underestimate the deeper problems that automated recognition systems have, as outlined above. In other words, even if the technology itself were “neutral”, the way the technology operates is subject to how it has been trained. Unseen biases in training are a function of the inequalities in our world. When a system is trained using samples or scenarios whose selection is tainted by bias, it is much more likely that the end result will lead to discriminatory outcomes: bias in, discrimination out.

The second way in which algorithmic bias may occur, and this brings us back to unknown unknowns, is through gaps in training: for example, where an AV encounters a new design of wheelchair or mobility scooter that it has not been trained to recognise. The Law Commission has noted that people with disabilities may be put at risk where AVs “may not have been trained to deal with the full variety of wheelchairs and mobility scooters” (see paragraph 5.75 of the Law Commission’s latest consultation document).
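
One way a developer might mitigate this class of unknown unknowns is to have the planner fall back to conservative behaviour whenever the perception stack is unsure of what it is seeing. The sketch below assumes, purely for illustration, that the perception stack exposes per-object class labels and confidence scores; the class names and threshold are invented.

```python
# Hypothetical perception output: (class_label, confidence in [0, 1]).
Detection = tuple[str, float]

KNOWN_MOBILITY_CLASSES = {"pedestrian", "wheelchair_user", "mobility_scooter"}


def needs_cautious_fallback(nearby: list[Detection],
                            confidence_floor: float = 0.6) -> bool:
    """Flag scenes the system should treat with extra caution.

    If an object near the planned path has an unknown class, or is recognised
    only with low confidence (for example, a wheelchair design the model was
    never trained on), the planner should slow down and widen clearance
    rather than assume it knows what it is looking at.
    """
    return any(label not in KNOWN_MOBILITY_CLASSES or score < confidence_floor
               for label, score in nearby)


# Example: an unfamiliar object detected with low confidence triggers caution.
print(needs_cautious_fallback([("pedestrian", 0.97), ("unknown_object", 0.41)]))  # True
```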

The legal ramifications

Now that the JCWI case has concluded with the Home Office scrapping the visa streaming algorithm, it is no longer a theoretical prospect that cases may be brought against entities whose artificially intelligent algorithm produces a result that is discriminatory.

We are dealing here with indirect discrimination, which is distinct from direct discrimination. Direct discrimination occurs where person A treats person B less favourably than person A treats or would treat others because of person B’s protected characteristic (e.g. race, disability, etc.). Indirect discrimination occurs where the treatment does not differ between people with and without the protected characteristic, but the result is nonetheless discriminatory. Suppose I make a rule that all employees must work Saturdays. If I fire my Jewish employee who cannot work Saturdays owing to their religion, that is likely to constitute indirect discrimination. Section 19 of the Equality Act 2010 sets out what counts as indirect discrimination:

“19(1) A person (A) discriminates against another (B) if A applies to B a provision, criterion or practice which is discriminatory in relation to a relevant protected characteristic of B’s.

(2) For the purposes of subsection (1), a provision, criterion or practice is discriminatory in relation to a relevant protected characteristic of B’s if—

(a) A applies, or would apply, it to persons with whom B does not share the characteristic,

(b) it puts, or would put, persons with whom B shares the characteristic at a particular disadvantage when compared with persons with whom B does not share it,

(c) it puts, or would put, B at that disadvantage, and

(d) A cannot show it to be a proportionate means of achieving a legitimate aim.”

A person may be a company. If a company’s AV is routinely worse at spotting pedestrians with darker skin tones, or pedestrians using wheelchairs, leading to more accidents involving people with those characteristics than involving those without them, then there may arguably be indirect discrimination. And victims of such indirect discrimination may have an actionable claim. There is clearly significant legal, reputational and financial risk here.

Conclusion

The word progress has different connotations. There is technological progress, which simply relates to the state of a civilisation’s technology. Then there is societal progress, which involves a broader assessment of values, ethics and equality. AVs have the potential to advance both kinds of progress. But this will only occur if AV companies take a step back and assess what it will take to ensure as little algorithmic bias as possible in their systems.

This is not just a general societal point: it is one that may be backed by equality law. If, when dealing with unknown situations – and AIs do not know what they do not know – an AV’s system can be shown to have implicit biases which make it less safe around people with certain protected characteristics, the company putting the AV on the road may be at legal risk. The precedent is there: the Home Office has already been challenged over an artificially intelligent algorithm behind a visa streaming system that produced discriminatory results, and scrapped the system as a result. There is no reason in principle why companies producing AVs would be immune from such legal challenges. Algorithmic bias therefore represents a legal risk, as well as a financial and reputational one, which companies producing AVs would be wise to take into account.

Paul Erdunast

