Facial recognition technology: The recognition we deserve

Marina Goodman
Published in LawSpring
Jan 28, 2021 · 5 min read

Illustration by Georgia Mae Lewis | @georgiamaelewis

There has been sustained concern in recent years about the use of facial recognition technology (FRT), which plays into a long-standing debate about the desirable interplay between regulation and innovation. In particular, the use of FRT across different sectors has highlighted how the pace of innovation can prejudice underrepresented groups, and do so on a massive scale. Although there have been many exciting moments in technological innovation in recent years (AlphaGo, AlphaFold), a more considered balance needs to be struck to promote trust in pioneering technology in contexts where the political ramifications are greater, such as employment and policing.

FRT is software that can be used to identify an individual (live-scanning of crowds against a watch list) or to verify identity (unlocking your phone with your face) by using algorithms to estimate the degree of similarity between faces. Tony Porter, the Surveillance Camera Commissioner, explained that an algorithm used in police FRT transposes an image into an electronic signature which can then be checked against a police watch list. Two primary concerns with FRT are: (1) that there is a lack of regulation, which in turn leads to mistrust, and (2) that its presence can intimidate and deter, and can therefore have a “chilling effect” on democratic rights such as peaceful protest and free association.
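To make the matching step more concrete, here is a minimal sketch in Python of how checking one face “signature” against a watch list might work. The embeddings, names and threshold are invented for illustration; this is not how any particular vendor’s or police force’s system is implemented.

```python
# A minimal sketch of the matching step described above, not any vendor's
# actual system. It assumes faces have already been converted into numeric
# "signatures" (embeddings); random vectors stand in for real ones here.
import numpy as np

rng = np.random.default_rng(0)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Degree of similarity between two face signatures, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical watch list: name -> pre-computed face signature.
watch_list = {f"person_{i}": rng.normal(size=128) for i in range(5)}

# A signature extracted from a face seen in the crowd.
probe = rng.normal(size=128)

# Flag a possible identification only if similarity clears a threshold.
THRESHOLD = 0.6  # illustrative value; real systems tune this carefully

best_name, best_score = max(
    ((name, cosine_similarity(probe, sig)) for name, sig in watch_list.items()),
    key=lambda pair: pair[1],
)
if best_score >= THRESHOLD:
    print(f"Possible match: {best_name} (similarity {best_score:.2f})")
else:
    print("No match above threshold")
```

Much of the policy debate lives in that threshold: set it too low and false identifications rise; set it too high and the system rarely flags anyone.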

The technology has been around for far longer than many would expect and is improving year on year. Back in 2017, the South Wales Police were trialling live FRT, which allowed police officers to identify members of a crowd in real time. Surprisingly, despite various companies withdrawing their software from use in certain sectors such as policing (Microsoft, IBM) and distancing themselves from misuse (Alibaba’s FRT was reportedly used to detect Uighur Muslims in China), FRT continues to develop at a steady rate. In fact, in response to the COVID-19 pandemic, the Japanese company NEC has launched facial recognition software which it claims can identify people even when they are wearing masks.

Bias in the algorithmic decision-making which underpins FRT has become a point of focus. Computer scientist Joy Buolamwini has cautioned against the use of technological innovation without sufficient auditing. During her research as a graduate student at the MIT Media Lab, Buolamwini noticed that generic FRT could not recognise her face and concluded that the software had not been trained on a diverse selection of images. Biases in the narrow class of people producing the software were being carried across into the FRT, a phenomenon Buolamwini calls the “coded gaze”. The result was markedly lower accuracy rates for darker-skinned women than for lighter-skinned men. This inaccuracy and its potential for harm, particularly in the policing context, spurred Buolamwini to set up the Algorithmic Justice League, a movement in favour of “equitable and accountable AI”. The film “Coded Bias”, which covers this story in more detail, is available to rent now on the Barbican’s website.
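The auditing point is easy to see with a toy example. The sketch below, using entirely invented labels and results (it is not Buolamwini’s methodology or data), shows why a single overall accuracy figure can hide exactly the subgroup disparities her audits exposed.

```python
# Toy illustration of the kind of audit described above: break accuracy
# down by subgroup instead of reporting one overall figure.
# The subgroups and outcomes below are invented for illustration only.
from collections import defaultdict

# (subgroup, prediction_correct) pairs for a hypothetical test set.
results = [
    ("lighter-skinned male", True), ("lighter-skinned male", True),
    ("lighter-skinned male", True), ("darker-skinned female", False),
    ("darker-skinned female", True), ("darker-skinned female", False),
]

totals, correct = defaultdict(int), defaultdict(int)
for group, ok in results:
    totals[group] += 1
    correct[group] += ok

for group in totals:
    print(f"{group}: {correct[group] / totals[group]:.0%} accuracy "
          f"({correct[group]}/{totals[group]})")
# A single overall accuracy figure would hide the gap this per-group view exposes.
```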

So how do we currently prevent FRT from being used in a harmful way? The legal and regulatory landscape is intimidating. There is a plethora of guidance, codes, statutes and reviews relating to data processing and its use in algorithmic decision-making. The landscape is also convoluted and sector-specific, adding a layer of complexity which obstructs individuals who want to vindicate and control their data and information rights. For example, the use of FRT in policing is currently regulated by the Data Protection Act, the Human Rights Act, the Protection of Freedoms Act, the Equality Act and the recently published Surveillance Camera Code of Practice. Until there is a simpler regime, public engagement on data issues is all the more crucial.

But meaningful public engagement relies on meaningful explanations, which are hard to come by. Marcus du Sautoy, Professor of Mathematics at the University of Oxford, discusses the barriers to achieving transparency in algorithmic decision-making in his book, The Creativity Code. In particular, “black box” algorithms make it difficult to untangle the decision-making actually being applied, because the algorithm trains itself, building decision trees by interacting with data without human intervention. One solution du Sautoy outlines is the development of a meta-language which an algorithm could use to justify its choices. No such successful meta-language currently exists.
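A small example makes the point about machines writing their own rules. The sketch below (using scikit-learn and a stock toy dataset, purely for illustration) fits a shallow decision tree with no human-authored logic; even its text dump reads as nested thresholds rather than a justification, and real-world models are far deeper and less legible.

```python
# Illustration of a model writing its own rules from data, with no
# human-authored logic. Requires scikit-learn; the dataset is a toy one.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
model = DecisionTreeClassifier(max_depth=3, random_state=0)
model.fit(data.data, data.target)

# The learned rules can be dumped as text, but even this small tree reads
# as nested thresholds rather than a human-style justification; deeper
# models and neural networks offer far less than this.
print(export_text(model, feature_names=list(data.feature_names)))
```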

In recent years, regulators have made, and continue to make, good progress. In November last year, the Centre for Data Ethics and Innovation (CDEI), which was set up to advise the UK government on the ethics of AI and data-driven technology, published its “Review into bias in algorithmic decision-making”. It emphasises the importance of looking at a decision-making process as a whole, as opposed to looking solely at the algorithmic sub-process, because bias can be introduced by both human and machine intervention (for example, in the selection of the data set used, as Buolamwini discovered).

Additionally, the ICO, the UK’s regulatory body for data protection, and the Alan Turing Institute, the national institute for data science and artificial intelligence, published the outcome of their Project ExplAIn, “Explaining decisions made with artificial intelligence”, in May 2020. This guidance outlines how organisations can meet their obligations under the UK GDPR, including in relation to the right to be informed and the right of access, in the context of algorithmic decision-making (for example, where a company uses an algorithm to choose who is employed from a pool of applicants).

Sector-specific changes are also being made. In August last year, Ed Bridges and Liberty brought a successful appeal against the South Wales Police. The Court of Appeal ruled that the South Wales Police had not performed an adequate data protection impact assessment because it failed to properly address the risks to the rights and freedoms of data subjects (members of the public). Additionally, the police had not taken reasonable steps to investigate potential racial or gender bias (although there was no clear evidence that the FRT used was biased, Buolamwini’s audits suggest a high likelihood of this being the case). The Court of Appeal did not, however, overturn the High Court on its assessment of proportionality. The subsequently published Surveillance Camera Code of Practice is a significant step towards coordinating responsible use of FRT by the police.

Further afield, in the employment context, there are signs that regulators are taking surveillance and monitoring more seriously, which in turn will deter entities from collecting data in illegitimate or unexpected ways. One example is the recent case concerning notebooksbilliger.de AG. The ICO has also published a blog on mitigating algorithmic bias in employment decisions.

There is compelling evidence that the development lifecycle of innovative products needs to slow down enough to give teams time to appreciate and address risk. Biases in algorithmic decision-making once again raise the issue that there is insufficient diversity in the technology sector, and that the pace of innovation in certain contexts is at odds with the protection of underrepresented individuals and reasonable privacy expectations. Hopefully, as we have seen in the ESG context, we will start to see ethical (data-safe) behaviour treated as a driver of improved economic performance rather than as a hindrance. Innovators have a part to play in engaging with the public on data collection and use. After all, any good company should be able to make the case that its use of our data is worth it.

Marina Goodman is a trainee solicitor at a City law firm, currently sitting in the IP/IT department, with interests in intellectual property and emerging technology.
