The UK government now uses algorithms to make decisions across a vast range of areas, including tax, welfare, law enforcement and criminal justice, immigration, and child protection (House of Commons Science and Technology Committee, Algorithms in decision-making: fourth report of session 2017–19, HC 351, 23 May 2018). Yet beyond these broad outlines, the government’s embrace of automation has been remarkably opaque. There is no systematic, public information about how and why public authorities procure, develop and use automated systems.
There are several possible reasons for this high level of secrecy. One is the perceived risk that people will use this information to abuse or circumvent government programmes (Lucy Series and Luke Clements, ‘Putting the cart before the horse: resource allocation systems and community care’, Journal of Social Welfare and Family Law, vol 35, issue 2, 2013, page 207).
Another is the fear that disclosure will damage the commercial interests of the government’s suppliers (Lina Dencik, Arne Hintz, Joanna Redden and Harry Warne, Data scores as governance: investigating uses of citizen scoring in public services – project report, Data Justice Lab, Cardiff University, December 2018).
A third reason is poor record-keeping: the necessary records may not be kept centrally or even at all, or the relevant officials may have moved on (Crofton Black and Cansu Safak, Government data systems: the Bureau investigates, Bureau of Investigative Journalism, 8 May 2019).
There are various other potential reasons too, some more defensible than others.
This secrecy has several troubling implications. First, it makes it difficult for individuals to know whether they have been subject to an unlawful or unfair automated decision, and thus to challenge it. Second, it hampers wider responses to systemic problems. At present, advocacy groups and researchers rely mainly on freedom of information requests and anecdotal evidence. This has produced some valuable insights (see, for example, Liberty’s 2019 report on predictive policing: Policing by machine, Liberty, January 2019), and it has enabled some litigation (see, for example, the ongoing challenge to the Home Office’s use of an algorithm to filter visa applications: Henry McDonald, ‘AI system for granting UK visas is biased, rights groups claim’, Guardian, 29 October 2019). But these methods are resource-intensive and piecemeal, and they may only uncover problems after an automated system has already affected thousands of people. Secrecy can also prevent courts from developing and clarifying the law governing automated systems, leaving both civil servants and members of the public uncertain about their rights and duties (R (UNISON) v Lord Chancellor [2017] UKSC 51).
In this context, an increasingly pressing question is how public law principles can combat secrecy in the algorithmic state. One candidate is the emerging duty of transparency. If the state uses a policy to guide its discretionary decision-making, ‘it is the antithesis of good government to keep it in a departmental drawer’ (B v Secretary of State for Work and Pensions [2005] EWCA Civ 929 at para 43). Secret policies prevent people from effectively regulating their conduct in accordance with the law and increase the risk of arbitrariness in the exercise of state power (R (Justice for Health Ltd) v Secretary of State for Health [2016] EWHC 2338 (Admin)). They are ‘in general inconsistent with the constitutional imperative that statute law be made known’ (Salih and Rahmani v Secretary of State for the Home Department [2003] EWHC 2273 (Admin) at para 52). These concerns apply whether the policy is set out in a document or encoded in an algorithm.
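To make the idea of a policy ‘encoded in an algorithm’ concrete, consider the following sketch in Python. Every field, rule and threshold here is invented purely for illustration; it does not describe any real government system.

```python
# Hypothetical illustration only: a discretionary policy encoded as code.
# The criteria and weights below are invented and describe no real system.

def flag_for_review(applicant: dict) -> bool:
    """Return True if an application is routed to extra scrutiny."""
    risk = 0
    if applicant["nationality"] in {"X", "Y"}:  # an unpublished nationality list
        risk += 2
    if applicant["previous_refusals"] > 0:
        risk += 1
    return risk >= 2

# Unless rules like these are published, an applicant cannot know why their
# application was flagged, or challenge the criteria being applied.
print(flag_for_review({"nationality": "X", "previous_refusals": 0}))  # True
```

The point of the sketch is that such code functions exactly like a written policy: it determines how discretion is exercised, and keeping it secret raises the same concerns as keeping a policy document in a departmental drawer.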
A related principle is procedural fairness (R v Secretary of State for the Home Department ex p Doody and other appeals [1994] 1 AC 531). If the state proposes to make a decision that affects a person’s rights and interests, it must generally explain the grounds for that decision and give the person an opportunity to be heard. A person may be unable to make meaningful representations about a technology-assisted decision without details of how the technology works. Fairness can also require government to give reasons for a decision after it has been made (Oakley v South Cambridgeshire DC [2017] EWCA Civ 71). This may similarly require the state to provide a detailed explanation of how its automated systems and decision-making models work, so that affected persons can understand why a decision was made.
The courts have already begun to apply these principles to automated decision-making. In R (Ames) v Lord Chancellor [2018] EWHC 2250 (Admin); December 2018/January 2019 Legal Action 33, the claimant sought judicial review of the Legal Aid Agency’s (LAA’s) offer in respect of counsel fees for his criminal defence. The LAA had refused to disclose the ‘calculator’ it used to determine the level of fees offered. The High Court held that this refusal was ‘untenable’ and rendered the LAA’s offer invalid: the calculator was ‘a very important part’ of the decision-making process and the LAA ‘plainly owe[d] a duty of transparency and clarity’ in relation to it (para 75).
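Why disclosure mattered in Ames can be illustrated with a purely hypothetical sketch of what such a fee ‘calculator’ might look like. Every rate, threshold and parameter below is invented; the judgment does not disclose the actual formula.

```python
# Hypothetical sketch of a fee 'calculator' of the kind at issue in Ames.
# All rates and thresholds are invented for illustration only.

def counsel_fee_offer(pages_of_evidence: int, trial_days: int) -> float:
    base_fee = 1_000.00
    # A hidden threshold: the per-page rate halves above 10,000 pages.
    page_rate = 1.50 if pages_of_evidence <= 10_000 else 0.75
    daily_rate = 500.00
    return base_fee + pages_of_evidence * page_rate + trial_days * daily_rate

offer = counsel_fee_offer(pages_of_evidence=20_000, trial_days=5)
print(f"£{offer:,.2f}")  # prints £18,500.00
```

Without sight of the code, counsel offered this figure could not know that a rate threshold had been applied, let alone make representations about whether it was applied correctly; hence the duty of transparency the court identified.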
In R (Eisai Ltd) v National Institute for Health and Clinical Excellence [2008] EWCA Civ 438, the defendant had refused to provide the claimant with a fully executable version of the model it used to assess the cost-effectiveness of the claimant’s drugs. The Court of Appeal held that this was a denial of procedural fairness. It rejected the defendant’s claims that disclosure would undermine confidentiality or be overly costly: Richards LJ held that ‘the court should in my view be very slow to allow administrative considerations of this kind to stand in the way of its release’ (para 65).
Similarly, in R (Savva) v Kensington and Chelsea RLBC [2010] EWCA Civ 1209, the Court of Appeal held that a local authority was required to publish and explain the mathematical tool it used to calculate personal community care budgets, as part of a common law duty to give reasons. The court recognised that the burden of giving reasons ‘would not be insignificant but it is what simple fairness requires’ (para 20).
These decisions demonstrate that courts are willing to adapt administrative law to protect its animating values, including transparency, fairness and intelligibility. But these decisions are only the beginning. Public lawyers must work proactively and creatively to strengthen and develop these principles, with the ultimate goal of maintaining proportionate and systemic transparency. The government’s automated systems should, in principle, be as open as its laws, regulations, and policies.
Public Law Project has already begun to engage with some of the issues presented by automated government decision-making, including through its work on the EU Settlement Scheme (Joe Tomlinson, Quick and uneasy justice: an administrative justice analysis of the EU Settlement Scheme, Public Law Project, July 2019).
But we plan to take a more deliberate and unified approach to public law and technology in 2020, working with other lawyers, investigators and activists to track automation in government and ensure that it is fair, lawful, and attended by accessible and meaningful public law remedies.