Authors: Mia Leslie
Created: 2024-03-05
Last updated: 2024-03-27
AI and automation: transparency required
AI and automated tools are currently the subject of prominent mainstream debate, both for their appeal and for their unique risks, as demonstrated by the much-publicised Horizon scandal. Given these risks, it is vital that safeguards are in place around the use of automated decision-making (ADM), including the right of individuals to contest decisions and outcomes and to seek redress when things go wrong. Public Law Project’s (PLP’s) research suggests that current routes for seeking redress, including through the courts, are not operating as they should, and that this is largely down to a lack of transparency.
A number of existing legal frameworks contain crucial safeguards that can be interpreted to regulate the use of ADM tools and systems. However, when people are affected by these tools and seek to engage those safeguards to contest a decision, they are routinely unable to use the justice system to do so. The lack of judicial scrutiny of both solely and partially automated decision-making processes is leading to concerns that harms are going unaddressed.
At a roundtable hosted by PLP in December 2023 with civil society organisations and legal professionals working on ADM, many attendees described working with individuals who were subject to unfair decisions but struggled to obtain enough information to understand the role that automation played in the outcome they received. Without basic knowledge of an ADM system and how it operates, individuals and communities face significant barriers in accessing the courts to protect their rights and to challenge systems that operate unlawfully.
One problem is that, even where an ADM system is identified, it is difficult to build up an understanding of the wider decision-making process within which the AI or automated tool sits. Requests under the Freedom of Information Act (FoIA) 2000 are an important tool for obtaining more comprehensive information about a tool, system or decision-making process, yet a number of those who attended PLP’s event shared concerns about the effectiveness of the current regime. Many have found that public authorities rely on exemptions to disclosure under the FoIA 2000 – particularly s31 (prejudice to law enforcement) – in a blanket manner, with limited consideration given to whether, for example, there was in fact a risk of prejudice to law enforcement and whether that risk outweighed the public interest in disclosure of the information.
Such limitations create additional barriers and prevent both individuals and legal practitioners from having adequate access to the broader information needed to understand how the decision-making process operates. This, in turn, prevents them from being able to pursue issues and use the courts to enforce rights.
Even where a client manages to overcome the opacity barrier and is aware that they have been the subject of ADM, they are often unwilling to pursue litigation because of the lengthy and burdensome process involved. In addition, many clients have their immediate issue resolved during the pre-action stages, leaving a potentially unlawful decision-making system still in operation.
This is exacerbated by the fact that many of the areas in which ADM is being used, such as immigration and welfare, involve a stark power imbalance. Individuals are largely dependent on the provision of a service by the public authorities deploying these tools, leaving them hesitant or even resistant to the idea of pursuing their issue through the courts.
Ultimately, the onus should not be on clients or practitioners to uncover this information. Transparency around the use of ADM in public decision-making should be led by those operating it, to prevent harm from occurring in the first place. One way that this could be achieved is by introducing a statutory duty on public authorities to comply with the Algorithmic Transparency Recording Standard.1 This would improve the ability of individuals and legal practitioners to understand where ADM and algorithmic tools are being used, and go some way toward mitigating one of the central barriers to contesting unfair decisions and seeking redress.
There is a pressing need for established legal principles and frameworks, developed to ensure legality, rationality and fairness in non-automated decision-making, to be applied to this relatively new and evolving method of decision-making, so that harms do not go unaddressed.
 
1     See the Algorithmic Transparency Recording Standard Hub (Central Digital and Data Office and Centre for Data Ethics and Innovation, published 5 January 2023).