Automated decision-making in courts of law: A conversation between Nathalie Smuha and Abdi Aidid




Automated decision-making tools are increasingly being implemented in both public and private sectors. But what happens when these tools are used in courts of law?

Can algorithmic decision-making in the judiciary help clear backlogs in the courts, and is this a justified use of the technology? Do automated decision-making systems make “better” decisions than human judges, and what do we mean by “better”? Should judges, law clerks, and other legal professionals be involved in the design of automated systems for use in the judiciary, and if so, how?

These and related questions were explored by Abdi Aidid, assistant professor at the University of Toronto’s Faculty of Law, and Nathalie Smuha, a legal scholar and philosopher at the KU Leuven Faculty of Law and Criminology, at a public discussion held on February 12, 2024, hosted by the Schwartz Reisman Institute for Technology and Society (SRI) and Anna Su, one of the institute’s Research Leads and an assistant professor at the Faculty of Law.

Smuha and Aidid delved into the challenges posed by automation in the judiciary, including implications for the rule of law, the distinction between “decision-making” and “decision support” systems, and the potential pitfalls of relying too heavily on technology.

“There’s a lot of discussion in this area, and a lot of anxiety,” said Aidid in his opening remarks before he and Smuha dissected some crucial concepts and distinctions.

Smuha began by emphasizing the importance of defining terms, opting for the broader “algorithmic systems” over “artificial intelligence” (AI). 

“AI is a vague term and there’s no uniform definition,” said Smuha, “and today it’s used to describe the most sophisticated systems. Let’s be very broad and not limit the discussion.”

Aidid steered the conversation toward the rule of law, a concept Smuha said should be defined not only in terms of procedural correctness but also by examining the substance of laws and their adherence to principles like equality and human rights.

“So, when we talk about automation in the judiciary, it means we also talk about rights like the right to remedy, to protection, to equality,” said Smuha.

Aidid raised a crucial distinction between tools that help judges in their decision-making and tools that make decisions autonomously within the judicial system.

“We definitely need to distinguish between judicial decision-making systems versus judicial support systems,” said Smuha. “There are plenty of sub-tasks and sub-decisions that a judge needs to look at. But automated systems can have an impact on the final outcome of a case even if they weren’t directly making final decisions.” 

“It’s not right to say ‘We don’t care about judicial support systems, we only care about judicial decision-making systems’,” said Smuha. “We need to look at the whole chain leading up to a final decision. From the very start, by virtue of the fact that these systems are used in a context that’s so sensitive—namely, the normative foundation of our society—we should straight away look at what the impact is.”

Aidid added that “in the public sector especially, there’s a rigidly enforced distinction between input and output. This flawed distinction kind of masks the extent to which the input might be indicative of the output.”


A critical point emerged about the use of automated decision-making tools as a cost-cutting measure and to address backlogs in the judiciary, with Aidid pointing out that “these tools almost always get embraced as an austerity measure.”

Smuha noted that while clearing backlogs is an important part of the justice system—“As the saying goes, ‘justice delayed is justice denied’,” she said—we shouldn’t allow this to serve as a blanket justification for the widespread use of the technology.

“When we develop these systems, a translation needs to occur from law to code,” said Smuha, and this translation “entails a lot of normative decisions. We all know that legal concepts can be interpreted in different ways. So, somebody still has to make normative choices that have normative implications when these systems are being built. And it’s not judges.”

Aidid then cited the recently settled case of three disabled people in Arkansas who sued the state Department of Human Services in 2019 after an automated decision-making system excluded them from care and benefits that human decision-makers would otherwise have granted. Aidid highlighted design choices as factors in the case, suggesting that involving health professionals who work with the affected population in the design of the tool would likely have improved outcomes and mitigated harm.

“One of the reasons why we might still have problems with automated tools at the level of the judiciary is because of the interpretive function in law,” said Aidid. “Legal reasoning requires interpretation. Perhaps our legal notions are too messy for this kind of technology.”

Smuha agreed that “the law can be inherently vague,” but added that “this isn’t necessarily a weakness; it’s also a strength because it allows us to apply a general concept to a lot of different situations that we may not necessarily foresee.”

Both Smuha and Aidid agreed that we shouldn’t entirely dismiss the notion of automated decision-making tools in the judiciary, suggesting that for straightforward cases with low stakes—e.g., parking tickets, or perhaps a contract dispute in which both parties agree to have an automated system make a decision—automation might be viable. However, Smuha stressed the philosophical challenge of optimizing algorithms for a constantly evolving social construction like the law.

“You can’t just optimize an algorithm for law, because law is a social construction that is constantly in flux. And the nature of judging fellow human beings itself is such that we might feel comfortable with judgments that go against our opinions, because at the end of the day it’s a person making the decision—not a god, not a priest.”

The conversation concluded with reflections on the challenges of addressing biases perpetuated by automation technologies, with Aidid suggesting that examining bias in automated decision-making tools could help bring our own biases, and society’s, to the forefront.

“We could more easily arrive at a consensus on technology not providing socially optimal outcomes than we could convince a judge they have biases,” said Aidid.

“We could definitely see this as an opportunity to have a bigger conversation about the fact that technology forces us to put our biases on the table,” said Smuha, “and to reflect on them and make them very explicit to ourselves.”
