In 2013, a man in Wisconsin, US, was arrested for fleeing a police officer and driving a car that had been used in a recent shooting.
While none of his crimes mandated prison time, the judge in the case said the man had a high risk of recidivism and sentenced him to six years in prison.
The judge had considered a report from a controversial computer program called COMPAS, a risk assessment tool developed by a private company.
According to Mireille Hildebrandt, Research Professor at Vrije Universiteit Brussel, the case illustrates why lawyers need to collaborate with computer scientists on the use of artificial intelligence (AI) in law.
“We need constructive distrust, rather than naïve trust in ‘legal tech’,” she says.
“This certainly involves reconsidering the use of potentially skewed discriminatory patterns, as with the COMPAS software that informs courts in the US when taking decisions on parole or sentencing.”
The Research Professor on Interfacing Law and Technology at Vrije Universiteit’s Faculty of Law and Criminology will discuss the impact of AI on law in her inaugural lecture for The Allens Hub for Technology, Law & Innovation on December 13.
She points to research indicating that the COMPAS program discriminates against black offenders.
“Though researchers agree that, based on the data, black offenders are on average more likely to commit future crimes than white offenders, this produced a systematic error for black offenders who do not recidivise,” Professor Hildebrandt says.
“They are attributed a higher reoffending risk than white offenders who never recidivise.”
“COMPAS has given rise to a new kind of discussion about bias in sentencing, and once lawyers begin to engage in informed discussion about ‘fairness’ in ‘legal tech’ they may actually inspire more precise understandings of how fairness can be improved in the broader context of legal decision-making,” Professor Hildebrandt says.
Professor Hildebrandt’s research interests concern the implications of automated decision-making, machine learning and mindless artificial agency for law and the Rule of Law in constitutional democracies.
Recently nominated as ‘one of 100 Brilliant Women in AI Ethics to follow in 2019 and beyond’ by Lighthouse3, she says AI tools “cannot be ‘made’ ethical or responsible by tweaking their code a bit”.
“Instead, we should focus on training lawyers in understanding the assumptions of ‘AI’, especially its dependence on mathematical mappings of legal decision-making, as this has all kinds of implications that are easily overlooked.”
Professor Hildebrandt says lawyers should develop ‘a new hermeneutics’, or a new art of interpretation, that includes a better understanding of what data-driven regulation or predictive technologies can and can’t do.
“This may, for instance, mean that lawyers sit down with data scientists to define ‘fairness’ in computational terms, to avoid discriminatory application of technical decision-support,” she says.
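One computational definition lawyers and data scientists might discuss, sketched here with invented toy data, is equal false positive rates across groups: the rate at which people who never reoffend are nonetheless labelled high risk, which is the disparity at the centre of the COMPAS debate. All names and numbers below are hypothetical.

```python
# Minimal sketch of one computational "fairness" check for a risk tool:
# compare false positive rates across two groups. A false positive here is
# a person labelled high risk (1) who did not in fact reoffend.

def false_positive_rate(predicted_high_risk, reoffended):
    """Share of non-reoffenders wrongly labelled high risk."""
    non_reoffender_preds = [p for p, r in zip(predicted_high_risk, reoffended) if not r]
    if not non_reoffender_preds:
        return 0.0
    return sum(non_reoffender_preds) / len(non_reoffender_preds)

# Toy records (hypothetical): (group, predicted_high_risk, reoffended)
records = [
    ("A", 1, 0), ("A", 1, 0), ("A", 1, 1), ("A", 0, 0),
    ("B", 0, 0), ("B", 1, 0), ("B", 0, 0), ("B", 1, 1),
]

rates = {}
for group in {g for g, _, _ in records}:
    preds = [p for g, p, _ in records if g == group]
    actual = [r for g, _, r in records if g == group]
    rates[group] = false_positive_rate(preds, actual)

# The gap between group-level false positive rates is one measurable
# indicator of the kind of disparity described above.
gap = abs(rates["A"] - rates["B"])
print(rates, gap)
```

Whether this particular metric (rather than, say, equal calibration) is the right notion of fairness is exactly the kind of normative question the quote suggests lawyers should help decide.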
She proposes lawyers should ask three questions before introducing new technologies that will redefine their profession as well as the legal protection they offer: What problem does this technology solve? What problems are not solved? and What problems does it create?
“This requires research, domain expertise and talking to the people who may be affected: regulators, lawyers, but also and especially the ‘users’ of the legal system: citizens, consumers, suspects and defendants, the industry.”
Professor Hildebrandt also holds the Chair of Smart Environments, Data Protection and the Rule of Law at the Institute for Computing and Information Sciences, Science Faculty, Radboud University Nijmegen in the Netherlands. She teaches law to computer scientists and will soon assemble a team of computer scientists and lawyers under a €2.5 million grant from the European Research Council for research into legal tech.
She says computer-based predictions of legal judgments could help lawyers and those in need of legal advice decide whether or not to bring a case to court.
AI in the form of ‘argumentation mining’ could also help legal clerks quickly identify relevant case law, statutes and even doctrine with regard to a specific case, while flagging potentially successful lines of argumentation.
“A concern could be that we engage ‘distant reading’ (reading texts via software) before being well versed in ‘close reading’, losing important lawyerly skills that define the mind of a good lawyer,” she says.
“Another concern is that legislatures may want to anticipate the algorithmic implementation of their statutes, writing them in a way easily translated into computer code.
“This may render such statutes less flexible, and thereby both over-inclusive and under-inclusive, or simply unfair and unreasonable.”
AI could improve compliance by pre-empting people’s behaviour and reconfiguring their ‘choice architecture’, so that they are nudged or forced into compliance.
“Sometimes that may be a good thing, as long as this is a decision by a democratic legislature, and as long as such choice architectures are sufficiently transparent and contestable,” she says.
Crime-mapping is another example of AI in the administration of justice through policing, but “this may displace the allocation of policing efforts to what the ‘tech’ believes to be the correct focus”.
“Crime-mapping depends on data, which may be skewed – and the most relevant data may actually be missing.
“Blind trust in such systems may undermine effective policing (as officers may remain stuck in what the data allows them to see); it may also demotivate street-level policing, as officers may be forced to always check the databases and the algorithms instead of training their own intuition.”
Professor Hildebrandt says AI, if done well, may contribute to proper compliance with data protection law.
“Or it may undermine the objectives of the law, by turning it into a set of check-boxes, where the real impact is circumvented by way of cleverly designed pseudo-compliance.”
Find out more about The Magic of Data Driven Regulation – An evening with Mireille Hildebrandt.