As artificial intelligence becomes more widely used across Quebec’s legal profession, a McGill researcher warns AI risks eroding trust in the legal system.
Problems are already cropping up in courtrooms, with judges in recent months finding cases of AI-generated fabrications.
AI “hallucinations,” where generative AI tools spit out fictional information, are “fundamentally a part of how these models work,” McGill law professor Jennifer Raso said.
AI tools in courtrooms “risk not just misrepresenting things, but actually creating significant injustices,” she said.
In recent rulings, Quebec judges have found generative AI “hallucinations” in documents, in some cases imposing penalties for the use of non-existent legal references.
A judgment signed in April found that Michel A. Jeanniot, a Montreal arbitrator, had cited non-existent court decisions “hallucinated” by AI.
Last December, an applicant who was representing herself was found to have referenced non-existent decisions in her filing, which the judge said were probably AI hallucinations. Superior Court Judge Catherine Dagenais fined her $500 for filing a document that “contained fictional jurisprudential references.”
And in October, Superior Court Judge Luc Morin fined Jean Laprade $5,000 for including AI-hallucinated decisions in his legal defence.
It is the very design of AI tools that favours hallucinations, Raso said, meaning advances in the technology are unlikely to resolve the issue.
“These models are designed to use very complex statistical analysis mechanisms to basically predict” a response, she said. “Truth is not part of how the model is designed.”
The result can be “a citation of a case that doesn’t exist,” she said, or “a provision of a statute that doesn’t exist.”
AI tools are mostly trained in English, Raso said, which makes them more error-prone in other languages.
“In places like Quebec, where the decisions are made in French, there’s a higher risk of error.”
The Barreau du Québec, which represents Quebec’s lawyers, says it’s taking action to ensure its members use AI ethically.
“The Barreau is the first professional order to require that all its members take a course on the guidelines for ethical use of artificial intelligence,” Barreau director Catherine Ouimet said in a statement.
Training lawyers “is a good first step,” Raso said, “but the danger is assuming that AI training will do all the work needed.”
Many lawyers are feeling pressure to use AI tools, she said, but don’t question how the tools work or what their shortcomings may be.
The April decision that found arbitrator Jeanniot had cited non-existent court decisions in his ruling was particularly alarming, she said, because it showed that he had abdicated his duty to make a decision himself.
Arbitrators “have a duty to hear the sides, to reflect on the evidence that’s been provided to them, to think about the law and make an expert judgment.”
“We can’t just offload this to an algorithmic decision-making system,” she said. “It’s inappropriate on so many levels.”
Ouimet said the Barreau couldn’t comment on a specific case, and wouldn’t confirm whether the organization was investigating Jeanniot or other AI-related incidents.
But she said the organization “can engage the various mechanisms at its disposal” in cases of ethical breaches, including penalties such as fines, limits on a lawyer’s practice and disbarment.
Raso said she expected Jeanniot would face some sort of consequence, such as a fine, as well as reputational damage likely to hurt his chances of landing future arbitration work.
Although the arbitrator’s decision was challenged in court, Raso said other decision makers, such as bureaucrats handling immigration and social insurance files, face less legal scrutiny.
“It’s a real danger” that AI tools could influence a wide range of decisions, she said.
If “decisions are being generated using tools that are unreliable … that raises fundamental problems for the public’s faith in the justice system,” Raso said.
“If going to an arbitrator or tribunal member is as reliable as shaking a Magic 8 Ball and getting a result, then why would we even go there? This is a recipe for disaster,” she said.