Court documents in Australian murder case featured phony quotes and nonexistent rulings generated by AI

by John

A senior Australian lawyer has issued an apology for using artificial intelligence (AI) to generate fake quotes and nonexistent case citations in a murder case submission, leading to a 24-hour delay in court proceedings. The error occurred in the Supreme Court of Victoria and highlights a growing concern over AI’s role in the legal profession.

The Mistake and Apology

Rishi Nathwani, a King’s Counsel and defense lawyer, took full responsibility for the submission of the incorrect information. The fabricated content included fictitious quotes from a speech to the state legislature and citations from cases that did not exist. Nathwani expressed regret on behalf of the defense team, telling Justice James Elliott: “We are deeply sorry and embarrassed for what occurred.”

Impact on the Case

The fake submissions delayed the case of a teenager charged with murder, which the judge had hoped to resolve on Wednesday, August 13. On Thursday, Justice Elliott ruled that the teenager, who cannot be identified because they are a minor, was not guilty of murder because of mental impairment. The blunder and the resulting delay, however, prompted a strong rebuke from the judge.

“At the risk of understatement, the manner in which these events have unfolded is unsatisfactory,” Justice Elliott said. He emphasized the importance of lawyers’ submissions being accurate for the due administration of justice, which the AI error had undermined.

Discovery of the Errors

The issue was discovered by the judge’s associates, who were unable to find the cited cases and asked the defense lawyers to provide copies. The defense lawyers admitted they had wrongly assumed the AI-generated citations were accurate and acknowledged that the submissions contained fictitious material. The submissions had also been shared with the prosecutor, who likewise failed to verify them.

AI Guidelines in Court

In response to the incident, Justice Elliott reminded the court of guidelines issued last year regarding the use of AI by lawyers. “It is not acceptable for artificial intelligence to be used unless the product of that use is independently and thoroughly verified,” he stated.

The AI system used for the erroneous legal citations has not been identified, but the case draws attention to the potential risks of relying on AI without human oversight.

Global Concerns Over AI in Legal Practice

This is not the first instance of AI errors in the legal field. In 2023, a U.S. federal judge imposed $5,000 fines on two lawyers after they submitted fictitious legal research generated by ChatGPT in an aviation injury case. Similarly, Michael Cohen, former personal lawyer to U.S. President Donald Trump, faced fallout after AI-generated fake court rulings were cited in his legal papers.

In Britain, High Court Justice Victoria Sharp warned that presenting AI-generated false material as if it were genuine could amount to contempt of court or, in the most severe cases, perverting the course of justice, an offense that carries a maximum sentence of life in prison.

The Growing Role of AI in Courtrooms

AI’s presence in courtrooms continues to grow. Earlier this year, Jerome Dewald used an AI-generated avatar to present his argument in a New York court, and in May the family of a murder victim in Arizona used AI to create a video of the victim delivering a victim impact statement at his killer’s sentencing.

This incident in Victoria serves as an important reminder of the need for caution and thorough verification when integrating AI tools into legal processes. The legal profession must ensure that AI does not undermine the integrity of the justice system.
