
Federal judges acknowledge court ruling errors tied to staffers’ AI use after Grassley inquiry

Two federal judges have admitted that members of their staff used artificial intelligence over the summer to prepare court orders that contained errors.

The admissions, from U.S. District Judge Julien Xavier Neals in New Jersey and U.S. District Judge Henry Wingate in Mississippi, came in response to an inquiry by Sen. Chuck Grassley, R-Iowa, who chairs the Senate Judiciary Committee.

Grassley described the recent court orders as ‘error-ridden.’

In letters released by Grassley’s office on Thursday, the judges said the rulings in the cases, which were not connected, did not go through their chambers’ usual review processes before they were released.

The judges both said they have since adopted measures to improve how rulings are reviewed before they are posted.

Neals said in his letter that a June 30 draft decision in a securities lawsuit ‘was released in error – human error – and withdrawn as soon as it was brought to the attention of my chambers.’

The judge said a law school intern used OpenAI’s ChatGPT to perform legal research without authorization or disclosure, which he said was contrary to his chambers’ policy and the relevant law school’s policy.

‘My chamber’s policy prohibits the use of GenAI in the legal research for, or drafting of, opinions or orders,’ Neals wrote. ‘In the past, my policy was communicated verbally to chamber’s staff, including interns. That is no longer the case. I now have a written unequivocal policy that applies to all law clerks and interns.’

Wingate said in his letter that a law clerk used Perplexity ‘as a foundational drafting assistant to synthesize publicly available information on the docket,’ adding that releasing the July 20 draft decision ‘was a lapse in human oversight.’

‘This was a mistake. I have taken steps in my chambers to ensure this mistake will not happen again,’ the judge wrote.

Wingate had removed and replaced the original order in the civil rights lawsuit, declining at the time to give an explanation but saying it contained ‘clerical errors.’

Grassley had requested that the judges explain whether AI was used in the decisions after lawyers in the respective cases raised concerns about factual inaccuracies and other serious errors.

‘Honesty is always the best policy. I commend Judges Wingate and Neals for acknowledging their mistakes and I’m glad to hear they’re working to make sure this doesn’t happen again,’ Grassley said in a statement.

‘Each federal judge, and the judiciary as an institution, has an obligation to ensure the use of generative AI does not violate litigants’ rights or prevent fair treatment under the law,’ the senator continued. ‘The judicial branch needs to develop more decisive, meaningful and permanent AI policies and guidelines. We can’t allow laziness, apathy or overreliance on artificial assistance to upend the Judiciary’s commitment to integrity and factual accuracy. As always, my oversight will continue.’

Lawyers across the country have also faced scrutiny from judges over accusations of AI misuse in court filings, and judges have issued fines or other sanctions in several such cases over the past few years.

Reuters contributed to this report.

This post appeared first on FOX NEWS
