Lawyers Caught Using A.I. Explain Why They Did It (404media.co)

Jason Koebler and Jules Roscoe, 404 Media:

To do this, we used a crowdsourced database of AI hallucination cases maintained by the researcher Damien Charlotin, which so far contains more than 410 cases worldwide, including 269 in the United States. Charlotin’s database is an incredible resource, but it largely focuses on what happened in any individual case and the sanctions against lawyers, rather than the often elaborate excuses that lawyers told the court when they were caught. Using Charlotin’s database as a starting point, we then pulled court records from around the country for dozens of cases where a lawyer offered a formal explanation or apology. Pulling this information required navigating clunky federal and state court record systems and finding and purchasing the specific record where the lawyer in question tried to explain themselves (these were often called “responses to order to show cause.”) We also reached out to lawyers who were sanctioned for using AI to ask them why they did it. Very few of them responded, but we have included explanations from the few who did.

A May 2024 Stanford study found that A.I. legal research tools invented case law in between one-sixth and one-third of searches.

What is striking about 404 Media’s reporting is how many of these lawyers simply disclaim responsibility. I know few people want to admit to being lazy and incautious, but the number of these expensive professionals who blame their assistants rather than own their own filings is shameful.