John Bowie, LawFuel Publisher
There is a Hall of Shame forming in American courtrooms, a growing roll-call of prestigious law firms undone by chatbots that display Trumpian confidence and a loose relationship with reality. For three years, the admission criteria have been simple: file a brief citing cases that never existed, apologise to a federal judge, and wait for the coverage. Sullivan & Cromwell, advising OpenAI on ethical AI, has now earned its plaque. The drinks, as always, are on the algorithm.

On Saturday, April 18, Andrew Dietderich (pictured), the founder and co-head of S&C’s global restructuring practice, a Chambers Band 1 titan who has called the firm home for nearly three decades, fired off an apologetic letter to Chief Judge Martin Glenn of the U.S. Bankruptcy Court for the Southern District of New York.
Attached was a three-page, single-spaced catalogue of sins. The firm’s emergency motion in the Chapter 15 proceedings involving Prince Global Holdings, the BVI-registered remnant of a Cambodian business empire currently starring in U.S. criminal proceedings, was riddled with what Dietderich diplomatically called “inaccurate citations and other errors.”
The culprit, as is so often the case these days, is our new friend, artificial intelligence, and its party trick, the hallucination. It is as if Timothy Leary had camped out in the AI realm to test hallucinogenic activity.
But the Sullivan & Cromwell case carries some delicious ironies.
First, S&C is not merely any old Am Law 20 shop. It is the firm that proudly touts its role advising OpenAI on the “safe and ethical deployment” of artificial intelligence, a representation the firm emblazons on its own website.
Second, and even better: the errors were not caught by a plucky solo practitioner or an over-caffeinated law clerk. They were spotted by opposing counsel at Boies Schiller Flexner, until now the undisputed heavyweight champions of Elite Law Firm AI Fails.
BSF, you will recall, had its own starring role in the Hall of Shame when partner John Kucera gamely owned up to hallucinated citations in a high-profile brief. Here was the old champion, gloves on, catching the new contender mid-swing.
To his considerable credit, Dietderich did not point fingers at some hapless associate but instead signed the mea culpa solo, and reportedly rang BSF to say thank you. In an industry where partners occasionally treat blame like a game of hot potato, this was a class act.
The deeper irony, however, is one of timing. It has been almost exactly three years since the Avianca case first put “AI hallucination” into the legal lexicon. Since then, tools specifically engineered to prevent exactly this nonsense, RealityCheck from BriefCatch being the obvious example, have proliferated in the courts and in media like LawFuel. Yet here we are, with one of the planet’s most profitable firms, in a high-stakes restructuring matter, still letting a large language model’s fictional case law slip through unchecked.
As one rather pithy Claude-generated observation puts it: “An AI making things up with total confidence isn’t a bug. It’s a mirror.” And: “Garbage in, garbage out — but now the garbage speaks in complete sentences and cites its sources.”
There is something almost poetic about it happening in a restructuring case. S&C specialises in breathing new life into distressed companies. Yet its own AI-assisted filing required an emergency do-over. The machine that was supposed to streamline the process instead created more work, more embarrassment, and, let us not forget, more billable hours for everyone involved in the clean-up.
The great productivity promise of generative AI, it seems, sometimes merely relocates the labour rather than eliminating it.
Sullivan & Cromwell did the right thing: it corrected the record, apologised unreservedly, and moved on. But the episode leaves a lingering question for every law firm racing to integrate AI into its workflows. If even the firm advising OpenAI on ethical AI cannot quite manage its own house, what hope is there for the rest of us?
Perhaps the safest course remains the oldest one: read the cases yourself, check the statutes twice, and remember that the most reliable legal research tool remains the one between your ears, provided it hasn’t been lulled into complacency by a chatbot that never, ever admits it’s guessing.
Welcome to the club, S&C. One of the most profitable firms on earth demonstrated, with three pages of single-spaced errors, that knowing about responsible AI and practising responsible AI are, it turns out, quite different disciplines.