
As errors surface at the citation stage, the very core of legal documents, questions of trust in AI use are resurfacing.
Prominent Wall Street law firm Sullivan & Cromwell has acknowledged and apologized for a filing in the bankruptcy court for the Southern District of New York that contained inaccurate AI-generated citations, Bloomberg News reported.

The incident occurred in an emergency application filed in bankruptcy proceedings involving the Cambodia-based Prince Group. The application contained inaccuracies and errors caused by AI hallucinations, the firm said in a letter to the court. "We did not follow internal verification procedures in preparing the document."
"We deeply regret the burden this has put on the court and the parties, and we apologize on behalf of the team," the firm continued, adding that it will take steps to ensure the accuracy of all future filings.
AI hallucination refers to the phenomenon in which an AI model presents fabricated information as fact. In legal documents it is considered a critical failure, because it can produce citations to precedents that do not exist.
This case is noteworthy because it occurred at a large global law firm rather than a solo practitioner or small firm. Court problems caused by AI errors have mostly involved individual practitioners, but it has now become clear that large organizations can also fail to catch them.
A database tracking such cases has recorded more than 900 U.S. court cases with confirmed AI hallucinations, though instances in bankruptcy courts remain extremely rare.
Experts locate the heart of the matter not in the technology but in the verification process; the law firm itself admitted it failed to follow internal procedures.
In fact, some judges have publicly reprimanded lawyers for submitting false AI-generated citations. Last year, a bankruptcy court judge publicly rebuked a former senior lawyer at the law firm Gordon Rees Scully Mansukhani for submitting a fake case fabricated by artificial intelligence, though the episode did not lead to sanctions against the firm itself. Against this backdrop, concerns are growing that the credibility of legal documents themselves could be shaken.
As AI use spreads rapidly through the legal profession, risk management is becoming as important as productivity gains. In particular, observers note that lawyers cannot escape responsibility for the results when human verification is omitted from core tasks such as case-law citation.
Ultimately, the incident exposes less an error created by AI than a failure of the internal controls that should have caught it. It shows that the faster a technology is adopted, the more important the standards and accountability structures that govern it become.
SALLY LEE
US ASIA JOURNAL
