Using AI in legal practice does not diminish professional duties or lower the standard of competence; all AI-generated information must be thoroughly verified.
By 2026, generative artificial intelligence (AI) has become a ubiquitous tool within law firms, with the majority of legal professionals reportedly using it for work-related tasks. Whether legal-specific or general-purpose, these tools can rapidly draft comprehensive, seemingly persuasive legal briefs, yet closer examination often reveals serious flaws: fabricated case citations, misquoted legal texts, and misstated legal principles. Pressed by tight deadlines and heavy caseloads, many lawyers give these AI-generated documents only a superficial review, a lapse that can lead to the unwitting submission of deeply flawed, 'hallucinated' briefs to courts and create serious ethical and professional liabilities. AI should serve only as an assistive technology for traditional legal research, never as a replacement, and every cited authority must be rigorously verified for accuracy and for applicability to the specific legal issues at hand.
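To make that verification step concrete, the short Python sketch below runs a draft's text through a public citation-lookup service before anything is filed. It is a minimal sketch, not a definitive implementation: it assumes the Free Law Project's CourtListener citation-lookup API, and the endpoint path, request field, and response shape shown are assumptions drawn from that project's published interface that should be confirmed against current documentation.

```python
import requests

# CourtListener's citation-lookup endpoint (assumption: v3 path and field
# names per Free Law Project's published interface; confirm before relying on it).
LOOKUP_URL = "https://www.courtlistener.com/api/rest/v3/citation-lookup/"


def check_citations(brief_text: str) -> list[dict]:
    """Send draft text to the lookup service and report each citation it finds.

    A 'found' result only proves the cited case exists; the opinion must still
    be read to confirm it is good law and actually supports the proposition.
    """
    resp = requests.post(LOOKUP_URL, data={"text": brief_text}, timeout=30)
    resp.raise_for_status()
    results = []
    # Assumption: the response is a JSON list of citation objects, each with
    # a 'citation' string and an HTTP-style 'status' code.
    for hit in resp.json():
        results.append({
            "citation": hit.get("citation"),
            # 200 = matched a real opinion; 404 = no match, a strong
            # signal of a hallucinated citation.
            "exists": hit.get("status") == 200,
        })
    return results


if __name__ == "__main__":
    draft = "Plaintiff relies on Smith v. Jones, 999 F.4th 1234."
    for r in check_citations(draft):
        flag = "found" if r["exists"] else "NOT FOUND - verify manually"
        print(f"{r['citation']}: {flag}")
```

A check like this catches outright fabrications cheaply, but it deliberately stops short of the judgment calls: whether a real case remains good law and actually supports the brief's proposition is precisely the work that, as the rulings below make clear, cannot be delegated.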
Judicial bodies are increasingly vocal about the misuse of AI in legal submissions, as Special Master Michael R. Wilner's stern ruling in Lacey v. State Farm General Ins. Co. illustrates. Confronted with briefs containing 'bogus AI-generated research,' he granted the motion to strike the offending attorneys' supplemental briefs, denied their discovery motion, and imposed substantial sanctions: the attorneys were ordered to reimburse the court $26,100 for its wasted time and to pay an additional $5,000 in fees to opposing counsel. Special Master Wilner stated plainly that, in an era of rapid AI advancement, no reasonably competent attorney should delegate research and writing to this technology without exhaustively verifying the material. He further stressed the ethical duty to disclose the 'sketchy AI origins' of such material when sharing it with other legal professionals, since failing to do so puts them at significant risk.
United States Magistrate Judge Mark J. Dinsmore echoed these sentiments in Mid Cent. Operating Eng'rs Health & Welfare Fund v. HoosierVac LLC, where an attorney submitted briefs containing hallucinated information on three separate occasions. Judge Dinsmore recommended a personal sanction of $15,000 against the attorney and drew a clear line between using AI as an aid and relying on it uncritically. In his view, leveraging AI for initial, high-level research, even with non-legal AI programs, can provide a useful overview; what is unacceptable is depending on a generative AI program's output without verifying the current legal treatment, the validity, or even the existence of the cases and authorities it presents. Confirming that a cited case is good law is a basic, routine expectation of any practicing attorney, a fundamental element of professional competence from which AI tools grant no exemption.
The importance of rigorous verification was further underscored in N.Z. v. Fenix Int'l Ltd., where the court imposed sanctions on an attorney who had used ChatGPT to help draft opposition briefs but failed to verify the accuracy and validity of the AI-generated material. A key failure was the attorney's inability to recognize when and how ChatGPT was modifying or 'cross-pollinating' her research and writing by blending in additional legal concepts and authorities. The ruling underscores the danger of assuming AI output is reliable without independent confirmation and reinforces the consistent judicial message that attorneys bear ultimate responsibility for the factual and legal accuracy of their submissions, regardless of the tools used to prepare them.
The message from courts and ethics committees is consistent and unequivocal: integrating artificial intelligence into legal practice does not alter a lawyer's fundamental professional duties or lower the required standard of competence. Every case, citation, and legal proposition in a lawyer's work must be read, checked, and confirmed, and that verification is an integral component of an attorney's professional and ethical responsibilities. Generative AI cannot exercise genuine legal judgment: it cannot independently ascertain what the law actually is, confirm that a cited case exists, or determine its applicability to a specific set of facts. Those responsibilities remain squarely with the human lawyer, who bears complete responsibility for the final work product. The advent of AI has not shifted that accountability, and it will not; the burden of accuracy stops with the attorney.