The Hidden Dangers of Using AI in Legal Filings: Why “Just Let ChatGPT Draft It” Can Land You in Real Trouble
Artificial intelligence tools have transformed how lawyers research, write, and manage their workloads. Used wisely, they can speed up routine tasks and free attorneys to focus on higher-value strategy. But as recent headlines and court sanctions have shown, relying on general-purpose AI systems to generate legal filings is risky—and, in some cases, professionally perilous.
Below are the key dangers practitioners should understand before integrating AI-generated content into anything submitted to a court.
1. AI Can Make Up Cases, Quotes, and Statutes
Most large language models (LLMs) generate text by predicting what looks right, not by retrieving verified legal authorities. As a result, they can produce fabricated case citations, nonexistent judges, incorrect holdings from real cases, and invented statutes or procedural rules.
Courts across the U.S. have begun issuing sanctions for filings containing “hallucinated” authorities. Even when the narrative sounds plausible, the underlying law can be entirely fictional.
This risk recently came to light in Chukwuemeka Mezu v. Kristen Mezu, in which counsel submitted a brief containing fabricated citations. When the fabrications were discovered, the Appellate Court of Maryland held that “the failure to use AI responsibly in legal research raises ethical issues and can result in sanctions when used improperly. It is unquestionably improper for an attorney to submit a brief with fake cases generated by AI.” Counsel was subsequently referred to the Attorney Grievance Commission.
AI-generated legal text must undergo the same verification as any junior associate’s first draft—arguably more.
2. Confidentiality Risks
To draft or analyze documents, some users paste confidential or privileged information into an online AI tool. If the provider stores prompts or uses them to retrain its models, that information may be exposed—or at least handled in ways inconsistent with an attorney’s ethical and confidentiality obligations.
While enterprise tools may offer privacy protections, consumer-grade AI chat interfaces often do not guarantee confidentiality.
A safer practice is to use vetted enterprise solutions or local/offline tools, and always to understand your provider’s data retention policies before entering client information.
3. Unauthorized Practice of Law (UPL) Concerns
Nonlawyers using AI to produce complex legal filings may unintentionally engage in the unauthorized practice of law. Even attorneys could run afoul of ethical rules if they delegate legal reasoning to a tool without exercising adequate supervision.
Courts and bar associations increasingly emphasize that AI is not a substitute for professional judgment.
4. Lack of Jurisdiction-Specific Nuance
LLMs are trained on general text drawn from many jurisdictions. But legal filings require precise procedural rules, local court customs, jurisdiction-specific precedent, and exact formatting and citation standards.
AI often muddies these distinctions, generating arguments or citations appropriate for the wrong court—or the wrong century.
5. Bias and Inaccuracy in Training Data
If the model’s training data contains biased or outdated materials, the output may reflect those flaws—citing superseded law, for example, or echoing skewed characterizations of parties and claims.
6. Overconfidence and Lack of Explainability
One of AI’s most dangerous traits is its confident tone. A filing can read smoothly and persuasively while still being legally incorrect. And unlike a human drafter, an AI tool cannot explain why it believes a particular argument works or where a proposition comes from. This false sense of reliability tempts users to accept generated content without proper scrutiny.
7. Ethical and Professional Responsibility Violations
Attorneys who submit AI-generated work without verification may violate rules concerning competence, candor toward the tribunal, diligence, supervision of nonlawyers, and confidentiality.
Some courts now require explicit disclosure of AI involvement in filings, and failure to comply can trigger sanctions.
8. Reputational Damage
Even a single filing containing AI-generated errors can follow an attorney—or a firm—for years. Public sanctions orders spread quickly, often becoming cautionary tales across the legal community.
AI can still be a powerful tool when used with discipline. Never submit AI-generated text without verifying every authority it cites. Use AI as a drafting assistant, not a legal analyst. And at the firm level, train staff and clients on proper use and limitations.
AI is here to stay, and its capabilities will only continue to grow. But in the legal profession—where precision, credibility, and judgment are everything—unchecked reliance on AI can lead to catastrophic mistakes. Used thoughtfully, AI can enhance efficiency. Used carelessly, it can jeopardize careers, cases, and clients.
Written by Michael P. Divver, Esq.