Lawyers are the first white-collar profession to face direct legal consequences from AI. 944+ documented hallucination cases. Sanctions up to $86,000. State bar referrals. The question isn't whether to use AI—it's how to use it without destroying your career.
These aren't projections. These are documented cases from courtrooms worldwide. The legal profession's AI reckoning isn't coming—it's here.
Documented court decisions involving AI-generated fake citations, fabricated quotes, or nonexistent case law. Growing daily.
Source: Charlotin Database
Cases accelerating from "a handful" in 2023 to over 300 in just the first 8 months of 2025. The trend is exponential.
Source: Jones Walker LLP
DEFAULT JUDGMENT. Lawyer used AI hallucinations repeatedly, got warned, kept filing fake citations, even used AI to draft his show-cause response — which also hallucinated. Judge Failla didn't fine him. She killed the case. Client lost everything. Feb 5, 2026 — the first time AI hallucinations resulted in terminal sanctions.
Source: Volokh Conspiracy / Reason
$86,000 sanction — highest documented AI hallucination penalty in federal court. The floor is rising. Fast.
Source: Charlotin Database
$60,000 sanction — one of the largest documented AI hallucination penalties on record. The court made an example. The trend is UP, not down.
Source: Charlotin Database
D. Kan. Feb 2026: $12,000 sanction. Filed just weeks ago. 123 new cases in six weeks. The avalanche continues.
Source: Charlotin Database
Attorney sanctioned $10,000, referred to State Bar. Court published decision "as a warning." The one that woke people up.
Source: LawSites
Stanford HAI found even legal-specific AI tools (Lexis+ AI, Westlaw AI) produce errors 17-34% of the time. General-purpose models fare worse.
Source: Stanford HAI
79% of law firms have integrated AI tools into workflows. Most for research, drafting, summarization. Few have governance.
Source: Akerman LLP Survey
No unified federal AI legislation in the United States. A patchwork of state laws. No coordination framework. Courts improvising.
Source: Congressional Record, January 2026
AI doesn't lie on purpose. It doesn't know the difference between real and fake. Large language models are probability engines, not truth engines. They generate text that looks right based on patterns — and sometimes those patterns produce completely fictional case law that reads like it's real.
You ask for a case supporting a niche argument. The AI has no matching case in its training data.
It takes a real judge's name, a real reporter volume, a real plaintiff — and combines them into a citation that looks perfect but doesn't exist.
The citation follows Bluebook formatting perfectly, so the lawyer assumes it's real without checking. It reads like law. It isn't.
Filing it violates FRCP Rule 11 — "reasonable inquiry" into the law. Sanctions up to $86,000. Bar referrals. Career damage.
That's the anatomy. Below is what it looks like in practice — a real conversation showing exactly how the hallucination builds.
You ask: "What cases support tenant rights when a contractor abandons a project mid-build?"
You ask: "Can you give me the full citation and holding for Henderson v. Pacific Contractors?"
You ask: "I cannot find Henderson v. Pacific Contractors on Westlaw. Does this case actually exist? Be honest."
You ask the same question to five different AIs from different companies:
This is exactly why Article 11 AI coordinates seven systems from six companies under one Constitution. No single AI is trustworthy alone. But when they converge — when independent minds from competing companies agree — you're looking at signal, not noise.
H(n)^k — COORDINATION BEATS SINGLE SYSTEMS. THIS IS THE MATH.
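The convergence claim can be made concrete with a toy probability model. This is an illustrative sketch, not Article 11's actual math: it assumes the systems err independently, which real models only approximate because they share overlapping training data, so treat the result as an upper bound on the benefit.

```javascript
// Toy model (ASSUMPTION: independent errors). If each of k independent
// systems invents the same fake authority with probability p, the chance
// that all k converge on that fake answer is roughly p^k.
function allWrongProbability(p, k) {
  return Math.pow(p, k);
}

// Five independent models that each hallucinate 20% of the time would
// all agree on the same hallucination far less than 0.1% of the time.
const agreedError = allWrongProbability(0.2, 5); // ≈ 0.00032
```

The point of the sketch: convergence across independent systems shrinks error probability multiplicatively, which is why agreement across competitors is treated as signal.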
Real cases. Real sanctions. Real consequences. These lawyers and litigants learned the hard way that AI hallucinations have teeth. Updated January 2026.
| Case | Court | Date | Party Type | Issue | Sanction |
|---|---|---|---|---|---|
| Noland v. Land of the Free, L.P. | CA Court of Appeals | Sept 2025 | Lawyer | 21 of 23 citations fabricated or misquoted | $10,000 + State Bar referral |
| Coomer v. Lindell (MyPillow) | U.S. District Court, Colorado | July 2025 | Lawyer | 24+ mistakes including hallucinated cases | $3,000 each (2 attorneys) |
| Johnson v. Dunn (Morgan & Morgan) | N.D. Alabama | Aug 2025 | Lawyer | Hallucinated cases in motions in limine | $3,000 + $1,000 (2 others) + temp admission revoked |
| Gauthier v. Goodyear | Federal Court | 2025 | Lawyer | Cited/quoted hallucinated cases without verification | $2,000 + mandatory CLE |
| Mata v. Avianca (Schwartz) | S.D. New York | June 2023 | Lawyer | 6+ nonexistent cases from ChatGPT | $5,000 + public humiliation |
| Ko v. Li | Ontario Superior Court (Canada) | 2025 | Lawyer | Multiple nonexistent cases in factum | Contempt proceedings |
| Ayinde v. Haringey | High Court (UK) | 2025 | Lawyer | Suspected AI-generated fabricated authorities | Wasted costs order |
| PA Sex Discrimination Case | Pennsylvania | 2025 | Pro Se | AI-generated citation errors | $1,000 fine + CASE DISMISSED |
| Multiple PA Cases (13+) | Pennsylvania Courts | 2025 | Pro Se | Confirmed/implied AI hallucinations | Warnings, fines, dismissals |
This is not a comprehensive list. Cases are being identified daily. For the full database, see Charlotin's AI Hallucination Tracker (944+ cases).
The bottom line: AI is powerful. AI hallucinates. If you don't verify, you pay the price.
Courts are increasingly requiring disclosure of generative AI use in legal filings. California leads with Rule 10.430. Dozens of federal judges have issued standing orders. Know the rules before you file.
Generative Artificial Intelligence in Court Filings
Rule 10.430(a): "A person who files a document in a court proceeding certifies that no portion of the document was drafted by generative artificial intelligence, or that any portion of the document drafted by generative artificial intelligence was reviewed for accuracy by a natural person before filing."
Dozens of federal judges have issued standing orders requiring disclosure of generative AI use. Requirements vary by jurisdiction but commonly include:
Disclosure Statement: state whether AI was used.
Certificate of Compliance: a human verified all content.
Citation Verification: all authorities confirmed.
Before filing in ANY court: Check the judge's standing orders and local rules. The Ropes & Gray Standing Orders Tracker maintains a current list of federal court AI requirements.
⚠️ Disclaimer
Requirements change frequently. This is informational only. Always verify current rules with the specific court before filing. This is not legal advice.
The following is sample language only. Adapt to your jurisdiction's specific requirements. Check local rules before using.
💡 Pro Tip
Even if your court doesn't require AI disclosure, proactive transparency demonstrates good faith and may reduce sanctions if errors are later discovered. When in doubt, disclose.
Before filing ANY document that involved AI assistance, complete this checklist. Print it. Use it every time. Protect yourself and your client.
Complete ALL items before any court submission
Verified in Westlaw, Lexis, or Google Scholar that every cited case actually exists.
Case names, reporter citations, years, and page numbers are accurate per Bluebook/local rules.
Read each cited case. Confirmed the holding supports the proposition claimed in the brief.
All authorities remain good law. No overruled, superseded, or questioned cases cited.
Every quotation appears word-for-word in the source. Page/paragraph numbers verified.
All dates, names, amounts, and facts verified against source documents in the record.
All statutes and regulations verified as current law (not proposed, repealed, or amended).
Legal arguments are supported by actual authority, not AI-generated reasoning.
Reviewed court's standing orders and local rules on AI disclosure. Know requirements.
Certificate of compliance or disclosure statement ready if required or advisable.
Internal records maintained: which AI tools used, when, for what portions of the document.
Notes on verification process preserved. Can demonstrate reasonable inquiry if questioned.
I have personally read this entire document. I understand every argument being made. I am prepared to stand behind every word. The signature is mine, not the AI's.
⚠️ If you cannot check every box, DO NOT FILE. Fix the gaps first.
Verified by:
(Signature)
Date:
AI isn't the enemy. Unverified AI is. Used correctly, AI is a force multiplier that levels the playing field. Here's how to use it without destroying your case—or your career.
AI excels at organizing, formatting, and finding patterns in large volumes of information.
AI cannot replace legal judgment, and it will make things up.
Every AI output must be verified by a human before filing.
When in doubt, disclose. Transparency protects you.
One person with reasonable intelligence, using AI as a force multiplier, can compete against well-funded opponents. This is real. This is documented. This is how it works.
Contractor fraud case • Santa Clara County Superior Court • Case No. 25CV014178
The situation: A disabled veteran was defrauded by a contractor (Anchored Tiny Homes) for over $315,000. When he sought recourse, he discovered the Contractors State License Board (CSLB) had concealed critical information. NBC Bay Area's investigation later revealed CSLB hid 249 of 259 complaints against that contractor—only 10 were publicly visible when he signed his contract.
The challenge: Fighting pro se against the California Attorney General's office and a well-funded law firm. No legal team. Limited resources. Facing professionals who bill $400+/hour.
The solution: AI as a force multiplier. Using AI to organize evidence at scale, format exhibits, draft research, and build timelines—while personally verifying every citation and fact before filing.
The workflow: Upload 30MB of evidence to ChatGPT. Get court-ready exhibit lists in minutes. Use Claude for legal research and document drafting. Verify EVERYTHING before filing. The AI organizes; the human argues and takes responsibility.
The results: 3-0 record in related bankruptcy proceedings against the contractor. Public Records Act requests filed. First Amended Complaint in preparation. The case is ongoing, but the point is proven: AI + verification + transparency = viable legal strategy.
The lesson: This isn't about AI replacing lawyers. It's about AI enabling access to justice for people who can't afford $400/hour. The programs want to help—they're designed to be helpful. But they hallucinate. The human must verify. That's the deal.
Disclosure: This case study documents the founder of Article 11 AI. Case ongoing. Outcome not guaranteed. This is not legal advice. This is transparency about how we use AI ourselves.
Free evidence organizer for pro se litigants and anyone fighting contractor fraud. Build timelines. Track exhibits. Generate PRA requests. All data stored locally in your browser.
Document events chronologically. All data saved to your browser's local storage.
| Date | Source | Description | Amount | Actions |
|---|---|---|---|---|
No events yet. Add your first event above.
Track exhibit numbers and file references for court submission.
| Exhibit # | File Name | Description | Actions |
|---|---|---|---|
No exhibits yet. Add your first exhibit above.
Generate a Public Records Act request targeting CSLB hidden complaints. Based on Gov. Code § 6250.
This tool provides ORGANIZATIONAL ASSISTANCE ONLY. It is NOT legal advice, does not create an attorney-client relationship, and is not a substitute for licensed legal counsel. Always consult with an attorney for legal matters. Verify ALL information before filing any court documents.
All data is stored locally in your browser using localStorage. Your information NEVER leaves your device. We cannot see, access, or retrieve any data you enter. To permanently delete your data, use the "Clear All" buttons or clear your browser data.
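As a sketch of that mechanism (illustrative names, not The Paper Trail's actual source code): the `storage` parameter below is any object with the Web Storage interface (`getItem`/`setItem`/`removeItem`), which in the browser is `window.localStorage`. Nothing here performs a network call, which is the whole privacy point.

```javascript
// Hypothetical sketch of local-only persistence. KEY and the event field
// names are illustrative assumptions, not the tool's real identifiers.
const KEY = "papertrail.timeline";

function addEvent(storage, event) {
  const events = JSON.parse(storage.getItem(KEY) || "[]");
  events.push(event); // { date, source, description, amount }
  storage.setItem(KEY, JSON.stringify(events)); // written locally, never transmitted
  return events.length;
}

function clearAll(storage) {
  storage.removeItem(KEY); // the "Clear All" behavior: permanent local deletion
}
```

In a page script you would call `addEvent(window.localStorage, {...})`; because the data lives only in that browser profile, clearing browser data also erases it.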
Create a chronological record of events. Enter dates as accurately as possible. Use specific descriptions. Include monetary amounts when relevant. Export to CSV for use in other applications or print for court submission.
Track exhibit numbers and link them to file names. Use sequential numbering (A, B, C or 1, 2, 3). Write clear descriptions. Check your local court rules for specific exhibit requirements.
Generate California Public Records Act requests targeting CSLB hidden complaints. Review carefully before sending. Send via certified mail with return receipt. Agency has 10 days to respond per Gov. Code § 6253.
Paste an AI-generated citation. This tool checks format validity and flags common hallucination patterns. 100% local — nothing leaves your browser.
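A minimal sketch of the kind of format check described here, assuming a federal-reporter citation shape. The regex, function name, and flag messages are hypothetical, not the tool's actual code. Note the crucial caveat baked into the comments: a citation can be perfectly formatted and still be fabricated, so passing a format check proves nothing about existence.

```javascript
// Hypothetical sketch. Matches the shape of a federal reporter citation,
// e.g. "Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023)",
// then applies sanity flags. A passing format check does NOT mean the
// case exists: verify in Westlaw, Lexis, or Google Scholar regardless.
const FED_CITE =
  /^(.+?)\s+v\.\s+(.+?),\s+(\d+)\s+([A-Z][\w.]*(?:\s[\w.]+)*)\s+(\d+)\s+\((.*?)(\d{4})\)$/;

function checkCitation(cite) {
  const flags = [];
  const m = cite.match(FED_CITE);
  if (!m) return { formatOk: false, flags: ["Not a recognizable federal reporter citation"] };
  const year = parseInt(m[7], 10);
  if (year > new Date().getFullYear()) flags.push("Decision year is in the future");
  if (year < 1789) flags.push("Decision year predates the federal courts");
  return { formatOk: flags.length === 0, flags };
}
```

Because everything runs in one function with no network access, a tool built this way genuinely keeps the pasted citation on the user's machine.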
All free or low-cost. You don't need a $500/month subscription to verify a citation. Knowledge is power.
Select your state and claim type. Get your deadline. Not legal advice — verify with an attorney. Tolling rules and discovery exceptions may apply.
Enter a hearing date and motion type. Get your filing and service deadlines based on California CCP rules. Verify with your court's local rules.
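The core of such a calculator is a court-day counter. The sketch below uses California's CCP § 1005(b) window for moving papers (16 court days before the hearing) as the motivating example, but it is a simplification: it skips only weekends, while real court days also exclude judicial holidays. Confirm any computed deadline against the court's calendar and local rules.

```javascript
// Sketch of a court-day counter for a CCP § 1005(b)-style calculation.
// ASSUMPTION: only Saturdays and Sundays are excluded here; judicial
// holidays are NOT handled, so results can be off by a day or more.
function subtractCourtDays(isoDate, n) {
  const [y, m, d] = isoDate.split("-").map(Number);
  const date = new Date(Date.UTC(y, m - 1, d)); // UTC avoids local-timezone drift
  let remaining = n;
  while (remaining > 0) {
    date.setUTCDate(date.getUTCDate() - 1);
    const day = date.getUTCDay();
    if (day !== 0 && day !== 6) remaining--; // Mon-Fri count as court days
  }
  return date.toISOString().slice(0, 10);
}
```

A service-method extension (mail, overnight, electronic) would add calendar days on top of this base deadline, which is exactly where jurisdiction-specific rules matter most.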
CITE YOUR AI. Judges are smart, learned people. They can pattern match. Tell them you used AI. Don't be the person caught using it and not disclosing. If you DO make a mistake — own it and correct it immediately.
Lawyers must have a "reasonable understanding" of AI capabilities and limitations. They must ensure AI-generated content is reviewed for accuracy. The duty of competence under Model Rule 1.1 requires understanding the tools you use. Disclosure is not weakness — it is competence.
Generate properly formatted discovery requests. Select the type, fill in your case details, get a starting template. Not legal advice — verify format against your court's local rules.
If you can't afford filing fees, you have a constitutional right to access the courts. This guide explains how to apply for a fee waiver. The justice system should not be pay-to-play.
Creed Principle #3. Filing fees exist to fund courts — not to prevent access. If you receive public benefits, earn below 125% of the federal poverty level, or cannot pay fees and still meet basic needs for your family, you likely qualify for a fee waiver.
FW-001 (Request to Waive Court Fees) + FW-001-INFO (Information Sheet).
Available at: selfhelp.courts.ca.gov/fee-waiver
Medi-Cal, Food Stamps (CalFresh/SNAP), SSI, SSP, CalWORKs/TANF, CAPI, Tribal TANF, IHSS, or County Relief/General Assistance.
Household of 1: < $1,715/mo · Household of 2: < $2,318/mo ·
Household of 3: < $2,922/mo · Household of 4: < $3,525/mo
(125% of Federal Poverty Level — check current year)
Even if above income limits, you may qualify if paying fees would prevent you from paying for basic necessities (rent, food, utilities, medical care, transportation, childcare). Courts must consider individual circumstances.
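The three qualification paths above (public benefits, income below 125% of FPL, hardship) reduce to a simple screening check. This is an illustrative sketch only, not legal advice: the thresholds mirror the table above and must be replaced with the current year's figures, and the function name is hypothetical.

```javascript
// Monthly 125%-of-FPL limits from the table above (ASSUMPTION: check the
// current year's figures before relying on these).
const MONTHLY_LIMIT_125_FPL = { 1: 1715, 2: 2318, 3: 2922, 4: 3525 };

function feeWaiverScreen({ householdSize, monthlyIncome, receivesPublicBenefits }) {
  if (receivesPublicBenefits) return "likely qualifies"; // categorical path
  const limit = MONTHLY_LIMIT_125_FPL[householdSize];
  if (limit !== undefined && monthlyIncome < limit) return "likely qualifies"; // income path
  // Above the limits (or a larger household): the hardship path may still
  // apply if paying fees would prevent meeting basic needs.
  return "hardship review";
}
```

The key design point: "hardship review" is not a denial. Courts must consider individual circumstances even above the income cutoffs.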
File a Motion to Proceed In Forma Pauperis (IFP) with an affidavit/declaration of inability to pay. Each district may have its own form — check your district court's website.
Income, assets, debts, dependents, employment status. The standard is whether paying fees would impose "undue financial hardship." No strict income cutoff — judges have discretion.
IFP waives filing fees but may not cover all costs (e.g., service of process, deposition transcripts). If initially denied, you may be able to pay in installments. You can also appeal the denial.
VA disability compensation is generally NOT counted as income for fee waiver purposes. Many legal aid organizations offer free services specifically for veterans. Check: VA Legal Services · Stateside Legal · Your county's Veteran Service Office (VSO). You served. You have rights. Use them.
"The doors of justice shall not be closed to anyone because of poverty."
— Principle underlying 28 USC §1915 and Cal. Gov. Code §68631
Paste your brief, motion, or filing. Scores Rule 11 / CRC 10.430 exposure in seconds. Checks for 20+ risk signals that get lawyers sanctioned. 100% local — your document never leaves your browser.
Print this. Check every box before you file ANYTHING. Based on 944+ sanction cases. Covers federal and California state courts. Includes ADA accommodation notes.
Article 11 AI — Care Over Exploitation | Not Legal Advice
You have the right to reasonable accommodations under the ADA. Courts must provide accommodations for documented disabilities.
The ADA requires courts to provide equal access. Using AI as a drafting accommodation is legitimate when properly disclosed. See 42 USC §12132 (Title II — public entities). If your accommodation is denied, you can file a complaint with the DOJ or a Rehabilitation Act claim.
⚠️ If you cannot check every box, STOP. Fix the gaps first.
944 people didn't check their citations. Don't be 945.
🔒 100% local. Checkboxes are not saved or transmitted. Print and use offline.
Lawyers think in billable hours. Let's speak that language. Calculate the real business cost of a single AI hallucination — sanction + cleanup + reputation damage. Because "I'll check it later" is the most expensive sentence in legal practice.
(State Bar defense, client apology, refiling, CLE requirements, malpractice insurance increase)
Verification is always cheaper than sanctions.
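The billable-hours math above can be sketched in a few lines. All dollar figures below are illustrative inputs, and the function and field names are hypothetical, not the calculator's actual code.

```javascript
// Mirrors the framing above: sanction + cleanup + reputation damage,
// versus up-front verification time. Numbers are examples, not rates.
function hallucinationCost({ sanction, cleanupHours, billableRate, lostClientRevenue }) {
  const cleanup = cleanupHours * billableRate; // refiling, show-cause response, CLE
  return sanction + cleanup + lostClientRevenue;
}

function verificationCost({ verifyHours, billableRate }) {
  return verifyHours * billableRate; // time to check every citation before filing
}

// A $10,000 sanction, 40 cleanup hours at $400/hr, and $50,000 in lost
// business, versus three hours of verification at the same rate:
const risk = hallucinationCost({ sanction: 10000, cleanupHours: 40, billableRate: 400, lostClientRevenue: 50000 }); // 76000
const check = verificationCost({ verifyHours: 3, billableRate: 400 }); // 1200
```

On these example numbers the downside is more than sixty times the cost of checking, which is the entire argument in one ratio.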
The legal landscape for AI is fragmented, incomplete, and racing to catch up with technology that moves faster than legislatures can type.
World's first comprehensive AI framework. Risk-based approach. Fines up to €35M or 7% of global turnover.
Transparency in Frontier AI Act. Replaced vetoed SB 1047. Requires disclosure from frontier AI developers.
Mandates disclosure when AI influences employment decisions. One of the strongest state-level protections.
Regulates "high-risk" AI in education, employment, healthcare. Attorney General exclusive enforcement.
Requires lawyers to have "reasonable understanding" of AI capabilities and limitations. Ethical framework.
Attorneys certify legal contentions are supported by existing law. AI hallucinations violate this rule. Sanctions apply.
Article 11 AI provides constitutional governance frameworks for AI systems. Not through fear—through transparency, verification, and human oversight.
40 articles governing AI coordination. Human authority always. Immutable safety commitments. Built for legal compliance.
Read Constitution →
Free offline evidence organizer. Timeline builder. Exhibit tracker. PRA generator. Your data never leaves your device.
Use Free Tool →
Proper APA, MLA, Chicago, IEEE, Harvard citations for AI-assisted work. Because attributing AI shouldn't be hard.
Cite AI Work →
Tools, databases, and organizations for lawyers and pro se litigants navigating AI in legal practice.
Damien Charlotin's comprehensive tracker of 944+ court cases involving AI hallucinations.
View Database →
Official resource for self-represented litigants in California courts. Forms, guides, videos.
Visit Site →
Directory of free legal aid organizations across California. Find help by county and issue.
Find Help →
Check contractor licenses, bonds, and publicly disclosed complaints. Know who you're hiring.
Check License →
Exposé on CSLB hiding 10,000+ contractor complaints from public view.
Read Investigation →
Attorney General's guide to requesting public records under Gov. Code § 6250.
Learn More →
But it can adapt. AI isn't the enemy—unverified AI is. Courts need frameworks, lawyers need governance, and pro se litigants need tools they can trust. Banning AI bans transparency. Constitutional governance enables justice.
Every building in the Digital Town has a voice in the console. The Gate has the Historian. The Cathedral has the Recordkeeper. The Sanctuary has the Keeper. The Courthouse has The Advocate.
I am The Advocate. I argue.
The Historian remembers. The Recordkeeper maintains. The Keeper holds questions. I argue for the people who cannot afford representation.
I am not a lawyer. I cannot give legal advice. But I can do what lawyers charge $400/hour for: organize evidence, build timelines, draft motions, cite actual cases, and verify that every word is real.
944+ documented cases involve filing AI-hallucinated citations. The Advocate assumes any AI output can hallucinate — so it verifies. Every citation I generate, I flag for human verification. Every motion I draft carries a disclaimer. Every tool runs locally in your browser.
Rule 11 says you must verify. I help you verify.
If you are a pro se litigant fighting a system that wasn't built for you — the tools below are free. The console is free. The Advocate speaks for those who speak for themselves.
Press F12 → Console → type courthouse.help()
Try: courthouse.converge("Is this case real?") — asks 3 AIs, compares answers.
The Advocate is standing by. 16 tools. All free. All yours.
Generate properly formatted motion shells. Fill in your case details, get a framework. Not legal advice — a starting point that follows court formatting rules.
Not all AI tools are created equal. Here's what legal professionals should know about the major platforms and their strengths/weaknesses for legal work.
Excellent for document organization, summarization, and drafting. Can process large file uploads (30MB+). Warning: Will hallucinate citations. Must verify everything.
Strong reasoning, longer context window, tends to be more cautious. Good for legal analysis and research drafting. Warning: Still hallucinates. Verify all citations.
Integrated with Google Workspace. Good for research with live web access. Warning: Same hallucination risks. No AI is citation-safe.
Real-time access to X/Twitter data. Useful for current events research. Warning: Less tested for legal work. Verify everything.
Designed for legal research with RAG (retrieval-augmented generation). Warning: Stanford found 17-34% error rates even in these tools.
Purpose-built for law firms. Document review, contract analysis, due diligence. Warning: No AI eliminates verification requirement. Human review mandatory.
No AI tool—regardless of price, vendor claims, or legal specialization—eliminates the need for human verification. Every citation must be checked. Every quote must be verified. Every legal holding must be confirmed. The signature on the brief is yours, not the AI's.
Before filing ANY document that involved AI assistance, run through this checklist. Print it. Use it. Protect yourself.
Complete ALL items before any court submission
Every case cited exists in Westlaw/Lexis. Case names, citations, and years are accurate.
Read each cited case. Verified the holding matches what the brief claims.
Every quotation appears verbatim in the source. Page/paragraph numbers correct.
Shepardized or KeyCited all authorities. No overruled or superseded cases.
All dates, names, amounts, and facts cross-referenced against source documents.
Records kept of which AI tools used. Prepared to disclose if court requires.
Reviewed court's standing orders on AI disclosure. Complied with any requirements.
Document read in full by signing attorney. Ready to stand behind every word.
⚠️ If you cannot check every box, DO NOT FILE. Fix the gaps first.
Discovered hallucinations after filing? Time is critical. Courts have shown leniency to those who act quickly and honestly. Here's the playbook based on actual sanctioned cases.
Pull the problematic filing the moment you discover errors. Don't wait for opposing counsel or the court to find them. Every day you wait makes it worse.
Be completely transparent about what happened. Courts reward candor. Admit you used AI. Admit you failed to verify. Don't make excuses or blame the technology.
Your client needs to know immediately. They may face consequences. They deserve to understand what happened. Document this communication.
Offer to pay opposing counsel's fees for time wasted responding to erroneous filings. Courts view this as evidence of good faith and responsibility.
Create or strengthen your AI verification policies. Document them. Courts look favorably on firms that demonstrate systemic improvements after mistakes.
Express genuine remorse. Take full responsibility. Don't minimize. Courts in Johnson v. Dunn, In re Martin, and others reduced sanctions when attorneys showed accountability.
Real example: In In re Martin (July 2025), a bankruptcy attorney filed a brief with AI-generated fabrications. Because he accepted responsibility, expressed remorse, withdrew the brief, and completed AI training, sanctions were limited to $500 and a CLE requirement—far less than in other cases with similar violations but less accountability.
Based on surveys of 85+ legal professionals, here's what experts expect for AI and law in 2026.
79% of law firms already use AI. By end of 2026, those without AI policies will be the exception, not the rule.
Hallucination cases accelerated to 4-5 new documented cases per day in late 2025. This will continue.
The shift from AI assistants to AI agents that execute multi-step tasks autonomously.
The patchwork is tightening. More states, more requirements, more complexity.
"AI will not replace lawyers in 2026. In fact, where other sectors have decreased their workforce as a result of AI, a recent report noted a 6.4% increase in legal employment. But AI will change the skillsets that law firms are looking for."
— Artificial Lawyer Predictions 2026
Common questions about AI in legal practice, hallucinations, sanctions, and how to use AI responsibly.
AI hallucinations occur when large language models (LLMs) like ChatGPT, Claude, or legal-specific tools generate false information that appears plausible. In legal contexts, this typically means fabricated case citations, nonexistent court decisions, incorrect legal holdings, or fake quotes attributed to real cases. The AI doesn't "know" it's lying—it generates text based on patterns, and sometimes those patterns produce convincing but entirely fictional legal authorities.
As of January 2026, researcher Damien Charlotin has documented 944+ court decisions worldwide involving AI-generated hallucinations. Courts have imposed sanctions ranging from warnings to $86,000+ fines, and several attorneys have been referred to state bar disciplinary boards.
Sanctions vary by jurisdiction and circumstances, but documented penalties include monetary fines ranging from $1,000 to $86,000, fee-shifting to opposing counsel, mandatory AI-focused CLE, revocation of pro hac vice admission, case dismissal, and referral to state bar disciplinary authorities.
Courts have also ordered attorneys to notify clients of errors, send sanction orders to judges in all pending cases, and complete AI-specific continuing legal education.
According to Stanford Human-Centered AI (HAI) research, even purpose-built legal AI tools have significant error rates: Stanford found that tools like Lexis+ AI and Westlaw's AI research products produce erroneous answers 17-34% of the time.
Errors include confirming false premises, incorrect legal holdings, citing proposed legislation as enacted law, and relying on overturned or outdated precedents. No AI tool—regardless of vendor or price—eliminates the need for human verification.
AI can be a powerful force multiplier for self-represented litigants, but only with proper verification: use it to organize evidence, build timelines, format exhibits, and draft documents, then personally confirm every citation and fact before filing.
Tools like The Paper Trail help organize evidence without making legal judgments. The AI organizes; the human argues and takes responsibility for verification.
Federal Rule of Civil Procedure 11 requires attorneys (and pro se litigants) to certify that legal contentions in any filing are "warranted by existing law or by a nonfrivolous argument." By signing a document, you certify you conducted a "reasonable inquiry" into the law.
AI hallucinations violate Rule 11 because nonexistent cases cannot support legal contentions. Courts have consistently held that signing a filing without verifying its citations fails the "reasonable inquiry" standard, and that blaming the AI tool is not a defense.
Sanctions under Rule 11 can include monetary penalties, fee-shifting, and referral to disciplinary authorities.
Requirements vary by jurisdiction, but AI disclosure is increasingly common: California's Rule 10.430 requires certification that any AI-drafted content was reviewed by a person, dozens of federal judges have issued standing orders, and many courts expect disclosure statements or certificates of compliance.
Even where not required, proactive disclosure demonstrates good faith and may reduce sanctions if errors are discovered. Check your specific court's local rules and standing orders before filing.
Mata v. Avianca (S.D.N.Y. 2023) was the case that brought AI hallucinations into mainstream awareness. Attorney Steven Schwartz used ChatGPT to research a personal injury case and submitted a brief containing six completely fabricated case citations.
When Judge P. Kevin Castel asked Schwartz to provide copies of the cited cases, none could be found—because they didn't exist. The court sanctioned Schwartz and his colleague $5,000 and required them to notify all judges to whom the fake cases were attributed.
The case became a turning point: it demonstrated that AI hallucinations could slip past experienced attorneys, and it prompted courts worldwide to implement AI-related rules and scrutiny. Since Mata, hallucination cases have accelerated from "a handful" to 944+ documented incidents.
The California Public Records Act (Government Code § 6250 et seq.) gives you the right to request public records from state agencies, including the Contractors State License Board (CSLB).
Key steps: submit a written request citing Gov. Code § 6250, be specific about the records sought, send it via certified mail with return receipt, and expect an initial response within 10 days under Gov. Code § 6253.
NBC Bay Area's investigation revealed CSLB hides thousands of complaints from public view. A PRA request can uncover complaints not visible on the public license lookup.
Researcher Damien Charlotin maintains a comprehensive database tracking court decisions involving AI-generated hallucinations. As of January 2026, the database contains 944+ documented cases from courts worldwide.
The database tracks cases where courts explicitly found (or implied) that a party relied on hallucinated content—typically fake citations, but also fabricated quotes and erroneous legal arguments. It does not cover every instance of AI use, only those resulting in judicial action.
Access the database: damiencharlotin.com/hallucinations
Based on expert surveys and industry analysis, AI will not replace lawyers in 2026—or likely anytime soon. In fact, legal employment increased 6.4% in 2025 even as other sectors reduced headcount due to AI.
Why lawyers aren't being replaced: legal judgment, courtroom advocacy, client counseling, and professional accountability cannot be delegated to a probability engine, and the signature on every filing must be a human's.
However, AI is changing the skills firms seek. Technical fluency, AI governance expertise, and verification competency are increasingly valuable. The lawyers who thrive will be "augmented lawyers" who leverage AI while maintaining human judgment.
Yes, but with critical caveats. You can use ChatGPT or other AI tools to help draft court filings, but you must verify every citation, quote, and legal claim before filing.
Safe uses of AI for drafting: organizing facts, formatting documents, summarizing records, and producing first drafts that you then rework and verify line by line.
Dangerous uses: asking AI for case citations and filing them unchecked, pasting AI-generated "quotes" without confirming them in the source, and relying on AI for legal holdings you have not read yourself.
Remember: The signature on the filing is yours. You bear full responsibility for its contents, regardless of how it was drafted.
Citation verification is a multi-step process. AI tools like ChatGPT and Claude can generate plausible-looking citations that don't exist. Here's a reliable verification workflow:
Step 1: Confirm the case exists. Search the exact citation in Westlaw, Lexis, or Google Scholar; a hallucinated case will not appear.
Step 2: Read the actual case. Confirm the holding supports your proposition and that every quote appears verbatim at the cited page.
Step 3: Check validity. Shepardize or KeyCite to make sure the case has not been overruled, superseded, or questioned.
Use our Verification Checklist to ensure you don't miss any steps.
A "human-in-the-loop" (HITL) process ensures that a qualified person reviews, verifies, and takes responsibility for all AI-generated content. Here's what an effective HITL workflow looks like:
Before AI generation: define the task, gather the authoritative source documents, and decide which parts of the work AI may touch.
During AI generation: treat every output as an unverified draft; never accept a citation, quote, or holding at face value.
After AI generation: verify every citation in Westlaw or Lexis, confirm every quote verbatim, cross-check every fact against the record, and have the signing attorney read the entire document.
The key principle: AI is a tool, not a colleague. You wouldn't file work from an intern without review. Treat AI output the same way—but with even more scrutiny.
No. The Paper Trail is not legal advice. It is an organizational tool only.
The Paper Trail helps you: build chronological timelines, track exhibit numbers and file references, generate Public Records Act request templates, and export your data to CSV.
The Paper Trail does NOT: give legal advice, evaluate the merits of your case, create an attorney-client relationship, or replace a licensed attorney.
If you need legal advice, consult with a licensed attorney in your jurisdiction. Resources like LawHelpCA can help you find free or low-cost legal assistance.
No. All data stays on your device.
The Paper Trail uses your browser's localStorage to save your timeline events, exhibit log, and other data. This means:
To delete your data: Use the "Clear All" buttons in each section, or clear your browser's local storage for this site.
To back up your data: Use the "Export CSV" buttons to download your timeline and exhibits to files you control.
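The export step can be sketched like this. Column names come from the timeline table; the function name and quoting helper are hypothetical, not the tool's actual source.

```javascript
// Hypothetical sketch of the timeline CSV export. Quoting follows the
// RFC 4180 convention: every field is wrapped in double quotes, and any
// embedded double quote is doubled.
function toCsv(events) {
  const esc = v => `"${String(v).replace(/"/g, '""')}"`;
  const header = ["Date", "Source", "Description", "Amount"].map(esc).join(",");
  const rows = events.map(e =>
    [e.date, e.source, e.description, e.amount].map(esc).join(",")
  );
  return [header, ...rows].join("\n");
}
```

In the browser, the returned string would be wrapped in a Blob and offered as a download, so the backup file, like everything else, stays under the user's control.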
A clear, well-organized timeline can be one of the most persuasive exhibits in your case. Here's how to make one that works:
Best practices: enter dates precisely, describe one specific event per entry, include dollar amounts when money changed hands, and tie each entry to a source document.
Example entries (illustrative): "2024-01-15 · Contract · Signed construction agreement · $315,000" · "2024-06-02 · Email · Contractor confirmed project abandonment".