March 20, 2026

AI Hallucinations in Legal Filings: What the Nebraska Supreme Court Case Means for Your Firm


A Nebraska attorney's fabricated citations just cost his client an appeal. Here's how law firms can prevent AI-generated errors from destroying cases and careers.

What Happened in Nebraska

On March 20, 2026, the Nebraska Supreme Court issued a ruling that should be mandatory reading for every attorney using AI tools: an Omaha divorce lawyer's appellate brief was struck from the record after it was found to contain 20 fabricated "hallucinated" citations — including fictitious quotes from real cases, misrepresented holdings, and references to cases that simply don't exist.

Of the 63 references in attorney Greg Lake's brief, 57 contained some form of defect.

The consequences were severe. The court struck the brief, referred the attorney for disciplinary action, affirmed the lower court's ruling against his client (a father seeking custody modifications), and left the door open for the opposing party to recover attorney fees. A real client lost a real appeal because his lawyer didn't verify what an AI told him.

The court's admonition was direct: "Whether using AI or not, the obligations of candor, competency, diligence, and making good faith arguments remain the same."

This Keeps Happening — And It's a Pattern

Nebraska's case isn't an isolated incident. Since ChatGPT's public launch, courts across the country have dealt with AI-fabricated citations:

  • Mata v. Avianca (2023) — A New York attorney submitted a brief with six completely fictitious cases generated by ChatGPT, leading to sanctions.
  • Park v. Kim (2024) — The Second Circuit referred a New York attorney to its grievance panel after her brief cited a nonexistent case generated by ChatGPT.
  • Numerous federal judges and several courts have since adopted standing orders or local rules requiring attorneys to disclose or certify their use of generative AI in filings.

The pattern is always the same: an attorney uses a generative AI tool to draft or research a filing, the AI confidently produces realistic-looking but fabricated citations, and the attorney submits the work without verification. By the time opposing counsel or the court catches the errors, the damage is done — to the client, to the attorney's career, and to public trust in the profession.

What makes the Nebraska case particularly instructive is the court's emphasis on how easy verification would have been. A simple search on Westlaw, LexisNexis, or even the free Nebraska Appellate Courts Online Library would have flagged every single fabricated citation.

Why AI Hallucinations Are Uniquely Dangerous in Legal Work

Generative AI models don't "know" the law. They predict the next most likely sequence of text based on training data. When asked to produce legal citations, they generate text that looks like a citation — correct formatting, plausible case names, realistic-sounding holdings — without any mechanism to verify whether the case actually exists.

This is especially dangerous in legal work because:

  1. The output looks authoritative. AI-generated citations follow proper Bluebook formatting and reference real courts and reporters. They pass the eye test.
  2. The errors are subtle. The Nebraska brief didn't just invent cases out of whole cloth; it also cited real cases for holdings they don't contain and attributed quotes that appear nowhere in the opinions. That's harder to catch on a quick skim.
  3. The stakes are irreversible. Once a brief is filed, the damage is done. Unlike a typo in a business email, a fabricated citation can result in sanctions, malpractice claims, and disbarment.
  4. Attorneys are ethically bound. Rules of Professional Conduct require candor toward the tribunal. "My AI made it up" is not a defense — it's an admission of failure to supervise.

How to Prevent This: A Practical Framework

The good news is that preventing AI hallucination disasters doesn't require abandoning AI tools entirely. It requires building verification into your workflow the same way you'd build quality checks into any other process.

1. Adopt a Firm-Wide AI Usage Policy

Every law firm using AI tools needs a written policy that covers:

  • Which tools are approved for which tasks (drafting, research, summarization, etc.)
  • Mandatory verification requirements before any AI-assisted work product is filed
  • Disclosure obligations — many courts now require affirmative disclosure of AI use
  • Training requirements for all attorneys and staff using AI tools

A policy doesn't have to ban AI. It has to ensure that AI output is never the final product.

2. Implement a Citation Verification Workflow

This is the single most critical safeguard. Before any filing:

  • Every case citation must be verified in a primary legal database (Westlaw, LexisNexis, or a free court library)
  • Every quoted holding must be confirmed against the actual case text
  • Every statutory reference must be checked against current published statutes
  • Cross-check AI summaries against the actual source material — don't just verify the citation exists, verify it says what the AI claims it says

This step would have caught 100% of the errors in the Nebraska case.
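For firms with in-house technical help, part of this checklist can be turned into an automated first pass before human review. The sketch below is only an illustration of the idea under stated assumptions, not a finished tool: it pulls reporter-style citations out of a draft with a simple pattern and routes each one to a lookup step. Every name in it (extract_citations, verify_citation_exists, first_pass_check) is hypothetical, and the lookup itself is left as a placeholder you would connect to your licensed research database or a free court library. Note what it cannot do: confirming that a real case actually says what the brief claims still requires a person reading the case.

```python
import re

# Rough pattern for reporter-style citations such as "597 U.S. 215" or "123 F.3d 456".
# A production workflow would use a purpose-built extractor; this is only a sketch.
CITATION_PATTERN = re.compile(r"\b\d{1,4}\s+[A-Z][A-Za-z0-9.\s]{0,15}?\s+\d{1,5}\b")


def extract_citations(draft_text: str) -> list[str]:
    """Pull citation-like strings out of a draft brief."""
    return [m.group(0).strip() for m in CITATION_PATTERN.finditer(draft_text)]


def verify_citation_exists(citation: str) -> bool:
    """Placeholder lookup: wire this to Westlaw, Lexis, or a free court library.

    This is a hypothetical hook, not a real API. Confirming that the case exists
    is the easy half; confirming it says what the AI claims requires reading it.
    """
    raise NotImplementedError("Connect this to your firm's research database.")


def first_pass_check(draft_text: str) -> list[str]:
    """Return every citation that could not be confirmed, for manual review."""
    flagged = []
    for citation in extract_citations(draft_text):
        try:
            if not verify_citation_exists(citation):
                flagged.append(citation)
        except NotImplementedError:
            # Until the lookup is wired up, everything goes to a human reviewer.
            flagged.append(citation)
    return flagged


if __name__ == "__main__":
    draft = "Plaintiff relies on Smith v. Jones, 123 F.3d 456 (8th Cir. 1997)."
    print(first_pass_check(draft))  # ['123 F.3d 456']
```

Even a crude gate like this changes "verify everything" from a reminder into a step the filing cannot skip; the attorney still reads every case, but nothing unverified slips through unnoticed.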

3. Use AI for the Right Tasks

Generative AI excels at some legal tasks and is dangerously unreliable at others:

Higher reliability: Summarizing documents you provide, drafting initial templates, organizing arguments, proofreading for grammar and style

Lower reliability: Generating citations from memory, stating legal holdings, quoting specific case language, citing current statutes

The rule of thumb: use AI to process information you give it, not to generate information you'll rely on.

4. Train Your Team — Including Partners

The Nebraska attorney blamed "sloppy copying and pasting" and a broken laptop. These aren't AI problems — they're workflow problems. Training should cover:

  • How generative AI actually works (and why it hallucinates)
  • Hands-on exercises identifying AI-fabricated citations
  • Your firm's specific verification workflow
  • Ethical obligations around AI-assisted work product

5. Build in a Second Set of Eyes

No filing should go out the door with only one person having reviewed it. This has always been good practice, but AI tools make it essential. A second reviewer specifically checking citations and holdings adds a critical safety layer.

The Opportunity for Forward-Thinking Firms

Here's what the Nebraska case really illustrates: the firms that will thrive aren't the ones avoiding AI — they're the ones using it responsibly with proper guardrails.

AI tools genuinely can make legal work more efficient. The Nebraska Attorney General's amicus brief acknowledged as much, noting that AI "can provide benefits to professionals who use it" when applied with "caution and humility." The firms that build robust AI workflows now will be faster and more competitive, while firms that avoid AI entirely will fall behind and firms that use it recklessly will face exactly the kind of consequences this case illustrates.

This is where working with an AI implementation partner makes a difference. At Heartland AI, we help professional service firms — including law firms — build AI workflows that capture efficiency gains without creating liability. That means custom usage policies, verification workflows tailored to your practice areas, team training, and ongoing support as the tools evolve.

The question isn't whether your firm will use AI. It's whether you'll use it in a way that protects your clients, your reputation, and your license.


Heartland AI helps businesses implement AI responsibly. Book a consultation to discuss AI workflow design for your firm.
