Architecture · Feb 25, 2026 · 10 min read

Glass Box AI Transparency: How to Prevent AI Hallucinations in Patent Prosecution

When AI fabricates a prior art citation or mischaracterizes an examiner's rejection, the attorney submits false information to the USPTO. Glass Box architecture makes this structurally impossible.

The Hallucination Problem in Patent AI

Large language models hallucinate. This is not a bug -- it is a fundamental property of how they work. LLMs generate statistically likely text, which sometimes means plausible-sounding but factually incorrect output. In most contexts, this is a minor annoyance. In patent prosecution, it creates three categories of serious harm:

Phantom Citations (Critical)

The AI cites a prior art reference that does not exist, or attributes a passage to a reference that does not contain it. The attorney submits arguments based on nonexistent evidence.

Mischaracterized Rejections (High)

The AI incorrectly describes the examiner's basis for rejection. The attorney responds to the wrong rejection, wasting an Office Action round.

New Matter Introduction (Critical)

The AI suggests claim amendments that add subject matter not supported by the original specification. This violates 35 USC 132 and can invalidate claims.

These are not theoretical risks. They are documented outcomes of using general-purpose AI (ChatGPT, Claude, Gemini) for patent prosecution without verification layers. The question is not whether LLMs hallucinate -- it is what architecture prevents hallucinations from reaching the attorney.

Black Box vs. Glass Box: Two Architectures

Black Box AI

  • LLM generates analysis in one pass
  • No verification against source documents
  • Attorney cannot trace output to source
  • Confidence scores are self-reported
  • Hallucinations reach the user

Glass Box AI

  • LLM proposes, validator disposes
  • Every output checked against source
  • Full traceability to OA text and prior art
  • Confidence scores are externally validated
  • Unverifiable outputs are dropped

How Glass Box Works in Abigail

Abigail implements Glass Box transparency through a two-phase architecture on every AI operation:

  1. LLM Proposes

    The AI generates analysis -- rejection classification, claim mappings, prior art passages, suggested amendments. This is the creative step where the LLM uses its pattern recognition to produce candidate outputs.

  2. Deterministic Validator Checks

    A separate, non-AI system verifies every claim the LLM made against the actual source documents. Does this prior art passage actually exist in the cited reference? Does this rejection characterization match the examiner's actual text? Does this amendment have specification support?

  3. Drop or Deliver

    If the validator confirms the output, it is delivered to the attorney with source references. If the validator cannot confirm, the output is dropped entirely. Not flagged. Not marked as low confidence. Dropped. The attorney never sees unverifiable analysis.
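The three steps above can be sketched as a simple propose/validate/drop pipeline. This is an illustrative sketch, not Abigail's actual implementation; the names (`Finding`, `validate`), the sample source text, and the verbatim-quote grounding rule are all assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    summary: str      # the LLM's characterization of what a source says
    quote: str        # verbatim passage the LLM claims supports it
    source_ref: str   # document the quote supposedly comes from

def validate(findings: list[Finding], sources: dict[str, str]) -> list[Finding]:
    """Phases 2 and 3: keep a finding only if its quoted support
    actually appears in the cited source document; otherwise drop it.
    Unverifiable findings are never flagged or scored -- they vanish."""
    delivered = []
    for f in findings:
        source_text = sources.get(f.source_ref, "")
        if f.quote and f.quote in source_text:
            delivered.append(f)   # verified: deliver with source reference
        # else: dropped entirely -- the attorney never sees it
    return delivered

# Phase 1 would call the LLM; two hand-written candidates stand in here.
sources = {"Smith": "... a wired temperature sensor mounted on the pipe ..."}
candidates = [
    Finding("Smith discloses a wired sensor",
            "a wired temperature sensor", "Smith"),     # real passage
    Finding("Smith discloses a wireless sensor",
            "a wireless temperature sensor", "Smith"),  # hallucinated
]
print([f.summary for f in validate(candidates, sources)])
```

Note the asymmetry: the creative phase is probabilistic, but the gate is deterministic string matching against the actual document, so a fabricated passage cannot pass no matter how confident the model sounds.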

What Gets Validated

AI Output                    Validation Method                               If Unverifiable
Prior art citation           Exact text match against reference document     Citation dropped
Rejection characterization   Semantic match against examiner's OA text       Characterization dropped
Claim element mapping        Element-level verification against claim text   Mapping dropped
Amendment suggestion         Specification support check (35 USC 132)        Amendment dropped
Examiner data                Cross-reference against USPTO records           Data point dropped
Confidence score             External validation (not self-reported)         Score recalculated
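To make the first row concrete, an exact-text-match check for a prior art citation might look like the following. This is a hedged sketch, not Abigail's code: the function name and the whitespace-normalization choice are assumptions, introduced so that line breaks or OCR spacing in a scanned reference do not cause false negatives.

```python
import re

def citation_verified(quoted: str, reference_text: str) -> bool:
    """Hypothetical exact-text-match check: the quoted passage must
    appear verbatim in the reference after normalizing whitespace
    and case, so formatting differences alone cannot fail a real quote."""
    norm = lambda s: re.sub(r"\s+", " ", s).strip().lower()
    return norm(quoted) in norm(reference_text)

reference = """The sensor array of claim 1, wherein each
node   transmits readings over a wired bus."""

citation_verified("transmits readings over a wired bus", reference)   # verified
citation_verified("transmits readings wirelessly", reference)         # dropped
```

An exact match is deliberately strict: it will reject a paraphrase even when the paraphrase is accurate, which is why rows like "rejection characterization" use semantic matching instead.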

Why This Matters for Malpractice

An attorney who submits an AI-generated response to the USPTO bears full professional responsibility for its contents. If the AI fabricated a citation and the attorney did not catch it, the attorney -- not the AI vendor -- faces potential malpractice liability and discipline from the state bar.

The Professional Responsibility Question

Black-box AI forces attorneys to manually verify every AI output against source documents. This eliminates much of the time savings AI was supposed to provide. Glass Box architecture does this verification automatically, so the attorney can trust-but-verify efficiently:

  • Every citation is verified against the actual document before presentation
  • Every rejection characterization matches the examiner's actual text
  • Every amendment suggestion has traceable specification support
  • Attorney review shifts from verification to strategic judgment

See Glass Box in Action

Upload an Office Action and see how every AI output is traced back to the source document. Every claim verified. Every citation grounded.
