# AI Hallucinations in Long Conversations: Why ChatGPT Loses the Plot

 

Most people think of “AI hallucinations” as one-off factual mistakes. But the bigger, more costly flaw happens in long conversations. At first, ChatGPT is accurate and focused. By the 20–30 minute mark, however, the system can: 

- Mix up facts you already agreed on. 

- Reintroduce ideas you discarded. 

- Forget critical constraints. 

- Pull random details from earlier as if they’re still valid. 

This isn’t random failure — it’s structural. Large language models have a context window (a finite memory buffer). Once it fills, older details are compressed or dropped. The AI also gives more weight to recent tokens, letting earlier information fade. Without true memory, small errors pile up. By the 15th continuation, answers may drift far from your original direction while still sounding confident. 
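
If you want to see the mechanics, here is a toy sketch of a context window in Python. The word budget and the messages are invented for illustration, and real models count tokens and may compress rather than simply drop old turns, but the effect is the same: the earliest details are the first to disappear.

```python
# Toy illustration of a context window: a fixed "budget" of words, newest kept first.
# Real models count tokens and may compress rather than drop turns outright;
# the budget and messages here are invented for illustration.

CONTEXT_BUDGET = 50  # pretend the model can only "see" 50 words at once

def visible_context(messages, budget=CONTEXT_BUDGET):
    """Keep the most recent messages that fit in the budget; older ones fall out."""
    kept, used = [], 0
    for msg in reversed(messages):            # walk from newest to oldest
        cost = len(msg.split())
        if used + cost > budget:
            break                             # everything older is silently gone
        kept.append(msg)
        used += cost
    return list(reversed(kept))

conversation = [
    "Constraint: the budget is capped at $10k.",   # early, critical detail
    "Decision: we dropped the mobile app idea.",
    "Long brainstorming exchange " * 12,           # filler that eats the window
    "User: so, what should we build first?",
]

print(visible_context(conversation))
# The $10k constraint no longer fits in the window, so the model answers
# as if it was never stated -- and still sounds confident doing it.
```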

---

## Why This Matters

In professional settings, this flaw is not harmless: 

- In business, it can derail projects with contradictions. 

- In finance, it can lead to miscalculations or faulty assumptions. 

- In law, it may fabricate cases or reintroduce arguments that were already dismissed. 

- In medicine, it can suggest unsafe treatments after initially correct advice. 

 

Hallucinations in long sessions are not just inconvenient: they translate into wasted time, lost money, and real-world risks. 

---

## Relatable Analogy

Picture a two-hour meeting. The first 20 minutes are sharp and the decisions are clear. By the end, one participant is repeating old points, forgetting agreements, and pulling random notes from the start. That’s what ChatGPT does in long conversations: brilliant at the start, sloppy over time. 

---

## Fix Checklist

- Break large tasks into short sessions (5–10 minutes). 

- Summarize key decisions and paste them into each new prompt (see the sketch after this checklist). 

- Request structured answers (tables, lists, JSON) to limit narrative drift. 

- Use explicit instructions: “If uncertain, reply ‘Unknown’.” 

- Save checkpoints externally (Notion, Google Docs, Git commits). 
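
Here is the checkpoint habit from the checklist as a small Python sketch. The file name and helper functions are illustrative, not a standard API; the point is that the source of truth lives outside the chat and gets reloaded at the top of every new session.

```python
# Sketch of the checkpoint-and-reload habit from the checklist above.
# The file name and helpers are illustrative, not a standard API.
from pathlib import Path

CHECKPOINT = Path("project_checkpoint.md")

def save_checkpoint(decisions, constraints, discarded):
    """Write the agreed facts to an external file: your source of truth."""
    lines = [
        "## Decisions", *[f"- {d}" for d in decisions],
        "## Constraints", *[f"- {c}" for c in constraints],
        "## Discarded ideas (do not reintroduce)", *[f"- {x}" for x in discarded],
    ]
    CHECKPOINT.write_text("\n".join(lines) + "\n")

def new_session_prompt(task):
    """Start every fresh session by reloading the checkpoint ahead of the task."""
    return (
        "Context you must treat as ground truth:\n"
        f"{CHECKPOINT.read_text()}\n"
        "If anything conflicts with this context, ask me before answering.\n\n"
        f"Task: {task}"
    )

save_checkpoint(
    decisions=["Launch as a web app only."],
    constraints=["Budget capped at $10k."],
    discarded=["Native mobile app."],
)
print(new_session_prompt("Draft the week-3 project plan."))
```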

---

## Lite Copy-Paste Prompt

You are ChatGPT. Rules:

1.  Do not reintroduce discarded ideas.

2.  If context is unclear, ask me before answering.

3.  If uncertain, reply “Unknown.”

4.  Keep answers structured in bullets or tables.
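
If you work through the API instead of the chat window, you can pin those rules as a system message so they travel with every turn. This sketch assumes the OpenAI Python SDK and an `OPENAI_API_KEY` in your environment; the model name is a placeholder.

```python
# Sketch: pin the rules above as a system message so they apply to every turn.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY
# environment variable; the model name is a placeholder.
from openai import OpenAI

RULES = """You are ChatGPT. Rules:
1. Do not reintroduce discarded ideas.
2. If context is unclear, ask me before answering.
3. If uncertain, reply "Unknown."
4. Keep answers structured in bullets or tables."""

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whatever model you have access to
    messages=[
        {"role": "system", "content": RULES},
        {"role": "user", "content": "Recap our agreed constraints, then suggest next steps."},
    ],
)
print(response.choices[0].message.content)
```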

---

## Case Study: Legal Brief Disaster

 

In 2023, in the case Mata v. Avianca, Inc. before the U.S. District Court for the Southern District of New York, two lawyers submitted a legal brief drafted with ChatGPT. The AI confidently generated six case citations that never existed. 

 

When Judge P. Kevin Castel reviewed the filing, the fabricated citations were exposed. The attorneys and their law firm were sanctioned with a $5,000 fine and ordered to notify each judge who had been falsely identified as the author of one of the fake opinions. 

 

This incident was widely covered in the press: 

- [The Guardian: Two US Lawyers Fined for Submitting Fake Court Citations from ChatGPT](https://www.theguardian.com/technology/2023/jun/23/two-us-lawyers-fined-submitting-fake-court-citations-chatgpt)

 

Lesson: In law, where precedent is everything, long-session hallucinations are not a minor flaw. They can cause reputational damage, financial penalties, and even malpractice claims. 

---

## Recap

Hallucinations in long conversations stem from memory limits, attention bias, and lack of grounding. The fix is not to abandon AI, but to checkpoint, recap, and reload the truth before drift derails your work. 

---

👉 Next week: Bias in AI: How Training Data Skews Results (and How to Fix It). 

🔗 [Subscribe now to get the next Flaw & Fix delivered to your inbox.] 

---