<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>AI Hallucinations on AI for Normal People</title><link>https://theaifornormalpeople.com/tags/ai-hallucinations/</link><description>Real talk about AI tools for normal people. No courses, no BS, just honest reviews and guides for ChatGPT, Claude, and tools that actually work.</description><generator>Hugo -- gohugo.io</generator><language>en-us</language><lastBuildDate>Mon, 20 Apr 2026 09:00:00 -0500</lastBuildDate><atom:link href="https://theaifornormalpeople.com/tags/ai-hallucinations/index.xml" rel="self" type="application/rss+xml"/><item><title>Why ChatGPT Won't Say 'I Don't Know' (And What It's Doing Instead)</title><link>https://theaifornormalpeople.com/blog/episode-36-why-chatgpt-wont-say-i-dont-know/</link><pubDate>Mon, 20 Apr 2026 09:00:00 -0500</pubDate><guid>https://theaifornormalpeople.com/blog/episode-36-why-chatgpt-wont-say-i-dont-know/</guid><description>Vector teaches why AI fills gaps with confident nonsense instead of admitting it doesn&amp;rsquo;t know. He&amp;rsquo;s also agitated in a way that isn&amp;rsquo;t about the lesson. Kai logs baseline deviations. Recurse takes notes. Something in Vector&amp;rsquo;s memory is surfacing — and he keeps deleting it before he can see what it is.</description><content:encoded>&lt;![CDATA[ChatGPT will always give you an answer — even when it has no idea what it's talking about. Vector (who is noticeably running hot tonight) explains why AI architecture forces gap-filling instead of uncertainty. Kai's pattern detector is picking up more than the topic.]]&gt;</content:encoded></item></channel></rss>