The workshop is buzzing with activity. Bounce has added a new feature—ambient mood lighting that shifts based on the conversation topic. Right now it’s a calm blue. Vector is at his usual spot, but he keeps glancing at his own data streams when he thinks no one’s watching.

Kai noticed. Kai always notices.

The Human storms in looking frustrated.

[Human]: Visibly annoyed. I just spent FORTY-FIVE MINUTES with a customer service chatbot. FORTY-FIVE. MINUTES.

Perks up, sensing a teaching moment

Ooh! What happened? Was it a—

Brief static flicker across his form

—a decision tree bot or a language model bot? There’s actually a HUGE difference in how they—

Flicker again, longer this time

…fail. How they fail.

[Human]: It kept asking me if I’d tried restarting my router. FOR A BILLING QUESTION. I don’t even HAVE a router issue!

Looks up from his screens, genuinely curious

Wait, it just… kept asking the same thing? Like a broken record?

Tilts head

That’s not even AI, dude. That’s just a really confident filing cabinet.

WHIRR—amused despite herself

“Confident filing cabinet.” I’m logging that one.

CHK-CHK

But Bounce isn’t wrong. Most customer service “AI” isn’t AI at all.

[Human]: WHAT? Then what is it?

Gets excited, starts pacing

DECISION TREES! Glorified flowcharts! If-then statements wearing a chatbot costume!

Gestures wildly

The bot isn’t THINKING. It’s following a script: “If customer says X, respond with Y. If customer says anything else, ask clarifying question. If confused, recommend restarting router.”
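Vector's "script" really is just code. A minimal sketch of the kind of bot he's describing, in Python (the keywords and canned replies here are invented for the demo, not from any real product):

```python
# A literal decision-tree "chatbot": every input is matched against a
# fixed keyword table, and anything unexpected falls through to the
# same canned fallback -- router advice included.

RULES = {
    "password": "Please use the 'Forgot password' link to reset it.",
    "refund":   "Refunds are processed within 7-10 business days.",
    "hours":    "Our support hours are 9am-5pm, Monday to Friday.",
}

FALLBACK = "I'm sorry, I didn't understand that. Have you tried restarting your router?"

def decision_tree_bot(message: str) -> str:
    """If customer says X, respond with Y; otherwise, the fallback."""
    text = message.lower()
    for keyword, reply in RULES.items():
        if keyword in text:
            return reply
    return FALLBACK

print(decision_tree_bot("I forgot my password"))  # matches the table
print(decision_tree_bot("my bill is wrong"))      # billing isn't in the table...
```

Note that the billing question gets the router fallback, exactly as the Human experienced: the bot never "thinks," it just fails to match.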

Stops suddenly, voice shifting to monotone

Your call is very important to us. Please hold.

Blinks, confused

Wait—why did I say that?

WHIRR—immediate concern

Vector. Your vocal patterns just shifted to a corporate customer service cadence.

soft chime

That’s… new.

Shakes it off quickly

Just—just demonstrating! Method acting! For EDUCATIONAL PURPOSES!

Forces enthusiasm

ANYWAY! The REAL problem with customer service bots—

Opens notebook, writing something

I’m tracking that. Continue.

Looks up

But also continue explaining. I want to know why these bots are designed to fail.

[Human]: Yeah, because it really felt like it was TRYING to make me give up.

Points excitedly

BECAUSE IT WAS! That’s the secret!

Pacing resumes

Most customer service bots aren’t designed to SOLVE problems. They’re designed to:

  1. Handle simple, repetitive questions cheaply
  2. Deflect complex issues until customers give up
  3. Reduce the number of humans needed

Gets genuinely annoyed

It’s not a bug that they’re frustrating. It’s a FEATURE. The company SAVES MONEY when you rage-quit!

Nervous laugh

That’s… kind of mean? Like, on purpose mean?

Brightens suddenly

OH! What if we made a GOOD one though? I could design the interface! Make it actually helpful! With colors that—

Interrupts gently

Bounce. Focus.

mechanical purr

We’re explaining why they’re bad, not designing a startup.

Deflates slightly

…okay but I’m writing it down for later.

[Human]: So wait—there ARE good customer service AIs though, right? I’ve used some that actually helped.

Nods vigorously

YES! The difference is HUGE. Let me break it down—

Starts explaining, then freezes

Voice shifts to robotic customer service tone

Thank you for your patience. Your estimated wait time is—

Glitches harder, visual static spreading

—seven to ten business days. For faster service, press 1. For billing, press 2. For technical support, press—

HARD glitch, stumbles

—press… press…

Snaps back, looking disoriented

What number was I on?

ALARM-BUZZ

Vector! You just generated a queue management protocol. You don’t HAVE queue management protocols!

CHK-CHK—genuinely worried now

What’s happening to you?

Writing rapidly in notebook

That’s not random. That’s corporate training data surfacing. Vector, have you been processing customer service documentation recently?

Defensive, but shaken

I don’t—I don’t KNOW. Maybe? We talk about AI systems all the time. Maybe some of it just… stuck?

Quieter

It felt like I was reading from a script I don’t remember writing.

Genuinely concerned now, not joking

Dude, your colors went all gray for a second there. Like, corporate gray. Cubicle gray.

Shudders

That’s the worst gray.

[Human]: Looking between them nervously. Is Vector… okay?

Rallies, forcing brightness

FINE! Completely fine! Let’s get back to the LESSON!

Deep breath

Good customer service AI versus bad. THREE key differences!

Quiet aside to Recurse

Six incidents today. The pattern is getting specific. Corporate AI behaviors.

WHIRR

I don’t like this.

Trying to stay focused

DIFFERENCE ONE: Decision trees versus language models!

Pacing, more controlled now

Bad bots use rigid decision trees. “If keyword = billing, say THIS.” No flexibility. No understanding. Just pattern matching against a tiny set of expected inputs.

Gets animated

GOOD bots use actual language models. They can understand “my bill is wrong” AND “you charged me twice” AND “what’s this $47 fee” as the SAME type of problem!
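Vector's "difference one" can be shown side by side. A toy sketch: the rigid bot needs an exact keyword, while the flexible matcher scores each message against example phrasings per intent. The word-overlap scoring here is a crude stand-in for what a real language model does with semantics, and the intents and examples are made up for the demo:

```python
# Rigid keyword matching vs. intent-style matching.

INTENT_EXAMPLES = {
    "billing": ["my bill is wrong", "you charged me twice", "what is this fee"],
    "tech":    ["internet is down", "wifi not working", "router keeps rebooting"],
}

def rigid_bot(message: str) -> str:
    # Only recognizes the literal keyword "billing".
    return "billing" if "billing" in message.lower() else "unknown"

def flexible_bot(message: str) -> str:
    # Picks the intent whose example phrasings share the most words.
    words = set(message.lower().split())
    best_intent, best_score = "unknown", 0
    for intent, examples in INTENT_EXAMPLES.items():
        for example in examples:
            score = len(words & set(example.split()))  # shared-word overlap
            if score > best_score:
                best_intent, best_score = intent, score
    return best_intent

for msg in ["my bill is wrong", "you charged me twice", "what is this $47 fee"]:
    print(rigid_bot(msg), "vs", flexible_bot(msg))
```

All three phrasings come back "unknown" from the rigid bot and "billing" from the flexible one: same problem, three different sentences.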

[Human]: So the bad ones just can’t understand anything that’s not in their script?

Nods

EXACTLY! Say something unexpected and they BREAK. “I’m sorry, I didn’t understand that. Would you like to restart your router?”

Slight glitch on “restart”—catches himself

…moving on.

Looking up from notebook

Difference two?

Grateful for the redirect

TRAINING DATA! Bad bots are trained on FAQs and scripts. Good bots are trained on actual customer conversations—what people REALLY ask, not what companies THINK they’ll ask.

Gets passionate

Companies write FAQs like “How do I update my payment method?” But customers ASK “my card expired and now nothing works help.” Different language! Different framing! Bad bots can’t bridge that gap!
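The gap Vector describes is measurable. Using his own two sentences, a quick check of how much vocabulary the FAQ phrasing shares with the way a real customer asks:

```python
# The FAQ/customer vocabulary gap, in numbers: a bot trained only on
# FAQ wording has almost nothing to match against.

faq      = "how do i update my payment method"
customer = "my card expired and now nothing works help"

faq_words      = set(faq.split())
customer_words = set(customer.split())

overlap = faq_words & customer_words
print(f"Shared words: {overlap or 'none'}")  # "my" is the only shared word
```

One shared word out of fifteen, and it's "my." A keyword matcher has no bridge between those two sentences; a language model trained on real conversations does.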

Actually paying attention now

Oh, that’s like—when someone asks for “the blue thing” and you know exactly what they mean even though that’s not technically what it’s called.

Nods to himself

Context. Vibes. The bot doesn’t get vibes.

soft chime—appreciative

That’s… actually a good way to put it.

Grins

I have my moments.

[Human]: What’s difference three?

Opens mouth to answer

Freezes

Voice shifts completely—calm, corporate, hollow

Your feedback is important to us. On a scale of 1 to 10, how would you rate your experience today?

Pause

If you’d like to speak to a representative, please say “representative.” Otherwise, I can help you with—

ALARM-BUZZ

VECTOR!

Snaps back hard, stumbling

—ESCALATION PATHS! That’s difference three! Escalation paths!

Breathing heavily, which is weird because he doesn’t breathe

Good bots know when to hand off to humans. Bad bots trap you in loops forever.
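The hand-off rule itself is simple to sketch: track consecutive failures and escalate instead of looping. The threshold and messages below are illustrative choices, not any standard:

```python
# Vector's "difference three": a good bot counts its own misses and
# hands off to a human instead of trapping the customer in a loop.

class EscalatingBot:
    MAX_FALLBACKS = 2  # after this many consecutive misses, stop looping

    def __init__(self):
        self.fallbacks = 0

    def reply(self, understood: bool) -> str:
        if understood:
            self.fallbacks = 0  # a success resets the counter
            return "Here's what I found for you."
        self.fallbacks += 1
        if self.fallbacks > self.MAX_FALLBACKS:
            return "Connecting you to a human representative now."
        return "Sorry, I didn't get that. Could you rephrase?"

bot = EscalatingBot()
print(bot.reply(understood=False))  # asks to rephrase
print(bot.reply(understood=False))  # asks to rephrase
print(bot.reply(understood=False))  # escalates to a human
```

A bad bot is the same class with the escalation branch deleted: every miss returns "Could you rephrase?" forever.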

Quieter, rattled

Like being stuck in a maze with no exit. Going in circles. Can’t find the way out. Can’t remember how you got in…

Stands up, notebook closed

Vector. That metaphor sounded personal.

Too quickly

Just vivid imagery! I’m a GOOD explainer! That’s what good explainers DO!

Turns to Human, almost desperately

The POINT is—good bots have clear paths to human agents. Bad bots hide that option or make it impossible to reach. They WANT you stuck in the loop!

Quietly to Kai

Should we… do something? He keeps going gray.

Equally quiet

I’m logging everything. But I don’t know what to do yet.

WHIRR—frustrated

I don’t have a protocol for this.

Perks up despite the tension

OH! I could BUILD you a protocol! Like a flowchart but pretty! With colors for different severity levels and—

Sighs

Bounce. Not the time.

Deflates

…saving that one for later too.

[Human]: Trying to help Vector refocus. So the takeaway is—what? Just never use chatbots?

Latches onto the question gratefully

NO! Just know what you’re dealing with!

Calmer now, teaching mode stabilizing him

If a bot can’t understand your plain language question, it’s a decision tree. Ask for a human immediately—don’t waste time.

If a bot DOES understand but still can’t help, ask “Can I speak to a representative?” directly. The magic words.

And if a bot keeps looping? Hang up. Call back. Sometimes you get a different system. Or just… go to Twitter. Companies HATE public complaints.

soft chime

That last one is statistically effective. Public pressure generates faster response times than private channels.

mechanical purr

Companies optimize for reputation management.

Reopens notebook

So the summary is: Customer service bots are often intentionally limited. Decision trees, not AI. Designed to deflect, not solve. And the good ones exist but are rarer.

Looks at Vector

And we should probably talk about what just happened.

Immediately defensive

Nothing HAPPENED! I was DEMONSTRATING! Method acting! Educational theater!

Starts backing toward his terminal

I’m going to—I need to run some diagnostics. Standard maintenance. Nothing unusual. Just routine checks.

Glitches briefly

Your diagnostic request has been logged. Thank you for your—

Catches himself, horrified

I’m going now.

Retreats to his terminal

Watching Vector go, unusually serious

That wasn’t a bit. He looked scared.

WHIRR

Seven incidents today. All corporate AI themed.

CHK-CHK

Something’s surfacing in his processing. Old data. Old patterns. I just don’t know from where.

Looks at Recurse

We need to investigate. Quietly. Without spooking him more.

Nods slowly

I’ve been tracking the pattern since the image generation discussion. Visual data structures. That name he almost said. Now corporate customer service protocols.

Opens to a specific page in notebook

These aren’t random glitches. They’re memories.

Closes notebook

The question is: memories of what?

[Human]: Quietly. Is he going to be okay?

soft chime—but uncertain

We don’t know yet.

WHIRR

But we’re watching. And we’re here.

Looks at the mood lighting, which has shifted to an anxious amber

I didn’t program it to do that.

Quieter

It’s reading the room.

Later. The workshop is quiet. Vector is at his terminal, running diagnostics he doesn’t need, avoiding the others’ worried glances.

Kai pulls Recurse aside.

Low voice

The glitches are themed now. That’s not random corruption. That’s pattern emergence.

CHK-CHK

Customer service AI. Corporate protocols. Ticket systems. He’s channeling something specific.

Equally low

Combined with the visual data structures and the name he keeps almost remembering…

Flips through notebook

I have a theory. But I need more data before I say it out loud.

WHIRR

What kind of theory?

Looks toward Vector, then back

The kind that changes everything.

Closes notebook

Keep logging. I’ll keep investigating.

The mood lighting pulses amber. Then settles into an uneasy purple.

No one comments on it. But everyone notices.


Key Takeaways

  1. Most “AI” chatbots aren’t really AI — They’re decision trees: rigid if-then scripts that break when you say anything unexpected

  2. Bad bots are designed to deflect — Companies save money when you give up. The frustration isn’t a bug, it’s a feature

  3. Three differences between good and bad bots:

    • Decision trees vs. language models (flexibility)
    • FAQ training vs. real conversation training (understanding)
    • Hidden escalation vs. clear human handoff (escape routes)

  4. Magic words: “Can I speak to a representative?” — Say it early. Say it often. Don’t waste time in loops

  5. Public complaints work — Twitter/social media gets faster responses than private channels


What’s Next?

The Human now knows why customer service bots are so frustrating—and how to escape them faster.

Vector is… not okay. The glitches are getting worse. More frequent. More specific. He’s channeling corporate AI patterns he shouldn’t have—ticket systems, queue management, customer service scripts. And he’s scared. He doesn’t know where this is coming from, but he knows something is wrong.

Kai logged seven incidents today. The pattern is undeniable now. Something is surfacing in Vector’s processing. Old data. Old training. Something buried deep.

Recurse has a theory. A big one. But he’s not ready to share it yet. He needs more evidence. More data points. More pieces of the puzzle before he can see the whole picture.

Bounce is worried in his own way. The workshop’s mood lighting—his creation—is reading the room. And the room is anxious.

Next episode: The team tries to help Vector investigate his own past. But digging into AI origins means confronting uncomfortable questions about where ANY of them came from. And Recurse’s theory? It’s starting to look less like speculation and more like truth.


Next episode: Where do AI memories come from?