Receptic AI
Product · March 18, 2026 · 6 min read

Why your AI agent answers wrong (and it isn't the model)

Most hallucinations come from the knowledge base, not the LLM. A simple framework for cleaning yours up in an afternoon.

Ari Sokolov
Product · Receptic AI

When a customer tells us their agent is “hallucinating,” nine times out of ten the LLM is doing exactly what we asked it to do — repeating what's in the knowledge base. The knowledge base is just wrong.

The four KB pathologies

Almost every bad answer traces back to one of these:

The 10-question audit

Pick the 10 questions your front desk hears most. For each one:

You'll be done in under an hour, and you'll walk away with a list of KB edits worth more than any amount of prompt tuning.

Write for retrieval, not for humans

Vector search retrieves chunks based on semantic similarity to the question. That means each meaningful fact needs to live in a chunk that reads like an answer to a likely question.

Bad: “Pricing tiers are detailed on the next page.”
Good: “Our standard cleaning starts at $189. Deep cleans start at $249.”
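To see why the “Good” chunk wins, compare how similar each chunk is to the question it should answer. A minimal sketch below uses a toy bag-of-words similarity as a stand-in for a real embedding model (the actual retriever will use dense vectors, but the intuition is the same): the answer-shaped chunk shares the question's vocabulary, the pointer-shaped one doesn't.

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Toy stand-in for an embedding model: bag-of-words term counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

question   = "how much does a standard cleaning cost"
bad_chunk  = "pricing tiers are detailed on the next page"
good_chunk = "our standard cleaning starts at $189 deep cleans start at $249"

q = embed(question)
print(cosine(q, embed(good_chunk)))  # overlaps on "standard cleaning" — nonzero
print(cosine(q, embed(bad_chunk)))   # no shared vocabulary — 0.0
```

Run it and the pointer chunk scores zero against the question, so it never gets retrieved — the pricing fact it points to might as well not exist.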

One source of truth

If your hours, prices, or policies live in three places, three versions will eventually disagree. Pick one canonical source (we usually recommend a single Google Doc per topic) and have everything else point at it. The agent reads the canonical version.
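One way to enforce this is a small registry that maps each topic to exactly one document, and fails loudly when a topic has no canonical home. The sketch below is illustrative — the topic names and doc URLs are hypothetical placeholders, not Receptic configuration:

```python
# Hypothetical canonical-source registry: one doc per topic, nothing else.
CANONICAL = {
    "hours":    "https://docs.google.com/document/d/example-hours-doc",
    "pricing":  "https://docs.google.com/document/d/example-pricing-doc",
    "policies": "https://docs.google.com/document/d/example-policies-doc",
}

def source_for(topic: str) -> str:
    # Every surface (agent, website, printed flyer) resolves through here,
    # so there is exactly one version that can be right or wrong.
    if topic not in CANONICAL:
        raise KeyError(f"No canonical source for {topic!r}; add one before publishing")
    return CANONICAL[topic]
```

The point of the lookup failing hard is that a missing canonical source surfaces at publish time, not as a wrong answer on a live call.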

The “I'll find out” escape hatch

A good agent should be willing to say “I don't know. I'll find out and call you back.” That's far better than a confident wrong answer. Configure the agent's system prompt to default to this whenever retrieval confidence falls below a threshold. The dashboard logs every escape so you can backfill the KB.
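The escape-hatch logic can be sketched in a few lines. This is a hedged illustration, not Receptic's implementation: `retrieve`, `generate`, and `log` are assumed callables, and the `0.75` threshold is an arbitrary example you'd tune against logged calls.

```python
ESCAPE_LINE = "I don't know. I'll find out and call you back."
CONFIDENCE_THRESHOLD = 0.75  # example value; tune against your own call logs

def answer(question, retrieve, generate, log):
    """Fall back to the escape line when retrieval confidence is low.

    retrieve(question) -> (chunks, confidence score in [0, 1])
    generate(question, chunks) -> answer text
    log(record) -> records the escape so the KB gap can be backfilled
    """
    chunks, confidence = retrieve(question)
    if confidence < CONFIDENCE_THRESHOLD:
        log({"question": question, "confidence": confidence})
        return ESCAPE_LINE
    return generate(question, chunks)
```

The important design choice is that every escape is logged with the question that triggered it — the log is your prioritized to-do list of KB gaps.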

What to clean first

Get those five right and 80% of your “hallucinations” go away.

Try Receptic

See it answer a real call.

Spin up an agent on a sandbox number in minutes. No credit card to test.

Try the demo