Memory & AI insights

Memory is the “context layer” that helps the AI behave consistently over time. It reduces repeated questions and helps operators understand what has already been learned.

At a glance

  • Memory lives on the contact detail page.
  • It includes:
    • Summary
    • Open items
    • Structured AI insights (facts)
    • Interaction log
    • Rebuild from recent calls
  • Memory can show LLM usage metrics (tokens/cost).

Pro tip: treat memory like a shared notebook

Memory works best when someone owns it. Decide who can edit it and what counts as “truth” (AI-derived vs. operator-confirmed).

What each section means

LLM usage

Shows model/provider usage and cost signals for memory extraction workflows.
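The specifics vary by provider, but token and cost metrics usually reduce to tokens in/out multiplied by a per-token price. A minimal sketch of that arithmetic, with hypothetical field names and assumed (not real) rates:

```python
from dataclasses import dataclass

@dataclass
class LlmUsage:
    """Hypothetical usage record for one memory-extraction run."""
    model: str
    prompt_tokens: int
    completion_tokens: int

def estimate_cost(usage: LlmUsage,
                  price_in_per_1k: float = 0.0005,    # assumed rate, not real pricing
                  price_out_per_1k: float = 0.0015) -> float:
    """Rough estimate: (tokens / 1000) * per-1k price, for each direction."""
    return (usage.prompt_tokens / 1000 * price_in_per_1k
            + usage.completion_tokens / 1000 * price_out_per_1k)

usage = LlmUsage(model="example-model", prompt_tokens=4000, completion_tokens=800)
print(round(estimate_cost(usage), 4))  # 0.002 + 0.0012 = 0.0032
```

Multiplying this out per extraction run is a quick way to sanity-check the cost signals the page shows you.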

Summary

A human-readable snapshot of the contact context.

Open items

Things that are unresolved or require follow-up.

AI Insights (facts)

Structured facts extracted from interactions. Insights can have provenance and may support:

  • inline edits
  • reset-to-AI behavior (when an operator wants to revert a manual override)
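One way to picture how inline edits, provenance, and reset-to-AI fit together: keep the AI-derived value and the operator override as separate fields, so reverting is just clearing the override. A sketch with hypothetical names (the product's actual data model may differ):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Insight:
    """Hypothetical structured fact with provenance and an operator override."""
    key: str
    ai_value: str                         # what the extractor produced
    source_call_id: str                   # provenance: which interaction it came from
    operator_value: Optional[str] = None  # set when an operator edits inline

    @property
    def value(self) -> str:
        # Operator-confirmed values win over AI-derived ones.
        return self.operator_value if self.operator_value is not None else self.ai_value

    def reset_to_ai(self) -> None:
        """Revert a manual override back to the AI-derived value."""
        self.operator_value = None

fact = Insight(key="preferred_channel", ai_value="email", source_call_id="call_123")
fact.operator_value = "phone"   # inline edit by an operator
fact.reset_to_ai()              # revert the override
print(fact.value)               # "email"
```

Keeping both values around is what makes reset-to-AI safe: the override never destroys the extracted fact.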

Interaction log

A history of memory-related events and recent interactions.

Rebuild from recent calls

Rebuild re-extracts memory from recent calls. It is powerful, but it can have cost and governance implications.

Watch out: rebuild is not “free”

Rebuild can trigger additional extraction and cost. Use it when you have new high-signal calls or you fixed a bad prompt, not as a default habit.
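If your team wants to enforce the "new high-signal calls only" habit, a simple guard before triggering rebuild can help. A hypothetical sketch; the real trigger conditions (and any prompt-fix escape hatch) depend on your governance rules:

```python
def should_rebuild(recent_call_ids: list[str],
                   already_extracted: set[str],
                   min_new_calls: int = 1) -> bool:
    """Hypothetical guard: only rebuild when there are calls
    that have not been extracted into memory yet."""
    unseen = [c for c in recent_call_ids if c not in already_extracted]
    return len(unseen) >= min_new_calls

print(should_rebuild(["c1", "c2"], {"c1"}))  # True: c2 has not been extracted
print(should_rebuild(["c1"], {"c1"}))        # False: nothing new, skip the cost
```

Even a check this small prevents the most common waste: rebuilding when nothing has changed.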

When memory matters most

  • High-touch accounts where context continuity affects conversion.
  • Multi-touch sequences where follow-ups depend on prior calls.
  • Ops teams that want predictable outcomes and fewer “AI surprises.”

Next: make memory usable in campaigns

Align your agent prompt and outcomes so memory and follow-ups reinforce each other.