Honcho Integration Guide
What is Honcho?
Honcho is an open source memory library for building stateful agents. It works with any model, framework, or architecture. You send Honcho the messages from your conversations, and custom reasoning models process them in the background — extracting premises, drawing conclusions, and building rich representations of each participant over time. Your agent can then query those representations on-demand ("What does this user care about?", "How technical is this person?") and get grounded, reasoned answers.
The key mental model:
- Peers are any participant, human or AI. Both are represented the same way.
- Observation settings (observe_me, observe_others) control which peers Honcho reasons about. Typically you want Honcho to model your users (observe_me=True) but not your AI assistant (observe_me=False).
- Sessions scope conversations between peers.
- Messages are the raw data you feed in. Honcho reasons about them asynchronously and stores the results as the peer's representation. No messages means no reasoning means no memory.
Your agent accesses this memory through peer.chat(query) (ask a natural language question, get a reasoned answer), session.context() (get formatted conversation history + representations), or both.
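The mental model above can be sketched as a toy in-memory model. This is not the real Honcho SDK: the class bodies, the synchronous "reasoning", and the string matching are all illustrative assumptions; only the concepts (peers, observe flags, sessions, messages, representations) come from this guide.

```python
# Toy in-memory sketch of Honcho's mental model -- NOT the real SDK.
from dataclasses import dataclass, field


@dataclass
class Peer:
    """Any participant, human or AI, represented the same way."""
    name: str
    observe_me: bool = True  # should Honcho reason about this peer?
    representation: list = field(default_factory=list)  # derived facts

    def chat(self, query: str) -> str:
        # Stand-in for peer.chat(query): answer from the stored representation.
        hits = [fact for fact in self.representation if query.lower() in fact.lower()]
        return hits[0] if hits else "No grounded answer yet."


@dataclass
class Session:
    """Scopes a conversation between peers and holds the raw messages."""
    peers: list
    messages: list = field(default_factory=list)

    def add_message(self, author: "Peer", content: str) -> None:
        self.messages.append((author.name, content))
        # Real Honcho reasons asynchronously in the background; here we
        # "derive" a representation synchronously, purely for illustration.
        if author.observe_me:
            author.representation.append(f"{author.name} said: {content}")

    def context(self) -> str:
        # Stand-in for session.context(): formatted conversation history.
        return "\n".join(f"{name}: {text}" for name, text in self.messages)


user = Peer("alice")                       # model the human
assistant = Peer("bot", observe_me=False)  # don't model the AI assistant
session = Session([user, assistant])
session.add_message(user, "I care about type safety")
session.add_message(assistant, "Noted!")

print(user.chat("type safety"))   # grounded answer from alice's representation
print(assistant.representation)   # [] -- observe_me=False, so no memory
```

Note how no messages means no memory: the assistant produced messages but, with observe_me=False, accumulated no representation.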
Integration Workflow
Follow these phases in order:
Phase 1: Codebase Exploration
Before asking the user anything, explore the codebase to understand: