Context Compactor

Automatic context compaction for OpenClaw when using local models that don't report accurate token limits or raise context-overflow errors.

The Problem

Cloud APIs (Anthropic, OpenAI) return explicit context-overflow errors, which lets OpenClaw's built-in compaction trigger. Local models (MLX, llama.cpp, Ollama) often:

  • Silently truncate context
  • Return garbage when context is exceeded
  • Don't report accurate token counts

Any of these leaves you with a broken conversation once the context grows too long.

The Solution

Context Compactor estimates tokens client-side and proactively summarizes older messages before hitting the model's limit.

How It Works
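
The approach described above can be sketched roughly as follows. This is a minimal illustration, not OpenClaw's actual implementation: the function names (`estimate_tokens`, `compact_history`), the ~4-characters-per-token heuristic, and the keep-recent strategy are all assumptions.

```python
# Hypothetical sketch of client-side compaction. estimate_tokens and
# compact_history are illustrative names, not OpenClaw's real API.

def estimate_tokens(text: str) -> int:
    """Rough heuristic: roughly 4 characters per token for English text."""
    return max(1, len(text) // 4)

def compact_history(messages, limit_tokens, keep_recent=4, summarize=None):
    """If the estimated total exceeds limit_tokens, replace older messages
    with a single summary message; the most recent turns stay verbatim."""
    total = sum(estimate_tokens(m["content"]) for m in messages)
    if total <= limit_tokens or len(messages) <= keep_recent:
        return messages  # under budget, nothing to compact
    older, recent = messages[:-keep_recent], messages[-keep_recent:]
    # A real implementation would summarize with the model itself; here we
    # fall back to a trivial truncating join as a placeholder.
    make_summary = summarize or (lambda ms: " / ".join(m["content"][:40] for m in ms))
    summary = {"role": "system",
               "content": "Summary of earlier turns: " + make_summary(older)}
    return [summary] + recent
```

Because the estimate runs client-side on every turn, compaction can fire before the local model's window overflows, rather than reacting to an error the model never raises.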

Installs: 5
First Seen: Apr 3, 2026