# Omni-Recall: Neural Knowledge & Long-Term Context Engine
Omni-Recall is a high-performance memory management skill designed for AI agents. It enables persistent, cross-session awareness by transforming conversation history and technical insights into high-dimensional vector embeddings, stored in a Supabase (PostgreSQL + pgvector) knowledge cluster with HNSW indexing for fast semantic search.
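The similarity-ranked retrieval described above can be illustrated with a minimal in-memory sketch. In production, pgvector's HNSW index performs this ranking approximately inside PostgreSQL; the function names below are illustrative, not part of the skill's API:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def semantic_search(query_vec, records, threshold=0.5):
    """Rank stored records by similarity to the query vector,
    keeping only matches at or above the threshold (default 0.5)."""
    scored = [
        (cosine_similarity(query_vec, rec["embedding"]), rec["text"])
        for rec in records
    ]
    # Highest-scoring matches first, below-threshold records dropped.
    return sorted((s, t) for s, t in scored if s >= threshold)[::-1]
```

With real data, `records` would hold 1536-dimensional embeddings fetched from the Supabase knowledge cluster rather than the toy vectors used here.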
## 🚀 Core Capabilities
- **Vector Semantic Search** (`fetch` with `query_text`): Intelligent natural language queries using vector similarity. Finds semantically related content even when the wording differs. Returns results ranked by similarity score (0-1). Default threshold: 0.5 (balances recall and precision).
- **Neural Synchronization** (`sync`): Encodes current session state, user preferences, and operational steps into 1536-dimensional vectors using OpenAI's `text-embedding-3-small` via APIYI. Includes automatic duplicate detection (skips storage if cosine similarity > 0.9). Supports optional `category` and `importance` fields.
- **Contextual Retrieval** (`fetch`): Pulls historical neural records using natural language queries or time-based filters. Supports similarity threshold tuning (0.5-0.9) and category filtering.
- **User Profile Management** (`sync-profile` / `fetch-profile`): Manages user roles, preferences, settings, and personas in a dedicated `profiles` table with vector search support.
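The duplicate detection performed during `sync` (skip when cosine similarity exceeds 0.9) can be sketched in a few lines. This is a standalone illustration under the assumptions stated in the list above, not the skill's actual implementation:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def is_duplicate(new_vec, existing_vecs, threshold=0.9):
    """Return True if any stored embedding is near-identical to the
    new one (cosine similarity > 0.9), so the sync can be skipped."""
    return any(cosine_similarity(new_vec, v) > threshold for v in existing_vecs)
```

In practice the comparison runs against embeddings already stored in the pgvector table, so only genuinely new memories consume storage and index capacity.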