moorcheh-cookbooks

Pass

Audited by Gen Agent Trust Hub on Apr 20, 2026

Risk Level: SAFE
Full Analysis
  • [EXTERNAL_DOWNLOADS]: The documentation installs the vendor's official libraries, such as moorcheh-sdk and langchain-moorcheh, from public package registries. It also points to an official starter repository on GitHub (github.com/moorcheh-ai/llm-wiki) for scaffolding projects.
  • [COMMAND_EXECUTION]: The cookbooks include boilerplate shell commands for environment setup, such as pip install, npm install, and git clone. These commands are contextually appropriate for the project setup tasks described in the documentation.
  • [DATA_EXFILTRATION]: The skill demonstrates how to interact with the vendor's API (api.moorcheh.ai) for semantic search and generation. It correctly advises developers to use environment variables for sensitive API keys and to proxy API calls through a backend to avoid client-side exposure of credentials.
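A minimal sketch of the key-handling pattern the cookbooks recommend, assuming the key lives in a MOORCHEH_API_KEY environment variable (the variable name and bearer-token header below are illustrative, not taken from the SDK):

```python
import os

# Base URL from the audited documentation; endpoint paths are not shown here.
MOORCHEH_API_URL = "https://api.moorcheh.ai"

def build_headers() -> dict:
    """Build request headers with the API key read from the environment.

    Reading the key at call time (rather than hard-coding it) keeps the
    credential out of source control and client-side bundles.
    """
    api_key = os.environ.get("MOORCHEH_API_KEY")  # variable name is an assumption
    if not api_key:
        raise RuntimeError(
            "Set MOORCHEH_API_KEY in the environment (e.g. via a .env file)"
        )
    return {"Authorization": f"Bearer {api_key}"}
```

In a browser-facing app, this code would run only on the backend proxy, so the key never reaches the client.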
  • [PROMPT_INJECTION]: The skill describes building systems (such as RAG and LLM Wiki) that process untrusted external data, which is an inherent surface for indirect prompt injection.
      • Ingestion points: Applications built following these guides ingest content from user-provided files in local directories like ./data or raw/, as seen in references/knowledge_base_rag.md and references/llm_wiki.md.
      • Boundary markers: The guides recommend using an agent schema file (CLAUDE.md) to define instructions, though the code snippets themselves do not show explicit delimiters for the ingested text.
      • Capability inventory: The applications are designed to perform network requests to the Moorcheh API and write processed knowledge back to the local file system as markdown wiki pages.
      • Sanitization: As educational boilerplate, the code snippets prioritize core logic and do not include complex validation or sanitization of the input documents before processing.
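The missing boundary markers noted above could be approximated with a small wrapper around ingested text; the marker strings, the source parameter, and the zero-width-space escaping below are all hypothetical, not part of any cookbook:

```python
def wrap_untrusted(text: str, source: str) -> str:
    """Fence ingested document text with explicit boundary markers so the
    model can distinguish untrusted data from instructions.

    The marker format is illustrative. Delimiter-like sequences inside the
    untrusted text are neutralized with a zero-width space so a document
    cannot close the fence early and inject instructions.
    """
    cleaned = text.replace("<<<", "<\u200b<<").replace(">>>", ">\u200b>>")
    return (
        f"<<<UNTRUSTED_DOCUMENT source={source}>>>\n"
        f"{cleaned}\n"
        f"<<<END_UNTRUSTED_DOCUMENT>>>"
    )
```

Delimiting alone does not stop injection, but it gives the system prompt a concrete boundary to instruct the model to treat everything inside as data.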
Audit Metadata
  • Risk Level: SAFE
  • Analyzed: Apr 20, 2026, 05:06 AM