comprehensive-research-agent
Comprehensive Research Agent Best Practices
This skill addresses common failures in multi-step research tasks: unhandled tool errors, missing validation, opaque reasoning, and premature conclusions. It provides structured protocols for source validation, error recovery, and thinking transparency that significantly improve research quality and reliability.
When to Activate
- Task involves web research with search, read_url, or fetch operations
- Task requires gathering information from multiple sources
- Task has explicit requirements for completeness or verification
- Task includes file operations that need validation (save, write, read)
- Any research or information-gathering workflow with 3+ tool interactions
Core Concepts
- Validation Checkpoints: Explicit verification steps at phase transitions to confirm tool outputs, source relevance, and information completeness before proceeding
- Error Recovery Protocols: Mandatory acknowledgment and handling of tool failures with fallback strategies rather than silent continuation
- Source Traceability: Maintaining clear tracking of which sources were actually retrieved vs. referenced from prior knowledge to prevent hallucination
- Substantive Thinking Blocks: Detailed reasoning traces that document insights, connections, gaps, and decision rationale at each step
- Cross-Source Validation: Verifying key claims against multiple sources and explicitly noting consensus, contradictions, and information gaps
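The Error Recovery Protocols concept above can be sketched as a wrapper around any tool call. This is a minimal illustration, not a prescribed implementation: the `tool`/`fallbacks` callables and the `log` list are hypothetical stand-ins for whatever search, read_url, or fetch interface the agent actually uses. The key property is that every failure is recorded for explicit acknowledgment rather than silently swallowed.

```python
import time

def call_with_recovery(tool, query, fallbacks=(), retries=2, log=None):
    """Attempt a tool call with retries, then fallbacks.

    Every failure is appended to `log`, so the agent must acknowledge
    what went wrong instead of silently continuing.
    """
    log = log if log is not None else []
    name = getattr(tool, "__name__", "tool")
    for attempt in range(retries + 1):
        try:
            return tool(query)
        except Exception as exc:
            log.append(f"{name} failed (attempt {attempt + 1}): {exc}")
            time.sleep(0.1 * (attempt + 1))  # brief backoff before retrying
    for fb in fallbacks:
        fb_name = getattr(fb, "__name__", "fallback")
        try:
            result = fb(query)
            log.append(f"recovered via fallback {fb_name}")
            return result
        except Exception as exc:
            log.append(f"fallback {fb_name} failed: {exc}")
    return None  # total failure: caller must handle this explicitly
```

A caller that receives `None` should report the logged failures in its reasoning trace rather than proceeding as if the data were retrieved.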
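Source Traceability and Cross-Source Validation can likewise be made mechanical. The sketch below is one possible shape, assuming nothing about the agent's real tool API: `SourceLedger` separates sources actually retrieved from anything cited from prior knowledge, and `cross_validate` classifies a claim by checking it against every retrieved source. The class and function names are illustrative, not part of the skill.

```python
from dataclasses import dataclass, field

@dataclass
class SourceLedger:
    """Tracks which sources were actually retrieved during this run."""
    retrieved: dict = field(default_factory=dict)  # url -> content text

    def record(self, url, content):
        self.retrieved[url] = content

    def cite(self, url):
        # Only retrieved sources may be cited; anything else is flagged
        # so prior-knowledge references cannot masquerade as retrievals.
        if url not in self.retrieved:
            return f"[UNVERIFIED: {url} was never retrieved]"
        return f"[source: {url}]"

def cross_validate(claim_check, sources):
    """Classify a claim by applying claim_check (a predicate on source
    text) to every source: consensus, contradicted, unsupported, or
    insufficient when fewer than two independent sources exist."""
    votes = [claim_check(text) for text in sources.values()]
    if len(votes) < 2:
        return "insufficient"
    if all(votes):
        return "consensus"
    if any(votes):
        return "contradicted"
    return "unsupported"
```

A "contradicted" or "insufficient" result should surface in the thinking block as an explicit information gap, not be smoothed over in the final answer.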