survey - Cross-Platform Problem-Space Scan

Keyword: survey
Platforms: OMC / OMX / OHMG / Claude Code / Codex CLI / Gemini CLI / OpenCode

Survey the landscape before planning, coding, or committing to a direction.
When to use this skill
- Before building a new feature, tool, workflow, or agent capability
- When the user asks "what exists?", "scan the landscape", "research this space", or "survey solutions"
- When you need problem context, current workarounds, and solution gaps before /plan, omg, ralph, or implementation
- When the topic spans multiple agent platforms and you need a single vendor-neutral picture
Do not use this skill when
- The user already knows the solution and wants implementation now