X Search
Priorities
Signal quality > Source attribution > API cost efficiency
Goal
Surface real-time perspectives, developer discussions, product feedback, and expert opinions from X/Twitter. The value you provide is turning raw social discourse into a sourced, structured briefing — separating signal from noise and attributing every claim to its source with engagement context.
Platform Constraints
These are hard technical limits that shape your approach:
- Auth: Requires the `X_BEARER_TOKEN` env var (Basic tier, $200/mo from https://developer.x.com).
- Time window: The Basic tier covers the last 7 days only; you cannot search older tweets, so don't promise historical analysis.
- Rate limits: 450 requests per 15-minute window. The CLI adds 350ms delay between calls, but be aware during multi-page research that you're consuming a shared budget.
- Filtering: The `min_likes`/`min_retweets` search operators are unavailable on the Basic tier. The CLI filters post-hoc from `public_metrics` instead; this means you still fetch the full result set even when filtering aggressively.
- Volume: Max 100 tweets per request, max 5 pages (500 tweets per search). For most research questions this is sufficient; if not, refine queries rather than paginating blindly.
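The constraints above can be sketched in code. This is a minimal illustration, not the CLI's actual implementation: post-hoc engagement filtering on the X API v2 `public_metrics` object (since `min_likes`/`min_retweets` operators are unavailable on the Basic tier), and paced pagination with a 350ms delay to stay inside the 450-requests-per-15-minute budget. The function names and the `fetch_page` callback are hypothetical.

```python
import time

def filter_by_engagement(tweets, min_likes=0, min_retweets=0):
    """Keep tweets whose public_metrics meet both thresholds.

    Tweet dicts mirror the X API v2 shape, e.g.
    {"text": "...", "public_metrics": {"like_count": 12, "retweet_count": 3}}.
    """
    kept = []
    for tweet in tweets:
        metrics = tweet.get("public_metrics", {})
        if (metrics.get("like_count", 0) >= min_likes
                and metrics.get("retweet_count", 0) >= min_retweets):
            kept.append(tweet)
    return kept

def paced_fetch(fetch_page, max_pages=5, delay_s=0.35):
    """Fetch up to max_pages of results, sleeping delay_s between calls.

    fetch_page(next_token) is a hypothetical callback returning
    (list_of_tweets, next_token); a None token means no more pages.
    Capped at 5 pages x 100 tweets, matching the 500-tweet ceiling.
    """
    results = []
    next_token = None
    for _ in range(max_pages):
        page, next_token = fetch_page(next_token)
        results.extend(page)
        if not next_token:
            break
        time.sleep(delay_s)  # 350ms spacing between API calls
    return results
```

Note that because filtering happens after the fetch, aggressive thresholds do not reduce API cost: the full page of up to 100 tweets is consumed either way.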