examples-auto-run
What it does
- Runs `pnpm build && pnpm -r build-check` first.
- Runs `pnpm examples:start-all` in auto-input mode (interactive prompts are auto-answered; HITL/MCP/apply-patch requests are auto-approved).
- Executes the starts in parallel (default concurrency 4) and pipes each start's stdout/stderr into its own log file under `.tmp/examples-start-logs/`.
- Provides start/stop/status/logs/tail helpers via `run.sh`.
- If the Codex session ends (no disown/nohup), the child processes receive SIGHUP and exit; `stop` is also available to clean up manually.
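The "parallel starts, one log file each" behavior described above can be sketched roughly as follows. This is a minimal illustration, not the actual `run.sh`: the command list, the log-file naming scheme, and the use of `xargs -P` for the concurrency cap are all assumptions made for the example.

```shell
#!/usr/bin/env sh
# Sketch only: run a list of commands in parallel (capped at $CONCURRENCY),
# sending each command's stdout/stderr to its own log file.
LOG_DIR=.tmp/examples-start-logs
CONCURRENCY=4
mkdir -p "$LOG_DIR"

# One command per input line; xargs -P limits how many run at once.
# Each command's output lands in a log file named after the command
# (spaces and slashes replaced with underscores).
printf '%s\n' \
  'echo example-one' \
  'echo example-two' \
  'echo example-three' |
xargs -P "$CONCURRENCY" -I {} sh -c \
  'cmd="$1"; name=$(printf %s "$cmd" | tr " /" "__"); eval "$cmd" >".tmp/examples-start-logs/$name.log" 2>&1' _ {}
```

The real script additionally auto-answers prompts and tracks the child PIDs so the `status`/`stop` helpers can find them; this sketch only shows the concurrency-plus-logging core.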
Usage

```shell
# Start (auto mode, concurrency=4 by default)
.agents/skills/examples-auto-run/scripts/run.sh start [extra args to examples:start-all]

# If you invoke the skill name alone ($examples-auto-run):
# - when `.tmp/examples-rerun.txt` exists and is non-empty, it runs `rerun` automatically
# - otherwise it runs the default `start` command
```