code-review-context
Codex maintains a context (a history of messages) that is sent to the model with each inference request.
- No history rewrite - the context must be built up incrementally.
- Avoid frequent changes to the context; they cause cache misses.
- No unbounded items - everything injected in the model context must have a bounded size and a hard cap.
- No items larger than 10K tokens.
- Flag any new individual item that can exceed 1K tokens as P0. These need additional manual review.
- All injected fragments must be defined as structs in core/context and implement the ContextualUserFragment trait.
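The rules above can be sketched in code. This is a minimal, hypothetical sketch of what a fragment type in core/context might look like; the trait name ContextualUserFragment comes from the text, but its methods (render, token_estimate), the DiffSummary struct, and the rough 4-characters-per-token estimate are all assumptions for illustration, not the actual codex API.

```rust
// Assumed constants derived from the rules above.
const HARD_CAP_TOKENS: usize = 10_000; // no item larger than 10K tokens
const P0_REVIEW_THRESHOLD: usize = 1_000; // items over 1K tokens need manual review

/// Hypothetical shape of the ContextualUserFragment trait: every injected
/// fragment renders to text and reports a bounded token estimate.
trait ContextualUserFragment {
    fn render(&self) -> String;
    /// Estimated token count of the rendered fragment.
    fn token_estimate(&self) -> usize;
    /// Enforce the hard cap before the fragment enters the model context.
    fn validate(&self) -> Result<(), String> {
        let n = self.token_estimate();
        if n > HARD_CAP_TOKENS {
            return Err(format!("fragment of {n} tokens exceeds the 10K hard cap"));
        }
        Ok(())
    }
    /// P0 rule: items that can exceed 1K tokens require manual review.
    fn needs_manual_review(&self) -> bool {
        self.token_estimate() > P0_REVIEW_THRESHOLD
    }
}

/// Example fragment type (assumed, for illustration only).
struct DiffSummary {
    text: String,
}

impl ContextualUserFragment for DiffSummary {
    fn render(&self) -> String {
        self.text.clone()
    }
    // Crude estimate: roughly 4 characters per token.
    fn token_estimate(&self) -> usize {
        self.text.len() / 4
    }
}

fn main() {
    let small = DiffSummary { text: "fn main() {}".into() };
    assert!(small.validate().is_ok());
    assert!(!small.needs_manual_review());

    // ~12,500 estimated tokens: rejected by the 10K hard cap.
    let big = DiffSummary { text: "x".repeat(50_000) };
    assert!(big.validate().is_err());
    println!("ok");
}
```

The point of the default validate and needs_manual_review methods is that the caps are enforced by the trait itself, so every fragment type gets the same bounds without reimplementing them.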