sora
AI video generation and management for Sora models with character references, editing, and batch workflows.
- Supports video creation, editing, extension, character reference uploads, and asset downloads (video/thumbnail/spritesheet) via a bundled CLI with structured prompt augmentation
- Defaults to the `sora-2` model; supports `sora-2-pro` for higher fidelity and larger resolutions up to 1920x1080
- Handles async job polling, local multi-job queues, and official Batch API planning for offline rendering pipelines
- Enforces content guardrails: no real people, copyrighted material, or content unsuitable for under-18 audiences; requires `OPENAI_API_KEY` and Sora API access
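The async job polling mentioned above can be pictured as a capped-backoff loop. A minimal sketch, assuming a hypothetical `get_status` callable standing in for however status is fetched (e.g. one call to the bundled CLI's `status` command); this is illustrative, not the skill's actual implementation:

```python
import time

TERMINAL = {"completed", "failed", "cancelled"}

def poll_job(get_status, job_id, interval=2.0, max_interval=30.0, timeout=600.0):
    """Poll a job until it reaches a terminal state, with capped exponential backoff.

    `get_status` is a placeholder for the real status fetch; it takes a
    job id and returns a status string.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status(job_id)
        if status in TERMINAL:
            return status
        time.sleep(interval)
        interval = min(interval * 2, max_interval)  # back off between polls
    raise TimeoutError(f"job {job_id} did not finish within {timeout}s")
```

The backoff cap keeps long renders from hammering the API while still returning quickly for short jobs.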
Sora Video Generation Skill
Creates or manages Sora video jobs for the current project (product demos, marketing spots, cinematic shots, social clips, UI mocks). Defaults to `sora-2` with structured prompt augmentation and prefers the bundled CLI for deterministic runs. Note: `$sora` is a skill tag in prompts, not a shell command.
When to use
- Generate a new video clip from a prompt
- Create a reusable character reference from a short non-human source clip
- Edit an existing generated video with a targeted prompt change
- Extend a completed video with a continuation prompt
- Poll status, list jobs, or download assets (video/thumbnail/spritesheet)
- Run a local multi-job queue now, or plan a true Batch API submission for offline rendering
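The local multi-job queue (as opposed to a true Batch API submission) amounts to a bounded fan-out over prompts. A minimal sketch, assuming a hypothetical `submit_job` callable for whatever creates one video job, such as a single CLI invocation; the name and signature are illustrative:

```python
from concurrent.futures import ThreadPoolExecutor

def run_local_batch(submit_job, prompts, max_parallel=3):
    """Fan out several render jobs locally with a concurrency cap.

    `submit_job` is a placeholder for creating one job from one prompt;
    results come back in the same order as `prompts`.
    """
    with ThreadPoolExecutor(max_workers=max_parallel) as pool:
        return list(pool.map(submit_job, prompts))
```

A concurrency cap matters here because each job consumes API quota and local polling capacity; the official Batch API path trades this immediacy for offline scheduling.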
Decision tree
- If the user has a short non-human reference clip they want to reuse across shots → `create-character`
- If the user has a completed video and wants the next beat/continuation → `extend`
- If the user has a completed video and wants a targeted change while preserving the shot → `edit`
- If the user has a video id and wants status or assets → `status`, `poll`, or `download`
- If the user needs many renders immediately inside Codex → `create-batch` (local fan-out, not the Batch API)
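The decision tree above is just a routing of user intent to a subcommand. A minimal sketch using the subcommand names listed above; the boolean flags and the default `"create"` branch are illustrative assumptions, not part of the skill's documented interface:

```python
def route(has_reference_clip=False, wants_continuation=False,
          wants_targeted_edit=False, wants_status_or_assets=False,
          needs_many_renders=False):
    """Map the decision-tree questions to a subcommand name.

    Mirrors the bullets above: each flag is one branch, checked in the
    same top-to-bottom order.
    """
    if has_reference_clip:
        return "create-character"
    if wants_continuation:
        return "extend"
    if wants_targeted_edit:
        return "edit"
    if wants_status_or_assets:
        return "status"  # or `poll` / `download`, depending on the ask
    if needs_many_renders:
        return "create-batch"  # local fan-out, not the Batch API
    return "create"  # assumed default: generate a new clip from a prompt
```

Checking the branches in order keeps the same precedence as the written tree, so a request that matches several conditions resolves the same way a reader of the list would resolve it.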