# Seedance 2.0 (ByteDance)

Seedance 2.0 is the ByteDance Seed team's unified multimodal video-plus-audio model (released Feb 2026; globally available via partner APIs since April 2026). It is the preferred premium default for cinematic, trailer, teaser, and motion-led work inside OpenMontage whenever any supporting gateway is configured. OpenMontage wraps four gateways directly:

- `seedance_video` → fal.ai
- `seedance_replicate` → Replicate
- `runway_video` with `model="seedance_2.0"` → Runway
- `higgsfield_video` with `model="seedance_2.0"` → Higgsfield

BytePlus, Freepik, and HeyGen-Video-Agent wrappers are on the roadmap. The scoring engine deduplicates by `provider="seedance"`, so whichever gateway the user has configured wins automatically. Agents should pass `preferred_provider="seedance"` to `video_selector` (or let the scorer pick) rather than routing to a specific gateway by name.
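A minimal sketch of that dedup-and-select behavior, assuming a simple gateway registry (the data shapes and `select_gateway` helper are illustrative, not OpenMontage's actual internals; the gateway names and `provider="seedance"` key come from the description above):

```python
# Hypothetical sketch: all four Seedance gateways share provider="seedance",
# so the scorer treats them as one model and returns whichever gateway
# the user has actually configured.
GATEWAYS = [
    {"name": "seedance_video",     "provider": "seedance"},  # fal.ai
    {"name": "seedance_replicate", "provider": "seedance"},  # Replicate
    {"name": "runway_video",       "provider": "seedance"},  # model="seedance_2.0"
    {"name": "higgsfield_video",   "provider": "seedance"},  # model="seedance_2.0"
]

def select_gateway(configured: set, preferred_provider: str = "seedance"):
    """Return the first configured gateway matching the preferred provider."""
    for gw in GATEWAYS:
        if gw["provider"] == preferred_provider and gw["name"] in configured:
            return gw["name"]
    return None  # scorer would fall back to its next-best provider
```

The point of keying on `provider` rather than gateway name is that an agent's request stays valid no matter which of the four wrappers the user set up.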

## Why it is the OpenMontage premium default

| Capability | Seedance 2.0 | Notes |
| --- | --- | --- |
| Single-pass native synced audio | Yes | Speech + SFX + ambience generated jointly, not post-synced |
| Multi-shot inside one generation | Yes | Multiple cuts/shots in a single prompt |
| Director-level camera control | Yes | Camera language (dolly, tilt, arc, crane, handheld) honored |
| Lip-sync from quoted dialogue | Yes | `Character says: "..."` matches mouth shapes |
| Reference conditioning | Up to 9 images + 3 video clips + 3 audio clips | 12-asset multimodal conditioning |
| Character identity consistency | Yes | Face/subject stable across shots |
| Max shot duration | 15 s | Auto by default; 4–15 s selectable |
| Resolution ceiling | 1080p on some endpoints (720p default on fal.ai) | Provider-dependent |
| Elo (Artificial Analysis) | 1269 (#1 as of Feb 2026) | Beat Veo 3, Sora 2, Runway Gen-4.5 |
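The reference-conditioning and duration limits in the table can be enforced client-side before a request ever reaches a gateway. A hedged sketch, assuming a hypothetical `build_request` helper (the field names are illustrative, not any gateway's real schema; the 9/3/3 asset caps and 4–15 s range are from the table):

```python
# Hypothetical request builder enforcing the documented limits:
# up to 9 reference images, 3 video clips, 3 audio clips; 4-15 s duration.
def build_request(prompt, images=(), videos=(), audio=(), duration=15):
    if len(images) > 9 or len(videos) > 3 or len(audio) > 3:
        raise ValueError("reference cap exceeded (9 images / 3 videos / 3 audio)")
    if not 4 <= duration <= 15:
        raise ValueError("shot duration must be 4-15 s")
    return {
        "prompt": prompt,               # quoted dialogue drives lip-sync
        "reference_images": list(images),
        "reference_videos": list(videos),
        "reference_audio": list(audio),
        "duration_s": duration,
    }
```

Validating locally keeps a 12-asset multimodal request from failing late at the provider after assets have already been uploaded.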

Switch away only for a specific reason: strict budget (use the fast variant or LTX), user-preferred provider (VEO/Sora/Kling), or a stylistic fit that favors another model.
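That switch-away policy can be sketched as a small decision function (function and model names here are hypothetical placeholders; the three escape hatches are the ones listed above):

```python
# Illustrative routing sketch: stay on Seedance 2.0 unless one of the
# three stated reasons applies, checked in order of user intent.
def pick_model(budget_strict=False, user_pref=None, style_fit=None):
    if user_pref:                    # explicit user preference (VEO / Sora / Kling)
        return user_pref
    if budget_strict:                # strict budget: fast variant or LTX
        return "seedance_2.0_fast"
    if style_fit:                    # stylistic fit favoring another model
        return style_fit
    return "seedance_2.0"            # premium default
```

Checking the explicit user preference first mirrors the guidance: a user-chosen provider overrides both budget and style heuristics.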
