wan-2-7

Originally from agentspace-so/runcomfy-skills

Installation

npx skills add agentspace-so/runcomfy-skills --skill wan-2-7 -g

SKILL.md

Wan 2.7 — Pro Pack on RunComfy

runcomfy.com · Text-to-video · GitHub

Wan-AI's Wan 2.7 — flagship video model with multi-reference conditioning and audio-driven lip-sync — hosted on the RunComfy Model API.
When to pick this model (vs siblings)
| You want | Use |
|---|---|
| Lip-sync video to an audio track you supply | Wan 2.7 (`audio_url`; see the sketch below) |
| Multi-reference fine motion control | Wan 2.7 |
| Smooth transitions, accurate motion physics | Wan 2.7 |
| The current #1 video model in blind-vote rankings | HappyHorse 1.0 |
| Multi-modal cinematic with image+video+audio refs + in-pass voice generation | Seedance 2.0 Pro |
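For orientation, here is a minimal sketch of a lip-sync request against the RunComfy Model API. Only the `audio_url` field comes from the table above; the endpoint paths, the other payload fields (`prompt`, `reference_images`), the response fields (`job_id`, `state`, `video_url`), and the `RUNCOMFY_API_KEY` environment variable are hypothetical placeholders, so check RunComfy's API reference for the actual contract.

```python
# Minimal sketch of a Wan 2.7 lip-sync job via the RunComfy Model API.
# ASSUMPTIONS: endpoint paths, payload fields (other than audio_url),
# and response fields are illustrative, not RunComfy's documented API.
import os
import time

import requests

API_BASE = "https://api.runcomfy.com"                  # assumed base URL
SUBMIT_URL = f"{API_BASE}/v1/models/wan-2-7/generate"  # hypothetical path


def generate_lipsync_video(prompt: str, reference_image_urls: list[str],
                           audio_url: str) -> str:
    """Submit a generation job and poll until a video URL is available."""
    headers = {"Authorization": f"Bearer {os.environ['RUNCOMFY_API_KEY']}"}
    payload = {
        "prompt": prompt,
        "reference_images": reference_image_urls,  # multi-reference conditioning
        "audio_url": audio_url,                    # audio track driving lip-sync
    }
    resp = requests.post(SUBMIT_URL, json=payload, headers=headers, timeout=30)
    resp.raise_for_status()
    job_id = resp.json()["job_id"]                 # hypothetical response field

    # Poll the (hypothetical) job endpoint until the render finishes.
    while True:
        status = requests.get(f"{API_BASE}/v1/jobs/{job_id}",
                              headers=headers, timeout=30).json()
        if status["state"] == "completed":
            return status["video_url"]
        if status["state"] == "failed":
            raise RuntimeError(status.get("error", "generation failed"))
        time.sleep(5)
```

Passing several `reference_image_urls` mirrors the multi-reference row above; for plain text-to-video, drop `reference_images` and `audio_url` from the payload.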
Related skills