Image-to-Video — Pro Pack on RunComfy

runcomfy.com · HappyHorse I2V · Wan 2.7 · Seedance 2.0 Pro · GitHub

Image-to-video, intent-routed. This skill doesn't lock you to one model. It picks the right i2v model from the RunComfy catalog based on what the user actually wants: portrait animation, custom-voiceover lip-sync, or multi-modal composition.
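The routing idea can be sketched as a simple keyword match over the intents listed in the table below. This is a minimal illustration, not the skill's actual routing logic, which may weigh intent more richly:

```python
def pick_model(intent: str) -> str:
    """Route a free-text i2v request to a model from this pack.

    Illustrative only: real routing in the skill may differ.
    """
    intent = intent.lower()
    # Custom audio tracks and dubbing go to Wan 2.7 (accepts audio_url).
    if any(k in intent for k in ("voiceover", "lip-sync", "lipsync", "dub")):
        return "Wan 2.7"
    # Portrait animation, product motion, and ambient audio stay on HappyHorse.
    return "HappyHorse 1.0 I2V"


print(pick_model("Animate this portrait, keep identity stable"))
# HappyHorse 1.0 I2V
print(pick_model("Lip-sync the face to my voiceover track"))
# Wan 2.7
```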

npx skills add agentspace-so/runcomfy-skills --skill image-to-video -g

Pick the right model for the user's intent

| User intent | Model | Why |
| --- | --- | --- |
| Animate a portrait, keep identity stable | HappyHorse 1.0 I2V | #1 on Artificial Analysis Arena (Elo 1392); strong facial fidelity |
| Product reveal / 360 / macro motion | HappyHorse 1.0 I2V | Geometry preservation + smooth camera moves |
| Native synchronized ambient audio in one pass | HappyHorse 1.0 I2V | In-pass audio synthesis |
| Animate and lip-sync to a custom voiceover track | Wan 2.7 + audio_url | Accepts your own MP3/WAV (3–30s, ≤15MB) and drives lip-sync to it |
| Multi-language dub variants (same image, different audio per call) | Wan 2.7 + audio_url | Same shot, swap audio_url per language |
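The multi-language dub pattern above, one call per language with only audio_url changing, can be sketched as follows. The payload field names here are hypothetical placeholders, not the actual RunComfy request schema:

```python
def dub_requests(image_url: str, audio_by_lang: dict[str, str]) -> list[dict]:
    """Build one Wan 2.7 request payload per language.

    The same source image is reused; only audio_url varies per call.
    Field names ("model", "image_url", "audio_url", "lang") are
    illustrative assumptions, not the real API schema.
    """
    return [
        {
            "model": "Wan 2.7",
            "image_url": image_url,
            "audio_url": audio_url,  # per-language voiceover track
            "lang": lang,
        }
        for lang, audio_url in audio_by_lang.items()
    ]


requests = dub_requests(
    "shot.png",
    {"en": "vo_en.mp3", "fr": "vo_fr.mp3", "de": "vo_de.mp3"},
)
# Three payloads sharing the same image, each with its own audio track.
```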