ai-avatar-video
AI Avatar & Talking Head Video
Put words in a face. This skill routes across RunComfy's audio-driven avatar models — OmniHuman, Wan 2-7 with audio_url, HappyHorse, Seedance v2 — picking the right path for the user's intent and shipping the documented prompts plus the exact runcomfy run invocation for each.
runcomfy.com · Lip-sync feature · CLI docs
Powered by the RunComfy CLI
# 1. Install (see runcomfy-cli skill for details)
npm i -g @runcomfy/cli # or: npx -y @runcomfy/cli --version
# 2. Sign in
runcomfy login # or in CI: export RUNCOMFY_TOKEN=<token>
# 3. Generate an avatar video
runcomfy run <vendor>/<model>/<endpoint> \
  --input '{"prompt": "...", "audio_url": "https://...", "image_url": "https://..."}' \
  --output-dir ./out
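Quoting a JSON payload inline on the command line is error-prone; a minimal sketch of a safer pattern is to assemble the --input JSON in a variable via a quoted heredoc and inspect the command before running it. The vendor/model path and URLs below are placeholders for illustration, not documented endpoints.

```shell
#!/bin/sh
# Sketch: build the --input JSON in a variable so shell quoting stays sane.
# The 'EOF' quoting prevents the shell from expanding anything inside.
INPUT=$(cat <<'EOF'
{"prompt": "a presenter speaking to camera",
 "audio_url": "https://example.com/voice.mp3",
 "image_url": "https://example.com/face.png"}
EOF
)

# Dry run: print the command for inspection; drop the echo to execute it.
# <vendor>/<model>/<endpoint> is a placeholder, not a real route.
echo runcomfy run "<vendor>/<model>/<endpoint>" \
  --input "$INPUT" \
  --output-dir ./out
```

Dropping the leading echo executes the real call once you have substituted a real vendor/model/endpoint path and your own asset URLs.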
Related skills