# Human Avatar — Alibaba Cloud AI Video & Speech

## Capabilities overview
| Capability | Script | Model / API | Region | Summary |
|---|---|---|---|---|
| LivePortrait | `live_portrait.py` | `liveportrait` | cn-beijing | Portrait + audio/video → talking video, two steps |
| EMO | `portrait_animate.py` | `emo-v1` | cn-beijing | Portrait + audio → talking head, detect + generate |
| AA (AnimateAnyone) | `animate_anyone.py` | `animate-anyone-gen2` | cn-beijing | Full-body animation: detect → motion template → video |
| T2I | `text_to_image.py` | `wan2.x-t2i` | Multi-region | Text → image; default `wan2.2-t2i-flash` |
| I2V | `image_to_video.py` | `wan2.x-i2v` | Multi-region | Image → video; T2I→I2V pipeline supported; default `wan2.2-i2v-flash` |
| Qwen TTS | `qwen_tts.py` | `qwen3-tts-*` | cn-beijing / Singapore | Text → speech; auto model/voice by scene |
| LingMou | `avatar_video.py` | LingMou SDK | cn-beijing | Template-based digital-human broadcast video |
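The T2I→I2V pipeline in the table chains two jobs: generate an image from a text prompt, then animate that image into a video. A minimal sketch of the hand-off, assuming simplified request bodies; the helper names (`build_t2i_request`, `build_i2v_request`), the field layout, and the default model names used here are illustrative assumptions, not the official API schema:

```python
# Sketch of the T2I -> I2V hand-off: build the two request payloads a client
# would submit sequentially. Field names and defaults below are assumptions.

T2I_DEFAULT = "wan2.2-t2i-flash"  # assumed default "flash" T2I model
I2V_DEFAULT = "wan2.2-i2v-flash"  # assumed default "flash" I2V model


def build_t2i_request(prompt: str, model: str = T2I_DEFAULT,
                      size: str = "1024*1024") -> dict:
    """Request body for the text-to-image step."""
    return {
        "model": model,
        "input": {"prompt": prompt},
        "parameters": {"size": size, "n": 1},
    }


def build_i2v_request(image_url: str, prompt: str = "",
                      model: str = I2V_DEFAULT) -> dict:
    """Request body for the image-to-video step, seeded by the T2I output."""
    return {
        "model": model,
        "input": {"img_url": image_url, "prompt": prompt},
    }


# Chain the two steps: the image URL returned by the first job becomes the
# input of the second. (URL shown is a placeholder, not a real result.)
t2i = build_t2i_request("portrait of a news anchor, studio lighting")
i2v = build_i2v_request("https://example.com/generated.png", "subtle head turn")
```

Both jobs are long-running, so in practice each request would be submitted asynchronously and polled for completion before the next step starts.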
## Quick selection guide

## Related skills