# LiveAvatar Integration
LiveAvatar gives your product a human face — real-time, lip-synced video avatars that speak, react, and maintain eye contact. This skill assesses what you have, recommends the best integration path, and walks you through building it.
## Step 1: Discover What the User Has
Before recommending a path, gather context. Check the codebase and the conversation for signals, and do not ask questions the codebase already answers.
### Signals to look for in the codebase
Scan for these automatically — do not ask the user if you can detect them:
| Signal | Where to look | What it means |
|---|---|---|
| OpenAI / Anthropic / LLM SDK imports | package.json, requirements.txt, imports | User has their own LLM |
| ElevenLabs / PlayHT / Deepgram TTS SDK | dependencies, imports | User has their own TTS |
| Deepgram / Whisper / AssemblyAI STT SDK | dependencies, imports | User has their own STT |
| LiveKit SDK (livekit-server-sdk, @livekit/) | dependencies | User has LiveKit infra |
| Agora SDK | dependencies | User has Agora infra |
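The detection step above can be sketched as a small scanner. This is a minimal illustration, not part of the skill itself: the `SIGNALS` map, the `detect_signals` helper, and the exact name patterns are assumptions chosen to mirror the table; a real implementation would also scan import statements in source files.

```python
import json
import re
from pathlib import Path

# Hypothetical signal map: dependency-name pattern -> what its presence implies.
# Patterns approximate the table above; adjust to the SDKs you actually care about.
SIGNALS = {
    r"openai|anthropic": "User has their own LLM",
    r"elevenlabs|playht": "User has their own TTS",
    r"deepgram|whisper|assemblyai": "User has their own STT (or TTS, for Deepgram)",
    r"livekit": "User has LiveKit infra",
    r"agora": "User has Agora infra",
}

def detect_signals(project_root: str) -> set:
    """Scan package.json and requirements.txt for known SDK names."""
    found = set()
    root = Path(project_root)

    # Node projects: check dependencies and devDependencies in package.json.
    pkg = root / "package.json"
    if pkg.exists():
        data = json.loads(pkg.read_text())
        deps = {**data.get("dependencies", {}), **data.get("devDependencies", {})}
        for name in deps:
            for pattern, meaning in SIGNALS.items():
                if re.search(pattern, name, re.IGNORECASE):
                    found.add(meaning)

    # Python projects: check package names in requirements.txt.
    reqs = root / "requirements.txt"
    if reqs.exists():
        for line in reqs.read_text().splitlines():
            pkg_name = re.split(r"[=<>\[ ]", line.strip(), maxsplit=1)[0]
            for pattern, meaning in SIGNALS.items():
                if pkg_name and re.search(pattern, pkg_name, re.IGNORECASE):
                    found.add(meaning)

    return found
```

Running this before asking the user any questions keeps the conversation focused on the gaps the codebase cannot answer.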
### Related skills