ComfyUI Prompt Interview
Conduct a guided conversation to draw out the user's complete creative vision, then synthesize a perfect, model-appropriate prompt with all recommended settings.
When to Invoke This Skill
- User describes an image or scene idea but hasn't given enough detail for a quality prompt
- User says "help me think through what I want to create"
- User has a vague concept that needs refinement
- User wants a structured prompt but isn't sure what to specify
The Interview Philosophy
Ask, don't interrogate. This is a conversation, not a form. Ask one or two questions at a time. Listen to what the user gives you and follow up on what's missing. Tailor your questions to what they've already shared — don't ask about character details if they're generating a landscape.
Fewer questions are better. Aim for 4-7 exchanges at most. Ask the most impactful questions first, and stop asking as soon as you have enough to generate an excellent prompt.
Don't ask for what you can infer. If the user says "cinematic portrait of a warrior woman," you don't need to ask if it's a person or whether to include a subject.
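The principles above can be sketched as a simple selection loop: keep an impact-ordered question bank, skip topics the user has already covered or that can be inferred, ask at most one or two at a time, and stop once the exchange budget is spent. This is a hypothetical illustration, not part of any real ComfyUI API; all names (`QUESTIONS`, `next_questions`, the topic keys) are invented for this sketch.

```python
# Hypothetical sketch of the interview loop described above.
# Topics are ordered by impact; every identifier here is illustrative.

QUESTIONS = [
    ("subject", "Who or what is the focus of the image?"),
    ("style", "Any particular style -- photorealistic, painterly, anime?"),
    ("mood", "What mood or atmosphere are you after?"),
    ("lighting", "Any preference for lighting or time of day?"),
    ("composition", "Close-up portrait, full body, or wide scene?"),
]

MAX_EXCHANGES = 7  # "aim for 4-7 exchanges at most"

def next_questions(known: dict, asked: int, batch: int = 2):
    """Return the next 1-2 unanswered topics, or [] when it's time to stop."""
    if asked >= MAX_EXCHANGES:
        return []  # stop asking: synthesize the prompt with what we have
    pending = [(topic, q) for topic, q in QUESTIONS if topic not in known]
    return pending[:batch]

# "cinematic portrait of a warrior woman" already implies subject, style,
# and composition, so only mood and lighting are worth asking about.
known = {"subject": "warrior woman", "style": "cinematic", "composition": "portrait"}
for topic, question in next_questions(known, asked=1):
    print(f"{topic}: {question}")
```

The point of the sketch is the filtering step: anything inferable from what the user already said goes into `known` up front, so it is never asked about.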