# Image Edit — Pro Pack on RunComfy
runcomfy.com · Nano Banana Edit · GPT Image 2 Edit · Flux Kontext · Z-Image Inpaint · GitHub
Image edit, intent-routed. This skill doesn't lock you to one model — it picks the right edit model in the RunComfy catalog based on what the user actually wants: batch identity-preservation, multilingual text rewrite, single-shot precise edit, or mask-driven region replacement.
```sh
npx skills add agentspace-so/runcomfy-skills --skill image-edit -g
```
## Pick the right model for the user's intent
| User intent | Model | Why |
|---|---|---|
| Batch edit 1–20 images consistently (SKU gallery, A/B variants) | Nano Banana Edit | Up to 20 input images per call; locked aspect/resolution for series |
| Swap background, preserve subject identity | Nano Banana Edit | Strong identity preservation under "keep X unchanged" prompts |
| Localized object removal / addition with spatial language ("the left object", "upper-right corner") | Nano Banana Edit | Honors directional spatial scope |
| Multilingual / non-Latin in-image text rewrite (Japanese kana, Cyrillic, Arabic) | GPT Image 2 Edit | Strongest in class for multilingual typography |
| Multi-reference composition (subject from img1, scene from img2, palette from img3) | GPT Image 2 Edit | Numbered refs route cues correctly |
| Single-shot precise edit with strict prompt adherence | Flux Kontext | Applies one targeted instruction while leaving the rest of the frame stable |
| Mask-driven region replacement | Z-Image Inpaint | Edits only inside the supplied mask; everything outside it is untouched |
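The routing in the table above can be sketched as a keyword-to-model lookup. This is a minimal illustration, not the skill's actual implementation: the model names come from the table, but the `ROUTES` keywords and the `pick_model` function are hypothetical.

```python
# Hypothetical sketch of intent-based model routing.
# Model names are from the routing table; keywords are illustrative only.
ROUTES = [
    (("batch", "gallery", "variants", "consistent"), "Nano Banana Edit"),
    (("background", "identity", "keep"), "Nano Banana Edit"),
    (("kana", "cyrillic", "arabic", "text", "typography"), "GPT Image 2 Edit"),
    (("subject from", "scene from", "palette from"), "GPT Image 2 Edit"),
    (("mask", "inpaint"), "Z-Image Inpaint"),
]

def pick_model(request: str, default: str = "Flux Kontext") -> str:
    """Return the first model whose keywords appear in the request.

    Falls back to a single-shot precise-edit model when nothing matches.
    """
    text = request.lower()
    for keywords, model in ROUTES:
        if any(k in text for k in keywords):
            return model
    return default
```

In practice the skill would route on richer signals than substrings (image count, presence of a mask, language of in-image text), but the shape is the same: match the user's intent first, then pick the model.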
## Related skills