stable-diffusion-image-generation

Summary

Text-to-image generation and image transformation with Stable Diffusion models via HuggingFace Diffusers.

  • Supports multiple generation modes: text-to-image, image-to-image translation, inpainting, outpainting, and ControlNet spatial conditioning for precise control
  • Compatible with SD 1.5, SDXL, SD 3.0, and Flux models; includes scheduler swapping (Euler, DPM-Solver, LCM) for quality and speed trade-offs
  • LoRA adapter support for efficient style fine-tuning and multi-adapter composition with adjustable weights
  • Memory optimization tools including CPU offloading, attention slicing, xFormers integration, and VAE tiling for resource-constrained environments
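The memory-saving toggles listed above are standard methods on a loaded Diffusers pipeline. A minimal sketch of applying them together (the pipeline object itself is assumed to be already loaded):

```python
def apply_memory_savers(pipe):
    """Enable standard Diffusers low-VRAM options on a loaded pipeline."""
    pipe.enable_model_cpu_offload()  # keep submodules on CPU, move to GPU only when used
    pipe.enable_attention_slicing()  # compute attention in smaller slices
    pipe.enable_vae_tiling()         # decode large images tile by tile
    return pipe
```

These trade some speed for a much smaller VRAM footprint, which is usually the right default on consumer GPUs.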
SKILL.md

Stable Diffusion Image Generation

Comprehensive guide to generating images with Stable Diffusion using the HuggingFace Diffusers library.

When to use Stable Diffusion

Use Stable Diffusion when:

  • Generating images from text descriptions
  • Performing image-to-image translation (style transfer, enhancement)
  • Inpainting (filling in masked regions)
  • Outpainting (extending images beyond boundaries)
  • Creating variations of existing images
  • Building custom image generation workflows
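For the simplest of these workflows, text-to-image, a minimal Diffusers sketch looks like the following. The checkpoint ID, prompt, and `snap_to_multiple_of_8` helper are illustrative assumptions; running the generator downloads model weights and benefits greatly from a GPU:

```python
def text_to_image(prompt: str, out_path: str = "out.png") -> str:
    # Lazy imports so the dimension helper below works without torch/diffusers installed.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",  # illustrative SD 1.5 checkpoint
        torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32,
    ).to("cuda" if torch.cuda.is_available() else "cpu")

    image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
    image.save(out_path)
    return out_path


def snap_to_multiple_of_8(n: int) -> int:
    """SD works in a latent space downsampled 8x, so pixel dims should be multiples of 8."""
    return max(8, (n // 8) * 8)
```

`guidance_scale` controls how strongly the sampler follows the prompt; values around 7–8 are a common starting point for SD 1.5.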

Key features:

  • Text-to-Image: Generate images from natural language prompts
  • Image-to-Image: Transform existing images with text guidance
  • Inpainting: Fill masked regions with context-aware content
  • ControlNet: Add spatial conditioning (edges, poses, depth)
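The image-to-image feature above can be sketched with the dedicated img2img pipeline. The checkpoint, resize target, and `clamp_strength` helper are illustrative assumptions, not part of the skill itself:

```python
def clamp_strength(s: float) -> float:
    """Img2img strength must lie in [0.0, 1.0]."""
    return min(1.0, max(0.0, float(s)))


def stylize(init_image_path: str, prompt: str, strength: float = 0.6):
    # Lazy imports keep the clamp helper usable without torch/diffusers installed.
    import torch
    from PIL import Image
    from diffusers import StableDiffusionImg2ImgPipeline

    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",  # illustrative SD 1.5 checkpoint
        torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32,
    ).to("cuda" if torch.cuda.is_available() else "cpu")

    init = Image.open(init_image_path).convert("RGB").resize((512, 512))
    # Higher strength departs further from the input image; lower preserves more of it.
    return pipe(prompt=prompt, image=init, strength=clamp_strength(strength)).images[0]
```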
Installs: 1.0K · GitHub Stars: 27.2K · First Seen: Jan 21, 2026