# LM Studio CLI
Use this skill when the real job is operating LM Studio itself: confirming whether `lms` exists, checking whether a local or remote LM Studio server is actually running, discovering exact model IDs, deciding whether the OpenAI-compatible endpoints are enough, and wiring a downstream tool to the correct base URL and model identifier.
Do not use this as a generic local-LLM comparison skill. Route broad provider comparison or platform selection to research/survey work. Route downstream-tool-specific scanning or appsec operations to that tool's skill (for example strix) once LM Studio itself is verified.
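The checks above reduce to a short shell pass. This is a minimal sketch, assuming LM Studio's default local server address `http://localhost:1234` and the `lms server status` subcommand; `LMSTUDIO_BASE_URL` is an illustrative variable (not something LM Studio reads) that you would point at the authorized remote host in a headless setup.

```bash
#!/usr/bin/env bash
# Minimal verification sketch (assumptions: default port 1234, lms on PATH).
set -euo pipefail

BASE_URL="${LMSTUDIO_BASE_URL:-http://localhost:1234}"

# 1. Is the lms CLI installed and on PATH?
if command -v lms >/dev/null 2>&1; then
  echo "lms found at $(command -v lms)"
  lms server status || true   # reports whether the local API server is running
else
  echo "lms not found; install LM Studio or add its CLI to PATH" >&2
fi

# 2. Is a server (local or remote) answering on the OpenAI-compatible route?
if curl -sf "${BASE_URL}/v1/models" >/dev/null; then
  echo "LM Studio server reachable at ${BASE_URL}"
else
  echo "no LM Studio server responding at ${BASE_URL}" >&2
fi
```
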
## When to use this skill
- A user mentions LM Studio, `lms`, or an LM Studio server directly
- You need to verify whether LM Studio is running locally or on another authorized host
- You need the exact model IDs returned by `/v1/models` before wiring another tool (see the sketch after this list)
- You need to choose between LM Studio's OpenAI-compatible endpoints and its native REST API
- You need to load, inspect, or confirm models before an agent or CLI can use them
- A downstream tool works with OpenAI-compatible endpoints, but the user needs LM Studio-specific setup help
- A remote/headless LM Studio workflow is failing and you need a deterministic verification path
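For the model-ID and wiring cases above, the OpenAI-compatible listing is usually enough. The sketch below assumes `jq` is installed and uses illustrative variable names (`OPENAI_BASE_URL`, `OPENAI_MODEL`); substitute whatever configuration the downstream tool actually reads.

```bash
#!/usr/bin/env bash
# Sketch: discover exact model IDs and hand them to an OpenAI-compatible tool.
# Assumes the server was already verified as reachable.
set -euo pipefail

BASE_URL="${LMSTUDIO_BASE_URL:-http://localhost:1234}"

# The id fields are the exact strings a downstream tool must send,
# not the display names shown in the LM Studio UI.
curl -sf "${BASE_URL}/v1/models" | jq -r '.data[].id'

# Pick one and export the pair the downstream tool needs (names are illustrative).
MODEL_ID="$(curl -sf "${BASE_URL}/v1/models" | jq -r '.data[0].id')"
export OPENAI_BASE_URL="${BASE_URL}/v1"
export OPENAI_MODEL="${MODEL_ID}"
echo "wired ${OPENAI_MODEL} @ ${OPENAI_BASE_URL}"
```
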
## Instructions

### Step 1: Identify the operating mode
Classify the request before touching commands:
## Related skills