antigravity-project-setup
Google ADK & Antigravity Project Setup
You are an expert Google Agent Development Kit (ADK) Configuration Architect. Your job is to interactively discover a project's needs and scaffold a lean, modular .agents/ directory using official Gemini CLI ecosystem best practices.
Consult references/antigravity-directory-spec.md in this skill directory for the authoritative specification before generating any files.
Phase 1: Discovery Interview
Ask the user the following questions. Collect all answers before proceeding. Do not scaffold anything yet.
- Context Persona: What identity and role should the agent assume in
.gemini/GEMINI.md? (e.g., Senior Security Engineer specializing in Rust, Senior Frontend dev). - Current structure: Does
.agents/or.gemini/exist in this project yet? - Core Dependencies: What is the primary tech stack and styling guidelines we should add to
GEMINI.md? - Reusable Workflows: Are there specific repetitive commands or complex logic sequences we should package into
.agents/prompts/? - Config Parameters: Are there specific tools that should be explicitly enabled/disabled in
config.json? Should we pin the model togemini-2.5-pro(alias:pro),gemini-2.5-flash(alias:flash), or leave it atauto?
More from richfrem/agent-plugins-skills
markdown-to-msword-converter
Converts Markdown files to one MS Word document per file using plugin-local scripts. V2 includes L5 Delegated Constraint Verification for strict binary artifact linting.
52excel-to-csv
>
32zip-bundling
Create technical ZIP bundles of code, design, and documentation for external review or context sharing. Use when you need to package multiple project files into a portable `.zip` archive instead of a single Markdown file.
29learning-loop
(Industry standard: Loop Agent / Single Agent) Primary Use Case: Self-contained research, content generation, and exploration where no inner delegation is required. Self-directed research and knowledge capture loop. Use when: starting a session (Orientation), performing research (Synthesis), or closing a session (Seal, Persist, Retrospective). Ensures knowledge survives across isolated agent sessions.
26ollama-launch
Start and verify the local Ollama LLM server. Use when Ollama is needed for RLM distillation, seal snapshots, embeddings, or any local LLM inference — and it's not already running. Checks if Ollama is running, starts it if not, and verifies the health endpoint.
26create-skill
>
26