google-ai

Purpose

This skill enables interaction with Google's Gemini API, providing access to the Pro, Flash, and Ultra model tiers for text generation, chat, and embeddings, with context windows of up to 1M tokens. It is designed for integrating advanced AI capabilities into applications through RESTful endpoints.

When to Use

Use this skill when you need large-context AI processing, such as summarizing long documents, generating code from detailed specs, or handling multi-turn conversations. Apply it in scenarios that call for Google-specific models, such as when OpenAI alternatives are insufficient or when integrating with Google Cloud ecosystems.

Key Capabilities

  • Access Gemini Pro for general tasks, Flash for faster, lower-cost inference, and Ultra for complex reasoning.
  • Handle contexts up to 1M tokens, suitable for processing entire books or codebases.
  • Accept multimodal inputs (text, images) via dedicated endpoints.
  • Generate embeddings for semantic search, using models such as text-embedding-004.
  • Rate limits and quotas are managed per API key, with up to 1,000 requests per minute.
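The embeddings capability above boils down to one REST call. The sketch below builds the request without sending it, so it can be inspected offline; the `v1beta` base URL and `embedContent` payload shape follow Google's public REST surface, but the exact field names and model name should be verified against the current API reference.

```python
import json
import os

# Base URL for the Gemini REST API (assumed v1beta surface).
BASE_URL = "https://generativelanguage.googleapis.com/v1beta"

def build_embed_request(text: str, model: str = "text-embedding-004"):
    """Build the URL and JSON payload for an embedContent call.

    Payload shape is an assumption mirroring the documented method;
    verify against the current Gemini API reference before use.
    """
    url = f"{BASE_URL}/models/{model}:embedContent"
    payload = {"content": {"parts": [{"text": text}]}}
    return url, payload

url, payload = build_embed_request("semantic search query")
# The API key goes in the x-goog-api-key header (or a ?key= query param).
headers = {
    "x-goog-api-key": os.environ.get("GOOGLE_API_KEY", ""),
    "Content-Type": "application/json",
}
print(url)
print(json.dumps(payload))
```

POSTing that payload with those headers returns a JSON body containing the embedding vector.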

Usage Patterns

Always authenticate by setting the $GOOGLE_API_KEY environment variable before use. In OpenClaw, invoke this skill by prefixing commands with the skill ID, e.g., google-ai generate. Requests and responses are JSON: select a model, build the request payload, send it via HTTP POST, then parse the JSON response for the output. For repeated use, cache API responses to stay within rate limits.
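The build-POST-parse pattern plus the caching advice can be combined in a short sketch. The `generateContent` URL and payload shape follow Google's public REST documentation, but the model name and the response-parsing path are assumptions to check against the current reference; the cache is a minimal in-memory dict keyed by (model, prompt), and the `send` callable is injectable so the logic can be exercised without network access.

```python
import json
import os
import urllib.request

# Base URL for the Gemini REST API (assumed v1beta surface).
BASE_URL = "https://generativelanguage.googleapis.com/v1beta"

_cache = {}  # naive in-memory cache: (model, prompt) -> generated text

def build_generate_request(prompt: str, model: str = "gemini-1.5-flash"):
    """Build the URL and JSON payload for a generateContent call."""
    url = f"{BASE_URL}/models/{model}:generateContent"
    payload = {"contents": [{"parts": [{"text": prompt}]}]}
    return url, payload

def http_send(url: str, payload: dict) -> dict:
    """POST the payload, authenticating via $GOOGLE_API_KEY."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={
            "x-goog-api-key": os.environ["GOOGLE_API_KEY"],
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def generate(prompt: str, send=http_send, model: str = "gemini-1.5-flash") -> str:
    """Generate text, caching responses so repeated prompts skip the API."""
    key = (model, prompt)
    if key not in _cache:
        body = send(*build_generate_request(prompt, model))
        # Assumed response shape: candidates -> content -> parts -> text.
        _cache[key] = body["candidates"][0]["content"]["parts"][0]["text"]
    return _cache[key]
```

Passing a stub `send` in tests verifies the caching behavior offline; in production the default `http_send` performs the real POST.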

Installs: 21 · GitHub Stars: 5 · First Seen: Mar 5, 2026