android-device-automation

Summary

Vision-driven Android automation from screenshots, no DOM access required.

  • Operates entirely from device screenshots using AI visual understanding; interacts with any visible UI element regardless of underlying technology stack
  • Supports taps, swipes, text input, app launches, and complex multi-step interactions via natural language commands
  • Requires pre-configured vision model (Gemini, Qwen, Doubao, or similar) with API credentials in environment variables
  • Commands run synchronously, one at a time: take a screenshot, analyze the result, then decide the next action to maintain the screenshot-analyze-act loop
  • Best practice: launch target app via ADB first for speed, then use this skill for UI automation and verification tasks
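The launch-first practice above can be done with standard ADB commands before handing control to the skill. A minimal sketch, assuming a hypothetical package name com.example.app (substitute your target app's actual package and activity):

```shell
# Launch a specific activity directly via the activity manager (fast,
# deterministic; no AI inference needed just to open the app):
adb shell am start -n com.example.app/.MainActivity

# Or launch by package name only, letting the system pick the
# launcher activity:
adb shell monkey -p com.example.app -c android.intent.category.LAUNCHER 1
```

Once the app is on screen, switch to the skill for the vision-driven interaction and verification steps.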
SKILL.md

Android Device Automation

CRITICAL RULES — VIOLATIONS WILL BREAK THE WORKFLOW:

  1. Never run midscene commands in the background. Each command must run synchronously so you can read its output (especially screenshots) before deciding the next action. Background execution breaks the screenshot-analyze-act loop.
  2. Run only one midscene command at a time. Wait for the previous command to finish, read the screenshot, then decide the next action. Never chain multiple commands together.
  3. Allow enough time for each command to complete. Midscene commands involve AI inference and screen interaction, which can take longer than typical shell commands. A typical command needs about 1 minute; complex act commands may need even longer.
  4. Always report task results before finishing. After completing the automation task, you MUST proactively summarize the results to the user — including key data found, actions completed, screenshots taken, and any relevant findings. Never silently end after the last automation step; the user expects a complete response in a single interaction.
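Rules 1–3 amount to a strict sequential loop: run one command, read its output, then decide the next step. A minimal sketch of that control flow, using plain `echo` as a stand-in (this is not real midscene syntax; a real run would invoke the midscene CLI in place of the stub commands):

```python
import subprocess

def run_step(cmd: list[str], timeout_s: int = 120) -> str:
    """Run one automation command synchronously and return its output,
    so the result (e.g. a screenshot) can be analyzed before the next
    action is chosen. Never launch these in the background."""
    result = subprocess.run(cmd, capture_output=True, text=True,
                            timeout=timeout_s)
    return result.stdout

# Stand-in commands: each step finishes and its output is read before
# the next step is issued -- the screenshot-analyze-act loop.
steps = [["echo", "screenshot-1.png"], ["echo", "screenshot-2.png"]]
for step in steps:
    output = run_step(step)
    print(output.strip())  # analyze this before deciding the next action
```

The generous `timeout_s` default reflects rule 3: AI inference plus screen interaction can take a minute or more per command.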

Automate Android devices using npx -y @midscene/android@1. Each CLI command maps directly to an MCP tool — you (the AI agent) act as the brain, deciding which actions to take based on screenshots.

What act Can Do

Within a single act call on Android, Midscene can tap, double-tap, long-press, type and clear text, scroll or swipe in any direction, pull to refresh, drag items, pinch-zoom with two fingers, press keys, and use system navigation such as Back, Home, or recent apps, all while working from the currently visible screen.
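For orientation, several of these gestures have raw ADB equivalents (the coordinates below are arbitrary examples). Midscene performs the same actions from visual understanding of the screenshot rather than hard-coded coordinates, which is what makes it portable across layouts:

```shell
adb shell input tap 540 960                 # tap at x=540, y=960
adb shell input swipe 540 1500 540 500 300  # swipe up over 300 ms
adb shell input text "hello"                # type into the focused field
adb shell input keyevent KEYCODE_BACK       # system Back navigation
```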

Prerequisites

Midscene requires models with strong visual grounding capabilities. The following environment variables must be configured — either as system environment variables or in a .env file in the current working directory (Midscene loads .env automatically):

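As an unverified illustration only (the variable names below follow the OpenAI-compatible convention many such tools use; consult the Midscene documentation for the authoritative list for your provider), a .env file might look like:

```shell
# Illustrative sketch -- variable names are assumptions, not confirmed
# by this page; check Midscene's model configuration docs.
OPENAI_API_KEY="<your-api-key>"          # credential for the vision model
OPENAI_BASE_URL="<provider-endpoint>"    # OpenAI-compatible API endpoint
MIDSCENE_MODEL_NAME="<model-name>"       # e.g. a Gemini or Qwen VL model
```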