Android-MCP: Bridging AI Agents and Android Devices

We've been working on Android-MCP, a lightweight, open-source bridge designed to enable AI agents (specifically large language models) to interact with Android devices. The goal is to allow LLMs to perform real-world tasks like app navigation, UI interaction, and automated QA testing without relying on traditional computer vision pipelines or pre-programmed scripts.

The core idea is to leverage ADB and the Android Accessibility API for native interaction with UI elements. This means an LLM can launch apps, tap, swipe, input text, and read view hierarchies directly. A key feature is that it works with any language model: vision is optional, so there's no need for fine-tuned computer vision models or OCR.
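
To make that concrete, here is a minimal sketch of what ADB-backed primitives of this kind might look like in Python. The helper names and the use of `subprocess` to shell out to `adb` are our illustration for this post, not Android-MCP's actual implementation; only the underlying `adb shell input` and `uiautomator dump` commands are standard Android tooling.

```python
import subprocess

def adb(*args: str) -> str:
    """Run an adb command against the connected device and return its stdout."""
    result = subprocess.run(["adb", *args], capture_output=True, text=True, check=True)
    return result.stdout

def launch_app(package: str, activity: str) -> None:
    # `am start -n package/activity` launches an activity by component name.
    adb("shell", "am", "start", "-n", f"{package}/{activity}")

def tap(x: int, y: int) -> None:
    adb("shell", "input", "tap", str(x), str(y))

def swipe(x1: int, y1: int, x2: int, y2: int, duration_ms: int = 300) -> None:
    adb("shell", "input", "swipe", str(x1), str(y1), str(x2), str(y2), str(duration_ms))

def type_text(text: str) -> None:
    # `input text` does not accept literal spaces; adb expects %s in their place.
    adb("shell", "input", "text", text.replace(" ", "%s"))

def dump_view_hierarchy() -> str:
    # uiautomator writes the current view hierarchy as XML, which an LLM can read
    # instead of relying on screenshots or OCR.
    adb("shell", "uiautomator", "dump", "/sdcard/window_dump.xml")
    return adb("shell", "cat", "/sdcard/window_dump.xml")
```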

Android-MCP operates as an MCP server and offers a rich toolset for mobile automation, including pre-built tools for gestures, keystrokes, capturing device state, and accessing notifications. We've observed typical latency between consecutive actions (e.g., two successive taps) of 2-5 seconds, depending on device specifications and load.
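
For readers unfamiliar with MCP, the sketch below shows roughly how primitives like the ones above could be exposed as tools on an MCP server, using the official `mcp` Python SDK's FastMCP helper. The tool names, docstrings, and server name here are hypothetical and are not Android-MCP's actual toolset.

```python
import subprocess

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("android-mcp-sketch")

def _adb(*args: str) -> str:
    """Run an adb command and return its stdout (same idea as the helper above)."""
    return subprocess.run(["adb", *args], capture_output=True, text=True, check=True).stdout

@mcp.tool()
def tap(x: int, y: int) -> str:
    """Tap the screen at the given pixel coordinates."""
    _adb("shell", "input", "tap", str(x), str(y))
    return f"tapped ({x}, {y})"

@mcp.tool()
def get_ui_state() -> str:
    """Dump and return the current view hierarchy as XML for the model to read."""
    _adb("shell", "uiautomator", "dump", "/sdcard/window_dump.xml")
    return _adb("shell", "cat", "/sdcard/window_dump.xml")

if __name__ == "__main__":
    # The stdio transport lets an MCP-capable client spawn this server as a subprocess.
    mcp.run(transport="stdio")
```

Because the client only ever calls named tools and receives text back, this style of server is model-agnostic: any LLM that speaks MCP can drive the device without vision models in the loop.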

It supports Android 10+ and is built with Python 3.10+. The project is licensed under the MIT License, and contributions are welcome.

You can find more details, installation instructions, and the source code here: https://github.com/CursorTouch/Android-MCP

We'd be interested to hear your thoughts on where this kind of direct interaction could be applied, particularly in automated testing and accessibility enhancements for LLM-driven applications.
