turtlesim_agent is an AI agent that transforms the classic ROS turtlesim simulator into a creative canvas driven by natural language. Powered by LangChain, it interprets text-based instructions and translates them into visual drawings, turning the simulated turtle into a digital artist. The project explores how large language models can interact with external environments to exhibit creative behavior: users describe shapes or drawing intentions in plain English, and the agent reasons through the instructions and executes them using turtlesim's motion commands.
Example prompt:

> I want you to draw a rainbow composed of 7 semi-circular arcs, each with a different color and a radius ranging from 2.0 cm to 2.7 cm. The colors should follow the traditional rainbow order: violet, indigo, blue, green, yellow, orange, red with the pen's width of 5. Please offset the starting position of each semi-circle by 0.1 cm to avoid overlap.
Note: This demo was generated using the gemini-2.0-flash model. Results may vary even when using the same model, due to the non-deterministic nature of language models; outputs may differ depending on factors like prompt phrasing, timing, or model updates.
- ROS 2 Humble Hawksbill (This project has only been tested with ROS 2 Humble. Compatibility with other ROS 2 distributions is not guaranteed.)
- Python 3.10+
- Other dependencies as listed in requirements.txt
turtlesim_agent supports multiple language model providers via LangChain. You need to set API keys for the providers you intend to use.
To make your API keys available in your development environment, add them to your shell configuration file (e.g., ~/.bashrc, ~/.zshrc), then reload the file using source.
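For example, you could add lines like the following to `~/.bashrc` (these are the standard environment variable names read by the corresponding LangChain integrations; the values shown are placeholders):

```shell
# Set only the keys for the providers you actually use.
export OPENAI_API_KEY="your-openai-key"         # OpenAI models (gpt-4, ...)
export GOOGLE_API_KEY="your-google-key"         # Google Gemini models
export ANTHROPIC_API_KEY="your-anthropic-key"   # Anthropic Claude models
```

Then reload the file with `source ~/.bashrc` so the keys take effect in your current shell.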
💡 You only need to set the keys for the providers you plan to use.
If you're running a local or remote LLM server (e.g., via Ollama), specify the server URL as follows:
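For example, assuming the agent reads the server address from an environment variable (the exact variable name depends on how `llm.py` is implemented; `OLLAMA_BASE_URL` is shown here as an assumption, with Ollama's default port):

```shell
# Point the agent at a local Ollama server (variable name is an assumption).
export OLLAMA_BASE_URL="http://localhost:11434"
```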
To trace and debug agent behavior using LangSmith, set the following environment variables:
Basic Tracing Configuration:
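A minimal setup only switches tracing on (`LANGCHAIN_TRACING_V2` is LangSmith's standard toggle):

```shell
# Enable LangSmith tracing for all LangChain runs in this shell.
export LANGCHAIN_TRACING_V2=true
```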
Full Configuration with API Key and Project Name:
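The full set of standard LangSmith variables looks like this (the project name is a placeholder; choose your own):

```shell
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY="your-langsmith-key"   # from your LangSmith account
export LANGCHAIN_PROJECT="turtlesim-agent"      # placeholder project name
```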
To specify which Large Language Model (LLM) your agent should use, you need to configure the model name in two places:
- turtlesim_node.py
- turtlesim_agent.launch.xml (only if you use ROS 2 launch files)
In both turtlesim_node.py and turtlesim_agent.launch.xml, update the agent_model parameter to match the model you want to use.
- Python (`turtlesim_node.py`):

  ```python
  self.declare_parameter("agent_model", "gemini-2.0-flash")
  ```

- Launch file (`turtlesim_agent.launch.xml`):

  ```xml
  <arg name="agent_model" default="gemini-2.0-flash"/>
  ```
💡 The default model is "gemini-2.0-flash". Replace it with your preferred model name (e.g., "gpt-4", "claude-3-opus", etc.).
If you specify a custom model name, make sure it is supported by LangChain. You can verify this by checking or updating the logic inside llm.py.
- If the model is not yet handled, add a corresponding case in llm.py to load the model correctly.
- Refer to LangChain documentation for the latest supported models and providers.
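A sketch of what such a dispatch in `llm.py` might look like; the function name `load_llm` and the prefix matching are assumptions for illustration, not the project's actual code. Imports are kept lazy so only the installed provider packages are ever loaded:

```python
# Hypothetical provider dispatch; the real llm.py may be structured differently.
def load_llm(model_name: str):
    """Return a LangChain chat model for the given model name."""
    if model_name.startswith("gemini"):
        from langchain_google_genai import ChatGoogleGenerativeAI
        return ChatGoogleGenerativeAI(model=model_name)
    elif model_name.startswith("gpt"):
        from langchain_openai import ChatOpenAI
        return ChatOpenAI(model=model_name)
    elif model_name.startswith("claude"):
        from langchain_anthropic import ChatAnthropic
        return ChatAnthropic(model=model_name)
    # Add further cases here for models not yet handled.
    raise ValueError(f"Unsupported model: {model_name}")
```

Adding support for a new provider then amounts to adding one more branch that returns the corresponding LangChain chat model class.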
Once you have configured the variables, proceed to build and apply the changes to finalize the setup:
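A typical ROS 2 build-and-apply sequence, assuming a colcon workspace at `~/ros2_ws` (the workspace path is an assumption; adjust it to your setup):

```shell
cd ~/ros2_ws                                    # your colcon workspace (assumed path)
colcon build --packages-select turtlesim_agent  # rebuild just this package
source install/setup.bash                       # apply the new build to this shell
```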
turtlesim_agent offers two modes of interaction:
- A CLI-based interface, recommended for debugging and understanding the agent’s internal reasoning.
- A GUI-based chat interface, ideal for intuitive and user-friendly interaction.
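The launch file named earlier in this README can be used to start the agent; the exact executable names for each mode depend on the package's entry points, so treat the following as a sketch rather than the definitive commands:

```shell
# Launch-file mode (launch file name is taken from this README):
ros2 launch turtlesim_agent turtlesim_agent.launch.xml
```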
turtlesim_agent exposes the functions implemented in the tools/ directory as callable tools that the agent can invoke during its reasoning process to accomplish user-defined drawing tasks.
One of the core ideas behind this project is enabling creative expression through tool augmentation. If you'd like to enhance the agent's capabilities further, feel free to add your own tools to the tools/ directory.
To make new tools available:
- Create a new *_tools.py file in the tools/ directory.
- Define your custom functions using LangChain-compatible signatures.
- Import them in all_tools.py so that the agent can access them.
We welcome any ideas that make turtlesim_agent more creative and expressive.
Whether it's new tools, smarter models, better prompts, experimental results, or entirely new use cases, feel free to open an issue or pull request. Experimental results will be posted and updated on the wiki; if you manage to draw a shape successfully, please update the wiki!