An autonomous task execution plugin for the LLM CLI tool
llm loop is a powerful plugin for the LLM CLI tool that enables autonomous, goal-oriented task execution. Unlike traditional single-turn LLM interactions, llm loop allows the AI to work persistently towards a goal by making multiple tool calls, analyzing results, and iterating until the task is complete.
- Goal-Oriented Execution: Define a task and let the AI work autonomously to complete it
- Tool Integration: Seamlessly use LLM tools and functions to interact with your environment
- Iterative Problem Solving: The AI can chain multiple tool calls and adapt based on results
- Interactive Control: Configure turn limits, approve tool calls, and guide the process
- Comprehensive Logging: Track all interactions and tool calls for debugging and analysis
- Safety Features: Built-in approval mechanisms for potentially dangerous operations
Install LLM CLI globally (recommended using uv):
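For example, with uv (assuming uv itself is already installed):

```bash
uv tool install llm
```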
Or using pipx:
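```bash
pipx install llm
```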
Configure an LLM model (you'll need an API key):
```bash
# For OpenAI
llm keys set openai
llm models default gpt-4.1-mini

# For Anthropic
llm keys set anthropic
llm models default claude-3-5-sonnet-20241022
```
Option 1: Install from PyPI (Recommended for most users)
Once the plugin is available on PyPI, you can install it directly using llm or pip:
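Assuming the package is published under the name llm-loop:

```bash
# Via the llm plugin installer
llm install llm-loop

# Or via pip, into the same environment that llm runs in
pip install llm-loop
```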
Option 2: Install from source (for development or latest changes)
Clone this repository:
```bash
git clone https://github.com/nibzard/llm-loop
cd llm-loop
```
Install the plugin:
```bash
# For regular use from source
llm install -e .

# For development (includes dev dependencies like pytest, ruff, mypy)
pip install -e ".[dev]"
```
Verify installation:
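You can confirm the plugin is registered by listing installed plugins:

```bash
llm plugins
```

The output should include an entry for llm-loop.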
Tools are what make llm loop powerful. You can use existing LLM tools or create custom functions:
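For example (the tool and file names below are illustrative, not shipped defaults):

```bash
# Enable a named tool from an installed plugin (-T can be repeated)
llm loop "What time is it?" -T current_time

# Or point at a Python file of custom tool functions
llm loop "Summarize the files in this directory" --functions my_tools.py
```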
| Option | Description |
|--------|-------------|
| `-m, --model` | Specify the LLM model to use |
| `-s, --system` | Override the default system prompt |
| `-T, --tool` | Enable specific tools (can be used multiple times) |
| `--functions` | Python file containing custom tool functions |
| `--max-turns` | Maximum conversation turns (default: 25) |
| `--td, --tools-debug` | Show detailed tool execution information |
| `--ta, --tools-approve` | Manually approve each tool call |
| `--internal-cl` | Chain limit for tool calls within a single turn |
| `--no-log` | Disable logging to database |
| `--log` | Force logging even if globally disabled |
Create a simple tool file first. Note: after the refactoring, dev_tools.py lives at llm_loop/plugins/dev_tools.py. The plugin automatically discovers tools from this location when you run from the project root or have the package installed. For custom tool files outside the package, use the --functions flag.
The example dev_tools.py content remains relevant:
Now run the loop:
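For instance (the prompt and paths are illustrative), from inside a fresh my_flask_app directory with dev_tools.py one level up:

```bash
llm loop "Create a simple Flask to-do app with an index page" \
  --functions ../dev_tools.py --td
```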
Adjust the path to dev_tools.py if you are not running from the my_flask_app directory or dev_tools.py is not in its parent directory. If you installed the package in development mode (pip install -e ".[dev]"), llm-loop will attempt to load tools from llm_loop/plugins/dev_tools.py automatically when no --functions flag is given and the built-in tools are not sufficient.
The AI will:
- Create app.py with Flask routes
- Create HTML templates
- Generate a requirements.txt
- Provide instructions for running the app
This assumes you are running the command from the root of the llm-loop project. If you run it from elsewhere after installation, and dev_tools.py is part of the installed package, you may not need --functions, since those tools are made available by default.
Similar to the above, adjust paths or rely on automatic discovery if applicable.
You can override the default system prompt to customize the AI's behavior:
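For example (the prompt and system text are illustrative):

```bash
llm loop "Tidy up this repository" \
  -s "You are a cautious engineer. Explain each step before you take it."
```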
Use --tools-debug to see exactly what tools are being called:
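```bash
llm loop "Set up the project scaffolding" --td
```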
For potentially dangerous operations, use --tools-approve:
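```bash
llm loop "Delete all .tmp files in this directory" --ta
```

With --ta, each tool call is shown to you and only executed after you confirm it.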
The llm-loop project has been refactored into a modular Python package:
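A rough sketch of the layout (only llm_loop/plugins/dev_tools.py is confirmed elsewhere in this README; the other module names are assumptions — see the repository for the authoritative structure):

```
llm_loop/
├── __init__.py        # plugin registration (assumed)
├── cli.py             # the loop command (assumed)
└── plugins/
    └── dev_tools.py   # bundled example tools
```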
This structure promotes separation of concerns, maintainability, and scalability.
- LLM_MODEL: Default model to use
- LLM_TOOLS_DEBUG: Enable tools debugging by default
- LLM_LOGS_OFF: Disable logging by default
All interactions are logged to an SQLite database by default:
- Location: ~/.config/io.datasette.llm/logs.db
- Disable with --no-log
- Force enable with --log
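To inspect the log from the command line, the LLM CLI's own logging commands apply (assuming a standard llm setup):

```bash
# Print the database location
llm logs path

# Show the three most recent entries
llm logs -n 3
```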
- Tool Approval: Always use --tools-approve for tools that can modify your system
- Limited Scope: Run in dedicated directories for file operations
- Review Tools: Understand what each tool function does before using it
- Backup Important Data: Especially when using file manipulation tools
Plugin not found after installation:
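A quick check (assuming the PyPI name llm-loop):

```bash
# Is the plugin registered?
llm plugins

# If not, reinstall it
llm install llm-loop   # or: llm install -e . from a source checkout
```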
Tool import errors:
- Ensure your tool functions have proper error handling
- Check that file paths in --functions are correct
- Verify Python syntax in your tool files
Model errors:
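Typical checks with the LLM CLI:

```bash
# List available models and confirm your default
llm models
llm models default

# Re-enter the API key if authentication fails
llm keys set openai
```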
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests if applicable
- Submit a pull request
This project is licensed under the MIT License - see the LICENSE file for details.
- Built on the excellent LLM CLI tool by Simon Willison
- Inspired by autonomous AI agent frameworks
- Thanks to the LLM tool ecosystem contributors
Note: This plugin enables powerful autonomous AI behavior. Always review and understand the tools you're providing to the AI, especially those that can modify files or execute system commands.