Build AI agents with Dagger
Overview
Dagger can be used as a runtime and programming environment for AI agents. Benefits include reproducible execution, end-to-end observability, multi-model support, rapid iteration, and easy integration.
Architecture
Dagger's module system allows implementing agentic features as modular components that you can integrate into your application, or use individually.
Each module has the following features:
- Runs in containers, for maximum portability and reproducibility
- Can be run from the command-line, or programmatically via an API
- Generated bindings for Go, Python, TypeScript, PHP (experimental), Java (experimental), and Rust (experimental).
- End-to-end tracing of prompts, tool calls, and even low-level system operations. 100% of agent state changes are traced.
- Cross-language extensions. Add your own modules in any language.
- Platform-independent. No infrastructure lock-in! Runs on any hosting provider that can run containers.
Dagger and your IDE are the only dependencies for developing and running these modules. The entire environment is containerized, for maximum portability.
Community
Building AI agents on Dagger is an exciting new use case that still has rough edges. We strongly recommend joining our Community Discord. The Dagger community is very welcoming, and will be happy to answer your questions, discuss your use case ideas, and help you get started.
Do this now! It will make the rest of the experience more productive and more fun.
See also this Twitter thread for examples, discussions and demos.
Examples
Here are several repositories containing examples of Dagger modules with agentic capabilities, which you can use as inspiration.
- toy-programmer: a very, very simple programmer micro-agent for demo purposes
- melvin: Melvin is Devin's little cousin 😄. An experimental open-source coding agent, made of small composable modules rather than one monolithic app.
- multiagent: a demo using multiple LLMs to solve a problem
- github-go-coder: a Go programmer micro-agent that receives assignments from GitHub issues and creates PRs with its solutions
- cypress-test-llm-ts: an agent that compares two branches in git for a UI change and creates a Cypress test to cover the change.
- laravel-assistant: an agent that reviews git changes in a local Laravel project to generate tests.
- tictactoe: an agent that plays Tic Tac Toe with a human player.
- dockerfile-optimizer: an agent that analyzes your Dockerfile and suggests improvements for better efficiency, security, and best practices.
Initial setup
1. Install Dagger with LLM support
Note: the latest version is 0.17.0-llm.4. It was released on February 28, 2025. If you are running an older build, we recommend upgrading.
You will need a development version of Dagger which adds native support for LLM prompting and tool calling.
Once this feature is merged (current target is 0.17), a development build will no longer be required.
Install the development version of LLM-enabled Dagger:
curl -fsSL https://dl.dagger.io/dagger/install.sh | DAGGER_VERSION=0.17.0-llm.4 BIN_DIR=/usr/local/bin sh
You can adjust BIN_DIR to customize where the dagger CLI is installed.
Verify that your Dagger installation works:
$ dagger -c version
v0.17.0-llm.4
2. Configure LLM endpoints
Dagger uses your system's standard environment variables to route LLM requests. Currently these variables are supported:
- OPENAI_API_KEY
- OPENAI_BASE_URL
- OPENAI_MODEL
- ANTHROPIC_API_KEY
- ANTHROPIC_BASE_URL
- ANTHROPIC_MODEL
Dagger will look for these variables in your environment, or in a .env file in the current directory (.env files in parent directories are not yet supported).
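For example, a minimal .env file for OpenAI might look like this (the key and model name below are placeholders; substitute your own):

OPENAI_API_KEY=sk-your-key-here
OPENAI_MODEL=gpt-4o

Or, for Anthropic:

ANTHROPIC_API_KEY=sk-ant-your-key-here
ANTHROPIC_MODEL=claude-3-5-sonnet-latest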
Using with Ollama
You can use Ollama as a local LLM provider. Here's how to set it up:
1. Install Ollama from ollama.ai
2. Start the Ollama server with host binding:
OLLAMA_HOST="0.0.0.0:11434" ollama serve
3. Get your host machine's IP address:
On macOS (and older Linux distributions):
ifconfig | grep "inet " | grep -v 127.0.0.1
On modern Linux distributions:
ip addr | grep "inet " | grep -v 127.0.0.1
This step is needed because our LLM type runs inside the engine and needs to reach your local Ollama service. While we're exploring the implementation of automatic tunneling, for now you'll need to use your machine's actual IP address instead of localhost to allow the containers to communicate with Ollama.
4. Configure the following environment variables (replace YOUR_IP with the IP address from step 3):
OPENAI_API_KEY="nothing"
OPENAI_BASE_URL=http://YOUR_IP:11434/v1/
OPENAI_MODEL=llama3.2
For example, if your IP is 192.168.64.1:
OPENAI_API_KEY="nothing"
OPENAI_BASE_URL=http://192.168.64.1:11434/v1/
OPENAI_MODEL=llama3.2
5. Pull some models to your local Ollama service:
ollama pull llama3.2
Note that for the LLM to use Dagger's API as a tool, the model must support tool calling. Here's a list curated by Ollama of models that support tools: Ollama models supporting tools
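Before wiring Dagger to Ollama, you can sanity-check that the service is reachable at your host IP and that your model responds. One way to check, assuming Ollama's documented OpenAI-compatible /v1 endpoints (replace YOUR_IP as in step 3):

# List the models Ollama exposes through its OpenAI-compatible API
curl http://YOUR_IP:11434/v1/models
# Send a minimal chat completion to verify the model responds
curl http://YOUR_IP:11434/v1/chat/completions -H "Content-Type: application/json" -d '{"model": "llama3.2", "messages": [{"role": "user", "content": "hello"}]}'

If both return JSON responses, Dagger's containers should be able to reach the same endpoint.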
Run modules from the command-line
Use the dagger CLI to load a module and call its functions.
For example, to use the toy-programmer agent module:
dagger -m https://github.com/dagger/agents/toy-programmer
Then, run this command in the dagger shell:
.help
This prints available functions. Let's call one:
go-program "develop a curl clone" | terminal
This calls the go-program function with a description of a program to write, then runs the terminal function on the returned container.
You can use tab-completion to explore other available functions.
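You can also run the same pipeline non-interactively by passing a shell command with the -c flag, just like the version check during installation (a sketch: this assumes -m and -c can be combined, and reuses the function and prompt from the example above):

dagger -m https://github.com/dagger/agents/toy-programmer -c 'go-program "develop a curl clone" | terminal'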
Integrate Dagger into your application
You can embed Dagger modules into your application. Supported languages are Python, TypeScript, Go, Java, and PHP, with more language support under development.
- Initialize a Dagger module at the root of your application. This doesn't need to be the root of your git repository - Dagger is monorepo-ready.
dagger init
- Install the modules you wish to load
For example, to install the toy-workspace module:
dagger install github.com/dagger/agents/toy-workspace
- Install a generated client in your project
TODO: this feature is not yet merged in a stable version of Dagger
This will configure Dagger to generate client bindings for the language of your choice.
For example, if your project is a Python application:
dagger client install python
- Re-generate clients
TODO: this feature is not yet merged in a stable version of Dagger
Any time you need to re-generate your client, run:
dagger client generate
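Putting the steps together, setting up a Python application might look like this end to end (a recap of the commands above; as noted, the dagger client subcommands are not yet in a stable release):

# 1. Initialize a Dagger module at the application root
dagger init
# 2. Install the modules you want to load
dagger install github.com/dagger/agents/toy-workspace
# 3. Configure generated client bindings for your language (not yet stable)
dagger client install python
# 4. Re-generate the client whenever your module or its dependencies change
dagger client generate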