LLM Integration

Dagger's LLM core type includes API methods to attach objects to a Large Language Model (LLM), send prompts, and receive responses.

Prompts

Use the LLM.withPrompt() API method to append prompts to the LLM context:

dagger <<EOF
llm |
with-prompt "What tools do you have available?"
EOF

For longer or more complex prompts, use the LLM.withPromptFile() API method to read the prompt from a text file:

dagger <<EOF
llm |
with-prompt-file ./prompt.txt
EOF

Responses and Variables

Use the LLM.lastReply() API method to obtain the last reply from the LLM:
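
For example, the following pipeline sends a single prompt and returns only the final reply (the prompt text is illustrative):

dagger <<EOF
llm |
with-prompt "Write a haiku about containers." |
last-reply
EOF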

Dagger supports the use of variables in prompts. This allows you to interpolate results of other operations into an LLM prompt:

dagger <<EOF
source=\$(container |
from alpine |
with-directory /src https://github.com/dagger/dagger |
directory /src)
environment=\$(env |
with-directory-input 'source' \$source 'a directory with source code')
llm |
with-env \$environment |
with-prompt "The directory also has some tools available." |
with-prompt "Use the tools in the directory to read the first paragraph of the README.md file in the directory." |
with-prompt "Reply with only the selected text." |
last-reply
EOF

Tip: To get the complete message history, use the LLM.History() API method.
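
For example, the following sketch prints the full conversation after a prompt, assuming the Dagger Shell exposes LLM.History() as history, consistent with the kebab-case naming of the other methods above:

dagger <<EOF
llm |
with-prompt "What tools do you have available?" |
history
EOF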

Environments

Dagger modules are collections of Dagger Functions. When you give a Dagger module to the LLM core type, every Dagger Function is turned into a tool that the LLM can call.

Environments configure any number of inputs and outputs for the LLM. For example, an environment might provide a Directory, a Container, a custom module, and a string variable. The LLM can use the scalars and the functions of these objects to complete the assigned task.
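
For example, the following Dagger Shell sketch builds an environment with one directory input and one string input. The input names and descriptions are illustrative, and it assumes the current host directory is accessible via host | directory:

dagger <<EOF
env |
with-directory-input 'source' \$(host | directory .) 'the source code to analyze' |
with-string-input 'assignment' 'summarize the codebase' 'the task to complete'
EOF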

The documentation for these modules is provided to the LLM, so make sure to write helpful documentation in your Dagger Functions. The LLM should be able to figure out how to use the tools on its own. Avoid describing the objects at length in your prompts, as this would be redundant with the automatic documentation.

Consider the following Dagger Function:

package main

import (
    "dagger/coding-agent/internal/dagger"
)

type CodingAgent struct{}

// Write a Go program
func (m *CodingAgent) GoProgram(
    // The programming assignment, e.g. "write me a curl clone"
    assignment string,
) *dagger.Container {
    workspace := dag.ToyWorkspace()
    environment := dag.Env().
        WithToyWorkspaceInput("before", workspace, "tools to complete the assignment").
        WithStringInput("assignment", assignment, "the assignment to complete").
        WithToyWorkspaceOutput("after", "the completed assignment")

    return dag.LLM().
        WithEnv(environment).
        WithPrompt(`
You are an expert go programmer. You have access to a workspace.
Use the default directory in the workspace.
Do not stop until the code builds.
Your assignment is: $assignment`).
        Env().
        Output("after").
        AsToyWorkspace().
        Container()
}

Here, an instance of the ToyWorkspace module is attached as an input to the Env environment. The ToyWorkspace module contains a number of Dagger Functions for developing code: Read(), Write(), and Build(). When this environment is attached to an LLM, the LLM can call any of these Dagger Functions to change the state of the ToyWorkspace and complete the assigned task.
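
The ToyWorkspace module itself is not shown here, but a minimal sketch of such a workspace might look like the following. The base image, file paths, and function bodies are illustrative assumptions rather than the actual module:

package main

import (
    "context"

    "dagger/toy-workspace/internal/dagger"
)

// A toy workspace that can edit files and build Go code
type ToyWorkspace struct {
    // The workspace's container state
    Container *dagger.Container
}

// Create a workspace with a Go toolchain (base image is an assumption)
func New() *ToyWorkspace {
    return &ToyWorkspace{
        Container: dag.Container().From("golang:1.22").WithWorkdir("/app"),
    }
}

// Read a file in the workspace
func (w *ToyWorkspace) Read(ctx context.Context, path string) (string, error) {
    return w.Container.File(path).Contents(ctx)
}

// Write a file to the workspace
func (w *ToyWorkspace) Write(path, contents string) *ToyWorkspace {
    w.Container = w.Container.WithNewFile(path, contents)
    return w
}

// Build the code in the workspace
func (w *ToyWorkspace) Build(ctx context.Context) error {
    _, err := w.Container.WithExec([]string{"go", "build", "./..."}).Sync(ctx)
    return err
}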

In the Env, a ToyWorkspace output named after is declared as a desired output of the LLM. This means the LLM must return that ToyWorkspace instance as the result of completing its task. The resulting ToyWorkspace object is then available for further processing or for use in other Dagger Functions.
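
Assuming the module is named coding-agent, as in the import path above, you could then call the Dagger Function from the command line and open an interactive terminal in the returned container, for example:

dagger call go-program --assignment "write me a curl clone" terminal

This works because GoProgram returns a Container, so further Container functions such as terminal can be chained onto the call.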