Agent commands. Your AI agent uses this node in workflows. You don’t configure it directly.
The LLM node calls AI models using Simon Willison’s llm library. Use it when a step needs reasoning — summarization, classification, extraction, anything a Python expression can’t handle. It supports multiple providers (OpenAI, Anthropic, Google, local models) through a unified interface.
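A step that uses this node corresponds, roughly, to a call into the llm Python API. The sketch below is illustrative only; the real node layers templating, retries, and usage tracking on top, and the model ID shown is just an example:

```
import llm

# Roughly what the LLM node does with its parameters (illustrative sketch only).
model = llm.get_model("gpt-4o-mini")          # resolved from params, or auto-detected
response = model.prompt(
    "Summarize: The quick brown fox...",      # prompt
    system="You are a concise summarizer.",   # system (optional)
    temperature=0.3,                          # sampling options are passed through
)
print(response.text())                        # becomes the node's `response` output
```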
Parameters
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| prompt | str | Yes | - | Text prompt to send to the model |
| model | str | No | See below | Model identifier |
| system | str | No | - | System prompt for behavior guidance |
| temperature | float | No | 1.0 | Sampling temperature (0.0-2.0) |
| max_tokens | int | No | - | Maximum response tokens |
| images | list | No | [] | Image URLs or file paths for vision models |
Model resolution
If model is not specified in workflow params, pflow auto-detects based on your configured API keys.
Most users just need an API key:
```
pflow settings set-env OPENAI_API_KEY "sk-..."
```
See LLM model settings for the full resolution order and default models per provider.
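For example, a step can omit model and rely on auto-detection, or pin one explicitly (node names and prompts below are illustrative):

```
### summarize_auto
Summarize using whichever provider's key is configured.
- type: llm
- prompt: Summarize: ${read.content}

### summarize_pinned
Summarize with an explicitly pinned model.
- type: llm
- model: claude-sonnet-4-5
- prompt: Summarize: ${read.content}
```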
Output
| Key | Type | Description |
|---|---|---|
| response | str | Model’s text response |
| llm_usage | dict | Token usage metrics |
| error | str | Error message (only present on failure) |
Token usage structure
```
{
  "model": "gpt-5.2",
  "input_tokens": 150,
  "output_tokens": 89,
  "total_tokens": 239,
  "cache_creation_input_tokens": 0,
  "cache_read_input_tokens": 0
}
```
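Assuming the dot-notation templating shown later for JSON responses also applies to the llm_usage dict (an assumption, not confirmed here), a downstream step could reference these metrics directly. In this sketch, summarize is a hypothetical upstream LLM step:

```
### report_usage
Note how many tokens the summarize step consumed.
- type: llm
- prompt: Write one sentence noting that the previous step used ${summarize.llm_usage.total_tokens} tokens.
```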
Model support
These providers are included with pflow - just set your API key:
| Provider | Example models |
|---|---|
| OpenAI | gpt-5.2, gpt-5.1, gpt-4o |
| Anthropic | claude-opus-4-5, claude-sonnet-4-5, claude-haiku-4-5 |
| Google | gemini-3.0-pro, gemini-2.5-flash |
```
# Set API keys (stored in ~/.pflow/settings.json)
pflow settings set-env OPENAI_API_KEY "sk-..."
pflow settings set-env ANTHROPIC_API_KEY "sk-ant-..."
pflow settings set-env GEMINI_API_KEY "..."
```
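The error-handling section below also mentions plain environment variables as an alternative; exporting the same keys in the shell where pflow runs should work as well (treat this as a general assumption rather than a pflow-specific mechanism):

```
# Alternative: plain environment variables in the shell that runs pflow
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
```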
Run llm models to see all available models on your system.
Extending with plugins
pflow uses Simon Willison’s llm library, which supports plugins for additional providers and local models.
Installing plugins
If you installed pflow with uv tool, include plugins during installation:
```
uv tool install --with llm-openrouter pflow-cli
```
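The --with flag can be repeated to bundle several plugins into one install (the plugin pair shown is just an example):

```
uv tool install --with llm-openrouter --with llm-ollama pflow-cli
```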
If you installed with pipx, use inject to add plugins:
```
pipx inject pflow-cli llm-openrouter
```
Plugins must be installed in pflow’s environment. Running llm install separately won’t work with isolated installations.
Popular plugins
| Plugin | Install flag | Use case |
|---|---|---|
| llm-openrouter | --with llm-openrouter | Access Claude, GPT, Llama, Mistral via OpenRouter |
| llm-ollama | --with llm-ollama | Run models locally with Ollama |
After installing, set up credentials:
```
# OpenRouter - get key from https://openrouter.ai/keys
llm keys set openrouter

# Ollama - no API key needed
brew install ollama
ollama serve
ollama pull llama3.2
```
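To check that the plugin models are now visible, list them; the grep filter here is just a convenience:

```
# Plugin models should show up alongside the built-in providers
llm models | grep -i ollama
```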
Using plugin models
OpenRouter models use the openrouter/provider/model format:
```
### summarize
Summarize content using OpenRouter.
- type: llm
- model: openrouter/anthropic/claude-sonnet-4-5
- prompt: Summarize this
```
Ollama models use the model name directly:
```
### summarize
Summarize content using a local model.
- type: llm
- model: llama3.2
- prompt: Summarize this
```
See the llm plugin directory for more providers including Mistral, Bedrock, and other local model options.
Image support
For vision-capable models, pass image URLs or local file paths:
```
### describe
Describe the contents of a photo.
- type: llm
- prompt: What's in this image?
- model: gpt-5.2
- images: ["photo.jpg"]
```
Supported formats: JPEG, PNG, GIF, WebP, PDF
Images can be:
- Local file paths: photo.jpg, /path/to/image.png
- URLs: https://example.com/image.jpg
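For example, a step can point at a remote image instead of a local file (node name and URL are placeholders):

```
### describe_remote
Describe an image fetched from a URL.
- type: llm
- prompt: What's in this image?
- model: gpt-5.2
- images: ["https://example.com/image.jpg"]
```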
Examples
Basic prompt
```
### summarize
Summarize the content from the previous step.
- type: llm
- prompt: Summarize: ${read.content}
- model: gpt-4o-mini
```
With system prompt
```
### translate
Translate the input text to Spanish.
- type: llm
- system: You are a translator. Respond only with the translation.
- prompt: Translate to Spanish: ${input.text}
- temperature: 0.3
```
Structured output
```
### extract
Extract named entities from the document.
- type: llm
- system: Extract entities as JSON with keys: people, places, organizations
- prompt: ${document.content}
```
Access JSON fields from the response using dot notation in downstream templates: ${extract.response.people}. The template system auto-parses JSON on demand when you use dot notation.
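For instance, a follow-on step (hypothetical name report) could consume one of the parsed fields directly:

```
### report
Write a short report on the extracted people.
- type: llm
- prompt: Write one sentence about each person in this list: ${extract.response.people}
```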
Image analysis
```
### analyze
Analyze the contents of a user-provided image.
- type: llm
- prompt: Describe the main elements in this image
- model: gpt-5.2
- images: ["${file_path}"]
```
Error handling
| Error | Cause | Solution |
|---|---|---|
| Unknown model | Model ID not recognized | Run llm models to see available models |
| API key required | Missing credentials | Set up with llm keys set <provider> or env var |
| Rate limit | Too many requests | Handled automatically by the node's built-in retry |
The node retries transient failures automatically (3 attempts, 1 second wait).
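For illustration only, the retry pattern described above looks roughly like the following Python sketch; it is not pflow's actual implementation:

```
import time

def call_with_retry(fn, attempts=3, wait_seconds=1.0):
    """Retry a callable on failure with a fixed wait (illustrative sketch only)."""
    last_error = None
    for attempt in range(attempts):
        try:
            return fn()
        except Exception as exc:  # real code would retry only transient errors, e.g. rate limits
            last_error = exc
            if attempt < attempts - 1:
                time.sleep(wait_seconds)
    raise last_error
```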