> **Note:** Your AI agent uses this node in workflows; you don't configure it directly.
The LLM node calls AI models using Simon Willison’s llm library. It supports multiple providers (OpenAI, Anthropic, Google, local models) through a unified interface.
## Parameters

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| `prompt` | str | Yes | - | Text prompt to send to the model |
| `model` | str | No | `gpt-5.2` | Model identifier |
| `system` | str | No | - | System prompt for behavior guidance |
| `temperature` | float | No | 1.0 | Sampling temperature (0.0-2.0) |
| `max_tokens` | int | No | - | Maximum response tokens |
| `images` | list | No | `[]` | Image URLs or file paths for vision models |
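For orientation, a single node that sets every parameter from the table might look like this (the values are illustrative, not recommendations):

```json
{
  "id": "summarize",
  "type": "llm",
  "params": {
    "prompt": "Summarize: ${read.content}",
    "model": "gpt-5.2",
    "system": "You are a concise technical summarizer.",
    "temperature": 0.2,
    "max_tokens": 500,
    "images": []
  }
}
```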
## Output

| Key | Type | Description |
|---|---|---|
| `response` | any | Model's response (auto-parsed JSON or string) |
| `llm_usage` | dict | Token usage metrics |
| `error` | str | Error message (only present on failure) |
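Downstream nodes can reference these keys with the same `${node_id.key}` templating used in the examples on this page. A minimal sketch chaining two llm nodes, where the second consumes the first's `response` (node IDs and prompts are illustrative):

```json
{
  "nodes": [
    {
      "id": "summarize",
      "type": "llm",
      "params": {
        "prompt": "Summarize: ${read.content}"
      }
    },
    {
      "id": "critique",
      "type": "llm",
      "params": {
        "prompt": "List weaknesses of this summary: ${summarize.response}"
      }
    }
  ]
}
```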
## Token usage structure

The `llm_usage` dict has the following structure:

```json
{
  "model": "gpt-5.2",
  "input_tokens": 150,
  "output_tokens": 89,
  "total_tokens": 239,
  "cache_creation_input_tokens": 0,
  "cache_read_input_tokens": 0
}
```
## Model support

These providers are included with pflow - just set your API key:

| Provider | Example models |
|---|---|
| OpenAI | gpt-5.2, gpt-5.1, gpt-4o |
| Anthropic | claude-opus-4-5, claude-sonnet-4-5, claude-haiku-4-5 |
| Google | gemini-3.0-pro, gemini-2.5-flash |
```bash
# Set API keys (stored in ~/.pflow/settings.json)
pflow settings set-env OPENAI_API_KEY "sk-..."
pflow settings set-env ANTHROPIC_API_KEY "sk-ant-..."
pflow settings set-env GEMINI_API_KEY "..."
```
Run `llm models` to see all available models on your system.
## Extending with plugins

pflow uses Simon Willison's llm library, which supports plugins for additional providers and local models.

### Installing plugins

If you installed pflow with `uv tool`, include plugins during installation:

```bash
uv tool install --with llm-openrouter pflow-cli
```

If you installed with pipx, use `inject` to add plugins:

```bash
pipx inject pflow-cli llm-openrouter
```

Plugins must be installed in pflow's environment; running `llm install` separately won't work with isolated installations.
### Popular plugins

| Plugin | Install flag | Use case |
|---|---|---|
| `llm-openrouter` | `--with llm-openrouter` | Access Claude, GPT, Llama, Mistral via OpenRouter |
| `llm-ollama` | `--with llm-ollama` | Run models locally with Ollama |
After installing, set up credentials:

```bash
# OpenRouter - get key from https://openrouter.ai/keys
llm keys set openrouter

# Ollama - no API key needed
brew install ollama
ollama serve
ollama pull llama3.2
```
### Using plugin models

OpenRouter models use the `openrouter/provider/model` format:

```json
{
  "type": "llm",
  "params": {
    "model": "openrouter/anthropic/claude-sonnet-4-5",
    "prompt": "Summarize this"
  }
}
```
Ollama models use the model name directly:

```json
{
  "type": "llm",
  "params": {
    "model": "llama3.2",
    "prompt": "Summarize this"
  }
}
```
See the llm plugin directory for more providers including Mistral, Bedrock, and other local model options.
## Automatic JSON parsing

The node automatically detects and parses JSON responses:

```json
{
  "id": "analyze",
  "type": "llm",
  "params": {
    "prompt": "List 3 colors as JSON array",
    "model": "gpt-4o-mini"
  }
}
```
If the response is valid JSON (including markdown-wrapped JSON), `response` will be the parsed object/array. Otherwise, it's the raw string.
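For the node above, the output might look like this (values are illustrative; the model chooses the colors and token counts vary):

```json
{
  "response": ["red", "green", "blue"],
  "llm_usage": {
    "model": "gpt-4o-mini",
    "input_tokens": 12,
    "output_tokens": 9,
    "total_tokens": 21,
    "cache_creation_input_tokens": 0,
    "cache_read_input_tokens": 0
  }
}
```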
## Image support

For vision-capable models, pass image URLs or local file paths:

```json
{
  "id": "describe",
  "type": "llm",
  "params": {
    "prompt": "What's in this image?",
    "model": "gpt-5.2",
    "images": ["photo.jpg"]
  }
}
```
Supported formats: JPEG, PNG, GIF, WebP, PDF

Images can be:

- Local file paths: `photo.jpg`, `/path/to/image.png`
- URLs: `https://example.com/image.jpg`
## Examples

### Basic prompt

```json
{
  "nodes": [
    {
      "id": "summarize",
      "type": "llm",
      "params": {
        "prompt": "Summarize: ${read.content}",
        "model": "gpt-4o-mini"
      }
    }
  ]
}
```
### With system prompt

```json
{
  "nodes": [
    {
      "id": "translate",
      "type": "llm",
      "params": {
        "system": "You are a translator. Respond only with the translation.",
        "prompt": "Translate to Spanish: ${input.text}",
        "temperature": 0.3
      }
    }
  ]
}
```
### Structured output

```json
{
  "nodes": [
    {
      "id": "extract",
      "type": "llm",
      "params": {
        "system": "Extract entities as JSON with keys: people, places, organizations",
        "prompt": "${document.content}"
      }
    }
  ]
}
```
The `response` will be automatically parsed if it's valid JSON.
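If the model follows the system prompt, `response` might parse to something like this (entities are purely illustrative):

```json
{
  "people": ["Ada Lovelace"],
  "places": ["London"],
  "organizations": ["Analytical Society"]
}
```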
### Image analysis

```json
{
  "nodes": [
    {
      "id": "analyze",
      "type": "llm",
      "params": {
        "prompt": "Describe the main elements in this image",
        "model": "gpt-5.2",
        "images": ["${file_path}"]
      }
    }
  ]
}
```
## Error handling

| Error | Cause | Solution |
|---|---|---|
| Unknown model | Model ID not recognized | Run `llm models` to see available models |
| API key required | Missing credentials | Set up with `llm keys set <provider>` or an env var |
| Rate limit | Too many requests | Handled automatically by the built-in retry |
The node retries transient failures automatically (3 attempts with a 1-second wait).
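When a failure is not transient, the output carries the `error` key from the table above instead of a normal `response`. An illustrative (not verbatim) example:

```json
{
  "error": "API key required for provider 'anthropic'"
}
```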