For the curious. Your AI agent handles template variables automatically. This explains how data flows between nodes and what’s happening when you see ${variable} syntax in workflows or traces.
Template variables let nodes pass data to each other without writing glue code. When you see ${variable} syntax in a workflow, it’s pulling data from previous nodes, workflow inputs, or nested structures.

## Basic syntax

The ${variable} syntax accesses values from the shared store:
```markdown
### summarize

Summarize the content from the read node.

- type: llm
- prompt: ${read.content}
```
Here, ${read.content} pulls the content output from the read node.

## Nested access

Template variables can traverse deeply nested structures:
```markdown
### extract

Extract the name of the first item from the API response.

- type: llm
- prompt: ${api.response.items[0].name}
```
This traverses:
  1. api node’s output
  2. response key
  3. items array
  4. First element ([0])
  5. name field
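The traversal above can be sketched in a few lines of Python. This is an illustration of the path-walking behavior only, not pflow's actual resolver:

```python
import re

def resolve_path(store, path):
    """Walk a dotted/bracketed template path through nested dicts and lists.

    Illustrative sketch only -- pflow's real resolver is more involved.
    """
    value = store
    # Split "api.response.items[0].name" into keys and integer indices.
    for part in re.findall(r"[^.\[\]]+|\[\d+\]", path):
        if part.startswith("["):
            value = value[int(part[1:-1])]   # array index like [0]
        else:
            value = value[part]              # dict key
    return value

store = {"api": {"response": {"items": [{"name": "widget"}]}}}
print(resolve_path(store, "api.response.items[0].name"))  # widget
```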

## Type preservation

Template variables preserve the original data type when used alone. When combined with text, they become strings.
| Template | Original value | Result | Type |
|----------|----------------|--------|------|
| `"${count}"` | `42` (int) | `42` | int |
| `"Count: ${count}"` | `42` (int) | `"Count: 42"` | string |
| `"${config}"` | `{"key": "val"}` | `{"key": "val"}` | dict |
| `"Prefix ${config}"` | `{"key": "val"}` | `"Prefix {\"key\": \"val\"}"` | string |
Simple templates (just ${var}) preserve type. Complex templates (any surrounding text) become strings.
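A minimal Python sketch of this rule, assuming a flat store of variables (a toy helper, not pflow's implementation):

```python
import json
import re

def render(template, store):
    """Resolve ${...} templates: a bare ${var} keeps its original type,
    anything with surrounding text is stringified. Sketch, not pflow's code."""
    m = re.fullmatch(r"\$\{(\w+)\}", template)
    if m:                       # simple template: return the value as-is
        return store[m.group(1)]
    # complex template: substitute each variable as text
    def sub(match):
        value = store[match.group(1)]
        return value if isinstance(value, str) else json.dumps(value)
    return re.sub(r"\$\{(\w+)\}", sub, template)

store = {"count": 42, "config": {"key": "val"}}
print(render("${count}", store))           # 42 (an int, not "42")
print(render("Count: ${count}", store))    # Count: 42
print(render("Prefix ${config}", store))   # Prefix {"key": "val"}
```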

## Inline objects

This type preservation makes inline object construction intuitive:
````markdown
### process

Process settings and results together.

- type: shell
```yaml stdin
config: ${settings}
data: ${results}
```
````
If `settings` is `{"timeout": 30}` and `results` is `{"status": "ok"}`, the resolved stdin is:

```json
{
  "config": {"timeout": 30},
  "data": {"status": "ok"}
}
```
Without type preservation, both would be stringified JSON requiring manual parsing.
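The same idea in Python: resolving simple templates inside a mapping keeps each value's original type, so the result serializes as structured JSON rather than escaped strings. A sketch under that assumption, not pflow's code:

```python
import json

def resolve_mapping(mapping, store):
    """Resolve simple ${var} templates inside a mapping, keeping each
    value's original type. Illustrative sketch only."""
    out = {}
    for key, value in mapping.items():
        if isinstance(value, str) and value.startswith("${") and value.endswith("}"):
            out[key] = store[value[2:-1]]   # simple template: keep the type
        else:
            out[key] = value
    return out

store = {"settings": {"timeout": 30}, "results": {"status": "ok"}}
stdin = resolve_mapping({"config": "${settings}", "data": "${results}"}, store)
print(json.dumps(stdin))
# {"config": {"timeout": 30}, "data": {"status": "ok"}}
```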

## JSON auto-parsing

When a template accesses nested fields on a JSON string, pflow automatically parses it:
````markdown
## Steps

### fetch

Fetch data from the API.

- type: shell

```shell command
curl https://api.example.com/data
```

### extract

Analyze the first result name from the fetched data.

- type: llm
- prompt: Analyze: ${fetch.stdout.results[0].name}
````
Even though fetch.stdout is a string containing JSON, the nested access ${fetch.stdout.results[0].name} works because pflow:
  1. Sees you’re trying to access .results
  2. Attempts to parse stdout as JSON
  3. Traverses the parsed structure
  4. Returns the value at results[0].name
This means shell commands that output JSON work directly with template variables — no manual json.loads() needed.
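The parse-on-demand behavior described above might look like this in Python (illustrative only, not pflow's implementation):

```python
import json

def access(value, keys):
    """Traverse keys, parsing JSON strings on demand -- a sketch of the
    auto-parsing behaviour described above."""
    for key in keys:
        if isinstance(value, str):
            value = json.loads(value)   # attempt to parse the string as JSON
        if isinstance(value, list):
            value = value[int(key)]     # array index
        else:
            value = value[key]          # dict key
    return value

stdout = '{"results": [{"name": "alpha"}]}'   # a string, as curl would print
print(access(stdout, ["results", 0, "name"]))  # alpha
```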

## Workflow inputs

Template variables also reference workflow inputs declared in the workflow definition:
````markdown
## Inputs

### api_key

API key for authenticating with the service.

- type: string

### endpoint

URL of the API endpoint to call.

- type: string

## Steps

### call_api

Call the API endpoint with authentication.

- type: http
- url: ${endpoint}
```yaml headers
Authorization: Bearer ${api_key}
```
````
When running this workflow, inputs are provided via CLI arguments:
```shell
pflow my-workflow api_key="sk-..." endpoint="https://api.example.com"
```

## Stdin input

Inputs can receive piped data by adding stdin: true. See Stdin input for details.

## Array notation

Array elements are accessed using bracket notation:
```markdown
### extract

Extract specific items from the results.

- type: llm
- prompt: First: ${results[0]}, Tag: ${data.items[2].tags[1]}
```

## Batch processing

In batch nodes, a special template variable (${item} by default) represents the current item:
```markdown
### process

Summarize each file.

- type: llm
- prompt: Summarize: ${file}
- batch:
    items: ${files}
    as: file
```
The `as: file` setting creates `${file}` as the item variable. See Batch processing for details.
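Conceptually, a batch node renders its prompt once per item, binding the item variable each time. A toy sketch of that expansion (not pflow's code):

```python
def expand_batch(prompt_template, items, as_name="item"):
    """Render a batch prompt once per item, substituting the item variable.
    Illustrative sketch of the behaviour described above."""
    var = "${" + as_name + "}"
    return [prompt_template.replace(var, str(item)) for item in items]

files = ["a.md", "b.md"]
print(expand_batch("Summarize: ${file}", files, as_name="file"))
# ['Summarize: a.md', 'Summarize: b.md']
```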

## Coalesce operator

The ?? operator returns the first resolved value, skipping variables that don’t exist (e.g., from a branch that didn’t execute):
```markdown
### report

Report the result from whichever branch ran.

- type: llm
- prompt: Result was: ${success_branch.stdout ?? fallback_branch.stdout}
```
This is particularly useful with conditional branching where only one path executes. Without coalesce, referencing a node that didn’t run would be an unresolved variable error.
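Coalescing can be pictured as "try each path in order, return the first that resolves". A sketch of that idea (hypothetical helper, not pflow's implementation):

```python
def coalesce(store, *paths):
    """Return the first dotted path that resolves in the store,
    mimicking the `??` operator described above. Sketch only."""
    for path in paths:
        value = store
        try:
            for key in path.split("."):
                value = value[key]
        except (KeyError, TypeError):
            continue  # this branch didn't run; try the next path
        return value
    raise KeyError(f"Unresolved variable: {' ?? '.join(paths)}")

# Only the fallback branch ran, so success_branch is absent from the store.
store = {"fallback_branch": {"stdout": "recovered"}}
print(coalesce(store, "success_branch.stdout", "fallback_branch.stdout"))
# recovered
```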

## Node metadata

Beyond their primary outputs, some nodes expose metadata that downstream nodes can access through templates. This is useful for cost-aware workflows, debugging, or building on execution details.

### LLM token usage

Both llm and claude-code nodes write llm_usage with token counts:
````markdown
### analyze

Analyze the document.

- type: llm
- model: gpt-4o
- prompt: Analyze this document: ${doc.content}

### log_cost

Log the token usage from the analysis.

- type: shell

```shell command
echo "Model: ${analyze.llm_usage.model}, Tokens: ${analyze.llm_usage.total_tokens}"
```
````

Available fields on `${node_id.llm_usage}`:

| Field | Type | Description |
|-------|------|-------------|
| `model` | str | Model that was used |
| `input_tokens` | int | Input tokens consumed |
| `output_tokens` | int | Output tokens generated |
| `total_tokens` | int | Input + output |
| `cache_creation_input_tokens` | int | Tokens used for cache creation |
| `cache_read_input_tokens` | int | Tokens read from cache |

### Claude Code metadata

The `claude-code` node includes additional execution metadata in `llm_usage` beyond the standard token fields:

| Field | Type | Description |
|-------|------|-------------|
| `cost_usd` | float | Cost in USD from Claude Code SDK |
| `duration_ms` | int | Total execution time |
| `session_id` | str | Session ID (use with `resume` parameter to continue conversations) |
| `num_turns` | int | Number of conversation turns |

Access via `${node_id.llm_usage.field}`:

```markdown
### step_one

Start an analysis session.

- type: claude-code
- prompt: Analyze the codebase structure

### step_two

Continue the same session with follow-up work.

- type: claude-code
- prompt: Now refactor the issues you found
- resume: ${step_one.llm_usage.session_id}
```

### Shell command

The shell node stores the resolved command that was actually executed, accessible as ${node_id.command}. Useful for logging or debugging when the command is built dynamically from templates.

### HTTP response details

The http node exposes response_headers (dict) and response_time (seconds as float) alongside the response body. Useful for handling pagination links, rate limit headers, or performance monitoring.
### Cost

Per-node cost is available internally as `llm_usage.cost_usd` (estimated from token pricing, null for unknown models). Access it via a code node’s inputs dict — it’s not exposed as a template variable because pricing coverage varies by model. Aggregate cost across all nodes appears in the CLI JSON output after the workflow finishes.

## Escaping

To produce literal `${...}` text (not a template variable), escape it with a double dollar sign:
````markdown
### print_price

Print the literal price variable.

- type: shell

```shell command
echo 'Price: $${PRICE}'
```
````
This produces the literal string Price: ${PRICE} instead of trying to resolve a variable.
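The escape rule can be sketched as a substitution pass that protects `$$` sequences before resolving templates (a hypothetical helper, not pflow's implementation):

```python
import re

def substitute(text, store):
    """Resolve ${var} templates while honouring the $$ escape.
    Sketch only -- pflow's real resolver also handles types and nesting."""
    # Protect escaped sequences so they survive substitution untouched.
    text = text.replace("$${", "\x00{")
    text = re.sub(r"\$\{(\w+)\}", lambda m: str(store[m.group(1)]), text)
    return text.replace("\x00{", "${")

print(substitute("Price: $${PRICE}", {}))             # Price: ${PRICE}
print(substitute("Cost: ${amount}", {"amount": 9}))   # Cost: 9
```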

## Validation

pflow validates template variables at workflow creation time:
  • Unknown variables → Error: “Unresolved variable: ${typo}”
  • Type mismatches → Warning: “Expected string, got dict”
  • Invalid syntax → Error: “Invalid template: ${foo.}”
This works because node types declare their outputs — pflow knows at creation time what fields exist and what types they have. With arbitrary code, you’d discover these mismatches at runtime.
Most validation happens when the workflow is created (compile-time). Some validations happen during execution (runtime):
  • Compile-time: Variable existence, type compatibility, syntax
  • Runtime: JSON parsing success, nested access on dynamic values
If JSON auto-parsing fails at runtime, you’ll see an “Unresolved variable” error.