pflow provides a CLI for running workflows, managing MCP servers, and configuring settings.
Who runs these commands? Most pflow commands are run by your AI agent, not by you directly. You handle setup (installation, API keys, MCP servers), then your agent uses pflow to build and run workflows. This reference documents all commands so you understand what your agent is doing. See Using pflow for what to expect day-to-day.
No built-in natural language mode. pflow executes workflow files and saved workflows. Your AI agent builds workflows using pflow’s MCP tools or CLI primitives — pflow doesn’t have its own natural language interface.
| Option | Description |
| --- | --- |
| `--dry-run` | Preview which nodes would run or serve from cache, without executing |
| `--help` | Show help message |
Older natural-language workflow generation flags have been removed. Use the current global options shown above, or let your AI agent build .pflow.md workflows directly.
Workflows that declare multiple outputs mark one with stdout: true to pick which output lands on process stdout in text mode:
```markdown
## Outputs

### message
Primary result — streams to stdout on redirect or pipe.
- source: ${emit.stdout}
- stdout: true

### length
Secondary metadata. Available via `-o length` or `--output-format json`.
- source: ${count.stdout}
```
Redirecting or piping the CLI in text mode now writes only the marked output to stdout:
```bash
pflow stdout-result.pflow.md > result.txt           # file contains the message only
pflow stdout-result.pflow.md | next-step            # message streams to the next command
pflow stdout-result.pflow.md -o length              # override: emit length instead
pflow stdout-result.pflow.md --output-format json   # emit all outputs as structured JSON
```
Single-output workflows don’t need the marker — their one output is unambiguous. Workflows with multiple declared outputs and no stdout: true stream the first declared output and print a warning on stderr naming the other outputs and the three ways to change the routing: add the marker, pass -o, or switch to JSON mode. The validator enforces that at most one output per workflow is marked.
Human-readable output with live progress streamed to stderr and results on stdout. Works the same way in a terminal, CI log, agent bash tool, or subprocess capture:
```bash
pflow workflow.pflow.md
```
Progress lines and execution summary go to stderr. Declared workflow outputs go to stdout.
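The stream split composes with plain shell redirection. A minimal sketch, using a stand-in function in place of the real invocation (substitute `pflow workflow.pflow.md` directly):

```shell
# Stand-in that splits streams the way a pflow run does:
# progress on stderr, declared outputs on stdout
run_workflow() {
  echo "running step 1..." >&2
  echo "result-data"
}

# Capture results and progress separately
run_workflow > out.txt 2> progress.log

cat out.txt        # result only
cat progress.log   # progress only
```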
All workflow results, metrics, and errors serialize to a single JSON object on stdout. Progress and execution summary go to stderr (suppressed with -p).
Minimal stderr output when you want the cleanest possible data stream:
```bash
pflow -p workflow.pflow.md | jq '.data'
```
Suppresses the “Workflow output:” header, the execution summary, and stderr warnings. Data still goes to stdout (same as default mode). Useful for piping into tools that should only see the result.
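Downstream tools then parse the JSON object from stdout. A sketch with a hand-written stand-in object (only the `.data` key from the `jq` filter above is assumed; the real object also carries metrics and errors):

```shell
# Stand-in for: pflow -p workflow.pflow.md --output-format json
emit_result() {
  printf '%s\n' '{"data": {"message": "hello"}}'
}

# The consumer sees only the JSON object on stdout
emit_result | jq -r '.data.message'
```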
Agents use --validate-only to check workflows before running them — pflow catches template errors, type mismatches, and missing inputs during validation, so problems surface immediately instead of after step 5 fails. Exit code 0 means valid.
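Since exit code 0 means valid, validation composes with shell gating. A sketch of the pattern, with a stand-in for the validation call (swap in the real `pflow workflow.pflow.md --validate-only` invocation):

```shell
# Stand-in for: pflow workflow.pflow.md --validate-only
# (exit code 0 means the workflow validated cleanly)
validate() { true; }

if validate; then
  echo "valid: safe to run"
  # the real run would follow here, e.g. pflow workflow.pflow.md
else
  echo "validation failed: fix before running" >&2
  exit 1
fi
```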
Preview what a workflow would do without running it. --dry-run walks the graph using the same cache lookup the engine uses at runtime, but never invokes a node — no shell commands, LLM calls, HTTP requests, file writes, or trace files:
```bash
pflow ./workflow.pflow.md --dry-run topic=hello
```
Cached nodes render with ↻, would-execute nodes with ▸, and a divider marks the cache boundary:
```
Dry-run for workflow.pflow.md: 2 nodes

  ↻ fetch (1m ago)
  ─── cache boundary: 'summarize' ───
  ▸ summarize [code]

Summary: 1 cached · 1 would execute (1 code)
Estimated duration: ~1ms (historical, actual may vary)
```
When everything is cached, there’s no boundary. When nothing is cached, the divider reads nothing cached — full run.

For would-execute LLM nodes, the plan surfaces the cost from the most recent cache entry — labeled ≈ because pricing may have drifted. Per-node duration annotations appear on any would-execute node whose last run took at least 1 second; faster nodes stay bare:
```
▸ summarize [LLM] ≈ $0.02 (last run 15m ago)
▸ upload [shell] ~1.5s (last run 2m ago)
```
| Field | Description |
| --- | --- |
| `estimated_cost_usd` | Sum of historical LLM costs across would-execute nodes |
| `estimated_duration_ms` | Sum of historical durations across would-execute nodes |
| `cost_basis` | `"exact"` for linear plans; `"upper_bound"` when branches are enumerated |
| `cache_boundary` | Node ID of the first cache miss, or `null` when everything is cached |
| `execute_by_type` | Count of would-execute nodes by type: `{"LLMNode": 1, "ShellNode": 2}` |
| `nodes_without_history` | Would-execute LLM nodes missing cost data — non-zero means the estimate is incomplete |
| `opaque_count` | Sub-workflows the planner couldn’t resolve (e.g. `workflow: ${dynamic-ref}`) — their cost is excluded |
Plan entry fields: `node_id`, `node_type`, `status` (`cached`, `execute`, `sub_workflow`, `opaque`, `routing_error`), `cause`, `last_cost_usd`, `last_duration_ms`, `age_sec`.

Agents cost-gating on estimated_cost_usd should check opaque_count == 0 and nodes_without_history == 0 first — otherwise the estimate is missing data.
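That gating check can be sketched with `jq` over captured plan JSON. The file below is a hand-written stand-in (field names come from the table above; how the plan JSON is captured from a dry run is assumed, not shown):

```shell
# plan.json stands in for captured dry-run plan output
cat > plan.json <<'EOF'
{"estimated_cost_usd": 0.02, "opaque_count": 0, "nodes_without_history": 0}
EOF

# Trust estimated_cost_usd only when no plan data is missing
if jq -e '.opaque_count == 0 and .nodes_without_history == 0' plan.json > /dev/null; then
  cost=$(jq -r '.estimated_cost_usd' plan.json)
  echo "estimated cost: \$$cost"
else
  echo "estimate incomplete: opaque nodes or missing history" >&2
fi
```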
pflow caches node outputs automatically. When your agent re-runs a workflow file, unchanged nodes return instantly from a persistent cache — only nodes whose configuration or inputs changed will re-execute.
```bash
# First run: all nodes execute
pflow ./workflow.pflow.md title=hello

# Second run (same inputs): all nodes served from cache
pflow ./workflow.pflow.md title=hello

# Changed input: only affected nodes re-execute
pflow ./workflow.pflow.md title=different
```
The cache is content-addressed — same node config plus same resolved inputs produces the same cache key, regardless of when or how the workflow was run. Cache entries expire after 24 hours. The cache lives at ~/.pflow/cache/cache.db.
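Content-addressing can be illustrated with a toy key function; pflow's actual key derivation is internal, so this is purely conceptual:

```shell
# Toy illustration: key = checksum of node config + resolved inputs
# (cksum is only a stand-in for pflow's internal hashing)
cache_key() {
  printf '%s\n%s' "$1" "$2" | cksum | cut -d' ' -f1
}

k1=$(cache_key 'type: shell' 'title=hello')
k2=$(cache_key 'type: shell' 'title=hello')
k3=$(cache_key 'type: shell' 'title=different')

[ "$k1" = "$k2" ] && echo "same inputs, same key: cache hit"
[ "$k1" != "$k3" ] && echo "changed input, new key: re-execute"
```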
The --only flag runs the workflow through the named node and stops. Upstream nodes are served from cache if available, downstream nodes don’t execute:
```bash
pflow ./workflow.pflow.md --only process-data
```
Without -o, --only streams the targeted node or sub-workflow result to stdout instead of the workflow’s full-run outputs. Pass -o <key> when you need a specific named output.

This is how agents iterate on a specific node without re-running the full workflow.
Use --no-cache to bypass pflow memo-cache reads, so nodes execute again. Memo cache writes still happen, so the next run without --no-cache benefits from the results:
```bash
pflow ./workflow.pflow.md --no-cache
```
Use this when a node has external side effects (API calls, file writes) that should run again, or when memoized results seem stale. It does not disable LLM provider prompt caching declared with ## Cache / prompt_cache:, OpenAI automatic prompt caching, or Gemini implicit caching.
For nodes that read runtime state (git branch, date, environment variables), use - cache: false to permanently skip caching for that node. Unlike --no-cache, this skips both cache reads and writes:
````markdown
### get-branch
Detect the current git branch.
- type: shell
- cache: false

```shell command
git branch --show-current
```
````
Validation warns when a shell node has no template inputs and no `cache: false` — a sign its cached results may go stale across runs.

## Traces and reports

By default, pflow saves execution traces to `~/.pflow/debug/`:

- **Workflow traces**: `workflow-trace-{name}-{timestamp}.json`

Generate a structured execution report (one markdown file per node):

```bash
# During execution
pflow my-workflow --report

# Custom output directory
pflow my-workflow --report-dir ./my-report/

# From an existing trace (most recent)
pflow report

# From a specific trace
pflow report ~/.pflow/debug/workflow-trace-my-workflow-20260323-160000.json
```
Reports include rendered prompts, responses, cost data, error summaries with fix suggestions, and anomaly warnings.

Report output directories are replaced as generated snapshots. Custom report directories must be empty or already contain pflow’s `.pflow-report.json` marker.

Disable traces with --no-trace for faster execution (the --report flag overrides --no-trace).
Generate a Mermaid flowchart from a workflow. Shows the graph topology — nodes, edges, conditional branches, error routes, inputs, outputs — that’s otherwise scattered across individual node directives in the .pflow.md file.
The command validates the workflow first (same checks as --validate-only). On validation failure, it shows diagnostics and exits with code 1. On success, it outputs Mermaid syntax to stdout.
```bash
# Save to file (raw Mermaid syntax)
pflow visualize workflow.pflow.md -o diagram.mmd

# Save as markdown with title, optional description, and fenced mermaid block
pflow visualize workflow.pflow.md -o diagram.md

# Or pipe to clipboard
pflow visualize workflow.pflow.md | pbcopy
```
Mermaid renders natively in GitHub, VS Code, and most markdown viewers — no extra tooling needed. The .md output wraps the diagram in a markdown document with the workflow’s title and description, ready to commit or share.
| Option | Description |
| --- | --- |
| `-o, --output FILE` | Write to file. `.md` extension wraps in markdown with title and fenced code block. Any other extension writes raw Mermaid syntax. |
| `--depth N` | Sub-workflow expansion depth (default: 5, 0 = no expansion) |
| `--direction LR\|TD` | Graph direction: left-to-right or top-down (default: LR) |
| `--descriptions` | Add first sentence of each node’s purpose to its label |
Sub-workflow nodes (type: workflow) expand into subgraph blocks showing their internal structure. Use --depth 0 to render them as opaque nodes, or --depth 2 to expand nested sub-workflows:
```bash
# No sub-workflow expansion
pflow visualize workflow.pflow.md --depth 0

# Expand two levels deep, top-down layout
pflow visualize workflow.pflow.md --depth 2 --direction TD

# Include node descriptions in labels
pflow visualize workflow.pflow.md --descriptions
```
For a workflow with conditional branching, node shapes indicate type: `[["shell"]]` rectangles for shell, `{"code"}` diamonds for decision nodes, `(["output"])` stadiums for outputs. Edge styles: `-->` for normal flow, `-->|action|` for named branches, `-.->|error|` for error routes.

Workflow inputs and outputs render as dashed-border groups. Batch nodes show as subgraphs with their items. Sub-workflows expand into nested subgraphs with input/output wrappers showing data flow across boundaries.
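A hand-made illustration of those shapes and edge styles (hypothetical node names, not output from a real workflow):

```mermaid
flowchart LR
  fetch[["shell"]] --> check{"code"}
  check -->|ok| result(["output"])
  check -.->|error| retry[["shell"]]
```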