
Documentation Index

Fetch the complete documentation index at: https://docs.pflow.run/llms.txt

Use this file to discover all available pages before exploring further.

pflow provides a CLI for running workflows, managing MCP servers, and configuring settings.
Who runs these commands? Most pflow commands are run by your AI agent, not by you directly. You handle setup (installation, API keys, MCP servers), then your agent uses pflow to build and run workflows. This reference documents all commands so you understand what your agent is doing. See Using pflow for what to expect day-to-day.

Command structure

pflow [command] [options] [arguments]

Command groups

- `pflow` (default): Run workflows by name or file
- `pflow list` / `find` / `describe`: Find and inspect saved workflows
- `pflow skill`: Publish workflows as AI agent skills
- `pflow guide` / `probe`: Learn the surface and test single nodes
- `pflow mcp`: Manage MCP server connections
- `pflow settings`: Configure API keys and node filtering
- `pflow visualize`: Generate a Mermaid flowchart from a workflow
- `pflow guide`: Get AI agent entry guidance

Main command

The default pflow command runs workflows. Your agent uses this to run saved workflows or workflow files it has created.

Run a saved workflow

pflow my-workflow input=data.txt threshold=0.5

Run from a file

pflow ./workflow.pflow.md
pflow ~/workflows/analysis.pflow.md param=value
No built-in natural language mode. pflow executes workflow files and saved workflows. Your AI agent builds workflows using pflow’s MCP tools or CLI primitives — pflow doesn’t have its own natural language interface.

Global options

| Option | Description |
|---|---|
| `--version` | Show pflow version |
| `-v, --verbose` | Show detailed execution output |
| `-o, --output-key KEY` | Specific shared store key to output |
| `--output-format text\|json` | Output format (default: text) |
| `-p, --print` | Minimal output: suppress header, summary, and warnings |
| `--no-trace` | Disable workflow trace saving |
| `--cache/--no-cache` | Enable/disable memoization cache reads (default: enabled) |
| `--only NODE` | Run workflow through this node, then stop |
| `--validate-only` | Validate workflow without running |
| `--dry-run` | Preview which nodes would run or serve from cache, without executing |
| `--help` | Show help message |
Older natural-language workflow generation flags have been removed. Use the current global options shown above, or let your AI agent build .pflow.md workflows directly.

Parameter syntax

Pass parameters to workflows using key=value syntax:
pflow my-workflow input=data.txt count=10 enabled=true
Type inference:
  • true / false → boolean
  • 10 → integer
  • 3.14 → float
  • '["a","b"]' → JSON array
  • '{"key":"val"}' → JSON object
  • Everything else → string
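As a rough sketch (not pflow's actual parser), the classification can be mimicked with a small shell function; edge cases such as negative numbers or malformed JSON are ignored here:

```shell
# Hypothetical sketch of key=value type inference; pflow's real
# implementation may differ around edge cases.
infer_type() {
  case "$1" in
    true|false) echo boolean ;;              # booleans first
    *[!0-9]*)                                # contains a non-digit
      case "$1" in
        [0-9]*.[0-9]*) echo float ;;         # e.g. 3.14
        "["*"]")       echo "json array" ;;  # e.g. '["a","b"]'
        "{"*"}")       echo "json object" ;; # e.g. '{"key":"val"}'
        *)             echo string ;;        # fallback
      esac ;;
    *) echo integer ;;                       # all digits, e.g. 10
  esac
}

infer_type 3.14    # float
infer_type hello   # string
```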

Stdin input

Pipe data into workflows that declare an input with stdin: true:
echo "test content" | pflow my-workflow
cat data.csv | pflow csv-analyzer
The workflow must have an input marked to receive stdin:
## Inputs

### data

Data input from stdin pipe.

- type: string
- required: true
- stdin: true
Piped data routes to this input automatically. CLI parameters override stdin if both are provided:
# CLI parameter wins - "override" is used, not piped content
echo "piped" | pflow my-workflow data="override"
For workflow chaining, use the -p flag to output results for the next workflow:
pflow -p step1 | pflow -p step2 | pflow step3

Stdout output

Workflows that declare multiple outputs mark one with stdout: true to pick which output lands on process stdout in text mode:
## Outputs

### message

Primary result — streams to stdout on redirect or pipe.

- source: ${emit.stdout}
- stdout: true

### length

Secondary metadata. Available via `-o length` or `--output-format json`.

- source: ${count.stdout}
Redirecting or piping the CLI in text mode now writes only the marked output to stdout:
pflow stdout-result.pflow.md > result.txt       # file contains the message only
pflow stdout-result.pflow.md | next-step        # message streams to the next command
pflow stdout-result.pflow.md -o length          # override: emit length instead
pflow stdout-result.pflow.md --output-format json   # emit all outputs as structured JSON
Single-output workflows don’t need the marker — their one output is unambiguous. Workflows with multiple declared outputs and no stdout: true stream the first declared output and print a warning on stderr naming the other outputs and the three ways to change the routing: add the marker, pass -o, or switch to JSON mode. The validator enforces that at most one output per workflow is marked.

Output modes

Text mode (default)

Human-readable output with live progress streamed to stderr and results on stdout. Works the same way in a terminal, CI log, agent bash tool, or subprocess capture:
pflow workflow.pflow.md
Progress lines and execution summary go to stderr. Declared workflow outputs go to stdout.
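This split matters when capturing output programmatically. A stand-in command (not pflow itself) illustrates the pattern: capture stdout as the result while progress stays on stderr:

```shell
# Stand-in for any CLI that reports progress on stderr and emits
# results on stdout, as pflow's text mode does.
fake_workflow() {
  echo "running node: fetch" >&2       # progress -> stderr
  echo "running node: summarize" >&2
  echo "final result"                  # declared output -> stdout
}

result=$(fake_workflow 2>/dev/null)    # capture only the result
echo "$result"                         # prints: final result
```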

JSON mode

Structured output on stdout for machine parsing:
pflow --output-format json workflow.pflow.md
All workflow results, metrics, and errors serialize to a single JSON object on stdout. Progress and execution summary go to stderr (suppressed with -p).

Minimal output (-p)

The -p flag gives minimal stderr output when you want the cleanest possible data stream:
pflow -p workflow.pflow.md | jq '.data'
It suppresses the “Workflow output:” header, the execution summary, and stderr warnings. Data still goes to stdout (same as default mode). Useful for piping into tools that should only see the result.

Exit codes

| Code | Meaning |
|---|---|
| 0 | Workflow completed, including runs that completed with warnings (status: `"degraded"`) |
| 1 | Workflow failed |
| 130 | Workflow interrupted |
Runtime warnings remain visible in stderr, JSON output, traces, and reports; they do not make a completed workflow a process failure.
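In scripts, this contract can be consumed with a plain case on `$?`; the wrapper below is illustrative, using stand-in commands rather than a real pflow run:

```shell
# Map pflow's exit codes to a status word; 130 follows the usual
# 128+SIGINT convention for interrupted processes.
status_of() {
  "$@"
  case $? in
    0)   echo completed ;;   # includes degraded runs with warnings
    1)   echo failed ;;
    130) echo interrupted ;;
    *)   echo unknown ;;
  esac
}

status_of true    # completed
status_of false   # failed
```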

Validation mode

Validate a workflow without running it:
pflow --validate-only workflow.pflow.md
pflow --validate-only my-saved-workflow
Agents use this to check workflows before running them — pflow catches template errors, type mismatches, and missing inputs during validation, so problems surface immediately instead of after step 5 fails. Exit code 0 means valid.

Dry-run mode

Preview what a workflow would do without running it. --dry-run walks the graph using the same cache lookup the engine uses at runtime, but never invokes a node — no shell commands, LLM calls, HTTP requests, file writes, or trace files:
pflow ./workflow.pflow.md --dry-run topic=hello
Cached nodes render with ↻, would-execute nodes with ▸, and a divider marks the cache boundary:
Dry-run for workflow.pflow.md: 2 nodes

  ↻ fetch  (1m ago)
  ─── cache boundary: 'summarize' ───
  ▸ summarize  [code]

Summary: 1 cached · 1 would execute (1 code)
Estimated duration: ~1ms  (historical, actual may vary)
When everything is cached, there’s no boundary. When nothing is cached, the divider reads nothing cached — full run. For would-execute LLM nodes, the plan surfaces the cost from the most recent cache entry, labeled with ≈ because pricing may have drifted. Per-node duration annotations appear on any would-execute node whose last run took at least 1 second; faster nodes stay bare:
▸ summarize  [LLM]   ≈ $0.02 (last run 15m ago)
▸ upload     [shell] ~1.5s (last run 2m ago)

JSON output

pflow ./workflow.pflow.md --dry-run --output-format json topic=hello
Top-level shape: {workflow, plan, summary, diagnostics}.

Flag combinations

| Flag | With `--dry-run` |
|---|---|
| `--validate-only` | Rejected with exit 1 — different audiences, different exit contracts |
| `--report`, `--report-dir` | Rejected with exit 1 — no execution means no report |
| `--no-cache` | Every node shows as would-execute |
| `--only NODE` | Plan stops at the named node |
| `--no-trace`, `-p`, `-o` | Accepted silently — dry-run writes no traces, and the plan itself is the result |
Exit 0 on a successful plan, 1 on planner-level failures (missing input, compile error, unresolvable sub-workflow, cycle, max depth exceeded).

Iteration and caching

pflow caches node outputs automatically. When your agent re-runs a workflow file, unchanged nodes return instantly from a persistent cache — only nodes whose configuration or inputs changed will re-execute.
# First run: all nodes execute
pflow ./workflow.pflow.md title=hello

# Second run (same inputs): all nodes served from cache
pflow ./workflow.pflow.md title=hello

# Changed input: only affected nodes re-execute
pflow ./workflow.pflow.md title=different
The cache is content-addressed — same node config plus same resolved inputs produces the same cache key, regardless of when or how the workflow was run. Cache entries expire after 24 hours. The cache lives at ~/.pflow/cache/cache.db.
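A content-addressed key can be pictured as a hash over the node’s config and its resolved inputs; this is a conceptual sketch, not pflow’s actual key derivation:

```shell
# Conceptual sketch: same config + same inputs => same key,
# so a re-run can look up the prior result.
cache_key() {
  printf 'config=%s\ninputs=%s\n' "$1" "$2" | sha256sum | cut -d' ' -f1
}

k1=$(cache_key "type: shell" "title=hello")
k2=$(cache_key "type: shell" "title=hello")
k3=$(cache_key "type: shell" "title=different")
[ "$k1" = "$k2" ]   # unchanged content hits the cache
[ "$k1" != "$k3" ]  # changed input forces re-execution
```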

Run a single node

The --only flag runs the workflow through the named node and stops. Upstream nodes are served from cache when available; downstream nodes don’t execute:
pflow ./workflow.pflow.md --only process-data
Without -o, --only streams the targeted node or sub-workflow result to stdout instead of the workflow’s full-run outputs. Pass -o <key> when you need a specific named output. This is how agents iterate on a specific node without re-running the full workflow.

Bypass memo cache reads

Use --no-cache to bypass pflow memo-cache reads, so nodes execute again. Memo cache writes still happen, so the next run without --no-cache benefits from the results:
pflow ./workflow.pflow.md --no-cache
Use this when a node has external side effects (API calls, file writes) that should run again, or when memoized results seem stale. It does not disable LLM provider prompt caching declared with ## Cache / prompt_cache:, OpenAI automatic prompt caching, or Gemini implicit caching.

Per-node opt-out

For nodes that read runtime state (git branch, date, environment variables), use - cache: false to permanently skip caching for that node. Unlike --no-cache, this skips both cache reads and writes:
### get-branch

Detect the current git branch.

- type: shell
- cache: false

```shell command
git branch --show-current
```

Validation warns when a shell node has no template inputs and no `cache: false` — a sign its cached results may go stale across runs.

Traces and reports

By default, pflow saves execution traces to `~/.pflow/debug/`:

- **Workflow traces**: `workflow-trace-{name}-{timestamp}.json`

Generate a structured execution report (one markdown file per node):

```bash
# During execution
pflow my-workflow --report

# Custom output directory
pflow my-workflow --report-dir ./my-report/

# From an existing trace (most recent)
pflow report

# From a specific trace
pflow report ~/.pflow/debug/workflow-trace-my-workflow-20260323-160000.json
```

Reports include rendered prompts, responses, cost data, error summaries with fix suggestions, and anomaly warnings. Report output directories are replaced as generated snapshots. Custom report directories must be empty or already contain pflow’s `.pflow-report.json` marker. Disable traces with --no-trace for faster execution (the --report flag overrides --no-trace).

Visualize command

Generate a Mermaid flowchart from a workflow. Shows the graph topology — nodes, edges, conditional branches, error routes, inputs, outputs — that’s otherwise scattered across individual node directives in the .pflow.md file.
pflow visualize workflow.pflow.md
pflow visualize my-saved-workflow
The command validates the workflow first (same checks as --validate-only). On validation failure, it shows diagnostics and exits with code 1. On success, it outputs Mermaid syntax to stdout.
# Save to file (raw Mermaid syntax)
pflow visualize workflow.pflow.md -o diagram.mmd

# Save as markdown with title, optional description, and fenced mermaid block
pflow visualize workflow.pflow.md -o diagram.md

# Or pipe to clipboard
pflow visualize workflow.pflow.md | pbcopy
Mermaid renders natively in GitHub, VS Code, and most markdown viewers — no extra tooling needed. The .md output wraps the diagram in a markdown document with the workflow’s title and description, ready to commit or share.
| Option | Description |
|---|---|
| `-o, --output FILE` | Write to file. `.md` extension wraps in markdown with title and fenced code block. Any other extension writes raw Mermaid syntax. |
| `--depth N` | Sub-workflow expansion depth (default: 5, 0 = no expansion) |
| `--direction LR\|TD` | Graph direction: left-to-right or top-down (default: LR) |
| `--descriptions` | Add first sentence of each node’s purpose to its label |
Sub-workflow nodes (type: workflow) expand into subgraph blocks showing their internal structure. Use --depth 0 to render them as opaque nodes, or --depth 2 to expand nested sub-workflows:
# No sub-workflow expansion
pflow visualize workflow.pflow.md --depth 0

# Expand two levels deep, top-down layout
pflow visualize workflow.pflow.md --depth 2 --direction TD

# Include node descriptions in labels
pflow visualize workflow.pflow.md --descriptions
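For a small linear workflow, the emitted diagram might look roughly like this (hypothetical node names; the real output also includes inputs, outputs, and branch labels):

```mermaid
flowchart LR
  input[/topic/] --> fetch
  fetch --> summarize
  summarize --> output[/summary/]
```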

Guide command

The pflow guide command provides the entry guidance for AI agents using pflow.
pflow guide
pflow guide http llm
Without topics it renders the same entry content as pflow --help. Topic composition is introduced in Task 77.