## Branch convergence
Workflows now support branch convergence. When downstream nodes need to reference “whichever branch ran” after a conditional split, you can use the new coalesce operator (`??`) or optional inputs in code nodes to handle skipped branches.

- Added the `??` coalesce operator for template syntax (`${a.stdout ?? b.stdout}`). The resolver tries each operand left to right and skips branches that did not execute.
- Coalesce is supported in all template contexts: inline strings, shell commands, LLM prompts, input dicts, workflow output sources, and batch items.
- Python code nodes now accept `Optional[T]` or `T | None` input annotations. If the source branch didn’t execute, `None` is injected automatically instead of raising a runtime error (see the sketch after this list).
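As a rough sketch of that last bullet, assuming a code-node body where inputs arrive as annotated locals (as described under “Native Python execution” below) and with hypothetical input names:

```python
# Hypothetical code-node body. fast_path and slow_path are wired from
# the two branches of a conditional split; the branch that was skipped
# is injected as None rather than raising a resolution error.
fast_path: str | None
slow_path: str | None

# Keep whichever branch actually ran.
result = fast_path if fast_path is not None else slow_path
```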
## Nested workflows
Workflow nodes now look and behave like every other node type. You pass parameters as regular inputs, and child outputs are exposed via the standard namespace system. Child workflow inputs are validated before execution starts: if you miss a required input or provide the wrong parameter name, the parent workflow fails immediately with an error listing the child’s declared inputs (see the sketch after this list).

- Unified `workflow` parameter handles both file paths and saved workflow names.
- Non-reserved parameters become child inputs.
- If the child workflow declares `## Outputs`, they are exposed to the parent via standard dot notation (`${node_id.output_name}`).
- Fixed relative path resolution so `./child.pflow.md` always resolves from the parent workflow’s directory, even across deep nesting levels.
- The template validator now statically resolves child workflow outputs to catch typos during compilation.
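As a sketch only: the step heading convention and node IDs below are illustrative assumptions, and `summary` assumes the child declares it under `## Outputs`; the `workflow` parameter, child-input passing, and dot-notation output access are from this release.

```markdown
## summarize (workflow)
- workflow: ./child.pflow.md    <!-- file path or saved workflow name -->
- topic: ${fetch.stdout}        <!-- non-reserved parameter becomes a child input -->

## report (llm)
- prompt: The child's summary was ${summarize.summary}
```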
## LLM reasoning and cost tracking
You can now control reasoning and thinking depth across all LLM providers through a unified interface. pflow translates your settings to the provider-specific parameters: Anthropic’s `thinking_budget`, OpenAI’s `reasoning_effort`, and Gemini’s thinking config.

- Added `reasoning_effort` (`xhigh`, `high`, `medium`, `low`, `minimal`, `none`) and `reasoning_max_tokens` (direct token budget) to the LLM node.
- Added a `model_options` parameter to the LLM node as an escape hatch for provider-specific fields.
- Unified LLM cost access: `${node.llm_usage.cost_usd}` now works in workflow templates for both standard LLM and Claude Code nodes.
- LLM costs are computed at execution time, making `cost_usd` available in the shared store immediately after each node runs.
- The Claude Code node’s redundant `_claude_metadata` output was removed; all metadata is consolidated into `llm_usage`.
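For instance, a step might combine these roughly as follows. The parameter name `reasoning_effort` and the `llm_usage.cost_usd` template are from this release; the step layout and model name are illustrative assumptions.

```markdown
## triage (llm)
- model: claude-sonnet-4.5      <!-- illustrative model name -->
- reasoning_effort: high        <!-- translated to the provider's parameter -->
- prompt: Rank these incidents by severity: ${fetch.stdout}

## log-cost (shell)
- command: echo "triage cost ${triage.llm_usage.cost_usd} USD"
```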
## Validation and execution robustness
Several compile-time and runtime edge cases around batch processing, validation depth, and parallel execution have been fixed.

### Highlights

- The template validator now infers the internal structure of batch items from the upstream batch source, catching invalid `${item.field}` references at compile time with “did you mean?” suggestions.
- Validation now recurses into nested dictionaries and lists (such as the `inputs` dict on code nodes) to catch typos and non-existent forward references.
- Required workflow inputs now strictly fail validation when provided as empty strings.
- Batch nodes now return an “error” action on partial failures when `error_handling: continue` is set, enabling proper `on-error` routing for partial batch failures.
- Fixed a bug where the validator blocked batch processing entirely on nested workflow nodes.
- Resolved a `_thread.RLock` pickle error that caused parallel batch processing to crash at runtime.
- Added `--timeout` and `--sse-timeout` flags to `pflow mcp add` so custom timeout values are preserved in config files.
## Breaking changes
### Removal of planning module and repair system
The built-in natural-language planning module and auto-repair system have been removed (~40,000 lines of code). AI agents handle planning directly; pflow provides the runtime, validation, and execution primitives they compose.

- Removed CLI flags: `--trace-planner`, `--planner-timeout`, `--planner-model`, `--auto-repair`, `--cache-planner`, `--save`/`--no-save`, `--no-update`, and `--generate-metadata`.
- Component and workflow discovery features have been preserved as plain functions for agents to query available resources.
### Nested workflow API
- The `workflow_ref` and `workflow_name` parameters have been consolidated into a single `workflow` parameter.
- `param_mapping` and `output_mapping` have been removed entirely. Pass arguments directly as inputs, and access outputs via standard dot notation.
- The `isolated` and `scoped` storage modes have been removed.
### Unresolvable output errors
When a workflow output source references a node that didn’t execute (e.g., a branch not taken) and no `??` coalesce operator is used, pflow now raises an `OutputResolutionError` with a precise diagnostic message. Previously, these unresolvable outputs were silently dropped, causing confusing downstream failures in nested workflows.

### Removed registry run timeout flag
The `--timeout` flag on the `pflow registry run` command has been removed as it was redundant. Timeouts can be passed directly as a per-node parameter (e.g., `timeout=30`).

## Conditional branching and loops
Workflows now support conditional routing. You can branch based on errors, make data-driven routing decisions in Python code nodes, and create retry loops directly in your `.pflow.md` files. To prevent infinite loops, an automatic loop guard tracks node visits and raises a `MaxNodeVisitsError` if a single node is executed 100 times (configurable via `PFLOW_MAX_NODE_VISITS`).

- Added `- next:`, `- on-error:`, and `- next: end` syntax for static and error routing.
- Python code nodes support a `next` variable for dynamic, data-driven routing (see the sketch after this list).
- Caching is now automatically invalidated when a node is revisited in a loop, ensuring exit conditions are correctly re-evaluated.
- `flow.run()` is now always wrapped to ensure visit counts reset correctly between executions.
- Topological sorting now uses position-based edge filtering to allow valid data dependencies while preventing cycle errors on backward loop edges.
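A minimal sketch of the `next` variable in a code-node body; the input names and target node IDs are hypothetical.

```python
# Hypothetical code-node body: inputs arrive as annotated locals.
exit_code: int
attempts: int

# Data-driven routing: loop back along a retry edge until the build
# succeeds; the loop guard raises MaxNodeVisitsError after 100 visits.
if exit_code != 0 and attempts < 5:
    next = "retry-build"
else:
    next = "publish"
```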
## Guaranteed structured JSON
The LLM node now accepts an `output_schema` parameter for guaranteed structured JSON responses. It uses the constrained decoding APIs of model providers (Anthropic, Gemini, OpenAI) instead of prompting for JSON and hoping the model complies.

- `yaml output_schema` code blocks pass JSON Schema dicts directly to the `llm` library (example below).
- The API response is parsed and stored as a `dict`, avoiding downstream string parsing.
- Code block stripping is safely skipped when a schema is set, since the API returns clean JSON.
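A schema block might look like the following; the fields inside the schema are made up, but the block carries a plain JSON Schema dict.

```yaml output_schema
type: object
properties:
  sentiment:
    type: string
    enum: [positive, neutral, negative]
  confidence:
    type: number
required: [sentiment, confidence]
```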
## Stateful MCP servers and error reporting
MCP servers are no longer restarted for every node in a workflow. A background event loop now acts as a connection pool, keeping server sessions alive across workflow steps. This fixes silent failures where stateful servers (like Playwright browsers or database clients) lost all state between step executions.

### Highlights

- A persistent connection pool keeps both `stdio` and `http` MCP sessions alive across nodes.
- Automatic crash recovery evicts and retries sessions once if a transport error (like a broken pipe) occurs.
- MCP error reporting now unwraps internal `ExceptionGroup` task failures to show the actual HTTP error (e.g., “Authentication failed” instead of a raw 40-line traceback).
- Fixed a logging bug that prepended “MCP tool failed:” twice.
- The node output formatter correctly detects `error` keys, so MCP failures show as “failed” instead of “succeeded”.
## Breaking changes
### Explicit branch target routing
Nodes reached via explicit routing (branch targets) used to silently fall through to the next node in document order. This silent, input-dependent bug has been fixed via parse-time validation: any node targeted by an action edge must now explicitly declare its next step.

### Dynamic routing validation
If a Python code node assigns a variable to `next` (e.g., `next = target_var` instead of a literal `"node-id"`), you must explicitly declare `- next:` in the markdown step parameters so the parser can build the execution graph.

### ReadFile outputs raw content
The ReadFile node previously prepended line numbers (`N: `) unconditionally to every line it read, which corrupted file content for downstream LLM prompts, templates, and configurations. The numbering has been removed entirely; the node now returns raw, unmodified file content.

### Python code node results
The `result` output variable is now optional in Python code nodes as long as `next` is declared.

First public release on PyPI. pflow is a CLI workflow engine: AI agents write `.pflow.md` files that chain shell commands, LLM calls, HTTP requests, and Python code through a shared data store. Workflows run the same way every time, without burning tokens on repeated tool calls.

## Agent skills
You can now publish workflows as native skills for AI agents. The `pflow skill` command symlinks your saved workflows to the configuration directories for Claude Code, Cursor, GitHub Copilot, and Codex.

- `pflow skill save` enriches workflows with usage sections and metadata for the agent.
- Support for multiple targets: `--cursor`, `--copilot`, `--codex`, and `--personal`.
- `pflow workflow history` shows execution stats and last-used inputs.
- Improved discovery matching by including input names and node IDs in the context.
## Data integrity
LLM nodes no longer discard prose when extracting JSON. Previously, if a response contained a JSON block, the node threw away the surrounding text. Now the full response is stored as a string, and JSON parsing happens on demand via the template system.

### Highlights

- LLM nodes preserve prose explanations alongside code blocks.
- JSON fields are still accessible via dot notation: `${node.response.field}`.
- Numeric strings (like Discord IDs) declared as `type: string` are no longer coerced to integers.
- Batch node error messages now correctly list available outputs for inner items.
- Workflow frontmatter tracks average execution duration for performance monitoring.
## Developer experience
Runtime errors in code nodes now point to the exact line number in your `.pflow.md` file rather than the temporary Python script. We also improved environment variable handling in MCP configurations to support dynamic URLs.

### Highlights

- Code node errors show `Location` and `Source` fields with correct line mapping.
- MCP server configs now expand environment variables in URLs and `settings.json`.
- The Markdown parser specifically detects and explains nested backtick errors.
## Package name
The PyPI package is `pflow-cli`, not `pflow` (that name was already taken). This is the first PyPI release; if you installed from git before, switch to `pip install pflow-cli`.

## Workflows are documentation
Workflows have moved from JSON to a custom Markdown format (`.pflow.md`). The file is the documentation: H1 headers become titles, prose becomes descriptions, and code blocks define execution logic. Comments and formatting are preserved when saving, so your notes survive round-trips through the CLI.

The internal parser produces the exact same IR structure as before, so execution logic is unchanged. The migration is purely about authoring experience and LLM readability.

- New `.pflow.md` extension with YAML frontmatter for metadata.
- Line-by-line error reporting with context, replacing JSON syntax errors.
- “Save” operations update the file in place, preserving your comments.
- `pflow workflow save` extracts the description directly from the document prose.
## Native Python execution
The new `code` node runs Python in-process, passing native objects (lists, dicts) between steps without serialization overhead. Unlike the shell node, it doesn’t need jq to parse inputs; inputs are injected directly as local variables (see the sketch after this list).

- Zero-overhead data passing for heavy transformations.
- Required type annotations catch type mismatches before execution.
- `stdout`/`stderr` capture for debugging, with configurable timeouts.
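A rough sketch of a code-node body under those rules; the input name is hypothetical.

```python
# Hypothetical code-node body: `users` is injected as a native list,
# with its annotation checked before the node runs.
users: list

# Reshape in-process; no JSON round-trip, no jq.
active = [u for u in users if u.get("active")]
result = {"count": len(active), "emails": [u["email"] for u in active]}
```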
## Unix piping and validation
You can now chain workflows using standard Unix pipes. Mark an input with `stdin: true`, and pflow will route piped data to that specific parameter. Validation has also been unified: the checks that run during `--validate-only` now run before every execution, catching errors like invalid JSON string templates before any steps run.

- `stdin: true` input property for explicit pipe routing (usage sketch after this list).
- FIFO detection prevents hangs when no input is piped.
- Unified validation logic ensures `--validate-only` matches runtime behavior.
- Improved error messages for unknown node types (no more stack traces).
- `disallowed_tools` parameter on Claude Code nodes to block specific tools in agentic workflows.
- Fixed nested template validation for `${item.field}` inside array brackets.
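Usage then looks like ordinary Unix piping. Here the saved workflow name is made up, and one of its inputs is assumed to declare `stdin: true`:

```sh
# Piped data is routed to the input marked stdin: true.
journalctl -u myapp --since today | pflow triage-logs
```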
## Breaking changes
### Workflow format
JSON workflow files (`.json`) are no longer supported. Existing workflows must be converted to the `.pflow.md` format. The CLI will reject JSON files with a migration error.

### Stdin handling
The `${stdin}` shared store variable has been removed. You must now explicitly mark an input parameter to receive piped data.

### CLI changes
- `pflow workflow save` no longer accepts `--description`. It extracts the description from the Markdown content (the text after the H1 header).
- Metadata is now stored in YAML frontmatter rather than a `rich_metadata` wrapper.
## Batch processing
Need to classify 50 commits with an LLM, or fetch 200 URLs? Add a `batch` config to any node and pflow handles the fan-out (a config sketch follows this list). It works with every node type: LLM, shell, HTTP, MCP, all of them.

- Sequential and parallel execution with configurable concurrency (`max_concurrent`).
- `error_handling: continue` keeps going when individual items fail, so you get partial results instead of nothing.
- Progress indicators in the CLI so you can see where a 200-item batch is at.
- Access results with `${node.results}`, and individual items with `${node.results[0].response}`.
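A sketch of the shape this might take. The `batch` key, `max_concurrent`, and `error_handling: continue` are from this release; the `items` key and its value are illustrative assumptions.

```yaml
# Hypothetical batch config on a node; the items key is an assumption.
batch:
  items: ${list-commits.stdout.commits}
  max_concurrent: 8              # parallel fan-out
  error_handling: continue       # keep partial results on item failures
```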
## Smarter templates
Template variables like `${node.stdout.items[0].name}` now parse JSON automatically. If a shell command outputs a JSON string, you can access nested fields directly: no more jq extraction steps between every shell node and the thing that consumes it.

### Highlights

- `${node.stdout.field}` resolves through JSON strings without an intermediate node.
- Inline object templates preserve types correctly, eliminating double-serialization when passing dicts.
- Dicts and lists auto-coerce to JSON strings when mapped to string-typed parameters.
- Optional inputs without defaults resolve correctly instead of erroring.
### Before and after
Previously you needed an extraction step between a shell command and
anything that wanted its output as structured data:
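With illustrative node names, the old shape was roughly:

```
fetch (shell)    → curl https://api.example.com/items
extract (shell)  → jq -r '.items[0].name' <<<'${fetch.stdout}'
use (llm)        → prompt: Summarize ${extract.stdout}
```

Now the template resolves through the JSON itself, so the middle node disappears:

```
fetch (shell)    → curl https://api.example.com/items
use (llm)        → prompt: Summarize ${fetch.stdout.items[0].name}
```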
## Shell node fixes
Shell nodes now surface `stderr` even when the exit code is zero. Tools like curl and ffmpeg write diagnostics to stderr on success, and those warnings were getting lost.

### Highlights

- `stderr` is visible on successful commands, not just failures.
- Trailing newlines are stripped from `stdout` by default (disable with `strip_newline: false`).
- Pipeline-aware error detection for `grep | sed` chains where only the last exit code was visible.
- Fixed `SIGPIPE` crashes when a subprocess closed its input early.
## Breaking changes
### Explicit data wiring
Nodes can no longer silently read from the shared store by key name. All data must be wired through `${variable}` templates. This prevents a class of bugs where a node ID collided with a parameter name and got the wrong value.

### Claude Code node

- `task` → `prompt`
- `working_directory` → `cwd`
- `context` removed; include it directly in the prompt
## Validation that helps you fix things
When something goes wrong, pflow now tells the agent exactly what to do instead of printing a stack trace. Wrong template path? It shows every available output with its type and suggests the correct one.

Validation runs automatically before every execution, with no separate step needed. The `--validate-only` flag lets agents check a workflow without running it.

- Template references are checked against actual node outputs before execution starts.
- “Did you mean?” suggestions for misspelled node names and output paths.
- Type mismatch warnings when connecting incompatible outputs to inputs.
- `--validate-only` flag for CI pipelines and agent pre-checks.
## Agent tooling
The CLI now has discovery commands so agents can find the right building blocks without knowing what’s available ahead of time. `registry discover` takes a natural-language description and returns matching nodes.

### Highlights

- `pflow registry discover "fetch API data and send to Slack"` returns matching nodes ranked by relevance.
- `pflow registry run node-type param=value` tests individual nodes outside of a workflow; output is pre-filtered for agents, showing structure without data.
- `pflow instructions usage` gives agents a complete guide to pflow’s commands and patterns.
- Allow/deny filtering via `pflow settings` to control which nodes are available.
### Example: agent discovery flow
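A plausible session using the commands above; the query, node type, and parameter are illustrative.

```sh
# 1. Describe what you need in natural language.
pflow registry discover "fetch API data and send to Slack"

# 2. Test a candidate node in isolation; output shows structure without data.
pflow registry run http url=https://api.example.com/items

# 3. Load the full command and pattern guide into the agent's context.
pflow instructions usage
```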
## MCP server improvements
Connecting external tools got more reliable. Server configs now expand environment variables everywhere (URLs, headers, auth fields), and sync only runs when something actually changed.

### Highlights

- Environment variables are expanded in all MCP config fields, not just API keys.
- Smart sync skips re-scanning when server configs haven’t changed (~500 ms saved on warm starts).
- HTTP transport support for remote MCP servers alongside stdio.
- Better error messages when MCP servers fail to start or authenticate.
## Workflow engine
Write a `.pflow.md` file and run it from the terminal. Steps execute top to bottom, and data flows between them through template variables. Save it with `pflow workflow save` and it becomes a command you can run from anywhere.

- Run from a file path (`pflow workflow.pflow.md`) or by name (`pflow my-workflow`).
- Templates reach into nested objects and arrays: `${node.result.data.users[0].email}`.
- Execution traces are saved to `~/.pflow/debug/` with per-node inputs, outputs, and timing.
- Pipe workflows together: `pflow -p workflow-a | pflow -p workflow-b`.
## Built-in nodes
Eight node types cover the common building blocks. MCP bridges to anything else: GitHub, Slack, databases, whatever has an MCP server.

### Highlights

- `shell` — run commands with dangerous-pattern blocking and timeouts.
- `code` — inline Python with native object passing (no serialization overhead).
- `llm` — any model via Simon Willison’s llm library, with token tracking.
- `http` — all methods, auth, request bodies, automatic JSON parsing.
- `file` — read, write, copy, move, delete.
- `mcp` — bridge to any MCP server over stdio or HTTP transport.
- `claude-code` — delegate agentic subtasks to Claude Code.
- `git`/`github` — common operations without shell scripting.
## MCP server
pflow itself runs as an MCP server, so agents in Claude Desktop, Cursor, or any MCP-compatible environment can build and run workflows programmatically.

### Highlights

- 11 tools covering workflow execution, node discovery, and registry inspection.
- Structure-only output mode: agents see schema types without actual data, keeping context windows small.
- Works alongside CLI usage. Same workflows, same registry, different interface.
## Quick start
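A minimal sequence using the commands covered above; the workflow file name and the exact `workflow save` arguments are illustrative.

```sh
pip install pflow-cli                        # the package is pflow-cli, not pflow

pflow my-workflow.pflow.md                   # run a workflow from a file path
pflow workflow save my-workflow.pflow.md     # save it under a name (args illustrative)
pflow my-workflow                            # run it from anywhere by name
```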
## What's next
Batch processing for fan-out patterns, smarter template resolution, and
shell node reliability improvements. See the Roadmap.

