First public release on PyPI. pflow is a CLI workflow engine — AI agents
write `.pflow.md` files that chain shell commands, LLM calls, HTTP requests,
and Python code through a shared data store. Workflows run the same way
every time, without burning tokens on repeated tool calls.

Agent skills
You can now publish workflows as native skills for AI agents. The `pflow skill`
command symlinks your saved workflows to the configuration directories for
Claude Code, Cursor, GitHub Copilot, and Codex.

- `pflow skill save` enriches workflows with usage sections and metadata for the agent.
- Support for multiple targets: `--cursor`, `--copilot`, `--codex`, and `--personal`.
- `pflow workflow history` shows execution stats and last-used inputs.
- Improved discovery matching by including input names and node IDs in the context.
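For example, publishing one workflow to two targets. The subcommand and flags are from this release; the exact argument order is a guess:

```shell
# publish a saved workflow as a skill for multiple agents
pflow skill save my-workflow --cursor --copilot
```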
Data integrity
LLM nodes no longer discard prose when extracting JSON. Previously, if a
response contained a JSON block, the node threw away the surrounding text.
Now the full response is stored as a string, and JSON parsing happens on
demand via the template system.

Highlights

- LLM nodes preserve prose explanations alongside code blocks.
- JSON fields are still accessible via dot notation: `${node.response.field}`.
- Numeric strings (like Discord IDs) declared as `type: string` are no longer coerced to integers.
- Batch node error messages now correctly list available outputs for inner items.
- Workflow frontmatter tracks average execution duration for performance monitoring.
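For instance, if an LLM node named `review` emits prose around a JSON block, both forms stay reachable. The node names, fields, and YAML layout here are hypothetical:

````markdown
```yaml
# hypothetical follow-up node; syntax illustrative only
id: notify
type: shell
command: |
  echo "${review.response}"          # full response, prose included
  echo "${review.response.verdict}"  # JSON field parsed on demand
```
````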
Developer experience
Runtime errors in Code nodes now point to the exact line number in your
`.pflow.md` file, rather than the temporary Python script. We also improved
environment variable handling in MCP configurations to support dynamic URLs.

Highlights

- Code node errors show `Location` and `Source` fields with correct line mapping.
- MCP server configs now expand environment variables in URLs and `settings.json`.
- Markdown parser specifically detects and explains nested backtick errors.
Package name

The PyPI package is `pflow-cli`, not `pflow` (that name was already taken).
This is the first PyPI release — if you installed from git before, switch to
installing from PyPI.

Workflows are documentation
Workflows have moved from JSON to a custom Markdown format (`.pflow.md`). The
file is the documentation — H1 headers become titles, prose becomes descriptions,
and code blocks define execution logic. Comments and formatting are preserved
when saving, so your notes survive round-trips through the CLI.

The internal parser produces the exact same IR structure as before, so
execution logic is unchanged. The migration is purely about authoring
experience and LLM readability.
- New `.pflow.md` extension with YAML frontmatter for metadata.
- Line-by-line error reporting with context, replacing JSON syntax errors.
- “Save” operations update the file in place, preserving your comments.
- `pflow workflow save` extracts the description directly from the document prose.
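A minimal sketch of the shape described above: frontmatter, an H1 title, prose description, then code blocks. The node syntax shown is illustrative, not taken from the pflow reference:

````markdown
---
# YAML frontmatter holds metadata
description: Summarize the last ten commits
---

# Summarize commits

Fetches recent git history and asks an LLM for a summary.

```yaml
id: log
type: shell
command: git log --oneline -10
```

```yaml
id: summary
type: llm
prompt: "Summarize these commits: ${log.stdout}"
```
````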
Native Python execution
The new `code` node runs Python in-process, passing native objects (lists,
dicts) between steps without serialization overhead. Unlike the shell node,
it doesn’t need jq to parse inputs — inputs are injected directly as local
variables.

- Zero-overhead data passing for heavy transformations.
- Required type annotations catch type mismatches before execution.
- `stdout`/`stderr` capture for debugging, with configurable timeouts.
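Inside a `code` node, inputs are ordinary Python objects, so a transformation that would otherwise need a `jq` step is plain Python. In this sketch, `users` stands in for an injected input:

```python
# `users` would be injected by pflow as a local variable;
# it is defined inline here so the sketch is self-contained.
users = [{"name": "ada", "score": 9}, {"name": "bob", "score": 4}]

# native list/dict access, no JSON parsing or serialization step
top = sorted(users, key=lambda u: u["score"], reverse=True)
names = [u["name"] for u in top]
```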
Unix piping and validation
You can now chain workflows using standard Unix pipes. Mark an input with
`stdin: true`, and pflow will route piped data to that specific parameter.
Validation has also been unified: the checks that run during `--validate-only`
now run before every execution, catching errors like invalid JSON string
templates before any steps run.

- `stdin: true` input property for explicit pipe routing.
- FIFO detection prevents hangs when no input is piped.
- Unified validation logic ensures `--validate-only` matches runtime behavior.
- Improved error messages for unknown node types (no more stack traces).
- `disallowed_tools` parameter on Claude Code nodes to block specific tools in agentic workflows.
- Fixed nested template validation for `${item.field}` inside array brackets.
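For example, assuming a saved workflow named `summarize` whose input declaration is sketched below. The `stdin: true` property is from this release; the surrounding field layout is illustrative:

````markdown
```yaml
inputs:
  text:
    type: string
    stdin: true   # piped data is routed to this parameter
```

```shell
git log --oneline -20 | pflow summarize
```
````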
Breaking changes
Workflow format
JSON workflow files (`.json`) are no longer supported. Existing workflows
must be converted to the `.pflow.md` format. The CLI will reject JSON files
with a migration error.

Stdin handling

The `${stdin}` shared store variable has been removed. You must now explicitly
mark an input parameter to receive piped data.

CLI changes

- `pflow workflow save` no longer accepts `--description`. It extracts the description from the Markdown content (text after the H1 header).
- Metadata is now stored in YAML frontmatter rather than a `rich_metadata` wrapper.
Batch processing
Need to classify 50 commits with an LLM, or fetch 200 URLs? Add a `batch`
config to any node and pflow handles the fan-out. Works with every node
type — LLM, shell, HTTP, MCP, all of them.

- Sequential and parallel execution with configurable concurrency (`max_concurrent`).
- `error_handling: continue` keeps going when individual items fail — you get partial results instead of nothing.
- Progress indicators in the CLI so you can see where a 200-item batch is at.
- Access results with `${node.results}`, individual items with `${node.results[0].response}`.
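A sketch of what a `batch` config might look like. The `max_concurrent` and `error_handling: continue` keys are from this release; the surrounding node layout and the source expression are illustrative:

````markdown
```yaml
id: classify
type: llm
prompt: "Classify this commit: ${item}"
batch:
  items: ${log.stdout}        # illustrative source expression
  max_concurrent: 5
  error_handling: continue    # keep partial results when items fail
```
````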
Smarter templates
Template variables like `${node.stdout.items[0].name}` now parse JSON
automatically. If a shell command outputs a JSON string, you can access
nested fields directly — no more jq extraction steps between every shell
node and the thing that consumes it.

Highlights

- `${node.stdout.field}` resolves through JSON strings without an intermediate node.
- Inline object templates preserve types correctly — no more double-serialization when passing dicts.
- Dicts and lists auto-coerce to JSON strings when mapped to string-typed parameters.
- Optional inputs without defaults resolve correctly instead of erroring.
Before and after
Previously you needed an extraction step between a shell command and
anything that wanted its output as structured data:
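The contrast looks roughly like this; the node syntax is illustrative:

````markdown
Before, an intermediate extraction node:

```yaml
id: extract
type: shell
command: echo '${fetch.stdout}' | jq -r '.items[0].name'
```

After, the template resolves through the JSON string directly:

```yaml
id: report
type: llm
prompt: "First item: ${fetch.stdout.items[0].name}"
```
````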
Shell node fixes
Shell nodes now surface `stderr` even when the exit code is zero. Tools
like curl and ffmpeg write diagnostics to stderr on success, and those
warnings were getting lost.

Highlights

- `stderr` visible on successful commands, not just failures.
- Trailing newlines stripped from `stdout` by default (disable with `strip_newline: false`).
- Pipeline-aware error detection for `grep | sed` chains where only the last exit code was visible.
- Fixed `SIGPIPE` crashes when a subprocess closed its input early.
Breaking changes
Explicit data wiring
Nodes can no longer silently read from the shared store by key name. All
data must be wired through `${variable}` templates. This prevents a class
of bugs where a node ID collided with a parameter name and got the wrong
value.

Claude Code node

- `task` → `prompt`
- `working_directory` → `cwd`
- `context` removed — include it directly in the prompt
Validation that helps you fix things
When something goes wrong, pflow now tells the agent exactly what to do
instead of printing a stack trace. Wrong template path? It shows every
available output with its type and suggests the correct one.

Validation runs automatically before every execution — no separate step
needed. The `--validate-only` flag lets agents check a workflow without
running it.

- Template references checked against actual node outputs before execution starts.
- “Did you mean?” suggestions for misspelled node names and output paths.
- Type mismatch warnings when connecting incompatible outputs to inputs.
- `--validate-only` flag for CI pipelines and agent pre-checks.
Agent tooling
The CLI now has discovery commands so agents can find the right building
blocks without knowing what’s available ahead of time. `registry discover`
takes a natural language description and returns matching nodes.

Highlights

- `pflow registry discover "fetch API data and send to Slack"` returns matching nodes ranked by relevance.
- `pflow registry run node-type param=value` tests individual nodes outside of a workflow — output is pre-filtered for agents, showing structure without data.
- `pflow instructions usage` gives agents a complete guide to pflow’s commands and patterns.
- Allow/deny filtering via `pflow settings` to control which nodes are available.
Example: agent discovery flow
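The commands below are from this release; the specific node type and parameter are illustrative:

```shell
# 1. Describe the task in natural language to find matching nodes
pflow registry discover "fetch API data and send to Slack"

# 2. Try a candidate node on its own before wiring it into a workflow
pflow registry run http url=https://api.example.com/items
```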
MCP server improvements
Connecting external tools got more reliable. Server configs now expand
environment variables everywhere (URLs, headers, auth fields), and sync
only runs when something actually changed.

Highlights

- Environment variables expanded in all MCP config fields, not just API keys.
- Smart sync skips re-scanning when server configs haven’t changed (~500ms saved on warm starts).
- HTTP transport support for remote MCP servers alongside stdio.
- Better error messages when MCP servers fail to start or authenticate.
Workflow engine
Write a `.pflow.md` file, run it from the terminal. Steps execute top to
bottom, data flows between them through template variables. Save it with
`pflow workflow save` and it becomes a command you can run from anywhere.

- Run from file path (`pflow workflow.pflow.md`) or by name (`pflow my-workflow`).
- Templates reach into nested objects and arrays — `${node.result.data.users[0].email}`.
- Execution traces saved to `~/.pflow/debug/` with per-node inputs, outputs, and timing.
- Pipe workflows together: `pflow -p workflow-a | pflow -p workflow-b`.
Built-in nodes
Eight node types that cover the common building blocks. MCP bridges to
anything else — GitHub, Slack, databases, whatever has an MCP server.

Highlights

- `shell` — run commands with dangerous-pattern blocking and timeouts.
- `code` — inline Python with native object passing (no serialization overhead).
- `llm` — any model via Simon Willison’s llm library, with token tracking.
- `http` — all methods, auth, request bodies, automatic JSON parsing.
- `file` — read, write, copy, move, delete.
- `mcp` — bridge to any MCP server over stdio or HTTP transport.
- `claude-code` — delegate agentic subtasks to Claude Code.
- `git`/`github` — common operations without shell scripting.
MCP server
pflow itself runs as an MCP server, so agents in Claude Desktop, Cursor, or
any MCP-compatible environment can build and run workflows programmatically.

Highlights

- 11 tools covering workflow execution, node discovery, and registry inspection.
- Structure-only output mode — agents see schema types without actual data, keeping context windows small.
- Works alongside CLI usage. Same workflows, same registry, different interface.
Quick start
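The commands here are assembled from elsewhere in these notes (the install name, run-by-file, and save), so treat the sequence as a sketch:

```shell
pip install pflow-cli          # the PyPI package is pflow-cli, not pflow
pflow workflow.pflow.md        # run a workflow from a file
pflow workflow save            # save it as a named, runnable command
```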
What's next
Batch processing for fan-out patterns, smarter template resolution, and
shell node reliability improvements. See the Roadmap.

