Current status
Where pflow is today:
- Core workflow engine built on PocketFlow
- Node system — file, llm, http, shell, claude-code, and MCP bridge
- AI agent integration via CLI and MCP server
- Discovery — find nodes and workflows by describing what you need
- Template variables — connect node outputs to inputs
- Workflow validation with actionable error messages
- Execution traces for debugging
- Settings management — API keys, node filtering
- Unified model support — use any llm-supported provider for discovery and workflows
- Batch processing — process arrays of items through a single node (sequential or parallel)
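To picture how template variables connect one node's output to another's input, here is a minimal conceptual sketch in plain Python. The `{{node.key}}` placeholder form and the node names are assumptions for illustration, not pflow's actual syntax:

```python
import re

# Outputs produced by earlier nodes in a run, keyed by node name
# (hypothetical node names for illustration).
node_outputs = {
    "fetch": {"body": "release notes text"},
    "summarize": {"text": "pflow ships batch processing"},
}

def resolve(template: str) -> str:
    """Replace {{node.key}} placeholders with prior node outputs."""
    def lookup(match: re.Match) -> str:
        node, key = match.group(1), match.group(2)
        return str(node_outputs[node][key])
    return re.sub(r"\{\{(\w+)\.(\w+)\}\}", lookup, template)

# A downstream node's input is built from an upstream node's output.
prompt = resolve("Tweet this summary: {{summarize.text}}")
```

The same substitution idea extends naturally to batch processing: resolving the template once per item in an input array.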
Now
Getting pflow into users’ hands
Current focus is preparing for public release:
- Completing user documentation
- Publishing to PyPI for easy installation
- Ensuring a smooth first-run experience
Next
Model discovery
- Show available models to agents based on configured API keys
- Help agents select appropriate models for different tasks
- Benchmark pflow’s efficiency using MCPMark evaluation
- Quantify token savings and latency improvements
Later
More expressive workflows
Expanding what workflows can express:
- Conditional branching — if/else logic in workflows
- Task parallelism — run independent nodes concurrently (fan-out/fan-in)
- Nested workflow support
- Structured output from LLM nodes (JSON schemas)
- Export workflows to standalone Python code
- Execution preview before running
- Sandbox runtime for shell commands
- Granular permission boundaries
- Export workflows as self-hosted MCP server packages
- Share automation as installable tools
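The fan-out/fan-in pattern mentioned above can be pictured with Python's standard `concurrent.futures`. This is a generic sketch of the pattern, not pflow's planned API; the node function and URLs are made up:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_node(url: str) -> str:
    # Stand-in for an http node; a real node would make a request.
    return f"payload from {url}"

urls = ["https://a.example", "https://b.example", "https://c.example"]

# Fan out: run independent node calls concurrently.
with ThreadPoolExecutor() as pool:
    results = list(pool.map(fetch_node, urls))

# Fan in: a downstream node consumes the gathered results.
combined = "\n".join(results)
```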
Vision
Long-term ideas on the radar:
- Discover and install MCP servers automatically
- Community registry for workflows and MCP servers
- Cloud execution for team use cases
- Workflows exposed as remote HTTP services
Get involved
Built by a developer who got tired of watching agents re-think the same tasks.
Questions or ideas? Reach out — andreas@pflow.run

