Current status
Where pflow is today:
- Core workflow engine built on PocketFlow
- Node system — file, llm, http, shell, claude-code, and MCP bridge
- AI agent integration via CLI and MCP server
- Intelligent discovery — find nodes and workflows by describing what you need
- Template variables — pass one node's output into another node's input (see the sketch after this list)
- Workflow validation with actionable error messages
- Execution traces for debugging
- Settings management — API keys, node filtering
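As a rough illustration of the template-variable item above, here is a minimal Python sketch of the idea: a later node's input references an earlier node's output by name, and the placeholder is filled in before that node runs. The workflow dict, the node fields, and the `{{node.output}}` syntax are assumptions made for this sketch, not pflow's actual format.

```python
import re

# Purely illustrative: pflow's real workflow schema and template syntax may differ.
# Idea: the "summarize" node's prompt references the "fetch" node's output.
workflow = {
    "nodes": [
        {"id": "fetch", "type": "http",
         "input": {"url": "https://example.com/data.json"}},
        {"id": "summarize", "type": "llm",
         "input": {"prompt": "Summarize this JSON: {{fetch.output}}"}},
    ]
}

def resolve_templates(text: str, outputs: dict) -> str:
    """Replace {{node.output}} placeholders with the named node's recorded output."""
    return re.sub(
        r"\{\{(\w+)\.output\}\}",
        lambda m: str(outputs.get(m.group(1), m.group(0))),
        text,
    )

outputs = {"fetch": '{"status": "ok"}'}  # pretend the http node already ran
print(resolve_templates(workflow["nodes"][1]["input"]["prompt"], outputs))
# Summarize this JSON: {"status": "ok"}
```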
Now
Getting pflow into users' hands
Current focus is preparing for public release:
- Completing user documentation
- Publishing to PyPI for easy installation
- Ensuring a smooth first-run experience
Next
Unified model support
Using the llm library consistently across pflow (see the sketch after this list):
- Show available models to agents based on configured API keys
- Use llm for internal discovery and structured output (not just LLM nodes)
- Let users choose their preferred model for pflow's internal operations
- Benchmark pflow’s efficiency using MCPMark evaluation
- Quantify token savings and latency improvements
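As a minimal sketch of what this could look like internally, the snippet below resolves a user-chosen model through the llm library and runs one of pflow's internal prompts against it. The function name, the default model, and the idea of reading the choice from pflow's settings are assumptions; only `llm.get_model()` and `.prompt()` come from the library itself.

```python
import llm  # https://llm.datasette.io

def run_internal_prompt(prompt: str, model_name: str = "gpt-4o-mini") -> str:
    """Run one of pflow's internal prompts (e.g. discovery) through
    whichever model the user has configured, instead of a hard-coded one."""
    model = llm.get_model(model_name)  # resolves any model llm knows about
    response = model.prompt(prompt)    # uses the API key llm already manages
    return response.text()

# The planner could call this with the model picked in pflow's settings, e.g.:
# print(run_internal_prompt("Suggest three nodes for a file-cleanup workflow."))
```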
Later
More powerful workflows
Expanding what workflows can express:
- Conditional branching — if/else logic in workflows
- Parallel execution — run independent nodes concurrently
- Nested workflow support in the planner
- Structured output from LLM nodes (JSON schemas; sketched after this list)
- Export workflows to standalone Python code
- Execution preview before running
- Sandbox runtime for shell commands
- Granular permission boundaries
- Export workflows as self-hosted MCP server packages
- Share automation as installable tools
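On the structured-output item, here is a conceptual sketch of what a schema-constrained LLM node could mean: the node declares a JSON schema, and the model's reply is parsed and validated against it before being handed to downstream nodes. pflow's eventual mechanism (for example, provider-native structured output) may look different; the schema, reply, and validation step here are invented for illustration.

```python
import json
from jsonschema import validate  # pip install jsonschema

# Hypothetical schema an LLM node might declare for its output.
output_schema = {
    "type": "object",
    "properties": {
        "title": {"type": "string"},
        "tags": {"type": "array", "items": {"type": "string"}},
    },
    "required": ["title", "tags"],
}

raw_reply = '{"title": "Weekly report", "tags": ["status", "automation"]}'  # invented model reply

data = json.loads(raw_reply)                    # parse the model's reply
validate(instance=data, schema=output_schema)   # raises if the shape is wrong
print(data["title"], data["tags"])              # safe to hand to downstream nodes
```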
Vision
Long-term ideas we're exploring:
- Discover and install MCP servers automatically
- Community registry for workflows and MCP servers
- Cloud execution for team use cases
- Workflows exposed as remote HTTP services
Get involved
Built by a developer who got tired of watching agents re-think the same tasks.
Questions or ideas? I’d love to hear from you — andreas@pflow.run

