Once pflow is installed and connected to your AI tool, you don’t need to learn anything else. Your agent has everything it needs to build and run workflows.
You don’t need to learn the schema
pflow workflows are JSON files with a specific structure - but you’ll never need to write one manually. Your agent:
- Reads pflow’s instructions automatically
- Knows which nodes are available
- Understands how to connect them
- Handles all the technical details
- Creates, updates and reuses workflows for you
You just describe what you want in natural language. Your agent does the rest.
Your agent guides you when needed
When pflow needs something from you, your agent will tell you exactly what to do.
Need an API key?
Your agent: “pflow needs an Anthropic API key for intelligent discovery. Run this command: pflow settings set-env ANTHROPIC_API_KEY 'your-key'”
Need to connect to an API?
Your agent reads the documentation and builds the integration for you using the http node.
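For example, the agent might try the http node on its own before wiring it into a workflow. The parameter names below are illustrative - your agent reads the node's actual input schema:
pflow registry run http url="https://api.example.com/v1/status" method="GET"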
Need an MCP server?
Your agent helps you find and install it with your permission.
If you’re calling the same API repeatedly, consider having your agent create an MCP server for it - turning one-off requests into reusable tools.
Something unexpected happen?
Your agent diagnoses the issue using pflow’s structured errors and traces. See How debugging works for details.
You don’t need to memorize any commands. Your agent knows them and will prompt you when necessary.
What happens behind the scenes
When you ask your agent to do something:
- Your agent checks for existing workflows - If you’ve done this task before, pflow finds the saved workflow
- Runs it instantly if found - No re-planning, no LLM costs, same reliable result
- Updates an existing workflow if it needs changes - Your agent adds new functionality or parameters
- Builds a new workflow if needed - Your agent creates it once, and pflow saves it for next time
Over time, your workflow library grows. Tasks that used to require agent reasoning become instant commands.
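For example, once your agent has built a log-analysis workflow, rerunning that task is a single command with no planning step:
# Reuses the saved workflow - no agent reasoning, no LLM cost
pflow analyze-logs input=./logs/api.log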
Not everything needs a workflow
Sometimes you just need to run a single tool - send a Slack message, fetch a file, query an API. Your agent can run individual nodes directly without building a workflow:
pflow registry run mcp-slack-SEND_MESSAGE channel="general" text="Done!"
This is useful for:
- One-off tasks - No workflow needed, just run the tool
- Testing - Your agent tests nodes to understand their output before building workflows
Think of nodes as individual building blocks. Your agent can use them standalone or compose them into workflows - whatever fits the task.
Checking your workflow library
To see what workflows have been saved:
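Assuming the workflow subcommand follows the same pattern as describe below, listing looks like this:
pflow workflow list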
To see details about a specific workflow:
pflow workflow describe my-workflow
See workflow commands for more options.
Optional: Running workflows directly
You don’t have to go through your agent. Saved workflows are CLI commands you can run yourself:
# Run a saved workflow
pflow analyze-logs input=./logs/api.log
# Use in a script
cat data.csv | pflow --output-format json process-csv > output.json
# Schedule with cron
0 9 * * * pflow daily-report >> ~/reports/daily.md
This is useful for:
- CI/CD pipelines - Run workflows as build steps
- Cron jobs - Schedule recurring tasks
- Scripts - Chain workflows with other tools
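For example, a script can pipe a workflow’s JSON output into other tools such as jq. The .rows field below is illustrative - the actual output shape depends on your workflow:
# Chain a saved workflow with jq (field name is illustrative)
pflow --output-format json process-csv < data.csv | jq '.rows | length'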
See CLI overview for all options.
Summary
| Task | Who handles it |
| --- | --- |
| Learning pflow commands | Your agent |
| Writing workflow JSON | Your agent |
| Knowing which nodes exist | Your agent |
| Configuring API keys | You (agent tells you the command) |
| Discovering new capabilities | Your agent or you |
| Installing new MCP servers | Your agent or you |
| Running saved workflows | Your agent or you directly |
pflow is designed to stay out of your way. Your agent handles the complexity - you just describe what you want done. Think of pflow as the infrastructure that makes this scalable - turning automation into reusable building blocks that are discoverable, composable, and ready for your agent to use again and again.