When workflows fail, pflow gives your agent structured error data — available fields, types, and fix suggestions. Since agents build and fix workflows, error messages are the primary interface. They say what’s wrong, what’s available instead, and how to fix it.
Your role in debugging is minimal. Most of the time, you don’t need to do anything.
## Your agent handles most debugging
When a workflow fails, your agent receives detailed information about what went wrong:
- **What failed** — which node, what error category
- **What's available** — the fields that *do* exist (not just what's missing)
- **Suggestions** — "did you mean?" recommendations
- **Execution state** — which nodes succeeded before the failure
Your agent uses this to self-correct. When building or fixing workflows, it reads pflow’s built-in instructions — how to interpret errors, inspect trace files, and resolve common issues.
You don’t need to teach your agent how to debug. The guidance is built in.
## What your agent sees
When a workflow fails, errors include the data needed for self-correction:
```json
{
  "errors": [{
    "message": "Node 'fetch' does not output 'msg'",
    "node_id": "process",
    "available_fields": ["result", "result.messages", "result.messages[0].text"],
    "fixable": true
  }]
}
```
Your agent sees that `msg` doesn't exist, but `result.messages` does. It fixes the template and retries. No human intervention needed.
These errors are specific because the error space is finite — pflow knows every node type and every declared output. When agents compose known building blocks instead of writing arbitrary code, errors say “node X doesn’t output Y, did you mean Z?” instead of handing you a stack trace.
## Execution reports
The `--report` flag generates a structured execution report — a directory of readable markdown files, one per node:

```shell
# Report to default location (~/.pflow/reports/{name}/)
pflow my-workflow --report

# Report to a specific directory
pflow my-workflow --report-dir ./report/
```
The report includes:

- `summary.md` — pipeline table with per-node cost, errors with fix suggestions, and anomaly warnings for suspiciously empty outputs
- Per-node files — rendered prompts (what the LLM actually received), full responses, and model/token/cost metadata
- Batch items — per-item files showing individual prompts and responses
- Sub-workflows — nested directories mirroring the workflow structure
Reports are ephemeral — the default location is overwritten each run. The trace files are the durable history. You can regenerate a report from any previous run:
```shell
# Most recent trace
pflow trace report

# A specific past run
pflow trace report ~/.pflow/debug/workflow-trace-my-workflow-20260323-150000.json -o /tmp/old
```
## Comparing runs with git diff
Write reports to a project-local folder. Stage the report, edit a prompt, re-run — `git diff` shows exactly what changed:

```shell
pflow my-workflow --report-dir ./report/
git add report/

# Edit a prompt, re-run
pflow my-workflow --report-dir ./report/
git diff report/
```
Each node gets its own file, so you can diff just the node you changed. With stochastic LLM outputs, a full-report diff is noisy — targeted per-node diffs show whether your prompt change had the intended effect.
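The same targeted comparison works without git, since the per-node files are plain text. A sketch using `difflib` — the file name and prompt contents here are illustrative stand-ins for two versions of one node's report file:

```python
import difflib

# Two versions of one node's rendered prompt, standing in for the
# before/after contents of a per-node report file (text is illustrative).
before = "Summarize the input.\nBe concise.\n"
after = "Summarize the input in three bullet points.\nBe concise.\n"

diff_text = "".join(difflib.unified_diff(
    before.splitlines(keepends=True),
    after.splitlines(keepends=True),
    fromfile="report/process.md (previous run)",
    tofile="report/process.md (current run)",
))
print(diff_text)
```

A per-node diff like this isolates the one prompt you changed, which is the same reason targeted `git diff` paths beat a full-report diff.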
## Trace files

pflow automatically saves detailed execution traces:

- **Location:** `~/.pflow/debug/workflow-trace-*.json`
- **When:** every workflow run (success or failure)
- **Content:** per-node timing, inputs, outputs, template resolutions, errors
Traces are the raw data behind execution reports. Your agent can read these directly when it needs programmatic access, but the `--report` output is usually more useful for debugging.
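As a sketch of that programmatic access — the trace below is a trimmed, hand-written example in the shape described above (per-node status, timing, errors); the exact keys are illustrative assumptions, not pflow's schema:

```python
# In practice the dict would come from json.loads() on a file matching
# ~/.pflow/debug/workflow-trace-*.json; keys here are assumptions.
trace = {
    "workflow": "my-workflow",
    "nodes": [
        {"id": "fetch", "status": "success", "duration_ms": 820, "error": None},
        {"id": "process", "status": "failed", "duration_ms": 12,
         "error": "Node 'fetch' does not output 'msg'"},
    ],
}

def failed_nodes(trace: dict) -> list[dict]:
    """Return only the nodes that errored, for programmatic triage."""
    return [n for n in trace["nodes"] if n["status"] == "failed"]

for node in failed_nodes(trace):
    print(f"{node['id']}: {node['error']}")
```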
Traces are saved automatically. Use `--no-trace` if you want to disable this (the `--report` flag overrides `--no-trace`).
## What only you can fix
Some things require human action. Your agent will tell you when these come up:
If pflow’s discovery features aren’t working:

```shell
pflow settings set-env OPENAI_API_KEY "sk-..."
```
pflow auto-detects available providers. You can optionally override the model:

```shell
pflow settings llm set-default gpt-5.2
```
Your agent can’t configure API keys for security reasons, but it will tell you exactly what command to run.
### MCP server issues
If your agent reports MCP tools aren’t available:
```shell
# Check what servers are configured
pflow mcp list

# Force re-sync if needed
pflow mcp sync --all
```
See adding MCP servers for setup details.
### Disk cleanup
Trace files accumulate over time. pflow doesn’t auto-delete them. If disk space becomes an issue:
```shell
# Remove old traces (check contents first if needed)
rm ~/.pflow/debug/workflow-trace-*.json
```
## Summary
| Situation | Who handles it |
|---|---|
| Workflow fails with fixable error | Your agent (self-corrects) |
| Agent needs more context | Your agent (reads trace files) |
| API key not configured | You (agent tells you the command) |
| MCP server not connected | You (agent guides you) |
| Disk space from traces | You (manual cleanup) |
pflow is built for self-correction. Your agent has the tools and knowledge to debug most issues — you only step in for setup tasks that require human access.