For the curious: your AI agent configures batch processing when needed. This page explains what happens when you ask it to process many items (files, API results, etc.) and what to expect during execution.
## When batch processing happens

Your agent uses batch processing when tasks involve:

- Processing each file in a directory listing
- Analyzing each item from an API response
- Running the same LLM prompt on multiple inputs
- Transforming each element in an array
## How it works

A `batch` configuration is added to a node. For example, the agent can batch a `classify` node so it runs once for each issue in a list. The `as: "issue"` setting creates a template variable `${issue}` that changes with each iteration.
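A sketch of what that configuration might look like. The `batch` fields match the options table below, but the surrounding node syntax (node name, `type`, `prompt`) is illustrative rather than exact pflow syntax:

```yaml
classify:
  type: llm
  prompt: "Classify this issue: ${issue.title}"
  batch:
    items: ${fetch_issues.issues}  # array produced by an earlier node
    as: issue                      # creates ${issue} for each iteration
```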
## Configuration options
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| `items` | template | Yes | - | Array to iterate over (usually `${previous_node.key}`) |
| `as` | string | Yes | - | Name for the item variable (e.g., `"item"`, `"file"`, `"issue"`) |
| `parallel` | bool | No | `false` | Run items concurrently instead of sequentially |
| `max_concurrent` | int | No | `10` | Maximum parallel items (1-100) |
| `error_handling` | string | No | `"fail_fast"` | `"fail_fast"` or `"continue"` |
| `max_retries` | int | No | `0` | Retry failed items this many times |
| `retry_wait` | int | No | `1` | Seconds to wait between retries |
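As an illustration, a batch block with every optional field overridden might look like this (values are examples, not recommendations):

```yaml
batch:
  items: ${list_files.files}
  as: file
  parallel: true
  max_concurrent: 5         # stay below the API's concurrency limit
  error_handling: continue  # collect failures instead of stopping
  max_retries: 2
  retry_wait: 5             # seconds between retry attempts
```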
## Sequential vs parallel

### Sequential (default)

Items are processed one at a time, in order. Use sequential mode when:

- Order matters
- Rate limits are strict
- Resources are limited
### Parallel

Multiple items are processed concurrently. Use parallel mode when:

- Items are independent
- Speed is important
- The API/LLM can handle concurrent requests
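Switching to parallel is a change on the batch block alone (sketch; the surrounding node is omitted):

```yaml
batch:
  items: ${fetch.items}
  as: item
  parallel: true
  max_concurrent: 10  # the default cap; tune to what the API tolerates
```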
## Error handling

### Fail fast (default)

Execution stops immediately on the first error. Use fail-fast when:

- Any failure means the whole task is invalid
- Errors should be fixed and the run repeated from scratch
### Continue on errors

All items are processed, with errors collected. Use continue when:

- Partial results are useful
- Some failures are expected
- You want to see all errors before fixing anything
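For example, a batch block that collects failures instead of stopping might look like this (sketch):

```yaml
batch:
  items: ${scan.documents}
  as: doc
  error_handling: continue  # process every item; failures are collected in errors
```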
### Retries

Failed items can be retried automatically. Retries help with:

- Transient API errors
- Rate limit recovery
- Network timeouts
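A batch block with retries enabled might look like this (values illustrative):

```yaml
batch:
  items: ${fetch.urls}
  as: url
  max_retries: 3  # retry each failed item up to 3 times
  retry_wait: 2   # wait 2 seconds between attempts
```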
## What you’ll see

During batch execution, pflow shows real-time progress as each item runs. Failed items are marked with ✗ and summarized at the end.
## Output structure

Batch nodes write a special output structure to the shared store. `results` contains only successful items; each entry pairs `item` (the original input) with the inner node's outputs. With `error_handling: continue`, failed items are excluded from `results` and appear only in `errors`. `count` is the total number of items attempted, `success_count` equals `len(results)`, and `error_count` equals `len(errors)`.

Inside a batch node, `${__index__}` gives the 0-based position of the current item. Index-based access to results (like `${node.results[0].field}`) requires `fail_fast` mode (the default). With `error_handling: continue`, use iteration (`items: ${node.results}`) instead; the validator blocks index access because filtered results don't preserve original positions.
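Assuming a batch node named `classify` run with `error_handling: continue`, the shared-store entry might be shaped roughly like this (the inner keys `response` and `error` are illustrative; `results`, `errors`, and the counts follow the description above):

```yaml
classify:
  results:  # successful items only
    - item: { id: 1, title: "Crash on startup" }  # the original input
      response: "bug"                             # the inner node's output
  errors:   # failed items, when error_handling is continue
    - item: { id: 2, title: "Slow search" }
      error: "rate limit exceeded"
  count: 2          # total items attempted
  success_count: 1  # len(results)
  error_count: 1    # len(errors)
```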
Subsequent nodes can access these results through template variables.
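For example, a follow-up node could iterate over the successful results (node names hypothetical):

```yaml
summarize:
  type: llm
  prompt: "Write one line about ${result.item.title}: ${result.response}"
  batch:
    items: ${classify.results}  # iterate rather than index into results
    as: result
```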
## Examples
### Process files from directory listing
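A sketch of this pattern, assuming an earlier shell node exposes the directory listing as an array (node names and output keys are illustrative):

```yaml
list_files:
  type: shell
  command: ls data/

summarize:
  type: llm
  prompt: "Summarize the contents of ${file}"
  batch:
    items: ${list_files.files}  # assumed array output of the shell node
    as: file
```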
### API pagination pattern
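One way to sketch this: batch an HTTP node over a prepared list of page numbers (the node fields and the way the page list is produced are illustrative):

```yaml
fetch_page:
  type: http
  url: https://api.example.com/items?page=${page}
  batch:
    items: ${make_pages.numbers}  # e.g. [1, 2, 3], prepared by an earlier node
    as: page
```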
### Fault-tolerant LLM processing
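Combining parallelism, continue-on-errors, and retries might look like this (node syntax illustrative):

```yaml
classify:
  type: llm
  prompt: "Classify this ticket: ${ticket.subject}"
  batch:
    items: ${fetch_tickets.tickets}
    as: ticket
    parallel: true
    max_concurrent: 5
    error_handling: continue  # keep going past individual failures
    max_retries: 2            # absorb transient API errors
    retry_wait: 5
```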
### Per-item configuration

`model` and `reasoning_effort` change per item while the prompt template stays the same.
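One way this could look: each item carries its own settings, which the node references alongside a fixed prompt template (the node-level field names are illustrative):

```yaml
answer:
  type: llm
  model: ${task.model}                        # changes per item
  reasoning_effort: ${task.reasoning_effort}  # changes per item
  prompt: "Answer: ${task.question}"          # template stays the same
  batch:
    items: ${plan.tasks}  # array of objects: question, model, reasoning_effort
    as: task
```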
## How your agent chooses settings

For LLM calls, your agent typically:

- Starts with `max_concurrent: 5`
- Monitors rate limits and costs
- Uses `retry_wait` for rate limit recovery

For API calls, it typically:

- Checks API rate limits in documentation
- Uses `max_concurrent` to respect limits
- Adds retries for transient errors

For file operations, it typically:

- Uses parallel processing for reads (safe)
- Uses sequential mode for writes (avoids race conditions)
- Uses sequential mode when files depend on each other
## Limitations
- No nested batch - You can’t batch a node that’s already in a batch
- No branching within batch - Each item follows the same code path
- Memory usage - All results are held in memory until batch completes
## Related

- Template variables - Understanding `${item}` variables
- Shell node - Often used to prepare arrays
- HTTP node - API pagination patterns
- LLM node - Batch prompt processing

