**Agent commands.** Your AI agent uses this node in workflows. You don't configure it directly.
The shell node runs commands with pipes, redirects, and standard shell features. It blocks dangerous patterns (fork bombs, recursive deletes) before execution, but otherwise commands run with full system access.
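For example, a single node can chain standard tools through pipes (a minimal sketch; the log path is hypothetical):

### count_errors

Count the five most frequent ERROR lines in the application log.

- type: shell

```shell command
grep ERROR /var/log/app.log | sort | uniq -c | sort -rn | head -5
```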

## Parameters

| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| `command` | str | Yes | - | Shell command to execute |
| `stdin` | any | No | - | Input data for the command |
| `cwd` | str | No | Current dir | Working directory |
| `env` | dict | No | `{}` | Additional environment variables |
| `timeout` | int | No | 30 | Maximum execution time in seconds |
| `ignore_errors` | bool | No | `false` | Continue workflow on non-zero exit |
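A sketch combining several of these parameters (the node names, directory, and variables are hypothetical):

### summarize

Count items in the fetched response, run from the project directory.

- type: shell
- stdin: ${fetch.response}
- cwd: /path/to/project
- env:
    LC_ALL: C
- timeout: 60

```shell command
jq '.items | length'
```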

## Output

| Key | Type | Description |
|---|---|---|
| `stdout` | str | Command output (UTF-8, or base64 if binary) |
| `stdout_is_binary` | bool | `true` if stdout is base64-encoded |
| `stderr` | str | Error output (UTF-8, or base64 if binary) |
| `stderr_is_binary` | bool | `true` if stderr is base64-encoded |
| `exit_code` | int | Exit code (0 = success, -1 = timeout, -2 = execution failure) |
| `error` | str | Error message (only on timeout/failure) |
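Downstream nodes can reference these keys with the template syntax used elsewhere on this page, assuming shell outputs are addressable the same way (node names are hypothetical):

### count

Count files in the temp directory.

- type: shell

```shell command
ls /tmp | wc -l
```

### report

Format the count from the previous node.

- type: shell
- stdin: ${count.stdout}

```shell command
awk '{print "File count: " $1}'
```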

## Using stdin for data

Always use stdin for data, not command interpolation. Shell escaping breaks on special characters in JSON, and stdin handles any data type safely.
**Correct** - data via stdin:
### process

Process the API response with jq.

- type: shell
- stdin: ${api.response}

```shell command
jq -r '.data.name'
```
**Wrong** - data in command (will break on special characters):
### process

Process the API response with jq.

- type: shell

```shell command
echo '${api.response}' | jq
```

## stdin type handling

| Input type | Conversion |
|---|---|
| str | Used as-is |
| dict / list | Serialized to JSON |
| int / float | Converted to string |
| bool | Lowercase string (`true` / `false`) |
| bytes | Decoded as UTF-8 (fallback: latin-1) |
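So a dict produced by an upstream node arrives on stdin as JSON text, ready for jq (a sketch, assuming `${fetch.response}` holds a dict):

### keys

List the top-level keys of the response object.

- type: shell
- stdin: ${fetch.response}

```shell command
jq -r 'keys[]'
```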

## Validation and error handling

**Your agent handles this.** pflow validates commands when the workflow is created, not at runtime. If structured data ends up in a command string, the error message tells your agent exactly what to move to stdin and why.
The shell node validates that dicts and lists aren’t embedded directly in command strings (which would break shell parsing). Data should go through stdin instead:
### process

Process the API response with jq.

- type: shell
- stdin: ${api.response}

```shell command
jq -r '.data.name'
```
When a workflow is created, pflow checks if command templates contain dict or list variables. If found, you’ll see an error like:
````text
Shell node 'process': cannot use ${api.response} (type: object) in command parameter.

PROBLEM: Object data embedded in shell commands breaks shell parsing. Dict/list
data contains special characters (quotes, braces, spaces) that break command syntax.

FIX: Move data to stdin, keep command simple:
  - stdin: ${api.response}
  ```shell command
  jq '.field'
  ```
````
This validation happens at workflow creation time (compile-time), not during execution, so you get immediate feedback.

**Why this matters:** Shell parsers expect text, and JSON data contains special characters (`{`, `}`, `"`, spaces) that have meaning to the shell. Even with careful quoting, it's fragile. The stdin approach is safer and more reliable.

## Security

Commands run directly on your system without sandboxing. While dangerous patterns are blocked, the shell node has full access to your filesystem and network. Sandboxed execution is planned for a future release.
**You're in control.** Your agent asks before running workflows, and you can inspect the workflow to see exactly what commands will execute. If you need stronger restrictions, you can disable shell entirely with `pflow settings deny shell`, but this significantly limits pflow's capabilities since shell is often used for data processing and filtering.
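For example:

```shell
pflow settings deny shell
```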

### Blocked patterns

These commands are rejected immediately with an error:
- `rm -rf /` and variants (recursive system deletion)
- `dd if=/dev/zero of=/dev/sda` (device operations)
- `:(){:|:&};:` (fork bombs)
- `chmod -R 777 /` (dangerous permissions)
- `sudo rm -rf /` (privileged dangerous commands)

### Warning patterns

These trigger warnings but execute unless `PFLOW_SHELL_STRICT=true`:

- `sudo`, `su -`
- `shutdown`, `reboot`, `halt`
- `systemctl poweroff`
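A minimal sketch of turning strict mode on, assuming pflow picks the variable up from the environment it runs in:

```shell
# Assumption: pflow reads PFLOW_SHELL_STRICT from its environment;
# with this set, warning patterns are rejected instead of warned about.
export PFLOW_SHELL_STRICT=true
```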

## Smart error handling

Some commands return non-zero exit codes for valid “not found” results. The shell node treats these as success:
| Pattern | Exit code | Reason |
|---|---|---|
| `ls *.txt` (no matches) | 1 | Empty glob is valid |
| `grep pattern file` | 1 | Pattern not found is valid |
| `which nonexistent` | 1 | Command check |
| `command -v foo` | 1 | Existence check |
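So a filtering step that matches nothing still succeeds with empty output instead of failing the workflow, no `ignore_errors` needed (a sketch; the upstream node and pattern are illustrative):

### todos

Find TODO markers in the fetched notes, if any.

- type: shell
- stdin: ${fetch.response}

```shell command
grep TODO
```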

## Examples

### Basic command

### list

List files in the temp directory.

- type: shell

```shell command
ls -la /tmp
```

### Process JSON with jq

### fetch

Fetch data from the API.

- type: http
- url: https://api.example.com/data

### extract

Extract item names from the response.

- type: shell
- stdin: ${fetch.response}

```shell command
jq -r '.items[].name'
```

### With environment variables

### deploy

Run the deployment script.

- type: shell
- env:
    ENV: production
    DEBUG: "false"
- timeout: 120

```shell command
deploy.sh
```

### Working directory

### build

Build the project.

- type: shell
- cwd: /path/to/project

```shell command
npm run build
```

### Ignoring errors

### cleanup

Remove temporary log files.

- type: shell
- ignore_errors: true

```shell command
rm -f temp/*.log
```

## Error handling

| Exit code | Meaning |
|---|---|
| 0 | Success |
| 1+ | Command-specific error |
| -1 | Timeout |
| -2 | Execution failure |
The node returns the error action on non-zero exit (unless `ignore_errors` is true or it's an auto-handled pattern like grep).

pflow's template variables handle most data access: you can use `${api.response.items[0].name}` to access nested fields and array elements directly, without shell commands. Shell is needed when you need to:
- Iterate over arrays: `jq '.items[].name'` (templates can't do wildcards)
- Filter or transform: `jq 'select(.active)'`, `sort`, `uniq`
- Compute values: `wc -l`, arithmetic
Common Unix tools are built into macOS: `grep`, `awk`, `cut`, `sort`, `head`, `tail`, `curl`. For JSON processing, install `jq`:

```shell
brew install jq
```
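Putting the filter-and-transform case together, a sketch assuming `${fetch.response}` contains an `items` array whose elements have `active` and `name` fields (a hypothetical shape):

### active_names

List the names of active items.

- type: shell
- stdin: ${fetch.response}

```shell command
jq -r '.items[] | select(.active) | .name'
```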