What Does Tool Call Parallelism Look Like?
March 2026
When a language model emits multiple tool calls in a single turn, what happens? Do they run one after another, or all at once? And who decides?
The Question
Parallel execution seems obvious: independent operations shouldn't wait on each other. But tools have side effects. A shell command might change state that a subsequent read depends on. Two writes to the same file could race. Even reads aren't always safe — what if one tool call modifies something another reads?
The question isn't whether to parallelize, but how to know when it's safe.
The Answer
The system uses a three-phase execution model with a parallelism policy that categorizes every tool by its safety characteristics.
Phase 1: Emit Start Events
Before any execution begins, the system emits tool_start events for every call in source order:
// Phase 1: emit tool_start for every call before any execution begins
const batchStart = Date.now();
for (const tc of toolCalls) {
  state.events.push({
    type: 'tool_start',
    timestamp: batchStart,
    data: { toolCallId: tc.id, name: tc.name },
  });
}
This creates a consistent event trace — all tools "start" at the same logical moment, before execution order matters.
Phase 2: Execute — Parallel When Safe, Sequential Otherwise
The core decision lives in canRunInParallel. If the check says yes, all calls execute simultaneously via Promise.all. If no, they serialize via reduce:
const timedResults: TimedResult[] = runParallel
  ? await Promise.all(toolCalls.map(executeOne))
  : await toolCalls.reduce(
      async (acc, tc) => [...await acc, await executeOne(tc)],
      Promise.resolve([] as TimedResult[]),
    );
But what determines runParallel? That's the parallelism policy.
The Safety Categories
Every tool has metadata that includes a parallelSafety field with three possible values:
always — These tools are pure reads with no side effects. fs_read, search_grep, git_log, web_search, memory_query. They can run alongside anything. The result of one cannot affect the result of another.
never — These tools have side effects that escape the tool system. shell_run affects global process state. git_commit changes repository history. memory_write modifies persistent storage. git_branch switches branches. These must run alone.
path-scoped — These tools mutate files, but the mutation is keyed by a path argument. fs_write, fs_delete, fs_move, fs_patch. Multiple path-scoped calls are safe if they target different files. But two writes to the same file could race.
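The three categories map naturally onto a string union. A minimal sketch of the metadata shape (the type names ParallelSafety, TrustTier, and ToolMeta are assumptions, not confirmed by the source):

```typescript
// Assumed shape of per-tool metadata; names are illustrative.
type ParallelSafety = 'always' | 'never' | 'path-scoped';
type TrustTier = 'trusted' | 'standard';

interface ToolMeta {
  parallelSafety: ParallelSafety;
  trustTier: TrustTier;
}

// Example entry: fs_write mutates files, keyed by its path argument.
const fsWriteMeta: ToolMeta = { parallelSafety: 'path-scoped', trustTier: 'standard' };
```

Using a closed union means the runtime can exhaustively handle every category; a new safety class would be a deliberate type change, not a stray string.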
Here's how the registry defines the defaults:
const DEFAULT_META: Record<string, ToolMeta> = {
  // Fs tools
  fs_read: { parallelSafety: 'always', trustTier: 'trusted' },
  fs_write: { parallelSafety: 'path-scoped', trustTier: 'standard' },
  fs_delete: { parallelSafety: 'path-scoped', trustTier: 'standard' },
  // Shell
  shell_run: { parallelSafety: 'never', trustTier: 'standard' },
  // Memory
  memory_write: { parallelSafety: 'never', trustTier: 'trusted' },
  memory_query: { parallelSafety: 'always', trustTier: 'trusted' },
  // Git
  git_commit: { parallelSafety: 'never', trustTier: 'standard' },
};
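The check logic below calls a getMeta lookup. Here is a plausible sketch of it; the fallback behavior for unregistered tools is an assumption, but defaulting to 'never' is the conservative choice, since an unknown tool then always serializes its batch:

```typescript
type ParallelSafety = 'always' | 'never' | 'path-scoped';
interface ToolMeta { parallelSafety: ParallelSafety; trustTier: string }

const DEFAULT_META: Record<string, ToolMeta> = {
  fs_read: { parallelSafety: 'always', trustTier: 'trusted' },
  shell_run: { parallelSafety: 'never', trustTier: 'standard' },
};

// Sketch: look up a tool's metadata, falling back to the most
// conservative classification when the tool is not registered.
// (The fallback policy is an assumption, not from the source.)
function getMeta(name: string): ToolMeta {
  return DEFAULT_META[name] ?? { parallelSafety: 'never', trustTier: 'standard' };
}
```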
The Check Logic
The canRunInParallel function implements a simple rule:
- If there's only one call, always allow parallel (trivially true).
- If any call has parallelSafety: 'never', reject parallel and serialize the whole batch.
- For path-scoped tools, check whether any two calls target the same path. If so, reject parallel.
- Otherwise, allow parallel.
canRunInParallel(calls: Array<{ name: string; args: Record<string, unknown> }>): boolean {
  if (calls.length <= 1) return true;
  for (const call of calls) {
    const meta = this.getMeta(call.name);
    if (meta.parallelSafety === 'never') return false;
  }
  // Check path-scoped conflicts
  const pathScoped = calls.filter(c => this.getMeta(c.name).parallelSafety === 'path-scoped');
  if (pathScoped.length <= 1) return true;
  const paths = pathScoped.map(c => String(c.args['path'] ?? c.args['file'] ?? ''));
  const uniquePaths = new Set(paths);
  return uniquePaths.size === paths.length; // No duplicates = safe
}
The beauty is the simplicity: one pass through the calls, one decision, no complex conflict graphs.
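To see the rule in action, here is a standalone sketch of the check applied to a few example batches. The free-function form and the inline metadata table are simplifications for illustration; in the source, the check is a registry method:

```typescript
type ParallelSafety = 'always' | 'never' | 'path-scoped';

// Simplified metadata table for the example.
const META: Record<string, ParallelSafety> = {
  fs_read: 'always',
  fs_write: 'path-scoped',
  shell_run: 'never',
};

interface Call { name: string; args: Record<string, unknown> }

function canRunInParallel(calls: Call[]): boolean {
  if (calls.length <= 1) return true;
  if (calls.some(c => META[c.name] === 'never')) return false;
  const pathScoped = calls.filter(c => META[c.name] === 'path-scoped');
  if (pathScoped.length <= 1) return true;
  const paths = pathScoped.map(c => String(c.args['path'] ?? c.args['file'] ?? ''));
  return new Set(paths).size === paths.length; // no duplicate targets = safe
}

// Two pure reads: safe to parallelize.
const reads = canRunInParallel([
  { name: 'fs_read', args: { path: 'a.ts' } },
  { name: 'fs_read', args: { path: 'b.ts' } },
]); // true

// A shell command poisons the whole batch.
const withShell = canRunInParallel([
  { name: 'fs_read', args: { path: 'a.ts' } },
  { name: 'shell_run', args: { cmd: 'ls' } },
]); // false

// Two writes to the same file: serialize.
const samePath = canRunInParallel([
  { name: 'fs_write', args: { path: 'a.ts' } },
  { name: 'fs_write', args: { path: 'a.ts' } },
]); // false
```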
Phase 3: Finalize in Source Order
Regardless of execution order, results appear in the order the LLM emitted the calls:
for (let i = 0; i < toolCalls.length; i++) {
  const tc = toolCalls[i];
  const { result, startTime, endTime } = timedResults[i];
  // ... emit tool_end, build executions, append messages
}
The LLM receives a list of tool results that matches the list of tool calls it made. Order is preserved.
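The guarantee works because Promise.all returns results indexed by input position, so the finalize loop can zip calls and results positionally. A minimal sketch (the message shape here is a hypothetical illustration, not the source's actual types):

```typescript
interface ToolCall { id: string; name: string }
interface TimedResult { result: string; startTime: number; endTime: number }

// Hypothetical message shape pairing each result with its originating call.
interface ToolResultMessage { toolCallId: string; content: string }

function finalize(toolCalls: ToolCall[], timedResults: TimedResult[]): ToolResultMessage[] {
  const messages: ToolResultMessage[] = [];
  for (let i = 0; i < toolCalls.length; i++) {
    // timedResults[i] corresponds to toolCalls[i] regardless of
    // which call actually finished first during execution.
    messages.push({ toolCallId: toolCalls[i].id, content: timedResults[i].result });
  }
  return messages;
}

const messages = finalize(
  [{ id: 'c1', name: 'fs_read' }, { id: 'c2', name: 'search_grep' }],
  [
    { result: 'file contents', startTime: 0, endTime: 5 },
    { result: 'grep matches', startTime: 0, endTime: 2 }, // finished earlier
  ],
);
// messages[0] belongs to c1, messages[1] to c2: source order preserved.
```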
What This Reveals
This is a declarative safety model. Each tool declares its behavior, and the runtime enforces a single rule. The alternative would be per-tool conflict detection: "does this tool conflict with that tool?" which would require O(n²) pairwise checks and tool-specific knowledge scattered throughout the codebase.
Instead, tools self-classify. fs_read knows it's a read. shell_run knows it's unsafe to parallelize. fs_write knows it's path-scoped. The runtime asks a single question: given this batch of calls, based on their self-classification, is parallel execution safe?
The result is a clean separation: tools own their metadata, the runtime owns the decision logic, and the execution model stays simple — emit, execute (parallel or serial), finalize.