# AI Tool Calling

Enable the LLM to call workflow tools (function calling). The LLM decides which tools to invoke, and `DelegateToolCalls` executes them.
## Create a Tool for the LLM

Tools exposed to the LLM need a description so the LLM knows when to use them:
```typescript
import { z } from 'zod';
import { BaseTool, Tool, ToolResult } from '@loopstack/common';

@Tool({
  uiConfig: {
    description: 'Retrieve weather information.',
  },
  schema: z.object({
    location: z.string(),
  }),
})
export class GetWeather extends BaseTool {
  async call(_args: unknown): Promise<ToolResult> {
    return { type: 'text', data: 'Mostly sunny, 14C, rain in the afternoon.' };
  }
}
```
## Tool Calling Workflow

```typescript
import {
  ClaudeGenerateText,
  ClaudeGenerateTextResult,
  ClaudeMessageDocument,
  DelegateToolCalls,
  DelegateToolCallsResult,
} from '@loopstack/claude-module';
import { BaseWorkflow, Final, Guard, Initial, InjectTool, ToolResult, Transition, Workflow } from '@loopstack/common';

@Workflow({ uiConfig: __dirname + '/tool-call.ui.yaml' })
export class ToolCallWorkflow extends BaseWorkflow {
  @InjectTool() claudeGenerateText: ClaudeGenerateText;
  @InjectTool() delegateToolCalls: DelegateToolCalls;
  @InjectTool() getWeather: GetWeather;

  llmResult?: ClaudeGenerateTextResult;
  delegateResult?: DelegateToolCallsResult;

  @Initial({ to: 'ready' })
  async setup() {
    await this.repository.save(ClaudeMessageDocument, {
      role: 'user',
      content: 'How is the weather in Berlin?',
    });
  }

  @Transition({ from: 'ready', to: 'prompt_executed' })
  async llmTurn() {
    const result: ToolResult<ClaudeGenerateTextResult> = await this.claudeGenerateText.call({
      claude: { model: 'claude-sonnet-4-6' },
      messagesSearchTag: 'message',
      tools: ['getWeather'],
    });
    this.llmResult = result.data;
  }

  @Transition({ from: 'prompt_executed', to: 'awaiting_tools', priority: 10 })
  @Guard('hasToolCalls')
  async executeToolCalls() {
    const result: ToolResult<DelegateToolCallsResult> = await this.delegateToolCalls.call({
      message: this.llmResult!,
      document: ClaudeMessageDocument,
    });
    this.delegateResult = result.data;
  }

  hasToolCalls() {
    return this.llmResult?.stop_reason === 'tool_use';
  }

  @Transition({ from: 'awaiting_tools', to: 'ready' })
  @Guard('allToolsComplete')
  async toolsComplete() {}

  allToolsComplete() {
    return this.delegateResult?.allCompleted;
  }

  @Final({ from: 'prompt_executed' })
  async respond() {
    await this.repository.save(ClaudeMessageDocument, this.llmResult!, {
      id: this.llmResult!.id,
    });
  }
}
```
## How the Loop Works

```
setup → llmTurn → [hasToolCalls?]
                    ├─ yes → executeToolCalls → toolsComplete → llmTurn (loop)
                    └─ no  → respond (done)
```

- `claudeGenerateText` is called with `tools: ['getWeather']`
- If the LLM returns `stop_reason: 'tool_use'`, the guard routes to `executeToolCalls`
- `delegateToolCalls` executes the requested tools and stores results
- The loop continues back to the LLM
- When no more tool calls are needed, the fallback `@Final` fires
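Stripped of decorators and state names, the machine above is a plain loop: call the LLM, dispatch any requested tool call, append the result to the history, and stop when the model answers directly. A hedged framework-free sketch with a stubbed LLM (the message and tool shapes are simplifications for illustration, not the Claude API):

```typescript
type Turn = { stop_reason: 'tool_use' | 'end_turn'; text: string; toolName?: string };

// Stub LLM: requests the weather tool once, then answers.
function fakeLlm(history: string[]): Turn {
  return history.some((m) => m.startsWith('tool:'))
    ? { stop_reason: 'end_turn', text: 'It is mostly sunny in Berlin.' }
    : { stop_reason: 'tool_use', text: '', toolName: 'getWeather' };
}

// Tool registry keyed by name, like the `tools: ['getWeather']` array.
const tools: Record<string, () => string> = {
  getWeather: () => 'Mostly sunny, 14C.',
};

function runLoop(prompt: string): string {
  const history = [prompt];
  for (let i = 0; i < 5; i++) {               // safety bound on iterations
    const turn = fakeLlm(history);            // llmTurn
    if (turn.stop_reason === 'tool_use') {    // hasToolCalls guard
      const result = tools[turn.toolName!](); // executeToolCalls
      history.push(`tool:${result}`);         // toolsComplete → back to 'ready'
      continue;
    }
    return turn.text;                         // respond (the @Final fallback)
  }
  throw new Error('loop did not terminate');
}
```

`runLoop('How is the weather in Berlin?')` takes one tool round-trip before the stub answers; the iteration cap plays the role of a runaway-loop safeguard.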
## Key Concepts

- `tools` array — lists the tool property names the LLM can call (must match the `@InjectTool()` names)
- `delegateToolCalls` — executes tool-call parts from the LLM response
- `stop_reason === 'tool_use'` — the LLM wants to call a tool
- `allCompleted` — all delegated tool calls have finished
- `@Guard` + `priority` — routes between tool calling and the final response
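One way to picture the guard/priority routing from `prompt_executed`: collect the outgoing transitions, try the highest priority first, and take the first one whose guard passes, with the unguarded `@Final` acting as the fallback. A simplified illustration (the transition shape and selection rule here are assumptions for the sketch, not the engine's actual implementation):

```typescript
type Transition = { to: string; priority: number; guard?: () => boolean };

// Take the highest-priority transition whose guard passes;
// an unguarded transition always passes and serves as the fallback.
function nextState(transitions: Transition[]): string | undefined {
  const ordered = [...transitions].sort((a, b) => b.priority - a.priority);
  return ordered.find((t) => !t.guard || t.guard())?.to;
}

// From 'prompt_executed': the guarded tool-call route (priority 10)
// beats the final route only while the guard passes.
const fromPromptExecuted = (hasToolCalls: boolean): Transition[] => [
  { to: 'awaiting_tools', priority: 10, guard: () => hasToolCalls },
  { to: 'done', priority: 0 },
];
```

With the guard passing, `nextState(fromPromptExecuted(true))` routes to `'awaiting_tools'`; once it fails, the fallback routes to `'done'`, mirroring the `@Final` transition.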
## Registry References

- `tool-call-example-workflow` — Complete tool calling loop with the `GetWeather` tool, guard-based routing, and the delegate pattern