
AI Tool Calling

This feature lets the LLM call workflow tools (function calling): the LLM decides which tools to invoke, and DelegateToolCalls executes them and feeds the results back into the conversation.

Create a Tool for the LLM

Tools exposed to the LLM need a description so the LLM knows when to use them:

```typescript
import { z } from 'zod';
import { BaseTool, Tool, ToolResult } from '@loopstack/common';

@Tool({
  uiConfig: {
    description: 'Retrieve weather information.',
  },
  schema: z.object({
    location: z.string(),
  }),
})
export class GetWeather extends BaseTool {
  async call(_args: unknown): Promise<ToolResult> {
    return { type: 'text', data: 'Mostly sunny, 14C, rain in the afternoon.' };
  }
}
```
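The description and schema above are what the model ultimately sees. As a rough sketch, they plausibly map to a tool definition in the shape Anthropic's Messages API expects (`name`, `description`, `input_schema` as JSON Schema); the property name `getWeather` and the exact conversion loopstack performs are assumptions here, not documented internals:

```typescript
// Hypothetical wire format: how @Tool metadata could map to an
// Anthropic Messages API tool definition (name, description, JSON Schema).
interface WireToolDefinition {
  name: string;
  description: string;
  input_schema: {
    type: 'object';
    properties: Record<string, unknown>;
    required: string[];
  };
}

// Illustrative mapping for the GetWeather tool above; the name and the
// zod-to-JSON-Schema conversion are assumptions, not loopstack output.
const getWeatherDefinition: WireToolDefinition = {
  name: 'getWeather',
  description: 'Retrieve weather information.',
  input_schema: {
    type: 'object',
    properties: { location: { type: 'string' } },
    required: ['location'],
  },
};
```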

Tool Calling Workflow

```typescript
import {
  ClaudeGenerateText,
  ClaudeGenerateTextResult,
  ClaudeMessageDocument,
  DelegateToolCalls,
  DelegateToolCallsResult,
} from '@loopstack/claude-module';
import {
  BaseWorkflow,
  Final,
  Guard,
  Initial,
  InjectTool,
  ToolResult,
  Transition,
  Workflow,
} from '@loopstack/common';
// GetWeather is the tool defined above; adjust the import path to where you saved it.
import { GetWeather } from './get-weather.tool';

@Workflow({ uiConfig: __dirname + '/tool-call.ui.yaml' })
export class ToolCallWorkflow extends BaseWorkflow {
  @InjectTool() claudeGenerateText: ClaudeGenerateText;
  @InjectTool() delegateToolCalls: DelegateToolCalls;
  @InjectTool() getWeather: GetWeather;

  llmResult?: ClaudeGenerateTextResult;
  delegateResult?: DelegateToolCallsResult;

  @Initial({ to: 'ready' })
  async setup() {
    await this.repository.save(ClaudeMessageDocument, {
      role: 'user',
      content: 'How is the weather in Berlin?',
    });
  }

  @Transition({ from: 'ready', to: 'prompt_executed' })
  async llmTurn() {
    const result: ToolResult<ClaudeGenerateTextResult> = await this.claudeGenerateText.call({
      claude: { model: 'claude-sonnet-4-6' },
      messagesSearchTag: 'message',
      tools: ['getWeather'],
    });
    this.llmResult = result.data;
  }

  @Transition({ from: 'prompt_executed', to: 'awaiting_tools', priority: 10 })
  @Guard('hasToolCalls')
  async executeToolCalls() {
    const result: ToolResult<DelegateToolCallsResult> = await this.delegateToolCalls.call({
      message: this.llmResult!,
      document: ClaudeMessageDocument,
    });
    this.delegateResult = result.data;
  }

  hasToolCalls() {
    return this.llmResult?.stop_reason === 'tool_use';
  }

  @Transition({ from: 'awaiting_tools', to: 'ready' })
  @Guard('allToolsComplete')
  async toolsComplete() {}

  allToolsComplete() {
    return this.delegateResult?.allCompleted;
  }

  @Final({ from: 'prompt_executed' })
  async respond() {
    await this.repository.save(ClaudeMessageDocument, this.llmResult!, {
      id: this.llmResult!.id,
    });
  }
}
```

How the Loop Works

```
setup → llmTurn → [hasToolCalls?]
  ├─ yes → executeToolCalls → toolsComplete → llmTurn (loop)
  └─ no  → respond (done)
```
  1. claudeGenerateText is called with tools: ['getWeather']
  2. If the LLM returns stop_reason: 'tool_use', the guard routes to executeToolCalls
  3. delegateToolCalls executes the requested tools and stores results
  4. The loop continues back to the LLM
  5. When no more tool calls are needed, the fallback @Final fires
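The steps above can be sketched framework-free. The following is a minimal simulation, not loopstack code: a scripted mock model requests one tool call on its first turn, the loop executes the tool and goes back to the model, and the second turn ends without tool calls, which corresponds to the `@Final` transition firing:

```typescript
// Framework-free sketch of the tool-calling loop. The mock model,
// transcript, and tool registry are illustrative stand-ins, not loopstack APIs.
type MockTurn = { stop_reason: 'tool_use' | 'end_turn'; text: string; toolName?: string };

const toolRegistry: Record<string, (args: { location: string }) => string> = {
  getWeather: () => 'Mostly sunny, 14C, rain in the afternoon.',
};

// Scripted model: first turn requests a tool, second turn answers.
const scriptedTurns: MockTurn[] = [
  { stop_reason: 'tool_use', text: '', toolName: 'getWeather' },
  { stop_reason: 'end_turn', text: 'It is mostly sunny in Berlin, around 14C.' },
];

function runToolCallLoop(): string {
  const transcript: string[] = ['user: How is the weather in Berlin?'];
  for (const turn of scriptedTurns) {
    if (turn.stop_reason === 'tool_use' && turn.toolName) {
      // Corresponds to executeToolCalls: run the tool, record its result,
      // then loop back for another model turn.
      transcript.push(`tool(${turn.toolName}): ${toolRegistry[turn.toolName]({ location: 'Berlin' })}`);
    } else {
      // No tool calls requested: corresponds to the @Final respond transition.
      transcript.push(`assistant: ${turn.text}`);
      return turn.text;
    }
  }
  throw new Error('mock model never produced a final answer');
}
```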

Key Concepts

  • tools array — Lists tool property names the LLM can call (must match @InjectTool() names)
  • delegateToolCalls — Executes tool-call parts from the LLM response
  • stop_reason === 'tool_use' — The LLM wants to call a tool
  • allCompleted — All delegated tool calls have finished
  • @Guard + priority — Routes between tool calling and final response
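The interplay of `@Guard` and `priority` can be illustrated with a small standalone dispatcher. The semantics assumed here (among transitions leaving the current state, the highest-priority one whose guard passes wins, with priority defaulting to 0) match the workflow above, but this sketch is not loopstack's actual scheduler:

```typescript
// Illustrative transition table and dispatcher; assumed semantics, not loopstack internals.
interface TransitionDef {
  from: string;
  to: string;
  priority: number; // assumed default of 0 when not specified
  guard: () => boolean;
}

// Pick the highest-priority transition from `state` whose guard passes.
function pickTransition(state: string, defs: TransitionDef[]): TransitionDef | undefined {
  return defs
    .filter((t) => t.from === state && t.guard())
    .sort((a, b) => b.priority - a.priority)[0];
}

// Mirrors the workflow: while stop_reason is 'tool_use', the guarded
// priority-10 transition outranks the fallback final transition.
let stopReason = 'tool_use';
const defs: TransitionDef[] = [
  { from: 'prompt_executed', to: 'awaiting_tools', priority: 10, guard: () => stopReason === 'tool_use' },
  { from: 'prompt_executed', to: 'final', priority: 0, guard: () => true },
];
```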

