
LLM Providers

Loopstack supports multiple LLM providers through a runtime registry. Provider modules self-register at startup. Workflows and tools resolve providers by name — swap or use multiple providers in parallel without changing workflow code.

Quick Start

Import a provider module. That’s it — the adapter tools are available globally.

```typescript
import { ClaudeModule } from '@loopstack/claude-module';

@Module({
  imports: [LoopCoreModule, ClaudeModule],
})
export class AppModule {}
```

Tool Defaults with @InjectTool

Configure provider and model at the injection site. Call sites stay clean — only pass prompt, tools, and messages.

```typescript
@Workspace({ ... })
export class MyWorkspace implements WorkspaceInterface {
  @InjectTool({
    provider: 'claude',
    model: 'claude-opus-4-7',
    tools: ['read', 'glob'],
  })
  llmGenerateText: LlmGenerateTextTool;

  @InjectTool({ provider: 'claude' })
  llmDelegateToolCalls: LlmDelegateToolCallsTool;
}
```

```typescript
// Workflow — only call-specific args
const result = await this.llmGenerateText.call({
  system: 'You are a helpful assistant.',
  messagesSearchTag: 'message',
  // provider, model, and tools come from @InjectTool defaults
});
```

Defaults are deep-merged with call-site arguments. Explicit call-site values always win; an `undefined` call-site value does not override a default.
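The merge rule can be sketched as a recursive merge in which `undefined` call-site values fall back to the injected defaults. The helper below is a hypothetical illustration of that behavior, not Loopstack's actual implementation:

```typescript
// Hypothetical illustration of the merge rule: call-site args win,
// but `undefined` values fall back to the @InjectTool defaults.
function mergeDefaults<T extends Record<string, unknown>>(
  defaults: T,
  args: Partial<T>,
): T {
  const out: Record<string, unknown> = { ...defaults };
  for (const [key, value] of Object.entries(args)) {
    if (value === undefined) continue; // undefined never overrides a default
    const base = out[key];
    // Recurse into plain objects so nested defaults survive partial overrides
    if (
      base !== null && typeof base === 'object' && !Array.isArray(base) &&
      value !== null && typeof value === 'object' && !Array.isArray(value)
    ) {
      out[key] = mergeDefaults(
        base as Record<string, unknown>,
        value as Record<string, unknown>,
      );
    } else {
      out[key] = value;
    }
  }
  return out as T;
}

const defaults = { provider: 'claude', model: 'claude-opus-4-7', tools: ['read'] };
const callArgs = { model: undefined, tools: ['glob'] };
const merged = mergeDefaults(defaults, callArgs);
// merged.model === 'claude-opus-4-7'; merged.tools is ['glob']
```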

@InjectWorkflow Defaults

The same mechanism applies to sub-workflows. Defaults are merged into every `run()` call before BullMQ serialization:

```typescript
@InjectWorkflow({ provider: 'claude', model: 'claude-opus-4-7' })
chatAgent: ChatAgentWorkflow;

// Defaults are included automatically
await this.chatAgent.run({ system, tools, userMessage });
```

Using Multiple Providers

Import both modules and configure each tool or workflow injection with its provider:

```typescript
@Module({
  imports: [LoopCoreModule, ClaudeModule, OpenAiModule],
})
export class AppModule {}

@Workspace({ ... })
export class MyWorkspace implements WorkspaceInterface {
  @InjectWorkflow({ provider: 'claude', model: 'claude-opus-4-7' })
  smartAgent: ChatAgentWorkflow;

  @InjectWorkflow({ provider: 'openai', model: 'gpt-4o-mini' })
  fastAgent: ChatAgentWorkflow;
}
```

Both agents use the same ChatAgentWorkflow class. The provider is selected by the `provider` argument.
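The name-keyed runtime registry this relies on can be sketched as follows. This is an illustrative pattern, not Loopstack's actual internals: provider modules register an adapter under their ID at startup, and the shared workflow code resolves it by the `provider` argument at call time.

```typescript
// Illustrative sketch of a name-keyed provider registry (assumed shape,
// not Loopstack source code).
interface LlmProvider {
  generateText(prompt: string, model: string): Promise<string>;
}

class ProviderRegistry {
  private providers = new Map<string, LlmProvider>();

  register(id: string, provider: LlmProvider): void {
    this.providers.set(id, provider);
  }

  resolve(id: string): LlmProvider {
    const provider = this.providers.get(id);
    if (!provider) throw new Error(`Unknown LLM provider: ${id}`);
    return provider;
  }
}

const registry = new ProviderRegistry();
registry.register('claude', { generateText: async (p, m) => `[claude:${m}] ${p}` });
registry.register('openai', { generateText: async (p, m) => `[openai:${m}] ${p}` });

// Both "agents" share the same code path; only the provider ID differs.
const smart = registry.resolve('claude');
const fast = registry.resolve('openai');
```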

Adapter Tools

All LLM interactions go through adapter tools from @loopstack/llm-provider-module. This ensures validation, interceptors, and logging apply to every LLM call.

| Tool | Purpose |
| --- | --- |
| `LlmGenerateTextTool` | Text generation with optional tool calling |
| `LlmGenerateObjectTool` | Structured output conforming to a JSON Schema |
| `LlmDelegateToolCallsTool` | Execute tool calls from an LLM response |
| `LlmUpdateToolResultTool` | Handle async tool completion callbacks |

Message Documents

All providers share a single LlmMessageDocument with normalized content. Native API responses are stored in entity.meta.response for provider-specific round-trips.

| Document | Content Format | Widget |
| --- | --- | --- |
| `LlmMessageDocument` | Normalized (`text`, `thinking`, `tool_call` blocks) | `llm-message` |
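As an illustration, normalized content of this kind is commonly modeled as a discriminated union of blocks. The field names below are assumptions for the sketch, not Loopstack's actual schema:

```typescript
// Hypothetical shape of normalized message content; the actual fields of
// LlmMessageDocument may differ.
type ContentBlock =
  | { type: 'text'; text: string }
  | { type: 'thinking'; text: string }
  | { type: 'tool_call'; id: string; name: string; args: Record<string, unknown> };

// A provider adapter would map a native API response into this shape,
// keeping the raw response available for provider-specific round-trips.
const blocks: ContentBlock[] = [
  { type: 'thinking', text: 'The user wants a file listing.' },
  { type: 'tool_call', id: 'call_1', name: 'glob', args: { pattern: '*.ts' } },
  { type: 'text', text: 'Here are the matching files.' },
];

// Consumers can filter by block type regardless of which provider produced it.
const toolCalls = blocks.filter((b) => b.type === 'tool_call');
```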

Environment Variables

| Variable | Provider | Description |
| --- | --- | --- |
| `ANTHROPIC_API_KEY` | Claude | API key |
| `OPENAI_API_KEY` | OpenAI | API key |
| `CLAUDE_MODEL` | Claude | Default model fallback |
| `OPENAI_MODEL` | OpenAI | Default model fallback |

Prefer @InjectTool defaults over env vars in production.
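The precedence this implies can be sketched as: explicit call-site value, then `@InjectTool` default, then env-var fallback. The helper below is an assumed illustration of that order, not Loopstack code:

```typescript
// Illustrative resolution order (assumption based on the docs above):
// call-site arg > @InjectTool default > env var fallback.
function resolveModel(
  callSite: string | undefined,
  injectedDefault: string | undefined,
  env: Record<string, string | undefined>,
): string {
  const model = callSite ?? injectedDefault ?? env.CLAUDE_MODEL;
  if (!model) throw new Error('No model configured');
  return model;
}
```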

Available Providers

| Provider | Module | ID |
| --- | --- | --- |
| Anthropic Claude | `@loopstack/claude-module` | `'claude'` |
| OpenAI | `@loopstack/openai-module` | `'openai'` |