LLM Providers
Loopstack supports multiple LLM providers through a runtime registry. Provider modules self-register at startup. Workflows and tools resolve providers by name — swap or use multiple providers in parallel without changing workflow code.
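The registry pattern described above can be sketched as follows (a minimal illustration under assumed names like ProviderRegistry, not Loopstack's actual internals):

```typescript
// Minimal sketch of a name-keyed provider registry (hypothetical names).
interface LlmProvider {
  id: string;
  generateText(prompt: string): Promise<string>;
}

class ProviderRegistry {
  private providers = new Map<string, LlmProvider>();

  // Provider modules call this at startup to self-register.
  register(provider: LlmProvider): void {
    this.providers.set(provider.id, provider);
  }

  // Workflows and tools resolve a provider by name at call time.
  resolve(id: string): LlmProvider {
    const provider = this.providers.get(id);
    if (!provider) throw new Error(`Unknown LLM provider: ${id}`);
    return provider;
  }
}

const registry = new ProviderRegistry();
registry.register({ id: 'claude', generateText: async (p) => `claude:${p}` });
registry.register({ id: 'openai', generateText: async (p) => `openai:${p}` });
```

Because resolution happens by name at call time, workflow code that asks for 'claude' today can run against 'openai' tomorrow without being edited.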
Quick Start
Import a provider module. That’s it — the adapter tools are available globally.
```typescript
import { ClaudeModule } from '@loopstack/claude-module';

@Module({
  imports: [LoopCoreModule, ClaudeModule],
})
export class AppModule {}
```

Tool Defaults with @InjectTool
Configure provider and model at the injection site. Call sites stay clean — only pass prompt, tools, and messages.
```typescript
@Workspace({ ... })
export class MyWorkspace implements WorkspaceInterface {
  @InjectTool({ provider: 'claude', model: 'claude-opus-4-7', tools: ['read', 'glob'] })
  llmGenerateText: LlmGenerateTextTool;

  @InjectTool({ provider: 'claude' })
  llmDelegateToolCalls: LlmDelegateToolCallsTool;
}
```

```typescript
// Workflow — only call-specific args
const result = await this.llmGenerateText.call({
  system: 'You are a helpful assistant.',
  messagesSearchTag: 'message',
  // provider, model, and tools come from @InjectTool defaults
});
```

Defaults are deep-merged with call-site args. Explicit call-site values always win, and `undefined` does not override a default.
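The merge semantics described above can be sketched like this (an illustrative reimplementation for clarity, not the actual Loopstack code):

```typescript
// Sketch of the default-merging rule: explicit call-site values win,
// but `undefined` falls back to the injected default.
type Args = Record<string, unknown>;

function mergeDefaults(defaults: Args, callArgs: Args): Args {
  const result: Args = { ...defaults };
  for (const [key, value] of Object.entries(callArgs)) {
    if (value === undefined) continue; // undefined never overrides a default
    const base = result[key];
    if (
      typeof base === 'object' && base !== null && !Array.isArray(base) &&
      typeof value === 'object' && value !== null && !Array.isArray(value)
    ) {
      result[key] = mergeDefaults(base as Args, value as Args); // deep-merge nested objects
    } else {
      result[key] = value; // explicit call-site value wins
    }
  }
  return result;
}

const merged = mergeDefaults(
  { provider: 'claude', model: 'claude-opus-4-7', tools: ['read', 'glob'] },
  { model: 'claude-haiku', provider: undefined },
);
// merged keeps provider 'claude' from the defaults, model 'claude-haiku' from the call site
```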
@InjectWorkflow Defaults
Same mechanism for sub-workflows. Defaults merge into every run() call before BullMQ serialization:
```typescript
@InjectWorkflow({ provider: 'claude', model: 'claude-opus-4-7' })
chatAgent: ChatAgentWorkflow;

// Defaults are included automatically
await this.chatAgent.run({ system, tools, userMessage });
```

Using Multiple Providers
Import both modules and configure each tool or workflow injection with its provider:
```typescript
@Module({
  imports: [LoopCoreModule, ClaudeModule, OpenAiModule],
})
export class AppModule {}
```

```typescript
@Workspace({ ... })
export class MyWorkspace implements WorkspaceInterface {
  @InjectWorkflow({ provider: 'claude', model: 'claude-opus-4-7' })
  smartAgent: ChatAgentWorkflow;

  @InjectWorkflow({ provider: 'openai', model: 'gpt-4o-mini' })
  fastAgent: ChatAgentWorkflow;
}
```

Both agents use the same ChatAgentWorkflow class; the provider is selected by the provider arg.
Adapter Tools
All LLM interactions go through adapter tools from @loopstack/llm-provider-module. This ensures validation, interceptors, and logging apply to every LLM call.
| Tool | Purpose |
|---|---|
| LlmGenerateTextTool | Text generation with optional tool calling |
| LlmGenerateObjectTool | Structured output conforming to a JSON Schema |
| LlmDelegateToolCallsTool | Execute tool calls from an LLM response |
| LlmUpdateToolResultTool | Handle async tool completion callbacks |
Message Documents
All providers share a single LlmMessageDocument with normalized content. Native API responses are stored in entity.meta.response for provider-specific round-trips.
| Document | Content Format | Widget |
|---|---|---|
| LlmMessageDocument | Normalized (text, thinking, tool_call blocks) | llm-message |
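Normalized content can be pictured as a list of typed blocks. In the sketch below, only the block types (text, thinking, tool_call) and the meta.response slot come from the description above; the remaining field names are assumptions for illustration:

```typescript
// Illustrative shape of a normalized LLM message entity (field names
// beyond the documented block types and meta.response are assumptions).
type ContentBlock =
  | { type: 'text'; text: string }
  | { type: 'thinking'; text: string }
  | { type: 'tool_call'; name: string; args: Record<string, unknown> };

interface LlmMessageEntity {
  content: ContentBlock[];
  meta: {
    response?: unknown; // native provider API response, kept for round-trips
  };
}

const message: LlmMessageEntity = {
  content: [
    { type: 'thinking', text: 'The user wants the file contents.' },
    { type: 'tool_call', name: 'read', args: { path: 'README.md' } },
    { type: 'text', text: 'Here is the file.' },
  ],
  meta: { response: { /* raw Anthropic or OpenAI payload would go here */ } },
};
```

Because every provider emits this same shape, downstream widgets and workflows never need provider-specific parsing; the raw payload stays available for callers that do.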
Environment Variables
| Variable | Provider | Description |
|---|---|---|
| ANTHROPIC_API_KEY | Claude | API key |
| OPENAI_API_KEY | OpenAI | API key |
| CLAUDE_MODEL | Claude | Default model fallback |
| OPENAI_MODEL | OpenAI | Default model fallback |
Prefer @InjectTool defaults over env vars in production.
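For local development, the variables above can be set in a .env file; the values below are placeholders, not real keys:

```shell
# .env (placeholder values)
ANTHROPIC_API_KEY=sk-ant-your-key-here
OPENAI_API_KEY=sk-your-key-here
# Optional model fallbacks, used when no model is set at the injection site
CLAUDE_MODEL=claude-opus-4-7
OPENAI_MODEL=gpt-4o-mini
```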
Available Providers
| Provider | Module | ID |
|---|---|---|
| Anthropic Claude | @loopstack/claude-module | 'claude' |
| OpenAI | @loopstack/openai-module | 'openai' |