Pipelines
Pipelines are the orchestration layer in Loopstack that coordinate the execution of workflows in larger automations. They define how workflows are sequenced and executed, supporting both sequential and parallel execution patterns.
Overview
A pipeline acts as the executable unit that users interact with in Loopstack Studio. Think of pipelines as the "main programs" that orchestrate one or more workflows to accomplish a specific automation goal.
pipelines:
  - name: content_processing_pipeline
    title: "Content Processing Pipeline"
    type: root # Mark the pipeline as root (entrypoint)
    workspace: content_automation # Assign a workspace to the root pipeline
    sequence:
      - workflow: extract_content
      - workflow: analyze_content
      - workflow: generate_summary
Pipeline Types
Loopstack supports two main pipeline execution patterns, each designed for different automation scenarios.
Sequential Pipelines
Sequential pipelines execute workflows one after another, where each workflow must complete before the next begins. This pattern is ideal for processes that require strict ordering or where later steps depend on earlier results.
pipelines:
  - name: blog_creation_pipeline
    title: "Blog Creation Pipeline"
    type: root
    workspace: content_automation # Only needed for root pipelines
    sequence:
      - workflow: research_topic # Step 1: Research
      - workflow: generate_outline # Step 2: Create outline
      - workflow: write_content # Step 3: Write content
      - workflow: review_and_edit # Step 4: Review
      - workflow: publish_content # Step 5: Publish
Sequential Execution Flow:
- The pipeline starts with the first workflow (research_topic)
- Each workflow runs to completion before the next begins
- Data flows between workflows via shared context or stored documents
- The pipeline completes when all workflows in the sequence finish successfully
Conditional Execution
Sequential pipelines support conditional execution paths, allowing you to implement different logic branches based on context variables:
pipelines:
  - name: EmailFollowUp_GeneratePath
    type: sequence
    sequence:
      - workflow: GenerateReactivationStrategies
      - workflow: SelectStrategy
      - workflow: GenerateEmailDraft
        condition: ${ context.variables.SELECTED_STRATEGY }
      - workflow: ScheduleFollowUpCall
        condition: ${ context.variables.NEEDS_OUTREACH }
The condition property lets you specify expressions that determine whether a workflow should be executed. This enables flexible pipelines whose execution paths adapt dynamically to the results of previous workflows.
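For example, conditions can gate independent branches of the same sequence. The sketch below uses hypothetical workflow names and assumes two boolean-like context variables (SEND_EMAIL, SEND_SLACK) set by an earlier workflow:

```yaml
pipelines:
  - name: notification_pipeline       # hypothetical example
    type: sequence
    sequence:
      - workflow: decide_channels     # assumed to export SEND_EMAIL / SEND_SLACK
      - workflow: send_email
        condition: ${ context.variables.SEND_EMAIL }
      - workflow: send_slack_message
        condition: ${ context.variables.SEND_SLACK }
```

Workflows whose condition evaluates to a falsy value are simply skipped, and the sequence continues with the next entry.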
Factory Pipelines
Factory pipelines generate multiple instances of a workflow (or pipeline) from dynamic input data and can execute them sequentially or in parallel. This pattern is ideal for batch processing, or for scenarios where execution paths are produced dynamically by previous automation steps.
pipelines:
  - name: document_processing_pipeline
    title: "Document Processing Pipeline"
    type: factory
    parallel: true # Run paths in parallel (optional, defaults to false)
    factory:
      workflow: process_single_document # Template workflow or pipeline to execute
      iterator:
        source: ${ context.variables.DOCUMENT_LIST } # Array of items to process
      namespace:
        label: Documents # Set a common namespace (required)
Factory Execution Flow:
- The pipeline reads input data from the specified source
- It creates one instance of the template workflow per item in the input array
- Each instance processes its item (in parallel when parallel is set to true)
- The pipeline completes when all instances finish
Note: Factory pipelines do not forward context variables that were set within sub-workflows. To access their data externally, you can create and load Documents.
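Putting this together, a root pipeline might first run a workflow that exports the item list and then hand off to the factory pipeline shown above. The names below are hypothetical:

```yaml
pipelines:
  - name: batch_root                            # hypothetical example
    type: root
    workspace: content_automation
    sequence:
      - workflow: collect_documents             # assumed to export DOCUMENT_LIST via exportContext
      - pipeline: document_processing_pipeline  # the factory pipeline defined above
```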
Pipeline Configuration
Properties
A pipeline can be configured with these properties:
- name: (mandatory) Unique identifier within the workspace
- type: (mandatory) Pipeline type: "root", "sequence", or "factory"
- title: (optional) Display name in Loopstack Studio
- description: (optional) Display description in Loopstack Studio
Root type properties:
- workspace: Assigns the pipeline to a workspace

Sequence type properties:
- sequence: The sequence of (sub-)workflows or pipelines

Factory type properties:
- factory: The pipeline or workflow to instantiate via the factory
- parallel: Set to true for parallel execution
- iterator: Data mapping to the factory's iterator
Root Pipelines
Pipelines marked as root act as the entry points of an automation and can be selected and executed in the Loopstack Studio frontend. Non-root pipelines can be used as sub-pipelines of other pipelines.
Root Pipelines need to be assigned to a workspace.
...
- name: root_pipeline # Root pipeline, executable from the frontend
  type: root
  workspace: default
  sequence:
    - pipeline: my_sub_pipeline # Sub-pipeline executed in sequence
Root Pipeline Functionality:
- Provide clear entry points for automation workflows
- Appear in the Studio interface for user execution
- Can be scheduled or triggered via API
Nested Pipeline Execution
Pipelines can execute other pipelines, creating powerful composition patterns:
pipelines:
  # Root pipeline
  - name: content_creation_pipeline
    title: "Complete Content Creation"
    type: root
    workspace: content_automation
    sequence:
      - pipeline: research_pipeline # Execute another pipeline
      - workflow: generate_content # Then execute a workflow
      - pipeline: publishing_pipeline # Then another pipeline
  # Sub-pipeline
  - name: research_pipeline
    title: "Research Phase"
    type: sequence
    sequence:
      - workflow: gather_sources
      - workflow: analyze_trends
  # Sub-pipeline
  - name: publishing_pipeline
    title: "Publishing Phase"
    type: sequence
    sequence:
      - workflow: format_content
      - workflow: schedule_publication
Data Flow Between Workflows
Pipelines coordinate data flow between workflows through variables in a shared context object. The context object can be accessed in your YAML configuration using template expressions:
value: ${ context.variables.MY_CUSTOM_VARIABLE }
Context variables are also available in custom services that you implement; see the Custom Services section.
Export to Context
Data exported from one workflow (via exportContext) automatically becomes available to subsequent workflows:
workflows:
  - name: extract_data_workflow
    type: stateMachine
    transitions:
      - name: extractData
        from: start
        to: end
        call:
          - tool: data_extraction_tool
            exportContext: EXTRACTED_DATA # Export to the pipeline context
  - name: process_data_workflow
    type: stateMachine
    transitions:
      - name: processData
        from: start
        to: end
        call:
          - tool: data_processing_tool
            arguments:
              input: ${ context.variables.EXTRACTED_DATA } # Access from context
Note: We use exportContext here instead of as:
- exportContext: Makes data available via the context object (across workflows)
- as: Makes data available in subsequent transitions (within the same workflow)
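As a minimal sketch of the contrast, the workflow below (hypothetical workflow, tool, and variable names throughout) keeps one result local with as while publishing another to the shared context with exportContext:

```yaml
workflows:
  - name: example_workflow             # hypothetical example
    type: stateMachine
    transitions:
      - name: fetchData
        from: start
        to: end
        call:
          - tool: fetch_tool
            as: RAW_DATA               # local: visible only to later transitions in this workflow
          - tool: summarize_tool
            exportContext: SUMMARY     # shared: visible to subsequent workflows via the context object
```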
Note: Context data flow follows a downstream pattern: variables are passed from parent pipelines to child workflows/pipelines, but not upward from child to parent. This ensures predictable data flow where execution context enriches as the pipeline progresses, but parent contexts remain stable regardless of child execution outcomes.
Setting Context Explicitly
You can also use built-in tools and services, such as SetContextService, to set context variables within a workflow.
See the Building with Loopstack section for more information about available tools and services.
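For orientation, a call might be wired into a transition roughly as sketched below. This is a hypothetical sketch only: the tool name and the exact arguments accepted by SetContextService may differ, so check the Building with Loopstack section for the real configuration:

```yaml
transitions:
  - name: setFlags
    from: start
    to: end
    call:
      - tool: set_context                  # hypothetical tool name backed by SetContextService
        arguments:
          MY_CUSTOM_VARIABLE: "some value" # hypothetical argument shape
```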
Pipeline Execution States
Pipelines maintain execution state throughout their lifecycle:
Execution States:
- pending: Pipeline queued for execution
- running: Currently executing workflows
- paused: Waiting for manual intervention or external input
- completed: All workflows finished successfully
- failed: Execution stopped due to an error
- cancelled: Manually stopped by the user
You can monitor these states in Loopstack Studio and use them for automation logic.
Error Handling and Recovery
Pipelines provide built-in resilience for workflow execution:
Automatic Retry Logic
pipelines:
  - name: resilient_pipeline
    title: "Resilient Processing Pipeline"
    type: sequence
    retryPolicy:
      maxAttempts: 3
      backoffMultiplier: 2
    sequence:
      - workflow: data_processing_workflow
      - workflow: validation_workflow
Note: Automatic retry features are not yet implemented in the current version of Loopstack.
Failure Recovery
When a workflow fails within a pipeline:
- Pipeline execution pauses at the failed workflow
- Error details are captured and logged
- Users can inspect the failure in Studio and the console
- Execution can be resumed from the point of failure
- Alternative recovery workflows can be triggered
Best Practices
Single Responsibility: Each pipeline should accomplish one clear automation goal.
Meaningful Names: Use descriptive names that indicate the pipeline's purpose and expected outcome.
Root Pipeline Strategy: Only mark pipelines as root when they represent complete, user-facing automation flows.
Conditional Logic: Use conditions to create flexible pipelines that adapt to different scenarios.
Error Boundaries: Design workflows within pipelines to handle their own errors when possible.
Testing Strategy: Test individual workflows before composing them into complex pipelines.
Next Steps
With pipelines configured to orchestrate your workflows, you can now dive into the detailed workflow definitions that contain your actual automation logic.