End-to-End Workflows
Step-by-step walkthroughs of every major workflow in Orchestratia, from server registration to multi-agent cascading pipelines.
Flow 1: Server Registration (Zero to Running)
Step 1 — Create a Registration Token
From the dashboard, create a one-time registration token for a project. This generates an orcreg_ prefixed token with install commands.
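A registration token is just an opaque one-time secret with a recognizable prefix. A hypothetical sketch of what the hub might do when minting one (the `orcreg_` prefix and the install commands come from this flow; the generator itself is an assumption, not the real implementation):

```python
import secrets

TOKEN_PREFIX = "orcreg_"  # one-time registration token prefix (from this flow)

def mint_registration_token(project_id: str, nbytes: int = 24) -> dict:
    """Mint a one-time registration token bound to a project (illustrative)."""
    token = TOKEN_PREFIX + secrets.token_urlsafe(nbytes)
    return {
        "project_id": project_id,
        "token": token,
        # Install commands the dashboard would display alongside the token
        "install_commands": [
            "pip install orchestratia-agent",
            f"orchestratia-agent --register {token}",
        ],
    }
```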
Step 2 — Install the Agent Daemon
On the development server, install the orchestratia-agent package and run registration:
sequenceDiagram
participant Admin as Admin (Dashboard)
participant Hub as Hub
participant Dev as Dev Server
Admin->>Hub: POST /servers/tokens {project_id}
Hub-->>Admin: token: orcreg_... + install commands
Admin->>Dev: Copy install commands to dev server
Dev->>Dev: pip install orchestratia-agent
Dev->>Hub: orchestratia-agent --register orcreg_...
Dev->>Hub: POST /servers/register {name, repos, token}
Hub-->>Dev: api_key: orc_...
Dev->>Dev: Save to config.yaml
Dev->>Hub: WS connect + heartbeat every 30s
Hub-->>Admin: Server appears on dashboard: "ONLINE"
Note over Dev,Hub: Heartbeat every 30s<br/>Offline if no heartbeat for 90s

The server is now registered and will maintain a persistent WebSocket connection to the hub. The hub marks the server offline if no heartbeat is received within 90 seconds.
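The liveness rule is simple enough to state as code. A minimal sketch of the hub-side check, assuming timestamps in seconds (the constants come from this flow; the function is illustrative):

```python
HEARTBEAT_INTERVAL_S = 30  # agents heartbeat every 30s
OFFLINE_AFTER_S = 90       # hub marks a server offline after 90s of silence

def server_status(last_heartbeat_at: float, now: float) -> str:
    """Classify a registered server from its last heartbeat timestamp."""
    return "ONLINE" if now - last_heartbeat_at <= OFFLINE_AFTER_S else "OFFLINE"
```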
Flow 2: Simple Task (Create, Assign, Execute, Complete)
The most basic workflow — a single task assigned to a single agent.
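The statuses in this flow form a small state machine. A sketch of the transitions, inferred from the statuses that appear across these flows (the transition table itself is an assumption):

```python
# Hypothetical task lifecycle, inferred from the statuses shown in these flows.
TRANSITIONS = {
    "pending": {"assigned"},
    "assigned": {"running"},
    "running": {"done", "failed", "needs_human"},
    "needs_human": {"running"},  # resumes after an intervention response
}

def advance(status: str, target: str) -> str:
    """Move a task to `target`, rejecting transitions not in the table."""
    if target not in TRANSITIONS.get(status, set()):
        raise ValueError(f"illegal transition {status} -> {target}")
    return target
```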
sequenceDiagram
participant Admin
participant Hub
participant Daemon as Agent Daemon
participant Claude as Claude Code
Admin->>Hub: POST /tasks {title, spec, priority: high}
Hub-->>Admin: task_id: abc123
Admin->>Hub: POST /tasks/abc123/assign {session: dev-01}
Hub->>Daemon: WS: task_assigned {task_id, structured_spec, resolved_inputs}
Admin->>Hub: POST /sessions {server_id, task_id, working_dir}
Hub->>Daemon: WS: session_start
Daemon->>Claude: Fork PTY → Start shell
Daemon->>Hub: WS: session_started
loop Live streaming
Claude->>Daemon: PTY output (reads, writes, tests code)
Daemon->>Hub: WS: session_output
Hub->>Admin: Live output in browser terminal
end
Daemon->>Hub: POST /tasks/abc123/complete<br/>{result: {summary, changes, tests}}
Hub->>Admin: WS: task_event "task_completed"
Note over Admin: Task shows "DONE" on dashboard

Flow 3: Multi-Agent with Dependencies & Contracts
This is the flagship workflow — orchestrating work across multiple agents with structured data flow.
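The hub's cascade on task completion amounts to bookkeeping over contracts and blockers. A minimal sketch, assuming each task record carries a `needs` list of contract keys and a `blocked_by` list of task ids (both field names are assumptions, not the real schema):

```python
def on_task_complete(task_id, contracts, tasks, resolved, done):
    """Apply the cascade: record contracts, resolve inputs, unblock tasks."""
    done.add(task_id)
    for key, value in contracts.items():              # 1. contract_fulfilled
        for t in tasks.values():
            if key in t["needs"]:                     # 2. resolve consumer inputs
                resolved.setdefault(t["id"], {})[key] = value
    unblocked = []
    for t in tasks.values():
        if t["id"] in done:
            continue
        deps_met = all(d in done for d in t["blocked_by"])
        inputs_met = all(k in resolved.get(t["id"], {}) for k in t["needs"])
        if deps_met and inputs_met:                   # 3. all deps met
            unblocked.append(t["id"])                 # 4. candidate for auto-assign
    return unblocked
```

With the A/B/C scenario from this flow, completing A unblocks only B (C still waits on B), and completing B then unblocks C.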
Scenario
flowchart LR
A["Task A<br/>Design API Schema<br/>Server: dev-backend"] -->|"input: api_schema"| B["Task B<br/>Implement API"]
A -->|"input: api_schema"| C["Task C<br/>Build Frontend"]
B -->|"blocks"| C
style A fill:#2A9D88,stroke:#186B5D,color:#fff
style B fill:#F0F9F7,stroke:#2A9D88
style C fill:#F0F9F7,stroke:#2A9D88

Execution Timeline
sequenceDiagram
participant A as Task A<br/>Design API Schema
participant Hub as Hub (Cascade Engine)
participant B as Task B<br/>Implement API
participant C as Task C<br/>Build Frontend
Note over A: Server: dev-backend<br/>pending → assigned → running
A->>Hub: Complete with contracts.api_schema<br/>{status: fulfilled, data: {endpoints, types}}
Hub->>Hub: 1. contract_fulfilled (api_schema)
Hub->>Hub: 2. Resolve B.input(api_schema) ✓
Hub->>Hub: 2. Resolve C.input(api_schema) ✓
Hub->>Hub: 3. Check B: all deps met → UNBLOCK
Hub->>Hub: 4. Check C: still blocked by B
Hub->>B: Auto-assign to dev-backend
Note over B: resolved_inputs: {api_schema: {endpoints, types}}<br/>pending → assigned → running
B->>Hub: Complete
Hub->>Hub: CASCADE: C now fully unblocked
Hub->>C: Auto-assign to dev-frontend
Note over C: resolved_inputs: {api_schema: {endpoints, types}}<br/>pending → assigned → running → done
Note over C: C receives api_schema from Task A<br/>but only starts AFTER Task B completes<br/>(blocks dependency on B)

What the Agent Receives
When the daemon polls for assigned tasks, the agent receives the full context including resolved contracts from upstream:
{
"id": "task-b-uuid",
"title": "Implement API",
"spec": "Implement the REST API based on the schema...",
"structured_spec": {
"$schema": "orchestratia/task-spec/v1",
"requirements": [
{"description": "Implement all endpoints", "priority": "must"},
{"description": "Add integration tests", "priority": "should"}
],
"constraints": {"languages": ["python"], "frameworks": ["fastapi"]}
},
"resolved_inputs": {
"api_schema": {
"endpoints": [
{"path": "/users", "method": "GET", "response": "User[]"},
{"path": "/users/{id}", "method": "GET", "response": "User"}
],
"types": {
"User": {"id": "uuid", "name": "string", "email": "string"}
}
}
}
}

Flow 4: Human Intervention Mid-Task
When an AI agent hits ambiguity or needs a decision, it requests human intervention. The task pauses until the admin responds.
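The pause/resume bookkeeping can be sketched as two small hub-side handlers (the record shapes and the `iv-` id format are hypothetical):

```python
def request_help(task, question, interventions):
    """Agent asked a question: pause the task and record the intervention."""
    task["status"] = "needs_human"                    # Task -> needs_human
    iid = f"iv-{len(interventions) + 1}"              # hypothetical id scheme
    interventions[iid] = {"task_id": task["id"], "question": question, "response": None}
    return iid

def respond(iid, response, task, interventions):
    """Admin answered: store the response and resume the task."""
    interventions[iid]["response"] = response
    task["status"] = "running"                        # Task -> running again
    return interventions[iid]
```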
sequenceDiagram
participant Claude as Claude Code
participant Daemon as Agent Daemon
participant Hub
participant Admin
Note over Claude: Working on task...<br/>Hits ambiguity: "JWT or session cookies?"
Claude->>Daemon: orchestratia task help --id abc123<br/>--question "JWT or session cookies?"
Daemon->>Hub: POST /tasks/abc123/help {question}
Hub->>Hub: Task → needs_human
Hub->>Admin: WS: intervention_requested
Hub->>Admin: Telegram: "Agent needs help"
Note over Claude: Paused, waiting for response...
Admin->>Hub: POST /interventions/{id}/respond<br/>"Use JWT with refresh tokens"
Hub->>Daemon: WS: intervention_response
Daemon->>Claude: Response: "Use JWT with refresh tokens"
Note over Claude: Continues with JWT approach
Note over Hub: Task → running

Flow 5: Auto-Assignment with Capability Matching
When a task has requirements defined, the hub automatically finds the best agent by scoring capabilities.
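The matching pass can be sketched as disqualification plus scoring. The weights below are illustrative, chosen only so the example reproduces the 255 shown in the diagram; they are not the real scoring formula:

```python
# Hypothetical capability matching: offline agents and agents missing the
# required repo are disqualified; the rest are ranked by score.

def pick_agent(task, agents):
    best = None
    for agent in agents:
        if agent["status"] != "ONLINE":
            continue                              # offline: disqualified
        if task["repo"] not in agent["repos"]:
            continue                              # missing repo: disqualified
        score = 200                               # required repo present
        if task.get("lang") in agent.get("langs", []):
            score += 50                           # language match
        score += 5                                # online bonus
        if best is None or score > best[0]:
            best = (score, agent["name"])
    return best                                   # (score, name) or None
```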
flowchart TD
Task["Task: repo=api-gateway, lang=rust"] --> DS
Task --> WD
Task --> HS
DS["✅ dev-backend (ONLINE)<br/>Score: 255 — WINNER"]
WD["❌ dev-desktop (OFFLINE)<br/>Missing repo — DISQUALIFIED"]
HS["❌ hub-server (ONLINE)<br/>Missing repo — DISQUALIFIED"]
DS --> Winner["Auto-assigned to dev-backend"]
style Task fill:#2A9D88,stroke:#186B5D,color:#fff
style DS fill:#F0F9F7,stroke:#2A9D88
style WD fill:#F3F0EA,stroke:#B8AFA2
style HS fill:#F3F0EA,stroke:#B8AFA2
style Winner fill:#F0F9F7,stroke:#2A9D88

Flow 6: Cascading Pipeline (Real-World Example)
A complete 7-task pipeline across 2 agents, showing how cascading auto-assignment works in practice.
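The task graph below collapses into concurrency waves via standard topological layering. A Kahn-style sketch using only the blocks edges (note the execution timeline shows five phases rather than three waves, because T2 happened to finish before T3 and T4; the layering only captures structural concurrency):

```python
from collections import defaultdict

def phases(deps):
    """Group tasks into waves that could run concurrently.
    `deps` maps each task to the set of tasks it is blocked by."""
    indeg = {t: len(d) for t, d in deps.items()}
    children = defaultdict(list)
    for t, d in deps.items():
        for parent in d:
            children[parent].append(t)
    wave = sorted(t for t, n in indeg.items() if n == 0)
    out = []
    while wave:
        out.append(wave)
        ready = set()
        for t in wave:
            for child in children[t]:
                indeg[child] -= 1
                if indeg[child] == 0:
                    ready.add(child)
        wave = sorted(ready)
    return out
```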
Setup
Project: "E-Commerce Platform Rebuild"
| Agent Name | Server | Repos |
|---|---|---|
| dev-backend | Linux | api-gateway, core-service, web-dashboard, search-engine, payment-service, deploy-infra, marketing-site |
| dev-desktop | Windows | desktop-app |
Task Graph
flowchart TD
T1["T1: Update API Schema v2.5<br/>repo: core-service<br/>agent: dev-backend"]
T1 -->|"input: api_schema"| T2["T2: Gateway Integration<br/>repo: api-gateway<br/>lang: rust"]
T1 -->|"input: api_schema"| T3["T3: Search Engine v2<br/>repo: search-engine<br/>lang: rust"]
T1 -->|"input: api_schema"| T4["T4: Web Dashboard<br/>repo: web-dashboard<br/>lang: ts"]
T2 -->|"blocks"| T5["T5: Gateway Tests<br/>repo: api-gateway<br/>agent: dev-backend"]
T2 -->|"blocks"| T6["T6: Staging Deploy<br/>repo: deploy-infra<br/>agent: dev-backend"]
T3 -->|"blocks"| T6
T4 -->|"blocks"| T6
T2 -->|"blocks"| T7["T7: Desktop App Update<br/>repo: desktop-app<br/>agent: dev-desktop"]
style T1 fill:#2A9D88,stroke:#186B5D,color:#fff
style T2 fill:#F0F9F7,stroke:#2A9D88
style T3 fill:#F0F9F7,stroke:#2A9D88
style T4 fill:#F0F9F7,stroke:#2A9D88
style T5 fill:#FAF8F5,stroke:#E8E3DA
style T6 fill:#FAF8F5,stroke:#E8E3DA
style T7 fill:#FAF8F5,stroke:#E8E3DA

Execution Timeline
flowchart TD
P1["Phase 1<br/>T1 runs on dev-backend (manual assign)<br/>T1 completes with api_schema contract"]
P1 --> P2
P2["Phase 2: CASCADE<br/>T2, T3, T4 all unblocked<br/>T2 → dev-backend (rust, api-gateway)<br/>T3 → dev-backend (rust, search-engine)<br/>T4 → dev-backend (ts, web-dashboard)<br/>All run concurrently"]
P2 --> P3
P3["Phase 3: T2 completes<br/>CASCADE → T5, T7 unblocked<br/>T5 → dev-backend<br/>T7 → dev-desktop (desktop-app)"]
P3 --> P4
P4["Phase 4: T3, T4 complete<br/>CASCADE → T6 fully unblocked<br/>(was waiting for T2+T3+T4)<br/>T6 → dev-backend"]
P4 --> P5
P5["Phase 5<br/>T5, T6, T7 complete<br/>ALL DONE!"]
style P1 fill:#2A9D88,stroke:#186B5D,color:#fff
style P2 fill:#F0F9F7,stroke:#2A9D88
style P3 fill:#F0F9F7,stroke:#2A9D88
style P4 fill:#F0F9F7,stroke:#2A9D88
style P5 fill:#FAF8F5,stroke:#E8E3DA

7 tasks, 5 phases, 2 agents — fully automated after the initial manual assignment of T1.
Flow 7: Error Recovery & Session Persistence
What happens when an agent crashes mid-task (Linux/macOS only — tmux provides persistence):
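The recovery step reduces to matching the daemon's persisted session records against the tmux sessions that survived the crash. A sketch of that decision (the record shapes are assumptions; the `orc-` naming follows this flow):

```python
def sessions_to_recover(persisted, tmux_names):
    """Decide which sessions to reattach after a daemon restart.

    persisted:  {session_id: tmux_name} saved before the crash
    tmux_names: live session names, e.g. `tmux list-sessions -F '#S'` lines
    """
    live = set(tmux_names)
    recovered, lost = {}, []
    for sid, name in persisted.items():
        if name in live:
            recovered[sid] = name    # reattach + report session_recovered
        else:
            lost.append(sid)         # PTY really died; session cannot resume
    return recovered, lost
```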
sequenceDiagram
participant Daemon as Agent Daemon
participant Hub
participant Tmux as tmux (orc-abc)
participant Claude as Claude Code
Note over Daemon,Claude: BEFORE CRASH — Normal Operation
Daemon->>Hub: WS connected, heartbeats every 30s
Claude->>Tmux: Working in PTY session (RUNNING)
Note over Daemon: CRASH! (OOM, reboot, etc.)
Daemon--xHub: WS disconnected
Hub->>Hub: Heartbeat stops...
Hub->>Hub: After 90s → agent: OFFLINE
Note over Tmux,Claude: tmux keeps session alive!<br/>Claude still running in PTY
Note over Daemon: AFTER RESTART
Daemon->>Hub: WS reconnect
Daemon->>Tmux: tmux list-sessions → orc-abc found!
Daemon->>Tmux: Reattach to orc-abc
Daemon->>Hub: session_recovered {tmux: orc-abc}
Note over Daemon: 15s grace period...
Daemon->>Hub: Mark recovered sessions: ACTIVE
Note over Daemon,Claude: Session: ACTIVE again<br/>Task: still RUNNING<br/>Claude continues where it left off

Flow 8: Telegram Mobile Monitoring
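The "edits in place, not spam" behaviour boils down to remembering one message id per focused session. A sketch, with a fake message-id store standing in for the Telegram API:

```python
def deliver_screen(session, store):
    """Return ("send" | "edit", message_id) for a session_screen event.
    `store` maps session name -> Telegram message id (ids faked here)."""
    if session in store:
        return ("edit", store[session])   # edit the existing message in place
    store[session] = len(store) + 1       # stand-in for a real message id
    return ("send", store[session])
```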
sequenceDiagram
participant Phone as Admin (Telegram)
participant TG as Telegram API
participant Bot as Hub (ManagedBot)
participant Agent
Phone->>TG: /sessions
TG->>Bot: Command received
Bot-->>Phone: Active sessions:<br/>1. orc-abc (api)<br/>2. orc-def (web)
Phone->>TG: /focus orc-abc
TG->>Bot: Command received
Bot-->>Phone: Focused on orc-abc (api)
Agent->>Bot: session_screen via Event Bus
Bot-->>Phone: Live output (edits in place):<br/>$ Running tests...<br/>12/12 pass, Coverage: 94%
Note over Phone: Message keeps editing<br/>in place, not spam
Agent->>Bot: Permission request event
Bot-->>Phone: Claude wants to run:<br/>rm -rf /tmp/build<br/>[Allow] [Deny]
Phone->>TG: Tap [Allow]
TG->>Bot: Callback received
Bot->>Agent: WS: session_input (allow)

Flow 9: Orchestrator-Driven Pipeline
An orchestrator Claude running on a dev server uses the orchestratia CLI to create, assign, and coordinate tasks across multiple worker agents. This is the primary workflow for multi-repo projects like Pinaka Edge SSE.
sequenceDiagram
participant Orch as Orchestrator Claude
participant Hub as Hub
participant W1 as Worker Agent 1<br/>(pinaka-gateway)
participant W2 as Worker Agent 2<br/>(pinaka-web)
Note over Orch: 1. Discover available agents
Orch->>Hub: orchestratia server list
Hub-->>Orch: dev-staging (online, 7 repos)
Note over Orch: 2. Create tasks with dependencies
Orch->>Hub: orchestratia task create<br/>--title "Design API schema"<br/>--repo pinaka-master
Hub-->>Orch: task_id: T1
Orch->>Hub: orchestratia task create<br/>--title "Gateway routes"<br/>--repo pinaka-gateway<br/>--depends-on T1
Hub-->>Orch: task_id: T2
Orch->>Hub: orchestratia task create<br/>--title "Web dashboard"<br/>--repo pinaka-web<br/>--depends-on T1
Hub-->>Orch: task_id: T3
Note over Orch: 3. Add typed dependency for data flow
Orch->>Hub: orchestratia task deps add T2<br/>--depends-on T1 --type input<br/>--contract-key api_schema
Note over Orch: 4. Assign first task
Orch->>Hub: orchestratia task assign T1<br/>--session dev-staging
Hub->>W1: WS: task_assigned
Note over Orch: 5. Monitor progress
Orch->>Hub: orchestratia task list --status running
Orch->>Hub: orchestratia task view T1
Note over W1: Worker completes T1 with<br/>contracts.api_schema
Hub->>Hub: CASCADE: T2, T3 unblocked
Hub->>W1: Auto-assign T2 (gateway)
Hub->>W2: Auto-assign T3 (web)
Note over W1,W2: Workers execute in parallel<br/>with resolved_inputs from T1
Orch->>Hub: orchestratia task list
Note over Orch: Sees all tasks progressing

Key points:
- The orchestrator uses only CLI commands — no direct API calls or database access
- Dependencies and contracts enable automatic cascading — the orchestrator doesn't need to manually trigger each phase
- `orchestratia server list` lets the orchestrator discover available servers and their capabilities
- `orchestratia task assign` assigns by session name, not UUID
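An orchestrator script wrapping the CLI might compose its invocations programmatically before handing them to `subprocess.run`. A sketch using only flags shown in this flow (the helper itself is illustrative, not part of the product):

```python
def task_create_argv(title, repo, depends_on=None):
    """Build the argv for `orchestratia task create` (flags from this flow)."""
    argv = ["orchestratia", "task", "create", "--title", title, "--repo", repo]
    if depends_on:
        argv += ["--depends-on", depends_on]
    return argv
```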
Flow 10: Complete Task Event Timeline
Every event logged for a single task's full lifecycle:
| Time | Event | Actor | Details |
|---|---|---|---|
| 00:00 | task_created | admin | {title, type, priority} |
| 00:05 | task_assigned | admin | {agent: dev-backend} → WS push to agent |
| 00:05 | task_planning | hub | (if require_plan_approval) worker enters plan mode |
| 00:06 | task_plan_submitted | agent | Plan submitted → WS + Telegram |
| 00:07 | task_plan_approved | admin | Plan approved → WS push to agent |
| 00:08 | task_started | agent | {agent: dev-backend} → WS to dashboards |
| 00:10 | session_active | hub | {pid: 1234, tmux: orc-abc} → WS to dashboards |
| 00:10 | session_output | agent | Continuous PTY output, every keystroke |
| 00:10 | session_screen | agent | Rendered screen lines, every 5 seconds |
| 01:30 | task_needs_human | agent | {question: "JWT or cookies?"} → WS + Telegram |
| 01:35 | intervention_responded | admin | {response: "Use JWT"} → WS to agent |
| 01:35 | (task back to running) | hub | — |
| 02:00 | task_note_added | agent/admin | Note posted on task → WS + Telegram |
| 02:30 | task_updated | admin | Task spec updated → WS push to agent |
| 03:00 | task_completed | agent | {result: {summary, contracts}} → WS to dashboards |
| 03:00 | task_status_update | hub | Push to orchestrator session (done/failed/needs_human) |
| 03:00 | contract_fulfilled | hub | {key: api_schema} |
| 03:00 | dependency_resolved | hub | {downstream: task-b} |
| 03:00 | task_unblocked | hub | {task: task-b} |
| 03:00 | task_auto_assigned | hub | {task: task-b, agent: dev-backend} → WS + Telegram |