# Examples
Complete flow definitions from simple to complex — real tools, real scenarios.
| # | Example | Scenario | Key Features |
|---|---|---|---|
| 1 | Hello World | Minimal two-task flow | Shell executor, depends_on |
| 2 | Rust CI Pipeline | Build once, test in parallel, deploy | DAG parallelism, HTTP executor, secrets |
| 3 | Wait for Service After Deploy | Post-deploy health check | HTTP executor, retries, exponential backoff |
| 4 | Geocode a Batch of Addresses | Concurrent API calls with rate limiting | Queue rate limiting, concurrency control |
| 5 | Build and Push a Docker Image | Container test, build, push | Container executor, cascade failure |
| 6 | Deploy with Slack and Approval | Human-in-the-loop deploy | Approval executor, HTTP webhooks |
| 7 | AI Blog Post Pipeline | Multi-model content generation | Agent executor, output chaining |
| 8 | Vulnerability Scan Fan-Out | Discover Docker images, scan each | Spawn executor, dynamic fan-out |
| 9 | Monorepo Conditional Deploy | Only build/deploy changed services | Conditions, output interpolation |
| 10 | ETL Pipeline | Extract, validate, transform, load | Artifacts, container executor |
| 11 | AI-Powered PR Review | Triage then deep review if high risk | Agent + conditions, multi-model |
| 12 | Staging → Production Promotion | Progressive deploy with child flows | Trigger executor, approval gate |
| 13 | AI Agent via MCP | Background pipeline from a coding agent | MCP integration, async submit |
| 14 | Uptime Monitor | Scheduled health checks with alerting | Schedule, conditions, PagerDuty |
## Hello World
Basic task definition and dependencies. The simplest possible flow: two shell tasks where the second waits for the first.
```json
{
  "tasks": [
    {
      "id": "greet",
      "executor": "shell",
      "config": { "command": "echo 'Hello from Tasked!'" }
    },
    {
      "id": "done",
      "executor": "shell",
      "config": { "command": "echo 'Workflow complete.'" },
      "depends_on": ["greet"]
    }
  ]
}
```
```
$ tasked flow submit hello-world.json
Flow f_1a2b submitted — 2 tasks

$ tasked flow status f_1a2b
STATE  succeeded
TASKS  greet ✓  done ✓

$ tasked task output f_1a2b greet
Hello from Tasked!

$ tasked task output f_1a2b done
Workflow complete.
```
The greet task runs first. Once it succeeds, done fires. The depends_on field defines the edge in the DAG.
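To make the dependency mechanics concrete, here is a minimal sketch (not Tasked's actual scheduler) of how depends_on edges determine a valid execution order, using Kahn's algorithm for topological sorting:

```python
# Illustrative only: derive an execution order from depends_on edges.
def execution_order(tasks):
    """Return task ids in an order that respects depends_on edges."""
    deps = {t["id"]: set(t.get("depends_on", [])) for t in tasks}
    order = []
    while deps:
        # Tasks whose dependencies are all satisfied can run now.
        ready = sorted(tid for tid, d in deps.items() if not d)
        if not ready:
            raise ValueError("dependency cycle detected")
        for tid in ready:
            del deps[tid]
            for remaining in deps.values():
                remaining.discard(tid)
        order.extend(ready)
    return order

flow = [
    {"id": "greet", "executor": "shell"},
    {"id": "done", "executor": "shell", "depends_on": ["greet"]},
]
print(execution_order(flow))  # ['greet', 'done']
```

Tasks that become ready in the same pass (like the parallel test tasks in the next example) have no edges between them, which is exactly what lets a scheduler run them concurrently.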
## Rust CI Pipeline
DAG parallelism — build once, test in parallel, deploy after all pass.
```json
{
  "tasks": [
    {
      "id": "build",
      "executor": "shell",
      "config": { "command": "cargo build --release 2>&1" }
    },
    {
      "id": "test-unit",
      "executor": "shell",
      "config": { "command": "cargo test --lib 2>&1" },
      "depends_on": ["build"]
    },
    {
      "id": "test-integration",
      "executor": "shell",
      "config": { "command": "cargo test --test '*' 2>&1" },
      "depends_on": ["build"]
    },
    {
      "id": "clippy",
      "executor": "shell",
      "config": { "command": "cargo clippy --all-targets -- -D warnings 2>&1" },
      "depends_on": ["build"]
    },
    {
      "id": "deploy",
      "executor": "http",
      "config": {
        "url": "https://api.fly.io/v1/apps/my-api/machines",
        "method": "POST",
        "headers": {
          "Authorization": "Bearer ${secrets.FLY_API_TOKEN}",
          "Content-Type": "application/json"
        },
        "body": { "config": { "image": "registry.fly.io/my-api:latest" } }
      },
      "depends_on": ["test-unit", "test-integration", "clippy"],
      "retries": 3
    }
  ]
}
```
build → [test-unit, test-integration, clippy] → deploy. Three checks run concurrently. Deploy to Fly.io only fires if all three pass.
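The deploy task authenticates with ${secrets.FLY_API_TOKEN}, which is resolved at runtime. A rough sketch of what that substitution could look like (the regex and error handling here are assumptions for illustration, not Tasked's implementation):

```python
import re

# Hypothetical sketch of ${secrets.NAME} resolution before a request is sent.
def resolve_secrets(text, secrets):
    def repl(match):
        name = match.group(1)
        if name not in secrets:
            raise KeyError(f"unknown secret: {name}")
        return secrets[name]
    return re.sub(r"\$\{secrets\.([A-Z0-9_]+)\}", repl, text)

header = "Bearer ${secrets.FLY_API_TOKEN}"
print(resolve_secrets(header, {"FLY_API_TOKEN": "fo1_abc123"}))
# Bearer fo1_abc123
```

Resolving at dispatch time, rather than at submit time, keeps secret values out of the stored flow definition.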
## Wait for Service After Deploy
HTTP executor with retry and exponential backoff.
```json
{
  "tasks": [
    {
      "id": "wait-healthy",
      "executor": "http",
      "config": {
        "url": "https://my-api.fly.dev/healthz",
        "method": "GET",
        "timeout_secs": 5
      },
      "retries": 15,
      "backoff": { "exponential_jitter": { "initial_delay_ms": 2000 } },
      "timeout_secs": 180
    }
  ]
}
```
Retries up to 15 times with jittered exponential backoff (2s, ~4s, ~8s, ...), giving the service up to 3 minutes to become healthy before the task-level timeout fires.
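The delay schedule can be sketched as follows. The exact jitter strategy is an assumption here (up to 50% added on top of a doubling base); Tasked may use a different one:

```python
import random

# Sketch of a jittered exponential delay schedule (assumed strategy).
def delay_ms(attempt, initial_delay_ms=2000, jitter=random.random):
    """Base doubles each attempt; jitter adds up to 50% on top."""
    base = initial_delay_ms * 2 ** attempt
    return base + jitter() * base * 0.5

# Deterministic bases, with jitter zeroed out:
print([delay_ms(a, jitter=lambda: 0.0) for a in range(4)])
# [2000, 4000, 8000, 16000]
```

Jitter matters here because many replicas retrying a just-restarted service on identical schedules would otherwise hammer it in synchronized bursts.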
## Geocode a Batch of Addresses
Queue rate limiting and concurrency control.
First, create a rate-limited queue:
```json
{
  "name": "google-maps",
  "concurrency": 10,
  "rate_limit": { "max_per_second": 40 }
}
```
Then submit the flow on that queue:
```json
{
  "tasks": [
    {
      "id": "addr-1",
      "executor": "http",
      "config": {
        "url": "https://maps.googleapis.com/maps/api/geocode/json?address=1600+Amphitheatre+Parkway&key=${secrets.GOOGLE_MAPS_KEY}",
        "method": "GET"
      }
    },
    {
      "id": "addr-2",
      "executor": "http",
      "config": {
        "url": "https://maps.googleapis.com/maps/api/geocode/json?address=350+Fifth+Ave+New+York&key=${secrets.GOOGLE_MAPS_KEY}",
        "method": "GET"
      }
    },
    {
      "id": "addr-3",
      "executor": "http",
      "config": {
        "url": "https://maps.googleapis.com/maps/api/geocode/json?address=1+Infinite+Loop+Cupertino&key=${secrets.GOOGLE_MAPS_KEY}",
        "method": "GET"
      }
    }
  ]
}
```
All tasks run concurrently (up to 10), rate-limited to 40 req/sec. In practice you would use a spawn executor to generate hundreds of address tasks from a CSV — this shows the queue mechanics with three.
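A max_per_second limit is commonly enforced with a token bucket. This sketch is illustrative of that general mechanism, not Tasked's internal implementation:

```python
# Illustrative token bucket for a 40-permits-per-second rate limit.
class TokenBucket:
    def __init__(self, rate_per_sec, capacity=None):
        self.rate = rate_per_sec
        self.capacity = capacity if capacity is not None else rate_per_sec
        self.tokens = float(self.capacity)
        self.last = 0.0

    def try_acquire(self, now):
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate_per_sec=40)
granted = sum(bucket.try_acquire(now=0.0) for _ in range(100))
print(granted)  # 40 — only the first second's budget is granted immediately
```

The concurrency setting is a separate, orthogonal gate: even with tokens available, no more than 10 tasks hold an execution slot at once.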
## Build and Push a Docker Image
Container executor with chaining and cascade failure.
```json
{
  "tasks": [
    {
      "id": "test",
      "executor": "container",
      "config": {
        "image": "node:22-slim",
        "command": ["sh", "-c", "npm ci && npm test"],
        "working_dir": "/app",
        "timeout_secs": 300
      }
    },
    {
      "id": "build-image",
      "executor": "shell",
      "config": {
        "command": "docker build -t ghcr.io/myorg/myapp:$(git rev-parse --short HEAD) ."
      },
      "depends_on": ["test"]
    },
    {
      "id": "push-image",
      "executor": "shell",
      "config": {
        "command": "docker push ghcr.io/myorg/myapp:$(git rev-parse --short HEAD)"
      },
      "depends_on": ["build-image"]
    }
  ]
}
```
If tests fail, build and push are automatically cancelled. The container executor runs tests in an isolated node:22-slim image; subsequent shell tasks handle the Docker build and push.
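Cascade cancellation is a transitive walk of the dependency graph in the downstream direction. A sketch of the idea (illustrative, not Tasked's scheduler code):

```python
# When a task fails, every transitive dependent is cancelled.
def cancelled_on_failure(tasks, failed_id):
    dependents = {}
    for t in tasks:
        for dep in t.get("depends_on", []):
            dependents.setdefault(dep, set()).add(t["id"])
    cancelled, stack = set(), [failed_id]
    while stack:
        for child in dependents.get(stack.pop(), ()):
            if child not in cancelled:
                cancelled.add(child)
                stack.append(child)
    return cancelled

flow = [
    {"id": "test"},
    {"id": "build-image", "depends_on": ["test"]},
    {"id": "push-image", "depends_on": ["build-image"]},
]
print(sorted(cancelled_on_failure(flow, "test")))
# ['build-image', 'push-image']
```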
## Deploy with Slack and Approval
Approval executor with HTTP webhooks for human-in-the-loop.
```json
{
  "tasks": [
    {
      "id": "run-tests",
      "executor": "shell",
      "config": { "command": "make test" }
    },
    {
      "id": "notify-slack",
      "executor": "http",
      "config": {
        "url": "${secrets.SLACK_WEBHOOK_URL}",
        "method": "POST",
        "body": {
          "text": "Tests passed for myapp. Awaiting approval to deploy to production."
        }
      },
      "depends_on": ["run-tests"]
    },
    {
      "id": "approve-deploy",
      "executor": "approval",
      "config": { "message": "Deploy myapp to production?" },
      "depends_on": ["notify-slack"]
    },
    {
      "id": "deploy-prod",
      "executor": "shell",
      "config": { "command": "flyctl deploy --app myapp-prod --image ghcr.io/myorg/myapp:latest" },
      "depends_on": ["approve-deploy"]
    }
  ]
}
```
Tests pass → Slack notification → approval gate → deploy. The approval endpoint can be wired to a Slack button via POST /api/v1/flows/{id}/tasks/approve-deploy/approve.
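A Slack action handler would POST to that endpoint when the button is clicked. Only the path shape comes from the docs above; the host, port, and bearer-token auth in this sketch are assumptions:

```python
import urllib.request

# Hypothetical approval call from a Slack action handler.
def approval_request(base_url, flow_id, task_id, token):
    url = f"{base_url}/api/v1/flows/{flow_id}/tasks/{task_id}/approve"
    return urllib.request.Request(
        url,
        method="POST",
        headers={"Authorization": f"Bearer {token}"},
    )

req = approval_request("http://localhost:8080", "f_9x3k", "approve-deploy", "tok")
print(req.full_url)
# http://localhost:8080/api/v1/flows/f_9x3k/tasks/approve-deploy/approve
```

Sending the request is then just `urllib.request.urlopen(req)` from the handler process.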
## AI Blog Post Pipeline
Agent executor with output chaining between AI models.
```json
{
  "tasks": [
    {
      "id": "research",
      "executor": "agent",
      "config": {
        "provider": "claude",
        "model": "claude-sonnet-4-20250514",
        "prompt": "Research the current state of WebAssembly outside the browser (WASI, edge compute, plugin systems). Return structured JSON with fields: key_developments, notable_projects, industry_adoption, and open_challenges.",
        "max_tokens": 4096
      }
    },
    {
      "id": "write-post",
      "executor": "agent",
      "config": {
        "provider": "claude",
        "model": "claude-sonnet-4-20250514",
        "prompt": "Write a 1200-word blog post titled 'WebAssembly Beyond the Browser: 2025 State of Play' based on this research:\n\n${tasks.research.output.response}\n\nTarget audience: backend engineers curious about Wasm. Tone: technically rigorous but accessible. Include code examples where relevant.",
        "max_tokens": 8192
      },
      "depends_on": ["research"]
    },
    {
      "id": "generate-tweets",
      "executor": "agent",
      "config": {
        "provider": "openai",
        "model": "gpt-4o",
        "prompt": "Generate 5 tweet-length posts promoting this blog post. Each should highlight a different angle. Include relevant hashtags.\n\nBlog post:\n${tasks.write-post.output.response}",
        "max_tokens": 1024
      },
      "depends_on": ["write-post"]
    }
  ]
}
```
Claude researches → Claude writes → GPT-4o generates social copy. Each task's output is interpolated into the next task's prompt via ${tasks.<id>.output.response}.
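The interpolation step can be sketched as a simple template substitution. The grammar here is inferred from the examples in this document, not taken from a Tasked specification:

```python
import re

# Sketch of ${tasks.<id>.output.<field>} interpolation (assumed grammar).
def interpolate(template, task_outputs):
    def repl(m):
        task_id, field = m.group(1), m.group(2)
        return str(task_outputs[task_id][field])
    return re.sub(r"\$\{tasks\.([\w-]+)\.output\.(\w+)\}", repl, template)

prompt = "Write a post based on this research:\n${tasks.research.output.response}"
outputs = {"research": {"response": "WASI is maturing fast."}}
print(interpolate(prompt, outputs))
```

Because interpolation needs the upstream task's output, any task whose config references ${tasks.X...} implicitly requires X to have finished, which is why those references pair with depends_on edges.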
## Vulnerability Scan Fan-Out
Spawn executor — discover work at runtime, fan out dynamically.
```json
{
  "tasks": [
    {
      "id": "discover-images",
      "executor": "spawn",
      "config": {
        "command": "python3 discover_images.py"
      },
      "spawn_output": ["scan-0", "scan-1", "scan-2", "scan-3", "scan-4"]
    },
    {
      "id": "report",
      "executor": "shell",
      "config": {
        "command": "echo 'All image scans complete'"
      },
      "depends_on": [
        "discover-images/scan-0",
        "discover-images/scan-1",
        "discover-images/scan-2",
        "discover-images/scan-3",
        "discover-images/scan-4"
      ]
    }
  ]
}
```
The discover_images.py helper script:
```python
#!/usr/bin/env python3
"""Discover local Docker images and generate Trivy scan tasks for the spawn executor."""
import json
import subprocess

result = subprocess.run(
    ["docker", "images", "--format", "{{.Repository}}:{{.Tag}}"],
    capture_output=True,
    text=True,
)
images = [img for img in result.stdout.strip().split("\n") if "<none>" not in img and img]
tasks = [
    {
        "id": f"scan-{i}",
        "executor": "shell",
        "config": {
            "command": f"trivy image --severity HIGH,CRITICAL --format json {img}"
        },
    }
    for i, img in enumerate(images)
]
print(json.dumps(tasks))
```
The spawn executor runs the script, parses the output, and injects tasks into the running DAG. Each discovered Docker image gets its own Trivy vulnerability scan. The report task waits for all spawned scans to finish.
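On the receiving side, the executor has to parse the script's stdout and reconcile it with the declared spawn_output ids. This is a sketch of that plumbing under assumptions, not Tasked's actual validation logic:

```python
import json

# Hypothetical spawn-side handling: parse the JSON task list and check
# that every declared spawn_output id was actually produced.
def parse_spawned(stdout, declared_ids):
    tasks = json.loads(stdout)
    produced = [t["id"] for t in tasks]
    missing = [i for i in declared_ids if i not in produced]
    if missing:
        raise ValueError(f"declared spawn_output ids not produced: {missing}")
    return tasks

stdout = json.dumps([
    {"id": f"scan-{i}", "executor": "shell",
     "config": {"command": f"trivy image img-{i}"}}
    for i in range(5)
])
spawned = parse_spawned(stdout, [f"scan-{i}" for i in range(5)])
print(len(spawned))  # 5
```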
## Monorepo Conditional Deploy
Conditions — only build and deploy services that changed.
```json
{
  "tasks": [
    {
      "id": "detect-changes",
      "executor": "shell",
      "config": { "command": "git diff --name-only HEAD~1" }
    },
    {
      "id": "build-api",
      "executor": "shell",
      "config": { "command": "cd services/api && cargo build --release 2>&1" },
      "depends_on": ["detect-changes"],
      "condition": "tasks[\"detect-changes\"].output.stdout.contains(\"services/api\")"
    },
    {
      "id": "build-web",
      "executor": "shell",
      "config": { "command": "cd services/web && npm run build" },
      "depends_on": ["detect-changes"],
      "condition": "tasks[\"detect-changes\"].output.stdout.contains(\"services/web\")"
    },
    {
      "id": "deploy-api",
      "executor": "shell",
      "config": { "command": "flyctl deploy --app myapp-api --config services/api/fly.toml" },
      "depends_on": ["build-api"],
      "condition": "!tasks[\"build-api\"].output.skipped"
    },
    {
      "id": "deploy-web",
      "executor": "shell",
      "config": { "command": "cd services/web && npx wrangler deploy" },
      "depends_on": ["build-web"],
      "condition": "!tasks[\"build-web\"].output.skipped"
    }
  ]
}
```
Push touches services/api/ but not services/web/? Only API tasks run. The deploy tasks check output.skipped to avoid deploying a service that was never built.
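The skip-propagation behavior can be simulated in a few lines. This is an illustrative model of the semantics shown above (false condition sets output.skipped, downstream conditions test that flag), not Tasked's condition evaluator:

```python
# Model of condition-based skipping for the monorepo flow above.
def run_flow(changed_paths):
    outputs = {"detect-changes": {"stdout": "\n".join(changed_paths)}}
    for service in ("api", "web"):
        built = f"services/{service}" in outputs["detect-changes"]["stdout"]
        # A false build condition records the task as skipped.
        outputs[f"build-{service}"] = {"skipped": not built}
        # Deploy runs only if its build was not skipped.
        if not outputs[f"build-{service}"]["skipped"]:
            outputs[f"deploy-{service}"] = {"ran": True}
    return outputs

out = run_flow(["services/api/src/main.rs", "README.md"])
print("deploy-api" in out, "deploy-web" in out)  # True False
```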
## ETL Pipeline
Artifacts for file passing between tasks.
```json
{
  "tasks": [
    {
      "id": "extract",
      "executor": "shell",
      "config": {
        "command": "curl -sL 'https://data.sfgov.org/api/views/yitu-d5am/rows.csv?accessType=DOWNLOAD' -o $TASKED_ARTIFACTS/sf_permits.csv && wc -l $TASKED_ARTIFACTS/sf_permits.csv"
      }
    },
    {
      "id": "validate",
      "executor": "shell",
      "config": {
        "command": "head -1 $TASKED_ARTIFACTS/sf_permits.csv && csvstat --count $TASKED_ARTIFACTS/sf_permits.csv"
      },
      "depends_on": ["extract"]
    },
    {
      "id": "transform",
      "executor": "container",
      "config": {
        "image": "python:3.12-slim",
        "command": [
          "python3", "-c",
          "import csv,json; reader=csv.DictReader(open('/artifacts/sf_permits.csv')); data=[{k.lower().replace(' ','_'):v for k,v in row.items()} for row in reader]; json.dump(data, open('/artifacts/sf_permits.json','w'))"
        ],
        "timeout_secs": 120
      },
      "depends_on": ["validate"]
    },
    {
      "id": "load",
      "executor": "shell",
      "config": {
        "command": "psql ${secrets.DATABASE_URL} -c \"\\copy staging.sf_permits FROM '$TASKED_ARTIFACTS/sf_permits.json'\""
      },
      "depends_on": ["transform"]
    }
  ]
}
```
Each task reads/writes files via $TASKED_ARTIFACTS. The extract step downloads a CSV, the transform step converts it to JSON inside a container, and the load step copies it into Postgres. Artifacts are cleaned up when the flow completes.
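The transform step's python -c one-liner, unpacked into a readable function with the same logic (lowercase the headers, replace spaces with underscores, emit JSON):

```python
import csv
import io
import json

# Same normalization as the inline transform step above.
def transform(csv_text):
    reader = csv.DictReader(io.StringIO(csv_text))
    return [
        {k.lower().replace(" ", "_"): v for k, v in row.items()}
        for row in reader
    ]

sample = "Permit Number,Street Name\n2024-001,Market St\n"
print(json.dumps(transform(sample)))
# [{"permit_number": "2024-001", "street_name": "Market St"}]
```

For anything longer than a one-liner, shipping a script into the container (or baking it into the image) beats escaping it inside a JSON string.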
## AI-Powered PR Review
Agent + conditions — expensive Opus review only when risk is high.
```json
{
  "tasks": [
    {
      "id": "get-diff",
      "executor": "shell",
      "config": { "command": "gh pr diff 42 --repo myorg/myapp" }
    },
    {
      "id": "triage",
      "executor": "agent",
      "config": {
        "provider": "claude",
        "model": "claude-sonnet-4-20250514",
        "prompt": "Analyze this PR diff and assess risk. Return JSON: {\"risk\": \"low|medium|high\", \"reason\": \"...\", \"files_of_concern\": [...]}\n\nDiff:\n${tasks.get-diff.output.stdout}",
        "max_tokens": 2048
      },
      "depends_on": ["get-diff"]
    },
    {
      "id": "deep-review",
      "executor": "agent",
      "config": {
        "provider": "claude",
        "model": "claude-opus-4-20250514",
        "prompt": "Perform a thorough security and correctness review. Return JSON: {\"issues\": [{\"file\": \"...\", \"line\": N, \"severity\": \"critical|warning|info\", \"description\": \"...\"}]}\n\nDiff:\n${tasks.get-diff.output.stdout}",
        "max_tokens": 8192
      },
      "depends_on": ["triage"],
      "condition": "tasks.triage.output.response.contains(\"high\")"
    },
    {
      "id": "post-review",
      "executor": "http",
      "config": {
        "url": "https://api.github.com/repos/myorg/myapp/pulls/42/reviews",
        "method": "POST",
        "headers": {
          "Authorization": "Bearer ${secrets.GITHUB_TOKEN}",
          "Accept": "application/vnd.github+json"
        },
        "body": {
          "event": "COMMENT",
          "body": "**AI Triage:** ${tasks.triage.output.response}\n\n**Deep Review:** ${tasks.deep-review.output.response}"
        }
      },
      "depends_on": ["triage", "deep-review"]
    }
  ]
}
```
Sonnet triages risk. Only if "high" does Opus do a full review. Low and medium-risk PRs skip the expensive deep review entirely, saving cost and time.
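The gate in the example is a plain substring check on the model's response. Parsing the triage JSON first is a sturdier variant; this sketch shows that alternative, not what the condition above literally does:

```python
import json

# Gate the expensive review on a parsed risk field rather than a substring.
def needs_deep_review(triage_response):
    try:
        return json.loads(triage_response).get("risk") == "high"
    except json.JSONDecodeError:
        return True  # unparseable triage: fail safe and review anyway

resp = '{"risk": "high", "reason": "touches auth", "files_of_concern": ["auth.rs"]}'
print(needs_deep_review(resp))  # True
print(needs_deep_review('{"risk": "low", "reason": "docs only"}'))  # False
```

The substring form would also fire on a low-risk response that merely mentions the word "high" in its reason text, which is the failure mode the parsed check avoids.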
## Staging → Production Promotion
Trigger executor — deploy environments as child flows.
```json
{
  "tasks": [
    {
      "id": "build",
      "executor": "shell",
      "config": {
        "command": "docker build -t ghcr.io/myorg/myapp:$(git rev-parse --short HEAD) . && docker push ghcr.io/myorg/myapp:$(git rev-parse --short HEAD)"
      }
    },
    {
      "id": "deploy-staging",
      "executor": "trigger",
      "config": {
        "queue": "deploys",
        "wait": true,
        "flow": {
          "tasks": [
            {
              "id": "apply",
              "executor": "shell",
              "config": { "command": "kubectl set image deployment/myapp myapp=ghcr.io/myorg/myapp:${tasks.build.output.stdout} --context staging" }
            },
            {
              "id": "wait-rollout",
              "executor": "shell",
              "config": { "command": "kubectl rollout status deployment/myapp --context staging --timeout=120s" },
              "depends_on": ["apply"]
            },
            {
              "id": "smoke-test",
              "executor": "http",
              "config": { "url": "https://staging.myapp.com/healthz" },
              "depends_on": ["wait-rollout"],
              "retries": 5,
              "backoff": { "exponential": { "initial_delay_ms": 3000 } }
            }
          ]
        }
      },
      "depends_on": ["build"]
    },
    {
      "id": "approve-prod",
      "executor": "approval",
      "config": { "message": "Staging is green. Promote to production?" },
      "depends_on": ["deploy-staging"]
    },
    {
      "id": "deploy-prod",
      "executor": "trigger",
      "config": {
        "queue": "deploys",
        "wait": true,
        "flow": {
          "tasks": [
            {
              "id": "apply",
              "executor": "shell",
              "config": { "command": "kubectl set image deployment/myapp myapp=ghcr.io/myorg/myapp:${tasks.build.output.stdout} --context production" }
            },
            {
              "id": "wait-rollout",
              "executor": "shell",
              "config": { "command": "kubectl rollout status deployment/myapp --context production --timeout=180s" },
              "depends_on": ["apply"]
            },
            {
              "id": "smoke-test",
              "executor": "http",
              "config": { "url": "https://myapp.com/healthz" },
              "depends_on": ["wait-rollout"],
              "retries": 10,
              "backoff": { "exponential_jitter": { "initial_delay_ms": 3000 } }
            }
          ]
        }
      },
      "depends_on": ["approve-prod"]
    }
  ]
}
```
Each environment's deploy is a self-contained child flow. If staging fails, prod never gets touched. The trigger executor submits a child flow to the deploys queue and waits for it to complete before proceeding.
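In effect, "wait": true means submit-then-poll until the child flow reaches a terminal state. The client API in this sketch is hypothetical; only the submit/poll shape is implied by the docs above:

```python
import time

# Hypothetical client sketch of trigger-with-wait semantics.
TERMINAL = {"succeeded", "failed", "cancelled"}

def run_child_flow(client, queue, flow, poll_secs=5):
    """Submit a child flow, then block until it reaches a terminal state."""
    flow_id = client.submit(queue=queue, flow=flow)
    while True:
        state = client.status(flow_id)
        if state in TERMINAL:
            return state
        time.sleep(poll_secs)
```

The parent task then succeeds or fails with the child's terminal state, which is what stops the production trigger when staging goes red.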
## AI Agent via MCP
How an AI coding agent uses Tasked as a background tool.
```
# You're pair-programming. The agent needs to run a slow pipeline.

Agent → tasked_submit_flow:
  queue: "background"
  tasks:
    - id: "scrape-docs"
      executor: "shell"
      config: { command: "python3 scrape_api_docs.py --site stripe.com/docs/api" }
    - id: "summarize"
      executor: "agent"
      config:
        provider: "claude"
        model: "claude-sonnet-4-20250514"
        prompt: "Summarize these API docs into a migration guide
                 from v2 to v3: ${tasks.scrape-docs.output.stdout}"
      depends_on: ["scrape-docs"]
  ← Returns: { flow_id: "f_7k2m", state: "running", task_count: 2 }

# Agent continues helping you with other code...
# 2 minutes later:

Agent → tasked_flow_status("f_7k2m"):
  ← Returns: { state: "succeeded", tasks: { ... } }

Agent → tasked_task_output("f_7k2m", "summarize"):
  ← Returns: { response: "## Stripe API v2 → v3 Migration Guide\n\n..." }
```
The agent submits work, continues helping you, then retrieves results. No blocking. Tasked runs the pipeline in the background while the agent handles other tasks in your conversation.
## Uptime Monitor
Scheduled execution with conditional PagerDuty alerting.
```json
{
  "schedule": "*/5 * * * *",
  "tasks": [
    {
      "id": "check-api",
      "executor": "http",
      "config": { "url": "https://api.myapp.com/healthz", "timeout_secs": 10 },
      "retries": 2,
      "backoff": { "fixed": { "delay_ms": 5000 } }
    },
    {
      "id": "check-db",
      "executor": "shell",
      "config": { "command": "pg_isready -h db.myapp.com -p 5432 -U postgres" }
    },
    {
      "id": "check-redis",
      "executor": "shell",
      "config": { "command": "redis-cli -h redis.myapp.com ping" }
    },
    {
      "id": "alert-pagerduty",
      "executor": "http",
      "config": {
        "url": "https://events.pagerduty.com/v2/enqueue",
        "method": "POST",
        "body": {
          "routing_key": "${secrets.PAGERDUTY_KEY}",
          "event_action": "trigger",
          "payload": {
            "summary": "Service degradation detected — API: ${tasks.check-api.output.status}, DB: ${tasks.check-db.output.exit_code}, Redis: ${tasks.check-redis.output.exit_code}",
            "severity": "critical",
            "source": "tasked-monitor"
          }
        }
      },
      "depends_on": ["check-api", "check-db", "check-redis"],
      "condition": "tasks[\"check-api\"].output.status != 200 || tasks[\"check-db\"].output.exit_code != 0 || tasks[\"check-redis\"].output.exit_code != 0"
    }
  ]
}
```
Every 5 minutes: check API, Postgres, Redis in parallel. Only page PagerDuty if something is down. The schedule field uses cron syntax, and the condition on the alert task ensures PagerDuty is only triggered when a health check fails.
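As a concrete reading of the cron expression, */5 in the minute field means "fire whenever the minute is a multiple of 5." This sketch computes the next fire time for that one schedule only; real cron handles all five fields:

```python
from datetime import datetime, timedelta

# Next fire time for a "*/5 * * * *" schedule (minute field only).
def next_run(after, step=5):
    candidate = after.replace(second=0, microsecond=0) + timedelta(minutes=1)
    while candidate.minute % step != 0:
        candidate += timedelta(minutes=1)
    return candidate

print(next_run(datetime(2025, 1, 1, 12, 3)))  # 2025-01-01 12:05:00
```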