
Configuration

Storage, logging, metrics, and runtime configuration options.

Storage

Tasked uses SQLite in WAL (Write-Ahead Logging) mode for durable storage with safe concurrent access. The serve and mcp commands use a --data-dir directory, while run uses a single --db file.

| Mode | Flag | Default | Description |
| --- | --- | --- | --- |
| serve | --data-dir | tasked-data | Data directory for persistent storage |
| mcp | --data-dir | tasked-data | Data directory for persistent storage |
| run | --db | :memory: | Single SQLite file, discarded on exit by default |

The --data-dir flag points to a directory that Tasked creates automatically. Inside it, you'll find a catalog.db for global state and a queues/ subdirectory containing per-queue database files. This layout isolates queue data and simplifies backups.

# Use a specific data directory
tasked-server serve --data-dir /var/lib/tasked

# Persist results from a CLI run
tasked-server run flow.json --db results.db

File permissions

Tasked needs read/write access to the data directory and its contents. SQLite creates additional -wal and -shm files alongside each database.
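
Because each queue lives in its own database file, backups can be taken per file. Below is a minimal Python sketch using the standard library's SQLite online-backup API, which is safe to run even while -wal and -shm files are present; the paths follow the layout described above, and backup_data_dir is a hypothetical helper, not part of Tasked:

```python
import pathlib
import sqlite3

def backup_data_dir(data_dir: str, dest_dir: str) -> list[str]:
    """Copy catalog.db and every per-queue database into dest_dir using
    SQLite's online backup API (consistent even with live WAL files)."""
    src, dst = pathlib.Path(data_dir), pathlib.Path(dest_dir)
    dst.mkdir(parents=True, exist_ok=True)
    copied = []
    for db in [src / "catalog.db", *sorted((src / "queues").glob("*.db"))]:
        if not db.exists():
            continue  # e.g. no catalog yet on a fresh directory
        source = sqlite3.connect(db)
        target = sqlite3.connect(dst / db.name)
        source.backup(target)  # copies the full database pages
        target.close()
        source.close()
        copied.append(db.name)
    return copied
```

Copying the .db files with plain cp while the server is running risks a torn copy; the backup API avoids that.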

Logging

Tasked uses structured logging via the tracing crate. Log level is controlled by the RUST_LOG environment variable using EnvFilter syntax.

| Mode | Default Filter | Output |
| --- | --- | --- |
| serve | tasked_server=info,tasked=info,tower_http=info | stdout |
| run | warn | stdout |
| mcp | warn | stderr |

MCP mode writes logs to stderr so that stdout remains clean for JSON-RPC protocol messages.

# Show all debug logs
RUST_LOG=debug tasked-server serve

# Engine trace + quiet HTTP logs
RUST_LOG=tasked=trace,tower_http=warn tasked-server serve

# Debug MCP communication
RUST_LOG=debug tasked-server mcp --data-dir tasked-data
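
The stdout/stderr split matters to MCP clients: the protocol stream must contain nothing but JSON-RPC. A small Python sketch of how a client keeps the two apart; the child process here is a stand-in for RUST_LOG=debug tasked-server mcp, and the log line is illustrative:

```python
import subprocess
import sys

# Stand-in child: emits one JSON-RPC message on stdout and a log line on
# stderr, mimicking `RUST_LOG=debug tasked-server mcp`.
child_code = (
    'import sys;'
    'print(\'{"jsonrpc":"2.0","id":1,"result":{}}\');'
    'print("starting engine", file=sys.stderr)'
)
proc = subprocess.run(
    [sys.executable, "-c", child_code],
    capture_output=True,
    text=True,
)
protocol = proc.stdout.strip()  # clean JSON-RPC, safe to feed a parser
logs = proc.stderr.strip()      # human-readable diagnostics
```

If the server logged to stdout instead, the client's JSON-RPC parser would choke on the interleaved log lines.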

Metrics

In server mode, Tasked exposes a Prometheus-compatible metrics endpoint at /metrics. Metrics are only available when running tasked-server serve.

Available metrics

| Metric | Type | Labels | Description |
| --- | --- | --- | --- |
| tasked_flows_submitted_total | counter | queue_id | Total flows submitted |
| tasked_flows_completed_total | counter | queue_id, state | Total flows completed (succeeded/failed) |
| tasked_tasks_dispatched_total | counter | queue_id, executor | Total tasks dispatched to executors |
| tasked_tasks_completed_total | counter | queue_id, state | Total tasks completed |
| tasked_tasks_retried_total | counter | queue_id | Total task retries |
| tasked_tasks_ready | gauge | queue_id | Tasks currently ready to execute |
| tasked_engine_cycles_total | counter | - | Total engine processing cycles |

# Fetch metrics
curl http://localhost:8080/metrics
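
The body returned by /metrics is in the Prometheus text exposition format. A minimal Python sketch of reading it into a dict, assuming no label values contain spaces (a production consumer would use an official Prometheus client library); the sample values are illustrative:

```python
def parse_metrics(body: str) -> dict[str, float]:
    """Map each 'name{labels}' series to its value, skipping comment lines.
    Sketch only: ignores timestamps and escaped label values."""
    samples: dict[str, float] = {}
    for line in body.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        series, _, value = line.rpartition(" ")
        samples[series] = float(value)
    return samples

# Illustrative sample using metric names from the table above.
sample = """\
# TYPE tasked_flows_submitted_total counter
tasked_flows_submitted_total{queue_id="default"} 42
tasked_tasks_ready{queue_id="default"} 3
tasked_engine_cycles_total 1900
"""
parsed = parse_metrics(sample)
```
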

Real-time monitoring

For live flow monitoring, the SSE endpoint at /api/v1/flows/{fid}/events streams task state changes and flow completion events in real time. See the API Reference for details.
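
text/event-stream responses are newline-delimited and straightforward to consume without a dedicated client. A minimal sketch of the parsing side, handling only the event: and data: fields and blank-line dispatch; the event name and payload below are placeholders, not Tasked's actual event types:

```python
from typing import Iterable, Iterator, Tuple

def iter_sse_events(lines: Iterable[str]) -> Iterator[Tuple[str, str]]:
    """Yield (event, data) pairs from a text/event-stream body."""
    event, data = "message", []
    for line in lines:
        if line.startswith("event:"):
            event = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data.append(line[len("data:"):].strip())
        elif line == "":  # a blank line dispatches the accumulated event
            if data:
                yield event, "\n".join(data)
            event, data = "message", []

# Placeholder stream; real events carry task/flow state payloads.
stream = ["event: example", 'data: {"state":"done"}', ""]
events = list(iter_sse_events(stream))
```
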

Engine

The engine runs background loops for task dispatch, schedule evaluation, and flow cleanup. These intervals can be tuned via CLI flags or environment variables.

| Setting | Default | Description |
| --- | --- | --- |
| cleanup_interval | 3600s | How often the engine sweeps for terminal flows past their queue's retention_secs |
| schedule_interval | 60s | How often the engine evaluates cron schedules and submits due flows |

The cleanup loop only deletes flows from queues that have a retention_secs configured. See Queues → Retention for details.

The schedule loop checks all active schedules and submits a new flow whenever a cron expression fires. See Concepts → Schedules for an overview.

Authentication

Tasked supports optional API key authentication to protect the REST API and MCP server. When enabled, all requests must include a valid API key in the Authorization header.

| Flag | Default | Description |
| --- | --- | --- |
| --auth-mode | none | Authentication mode: none (open) or api-key |
| --api-key | - | The API key to require. Also reads from TASKED_API_KEY env var. |

# Enable API key auth via flags
tasked-server serve --auth-mode api-key --api-key my-secret-key

# Or via environment variable
TASKED_API_KEY=my-secret-key tasked-server serve --auth-mode api-key

When --auth-mode api-key is set, clients must include the key as a Bearer token:

curl -H "Authorization: Bearer my-secret-key" \
  http://localhost:8080/api/v1/queues
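
From code, the same header can be attached with the standard library; a minimal Python sketch, where the URL and key are the placeholder values from the examples above:

```python
import urllib.request

def authed_request(url: str, api_key: str) -> urllib.request.Request:
    """Build a request carrying the API key as a Bearer token, as required
    when --auth-mode api-key is set."""
    return urllib.request.Request(
        url, headers={"Authorization": f"Bearer {api_key}"}
    )

req = authed_request("http://localhost:8080/api/v1/queues", "my-secret-key")
# Send with urllib.request.urlopen(req) once the server is running.
```
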

Health (/healthz) and metrics (/metrics) endpoints are not protected by authentication.

Metrics Push

In addition to the pull-based /metrics endpoint, Tasked can push Prometheus metrics to a remote URL at a fixed interval. This is useful when the server is behind a firewall or in environments where a Prometheus scraper cannot reach the server.

| Flag | Default | Description |
| --- | --- | --- |
| --metrics-push-url | - | URL to POST Prometheus metrics to every 30 seconds |

# Push metrics to a Prometheus Pushgateway
tasked-server serve --metrics-push-url https://pushgateway.example.com/metrics/job/tasked

The server sends a POST request containing the same Prometheus text exposition format as the /metrics endpoint. If the remote endpoint is unreachable, the failure is logged and the push is retried on the next interval.
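
One push cycle can be sketched client-side with the documented retry-on-next-interval semantics. The Content-Type below is the standard Prometheus text-format one, and push_once is a hypothetical helper, not part of Tasked:

```python
import urllib.request

def push_once(body: bytes, push_url: str, timeout: float = 5.0) -> bool:
    """POST one Prometheus text-format payload. On failure, report and
    return False so the caller retries on the next interval."""
    req = urllib.request.Request(
        push_url,
        data=body,
        method="POST",
        headers={"Content-Type": "text/plain; version=0.0.4"},
    )
    try:
        with urllib.request.urlopen(req, timeout=timeout):
            return True
    except OSError as exc:  # covers URLError, timeouts, refused connections
        print(f"metrics push failed, retrying next interval: {exc}")
        return False
```
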
