
craft-easy-jobs CLI (standalone)

craft-easy-jobs is a lightweight standalone package that provides a CLI and runner for Craft Easy-style jobs. Unlike the API job framework, it has no database, no persistent schedules, and no REST API — it is a click-based executable designed for Cloud Run, GitHub Actions, Kubernetes CronJobs, and ad-hoc manual runs.

pip install craft-easy-jobs

# Optional: install with built-in jobs that talk to craft-easy-api
pip install craft-easy-jobs[full]

The only required dependency is click>=8.0. The [full] extra adds craft-easy-api so the built-in jobs (bi-export, token-cleanup, audit-log-archive) have access to Beanie models.

When to use which

Craft Easy has two job systems on purpose — they solve different problems.

| Scenario | craft-easy-jobs CLI | API jobs module |
| --- | --- | --- |
| Ad-hoc manual runs | Yes | No |
| Cloud Run / ephemeral containers | Yes | No |
| Cron scheduling with persistent history | No | Yes |
| Multi-instance coordination (distributed lock) | No | Yes |
| Embedded inside a FastAPI service | No | Yes |
| CI/CD, GitHub Actions, Kubernetes CronJob | Yes | No |
| Admin UI with schedule management | No | Yes |
| Real-time progress tracking in a database | No | Yes |
| Per-tenant or parent-tenant scoping | Yes | Partial (per_tenant flag) |

Rule of thumb: if something should trigger the job (a cron expression you want the API to respect, an admin clicking "Run now", a chain after a previous job) — use the API module. If the job is run from outside the API (Cloud Scheduler, a cron pod, a human at a terminal) — use the CLI.

The two can coexist. The same job definition can be registered in both systems, and teams often wire fast, interactive jobs through the API module while pushing heavy, isolated jobs (BI exports, large data migrations) to dedicated CLI containers.

Defining jobs

# jobs/settlement.py
from craft_easy_jobs import job, JobContext

@job(
    name="settlement",
    description="Monthly revenue settlement",
    timeout_seconds=600,
    retries=2,
    params_schema={
        "period": {"type": "string", "pattern": "^\\d{4}-\\d{2}$"},
    },
)
async def settlement_job(ctx: JobContext):
    period = ctx.params["period"]
    await ctx.log(f"Starting settlement for {period}")
    # ... work ...
    return {"processed": 42, "period": period}

The full decorator reference is in Job Decorators & Context.
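To give a feel for what timeout_seconds and retries mean at run time, here is a rough sketch of the semantics: each attempt is bounded by the timeout, and a failed attempt is retried up to retries times. The helper names are hypothetical, not the actual craft-easy-jobs internals.

```python
import asyncio

# Illustrative sketch of how a runner might honour timeout_seconds and
# retries from the @job decorator. Not the real implementation.
async def run_with_policy(fn, *, timeout_seconds, retries):
    last_exc = None
    for attempt in range(retries + 1):  # first run plus `retries` re-runs
        try:
            return await asyncio.wait_for(fn(), timeout=timeout_seconds)
        except Exception as exc:  # a timeout also counts as a failed attempt
            last_exc = exc
    raise last_exc

async def flaky(state={"calls": 0}):
    # Fails once, then succeeds, to exercise the retry path.
    state["calls"] += 1
    if state["calls"] < 2:
        raise RuntimeError("transient failure")
    return {"processed": 42}

result = asyncio.run(run_with_policy(flaky, timeout_seconds=5, retries=2))
```

With retries=2, a job that fails on its first attempt but succeeds on the second completes normally; only after three failed attempts does the error propagate.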

Running from the command line

After installing, the CLI is available as craft-easy-job:

# Trivial run
craft-easy-job run token-cleanup

# With parameters
craft-easy-job run settlement --params '{"period": "2026-03"}'

# For a specific tenant
craft-easy-job run bi-export \
  --tenant-id tenant_664abc \
  --params '{"target_type": "bigquery", "project_id": "...", "dataset_id": "...", "configs": [...]}'

# Parent-tenant scope (runs for the tenant and all descendants)
craft-easy-job run org-rollup \
  --tenant-id parent_org_123 \
  --scope parent_tenant

# Explicit sub-tenant list
craft-easy-job run rollup \
  --tenant-id parent_123 \
  --scope parent_tenant \
  --sub-tenant-ids child1,child2,child3

# List every registered job and chain
craft-easy-job list

CLI options reference

| Flag | Purpose |
| --- | --- |
| --params, -p | JSON object of parameters (default: {}) |
| --tenant-id, -t | Tenant to run the job for |
| --scope, -s | tenant (default), parent_tenant, or system |
| --sub-tenant-ids | Comma-separated list of descendant tenants (parent_tenant scope) |

craft-easy-job list prints every registered job and every chain, including their timeouts, retries, and scopes — useful for smoke-testing a deployment.
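The flags compose into a single run request. The real CLI is built on click; the argparse sketch below is only an illustration of how the documented flags might map onto a request dict, with all names hypothetical.

```python
import argparse
import json

# Illustrative stand-in for the documented flag surface. The actual CLI
# uses click; this only shows how the flags combine into one request.
parser = argparse.ArgumentParser(prog="craft-easy-job")
sub = parser.add_subparsers(dest="command")
run = sub.add_parser("run")
run.add_argument("job_name")
run.add_argument("--params", "-p", default="{}")
run.add_argument("--tenant-id", "-t", default=None)
run.add_argument("--scope", "-s", default="tenant",
                 choices=["tenant", "parent_tenant", "system"])
run.add_argument("--sub-tenant-ids", default=None)

args = parser.parse_args([
    "run", "rollup",
    "--tenant-id", "parent_123",
    "--scope", "parent_tenant",
    "--sub-tenant-ids", "child1,child2,child3",
])
request = {
    "job": args.job_name,
    "params": json.loads(args.params),          # --params is a JSON object
    "tenant_id": args.tenant_id,
    "scope": args.scope,
    "sub_tenant_ids": args.sub_tenant_ids.split(",") if args.sub_tenant_ids else [],
}
```

Note that --params defaults to an empty JSON object, so a job with no parameters can be run with just its name.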

Scopes

The standalone CLI has a richer tenant model than the API framework. Every job has a scope that determines how it interacts with the tenant tree:

| Scope | Behaviour |
| --- | --- |
| tenant (default) | Runs for one tenant. ctx.tenant_id is set; ctx.iter_tenant_ids() returns [tenant_id]. |
| parent_tenant | Runs for a parent tenant and all its descendants. ctx.iter_tenant_ids() iterates the tenant plus every sub-tenant. |
| system | Runs globally with no tenant context. Used by maintenance jobs like token-cleanup. |

If craft-easy-api is installed, the CLI auto-resolves sub-tenants by calling craft_easy.core.tenant.get_tenant_tree(). Otherwise, pass them explicitly with --sub-tenant-ids.

@job(name="org-rollup", scope="parent_tenant")
async def rollup(ctx: JobContext):
    # Materialise once: iterating twice could exhaust the iterable.
    tenant_ids = list(ctx.iter_tenant_ids())
    for tenant_id in tenant_ids:
        await rollup_tenant(tenant_id)
    return {"processed": len(tenant_ids)}
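The scope semantics above can be sketched in plain Python, using a simple parent-to-children dict in place of craft_easy.core.tenant.get_tenant_tree(). The helper is hypothetical, written only to make the three behaviours concrete.

```python
# Sketch of the three scope behaviours described in the table above.
# `tree` stands in for the tenant hierarchy that craft-easy-api would
# normally resolve; this helper is not part of the package.
def resolve_tenant_ids(scope, tenant_id=None, sub_tenant_ids=None, tree=None):
    if scope == "system":
        return []                       # no tenant context at all
    if scope == "tenant":
        return [tenant_id]              # exactly one tenant
    if scope == "parent_tenant":
        # Explicit --sub-tenant-ids wins; otherwise fall back to the tree.
        subs = sub_tenant_ids if sub_tenant_ids is not None else (tree or {}).get(tenant_id, [])
        return [tenant_id, *subs]       # parent plus every descendant
    raise ValueError(f"unknown scope: {scope}")

tree = {"parent_123": ["child1", "child2", "child3"]}
ids = resolve_tenant_ids("parent_tenant", "parent_123", tree=tree)
```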

Deployment patterns

GitHub Actions

- name: Nightly BI export
  run: |
    pip install craft-easy-jobs[full]
    craft-easy-job run bi-export \
      --tenant-id ${{ secrets.TENANT_ID }} \
      --params '${{ secrets.BI_EXPORT_PARAMS }}'
  env:
    MONGODB_URI: ${{ secrets.MONGODB_URI }}

Google Cloud Run + Cloud Scheduler

FROM python:3.12-slim
RUN pip install craft-easy-jobs[full]
COPY my_jobs.py /app/my_jobs.py
ENV PYTHONPATH=/app
CMD ["craft-easy-job", "run", "month-end-close"]

Deploy the image as a Cloud Run job and point Cloud Scheduler at it. On each trigger the container boots, runs the job, and exits, so you pay only for the execution time.

Kubernetes CronJob

apiVersion: batch/v1
kind: CronJob
metadata:
  name: token-cleanup
spec:
  schedule: "0 3 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: cleanup
            image: my-registry/craft-easy-jobs:latest
            command: ["craft-easy-job", "run", "token-cleanup"]
          restartPolicy: OnFailure

Ad-hoc troubleshooting

# On a developer machine
source .venv/bin/activate
pip install craft-easy-jobs[full]
export MONGODB_URI="mongodb://localhost:27017"
craft-easy-job run settlement --params '{"period": "2026-03", "dry_run": true}'

No API, no scheduling — just run the job once with your own parameters.

Programmatic use

The CLI is a thin wrapper around JobRunner.run(), which you can call directly:

import asyncio
from craft_easy_jobs.runner import JobRunner

async def main():
    result = await JobRunner.run(
        "settlement",
        params={"period": "2026-03"},
        tenant_id="tenant_664abc",
    )
    print(result.status)              # "completed" or "failed"
    print(result.duration_seconds)
    print(result.result)              # Return value
    for log in result.logs:
        print(log["level"], log["message"])

asyncio.run(main())

JobResult exposes everything the CLI prints, so embedding the runner in your own tooling is straightforward.
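One common embedding is a CI entrypoint that maps the result status to a process exit code, since the runner itself does not exit for you. A minimal sketch, using a stub in place of the real JobResult so it stands alone:

```python
from dataclasses import dataclass, field

# Stub standing in for craft_easy_jobs' JobResult so this sketch is
# self-contained; per the docs the real class carries the same fields.
@dataclass
class StubResult:
    status: str
    result: dict = field(default_factory=dict)
    logs: list = field(default_factory=list)

def exit_code_for(result):
    # CI convention: 0 on success, nonzero on failure.
    return 0 if result.status == "completed" else 1

code = exit_code_for(StubResult(status="completed"))
# In a real script: sys.exit(exit_code_for(await JobRunner.run(...)))
```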

Built-in jobs

Three jobs are shipped with the package and become available when you install craft-easy-jobs[full]:

| Job | Description | Timeout | Retries |
| --- | --- | --- | --- |
| token-cleanup | Delete expired UserAuthentication sessions | 120s | 0 |
| audit-log-archive | Archive old audit log entries | 600s | 0 |
| bi-export | Export resources to BigQuery or Azure SQL | 3600s | 2 |

See Built-in Jobs for parameter reference and usage examples.

Limits of the standalone runner

The CLI is intentionally minimal. Things it does not do:

  • No persistent history. JobResult is returned to the caller and that's it. If you need to keep it, write it to your own store.
  • No scheduling. Use cron, Cloud Scheduler, GitHub Actions, or Kubernetes CronJobs to trigger it.
  • No distributed locking. If you invoke the same CLI twice concurrently, you will get two concurrent runs. For single-execution guarantees, use the API framework or a trigger system that dedupes.
  • No REST API. The CLI is the interface.
  • No chain execution. The chain() helper lets you define chains, but triggering them requires the API framework (which reads the chain off a JobSchedule).

If any of these are dealbreakers for your use case, use the API job framework instead.
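If you only need to keep run results, that alone does not require the API framework: because the runner hands the result back to the caller, persisting it is a few lines. One minimal pattern, hypothetical and not part of the package, is appending each outcome to a JSONL file:

```python
import json
import time
from pathlib import Path

# Hypothetical persistence for run outcomes; craft-easy-jobs itself
# stores nothing, so the caller owns this entirely.
def record_run(path, job_name, status, result):
    entry = {
        "job": job_name,
        "status": status,
        "result": result,
        "recorded_at": time.time(),
    }
    with open(path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(entry) + "\n")  # one JSON object per line
    return entry

history = Path("job-history.jsonl")
record_run(history, "settlement", "completed", {"processed": 42})
lines = history.read_text(encoding="utf-8").strip().splitlines()
```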