Audit Log¶
Every mutation in a Craft Easy application is recorded to an audit_log collection automatically. You do not have to call a logger, add decorators, or remember to instrument new endpoints — the framework wires itself into the CRUD hook system at resource registration time, and every insert, update, replace, and delete that flows through a registered resource is captured.
The audit log is the authoritative record of who changed what, when, and from where. It backs compliance reporting, security investigations, and the "show me the history of this record" feature that every admin UI eventually needs.
The AuditEntry model¶
class AuditEntry(BaseDocument):
    resource: str                     # Collection name, e.g. "users", "invoices"
    item_id: PydanticObjectId         # The document the change targeted
    operation: str                    # "insert" | "update" | "replace" | "delete" | "depersonalize"
    user_id: PydanticObjectId | None  # The authenticated user who performed the change
    user_name: str | None             # Display name captured at write time
    ip: str | None                    # Source IP address
    changes: dict | None              # Field-level diff (update/replace only)
    snapshot: dict | None             # Full document before deletion (delete only)
Entries also inherit created_at and tenant_id from BaseDocument. The collection is audit_log, with indexes on (resource, item_id) for per-record history lookups, (user_id) for "what has this user done", and (created_at DESC) for chronological queries.
What goes in changes¶
For updates and replaces, changes is a dict of field name → {"old": ..., "new": ...}. Only fields that actually changed are included — an update that touches 20 fields but only modifies 2 produces an entry with 2 keys:
{
  "email": {"old": "old@example.com", "new": "new@example.com"},
  "phone": {"old": "+46700000000", "new": "+46701111111"}
}
What goes in snapshot¶
For deletes, snapshot is the complete document as it existed immediately before deletion. This is the only place deleted data can be recovered from once soft-delete retention has expired, so don't purge audit entries before you purge soft-deleted documents.
Automatic logging via CRUD hooks¶
When the framework registers a resource it calls register_auto_hooks(), which attaches three audit hooks to every resource:
| Hook | Triggers | What it logs |
|---|---|---|
| `AFTER_CREATE` | After a successful insert | `log_insert(resource, item.id, user)` |
| `AFTER_UPDATE` | After a successful update/replace | `log_update(resource, item.id, changes, user)` with a computed diff |
| `AFTER_DELETE` | After a successful delete | `log_delete(resource, item.id, snapshot, user)` with the full pre-delete document |
The update hook compares ctx.item.model_dump() against ctx.previous key by key. If nothing changed (a no-op PATCH, a hook-only side effect, etc.) it skips writing an entry — an audit log full of empty updates is noise.
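The key-by-key comparison can be sketched as a small pure-Python helper — a hypothetical stand-in for the framework's internal diff, not its actual implementation:

```python
def compute_changes(previous: dict, current: dict) -> dict:
    """Field-level diff in the audit-log format: {field: {"old": ..., "new": ...}}.

    Only keys whose values actually differ are included, so a no-op
    update produces an empty dict and no audit entry gets written.
    """
    changes = {}
    for field in previous.keys() | current.keys():
        old, new = previous.get(field), current.get(field)
        if old != new:
            changes[field] = {"old": old, "new": new}
    return changes

diff = compute_changes(
    {"email": "old@example.com", "phone": "+46700000000", "name": "Alice"},
    {"email": "new@example.com", "phone": "+46700000000", "name": "Alice"},
)
# Only the modified field appears:
# {"email": {"old": "old@example.com", "new": "new@example.com"}}
```

Unchanged fields drop out entirely, which is what keeps a 20-field PATCH that modifies 2 fields down to a 2-key entry.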
Direct calls¶
If you mutate a document outside the normal CRUD path (a background job, a domain event handler, a migration), call AuditLogger directly so the change does not vanish from the audit trail:
from craft_easy.core.audit.logger import AuditLogger

await AuditLogger.log_update(
    resource="invoices",
    item_id=invoice.id,
    changes={"status": {"old": "pending", "new": "paid"}},
    user=ctx.user,  # TokenPayload, or None for system operations
)
AuditLogger exposes log_insert, log_update, and log_delete. Each takes an optional user — if omitted or anonymous, the entry is stored with user_id=None, which is the correct representation of a system-initiated change.
REST API¶
All audit endpoints live under /admin/audit-log and require admin or system-scope credentials. Regular users cannot read the audit log, even for their own tenant.
List entries¶
curl "http://localhost:5001/admin/audit-log?resource=users&operation=update&page=1&per_page=50" \
-H "Authorization: Bearer $TOKEN"
| Parameter | Purpose |
|---|---|
| `resource` | Filter by collection name |
| `resource_id` | Filter to a specific document |
| `user_id` | Who performed the operation |
| `operation` | `insert`, `update`, `replace`, `delete`, `depersonalize` |
| `date_from`, `date_to` | ISO 8601 range filter on `created_at` |
| `page`, `per_page` | Pagination (default 1 / 50, max `per_page` 500) |
| `sort` | Sort field with optional `-` prefix, default `-created_at` |
Response:
{
  "items": [
    {
      "id": "664a...",
      "resource": "users",
      "item_id": "662b...",
      "operation": "update",
      "user_id": "661c...",
      "user_name": "alice@example.com",
      "ip": "192.168.1.10",
      "changes": {"email": {"old": "old@...", "new": "new@..."}},
      "snapshot": null,
      "created_at": "2026-04-05T10:15:42+00:00",
      "tenant_id": "660a..."
    }
  ],
  "total": 128,
  "page": 1,
  "per_page": 50
}
Statistics¶
curl "http://localhost:5001/admin/audit-log/stats?group_by=day&date_from=2026-03-01T00:00:00Z" \
-H "Authorization: Bearer $TOKEN"
| Parameter | Purpose |
|---|---|
| `resource`, `user_id`, `date_from`, `date_to` | Same filters as the list endpoint |
| `group_by` | `day` (default), `week`, or `month` — timeline bucket size |
Response:
{
  "total_entries": 4250,
  "by_operation": {"insert": 1800, "update": 2100, "delete": 350},
  "by_resource": {"users": 1500, "invoices": 2750},
  "by_user": [
    {"user_id": "661c...", "user_name": "alice", "count": 890},
    {"user_id": "663d...", "user_name": "bob", "count": 412}
  ],
  "timeline": [
    {"period": "2026-04-01", "count": 142},
    {"period": "2026-04-02", "count": 167},
    {"period": "2026-04-03", "count": 189}
  ]
}
The aggregation runs as a MongoDB pipeline — the response is fast even over millions of entries as long as the indexes on resource, user_id, and created_at are in place.
Export¶
# JSON
curl "http://localhost:5001/admin/audit-log/export?format=json&date_from=2026-03-01T00:00:00Z" \
-H "Authorization: Bearer $TOKEN" \
-o audit_log.json
# CSV (flat columns: id, resource, item_id, operation, user_id, user_name, ip, created_at)
curl "http://localhost:5001/admin/audit-log/export?format=csv&resource=invoices" \
-H "Authorization: Bearer $TOKEN" \
-o audit_log.csv
| Parameter | Purpose |
|---|---|
| `resource`, `resource_id`, `user_id`, `operation`, `date_from`, `date_to` | Standard filters |
| `format` | `json` (default) or `csv` |
| `limit` | Max rows (default 10 000, max 100 000) |
Use this to hand auditors a file or to feed a SIEM that does not speak the REST API. The CSV is intentionally flat — it does not include changes or snapshot dicts, because nested structures do not round-trip through CSV cleanly. Use format=json when you need the full record.
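If you do need a CSV that was filtered beyond what the endpoint offers, the flat shape is easy to reproduce client-side from a JSON export — a standard-library sketch using the column list documented above:

```python
import csv
import io

# Flat columns as documented for the CSV export
CSV_COLUMNS = ["id", "resource", "item_id", "operation",
               "user_id", "user_name", "ip", "created_at"]

def entries_to_csv(entries: list[dict]) -> str:
    """Flatten audit entries to CSV; nested changes/snapshot dicts are dropped."""
    buf = io.StringIO()
    # extrasaction="ignore" silently skips keys not in CSV_COLUMNS
    writer = csv.DictWriter(buf, fieldnames=CSV_COLUMNS, extrasaction="ignore")
    writer.writeheader()
    writer.writerows(entries)
    return buf.getvalue()
```

`DictWriter` writes `None` values as empty cells, which matches how a system-initiated entry (user_id null) should appear in a flat export.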
Retention purge¶
curl -X DELETE "http://localhost:5001/admin/audit-log/retention?older_than_days=365" \
-H "Authorization: Bearer $TOKEN"
Response:
{
  "message": "Purged audit entries older than 365 days",
  "deleted_count": 54321,
  "cutoff_date": "2025-04-05T00:00:00+00:00"
}
This endpoint is restricted to system administrators — it is the one place where audit entries are deleted rather than written. Schedule it or run it manually; the framework does not enforce a retention policy on its own.
For scheduled cleanup, use the built-in cleanup_audit_log job instead — it respects the JOBS_AUDIT_RETENTION_DAYS setting and runs through the job framework with full audit-of-audit telemetry.
Tenant scoping¶
Audit queries are tenant-scoped automatically. The query builder checks the caller's scope:
- System users (scope `system` or `is_system_user=True`) see every entry across every tenant.
- Regular admins see only entries whose `tenant_id` matches their own.
Because AuditEntry inherits tenant_id from BaseDocument, the value is set when the entry is created — by the same mechanism that tags every other document with its tenant. There is no way for a tenant admin to exfiltrate audit data from another tenant, and tenant scoping cannot be switched off for an individual tenant.
GDPR integration¶
Depersonalization and erasure are audit-relevant events, so they write their own entries with operation="depersonalize":
{
  "resource": "users",
  "item_id": "664a...",
  "operation": "depersonalize",
  "changes": {"gdpr_fields": ["email", "phone", "personal_id"], "user_id": "664a..."},
  "user_id": null,
  "created_at": "2026-04-05T12:00:00+00:00"
}
The changes dict lists which fields were depersonalized. user_id is None because erasure is typically driven by a compliance workflow, not a specific logged-in user — if you need the operator recorded, call the erasure endpoint with a service account token and the token payload will be captured.
See GDPR for the full erasure flow. The erasure-execute endpoint writes one audit entry per depersonalized document plus one per revoked consent plus one for the account disable, so the full operation is reconstructable after the fact.
Settings¶
| Setting | Default | Purpose |
|---|---|---|
| `AUDIT_LOG_ENABLED` | `true` | Master toggle. When false, `AuditLogger._log()` returns early and no entries are written. Use for local load testing, not production. |
| `AUDIT_LOG_RETENTION_DAYS` | `90` | Advisory retention window. The framework itself does not purge on this value — the `cleanup_audit_log` job reads `JOBS_AUDIT_RETENTION_DAYS` (which defaults to 365) when it runs. Align the two if you rely on the cleanup job. |
| `MULTI_TENANT_ENABLED` | `true` | Controls whether the audit query builder applies the `tenant_id` filter for non-system callers. |
Common queries¶
# Everything a specific user did in the last 24 hours
curl "http://localhost:5001/admin/audit-log?user_id=661c...&date_from=2026-04-04T00:00:00Z"
# Full history of a specific invoice
curl "http://localhost:5001/admin/audit-log?resource=invoices&resource_id=662b...&sort=created_at"
# All deletes across the tenant in March
curl "http://localhost:5001/admin/audit-log?operation=delete&date_from=2026-03-01T00:00:00Z&date_to=2026-04-01T00:00:00Z"
# All GDPR depersonalizations ever
curl "http://localhost:5001/admin/audit-log?operation=depersonalize&per_page=500"
Combine filters freely — the query builder ANDs them together and uses the indexes on resource, user_id, and created_at.
What is not logged¶
The audit log is scoped to CRUD mutations on registered resources. It does not cover:
- Reads — query the API access log or set up per-endpoint logging if you need read auditing.
- Login attempts — auth events go to the authentication log, not the audit log.
- Out-of-band writes — anything that touches MongoDB directly bypasses the hooks. Do not write raw BSON; go through `Resource`.
- Job runs — see `JobRun` documents for job history.
- Config changes — only changes to documents in registered collections are captured.
If you need to audit something the framework does not capture, call AuditLogger.log_* yourself at the edge where the change happens. That's what the public API is there for.