Hosting & Deployment

Overview

Craft Easy treats deployment as a separate concern from the framework. The framework packages (craft-easy-api, craft-easy-jobs, craft-easy-file-import, craft-easy-admin) are cloud-agnostic — they ship as Python/npm packages plus a Dockerfile. Each consuming project owns its own infrastructure-as-code and decides which cloud to run it on.

To avoid reinventing the Terraform boilerplate, Craft Easy ships a library of reusable modules in craft-easy-template/shared/infrastructure-modules/. When you generate a new project with cookiecutter the relevant modules are copied into the project — you own the copy and can edit it freely.

Hosting Requirements

Requirement        Detail
Container runtime  Any serverless container platform (scale-to-zero recommended)
Database           MongoDB 7.x compatible (MongoDB Atlas, Cosmos DB vCore, or self-managed)
TLS                Required for all environments
Secrets            Managed secrets store (Key Vault, Secret Manager, etc.)
Monitoring         Log aggregation + alerting

Craft Easy has first-class modules for Azure and Google Cloud Platform (GCP), plus a Cloudflare DNS module. You can mix and match — for example running the API and database in Azure while running scheduled jobs in GCP. See Cross-cloud deployments below.


Table of Contents

  1. Module Library
  2. Generating a project
  3. Azure reference architecture
  4. GCP reference architecture
  5. Database choice
  6. Cross-cloud deployments
  7. Pricing
  8. Disaster recovery
  9. Reference: service comparison

1. Module Library

The modules live under craft-easy-template/shared/infrastructure-modules/ and are distributed via cookiecutter into each generated project.

Azure modules

Module                            Purpose
azure/container-app               Container Apps web service with scale-to-zero, health probes, managed identity, AcrPull role
azure/container-app-job           Container Apps Job for batch/scheduled work (cron or manual trigger)
azure/container-apps-environment  Shared Container Apps Environment wired to Log Analytics
azure/container-registry          Azure Container Registry (ACR)
azure/cosmos-db                   Cosmos DB for MongoDB vCore (real MongoDB engine, not RU)
azure/key-vault                   Key Vault with RBAC, optional initial secrets, reader principals
azure/log-analytics               Log Analytics Workspace with retention and daily quota
azure/static-web-app              Static Web Apps for frontends and docs sites

GCP modules

Module                 Purpose
gcp/cloud-run          Cloud Run v2 service, scale-to-zero, dedicated service account, optional VPC connector
gcp/cloud-run-job      Cloud Run Jobs + Cloud Scheduler cron integration
gcp/artifact-registry  Docker repository with cleanup policies
gcp/atlas-cluster      MongoDB Atlas cluster (M0 free tier or dedicated) with user + IP access list
gcp/secret-manager     Secret Manager secrets with per-secret IAM accessors
gcp/cloud-logging      Log bucket, optional sink, optional error-rate alert policy

Cloudflare module

Module          Purpose
cloudflare/dns  DNS records on a Cloudflare zone (A, AAAA, CNAME, TXT, MX)
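A typical use is pointing a custom domain at a deployed service. A minimal sketch, assuming the module accepts inputs named zone_id and records (check the module's README for the actual variable names):

```hcl
# Hypothetical wiring: point api.example.com at a Container App's FQDN.
# The input names (zone_id, records) are assumptions, not the module's
# documented interface; module.api.fqdn is a documented output.
module "dns" {
  source  = "./modules/cloudflare/dns"
  zone_id = var.cloudflare_zone_id

  records = [
    {
      name  = "api"
      type  = "CNAME"
      value = module.api.fqdn   # exported by azure/container-app
    }
  ]
}
```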

Module contract

Every module follows the same contract:

  • main.tf — resources
  • variables.tf — always takes name, env, and tags / labels
  • outputs.tf — exports what downstream modules need (fqdn, connection_string, egress_ip, principal_id, service_account_email)
  • versions.tf — provider constraints
  • README.md — usage example and input/output tables

Modules are consumed as local paths in a generated project (source = "./modules/azure/container-app"). That means the project owns its copy; upgrades happen by regenerating or by vendoring a newer version from the template.
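Under that contract, a module call in a generated project looks roughly like this sketch (exact variable names can differ per module; each module's README is authoritative):

```hcl
# Sketch of the shared contract: every module takes name, env, and tags,
# and is consumed as a local path inside the generated project.
module "api" {
  source = "./modules/azure/container-app"

  name = "myapp-api"
  env  = "dev"
  tags = { project = "myapp", managed_by = "terraform" }
}

# Downstream consumers read the documented outputs.
output "api_fqdn" {
  value = module.api.fqdn
}
```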


2. Generating a project

craft-easy-template is a cookiecutter template. Pass cloud_provider to pick the target cloud:

cookiecutter craft-easy-template/api
# Cloud choice (azure, gcp, none):
#   cloud_provider [azure]: azure
#   cloud_region_azure [swedencentral]:
#   enable_jobs_deployment [no]: no

Picking azure or gcp copies the matching module library plus a pre-wired infrastructure/main.tf into the new project. Picking none skips all IaC — you get just the Dockerfile and CI workflow.

Generated layout

{project_slug}/
├── Dockerfile
├── docker-compose.yml
├── src/
├── tests/
├── pyproject.toml
├── .github/workflows/
│   ├── ci.yml
│   └── deploy-azure.yml          # or deploy-gcp.yml
└── infrastructure/
    ├── main.tf                   # Wires the modules together
    ├── variables.tf
    ├── backend.tf                # Remote state (azurerm or gcs)
    ├── terraform.tfvars.example
    ├── environments/
    │   ├── dev.tfvars
    │   └── prod.tfvars
    └── modules/
        └── azure/ or gcp/        # Copied from the shared library

The deploy workflow authenticates via OIDC (no long-lived credentials), builds and pushes the image, then runs terraform apply against the selected environment.

Local deploy

cd infrastructure
terraform init -backend-config="environments/dev.backend.hcl"
terraform plan  -var-file="environments/dev.tfvars"
terraform apply -var-file="environments/dev.tfvars"

3. Azure reference architecture

┌───────────────────────────────────────────────────────┐
│                         Azure                         │
│                                                       │
│  ┌─────────────────────────────────────────────────┐  │
│  │           Container Apps Environment            │  │
│  │                                                 │  │
│  │   ┌────────────┐    ┌────────────┐              │  │
│  │   │  API app   │    │  Jobs app  │  (optional)  │  │
│  │   │ scale 0-N  │    │  cron/one  │              │  │
│  │   └─────┬──────┘    └─────┬──────┘              │  │
│  └─────────┼─────────────────┼─────────────────────┘  │
│            │                 │                        │
│   ┌────────▼─────────────────▼───────────────────┐    │
│   │  Cosmos DB vCore (MongoDB, Free or M30+HA)   │    │
│   └──────────────────────────────────────────────┘    │
│                                                       │
│   ┌─────────┐  ┌──────────┐  ┌──────────────────┐     │
│   │   ACR   │  │Key Vault │  │  Log Analytics   │     │
│   └─────────┘  └──────────┘  └──────────────────┘     │
└───────────────────────────────────────────────────────┘

Wire it up with the modules in examples/azure-only/main.tf:

module "log_analytics" { source = "../../shared/infrastructure-modules/azure/log-analytics" ... }
module "acr"           { source = "../../shared/infrastructure-modules/azure/container-registry" ... }
module "cae"           { source = "../../shared/infrastructure-modules/azure/container-apps-environment" ... }
module "cosmos"        { source = "../../shared/infrastructure-modules/azure/cosmos-db" ... }
module "api"           { source = "../../shared/infrastructure-modules/azure/container-app" ... }
module "jobs"          { source = "../../shared/infrastructure-modules/azure/container-app-job" ... }
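Expanded, the wiring works by chaining each module's outputs into the next module's inputs. A sketch of the api call under assumed input names (environment_id, registry, secrets are illustrative, not the module's exact variables; the outputs referenced are the documented ones):

```hcl
# Illustrative only: shows how outputs from cae, acr, and cosmos chain
# into the container-app module. Input names are assumptions.
module "api" {
  source = "./modules/azure/container-app"

  name           = "myapp-api"
  env            = "prod"
  environment_id = module.cae.id            # shared Container Apps Environment
  registry       = module.acr.login_server  # image pulled via managed identity

  secrets = {
    MONGODB_URI = module.cosmos.connection_string
  }

  tags = local.tags
}
```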

4. GCP reference architecture

┌───────────────────────────────────────────────────────┐
│                          GCP                          │
│                                                       │
│   ┌───────────────┐     ┌───────────────┐             │
│   │  Cloud Run    │     │Cloud Run Jobs │ (optional)  │
│   │  API service  │     │  + Scheduler  │             │
│   │  scale 0-N    │     │  (cron)       │             │
│   └───────┬───────┘     └───────┬───────┘             │
│           │                     │                     │
│   ┌───────▼─────────────────────▼───────────────┐     │
│   │               Secret Manager                │     │
│   │          (MONGODB_URI and friends)          │     │
│   └───────────────┬─────────────────────────────┘     │
│                   │                                   │
│   ┌───────────────▼─────────────────┐   ┌────────┐    │
│   │   Artifact Registry (Docker)    │   │Logging │    │
│   └─────────────────────────────────┘   └────────┘    │
└───────────────────────────────────────────────────────┘
          MongoDB Atlas (M0 / M10+)

Wire it up with the modules in examples/gcp-only/main.tf:

module "artifact_registry" { source = "../../shared/infrastructure-modules/gcp/artifact-registry" ... }
module "atlas"             { source = "../../shared/infrastructure-modules/gcp/atlas-cluster" ... }
module "secrets"           { source = "../../shared/infrastructure-modules/gcp/secret-manager" ... }
module "api"               { source = "../../shared/infrastructure-modules/gcp/cloud-run" ... }
module "jobs"              { source = "../../shared/infrastructure-modules/gcp/cloud-run-job" ... }
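On GCP the connection string flows through Secret Manager instead of being injected directly: the atlas module's output is stored once as a secret, and Cloud Run reads it by reference. A sketch with assumed input/output names (secrets, secret_refs, secret_ids, and image are illustrative; connection_string and repository_url follow the documented module contract):

```hcl
# Illustrative chaining: Atlas output → Secret Manager → Cloud Run.
# Variable names here are assumptions; check each module's README.
module "secrets" {
  source = "./modules/gcp/secret-manager"

  secrets = {
    MONGODB_URI = module.atlas.connection_string  # ground truth lives here
  }
}

module "api" {
  source = "./modules/gcp/cloud-run"

  name  = "myapp-api"
  env   = "prod"
  image = "${module.artifact_registry.repository_url}/api:latest"

  # Mounted as an env var at runtime via per-secret IAM accessors.
  secret_refs = {
    MONGODB_URI = module.secrets.secret_ids["MONGODB_URI"]
  }
}
```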

5. Database choice

Both clouds run MongoDB 7.x via different products:

Criterion                 Cosmos vCore (Azure)  Atlas (multi-cloud)
MongoDB compatibility     ~90%                  100%
Free tier storage         32 GB                 512 MB (M0)
Backup included           Yes (PITR 7-35 days)  Snapshots (PITR extra on M10+)
Beanie ODM                Yes                   Yes
Azure credit eligible     Yes                   No
Multi-cloud               No                    Yes
Atlas Search / full-text  No                    Yes
Vendor lock-in            Medium                Low

Rule of thumb:

  • On Azure: start with Cosmos vCore Free. It is covered by the Azure credit and offers 64× more storage than Atlas M0. Escalate to Atlas only if you hit a compatibility wall (a missing aggregation operator, Atlas Search, etc.). Switching takes five minutes: change MONGODB_URI.
  • On GCP: use Atlas. GCP has no MongoDB-compatible managed database (Firestore, Bigtable, Cloud SQL, AlloyDB, and Spanner all use different query models).
  • Never use RU-based Cosmos DB. It is a translation layer over a proprietary engine, not real MongoDB, and it silently breaks Beanie.
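The five-minute switch amounts to repointing one value; nothing above the driver (Beanie included) changes. A sketch, assuming the api module takes its connection string as a secret input (the secrets variable name is an assumption):

```hcl
module "api" {
  source = "./modules/azure/container-app"
  # ...
  secrets = {
    # Swapping databases means repointing this one value,
    # e.g. from module.cosmos.connection_string
    # to        module.atlas.connection_string
    MONGODB_URI = module.cosmos.connection_string
  }
}
```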


6. Cross-cloud deployments

Because every module exports the inputs its neighbours need (egress_ip, connection_string, principal_id, service_account_email) you can freely mix clouds. The canonical example is examples/hybrid/main.tf:

# API + database stay in Azure (covered by the Azure credit)
module "api"    { source = "../../shared/infrastructure-modules/azure/container-app" ... }
module "cosmos" {
  source            = "../../shared/infrastructure-modules/azure/cosmos-db"
  allowed_ip_ranges = [module.jobs.egress_ip] # GCP NAT
  ...
}

# Jobs run in GCP (free Cloud Run Jobs tier, no Azure credit needed)
module "jobs" {
  source       = "../../shared/infrastructure-modules/gcp/cloud-run-job"
  schedule     = "0 2 * * *"
  env_vars = {
    MONGODB_URI  = module.cosmos.connection_string
    API_BASE_URL = "https://${module.api.fqdn}"
  }
  ...
}

Cross-cloud pitfalls

  • Egress and IP allow-listing. Cross-cloud traffic leaves your VPC and arrives at the other cloud over the public internet. Pin the source to a NAT gateway on the egress side and an IP allow-list on the ingress side. The modules expose egress_ip / allowed_ip_ranges for exactly this.
  • Latency. Inter-cloud RTT is tens of milliseconds. Keep chatty workloads (hot API paths, high-frequency DB reads) inside one cloud. Batch jobs and scheduled workloads are fine across clouds.
  • Secret sharing. Don't copy secrets between providers. Instead, pass them through at Terraform time (module.cosmos.connection_string) so the ground truth lives in one place.
  • Observability. You get two dashboards, two log stores, and two alert channels. Forward logs to a single sink (Grafana Cloud, Datadog, or one of the clouds) if unified observability matters.
  • Egress cost. Outbound bandwidth is billed per GB on both clouds. Cheap for APIs, expensive for bulk data pipelines.

7. Pricing

Rough orders of magnitude. Use each cloud's pricing calculator for binding numbers.

Azure (Sweden Central)

Scenario                                          Per month
Hobby (4 small apps, scale-to-zero, Cosmos Free)  ~$5
Test/staging (1 always-on replica, Cosmos Free)   ~$20
Production (2 regions, HA, Cosmos M30)            ~$925

The Visual Studio / Microsoft Partner Azure credit (around $190/month depending on plan) covers hobby and test completely and roughly 20% of a production bill.

GCP (europe-north1)

Scenario                                   Per month
Hobby (Cloud Run scale-to-zero, Atlas M0)  $0
Test/staging (Atlas M10)                   ~$67
Production (Cloud Run + Atlas M30 + HA)    ~$761

Cloud Run free tier (180K vCPU-sec, 360K GiB-sec, 2M requests/month) is more generous than Container Apps and tends to cover hobby projects indefinitely.


8. Disaster recovery

Requirement  Target     Achievable
RTO          < 4 hours  < 2 minutes (multi-region load balancer + managed HA)
RPO          < 1 hour   < 1 minute (synchronous replica + PITR)

On Azure, pair two Container Apps Environments behind Front Door with a Cosmos vCore M30 + HA pair. On GCP, pair two Cloud Run regions behind the Global HTTPS load balancer with an Atlas M30 replica set across two regions. Either path reaches the RTO/RPO targets above.


9. Reference: service comparison

Compute

Criterion             Azure Container Apps           GCP Cloud Run
Scale-to-zero         Yes                            Yes
Cold start            2-5 s                          1-3 s
Free tier             180K vCPU-sec, 360K GiB-sec    180K vCPU-sec, 360K GiB-sec, 2M req
VNet / VPC            Full integration               VPC connector (serverless VPC access)
Managed identity      Yes (user-assigned MI)         Yes (dedicated service account)
Custom domains + TLS  Managed, free                  Managed, free

Database

Criterion              Cosmos vCore  Atlas                   Cosmos RU (avoid)
MongoDB compatibility  ~90%          100%                    ~60% (incompatible with Beanie)
Free tier storage      32 GB         512 MB
Backup included        Yes (PITR)    Snapshots (PITR extra)  Periodic
Multi-cloud            No            Yes                     No
Vendor lock-in         Medium        Low                     High

GCP-native MongoDB alternatives

There is none. GCP's document/NoSQL services (Firestore, Bigtable, Cloud SQL, AlloyDB, Spanner) all use different query languages and do not speak the MongoDB wire protocol. Use Atlas (available as a GCP marketplace service) or self-host MongoDB on GKE.


NOTE: Prices are Q1 2026 estimates. Verify with:

  • Azure Pricing Calculator
  • Google Cloud Pricing Calculator
  • MongoDB Atlas Pricing