
# Smash or Pass — Backend

FastAPI + SQLAlchemy (SQLite) + boto3 (MinIO/S3). Serves the API consumed by `sop-front`.

See [`../CLAUDE.md`](../CLAUDE.md) for the full architecture notes.


## Stack

- Python 3.12, FastAPI, Uvicorn
- SQLAlchemy 2 (sync) + SQLite
- Pydantic v2 + pydantic-settings
- boto3 for S3-compatible object storage (MinIO)

## Project layout

```text
sop-back/
├── app/
│   ├── main.py                 # FastAPI factory + lifespan (DB + bucket init)
│   ├── core/{config,deps}.py   # Settings, admin gate
│   ├── db/database.py          # Engine, SessionLocal, get_db, Base
│   ├── models/models.py        # Collection, Character
│   ├── schemas/schemas.py      # Pydantic schemas
│   ├── services/storage.py     # MinIO upload/delete, bucket bootstrap
│   └── api/routes/
│       ├── health.py           # /health, /admin/status
│       ├── collections.py      # /collections
│       └── admin.py            # /admin/* (require_admin)
├── requirements.txt
├── Dockerfile
└── .env.example
```
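As a sketch of the responsibilities listed for `app/services/storage.py` (bucket bootstrap, upload, public-read policy), the snippet below shows one way to do it with boto3. The helper names and signatures here are illustrative, not the module's actual API:

```python
# Hypothetical sketch of what services/storage.py is responsible for.
import json
import uuid

import boto3


def make_client(settings):
    # boto3 talks to MinIO as long as endpoint_url points at it.
    return boto3.client(
        "s3",
        endpoint_url=settings.S3_ENDPOINT_URL,
        aws_access_key_id=settings.S3_ACCESS_KEY,
        aws_secret_access_key=settings.S3_SECRET_KEY,
        region_name=settings.S3_REGION,
    )


def ensure_bucket(client, bucket: str) -> None:
    # Create the bucket if missing, then mark it public-read,
    # matching the "auto-created on startup" behaviour documented below.
    try:
        client.create_bucket(Bucket=bucket)
    except (client.exceptions.BucketAlreadyOwnedByYou,
            client.exceptions.BucketAlreadyExists):
        pass
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
        }],
    }
    client.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))


def upload_image(client, settings, fileobj, content_type: str) -> str:
    # Store under a random key and return the browser-facing URL,
    # built from S3_PUBLIC_URL rather than S3_ENDPOINT_URL.
    key = str(uuid.uuid4())
    client.upload_fileobj(
        fileobj, settings.S3_BUCKET, key,
        ExtraArgs={"ContentType": content_type},
    )
    return f"{settings.S3_PUBLIC_URL}/{settings.S3_BUCKET}/{key}"
```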

## Configuration

Copy `.env.example` to `.env` and adjust:

| Var | Default | Notes |
| --- | --- | --- |
| `ADMIN_ENABLED` | `false` | When `true`, the `/admin/*` routes are exposed and the frontend renders the admin panel |
| `ALLOWED_ORIGINS` | `["*"]` | CORS allowlist, as a JSON array string |
| `DATABASE_URL` | `sqlite:///./data/sop.db` | SQLite file path |
| `S3_ENDPOINT_URL` | `http://localhost:9000` | What the backend uses to reach MinIO |
| `S3_PUBLIC_URL` | `http://localhost:9000` | What gets stored in `s3_url` and dereferenced by the browser |
| `S3_ACCESS_KEY` / `S3_SECRET_KEY` | `minioadmin` / `minioadmin` | Object-storage credentials |
| `S3_BUCKET` | `sop` | Auto-created on startup, set public-read |
| `S3_REGION` | `us-east-1` | Required by boto3 |
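These variables map naturally onto a pydantic-settings model. A minimal sketch of what `app/core/config.py` could look like; the real module may differ:

```python
# Illustrative sketch; the actual app/core/config.py may differ.
from pydantic_settings import BaseSettings, SettingsConfigDict


class Settings(BaseSettings):
    model_config = SettingsConfigDict(env_file=".env")

    ADMIN_ENABLED: bool = False
    # pydantic-settings parses list fields from a JSON string,
    # hence the '["*"]' JSON-array format in .env.
    ALLOWED_ORIGINS: list[str] = ["*"]
    DATABASE_URL: str = "sqlite:///./data/sop.db"
    S3_ENDPOINT_URL: str = "http://localhost:9000"
    S3_PUBLIC_URL: str = "http://localhost:9000"
    S3_ACCESS_KEY: str = "minioadmin"
    S3_SECRET_KEY: str = "minioadmin"
    S3_BUCKET: str = "sop"
    S3_REGION: str = "us-east-1"


settings = Settings()
```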

**Security note:** `ADMIN_ENABLED=true` exposes the admin endpoints to anyone who can reach the backend. There is no user auth; that is by design, per the project spec. In production, either (a) put the backend behind an authenticated reverse proxy or VPN, (b) keep `ADMIN_ENABLED=false` outside of admin sessions, or (c) replace the gate with proper auth.
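The gate itself can be a single FastAPI dependency that refuses requests while the flag is off. A sketch of what `require_admin` in `core/deps.py` might look like; the status code and wiring are assumptions:

```python
# Illustrative sketch of the admin gate; the real core/deps.py may differ.
from fastapi import APIRouter, Depends, HTTPException

from app.core.config import settings  # assumed location of the Settings instance


def require_admin() -> None:
    # With no user auth, the only check is the ADMIN_ENABLED flag.
    if not settings.ADMIN_ENABLED:
        # 404 hides the routes entirely; a 403 would also work.
        raise HTTPException(status_code=404)


# Applied once at the router level so every /admin/* route inherits it.
router = APIRouter(prefix="/admin", dependencies=[Depends(require_admin)])
```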


## Local development

You need a running MinIO. The easiest path is to spin it up via Docker:

```bash
# from the repo root
docker compose up -d minio minio-init
```

Then run the backend natively:

```bash
cd sop-back
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
cp .env.example .env
# edit .env: set ADMIN_ENABLED=true if you want to upload
uvicorn app.main:app --reload
```

The SQLite DB file is created at `sop-back/data/sop.db` on first startup; tables are auto-created. There is no Alembic, so if you change the models during dev, delete the file and let it be recreated.
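This startup behaviour (table creation plus bucket bootstrap) fits in a FastAPI lifespan roughly like the sketch below. `make_client` / `ensure_bucket` are the hypothetical helpers from the storage sketch above, not confirmed names:

```python
# Illustrative sketch of app/main.py's lifespan; the actual code may differ.
from contextlib import asynccontextmanager

from fastapi import FastAPI

from app.core.config import settings
from app.db.database import Base, engine
from app.services.storage import ensure_bucket, make_client  # hypothetical helpers


@asynccontextmanager
async def lifespan(app: FastAPI):
    # No Alembic: tables are created on boot, straight from the models.
    Base.metadata.create_all(bind=engine)
    ensure_bucket(make_client(settings), settings.S3_BUCKET)
    yield


app = FastAPI(lifespan=lifespan)
```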


## API surface

| Method | Path | Auth | Purpose |
| --- | --- | --- | --- |
| GET | `/health` | | Liveness |
| GET | `/admin/status` | | Returns `{admin_enabled: bool}` |
| GET | `/collections` | | List collections (with `character_count`) |
| GET | `/collections/{id}` | | Collection + characters |
| POST | `/admin/collections` | admin | Multipart `name` + `files[]` |
| POST | `/admin/collections/{id}/characters` | admin | Multipart `files[]` |
| DELETE | `/admin/collections/{id}` | admin | Delete collection + S3 objects |
| DELETE | `/admin/characters/{id}` | admin | Delete one character + its S3 object |

Allowed image types: `image/jpeg`, `image/png`, `image/webp`, `image/gif`.
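To exercise the surface by hand, here is a short `requests` script against a local instance. The multipart field names `name` and `files` are inferred from the table above:

```python
# Assumes a local backend on port 8000 with ADMIN_ENABLED=true.
import requests

BASE = "http://localhost:8000"

print(requests.get(f"{BASE}/health").json())
print(requests.get(f"{BASE}/collections").json())

# Create a collection from two images (multipart: name + files[]).
with open("a.png", "rb") as a, open("b.webp", "rb") as b:
    r = requests.post(
        f"{BASE}/admin/collections",
        data={"name": "demo"},
        files=[
            ("files", ("a.png", a, "image/png")),
            ("files", ("b.webp", b, "image/webp")),
        ],
    )
r.raise_for_status()
print(r.json())
```

With `ADMIN_ENABLED=false`, the same POST should fail, which is a quick way to verify the gate.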


## Production deployment

The repo-root `docker-compose.yml` already wires backend + MinIO + bucket init. From the repo root:

```bash
docker compose up -d --build backend minio minio-init
```

For a real deployment you should override these defaults:

1. Use long, random MinIO credentials. Edit `MINIO_ROOT_USER` / `MINIO_ROOT_PASSWORD` on the `minio` service and the matching `S3_ACCESS_KEY` / `S3_SECRET_KEY` on the backend.
2. Set `S3_PUBLIC_URL` to the public hostname browsers will hit (e.g. `https://media.example.com`). Put MinIO behind a TLS-terminating reverse proxy on that hostname.
3. Set `ALLOWED_ORIGINS` to the exact frontend origin(s); never `["*"]` in prod.
4. Persist the `minio-data` and `backend-data` volumes (the latter holds the SQLite file); both are already declared in compose. Back them up.
5. Run behind a reverse proxy (Caddy / Traefik / nginx) terminating TLS in front of port 8000.
6. Consider switching `DATABASE_URL` to PostgreSQL if you expect concurrent writes.

### Without Docker

Same install steps as local, but:

```bash
pip install -r requirements.txt gunicorn
gunicorn app.main:app -k uvicorn.workers.UvicornWorker -w 2 -b 0.0.0.0:8000
```

Run it under systemd or a supervisor of your choice, and front it with nginx/Caddy for TLS.


## Schema migrations

There are none yet; `Base.metadata.create_all` runs on startup. If the schema changes incompatibly, either:

- wipe `data/sop.db` (dev), or
- add Alembic before the next prod deploy.