# Smash or Pass — Backend
FastAPI + SQLAlchemy (SQLite) + boto3 (MinIO/S3). Serves the API consumed by sop-front.
See ../CLAUDE.md for full architecture notes.
## Stack
- Python 3.12, FastAPI, Uvicorn
- SQLAlchemy 2 (sync) + SQLite
- Pydantic v2 + pydantic-settings
- boto3 for S3-compatible object storage (MinIO)
## Project layout
```
sop-back/
├── app/
│   ├── main.py                 # FastAPI factory + lifespan (DB + bucket init)
│   ├── core/{config,deps}.py   # Settings, admin gate
│   ├── db/database.py          # Engine, SessionLocal, get_db, Base
│   ├── models/models.py        # Collection, Character
│   ├── schemas/schemas.py      # Pydantic schemas
│   ├── services/storage.py     # MinIO upload/delete, bucket bootstrap
│   └── api/routes/
│       ├── health.py           # /health, /admin/status
│       ├── collections.py      # /collections
│       └── admin.py            # /admin/* (require_admin)
├── requirements.txt
├── Dockerfile
└── .env.example
```
## Configuration
Copy .env.example to .env and adjust:
| Var | Default | Notes |
|---|---|---|
| `ADMIN_ENABLED` | `false` | When true, /admin/* routes are exposed and the frontend renders the admin panel |
| `ALLOWED_ORIGINS` | `["*"]` | CORS, JSON array string |
| `DATABASE_URL` | `sqlite:///./data/sop.db` | SQLite file path |
| `S3_ENDPOINT_URL` | `http://localhost:9000` | What the backend uses to reach MinIO |
| `S3_PUBLIC_URL` | `http://localhost:9000` | What gets stored in `s3_url` and dereferenced by the browser |
| `S3_ACCESS_KEY` / `S3_SECRET_KEY` | `minioadmin` / `minioadmin` | Credentials |
| `S3_BUCKET` | `sop` | Auto-created on startup, set public-read |
| `S3_REGION` | `us-east-1` | Required by boto3 |
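The split between `S3_ENDPOINT_URL` and `S3_PUBLIC_URL` matters most inside Docker, where the backend reaches MinIO at an internal hostname the browser cannot resolve. A minimal sketch of how a stored `s3_url` could be built from the public base (the helper name and path-style addressing are assumptions, not the actual `services/storage.py` code):

```python
def object_public_url(public_base: str, bucket: str, key: str) -> str:
    # Hypothetical helper: the stored s3_url uses S3_PUBLIC_URL rather
    # than S3_ENDPOINT_URL, so browsers get a hostname they can reach.
    # Assumes path-style addressing (the MinIO default).
    return f"{public_base.rstrip('/')}/{bucket}/{key.lstrip('/')}"
```

So with the defaults above, an object `chars/a.png` in bucket `sop` would be stored as `http://localhost:9000/sop/chars/a.png`.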
**Security note:** `ADMIN_ENABLED=true` exposes admin endpoints to anyone who can reach the backend. There is no user auth — by design, per project spec. In production, either (a) put the backend behind an authenticated reverse proxy / VPN, (b) keep `ADMIN_ENABLED=false` outside of admin sessions, or (c) replace the gate with proper auth.
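The gate described above can be sketched as a stdlib-only check. This is an illustrative sketch, not the actual `core/deps.py` code: the `AdminDisabled` exception is invented here, and the real `require_admin` dependency would raise an HTTP 403 through FastAPI instead.

```python
import os

class AdminDisabled(Exception):
    """Raised when an /admin/* route is hit while ADMIN_ENABLED is false."""

def require_admin() -> None:
    # Reject unless ADMIN_ENABLED is explicitly "true" in the environment.
    if os.getenv("ADMIN_ENABLED", "false").lower() != "true":
        raise AdminDisabled("admin routes are disabled")
```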
## Local development
You need a running MinIO. Easiest path: spin it up via Docker.
```sh
# from repo root
docker compose up -d minio minio-init
```
Then run the backend natively:
```sh
cd sop-back
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
cp .env.example .env
# edit .env: set ADMIN_ENABLED=true if you want to upload
uvicorn app.main:app --reload
```
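For local development, a filled-in `.env` might look like this (the values mirror the defaults from the configuration table; the frontend origin is illustrative, and `ALLOWED_ORIGINS` stays a JSON array string):

```ini
ADMIN_ENABLED=true
ALLOWED_ORIGINS=["http://localhost:5173"]
DATABASE_URL=sqlite:///./data/sop.db
S3_ENDPOINT_URL=http://localhost:9000
S3_PUBLIC_URL=http://localhost:9000
S3_ACCESS_KEY=minioadmin
S3_SECRET_KEY=minioadmin
S3_BUCKET=sop
S3_REGION=us-east-1
```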
- API: http://localhost:8000
- Interactive docs: http://localhost:8000/docs
- Health: http://localhost:8000/health
The SQLite DB file is created at `sop-back/data/sop.db` on first startup; tables are auto-created. There is no Alembic — if you change models, delete the file during dev.
## API surface
| Method | Path | Auth | Purpose |
|---|---|---|---|
| GET | `/health` | — | Liveness |
| GET | `/admin/status` | — | `{admin_enabled: bool}` |
| GET | `/collections` | — | List collections (with `character_count`) |
| GET | `/collections/{id}` | — | Collection + characters |
| POST | `/admin/collections` | admin | multipart `name` + `files[]` |
| POST | `/admin/collections/{id}/characters` | admin | multipart `files[]` |
| DELETE | `/admin/collections/{id}` | admin | Delete collection + S3 objects |
| DELETE | `/admin/characters/{id}` | admin | Delete one character + its S3 object |
Allowed image types: `image/jpeg`, `image/png`, `image/webp`, `image/gif`.
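An allowlist check over those content types can be sketched as follows (the function name and normalization are assumptions, not the actual route code; the real upload handlers would reject other types with an HTTP error):

```python
ALLOWED_IMAGE_TYPES = {"image/jpeg", "image/png", "image/webp", "image/gif"}

def check_image_type(content_type: str) -> bool:
    # Normalize: drop any parameters ("image/jpeg; q=0.9"),
    # trim whitespace, and compare case-insensitively.
    return content_type.split(";")[0].strip().lower() in ALLOWED_IMAGE_TYPES
```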
## Production deployment
### Docker (recommended)
The repo-root `docker-compose.yml` already wires backend + MinIO + bucket init. From the repo root:
```sh
docker compose up -d --build backend minio minio-init
```
For a real deployment you should override these defaults:
- Use long, random MinIO credentials. Edit the `MINIO_ROOT_USER`/`MINIO_ROOT_PASSWORD` env on the `minio` service and the matching `S3_ACCESS_KEY`/`S3_SECRET_KEY` on the backend.
- Set `S3_PUBLIC_URL` to the public hostname browsers will hit (e.g. `https://media.example.com`). Put MinIO behind a TLS-terminating reverse proxy on that hostname.
- Set `ALLOWED_ORIGINS` to the exact frontend origin(s) — never `["*"]` in prod.
- Persist volumes: `minio-data` and `backend-data` (the SQLite file) — already declared in compose. Back them up.
- Run behind a reverse proxy (Caddy / Traefik / nginx) terminating TLS in front of port 8000.
- Consider switching `DATABASE_URL` to PostgreSQL if you expect concurrent writes.
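Switching to PostgreSQL is a one-line URL change on the SQLAlchemy side. The values below are purely illustrative, and the `psycopg` driver choice is an assumption — whichever driver you pick must be added to `requirements.txt`:

```ini
# hypothetical credentials/host; install a driver, e.g. psycopg
DATABASE_URL=postgresql+psycopg://sop:change-me@db:5432/sop
```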
### Without Docker
Same install steps as local, but:
```sh
pip install -r requirements.txt gunicorn
gunicorn app.main:app -k uvicorn.workers.UvicornWorker -w 2 -b 0.0.0.0:8000
```
Run under systemd or a supervisor of your choice. Front it with nginx/Caddy for TLS.
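A minimal systemd unit for the gunicorn command above might look like this. The install path, user, and `EnvironmentFile` location are assumptions for a typical `/opt` layout; binding to 127.0.0.1 assumes the nginx/Caddy proxy runs on the same host:

```ini
[Unit]
Description=sop-back (gunicorn)
After=network.target

[Service]
User=sop
WorkingDirectory=/opt/sop/sop-back
EnvironmentFile=/opt/sop/sop-back/.env
ExecStart=/opt/sop/sop-back/.venv/bin/gunicorn app.main:app \
    -k uvicorn.workers.UvicornWorker -w 2 -b 127.0.0.1:8000
Restart=on-failure

[Install]
WantedBy=multi-user.target
```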
## Schema migrations
There are none yet — `Base.metadata.create_all` runs on startup. If the schema changes incompatibly, either:

- Wipe `data/sop.db` (dev), or
- Add Alembic before the next prod deploy.