Add YAML config support and Compose deployment example

This commit is contained in:
Steve W
2026-04-09 21:06:46 +00:00
parent 8d1109c309
commit 3e9904576f
3 changed files with 129 additions and 20 deletions


@@ -4,6 +4,43 @@
The service ships with a `Dockerfile` based on `python:3.12-slim-bookworm` using [uv](https://astral.sh/uv/) for fast dependency installation.
### Configuration sources
The application now supports two configuration sources:
- environment variables
- a YAML config file
Load order:
1. per-request overrides
2. environment variables
3. YAML config file
4. built-in defaults
Supported config file locations:
- `config.yml`
- `config.yaml`
- `/config/config.yml`
- `/config/config.yaml`
You can also set an explicit config path with:
```bash
export EMAIL_CLASSIFIER_CONFIG=/path/to/config.yml
```
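The lookup order above can be sketched as a small resolver. This is an illustrative sketch, not the service's actual code: the function name and the "explicit path wins even if missing" behavior are assumptions.

```python
import os
from pathlib import Path

# Candidate locations checked in order; the first existing file wins.
DEFAULT_CANDIDATES = [
    "config.yml",
    "config.yaml",
    "/config/config.yml",
    "/config/config.yaml",
]

def resolve_config_path(env=os.environ, candidates=DEFAULT_CANDIDATES):
    """Return the config file to load, or None if nothing is found."""
    explicit = env.get("EMAIL_CLASSIFIER_CONFIG")
    if explicit:
        # An explicit path is returned as-is (assumed behavior), so a
        # typo fails loudly at load time instead of silently falling back.
        return Path(explicit)
    for candidate in candidates:
        path = Path(candidate)
        if path.is_file():
            return path
    return None
```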
Example `config.yml`:
```yaml
llm:
  provider: anthropic
  base_url: https://api.minimax.io/anthropic
  api_key: your_api_key_here
  model: MiniMax-M2.7
  temperature: 0.1
  timeout_seconds: 60
  max_retries: 3
```
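The four-layer precedence (per-request overrides > environment > YAML > defaults) amounts to a simple layered merge. A minimal sketch, assuming `LLM_`-prefixed environment variables map onto lower-cased config keys; the key names mirror the example `config.yml`, but the service's internals may differ:

```python
import os

# Built-in defaults (lowest precedence), mirroring the example config.yml.
DEFAULTS = {
    "provider": "anthropic",
    "temperature": 0.1,
    "timeout_seconds": 60,
    "max_retries": 3,
}

ENV_PREFIX = "LLM_"  # assumed mapping: LLM_MODEL -> "model", etc.

def effective_config(yaml_cfg, env=os.environ, request_overrides=None):
    """Merge the four sources; later layers win."""
    merged = dict(DEFAULTS)
    merged.update(yaml_cfg or {})                       # YAML config file
    for key, value in env.items():                      # environment variables
        if key.startswith(ENV_PREFIX):
            merged[key[len(ENV_PREFIX):].lower()] = value
    merged.update(request_overrides or {})              # per-request overrides
    return merged
```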
### Building
```bash
@@ -15,19 +52,50 @@ docker build -t email-classifier .
```bash
docker run -d --name email-classifier \
  -p 7999:7999 \
  -e LLM_PROVIDER=openai \
  -e LLM_BASE_URL=http://your-ollama:11434/v1 \
  -e LLM_API_KEY=none \
  -e LLM_MODEL=qwen2.5-7b-instruct.q4_k_m \
  -e LLM_TEMPERATURE=0.1 \
  -e EMAIL_CLASSIFIER_CONFIG=/config/config.yml \
  -e EMAIL_CLASSIFIER_DB_PATH=/data/email_classifier.db \
  -v /path/to/config.yml:/config/config.yml:ro \
  -v /path/to/local/data:/data \
  email-classifier
```
Mount a persistent volume for `/data` (or wherever `EMAIL_CLASSIFIER_DB_PATH` points) to preserve the dedupe database across container restarts.
Environment variables still override file-based config, so you can keep most settings in YAML and override just one or two values at deploy time.
## Docker Compose example
```yaml
services:
  email-classifier:
    image: your-registry.example.com/your-org/email-classifier:latest
    container_name: email-classifier
    ports:
      - "7999:7999"
    environment:
      EMAIL_CLASSIFIER_CONFIG: /config/config.yml
      EMAIL_CLASSIFIER_DB_PATH: /data/email_classifier.db
      # Optional overrides. Env vars win over YAML values.
      # LLM_MODEL: MiniMax-M2.7
      # LLM_TIMEOUT_SECONDS: "90"
    volumes:
      - ./config.yml:/config/config.yml:ro
      - ./data:/data
    restart: unless-stopped
    # If your LLM backend runs on the Docker host, one option is:
    # extra_hosts:
    #   - "host.docker.internal:host-gateway"
```
### Compose notes
- Mount the YAML config read-only into the container, typically at `/config/config.yml`
- Mount a writable volume for `/data` so dedupe state survives restarts
- Override specific values with environment variables when needed
- If the LLM backend is another container on the same Compose network, use its service name in `base_url`
- If the LLM backend runs on the host, use `host.docker.internal` or a host-gateway mapping where appropriate
## Building for a Remote Registry
```bash
docker build -t \
@@ -57,16 +125,16 @@ The workflow tags the image as:
### Deployment Considerations
- **Network access** — The container needs to reach your LLM backend. If using Ollama or another service on the host, use `host.docker.internal` or an explicit host-gateway mapping.
- **Dedupe persistence** — Mount a volume for the SQLite database to persist dedupe state across deploys.
- **Port** — The container exposes port `7999`. Map it to any host port you prefer.
- **Health check** — The service does not currently expose a dedicated `/health` endpoint. Use `GET /docs` as a liveness probe.
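Since there is no dedicated `/health` endpoint, the `GET /docs` probe can be wired into Compose. A hypothetical `healthcheck` fragment for the service definition; it shells out to the image's own Python because the `python:3.12-slim-bookworm` base may not ship `curl`:

```yaml
# Add under the email-classifier service in docker-compose.yml (illustrative).
healthcheck:
  test: ["CMD", "python3", "-c",
         "import urllib.request; urllib.request.urlopen('http://localhost:7999/docs', timeout=5)"]
  interval: 30s
  timeout: 10s
  retries: 3
```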
## Production Checklist
- [ ] Provide either a YAML config file or the required `LLM_*` environment variables
- [ ] Use HTTPS for remote `LLM_BASE_URL` values in production
- [ ] Mount a persistent volume for `EMAIL_CLASSIFIER_DB_PATH`
- [ ] Set appropriate resource limits (CPU/memory) on the container
- [ ] Configure `LLM_MAX_RETRIES` and `LLM_TIMEOUT_SECONDS` to suit your LLM backend's reliability
- [ ] Keep `LLM_TEMPERATURE` low for consistent classification results
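For the retry and timeout items above, a client-side sketch of how `LLM_MAX_RETRIES` and `LLM_TIMEOUT_SECONDS` might be applied. This is illustrative only, assuming exponential backoff; the function name and backoff policy are not taken from the service:

```python
import os
import time

def call_with_retries(request_fn,
                      max_retries=int(os.environ.get("LLM_MAX_RETRIES", "3")),
                      timeout=float(os.environ.get("LLM_TIMEOUT_SECONDS", "60"))):
    """Call request_fn(timeout=...) with exponential backoff between attempts."""
    last_error = None
    for attempt in range(max_retries + 1):
        try:
            return request_fn(timeout=timeout)
        except Exception as exc:  # real code should catch narrower error types
            last_error = exc
            if attempt < max_retries:
                time.sleep(min(2 ** attempt, 30))  # 1s, 2s, 4s, ... capped at 30s
    raise last_error
```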