email-classifier
FastAPI service that classifies email using a configurable LLM backend.
What changed
The classifier no longer hardcodes a single Ollama + OpenAI-compatible endpoint. It now supports:
- OpenAI-compatible APIs
- Anthropic-compatible APIs
- per-request overrides for provider, model, endpoint, and temperature
- global defaults through environment variables
This makes it suitable for local Ollama, hosted OpenAI-compatible services, and MiniMax's recommended Anthropic-compatible API.
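The precedence is: a field set on the request wins, otherwise the matching LLM_* environment default applies. A minimal sketch of that lookup (the helper name is illustrative, not the actual code in app/config.py):

    import os

    def resolve(field: str, override=None):
        # Per-request override wins; otherwise fall back to the matching LLM_* env var.
        return override if override is not None else os.getenv(f"LLM_{field.upper()}")

    resolve("model")                  # -> value of LLM_MODEL
    resolve("model", "MiniMax-M2.7")  # -> "MiniMax-M2.7"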
Environment configuration
Defaults are loaded from environment variables:
export LLM_PROVIDER=openai
export LLM_BASE_URL=http://ollama.internal.henryhosted.com:9292/v1
export LLM_API_KEY=none
export LLM_MODEL=qwen2.5-7b-instruct.q4_k_m
export LLM_TEMPERATURE=0.1
export LLM_TIMEOUT_SECONDS=60
export LLM_MAX_RETRIES=3
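For reference, a minimal sketch of how these defaults could be collected into a single settings object at startup (field names mirror the variables above; the fallback values are just the example values from this section, and the real app/config.py may be structured differently):

    import os
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class LLMSettings:
        # Fallbacks here reuse the example values above; the service's own defaults may differ.
        provider: str = os.getenv("LLM_PROVIDER", "openai")
        base_url: str = os.getenv("LLM_BASE_URL", "http://ollama.internal.henryhosted.com:9292/v1")
        api_key: str = os.getenv("LLM_API_KEY", "none")
        model: str = os.getenv("LLM_MODEL", "qwen2.5-7b-instruct.q4_k_m")
        temperature: float = float(os.getenv("LLM_TEMPERATURE", "0.1"))
        timeout_seconds: int = int(os.getenv("LLM_TIMEOUT_SECONDS", "60"))
        max_retries: int = int(os.getenv("LLM_MAX_RETRIES", "3"))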
MiniMax example
MiniMax recommends Anthropic-compatible integration.
export LLM_PROVIDER=anthropic
export LLM_BASE_URL=https://api.minimax.io/anthropic
export LLM_API_KEY=your_minimax_key
export LLM_MODEL=MiniMax-M2.7
API
POST /classify
Request body:
{
  "email_data": {
    "subject": "Can you review this by Friday?",
    "body": "Hi Daniel, please review the attached budget proposal."
  },
  "provider": "anthropic",
  "base_url": "https://api.minimax.io/anthropic",
  "model": "MiniMax-M2.7",
  "temperature": 0.1
}
All override fields are optional. If omitted, the service uses the global env config.
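For example, the endpoint could be called with Python's requests library like this (the localhost URL and port are assumptions about your deployment):

    import requests

    payload = {
        "email_data": {
            "subject": "Can you review this by Friday?",
            "body": "Hi Daniel, please review the attached budget proposal.",
        },
        # Optional overrides; remove them to use the global env config instead.
        "provider": "anthropic",
        "base_url": "https://api.minimax.io/anthropic",
        "model": "MiniMax-M2.7",
        "temperature": 0.1,
    }

    resp = requests.post("http://localhost:8000/classify", json=payload, timeout=120)
    resp.raise_for_status()
    print(resp.json())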
Response shape:
{
  "needs_action": true,
  "category": "question",
  "priority": "high",
  "task_description": "Review the budget proposal and respond by Friday",
  "reasoning": "Direct request with a deadline requires follow-up",
  "confidence": 0.91
}
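On the client side these fields map naturally onto a small Pydantic model; a sketch under the assumption that the field names and types match the example above:

    from pydantic import BaseModel

    class ClassificationResult(BaseModel):
        needs_action: bool
        category: str          # e.g. "question"
        priority: str          # e.g. "high"
        task_description: str
        reasoning: str
        confidence: float      # 0.0 - 1.0

    result = ClassificationResult(**resp.json())  # resp from the request example above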
Architecture
- app/config.py: global and per-request LLM settings
- app/llm_adapters.py: provider adapters
- app/classifier.py: classification orchestration, retries, normalization
- app/prompts.py: system prompt
- app/routers/classify_email.py: thin API route
Notes
- OpenAI-compatible providers use the OpenAI SDK.
- Anthropic-compatible providers use the Anthropic SDK.
- Per-request api_key is supported, but excluded from response serialization.
- The service normalizes malformed model output and falls back safely after retry exhaustion.
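A rough sketch of what the provider routing could look like with the two SDKs, assuming a settings object like the one sketched under Environment configuration (the 1024-token cap and the function name are illustrative; see app/llm_adapters.py for the real adapters):

    from openai import OpenAI
    from anthropic import Anthropic

    def complete(settings, system_prompt: str, user_prompt: str) -> str:
        # Route to the SDK matching the configured provider; both honor a custom base_url.
        if settings.provider == "anthropic":
            client = Anthropic(base_url=settings.base_url, api_key=settings.api_key)
            msg = client.messages.create(
                model=settings.model,
                max_tokens=1024,  # illustrative cap, not taken from the repo
                temperature=settings.temperature,
                system=system_prompt,
                messages=[{"role": "user", "content": user_prompt}],
            )
            return msg.content[0].text
        client = OpenAI(base_url=settings.base_url, api_key=settings.api_key)
        resp = client.chat.completions.create(
            model=settings.model,
            temperature=settings.temperature,
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": user_prompt},
            ],
        )
        return resp.choices[0].message.content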