# Setup & Installation

## Prerequisites

- Python 3.12+
- [uv](https://astral.sh/uv/) package manager
- An LLM backend (Ollama, LM Studio, MiniMax, OpenAI, or any OpenAI/Anthropic-compatible API)
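Before cloning, it is worth confirming the toolchain is in place. The checks below are a minimal sketch; the last one only applies if Ollama is your backend and assumes it is listening on its default port:

```bash
# Confirm interpreter and package manager versions
python3 --version   # should report 3.12 or newer
uv --version

# Optional: if Ollama is your backend, check that it is reachable
# (11434 is Ollama's default port; /api/tags lists installed models)
curl http://localhost:11434/api/tags
```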
## Quick Start

```bash
# Clone the repository
git clone https://git.danhenry.dev/daniel/email-classifier.git
cd email-classifier

# Install dependencies
uv sync

# Start the server
uv run uvicorn app.main:app --host 0.0.0.0 --port 7999
```

The API will be available at `http://localhost:7999`. Auto-generated API docs are at `http://localhost:7999/docs` (Swagger UI) and `http://localhost:7999/redoc`.
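Once the server is up, a quick smoke test from another terminal confirms it is serving requests. The `/docs` path is documented above; `/openapi.json` is the usual schema path for Swagger-backed services and is an assumption here, not something this project documents:

```bash
# Expect "200" if the server is up
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:7999/docs

# Peek at the raw OpenAPI schema (path assumed, not documented by this project)
curl -s http://localhost:7999/openapi.json | head -c 200
```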
## Environment Variables

The service is configured entirely through environment variables. See [Configuration](configuration.md) for the full reference.

A minimal `.env` file for local development with Ollama (Ollama serves an OpenAI-compatible API on its default port, which is why the provider is set to `openai`):

```bash
LLM_PROVIDER=openai
LLM_BASE_URL=http://localhost:11434/v1
LLM_API_KEY=none
LLM_MODEL=qwen2.5-7b-instruct.q4_k_m
LLM_TEMPERATURE=0.1
```
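The same variables point the service at a hosted backend. For example, a sketch for OpenAI's own API (the model name is illustrative; use whatever your account has access to):

```bash
LLM_PROVIDER=openai
LLM_BASE_URL=https://api.openai.com/v1
LLM_API_KEY=sk-...             # your real API key
LLM_MODEL=gpt-4o-mini          # illustrative model name
LLM_TEMPERATURE=0.1
```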
## Using Docker

```bash
# Build the image
docker build -t email-classifier .

# Run the container
docker run -p 7999:7999 \
  -e LLM_PROVIDER=openai \
  -e LLM_BASE_URL=http://host.docker.internal:11434/v1 \
  -e LLM_API_KEY=none \
  -e LLM_MODEL=qwen2.5-7b-instruct.q4_k_m \
  email-classifier
```
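Repeating `-e` flags gets verbose; `docker run` can load the `.env` file shown above instead. Note that on Linux `host.docker.internal` does not resolve by default, so reaching a backend on the host needs an explicit host-gateway mapping (Docker 20.10+):

```bash
# Reuse the .env file instead of individual -e flags;
# --add-host is only needed on Linux so the container can reach
# a backend such as Ollama running on the host machine
docker run -p 7999:7999 \
  --env-file .env \
  --add-host=host.docker.internal:host-gateway \
  email-classifier
```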
## Dependency Management

This project uses [uv](https://astral.sh/uv/) for dependency management. Do not use `pip` directly.

```bash
# Add a new dependency
uv add <package>

# Sync dependencies (after pulling changes)
uv sync

# Run with uv (recommended)
uv run uvicorn app.main:app --reload
```
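A few related uv commands that come up day to day, listed here as a convenience rather than project policy:

```bash
# Add a development-only dependency (example package)
uv add --dev pytest

# Remove a dependency
uv remove <package>

# Regenerate the lockfile after editing pyproject.toml by hand
uv lock
```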