
LLM swap test configurations for comparing extraction quality across models.

## How to use

Edit `config.yaml` to point the extractor at a different model:

```yaml
extraction:
  model: ollama-remote/qwen3:8b   # default: llama3.2:3b
  alternatives:
    - ollama-remote/llama3.2:3b
    - ollama-remote/qwen3:8b
    - ollama-remote/phi4:14b
```

Run comparison mode to see which model produces better extractions
on the same input text.
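A comparison run can be sketched as below. This is a minimal illustration, assuming a local Ollama server on its default port (`11434`) and a non-streaming `/api/generate` call; the extraction prompt and the `build_request`/`run_comparison` helpers are hypothetical, not part of this repo.

```python
import json
import urllib.request

# Default Ollama endpoint; adjust for a remote host (assumption, not from config.yaml).
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, text: str) -> dict:
    """Build a non-streaming generate payload for one candidate model."""
    return {
        "model": model,
        "prompt": f"Extract the key facts from the following text:\n\n{text}",
        "stream": False,
    }

def run_comparison(models: list[str], text: str) -> dict:
    """Send the same input text to each model and collect the raw outputs."""
    results = {}
    for model in models:
        payload = json.dumps(build_request(model, text)).encode()
        req = urllib.request.Request(
            OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
        )
        with urllib.request.urlopen(req) as resp:
            results[model] = json.load(resp)["response"]
    return results
```

The per-model outputs in `results` can then be compared side by side against the same input text.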

## Test results

Run comparisons and log results here:

```
./llm-swaps/eval_results/<model-name>-<date>.json
```
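A small helper can derive that log path and write results to it. This is a sketch under assumptions: the `<model-name>-<date>` convention comes from above, but the character sanitization (model names like `ollama-remote/qwen3:8b` contain `/` and `:`, which are not filename-safe) is my own choice, not something this repo specifies.

```python
import json
from datetime import date
from pathlib import Path

def result_path(model: str, base: str = "llm-swaps/eval_results") -> Path:
    """Build <model-name>-<date>.json, replacing filename-unsafe characters."""
    safe = model.replace("/", "_").replace(":", "_")
    return Path(base) / f"{safe}-{date.today().isoformat()}.json"

def log_results(model: str, results: dict) -> Path:
    """Write one comparison run's results as pretty-printed JSON."""
    path = result_path(model)
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(results, indent=2))
    return path
```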