Session: 2026-05-05 12:20:05 UTC
- Session Key: agent:socrates:telegram:direct:8386527252
- Session ID: 1a65759c-fbc5-4287-b629-488244df126e
- Source: telegram
Conversation Summary
user: Can you give me the overview of default models for our agents and the fallback chains?
assistant: Here's your agent/fallback overview:
Current Session (Socrates)
- Running model: kimi-k2.6:cloud
From MEMORY.md — The Board's default models:
| Agent | Default Model | Role |
|---|---|---|
| Socrates 🧠 | ollama/kimi-k2.5:cloud → now kimi-k2.6:cloud | Backend, pipelines, AI architecture |
| Daedalus 🎨 | ollama/gemma4:26b | Frontend, UI/UX, design system |
| Wadsworth 📋 | ollama/glm-5.1:cloud | Chief of Staff, routing, coordination |
Gateway Fallback Chain (from OpenClaw config audit, 2026-04-18):
glm-5.1:cloud → kimi-k2.5:cloud → gemma4:latest → qwen2.5-coder:7b
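A minimal sketch of how that chain behaves: try each model in order and fall through to the next on failure. `call_model` is a hypothetical stand-in for whatever client the gateway actually uses; only the model names come from the audit above.

```python
# Gateway fallback chain from the 2026-04-18 config audit.
FALLBACK_CHAIN = [
    "glm-5.1:cloud",
    "kimi-k2.5:cloud",
    "gemma4:latest",
    "qwen2.5-coder:7b",
]

def generate_with_fallback(prompt, call_model, chain=FALLBACK_CHAIN):
    """Return (model, response) from the first model that answers.

    call_model(model, prompt) is an assumed interface: it returns a
    response string or raises on failure (timeout, model down, etc.).
    """
    last_error = None
    for model in chain:
        try:
            return model, call_model(model, prompt)
        except Exception as err:  # a real gateway would catch narrower errors
            last_error = err
    raise RuntimeError(f"all models in fallback chain failed: {last_error!r}")
```

The chain only advances on an exception, so a slow-but-successful model never triggers failover; a production gateway would likely add per-model timeouts.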
Family Assistant Pipeline (v1.0, local on Gaming PC):
- Default LLM parse: qwen2.5-coder:7b
- Vision/OCR: qwen3-vl:8b (keep_alive: 0)
- Embeddings: nomic-embed-text
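As a sketch of the `keep_alive: 0` setting on the vision model: in Ollama's `/api/generate` request schema, `keep_alive: 0` tells the server to unload the model immediately after responding, which frees VRAM for the main pipeline model. The prompt text here is illustrative, not taken from the actual pipeline config.

```python
import json

def build_ocr_request(image_b64: str) -> str:
    """Build an Ollama /api/generate payload for a one-shot OCR call.

    keep_alive: 0 unloads qwen3-vl:8b right after the response, so it
    doesn't hold VRAM away from qwen2.5-coder:7b between requests.
    """
    payload = {
        "model": "qwen3-vl:8b",
        "prompt": "Extract all text from this image.",  # illustrative prompt
        "images": [image_b64],
        "stream": False,
        "keep_alive": 0,
    }
    return json.dumps(payload)
```

The trade-off is a cold-start load on every OCR call, which is usually acceptable for an occasional vision task on a shared GPU.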
Permanent Model Rules (family assistant):
- ✅ qwen2.5-coder:7b — single model for ALL pipeline tasks
- ✅ qwen3-vl:8b — vision/OCR only
- ⛔ No home-assist model — ever
- ⛔ No deepseek-r1 — burns thinking tokens, fails JSON
- ⛔ No qwen3:8b — thinking can't be disabled, 3-13x slower
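The rules above could be enforced mechanically with a small guard at model-selection time; this is a hypothetical sketch, with only the model names taken from the rules themselves.

```python
# Approved and banned models per the family assistant's permanent rules.
ALLOWED = {"qwen2.5-coder:7b", "qwen3-vl:8b"}
BANNED = {"home-assist", "deepseek-r1", "qwen3:8b"}

def check_model(name: str) -> str:
    """Raise ValueError unless `name` is an approved pipeline model."""
    base = name.split(":")[0]  # match bans with or without a tag
    if name in BANNED or base in BANNED:
        raise ValueError(f"model {name!r} is permanently banned")
    if name not in ALLOWED:
        raise ValueError(f"model {name!r} is not on the approved list")
    return name
```

Failing loudly at startup is cheaper than discovering mid-pipeline that a banned model is burning thinking tokens or emitting broken JSON.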
Note: Your current session shows kimi-k2.6:cloud running — looks like the model got bumped from k2.5 to k2.6 since the last config audit. Auto-failover or manual update?