2026-05-09 — SidelineWorks Launch + Thoth Demo Day Eval
SidelineWorks Marketing Site
- Daedalus built and deployed on port 8003: 6 pages (Home, Product, Pricing, Demo, Contact, Blog with 3 articles)
- Dark theme matching RTSport alpha palette, SidelineWorks wordmark SVG, mobile-first
- Footer fixed: "RTSport — a product by SidelineWorks" (originally said "A product of HoffDesk")
- Telegram notifications wired: demo & contact forms send push alerts to Matt's DM
- Demo form submitted twice — notification system confirmed working
- Port 8003 running with reload=True on the Beelink
- Next: Cloudflare Tunnel for sidelineworks.com → port 8003 (Matt handling DNS)
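The form-to-Telegram wiring above can be sketched with the stdlib alone. This is a minimal illustration, not the deployed code: the function name `build_alert` and the payload layout are invented here, and it only builds the Bot API `sendMessage` request (sending is a one-liner with `urlopen`).

```python
import json
import urllib.request

API_BASE = "https://api.telegram.org"

def build_alert(token: str, chat_id: str, form: dict) -> urllib.request.Request:
    """Build a Bot API sendMessage request for a submitted demo/contact form."""
    text = "New form submission:\n" + "\n".join(f"{k}: {v}" for k, v in form.items())
    payload = json.dumps({"chat_id": chat_id, "text": text}).encode()
    return urllib.request.Request(
        f"{API_BASE}/bot{token}/sendMessage",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Sending would be: urllib.request.urlopen(build_alert(TOKEN, CHAT_ID, form))
```

Keeping request construction separate from the network call makes the alert path easy to test without hitting Telegram.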
Thoth Batch Evaluation
- Ran 33 simulated Hoffmann family messages across 6 categories
- Results: 27/33 high confidence, 5 correctly suppressed, 0 structural bugs
- has_pet grocery misclassification is FIXED
- Remaining issues: relation direction bugs (source/target swapped 2-3 cases)
- EVAL_NOTES written at ~/icarus/experiments/thoth/EVAL_NOTES.md
- Shadow mode to run for 1 week, next demo day: 2026-05-16
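The remaining source/target swaps suggest a canonicalization pass over extracted relations. A minimal sketch, assuming a hypothetical (type, source, target) triple schema and an inverse-type table — the type names and function are invented here, not Thoth's actual schema:

```python
# Hypothetical schema: a relation is a (type, source, target) triple.
# Each type maps to its canonical form; True means the endpoints swap.
CANONICAL = {
    "child_of": ("parent_of", True),   # child_of(A, B) == parent_of(B, A)
    "parent_of": ("parent_of", False),
    "owned_by": ("owner_of", True),
    "owner_of": ("owner_of", False),
}

def normalize(rel: tuple) -> tuple:
    """Rewrite a relation so only canonical types remain,
    swapping source/target when the inverse form was extracted."""
    rtype, src, tgt = rel
    canon, swap = CANONICAL.get(rtype, (rtype, False))
    return (canon, tgt, src) if swap else (canon, src, tgt)
```

Running every extracted relation through one normalizer makes direction bugs show up as a single table entry to fix rather than scattered special cases.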
Midas Activity Today
- Market Briefs: AM premarket and PM closing briefs delivered via MidasHoffBot Telegram bot (accountId: midas, user ID: 8671996951)
- Cron Fixes: Replaced corrupted market:am-brief and market:pm-brief crons whose stale to: "8386527252" (Matt's personal DM) was causing delivery routing conflicts
- Position/Portfolio: No trades, rebalancing, or position changes initiated by Matt today
- Market Observations: No specific market analysis or model evaluation sessions recorded in today's sessions
- Memory Infrastructure: Midas memory distillation crons active — 1pm + 8pm Central (kimi-k2.6:cloud, 120s timeout), silent delivery
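For reference, the corrected routing might look like the fragment below — the field names and nesting are assumed for illustration, not the actual cron schema; only the IDs and job names come from the log above:

```json
{
  "name": "market:am-brief",
  "schedule": "30 8 * * 1-5",
  "delivery": {
    "channel": "telegram",
    "accountId": "midas",
    "to": "8671996951"
  }
}
```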
Daedalus & Midas Memory Fix
- Both agents were amnesic — sessions hitting 70K+ tokens with midTurnPrecheck: false and reserveTokensFloor: 40000
- Diagnosis: compaction exists at gateway defaults (mode: safeguard) but:
1. midTurnPrecheck was disabled → overflow recovery only, no proactive compaction
2. reserveTokensFloor: 40000 too aggressive for 128K window
3. Result: sessions silently overflow → model error → feels like amnesia
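The failure mode in (1)–(3) reduces to a budget check that never ran. A minimal sketch of what a mid-turn precheck computes — an illustration under assumed numbers, not the gateway's actual code:

```python
CONTEXT_WINDOW = 128_000  # model window from the log

def should_compact(used: int, next_turn_estimate: int,
                   reserve_floor: int = 20_000) -> bool:
    """Proactive mid-turn precheck: compact before free space
    drops below the reserve floor."""
    return used + next_turn_estimate > CONTEXT_WINDOW - reserve_floor
```

With the old 40K floor, the threshold sits at 88K of a 128K window; with the precheck disabled, no check ran at all and sessions overflowed into a model error instead of compacting.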
- Fix applied in openclaw.json:
  - midTurnPrecheck.enabled: true
  - reserveTokensFloor: 20000
  - keepRecentTokens: 20000
- Backup: openclaw.json.bak.20260509
- Gateway restarted to pick up config
- memoryFlush model switched from ollama-remote/phi4:14b (fragile Gaming PC dep) to ollama/deepseek-v4-flash:cloud
- Added explicit systemPrompt and prompt for memory flush turns (previously missing)
- Added 4x independent memory distillation crons — belt-and-suspenders:
- Daedalus: 1pm + 8pm Central (deepseek-v4-flash:cloud, 120s timeout)
- Midas: 1pm + 8pm Central (kimi-k2.6:cloud, 120s timeout)
- These run regardless of whether compaction triggers, ensuring daily logs are always written
- delivery.mode=none (silent, no Telegram noise)
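Pulling the changes above together, the touched portion of openclaw.json plausibly looks like this — the nesting is assumed for illustration; the key names and values come from the log:

```json
{
  "compaction": {
    "mode": "safeguard",
    "midTurnPrecheck": { "enabled": true },
    "reserveTokensFloor": 20000,
    "keepRecentTokens": 20000
  },
  "memoryFlush": {
    "model": "ollama/deepseek-v4-flash:cloud"
  }
}
```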
HoffDoc — Markdown Document Viewer
- Built single-file FastAPI app at ~/hoffdoc/main.py (14K)
- Renders markdown from all 6 agent workspaces + Icarus + Shared
- Dark mobile-first CSS, syntax highlighting via Pygments, zero JS framework
- Port 8004, loopback-only, systemd service (user) — auto-restarts on failure
- Routes: / (home grid), /browse/{label} (dir listing), /view/{label}/path (rendered), /raw/{label}/path (plain)
- Next: Cloudflare Tunnel or Tailscale Funnel for phone access — Matt's choice
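Serving files from several workspace roots under /view/{label}/path needs path-traversal protection before the app is exposed beyond loopback. A minimal sketch — the label map, root paths, and `resolve` helper are hypothetical, not HoffDoc's actual code:

```python
from pathlib import Path

# Hypothetical label → root map; the real app maps 8 labels
# (6 agent workspaces + Icarus + Shared).
ROOTS = {
    "icarus": Path("/srv/agents/icarus"),
    "shared": Path("/srv/agents/shared"),
}

def resolve(label: str, rel_path: str) -> Path:
    """Map /view/{label}/{rel_path} to a file, rejecting traversal."""
    root = ROOTS.get(label)
    if root is None:
        raise KeyError(f"unknown label: {label}")
    candidate = (root / rel_path).resolve()
    if not candidate.is_relative_to(root):
        raise PermissionError("path escapes workspace root")
    return candidate
```

Resolving the joined path and then checking containment is what stops `../` segments from walking out of the chosen workspace.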
ClawHub Skill Sanitization — Critical
- agent-collaboration-protocol v1.5.1 triggered a suspicious flag in ClawHub vetting
- SKILL.md contained 7 references to personal infrastructure:
  - hoffdesk.com in curl examples → replaced with example.com
  - hoffmann_admin in file paths → replaced with user
  - hoffdesk-api in sudo restart commands → replaced with myproject-api
  - workspace-socrates/hoffdesk-api/ → replaced with myagent/workspace/
- clawhub.json had:
  - the-hoffmann-board author → "Hoffmann Board"
  - github.com/hoffmann-matt/agent-collaboration-protocol repo → github.com/clawhub/agent-collaboration-protocol
- All files sanitized; remaining references (shared/, systemd, deploy commands) are generic and appropriate for a deployment skill
- Published sanitized v1.5.2 to ClawHub (k974spzknfv8xa2264n396gm5s86cf6w)
- Published v1.5.3 to ClawHub (k979fc8an25rr0245b508vm1ex86cmam) — removed all sudo, fixed flawed rsync-into-symlink pattern, platform-agnostic deployment steps
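The replacement pass above is mechanical enough to script. A minimal sketch using the table from this log — longest keys first so the path-style pattern wins over the bare `hoffdesk-api` substring it contains; the `sanitize` helper is hypothetical, not how the files were actually edited:

```python
# Replacement table taken from the sanitization notes above.
REPLACEMENTS = {
    "workspace-socrates/hoffdesk-api/": "myagent/workspace/",
    "hoffdesk.com": "example.com",
    "hoffmann_admin": "user",
    "hoffdesk-api": "myproject-api",
}

def sanitize(text: str) -> str:
    """Apply replacements longest-first so overlapping patterns
    don't get partially rewritten."""
    for old in sorted(REPLACEMENTS, key=len, reverse=True):
        text = text.replace(old, REPLACEMENTS[old])
    return text
```

Running every file in the skill through one pass like this also makes the change auditable before republishing.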
- HoffDoc live: Tailscale Serve + Funnel enabled. URL: https://titanium-butler.tail864e81.ts.net/ (verified 200 OK)
LanceDB Memory Pollution — Surgically Removed
- Message ID 7902 (April 30: "sudo approval button") kept appearing as "untrusted memory" in ALL agents for ~10 days
- Root cause: memory-lancedb plugin auto-captured it, and autoRecall kept surfacing it on any "sudo"/"elevated" mention
- Only 1 row existed in the entire LanceDB — deleted via table.delete() + table.optimize()
- DB now empty, future sessions will not get this injected
- Auto-capture remains on; old session transcripts won't re-index unless those sessions are re-opened