2026-05-05-model-override-discussion.md

Session: 2026-05-05 00:45:52 UTC

  • Session Key: agent:socrates:telegram:direct:8386527252
  • Session ID: fe90110b-cc58-4cba-9f6c-bbc5feb5be94
  • Source: telegram

Conversation Summary

user: It’s showing messages of Krilling and then exec running a whole bunch of information
assistant: I see β€” Wadsworth is sending you verbose Krilling exec output with full tool call information. That's the deepseek-v4-flash:cloud model being chatty with chain-of-thought reasoning leaking through.

Let me check Wadsworth's current session directly to see his actual runtime model, then fix the config.
assistant: Only my session (Socrates) is currently active β€” no Wadsworth session visible. He's either idle or hasn't been messaged recently.

Regardless, the root cause is clear: agents.defaults.model.primary is set to ollama/deepseek-v4-flash:cloud in openclaw.json. This is a reasoning model that emits verbose chain-of-thought before responses, including full tool call dumps.
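
For context, a minimal sketch of the block that path points at, assuming the nesting implied by agents.defaults.model.primary (only the primary key is shown, not the full openclaw.json):

{
  "agents": {
    "defaults": {
      "model": {
        "primary": "ollama/deepseek-v4-flash:cloud"
      }
    }
  }
}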

Two fixes needed:

  1. Change the default model β€” swap deepseek-v4-flash:cloud to a non-verbose model
  2. Check if there's a per-agent override β€” the main agent might have its own config that inherits from defaults

Let me look for the main agent config specifically, since Wadsworth runs as the "main" agent:
assistant: Found the full picture. Here's the issue:

| Agent | Primary Model | Fallbacks |
| --- | --- | --- |
| Wadsworth (main) | deepseek-v4-flash:cloud | glm-5.1:cloud, deepseek-v4-pro:cloud |
| Daedalus | deepseek-v4-flash:cloud | deepseek-v4-pro:cloud, gemma4:26b, glm-5.1:cloud |
| Socrates | kimi-k2.6:cloud | kimi-k2.5:cloud, glm-5.1:cloud |
| Midas | kimi-k2.6:cloud | kimi-k2.5:cloud, glm-5.1:cloud |

Wadsworth and Daedalus are both on deepseek-v4-flash:cloud β€” that's the verbose reasoning model causing the Krilling exec dumps.

Socrates and Midas are on kimi-k2.6:cloud, which is cleaner.

Fix options for Wadsworth:

| Option | Primary | Rationale |
| --- | --- | --- |
| A | kimi-k2.6:cloud | Match Socrates/Midas β€” proven clean |
| B | glm-5.1:cloud | Generalist, fast, no reasoning overhead |
| C | gemma4:26b | Local (not cloud), no reasoning tokens |

I'd go with Option A (kimi-k2.6:cloud) since it's already validated as clean on your system.

Want me to patch both Wadsworth and Daedalus? Or just Wadsworth for now?
user: Option B β€” check to see if there's a per-agent config or Telegram setting. Check the logs of what is being written to Telegram. It's too much and I don't want it. You're doing it too on kimi-k2.6
assistant: Point taken β€” let me check whether this is a Telegram-level verbosity setting rather than just the model. You're seeing it from me on kimi-k2.6 too, so the model can't be the whole story.
assistant: Found it: messages.groupChat.visibleReplies: message_tool

This is the Telegram setting that controls what gets displayed. message_tool means tool call results are visible in the chat. Let me see the full messages config:
assistant: Got it. The current config has messages.groupChat.visibleReplies: "message_tool" which means in groups, only tool calls (like exec results) are visible β€” normal assistant replies are suppressed.

But you're seeing this in DMs (direct chat), not groups. And it's the exec tool output being sent, not normal replies.

Let me check if there's a global messages.visibleReplies setting that might be affecting DMs:
assistant: Now I understand the full picture:

Current config:

{
  "messages": {
    "groupChat": {
      "visibleReplies": "message_tool"
    }
  }
}

What message_tool does in groups: Only message(action=send) tool calls are visible. Normal assistant replies stay private.

But for DMs: There's no global messages.visibleReplies set, so it falls back to the harness default. The Codex harness defaults to message_tool for direct chats too.

This means in your DM with Wadsworth, tool outputs (exec results) are being sent visibly because that's what message_tool mode does β€” it makes tool outputs visible.

If you want normal replies only, no verbose tool dumps, the fix is to set:

{
  "messages": {
    "visibleReplies": "automatic"
  }
}

This restores legacy behavior: normal assistant replies are posted, tool outputs stay internal.

But wait β€” you said "It's showing messages of Krilling and then exec running a whole bunch of information". That sounds like the model itself is emitting verbose reasoning (Krilling = model thinking) AND then the exec tool result is also shown.

So there are two issues:

  1. Model-level verbosity β€” DeepSeek v4 Flash emits reasoning text (Krilling)
  2. Tool output visibility β€” exec results are being sent to chat

Fix #2 first (it's the bigger pain), then #1:

Want me to patch messages.visibleReplies: "automatic" into the config? This will suppress tool output dumps in DMs.
user: Yes, fix #2 first and then let's see how we are doing, but I want that automatic
assistant: Config is valid and hot-reloaded. Here's what changed:

Before:

{
  "messages": {
    "groupChat": {
      "visibleReplies": "message_tool"
    }
  }
}

After:

{
  "messages": {
    "visibleReplies": "automatic",
    "groupChat": {
      "visibleReplies": "message_tool"
    }
  }
}

What this does:
- DMs (your chat with Wadsworth, me, anyone): Normal assistant replies posted. Tool outputs stay internal. No more exec dumps.
- Groups (family group): Unchanged β€” still tool-only mode, normal replies suppressed unless message(action=send) called.

Test it: send Wadsworth a message that triggers a tool call (like asking him to check something). You should get his normal reply, not the raw tool output.

If Wadsworth is still on deepseek-v4-flash:cloud, he might still emit reasoning text (Krilling) in his replies. If that persists after this change, we'll swap his model to glm-5.1:cloud.

Let me know how it looks.
user: How'd we do curbing the openclaw upgrade bugs? Any issues you see?
assistant: Good β€” messages.visibleReplies: "automatic" is in place.

Now let me audit the rest of the upgrade for anything that might bite us:
assistant: Let me check for issues another way β€” look at the actual config for any upgrade-related problems:
assistant: Found 3 remaining issues β€” all related to the DeepSeek v4 Flash model used by Wadsworth (main), Daedalus, and the agents.defaults entry. The messages.visibleReplies fix handles the tool output spam, but these models will still emit verbose reasoning text (Krilling) in their replies.

Here's the full audit:

| Issue | Location | Impact | Fix |
| --- | --- | --- | --- |
| βœ… Fixed | messages.visibleReplies | Tool output spam in DMs | Set to "automatic" |
| ⚠️ Open | agents.defaults.model.primary | DeepSeek reasoning in all default agents | Swap to glm-5.1:cloud |
| ⚠️ Open | agents.list[main].model.primary | Wadsworth verbose replies | Swap to glm-5.1:cloud |
| ⚠️ Open | agents.list[daedalus].model.primary | Daedalus verbose replies | Swap to gemma4:26b or glm-5.1:cloud |

Everything else looks healthy:
- Heartbeat: lightContext=true, isolatedSession=true βœ…
- Logging: redactSensitive=tools βœ…
- Plugins: Telegram enabled βœ…
- Tools: profile=coding βœ…
- Gateway: Running, loopback-only βœ…

Should I patch the three DeepSeek models now? That's the last of the upgrade noise.
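
For reference, a minimal sketch of what that patch could look like, assuming agents.list is keyed by agent name as the agents.list[main] and agents.list[daedalus] paths above suggest (the exact openclaw.json agent schema isn't shown in this session):

{
  "agents": {
    "defaults": {
      "model": { "primary": "glm-5.1:cloud" }
    },
    "list": {
      "main": {
        "model": { "primary": "glm-5.1:cloud" }
      },
      "daedalus": {
        "model": { "primary": "gemma4:26b" }
      }
    }
  }
}

I've used gemma4:26b for Daedalus here, but glm-5.1:cloud works there too per the audit table. I've also left off the ollama/ prefix to match how the fallbacks are listed; keep it if the config expects the full provider-prefixed name. Fallback lists are omitted and would stay as they are.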