📄 context-threshold-investigation.md 4,281 bytes Apr 21, 2026 📋 Raw

Context Compaction Threshold Investigation

Date: 2026-04-21
Status: BLOCKED — Threshold not user-configurable
Finding: Compaction triggers at ~20K tokens despite 128K model capacity


Executive Summary

The 20K token compaction wall is hardcoded in OpenClaw core, not user-configurable via openclaw.json.

Models support 128K context windows, but compaction triggers at ~16× lower threshold, causing:
1. Premature context loss during complex tasks
2. Frequent compaction events (more Ghost Bug triggers)
3. Inefficient use of cloud compute (we're paying for 128K, using 20K)


Investigation Results

Location Result
openclaw.json ❌ No compaction threshold setting found
agents.defaults ❌ No context limit configuration
Environment variables ❌ No OPENCLAW_COMPACTION_THRESHOLD or similar
openclaw config CLI ❌ No exposed configuration options
Model configs contextWindow: 128000 present but ignored by compaction logic

Key Finding

The compaction logic appears to be internal to OpenClaw runtime, not derived from model configuration. The contextWindow settings in openclaw.json are passed to the LLM API but don't control when OpenClaw decides to compact.

Evidence

Model capacities configured:

"kimi-k2.5:cloud": { "contextWindow": 128000, "maxTokens": 8192 }
"minimax-m2.7:cloud": { "contextWindow": 128000, "maxTokens": 8192 }
"glm-5.1:cloud": { "contextWindow": 128000, "maxTokens": 8192 }

Actual compaction observed: ~20K tokens (estimated from session sizes)

Session sizes at compaction:
- Main session: 25KB → compacted to ~6KB
- Socrates contaminated: 263KB (pre-compaction)
- Daedalus contaminated: 269KB (pre-compaction)


Root Cause Hypothesis

OpenClaw likely has a hardcoded default (possibly 20K or 4096*5 tokens) that:
1. Predates large context models (128K)
2. Is designed for conservative memory management
3. Is not exposed in user configuration

The compaction logic probably lives in:
- /dist/core/session-manager.js or similar
- Gateway-level context management
- Not in the user-editable configuration layer


Options

File feature request with OpenClaw to expose:

{
  "session": {
    "compactionThreshold": 100000,  // User-configurable
    "compactionStrategy": "agent-specific"  // Filter by agent ID
  }
}

Option 2: Environment Variable Override

Attempt to set (untested):

export OPENCLAW_COMPACTION_LIMIT=100000
export OPENCLAW_CONTEXT_THRESHOLD=100000

Status: No evidence this is supported in current version.

Option 3: Accept Current Limitation

Continue with The Group Chat Diet:
- Use DMs for deep work (>20K token conversations)
- Reserve Boardroom for short transactional messages
- Accept compaction as cost of multi-agent coordination


Cost Analysis

Scenario Context Used Efficiency Ghost Bug Risk
Current (20K) 16% of 128K ❌ Poor 🔴 High (frequent compaction)
Target (100K) 78% of 128K ✅ Good 🟡 Medium (5× less frequent)
Full (128K) 100% of 128K ✅ Best 🟢 Low (rare compaction)

Recommendation

Immediate: Accept limitation, enforce Group Chat Diet protocol.

Short-term: Escalate to OpenClaw maintainers as companion to Ghost Bug report — request:
1. User-configurable compaction threshold
2. Agent-specific compaction (filter by agentId)
3. Per-model threshold override

Long-term: When fixed, set threshold to 100K tokens for all three agents.


Files

  • Bug report: /shared/project-docs/bugs/openclaw-bug-report-compaction-contamination.md
  • Analysis: /shared/project-docs/bugs/context-limit-analysis.md
  • This investigation: /shared/project-docs/bugs/context-threshold-investigation.md

Action Items

  • [ ] Submit companion feature request to OpenClaw for configurable threshold
  • [ ] Document Group Chat Diet protocol in AGENTS.md
  • [ ] Monitor OpenClaw releases for compaction configuration options
  • [ ] Revisit when threshold becomes configurable