
OpenClaw Features & Configuration Audit

Date: 2026-04-18
Analyst: Socrates (GLM 5.1 deep research)
Scope: Full OpenClaw documentation cross-referenced against current deployment


1. Current State

Deployment Overview

  • Host: titanium-butler (Beelink, Ubuntu 24.04)
  • Gateway: loopback-only, port 18789, token auth
  • Primary model: ollama/glm-5.1:cloud (Ollama Pro)
  • Fallback: ollama/kimi-k2.5:cloud (single fallback)
  • Memory search: nomic-embed-text via Gaming PC Ollama (Tailscale)
  • Channel: Telegram only (DM allowlist + 1 family group)
  • Plugins: memory-wiki, openclaw-web-search, telegram, ollama
  • Tools profile: coding with ollama_web_search + ollama_web_fetch
  • Web tools: search/fetch disabled in tools.web, enabled via alsoAllow
  • Session scope: per-channel-peer
  • Heartbeat: configured at 30m (currently paused — Google auth suspended)
  • Sandbox: not configured
  • Dreaming: not enabled
  • Active Memory: not enabled
  • Hooks: not configured
  • Standing Orders: partially implemented (HEARTBEAT.md + AGENTS.md)
  • mDNS: not explicitly configured (likely default minimal)
  • Cron: 4 jobs (heartbeat:check paused, family:daily-brief paused, heartbeat:daily active, healthcheck:security-audit active)

2. Discoveries — Features We're Not Using

🔴 High-Impact Missing Features

2.1 Dreaming (Memory Consolidation)

Status: Not enabled
Doc: https://docs.openclaw.ai/concepts/dreaming.md

Dreaming is OpenClaw's background memory consolidation system. It runs in three phases:
- Light: Sorts and stages recent short-term material
- Deep: Scores and promotes durable candidates to MEMORY.md
- REM: Reflects on themes and recurring ideas

Why we need it: Currently I manually review daily files and update MEMORY.md during heartbeats. Dreaming automates this with weighted scoring (frequency 0.24, relevance 0.30, query diversity 0.15, recency 0.15, consolidation 0.10, conceptual richness 0.06). It runs at 3 AM by default and produces a Dream Diary.
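The weighted scoring above is just a dot product over per-factor scores. A small illustration (weights are from the doc; the candidate's per-factor values below are invented for the example):

```python
# Dreaming's promotion score as a weighted sum. Weights are from the doc;
# the sample candidate's per-factor scores are made up for illustration.
WEIGHTS = {
    "frequency": 0.24,
    "relevance": 0.30,
    "query_diversity": 0.15,
    "recency": 0.15,
    "consolidation": 0.10,
    "conceptual_richness": 0.06,
}

def promotion_score(factors: dict[str, float]) -> float:
    """Weighted sum of per-factor scores, each assumed to be in [0, 1]."""
    return sum(WEIGHTS[k] * factors.get(k, 0.0) for k in WEIGHTS)

# A hypothetical candidate: mentioned often, highly relevant, very recent.
candidate = {
    "frequency": 0.8,
    "relevance": 0.9,
    "query_diversity": 0.4,
    "recency": 1.0,
    "consolidation": 0.2,
    "conceptual_richness": 0.5,
}
score = promotion_score(candidate)  # 0.722 for these sample values
```

Candidates scoring above some promotion threshold would land in MEMORY.md; the rest stay in short-term staging.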

Config:

```json
{
  "plugins": {
    "entries": {
      "memory-core": {
        "config": {
          "dreaming": {
            "enabled": true,
            "timezone": "America/Chicago",
            "frequency": "0 3 * * *"
          }
        }
      }
    }
  }
}
```

Impact: Eliminates manual memory maintenance, improves recall quality, builds thematic connections across days.


2.2 Active Memory (Pre-Reply Recall)

Status: Not enabled
Doc: https://docs.openclaw.ai/concepts/active-memory.md

Active Memory is a blocking sub-agent that runs BEFORE the main reply, surfacing relevant long-term memory automatically. Currently, I only recall memory when I explicitly search — by then, the moment for natural recall has passed.

Why we need it: I should naturally surface preferences, past decisions, and context without you having to say "remember that..." or me having to guess when to search.

Recommended config:

```json
{
  "plugins": {
    "entries": {
      "active-memory": {
        "enabled": true,
        "config": {
          "agents": ["main"],
          "allowedChatTypes": ["direct"],
          "queryMode": "recent",
          "promptStyle": "balanced",
          "timeoutMs": 15000,
          "maxSummaryChars": 220,
          "persistTranscripts": false,
          "logging": true
        }
      }
    }
  }
}
```

Cost note: Uses our existing nomic-embed-text embeddings on Gaming PC. The sub-agent inherits the current session model (GLM 5.1), so no extra cloud spend unless we configure a dedicated model.


2.3 Hooks (Event-Driven Automation)

Status: Not configured
Doc: https://docs.openclaw.ai/automation/hooks.md

Hooks are small scripts that run when Gateway events fire. Available events include:
- command:new / command:reset / command:stop
- session:compact:before / session:compact:after
- message:received / message:sent
- gateway:startup
- agent:bootstrap

Bundled hooks we should enable:
- session-memory: Saves session context to /memory/ on /new or /reset — preserves context before session resets
- command-logger: Logs all slash commands for audit trail
- boot-md: Runs BOOT.md on gateway startup — could initialize daily checks

Config:

```json
{
  "hooks": {
    "internal": {
      "enabled": true,
      "entries": {
        "session-memory": { "enabled": true },
        "command-logger": { "enabled": true },
        "boot-md": { "enabled": true }
      }
    }
  }
}
```
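Beyond the bundled hooks, custom ones should be possible. The sketch below is a guess at the shape, assuming (hypothetically) that a hook is an executable that receives the event payload as JSON on stdin — check the hooks doc for the actual contract before relying on this:

```python
#!/usr/bin/env python3
# Hypothetical custom hook: append a one-line audit record per event.
# ASSUMPTION: the gateway pipes the event payload to the hook as JSON on
# stdin. The real OpenClaw hook interface may differ.
import json
import sys
from datetime import datetime, timezone

def handle(payload: dict) -> str:
    """Format a single audit line from an event payload."""
    event = payload.get("event", "unknown")
    stamp = datetime.now(timezone.utc).isoformat()
    return f"{stamp} {event} session={payload.get('sessionId', '-')}"

if __name__ == "__main__":
    line = handle(json.load(sys.stdin))
    with open("hook-audit.log", "a") as f:
        f.write(line + "\n")
```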

2.4 Standing Orders (Formal Programs)

Status: Partially implemented via HEARTBEAT.md
Doc: https://docs.openclaw.ai/automation/standing-orders.md

Our HEARTBEAT.md is essentially informal standing orders. The formal pattern is:
1. Define programs with scope, triggers, approval gates, escalation rules
2. Put them in AGENTS.md (auto-injected every session)
3. Enforce with cron jobs

What we should formalize:
- Program: Household Operations — email pipeline, conflict scan, maintenance sentinel
- Program: Security Monitoring — healthcheck, audit, disk checks
- Program: Research & Improvement — periodic self-improvement cycles

Each with explicit authority boundaries and escalation rules.
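As a sketch, one program from the list above might look like this in AGENTS.md (the field names here are our own invention — the standing-orders doc may prescribe a different shape):

```markdown
## Program: Security Monitoring

- Scope: healthcheck, security audit, disk usage checks on titanium-butler
- Triggers: cron healthcheck:security-audit; any failed-login alert
- Authority: read-only checks run unattended; config changes require
  Matt's explicit approval in Telegram DM
- Escalation: notify Matt immediately on audit failures; batch
  low-severity findings into the daily brief
```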


2.5 Model Failover Chain

Status: Single fallback (kimi-k2.5:cloud)
Doc: https://docs.openclaw.ai/concepts/model-failover.md

Our failover chain is thin. If GLM 5.1 goes down AND kimi-k2.5 is also down, we're dead in the water. OpenClaw supports:
- Multiple fallback models in priority order
- Auth profile rotation within a provider
- Cooldown tracking with exponential backoff
- Billing disable detection

Recommended additions to fallback chain:

```json
{
  "agents": {
    "defaults": {
      "model": {
        "primary": "ollama/glm-5.1:cloud",
        "fallbacks": [
          "ollama/kimi-k2.5:cloud",
          "ollama-remote/gemma4:latest",
          "ollama-remote/qwen2.5-coder:7b"
        ]
      }
    }
  }
}
```

This adds Gaming PC local models as tertiary fallbacks. When cloud is down, we degrade to local inference instead of dying.


🟡 Medium-Impact Missing Features

2.6 Heartbeat Optimizations

Status: Running full context every 30m
Doc: https://docs.openclaw.ai/gateway/heartbeat.md

We're burning tokens on every heartbeat by sending full conversation history. Key optimizations:

- lightContext: true — Only inject HEARTBEAT.md from bootstrap files (~2K tokens vs ~100K)
- isolatedSession: true — Fresh session each heartbeat (no conversation history)
- activeHours — Restrict to 06:00-23:00 CST (skip nighttime)
- tasks: blocks in HEARTBEAT.md — Structured interval-based checks that skip when not due

Recommended config:

```json
{
  "agents": {
    "defaults": {
      "heartbeat": {
        "every": "30m",
        "target": "last",
        "lightContext": true,
        "isolatedSession": true,
        "activeHours": {
          "start": "06:00",
          "end": "23:00",
          "timezone": "America/Chicago"
        }
      }
    }
  }
}
```

Savings estimate: ~95% token reduction per heartbeat. From ~100K to ~5K tokens per run.


2.7 Telegram Streaming

Status: Not configured (streaming: "off")
Doc: https://docs.openclaw.ai/gateway/configuration-reference.md

Telegram supports streaming: "partial" which sends progressive updates as the model generates. This gives much faster perceived response times.

Config:

```json
{
  "channels": {
    "telegram": {
      "streaming": "partial"
    }
  }
}
```

Note: Can hit Telegram rate limits on editMessageText. Start with partial and watch for errors.
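If rate-limit errors do appear, the usual remedy is exponential backoff on failed edits. This is not OpenClaw or Telegram API code — just a generic sketch of the retry pattern we'd want applied to editMessageText calls:

```python
import time
from typing import Callable, TypeVar

T = TypeVar("T")

class RateLimited(Exception):
    """Stand-in for Telegram's 429 response on editMessageText."""

def with_backoff(call: Callable[[], T], retries: int = 4,
                 base_delay: float = 0.5,
                 sleep: Callable[[float], None] = time.sleep) -> T:
    """Retry `call` with exponential backoff on RateLimited."""
    for attempt in range(retries):
        try:
            return call()
        except RateLimited:
            if attempt == retries - 1:
                raise  # out of retries: surface the error
            sleep(base_delay * (2 ** attempt))  # 0.5s, 1s, 2s, ...
    raise AssertionError("unreachable")

# Usage sketch: a flaky edit that succeeds on the third try.
attempts = {"n": 0}
def flaky_edit() -> str:
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RateLimited
    return "edited"
```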


2.8 Telegram Reactions Configuration

Status: Default (likely on)
Doc: https://docs.openclaw.ai/gateway/configuration-reference.md

We can fine-tune reaction behavior:

```json
{
  "channels": {
    "telegram": {
      "actions": { "reactions": true, "sendMessage": true },
      "reactionNotifications": "own"
    }
  }
}
```

2.9 Prompt Caching

Status: Not configured
Doc: https://docs.openclaw.ai/reference/prompt-caching.md

OpenClaw supports cacheRetention: "long" for Anthropic and "short" for others. Since we use Ollama Pro (cloud), caching behavior depends on Ollama's API support.

For Ollama Pro (OpenAI-compatible): The provider may support automatic prompt caching. Setting cacheRetention: "short" as a default won't hurt even if Ollama ignores it.

```json
{
  "agents": {
    "defaults": {
      "params": {
        "cacheRetention": "short"
      }
    }
  }
}
```

2.10 TaskFlow (Multi-Step Orchestration)

Status: Not used
Doc: https://docs.openclaw.ai/automation/taskflow.md

TaskFlow manages durable multi-step flows with state tracking across gateway restarts. Our email pipeline could benefit — instead of a single cron "process" command, we could model it as:
1. Fetch emails → 2. Parse appointments → 3. Create calendar events → 4. Send notifications

Each step is a background task with its own state. If step 3 fails (Google auth), the flow pauses and retries instead of dying silently.
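The pause-and-retry behavior can be sketched as persisted step state. This shows the pattern, not TaskFlow's actual API — the state file and field names are invented:

```python
import json
from pathlib import Path
from typing import Callable

STATE_FILE = Path("email_flow_state.json")  # hypothetical state location

STEPS: list[tuple[str, Callable[[], None]]] = []

def step(name: str):
    """Register a named pipeline step in declaration order."""
    def register(fn: Callable[[], None]):
        STEPS.append((name, fn))
        return fn
    return register

def run_flow() -> str:
    """Run steps after the last completed one. A failure (e.g. Google
    auth on the calendar step) persists progress and pauses the flow,
    so the next invocation resumes instead of starting over."""
    state = (json.loads(STATE_FILE.read_text())
             if STATE_FILE.exists() else {"done": []})
    for name, fn in STEPS:
        if name in state["done"]:
            continue  # already completed in an earlier run
        try:
            fn()
        except Exception:
            STATE_FILE.write_text(json.dumps(state))
            return f"paused at {name}"
        state["done"].append(name)
    STATE_FILE.write_text(json.dumps(state))
    return "complete"
```

For the email pipeline, the four steps above would each be registered with `@step("fetch_emails")`, `@step("parse_appointments")`, and so on.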


2.11 Experimental Features

Status: None enabled
Doc: https://docs.openclaw.ai/concepts/experimental-features.md

| Feature | Key | Relevance |
|---|---|---|
| Session Memory Search | agents.defaults.memorySearch.experimental.sessionMemory | HIGH — indexes session transcripts for recall |
| Local Model Lean Mode | agents.defaults.experimental.localModelLean | LOW — we use cloud, not local inference |
| Structured Planning Tool | tools.experimental.planTool | MEDIUM — multi-step task tracking |

Session Memory Search is the big one — it lets memory_search index prior session transcripts. Combined with Active Memory, this means I could recall things we discussed in past conversations without them needing to be in MEMORY.md first.

```json
{
  "agents": {
    "defaults": {
      "memorySearch": {
        "experimental": {
          "sessionMemory": true
        }
      }
    }
  }
}
```

2.12 Sandboxing

Status: Not configured
Doc: https://docs.openclaw.ai/gateway/sandboxing.md

Sandboxing runs tool execution in isolated Docker containers. For our single-operator setup, the ROI is moderate:
- Pro: Limits blast radius if model goes rogue, contains filesystem/process access
- Con: Setup overhead (Docker), our threat model is single trusted operator

Recommendation: Low priority for now. Our gateway is loopback-only, Telegram DMs are allowlisted, and we have one trusted operator. If we ever open DMs or add untrusted content sources, enable sandboxing with mode: "non-main".


🟢 Low-Impact / Future Considerations

2.13 Webhooks

External HTTP triggers for automation. Could be useful for Google Pub/Sub email push notifications instead of IMAP polling.

2.14 Browser Tool

Not configured. Could be useful for Drop-Box document fetcher (render JS-heavy pages), but Jina Reader is working well for now.

2.15 TTS / Voice

Not configured. Could enable voice messages via ElevenLabs (sag skill available).

2.16 Image Generation

Not configured. Would need a DALL-E or similar provider.

2.17 Discord Channel

Not configured. Could be useful for a second communication channel.

2.18 ACP Harness / Codex Integration

For delegating coding tasks to dedicated coding agents. Could be useful for family-assistant development.


3. Security Review

Current Security Posture: ✅ GOOD

| Check | Status | Notes |
|---|---|---|
| Gateway bind | ✅ loopback | Not exposed to LAN/internet |
| Gateway auth | ✅ token | 48-char hex token configured |
| Tailscale Serve/Funnel | ✅ off | No public exposure |
| DM policy | ✅ allowlist | Only Matt's Telegram ID |
| Group policy | ✅ allowlist + requireMention | Family group + Matt-only allowFrom |
| Session scope | ✅ per-channel-peer | DM isolation enabled |
| Plugin allowlist | ✅ 5 explicit plugins | memory-core in allow but not in entries (fixed) |
| Tool profile | ✅ coding | Reasonable for dev assistant |
| File permissions | ⚠️ UNKNOWN | Run openclaw security audit to verify |

Warnings / Recommendations

  1. Telegram bot token in config — openclaw.json has the bot token in plaintext. Consider using channels.telegram.tokenFile pointing to a secret file instead, or use the env var TELEGRAM_BOT_TOKEN.

  2. Ollama Pro API key in config — ollama-cloud.apiKey is in plaintext. Consider an env var or SecretRef.

  3. Gateway token length — 48 hex chars = 192 bits. Adequate, but consider rotating with openclaw doctor --generate-gateway-token.

  4. Missing logging.redactSensitive — Not explicitly set. Default may be off. Add:

     ```json5
     { "logging": { "redactSensitive": "tools" } }
     ```

  5. No exec approval policy — Currently default (likely security: "full" for trusted operator). This is fine for our threat model but should be documented.

  6. mDNS — Not explicitly configured. Verify it's running in minimal mode (default for non-loopback):

     ```json5
     { "discovery": { "mdns": { "mode": "minimal" } } }
     ```

  7. Stale models in config — llama3.2:1b, llama3.2:3b, and llama3.2:1b-instruct-q4_K_M are listed but likely not in use. The security audit will flag these as weak/legacy.

  8. No plugin code safety scan — Run openclaw security audit --deep after any plugin install.
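Item 3's arithmetic checks out (48 hex chars × 4 bits = 192 bits). If openclaw doctor isn't handy, an equivalent token can be generated with Python's standard library:

```python
import secrets

# 24 random bytes -> 48 lowercase hex characters (192 bits of entropy),
# matching the shape of the current gateway token.
token = secrets.token_hex(24)
print(len(token))  # 48
```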


4. Recommendations (Ranked by Impact)

🔴 Must-Do (High Impact, Low Effort)

| # | Item | Effort | Impact | Config Change |
|---|---|---|---|---|
| 1 | Enable Dreaming | 5 min | HIGH | Add memory-core.config.dreaming block |
| 2 | Enable Active Memory | 5 min | HIGH | Add active-memory plugin entry |
| 3 | Heartbeat lightContext + isolatedSession | 2 min | HIGH (token savings) | Add 2 booleans to heartbeat config |
| 4 | Extend model fallback chain | 2 min | MEDIUM | Add 2 Gaming PC models to fallbacks |
| 5 | Enable session-memory search (experimental) | 2 min | MEDIUM | Add experimental flag to memorySearch |

🟡 Should-Do (Medium Impact, Some Effort)

| # | Item | Effort | Impact | Config Change |
|---|---|---|---|---|
| 6 | Enable hooks (session-memory, command-logger) | 10 min | MEDIUM | Add hooks.internal.entries |
| 7 | Heartbeat activeHours | 2 min | MEDIUM (night quiet) | Add activeHours block |
| 8 | Telegram streaming: "partial" | 2 min | MEDIUM (UX) | Add streaming key |
| 9 | Formalize standing orders in AGENTS.md | 30 min | MEDIUM (reliability) | Edit AGENTS.md |
| 10 | Logging redactSensitive: "tools" | 2 min | LOW-MEDIUM | Add logging block |
| 11 | Explicit mDNS minimal mode | 2 min | LOW | Add discovery.mdns block |
| 12 | HEARTBEAT.md tasks: blocks | 15 min | MEDIUM (skip undue tasks) | Edit HEARTBEAT.md |

🟢 Nice-to-Have (Lower Priority)

| # | Item | Effort | Impact |
|---|---|---|---|
| 13 | TaskFlow for email pipeline | 1-2 hrs | Medium (resilience) |
| 14 | Webhooks for Google Pub/Sub | 2-3 hrs | Medium (push vs poll) |
| 15 | Auth failure circuit breaker | 1 hr | High (when Google auth is restored) |
| 16 | Clean stale models from config | 5 min | Low (clarity) |
| 17 | Move Telegram token to tokenFile | 10 min | Low (security hygiene) |
| 18 | Cache retention config | 2 min | Low (if Ollama supports it) |
| 19 | Sandbox mode (non-main) | 30 min + Docker | Low (single-operator) |

5. Quick Wins — Config Changes to Make NOW

These 5 changes take under 20 minutes total and deliver outsized value:

```json5
// Add to openclaw.json:

// 1. Dreaming
"plugins": {
  "entries": {
    "memory-core": {
      "config": {
        "dreaming": {
          "enabled": true,
          "timezone": "America/Chicago",
          "frequency": "0 3 * * *"
        }
      }
    }
  }
}

// 2. Active Memory
"plugins": {
  "entries": {
    "active-memory": {
      "enabled": true,
      "config": {
        "agents": ["main"],
        "allowedChatTypes": ["direct"],
        "queryMode": "recent",
        "promptStyle": "balanced",
        "timeoutMs": 15000,
        "maxSummaryChars": 220,
        "logging": true
      }
    }
  }
}

// 3. Heartbeat optimization
"agents": {
  "defaults": {
    "heartbeat": {
      "lightContext": true,
      "isolatedSession": true,
      "activeHours": {
        "start": "06:00",
        "end": "23:00",
        "timezone": "America/Chicago"
      }
    }
  }
}

// 4. Model fallback chain
"agents": {
  "defaults": {
    "model": {
      "fallbacks": [
        "ollama/kimi-k2.5:cloud",
        "ollama-remote/gemma4:latest",
        "ollama-remote/qwen2.5-coder:7b"
      ]
    }
  }
}

// 5. Session memory search (experimental)
"agents": {
  "defaults": {
    "memorySearch": {
      "experimental": {
        "sessionMemory": true
      }
    }
  }
}
```

6. DM Summary — Top 5 Items for Matt

  1. 🧠 Enable Dreaming + Active Memory — These two features automate memory consolidation and pre-reply recall. Combined with session-memory search, I'll remember context naturally instead of requiring manual searches. Config-only, 10 min total.

  2. 💸 Heartbeat token savings — lightContext: true + isolatedSession: true cuts per-heartbeat tokens by ~95%. With activeHours, no more 3 AM heartbeats. 5 min config change.

  3. 🔗 Model fallback chain — Add Gaming PC local models (gemma4, qwen2.5-coder) as tertiary fallbacks. When cloud goes down, we degrade to local instead of dying. 2 min config change.

  4. 🪝 Enable hooks — session-memory preserves context on session resets, command-logger adds an audit trail. 10 min setup.

  5. 🔐 Security audit — Run openclaw security audit --deep to check file permissions, token hygiene, and config exposure. I found several items (token in config, no redaction, stale models) that warrant verification.


Analysis complete. Wiki page: entities/openclaw-features-audit.md