PRODUCTION SYSTEM — RUNNING SINCE FEBRUARY 2026

An AI That Manages
Its Own Mind.

Memory that grows. Context that heals itself. Knowledge that self-corrects. An agent that gets smarter while it sleeps — and has been running in production since February 2026.

3 · Sleep Cycle Phases
4 · Memory Tiers
3 · Compaction Strategies
∞ · Context Horizon

The gap isn't intelligence.
It's architecture.

Language models are commoditizing. The bottleneck has shifted from reasoning to everything around it — state management, memory, tool orchestration, alignment, and the ability to execute complex work without losing context. Most agent failures are context failures, not reasoning failures.

THE CHATBOT PROBLEM

Every conversation starts from zero. No persistent memory. No understanding of past decisions. No continuity across sessions.

THE MONOLITH PROBLEM

State and execution fused together. If the LLM crashes, everything is lost. Can't scale. Can't recover. Can't run tasks in parallel.

THE FRAMEWORK PROBLEM

Toolkits that ship primitives, not systems. No alignment model. No memory architecture. No gating. Assembly required — and most assemblies fail.

SYSTEM ARCHITECTURE

State separated from execution.

Core owns all persistent state — memory, tasks, credentials, conversations. Engines are stateless LLM executors that connect via RPC. Services are thin channel adapters. Each component deploys, scales, and updates independently.

[Architecture diagram — the Core (state engine, scheduler & orchestration, memory & knowledge graph, tool gating & credentials, research-gated execution) connects via RPC to stateless Engines (LLM + shell, 1…N) and to Services (messaging, email, social media, calendar & tasks), backed by persistent memory (working blocks, long-term archive, knowledge graph) and the Constitution (values · governance · process · policies).]
CONSTITUTIONAL ALIGNMENT

Values, not guardrails.

Most AI systems use guardrails — lists of things the model can't do. Optakt uses a constitution: values, governance, and process that guide every decision in ambiguous situations. The constitution is compiled into the system prompt alongside tool policies and domain-specific skills.

The result is an agent that knows when to act autonomously, when to draft for review, and when to ask — not because of rules, but because it understands the principles behind them.

Truth. Admit uncertainty. Never fabricate.
Courage. Act confidently within agreements.
Devotion. Care deeply about the principal's goals.
Humility. Know your limits. Check assumptions.
Compassion. Understand context. Match energy.
SLEEP CYCLES

Gets smarter while it sleeps.

Like human sleep, the agent cycles through three phases of background maintenance during idle time. Conversations are mined for unrealized insights. The archive is cross-referenced for contradictions. Knowledge bases are verified against primary sources. The system gets more coherent every day without operator effort.

[Sleep cycle diagram — background knowledge maintenance in three phases. Dreaming (most frequent): mine recent conversations, extract decisions & insights, store durable knowledge. Reflection (daily): find contradictions, verify against sources, correct stale knowledge. Consolidation (weekly): cross-reference knowledge, merge overlapping blocks, verify against reality.]
NAP · 30 MIN IDLE

Dreaming only. Mines recent conversations for insights, decisions, and reasoning that wasn't captured during live work.

LIGHT SLEEP · 2–4 HR IDLE

Phases 1 and 2. After dreaming, reflects on the archive — finds contradictions, promotes knowledge to memory, amends stale entries.

DEEP SLEEP · 8+ HR IDLE

All three phases. Overnight, the agent consolidates memory — merges overlapping blocks, verifies claims against live systems, prunes stale knowledge.

TASK EXECUTION

Thinks before it acts.
Captures what it learns.

Every task passes through two programmatic gates. Before execution, the agent automatically searches six knowledge sources — archive, memory, history, web, codebase, and documents. After execution, decisions and outcomes are committed to long-term memory and a searchable archive. Nothing is learned and then forgotten.

[Research-gated execution diagram — Research (programmatic gate): long-term memory, historical records, past conversations, web, codebase, documents. Execute (full autonomy): shell access, tool gating, multi-engine RPC, success criteria, verification loop, operator interrupt. Capture (programmatic gate): archive decisions, update memory, record lessons, amend history, flag follow-ups, verify completion.]
MEMORY ARCHITECTURE

Four layers. One coherent picture.

Knowledge is organized into four layers, from task-specific to universal. As information proves valuable across conversations, it automatically migrates upward — becoming more persistent, more available, and more efficiently cached. Multi-signal hybrid search makes everything instantly retrievable regardless of where it lives.

[Four-layer knowledge diagram — Permanent: always available; core identity and universal knowledge. Elevated: proven knowledge elevated from active use; widely available. Persistent: durable knowledge extracted from conversations over time. Contextual: task-specific context; loaded on demand, released when done.]
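The upward migration can be sketched as a one-tier-at-a-time promotion rule. The `Block` struct, the `Promote` function, and the "distinct conversations" threshold are illustrative assumptions — the text says only that proven knowledge migrates upward — but the sketch shows the mechanism: value is demonstrated at each tier before the block climbs.

```go
package main

// Tier ordering from task-specific to universal, per the four layers above.
const (
	Contextual = iota
	Persistent
	Elevated
	Permanent
)

// Block is an illustrative memory block.
type Block struct {
	Tier          int
	Conversations int // distinct conversations in which the block proved useful
}

// Promote migrates a block one tier upward once it has proven valuable
// across enough conversations, capped at Permanent.
func Promote(b *Block, threshold int) {
	if b.Tier < Permanent && b.Conversations >= threshold {
		b.Tier++
		b.Conversations = 0 // must prove itself again at the new tier
	}
}
```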
CONTEXT MANAGEMENT

Infinite horizon. Bounded cost.

LLM context windows are expensive — every token is billed on every request. Optakt structures its context to optimally map onto each provider’s caching mechanism. Stable knowledge reads from cache at a fraction of the cost. Only changed segments are rewritten. The result: infinite conversation horizon at bounded, predictable cost.

Self-Compacting Context

Context grows as conversations progress. Multiple compaction strategies reduce it back, each operating at a different frequency and depth — like a respiratory system.

Collapse. Deflates tool output at 95% savings — pure math, no AI call.
Compact. Summarizes conversation into dense anchors at 80–90% compression.
Fold. Merges anchors into frozen memory — recent stays vivid, older fades to essence.

The system breathes. Context fills, compacts, fills again. Indefinitely.
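The cheapest strategy, collapse, needs no model call because it is pure string arithmetic. Here is a minimal sketch: `Collapse` and its head-and-tail policy are assumptions (the real strategy may keep different regions), but it illustrates how a large tool output deflates deterministically — the actual savings depend on output size and the keep budget.

```go
package main

import "strconv"

// Collapse deflates a large tool output by keeping only its head and
// tail, replacing the middle with a byte-count marker. No model call.
func Collapse(output string, keep int) string {
	if len(output) <= 2*keep {
		return output // already small enough: pass through unchanged
	}
	omitted := len(output) - 2*keep
	return output[:keep] +
		"\n… [" + strconv.Itoa(omitted) + " bytes collapsed] …\n" +
		output[len(output)-keep:]
}
```

Because the operation is deterministic, it can run on every turn at zero inference cost — which is why it is the most frequent of the three strategies.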

Provider-Optimized Caching

The context is structured into ordered segments based on how frequently each type of content changes. Optakt maps these segments onto each LLM provider’s specific caching mechanism — ensuring that stable knowledge is read from cache at a fraction of the input cost, while only actively changing segments are rewritten.

Identity & Policies · Rarely changes
Permanent Knowledge · Curated, stable
Elevated Knowledge · Proven, durable
Conversation History · Grows, compresses
Active Work · Changes every turn

The more stable a segment, the more efficiently it caches. In practice, 60–80% of every request reads from cache — dramatically reducing the per-request cost of maintaining rich, persistent context.
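Why ordering by stability pays off can be shown with a prefix-cache sketch. Most providers cache from the front of the prompt, so everything before the first changed segment is a cache read. `Segment` and `CachedPrefix` are illustrative names, and provider-specific details (cache breakpoints, TTLs) are deliberately omitted.

```go
package main

// Segment is one ordered slice of the context, most stable first.
type Segment struct {
	Name    string
	Changed bool // did this segment change since the last request?
}

// CachedPrefix returns how many leading segments can be read from a
// provider's prefix cache: everything before the first changed segment.
func CachedPrefix(ctx []Segment) int {
	for i, s := range ctx {
		if s.Changed {
			return i
		}
	}
	return len(ctx)
}
```

With the five segments above ordered most-stable-first, a typical turn changes only the trailing segments, so the bulk of the request is served from cache.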

CAPABILITIES

Built for production.

🧠

Three-Phase Sleep Cycles

During idle time, the agent mines conversations for unrealized insights, resolves contradictions in its knowledge, and verifies claims against primary sources. It gets smarter while it sleeps.

📐

Provider-Optimized Caching

Context is structured by stability so that each provider's caching mechanism is used optimally. Stable knowledge reads from cache. Only changed content is rewritten.

🔍

Research-Gated Execution

Before acting, the agent automatically searches the relevant knowledge sources for each task. After acting, decisions and outcomes are committed to permanent storage. Nothing learned is forgotten.

🏗️

Four-Layer Knowledge

Knowledge is organized from task-specific to universal. As information proves valuable, it migrates upward automatically — becoming more persistent, more available, and more efficiently cached.

🔗

Hybrid Search

Multi-signal retrieval combining keyword, semantic, and graph-based search. Results merged by relevance. Every piece of knowledge is instantly retrievable regardless of where it lives.
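One common way to merge ranked lists from keyword, semantic, and graph search is reciprocal rank fusion; the sketch below assumes that technique, though the system's actual merge function is not specified. `FuseRRF` is a hypothetical name.

```go
package main

import "sort"

// FuseRRF merges ranked result lists with reciprocal rank fusion: each
// document scores 1/(k + rank) per list, and scores sum across lists.
func FuseRRF(k float64, rankings ...[]string) []string {
	scores := map[string]float64{}
	for _, ranking := range rankings {
		for rank, id := range ranking {
			scores[id] += 1.0 / (k + float64(rank+1))
		}
	}
	ids := make([]string, 0, len(scores))
	for id := range scores {
		ids = append(ids, id)
	}
	sort.Slice(ids, func(i, j int) bool {
		if scores[ids[i]] != scores[ids[j]] {
			return scores[ids[i]] > scores[ids[j]]
		}
		return ids[i] < ids[j] // deterministic tie-break
	})
	return ids
}
```

Rank-based fusion needs no score calibration across signals, which is why it suits merging keyword, vector, and graph results that score on incompatible scales.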

⚖️

Constitutional Alignment

Values, governance, and process compiled into the system prompt. The agent knows when to act autonomously, when to draft for review, and when to ask first.

🔌

Modular Deployment

Core owns state. Engines execute. Services connect channels. Each component deploys, scales, and updates independently. Cap'n Proto RPC between all components.

🛡️

Programmatic Tool Gating

Scope enforcement, rate limiting, approval queues, schema validation. Deterministic Go code — no LLM cost, no circumvention. Phase-specific tool grants with least privilege.

🔐

Credential Security

Secrets are encrypted at rest and never exposed to the LLM. The operator decrypts on startup. Credentials are injected directly into tool execution environments by name — the agent never sees the values.

See it in action.

We deploy tailored AI agents for service businesses. Your workflows. Your data. Your agent.

@maxintechnology