A modular agent platform that separates state from execution. Constitutional alignment. Persistent memory with hybrid search. Programmatic tool gating. Multi-engine horizontal scaling. Deploy one Core per project, scale Engines to match load.
Language models are commoditizing. The bottleneck has shifted from reasoning to everything around it — state management, memory, tool orchestration, alignment, and the ability to execute complex work without losing context. Most agent failures are context failures, not reasoning failures.
Every conversation starts from zero. No persistent memory. No understanding of past decisions. No continuity across sessions.
State and execution fused together. If the LLM crashes, everything is lost. Can't scale. Can't recover. Can't run tasks in parallel.
Toolkits that ship primitives, not systems. No alignment model. No memory architecture. No gating. Assembly required — and most assemblies fail.
Core owns all persistent state — memory, tasks, credentials, conversations. Engines are stateless LLM executors that connect via RPC. Services are thin channel adapters. Each component scales, crashes, and restarts independently.
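The split can be sketched in a few lines of Go. This is an illustrative sketch, not Optakt's actual API: `Workload`, `StateDelta`, and the other names here are hypothetical, and plain function calls stand in for the RPC boundary.

```go
package main

import "fmt"

// Workload is a self-contained unit Core hands to an Engine.
type Workload struct {
	ID      string
	Prompt  string
	Context []string // compiled from Core's persistent memory
}

// Result carries output plus state changes back to Core.
type Result struct {
	WorkloadID string
	Output     string
	StateDelta map[string]string // only Core persists these
}

// Engine executes and returns; it keeps nothing between calls.
type Engine interface {
	Execute(w Workload) (Result, error)
}

// Core owns all persistent state.
type Core struct {
	memory map[string]string
}

// Dispatch sends a workload out and folds the result back into state.
func (c *Core) Dispatch(e Engine, w Workload) error {
	res, err := e.Execute(w)
	if err != nil {
		// Engines are disposable: on failure, Core can resubmit elsewhere.
		return err
	}
	for k, v := range res.StateDelta {
		c.memory[k] = v // state changes land only in Core
	}
	return nil
}

// echoEngine is a stub standing in for a real LLM executor.
type echoEngine struct{}

func (echoEngine) Execute(w Workload) (Result, error) {
	return Result{WorkloadID: w.ID, Output: "done",
		StateDelta: map[string]string{"last_task": w.ID}}, nil
}

func main() {
	core := &Core{memory: map[string]string{}}
	core.Dispatch(echoEngine{}, Workload{ID: "t1", Prompt: "hello"})
	fmt.Println(core.memory["last_task"])
}
```

Because the Engine returns a delta instead of writing state itself, an Engine crash loses nothing that matters: Core still holds the authoritative copy and can hand the same workload to another Engine.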
Most AI systems use guardrails — lists of things the model can't do. Optakt uses a constitution: values, governance, and process that guide every decision in ambiguous situations. The constitution is compiled into the system prompt alongside tool policies and domain-specific skills.
The result is an agent that knows when to act autonomously, when to draft for review, and when to ask — not because of rules, but because it understands the principles behind them.
21 cognitive tools, each with programmatic gating. No tool call reaches execution without passing scope, schema, rate-limit, cache, and approval checks — all in deterministic Go, no LLM cost.
Three-signal retrieval: BM25 full-text, semantic similarity via Voyage 4 Large embeddings, and knowledge graph expansion. Reciprocal Rank Fusion merges results. No single-signal blind spots.
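Reciprocal Rank Fusion itself is a small, well-known algorithm: each document scores the sum of `1/(k + rank)` across the ranked lists it appears in, with `k = 60` as the conventional damping constant. A minimal sketch (the three input lists are made up for illustration):

```go
package main

import (
	"fmt"
	"sort"
)

// rrfFuse merges ranked result lists with Reciprocal Rank Fusion.
// Each list is ordered best-first; ranks are 1-based.
func rrfFuse(k float64, lists ...[]string) []string {
	scores := map[string]float64{}
	for _, list := range lists {
		for rank, doc := range list {
			scores[doc] += 1.0 / (k + float64(rank+1))
		}
	}
	docs := make([]string, 0, len(scores))
	for d := range scores {
		docs = append(docs, d)
	}
	sort.Slice(docs, func(i, j int) bool {
		if scores[docs[i]] != scores[docs[j]] {
			return scores[docs[i]] > scores[docs[j]]
		}
		return docs[i] < docs[j] // deterministic tie-break
	})
	return docs
}

func main() {
	bm25 := []string{"a", "b", "c"}     // full-text ranking
	semantic := []string{"b", "a", "d"} // embedding ranking
	graph := []string{"b", "d"}         // knowledge-graph expansion
	fmt.Println(rrfFuse(60, bm25, semantic, graph)) // → [b a d c]
}
```

Note why fusion beats any single signal here: "b" is never first in the full-text list, but consistent mid-to-high placement across all three signals pushes it to the top.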
Engines are disposable. Fresh filesystem per workload. If one crashes, Core resubmits the workload to a fresh Engine. Multiple Engines run simultaneously on different conversations. Horizontal scaling without state synchronization.
Analyze → Decompose → Plan → Provision → Execute → Review → Assess → Study. Every task follows a structured lifecycle with phase-specific tool grants and success criteria verified against reality.
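Phase-specific tool grants reduce to a lookup table checked before any call is honored. A sketch, assuming hypothetical tool names (the actual grants per phase are not specified in the source):

```go
package main

import "fmt"

// phaseGrants maps each lifecycle phase to the tools it may use.
// Phase names come from the 8-phase pipeline; the tool lists are illustrative.
var phaseGrants = map[string][]string{
	"analyze":   {"memory_search", "read_file"},
	"decompose": {"memory_search"},
	"plan":      {"memory_search", "task_create"},
	"provision": {"credential_fetch"},
	"execute":   {"shell", "write_file", "http"},
	"review":    {"read_file", "diff"},
	"assess":    {"memory_search"},
	"study":     {"memory_write"},
}

// granted reports whether a tool is allowed in a given phase.
func granted(phase, tool string) bool {
	for _, t := range phaseGrants[phase] {
		if t == tool {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(granted("execute", "shell")) // true
	fmt.Println(granted("analyze", "shell")) // false: analysis reads, never mutates
}
```

The table is ordinary data, so changing what a phase may do is a one-line edit reviewed like any other code change, not a prompt tweak.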
Five-gate chain: scope, schema, rate limit, cache, approval. Every tool call passes through deterministic Go code before execution. No LLM involvement in access control decisions.
Telegram, WhatsApp, email, social media, calendar — all channels unified through thin service adapters. Each service connects to Core via Cap'n Proto RPC. Add channels without touching the agent.
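An adapter is just a translation from one channel's native payload to a common message shape. A sketch, with plain Go standing in for the Cap'n Proto transport and a hypothetical Telegram-style payload:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// InboundMessage is the channel-neutral shape every Service hands to Core.
type InboundMessage struct {
	Channel string
	Sender  string
	Text    string
}

// Service is the thin adapter contract: normalize in, deliver out.
type Service interface {
	Receive(raw []byte) (InboundMessage, error)
	Send(to, text string) error
}

// telegramService translates one channel's payload into the common shape.
// Adding a channel means adding one of these; the agent never changes.
type telegramService struct{}

func (telegramService) Receive(raw []byte) (InboundMessage, error) {
	var upd struct {
		Message struct {
			From struct{ Username string } `json:"from"`
			Text string                    `json:"text"`
		} `json:"message"`
	}
	if err := json.Unmarshal(raw, &upd); err != nil {
		return InboundMessage{}, err
	}
	return InboundMessage{
		Channel: "telegram",
		Sender:  upd.Message.From.Username,
		Text:    upd.Message.Text,
	}, nil
}

func (telegramService) Send(to, text string) error { return nil } // stub

func main() {
	raw := []byte(`{"message":{"from":{"username":"ada"},"text":"hi"}}`)
	msg, _ := telegramService{}.Receive(raw)
	fmt.Println(msg.Channel, msg.Sender, msg.Text)
}
```

Core only ever sees `InboundMessage`, so channel-specific quirks stay inside the adapter that owns them.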
Multi-provider fallback chains with subscription and API models. Claude Code, Anthropic API, OpenAI — automatic failover. Per-phase model assignment: expensive models for reasoning, fast models for compaction.
Pipeline collapse compresses task traces after completion. Anchor compaction summarizes older messages into rolling and frozen anchors — two layers of progressive compression. Emergency stripping handles hard token limits mid-step. Full history preserved separately.
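Of the three mechanisms, emergency stripping is the simplest to sketch: when the live conversation exceeds a hard token limit, drop the oldest non-anchor messages until it fits, leaving anchors untouched. The types and the oldest-first policy here are illustrative assumptions, and the full history is assumed to live in the separate append-only store:

```go
package main

import "fmt"

// Message is one entry in the live conversation window.
type Message struct {
	Role   string
	Text   string
	Anchor bool // rolling/frozen anchors are never stripped
	Tokens int
}

// emergencyStrip drops the oldest non-anchor messages until the
// conversation fits the hard token limit. Stripping is safe because
// full history is preserved elsewhere.
func emergencyStrip(msgs []Message, limit int) []Message {
	total := 0
	for _, m := range msgs {
		total += m.Tokens
	}
	out := make([]Message, 0, len(msgs))
	for _, m := range msgs { // oldest first
		if total > limit && !m.Anchor {
			total -= m.Tokens // strip and re-check the budget
			continue
		}
		out = append(out, m)
	}
	return out
}

func main() {
	msgs := []Message{
		{Role: "system", Text: "anchor summary", Anchor: true, Tokens: 50},
		{Role: "user", Text: "old detail", Tokens: 100},
		{Role: "assistant", Text: "recent step", Tokens: 80},
	}
	for _, m := range emergencyStrip(msgs, 150) {
		fmt.Println(m.Text)
	}
}
```

The anchor flag is what separates this from naive truncation: the compressed summaries of older context survive even under the hardest limit, so the agent never loses its grip on what the task is.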
Write code, run builds, manage git, deploy services, browse the web, process documents. The Engine has unrestricted local execution — gated at Core, not at the tool level.
Not a system prompt — a compiled operating system. Constitution, tool policies, and domain skills assembled per-workload. The agent's behavior is deterministic at the architecture level, adaptive at the reasoning level.
Every task flows through an 8-phase pipeline. Each phase has a specific job, specific tool grants, and specific boundaries. The agent carries full context throughout — no handoff loss between phases. Three compaction mechanisms keep the live conversation manageable: pipeline collapse compresses completed task traces, anchor compaction progressively summarizes into rolling and frozen layers, and emergency stripping handles hard token limits mid-step.
A strict information hierarchy with authority flowing downward. The constitution defines behavior, memory holds living knowledge, the archive records decisions and lessons. History preserves the full audit trail — every message, append-only, never compacted.
We deploy tailored AI agents for service businesses. Your workflows. Your data. Your agent.