SINT OS is what happens when you combine OpenClaw (the agent runtime), SINT Protocol (the governance engine), SINT Avatar (the face), and multimodal AI (the senses) into a single operating system for autonomous agents.
SINT OS is not a separate product — it’s the unified stack. Each component already exists and runs independently. SINT OS is the integration layer that makes them work as one system.

Architecture

┌─────────────────────────────────────────────────────────────┐
│                         SINT OS                              │
│                                                              │
│  ┌──────────┐  ┌──────────┐  ┌──────────┐  ┌─────────────┐ │
│  │  Avatar   │  │ Console  │  │ Outreach │  │  CMO        │ │
│  │  (Face)   │  │ (Brain)  │  │ (Hands)  │  │ (Content)   │ │
│  │  3D+Voice │  │ 31 mods  │  │ LinkedIn │  │ 18 skills   │ │
│  └────┬──────┘  └────┬─────┘  └────┬─────┘  └─────┬───────┘ │
│       │              │             │               │         │
│       └──────────────┼─────────────┼───────────────┘         │
│                      │             │                          │
│          ┌───────────┴─────────────┴───────────┐             │
│          │     @sint/os-core (Orchestrator)     │             │
│          │  Boot → Govern → React → Evidence    │             │
│          └───────────┬─────────────────────────┘             │
│                      │                                        │
│     ┌────────────────┼────────────────────┐                  │
│     │                │                    │                   │
│  ┌──┴──────────┐  ┌──┴──────────┐  ┌─────┴──────────┐      │
│  │  OpenClaw    │  │  SINT       │  │  Evidence      │      │
│  │  Adapter     │  │  Protocol   │  │  HUD           │      │
│  │  (T0-T3)    │  │  (Gateway)  │  │  (Ledger View) │      │
│  └──┬──────────┘  └──┬──────────┘  └────────────────┘      │
│     │                │                                       │
│     │   ┌────────────┴─────────────────────┐                │
│     │   │     SINT Protocol Core            │                │
│     │   │  • Capability Tokens (Ed25519)    │                │
│     │   │  • Policy Gateway (32 endpoints)  │                │
│     │   │  • Evidence Ledger (SHA-256)       │                │
│     │   │  • 12 Protocol Bridges            │                │
│     │   └───────────────────────────────────┘                │
│     │                                                        │
│  ┌──┴──────────────────────────────────┐                    │
│  │     OpenClaw Runtime                 │                    │
│  │  • WebSocket Gateway (port 18789)    │                    │
│  │  • 20+ Channel Plugins              │                    │
│  │  • MCP Tool Router                   │                    │
│  │  • Session Manager                   │                    │
│  │  • Cron Scheduler                    │                    │
│  │  • Docker Sandbox                    │                    │
│  └──────────────────────────────────────┘                    │
└──────────────────────────────────────────────────────────────┘

The Stack

| Layer | Component | What It Does | Repo |
| --- | --- | --- | --- |
| Face | SINT Avatar | 3D WebGL avatar, ElevenLabs TTS, Conversation Compiler, 18 widget types | `sint-ai/sint-avatars` |
| Brain | SINT Console | 31 features, visual workflow builder, agent orchestration, Web3 | `sint-ai/sint-agents` |
| Hands | SINT Outreach | LinkedIn automation, auto-send engine, sequence engine, A/B testing | `sint-ai/sint-outreach` |
| Content | CMO Operator | 18-skill video→content pipeline, multi-channel publish | `sint-ai/sint-cmo-operator` |
| Spine | SINT Protocol | Governance: capability tokens, policy gateway, evidence ledger, 12 bridges | `sint-ai/sint-protocol` |
| OS | OpenClaw | Agent runtime: gateway, channels, MCP, sessions, cron, sandbox | `openclaw/openclaw` |

Core Packages

@sint/os-core

The main entrypoint. Boots all components, orchestrates governance, connects avatar reactions to policy events.
```typescript
import { SintOS } from "@sint/os-core";

const os = new SintOS({
  gatewayUrl: "http://localhost:4100",   // SINT Protocol gateway
  agentId: "my-agent-pubkey",
  openclawWsUrl: "ws://127.0.0.1:18789", // OpenClaw gateway
  avatar: {
    serverUrl: "http://localhost:3005",   // SINT Avatar
  },
  evidenceHud: { enabled: true },
  crossSystemPolicies: DEFAULT_PHYSICAL_POLICIES,
});

await os.boot();
```
What boot() does:
  1. Connects to SINT Protocol gateway → verifies health
  2. Initializes OpenClaw adapter → maps all tools to T0-T3 tiers
  3. Connects to Avatar server → verifies 3D face is running
  4. Starts Evidence HUD → begins ledger stream
  5. Activates cross-system policy engine
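The ordering above matters: a failure at any step aborts the boot so no component ever runs ungoverned. A minimal sketch of that fail-fast sequencing, with illustrative step names (not the real `@sint/os-core` internals):

```typescript
// Toy boot sequencer: steps run strictly in order, and a throwing step
// aborts the whole boot. Real steps would be async network calls
// (gateway health check, WebSocket connect, etc.); sync here for brevity.
type BootStep = { name: string; run: () => void };

function bootSequence(steps: BootStep[]): string[] {
  const completed: string[] = [];
  for (const step of steps) {
    step.run();               // throws → boot aborts here
    completed.push(step.name);
  }
  return completed;
}

// Wiring that mirrors the five steps above (names are illustrative)
const order = bootSequence([
  { name: "gateway-health", run: () => {} },
  { name: "openclaw-tiers", run: () => {} },
  { name: "avatar-connect", run: () => {} },
  { name: "evidence-hud",   run: () => {} },
  { name: "policy-engine",  run: () => {} },
]);
```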

@sint/openclaw-adapter

The governance choke-point. Every OpenClaw action passes through here before execution.
```typescript
const adapter = os.getAdapter();

// Govern a tool call
const result = await adapter.governToolCall({
  tool: "exec",
  params: { command: "rm -rf /tmp/data" },
  elevated: true,
});
// result.tier === "T3"
// result.outcome === "escalate"
// result.approvalId === "apr-123"
```

Tier Classification

Every OpenClaw tool, MCP server, and node action is classified into a safety tier:
| Tier | Description | OpenClaw Examples | Governance |
| --- | --- | --- | --- |
| T0 | Observe only | `read`, `web_search`, `web_fetch`, `image`, `pdf`, `memory_search`, `session_status`, `sessions_list` | Auto-approved, no gateway call |
| T1 | Reversible digital | `write`, `edit` | Gateway validates, auto-approved with valid token |
| T2 | Requires governance | `message.send`, `tts`, `image_generate`, `sessions_spawn`, `cron.add`, `exec` (non-elevated) | Gateway validates + may require approval |
| T3 | Physical / irreversible | `exec` (elevated), `canvas.eval`, `nodes.camera_snap`, `nodes.invoke`, `nodes.screen_record` | Gateway validates + human approval required |
T0 calls never hit the network — they’re approved locally in microseconds. This means governance adds zero latency to read operations.
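A toy classifier illustrating the tier mapping and the T0 local fast path. The real classifier lives in `@sint/openclaw-adapter`; this table-driven sketch (and its default of T2 for unknown tools) is an illustration, not the shipped logic:

```typescript
type Tier = "T0" | "T1" | "T2" | "T3";

// Subset of the tier table above; illustrative only
const TIER_MAP: Record<string, Tier> = {
  read: "T0", web_search: "T0", web_fetch: "T0", memory_search: "T0",
  write: "T1", edit: "T1",
  "message.send": "T2", tts: "T2", image_generate: "T2", "cron.add": "T2",
  "canvas.eval": "T3", "nodes.invoke": "T3", "nodes.camera_snap": "T3",
};

function classifyTier(tool: string, elevated = false): Tier {
  if (tool === "exec") return elevated ? "T3" : "T2"; // exec splits on elevation
  return TIER_MAP[tool] ?? "T2"; // assumption: unknown tools default to governed
}

// T0 short-circuits locally — no gateway round-trip, so reads add no latency
function requiresGateway(tier: Tier): boolean {
  return tier !== "T0";
}
```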

Cross-System Policies

This is the key differentiator from a basic sandbox: these policies span multiple subsystems.
| Policy | When Active | Denied Actions | Why |
| --- | --- | --- | --- |
| `no-fs-while-moving` | `robot.moving` | File writes, edits | Prevents controller corruption during motion |
| `no-exec-while-moving` | `robot.moving` | Shell execution | Prevents control interference |
| `no-deploy-while-active` | `cmd_vel` | Deploy, restart | Can't restart while velocity commands are active |
| `no-network-while-armed` | `drone.armed` | Network, exec | Safety critical: no external access while armed |
```typescript
// Activate a system state
adapter.getStateTracker().activate("robot.moving");

// Now file writes are automatically denied
const result = await adapter.governToolCall({
  tool: "write",
  params: { path: "/robot/config.yaml" },
});
// result.allowed === false
// result.reason === "[Cross-System Policy: no-fs-while-moving] ..."
```
Custom policies are supported:
```typescript
const os = new SintOS({
  // ...
  crossSystemPolicies: [
    {
      name: "no-reboot-during-surgery",
      whenActive: "patient.connected",
      denyActions: ["system:reboot*", "deploy:*"],
      reason: "Cannot disrupt systems during active patient monitoring",
    },
  ],
});
```
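Evaluation of these policies can be sketched as follows, assuming the prefix-wildcard semantics implied by patterns like `deploy:*`. The function names and return shape here are illustrative, not the adapter's real internals:

```typescript
interface CrossSystemPolicy {
  name: string;
  whenActive: string;    // system state, e.g. "robot.moving"
  denyActions: string[]; // patterns, e.g. "fs:write*" (assumed prefix wildcards)
  reason: string;
}

function matchesPattern(pattern: string, action: string): boolean {
  if (pattern.endsWith("*")) return action.startsWith(pattern.slice(0, -1));
  return pattern === action;
}

// Deny if any policy whose trigger state is active matches the action
function checkPolicies(
  policies: CrossSystemPolicy[],
  activeStates: Set<string>,
  action: string,
): { allowed: boolean; reason?: string } {
  for (const p of policies) {
    if (!activeStates.has(p.whenActive)) continue;
    if (p.denyActions.some((d) => matchesPattern(d, action))) {
      return {
        allowed: false,
        reason: `[Cross-System Policy: ${p.name}] ${p.reason}`,
      };
    }
  }
  return { allowed: true };
}
```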

Avatar Bridge

Connects governance events to visual avatar reactions. When SINT denies an action, the avatar shakes its head and spawns a status widget. When it escalates, the avatar looks surprised and spawns an approval widget.
| Governance Event | Avatar Expression | Animation | Widget |
| --- | --- | --- | --- |
| T0 approve | default | | |
| T1 approve | default | Head-Nod-Yes | ✅ Status (3s) |
| T2 deny | thinking | Thoughtful-Head-Shake | 🔴 Status (8s) |
| T3 escalate | surprised | Thinking | ⚠️ Approval action (30s) |
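The mapping in that table is simple enough to express as a pure function. This is an illustrative sketch of the bridge's dispatch logic, not the shipped implementation; the `Reaction` shape is an assumption:

```typescript
type Outcome = "approve" | "deny" | "escalate";
type Tier = "T0" | "T1" | "T2" | "T3";

interface Reaction {
  expression: string;
  animation?: string;
  widget?: string;
}

// Governance event → avatar reaction, per the table above
function reactionFor(tier: Tier, outcome: Outcome): Reaction {
  if (outcome === "escalate")
    return { expression: "surprised", animation: "Thinking", widget: "action" };
  if (outcome === "deny")
    return { expression: "thinking", animation: "Thoughtful-Head-Shake", widget: "status" };
  if (tier === "T0") return { expression: "default" }; // silent approval
  return { expression: "default", animation: "Head-Nod-Yes", widget: "status" };
}
```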

Evidence HUD

Real-time viewer of the SHA-256 evidence ledger. Displays a rolling window of governance decisions with chain integrity verification.
```typescript
const hud = os.getEvidenceHUD();
hud.onEntry((entry) => {
  console.log(`${entry.tier} ${entry.outcome}: ${entry.resource}`);
});

// Verify chain integrity
const { valid, brokenAt } = hud.verifyChainIntegrity();
```
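The shape of check that `verifyChainIntegrity()` is described as performing can be sketched with a minimal SHA-256 hash chain. The entry schema here is an assumption, not the real ledger format:

```typescript
import { createHash } from "node:crypto";

interface LedgerEntry {
  payload: string;  // serialized governance decision (assumed)
  prevHash: string; // hash of the previous entry
  hash: string;     // sha256(prevHash + payload)
}

const GENESIS = "0".repeat(64);
const sha256 = (s: string) => createHash("sha256").update(s).digest("hex");

function appendEntry(chain: LedgerEntry[], payload: string): LedgerEntry[] {
  const prevHash = chain.length ? chain[chain.length - 1].hash : GENESIS;
  return [...chain, { payload, prevHash, hash: sha256(prevHash + payload) }];
}

// Walk the chain; any re-hashed mismatch pinpoints the first broken index
function verifyChain(chain: LedgerEntry[]): { valid: boolean; brokenAt?: number } {
  let prev = GENESIS;
  for (let i = 0; i < chain.length; i++) {
    const e = chain[i];
    if (e.prevHash !== prev || e.hash !== sha256(e.prevHash + e.payload)) {
      return { valid: false, brokenAt: i };
    }
    prev = e.hash;
  }
  return { valid: true };
}
```

Because each hash commits to the previous one, tampering with any entry invalidates every entry after it, which is what makes the ledger audit-friendly.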

SINT Avatar — The Face

The avatar is a full 3D experience (React + Three.js + @react-three/fiber). Its two core systems are the Conversation Compiler and the widget system.

Conversation Compiler

A neural rewriter (gpt-4.1-nano) that sits between the AI agent and the TTS pipeline. It:
  1. Takes raw agent output (markdown, JSON, technical text)
  2. Rewrites it for spoken delivery (contractions, rounding, natural language)
  3. Infers emotion and animation
  4. Spawns contextual UI widgets
```text
Agent output: "Deployed to Railway. Build 47 passed. 3 of 5 tests green."
    ↓ Conversation Compiler
Spoken: "Deploy's live. Build passed, three of five tests green."
Emotion: smile
Animation: Head-Nod-Yes
Widgets: [status: "Deploy" → success, progress: "Tests" → 60%]
```
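The compiler's output contract can be sketched like this. The real rewriter is neural (gpt-4.1-nano); the rule-based stand-in below only illustrates the input/output shape, and all field names are assumptions:

```typescript
interface CompiledTurn {
  spoken: string;     // TTS-safe text
  emotion: string;    // inferred expression
  animation?: string; // optional gesture
  widgets: { type: string; label: string; value?: string | number }[];
}

// Toy deterministic stand-in for the neural rewriter: strips markdown
// symbols so TTS doesn't read them aloud, then infers a coarse emotion.
function toyCompile(raw: string): CompiledTurn {
  const spoken = raw.replace(/[*_`#]/g, "").replace(/\s+/g, " ").trim();
  const success = /passed|green|live|deployed/i.test(raw);
  return {
    spoken,
    emotion: success ? "smile" : "default",
    animation: success ? "Head-Nod-Yes" : undefined,
    widgets: success
      ? [{ type: "status", label: "Deploy", value: "success" }]
      : [],
  };
}
```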

Widget System (18 Types)

The avatar spawns contextual widgets during conversation:
| Widget | Description |
| --- | --- |
| `status` | Success/error/pending indicator |
| `code` | Syntax-highlighted code block |
| `metric` | KPI with trend arrow |
| `progress` | Progress bar with percentage |
| `list` | Ordered/unordered list |
| `chart` | Bar, line, or pie chart |
| `terminal` | Terminal output display |
| `link` | Clickable link |
| `table` | Data table with headers |
| `diff` | Code diff (additions/deletions) |
| `action` | Actionable button (approve, run, etc.) |
| `image` | Image with caption |
| `github` | Live GitHub repo status (PRs, issues, stars) |
| `dashboard` | Full dashboard with KPIs (live from Paperclip API) |
| `goals` | Agent goals/objectives |
| `tasks` | Task list with status |
| `agents` | Active agents display |
| `activity` | Activity timeline |
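A natural way to model such a widget catalogue in TypeScript is a discriminated union, which lets the renderer exhaustively switch on the widget kind. The sketch below covers three of the 18 types; the payload shapes are assumptions, not the avatar's real schema:

```typescript
// Discriminated union over widget kinds (3 of 18 shown, shapes assumed)
type Widget =
  | { kind: "status"; label: string; state: "success" | "error" | "pending"; ttlMs?: number }
  | { kind: "progress"; label: string; percent: number }
  | { kind: "action"; label: string; onApprove: () => void };

// The compiler guarantees every kind is handled; adding a 4th kind
// without a case here becomes a type error.
function describeWidget(w: Widget): string {
  switch (w.kind) {
    case "status":
      return `${w.label}: ${w.state}`;
    case "progress":
      return `${w.label}: ${w.percent}%`;
    case "action":
      return `[${w.label}]`;
  }
}
```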

How It All Fits Together

Example: “Jarvis, deploy the staging build”

1. Voice input → Qwen3.5-Omni / OpenClaw Voice Wake → text
2. Text → OpenClaw Gateway → SINT agent session
3. Agent decides: tool call "exec" { command: "railway up --environment staging" }
4. @sint/openclaw-adapter classifies → T2 (deploy)
5. Cross-system check → is robot.moving? No → continue
6. SINT Policy Gateway → intercept → check capability token scope
7. Gateway: outcome = "approve" (token has deploy:staging scope)
8. Evidence ledger: logs SHA-256 entry
9. OpenClaw executes the command
10. Result → Conversation Compiler → "Staging deploy is live."
11. Avatar: smile + Head-Nod-Yes + [status: "Staging" → success] widget
12. Evidence HUD: new entry appears in real-time feed
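Steps 4 through 7 above form the decision pipeline: classify, check cross-system state, check token scope. A simplified end-to-end sketch of that chain, with all names and the scope format (`deploy:staging`) taken from the walkthrough and the rest assumed:

```typescript
type Decision = { outcome: "approve" | "deny" | "escalate"; reason?: string };

// Simplified governance chain for the deploy walkthrough; not the
// real gateway logic. Non-elevated exec is treated as T2 per the
// tier table, and a missing scope escalates to a human.
function govern(
  tool: string,
  tokenScopes: string[],
  activeStates: Set<string>,
): Decision {
  const tier = tool === "exec" ? "T2" : "T0";       // toy classifier
  if (tier === "T0") return { outcome: "approve" }; // local fast path
  if (activeStates.has("robot.moving"))
    return { outcome: "deny", reason: "no-exec-while-moving" };
  if (!tokenScopes.includes("deploy:staging"))
    return { outcome: "escalate", reason: "token lacks deploy scope" };
  return { outcome: "approve" };                    // then logged to the ledger
}
```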

Example: “Jarvis, move the robot 2m forward”

1. Voice → text: "move the robot 2 meters forward"
2. Agent decides: node action "invoke" { command: "move_forward", distance: 2 }
3. @sint/openclaw-adapter classifies → T3 (physical)
4. Cross-system check → all clear
5. SINT Policy Gateway → intercept → T3 requires human approval
6. Gateway: outcome = "escalate", approvalId = "apr-456"
7. Avatar: surprised + Thinking + [action: "Approve 2m forward?"] widget
8. Human clicks "Approve" on widget
9. Gateway resolves approval → "approved"
10. Node action executes: robot moves 2m forward
11. State tracker: activate("robot.moving") → deactivate when done
12. Evidence ledger: full chain — request → approval → execution → completion
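The T3 branch above hinges on the approval lifecycle: escalate to pending, then a human decision resolves it before anything executes. A minimal state-machine sketch of that lifecycle, with the class and state names assumed for illustration:

```typescript
type ApprovalState = "pending" | "approved" | "rejected";

// Toy approval queue: only a pending approval can be resolved,
// and execution is gated on the "approved" state.
class ApprovalQueue {
  private approvals = new Map<string, ApprovalState>();

  escalate(id: string): ApprovalState {
    this.approvals.set(id, "pending");
    return "pending";
  }

  resolve(id: string, approved: boolean): ApprovalState {
    if (this.approvals.get(id) !== "pending") {
      throw new Error(`approval ${id} is not pending`);
    }
    const state: ApprovalState = approved ? "approved" : "rejected";
    this.approvals.set(id, state);
    return state;
  }
}

// Helper tracing the walkthrough: escalate, then the human decides
function runApproval(id: string, humanApproves: boolean): ApprovalState {
  const q = new ApprovalQueue();
  q.escalate(id);
  return q.resolve(id, humanApproves);
}
```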

Quick Start

```bash
# Clone all repos
git clone https://github.com/sint-ai/sint-protocol.git
git clone https://github.com/sint-ai/sint-avatars.git

# Start SINT Protocol gateway
cd sint-protocol && pnpm install && pnpm dev

# Start SINT Avatar
cd sint-avatars && pnpm install && pnpm dev
```

SINT OS then connects them:

```typescript
import { SintOS } from "@sint/os-core";

const os = new SintOS({
  gatewayUrl: "http://localhost:4100",
  agentId: "your-agent",
  avatar: { serverUrl: "http://localhost:3005" },
  evidenceHud: { enabled: true },
});
await os.boot();
```

Packages

| Package | Description | Tests |
| --- | --- | --- |
| `@sint/os-core` | Main entrypoint, lifecycle, orchestration | 9 |
| `@sint/openclaw-adapter` | OpenClaw governance middleware, tier classifier, cross-system policies | 42 |
| `@sint/integration-langchain` | LangChain callback handler + tool wrapper | 14 |
| `@sint/gate-capability-tokens` | Ed25519 capability tokens | |
| `@sint/gate-policy-gateway` | 32-endpoint policy gateway | |
| `@sint/gate-evidence-ledger` | SHA-256 hash chain evidence | |
| 12 bridges | ROS2, MCP, gRPC, MAVLink, A2A, MQTT, OPC-UA, IoT, OpenRMF, Economy, Swarm, HAL | |
| **Total** | 30 packages | 1,400+ |

See also:

- **Protocol**: SINT Protocol governance engine
- **Console**: Visual control center
- **Architecture**: System architecture and package map
- **Roadmap**: Development roadmap