Architecture
TARX is a local-first AI system built on four services running on your machine.
Service Map
┌─────────────────────────────────────────────┐
│                Your Machine                 │
│                                             │
│  ┌──────────┐  ┌──────────┐  ┌──────────┐   │
│  │  Daemon  │  │   Mesh   │  │  Embed   │   │
│  │  :11435  │  │  :11436  │  │  :11437  │   │
│  │ Local AI │  │   Rust   │  │  nomic   │   │
│  └──────────┘  └──────────┘  └──────────┘   │
│                                             │
│        ┌──────────┐  ┌──────────┐           │
│        │Cognitive │  │  SQLite  │           │
│        │  :11438  │  │memory.db │           │
│        └──────────┘  └──────────┘           │
│                                             │
└─────────────────────────────────────────────┘
                      │
                 MCP Protocol
                      │
             ┌─────────────────┐
             │  Claude / TARX  │
             │  (any client)   │
             └─────────────────┘

Daemon (port 11435)
The inference daemon runs TARX’s fine-tuned local model. It handles chat completions, memory storage, and space management.
- Model: ~5GB on disk, 18 tok/s on Apple Silicon
- Context window: 32K tokens
- API: OpenAI-compatible, served at /v1/chat/completions
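Because the daemon speaks the OpenAI chat-completions format, any OpenAI-style client pointed at port 11435 should work. A minimal sketch using only the Python standard library — the model name "tarx" is an assumption here, not a documented identifier:

```python
import json
import urllib.request

# The daemon's endpoint, per the service map above.
DAEMON_URL = "http://localhost:11435/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "tarx") -> dict:
    """Build an OpenAI-style chat-completion payload.

    The model name is a placeholder; check the daemon for the real one.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

def chat(prompt: str) -> str:
    """POST the payload to the local daemon and return the reply text."""
    req = urllib.request.Request(
        DAEMON_URL,
        data=json.dumps(build_chat_request(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Standard OpenAI response shape: first choice, message content.
    return body["choices"][0]["message"]["content"]
```

Any existing OpenAI SDK can be used the same way by overriding its base URL to point at localhost:11435.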
Mesh (port 11436)
A Rust binary that connects your node to the TARX mesh network. Other TARX users can contribute inference capacity — encrypted, opt-in.
- Binary: 4.6MB standalone
- Protocol: Custom peer discovery + encrypted routing
- Status: V1 — stub economy, no real credits
Embeddings (port 11437)
Runs nomic-embed-text-v1.5 for RAG (Retrieval-Augmented Generation).
Every file, memory, and conversation chunk is embedded for semantic search.
- Dimensions: 768
- Chunk size: 512 characters, 128 overlap
- Storage: knowledge_embeddings table in SQLite
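The chunking parameters above amount to a sliding window. A sketch — the 512/128 numbers come from this doc, everything else is illustrative:

```python
def chunk_text(text: str, size: int = 512, overlap: int = 128) -> list[str]:
    """Split text into fixed-size chunks with overlap.

    With the documented defaults, each chunk is 512 characters and
    shares its first 128 characters with the tail of the previous chunk.
    """
    step = size - overlap  # 384 characters of new text per chunk
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]
```

Each chunk would then be passed to the embedding service on port 11437 and the resulting 768-dimensional vector stored alongside the chunk in the knowledge_embeddings table.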
Cognitive Engine (port 11438)
Tracks your cognitive state across sessions — focus depth, decision fatigue, context readiness. Feeds the TarxTicker and adapts Claude’s response style.
- Scoring: context_switch_rate, focus_depth, decision_velocity
- Styles: concise, detailed, step-by-step, collaborative
- Window: 2-hour rolling session analysis
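The scoring signals and response styles listed above suggest a simple mapping. A sketch under loud assumptions — the thresholds and the routing logic here are invented for illustration and are not TARX's actual scoring:

```python
def pick_style(context_switch_rate: float,
               focus_depth: float,
               decision_velocity: float) -> str:
    """Map cognitive-state signals (each normalized to 0..1) to a response style.

    Thresholds are illustrative placeholders, not TARX's real model.
    """
    if context_switch_rate > 0.7:
        return "concise"        # fragmented attention: keep answers short
    if focus_depth > 0.7:
        return "detailed"       # deep focus: richer responses are welcome
    if decision_velocity < 0.3:
        return "step-by-step"   # decision fatigue: break work into steps
    return "collaborative"      # default: back-and-forth working style
```

In TARX, signals like these would be computed over the 2-hour rolling window rather than passed in directly.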
Data Storage
All data lives in ~/Library/Application Support/tarx/memory.db (SQLite).
Files are stored on disk at ~/Library/Application Support/tarx/files/.
Nothing is uploaded. Nothing leaves your machine.
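Since everything is plain SQLite, the store can be inspected locally with standard tools. A read-only sketch — the knowledge_embeddings table name comes from this doc, but its schema is not documented here, so this only counts rows:

```python
import sqlite3
from pathlib import Path

# Default location of the TARX store, per the doc.
DB_PATH = Path.home() / "Library/Application Support/tarx/memory.db"

def count_embeddings(path: Path = DB_PATH) -> int:
    """Open memory.db read-only and count stored embedding rows."""
    # mode=ro guarantees we can't modify the daemon's database.
    con = sqlite3.connect(f"file:{path}?mode=ro", uri=True)
    try:
        (n,) = con.execute(
            "SELECT COUNT(*) FROM knowledge_embeddings"
        ).fetchone()
        return n
    finally:
        con.close()
```

The same approach works from the sqlite3 command-line shell if you prefer not to write code.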