TARX is now in App Store review
We submitted TARX to the App Store today: version 1.1, build 2.
Here's what's inside.
TARX runs a local AI model on your device using Apple's FoundationModels framework. We call it Soul — it handles context, intent, lightweight reasoning, and memory. For heavier queries, TARX connects to the Supercomputer at compute.tarx.com for Mind-class inference. Both always run in parallel. You never pick a mode.
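A minimal sketch of that parallel dispatch, assuming hypothetical types (`SoulModel`, `SupercomputerClient`, and `answer` are illustrative stand-ins, not TARX's actual API):

```swift
import Foundation

// Hypothetical sketch: run on-device Soul and the remote Supercomputer
// in parallel for every query. These types are illustrative stand-ins.
struct SoulModel {
    func respond(to prompt: String) async -> String {
        // On-device inference via FoundationModels would happen here.
        "local: \(prompt)"
    }
}

struct SupercomputerClient {
    let endpoint = URL(string: "https://compute.tarx.com")!
    func respond(to prompt: String) async throws -> String {
        // Remote Mind-class inference request would happen here.
        "remote: \(prompt)"
    }
}

func answer(_ prompt: String) async -> String {
    // Both always run in parallel; the user never picks a mode.
    async let local = SoulModel().respond(to: prompt)
    async let remoteTask = SupercomputerClient().respond(to: prompt)
    // Prefer the remote answer for heavy queries when it arrives;
    // fall back to the local one otherwise.
    let remote = try? await remoteTask
    return remote ?? (await local)
}
```

The design point is that neither path blocks the other: `async let` starts both child tasks immediately, so local latency stays low even when the network call is slow.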
Spaces let you organize conversations by project. Each Space has its own memory, its own files, its own context. Everything is stored in SQLite on your device. Nothing is synced to our servers.
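One way to picture per-Space isolation is one SQLite database per Space. This is a sketch under that assumption; the path layout and schema are illustrative, not TARX's actual storage format:

```swift
import Foundation
import SQLite3

// Sketch only: each Space gets its own on-device SQLite file, so a
// project's memory and context never bleed into another Space.
func openSpaceDatabase(named space: String) -> OpaquePointer? {
    let dir = FileManager.default.temporaryDirectory
    let path = dir.appendingPathComponent("\(space).sqlite").path
    var db: OpaquePointer?
    guard sqlite3_open(path, &db) == SQLITE_OK else { return nil }
    // Illustrative schema: each Space carries its own memory table.
    sqlite3_exec(db, """
        CREATE TABLE IF NOT EXISTS memories(
            id INTEGER PRIMARY KEY,
            content TEXT NOT NULL,
            created_at TEXT DEFAULT CURRENT_TIMESTAMP
        );
        """, nil, nil, nil)
    return db
}
```

In a shipping app the databases would live under Application Support rather than the temporary directory; the point is that deleting a Space can be as simple as deleting one file.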
Voice works. Tap and talk. TARX listens, thinks, and responds. The NSMicrophoneUsageDescription doesn't just say "microphone access" — it explains exactly why: voice input for conversational AI. Every permission prompt in the build tells you what it's for and why.
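In Info.plist terms, that looks something like this (the exact string is illustrative, not the shipping copy):

```xml
<key>NSMicrophoneUsageDescription</key>
<string>TARX uses the microphone for voice input to its conversational AI. Audio is used to answer your request, not for analytics.</string>
```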
Here's what's deliberately not in v1: Supercomputer contribution. The mesh network exists and works — your Mac can contribute idle compute and earn credits. But we set meshEnabled = false for the App Store build. We're shipping clean. No background compute, no peer networking, no complexity that could confuse a reviewer or a first-time user. That comes in v1.2.
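A sketch of how a build-time flag like that can gate the mesh. Only the name `meshEnabled` comes from the post; the surrounding structure is an assumption:

```swift
// Illustrative feature flag: the mesh code exists, but the App Store
// build ships with it off. Only `meshEnabled` is taken from the post.
enum FeatureFlags {
    // Off for the v1.1 App Store build; planned to flip in v1.2.
    static let meshEnabled = false
}

func startNetworking() {
    if FeatureFlags.meshEnabled {
        // Peer networking and background compute would start here.
    }
    // Otherwise: inference requests to compute.tarx.com only.
}
```

Because the flag is a compile-time constant, the compiler can strip the dead branch entirely, which is one way to ship "clean" without maintaining a second codebase.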
What you won't find in the binary: third-party analytics SDKs. No Mixpanel. No Amplitude. No Firebase. We use a local telemetry service that writes events to disk. Nothing phones home.
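A local-only telemetry service can be very small. A sketch, assuming an append-to-file design; the type name and log format are ours, not TARX's:

```swift
import Foundation

// Sketch of local-only telemetry: events are appended to a file on
// device and never leave it. No network code exists in this path.
struct LocalTelemetry {
    let logURL: URL

    func record(event: String) {
        let stamp = ISO8601DateFormatter().string(from: Date())
        guard let data = "\(stamp) \(event)\n".data(using: .utf8) else { return }
        if let handle = try? FileHandle(forWritingTo: logURL) {
            defer { try? handle.close() }
            _ = try? handle.seekToEnd()
            try? handle.write(contentsOf: data)
        } else {
            // First event: create the file.
            try? data.write(to: logURL)
        }
    }
}
```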
All data stays on device. All of it. Your conversations, your files, your memories. The only network calls are inference requests to compute.tarx.com when you need Supercomputer power, and even those can be disabled with a privacy flag.
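A privacy flag like that can sit in front of the one network path. The key name here is an assumption, not the actual setting:

```swift
import Foundation

// Sketch of the privacy flag described above: when set, even
// Supercomputer inference requests are skipped and everything runs
// on the local model. "tarx.localOnly" is a hypothetical key.
func shouldCallSupercomputer(defaults: UserDefaults = .standard) -> Bool {
    // The only network calls are inference requests to
    // compute.tarx.com, and this flag disables even those.
    !defaults.bool(forKey: "tarx.localOnly")
}
```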
The only thing we ask of you at launch — tell us what TARX does for your thinking.