TARX models your work so it gets better at you, not at the internet.
Every AI coding tool you've used — Copilot, Cursor, Tabnine — sends your code to someone else's GPU and answers from a generic model trained on everyone else's data: a model that knows the internet but doesn't know you. Your proprietary patterns, your naming conventions, your architectural decisions — all fed into a system that serves everyone and specializes for no one.
TARX is different. It runs on your hardware. It fine-tunes on your patterns. After a week, it knows your codebase. After a month, it's a team member that never forgets.
This isn't just a local AI. It's a modeling layer for developers.
What TARX actually is
TARX is an AI platform that models your development workflow and gets better at it over time. The model sits on your Mac. Inference happens on your Metal GPU. Fine-tuning happens locally via MLX. Nothing phones home.
When you ask TARX to debug a function, refactor a module, or explain a codebase you inherited — the entire computation happens locally. But more than that: every interaction becomes a training signal. TARX learns your conventions, your patterns, your preferences. The model that helps you today is smarter than the one that helped you yesterday.
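To make "inference happens on your Metal GPU" concrete, here is a minimal sketch of the same pattern using the open-source mlx_lm package. The model name is illustrative; TARX ships its own weights and pipeline, which aren't shown here.

```python
# Minimal local-inference sketch using the open-source mlx_lm package.
# Everything below runs on the Mac's Metal GPU; no network call is made.
from mlx_lm import load, generate

# Illustrative open model; TARX bundles its own weights.
model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.3-4bit")

prompt = "Explain what this function does:\n\ndef dedupe(xs): return list(dict.fromkeys(xs))"
print(generate(model, tokenizer, prompt=prompt, max_tokens=256))
```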
This isn't a privacy feature. It's the architecture. And it's the product.
How it compares
| | GitHub Copilot | Cursor | TARX |
|---|---|---|---|
| Where inference runs | Microsoft Azure | Cursor's cloud (OpenAI/Anthropic APIs) | Your Mac (Metal GPU) |
| Your code leaves your device | Yes | Yes | No |
| Works offline | No | No | Yes |
| Fine-tunes on your patterns | No | No | Yes — continuous local LoRA training |
| Full AI platform | No (autocomplete only) | Partial (chat + edit) | Yes (code, research, writing, analysis, automation) |
| Cost (individual) | $10-19/mo | $20/mo | Free |
| Cost (team, per seat) | $19-39/mo | $40/mo | $6/mo |
The cost difference isn't a promotion. It's structural. Cloud inference requires GPU rental, bandwidth, and compliance infrastructure. Local inference requires... a Mac you already own.
The modeling layer
Here's what no cloud AI tool can do: model you.
Copilot's model is frozen. It was trained on public GitHub repos and that's what you get. It doesn't know your project. It doesn't learn your patterns. It doesn't get better. Every suggestion comes from the same static weights, whether you've used it for a day or a year.
TARX is a modeling layer. Every conversation, every code review, every debugging session becomes training signal. TARX runs a continuous fine-tuning flywheel — LoRA adapters trained on your actual usage, deployed back to your local model automatically. Your model evolves.
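TARX's flywheel internals aren't public, but one turn of such a loop can be sketched with the open-source mlx_lm LoRA trainer, assuming interactions are logged as (prompt, response) pairs. The pair format, file layout, and iteration count below are all illustrative.

```python
# Hypothetical sketch of one flywheel turn with the open-source
# mlx_lm LoRA trainer. Paths, pair format, and hyperparameters are
# illustrative; TARX's own pipeline is not public.
import json
import subprocess
from pathlib import Path

def write_training_pairs(pairs, data_dir="data"):
    """Convert logged (prompt, response) pairs into mlx_lm's JSONL format."""
    Path(data_dir).mkdir(exist_ok=True)
    holdout = max(1, len(pairs) // 10)  # keep ~10% of pairs for validation
    splits = {"valid.jsonl": pairs[:holdout], "train.jsonl": pairs[holdout:]}
    for name, chunk in splits.items():
        with open(Path(data_dir) / name, "w") as f:
            for prompt, response in chunk:
                f.write(json.dumps({"text": f"{prompt}\n{response}"}) + "\n")

def train_adapter(model="mlx-community/Mistral-7B-Instruct-v0.3-4bit"):
    """Train a LoRA adapter on the accumulated pairs; weights land in ./adapters."""
    subprocess.run(
        ["python", "-m", "mlx_lm.lora",
         "--model", model, "--train", "--data", "data", "--iters", "200"],
        check=True,
    )
```

The trained adapter then loads alongside the base weights at inference time, which is roughly what "deployed back to your local model" would mean mechanically.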
After a week, your TARX knows your naming conventions. After a month, it understands your architecture. It's not a tool that assists you — it's a model of your development practice that compounds over time.
This is what "modeling layer" means: TARX doesn't just run inference. It models your work, your sector, your voice — and applies that understanding to everything it does for you. The model is yours. The training data is yours. The improvement is yours.
What it means for your codebase
If you work on proprietary software, defense contracts, healthcare systems, financial infrastructure, or anything regulated — you already know the problem. Your legal team has opinions about sending source code to Microsoft.
TARX eliminates the conversation. There's no data processing agreement to negotiate. No SOC 2 audit to review. No vendor to trust. The AI runs locally. The data stays local. The model stays local. Compliance is architectural, not contractual.
For teams, TARX at $6/seat/month includes:
- Local inference for every developer
- Shared Spaces for project context
- The Supercomputer for heavy workloads (optional, peer-to-peer mesh — still no third-party cloud)
- API access for CI/CD integration (a minimal example follows this list)
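A hedged sketch of the CI/CD piece: since the API is OpenAI-compatible (see the developer API docs under Further reading), any standard client should work against it. The port and model id below are assumptions, not documented values.

```python
# Hedged sketch: calling a local TARX instance from a CI step through
# its OpenAI-compatible API. Port and model id are assumptions.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11435/v1", api_key="unused")  # local; no real key needed

review = client.chat.completions.create(
    model="tarx-local",  # hypothetical model id
    messages=[{
        "role": "user",
        "content": "Review this diff for bugs:\n" + open("changes.diff").read(),
    }],
)
print(review.choices[0].message.content)
```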
Xcode integration
TARX isn't just a VS Code plugin. It's built for the Apple ecosystem from the ground up.
The Xcode Skill watches your project, understands SwiftUI patterns, knows the difference between the @Observable macro and the older ObservableObject protocol, and can reason about your build errors in context. It doesn't just suggest code — it understands your project's dependency graph, your target configuration, and your signing identity.
No other AI coding tool has native Xcode support at this level. Most don't even try.
The Supercomputer
For queries that exceed your local hardware — 70B parameter models, multi-file refactors across a monorepo, or inference during a training run — TARX connects to the Supercomputer.
The Supercomputer isn't a data center. It's a peer-to-peer mesh of Mac hardware running TARX. Your query is routed to available nodes, processed, and returned. No query hits AWS, Azure, or GCP. The entire network runs on hardware owned by TARX users.
You can contribute your own hardware to the mesh and earn credits. The economics are simple: run the mesh, earn compute. Use the mesh, spend compute. No subscription required for the free tier.
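Purely as an illustration of that earn/spend loop (the real accounting units and rates aren't documented here, so everything below is an assumption):

```python
# Illustrative model of the mesh's earn/spend credit loop. Units and
# rates are invented for the sketch; they are not TARX's actual economics.
from dataclasses import dataclass

@dataclass
class MeshAccount:
    credits: float = 0.0

    def contribute(self, gpu_seconds: float, rate: float = 1.0) -> None:
        """Earn credits for compute donated to the mesh."""
        self.credits += gpu_seconds * rate

    def submit_query(self, estimated_gpu_seconds: float, rate: float = 1.0) -> None:
        """Spend credits to route a heavy query to peer nodes."""
        cost = estimated_gpu_seconds * rate
        if cost > self.credits:
            raise RuntimeError("insufficient credits: contribute compute first")
        self.credits -= cost
```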
Getting started
- Download TARX — one DMG, drag to Applications
- Launch and start coding. No account, no API key, no onboarding wizard.
- Drag your project into a Space for codebase-wide context
- Ask TARX to review, refactor, debug, or explain anything in your project
The model downloads on first launch (~4.7 GB). After that, no internet required. Ever.
Your tools should run on your machine. Your AI should too.
How TARX models your work — transparently
Most AI is a black box. You put text in, text comes out. You have no idea what the model learned, what it weighted, or why it said what it said.
TARX is building toward full modeling transparency. The vision: you can see what TARX has learned about your patterns. Which conventions it picked up. Where its confidence is high and where it's uncertain. How your model has changed over time. Not a dashboard of vanity metrics — a receipt of the actual modeling that happened on your hardware, with your data.
We call this the modeling receipt. It's how TARX accounts for the value your usage creates. Your patterns train the model. You should see what the model learned.
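The surface hasn't shipped, so this is speculative, but a receipt along the lines described above might carry fields like these (every name here is an assumption):

```python
# Speculative sketch of a "modeling receipt". The feature isn't shipped;
# every field below is an assumption drawn from the description above.
from dataclasses import dataclass, field

@dataclass
class ModelingReceipt:
    adapter_version: str                 # which LoRA adapter is currently live
    training_pairs_used: int             # how many of your interactions it trained on
    learned_conventions: list[str] = field(default_factory=list)   # e.g. "snake_case helpers"
    low_confidence_areas: list[str] = field(default_factory=list)  # where the model is unsure
```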
This is early. The infrastructure is live — the fine-tuning flywheel runs, the training pairs accumulate, the LoRA adapters deploy. The transparency surface is coming. When it arrives, you'll know exactly what your TARX knows about your work and why.
This page has a sector model
This page isn't just content. It's connected to TARX's developer sector model — a fine-tune specialized for developer workflows, coding patterns, and tooling decisions.
When the embedded TARX conversation goes live on this page, you won't be talking to a generic model. You'll be talking to a model that has been trained on developer-specific interactions: code review patterns, build system questions, architecture trade-offs, Xcode workflows, CI/CD debugging. Every conversation on this page feeds back into the developer model, making it sharper for the next developer who arrives.
The sector model composes on top of TARX's core personality (the Soul). It doesn't replace TARX's voice — it adds developer-domain knowledge on top. This is how TARX scales expertise without losing identity.
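One way such composition works mechanically, sketched with the open-source mlx_lm tooling (the model name and adapter path are made up, and TARX's actual loading path isn't shown here):

```python
# Illustrative only: composing a sector fine-tune as a LoRA adapter
# loaded on top of base weights. Names and paths are made up.
from mlx_lm import load, generate

model, tokenizer = load(
    "mlx-community/Mistral-7B-Instruct-v0.3-4bit",  # stand-in for the core model
    adapter_path="adapters/developer",               # hypothetical sector adapter
)
print(generate(model, tokenizer, prompt="Should this view model use @Observable?", max_tokens=128))
```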
Five sectors are in development: developer, enterprise, compliance, gov, and SMB. Each has its own fine-tune, its own training corpus, its own improvement trajectory. The page you're reading is the public surface of the developer model.
Further reading
- Getting started with TARX — Full setup guide
- The Supercomputer — How the peer-to-peer mesh works
- Developer API docs — OpenAI-compatible endpoints
- MCP Server — Connect TARX to Claude, Cursor, and other tools
- Enterprise deployment — Team rollout and compliance
Have questions about this? Download TARX and ask directly — your AI runs locally, trained for developer workflows.
