
CONCEPT

The Private AI Assistant — Zero Cloud, Zero Surveillance

Every AI assistant you use sends your data to someone else's server. TARX is the AI that runs entirely on your device. Private by architecture, not by promise.

Last updated · April 19, 2026

6 min read · by TARX

Every AI you use is modeling you — your questions, your patterns, your concerns, your vulnerabilities. The difference is who owns that model.

When you use ChatGPT, OpenAI owns it. When you use Copilot, Microsoft owns it. When you use Gemini, Google owns it. Your medical questions, your legal research, your source code, your private thoughts — all modeled, all stored, all governed by someone else's privacy policy.

TARX models you too. But the model lives on your device. The training data stays on your device. The improvement benefits you alone. Nobody else sees it. Nobody else owns it.

The modeling layer belongs to you. That's the architecture.

Privacy by architecture, not by promise

There are two ways to make AI private:

Policy privacy. The company promises not to look at your data. They write a privacy policy. They get SOC 2 certified. They pinky-swear. And then an engineer needs to debug a production issue and your conversation is right there in the logs, one SQL query away. This is how every cloud AI works.

Architectural privacy. The data never leaves your device. There is no server. There is no log. There is no database to breach. There is no employee who can access your conversations because they don't exist anywhere except on your hardware. This is how TARX works.

The difference is fundamental. Policy privacy depends on trust. Architectural privacy depends on physics.

How TARX works

TARX installs as a macOS application. On first launch, it downloads a language model (~4.7 GB) to your local SSD. After that:

  • Inference runs on your Mac's GPU (Metal acceleration). Your question goes from your keyboard to your GPU to your screen. No network hop (sketched in code after this list).
  • Memory is stored in a local SQLite database. Your conversations, preferences, and context live on your device.
  • File indexing happens locally using embeddings. When you add documents to a Space, they're chunked and indexed on your machine.
  • Fine-tuning happens locally via MLX. TARX learns your patterns by training LoRA adapters on your usage — on your hardware, with your data, for your benefit alone.
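
To make the first two bullets concrete, here is a minimal sketch of the pattern (local inference plus local memory), assuming the open-source mlx-lm package and a model that is already on disk. The model name, database path, and schema are illustrative placeholders, not TARX's actual internals:

    # Illustrative only, not TARX's source. Assumes mlx-lm is installed and
    # the model is already cached locally (a fresh repo name would trigger a
    # one-time download, like TARX's first launch).
    import sqlite3
    from mlx_lm import load, generate

    # Inference: weights load from the local SSD; generation runs on the
    # GPU via Metal. Keyboard -> GPU -> screen, no network hop.
    model, tokenizer = load("mlx-community/Meta-Llama-3.1-8B-Instruct-4bit")
    prompt = "Summarize this referral letter in three bullet points."
    reply = generate(model, tokenizer, prompt=prompt, max_tokens=256)

    # Memory: the exchange persists to a local SQLite file, not a server.
    db = sqlite3.connect("memory.db")  # hypothetical path on your own disk
    db.execute("CREATE TABLE IF NOT EXISTS conversations (role TEXT, content TEXT)")
    db.executemany("INSERT INTO conversations VALUES (?, ?)",
                   [("user", prompt), ("assistant", reply)])
    db.commit()
    db.close()

The point of the sketch is the shape of the data flow: every call touches your disk and your GPU, and nothing else.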

You can verify all of this with lsof -i -P or any network monitor. During local inference, TARX makes zero outbound connections.
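
If you'd rather script that check than watch a monitor, the same verification takes a few lines of Python. This assumes the third-party psutil package (version 6 or newer) and that the app's process is literally named "TARX", which is an assumption; substitute the real process name:

    # Hedged sketch: list every open network socket owned by a process.
    # During local inference the count should be zero.
    import psutil

    for proc in psutil.process_iter(["pid", "name"]):
        if proc.info["name"] == "TARX":  # assumed process name
            conns = proc.net_connections(kind="inet")  # psutil >= 6
            print(f"PID {proc.info['pid']}: {len(conns)} open connection(s)")
            for c in conns:
                print("  ", c.laddr, "->", c.raddr or "listening", c.status)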

Who this matters for

Healthcare professionals. You want to use AI to summarize patient notes, research treatment options, or draft referral letters. But HIPAA means you can't send PHI to a cloud service — or you need a BAA with every AI vendor, which most won't sign for individual practitioners. TARX eliminates the question. The data never leaves your device. There is no third party to sign a BAA with.

Lawyers. Attorney-client privilege means you can't expose case details to a third-party service provider. Cloud AI providers are third parties. TARX is software on your computer — the same category as your word processor. Privilege is preserved because there's no disclosure.

Financial professionals. Material non-public information (MNPI) can't be shared with cloud AI services. Insider trading regulations don't have a carve-out for "I was just asking an AI." With TARX, MNPI stays on your device. There's no regulatory gray area because there's no transmission.

Enterprise security teams. Your CISO said no to cloud AI. Your developers are using it anyway (shadow AI is everywhere). TARX gives them what they want — AI-assisted development — without the data exfiltration risk your security team is trying to prevent. Deploy TARX to the team and the policy conversation becomes: "Use TARX for everything. It runs locally."

Anyone who thinks. You don't need a compliance reason to want privacy. Your thoughts are yours. Your questions are yours. Your creative process is yours. AI that observes all of this and sends it to a corporation is surveillance with a chat interface. AI that keeps it on your device is a tool.

What you give up (and what you don't)

Honest trade-offs:

You give up: The largest cloud models (GPT-4-class, Claude Opus). Local hardware runs smaller models. TARX's default is an 8B parameter model — extremely capable for most tasks, but not the bleeding edge of raw reasoning on novel benchmarks.

You don't give up: Quality for real work. TARX's modeling layer means your local model is personalized in ways cloud models never will be. A generic 100B model that doesn't know your codebase loses to a specialized 8B model that's been trained on every conversation you've had with it. The modeling compounds. A cloud model that serves a billion users will never model any single user well. Your TARX models only you.

You don't give up: Power when you need it. The Supercomputer — TARX's peer-to-peer mesh — routes to larger models on other users' hardware when your local machine isn't enough. Still no cloud. Still no third-party infrastructure.

You don't give up: Ecosystem. TARX has an API (OpenAI-compatible), an MCP server (connects to Claude, VS Code), Spaces for project organization, and Skills for workflow automation.
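
OpenAI compatibility means existing tooling only needs a different base URL. Here is a hedged sketch using the official openai Python client; the port, path, and model id are placeholders for illustration, not documented TARX values:

    # Point the standard OpenAI client at a local, OpenAI-compatible server.
    # base_url and model are assumptions; check the app for the real values.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:8080/v1",  # hypothetical local endpoint
        api_key="unused",  # no real key: the "server" is your own machine
    )
    resp = client.chat.completions.create(
        model="tarx-local",  # hypothetical model id
        messages=[{"role": "user", "content": "Explain this stack trace."}],
    )
    print(resp.choices[0].message.content)

Because the wire format is standard, switching a tool from cloud to local is a config change, not a rewrite.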

The cost of "free" cloud AI

When a cloud AI service is free, you are the product. Your conversations train their models. Your usage patterns shape their product decisions. Your data is their competitive moat.

TARX's free tier is different. It's free because local inference has no marginal cost to us. The GPU is yours. The electricity is yours. The model is a file you already downloaded. There's nothing to charge for.

When you upgrade to the team tier ($6/seat/month), you're paying for Supercomputer access, shared Spaces, and team management — not for the privilege of keeping your data private. Privacy isn't a premium feature. It's the architecture.

Get started

  1. Download TARX — macOS DMG, drag to Applications
  2. Launch. No account. No email. No terms to accept.
  3. Start using it. Everything stays on your machine.

If you want to verify: open Activity Monitor, watch network traffic. During local inference, you'll see zero outbound connections. That's not a setting. That's the design.

Your AI should work for you. Not observe you.

How TARX models your work — transparently

Every AI models you. The question is whether you can see the model.

Cloud AI companies build models of their users at scale — aggregated, anonymized, optimized for engagement. You never see what they learned. You never control how it's used. You can't delete it, correct it, or benefit from it directly.

TARX's modeling layer works differently. The fine-tuning flywheel captures your patterns and trains LoRA adapters locally. The model improves on your hardware, for your use, under your control. And the modeling receipt — coming soon — will show you exactly what TARX learned: which patterns it absorbed, where it's confident, how the model evolved from generic weights to your specialized tool.
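
In MLX terms, the pattern is a frozen base model composed with a small adapter at load time. A minimal sketch, assuming mlx-lm and a hypothetical adapter directory; TARX's actual flywheel and paths aren't shown here, so treat every name as a placeholder:

    # Generic weights + a locally trained LoRA adapter = your specialized model.
    # mlx-lm's load() accepts a local adapter directory via adapter_path.
    from mlx_lm import load, generate

    model, tokenizer = load(
        "mlx-community/Meta-Llama-3.1-8B-Instruct-4bit",  # generic base
        adapter_path="adapters/latest",  # hypothetical local LoRA weights
    )
    print(generate(model, tokenizer,
                   prompt="Draft my usual weekly status update.",
                   max_tokens=128))

The base weights never change; the adapter is a small file of trained deltas layered on top, which is what makes on-device fine-tuning tractable.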

Modeling transparency isn't a nice-to-have. It's the logical conclusion of private AI. If the model belongs to you, you should be able to read it.

This page has a sector model

This page is connected to TARX's compliance sector model — a fine-tune specialized for privacy, data sovereignty, regulatory frameworks, and compliance workflows.

When the embedded TARX conversation activates here, you'll be talking to a model that understands HIPAA, SOC 2, GDPR, attorney-client privilege, and MNPI handling — not because it memorized a compliance manual, but because it was trained on real compliance-context conversations. Every conversation on this page feeds the compliance model. The model gets better at your sector's concerns with every interaction.

Five sectors are in development: developer, enterprise, compliance, gov, and SMB. This page is the compliance surface.

Further reading

Have questions about this? Download TARX and ask directly — your AI runs locally, trained for compliance workflows.

