TARX

Local AI Engine

Fast, private local AI with cloud boost when you need it

Download the desktop app for unlimited local inference. Connect to the SuperTARX network for distributed compute power.

10K+ Active Nodes
0.5 PF Compute Power
99.9% Uptime

One command. You're in.

macOS & Linux. arm64 & x86. Checksum verified.

curl -fsSL https://tarx.com/install | sh
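If you'd rather verify the installer yourself before piping it to `sh`, the usual pattern is to check its SHA-256 digest against a published one. A minimal sketch of that pattern, using a local stand-in file since this page doesn't document TARX's digest URL:

```shell
# Stand-in for the downloaded install script (hypothetical content;
# in practice you'd fetch it with: curl -fsSLO https://tarx.com/install)
printf '#!/bin/sh\necho "TARX installed"\n' > install.sh

# The publisher would ship this digest file alongside the script
sha256sum install.sh > install.sh.sha256

# Verify before executing; sha256sum -c exits non-zero on a mismatch,
# so the && guard keeps an altered script from ever running
sha256sum -c install.sh.sha256 && sh install.sh
```

The same guard works with any digest file TARX publishes: swap the stand-in for the real download and point `sha256sum -c` at the official checksum.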

Local.

Full AI inference on your hardware: ~18 tokens/s generation, sub-500 ms responses.

Private.

No cloud accounts. No API keys. No telemetry. Your data stays yours.

Proactive.

TARX indexes your files, builds context, and pushes back when your input is lazy.

Built for performance. Designed for privacy.

💻

Local First

Run AI models directly on your machine. Your data never leaves your device. No network latency, complete privacy.

☁️

Cloud Boost

Need more power? Seamlessly offload to the SuperTARX distributed network. Pay only for what you use.

💰

Earn Rewards

Share your idle compute with the network. Earn credits while helping others run AI workloads.

SuperTARX Network

A distributed AI compute network powered by thousands of nodes worldwide. Join the network and contribute your compute power.

Ready to get started?

Download the TARX Engine and start running AI locally in minutes.

Download Free