Fast, private local AI with cloud boost when you need it
Download the desktop app for unlimited local inference. Connect to the SuperTARX network for distributed compute power.
macOS & Linux. arm64 & x86. Checksum verified.
curl -fsSL https://tarx.com/install | sh
Full AI inference on your hardware. ~18 tok/s. Sub-500ms response.
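Since the release is checksum verified, a cautious install can save the script and confirm its hash before executing it, rather than piping curl straight to sh. The `.sha256` URL below is an assumption for illustration, not a documented endpoint; the offline lines demonstrate the same sha256sum check against a local file.

```shell
# Hedged sketch: verify the installer before running it.
# The install.sha256 URL is an ASSUMPTION -- use whatever checksum
# file tarx.com actually publishes.
#
#   curl -fsSL https://tarx.com/install -o install.sh
#   curl -fsSL https://tarx.com/install.sha256 -o install.sh.sha256
#   sha256sum -c install.sh.sha256 && sh install.sh

# Offline demonstration of the same verification flow:
printf 'echo hello\n' > install.sh            # stand-in for the downloaded script
sha256sum install.sh > install.sh.sha256      # stand-in for the published checksum
sha256sum -c install.sh.sha256 && echo "checksum ok"
```

On macOS, `shasum -a 256` can stand in for `sha256sum`.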
No cloud accounts. No API keys. No telemetry. Your data stays yours.
TARX indexes your files, builds context, and pushes back when your input is lazy.
Run AI models directly on your machine. Your data never leaves your device. No network latency, complete privacy.
Need more power? Seamlessly offload to the SuperTARX distributed network. Pay only for what you use.
Share your idle compute with the network. Earn credits while helping others run AI workloads.
A distributed AI compute network powered by thousands of nodes worldwide. Join the network and contribute your compute power.