Two Mac Minis in Lakeway, Texas, are now the TARX Supercomputer.
The TARX Supercomputer, as of today, is two Mac Minis sitting on a desk in Lakeway, Texas. One M4 Pro, one M2. Combined: 40 GPU cores, 48GB unified memory. They run 24/7 behind a Cloudflare tunnel at compute.tarx.com.
That's the bootstrap. That's the entire network.
Here's how it works. When a TARX user sends a query that needs more power than their local 3B model can handle, the request routes to compute.tarx.com. The Supercomputer runs a fine-tuned 7B model — TARX Mind, a LoRA-trained Qwen 2.5 with our identity and reasoning baked into the weights. Inference happens, the response streams back, and the query is discarded. No logs. No storage. Just compute.
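The routing decision above can be sketched in Rust. This is illustrative only: the names, the capability check, and the URL handling are assumptions, not the actual TARX client logic.

```rust
// Hypothetical routing sketch -- the real client's capability check
// and types are assumptions for illustration.
#[derive(Debug, PartialEq)]
enum Route {
    Local3B,                     // handled by the on-device 3B model
    Supercomputer(&'static str), // forwarded to the mesh
}

/// Decide where a query runs. `needs_heavy_reasoning` stands in for
/// whatever capability check the real client performs.
fn route_query(needs_heavy_reasoning: bool) -> Route {
    if needs_heavy_reasoning {
        // Forwarded over the Cloudflare tunnel; the query is discarded
        // after the response streams back -- no logs, no storage.
        Route::Supercomputer("https://compute.tarx.com")
    } else {
        Route::Local3B
    }
}

fn main() {
    assert_eq!(route_query(false), Route::Local3B);
    assert_eq!(
        route_query(true),
        Route::Supercomputer("https://compute.tarx.com")
    );
    println!("routing ok");
}
```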
The economics are simple. When a Mac contributes idle compute to the mesh, 70% of the inference revenue goes to the hardware owner. 30% goes to TARX for coordination. At scale, this means your Mac earns money while you sleep.
Right now there's no "at scale." There are two Macs and a founder who checks the dashboard before bed. But the architecture doesn't change. The Supercomputer at 2 nodes is the same Supercomputer at 200,000 nodes. Same protocol. Same privacy model. Same economics.
Why not just rent GPUs? Because the point isn't compute. The point is that AI infrastructure should be owned by the people who use it. Renting NVIDIA clusters means your data passes through someone else's hardware, governed by someone else's terms. The Supercomputer is Apple Silicon in people's homes. Every node is someone's actual machine.
The mesh runs a Rust binary called tarx-supercomputer. It exposes health, inference, and credit endpoints. It monitors hardware utilization via sysinfo. It's open source.
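The three endpoint families can be sketched as a route table. The paths and response bodies here are assumptions that mirror the post's description, not tarx-supercomputer's actual API:

```rust
// Hypothetical shape of the tarx-supercomputer HTTP surface. The real
// binary's routes may differ; this mirrors the three endpoint families
// the post names: health, inference, and credits.
fn handle(path: &str) -> Result<&'static str, u16> {
    match path {
        "/health" => Ok("{\"status\":\"ok\"}"),             // liveness probe
        "/inference" => Ok("{\"model\":\"tarx-mind-7b\"}"), // model serving
        "/credits" => Ok("{\"split\":[70,30]}"),            // earnings accounting
        _ => Err(404), // anything else is not part of the surface
    }
}

fn main() {
    assert!(handle("/health").is_ok());
    assert!(handle("/inference").is_ok());
    assert_eq!(handle("/nope"), Err(404));
    println!("endpoints ok");
}
```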
At 1.2 million users, someone's Mac Mini running overnight will earn more than their electric bill. That's the point.