The TARX API is live. Drop-in OpenAI replacement.
api.tarx.com is live.
It's an OpenAI-compatible inference API. If your code talks to OpenAI, it talks to TARX. Change the base URL. That's it.
```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.tarx.com/v1",
    api_key="your-tarx-key",
)

response = client.chat.completions.create(
    model="tarx-mind",
    messages=[{"role": "user", "content": "Hello"}],
)
```
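Streaming follows the same OpenAI chunk format: each chunk carries a delta, and you concatenate `delta.content` to rebuild the reply. A sketch of the accumulation logic — the stand-in chunk objects below are mocks so it runs offline; with the real SDK you'd pass `stream=True` and iterate the response instead:

```python
from types import SimpleNamespace

def collect_stream(chunks) -> str:
    """Accumulate streamed delta content into one string."""
    parts = []
    for chunk in chunks:
        delta = chunk.choices[0].delta
        if delta.content:  # final chunk's content is typically None
            parts.append(delta.content)
    return "".join(parts)

# Stand-in chunks shaped like OpenAI streaming responses:
fake_chunks = [
    SimpleNamespace(choices=[SimpleNamespace(delta=SimpleNamespace(content=c))])
    for c in ["Hel", "lo", None]
]
print(collect_stream(fake_chunks))  # prints "Hello"
```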
Same SDK. Same response format. Same streaming. The difference is what happens to your data after the request completes.
Nothing. Nothing happens to it. It's gone.
Your data doesn't train our model. Not because we're nice. Because architecturally, it never touches a server that could store it. Inference runs on the Supercomputer mesh — distributed Apple Silicon hardware. Queries are processed and discarded. There's no database accumulating your prompts. There's no pipeline feeding them into a training run. The architecture makes storage impossible, not just impractical.
Free tier gives you 100 requests per day. No credit card. Just enter your email at tarx.com/api-access, get a key, and start building.
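With a key in hand, you don't even need the SDK. A raw-HTTP sketch, assuming the standard OpenAI-style chat completions route (`$TARX_API_KEY` is the key from the signup above):

```shell
# Hedged sketch: plain curl against the OpenAI-compatible endpoint.
curl https://api.tarx.com/v1/chat/completions \
  -H "Authorization: Bearer $TARX_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "tarx-mind", "messages": [{"role": "user", "content": "Hello"}]}'
```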
If you're running TARX locally, you don't even need a key. Point your base URL at http://localhost:11435/v1 and the API is identical, except inference now runs on your own hardware. Zero network calls. Zero round-trip latency. Just your machine and your model.
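Switching between hosted and local is a one-line change to the base URL — a minimal sketch (the helper name is mine, and it assumes the local server ignores the `api_key` field):

```python
def tarx_base_url(local: bool) -> str:
    """Pick the endpoint: local TARX node or hosted API. (Hypothetical helper.)"""
    return "http://localhost:11435/v1" if local else "https://api.tarx.com/v1"

# client = OpenAI(base_url=tarx_base_url(local=True), api_key="unused")
```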
MCP integration is one command:
```shell
tarx mcp add claude
```
That wires TARX tools into Claude Code, Cursor, or any MCP-compatible client. Memory, file search, inference — all available as tool calls.
The API is live. Go build something.