Developer API
OpenAI-compatible inference that runs locally. Point your existing SDK at localhost:11435 and everything just works. Nothing stored. Ever.
```python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11435/v1",
    api_key="not-needed",  # local inference; no key required
)

response = client.chat.completions.create(
    model="tarx",
    messages=[{"role": "user", "content": "Hello"}],
    stream=True,
)

for chunk in response:
    # delta.content can be None on the final chunk, so fall back to ""
    print(chunk.choices[0].delta.content or "", end="")
```

Install TARX first: `curl -fsSL cli.tarx.com/install | sh`
Endpoints:

- `/v1/chat/completions`
- `/v1/models`
- `/v1/embeddings`
- `/health`

TARX local inference stores zero data beyond your machine. No telemetry on prompts or responses. No training on your data. The Supercomputer mesh uses ephemeral compute — queries are processed and discarded. Set `privacy_flag=true` to ensure queries never leave your hardware.
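The endpoints follow the OpenAI wire format, so you can also build requests by hand with nothing but the standard library. A minimal sketch (the endpoint path is from this page; passing `privacy_flag` as a top-level body field is an assumption about how TARX accepts it):

```python
import json
import urllib.request

BASE = "http://localhost:11435"

def chat_request(prompt: str, privacy: bool = True) -> urllib.request.Request:
    """Build a POST to /v1/chat/completions in the OpenAI wire format."""
    body = {
        "model": "tarx",
        "messages": [{"role": "user", "content": prompt}],
        "privacy_flag": privacy,  # assumption: sent as a top-level body field
    }
    return urllib.request.Request(
        f"{BASE}/v1/chat/completions",
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
    )

req = chat_request("Hello")
# urllib.request.urlopen(req) would return an OpenAI-style chat.completion JSON body
```

The same pattern works for `/v1/embeddings`; `/v1/models` and `/health` are plain GETs.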
Plans:

- Local (Free): run on your hardware
- Supercomputer (Free tier): mesh-augmented inference
- Enterprise (Contact us): custom deployment
Add TARX tools to Claude Code, Cursor, or any MCP-compatible client.
```shell
tarx mcp add claude   # adds to ~/.claude.json
tarx mcp add cursor   # adds to Cursor config
tarx mcp add vscode   # adds to VS Code config
```
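For reference, a plausible shape of the entry that `tarx mcp add claude` writes into `~/.claude.json`, assuming a standard MCP stdio server registration (the server name and launch arguments here are illustrative, not confirmed):

```json
{
  "mcpServers": {
    "tarx": {
      "command": "tarx",
      "args": ["mcp", "serve"]
    }
  }
}
```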