
DEVELOPER RUNTIME

One runtime. Start local. Scale to Supercomputer.

Install TARX locally with zero setup and point your OpenAI-compatible code at it. When you need more power, sign in, add a TARX API key, and point to Supercomputer: same code, same contract. Enterprise teams run the identical runtime on private deployments or keep existing provider agreements.

HOW IT STARTS

01

Start local

Install TARX, point your existing OpenAI-compatible SDK at the local runtime, and make your first call with no account or API key.

02

Unlock Supercomputer

Sign in only when local hardware is not enough. Create a TARX API key, switch to the hosted endpoint, and keep the same request shape.

03

Keep the contract

Move the same app across local development, hosted Supercomputer, customer-scoped infrastructure, or approved enterprise provider agreements.

LOCAL QUICKSTART

Keep your SDK. Change the base URL. Local runtime is the first-run experience and does not require a hosted account or API key.

Python
from openai import OpenAI

client = OpenAI(
    base_url="http://127.0.0.1:11440/v1",
    api_key="not-needed"
)

response = client.chat.completions.create(
    model="tarx",
    messages=[{"role": "user", "content": "Explain this function"}],
    stream=True
)

for chunk in response:
    print(chunk.choices[0].delta.content or "", end="")

Install TARX first: curl -fsSL cli.tarx.com/install | sh

HOSTED UNLOCK

Hosted access

Local first. Add a key only when you need hosted execution.

TARX Code is the right long-term control surface for runtime policy, docs, and hosted access. Today, the unlock path is simple: sign in, create a TARX API key, and point the same client at https://api.tarx.com/v1.

No key is required for local use. Sign in only when you want hosted Supercomputer execution.

shell
export TARX_API_KEY="your_tarx_key"

curl https://api.tarx.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $TARX_API_KEY" \
  -d '{
    "model": "tarx",
    "messages": [{"role": "user", "content": "Summarize this project"}]
  }'
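
If you stay in Python, the hosted switch is the same client with a different base URL and a real key. A minimal sketch, assuming TARX_API_KEY is exported as in the shell example above:

Python
import os

from openai import OpenAI

# Same SDK and request shape as the local quickstart; only the endpoint and key change.
client = OpenAI(
    base_url="https://api.tarx.com/v1",
    api_key=os.environ["TARX_API_KEY"]
)

response = client.chat.completions.create(
    model="tarx",
    messages=[{"role": "user", "content": "Summarize this project"}]
)

print(response.choices[0].message.content)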

EXECUTION PLANES

Local

No-key local runtime

Best for development, private workflows, and low-latency daily use. Install TARX and call the local runtime directly.

  • No hosted dependency
  • Local-first state and files
  • OpenAI-compatible start

Supercomputer

Hosted execution

Use Supercomputer for heavier reasoning, longer contexts, multimodal workloads, and shared capacity without changing your app contract.

  • TARX account + key
  • Same request format
  • Joules plus familiar usage indicators

Enterprise

Private deployment or BYO agreements

Run TARX on customer-scoped infrastructure or keep existing provider agreements with TARX as the runtime, policy, and evidence layer.

  • Customer-scoped infrastructure
  • Provider allowlists and policy
  • Same runtime contract
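
Because the contract is identical across planes, moving an app is a configuration change rather than a code change. A minimal sketch, assuming the target endpoint is chosen with a TARX_BASE_URL environment variable (an illustrative name, not a documented TARX setting):

Python
import os

from openai import OpenAI

# Select the execution plane from configuration; the request shape stays the same.
# Defaults to the local runtime when no hosted or enterprise endpoint is configured.
base_url = os.environ.get("TARX_BASE_URL", "http://127.0.0.1:11440/v1")
api_key = os.environ.get("TARX_API_KEY", "not-needed")  # placeholder works locally; hosted needs a real TARX key

client = OpenAI(base_url=base_url, api_key=api_key)

response = client.chat.completions.create(
    model="tarx",
    messages=[{"role": "user", "content": "Explain this function"}]
)

print(response.choices[0].message.content)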

WHY IT STICKS

TARX is not just another endpoint.

Compatibility gets you in the door. Runtime state is what makes the platform worth keeping. TARX is where developers gain routing, memory, files, tools, artifacts, and evidence without rebuilding their app every time execution moves between local, hosted, and enterprise environments.

  • OpenAI-compatible entry point for fast adoption
  • Route policies across local, hosted, and enterprise execution
  • Sessions, memory, files, tools, and task state
  • Route evidence for debugging and enterprise trust
  • One runtime contract across text, voice, vision, and future media workloads

PRICING SHAPE

Local Runtime

Free

No key required. Best for development, private workflows, and local-first shipping.

Hosted API

Joules + usage

Use a TARX key when you want hosted Supercomputer execution. Billing is in Joules, with familiar usage indicators so developer teams can compare and forecast.

Enterprise

Custom / BYO

Deploy TARX on customer-scoped infrastructure or route through provider agreements the company already has.

Local runtime is free forever. Hosted Supercomputer is billed in Joules, with familiar usage indicators shown alongside so teams can compare, forecast, and buy without fake conversion math.

MCP + CODE

Use TARX locally as an API runtime, then add MCP and TARX Code workflows when you want repo context, tool orchestration, or richer runtime state.

shell
tarx mcp add claude
tarx mcp add cursor
tarx mcp add vscode

ENTERPRISE PATH

Enterprise runtime

Deploy TARX inside the company trust boundary with governance, routing policy, auditability, and customer-scoped infrastructure.

Enterprise BYO agreements

Keep existing provider contracts. Let TARX become the runtime, policy, and evidence layer across providers your company already approves.

FAQ

Do I need an API key to start?

No. Local runtime is the entry point. Install TARX, point your OpenAI-compatible SDK at the local runtime, and make your first call without an account or hosted key.

What are Joules?

Joules are TARX's native unit for hosted Supercomputer compute. They reflect usage across reasoning depth, context size, multimodal work, and runtime duration.

Why show familiar usage indicators too?

Developers already compare hosted AI products with token and usage-based pricing. TARX shows Joules as the native meter and familiar usage indicators beside it for comparison; we do not publish fake token-to-Joule conversions.

How does hosted pricing work?

Local runtime stays free. Hosted Supercomputer requires a TARX key and is billed in Joules, with clear usage reporting in your TARX account.

What makes TARX different from a generic AI router?

Routers mostly abstract providers. TARX is a runtime: it carries state, files, memory, tools, route policies, and evidence across local, hosted, and enterprise execution.

Can my company keep its existing model agreements?

Yes. Enterprise adoption does not have to rip out existing provider contracts. TARX can become the runtime, policy, and evidence layer while your company keeps approved vendor relationships and BYO credentials.

Will TARX stay text-only?

No. The contract is being shaped for multimodal work: text, embeddings, voice, vision, image workflows, and long-running media jobs like video generation.
