Unlock the Hive Mind.
SharX delivers high-performance LLM inference by aggregating idle GPU capacity worldwide. Run DeepSeek-R1, Llama-3, and Mistral at up to 80% lower cost than Big Tech clouds.
Use https://apillm.semburat.online/v1 as your base_url in OpenAI SDKs.
Blazing Fast
Low-latency inference on geographically nearby H100 & A100 clusters.
Drop-in Ready
100% compatible with OpenAI libraries. Just change the URL.
Private & Secure
No training on your data. Ephemeral processing nodes only.
Cost Effective
Pay only for computed tokens. No idle server fees.
Authentication
The SharX API authenticates every request with a Bearer token sent in the Authorization header.
Tip: Your API key carries full privileges. Do not share it in client-side code (browsers/apps). Always route requests through your own backend server.
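As a sketch of what the SDK sends under the hood, the Authorization header looks like this. The key below is a placeholder; the example builds the request with requests.Request and inspects it without actually sending anything over the network:

```python
import requests

API_KEY = "sk-sharx-xxxxxxxx"  # placeholder -- load from an env var in real code

# Build (but do not send) a request, so we can inspect the auth header.
req = requests.Request(
    "GET",
    "https://apillm.semburat.online/v1/models",
    headers={"Authorization": f"Bearer {API_KEY}"},
)
prepared = req.prepare()
print(prepared.headers["Authorization"])  # Bearer sk-sharx-xxxxxxxx
```

In production, keep the key server-side and have your frontend call your own backend, which attaches this header before forwarding to SharX.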
/v1/chat/completions
Request
from openai import OpenAI

client = OpenAI(
    base_url="https://apillm.semburat.online/v1",
    api_key="sk-sharx-xxxxxxxx"
)

response = client.chat.completions.create(
    model="deepseek-r1:32b",
    messages=[{"role": "user", "content": "Hello!"}],
    temperature=0.7
)

print(response.choices[0].message.content)
Response
{
  "id": "chatcmpl-sharx-123",
  "object": "chat.completion",
  "created": 1709123456,
  "model": "deepseek-r1:32b",
  "choices": [{
    "index": 0,
    "message": {
      "role": "assistant",
      "content": "Hello! I am SharX AI."
    },
    "finish_reason": "stop"
  }],
  "usage": { "total_tokens": 35 }
}
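If you call the endpoint with raw HTTP instead of the SDK, you extract the same fields from the JSON yourself. A minimal sketch using the sample response above as the wire payload:

```python
import json

# The sample chat.completion response body, as it would arrive over the wire.
body = """
{
  "id": "chatcmpl-sharx-123",
  "object": "chat.completion",
  "created": 1709123456,
  "model": "deepseek-r1:32b",
  "choices": [{
    "index": 0,
    "message": {"role": "assistant", "content": "Hello! I am SharX AI."},
    "finish_reason": "stop"
  }],
  "usage": {"total_tokens": 35}
}
"""
data = json.loads(body)
reply = data["choices"][0]["message"]["content"]
tokens = data["usage"]["total_tokens"]
print(reply)   # Hello! I am SharX AI.
print(tokens)  # 35
```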
/v1/models
from openai import OpenAI
client = OpenAI(base_url="...", api_key="...")
print(client.models.list())
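Assuming the endpoint follows the standard OpenAI models-list shape ({"object": "list", "data": [...]}), you can pull out just the model IDs. A sketch against a hypothetical raw response (the entry shown is illustrative, not a guaranteed catalog):

```python
import json

# Hypothetical /v1/models response in the standard OpenAI list shape.
body = '{"object": "list", "data": [{"id": "deepseek-r1:32b", "object": "model"}]}'

models = [m["id"] for m in json.loads(body)["data"]]
print(models)  # ['deepseek-r1:32b']
```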
/v1/balance
Request
import requests

res = requests.get(
    "https://apillm.semburat.online/v1/balance",
    headers={"Authorization": "Bearer ..."}
)
print(res.json())
Response
{
  "object": "balance",
  "total_credits": 100.00,
  "used_credits": 12.50,
  "remaining_credits": 87.50,
  "currency": "USD",
  "status": "active"
}
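The credit fields are self-consistent: remaining_credits equals total_credits minus used_credits. A sketch that parses the sample above, verifies that invariant, and flags a low balance (the 10% threshold is an arbitrary choice for illustration):

```python
import json

body = """
{
  "object": "balance",
  "total_credits": 100.00,
  "used_credits": 12.50,
  "remaining_credits": 87.50,
  "currency": "USD",
  "status": "active"
}
"""
bal = json.loads(body)

# Sanity check: the three credit fields should agree.
assert bal["remaining_credits"] == bal["total_credits"] - bal["used_credits"]

# Example policy: warn when under 10% of credits remain (arbitrary threshold).
low = bal["remaining_credits"] < 0.10 * bal["total_credits"]
print(low)  # False
```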
/v1/tokenize
Request
import requests

data = {"model": "deepseek-r1:32b", "content": "Count me!"}
res = requests.post(
    "https://apillm.semburat.online/v1/tokenize",
    json=data,
    headers={"Authorization": "Bearer ..."}
)
print(res.json())
Response
{
"object": "list",
"count": 3,
"tokens": [1520, 220, 11],
"model": "deepseek-r1:32b"
}
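Since billing is per computed token, /v1/tokenize lets you size a prompt before sending it. In the sample above, count matches the length of the tokens array; a sketch that checks this and estimates cost (the per-token price is an invented figure for illustration, check real pricing):

```python
import json

body = """
{
  "object": "list",
  "count": 3,
  "tokens": [1520, 220, 11],
  "model": "deepseek-r1:32b"
}
"""
tok = json.loads(body)

# In the sample response, count mirrors the token array length.
assert tok["count"] == len(tok["tokens"])

# Illustrative cost estimate only; the price below is a made-up placeholder.
PRICE_PER_TOKEN = 0.000001
estimated_cost = tok["count"] * PRICE_PER_TOKEN
print(tok["count"], estimated_cost)
```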