One Base URL
Use https://givemesometoken.xyz/v1 in OpenAI-compatible clients such as Chatbox, Cherry Studio, and LobeChat.
LaiDian Token
Use one endpoint for chat, Responses, model listing, and image generation. The rate multiplier is 1x. Actual model availability depends on your API key, group, quota, and upstream status.
```
https://givemesometoken.xyz/v1
Authorization: Bearer sk-your-api-key
```
No inflated multiplier and no hidden markup. Usage and billing are visible in the console.
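Every request uses the same two headers shown above. A minimal Python sketch of building them (the key value is a placeholder; substitute a key from your console):

```python
API_KEY = "sk-your-api-key"  # placeholder; use your real console key
BASE_URL = "https://givemesometoken.xyz/v1"

def auth_headers(api_key: str) -> dict:
    """Return the Authorization and Content-Type headers for JSON requests."""
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }

# A quick connectivity check could then be, e.g. with the requests library:
#   requests.get(f"{BASE_URL}/models", headers=auth_headers(API_KEY))
```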
gpt-image-2 uses /v1/images/generations, not the regular chat endpoint.
Create an API key in the console, then send it as a Bearer token. This example uses gpt-5.4; you can also use gpt-5.5 if your key has access.
```bash
curl https://givemesometoken.xyz/v1/chat/completions \
  -H "Authorization: Bearer sk-your-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5.4",
    "messages": [
      {"role": "system", "content": "You are a concise assistant."},
      {"role": "user", "content": "Describe LaiDian Token in one sentence."}
    ]
  }'
```
In these clients, set the Base URL to https://givemesometoken.xyz/v1. The client will append /chat/completions, /responses, or the image endpoint automatically.
| Purpose | Method and Path | Notes |
|---|---|---|
| Model list | GET /v1/models | Lists models available to the current API key. Availability depends on account, group, quota, and upstream status. |
| Chat completions | POST /v1/chat/completions | OpenAI Chat Completions compatible format. Recommended for Chatbox, Cherry Studio, and LobeChat. |
| Responses | POST /v1/responses | Use this when your SDK or client supports the Responses API. |
| Image generation | POST /v1/images/generations | Use with image models such as gpt-image-2. |
| Image edits | POST /v1/images/edits | Requires a client and model that support image editing. |
| Native Gemini | /v1beta | For Gemini SDK/CLI style requests. Do not use this for regular OpenAI-compatible clients. |
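The routes above can be captured in a small lookup helper. This is an illustrative sketch, not part of any SDK; the paths come straight from the table:

```python
# Map each purpose from the endpoint table to its method and path.
ENDPOINTS = {
    "models": ("GET", "/v1/models"),
    "chat": ("POST", "/v1/chat/completions"),
    "responses": ("POST", "/v1/responses"),
    "image_generation": ("POST", "/v1/images/generations"),
    "image_edit": ("POST", "/v1/images/edits"),
}

def endpoint_for(purpose: str) -> tuple:
    """Return (method, path) for a given purpose, e.g. 'chat'."""
    return ENDPOINTS[purpose]
```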
Chat Completions is the most compatible entry point. It supports both normal and streaming responses.
```bash
curl -N https://givemesometoken.xyz/v1/chat/completions \
  -H "Authorization: Bearer sk-your-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5.5",
    "stream": true,
    "messages": [
      {"role": "user", "content": "Write a three-line product intro."}
    ]
  }'
```
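Streamed responses arrive as server-sent events: each line is `data: {json}` and the stream ends with `data: [DONE]`. A minimal parser for the text deltas, assuming the standard Chat Completions chunk shape:

```python
import json

def extract_deltas(sse_lines):
    """Yield content fragments from Chat Completions stream lines."""
    for line in sse_lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alive lines and SSE comments
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break  # end-of-stream sentinel
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"]
        if "content" in delta:
            yield delta["content"]
```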
```javascript
const response = await fetch("https://givemesometoken.xyz/v1/chat/completions", {
  method: "POST",
  headers: {
    "Authorization": "Bearer sk-your-api-key",
    "Content-Type": "application/json"
  },
  body: JSON.stringify({
    model: "gpt-5.4",
    messages: [{ role: "user", content: "Hello, LaiDian Token." }]
  })
});
const data = await response.json();
console.log(data.choices[0].message.content);
```
```python
from openai import OpenAI

client = OpenAI(
    api_key="sk-your-api-key",
    base_url="https://givemesometoken.xyz/v1",
)

resp = client.chat.completions.create(
    model="gpt-5.4",
    messages=[
        {"role": "user", "content": "Explain what an API relay is."}
    ],
)
print(resp.choices[0].message.content)
```
If your SDK or client supports the Responses API, call /v1/responses directly.
```bash
curl https://givemesometoken.xyz/v1/responses \
  -H "Authorization: Bearer sk-your-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5.4",
    "input": "Give me one API key security tip for developers."
  }'
```
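The Responses API nests its text deeper than Chat Completions does. A hedged sketch of pulling the text out of a result, assuming the standard OpenAI Responses shape (an "output" list whose message items carry "output_text" content parts):

```python
def response_output_text(resp: dict) -> str:
    """Concatenate the text parts of a /v1/responses result.

    Assumes the standard Responses API shape; adjust if the
    upstream returns a different structure.
    """
    parts = []
    for item in resp.get("output", []):
        if item.get("type") != "message":
            continue  # skip non-message items such as reasoning blocks
        for part in item.get("content", []):
            if part.get("type") == "output_text":
                parts.append(part.get("text", ""))
    return "".join(parts)
```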
Image models should not use the chat endpoint. Use /v1/images/generations; the example model is gpt-image-2.
```bash
curl https://givemesometoken.xyz/v1/images/generations \
  -H "Authorization: Bearer sk-your-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-image-2",
    "prompt": "A minimal product hero image for a transparent AI API relay, clean Apple-style, warm light.",
    "size": "1024x1024"
  }'
```
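Image endpoints typically return each image either as a URL or as base64 data under a "data" list. A sketch of decoding a base64 result; the `b64_json` field name follows the standard OpenAI images format and is an assumption if the upstream differs:

```python
import base64

def decode_image_b64(item: dict) -> bytes:
    """Decode one entry of the images response "data" list from b64_json."""
    return base64.b64decode(item["b64_json"])

# Saving the first image to disk would look like:
#   item = resp_json["data"][0]
#   with open("out.png", "wb") as f:
#       f.write(decode_image_b64(item))
```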
Image editing uses multipart form-data. Upload the source image as the image field.
```bash
curl https://givemesometoken.xyz/v1/images/edits \
  -H "Authorization: Bearer sk-your-api-key" \
  -F "model=gpt-image-2" \
  -F "prompt=Add a tiny green token in the center." \
  -F "image=@input.png" \
```
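The same multipart request can be assembled in Python. A sketch mirroring the curl -F flags above (the helper itself is illustrative; send the result with the requests library's `data=` and `files=` parameters):

```python
def build_edit_fields(model: str, prompt: str, image_path: str):
    """Return (data, files) for a multipart POST to /v1/images/edits.

    Sent with, e.g.:
      requests.post(url, headers={"Authorization": f"Bearer {key}"},
                    data=data, files=files)
    Note: do not set Content-Type yourself; requests adds the
    multipart boundary automatically.
    """
    data = {"model": model, "prompt": prompt}
    files = {"image": open(image_path, "rb")}
    return data, files
```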
| Client | Recommended Setup | Models |
|---|---|---|
| Chatbox | Select OpenAI API or OpenAI Compatible. Set Base URL to https://givemesometoken.xyz/v1. | gpt-5.4, gpt-5.5 |
| Cherry Studio | Use an OpenAI-compatible provider and set the custom API endpoint to https://givemesometoken.xyz/v1. | Add models manually according to your console access. |
| LobeChat | Use the OpenAI Compatible provider and paste your LaiDian Token sk-... key. | Use the models returned by /v1/models. |
| Gemini SDK/CLI | For native Gemini protocol, use https://givemesometoken.xyz/v1beta. | Use the corresponding Gemini models. |
Does the Base URL need to include /chat? No. For clients like Chatbox, use https://givemesometoken.xyz/v1; the client appends the endpoint path automatically.
Model availability depends on your API key, group, quota, and upstream status. Check GET /v1/models first.
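Checking availability programmatically is a small filter over the GET /v1/models result. The standard OpenAI list shape (a "data" list of objects with an "id") is assumed here:

```python
def available_model_ids(models_response: dict) -> set:
    """Extract the model ids from a GET /v1/models response."""
    return {m["id"] for m in models_response.get("data", [])}

def has_model(models_response: dict, model_id: str) -> bool:
    """True if the key can see model_id, e.g. 'gpt-5.4'."""
    return model_id in available_model_ids(models_response)
```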
Use /v1/images/generations with an image model such as gpt-image-2. Normal chat clients may not support image endpoints.
Make sure the request includes Authorization: Bearer sk-your-api-key and that the key has not been deleted, disabled, or copied incorrectly.
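Most 401 errors come down to a malformed or mangled key. A quick local sanity check before debugging further (the "sk-" prefix follows the key format used in the examples above):

```python
def looks_like_valid_key(key: str) -> bool:
    """Catch obvious copy-paste mistakes: wrong prefix, stray
    whitespace, an accidentally included 'Bearer ', or an empty key.
    This does NOT verify the key against the server."""
    key = key or ""
    return (
        key.startswith("sk-")
        and key == key.strip()
        and " " not in key
        and len(key) > len("sk-")
    )
```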
Open the console and check your balance, plan, group assignment, and usage records.