API Documentation
OpenClaw Token — Global AI model relay. Connect to Claude, GPT, and Gemini without a VPN.
Quick Start
1. Base URL
https://token.openclaw-token.shop
2. Authentication
All requests require an API key. Create one on the Tokens page after logging in.
Authorization: Bearer sk-your-api-key
3. First Request
curl https://token.openclaw-token.shop/v1/chat/completions \
-H "Authorization: Bearer sk-your-key" \
-H "Content-Type: application/json" \
-d '{
"model": "claude-sonnet-4-6",
"max_tokens": 1024,
"messages": [{"role": "user", "content": "Hello!"}]
}'
4. SDK Setup (2 lines)
Python
from openai import OpenAI
client = OpenAI(api_key="sk-your-key", base_url="https://token.openclaw-token.shop/v1")
Node.js
import OpenAI from "openai";
const client = new OpenAI({ apiKey: "sk-your-key", baseURL: "https://token.openclaw-token.shop/v1" });
Claude
Anthropic Claude models. Supports both OpenAI-compatible and native Anthropic API format.
| Model | Input $/1M | Output $/1M | Cache $/1M | Best For |
|---|---|---|---|---|
| claude-opus-4-7 | $0.90 | $4.50 | $0.09 | Latest flagship |
| claude-opus-4-6 | $0.90 | $4.50 | $0.09 | Complex reasoning |
| claude-sonnet-4-6 (Recommended) | $0.52 | $2.60 | $0.052 | Best value, Claude Code default |
| claude-haiku-4-5-20251001 | $0.18 | $0.90 | $0.018 | Fastest, lightest |
Python — OpenAI SDK
response = client.chat.completions.create(
model="claude-sonnet-4-6",
max_tokens=1024,
messages=[
{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "Write a Python quicksort"}
]
)
print(response.choices[0].message.content)
Python — Anthropic SDK
from anthropic import Anthropic
anthropic_client = Anthropic(api_key="sk-your-key", base_url="https://token.openclaw-token.shop")
message = anthropic_client.messages.create(
model="claude-sonnet-4-6",
max_tokens=1024,
messages=[{"role": "user", "content": "Write a Python quicksort"}]
)
print(message.content[0].text)
Node.js
const res = await client.chat.completions.create({
model: "claude-sonnet-4-6",
max_tokens: 1024,
messages: [{ role: "user", content: "Write a JS quicksort" }],
});
console.log(res.choices[0].message.content);
GPT
OpenAI GPT-5 series. Supports both /v1/chat/completions (auto-converted) and /v1/responses (native).
| Model | Description |
|---|---|
| gpt5.4-plus (Flagship) | GPT-5.4 — strongest general capability |
| gpt-5.4-mini-plus | GPT-5.4 lightweight |
| gpt-5.3-codex-plus | Codex — optimized for code |
| gpt-5.2-plus | GPT-5.2 |
Python
response = client.chat.completions.create(
model="gpt5.4-plus",
max_tokens=1024,
messages=[{"role": "user", "content": "Explain quantum computing"}]
)
print(response.choices[0].message.content)
Node.js
const res = await client.chat.completions.create({
model: "gpt5.4-plus",
max_tokens: 1024,
messages: [{ role: "user", content: "Explain quantum computing" }],
});
console.log(res.choices[0].message.content);
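Python — Responses API (native)
The native /v1/responses endpoint mentioned above can also be reached through the OpenAI SDK's Responses API. A minimal sketch, assuming the relay forwards Responses requests unchanged:
# Responses API call through the same client (base_url from SDK Setup)
response = client.responses.create(
    model="gpt5.4-plus",
    input="Explain quantum computing in one paragraph"
)
print(response.output_text)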
Gemini
Google Gemini text models via OpenAI-compatible format.
| Model | Input $/1M | Output $/1M | Best For |
|---|---|---|---|
| gemini-3-flash-preview | $0.40 | $2.40 | Fast text generation |
| gemini-3.1-flash-lite-preview | $0.20 | $1.20 | Lightest reasoning |
| gemini-3-pro-preview | $1.60 | $9.60 | High-quality reasoning |
| gemini-3.1-pro-preview | $1.60 | $9.60 | Latest Pro |
Python
response = client.chat.completions.create(
model="gemini-3-flash-preview",
messages=[{"role": "user", "content": "Hello, Gemini!"}]
)
print(response.choices[0].message.content)
Gemini Flash Image
Google Gemini 3.1 Flash Image — conversational image generation and editing. Generate, edit, and iterate on images through multi-turn chat.
/v1/chat/completions
Models & Pricing
| Model | Input $/1M | Output $/1M | Notes |
|---|---|---|---|
| gemini-3.1-flash-image | — | — | $0.05 per call. Fast image gen, ~2520 tokens per image |
| gemini-3.1-flash-image-preview | $0.25 | $40.00 | Per-token billing |
Parameters
| Parameter | Values | Notes |
|---|---|---|
| model * | gemini-3.1-flash-image | Required |
| messages * | Array | Chat messages. Supports system/user/assistant roles |
| max_tokens | Integer | Recommended: 2000+ for image output |
| temperature | 0–2 | Controls creativity |
| top_p | 0–1 | Nucleus sampling |
| top_k | Integer | Top-k sampling |
| stream | true / false | Streaming supported (see the sketch below) |
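Python — Streaming
A minimal streaming sketch for the stream parameter above, assuming the relay uses the standard OpenAI delta format; the markdown image link is assembled from the streamed chunks:
stream = client.chat.completions.create(
    model="gemini-3.1-flash-image",
    max_tokens=2000,
    stream=True,
    messages=[{"role": "user", "content": "Draw a small lighthouse at dusk"}]
)
content = ""
for chunk in stream:
    # some chunks (e.g. the final usage chunk) may carry no choices/delta
    if chunk.choices and chunk.choices[0].delta.content:
        content += chunk.choices[0].delta.content
print(content)  # markdown containing the image URL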
Resolution & Aspect Ratio (via extra_body)
Control output resolution and aspect ratio via extra_body.google.image_config:
| Parameter | Values | Notes |
|---|---|---|
| image_size | "512", "1K", "2K", "4K" | Default: 1K. Must use uppercase K |
| aspect_ratio | "1:1", "3:4", "4:3", "9:16", "16:9", "3:2", "2:3", "4:5", "5:4", "1:4", "4:1", "1:8", "8:1", "21:9" | Default: model decides |
Resolution Reference
| image_size | 1:1 | 16:9 | 9:16 | 4:3 | 3:4 |
|---|---|---|---|---|---|
| 512 | 512×512 | 704×384 | 384×704 | 576×448 | 448×576 |
| 1K | 1024×1024 | 1408×768 | 768×1408 | 1152×896 | 896×1152 |
| 2K | 2048×2048 | 2816×1536 | 1536×2816 | 2304×1792 | 1792×2304 |
| 4K | 4096×4096 | 5632×3072 | 3072×5632 | 4608×3584 | 3584×4608 |
Image Input (for editing)
| Format | Notes |
|---|---|
| Base64 data URL | data:image/png;base64,... in image_url content part |
| Supported formats | PNG, JPEG, WebP, HEIC |
| Multiple images | Up to 14 reference images per request |
Response
Images are automatically uploaded to CDN and returned as URLs in markdown format:

Python — Text-to-Image
import re
from openai import OpenAI
client = OpenAI(api_key="sk-your-key", base_url="https://token.openclaw-token.shop/v1")
response = client.chat.completions.create(
model="gemini-3.1-flash-image",
max_tokens=2000,
messages=[{"role": "user", "content": "Draw a cute orange cat at sunset"}]
)
content = response.choices[0].message.content
# Extract image URL from markdown
url = re.search(r'!\[image\]\((\S+)\)', content)
if url:
print(f"Image URL: {url.group(1)}")
Python — 2K / 4K Resolution
# 2K resolution with 16:9 aspect ratio
response = client.chat.completions.create(
model="gemini-3.1-flash-image",
max_tokens=2000,
messages=[{"role": "user", "content": "A mountain landscape at sunset"}],
extra_body={"google": {"image_config": {
"image_size": "2K",
"aspect_ratio": "16:9"
}}}
)
# 4K resolution (takes 60-200s, large images)
response = client.chat.completions.create(
model="gemini-3.1-flash-image",
max_tokens=2000,
messages=[{"role": "user", "content": "A detailed city skyline"}],
extra_body={"google": {"image_config": {
"image_size": "4K",
"aspect_ratio": "16:9"
}}}
)
Python — Image Editing (pass image + instruction)
import base64
# Read local image as base64
with open("input.png", "rb") as f:
img_b64 = base64.b64encode(f.read()).decode()
response = client.chat.completions.create(
model="gemini-3.1-flash-image",
max_tokens=2000,
messages=[{
"role": "user",
"content": [
{"type": "text", "text": "Change the background to a beach scene"},
{"type": "image_url", "image_url": {
"url": f"data:image/png;base64,{img_b64}"
}}
]
}]
)
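Python — Multiple Reference Images
The image-input table above allows up to 14 reference images per request. A sketch passing two references (the file names are placeholders), assuming each image is sent as its own image_url content part:
import base64
def to_data_url(path):
    # encode a local PNG as a base64 data URL
    with open(path, "rb") as f:
        return "data:image/png;base64," + base64.b64encode(f.read()).decode()
response = client.chat.completions.create(
    model="gemini-3.1-flash-image",
    max_tokens=2000,
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Redraw the subject of the second image in the style of the first"},
            {"type": "image_url", "image_url": {"url": to_data_url("style_ref.png")}},
            {"type": "image_url", "image_url": {"url": to_data_url("subject.png")}}
        ]
    }]
)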
Python — Multi-turn Editing (iterative refinement)
# Turn 1: Generate initial image
messages = [{"role": "user", "content": "Draw a simple red house"}]
r1 = client.chat.completions.create(
model="gemini-3.1-flash-image", max_tokens=2000, messages=messages
)
# Turn 2: Edit the result
messages.append({"role": "assistant", "content": r1.choices[0].message.content})
messages.append({"role": "user", "content": "Add a blue sky and green grass"})
r2 = client.chat.completions.create(
model="gemini-3.1-flash-image", max_tokens=2000, messages=messages
)
# Turn 3: Further refinement
messages.append({"role": "assistant", "content": r2.choices[0].message.content})
messages.append({"role": "user", "content": "Add a sun and some clouds"})
r3 = client.chat.completions.create(
model="gemini-3.1-flash-image", max_tokens=2000, messages=messages
)
Python — Style Control with System Prompt
response = client.chat.completions.create(
model="gemini-3.1-flash-image",
max_tokens=2000,
temperature=1.5, # higher = more creative
top_p=0.9,
messages=[
{"role": "system", "content": "You are a cartoon artist. Always draw in cute kawaii style."},
{"role": "user", "content": "Draw a dog playing in a park"}
]
)
Node.js
// Generate
const res = await client.chat.completions.create({
model: "gemini-3.1-flash-image",
max_tokens: 2000,
messages: [{ role: "user", content: "Draw a sunset landscape" }],
});
const content = res.choices[0].message.content;
const urlMatch = content.match(/!\[image\]\((\S+)\)/);
if (urlMatch) console.log("Image:", urlMatch[1]);
// Edit with image input
import fs from "fs";
const imgB64 = fs.readFileSync("input.png").toString("base64");
const edited = await client.chat.completions.create({
model: "gemini-3.1-flash-image",
max_tokens: 2000,
messages: [{
role: "user",
content: [
{ type: "text", text: "Make this image more colorful" },
{ type: "image_url", image_url: { url: `data:image/png;base64,${imgB64}` } },
],
}],
});
• Supports multi-turn conversation for iterative image editing.
• System prompt controls art style (cartoon, photorealistic, etc.).
• Use Chinese or English prompts.
• Text + Image input supported via the image_url content type.
• 1K generation takes 10–30s. 2K takes 60–180s. 4K takes 60–300s.
• image_size must use uppercase K ("2K" ✓ "2k" ✗).
• aspect_ratio controls shape. Without it, the model decides automatically.
GPT Image
OpenAI's image generation and editing models. Support text-to-image generation, image editing, mask-based inpainting, transparent backgrounds, and multiple output formats.
/v1/images/generations — Generate images
/v1/images/edits — Edit images (multipart/form-data)
Models & Pricing
| Model | Pricing | Notes |
|---|---|---|
| GPT Image 1.5 (latest, 4x faster, 20% cheaper) | | |
| gpt-image-1.5-vvip | Token-based: input $5/M, output $32/M | Fastest, recommended |
| GPT Image 1 | | |
| gpt-image-1-vvip | Token-based: input $5/M, output $40/M | Higher quality, slower |
| GPT Image 2 (legacy) | | |
| gpt-image-2 | $0.01/call | Standard queue |
| gpt-image-2-vip | $0.02/call | Priority queue |
Output Token Counts (gpt-image-1 & 1.5)
Billing is based on tokens. Larger sizes and higher quality = more tokens = higher cost.
| Quality | 1024×1024 | 1024×1536 | 1536×1024 |
|---|---|---|---|
| low | 272 | 408 | 400 |
| medium | 1,056 | 1,584 | 1,568 |
| high | 4,160 | 6,240 | 6,208 |
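As a rough worked example (output token count from the table above, output price from the pricing table; prompt input tokens add a small extra amount):
# gpt-image-1.5-vvip, quality="high", size="1024x1024"
output_tokens = 4160              # from the table above
output_price_per_m = 32.0         # $ per 1M output tokens (gpt-image-1.5-vvip)
print(f"~${output_tokens / 1_000_000 * output_price_per_m:.3f} per image")  # ~$0.133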
Generation Parameters
| Parameter | Values | Notes |
|---|---|---|
| model * | gpt-image-1.5-vvip / gpt-image-1-vvip / gpt-image-2 | Required |
| prompt * | Text (max 32,000 chars) | Required. Image description |
| size | auto, 1024x1024, 1536x1024, 1024x1536, 2048x2048, 2048x1152, 3840x2160 | Default: auto. gpt-image-2 supports up to 4K (3840px, 16px aligned, ratio ≤3:1) |
| quality | auto / low / medium / high | Default: auto. Low = fastest, high = most detail |
| n | 1–10 | Number of images. gpt-image-2 only supports n=1 |
| background | auto / transparent / opaque | Default: auto. Transparent requires png/webp output |
| output_format | png / jpeg / webp | Default: png. GPT Image 1/1.5 only |
| output_compression | 0–100 | JPEG/WebP compression. 0 = best quality, 100 = smallest |
| moderation | auto / low | Content filter strictness. Low = less restrictive |
| response_format | url / b64_json | Default: url (DALL-E compat). b64_json returns base64 |
Edit Parameters (multipart/form-data)
| Parameter | Type | Notes |
|---|---|---|
| model * | string | Required |
| prompt * | string | Description of the edit |
| image * | file | Source image(s). Multiple images as reference supported |
| mask | file | PNG with alpha channel marking edit region. Must match image size, <50MB |
| size | string | Same options as generation |
| quality | string | auto / low / medium / high |
| response_format | string | url / b64_json |
Python — Generate (gpt-image-1.5)
from openai import OpenAI
client = OpenAI(api_key="sk-your-key", base_url="https://token.openclaw-token.shop/v1")
response = client.images.generate(
model="gpt-image-1.5-vvip",
prompt="A cute orange cat wearing sunglasses at the beach",
size="1024x1024",
quality="high",
n=1
)
print(response.data[0].url)
Python — All options (transparent + compression + base64)
response = client.images.generate(
model="gpt-image-1.5-vvip",
prompt="A product photo of a sneaker, isolated, studio lighting",
size="1024x1024",
quality="high",
background="transparent",
moderation="low",
output_format="png",
output_compression=30,
response_format="b64_json",
n=1
)
import base64
image_data = base64.b64decode(response.data[0].b64_json)
with open("output.png", "wb") as f:
f.write(image_data)
Python — Multiple images (n=4)
# gpt-image-1/1.5 supports n=1~10, gpt-image-2 only n=1
response = client.images.generate(
model="gpt-image-1.5-vvip",
prompt="A colorful butterfly",
size="1024x1024",
quality="medium",
n=4
)
for i, img in enumerate(response.data):
print(f"Image {i+1}: {img.url}")
Python — Edit image
# Basic edit
response = client.images.edit(
model="gpt-image-1.5-vvip",
image=open("input.png", "rb"),
prompt="Add red sunglasses to the cat",
n=1
)
print(response.data[0].url)
# Edit with mask (region-specific inpainting)
response = client.images.edit(
model="gpt-image-1.5-vvip",
image=open("input.png", "rb"),
mask=open("mask.png", "rb"), # alpha channel marks edit area
prompt="Place a golden crown here",
n=1
)
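Python — gpt-image-2 (per-call pricing, large sizes)
gpt-image-2 is billed per call and supports sizes up to 3840px (see the parameter table above). A sketch, assuming it accepts the same options through the images endpoint:
response = client.images.generate(
    model="gpt-image-2",
    prompt="An ultra-wide panorama of a coastal city at dawn",
    size="3840x2160",   # 4K, gpt-image-2 only
    quality="high",
    n=1                 # gpt-image-2 only supports n=1
)
print(response.data[0].url)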
Node.js
// Generate
const image = await client.images.generate({
model: "gpt-image-1.5-vvip", prompt: "A cute cat",
size: "1024x1024", quality: "high", n: 1,
});
console.log(image.data[0].url);
// Edit
import fs from "fs";
const edited = await client.images.edit({
model: "gpt-image-1.5-vvip",
image: fs.createReadStream("input.png"),
prompt: "Add sunglasses", n: 1,
});
Model Comparison
| Feature | gpt-image-1.5 | gpt-image-1 | gpt-image-2 |
|---|---|---|---|
| Speed | 4x faster | Standard | Standard |
| Pricing | Token-based | Token-based | Fixed per-call |
| n (batch) | 1–10 | 1–10 | 1 only |
| Sizes | auto/1024/1536 | auto/1024/1536 | auto/1024–3840 (2K/4K) |
| Quality | auto/low/med/high | auto/low/med/high | auto/low/med/high |
| Transparent BG | ✓ | ✓ | ✓ |
| output_format | png/jpeg/webp | png/jpeg/webp | png/jpeg/webp |
| Compression | ✓ | ✓ | ✓ |
| Editing | ✓ | ✓ | ✓ |
| Mask editing | ✓ | ✓ | ✓ |
• URLs expire in 24h — download immediately or use response_format: "b64_json".
• Generation takes 10–120s depending on quality and model.
• background: "transparent" requires output_format set to png or webp.
• Mask file must match source image dimensions, <50MB, with alpha channel.
Wan 2.6 Text-to-Image
Alibaba's high-quality image generation model. Supports Chinese/English prompts, custom aspect ratios, batch generation (up to 4 images), negative prompts, and intelligent prompt enhancement. Average generation time ~7 seconds.
/v1/images/generations
Pricing
| Model | Price | Notes |
|---|---|---|
| wan2.6-t2i | $0.02/image | Up to 4 images per call. Fee = price × n |
Parameters
| Parameter | Values | Default | Notes |
|---|---|---|---|
| model * | wan2.6-t2i | — | Required |
| prompt * | Text | — | ≤2100 chars. Chinese/English. Required |
| size | 1280*1280, 1472*1104, 1104*1472, 1696*960, 960*1696 | 1280*1280 | Total pixels in [1280², 1440²], ratio [1:4, 4:1] |
| n | 1–4 | 4 | Number of images. Charged per image |
| negative_prompt | Text | — | ≤500 chars. Describe what to avoid |
| prompt_extend | true / false | true | AI auto-enriches prompt (+3–4s latency). Great for short prompts |
| watermark | true / false | false | Adds an "AI生成" (AI-generated) watermark to the bottom-right |
| seed | 0–2147483647 | Random | Same seed = reproducible results |
Recommended Sizes
| Ratio | Size | Use case |
|---|---|---|
| 1:1 | 1280*1280 | Square, avatars, icons |
| 4:3 | 1472*1104 | Landscape photos |
| 3:4 | 1104*1472 | Portrait photos |
| 16:9 | 1696*960 | Widescreen, banners |
| 9:16 | 960*1696 | Phone wallpaper, stories |
Python — Basic
response = client.images.generate(
model="wan2.6-t2i",
prompt="一只戴墨镜的橘猫在海边冲浪,超写实摄影",
size="1280*1280",
n=1,
)
print(response.data[0].url) # URL valid 24h
Python — All options
# Full parameters: negative prompt + seed + no auto-enrich + no watermark
response = client.images.generate(
model="wan2.6-t2i",
prompt="一只柴犬在草地上奔跑,摄影大片风格",
negative_prompt="模糊, 低质量, 变形, 多余的腿",
size="1472*1104", # 4:3 landscape
n=2,
seed=42,
prompt_extend=False, # use prompt as-is
watermark=False,
)
for i, img in enumerate(response.data):
print(f"Image {i+1}: {img.url}")
Python — Batch (n=4)
# Generate 4 images at once (default n=4)
response = client.images.generate(
model="wan2.6-t2i",
prompt="一朵盛开的红色玫瑰,微距摄影",
size="1280*1280",
n=4,
)
import urllib.request
for i, img in enumerate(response.data):
urllib.request.urlretrieve(img.url, f"output_{i+1}.png")
print(f"Saved output_{i+1}.png")
Node.js
const image = await client.images.generate({
model: "wan2.6-t2i",
prompt: "一只橘猫在海边,超写实",
negative_prompt: "模糊, 低质量",
size: "1280*1280",
n: 1,
seed: 42,
prompt_extend: true,
watermark: false,
});
console.log(image.data[0].url);
• URLs expire in 24h — download promptly.
• Average generation ~7 seconds.
• prompt_extend adds 3–4s but significantly improves results for short prompts.
• Use negative_prompt to avoid common artifacts, e.g. "模糊, 低质量, 变形, 多余手指" (blurry, low quality, deformed, extra fingers).
• Size uses the * separator (not x): 1280*1280 ✅ 1280x1280 ❌
Wan 2.7 / Doubao Image
| Model | Price | Notes |
|---|---|---|
| wan2.7-image | $0.027/call | Returns 4 images. Uses chat/completions |
| doubao-seedream-5-0-260128 | $0.04/call | Seedream 5.0. Uses images endpoint |
| doubao-seedream-4-5-251128 | $0.04/call | Seedream 4.5 |
| doubao-seedream-5-0-lite-260128 | $0.04/call | Seedream Lite |
Python — Wan 2.7
# Wan 2.7 uses chat/completions, returns image URLs in content
response = client.chat.completions.create(
model="wan2.7-image",
messages=[{"role": "user", "content": [{"type": "text", "text": "A cute shiba inu"}]}]
)
for choice in response.choices:
for item in choice.message.content:
if item.get("image"):
print(item["image"])
Python — Doubao
response = client.images.generate(
model="doubao-seedream-5-0-260128",
prompt="A cute shiba inu, photorealistic",
size="2K"
)
print(response.data[0].url)
Wan 2.6 Video
Async video generation. Submit a task, poll for results. Three models for different use cases.
/v1/video/generations — Submit task
/v1/videos/{task_id} — Poll status
Models & Pricing (per second)
| Model | Type | 720P/s | 1080P/s | Notes |
|---|---|---|---|---|
| wan2.6-t2v | Text-to-Video | $0.058 | $0.100 | No audio option |
| wan2.6-i2v | Image-to-Video | $0.058 | $0.100 | Always has audio |
| wan2.6-i2v-flash | I2V (fast) | $0.0145 / $0.029 | $0.024 / $0.048 | Silent / Audio pricing |
• wan2.6-i2v-flash, 720P, 5s, silent: $0.0145 × 5 = $0.0725
• wan2.6-i2v-flash, 720P, 5s, audio: $0.029 × 5 = $0.145
All Parameters
| Parameter | Values | Applies to | Notes |
|---|---|---|---|
| model * | wan2.6-t2v / wan2.6-i2v / wan2.6-i2v-flash | All | Required |
| prompt * | Text | All | ≤1500 chars |
| resolution | 720P / 1080P | All | Default: 720P (flash/t2v), 1080P (i2v) |
| duration | 2–15 | All | Default: 5 seconds |
| input_reference * | Image URL | i2v / i2v-flash | Single image as first frame. JPEG/PNG/BMP/WEBP, 240–8000px, ≤20MB |
| images | Array of URLs | i2v / i2v-flash | Alternative to input_reference. wan2.7+ supports 2 images for first + last frame |
| audio | true / false | i2v-flash | Default: false. Costs 2× silent |
| audio_url | Audio URL | i2v / i2v-flash | Custom audio. wav/mp3, 3–30s, ≤15MB |
| negative_prompt | Text | All | ≤500 chars |
| prompt_extend | true / false | All | Default: true. Auto-enrich |
| shot_type | single / multi | i2v / i2v-flash | Camera angles. multi requires prompt_extend=true |
| watermark | true / false | All | Default: false |
| seed | 0–2147483647 | All | Reproducibility |
Python — Text-to-Video (wan2.6-t2v)
import requests, time
BASE = "https://token.openclaw-token.shop"
HEADERS = {"Authorization": "Bearer sk-your-key", "Content-Type": "application/json"}
# 1. Submit
r = requests.post(f"{BASE}/v1/video/generations", headers=HEADERS, json={
"model": "wan2.6-t2v",
"prompt": "An orange cat running on a beach, slow motion",
"negative_prompt": "blurry, low quality",
"resolution": "720P",
"duration": 5,
"seed": 42,
})
task_id = r.json()["id"]
# 2. Poll (every 10s, may take 1~3 min)
while True:
data = requests.get(f"{BASE}/v1/videos/{task_id}", headers=HEADERS).json()
if data["status"] == "completed":
print(data["metadata"]["url"]) # URL valid 24h
break
elif data["status"] == "failed":
print("Error:", data.get("error", {}).get("message"))
break
time.sleep(10)
Python — Image-to-Video silent (wan2.6-i2v-flash)
# Silent (default, cheaper)
r = requests.post(f"{BASE}/v1/video/generations", headers=HEADERS, json={
"model": "wan2.6-i2v-flash",
"prompt": "The cat starts running",
"input_reference": "https://your-image-url.png",
"resolution": "720P",
"duration": 5,
})
# Poll same as above...
Python — Image-to-Video with audio (wan2.6-i2v-flash)
# AI-generated audio (2× silent price)
r = requests.post(f"{BASE}/v1/video/generations", headers=HEADERS, json={
"model": "wan2.6-i2v-flash",
"prompt": "A guitarist performing on stage",
"input_reference": img_url,
"resolution": "1080P",
"duration": 5,
"audio": True,
})
Python — Custom audio file
# Your own background audio (wav/mp3, 3~30s, ≤15MB)
r = requests.post(f"{BASE}/v1/video/generations", headers=HEADERS, json={
"model": "wan2.6-i2v-flash",
"prompt": "Musician performing",
"input_reference": img_url,
"resolution": "720P",
"duration": 5,
"audio": True,
"audio_url": "https://your-audio.mp3",
})
Python — Multi-shot high quality (wan2.6-i2v)
# Standard quality model, multi-camera, all options
r = requests.post(f"{BASE}/v1/video/generations", headers=HEADERS, json={
"model": "wan2.6-i2v",
"prompt": "Cat exploring a garden, multiple angles",
"negative_prompt": "blurry, shaky",
"input_reference": img_url,
"resolution": "1080P",
"duration": 10,
"shot_type": "multi",
"prompt_extend": True,
"watermark": False,
"seed": 42,
})
Node.js
const BASE = "https://token.openclaw-token.shop";
const headers = { "Authorization": "Bearer sk-your-key", "Content-Type": "application/json" };
// Submit
const res = await fetch(`${BASE}/v1/video/generations`, {
method: "POST", headers,
body: JSON.stringify({
model: "wan2.6-i2v-flash", prompt: "Cat running",
input_reference: "https://your-image.png",
resolution: "720P", duration: 5, audio: true,
}),
});
const { id: taskId } = await res.json();
// Poll
while (true) {
const d = await (await fetch(`${BASE}/v1/videos/${taskId}`, { headers })).json();
if (d.status === "completed") { console.log(d.metadata.url); break; }
if (d.status === "failed") { console.log(d.error); break; }
await new Promise(r => setTimeout(r, 10000));
}
• Typical time: 50–180 seconds.
• URLs expire in 24h.
• Image format for i2v: JPEG/PNG/BMP/WEBP, 240–8000px, ≤20MB.
Wan 2.2 / 2.5 Video
| Model | Price | Type |
|---|---|---|
| wan2.2-t2v-plus | $0.02/call | Text-to-Video |
| wan2.2-i2v-plus | $0.02/call | Image-to-Video |
| wan2.2-i2v-flash | $0.02/call | I2V (fast) |
| wan2.5-t2v-preview | $0.04/call | Text-to-Video v2.5 |
| wan2.5-i2v-preview | $0.04/call | I2V v2.5 |
Same submit+poll pattern as Wan 2.6. See code examples above.
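For example, a wan2.5-t2v-preview submission differs from the Wan 2.6 flow only in the model name. A sketch, assuming the same request body is accepted and optional parameters follow the Wan 2.6 table where supported:
import requests
BASE = "https://token.openclaw-token.shop"
HEADERS = {"Authorization": "Bearer sk-your-key", "Content-Type": "application/json"}
r = requests.post(f"{BASE}/v1/video/generations", headers=HEADERS, json={
    "model": "wan2.5-t2v-preview",
    "prompt": "A paper boat drifting down a rainy street",
})
task_id = r.json()["id"]
# then poll /v1/videos/{task_id} exactly as in the Wan 2.6 examples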
Claude Code CLI
export ANTHROPIC_BASE_URL="https://token.openclaw-token.shop"
export ANTHROPIC_API_KEY="sk-your-key"
claude # defaults to claude-sonnet-4-6
claude --model claude-opus-4-7 # specify model
Add these exports to ~/.bashrc or ~/.zshrc for permanent setup.
SDK Setup
OpenAI SDK
pip install openai
from openai import OpenAI
client = OpenAI(api_key="sk-your-key", base_url="https://token.openclaw-token.shop/v1")
npm install openai
import OpenAI from "openai";
const client = new OpenAI({ apiKey: "sk-your-key", baseURL: "https://token.openclaw-token.shop/v1" });
Anthropic SDK
pip install anthropic
from anthropic import Anthropic
client = Anthropic(api_key="sk-your-key", base_url="https://token.openclaw-token.shop")
Demo Scripts
Complete runnable Python scripts that test all parameters.
| Script | Tests | Coverage |
|---|---|---|
| demo_gpt_image.py | GPT Image 1/1.5: quality, sizes, n, transparent, formats, compression, moderation, b64, edit, mask | 11 test cases |
| demo_gpt_image_2.py | GPT Image 2: generate, edit, mask, b64, quality, sizes, advanced options | 7 test cases |
| demo_wan26.py | T2I, T2V, I2V silent/audio/custom-audio, multi-shot, all params | 11 test cases |
| demo_gemini_image.py | Gemini Flash Image: 512/1K/2K/4K, all ratios, styles, Chinese, temperature, multi-turn, edit | 11 test groups |
| demo_wan_qwen_image.py | Wan 2.7 / Qwen image generation + editing | — |
# Run GPT Image 1/1.5 full test (11 cases)
NEWAPI_BASE_URL="https://token.openclaw-token.shop/v1" \
NEWAPI_API_KEY="sk-your-key" \
python3 bin/demo_gpt_image.py
# Specify model
python3 bin/demo_gpt_image.py --model gpt-image-1-vvip
# Run GPT Image 2 full test
NEWAPI_BASE_URL="https://token.openclaw-token.shop/v1" \
NEWAPI_API_KEY="sk-your-key" \
python3 bin/demo_gpt_image_2.py
# Run Wan 2.6 full test (all models)
NEWAPI_BASE_URL="https://token.openclaw-token.shop/v1" \
NEWAPI_API_KEY="sk-your-key" \
python3 bin/demo_wan26.py
# Wan 2.6 - only text-to-image
python3 bin/demo_wan26.py --only-t2i
# Wan 2.6 - only video
python3 bin/demo_wan26.py --skip-t2i
Compatibility
Pricing (Live)
Real-time pricing from GET /api/pricing. Prices update automatically.
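A minimal fetch sketch; the response schema is not documented here, and whether the endpoint requires an API key is an assumption:
import requests
r = requests.get(
    "https://token.openclaw-token.shop/api/pricing",
    headers={"Authorization": "Bearer sk-your-key"},  # may not be required
)
print(r.json())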
FAQ
How to get an API key?
Register and log in, then go to the Tokens page and create a new token.
What's the balance unit?
USD ($). 500,000 quota = $1.
Are cache tokens priced differently?
Yes. Cache read tokens use the cache ratio shown in pricing (typically 10% of the input price), and cache creation costs input × 1.25. For example, claude-sonnet-4-6 at $0.52/1M input gives $0.052/1M cache reads and $0.65/1M cache creation.
Do GPT models only support Responses API?
No. GPT models support both /v1/chat/completions (auto-converted) and /v1/responses (native). Standard OpenAI SDK works out of the box.
How long are generated image/video URLs valid?
24 hours. Download promptly after generation.
Video generation timeout?
Video is async (submit + poll). Submit returns instantly. Typical generation: 50–180 seconds. No timeout on polling.
Which platforms are compatible?
Claude Code CLI, OpenAI SDK, Anthropic SDK, ChatGPT Next Web, LobeChat, Cursor, Windsurf, and any OpenAI-compatible client.