API Documentation

OpenClaw Token — Global AI model relay. Connect to Claude, GPT, and Gemini without a VPN.

Quick Start

1. Base URL

BASE https://token.openclaw-token.shop

2. Authentication

All requests require an API key. Create one in the Tokens page after login.

Header
Authorization: Bearer sk-your-api-key

3. First Request

cURL
curl https://token.openclaw-token.shop/v1/chat/completions \
  -H "Authorization: Bearer sk-your-key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "claude-sonnet-4-6",
    "max_tokens": 1024,
    "messages": [{"role": "user", "content": "Hello!"}]
  }'

4. SDK Setup (2 lines)

Python
from openai import OpenAI

client = OpenAI(api_key="sk-your-key", base_url="https://token.openclaw-token.shop/v1")
Node.js
import OpenAI from "openai";
const client = new OpenAI({ apiKey: "sk-your-key", baseURL: "https://token.openclaw-token.shop/v1" });

Claude

Anthropic Claude models. Supports both OpenAI-compatible and native Anthropic API format.

Model | Input $/1M | Output $/1M | Cache $/1M | Best For
claude-opus-4-7 | $0.90 | $4.50 | $0.09 | Latest flagship
claude-opus-4-6 | $0.90 | $4.50 | $0.09 | Complex reasoning
claude-sonnet-4-6 (Recommended) | $0.52 | $2.60 | $0.052 | Best value, Claude Code default
claude-haiku-4-5-20251001 | $0.18 | $0.90 | $0.018 | Fastest, lightest
Cache pricing: Cache tokens are billed at the cache ratio shown above, with no hidden surcharges. Cache creation = input × 1.25.
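
For example, with claude-sonnet-4-6 a cache read costs $0.052 per 1M tokens, and cache creation (a cache write) costs $0.52 × 1.25 = $0.65 per 1M tokens.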

Python — OpenAI SDK

Python
response = client.chat.completions.create(
    model="claude-sonnet-4-6",
    max_tokens=1024,
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Write a Python quicksort"}
    ]
)
print(response.choices[0].message.content)

Python — Anthropic SDK

Python
from anthropic import Anthropic

anthropic_client = Anthropic(api_key="sk-your-key", base_url="https://token.openclaw-token.shop")
message = anthropic_client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Write a Python quicksort"}]
)
print(message.content[0].text)

Node.js

JavaScript
const res = await client.chat.completions.create({
  model: "claude-sonnet-4-6",
  max_tokens: 1024,
  messages: [{ role: "user", content: "Write a JS quicksort" }],
});
console.log(res.choices[0].message.content);

GPT

OpenAI GPT-5 series. Supports both /v1/chat/completions (auto-converted) and /v1/responses (native).

Model | Description
gpt5.4-plus (Flagship) | GPT-5.4 — strongest general capability
gpt-5.4-mini-plus | GPT-5.4 lightweight
gpt-5.3-codex-plus | Codex — optimized for code
gpt-5.2-plus | GPT-5.2

Python

Python
response = client.chat.completions.create(
    model="gpt5.4-plus",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Explain quantum computing"}]
)
print(response.choices[0].message.content)

Node.js

JavaScript
const res = await client.chat.completions.create({
  model: "gpt5.4-plus",
  max_tokens: 1024,
  messages: [{ role: "user", content: "Explain quantum computing" }],
});
console.log(res.choices[0].message.content);
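
Python — Responses API

The native /v1/responses endpoint can also be called through the OpenAI SDK's Responses API. A minimal sketch, assuming the relay accepts the standard request shape:

Python
# Responses API uses input / max_output_tokens instead of messages / max_tokens
response = client.responses.create(
    model="gpt5.4-plus",
    input="Explain quantum computing",
    max_output_tokens=1024,
)
print(response.output_text)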

Gemini

Google Gemini text models via OpenAI-compatible format.

Model | Input $/1M | Output $/1M | Best For
gemini-3-flash-preview | $0.40 | $2.40 | Fast text generation
gemini-3.1-flash-lite-preview | $0.20 | $1.20 | Lightest reasoning
gemini-3-pro-preview | $1.60 | $9.60 | High-quality reasoning
gemini-3.1-pro-preview | $1.60 | $9.60 | Latest Pro

Python

Python
response = client.chat.completions.create(
    model="gemini-3-flash-preview",
    messages=[{"role": "user", "content": "Hello, Gemini!"}]
)
print(response.choices[0].message.content)
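
Python — Streaming

Streaming follows the standard OpenAI SDK pattern. A minimal sketch, assuming the relay forwards stream=True for Gemini models:

Python
stream = client.chat.completions.create(
    model="gemini-3-flash-preview",
    stream=True,
    messages=[{"role": "user", "content": "Write a haiku about the ocean"}]
)
for chunk in stream:
    # Each chunk carries an incremental delta; skip empty keep-alive chunks
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)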

Gemini Flash Image

Google Gemini 3.1 Flash Image — conversational image generation and editing. Generate, edit, and iterate on images through multi-turn chat.

POST /v1/chat/completions

Models & Pricing

Model | Pricing | Notes
gemini-3.1-flash-image | $0.05 per call | Fast image gen, ~2520 tokens per image
gemini-3.1-flash-image-preview | Input $0.25/1M, output $40.00/1M | Per-token billing
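
As a rough estimate at the per-token rate: assuming ~2,520 output tokens per image also holds for gemini-3.1-flash-image-preview, one image costs about 2,520 / 1,000,000 × $40 ≈ $0.10, plus input tokens for the prompt.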

Parameters

Parameter | Values | Notes
model * | gemini-3.1-flash-image | Required
messages * | Array | Chat messages. Supports system/user/assistant roles
max_tokens | Integer | Recommended: 2000+ for image output
temperature | 0–2 | Controls creativity
top_p | 0–1 | Nucleus sampling
top_k | Integer | Top-k sampling
stream | true / false | Streaming supported (see the streaming sketch below)

Resolution & Aspect Ratio (via extra_body)

Control output resolution and aspect ratio via extra_body.google.image_config:

Parameter | Values | Notes
image_size | "512", "1K", "2K", "4K" | Default: 1K. Must use uppercase K
aspect_ratio | "1:1", "3:4", "4:3", "9:16", "16:9", "3:2", "2:3", "4:5", "5:4", "1:4", "4:1", "1:8", "8:1", "21:9" | Default: model decides

Resolution Reference

image_size | 1:1 | 16:9 | 9:16 | 4:3 | 3:4
512 | 512×512 | 704×384 | 384×704 | 576×448 | 448×576
1K | 1024×1024 | 1408×768 | 768×1408 | 1152×896 | 896×1152
2K | 2048×2048 | 2816×1536 | 1536×2816 | 2304×1792 | 1792×2304
4K | 4096×4096 | 5632×3072 | 3072×5632 | 4608×3584 | 3584×4608

Image Input (for editing)

Format | Notes
Base64 data URL | data:image/png;base64,... in image_url content part
Supported formats | PNG, JPEG, WebP, HEIC
Multiple images | Up to 14 reference images per request

Response

Images are automatically uploaded to CDN and returned as URLs in markdown format:

Response content
![image](https://i.ibb.co/xxxxx/image.jpg)
Image URLs: Generated image URLs expire in 1 hour. Download or display them immediately. If the CDN upload fails, the response falls back to base64 inline data.

Python — Text-to-Image

Python
import re
from openai import OpenAI

client = OpenAI(api_key="sk-your-key", base_url="https://token.openclaw-token.shop/v1")

response = client.chat.completions.create(
    model="gemini-3.1-flash-image",
    max_tokens=2000,
    messages=[{"role": "user", "content": "Draw a cute orange cat at sunset"}]
)

content = response.choices[0].message.content
# Extract image URL from markdown
url = re.search(r'!\[image\]\((\S+)\)', content)
if url:
    print(f"Image URL: {url.group(1)}")

Python — 2K / 4K Resolution

Python
# 2K resolution with 16:9 aspect ratio
response = client.chat.completions.create(
    model="gemini-3.1-flash-image",
    max_tokens=2000,
    messages=[{"role": "user", "content": "A mountain landscape at sunset"}],
    extra_body={"google": {"image_config": {
        "image_size": "2K",
        "aspect_ratio": "16:9"
    }}}
)

# 4K resolution (takes 60-200s, large images)
response = client.chat.completions.create(
    model="gemini-3.1-flash-image",
    max_tokens=2000,
    messages=[{"role": "user", "content": "A detailed city skyline"}],
    extra_body={"google": {"image_config": {
        "image_size": "4K",
        "aspect_ratio": "16:9"
    }}}
)

Python — Image Editing (pass image + instruction)

Python
import base64

# Read local image as base64
with open("input.png", "rb") as f:
    img_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="gemini-3.1-flash-image",
    max_tokens=2000,
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Change the background to a beach scene"},
            {"type": "image_url", "image_url": {
                "url": f"data:image/png;base64,{img_b64}"
            }}
        ]
    }]
)
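
Python — Multiple reference images

Up to 14 reference images can be passed in a single request by adding more image_url parts to the same message. A minimal sketch, assuming two hypothetical local files product.png and background.png:

Python
import base64

def to_data_url(path):
    # Encode a local PNG as a base64 data URL
    with open(path, "rb") as f:
        return "data:image/png;base64," + base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="gemini-3.1-flash-image",
    max_tokens=2000,
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Place the product from the first image into the scene from the second image"},
            {"type": "image_url", "image_url": {"url": to_data_url("product.png")}},
            {"type": "image_url", "image_url": {"url": to_data_url("background.png")}}
        ]
    }]
)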

Python — Multi-turn Editing (iterative refinement)

Python
# Turn 1: Generate initial image
messages = [{"role": "user", "content": "Draw a simple red house"}]
r1 = client.chat.completions.create(
    model="gemini-3.1-flash-image", max_tokens=2000, messages=messages
)

# Turn 2: Edit the result
messages.append({"role": "assistant", "content": r1.choices[0].message.content})
messages.append({"role": "user", "content": "Add a blue sky and green grass"})
r2 = client.chat.completions.create(
    model="gemini-3.1-flash-image", max_tokens=2000, messages=messages
)

# Turn 3: Further refinement
messages.append({"role": "assistant", "content": r2.choices[0].message.content})
messages.append({"role": "user", "content": "Add a sun and some clouds"})
r3 = client.chat.completions.create(
    model="gemini-3.1-flash-image", max_tokens=2000, messages=messages
)

Python — Style Control with System Prompt

Python
response = client.chat.completions.create(
    model="gemini-3.1-flash-image",
    max_tokens=2000,
    temperature=1.5,     # higher = more creative
    top_p=0.9,
    messages=[
        {"role": "system", "content": "You are a cartoon artist. Always draw in cute kawaii style."},
        {"role": "user", "content": "Draw a dog playing in a park"}
    ]
)
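
Python — Streaming image generation

The stream parameter listed in the table above also applies here; collecting all deltas before extracting the markdown image link is the safe approach. A minimal sketch:

Python
import re

stream = client.chat.completions.create(
    model="gemini-3.1-flash-image",
    max_tokens=2000,
    stream=True,
    messages=[{"role": "user", "content": "Draw a lighthouse on a cliff"}]
)
parts = []
for chunk in stream:
    # Accumulate incremental content deltas
    if chunk.choices and chunk.choices[0].delta.content:
        parts.append(chunk.choices[0].delta.content)
content = "".join(parts)
url = re.search(r'!\[image\]\((\S+)\)', content)
if url:
    print("Image URL:", url.group(1))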

Node.js

JavaScript
// Generate
const res = await client.chat.completions.create({
  model: "gemini-3.1-flash-image",
  max_tokens: 2000,
  messages: [{ role: "user", content: "Draw a sunset landscape" }],
});
const content = res.choices[0].message.content;
const urlMatch = content.match(/!\[image\]\((\S+)\)/);
if (urlMatch) console.log("Image:", urlMatch[1]);

// Edit with image input
import fs from "fs";
const imgB64 = fs.readFileSync("input.png").toString("base64");
const edited = await client.chat.completions.create({
  model: "gemini-3.1-flash-image",
  max_tokens: 2000,
  messages: [{
    role: "user",
    content: [
      { type: "text", text: "Make this image more colorful" },
      { type: "image_url", image_url: { url: `data:image/png;base64,${imgB64}` } },
    ],
  }],
});
Tips
• Images are returned as CDN URLs (expire in 1h). Falls back to base64 if upload fails.
• Supports multi-turn conversation for iterative image editing.
• System prompt controls art style (cartoon, photorealistic, etc.).
• Use Chinese or English prompts.
• Text + image input is supported via the image_url content type.
• 1K generation takes 10–30s. 2K takes 60–180s. 4K takes 60–300s.
• image_size must use an uppercase K ("2K" ✓, "2k" ✗).
• aspect_ratio controls the shape. Without it, the model decides automatically.

GPT Image

OpenAI's image generation and editing models. They support text-to-image generation, image editing, mask-based inpainting, transparent backgrounds, and multiple output formats.

POST /v1/images/generations — Generate images
POST /v1/images/edits — Edit images (multipart/form-data)

Models & Pricing

Model | Pricing | Notes
GPT Image 1.5 (latest, 4x faster, 20% cheaper)
gpt-image-1.5-vvip | Token-based: input $5/M, output $32/M | Fastest, recommended
GPT Image 1
gpt-image-1-vvip | Token-based: input $5/M, output $40/M | Higher quality, slower
GPT Image 2 (legacy)
gpt-image-2 | $0.01/call | Standard queue
gpt-image-2-vip | $0.02/call | Priority queue

Output Token Counts (gpt-image-1 & 1.5)

Billing is based on tokens. Larger sizes and higher quality = more tokens = higher cost.

Quality | 1024×1024 | 1024×1536 | 1536×1024
low | 272 | 408 | 400
medium | 1,056 | 1,584 | 1,568
high | 4,160 | 6,240 | 6,208
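
For a rough sense of cost with gpt-image-1.5-vvip output at $32/M: a high-quality 1024×1024 image is about 4,160 tokens × $32/M ≈ $0.13, while a low-quality one is about 272 tokens × $32/M ≈ $0.009, plus input tokens for the prompt.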

Generation Parameters

Parameter | Values | Notes
model * | gpt-image-1.5-vvip / gpt-image-1-vvip / gpt-image-2 | Required
prompt * | Text (max 32,000 chars) | Required. Image description
size | auto, 1024x1024, 1536x1024, 1024x1536, 2048x2048, 2048x1152, 3840x2160 | Default: auto. gpt-image-2 supports up to 4K (3840px, 16px aligned, ratio ≤3:1)
quality | auto / low / medium / high | Default: auto. Low=fastest, high=most detail
n | 1–10 | Number of images. gpt-image-2 only supports n=1
background | auto / transparent / opaque | Default: auto. Transparent requires png/webp output
output_format | png / jpeg / webp | Default: png. GPT Image 1/1.5 only
output_compression | 0–100 | JPEG/WebP compression. 0=best quality, 100=smallest
moderation | auto / low | Content filter strictness. Low=less restrictive
response_format | url / b64_json | Default: url (DALL-E compat). b64_json returns base64

Edit Parameters (multipart/form-data)

Parameter | Type | Notes
model * | string | Required
prompt * | string | Description of the edit
image * | file | Source image(s). Multiple images as reference supported
mask | file | PNG with alpha channel marking edit region. Must match image size, <50MB
size | string | Same options as generation
quality | string | auto / low / medium / high
response_format | string | url / b64_json

Python — Generate (gpt-image-1.5)

Python
from openai import OpenAI

client = OpenAI(api_key="sk-your-key", base_url="https://token.openclaw-token.shop/v1")

response = client.images.generate(
    model="gpt-image-1.5-vvip",
    prompt="A cute orange cat wearing sunglasses at the beach",
    size="1024x1024",
    quality="high",
    n=1
)
print(response.data[0].url)

Python — All options (transparent + compression + base64)

Python
response = client.images.generate(
    model="gpt-image-1.5-vvip",
    prompt="A product photo of a sneaker, isolated, studio lighting",
    size="1024x1024",
    quality="high",
    background="transparent",
    moderation="low",
    output_format="png",
    output_compression=30,
    response_format="b64_json",
    n=1
)

import base64
image_data = base64.b64decode(response.data[0].b64_json)
with open("output.png", "wb") as f:
    f.write(image_data)

Python — Multiple images (n=4)

Python
# gpt-image-1/1.5 supports n=1~10, gpt-image-2 only n=1
response = client.images.generate(
    model="gpt-image-1.5-vvip",
    prompt="A colorful butterfly",
    size="1024x1024",
    quality="medium",
    n=4
)
for i, img in enumerate(response.data):
    print(f"Image {i+1}: {img.url}")

Python — Edit image

Python
# Basic edit
response = client.images.edit(
    model="gpt-image-1.5-vvip",
    image=open("input.png", "rb"),
    prompt="Add red sunglasses to the cat",
    n=1
)
print(response.data[0].url)

# Edit with mask (region-specific inpainting)
response = client.images.edit(
    model="gpt-image-1.5-vvip",
    image=open("input.png", "rb"),
    mask=open("mask.png", "rb"),  # alpha channel marks edit area
    prompt="Place a golden crown here",
    n=1
)

Node.js

JavaScript
// Generate
const image = await client.images.generate({
  model: "gpt-image-1.5-vvip", prompt: "A cute cat",
  size: "1024x1024", quality: "high", n: 1,
});
console.log(image.data[0].url);

// Edit
import fs from "fs";
const edited = await client.images.edit({
  model: "gpt-image-1.5-vvip",
  image: fs.createReadStream("input.png"),
  prompt: "Add sunglasses", n: 1,
});

Model Comparison

Feature | gpt-image-1.5 | gpt-image-1 | gpt-image-2
Speed | 4x faster | Standard | Standard
Pricing | Token-based | Token-based | Fixed per-call
n (batch) | 1–10 | 1–10 | 1 only
Sizes | auto/1024/1536 | auto/1024/1536 | auto/1024–3840 (2K/4K)
Quality | auto/low/med/high | auto/low/med/high | auto/low/med/high
Transparent BG | ✓ | ✓ | ✓
output_format | png/jpeg/webp | png/jpeg/webp | png/jpeg/webp
Compression | ✓ | ✓ | ✓
Editing | ✓ | ✓ | ✓
Mask editing | ✓ | ✓ | ✓
Important
• URLs expire in 24h — download immediately or use response_format: "b64_json".
• Generation takes 10–120s depending on quality and model.
• background: "transparent" requires output_format set to png or webp.
• Mask file must match source image dimensions, <50MB, with alpha channel.

Wan 2.6 Text-to-Image

Alibaba's high-quality image generation model. Supports Chinese/English prompts, custom aspect ratios, batch generation (up to 4 images), negative prompts, and intelligent prompt enhancement. Average generation time ~7 seconds.

POST /v1/images/generations

Pricing

Model | Price | Notes
wan2.6-t2i | $0.02/image | Up to 4 images per call. Fee = price × n

Parameters

Parameter | Values | Default | Notes
model * | wan2.6-t2i | | Required
prompt * | Text | | ≤2100 chars. Chinese/English. Required
size | 1280*1280, 1472*1104, 1104*1472, 1696*960, 960*1696 | 1280*1280 | Total pixels in [1280², 1440²], ratio [1:4, 4:1]
n | 1–4 | 4 | Number of images. Charged per image
negative_prompt | Text | | ≤500 chars. Describe what to avoid
prompt_extend | true / false | true | AI auto-enriches prompt (+3–4s latency). Great for short prompts
watermark | true / false | false | Adds an "AI生成" ("AI-generated") watermark to the bottom-right
seed | 0–2147483647 | Random | Same seed = reproducible results

Recommended Sizes

Ratio | Size | Use case
1:1 | 1280*1280 | Square, avatars, icons
4:3 | 1472*1104 | Landscape photos
3:4 | 1104*1472 | Portrait photos
16:9 | 1696*960 | Widescreen, banners
9:16 | 960*1696 | Phone wallpaper, stories

Python — Basic

Python
response = client.images.generate(
    model="wan2.6-t2i",
    prompt="一只戴墨镜的橘猫在海边冲浪,超写实摄影",
    size="1280*1280",
    n=1,
)
print(response.data[0].url)  # URL valid 24h

Python — All options

Python
# Full parameters: negative prompt + seed + no auto-enrich + no watermark.
# The official OpenAI SDK only accepts its own keyword arguments, so the
# Wan-specific fields are passed through extra_body.
response = client.images.generate(
    model="wan2.6-t2i",
    prompt="一只柴犬在草地上奔跑,摄影大片风格",  # "A shiba inu running across a meadow, cinematic photography style"
    size="1472*1104",   # 4:3 landscape
    n=2,
    extra_body={
        "negative_prompt": "模糊, 低质量, 变形, 多余的腿",  # "blurry, low quality, deformed, extra legs"
        "seed": 42,
        "prompt_extend": False,  # use the prompt as-is
        "watermark": False,
    },
)
for i, img in enumerate(response.data):
    print(f"Image {i+1}: {img.url}")

Python — Batch (n=4)

Python
# Generate 4 images at once (default n=4)
response = client.images.generate(
    model="wan2.6-t2i",
    prompt="一朵盛开的红色玫瑰,微距摄影",
    size="1280*1280",
    n=4,
)
import urllib.request
for i, img in enumerate(response.data):
    urllib.request.urlretrieve(img.url, f"output_{i+1}.png")
    print(f"Saved output_{i+1}.png")

Node.js

JavaScript
const image = await client.images.generate({
  model: "wan2.6-t2i",
  prompt: "一只橘猫在海边,超写实",
  negative_prompt: "模糊, 低质量",
  size: "1280*1280",
  n: 1,
  seed: 42,
  prompt_extend: true,
  watermark: false,
});
console.log(image.data[0].url);
Tips
• URLs expire in 24h — download promptly.
• Average generation ~7 seconds.
• prompt_extend adds 3–4s but significantly improves results for short prompts.
• Use negative_prompt to avoid common artifacts: "模糊, 低质量, 变形, 多余手指" ("blurry, low quality, deformed, extra fingers").
• Size uses the * separator (not x): 1280*1280 ✓, 1280x1280 ✗.

Wan 2.7 / Doubao Image

Model | Price | Notes
wan2.7-image | $0.027/call | Returns 4 images. Uses chat/completions
doubao-seedream-5-0-260128 | $0.04/call | Seedream 5.0. Uses images endpoint
doubao-seedream-4-5-251128 | $0.04/call | Seedream 4.5
doubao-seedream-5-0-lite-260128 | $0.04/call | Seedream Lite

Python — Wan 2.7

Python
# Wan 2.7 uses chat/completions, returns image URLs in content
response = client.chat.completions.create(
    model="wan2.7-image",
    messages=[{"role": "user", "content": [{"type": "text", "text": "A cute shiba inu"}]}]
)
for choice in response.choices:
    for item in choice.message.content:
        if item.get("image"):
            print(item["image"])

Python — Doubao

Python
response = client.images.generate(
    model="doubao-seedream-5-0-260128",
    prompt="A cute shiba inu, photorealistic",
    size="2K"
)
print(response.data[0].url)

Wan 2.6 Video

Async video generation. Submit a task, poll for results. Three models for different use cases.

POST /v1/video/generations — Submit task
GET /v1/videos/{task_id} — Poll status

Models & Pricing (per second)

Model | Type | 720P/s | 1080P/s | Notes
wan2.6-t2v | Text-to-Video | $0.058 | $0.100 | No audio option
wan2.6-i2v | Image-to-Video | $0.058 | $0.100 | Always has audio
wan2.6-i2v-flash | I2V (fast) | $0.0145 / $0.029 | $0.024 / $0.048 | Silent / audio pricing
Pricing examples
• wan2.6-t2v, 720P, 5s: $0.058 × 5 = $0.29
• wan2.6-i2v-flash, 720P, 5s, silent: $0.0145 × 5 = $0.0725
• wan2.6-i2v-flash, 720P, 5s, audio: $0.029 × 5 = $0.145

All Parameters

Parameter | Values | Applies to | Notes
model * | wan2.6-t2v / wan2.6-i2v / wan2.6-i2v-flash | All | Required
prompt * | Text | All | ≤1500 chars
resolution | 720P / 1080P | All | Default: 720P (flash/t2v), 1080P (i2v)
duration | 2–15 | All | Default: 5 seconds
input_reference * | Image URL | i2v / i2v-flash | Single image as first frame. JPEG/PNG/BMP/WEBP, 240–8000px, ≤20MB
images | Array of URLs | i2v / i2v-flash | Alternative to input_reference. wan2.7+ supports 2 images for first+last frame
audio | true / false | i2v-flash | Default: false. Costs 2× silent
audio_url | Audio URL | i2v / i2v-flash | Custom audio. wav/mp3, 3–30s, ≤15MB
negative_prompt | Text | All | ≤500 chars
prompt_extend | true / false | All | Default: true. Auto-enrich
shot_type | single / multi | i2v / i2v-flash | Camera angles. multi requires prompt_extend=true
watermark | true / false | All | Default: false
seed | 0–2147483647 | All | Reproducibility

Python — Text-to-Video (wan2.6-t2v)

Python
import requests, time

BASE = "https://token.openclaw-token.shop"
HEADERS = {"Authorization": "Bearer sk-your-key", "Content-Type": "application/json"}

# 1. Submit
r = requests.post(f"{BASE}/v1/video/generations", headers=HEADERS, json={
    "model": "wan2.6-t2v",
    "prompt": "An orange cat running on a beach, slow motion",
    "negative_prompt": "blurry, low quality",
    "resolution": "720P",
    "duration": 5,
    "seed": 42,
})
task_id = r.json()["id"]

# 2. Poll (every 10s, may take 1~3 min)
while True:
    data = requests.get(f"{BASE}/v1/videos/{task_id}", headers=HEADERS).json()
    if data["status"] == "completed":
        print(data["metadata"]["url"])  # URL valid 24h
        break
    elif data["status"] == "failed":
        print("Error:", data.get("error", {}).get("message"))
        break
    time.sleep(10)

Python — Image-to-Video silent (wan2.6-i2v-flash)

Python
# Silent (default, cheaper)
r = requests.post(f"{BASE}/v1/video/generations", headers=HEADERS, json={
    "model": "wan2.6-i2v-flash",
    "prompt": "The cat starts running",
    "input_reference": "https://your-image-url.png",
    "resolution": "720P",
    "duration": 5,
})
# Poll same as above...

Python — Image-to-Video with audio (wan2.6-i2v-flash)

Python
# AI-generated audio (2× silent price)
r = requests.post(f"{BASE}/v1/video/generations", headers=HEADERS, json={
    "model": "wan2.6-i2v-flash",
    "prompt": "A guitarist performing on stage",
    "input_reference": img_url,
    "resolution": "1080P",
    "duration": 5,
    "audio": True,
})

Python — Custom audio file

Python
# Your own background audio (wav/mp3, 3~30s, ≤15MB)
r = requests.post(f"{BASE}/v1/video/generations", headers=HEADERS, json={
    "model": "wan2.6-i2v-flash",
    "prompt": "Musician performing",
    "input_reference": img_url,
    "resolution": "720P",
    "duration": 5,
    "audio": True,
    "audio_url": "https://your-audio.mp3",
})

Python — Multi-shot high quality (wan2.6-i2v)

Python
# Standard quality model, multi-camera, all options
r = requests.post(f"{BASE}/v1/video/generations", headers=HEADERS, json={
    "model": "wan2.6-i2v",
    "prompt": "Cat exploring a garden, multiple angles",
    "negative_prompt": "blurry, shaky",
    "input_reference": img_url,
    "resolution": "1080P",
    "duration": 10,
    "shot_type": "multi",
    "prompt_extend": True,
    "watermark": False,
    "seed": 42,
})

Node.js

JavaScript
const BASE = "https://token.openclaw-token.shop";
const headers = { "Authorization": "Bearer sk-your-key", "Content-Type": "application/json" };

// Submit
const res = await fetch(`${BASE}/v1/video/generations`, {
  method: "POST", headers,
  body: JSON.stringify({
    model: "wan2.6-i2v-flash", prompt: "Cat running",
    input_reference: "https://your-image.png",
    resolution: "720P", duration: 5, audio: true,
  }),
});
const { id: taskId } = await res.json();

// Poll
while (true) {
  const d = await (await fetch(`${BASE}/v1/videos/${taskId}`, { headers })).json();
  if (d.status === "completed") { console.log(d.metadata.url); break; }
  if (d.status === "failed") { console.log(d.error); break; }
  await new Promise(r => setTimeout(r, 10000));
}
Important
• Video generation is async: submit returns a task ID, poll every 10s.
• Typical time: 50–180 seconds.
• URLs expire in 24h.
• Image format for i2v: JPEG/PNG/BMP/WEBP, 240–8000px, ≤20MB.

Wan 2.2 / 2.5 Video

Model | Price | Type
wan2.2-t2v-plus | $0.02/call | Text-to-Video
wan2.2-i2v-plus | $0.02/call | Image-to-Video
wan2.2-i2v-flash | $0.02/call | I2V (fast)
wan2.5-t2v-preview | $0.04/call | Text-to-Video v2.5
wan2.5-i2v-preview | $0.04/call | I2V v2.5

Same submit+poll pattern as Wan 2.6. See code examples above.
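
For completeness, a minimal submit sketch with wan2.2-t2v-plus, reusing BASE and HEADERS from the Wan 2.6 examples (assuming these models accept the same request fields):

Python
# Same endpoint as Wan 2.6; only the model name changes (assumed request shape)
r = requests.post(f"{BASE}/v1/video/generations", headers=HEADERS, json={
    "model": "wan2.2-t2v-plus",
    "prompt": "A paper boat drifting down a rainy street",
})
task_id = r.json()["id"]
# ...then poll GET /v1/videos/{task_id} exactly as in the Wan 2.6 example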

Claude Code CLI

Shell
export ANTHROPIC_BASE_URL="https://token.openclaw-token.shop"
export ANTHROPIC_API_KEY="sk-your-key"

claude                          # defaults to claude-sonnet-4-6
claude --model claude-opus-4-7  # specify model
Persist: Add the exports to ~/.bashrc or ~/.zshrc for a permanent setup.

SDK Setup

OpenAI SDK

Python
pip install openai
from openai import OpenAI
client = OpenAI(api_key="sk-your-key", base_url="https://token.openclaw-token.shop/v1")
Node.js
npm install openai
import OpenAI from "openai";
const client = new OpenAI({ apiKey: "sk-your-key", baseURL: "https://token.openclaw-token.shop/v1" });

Anthropic SDK

Python
pip install anthropic
from anthropic import Anthropic
client = Anthropic(api_key="sk-your-key", base_url="https://token.openclaw-token.shop")

Demo Scripts

Complete, runnable Python scripts that test all parameters.

Script | Tests | Coverage
demo_gpt_image.py | GPT Image 1/1.5: quality, sizes, n, transparent, formats, compression, moderation, b64, edit, mask | 11 test cases
demo_gpt_image_2.py | GPT Image 2: generate, edit, mask, b64, quality, sizes, advanced options | 7 test cases
demo_wan26.py | T2I, T2V, I2V silent/audio/custom-audio, multi-shot, all params | 11 test cases
demo_gemini_image.py | Gemini Flash Image: 512/1K/2K/4K, all ratios, styles, Chinese, temperature, multi-turn, edit | 11 test groups
demo_wan_qwen_image.py | Wan 2.7 / Qwen image generation + editing
Shell
# Run GPT Image 1/1.5 full test (11 cases)
NEWAPI_BASE_URL="https://your-api-gateway.example.com/v1" \
NEWAPI_API_KEY="sk-your-key" \
python3 bin/demo_gpt_image.py

# Specify model
python3 bin/demo_gpt_image.py --model gpt-image-1-vvip

# Run GPT Image 2 full test
NEWAPI_BASE_URL="https://token.openclaw-token.shop/v1" \
NEWAPI_API_KEY="sk-your-key" \
python3 bin/demo_gpt_image_2.py

# Run Wan 2.6 full test (all models)
NEWAPI_BASE_URL="https://token.openclaw-token.shop/v1" \
NEWAPI_API_KEY="sk-your-key" \
python3 bin/demo_wan26.py

# Wan 2.6 - only text-to-image
python3 bin/demo_wan26.py --only-t2i

# Wan 2.6 - only video
python3 bin/demo_wan26.py --skip-t2i

Pricing (Live)

Real-time pricing from GET /api/pricing. Prices update automatically.


FAQ

How to get an API key?

Register and log in, then go to Tokens page and create a new token.

What's the balance unit?

USD ($). 500,000 quota = $1.
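
For example, a displayed quota of 2,500,000 corresponds to $5 of balance.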

Are cache tokens priced differently?

Yes. Cache read tokens use the cache ratio shown in pricing (typically 10% of input price). Cache creation = input × 1.25.

Do GPT models only support Responses API?

No. GPT models support both /v1/chat/completions (auto-converted) and /v1/responses (native). Standard OpenAI SDK works out of the box.

How long are generated image/video URLs valid?

24 hours. Download promptly after generation.

Video generation timeout?

Video is async (submit + poll). Submit returns instantly. Typical generation: 50–180 seconds. No timeout on polling.

Which platforms are compatible?

Claude Code CLI, OpenAI SDK, Anthropic SDK, ChatGPT Next Web, LobeChat, Cursor, Windsurf, and any OpenAI-compatible client.