Base URL: https://stream.ai.wizzo.media

All endpoints (except /api/health) require:
Authorization: Bearer <API_KEY>
Use the wizzoai API key (obtained via market_service::api_key("wizzoai") in the PHP CORE).
{
  "model": "string (optional) - e.g. \"gpt-5\", \"gemini-2.5-pro\". Defaults to provider default.",
  "provider": "string (optional) - \"openai\" | \"gemini\" | \"anthropic\" | \"xai\". Defaults to \"openai\".",
  "prompt": "string (required) - the user message.",
  "settings": {
    "type": "string - log/billing tag (e.g. \"article_generation\").",
    "system_msg": "string - system prompt.",
    "history": "array - previous turns: [{role:\"user\"|\"assistant\"|\"tool\", content|name|tool_call_id|...}].",
    "max_tokens": "number - response length limit (default 15000).",
    "temperature": "number | null - sampling temperature.",
    "json": "bool - if true, the answer is parsed as JSON before being returned.",
    "json_schema": "string|object - optional JSON schema for structured output.",
    "image_input": "string | string[] - URL / data-URI / base64 image(s) for vision or image editing. URLs are downloaded server-side with anti-bot fallbacks.",
    "tools": "array - tool/function definitions (OpenAI-style) for tool-calling.",
    "web_search": "bool - enable provider web-search tool when supported."
  }
}

Returns the AI answer token-by-token as an SSE stream. Use this when you want to render the answer live as it is generated.
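A minimal Python sketch of the client side: building a request body from the schema above and consuming the streamed events. Standard SSE `data:` framing is assumed here (the doc says SSE but does not show the wire framing), and the events are canned rather than fetched over the network.

```python
import json

def build_payload(prompt, provider="openai", **settings):
    """Build a request body matching the schema above."""
    payload = {"prompt": prompt, "provider": provider}
    if settings:
        payload["settings"] = settings
    return payload

def parse_sse(lines):
    """Yield parsed event dicts from SSE 'data:' lines; stop at a terminal event."""
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alive lines and comments
        event = json.loads(line[len("data:"):].strip())
        yield event
        if event.get("type") in ("done", "error"):
            break  # "done" and "error" are terminal

# Reassemble the streamed answer from illustrative canned events.
sample = [
    'data: {"type":"chunk","content":"Hel"}',
    'data: {"type":"chunk","content":"lo"}',
    'data: {"type":"done","input_tokens":5,"output_tokens":2,"wizzo_tokens":7}',
]
answer = "".join(e["content"] for e in parse_sse(sample) if e.get("type") == "chunk")
```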
Event types:

{"type":"chunk","content":"..."} - one or more, each carrying the next text fragment
{"type":"action","subtype":"tool_calls","calls":[...]} - emitted once if the model called a tool
{"type":"done","input_tokens":N,"output_tokens":N,"wizzo_tokens":N} - terminal success event
{"type":"error","message":"...","code":"..."} - terminal error event

POST /api/ai

Returns the full AI answer as a STRING in `data`, matching the legacy PHP m_ai::call() shape. When settings.json=true the string is cleaned (markdown fences and Hebrew quote escapes removed) and is safe to pass through json_decode() on the client. Use /api/ai/json when you want the response already parsed.
{
  "data": "The full answer as a string (raw text, or a JSON string when settings.json=true, or {type:\"tool_calls\",calls:[...]} when the model called a tool)"
}
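When settings.json=true was sent, `data` arrives as a cleaned JSON string; a small sketch of the client-side decode step, with Python's json.loads standing in for PHP's json_decode() and an illustrative response body:

```python
import json

# Hypothetical /api/ai response body for a request that set settings.json=true.
response_body = {"data": '{"title": "Example", "tags": ["a", "b"]}'}

# Safe to decode directly: the server strips markdown fences and escape noise first.
parsed = json.loads(response_body["data"])
```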
POST /api/ai/json

Like /api/ai but forces settings.json=true and returns the answer ALREADY PARSED as an object/array in `data`. Use this when you want to skip the json_decode() step on the client.
{
  "data": {
    "example_field": "parsed object/array - depends on the model output"
  }
}
Generate an image (or edit one when settings.image_input is provided). Returns an array of public URLs hosted on stream.ai.wizzo.media (valid for 24 hours).

POST /api/ai_image

{
  "data": [
    "https://stream.ai.wizzo.media/uploads/ai_images/openai_173...png"
  ]
}
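A sketch of an image-edit request body, assuming /api/ai_image accepts the same top-level schema as the text endpoints (the schema above does define settings.image_input for image editing; the field values here are illustrative):

```python
# Illustrative request body for POST /api/ai_image: editing an existing image.
edit_request = {
    "prompt": "Remove the background and replace it with plain white",
    "provider": "openai",
    "settings": {
        "type": "image_edit",  # log/billing tag (illustrative value)
        # URL, data-URI, or base64; URLs are downloaded server-side.
        "image_input": "https://example.com/photo.png",
    },
}

# The returned data array holds public URLs valid for 24 hours only,
# so persist the generated files elsewhere if you need them longer.
```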
Returns an OpenAI embedding vector for the given text (defaults to text-embedding-3-small, 1536 dims).
{
  "data": [
    0.0123,
    -0.0456,
    "..."
  ]
}
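The vectors in `data` (1536 dims for text-embedding-3-small) are typically compared with cosine similarity; a pure-stdlib sketch, using short illustrative vectors in place of real 1536-dim embeddings:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors of equal length."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Identical vectors have similarity 1.0; orthogonal vectors, 0.0.
v1 = [0.0123, -0.0456, 0.5]
v2 = [0.0123, -0.0456, 0.5]
sim = cosine_similarity(v1, v2)
```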
Returns the active model catalog (id, name, provider, supports_tools, supports_vision, prices, ...).
GET /api/get_models · POST /api/get_models

{
  "data": {
    "models": [
      "..."
    ]
  }
}
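Client code typically filters the catalog by its capability flags; a sketch assuming each entry carries the fields listed above (id, provider, supports_tools, supports_vision), with illustrative catalog data:

```python
# Illustrative entries shaped like the catalog fields listed above.
models = [
    {"id": "gpt-5", "provider": "openai",
     "supports_tools": True, "supports_vision": True},
    {"id": "gemini-2.5-pro", "provider": "gemini",
     "supports_tools": True, "supports_vision": True},
    {"id": "small-text", "provider": "openai",
     "supports_tools": False, "supports_vision": False},
]

# Pick only models that can accept settings.image_input.
vision_ids = [m["id"] for m in models if m["supports_vision"]]
```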
Creates a short-lived OpenAI Realtime session for browser WebRTC voice. Rate-limited to 5/minute per user.
{
  "voice": "shimmer (default) | alloy | echo | ..."
}
Called by the client when a voice session ends. Body: { duration_ms: number }.
No auth. Returns { status: "ok", timestamp }.
4xx for client errors (prompt_not_set, monthly_limit, INVALID_TOOL_CALL, ...); 5xx for provider/server errors.
{
  "error": "short_code",
  "message": "human-readable Hebrew message"
}
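A sketch of client-side handling over this error shape: branch on the short code, display the Hebrew message, and treat only 5xx as retryable. The error code and message here are illustrative.

```python
class ApiError(Exception):
    """Error raised from the {error, message} response shape above."""
    def __init__(self, code, message, status):
        super().__init__(f"{status} {code}: {message}")
        self.code = code        # short machine code, e.g. "monthly_limit"
        self.message = message  # human-readable Hebrew message for display
        self.status = status    # HTTP status

def raise_for_error(status, body):
    """Pass 2xx/3xx bodies through; raise ApiError for 4xx/5xx."""
    if status < 400:
        return body
    raise ApiError(body.get("error", "unknown"), body.get("message", ""), status)

# Illustrative 4xx: a monthly-limit rejection.
try:
    raise_for_error(429, {"error": "monthly_limit", "message": "הודעת שגיאה לדוגמה"})
except ApiError as exc:
    err = exc
retryable = err.status >= 500  # 4xx are client errors; don't retry blindly
```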