Endpoints
API 2.0
POST
https://api.onlysq.ru/ai/v2
Generates a text response to one or more user messages.
Starting with patch 4.0.1 beta (November 29, 2025), every request to v2/imagen must include an Authorization header.
json
{"Authorization": "Bearer apikey"}
The basic API key is openai.
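For example, the header can be built in Python like this (using the basic openai key; substitute your own):

```python
api_key = "openai"  # the basic key from the docs; replace with your own key
headers = {"Authorization": f"Bearer {api_key}"}
# Pass this dict to requests.post(..., headers=headers) on every call.
print(headers)
```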
Request
This endpoint expects an object.
model (string, required) - The name of an OnlySq AI model that will process the request.
request (dict, required) - Query data dictionary containing:
messages (list, required) - Chat messages in order.
meta (dict, optional) - Previously used by ImaGen.
image_count (int, optional) - Previously used when generating with ImaGen.
Response
id (string) - Unique identifier.
object (string) - Object type, for OpenAI SDK compatibility.
created (int) - UNIX timestamp.
model (string) - Model that processed the request.
choices (list) - Completion choices: index, message, finish_reason.
usage (dict) - prompt_tokens, completion_tokens, total_tokens.
user (int) - Unique identifier of the user, derived from the API key.
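As a quick illustration of the schema above, here is a minimal sketch that pulls the assistant's reply and token usage out of a parsed response dict (the sample values are hypothetical):

```python
# A response shaped like the schema documented above (values are illustrative).
response = {
    "id": "chat_example",
    "object": "chat.completion",
    "created": 1745146202,
    "model": "gpt-4o-mini",
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant", "content": "Hello!"},
            "finish_reason": "stop",
        }
    ],
    "usage": {"prompt_tokens": 11, "completion_tokens": 19, "total_tokens": 30},
    "user": 0,
}

# The reply text lives in the first choice's message.
reply = response["choices"][0]["message"]["content"]
total_tokens = response["usage"]["total_tokens"]
print(reply, total_tokens)
```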
Sync Request Example
python
import requests
headers = {
    "Authorization": "Bearer openai"
}

send = {
    "model": "gemini-2.5-flash",
    "request": {
        "messages": [
            {
                "role": "user",
                "content": "Hi! Write a short one-line story"
            }
        ]
    }
}
request = requests.post('https://api.onlysq.ru/ai/v2', json=send, headers=headers)
response = request.json()
print(response)
Response Example
json
{
    "id": "chat_XBt6z670WKm7L9BVoCcLZLzTNZ03UJhD6sWqAEUyTgaJvhJA",
    "object": "chat.completion",
    "created": 1745146202,
    "model": "gpt-4o-mini",
    "choices": [
        {
            "index": 0,
            "message": {
                "role": "assistant",
                "content": "\"Under the full moon, the werewolf realized he'd forgotten his keys—again.\"",
                "refusal": null,
                "annotations": []
            },
            "finish_reason": "stop"
        }
    ],
    "usage": {
        "prompt_tokens": 11,
        "completion_tokens": 19,
        "total_tokens": 30
    },
    "user": 0
}
Streaming Request Example
python
import requests, json

url = "https://api.onlysq.ru/ai/v2"

data = {
    "model": "gpt-4o-mini",
    "request": {
        "messages": [
            {
                "role": "user",
                "content": "Write a one-line story about AI."
            }
        ],
        "stream": True
    }
}

with requests.post(url, json=data, stream=True) as response:
    if response.status_code == 200:
        for line in response.iter_lines():
            if line:
                decoded_line = line.decode('utf-8').strip()
                if decoded_line.startswith("data: "):
                    decoded_line = decoded_line[len("data: "):]
                if decoded_line == "[DONE]":
                    break
                chunk = json.loads(decoded_line)
                content = chunk["choices"][0]["delta"].get("content")
                if content:
                    print(content, end="", flush=True)
    else:
        print(response.status_code, response.text)
Console output:
"After mastering human emotions, the AI sighed and shut itself down, realizing loneliness wasn't worth the code."
Streaming Chunk Example
json
{
    "id": "chatcmpl_lQdhTfXn4yKoDlyCjuBpL0GfPZNonZufGqOyGl3wVpMhJRwP",
    "object": "chat.completion.chunk",
    "created": 1750612428,
    "model": "gpt-4o-mini",
    "choices": [
        {
            "index": 0,
            "delta": {
                "content": "te",
                "role": "assistant"
            },
            "finish_reason": null
        }
    ],
    "usage": null
}
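Streaming deltas like the chunk above arrive as fragments of the final message. A minimal sketch of stitching parsed chunks back into the full assistant reply (the chunk contents here are hypothetical):

```python
# Hypothetical parsed chunks, shaped like the chat.completion.chunk example above.
chunks = [
    {"choices": [{"index": 0, "delta": {"role": "assistant", "content": "te"}, "finish_reason": None}]},
    {"choices": [{"index": 0, "delta": {"content": "st"}, "finish_reason": None}]},
    {"choices": [{"index": 0, "delta": {}, "finish_reason": "stop"}]},
]

parts = []
for chunk in chunks:
    delta = chunk["choices"][0]["delta"]
    # Not every delta carries content; the final chunk may only set finish_reason.
    content = delta.get("content")
    if content:
        parts.append(content)

full_message = "".join(parts)
print(full_message)
```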