OpenAI-compatible chat completion response.
Non-streaming (stream: false or omitted): Returns a single JSON object with the complete response.
Streaming (stream: true): Returns Server-Sent Events (SSE) with text/event-stream content type.
Each SSE event is prefixed with data: followed by a JSON object:
data: {"id":"chatcmpl-abc123","object":"chat.completion.chunk","created":1743037634,"model":"gpt-4","choices":[{"index":0,"delta":{"content":"Hello"},"finish_reason":null}]}
The stream ends with:
data: [DONE]
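The framing above can be handled line by line. As an illustration, here is a small helper (hypothetical, not part of the API) that classifies a single SSE line as a parsed chunk, the end-of-stream marker, or something to skip:

```javascript
// Hypothetical helper: parse one SSE line from the stream.
// Returns { done: true } for the "data: [DONE]" terminator, the parsed
// chunk object for a data line, or null for anything else (blank lines).
function parseSSELine(line) {
  if (!line.startsWith('data: ')) return null;
  const payload = line.slice('data: '.length).trim();
  if (payload === '[DONE]') return { done: true };
  return JSON.parse(payload);
}
```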
Key fields per chunk:
id: unique completion ID
object: always chat.completion.chunk
created: Unix timestamp
choices[0].delta.content: incremental text fragment
choices[0].finish_reason: null while the stream is in progress; "stop" on the final chunk
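Because each chunk carries only a fragment in delta.content, a client reconstructs the full reply by concatenating the fragments in order. A minimal sketch, using made-up chunk data shaped like the fields above:

```javascript
// Sketch: reassemble the full message from a sequence of chunk objects.
// The chunk shapes follow the fields documented above; the data is made up.
const chunks = [
  { choices: [{ index: 0, delta: { role: 'assistant' }, finish_reason: null }] },
  { choices: [{ index: 0, delta: { content: 'Hello' }, finish_reason: null }] },
  { choices: [{ index: 0, delta: { content: ' world' }, finish_reason: null }] },
  { choices: [{ index: 0, delta: {}, finish_reason: 'stop' }] }
];

const fullText = chunks
  .map(c => c.choices[0]?.delta?.content || '')
  .join('');
// fullText is now 'Hello world'
```

Note that the first chunk typically carries only the role and the last only the finish_reason, so defaulting a missing content to the empty string keeps the concatenation simple.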
JavaScript client example (Node 18+; buffers partial SSE lines, since a network read can end mid-line):

const response = await fetch(
  'https://app.customgpt.ai/api/v1/projects/{projectId}/chat/completions',
  {
    method: 'POST',
    headers: {
      'Authorization': 'Bearer YOUR_API_KEY',
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      messages: [{ role: 'user', content: 'Hello' }],
      stream: true
    })
  }
);

const reader = response.body.getReader();
const decoder = new TextDecoder();
let buffer = '';

outer: while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  // Network chunks can split SSE lines; carry the partial line over.
  buffer += decoder.decode(value, { stream: true });
  const lines = buffer.split('\n');
  buffer = lines.pop(); // keep the last, possibly incomplete, line
  for (const line of lines) {
    if (!line.startsWith('data: ')) continue;
    const payload = line.slice(6).trim();
    if (payload === '[DONE]') break outer; // end of stream
    const chunk = JSON.parse(payload);
    const text = chunk.choices[0]?.delta?.content || '';
    process.stdout.write(text);
  }
}
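For comparison, a non-streaming call (stream: false or omitted) resolves to a single JSON body, with the reply text at choices[0].message.content rather than delta.content. A sketch of that shape and how to read it; the sample body is illustrative, with field names taken from the OpenAI-compatible format described above:

```javascript
// Sketch: the non-streaming response shape and how to read it.
// This sample body is made up; field names follow the
// OpenAI-compatible format documented above.
const body = {
  id: 'chatcmpl-abc123',
  object: 'chat.completion',
  created: 1743037634,
  model: 'gpt-4',
  choices: [
    {
      index: 0,
      message: { role: 'assistant', content: 'Hello! How can I help?' },
      finish_reason: 'stop'
    }
  ]
};

// With a real request this would be: const body = await response.json();
const reply = body.choices[0].message.content;
```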