Create a chat completion based on a list of messages. Requires authorization.

Example request:

curl --request POST \
  --url https://api.secton.org/v1/chat/completions \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '{
    "model": "copilot-zero",
    "messages": [
      {
        "role": "user",
        "content": "Hello, how can I help you?"
      }
    ],
    "temperature": 0.7,
    "max_tokens": 1024,
    "stream": false,
    "instructions": "<string>"
  }'

Example response:

{
"object": "chat.completion",
"model": "copilot-zero",
"organization_id": "org_1234",
"messages": [
{
"role": "user",
"content": "Hello! How can I assist you today?"
}
],
"usage": {
"prompt_tokens": 50,
"completion_tokens": 100,
"total_tokens": 150
}
}Create a chat completion based on a list of messages. Requires authorization.
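The same request can be issued from Python. The sketch below is illustrative, not an official client: it assumes the requests library and reads the auth token from an environment variable named SECTON_API_KEY (that variable name is an assumption); the URL, headers, payload fields, and response shape mirror the curl examples above.

import os

import requests

# Illustrative sketch: endpoint, payload fields, and response shape are taken
# from the examples above; the SECTON_API_KEY variable name is an assumption.
API_URL = "https://api.secton.org/v1/chat/completions"
token = os.environ["SECTON_API_KEY"]

payload = {
    "model": "copilot-zero",
    "messages": [
        {"role": "user", "content": "Hello, how can I help you?"}
    ],
    "temperature": 0.7,
    "max_tokens": 1024,
    "stream": False,
}

response = requests.post(
    API_URL,
    headers={
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    },
    json=payload,
    timeout=30,
)
response.raise_for_status()
data = response.json()

# The example response above carries the returned message in "messages"
# and the token counts in "usage".
print(data["messages"][-1]["content"])
print(data["usage"]["total_tokens"])

The parsing above assumes "stream": false, so the full completion arrives as a single JSON body as in the example response.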
Authorization

Bearer authentication header of the form Bearer <token>, where <token> is your auth token.
Body: Chat completion request payload

model: Model ID to use for the completion. Example: "copilot-zero"
temperature: Sampling temperature between 0 and 1.
max_tokens: Maximum tokens to generate (max 4096).
stream: Whether to stream partial responses.
instructions: Optional instructions for the model.

Response: Chat completion response

object: Example: "chat.completion"
model: Example: "copilot-zero"
organization_id: Example: "org_1234"