OpenAI Compatible

Chat Completions

POST

OpenAI-compatible chat completions endpoint for Foresight models.

Authorizations

Authorization (string, required)
Bearer authentication header of the form Bearer <token>.
Body

model (string, required)
ID of the model to use.

temperature (number or null, optional)
Sampling temperature between 0 and 2.

max_tokens (integer or null, optional)
Maximum number of tokens to generate.

top_p (number or null, optional)
Nucleus sampling parameter.

stream (boolean or null, optional; default: false)
Whether to stream back partial progress.

n (integer or null, optional; default: 1)
Number of chat completion choices to generate.

stop (string, string[], or null, optional)
Up to 4 sequences where the API will stop generating.

seed (integer or null, optional)
Deterministic sampling seed.
Responses

200: Successful Response (application/json)

id (string, required)
A unique identifier for the chat completion.

object (const: chat.completion, optional; default: chat.completion)
The object type.

created (integer, required)
Unix timestamp of when the completion was created.

model (string, required)
The model used for the chat completion.

usage (optional; nullable)
Usage statistics for the completion request.

POST /openai/chat/completions
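A minimal request sketch in Python using only the standard library. The base URL, token, and model ID are placeholders, and note one assumption: the body schema above does not list a messages field, but OpenAI-compatible chat endpoints conventionally require one, so it is included here.

```python
import json
import urllib.request

BASE_URL = "https://api.example.com"  # placeholder: substitute your deployment's base URL
API_KEY = "YOUR_TOKEN"                # placeholder bearer token

def build_chat_request(model, messages, **options):
    """Assemble a chat-completion payload from the fields documented above.

    `messages` is an assumption: it is not listed in the schema above but is
    conventionally required by OpenAI-compatible chat endpoints. Optional
    fields are included only when explicitly provided.
    """
    allowed = {"temperature", "max_tokens", "top_p", "stream", "n", "stop", "seed"}
    unknown = set(options) - allowed
    if unknown:
        raise ValueError(f"unsupported options: {sorted(unknown)}")
    payload = {"model": model, "messages": messages}
    payload.update(options)
    return payload

def post_chat_completion(payload):
    """POST the payload to /openai/chat/completions and return the parsed JSON."""
    req = urllib.request.Request(
        f"{BASE_URL}/openai/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Build (but do not send) an example payload with a placeholder model ID.
payload = build_chat_request(
    "foresight-model-id",
    [{"role": "user", "content": "Hello"}],
    temperature=0.7,
    max_tokens=128,
    seed=42,
)
```

A successful response carries the id, object, created, model, and usage fields documented under Responses.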

List Models

GET

List available models.

Responses

200: Successful Response (application/json)

object (const: list, optional; default: list)
The object type.

GET /openai/models
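A matching sketch for listing models. The schema above documents only the object: "list" field; the data array of model entries with id fields is an assumption based on the conventional OpenAI list shape.

```python
import json
import urllib.request

BASE_URL = "https://api.example.com"  # placeholder base URL
API_KEY = "YOUR_TOKEN"                # placeholder bearer token

def list_models():
    """GET /openai/models with a bearer token and return the parsed JSON."""
    req = urllib.request.Request(
        f"{BASE_URL}/openai/models",
        headers={"Authorization": f"Bearer {API_KEY}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def extract_model_ids(listing):
    """Pull model IDs from a listing.

    Assumes the conventional OpenAI shape {"object": "list", "data": [...]};
    only the "object" field is documented in the schema above.
    """
    if listing.get("object") != "list":
        raise ValueError("unexpected object type")
    return [entry["id"] for entry in listing.get("data", [])]
```

The returned IDs can then be passed as the model field of the completion endpoints.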

Completions

POST

OpenAI-compatible text completion endpoint.

Authorizations

Authorization (string, required)
Bearer authentication header of the form Bearer <token>.
Body

model (string, required)
ID of the model to use.

prompt (string or string[], required)
The prompt(s) to generate completions for.

temperature (number or null, optional)
Sampling temperature between 0 and 2.

max_tokens (integer or null, optional)
Maximum number of tokens to generate.

top_p (number or null, optional)
Nucleus sampling parameter.

stream (boolean or null, optional; default: false)
Whether to stream back partial progress.

n (integer or null, optional; default: 1)
Number of completions to generate.

stop (string, string[], or null, optional)
Up to 4 sequences where the API will stop generating.

seed (integer or null, optional)
Deterministic sampling seed.
Responses

200: Successful Response (application/json)

id (string, required)
A unique identifier for the completion.

object (const: text_completion, optional; default: text_completion)
The object type.

created (integer, required)
Unix timestamp of when the completion was created.

model (string, required)
The model used for the completion.

usage (optional; nullable)
Usage statistics for the completion request.

POST /openai/completions
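A request sketch for the text completion endpoint. It validates the two shapes the schema allows for prompt (a string or a list of strings) and the 4-sequence limit on stop; base URL, token, and model ID are placeholders.

```python
import json
import urllib.request

BASE_URL = "https://api.example.com"  # placeholder base URL
API_KEY = "YOUR_TOKEN"                # placeholder bearer token

def build_completion_request(model, prompt, **options):
    """Assemble a text-completion payload from the fields documented above.

    `prompt` may be a single string or a list of strings, per the schema.
    `stop` accepts a string or a list of at most 4 stop sequences.
    """
    if not isinstance(prompt, (str, list)):
        raise TypeError("prompt must be a string or a list of strings")
    stop = options.get("stop")
    if isinstance(stop, list) and len(stop) > 4:
        raise ValueError("at most 4 stop sequences are allowed")
    return {"model": model, "prompt": prompt, **options}

def post_completion(payload):
    """POST the payload to /openai/completions and return the parsed JSON."""
    req = urllib.request.Request(
        f"{BASE_URL}/openai/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Build (but do not send) an example payload with a placeholder model ID.
payload = build_completion_request(
    "foresight-model-id",
    ["Once upon a time", "In a galaxy far away"],
    max_tokens=64,
    stop=["\n\n"],
)
```

Passing a list of prompts generates one completion per prompt, mirroring OpenAI's batched-prompt behavior.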
