
Last Update: 7/13/2025

Grok Chat Completion API

The Grok Chat Completion API allows you to generate conversational responses using xAI's Grok language models. This document describes the endpoint, request parameters, and response structure.

Endpoint

POST https://platform.llmprovider.ai/v1/chat/completions

Request Headers

| Header | Value |
| --- | --- |
| Authorization | Bearer YOUR_API_KEY |
| Content-Type | application/json |

Request Body

The request body should be a JSON object with the following parameters:

| Parameter | Type | Description |
| --- | --- | --- |
| model | string | The model to use (e.g., grok-beta). |
| messages | array | A list of message objects representing the conversation history. |
| stream | boolean | (Optional) Whether to stream the response as it is generated. |
| max_tokens | integer | (Optional) The maximum number of tokens to generate. |
| temperature | number | (Optional) Sampling temperature, between 0 and 2. |
| top_p | number | (Optional) Nucleus sampling probability, between 0 and 1. |
| stop | array | (Optional) Up to 4 sequences where the API will stop generating further tokens. |
| presence_penalty | number | (Optional) Penalty for new tokens based on their presence in the text so far. |
| frequency_penalty | number | (Optional) Penalty for new tokens based on their frequency in the text so far. |
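To illustrate how these parameters combine, here is a minimal Python sketch that assembles a request body. The helper name is hypothetical (not part of the API); it simply builds the JSON object described in the table, including optional parameters only when they are set:

```python
import json

# Hypothetical helper: assembles a chat-completion request body from the
# parameters documented above. Optional parameters are included only when set.
def build_chat_request(model, messages, **options):
    allowed = {"stream", "max_tokens", "temperature", "top_p",
               "stop", "presence_penalty", "frequency_penalty"}
    payload = {"model": model, "messages": messages}
    payload.update({k: v for k, v in options.items()
                    if k in allowed and v is not None})
    return payload

payload = build_chat_request(
    "grok-beta",
    [{"role": "user", "content": "Tell me a joke."}],
    max_tokens=50,
    temperature=0.7,
)
print(json.dumps(payload))
```

Leaving unset options out of the body keeps the request minimal and lets the server apply its own defaults.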

Example Request

{
  "model": "grok-beta",
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful assistant."
    },
    {
      "role": "user",
      "content": "Tell me a joke."
    }
  ],
  "max_tokens": 50,
  "temperature": 0.7
}
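The example body above can be sent from Python's standard library. This is a sketch, not an official client; the endpoint URL and headers are taken from this document, and you would substitute a real API key and pass the returned request to urllib.request.urlopen to actually send it:

```python
import json
import urllib.request

API_URL = "https://platform.llmprovider.ai/v1/chat/completions"

def make_chat_request(api_key, payload):
    """Build a POST request carrying the JSON body and required headers.

    Send it with urllib.request.urlopen(...) when ready.
    """
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = make_chat_request("YOUR_API_KEY", {
    "model": "grok-beta",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Tell me a joke."},
    ],
    "max_tokens": 50,
    "temperature": 0.7,
})
```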

Response Body

The response body will be a JSON object containing the generated completions and other metadata.

| Field | Type | Description |
| --- | --- | --- |
| id | string | Unique identifier for the completion. |
| object | string | The type of object returned, usually chat.completion. |
| created | integer | Timestamp of when the completion was created. |
| model | string | The model used for the completion. |
| choices | array | A list of generated completion choices. |
| usage | object | Token usage statistics for the request. |

Example Response

{
  "id": "cmpl-6aF1d2e3G4H5I6J7K8L9M0N1",
  "object": "chat.completion",
  "created": 1678491234,
  "model": "grok-beta",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Why don't scientists trust atoms? Because they make up everything!"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 10,
    "completion_tokens": 16,
    "total_tokens": 26
  }
}
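Reading the fields back out of a parsed response is straightforward; here is a sketch in Python using the example response above, with the sample values copied verbatim:

```python
# Sample response copied from the example above.
response = {
    "id": "cmpl-6aF1d2e3G4H5I6J7K8L9M0N1",
    "object": "chat.completion",
    "created": 1678491234,
    "model": "grok-beta",
    "choices": [
        {
            "index": 0,
            "message": {
                "role": "assistant",
                "content": "Why don't scientists trust atoms? Because they make up everything!",
            },
            "finish_reason": "stop",
        }
    ],
    "usage": {"prompt_tokens": 10, "completion_tokens": 16, "total_tokens": 26},
}

# The generated text lives at choices[0].message.content; finish_reason tells
# you why generation stopped, and usage reports token counts for the request.
reply = response["choices"][0]["message"]["content"]
finish_reason = response["choices"][0]["finish_reason"]
total_tokens = response["usage"]["total_tokens"]
print(reply)
```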

Example cURL Request

curl -X POST https://platform.llmprovider.ai/v1/chat/completions \
  -H "Authorization: Bearer $YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "grok-beta",
    "messages": [
      {
        "role": "user",
        "content": "Hello!"
      }
    ]
  }'

For more details, refer to the xAI API documentation.