Chat
The Chat API section covers Claude-compatible conversations, Gemini-native multimodal messaging, OpenAI-compatible Chat Completions, and the newer Responses API.
Native Claude Format
Use the Claude Messages API when you need Anthropic-compatible payloads, tool use, or system prompts.
https://api.dgrid.ai
POST
/v1/messages
Request Body
| Field | Type | Required | Description |
|---|---|---|---|
| model | string | Yes | Model ID, such as claude-3-5-sonnet-20241022. |
| max_tokens | integer | Yes | Maximum output token count. |
| messages | array | Yes | Conversation message list. |
| messages[].role | string | Yes | user or assistant. |
| messages[].content | string or array | Yes | Message content or content blocks. |
| system | string | No | System instructions. |
| temperature | number | No | Sampling temperature. |
| top_p | number | No | Top-p sampling. |
| top_k | integer | No | Top-k sampling. |
| stop_sequences | array | No | Stop sequences. |
| stream | boolean | No | Enable streaming responses. |
| tools | array | No | Tool schema definitions. |
| tool_choice | object | No | Tool selection strategy. |
Response Body
| Field | Type | Description |
|---|---|---|
| id | string | Message identifier. |
| type | string | Always message. |
| role | string | Always assistant. |
| content | array | Returned content blocks. |
| content[].type | string | text or tool_use. |
| content[].text | string | Text body when the content type is text. |
| model | string | Model that produced the output. |
| stop_reason | string | end_turn, max_tokens, stop_sequence, or tool_use. |
| usage | object | Token usage metadata. |
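As a sketch of the request shape described above, the snippet below assembles a minimal /v1/messages payload. The `build_messages_request` helper is a hypothetical name for illustration, not part of any SDK; the model ID is the example given in the request table.

```python
import json

# Minimal sketch of a /v1/messages payload, following the request table
# above. build_messages_request is a hypothetical helper, not an SDK call.
def build_messages_request(model, user_text, system=None, max_tokens=1024):
    payload = {
        "model": model,            # required: model ID
        "max_tokens": max_tokens,  # required: output token cap
        "messages": [              # required: conversation turns
            {"role": "user", "content": user_text},
        ],
    }
    if system is not None:
        payload["system"] = system  # optional system instructions
    return payload

req = build_messages_request(
    "claude-3-5-sonnet-20241022",
    "Summarize this paragraph in one sentence.",
    system="You are a concise assistant.",
)
print(json.dumps(req, indent=2))
```

A client would POST the serialized payload to https://api.dgrid.ai/v1/messages; setting "stream": true switches the response to streaming.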
Gemini Media Recognition
Use Gemini-native multimodal parts to analyze images, audio, video, or mixed media in a single request.
https://api.dgrid.ai
POST
/v1/models/{model}:generateContent
Path Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | Yes | Model ID such as gemini-1.5-pro. |
Request Body
| Field | Type | Required | Description |
|---|---|---|---|
| body | object | Yes | The current example sends an empty JSON object {}. |
Response Body
| Field | Type | Description |
|---|---|---|
| candidates | array | Candidate responses returned by the model. |
| candidates[].content | object | Generated content object. |
| candidates[].content.role | string | Role returned in the generated content block. |
| candidates[].content.parts | array | Returned content parts. |
| candidates[].finishReason | string | Finish reason string. |
| candidates[].safetyRatings | array | Safety evaluation results. |
| usageMetadata | object | Token accounting metadata. |
| usageMetadata.promptTokenCount | integer | Prompt token count. |
| usageMetadata.candidatesTokenCount | integer | Candidate output token count. |
| usageMetadata.totalTokenCount | integer | Total token count. |
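Because the reference example above sends an empty body, the sketch below shows what a multimodal request body might look like. The contents/parts/inline_data field names follow the Gemini-native format and are an assumption here, not something this page specifies; the image bytes are a placeholder.

```python
import base64
import json

# Assumed multimodal generateContent body (Gemini-native field names,
# not taken from this reference page).
image_bytes = b"\x89PNG\r\n\x1a\n"  # placeholder bytes, not a real image
body = {
    "contents": [
        {
            "role": "user",
            "parts": [
                {"text": "Describe this image."},
                {
                    "inline_data": {
                        "mime_type": "image/png",
                        # binary media is base64-encoded inline
                        "data": base64.b64encode(image_bytes).decode("ascii"),
                    }
                },
            ],
        }
    ]
}
print(json.dumps(body)[:60])
```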
Gemini Text Chat
Use the Gemini native chat format when you want a lightweight text-only payload without switching providers.
https://api.dgrid.ai
POST
/v1/models/{model}:generateContent
Request Body
| Field | Type | Required | Description |
|---|---|---|---|
| body | object | Yes | The current example sends an empty JSON object {}. |
Response Body
| Field | Type | Description |
|---|---|---|
| candidates | array | Candidate responses returned by the model. |
| candidates[].content | object | Generated content object. |
| candidates[].content.role | string | Role returned in the generated content block. |
| candidates[].content.parts | array | Returned content parts. |
| candidates[].finishReason | string | Finish reason string. |
| candidates[].safetyRatings | array | Safety evaluation results. |
| usageMetadata | object | Token accounting metadata. |
| usageMetadata.promptTokenCount | integer | Prompt token count. |
| usageMetadata.candidatesTokenCount | integer | Candidate output token count. |
| usageMetadata.totalTokenCount | integer | Total token count. |
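The snippet below sketches a text-only request body and shows how a client might read the reply back out of a response shaped like the table above. The contents/parts request fields follow the Gemini-native format (an assumption, since the reference example body is empty); the response values are illustrative.

```python
# Text-only Gemini-native body (assumed field names; the doc's example
# body is empty).
body = {
    "contents": [
        {"role": "user", "parts": [{"text": "What is top-p sampling?"}]}
    ]
}

# Illustrative response shaped like the response table above.
example_response = {
    "candidates": [
        {
            "content": {
                "role": "model",
                "parts": [{"text": "It samples from the smallest set of "
                                   "tokens whose probability mass exceeds p."}],
            },
            "finishReason": "STOP",
        }
    ],
    "usageMetadata": {"promptTokenCount": 7,
                      "candidatesTokenCount": 18,
                      "totalTokenCount": 25},
}

# Concatenate the text parts of the first candidate.
reply = "".join(p["text"] for p in
                example_response["candidates"][0]["content"]["parts"])
print(reply)
```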
Chat Completions
Use the OpenAI-compatible Chat Completions format for standard multi-turn chat, structured output, and tool calling.
https://api.dgrid.ai
POST
/v1/chat/completions
Request Body
| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| model | string | Yes | - | Target model ID. |
| messages | array | Yes | - | Conversation message list. |
| messages[].role | string | Yes | - | system, user, assistant, or tool. |
| messages[].content | string | Yes | - | Message content. |
| messages[].name | string | No | - | Optional participant name. |
| messages[].tool_calls | array | No | - | Tool invocation payloads. |
| messages[].tool_call_id | string | No | - | Tool call identifier. |
| temperature | number | No | 1 | Sampling temperature. |
| top_p | number | No | 1 | Nucleus sampling value. |
| n | integer | No | 1 | Number of choices to generate. |
| stream | boolean | No | false | Enable SSE streaming. |
| max_tokens | integer | No | - | Maximum token count. |
| max_completion_tokens | integer | No | - | Max completion-only tokens. |
| presence_penalty | number | No | 0 | Presence penalty. |
| frequency_penalty | number | No | 0 | Frequency penalty. |
| logit_bias | object | No | - | Token bias configuration. |
| stop | string or array | No | - | Stop sequence. |
| tools | array | No | - | Tool definitions. |
| tool_choice | string or object | No | auto | Tool selection behavior. |
| response_format | object | No | - | Response schema or JSON mode config. |
| seed | integer | No | - | Deterministic seed. |
| user | string | No | - | End-user identifier. |
Response Body
| Field | Type | Description |
|---|---|---|
| id | string | Completion identifier. |
| object | string | Always chat.completion. |
| created | integer | Creation timestamp. |
| model | string | Model that served the request. |
| choices | array | Returned choices. |
| choices[].message | object | Assistant message object. |
| choices[].message.role | string | Response role. |
| choices[].message.content | string | Response text. |
| choices[].message.tool_calls | array | Tool call payloads. |
| choices[].finish_reason | string | stop, length, content_filter, or tool_calls. |
| usage | object | Token usage breakdown. |
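A minimal sketch of a Chat Completions round trip, using only the fields from the tables above. The model ID is a placeholder, and the response values are illustrative, not real output.

```python
# Request payload built from the request table above. "example-model" is
# a placeholder ID; substitute any model the gateway routes.
payload = {
    "model": "example-model",
    "messages": [
        {"role": "system", "content": "You are a terse assistant."},
        {"role": "user", "content": "Name two sorting algorithms."},
    ],
    "temperature": 0.2,
    "n": 1,
    "stream": False,
}

# Illustrative response shaped like the response table above.
example_response = {
    "id": "chatcmpl-abc123",
    "object": "chat.completion",
    "choices": [
        {"message": {"role": "assistant",
                     "content": "Merge sort and quicksort."},
         "finish_reason": "stop"}
    ],
    "usage": {"prompt_tokens": 20, "completion_tokens": 6,
              "total_tokens": 26},
}

# The reply text lives at choices[].message.content.
reply = example_response["choices"][0]["message"]["content"]
print(reply)
```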
Responses
Use the OpenAI Responses API when you want stateful flows, reasoning-specific options, or newer OpenAI tooling patterns.
https://api.dgrid.ai
POST
/v1/responses
Request Body
| Field | Type | Required | Description |
|---|---|---|---|
| model | string | Yes | Target model ID. |
| body.model | string | Yes | The current example only sends the model field in the request body. |
Response Body
| Field | Type | Description |
|---|---|---|
| id | string | Response identifier. |
| object | string | Always response. |
| created_at | integer | Creation timestamp. |
| status | string | Response lifecycle state. |
| model | string | Model used for inference. |
| output | array | Output items. |
| output[].type | string | Typically message. |
| output[].role | string | Output role. |
| output[].content | array | Output content blocks. |
| output[].content[].type | string | output_text for text payloads. |
| output[].content[].text | string | Output text. |
| usage | object | Token usage summary. |
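The nested output structure above can be flattened back to plain text with a small helper. `extract_text` is a hypothetical name for illustration, not part of any SDK, and the example response values are made up to match the response table.

```python
# Hypothetical helper: collect output_text blocks from a response shaped
# like the Responses table above.
def extract_text(resp):
    chunks = []
    for item in resp.get("output", []):
        if item.get("type") == "message":        # output[].type
            for block in item.get("content", []):
                if block.get("type") == "output_text":
                    chunks.append(block.get("text", ""))
    return "".join(chunks)

# Illustrative response following the field layout documented above.
example = {
    "id": "resp_001",
    "object": "response",
    "status": "completed",
    "output": [
        {"type": "message", "role": "assistant",
         "content": [{"type": "output_text", "text": "Hello there."}]}
    ],
}
print(extract_text(example))
```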
