
Moderations

The Moderations API provides OpenAI-compatible content safety checks for user-generated or model-generated text.

Moderations

Evaluate text input against common safety categories such as hate, harassment, self-harm, sexual content, and violence.

POST `https://api.dgrid.ai/v1/moderations`

Authorization: `Bearer <DGRID_API_KEY>`
Request: application/json
Response: 200 · application/json

Request Body

| Field | Type | Required | Description |
|---|---|---|---|
| input | string or array | Yes | Text content to moderate. |
| model | string | No | Moderation model such as `text-moderation-latest` or `text-moderation-stable`. |
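As a sketch of the request body above, the snippet below builds the JSON payload for `POST /v1/moderations`. The helper name `build_moderation_request` and the example text are illustrative, not part of the API; only the `input` and `model` fields come from the table.

```python
import json

def build_moderation_request(text, model="text-moderation-latest"):
    """Build the JSON body for POST /v1/moderations.

    `input` is required; `model` is optional, so it is omitted when not set.
    (Helper name is illustrative, not part of the API.)
    """
    body = {"input": text}
    if model is not None:
        body["model"] = model
    return body

body = build_moderation_request("Example text to check.")
print(json.dumps(body))
```

Sending this body with an `Authorization: Bearer <DGRID_API_KEY>` header and a `Content-Type: application/json` header completes the request.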

Response Body

| Field | Type | Description |
|---|---|---|
| id | string | Moderation request ID. |
| model | string | Model used for moderation. |
| results | array | Moderation result entries. |
| results[].flagged | boolean | Whether the input was flagged. |
| results[].categories | object | Boolean category decisions. |
| results[].categories.hate | boolean | Hate speech flag. |
| results[].categories.hate/threatening | boolean | Threatening hate speech flag. |
| results[].categories.harassment | boolean | Harassment flag. |
| results[].categories.self-harm | boolean | Self-harm flag. |
| results[].categories.sexual | boolean | Sexual content flag. |
| results[].categories.violence | boolean | Violence flag. |
| results[].category_scores | object | Continuous scores for each category. |
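The snippet below walks a response shaped like the table above and collects the triggered categories. The concrete values (`modr-123`, the score `0.97`) are made-up sample data, not real API output; only the field names follow the response schema.

```python
# Hypothetical response shaped per the table above (sample data, not real output).
response = {
    "id": "modr-123",
    "model": "text-moderation-latest",
    "results": [
        {
            "flagged": True,
            "categories": {
                "hate": False,
                "hate/threatening": False,
                "harassment": False,
                "self-harm": False,
                "sexual": False,
                "violence": True,
            },
            "category_scores": {"violence": 0.97},
        }
    ],
}

# Each entry in `results` corresponds to one input; `flagged` is the overall
# decision, while `categories` gives the per-category booleans.
result = response["results"][0]
triggered = [name for name, hit in result["categories"].items() if hit]
print(result["flagged"], triggered)  # True ['violence']
```

Checking `results[].flagged` first, then inspecting `categories` or `category_scores` for detail, is a common pattern for deciding how to handle flagged input.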