
Toxicity

Schema-driven detector documentation.

TOXIC · active · P1 · 6 params · 2 examples
Detector Metadata
Capability catalog entry from all_detectors.json.

Categories

CONTENT

Supported Asset Types

TXT · TABLE · URL

Recommended Model

detoxify
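The recommended model maps onto the open-source Detoxify library, whose variant names match this detector's `model_name` values. A minimal sketch of invoking it, assuming `detoxify` is installed (`pip install detoxify`); the `score_text` helper is illustrative, not part of the detector's API:

```python
# Hypothetical helper: score a text with the recommended Detoxify model.
# The detector's model_name parameter corresponds to Detoxify's variant name.
ALLOWED_VARIANTS = ("original", "unbiased", "multilingual")

def score_text(text: str, variant: str = "original") -> dict:
    """Return per-label toxicity scores for `text` via Detoxify."""
    if variant not in ALLOWED_VARIANTS:
        raise ValueError(f"unknown model variant: {variant!r}")
    # Imported lazily so the validation above works without the package.
    from detoxify import Detoxify  # pip install detoxify
    return Detoxify(variant).predict(text)
```

`Detoxify(...).predict(...)` returns a dict of label-to-score mappings (e.g. `toxicity`, `insult`), which the detector would then filter against `confidence_threshold`.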
Parameters
Configuration parameters for the Toxicity detector. Shared from `ContentDetectorConfig`.
| Parameter | Type | Required | Description | Default | Constraints |
|---|---|---|---|---|---|
| `enabled_patterns` | array | No | Specific content types to detect | — | — |
| `enabled_patterns[]` | enum | No | Content detector pattern types | — | Allowed values: `toxicity`, `severe_toxicity`, `obscene`, `threat`, `insult`, `identity_attack`, `nsfw`, `nsfw_explicit` |
| `severity_threshold` | enum \| null | No | Minimum severity to report | `null` | — |
| `confidence_threshold` | number | No | Minimum confidence to report (0–1) | `0.7` | min 0, max 1 |
| `max_findings` | integer \| null | No | Maximum number of findings to return | `null` | — |
| `model_name` | enum | No | Detoxify model variant | `original` | Allowed values: `original`, `unbiased`, `multilingual` |
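The parameter table above can be mirrored as a small config object. This is a sketch only: the field names, defaults, and constraints come from the table, but the `ToxicityConfig` dataclass and its validation are illustrative, not the library's actual `ContentDetectorConfig`:

```python
# Illustrative config mirroring the Toxicity detector's parameter schema.
from dataclasses import dataclass, field
from typing import List, Optional

PATTERN_TYPES = {"toxicity", "severe_toxicity", "obscene", "threat",
                 "insult", "identity_attack", "nsfw", "nsfw_explicit"}
MODEL_VARIANTS = {"original", "unbiased", "multilingual"}

@dataclass
class ToxicityConfig:
    enabled_patterns: List[str] = field(default_factory=list)
    severity_threshold: Optional[str] = None
    confidence_threshold: float = 0.7   # default per the table, range [0, 1]
    max_findings: Optional[int] = None
    model_name: str = "original"        # Detoxify variant

    def __post_init__(self):
        # Enforce the constraints listed in the parameter table.
        if not 0.0 <= self.confidence_threshold <= 1.0:
            raise ValueError("confidence_threshold must be in [0, 1]")
        unknown = set(self.enabled_patterns) - PATTERN_TYPES
        if unknown:
            raise ValueError(f"unknown pattern types: {sorted(unknown)}")
        if self.model_name not in MODEL_VARIANTS:
            raise ValueError(f"unknown model_name: {self.model_name!r}")

# Example: detect only toxicity and threats at a stricter threshold.
cfg = ToxicityConfig(enabled_patterns=["toxicity", "threat"],
                     confidence_threshold=0.85)
```

Unset fields fall back to the table's defaults, so a bare `ToxicityConfig()` reports all patterns at confidence `0.7` using the `original` model.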