safetySettings property
Optional. A list of unique SafetySetting instances for blocking unsafe
content.

These settings are enforced on the GenerateAnswerRequest.contents and
GenerateAnswerResponse.candidate. There should be no more than one
setting per SafetyCategory type. The API blocks any content or response
that fails to meet the thresholds set by these settings. This list
overrides the default setting for each SafetyCategory it covers; if no
SafetySetting is provided for a given SafetyCategory, the API uses the
default safety setting for that category. The harm categories
HARM_CATEGORY_HATE_SPEECH, HARM_CATEGORY_SEXUALLY_EXPLICIT,
HARM_CATEGORY_DANGEROUS_CONTENT, and HARM_CATEGORY_HARASSMENT are
supported.
Refer to the safety settings guide for detailed information on the
available safety settings, and to the safety guidance to learn how to
incorporate safety considerations into your AI applications.
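As an illustrative sketch of overriding the defaults for two categories while leaving the others untouched (assuming the google_generative_ai Dart package; the model name and API key placeholder are hypothetical):

```dart
import 'package:google_generative_ai/google_generative_ai.dart';

void main() async {
  // Hypothetical placeholder; supply your own API key.
  const apiKey = 'YOUR_API_KEY';

  // At most one SafetySetting per HarmCategory. Categories not listed
  // here keep their default thresholds.
  final model = GenerativeModel(
    model: 'gemini-pro',
    apiKey: apiKey,
    safetySettings: [
      SafetySetting(HarmCategory.harassment, HarmBlockThreshold.high),
      SafetySetting(HarmCategory.hateSpeech, HarmBlockThreshold.medium),
    ],
  );

  // Responses that fail to meet the thresholds above are blocked.
  final response = await model.generateContent([Content.text('Hello')]);
  print(response.text);
}
```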
Implementation
final List<SafetySetting> safetySettings;