GenerationConfig class final
Configuration options for model generation and outputs. Not all parameters are configurable for every model.
- Inheritance
-
- Object
- ProtoMessage
- GenerationConfig
Constructors
-
GenerationConfig({int? candidateCount, List<String> stopSequences = const [], int? maxOutputTokens, double? temperature, double? topP, int? topK, int? seed, String responseMimeType = '', Schema? responseSchema, Value? responseJsonSchema, Value? responseJsonSchemaOrdered, double? presencePenalty, double? frequencyPenalty, bool? responseLogprobs, int? logprobs, bool? enableEnhancedCivicAnswers, List<GenerationConfig_Modality> responseModalities = const [], SpeechConfig? speechConfig, ThinkingConfig? thinkingConfig, ImageConfig? imageConfig, GenerationConfig_MediaResolution? mediaResolution})
- GenerationConfig.fromJson(Object? j)
-
factory
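For illustration, a minimal sketch of constructing a config with commonly tuned options. It assumes the generated Dart bindings for this package are imported; the values are arbitrary examples, not recommendations.

```dart
// Sketch: a GenerationConfig with commonly tuned sampling options.
final config = GenerationConfig(
  candidateCount: 1,       // one response candidate
  maxOutputTokens: 1024,   // cap on response length
  temperature: 0.7,        // randomness of the output
  topP: 0.95,              // nucleus-sampling cutoff
  topK: 40,                // top-k sampling cutoff
  stopSequences: ['END'],  // stop at the first occurrence
);
```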
Properties
- candidateCount → int?
-
Optional. Number of generated responses to return. If unset, this will
default to 1. Please note that this doesn't work for previous generation
models (Gemini 1.0 family).
final
- enableEnhancedCivicAnswers → bool?
-
Optional. Enables enhanced civic answers. It may not be available for all
models.
final
- frequencyPenalty → double?
-
Optional. Frequency penalty applied to the next token's logprobs,
multiplied by the number of times each token has been seen in the response
so far.
final
- hashCode → int
-
The hash code for this object.
no setter inherited
- imageConfig → ImageConfig?
-
Optional. Config for image generation.
An error will be returned if this field is set for models that don't
support these config options.
final
- logprobs → int?
-
Optional. Only valid if
google.ai.generativelanguage.v1beta.GenerationConfig.response_logprobs is
set. This sets the number of top logprobs to return at each decoding step
in Candidate.logprobs_result. The number must be in the range [0, 20].
final
- maxOutputTokens → int?
-
Optional. The maximum number of tokens to include in a response candidate.
final
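As a sketch of how the two logprob fields interact (per the descriptions above, `logprobs` is only honored when `responseLogprobs` is set):

```dart
// Sketch: request per-token log probabilities in the response.
// logprobs must be in the range [0, 20] and is only valid when
// responseLogprobs is true.
final logprobConfig = GenerationConfig(
  responseLogprobs: true,
  logprobs: 5,
);
```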
- mediaResolution → GenerationConfig_MediaResolution?
-
Optional. If specified, the media resolution specified will be used.
final
- presencePenalty → double?
-
Optional. Presence penalty applied to the next token's logprobs if the
token has already been seen in the response.
final
- qualifiedName → String
-
The fully qualified name of this message, i.e.,
google.protobuf.Duration or google.rpc.ErrorInfo.
final inherited
- responseJsonSchema → Value?
-
Optional. Output schema of the generated response. This is an alternative
to response_schema that accepts JSON Schema.
final
- responseJsonSchemaOrdered → Value?
-
Optional. An internal detail. Use responseJsonSchema rather than this
field.
final
- responseLogprobs → bool?
-
Optional. If true, export the logprobs results in response.
final
- responseMimeType → String
-
Optional. MIME type of the generated candidate text.
Supported MIME types are:
- text/plain: (default) Text output.
- application/json: JSON response in the response candidates.
- text/x.enum: ENUM as a string response in the response candidates.
Refer to the docs for a list of all supported text MIME types.
final
- responseModalities → List<GenerationConfig_Modality>
-
Optional. The requested modalities of the response. Represents the set of
modalities that the model can return, and should be expected in the
response. This is an exact match to the modalities of the response.
final
- responseSchema → Schema?
-
Optional. Output schema of the generated candidate text. Schemas must be a
subset of the OpenAPI schema
and can be objects, primitives or arrays.
final
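A sketch of requesting structured JSON output by combining `responseMimeType` with a schema. The construction of the `Schema` message itself is model- and package-specific and is left out here; with only the MIME type set, the model returns free-form JSON.

```dart
// Sketch: ask the model for JSON output. Pairing this with a
// responseSchema (not shown; its fields depend on the Schema message)
// constrains the shape of the returned JSON.
final jsonConfig = GenerationConfig(
  responseMimeType: 'application/json',
);
```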
- runtimeType → Type
-
A representation of the runtime type of the object.
no setter inherited
- seed → int?
-
Optional. Seed used in decoding. If not set, the request uses a randomly
generated seed.
final
- speechConfig → SpeechConfig?
-
Optional. The speech generation config.
final
- stopSequences → List<String>
-
Optional. The set of character sequences (up to 5) that will stop output
generation. If specified, the API will stop at the first appearance of a
stop_sequence. The stop sequence will not be included as part of the
response.
final
- temperature → double?
-
Optional. Controls the randomness of the output.
final
- thinkingConfig → ThinkingConfig?
-
Optional. Config for thinking features.
An error will be returned if this field is set for models that don't
support thinking.
final
- topK → int?
-
Optional. The maximum number of tokens to consider when sampling.
final
- topP → double?
-
Optional. The maximum cumulative probability of tokens to consider when
sampling.
final
Methods
-
noSuchMethod(Invocation invocation) → dynamic
-
Invoked when a nonexistent method or property is accessed.
inherited
-
toJson() → Object
-
override
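Given the `toJson()` and `GenerationConfig.fromJson(Object? j)` signatures above, a config can be round-tripped through its proto JSON representation. A minimal sketch:

```dart
// Sketch: serialize a config to proto JSON and rebuild it.
final json = GenerationConfig(temperature: 0.5).toJson();
final restored = GenerationConfig.fromJson(json);
```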
-
toString() → String
-
A string representation of this object.
override
Operators
-
operator ==(Object other) → bool
-
The equality operator.
inherited
Constants
- fullyQualifiedName → const String