serverStreamingPredict method

Stream<StreamingPredictResponse> serverStreamingPredict(
  StreamingPredictRequest request,
)

Performs a server-side streaming online prediction request for Vertex LLM streaming.

Throws an http.ClientException if there was a problem communicating with the API service. Throws a StatusException if the API call failed with a Status message. Throws a ServiceException for any other failure.
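
A minimal calling sketch follows. It assumes a client object that exposes this method and a pre-built StreamingPredictRequest; how the client and request are constructed, and the import for the package's own types (request, response, StatusException, ServiceException), are assumptions not shown on this page.

import 'package:http/http.dart' as http;

// Consumes the streamed prediction chunks and reports failures.
// `client` is any object exposing serverStreamingPredict; building it
// and `request` is outside the scope of this sketch.
Future<void> streamPrediction(dynamic client, dynamic request) async {
  try {
    await for (final response in client.serverStreamingPredict(request)) {
      // Each StreamingPredictResponse carries one chunk of the prediction.
      print(response);
    }
  } on http.ClientException catch (e) {
    // Transport-level problem while communicating with the API service.
    print('Network error: $e');
  } catch (e) {
    // StatusException, ServiceException, or any other failure.
    print('Prediction failed: $e');
  }
}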

Implementation

Stream<StreamingPredictResponse> serverStreamingPredict(
  StreamingPredictRequest request,
) {
  // Build the v1beta1 :serverStreamingPredict URL for the request's endpoint.
  final url = Uri.https(
    _host,
    '/v1beta1/${request.endpoint}:serverStreamingPredict',
  );
  // POST the request and decode each streamed JSON chunk into a response.
  return _client
      .postStreaming(url, body: request)
      .map(StreamingPredictResponse.fromJson);
}