Types for Cloud Speech-to-Text API Client#

class google.cloud.speech_v1.types.Any#
type_url#

Field google.protobuf.Any.type_url

value#

Field google.protobuf.Any.value

class google.cloud.speech_v1.types.CancelOperationRequest#
name#

Field google.longrunning.CancelOperationRequest.name

class google.cloud.speech_v1.types.DeleteOperationRequest#
name#

Field google.longrunning.DeleteOperationRequest.name

class google.cloud.speech_v1.types.Duration#
nanos#

Field google.protobuf.Duration.nanos

seconds#

Field google.protobuf.Duration.seconds

class google.cloud.speech_v1.types.GetOperationRequest#
name#

Field google.longrunning.GetOperationRequest.name

class google.cloud.speech_v1.types.ListOperationsRequest#
filter#

Field google.longrunning.ListOperationsRequest.filter

name#

Field google.longrunning.ListOperationsRequest.name

page_size#

Field google.longrunning.ListOperationsRequest.page_size

page_token#

Field google.longrunning.ListOperationsRequest.page_token

class google.cloud.speech_v1.types.ListOperationsResponse#
next_page_token#

Field google.longrunning.ListOperationsResponse.next_page_token

operations#

Field google.longrunning.ListOperationsResponse.operations

class google.cloud.speech_v1.types.LongRunningRecognizeMetadata#

Describes the progress of a long-running LongRunningRecognize call. It is included in the metadata field of the Operation returned by the GetOperation call of the google::longrunning::Operations service.

progress_percent#

Approximate percentage of audio processed thus far. Guaranteed to be 100 when the audio is fully processed and the results are available.

start_time#

Time when the request was received.

last_update_time#

Time of the most recent processing update.

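A minimal sketch of reading this metadata from an in-flight operation; it assumes operation is the google.api_core.operation.Operation returned by SpeechClient.long_running_recognize, and the helper name print_progress is illustrative:

    def print_progress(operation):
        """Print LongRunningRecognize progress from an in-flight Operation."""
        # operation.metadata is a LongRunningRecognizeMetadata message
        # (or None if no metadata has been received yet).
        metadata = operation.metadata
        if metadata is not None:
            print("progress_percent: {}".format(metadata.progress_percent))
            print("start_time: {}".format(metadata.start_time))
            print("last_update_time: {}".format(metadata.last_update_time))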

class google.cloud.speech_v1.types.LongRunningRecognizeRequest#

The top-level message sent by the client for the LongRunningRecognize method.

config#

Required. Provides information to the recognizer that specifies how to process the request.

audio#

Required. The audio data to be recognized.

class google.cloud.speech_v1.types.LongRunningRecognizeResponse#

The only message returned to the client by the LongRunningRecognize method. It contains the result as zero or more sequential SpeechRecognitionResult messages. It is included in the result.response field of the Operation returned by the GetOperation call of the google::longrunning::Operations service.

results#

Output only. Sequential list of transcription results corresponding to sequential portions of audio.

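A minimal end-to-end sketch, assuming the pre-2.0 Python client surface (speech_v1.SpeechClient with the enums and types modules); the Cloud Storage URI is a placeholder:

    from google.cloud import speech_v1
    from google.cloud.speech_v1 import enums, types

    client = speech_v1.SpeechClient()

    config = types.RecognitionConfig(
        encoding=enums.RecognitionConfig.AudioEncoding.FLAC,
        language_code="en-US",
    )
    # gs://my-bucket/long-audio.flac is a placeholder, not a real object.
    audio = types.RecognitionAudio(uri="gs://my-bucket/long-audio.flac")

    operation = client.long_running_recognize(config=config, audio=audio)

    # Block until processing finishes; the result is a
    # LongRunningRecognizeResponse whose results field holds the
    # sequential transcription results described above.
    response = operation.result(timeout=300)
    for result in response.results:
        print(result.alternatives[0].transcript)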

class google.cloud.speech_v1.types.Operation#
deserialize()#

Creates a new message instance from the given serialized data.

done#

Field google.longrunning.Operation.done

error#

Field google.longrunning.Operation.error

metadata#

Field google.longrunning.Operation.metadata

name#

Field google.longrunning.Operation.name

response#

Field google.longrunning.Operation.response

class google.cloud.speech_v1.types.OperationInfo#
metadata_type#

Field google.longrunning.OperationInfo.metadata_type

response_type#

Field google.longrunning.OperationInfo.response_type

class google.cloud.speech_v1.types.RecognitionAudio#

Contains audio data in the encoding specified in the RecognitionConfig. Either content or uri must be supplied. Supplying both or neither returns google.rpc.Code.INVALID_ARGUMENT. See content limits.

audio_source#

The audio source, which is either inline content or a Google Cloud Storage URI.

content#

The audio data bytes encoded as specified in RecognitionConfig. Note: as with all bytes fields, protocol buffers use a pure binary representation, whereas JSON representations use base64.

uri#

URI that points to a file that contains audio data bytes as specified in RecognitionConfig. The file must not be compressed (for example, gzip). Currently, only Google Cloud Storage URIs are supported, which must be specified in the following format: gs://bucket_name/object_name (other URI formats return google.rpc.Code.INVALID_ARGUMENT). For more information, see Request URIs.

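A minimal sketch of the two mutually exclusive ways to populate this message; the file name and bucket are placeholders:

    import io

    from google.cloud.speech_v1 import types

    # Inline audio: attach the raw bytes as content.
    with io.open("audio.wav", "rb") as f:  # placeholder local file
        inline_audio = types.RecognitionAudio(content=f.read())

    # Cloud Storage audio: reference the object by gs:// URI instead.
    gcs_audio = types.RecognitionAudio(uri="gs://my-bucket/audio.wav")

    # Setting both (or neither) would make the API return
    # INVALID_ARGUMENT, because content and uri share the oneof
    # audio_source field described above.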

class google.cloud.speech_v1.types.RecognitionConfig#

Provides information to the recognizer that specifies how to process the request.

encoding#

Encoding of audio data sent in all RecognitionAudio messages. This field is optional for FLAC and WAV audio files and required for all other audio formats. For details, see AudioEncoding.

sample_rate_hertz#

Sample rate in Hertz of the audio data sent in all RecognitionAudio messages. Valid values are: 8000-48000. 16000 is optimal. For best results, set the sampling rate of the audio source to 16000 Hz. If that’s not possible, use the native sample rate of the audio source (instead of resampling). This field is optional for FLAC and WAV audio files and required for all other audio formats. For details, see AudioEncoding.

audio_channel_count#

Optional. The number of channels in the input audio data. ONLY set this for MULTI-CHANNEL recognition. Valid values for LINEAR16 and FLAC are 1-8. Valid values for OGG_OPUS are 1-254. The only valid value for MULAW, AMR, AMR_WB and SPEEX_WITH_HEADER_BYTE is 1. If 0 or omitted, defaults to one channel (mono). Note: We only recognize the first channel by default. To perform independent recognition on each channel, set enable_separate_recognition_per_channel to true.

enable_separate_recognition_per_channel#

This must be set to true explicitly, with audio_channel_count > 1, to get each channel recognized separately. The recognition result will contain a channel_tag field indicating which channel the result belongs to. If this is not true, only the first channel is recognized. The request is billed cumulatively for all channels recognized: audio_channel_count multiplied by the length of the audio.

language_code#

Required. The language of the supplied audio as a BCP-47 language tag. Example: “en-US”. See Language Support for a list of the currently supported language codes.

max_alternatives#

Optional. Maximum number of recognition hypotheses to be returned. Specifically, the maximum number of SpeechRecognitionAlternative messages within each SpeechRecognitionResult. The server may return fewer than max_alternatives. Valid values are 0-30. A value of 0 or 1 will return a maximum of one. If omitted, will return a maximum of one.

profanity_filter#

Optional. If set to true, the server will attempt to filter out profanities, replacing all but the initial character in each filtered word with asterisks, e.g. “f***”. If set to false or omitted, profanities won’t be filtered out.

speech_contexts#

Optional. Array of SpeechContext messages. A means to provide context to assist the speech recognition. For more information, see Phrase Hints.

enable_word_time_offsets#

Optional. If true, the top result includes a list of words and the start and end time offsets (timestamps) for those words. If false, no word-level time offset information is returned. The default is false.

enable_automatic_punctuation#

Optional. If true, adds punctuation to recognition result hypotheses. This feature is only available in select languages. Setting this for requests in other languages has no effect. The default value false does not add punctuation to result hypotheses. Note: This is currently offered as an experimental service, complimentary to all users. In the future this may be exclusively available as a premium feature.

metadata#

Optional. Metadata regarding this request.

model#

Optional. Which model to select for the given request. Select the model best suited to your domain to get best results. If a model is not explicitly specified, then we auto-select a model based on the parameters in the RecognitionConfig.

  • command_and_search: Best for short queries such as voice commands or voice search.

  • phone_call: Best for audio that originated from a phone call (typically recorded at an 8khz sampling rate).

  • video: Best for audio that originated from video or includes multiple speakers. Ideally the audio is recorded at a 16khz or greater sampling rate. This is a premium model that costs more than the standard rate.

  • default: Best for audio that is not one of the specific audio models. For example, long-form audio. Ideally the audio is high-fidelity, recorded at a 16khz or greater sampling rate.

use_enhanced#

Optional. Set to true to use an enhanced model for speech recognition. If use_enhanced is set to true and the model field is not set, then an appropriate enhanced model is chosen if one exists for the audio. If use_enhanced is true and an enhanced version of the specified model does not exist, then the speech is recognized using the standard version of the specified model.

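A minimal sketch of a populated config, assuming the pre-2.0 enums module; all values are illustrative except language_code, the only required field:

    from google.cloud.speech_v1 import enums, types

    config = types.RecognitionConfig(
        # Optional for FLAC and WAV files, required for other formats.
        encoding=enums.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,        # 16000 Hz is optimal
        language_code="en-US",          # required
        max_alternatives=2,             # ask for up to two hypotheses
        enable_word_time_offsets=True,  # populate WordInfo timestamps
        model="phone_call",             # see the model list above
        use_enhanced=True,              # falls back to the standard model
    )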

class google.cloud.speech_v1.types.RecognitionMetadata#

Description of audio data to be recognized.

interaction_type#

The use case most closely describing the audio content to be recognized.

industry_naics_code_of_audio#

The industry vertical to which this speech recognition request most closely applies. This is most indicative of the topics contained in the audio. Use the 6-digit NAICS code to identify the industry vertical - see https://www.naics.com/search/.

microphone_distance#

The audio type that most closely describes the audio being recognized.

original_media_type#

The original media the speech was recorded on.

recording_device_type#

The type of device the speech was recorded with.

recording_device_name#

The device used to make the recording. Examples: ‘Nexus 5X’ or ‘Polycom SoundStation IP 6000’ or ‘POTS’ or ‘VoIP’ or ‘Cardioid Microphone’.

original_mime_type#

MIME type of the original audio file. For example audio/m4a, audio/x-alaw-basic, audio/mp3, audio/3gpp. A list of possible audio MIME types is maintained at http://www.iana.org/assignments/media-types/media-types.xhtml#audio

audio_topic#

Description of the content, e.g. “Recordings of federal supreme court hearings from 2012”.

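A minimal sketch, assuming the pre-2.0 enums module exposes the nested InteractionType, MicrophoneDistance, OriginalMediaType, and RecordingDeviceType enums; all field values are illustrative:

    from google.cloud.speech_v1 import enums, types

    metadata = types.RecognitionMetadata(
        interaction_type=enums.RecognitionMetadata.InteractionType.PHONE_CALL,
        microphone_distance=enums.RecognitionMetadata.MicrophoneDistance.NEARFIELD,
        original_media_type=enums.RecognitionMetadata.OriginalMediaType.AUDIO,
        recording_device_type=enums.RecognitionMetadata.RecordingDeviceType.PHONE_LINE,
        recording_device_name="Polycom SoundStation IP 6000",
        original_mime_type="audio/mp3",
        audio_topic="customer support calls",  # illustrative description
    )

    # Attach it to a request through RecognitionConfig.metadata.
    config = types.RecognitionConfig(language_code="en-US", metadata=metadata)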

class google.cloud.speech_v1.types.RecognizeRequest#

The top-level message sent by the client for the Recognize method.

config#

Required. Provides information to the recognizer that specifies how to process the request.

audio#

Required. The audio data to be recognized.

class google.cloud.speech_v1.types.RecognizeResponse#

The only message returned to the client by the Recognize method. It contains the result as zero or more sequential SpeechRecognitionResult messages.

results#

Output only. Sequential list of transcription results corresponding to sequential portions of audio.

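A minimal sketch of a synchronous request and of walking the returned results, assuming the pre-2.0 client surface; the URI is a placeholder:

    from google.cloud import speech_v1
    from google.cloud.speech_v1 import enums, types

    client = speech_v1.SpeechClient()

    config = types.RecognitionConfig(
        encoding=enums.RecognitionConfig.AudioEncoding.FLAC,
        language_code="en-US",
    )
    audio = types.RecognitionAudio(uri="gs://my-bucket/audio.flac")  # placeholder

    response = client.recognize(config=config, audio=audio)

    # Each result covers a sequential portion of the audio; the first
    # alternative is the most probable transcription.
    for result in response.results:
        top = result.alternatives[0]
        print(u"{} (confidence {:.2f})".format(top.transcript, top.confidence))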

class google.cloud.speech_v1.types.SpeechContext#

Provides “hints” to the speech recognizer to favor specific words and phrases in the results.

phrases#

Optional. A list of strings containing words and phrases “hints” so that the speech recognition is more likely to recognize them. This can be used to improve the accuracy for specific words and phrases, for example, if specific commands are typically spoken by the user. This can also be used to add additional words to the vocabulary of the recognizer. See usage limits.

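A minimal sketch; the phrase strings are illustrative:

    from google.cloud.speech_v1 import types

    # Bias recognition toward domain-specific terms.
    context = types.SpeechContext(phrases=["SoundStation", "Polycom"])

    # speech_contexts is a repeated field, so pass a list.
    config = types.RecognitionConfig(
        language_code="en-US",
        speech_contexts=[context],
    )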

class google.cloud.speech_v1.types.SpeechRecognitionAlternative#

Alternative hypotheses (a.k.a. n-best list).

transcript#

Output only. Transcript text representing the words that the user spoke.

confidence#

Output only. The confidence estimate between 0.0 and 1.0. A higher number indicates an estimated greater likelihood that the recognized words are correct. This field is set only for the top alternative of a non-streaming result, or of a streaming result where is_final=true. This field is not guaranteed to be accurate and users should not rely on it to be always provided. The default of 0.0 is a sentinel value indicating confidence was not set.

words#

Output only. A list of word-specific information for each recognized word. Note: When enable_speaker_diarization is true, you will see all the words from the beginning of the audio.

class google.cloud.speech_v1.types.SpeechRecognitionResult#

A speech recognition result corresponding to a portion of the audio.

alternatives#

Output only. May contain one or more recognition hypotheses (up to the maximum specified in max_alternatives). These alternatives are ordered in terms of accuracy, with the top (first) alternative being the most probable, as ranked by the recognizer.

channel_tag#

For multi-channel audio, this is the channel number corresponding to the recognized result for the audio from that channel. For audio_channel_count = N, its output values can range from ‘1’ to ‘N’.

class google.cloud.speech_v1.types.Status#
code#

Field google.rpc.Status.code

details#

Field google.rpc.Status.details

message#

Field google.rpc.Status.message

class google.cloud.speech_v1.types.StreamingRecognitionConfig#

Provides information to the recognizer that specifies how to process the request.

config#

Required. Provides information to the recognizer that specifies how to process the request.

single_utterance#

Optional. If false or omitted, the recognizer will perform continuous recognition (continuing to wait for and process audio even if the user pauses speaking) until the client closes the input stream (gRPC API) or until the maximum time limit has been reached. It may return multiple StreamingRecognitionResults with the is_final flag set to true. If true, the recognizer will detect a single spoken utterance. When it detects that the user has paused or stopped speaking, it will return an END_OF_SINGLE_UTTERANCE event and cease recognition. It will return no more than one StreamingRecognitionResult with the is_final flag set to true.

interim_results#

Optional. If true, interim results (tentative hypotheses) may be returned as they become available (these interim results are indicated with the is_final=false flag). If false or omitted, only is_final=true result(s) are returned.

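A minimal sketch pairing the nested RecognitionConfig with the two streaming-only flags, assuming the pre-2.0 enums module:

    from google.cloud.speech_v1 import enums, types

    streaming_config = types.StreamingRecognitionConfig(
        config=types.RecognitionConfig(
            encoding=enums.RecognitionConfig.AudioEncoding.LINEAR16,
            sample_rate_hertz=16000,
            language_code="en-US",
        ),
        single_utterance=False,  # keep recognizing across pauses
        interim_results=True,    # emit tentative hypotheses as they form
    )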

class google.cloud.speech_v1.types.StreamingRecognitionResult#

A streaming speech recognition result corresponding to a portion of the audio that is currently being processed.

alternatives#

Output only. May contain one or more recognition hypotheses (up to the maximum specified in max_alternatives). These alternatives are ordered in terms of accuracy, with the top (first) alternative being the most probable, as ranked by the recognizer.

is_final#

Output only. If false, this StreamingRecognitionResult represents an interim result that may change. If true, this is the final time the speech service will return this particular StreamingRecognitionResult; the recognizer will not return any further hypotheses for this portion of the transcript and the corresponding audio.

stability#

Output only. An estimate of the likelihood that the recognizer will not change its guess about this interim result. Values range from 0.0 (completely unstable) to 1.0 (completely stable). This field is only provided for interim results (is_final=false). The default of 0.0 is a sentinel value indicating stability was not set.

result_end_time#

Output only. Time offset of the end of this result relative to the beginning of the audio.

channel_tag#

For multi-channel audio, this is the channel number corresponding to the recognized result for the audio from that channel. For audio_channel_count = N, its output values can range from ‘1’ to ‘N’.

language_code#

Output only. The BCP-47 language tag of the language in this result. This language code was detected to have the most likelihood of being spoken in the audio.

class google.cloud.speech_v1.types.StreamingRecognizeRequest#

The top-level message sent by the client for the StreamingRecognize method. Multiple StreamingRecognizeRequest messages are sent. The first message must contain a streaming_config message and must not contain audio data. All subsequent messages must contain audio data and must not contain a streaming_config message.

streaming_request#

The streaming request, which is either a streaming config or audio content.

streaming_config#

Provides information to the recognizer that specifies how to process the request. The first StreamingRecognizeRequest message must contain a streaming_config message.

audio_content#

The audio data to be recognized. Sequential chunks of audio data are sent in sequential StreamingRecognizeRequest messages. The first StreamingRecognizeRequest message must not contain audio_content data and all subsequent StreamingRecognizeRequest messages must contain audio_content data. The audio bytes must be encoded as specified in RecognitionConfig. Note: as with all bytes fields, protocol buffers use a pure binary representation (not base64). See content limits.

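A minimal sketch of a generator that yields requests in the order this protocol requires; request_stream and audio_chunks are illustrative names:

    from google.cloud.speech_v1 import types

    def request_stream(streaming_config, audio_chunks):
        """Yield StreamingRecognizeRequests: config first, then audio."""
        # The first request must carry only the streaming_config ...
        yield types.StreamingRecognizeRequest(streaming_config=streaming_config)
        # ... and every subsequent request only audio_content bytes.
        for chunk in audio_chunks:
            yield types.StreamingRecognizeRequest(audio_content=chunk)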

class google.cloud.speech_v1.types.StreamingRecognizeResponse#

StreamingRecognizeResponse is the only message returned to the client by StreamingRecognize. A series of zero or more StreamingRecognizeResponse messages are streamed back to the client. If there is no recognizable audio, and single_utterance is set to false, then no messages are streamed back to the client.

Here’s an example of a series of seven StreamingRecognizeResponses that might be returned while processing audio:

  1. results { alternatives { transcript: "tube" } stability: 0.01 }

  2. results { alternatives { transcript: "to be a" } stability: 0.01 }

  3. results { alternatives { transcript: "to be" } stability: 0.9 } results { alternatives { transcript: " or not to be" } stability: 0.01 }

  4. results { alternatives { transcript: "to be or not to be" confidence: 0.92 } alternatives { transcript: "to bee or not to bee" } is_final: true }

  5. results { alternatives { transcript: " that's" } stability: 0.01 }

  6. results { alternatives { transcript: " that is" } stability: 0.9 } results { alternatives { transcript: " the question" } stability: 0.01 }

  7. results { alternatives { transcript: " that is the question" confidence: 0.98 } alternatives { transcript: " that was the question" } is_final: true }

Notes:

  • Only two of the above responses, #4 and #7, contain final results; they are indicated by is_final: true. Concatenating these together generates the full transcript: “to be or not to be that is the question”.

  • The others contain interim results. #3 and #6 contain two interim results: the first portion has a high stability and is less likely to change; the second portion has a low stability and is very likely to change. A UI designer might choose to show only high stability results.

  • The specific stability and confidence values shown above are only for illustrative purposes. Actual values may vary.

  • In each response, only one of these fields will be set: error, speech_event_type, or one or more (repeated) results.

error#

Output only. If set, returns a google.rpc.Status message that specifies the error for the operation.

results#

Output only. This repeated list contains zero or more results that correspond to consecutive portions of the audio currently being processed. It contains zero or one is_final=true result (the newly settled portion), followed by zero or more is_final=false results (the interim results).

speech_event_type#

Output only. Indicates the type of speech event.

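A minimal sketch of consuming such a stream, assuming responses is the iterator returned by the client's streaming_recognize call; handle_responses is an illustrative name:

    def handle_responses(responses):
        """Print transcripts from a stream of StreamingRecognizeResponses."""
        for response in responses:
            for result in response.results:
                top = result.alternatives[0]
                if result.is_final:
                    # Settled portion: safe to append to a transcript.
                    print(u"final: {}".format(top.transcript))
                else:
                    # Interim hypothesis; higher stability means it is
                    # less likely to change.
                    print(u"interim: {} (stability {:.2f})".format(
                        top.transcript, result.stability))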

class google.cloud.speech_v1.types.Timestamp#
nanos#

Field google.protobuf.Timestamp.nanos

seconds#

Field google.protobuf.Timestamp.seconds

class google.cloud.speech_v1.types.WordInfo#

Word-specific information for recognized words.

start_time#

Output only. Time offset relative to the beginning of the audio, and corresponding to the start of the spoken word. This field is only set if enable_word_time_offsets=true and only in the top hypothesis. This is an experimental feature and the accuracy of the time offset can vary.

end_time#

Output only. Time offset relative to the beginning of the audio, and corresponding to the end of the spoken word. This field is only set if enable_word_time_offsets=true and only in the top hypothesis. This is an experimental feature and the accuracy of the time offset can vary.

word#

Output only. The word corresponding to this set of information.

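A minimal sketch of reading these offsets, assuming enable_word_time_offsets=True was set in the RecognitionConfig; print_word_offsets is an illustrative name:

    def print_word_offsets(alternative):
        """Print word timings from a SpeechRecognitionAlternative."""
        for word_info in alternative.words:
            # start_time/end_time are protobuf Durations (seconds + nanos).
            start = word_info.start_time.seconds + word_info.start_time.nanos / 1e9
            end = word_info.end_time.seconds + word_info.end_time.nanos / 1e9
            print(u"{}: {:.2f}s - {:.2f}s".format(word_info.word, start, end))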