The Google Cloud team has announced the general availability of the Cloud Text-to-Speech speech synthesis API, together with a beta audio profiles feature and support for several new languages. The Cloud Speech-to-Text transcription service, in turn, can now distinguish between different speakers and automatically determine the language of a recording from several candidates.
With the move to general availability, the API for converting written text into spoken audio supports a number of new languages and voices generated with WaveNet technology. In total, 14 languages and dialects are available, covered by 30 standard voices and 26 WaveNet-based ones.
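As an illustration, here is a minimal sketch of a synthesis request using the Python client library (google-cloud-texttospeech). The voice name en-US-Wavenet-D and the output file name are example values, and exact class paths may differ between library versions:

```python
from google.cloud import texttospeech

# Create a client for the Cloud Text-to-Speech API.
client = texttospeech.TextToSpeechClient()

# The text to synthesize.
synthesis_input = texttospeech.SynthesisInput(text="Hello from Cloud Text-to-Speech!")

# Request one of the WaveNet-based voices (example voice name).
voice = texttospeech.VoiceSelectionParams(
    language_code="en-US",
    name="en-US-Wavenet-D",
)

# Ask for MP3 output.
audio_config = texttospeech.AudioConfig(
    audio_encoding=texttospeech.AudioEncoding.MP3,
)

response = client.synthesize_speech(
    input=synthesis_input, voice=voice, audio_config=audio_config
)

# Write the binary audio content to a local file.
with open("output.mp3", "wb") as out:
    out.write(response.audio_content)
```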
The audio profiles feature is available in beta. It automatically optimizes the synthesized audio for a particular playback device: smartwatches and other wearables, smartphones, headphones, regular and stereo speakers, smart home audio systems, or car speakers. A "default" profile is also available.
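Building on the previous sketch, a profile is requested by attaching an effects profile ID to the audio configuration. The value "handset-class-device" used below is one of the documented device classes (others include "wearable-class-device", "headphone-class-device", and similar); which profile fits depends on the target hardware:

```python
# Same request as above, but with an audio profile that optimizes
# the output for smartphone playback ("handset-class-device").
audio_config = texttospeech.AudioConfig(
    audio_encoding=texttospeech.AudioEncoding.MP3,
    effects_profile_id=["handset-class-device"],
)

response = client.synthesize_speech(
    input=synthesis_input, voice=voice, audio_config=audio_config
)
```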
The Cloud Speech-to-Text API has gained speaker diarization, the ability to recognize speakers by voice. Using machine learning, the system separates the utterances of different people during transcription and tags them with speaker numbers. However, the number of speakers has to be specified before the audio file is processed.
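A rough sketch of such a request with the Python client library (google-cloud-speech) is shown below. The bucket URI and speaker count are example values, and newer library versions move the diarization flags into a dedicated SpeakerDiarizationConfig object, so field names may vary:

```python
from google.cloud import speech

client = speech.SpeechClient()

# Audio file to transcribe (example Cloud Storage URI).
audio = speech.RecognitionAudio(uri="gs://my-bucket/conversation.wav")

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    language_code="en-US",
    # Enable diarization and tell the service how many speakers to expect.
    enable_speaker_diarization=True,
    diarization_speaker_count=2,
)

response = client.recognize(config=config, audio=audio)

# Each recognized word carries a speaker_tag identifying who said it;
# the final result aggregates all words with their tags.
for word in response.results[-1].alternatives[0].words:
    print(f"speaker {word.speaker_tag}: {word.word}")
```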
The Google Cloud team has also added automatic language detection for recordings. When calling the API from their applications, developers can specify up to four candidate languages in a single request. At the time of writing, the tool supports 120 languages.
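In practice this means passing one primary language code plus up to three alternatives. The sketch below uses the v1p1beta1 API surface, where the feature first appeared; the language codes and the bucket URI are example values:

```python
from google.cloud import speech_v1p1beta1 as speech

client = speech.SpeechClient()

audio = speech.RecognitionAudio(uri="gs://my-bucket/voice-command.wav")

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    # Primary language plus up to three alternatives; the service reports
    # which language it actually detected in each result.
    language_code="en-US",
    alternative_language_codes=["es-ES", "fr-FR", "de-DE"],
)

response = client.recognize(config=config, audio=audio)

for result in response.results:
    print(result.language_code, result.alternatives[0].transcript)
```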
For a long time, Google used its speech synthesis technology only in its own products. It became available to third-party developers in March 2018 with a choice of 32 voices and 12 languages. The transcription service, for its part, used to be called the Cloud Speech API; it received its current name in April 2018, along with new models for analyzing phone calls and video.