Scientists from MIT created Speech2Face, a machine-learning model that generates a portrait of a person from a spectrogram of their speech. It infers the speaker's gender and age and, judging by the accent, their ethnicity.
Image: real photos of people, face reconstructions from video, and portraits generated from voice alone.
The model is trained on the AVSpeech dataset of short clips in which the audio and video tracks are already separated. The collection contains about a million such files, featuring roughly a hundred thousand people.
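As a rough illustration of that kind of preprocessing, here is a minimal sketch of separating the audio and video tracks of a single clip. It assumes ffmpeg is installed, and the file names are hypothetical:

```python
# Separate the audio and video tracks of one AVSpeech-style clip.
# File names here are made up for illustration.
import subprocess

clip = "clip_0001.mp4"

# Extract the audio track as a mono 16 kHz WAV, convenient for
# spectrogram computation later.
subprocess.run(
    ["ffmpeg", "-y", "-i", clip, "-vn", "-ac", "1", "-ar", "16000",
     "audio_0001.wav"],
    check=True,
)

# Keep the video track (with the audio stream dropped) for extracting
# face frames.
subprocess.run(
    ["ffmpeg", "-y", "-i", clip, "-an", "-c:v", "copy", "video_0001.mp4"],
    check=True,
)
```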
Given a short video as input, one part of the algorithm reconstructs the person's face from the video frames as a frontal view with a neutral expression. The other part works with the audio track: it computes a spectrogram, encodes the voice, and generates a portrait using a parallel neural network.
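The following is a minimal sketch of that two-branch idea, not the authors' actual architecture: a voice encoder maps a speech spectrogram to a face feature vector, and a decoder turns that vector into a small frontal image. All layer sizes and class names here are illustrative assumptions.

```python
import torch
import torch.nn as nn

class VoiceEncoder(nn.Module):
    """Maps a (1, freq, time) spectrogram to a face feature vector."""
    def __init__(self, feature_dim=512):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, feature_dim)

    def forward(self, spectrogram):
        x = self.conv(spectrogram).flatten(1)
        return self.fc(x)

class FaceDecoder(nn.Module):
    """Turns a face feature vector into a small frontal RGB image."""
    def __init__(self, feature_dim=512):
        super().__init__()
        self.fc = nn.Linear(feature_dim, 128 * 8 * 8)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, features):
        x = self.fc(features).view(-1, 128, 8, 8)
        return self.deconv(x)  # (batch, 3, 32, 32) image

# During training, the voice encoder's output would be pushed toward the
# face features extracted from the video frames, so that at test time
# the decoder can draw a portrait from the voice alone.
spec = torch.randn(1, 1, 257, 100)              # dummy spectrogram
face = FaceDecoder()(VoiceEncoder()(spec))
print(face.shape)                               # torch.Size([1, 3, 32, 32])
```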
A quality check showed that the model determines gender well but cannot yet estimate age to within ten years. A problem with ethnicity also surfaced: the algorithm rendered the faces of people of European or Asian origin best. According to the researchers, this is due to the uneven distribution of ethnicities in the training set.
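One way such a check could be run, sketched here under stated assumptions: predict attributes from the real face and from the voice-generated face with an off-the-shelf classifier, then report per-attribute agreement. The `predict_attributes` function stands in for any face-attribute classifier and is hypothetical.

```python
from collections import Counter

def attribute_agreement(pairs, predict_attributes):
    """pairs: iterable of (real_image, generated_image) tuples.

    predict_attributes(image) is assumed to return a dict such as
    {"gender": "f", "age_bin": "30-40", "ethnicity": "european"}.
    """
    hits, totals = Counter(), Counter()
    for real, generated in pairs:
        real_attrs = predict_attributes(real)
        gen_attrs = predict_attributes(generated)
        for name in real_attrs:
            totals[name] += 1
            hits[name] += real_attrs[name] == gen_attrs[name]
    # Fraction of pairs where the generated face matches the real one,
    # per attribute; low agreement on "age_bin" or "ethnicity" would
    # reflect the weaknesses described above.
    return {name: hits[name] / totals[name] for name in totals}
```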