AI to Cool Down Google's Servers

DeepMind's algorithm saved energy: over nine months, the efficiency gain increased from 12% to 30%
20 August 2018

Google used DeepMind's AI technology to fully automate the cooling system in its data centers. The corporation began using the algorithm in 2016, but at first it only gave engineers recommendations for reducing costs. In August 2018 the system began to work completely autonomously.

Researchers trained the algorithm using reinforcement learning. Every five minutes, the AI collects data from thousands of sensors inside the data center. The algorithm determines which cooling-system configurations reduce energy consumption the most and applies them on its own. Although its operation is fully automated, the company's engineers can intervene at any time.
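The control loop described above can be sketched as follows. This is a minimal, hypothetical illustration of the "read sensors, score candidate configurations, apply the cheapest safe one" cycle; all function names and the toy cost model are assumptions for the example, since Google's actual system is not public.

```python
# Hypothetical sketch of a data-center cooling control step: every five
# minutes, read sensor data, score candidate cooling configurations, and
# apply the one predicted to use the least energy. The cost model below
# is purely illustrative.

def read_sensors():
    # Stand-in for the thousands of sensor readings the article mentions.
    return {"inlet_temp_c": 24.0, "load_kw": 300.0}

def predict_energy_kw(sensors, config):
    # Toy cost model: higher fan speed costs power but removes more heat;
    # letting the room overheat past 27 C incurs a large penalty.
    cooling_cost = 55.0 * config["fan_speed"]
    temp_estimate = sensors["inlet_temp_c"] + 10 * (1 - config["fan_speed"])
    overheat_penalty = max(0.0, temp_estimate - 27.0) * 100
    return 0.1 * sensors["load_kw"] + cooling_cost + overheat_penalty

def choose_config(sensors, candidates):
    # Pick the candidate configuration with the lowest predicted energy.
    return min(candidates, key=lambda c: predict_energy_kw(sensors, c))

candidates = [{"fan_speed": s} for s in (0.4, 0.6, 0.8, 1.0)]
sensors = read_sensors()
best = choose_config(sensors, candidates)
```

Note the trade-off the selection captures: the slowest fan is cheapest to run but pays the overheating penalty, while the fastest wastes power, so an intermediate setting wins. The real system learns such a model from data rather than using a hand-written formula.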

We wanted to achieve energy savings with less operator overhead. Automating the system enabled us to implement more granular actions at greater frequency, while making fewer mistakes.
 

Dan Fuenffinger
Data Centre Operator, Google

In total, the system monitors more than 120 parameters of data-center operation, including air-conditioning control, the opening and closing of windows, fan speeds, and more.

After full automation, DeepMind's algorithm saved even more energy: over nine months, the efficiency gain increased from 12% to 30%.


[Chart: Energy Consumption]

According to Joe Kava, Google's vice president of data centers, the project will help the company save millions of dollars and reduce its carbon dioxide emissions. In the long term, Google representatives say, the system could help address climate change.

MelNet Algorithm to Simulate Person's Voice

It analyzes spectrograms of audio tracks from ordinary TED Talks, captures the speaker's vocal characteristics, and reproduces short utterances
11 June 2019

The Facebook AI Research team has developed MelNet, an algorithm that synthesizes speech with the characteristics of a particular person's voice. For example, it learned to imitate the voice of Bill Gates.

MelNet analyzes spectrograms of audio tracks from ordinary TED Talks, captures the speaker's vocal characteristics, and reproduces short utterances.
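A spectrogram of the kind such models operate on can be computed with a short-time Fourier transform. The sketch below is a generic NumPy illustration on a synthetic 440 Hz tone, not Facebook's implementation (MelNet actually models mel-scaled spectrograms autoregressively); all parameter values are arbitrary examples.

```python
import numpy as np

# Build one second of a 440 Hz test tone.
sr = 16000                      # sample rate in Hz
t = np.arange(sr) / sr
audio = np.sin(2 * np.pi * 440 * t)

# Short-time Fourier transform: slice the signal into overlapping
# windowed frames and take the magnitude of each frame's FFT.
n_fft, hop = 512, 128
window = np.hanning(n_fft)
frames = [audio[i:i + n_fft] * window
          for i in range(0, len(audio) - n_fft, hop)]
spectrogram = np.abs(np.fft.rfft(frames, axis=1)).T  # (freq bins, time frames)

# The strongest frequency bin should sit near 440 Hz.
peak_bin = spectrogram.mean(axis=1).argmax()
peak_hz = peak_bin * sr / n_fft
```

Each column of `spectrogram` describes how energy is distributed across frequencies at one moment in time; it is this time-frequency picture of a voice that the model learns to reproduce.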

The length of the utterances is the algorithm's main limitation. It reproduces short phrases very close to the original. However, a person's intonation changes when speaking on different topics, in different moods, and at different pitches. The algorithm cannot yet imitate this, so long sentences sound artificial.

MIT Technology Review notes that even such an algorithm can greatly affect services like voice bots, where all communication is reduced to an exchange of short remarks.

A similar approach - analyzing speech spectrograms - was used by scientists from Google AI in their work on the Translatotron algorithm, an AI able to translate phrases from one language to another while preserving the speaker's vocal characteristics.