NVIDIA Presents Self-Driving Safety Report

In particular, the creators say that the key aspect of safety is the autonomy of the self-driving vehicle
24 October 2018

NVIDIA has published a report on the safety of its self-driving vehicles, in which it tries to cover every aspect of the new machines. The document's main emphasis is on proving the safety of autopilot systems.

In the report, the company describes some of its technical development processes. In particular, the creators say that the key aspect of safety is the autonomy of the self-driving vehicle: the onboard AI must be powerful enough to make decisions on its own in most situations rather than requesting commands from a control room. However, this is not always the case.

The report states that the safety of self-driving vehicles rests on four fundamental principles.

The first is a powerful artificial intelligence platform capable of deep learning. Such a platform must be able to work with sensors, surround-view systems and other devices, and of course it will require powerful hardware.

The second important aspect is data processing. Each vehicle is expected to generate petabytes of data, so some of the information will need to be processed remotely in data centers. All of this will require a completely new computing architecture and infrastructure.
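To see why the data volume pushes processing into data centers, here is a back-of-envelope sketch in Python. The sensor counts and bit rates are illustrative assumptions, not figures from the NVIDIA report.

```python
# Rough estimate of daily raw sensor data for one test vehicle.
# All sensor counts and bit rates below are illustrative assumptions.

CAMERA_COUNT = 8      # assumed surround-view cameras
CAMERA_GBPS = 0.3     # assumed ~0.3 GB/s per uncompressed camera stream
LIDAR_GBPS = 0.07     # assumed lidar point-cloud rate
RADAR_GBPS = 0.01     # assumed combined radar rate
HOURS_DRIVEN = 8      # assumed length of a test shift

total_gbps = CAMERA_COUNT * CAMERA_GBPS + LIDAR_GBPS + RADAR_GBPS
daily_tb = total_gbps * 3600 * HOURS_DRIVEN / 1000

print(f"~{total_gbps:.2f} GB/s raw, ~{daily_tb:.1f} TB per {HOURS_DRIVEN}-hour day")
# With these assumptions a single car produces tens of terabytes a day,
# so a test fleet quickly reaches the petabyte scale mentioned in the report.
```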

Self-Driving Safety Report Illustration

The third principle is testing self-driving vehicles in real conditions, on ordinary roads and in a variety of driving situations. This makes it possible to collect more information about trips and supplement it with conventional computer simulation.

The fourth and final principle covers safety measures for the drivers involved in testing. The NVIDIA report notes that test participants must be trained before operating such vehicles.

In 2017, NVIDIA introduced a new version of its Drive PX platform for self-driving vehicles. The company claimed it would become the basis for Level 5 autonomy (fully autonomous driving), but so far this has not happened.

MelNet Algorithm Simulates a Person's Voice

It analyzes the spectrograms of audio tracks from ordinary TED Talks, notes the speaker's speech characteristics and reproduces short utterances
11 June 2019

The Facebook AI Research team has developed MelNet, an algorithm that synthesizes speech with the characteristics of a particular person's voice. For example, it has learned to imitate the voice of Bill Gates.

MelNet analyzes the spectrograms of audio tracks from ordinary TED Talks, notes the speaker's speech characteristics and reproduces short utterances.
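MelNet itself is not reproduced here, but the kind of mel spectrogram it analyzes can be computed with a minimal Python sketch like the one below, assuming the librosa library; "ted_talk.wav" is a hypothetical filename.

```python
# Minimal sketch: compute a mel spectrogram of a speech recording.
# Assumes the librosa library; "ted_talk.wav" is a hypothetical filename.
import librosa
import numpy as np

audio, sr = librosa.load("ted_talk.wav", sr=22050)   # load mono audio at 22.05 kHz

# Short-time Fourier transform folded onto a mel frequency scale.
mel = librosa.feature.melspectrogram(
    y=audio, sr=sr, n_fft=2048, hop_length=512, n_mels=80
)
mel_db = librosa.power_to_db(mel, ref=np.max)        # log scale, as speech models usually expect

print(mel_db.shape)  # (n_mels, time_frames): the 2-D representation a model can analyze
```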

The algorithm's capabilities are limited mainly by the length of those utterances. It reproduces short phrases very close to the original, but a person's intonation changes when they speak on different topics, in different moods and at different pitches. The algorithm cannot yet imitate this, so long sentences sound artificial.

MIT Technology Review notes that even such an algorithm could greatly affect services like voice bots, where all communication is reduced to an exchange of short remarks.

A similar approach - analyzing speech spectrograms - was used by scientists from Google AI when working on the Translatotron algorithm, an AI that can translate phrases from one language to another while preserving the characteristics of the speaker's voice.