Intel Presents Neural Compute Stick 2

Neural Compute Stick 2 is a self-contained neural network accelerator on a USB stick
15 November 2018

At a conference in Beijing, Intel introduced the Neural Compute Stick 2, a device that simplifies the development of smart software for edge devices. These include not only network equipment but also IoT systems, video cameras, industrial robots, medical systems and drones. The solution is intended primarily for projects that use computer vision.

The Neural Compute Stick 2 is a self-contained neural network accelerator on a USB stick. It should speed up and simplify software development for edge devices by offloading most of the heavy neural network computation to the specialized Intel Movidius Myriad X processor, whose dedicated Neural Compute Engine handles high-speed deep learning inference.

The first Neural Compute Stick was created by Movidius, which Intel acquired in 2016. The second version is eight times faster than the first and works under Linux. The device connects via a USB interface to a PC, laptop or edge device.

According to Intel, the NCS 2 makes it possible to quickly create, configure and test deep learning neural network prototypes. Neither cloud computing nor even an Internet connection is required.
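
To give a sense of that workflow, here is a minimal sketch of local inference on the NCS 2 using Intel's OpenVINO toolkit, the officially supported way to deploy models to the stick. The model file names and the input shape are placeholders, not a specific project's configuration, and the snippet uses OpenVINO's original (since-deprecated) inference_engine Python API.

```python
import numpy as np
from openvino.inference_engine import IECore  # OpenVINO's Python inference API

# Load a model already converted to OpenVINO's IR format (placeholder paths).
ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")
input_blob = next(iter(net.input_info))

# "MYRIAD" targets the Myriad X VPU inside the Neural Compute Stick 2;
# inference runs entirely on the stick, with no cloud connection involved.
exec_net = ie.load_network(network=net, device_name="MYRIAD")

# Run inference on a dummy image-shaped tensor (the shape is illustrative).
frame = np.zeros((1, 3, 224, 224), dtype=np.float32)
result = exec_net.infer(inputs={input_blob: frame})
print({name: out.shape for name, out in result.items()})
```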

The device has already gone on sale at a price of $99. Even before the launch, some developers were given access to the Intel NCS 2 and used it to build projects such as Clean Water AI, which uses machine vision with a microscope to detect harmful bacteria in water; BlueScan AI, which scans skin for signs of melanoma; and ASL Classification, which translates sign language into text in real time.

Intel also worked with Microsoft on the Movidius Myriad X VPU, a collaboration announced at Microsoft's Developer Day conference in March 2018. The AI platform is expected to appear in upcoming Windows updates.

MelNet Algorithm Simulates a Person's Voice

It analyzes spectrograms of the audio tracks of ordinary TED Talks, captures the speaker's vocal characteristics and reproduces short utterances
11 June 2019

The Facebook AI Research team has developed MelNet, an algorithm that synthesizes speech with characteristics specific to a particular person. For example, it has learned to imitate the voice of Bill Gates.

MelNet analyzes spectrograms of the audio tracks of ordinary TED Talks, captures the speaker's vocal characteristics and reproduces short utterances.
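
As an illustration of the kind of representation MelNet works with, the sketch below computes a log-mel spectrogram from a speech recording using the librosa library. The file name and mel parameters here are illustrative assumptions, not the actual MelNet configuration.

```python
import librosa

# Load a speech recording ("talk.wav" is a placeholder file name).
audio, sr = librosa.load("talk.wav", sr=22050)

# MelNet-style systems model mel-scaled spectrograms rather than raw waveforms:
# a 2-D time/frequency representation that captures vocal characteristics.
mel = librosa.feature.melspectrogram(y=audio, sr=sr, n_mels=80, hop_length=256)
log_mel = librosa.power_to_db(mel)  # log scale, closer to perceived loudness

print(log_mel.shape)  # (n_mels, time_frames)
```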

The main limit on the algorithm's capabilities is the length of those utterances. It reproduces short phrases very close to the original. However, a person's intonation changes when speaking on different topics, in different moods and at different pitches. The algorithm cannot yet imitate this, so long sentences sound artificial.

MIT Technology Review notes that even such an algorithm could significantly affect services like voice bots, where all communication comes down to an exchange of short remarks.

A similar approach, analyzing speech spectrograms, was used by Google AI researchers when working on the Translatotron algorithm, an AI that can translate phrases from one language to another while preserving the peculiarities of the speaker's speech.