MIT to Improve Cloud-Based Machine Learning Security

A new method combines two encryption techniques while keeping neural networks fast
20 August 2018

A team of MIT researchers presented a combined data-encryption method for cloud-based AI models at the USENIX Security conference. A neural network protected with it runs 20-30 times faster than networks secured with traditional techniques.

Privacy is preserved as well: the cloud server never sees the confidential data in full, and the user learns nothing about the neural network's parameters. According to the researchers, the system could help hospitals diagnose diseases from MRI scans using cloud-based AI models.

Two techniques are commonly used to secure cloud computation: homomorphic encryption and garbled circuits. The first performs calculations entirely on encrypted data and produces a result that only the user can decrypt. However, passing encrypted data through a convolutional neural network introduces noise that grows and accumulates with each layer, and the need to filter out this noise significantly reduces computation speed.
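The property that makes computing on encrypted data possible can be illustrated with a toy Paillier cryptosystem (the specific scheme and the tiny key below are illustrative choices, not the scheme from the MIT work): multiplying two ciphertexts yields an encryption of the sum of the plaintexts, so a server can add numbers it cannot read.

```python
import math
import random

# Toy Paillier key generation; real keys use primes of ~1024 bits or more.
p, q = 1019, 1021
n = p * q
n2 = n * n
g = n + 1                              # standard choice of generator
lam = math.lcm(p - 1, q - 1)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)    # modular inverse (Python 3.8+)

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

a, b = 42, 58
c = (encrypt(a) * encrypt(b)) % n2     # multiply ciphertexts...
assert decrypt(c) == a + b             # ...to add plaintexts
```

The noise problem described above arises in the lattice-based schemes used for neural networks, which support both addition and multiplication but degrade with each successive operation.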

The second is a form of two-party computation: the system takes input from both participants, processes it, and sends each party its result. The parties exchange information along the way but learn nothing about what the other's data means. However, the communication bandwidth required grows directly with the complexity of the computation.
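The core trick can be sketched for a single garbled AND gate (a toy construction; production protocols add point-and-permute, oblivious transfer, and authenticated encryption on top): the garbler assigns a random label to each possible wire value and encrypts each output label under the matching pair of input labels, so the evaluator can open exactly one table row without learning the underlying bits.

```python
import hashlib
import os
import random

def H(a, b):
    # Hash of two wire labels, used here as a one-time pad.
    return hashlib.sha256(a + b).digest()

def xor(x, y):
    return bytes(i ^ j for i, j in zip(x, y))

# One random 32-byte label per possible bit value on each wire.
A = {0: os.urandom(32), 1: os.urandom(32)}   # input wire a
B = {0: os.urandom(32), 1: os.urandom(32)}   # input wire b
C = {0: os.urandom(32), 1: os.urandom(32)}   # output wire c = a AND b

# Garbled truth table: each output label encrypted under the input labels.
table = [xor(H(A[a], B[b]), C[a & b]) for a in (0, 1) for b in (0, 1)]
random.shuffle(table)

# The evaluator holds the labels for a=1, b=1 but not the bits themselves;
# exactly one row decrypts to a valid output label.
la, lb = A[1], B[1]
opened = [xor(row, H(la, lb)) for row in table]
out = [lbl for lbl in opened if lbl in (C[0], C[1])]
assert out == [C[1]]   # the recovered label encodes 1 AND 1 = 1
```

The bandwidth cost mentioned above follows directly from this structure: every gate in the computation contributes its own table of ciphertexts that must be transmitted.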

Applied to cloud neural networks, this technique performs well only on the nonlinear layers, which carry out simple operations. On the linear layers, which involve heavy arithmetic, its speed drops to a critical level.

The MIT team proposed a solution that combines the strengths of the two methods and sidesteps their weaknesses. The user runs a garbled-circuit scheme on their own device and uploads homomorphically encrypted data to the cloud neural network. The workload is thus divided between the two parties: the user's device performs the garbled-circuit computations and sends the data back to the neural network.

Splitting the workload avoids the heavy per-layer noise that homomorphic encryption accumulates. In addition, the system restricts garbled-circuit communication to the nonlinear layers only.

The final touch is protection via a secret-sharing scheme. When the user uploads encrypted data to the cloud service, the data is split, and each party receives a share along with a secret key. During the computation, each participant holds only part of the information. The shares are combined at the end, and only then does the user request the service's secret key to decrypt the result.
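The idea of splitting data so that no single party can read it can be sketched with additive secret sharing (a minimal illustration; the modulus and the two-party setting are assumptions for the sketch, not details from the paper): each value is split into random shares that sum to the original, and sums can then be computed share-by-share without ever reconstructing the inputs.

```python
import random

M = 2**31 - 1   # arithmetic modulus (hypothetical choice)

def share(x, parties=2):
    """Split x into `parties` random shares that sum to x mod M."""
    shares = [random.randrange(M) for _ in range(parties - 1)]
    shares.append((x - sum(shares)) % M)
    return shares

def reconstruct(shares):
    return sum(shares) % M

x, y = 1234, 5678
xs, ys = share(x), share(y)
# Each party adds its own shares locally; neither learns x or y.
zs = [(a + b) % M for a, b in zip(xs, ys)]
assert reconstruct(zs) == x + y
```

Any single share is a uniformly random number, so a party holding one share alone learns nothing about the value it hides.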

As a result, the user obtains the classification result while learning nothing about the model's parameters, and the cloud service never has access to the full data, which ensures privacy.

Neural networks demand substantial processing power, which cloud servers typically supply. MIT researchers are also exploring another option: new chip architectures that run neural networks on the device itself. In February 2018, they introduced a prototype processor that performs the calculations 3-7 times faster while cutting power consumption by 95%.

Nvidia to Open SPADE Source Code

SPADE machine learning system creates realistic landscapes based on rough human sketches
15 April 2019

NVIDIA has released the source code for the SPADE machine learning system (GauGAN), which synthesizes realistic landscapes from rough sketches, as well as the trained models associated with the project. The system was demonstrated in March at the GTC 2019 conference, but the code was published only yesterday. The work is released under the non-free CC BY-NC-SA 4.0 license (Creative Commons Attribution-NonCommercial-ShareAlike 4.0), which permits only non-commercial use. The code is written in Python using the PyTorch framework.

Sketches take the form of a segmentation map that determines where objects are placed in the scene. The type of each generated object is set with color labels: for example, a light-blue fill becomes sky, blue becomes water, dark green trees, light green grass, light brown stones, dark brown mountains, gray snow, a brown line a road, and a blue line a river. In addition, the overall style of the composition and the time of day are determined by a choice of reference images. The tool could be useful to a wide range of specialists for building virtual worlds, from architects and urban planners to game developers and landscape designers.
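The color-to-class convention can be sketched as a simple palette lookup that turns an RGB sketch into the class-ID map the network consumes (the RGB values and class names below are hypothetical; SPADE's actual labels come from its training datasets):

```python
# Hypothetical palette mapping sketch colors to object classes.
PALETTE = {
    (70, 130, 180): "sky",
    (0, 0, 255): "water",
    (0, 100, 0): "tree",
    (124, 252, 0): "grass",
}
CLASS_ID = {name: i for i, name in enumerate(PALETTE.values())}

def sketch_to_labelmap(pixels):
    """Convert a rough RGB sketch (rows of (r, g, b) tuples) to class IDs."""
    return [[CLASS_ID[PALETTE[px]] for px in row] for row in pixels]

sketch = [
    [(70, 130, 180), (70, 130, 180)],   # sky across the top
    [(124, 252, 0), (0, 0, 255)],       # grass next to water below
]
labelmap = sketch_to_labelmap(sketch)   # → [[0, 0], [3, 1]]
```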

Objects are synthesized by a generative adversarial network (GAN) that, starting from the schematic segmentation map, creates realistic images by borrowing elements from a model pre-trained on several million photographs. Unlike previously developed image-synthesis systems, the method is built on spatially-adaptive normalization, in which learned transformations are conditioned on the layout. Processing the segmentation map directly, rather than flat semantic markup, lets the result match the sketch exactly and gives control over the style.
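The spatially-adaptive normalization at the heart of SPADE can be sketched in NumPy (a simplified single-image version; the real layer predicts the modulation with small convolutional networks, replaced here by hypothetical 1x1 projections): activations are normalized, then rescaled and shifted per pixel according to the segmentation map, so the layout information survives every layer.

```python
import numpy as np

def spade_norm(x, segmap, gamma_w, beta_w, eps=1e-5):
    """x: activations (C, H, W); segmap: one-hot layout (K, H, W)."""
    # Normalize each channel to zero mean and unit variance...
    mean = x.mean(axis=(1, 2), keepdims=True)
    std = x.std(axis=(1, 2), keepdims=True)
    xn = (x - mean) / (std + eps)
    # ...then modulate with a per-pixel scale and shift predicted from the
    # layout (1x1 projections via tensordot stand in for the conv nets).
    gamma = np.tensordot(gamma_w, segmap, axes=1)   # (C, H, W)
    beta = np.tensordot(beta_w, segmap, axes=1)
    return xn * (1 + gamma) + beta

C, K, H, W = 4, 3, 8, 8
rng = np.random.default_rng(0)
x = rng.standard_normal((C, H, W))
segmap = np.zeros((K, H, W))
segmap[0, :4], segmap[1, 4:] = 1.0, 1.0   # top half class 0, bottom class 1
out = spade_norm(x, segmap, rng.standard_normal((C, K)) * 0.1,
                 rng.standard_normal((C, K)) * 0.1)
assert out.shape == (C, H, W)
```

Because the scale and shift depend on the class at each pixel, the same activation is modulated differently over sky than over water, which is what keeps the output aligned with the sketch.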

To achieve realism, two competing neural networks are used: a generator and a discriminator. The generator produces images by blending elements of real photos, while the discriminator identifies deviations from real images. This forms a feedback loop through which the generator assembles increasingly convincing samples, until the discriminator can no longer distinguish them from the real ones.