MIT to Improve Cloud-Based Machine Learning Security

New method combines two encryption techniques while keeping neural networks fast
20 August 2018

A team of researchers from MIT presented a combined data-encryption method for cloud-based artificial intelligence models at a computer security conference organized by USENIX. A neural network protected with this method runs 20-30 times faster than networks that rely on traditional techniques.

Privacy is preserved as well: the cloud server never receives the confidential data in full, and the user learns nothing about the parameters of the neural network. According to the researchers, the system could be useful to hospitals for diagnosing diseases from MRI scans with cloud-based AI models.

Two techniques are commonly used for secure cloud computing: homomorphic encryption and garbled circuits. The first performs calculations entirely on encrypted data and produces a result that only the user can decrypt. However, homomorphic operations introduce noise that grows and accumulates with every layer of a convolutional neural network, and the need to manage that noise significantly reduces computational speed.
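
To make the homomorphic property concrete, here is a toy Python sketch of additive homomorphism only. The scheme below, which masks a value with a random one-time key modulo a constant N, is invented purely for illustration and is not a real cryptosystem; production schemes such as Paillier or BFV provide the same property securely.

    import random

    N = 2**61 - 1  # toy modulus, chosen arbitrarily for this illustration

    def keygen():
        return random.randrange(N)

    def encrypt(key, m):
        return (m + key) % N

    def decrypt(key, c):
        return (c - key) % N

    # Homomorphic addition: the server sums ciphertexts without ever
    # seeing the plaintexts behind them.
    k1, k2 = keygen(), keygen()
    c1, c2 = encrypt(k1, 20), encrypt(k2, 22)
    c_sum = (c1 + c2) % N                  # computed "in the cloud"
    assert decrypt((k1 + k2) % N, c_sum) == 42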

The second technique is a form of two-party computation. The parties jointly evaluate a function over their inputs, and each receives its share of the result; they exchange messages throughout, but neither learns what the other's data means. The drawback is that the communication bandwidth required for this exchange grows with the complexity of the computation.

Applied to cloud neural networks, the technique performs well only on the nonlinear layers, which carry out simple operations. On the linear layers, which involve heavy arithmetic, it slows down to an unacceptable degree.
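
How a garbled circuit hides both parties' data can be shown on the smallest possible example: a single AND gate. The sketch below is a teaching simplification, not the construction used in the paper; real protocols add oblivious transfer for delivering the evaluator's input labels, point-and-permute, and many other optimizations.

    import os, random, hashlib

    def H(a, b):
        return hashlib.sha256(a + b).digest()        # 32-byte keystream

    def enc(la, lb, msg):
        return bytes(x ^ y for x, y in zip(H(la, lb), msg))

    dec = enc                                        # XOR is self-inverse

    # Garbler: one random 16-byte label per possible value of each wire.
    wa = [os.urandom(16) for _ in range(2)]          # input wire a
    wb = [os.urandom(16) for _ in range(2)]          # input wire b
    wc = [os.urandom(16) for _ in range(2)]          # output wire c

    # Encrypt the AND truth table; outputs are padded with zeros so the
    # evaluator can recognize the single row it is able to decrypt.
    table = [enc(wa[i], wb[j], wc[i & j] + b"\x00" * 16)
             for i in (0, 1) for j in (0, 1)]
    random.shuffle(table)                            # hide the row order

    # Evaluator: holds exactly one label per input (here a=1, b=1) and
    # learns only the output label, nothing about the other labels.
    la, lb = wa[1], wb[1]
    for row in table:
        out = dec(la, lb, row)
        if out.endswith(b"\x00" * 16):
            bit = wc.index(out[:16])                 # mapping revealed by garbler
    print("1 AND 1 =", bit)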

The MIT team proposed a solution that draws on the strengths of both methods while avoiding their weaknesses. The user runs a garbled-circuit system on their own device and uploads homomorphically encrypted data to the cloud neural network. The workload is thus split between the two parties: the user's device evaluates the garbled circuits and sends the data back to the neural network.

Splitting the workload avoids the heavy noise accumulation on each layer that occurs with homomorphic encryption. In addition, the system confines the garbled-circuit communication to the nonlinear layers only, where it is cheap.
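
The division of labor can be pictured with a plain, unencrypted simulation. In the sketch below (a structural illustration only, with made-up function names) the server-side function stands in for a homomorphic matrix-vector product, and the client-side ReLU stands in for a garbled-circuit evaluation of the nonlinearity.

    import numpy as np

    def linear_layer_on_server(x, W, b):
        # Stand-in for homomorphic evaluation: multiplying by known
        # weights and adding are exactly the operations HE supports well.
        return W @ x + b

    def relu_on_client(x):
        # Stand-in for the garbled-circuit step: comparisons are cheap
        # Boolean logic, which garbled circuits handle efficiently.
        return np.maximum(x, 0)

    rng = np.random.default_rng(0)
    x = rng.normal(size=4)                      # user's private input
    layers = [(rng.normal(size=(4, 4)), rng.normal(size=4))
              for _ in range(3)]

    for W, b in layers:
        x = linear_layer_on_server(x, W, b)     # heavy math in the cloud
        x = relu_on_client(x)                   # cheap logic on the device
    print("classification scores:", x)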

The final touch is protection through a secret-sharing scheme. When the user uploads encrypted data to the cloud service, it is split into shares, each protected by its own secret key, so that during the computation each party holds only a portion of the information. The shares are recombined at the end, and only then does the user request the service's secret key to decrypt the results.
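
The core idea, that each share on its own reveals nothing, is easy to demonstrate with additive secret sharing over a prime field. This is a generic textbook construction; the exact scheme used by the MIT system may differ.

    import random

    P = 2**61 - 1          # a prime modulus, chosen here for convenience

    def share(secret):
        s1 = random.randrange(P)
        s2 = (secret - s1) % P
        return s1, s2       # each share alone is uniformly random

    def reconstruct(s1, s2):
        return (s1 + s2) % P

    # Each party can add its shares locally, and the sums still
    # recombine to the correct total.
    a1, a2 = share(100)
    b1, b2 = share(23)
    assert reconstruct((a1 + b1) % P, (a2 + b2) % P) == 123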

As a result, the user receives the classification output without learning the model parameters, and the cloud service never has access to the data in full, which preserves the privacy of both sides.

Neural networks demand substantial processing power, which is typically supplied by cloud servers. However, MIT researchers are also exploring another option: new chip architectures for running neural networks on the device itself. In February 2018, they introduced a prototype processor that performs the calculations 3-7 times faster while cutting power consumption by 95%.

TensorFlow 2.0 Released

The new major release of the machine learning platform brings numerous updates and changes, and some features were removed
01 October 2019

A major release of the TensorFlow 2.0 machine learning platform has been published. It provides ready-made implementations of various deep learning algorithms, a simple Python programming interface for building models, and a low-level C++ interface for controlling the construction and execution of computational graphs. The system's code is written in C++ and Python and is distributed under the Apache license.
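
As an illustration of that high-level Python interface, here is a minimal model built with the Keras API that TensorFlow 2.0 makes the default; it is a generic example, not code from the release notes.

    import tensorflow as tf

    # A two-layer classifier for 784-dimensional inputs (e.g. MNIST).
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.summary()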

The platform was originally developed by the Google Brain team and is used in Google services for speech recognition, face recognition in photographs, judging the similarity of images, filtering spam in Gmail, selecting stories in Google News, and meaning-aware translation. Distributed machine learning systems can be built on commodity hardware thanks to TensorFlow's built-in support for spreading computation across multiple CPUs or GPUs.
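
That distribution support is exposed through the tf.distribute API. A minimal sketch, using MirroredStrategy (one of several available strategies) to replicate a model across the GPUs of a single machine:

    import tensorflow as tf

    # Variables created inside the scope are mirrored on every device.
    strategy = tf.distribute.MirroredStrategy()
    with strategy.scope():
        model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
        model.compile(optimizer="sgd", loss="mse")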

TensorFlow provides a library of off-the-shelf numerical computation algorithms implemented as dataflow graphs. Nodes in such a graph represent mathematical operations or input/output points, while the edges represent multidimensional data arrays (tensors) flowing between the nodes. Nodes can be assigned to different computing devices and executed asynchronously, each processing all of its ready tensors at once, which lets the nodes of a neural network operate simultaneously, by analogy with the parallel activation of neurons in the brain.
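
In TensorFlow 2.0 eager execution is the default, and tf.function traces ordinary Python code into exactly such a dataflow graph. A small generic example (the function name is made up for illustration):

    import tensorflow as tf

    @tf.function
    def affine(x, w, b):
        return tf.matmul(x, w) + b     # two op nodes; tensors on the edges

    x = tf.ones((1, 3))
    w = tf.random.normal((3, 2))
    b = tf.zeros((2,))
    print(affine(x, w, b))             # runs the traced graph

    # Inspect the graph that tracing produced.
    graph = affine.get_concrete_function(x, w, b).graph
    print([op.name for op in graph.get_operations()])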

More information about the update is available on the official website.