AI Benchmark App to be Released for Android

The application tests the motherboard, processor, and RAM, then produces a score indicating how effectively the device runs AI
27 July 2018

Computer vision researchers from ETH Zurich (the Swiss Federal Institute of Technology in Zurich) have created an application that assesses how well smartphones perform on artificial intelligence workloads.

The test results will be useful to AI researchers, component manufacturers, and Android developers: the data will help them identify and correct device shortcomings and improve how efficiently the hardware handles AI workloads.

AI Benchmark can be downloaded from Google Play and runs on any smartphone with Android 4.1 or higher. The application tests the motherboard, processor, and RAM, then produces a score indicating how effectively the device runs AI.

During testing, AI Benchmark evaluates the smartphone's ability to edit high-resolution images and to recognize and classify objects in photos. It also tests algorithms of the kind used in self-driving car systems.
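
For context, the following is a minimal sketch of what benchmarking on-device inference can look like, using the TensorFlow Lite interpreter from Python; the model file name is a placeholder, and this is an illustration rather than the app's actual test code.

    import time

    import numpy as np
    import tensorflow as tf

    # Load a TensorFlow Lite image-classification model (the file name is a placeholder).
    interpreter = tf.lite.Interpreter(model_path="mobilenet_v2.tflite")
    interpreter.allocate_tensors()

    input_info = interpreter.get_input_details()[0]
    output_info = interpreter.get_output_details()[0]

    # Feed random data shaped like a real input image; a benchmark would use actual photos.
    dummy_image = np.random.random_sample(tuple(input_info["shape"])).astype(input_info["dtype"])

    # Warm up once, then time repeated inference runs.
    interpreter.set_tensor(input_info["index"], dummy_image)
    interpreter.invoke()

    runs = 50
    start = time.perf_counter()
    for _ in range(runs):
        interpreter.set_tensor(input_info["index"], dummy_image)
        interpreter.invoke()
        predictions = interpreter.get_tensor(output_info["index"])
    elapsed = time.perf_counter() - start

    print(f"average inference time: {1000 * elapsed / runs:.1f} ms per image")

The measured latency is the kind of raw figure a benchmark aggregates across many tasks into a single overall score.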

Summarizing the results presented on the project's website, the researchers came to the following conclusions:

  • Qualcomm: theoretically capable of good results, but proper drivers are lacking;
  • Huawei: outstanding results;
  • Samsung: no hardware acceleration support, but powerful processors;
  • MediaTek: good results for mid-range devices.

The final rating of smartphones and their hardware platforms is presented below:

Rating of Android devices in AI Benchmark

Andrey Ignatov, one of the creators of AI Benchmark, said the application took about three months to develop. The idea arose from the lack of information about the limitations of running modern AI on smartphones: today nearly all algorithms run remotely on servers rather than on the device, apart from a few pre-installed applications.

The developers are confident that AI technologies will in time be no less important in a smartphone than a good camera, which is why they want to take an active part in the field's development.

MIT to Improve Cloud-Based Machine Learning Security

The new method combines two encryption techniques while keeping neural networks running quickly
20 August 2018

A team of MIT researchers presented a combined data-encryption method for cloud-based artificial intelligence models at a computer security conference organized by USENIX. A neural network protected with it runs 20-30 times faster than networks secured with traditional techniques.

Privacy is preserved as well: the cloud server never receives the confidential data in full, and the user learns nothing about the neural network's parameters. According to the researchers, the system could help hospitals diagnose diseases from MRI images using cloud-based AI models.

Two techniques are commonly used for secure cloud computation: homomorphic encryption and garbled circuits. The first performs calculations entirely on encrypted data and produces a result that only the user can decrypt. However, when a convolutional neural network is evaluated this way, noise is introduced that grows and accumulates with each layer, and the need to filter out this interference significantly reduces computation speed.
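
A toy illustration of the noise issue, not a real homomorphic scheme: each "ciphertext" below hides its message behind a scale factor plus a small random noise term, and summing many ciphertexts eventually pushes the accumulated noise past the rounding threshold used for decryption.

    import random

    SCALE = 1000   # a message m is encoded as m * SCALE; decryption rounds to the nearest multiple
    NOISE = 60     # magnitude of the fresh noise added at encryption time

    def encrypt(m: int) -> int:
        """Toy 'ciphertext': the message is hidden behind a scale factor plus random noise."""
        return m * SCALE + random.randint(0, NOISE)

    def add(c1: int, c2: int) -> int:
        """Homomorphic addition: the underlying messages add, but so do the noise terms."""
        return c1 + c2

    def decrypt(c: int) -> int:
        """Rounding recovers the message only while the accumulated noise stays small."""
        return round(c / SCALE)

    total = encrypt(1)
    for _ in range(29):
        total = add(total, encrypt(1))

    # Thirty messages were summed, but thirty noise terms piled up along with them;
    # once the noise exceeds SCALE / 2, the rounding tips over and the answer is wrong.
    print(decrypt(total))  # expected 30, but typically prints 31

Real schemes manage this growing noise with costly extra processing, which is exactly the overhead described above.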

The second technique is a form of two-party computation. The system takes both parties' inputs, processes them, and sends each party its result. The parties exchange information but learn nothing about what it means. However, the bandwidth required for this exchange grows directly with the complexity of the computation.

For cloud neural networks, the technique works well only on the nonlinear layers, which perform simple operations. On the linear layers, which involve heavy mathematics, the speed drops to a critical level.
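
A minimal sketch of the garbled-circuit idea for a single AND gate, with SHA-256 standing in for the symmetric encryption used in real constructions; all names here are illustrative.

    import hashlib
    import secrets

    LABEL_LEN = 16  # length of a wire label in bytes

    def _pad(label_a: bytes, label_b: bytes) -> bytes:
        """Derive a one-time pad for one truth-table row from the two input labels."""
        return hashlib.sha256(label_a + label_b).digest()[:LABEL_LEN]

    def _xor(x: bytes, y: bytes) -> bytes:
        return bytes(p ^ q for p, q in zip(x, y))

    def garble_and_gate():
        """Garbler: pick a random label for every wire value and encrypt the AND truth table."""
        a = (secrets.token_bytes(LABEL_LEN), secrets.token_bytes(LABEL_LEN))    # labels for a = 0, 1
        b = (secrets.token_bytes(LABEL_LEN), secrets.token_bytes(LABEL_LEN))    # labels for b = 0, 1
        out = (secrets.token_bytes(LABEL_LEN), secrets.token_bytes(LABEL_LEN))  # labels for out = 0, 1
        table = [_xor(out[va & vb], _pad(a[va], b[vb])) for va in (0, 1) for vb in (0, 1)]
        secrets.SystemRandom().shuffle(table)  # hide which row corresponds to which inputs
        return a, b, out, table

    def evaluate(table, label_a, label_b, out_labels):
        """Evaluator: holding one label per input wire, recover exactly one output label."""
        pad = _pad(label_a, label_b)
        for row in table:
            candidate = _xor(row, pad)
            if candidate in out_labels:  # a real scheme checks a MAC or padding instead
                return out_labels.index(candidate)
        raise ValueError("no table row decrypted correctly")

    a_labels, b_labels, out_labels, table = garble_and_gate()
    # In the real protocol the evaluator receives its input labels via oblivious transfer
    # and never learns the labels for the opposite bit values.
    print(evaluate(table, a_labels[1], b_labels[1], out_labels))  # prints 1, since 1 AND 1 = 1

Every gate adds a table like this one, and the labels must travel between the parties, which is why the required bandwidth grows with circuit complexity and why large linear layers become impractical to handle this way.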

The MIT team proposed a solution that draws on the strengths of both methods and avoids their weaknesses. The user runs a garbled-circuit scheme on their own device and sends homomorphically encrypted data to the cloud neural network. The workload is thus split between the two parties: the user's device performs the garbled-circuit computations and sends the data back to the neural network.

Splitting the workload avoids the heavy noise that homomorphic encryption accumulates at every layer. In addition, the system limits garbled-circuit communication to the nonlinear layers only.

The final touch is protection with a secret-sharing scheme. When the user uploads encrypted data to the cloud service, the data are split, and each part is protected by its own secret key. During the computation each participant holds only a portion of the information. The parts are brought together at the end, and only then does the user request the secret key from the service to decrypt the results.
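
A minimal sketch of additive secret sharing, the basic building block behind such schemes; it also shows why public linear operations are easy to carry out on split data (the modulus and the weights are illustrative).

    import secrets

    P = 2**61 - 1  # a large prime modulus; all arithmetic is done modulo P

    def share(x: int):
        """Split x into two additive shares; either share alone looks uniformly random."""
        s1 = secrets.randbelow(P)
        s2 = (x - s1) % P
        return s1, s2

    def reconstruct(s1: int, s2: int) -> int:
        return (s1 + s2) % P

    def linear_layer(share_vec, weights):
        """Apply a public linear layer (here, a dot product) to one party's shares."""
        return sum(w * s for w, s in zip(weights, share_vec)) % P

    x = [3, 7, 11]        # the user's private input vector
    weights = [2, 5, 1]   # public model weights, for illustration only

    shares = [share(v) for v in x]
    user_part = linear_layer([s1 for s1, _ in shares], weights)
    server_part = linear_layer([s2 for _, s2 in shares], weights)

    # Neither partial result reveals anything on its own; only combining them
    # yields the true output 2*3 + 5*7 + 1*11 = 52.
    print(reconstruct(user_part, server_part))  # 52

Only when the two halves are brought together at the end does the true result appear, which mirrors the final decryption step described above.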

As a result, the user receives the classification output while learning nothing about the model's parameters, and the cloud service never has access to the data in full, which preserves privacy.

Neural networks need substantial processing power, which is usually supplied by cloud servers. However, MIT researchers are also exploring another option: chips with a new architecture that run neural networks on the device itself. In February 2018 they introduced a prototype processor that performs the calculations 3-7 times faster while cutting power consumption by 95%.