Scientists to Use AI for Newborn Diagnostics

The main goal of the study is to create an algorithm that detects deviations in the development of newborns' limb movements within the first few months of life
12 July 2018

A team of scientists from the University of Southern California and the University of Madrid used AI to detect developmental abnormalities in newborns. The algorithm classifies limb movements and, based on these data, produces a prediction for ages 1-12 months. This was reported by VentureBeat.

The scientists used data from the newborn neuromotor monitoring laboratory at the University of Southern California. Accelerometers, gyroscopes, and magnetometers were attached to the infants' feet. The algorithm collected the sensor data for the left and right legs and then computed the duration of movements, the average and maximum acceleration, and other indicators.
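The article names movement duration and average and maximum acceleration among the computed indicators. As a rough sketch of that feature-extraction step (not the study's code: the function name, movement threshold, and sampling rate below are assumptions), per-leg features from an accelerometer trace might be computed like this:

```python
import numpy as np

def movement_features(accel, fs=100.0, threshold=0.5):
    """Summary features for one leg from a 3-axis accelerometer trace.

    accel: (N, 3) array of accelerations in g; fs: sampling rate in Hz;
    threshold: magnitude (in g) above which the leg counts as moving.
    All names and defaults are illustrative assumptions.
    """
    magnitude = np.linalg.norm(accel, axis=1)        # overall acceleration
    moving = magnitude > threshold                   # crude movement mask
    padded = np.concatenate(([0], moving.astype(int), [0]))
    edges = np.flatnonzero(np.diff(padded))          # movement-bout boundaries
    starts, ends = edges[::2], edges[1::2]
    durations = (ends - starts) / fs                 # seconds per movement
    return {
        "n_movements": len(durations),
        "mean_duration_s": float(durations.mean()) if len(durations) else 0.0,
        "mean_accel_g": float(magnitude.mean()),
        "max_accel_g": float(magnitude.max()),
    }
```

Features for the left and right legs would then be combined into one vector per infant.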

The developers then manually entered each child's age, a scaled developmental score, and a development label (typical or atypical), and assembled a predictive model. They applied binary classification algorithms, keeping the three results that best minimized error.
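The article does not say which classifiers were compared; as a minimal sketch of that step, using scikit-learn and placeholder data standing in for the real recordings:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Placeholder data: one row per infant of per-leg movement features plus
# age and the scaled developmental score; y = 1 for atypical development.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 10))
y = rng.integers(0, 2, size=60)

# One of several binary classifiers one might evaluate before keeping
# the best-performing few, as the team describes.
clf = LogisticRegression(max_iter=1000)
print(f"cross-validated accuracy: {cross_val_score(clf, X, y, cv=5).mean():.3f}")
```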

Based on these data, the artificial intelligence predicted developmental delays over the first six months with an accuracy of 83.9%. For the 6-12 month period, accuracy was slightly lower, at 77%. The full methodology and results are published in the paper.

[S]tudies have demonstrated that kinematic variables, such as kicking frequency, spatiotemporal organization, and interjoint and interlimb coordination, are different between infants with typical development … and infants at risk … including infants with intellectual disability, myelomeningocele, Down syndrome, as well as infants born preterm.

Researchers

The main goal of the study is to create an algorithm that detects deviations in the development of newborns' limb movements in the first few months of life, making targeted early intervention possible. Studies have shown kinematic differences between typically developing children and children at risk, including the frequency of leg movements, spatiotemporal organization, and limb coordination.

MIT to Improve Cloud-Based Machine Learning Security

The new method combines two encryption techniques while keeping neural networks running quickly
20 August 2018

At a computer security conference organized by USENIX, a team of MIT researchers presented a combined data-encryption method for cloud-based artificial intelligence models. A neural network protected this way runs 20-30 times faster than one using traditional techniques.

Privacy is preserved as well: the cloud server never receives the confidential data in full, and the user learns nothing about the neural network's parameters. According to the researchers, their system could help hospitals diagnose diseases from MRI scans using cloud-based AI models.

Two techniques are commonly used in cloud computing: homomorphic encryption and garbled circuits. The first performs all computations directly on encrypted data and produces a result that the user can decrypt. However, processing a convolutional neural network introduces noise that grows and accumulates with each layer, and the need to filter out this interference significantly reduces computational speed.
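A toy illustration of that noise growth (an invented noise model, not a real scheme; production systems such as BGV or CKKS track a noise budget and manage it with techniques like modulus switching):

```python
NOISE_BUDGET = 1000.0   # decryption fails once accumulated noise exceeds this

class ToyCiphertext:
    def __init__(self, value, noise=1.0):
        self.value = value   # stand-in for the encrypted payload
        self.noise = noise   # tracked noise magnitude

    def mul(self, other):
        # Multiplying ciphertexts roughly multiplies (and inflates) the noise.
        return ToyCiphertext(self.value * other.value,
                             self.noise * other.noise * 2.0)

ct = ToyCiphertext(1.0)
for layer in range(1, 13):            # one multiplication per network layer
    ct = ct.mul(ToyCiphertext(1.0))
    ok = ct.noise <= NOISE_BUDGET
    print(f"layer {layer:2d}: noise = {ct.noise:6.0f}  decryptable: {ok}")
```

Past the budget the ciphertext becomes undecryptable, which is why deep networks force expensive noise management.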

The second technique is a form of two-party computation. The system takes input data from both participants, processes it, and sends each party its result. The parties exchange information along the way but learn nothing about what it means. However, the communication bandwidth required for this exchange grows with the complexity of the computation.
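For intuition, here is a toy garbled circuit for a single AND gate (a sketch only: real protocols add oblivious transfer for delivering input labels and optimizations like point-and-permute, and this is not the MIT construction):

```python
import hashlib
import secrets

LABEL = 16           # bytes per wire label
TAG = b"\x00" * 16   # validity marker appended before encryption

def H(a, b):
    return hashlib.sha256(a + b).digest()   # 32-byte one-time pad

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def garble_and():
    """Garbler: random labels for wires a, b, out, plus a shuffled AND table."""
    wires = {w: (secrets.token_bytes(LABEL), secrets.token_bytes(LABEL))
             for w in ("a", "b", "out")}
    table = [xor(H(wires["a"][va], wires["b"][vb]), wires["out"][va & vb] + TAG)
             for va in (0, 1) for vb in (0, 1)]
    secrets.SystemRandom().shuffle(table)   # hide which row encodes which inputs
    return wires, table

def evaluate(table, la, lb):
    """Evaluator: holds one label per input wire, learns only one output label."""
    for row in table:
        plain = xor(row, H(la, lb))
        if plain.endswith(TAG):             # only the matching row decrypts cleanly
            return plain[:LABEL]
    raise ValueError("no row decrypted")

wires, table = garble_and()
out = evaluate(table, wires["a"][1], wires["b"][1])   # evaluate AND(1, 1)
assert out == wires["out"][1]   # the label for output bit 1, and nothing more
```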

Applied to cloud-based neural networks, the technique performs well only on the nonlinear layers, which carry out simple operations. On the linear layers, with their heavy arithmetic, speed drops to impractical levels.

The MIT team proposed a solution that uses the strengths of both methods and bypasses their weaknesses. The user runs a garbled-circuit protocol on their own device and uploads homomorphically encrypted data to the cloud neural network. The workload is thus divided between the two parties: the cloud evaluates the linear layers on the encrypted data, while the user's device performs the garbled-circuit computations and sends the results back to the neural network.

Dividing the workload lets the system avoid the heavy per-layer noise accumulation that homomorphic encryption causes. It also confines the bandwidth-hungry garbled-circuit communication to the nonlinear layers alone.
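A structural sketch of that alternation (assumptions throughout: additive blinding stands in for homomorphic encryption, and applying ReLU on recombined shares stands in for the garbled-circuit step, which in the real protocol never exposes the value in the clear):

```python
import numpy as np

rng = np.random.default_rng(0)

def server_linear(W, b, x):
    """Cloud: linear layer, with the output blinded by a fresh random vector.

    The client receives W @ x + b - r while the server keeps r: additive
    shares of the layer output, so neither side sees it whole. (Here the
    server is handed x directly for brevity; in the real system it would
    operate on a homomorphic ciphertext of x.)
    """
    r = rng.normal(size=b.shape)
    return (W @ x + b - r), r          # (sent to client, kept by server)

def shared_relu(client_share, server_share):
    """Stand-in for the garbled circuit: apply ReLU, then re-share."""
    y = np.maximum(client_share + server_share, 0.0)
    r = rng.normal(size=y.shape)
    return y - r, r                    # fresh shares of the activation

x = rng.normal(size=4)                               # client's private input
W, b = rng.normal(size=(3, 4)), rng.normal(size=3)   # cloud's private weights

c_share, s_share = server_linear(W, b, x)            # linear layer in the cloud
c_share, s_share = shared_relu(c_share, s_share)     # nonlinearity on the device
assert np.allclose(c_share + s_share, np.maximum(W @ x + b, 0.0))
```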

The final touch is protection via a secret-sharing scheme. When a user uploads encrypted data to the cloud service, the data are split, and each part is protected with a secret key. Throughout the computation, each participant holds only a portion of the information. The parts are synchronized at the end, and only then does the user request the secret key from the service to decrypt the results.
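The article does not specify the exact scheme; a common building block behind "each participant holds only a portion" designs is additive secret sharing, sketched here (the modulus and function names are illustrative):

```python
import secrets

P = 2**61 - 1   # public prime modulus; all shares live in Z_P

def share(x, n=2):
    """Split secret x into n additive shares that sum to x mod P."""
    parts = [secrets.randbelow(P) for _ in range(n - 1)]
    parts.append((x - sum(parts)) % P)
    return parts

def reconstruct(parts):
    return sum(parts) % P

user_part, cloud_part = share(42)   # either part alone reveals nothing about 42
assert reconstruct([user_part, cloud_part]) == 42
```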

As a result, the user obtains the classification result while learning nothing about the model's parameters, and the cloud service never gains access to the data in full, which ensures privacy for both sides.

Neural networks demand substantial processing power, which cloud servers typically supply. However, MIT researchers are also studying another option: chips with a new architecture that run neural networks on the device itself. In February 2018 they presented a prototype processor that performs these computations 3-7 times faster while cutting power consumption by 95%.