Scientists to Use AI for Newborn Diagnostics

The main goal of the study is to create an algorithm that detects developmental abnormalities in the limb movements of newborns in their first few months
12 July 2018

A team of scientists from the University of Southern California and the University of Madrid has used AI to detect developmental abnormalities in newborns. The algorithm classifies limb movements and, from these data, generates a forecast for the infant's first 1-12 months. Venture Beat reports.

The scientists used data from the newborn neuromotor monitoring laboratory at the University of Southern California. Accelerometers, gyroscopes, and magnetometers were attached to the infants' feet. The algorithm collected the sensor data for the left and right legs, then calculated the duration of movements, the average and maximum acceleration, and other indicators.
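
To make the feature step concrete, here is a minimal Python sketch of how such per-leg features could be computed from a raw acceleration trace. The sampling rate, movement threshold, and function name are illustrative assumptions, not details from the paper.

import numpy as np

def movement_features(accel, fs=100.0, threshold=0.5):
    """Compute simple movement features from one leg's acceleration trace.

    accel: 1-D array of acceleration magnitudes (gravity removed).
    fs: sampling rate in Hz (hypothetical; the paper's rate is not given here).
    threshold: acceleration above which the leg is treated as moving.
    """
    moving = accel > threshold                   # samples where the leg moves
    duration_s = moving.sum() / fs               # total movement time in seconds
    active = accel[moving] if moving.any() else np.zeros(1)
    return {
        "movement_duration_s": float(duration_s),
        "mean_acceleration": float(active.mean()),
        "max_acceleration": float(active.max()),
    }

# Synthetic 60-second traces for the left and right legs (100 Hz)
rng = np.random.default_rng(0)
left, right = (np.abs(rng.normal(0.3, 0.4, 6000)) for _ in range(2))
features = {f"left_{k}": v for k, v in movement_features(left).items()}
features.update({f"right_{k}": v for k, v in movement_features(right).items()})
print(features)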

The developers then manually entered each child's age, a scaled development score, and a label (typical or atypical development), and assembled the predictive model. They applied binary classification algorithms, keeping the three best results to minimize error.
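
The article does not name the classifiers, so the following Python sketch is only an illustration of that selection step: it cross-validates a handful of common binary classifiers on synthetic stand-in data and keeps the three with the lowest error. Every model choice here is an assumption.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic stand-in: rows are infants, columns are sensor features
# plus age and development score; y marks atypical development.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

candidates = {
    "logreg": LogisticRegression(max_iter=1000),
    "svm": SVC(),
    "forest": RandomForestClassifier(random_state=0),
    "knn": KNeighborsClassifier(),
    "nb": GaussianNB(),
}

# Cross-validate every candidate and keep the three with the highest
# accuracy (i.e. lowest error), echoing the "3 best results" above.
scores = {name: cross_val_score(clf, X, y, cv=5).mean() for name, clf in candidates.items()}
top3 = sorted(scores, key=scores.get, reverse=True)[:3]
print({name: round(scores[name], 3) for name in top3})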

Based on the resulting data, the artificial intelligence predicted developmental delays in the first six months with an accuracy of 83.9%. For the 6-12 month period, the accuracy was slightly lower, at 77%. The detailed methodology and results are published in the paper.

[S]tudies have demonstrated that kinematic variables, such as kicking frequency, spatiotemporal organization, and interjoint and interlimb coordination, are different between infants with typical development … and infants at risk … including infants with intellectual disability, myelomeningocele, Down syndrome, as well as infants born preterm.

Researchers

The main goal of the study is to create an algorithm that detects developmental abnormalities in the limb movements of newborns in their first few months, which would allow targeted interventions to be taken early. Studies have shown that there are kinematic differences between children with typical development and children in the at-risk group, including the frequency of leg movements, spatiotemporal organization, and coordination of the limbs.

MIT CSAIL to Fight AI Bias

As the researchers note, bias in AI leads to poor search results and a degraded user experience
19 November 2018

A team of scientists from the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) has published a paper on combating the biases that neural networks pick up during training. Its main focus is how to preserve the accuracy of an AI's predictions while correcting those biases.

Scientists have been grappling with the problem of discriminatory bias in AI for years, so established methods exist in this area. Typically, to correct the training, additional examples are added to the dataset, which allows the neural network to form a more accurate picture of a particular group.

In one experiment, for example, the AI had to predict the expected income level of the individuals in a sample. Because of a discriminatory bias acquired during training, the AI labeled men as high earners twice as often as women. Increasing the number of female profiles in the training dataset reduced the error by 40%.
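
The correction described above is essentially oversampling. The Python sketch below shows the idea on synthetic data; the group encoding, duplication factor, and helper name are illustrative, not the procedure actually used in the experiment.

import numpy as np

def oversample_group(X, y, group, target_value, factor=2):
    """Duplicate rows belonging to an underrepresented group.

    group: array of group labels per row (e.g. 0 = male, 1 = female);
    target_value: the group to oversample; factor: total copies per row.
    """
    idx = np.flatnonzero(group == target_value)
    extra = np.repeat(idx, factor - 1)           # indices of the duplicated rows
    keep = np.concatenate([np.arange(len(y)), extra])
    return X[keep], y[keep], group[keep]

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5))
sex = (rng.random(1000) < 0.2).astype(int)       # women underrepresented at ~20%
y = rng.integers(0, 2, size=1000)
X2, y2, sex2 = oversample_group(X, y, sex, target_value=1, factor=4)
print(f"share of women before: {sex.mean():.2f}, after: {sex2.mean():.2f}")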

The problem with these traditional methods is that datasets prepared this way no longer reflect the actual distribution of the population, which increases the error of the predictions the AI produces.

In their paper "Why Is My Classifier Discriminatory?" the scientists offer several possible solutions. They argue that increasing the size of the training dataset without changing the proportions of the gender, social, and racial groups it represents will allow the AI to overcome discriminatory errors on its own. According to the researchers, collecting the additional data from the same source that provided the initial batch avoids covariate shift.
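
One way to visualize this recommendation is a per-group learning curve: if a subgroup's test error keeps falling as the training set grows, more data from the same source should shrink the disparity. The Python sketch below illustrates that diagnostic on synthetic data; it approximates the paper's argument rather than reproducing its exact analysis.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic population with a 30% minority group
rng = np.random.default_rng(2)
n = 4000
group = (rng.random(n) < 0.3).astype(int)        # minority group = 1
X = rng.normal(size=(n, 6))
y = ((X[:, 0] + 0.3 * group + rng.normal(scale=1.0, size=n)) > 0).astype(int)

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.25, random_state=0)

# Train on growing slices and compare per-group test error
for m in (250, 1000, 3000):
    clf = LogisticRegression(max_iter=1000).fit(X_tr[:m], y_tr[:m])
    err = clf.predict(X_te) != y_te
    print(f"n={m:5d}  error(majority)={err[g_te == 0].mean():.3f}  "
          f"error(minority)={err[g_te == 1].mean():.3f}")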

This method can be costly, since the specialists who label the additional data must be paid. However, the researchers are confident that in many cases such costs would be justified.

The second option is to cluster the population groups most vulnerable to discrimination and then process those clusters separately, introducing additional variables. The scientists suggest using this method when obtaining additional data is difficult or impossible.
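
As a rough illustration of that second option, the Python sketch below clusters synthetic data with k-means and fits a separate model per cluster. The cluster count, features, and models are assumed for illustration; the paper does not prescribe them.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

# Synthetic data standing in for a population with vulnerable subgroups
rng = np.random.default_rng(3)
X = rng.normal(size=(1500, 4))
y = (X[:, 0] + rng.normal(scale=0.8, size=1500) > 0).astype(int)

# Partition the data, then train one model per cluster so each
# subgroup gets its own decision boundary (and, if available,
# its own additional variables).
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
models = {}
for c in np.unique(clusters):
    mask = clusters == c
    models[c] = LogisticRegression().fit(X[mask], y[mask])
    print(f"cluster {c}: {mask.sum()} rows, "
          f"train acc {models[c].score(X[mask], y[mask]):.3f}")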