MIT CSAIL to Fight AI Bias

Bias in AI reportedly leads to poor search results and a degraded user experience
19 November 2018

A team of scientists from the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) has published a paper on combating the biases that neural networks pick up during training. The paper focuses on preserving the accuracy of AI predictions while removing those biases.

Scientists have been grappling with the problem of discriminatory bias in AI for years, so traditional methods already exist in this area. Usually, to correct training, extra records are added to the dataset so that the neural network sees a more representative picture of a particular group.

Thus, in one experiment, the AI had to predict the expected income level of individuals in a presented sample. Because of a discriminatory bias acquired during training, the AI marked men as high earners twice as often as women. Increasing the number of female profiles in the training dataset reduced the error by 40%.
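As an illustration of that traditional fix, here is a minimal sketch of oversampling the under-represented group; the DataFrame layout and the `gender` column are hypothetical stand-ins for the income dataset, not the paper's actual data:

```python
import pandas as pd

# Hypothetical training table with a 'gender' column; in practice this
# would be the income dataset described above.
df = pd.DataFrame({
    "gender": ["male"] * 700 + ["female"] * 300,
    "income_high": [1, 0] * 350 + [0, 1] * 150,
})

majority = df[df["gender"] == "male"]
minority = df[df["gender"] == "female"]

# Duplicate minority rows (sampling with replacement) until the two
# groups are the same size -- the "add more female profiles" fix.
minority_upsampled = minority.sample(n=len(majority), replace=True, random_state=0)
balanced = pd.concat([majority, minority_upsampled]).sample(frac=1, random_state=0)

print(balanced["gender"].value_counts())
```

Sampling with replacement inflates the minority group without touching the majority rows, which is exactly why the resulting proportions stop matching the real population.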

The problem with traditional methods is that datasets prepared this way no longer reflect the actual distribution of the population, which makes the AI's predictions less accurate overall.

In their paper "Why Is My Classifier Discriminatory?" the scientists offer several possible solutions. They believe that increasing the size of the training dataset without changing the proportions of the gender, social, and racial groups represented will let the AI overcome discriminatory errors on its own. According to the researchers, collecting the additional records from the same source that provided the initial data avoids introducing covariate shift.
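One way to check whether such same-source data collection is still paying off is to track the classifier's error separately for each group as the training set grows; below is a minimal sketch with scikit-learn, where the features, labels, and group indicator are all hypothetical synthetic stand-ins:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical data: features X, labels y, and a binary group indicator.
n = 4000
X = rng.normal(size=(n, 5))
group = rng.integers(0, 2, size=n)          # 0 = majority, 1 = minority
y = (X[:, 0] + 0.5 * group + rng.normal(scale=1.0, size=n) > 0).astype(int)

X_test, y_test, g_test = X[3000:], y[3000:], group[3000:]

# Per-group error as a function of training-set size: if the minority
# error keeps falling, collecting more data (same proportions, same
# source) is still paying off.
for m in (250, 500, 1000, 2000, 3000):
    clf = LogisticRegression().fit(X[:m], y[:m])
    pred = clf.predict(X_test)
    for g in (0, 1):
        mask = g_test == g
        err = (pred[mask] != y_test[mask]).mean()
        print(f"n_train={m:4d} group={g} error={err:.3f}")
```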

This method can be costly, since the specialists who label the additional data have to be paid. However, the researchers are confident that in many cases the expense is justified.

The second option is to cluster the population groups most vulnerable to discrimination and then process those clusters separately, introducing additional variables. Scientists suggest using this method when obtaining additional data is difficult or impossible.
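A minimal sketch of this second option on synthetic stand-in data: cluster first, then fit a separate model per cluster (the data, cluster count, and model choice are all assumptions, not the paper's setup):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Hypothetical features and labels standing in for the census-style data.
X = rng.normal(size=(2000, 6))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Step 1: cluster the population, so that vulnerable groups end up in
# their own clusters.
clusters = KMeans(n_clusters=3, n_init=10, random_state=1).fit_predict(X)

# Step 2: fit one classifier per cluster; extra cluster-specific
# variables could be appended to X at this stage.
models = {}
for c in np.unique(clusters):
    mask = clusters == c
    models[c] = LogisticRegression().fit(X[mask], y[mask])
    print(f"cluster {c}: {mask.sum()} samples, "
          f"train accuracy {models[c].score(X[mask], y[mask]):.3f}")
```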

NVIDIA Opens StyleGAN Source Code

This machine learning project generates faces of people by imitating photographs
11 February 2019

NVIDIA has opened the source code of its StyleGAN project, which generates images of new human faces by imitating photographs. The system automatically accounts for facial placement and makes the result indistinguishable from real photos (most respondents could not tell the original photos from the generated ones). Face synthesis uses a machine learning system based on a generative adversarial network (GAN). The code is written in Python using the TensorFlow framework and published under the Creative Commons BY-NC 4.0 license (for non-commercial use only).

Both ready-made trained models and collections of images for training a neural network yourself are available for download. The basic model was trained on the Flickr-Faces-HQ (FFHQ) collection, which includes 70,000 high-quality (1024x1024) PNG images of people's faces. At the same time, the system is not limited to faces: as examples, variants trained on collections of photographs of cars, cats, and beds are shown. Running it requires one or more NVIDIA graphics cards (a Tesla V100 GPU is recommended) with at least 11 GB of memory, NVIDIA 391.35+ drivers, the CUDA 9.0+ toolkit, and the cuDNN 7.3.1 library.
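Generating a face with one of the pretrained models follows the example script shipped in the repository; the sketch below assumes the FFHQ pickle has already been downloaded into the working directory (file and output names are placeholders):

```python
import pickle
import numpy as np
import PIL.Image
import dnnlib.tflib as tflib  # shipped with the StyleGAN repository

tflib.init_tf()

# Load the pretrained FFHQ generator; Gs is the long-term average of the
# generator weights, the network used for inference.
with open('karras2019stylegan-ffhq-1024x1024.pkl', 'rb') as f:
    _G, _D, Gs = pickle.load(f)

# Sample a random latent vector and synthesize one 1024x1024 face.
latents = np.random.RandomState(42).randn(1, Gs.input_shape[1])
fmt = dict(func=tflib.convert_images_to_uint8, nchw_to_nhwc=True)
images = Gs.run(latents, None, truncation_psi=0.7,
                randomize_noise=True, output_transform=fmt)

PIL.Image.fromarray(images[0], 'RGB').save('example.png')
```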

The system lets you synthesize the image of a new face by interpolating the features of several faces, combining traits characteristic of each, and adapting the final image to a desired age, gender, hair length, type of smile, nose shape, skin color, glasses, and head rotation. The generator treats the image as a collection of styles: it automatically separates characteristic details (freckles, hair, glasses) from high-level attributes (pose, gender, aging) and lets you combine them arbitrarily, with dominant properties set through weights.
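The interpolation between faces mentioned above can be approximated by blending two latent codes before running the generator. The sketch below does a simple linear blend in the input latent space; note that StyleGAN's own style mixing operates per-layer in its intermediate space, which this simplified version does not reproduce:

```python
import pickle
import numpy as np
import PIL.Image
import dnnlib.tflib as tflib

tflib.init_tf()
with open('karras2019stylegan-ffhq-1024x1024.pkl', 'rb') as f:
    _G, _D, Gs = pickle.load(f)
fmt = dict(func=tflib.convert_images_to_uint8, nchw_to_nhwc=True)

# Two random identities; blend their latent codes in five steps.
z1 = np.random.RandomState(1).randn(1, Gs.input_shape[1])
z2 = np.random.RandomState(2).randn(1, Gs.input_shape[1])

for i, t in enumerate(np.linspace(0.0, 1.0, 5)):
    z = (1.0 - t) * z1 + t * z2  # linear blend of the two codes
    img = Gs.run(z, None, truncation_psi=0.7,
                 randomize_noise=False, output_transform=fmt)
    PIL.Image.fromarray(img[0], 'RGB').save(f'interp_{i}.png')
```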