IBM to Launch Neural Network Learning Control Service

Developers believe the new service will bring greater transparency to the reasons behind AI decisions and could eliminate the "black box problem"
20 September 2018

IBM has developed a service for monitoring the processes that occur during the training of neural networks. The system identifies emerging biases and brings greater transparency to the reasons behind the decisions made by AI.

The new tool works with popular AI frameworks such as Watson, TensorFlow, SparkML, AWS SageMaker and AzureML. The service runs on the IBM Cloud platform and helps monitor the learning process, making the necessary adjustments along the way. According to company representatives, the software is easy to adapt to any neural network architecture. Moreover, the system can automatically suggest corrections to the input data to mitigate bias.

The service visualizes the parameters of the learning process with diagrams, which makes the user's work easier. The data displayed includes the combination of factors taken into consideration, the confidence in the decision made, and the basis of that confidence. In addition, changes to the parameters are stored in a log, which allows the AI's actions to be studied more closely.
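The exact schema of these log entries is not public; a minimal sketch of what such a decision record might contain, with every field name purely illustrative, could look like this:

```python
from dataclasses import dataclass

# Hypothetical record structure; the actual schema of IBM's service is not
# public, so every field and value here is illustrative only.
@dataclass
class DecisionRecord:
    decision: str        # the model's output
    confidence: float    # confidence in the decision, 0..1
    factors: dict        # factors taken into consideration, with weights
    basis: str           # stated basis for the confidence

audit_log = []
audit_log.append(DecisionRecord(
    decision="approve",
    confidence=0.91,
    factors={"income": 0.6, "credit_history": 0.3, "age": 0.01},
    basis="matches many similar approved training cases",
))
```

Keeping such records in an append-only log is what makes it possible to study the AI's actions after the fact.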

The monitoring service is not free, but IBM says it plans to release an open-source version of the product, which the company frames as a contribution to international cooperation on eliminating AI bias.

The reasons behind decisions made by artificial intelligence are in most cases hidden from the end user. At the same time, studies have shown that neural networks can absorb biases and stereotypes, for example gender or racial ones, from their training data. This has given rise to a degree of mistrust of AI and a fear of losing control over the technology. According to an IBM poll, 82% of entrepreneurs are considering adopting neural networks; however, 60% fear possible problems, and 63% are not confident they will be able to manage the new tools.

The so-called "black box problem", the opacity of AI decision-making, is taken seriously by the global community, and work to increase transparency and trust is proceeding quite actively. In September 2018, MIT scientists published a development of their own that illustrates the decision-making process of a neural network.

Nvidia to Open SPADE Source Code

SPADE machine learning system creates realistic landscapes based on rough human sketches
15 April 2019

NVIDIA has released the source code of the SPADE machine learning system (GauGAN), which synthesizes realistic landscapes from rough sketches, along with trained models associated with the project. The system was demonstrated in March at the GTC 2019 conference, but the code was published only yesterday. The developments are released under the non-free CC BY-NC-SA 4.0 license (Creative Commons Attribution-NonCommercial-ShareAlike 4.0), which permits only non-commercial use. The code is written in Python using the PyTorch framework.

Sketches are drawn up as a segmented map that determines the placement of example objects in the scene. The nature of the generated objects is set using color labels: for example, a light blue fill turns into sky, blue into water, dark green into trees, light green into grass, light brown into stones, dark brown into mountains, gray into snow, a brown line into a road, and a blue line into a river. Additionally, reference images can be chosen to set the overall style of the composition and the time of day. The proposed tool for creating virtual worlds can be useful to a wide range of specialists, from architects and urban planners to game developers and landscape designers.
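In code terms, such a color-to-class mapping is just a lookup table. The RGB values below are invented for illustration; the actual palette is defined in the published SPADE code and differs from these:

```python
# Hypothetical RGB codes for a few of the labels; illustrative only.
LABELS = {
    (135, 206, 235): "sky",
    (0, 0, 205): "water",
    (0, 100, 0): "trees",
    (144, 238, 144): "grass",
    (139, 69, 19): "road",
}

def label_for(pixel):
    """Map an (R, G, B) tuple from the sketch to a semantic class."""
    return LABELS.get(tuple(pixel), "unlabeled")
```

Applying this lookup to every pixel of the sketch yields the segmented map the network consumes.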

Objects are synthesized by a generative adversarial network (GAN) which, starting from the schematic segmented map, creates realistic images using a model previously trained on several million photographs. In contrast to previously developed image synthesis systems, the proposed method is based on spatially-adaptive normalization: the segmentation map is fed into the normalization layers throughout the network, where learned transformations modulate the activations. Processing the segmented map in this way, rather than only as the network's input, helps preserve its semantic information, so the result closely matches the sketch while the style remains controllable.
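A minimal sketch of the spatially-adaptive normalization idea, written in NumPy rather than the project's PyTorch code, and with hand-picked per-label modulation parameters standing in for the learned convolutions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy activation map: 4 channels over an 8x8 spatial grid.
x = rng.normal(loc=2.0, scale=3.0, size=(4, 8, 8))

# Toy segmentation map: one label per pixel (0 = sky, 1 = water, say).
segmap = np.zeros((8, 8), dtype=int)
segmap[4:, :] = 1

# Hypothetical modulation parameters, shape (labels, channels); in the real
# layer these are produced by convolutions over the segmentation map.
gamma = np.array([[1.0] * 4, [0.5] * 4])
beta = np.array([[0.0] * 4, [1.0] * 4])

# Step 1: parameter-free normalization per channel.
mean = x.mean(axis=(1, 2), keepdims=True)
std = x.std(axis=(1, 2), keepdims=True)
x_norm = (x - mean) / (std + 1e-5)

# Step 2: spatially varying modulation driven by the segmentation map.
g = gamma[segmap].transpose(2, 0, 1)   # (channels, 8, 8)
b = beta[segmap].transpose(2, 0, 1)
out = g * x_norm + b
```

Because gamma and beta depend on the label at each pixel, different regions of the image are denormalized differently, which is how the segmentation map steers synthesis at every layer instead of being washed out after the input.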

To achieve realism, two competing neural networks are used: a generator and a discriminator. The generator produces images by mixing elements of real photos, and the discriminator looks for deviations from real images. This forms a feedback loop in which the generator assembles increasingly convincing samples until the discriminator can no longer distinguish them from the real ones.
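The adversarial feedback loop can be illustrated with a deliberately tiny toy, far simpler than any real GAN: here the "generator" is a single number, and the "discriminator" merely scores how close a sample sits to its running estimate of the real data:

```python
import random

random.seed(0)
REAL_MEAN = 5.0          # the "real data" distribution the generator must imitate

def discriminator_score(x, est_real_mean):
    # Toy discriminator: the closer a sample is to its estimate of real
    # data, the more "real" it looks (higher score).
    return -abs(x - est_real_mean)

gen_mean = 0.0           # the toy generator's only parameter
est_real_mean = 0.0      # the toy discriminator's only parameter
lr = 0.05

for _ in range(2000):
    real = random.gauss(REAL_MEAN, 1.0)
    fake = random.gauss(gen_mean, 1.0)
    # Discriminator step: refine its estimate of what real data looks like.
    est_real_mean += lr * (real - est_real_mean)
    # Generator step: nudge the parameter in whichever direction the
    # discriminator scores higher; this is the feedback loop.
    if discriminator_score(fake + 0.1, est_real_mean) > discriminator_score(fake, est_real_mean):
        gen_mean += lr
    elif discriminator_score(fake - 0.1, est_real_mean) > discriminator_score(fake, est_real_mean):
        gen_mean -= lr
```

By the end, gen_mean has drifted close to REAL_MEAN: once the generator's samples are centered on the real distribution, neither direction scores consistently better and the loop stalls, the toy analogue of the discriminator no longer telling fake from real.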