Google to Open Dopamine Source Code

The neural network training tool is built on TensorFlow, a machine learning library
30 August 2018

The Google Brain Team published the source code of Dopamine, a framework for reinforcement learning with neural networks. The repository contains 15 Python files with documentation. The tool is built on TensorFlow, a machine learning library.

The framework is built on the Arcade Learning Environment, a platform that evaluates AI performance using video games. Developers also get access to training and test datasets for the 60 games the platform supports. This approach standardizes the process of working with neural networks and makes results reproducible.

Dopamine supports four learning models: deep Q-learning (DQN), C51, Implicit Quantile Network (IQN), and a simplified version of Rainbow.
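At the core of all four of these value-based agents is the Q-learning update rule. The sketch below illustrates that rule in its simplest tabular form on a made-up 5-state chain environment; it is a generic teaching example, not the Dopamine API, and all names in it are illustrative.

```python
import random

random.seed(1)

# Toy tabular Q-learning on a 5-state chain: action 1 moves right, action 0
# moves left; reaching the rightmost state yields reward 1 and ends the
# episode. This sketches the update rule behind value-based agents such as
# DQN -- it is NOT the Dopamine API.

N_STATES = 5
ACTIONS = (0, 1)                 # 0 = left, 1 = right
alpha, gamma = 0.5, 0.9          # learning rate and discount factor
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def step(s, a):
    """Deterministic chain dynamics with a terminal reward on the right."""
    s2 = max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)
    reward = 1.0 if s2 == N_STATES - 1 else 0.0
    return s2, reward, s2 == N_STATES - 1

for episode in range(300):
    s = 0
    for _ in range(200):                   # cap episode length
        a = random.choice(ACTIONS)         # random behavior; Q-learning is off-policy
        s2, r, done = step(s, a)
        target = r if done else r + gamma * max(Q[s2])
        Q[s][a] += alpha * (target - Q[s][a])  # the Q-learning update
        if done:
            break
        s = s2

# Greedy policy: every non-terminal state should prefer "right" (action 1).
policy = [max(ACTIONS, key=lambda a: Q[s][a]) for s in range(N_STATES)]
print(policy)
```

Deep variants such as DQN replace the table `Q` with a neural network and the exact update with gradient steps on the same target, which is what lets them scale to Atari-sized observation spaces.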


Alongside the source code release, Google launched a website with tools for visualizing how agents interact with their environment in Dopamine. The site supports running multiple agents simultaneously and provides access to statistics and training models through TensorBoard.

Pablo Samuel Castro and Marc G. Bellemare, researchers on the Google Brain Team, expressed the hope that the flexibility and ease of use of the tool developed by their group will inspire developers to try out new ideas.

This is not Google's first step towards making neural network tools more accessible. In 2017, the company announced Google.ai, a project to democratize advances in machine learning.

More information is available on GitHub.

Neural Network to Create Landscapes from Sketches

Nvidia created the GauGAN model, which uses generative adversarial networks to process segmented images and turn people's sketches into beautiful landscapes
20 March 2019

At the GTC 2019 conference, NVIDIA presented a demo version of the GauGAN neural network, which can turn sketchy drawings into photorealistic images.

The GauGAN model, named after the famous artist Paul Gauguin, uses generative adversarial networks to process segmented images. The generator creates an image and passes it to a discriminator trained on real photographs. The discriminator, in turn, tells the generator pixel by pixel what to fix and where.

Simply put, the neural network works like filling in a coloring book, but instead of children's drawings it produces beautiful landscapes. Its creators emphasize that it does not simply stitch together pieces of existing images, but generates unique ones, like a real artist.
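The adversarial generator-versus-discriminator loop described above can be sketched in miniature. The toy example below trains a one-dimensional "generator" (a scaled, shifted Gaussian) against a logistic "discriminator" using hand-derived gradients; it only illustrates the GAN training principle and has nothing to do with GauGAN's actual image-scale architecture, and every name and constant in it is made up for the demonstration.

```python
import math, random

random.seed(0)

def sigmoid(t):
    t = max(-30.0, min(30.0, t))  # clip for numerical safety
    return 1.0 / (1.0 + math.exp(-t))

# Toy 1-D GAN: real data ~ N(4, 1); the generator produces x = a*z + b
# with z ~ N(0, 1) and must learn to match the real distribution.
a, b = 1.0, 0.0          # generator parameters
w, c = 0.0, 0.0          # discriminator parameters (logistic classifier)
lr = 0.05

for step in range(3000):
    z = random.gauss(0, 1)
    x_real = random.gauss(4, 1)
    x_fake = a * z + b

    # Discriminator update: push d(real) -> 1 and d(fake) -> 0.
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w -= lr * (-(1 - d_real) * x_real + d_fake * x_fake)
    c -= lr * (-(1 - d_real) + d_fake)

    # Generator update (non-saturating loss): push d(fake) -> 1,
    # i.e. follow the discriminator's feedback on what to fix.
    z = random.gauss(0, 1)
    x_fake = a * z + b
    d_fake = sigmoid(w * x_fake + c)
    a -= lr * (-(1 - d_fake) * w * z)
    b -= lr * (-(1 - d_fake) * w)

samples = [a * random.gauss(0, 1) + b for _ in range(1000)]
mean = sum(samples) / len(samples)
print(round(mean, 1))  # should drift toward the real mean of 4
```

In GauGAN the same two-player dynamic plays out over full images conditioned on a segmentation map, with the discriminator's per-pixel feedback steering the generator toward photorealism.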

Among other things, the neural network can imitate the styles of various artists and change the time of day and the season in an image. It also generates realistic reflections on water surfaces, such as ponds and rivers.

So far, GauGAN is configured to work with landscapes, but the neural network architecture allows it to be trained to create urban scenes as well. The full text of the paper is available in PDF here.

GauGAN can be useful to architects, city planners, landscape designers, and game developers alike. An AI that understands what the real world looks like will make it easier for them to realize their ideas and to revise them quickly. The neural network will soon be available on the AI Playground.