AI to Diagnose Eye Diseases

The system was developed in partnership with Moorfields Eye Hospital
15 August 2018

The British company DeepMind has introduced a system that identifies up to 50 eye diseases from optical coherence tomography (OCT) scans of the retina.

The system was developed in conjunction with Moorfields Eye Hospital, whose doctors take more than a thousand OCT scans every day. Processing even one scan takes specialists a long time, and the delay sometimes allows a disease to progress and cause irreversible damage to the eyes.

OCT analysis

The new algorithm reduces analysis time to a few seconds. It is based on two neural networks. The first, a segmentation network, converts the original OCT scan into a tissue map, identifying features of disease and their foci: hemorrhages, lesions, and so on. The second, a classification network, analyzes this 3D map and produces a diagnosis. The system not only reports the confidence of its analysis as a percentage, but also assigns a referral priority and gives recommendations.
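
As a rough illustration of this two-stage design, here is a minimal Python sketch with dummy stand-in models. The function names, disease list, and referral levels are placeholders for illustration only; DeepMind's actual networks are not public.

```python
# Minimal sketch of the segmentation -> classification pipeline described
# above, with dummy stand-ins for the real (non-public) DeepMind networks.
import numpy as np

DISEASE_CLASSES = ["normal", "wet AMD", "diabetic macular edema"]  # illustrative subset
REFERRAL_LEVELS = ["observation", "routine", "semi-urgent", "urgent"]


def segmentation_network(oct_volume: np.ndarray) -> np.ndarray:
    """Stage 1 (stand-in): label every voxel of the OCT volume with a tissue
    type (e.g. 0 = background, 1 = hemorrhage, 2 = lesion, ...)."""
    return np.zeros(oct_volume.shape, dtype=np.int8)  # dummy tissue map


def classification_network(tissue_map: np.ndarray) -> dict:
    """Stage 2 (stand-in): map the 3D tissue map to disease probabilities
    and a referral priority."""
    probs = np.full(len(DISEASE_CLASSES), 1.0 / len(DISEASE_CLASSES))
    return {
        "diagnosis_probabilities": dict(zip(DISEASE_CLASSES, probs)),
        "referral": REFERRAL_LEVELS[1],
    }


def diagnose(oct_volume: np.ndarray) -> dict:
    # The intermediate tissue map is what makes the system's reasoning
    # inspectable: a clinician can see why a diagnosis was suggested.
    tissue_map = segmentation_network(oct_volume)
    return classification_network(tissue_map)


if __name__ == "__main__":
    fake_scan = np.random.rand(64, 128, 128)  # depth x height x width
    print(diagnose(fake_scan))
```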

The algorithm was trained on a set of about 15,000 scans from roughly 7,500 patients, each accompanied by doctors' diagnoses. The main difference between DeepMind's system and similar ones is that it shows how it arrived at its conclusions. In addition, it works with scans from any type of OCT device, which would allow it to be used in any medical center.

The algorithm is now undergoing clinical testing at Moorfields Eye Hospital, which, according to its creators, may take three to five years. If the results are successful, the system will be rolled out to 30 more medical centers and clinics across the country.

In February 2018, scientists from Google and its medical subsidiary Verily presented a method that predicts heart disease with the help of machine learning. From a scan of the retina, the system estimates patient characteristics such as age, blood pressure, and smoking habits.

Neural Network to Create Landscapes from Sketches

Nvidia created the GauGAN model, which uses generative adversarial networks to process segmented images and create realistic landscapes from people's sketches
20 March 2019

At the GTC 2019 conference, NVIDIA presented a demo version of the GauGAN neural network, which can turn rough sketches into photorealistic images.

The GauGAN model, named after the artist Paul Gauguin, uses generative adversarial networks (GANs) to process segmented images. The generator creates an image and passes it to a discriminator trained on real photographs; the discriminator, in turn, tells the generator pixel by pixel what to fix and where.
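
The sketch below illustrates this generator/discriminator feedback loop in PyTorch. It shows the general GAN idea with toy networks and random tensors standing in for segmentation maps and photos; it is not NVIDIA's actual GauGAN/SPADE implementation.

```python
# Toy conditional GAN loop: a generator turns a segmentation map into an
# image, and a discriminator gives patch-wise "real or fake" feedback.
import torch
import torch.nn as nn

# Generator: maps a 3-channel segmentation map (e.g. sky/water/mountain) to RGB.
generator = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
)

# Discriminator: scores (segmentation map, image) pairs patch by patch,
# which is the pixel-level "what to fix and where" signal.
discriminator = nn.Sequential(
    nn.Conv2d(6, 64, 3, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 1, 3, padding=1),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

seg_map = torch.rand(1, 3, 64, 64)     # stand-in for a user's segmented sketch
real_photo = torch.rand(1, 3, 64, 64)  # stand-in for a real landscape photo

for step in range(2):  # a couple of illustrative training steps
    # Discriminator step: learn to tell real photos from generated ones.
    fake = generator(seg_map).detach()
    d_real = discriminator(torch.cat([seg_map, real_photo], dim=1))
    d_fake = discriminator(torch.cat([seg_map, fake], dim=1))
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: use the discriminator's patch-wise scores to improve.
    fake = generator(seg_map)
    d_fake = discriminator(torch.cat([seg_map, fake], dim=1))
    loss_g = bce(d_fake, torch.ones_like(d_fake))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```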

Simply put, the neural network works like coloring in a coloring book, but instead of children's drawings it produces beautiful landscapes. Its creators emphasize that it does not merely stitch together pieces of existing images, but generates unique ones, like a real artist.

Among other things, the neural network can imitate the styles of various artists and change the time of day or season in an image. It also generates realistic reflections on water surfaces such as ponds and rivers.

So far, GauGAN is configured to work with landscapes, but the architecture of the neural network also allows it to be trained to create urban scenes. The full text of the paper is available as a PDF here.

GauGAN could be useful to architects, city planners, landscape designers, and game developers alike. An AI that understands what the real world looks like will simplify the implementation of their ideas and allow them to be changed quickly. The neural network will soon be available on the AI Playground.