Google to Start New AI Services Test

Artificial intelligence uses the neural networks of the AutoML cloud service to recognize human speech, translate text, and find objects in images
26 July 2018

Google has announced the start of testing for new machine-learning tools. The AI uses neural networks in the AutoML cloud service to recognize human speech, translate text, and find objects in images. The company has also launched alpha testing of its tensor processors.

Google's goal is to make machine learning available to companies and developers that lack the expertise or resources to build such systems themselves. To that end, the AI learns to recognize human speech and to translate text; these skills are offered through the AutoML Natural Language and AutoML Translate services, respectively.

AI is empowerment, and we want to democratize that power for everyone and every business — from retail to agriculture, education to healthcare. AI is no longer a niche in the tech world — it’s the differentiator for businesses in every industry. And we’re committed to delivering the tools that will revolutionize them.

Fei-Fei Li

Chief scientist, Google AI

In addition to these tools, Google introduced:

  • Cloud Vision API, which is learning to recognize handwriting in PDF and TIFF files and can also locate objects within an image.
  • AI Contact Center, a tool for handling telephone calls with customers. During a call it recognizes the caller's speech and tries to resolve the issue; if it fails, the AI hands the call over to a human operator (Google calls this "agent assist") and passes along the information it has gathered.
  • Alpha testing of the third generation of its tensor processors.

The company aims to expand the presence of AI across all areas of life in order to simplify them and drive development. The AutoML cloud service appeared in January 2018, and beta testing began six months later.

Neural Network to Create Landscapes from Sketches

Nvidia created the GauGAN model, which uses generative adversarial networks to process segmented images and turn people's sketches into beautiful landscapes
20 March 2019

At the GTC 2019 conference, NVIDIA presented a demo version of the GauGAN neural network, which can turn sketchy drawings into photorealistic images.

The GauGAN model, named after the famous artist Paul Gauguin, uses generative adversarial networks (GANs) to process segmented images. The generator creates an image and passes it to the discriminator, which is trained on real photographs. The discriminator, in turn, gives the generator pixel-by-pixel feedback on what to fix and where.
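The adversarial setup described above can be sketched in miniature. The toy below is not GauGAN (which works on images with far larger networks); it is a one-dimensional GAN with a linear generator and a logistic-regression discriminator, trained with hand-written gradients, just to show the generator/discriminator loop: the discriminator learns to score real versus fake samples, and its feedback pulls the generator's output toward the real data.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def real_batch(n):
    # "Real photographs" stand-in: samples from a Gaussian centred at 3.
    return rng.normal(3.0, 0.5, n)

a, c = 1.0, 0.0   # generator g(z) = a*z + c
w, b = 0.1, 0.0   # discriminator D(x) = sigmoid(w*x + b)

lr, n = 0.05, 64
for step in range(500):
    # Discriminator step: gradient ascent on log D(real) + log(1 - D(fake)).
    xr = real_batch(n)
    z = rng.normal(0.0, 1.0, n)
    xf = a * z + c
    sr, sf = sigmoid(w * xr + b), sigmoid(w * xf + b)
    w += lr * np.mean((1 - sr) * xr - sf * xf)
    b += lr * np.mean((1 - sr) - sf)

    # Generator step: gradient ascent on log D(fake); the discriminator's
    # feedback (1 - sf) * w flows back through xf = a*z + c.
    z = rng.normal(0.0, 1.0, n)
    xf = a * z + c
    sf = sigmoid(w * xf + b)
    a += lr * np.mean((1 - sf) * w * z)
    c += lr * np.mean((1 - sf) * w)

fake = a * rng.normal(0.0, 1.0, 1000) + c
print(np.mean(fake))  # should have drifted toward the real mean of 3
```

GauGAN replaces these scalars with convolutional networks conditioned on a segmentation map, but the alternating two-player training is the same idea.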

Simply put, the neural network works like filling in a coloring book, except that instead of children's drawings it produces beautiful landscapes. Its creators emphasize that it does not simply stitch together pieces of existing images but generates unique ones, like a real artist.
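The coloring-book analogy can be made concrete. GauGAN's input is a segmentation map: an image whose pixel values are class labels such as sky, water, or rock. A non-learned "colorizer" would simply look each label up in a fixed palette, as sketched below (the labels and colors are illustrative, not GauGAN's real label set); GauGAN instead learns to synthesize realistic texture for each labeled region.

```python
import numpy as np

# Illustrative class labels and a flat RGB palette (hypothetical values).
SKY, WATER, ROCK = 0, 1, 2
palette = np.array([
    [135, 206, 235],  # sky   -> light blue
    [  0, 105, 148],  # water -> deep blue
    [112, 128, 144],  # rock  -> slate grey
], dtype=np.uint8)

# A tiny 4x6 "sketch": sky on top, water bottom-left, rock bottom-right.
seg = np.array([
    [SKY,   SKY,   SKY,   SKY,  SKY,  SKY],
    [SKY,   SKY,   SKY,   SKY,  SKY,  SKY],
    [WATER, WATER, WATER, ROCK, ROCK, ROCK],
    [WATER, WATER, WATER, ROCK, ROCK, ROCK],
])

# Flat coloring: index the palette with the label map.
rgb = palette[seg]
print(rgb.shape, rgb[0, 0])  # -> (4, 6, 3) [135 206 235]
```

Every pixel with the same label gets the same color here; GauGAN's generator replaces this lookup with a network that fills each region with plausible, varied detail.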

Among other things, the neural network can imitate the styles of various artists and change the time of day and season in an image. It also generates realistic reflections on water surfaces such as ponds and rivers.

So far, GauGAN is configured to work with landscapes, but its architecture allows it to be trained to create urban scenes as well. The source text of the paper is available in PDF here.

GauGAN can be useful to architects, city planners, landscape designers, and game developers alike. An AI that understands what the real world looks like will make it easier for them to realize their ideas and revise them quickly. The neural network will soon be available on the AI Playground.