Google Starts Testing New AI Services

The artificial intelligence uses AutoML neural networks to recognize human speech, translate texts, and detect objects in images
26 July 2018

Google has announced the start of testing for new machine learning tools. The artificial intelligence uses neural networks from the Cloud AutoML service to recognize human speech and translate texts, as well as to detect objects in images. In addition, the company has launched alpha testing of its tensor processors.

Google's goal is to bring machine learning to companies and developers that lack the expertise or resources to solve such problems on their own. To that end, the AI is trained to recognize human speech and to translate texts; these skills are covered by the AutoML Natural Language and AutoML Translation services, respectively.
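As a rough illustration of how a trained AutoML Translation model might be queried, here is a minimal sketch using the google-cloud-automl Python client (the v1beta1 surface available around the time of the announcement); the project and model IDs are placeholders, and the exact call signatures may differ between library versions.

```python
# Minimal sketch, not an official example: send a text snippet to a trained
# AutoML Translation model. Project/model IDs below are placeholders.
from google.cloud import automl_v1beta1 as automl

def translate(text: str, project_id: str, model_id: str) -> str:
    client = automl.PredictionServiceClient()
    # Full resource name of the trained model (placeholder IDs).
    model_name = f"projects/{project_id}/locations/us-central1/models/{model_id}"
    payload = {"text_snippet": {"content": text, "mime_type": "text/plain"}}
    response = client.predict(name=model_name, payload=payload)
    # The translation is returned inside the prediction payload.
    return response.payload[0].translation.translated_content.content

# Example call with placeholder IDs:
# print(translate("Hello, world", "my-project", "TRN0000000000000000000"))
```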

AI is empowerment, and we want to democratize that power for everyone and every business — from retail to agriculture, education to healthcare. AI is no longer a niche in the tech world — it’s the differentiator for businesses in every industry. And we’re committed to delivering the tools that will revolutionize them.

Fei-Fei Li

Chief scientist, Google AI

In addition to these tools, Google introduced:

  • Cloud Vision API, which now recognizes handwriting and extracts text from PDF and TIFF files, and can also determine the location of objects in an image (see the sketch after this list).
  • Contact Center AI, a tool designed for telephone conversations with customers. During a call it recognizes the caller's speech and tries to resolve the issue; if it fails, the AI hands the caller over to a human operator (a feature Google calls "Agent Assist") and passes along the information it has gathered.
  • Alpha testing of the third generation of tensor processors.
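As a sketch of the Vision API capabilities mentioned in the first bullet, the snippet below assumes the google-cloud-vision Python client and a local image file; handwriting in PDF and TIFF documents goes through a separate batch file-annotation endpoint, so for brevity this example runs document text detection and object localization on a single image (class and method names reflect recent client versions and may vary).

```python
# Sketch only: handwriting/dense text detection and object localization with
# the google-cloud-vision Python client. The file name is a placeholder.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("scanned_page.jpg", "rb") as f:
    image = vision.Image(content=f.read())

# Handwritten and dense text are handled by document text detection.
text_response = client.document_text_detection(image=image)
print(text_response.full_text_annotation.text)

# Object localization returns each detected object with a bounding polygon.
objects = client.object_localization(image=image).localized_object_annotations
for obj in objects:
    box = [(v.x, v.y) for v in obj.bounding_poly.normalized_vertices]
    print(f"{obj.name} ({obj.score:.2f}): {box}")
```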

The company is seeking to expand the presence of AI in all spheres of life in order to simplify everyday tasks and drive development. The Cloud AutoML service appeared in January 2018, and beta testing began six months later.

AI Recognizes Text Typed on an Invisible Keyboard

The developers say they set out to increase typing speed on on-screen keyboards
06 August 2019

Korean developers have created an algorithm that recognizes text typed on an imaginary keyboard on a touchscreen. Such a “keyboard” is not tied to a specific area of the screen, and the “keys” are not confined to well-defined squares.

As a result, a person can touch-type in a QWERTY layout without thinking about where the keyboard should be or whether the finger has landed on the right key.

Clouds of presses of the imaginary buttons

According to the developers, their goal was to increase typing speed on on-screen keyboards. Unlike a hardware keyboard, an on-screen keyboard provides no feedback to confirm a keypress, so there is a risk of missing the intended key. Because of this, people constantly stare at the screen and end up typing more slowly.

The new algorithm removes this worry: text can be entered from memory, and the keyboard guesses what the person meant to type with 96% accuracy. Tests showed that the average typing speed on the imaginary keyboard is only slightly lower than on a hardware keyboard: 45 words per minute versus 51.
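The article does not describe the model itself, so the following is only an illustrative sketch of the general idea behind such decoders, not the authors' implementation: score each touch point against a Gaussian "cloud" around an estimated key center and pick the dictionary word that best explains the whole touch sequence. The key positions, noise scale, and toy vocabulary below are made-up assumptions.

```python
# Illustrative sketch only (not the authors' algorithm): decode a sequence of
# touch points into a word by scoring candidate words against Gaussian
# likelihoods centered on assumed key positions.
import math

# Assumed key centers on a unit-width QWERTY-like layout (made-up coordinates).
KEY_POS = {
    "c": (0.35, 0.8), "a": (0.07, 0.5), "t": (0.45, 0.2),
    "r": (0.35, 0.2), "o": (0.85, 0.2), "b": (0.55, 0.8),
}
SIGMA = 0.08  # assumed spread of the touch "cloud" around each key
VOCAB = ["cat", "car", "cab", "cot"]  # toy vocabulary

def log_likelihood(word: str, touches: list[tuple[float, float]]) -> float:
    """Sum of log Gaussian likelihoods of each touch given the word's keys."""
    if len(word) != len(touches):
        return float("-inf")
    total = 0.0
    for ch, (x, y) in zip(word, touches):
        kx, ky = KEY_POS[ch]
        d2 = (x - kx) ** 2 + (y - ky) ** 2
        total += -d2 / (2 * SIGMA ** 2) - math.log(2 * math.pi * SIGMA ** 2)
    return total

def decode(touches: list[tuple[float, float]]) -> str:
    """Return the vocabulary word that best explains the touch sequence."""
    return max(VOCAB, key=lambda w: log_likelihood(w, touches))

# Noisy touches roughly over "c", "a", "t":
print(decode([(0.33, 0.83), (0.10, 0.47), (0.47, 0.18)]))  # -> "cat"
```

A real system would add a statistical language model over words and adapt the key centers to each user's drifting hand position; the sketch above only shows the geometric scoring step.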