Microsoft to Open ONNX Runtime Source Code

It's a high-performance engine for machine learning models in the ONNX (Open Neural Network Exchange) format
07 December 2018

Microsoft has published the source code of ONNX Runtime on GitHub. The project is a high-performance engine for machine learning models in the ONNX (Open Neural Network Exchange) format, which keeps ML models interoperable across open AI frameworks (TensorFlow, Cognitive Toolkit, Caffe2, MXNet). ONNX Runtime is used to speed up computation in deep learning models.
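The article includes no code, but the basic workflow is easy to illustrate with the onnxruntime Python package. A minimal sketch, assuming a ResNet-style image model; the file name and input shape here are placeholders, not anything mandated by the article:

```python
# Minimal sketch of running an ONNX model with the onnxruntime package.
# "model.onnx" and the 224x224 input shape are placeholder assumptions.
import numpy as np
import onnxruntime as ort

# Create an inference session on the CPU execution provider.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

# Inspect the model's declared input so data is fed under the right name.
input_meta = session.get_inputs()[0]
print(input_meta.name, input_meta.shape, input_meta.type)

# Run inference on a dummy batch (1 image, 3 channels, 224x224 pixels).
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)
outputs = session.run(None, {input_meta.name: dummy})
print(outputs[0].shape)
```

`session.run` returns a list of arrays, one per declared model output.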

By open-sourcing the project, the company hopes to attract more people to machine learning development, and Microsoft has promised to respond quickly to contributions.

To use ONNX Runtime, you need an ONNX model and a tool to work with it; a list of tools and instructions is available on the GitHub page. Microsoft offers several options for those who do not know where to start:

  • download ready-made ResNet or TinyYOLO models from the ONNX Model Zoo;
  • create your own computer vision models using Azure Custom Vision Service;
  • convert models created in TensorFlow, Keras, Scikit-Learn or Core ML using ONNXMLTools and TF2ONNX (see the sketch after this list);
  • train new models using Azure Machine Learning and save the result in ONNX format.
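As an illustration of the conversion route, here is a hedged sketch using ONNXMLTools to export a scikit-learn classifier; the dataset, model and file name are arbitrary examples rather than anything prescribed by the article:

```python
# Sketch: convert a scikit-learn model to ONNX with ONNXMLTools.
# The iris classifier is just a convenient stand-in.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
import onnxmltools
from onnxmltools.convert.common.data_types import FloatTensorType

X, y = load_iris(return_X_y=True)
clf = LogisticRegression(max_iter=200).fit(X, y)

# Declare the input signature: a float tensor with 4 features and a
# dynamic batch dimension, then convert and save.
initial_types = [("float_input", FloatTensorType([None, 4]))]
onnx_model = onnxmltools.convert_sklearn(clf, initial_types=initial_types)
onnxmltools.utils.save_model(onnx_model, "iris.onnx")
```

The saved file can then be loaded with ONNX Runtime exactly as in the earlier snippet.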

According to Microsoft's Eric Boyd, the Bing Search, Bing Ads and Office teams achieved twice the performance of their ML models with ONNX Runtime compared to their previous solutions. Support for the project from both individual users and large companies is therefore important. Several of the latter are already working on integrations:

  • Microsoft and Intel are implementing the nGraph compiler;
  • NVIDIA is working on TensorRT integration;
  • Qualcomm is developing support for the Snapdragon mobile platform.

In early December 2017, ONNX graduated from early access to production-ready status. The companies behind it urged the community to join the project and help create a unified platform for working with deep learning tools.

AI to Recognize Text Typed on an Invisible Keyboard

The developers set out to increase typing speed on on-screen keyboards
06 August 2019

Korean developers have created an algorithm that recognizes text typed on an imaginary keyboard on a touchscreen. Such a “keyboard” is not tied to a specific area of the screen, and the “keys” are not limited to clearly bounded squares.

As a result, a person can type blind in a QWERTY layout without thinking about where the keyboard is or whether their finger landed on the right key.
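The article does not describe the algorithm's internals, so the following is only a sketch of the general idea it hints at: instead of fixed key rectangles, each key is modeled statistically (here, as a 2D Gaussian fitted to the user's own taps) and each tap is decoded to the most likely key. All class and variable names are hypothetical:

```python
# Hedged sketch of "soft" keys: each key is a 2D Gaussian fitted to where
# the user actually taps; a tap is decoded to the most likely key rather
# than tested against a fixed rectangle.
import numpy as np

class SoftKeyDecoder:
    def __init__(self):
        self.means = {}  # key label -> (x, y) centroid of the user's taps
        self.covs = {}   # key label -> 2x2 covariance of those taps

    def fit(self, taps_by_key):
        """taps_by_key maps a key label to an (n, 2) array of tap coordinates."""
        for key, taps in taps_by_key.items():
            taps = np.asarray(taps, dtype=float)
            self.means[key] = taps.mean(axis=0)
            # A small ridge keeps the covariance invertible for tight clusters.
            self.covs[key] = np.cov(taps.T) + 1e-6 * np.eye(2)

    def log_likelihood(self, tap, key):
        """Gaussian log-density of a single tap under one key's model."""
        d = tap - self.means[key]
        cov = self.covs[key]
        return -0.5 * (d @ np.linalg.solve(cov, d)
                       + np.log(np.linalg.det(cov))
                       + 2 * np.log(2 * np.pi))

    def decode(self, taps):
        """Assign each (x, y) tap to its maximum-likelihood key."""
        out = []
        for tap in taps:
            t = np.asarray(tap, dtype=float)
            out.append(max(self.means, key=lambda k: self.log_likelihood(t, k)))
        return "".join(out)

# Calibrate on taps gathered while the user typed known text, then decode.
rng = np.random.default_rng(0)
calib = {
    "q": rng.normal([10, 10], 3, size=(20, 2)),
    "w": rng.normal([40, 10], 3, size=(20, 2)),
    "e": rng.normal([70, 10], 3, size=(20, 2)),
}
decoder = SoftKeyDecoder()
decoder.fit(calib)
print(decoder.decode([[11, 9], [39, 12], [68, 10]]))  # expected: "qwe"
```

A real decoder would presumably also fold a language model over these per-tap likelihoods to correct outright misses; this sketch covers only the geometric part.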

Imaginary button press clouds

According to the developers, their goal was to increase typing speed on on-screen keyboards. Unlike a hardware keyboard, an on-screen keyboard offers no tactile feedback to confirm a press, so there is always a risk of missing the intended key. Because of this, people constantly stare at the screen and end up typing more slowly.

The new algorithm removes this worry: text can be entered from muscle memory, and the keyboard guesses what the person meant with 96% accuracy. Tests showed that the average typing speed on the imaginary keyboard is only slightly lower than on a hardware keyboard: 45 words per minute versus 51.