Deep learning API for Python

A short review of the Gluon API: a clear, concise, simple yet powerful and efficient API for deep learning
16 October 2017

The Gluon API specification is an effort to improve speed, flexibility, and accessibility of deep learning technology for all developers, regardless of their deep learning framework of choice. The Gluon API offers a flexible interface that simplifies the process of prototyping, building, and training deep learning models without sacrificing training speed. 

Check out the official screencast tutorials on using Gluon in different IDEs.


Main advantages:

  • Simple, Easy-to-Understand Code: Gluon offers a full set of plug-and-play neural network building blocks, including predefined layers, optimizers, and initializers.
  • Flexible, Imperative Structure: Gluon does not require the neural network model to be rigidly defined, but rather brings the training algorithm and model closer together to provide flexibility in the development process.
  • Dynamic Graphs: Gluon enables developers to define neural network models that are dynamic, meaning they can be built on the fly, with any structure, and using any of Python’s native control flow (see the sketch after this list).
  • High Performance: Gluon provides all of the above benefits without impacting the training speed that the underlying engine provides.
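
The points above are easiest to see in code. Below is a minimal sketch, assuming MXNet with the Gluon API is installed; the network shape, dummy data, and hyperparameters are illustrative only.

    # Minimal Gluon sketch: plug-and-play blocks plus an imperative training step.
    import mxnet as mx
    from mxnet import nd, autograd, gluon

    # Predefined layers assembled into a network (plug-and-play building blocks).
    net = gluon.nn.Sequential()
    with net.name_scope():
        net.add(gluon.nn.Dense(64, activation='relu'))
        net.add(gluon.nn.Dense(10))
    net.initialize(mx.init.Xavier())

    # The training algorithm sits right next to the model: loss, optimizer, loop.
    loss_fn = gluon.loss.SoftmaxCrossEntropyLoss()
    trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.1})

    x = nd.random.uniform(shape=(32, 784))      # dummy batch of inputs
    y = nd.array([i % 10 for i in range(32)])   # dummy class labels

    with autograd.record():                     # the graph is built on the fly
        loss = loss_fn(net(x), y)
    loss.backward()
    trainer.step(batch_size=32)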

Learn more on GitHub

Facebook to Release PyTorch 1.0

This release added support for major cloud platforms, a C++ interface, and a set of JIT compilers
10 December 2018

Facebook has released PyTorch 1.0, a stable version of its machine learning library. This iteration adds support for major cloud platforms, a C++ interface, a set of JIT compilers, and various other improvements.

The stable version ships with a set of JIT compilers that remove the code's dependence on the Python interpreter. Model code is converted into Torch Script, a statically analyzable subset of Python. A model can still be worked with from Python, but it can also be exported to projects unrelated to the language: the PyTorch developers state that code processed this way can be used from the C++ API.
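
As a rough illustration, here is a minimal sketch of the tracing path into Torch Script; the module, example input, and file name are made up for the example.

    import torch

    class Net(torch.nn.Module):
        def __init__(self):
            super(Net, self).__init__()
            self.fc = torch.nn.Linear(4, 2)

        def forward(self, x):
            return torch.relu(self.fc(x))

    # Trace the model with an example input to produce a Torch Script module.
    traced = torch.jit.trace(Net(), torch.rand(1, 4))

    # The serialized module no longer needs the Python interpreter and can be
    # loaded from C++ code via torch::jit::load.
    traced.save("net.pt")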

The torch.distributed package and the torch.nn.parallel.DistributedDataParallel module have been completely redesigned. torch.distributed now offers better performance and works asynchronously with the Gloo, NCCL, and MPI backends.
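
A minimal per-process sketch of the redesigned API might look like the following; the Gloo backend and the environment-based initialization (MASTER_ADDR, MASTER_PORT, RANK, WORLD_SIZE set by a launcher) are assumptions made for the example.

    import torch
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel

    # Each worker process joins the group; Gloo is used here, NCCL and MPI
    # are also supported.
    dist.init_process_group(backend='gloo', init_method='env://')

    model = torch.nn.Linear(10, 1)
    # Gradients are averaged across all participating processes during backward().
    ddp_model = DistributedDataParallel(model)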

The developers added a C++ frontend to PyTorch 1.0. It contains analogs of Python interface components such as torch.nn, torch.optim, and torch.data. According to its creators, the new interface should deliver high performance for C++ applications. The C++ API is still experimental, but it can already be used in projects.

To make working with PyTorch 1.0 more convenient, the developers created Torch Hub, a repository that stores pre-trained neural network models. You can publish your own model by adding a hubconf.py file to a repository, after which the model becomes available for download by any user via the torch.hub.load API.
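
For illustration, a hubconf.py entry point and its consumption could look roughly like this; the repository name, entry point name, and the use of a torchvision model are placeholders, not part of the release notes.

    # hubconf.py in the root of a GitHub repository (names below are hypothetical)
    dependencies = ['torch', 'torchvision']

    from torchvision.models import resnet18

    def my_resnet18(pretrained=False, **kwargs):
        """Entry point that torch.hub.load can call by name."""
        return resnet18(pretrained=pretrained, **kwargs)

    # On the consumer side:
    #   import torch
    #   model = torch.hub.load('someuser/somerepo', 'my_resnet18', pretrained=True)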

Support for C extensions and the torch.utils.trainer module have been removed from the library.

Facebook released a preview version of PyTorch 1.0 at the beginning of October 2018, and within two months the developers brought the framework to a stable release.