NVIDIA Presents the Turing Graphics Architecture

Simultaneously with the announcement of NVIDIA Turing, Jensen Huang presented the first video cards built on the new architecture
15 August 2018

At the SIGGRAPH computer graphics conference in Vancouver, NVIDIA CEO Jensen Huang unveiled the company's new GPU architecture, NVIDIA Turing, which supports hybrid rendering. This approach combines real-time ray tracing, machine learning, simulation, and rasterization. The first products based on Turing will reach the market in the fourth quarter of 2018.

The new architecture adds support for real-time ray tracing, provided by dedicated RT cores. They accelerate the computation of how light and sound travel through a 3D environment at up to 10 gigarays per second, speeding up ray-tracing operations by as much as 25 times compared to the previous Pascal architecture.

In addition, NVIDIA Turing is equipped with tensor cores that accelerate deep neural networks, delivering up to 500 trillion tensor operations per second. This performance powers the new NVIDIA NGX SDK, which integrates trained neural networks into graphics, audio, and video applications.

The new Turing streaming multiprocessor adds an integer execution unit that runs in parallel with the floating-point datapath, as well as a new unified cache architecture with twice the bandwidth of Pascal. With 4,608 CUDA cores, Turing GPUs deliver up to 16 trillion integer operations per second in parallel with floating-point operations.

Simultaneously with the announcement of NVIDIA Turing, Jensen Huang presented the first video cards built on the new architecture: the Quadro RTX 8000 for $10,000, the Quadro RTX 6000 for $6,300, and the Quadro RTX 5000 for $2,300. The company's CEO also announced the NVIDIA Quadro RTX Server for on-demand rendering in large data centers. Key characteristics of the new products include:

  • 16 GB or more of Samsung GDDR6 memory, enough to handle complex graphics assets such as 8K video;
  • NVIDIA NVLink technology for linking two video cards into a cluster with up to 96 GB of combined memory and data transfer at 100 GB/s;
  • native support for VirtualLink, a unified connection for VR devices over a single USB Type-C cable;
  • graphics tools such as Variable Rate Shading, Multi-View Rendering, and VRWorks Audio for advanced VR applications.

During the conference, the company also published training materials on using NVIDIA RTX technologies and the Microsoft DXR extensions for DirectX, aimed at developers who want to achieve cinematic-quality graphics in their games.

NVIDIA Opens the SPADE Source Code

SPADE machine learning system creates realistic landscapes based on rough human sketches
15 April 2019

NVIDIA has released the source code of the SPADE machine learning system (GauGAN), which synthesizes realistic landscapes from rough sketches, along with trained models associated with the project. The system was demonstrated in March at the GTC 2019 conference, but the code was published only yesterday. The work is released under the non-free CC BY-NC-SA 4.0 license (Creative Commons Attribution-NonCommercial-ShareAlike 4.0), which permits only non-commercial use. The code is written in Python using the PyTorch framework.

Sketches take the form of a segmentation map that determines the placement of objects in the scene. The nature of each generated object is set by a color label: for example, a light-blue fill becomes sky, dark blue becomes water, dark green becomes trees, light green becomes grass, light brown becomes stones, dark brown becomes mountains, gray becomes snow, a brown line becomes a road, and a blue line becomes a river. In addition, reference images can be chosen to set the overall style of the composition and the time of day. The proposed tool for creating virtual worlds could be useful to a wide range of specialists, from architects and urban planners to game developers and landscape designers.
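The color-to-class mapping described above can be sketched in a few lines of numpy. The palette below is purely illustrative (the actual GauGAN colors may differ), and unmatched pixels simply default to class 0:

```python
import numpy as np

# Hypothetical color-to-class palette, loosely following the labels
# described above; the real GauGAN palette may use different RGB values.
PALETTE = {
    (135, 206, 235): "sky",      # light blue
    (0, 0, 139): "water",        # dark blue
    (0, 100, 0): "trees",        # dark green
    (144, 238, 144): "grass",    # light green
}

def sketch_to_label_map(sketch: np.ndarray) -> np.ndarray:
    """Convert an HxWx3 RGB sketch into an HxW map of class indices.

    Pixels that match no palette color fall back to class 0.
    """
    labels = np.zeros(sketch.shape[:2], dtype=np.int64)
    for idx, color in enumerate(PALETTE):
        mask = np.all(sketch == np.array(color, dtype=sketch.dtype), axis=-1)
        labels[mask] = idx
    return labels

# Toy 2x2 sketch: sky on the top row, water on the bottom row.
sketch = np.array([[[135, 206, 235], [135, 206, 235]],
                   [[0, 0, 139], [0, 0, 139]]], dtype=np.uint8)
print(sketch_to_label_map(sketch).tolist())  # [[0, 0], [1, 1]]
```

In the real system this label map, rather than the raw colors, is what the generator network consumes.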

Objects are synthesized by a generative adversarial network (GAN), which, starting from the schematic segmentation map, creates realistic images by drawing on a model previously trained on several million photographs. In contrast to earlier image-synthesis systems, the proposed method is built on spatially-adaptive normalization: the segmentation map modulates the activations inside the network's normalization layers. Feeding the map into the normalization layers, rather than only at the network input, keeps the semantic information from being washed out, so the result closely matches the sketch while the style remains controllable.
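The idea of spatially-adaptive normalization can be illustrated with a minimal numpy sketch (not the official PyTorch implementation): activations are first normalized without parameters, then scaled and shifted per pixel by values predicted from the segmentation map. The random "learned" weights here stand in for a 1x1 convolution:

```python
import numpy as np

rng = np.random.default_rng(0)

def spade_norm(x, seg_onehot, gamma_w, beta_w, eps=1e-5):
    """Toy spatially-adaptive normalization.

    x:          (C, H, W) activations
    seg_onehot: (K, H, W) one-hot segmentation map with K classes
    gamma_w, beta_w: (C, K) stand-ins for learned 1x1-conv weights that
                map the segmentation map to a per-pixel scale and shift.
    """
    # 1. Parameter-free normalization over the spatial dimensions.
    mean = x.mean(axis=(1, 2), keepdims=True)
    std = x.std(axis=(1, 2), keepdims=True)
    x_norm = (x - mean) / (std + eps)
    # 2. Per-pixel modulation predicted from the segmentation map,
    #    so each semantic class gets its own scale and shift.
    gamma = np.einsum("ck,khw->chw", gamma_w, seg_onehot)
    beta = np.einsum("ck,khw->chw", beta_w, seg_onehot)
    return gamma * x_norm + beta

C, K, H, W = 4, 3, 8, 8
x = rng.normal(size=(C, H, W))
seg = np.eye(K)[rng.integers(0, K, size=(H, W))].transpose(2, 0, 1)
out = spade_norm(x, seg, rng.normal(size=(C, K)), rng.normal(size=(C, K)))
print(out.shape)  # (4, 8, 8)
```

Because the modulation depends on the pixel's class rather than being uniform across the image, the semantic layout survives many layers of processing.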

To achieve realism, two competing neural networks are used: a generator and a discriminator. The generator produces images by blending elements learned from real photographs, while the discriminator looks for deviations from real images. This forms a feedback loop in which the generator assembles increasingly convincing samples until the discriminator can no longer distinguish them from real ones.
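The adversarial feedback loop described above can be demonstrated with a deliberately tiny numpy sketch: a one-parameter-pair generator learns to match scalar samples drawn from N(3, 0.5), while a logistic discriminator tries to tell them apart. The gradients are derived by hand for the standard GAN losses; this is purely illustrative, not SPADE's actual training code:

```python
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

# Generator:     G(z) = w_g * z + b_g,   z ~ N(0, 1)
# Discriminator: D(x) = sigmoid(w_d * x + b_d)
w_g, b_g = 1.0, 0.0
w_d, b_d = 0.1, 0.0
lr, batch = 0.05, 64

for step in range(2000):
    z = rng.normal(size=batch)
    real = rng.normal(3.0, 0.5, size=batch)
    fake = w_g * z + b_g

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real = sigmoid(w_d * real + b_d)
    d_fake = sigmoid(w_d * fake + b_d)
    ds_real = -(1.0 - d_real)          # dL/ds on real samples
    ds_fake = d_fake                   # dL/ds on fake samples
    w_d -= lr * (ds_real * real + ds_fake * fake).mean()
    b_d -= lr * (ds_real + ds_fake).mean()

    # Generator step: use the discriminator's feedback to push
    # D(fake) toward 1, i.e. make fakes look real.
    d_fake = sigmoid(w_d * fake + b_d)
    ds = -(1.0 - d_fake) * w_d         # dL/dx through the discriminator
    w_g -= lr * (ds * z).mean()
    b_g -= lr * ds.mean()

print(f"generated mean ~ {b_g:.2f} (real mean is 3.0)")
```

Over the course of training, the generator's output drifts toward the real distribution precisely because the only signal it receives is the discriminator's verdict.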