NVIDIA Presents Turing Graphics Architecture

Alongside the announcement of NVIDIA Turing, Jensen Huang presented the first graphics cards built on the new architecture
15 August 2018

At the SIGGRAPH computer graphics conference in Vancouver, NVIDIA CEO Jensen Huang presented the company's latest development: the NVIDIA Turing GPU architecture, which supports hybrid rendering. The technology combines real-time ray tracing, machine learning, simulation, and rasterization. The first Turing-based products will reach the market in the fourth quarter of 2018.

NVIDIA's new architecture adds support for real-time ray tracing, provided by dedicated processors called RT Cores. They accelerate the computation of how light and sound travel in 3D environments at up to 10 Gigarays per second, speeding up ray-tracing operations by as much as 25 times compared with the previous Pascal architecture.
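The "Gigarays per second" figure counts ray-scene intersection queries. For a sense of what the RT Cores offload, the sketch below shows a single ray-triangle test (the classic Möller-Trumbore algorithm) in plain Python; the hardware performs this kind of test, together with BVH traversal, billions of times per second. The code is purely illustrative and is not part of any NVIDIA SDK.

```python
import numpy as np

def ray_triangle_intersect(origin, direction, v0, v1, v2, eps=1e-8):
    """Möller-Trumbore ray/triangle intersection test.

    Returns the distance t along the ray to the hit point,
    or None if the ray misses the triangle.
    """
    edge1 = v1 - v0
    edge2 = v2 - v0
    pvec = np.cross(direction, edge2)
    det = np.dot(edge1, pvec)
    if abs(det) < eps:          # ray is parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    tvec = origin - v0
    u = np.dot(tvec, pvec) * inv_det
    if u < 0.0 or u > 1.0:
        return None
    qvec = np.cross(tvec, edge1)
    v = np.dot(direction, qvec) * inv_det
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(edge2, qvec) * inv_det
    return t if t > eps else None

# One ray against one triangle; an RT Core performs such tests
# (plus acceleration-structure traversal) in fixed-function hardware.
hit = ray_triangle_intersect(
    origin=np.array([0.0, 0.0, -1.0]),
    direction=np.array([0.0, 0.0, 1.0]),
    v0=np.array([-1.0, -1.0, 0.0]),
    v1=np.array([ 1.0, -1.0, 0.0]),
    v2=np.array([ 0.0,  1.0, 0.0]),
)
print(hit)  # distance to the hit point, here 1.0
```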

In addition, NVIDIA Turing is equipped with Tensor Cores for accelerating deep neural networks; they deliver up to 500 trillion tensor operations per second. This performance powers the new NVIDIA NGX SDK, which integrates trained neural networks into graphics, sound, and video processing in applications.
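For context on what a "tensor operation" means here: a Tensor Core executes a fused matrix multiply-accumulate over small matrix tiles, with FP16 inputs and FP16 or FP32 accumulation. The snippet below is only a rough software emulation of that arithmetic on an assumed 4x4 tile, not an interface to the hardware.

```python
import numpy as np

# Emulate one tensor-core-style operation: D = A @ B + C on a small tile.
# FP16 inputs, FP32 accumulation (illustration only; the real operation
# runs in dedicated hardware, many per clock across the GPU).
A = np.random.rand(4, 4).astype(np.float16)
B = np.random.rand(4, 4).astype(np.float16)
C = np.zeros((4, 4), dtype=np.float32)

D = A.astype(np.float32) @ B.astype(np.float32) + C
print(D)
```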

The new streaming multiprocessor (SM) in Turing-based graphics accelerators adds an integer execution unit that runs in parallel with the floating-point datapath, along with a new unified cache architecture offering twice the bandwidth of Pascal. With up to 4,608 CUDA cores, Turing GPUs deliver up to 16 trillion integer operations per second in parallel with floating-point operations.

Simultaneously with the Turing announcement, Jensen Huang presented the first graphics cards built on the new architecture: the Quadro RTX 8000 for $10,000, the Quadro RTX 6000 for $6,300, and the Quadro RTX 5000 for $2,300. He also announced the NVIDIA Quadro RTX Server for on-demand rendering in large data centers. Key features of the new products include:

  • 16 GB or more of Samsung GDDR6 memory, enough to handle complex graphics assets such as 8K video;
  • NVIDIA NVLink technology for linking two cards into a single configuration with up to 96 GB of combined memory and data transfer at 100 GB/s;
  • native support for VirtualLink, a unified USB Type-C connection for VR headsets;
  • Variable Rate Shading, Multi-View Rendering, and VRWorks Audio graphics tools for advanced VR applications.

During the conference, the company also published training materials on using NVIDIA RTX technologies and the Microsoft DXR extension for DirectX, aimed at developers who want to achieve cinematic-quality graphics in their game projects.

AI Used to Create 3D Motion Sculptures

The system, developed by MIT and Berkeley scientists, is called MoSculp and is based on artificial intelligence
21 September 2018

MoSculp, a joint effort of scientists from MIT and the University of California, Berkeley, is built around a neural network. The system analyzes video footage of a moving person and generates what its creators call an "interactive visualization of shape and time." According to project lead Xiuming Zhang, the software will be useful to athletes for detailed analysis of their movements.

In the first stage, the system scans the video frame by frame and locates key points on the subject's body, such as the elbows, knees, and ankles, using the OpenPose library developed at Carnegie Mellon University. From these data, the neural network assembles a 3D model of the person in each frame and computes the trajectory of the motion, producing a "motion sculpture."
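A rough Python sketch of that pipeline is shown below. The functions detect_keypoints_2d and lift_to_3d are hypothetical placeholders standing in for the OpenPose detector and MoSculp's learned 2D-to-3D lifting step; they are not the project's actual API.

```python
import numpy as np

def detect_keypoints_2d(frame):
    """Placeholder for a 2D pose estimator such as OpenPose.
    Would return (x, y) image coordinates for each body joint."""
    raise NotImplementedError

def lift_to_3d(keypoints_2d):
    """Placeholder for the learned 2D-to-3D lifting step.
    Would return a dict mapping joint names to (x, y, z) positions."""
    raise NotImplementedError

def build_motion_sculpture(frames, joints_to_track=("left_wrist", "right_ankle")):
    """Collect per-frame 3D joint positions into continuous trajectories.

    The per-joint polylines are the raw geometry of the "motion sculpture"
    that is later rendered and composited back into the video.
    """
    trajectories = {joint: [] for joint in joints_to_track}
    for frame in frames:
        pose_2d = detect_keypoints_2d(frame)   # stage 1: 2D keypoints
        pose_3d = lift_to_3d(pose_2d)          # stage 2: per-frame 3D pose
        for joint in joints_to_track:
            trajectories[joint].append(pose_3d[joint])
    # Each trajectory is an (n_frames, 3) polyline traced through space.
    return {j: np.asarray(points) for j, points in trajectories.items()}
```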

At this stage the rendering, according to the developers, lacks texture and detail, so the application embeds the "sculpture" back into the original video. To handle occlusions correctly, MoSculp computes depth maps for both the original subject and the 3D model.
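Conceptually, the depth maps drive a per-pixel visibility test: a sculpture pixel is drawn only where it is closer to the camera than the person. A minimal numpy sketch of that compositing step (illustrative only, not MoSculp's actual code) could look like this:

```python
import numpy as np

def composite_with_depth(video_frame, sculpture_rgb, subject_depth, sculpture_depth):
    """Depth-aware compositing of the rendered sculpture into a video frame.

    video_frame, sculpture_rgb : (H, W, 3) color arrays
    subject_depth, sculpture_depth : (H, W) per-pixel depth maps
    Sculpture pixels are drawn only where they lie in front of the person,
    so limbs correctly occlude the parts of the sculpture behind them.
    """
    sculpture_in_front = sculpture_depth < subject_depth
    out = video_frame.copy()
    out[sculpture_in_front] = sculpture_rgb[sculpture_in_front]
    return out
```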

MoSculp 3D model

During processing, the operator can adjust the image: choose the sculpture's material, color, and lighting, as well as which parts of the body to track. The result can also be printed on a 3D printer.

The research team has announced plans to develop MoSculp further. The developers want the system to handle more than one subject per video, which is currently not possible. The creators believe the program could be used to study group dynamics, social disorders, and interpersonal interactions.

The idea of building a 3D model from human movement has been explored before. In August 2018, for example, scientists at the same University of California, Berkeley demonstrated an algorithm that transfers the movements of one person onto another.