AI to Be Used to Create 3D Motion Sculptures

The system developed by MIT and Berkeley scientists is called MoSculp and is based on artificial intelligence
21 September 2018

MoSculp, a joint effort of scientists at MIT and the University of California, Berkeley, is built around a neural network. The system analyzes video footage of a moving person and generates what its creators call an "interactive visualization of form and time." According to the project's lead author, Xiuming Zhang, the software will be useful to athletes for detailed analysis of their movements.

At the first stage, the system scans the video frame by frame and determines the positions of key points of the subject's body, such as the elbows, knees, and ankles. For this, the researchers turned to the OpenPose library developed at Carnegie Mellon University. From these data, the neural network builds a 3D model of the person in each frame and computes the trajectory of the motion, obtaining a "motion sculpture".
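
As a rough illustration of this first stage, the sketch below loops over video frames and collects 2D keypoints per frame. The pose estimator here is a hypothetical placeholder standing in for OpenPose (which MoSculp actually uses); only the OpenCV frame-reading calls are a real API.

```python
# Minimal sketch of the first stage: frame-by-frame keypoint extraction.
import cv2

def extract_keypoints(video_path, pose_estimator):
    """Return a list of per-frame 2D keypoints (elbows, knees, ankles, ...)."""
    keypoints_per_frame = []
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # pose_estimator is assumed to map an RGB frame to a dict like
        # {"left_elbow": (x, y), "right_knee": (x, y), ...}
        keypoints_per_frame.append(pose_estimator(frame))
    cap.release()
    return keypoints_per_frame
```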

At this stage the image, according to the developers, lacks texture and detail, so the application embeds the "sculpture" back into the original video. To resolve occlusions correctly, MoSculp computes depth maps for both the original subject and the 3D model.
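
A minimal sketch of how such a depth comparison could resolve occlusions when compositing the rendered sculpture over a frame is shown below; the array names and shapes are assumptions, not MoSculp's actual code.

```python
import numpy as np

def composite_with_depth(frame, sculpture_rgb, sculpture_depth, person_depth):
    """Overlay the rendered sculpture on a video frame, hiding the parts
    that lie behind the person according to the two depth maps.

    All inputs are H x W (x 3) numpy arrays; smaller depth = closer to camera.
    """
    # The sculpture is drawn only where it is closer to the camera than the person.
    sculpture_in_front = sculpture_depth < person_depth
    out = frame.copy()
    out[sculpture_in_front] = sculpture_rgb[sculpture_in_front]
    return out
```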

MoSculp 3D model

During processing, the operator can adjust the image: choose the "sculpture" material, color, and lighting, and select which parts of the body are tracked. The result can also be printed on a 3D printer.

The team of researchers announced plans to develop the MoSculp technology further. The developers want the system to be able to process more than one object in a video, which is currently impossible. The creators of the technology believe the program will be used to study group dynamics, social disorders, and interpersonal interactions.

The principle of building a 3D model from human movements has been used before. For example, in August 2018, scientists at the same University of California, Berkeley demonstrated an algorithm that transfers the movements of one person to another.

Nvidia to Open SPADE Source Code

SPADE machine learning system creates realistic landscapes based on rough human sketches
15 April 2019

NVIDIA has released the source code for the SPADE machine learning system (GauGAN), which synthesizes realistic landscapes from rough sketches, as well as the trained models associated with the project. The system was demonstrated in March at the GTC 2019 conference, but the code was published only yesterday. The work is released under the non-free CC BY-NC-SA 4.0 license (Creative Commons Attribution-NonCommercial-ShareAlike 4.0), which permits only non-commercial use. The code is written in Python using the PyTorch framework.

Sketches are drawn up as a segmented map that determines the placement of exemplary objects in the scene. The type of each generated object is set with a color label. For example, a light blue fill turns into sky, a darker blue into water, dark green into trees, light green into grass, light brown into stones, dark brown into mountains, gray into snow, a brown line into a road, and a blue line into a river. Additionally, the overall style of the composition and the time of day are determined by a choice of reference images. The proposed tool for creating virtual worlds can be useful to a wide range of specialists, from architects and urban planners to game developers and landscape designers.
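
As a hedged illustration, a color-coded sketch like this could be converted into an integer label map before being fed to the generator. The palette and class IDs below are assumptions for the sake of the example and may differ from GauGAN's actual ones.

```python
import numpy as np

# Hypothetical palette: RGB fill colours -> semantic class IDs.
PALETTE = {
    (135, 206, 235): 0,  # light blue  -> sky
    (0, 0, 139):     1,  # darker blue -> water
    (0, 100, 0):     2,  # dark green  -> trees
    (144, 238, 144): 3,  # light green -> grass
    (139, 69, 19):   4,  # dark brown  -> mountains
}

def sketch_to_label_map(sketch_rgb):
    """Convert an H x W x 3 colour sketch into an H x W map of class IDs."""
    h, w, _ = sketch_rgb.shape
    labels = np.zeros((h, w), dtype=np.int64)
    for color, class_id in PALETTE.items():
        mask = np.all(sketch_rgb == np.array(color, dtype=sketch_rgb.dtype), axis=-1)
        labels[mask] = class_id
    return labels
```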

Objects are synthesized by a generative adversarial network (GAN) that, starting from the schematic segmented map, creates realistic images by borrowing details from a model previously trained on several million photographs. In contrast to previously developed image-synthesis systems, the proposed method is based on spatially-adaptive normalization followed by a learned transformation. Processing a segmented map instead of plain semantic markup makes it possible to match the intended layout exactly and to control the style.
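
The sketch below shows, in simplified PyTorch, what a spatially-adaptive normalization layer of this kind could look like: the feature activations are normalized and then modulated by per-pixel scale and shift values predicted from the resized segmentation map. Channel sizes and layer details are assumptions; NVIDIA's published repository contains the reference implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SPADENorm(nn.Module):
    """Simplified sketch of a spatially-adaptive normalization layer."""

    def __init__(self, feature_channels, label_channels, hidden=128):
        super().__init__()
        # Normalization without learned affine parameters.
        self.norm = nn.BatchNorm2d(feature_channels, affine=False)
        # The segmentation map is projected to per-pixel scale (gamma)
        # and shift (beta) tensors.
        self.shared = nn.Sequential(
            nn.Conv2d(label_channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        self.gamma = nn.Conv2d(hidden, feature_channels, kernel_size=3, padding=1)
        self.beta = nn.Conv2d(hidden, feature_channels, kernel_size=3, padding=1)

    def forward(self, features, segmap):
        # Resize the segmentation map to the spatial size of the features.
        segmap = F.interpolate(segmap, size=features.shape[2:], mode="nearest")
        hidden = self.shared(segmap)
        normalized = self.norm(features)
        # Modulate the normalized activations with spatially varying parameters.
        return normalized * (1 + self.gamma(hidden)) + self.beta(hidden)
```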

To achieve realism, two competing neural networks are used: a generator and a discriminator. The generator produces images by mixing elements of real photos, while the discriminator looks for deviations from real images. A feedback loop is thus formed, through which the generator assembles increasingly convincing samples until the discriminator can no longer distinguish them from real ones.
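
The following schematic PyTorch sketch illustrates that feedback loop for a single training step. The generator and discriminator modules, optimizers, and data are placeholders, and this is not the actual SPADE training code.

```python
import torch
import torch.nn.functional as F

def gan_step(generator, discriminator, g_opt, d_opt, real_images, label_maps):
    """One schematic adversarial update: the discriminator learns to separate
    real photos from generated ones, and the generator learns to fool it."""
    # --- Discriminator update ---
    fake_images = generator(label_maps).detach()
    d_real = discriminator(real_images)
    d_fake = discriminator(fake_images)
    d_loss = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) +
              F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # --- Generator update: feedback from the discriminator ---
    fake_images = generator(label_maps)
    g_fake = discriminator(fake_images)
    g_loss = F.binary_cross_entropy_with_logits(g_fake, torch.ones_like(g_fake))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()
```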