AI Helps Robots Manipulate Objects Seen for the First Time

The new system that helps robots manipulate objects is called Dense Object Nets and was developed by Massachusetts Institute of Technology scientists
12 September 2018

Researchers from the Massachusetts Institute of Technology (MIT) have developed a system for robots called Dense Object Nets (DON) that can interact with objects of unfamiliar shape. The system virtually decomposes an object into its constituent parts, remembering their characteristics and the way it interacts with them. When the algorithm encounters a new object, it tries to determine whether its parts resemble those it has seen before.

The system examines the object from different angles using cameras on the manipulator, then processes the images and determines the coordinates of every point on the object. On average, the analysis takes about 20 minutes.
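
As a rough illustration of the matching idea: given per-pixel descriptors of the kind DON learns, a point chosen on a previously seen object can be located on a new object by a nearest-neighbor search in descriptor space. The descriptor dimension, image size, and the way the descriptors are produced in the sketch below are assumptions, not details taken from MIT's implementation.

    import numpy as np

    def find_corresponding_pixel(ref_descriptor, descriptor_image):
        # ref_descriptor:   (D,) descriptor of a point picked on a known object
        # descriptor_image: (H, W, D) per-pixel descriptors of the new object's view,
        #                   produced by a trained descriptor network (assumed given)
        h, w, d = descriptor_image.shape
        flat = descriptor_image.reshape(-1, d)
        # L2 distance from the reference descriptor to every pixel's descriptor
        distances = np.linalg.norm(flat - ref_descriptor, axis=1)
        best = int(np.argmin(distances))
        return divmod(best, w)  # (row, col) of the best-matching pixel

    # Stand-in data: random arrays in place of real network output
    reference = np.random.rand(16).astype(np.float32)
    new_view = np.random.rand(480, 640, 16).astype(np.float32)
    row, col = find_corresponding_pixel(reference, new_view)
    print("Candidate grasp pixel in the new view:", row, col)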

During training, the researchers showed DON a sneaker and taught the system to pick it up in a certain way. When the algorithm later saw a different shoe, viewed from other angles, it recognized it as a similar object and picked it up in the same way.

Another example is a mug containing liquid. Unlike most similar systems, DON can lift it by the handle, even if the mug is standing upright or turned upside down.

The creators of DON hope that their technology will find use in the warehouses of large retailers such as Amazon and Walmart. In addition, such robots could work as household helpers.

AI to be Used to Create 3D Motion Sculptures

The system developed by MIT and Berkeley scientists is called MoSculp and is based on artificial intelligence
21 September 2018

MoSculp, a joint effort of scientists at MIT and the University of California, Berkeley, is built on a neural network. The system analyzes video footage of a moving person and generates what its creators call an "interactive visualization of form and time." According to the project's lead researcher, Xiuming Zhang, the software will be useful to athletes for detailed analysis of their movements.

In the first stage, the system scans the video frame by frame and determines the positions of key points on the subject's body, such as the elbows, knees, and ankles. For this, the scientists used the OpenPose library developed at Carnegie Mellon University. From this data, the neural network builds a 3D model of the person in each frame and computes the trajectory of the motion, producing a "motion sculpture."
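
A minimal sketch of that per-frame stage is shown below. Here detect_keypoints stands in for an OpenPose-style 2D pose detector and lift_to_3d for the network that reconstructs a 3D pose; both are hypothetical placeholders, not the project's actual API.

    import numpy as np

    def build_motion_trajectory(frames, detect_keypoints, lift_to_3d, joint="right_wrist"):
        # frames:           list of H x W x 3 video frames
        # detect_keypoints: placeholder for an OpenPose-style detector returning
        #                   a dict of 2D joint positions, e.g. {"right_wrist": (x, y), ...}
        # lift_to_3d:       placeholder for the model that turns 2D joints into a 3D pose
        trajectory = []
        for frame in frames:
            joints_2d = detect_keypoints(frame)      # 2D key points for this frame
            pose_3d = lift_to_3d(joints_2d)          # 3D model of the person in this frame
            trajectory.append(pose_3d[joint])        # track one joint over time
        return np.array(trajectory)                  # the path later swept into a "sculpture"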

At this stage, according to the developers, the image lacks textures and detail, so the application embeds the "sculpture" into the original video. To avoid incorrect overlaps, MoSculp computes depth maps for both the original subject and the 3D model.
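
Conceptually, the two depth maps allow a per-pixel occlusion test when the sculpture is composited back into the footage. The sketch below assumes both depth maps are already available and that smaller values mean closer to the camera.

    import numpy as np

    def composite_with_depth(video_frame, sculpture_rgb, video_depth, sculpture_depth):
        # video_frame, sculpture_rgb:   H x W x 3 color images
        # video_depth, sculpture_depth: H x W depth maps (smaller = closer to the camera)
        sculpture_in_front = sculpture_depth < video_depth   # where the sculpture should hide the person
        mask = sculpture_in_front[..., None]                 # broadcast the mask over the color channels
        return np.where(mask, sculpture_rgb, video_frame)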

MoSculp 3D Model

During processing, the operator can adjust the image, choosing the "sculpture" material, color, and lighting, as well as which parts of the body are tracked. The result can also be printed on a 3D printer.

The research team has announced plans to develop the MoSculp technology further. The developers want the system to handle more than one subject in a video, which is currently not possible. The creators believe the program could be used to study group dynamics, social disorders, and interpersonal interactions.

The principle of building a 3D model from human movements has been used before. In August 2018, for example, scientists at the same University of California, Berkeley demonstrated an algorithm that transfers the movements of one person onto another.