Four scientists from the University of California, Berkeley developed an algorithm that, given a video of one person dancing, generates a fake video in which another person performs the same movements. To learn a subject, it requires about twenty minutes of footage shot at 120 frames per second.
The technology is based on generative adversarial networks (GANs). A separate subroutine processes the pre-recorded source and target videos and maps the motion onto a simple stick figure: a skeleton of the human body.
The algorithm then transfers the professional dancer's movements to the recording of the amateur and "aligns" the final video so that the figure does not jerk noticeably from frame to frame and the person stays where they are supposed to be.
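The "alignment" step can be illustrated with a minimal sketch: smoothing pose keypoints across frames so the skeleton does not jitter. This is only an illustration of the general idea, not the researchers' actual method; the function name, the exponential-averaging scheme, and the `alpha` parameter are all assumptions.

```python
# Hypothetical sketch of temporal pose smoothing (not the paper's code).
# Each frame is a list of (x, y) keypoints of the stick figure; alpha
# controls how strongly the previous frame damps jitter in the current one.

def smooth_poses(frames, alpha=0.8):
    """Exponentially average each keypoint across frames to reduce jitter."""
    smoothed = [frames[0]]
    for frame in frames[1:]:
        prev = smoothed[-1]
        smoothed.append([
            (alpha * px + (1 - alpha) * cx, alpha * py + (1 - alpha) * cy)
            for (px, py), (cx, cy) in zip(prev, frame)
        ])
    return smoothed

# A sudden jump of a keypoint from x=0 to x=10 is spread over several
# frames instead of happening all at once:
frames = [[(0.0, 0.0)], [(10.0, 0.0)], [(10.0, 0.0)]]
out = smooth_poses(frames, alpha=0.5)
```

With `alpha=0.5` the keypoint moves to x=5.0 on the second frame and x=7.5 on the third, rather than snapping straight to 10.0, which is the kind of frame-to-frame damping the article describes.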
The researchers admit that while the synthesized video looks realistic, it is not free of artifacts: body parts sometimes tremble or even disappear, and some frames look blurred. In addition, the algorithm cannot model how fabric behaves when a person moves, so the people in the target videos wear tight clothing that forms almost no wrinkles.
This type of video processing is called a "deepfake". In mid-August 2018, researchers from Carnegie Mellon University presented Recycle-GAN, which can recreate one person's facial expressions on another's face, model a flower blooming, and change the weather in video recordings. Similar results are produced by the FakeApp application, released in January 2018, as well as by the Face2Face and HeadOn algorithms.