Developer Zaid Alyafeai of King Fahd University of Petroleum and Minerals created a browser version of the pix2pix system, which turns a user's sketches into photo-realistic images in real time. So far, the service can only redraw cats, building facades, and shoes.
The TensorFlow.js library loads the neural-network model onto the user's computer, so all calculations run on the user's own device. While drawing a shoe or a cat, the user sketches part of the image, and the system turns it into a "photo" in real time.
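The in-browser flow described above can be sketched with the real TensorFlow.js API. This is a minimal, hypothetical sketch, not the demo's actual code: the model URL, the 256×256 input size, and the [-1, 1] normalization are assumptions typical of pix2pix generators, and `sketchToPhoto` is an illustrative helper name.

```javascript
// Hedged sketch: run a pix2pix-style generator on a canvas sketch,
// entirely in the browser, using TensorFlow.js. Shapes and the
// normalization scheme are assumptions, not the demo's real values.
async function sketchToPhoto(canvas, modelUrl) {
  // Load the library and the model on demand; the weights are
  // downloaded to the user's device, so inference stays client-side.
  const tf = await import('@tensorflow/tfjs');
  const model = await tf.loadGraphModel(modelUrl);

  const output = tf.tidy(() => {
    // Read the user's sketch from the canvas into a tensor and
    // normalize pixel values from [0, 255] to [-1, 1].
    const input = tf.browser.fromPixels(canvas)
      .toFloat()
      .div(127.5)
      .sub(1)
      .expandDims(0); // add a batch dimension: [1, H, W, 3]
    return model.predict(input);
  });

  // Map the generator output back to [0, 1] and draw it on the canvas.
  const image = output.squeeze().add(1).div(2);
  await tf.browser.toPixels(image, canvas);
  output.dispose();
  image.dispose();
}
```

Calling `sketchToPhoto` on every pen stroke (or on a short debounce timer) would produce the real-time redrawing effect the article describes.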
When drawing houses, the algorithm splits the image into regions containing objects of different types: a roof and a window, for example, are marked with different colors.
Researchers from the University of California, Berkeley introduced the pix2pix algorithm, based on a generative adversarial network, in 2016. Initially, the authors built a system that had to be deployed on a computer. Third-party developers later ported it to the browser, but that version accepted only a complete sketch as input and ran on a server, so the service was shut down. The source code is available on GitHub.