Researchers from the Massachusetts Institute of Technology (MIT) presented a convolutional-neural-network-based algorithm that automatically transfers objects from one image to another. The user does not need to manually select parts of the image or trace their boundaries. The Next Web reports.
The editor, called Semantic Soft Segmentation (SSS), separates objects and background into distinct segments. The system analyzes the color, transparency, and edge texture of objects, and it accounts for the semantic proximity of pixels: a pixel can belong to two objects simultaneously. As a result, objects placed on a new background look clean, without torn edges. The algorithm processes one image in about four minutes on average.
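The "soft" part of the approach means each pixel carries a fractional membership value (an alpha) rather than a hard label, so compositing onto a new background blends per pixel and edges stay smooth. A minimal sketch of such soft alpha compositing, with purely illustrative values and not SSS's actual implementation:

```python
import numpy as np

def composite(foreground, background, alpha):
    """Blend foreground onto background using a soft per-pixel mask.

    alpha holds values in [0, 1]; fractional values mean a pixel
    belongs partly to the object and partly to the background.
    """
    alpha = alpha[..., None]  # broadcast the mask over RGB channels
    return alpha * foreground + (1.0 - alpha) * background

# Toy 2x2 RGB example (hypothetical values):
fg = np.ones((2, 2, 3))           # white object
bg = np.zeros((2, 2, 3))          # black background
a = np.array([[1.0, 0.5],
              [0.5, 0.0]])        # 0.5 = a pixel half in each segment
out = composite(fg, bg, a)
```

Pixels with alpha 0.5 end up mid-gray rather than snapping to object or background, which is what prevents the torn edges a hard binary mask would produce.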
In August 2018, the author of the blog AI Weirdness wrote about the generative adversarial network AttnGAN, which draws images from text descriptions. The problem with that algorithm is that it requires overly precise descriptions of the picture and sometimes cannot determine the boundaries of objects.