The toolkit was developed by a research team from ShanghaiTech University using the PyTorch framework.
The toolkit takes a 2D image as input and synthesizes a modified result based on the selected model. Three transformation options are supported:
- Creating a moving object that repeats the movements the model was trained on.
- Transferring appearance elements from the model to the object (for example, changing clothes).
- Generating a new viewing angle (for example, synthesizing a profile image from a frontal photo).

All three methods can be combined; for example, a photo can be turned into a video in which the person performs a complex acrobatic stunt in different clothes.
During synthesis, the object is segmented from the photograph and the background regions revealed by its movement are filled in at the same time. A model needs to be trained only once and can then be reused for the various transformations. Pre-trained models are available for download.
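The three transformation modes described above can be mixed freely. A minimal sketch of that idea, assuming a hypothetical request structure (this is not the project's actual API; all names here are illustrative):

```python
from dataclasses import dataclass

# Hypothetical sketch of the three transformation modes the article lists.
# Not Impersonator's real interface; names and fields are assumptions.
@dataclass
class SynthesisRequest:
    source_image: str                  # path to the input 2D image
    imitate_motion: bool = False       # repeat motions the model was trained on
    transfer_appearance: bool = False  # e.g. swap clothing from the model
    novel_view: bool = False           # e.g. profile view from a frontal photo

    def modes(self):
        """Return the list of transformation modes that are active."""
        active = []
        if self.imitate_motion:
            active.append("motion imitation")
        if self.transfer_appearance:
            active.append("appearance transfer")
        if self.novel_view:
            active.append("novel view synthesis")
        return active

# All three modes combined, as in the acrobatics-in-new-clothes example:
req = SynthesisRequest("photo.jpg", True, True, True)
print(req.modes())
```

The point of the sketch is only that the modes are independent flags, so any subset of them can be applied in a single synthesis pass.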
Unlike transformation methods based on key points that describe the position of the body in two-dimensional space, Impersonator attempts to synthesize a three-dimensional mesh of the body using machine-learning methods.
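To make the difference between the two representations concrete, here is an illustrative sketch (not Impersonator's code): 2D key points store only one pixel coordinate per joint, while a 3D body mesh carries dense surface geometry plus compact shape and pose parameters, in the spirit of SMPL-style body models. The specific dimensions below follow the common SMPL layout and are an assumption about what a comparable mesh model would use.

```python
import numpy as np

# 2D keypoint representation: one (x, y) pixel coordinate per joint,
# e.g. 17 COCO-style joints. No body-shape information at all.
keypoints_2d = np.zeros((17, 2))

# 3D mesh representation, SMPL-style (dimensions are assumptions):
shape_params = np.zeros(10)       # personalized body shape coefficients
pose_params = np.zeros(72)        # joint rotations: 24 joints x 3 axes
vertices = np.zeros((6890, 3))    # dense mesh surface points in 3D

# The mesh encodes orders of magnitude more geometry than the keypoints:
print(keypoints_2d.size, vertices.size)
```

This is why the article stresses "personalized shape of the body": the mesh's shape parameters describe the individual body, which plain 2D key points cannot.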
The proposed method enables manipulations that take the personalized body shape and current pose into account, simulating natural limb movements.
To preserve source information such as texture, style, color, and face identity, the transformation process uses the Liquid Warping GAN technique.
Source: opennet.ru