Using machine learning to detect emotions and control your facial expressions

Andrey Savchenko from the Nizhny Novgorod branch of the Higher School of Economics has published the results of his machine learning research on recognizing the emotions of people in photographs and videos. The code is written in Python using PyTorch and distributed under the Apache 2.0 license. Several ready-made models are available, including some suitable for use on mobile devices.
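To picture the general approach, here is a minimal sketch of classifying a cropped face image with a pretrained PyTorch model. This is not the published library's actual API: the checkpoint path, TorchScript format, input size, and label order are all assumptions made for illustration.

```python
# Illustrative sketch only: the checkpoint format, input size and label
# order are assumptions, not the published library's actual API.
import torch
from torchvision import transforms
from PIL import Image

EMOTIONS = ["anger", "contempt", "disgust", "fear",
            "joy", "neutral", "sadness", "surprise"]  # assumed order

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def classify_emotion(face_image_path: str, model_path: str) -> dict:
    """Return a per-emotion score for an already-cropped face image."""
    model = torch.jit.load(model_path)  # hypothetical TorchScript checkpoint
    model.eval()
    img = Image.open(face_image_path).convert("RGB")
    batch = preprocess(img).unsqueeze(0)  # add a batch dimension
    with torch.no_grad():
        logits = model(batch)[0]
        scores = torch.softmax(logits, dim=0)
    return dict(zip(EMOTIONS, scores.tolist()))
```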

Based on this library, another developer created sevimon, a program that tracks changes in emotions through a video camera and helps control facial muscle tension, for example to relieve overexertion (which indirectly affects mood) and, with prolonged use, to prevent the appearance of expression lines. The CenterFace library is used to locate the face in the video. The sevimon code is written in Python and distributed under the AGPLv3 license. On first launch the models are downloaded, after which the program needs no Internet connection and works completely offline. Instructions for running it on Linux/UNIX and Windows are available, as well as a Docker image for Linux.
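The detection step can be pictured with a minimal webcam loop. In this sketch, OpenCV's bundled Haar cascade detector stands in for CenterFace, purely to show where face localization fits in the pipeline before the crop is handed to the emotion classifier:

```python
# Minimal webcam face-detection loop. OpenCV's Haar cascade is used here
# as a stand-in for CenterFace, just to show where detection fits.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)  # default camera
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1,
                                         minNeighbors=5)
        for (x, y, w, h) in faces:
            face_crop = frame[y:y + h, x:x + w]  # would go to the classifier
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("faces", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
finally:
    cap.release()
    cv2.destroyAllWindows()
```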

Sevimon works as follows: first, a face is located in the camera image; the face is then compared against each of eight emotions (anger, contempt, disgust, fear, joy, lack of emotions, sadness, surprise), and a similarity score is produced for each one. The resulting values are stored in a plain-text log for later analysis by the sevistat program. For each emotion, the settings file lets you set upper and lower limits; as soon as a value crosses one of these limits, a reminder is issued.
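The logging and threshold mechanism can be sketched roughly as follows; the limit values, configuration structure, and log format here are invented for illustration and do not reproduce sevimon's actual files:

```python
# Illustrative threshold check; the limits, config structure and log
# format are invented here and do not reproduce sevimon's actual files.
import time

# Hypothetical per-emotion limits: (lower, upper)
LIMITS = {
    "anger":   (0.0, 0.6),
    "sadness": (0.0, 0.7),
    "joy":     (0.2, 1.0),   # remind if joy drops below 0.2
}

def log_scores(scores: dict, path: str = "emotions.log") -> None:
    """Append a timestamped line of per-emotion scores in plain text."""
    line = "\t".join(f"{name}={value:.3f}" for name, value in scores.items())
    with open(path, "a") as f:
        f.write(f"{time.strftime('%Y-%m-%d %H:%M:%S')}\t{line}\n")

def check_limits(scores: dict) -> list:
    """Return the emotions whose score crossed a configured limit."""
    alerts = []
    for emotion, (low, high) in LIMITS.items():
        value = scores.get(emotion)
        if value is not None and not (low <= value <= high):
            alerts.append(emotion)
    return alerts

scores = {"anger": 0.72, "sadness": 0.10, "joy": 0.55}
log_scores(scores)
for emotion in check_limits(scores):
    print(f"Reminder: '{emotion}' is outside its configured range")
```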

Source: opennet.ru