AI has helped scientists better understand how the human brain performs face recognition
Scientists from the Salk Institute (USA), Skoltech (Russia), and the Riken Center for Brain Science (Japan) investigated a theoretical model of how populations of neurons in the visual cortex may recognize and process faces and their varied expressions, and how those populations are organized. The research was recently published in Neural Computation and featured on its cover.
Humans have a remarkable ability to recognize an enormous number of individual faces and to interpret facial expressions. These abilities play a key role in human social interaction, yet how the brain processes and stores such complex visual information remains poorly understood.
Skoltech scientists Anh-Huy Phan and Andrzej Cichocki, together with their colleagues Sidney Lehky (USA) and Keiji Tanaka (Japan), set out to better understand how the visual cortex processes and stores information related to face recognition. Their approach was based on the idea that a human face can be conceptually represented as a collection of parts or components: eyes, eyebrows, nose, mouth, and so on. Using a machine learning approach, they applied a novel tensor algorithm to decompose faces into a set of component images called tensorfaces, together with their associated weights, and represented each face as a linear combination of those components. In this way, they built a mathematical model describing the work of the neurons involved in face recognition.
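The core idea of component-plus-weight decomposition can be illustrated with a minimal sketch. This is not the authors' tensor algorithm; it substitutes a plain truncated SVD for their decomposition, and the data, image sizes, and component count below are all hypothetical stand-ins:

```python
import numpy as np

# Sketch of the component idea: flatten each face image into a vector,
# stack the vectors into a matrix, and use a low-rank factorization so
# every face becomes a weighted sum of a few component "faces".

rng = np.random.default_rng(0)

n_faces, n_pixels, n_components = 50, 16 * 16, 8  # hypothetical sizes

# Synthetic stand-in data; a real study would use face photographs.
faces = rng.random((n_faces, n_pixels))

# Truncated SVD as a simple proxy for the paper's tensor decomposition.
U, s, Vt = np.linalg.svd(faces, full_matrices=False)
components = Vt[:n_components]                     # component images
weights = U[:, :n_components] * s[:n_components]   # per-face weights

# Each face is approximated by a linear combination of the components.
reconstruction = weights @ components
error = np.linalg.norm(faces - reconstruction) / np.linalg.norm(faces)
print(f"relative reconstruction error: {error:.3f}")
```

The rows of `components` play the role of the model face cells, and each face's row of `weights` describes how strongly every component contributes to it.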
“We used novel tensor decompositions to represent faces as a set of components with specified complexity, which can be interpreted as model face cells and indicate that human face representations consist of a mixture of low- and medium-complexity face cells,” said Skoltech Professor Andrzej Cichocki.