Structures of Emotion

Avital Meshi

Medium: Wearable AI and an online Zoom performance.

Structures of Emotion examines the way humans and machines read and interpret emotional expressions. The work reveals the difficulty of translating ‘feelings’ into words. We analyze the complexity of emotion recognition by comparing human and computer vision, reducing the subject’s emotional input to a facial expression seen through a digital screen. After the performance, we compare the accuracy of human and machine classification by asking participants to identify their own recorded expressions.

When we see someone smiling, does it necessarily mean that this person is ‘happy’? Our need to conceptualize and translate facial expressions into language is part of the natural learning process through which we attempt to understand the world. This process is often reductive and biased. The work also examines the impact of how we are seen by others and how this, in turn, changes our behavioral responses. When we are told that we seem tired, angry, or sad, and we don't identify as such, how does it make us feel?

The technologies that we design often reflect our own worldviews. The AI system used in this project is trained to classify facial expressions into one of seven human-defined primary emotions. Such ocular-centric systems are built to estimate aspects of an individual’s identity or state of mind from external appearances. This design brings to mind pseudo-scientific physiognomic practices, which are notorious for their discriminatory nature and surface all too often in AI-based computer vision algorithms. The use of both AI and human analysis of facial expressions reminds us that the technology is far from maturing beyond its maker, and that both humans and machines still have much to learn.

This work was developed in collaboration with Treyden Chiaravalloti.