
Using AI and EEG to decode human emotions

Updated: Apr 22


Transforming EEG Data into Visuals


At the exhibition we use an Enobio® head cap with 20 channels to capture participants' brain signals (EEG, electroencephalography) while they experience four different stations. Before they begin their tour of the exhibition, we record each participant's baseline, i.e., their brain activity at rest, so that we account only for the affective/cognitive state that is relevant to the exhibition itself.
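One common way to use such a resting baseline, shown as a rough Python sketch below, is to express each exhibition feature relative to the rest recording so that only deviations from rest remain; the function name and the z-scoring choice here are illustrative assumptions, not necessarily the exact procedure used in the exhibition pipeline.

```python
import numpy as np

def baseline_correct(feature_values, baseline_values):
    """Express a feature relative to the participant's resting baseline.

    Z-scoring against the baseline keeps only the change from rest, so
    activity unrelated to the exhibition is largely factored out.
    (Illustrative choice; other normalizations are possible.)
    """
    mu = np.mean(baseline_values)
    sigma = np.std(baseline_values)
    return (np.asarray(feature_values) - mu) / (sigma + 1e-12)
```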


The EEG signals are filtered and processed in real time to remove ocular, muscle, and movement artifacts, as well as line noise. The EEG bandpower is then extracted in different frequency bands, namely the Theta (4-8 Hz), Alpha (8-13 Hz), Beta (13-30 Hz), and Gamma (30-45 Hz) bands. This information is used to calculate our ExperienceLab features: valence, arousal, engagement, attention, and fatigue at each of the four stations.
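The sketch below illustrates this step in Python, assuming a notch filter for line noise and Welch's method for bandpower; the sampling rate, mains frequency, and function names are assumptions for illustration, and the ocular/muscle/movement artifact removal mentioned above is omitted for brevity.

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt, welch

FS = 500          # sampling rate in Hz (assumed)
LINE_FREQ = 50    # mains frequency in Hz (assumed; 60 Hz in some regions)

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def remove_line_noise(channel, fs=FS, line_freq=LINE_FREQ, q=30.0):
    """Suppress mains interference on one channel with a zero-phase notch filter."""
    b, a = iirnotch(line_freq, q, fs=fs)
    return filtfilt(b, a, channel)

def bandpower(channel, band, fs=FS):
    """Mean spectral power of one channel within a frequency band (Welch PSD)."""
    freqs, psd = welch(channel, fs=fs, nperseg=2 * fs)
    lo, hi = band
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

def band_features(epoch, fs=FS):
    """Per-band power averaged over channels for one epoch (channels x samples)."""
    cleaned = [remove_line_noise(ch, fs) for ch in epoch]
    return {name: float(np.mean([bandpower(ch, rng, fs) for ch in cleaned]))
            for name, rng in BANDS.items()}
```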



Valence is defined along the pleasure-displeasure continuum and ranges from unpleasant to pleasant (Barrett et al., 1999), whereas arousal is defined as the level of activation and refers to a person's general alertness and wakefulness (Barrett et al., 1999). Valence and arousal form the two axes of the circumplex model of affect (Russell, 1980), in which, according to Russell, all emotions can be represented.

Once these variables are extracted continuously over time, they are averaged into a single value per station and plotted in the circumplex model of affect, with valence on the horizontal axis and arousal on the vertical axis. The cognitive variables, in contrast, are displayed continuously over time (every 5 seconds) in the form of bars. The average value across all participants who have visited the exhibition is also shown, as a less opaque point in the circumplex and as a line in the cognitive variables' display.
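As a rough illustration of the circumplex display, the Python sketch below places per-station valence/arousal averages for one participant alongside the group average drawn as a less opaque point; the value range, plotting style, and the numbers themselves are hypothetical, not the exhibition's actual visualization code.

```python
import matplotlib.pyplot as plt

def plot_circumplex(participant_xy, group_xy):
    """Plot per-station (valence, arousal) averages on the circumplex:
    valence on the horizontal axis, arousal on the vertical axis,
    with the all-participant average as a fainter point."""
    fig, ax = plt.subplots(figsize=(4, 4))
    ax.axhline(0, color="grey", lw=0.5)
    ax.axvline(0, color="grey", lw=0.5)
    for (v, a), (gv, ga) in zip(participant_xy, group_xy):
        ax.scatter(v, a, color="tab:blue")               # this participant
        ax.scatter(gv, ga, color="tab:blue", alpha=0.3)  # group average (less opaque)
    ax.set_xlabel("valence")
    ax.set_ylabel("arousal")
    ax.set_xlim(-1, 1)   # assumed normalized range
    ax.set_ylim(-1, 1)
    return fig

# Hypothetical per-station averages for one participant and for the group:
participant = [(0.2, 0.5), (-0.1, 0.3), (0.4, -0.2), (0.1, 0.6)]
group = [(0.1, 0.4), (0.0, 0.2), (0.3, -0.1), (0.2, 0.5)]
plot_circumplex(participant, group)
plt.show()
```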


