Multi-modal emotional database: AvID

Rok Gajšek, Vitomir Štruc, France Mihelič, Anja Podlesek, Luka Komidar, Gregor Sočan, Boštjan Bajec: Multi-modal emotional database: AvID. In: Informatica (Ljubljana), 33 (1), pp. 101-106, 2009.

Abstract

This paper presents our work on recording a multi-modal database containing emotional audio and video recordings. In designing the recording strategies, special attention was paid to gathering data involving spontaneous emotions, thus obtaining more realistic training and testing conditions for experiments. Different levels of arousal were induced with specially planned scenarios, including playing computer games and taking an adaptive intelligence test. This will enable us both to detect different emotional states and to experiment with speaker identification/verification of the people involved in the communications. So far, the multi-modal database has been recorded and a basic evaluation of the data has been carried out.

BibTeX

@article{Inform-Gajsek_2009,
title = {Multi-modal emotional database: AvID},
author = {Rok Gaj\v{s}ek and Vitomir \v{S}truc and France Miheli\v{c} and Anja Podlesek and Luka Komidar and Gregor So\v{c}an and Bo\v{s}tjan Bajec},
url = {http://luks.fe.uni-lj.si/nluks/wp-content/uploads/2016/09/avid.pdf},
year = {2009},
date = {2009-01-01},
journal = {Informatica (Ljubljana)},
volume = {33},
number = {1},
pages = {101-106},
abstract = {This paper presents our work on recording a multi-modal database containing emotional audio and video recordings. In designing the recording strategies, special attention was paid to gathering data involving spontaneous emotions, thus obtaining more realistic training and testing conditions for experiments. Different levels of arousal were induced with specially planned scenarios, including playing computer games and taking an adaptive intelligence test. This will enable us both to detect different emotional states and to experiment with speaker identification/verification of the people involved in the communications. So far, the multi-modal database has been recorded and a basic evaluation of the data has been carried out.},
keywords = {avid, database, dataset, emotion recognition, facial expression recognition, speech, speech technologies, spontaneous emotions},
pubstate = {published},
tppubtype = {article}
}