Acta Acustica, Volume 6, 2022
Article Number: 55
Number of pages: 14
Section: Hearing, Audiology and Psychoacoustics
DOI: https://doi.org/10.1051/aacus/2022032
Published online: 29 November 2022
Technical & Applied Article
Auditory-visual scenes for hearing research
1 Carl-von-Ossietzky Universität, Oldenburg, Dept. Medical Physics and Acoustics, Cluster of Excellence “Hearing4all”, Carl-von-Ossietzky-Str. 9–11, 26129 Oldenburg, Germany
2 Technical University of Munich, Audio Information Processing, Department of Electrical and Computer Engineering, Theresienstr. 90, 80333 München, Germany
3 RWTH Aachen University, Institute for Hearing Technology and Acoustics, Kopernikusstr. 5, 52074 Aachen, Germany
4 Erasmus University Medical Center, Rotterdam, Department of Otorhinolaryngology and Head and Neck Surgery, Burgemeester Oudlaan 50, 3062 PA Rotterdam, Netherlands
* Corresponding authors: Steven.van.de.Par@uni-oldenburg.de, stephan.ewert@uni-oldenburg, seeber@tum.de
Received: 25 October 2021
Accepted: 27 July 2022
While experimentation with synthetic stimuli in abstracted listening situations has a long-standing and successful history in hearing research, there is growing interest in closing the remaining gap to real-life listening by replicating situations with high ecological validity in the laboratory. This is important both for understanding the underlying auditory mechanisms and their relevance in real-life situations and for developing and evaluating increasingly sophisticated algorithms for hearing assistance. A range of ‘classical’ stimuli and paradigms have evolved into de facto standards in psychoacoustics; they are simple and easily reproduced across laboratories. While this ideally allows for cross-laboratory comparison and reproducible research, these stimuli lack the acoustic complexity and the visual information present in everyday communication and listening situations. This contribution aims to provide and establish an extendable set of complex auditory-visual scenes for hearing research that allows for ecologically valid testing in realistic scenes while also supporting reproducibility and comparability of scientific results. Three virtual environments are provided (underground station, pub, living room), each consisting of a detailed visual model, an acoustic geometry model with acoustic surface properties, and a set of acoustic measurements in the corresponding real-world environment. The current data set enables i) audio-visual research in a reproducible set of environments, ii) comparison of room acoustic simulation methods with “ground truth” acoustic measurements, and iii) a condensation point for future extensions and contributions towards standardized test cases for ecologically valid hearing research in complex scenes.
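The abstract notes that the data set supports comparing room acoustic simulations against “ground truth” measured impulse responses. The article does not prescribe any code for this, but a common first check is to compare reverberation times derived from the two room impulse responses. The sketch below is a minimal, hypothetical illustration of the standard approach (Schroeder backward integration followed by a linear fit of the decay between −5 and −35 dB, extrapolated to 60 dB, i.e. T30); function names and the synthetic test signal are this sketch's own, not part of the published data set.

```python
import numpy as np

def schroeder_curve(rir):
    """Backward-integrated energy decay curve (Schroeder integration), in dB re. total energy."""
    energy = np.cumsum(rir[::-1] ** 2)[::-1]
    return 10.0 * np.log10(energy / energy[0])

def reverberation_time(rir, fs, lo_db=-5.0, hi_db=-35.0):
    """Estimate T30: fit the decay curve between -5 and -35 dB, extrapolate to -60 dB."""
    edc = schroeder_curve(rir)
    t = np.arange(len(rir)) / fs
    mask = (edc <= lo_db) & (edc >= hi_db)
    slope, _ = np.polyfit(t[mask], edc[mask], 1)  # decay rate in dB per second
    return -60.0 / slope

# Synthetic exponentially decaying RIR with a known T60 of 0.5 s,
# standing in for a measured or simulated impulse response.
fs, t60 = 48000, 0.5
t = np.arange(int(fs * t60 * 2)) / fs
rir = np.exp(-6.9078 * t / t60)  # amplitude envelope: 60 dB energy decay over t60

print(round(reverberation_time(rir, fs), 2))  # close to 0.5
```

With measured and simulated impulse responses from the same environment, applying `reverberation_time` to both (ideally per octave band) gives a simple quantitative comparison of the kind the data set is meant to enable.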
Key words: Complex acoustic environments / Speech intelligibility / Room acoustics / Ecological validity
© The Author(s), published by EDP Sciences, 2022
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.