Acta Acustica, Volume 6, 2022
Topical Issue - Auditory models: from binaural processing to multimodal cognition
Article Number: 31
Number of pages: 18
DOI: https://doi.org/10.1051/aacus/2022028
Published online: 28 July 2022
Technical & Applied Article
Interactive spatial speech recognition maps based on simulated speech recognition experiments
Medizinische Physik and Cluster of Excellence Hearing4all, Universität Oldenburg, 26111 Oldenburg, Germany
* Corresponding author: marc.rene.schaedler@uni-oldenburg.de
Received: 2 April 2021
Accepted: 1 July 2022
In everyday life, the speech recognition performance of human listeners is influenced by diverse factors, such as the acoustic environment, the talker and listener positions, possibly impaired hearing, and optional hearing devices. Prediction models come ever closer to considering all required factors simultaneously when predicting individual speech recognition performance in complex (e.g., multi-source, dynamic) acoustic environments. While such predictions may not yet be sufficiently accurate for demanding applications, such as individual hearing aid fitting, they can already be performed. This raises an interesting question: What could we do if we had a perfect speech intelligibility model? As a first step, means to explore and interpret the predicted outcomes of large numbers of speech recognition experiments would be helpful, and large amounts of data demand an accessible, that is, easily comprehensible, representation. In this contribution, an interactive (i.e., user-manipulable) representation of speech recognition performance is proposed and investigated by means of a concrete example, which focuses on the listener’s head orientation and the spatial dimensions – in particular width and depth – of an acoustic scene. An exemplary modeling toolchain, that is, a combination of an acoustic model, a hearing device model, and a listener model, was used to generate a data set for demonstration purposes. Using the spatial speech recognition maps to explore this data set demonstrated the suitability of the approach for observing possibly relevant listener behavior. The proposed representation was found to be a suitable target for comparing and validating modeling approaches in ecologically relevant contexts, and should help to explore possible applications of future speech recognition models. Ultimately, it may serve as a tool for using validated prediction models in the design of spaces and devices that take speech communication into account.
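The abstract does not reproduce the modeling toolchain itself. As a loose, hypothetical illustration of what a spatial speech recognition map represents, the following sketch evaluates a toy listener model on a grid of listener positions: the long-term SNR is approximated by inverse-square distance attenuation of one target talker and one masker, and a logistic psychometric function maps SNR to a recognition score. All quantities (source positions, the SRT of −7 dB, the slope of 0.15/dB) are placeholders, not values from the article.

```python
import numpy as np

def toy_recognition_map(target_pos, masker_pos, grid_x, grid_y,
                        srt_db=-7.0, slope_per_db=0.15):
    """Toy spatial speech recognition map (illustrative only).

    For each listener position on the grid, estimate the SNR from
    inverse-square distance attenuation of a target talker and a
    single masker (equal source levels assumed), then map the SNR
    to a recognition score in [0, 1] with a logistic psychometric
    function parameterized by a placeholder SRT and slope.
    """
    scores = np.zeros((len(grid_y), len(grid_x)))
    for iy, y in enumerate(grid_y):
        for ix, x in enumerate(grid_x):
            listener = np.array([x, y], dtype=float)
            d_target = np.linalg.norm(listener - target_pos) + 1e-6
            d_masker = np.linalg.norm(listener - masker_pos) + 1e-6
            # 20*log10 of the distance ratio: level difference under
            # free-field inverse-square attenuation, equal source levels
            snr_db = 20.0 * np.log10(d_masker / d_target)
            # logistic psychometric function: 0.5 at the SRT
            scores[iy, ix] = 1.0 / (1.0 + np.exp(-slope_per_db
                                                 * (snr_db - srt_db)))
    return scores

# Example: target on the left, masker on the right of a 4 m x 4 m scene
target = np.array([1.0, 2.0])
masker = np.array([3.0, 2.0])
xs = np.linspace(0.0, 4.0, 9)
ys = np.linspace(0.0, 4.0, 9)
scene_map = toy_recognition_map(target, masker, xs, ys)
```

A real toolchain, as described in the article, would replace the distance-ratio SNR with binaural room simulation, a hearing device model, and a validated listener model; the map structure (score per position, optionally per head orientation) stays the same.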
Key words: Interactive speech recognition map / Speech perception model / Speech recognition performance / Complex acoustic scene / Speech masking / Speech in noise / Impaired hearing / Hearing loss compensation
© The Author(s), published by EDP Sciences, 2022
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.