Acta Acustica, Volume 8, 2024

Article Number: 17
Number of page(s): 17
Section: Hearing, Audiology and Psychoacoustics
DOI: https://doi.org/10.1051/aacus/2024009
Published online: 01 April 2024
Scientific Article
Comparison of 2D and 3D multichannel audio rendering methods for hearing research applications using technical and perceptual measures
1 Auditory Signal Processing and Cluster of Excellence “Hearing4all”, Department of Medical Physics and Acoustics, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
2 Medical Physics and Cluster of Excellence “Hearing4all”, Department of Medical Physics and Acoustics, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
* Corresponding author: merle.gerken@uol.de
Received: 10 February 2023
Accepted: 5 March 2024
Virtual reality with multichannel audio playback is increasingly used in hearing aid research. The purpose of this study is to compare horizontal (2D) and periphonic (3D) rendering methods in terms of localization, minimum audible angle, and perceptual ratings related to spatial quality. Higher Order Ambisonics, Vector-Base Amplitude Panning, and Nearest Speaker Selection were used, with playback through 16, 29, and 45 loudspeakers. The results show that periphonic rendering improves vertical localization compared with horizontal rendering. The perceptual advantage of periphonic rendering depends on the spatial complexity of the scene and disappears in complex acoustic environments. Scenes with low acoustic complexity, such as a single primary sound source in a room, benefit from Nearest Speaker Selection rendering. For more complex scenes with multiple sound sources, such as a symphony orchestra in a concert hall with many primary sources, or traffic on a road with moving sources, horizontal rendering methods such as 2D Higher Order Ambisonics provide similar or better performance.
Key words: 3D audio rendering / Ambisonics / Sound quality perception / Sound source localization / Virtual acoustics
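To illustrate one of the rendering methods compared above, the following is a minimal sketch of 2D Vector-Base Amplitude Panning (VBAP) between a pair of adjacent loudspeakers, in the spirit of Pulkki's formulation. It is not the implementation used in the study; the function name and interface are chosen here for illustration, and it assumes the virtual source lies within the arc spanned by the two loudspeakers.

```python
import math

def vbap_2d_gains(source_az, spk1_az, spk2_az):
    """Compute power-normalized 2D VBAP gains for a virtual source
    panned between two loudspeakers. Azimuths are in degrees.
    Assumes the source direction lies inside the loudspeaker pair's arc."""
    def unit(az):
        # Unit direction vector in the horizontal plane
        r = math.radians(az)
        return (math.cos(r), math.sin(r))

    l1, l2 = unit(spk1_az), unit(spk2_az)
    p = unit(source_az)

    # Solve p = g1*l1 + g2*l2 using the inverse of the 2x2 base matrix [l1 l2]
    det = l1[0] * l2[1] - l2[0] * l1[1]
    g1 = (p[0] * l2[1] - p[1] * l2[0]) / det
    g2 = (l1[0] * p[1] - l1[1] * p[0]) / det

    # Power normalization so g1^2 + g2^2 = 1 (constant perceived loudness)
    norm = math.hypot(g1, g2)
    return g1 / norm, g2 / norm

# A source midway between loudspeakers at +45° and -45° receives equal gains
g1, g2 = vbap_2d_gains(0.0, 45.0, -45.0)
```

A source panned exactly onto one loudspeaker yields gains of (1, 0), i.e. the method degenerates to single-speaker playback, which is the limiting case that Nearest Speaker Selection applies everywhere.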
© The Author(s), Published by EDP Sciences, 2024
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.