Open Access
Acta Acustica, Volume 5 (2021), Article Number 55, 14 pages
Section: Hearing, Audiology and Psychoacoustics
DOI: https://doi.org/10.1051/aacus/2021048
Published online: 21 December 2021
  1. P. Zahorik, D.S. Brungart, A.W. Bronkhorst: Auditory distance perception in humans: A summary of past and present research. Acta Acustica united with Acustica 91, 3 (2005) 409–420.
  2. A.J. Kolarik, B.C.J. Moore, P. Zahorik, S. Cirstea, S. Pardhan: Auditory distance perception in humans: A review of cues, development, neuronal bases, and effects of sensory loss. Attention, Perception, & Psychophysics 78, 2 (2016) 373–395. https://doi.org/10.3758/s13414-015-1015-1.
  3. D.S. Brungart, W.M. Rabinowitz: Auditory localization of nearby sources. Head-related transfer functions. The Journal of the Acoustical Society of America 106, 3 (1999) 1465–1479. https://doi.org/10.1121/1.427180.
  4. D.S. Brungart, W.M. Rabinowitz: Auditory localization in the near-field, in Proc. of the 3rd International Conference on Auditory Display, Palo Alto, CA, USA. 1996, pp. 1–5.
  5. J.M. Arend, A. Neidhardt, C. Pörschmann: Measurement and perceptual evaluation of a spherical near-field HRTF set, in Proc. of the 29th Tonmeistertagung – VDT International Convention, Cologne, Germany. 2016, pp. 356–363.
  6. D.S. Brungart: Auditory localization of nearby sources. III. Stimulus effects. The Journal of the Acoustical Society of America 106, 6 (1999) 3589–3602. https://doi.org/10.1121/1.428212.
  7. N. Kopčo, B.G. Shinn-Cunningham: Effect of stimulus spectrum on distance perception for nearby sources. The Journal of the Acoustical Society of America 130, 3 (2011) 1530–1541. https://doi.org/10.1121/1.3613705.
  8. N. Kopčo, S. Huang, J.W. Belliveau, T. Raij, C. Tengshe, J. Ahveninen: Neuronal representations of distance in human auditory cortex. Proceedings of the National Academy of Sciences 109, 27 (2012) 11019–11024. https://doi.org/10.1073/pnas.1119496109.
  9. N. Kopčo, K. Kumar Doreswamy, S. Huang, S. Rossi, J. Ahveninen: Cortical auditory distance representation based on direct-to-reverberant energy ratio. NeuroImage 208 (2020) 116436. https://doi.org/10.1016/j.neuroimage.2019.116436.
  10. B.G. Shinn-Cunningham: Localizing sound in rooms, in Proc. of the ACM SIGGRAPH and EUROGRAPHICS Campfire: Acoustic Rendering for Virtual Environments, Snowbird, UT, USA. 2001, pp. 17–22.
  11. J.M. Arend, H.R. Liesefeld, C. Pörschmann: On the influence of non-individual binaural cues and the impact of level normalization on auditory distance estimation of nearby sound sources. Acta Acustica 5, 10 (2021) 1–21. https://doi.org/10.1051/aacus/2021001.
  12. A. Kan, C. Jin, A. van Schaik: A psychophysical evaluation of near-field head-related transfer functions synthesized using a distance variation function. The Journal of the Acoustical Society of America 125, 4 (2009) 2233–2242. https://doi.org/10.1121/1.3081395.
  13. S. Spagnol, E. Tavazzi, F. Avanzini: Distance rendering and perception of nearby virtual sound sources with a near-field filter model. Applied Acoustics 115 (2017) 61–73. https://doi.org/10.1016/j.apacoust.2016.08.015.
  14. O.S. Rummukainen, S.J. Schlecht, T. Robotham, A. Plinge, E.A.P. Habets: Perceptual study of near-field binaural audio rendering in six-degrees-of-freedom virtual reality, in Proc. of IEEE VR, Osaka, Japan. 2019, pp. 1–7. https://doi.org/10.1109/VR.2019.8798177.
  15. A. Lindau, S. Weinzierl: Assessing the plausibility of virtual acoustic environments. Acta Acustica united with Acustica 98, 5 (2012) 804–810. https://doi.org/10.3813/AAA.918562.
  16. M. Slater: Place illusion and plausibility can lead to realistic behaviour in immersive virtual environments. Philosophical Transactions of the Royal Society B 364 (2009) 3549–3557. https://doi.org/10.1098/rstb.2009.0138.
  17. M. Hofer, T. Hartmann, A. Eden, R. Ratan, L. Hahn: The role of plausibility in the experience of spatial presence in virtual environments. Frontiers in Virtual Reality 1, April (2020) 1–9. https://doi.org/10.3389/frvir.2020.00002.
  18. U. Reiter: Perceived quality in game audio, in M. Grimshaw (Ed.), Game Sound Technology and Player Interaction: Concepts and Developments, Chapter 8, IGI Global, Hershey, PA, USA. 2011, pp. 153–174. https://doi.org/10.4018/978-1-61692-828-5.ch008.
  19. D. Ackermann, F. Fiedler, F. Brinkmann, M. Schneider, S. Weinzierl: On the acoustic qualities of dynamic pseudobinaural recordings. The Journal of the Audio Engineering Society 68, 6 (2020) 418–427. https://doi.org/10.17743/jaes.2020.0036.
  20. J.M. Arend, S.V. Amengual Garí, C. Schissler, F. Klein, P.W. Robinson: Six-degrees-of-freedom parametric spatial audio based on one monaural room impulse response. The Journal of the Audio Engineering Society 69, 7/8 (2021) 557–575. https://doi.org/10.17743/jaes.2021.0009.
  21. F. Brinkmann, L. Aspöck, D. Ackermann, S. Lepa, M. Vorländer, S. Weinzierl: A round robin on room acoustical simulation and auralization. The Journal of the Acoustical Society of America 145, 4 (2019) 2746–2760. https://doi.org/10.1121/1.5096178.
  22. A. Neidhardt, N. Knoop: Binaural walk-through scenarios with actual self-walking using an HTC Vive, in Proc. of the 43rd DAGA, Kiel, Germany. 2017, pp. 283–286.
  23. A. Neidhardt, A.I. Tommy, A.D. Pereppadan: Plausibility of an interactive approaching motion towards a virtual sound source based on simplified BRIR sets, in Proc. of the 144th AES Convention, Milan, Italy. 2018, pp. 1–11.
  24. S.V. Amengual Garí, J.M. Arend, P. Calamia, P.W. Robinson: Optimizations of the spatial decomposition method for binaural reproduction. The Journal of the Audio Engineering Society 68, 12 (2020) 959–976. https://doi.org/10.17743/jaes.2020.0063.
  25. A. Neidhardt, A.M. Zerlik: The availability of a hidden real reference affects the plausibility of position-dynamic auditory AR. Frontiers in Virtual Reality 2, 678875 (2021) 1–17. https://doi.org/10.3389/frvir.2021.678875.
  26. VRACE: VRACE Research Team. https://vrace-etn.eu/research-team/. Accessed: 2021-11-09.
  27. Oculus: Oculus Developer. https://developer.oculus.com/blog/near-field-3d-audio-explained. Accessed: 2021-11-09.
  28. Magic Leap: Magic Leap Developer. https://developer.magicleap.com/en-us/learn/guides/lumin-sdk-soundfield-audio. Accessed: 2021-11-09.
  29. Resonance Audio: Resonance Audio Developer. https://resonance-audio.github.io/resonance-audio/develop/overview.html. Accessed: 2021-11-09.
  30. T. Carpentier, M. Noisternig, O. Warusfel: Twenty years of Ircam Spat: Looking back, looking forward, in Proc. of the 41st International Computer Music Conference (ICMC), Denton, TX, USA. 2015, pp. 270–277.
  31. D. Poirier-Quinot, B.F.G. Katz: The Anaglyph binaural audio engine, in Proc. of the 144th AES Convention, Milan, Italy. 2018, pp. 1–4.
  32. M. Cuevas-Rodríguez, L. Picinali, D. González-Toledo, C. Garre, E. de la Rubia-Cuestas, L. Molina-Tanco, A. Reyes-Lecuona: 3D Tune-In Toolkit: An open-source library for real-time binaural spatialisation. PLoS One 14, 3 (2019) 1–37. https://doi.org/10.1371/journal.pone.0211899.
  33. K. Strelnikov, M. Rosito, P. Barone: Effect of audiovisual training on monaural spatial hearing in horizontal plane. PLoS One 6, 3 (2011) 1–9. https://doi.org/10.1371/journal.pone.0018344.
  34. A. Isaiah, T. Vongpaisal, A.J. King, D.E.H. Hartley: Multisensory training improves auditory spatial processing following bilateral cochlear implantation. The Journal of Neuroscience 34, 33 (2014) 11119–11130. https://doi.org/10.1523/JNEUROSCI.4767-13.2014.
  35. C. Valzolgher, C. Campus, G. Rabini, M. Gori, F. Pavani: Updating spatial hearing abilities through multisensory and motor cues. Cognition 204 (2020) 104409. https://doi.org/10.1016/j.cognition.2020.104409.
  36. A. Neidhardt, F. Klein, N. Knoop, T. Köllmer: Flexible Python tool for dynamic binaural synthesis applications, in Proc. of the 142nd AES Convention, Berlin, Germany. 2017, pp. 1–5.
  37. B. Bernschütz: A spherical far field HRIR/HRTF compilation of the Neumann KU 100, in Proc. of the 39th DAGA, Merano, Italy. 2013, pp. 592–595.
  38. R.O. Duda, W.L. Martens: Range dependence of the response of a spherical head model. The Journal of the Acoustical Society of America 104, 5 (1998) 3048–3058. https://doi.org/10.1121/1.423886.
  39. V.R. Algazi, C. Avendano, R.O. Duda: Estimation of a spherical-head model from anthropometry. The Journal of the Audio Engineering Society 49, 6 (2001) 472–479.
  40. D. Romblom, B. Cook: Near-field compensation for HRTF processing, in Proc. of the 125th AES Convention, San Francisco, CA, USA. 2008, pp. 1–6.
  41. J.M. Arend, C. Pörschmann: Synthesis of near-field HRTFs by directional equalization of far-field datasets, in Proc. of the 45th DAGA, Rostock, Germany. 2019, pp. 1454–1457.
  42. J.M. Arend, M. Ramírez, H.R. Liesefeld, C. Pörschmann: Supplementary material for “Do near-field cues enhance the plausibility of non-individual binaural rendering in a dynamic multimodal virtual acoustic scene?”. Nov. 2021. https://doi.org/10.5281/zenodo.5656726.
  43. A. Lindau, F. Brinkmann: Perceptual evaluation of headphone compensation in binaural synthesis based on non-individual recordings. The Journal of the Audio Engineering Society 60, 1/2 (2012) 54–62.
  44. V. Erbes, M. Geier, H. Wierstorf, S. Spors: Free database of low-frequency corrected head-related transfer functions and headphone compensation filters, in Proc. of the 142nd AES Convention, Berlin, Germany. 2017, pp. 1–5.
  45. S.W. Greenhouse, S. Geisser: On methods in the analysis of profile data. Psychometrika 24, 2 (1959) 95–112. https://doi.org/10.1007/BF02289823.
  46. B. Bruya: Effortless attention: A new perspective in the cognitive science of attention and action. MIT Press, Cambridge, MA, 2010. https://doi.org/10.7551/mitpress/9780262013840.001.0001.
  47. W. Schneider, R.M. Shiffrin: Controlled and automatic human information processing: I. Detection, search, and attention. Psychological Review 84, 1 (1977) 1–66. https://doi.org/10.1037/0033-295X.84.1.1.
  48. P. Demonte: HARVARD speech corpus – audio recording 2019. University of Salford, Collection, 2019. https://doi.org/10.17866/rd.salford.c.4437578.v1.
  49. ITU-R BS.1770-4: Algorithms to measure audio programme loudness and true-peak audio level. International Telecommunication Union, Geneva, 2015.
  50. A. Maravita, C. Spence, J. Driver: Multisensory integration and the body schema: Close to hand and within reach. Current Biology 13, 13 (2003) R531–R539. https://doi.org/10.1016/S0960-9822(03)00449-4.
  51. M. Gori, T. Vercillo, G. Sandini, D. Burr: Tactile feedback improves auditory spatial localization. Frontiers in Psychology 5 (2014) 1–7. https://doi.org/10.3389/fpsyg.2014.01121.
