Topical Issue – CFA 2022
Review – Open Access
Acta Acustica, Volume 7 (2023), Article Number 64, 22 pages
DOI: https://doi.org/10.1051/aacus/2023056
Published online 08 December 2023
  1. J. Skowronek, A. Raake, G.H. Berndtsson, O.S. Rummukainen, P. Usai, A.N.B. Gunkel, M. Johanson, E.A.P. Habets, L. Malfait, D. Lindero, A. Toet: Quality of experience in telemeetings and videoconferencing: a comprehensive survey. IEEE Access 10 (2022) 63885–63931. [CrossRef] [Google Scholar]
  2. M. Bunz, G. Meikle: The internet of things. Wiley, Hoboken, NJ, USA, 2017. [Google Scholar]
  3. Ericsson ConsumerLab: 10 Hot Consumer Trends 2030: The internet of senses, 2019. [Google Scholar]
  4. Detection and Classification of Acoustic Scenes and Events: https://dcase.community. Accessed November 27, 2023. [Google Scholar]
  5. X. Huang, J. Baker, R. Reddy: A historical perspective of speech recognition. Communications of the ACM 57, 1 (2014). [Google Scholar]
  6. M. Clerc, L. Bougrain, F. Lotte: Brain computer interfaces 1: foundations and methods. Wiley, 2016. [CrossRef] [Google Scholar]
  7. M. Clerc, L. Bougrain, F. Lotte: Brain computer interfaces 2: technologies and applications. Wiley, 2016. [CrossRef] [Google Scholar]
  8. N. Zacharov: Sensory evaluation of sound. Taylor & Francis Group, 2019. [Google Scholar]
  9. International Telecommunication Union (ITU-T): Study Group 12. https://www.itu.int/en/ITU-T/about/groups/Pages/sg12.aspx. Accessed November 27, 2023. [Google Scholar]
  10. European Telecommunication Standards Institute: Technical Committee Speech and Multimedia Transmission Quality. https://www.etsi.org/committee/stq. Accessed November 27, 2023. [Google Scholar]
  11. 3rd Generation Partnership Project: https://www.3gpp.org. Accessed November 27, 2023. [Google Scholar]
  12. J.L. Flanagan, D.A. Berkley, K.L. Shipley: A digital teleconferencing system with integrated modalities for human/machine communication: HuMaNet, in: Acoustics, Speech, and Signal Processing, IEEE International Conference on, IEEE Computer Society, 1991. [Google Scholar]
  13. H. Buchner, S. Spors, W. Kellermann, R. Rabenstein: Full-duplex communication systems using loudspeaker arrays and microphone arrays, in: Proceedings of IEEE International Conference on Multimedia and Expo, IEEE, 2002. [Google Scholar]
  14. F. Khalil, J.P. Jullien, A. Gilloire: Microphone array for sound pickup in teleconference systems. Journal of the Audio Engineering Society 42, 9 (1994) 691–700. [Google Scholar]
  15. W. Kellermann: Analysis and design of multirate systems for cancellation of acoustical echoes, in: ICASSP-88, International Conference on Acoustics, Speech, and Signal Processing, IEEE, 1988. [Google Scholar]
  16. A. Gilloire, M. Vetterli: Adaptive filtering in subbands with critical sampling: analysis, experiments, and application to acoustic echo cancellation. IEEE Transactions on Signal Processing 40, 8 (1992) 1862–1875. [CrossRef] [Google Scholar]
  17. M.J. Evans, A.I. Tew, J.A.S. Angus: Spatial audio teleconferencing – which way is better? ICAD, 1997. [Google Scholar]
  18. Recommendation ITU-T P.700: Calculation of loudness for speech communication. ITU-T, 2021. https://www.itu.int/rec/T-REC-P.700-202106-I/en. [Google Scholar]
  19. M. Wong, R. Duraiswami: Shared-space: spatial audio and video layouts for videoconferencing in a virtual room, in: Immersive and 3D Audio: from Architecture to Automotive (I3DA), 2021, pp. 1–6. https://doi.org/10.1109/I3DA48870.2021.961097. [Google Scholar]
  20. M. Miyoshi, N. Koizumi: NTT’s research on acoustics for future telecommunication services. Applied Acoustics 36 (1992) 307–326. [CrossRef] [Google Scholar]
  21. P. Cochrane, D. Heatley, K.H. Cameron: Telepresence-visual telecommunications into the next century, in: Fourth IEE Conference on Telecommunications, Manchester, UK, IEEE, 1993, pp. 175–180. [Google Scholar]
  22. A. Rimell: Immersive spatial audio for telepresence applications: system design and implementation, in: 16th AES International Conference: Spatial Sound Reproduction, Paper 16-033, AES, 1999. [Google Scholar]
  23. A. Raake, C. Schlegel, K. Hoeldtke, M. Geier, J. Ahrens: Listening and conversational quality of spatial audio conferencing, in: 40th International AES Conference: Spatial Audio: Sense the Sound of Space, AES, 2010. [Google Scholar]
  24. A.J. Berkhout, D. de Vries, P. Vogel: Acoustic control by wave field synthesis. Journal of the Acoustical Society of America 93, 5 (1993) 2764–2778. [CrossRef] [Google Scholar]
  25. R. Nicol, M. Emerit: 3D-sound reproduction over an extensive listening area: a hybrid method derived from holophony and ambisonic, in: 16th AES International Conference: Spatial Sound Reproduction, Paper 16-039, AES, 1999. [Google Scholar]
  26. T. Ziemer: Wave field synthesis, in: Psychoacoustic Music Sound Field Synthesis, Current Research in Systematic Musicology, vol. 7, Springer, 2020. https://doi.org/10.1007/978-3-030-23033-3_8. [CrossRef] [Google Scholar]
  27. M.A. Gerzon: Periphony: with-height sound reproduction. Journal of the Audio Engineering Society 21, 1 (1973) 2–10. [Google Scholar]
  28. J.S. Bamford: An analysis of ambisonic sound systems of first and second order. M.Sc. thesis, University of Waterloo, 1995. [Google Scholar]
  29. J. Daniel, S. Moreau, R. Nicol: Further investigations of high-order ambisonics and wavefield synthesis for holophonic sound imaging, in: 114th AES Convention, Paper 5788, AES, 2003. [Google Scholar]
  30. V. Pulkki: Virtual sound source positioning using vector base amplitude panning. Journal of the Audio Engineering Society 45, 6 (1997) 456–466. [Google Scholar]
  31. H. Møller: Fundamentals of binaural technology. Applied Acoustics 36, 3–4 (1992) 171–218. [CrossRef] [Google Scholar]
  32. V. Larcher: Techniques de spatialisation des sons pour la réalité virtuelle. Ph.D. thesis, University of Paris 6, 2001. [Google Scholar]
  33. R. Nicol: Binaural technology. AES Monograph, 2010. [Google Scholar]
  34. A. Roginska, P. Geluso: Immersive sound: the art and science of binaural and multi-channel audio, 1st ed., Routledge, 2017. https://doi.org/10.4324/9781315707525. [Google Scholar]
  35. J. Blauert: Spatial hearing: the psychophysics of human sound localization. The MIT Press, 1996. https://doi.org/10.7551/mitpress/6391.001.0001. [Google Scholar]
  36. J. Daniel: Spatial sound encoding including near field effect: Introducing distance coding filters and a viable new Ambisonic format, in: AES 23rd International Conference, AES, 2003. [Google Scholar]
  37. F. Olivieri, N. Peters, D. Sen: Scene-based audio and higher order ambisonics: a technology review and application to next-generation audio, vr and 360° video, EBU Technical Review, 2018. [Google Scholar]
  38. J. Daniel: Représentation de champs acoustiques, application à la transmission et à la restitution de scènes sonores complexes dans un contexte multimédia. Ph.D. thesis, University of Paris 6, 2000. [Google Scholar]
  39. P. Lecomte, P.A. Gauthier, A. Berry, A. Garcia, C. Langrenne: Directional filtering of Ambisonic sound scenes, in: AES International Conference on Spatial Reproduction – Aesthetics and Science, AES, 2018. [Google Scholar]
  40. P. Lecomte, P.A. Gauthier, C. Langrenne, A. Berry, A. Garcia: Cancellation of room reflections over an extended area using Ambisonics. Journal of the Acoustical Society of America 143 (2018) 811–828. [CrossRef] [PubMed] [Google Scholar]
  41. G. Theile: Multichannel natural recording based on psychoacoustic principles, in: AES 108th Convention, Preprint 5156, AES, Paris, 2000. [Google Scholar]
  42. Sonférences organized by the society Trégor Sonore: https://tregorsonore.fr/index.php/sonferences-du-tregor/. Accessed November 27, 2023. [Google Scholar]
  43. P.G. Craven, M.A. Gerzon: US Patent 4042779, 1977. [Google Scholar]
  44. B. Rafaely: Analysis and design of spherical microphone arrays. IEEE Transactions on Speech and Audio Processing 13, 1 (2005) 135–143. [Google Scholar]
  45. D.P. Jarrett, E.A.P. Habets, P.A. Naylor: Theory and applications of spherical microphone array processing, in: Topics in Signal Processing, Springer, 2017. [Google Scholar]
  46. B. Rafaely: Fundamentals of spherical array processing, in: Springer Topics in Signal Processing, Springer, 2019. [CrossRef] [Google Scholar]
  47. S. Moreau, J. Daniel, S. Bertet: 3D sound field recording with Higher Order Ambisonics – Objective measurements and validation of spherical microphone, in: AES 120th Convention, Paper 6857, AES, 2006. [Google Scholar]
  48. F. Zotter, M. Frank: Higher-order ambisonic microphones and the wave equation (linear, lossless), in: Ambisonics. Springer Topics in Signal Processing, vol. 19, Springer, Cham, 2019. [CrossRef] [Google Scholar]
  49. N. Epain, J. Daniel: Improving spherical microphone arrays, in: AES 124th Convention, Paper 7479, 2008. [Google Scholar]
  50. J. Palacino, R. Nicol: Spatial sound pick-up with a low number of microphones. ICA, 2013. [Google Scholar]
  51. M.-V. Laitinen, L. Laaksonen, J. Vilkamo: Spatial audio representation and rendering. Patent EP 3757992, 2020. [Google Scholar]
  52. Diapason: Rennes Opera goes 3D for Don Giovanni, L’Opéra de Rennes se met à la 3D pour Don Giovanni (in French), 2009. https://www.diapasonmag.fr/a-laune/lopera-de-rennes-se-met-a-la-3d-pour-don-giovanni-12989.html. Accessed November 27, 2023. [Google Scholar]
  53. mh acoustics LLC: https://mhacoustics.com. Accessed November 27, 2023. [Google Scholar]
  54. Zylia: https://www.zylia.co. Accessed November 27, 2023. [Google Scholar]
  55. A. Farina, L. Tronchin: 3D sound characterization in theatres employing microphone arrays. Acta Acustica united with Acustica 99 (2013) 118–125. [CrossRef] [Google Scholar]
  56. P. Massé: Analysis, treatment, and manipulation methods for spatial room impulse responses measured with spherical microphone arrays. Ph.D. thesis, Sorbonne Université, 2019. [Google Scholar]
  57. J. Daniel, S. Kitic: Echo-enabled direction-of-arrival and range estimation of a mobile source in ambisonic domain, in: 2022 30th European Signal Processing Conference (EUSIPCO), Belgrade, Serbia, IEEE, 2022, pp. 852–856. https://doi.org/10.23919/EUSIPCO55093.2022.9909743. [CrossRef] [Google Scholar]
  58. J. Blauert (Ed.), The technology of binaural listening. Springer, 2020. https://doi.org/10.1007/978-3-642-37762-4. [Google Scholar]
  59. D.R. Begault, E.M. Wenzel, M.R. Anderson: Direct comparison of the impact of head tracking, reverberation, and individualized head-related transfer functions on the spatial perception of a virtual speech source. Journal of the Audio Engineering Society 49 (2001) 904–916. [Google Scholar]
  60. E. Hendrickx, P. Stitt, J.-C. Messonnier, J.-M. Lyzwa, B.F.G. Katz, C. de Boishéraud: Influence of head tracking on the externalization of speech stimuli for non-individualized binaural synthesis. Journal of the Acoustical Society of America 141, 3 (2017) 2011–2023. [CrossRef] [PubMed] [Google Scholar]
  61. H. Møller, M.F. Sørensen, D. Hammershøi, C.B. Jensen: Head related transfer functions of human subjects. Journal of the Audio Engineering Society 43, 5 (1995) 300–321. [Google Scholar]
  62. V.R. Algazi, R.O. Duda, D.P. Thompson, C. Avendano: The CIPIC HRTF database, in: Proceedings of the 2001 IEEE Workshop on the Applications of Signal Processing to Audio and Acoustics, IEEE, 2001. [Google Scholar]
  63. J.M. Pernaux, M. Emerit, J. Daniel, R. Nicol: Perceptual evaluation of static binaural sound, in: 22nd AES International Conference: Virtual, Synthetic, and Entertainment Audio, AES, 2002. [Google Scholar]
  64. LISTEN HRTF database: http://recherche.ircam.fr/equipes/salles/listen/. Accessed November 27, 2023. [Google Scholar]
  65. ARI HRTF database: https://www.oeaw.ac.at/isf/das-institut/software/hrtf-database. Accessed November 27, 2023. [Google Scholar]
  66. FABIAN HRTF database: https://depositonce.tu-berlin.de/items/bff6568a-5735-4ebc-b3fa-ac10707b7beb. Accessed November 27, 2023. [Google Scholar]
  67. N. Gupta, A. Barreto, M. Joshi, J.C. Agudelo: HRTF database at FIU DSP Lab, in: 2010 IEEE International Conference on Acoustics, Speech and Signal Processing, IEEE, 2010, pp. 169–172. https://doi.org/10.1109/ICASSP.2010.5496084. [CrossRef] [Google Scholar]
  68. K. Watanabe, Y. Iwaya, Y. Suzuki, S. Takane, S. Sato: Dataset of head-related transfer functions measured with a circular loudspeaker array. Acoustical Science and Technology 35, 3 (2014) 159–165. [CrossRef] [Google Scholar]
  69. C.T. Jin, P. Guillon, N. Epain, R. Zolfaghari, A. van Schaik, A.I. Tew, C. Hetherington, J. Thorpe: Creating the Sydney York morphological and acoustic recordings of ears database. IEEE Transactions on Multimedia 16, 1 (2014) 37–46. [CrossRef] [Google Scholar]
  70. ITA HRTF database: https://www.akustik.rwth-aachen.de/go/id/lsly. Accessed November 27, 2023. [Google Scholar]
  71. F. Brinkmann, M. Dinakaran, R. Pelzer, P. Grosche, D. Voss, S. Weinzierl: A cross-evaluated database of measured and simulated HRTFs including 3D head meshes, anthropometric features, and headphone impulse responses. Journal of the Audio Engineering Society 67, 9 (2019) 705–719. [CrossRef] [Google Scholar]
  72. I. Engel, R. Daugintis, T. Vicente, A.O.T. Hogg, J. Pauwels, A.J. Tournier, L. Picinali: The SONICOM HRTF dataset. Journal of the Audio Engineering Society 71, 5 (2023) 241–253. [CrossRef] [Google Scholar]
  73. P. Minnaar, J. Plogsties, C. Flemming: Directional resolution of head-related transfer functions required in binaural synthesis. Journal of the Audio Engineering Society 53, 10 (2005) 919–929. [Google Scholar]
  74. S. Carlile, C. Jin, V. van Raad: Continuous virtual auditory space using HRTF interpolation: Acoustic and psychophysical errors, in: Proceedings of the First IEEE Pacific-Rim Conference on Multimedia, IEEE, 2000, pp. 220–223. [Google Scholar]
  75. R. Martin, K. McAnally: Interpolation of head-related transfer functions. Technical Report DSTO-RR-0323, Australian Government – Department of Defence, 2007. [Google Scholar]
  76. BiLi Project (in French): https://www.espace-sciences.org/sciences-ouest/310/dossier/immersion-dans-le-son. Accessed November 27, 2023. [Google Scholar]
  77. T. Carpentier, H. Bahu, M. Noisternig, O. Warusfel: Measurement of a head-related transfer function database with high spatial resolution, in: 7th Forum Acusticum, Krakow, Poland, EAA, 2014. [Google Scholar]
  78. F. Rugeles Ospina: Individualisation de l’écoute binaurale: création et transformation des indices spectraux et des morphologies des individus. Ph.D. thesis, University of Paris 6, 2016. [Google Scholar]
  79. F. Rugeles Ospina, M. Emerit, B.F.G. Katz: The three-dimensional morphological database for spatial hearing research of the BiLi project, in: Proc. of Meetings on Acoustics, Acoustical Society of America (ASA), 2015. [Google Scholar]
  80. P. Majdak, F. Zotter, F. Brinkmann, J. De Muynke, M. Mihocic, M. Noisternig: Spatially oriented format for acoustics 2.1: Introduction and recent advances. Journal of the Audio Engineering Society 70, 7/8 (2022) 565–584. [CrossRef] [Google Scholar]
  81. P. Majdak, Y. Iwaya, T. Carpentier, R. Nicol, M. Parmentier, A. Roginska, Y. Suzuki, K. Watanabe, H. Wierstorf, H. Ziegelwanger, M. Noisternig: Spatially oriented format for acoustics: a data exchange format representing head-related transfer functions, in: AES 134th Convention, AES, 2013. [Google Scholar]
  82. SOFA (Spatially Oriented Format for Acoustics): https://www.sofaconventions.org/mediawiki/index.php/SOFA_(Spatially_Oriented_Format_for_Acoustics). Accessed November 27, 2023. [Google Scholar]
  83. D.N. Zotkin, R. Duraiswami, E. Grassi, N.A. Gumerov: Fast head-related transfer function measurement via reciprocity. Journal of the Acoustical Society of America 120, 4 (2006) 2202–2215. [CrossRef] [PubMed] [Google Scholar]
  84. G. Enzner: 3D-continuous-azimuth acquisition of head-related impulse responses using multi-channel adaptive filtering, in: 2009 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, IEEE, 2009, pp. 325–328. [CrossRef] [Google Scholar]
  85. M. Pollow, B. Masiero, P. Dietrich, J. Fels, M. Vorländer: Fast measurement system for spatially continuous individual HRTFs, in: 4th Int. Symposium on Ambisonics and Spherical Acoustics, 25th AES UK Conference, AES, University of York, UK, 2012. [Google Scholar]
  86. P. Majdak, P. Balazs, B. Laback: Multiple exponential sweep method for fast measurement of head-related transfer functions. Journal of the Audio Engineering Society 55, 7/8 (2007) 623–637. [Google Scholar]
  87. J. Richter, G. Behler, J. Fels: Evaluation of a fast HRTF measurement system, in: 140th International AES Convention, France, Paris, AES, 2016. [Google Scholar]
  88. S. Busson, R. Nicol, V. Choqueuse, V. Lemaire: Non-linear interpolation of head related transfer function. CFA, 2006. [Google Scholar]
  89. P. Guillon, R. Nicol, L. Simon: Head-Related Transfer Functions reconstruction from sparse measurements considering a priori knowledge from database analysis: a pattern recognition approach, in: AES 125th Convention, Paper 7610, AES, 2008. [Google Scholar]
  90. B.-S. Xie: Recovery of individual head-related transfer functions from a small set of measurements. Journal of the Acoustical Society of America 132, 1 (2012) 282–294. [CrossRef] [PubMed] [Google Scholar]
  91. M. Maazaoui, O. Warusfel: Estimation of individualized HRTF in unsupervised conditions, in: 140th International AES Convention, AES, 2016. [Google Scholar]
  92. A. Moreau, O. Warusfel: Identification de HRTFs individuelles par selfies binauraux et apprentissage machine. CFA, 2022. [Google Scholar]
  93. E.M. Wenzel, M. Arruda, D.J. Kistler, F.L. Wightman: Localization using nonindividualized head-related transfer functions. Journal of the Acoustical Society of America 94, 1 (1993) 111–123. [CrossRef] [PubMed] [Google Scholar]
  94. P.M. Hofman, J.G. Van Riswick, A.J. Van Opstal: Relearning sound localization with new ears. Nature neuroscience 1, 5 (1998) 417–421. [CrossRef] [PubMed] [Google Scholar]
  95. D. Poirier-Quinot, B.F.G. Katz: On the improvement of accomodation to non-individual HRTFs via VR active learning and inclusion of a 3D room response. Acta Acustica 5 (2021) 25. [CrossRef] [EDP Sciences] [Google Scholar]
  96. F.L. Wightman, D.J. Kistler: Headphone simulation of free-field listening. II: Psychophysical validation. Journal of the Acoustical Society of America 85, 2 (1989) 868–878. [CrossRef] [PubMed] [Google Scholar]
  97. T.D. Mrsic-Flogel, A.J. King, R.L. Jenison, J.W. Schnupp: Listening through different ears alters spatial response fields in ferret primary auditory cortex. Journal of Neurophysiology 86 (2001) 1043–1046. [CrossRef] [PubMed] [Google Scholar]
  98. J.C. Middlebrooks: Virtual localization improved by scaling nonindividualized external-ear transfer functions in frequency. Journal of the Acoustical Society of America 106, 3 (1999) 1493–1510. [CrossRef] [PubMed] [Google Scholar]
  99. C.T. Jin, P. Leong, J. Leung, A. Corderoy, S. Carlile: Enabling individualized virtual auditory space using morphological measurements, in: Proceedings of the First IEEE Pacific-Rim Conference on Multimedia, Citeseer, 2000. [Google Scholar]
  100. B.F.G. Katz: Boundary element method calculation of individual head-related transfer function. I. Rigid model calculation. Journal of the Acoustical Society of America 110, 5 (2001) 2440–2448. [CrossRef] [PubMed] [Google Scholar]
  101. V.R. Algazi, R.O. Duda, R. Duraiswami, N.A. Gumerov, Z. Tang: Approximating the head-related transfer function using simple geometric models of the head and torso. Journal of the Acoustical Society of America 112, 5 (2002) 2053–2064. [CrossRef] [PubMed] [Google Scholar]
  102. D.N. Zotkin, J. Hwang, R. Duraiswami, L.S. Davis: HRTF personalization using anthropometric measurements, in: 2003 IEEE workshop on applications of signal processing to audio and acoustics, IEEE, 2003. [Google Scholar]
  103. S. Hwang, Y. Park, Y. Park: Modeling and customization of head related impulse responses based on general basis functions in time domain. Acta Acustica United with Acustica 94, 6 (2008) 965–980. [CrossRef] [Google Scholar]
  104. S. Hwang, Y. Park: Interpretations on principal components analysis of head-related impulse responses in the median plane. Journal of the Acoustical Society of America 123, 4 (2008) EL65–EL71. [CrossRef] [PubMed] [Google Scholar]
  105. M. Dellepiane, N. Pietroni, N. Tsingos, M. Asselot, R. Scopigno: Reconstructing head models from photographs for individualized 3D-audio processing, in: Computer Graphics Forum, Blackwell Publishing Ltd., Oxford, UK, 2008, pp. 1719–1727. [CrossRef] [Google Scholar]
  106. S. Xu, Z. Li, G. Salvendy: Individualized head-related transfer functions based on population grouping. Journal of the Acoustical Society of America 124, 5 (2008) 2708–2710. [CrossRef] [PubMed] [Google Scholar]
  107. A. Lindau, J. Estrella, S. Weinzierl: Individualization of dynamic binaural synthesis by real time manipulation of ITD, in: 128th Audio Engineering Society Convention, AES, 2010. [Google Scholar]
  108. K. Iida, Y. Ishii, S. Nishioka: Personalization of head-related transfer functions in the median plane based on the anthropometry of the listener’s pinnae. Journal of the Acoustical Society of America 136, 1 (2014) 317–333. [CrossRef] [PubMed] [Google Scholar]
  109. K.J. Fink, L. Ray: Individualization of head related transfer functions using principal component analysis. Applied Acoustics 87 (2015) 162–173. [CrossRef] [Google Scholar]
  110. R. Bomhardt, M. Lins, J. Fels: Analytical ellipsoidal model of interaural time differences for the individualization of head-related impulse responses. Journal of the Audio Engineering Society 64, 11 (2016) 882–894. [CrossRef] [Google Scholar]
  111. R. Nicol, M. Emerit, L. Gros: HRTF “prêt-à-porter” pour le son binaural dans les futurs contenus d’Orange. CFA, 2018. [Google Scholar]
  112. B.F.G. Katz, G. Parseihian: Perceptually based head-related transfer function database optimization. Journal of the Acoustical Society of America 131 (2012) EL99–EL105. [CrossRef] [PubMed] [Google Scholar]
  113. P.Y. Michaud, R. Nicol: Multi dimensional scaling of perceived dissimilarities between non-individual HRTFs: investigating the perceptual space of binaural synthesis. BiLi Project Deliverable, 2015. [Google Scholar]
  114. P. Guillon, T. Guignard, R. Nicol: Head-related transfer function customization by frequency scaling and rotation shift based on a new morphological matching method, in: 125th AES Convention, Paper 7550, AES, 2008. [Google Scholar]
  115. M. Emerit, F. Rugeles Ospina, R. Nicol: Transformer un jeu de HRTF en un autre à partir de données morphologiques. CFA – VISHNO, 2016. [Google Scholar]
  116. Y. Kahana, P.A. Nelson: Boundary element simulations of the transfer function of human heads and baffled pinnae using accurate geometric models. Journal of Sound and Vibration 300, 3–5 (2007) 552–579. [CrossRef] [Google Scholar]
  117. M. Pollow, K.-V. Nguyen, O. Warusfel, T. Carpentier, M. Müller-Trapet, M. Vorländer, M. Noisternig: Calculation of head-related transfer functions for arbitrary field points using spherical harmonics decomposition. Acta Acustica united with Acustica 98, 1 (2012) 72–82. [CrossRef] [Google Scholar]
  118. D.J. Kistler, F.L. Wightman: A model of head-related transfer functions based on principal components analysis and minimum-phase reconstruction. Journal of the Acoustical Society of America 91, 3 (1992). [Google Scholar]
  119. J. Blauert, J. Braasch (Eds.): The technology of binaural understanding. Springer, 2020. https://doi.org/10.1007/978-3-030-00386-9. [CrossRef] [Google Scholar]
  120. R. Nicol, M. Emerit: Reproducing 3D-sound for videoconferencing: a comparison between holophony and ambisonic. D.A.F.X., 1998. [Google Scholar]
  121. J.-M. Jot, V. Larcher, J.-M. Pernaux: A comparative study of 3-D audio encoding and rendering techniques, in: 16th AES International Conference: Spatial Sound Reproduction, Paper 16-025, AES, 1999. [Google Scholar]
  122. M. Relieu: La téléprésence, ou l’autre visiophonie. Réseaux 5, 144 (2007) 183–223. [Google Scholar]
  123. S. Brix, T. Sporer, J. Plogsties: CARROUSO – an European approach to 3D-audio, in: 110th AES Convention, Paper 5314, AES, 2001. [Google Scholar]
  124. R. Väänänen, O. Warusfel, M. Emerit: Encoding and rendering of perceptual sound scenes in the CARROUSO project, in: 22nd International AES Conference: Virtual, Synthetic, and Entertainment Audio, AES, 2002. [Google Scholar]
  125. E. Corteel, U. Horbach, R.S. Pellegrini: Multichannel inverse filtering of multiexciter distributed mode loudspeaker for wave field synthesis, in: 112th AES Convention, Paper 5611, AES, 2002. [Google Scholar]
  126. BlueJeans: https://www.bluejeans.com/. Accessed November 27, 2023. [Google Scholar]
  127. BT MeetMe with Dolby Voice: www.btconferencing.com/meetme-with-dolby-voice/meetme-with-dolby-voice_en.pdf. Accessed November 27, 2023. [Google Scholar]
  128. Dolby Voice: https://docs.dolby.io/communications-apis/docs/guides-dolby-voice. Accessed November 27, 2023. [Google Scholar]
  129. Cisco IX5000 Series: https://www.cisco.com/c/en/us/products/collateral/collaboration-endpoints/ix5000-series/datasheet-c78-733257.html. Accessed November 27, 2023. [Google Scholar]
  130. F. Rumsey: Spatial audio processing: Upmix, downmix, shake it all about. Journal of the Audio Engineering Society 61, 6 (2013) 474–478. [Google Scholar]
  131. W.H. Nam, T. Lee, S.C. Ko, Y. Son, H.K. Chung, K.-R. Kim, J. Kim, S. Hwang, K. Lee: AI 3D immersive audio codec based on content-adaptive dynamic down-mixing and up-mixing framework, in: 151st AES Convention, Paper 10525, AES, 2021. [Google Scholar]
  132. G. Lorho, N. Zacharov: Subjective evaluation of virtual home theater sound systems for loudspeakers and headphones, in: 116th AES Convention, Paper 6141, AES, 2004. [Google Scholar]
  133. C. Pike, F. Melchior: An assessment of virtual surround sound systems for headphone listening of 5.1 multichannel audio, in: 134th AES Convention, Paper 8819, AES, 2013. [Google Scholar]
  134. H. Møller, C.B. Jensen, D. Hammershøi, M.F. Sørensen: Design criteria for headphones. Journal of the Audio Engineering Society 43, 4 (1995) 218–232. [Google Scholar]
  135. P. Rueff, R. Nicol, J. Palacino: Characterization of a wide selection of headphones for binaural reproduction: measurement of electro-acoustic, magnetic and ergonomics features. BiLi Project Deliverable, 2015. [Google Scholar]
  136. F. Baumgarte, C. Faller: Binaural cue coding–part I: psychoacoustic fundamentals and design principles. IEEE Transactions on Speech and Audio Processing 11, 6 (2003) 509–519. [CrossRef] [Google Scholar]
  137. C. Faller, F. Baumgarte: Binaural cue coding–part II: schemes and applications. IEEE Transactions on Speech and Audio Processing 11, 6 (2003) 520–531. [CrossRef] [Google Scholar]
  138. M.A. Gerzon: Ambisonics in multichannel broadcasting and video. Journal of the Audio Engineering Society 33, 11 (1985) 859–871. [Google Scholar]
  139. A. Daniel: Spatial auditory blurring and applications to multichannel audio coding. Ph.D. thesis, University of Paris 6, 2011. [Google Scholar]
  140. Standard ISO/IEC 23008-3:2019: Information Technology – High Efficiency Coding and Media Delivery in Heterogeneous Environments – Part 3: 3D Audio, 2019. [Google Scholar]
  141. S.R. Quackenbush, J. Herre: MPEG standards for compressed representation of immersive audio. Proceedings of the IEEE 109, 9 (2021) 1578–1589. [CrossRef] [Google Scholar]
  142. IVAS: https://www.3gpp.org/technologies/ivas-highlights. Accessed November 27, 2023. [Google Scholar]
  143. ITU-R BS.1116-3: Methods for the subjective assessment of small impairments in audio systems, Technical Report, 2015. [Google Scholar]
  144. ITU-R BS.1284-2: General methods for the subjective assessment of sound quality, Technical Report, 2019. [Google Scholar]
  145. R. Nicol, L. Gros, C. Colomes, M. Noisternig, O. Warusfel, H. Bahu, B.F.G. Katz, L.S.R. Simon: A roadmap for assessing the quality of experience of 3D audio binaural rendering, in: EAA Joint Symposium on Auralization and Ambisonics, EAA, 2014. [Google Scholar]
  146. J.M. Pernaux, M. Emerit, R. Nicol: Perceptual evaluation of binaural sound synthesis: the problem of reporting localization judgments, in: 114th AES Convention, Paper 5789, AES, 2003. [Google Scholar]
  147. H. Bahu, T. Carpentier, M. Noisternig, O. Warusfel: Comparison of different egocentric pointing methods for 3D sound localization experiments. Acta Acustica united with Acustica 102, 1 (2016) 107–118. [CrossRef] [Google Scholar]
  148. P. Guillon: Individualisation des indices spectraux pour la synthèse binaurale: recherche et exploitation des similarités inter-individuelles pour l’adaptation ou la reconstruction de HRTF. Ph.D. thesis, Le Mans Université, 2009. [Google Scholar]
  149. D. Poirier-Quinot, B.F.G. Katz: Assessing the impact of Head-Related Transfer Function individualization on task performance: case of a virtual reality shooter game. Journal of the Audio Engineering Society 68, 4 (2020) 248–260. [CrossRef] [Google Scholar]
  150. S. Agrawal, A. Simon, S. Bech, K. Bærentsen, S. Forchhammer: Defining immersion: literature review for research on audiovisual experiences. Journal of the Audio Engineering Society 68, 6 (2020) 404–417. [CrossRef] [Google Scholar]
  151. R. Nicol, O. Dufor, L. Gros, P. Rueff, N. Farrugia: EEG measurement of binaural sound immersion, in: EAA Spatial Audio Signal Processing Symposium, EAA, 2019. [Google Scholar]
  152. E. Hendrickx, M. Paquier, V. Koehl: Audiovisual spatial coherence for 2D and stereoscopic-3D movies. Journal of the Audio Engineering Society 63, 11 (2015) 889–899. [CrossRef] [Google Scholar]
  153. J. Moreira, L. Gros, R. Nicol, I. Viaud-Delmon: Spatial auditory-visual integration: the case of binaural sound on a smartphone, in: AES 145th Convention, paper 10130, AES, 2018. [Google Scholar]
  154. S. Moulin, R. Nicol, L. Gros, P. Mamassian: Audio-visual spatial integration in distance dimension – when wave field synthesis meets stereoscopic-3D, in: 55th AES International Conference: Spatial Audio, AES, 2014. [Google Scholar]
  155. I.P. Howard, W.B. Templeton: Human spatial orientation. John Wiley & Sons, 1966. [Google Scholar]
  156. N. Côté, V. Koehl, M. Paquier: Ventriloquism on distance auditory cues, in: Acoustics 2012 Joint Congress, SFA and IOA, 2012. [Google Scholar]
  157. S. Moulin, R. Nicol, L. Gros: Auditory distance perception in real and virtual environments, in: Proceedings of the ACM Symposium on Applied Perception (SAP ‘13), Association for Computing Machinery (ACM), 2013. https://doi.org/10.1145/2492494.2501876. [Google Scholar]
  158. P. Zahorik: Asymmetric visual capture of virtual sound sources in the distance dimension. Frontiers in Neuroscience 16 (2022) 958577. [CrossRef] [PubMed] [Google Scholar]
  159. E. Hendrickx, M. Paquier, V. Koehl, J. Palacino: Ventriloquism effect with sound stimuli varying in both azimuth and elevation. Journal of the Acoustical Society of America 138 (2015) 3686–3697. [CrossRef] [PubMed] [Google Scholar]
  160. M. Rébillat, X. Boutillon, É. Corteel, B.F. Katz: Audio, visual, and audio-visual egocentric distance perception by moving subjects in virtual environments. ACM Transactions on Applied Perception (TAP) 9, 4 (2012) 1–17. [CrossRef] [Google Scholar]
  161. J. Blascovich, J. Loomis, A.C. Beall, K.R. Swinth, C.L. Hoyt, J.N. Bailenson: Immersive virtual environment technology as a methodological tool for social psychology. Psychological Inquiry 13, 2 (2002) 103–124. [CrossRef] [Google Scholar]
  162. G. Keidser, G. Naylor, D.S. Brungart, A. Caduff, J. Campos, S. Carlile, M.G. Carpenter, G. Grimm, V. Hohmann, I. Holube, S. Launer, T. Lunner, R. Mehra, F. Rapport, M. Slaney, K. Smeds: The quest for ecological validity in hearing science: what it is, why it matters, and how to advance it. Ear and Hearing 41, Suppl. 1 (2020) 5S–19S. [CrossRef] [PubMed] [Google Scholar]
  163. R. Larson, M. Csikszentmihalyi: The experience sampling method, in: Flow and the foundations of positive psychology, Springer, 2014. [Google Scholar]
  164. J. Moreira: Evaluer l’apport du binaural dans une application mobile audiovisuelle. Ph.D. thesis, CNAM, 2019. [Google Scholar]
  165. T. Robotham, O.S. Rummukainen, M. Kurz, M. Eckert, E.A.P. Habets: Comparing direct and indirect methods of audio quality evaluation in virtual reality scenes of varying complexity. IEEE Transactions on Visualization and Computer Graphics 28, 5 (2022) 2091–2101. [CrossRef] [PubMed] [Google Scholar]
  166. L. Turchet, M. Lagrange, C. Rottondi, G. Fazekas, N. Peters, J. Østergaard, F. Font, T. Bäckström, C. Fischione: The internet of sounds: convergent trends, insights, and future directions. IEEE Internet of Things Journal 10, 13 (2023) 11264–11292. [CrossRef] [Google Scholar]
  167. BirdNET: https://birdnet.cornell.edu. Accessed November 27, 2023. [Google Scholar]
  168. C.M. Wood, S. Kahl, P. Chaon, M.Z. Peery, H. Klinck: Survey coverage, recording duration and community composition affect observed species richness in passive acoustic surveys. Methods in Ecology and Evolution 12, 5 (2021) 885–896. [CrossRef] [Google Scholar]
  169. S. Kahl, C.M. Connor, M. Eibl, H. Klinck: BirdNET: a deep learning solution for avian diversity monitoring. Ecological Informatics 61 (2021) 101236. [CrossRef] [Google Scholar]
  170. BUGG: https://www.bugg.xyz. Accessed November 27, 2023. [Google Scholar]
  171. S.S. Sethi, N.S. Jones, B.D. Fulcher, L. Picinali, D.J. Clink, H. Klinck, C.D.L. Orme, P.H. Wrege, R.M. Ewers: Characterizing soundscapes across diverse ecosystems using a universal acoustic feature set. PNAS 117, 29 (2020) 17049–17055. [CrossRef] [PubMed] [Google Scholar]
  172. S.S. Sethi, R.M. Ewers, N.S. Jones, A. Signorelli, L. Picinali, C.D.L. Orme: SAFE Acoustics: an open-source, real-time eco-acoustic monitoring network in the tropical rainforests of Borneo. Methods in Ecology and Evolution 11 (2020) 1182–1185. [CrossRef] [Google Scholar]
  173. P. Lecomte, M. Melon, L. Simon: Spherical fraction beamforming. IEEE/ACM Transactions on Audio, Speech, and Language Processing 28 (2020) 2996–3009. https://doi.org/10.1109/TASLP.2020.3034516. [CrossRef] [Google Scholar]
  174. P. Lecomte, T. Blanchard, M. Melon, L. Simon, K. Hassan, R. Nicol: One eighth of a sphere microphone array, in: Forum Acusticum, Lyon, France, EAA, 2020, pp. 313–318. [Google Scholar]
  175. T. Blanchard, P. Lecomte, M. Melon, L. Simon, K. Hassan, R. Nicol: Experimental acoustic scene analysis using One-Eighth spherical fraction microphone array. Journal of the Acoustical Society of America 151, 1 (2022) 180–192. [CrossRef] [PubMed] [Google Scholar]
  176. R. Nicol, C. Plapous, L. Avenel, T. Le Du: Recording and analyzing infrasounds to monitor human activities in buildings, in: Forum Acusticum, Torino, Italy, EAA, 2023. [Google Scholar]
  177. T. Li, A.K. Sahu, A. Talwalkar, V. Smith: Federated learning: challenges, methods, and future directions. IEEE Signal Processing Magazine 37, 3 (2020) 50–60. [Google Scholar]
  178. A machine that lends an ear: https://hellofuture.orange.com/en/a-machine-that-lends-an-ear/. Accessed November 27, 2023. [Google Scholar]
  179. L. Delphin-Poulat, C. Plapous: Mean teacher with data augmentation for DCASE 2019 Task 4. Technical Report, DCASE Challenge, 2019. [Google Scholar]
  180. J.F. Gemmeke, D.P.W. Ellis, D. Freedman, A. Jansen, W. Lawrence, R.C. Moore, M. Plakal, M. Ritter: Audio set: an ontology and human-labeled dataset for audio events, in: 2017 IEEE international conference on acoustics, speech and signal processing (ICASSP), New Orleans, LA, USA, IEEE, 2017, pp. 776–780. https://doi.org/10.1109/ICASSP.2017.7952261. [CrossRef] [Google Scholar]
  181. H. Si-Mohammed, C. Haumont, A. Sanchez, C. Plapous, F. Bouchnak, J.-P. Javaudin, A. Lécuyer: Designing functional prototypes combining BCI and AR for home automation, in: Virtual Reality and Mixed Reality, EuroXR, Springer, Cham, 2022. https://doi.org/10.1007/978-3-031-16234-3_1. [Google Scholar]
  182. M. Schreuder, T. Rost, M. Tangermann: Listen, you are writing! Speeding up online spelling with a dynamic auditory BCI. Frontiers in Neuroscience 5 (2011) 112. [CrossRef] [PubMed] [Google Scholar]
  183. A. Jain, R. Bansal, A. Kumar, K.D. Singh: A comparative study of visual and auditory reaction times on the basis of gender and physical activity levels. International Journal of Applied and Basic Medical Research 5, 2 (2015) 124–127. [CrossRef] [PubMed] [Google Scholar]
  184. M. Schreuder, B. Blankertz, M. Tangermann: A new auditory multi-class brain-computer interface paradigm: Spatial hearing as an informative cue. PLoS One 5, 4 (2010) e9813. https://doi.org/10.1371/journal.pone.0009813. [CrossRef] [PubMed] [Google Scholar]
  185. A. Belitski, J. Farquhar, P. Desain: P300 audio-visual speller. Journal of Neural Engineering 8, 2 (2011) 025022. [CrossRef] [PubMed] [Google Scholar]
  186. L. Guého: Interface cerveau-machine basée sur des stimuli auditifs, Rapport de stage Master 2 Acoustique et Musicologie. Aix-Marseille Université, Orange Labs, 2022. [Google Scholar]
  187. S. Orts-Escolano, C. Rhemann, S. Fanello, W. Chang, A. Kowdle, Y. Degtyarev, D. Kim, P.L. Davidson, S. Khamis, M. Dou, V. Tankovich, C. Loop, Q. Cai, P.A. Chou, S. Mennicken, J. Valentin, V. Pradeep, S. Wang, S.B. Kang, P. Kohli, Y. Lutchyn, C. Keskin, S. Izadi: Holoportation: virtual 3D teleportation in real-time, in: Proceedings of the 29th Annual Symposium on User Interface Software and Technology, ACM, 2016, pp. 741–754. [CrossRef] [Google Scholar]
  188. B. Jones, Y. Zhang, P.N.Y. Wong, S. Rintel: Belonging there: VROOM-ing into the Uncanny Valley of XR telepresence, in: Proceedings of the ACM on Human-Computer Interaction, vol. 5, CSCW1, ACM, 2021. Article 59. https://doi.org/10.1145/3449133. [Google Scholar]
  189. KHRONOS: https://www.khronos.org. Accessed November 27, 2023. [Google Scholar]
  190. J. Choi, I. Jung, C.-Y. Kang: A brief review of sound energy harvesting. Nano Energy 56 (2019) 169–183. [CrossRef] [Google Scholar]
  191. S. Garrett: Thermoacoustic engines and refrigerators, in: CFA/VISHNO, 2016. [Google Scholar]