Open Access
Issue
Acta Acust.
Volume 8, 2024
Article Number 36
Number of page(s) 11
Section Hearing, Audiology and Psychoacoustics
DOI https://doi.org/10.1051/aacus/2024021
Published online 13 September 2024

© The Author(s), Published by EDP Sciences, 2024

Licence: Creative Commons. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

1 Introduction

Loudness is a basic psychoacoustic sensation describing the perceived magnitude of a sound (e.g., [1]). In recent decades, several studies have used behavioural reverse-correlation methods (e.g., [2, 3]) to estimate perceptual weights in loudness judgments. Typically, the effect of small variations of a stimulus feature (e.g., level) of a part of a stimulus (e.g., the first hundred milliseconds of a sound) on the overall loudness (also termed "global loudness" in the literature) of the sound is measured (e.g., [4]) using a correlational or regression analysis. Data from several studies showed that listeners apply non-uniform temporal weights when judging the overall loudness of a time-varying sound (e.g., [5–7]). These studies consistently showed that the first 100–300 ms receive a higher weight than later portions of the stimulus. The higher weight on the first segment means that, for example, a 5 dB increase in the level of this segment causes a stronger increase in global loudness than a 5 dB increase in the level of a later segment. This effect is commonly referred to as the primacy effect. It is observed at different sound levels and is also found in the presence of background noise [8]. The primacy effect can be modelled by a simple temporal decay function followed by a temporal integration of this function over the duration of each segment [9]. Oberfeld et al. [10] showed that this approach can also be used for stimuli in which different segment durations are combined.

For stimuli consisting of two temporal parts separated by a silent interval, a second primacy effect occurs after the silent interval if the duration of this interval is 350 ms or longer [11]. This second primacy effect is also found if a silent interval is introduced into only one of several spectral components of a stimulus [12]. Some studies also showed a recency effect, i.e., a higher weight assigned to the end of a signal, but this effect is weaker and much less consistently observed than the primacy effect (e.g., [6, 13]).

Several experiments showed that different spectral regions of a complex sound also differ in their contribution to global loudness (e.g., [14–17]). Unfortunately, the data on spectral weights are much more divergent than the data on temporal weights. Some studies reported higher weights on low frequencies, some higher weights on high frequencies, and others a U-shaped pattern with higher weights at both edges of the sound spectrum. A confounding factor may be that these studies often used different characteristics of the spectral components, for example spectral components with the same physical level (e.g., [15, 16, 18]), the same sensation level (e.g., [15]) or the same loudness [4]. Oberfeld et al. [4] showed that, for loudness-matched spectral components, the weight on low-frequency components is higher than that on high-frequency components. Their data further indicate that the temporal and the spectral weighting of the loudness of a time-varying sound are independent processes. While several aspects of the influence of temporal and spectral characteristics on loudness are accounted for by existing loudness models (see, e.g., [19–21] for a comparison), the temporal weighting in loudness perception is not accounted for by current loudness models [8].

The loudness of a sound also depends on its spatial position (i.e., the direction of sound incidence). Sivonen and Ellermeier [22] showed that, at equal loudness, the level of sounds from different source locations can vary by up to 10 dB. Differences in level at equal loudness for tonal stimulation at either 500 Hz, 1000 Hz or 5000 Hz were also reported by Yamauchi and Omoto [23]. Sivonen and Ellermeier [22] argued that this effect of location on loudness is mainly due to the head-related transfer function (HRTF).

An open question is how listeners judge the overall loudness of a spatially distributed sound field, especially if the sounds from the different spatial directions are matched in loudness, i.e., when differences in loudness due to the HRTF are accounted for. Would they assign the same or different weights to the various spatial directions when judging the overall loudness of a spatially distributed sound field? It is conceivable that the perception of sounds emanating from different spatial positions (or sound directions) is affected by cognitive or emotional aspects, such as a potential threat imposed by the source, e.g., "sounds which come from above a person are perceived as particularly dangerous and annoying" [24]. This might imply that the overall loudness of an ensemble of sounds arriving from different directions is dominated by the sound coming from above the listener. In general, for positions outside the visual field, e.g., above or behind the listener, the auditory system is the primary source of information. Thus, for these positions, the auditory information may be more important than for positions within the visual field, which might result in higher weights assigned to positions outside the field of view. Alternatively, sounds emanating from the spatial position that is in the focus of visual attention could exert a stronger influence on the overall loudness judgments than sounds emanating from positions outside the focus of attention. For example, if a listener fixates a point straight ahead, this might result in a higher weight assigned to sounds emanating from this frontal position. It is also possible that different listeners use different strategies, e.g., depending on their preferred sensory input. Taken together, there are several potential reasons why non-uniform spatial loudness weights can be expected, although it is difficult to predict which of the alternative weighting patterns listeners would show. In any case, there appear to be no previous studies investigating spatial loudness weights, and the objective of the present study was to fill this gap.

Using a method of behavioural reverse correlation (e.g., [2, 25]), the present study investigated whether non-uniform spatial weights are observed in loudness judgments, which would parallel the observation of non-uniform spectral and temporal loudness weights. Five different positions in space were considered: front, back, left, right and straight above the listener (top). The sounds were produced by five loudspeakers in an acoustically treated lab space. To make sure that the measured spatial weights are not an artefact of the above-mentioned differences in loudness due to the head-related transfer function, the sounds were first equalised in loudness before the spatial weights were measured. In addition, the localisation performance of the listeners was measured to investigate how accurately the sounds were perceived in terms of their position in space. Spatial weights were measured in two different conditions. First, in a simultaneous condition, sounds were presented simultaneously from all five spatial positions. In this condition, to enable the listeners to differentiate the sound sources (positions in space) and to minimise spectral masking between them, each of the five loudspeakers played a bandpass noise with a different centre frequency. Second, in a sequential condition, the sounds from the five loudspeakers were presented consecutively. In this condition, all loudspeakers produced noise with the same broad spectrum.

2 Methods

2.1 Apparatus

For the experiments, four horizontal positions (front, 0°; back, 180°; left, 270°; right, 90°) and one vertical position (top, elevation of 90°) were used as spatial positions. At each position, a self-powered loudspeaker (Genelec 6010) was mounted at a distance of 1.75 m from the centre of the setup, where the listeners were seated during the experiment. The setup was placed in a sound-attenuated room with dimensions of 5.40 m (length), 5.25 m (width), and 3.68 m (height). The reverberation time (T60, estimated as 3·T20) was about 0.5 s in the octave band around 63 Hz, about 0.25 s in the octave band around 125 Hz, and ranged from 0.13 s to 0.15 s in the octave bands at 250, 500, 1000, 2000, 4000, and 8000 Hz. All stimuli were generated digitally using MATLAB at a sampling rate of 44.1 kHz. To play back signals via five channels, the multi-channel audio tool SoundMexPro (www.soundmexpro.de) was used. The stimuli were converted from digital to analogue signals using a set of four 8-channel D/A converters (RME ADI-8 QS). Four converters were used because the five loudspeakers were part of a larger loudspeaker array containing a total of 32 loudspeakers. The experiment was controlled using MATLAB. During the experiment, the listeners received instructions about the specific task on a computer screen mounted directly above the front speaker. The listeners were instructed to keep their eyes on the computer screen and to sit as still as possible during the measurement in order to maintain the alignment within the setup. The listeners pressed a button corresponding to their decision on a handheld keypad.

2.2 Stimuli

The stimuli in both conditions were band-limited low-noise noises. Low-noise noise was used to reduce the intrinsic level fluctuations of the noise [26]. The low-noise noise bands used in this study were generated using the first method of Kohlrausch et al. [27]. A Gaussian white noise was generated and band-pass filtered with a fast Fourier transform (FFT) filter, i.e., the amplitudes of all frequency components outside the desired frequency range were set to zero. Afterwards, the noise was divided by its Hilbert envelope and again filtered using the FFT-based bandpass filtering described above. These last two steps (envelope division and refiltering) were iterated two times. For each stimulus presentation, a new noise sample was generated.
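As an illustration, the following is a minimal sketch of this generation procedure in Python with NumPy/SciPy (the study itself used MATLAB); the function name, the unit-RMS normalisation and the exact iteration count are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np
from scipy.signal import hilbert

def low_noise_noise(f_lo, f_hi, dur=0.3, fs=44100, n_iter=2, rng=None):
    """Sketch of the first low-noise-noise method of Kohlrausch et al. [27]:
    FFT band-pass filtering followed by repeated division by the Hilbert
    envelope.  Parameter names and defaults are illustrative assumptions."""
    rng = np.random.default_rng() if rng is None else rng
    n = int(round(dur * fs))
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)

    def bandpass(x):
        spec = np.fft.rfft(x)
        spec[~band] = 0.0                      # zero all components outside the band
        return np.fft.irfft(spec, n)

    x = bandpass(rng.standard_normal(n))       # band-limited Gaussian noise
    for _ in range(n_iter):
        x = x / np.abs(hilbert(x))             # flatten the envelope
        x = bandpass(x)                        # restore the desired spectrum
    return x / np.sqrt(np.mean(x ** 2))        # normalise to unit RMS

noise = low_noise_noise(1080.0, 1480.0)        # e.g., the band of BPN3 below
```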

Five band-pass noises (BPNs) were used in the simultaneous condition. All had a bandwidth of two Bark and differed in their centre frequencies; the spectral separation between adjacent bands was two Bark. In the following, the five bands are numbered, with BPN1 referring to the lowest band and BPN5 referring to the highest band. The five BPNs had the following lower/upper cut-off frequencies: 100/300 Hz (BPN1), 510/770 Hz (BPN2), 1080/1480 Hz (BPN3), 2000/2700 Hz (BPN4) and 3700/5300 Hz (BPN5). The spectral characteristics of the five band-pass noises were chosen to ensure that the sound arriving from each of the five spatial positions was clearly distinct and that spectral masking should not affect the results.
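To illustrate the Bark-based band layout, the cut-off frequencies listed above can be converted to the Bark scale; the sketch below uses the Zwicker and Terhardt approximation, which is an assumption, since the paper does not state which Hz-to-Bark conversion was used. Each band then spans roughly two Bark, and adjacent bands are separated by roughly two Bark.

```python
import numpy as np

def hz_to_bark(f_hz):
    """Zwicker & Terhardt (1980) approximation of the Bark scale; the authors
    may have used a slightly different formula or table."""
    f = np.asarray(f_hz, dtype=float)
    return 13.0 * np.arctan(0.00076 * f) + 3.5 * np.arctan((f / 7500.0) ** 2)

# Lower/upper cut-off frequencies of BPN1-BPN5 as given in the text.
edges_hz = [(100, 300), (510, 770), (1080, 1480), (2000, 2700), (3700, 5300)]
for i, (lo, hi) in enumerate(edges_hz, start=1):
    z_lo, z_hi = hz_to_bark(lo), hz_to_bark(hi)
    print(f"BPN{i}: {z_hi - z_lo:.1f} Bark wide ({z_lo:.1f}-{z_hi:.1f} Bark)")
```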

In the sequential condition, a noise with a lower cut-off frequency of 100 Hz and an upper cut-off frequency of 12000 Hz (i.e., a bandwidth of 22 Bark) was used. In the following, this noise is referred to as broadband noise and abbreviated as BBN.

2.3 Loudness matching

In a first step, the level at equal loudness was estimated for all stimuli, i.e., for all combinations of sound spectrum and spatial position. To this end, a loudness-matching experiment was used, with a reference sound presented from the frontal loudspeaker. For the simultaneous condition, this reference sound was the middle noise band (BPN3, see subsection 2.2), and for the sequential condition, it was the BBN. The reference had a level of 65 dB SPL. The levels at equal loudness were estimated with an adaptive two-interval, two-alternative forced-choice (2I-2AFC) procedure. The task of the listener was to decide which of the two sounds was the louder one. For each test stimulus, three different starting levels relative to the reference level were used: −10 dB, 0 dB and 10 dB. The level of the test signal was varied according to a one-up one-down rule [28], i.e., it was reduced in the next trial if the test signal was perceived as louder than the reference and increased if the reference was perceived as louder than the test signal. The step size was 8 dB at the beginning of a track, was reduced to 4 dB after the first upper reversal, and was finally set to 2 dB after the second upper reversal. With this step size, each track continued for another four reversals. The average of the levels at these last four reversals was taken as an estimate of the level at equal loudness. The arithmetic mean of the three estimates (i.e., of the three tracks with the three starting levels) for each stimulus and spatial position was taken as the final estimate of the level at equal loudness for this sound spectrum/spatial position combination. In the subsequent experiments, the sounds were presented at the individually loudness-matched levels from the different positions, and all listeners confirmed that the sounds then had equal loudness. The signal duration of all test and reference stimuli in the loudness-matching experiment was 300 ms, including 50-ms raised-cosine ramps at on- and offset. For both conditions, the level differences at equal loudness were analysed with repeated-measures analyses of variance (rmANOVAs) using a univariate approach with Huynh-Feldt (HF) correction of the degrees of freedom [29]. The HF correction factor ε is reported, and partial eta squared (ηp²) is reported as a measure of association strength. An α-level of .05 was used for all analyses.
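The adaptive track logic can be sketched as follows (a minimal Python sketch, not the actual experimental code, which was implemented in MATLAB); the listener's judgment enters through a caller-supplied callback, and the exact bookkeeping of reversals is an assumption based on the description above.

```python
import numpy as np

def run_matching_track(test_is_louder, start_offset_db, ref_level_db=65.0):
    """One adaptive 1-up/1-down 2AFC track as described above.
    `test_is_louder(level_db)` must return True when the test sound presented
    at `level_db` is judged louder than the 65-dB-SPL reference (in the real
    experiment this is the listener's button press)."""
    level = ref_level_db + start_offset_db
    step_sizes = [8.0, 4.0, 2.0]    # dB; shrink at the 1st and 2nd upper reversal
    step_idx = 0
    prev_dir = None
    final_reversal_levels = []
    while True:
        direction = -1 if test_is_louder(level) else +1      # 1-up/1-down rule
        if prev_dir is not None and direction != prev_dir:   # a reversal occurred
            if step_idx < 2:
                if prev_dir == +1:        # "upper" reversal (local maximum)
                    step_idx += 1         # 8 dB -> 4 dB -> 2 dB
            else:
                final_reversal_levels.append(level)
                if len(final_reversal_levels) == 4:
                    # track estimate: mean level of the last four reversals
                    return np.mean(final_reversal_levels)
        level += direction * step_sizes[step_idx]
        prev_dir = direction

# Final estimate per stimulus: mean over the three starting offsets, e.g.
# np.mean([run_matching_track(respond, s) for s in (-10.0, 0.0, 10.0)]),
# where `respond` is a hypothetical function standing in for the listener.
```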

2.4 Spatial weights in the simultaneous condition

Spatial weights were estimated for all combinations of the five BPNs and the five directions. All BPNs had a duration of 300 ms, including 50-ms raised-cosine ramps. In each trial, it was chosen randomly which BPN was played back from which loudspeaker. The level for each combination of noise band and spatial position was drawn independently and at random from either a normal distribution with a higher mean or a normal distribution with a lower mean. The means of the two distributions were 2 dB apart and both distributions had a standard deviation of σ = 2.5 dB (i.e., the average level of BPN3 in the frontal position was 64 dB SPL for the distribution with the lower mean and 66 dB SPL for the distribution with the higher mean). All stimuli (combinations of BPN and spatial position) had the same loudness when presented at the grand mean of the two distributions. This level was individually chosen on the basis of the preceding loudness-matching experiment. To avoid overly loud or soft segments, the range of possible sound pressure levels was limited to the mean of the distribution ± 3·σ. In an initial practice block, the listeners completed 150 trials. The listeners judged whether the sound presented on the current trial was, in terms of its overall loudness, louder or softer than the average of all preceding trials. In the first 100 of these 150 trials, trial-by-trial feedback was provided, indicating whether the response ("louder" or "softer") corresponded to the distribution the levels were drawn from (higher or lower mean). Following the practice block, nine experimental blocks were presented. Each block contained all 240 (5!·2) possible combinations of the assignment of the five noise bands to the five directions and of the two distributions, preceded by 10 orientation trials that were excluded from the further analysis. Thus, 250 trials were judged in each block. The different combinations were presented in randomised order. A multiple logistic regression was used to estimate the weights for each BPN from each spatial position (i.e., for a total of 25 combinations) from the trial-by-trial data. The binary responses ("louder" or "softer") served as the dependent variable. The predictors (i.e., the levels of the 25 combinations of BPN and spatial position) were entered simultaneously. A detailed description of the underlying decision model is given in Oberfeld and Plank [25]. The obtained regression coefficients were normalised so that the mean of the absolute values of the resulting normalised weights was 1.0. To estimate the predictive power of the fitted logistic regression models, the area under the receiver operating characteristic (ROC) curve was calculated (for details, see [5]). Values of 0.5 and 1.0 correspond to chance performance and perfect performance of the model, respectively. The normalised weights were analysed with repeated-measures analyses of variance (rmANOVAs) using a univariate approach with Huynh-Feldt (HF) correction of the degrees of freedom [29]. The HF correction factor ε is reported, and partial eta squared (ηp²) is reported as a measure of association strength. An α-level of .05 was used for all analyses.
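The weight-estimation step can be illustrated with a minimal Python sketch (statsmodels and scikit-learn stand in for whatever software was actually used); the coding of the trial-by-trial design matrix and the column ordering are illustrative assumptions.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

def estimate_weights(levels, responses):
    """`levels`: (n_trials x 25) matrix with the level of each BPN-by-position
    combination on every trial; `responses`: binary judgments (1 = "louder",
    0 = "softer").  Returns normalised weights (5 x 5, rows = BPN, columns =
    spatial position, an assumed ordering) and the area under the ROC curve
    of the fitted logistic regression model."""
    design = sm.add_constant(levels)                # intercept + 25 level predictors
    fit = sm.Logit(responses, design).fit(disp=0)   # multiple logistic regression
    coefs = fit.params[1:]                          # drop the intercept
    weights = coefs / np.mean(np.abs(coefs))        # mean of |weights| = 1.0
    auc = roc_auc_score(responses, fit.predict(design))
    return np.reshape(weights, (5, 5)), auc
```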

2.5 Spatial weights in the sequential condition

Spatial weights were measured for five BBN stimuli, each 100 ms long and presented consecutively, separated by 10-ms silent intervals. Each of the five BBN stimuli of a trial was presented from a different position in space. The experiment on spatial weights was essentially the same as in the simultaneous condition, but the temporal position now replaced the spectral characteristics of the BPNs as the second parameter. After a block of practice trials (equivalent to the practice block of the simultaneous condition), nine experimental blocks were presented, each containing all 240 combinations of direction, temporal position and distribution, preceded by 10 orientation trials that were excluded from the analysis. A multiple logistic regression was used to determine the weights for each temporal position from each position in space, i.e., 25 combinations in total. As in the simultaneous condition, the weights were normalised to a mean of 1.0. The statistical analysis was the same as for the simultaneous condition.

2.6 Sound localisation

In the third part of the experiment, the ability to localise the sounds from the five different spatial positions was quantified. To this end, a single sound was presented from one of the five positions in space and the listeners had to indicate the direction from which the sound was presented by pressing the corresponding key on a keypad. For both conditions, 25 stimuli were presented from each spatial position. As for the loudness matching, the sound duration was 300 ms, including 50-ms raised cosine ramps at on- and offset.

2.7 Listeners

All listeners had hearing thresholds ≤20 dB HL at the standard audiometric frequencies between 125 and 8000 Hz. Eight listeners (3 male, 5 female; age 23–36 years; M = 30 years; SD = 6 years) participated in the measurements for the simultaneous condition. One of these eight listeners did not finish the sound localisation task for the BPNs. Ten listeners (5 male, 5 female; age 21–37 years; M = 28 years; SD = 7 years) participated in the measurements for the sequential condition with broadband noise. Four of the ten listeners of the sequential condition also participated in the simultaneous condition.

The listeners took part voluntarily and were paid for their participation, unless they were scientific members of the work group. All of them provided written informed consent. Ethical approval was granted by the ethics committee of the medical faculty of the Otto von Guericke University (approval number 06/16).

3 Results and discussion

3.1 Simultaneous condition

Figure 1 shows the mean normalised weights for the five different spatial positions determined in the simultaneous condition (BPN, blue circles). The spatial weights are shown for each BPN separately (light blue circles) and averaged across the five BPNs (dark blue circles). Error bars indicate the interindividual standard errors. For better visibility, the results for BPN3 and BPN4 are slightly shifted against each other on the abscissa. The sensitivity in terms of d′ was calculated for each listener. It ranged from 0.65 to 1.09 (M = 0.94; SD = 0.14). For the eight fitted logistic regression models, the area under the ROC curve ranged between 0.77 and 0.93 (M = 0.86, SD = 0.05), indicating reasonably good predictive power [30]. For each BPN, the weights were hardly affected by the spatial position; the BPNs differed only in their average weight. An rmANOVA with the within-subject factors spatial position and BPN number did not show a significant effect of spatial position [F(4, 28) = 1.21, ε = 0.97, p = 0.331, ηp² = 0.15, HF]. The interaction BPN number × spatial position was also not significant [F(16, 112) = 1.45, ε = 0.46, p = 0.203, ηp² = 0.17, HF]. Thus, the spatial weights did not differ significantly between frequency bands. In sum, neither the position corresponding to the focus of visual attention (front), nor the positions outside the field of view representing a potential threat (top and back) received a higher weight than the remaining positions.

Figure 1

Mean normalised weight for five different spatial positions for the simultaneous condition. Results are shown for each BPN separately (light blue circles, the numbers in the symbols indicate the BPN number with 1 referring to the lowest and 5 referring to the highest centre frequency) and averaged across all BPNs (dark blue circles). Error bars indicate plus and minus one interindividual standard error. For a better visibility, the symbols of BPN3 and BPN4 are slightly shifted horizontally against each other. The grey dashed line indicates a uniform weight across all positions.

The BPN number (i.e., the centre frequency) had a significant influence on the estimated weight [F(4, 28) = 10.62, ε = 0.32, p = 0.007, ηp² = 0.60, HF]. The vertical shift of the curves connecting the data points for each BPN in Figure 1 reflects non-uniform spectral weights for the five BPNs. Figure 2 shows the mean normalised weight for each BPN (averaged across position in space). The higher weight on the lowest BPN (BPN1) compared to the middle BPN (BPN3) is in agreement with Oberfeld et al. [4]. They observed that, for three simultaneously presented equally loud bandpass noises with different centre frequencies, the lowest band received the highest weight when judging the overall loudness of the sound. The present data indicate that the weight of the highest band (BPN5) is also slightly higher than those of the three remaining bands (BPN2–BPN4) in the middle of the frequency range. This was not observed in Oberfeld et al. [4]. The reason for this discrepancy could be the different choice of bandpass noises in the two studies. Oberfeld et al. [4] used three bands with the following lower/upper cut-off frequencies: 200 Hz/510 Hz, 1080 Hz/1720 Hz, and 3150 Hz/5300 Hz. Thus, the upper cut-off frequency of the highest band was the same in their study and in the present study (see the Methods section for details of the BPN stimuli of the present study) but, apart from that similarity, the spectrum was divided differently by the stimuli of the two studies. The difference in the weight assigned to the highest frequency component could be an effect of the number of bands. Kortekaas et al. [16] found that individual spectral weights differ between a stimulus with fifteen and a stimulus with seven components (see, e.g., their Fig. 6). For the same number of spectral components (in their case pure tones) as in the present study (five) but a smaller maximum spectral range (2119 Hz), Leibold et al. [17] observed higher weights at both edges of the spectrum, similar to what is observed in the present study. In contrast to the present study, however, the highest weight was assigned to the highest component and not to the lowest component. This may be due to a different choice of component levels: in Leibold et al. [17], all components had the same sound pressure level, whereas in the present study they had the same loudness. Jesteadt et al. [31] observed a U-shaped pattern of spectral weights for broadband noise (~80–8000 Hz), similar to the pattern observed in the present study, although in their data the highest frequency band generally received a higher weight than the lowest band, particularly at lower noise levels.

Figure 2

Mean normalised weights for the five BPN with BPN1 referring to the lowest and BPN5 referring to the highest centre frequency. Error bars indicate plus and minus one interindividual standard error.

3.2 Sequential condition

Figure 3 shows the spatial weights for each temporal segment separately (light orange squares) and averaged across all temporal segments (dark orange squares). The data representation is the same as in Figure 1. The sensitivity in terms of d′ was calculated for each listener. It ranged from 0.73 to 1.23 (M = 0.99; SD = 0.15), indicating a performance comparable to that of the listeners in the simultaneous condition. The area under the ROC curve for the ten fitted logistic regression models ranged from 0.79 to 0.91 (M = 0.85; SD = 0.04), indicating reasonably good predictive power [30]. The weights as a function of spatial position are more or less vertically shifted versions of each other for the different temporal segments, indicating higher weights for earlier temporal segments. The curves connecting the data points for each temporal segment are very close to horizontal straight lines, indicating largely uniform spatial weights. An rmANOVA with the within-subject factors spatial position and temporal segment did not show a significant effect of spatial position on the normalised weights [F(4, 36) = 1.41, ε = 0.80, p = 0.260, ηp² = 0.14, HF]. The interaction between spatial position and temporal segment was also not significant [F(16, 144) = 0.61, ε = 0.95, p = 0.867, ηp² = 0.06, HF], indicating that the spatial weighting function did not significantly depend on the temporal position of the segment within the longer stimulus. Thus, as in the simultaneous condition, and somewhat unexpectedly, largely uniform spatial weights were observed, i.e., there is no evidence for a "dominance" of either the front position (focus of visual attention) or the top and back positions (potentially threatening sound sources located outside the field of view) in global loudness judgments. Taken together with the results from the simultaneous condition discussed above, the present data thus argue against the expected non-uniform spatial weights in global loudness judgments.

Figure 3

Same as Figure 1, but now showing the mean normalised weights and standard errors for five different temporal segments separately (light orange squares, the numbers in the symbols indicate the segment number) and averaged across all temporal segments (dark orange squares).

The failure to observe higher weights for sounds arriving from directly above the listeners in the present study is compatible with the study by Fastl et al. [24], who concluded from their results that "the hypothesis that sounds presented from above a subject are perceived as louder and more annoying" is not supported by their data. The present study extends their conclusion in the sense that, even in the presence of equally loud sounds from other locations, a sound from straight above or behind the listener is not weighted differently. However, the slightly higher loudness of equally intense sounds arriving from the front (compared to from above the listener) reported by Fastl et al. [24] is not reflected in a higher loudness weight for the front position in our data.

The temporal segment had a significant influence on the estimated weight [F(4, 36) = 21.16, ε = 0.40, p < 0.001, ηp² = 0.71, HF]. Figure 4 shows the mean normalised weights for the different temporal segments (averaged across spatial position). As expected, a primacy effect is observed (see Fig. 4), compatible with a large number of previous studies (e.g., [5, 6, 13, 32]). Oberfeld et al. [10] proposed an exponential decay function that describes the temporal course of the weight w(t):

w(t) = 1 + (Dr − 1) · exp(−t/τ),    (1)

Figure 4

Mean normalised weights for the five temporal segments of the BBN (dark orange squares). The data points are plotted relative to the onset of the stimulus. Error bars indicate plus and minus one interindividual standard error. The green line indicates the weights predicted by a function proposed in Oberfeld et al. [9] using the parameters of a fit to eleven data sets of the literature reported in Oberfeld et al. [10].

where Dr is the "dynamic range" of the weights (i.e., the weight at sound onset relative to the asymptotic weight) and τ is the corresponding time constant. Oberfeld et al. [10] fitted the parameters of this decay function to the results of eleven experiments from eight different studies on temporal loudness weights. The resulting values were τ = 272 ms and Dr = 4.2. The weight that is assigned to a certain temporal segment is given by the integral of the decay function over the duration of this segment. The green line in Figure 4 shows the temporal weights that were calculated for the temporal segments of this experiment using the above-mentioned exponential decay function with the parameters reported in Oberfeld et al. [10]. The primacy effect found in this study is well predicted by the weighting function derived from the eleven experiments, i.e., the present data agree with those reported in the literature.
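To illustrate the segment-weight computation, the sketch below integrates the decay function (in the form reconstructed in Eq. (1) above) over the five 100-ms segments separated by 10-ms gaps, using the literature-based parameters τ = 272 ms and Dr = 4.2. This is an illustrative calculation, not the authors' code, and it inherits the assumed form of Eq. (1).

```python
import numpy as np

# Decay-function parameters fitted to eleven literature data sets [10].
tau, dr = 272.0, 4.2                 # time constant (ms), "dynamic range"
onsets = np.arange(5) * 110.0        # segment onsets: 100-ms segments, 10-ms gaps
offsets = onsets + 100.0

def segment_weight(t1, t2):
    # Closed-form integral of w(t) = 1 + (Dr - 1) * exp(-t / tau) from t1 to t2.
    return (t2 - t1) + (dr - 1.0) * tau * (np.exp(-t1 / tau) - np.exp(-t2 / tau))

raw = segment_weight(onsets, offsets)
weights = raw / raw.mean()           # normalise to a mean weight of 1.0
print(np.round(weights, 2))          # decreasing weights, i.e. a primacy effect
```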

In this context, it also seems relevant to consider studies investigating the weighting of spatial cues for lateralisation or localisation. These studies investigated the role of binaural cues (interaural time differences, ITDs, and/or interaural level differences, ILDs) in the lateralisation of a sound (e.g., [33, 34]). For periodic trains of short stimuli, Brown and Stecker [33] showed that the ITD or ILD at the beginning of the stimulus determined the lateralisation, reflecting an onset dominance. This is likely to be associated with the precedence effect [35]. Amplitude modulation can change this pattern slightly, and aperiodic pulse trains can reduce the onset dominance [36]. In any case, it is important to note that the primacy effect in lateralisation occurs on a time scale of a few milliseconds, while the primacy effect in loudness judgments is observed on a time scale of several hundred milliseconds [9]. Using a combination of narrowband noises with different centre frequencies, Ahrens et al. [37] showed, among other findings (and as expected from theories of binaural processing), that ITD cues receive a higher spectral weight at low frequencies and ILD cues at high frequencies.

In contrast to the present study, the above-mentioned studies used synthetic interaural parameters and, more importantly, asked for lateralisation judgments. The present study investigated the role of the spatial position in loudness judgments; the listeners were not asked where they localised the sound. Moreover, the results of the localisation task for the BBN (see the following subsection) indicate that the listeners perceived a sequence of five different spatial positions rather than a single stimulus localised at the spatial position of the first segment.

In summary, the temporal weights (shown here) and the spectral weights (shown in the previous subsection) derived from the data largely agree with those reported in the literature. The data showed no interaction between spatial and temporal or between spatial and spectral weights, i.e., the temporal and spectral weights were unaffected by the spatial position. This parallels results suggesting that the temporal weighting is independent of the spectral weighting [4, 12].

The fact that the same uniform pattern of spatial weights was observed both for simultaneously and sequentially presented spatial components, and both for narrowband noise bursts and broadband noise bursts, makes it unlikely that the observed pattern is very specific to the stimuli presented in the experiment of this study. In general, the small individual differences in the weights do not support the hypothesis that different listeners use different strategies, e.g., depending on their preferred sensory input.

At present, explanations for the absence of a non-uniform spatial weighting pattern must remain speculative. One potential explanation is that different mechanisms, each of which might produce non-uniform weights but with opposing weighting patterns (the front position favoured by visual attention; the top or back positions favoured because they lie outside the field of view and are thus potentially particularly relevant), neutralised each other, resulting in largely uniform spatial weights. An alternative explanation is that, as discussed above, the human auditory system is capable of simultaneously monitoring all directions in space, unlike the human visual system. Against this background, distributing auditory attention evenly across all spatial directions, which would result in uniform spatial weights, appears to be a reasonable strategy.

3.3 Loudness matching

Figure 5 shows the levels at equal loudness for the five positions in space. The levels are expressed as differences between the level of the sound from a certain position in space (as indicated on the abscissa) and the level of the equally loud reference sound (BPN3 for the simultaneous condition, BBN for the sequential condition), which was presented from the loudspeaker in front of the listener. Mean level differences across all listeners are shown as symbols; error bars indicate the interindividual standard errors. Data for the BPNs are shown as light blue circles (the numbers inside the symbols indicate the BPN number) and data for the BBN are shown as orange squares. For better visibility, the data points for the different stimuli are slightly shifted horizontally against each other. Note that different sets of listeners participated in the loudness-matching experiment for the BPNs and for the BBN (see Methods). The grey dashed line indicates a level difference of 0 dB, i.e., equal loudness at the same level of test and reference sound.

Figure 5

Level difference at equal loudness between the level of the sound from one of the five positions in space (indicated on the abscissa) and the level of the reference sound (BPN3 for the BPNs; BBN for the BBN) presented from the frontal loudspeaker. Mean level differences (symbols) and the interindividual standard errors (error bars) are shown. The results for the BPNs are indicated with light blue circles; the number inside each circle gives the BPN number, with 1 referring to the lowest and 5 referring to the highest centre frequency. The data for the sequential condition with the BBN are shown with orange squares. The data points for the different stimuli are shifted horizontally against each other for better visibility. The dashed line indicates that test and reference sound had the same loudness at the same level.

For the BBN, the level difference at equal loudness was slightly negative (about −1.5 dB) when the sound was presented from the right or the left side. It was slightly positive when the sound was presented from the back (about 1.5 dB) or the top (about 2.5 dB). An rmANOVA with the factor spatial position showed that the effect of this factor on the level difference at equal loudness was significant [F(4, 36) = 41.53, ε = 0.64, p < 0.001, ηp² = 0.82, HF].

In general, the small level differences at equal loudness are in agreement with data from the literature. Using pink noise, Remmers and Prante [38] measured levels at equal loudness for different horizontal positions with a resolution of 60 degrees. The reference sound was a diffuse sound with an A-weighted sound pressure level of either 75.4 dB or 85.3 dB. For both reference levels, the level at equal loudness hardly differed between the positions, except for the position behind the listener (back), where a level about 3 dB higher was required. Sivonen [39] also used pink noise to measure loudness for different directions in an anechoic environment. The reference sound was presented from the frontal position. Sivonen [39] showed individual data for positions from front to back in steps of 30 degrees. Large individual differences were observed. For four of the five listeners, the sound from the left had to be attenuated by about 0.5–2 dB to be perceived as equally loud as the frontal reference, and the sound from the back had to be amplified by 1–2 dB to obtain the same loudness as the reference. For the remaining listener, the shape of the level-direction curve was similar to those of the others but shifted to a lower level (if the level difference is expressed as in the present study and not as in Sivonen [39]). Both studies [38, 39] only measured loudness in the horizontal plane.

Fastl et al. [24] measured the loudness of three environmental sounds (railway noise, road traffic noise, aircraft noise) presented either from a frontal loudspeaker or from a loudspeaker above the head of the listener. Using magnitude estimation or cross-modality matching with line length, they showed that loudness tended to be the same or slightly higher for frontal presentation of the sound. They also used categorical scaling, but apparently the resolution of this procedure was too coarse to differentiate between the loudness for these two directions. In the present study, a positive level difference was measured between the equally loud test signal from above and the reference signal presented from the frontal loudspeaker. Thus, when presented at the same level, the sound from above was perceived as softer than that from the frontal reference, in agreement with the results of Fastl et al. [24].

Given the difference in loudness for equal-intensity sounds from different sound source locations, it is possible that the results of the weighting experiment would have been different if the sounds had been presented at the same intensity. Data on temporal loudness weights show a strong effect of a difference in mean level (and, thus, loudness) between stimulus components, in the sense that components with a higher mean level, i.e., louder components, tend to be assigned much higher weights than components with a lower mean level, i.e., softer components [40]. Assuming that a similar loudness dominance holds for spatial weights, a sound from above should, at the same intensity, receive a lower weight than a sound presented from the front, and a sound from the right or left a slightly higher weight than that from the front.

The data for the BPNs show larger level differences between test and reference sound at equal loudness than the data for the BBN. This is mainly due to the choice of the reference, i.e., BPN3 played back from the loudspeaker in front of the listener. As expected, the level difference for the test stimulus with the same spectrum (BPN3) and spatial position (front) was very close to zero. As for the BBN, the level difference was slightly negative for the left and right positions. For the back and top positions, it was again very close to zero. While the mean interindividual standard deviation for BPN3 was, as for the BBN, rather small (1.2 dB for the BBN and 1.6 dB for BPN3), it reached higher values for the other BPNs (5.0, 3.0, 3.5 and 5.0 dB for BPN1, BPN2, BPN4 and BPN5, respectively). This indicates that the task was easier for the listeners when they had to compare identical sounds and became more difficult as the spectral separation between the test BPN and the reference BPN increased.

An rmANOVA with the factors spatial position and BPN number showed a significant effect of the BPN [F(4, 28) = 14.66, ε = 0.88, p < 0.001, ηp² = 0.68, HF], of the spatial position [F(4, 28) = 14.05, ε = 0.45, p < 0.001, ηp² = 0.67, HF], and of the interaction of the two factors [F(16, 112) = 3.12, ε = 0.67, p = 0.002, ηp² = 0.31, HF].

The level differences at equal loudness for the two bands above the reference band, BPN4 and BPN5, were always negative, including when the sound was presented from the frontal loudspeaker. This is presumably due to the shape of the HRTF. Using a maximum-length-sequence technique, Møller et al. [41] measured HRTFs for different positions of the sound source in an anechoic chamber. For a frontal position, their HRTFs showed that the frequency regions corresponding to the spectra of BPN4 and BPN5 in the present study were amplified by up to 10 dB compared to the frequency region of BPN3. A small amplification of about 2 dB was also found for the frequency region of BPN2, but not for that of BPN1. Obviously, the transfer function of the left ear (which was measured in Møller et al. [41]) changes with direction, and not all details of the HRTFs can be directly linked to the level differences at equal loudness measured in the present study. Since loudness is determined by the information at both ears, the HRTFs of both ears and their interaction need to be considered, especially for lateral positions.

The hypothesis that the HRTF plays a key role in interpreting levels at equal loudness for different positions in space was already put forward by Sivonen and Ellermeier [22], who found that the HRTF explained most of their loudness data. They measured levels at equal loudness of third-octave noises centred at either 400 Hz, 1000 Hz, or 5000 Hz for different positions in space. The reference sound had the same spectrum as the test sound and was always presented from the frontal loudspeaker. The reference level was either 45 dB or 65 dB. For both reference levels, they observed that a low-frequency sound with a centre frequency of 400 Hz presented from the left had a level about 3 dB lower than the equally loud frontal reference. For a sound from above or behind the listener, the level had to be slightly higher (1 dB) than that of the frontal reference at a reference level of 65 dB; this was not observed for the lower reference level. A comparable result was observed for the lowest three bands (BPN1–BPN3) of the present study, when considering that, in contrast to Sivonen and Ellermeier [22], the reference spectrum was not matched to that of the test signal. In their study, the directional effects on the levels at equal loudness appeared to be larger for the other two centre frequencies. Such a clear trend is not found in the present study.

3.4 Localisation

Tables 1 and 2 show the results of the localisation experiment for the BPNs and the BBN, respectively. The results are expressed as percentages of all responses for presentations from the respective spatial position. In Table 1, the results for all BPNs are combined, i.e., they are not analysed separately for each noise band.

Table 1

Average confusion matrix observed in the localisation experiment for BPNs, averaged across the five BPNs. Shown are the percentages of trials on which a sound presented from the position specified by the column were perceived from the position specified by the row.

Table 2

Average confusion matrix observed in the localisation experiment for the BBN. Data representation is the same as in Table 1.

For the BPNs, the listeners were able to assign the sounds to the correct position in 100.0 % of the cases when they were presented from the left or the right side. When BPNs were presented from the front, the back or the top position, the listeners assigned the sounds to the correct position in only 36.0 %, 37.1 % and 33.7 % of the cases, respectively. These values roughly correspond to chance performance for three positions in space (33.3 %). This result is not unexpected, because interaural differences cannot be used to differentiate between these three positions, at least if the ears are not positioned at markedly different heights on the head, as, e.g., in the American barn owl [42, 43] (which would facilitate the differentiation of the top position from the frontal and back positions). Instead, such sounds are likely localised by means of direction-specific spectral filtering due to the shape of the outer ear (e.g., [44]). These spectral changes may not be very salient for the restricted frequency ranges excited by the BPNs. This hypothesis is supported by the results reported below for the BBN, which covers a considerably larger spectral range than the BPNs.

Given the poor localisation of the positions in the median plane, the spatial weights for these positions in the simultaneous condition (where the BPNs were used) have to be interpreted with caution. It is possible that the weights for the frontal, top and back positions derived from the simultaneous condition reflect an average weight for these three positions. Under this assumption, the data shown in Figure 1 suggest that this average weight does not differ from the weights assigned to either side of the listener (left and right).

The results of the localisation experiment with the BBN are shown in Table 2. In contrast to the results shown in Table 1 for the BPNs, the listeners had hardly any problem distinguishing the five spatial positions. When sounds were presented from the front or from the right side, they were assigned to the correct position in 100.0 % of the cases. When sounds were presented from the left, they were assigned to the top in only 0.4 % of the cases, corresponding to a single response of one listener. When sounds were presented from the back or the top, they were assigned to the correct positions in 97.2 % and 92.0 % of the cases, respectively.

For the spatial weight data shown in Figure 3, this result indicates that the listeners were well aware of the different spatial positions. Thus, in contrast to the simultaneous condition with the BPN stimuli, all weights obtained in the sequential condition with the BBN can be unequivocally assigned to the five positions. For the BBN, all three positions in the median plane had the same weight as the left and right positions. Thus, even an average weight for the positions in the median plane (which may have been used in the simultaneous condition with the BPNs) would be the same as the weight for the left and right positions of a sound source.

4 Summary and conclusions

Spatial weights were measured for judgments of the overall loudness of sound fields consisting of sounds emanating from five different spatial positions (front, left, right, back, and top) in two conditions in a sound-treated room with low reverberation. In a simultaneous condition, spectrally non-overlapping bandpass noises (BPNs) were presented simultaneously from the five positions, while in a sequential condition temporally non-overlapping broadband noise (BBN) bursts were presented from the five spatial positions. Before the main task (determining spatial weights), the levels at equal loudness were determined individually for each combination of stimulus spectrum (5 BPNs, BBN) and spatial position. The dependence of these level differences on the spatial position was small compared to the differences in loudness between the different centre frequencies of the BPNs. It is likely that most aspects of the loudness-matching data can be explained on the basis of the head-related transfer function. The individual levels at equal loudness were used in the main task to ensure that each stimulus had the same loudness. The measurement of the spatial weights showed no significant effect of spatial position. Weights were similar across the five positions, irrespective of presentation mode (simultaneous or sequential) or stimulus type (narrowband or broadband noise). This makes it unlikely that the observed pattern of results is very specific to the stimuli presented in the experiments. It also shows that the weighting across space is independent of the weighting over time or across frequency. Thus, according to the present data, in global loudness judgments for spatially distributed sound fields, sounds emanating from positions well outside the field of view (back) or positions that may be perceived as a potential threat (top) were weighted similarly to sounds arriving from the front or from the left or right side. It cannot be ruled out completely that the uniform weights are the result of a combination of two non-uniform, but opposing, weighting patterns that neutralised each other. A simpler and more straightforward interpretation of the data is that the human auditory system is capable of simultaneously monitoring all directions in space, unlike the visual system. In this case, an even distribution of auditory attention across all spatial directions would be a reasonable strategy.

Acknowledgments

The project was supported by Deutsche Forschungsgemeinschaft (DFG; VE 373/2-1 and OB346/6-1). Special thanks go to all the listeners who participated in the experiment.

Conflicts of interest

The authors declare no conflicts of interest.

Data availability statement

Data are available on request from the authors.

References

  1. ISO 532-1:2017 Acoustics – Methods for calculating loudness – Part 1: Zwicker method.
  2. A.J. Ahumada, J. Lovell: Stimulus features in signal detection. Journal of the Acoustical Society of America 49 (1971) 1751–1756.
  3. B.G. Berg: Analysis of weights in multiple observation tasks. Journal of the Acoustical Society of America 86 (1989) 1743–1746.
  4. D. Oberfeld, W. Heeren, J. Rennies, J.L. Verhey: Spectro-temporal weighting of loudness. PLOS One 7 (2012) e50184.
  5. K. Dittrich, D. Oberfeld: A comparison of the temporal weighting of annoyance and loudness. Journal of the Acoustical Society of America 126 (2009) 3168–3178.
  6. W. Ellermeier, S. Schrödl: Temporal weights in loudness summation, in: C. Bonnet (Ed.), Fechner Day 2000. Proceedings of the 16th Annual Meeting of the International Society for Psychophysics, Université Louis Pasteur, Strasbourg, 2000, pp. 169–173.
  7. J. Rennies, J.L. Verhey: Temporal weighting in loudness of broadband and narrowband signals. Journal of the Acoustical Society of America 126 (2009) 951–954.
  8. A. Fischenich, J. Hots, J. Verhey, D. Oberfeld: Temporal weights in loudness: investigation of the effects of background noise and sound level. PLOS One 14 (2019) e0223075.
  9. D. Oberfeld, L. Jung, J. Hots, J.L. Verhey: Evaluation of a model of temporal weights in loudness judgments. Journal of the Acoustical Society of America 144 (2018a) EL119–EL124.
  10. D. Oberfeld, J. Hots, J.L. Verhey: Temporal weights in the perception of sound intensity: effects of sound duration and number of temporal segments. Journal of the Acoustical Society of America 143 (2018b) 943–953.
  11. A. Fischenich, J. Hots, J.L. Verhey, D. Oberfeld: The effect of silent gaps on temporal weights in loudness judgments. Hearing Research 395 (2020) 108028.
  12. A. Fischenich, J. Hots, J. Verhey, D. Oberfeld: Temporal loudness weights are frequency specific. Frontiers in Psychology 12 (2021) 588571.
  13. B. Pedersen, W. Ellermeier: Temporal weights in the level discrimination of time-varying sounds. Journal of the Acoustical Society of America 123 (2008) 963–972.
  14. K.A. Doherty, R.A. Lutfi: Spectral weights for overall level discrimination in listeners with sensorineural hearing loss. Journal of the Acoustical Society of America 99 (1996) 1053–1058.
  15. W. Jesteadt, D.L. Valente, S. Joshi, K.K. Schmid: Perceptual weights for loudness judgments of six-tone complexes. Journal of the Acoustical Society of America 136 (2014) 728–735.
  16. R. Kortekaas, S. Buus, M. Florentine: Perceptual weights in auditory level discrimination. Journal of the Acoustical Society of America 113 (2003) 3306–3322.
  17. L.J. Leibold, H. Tan, S. Khaddam, W. Jesteadt: Contributions of individual components to the overall loudness of a multitone complex. Journal of the Acoustical Society of America 121 (2007) 2822–2831.
  18. K.A. Doherty, R.A. Lutfi: Level discrimination of single tones in a multitone complex by normal-hearing and hearing-impaired listeners. Journal of the Acoustical Society of America 105 (1999) 1831–1840.
  19. J. Chalupper, H. Fastl: Dynamic loudness model (DLM) for normal and hearing-impaired listeners. Acta Acustica united with Acustica 88 (2002) 378–386.
  20. B. Glasberg, B.C.J. Moore: A model of loudness applicable to time-varying sounds. Journal of the Audio Engineering Society 50 (2002) 331–341.
  21. J. Rennies, J.L. Verhey, H. Fastl: Comparison of loudness models for time-varying sounds. Acta Acustica united with Acustica 96 (2010) 383–396.
  22. V.P. Sivonen, W. Ellermeier: Directional loudness in an anechoic sound field, head-related transfer functions, and binaural summation. Journal of the Acoustical Society of America 119 (2006) 2965–2980.
  23. G. Yamauchi, A. Omoto: Measuring variation of loudness in consideration of arrival direction and distribution width of sound. Acoustical Science and Technology 39 (2018) 406–416.
  24. H. Fastl, S. Kuwano, S. Namba: Railway bonus and aircraft malus for different directions of the sound source?, in: Proceedings of Internoise, Rio de Janeiro, Brazil, 2005.
  25. D. Oberfeld, T. Plank: The temporal weighting of loudness: effects of the level profile. Attention, Perception, & Psychophysics 73 (2011) 189–208.
  26. W.M. Hartmann, J. Pumplin: Noise power fluctuations and the masking of sine signals. Journal of the Acoustical Society of America 83 (1988) 2277–2289.
  27. A. Kohlrausch, R. Fassel, M. van der Heijden, R. Kortekaas, S. van de Par, A.J. Oxenham, D. Püschel: Detection of tones in low-noise noise: further evidence for the role of envelope fluctuations. Acustica united with Acta Acustica 83 (1997) 659–669.
  28. H. Levitt: Transformed up-down methods in psychoacoustics. Journal of the Acoustical Society of America 49 (1971) 467–477.
  29. H. Huynh, L.S. Feldt: Estimation of the Box correction for degrees of freedom from sample data in randomized block and split-plot designs. Journal of Educational Statistics 1 (1976) 69–82.
  30. D.W. Hosmer Jr., S. Lemeshow: Applied logistic regression, 2nd ed., John Wiley & Sons, New York, 2000.
  31. W. Jesteadt, S.M. Walker, O.A. Ogun, B. Ohlrich, K.E. Brunette, M. Wroblewski, K.K. Schmid: Relative contributions of specific frequency bands to the loudness of broadband sounds. Journal of the Acoustical Society of America 142 (2017) 1597–1610.
  32. S. Namba, S. Kuwano, T. Kato: Loudness of sound with intensity increment. Japanese Psychological Research 18 (1976) 63–72.
  33. A.D. Brown, G.C. Stecker: Temporal weighting of interaural time and level differences in high-rate click trains. Journal of the Acoustical Society of America 128 (2010) 332–341.
  34. G.C. Stecker, J.D. Ostreicher, A.D. Brown: Temporal weighting functions for interaural time and level differences. III. Temporal weighting for lateral position judgments. Journal of the Acoustical Society of America 134 (2013) 1242–1252.
  35. J. Blauert: Spatial hearing: the psychophysics of human sound localization, revised ed., MIT Press, Cambridge, 1997.
  36. G.C. Stecker: Temporal weighting functions for interaural time and level differences. V. Modulated noise carriers. Journal of the Acoustical Society of America 143 (2018) 686–695.
  37. A. Ahrens, S.N. Joshi, B. Epp: Perceptual weighting of binaural lateralization cues across frequency bands. Journal of the Association for Research in Otolaryngology 21 (2020) 485–496.
  38. H. Remmers, H. Prante: Untersuchung zur Richtungsabhängigkeit von breitbandigen Schallen (Investigation of the directional dependence of broadband sounds), in: Proceedings of DAGA '91, Bochum, Germany, 1991, pp. 537–540. ISBN 3-923835-09-4.
  39. V.P. Sivonen: Directional loudness and binaural summation for wideband and reverberant sounds. Journal of the Acoustical Society of America 121 (2007) 2852–2861.
  40. A. Fischenich, J. Hots, J.L. Verhey, J. Guldan, D. Oberfeld: Temporal loudness weights: primacy effects, loudness dominance and their interaction. PLOS One 16 (2021) e0261001.
  41. H. Møller, M.F. Sørensen, D. Hammershøi, C.B. Jensen: Head-related transfer functions of human subjects. Journal of the Audio Engineering Society 43 (1995) 300–321.
  42. E.I. Knudsen, M. Konishi: Mechanisms of sound localization in the barn owl (Tyto alba). Journal of Comparative Physiology 133 (1979) 13–21.
  43. M. Krings, L. Rosskamp, H. Wagner: Development of ear asymmetry in the American barn owl (Tyto furcata pratincola). Zoology 126 (2018) 82–88.
  44. B. Zonooz, E. Arani, K.P. Körding, P.A.T.R. Aalbers, T. Celikel, A.J. van Opstal: Spectral weighting underlies perceived sound elevation. Scientific Reports 9 (2019) 1642.

Cite this article as: Hots J., Oberfeld D. & Verhey J.L. 2024. Spatial weights in loudness judgements. Acta Acustica, 8, 36.
