{"title": "Optimal models of sound localization by barn owls", "book": "Advances in Neural Information Processing Systems", "page_first": 449, "page_last": 456, "abstract": "Sound localization by barn owls is commonly modeled as a matching procedure where localization cues derived from auditory inputs are compared to stored templates. While the matching models can explain properties of neural responses, no model explains how the owl resolves spatial ambiguity in the localization cues to produce accurate localization near the center of gaze. Here, we examine two models for the barn owl's sound localization behavior. First, we consider a maximum likelihood estimator in order to further evaluate the cue matching model. Second, we consider a maximum a posteriori estimator to test if a Bayesian model with a prior that emphasizes directions near the center of gaze can reproduce the owl's localization behavior. We show that the maximum likelihood estimator cannot reproduce the owl's behavior, while the maximum a posteriori estimator is able to match the behavior. This result suggests that the standard cue matching model will not be sufficient to explain sound localization behavior in the barn owl. The Bayesian model provides a new framework for analyzing sound localization in the barn owl and leads to predictions about the owl's localization behavior.", "full_text": "Optimal models of sound localization by barn owls\n\nBrian J. Fischer\nDivision of Biology\nCalifornia Institute of Technology\nPasadena, CA\nfischerb@caltech.edu\n\nAbstract\n\nSound localization by barn owls is commonly modeled as a matching procedure where localization cues derived from auditory inputs are compared to stored templates. While the matching models can explain properties of neural responses, no model explains how the owl resolves spatial ambiguity in the localization cues to produce accurate localization for sources near the center of gaze. 
Here, I examine two models for the barn owl\u2019s sound localization behavior. First, I consider a maximum likelihood estimator in order to further evaluate the cue matching model. Second, I consider a maximum a posteriori estimator to test whether a Bayesian model with a prior that emphasizes directions near the center of gaze can reproduce the owl\u2019s localization behavior. I show that the maximum likelihood estimator cannot reproduce the owl\u2019s behavior, while the maximum a posteriori estimator is able to match the behavior. This result suggests that the standard cue matching model will not be sufficient to explain sound localization behavior in the barn owl. The Bayesian model provides a new framework for analyzing sound localization in the barn owl and leads to predictions about the owl\u2019s localization behavior.\n\n1 Introduction\n\nBarn owls, the champions of sound localization, show systematic errors when localizing sounds. Owls localize broadband noise signals with great accuracy for source directions near the center of gaze [1]. However, localization errors increase as source directions move to the periphery, consistent with an underestimate of the source direction [1]. Behavioral experiments show that the barn owl uses the interaural time difference (ITD) for localization in the horizontal dimension and the interaural level difference (ILD) for localization in the vertical dimension [2]. Direct measurements of the sounds received at the ears for sources at different locations in space show that disparate directions are associated with very similar localization cues. Specifically, there is a similarity between ILD and ITD cues for directions near the center of gaze and directions with eccentric elevations on the vertical plane. 
How does the owl resolve this ambiguity in the localization cues to produce accurate localization for sound sources near the center of gaze?\n\nTheories regarding the use of localization cues by the barn owl are drawn from the extensive knowledge of processing in the barn owl\u2019s auditory system. Neurophysiological and anatomical studies show that the barn owl\u2019s auditory system contains specialized circuitry that is devoted to extracting spectral ILD and ITD cues and processing them to derive source direction information [2]. It has been suggested that a spectral matching operation between ILD and ITD cues computed from auditory inputs and preferred ILD and ITD spectra associated with spatially selective auditory neurons underlies the derivation of spatial information from the auditory cues [3\u20136]. The spectral matching models reproduce aspects of neural responses, but none reproduces the sound localization behavior of the barn owl. In particular, the spectral matching models do not describe how the owl resolves ambiguities in the localization cues. In addition to spectral matching of localization cues, it is possible that the owl incorporates prior experience or beliefs into the process of deriving direction estimates from the auditory input signals. These two approaches to sound localization can be formalized using the language of estimation theory as maximum likelihood (ML) and Bayesian solutions, respectively.\n\nHere, I examine two models for the barn owl\u2019s sound localization behavior in order to further evaluate the spectral matching model and to test whether a Bayesian model with a prior that emphasizes directions near the center of gaze can reproduce the owl\u2019s localization behavior. I begin by viewing the sound localization problem as a statistical estimation problem. 
Maximum likelihood and maximum a posteriori (MAP) solutions to the estimation problem are compared with the localization behavior of a barn owl in a head turning task.\n\n2 Observation model\n\nTo define the localization problem, we must specify an observation model that describes the information the owl uses to produce a direction estimate. Neurophysiological and behavioral experiments suggest that the barn owl derives direction estimates from ILD and ITD cues that are computed at an array of frequencies [2, 7, 8]. Note that when computed as a function of frequency, the ITD is given by an interaural phase difference (IPD).\n\nHere I consider a model where the observation made by the owl is given by the ILD and IPD spectra derived from barn owl head-related transfer functions (HRTFs) after corruption with additive noise. For a source direction (\u03b8, \u03c6), the observation vector r is expressed mathematically as\n\nr = \begin{bmatrix} r_{ILD} \\ r_{IPD} \end{bmatrix} = \begin{bmatrix} ILD_{\theta,\phi} \\ IPD_{\theta,\phi} \end{bmatrix} + \begin{bmatrix} \eta_{ILD} \\ \eta_{IPD} \end{bmatrix} \qquad (1)\n\nwhere the ILD spectrum ILD_{\theta,\phi} = [ILD_{\theta,\phi}(\omega_1), ILD_{\theta,\phi}(\omega_2), \ldots, ILD_{\theta,\phi}(\omega_{N_f})] and the IPD spectrum IPD_{\theta,\phi} = [IPD_{\theta,\phi}(\omega_1), IPD_{\theta,\phi}(\omega_2), \ldots, IPD_{\theta,\phi}(\omega_{N_f})] are specified at a finite number of frequencies. 
The ILD and IPD cues are computed directly from the HRTFs as\n\nILD_{\theta,\phi}(\omega) = 20 \log_{10} \frac{|\hat{h}_{R(\theta,\phi)}(\omega)|}{|\hat{h}_{L(\theta,\phi)}(\omega)|} \qquad (2)\n\nand\n\nIPD_{\theta,\phi}(\omega) = \varphi_{R(\theta,\phi)}(\omega) - \varphi_{L(\theta,\phi)}(\omega), \qquad (3)\n\nwhere the left and right HRTFs are written as \hat{h}_{L(\theta,\phi)}(\omega) = |\hat{h}_{L(\theta,\phi)}(\omega)| e^{i\varphi_{L(\theta,\phi)}(\omega)} and \hat{h}_{R(\theta,\phi)}(\omega) = |\hat{h}_{R(\theta,\phi)}(\omega)| e^{i\varphi_{R(\theta,\phi)}(\omega)}, respectively.\n\nThe noise corrupting the ILD spectrum is modeled as a Gaussian random vector with independent and identically distributed (i.i.d.) components, \eta_{ILD}(\omega_j) \sim N(0, \sigma). The IPD spectrum noise vector is assumed to have i.i.d. components where each element has a von Mises distribution with parameter \kappa. The von Mises distribution can be viewed as a 2\pi-periodic Gaussian distribution for large \kappa and is a uniform distribution for \kappa = 0 [9]. I assume that the ILD and IPD noise terms are mutually independent.\n\nWith this noise model, the likelihood function has the form\n\np_{r|\Theta,\Phi}(r|\theta, \phi) = p_{r_{ILD}|\Theta,\Phi}(r_{ILD}|\theta, \phi)\, p_{r_{IPD}|\Theta,\Phi}(r_{IPD}|\theta, \phi) \qquad (4)\n\nwhere the ILD likelihood function is given by\n\np_{r_{ILD}|\Theta,\Phi}(r_{ILD}|\theta, \phi) = \frac{1}{(2\pi\sigma^2)^{N_f/2}} \exp\left[-\frac{1}{2\sigma^2} \sum_{j=1}^{N_f} (r_{ILD}(\omega_j) - ILD_{\theta,\phi}(\omega_j))^2\right] \qquad (5)\n\nand the IPD likelihood function is given by\n\np_{r_{IPD}|\Theta,\Phi}(r_{IPD}|\theta, \phi) = \frac{1}{(2\pi I_0(\kappa))^{N_f}} \exp\left[\kappa \sum_{j=1}^{N_f} \cos(r_{IPD}(\omega_j) - IPD_{\theta,\phi}(\omega_j))\right] \qquad (6)\n\nwhere I_0(\kappa) is a modified Bessel function of the first kind of order 0. 
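To make the likelihood concrete, the following sketch evaluates the log of Eqs. (4)-(6) on a discrete grid of candidate directions. The cue tables, noise draws, and parameter values are illustrative assumptions standing in for the measured HRTF spectra, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-ins for HRTF-derived cue spectra: one row per candidate
# direction, one column per frequency (NOT the measured owl HRTFs).
n_dir, n_freq = 37, 16
ild_tab = rng.uniform(-30, 30, size=(n_dir, n_freq))        # ILD in dB
ipd_tab = rng.uniform(-np.pi, np.pi, size=(n_dir, n_freq))  # IPD in radians

def observe(d, sigma, kappa):
    """Eq. (1): cues of direction index d corrupted by Gaussian ILD noise
    and von Mises IPD noise."""
    r_ild = ild_tab[d] + rng.normal(0.0, sigma, n_freq)
    r_ipd = ipd_tab[d] + rng.vonmises(0.0, kappa, n_freq)
    return r_ild, r_ipd

def log_likelihood(r_ild, r_ipd, sigma, kappa):
    """Log of Eqs. (4)-(6) for every candidate direction at once; additive
    normalizing constants are dropped since they do not affect the argmax."""
    ild_term = -np.sum((r_ild - ild_tab) ** 2, axis=1) / (2.0 * sigma ** 2)
    ipd_term = kappa * np.sum(np.cos(r_ipd - ipd_tab), axis=1)
    return ild_term + ipd_term
```

With a noise-free observation, the log-likelihood is maximal at the true direction, consistent with the peak structure described in the text.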
The likelihood function will have peaks at directions where the expected spectral cues ILD_{\theta,\phi} and IPD_{\theta,\phi} are near the observed values r_{ILD} and r_{IPD}.\n\n3 Model performance measure\n\nI evaluate maximum likelihood and maximum a posteriori methods for estimating the source direction from the observed ILD and IPD cues by computing an expected localization error and comparing the results to an owl\u2019s behavior. The performance of each estimation procedure at a given source direction is quantified by the expected absolute angular error E[|\hat{\theta}(r) - \theta| + |\hat{\phi}(r) - \phi| \mid \theta, \phi]. This measure of estimation error is directly compared to the behavioral performance of a barn owl in a head turning localization task [1]. The expected absolute angular error is approximated through Monte Carlo simulation as\n\nE[|\hat{\theta}(r) - \theta| + |\hat{\phi}(r) - \phi| \mid \theta, \phi] \approx \mu(\{|\hat{\theta}(r_i) - \theta|\}_{i=1}^{N}) + \mu(\{|\hat{\phi}(r_i) - \phi|\}_{i=1}^{N}) \qquad (7)\n\nwhere the r_i are drawn from p_{r|\Theta,\Phi}(r|\theta, \phi) and \mu(\{\theta_i\}_{i=1}^{N}) is the circular mean of the angles \{\theta_i\}_{i=1}^{N}. The error is computed using HRTFs for two barn owls [10] and is calculated for directions in the frontal hemisphere with 5\u00b0 increments in azimuth and elevation, as defined using double polar coordinates.\n\n4 Maximum likelihood estimate\n\nThe maximum likelihood direction estimate is derived from the observed noisy ILD and IPD cues by finding the source direction that maximizes the likelihood function, yielding\n\n(\hat{\theta}_{ML}(r), \hat{\phi}_{ML}(r)) = \arg\max_{(\theta,\phi)} p_{r|\Theta,\Phi}(r|\theta, \phi). \qquad (8)\n\nThis procedure amounts to a spectral cue matching operation. Each direction in space is associated with a particular ILD and IPD spectrum, as derived from the HRTFs. 
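The grid-search form of Eq. (8), together with the Monte Carlo error of Eq. (7), can be sketched as follows. For brevity this uses a single (azimuth-like) dimension, synthetic cue tables in place of HRTF spectra, and a plain mean instead of the circular mean; all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative one-dimensional setup: candidate directions and synthetic cue
# tables standing in for the HRTF-derived ILD/IPD spectra.
dirs = np.arange(-90, 91, 5)            # candidate directions, deg
nf = 12                                 # number of frequencies
ild_tab = rng.uniform(-30, 30, (dirs.size, nf))
ipd_tab = rng.uniform(-np.pi, np.pi, (dirs.size, nf))

def ml_estimate(r_ild, r_ipd, sigma, kappa):
    """Eq. (8): grid-search argmax of the log-likelihood (Eqs. 4-6)."""
    ll = (-np.sum((r_ild - ild_tab) ** 2, axis=1) / (2.0 * sigma ** 2)
          + kappa * np.sum(np.cos(r_ipd - ipd_tab), axis=1))
    return dirs[np.argmax(ll)]

def expected_abs_error(d_idx, sigma, kappa, n=200):
    """Monte Carlo approximation of the expected absolute error (cf. Eq. 7),
    with a plain mean in place of the circular mean for simplicity."""
    errs = []
    for _ in range(n):
        r_ild = ild_tab[d_idx] + rng.normal(0.0, sigma, nf)
        r_ipd = ipd_tab[d_idx] + rng.vonmises(0.0, kappa, nf)
        errs.append(abs(int(ml_estimate(r_ild, r_ipd, sigma, kappa)) - int(dirs[d_idx])))
    return float(np.mean(errs))
```

As the noise grows (larger sigma, smaller kappa), directions whose cue spectra happen to resemble the target's produce estimation errors, which is the mechanism behind the ML failure discussed below.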
The direction with associated cues that are closest to the observed cues is designated as the estimate. This estimator is of particular interest because of the claim that salience in the neural map of auditory space in the barn owl can be described by a spectral cue matching operation [3, 4, 6].\n\nThe maximum likelihood estimator was unable to reproduce the owl\u2019s localization behavior. The performance of the maximum likelihood estimator depends on the two likelihood function parameters \u03c3 and \u03ba, which determine the ILD and IPD noise variances, respectively. For noise variances large enough that the error increased at peripheral directions, in accordance with the barn owl\u2019s behavior, the error also increased significantly for directions near the center of the interaural coordinate system (Figure 1). This pattern of error as a function of eccentricity, with a large central peak, is not consistent with the performance of the barn owl in the head turning task [1]. Additionally, directions near the center of gaze were often confused with directions in the periphery, leading to high variability in the direction estimates, which is not seen in the owl\u2019s behavior.\n\n5 Maximum a posteriori estimate\n\nIn the Bayesian framework, the direction estimate depends on both the likelihood function and the prior distribution over source directions through the posterior distribution. Using Bayes\u2019 rule, the posterior density is proportional to the product of the likelihood function and the prior,\n\np_{\Theta,\Phi|r}(\theta, \phi|r) \propto p_{r|\Theta,\Phi}(r|\theta, \phi)\, p_{\Theta,\Phi}(\theta, \phi). \qquad (9)\n\nThe prior distribution is used to summarize the owl\u2019s belief about the most likely source directions before an observation of ILD and IPD cues is made. Based on the barn owl\u2019s tendency to underestimate source directions [1], I use a prior that emphasizes directions near the center of gaze. 
The prior is given by a product of two one-dimensional von Mises distributions, yielding the probability density function\n\np_{\Theta,\Phi}(\theta, \phi) = \frac{\exp[\kappa_1 \cos(\theta) + \kappa_2 \cos(\phi)]}{(2\pi)^2 I_0(\kappa_1) I_0(\kappa_2)} \qquad (10)\n\nwhere I_0(\kappa) is a modified Bessel function of the first kind of order 0. The maximum a posteriori source direction estimate is computed for a given observation by finding the source direction that maximizes the posterior density, yielding\n\n(\hat{\theta}_{MAP}(r), \hat{\phi}_{MAP}(r)) = \arg\max_{(\theta,\phi)} p_{\Theta,\Phi|r}(\theta, \phi|r). \qquad (11)\n\nFigure 1: Estimation error in the model for the maximum likelihood (ML) and maximum a posteriori (MAP) estimates. HRTFs were used from owls 884 (top) and 880 (bottom). Left column: Estimation error at 685 locations in the frontal hemisphere plotted in double polar coordinates. Center column: Estimation error on the horizontal plane along with the estimation error of a barn owl in a head turning task [1]. Right column: Estimation error on the vertical plane along with the estimation error of a barn owl in a head turning task. Note that each plot uses a unique scale.\n\nFigure 2: Estimates for the MAP estimator on the horizontal plane (left) and the vertical plane (right) using HRTFs from owl 880. The box extends from the lower quartile to the upper quartile of the sample. The solid line is the identity line. Like the owl, the MAP estimator underestimates the source direction.\n\nIn the MAP case, the estimate depends on spectral matching of observations with expected cues for each direction, but with a penalty on the selection of peripheral directions.\n\nIt was possible to find a MAP estimator that was consistent with the owl\u2019s localization behavior (Figures 1, 2). For the example MAP estimators shown in Figures 1 and 2, the error was smallest in the central region of space and increased at the periphery. 
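The way the prior of Eq. (10) resolves cue ambiguity can be sketched in one dimension. The cue tables below are synthetic assumptions (not owl HRTFs), with an ambiguity deliberately built in: a peripheral direction is given exactly the same cues as the central one, mimicking the central/peripheral cue similarity described in the text:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative azimuth-only setup with synthetic cue tables.
dirs = np.arange(-90, 91, 5)
nf = 12
ild_tab = rng.uniform(-30, 30, (dirs.size, nf))
ipd_tab = rng.uniform(-np.pi, np.pi, (dirs.size, nf))

# Built-in ambiguity: give +70 deg the same cues as 0 deg.
i0 = int(np.argmin(np.abs(dirs - 0)))
i70 = int(np.argmin(np.abs(dirs - 70)))
ild_tab[i70] = ild_tab[i0]
ipd_tab[i70] = ipd_tab[i0]

def log_posterior(r_ild, r_ipd, sigma, kappa, kappa1):
    """Eq. (9) in log form: log-likelihood (Eqs. 4-6) plus the log of a
    one-dimensional von Mises prior centered on 0 deg (cf. Eq. 10),
    up to additive constants."""
    ll = (-np.sum((r_ild - ild_tab) ** 2, axis=1) / (2.0 * sigma ** 2)
          + kappa * np.sum(np.cos(r_ipd - ipd_tab), axis=1))
    return ll + kappa1 * np.cos(np.deg2rad(dirs))

def map_estimate(r_ild, r_ipd, sigma, kappa, kappa1):
    """Eq. (11): grid-search argmax of the posterior; kappa1 = 0 recovers
    the maximum likelihood estimate of Eq. (8)."""
    return dirs[np.argmax(log_posterior(r_ild, r_ipd, sigma, kappa, kappa1))]
```

Under the likelihood alone the 0 deg and +70 deg hypotheses are exactly tied, so noise decides between them; even a weak prior (kappa1 on the order of the values fitted below) breaks the tie toward the center of gaze.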
The largest errors occurred at the vertical extremes. This pattern of error qualitatively matches the pattern of error displayed by the owl in a head turning localization task [1].\n\nThe parameters that produced a behaviorally consistent MAP estimator correspond to a likelihood and prior with large variances. For the estimators shown in Figure 1, the likelihood function parameters were given by \u03c3 = 11.5 dB and \u03ba = 0.75 for owl 880 and \u03c3 = 10.75 dB and \u03ba = 0.8 for owl 884. For comparison, the range of ILD values normally experienced by the barn owl falls between \u00b1 30 dB [10]. The prior parameters correspond to an azimuthal width parameter \u03ba1 of 0.25 for owl 880 and 0.2 for owl 884 and an elevational width parameter \u03ba2 of 0.25 for owl 880 and 0.18 for owl 884.\n\nThe implication of this model for implementation in the owl\u2019s auditory system is that the spectral localization cues ILD and IPD do not need to be computed with great accuracy, and the emphasis on central directions does not need to be large, in order to produce the barn owl\u2019s behavior.\n\n6 Discussion\n\n6.1 A new approach to modeling sound localization in the barn owl\n\nThe simulation results show that the maximum likelihood model considered here cannot reproduce the owl\u2019s behavior, while the maximum a posteriori solution is able to match the behavior. This result suggests that the standard spectral matching model will not be sufficient to explain sound localization behavior in the barn owl. Previously, suggestions have been made that sound localization by the barn owl can be described using the Bayesian framework [11, 12], but no specific models have been proposed. This paper demonstrates that a Bayesian model can qualitatively match the owl\u2019s localization behavior. 
The Bayesian approach described here provides a new framework for analyzing sound localization in the owl.\n\n6.2 Failure of the maximum likelihood model\n\nThe maximum likelihood model fails because of the nature of spatial ambiguity in the ILD and IPD cues. The existence of spatial ambiguity has been noted in previous descriptions of barn owl HRTFs [3, 10, 13]. As expected, directions near each other have similar cues. In addition to similarity of cues between proximal directions, distant directions can have similar ILD and IPD cues. Most significantly, there is a similarity between the ILD and IPD cues at the center of gaze and at peripheral directions on the vertical plane. The consequence of such ambiguity between distant directions is that noise in measuring localization cues can lead to large errors in direction estimation, as seen in the ML estimate. The results of the simulations suggest that a behaviorally accurate solution to the sound localization problem must include a mechanism that chooses between disparate directions which are associated with similar localization cues in such a way as to limit errors for source directions near the center of gaze. This work shows that a possible mechanism for choosing between such directions is to incorporate a bias towards directions at the center of gaze through a prior distribution and utilize the Bayesian estimation framework. The use of a prior that emphasizes directions near the center of gaze is similar to the use of central weighting functions in models of human lateralization [14].\n\n6.3 Predictions of the Bayesian model\n\nThe MAP estimator predicts the underestimation of peripheral source directions on the horizontal and vertical planes (Figure 2). 
The pattern of error displayed by the MAP estimator qualitatively matches the owl\u2019s behavioral performance by showing increasing error as a function of eccentricity. My evaluation of the model performance is limited, however, because there is little behavioral data for directions outside \u00b1 70 deg [15, 16]. For the owl whose performance is displayed in Figure 1, the largest errors on the vertical and horizontal planes were less than 20 deg and 11 deg, respectively. The model produces much larger errors for directions beyond 70 deg, especially on the vertical plane. The large errors in elevation result from the ambiguity in the localization cues on the vertical plane and the shape of the prior distribution. As discussed above, for broadband noise stimuli, there is a similarity between the ILD and IPD cues for central and peripheral directions on the vertical plane [3, 10, 13]. The presence of a prior distribution that emphasizes central directions causes direction estimates for both central and peripheral directions to be concentrated near zero deg. Therefore, estimation errors are minimal for sources at the center of gaze, but approach the magnitude of the source direction for peripheral source directions. Behavioral data show that localization accuracy is greatest near the center of gaze [1], but there is no data for localization performance at the most eccentric directions on the vertical plane. Further behavioral experiments must be performed to determine if the owl\u2019s error increases greatly at the most peripheral directions.\n\nThere is a significant spatial ambiguity in the localization cues when target sounds are narrowband. It is well known that spatial ambiguity arises from the way that interaural time differences are processed at each frequency [17\u201319]. The owl measures the interaural time difference for each frequency of the input sound as an interaural phase difference. 
Therefore, multiple directions in space that differ in their associated interaural time difference by the period of a tone at that frequency are consistent with the same interaural phase difference and cannot be distinguished. Behavioral experiments show that the owl may localize a phantom source in the horizontal dimension when the signal is a tone [20]. Based on the presence of a prior that emphasizes directions near the center of gaze, I predict that for low frequency tones where phase equivalent directions lie near the center of gaze and at directions greater than 80 deg, confusion will always lead to an estimate of a source direction near zero degrees. This prediction cannot be evaluated from available data because localization of tonal signals has only been systematically studied using 5 kHz tones with target directions at \u00b1 20 deg [19]. Because the prior is broad, the target direction of \u00b1 20 deg and the phantom direction of \u00b1 50 deg may both be considered central.\n\nThe ILD cue also displays a significant ambiguity at high frequencies. At frequencies above 7 kHz, the ILD is non-monotonically related to the vertical position of a sound source [3, 10] (Figure 3). Therefore, for narrowband sounds, the owl cannot uniquely determine the direction of a sound source from the ITD and ILD cues. I predict that for tonal signals above 7 kHz, there will be multiple directions on the vertical plane that are confused with directions near zero deg. I predict that confusion between source directions near zero deg and eccentric directions will always lead to estimates of directions near zero deg. There is no available data to evaluate this prediction.\n\nFigure 3: Model predictions for localization of tones on the vertical plane. (A) ILD as a function of elevation at 8 kHz, computed from HRTFs of owl 880 recorded by Keller et al. (1998). 
(B) Given an ILD of 0 dB, a likelihood function (dots) based on matching cues to expected values would be multimodal with three equal peaks. If the target is at any of the three directions, there will be large localization errors because of confusion with the other directions. If a prior emphasizing frontal space (dashed) is included, a posterior density equal to the product of the likelihood and the prior would have a main peak at 0 deg elevation. Using a maximum a posteriori estimate, large errors would be made if the target is above or below this peak. However, few errors would be observed when the target is near 0 deg.\n\n6.4 Testing the Bayesian model\n\nFurther head turning localization experiments with barn owls must be performed to test predictions generated by the Bayesian hypothesis and to provide constraints on a model of sound localization. Experiments should test the localization accuracy of the owl for broadband noise sources and tonal signals at directions covering the frontal hemisphere. The Bayesian model will be supported if, first, localization accuracy is high for both tonal and broadband noise sources near the center of gaze and, second, peripherally located sources are confused for targets near the center of gaze, leading to large localization errors. Additionally, a Bayesian model should be fit to the data, including points away from the horizontal and vertical planes, using a nonparametric prior [21, 22]. While the model presented here, using a von Mises prior, qualitatively matches the performance of the owl, the performance of the Bayesian model may be improved by removing assumptions about the structure of the prior distribution.\n\n6.5 Implications for neural processing\n\nThe analysis presented here does not directly address the neural implementation of the solution to the localization problem. However, this abstract analysis of the sound localization problem has implications for neural processing. 
Several models exist that reproduce the basic properties of ILD, ITD, and space selectivity in ICx and OT neurons using a spectral matching procedure [3, 5, 6]. These results suggest that a Bayesian model is not necessary to describe the responses of individual ICx and OT neurons. It may be necessary to look in the brainstem motor targets of the optic tectum to find neurons that resolve the ambiguity present in sound stimuli and show responses that reflect the MAP solution. This implies that the prior distribution is not employed until the final stage of processing. The prior may correspond to the distribution of best directions of space-specific neurons in ICx and OT, which emphasizes directions near the center of gaze [23].\n\n6.6 Conclusion\n\nThis analysis supports the Bayesian model of the barn owl\u2019s solution to the localization problem over the maximum likelihood model. This result suggests that the standard spectral matching model will not be sufficient to explain sound localization behavior in the barn owl. The Bayesian model provides a new framework for analyzing sound localization in the owl. The simulation results using the MAP estimator lead to testable predictions that can be used to evaluate the Bayesian model of sound localization in the barn owl.\n\nAcknowledgments\n\nI thank Kip Keller, Klaus Hartung, and Terry Takahashi for providing the head-related transfer functions and Mark Konishi and Jos\u00e9 Luis Pe\u00f1a for comments and support.\n\nReferences\n\n[1] E.I. Knudsen, G.G. Blasdel, and M. Konishi. Sound localization by the barn owl (Tyto alba) measured with the search coil technique. J. Comp. Physiol., 133:1\u201311, 1979.\n\n[2] M. Konishi. Coding of auditory space. Annu. Rev. Neurosci., 26:31\u201355, 2003.\n\n[3] M.S. Brainard, E.I. Knudsen, and S.D. Esterly. Neural derivation of sound source location: Resolution of spatial ambiguities in binaural cues. J. Acoust. Soc. 
Am., 91(2):1015\u20131027, 1992.\n\n[4] B.J. Arthur. Neural computations leading to space-specific auditory responses in the barn owl. Ph.D. thesis, Caltech, 2001.\n\n[5] B.J. Fischer. A model of the computations leading to a representation of auditory space in the midbrain of the barn owl. D.Sc. thesis, Washington University in St. Louis, 2005.\n\n[6] C.H. Keller and T.T. Takahashi. Localization and identification of concurrent sounds in the owl\u2019s auditory space map. J. Neurosci., 25:10446\u201310461, 2005.\n\n[7] I. Poganiatz and H. Wagner. Sound-localization experiments with barn owls in virtual space: influence of broadband interaural level difference on head-turning behavior. J. Comp. Physiol. A, 187:225\u2013233, 2001.\n\n[8] D.R. Euston and T.T. Takahashi. From spectrum to space: The contribution of level difference cues to spatial receptive fields in the barn owl inferior colliculus. J. Neurosci., 22(1):284\u2013293, Jan. 2002.\n\n[9] M. Evans, N. Hastings, and B. Peacock. von Mises Distribution. In Statistical Distributions, 3rd ed., pages 189\u2013191. Wiley, New York, 2000.\n\n[10] C.H. Keller, K. Hartung, and T.T. Takahashi. Head-related transfer functions of the barn owl: measurement and neural responses. Hearing Research, 118:13\u201334, 1998.\n\n[11] R.O. Duda. Elevation dependence of the interaural transfer function. Chapter 3 in Binaural and Spatial Hearing in Real and Virtual Environments, pages 49\u201375. Lawrence Erlbaum Associates, New Jersey, 1997.\n\n[12] I.B. Witten and E.I. Knudsen. Why seeing is believing: Merging auditory and visual worlds. Neuron, 48:489\u2013496, 2005.\n\n[13] J.F. Olsen, E.I. Knudsen, and S.D. Esterly. Neural maps of interaural time and intensity differences in the optic tectum of the barn owl. J. Neurosci., 9:2591\u20132605, 1989.\n\n[14] R.M. Stern and H.S. Colburn. Theory of binaural interaction based on auditory-nerve data. IV. 
A model for subjective lateral position. J. Acoust. Soc. Am., 64:127\u2013140, 1978.\n\n[15] H. Wagner. Sound-localization deficits induced by lesions in the barn owl\u2019s auditory space map. J. Neurosci., 13:371\u2013386, 1993.\n\n[16] I. Poganiatz, I. Nelken, and H. Wagner. Sound-localization experiments with barn owls in virtual space: influence of interaural time difference on head-turning behavior. J. Assoc. Res. Otolaryngol., 2:1\u201321, 2001.\n\n[17] T. Takahashi and M. Konishi. Selectivity for interaural time difference in the owl\u2019s midbrain. J. Neurosci., 6(12):3413\u20133422, 1986.\n\n[18] J.A. Mazer. How the owl resolves auditory coding ambiguity. Proc. Natl. Acad. Sci. USA, 95:10932\u201310937, 1998.\n\n[19] K. Saberi, Y. Takahashi, H. Farahbod, and M. Konishi. Neural bases of an auditory illusion and its elimination in owls. Nature Neurosci., 2(7):656\u2013659, 1999.\n\n[20] E.I. Knudsen and M. Konishi. Mechanisms of sound localization in the barn owl (Tyto alba) measured with the search coil technique. J. Comp. Physiol. A, 133:13\u201321, 1979.\n\n[21] L. Paninski. Nonparametric inference of prior probabilities from Bayes-optimal behavior. In Y. Weiss, B. Sch\u00f6lkopf, and J. Platt, editors, Advances in Neural Information Processing Systems 18, pages 1067\u20131074. MIT Press, Cambridge, MA, 2006.\n\n[22] A.A. Stocker and E.P. Simoncelli. Noise characteristics and prior expectations in human visual speed perception. Nature Neurosci., 9(4):578\u2013585, 2006.\n\n[23] E.I. Knudsen and M. Konishi. A neural map of auditory space in the owl. Science, 200:795\u2013797, 1978.", "award": [], "sourceid": 38, "authors": [{"given_name": "Brian", "family_name": "Fischer", "institution": null}]}