{"title": "An Adaptive and Heterodyne Filtering Procedure for the Imaging of Moving Objects", "book": "Neural Information Processing Systems", "page_first": 662, "page_last": 673, "abstract": null, "full_text": "662\n\nAN ADAPTIVE AND HETERODYNE FILTERING PROCEDURE FOR THE IMAGING OF MOVING OBJECTS\n\nF. H. Schuling, H. A. K. Mastebroek and W. H. Zaagman\nBiophysics Department, Laboratory for General Physics\nWestersingel 34, 9718 CM Groningen, The Netherlands\n\nABSTRACT\n\nRecent experimental work on the stimulus velocity dependent time resolving power of the neural units situated in the highest order optic ganglion of the blowfly revealed the at first sight amazing phenomenon that at this high level of the fly visual system, the time constants of the units involved in the processing of neural activity evoked by moving objects are, roughly speaking, inversely proportional to the velocity of those objects over an extremely wide range. In this paper we discuss the implementation of a two dimensional heterodyne adaptive filter construction in a computer simulation model. The features of this simulation model include the ability to account for the experimentally observed stimulus-tuned adaptive temporal behaviour of time constants in the fly visual system. The simulation results obtained clearly show that the application of such an adaptive processing procedure delivers an improved imaging technique for moving patterns in the high velocity range.\n\nA FEW REMARKS ON THE FLY VISUAL SYSTEM\n\nThe visual system of the diptera, including the blowfly Calliphora erythrocephala (Mg.), is very regularly organized and therefore allows very precise optical stimulation techniques. 
Also, long term electrophysiological recordings can be made relatively easily in this visual system. For these reasons the blowfly (well-known as a very rapid and 'clever' pilot) turns out to be an extremely suitable animal for a systematic study of the basic principles that may underlie the detection and further processing of movement information at the neural level.\n\nIn the fly visual system the retinal input mosaic structure is precisely mapped onto the higher order optic ganglia (lamina, medulla, lobula). This means that each neural column in each ganglion of this visual system corresponds to a certain optical axis in the visual field of the compound eye. In the lobula complex a set of wide-field movement sensitive neurons is found, each of which integrates the input signals over the whole visual field of the entire eye. One of these wide field neurons, classified as H1 by Hausen1, has been studied extensively both anatomically2,3,4 and electrophysiologically5,6,7. The results obtained generally agree very well with those found in behavioral optomotor experiments on movement detection8 and can be understood in terms of Reichardt's correlation model9,10.\n\nThe H1 neuron is sensitive to horizontal movement and directionally selective: very high rates of action potentials (spikes), up to 300 per second, can be recorded from this element for visual stimuli which move horizontally inward, i.e. from back to front in the visual field (preferred direction), whereas movement horizontally outward, i.e. from front to back (null direction), suppresses its activity.\n\n\u00a9 American Institute of Physics 1988\n\nEXPERIMENTAL RESULTS AS A MODELLING BASE\n\nWhen the H1 neuron is stimulated in its preferred direction with a step wise pattern displacement, it will respond with an increase of neural activity. 
By repeating this stimulus step over and over one can obtain the averaged response: after a 20 ms latency period the response manifests itself as a sharp increase in average firing rate followed by a much slower decay to the spontaneous activity level. Two examples of such averaged responses are shown in the Post Stimulus Time Histograms (PSTH's) of figure 1. Time to peak and peak height are related and depend on modulation depth, stimulus step size and spatial extent of the stimulus. The tail of the responses can be described adequately by an exponential decay toward a constant spontaneous firing rate:\n\nR(t) = c + a\u00b7e^(-t/\u03c4)    (1)\n\nFor each setting of the stimulus parameters, the response parameters defined by equation (1) can be estimated by a least-squares fit to the tail of the PSTH. The smooth lines in figure 1 are the results of two such fits.\n\nFig.1 Averaged responses (PSTH's) obtained from the H1 neuron, adapted to smooth stimulus motion with velocities 0.36 \u00b0/s (top) and 11 \u00b0/s (bottom) respectively. The smooth lines represent least-squares fits to the PSTH's of the form R(t) = c + a\u00b7e^(-t/\u03c4). Values of \u03c4 for the two PSTH's are 331 and 24 ms respectively (de Ruyter van Steveninck et al.7).\n\nFig.2 Fitted values of \u03c4 as a function of adaptation velocity for three modulation depths M. The straight line is a least-squares fit to the data for M=0.40 in the region w = 0.3-100 \u00b0/s. It has the form \u03c4 = \u03b1\u00b7w^(-\u03b2) with \u03b1 = 150 ms and \u03b2 = 0.7 (de Ruyter van Steveninck et al.7). 
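The least-squares estimation of the response parameters in equation (1) can be sketched numerically; a minimal illustration in Python (NumPy), assuming the spontaneous rate c has already been estimated, so that log(R(t) - c) is linear in t and an ordinary degree-1 polynomial fit recovers a and tau. The synthetic numbers below merely mimic the upper PSTH of figure 1 and are not the measured data:

```python
import numpy as np

def fit_psth_tail(t, r, c):
    """Fit R(t) = c + a*exp(-t/tau) to a PSTH tail, with c (the
    spontaneous rate) assumed known; returns (a, tau)."""
    y = np.log(r - c)                    # log-linearize the exponential decay
    slope, intercept = np.polyfit(t, y, 1)
    return np.exp(intercept), -1.0 / slope

# Synthetic tail mimicking the upper PSTH of figure 1 (tau = 331 ms)
t = np.linspace(0.0, 1000.0, 200)        # time in ms
r = 30.0 + 200.0 * np.exp(-t / 331.0)    # firing rate in spikes/s
a, tau = fit_psth_tail(t, r, c=30.0)
```

On noisy data one would instead fit all three parameters (c, a, tau) simultaneously with a nonlinear least-squares routine, as was done for the smooth lines in figure 1.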
\n\nFigure 2 shows fitted values of the response time constant \u03c4 as a function of the angular velocity of a moving stimulus (a square wave grating in most experiments) which was presented to the animal during a period long enough to let its visual system adapt to this moving pattern, before the step wise pattern displacement (which reveals \u03c4) was given. The straight line, described by\n\n\u03c4 = \u03b1\u00b7w^(-\u03b2)    (2)\n\n(with w in \u00b0/s and \u03c4 in ms) represents a least-squares fit to the data over the velocity range from 0.36 to 125 \u00b0/s. For this range, \u03c4 varies from 320 to roughly 10 ms, with \u03b1 = 150\u00b110 ms and \u03b2 = 0.7\u00b10.05. Defining the adaptation range of \u03c4 as that interval of velocities for which \u03c4 decreases with increasing velocity, we may conclude from figure 2 that within the adaptation range, \u03c4 is not very sensitive to the modulation depth.\n\nThe outcome of similar experiments with a constant modulation depth of the pattern (M=0.40) and a constant pattern velocity but with four different values of the contrast frequency fc (i.e. the number of spatial periods per second that traverse an individual visual axis, as determined by the spatial wavelength \u03bbs of the pattern and the pattern velocity v according to fc = v/\u03bbs) also reveal an almost complete independence of the behaviour of \u03c4 from the contrast frequency. Other experiments, in which the stimulus field was subdivided into regions with different adaptation velocities, made clear that the time constants of the input channels of the H1 neuron are set locally by the values of the stimulus velocity in each stimulus sub-region. Finally, it was found that the adaptation of \u03c4 is driven by the stimulus velocity, independent of its direction. 
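The fitted relationship (2) is easy to evaluate numerically; a small sketch in Python, using the fitted values alpha = 150 ms and beta = 0.7 quoted above. Note that exact constancy of the product w times tau (and hence of the blur extent) would require beta = 1, the value adopted later in this paper for the simulations:

```python
def tau_ms(w, alpha=150.0, beta=0.7):
    """Adapted time constant tau = alpha * w**(-beta),
    with w in deg/s and tau in ms (least-squares fit of figure 2)."""
    return alpha * w ** (-beta)

# Over the measured range the time constant spans roughly 300 ms down to a few ms
lo, hi = tau_ms(0.36), tau_ms(125.0)

# With beta = 1 the product w * tau (proportional to the blur extent)
# becomes independent of the velocity w:
products = [w * tau_ms(w, alpha=150.0, beta=1.0) for w in (1.0, 4.0, 16.0)]
```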
\n\nThese findings can be summarized qualitatively as follows: in steady state, the response time constants \u03c4 of the neural units at the highest level in the fly visual system are found to be tuned locally, within a large velocity range, exclusively by the magnitude of the velocity of the moving pattern and not by its direction, despite the directional selectivity of the neuron itself. We will not go into the question of how this amazing adaptive mechanism may be hard-wired in the fly visual system. Instead we will take advantage of the results derived thus far and attempt to fit the experimental observations into an image processing approach. A large number of theories and several distinct classes of algorithms to encode velocity and direction of movement in visual systems have been suggested by, for example, Marr and Ullman11 and van Santen and Sperling12.\n\nWe hypothesize that the adaptive mechanism for the setting of the time constants leads to an optimization of the overall performance of the visual system by realizing a velocity independent representation of the moving object. In other words: within the range of velocities for which the time constants are found to be tuned by the velocity, the representation of the stimulus at a certain level within the visual circuitry should remain independent of any variation in stimulus velocity. 
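This hypothesis can be illustrated with a toy computation; a sketch in Python (NumPy) in which the temporal filtering of a moving pattern is approximated by a spatial convolution with a normalized exponential PSF whose extent plays the role of the product of velocity and time constant. All names and numbers below are illustrative assumptions, not values from the experiments. If tau is retuned as tau = constant/v, the blurred representation comes out the same for every velocity:

```python
import numpy as np

def exp_blur(f, extent):
    """Blur a 1-D pattern with a normalized exponential PSF; the
    extent plays the role of the product v * tau."""
    n = np.arange(4 * int(extent) + 1)
    kernel = np.exp(-n / extent)
    kernel /= kernel.sum()
    return np.convolve(f, kernel)[:len(f)]

square = (np.arange(64) // 8 % 2).astype(float)   # square wave, period 16

# Tuned case: tau = 8/v, so the blur extent v * tau is always 8
tuned = [exp_blur(square, v * (8.0 / v)) for v in (1.0, 2.0, 4.0)]

# Fixed-tau case: the blur extent grows with v and the outputs differ
fixed = [exp_blur(square, v * 8.0) for v in (1.0, 2.0, 4.0)]
```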
\n\nOBJECT MOTION DEGRADATION: MODELLING\n\nGiven the physical description of motion and a linear space invariant model, the motion degradation process can be represented by the following convolution integral:\n\ng(x,y) = \u222b\u222b h(x-u, y-v)\u00b7f(u,v) du dv    (3)\n\n(with both integrals running from -\u221e to +\u221e) where f(u,v) is the object intensity at position (u,v) in the object coordinate frame, h(x-u, y-v) is the Point Spread Function (PSF) of the imaging system, which is the response at (x,y) to a unit pulse at (u,v), and g(x,y) is the image intensity at the spatial position (x,y) as blurred by the imaging system. Any possible additive white noise degradation of the already motion blurred image is neglected in the present considerations.\n\nFor a review of principles and techniques in the field of digital image degradation and restoration, the reader is referred to Harris13, Sawchuk14, Sondhi15, Nahi16, Aboutalib et al.17,18, Hildebrand19 and Rajala and de Figueiredo20. It was first demonstrated by Aboutalib et al.17 that for situations in which the motion blur occurs in a straight line along one spatial coordinate, say along the horizontal axis, it is correct to look at the blurred image as a collection of degraded line scans through the entire image. The dependence on the vertical coordinate may then be dropped and eq. (3) reduces to:\n\ng(x) = \u222b h(x-u)\u00b7f(u) du    (4)\n\nGiven the mathematical description of the relative movement, the corresponding PSF can be derived exactly and equation (4) becomes:\n\ng(x) = (1/R) \u222b f(u) du, integrated from x-R to x    (5)\n\nwhere R is the extent of the motion blur. Typically, a discrete version of (5), applicable for digital image processing purposes, is described by:\n\ng(k) = \u03a3_{l=1}^{L} h(k-l)\u00b7f(l)    ; k=1,...,N    (6)\n\nwhere k and l take on integer values and L is related to the motion blur extent.\n\nAccording to Aboutalib et al.
18, a scalar difference equation model (M, a, b, c) can then be derived to model the motion degradation process:\n\nx(k+1) = M\u00b7x(k) + a\u00b7f(k)\ng(k) = b\u00b7x(k) + c\u00b7f(k)    ; k=1,...,N    (7)\n\nwith\n\nh(i) = c0\u0394(i) + c1\u0394(i-1) + ... + cm\u0394(i-m)\n\nwhere x(k) is the m-dimensional state vector at position k along a scan line, f(k) is the input intensity at position k, g(k) is the output intensity, m is the blur extent, N is the number of elements in a line, c is a scalar, M, a and b are constant matrices of order (mxm), (mx1) and (1xm) respectively, containing the discrete values cj of the blurring PSF h(j) for j=0,...,m, and \u0394(\u00b7) is the Kronecker delta function.\n\nINFLUENCE OF BOTH TIME CONSTANT AND VELOCITY ON THE AMOUNT OF MOTION BLUR IN AN ARTIFICIAL RECEPTOR ARRAY\n\nTo start with, we incorporate in our simulation model a PSF, derived from equation (1), to model the performance of all neural columnar arranged filters in the lobula complex, with the restriction that the time constants \u03c4 remain fixed throughout the whole range of stimulus velocities. Realization of this PSF can easily be achieved via the just mentioned state space model.\n\n[Figure 3: wave forms plotted against position in the artificial receptor array]\n\nFig.3 upper part: Demonstration of the effect that an increase in magnitude of the time constants of a one-dimensional array of filters results in an increase in motion blur (while the pattern velocity remains constant). 
\nOriginal pattern shown in solid lines is a square-wave grating with a spatial wavelength equal to 8 artificial receptor distances. The three other wave forms drawn show that for a gradual increase in magnitude of the time constants, the representation of the original square-wave degrades accordingly. lower part: A gradual increase in velocity of the moving square-wave (while the filter time constants are kept fixed) also results in a clear increase of degradation.\n\nFirst we demonstrate the effect that an increase in time constant (while the pattern velocity remains the same) results in an increase in blur. To this end we introduce a one dimensional array of filters, all equipped with the same time constant in their impulse response. The original pattern, shown in solid lines in the upper part of figure 3, consists of a square wave grating with a spatial period overlapping 8 artificial receptive filters. The 3 other patterns drawn there show that for the same constant velocity of the moving grating, an increase in the magnitude of the time constants of the filters results in an increased blur in the representation of that grating. On the other hand, an increase in velocity (while the time constants of the artificial receptive units remain the same) also results in a clear increase in motion blur, as demonstrated in the lower part of figure 3.\n\nInspection of the two wave forms drawn by means of the dashed lines in both upper and lower half of the figure yields the conclusion that (apart from rounding errors introduced by the rather small number of artificial filters available) equal amounts of smear will be produced when the product of time constant and pattern velocity is equal. For the upper dashed wave form the velocity was four times smaller but the time constant four times larger than for its equivalent in the lower part of the figure. 
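The scalar difference equation model (7) used to generate such blurred scan lines is straightforward to implement; a minimal sketch in Python (NumPy) of one possible state space realization, checked against the direct convolution (6). The uniform 8-tap PSF and the random scan line are illustrative choices, not values from the paper:

```python
import numpy as np

def blur_state_space(f, h):
    """Run the scalar difference equation model (M, a, b, c):
    x(k+1) = M x(k) + a f(k),  g(k) = b x(k) + c f(k),
    realized so that g is the causal convolution of f with the PSF h."""
    m = len(h) - 1                       # blur extent
    M = np.diag(np.ones(m - 1), -1)      # (m x m) shift matrix
    a = np.zeros(m); a[0] = 1.0          # (m x 1) input vector
    b = np.asarray(h[1:], dtype=float)   # (1 x m) output vector: c_1..c_m
    c = float(h[0])                      # scalar feedthrough c_0
    x = np.zeros(m)
    g = np.empty(len(f))
    for k in range(len(f)):
        g[k] = b @ x + c * f[k]          # output before the state update
        x = M @ x + a * f[k]             # shift the past inputs down
    return g

rng = np.random.default_rng(0)
scan = rng.random(100)                   # one scan line of intensities
psf = np.full(8, 1.0 / 8.0)              # uniform motion blur over 8 pixels
g_state = blur_state_space(scan, psf)
g_direct = np.convolve(scan, psf)[:len(scan)]
```

The state vector simply stores the last m input samples, so the model reproduces the discrete convolution exactly.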
\n\nADAPTIVE SCHEME\n\nIn designing a proper image processing procedure, our next step is to incorporate the experimentally observed flexibility of the time constants in the imaging elements of our device. In figure 4a a scheme is shown which filters the information with fixed time constants, not influenced by the pattern velocity. In figure 4b a network is shown where the time constants also remain fixed no matter what pattern movement is presented, but now, at the next level of information processing, a spatially differentiating network is incorporated in order to enhance blurred contrasts.\n\nIn the filtering network in figure 4c, first a measurement of the magnitude of the velocity of the moving objects is made by thus far hypothetically introduced movement processing algorithms, modelled here as a set of receptive elements sampling the environment in such a manner that a proper estimation of local pattern velocities can be made. The time constants of the artificial receptive elements are then tuned according to the estimated velocities and finally the same differentiating network as in scheme 4b is used.\n\nThe actual tuning mechanism used for our simulations is outlined in figure 5: once given the range of velocities for which the model is supposed to be operational, and given a lower limit \u03c4min for the time constant (\u03c4min can be the smallest value which physically can be realized), the time constant will be tuned to a new value according to the experimentally observed reciprocal relationship and will, for all velocities within the adaptive range, be larger than the fixed minimum value. As demonstrated in the previous section, the corresponding blur in the representation of the moving stimulus will thus always be larger than for the situation in which the filtering is done with the fixed and smallest time constants \u03c4min. 
More important, however, is the fact that due to this tuning mechanism the blur will be constant, since the product of velocity and time constant is kept constant. So, once the information has been processed by such a system, a velocity independent representation of the image will result, which can serve as the input for the spatially differentiating network as outlined in figure 4c.\n\nThe most elementary form of this differential filtering procedure is the one in which the gradient of the two filters K-1 and K+1, the nearest neighbors of filter K, is taken and then added with a constant weighting factor to the central output K, as drawn in figures 4b and 4c, where the sign of the gradient depends on the direction of the estimated movement. Essential for our model is the claim that this weighting factor should be constant throughout the whole set of filters and over the whole high velocity range in which the heterodyne imaging has to be performed. Important to notice is the existence of a so-called settling time, i.e. the minimal time needed for our movement processing device to be able to accurately measure the object velocity. [Note: this time can be set equal to zero in the case that the relative stimulus velocity is known a priori, as demonstrated in figure 3.] Since, without doubt, within this settling period estimated velocity values will come out erroneously and thus no optimal performance of our imaging device can be expected, in all further examples results obtained after this initial settling procedure will be shown.\n\n[Figure 4]\n\nFig. 4 Pattern movement in this figure is to the right.\nA: Network consisting of a set of filters with a fixed, pattern velocity independent, time constant in their impulse response. 
\nB: Identical network as in figure 4A, now followed by a spatially differentiating circuit which adds the weighted gradients of the two neighboring filter outputs K-1 and K+1 to the central filter output K.\nC: The time constants of the filtering network are tuned by a hypothetical movement estimating mechanism, visualized here as a number of receptive elements, of which the combined output tunes the filters. A detailed description of this mechanism is shown in figure 5. This tuned network is followed by an identical spatially differentiating circuit as described in figure 4B.\n\n[Figure 5: time constant versus velocity, decreasing toward \u03c4min with increasing velocity]\n\nFig. 5 Detailed description of the mechanism used to tune the time constants. The time constant \u03c4 of a specific neural channel is set by the pattern velocity according to the relationship shown in the insert, which is derived from eq. (2) with \u03b1 = 1 and \u03b2 = 1.\n\n[Figure 6: responses plotted against position in the artificial receptor array, for velocities v to 16v]\n\nFig.6 Thick lines: square-wave stimulus pattern with a spatial wavelength overlapping 32 artificial receptive elements. Thin lines: responses for 6 different pattern velocities in a system consisting of parallel neural filters equipped with time constants tuned by this velocity, and followed by a spatially differentiating network as described. 
\nDashed lines: responses to the 6 different pattern velocities in a filtering system with fixed time constants, followed by the same spatial differentiating circuitry as before. Note the sharp over- and undershoots for this case.\n\nResults obtained with an imaging procedure as drawn in figures 4b and 4c are shown in figure 6. The pattern consists of a square wave, overlapping 32 picture elements. The pattern moves (to the left) with 6 different velocities v, 2v, 4v, 8v, 12v, 16v. At each velocity only one wavelength is shown. Thick lines: square wave pattern. Dashed lines: the outputs of an imaging device as depicted in figure 4b: fixed time constants and a constant weighting factor in the spatial processing stage. Note the large differences between the several outputs. Thin continuous lines: the outputs of an imaging device as drawn in figure 4c: time constants tuned according to the reciprocal relationship between pattern velocity and time constant, and a constant weighting factor in the spatial processing stage. For further simulation details the reader is referred to Zaagman et al.21. Now the outputs are almost completely the same and in good agreement with the original stimulus throughout the whole velocity range.\n\nFigure 7 shows the effect of the gradient weighting factor on the overall filter performance, estimated as the improvement of the deblurred images as compared with the blurred image, measured in dB. This quantitative measure has been determined for the case of a moving square wave pattern with motion blur\n\n[Figure 7: improvement in dB plotted against the weighting factor]\n\nFig. 7 Effect of the weighting factor on the overall filter performance. Curve measured for the case of a moving square-wave grating. 
Filter performance is estimated as the improvement in signal to noise ratio:\n\nI = 10\u00b7log10( \u03a3i\u03a3j (v(i,j)-u(i,j))\u00b2 / \u03a3i\u03a3j (O(i,j)-u(i,j))\u00b2 )\n\nwhere u(i,j) is the original intensity at position (i,j) in the image, v(i,j) is the intensity at the same position (i,j) in the motion blurred image and O(i,j) is the intensity at (i,j) in the image generated with the adaptive tuning procedure.\n\nextents comparable to those used for the simulations to be discussed in section IV. From this curve it is apparent that for this situation there is an optimum value for the weighting factor. Keeping the weight close to this optimum value will result in a constant output of our adaptive scheme, thus enabling an optimal deblurring of the smeared image of the moving object.\n\nOn the other hand, starting from the point of view that the time constants should remain fixed throughout the filtering process, we would have had to tune the gradient weights to the velocity in order to produce a constant output, as demonstrated in figure 6, where the dashed lines show strongly differing outputs of a fixed time constant system with spatial processing with constant weight (figure 4b). In other words, tuning of the time constants as proposed in this section results in: 1) the realization of the blur-constancy criterion as formulated previously, and 2), as a consequence, the possibility to deblur the obtained image optimally with one and the same weighting factor of the gradient in the final spatial processing layer over the whole heterodyne velocity range.\n\nCOMPUTER SIMULATION RESULTS AND CONCLUSIONS\n\nThe image quality improvement algorithm developed in the present contribution has been implemented on a general purpose DG Eclipse S/140 minicomputer for our two dimensional simulations. 
Figure 8a shows an undisturbed image, consisting of 256 lines of 256 pixels each, with 8 bit intensity resolution. Figure 8b shows what happens to the original image if the PSF is modelled according to the exponential decay (1). In this case the time constants of all spatial information processing channels have been kept fixed. Again, information content in the higher spatial frequencies has been largely reduced. The implementation of the heterodyne filtering procedure was now done as follows: first the adaptation range was defined by setting the range of velocities. This means that our adaptive heterodyne algorithm is supposed to operate adequately only within the thus defined velocity range and that, in that range, the time constants are tuned according to relationship (2) and will always come out larger than the minimum value \u03c4min. For demonstration purposes we set \u03b1 = 1 and \u03b2 = 1 in eq. (2), thus introducing the phenomenon that, for any velocity, the two dimensional set of spatial filters with time constants tuned by that velocity will always produce a constant output, independent of this velocity which introduces the motion blur. Figure 8c shows this representation. It is important to note here that this constant output has far worse quality than any set of filters with the smallest and fixed time constants \u03c4min would produce for velocities within the operational range. The advantage of a velocity independent output at this level in our simulation model is that in the next stage a differential scheme can be implemented, as discussed in detail in the preceding paragraph. Constancy of the weighting factor which is used in this differential processing scheme is guaranteed by the velocity independence of the obtained image representation.\n\nFigure 8d shows the result of the differential operation with an optimized gradient weighting factor. 
This weighting factor has been optimized on the basis of an almost identical performance curve as described previously in figure 7. A clear and good restoration is apparent from this figure, though close inspection reveals fine structure (especially in areas with high intensities) which is unrelated to the original intensity distribution. These artifacts are caused by the phenomenon that for these high intensity areas possible tuning errors show up much more strongly than for low intensities.\n\n[Figure 8: panels a-d]\n\nFig.8a Original 256x256x8 bit picture.\nFig.8b Motion degraded image with a PSF derived from R(t) = c + a\u00b7e^(-t/\u03c4), where \u03c4 is kept fixed at 12 pixels and the motion blur extent is 32 pixels.\nFig.8c Worst case, i.e. the result of motion degradation of the original image with a PSF as in figure 8b, but with tuning of the time constants based on the velocity.\nFig.8d Restored version of the degraded image using the heterodyne adaptive processing scheme.\n\nIn conclusion: a heterodyne adaptive image processing technique, inspired by the fly visual system, has been presented as an imaging device for moving objects. A scalar difference equation model has been used to represent the motion blur degradation process. Based on the experimental results described and on this state space model, we developed an adaptive filtering scheme which produces at a certain level within the system a constant output, permitting further differential operations in order to produce an optimally deblurred representation of the moving object.\n\nACKNOWLEDGEMENTS\n\nThe authors wish to thank mr. Eric Bosman for his expert programming assistance, mr. Franco Tommasi for many inspiring discussions and advice during the implementation of the simulation model and dr. Rob de Ruyter van Steveninck for experimental help. 
This research was partly supported by the Netherlands Organization for the Advancement of Pure Research (Z.W.O.) through the foundation Stichting voor Biofysica.\n\nREFERENCES\n\n1. K. Hausen, Z. Naturforschung 31c, 629-633 (1976).\n2. N. J. Strausfeld, Atlas of an Insect Brain (Springer Verlag, Berlin, Heidelberg, New York, 1976).\n3. K. Hausen, Biol. Cybern. 45, 143-156 (1982).\n4. R. Hengstenberg, J. Comp. Physiol. 149, 179-193 (1982).\n5. W. H. Zaagman, H. A. K. Mastebroek, J. W. Kuiper, Biol. Cybern. 31, 163-168 (1978).\n6. H. A. K. Mastebroek, W. H. Zaagman, B. P. M. Lenting, Vision Res. 20, 467-474 (1980).\n7. R. R. de Ruyter van Steveninck, W. H. Zaagman, H. A. K. Mastebroek, Biol. Cybern. 54, 223-236 (1986).\n8. W. Reichardt, T. Poggio, Q. Rev. Biophys. 9, 311-377 (1976).\n9. W. Reichardt, in W. Reichardt (Ed.), Processing of Optical Data by Organisms and Machines (Academic Press, New York, 1969), pp. 465-493.\n10. T. Poggio, W. Reichardt, Q. Rev. Biophys. 9, 377-439 (1976).\n11. D. Marr, S. Ullman, Proc. R. Soc. Lond. 211, 151-180 (1981).\n12. J. P. van Santen, G. Sperling, J. Opt. Soc. Am. A 1, 451-473 (1984).\n13. J. L. Harris Sr., J. Opt. Soc. Am. 56, 569-574 (1966).\n14. A. A. Sawchuk, Proc. IEEE 60, No. 7, 854-861 (1972).\n15. M. M. Sondhi, Proc. IEEE 60, No. 7, 842-853 (1972).\n16. N. E. Nahi, Proc. IEEE 60, No. 7, 872-877 (1972).\n17. A. O. Aboutalib, L. M. Silverman, IEEE Trans. on Circuits and Systems CAS-22, 278-286 (1975).\n18. A. O. Aboutalib, M. S. Murphy, L. M. Silverman, IEEE Trans. Automat. Contr. AC-22, 294-302 (1977).\n19. Th. Hildebrand, Biol. Cybern. 36, 229-234 (1980).\n20. S. A. Rajala, R. J. P. de Figueiredo, IEEE Trans. on Acoustics, Speech and Signal Processing ASSP-29, No. 5, 1033-1042 (1981).\n21. W. H. Zaagman, H. A. K. Mastebroek, R. R. de Ruyter van Steveninck, IEEE Trans. Syst. Man Cybern. 
SMC-13, 900-906 (1983).", "award": [], "sourceid": 50, "authors": [{"given_name": "F.", "family_name": "Schuling", "institution": null}, {"given_name": "H.", "family_name": "Mastebroek", "institution": null}, {"given_name": "W.", "family_name": "Zaagman", "institution": null}]}