{"title": "An Analog Implementation of the Constant Average Statistics Constraint For Sensor Calibration", "book": "Advances in Neural Information Processing Systems", "page_first": 699, "page_last": 705, "abstract": null, "full_text": "An Analog Implementation of the \n\nConstant Statistics Constraint \n\nFor Sensor Calibration \n\nJohn G. Harris and Yu-Ming Chiang \n\nComputational Neuro-Engineering Laboratory \n\nDepartment of Computer and Electrical Engineering \n\nUniversity of Florida \nGainesville, FL 32611 \n\nAbstract \n\nWe use the constant statistics constraint to calibrate an array of \nsensors that contains gain and offset variations. This algorithm has \nbeen mapped to analog hardware and designed and fabricated with \na 2um CMOS technology. Measured results from the chip show that \nthe system achieves invariance to gain and offset variations of the \ninput signal. \n\n1 \n\nIntroduction \n\nTransistor mismatches and parameter variations cause unavoidable nonuniformities \nfrom sensor to sensor. A one-time calibration procedure is normally used to coun(cid:173)\nteract the effect of these fixed variations between components. Unfortunately, many \nof these variations fluctuate with time--either with operating point (such as data(cid:173)\ndependent variations) or with external conditions (such as temperature). Calibrat(cid:173)\ning these sensors one-time only at the \"factory\" is not suitable-much more frequent \ncalibration is required. The sensor calibration problem becomes more challenging as \nan increasing number of different types of sensors are integrated onto VLSI chips at \nhigher and higher integration densities. Ullman and Schechtman studied a simple \ngain adjustment algorithm but their method provides no mechanism for canceling \nadditive offsets [10]. 
Scribner has addressed this nonuniformity correction problem in software using a neural network technique, but it would be difficult to integrate such a complex solution into analog hardware [9]. A number of researchers have studied sensors that output the time-derivative of the signal [9][4]. A simple time derivative cancels any additive offset in the signal but also loses the DC component and most of the low-frequency temporal information present. The offset-correction method proposed in this paper, in effect, uses a time-derivative with an extremely long time constant, thereby preserving much of the low-frequency information in the signal. Moreover, even if an ideal time-derivative approximation is used to cancel additive offsets, the standard-deviation process described in this paper can still be used to factor out gain variations.

We hope to obtain some clues for sensory adaptation from neurobiological systems, which possess a tremendous ability to adapt to the surrounding environment at multiple time-scales and at multiple stages of processing. Consider the following experiments:

• After staring at a single curved line for ten minutes, human subjects report that the amount of curvature perceived appears to decrease. Immediately after training, the subjects were shown a straight line and perceived it as slightly curved in the opposite direction [5].

• After staring long enough at an object in continuous motion, the motion seems to decrease with time. Immediately after adaptation, subjects perceive motion in the opposite direction when looking at stationary objects. This experiment is called the waterfall effect [2].

• Colors tend to look less saturated over time. Color after-images are perceived containing exactly the opponent colors of the original scene [1].
Though the purpose of these biological adaptation mechanisms is not fully understood, some theories suggest that they allow fine-tuning of the visual system through long-term averaging of measured visual parameters [10]. We apply such continuous-calibration procedures to VLSI sensor calibration.

The real-world variable x(t) is transduced by a nonlinear response curve into a measured variable y(t). For a single operating point, the linear approximation can be written as:

    y(t) = a x(t) + b                                            (1)

with a and b being the multiplicative gain and additive offset, respectively. The gain and offset values vary from pixel to pixel and may vary slowly over time. Current infrared focal plane arrays (IRFPAs) are limited by their inability to calibrate out component variations [3]. Typically, off-board digital calibration is used to correct nonuniformities in these detector arrays: special calibration images are used to calibrate the system at startup. One-time calibration procedures such as these do not account for other operating points and will fail to recalibrate for any drift in the parameters.

2 Implementing Natural Constraints

A continuous calibration system must take advantage of natural constraints available during the normal operation of the sensors. One theory holds that biological systems adapt to the long-term average of the stimulus. For example, the three psychophysical examples mentioned above (curvature, motion, and color adaptation) may rely on the following constraints:

• The average line is straight.

• The average motion is zero.

• The average color is gray.

The system adapts over time in the direction of this average, where the average must be taken over a very long time: from minutes to hours.
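The pixel-to-pixel nonuniformity described by the linear model of Equation 1 can be simulated with a minimal sketch; the pixel count and the spreads of the gains and offsets below are illustrative assumptions, not values from the paper.

```python
import random

# Hypothetical per-pixel model of Equation 1: y = a*x + b, with the
# gain a and offset b drawn once per pixel (illustrative spreads).
random.seed(0)
NUM_PIXELS = 4
gains   = [1.0 + 0.2 * (random.random() - 0.5) for _ in range(NUM_PIXELS)]
offsets = [0.1 * (random.random() - 0.5) for _ in range(NUM_PIXELS)]

def sense(x, pixel):
    """Measured value y for true input x at the given pixel."""
    return gains[pixel] * x + offsets[pixel]

# Two pixels viewing the same scene report different values:
print(sense(1.0, 0), sense(1.0, 1))
```

Even for an identical input x, each pixel reports a different y; this is the variation the constant statistics constraint must calibrate out.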
We use two additional constraints for offset/gain normalization, namely:

• The average pixel intensities are identical.

• The variances of the input for each pixel are all identical.

Each of these constraints assumes that the photoarray is periodically moving in the real world and that the average statistics each pixel sees should be constant when averaged over a very long time. In pathological situations where humans or machines are forced to stare at a single static scene for a long time, this assumption is violated.

We estimate the time-varying mean and variance by using an exponentially shaped window into the past. The equations for the mean and variance are:

    m(t) = (1/T) ∫₀^∞ y(t − Δ) e^(−Δ/T) dΔ                       (2)

and

    s(t) = (1/T) ∫₀^∞ |y(t − Δ) − m(t − Δ)| e^(−Δ/T) dΔ          (3)

The m(t) and s(t) in Equations 2 and 3 can be expressed as low-pass filters with inputs y(t) and |y(t) − m(t)|, respectively. To simplify the hardware implementation further, we chose the L1 (absolute value) definition of variance instead of the more usual L2 definition. The L1 definition is an equally acceptable measure of signal variation in terms of the complete calibration system. Using this definition, no squares or square roots need be calculated. An added benefit of the L1 norm is that it provides robustness to outliers in the estimation.

A zero-mean, unity-variance¹ signal can then be produced with the following shift/normalization formula:

    x(t) = (y(t) − m(t)) / s(t)                                  (4)

Equations 2, 3, and 4 constitute a new algorithm for continuously calibrating systems with gain and offset variations. Note that without additional a priori knowledge about the values of the gains and offsets, it is impossible to recover the true value of the signal x(t) given an infinite history of y(t).
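The exponentially windowed estimators of Equations 2 and 3 can be sketched in discrete time as first-order IIR low-pass filters; the mixing factor alpha and the eps divide-by-zero guard below are assumptions of this software sketch, not properties of the paper's circuit.

```python
import math

# Minimal discrete-time sketch of Equations 2-4. Each exponential
# window becomes a first-order IIR low-pass filter; alpha plays the
# role of dt/T (an assumption of this sketch).
def calibrate(samples, alpha=0.01, eps=1e-9):
    """Return the normalized signal x = (y - m) / s per Equations 2-4."""
    m = samples[0]   # running mean estimate m(t), Eq. 2
    s = eps          # running L1 "variance" estimate s(t), Eq. 3
    out = []
    for y in samples:
        m += alpha * (y - m)             # low-pass filter of y(t)
        s += alpha * (abs(y - m) - s)    # low-pass filter of |y - m|
        out.append((y - m) / (s + eps))  # shift/normalize, Eq. 4
    return out

# A sine wave with gain 3 and offset 2 is driven back toward
# zero mean and constant amplitude:
y = [3.0 * math.sin(0.1 * n) + 2.0 for n in range(20000)]
x = calibrate(y)
tail = x[-5000:]
print(sum(tail) / len(tail))  # near zero after convergence
```

With the L1 definition of variance, the converged output swing settles near π/2 rather than 1, but it is the same constant for every input gain and offset, which is all the calibration requires.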
This is an ill-posed problem even with fixed but unknown gain and offset parameters for each sensor. All that can be done is to calibrate each sensor output to have zero offset and unity variance. After calibration, all sensors would therefore have the same offset and variance when averaged over a long time. The fundamental assumption embedded in this algorithm is that each sensor measures real-world quantities with the same statistical properties (e.g., mean and variance). For example, this would mean that all pixels in a camera should eventually see the same average intensity when integrated over a long enough time. This assumption leads to other system-level constraints; in this case, the camera must be periodically moving.

¹ For simplicity, the signal s(t) will be called the variance estimate throughout the rest of this paper, even though technically s(t) is neither the variance nor the standard deviation.

Figure 1: Left: block diagram of the continuous-time calibration system. Right: schematic of the divider circuit.

We have successfully demonstrated this algorithm in software for the case of nonuniformity (gain and offset) correction of images [6]. Since there may be potentially thousands of sensors per chip, it is desirable to build calibration circuitry using subthreshold analog MOS technology to achieve ultra-low power consumption [8]. The next section describes the analog VLSI implementation of this algorithm.

3 Continuous-time calibration circuit

The block diagram of the continuous-time gain and offset calibration circuit is shown in Figure 1a.
This system includes three building blocks: a mean estimation circuit, a variance estimation circuit, and a divider circuit. As shown, the mean of the signal can be extracted easily by an RC low-pass filter circuit.

Figure 2 shows the schematic of the variance estimation circuit. A full-wave rectifier [8] operating in the subthreshold region is used to obtain the absolute value of the difference between the input and its mean. In the linear region, the current Iout is proportional to |y(t) − m(t)|. As indicated in Equation 3, Iout has to be low-pass filtered to obtain s(t). In Figure 2, transconductance amplifiers A3, A4 and capacitor C2 form a current-mode low-pass filter. For signals in the linear region, we can derive the Laplace transform of V1 as:

    V1(s) = [R / (R C2 s + 1)] Iout(s)                           (5)

which is a first-order low-pass filter for Iout. The value of R is a function of several fabrication constants and an adjustable bias current.

Figure 2: Variance estimation circuit. The triangle symbols represent 5-transistor transconductance amplifiers that output a current proportional to the difference of their inputs.

Figure 3(a) shows the expected linear relationship between the measured variance s(t) and the peak-to-peak amplitude of the sine-wave input.

The third building block in the calibration system is the divider circuit shown in Figure 1b. A fed-back multiplier is used to enforce the constraint that y(t) − m(t) is proportional to x(t)s(t), which results in a scaled version of Equation 4.
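As a quick numerical check of the transfer function in Equation 5, the sketch below evaluates its magnitude response; the component values R and C2 are illustrative placeholders, not the values realized on the fabricated chip, where R is set by bias current and fabrication constants.

```python
import math

# Hypothetical component values (illustrative only; on the chip, R is
# an effective resistance set by an adjustable bias current).
R = 1e9      # effective resistance, ohms
C2 = 10e-12  # capacitor, farads

def gain(f):
    """Magnitude of V1(s)/Iout(s) = R / (R*C2*s + 1) at s = j*2*pi*f."""
    s = complex(0.0, 2.0 * math.pi * f)
    return abs(R / (R * C2 * s + 1.0))

fc = 1.0 / (2.0 * math.pi * R * C2)  # -3 dB corner frequency of Eq. 5
print(fc)             # corner frequency in Hz
print(gain(fc) / R)   # 1/sqrt(2) at the corner, as expected
```

The response is flat (transresistance R) well below fc and rolls off at first order above it, which is exactly the exponential averaging window called for by Equation 3.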
The characteristics of the divider have been measured and are shown in Figure 3(b). With a fixed Vb6 and m(t), we sweep y(t) from m(t) − 0.3V to m(t) + 0.3V and measure the change of the output. A family of input/output characteristics with s(t) = 20, 25, 30, 40, 50, 60, and 70nA is shown in Figure 3(b). The divider circuit has been tested up to frequencies of 45kHz.

The first version of the calibration circuit has been designed and fabricated in a 2um CMOS technology. The chip includes the major parts of the calibration circuit: the variance estimation circuit and the divider circuit. In our initial design, the mean estimation circuit, which is simply an RC low-pass filter, was built off-chip. However, it can easily be integrated on-chip using a transconductance amplifier and a capacitor.

The calibration results for a signal with gain and offset variations are shown in Figure 4. The input signal is a sine wave with a severe gain and offset jump, as shown at the top of Figure 4. The middle of Figure 4 illustrates the convergence of the variance estimation. It takes a short time for the circuit to converge after any change in the mean or variance of the input signal. At the bottom of Figure 4, we show the calibrated signal produced by the chip. The output eventually converges to a zero-mean, constant-height sine wave independent of the values of the DC offset and amplitude of the input sine wave. Additional experiments have shown that with the input amplitude changing from 20mV to 90mV, the measured output amplitude varies by less than 3mV. Similarly, when the DC offset is varied from 1.5V to 3.5V, the amplitude of the output varies by less than 5mV. These

[Figure 3: (a) measured variance s(t) versus peak-to-peak amplitude (mV) of the sine-wave input; (b) divider input/output characteristics.]
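The Figure 4 experiment can be re-created qualitatively in software; the sample counts, signal levels, and filter constant below are assumptions of this sketch, not measured chip parameters.

```python
import math

# Qualitative software re-creation of the Figure 4 experiment: a sine
# wave whose amplitude and DC offset both jump halfway through the run.
def run(samples, alpha=0.01, eps=1e-9):
    m, s, out = samples[0], eps, []
    for y in samples:
        m += alpha * (y - m)             # mean estimate (Eq. 2)
        s += alpha * (abs(y - m) - s)    # L1 variance estimate (Eq. 3)
        out.append((y - m) / (s + eps))  # calibrated output (Eq. 4)
    return out

n = 40000
y = [(0.02 if i < n // 2 else 0.09) * math.sin(0.05 * i)
     + (1.5 if i < n // 2 else 3.5) for i in range(n)]
x = run(y)

# After each segment settles, the output swing is nearly the same even
# though the input amplitude and offset both jumped:
a1 = max(x[n // 2 - 4000 : n // 2])
a2 = max(x[-4000:])
print(a1, a2)
```

As on the chip, the normalized output briefly overshoots after the jump while m(t) and s(t) re-converge, then returns to a zero-mean, constant-height sine wave independent of the input's gain and offset.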