{"title": "Constrained Optimization Applied to the Parameter Setting Problem for Analog Circuits", "book": "Advances in Neural Information Processing Systems", "page_first": 789, "page_last": 796, "abstract": null, "full_text": "Constrained Optimization Applied to the \n\nParameter Setting Problem for Analog Circuits \n\nDavid Kirk, Kurt Fleischer, Lloyd Watts~ Alan Barr \n\nComputer Graphics 350-74 \n\nCalifornia Institute of Technology \n\nPasadena, CA 91125 \n\nAbstract \n\nWe use constrained optimization to select operating parameters for two \ncircuits: a simple 3-transistor square root circuit, and an analog VLSI \nartificial cochlea. This automated method uses computer controlled mea(cid:173)\nsurement and test equipment to choose chip parameters which minimize \nthe difference between the actual circuit's behavior and a specified goal \nbehavior. Choosing the proper circuit parameters is important to com(cid:173)\npensate for manufacturing deviations or adjust circuit performance within \na certain range. As biologically-motivated analog VLSI circuits become \nincreasingly complex, implying more parameters, setting these parameters \nby hand will become more cumbersome. Thus an automated parameter \nsetting method can be of great value [Fleischer 90]. Automated parameter \nsetting is an integral part of a goal-based engineering design methodology \nin which circuits are constructed with parameters enabling a wide range \nof behaviors, and are then \"tuned\" to the desired behaviors automatically. \n\n1 \n\nIntroduction \n\nConstrained optimization methods are useful for setting the parameters of analog \ncircuits. We present two experiments in which an automated method successfully \nfinds parameter settings which cause our circuit's behavior to closely approximate \nthe desired behavior. These parameter-setting experiments are described in Sec(cid:173)\ntion 3. 
The difficult subproblems encountered were (1) building the electronic setup to acquire the data and control the circuit, and (2) specifying the computation of deviation from desired behavior in a mathematical form suitable for the optimization tools. We describe the necessary components of the electronic setup in Section 2, and we discuss the selection of optimization technique toward the end of Section 3. \n\n*Dept. of Electrical Engineering 116-81 \n\nAutomated parameter setting can be an important component of a system to build accurate analog circuits. The power of this method is enhanced by including appropriate parameters in the initial design of a circuit: we can build circuits with a wide range of behaviors and then \"tune\" them to the desired behavior. In Section 6, we describe a comprehensive design methodology which embodies this strategy. \n\n2 Implementation \n\nWe have assembled a system which allows us to test these ideas. The system can be conceptually decomposed into four distinct parts: \n\ncircuit: an analog VLSI chip intended to compute a particular function. \n\ntarget function: a computational model quantitatively describing the desired behavior of the circuit. This model may have the same parameters as the circuit, or may be expressed in terms of biological data that the circuit is to mimic. \n\nerror metric: compares the target to the actual circuit function, and computes a difference measure. \n\nconstrained optimization tool: a numerical analysis tool, chosen based on the characteristics of the particular problem posed by this circuit. \n\n[Block diagram: the constrained optimization tool supplies parameters to the circuit; the circuit's output and the target function feed the error metric, which returns a difference measure to the tool.] \n\nThe constrained optimization tool uses the error metric to compute the difference between the performance of the circuit and the target function. 
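In outline, the loop can be sketched as below (a minimal sketch with hypothetical names; the circuit here is a stand-in function, where on real hardware it would drive the programmable sources and read the instruments; gradients of the error metric are estimated by finite differences, since only the chip's input/output relation is observable):

```python
# Sketch of the parameter-setting loop: a circuit, a target function,
# an error metric, and a gradient-descent optimizer that estimates the
# gradient by perturbing each parameter (knob) in turn.
def set_parameters(circuit, target, params, inputs,
                   rate=0.01, eps=1e-3, steps=200):
    def error(p):
        # Error metric: squared deviation of the circuit response
        # from the target function over the test inputs.
        return sum((circuit(x, p) - target(x)) ** 2 for x in inputs)

    for _ in range(steps):
        base = error(params)
        grad = []
        for i in range(len(params)):
            bumped = list(params)
            bumped[i] += eps  # perturb one knob at a time
            grad.append((error(bumped) - base) / eps)
        # Adjust each knob downhill along the estimated gradient.
        params = [p - rate * g for p, g in zip(params, grad)]
    return params
```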
It then adjusts the parameters to minimize the error metric, causing the actual circuit behavior to approach the target function as closely as possible. \n\n2.1 A Generic Physical Setup for Optimization \n\nA typical physical setup for choosing chip parameters under computer control has the following elements: an analog VLSI circuit, a digital computer to control the optimization process, computer-programmable voltage/current sources to drive the chip, and computer-programmable measurement devices, such as electrometers and oscilloscopes, to measure the chip's response. \n\nThe combination of all of these elements provides a self-contained environment for testing chips. The setting of parameters can then be performed at whatever level of automation is desirable. In this way, all inputs to the chip and all measurements of the outputs can be controlled by the computer. \n\n3 The Experiments \n\nWe perform two experiments to set parameters of analog VLSI circuits using constrained optimization. The first experiment is a simple one-parameter system, a 3-transistor \"square root\" circuit. The second experiment uses a more complex time-varying multi-parameter system, an analog VLSI electronic cochlea. The artificial cochlea is composed of cascaded 2nd-order section filters. \n\n3.1 Square Root Experiment \n\nIn the first experiment we examine a \"square-root\" circuit [Mead 89], which actually computes ax^α + b, where the exponent α is typically near 0.4. We introduce a parameter (V) into this circuit which varies α indirectly. By adjusting the voltage V in the square root circuit, as shown in Figure 1(a), we can alter the shape of the response curve. \n\nFigure 1: (a) Square root circuit. (b) Resulting fit of the chip data to a sqrt(x)+b curve. \n\nWe have little control over the values of a and b in this circuit, so we choose an error metric which optimizes α, targeting a curve which has a slope of 0.5 in log-log Iin vs. Iout space. Since b << a√x, we can safely ignore b for the purposes of this parameter-setting experiment. The entire optimization process takes only a few minutes for this simple one-parameter system. Figure 1(b) shows the final results of the square root computation, with the circuit output normalized by a and b. \n\n3.2 Analog VLSI Cochlea \n\nAs an example of a more complex system on which to test the constrained optimization technique, we chose a silicon cochlea, as described by [Lyon 88]. The silicon cochlea is a cascade of lowpass second-order filter sections arranged such that the natural frequency τ of the stages decreases exponentially with distance into the cascade, while the quality factor Q of the filters is the same for each section (tap). The value of Q determines the peak gain at each tap. \n\nFigure 2: Cochlea circuit. \n\nTo specify the performance of such a cochlea, we need to specify the natural frequencies of the first and last taps, and the peak gain at each tap. These performance parameters are controlled by bias voltages VτL, VτR, and VQ, respectively. The parameter-setting problem for this circuit is to find the bias voltages that give the desired performance. This optimization task is more lengthy than the square root optimization. 
Each measurement of the frequency response takes a few minutes, since it is composed of many individual instrument readings. \n\n3.2.1 Cochlea Results \n\nThe results of our attempts to set parameters for the analog VLSI cochlea are quite encouraging. \n\nFigure 3: Error metric trajectories for gradient descent on cochlea (first and last tap, error metric vs. optimization step). \n\nFigure 3 shows the trajectories of the error metrics for the first and last tap of the cochlea. Most of the progress is made in the early steps, after which the optimization is proceeding along the valley of the error surface, shown in Figure 5. \n\nFigure 4: Target frequency response and gradient descent optimized data for cochlea (goal and measured curves for the first and last taps). \n\nFigure 4 shows both the target frequency response data and the frequency responses which result from our chosen parameter settings. The curves are quite similar, and the differences are at the scale of measurement noise and instrument resolution in our system. \n\n3.2.2 Cochlea Optimization Strategies \n\nWe explored several optimization strategies for finding the best parameters for the electronic cochlea. 
Of these, two are of particular interest: \n\nspecial knowledge: use a priori knowledge of the effect of each knob to guide the optimization. \n\ngradient descent: assume that we know nothing except the input/output relation of the chip. Then we can estimate the gradient for gradient descent by varying the inputs. Robust numerical techniques such as conjugate gradient can also be helpful when the energy landscape is steep. \n\nWe found the gradient descent technique to be reliable, although it did not converge nearly as quickly as the \"special knowledge\" optimization. This corresponds with our intuition that any special knowledge we have about the circuit's operation will aid us in setting the parameters. \n\n4 Choosing An Appropriate Optimization Method \n\nOne element of our system which has worked without much difficulty is the optimization. However, more complex circuits may require more sophisticated optimization methods. A wide variety of constrained optimization algorithms exist which are effective on particular classes of problems (gradient descent, quasi-Newton, simulated annealing, etc.) [Platt 89, Gill 81, Press 86, Fleischer 90], and we can choose a method appropriate to the problem at hand. Techniques such as simulated annealing can find optimal parameter combinations for multi-parameter systems with complex behavior, which gives us confidence that our methods will work for more complex circuits. \n\nFigure 5: The error surface for the error metric for the frequency response of the first tap of the cochlea. Note the narrow valley in the error surface. Our target (the minimum) lies near the far left, at the deepest part of the valley. \n\nThe choice of error metric may also need to be reconsidered for more complex circuits. For systems with time-varying signals, we can use an error metric which captures the time course of the signal. 
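One simple form of such a metric is sketched below (names are hypothetical; measure(t) stands in for an instrument reading of the chip output at time t): the mean squared deviation between the time-sampled circuit response and a target waveform, with repeated readings averaged to suppress instrument noise.

```python
# Sketch of an error metric for time-varying signals: mean squared
# deviation between a time-sampled circuit response and a target
# waveform, averaged over repeated sweeps to reduce sensitivity to
# measurement noise.
def time_course_error(measure, target_wave, times, repeats=4):
    total = 0.0
    for t in times:
        # Average repeated readings to suppress instrument noise.
        reading = sum(measure(t) for _ in range(repeats)) / repeats
        total += (reading - target_wave(t)) ** 2
    return total / len(times)
```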
We can deal with hysteresis by beginning at a known state and following the same path for each optimization step. Noisy and non-smooth functions can be handled by averaging data and by using robust numerical techniques which are less sensitive to noise. \n\n5 Conclusions \n\nThe constrained optimization technique works well when a well-defined goal for chip operation can be specified. We can compare automated parameter setting with adjustment by hand: humans often fail in the same situations where optimization fails (e.g., multiple local minima). In contrast, in higher-dimensional spaces, hand adjustment is very difficult, while an optimization technique may succeed. We expect to integrate the technique into our chip development process, and future developments will move the optimization and learning process gradually into the chip. It is interesting to note that our gradient descent method \"learns\" the parameters of the chip in a manner similar to backpropagation. Seen from this perspective, this work is a step on the path toward robust on-chip learning. \n\nIn order to use this technique, there are two moderately difficult problems to address. First, one must assemble and interface the equipment to set parameters and record results from the circuit under computer control (e.g., voltage and current sources, electrometer, digital oscilloscope). This is a one-time cost, since a similar setup can be used for many different circuits. A more difficult issue is how to specify the target function of a circuit, and how to compute the error metric. For example, in the simple square-root circuit, one might be more concerned about behavior in a particular region, or perhaps along the entire range of operation. 
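A weighting function makes that choice explicit. The sketch below (hypothetical names; the emphasis region is purely illustrative) weights squared deviations according to where accuracy matters most:

```python
# Error metric that emphasizes chosen operating regions. weight(x) is
# large where accuracy matters most and small (or zero) where
# deviations are acceptable.
def weighted_error(circuit_out, target_out, inputs, weight):
    return sum(weight(x) * (circuit_out(x) - target_out(x)) ** 2
               for x in inputs)

# Illustrative choice: weight the region below 1 uA twice as heavily.
def emphasis(x):
    return 2.0 if x < 1e-6 else 1.0
```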
Care must be taken to ensure that the combination of the target model and the error metric accurately describes the desired behavior of the circuit. \n\nThe existence of an automated parameter setting mechanism opens up a new avenue for producing accurate analog circuits. The goal of accurately computing a function differs from the approach of providing a cheap (simple) circuit which loosely approximates the function [Gilbert 68, Mead 89]. By providing appropriate parameters in the design of a circuit, we can ensure that the desired function is in the domain of possible circuit behaviors (given expected manufacturing tolerances). Thus we define the domain of the circuit in anticipation of the parameter setting apparatus. The optimization methods will then be able to find the best solution in the domain, which can potentially be accurate to a high degree of precision. \n\n6 The Goal-based Engineering Design Technique \n\nThe results of our optimization experiments suggest the adoption of a comprehensive Goal-based Engineering Design Technique that directly affects how we design and test chips. \n\nOur results change the types of circuits we will try to build. The optimization techniques allow us to aggressively design and build ambitious circuits and more frequently have them work as expected, meeting our design goals. As a corollary, we can confidently attack larger and more interesting problems. \n\nThe technique is composed of the following four steps: \n\n1) goal-setting: identify the target function, or behavioral goals, of the design. \n\n2) circuit design: design the circuit with \"knobs\" (adjustable parameters) in it, attempting to make sure the desired (target) circuit behavior is in the gamut of the actual circuit, given expected manufacturing variation and device characteristics. \n\n3) optimization plan: devise an optimization strategy to explore parameter settings. 
This includes capabilities such as a digital computer to control the optimization, and computer-driven instruments which can apply voltages/currents to the chip and measure voltage/current outputs. \n\n4) optimization: use the optimization procedure to select parameters that minimize the deviation of actual circuit performance from the target function. The optimization may make use of special knowledge about the circuit, such as \"I know that this knob has effect x,\" or interaction, such as \"I know that this is a good region, so explore here.\" \n\n[Block diagram: design goals feed both the circuit design and the optimization plan; the circuit design produces the circuit, on which the optimization plan operates.] \n\nThe goal-setting process produces design goals that influence both the circuit design and the form of the optimization plan. It is important to produce a match between the design of the circuit and the plan for optimizing its parameters. \n\nAcknowledgements \n\nMany thanks to Carver Mead for ideas, encouragement, and support for this project. Thanks also to John Lemoncheck for help getting our physical setup together. Thanks to Hewlett-Packard for an equipment donation. This work was supported in part by an AT&T Bell Laboratories Ph.D. Fellowship. Additional support was provided by NSF (ASC-89-20219). All opinions, findings, conclusions, or recommendations expressed in this document are those of the authors and do not necessarily reflect the views of the sponsoring agencies. \n\nReferences \n\n[Fleischer 90] Fleischer, K., J. Platt, and A. Barr, \"An Approach to Solving the Parameter Setting Problem,\" IEEE/ACM 23rd Intl. Conf. on System Sciences, January 1990. \n\n[Gilbert 68] Gilbert, B., \"A Precise Four-Quadrant Multiplier with Sub-nanosecond Response,\" IEEE Journal of Solid-State Circuits, SC-3:365, 1968. \n\n[Gill 81] Gill, P. E., W. Murray, and M. H. Wright, \"Practical Optimization,\" Academic Press, 1981. 
\n\n[Lyon 88] Lyon, R. F., and C. A. Mead, \"An Analog Electronic Cochlea,\" IEEE Trans. Acoustics, Speech, and Signal Processing, Vol. 36, No. 7, July 1988, pp. 1119-1134. \n\n[Mead 89] Mead, C. A., \"Analog VLSI and Neural Systems,\" Addison-Wesley, 1989. \n\n[Platt 89] Platt, J. C., \"Constrained Optimization for Neural Networks and Computer Graphics,\" Ph.D. Thesis, California Institute of Technology, Caltech-CS-TR-89-07, June 1989. \n\n[Press 86] Press, W., B. Flannery, S. Teukolsky, and W. Vetterling, \"Numerical Recipes: The Art of Scientific Computing,\" Cambridge University Press, Cambridge, 1986. \n", "award": [], "sourceid": 496, "authors": [{"given_name": "David", "family_name": "Kirk", "institution": null}, {"given_name": "Kurt", "family_name": "Fleischer", "institution": null}, {"given_name": "Lloyd", "family_name": "Watts", "institution": null}, {"given_name": "Alan", "family_name": "Barr", "institution": null}]}