{"title": "Analog Circuits for Constrained Optimization", "book": "Advances in Neural Information Processing Systems", "page_first": 777, "page_last": 784, "abstract": null, "full_text": "Analog Circuits for Constrained Optimization \n\n777 \n\nA nalog Circuits for Constrained Optimization \n\nJohn C. Platt 1 \n\nComputer Science Department, 256-80 \n\nCalifornia Institute of Technology \n\nPasadena, CA 91125 \n\nABSTRACT \n\nThis paper explores whether analog circuitry can adequately per(cid:173)\nform constrained optimization. Constrained optimization circuits \nare designed using the differential multiplier method. These cir(cid:173)\ncuits fulfill time-varying constraints correctly. Example circuits in(cid:173)\nclude a quadratic programming circuit and a constrained flip-flop. \n\nINTRODUCTION \n\n1 \nConverting perceptual and cognitive tasks into constrained optimization problems \nis a useful way of generating neural networks to solve those tasks. Researchers have \nused constrained optimization networks to solve the traveling salesman problem \n[Durbin, 1987] [Hopfield, 1985], to perform object recognition [Gindi, 1988], and to \ndecode error-correcting codes [Platt, 1986]. \nImplementing constrained optimization in analog VLSI is advantageous, because an \nanalog VLSI chip can solve a large number of differential equations in parallel [Mead, \n1989]. However, analog circuits only approximate the desired differential equations. \nTherefore, we have built test circuits to determine whether analog circuits can fulfill \nuser-specified constraints. \n\n2 THE DIFFERENTIAL MULTIPLIER METHOD \nThe differential multiplier method (DMM) is a method for creating differential equa(cid:173)\ntions that perform constrained optimization. The DMM was originally proposed \nby [Arrow, 1958] as an economic model. It was used as a neural network by [Platt, \n1987]. 
\n\n1 Current address: Synaptics, 2860 Zanker Road, Suite IDS, San Jose, CA 95134 \n\n\f778 \n\nPlatt \n\n_gf \n\n~ \n\n_\u00a3f \n\nX \n\nI \nI \n--\n-\n\ng \n\n~ A \n~V \nI \n-\n--\n\nFigure 1. The architecture of the DMM. The x capacitor in the figure repre(cid:173)\nsents the Xi neurons in the network. The - f' box computes the current needed for \nthe neurons to minimize f . The rest of the circuitry causes the network to fulfill \nthe constraint g( i) = o. \n\nx \n\ny \n\nG3 \n\nFigure 2. A circuit that implements quadratic programming. x, y, and A are \n\nvoltages. \"Te\" refers to a transconductance amplifier. \n\n\fAnalog Circuits for Constrained Optimization \n\n779 \n\nA constrained optimization problem is find a x such that I(x) is minimized subject \nto a constraint g(x) = O. In order to find a constrained minimum, the DMM finds \nthe critical points (x, A) of the Lagrangian \n\n& = I(x) + Ag(i), \n\n(1) \n\nby performing gradient descent on the variables x and gradient ascent on the La(cid:173)\ngrange multiplier A: \n\n0 I \ndXi _ \n--------A-, \ndt \nOXi \ndA \n_ \ndt = + OA = g(x). \n\n0& _ \nOXi \n0& \n\n\\ og \nOXi \n\n(2) \n\nThe DMM can be thought of as a neural network which performs gradient descent \non a function I(x), plus feedback circuitry to find the A that causes the neural \nnetwork output to fulfill the constraint g(i) = 0 (see figure 1). \nThe gradient ascent on the A is necessary for stability. The stability can be exam(cid:173)\nined by combining the two equations (2) to yield a set of second-order differential \nequations \n\n(3) \n\nwhich is analogous to the equations that govern a spring-mass-damping system. \nThe differential equations (3) converge to the constrained minima if the damping \nmatrix \n\n(4) \n\nis positive definite. \nThe DMM can be extended to satisfy multiple simultaneous constraints. The sta(cid:173)\nbility of the DMM can also be improved. See [Platt, 1987] for more details. 
\n\n3 QUADRATIC PROGRAMMING CIRCUIT \nThis section describes a circuit that solves a specific quadratic programming prob(cid:173)\nlem for two variables. A quadratic programming circuit is interesting, because the \nbasic differential multiplier method is guaranteed to find the constrained minimum. \nAlso, quadratic programming is useful: it is frequently a sub-problem in a more \ncomplex task. A method of solving general nonlinear constrained optimization is \nsequential quadratic programming [Gill, 1981]. \nWe build a circuit to solve a time-dependent quadratic programming problem for \ntwo variables: \n\nminA(x - XO)2 + B(y - YO)2, \n\nsubject to the constraint \n\nex + Dy + E(t) = O. \n\n(5) \n\n(6) \n\n\f780 \n\nPlatt \n\nConstraint Fulfillment for Quadratic Programming \n\n0.2 \n\nobserved, target (V) 0.0 \n\n-0.2 \n\n0.0 \n\n~, \nI \nI \nI \nI \nI \nI \n\nI \nI \nI \nI \nI \nI \n~ \n\n0.8 \n\n0.4 \nTime (10- 2 Sec) \n\n1.2 \n\n1.6 \n\n2.0 \n\nFigure 3. Plot of two input voltages of transconductance amplifier. The \ndashed line is the externally applied voltage E(t). The solid line is the circuit's \nsolution of -Cx - Dy. The constraint depends on time: the voltage E(t) is a \nsquare wave. The linear constraint is fulfilled when the two voltages are the same. \nWhen E(t) changes suddenly, the circuit changes -Cx - Dy to compensate. The \nunusually shaped noise is caused by digitization by the oscilloscope. \n\nConstraint Fulfillment with Ringing \n\nobserved, target (V) \n\n0.3 \n\n0.1 \n\n-0.1 \n\n-0.3 \n\n0.0 \n\n1.0 \n\n2.0 \n\nTime (10- 2 Sec) \n\n3.0 \n\n4.0 \n\nFigure 4. Plot of two input voltages of transconductance amplifier: the con(cid:173)\n\nstraint forces are increased, which causes the system to undergo damped oscillations \naround the constraint manifold. 
\n\n\fAnalog Circuits for Constrained Optimization \n\n781 \n\nThe basic differential multiplier method converts the quadratic programming prob(cid:173)\nlem into a system of differential equations: \n\ndx \n\nkl dt = -2Ax + 2Axo - C).., \n\ndy \n\nk2 dt = -2By + 2Byo - D)\", \n\nd)\" \n\nk3 dt = ex + Dy + E(t). \n\n(7) \n\nThe first two equations are implemented with a resistor and capacitor (with a fol(cid:173)\nlower for zero output impedance). The third is implemented with resistor summing \ninto the negative input of a transconductance amplifier. The positive input of the \namplifier is connected to E(t). \nThe circuit in figure 2 implements the system of differential equations \n\n(8) \n\nwhere K is the transconductance of the transconductance amplifier. The two sys(cid:173)\ntems of differential equations (7) and (8) can match with suitably chosen constants. \nThe circuit in figure 2 actually performs quadratic programming. The constraint is \nfulfilled when the voltages on the inputs of the transconductance amplifier are the \nsame. The 9 function is a difference between these voltages. Figure 3 is a plot of \n-Cx - Dy and E(t) as a function of time: they match reasonably well. The circuit \nin figure 2 therefore successfully fulfills the specified constraint. \nDecreasing the capacitance C3 changes the spring constant of the second-order dif(cid:173)\nferential equation. The forces that push the system towards the constraint manifold \nare increased without changing the damping. Therefore, the system becomes un(cid:173)\nderdamped and the constraint is fulfilled with ringing (see figure 4). \nThe circuit in figure 2 can be easily expanded to solve general quadratic program(cid:173)\nming for N variables: simply add more Xi neurons) and interconnect them with \nresistors. \n4 CONSTRAINED FLIP-FLOP \nA flip-flop is two inverters hooked together in a ring. It is a bistable circuit: one \ninverter is on while the other inverter is off. 
A flip-flop can also be considered the simplest neural network: two neurons which inhibit each other. If the inverters have infinite gain, then the flip-flop in figure 5 minimizes the function in equation (9). \n\nFigure 5. A flip-flop. U_1 and U_2 are voltages. \n\nFigure 6. A circuit for constraining a flip-flop. U_1, U_2, and λ are voltages. \n\nFigure 7. Constraint fulfillment for a non-quadratic optimization function. The plot consists of the two input voltages of the transconductance amplifier. Again, E(t) is the dashed line and -Cx - Dy is the solid line. The constraint is fulfilled when the two voltages are the same. As the constraint changes with time, the flip-flop changes state and the location of the constrained minimum changes abruptly. After the abrupt change, the constraint is temporarily not fulfilled. However, the circuit quickly fulfills the constraint. The temporary violation of the constraint causes the transient spikes in the -Cx - Dy voltage. \n\nNow, we can construct a circuit that minimizes the function in equation (9), subject to some linear constraint Cx + Dy + E(t) = 0, where x and y are the inputs to the inverters. The circuit diagram is shown in figure 6. Notice that this circuit is very similar to the quadratic programming circuit. Now, the x and y circuits are linked with a flip-flop, which adds non-quadratic terms to the optimization function. \n\nThe voltages -Cx - Dy and E(t) for this circuit are plotted in figure 7. For most of the time, -Cx - Dy is close to the externally applied voltage E(t). 
However, because G_1 ≠ G_4 and G_2 ≠ G_5, the flip-flop moves from one minimum to the other and the constraint is temporarily violated. But the circuitry gradually enforces the constraint again. The temporary constraint violation can be seen in figure 7. \n\n5 CONCLUSIONS \n\nThis paper examines real circuits that have been constrained with the differential multiplier method. The differential multiplier method seems to work, even when the underlying circuit is non-linear, as in the case of the constrained flip-flop. Other papers examine applications of the differential multiplier method [Platt, 1987] [Gindi, 1988]. These applications could be built with the same parallel analog hardware discussed in this paper. \n\nAcknowledgement \n\nThis paper was made possible by funding from AT&T Bell Labs. Hardware was provided by Carver Mead, and Synaptics, Inc. \n\nReferences \n\nArrow, K., Hurwicz, L., Uzawa, H., [1958], Studies in Linear and Non-Linear Programming, Stanford University Press, Stanford, CA. \n\nDurbin, R., Willshaw, D., [1987], \"An Analogue Approach to the Travelling Salesman Problem,\" Nature, 326, 689-691. \n\nGill, P. E., Murray, W., Wright, M. H., [1981], Practical Optimization, Academic Press, London. \n\nGindi, G., Mjolsness, E., Anandan, P., [1988], \"Neural Networks for Model Matching and Perceptual Organization,\" Advances in Neural Information Processing Systems 1, 618-625. \n\nHopfield, J. J., Tank, D. W., [1985], \"'Neural' Computation of Decisions in Optimization Problems,\" Biol. Cybern., 52, 141-152. \n\nMead, C. A., [1989], Analog VLSI and Neural Systems, Addison-Wesley, Reading, MA. \n\nPlatt, J. C., Hopfield, J. J., [1986], \"Analog Decoding with Neural Networks,\" Neural Networks for Computing, Snowbird, UT, 364-369. \n\nPlatt, J. C., Barr, A., [1987], \"Constrained Differential Optimization,\" Neural Information Processing Systems, 612-621. 
\n\n\f", "award": [], "sourceid": 245, "authors": [{"given_name": "John", "family_name": "Platt", "institution": null}]}