{"title": "Neural Network Computation by In Vitro Transcriptional Circuits", "book": "Advances in Neural Information Processing Systems", "page_first": 681, "page_last": 688, "abstract": null, "full_text": " Neural network computation by\n in vitro transcriptional circuits\n\n\n\n Jongmin Kim1, John J. Hopfield3, Erik Winfree2\n Biology1, CNS and Computer Science2, California Institute of Technology.\n Molecular Biology3, Princeton University.\n {jongmin,winfree}@dna.caltech.edu, hopfield@princeton.edu\n\n Abstract\n\n The structural similarity of neural networks and genetic regulatory net-\n works to digital circuits, and hence to each other, was noted from the\n very beginning of their study [1, 2]. In this work, we propose a simple\n biochemical system whose architecture mimics that of genetic regula-\n tion and whose components allow for in vitro implementation of arbi-\n trary circuits. We use only two enzymes in addition to DNA and RNA\n molecules: RNA polymerase (RNAP) and ribonuclease (RNase). We\n develop a rate equation for in vitro transcriptional networks, and de-\n rive a correspondence with general neural network rate equations [3].\n As proof-of-principle demonstrations, an associative memory task and a\n feedforward network computation are shown by simulation. A difference\n between the neural network and biochemical models is also highlighted:\n global coupling of rate equations through enzyme saturation can lead\n to global feedback regulation, thus allowing a simple network without\n explicit mutual inhibition to perform the winner-take-all computation.\n Thus, the full complexity of the cell is not necessary for biochemical\n computation: a wide range of functional behaviors can be achieved with\n a small set of biochemical components.\n\n\n1 Introduction\n\nBiological organisms possess an enormous repertoire of genetic responses to everchang-\ning combinations of cellular and environmental signals. 
Characterizing and decoding the connectivity of the genetic regulatory networks that govern these responses is a major challenge of the post-genome era [4]. Understanding the operation of biological networks is intricately intertwined with the ability to create sophisticated biochemical networks de novo. Recent work developing synthetic genetic regulatory networks has focused on engineered circuits in bacteria wherein protein signals are produced and degraded [5, 6]. Although remarkable, such network implementations in bacteria have many unknown and uncontrollable parameters.

We propose a biochemical model system, a simplified analog of genetic regulatory circuits, that provides well-defined connectivity and uses nucleic acid species as fuel and signals that control the network. Our goal is to establish an explicit model to guide the laboratory construction of synthetic biomolecular systems in which every component is known and where quantitative predictions can be tested. Only two enzymes are used in addition to synthetic DNA templates: RNA polymerase, which recognizes a specific promoter sequence in double-stranded DNA and transcribes the downstream DNA to produce an RNA transcript, and ribonuclease, which degrades RNA but not DNA.

Figure 1: (A) The components of an in vitro circuit. The switch template (blue) is shown with the activator (red) attached. The dotted box indicates the promoter sequence and the downstream direction. (B) The correspondence between a neural network and an in vitro biochemical network. Neuron activity corresponds to RNA transcript concentration, while synaptic connections correspond to DNA switches with specified input and output.

In this system, RNA transcript concentrations are taken as signals. Synthetic DNA templates may assume two different conformations with different transcription efficiency: ON or OFF. 
Upon interaction with an RNA transcript of the appropriate sequence, the DNA template switches between different conformations, like a gene regulated by transcription factors. The connectivity (which RNA transcripts regulate which DNA templates) is dictated by Watson-Crick base-pairing rules and is easy to program. The network computation is powered by rNTPs that drive the synthesis of RNA signals by RNAP, while RNase forces transient signals to decay. With a few assumptions, we find that this stripped-down analog of genetic regulatory networks is mathematically equivalent to recurrent neural networks, confirming that a wide range of programmable dynamical behaviors is attainable.

2 Construction of the transcriptional network

The DNA transcriptional switch. The elementary unit of our networks will be a DNA switch, which serves the role of a gene in a genetic regulatory circuit. The basic requirements for a DNA switch are to have separate input and output domains, to transcribe poorly by itself [7], and to transcribe efficiently when an activator is bound to it. A possible mechanism of activation is the complementation of an incomplete promoter region, allowing more favorable binding of RNAP to the DNA template. Figure 1A illustrates our proposed design for DNA transcriptional switches and circuits. We model a single DNA switch with the following binding reactions:

    A + I --> AI
    D (OFF) + A --> DA (ON)
    DA (ON) + I --> D (OFF) + AI

where D (blue) is a DNA template with an incomplete promoter region, A (red) is an activator that complements the incomplete promoter region, and I (green) is an inhibitor complementary to A. Thus, I can bind free A. Furthermore, activator A contains a "toehold" region [8] that overhangs past the end of D, allowing inhibitor I to strip off A from the DA complex. D is considered OFF and DA is considered ON, based on their efficiency as templates for transcription. 
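Under the fast, complete-binding assumption used throughout the paper, the amount of ON-state template follows from simple bookkeeping: the inhibitor scavenges activator first, and whatever activator remains occupies the switch. A minimal numeric sketch (the function name and concentration values are ours, for illustration only):

```python
def on_template(A_tot, I_tot, D_tot):
    """Fast, complete binding: I scavenges A (free or bound),
    and the remaining activator occupies the switch template D.
    Returns [DA], the concentration of ON-state template."""
    s = A_tot - I_tot               # net activator after inhibition
    return min(max(s, 0.0), D_tot)  # [DA] rises linearly on 0 < s < D_tot

# The switch stays OFF until A_tot exceeds I_tot, then saturates:
# on_template(0.5, 1.0, 1.0) -> 0.0   (inhibitor wins)
# on_template(1.5, 1.0, 1.0) -> 0.5   (partial activation)
# on_template(3.0, 1.0, 1.0) -> 1.0   (template saturated)
```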
This set of binding reactions provides a means to choose the threshold of the sigmoidal activation function, as will be explained later.

RNAP and RNase drive changes in RNA transcript concentration; their activity is modeled using a first-order approximation for enzyme kinetics. For the moment, we assume that the input species (activator and inhibitor) are held at constant levels by external control.

    By RNA polymerase:   DA --k_p--> DA + R        D --\alpha k_p--> D + R
    By RNase:            R --k_d--> (degraded)

where 0 < \alpha < 1 due to lack of activation, and the RNase reaction represents the complete degradation of RNA products. k_d and k_p are set by the concentrations of the enzymes.

In general, a set of chemical reactions obeying mass action has dynamics described by

    d[X_i]/dt = \sum_\ell k^\ell (p_i^\ell - r_i^\ell) \prod_j [X_j]^{r_j^\ell}

where k^\ell is the rate constant, r_i^\ell is the stoichiometry of species X_i as a reactant (typically 0 or 1), and p_i^\ell is the stoichiometry of X_i as a product in reaction \ell. Analysis of our system is greatly simplified by the assumption that the binding reactions are fast and go to completion. We define D^tot as the sum of free and bound species: D^tot = [D] + [DA]. Similarly, I^tot = [I] + [AI] and A^tot = [A] + [DA] + [AI]. Then, [DA] depends on D^tot and s, where s = A^tot - I^tot. Because I can scavenge A whether the latter is free or bound to D, A can activate D only when s > 0. The amount of [DA] is proportional to s when 0 < s < D^tot, as shown in Figure 2A. It is convenient to represent this nonlinearity using a piecewise-linear approximation of a sigmoidal function, specifically \sigma(x) = (|x+1| - |x-1|)/2. Thus, we can represent [DA] using \sigma and a rescaled s: [DA] = (1/2) D^tot (1 + \sigma(\hat{s})), where \hat{s} = 2s/D^tot - 1 is called the signal activity. 
At steady state, k_d [R] = k_p [DA] + \alpha k_p [D]; thus,

    [R] = (1/2) (k_p/k_d) D^tot ((1 - \alpha) \sigma(\hat{s}) + 1 + \alpha).

If we consider the activator concentration as an input and the steady-state transcript concentration as an output, then the (presumed constant) inhibitor concentration, I^tot, sets the threshold, and the function assumes a sigmoidal shape (Fig. 2D). Adjusting the amount of template, D^tot, sets the magnitude of the output signal and the width of the transition region (Fig. 2C). We can adjust the width of the transition region independently of the threshold, such that a step function would be achieved in the limit. Thus, we have a sigmoidal function with an adjustable threshold, without reliance on cooperative binding of transcription factors, as is common in biological systems [9].

Figure 2: (A) [DA] as a function of s. (B) The sigmoid \sigma(x). (C,D) [R] as a function of A^tot for three values of D^tot and I^tot, respectively.

Networks of transcriptional switches. The input domain of a DNA switch is upstream of the promoter region; the output domain is downstream of the promoter region. This separation of domains allows us to design DNA switches that have any desired connectivity. We assume that distinct signals in the network are represented as distinct RNA sequences that have negligible crosstalk (undesired binding of two molecules representing different signals). The set of legitimate binding reactions is as follows:

    A_j + I_j --> A_j I_j
    D_ij (OFF) + A_j --> D_ij A_j (ON)
    D_ij A_j (ON) + I_j --> D_ij (OFF) + A_j I_j

where D_ij is the DNA template that has the jth input domain and ith output domain, the activator A_j complements the incomplete promoter region of D_ij, and the inhibitor I_j is complementary to A_j. Note that I_j can strip off A_j from the D_ij A_j complex, thus imposing a sharp threshold as before. 
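The single-switch transfer function derived above (steady-state [R] as a function of A^tot, with the threshold set by I^tot) can be sketched directly; the rate-constant values below are illustrative placeholders, not measured numbers:

```python
def sigma(x):
    """Piecewise-linear sigmoid from the text: (|x+1| - |x-1|)/2."""
    return (abs(x + 1.0) - abs(x - 1.0)) / 2.0

def steady_state_transcript(A_tot, I_tot, D_tot, kp=1.0, kd=0.1, alpha=0.1):
    """Steady-state [R] = (kp/(2 kd)) * D_tot * ((1-alpha)*sigma(s_hat) + 1 + alpha)."""
    s_hat = 2.0 * (A_tot - I_tot) / D_tot - 1.0  # rescaled signal activity
    return 0.5 * (kp / kd) * D_tot * ((1.0 - alpha) * sigma(s_hat) + 1.0 + alpha)

# Fully OFF (A_tot <= I_tot): leaky baseline alpha*kp*D_tot/kd.
# Fully ON (A_tot >= I_tot + D_tot): kp*D_tot/kd.
```

Sweeping A_tot with I_tot fixed traces the sigmoid of Fig. 2D: raising I_tot shifts the threshold without changing the ON and OFF plateaus.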
Again, we assume fast and complete binding reactions.

The set of enzyme reactions for the transcriptional network is as follows:

    By RNA polymerase (if s_ij = +1):  D_ij A_j --k_p--> D_ij A_j + A_i        D_ij --\alpha k_p--> D_ij + A_i
    By RNA polymerase (if s_ij = -1):  D_ij A_j --k_p--> D_ij A_j + I_i        D_ij --\alpha k_p--> D_ij + I_i
    By RNase:                          I_j --k_d--> (degraded),   A_j --k_d--> (degraded),   A_j I_j --k_d--> (degraded),   D_ij A_j --k_d--> D_ij

where s_ij \in {+1, -1} indicates whether switch ij will produce an activator or an inhibitor. This notation reflects that the production of I_i is equivalent to the consumption of A_i. The change of RNA concentrations over time is easy to express with s_i = A_i^tot - I_i^tot:

    ds_i/dt = -k_d s_i + k_p \sum_j s_ij ([D_ij A_j] + \alpha [D_ij]).    (1)

Network equivalence. We show next that the time evolution of this biochemical network model is equivalent to that of a general Hopfield neural network model [3]:

    \tau dx_i/dt = -x_i + \sum_j w_ij \sigma(x_j) + \theta_i.    (2)

Equation 1 can be rewritten to use the same nonlinear activation function defined earlier. Let \hat{s}_i = 2 s_i / D_i^tot - 1 be a rescaled difference between activator and inhibitor concentrations, where D_i^tot is the load on A_i, i.e., the total concentration of all switches that bind to A_i: D_i^tot = \sum_j D_ji^tot, with D_ij^tot = [D_ij A_j] + [D_ij]. Then, we can derive the following rate equation, where \hat{s}_i plays the role of unit i's activity x_i:

    (1/k_d) d\hat{s}_i/dt = -\hat{s}_i + \sum_j (k_p D_ij^tot)/(k_d D_i^tot) (1 - \alpha) s_ij \sigma(\hat{s}_j) + [ \sum_j (k_p D_ij^tot)/(k_d D_i^tot) (1 + \alpha) s_ij - 1 ].    (3)

Given the set of constants describing an arbitrary transcriptional network, the constants for an equivalent neural network can be obtained immediately by comparing Equations 2 and 3. The time constant \tau is the inverse of the RNase degradation rate: fast turnover of RNA molecules leads to fast response of the network. The synaptic weight w_ij is proportional to the concentration of switch template ij, attenuated by the load on A_i. 
However, the threshold \theta_i is dependent on the weights, perhaps implying a lack of generality. To implement an arbitrary neural network, we must introduce two new types of switches to the transcriptional network. To achieve arbitrary thresholds, we introduce bias switches D_iB, which have no input domain and thus produce outputs constitutively; this adds an adjustable constant to the right-hand side of Equation 3. To balance the load on A_i, we add null switches D_0i, which bind to A_i but have no output domain; this allows us to ensure that all D_i^tot are equal. Consequently, given any neural network with weights w_ij and thresholds \theta_i, we can specify concentrations D_ij^tot such that the biochemical network has identical dynamics, for some \tau.

Michaelis-Menten enzyme reactions. Next, we explore the validity of our assumption that enzyme kinetics are first-order reactions. A basic but more realistic model is the Michaelis-Menten mechanism [10], in which the enzyme and substrate bind to form an enzyme-substrate complex. For example, if E is RNAP,

    E + D_ij A_j <--(k_+, k_-)--> E.D_ij A_j --k_cat--> E + D_ij A_j + I_i/A_i.

An important ramification of Michaelis-Menten reactions is that there is competition for the enzyme by the substrates, because the concentration of available enzyme is reduced as it binds to substrates, leading to saturation when the enzyme concentration is limiting. Using the steady-state assumption for Michaelis-Menten reactions, we establish the following relations to the rate constants of the first-order reactions:

    k_p = (E^tot k_cat) / ((1 + L) K_M),    \alpha k_p = (E^tot k'_cat) / ((1 + L) K'_M),    k_d = (E_d^tot k_d,cat) / ((1 + L_d) K_d,M)    (4)

where k_cat and K_M = (k_- + k_cat)/k_+ are the catalytic constant (enzyme's speed) and Michaelis constant (enzyme's affinity for its target) of RNAP for the ON-state switch, k'_cat and K'_M are those for the OFF-state switch, and k_d,cat and K_d,M are the constants of RNase. 
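The mapping of Equation 4 from Michaelis-Menten parameters to effective first-order constants is easy to compute; a hedged sketch (function name and the numeric values are ours; E_tot and Ed_tot stand for the enzyme concentrations):

```python
def effective_rates(E_tot, Ed_tot, kcat, KM, kcat_off, KM_off, kdcat, KdM, L, Ld):
    """Eq. 4: effective first-order constants under RNAP load L and RNase load Ld."""
    kp = E_tot * kcat / ((1.0 + L) * KM)         # ON-state production rate constant
    alpha = (kcat_off / KM_off) / (kcat / KM)    # OFF/ON efficiency ratio (load-independent)
    kd = Ed_tot * kdcat / ((1.0 + Ld) * KdM)     # degradation rate constant
    return kp, alpha, kd
```

Note that raising the load L from 0 to 1 halves k_p, which is the saturation effect the winner-take-all network later exploits.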
E^tot and E_d^tot are the concentrations of RNAP and RNase, respectively. L = \sum_{i,j} ([D_ij A_j]/K_M + [D_ij]/K'_M) is the load on RNAP, and L_d = \sum_{i,j} ([A_j] + [I_j] + [A_j I_j] + [D_ij A_j]) / K_d,M is the load on RNase (i.e., the total concentration of binding targets divided by the Michaelis constants of the enzymes), both of which may be time-varying. To make the first-order approximation valid, we must keep L and L_d constant. Introduction of a new type of switch with different Michaelis constants can make L constant by balancing the load on the enzyme. A scheme to keep L_d constant is not obvious, so we set reaction conditions such that L_d << 1.

3 Example computations by transcriptional networks

Feed-forward networks. We first consider a feed-forward network to compute f(x, y, z) = \bar{x}yz + \bar{y}z + x. From the Boolean circuit shown in Figure 3A, we can construct an equivalent neural network. We label units 1 through 6: units 1, 2, 3 correspond to inputs x, y, z, whereas units 4, 5, 6 are computation units. Using the conversion rule discussed in the network equivalence section, we can calculate the parameters of the transcriptional network. Under the first-order approximation of Equation 3, the simulation result is exact (Fig. 3C). For comparison, we also explicitly simulated mass-action dynamics for the full set of chemical equations with the Michaelis-Menten enzyme reactions, using biologically plausible rate constants and with E^tot and E_d^tot calculated from Equation 4 using estimated values of L and L_d. The full model performs the correct calculation of f for all eight 3-bit inputs, although the magnitude of signals is exaggerated due to an underestimate of RNase load (Fig. 3C).

Associative memories. Figure 4A shows three 4-by-4 patterns to be memorized in a continuous neural network [3]. We chose orthogonal patterns because a 16-neuron network has limited capacity. 
Our training algorithm is gradient descent combined with the perceptron learning rule. After training, the parameters of the neural network are converted to the parameters of the transcriptional network as previously described.

Figure 3: (A,B) A Boolean circuit and a neural network to compute f(x, y, z) = \bar{x}yz + \bar{y}z + x. (C) The activity of computation units (first-order approximation: solid lines; Michaelis-Menten reactions: dotted lines) for x=True=1, y=False=-1, z=True=1.

Figure 4: (A) The three patterns to be memorized. (B) Time course for the transcriptional network's recovery of the third pattern (odd columns: blue lines; even columns: red lines).

Starting from a random initial state, a typical response of the transcriptional network (with the first-order approximation of Equation 3) is shown in Figure 4B. Thus, our in vitro transcriptional networks can support complex sets of stable steady states.

A winner-take-all network. Instead of trying to compensate for the saturation phenomena of Michaelis-Menten reactions, we can make use of them for computation. As an example, consider the winner-take-all computation [11], which is commonly implemented as a neural network with O(N^2) mutually inhibitory connections (Fig. 5A), but which can also be implemented as an electrical circuit with O(N) interconnections by using a single global inhibitory feedback gate [12]. In a biochemical system, a limited global resource, such as RNAP, can act to regulate all the DNA switches and thus similarly produce global inhibition. 
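A toy Euler-step simulation illustrates this shared-enzyme competition: each self-activating switch's production is scaled by the common RNAP load, while degradation stays linear. All parameter values here are invented for illustration, not taken from the paper:

```python
def simulate_wta(A, steps=2000, dt=0.005, D=1.0, I=1.0,
                 r_on=10.0, r_off=1.0, E=1.0, kd=0.2):
    """Self-activating switches competing for one RNAP pool.
    A: list of total activator concentrations, one per switch.
    r_on, r_off play the role of kcat/KM for ON and OFF templates."""
    A = list(A)
    for _ in range(steps):
        DA = [min(max(a - I, 0.0), D) for a in A]              # ON-state template
        load = sum(r_on * da + r_off * (D - da) for da in DA)  # shared RNAP load
        prod = [E / (1.0 + load) * (r_on * da + r_off * (D - da)) for da in DA]
        A = [max(a + dt * (p - kd * a), 0.0) for a, p in zip(A, prod)]
    return A

# Because production is monotone in a switch's own activation and all switches
# share one load term, trajectories cannot cross: the initially highest switch
# stays highest throughout the run.
```

With suitably chosen rate constants, the losing switches fall below threshold while the leader persists, reproducing the winner-take-all behavior described below; this sketch only demonstrates the coupling mechanism, not a tuned circuit.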
This effect is exploited by the simple transcriptional network shown in Figure 5B, in which the output from each DNA switch activates the same DNA switch itself, and mutual inhibition is achieved by competition for RNAP. Specifically, we have switch templates D_ii with fixed thresholds set by I_i, and D_ii produces A_i as its output RNA. With the instant-binding assumption, we then derive the following equation:

    dA_i^tot/dt = - (E_d^tot k_d,cat) / ((1 + L_d) K_d,M) A_i^tot + (E^tot / (1 + L)) ( (k_cat / K_M) [D_ii A_i] + (k'_cat / K'_M) [D_ii] ).    (5)

The production rate of A_i depends on A_i^tot and on L, while the degradation rate of A_i depends on A_i^tot and on L_d, as shown in Figure 6A. For a winner-take-all network, an ON-state switch draws more RNAP than an OFF-state switch (because of the smaller Michaelis constant for the ON state). Thus, if the other switches are turned OFF, the load on RNAP (L) becomes small, leading to faster production by the remaining ON switches. When the production rate curve and the degradation rate curve have three intersections, bistability is achieved such that the switches remain ON or OFF, depending on their current state.

Consider n equivalent switches starting with initial activator concentrations above the threshold, and with the highest concentration at least \delta above the rest (as a percentage). Analysis indicates that a less leaky system (small \alpha) and sufficient differences in initial activator concentrations (large \delta) can guarantee the existence of a unique winner. Simulations of a 10-switch winner-take-all network confirm this analysis, although we do not see perfect behavior (Fig. 6B). Figure 6C shows a time course of a unique-winner situation. Switches get turned OFF one by one whenever the activator level approaches the threshold, until only one switch remains ON.

Figure 5: (A) A 3-unit WTA network with explicit mutual inhibition. 
(B) An equivalent biochemical network.

Figure 6: For WTA networks: (A) Production rates (solid lines) for two different L's, compared to a linear degradation rate (dotted line). (B) Empirical probability of correct output as a function of \delta and \alpha. (C) Time course with \delta = 0.33% and \alpha = 0.04.

Similarly, we can consider a k-WTA network where k winners persist. If we set the parameters appropriately, such that k winners are stable but k+1 winners are unstable, the simulation recovered k winners most of the time. Even a single k-WTA gate can provide impressive computational power [13].

4 Discussion

We have shown that, if we treat transcriptionally controlled DNA switches as synapses and the concentrations of RNA species as the states of neurons, then the in vitro transcriptional circuit is equivalent to the neural network model and can therefore be programmed to carry out a wide variety of tasks. The structure of our biochemical networks differs from that of previous formal models of genetic regulatory circuits [14, 15, 16]. For example, consider the work of [16], which established a connection to the class of Boltzmann machines. There, the occupancy of regulatory binding sites corresponds to the state of neurons, the weights are set by the cooperative interactions among transcription factors, and the thresholds are the effective dissociation constants at a binding site. Thus, implementing a general N-unit neural network requires only O(N) biochemical species, but up to O(N^2) significant binding interactions must be encoded in the molecular sequences. Changing or tuning a network is therefore non-trivial. 
In contrast, in our transcriptional networks, each weight and threshold is represented by the continuously adjustable concentration of a distinct species, and the introduction or deletion of any node is straightforward.

Each synapse is represented by a DNA switch with a single input-output specification, so the number of DNA switches grows as O(N^2) for a fully recurrent neural network with N neurons (unlike the circuits of [16]). This constraint may be relieved because, in many networks of interest, most nodes have a small number of connections [17, 18]. The time for computation will increase as O(N) due to finite hybridization rates because, if the total concentration of all RNA signals is capped, the concentration of any given species will decrease as 1/N. The weights are capped by the maximum gain of the system, which is the production rate divided by the degradation rate. Since the time constant of the network is the inverse of the degradation rate, if we wish to implement a network with large weights, we must increase the time constant.

We can analyze the cost of computing by considering basic physical chemistry. The energy consumption is about 20 kT (approximately 10^-19 J) per nucleotide incorporated, and 1 bit of information is encoded by a sequence containing tens of nucleotides. The encoding energy is large, since the molecule for each bit must contain specific instructions for connectivity, unlike spatially arranged digital circuits, where a uniform physical signal carrier can be used. Furthermore, many copies (e.g., about 10^13 for a 1 μM signal in 20 μl) of a given species must be produced to change the concentration in a bulk sample. Worse yet, because degradation is not modulated in the transcriptional network, switching relies on selective changes of production rates, thus continually using energy to maintain an ON state. 
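The copy-number and energy figures above follow from Avogadro's number and the thermal energy scale; a quick arithmetic check of the 1 μM-in-20-μl example:

```python
AVOGADRO = 6.022e23   # molecules per mole
KT_298 = 4.1e-21      # thermal energy at room temperature, joules

# Copies needed to establish a 1 uM signal in a 20 ul sample:
copies = 1e-6 * 20e-6 * AVOGADRO   # (mol/L) * (L) * (molecules/mol) ~ 1.2e13

# Energy per nucleotide incorporated, ~20 kT as stated in the text:
energy_per_nt = 20 * KT_298        # ~8e-20 J, i.e., on the order of 1e-19 J
```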
Devising a scheme to minimize maintenance energy costs, as in CMOS technology for electrical circuits, is an important problem.

The theory presented here is meant to serve as a guide for the construction of real biochemical computing networks. Naturally, real systems will deviate considerably from the idealized model (although perhaps less so than neural network models deviate from real neurons). For example, hybridization is neither instantaneous nor irreversible, strands can have undesired conformations and crosstalk, and enzyme reactions depend on the sequence and are subject to side reactions that generate incomplete products. Some problems, such as hybridization speed and crosstalk, can be reduced by slowing the enzyme reactions and using proper sequence design [19]. Ultimately, some form of fault tolerance will be necessary at the circuit level. Restoration of outputs to digital values, achieved by any sufficiently high-gain sigmoidal activation function, provides some level of immunity to noise at the gate level, and attractor dynamics can provide restoration at the network level. A full understanding of fault tolerance in biochemical computing remains an important open question.

Future directions include utilizing the versatility of active RNA molecules (such as aptamers, ribozymes, and riboswitches [20, 21]) for more general chemical input and output, devising a biochemical learning scheme analogous to neural network training algorithms [22], and studying the stochastic behavior of the transcriptional network when a very small number of molecules is involved in small volumes [5].

Acknowledgements. We thank Michael Elowitz, Paul Rothemund, Casimir Wierzynski, Dan Stick, and David Zhang for valuable discussions, and ONR and NSF for funding.

References

[1] McCulloch WS, Pitts W, Bull. Math. Biophys. 5 (1943), 115-133.
[2] Monod J, Jacob F, Cold Spring Harb. Symp. Quant. Biol. 
26 (1961), 389-401.
[3] Hopfield JJ, Proc. Nat. Acad. Sci. USA 81 (1984), 3088-3092.
[4] Hasty J, McMillen D, Isaacs F, Collins JJ, Nat. Rev. Genet. 2 (2001), 268-279.
[5] Elowitz MB, Leibler S, Nature 403 (2000), 335-338.
[6] Gardner TS, Cantor CR, Collins JJ, Nature 403 (2000), 339-342.
[7] Martin CT, Coleman JE, Biochemistry 26 (1987), 2690-2696.
[8] Yurke B, Mills AP Jr., Genetic Programming and Evolvable Machines 4 (2003), 111-122.
[9] Shea MA, Ackers GK, J. Mol. Biol. 181 (1985), 211-230.
[10] Hammes GG, Thermodynamics and Kinetics for the Biological Sciences, Wiley (2000).
[11] Yuille AL, Geiger D, in The Handbook of Brain Theory and Neural Networks, Arbib MA, ed., MIT Press (1995), 1056-1060.
[12] Tank DW, Hopfield JJ, IEEE Trans. on Circuits and Systems 33 (1986), 533-541.
[13] Maass W, Neural Computation 12 (2000), 2519-2535.
[14] Glass L, Kauffman SA, J. Theor. Biol. 39 (1973), 103-129.
[15] Mjolsness E, Sharp DH, Reinitz J, J. Theor. Biol. 152 (1991), 429-453.
[16] Buchler NE, Gerland U, Hwa T, Proc. Nat. Acad. Sci. USA 100 (2003), 5136-5141.
[17] Bray D, Science 301 (2003), 1864-1865.
[18] Reed RD, IEEE Trans. on Neural Networks 4 (1993), 740-744.
[19] Dirks R, Lin M, Winfree E, Pierce NA, Nucleic Acids Research 32 (2004), 1392-1403.
[20] Lilley DM, Trends Biochem. Sci. 28 (2003), 495-501.
[21] Nudler E, Mironov AS, Trends Biochem. Sci. 29 (2004), 11-17.
[22] Mills AP Jr., Yurke B, Platzman PM, Biosystems 52 (1999), 175-180.
", "award": [], "sourceid": 2707, "authors": [{"given_name": "Jongmin", "family_name": "Kim", "institution": null}, {"given_name": "John", "family_name": "Hopfield", "institution": null}, {"given_name": "Erik", "family_name": "Winfree", "institution": null}]}