{"title": "Stochastic Dynamics of Three-State Neural Networks", "book": "Advances in Neural Information Processing Systems", "page_first": 271, "page_last": 278, "abstract": null, "full_text": "Stochastic Dynamics of Three-State \n\nNeural Networks \n\nToru Ohira \n\nJack D. Cowan \n\nSony Computer Science Laboratory \n\nDepts. of Mathematics and Neurology \n\n3-14-13 Higashi-gotanda, \n\nTokyo 141, Japan \nohira@csl.sony.co.jp \n\nUniversity of Chicago \n\nChicago, IL 60637 \n\ncowan@synapse.uchicago.edu \n\nAbstract \n\nWe present here an analysis of the stochastic neurodynamics of \na neural network composed of three-state neurons described by \na master equation. An outer-product representation of the mas(cid:173)\nter equation is employed. In this representation, an extension of \nthe analysis from two to three-state neurons is easily performed. \nWe apply this formalism with approximation schemes to a sim(cid:173)\nple three-state network and compare the results with Monte Carlo \nsimulations. \n\n1 \n\nINTRODUCTION \n\nStudies of single neurons or networks under the influence of noise have been a con(cid:173)\ntinuing item in neural network modelling. In particular, the analogy with spin \nsystems at finite temprature has produced many important results on networks of \ntwo-state neurons. However, studies of networks of three-state neurons have been \nrather limited (Meunier, Hansel and Verga, 1989). A master equation was intro(cid:173)\nduced by Cowan (1991) to study stochastic neural networks. The equation uses the \nformalism of \"second quantization\" for classical many-body systems (Doi, 1976a; \nGrassberger and Scheunert, 1980), and was used to study networks of of two-state \nneurons (Ohira and Cowan, 1993, 1994). In this paper, we reformulate the master \nequation using an outer-product representation of operators and extend our previ(cid:173)\nous analysis to networks of three-state neurons. 
A hierarchy of moment equations for such networks is derived, and approximation schemes are used to obtain equations for the macroscopic activities of model networks. We compare the behavior of the solutions of these equations with Monte Carlo simulations. \n\n2 THE BASIC NEURAL MODEL \n\nWe first introduce the network described by the master equation. In this network (Cowan, 1991), neurons at each site, say the ith site, are assumed to cycle through three states: \"quiescent\", \"activated\" and \"refractory\", labelled 'qi', 'ai', and 'ri' respectively. We consider four transitions: q → a, r → a, a → r, and r → q. Two of these, q → a and r → a, are functions of the neural input current. We assume these are smoothly increasing functions of the input current and denote them by θ1 and θ2. The other two transition rates, a → r and r → q, are defined as constants α and β. The resulting stochastic transition scheme is shown in Figure 1. We assume that these transition rates depend only on the current state of the network and not on past states, and that all neural state transitions are asynchronous. This Markovian assumption is essential to the master equation description of this model. \n\nFigure 1: Transition rates for a three-state neuron. \n\nWe represent the state of each neuron by three-dimensional basis vectors using the Dirac notation |ai>, |ri> and |qi>. They correspond, in more standard vector notation, to: \n\n|ai> = (1, 0, 0)^T,  |ri> = (0, 1, 0)^T,  |qi> = (0, 0, 1)^T.  (1) \n\nWe define the inner product of these states as \n\n<ai|ai> = <qi|qi> = <ri|ri> = 1,  (2) \n\n<qi|ai> = <ai|qi> = <ri|ai> = <ai|ri> = <ri|qi> = <qi|ri> = 0.  (3) \n\nLet the states (or configurations) of a network be represented by {|n>}, the direct product space of each neuron in the network. 
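The basis vectors and outer-product transition operators above can be checked mechanically. The following is a minimal NumPy sketch (an illustration, not part of the paper), assuming the component ordering active, refractory, quiescent for the three-dimensional basis:

```python
import numpy as np

# Basis vectors for a single three-state neuron (assumed ordering:
# active, refractory, quiescent), mirroring the Dirac notation in the text.
ket_a = np.array([1.0, 0.0, 0.0])
ket_r = np.array([0.0, 1.0, 0.0])
ket_q = np.array([0.0, 0.0, 1.0])

# The basis is orthonormal, as in the inner-product relations.
assert ket_a @ ket_a == 1.0 and ket_a @ ket_q == 0.0

# Outer-product transition operator |a><q|: it maps the quiescent
# state onto the active state and annihilates the other two states.
op_q_to_a = np.outer(ket_a, ket_q)
print(op_q_to_a)          # 3x3 matrix with a single 1 in row 'a', column 'q'
print(op_q_to_a @ ket_q)  # -> |a>
print(op_q_to_a @ ket_a)  # -> zero vector
```

Extending from two to three states in this representation amounts to adding one more basis vector, which is the point the paper exploits.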
Explicitly, \n\n|n> = |s1> ⊗ |s2> ⊗ ... ⊗ |sN>,  si ∈ {ai, ri, qi}.  (4) \n\nLet p[n, t] be the probability of finding the network in a particular state n at time t. We introduce the \"neural state vector\" for N neurons in a network as \n\n|Φ(t)> = Σ_{n} p[n, t] |n>,  (5) \n\nwhere the sum is taken over all possible network states. \n\nWith these definitions, we can write the master equation for a network with the transition rates shown in Figure 1, using the outer-product representations of operators (Sakurai, 1985). For example: \n\n|ai><qi| = \n[ 0 0 1 ] \n[ 0 0 0 ] \n[ 0 0 0 ].  (6) \n\nThe master equation then takes the form of an evolution equation: \n\n-∂_t |Φ(t)> = L |Φ(t)>,  (7) \n\nwith the network \"Liouvillian\" L given by: \n\nL = α Σ_{i=1}^{N} (|ai><ai| - |ri><ai|) + Σ_{i=1}^{N} (|ri><ri| - |ai><ri|) θ2((1/n) Σ_{j=1}^{N} wij |aj><aj|) \n+ β Σ_{i=1}^{N} (|ri><ri| - |qi><ri|) + Σ_{i=1}^{N} (|qi><qi| - |ai><qi|) θ1((1/n) Σ_{j=1}^{N} wij |aj><aj|),  (8) \n\nwhere n is the average number of connections to each neuron, and wij is the \"weight\" from the jth to the ith neuron. Thus the weights are normalized with respect to the average number n of connections per neuron. \n\nThe master equation given here is the same as the one introduced by Cowan using Gell-Mann matrices (Cowan, 1991). However, we note that with the outer-product representation, we can extend the description from two to three-state neurons simply by including one more basis vector. \n\nIn analogy with the analysis of two-state neurons, we introduce the state vector \n\n(a r q| = ⊗_{i=1}^{N} (qi <qi| + ri <ri| + ai <ai|),  (9) \n\nwhere the product is taken as a direct product, and ai, ri, and qi are parameters. We also introduce the point moments «ai(t)», «qi(t)», and «ri(t)» as the probability that the ith neuron is active, quiescent, and refractory respectively, at time t. 
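To make the structure of the Liouvillian concrete, here is a hedged single-neuron sketch in NumPy (an illustration, not the paper's computation), with the input-dependent rates θ1 and θ2 frozen to constants. With the sign convention of the evolution equation (7), -∂_t p = L p, probability conservation means each column of L sums to zero, and the stationary distribution solves L p = 0:

```python
import numpy as np

# Transition rates (illustrative constants; in the paper theta1 and
# theta2 are increasing functions of the input current).
alpha, beta, theta1, theta2 = 1.0, 0.2, 0.3, 0.3

# Single-site Liouvillian in the (a, r, q) basis, following the
# loss-minus-gain pattern: a->r at alpha, r->a at theta2,
# r->q at beta, q->a at theta1.  Convention: -d/dt p = L p.
L = np.array([
    [alpha,  -theta2,        -theta1],   # row a
    [-alpha,  theta2 + beta,  0.0   ],   # row r
    [0.0,    -beta,           theta1],   # row q
])

# Probability conservation: the uniform bra annihilates L.
print(np.ones(3) @ L)  # -> [0. 0. 0.]

# Stationary distribution: solve L p = 0 with normalization sum(p) = 1.
# The rows of L are linearly dependent, so one row is replaced by ones.
A = np.vstack([L[:2], np.ones(3)])
p = np.linalg.solve(A, np.array([0.0, 0.0, 1.0]))
print(p)  # stationary probabilities (p_a, p_r, p_q)
assert np.allclose(L @ p, 0.0)
```

For these rates the stationary occupancies come out to (3/13, 6/13, 4/13), which one can confirm by balancing the flows around the q → a → r → q cycle.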
Similarly, we can define the multiple moments; for example, «ai qj rk ... (t)» is the probability that the ith neuron is active, the jth neuron is quiescent, the kth neuron is refractory, and so on, at time t. Then, it can be shown that they are given by: \n\n«si sj sk ... (t)» = (a = r = q = 1| |si><si| ⊗ |sj><sj| ⊗ |sk><sk| ... |Φ(t)>,  s = a, r, q.  (10) \n\nFor example, \n\n«ri qj ak(t)» = (a = r = q = 1| |ri><ri| ⊗ |qj><qj| ⊗ |ak><ak| |Φ(t)>.  (11) \n\nWe note the following relations: \n\n«ai(t)» + «qi(t)» + «ri(t)» = 1,  (12) \n\nand \n\n«ai²(t)» = «ai(t)»,  «ri²(t)» = «ri(t)»,  «qi²(t)» = «qi(t)».  (13) \n\n3 THE HIERARCHY OF MOMENT EQUATIONS \n\nWe can now obtain equations of motion for the moments. As is typical in the case of many-body problems, we obtain an analogue of the BBGKY hierarchy of equations (Doi, 1976b). This can be done by using the definition of the moments, the master equation, and the a-r-q state vector. We show the hierarchy up to the second order. At first order, \n\n-∂_t «ai» = α«ai» - «qi θ1((1/n) Σ_{k=1}^{N} wik ak)» - «ri θ2((1/n) Σ_{k=1}^{N} wik ak)»,  (14) \n\n-∂_t «ri» = -α«ai» + β«ri» + «ri θ2((1/n) Σ_{k=1}^{N} wik ak)»,  (15) \n\n-∂_t «qi» = -β«ri» + «qi θ1((1/n) Σ_{k=1}^{N} wik ak)»,  (16) \n\nand a representative second-order equation is \n\n-∂_t «ai rj» = -α(«ai aj» - «ai rj») + β«ai rj» + «ai rj θ2((1/n) Σ_{k=1}^{N} wjk ak)» \n- «ri rj θ2((1/n) Σ_{k=1}^{N} wik ak)» - «qi rj θ1((1/n) Σ_{k=1}^{N} wik ak)».  (19) \n\nWe note that since \n\n«ai» + «ri» + «qi» = 1,  (20) \n\none of the three point moments can be eliminated. We also note that the equations couple to ever higher orders in this hierarchy. This leads to the need for approximation schemes which can terminate the hierarchy at an appropriate order. \n\nIn the following, we introduce first and second moment level approximation schemes. For simplicity, we consider the special case in which θ1 and θ2 are linear and equal. 
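The moment definitions (10)-(12) can be exercised on a small worked example. The sketch below (illustrative only; the two-neuron distribution chosen is arbitrary) builds a product state with Kronecker products and reads off point and pair moments with the all-ones bra (a = r = q = 1|:

```python
import numpy as np

# Projectors |s><s| in the (a, r, q) basis for one neuron.
P = {s: np.zeros((3, 3)) for s in "arq"}
for k, s in enumerate("arq"):
    P[s][k, k] = 1.0
I = np.eye(3)

# An arbitrary two-neuron probability vector in the direct-product space:
# phi = p1 (x) p2, each factor a distribution over (a, r, q).
p1 = np.array([0.2, 0.5, 0.3])
p2 = np.array([0.6, 0.1, 0.3])
phi = np.kron(p1, p2)

# The bra (a = r = q = 1| is the all-ones row vector on the product space.
ones = np.ones(9)

# Point moments of neuron 1: <<s_1>> = (1| (|s><s| (x) I) |phi>.
m = {s: ones @ np.kron(P[s], I) @ phi for s in "arq"}
print(m)  # {'a': 0.2, 'r': 0.5, 'q': 0.3}

# A pair moment in the style of Eq. (11): <<r_1 q_2>>.
rq = ones @ np.kron(P["r"], P["q"]) @ phi
print(rq)  # 0.5 * 0.3 = 0.15

# Normalization relation: the point moments of neuron 1 sum to 1.
assert np.isclose(sum(m.values()), 1.0)
```

For a product state the pair moment factorizes exactly; in an interacting network it does not, which is what drives the hierarchy of moment equations.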
With the above simplification, the first moment (mean field) approximation leads to: \n\n-∂_t «ai» = α«ai» - W̄i («ri» + «qi»),  (21) \n\n-∂_t «ri» = -α«ai» + β«ri» + W̄i «ri»,  (22) \n\n-∂_t «qi» = -β«ri» + W̄i «qi»,  (23) \n\nwhere \n\nW̄i = (1/n) Σ_{k=1}^{N} wik «ak».  (24) \n\nWe also obtain the second moment approximation as: \n\n-∂_t «ai» = α«ai» - (1/n) Σ_{j=1}^{N} wij («qi aj» + «ri aj»),  (25) \n\n-∂_t «ri» = -α«ai» + β«ri» + (1/n) Σ_{j=1}^{N} wij «ri aj»,  (26) \n\n-∂_t «qi» = -β«ri» + (1/n) Σ_{j=1}^{N} wij «qi aj»,  (27) \n\n-∂_t «ai aj» = 2α«ai aj» - W̄ij («ri aj» + «qi aj») - W̄ji («ai rj» + «ai qj»),  (28) \n\n-∂_t «ai rj» = -α(«ai aj» - «ai rj») + β«ai rj» + W̄ji «ai rj» - W̄ij («ri rj» + «qi rj»),  (29) \n\nwhere \n\nW̄ij = (1/n) (wij + Σ_{k≠j} wik «ak»),  (30) \n\nwhich follows from the factorization «xi yj ak» ≈ «xi yj» «ak» for k ≠ j, together with relation (13) for the term k = j. \n\nWe note that the first moment dynamics obtained via the first approximation differ from those obtained from the second moment approximation. In the next section, we briefly examine this difference by comparing these approximations with Monte Carlo simulations. \n\n4 COMPARISON WITH SIMULATIONS \n\nIn this section, we compare the first and second moment approximations with Monte Carlo simulations of a one-dimensional ring of three-state neurons. This configuration was studied in a previous publication (Ohira and Cowan, 1993) for two-state neurons. As there, each three-state neuron in the ring interacts with its two neighbors. \n\nMore precisely, the Liouville operator is \n\nL = α Σ_{i=1}^{N} (|ai><ai| - |ri><ai|) + β Σ_{i=1}^{N} (|ri><ri| - |qi><ri|) \n+ (1/2) W2 Σ_{i=1}^{N} (|ri><ri| - |ai><ri|)(|ai+1><ai+1| + |ai-1><ai-1|) \n+ (1/2) W1 Σ_{i=1}^{N} (|qi><qi| - |ai><qi|)(|ai+1><ai+1| + |ai-1><ai-1|).  (31) \n\nWe now define the dynamical variables of interest as follows: \n\nXa = (1/N) Σ_{i=1}^{N} «ai»,  Xr = (1/N) Σ_{i=1}^{N} «ri»,  Xq = (1/N) Σ_{i=1}^{N} «qi», \n\nηaa = (1/N) Σ_{i=1}^{N} «ai ai+1»,  ηrr = (1/N) Σ_{i=1}^{N} «ri ri+1»,  ηar = (1/N) Σ_{i=1}^{N} «ai ri+1».  (32) \n\nThen, for this network, the first moment approximation is given by \n\n-∂_t Xa = αXa - W2 Xr Xa - W1 Xq Xa,  (33) \n\n-∂_t Xr = -αXa + βXr + W2 Xr Xa,  (34) \n\nXq = 1 - Xa - Xr.  (35) \n\nThe second moment approximation is given by \n\n-∂_t Xa = αXa - W2 ηar - W1 (Xa - ηaa - ηar), \n\n-∂_t Xr = -αXa + βXr + W2 ηar,  (36) \n\n-∂_t ηaa = 2α ηaa - W2 ηar (Xa + 1) - W1 (Xa + 1)(Xa - ηar - ηaa), \n\n-∂_t ηar = -α(ηaa - ηar) + β ηar + (1/2) W2 ηar (Xa + 1) - (1/2) W2 ηrr Xa - (1/2) W1 Xa (Xr - ηrr - ηar), \n\n-∂_t ηrr = -2α ηar + 2β ηrr + W2 Xa ηrr,  (37) \n\nagain with Xq = 1 - Xa - Xr. \n\nMonte Carlo simulations of a ring of 10000 neurons were performed and compared with the first and second moment approximation predictions. We fixed the following parameters: \n\nα = 1.0,  β = 0.2,  W1 = 0.01 w0,  W2 = 0.6 w0.  (38) \n\nFigure 2: Comparison of Monte Carlo simulations (dots) with the first moment (dashed line) and second moment (solid line) approximations for the three-state case, with the fractions of active and refractory state variables Xa (A) and Xr (B). Each graph is labeled by the value of w0/α. 
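As a numerical companion, the ring mean-field equations can be integrated directly. The sketch below (not from the paper) uses forward-Euler time stepping with the rate constants of the fixed parameter set; the coupling strength w0, the initial condition, and the step size are illustrative assumptions:

```python
# Rate constants alpha = 1.0, beta = 0.2 as in the fixed parameter set;
# w0 = 10.0 is an illustrative coupling choice, not a value from the paper.
alpha, beta, w0 = 1.0, 0.2, 10.0
w1, w2 = 0.01 * w0, 0.6 * w0

# Ring mean-field equations, with the -d/dt convention moved to the
# right-hand side:
#   dXa/dt = -alpha*Xa + (w2*Xr + w1*Xq)*Xa
#   dXr/dt =  alpha*Xa - beta*Xr - w2*Xr*Xa
#   Xq     =  1 - Xa - Xr
def step(xa, xr, dt):
    xq = 1.0 - xa - xr
    dxa = -alpha * xa + (w2 * xr + w1 * xq) * xa
    dxr = alpha * xa - beta * xr - w2 * xr * xa
    return xa + dt * dxa, xr + dt * dxr

# Forward-Euler integration from an assumed initial condition.
xa, xr = 0.5, 0.3
for _ in range(20000):
    xa, xr = step(xa, xr, dt=1e-3)
xq = 1.0 - xa - xr
print(xa, xr, xq)  # fractions of active, refractory, quiescent neurons
```

A stiffness-aware integrator (e.g. SciPy's `solve_ivp`) would be a drop-in replacement for the Euler loop; plain Euler is used here only to keep the sketch dependency-free.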
We varied w0 and sampled the numerical dynamics of these variables. Some comparisons are shown in Figure 2 for the time dependence of the fractions of active and refractory state variables. We clearly see the improvement of the second over the first moment level approximation. More simulations with different parameter ranges remain to be explored. \n\n5 CONCLUSION \n\nWe have introduced here a neural network master equation using the outer-product representation. In this representation, the extension from two to three-state neurons is transparent. We have taken advantage of this natural extension to analyse three-state networks. Even though the calculations involved are more intricate, we have obtained results indicating that the second moment level approximation is significantly more accurate than the first moment level approximation. We also note that, as in the two-state case, the first moment level approximation produces more activation than the simulation. Further analytical and numerical investigations are needed to fully uncover the dynamics of three-state networks described by the master equation introduced above. \n\nAcknowledgements \n\nThis work was supported in part by the Robert R. McCormick fellowship at the University of Chicago, and in part by grant No. N00014-89-J-1099 from the US Department of the Navy, Office of Naval Research. \n\nReferences \n\nCowan JD (1991) Stochastic neurodynamics. In: Advances in Neural Information Processing Systems (D. S. Touretzky, R. P. Lippmann, J. E. Moody, eds.), vol. 3, Morgan Kaufmann Publishers, San Mateo. \n\nDoi M (1976a) Second quantization representation for classical many-particle system. J. Phys. A: Math. Gen. 9:1465-1477. \n\nDoi M (1976b) Stochastic theory of diffusion-controlled reactions. J. Phys. A: Math. Gen. 9:1479. 
Grassberger P, Scheunert M (1980) Fock-space methods for identical classical objects. Fortschritte der Physik 28:547. \n\nMeunier C, Hansel D, Verga A (1989) Information processing in three-state neural networks. J. Stat. Phys. 55:859. \n\nOhira T, Cowan JD (1993) Master-equation approach to stochastic neurodynamics. Phys. Rev. E 48:2259. \n\nOhira T, Cowan JD (1994) Feynman diagrams for stochastic neurodynamics. In: Proceedings of the Fifth Australian Conference on Neural Networks, pp. 218-221. \n\nSakurai JJ (1985) Modern Quantum Mechanics. Benjamin/Cummings, Menlo Park. \n", "award": [], "sourceid": 991, "authors": [{"given_name": "Toru", "family_name": "Ohira", "institution": null}, {"given_name": "Jack", "family_name": "Cowan", "institution": null}]}