{"title": "Solvable Models of Artificial Neural Networks", "book": "Advances in Neural Information Processing Systems", "page_first": 423, "page_last": 430, "abstract": null, "full_text": "Solvable Models of Artificial Neural \n\nNetworks \n\nSumio Watanabe \n\nInformation and Communication R&D Center \n\nRicoh Co., Ltd. \n\n3-2-3, Shin-Yokohama, Kohoku-ku, Yokohama, 222 Japan \n\nsumio@ipe.rdc.ricoh.co.jp \n\nAbstract \n\nSolvable models of nonlinear learning machines are proposed, and \nlearning in artificial neural networks is studied based on the theory \nof ordinary differential equations. A learning algorithm is con(cid:173)\nstructed, by which the optimal parameter can be found without \nany recursive procedure. The solvable models enable us to analyze \nthe reason why experimental results by the error backpropagation \noften contradict the statistical learning theory. \n\n1 \n\nINTRODUCTION \n\nRecent studies have shown that learning in artificial neural networks can be under(cid:173)\nstood as statistical parametric estimation using t.he maximum likelihood method \n[1], and that their generalization abilities can be estimated using the statistical \nasymptotic theory [2]. However, as is often reported, even when the number of \nparameters is too large, the error for the test.ing sample is not so large as the theory \npredicts. The reason for such inconsistency has not yet been clarified, because it is \ndifficult for the artificial neural network t.o find the global optimal parameter. \n\nOn the other hand, in order to analyze the nonlinear phenomena, exactly solvable \nmodels have been playing a central role in mathematical physics, for example, the \nK-dV equation, the Toda lattice, and some statistical models that satisfy the Yang-\n\n423 \n\n\f424 \n\nWatanabe \n\nBaxter equation[3]. \n\nThis paper proposes the first solvable models in the nonlinear learning problem. We \nconsider simple three-layered neural networks, and show that the parameters from \nthe inputs to the hidden units determine the function space that is characterized \nby a differential equation. This fact means that optimization of the parameters \nis equivalent to optimization of the differential equation. Based on this property, \nwe construct a learning algorithm by which the optimal parameters can be found \nwithout any recursive procedure. Experimental result using the proposed algorithm \nshows that the maximum likelihood estimator is not always obtained by the error \nbackpropagation, and that the conventional statistical learning theory leaves much \nto be improved. \n\n2 The Basic Structure of Solvable Models \n\nLet us consider a function fc,w( x) given by a simple neural network with 1 input \nunit, H hidden units, and 1 output unit, \n\nH \n\nfc,w(x) = L CiIPw;{X), \n\ni=1 \n\n(I) \n\nwhere both C = {Ci} and w = {Wi} are parameters to be optimized, IPw;{x) is the \noutput of the i-th hidden unit. \nWe assume that {IPi(X) = IPw, (x)} is a set of independent functions in C H -class. \nThe following theorem is the start point of this paper. \n\nTheorem 1 The H -th order differential equation whose fundamental system of so(cid:173)\nlution is {IPi( x)} and whose H -th order coefficient is 1 is uniquely given by \n\n(Dwg)(x) = (_l)H H!H+l(g,1P1,1P2, .. \u00b7,IPH) = 0, \n\nlVH(IP1, IP2, .. \u00b7,IPH) \n\n(2) \n\nwhere ltV H is the H -th order Wronskian, \n\nIPH \n( 1) \nIPH \n(2) \n'PH \n\n(H-l) \n\n'PI \n\n(H-l) \n\n'P2 \n\n(H -1) \n\nIPH \n\nFor proof, see [4]. 
From this theorem, we have the following corollary.

Corollary 1  Let g(x) be a C^H-class function. Then the following conditions on g(x) and w = {w_i} are equivalent.
(1) There exists a set c = {c_i} such that g(x) = \sum_{i=1}^{H} c_i \varphi_{w_i}(x).
(2) (D_w g)(x) = 0.

Example 1  Let us consider the case \varphi_{w_i}(x) = exp(w_i x). Then

    g(x) = \sum_{i=1}^{H} c_i \exp(w_i x)

is equivalent to {D^H + p_1 D^{H-1} + p_2 D^{H-2} + ... + p_H} g(x) = 0, where D = d/dx and the set {p_i} is determined from {w_i} by the relation

    z^H + p_1 z^{H-1} + p_2 z^{H-2} + ... + p_H = \prod_{i=1}^{H} (z - w_i)        (for all z in C).

Example 2 (RBF)  A function g(x) is given by radial basis functions,

    g(x) = \sum_{i=1}^{H} c_i \exp(-(x - w_i)^2),

if and only if e^{-x^2} {D^H + p_1 D^{H-1} + p_2 D^{H-2} + ... + p_H} (e^{x^2} g(x)) = 0, where the set {p_i} is determined from {w_i} by the relation

    z^H + p_1 z^{H-1} + p_2 z^{H-2} + ... + p_H = \prod_{i=1}^{H} (z - 2 w_i)        (for all z in C).

Figure 1 shows a learning algorithm for the solvable models. When a target function g(x) is given, let us consider the following function approximation problem,

    g(x) = \sum_{i=1}^{H} c_i \varphi_{w_i}(x) + \epsilon(x).        (3)

Learning in the neural network means optimizing both {c_i} and {w_i} so that \epsilon(x) is minimized with respect to some error function. From the definition of D_w, eq. (3) is equivalent to (D_w g)(x) = (D_w \epsilon)(x), where the term (D_w g)(x) is independent of {c_i}. Therefore, if we adopt ||D_w \epsilon|| as the error function to be minimized, {w_i} is optimized by minimizing ||D_w g||, independently of {c_i}, where ||f||^2 = \int |f(x)|^2 dx. After ||D_w g|| is minimized, we have (D_{w*} g)(x) \approx 0, where w* is the optimized parameter. From Corollary 1, there exists a set {c_i*} such that g(x) \approx \sum c_i* \varphi_{w_i*}(x), where {c_i*} can be found using the ordinary least squares method.

3 Solvable Models

For a general function \varphi_w, the differential operator D_w does not always have as simple a form as in the above examples. In this section, we consider a linear operator L such that the differential equation for L\varphi_w has a simple form.

Definition  A neural network \sum c_i \varphi_{w_i}(x) is called solvable if there exist functions a, b and a linear operator L such that

    (L \varphi_{w_i})(x) = \exp(a(w_i) x + b(w_i)).

The following theorem shows that the optimal parameter of the solvable models can be found using the same algorithm as in Figure 1.

[Figure 1: The learning algorithm for the solvable models. For g(x) = \sum_{i=1}^{H} c_i \varphi_{w_i}(x) + \epsilon(x), it is difficult to optimize w_i independently of c_i directly; the algorithm first optimizes {w_i}, after which there exists {c_i} such that g(x) \approx \sum c_i \varphi_{w_i}(x).]

Theorem 2  Suppose that the neural network \sum c_i \varphi_{w_i}(x) is solvable, and let g(x) = \sum_{i=1}^{H} c_i \varphi_{w_i}(x).
(1) There exists a set {p_i} such that {D^H + p_1 D^{H-1} + ... + p_H} (Lg)(x) = 0.
(2) For a sampling interval \alpha > 0, define the sequence {y_n} by y_n = (Lg)(n\alpha). Then there exists a set {q_i} such that y_n + q_1 y_{n-1} + q_2 y_{n-2} + ... + q_H y_{n-H} = 0.
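As a concrete numerical illustration (not part of the paper) of the sequential route in part (2) and of Theorem 3 stated below, the following sketch recovers {w_i} from samples for Example 1, where L is the identity and a(w) = w. The sampling interval, sample count, and true parameters are arbitrary choices.

```python
# Recover the hidden-unit weights of a sum of exponentials from samples
# y_n = (Lg)(n*alpha): fit the difference-equation coefficients {q_i} by
# least squares, then read exp(a(w_i)*alpha) off as polynomial roots.
import numpy as np

H, alpha = 3, 0.1
w_true = np.array([-1.0, 0.5, 2.0])
c_true = np.array([1.0, -2.0, 0.5])

# Samples of y_n; for Example 1, L is the identity and g is itself a sum of exponentials.
n = np.arange(30)
y = (c_true[None, :] * np.exp(np.outer(n * alpha, w_true))).sum(axis=1)

# y_n + q_1 y_{n-1} + ... + q_H y_{n-H} = 0 as a linear least-squares problem in q.
A = np.column_stack([y[H - k: len(y) - k] for k in range(1, H + 1)])
q, *_ = np.linalg.lstsq(A, -y[H:], rcond=None)

# Theorem 3: the roots of z^H + q_1 z^{H-1} + ... + q_H are exp(a(w_i)*alpha),
# and here a(w) = w.
roots = np.roots(np.concatenate(([1.0], q)))
print(np.sort(np.log(roots.real) / alpha))   # approximately [-1.0, 0.5, 2.0]
```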
Note that ||D_w (Lg)||^2 is a quadratic form in {p_i}, which is easily minimized by the least squares method. \sum_n |y_n + q_1 y_{n-1} + ... + q_H y_{n-H}|^2 is also a quadratic form in {q_i}.

Theorem 3  The sequences {w_i}, {p_i}, and {q_i} in Theorem 2 satisfy the following relations:

    z^H + p_1 z^{H-1} + p_2 z^{H-2} + ... + p_H = \prod_{i=1}^{H} (z - a(w_i))        (for all z in C),

    z^H + q_1 z^{H-1} + q_2 z^{H-2} + ... + q_H = \prod_{i=1}^{H} (z - \exp(a(w_i) \alpha))        (for all z in C).

For proofs of the above theorems, see [5]. These theorems show that, if {p_i} or {q_i} is optimized for a given function g(x), then {a(w_i)} can be found as the set of solutions of the corresponding algebraic equation.

Suppose that a target function g(x) is given. Then, from the above theorems, the globally optimal parameter w* = {w_i*} can be found by minimizing ||D_w (Lg)|| independently of {c_i}. Moreover, if the function a(w) is a one-to-one mapping, then w* exists and is unique up to permutation of {w_i*} if and only if the quadratic form ||{D^H + p_1 D^{H-1} + ... + p_H} g||^2 is not degenerate [4]. (Remark that, if it is degenerate, we can use another neural network with a smaller number of hidden units.)

Example 3  A neural network without scaling,

    f_{b,c}(x) = \sum_{i=1}^{H} c_i \sigma(x + b_i),        (4)

is solvable when (F\sigma)(x) \neq 0 (a.e.), where F denotes the Fourier transform. Define a linear operator L by (Lg)(x) = (Fg)(x) / (F\sigma)(x); then it follows that

    (L f_{b,c})(x) = \sum_{i=1}^{H} c_i \exp(-\sqrt{-1}\, b_i x).        (5)

By Theorem 2, the optimal {b_i} can be obtained by using either the differential or the sequential equation.

Example 4 (MLP)  A three-layered perceptron,

    f_{b,c}(x) = \sum_{i=1}^{H} c_i \tan^{-1}\left(\frac{x + b_i}{a_i}\right),        (6)

is solvable. Define a linear operator L by (Lg)(x) = x \cdot (Fg)(x); then it follows that

    (L f_{b,c})(x) = \sum_{i=1}^{H} c_i \exp(-(a_i + \sqrt{-1}\, b_i) x + \alpha(a_i, b_i))        (x \geq 0),        (7)

where \alpha(a_i, b_i) is some function of a_i and b_i. Since the function tan^{-1}(x) is monotone increasing and bounded, we can expect that a neural network given by eq. (6) has the same ability in the function approximation problem as the ordinary three-layered perceptron using the sigmoid function tanh(x).

Example 5 (Finite Wavelet Decomposition)  A finite wavelet decomposition,

    f_{b,c}(x) = \sum_{i=1}^{H} c_i \sigma\left(\frac{x + b_i}{a_i}\right),        (8)

is solvable when \sigma(x) = (d/dx)^n (1 / (1 + x^2)) (n \geq 1). Define a linear operator L by (Lg)(x) = x^{-n} \cdot (Fg)(x); then it follows that

    (L f_{b,c})(x) = \sum_{i=1}^{H} c_i \exp(-(a_i + \sqrt{-1}\, b_i) x + \beta(a_i, b_i))        (x \geq 0),        (9)

where \beta(a_i, b_i) is some function of a_i and b_i. Note that \sigma(x) is an analyzing wavelet, and this example shows how to optimize the parameters of a finite wavelet decomposition.

4 Learning Algorithm

We construct a learning algorithm for solvable models, as shown in Figure 1.
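To make the pipeline of Figure 1 concrete, here is a minimal end-to-end sketch (not part of the paper) for the RBF model of Example 2: the operator L is multiplication by exp(x^2), so (Lg)(x) is a sum of exponentials with a(w_i) = 2 w_i; {w_i} is recovered through the sequential equation of Theorem 2 and Theorem 3, and {c_i} by ordinary least squares. The sampling interval, sample count, and true parameters are arbitrary assumptions.

```python
# End-to-end recovery of an RBF network g(x) = sum_i c_i exp(-(x - w_i)^2)
# without any recursive optimization, following the flow of Figure 1.
import numpy as np

H, alpha = 2, 0.15
w_true = np.array([0.3, -0.8])
c_true = np.array([1.5, -0.7])

def g(x):
    return (c_true[None, :] * np.exp(-(x[:, None] - w_true[None, :]) ** 2)).sum(axis=1)

# Step 1: sample y_n = (Lg)(n*alpha) with (Lg)(x) = exp(x^2) * g(x).
n = np.arange(25)
x = n * alpha
y = np.exp(x ** 2) * g(x)

# Step 2: least squares for the difference-equation coefficients {q_i}.
A = np.column_stack([y[H - k: len(y) - k] for k in range(1, H + 1)])
q, *_ = np.linalg.lstsq(A, -y[H:], rcond=None)

# Step 3: Theorem 3 with a(w) = 2*w, so the roots are exp(2*w_i*alpha).
roots = np.roots(np.concatenate(([1.0], q)))
w_est = np.log(roots.real) / (2 * alpha)

# Step 4: {c_i} by ordinary least squares on the original RBF model.
Phi = np.exp(-(x[:, None] - w_est[None, :]) ** 2)
c_est, *_ = np.linalg.lstsq(Phi, g(x), rcond=None)
print(w_est, c_est)   # approximately {0.3, -0.8} with the matching {c_i}
```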