{"title": "Efficient Estimation of OOMs", "book": "Advances in Neural Information Processing Systems", "page_first": 555, "page_last": 562, "abstract": null, "full_text": "Efficient Estimation of OOMs\nHerbert Jaeger, Mingjie Zhao, Andreas Kolling\nInternational University Bremen, Bremen, Germany\nh.jaeger|m.zhao|a.kolling@iu-bremen.de\n\nAbstract\n\nA standard method to obtain stochastic models for symbolic time series is to train state-emitting hidden Markov models (SE-HMMs) with the Baum-Welch algorithm. Based on observable operator models (OOMs), a number of novel learning algorithms for similar purposes have recently been developed: (1, 2) two versions of an \"efficiency sharpening\" (ES) algorithm, which iteratively improves the statistical efficiency of a sequence of OOM estimators, and (3) a constrained gradient descent ML estimator for transition-emitting HMMs (TE-HMMs). We give an overview of these algorithms and compare them with SE-HMM/EM learning on synthetic and real-life data.\n\n1 Introduction\n\nStochastic symbol sequences with memory effects are frequently modelled by training hidden Markov models with the Baum-Welch variant of the EM algorithm. More specifically, state-emitting HMMs (SE-HMMs) are standardly employed, which emit observable events from hidden states. Known weaknesses of HMM training with Baum-Welch are long runtimes and proneness to getting trapped in local maxima. Over the last few years, an alternative to HMMs has been developed: observable operator models (OOMs). The class of processes that can be described by (finite-dimensional) OOMs properly includes the processes that can be described by (finite-dimensional) HMMs. OOMs identify the observable events a of a process with linear observable operators τ_a acting on a real vector space of predictive states w [1]. A basic learning algorithm for OOMs [2] estimates the observable operators τ_a by solving a linear system of learning equations. 
The learning algorithm is constructive, fast, and yields asymptotically correct estimates. Two problems have so far prevented OOMs from practical use: (i) poor statistical efficiency, and (ii) the possibility that the obtained models predict negative \"probabilities\" for some sequences. The first problem has recently been solved very satisfactorily [2]. In this novel approach to learning OOMs from data we iteratively construct a sequence of estimators whose statistical efficiency increases, which led us to call the method efficiency sharpening (ES). Another, somewhat neglected class of stochastic models is transition-emitting HMMs (TE-HMMs). TE-HMMs fall between SE-HMMs and OOMs w.r.t. expressiveness: TE-HMMs are equivalent to OOMs whose operator matrices are non-negative. Because TE-HMMs are frequently referred to as Mealy machines (actually a misnomer, because originally Mealy machines are not probabilistic but only non-deterministic), we have started to call non-negative OOMs \"Mealy OOMs\" (MOOMs). We use either name according to the way the models are represented. A variant of Baum-Welch has recently been described for TE-HMMs [3]. We have derived an alternative learning algorithm for MOOMs, constrained log gradient (CLG), which performs a constrained gradient descent on the log likelihood surface in the log model parameter space of MOOMs. In this article we give a compact introduction to the basics of OOMs (Section 2), outline the new ES and CLG algorithms (Sections 3 and 4), and compare their performance on a variety of datasets (Section 5). In the conclusion (Section 6) we also provide a pointer to a Matlab toolbox.\n\n2 Basics of OOMs\n\nLet (Ω, A, P, (X_n)_{n≥0}), or (X_n) for short, be a discrete-time stochastic process with values in a finite symbol set O = {a_1, ..., a_M}. We will consider only stationary processes here for notational simplicity; OOMs can equally model nonstationary processes. 
An m-dimensional OOM for (X_n) is a structure A = (R^m, (τ_a)_{a∈O}, w_0), where each observable operator τ_a is a real-valued m × m matrix and w_0 ∈ R^m is the starting state, provided that for any finite sequence a_{i_0} ... a_{i_n} it holds that\n\nP(X_0 = a_{i_0}, ..., X_n = a_{i_n}) = 1_m τ_{a_{i_n}} ··· τ_{a_{i_0}} w_0,   (1)\n\nwhere 1_m always denotes a row vector of units of length m (we drop the subscript if it is clear from the context). We will use the shorthand notation ā to denote a generic sequence and τ_ā to denote the concatenation of the corresponding operators in reverse order, which condenses (1) into P(ā) = 1 τ_ā w_0. Conversely, if a structure A = (R^m, (τ_a)_{a∈O}, w_0) satisfies\n\n(i) 1 w_0 = 1,   (ii) 1 (Σ_{a∈O} τ_a) = 1,   (iii) ∀ ā ∈ O*: 1 τ_ā w_0 ≥ 0,   (2)\n\n(where O* denotes the set of all finite sequences over O), then there exists a process whose distribution is described by A via (1). The process is stationary iff (Σ_{a∈O} τ_a) w_0 = w_0. Conditions (i) and (ii) are easy to check, but no efficient criterion is known to decide whether the non-negativity condition (iii) holds for a structure A (for recent progress on this problem, which is equivalent to a problem of general interest in linear algebra, see [4]). Models A learnt from data tend to marginally violate (iii); this is the unresolved non-negativity problem in the theory of OOMs. The state w_ā of an OOM after an initial history ā is obtained by normalizing τ_ā w_0 to unit component sum via w_ā = τ_ā w_0 / 1 τ_ā w_0. A fundamental (and nontrivial) theorem for OOMs characterizes the equivalence of two OOMs. Two m-dimensional OOMs A = (R^m, (τ_a)_{a∈O}, w_0) and Ã = (R^m, (τ̃_a)_{a∈O}, w̃_0) are defined to be equivalent if they generate the same probability distribution according to (1). By the equivalence theorem, A is equivalent to Ã if and only if there exists a transformation matrix ρ of size m × m, satisfying 1ρ = 1, such that τ̃_a = ρ τ_a ρ^{-1} for all symbols a. We mentioned in the Introduction that OOM states represent the future probability distribution of the process. 
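Evaluating (1) and the state update only requires iterated matrix-vector products. The following is a minimal sketch in Python/NumPy, using a small hypothetical 2-symbol, 2-dimensional OOM; the operator matrices and starting state are our own illustrative choices (they satisfy conditions (i)-(iii) of (2)), not an example from the paper:

```python
import numpy as np

# Hypothetical 2-symbol, 2-dimensional OOM: tau[a] is the observable operator
# for symbol a. Columns of tau["a"] + tau["b"] sum to 1 (condition (ii)),
# and 1 w0 = 1 (condition (i)).
tau = {
    "a": np.array([[0.5, 0.2], [0.1, 0.3]]),
    "b": np.array([[0.3, 0.1], [0.1, 0.4]]),
}
w0 = np.array([0.5, 0.5])  # starting state

def prob(seq, tau, w0):
    """P(a0 ... an) = 1 tau_{an} ... tau_{a0} w0, Eqn. (1)."""
    w = w0
    for a in seq:       # left-to-right pass over the sequence ...
        w = tau[a] @ w  # ... applies the operators in the required reverse order
    return w.sum()      # multiplication by the row vector of units

def state_after(seq, tau, w0):
    """w_abar = tau_abar w0 / (1 tau_abar w0), normalized to unit component sum."""
    w = w0
    for a in seq:
        w = tau[a] @ w
    return w / w.sum()
```

Since the entries of these particular operators are non-negative, condition (iii) holds automatically here; for a general OOM it would have to be checked separately. The vectors returned by state_after are exactly the predictive states w_ā.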
This can be algebraically captured in the notion of characterizers. Let A = (R^m, (τ_a)_{a∈O}, w_0) be an OOM for (X_n) and choose k such that κ = |O|^k ≥ m. Let b̄_1, ..., b̄_κ be the alphabetical enumeration of O^k. Then an m × κ matrix C is a characterizer of length k for A iff 1C = 1 (that is, C has unit column sums) and\n\n∀ ā ∈ O*: w_ā = C (P(b̄_1|ā), ..., P(b̄_κ|ā))^T,   (3)\n\nwhere ^T denotes the transpose and P(b̄|ā) is the conditional probability that the process continues with b̄ after an initial history ā. It can be shown [2] that every OOM has characterizers of length k for suitably large k. Intuitively, a characterizer \"bundles\" the length-k future distribution into the state vector by projection. If two equivalent OOMs A, Ã are related by τ̃_a = ρ τ_a ρ^{-1}, and C is a characterizer for A, it is easy to check that ρC is a characterizer for Ã. We conclude this section by explaining the basic learning equations. An analysis of (1) reveals that for any state w_ā and operator τ_a from an OOM it holds that\n\nτ_a w_ā = P(a|ā) w_{āa},   (4)\n\nwhere āa is the concatenation of ā with a. The vectors w_ā and P(a|ā) w_{āa} thus form an argument-value pair for τ_a. Let ā_1, ..., ā_l be a finite sequence of finite sequences over O, and let V = (w_{ā_1} ... w_{ā_l}) be the matrix containing the corresponding state vectors. Let again C be an m × κ sized characterizer of length k and b̄_1, ..., b̄_κ be the alphabetical enumeration of O^k. Let V̄ = (P(b̄_i|ā_j)) be the κ × l matrix containing the conditional continuation probabilities of the initial sequences ā_j by the sequences b̄_i. It is easy to see that V = C V̄. Likewise, let W_a = (P(a|ā_1) w_{ā_1 a} ... P(a|ā_l) w_{ā_l a}) contain the vectors corresponding to the rhs of (4), and let W̄_a = (P(a b̄_i|ā_j)) be the analog of V̄. It is easily verified that W_a = C W̄_a. Furthermore, by construction it holds that τ_a V = W_a. A linear operator on R^m is uniquely determined by l ≥ m argument-value pairs provided there are at least m linearly independent argument vectors in these pairs. 
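To make Eqn. (3) concrete: for a 2-symbol, 2-dimensional example with k = 1 (so κ = m = 2), a characterizer can be obtained in closed form, since the i-th entry of the future distribution is 1 τ_{b̄_i} w_ā, so (3) forces C·M = I, where the i-th row of M is 1 τ_{b̄_i}. The OOM below is hypothetical and the construction is only meant to illustrate the defining property:

```python
import numpy as np

# Hypothetical 2-symbol OOM (same illustrative matrices as above).
tau = {"a": np.array([[0.5, 0.2], [0.1, 0.3]]),
       "b": np.array([[0.3, 0.1], [0.1, 0.4]])}
w0 = np.array([0.5, 0.5])

def state(seq):
    w = w0
    for s in seq:
        w = tau[s] @ w
    return w / w.sum()

# Row i of M holds the column sums of tau_{b_i}, i.e. 1 tau_{b_i}; then
# (P(a|abar), P(b|abar))^T = M w_abar, and (3) holds for all states iff C M = I.
M = np.array([tau["a"].sum(axis=0), tau["b"].sum(axis=0)])
C = np.linalg.inv(M)

assert np.allclose(C.sum(axis=0), 1.0)   # 1C = 1: unit column sums
for abar in ["a", "b", "ab", "ba", "aab"]:
    w = state(abar)
    p = M @ w                            # length-1 future distribution after abar
    assert np.allclose(C @ p, w)         # Eqn. (3)
```

Note that 1C = 1 comes for free here: condition (ii) of (2) makes the columns of M sum to 1, hence 1 M^{-1} = 1.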
Thus, if a characterizer C is found such that V = C V̄ has rank m, the operators τ_a of an OOM characterized by C are uniquely determined by V and the matrices W̄_a via τ_a = W_a V^† = C W̄_a (C V̄)^†, where ^† denotes the pseudo-inverse. Now, given a training sequence S, the conditional continuation probabilities P(b̄_i|ā_j), P(a b̄_i|ā_j) that make up V̄, W̄_a can be estimated from S by an obvious counting scheme, yielding estimates P̂(b̄_i|ā_j), P̂(a b̄_i|ā_j) for making up the estimates V̄̂ and W̄̂_a, respectively. This leads to the general form of the OOM learning equations:\n\nτ̂_a = C W̄̂_a (C V̄̂)^†.   (5)\n\nIn words, to learn an OOM from S, first fix a model dimension m, a characterizer C, and indicative sequences ā_1, ..., ā_l; then construct the estimates V̄̂ and W̄̂_a by frequency counting, and finally use (5) to obtain estimates of the operators. This estimation procedure is asymptotically correct in the sense that, if the training data were generated by an m-dimensional OOM in the first place, this generator will almost surely be perfectly recovered as the size of the training data goes to infinity. The reason is that the estimates V̄̂ and W̄̂_a converge almost surely to V̄ and W̄_a. The starting state can be recovered from the estimated operators by exploiting (Σ_{a∈O} τ̂_a) w_0 = w_0, or directly from C and V̄̂ (see [2] for details).\n\n3 The ES Family of Learning Algorithms\n\nAll learning algorithms based on (5) are asymptotically correct (which EM algorithms are not, by the way), but their statistical efficiency (model variance) depends crucially on (i) the choice of indicative sequences ā_1, ..., ā_l and (ii) the characterizer C (assuming that the model dimension m is determined by other means, e.g. by cross-validation). We will first address (ii) and describe an iterative scheme to obtain characterizers that lead to a low model variance. The choice of C has a twofold impact on model variance. 
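Before examining these two effects in turn, the counting scheme behind Eqn. (5) can be sketched concretely. The sketch below uses a 2-symbol alphabet, indicative and characterizing sequences of length 1, and C = I (all hypothetical choices made only for brevity); a deterministic alternating training string is used so that the resulting estimates are exact:

```python
import numpy as np
from numpy.linalg import pinv

O = ["a", "b"]  # the alphabet of this toy example

def cond_prob(S, prefix, suffix):
    """Estimate P(suffix | prefix) by counting occurrences in the training string S."""
    num = den = 0
    for t in range(len(S) - len(prefix) - len(suffix) + 1):
        if S[t:t + len(prefix)] == prefix:
            den += 1
            if S[t + len(prefix):t + len(prefix) + len(suffix)] == suffix:
                num += 1
    return num / den if den else 0.0

def estimate_operators(S, C):
    """Eqn. (5): tau_a = C Wbar_a (C Vbar)^+ with k = 1 and abar_j, bbar_i in O."""
    indicative = O  # abar_1, ..., abar_l
    chars = O       # bbar_1, ..., bbar_kappa
    Vbar = np.array([[cond_prob(S, aj, bi) for aj in indicative] for bi in chars])
    taus = {}
    for a in O:
        Wbar_a = np.array([[cond_prob(S, aj, a + bi) for aj in indicative]
                           for bi in chars])
        taus[a] = (C @ Wbar_a) @ pinv(C @ Vbar)
    return taus
```

For S = "abab...ab" this recovers the (deterministic) alternating process: the estimated τ_a maps the "after b" state to the "after a" state and annihilates the other, and symmetrically for τ_b.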
First, the pseudo-inverse operation in (5) blows up variation in C V̄̂ depending on the condition number of this matrix. Thus, C should be chosen such that the condition number of C V̄̂ gets close to 1. This strategy was pioneered in [5], which obtained the first halfway statistically satisfactory learning procedures. In contrast, here we set out from the second mechanism by which C influences model variance, namely, we choose C such that the variance of C V̄̂ itself is minimized. We need a few algebraic preparations. First, observe that if some characterizer C is used with (5), obtaining a model Â, and ρ is an OOM equivalence transformation, then if C̃ = ρC is used with (5), the obtained model is an equivalent version of Â via ρ. Furthermore, it is easy to see [2] that two characterizers C_1, C_2 characterize the same OOM iff C_1 V̄ = C_2 V̄. We call two characterizers similar if this holds, and write C_1 ∼ C_2. Clearly C_1 ∼ C_2 iff C_2 = C_1 + G for some G satisfying G V̄ = 0 and 1G = 0. That is, the similarity equivalence class of some characterizer C is the set {C + G | G V̄ = 0, 1G = 0}. Together with the first observation, this implies that we may confine our search for \"good\" characterizers to a single (and arbitrary) such equivalence class of characterizers. Let C_0 in the remainder be a representative of an arbitrarily chosen similarity class whose members all characterize A. In [2] it is explained that the variance of C V̄̂ is monotonically tied to Σ_{i=1,...,κ; j=1,...,l} P(ā_j b̄_i) ‖w_{ā_j} − C(:,i)‖², where C(:,i) is the i-th column of C. This observation allows us to determine an optimal characterizer C_opt (minimal variance of C V̄̂ within the equivalence class of C_0) as the solution to the following minimization problem:\n\nC_opt = C_0 + G_opt, where G_opt = argmin_G Σ_{i=1,...,κ; j=1,...,l} P(ā_j b̄_i) ‖w_{ā_j} − (C_0 + G)(:,i)‖²   (6)\n\nunder the constraints G V̄ = 0 and 1G = 0. 
This problem can be analytically solved [2] and has a surprising and beautiful solution, which we now explain. In a nutshell, C_opt is composed column-wise of certain states of a time-reversed version of A. We first describe time-reversal of OOMs in more detail. Given an OOM A = (R^m, (τ_a)_{a∈O}, w_0) with induced probability distribution P_A, its reverse OOM A^r = (R^m, (τ^r_a)_{a∈O}, w^r_0) is characterized by a probability distribution P_{A^r} satisfying\n\n∀ a_0 ... a_n ∈ O*: P_A(a_0 ... a_n) = P_{A^r}(a_n ... a_0).   (7)\n\nA reverse OOM can easily be computed from the \"forward\" OOM as follows. If A = (R^m, (τ_a)_{a∈O}, w_0) is an OOM for a stationary process, and w_0 has no zero entry, then\n\nA^r = (R^m, (D τ_a^T D^{-1})_{a∈O}, w_0)   (8)\n\nis a reverse OOM to A, where D = diag(w_0) is the diagonal matrix with w_0 on its diagonal. Now let b̄_1, ..., b̄_κ again be the sequences employed in V̄. Let A^r = (R^m, (τ^r_a)_{a∈O}, w_0) be the reverse OOM to A, which was characterized by C_0. Furthermore, for b̄_i = b_1 ... b_k let w̄^r_i = τ^r_{b_1} ··· τ^r_{b_k} w_0 / 1 τ^r_{b_1} ··· τ^r_{b_k} w_0. Then C̄ = (w̄^r_1 ... w̄^r_κ) is a characterizer for an OOM equivalent to A. C̄ can effectively be transformed into a characterizer C^r for A by C^r = ρ C̄, where\n\nρ = ( C̄ (1 τ_{b̄_1}; ...; 1 τ_{b̄_κ}) )^{-1},   (9)\n\nthat is, ρ inverts the product of C̄ with the κ × m matrix whose i-th row is 1 τ_{b̄_i}. We call C^r the reverse characterizer of A, because it is composed of the states of a reverse OOM to A. The analytical solution to (6) turns out to be [2]\n\nC_opt = C^r.   (10)\n\nTo summarize, within a similarity class of characterizers, the one which minimizes model variance is the (unique) reverse characterizer in this class. It can be cheaply computed from the \"forward\" OOM via (8) and (9). This analytical finding suggests the following generic, iterative procedure to obtain characterizers that minimize model variance:\n\n1. Setup. Choose a model dimension m and a characterizer length k. Compute V̄̂, W̄̂_a from the training string S.\n\n2. Initialization. 
Estimate an initial model Â(0) with some \"classical\" OOM estimation method (a refined such method is detailed in [2]).\n\n3. Efficiency sharpening iteration. Assume that Â(n) is given. Compute its reverse characterizer Ĉ^r(n+1). Use this in (5) to obtain a new model estimate Â(n+1).\n\n4. Termination. Terminate when the training log-likelihoods of the models Â(n) appear to settle on a plateau.\n\nThe rationale behind this scheme is that the initial model Â(0) is obtained essentially from an uninformed, ad hoc characterizer, for which one has to expect a large model variation and thus (on average) a poor Â(0). However, the characterizer Ĉ^r(1) obtained from the reversed Â(0) is no longer uninformed but is shaped by a reasonable reverse model. Thus the estimator producing Â(1) can be expected to produce a model closer to the correct one due to its improved efficiency, and so on. Notice that this does not guarantee convergence of the models, nor any monotonic development of any performance parameter in the obtained model sequence. In fact, the training log likelihood of the model sequence typically shoots to a plateau level in about 2 to 5 iterations, after which it starts to jitter about this level, only slowly coming to rest or even not stabilizing at all; it is sometimes observed that the log likelihood enters a small-amplitude oscillation around the plateau level. An analytical understanding of the asymptotic learning dynamics cannot currently be offered. We have developed two specific instantiations of the general ES learning scheme, differentiated by the set of indicative sequences used. The first simply uses l = κ and ā_1, ..., ā_l = b̄_1, ..., b̄_κ, which leads to a computationally very cheap iterated recomputation of (5) with updated reverse characterizers. We call this the \"poor man's\" ES algorithm. 
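The reverse-OOM construction (8), on which each ES iteration relies, is easy to check numerically. The sketch below uses the same hypothetical 2-symbol non-negative OOM as before, computes the stationary w_0 as the eigenvector of Σ_a τ_a for eigenvalue 1, forms the reverse operators D τ_a^T D^{-1}, and verifies the reversal property (7):

```python
import numpy as np

# Hypothetical 2-symbol OOM; for Eqn. (8) we need a stationary, strictly
# positive starting state: (tau_a + tau_b) w0 = w0, 1 w0 = 1.
tau = {"a": np.array([[0.5, 0.2], [0.1, 0.3]]),
       "b": np.array([[0.3, 0.1], [0.1, 0.4]])}
mu = tau["a"] + tau["b"]
vals, vecs = np.linalg.eig(mu)
w0 = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
w0 = w0 / w0.sum()                      # normalize (also fixes the sign)

D = np.diag(w0)
tau_rev = {a: D @ tau[a].T @ np.linalg.inv(D) for a in tau}   # Eqn. (8)

def prob(seq, ops, w):
    """P(a0 ... an) = 1 tau_{an} ... tau_{a0} w, as in Eqn. (1)."""
    for a in seq:
        w = ops[a] @ w
    return w.sum()

# Eqn. (7): the reverse OOM assigns to the mirrored sequence the same probability.
assert abs(prob("aab", tau, w0) - prob("baa", tau_rev, w0)) < 1e-12
```

Here the stationary state is w_0 = (0.6, 0.4); since all its entries are positive, D is invertible as required.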
The statistical efficiency of the poor man's ES algorithm is impaired by the fact that only the counting statistics of subsequences of length 2k are exploited. The other ES instantiation exploits the statistics of all subsequences in the original training string. It is technically rather involved and rests on a suffix tree (ST) representation of S. We can only give a coarse sketch here (details in [2]). In each iteration, the current reverse model is run backwards through S and the obtained reverse states are additively collected bottom-up in the nodes of the ST. From the ST nodes the collected states are then harvested into matrices corresponding directly to C V̄̂ and C W̄̂_a; that is, an explicit computation of the reverse characterizer is not required. This method incurs a computational load per iteration which is somewhat lower than Baum-Welch for SE-HMMs (because only a backward pass of the current model has to be computed), plus the required initial ST construction, which is linear in the size of S.\n\n4 The CLG Algorithm\n\nWe must be very brief here due to space limitations; the CLG algorithm will be detailed in a future paper. It is an iterative update scheme for the matrix parameters [τ_a]_{ij} of a MOOM. The scheme is analytically derived as a gradient descent in the model log likelihood surface over the log space of these matrix parameters, observing the non-negativity constraints on these parameters and the general OOM constraints (i) and (ii) from Eqn. (2). Note that constraint (iii) from (2) is automatically satisfied in MOOMs. We skip the derivation of the CLG scheme and describe only its \"mechanics\". Let S = s_1 ... s_N be the training string and for 1 ≤ k ≤ N define ā_k = s_1 ... s_k, b̄_k = s_{k+1} ... s_N. 
Define for an m-dimensional MOOM and a ∈ O\n\ny_a = Σ_{k: s_k = a} ( (1 τ_{b̄_k})^T (w_{ā_{k-1}})^T ) / ( (1 τ_{b̄_k} w_{ā_k}) (1 τ_{s_k} w_{ā_{k-1}}) ),   y_0 = max_{i,j,a} {[y_a]_{ij}},   [y_{a0}]_{ij} = [y_a]_{ij} / y_0.   (11)\n\nThen the update equation is\n\n[τ̂⁺_a]_{ij} = ν_j [τ̂_a]_{ij} ([y_{a0}]_{ij})^λ,   (12)\n\nwhere τ̂⁺_a is the new estimate of τ_a, the ν_j are normalization parameters determined by constraint (ii) from Eqn. (2), and λ is a learning rate which here unconventionally appears in the exponent because the gradient descent is carried out in the log parameter space. Note that by (12) [τ̂⁺_a]_{ij} remains non-negative if [τ̂_a]_{ij} is. This update scheme is derived in a way that is unrelated to the derivation of the EM algorithm; to our surprise we found that for λ = 1, (12) is equivalent to the Baum-Welch algorithm for TE-HMMs. However, significantly faster convergence is achieved with non-unit λ; in the experiments carried out so far, a value close to 2 was heuristically found to work best.\n\n5 Numerical Comparisons\n\nWe compared the poor man's ES algorithm, the suffix-tree based ES algorithm, the CLG algorithm and the standard SE-HMM/Baum-Welch method on four different types of data, which were generated by (a) randomly constructed, 10-dimensional, 5-symbol SE-HMMs, (b) randomly constructed, 10-dimensional, 5-symbol MOOMs, (c) a 3-dimensional, 2-symbol OOM which is equivalent to neither an HMM nor a MOOM (the \"probability clock\" process [2]), and (d) a belletristic text (Mark Twain's short story \"The 1,000,000 Pound Note\"). For each of (a) and (b), 40 experiments were carried out with freshly constructed generators per experiment; a training string of length 1000 and a test string of length 10000 were produced from each generator. For (c), likewise 40 experiments were carried out with freshly generated training/testing sequences of the same lengths as before; here, however, the generator was identical for all experiments. For (a)-(c), the results reported below are averaged over the 40 experiments. 
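Returning briefly to the CLG update of Section 4: Eqns. (11)-(12) can be sketched in a few lines. The code below is a hedged reconstruction of one CLG iteration; the example operators, the training string, the helper names, and the per-column renormalization implementing constraint (ii) are our own illustrative choices, not taken from the paper:

```python
import numpy as np

def clg_step(tau, w0, S, lam=1.85):
    """One CLG iteration, Eqns. (11)-(12): multiplicative update, lam in the exponent."""
    m = len(w0)
    # forward pass: normalized states w_{abar_k}, k = 0..N
    w = [w0]
    for s in S:
        v = tau[s] @ w[-1]
        w.append(v / v.sum())
    # backward pass: row vectors 1 tau_{bbar_k}, k = 0..N (bbar_N is the empty suffix)
    sig = [np.ones(m)]
    for s in reversed(S):
        sig.append(sig[-1] @ tau[s])
    sig.reverse()                       # sig[k] = 1 tau_{s_{k+1} ... s_N}
    # y_a of Eqn. (11): outer products accumulated over the positions with s_k = a
    y = {a: np.zeros((m, m)) for a in tau}
    for k in range(1, len(S) + 1):
        s = S[k - 1]
        num = np.outer(sig[k], w[k - 1])
        den = (sig[k] @ w[k]) * (tau[s] @ w[k - 1]).sum()
        y[s] += num / den
    y0 = max(y[a].max() for a in tau)
    new = {a: tau[a] * (y[a] / y0) ** lam for a in tau}
    # nu_j of Eqn. (12): rescale columns so that 1 (sum_a tau_a) = 1, constraint (ii)
    col = sum(new.values()).sum(axis=0)
    return {a: new[a] / col for a in new}

# illustrative non-negative OOM (MOOM) and training string
tau = {"a": np.array([[0.5, 0.2], [0.1, 0.3]]),
       "b": np.array([[0.3, 0.1], [0.1, 0.4]])}
w0 = np.array([0.5, 0.5])
tau1 = clg_step(tau, w0, "abbaabab", lam=1.0)
```

With lam=1.0 this corresponds to the Baum-Welch-equivalent case mentioned above; the updated operators remain non-negative and satisfy constraint (ii) by construction.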
For the (d) dataset, after preprocessing which shrunk the number of different symbols to 27, the original string was sorted sentence-wise into a training and a testing string, each of length 21000 (details in [2]). The following settings were used with the various training methods. (i) The poor man's ES algorithm was used with a length k = 2 of indicative sequences on all datasets. Two ES iterations were carried out and the model of the last iteration was used to compute the reported log likelihoods. (ii) For the suffix-tree based ES algorithm, on datasets (a)-(c), likewise two ES iterations were done and the model from the iteration with the lowest (reverse) training LL was used for reporting. On dataset (d), 4 ES iterations were run and similarly the model with the best reverse training LL was chosen. (iii) In the MOOM studies, a learning rate of λ = 1.85 was used. Iterations were stopped when two consecutive training LLs differed by less than 5e-5% or after 100 iterations. (iv) For HMM/Baum-Welch training, the public-domain implementation provided by Kevin Murphy was used. Iterations were stopped after 100 steps or if LLs differed by less than 1e-5%. All computations were done in Matlab on 2 GHz PCs, except the HMM training on dataset (d), which was done on a 330 MHz machine (the reported CPU times were scaled by 330/2000 to make them comparable with the other studies). Figure 1 shows the training and testing log likelihoods as well as the CPU times for all methods and datasets.\n\n[Figure 1 appears here; the plotted numeric values are omitted.]\n\nFigure 1: Findings for datasets (a)-(d). 
In each panel, the left y-axis shows log likelihoods for training and testing (testing LL normalized to training string length) and the right y-axis measures the log_10 of CPU times. HMM models are documented in solid/black lines, poor man's ES models in dotted/magenta, suffix-tree ES models in broken/blue, and MOOMs in dash-dotted/red. The thickest lines in each panel show training LL, the thinnest CPU time, and the intermediate ones testing LL. The x-axes indicate model dimension. On dataset (c), no results of the poor man's algorithm are given because the learning equations became ill-conditioned for all but the lowest dimensions. Some comments on Fig. 1. (1) The CPU times roughly exhibit an even log spread over almost 2 orders of magnitude, in the order poor man's (fastest), suffix-tree ES, CLG, Baum-Welch. (2) CLG attains the best training LL throughout, which needs an explanation because the proper OOMs trained by ES are more expressive. Apparently the ES algorithm does not lead to local ML optima; otherwise the suffix-tree ES models should show the best training LL. (3) On the HMM-generated data (a), Baum-Welch HMMs can play out their natural bias for this sort of data and achieve a lower test error than the other methods. (4) On the MOOM data (b), the test LL of the MOOM/CLG and OOM/poor-man models of dimension 2 equals the best HMM/Baum-Welch test LL, which is attained at a dimension of 4; the OOM/suffix-tree test LL at dimension 2 is superior to the best HMM test LL. (5) On the \"probability clock\" data (c), the suffix-tree ES trained OOMs surpassed the non-OOM models in test LL, with the optimal value obtained at the (correct) model dimension 3. This comes as no surprise because these data come from a generator that is incommensurable with either HMMs or MOOMs. (6) On the large empirical dataset (d), the CLG/MOOMs have by a fair margin the highest training LL, but their test LL quickly drops to unacceptable lows. 
It is hard to explain this by overfitting, considering the complexity and the size of the training string. The other three types of models are evenly ordered in both training and testing error from HMMs (poorest) to suffix-tree ES trained OOMs. Overfitting does not occur up to the maximal dimension investigated. Depending on whether one wants a very fast algorithm with good train/test LL, or a fast algorithm with very good train/test LL, one would here choose the poor man's or the suffix-tree ES algorithm as the winner. (7) One detail in panel (d) needs an explanation: the CPU time for the suffix-tree ES has an isolated peak at the smallest dimension. This is due to the construction of the suffix tree, which was built only for the smallest dimension and re-used later.\n\n6 Conclusion\n\nWe presented, in a sadly condensed fashion, three novel learning algorithms for symbol dynamics. A detailed treatment of the efficiency sharpening algorithm is given in [2], and a Matlab toolbox for it can be fetched from http://www.faculty.iu-bremen.de/hjaeger/OOM/OOMTool.zip. The numerical investigations reported here were done using this toolbox. Our numerical simulations demonstrate that there is an altogether new world of faster and often statistically more efficient algorithms for sequence modelling than Baum-Welch/SE-HMMs. The topics that we will address next in our research group are (i) a mathematical analysis of the asymptotic behaviour of the ES algorithms, (ii) online adaptive versions of these algorithms, and (iii) versions of the ES algorithms for nonstationary time series.\n\nReferences\n\n[1] M. L. Littman, R. S. Sutton, and S. Singh. Predictive representations of state. In Advances in Neural Information Processing Systems 14 (Proc. NIPS 01), pages 1555-1561, 2001. http://www.eecs.umich.edu/baveja/Papers/psr.pdf.\n\n[2] H. Jaeger, M. Zhao, K. Kretzschmar, T. Oberstein, D. Popovici, and A. Kolling. Learning observable operator models via the ES algorithm. In S. Haykin, J. 
Principe, T. Sejnowski, and J. McWhirter, editors, New Directions in Statistical Signal Processing: from Systems to Brains, chapter 20. MIT Press, to appear in 2005.\n\n[3] H. Xue and V. Govindaraju. Stochastic models combining discrete symbols and continuous attributes in handwriting recognition. In Proc. DAS 2002, 2002.\n\n[4] R. Edwards, J. J. McDonald, and M. J. Tsatsomeros. On matrices with common invariant cones with applications in neural and gene networks. Linear Algebra and its Applications, in press, 2004 (online version). http://www.math.wsu.edu/math/faculty/tsat/files/emt.pdf.\n\n[5] K. Kretzschmar. Learning symbol sequences with Observable Operator Models. GMD Report 161, Fraunhofer Institute AIS, 2003. http://omk.sourceforge.net/files/OomLearn.pdf.\n", "award": [], "sourceid": 2872, "authors": [{"given_name": "Herbert", "family_name": "Jaeger", "institution": null}, {"given_name": "Mingjie", "family_name": "Zhao", "institution": null}, {"given_name": "Andreas", "family_name": "Kolling", "institution": null}]}