{"title": "The Forgetron: A Kernel-Based Perceptron on a Fixed Budget", "book": "Advances in Neural Information Processing Systems", "page_first": 259, "page_last": 266, "abstract": null, "full_text": "The Forgetron: A Kernel-Based Perceptron on a Fixed Budget\n\nOfer Dekel Shai Shalev-Shwartz Yoram Singer School of Computer Science & Engineering The Hebrew University, Jerusalem 91904, Israel { oferd,shais,singer} @cs.huji.ac.il\n\nAbstract\nThe Perceptron algorithm, despite its simplicity, often performs well on online classification tasks. The Perceptron becomes especially effective when it is used in conjunction with kernels. However, a common difficulty encountered when implementing kernel-based online algorithms is the amount of memory required to store the online hypothesis, which may grow unboundedly. In this paper we present and analyze the Forgetron algorithm for kernel-based online learning on a fixed memory budget. To our knowledge, this is the first online learning algorithm which, on one hand, maintains a strict limit on the number of examples it stores while, on the other hand, entertains a relative mistake bound. In addition to the formal results, we also present experiments with real datasets which underscore the merits of our approach.\n\n1 Introduction\nThe introduction of the Support Vector Machine (SVM) [8] sparked a widespread interest in kernel methods as a means of solving (binary) classification problems. Although SVM was initially stated as a batch-learning technique, it significantly influenced the development of kernel methods in the online-learning setting. Online classification algorithms that can incorporate kernels include the Perceptron [6], ROMMA [5], ALMA [3], NORMA [4], Ballseptron [7], and the Passive-Aggressive family of algorithms [1]. Each of these algorithms observes examples in a sequence of rounds, and constructs its classification function incrementally, by storing a subset of the observed examples in its internal memory. 
The classification function is then defined by a kernel-dependent combination of the stored examples. This set of stored examples is the online equivalent of the support set of SVMs; however, in contrast to the support set, it continually changes as learning progresses. In this paper, we call this set the active set, as it includes those examples that actively define the current classifier. Typically, an example is added to the active set every time the online algorithm makes a prediction mistake, or when its confidence in a prediction is inadequately low. A rapid growth of the active set can lead to significant computational difficulties. Naturally, since computing devices have bounded memory resources, there is the danger that an online algorithm would require more memory than is physically available. This problem becomes especially acute in cases where the online algorithm is implemented as part of a specialized hardware system with a small memory, such as a mobile telephone or an autonomous robot. Moreover, an excessively large active set can lead to unacceptably long running times, as the time-complexity of each online round scales linearly with the size of the active set. Crammer, Kandola, and Singer [2] first addressed this problem by describing an online kernel-based modification of the Perceptron algorithm in which the active set does not exceed a predefined budget. Their algorithm removes redundant examples from the active set so as to make the best use of the limited memory resource. Weston, Bordes and Bottou [9] followed with their own online kernel machine on a budget. Both techniques work relatively well in practice; however, they both lack a theoretical guarantee on their prediction accuracy. In this paper we present the Forgetron algorithm for online kernel-based classification. To the best of our knowledge, the Forgetron is the first online algorithm with a fixed memory budget which also entertains a formal worst-case mistake bound. 
We name our algorithm the Forgetron since its update builds on that of the Perceptron and since it gradually forgets active examples as learning progresses. This paper is organized as follows. In Sec. 2 we begin with a more formal presentation of our problem and discuss some difficulties in proving mistake bounds for kernel methods on a budget. In Sec. 3 we present an algorithmic framework for online prediction with a predefined budget of active examples. Then in Sec. 4 we derive a concrete algorithm within this framework and analyze its performance. Formal proofs of our claims are omitted due to the lack of space. Finally, we present an empirical evaluation of our algorithm in Sec. 5.\n\n2 Problem Setting\nOnline learning is performed in a sequence of consecutive rounds. On round t the online algorithm observes an instance x_t, which is drawn from some predefined instance domain X. The algorithm predicts the binary label associated with that instance and is then provided with the correct label y_t ∈ {-1, +1}. At this point, the algorithm may use the instance-label pair (x_t, y_t) to improve its prediction mechanism. The goal of the algorithm is to correctly predict as many labels as possible. The predictions of the online algorithm are determined by a hypothesis which is stored in its internal memory and is updated from round to round. We denote the hypothesis used on round t by f_t. Our focus in this paper is on margin-based hypotheses, namely, f_t is a function from X to R where sign(f_t(x_t)) constitutes the actual binary prediction and |f_t(x_t)| is the confidence in this prediction. The term y f(x) is called the margin of the prediction and is positive whenever y and sign(f(x)) agree. We can evaluate the performance of a hypothesis on a given example (x, y) in one of two ways. First, we can check whether the hypothesis makes a prediction mistake, namely determine whether y = sign(f(x)) or not. 
Throughout this paper, we use M to denote the number of prediction mistakes made by an online algorithm on a sequence of examples (x_1, y_1), ..., (x_T, y_T). The second way we evaluate the predictions of a hypothesis is by using the hinge-loss function, defined as,\n\nℓ(f; (x, y)) = 0 if y f(x) ≥ 1, and ℓ(f; (x, y)) = 1 - y f(x) otherwise. (1)\n\nThe hinge-loss penalizes a hypothesis for any margin less than 1. Additionally, if y ≠ sign(f(x)) then ℓ(f; (x, y)) ≥ 1 and therefore the cumulative hinge-loss suffered over a sequence of examples upper bounds M. The algorithms discussed in this paper use kernel-based hypotheses that are defined with respect to a kernel operator K : X × X → R which adheres to Mercer's positivity conditions [8]. A kernel-based hypothesis takes the form,\n\nf(x) = ∑_{i=1}^{k} α_i K(x_i, x), (2)\n\nwhere x_1, ..., x_k are members of X and α_1, ..., α_k are real weights. To facilitate the derivation of our algorithms and their analysis, we associate a reproducing kernel Hilbert space (RKHS) with K in the standard way common to all kernel methods. Formally, let H_K be the closure of the set of all hypotheses of the form given in Eq. (2). For any two functions, f(x) = ∑_{i=1}^{k} α_i K(x_i, x) and g(x) = ∑_{j=1}^{l} β_j K(z_j, x), define the inner product between them to be ⟨f, g⟩ = ∑_{i=1}^{k} ∑_{j=1}^{l} α_i β_j K(x_i, z_j). This inner product naturally induces a norm defined by ‖f‖ = ⟨f, f⟩^{1/2} and a metric ‖f - g‖ = (⟨f, f⟩ - 2⟨f, g⟩ + ⟨g, g⟩)^{1/2}. These definitions play an important role in the analysis of our algorithms. Online kernel methods typically restrict themselves to hypotheses that are defined by some subset of the examples observed in previous rounds. That is, the hypothesis used on round t takes the form, f_t(x) = ∑_{i∈I_t} α_i K(x_i, x), where I_t is a subset of {1, ..., (t-1)} and x_i is the example observed by the algorithm on round i. As stated above, I_t is called the active set, and we say that example x_i is active on round t if i ∈ I_t. 
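The hinge loss of Eq. (1) and the kernel expansion of Eq. (2) translate directly into code. The following minimal Python sketch is our own illustration (the function names and the Gaussian choice of kernel are ours, not the paper's); any Mercer kernel could be substituted:

```python
import math

def gaussian_kernel(x, z, gamma=1.0):
    # One possible Mercer kernel; any kernel satisfying Mercer's
    # positivity conditions could be used instead.
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, z)))

def kernel_hypothesis(alphas, xs, kernel):
    # Eq. (2): f(x) = sum_i alpha_i * K(x_i, x).
    return lambda x: sum(a * kernel(xi, x) for a, xi in zip(alphas, xs))

def hinge_loss(f, x, y):
    # Eq. (1): zero loss iff the margin y * f(x) is at least 1,
    # otherwise the loss grows linearly as the margin shrinks.
    return max(0.0, 1.0 - y * f(x))
```

Note that a prediction mistake (y ≠ sign(f(x))) always yields `hinge_loss >= 1`, which is why the cumulative hinge loss upper bounds the mistake count M.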
Perhaps the most well known online algorithm for binary classification is the Perceptron [6]. Stated in the form of a kernel method, the hypotheses generated by the Perceptron take the form f_t(x) = ∑_{i∈I_t} y_i K(x_i, x). Namely, the weight assigned to each active example is either +1 or -1, depending on the label of that example. The Perceptron initializes I_1 to be the empty set, which implicitly sets f_1 to be the zero function. It then updates its hypothesis only on rounds where a prediction mistake is made. Concretely, on round t, if sign(f_t(x_t)) ≠ y_t then the index t is inserted into the active set. As a consequence, the size of the active set on round t equals the number of prediction mistakes made on previous rounds. A relative mistake bound can be proven for the Perceptron algorithm. The bound holds for any sequence of instance-label pairs, and compares the number of mistakes made by the Perceptron with the cumulative hinge-loss of any fixed hypothesis g ∈ H_K, even one defined with prior knowledge of the sequence.\n\nTheorem 1. Let K be a Mercer kernel and let (x_1, y_1), ..., (x_T, y_T) be a sequence of examples such that K(x_t, x_t) ≤ 1 for all t. Let g be an arbitrary function in H_K and define ℓ̂_t = ℓ(g; (x_t, y_t)). Then the number of prediction mistakes made by the Perceptron on this sequence is bounded by, M ≤ ‖g‖² + 2 ∑_{t=1}^{T} ℓ̂_t.\n\nAlthough the Perceptron is guaranteed to be competitive with any fixed hypothesis g ∈ H_K, the fact that its active set can grow without a bound poses a serious computational problem. In fact, this problem is common to most kernel-based online methods that do not explicitly monitor the size of I_t. As discussed above, our goal is to derive and analyze an online prediction algorithm which resolves these problems by enforcing a fixed bound on the size of the active set. Formally, let B be a positive integer, which we refer to as the budget parameter. 
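The kernel Perceptron described above fits in a few lines. The sketch below is our own illustration (class and method names are ours); the active set is stored as (x_i, y_i) pairs, so the weight of each active example is implicitly its ±1 label:

```python
class KernelPerceptron:
    """Kernel Perceptron: add (x_t, y_t) to the active set on every mistake."""

    def __init__(self, kernel):
        self.kernel = kernel
        self.active = []  # list of (x_i, y_i); the weight of each is its label

    def score(self, x):
        # f_t(x) = sum over the active set of y_i * K(x_i, x)
        return sum(y * self.kernel(xi, x) for xi, y in self.active)

    def step(self, x, y):
        """Predict on x, then update on a mistake; returns the prediction."""
        pred = 1 if self.score(x) > 0 else -1
        if pred != y:
            self.active.append((x, y))  # Perceptron update: store the example
        return pred
```

After every mistake the active set grows by one, so its size equals the running mistake count, which is exactly the unbounded-memory behavior the budget constraint is designed to prevent.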
We would like to devise an algorithm which enforces the constraint |I_t| ≤ B on every round t. Furthermore, we would like to prove a relative mistake bound for this algorithm, analogous to the bound stated in Thm. 1. Regretfully, this goal turns out to be impossible without making additional assumptions. We show this inherent limitation by presenting a simple counterexample which applies to any online algorithm which uses a prediction function of the form given in Eq. (2), and for which |I_t| ≤ B for all t. In this example, we show a hypothesis g ∈ H_K and an arbitrarily long sequence of examples such that the algorithm makes a prediction mistake on every single round whereas g suffers no loss at all. We choose the instance space X to be the set of B+1 standard unit vectors in R^{B+1}, that is X = {e_i}_{i=1}^{B+1}, where e_i is the vector with 1 in its i'th coordinate and zeros elsewhere. K is set to be the standard inner product in R^{B+1}, that is K(x, x') = ⟨x, x'⟩. Now for every t, f_t is a linear combination of at most B vectors from X. Since |X| = B+1, there exists a vector x_t ∈ X which is currently not in the active set. Furthermore, x_t is orthogonal to all of the active vectors and therefore f_t(x_t) = 0. Assume without loss of generality that the online algorithm we are using predicts y_t to be -1 when f_t(x_t) = 0. If on every round we were to present the online algorithm with the example (x_t, +1) then the online algorithm would make a prediction mistake on every round. On the other hand, the hypothesis g = ∑_{i=1}^{B+1} e_i is a member of H_K and attains a zero hinge-loss on every round. We have found a sequence of examples and a fixed hypothesis (which is indeed defined by more than B vectors from X) that attains a cumulative loss of zero on this sequence, while the number of mistakes made by the online algorithm equals the number of rounds. Clearly, a theorem along the lines of Thm. 1 cannot be proven. 
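The counterexample is easy to reproduce numerically. The sketch below (our own hypothetical code) plays the adversary against any budget-B learner over the standard basis of R^{B+1}: whatever B coordinates the learner's hypothesis touches, some basis vector gets score zero, and presenting it with label +1 forces a mistake, while g = e_1 + ... + e_{B+1} attains margin 1 on every such example:

```python
def adversary_round(active_coords, B):
    """Return an index in {0, ..., B} outside the learner's active set.

    active_coords: the set of at most B coordinates used by the current
    hypothesis. Since |active_coords| <= B < B + 1, such an index always
    exists; the hypothesis scores 0 on that basis vector, so presenting
    it with label +1 forces a prediction mistake.
    """
    assert len(active_coords) <= B
    return min(set(range(B + 1)) - set(active_coords))

def g_margin(i, B):
    # g = sum of all B+1 basis vectors, so <g, e_i> = 1 for every i:
    # with label +1 the margin is exactly 1 and the hinge loss is zero.
    return 1.0
```

Running `adversary_round` once per round makes the learner err on every round while `g_margin` certifies that the competitor g suffers no loss, which is precisely why no Thm. 1-style bound can hold without restricting the competitors.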
One way to resolve this problem is to limit the set of hypotheses we compete with to a subset of H_K, which would naturally exclude g. In this paper, we limit the set of competitors to hypotheses with small norms. Formally, we wish to devise an online algorithm which is competitive with every hypothesis g ∈ H_K for which ‖g‖ ≤ U, for some constant U. Our counterexample indicates that we cannot prove a relative mistake bound with U set to at least √(B+1), since that was the norm of g in our counterexample. In this paper we come close to this upper bound by proving that our algorithms can compete with any hypothesis with a norm bounded by (1/4)√((B+1)/log(B+1)).\n\n3 A Perceptron with \"Shrinking\" and \"Removal\" Steps\n\nThe Perceptron algorithm will serve as our starting point. Recall that whenever the Perceptron makes a prediction mistake, it updates its hypothesis by adding the element t to I_t. Thus, on any given round, the size of its active set equals the number of prediction mistakes it has made so far. This implies that the Perceptron may violate the budget constraint |I_t| ≤ B. We can solve this problem by removing an example from the active set whenever its size exceeds B. One simple strategy is to remove the oldest example in the active set whenever |I_t| > B. Let t be a round on which the Perceptron makes a prediction mistake. We apply the following two step update. First, we perform the Perceptron's update by adding t to I_t. Let I'_t = I_t ∪ {t} denote the resulting active set. If |I'_t| ≤ B we are done and we set I_{t+1} = I'_t. Otherwise, we apply a removal step by finding the oldest example in the active set, r_t = min I'_t, and setting I_{t+1} = I'_t \\ {r_t}. The resulting algorithm is a simple modification of the kernel Perceptron, which conforms with a fixed budget constraint. While we are unable to prove a mistake bound for this algorithm, it is nonetheless an important milestone on the path to an algorithm with a fixed budget and a formal mistake bound. 
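The remove-oldest variant just described is a one-line change to the kernel Perceptron sketch. Again this is our own illustrative code (names are ours), with the active set held oldest-first in a deque:

```python
from collections import deque

class BudgetPerceptron:
    """Kernel Perceptron that evicts the oldest active example beyond budget B."""

    def __init__(self, kernel, budget):
        self.kernel = kernel
        self.budget = budget
        self.active = deque()  # (x_i, y_i) pairs, oldest first

    def score(self, x):
        return sum(y * self.kernel(xi, x) for xi, y in self.active)

    def step(self, x, y):
        pred = 1 if self.score(x) > 0 else -1
        if pred != y:
            self.active.append((x, y))        # Perceptron update
            if len(self.active) > self.budget:
                self.active.popleft()         # removal step: drop the oldest
        return pred
```

The invariant |I_t| ≤ B now holds on every round, but, as the paper notes, no mistake bound is known for this plain variant, since the evicted example's weight can still be large.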
The removal of the oldest active example from I_t may significantly change the hypothesis and affect its accuracy. One way to overcome this obstacle is to reduce the weight of old examples in the definition of the current hypothesis. By controlling the weight of the oldest active example, we can guarantee that the removal step will not significantly affect the accuracy of our predictions. More formally, we redefine our hypothesis to be,\n\nf_t = ∑_{i∈I_t} σ_{i,t} y_i K(x_i, ·),\n\nwhere each σ_{i,t} is a weight in (0, 1]. Clearly, the effect of removing r_t from I_t depends on the magnitude of σ_{r_t,t}. Using the ideas discussed above, we are now ready to outline the Forgetron algorithm. The Forgetron initializes I_1 to be the empty set, which implicitly sets f_1 to be the zero function. On round t, if a prediction mistake occurs, a three step update is performed. The first step is the standard Perceptron update, namely, the index t is inserted into the active set and the weight σ_{t,t} is set to be 1. Let I'_t denote the active set which results from this update, and let f'_t denote the resulting hypothesis, f'_t(x) = f_t(x) + y_t K(x_t, x). The second step of the update is a shrinking step in which we scale f'_t by a coefficient φ_t ∈ (0, 1]. The value of φ_t is intentionally left unspecified for now. Let f''_t denote the resulting hypothesis, that is, f''_t = φ_t f'_t. Setting σ_{i,t+1} = φ_t σ_{i,t} for all i ∈ I'_t, we can write,\n\nf''_t(x) = ∑_{i∈I'_t} σ_{i,t+1} y_i K(x_i, x).\n\nThe third and last step of the update is the removal step discussed above. That is, if the budget constraint is violated and |I'_t| > B then I_{t+1} is set to be I'_t \\ {r_t} where r_t = min I'_t. Otherwise, I_{t+1} simply equals I'_t. The recursive definition of the weight σ_{i,t} can be unraveled to give the following explicit form, σ_{i,t} = ∏_{j=i}^{t-1} φ_j, where φ_j is taken to be 1 on rounds with no shrinking step. If the shrinking coefficients φ_t are sufficiently small, then the example weights σ_{i,t} decrease rapidly with t, and particularly the weight of the oldest active example can be made arbitrarily small. 
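The three-step update outlined above can be sketched as follows. This is our own illustrative code (names are ours); the shrinking coefficient `phi` is passed in by the caller, since its concrete schedule is only fixed later, in Sec. 4:

```python
class ForgetronSketch:
    """Three-step Forgetron update: Perceptron step, shrinking, removal.

    The choice of the shrinking coefficient phi on each mistake round is
    deliberately left to the caller here; the Forgetron's analytic choice
    of phi_t is specified separately.
    """

    def __init__(self, kernel, budget):
        self.kernel = kernel
        self.budget = budget
        self.active = []  # entries [sigma_i, x_i, y_i], oldest first

    def score(self, x):
        # f_t(x) = sum over the active set of sigma_i * y_i * K(x_i, x)
        return sum(s * y * self.kernel(xi, x) for s, xi, y in self.active)

    def step(self, x, y, phi):
        pred = 1 if self.score(x) > 0 else -1
        if pred != y:
            self.active.append([1.0, x, y])      # (1) Perceptron step, sigma=1
            if len(self.active) > self.budget:
                for entry in self.active:        # (2) shrinking step
                    entry[0] *= phi
                self.active.pop(0)               # (3) remove the oldest example
        return pred
```

Because every surviving weight is multiplied by `phi` on each over-budget mistake round, the oldest example's weight decays geometrically, which is what makes its eventual removal harmless for small enough `phi`.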
Thus, if φ_t is small enough, then the removal step is guaranteed not to cause any significant damage. Alas, aggressively shrinking the online hypothesis with every update might itself degrade the performance of the online hypothesis and therefore φ_t should not be set too small. The delicate balance between safe removal of the oldest example and over-aggressive scaling is our main challenge. To formalize this tradeoff, we begin with the mistake bound in Thm. 1 and investigate how it is affected by the shrinking and removal steps. We focus first on the removal step. Let J denote the set of rounds on which the Forgetron makes a prediction mistake and define the function,\n\nΨ(λ, φ, μ) = (λφ)² + 2λφ(1 - φμ).\n\nLet t ∈ J be a round on which |I_t| = B. On this round, example r_t is removed from the active set. Let μ_t = y_{r_t} f'_t(x_{r_t}) be the signed margin attained by f'_t on the active example being removed. Finally, we abbreviate,\n\nΨ_t = Ψ(σ_{r_t,t}, φ_t, μ_t) if t ∈ J and |I_t| = B, and Ψ_t = 0 otherwise.\n\nLemma 1 below states that removing example r_t from the active set on round t increases the mistake bound by Ψ_t. As expected, Ψ_t decreases with the weight of the removed example, σ_{r_t,t+1}. In addition, it is clear from the definition of Ψ_t that μ_t also plays a key role in determining whether x_{r_t} can be safely removed from the active set. We note in passing that [2] used a heuristic criterion similar to μ_t to dynamically choose which active example to remove on each online round. Turning to the shrinking step, for every t ∈ J we define,\n\nψ_t = 1 if ‖f_{t+1}‖ ≥ U; ψ_t = φ_t if ‖f'_t‖ ≤ U and ‖f_{t+1}‖ < U; ψ_t = φ_t ‖f'_t‖/U if ‖f'_t‖ > U and ‖f_{t+1}‖ < U.\n\nLemma 1 below also states that applying the shrinking step on round t increases the mistake bound by U² log(1/ψ_t). Note that if ‖f_{t+1}‖ ≥ U then ψ_t = 1 and the shrinking step on round t has no effect on our mistake bound. Intuitively, this is due to the fact that, in this case, the shrinking step does not make the norm of f_{t+1} smaller than the norm of our competitor, g.\n\nLemma 1. Let (x_1, y_1), ...
, (x_T, y_T) be a sequence of examples such that K(x_t, x_t) ≤ 1 for all t and assume that this sequence is presented to the Forgetron with a budget constraint B. Let g be a function in H_K for which ‖g‖ ≤ U, and define ℓ̂_t = ℓ(g; (x_t, y_t)). Then,\n\nM ≤ ‖g‖² + 2 ∑_{t=1}^{T} ℓ̂_t + ∑_{t∈J} (Ψ_t + U² log(1/ψ_t)).\n\nThe first term in the bound of Lemma 1 is identical to the mistake bound of the standard Perceptron, given in Thm. 1. The second term is the consequence of the removal and shrinking steps. If we set the shrinking coefficients in such a way that the second term is at most M/2, then the bound in Lemma 1 reduces to M ≤ ‖g‖² + 2 ∑_{t=1}^{T} ℓ̂_t + M/2. This can be restated as M ≤ 2‖g‖² + 4 ∑_{t=1}^{T} ℓ̂_t, which is twice the bound of the Perceptron algorithm. The next lemma states sufficient conditions on φ_t under which the second term in Lemma 1 is indeed upper bounded by M/2.\n\nLemma 2. Assume that the conditions of Lemma 1 hold and that B ≥ 83. If the shrinking coefficients φ_t are chosen such that,\n\n∑_{t∈J} Ψ_t ≤ (15/32) M and ∑_{t∈J} log(1/φ_t) ≤ (log(B+1)/(2(B+1))) M,\n\nthen the following holds,\n\n∑_{t∈J} (Ψ_t + U² log(1/ψ_t)) ≤ M/2.\n\nIn the next section, we define the specific mechanism used by the Forgetron algorithm to choose the shrinking coefficients φ_t. Then, we conclude our analysis by arguing that this choice satisfies the sufficient conditions stated in Lemma 2, and obtain a mistake bound as described above.\n\n4 The Forgetron Algorithm\nWe are now ready to define the specific choice of φ_t used by the Forgetron algorithm. On each round, the Forgetron chooses φ_t to be the maximal value in (0, 1] for which the damage caused by the removal step is still manageable. To clarify our construction, define J_t = {i ∈ J : i ≤ t} and M_t = |J_t|. In words, J_t is the set of rounds on which the algorithm made a mistake up until round t, and M_t is the size of this set. We can now rewrite the first condition in Lemma 2 as,\n\n∑_{t∈J_T} Ψ_t ≤ (15/32) M_T. (3)
Instead of the above condition, the Forgetron enforces the following stronger condition:\n\n∀i ∈ {1, ..., T}, ∑_{t∈J_i} Ψ_t ≤ (15/32) M_i. (4)\n\nThis is done as follows. Define Q_i = ∑_{t∈J_{i-1}} Ψ_t. Let i denote a round on which the algorithm makes a prediction mistake and on which an example must be removed from the active set. The i'th constraint in Eq. (4) can be rewritten as Ψ_i + Q_i ≤ (15/32) M_i. The Forgetron sets φ_i to be the maximal value in (0, 1] for which this constraint holds, namely,\n\nφ_i = max {φ ∈ (0, 1] : Ψ(σ_{r_i,i}, φ, μ_i) + Q_i ≤ (15/32) M_i}.\n\nNote that Q_i does not depend on φ and that Ψ(σ_{r_i,i}, φ, μ_i) is a quadratic expression in φ. Therefore, the value of φ_i can be found analytically. The pseudo-code of the Forgetron algorithm is given in Fig. 1. Having described our algorithm, we now turn to its analysis. To prove a mistake bound it suffices to show that the two conditions stated in Lemma 2 hold. The first condition of the lemma follows immediately from the definition of φ_t. Using strong induction on the size of J, we can show that the second condition holds as well. Using these two facts, the following theorem follows as a direct corollary of Lemma 1 and Lemma 2.\n\nINPUT: Mercer kernel K(·, ·); budget parameter B > 0\nINITIALIZE: I_1 = ∅; f_1 ≡ 0; Q_1 = 0; M_0 = 0\nFor t = 1, 2, ...\n receive instance x_t; predict label: sign(f_t(x_t))\n receive correct label y_t\n If y_t f_t(x_t) > 0, set I_{t+1} = I_t, Q_{t+1} = Q_t, M_t = M_{t-1}, and ∀i ∈ I_t set σ_{i,t+1} = σ_{i,t}\n Else\n  set M_t = M_{t-1} + 1\n  (1) set I'_t = I_t ∪ {t}\n  If |I'_t| ≤ B, set I_{t+1} = I'_t, Q_{t+1} = Q_t, σ_{t,t} = 1, and ∀i ∈ I_{t+1} set σ_{i,t+1} = σ_{i,t}\n  Else\n   (2) define r_t = min I'_t\n    choose φ_t = max {φ ∈ (0, 1] : Ψ(σ_{r_t,t}, φ, μ_t) + Q_t ≤ (15/32) M_t}\n    set σ_{t,t} = 1 and ∀i ∈ I'_t set σ_{i,t+1} = φ_t σ_{i,t}\n    set Q_{t+1} = Q_t + Ψ_t\n   (3) set I_{t+1} = I'_t \\ {r_t}\n define f_{t+1} = ∑_{i∈I_{t+1}} σ_{i,t+1} y_i K(x_i, ·)\n\nFigure 1: The Forgetron algorithm.\n\nTheorem 2. Let (x_1, y_1), ..., (x_T, y_T) be a sequence of examples such that K(x_t, x_t) ≤ 1 for all t. 
Assume that this sequence is presented to the Forgetron algorithm from Fig. 1 with a budget parameter B ≥ 83. Let g be a function in H_K for which ‖g‖ ≤ U, where U = (1/4)√((B+1)/log(B+1)), and define ℓ̂_t = ℓ(g; (x_t, y_t)). Then, the number of prediction mistakes made by the Forgetron on this sequence is at most,\n\nM ≤ 2‖g‖² + 4 ∑_{t=1}^{T} ℓ̂_t.\n\n5 Experiments and Discussion\nIn this section we present preliminary experimental results which demonstrate the merits of the Forgetron algorithm. We compared the performance of the Forgetron with the method described in [2], which we abbreviate by CKS. When the CKS algorithm exceeds its budget, it removes the active example whose margin would be the largest after the removal. Our experiment was performed with two standard datasets: the MNIST dataset, which consists of 60,000 training examples, and the census-income (adult) dataset, with 200,000 examples. The labels of the MNIST dataset are the 10 digit classes, while the setting we consider in this paper is that of binary classification. We therefore generated binary problems by splitting the 10 labels into two sets of equal size in all possible ways, totaling (1/2)·(10 choose 5) = 126 classification problems. For each budget value, we ran the two algorithms on all 126 binary problems and averaged the results. The labels in the census-income dataset are already binary, so we ran the two algorithms on 10 different permutations of the examples and averaged the results. Both algorithms used a fifth degree non-homogeneous polynomial kernel. The results of these experiments are summarized in Fig. 2. 
The accuracy of the standard Perceptron (which does not depend on B) is marked in each plot using a horizontal dashed black line.\n\nFigure 2: The error of different budget algorithms as a function of the budget size B on the census-income (adult) dataset (left) and on the MNIST dataset (right). The Perceptron's active set reaches a size of 14,626 for census-income and 1,886 for MNIST. The Perceptron's error is marked with a horizontal dashed black line.\n\nNote that the Forgetron outperforms CKS on both datasets, especially when the value of B is small. In fact, on the census-income dataset, the Forgetron achieves almost the same performance as the Perceptron with only a fifth of the active examples. In contrast to the Forgetron, which performs well on both datasets, the CKS algorithm performs rather poorly on the census-income dataset. This can be partly attributed to the different level of difficulty of the two classification tasks. It turns out that the performance of CKS deteriorates as the classification task becomes more difficult. In contrast, the Forgetron seems to perform well on both easy and difficult classification tasks. In this paper we described the Forgetron algorithm, which is a kernel-based online learning algorithm with a fixed memory budget. We proved that the Forgetron is competitive with any hypothesis whose norm is upper bounded by U = (1/4)√((B+1)/log(B+1)). We further argued that no algorithm with a budget of B active examples can be competitive with every hypothesis whose norm is √(B+1), on every input sequence. Bridging the small gap between U and √(B+1) remains an open problem. 
The analysis presented in this paper can be used to derive a family of online algorithms of which the Forgetron is only one special case. This family of algorithms, as well as complete proofs of our formal claims and extensive experiments, will be presented in a long version of this paper.\n\nReferences\n[1] K. Crammer, O. Dekel, J. Keshet, S. Shalev-Shwartz, and Y. Singer. Online passive aggressive algorithms. Technical report, The Hebrew University, 2005.\n[2] K. Crammer, J. Kandola, and Y. Singer. Online classification on a budget. NIPS, 2003.\n[3] C. Gentile. A new approximate maximal margin classification algorithm. JMLR, 2001.\n[4] J. Kivinen, A. J. Smola, and R. C. Williamson. Online learning with kernels. IEEE Transactions on Signal Processing, 52(8):2165-2176, 2004.\n[5] Y. Li and P. M. Long. The relaxed online maximum margin algorithm. NIPS, 1999.\n[6] F. Rosenblatt. The Perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review, 65:386-407, 1958.\n[7] S. Shalev-Shwartz and Y. Singer. A new perspective on an old perceptron algorithm. COLT, 2005.\n[8] V. N. Vapnik. Statistical Learning Theory. Wiley, 1998.\n[9] J. Weston, A. Bordes, and L. Bottou. Online (and offline) on an even tighter budget. AISTATS, 2005.\n", "award": [], "sourceid": 2806, "authors": [{"given_name": "Ofer", "family_name": "Dekel", "institution": null}, {"given_name": "Shai", "family_name": "Shalev-shwartz", "institution": null}, {"given_name": "Yoram", "family_name": "Singer", "institution": null}]}