{"title": "Training Conditional Random Fields for Maximum Labelwise Accuracy", "book": "Advances in Neural Information Processing Systems", "page_first": 529, "page_last": 536, "abstract": null, "full_text": "Training Conditional Random Fields for Maximum Labelwise Accuracy\n\nSamuel S. Gross Computer Science Department Stanford University Stanford, CA, USA ssgross@cs.stanford.edu Chuong B. Do Computer Science Department Stanford University Stanford, CA, USA chuongdo@cs.stanford.edu\n\nOlga Russakovsky Computer Science Department Stanford University Stanford, CA, USA olga@cs.stanford.edu Serafim Batzoglou Computer Science Department Stanford University Stanford, CA, USA serafim@cs.stanford.edu\n\nAbstract\nWe consider the problem of training a conditional random field (CRF) to maximize per-label predictive accuracy on a training set, an approach motivated by the principle of empirical risk minimization. We give a gradient-based procedure for minimizing an arbitrarily accurate approximation of the empirical risk under a Hamming loss function. In experiments with both simulated and real data, our optimization procedure gives significantly better testing performance than several current approaches for CRF training, especially in situations of high label noise.\n\n1\n\nIntroduction\n\nSequence labeling, the task of assigning labels y = y1 , ..., yL to an input sequence x = x1 , ..., xL , is a machine learning problem of great theoretical and practical interest that arises in diverse fields such as computational biology, computer vision, and natural language processing. Conditional random fields (CRFs) are a class of discriminative probabilistic models designed specifically for sequence labeling tasks [1]. CRFs define the conditional distribution Pw (y | x) as a function of features relating labels to the input sequence. Ideally, training a CRF involves finding a parameter set w that gives high accuracy when labeling new sequences. 
In some cases, however, simply finding parameters that give the best possible accuracy on training data (known as empirical risk minimization [2]) can be difficult. In particular, if we wish to minimize Hamming loss, which measures the number of incorrect labels, gradient-based optimization methods cannot be applied directly.^1 Consequently, surrogate optimization problems, such as maximum likelihood or maximum margin training, are solved instead. In this paper, we describe a training procedure that addresses the problem of minimizing empirical per-label risk for CRFs. Specifically, our technique attempts to minimize a smoothed approximation of the Hamming loss incurred by the maximum expected accuracy decoding (i.e., posterior decoding) algorithm on the training set. The degree of approximation is controlled by a parameterized function Q(.) which trades off between the accuracy of the approximation and the smoothness of the objective. In the limit as Q(.) approaches the step function, the optimization objective converges to the empirical risk minimization criterion for Hamming loss.

^1 The gradient of the optimization objective is everywhere zero (except at points where the objective is discontinuous), because a sufficiently small change in parameters will not change the predicted labeling.

2 Preliminaries

2.1 Definitions

Let X^L denote the input space of all possible input sequences, and let Y^L denote the output space of all possible output labelings. Furthermore, for a pair of consecutive labels y_{j-1} and y_j, an input sequence x, and a label position j, let f(y_{j-1}, y_j, x, j) \in R^n be a vector-valued function; we call f the feature mapping of the CRF. 
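To make the definitions concrete, here is a toy sketch (not from the paper) of a binary-label feature mapping f and the resulting conditional distribution, computed by brute-force enumeration over Y^L rather than by dynamic programming. The particular features, labels, and weights are illustrative assumptions.

```python
import itertools
import math

# Hypothetical toy setup: labels {0, 1}, inputs are short strings.
LABELS = (0, 1)

def f(y_prev, y_cur, x, j):
    """A hand-picked 2-dimensional feature mapping f(y_{j-1}, y_j, x, j)."""
    return [1.0 if y_prev == y_cur else 0.0,                     # transition feature
            1.0 if (x[j] == 'a') == (y_cur == 1) else 0.0]       # emission-style feature

def score(w, x, y):
    """w^T F_{1,L}(x, y), with a special initial label y_0 = 0."""
    ypad = [0] + list(y)
    total = 0.0
    for j in range(len(x)):
        feats = f(ypad[j], ypad[j + 1], x, j)
        total += sum(wk * fk for wk, fk in zip(w, feats))
    return total

def conditional_prob(w, x, y):
    """P_w(y | x) by brute-force normalization (exponential in L; toy only)."""
    Z = sum(math.exp(score(w, x, yp))
            for yp in itertools.product(LABELS, repeat=len(x)))
    return math.exp(score(w, x, y)) / Z
```

A real implementation would compute the normalizer Z(x) with the forward algorithm instead of enumerating all |Y|^L labelings.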
A conditional random field (CRF) defines the conditional probability of a labeling (or parse) y given an input sequence x as

    P_w(y | x) = exp( \sum_{j=1}^L w^T f(y_{j-1}, y_j, x, j) ) / ( \sum_{y' \in Y^L} exp( \sum_{j=1}^L w^T f(y'_{j-1}, y'_j, x, j) ) ) = exp( w^T F_{1,L}(x, y) ) / Z(x),   (1)

where we define the summed feature mapping F_{a,b}(x, y) = \sum_{j=a}^b f(y_{j-1}, y_j, x, j), and where the partition function Z(x) = \sum_{y'} exp( w^T F_{1,L}(x, y') ) ensures that the distribution is normalized for any set of model parameters w.^2

2.2 Maximum a posteriori vs. maximum expected accuracy parsing

Given a CRF with parameters w, the sequence labeling task is to determine values for the labels y of a new input sequence x. One approach is to choose the most likely, or maximum a posteriori, labeling, arg max_y P_w(y | x). This can be computed efficiently using the Viterbi algorithm. An alternative approach, which seeks to maximize the per-label accuracy of the prediction rather than the joint probability of the entire parse, chooses the most likely (i.e., highest posterior probability) value for each label separately. Note that

    arg max_y \sum_{j=1}^L P_w(y_j | x) = arg max_y E_{y'} [ \sum_{j=1}^L 1{y_j = y'_j} ],   (2)

where 1{condition} denotes the usual indicator function whose value is 1 when condition is true and 0 otherwise, and where the expectation is taken with respect to the conditional distribution P_w(y' | x). From this, we see that maximum expected accuracy parsing chooses the parse with the maximum expected number of correct labels.

In practice, maximum expected accuracy parsing often yields more accurate results than Viterbi parsing (on a per-label basis) [3, 4, 5]. Here, we restrict our focus to maximum expected accuracy parsing procedures and seek training criteria which optimize the performance of a CRF-based maximum expected accuracy parser.

3 Training conditional random fields

Usually, CRFs are trained in the batch setting, where a complete set D = {(x^{(t)}, y^{(t)})}_{t=1}^m of training examples is available up front. 
In this case, training amounts to numerical optimization of a fixed objective function R(w : D). A good objective function is one whose optimal value leads to parameters that perform well, in an application-dependent sense, on previously unseen testing examples. While this can be difficult to achieve without knowing the contents of the testing set, one can, under certain conditions, guarantee that the accuracy of a learned CRF on an unseen testing set is probably not much worse than its accuracy on the training set. In particular, when assuming independently and identically distributed (i.i.d.) training and testing examples, there exists a probabilistic bound on the difference between empirical risk and generalization error [2]. As long as enough training data are available (relative to model complexity), strong training set performance will imply, with high probability, similarly strong testing set performance. Unfortunately, minimizing empirical risk for a CRF is a very difficult task. Loss functions based on usual notions of per-label accuracy (such as Hamming loss) are typically not only nonconvex but also not amenable to optimization by methods that make use of gradient information.

^2 We assume for simplicity the existence of a special initial label y_0.

In this section, we briefly describe three previous approaches for CRF training which optimize surrogate loss functions in lieu of the empirical risk. Then, we consider a new method for gradient-based CRF training oriented more directly toward optimizing predictive performance on the training set. Our method minimizes an arbitrarily accurate approximation of empirical risk, where the loss function is defined as the number of labels predicted incorrectly by maximum expected accuracy parsing.

3.1 Previous objective functions

3.1.1 Conditional log-likelihood

Conditional log-likelihood is the most commonly used objective function for training conditional random fields. 
In this criterion, the loss suffered for a training example (x^{(t)}, y^{(t)}) is the negative log probability of the true parse according to the model, plus a regularization term:

    R_CLL(w : D) = C ||w||^2 - \sum_{t=1}^m log P_w(y^{(t)} | x^{(t)}).   (3)

The convexity and differentiability of conditional log-likelihood ensure that gradient-based optimization procedures (e.g., conjugate gradient or L-BFGS [6]) will not converge to suboptimal local minima of the objective function.

However, there is no guarantee that the parameters obtained by conditional log-likelihood training will lead to the best per-label predictive accuracy, even on the training set. For one, maximum likelihood training explicitly considers only the probability of exact training parses. Other parses, even highly accurate ones, are ignored except insofar as they share common features with the exact parse. In addition, the log-likelihood of a parse is largely determined by the sections which are most difficult to correctly label. This can be a weakness in problems with significant label noise (i.e., incorrectly labeled training examples).

3.1.2 Pointwise conditional log-likelihood

Kakade et al. investigated an alternative nonconvex training objective for CRFs [7, 8] which considers separately the posterior label probabilities at each position of each training sequence. In this approach, one maximizes not the probability of an entire parse, but instead the product of the posterior probabilities (or equivalently, the sum of log posteriors) for each predicted label:

    R_pointwise(w : D) = C ||w||^2 - \sum_{t=1}^m \sum_{j=1}^L log P_w(y_j^{(t)} | x^{(t)}).   (4)

By using pointwise posterior probabilities, this objective function takes into account suboptimal parses and focuses on finding a model whose posteriors match well with the training labels, even though the model may not provide a good fit for the training data as a whole.

Nevertheless, pointwise logloss is fundamentally quite different from Hamming loss. 
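As a numeric sketch of this difference (binary labels and toy posteriors assumed), the following shows a parameter change that lowers pointwise logloss while increasing the number of mispredicted labels:

```python
import math

# Toy posteriors of the correct label at two positions. With binary labels,
# a position is predicted correctly iff this value exceeds 0.5.
before = [0.6, 0.0001]   # position 2 is a hopeless outlier
after  = [0.4, 0.01]     # the trade preferred by pointwise logloss

def pointwise_logloss(ps):
    """Sum of negative log posteriors of the correct labels."""
    return -sum(math.log(p) for p in ps)

def hamming_errors(ps):
    """Number of positions where the correct label is not the argmax."""
    return sum(1 for p in ps if p <= 0.5)

# Logloss improves markedly, yet one more label is now mispredicted.
print(pointwise_logloss(before), hamming_errors(before))
print(pointwise_logloss(after), hamming_errors(after))
```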
A training procedure based on pointwise log-likelihood, for example, would prefer to reduce the posterior probability for a correct label from 0.6 to 0.4 in return for improving the posterior probability for a hopelessly incorrect label from 0.0001 to 0.01. Thus, the objective retains the difficulties of the regular conditional log-likelihood when dealing with difficult-to-classify outlier labels.

3.1.3 Maximum margin training

The notion of Hamming distance is incorporated directly in the maximum margin training procedures of Taskar et al. [9] and Tsochantaridis et al. [10]:

    R_maxmargin(w : D) = C ||w||^2 + \sum_{t=1}^m max_{y \in Y^L} max( 0, \Delta(y, y^{(t)}) - w^T \Delta F_{1,L}(x^{(t)}, y) )   (5)

    R_maxmargin(w : D) = C ||w||^2 + \sum_{t=1}^m max_{y \in Y^L} \Delta(y, y^{(t)}) max( 0, 1 - w^T \Delta F_{1,L}(x^{(t)}, y) ).   (6)

Here, \Delta(y, y^{(t)}) denotes the Hamming distance between y and y^{(t)}, and \Delta F_{1,L}(x^{(t)}, y) = F_{1,L}(x^{(t)}, y^{(t)}) - F_{1,L}(x^{(t)}, y). In the former formulation, loss is incurred when the Hamming distance between the correct parse y^{(t)} and a candidate parse y exceeds the obtained classification margin between y^{(t)} and y. In the latter formulation, the amount of loss for a margin violation scales linearly with the Hamming distance between y^{(t)} and y. Both cases lead to convex optimization problems in which the loss incurred for a particular training example is an upper bound on the Hamming loss between the correct parse and its highest scoring alternative. In practice, however, this upper bound can be quite loose; thus, parameters obtained via a maximum margin framework may be poor minimizers of empirical risk.

3.2 Training for maximum labelwise accuracy

In each of the likelihood-based or margin-based objective functions introduced in the previous subsections, difficulties arose due to the mismatch between the chosen objective function and our notion of empirical risk as defined by Hamming loss. 
In this section, we demonstrate how to construct a smooth objective function for maximum expected accuracy parsing which more closely approximates our desired notion of empirical risk.

3.2.1 The labelwise accuracy objective function

Consider the following objective function:

    R(w : D) = \sum_{t=1}^m \sum_{j=1}^L 1{ y_j^{(t)} = arg max_{y_j} P_w(y_j | x^{(t)}) }.   (7)

Maximizing this objective is equivalent to minimizing empirical risk under the Hamming loss (i.e., the number of mispredicted labels). To obtain a smooth approximation to this objective function, we can express the condition that the algorithm predicts the correct label for y_j^{(t)} in terms of the posterior probabilities of correct and incorrect labels as

    P_w(y_j^{(t)} | x^{(t)}) - max_{y_j \ne y_j^{(t)}} P_w(y_j | x^{(t)}) > 0.   (8)

Substituting equation (8) back into equation (7) and replacing the indicator function with a generic function Q(.), we obtain

    R_labelwise(w : D) = \sum_{t=1}^m \sum_{j=1}^L Q( P_w(y_j^{(t)} | x^{(t)}) - max_{y_j \ne y_j^{(t)}} P_w(y_j | x^{(t)}) ).   (9)

When Q(.) is chosen to be the indicator function, Q(x) = 1{x > 0}, we recover the original objective. By choosing a nicely behaved form for Q(.), however, we obtain a new objective that is easier to optimize. Specifically, we set Q(x) to be sigmoidal with parameter \lambda (see Figure 2a):

    Q(x; \lambda) = 1 / (1 + exp(-\lambda x)).   (10)

As \lambda -> \infty, Q(x; \lambda) -> 1{x > 0}, so R_labelwise(w : D) approaches the objective function defined in (7). However, R_labelwise(w : D) is smooth for any finite \lambda > 0. Because of this, we are free to use gradient-based optimization to maximize our new objective function. As \lambda gets larger, the quality of our approximation of the ideal Hamming loss objective improves; however, the approximation itself also becomes less smooth and perhaps more difficult to optimize as a result. Thus, the value of \lambda controls the trade-off between the accuracy of the approximation and the ease of optimization.^3

^3 In particular, note that the method of using Q(x; \lambda) to approximate the step function is analogous to the log-barrier method used in convex optimization for approximating inequality constraints using a smooth function as a surrogate for the infinite height barrier. As with log-barrier optimization, performing the maximization of R_labelwise(w : D) using a small value of \lambda, and gradually increasing \lambda while using the previous solution as a starting point for the new optimization, provides a viable technique for maximizing the labelwise accuracy objective.

3.2.2 The labelwise accuracy objective gradient

We now present an algorithm for efficiently calculating the gradient of the approximate accuracy objective. For a fixed parameter set w, let ~y_j^{(t)} denote the label other than y_j^{(t)} that has the maximum posterior probability at position j. Also, for notational convenience, let y_{1:j} denote the variables y_1, ..., y_j. Differentiating equation (9), we compute \nabla_w R_labelwise(w : D) to be^4

    \sum_{t=1}^m \sum_{j=1}^L Q'_j(w) \nabla_w ( P_w(y_j^{(t)} | x^{(t)}) - P_w(~y_j^{(t)} | x^{(t)}) ),   (11)

where Q'_j(w) = Q'( P_w(y_j^{(t)} | x^{(t)}) - P_w(~y_j^{(t)} | x^{(t)}) ). Using equation (1), the inner term, P_w(y_j^{(t)} | x^{(t)}) - P_w(~y_j^{(t)} | x^{(t)}), is equal to

    (1 / Z(x^{(t)})) \sum_{y_{1:L}} ( 1{y_j = y_j^{(t)}} - 1{y_j = ~y_j^{(t)}} ) exp( w^T F_{1,L}(x^{(t)}, y) ).   (12)

Applying the quotient rule allows us to compute the gradient of equation (12), whose complete form we omit for lack of space. Most of the terms involved in the gradient are easy to compute using the standard forward and backward matrices used for regular CRF inference, which we define here as

    \alpha(i, j) = \sum_{y_{1:j}} 1{y_j = i} exp( w^T F_{1,j}(x^{(t)}, y) )   (13)

    \beta(i, j) = \sum_{y_{j:L}} 1{y_j = i} exp( w^T F_{j+1,L}(x^{(t)}, y) ).   (14)

The two difficult terms that do not follow from the forward and backward matrices have the form

    \sum_{y_{1:L}} ( \sum_{k=1}^L 1{y_k = ^y_k} Q'_k(w) ) F_{1,L}(x^{(t)}, y) exp( w^T F_{1,L}(x^{(t)}, y) ),   (15)

where ^y is either y^{(t)} or ~y^{(t)}. To efficiently compute terms of this type, we define

    \gamma(i, j) = \sum_{y_{1:j}} ( \sum_{k=1}^j 1{y_k = ^y_k \wedge y_j = i} Q'_k(w) ) exp( w^T F_{1,j}(x^{(t)}, y) )   (16)

    \delta(i, j) = \sum_{y_{j:L}} ( \sum_{k=j+1}^L 1{y_k = ^y_k \wedge y_j = i} Q'_k(w) ) exp( w^T F_{j+1,L}(x^{(t)}, y) ).   (17)

Like the forward and backward matrices, \gamma(i, j) and \delta(i, j) may be calculated via dynamic programming. In particular, we have the base cases \gamma(i, 1) = 1{i = ^y_1} \alpha(i, 1) Q'_1(w) and \delta(i, L) = 0. The remaining entries are given by the following recurrences:

    \gamma(i, j) = \sum_{i'} ( \gamma(i', j-1) + 1{i = ^y_j} \alpha(i', j-1) Q'_j(w) ) exp( w^T f(i', i, x^{(t)}, j) )   (18)

    \delta(i, j) = \sum_{i'} ( \delta(i', j+1) + 1{i' = ^y_{j+1}} \beta(i', j+1) Q'_{j+1}(w) ) exp( w^T f(i, i', x^{(t)}, j+1) ).   (19)

It follows that equation (15) is equal to

    \sum_{j=1}^L \sum_i \sum_{i'} f(i', i, x^{(t)}, j) exp( w^T f(i', i, x^{(t)}, j) ) (A + B),   (20)

where

    A = \gamma(i', j-1) \beta(i, j) + \alpha(i', j-1) \delta(i, j)   (21)

    B = 1{i = ^y_j} \alpha(i', j-1) \beta(i, j) Q'_j(w).   (22)

Thus, the algorithm above computes the gradient in O(|Y|^2 L) time and O(|Y| L) space. Since \gamma(i, j) and \delta(i, j) must be computed for both ^y = y^{(t)} and ^y = ~y^{(t)}, the resulting total gradient computation takes approximately three times as long and uses twice the memory of the analogous computation for the log-likelihood gradient.^5

^4 Technically, the max function is not differentiable. One could replace the max with a softmax function, and assuming unique probabilities for each candidate label, the gradient approaches (11) as the softmax function approaches the max. As noted in [11], the approximation used here does not cause problems in practice. 
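The smoothed objective itself is simple to evaluate once posterior label probabilities are available. Below is a minimal sketch (not the paper's implementation): posteriors are taken as given in dictionaries, whereas a real CRF implementation would obtain them from the forward and backward matrices.

```python
import math

def Q(x, lam=15.0):
    """Sigmoid approximation Q(x; lambda) to the step function 1{x > 0}."""
    return 1.0 / (1.0 + math.exp(-lam * x))

def labelwise_objective(posteriors, y_true, lam=15.0):
    """Smoothed labelwise accuracy: sum_j Q(P(correct_j) - max P(wrong_j)).

    posteriors[j] maps each label to its posterior P_w(y_j | x) at
    position j; y_true is the reference labeling.
    """
    total = 0.0
    for j, post in enumerate(posteriors):
        correct = post[y_true[j]]
        best_wrong = max(p for lab, p in post.items() if lab != y_true[j])
        total += Q(correct - best_wrong, lam)
    return total
```

As lambda grows, the returned value approaches the number of correctly predicted labels, matching the limiting behavior described above.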
^5 We note that the \"trick\" used in the formulation of approximate accuracy is applicable to a variety of other forms and arguments for Q(.). In particular, if we change its argument to P_w(y_j^{(t)} | x^{(t)}), letting Q(x) = log(x) gives the pointwise logloss formulation of Kakade et al. (see section 3.1.2), while letting Q(x) = x gives an objective function equal to expected accuracy. Computing the gradient for these objectives involves straightforward modifications of the recurrences presented here.

Figure 1: Panel (a) shows the state diagram for the hidden Markov model used for the simulation experiments. The HMM consists of two states ('C' and 'I') with transition probabilities labeled on the arrows, and emission probabilities (over the alphabet {A, C, G, T}) written inside each state. Panel (b) shows the proportion of state labels correctly predicted by the learned models at varying levels of label noise. The error bars show 95% confidence intervals on the mean generalization performance.

4 Results

4.1 Simulation experiments

To test the performance of the approximate labelwise accuracy objective function, we first ran simulation experiments in order to assess the robustness of several different learning algorithms in problems with a high degree of label noise. In particular, we generated sequences of length 1,000,000 from a simple two-state hidden Markov model (see Figure 1a). Given a fixed noise parameter p \in [0, 1], we generated training sequence labels by flipping each run of consecutive 'C' hidden state labels to 'I' with probability p. After learning parameters, we then tested each algorithm on uncorrupted testing sequences generated by the original HMM. 
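The run-flipping noise procedure described above can be sketched as follows; the RNG and default seed are arbitrary choices for reproducibility, not from the paper.

```python
import random

def corrupt_labels(labels, p, rng=None):
    """Flip each maximal run of consecutive 'C' labels to 'I' with probability p."""
    rng = rng or random.Random(0)
    out = list(labels)
    i = 0
    while i < len(out):
        if out[i] == 'C':
            j = i
            while j < len(out) and out[j] == 'C':
                j += 1            # find the end of this run of 'C's
            if rng.random() < p:  # flip the whole run at once
                out[i:j] = ['I'] * (j - i)
            i = j
        else:
            i += 1
    return ''.join(out)
```

With p = 0 the labels are unchanged; with p = 1 every 'C' run is corrupted, so the training labels become all 'I'.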
Figure 1b indicates the proportion of labels correctly identified by four different methods at varying noise levels: a generative model trained with joint log-likelihood, a CRF trained with conditional log-likelihood, the maximum-margin method of Taskar et al. [9] as implemented in the SVMstruct package [10],^6 and a CRF trained with maximum labelwise accuracy. No method outperforms maximum labelwise accuracy at any noise level. For levels of noise above 0.05, maximum labelwise accuracy performs significantly better than the other methods. For each method, we used the decoding algorithm (Viterbi or MEA) that led to the best performance. The maximum margin method performed best when Viterbi decoding was used, while the other three methods had better performance with MEA decoding. Interestingly, with no noise present, maximum margin training with Viterbi decoding performed significantly better than generative training with Viterbi decoding (0.749 vs. 0.710), but this was still much worse than generative training with MEA decoding (0.796).

^6 We were unable to get SVMstruct to converge on our test problem when using the Tsochantaridis et al. maximum margin formulation.

4.2 Gene prediction experiments

To test the performance of maximum labelwise accuracy training on a large-scale, real world problem, we trained a CRF to predict protein coding genes in the genome of the fruit fly Drosophila melanogaster. The CRF labeled each base pair of a DNA sequence according to its predicted functional category: intergenic, protein coding, or intronic. The features used in the model were of two types: transitions between labels and trimer composition. The CRF was trained on approximately 28 million base pairs labeled according to annotations from the FlyBase database [12]. The predictions were evaluated on a separate testing set of the same size.

Three separate training runs were performed, using three different objective functions: maximum likelihood, maximum pointwise likelihood, and maximum labelwise accuracy. Each run was started from an initial guess calculated using HMM-style generative parameter estimation.^7

Figure 2: Panel (a) compares three pointwise loss functions in the special case where a label has two possible values. The green curve (f(x) = -log((1-x)/2)) depicts pointwise logloss; the red curve represents the ideal zero-one loss; and the blue curve gives the sigmoid approximation with parameter \lambda = 15. Panels (b), (c), and (d) show gene prediction learning curves using three training objective functions: (b) maximum labelwise (approximate) accuracy, (c) maximum conditional log-likelihood, and (d) maximum pointwise conditional log-likelihood, respectively. In each case, parameters were initialized to their generative model estimates.

Figures 2b, 2c, and 2d show the value of the objective function and the average label accuracy at each iteration of the three training runs. Here, maximum accuracy training improves upon the accuracy of the original generative parameters and outperforms the other two training objectives. 
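As an illustration of the second feature type, here is a minimal sketch of trimer-composition features for a DNA string. The paper does not specify the exact windowing or normalization used in its gene predictor, so relative frequencies over the given string are an assumption.

```python
from collections import Counter

def trimer_features(dna):
    """Trimer (3-mer) composition of a DNA string, as relative frequencies.

    Returns a dict mapping each observed trimer to its fraction of all
    overlapping trimers in the string.
    """
    counts = Counter(dna[i:i + 3] for i in range(len(dna) - 2))
    total = sum(counts.values()) or 1  # guard against strings shorter than 3
    return {kmer: c / total for kmer, c in counts.items()}
```

In a CRF such features would be paired with the label at each position, so that coding, intronic, and intergenic regions can learn distinct trimer preferences.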
In contrast, maximum likelihood training and maximum pointwise likelihood training both give worse performance than the simple generative parameter estimates. Evidently, for this problem the likelihood-based functions are poor surrogate measures for per-label accuracy: Figures 2c and 2d show declines in training and testing set accuracy, despite increases in the objective function.

5 Discussion and related work

In contrast to most previous work describing alternative objective functions for CRFs, the method described in this paper optimizes a direct approximation of the Hamming loss. A few notable papers have also dealt with the problem of minimizing empirical risk directly. For binary classifiers, Jansche showed that an algorithm designed to optimize F-measure performance of a logistic regression model for information extraction outperforms maximum likelihood training [14]. For parsing tasks, Och demonstrated that a statistical machine translation system choosing between a small finite collection of candidate parses achieves better accuracy when it is trained to minimize error rate instead of optimizing the more traditional maximum mutual information criterion [15]. Unlike Och's algorithm, our method does not require one to provide a small set of candidate parses, instead relying on efficient dynamic programming recurrences for all computations. After this work was submitted for consideration, a Minimum Classification Error (MCE) method for training CRFs to minimize empirical risk was independently proposed by Suzuki et al. [11]. 

^7 We did not include maximum margin methods in this comparison; existing software packages for maximum margin training, based on the cutting plane algorithm [10] or decomposition techniques such as SMO [9, 13], are not easily parallelizable and scale poorly for large datasets, such as those encountered in gene prediction.
This technique minimizes the loss incurred by maximum a posteriori, rather than maximum expected accuracy, parsing on the training set. In practice, Viterbi parsers often achieve worse per-label accuracy than maximum expected accuracy parsers [3, 4, 5]; we are currently exploring whether a similar relationship also exists between MCE methods and our proposed training objective. The training method described in this work is theoretically attractive, as it addresses the goal of empirical risk minimization in a very direct way. In addition to its theoretical appeal, we have shown that it performs much better than maximum likelihood and maximum pointwise likelihood training on a large scale, real world problem. Furthermore, our method is efficient, having time complexity approximately three times that of maximum likelihood training, and easily parallelizable, as each training example can be considered independently when evaluating the objective function or its gradient. The chief disadvantage of our formulation is its nonconvexity. In practice, this can be combated by initializing the optimization with a parameter vector obtained by a convex training method. At present, the extent of the effectiveness of our method and the characteristics of problems for which it performs well are not clear. Further work applying our method to a variety of sequence labeling tasks is needed to investigate these questions.

6 Acknowledgments

SSG and CBD were supported by NDSEG fellowships. We thank Andrew Ng for useful discussions.

References

[1] J. Lafferty, A. McCallum, and F. Pereira. Conditional random fields: probabilistic models for segmenting and labeling sequence data. In ICML, 2001.
[2] V. Vapnik. Statistical Learning Theory. Wiley, 1998.
[3] C. B. Do, M. S. P. Mahabhashyam, M. Brudno, and S. Batzoglou. ProbCons: probabilistic consistency-based multiple sequence alignment. Genome Research, 15(2):330-340, 2005.
[4] C. B. Do, D. A. Woods, and S. Batzoglou. 
CONTRAfold: RNA secondary structure prediction without physics-based models. Bioinformatics, 22(14):e90-e98, 2006.
[5] P. Liang, B. Taskar, and D. Klein. Alignment by agreement. In HLT-NAACL, 2006.
[6] J. Nocedal and S. J. Wright. Numerical Optimization. Springer, 1999.
[7] S. Kakade, Y. W. Teh, and S. Roweis. An alternate objective function for Markovian fields. In ICML, 2002.
[8] Y. Altun, M. Johnson, and T. Hofmann. Investigating loss functions and optimization methods for discriminative learning of label sequences. In EMNLP, 2003.
[9] B. Taskar, C. Guestrin, and D. Koller. Max-margin Markov networks. In NIPS, 2003.
[10] I. Tsochantaridis, T. Hofmann, T. Joachims, and Y. Altun. Support vector machine learning for interdependent and structured output spaces. In ICML, 2004.
[11] J. Suzuki, E. McDermott, and H. Isozaki. Training conditional random fields with multivariate evaluation measures. In ACL, 2006.
[12] G. Grumbling, V. Strelets, and The FlyBase Consortium. FlyBase: anatomical data, images and queries. Nucleic Acids Research, 34:D484-D488, 2006.
[13] J. Platt. Using sparseness and analytic QP to speed training of support vector machines. In NIPS, 1999.
[14] M. Jansche. Maximum expected F-measure training of logistic regression models. In EMNLP, 2005.
[15] F. J. Och. Minimum error rate training in statistical machine translation. In ACL, 2003.
", "award": [], "sourceid": 2992, "authors": [{"given_name": "Samuel", "family_name": "Gross", "institution": null}, {"given_name": "Olga", "family_name": "Russakovsky", "institution": null}, {"given_name": "Chuong B.", "family_name": "Do", "institution": null}, {"given_name": "Serafim", "family_name": "Batzoglou", "institution": null}]}