{"title": "Isotonic Conditional Random Fields and Local Sentiment Flow", "book": "Advances in Neural Information Processing Systems", "page_first": 961, "page_last": 968, "abstract": null, "full_text": "Isotonic Conditional Random Fields and Local Sentiment Flow\nYi Mao School of Elec. and Computer Engineering Purdue University - West Lafayette, IN ymao@ecn.purdue.edu Guy Lebanon Department of Statistics, and School of Elec. and Computer Engineering Purdue University - West Lafayette, IN lebanon@stat.purdue.edu\n\nAbstract\nWe examine the problem of predicting local sentiment flow in documents, and its application to several areas of text analysis. Formally, the problem is stated as predicting an ordinal sequence based on a sequence of word sets. In the spirit of isotonic regression, we develop a variant of conditional random fields that is well suited to handle this problem. Using the Mobius transform, we express the model as a simple convex optimization problem. Experiments demonstrate the model and its applications to sentiment prediction, style analysis, and text summarization.\n\n1\n\nIntroduction\n\nThe World Wide Web and other textual databases provide a convenient platform for exchanging opinions. Many documents, such as reviews and blogs, are written with the purpose of conveying a particular opinion or sentiment. Other documents may not be written with the purpose of conveying an opinion, but nevertheless they contain one. Opinions, or sentiments, may be considered in several ways, the simplest of which is varying from positive opinion, through neutral, to negative opinion. Most of the research in information retrieval has focused on predicting the topic of a document, or its relevance with respect to a query. Predicting the document's sentiment would allow matching the sentiment, as well as the topic, with the user's interests. It would also assist in document summarization and visualization. 
Sentiment prediction was first formulated as a binary classification problem to answer questions such as: "What is the review's polarity, positive or negative?" Pang et al. [1] demonstrated the difficulty of sentiment prediction using solely empirical rules (a subset of adjectives), which motivates the use of statistical learning techniques. The task was later refined to allow multiple sentiment levels, facilitating the use of standard text categorization techniques [2]. However, sentiment prediction differs from traditional text categorization in several ways: (1) in contrast to the categorical nature of topics, sentiments are ordinal variables; (2) several contradicting opinions might co-exist and interact with each other to produce the global document sentiment; (3) context plays a vital role in determining the sentiment. Indeed, sentiment prediction is a much harder task than topic classification tasks such as Reuters or WebKB, and current models achieve lower accuracy.

Rather than using a bag-of-words multiclass classifier, we model the sequential flow of sentiment throughout the document using a sequential conditional model. Furthermore, we treat the sentiment labels as ordinal variables by enforcing monotonicity constraints on the model's parameters.

2 Local and Global Sentiments

Previous research on sentiment prediction has generally focused on predicting the sentiment of the entire document. A commonly used application is the task of predicting the number of stars assigned to a movie, based on a review text. Typically, the problem is considered as standard multiclass classification or regression using the bag of words representation. In addition to the sentiment of the entire document, which we call global sentiment, we define the concept of local sentiment as the sentiment associated with a particular part of the text.
It is reasonable to assume that the global sentiment of a document is a function of the local sentiment, and that estimating the local sentiment is a key step in predicting the global sentiment. Moreover, the concept of local sentiment is useful in a wide range of text analysis applications, including document summarization and visualization. Formally, we view local sentiment as a function on the words in a document taking values in a finite partially ordered set, or poset, (O, \preceq). To determine the local sentiment at a particular word, it is necessary to take context into account. For example, due to context, the local sentiment at each of the words in "this is a horrible product" is low (in the sense of (O, \preceq)). Since sentences are natural components for segmenting document semantics, we view local sentiment as a piecewise constant function on sentences. Occasionally we encounter a sentence that violates this rule and conveys opposing sentiments in two different parts. In this situation we break the sentence into two parts and consider them as two sentences. We therefore formalize the problem as predicting a sequence of sentiments y = (y_1, \ldots, y_n), y_i \in O, based on a sequence of sentences x = (x_1, \ldots, x_n).

Modeling the local sentiment is challenging in several respects. The sentence sequence x is discrete-time and high-dimensional categorical valued, and the sentiment sequence y is discrete-time and ordinal valued. Regression models can be applied locally, but they ignore the statistical dependencies across the time domain. Popular sequence models such as HMM or CRF, on the other hand, typically assume that y is categorical valued.
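A minimal sketch of this data representation, with an invented document and labels echoing the "horrible product" example (the specific label set {-2, ..., 2} is one possible realization of O):

```python
# A document as a sequence of sentences, each reduced to its word set, paired
# with an ordinal local-sentiment sequence taking values in O = {-2,-1,0,1,2}.
x = [
    {"this", "is", "a", "horrible", "product"},
    {"the", "camera", "ships", "with", "a", "battery"},
    {"the", "lens", "however", "is", "superb"},
]
y = [-2, 0, 2]  # one label per sentence; ordinal, not categorical

assert len(x) == len(y)
assert all(yi in {-2, -1, 0, 1, 2} for yi in y)
```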
In this paper we demonstrate the prediction of local sentiment flow using an ordinal version of conditional random fields, and explore the relation between the local and the global sentiment.

3 Isotonic Conditional Random Fields

Conditional random fields (CRF) [3] are parametric families of conditional distributions p_\theta(y|x) that correspond to undirected graphical models or Markov random fields:

p_\theta(y|x) = \frac{p_\theta(y,x)}{p_\theta(x)} = \frac{\prod_{c \in C} \phi_c(x|_c, y|_c)}{Z(\theta, x)}, \qquad \theta_{c,k} \in \mathbb{R}, \qquad (1)

where C is the set of cliques in the graph, and x|_c and y|_c are the restrictions of x and y to the variables representing nodes in c \in C. It is assumed above that the potentials \phi_c are exponential functions of features modulated by decay parameters, \phi_c(x|_c, y|_c) = \exp(\sum_k \theta_{c,k} f_{c,k}(x|_c, y|_c)).

CRF have been mostly applied to sequence annotation, where x is a sequence of words and y is a sequence of labels annotating the words, for example part-of-speech tags. The standard graphical structure in this case is a chain structure on y with noisy observations x. In other words, the cliques are C = \{\{y_{i-1}, y_i\}, \{y_i, x_i\} : i = 1, \ldots, n\} (see Figure 1, left), leading to the model

p_\theta(y|x) = \frac{1}{Z(x, \theta)} \exp\Big( \sum_i \sum_k \lambda_k f_k(y_{i-1}, y_i) + \sum_i \sum_k \mu_k g_k(y_i, x_i) \Big), \qquad \theta = (\lambda, \mu). \qquad (2)

In sequence annotation a standard choice for the feature functions is f_{\sigma,\tau}(y_{i-1}, y_i) = \delta_{y_{i-1},\sigma}\,\delta_{y_i,\tau} and g_{\sigma,w}(y_i, x_i) = \delta_{y_i,\sigma}\,\delta_{x_i,w} (note that we index the feature functions using pairs rather than k as in (2)). In our case, since the x_i are sentences, we use instead the slightly modified feature functions g_{\sigma,w}(y_i, x_i) = 1 if y_i = \sigma, w \in x_i and 0 otherwise. Given a set of iid training samples, the parameters are typically estimated by maximum likelihood or MAP using standard numerical techniques such as conjugate gradient or quasi-Newton.

Despite the great popularity of CRF in sequence labeling, they are not appropriate for ordinal data such as sentiments. The ordinal relation is ignored in (2), and in the case of limited training data the parameter estimates will possess high variance, resulting in poor predictive power. We therefore enforce a set of monotonicity constraints on the parameters that are consistent with the ordinal structure and domain knowledge. The resulting model is a restricted subset of the CRF (2) and, in accordance with isotonic regression [4], is named isotonic CRF.

Since ordinal variables express a progression of some sort, it is natural to expect some of the binary features in (2) to correlate more strongly with some ordinal values than others. In such cases, we should expect the presence of such binary features to increase (or decrease) the conditional probability in a manner consistent with the ordinal relation. Since the parameters \mu_{\sigma,w} represent the effectiveness of the appearance of w with respect to increasing the probability of \sigma \in O, they are natural candidates for monotonicity constraints. More specifically, for words w \in M_1 that are identified as strongly associated with positive sentiment, we enforce

\sigma \preceq \tau \;\Longrightarrow\; \mu_{\sigma,w} \le \mu_{\tau,w}, \qquad w \in M_1. \qquad (3)

Similarly, for words w \in M_2 identified as strongly associated with negative sentiment, we enforce

\sigma \preceq \tau \;\Longrightarrow\; \mu_{\sigma,w} \ge \mu_{\tau,w}, \qquad w \in M_2. \qquad (4)

The motivation behind the above restriction is immediate for non-conditional Markov random fields p_\theta(x) = Z^{-1} \exp(\sum_i \theta_i f_i(x)): the parameters \theta_i are intimately tied to the model probabilities through activation of the feature functions f_i. In the case of conditional random fields, things get more complicated due to the dependence of the normalization term on x. The following propositions motivate the above parameter restriction for the case of linear structure CRF with binary features.

Proposition 1.
Let p(y|x) be a linear state-emission chain CRF with binary features f_{\sigma,\tau}, g_{\sigma,w} as above, and x a sentence sequence for which v \notin x_j. Then, denoting x' = (x_1, \ldots, x_{j-1}, x_j \cup \{v\}, x_{j+1}, \ldots, x_n), we have

\frac{p(y|x)}{p(y|x')} = E_{p(y'|x)} \left[ e^{\mu_{y'_j,v} - \mu_{y_j,v}} \right].

Proof. Since x and x' differ only in the addition of v to the j-th sentence,

\frac{p(y|x)}{p(y|x')} = \frac{Z(x')}{Z(x)}\, e^{-\mu_{y_j,v}}.

Grouping the label sequences y' in Z(x') according to the value r = y'_j,

\frac{Z(x')}{Z(x)} = \sum_{r \in O} e^{\mu_{r,v}} \sum_{y' : y'_j = r} \frac{1}{Z(x)} \exp\Big( \sum_i \sum_{\sigma,\tau} \lambda_{\sigma,\tau} f_{\sigma,\tau}(y'_{i-1}, y'_i) + \sum_i \sum_{\sigma,w} \mu_{\sigma,w}\, g_{\sigma,w}(y'_i, x_i) \Big) = \sum_{r \in O} e^{\mu_{r,v}}\, p(y'_j = r \,|\, x) = E_{p(y'|x)}\left[ e^{\mu_{y'_j,v}} \right],

and the result follows.

Note that the specific linear CRF structure (Figure 1, left) and the binary features are essential for the above result. Proposition 1 connects the probability ratio p(y|x)/p(y|x') to the model parameters in a relatively simple manner. Together with Proposition 2 below, it motivates the ordering of \{\mu_{r,v} : r \in O\} determined by the restrictions (3)-(4) in terms of the ordering of probability ratios of transformed sequences.

Proposition 2. Let p(y|x), x, x' be as in Proposition 1. For all label sequences s, t, we have

\mu_{t_j,v} \ge \mu_{s_j,v} \;\Longrightarrow\; \frac{p(t|x')}{p(t|x)} \ge \frac{p(s|x')}{p(s|x)}. \qquad (5)

Proof. Since \mu_{t_j,v} \ge \mu_{s_j,v} we have e^{z - \mu_{s_j,v}} - e^{z - \mu_{t_j,v}} \ge 0 for all z, and therefore

E_{p(y'|x)}\left[ e^{\mu_{y'_j,v} - \mu_{s_j,v}} - e^{\mu_{y'_j,v} - \mu_{t_j,v}} \right] \ge 0.

By Proposition 1 the above expectation equals \frac{p(s|x)}{p(s|x')} - \frac{p(t|x)}{p(t|x')}, and Equation (5) follows.

The restriction (3) may thus be interpreted as ensuring that adding a word w \in M_1 to transform x into x' will increase the labeling probabilities associated with \tau no less than those associated with \sigma whenever \sigma \preceq \tau.
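Propositions 1 and 2 can be verified numerically on a toy chain CRF, computing p(y|x) by brute-force enumeration; all labels, words, and parameter values below are illustrative assumptions, not fitted values.

```python
import itertools
import math

labels = [-1, 0, 1]                      # toy ordinal set O
x = [{"plot"}, {"boring"}]               # original sentence sequence
v, j = "great", 1                        # word v added to sentence x_j (0-indexed)
xp = [x[0], x[1] | {v}]                  # transformed sequence x'

# Toy transition (lambda) and emission (mu) parameters; absent pairs score 0.
lam = {(s, t): 0.1 if s == t else 0.0 for s in labels for t in labels}
mu = {(-1, "boring"): 0.8, (1, "great"): 1.2, (0, "plot"): 0.3}

def score(y, sents):
    """Exponent of the chain CRF (2): transition plus emission features."""
    s = sum(lam[(y[i - 1], y[i])] for i in range(1, len(y)))
    s += sum(mu.get((y[i], w), 0.0) for i in range(len(y)) for w in sents[i])
    return s

def dist(sents):
    """p(y|sents) for every label sequence, by explicit normalization."""
    ys = list(itertools.product(labels, repeat=len(sents)))
    ws = [math.exp(score(list(y), sents)) for y in ys]
    z = sum(ws)
    return {y: w / z for y, w in zip(ys, ws)}

p, pp = dist(x), dist(xp)

# Proposition 1: p(y|x)/p(y|x') = E_{p(y'|x)} exp(mu_{y'_j,v} - mu_{y_j,v}).
for y in p:
    rhs = sum(p[yq] * math.exp(mu.get((yq[j], v), 0.0) - mu.get((y[j], v), 0.0))
              for yq in p)
    assert abs(p[y] / pp[y] - rhs) < 1e-9

# Proposition 2: since mu_{1,"great"} > mu_{-1,"great"}, adding "great" boosts
# sequences ending in 1 at least as much as those ending in -1.
assert pp[(0, 1)] / p[(0, 1)] > pp[(0, -1)] / p[(0, -1)]
```

With only two sentences and three labels, enumeration over all 9 sequences is exact, which is what makes the check against the closed-form expectation possible.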
Similarly, the restriction (4) may be interpreted in the opposite way. If these assumptions are correct, it is clear that they will lead to more accurate parameter estimates and better prediction accuracy. However, even if assumptions (3)-(4) are incorrect, enforcing them may improve prediction by trading off increased bias with lower variance.

Conceptually, the parameter estimates for isotonic CRF may be found by maximizing the likelihood or posterior subject to the monotonicity constraints (3)-(4). Since such a maximization is relatively difficult for large dimensionality, we propose a re-parameterization that leads to a much simpler optimization problem. The re-parameterization, in the case of a fully ordered set, is relatively straightforward. In the more general case of a partially ordered set we need the mechanism of Möbius inversions on finite partially ordered sets.

We introduce a new set of features \{g^*_{\sigma,w} : \sigma \in O\} for w \in M_1 \cup M_2, defined as

g^*_{\sigma,w}(y_i, x_i) = \sum_{\tau : \tau \preceq \sigma} g_{\tau,w}(y_i, x_i), \qquad w \in M_1 \cup M_2,

and a new set of corresponding parameters \{\mu^*_{\sigma,w} : \sigma \in O\}. If (O, \preceq) is fully ordered, \mu^*_{\sigma,w} = \mu_{\sigma,w} - \mu_{\sigma^-,w}, where \sigma^- is the largest element smaller than \sigma (with \mu_{\sigma^-,w} taken as 0 if \sigma = \min(O)). In the more general case, \mu^*_{\sigma,w} is the convolution of \mu_{\sigma,w} with the Möbius function of the poset (O, \preceq) (see [5] for more details). By the Möbius inversion theorem [5], the \mu^*_{\sigma,w} satisfy

\mu_{\sigma,w} = \sum_{\tau : \tau \preceq \sigma} \mu^*_{\tau,w}, \qquad w \in M_1 \cup M_2, \qquad (6)

leading to the re-parameterization of isotonic CRF

p(y|x) = \frac{1}{Z(x)} \exp\Big( \sum_i \sum_{\sigma,\tau} \lambda_{\sigma,\tau} f_{\sigma,\tau}(y_{i-1}, y_i) + \sum_i \sum_{w \notin M_1 \cup M_2} \sum_{\sigma} \mu_{\sigma,w}\, g_{\sigma,w}(y_i, x_i) + \sum_i \sum_{w \in M_1 \cup M_2} \sum_{\sigma} \mu^*_{\sigma,w}\, g^*_{\sigma,w}(y_i, x_i) \Big)

with \mu^*_{\sigma,w} \ge 0 for w \in M_1 and \mu^*_{\sigma,w} \le 0 for w \in M_2, for all \sigma > \min(O). The re-parameterized model has the benefit of simple constraints, and its maximum likelihood estimates can be obtained by a trivial adaptation of conjugate gradient or quasi-Newton methods.
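For a fully ordered O, the re-parameterization and the inversion (6) reduce to first differences and cumulative sums; the sketch below, with made-up parameter values for a single positive word w, verifies that the two directions are inverse and that the order constraint becomes simple sign constraints.

```python
# Fully ordered O, listed from min to max, with toy parameters mu_{sigma,w}
# for one word w; the values are nondecreasing, i.e. w acts as a positive word.
O = [-2, -1, 0, 1, 2]
mu = {-2: -1.0, -1: -0.4, 0: 0.0, 1: 0.6, 2: 1.5}

# Forward map: mu*_{sigma} = mu_{sigma} - mu_{sigma^-}, with the term below
# min(O) taken as 0.
mu_star = {O[0]: mu[O[0]]}
for prev, cur in zip(O, O[1:]):
    mu_star[cur] = mu[cur] - mu[prev]

# Mobius inversion (6): mu_{sigma} is the sum of mu*_{tau} over tau <= sigma,
# i.e. a cumulative sum in the fully ordered case.
recovered, acc = {}, 0.0
for s in O:
    acc += mu_star[s]
    recovered[s] = acc

assert all(abs(recovered[s] - mu[s]) < 1e-12 for s in O)
# Constraint (3) (mu nondecreasing in sigma) becomes mu*_{sigma} >= 0
# for all sigma > min(O).
assert all(mu_star[s] >= 0 for s in O[1:])
```

The same pattern holds for negative words in M_2 with the sign of the constraint flipped.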
3.1 Author Dependent Models

Thus far, we have ignored the dependency of the labeling model p(y|x) on the author, denoted here by the variable a. We now turn to account for different sentiment-authoring styles by incorporating this variable into the model. The word emissions y_i \to x_i in the CRF structure are not expected to vary much across different authors. The sentiment transitions y_{i-1} \to y_i, on the other hand, typically vary across different authors as a consequence of their individual styles. For example, the review of an author who sticks to a list of self-ranked evaluation criteria is prone to strong sentiment variations. In contrast, the review of an author who likes to enumerate pros before he gets to cons (or vice versa) is likely to exhibit more local homogeneity in sentiment.

Accounting for author-specific sentiment transition style leads to the graphical model in Figure 1, right. The corresponding author-dependent CRF model

p(y|x, a) = \frac{1}{Z(x, a)} \exp\Big( \sum_i \sum_{a'} \sum_{\sigma,\tau} (\lambda_{\sigma,\tau} + \lambda_{\sigma,\tau,a'}) f_{\sigma,\tau,a'}(y_{i-1}, y_i, a) + \sum_i \sum_{\sigma,w} \mu_{\sigma,w}\, g_{\sigma,w}(y_i, x_i) \Big)

uses features f_{\sigma,\tau,a'}(y_{i-1}, y_i, a) = f_{\sigma,\tau}(y_{i-1}, y_i)\,\delta_{a,a'} and transition parameters that are author-dependent, \lambda_{\sigma,\tau,a}, as well as author-independent, \lambda_{\sigma,\tau}. Setting \lambda_{\sigma,\tau,a} = 0 reduces the model to the standard CRF model. The author-independent parameters \lambda_{\sigma,\tau} allow parameter sharing across multiple authors in case the training data is too scarce for proper estimation of \lambda_{\sigma,\tau,a}. For simplicity, the above ideas are described in the context of non-isotonic CRF. However, it is straightforward to combine author-specific models with isotonic restrictions. Experiments demonstrating author-specific isotonic models are described in Section 4.3.

Figure 1: Graphical models corresponding to CRF (left) and author-dependent CRF (right). Both are chains over Y_{i-1}, Y_i, Y_{i+1} with observations X_{i-1}, X_i, X_{i+1}; the right model adds the author node a.
3.2 Sentiment Flows as Smooth Curves

The sentence-based definition of sentiment flow is problematic when we want to fit a model (for example, to predict global sentiment) that uses sentiment flows from multiple documents. Different documents have different numbers of sentences, and it is not clear how to compare them or how to build a model from a collection of discrete flows of different lengths. We therefore convert the sentence-based flow to a smooth length-normalized flow that can meaningfully relate to other flows. We assume from now on that the ordinal set O is realized as a subset of \mathbb{R} and that its ordering coincides with the standard ordering on \mathbb{R}.

In order to account for different lengths, we consider the sentiment flow as a function h : [0,1] \to O \subseteq \mathbb{R} that is piecewise constant on the intervals [0, l), [l, 2l), \ldots, [(k-1)l, 1], where k is the number of sentences in the document and l = 1/k. Each of the intervals represents a sentence, and the function value on it is its sentiment. To create a more robust representation, we smooth out the discontinuous function by convolving it with a smoothing kernel. The resulting sentiment flow is a smooth curve f : [0,1] \to \mathbb{R} that can be easily related or compared to similar sentiment flows of other documents (see Figure 3 for an example). We can then define natural distances between two flows, for example the L_p distance

d_p(f_1, f_2) = \left( \int_0^1 |f_1(r) - f_2(r)|^p \, dr \right)^{1/p} \qquad (7)

for use in a k-nearest neighbor model for relating the local sentiment flow to the global sentiment.

4 Experiments

To examine the ideas proposed in this paper, we implemented isotonic CRF and the normalization and smoothing procedure, and experimented with a small dataset of 249 movie reviews, randomly selected from the Cornell sentence polarity dataset v1.0¹, all written by the same author. The code for isotonic CRF is a modified version of the quasi-Newton implementation in the Mallet toolkit.
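The length normalization and smoothing of Section 3.2 can be sketched as follows; the grid resolution is an illustrative choice, and the kernel variance 0.2 matches the truncated Gaussian used later in Section 4.2.

```python
import numpy as np

GRID = 200
R = (np.arange(GRID) + 0.5) / GRID      # midpoints of a uniform grid on [0, 1]

def flow_curve(sentiments, sigma2=0.2):
    """Length-normalized sentiment flow: piecewise constant on k equal
    intervals (one per sentence), smoothed by a Gaussian kernel truncated to
    [0, 1] and renormalized so each row of weights sums to one."""
    k = len(sentiments)
    h = np.array([sentiments[min(int(t * k), k - 1)] for t in R], dtype=float)
    w = np.exp(-(R[:, None] - R[None, :]) ** 2 / (2.0 * sigma2))
    w /= w.sum(axis=1, keepdims=True)
    return w @ h

def d_p(f1, f2, p=2):
    """Discretized L_p distance (7); the grid mean approximates the integral."""
    return float((np.abs(f1 - f2) ** p).mean() ** (1.0 / p))

# Two documents with different numbers of sentences become comparable curves.
f1 = flow_curve([0, 1, 2, 1, -1])
f2 = flow_curve([-1, -2, 0, 1])
dist12 = d_p(f1, f2, p=1)
assert d_p(f1, f1) == 0.0 and dist12 > 0.0
```

Because smoothing is a convex combination of the sentence values, each curve stays within the range of its document's labels.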
In order to check the accuracy and benefit of the local sentiment predictor, we hand-labeled the local sentiments of each of these reviews. We assigned to each sentence one of the following values in O \subseteq \mathbb{R}: 2 (highly praised), 1 (something good), 0 (objective description), -1 (something that needs improvement) and -2 (strong aversion).

4.1 Sentence Level Prediction

To evaluate the prediction quality of the local sentiment, we compared the performance of naive Bayes, SVM (using the default parameters of SVMlight), CRF and isotonic CRF. Figure 2 displays the testing accuracy and distance of predicting the sentiment of sentences as a function of the training data size, averaged over 20 cross-validation train-test splits. The dataset presents one particular difficulty: more than 75% of the sentences are labeled objective (or 0). As a result, the prediction accuracy for objective sentences is over-emphasized. To correct for this fact, we report our test-set performance over a balanced (equal number of sentences for different labels) sample of labeled sentences. Note that since there are 5 labels, random guessing yields a baseline of 0.2 accuracy, and always guessing 0 yields a baseline of 1.2 distance.

¹ Available at http://www.cs.cornell.edu/People/pabo/movie-review-data

Figure 2: Local sentiment prediction: balanced test results for naive Bayes, SVM, CRF and iso-CRF (left: balanced testing accuracy; right: balanced testing distance; x-axis: training set size from 25 to 175).

As described in Section 3, for isotonic CRF we obtained 300 words on which to enforce monotonicity constraints. The 150 words that achieved the highest correlation with the sentiment were chosen for positivity constraints.
Similarly, the 150 words that achieved the lowest correlation were chosen for negativity constraints. Table 1 displays the top 15 words of the two lists.

positive: great, perfection, considerable, superb, outstanding, wonderfully, memorable, performance, worth, enjoyable, enjoyed, beautifully, mood, certain, delightfully
negative: too, couldnt, wasnt, didnt, i, uninspired, just, no, lacked, failed, satire, boring, unnecessary, contrived, tended

Table 1: Lists of 15 words with the largest positive (top) and negative (bottom) correlations.

The results in Figure 2 indicate that, by incorporating the sequential information, the two versions of CRF perform consistently better than SVM and naive Bayes. The advantage of setting the monotonicity constraints in CRF is elucidated by the average absolute distance performance criterion (Figure 2, right). This criterion is based on the observation that in sentiment prediction, the cost of misprediction is influenced by the ordinal relation on the labels, rather than the 0-1 error rate.

4.2 Global Sentiment Prediction

We also evaluated the contribution of the local sentiment analysis in helping to predict the global sentiment of documents. We compared a nearest neighbor classifier for the global sentiment, where the representation varied from bag of words to the smoothed length-normalized local sentiment representation (with and without objective sentences). The smoothing kernel was a bounded Gaussian density (truncated and renormalized) with \sigma^2 = 0.2. Figure 3 displays discrete and smoothed local sentiment labels, and the smoothed sentiment flow predicted by isotonic CRF. Figure 4 and Table 2 display test-set accuracy of global sentiments as a function of the train set size. The distance in the nearest neighbor classifier was either L_1 or L_2 for the bag of words representation, or their continuous version (7) for the smoothed sentiment curve representation.
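The nearest neighbor vote over smoothed flows can be sketched as follows; the toy curves, labels, and choice of k are invented for illustration, with a discretized version of (7) as the distance.

```python
import numpy as np
from collections import Counter

def d_p(f1, f2, p=1):
    """Discretized version of the distance (7) on a common uniform grid."""
    return float((np.abs(f1 - f2) ** p).mean() ** (1.0 / p))

def nn_global_sentiment(train_curves, train_labels, query, k=3, p=1):
    """Global sentiment by majority vote among the k flows closest in d_p."""
    order = np.argsort([d_p(c, query, p) for c in train_curves])
    return Counter(train_labels[i] for i in order[:k]).most_common(1)[0][0]

# Toy smoothed flows sampled on a common 50-point grid: positive documents
# hover above 0, negative ones below.
r = np.linspace(0.0, 1.0, 50)
train = [1.0 + 0.5 * np.sin(3 * r), 1.5 - r, 0.5 + r,
         -1.0 - 0.5 * np.sin(3 * r), -1.5 + r, -0.5 - r]
labels = ["pos", "pos", "pos", "neg", "neg", "neg"]
pred = nn_global_sentiment(train, labels, 1.0 + 0.2 * r)
```

Here the query flow stays near +1, so its three nearest neighbors are all positive documents and the vote returns "pos".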
The results indicate that the classification performance of the local sentiment representation is better than that of the bag of words representation. In accordance with the conclusion of [6], removing objective sentences (which correspond to sentiment 0) increased the local sentiment analysis performance by 20.7%. We can thus conclude that for the purpose of global sentiment prediction, the local sentiment flow of the non-objective sentences holds most of the relevant information. Performing local sentiment analysis on non-objective sentences improves performance, as the model estimates possess lower variance.

Figure 3: Sentiment flow and its smoothed curve representation. The blue circles indicate the labeled sentiment of each sentence. The blue solid curve and red dashed curve are smoothed representations of the labeled and predicted sentiment flows. Only non-objective labels are kept in generating the two curves. The numberings correspond to sentences displayed in Section 4.4.

Figure 4: Accuracy of global sentiment prediction (4-class labeling) as a function of train set size (left: nearest neighbor classifier with L_1; right: with L_2; each panel compares the sentiment flow without objective sentences, the sentiment flow with objective sentences, and the vocabulary representation; x-axis: training set size from 25 to 175).

4.3 Measuring the rate of sentiment change

We examine the rate of sentiment change as a characterization of the author's writing style, using the isotonic author-dependent model of Section 3.1. We assume that the CRF process is a discrete sampling of a corresponding continuous-time Markov jump process.
A consequence of this assumption is that the time T the author stays in sentiment \sigma before leaving is modeled by the exponential distribution

p_\sigma(T > t) = e^{-q_\sigma (t-1)}, \qquad t > 1.

Here, we assume T > 1, and q_\sigma is interpreted as the rate of change of the sentiment \sigma \in O: the larger the value, the more likely the author will switch to other sentiments in the near future. To estimate the rate of change q_\sigma of an author we need to compute p_\sigma(T > t) based on the marginal probabilities p(s|a) of sentiment sequences s of length l. The probability p(s|a) may be approximated by

p(s|a) = \sum_x p(x|a)\, p(s|x,a) \approx \sum_x \tilde{p}(x|a)\, \frac{\sum_{i=1}^{n-l+1} \alpha_i(s_1|x,a) \prod_{j=i+1}^{i+(l-1)} M_j(s_{j-i}, s_{j-i+1}|x,a)\, \beta_{i+(l-1)}(s_l|x,a)}{Z(x,a)} \qquad (8)

where \tilde{p} is the empirical probability function \tilde{p}(x|a) = \frac{1}{|C|} \sum_{x' \in C} \delta_{x,x'} for the set C of documents written by author a of length no less than l, and \alpha, M, \beta are the forward, transition and backward probabilities analogous to the dynamic programming method in [3].

Using the model p(s|a) we can compute p_\sigma(T > t) for different authors at integer values of t, which leads to the quantity q_\sigma associated with each author. However, since (8) is based on an approximation, the calculated values of p_\sigma(T > t) will be noisy, resulting in slightly different values of q_\sigma for different time points t and cross validation iterations. A linear regression fit for q_\sigma based on the approximated values of p_\sigma(T > t) for two authors using 10-fold cross validation is displayed in Figure 5. The data was the 249 movie reviews from the previous experiments written by one author, and an additional 201 movie reviews from a second author.
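Given approximated values of p_\sigma(T > t) at integer t, the rate q_\sigma is the slope of a zero-intercept regression of -log p_\sigma(T > t) on (t - 1); a sketch on synthetic survival values standing in for the approximation (8), with the true rate fixed at 1.5 by construction:

```python
import numpy as np

def fit_rate(ts, survival):
    """Least-squares fit of q in p(T > t) = exp(-q (t - 1)): a zero-intercept
    regression of -log p(T > t) on (t - 1), with closed-form slope."""
    xs = np.asarray(ts, dtype=float) - 1.0
    y = -np.log(np.asarray(survival, dtype=float))
    return float((xs @ y) / (xs @ xs))

# Synthetic noisy survival probabilities at integer times, true rate q = 1.5.
rng = np.random.default_rng(0)
ts = np.array([2, 3, 4, 5])
survival = np.exp(-1.5 * (ts - 1)) * np.exp(rng.normal(0.0, 0.05, ts.size))
q_hat = fit_rate(ts, survival)
assert abs(q_hat - 1.5) < 0.2
```

Multiplicative noise on the survival probabilities becomes additive on the negative log scale, which is why an ordinary least-squares slope recovers q well.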
Interestingly, the author associated with the red dashed line has a consistently lower q_\sigma value in all panels, and thus is considered more "static" and less prone to quick sentiment variations.

representation | L_1 | rel. improvement | L_2 | rel. improvement
vocabulary | 0.3095 | - | 0.3068 | -
sentiment flow with objective sentences | 0.3189 | 3.0% | 0.3128 | 1.95%
sentiment flow without objective sentences | 0.3736 | 20.7% | 0.3655 | 19.1%

Table 2: Accuracy results and relative improvement when training size equals 175.

Figure 5: Linear regression fit for q_\sigma, \sigma = 2, 1, -1, -2 (left to right), based on approximated values of p_\sigma(T > t) for two different authors. X-axis: time t; Y-axis: negative log-probability of T > t. (The fitted slopes shown in the four panels are 1.8388/1.3504, 1.6808/1.143, 1.2181/0.76685, and 1.8959/1.2231 for the two authors.)

4.4 Text Summarization

We demonstrate the potential use of sentiment flow for text summarization with a very simple example. The text below shows the result of summarizing the movie review in Figure 3 by keeping only the sentences associated with the start, the end, the top, and the bottom of the predicted sentiment curve. The number before each sentence corresponds to the circled number in Figure 3.

1 What makes this film mesmerizing, is not the plot, but the virtuoso performance of Lucy Berliner (Ally Sheedy), as a wily photographer, retired from her professional duties for the last ten years and living with a has-been German actress, Greta (Clarkson). 2 The less interesting story line involves the ambitions of an attractive, baby-faced assistant editor at the magazine, Syd (Radha Mitchell), who lives with a boyfriend (Mann) in an emotionally chilling relationship. 3 We just lost interest in the characters, the film began to look like a commercial for a magazine that wouldn't stop and get to the main article.
4 Which left the film only somewhat satisfying; it did create a proper atmosphere for us to view these lost characters, and it did have something to say about how their lives are being emotionally torn apart. 5 It would have been wiser to develop more depth for the main characters and show them to be more than the superficial beings they seemed to be on screen.\n\nAlternative schemes for extracting specific sentences may be used to achieve different effects, depending on the needs of the user. We plan to experiment further in this area by combining local sentiment flow and standard summarization techniques.\n\n5\n\nDiscussion\n\nIn this paper, we address the prediction and application of the local sentiment flow concept. As existing models are inadequate for a variety of reasons, we introduce the isotonic CRF model that is suited to predict the local sentiment flow. This model achieves better performance than the standard CRF as well as non-sequential models such as SVM. We also demonstrate the usefulness of the local sentiment representation for global sentiment prediction, style analysis and text summarization. References\n[1] B. Pang, L. Lee, and S. Vaithyanathan. Thumbs up? sentiment classification using machine learning techniques. In Proceedings of EMNLP-02. [2] B. Pang and L. Lee. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In Proceedings of ACL-05. [3] J. Lafferty, F. Pereira, and A. McCallum. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In International Conference on Machine Learning, 2001. [4] R. E. Barlow, D.J. Bartholomew, J. M. Bremner, and H. D. Brunk. Statistical inference under order restrictions; the theory and application of isotonic regression. Wiley, 1972. [5] R. P. Stanley. Enumerative Combinatorics. Wadsworth & Brooks/Cole Mathematics Series, 1986. [6] B. Pang and L. Lee. 
A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. In Proceedings of ACL-04.", "award": [], "sourceid": 3152, "authors": [{"given_name": "Yi", "family_name": "Mao", "institution": null}, {"given_name": "Guy", "family_name": "Lebanon", "institution": null}]}