Part of Advances in Neural Information Processing Systems 19 (NIPS 2006)
David Cohn, Deepak Verma, Karl Pfleger
Clustering, or factoring, of a document collection attempts to "explain" each observed document in terms of one or a small number of inferred prototypes. Prior work demonstrated that when links exist between documents in the corpus (as is the case with a collection of web pages or scientific papers), building a joint model of document contents and connections produces a better model than one built from contents or connections alone. Many problems arise, however, when trying to apply these joint models to corpora at the scale of the World Wide Web; one is that the sheer overhead of representing a feature space on the order of billions of dimensions becomes impractical.

We address this problem with a simple representational shift inspired by probabilistic relational models: instead of representing document linkage in terms of the identities of linking documents, we represent it by the explicit and inferred attributes of the linking documents. Several surprising results accompany this shift: in addition to being computationally more tractable, the new model produces factors that more cleanly decompose the document collection. We discuss several variations on this model and show how some can be seen as exact generalizations of the PageRank algorithm.
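Since the abstract positions some model variants as exact generalizations of PageRank, the baseline algorithm itself is worth recalling. The sketch below is a minimal power-iteration PageRank on a toy three-page link graph; the graph, function name, and parameter values are illustrative assumptions, not taken from the paper.

```python
def pagerank(links, damping=0.85, iters=100):
    """Minimal power-iteration PageRank.

    links[i] is the list of pages that page i links to.
    Returns a rank vector summing to 1. The damping factor and
    iteration count are conventional illustrative choices.
    """
    n = len(links)
    rank = [1.0 / n] * n
    for _ in range(iters):
        # Every page receives the baseline "teleport" mass.
        new = [(1.0 - damping) / n] * n
        for i, outs in enumerate(links):
            if outs:
                # Distribute page i's rank evenly over its out-links.
                share = damping * rank[i] / len(outs)
                for j in outs:
                    new[j] += share
            else:
                # Dangling page: spread its rank uniformly.
                for j in range(n):
                    new[j] += damping * rank[i] / n
        rank = new
    return rank

# Toy graph: 0 -> {1, 2}, 1 -> {2}, 2 -> {0}.
# Page 2 is linked by both other pages, so it ends up with the highest rank.
ranks = pagerank([[1, 2], [2], [0]])
```

In the identity-based joint models the abstract criticizes, each page's link profile is a vector indexed by the identities of its in-linking pages (billions of dimensions at web scale); the paper's representational shift instead indexes that profile by attributes of the linking pages, shrinking the feature space while keeping rank-propagation computations like the one above expressible.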