Representing Face Images for Emotion Classification

Part of Advances in Neural Information Processing Systems 9 (NIPS 1996)


Authors

Curtis Padgett, Garrison Cottrell

Abstract

We compare the generalization performance of three distinct representation schemes for facial emotions using a single classification strategy (neural network). The face images presented to the classifiers are represented as: full face projections of the dataset onto their eigenvectors (eigenfaces); a similar projection constrained to eye and mouth areas (eigenfeatures); and finally a projection of the eye and mouth areas onto the eigenvectors obtained from 32x32 random image patches from the dataset. The latter system achieves 86% generalization on novel face images (individuals the networks were not trained on) drawn from a database in which human subjects consistently identify a single emotion for the face.
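The third representation described above can be sketched as follows. This is a minimal, hypothetical illustration, not the authors' implementation: it uses random data in place of the face database, an arbitrary patch count and component count, and plain NumPy SVD for the eigen-decomposition. Only the core idea is from the abstract: compute eigenvectors of random 32x32 image patches, then project eye/mouth regions onto that basis to obtain the feature vector fed to the classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in data: 100 grayscale "face images" of size 64x64 (random here;
# the paper uses a real face database).
images = rng.random((100, 64, 64))

def sample_patches(imgs, n_patches=500, size=32):
    """Draw random size x size patches from random images, flattened to vectors."""
    h, w = imgs.shape[1:]
    patches = []
    for _ in range(n_patches):
        img = imgs[rng.integers(len(imgs))]
        y = rng.integers(h - size + 1)
        x = rng.integers(w - size + 1)
        patches.append(img[y:y + size, x:x + size].ravel())
    return np.array(patches)

# Eigenvectors of the random-patch ensemble: center the patches and take
# the principal components via SVD (rows of vt are the eigenvectors).
patches = sample_patches(images)
mean = patches.mean(axis=0)
_, _, vt = np.linalg.svd(patches - mean, full_matrices=False)
k = 15                     # number of components kept (illustrative choice)
basis = vt[:k]             # shape (k, 32*32)

# Represent a 32x32 eye or mouth region by its projection onto the basis;
# this k-dimensional vector would be the input to the neural network.
region = images[0][:32, :32].ravel()
features = basis @ (region - mean)
print(features.shape)      # (15,)
```

The eigenfaces and eigenfeatures variants differ only in what is decomposed: whole-face images or fixed eye/mouth crops instead of random patches, with the same project-onto-eigenvectors step.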