Jie Luo, Barbara Caputo, Vittorio Ferrari
Given a corpus of news items consisting of images accompanied by text captions, we want to find out who's doing what, i.e., associate names and action verbs in the captions with the faces and body poses of the persons in the images. We present a joint model for simultaneously solving the image-caption correspondences and learning visual appearance models for the face and pose classes occurring in the corpus. These models can then be used to recognize people and actions in novel images without captions. We demonstrate experimentally that our joint face and pose model solves the correspondence problem better than earlier models covering only the face, and that it can perform recognition on new uncaptioned images.