Images speak more than a thousand words. One of their aspects is that they can affect people on an emotional level. Since the emotions that arise in the viewer of an image are highly subjective, they are rarely indexed. However, there are situations in which it would be helpful if images could be retrieved based on their emotional content.
Our goal is to investigate and develop image-processing methods that extract and combine low-level features representing the emotional content of an image, and to build a framework that classifies images automatically on that basis.
Specifically, we exploit theoretical and empirical concepts from psychology and art theory to extract image features that are specific to the domain of artworks with emotional expression.
For our work we choose a dimensional approach to emotions known from the field of psychophysiology. According to this approach, an emotion can be described by coordinates in a two-dimensional emotion space: one axis represents valence (the type of emotion), ranging from pleasant to unpleasant, and the other represents arousal (the intensity of the emotion), ranging from calm to exciting/thrilling. Emotions mapped onto this space can be translated into words such as angry, sad, or exciting, and these words can be used for automatic indexing of images.
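The quadrant structure of the valence/arousal space can be sketched as follows; the coordinate ranges and the particular label assignments are illustrative assumptions for this example, not taken from our model:

```python
# Minimal sketch of the dimensional emotion model: a point in the
# two-dimensional valence/arousal space is mapped to a coarse emotion word.
# Valence and arousal are assumed normalized to [-1, 1]; the four quadrant
# labels are illustrative choices.

def label_emotion(valence: float, arousal: float) -> str:
    """Map a (valence, arousal) coordinate to an emotion word by quadrant."""
    if valence >= 0 and arousal >= 0:
        return "exciting"   # pleasant, high arousal
    if valence >= 0:
        return "calm"       # pleasant, low arousal
    if arousal >= 0:
        return "angry"      # unpleasant, high arousal
    return "sad"            # unpleasant, low arousal

print(label_emotion(0.8, 0.6))    # pleasant and aroused -> "exciting"
print(label_emotion(-0.7, -0.4))  # unpleasant and calm  -> "sad"
```

A real system would of course place emotions continuously in this space rather than in four hard quadrants; the sketch only illustrates how coordinates translate into indexable words.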
Machine-learning methods are then used to learn a classifier based on these features.
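The learning step can be illustrated with a toy example. Here a simple nearest-centroid rule stands in for the actual machine-learning methods, and the two-element feature vectors (think of quantities such as mean saturation and edge density) and their valence labels are invented for the sketch:

```python
# Toy illustration of learning a classifier from low-level image features.
# A nearest-centroid rule is used as a stand-in for the real learning
# method; feature vectors and labels are invented for this example.
import math

def train_centroids(samples):
    """samples: list of (feature_vector, label) pairs.
    Returns a dict mapping each label to its mean feature vector."""
    sums, counts = {}, {}
    for vec, label in samples:
        if label not in sums:
            sums[label] = [0.0] * len(vec)
            counts[label] = 0
        sums[label] = [s + v for s, v in zip(sums[label], vec)]
        counts[label] += 1
    return {lab: [s / counts[lab] for s in sums[lab]] for lab in sums}

def classify(centroids, vec):
    """Assign the label whose centroid is closest in Euclidean distance."""
    return min(centroids, key=lambda lab: math.dist(centroids[lab], vec))

# Invented training data: (features, valence label)
train = [
    ([0.9, 0.8], "pleasant"),
    ([0.8, 0.7], "pleasant"),
    ([0.2, 0.1], "unpleasant"),
    ([0.1, 0.3], "unpleasant"),
]
centroids = train_centroids(train)
print(classify(centroids, [0.85, 0.75]))  # -> pleasant
```

In practice a stronger learner (e.g. a support vector machine) would replace the centroid rule, but the interface is the same: extracted features in, an emotion category out.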
For testing and training, we use three data sets: the International Affective Picture System (IAPS), which is also used by Yanulevskaya et al.; a set of artistic photographs downloaded from a photo-sharing site, to investigate whether the artists' conscious use of colors and textures improves classification; and a set of peer-rated abstract paintings, to investigate the features' performance on pictures without contextual content. On the IAPS set we obtain improved classification results compared to Yanulevskaya et al., who use general-purpose image features.