Contemplating Visual Emotions: Understanding and Overcoming Dataset Bias
Rameswar Panda, Jianming Zhang, Haoxiang Li, Joon-Young Lee, Xin Lu, Amit K. Roy-Chowdhury
European Conference on Computer Vision (ECCV), 2018
We investigate different dataset biases and propose a curriculum-guided, webly supervised approach for learning a generalizable emotion recognition model.

Datasets & Models

We introduce three image emotion datasets collected from different sources for model training and testing. The first, WEBEmo, contains about 268,000 stock photos across 25 fine-grained emotion categories. The other two, Emotion-6 and UnBiasedEmo, are collected from Google and Flickr to study dataset bias in visual emotion recognition.

[WEBEmo] [Emotion-6] [UnBiasedEmo]
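
As a rough illustration, the sketch below shows one way the released images could be loaded for training with PyTorch. The per-category folder layout (e.g. WEBEmo/<category>/<image>.jpg) and the path "WEBEmo/" are assumptions for the example only; the actual release format may differ.

```python
# Minimal sketch: load an emotion dataset assuming a hypothetical
# one-folder-per-category layout, e.g. WEBEmo/<category>/<image>.jpg.
import torch
from torchvision import datasets, transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# "WEBEmo/" is a placeholder for wherever the dataset is extracted.
dataset = datasets.ImageFolder("WEBEmo/", transform=preprocess)
loader = torch.utils.data.DataLoader(dataset, batch_size=64,
                                     shuffle=True, num_workers=4)

print(f"{len(dataset)} images across {len(dataset.classes)} categories")
```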

All the trained models will be released soon.

Acknowledgements

This work is partially supported by NSF grant 1724341 and gifts from Adobe. We thank Victor Hill of UCR CS for setting up the computing infrastructure used in this work.

Bibtex

Please cite our paper if you find it useful for your research.

@inproceedings{panda2018contemplating,
  title={Contemplating Visual Emotions: Understanding and Overcoming Dataset Bias},
  author={Panda, Rameswar and Zhang, Jianming and Li, Haoxiang and Lee, Joon-Young and Lu, Xin and Roy-Chowdhury, Amit K},
  booktitle={European Conference on Computer Vision (ECCV)},
  year={2018}
}