AffectNet+: A Database for Enhancing Facial Expression Recognition with Soft-Labels

Published 29 Oct 2024 in cs.CV (arXiv:2410.22506v1)

Abstract: Automated Facial Expression Recognition (FER) is challenging due to intra-class variations and inter-class similarities. FER is especially difficult when a facial expression reflects a mixture of several emotions (so-called compound expressions). Existing FER datasets, such as AffectNet, provide discrete emotion labels (hard-labels), in which a single emotion category is assigned to each expression. To alleviate inter- and intra-class challenges, and to provide a richer facial expression descriptor, we propose a new approach to creating FER datasets: a labeling method in which an image is annotated with more than one emotion (soft-labels), each with its own confidence. Specifically, we introduce the notion of soft-labels for facial expression datasets, a new approach to affective computing that enables more realistic recognition of facial expressions. To this end, we propose a novel methodology to accurately calculate soft-labels: a vector representing the extent to which multiple emotion categories are simultaneously present within a single facial expression. Advantages of the proposed method include smoother decision boundaries, support for multi-labeling, and mitigation of bias and data imbalance. Building upon AffectNet, we introduce AffectNet+, a next-generation facial expression dataset. This dataset contains soft-labels, three subsets of differing data complexity, and additional metadata such as age, gender, ethnicity, head pose, facial landmarks, valence, and arousal. AffectNet+ will be made publicly accessible to researchers.
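
To make the soft-label idea concrete, the sketch below contrasts hard-label and soft-label supervision under a standard cross-entropy loss. It is a minimal illustration, not the authors' implementation: the eight emotion categories follow AffectNet, but the soft-label vector, the predicted probabilities, and the numpy-only setup are illustrative assumptions.

    # Minimal sketch of hard- vs. soft-label supervision for FER.
    # Assumptions (illustrative, not from the paper): numpy only,
    # AffectNet's eight emotion categories, made-up label/probability values.
    import numpy as np

    EMOTIONS = ["neutral", "happy", "sad", "surprise",
                "fear", "disgust", "anger", "contempt"]

    # Hard-label: all probability mass on a single category.
    hard_label = np.zeros(len(EMOTIONS))
    hard_label[EMOTIONS.index("surprise")] = 1.0

    # Soft-label: several emotions present at once, with confidences
    # summing to 1 (e.g., a surprised-and-slightly-afraid expression).
    soft_label = np.array([0.05, 0.0, 0.0, 0.55, 0.30, 0.0, 0.10, 0.0])
    assert np.isclose(soft_label.sum(), 1.0)

    def cross_entropy(target, predicted, eps=1e-12):
        """Cross-entropy between a target distribution and model output."""
        return -np.sum(target * np.log(predicted + eps))

    # A model that hedges between surprise and fear:
    predicted = np.array([0.05, 0.0, 0.0, 0.50, 0.35, 0.0, 0.10, 0.0])

    # Under the hard label this prediction costs -log(0.50) ~= 0.69
    # (minimum possible: 0), while under the soft label it costs ~1.08,
    # barely above the soft label's own entropy (~1.07), i.e. near-optimal.
    print(cross_entropy(hard_label, predicted))
    print(cross_entropy(soft_label, predicted))

In training, minimizing the soft-label cross-entropy rewards predictions that spread probability across the co-present emotions instead of forcing a single winner, which is the mechanism behind the smoother decision boundaries the abstract highlights.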

