
Real-time Convolutional Neural Networks for Emotion and Gender Classification

Published 20 Oct 2017 in cs.CV and cs.RO | (1710.07557v1)

Abstract: In this paper we propose and implement a general convolutional neural network (CNN) building framework for designing real-time CNNs. We validate our models by creating a real-time vision system which accomplishes the tasks of face detection, gender classification and emotion classification simultaneously in one blended step using our proposed CNN architecture. After presenting the details of the training procedure setup we proceed to evaluate on standard benchmark sets. We report accuracies of 96% on the IMDB gender dataset and 66% on the FER-2013 emotion dataset. Along with this we also introduce the very recent real-time enabled guided back-propagation visualization technique. Guided back-propagation uncovers the dynamics of the weight changes and evaluates the learned features. We argue that the careful implementation of modern CNN architectures, the use of the current regularization methods and the visualization of previously hidden features are necessary in order to reduce the gap between slow performances and real-time architectures. Our system has been validated by its deployment on a Care-O-bot 3 robot used during RoboCup@Home competitions. All our code, demos and pre-trained architectures have been released under an open-source license in our public repository.

Citations (286)

Summary

  • The paper introduces efficient CNN models—a sequential fully-convolutional network and a mini-Xception—that achieve 96% accuracy for gender and 66% for emotion classification.
  • It employs depth-wise separable convolutions to reduce model parameters by a factor of 10, enabling real-time performance in service robotics such as the Care-O-bot 3.
  • Guided back-propagation visualizations reveal interpretable feature learning and model biases, underscoring the need for diverse datasets to mitigate skewed outcomes.

Real-Time Convolutional Neural Networks for Emotion and Gender Classification

The paper "Real-time Convolutional Neural Networks for Emotion and Gender Classification" by Octavio Arriaga, Paul G. Plöger, and Matias Valdenegro-Toro presents a framework for designing convolutional neural networks (CNNs) that perform the combined tasks of face detection, gender classification, and emotion classification in real time. The research emphasizes the deployment of efficient CNN models in robotics, demonstrated through integration on a Care-O-bot 3, marking a significant step toward real-time facial analysis in service robotics.

The authors offer an innovative solution to the traditionally large parameter space of CNNs employed in image classification tasks, which often results in computational inefficiencies when implemented in real-time systems. The study sets out a framework for creating CNN architectures that minimize parameters while maintaining competitive accuracy in classification tasks. Two models are proposed: a sequential fully-convolutional network without fully connected layers, and a more complex model inspired by the Xception architecture, which leverages residual modules and depth-wise separable convolutions.
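The parameter savings from depth-wise separable convolutions can be illustrated with simple arithmetic. The sketch below compares a standard convolution with its depth-wise separable counterpart; the kernel size and channel counts are hypothetical examples, not layer sizes taken from the paper.

```python
def conv_params(k, c_in, c_out):
    # standard convolution: one k x k x c_in kernel per output channel
    return k * k * c_in * c_out

def sep_conv_params(k, c_in, c_out):
    # depth-wise separable convolution: one k x k filter per input
    # channel, followed by a 1x1 point-wise convolution across channels
    return k * k * c_in + c_in * c_out

# hypothetical layer: 3x3 kernels, 128 input and 128 output channels
std = conv_params(3, 128, 128)      # 147,456 weights
sep = sep_conv_params(3, 128, 128)  # 17,536 weights
print(f"standard: {std}, separable: {sep}, ratio: {std / sep:.1f}x")
```

For a k x k kernel the reduction approaches k² · c_out / (k² + c_out), i.e. close to a factor of 9 for a wide layer with 3x3 kernels, which is consistent with the roughly order-of-magnitude savings the paper reports.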

Both models were rigorously evaluated on critical benchmark datasets: the IMDB gender dataset and the FER-2013 emotion dataset. The sequential fully-convolutional model achieved an accuracy of 96% on gender classification using the IMDB dataset, which comprises 460,723 images. On the FER-2013 dataset, which includes 35,887 grayscale images across seven emotion classes, this model reached an accuracy of 66%. This performance mirrors human-level accuracy in emotion classification, acknowledging the intrinsic difficulty of the task.

The more advanced model, referred to as mini-Xception, refines this approach further. By relying on depth-wise separable convolutions, it reduces the parameter count by a factor of 10 compared with the initial model, while maintaining accuracies of 95% for gender and 66% for emotion classification. This highlights the efficacy of the architectural strategy in cutting computational demand without compromising performance.
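Mechanically, a depth-wise separable convolution splits spatial filtering and channel mixing into two steps. The NumPy sketch below (stride 1, no padding, toy shapes) illustrates the operation itself; it is an assumption-level sketch, not the paper's implementation:

```python
import numpy as np

def depthwise_separable_conv(x, dw, pw):
    """x: (H, W, C_in) input; dw: (k, k, C_in) depth-wise filters;
    pw: (C_in, C_out) point-wise 1x1 weights. Stride 1, 'valid' padding."""
    H, W, C = x.shape
    k = dw.shape[0]
    h_out, w_out = H - k + 1, W - k + 1
    out = np.zeros((h_out, w_out, C))
    for i in range(h_out):
        for j in range(w_out):
            # depth-wise step: each channel is filtered independently
            out[i, j] = np.sum(x[i:i + k, j:j + k] * dw, axis=(0, 1))
    # point-wise step: a 1x1 convolution mixes the channels
    return out @ pw

rng = np.random.default_rng(0)
x = rng.normal(size=(6, 6, 3))   # toy 6x6 image with 3 channels
dw = rng.normal(size=(3, 3, 3))  # one 3x3 filter per input channel
pw = rng.normal(size=(3, 8))     # mix 3 channels into 8
print(depthwise_separable_conv(x, dw, pw).shape)  # (4, 4, 8)
```

The depth-wise step never mixes channels and the point-wise step never looks at spatial neighborhoods, which is exactly where the parameter savings come from.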

The research also implemented guided back-propagation to visualize feature learning within the CNNs. The visualizations confirmed that the models learn interpretable, human-like features, such as frowns and smiles, that drive classification decisions. They also exposed model biases, notably skewed classifications for individuals wearing glasses, highlighting the need for dataset diversity to mitigate biased learning.
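Guided back-propagation modifies only the backward pass of ReLU units: a gradient is propagated only where both the forward pre-activation and the incoming gradient are positive. A minimal sketch of that rule, with toy values chosen for illustration:

```python
import numpy as np

def guided_relu_backward(pre_activation, grad_out):
    # standard ReLU backward keeps grad_out wherever pre_activation > 0;
    # the "guided" variant additionally discards negative gradients
    mask = (pre_activation > 0) & (grad_out > 0)
    return np.where(mask, grad_out, 0.0)

z = np.array([-1.0, 2.0, 3.0, 0.5])  # pre-activations from the forward pass
g = np.array([0.7, -0.2, 1.5, 0.3])  # gradients arriving from above
print(guided_relu_backward(z, g))    # [0.  0.  1.5 0.3]
```

Applying this rule at every ReLU while back-propagating from a class score down to the input pixels yields the sharp, human-interpretable saliency maps described above.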

Looking forward, this work underscores the importance of reducing model bias and enhancing generalizability across diverse demographics. Additionally, the paper anticipates the increased utility of visualization techniques like guided back-propagation in uncovering latent biases and refining model robustness in complex real-world applications.

In conclusion, the proposed CNNs offer a robust framework for developing real-time facial recognition systems, demonstrated through successful deployment in service robots. By significantly reducing architectural complexity while retaining high accuracy, this work lays a foundation for future advancements in real-time CNN applications across various AI and robotic platforms. The open-source nature of the work further contributes to the collaborative improvement of real-time classification systems in machine learning and computer vision domains.
