
Easy over Hard: A Case Study on Deep Learning

Published 1 Mar 2017 in cs.SE and cs.LG (arXiv:1703.00133v2)

Abstract: While deep learning is an exciting new technique, the benefits of this method need to be assessed with respect to its computational cost. This is particularly important for deep learning since these learners need hours (to weeks) to train the model. Such long training time limits the ability of (a)~a researcher to test the stability of their conclusion via repeated runs with different random seeds; and (b)~other researchers to repeat, improve, or even refute that original work. For example, recently, deep learning was used to find which questions in the Stack Overflow programmer discussion forum can be linked together. That deep learning system took 14 hours to execute. We show here that a very simple optimizer called DE, applied to fine-tune an SVM, can achieve similar (and sometimes better) results. The DE approach terminated in 10 minutes; i.e. 84 times faster than the deep learning method. We offer these results as a cautionary tale to the software analytics community and suggest that not every new innovation should be applied without critical analysis. If researchers deploy some new and expensive process, that work should be baselined against some simpler and faster alternatives.

Citations (190)

Summary

Overview of "Easy over Hard: A Case Study on Deep Learning"

The paper "Easy over Hard: A Case Study on Deep Learning" by Wei Fu and Tim Menzies critically evaluates the utility of deep learning in software engineering analytics, weighing its computational cost against that of simpler machine learning methods. The case study is the prediction of semantic linkability between knowledge units on Stack Overflow, previously studied by Xu et al.

Main Findings and Arguments

  1. Alternative Approaches to Deep Learning: The authors argue that for certain tasks, deep learning is excessively resource-intensive compared to other simpler methods. They utilize differential evolution (DE) to fine-tune a support vector machine (SVM) model, which performs comparably to convolutional neural networks (CNNs) but with substantially lower computational requirements.

  2. Reproduction and Tuning: The reproduction of the baseline system (Word Embedding + SVM) closely matches the performance reported by Xu et al., thus establishing a reliable starting point. The tuned SVM model demonstrates marked improvements in precision, recall, and F1-score over its untuned counterpart, sometimes surpassing the performance of CNNs.

  3. Computational Efficiency: A significant highlight is the dramatic reduction in computational cost. The DE-tuned SVM model executes roughly 84 times faster than the deep learning system, completing tasks in 10 minutes as opposed to the 14 hours required by CNNs.

Methodological Contributions

  • The study leverages differential evolution to optimize SVM parameters, underscoring the untapped potential of optimization techniques within the realm of parameter tuning for traditional machine learning models.

  • This tuning protocol yielded substantial performance gains, emphasizing the need to explore similar optimization frameworks in other software engineering contexts before defaulting to deep learning.
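The DE-over-SVM recipe described above can be sketched in a few lines. This is an illustrative reconstruction, not the paper's exact setup: the synthetic dataset, the tuned parameters (C and gamma), their bounds, and the DE budget are all assumptions chosen for a small runnable example.

```python
# Sketch: tuning an SVM's hyperparameters with differential evolution (DE),
# in the spirit of Fu & Menzies. Dataset, parameter ranges, and DE budget
# are illustrative assumptions, not the paper's configuration.
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Toy binary-classification data standing in for the Stack Overflow task.
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

def neg_f1(params):
    # DE minimizes its objective, so return the negated cross-validated F1.
    C, gamma = params
    clf = SVC(C=C, gamma=gamma)
    return -cross_val_score(clf, X, y, cv=3, scoring="f1").mean()

# Search assumed ranges for C and gamma with a small DE budget.
result = differential_evolution(
    neg_f1,
    bounds=[(0.1, 100.0), (1e-4, 1.0)],
    maxiter=10,
    popsize=10,
    seed=1,
)
best_C, best_gamma = result.x
print(f"best C={best_C:.3f}, gamma={best_gamma:.4f}, F1={-result.fun:.3f}")
```

The design point the paper stresses is the cheapness of each step: every DE candidate costs only one SVM fit per cross-validation fold, so even hundreds of evaluations finish in minutes rather than the hours a CNN needs for a single training run.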

Implications and Future Directions

The findings advocate for a judicious assessment of computational costs when applying novel machine learning techniques, particularly where resource expenditure limits replication and refinement efforts. The research suggests that baselines built from simpler methods could act as a litmus test before more complex, resource-intensive methods like deep learning are applied.

In terms of future work, while this paper does not dismiss the competence of deep learning outright, it does challenge the community to scrutinize efficiency and resource expenditure. Future advancements could involve combining the strengths of high-speed optimization methods with emerging fast deep learning architectures—potentially creating a hybrid approach that benefits from both improved performance and cost efficiency.

Conclusion

In conclusion, "Easy over Hard: A Case Study on Deep Learning" addresses an important issue in machine learning applications within software engineering. By demonstrating a cost-effective alternative, the DE-tuned SVM, that matches a deep learning system at a fraction of its cost, the authors contribute to a broader understanding of methodological suitability in software analytics tasks. The work highlights the balance between innovation and practicality and serves as a reminder that methodological efficiency matters in both research and industrial applications.
