Processing Energy Modeling for Neural Network Based Image Compression

Published 29 Jun 2023 in eess.IV (arXiv:2306.16755v1)

Abstract: The compression performance of neural-network-based image compression algorithms now surpasses that of state-of-the-art approaches such as JPEG or HEIC. Unfortunately, most neural-network-based compression methods are executed on GPUs and consume a large amount of energy during execution. This paper therefore performs an in-depth analysis of the energy consumption of state-of-the-art neural-network-based compression methods on a GPU and shows that the energy consumption of compression networks can be estimated from the image size with mean estimation errors of less than 7%. Finally, using a correlation analysis, we find that the number of operations per pixel is the main driver of energy consumption and deduce that the network layers up to the second downsampling step consume the most energy.
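The abstract's central claim, that per-image energy can be estimated from the image size alone, amounts to fitting a simple model of energy as a function of pixel count. The following is a minimal sketch of that idea using a least-squares linear fit; the measurement values below are purely hypothetical placeholders, not data from the paper, and the paper's actual model form may differ.

```python
import numpy as np

# Hypothetical (pixel count, measured GPU energy in joules) pairs.
# These values are illustrative only; they stand in for NVML-based
# power measurements integrated over the compression run.
pixels = np.array([262144, 589824, 1048576, 2073600, 8294400], dtype=float)
energy_j = np.array([5.1, 11.3, 19.8, 40.2, 161.0])

# Fit a linear model E(n) = a*n + b by least squares, mirroring the
# idea that energy scales with the number of pixels per image.
A = np.vstack([pixels, np.ones_like(pixels)]).T
(a, b), *_ = np.linalg.lstsq(A, energy_j, rcond=None)

# Mean relative estimation error of the fitted model on the data.
est = a * pixels + b
mean_rel_err = np.mean(np.abs(est - energy_j) / energy_j)
print(f"a = {a:.3e} J/pixel, b = {b:.2f} J, "
      f"mean relative error = {mean_rel_err:.1%}")
```

In practice the measured energy would come from sampling the GPU's power draw (e.g. via nvidia-smi or the NVML API, both cited in the references) and integrating over the execution time of the network.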
