MarsQE: Semantic-Informed Quality Enhancement for Compressed Martian Image

Published 15 Apr 2024 in eess.IV | (2404.09433v2)

Abstract: Lossy image compression is essential for Mars exploration missions due to the limited bandwidth between Earth and Mars. However, compression may introduce visual artifacts that complicate geological analysis of the Martian surface. Existing quality enhancement approaches, designed primarily for Earth images, fall short on Martian images because they do not account for the unique Martian semantics. In response to this challenge, we conduct an in-depth analysis of Martian images, yielding two key semantic insights: the presence of texture similarities and the compact nature of texture representations in Martian images. Inspired by these findings, we introduce MarsQE, an innovative, semantic-informed, two-phase quality enhancement approach designed specifically for Martian images. The first phase performs semantic-based matching of texture-similar reference images, and the second phase enhances image quality by transferring texture patterns from these reference images to the compressed image. We also develop a post-enhancement network to further reduce compression artifacts and achieve superior compression quality. Our extensive experiments demonstrate that MarsQE significantly outperforms existing approaches designed for Earth images, establishing a new benchmark for quality enhancement of Martian images.
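The two-phase pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the semantic descriptor (a plain intensity histogram), the cosine-similarity matching, and the residual texture blend are all stand-in assumptions for the learned components MarsQE actually uses.

```python
import numpy as np

def semantic_feature(img):
    """Hypothetical stand-in for a semantic descriptor: a small
    intensity histogram (the paper's features are learned)."""
    hist, _ = np.histogram(img, bins=8, range=(0.0, 1.0), density=True)
    return hist

def match_reference(compressed, references):
    """Phase 1 (sketch): pick the texture-similar reference image by
    comparing semantic descriptors with cosine similarity."""
    q = semantic_feature(compressed)
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    return max(range(len(references)),
               key=lambda i: cos(q, semantic_feature(references[i])))

def box_blur(img, k=3):
    """Small box filter built from padded, shifted sums."""
    p = k // 2
    padded = np.pad(img, p, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def transfer_texture(compressed, reference, alpha=0.3):
    """Phase 2 (sketch): inject the reference's high-frequency residual
    into the compressed image. MarsQE uses a learned transfer network
    (plus a post-enhancement network) rather than this linear blend."""
    detail = reference - box_blur(reference)
    return np.clip(compressed + alpha * detail, 0.0, 1.0)

# Usage: enhance a compressed image with its best-matching reference.
# enhanced = transfer_texture(comp, refs[match_reference(comp, refs)])
```

The key design point the sketch mirrors is that enhancement is conditioned on a retrieved reference rather than on the compressed image alone, which is what lets the method exploit the texture similarity and compactness observed in Martian imagery.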
