
Stereo-Knowledge Distillation from dpMV to Dual Pixels for Light Field Video Reconstruction

Published 20 May 2024 in cs.CV (arXiv:2405.11823v1)

Abstract: Dual pixels contain disparity cues arising from the defocus blur. This disparity information is useful for many vision tasks ranging from autonomous driving to 3D creative realism. However, disparity estimated directly from dual pixels is less accurate than that from calibrated stereo setups. This work hypothesizes that distilling high-precision dark stereo knowledge, implicitly or explicitly, into efficient dual-pixel student networks enables faithful reconstructions. This dark knowledge distillation should also alleviate stereo-synchronization setup and calibration costs while dramatically increasing parameter and inference-time efficiency. We collect the first and largest 3-view dual-pixel video dataset, dpMV, to validate our explicit dark knowledge distillation hypothesis. We show that these methods outperform purely monocular solutions, especially in challenging foreground-background separation regions, by using faithful guidance from dual pixels. Finally, we demonstrate an unconventional use case unlocked by dpMV and implicit dark knowledge distillation from an ensemble of teachers: Light Field (LF) video reconstruction. Our LF video reconstruction method is the fastest and most temporally consistent to date. It remains competitive in reconstruction fidelity while offering many other essential properties, including high parameter efficiency, implicit disocclusion handling, zero-shot cross-dataset transfer, geometrically consistent inference at higher spatial-angular resolutions, and adaptive baseline control. All source code is available at the anonymous repository https://github.com/Aryan-Garg.
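The core idea the abstract describes, supervising a lightweight dual-pixel student with disparity predicted by a high-precision stereo teacher, can be sketched as a blended regression loss. This is a minimal illustrative sketch, not the paper's actual objective: the function name, the L1 form, and the `alpha` blending weight are all assumptions for illustration.

```python
import numpy as np

def stereo_distillation_loss(student_disp, teacher_disp, gt_disp=None, alpha=0.7):
    """Hypothetical response-based distillation loss for disparity regression.

    student_disp : disparity map predicted by the dual-pixel student network
    teacher_disp : disparity map from a high-precision stereo teacher
    gt_disp      : optional ground-truth disparity (may be unavailable)
    alpha        : blend weight between teacher and ground-truth supervision
    """
    # Distillation term: pull the student toward the teacher's "dark knowledge".
    distill = np.mean(np.abs(student_disp - teacher_disp))
    if gt_disp is None:
        return distill
    # Supervised term: standard L1 against ground truth, when it exists.
    supervised = np.mean(np.abs(student_disp - gt_disp))
    return alpha * distill + (1.0 - alpha) * supervised
```

In this sketch, setting `gt_disp=None` corresponds to the purely teacher-supervised regime the abstract suggests, where stereo predictions replace expensive calibrated ground truth.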

