Deep learning-based video stream reconstruction in mass-production diffractive optical systems
V. Evdokimova 1,2, M. Petrov 1,2, M. Klyueva 1,2, E. Zybin 3, V. Kosianchuk 3, I. Mishchenko 3, V. Novikov 3, N. Selvesiuk 3, E. Ershov 4, N. Ivliev 1,2, R. Skidanov 1,2, N. Kazanskiy 1,2, A. Nikonorov 1,2
1 Samara National Research University, Moskovskoye Shosse 34, 443086, Samara, Russia;
2 IPSI RAS – Branch of the FSRC "Crystallography and Photonics" RAS, Molodogvardeyskaya 151, 443001, Samara, Russia;
3 Federal State Unitary Enterprise State Research Institute of Aviation Systems, Viktorenko 7, 125319, Moscow, Russia;
4 Institute for Information Transmission Problems, RAS, Bolshoy Karetny per. 19, build 1, 127051, Moscow, Russia
DOI: 10.18287/2412-6179-CO-834
Pages: 130-141.
Full text of article: Russian language.
Abstract:
Many recent studies have focused on developing image reconstruction algorithms for optical systems based on flat optics. These studies demonstrate the feasibility of combining flat optics with reconstruction algorithms in real vision systems. However, additional causes of quality loss arise in the development of such systems. This study investigates how the reconstructed image quality is affected by factors such as the limitations of the mass-production technology for diffractive optics, lossy video stream compression artifacts, and the specifics of a neural network approach to image reconstruction. The paper offers an end-to-end deep learning-based image reconstruction framework that compensates for these additional sources of quality loss and provides image reconstruction quality sufficient for applied vision systems.
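For illustration only, the sketch below shows one way such an end-to-end reconstruction stage could be set up: a small U-Net-style convolutional network maps a decoded, degraded frame toward its ground truth, trained with an L1 loss and the Adam optimizer. The TinyUNet class, its channel widths, the loss choice, and the learning rate are assumptions made for this sketch and do not reproduce the authors' actual architecture or training setup.

# Minimal sketch of a deep learning-based frame reconstruction step.
# TinyUNet and all hyperparameters below are illustrative assumptions,
# not the architecture described in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyUNet(nn.Module):
    """Two-scale encoder-decoder with one skip connection and a
    residual output, standing in for a full reconstruction network."""
    def __init__(self, ch=32):
        super().__init__()
        self.enc1 = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        self.down = nn.Conv2d(ch, 2 * ch, 3, stride=2, padding=1)
        self.enc2 = nn.Sequential(
            nn.Conv2d(2 * ch, 2 * ch, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(2 * ch, ch, 2, stride=2)
        self.dec = nn.Sequential(
            nn.Conv2d(2 * ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 3, 3, padding=1))

    def forward(self, x):
        s = self.enc1(x)                     # full-resolution features
        y = self.enc2(F.relu(self.down(s)))  # half-resolution features
        y = torch.cat([self.up(y), s], 1)    # skip connection
        return x + self.dec(y)               # predict a residual correction

net = TinyUNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-4)

# Stand-ins for one training pair: a frame degraded by diffractive-lens
# aberrations plus codec artifacts, and the corresponding clean frame.
degraded = torch.rand(1, 3, 64, 64)
target = torch.rand(1, 3, 64, 64)

opt.zero_grad()
loss = F.l1_loss(net(degraded), target)  # pixel-wise reconstruction loss
loss.backward()
opt.step()
print(f"training loss: {loss.item():.4f}")

In practice, the degraded inputs would come from frames captured through a mass-produced diffractive lens and passed through a lossy video codec, so a network trained this way learns to compensate for both degradations jointly.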
Keywords:
diffractive optics, diffractive lenses, deep learning-based reconstruction, image processing.
Citation:
Evdokimova VV, Petrov MV, Klyueva MA, Zybin EY, Kosianchuk VV, Mishchenko IB, Novikov VM, Selvesiuk NI, Ershov EI, Ivliev NA, Skidanov RV, Kazanskiy NL, Nikonorov AV. Deep learning-based video stream reconstruction in mass production diffractive optical systems. Computer Optics 2021; 45(1): 130-141. DOI: 10.18287/2412-6179-CO-834.
Acknowledgements:
The theoretical part and the neural network models were developed with support from the Russian Science Foundation under RSF grant 20-69-47110. The experimental part was carried out with support from the Russian Foundation for Basic Research under RFBR grant 18-07-01390-А and under the government project of the IPSI RAS – a branch of the Federal Scientific-Research Center "Crystallography and Photonics" of the RAS (agreement 007-ГЗ/Ч3363/2б).