
Veiling glare removal: synthetic dataset generation, metrics and neural network architecture
A.V. Shoshin 1,2, E.A. Shvets 1

1 Kharkevich Institute for Information Transmission Problems, RAS, Bolshoy Karetny per. 19, build. 1, Moscow, 127051, Russia;
2 Moscow Institute of Physics and Technology (State University), Institutsky per. 9, Dolgoprudny, 141701, Russia


DOI: 10.18287/2412-6179-CO-883

Pages: 615-626.

Full text of article: English.

Abstract:
In photography, the presence of a bright light source often reduces the quality and readability of the resulting image. Light rays reflect and scatter off camera elements, the sensor or the diaphragm, causing unwanted artifacts. These artifacts are generally known as "lens flare" and may affect the photo in different ways: reducing the contrast of the image (veiling glare), adding circular or circular-like patterns (ghosting flare), appearing as bright rays spreading from the light source (starburst pattern), or causing aberrations. All of these effects are generally undesirable, as they reduce the legibility and aesthetics of the image. In this paper we address the problem of removing or reducing the effect of veiling glare in an image. There are no large-scale datasets available for this problem and no established metrics, so we start by (i) proposing a simple and fast algorithm for generating the synthetic veiling-glare images needed for training and (ii) studying metrics used in related image-enhancement tasks (dehazing and underwater image enhancement). We select three such no-reference metrics (UCIQE, UIQM and CCF) and show that their improvement indicates better veil removal. Finally, we experiment with neural network architectures and propose a two-branch architecture and a training procedure that utilizes the structural similarity measure.
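
As an illustration of the synthetic-data idea described above, the sketch below (in Python, using NumPy and SciPy) generates a clean/glared training pair by adding a smooth, bright low-frequency veil around an assumed light-source position. The additive veil model, the function name add_veiling_glare and all parameter values are assumptions made for this sketch; they do not reproduce the generation algorithm proposed in the paper.

# Minimal sketch of synthetic veiling-glare generation
# (illustrative assumptions, not the authors' algorithm).
import numpy as np
from scipy.ndimage import gaussian_filter

def add_veiling_glare(img, source_xy, strength=0.6, sigma=80.0):
    # img: float array in [0, 1] of shape (H, W, 3); source_xy: (x, y) in pixels.
    h, w = img.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w].astype(np.float32)
    # Smooth radial falloff from the light source acts as the "veil" field.
    r2 = (xx - source_xy[0]) ** 2 + (yy - source_xy[1]) ** 2
    veil = strength * np.exp(-r2 / (2.0 * sigma ** 2))
    # Extra low-pass filtering keeps the veil free of sharp structure.
    veil = gaussian_filter(veil, sigma=sigma / 4.0)
    # Adding the veil raises the black level and lowers local contrast,
    # which is the defining property of veiling glare.
    return np.clip(img + veil[..., None], 0.0, 1.0)

# Usage: the (clean, glared) pair forms one supervised training sample.
clean = np.random.rand(256, 256, 3).astype(np.float32)
glared = add_veiling_glare(clean, source_xy=(200.0, 60.0))

Because the clean image is available by construction, full-reference losses such as the structural similarity measure mentioned above can be computed directly on such pairs during training.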

Keywords:
lens flare, veiling glare, image enhancement, deep learning, synthetic data.

Citation:
Shoshin AV, Shvets EA. Veiling glare removal: synthetic dataset generation, metrics and neural network architecture. Computer Optics 2021; 45(4): 615-626. DOI: 10.18287/2412-6179-CO-883.

