
Convolutional neural network-based low light image enhancement method
J. Guo ¹

¹ Department of Information Engineering,
Xiamen Ocean Vocational College, Xiamen, 361012, China


DOI: 10.18287/2412-6179-CO-1415

Pages: 745-752.

Article language: English.

Abstract:
Low-light image enhancement has become increasingly important as computer vision technologies spread to a growing variety of application settings. However, image quality in low-light conditions frequently suffers from noise and reduced contrast. This paper puts forth a convolutional neural network-based technique for low-light image enhancement. First, the study exploits the stability of local binary features under variations in illumination to provide directional guidance for the enhancement algorithm. Second, a channel attention mechanism is added to improve the network's capacity to learn low-light image features. In tests on two datasets, the proposed model performed better on average than the contrast-limited adaptive histogram equalization algorithm and the bilateral filtering algorithm; recall and the DICE coefficient also improved, by 16.24 % and 4.98 %, respectively. The experimental findings show that the proposed method outperformed the compared approaches in the image enhancement experiments, confirming the validity of this study. The purpose of the study is to offer a reference framework for low-light image enhancement techniques.
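As a rough illustration of the two components named in the abstract, the sketch below pairs a squeeze-and-excitation style channel attention block with an 8-neighbour local binary pattern (LBP) map of the kind that stays comparatively stable under illumination changes. This is not the authors' released code: the PyTorch framework, the module names, the reduction ratio of 16 and the 3x3 LBP neighbourhood are all assumptions made for the example.

# Minimal sketch (assumed PyTorch implementation, not the paper's code):
# a channel attention block plus an illumination-robust LBP feature map.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChannelAttention(nn.Module):
    """Re-weights feature channels with global pooling and a small bottleneck MLP."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: B x C x 1 x 1
        self.fc = nn.Sequential(                     # excite: per-channel weights in (0, 1)
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                 # scale each channel by its weight


def lbp_map(gray: torch.Tensor) -> torch.Tensor:
    """8-neighbour local binary pattern of a B x 1 x H x W grayscale tensor."""
    h, w = gray.shape[2], gray.shape[3]
    padded = F.pad(gray, (1, 1, 1, 1), mode="replicate")
    codes = torch.zeros_like(gray)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = padded[:, :, 1 + dy: 1 + dy + h, 1 + dx: 1 + dx + w]
        codes += (neighbour >= gray).float() * (2 ** bit)   # binary code per pixel
    return codes / 255.0                             # normalise the code to [0, 1]


if __name__ == "__main__":
    feats = torch.randn(2, 64, 32, 32)
    print(ChannelAttention(64)(feats).shape)         # torch.Size([2, 64, 32, 32])
    print(lbp_map(torch.rand(2, 1, 32, 32)).shape)   # torch.Size([2, 1, 32, 32])

In a design of this kind the LBP map would typically be fed to the network (or used in a loss term) as structural guidance, while the attention block sits after convolutional stages to emphasise informative channels; how the paper combines them is not specified here.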

Keywords:
computer vision, image enhancement, image quality, convolutional neural networks.

Citation:
Guo J. Convolutional neural network-based low light image enhancement method. Computer Optics 2024; 48(5): 745-752. DOI: 10.18287/2412-6179-CO-1415.

