
Neural network application for semantic segmentation of fundus
R.A. Paringer 1,2, A.V. Mukhin 1, N.Y. Ilyasova 1,2, N.S. Demin 1,2

1 Samara National Research University, 443086, Samara, Russia, Moskovskoye Shosse 34;
2 IPSI RAS – Branch of the FSRC "Crystallography and Photonics" RAS, 443001, Samara, Russia, Molodogvardeyskaya 151


DOI: 10.18287/2412-6179-CO-1010

Pages: 596-602.

Full text of the article: in Russian.

Abstract:
Advances in neural networks have revolutionized many areas, especially those related to image processing and analysis. Among the most challenging of these tasks is the analysis of biomedical data, owing to the limited number of samples, imbalanced classes, and low-quality labelling. In this paper, we examine the applicability of neural networks to the semantic segmentation of fundus images. Their applicability is evaluated by comparing the segmentation results with those obtained using textural features. The neural networks are found to be more accurate than the textural features in terms of both precision (by ~25%) and recall (by ~50%). Neural networks can thus be applied to biomedical image segmentation when combined with data balancing algorithms and data augmentation techniques.
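As a rough illustration of the kind of pipeline the abstract outlines, the sketch below assembles a small U-Net-style segmentation network in TensorFlow/Keras, trained with the Adam optimizer and a focal loss and preceded by simple geometric augmentation; U-Net, TensorFlow, focal loss, Adam and augmentation are all pointed to by the reference list [15, 25, 34-36]. This is a minimal sketch under assumed settings (patch size, channel widths, four fundus region classes), not the authors' implementation.

```python
# Minimal sketch, not the authors' code: a small U-Net-style segmentation
# model for fundus image patches in TensorFlow/Keras. Patch size, channel
# widths and NUM_CLASSES are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 4          # assumed number of fundus region classes
PATCH_SIZE = (128, 128)  # assumed input patch size

def conv_block(x, filters):
    """Two 3x3 convolutions with ReLU, as in the original U-Net [25]."""
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def build_unet(input_shape=PATCH_SIZE + (3,), num_classes=NUM_CLASSES):
    inputs = layers.Input(shape=input_shape)

    # Encoder: two downsampling stages, keeping skip connections.
    c1 = conv_block(inputs, 32)
    p1 = layers.MaxPooling2D()(c1)
    c2 = conv_block(p1, 64)
    p2 = layers.MaxPooling2D()(c2)

    # Bottleneck.
    b = conv_block(p2, 128)

    # Decoder: upsample and concatenate the matching encoder features.
    u2 = layers.Conv2DTranspose(64, 2, strides=2, padding="same")(b)
    c3 = conv_block(layers.concatenate([u2, c2]), 64)
    u1 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(c3)
    c4 = conv_block(layers.concatenate([u1, c1]), 32)

    # Per-pixel class probabilities.
    outputs = layers.Conv2D(num_classes, 1, activation="softmax")(c4)
    return models.Model(inputs, outputs)

# Simple geometric augmentation of the kind surveyed in [15],
# applied to training patches only.
augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal_and_vertical"),
    layers.RandomRotation(0.1),
])

model = build_unet()
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
    # Focal loss [35] down-weights easy pixels and helps with class
    # imbalance; requires TensorFlow/Keras >= 2.13, otherwise substitute
    # CategoricalCrossentropy or a custom focal loss.
    loss=tf.keras.losses.CategoricalFocalCrossentropy(),
    metrics=["accuracy"],
)
model.summary()
```

Per-pixel precision and recall, the measures used for the comparison with textural features, can then be computed from the trained model's predictions on held-out patches.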

Keywords:
convolution, neural network, convolutional network, segmentation, fundus.

Citation:
Paringer RA, Mukhin AV, Ilyasova NY, Demin NS. Neural network application for semantic segmentation of fundus. Computer Optics 2022; 46(4): 596-602. DOI: 10.18287/2412-6179-CO-1010.

Acknowledgements:
This work was funded by the Russian Foundation for Basic Research under RFBR grant No. 19-29-01135 and the Ministry of Science and Higher Education of the Russian Federation within a government project of Samara University and FSRC "Crystallography and Photonics" RAS.

References:

  1. Kermany DS, Goldbaum M, Cai W. Identifying medical diagnoses and treatable diseases by image-based deep learning. Cell 2018; 172(5): 1122-1131.
  2. Kozak I, Luttrull JK. Modern retinal laser therapy. Saudi J Ophthalmol 2015; 29(2): 137-146.
  3. Gafurov SD, Katakhonov ShM, Holmonov MM. Features of the use of lasers in medicine. European science 2019; 3(45): 92-95.
  4. Ilyasova NYu, Shirokanev AS, Kupriyanov AV, Paringer RA. Technology of intellectual feature selection for a system of automatic formation of a coagulate plan on retina. Computer Optics 2019; 43(2): 304-315. DOI: 10.18287/2412-6179-2019-43-2-304-315.
  5. Ilyasova NYu, Shirokanev AS, Kirsh DV, Demin NS, Zamytskiy EA, Paringer RA, Antonov AA. Identification of prognostic factors and predicting the therapeutic effect of laser photocoagulation for DME treatment. Electronics 2021; 10(12): 1420. DOI: 10.3390/electronics10121420.
  6. Li Z, Liu F, Yang W, Peng S, Zhou J. A survey of convolutional neural networks: analysis, applications, and prospects. IEEE Trans Neural Netw Learn Syst 2021: 1-21.
  7. Samek W, Montavon G, Lapuschkin S, Anders CJ, Müller KR. Explaining deep neural networks and beyond: A review of methods and applications. Proc IEEE 2021; 109(3): 247-278.
  8. Rawat W, Wang Z. Deep convolutional neural networks for image classification: A comprehensive review. Neural Comput 2017; 29(9): 2352-2449.
  9. Guo Y, Liu Y, Georgiou T, Lew MS. A review of semantic segmentation using deep neural networks. Int J Multimed Inf Retr 2018; 7(2): 87-93.
  10. Singh RK, Gorantla R. DMENet: diabetic macular edema diagnosis using hierarchical ensemble of CNNs. PLoS One 2020; 15(2): e0220677.
  11. Cao Z, Hidalgo G, Simon T, Wei SE, Sheikh Y. OpenPose: realtime multi-person 2D pose estimation using Part Affinity Fields. IEEE Trans Pattern Anal Mach Intell 2019; 43(1): 172-186.
  12. Apostolopoulos ID, Mpesiana TA. Covid-19: automatic detection from x-ray images utilizing transfer learning with convolutional neural networks. Phys Eng Sci Med 2020; 43(2): 635-640.
  13. Ismael SA, Mohammed A, Hefny H. An enhanced deep learning approach for brain cancer MRI images classification using residual networks. Artif Intell Med 2020; 102: 101779.
  14. Arellano AM, Dai W, Wang S, Jiang X, Ohno-Machado L. Privacy policy and technology in biomedical data science. Annu Rev Biomed Data Sci 2018; 1: 115-129.
  15. Shorten C, Khoshgoftaar TM. A survey on image data augmentation for deep learning. J Big Data 2019; 6(1): 60.
  16. Castro E, Cardoso JS, Pereira JC. Elastic deformations for data augmentation in breast cancer mass detection. 2018 IEEE EMBS Int Conf on Biomedical & Health Informatics (BHI) 2018: 230-234.
  17. Ishwaran H, O'Brien R. Commentary: the problem of class imbalance in biomedical data. J Thorac Cardiovasc Surg 2021; 161(6): 1940.
  18. Charte F, Rivera AJ, del Jesus MJ, Herrera F. MLSMOTE: Approaching imbalanced multilabel learning through synthetic instance generation. Knowl-Based Syst 2015; 89: 385-397.
  19. Pereira RM, Costa YM, Silla CN Jr. MLTL: A multi-label approach for the Tomek Link undersampling algorithm. Neurocomputing 2020; 383: 95-105.
  20. Hao D, Zhang L, Sumkin J, Mohamed A, Wu S. Inaccurate labels in weakly-supervised deep learning: Automatic identification and correction and their impact on classification performance. IEEE J Biomed Health Inform 2020; 24(9): 2701-2710.
  21. Tian C, Fang T, Fan Y, Wu W. Multi-path convolutional neural network in fundus segmentation of blood vessels. Biocybern Biomed Eng 2020; 40(2): 583-595.
  22. Kaur J, Mittal D. A generalized method for the segmentation of exudates from pathological retinal fundus images. Biocybern Biomed Eng 2018; 38(1): 27-53.
  23. Bhagat N, Grigorian RA, Tutela A, Zarbin MA. Diabetic macular edema: pathogenesis and treatment. Surv Ophthalmol 2009; 54(1): 1-32.
  24. Gabbasov R, Paringer R. Influence of the receptive field size on accuracy and performance of a convolutional neural network. 2020 Int Conf on Information Technology and Nanotechnology (ITNT) 2020: 1-4. DOI: 10.1109/ITNT49337.2020.9253219.
  25. Ronneberger O, Fischer P, Brox T. U-net: Convolutional networks for biomedical image segmentation. Int Conf on Medical Image Computing and Computer-Assisted Intervention 2015: 234-241.
  26. He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. Proc IEEE Conf on Computer Vision and Pattern Recognition 2016: 770-778.
  27. Krizhevsky A, Sutskever I, Hinton GE. Imagenet classification with deep convolutional neural networks. Adv Neural Inf Process Syst 2012; 25: 1097-1105.
  28. Iandola F, Moskewicz M, Karayev S, Girshick R, Darrell T, Keutzer K. DenseNet: Implementing efficient convnet descriptor pyramids. arXiv preprint 2014. Source: <https://arxiv.org/abs/1404.1869>.
  29. Chollet F. Xception: Deep learning with depthwise separable convolutions. Proc IEEE Conf on Computer Vision and Pattern Recognition 2017: 1251-1258.
  30. Ilyasova N, Paringer R, Kupriyanov A, Kirsh D. Intelligent feature selection technique for segmentation of fundus images. 2017 Seventh Int Conf on Innovative Computing Technology (INTECH) 2017: 138-143.
  31. MaZda Web Site. Source: <http://www.eletel.p.lodz.pl/programy/mazda/index.php>.
  32. Wu J, Poehlman S, Noseworthy MD, Kamath MV. Texture feature based automated seeded region growing in abdominal MRI segmentation. 2008 Int Conf on BioMedical Engineering and Informatics 2008; 27(2): 263-267.
  33. Mukhin A, Kilbas I, Paringer R, Ilyasova N. Application of the gradient descent for data balancing in diagnostic image analysis problems. 2020 Int Conf on Information Technology and Nanotechnology (ITNT) 2020: 1-4. DOI: 10.1109/ITNT49337.2020.9253278.
  34. TensorFlow. Source: <https://www.tensorflow.org>.
  35. Lin TY, Goyal P, Girshick R, He K, Dollár P. Focal loss for dense object detection. Proc IEEE Int Conf on Computer Vision 2017: 2980-2988.
  36. Kingma DP, Ba J. Adam: A method for stochastic optimization. arXiv preprint 2014. Source: <https://arxiv.org/abs/1412.6980>.
  37. Tang H, Maitre H, Boujemaa N, Jiang W. On the relevance of linear discriminative features. Inf Sci 2010; 180(18): 3422-3433.
  38. Stone M. Cross-validatory choice and assessment of statistical predictions. J R Stat Soc Series B (Methodol) 1974; 36(2): 111-133.
