Deep Learning Approach for Layer-Specific Segmentation of the Olfactory Bulb in X-ray Phase-Contrast Tomography
V.A. Karyakina 1, D.V. Polevoy 1,2,3, I. Bukreeva 4, O. Junemann 4, S.V. Saveliev 5, M.V. Chukalina 1,3
1 Smart Engines Service LLC,
117312, Russia, Moscow, 60-Letiya Oktyabrya pr. 9;
2 Institute for Information Transmission Problems, RAS,
127051, Russia, Moscow, Bolshoy Karetny per. 19, build. 1;
3 Federal Research Center "Computer Science and Control", RAS,
119333, Russia, Moscow, Vavilova Str. 44, build. 2;
4 Institute of Nanotechnology CNR, Rome unit,
00185, Italy, Rome, Piazzale Aldo Moro 5;
5 Avtsyn Research Institute of Human Morphology of Federal State Budgetary Scientific Institution,
"Petrovsky National Research Centre of Surgery",
117418, Russia, Moscow, Tsyurupy Str. 3
DOI: 10.18287/COJ1763
Pages: 1102-1111.
Article language: English.
Abstract:
This paper addresses neural network segmentation of a human olfactory bulb sample in X-ray phase-contrast tomographic reconstructions. The olfactory bulb plays a key role in the primary processing of olfactory information. It consists of several nested cell layers whose morphometric analysis has important diagnostic value. However, manual segmentation of the reconstructed volume is labor-intensive and requires high qualifications, which makes the development of automated segmentation methods crucial. X-ray phase-contrast tomography provides a high-resolution reconstruction of the olfactory bulb's morphological structure. The resulting reconstructions are characterized by a high density of morphological detail and by reconstruction artifacts. These features, combined with the limited data volume, the visual similarity of neighboring slices, and sparse ground truth, hinder the application of standard neural network-based segmentation approaches. This paper examines the characteristics of the data under consideration and proposes a training pipeline for a convolutional neural network, including inter-slice smoothing at the data preprocessing stage, alternative strategies for splitting the data into subsets, a set of augmentations, and training on sparse sampling. The proposed adaptations achieved a Dice score (micro) of 0.93 on the test subset. An ablation study demonstrated that each of the above-mentioned modifications independently improves segmentation quality. The presented training pipeline can be applied to the segmentation of morphological structures in tomographic images in biomedical tasks with a limited dataset and non-standard ground truth.
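The micro-averaged Dice score reported above pools true positives, false positives, and false negatives over all layer classes before computing the score, rather than averaging per-class scores. A minimal sketch of this metric for multi-class label maps (the function name and the use of NumPy here are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def dice_micro(pred: np.ndarray, gt: np.ndarray, num_classes: int) -> float:
    """Micro-averaged Dice over all classes of two integer label maps.

    TP/FP/FN counts are accumulated across classes first, then a single
    Dice score is computed from the pooled counts.
    """
    tp = fp = fn = 0
    for c in range(num_classes):
        p = (pred == c)
        g = (gt == c)
        tp += np.logical_and(p, g).sum()
        fp += np.logical_and(p, ~g).sum()
        fn += np.logical_and(~p, g).sum()
    return 2 * tp / (2 * tp + fp + fn)

# Toy example: 4 voxels, 3 classes, one voxel mislabeled.
pred = np.array([0, 1, 1, 2])
gt = np.array([0, 1, 2, 2])
print(dice_micro(pred, gt, num_classes=3))  # 0.75
```

Note that for mutually exclusive hard labels, micro-averaged Dice coincides with micro-averaged F1; pooling the counts weights each voxel equally, so large layers dominate the score, unlike macro averaging.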
Keywords:
olfactory bulb, deep learning, convolutional neural network, semantic segmentation, data curation, X-ray phase-contrast tomography.
Citation:
Karyakina VA, Polevoy DV, Bukreeva I, Junemann O, Saveliev SV, Chukalina MV. Deep Learning Approach for Layer-Specific Segmentation of the Olfactory Bulb in X-ray Phase-Contrast Tomography. Computer Optics 2025; 49(6): 1102-1111. DOI: 10.18287/COJ1763.
References:
- Ruan Y, Zheng X-Y, Zhang H-L, Zhu W, Zhu J. Olfactory dysfunctions in neurodegenerative disorders. Journal of Neuroscience Research. 2012; 90(9): 1693–1700. DOI: 10.1002/jnr.23054.
- Stoyanov G, Petkova L, Dzhenkov D, Sapundzhiev N, Todorov I. Gross and histopathology of COVID-19 with first histology report of olfactory bulb changes. Cureus. 2020; 12: e11912. DOI: 10.7759/cureus.11912.
- Xu Y, Quan R, Xu W, Huang Y, Chen X, Liu F. Advances in medical image segmentation: A comprehensive review of traditional, deep learning and hybrid approaches. Bioengineering. 2024; 11(10): 1034. DOI: 10.3390/bioengineering11101034.
- Reza A, Aghdam EK, Rauland A, Jia Y, Avval AH, Bozorgpour A, Karimijafarbigloo S, Cohen JP, Adeli E, Merhof D. Medical image segmentation review: The success of U-Net. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2024; 46(12): 10076-10095. DOI: 10.1109/TPAMI.2024.3435571.
- Ronneberger O, Fischer P, Brox T. U-Net: Convolutional networks for biomedical image segmentation. 2015 Medical Image Computing and Computer-Assisted Intervention (MICCAI). Springer. 2015; 234–241. DOI: 10.1007/978-3-319-24574-4_28.
- Paringer R, Mukhin A, Ilyasova N, Demin N. Neural networks application for semantic segmentation of fundus. Computer Optics. 2022; 46(4): 596–602. DOI: 10.18287/2412-6179-CO-1010.
- Isensee F, Jaeger PF, Kohl SA, Petersen J, Maier-Hein KH. nnU-Net: A self-configuring method for deep learning-based biomedical image segmentation. Nature Methods. 2021; 18(2): 203–211. DOI: 10.1038/s41592-020-01008-z.
- Isensee F, Wald T, Ulrich K, Baumgartner M, Roy S, Maier-Hein K, Jaeger PF. nnU-Net revisited: A call for rigorous validation in 3D medical image segmentation. 2024 International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer. 2024; 488–498. DOI: 10.1007/978-3-031-72114-4_47.
- Desser D, Assunção F, Yan X, Alves V, Fernandes FM, Hummel T. Automatic segmentation of the olfactory bulb. Brain Sciences. 2021; 11(9): 1141. DOI: 10.3390/brainsci11091141.
- Noothout J, Postma E, Boesveldt S, de Vos B, Smeets P, Išgum I. Automatic segmentation of the olfactory bulbs in MRI. 2021 Medical Imaging: Image Processing. 2021. DOI: 10.1117/12.2580354.
- Postma E, Noothout J, Boek W, Joshi A, Herrmann T, Hummel T, Smeets P, Išgum I, Boesveldt S. The potential for clinical application of automatic quantification of olfactory bulb volume in MRI scans using convolutional neural networks. NeuroImage: Clinical. 2023; 38: 103411. DOI: 10.1016/j.nicl.2023.103411.
- Liu X, Li A, Luo Y, Bao S, Jiang T, Li X, Yuan J, Feng Z. An interactive image segmentation method for the anatomical structures of the main olfactory bulb with micro-level resolution. Frontiers in Neuroinformatics. 2023; 17:1276891. DOI: 10.3389/fninf.2023.1276891.
- Meshkov A, Khafizov A, Buzmakov A, Bukreeva I, Junemann O, Fratini M, Cedola A, Chukalina M, Yamaev A, Gigli G, Wilde F, Longo E, Asadchikov V, Saveliev S, Nikolaev D. Deep learning-based segmentation of post-mortem human’s olfactory bulb structures in X-ray phase-contrast tomography. Tomography. 2022; 8(4): 1854–1868. DOI: 10.3390/tomography8040156.
- Snigirev A, Snigireva I, Kohn V, Kuznetsov S, Schelokov I. On the possibilities of X-ray phase contrast microimaging by coherent high-energy synchrotron radiation. Review of Scientific Instruments. 1995; 66(12): 5486–5492. DOI: 10.1063/1.1146073.
- Arlazarov V, Nikolaev D, Arlazarov V, Chukalina M. X-ray tomography: The way from layer-by-layer radiography to computed tomography. Computer Optics. 2021; 45: 897–906. DOI: 10.18287/2412-6179-CO-898.
- Vedo – a module for scientific analysis and visualization of 3D objects. Source: <https://pypi.org/project/vedo/2024.5.1/>.
- Dice LR. Measures of the amount of ecologic association between species. Ecology. 1945; 26(3): 297–302.
- Takahashi K, Yamamoto K, Kuchiba A, Koyama T. Confidence interval for micro-averaged F1 and macro-averaged F1 scores. Applied Intelligence. 2022; 52(3): 4961–4972. DOI: 10.1007/s10489-021-02635-5.
- He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2016; 770–778. DOI: 10.1109/CVPR.2016.90.
- Lin T-Y, Goyal P, Girshick R, He K, Dollár P. Focal loss for dense object detection. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2020; 42(2): 318–327. DOI: 10.1109/TPAMI.2018.2858826.
- Gayer A, Sheshkus A, Chernisheva Y. Augmentation of a training sample on the fly for training neural networks [In Russian]. Trudy ISA RAN. 2018; 68: 150–157. DOI: 10.14357/20790279180517.
- Sheshkus A, Nikolaev D, Arlazarov VL. HoughEncoder: Neural network architecture for document image semantic segmentation. 2020 IEEE International Conference on Image Processing (ICIP). 2020: 1946–1950. DOI: 10.1109/ICIP40778.2020.9191182.
- Limonova E, Matveev D, Nikolaev D, Arlazarov V. Bipolar morphological neural networks: Convolution without multiplication. 2019 International Conference on Machine Vision (ICMV). 2019. DOI: 10.1117/12.2559299.
- Limonova E. Fast and gate-efficient approximated activations for bipolar morphological neural networks. Journal of Information Technologies and Computing Systems (JITCS). 2022; 2: 3–10. DOI: 10.14357/20718632220201.
© 2009, IPSI RAS
Russia, 443001, Samara, Molodogvardeyskaya Str. 151; e-mail: journal@computeroptics.ru; tel: +7 (846) 242-41-24 (executive secretary), +7 (846) 332-56-22 (technical editor), fax: +7 (846) 332-56-20