Transfer learning methods for glioma C6 cell segmentation
D.A. Ilyukhin 1, V.O. Yachnaya 1,2, R.O. Malashin 1,2, M.K. Ermachenkova 1,2, A.V. Volkov 3,4, S.G. Pashkevich 4, A.A. Denisov 3,4
1 Pavlov Institute of Physiology, Russian Academy of Sciences,
Naberezhnaya Makarova 6, Saint Petersburg, 199034, Russia;
2 Saint Petersburg State University of Aerospace Instrumentation,
Bolshaya Morskaya 67, Saint Petersburg, 190000, Russia;
3 Institute of Physiology, National Academy of Sciences of Belarus,
Academicheskaya 28, Minsk, 220072, Belarus;
4 Belarusian State University,
Prospekt Nezavisimosti 4, Minsk, 220030, Belarus
DOI: 10.18287/2412-6179-CO-1609
Pages: 794-804.
Full text of article: Russian language.
Abstract:
In this paper, we present an algorithm for binary segmentation of glioma C6 cells using deep learning methods to simplify and speed up the analysis of the growth of this culture. A first-of-its-kind dataset of 30 microscopic phase-contrast images of glioma C6 cells was collected to design and test the algorithm. We explore the influence of the encoder architecture of the neural network segmenter on the accuracy of glioma cell segmentation. Since the collected dataset contains a relatively small number of images, we employ transfer learning from the LIVECell dataset of microscopic images and from the large ImageNet dataset of non-specialized images. Experiments show that pre-training the neural network on LIVECell provides a significant advantage in low-resolution glioma cell recognition, whereas encoders trained on ImageNet provide better results at higher resolution. The paper proposes ways to improve the generalization ability of LIVECell weights at high resolution by applying augmentation. We demonstrate that different starting weights yield different generalization properties beyond the training set, which can be useful when detecting, or excluding from consideration, other cells in an image.
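Segmentation accuracy in studies like this is conventionally reported with overlap metrics between the predicted and ground-truth binary masks. As a minimal illustrative sketch (not code from the paper), the two most common metrics, intersection-over-union (IoU) and the Dice coefficient, can be computed for small binary masks as follows:

```python
# Illustrative sketch: overlap metrics for binary segmentation masks.
# Masks are nested lists of 0/1; real pipelines would use array libraries.

def iou(pred, gt):
    """Intersection-over-union: |A ∩ B| / |A ∪ B|."""
    inter = sum(p & g for pr, gr in zip(pred, gt) for p, g in zip(pr, gr))
    union = sum(p | g for pr, gr in zip(pred, gt) for p, g in zip(pr, gr))
    return inter / union if union else 1.0

def dice(pred, gt):
    """Dice coefficient: 2 |A ∩ B| / (|A| + |B|)."""
    inter = sum(p & g for pr, gr in zip(pred, gt) for p, g in zip(pr, gr))
    total = (sum(p for row in pred for p in row)
             + sum(g for row in gt for g in row))
    return 2 * inter / total if total else 1.0

pred = [[0, 1, 1],
        [0, 1, 0]]   # 3 foreground pixels
gt   = [[0, 1, 1],
        [1, 1, 0]]   # 4 foreground pixels, 3 of them shared with pred
print(iou(pred, gt))   # 3 / 4 = 0.75
print(dice(pred, gt))  # 2*3 / (3 + 4) ≈ 0.857
```

Dice weights the overlap more generously than IoU (Dice ≥ IoU for any pair of masks), which is why both are often reported side by side for cell segmentation.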
Keywords:
binary segmentation, microscopic images, computer vision, U-Net, neural networks, glioma C6.
Citation:
Ilyukhin DA, Yachnaya VO, Malashin RO, Ermachenkova MK, Volkov AV, Pashkevich SG, Denisov AA. Transfer learning methods for glioma C6 cell segmentation. Computer Optics 2025; 49(5): 794-804. DOI: 10.18287/2412-6179-CO-1609.
Acknowledgements:
The work was partly funded under a grant from the St. Petersburg Science Foundation and a government project of Pavlov Institute of Physiology, Russian Academy of Sciences (No. 1021062411653-4-3.1.8), and financially supported by BRFFR (SCST) under the grant "Detection of tumor cells in nervous tissue using deep learning methods" (contract No. M24SPbG-010).
References:
- Giakoumettis D, Kritis A, Foroglou N. C6 cell line: the gold standard in glioma research. Hippokratia 2018; 22(3): 105-112.
- Ronneberger O, Fischer P, Brox T. U-Net: Convolutional networks for biomedical image segmentation. In Book: Navab N, Hornegger J, Wells WM, Frangi AF, eds. Medical image computing and computer-assisted intervention – MICCAI 2015. 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III. New York: Springer International Publishing Switzerland; 2015: 234-241. DOI: 10.1007/978-3-319-24574-4_28.
- 2018 Data Science Bowl. 2018. Source: <https://www.kaggle.com/c/data-science-bowl-2018>.
- Stringer C, Wang T, Michaelos M, Pachitariu M. Cellpose: a generalist algorithm for cellular segmentation. Nat Methods 2021; 18: 100-106. DOI: 10.1038/s41592-020-01018-x.
- Cutler KJ, Stringer C, Lo TW, et al. Omnipose: a high-precision morphology-independent solution for bacterial cell segmentation. Nat Methods 2022; 19: 1438-1448. DOI: 10.1038/s41592-022-01639-4.
- Edlund C, Jackson TR, Khalid N, et al. LIVECell – A large-scale dataset for label-free live cell segmentation. Nat Methods 2021; 18: 1038-1045. DOI: 10.1038/s41592-021-01249-6.
- Pachitariu M, Stringer C. Cellpose 2.0: how to train your own model. Nat Methods 2022; 19: 1634-1641. DOI: 10.1038/s41592-022-01663-4.
- Li G, Xiao Z. Transfer learning-based neuronal cell instance segmentation with pointwise attentive path fusion. IEEE Access 2022; 10: 54794-54804. DOI: 10.1109/ACCESS.2022.3176956.
- Guo F. Segmenting neuronal cells in microscopic images using cascade mask R-CNN. 2022 2nd Int Conf on Medical Imaging, Sanitation and Biological Pharmacy (MISBP 2022) 2022: 1-7.
- Huynh HN, Ha QT, Tran AT, Tran HDT, Bui PA, Tran TN. Brain cell segmentation and detection from the LIVECell dataset using deep learning with the EfficientDet model. 2022 IEEE 4th Eurasia Conf on Biomedical Engineering, Healthcare and Sustainability (ECBIOS) 2022: 12-15. DOI: 10.1109/ECBIOS54627.2022.9945050.
- Zhou Y, Li W, Yang G. SCTS: Instance segmentation of single cells using a transformer-based semantic-aware model and space-filling augmentation. 2023 IEEE/CVF Winter Conf on Applications of Computer Vision (WACV) 2023: 5944-5953. DOI: 10.1109/WACV56688.2023.00589.
- Liao W, Li X, Wang Q, Xu Y, Yin Z, Xiong H. CUPre: Cross-domain unsupervised pre-training for few-shot cell segmentation. arXiv Preprint. 2023. Source: <https://arxiv.org/abs/2310.03981>. DOI: 10.48550/arXiv.2310.03981.
- Pan F, Wang F. DCSN: A flexible and efficient lightweight network for dense cell segmentation. 2023 IEEE Int Conf on Medical Artificial Intelligence (MedAI) 2023: 334-343. DOI: 10.1109/MedAI59581.2023.00052.
- Liu Y, Wang C, Wen Y, Huo Y, Liu J. Efficient segmentation algorithm for complex cellular image analysis system. IET Control Theory Appl 2023; 17: 2268-2279. DOI: 10.1049/cth2.12466.
- Khalid N, et al. DeepCeNS: An end-to-end pipeline for cell and nucleus segmentation in microscopic images. 2021 Int Joint Conf on Neural Networks (IJCNN) 2021: 1-8. DOI: 10.1109/IJCNN52387.2021.9533624.
- Khalid N, Koochali M, Rajashekhar V, et al. DeepMuCS: A framework for mono- & co-culture microscopic image analysis: From generation to segmentation. TechRxiv. 2022. Source: <https://d197for5662m48.cloudfront.net/documents/publicationstatus/163785/preprint_pdf/5e0aa768c6c9b2f35e11c54e8de4f71c.pdf>. DOI: 10.36227/techrxiv.19181552.v1.
- Khalid N, et al. DeepMuCS: A framework for co-culture microscopic image analysis: From generation to segmentation. 2022 IEEE-EMBS Int Conf on Biomedical and Health Informatics (BHI) 2022: 1-4. DOI: 10.1109/BHI56158.2022.9926936.
- Schilling MP, et al. AI2Seg: A method and tool for AI-based annotation inspection of biomedical instance segmentation datasets. 2023 45th Annual Int Conf of the IEEE Engineering in Medicine & Biology Society (EMBC) 2023: 1-6. DOI: 10.1109/EMBC40787.2023.10341074.
- Ying Z, Li G, Ren Y, Wang R, Wang W. A new image contrast enhancement algorithm using exposure fusion framework. In Book: Felsberg M, Heyden A, Krüger N, eds. Computer analysis of images and patterns. 17th International Conference, CAIP 2017, Ystad, Sweden, August 22-24, 2017, Proceedings, Part II. 2017; 36-46. DOI: 10.1007/978-3-319-64698-5_4.
- Tan M, Pang R, Le QV. EfficientDet: Scalable and efficient object detection. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2020: 1-10. DOI: 10.1109/CVPR42600.2020.01079.
- Liu Z, et al. Swin transformer: Hierarchical vision transformer using shifted windows. arXiv Preprint. 2021. Source: <https://arxiv.org/abs/2103.14030>. DOI: 10.48550/arXiv.2103.14030.
- Schwendy M, Unger RE, Parekh SH. EVICAN – A balanced dataset for algorithm development in cell and nucleus segmentation. Bioinformatics 2020; 36(12): 3863-3870. DOI: 10.1093/bioinformatics/btaa225.
- Lin T-Y, Dollar P, Girshick R, He K, Hariharan B, Belongie S. Feature pyramid networks for object detection. 2017 IEEE Conf on Computer Vision and Pattern Recognition (CVPR) 2017: 936-944. DOI: 10.1109/CVPR.2017.106.
- Zhang H, et al. ResNeSt: Split-attention networks. 2022 IEEE/CVF Conf on Computer Vision and Pattern Recognition Workshops (CVPRW) 2022: 2736-2746. DOI: 10.1109/CVPRW56347.2022.00309.
- Cai Z, Vasconcelos N. Cascade R-CNN: High quality object detection and instance segmentation. IEEE Trans Pattern Anal Mach Intell 2021; 43(5): 1483-1498. DOI: 10.1109/TPAMI.2019.2956516.
- Yi J, Wu P, Hoeppner DJ, Metaxas D. Pixel-wise neural cell instance segmentation. 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018) 2018: 373-377. DOI: 10.1109/ISBI.2018.8363596.
- Farhan R. Residual network with attention to neural cells segmentation. Iraqi J Sci 2023; 64(4): 2023-2036. DOI: 10.24996/ijs.2023.64.4.37.
- Panigrahi S, Murat D, Gall AL, et al. Misic, a general deep learning-based method for the high-throughput cell segmentation of complex bacterial communities. Elife 2021; 10: e65151. DOI: 10.7554/eLife.65151.
- Piotrowski T, Rippel O, Elanzew A, et al. Deep-learning-based multi-class segmentation for automated, non-invasive routine assessment of human pluripotent stem cell culture status. Comput Biol Med 2021; 129: 104172. DOI: 10.1016/j.compbiomed.2020.104172.
- Long F. Microscopy cell nuclei segmentation with enhanced U-Net. BMC Bioinformatics 2020; 21(8): 8. DOI: 10.1186/s12859-019-3332-1.
- Sharan TS, Tripathi S, Sharma S, Sharma N. Encoder modified U-Net and feature pyramid network for multi-class segmentation of cardiac magnetic resonance images. IETE Tech Rev 2021; 39(5): 1092-1104. DOI: 10.1080/02564602.2021.
- Chen J, et al. TransUNet: Transformers make strong encoders for medical image segmentation. arXiv Preprint. 2021. Source: <https://arxiv.org/abs/2102.04306>. DOI: 10.48550/arXiv.2102.04306.
- Qiu P, et al. AgileFormer: Spatially agile transformer UNet for medical image segmentation. arXiv Preprint. 2024. Source: <https://arxiv.org/abs/2404.00122>. DOI: 10.48550/arXiv.2404.00122.
- He K, et al. Deep residual learning for image recognition. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2016: 770-778. DOI: 10.1109/CVPR.2016.90.
- Atliha V, Šešok D. Comparison of VGG and ResNet used as encoders for image captioning. 2020 IEEE Open Conf of Electrical, Electronic and Information Sciences (eStream) 2020: 1-4. DOI: 10.1109/eStream50540.2020.9108880.
- Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. arXiv Preprint. 2014. Source: <https://arxiv.org/abs/1409.1556>. DOI: 10.48550/arXiv.1409.1556.
- Iglovikov V, Shvets A. Ternausnet: U-net with vgg11 encoder pre-trained on imagenet for image segmentation. arXiv Preprint. 2018. Source: <https://arxiv.org/abs/1801.05746>. DOI: 10.48550/arXiv.1801.05746.
- Computer Vision Annotation Tool (CVAT). 2025. Source: <https://github.com/opencv/cvat>.
- COCO: Common Objects in Context. Data format. 2025. Source: <https://cocodataset.org/#format-data>.
- Volkov AV, Demina YD, Ksenevich YV, Yachnaya VO, Ermachenkova MK, Ilyukhin DA. Preparation of a database for training an artificial neural network for tumour cell segmentation of neural tissue [In Russian]. 21st International Scientific Conference "Youth in Science 2024" [Molodezh v Nauke 2024], 29-31 October 2024, Minsk, Belarus.
- LIVECell dataset. Train sample. 2025. Source: <https://LIVECell-dataset.s3.eu-central-1.amazonaws.com/LIVECell_dataset_2021/annotations/LIVECell/LIVECell_coco_train.json>.
- LIVECell dataset. Validation sample. 2025. Source: <https://LIVECell-dataset.s3.eu-central-1.amazonaws.com/LIVECell_dataset_2021/annotations/LIVECell/LIVECell_coco_val.json>.
- LIVECell dataset. Test sample. 2025. Source: <https://LIVECell-dataset.s3.eu-central-1.amazonaws.com/LIVECell_dataset_2021/annotations/LIVECell/LIVECell_coco_test.json>.
- Huang G, et al. Densely connected convolutional networks. 2017 IEEE Conf on Computer Vision and Pattern Recognition (CVPR) 2017: 4700-4708. DOI: 10.1109/CVPR.2017.243.
- Chen Y, et al. Dual path networks. 31st Conference on Neural Information Processing Systems (NIPS 2017) 2017: 30.
- Tan M, Le Q. Efficientnet: Rethinking model scaling for convolutional neural networks. 36th Int Conf on Machine Learning (ICML 2019) 2019: 6105-6114.
- Szegedy C, et al. Inception-v4, Inception-ResNet and the impact of residual connections on learning. AAAI'17: Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence 2017: 4278-4284.
- Xie E, et al. SegFormer: Simple and efficient design for semantic segmentation with transformers. NIPS'21: Proceedings of the 35th Int Conf on Neural Information Processing Systems 2021: 12077-12090.
- Sandler M, et al. MobileNetV2: Inverted residuals and linear bottlenecks. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition 2018: 4510-4520. DOI: 10.1109/CVPR.2018.00474.
- Xie S, et al. Aggregated residual transformations for deep neural networks. 2017 IEEE Conf on Computer Vision and Pattern Recognition (CVPR) 2017: 1492-1500. DOI: 10.1109/CVPR.2017.634.
- Hu J, Shen L, Sun G. Squeeze-and-excitation networks. 2018 IEEE/CVF Conf on Computer Vision and Pattern Recognition 2018: 7132-7141. DOI: 10.1109/CVPR.2018.00745.
- Lin M, et al. Neural architecture design for gpu-efficient networks. arXiv Preprint. 2020. Source: <https://arxiv.org/abs/2006.14090>. DOI: 10.48550/arXiv.2006.14090.
- Radosavovic I, et al. Designing network design spaces. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2020: 10428-10436. DOI: 10.1109/CVPR42600.2020.01044.
- Gao SH, et al. Res2Net: A new multi-scale backbone architecture. IEEE Trans Pattern Anal Mach Intell 2019; 43(2): 652-662. DOI: 10.1109/TPAMI.2019.293875.
- Chollet F. Xception: Deep learning with depthwise separable convolutions. 2017 IEEE Conf on Computer Vision and Pattern Recognition (CVPR) 2017: 1251-1258. DOI: 10.1109/CVPR.2017.195.
- Hendrycks D, Mu N, Cubuk ED, Zoph B, Gilmer J, Lakshminarayanan B. AugMix: A simple data processing method to improve robustness and uncertainty. arXiv Preprint. 2019. Source: <https://arxiv.org/abs/1912.02781>. DOI: 10.48550/arXiv.1912.02781.
- Cubuk ED, Zoph B, Shlens J, Le QV. Randaugment: Practical automated data augmentation with a reduced search space. 2020 IEEE/CVF Conf on Computer Vision and Pattern Recognition Workshops (CVPRW) 2020: 3008-3017. DOI: 10.1109/CVPRW50498.2020.00359.
© 2009, IPSI RAS
151, Molodogvardeiskaya str., Samara, 443001, Russia; E-mail: journal@computeroptics.ru ; Tel: +7 (846) 242-41-24 (Executive secretary), +7 (846) 332-56-22 (Issuing editor), Fax: +7 (846) 332-56-20