
Improving generalization in classification of novel bacterial strains: a multi-headed ResNet approach for microscopic image classification
V.O. Yachnaya 1,2, M.A. Mikhalkova 1, R.O. Malashin 1,2, V.R. Lutsiv 2, L.A. Kraeva 3,4, G.N. Khamdulayeva 3, V.E. Nazarov 5, V.P. Chelibanov 6

1 Pavlov Institute of Physiology, Russian Academy of Sciences,
199034, Saint Petersburg, Russia, Naberezhnaya Makarova 6;
2 Saint Petersburg State University of Aerospace Instrumentation,
190000, Saint Petersburg, Russia, Bolshaya Morskaya 67;
3 Saint Petersburg Pasteur Institute,
197101, Saint Petersburg, Russia, Mira street 14;
4 Military Medical Academy named after S.M. Kirov,
194044, Saint Petersburg, Russia;
5 North-Western State Medical University named after I.I. Mechnikov,
191015, Saint Petersburg, Russia, Kirochnaya street 41;
6 ITMO University, 197101, Saint Petersburg, Russia, Kronverksky prospekt 49


DOI: 10.18287/2412-6179-CO-1464

Pages: 772-781.

Full text of article: English language.

Abstract:
The purpose of this work is to design a system for the classification of microscopic bacterial images that generalizes to new data. In the course of the work, a dataset containing 23 bacterial species was collected. We use a strain-wise method for dividing the dataset into training and test sets. Such splitting (in contrast to random division) makes it possible to evaluate the performance of classifiers on new strains in the presence of intra-species visual variability of bacteria. We propose a "Multi-headed" ResNet (ResNet-MH) for the analysis of microscopic images of bacterial colonies. During training, this approach forces the neural network to analyze features at different resolutions, such as the shape of individual bacterial cells as well as the shape and number of bacterial clusters. Our network achieves 41.6% species-wise and 64.06% genus-wise accuracy. The proposed method of dataset splitting ensures that the measured performance reflects generalization to new, unseen strains, whereas random splitting into training and test sets leads to overfitting of the system (accuracy over 90%). For the 10 species that are visually stable across strains, the species-wise accuracy of the proposed system reaches 83.6%.
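The abstract describes the multi-head idea only at a high level; below is a minimal sketch of one way such an arrangement could be built, assuming a torchvision ResNet-18 backbone with one auxiliary classification head per residual stage and an equally weighted sum of per-head cross-entropy losses. The names MultiHeadResNet and multi_head_loss, the head placement, and the loss weighting are illustrative assumptions, not the authors' ResNet-MH definition.

```python
# Sketch only: auxiliary heads on intermediate ResNet stages so the loss
# also depends on lower-level (cell-shape) and higher-level (cluster-level)
# feature maps. Not the authors' published ResNet-MH architecture.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class MultiHeadResNet(nn.Module):
    def __init__(self, num_classes: int = 23):
        super().__init__()
        backbone = resnet18(weights=None)
        # Stem plus the four residual stages of ResNet-18.
        self.stem = nn.Sequential(backbone.conv1, backbone.bn1,
                                  backbone.relu, backbone.maxpool)
        self.stages = nn.ModuleList([backbone.layer1, backbone.layer2,
                                     backbone.layer3, backbone.layer4])
        self.pool = nn.AdaptiveAvgPool2d(1)
        # One classification head per stage; channel widths follow ResNet-18.
        self.heads = nn.ModuleList(
            nn.Linear(c, num_classes) for c in (64, 128, 256, 512))

    def forward(self, x):
        x = self.stem(x)
        logits = []
        for stage, head in zip(self.stages, self.heads):
            x = stage(x)
            logits.append(head(torch.flatten(self.pool(x), 1)))
        # One logit vector per head; all heads are supervised during training.
        return logits

def multi_head_loss(outputs, target):
    # Equal weighting of per-head cross-entropies (an assumption here).
    ce = nn.CrossEntropyLoss()
    return sum(ce(o, target) for o in outputs)
```

A strain-wise split in the same spirit can be obtained by grouping images by strain identifier, e.g. with scikit-learn's GroupShuffleSplit using the strain ID as the group key, so that no strain appears in both the training and test sets.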

Keywords:
bacteria classification, image classification, deep neural network, dataset splitting, multi-head model, microscopic images.

Citation:
Yachnaya VO, Mikhalkova MA, Malashin RO, Lutsiv VR, Kraeva LA, Khamdulayeva GN, Nazarov VE, Chelibanov VP. Improving generalization in classification of novel bacterial strains: a multi-headed ResNet approach for microscopic image classification. Computer Optics 2024; 48(5): 772-781. DOI: 10.18287/2412-6179-CO-1464.

Acknowledgements:
This research was funded by the Ministry of Science and Higher Education of the Russian Federation under agreement № 075-15-2022-303, supporting the development of the world-class research center "Pavlov Center for Integrative Physiology for Medicine, High-tech Healthcare, and Stress Tolerance Technologies".

References:

  1. De Bruyne K, Slabbinck B, Waegeman W, Vauterin P, De Baets B, Vandamme P. Bacterial species identification from MALDI-TOF mass spectra through data analysis and machine learning. Syst Appl Microbiol 2011; 34(1): 20-29. DOI: 10.1016/j.syapm.2010.11.003.
  2. Ho CS, et al. Rapid identification of pathogenic bacteria using Raman spectroscopy and deep learning. Nat Commun 2019; 10(1): 1-33. DOI: 10.1038/s41467-019-12898-9.
  3. Sajedi H, Mohammadipanah F, Pashaei A. Image-processing based taxonomy analysis of bacterial macromorphology using machine-learning models. Multimed Tools Appl 2020; 79(43): 32711-32730. DOI: 10.1007/s11042-020-09284-9.
  4. García-Soriano DA, Andersen FD, Nygaard JV, Tørring T. ColFeatures: Automated data extraction and classification of bacterial colonies. bioRxiv Preprint. 2021. Source: <https://www.biorxiv.org/content/10.1101/2021.05.27.445853v1>. DOI: 10.1101/2021.05.27.445853.
  5. Doshi A, et al. A deep learning pipeline for segmentation of Proteus mirabilis colony patterns. IEEE 19th Int Symp on Biomedical Imaging (ISBI) 2022: 1-5. DOI: 10.1109/ISBI52829.2022.9761643.
  6. Bär J, Boumasmoud M, Kouyos RD, Zinkernagel AS, Vulin C. Efficient microbial colony growth dynamics quantification with ColTapp, an automated image analysis application. Sci Rep 2020; 10(1): 1-15. DOI: 10.1038/s41598-020-72979-4.
  7. Zieliński B, Plichta A, Misztal K, Spurek P, Brzychczy-Włoch M, Ochońska D. Deep learning approach to bacterial colony classification. PLoS ONE 2017; 12(9): e0184554. DOI: 10.1371/journal.pone.0184554.
  8. DIBaS dataset. 2017. Source: <http://misztal.edu.pl/software/databases/dibas/>.
  9. Mohamed BA, Afify HM. Automated classification of bacterial images extracted from digital microscope via bag of words model. 9th Cairo Int Biomedical Engineering Conf (CIBEC) 2018: 86-89. DOI: 10.1109/CIBEC.2018.8641799.
  10. Patel S. Bacterial colony classification using atrous convolution with transfer learning. Ann Rom Soc Cell Biol 2021; 25(4): 1428-1441. Source: <https://www.annalsofrscb.ro/index.php/journal/article/view/2650>.
  11. Mai D-T, Ishibashi K. Small-scale depthwise separable convolutional neural networks for bacteria classification. Electronics 2021; 10(23): 3005. DOI: 10.3390/electronics10233005.
  12. Song Y, et al. Segmentation, splitting, and classification of overlapping bacteria in microscope images for automatic bacterial vaginosis diagnosis. IEEE J Biomed Health Inform 2017; 21(4): 1095-1104. DOI: 10.1109/JBHI.2016.2594239.
  13. Tamiev D, Furman PE, Reuel NF. Automated classification of bacterial cell sub-populations with convolutional neural networks. PLoS ONE 2020; 15: e0241200. DOI: 10.1371/journal.pone.0241200.
  14. Kang R, Park B, Eady M, Ouyang Q, Chen K. Single-cell classification of foodborne pathogens using hyperspectral microscope imaging coupled with deep learning frameworks. Sens Actuators B Chem 2020; 309: 127789. DOI: 10.1016/j.snb.2020.127789.
  15. Spahn C, et al. DeepBacs: Bacterial image analysis using open-source deep learning approaches. bioRxiv Preprint. 2021. Source: <https://www.biorxiv.org/content/10.1101/2021.11.03.467152v1>. DOI: 10.1101/2021.11.03.467152.
  16. Smit JH, Li Y, Warszawik EM, Herrmann A, Cordes T. ColiCoords: A Python package for the analysis of bacterial fluorescence microscopy data. PLoS ONE 2019; 14(6): 1-18. DOI: 10.1371/journal.pone.0217524.
  17. Stylianidou S, Brennan C, Nissen SB, Kuwada NJ, Wiggins PA. SuperSegger: robust image segmentation, analysis and lineage tracking of bacterial cells. Mol Microbiol 2016; 102(4): 690-700. DOI: 10.1111/mmi.13486.
  18. Balomenos AD, Tsakanikas P, Aspridou Z, Tampakaki AP, Koutsoumanis KP, Manolakos ES. Image analysis driven single-cell analytics for systems microbiology. BMC Syst Biol 2017; 11(1). DOI: 10.1186/s12918-017-0399-z.
  19. Van Valen DA, Kudo T, Lane KM, Macklin DN, Quach NT, DeFelice MM, Maayan I, Tanouchi Y, Ashley EA, Covert MW. Deep learning automates the quantitative analysis of individual cells in live-cell imaging experiments. PLoS Comput Biol 2016; 12(11): e1005177. DOI: 10.1371/journal.pcbi.1005177.
  20. Smith PK, Kang AD, Kirby JE, Bourbeau P. Automated interpretation of blood culture gram stains by use of a deep convolutional neural network. J Clin Microbiol 2018; 56(3): e01521-17. DOI: 10.1128/jcm.01521-17.
  21. Koutsoumanis KP, Lianou A. Stochasticity in colonial growth dynamics of individual bacterial cells. Appl Environ Microbiol 2013; 79(7): 2294-2301. DOI: 10.1128/aem.03629-12.
  22. He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. IEEE Conf on Computer Vision and Pattern Recognition (CVPR) 2016: 770-778. DOI: 10.1109/CVPR.2016.90.
  23. Ronneberger O, Fischer P, Brox T. U-Net: Convolutional networks for biomedical image segmentation. In Book: Navab N, Hornegger J, Wells WM, Frangi AF, eds. Medical image computing and computer-assisted intervention – MICCAI 2015. Cham: Springer International Publishing Switzerland; 2015: 234-241. DOI: 10.1007/978-3-319-24574-4_28.
  24. Weigert M, et al. Content-aware image restoration: pushing the limits of fluorescence microscopy. Nat Methods 2018; 15(12): 1090-1097. DOI: 10.1038/s41592-018-0216-7.
  25. Isola P, Zhu JY, Zhou T, Efros AA. Image-to-image translation with conditional adversarial networks. Proc IEEE Conf on Computer Vision and Pattern Recognition (CVPR) 2017: 1125-1134. DOI: 10.1109/CVPR.2017.632.
  26. Schmidt U, Weigert M, Broaddus C, Myers G. Cell detection with star-convex polygons. In Book: Frangi AF, Schnabel JA, Davatzikos C, Alberola-López C, Fichtinger G, eds. Medical image computing and computer assisted intervention – MICCAI 2018. Cham: Springer Nature Switzerland AG; 2018: 265-273. DOI: 10.1007/978-3-030-00934-2_30.
  27. Mandal S, Uhlmann V. Splinedist: Automated cell segmentation with spline curves. IEEE 18th Int Symposium on Biomedical Imaging (ISBI) 2021: 1082-1086. DOI: 10.1109/ISBI48211.2021.9433928.
  28. Redmon J, Farhadi A. YOLO9000: Better, faster, stronger. IEEE Conf on Computer Vision and Pattern Recognition (CVPR) 2017: 7263-7271. DOI: 10.1109/CVPR.2017.690.
  29. Hochreiter S, Schmidhuber J. Long short-term memory. Neural Comput 1997; 9(8): 1735-1780. DOI: 10.1162/neco.1997.9.8.1735.
  30. Gvozdev YA, Zimina TM, Kraeva LA, Hamdulaeva GN. Image recognition of juvenile colonies of pathogenic microorganisms in the culture based microbiological method implemented in bioMEMS device for express species identification. IEEE NW Russia Young Researchers in Electrical and Electronic Engineering Conf (EIConRusNW) 2016: 759-763. DOI: 10.1109/EIConRusNW.2016.7448292.
  31. Ren S, He K, Girshick R, Sun J. Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Trans Pattern Anal Mach Intell 2017; 39(6): 1137-1149. DOI: 10.1109/TPAMI.2016.2577031.
  32. Szegedy C, et al. Going deeper with convolutions. IEEE Conf on Computer Vision and Pattern Recognition (CVPR) 2015: 1-9. DOI: 10.1109/CVPR.2015.7298594.
  33. Zhou K, Yang Y, Qiao Y, Xiang T. Domain adaptive ensemble learning. IEEE Trans Image Process 2021; 30: 8008-8018. DOI: 10.1109/TIP.2021.3112012.
  34. Li H, Ng J, Natsev P. EnsembleNet: End-to-end optimization of multi-headed models. arXiv Preprint. 2019. Source: <https://arxiv.org/abs/1905.09979>. DOI: 10.48550/arXiv.1905.09979.
  35. Mikhalkova MA, Yachnaya VO, Malashin RO. Comparative analysis of convolutional neural networks and vision transformer on classification of images containing homogenous microstructures. Wave Electronics and Its Application in Information and Telecommunication Systems (WECONF) 2023: 1-6. DOI: 10.1109/WECONF57201.2023.10148032.
  36. Deng J, Dong W, Socher R, Li L-J, Li K, Fei-Fei L. ImageNet: A large-scale hierarchical image database. IEEE Conf on Computer Vision and Pattern Recognition (CVPR) 2009: 248-255. DOI: 10.1109/CVPR.2009.5206848.
  37. Kingma DP, Ba J. Adam: A method for stochastic optimization. 3rd Int Conf for Learning Representations 2015.
  38. Sandler M, Howard A, Zhu M, Zhmoginov A, Chen L-C. MobileNetV2: Inverted residuals and linear bottlenecks. IEEE/CVF Conf on Computer Vision and Pattern Recognition 2018: 4510-4520. DOI: 10.1109/CVPR.2018.00474.
  39. Tan M, Le QV. EfficientNet: Rethinking model scaling for convolutional neural networks. Int Conf on Machine Learning 2019: 6105-6114.
  40. Srivastava N, Hinton GE, Krizhevsky A, Sutskever I, Salakhutdinov R. Dropout: a simple way to prevent neural networks from overfitting. J Mach Learn Res 2014; 15(56): 1929-1958.
  41. Zhu Z, Ren Z, Wang SH, Górriz JM, Zhang YD. RDNet: ResNet-18 with dropout for blood cell classification. In Book: Vicente JMF, Álvarez-Sánchez JR, de la Paz López F, Adeli H, eds. Artificial intelligence in neuroscience: Affective analysis and health applications. Cham: Springer Nature Switzerland AG; 2022: 136-144. DOI: 10.1007/978-3-031-06242-1_14.
  42. Hua SB, Lu AX, Moses AM. CytoImageNet: A large-scale pretraining dataset for bioimage transfer learning. NeurIPS 2021 Learning Meaningful Representations for Life (LMRL) Workshop 2021: 1-7. DOI: 10.48550/arXiv.2111.11646.
