
A method for generating training data for a protective face mask detection system
E.V. Ryumina 1, D.A. Ryumin 1, M.V. Markitantov 1, A.A. Karpov 1

St. Petersburg Federal Research Center of the RAS (SPC RAS),
199178, St. Petersburg, Russia, 14th Line V.O. 39


DOI: 10.18287/2412-6179-CO-1039

Pages: 603-611.

Full text of article: Russian language.

Abstract:
Monitoring and evaluating the safety of individuals is one of the most important problems of the modern world, which has been forced to change by the emergence of the COVID-19 virus. To increase the safety of individuals, new information technologies are needed that can curb the spread of infection by minimizing the threat of outbreaks and by monitoring compliance with recommended measures. Such technologies include, in particular, intelligent systems for tracking the presence of protective face masks. For these systems, this article proposes a new method for generating training data that combines the data augmentation techniques Mixup and Insert. The proposed method is tested on two datasets, namely the MAsked FAce dataset and the Real-World Masked Face Recognition Dataset, for which unweighted average recall values of 98.51% and 98.50% are obtained, respectively. In addition, the effectiveness of the proposed method is tested on images in which face masks are imitated (painted) on people's faces, and an automated technique for reducing type I and type II errors is proposed. Using this technique, the number of type II errors is reduced from 174 to 32 for the Real-World Masked Face Recognition Dataset and from 40 to 14 for images with painted protective face masks.
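The abstract names two augmentation techniques but the article's full text (in Russian) contains their exact definitions; the sketch below is only an illustration. The Mixup blend follows Zhang et al. [11], while the Insert operation is assumed here to paste a mask patch onto a face crop. All function names, signatures, and parameter values are hypothetical.

    # Minimal illustrative sketch (not the authors' code): Mixup blending as in
    # Zhang et al. [11], plus a hypothetical "Insert" operation that pastes a
    # mask patch onto a face crop. Names and the alpha value are assumptions.
    import numpy as np

    def mixup(img_a, img_b, label_a, label_b, alpha=0.4, rng=np.random):
        # Draw a Beta-distributed weight and blend both the images and their
        # one-hot labels with it.
        lam = rng.beta(alpha, alpha)
        mixed_img = lam * img_a.astype(np.float32) + (1.0 - lam) * img_b.astype(np.float32)
        mixed_label = (lam * np.asarray(label_a, dtype=np.float32)
                       + (1.0 - lam) * np.asarray(label_b, dtype=np.float32))
        return mixed_img, mixed_label

    def insert_patch(face_img, patch_img, top_left):
        # Paste a pre-resized mask patch at the given (row, col) position of the
        # face crop, e.g. over the lower half of the face.
        out = face_img.copy()
        y, x = top_left
        h, w = patch_img.shape[:2]
        out[y:y + h, x:x + w] = patch_img
        return out

In a typical pipeline, insert_patch would first add a synthetic mask to an unmasked face image, after which mixup blends a masked and an unmasked sample; whether the authors combine the two operations in this order is an assumption, not a claim about their implementation.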

Keywords:
protective face mask detection, COVID-19, protective face mask imitation, data augmentation, visual features, heatmap.

Citation:
Ryumina EV, Ryumin DA, Markitantov MV, Karpov AA. A method for generating training data for a protective face mask detection system. Computer Optics 2022; 46(4): 603-611. DOI: 10.18287/2412-6179-CO-1039.

Acknowledgements:
This work was supported by the Russian Foundation for Basic Research № 20-04-60529.

References:

  1. Cheng VC, Wong SC, Chuang VW, So SY, Chen JH, Sridhar S, To KK, Chan JF, Hung IF, Ho PL, Yuen KY. The role of community-wide wearing of face mask for control of coronavirus disease 2019 (COVID-19) epidemic due to SARS-CoV-2. J Infect 2020; 81(1): 107-114. DOI: 10.1016/j.jinf.2020.04.024.
  2. Wang J, Pan L, Tang S, Ji JS, Shi X. Mask use during COVID-19: A risk adjusted strategy. Environ Pollut 2020; 266(1): 115099. DOI: 10.1016/j.envpol.2020.115099.
  3. Howard MC. The relations between age, face mask perceptions and face mask wearing. J Public Health (Oxf) 2021: fdab018. DOI: 10.1093/pubmed/fdab018.
  4. Markitantov M, Dresvyanskiy D, Mamontov D, Kaya H, Minker W, Karpov A. Ensembling end-to-end deep models for computational paralinguistics tasks: ComParE 2020 mask and breathing sub-challenges. Proc Interspeech 2020: 2072-2076. DOI: 10.21437/Interspeech.2020-2666.
  5. Montacié C, Caraty M. Phonetic, frame clustering and intelligibility analyses for the INTERSPEECH 2020 ComParE challenge. Proc Interspeech 2020: 2062-2066. DOI: 10.21437/Interspeech.2020-2243.
  6. Ryumina E, Ryumin D, Ivanko D, Karpov A. A novel method for protective face mask detection using convolutional neural networks and image histograms. Int Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences 2021; XLIV-2/W1-2021: 177-182. DOI: 10.5194/isprs-archives-XLIV-2-W1-2021-177-2021.
  7. Loey M, Manogaran G, Taha MHN, Khalifa NEM. A hybrid deep transfer learning model with machine learning methods for face mask detection in the era of the COVID-19 pandemic. Measurement 2021; 167: 108288. DOI: 10.1016/j.measurement.2020.108288.
  8. Deshpande G, Schuller BW. Audio, speech, language, & signal processing for COVID-19: A comprehensive overview. arXiv Preprint 2020. Source: <https://arxiv.org/abs/2011.14445>.
  9. Efremtsev VG, Efremtsev NG, Teterin EP, Teterin PE, Bazavluk ES. Chest x-ray image classification for viral pneumonia and Covid-19 using neural networks. Computer Optics 2021; 45(1): 149-153. DOI: 10.18287/2412-6179-CO-765.
  10. Jiang X, Gao T, Zhu Z, Zhao Y. Real-time face mask detection method based on YOLOv3. Electronics 2021; 10(7): 837. DOI: 10.3390/electronics10070837.
  11. Zhang H, Cissé M, Dauphin Y, Lopez-Paz D. Mixup: Beyond empirical risk minimization. Proc Int Conf on Learning Representations (ICLR) 2018: 1-13.
  12. Singh S, Ahuja U, Kumar M, Kumar K, Sachdeva M. Face mask detection using YOLOv3 and faster R-CNN models: COVID-19 environment. Multimed Tools Appl 2021; 80(13): 19753-19768. DOI: 10.1007/s11042-021-10711-8.
  13. Vizilter YV, Gorbatsevich VS, Moiseenko AS. Single-shot face and landmarks detector. Computer Optics 2020; 44(4): 589-595. DOI: 10.18287/2412-6179-CO-674.
  14. Ge S, Li J, Ye Q, Luo Z. Detecting masked faces in the wild with LLE-CNNs. Proc IEEE Conf on Computer Vision and Pattern Recognition 2017: 2682-2690. DOI: 10.1109/CVPR.2017.53.
  15. Wang Z, Wang G, Huang B, Xiong Z, Hong Q, Wu H, Yi P, Jiang K, Wang N, Pei Y, Chen H, Miao Y, Huang Z, Liang J. Masked face recognition dataset and application. arXiv preprint 2020. Source: <https://arxiv.org/abs/2003.09093>.
  16. The simulated masked face dataset. Source: <https://github.com/prajnasb/observations/>.
  17. The labeled faces in the wild simulated masked face dataset. Source: <https://www.kaggle.com/muhammeddalkran/lfw-simulated-masked-face-dataset/>.
  18. Nagrath P, Jain R, Madan A, Arora R, Kataria P, Hemanth J. SSDMNV2: A real time DNN-based face mask detection system using single shot multibox detector and MobileNetV2. Sustain Cities Soc 2021; 66: 102692. DOI: 10.1016/j.scs.2020.102692.
  19. Dvoynikova AA, Markitantov MV, Ryumina EV, Ryumin DA, Karpov AA. Analytical review of audiovisual systems for determining personal protective equipment on a person's face [In Russian]. Informatics and Automation 2021; 20(5): 1116-1152. DOI: 10.15622/ia.2021.20.5.
  20. Learned-Miller E, Huang GB, RoyChowdhury A, Li H, Hua G. Labeled faces in the wild: A survey. In Book: Kawulok M, Celebi E, Smolka B, eds. Advances in face detection and facial image analysis. New York: Springer; 2016: 189-248. DOI: 10.1007/978-3-319-25958-1_8.
  21. Deng J, Guo J, Ververas E, Kotsia I, Zafeiriou S. RetinaFace: Single-shot multi-level face localisation in the wild. Proc IEEE Conf on Computer Vision and Pattern Recognition (CVPR) 2020: 5203-5212. DOI: 10.1109/CVPR42600.2020.00525.
  22. The annotation for MAsked FAce. Source: <https://github.com/ElenaRyumina/AnnotationMAFA/>.
  23. Ryumina EV, Karpov AA. Comparative analysis of methods for imbalance elimination of emotion classes in video data of facial expressions [In Russian]. Scientific and Technical Journal of Information Technologies, Mechanics and Optics 2020; 20(5:129): 683-691. DOI: 10.17586/2226-1494-2020-20-5-683-691.
  24. Selvaraju RR, Cogswell M, Das A, Vedantam R, Parikh D, Batra D. Grad-CAM: Visual explanations from deep networks via gradient-based localization. IEEE Int Conf on Computer Vision 2017: 618-626. DOI: 10.1109/ICCV.2017.74.
  25. Markitantov MV, Ryumin DA, Ryumina EV, Karpov AA. Corpus of audiovisual Russian-language data of people in protective masks (BRAVE-MASKS – Biometric Russian Audio-Visual Extended MASKS corpus) [In Russian]. Database state registration certificate N2021621094 of May 26, 2021.
