
The basic assembly of skeletal models in the fall detection problem
O.S. Seredin 1, A.V. Kopylov 1, E.E. Surkov 1, S.-C. Huang 2

1 Tula State University, Lenin Ave. 92, Tula 300012, Russia;
2 National Taipei University of Technology, Taipei 106, Taiwan


DOI: 10.18287/2412-6179-CO-1158

Pages: 323-334.

Article language: English.

Abstract:
The paper considers the application of a featureless approach to the human activity recognition problem, which excludes direct anthropomorphic and visual characteristics of the human figure from further analysis and thus increases the privacy of the monitoring system. A generalized pairwise comparison function of two human skeletal models, invariant to the sensor type, is used to project the object of interest into a secondary feature space formed by a basic assembly of skeletons. A sequence of such projections over time forms an activity map, which allows deep learning methods based on convolutional neural networks to be applied to activity recognition. The proper ordering of skeletal models in the basic assembly plays an important role in the design of the secondary space. A study of the ordering of the basic assembly by the shortest unclosed path algorithm, and of the corresponding activity maps for video streams from the TST Fall Detection v2 database, is presented.
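
As a rough illustration of the pipeline the abstract describes, the Python sketch below builds a toy activity map. It is a minimal sketch, not the paper's actual formulation: the Gaussian-of-mean-joint-distance similarity, the greedy nearest-neighbour ordering (a crude stand-in for the shortest unclosed path search), and all names and array shapes are illustrative assumptions.

import numpy as np

def skeleton_similarity(s1, s2):
    # Toy pairwise comparison of two skeletal models, each given as an
    # (n_joints, 3) array of normalized joint coordinates. A Gaussian of
    # the mean joint-to-joint distance stands in for the paper's
    # generalized, sensor-invariant comparison function.
    d = np.linalg.norm(s1 - s2, axis=1).mean()
    return np.exp(-d ** 2)

def order_assembly(assembly):
    # Greedy nearest-neighbour chaining: a crude stand-in for ordering
    # the basic assembly by the shortest unclosed path algorithm.
    remaining = list(range(len(assembly)))
    path = [remaining.pop(0)]
    while remaining:
        last = assembly[path[-1]]
        nxt = max(remaining,
                  key=lambda i: skeleton_similarity(last, assembly[i]))
        remaining.remove(nxt)
        path.append(nxt)
    return [assembly[i] for i in path]

def activity_map(frames, basic_assembly):
    # Project each frame's skeleton onto the ordered basic assembly;
    # stacked over time, the projections form a 2D "activity map" that a
    # convolutional network can consume like a grayscale image.
    return np.array([[skeleton_similarity(f, b) for b in basic_assembly]
                     for f in frames])

# Hypothetical shapes: 100 frames, 25 joints, a 30-skeleton assembly.
frames = np.random.rand(100, 25, 3)
assembly = order_assembly(list(np.random.rand(30, 25, 3)))
amap = activity_map(frames, assembly)  # shape (100, 30)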

Keywords:
skeletal model of human figure, pairwise similarity, activity map, featureless pattern recognition, basic assembly, convolutional neural networks.

Acknowledgments
The work was funded by the Ministry of Science and Higher Education of the Russian Federation within the framework of the state task FEWG-2021-0012.

Citation:
Seredin OS, Kopylov AV, Surkov EE, Huang SC. The basic assembly of skeletal models in the fall detection problem. Computer Optics 2023; 47(2): 323-334. DOI: 10.18287/2412-6179-CO-1158.

