Joint-Aware Action Recognition for Ambient Assisted Living

As the aging population grows rapidly, the need for efficient and low-cost ambient systems becomes vital. The effectiveness of such systems lies in accurate and fast motion analysis, so as to predict an elderly person's actions and intervene when needed. To achieve that, precise estimation of the entire human body pose is often exploited, providing the required motion-related information. Yet, exploiting the entire human pose can present several limitations. The paper at hand employs state-of-the-art data-driven classifiers and compares their efficiency in action recognition based on a specific set of joints or coordinates, i.e., the x-, y-, and z-axes. The above rests upon the notion that each real-life action can be effectively perceived by observing only a specific set of joints. Considering that, we aim to investigate the capacity of such a joint analysis and its ability to deliver an enhanced pose-based action recognition system. To that end, we correlate specific joints with each action, indicating the joints that contribute the most. We evaluate our findings on two senior subjects using two different classifiers, viz., a support vector machine (SVM) and a convolutional neural network (CNN), showing that the above strategy can improve recognition rates.
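The idea of feeding a classifier only the joints most relevant to an action can be sketched as follows. This is a minimal illustration, not the authors' implementation: the joint names, the example poses, and the nearest-centroid classifier (standing in for the SVM/CNN used in the paper) are all assumptions made for demonstration.

```python
import math

def extract_features(frames, joint_subset):
    """Flatten the (x, y, z) coordinates of the chosen joints across frames.

    Each frame is a dict mapping a joint name to its 3D coordinates, so
    restricting `joint_subset` restricts the classifier's input, mirroring
    the paper's per-action joint selection.
    """
    feats = []
    for frame in frames:
        for joint in joint_subset:
            feats.extend(frame[joint])
    return feats

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class NearestCentroid:
    """Toy stand-in for the SVM: stores one mean feature vector per action."""

    def fit(self, X, y):
        sums, counts = {}, {}
        for feats, label in zip(X, y):
            acc = sums.setdefault(label, [0.0] * len(feats))
            for i, v in enumerate(feats):
                acc[i] += v
            counts[label] = counts.get(label, 0) + 1
        self.centroids = {
            label: [v / counts[label] for v in acc]
            for label, acc in sums.items()
        }
        return self

    def predict(self, feats):
        # Nearest centroid in Euclidean distance wins.
        return min(self.centroids, key=lambda l: euclidean(feats, self.centroids[l]))

# Hypothetical example: a waving motion is mostly visible in the right-arm
# joints, so only those joints feed the classifier.
ARM_JOINTS = ["shoulder_r", "elbow_r", "wrist_r"]

wave = [{"shoulder_r": (0.0, 1.4, 0.0), "elbow_r": (0.3, 1.5, 0.0), "wrist_r": (0.5, 1.8, 0.0)}]
rest = [{"shoulder_r": (0.0, 1.4, 0.0), "elbow_r": (0.2, 1.1, 0.0), "wrist_r": (0.3, 0.8, 0.0)}]

clf = NearestCentroid().fit(
    [extract_features(wave, ARM_JOINTS), extract_features(rest, ARM_JOINTS)],
    ["waving", "resting"],
)
print(clf.predict(extract_features(wave, ARM_JOINTS)))  # prints "waving"
```

In practice one would replace the toy classifier with, e.g., scikit-learn's `svm.SVC` fed the same joint-subset feature vectors; the point here is only that the feature extraction, not the classifier, encodes which joints matter for each action.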

Authors
Katerina Maria Oikonomou, Ioannis Kansizoglou, Pelagia Manaveli, Athanasios Grekidis, Dimitrios Menychtas, Nikolaos Aggelousis, Georgios Ch. Sirakoulis, Antonios Gasteratos

Conference
2022 IEEE International Conference on Imaging Systems and Techniques
Availability Date
July 20th, 2022