References of "Papadopoulos, Konstantinos 50022363"
Full Text
Peer Reviewed
View-Invariant Action Recognition from RGB Data via 3D Pose Estimation
Baptista, Renato UL; Ghorbel, Enjie UL; Papadopoulos, Konstantinos UL et al

in IEEE International Conference on Acoustics, Speech and Signal Processing, Brighton, UK, 12–17 May 2019 (2019, May)

In this paper, we propose a novel view-invariant action recognition method using a single monocular RGB camera. View-invariance remains a very challenging topic in 2D action recognition due to the lack of 3D information in RGB images. Most successful approaches rely on knowledge transfer, projecting 3D synthetic data to multiple viewpoints. Instead of relying on knowledge transfer, we propose to augment the RGB data with a third dimension by estimating 3D skeletons from 2D images with a CNN-based pose estimator. To ensure view-invariance, a pre-processing alignment step is applied, followed by data expansion as a means of denoising. Finally, a Long Short-Term Memory (LSTM) architecture models the temporal dependencies between skeletons. The proposed network is trained to recognize actions directly from the aligned 3D skeletons. Experiments on the challenging Northwestern-UCLA dataset show the superiority of our approach over state-of-the-art methods.
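The alignment pre-processing described above can be sketched as follows; the joint indices, the hip-centred translation, and the shoulder-line rotation convention are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def align_skeleton(joints, hip=0, l_sh=1, r_sh=2):
    """Sketch of view-invariant skeleton alignment (joint indices assumed):
    translate so the hip joint sits at the origin, then rotate about the
    vertical (y) axis so the shoulder line lies parallel to the x-axis."""
    centered = joints - joints[hip]       # remove global translation
    v = centered[r_sh] - centered[l_sh]   # shoulder direction
    theta = np.arctan2(v[2], v[0])        # heading angle in the ground plane
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, 0.0, s],
                  [0.0, 1.0, 0.0],
                  [-s, 0.0, c]])          # rotation about the y-axis
    return centered @ R.T

# After alignment, the hip is at the origin and the two shoulders share
# the same depth (z) coordinate, regardless of the original viewpoint.
```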

Full Text
Peer Reviewed
A View-invariant Framework for Fast Skeleton-based Action Recognition Using a Single RGB Camera
Ghorbel, Enjie UL; Papadopoulos, Konstantinos UL; Baptista, Renato UL et al

in 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, Prague, 25-27 February 2019 (2019, February)

View-invariant action recognition using a single RGB camera is a very challenging problem due to the lack of 3D information in RGB images. Recent advances in deep learning, however, have made it possible to extract a 3D skeleton from a single RGB image. Taking advantage of this progress, we propose a simple framework for fast and view-invariant action recognition using a single RGB camera. The proposed pipeline combines two key steps. The first is the estimation of a 3D skeleton from a single RGB image using a CNN-based pose estimator such as VNect. The second computes view-invariant skeleton-based features from the estimated 3D skeletons. Experiments are conducted on two well-known benchmarks, the IXMAS and Northwestern-UCLA datasets. The results confirm the validity of the concept and suggest a new way to address the challenge of RGB-based view-invariant action recognition.
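As a minimal illustration of the second step, one classical family of view-invariant skeleton features is the set of inter-joint distances, which are unchanged under any rigid camera rotation and translation. This particular descriptor is an assumption chosen for illustration, not necessarily the one used in the paper:

```python
import numpy as np

def pairwise_distance_features(skeleton):
    """Hypothetical view-invariant descriptor: the Euclidean distances
    between all joint pairs are invariant to any rigid motion of the
    camera (rotation + translation)."""
    diff = skeleton[:, None, :] - skeleton[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    iu = np.triu_indices(len(skeleton), k=1)   # upper triangle, no diagonal
    return dist[iu]

# The same skeleton seen from a rotated, shifted camera yields the exact
# same feature vector, which is the property the framework relies on.
```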

Full Text
Peer Reviewed
Localized Trajectories for 2D and 3D Action Recognition
Papadopoulos, Konstantinos UL; Demisse, Girum UL; Ghorbel, Enjie UL et al

in Sensors (2019)

The Dense Trajectories concept is one of the most successful approaches in action recognition, suitable for scenarios involving a significant amount of motion. However, due to noise and background motion, many generated trajectories are irrelevant to the actual human activity and can degrade recognition performance. In this paper, we propose Localized Trajectories, an improved version of Dense Trajectories in which motion trajectories are clustered around human body joints provided by RGB-D cameras and then encoded by local Bag-of-Words. As a result, Localized Trajectories yield a more discriminative representation of actions. Moreover, we generalize Localized Trajectories to 3D using the depth modality; one of the main advantages of 3D Localized Trajectories is that they capture radial displacements, i.e., motion perpendicular to the image plane. Extensive experiments and analysis were carried out on five different datasets.
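The local Bag-of-Words encoding can be sketched roughly as follows, assuming each trajectory is summarized by an anchor point and a precomputed codeword index; the nearest-joint assignment rule and all names are hypothetical simplifications:

```python
import numpy as np

def localized_bow(traj_points, traj_words, joints, n_words):
    """Sketch of local Bag-of-Words: each trajectory (represented here by
    one anchor point and a codeword index) is assigned to its nearest body
    joint, and a separate word histogram is accumulated per joint."""
    hist = np.zeros((len(joints), n_words))
    for point, word in zip(traj_points, traj_words):
        j = np.argmin(np.linalg.norm(joints - point, axis=1))
        hist[j, word] += 1
    # L1-normalize each non-empty per-joint histogram
    sums = hist.sum(axis=1, keepdims=True)
    return np.divide(hist, sums, out=np.zeros_like(hist), where=sums > 0)
```

Concatenating the per-joint histograms gives the final video representation, so trajectories near different joints contribute to different parts of the descriptor.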

Full Text
Peer Reviewed
Two-stage RGB-based Action Detection using Augmented 3D Poses
Papadopoulos, Konstantinos UL; Ghorbel, Enjie UL; Baptista, Renato UL et al

in 18th International Conference on Computer Analysis of Images and Patterns, Salerno, 3-5 September 2019 (2019)

In this paper, a novel approach for action detection from RGB sequences is proposed. It takes advantage of recent CNN-based methods for estimating 3D human poses from a monocular camera, on top of which we build a two-stage, 3D skeleton-based action detection approach. To localize actions in unsegmented sequences, Relative Joint Positions (RJP) and Histograms of Displacements (HOD) are used as inputs to a k-nearest-neighbour binary classifier that delimits action segments. The localized action proposals are then recognized by a compact Long Short-Term Memory (LSTM) network with a denoising expansion unit. Compared to previous RGB-based methods, our approach offers robustness to radial motion, view-invariance, and low computational complexity. Results on the Online Action Detection dataset show that our method outperforms earlier RGB-based approaches.
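A minimal sketch of the first stage, assuming a simplified RJP descriptor and a plain k-NN majority vote; the actual feature normalisation and classifier settings in the paper may differ:

```python
import numpy as np

def rjp(skeleton, ref=0):
    """Simplified Relative Joint Positions: every joint expressed relative
    to a reference joint (index assumed), flattened into one descriptor."""
    return (skeleton - skeleton[ref]).ravel()

def knn_predict(x, X_train, y_train, k=3):
    """Minimal k-nearest-neighbour binary vote: 1 = 'inside an action',
    0 = 'no action', decided by the majority label among the k closest
    training descriptors."""
    dists = np.linalg.norm(X_train - x, axis=1)
    nearest = y_train[np.argsort(dists)[:k]]
    return int(nearest.sum() * 2 > k)   # majority vote
```

Running `knn_predict` frame by frame yields a binary action/no-action signal whose positive runs form the candidate segments passed to the LSTM stage.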

Full Text
Peer Reviewed
Pose Encoding for Robust Skeleton-Based Action Recognition
Demisse, Girum UL; Papadopoulos, Konstantinos UL; Aouada, Djamila UL et al

in CVPRW: Visual Understanding of Humans in Crowd Scene, Salt Lake City, Utah, June 18-22, 2018 (2018, June 18)

Some of the main challenges in skeleton-based action recognition systems are redundant and noisy pose transformations. Earlier works explored different approaches for filtering linear noise transformations, but neglected potential nonlinear transformations. In this paper, we present an unsupervised learning approach for estimating nonlinear noise transformations in pose estimates. Our approach starts by decoupling linear and nonlinear noise transformations: while the linear transformations are modelled explicitly, the nonlinear transformations are learned from data. Subsequently, we use an autoencoder with an L2-norm reconstruction error and show that it indeed captures nonlinear noise transformations and recovers a denoised pose estimate, which in turn improves performance significantly. We validate our approach on the publicly available NW-UCLA dataset.
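As a toy stand-in for the autoencoder with L2-norm reconstruction error, the following trains a linear encoder/decoder pair by plain gradient descent; the architecture, sizes, learning rate, and epoch count are illustrative assumptions, not the paper's network:

```python
import numpy as np

def train_linear_autoencoder(X, hidden, epochs=500, lr=0.02, seed=0):
    """Toy linear autoencoder minimizing the L2 reconstruction error
    ||X @ We @ Wd - X||^2 by gradient descent (all hyperparameters are
    illustrative)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    We = rng.standard_normal((d, hidden)) * 0.1   # encoder weights
    Wd = rng.standard_normal((hidden, d)) * 0.1   # decoder weights
    for _ in range(epochs):
        H = X @ We                  # encode
        E = H @ Wd - X              # reconstruction residual
        Wd -= lr * (H.T @ E) / n    # gradient of 0.5*||E||^2 / n w.r.t. Wd
        We -= lr * (X.T @ (E @ Wd.T)) / n
    return We, Wd
```

On low-rank (i.e. structured) pose data, the bottleneck reconstruction discards components the hidden layer cannot represent, which is the intuition behind using the reconstruction as a denoised estimate.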

Full Text
Peer Reviewed
A Revisit of Action Detection using Improved Trajectories
Papadopoulos, Konstantinos UL; Antunes, Michel; Aouada, Djamila UL et al

in IEEE International Conference on Acoustics, Speech and Signal Processing, Calgary, Alberta, Canada, 15–20 April 2018 (2018)

In this paper, we revisit trajectory-based action detection using a non-uniform segmentation strategy. Improved trajectories have proven to be an effective model for motion description in action recognition; in temporal action localization, however, this approach is not exploited efficiently. Trajectory features extracted from uniform video segments cause significant performance degradation for two reasons: (a) uniform segmentation often adds a significant amount of noise to the main action, and (b) partial actions can have a negative impact on the classifier's performance. Since uniform video segmentation is insufficient for this task, we propose a two-step supervised non-uniform segmentation, performed in an online manner. Action proposals are generated using either 2D or 3D data, so action classification can be performed directly on them using the standard improved-trajectories approach. We experimentally compare our method with other approaches and show improved performance on a challenging online action detection dataset.
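The online grouping of per-frame decisions into non-uniform action proposals might be sketched like this; the minimum-length filter and the half-open `(start, end)` segment convention are assumptions made for illustration:

```python
def segments_from_scores(frame_labels, min_len=3):
    """Sketch of non-uniform proposal generation: group consecutive
    positive per-frame decisions into segments and keep only runs of at
    least min_len frames (threshold assumed)."""
    proposals, start = [], None
    for i, label in enumerate(frame_labels + [0]):  # trailing 0 flushes the last run
        if label and start is None:
            start = i                               # a positive run begins
        elif not label and start is not None:
            if i - start >= min_len:
                proposals.append((start, i))        # half-open [start, i)
            start = None
    return proposals
```

Because each frame is handled once as it arrives, the same loop works in an online setting, unlike fixed uniform windowing.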

Full Text
STARR - Decision SupporT and self-mAnagement system for stRoke survivoRs: Vision based Rehabilitation System
Shabayek, Abd El Rahman UL; Baptista, Renato UL; Papadopoulos, Konstantinos UL et al

in European Project Space on Networks, Systems and Technologies (2017)

This chapter describes a vision-based platform developed within a European project on decision support and self-management for stroke survivors. The objective is to provide a low-cost home rehabilitation system. Our main concern is to maintain the patients' physical activity while continuously monitoring their physical and emotional state, which is essential for recovering some autonomy in daily-life activities and preventing a second, damaging stroke. Post-stroke patients initially receive physical therapy under the supervision of a health professional, who follows up on their daily physical activity and monitors their emotional state. However, due to social and economic constraints, home-based rehabilitation is eventually suggested. Our vision platform paves the way towards low-cost home rehabilitation.

Full Text
Peer Reviewed
Enhanced Trajectory-based Action Recognition using Human Pose
Papadopoulos, Konstantinos UL; Goncalves Almeida Antunes, Michel UL; Aouada, Djamila UL et al

in IEEE International Conference on Image Processing, Beijing, 17-20 September 2017 (2017)

Action recognition using dense trajectories is a popular concept. However, many spatio-temporal characteristics of the trajectories are lost in the final video representation when a single Bag-of-Words model is used. In addition, a significant number of the extracted trajectory features are irrelevant to the activity being analyzed, which can considerably degrade recognition performance. In this paper, we propose a human-tailored trajectory extraction scheme in which trajectories are clustered using information from the human pose. Two configurations are considered: first, when exact skeleton joint positions are provided, and second, when only an estimate thereof is available. In both cases, the proposed method is further strengthened by the concept of local Bag-of-Words, where a specific codebook is generated for each group of skeleton joints. This adds spatial human-pose awareness to the video representation, effectively increasing its discriminative power. We experimentally compare the proposed method with the standard dense-trajectories approach on two challenging datasets.
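For context, the trajectory extraction underlying all of these methods can be sketched as a point followed through per-frame displacement fields, with near-static trajectories discarded; the nearest-pixel flow sampling and the static-motion threshold are simplified assumptions:

```python
import numpy as np

def build_trajectory(start, flows):
    """Dense-trajectory sketch: follow a 2D point through a sequence of
    per-frame displacement fields, sampling each field at the rounded
    current position (nearest-pixel lookup instead of interpolation)."""
    path = [np.asarray(start, dtype=float)]
    for flow in flows:                        # flow: H x W x 2 displacements (dy, dx)
        y, x = np.round(path[-1]).astype(int)
        path.append(path[-1] + flow[y, x])
    return np.stack(path)

def is_static(path, eps=1e-3):
    """Discard trajectories whose end-to-end displacement is negligible,
    since they carry no motion information."""
    return np.linalg.norm(path[-1] - path[0]) < eps
```

In the pose-aware scheme above, each surviving trajectory would then be assigned to a skeleton joint group before the per-group codebooks are built.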
