Abstract of FG98 paper

Extracting Gestural Motion Trajectories

Ming-Hsuan Yang and Narendra Ahuja


Abstract

This paper is concerned with the extraction of spatio-temporal patterns in video sequences, with a focus on the trajectories of gestural motions associated with American Sign Language. An algorithm is described to extract the motion trajectories of salient features, such as human palms, from an image sequence. First, a motion segmentation of the image sequence is generated, based on a multiscale segmentation of the frames and attributed graph matching of regions across frames; this produces region correspondences and their affine transformations. Second, the colors of the moving regions are used to determine skin regions. Third, the head and palm regions are identified based on the shape and size of the skin regions in motion. Finally, the affine transformations defining a region's motion between successive frames are concatenated to construct the region's motion trajectory. Experimental results showing the extracted motion trajectories are presented.
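
The final step, chaining per-frame affine transformations into a motion trajectory, can be illustrated with a minimal sketch. The 2x3 affine matrices, the function name, and the variable names below are hypothetical stand-ins, not the paper's implementation; they only show how successive transforms applied to a region's starting position (e.g. a palm centroid) trace out a trajectory.

```python
import numpy as np

def concatenate_affine(transforms, start_point):
    """Chain per-frame 2x3 affine transforms into a trajectory.

    transforms  : list of 2x3 numpy arrays, each mapping a region's
                  position in frame t to its position in frame t+1
                  (assumed to come from the region correspondences).
    start_point : (x, y) location of the region in the first frame.
    Returns the list of (x, y) points tracing the trajectory.
    """
    point = np.array([start_point[0], start_point[1], 1.0])
    trajectory = [(point[0], point[1])]
    for A in transforms:
        # Apply the affine map [A | b] to the homogeneous point.
        xy = A @ point
        point = np.array([xy[0], xy[1], 1.0])
        trajectory.append((xy[0], xy[1]))
    return trajectory

# Example: no motion in the first step, then a small translation.
T1 = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
T2 = np.array([[1.0, 0.0, 5.0], [0.0, 1.0, -3.0]])
print(concatenate_affine([T1, T2], (100.0, 60.0)))
# [(100.0, 60.0), (100.0, 60.0), (105.0, 57.0)]
```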
