Title: Action recognition based on spatial temporal graph convolutional networks
Authors: Zheng, Wanqiang; Jing, Punan; Xu, Qingyang
Corresponding author: Xu, Qingyang
Author affiliations: [Zheng, Wanqiang; Jing, Punan; Xu, Qingyang] School of Mechanical, Electrical and Information Engineering, Shandong University, Weihai, Shandong; 26420
Conference: 3rd International Conference on Computer Science and Application Engineering, CSAE 2019
Conference dates: 22 October 2019 through 24 October 2019
Source: ACM International Conference Proceeding Series
Publication year: 2019
DOI:10.1145/3331453.3361651
Keywords: Human action recognition; Human skeleton; Temporal and spatial graph convolution; UCF-101 dataset; UCF-31
Abstract: Compared with the achievements of convolutional neural networks in image classification, human action recognition in video is not yet ideal in terms of accuracy and practicality. A major approach to action recognition is based on the human skeleton, which carries important information for characterizing human motion in video. In this paper, the human skeleton in video is extracted by OpenPose, and a spatial-temporal graph of the skeleton is constructed. A spatial-temporal graph convolutional network (ST-GCN) is used to extract spatial and temporal features of the human skeleton across consecutive video frames, and these features are used for video classification. To evaluate the action recognition performance of the ST-GCN, 50.53% top-1 and 81.58% top-5 accuracy are obtained on the UCF-101 dataset. A specific UCF-31 dataset is constructed manually, on which 68.73% top-1 and 94.43% top-5 accuracy are obtained, verifying that the recognition accuracy of the ST-GCN model also improves when the accuracy of skeleton acquisition improves. © 2019 Association for Computing Machinery.
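The core operation the abstract describes (spatial aggregation over the skeleton graph within each frame, followed by convolution along the time axis) can be sketched as below. This is an illustrative NumPy sketch, not the paper's exact architecture: the 3-joint chain graph, feature sizes, symmetric adjacency normalization, and the simple mean temporal filter are all assumptions for demonstration.

```python
import numpy as np

def normalize_adjacency(A):
    # Symmetric normalization D^{-1/2} (A + I) D^{-1/2},
    # a common choice for graph convolutions (assumed here).
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def st_gcn_layer(X, A, W, temporal_kernel=3):
    """One illustrative spatial-temporal graph convolution step.

    X: (T, V, C_in) joint features over T frames and V skeleton joints
    A: (V, V) skeleton adjacency (1 where two joints are connected)
    W: (C_in, C_out) learnable feature transform
    """
    # Spatial graph convolution: each joint aggregates its graph
    # neighbors within the same frame, then features are transformed.
    A_norm = normalize_adjacency(A)
    spatial = np.einsum('uv,tvc,cd->tud', A_norm, X, W)

    # Temporal convolution (simplified to a mean filter): each joint
    # mixes its own features across a window of consecutive frames.
    T = spatial.shape[0]
    pad = temporal_kernel // 2
    padded = np.pad(spatial, ((pad, pad), (0, 0), (0, 0)), mode='edge')
    out = np.stack([padded[t:t + temporal_kernel].mean(axis=0)
                    for t in range(T)])
    return np.maximum(out, 0.0)  # ReLU nonlinearity

# Toy example: a 3-joint chain skeleton over 5 frames, 2 -> 4 channels.
rng = np.random.default_rng(0)
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
X = rng.random((5, 3, 2))
W = rng.random((2, 4))
out = st_gcn_layer(X, A, W)
```

Stacking several such layers and pooling over joints and frames before a classifier yields per-video class scores, which is the role the ST-GCN plays in the pipeline the abstract describes.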
Indexed in: EI; SCOPUS
Document type: Conference paper; Journal article
Link: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85074826853&doi=10.1145%2f3331453.3361651&partnerID=40&md5=b4a64b5b13cfb5c62ca159c8d763e232