OpenPifPaf: Composite Fields for Semantic Keypoint Detection and Spatio-Temporal Association
IEEE Transactions on Intelligent Transportation Systems (2022)
École Polytechnique Fédérale de Lausanne (EPFL)
Abstract
Many image-based perception tasks can be formulated as detecting, associating, and tracking semantic keypoints, e.g., human body pose estimation and tracking. In this work, we present a general framework that jointly detects and forms spatio-temporal keypoint associations in a single stage, making this the first real-time pose detection and tracking algorithm. We present a generic neural network architecture that uses Composite Fields to detect and construct a spatio-temporal pose, which is a single, connected graph whose nodes are the semantic keypoints (e.g., a person's body joints) in multiple frames. For the temporal associations, we introduce the Temporal Composite Association Field (TCAF), which requires an extended network architecture and training method beyond previous Composite Fields. Our experiments show competitive accuracy while being an order of magnitude faster on multiple publicly available datasets such as COCO, CrowdPose, and the PoseTrack 2017 and 2018 datasets. We also show that our method generalizes to any class of semantic keypoints, such as car and animal parts, to provide a holistic perception framework that is well suited for urban mobility applications such as self-driving cars and delivery robots.
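To make the composite-field idea concrete, the sketch below shows how a decoder might greedily grow a single connected pose graph from per-keypoint confidence maps and regressed association vectors. This is an illustrative simplification under assumed names and tensor shapes (decode_pose, confidence, associations, skeleton), not the authors' implementation; the released openpifpaf library handles scale fields, refinement, and temporal edges that are omitted here.

```python
import numpy as np

def decode_pose(confidence, associations, skeleton, threshold=0.3):
    """Greedily grow one pose graph from composite fields (illustrative sketch).

    confidence:   (K, H, W) array of per-keypoint confidence maps.
    associations: dict mapping a skeleton edge (i, j) to a (2, H, W) vector
                  field whose value at a pixel points from keypoint i to j.
    skeleton:     list of (i, j) keypoint-index pairs defining the pose graph.
    Returns a list of K entries, each (x, y, score) or None if undetected.
    """
    K, H, W = confidence.shape
    keypoints = [None] * K

    # Seed: start from the single most confident keypoint location.
    flat = confidence.reshape(K, -1)
    k0 = int(flat.max(axis=1).argmax())
    y0, x0 = np.unravel_index(int(flat[k0].argmax()), (H, W))
    keypoints[k0] = (float(x0), float(y0), float(confidence[k0, y0, x0]))

    # Frontier expansion: follow association vectors along skeleton edges
    # (forward direction only, for brevity) to localize connected keypoints.
    frontier = [k0]
    while frontier:
        i = frontier.pop()
        xi, yi, _ = keypoints[i]
        for (a, b) in skeleton:
            if a != i or keypoints[b] is not None:
                continue
            vx, vy = associations[(a, b)][:, int(yi), int(xi)]
            xj = int(np.clip(np.rint(xi + vx), 0, W - 1))
            yj = int(np.clip(np.rint(yi + vy), 0, H - 1))
            score = float(confidence[b, yj, xj])
            if score >= threshold:
                keypoints[b] = (float(xj), float(yj), score)
                frontier.append(b)
    return keypoints
```

The greedy frontier expansion mirrors the single-stage design described in the abstract: detection and association come from the same network outputs, so no separate matching stage is needed, and a temporal field such as TCAF can be treated as just another edge type connecting keypoints across frames.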
Keywords
Pose estimation, pose tracking, composite fields, automobiles, animals, semantics, autonomous automobiles, task analysis, three-dimensional displays