Human Shape and Pose Tracking Using Keyframes: Supplementary Material

Chun-Hao Huang§, Edmond Boyer†, Nassir Navab§, Slobodan Ilic§
§Department of Computer Science, Technische Universität München
†LJK-INRIA Grenoble Rhône-Alpes
{huangc,slobodan.ilic,navab}@in.tum.de, edmond.boyer@inria.fr

Figure 1. Example images, generated clean silhouettes, and annotated joint positions of WalkChair.

The supplementary material for the paper Human Shape and Pose Tracking Using Keyframes consists of this document and the accompanying video. It provides more details on the newly recorded sequences and further analysis of the experimental results.

1. New recorded sequences

In Fig. 1, we show one example frame of the newly recorded sequences. The occluding object, i.e. the chair, is kept after background subtraction and therefore remains in the subsequently reconstructed point cloud. The reference surface at t = 0 is the smoothed reconstructed visual hull. There is no need to register the surface to the point cloud with a rigid transformation to initialize the tracking.

We produce two different types of ground truth for evaluating shapes and poses, respectively. For shape evaluation, we manually remove the silhouettes of irrelevant objects if they are not connected to the subjects, as shown in Fig. 1. The associated metric is the standard silhouette overlap error, which measures the discrepancy between the contour of the projected surface and the contour of the observed silhouettes. To evaluate the estimated poses, we annotate the joint positions in five cameras and measure how close the estimated joints are to them (2D joint error). The sequences and the associated ground truth will be made publicly available upon publication.
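For clarity, the sketch below illustrates one plausible way to compute these two metrics. The function names, the boolean-mask inputs, and the assumption that the estimated joints have already been projected into each camera are ours for illustration, not the exact evaluation code used for the paper.

```python
import numpy as np

def silhouette_overlap_error(projected_masks, observed_masks):
    """Pixel overlap error: number of pixels where the projected-surface
    silhouette and the observed silhouette disagree, averaged over cameras.
    Both inputs are lists of boolean H x W arrays, one per camera."""
    errors = [np.logical_xor(p, o).sum()
              for p, o in zip(projected_masks, observed_masks)]
    return float(np.mean(errors))

def joint_error_2d(projected_joints, annotated_joints):
    """2D joint error: mean Euclidean distance (in pixels) between the
    projected estimated joints and the annotated joints, averaged over
    cameras and joints. Both inputs have shape (n_cameras, n_joints, 2)."""
    dists = np.linalg.norm(projected_joints - annotated_joints, axis=-1)
    return float(dists.mean())
```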

Figure 2. Generated keyframe pool of Skirt [2] (top) and HammerTable (bottom) for varying mean-shift bandwidths. Overlap error and the keyframes identified besides t = 0 (delay times in parentheses):

Skirt:
  bandwidth 0.50              error 7125   keyframes 97 (21), 211 (480)
  bandwidth 0.31 (estimated)  error 6715   keyframes 95 (5), 198 (24)
  bandwidth 0.10              error 6983   keyframes 71 (2), 87 (5), 191 (191), 260 (189), 405 (344)

HammerTable:
  bandwidth 0.80              error 3881   no other keyframes generated
  bandwidth 0.44 (estimated)  error 3593   keyframe 21 (5)
  bandwidth 0.10              error 3573   keyframes 14 (0), 15 (4), 19 (3), 20 (36)

2. Supplementary results

Influence of mean-shift bandwidth. In Fig. 2 we visualize the keyframe pools generated for Skirt and HammerTable with different bandwidths. These two sequences are chosen because the subjects repeat their actions. With small bandwidths we observe many similar key poses, which, however, does not guarantee a smaller error. With the estimated bandwidths we not only obtain distinctive key poses but also achieve comparable performance.
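As a minimal illustration of this bandwidth effect, the sketch below clusters per-frame pose descriptors with an off-line mean shift (scikit-learn) and keeps the frame closest to each cluster center as a keyframe. The choice of descriptor, the quantile used for bandwidth estimation, and the nearest-to-center selection are assumptions made for illustration; they do not reproduce the on-line, virtual-sample-based procedure described in the main paper.

```python
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth

def select_keyframes(pose_descriptors, bandwidth=None):
    """Cluster per-frame pose descriptors with mean shift and return, for each
    cluster, the index of the frame closest to the cluster center.

    pose_descriptors : array of shape (n_frames, d), one descriptor per frame.
    bandwidth        : mean-shift bandwidth; estimated from the data if None,
                       analogous to the "estimated" rows in Fig. 2."""
    X = np.asarray(pose_descriptors, dtype=float)
    if bandwidth is None:
        bandwidth = estimate_bandwidth(X, quantile=0.3)
    ms = MeanShift(bandwidth=bandwidth).fit(X)
    keyframes = [int(np.argmin(np.linalg.norm(X - c, axis=1)))
                 for c in ms.cluster_centers_]
    return sorted(set(keyframes)), bandwidth
```

Consistent with Fig. 2, a very large bandwidth collapses all frames into one cluster (cf. "no other keyframes generated"), whereas a very small bandwidth yields many near-duplicate key poses without guaranteeing a smaller error.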

Figure 3. Pixel overlap error of (a) Skirt [2] and (b) Dance in each frame, averaged over 8 cameras. Image resolution: 1004 × 1004. Blue: ours. Green: Cagniart et al. [1]. Red: Huang et al. [3]. Orange: using the previous frame as the reference model.

Figure 4. 2D joint error curves of the three newly recorded sequences (WalkChair, HammerTable, SideSit). Image resolution: 1000 × 1000. Blue: ours. Green: Straka et al. [4] + [5]. Red: Huang et al. [3].

Further quantitative analysis. Table 2 in the main paper shows the overall average pixel overlap error of Dance and Skirt. In Fig. 3, we report the error in each frame. Broadly speaking, our approach attains smaller errors over the whole sequences compared with Cagniart et al. [1] and Huang et al. [3]. In Fig. 4, we further report the 2D joint error of WalkChair, HammerTable, and SideSit. While [3] fails to track at a certain point and Straka et al. [4] + [5] produces sporadic high errors, our approach obtains consistently low errors over the sequences.

To further justify the advantage of our keyframe-based framework, we compare it with the following two strategies:

1. Adhering to t = 0 as the reference model.
2. Adhering to the previous frame as the reference model.

The benefit of our approach over the first strategy (i.e. ref: t = 0) is already presented in Fig. 3, Fig. 6, and the corresponding text in the main paper. Here we concentrate on the comparison with the second strategy, which always uses the tracked result of the previous frame as the reference model for the current frame. In Fig. 5(a-c), we overlay the corresponding results at t = 102 of the Skirt sequence. For this frame only, using the previous frame result as the reference actually yields the smallest error.

Figure 5. Comparison of the three different strategies (a-c): (a) ref: t = 0; (b) ref: previous frame (t = 101); (c) ours (ref: t = 95). Disadvantages of always using the previous frame result as the reference model (d-f): (d) t = 31; (e) t = 256; (f) t = 462. Best viewed in the PDF file.

We demonstrate in Fig. 5(d-f) the potential drawback of this strategy: drifting. The blue patch is supposed to stay at the back side of the subject (t = 31), but it moves along the surface embedding during tracking and ends up at the front side of the body (t = 462). At the very beginning of the tracking, drifting is difficult to observe via the overlap error because the silhouettes do not differ much (orange curves in Fig. 3). However, as the errors accumulate, drifting gradually deteriorates the results and eventually leads to noticeably large errors (Skirt), or even a tracking failure (Dance).
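The control flow being compared can be summarized with the schematic sketch below. Here track_frame and pick_keyframe are caller-supplied placeholders standing in for the non-rigid registration step and the keyframe lookup of the main paper; the sketch only contrasts the three reference-model strategies and is not our implementation.

```python
def track_sequence(observations, initial_surface, pick_keyframe, track_frame,
                   strategy="keyframe"):
    """Run a tracking loop under one of the three reference-model strategies.

    observations    : per-frame point clouds (index 0 corresponds to t = 0)
    initial_surface : tracked/template surface at t = 0
    pick_keyframe   : callable(observation) -> reference surface from the pool
    track_frame     : callable(reference, observation) -> tracked surface
    """
    results = [initial_surface]                  # result at t = 0
    for obs in observations[1:]:
        if strategy == "fixed":                  # strategy 1: always use t = 0
            reference = initial_surface
        elif strategy == "previous":             # strategy 2: previous result;
            reference = results[-1]              # errors accumulate and drift
        else:                                    # keyframe-based reference
            reference = pick_keyframe(obs)
        results.append(track_frame(reference, obs))
    return results
```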

Generated keyframe pool. We show the identified keyframes of all testing sequences and the associated estimated bandwidths (BW) in Fig. 6. Thanks to the way we create virtual samples, we do not observe duplicate keyframes within the same sequence, and the delay times are all within an acceptable range.

Figure 6. Generated keyframe pool of all testing sequences (Dance, HammerTable, Fighting, SideSit, Basketball, Skirt, WalkChair), each shown with its estimated bandwidth (BW). Numbers in parentheses are the delay times.

Further qualitative results. In Fig. 7, we further demonstrate the effectiveness of our approach in handling outliers and missing data. In Fig. 7(b), the hand of the subject is connected to the table in both the silhouette and the point cloud. Such observations confuse methods like [4], which results in the high peak error in Fig. 4, whereas our method still estimates the pose and the shape successfully. In Fig. 7(c), although the ball observations interact closely with the subject, we still obtain the correct shape around the right leg. In Fig. 7(d), our method properly handles merging body parts (the right hand) and excludes outliers, while [1] does not manage to do so.

Figure 7. Results of (a) Dance, (b) HammerTable, (c) Basketball, and (d) WalkChair. Black dots are the point clouds. Results of Cagniart et al. [1] (ECCV'10, no skeleton) are shown for comparison.

References

[1] C. Cagniart, E. Boyer, and S. Ilic. Probabilistic deformable surface tracking from multiple videos. In ECCV, 2010.
[2] J. Gall, C. Stoll, E. de Aguiar, C. Theobalt, B. Rosenhahn, and H.-P. Seidel. Motion capture using joint skeleton tracking and surface estimation. In CVPR, 2009.
[3] C.-H. Huang, E. Boyer, and S. Ilic. Robust human body shape and pose tracking. In 3DV, 2013.
[4] M. Straka, S. Hauswiesner, M. Rüther, and H. Bischof. Skeletal graph based human pose estimation in real-time. In BMVC, 2011.
[5] M. Straka, S. Hauswiesner, M. Rüther, and H. Bischof. Simultaneous shape and pose adaption of articulated models using linear optimization. In ECCV, 2012.
