Action Segmentation with Joint Self-Supervised Temporal Domain Adaptation




Min-Hung Chen 1*  Baopu Li 2  Yingze Bao 2  Ghassan AlRegib 1  Zsolt Kira 1
1 Georgia Institute of Technology   2 Baidu USA
* Work done during an internship at Baidu USA

Abstract

Despite the recent progress of fully-supervised action segmentation techniques, the performance is still not fully satisfactory. One main challenge is the problem of spatio-temporal variations (e.g. different people may perform the same activity in various ways). Therefore, we exploit unlabeled videos to address this problem by reformulating the action segmentation task as a cross-domain problem with domain discrepancy caused by spatio-temporal variations. To reduce the discrepancy, we propose Self-Supervised Temporal Domain Adaptation (SSTDA), which contains two self-supervised auxiliary tasks (binary and sequential domain prediction) to jointly align cross-domain feature spaces embedded with local and global temporal dynamics, achieving better performance than other Domain Adaptation (DA) approaches. On three challenging benchmark datasets (GTEA, 50Salads, and Breakfast), SSTDA outperforms the current state-of-the-art method by large margins (e.g. for the F1@25 score, from 59.6% to 69.1% on Breakfast, from 73.4% to 81.5% on 50Salads, and from 83.6% to 89.1% on GTEA), and requires only 65% of the labeled training data for comparable performance, demonstrating the usefulness of adapting to unlabeled target videos across variations. The source code is available at https://github.com/cmhungsteve/SSTDA.

[Figure 1: An overview of the proposed Self-Supervised Temporal Domain Adaptation (SSTDA) for action segmentation. "Source" refers to the data with labels, and "Target" refers to the data without access to labels. SSTDA can effectively adapt the source model trained with standard fully-supervised learning to a target domain by diminishing the discrepancy of embedded feature spaces between the two domains caused by spatio-temporal variations. SSTDA only requires unlabeled videos from both domains with the standard transductive setting, which eliminates the need for additional labels to obtain the final target model.]

1. Introduction

The goal of action segmentation is to simultaneously segment videos by time and predict an action class for each segment, leading to various applications (e.g. human activity analysis). While action classification has shown great progress given the recent success of deep neural networks [38, 28, 27], temporally locating and recognizing action segments in long videos is still challenging. One main challenge is the problem of spatio-temporal variations of human actions across videos [16]. For example, different people may make tea in different personalized styles even if the given recipe is the same. Such intra-class variations degrade performance when a model trained on one group of people is deployed directly on another.

Despite significant progress made by recent methods based on temporal convolution with fully-supervised learning [20, 6, 23, 8], the performance is still not fully satisfactory (e.g. the best accuracy on the Breakfast dataset is still lower than 70%). One way to improve performance is to exploit knowledge from larger-scale labeled data [2]; however, manually annotating precise frame-by-frame actions is time-consuming and challenging. Another way is to design more complicated architectures, but at a higher cost in model complexity. Thus, we aim to address the spatio-temporal variation problem with unlabeled data, which are comparatively easy to obtain. To achieve this goal, we propose to diminish the distributional discrepancy caused by spatio-temporal variations by exploiting auxiliary unlabeled videos containing the same types of human activities performed by different people. More specifically, to extend the framework of the main video task for exploiting auxiliary data [45, 19], we reformulate our main task as an unsupervised domain adaptation (DA) problem with the transductive setting [31, 5], which aims to reduce the discrepancy between source and target domains without access to the target labels.

Recently, adversarial-based DA approaches [10, 11, 37, 44] have shown progress in reducing the discrepancy for images by using a domain discriminator equipped with adversarial training. However, videos also suffer from domain discrepancy along the temporal direction [4], so image-based domain discriminators alone are not sufficient for action segmentation. Therefore, we propose Self-Supervised Temporal Domain Adaptation (SSTDA), containing two self-supervised auxiliary tasks: 1) binary domain prediction, which predicts a single domain for each frame-level feature, and 2) sequential domain prediction, which predicts the permutation of domains for an untrimmed video. Through adversarial training with both auxiliary tasks, SSTDA can jointly align cross-domain feature spaces that embed local and global temporal dynamics, addressing the spatio-temporal variation problem for action segmentation, as shown in Figure 1. To support our claims, we compare our method with other popular DA approaches and show better performance, demonstrating the effectiveness of SSTDA for aligning temporal dynamics. Finally, we evaluate our approach on three datasets with high spatio-temporal variations: GTEA [9], 50Salads [35], and the Breakfast dataset [17]. By exploiting unlabeled target videos with SSTDA, our approach outperforms the current state-of-the-art methods by large margins and achieves comparable performance using only 65% of the labeled training data.
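
To make the binary domain prediction task and its adversarial training concrete, below is a minimal PyTorch sketch assuming a gradient reversal layer, a standard device in adversarial DA. The module names, feature sizes, and head design (GradReverse, FrameDomainDiscriminator) are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates (and scales) gradients in the
    backward pass, so one backward pass trains the discriminator to tell
    domains apart while training the feature extractor to fool it."""
    @staticmethod
    def forward(ctx, x, beta):
        ctx.beta = beta
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.beta * grad_output, None

class FrameDomainDiscriminator(nn.Module):
    """Binary domain prediction: classifies every frame-level feature as
    source (0) or target (1). Names and sizes are illustrative."""
    def __init__(self, dim=64):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Conv1d(dim, dim, 1), nn.ReLU(),
            nn.Conv1d(dim, 2, 1))  # per-frame source/target logits

    def forward(self, feat, beta=1.0):
        # feat: (batch, dim, time) frame-level features from the model
        feat = GradReverse.apply(feat, beta)
        return self.classifier(feat)  # (batch, 2, time)

# Usage sketch: domain labels are self-annotated (no human labels needed).
disc = FrameDomainDiscriminator(dim=64)
feat_src, feat_tgt = torch.randn(4, 64, 100), torch.randn(4, 64, 100)
logits = disc(torch.cat([feat_src, feat_tgt], dim=0))
labels = torch.cat([torch.zeros(4, 100), torch.ones(4, 100)]).long()
loss = nn.CrossEntropyLoss()(logits, labels)  # per-frame domain loss
```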

In summary, our contributions are three-fold:

1. Self-Supervised Sequential Domain Prediction: We propose a novel self-supervised auxiliary task, which predicts the permutation of domains for long videos, to facilitate video domain adaptation. To the best of our knowledge, this is the first self-supervised method designed for cross-domain action segmentation.

2. Self-Supervised Temporal Domain Adaptation (SSTDA): By integrating two self-supervised auxiliary tasks, binary and sequential domain prediction, our proposed SSTDA can jointly align local and global embedded feature spaces across domains, outperforming other DA methods.

3. Action Segmentation with SSTDA: By integrating SSTDA into action segmentation, our approach outperforms the current state-of-the-art approach by large margins and achieves comparable performance using only 65% of the labeled training data. Moreover, different design choices are analyzed to identify the key contributions of each component.

2. Related Works

Action Segmentation methods proposed recently are built upon temporal convolution networks (TCNs) [20, 6, 23, 8] because of their ability to capture long-range dependencies across frames and their faster training compared to RNN-based methods. With a multi-stage pipeline, MS-TCN [8] performs hierarchical temporal convolutions to effectively extract temporal features and achieves state-of-the-art performance for action segmentation. In this work, we utilize MS-TCN as the baseline model and integrate the proposed self-supervised modules to further boost performance without extra labeled data.
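
For context on the baseline, the following is a minimal PyTorch sketch of a dilated residual layer in the style of MS-TCN's building block; the channel width, depth, and omission of details such as dropout are simplifying assumptions, not the exact configuration of [8].

```python
import torch
import torch.nn as nn

class DilatedResidualLayer(nn.Module):
    """One TCN building block: a dilated temporal convolution followed by a
    1x1 convolution, with a residual connection. Stacking layers with
    dilation 1, 2, 4, ... grows the receptive field exponentially, which is
    what lets TCNs capture long-range frame dependencies."""
    def __init__(self, dilation, channels=64):
        super().__init__()
        self.conv_dilated = nn.Conv1d(channels, channels, kernel_size=3,
                                      padding=dilation, dilation=dilation)
        self.conv_1x1 = nn.Conv1d(channels, channels, kernel_size=1)
        self.relu = nn.ReLU()

    def forward(self, x):
        # x: (batch, channels, time); padding keeps the temporal length fixed
        out = self.relu(self.conv_dilated(x))
        out = self.conv_1x1(out)
        return x + out  # residual connection

# One stage: stack layers with exponentially increasing dilation.
stage = nn.Sequential(*[DilatedResidualLayer(2 ** i) for i in range(10)])
frames = torch.randn(1, 64, 500)   # 500 frame-level features
assert stage(frames).shape == frames.shape
```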

Domain Adaptation (DA) has become popular recently, especially with the integration of deep learning. With the two-branch (source and target) framework adopted by most DA works, finding a common feature space between the source and target domains is the ultimate goal, and the key is to design a domain loss that achieves it [5]. Discrepancy-based DA [24, 25, 26] is one of the major classes of methods, where the main goal is to reduce the distribution distance between the two domains. Adversarial-based DA [10, 11] is also popular, with concepts similar to GANs [12], using domain discriminators: with carefully designed adversarial objectives, the domain discriminator and the feature extractor are optimized through min-max training. Some works further improve performance by assigning pseudo-labels to target data [32, 41]. Furthermore, Ensemble-based DA [34, 21] incorporates multiple target branches to build an ensemble model. Recently, Attention-based DA [39, 18] assigns attention weights to different regions of images for more effective DA. Unlike for images, video-based DA is still under-explored. Most works concentrate on small-scale video DA datasets [36, 43, 14]. Recently, two larger-scale cross-domain video classification datasets, along with a state-of-the-art approach, were proposed [3, 4]. Moreover, some authors have proposed novel frameworks that utilize auxiliary data for other video tasks, including object detection [19] and action localization [45]. These works differ from ours in either the video task addressed [19, 3, 4] or the access to labels of auxiliary data [45].

Self-Supervised Learning has become popular in recent years for images and videos given its ability to learn informative feature representations without human supervision. The key is to design an auxiliary task (or pretext task) that is related to the main task and whose labels can be self-annotated. Most recent works for videos design auxiliary tasks based on spatio-temporal orders of videos [22, 40, 15, 1, 42]. Different from these works, our proposed auxiliary task predicts the temporal permutation of domains for cross-domain videos, aiming to address the problem of spatio-temporal variations for action segmentation.
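
As a concrete illustration of how such a cross-domain pretext task can self-annotate its labels, the sketch below shuffles segment-level features drawn from source and target videos and derives the domain-permutation target that a sequential domain classifier would be trained to predict. The segment sampling, pooling, and label encoding here are assumptions and may differ from the actual SSTDA design.

```python
import torch

def make_sequential_domain_sample(src_segs, tgt_segs):
    """Self-annotate one sequential domain prediction sample.

    src_segs / tgt_segs: lists of segment-level feature tensors of shape
    (dim,), pooled from untrimmed source/target videos. Returns the shuffled
    segment sequence and its domain sequence (0 = source, 1 = target), i.e.
    one of the possible domain permutations, with no human annotation.
    """
    segments = src_segs + tgt_segs
    domains = [0] * len(src_segs) + [1] * len(tgt_segs)
    order = torch.randperm(len(segments)).tolist()     # random shuffle
    shuffled = torch.stack([segments[i] for i in order])    # (n_seg, dim)
    label = torch.tensor([domains[i] for i in order])       # (n_seg,)
    return shuffled, label

# Usage: 2 source + 2 target segments -> one of C(4,2) = 6 permutations.
src = [torch.randn(64) for _ in range(2)]
tgt = [torch.randn(64) for _ in range(2)]
seq, perm_label = make_sequential_domain_sample(src, tgt)
# A classifier over `seq` trained to predict `perm_label` encourages
# features whose global temporal dynamics align across the two domains.
```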
