diff --git "a/BdE1T4oBgHgl3EQfpQWt/content/tmp_files/load_file.txt" "b/BdE1T4oBgHgl3EQfpQWt/content/tmp_files/load_file.txt" new file mode 100644--- /dev/null +++ "b/BdE1T4oBgHgl3EQfpQWt/content/tmp_files/load_file.txt" @@ -0,0 +1,2095 @@ +filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf,len=2094 +page_content='Noname manuscript No.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' (will be inserted by the editor) HyRSM++: Hybrid Relation Guided Temporal Set Matching for Few-shot Action Recognition Xiang Wang · Shiwei Zhang · Zhiwu Qing · Zhengrong Zuo · Changxin Gao · Rong Jin · Nong Sang Received: date / Accepted: date Abstract Few-shot action recognition is a challenging but practical problem aiming to learn a model that can be eas- ily adapted to identify new action categories with only a few labeled samples.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' Recent attempts mainly focus on learning deep representations for each video individually under the episodic meta-learning regime and then performing tempo- ral alignment to match query and support videos.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' However, they still suffer from two drawbacks: (i) learning individ- ual features without considering the entire task may result in limited representation capability, and (ii) existing align- ment strategies are sensitive to noises and misaligned in- stances.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' To handle the two limitations, we propose a novel Hybrid Relation guided temporal Set Matching (HyRSM++) approach for few-shot action recognition.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' The core idea of HyRSM++ is to integrate all videos within the task to learn discriminative representations and involve a robust match- ing technique.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' To be specific, HyRSM++ consists of two key components, a hybrid relation module and a temporal set matching metric.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' Given the basic representations from the feature extractor, the hybrid relation module is introduced to fully exploit associated relations within and cross videos in an episodic task and thus can learn task-specific embed- dings.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' Subsequently, in the temporal set matching metric, we carry out the distance measure between query and support Xiang Wang · Zhiwu Qing · Zhengrong Zuo · Changxin Gao (Corre- sponding author) · Nong Sang Key Laboratory of Ministry of Education for Image Processing and Intelligent Control, School of Artificial Intelligence and Automation, Huazhong University of Science and Technology E-mail: {wxiang, qzw, zhrzuo, cgao, nsang}@hust.' 
videos from a set matching perspective and design a bidirectional Mean Hausdorff Metric to improve the resilience to misaligned instances. In addition, we explicitly exploit the temporal coherence in videos to regularize the matching process. In this way, HyRSM++ facilitates informative correlation exchange among videos and enables flexible predictions under the data-limited scenario. Furthermore, we extend the proposed HyRSM++ to deal with the more challenging semi-supervised few-shot action recognition and unsupervised few-shot action recognition tasks. Experimental results on multiple benchmarks demonstrate that our method consistently outperforms existing methods and achieves state-of-the-art performance under various few-shot settings. The source code is available at https://github.com/alibaba-mmai-research/HyRSMPlusPlus.

Keywords  Few-shot Action Recognition · Set Matching · Semi-supervised Few-shot Action Recognition · Unsupervised Few-shot Action Recognition

1 Introduction

Recently, the development of large-scale video benchmarks [8, 23, 6, 13, 24] and deep networks [88, 51, 18, 89, 65, 52] has significantly boosted the progress of action recognition. To achieve this success, we typically require large amounts of manually labeled data. However, acquiring these labeled examples consumes a lot of manpower and time, which actually limits further applications of this task.
In this case, researchers look to alternatives that achieve action classification without extensive costly labeling. Few-shot action recognition is a promising direction to reduce manual annotation and has thus attracted much attention recently [112, 105]. It aims at learning to classify unseen action classes with extremely few annotated examples.

Fig. 1 (a) Concept of the proposed hybrid relation module: we adaptively produce task-specific video embeddings by extracting relevant discriminative patterns across videos in an episodic task. (b) Example of make coffee: current temporal alignment metrics tend to be strict, resulting in incorrect matches on misaligned videos.
In contrast, the proposed temporal set matching metric, which combines a set matching technique with temporal coherence regularization, is more flexible in finding the best correspondences.

To solve the few-shot data-scarcity problem, popular attempts [112, 7, 68, 106] are mainly based on the metric-based meta-learning technique [86], in which a common embedding space is first learnt via episodic training and then an explicit or implicit alignment metric is employed to calculate the distances between the query (test) videos and support (reference) videos for classification in an episodic task. Typically, the Ordered Temporal Alignment Module (OTAM) [7] adopts a deep feature extractor to convert an input video into a frame feature sequence independently and explicitly explores the ordered temporal alignment path between support and query videos in this feature space. The Temporal-Relational CrossTransformer (TRX) [68] learns a deep embedding space and tries to exhaustively construct temporally-corresponding sub-sequences of actions to compare. Some recent works [33, 94, 108, 62] propose to design multi-level metrics for few-shot action recognition.

Although these methods have achieved remarkable performance, there are still two limitations: individual feature learning and an inflexible matching strategy. First, discriminative interactive clues across videos in an episode are ignored when each video is considered independently during representation learning. As a result, these methods implicitly assume that the learned representations are equally effective on different episodic tasks and maintain a fixed set of video features for all test-time tasks, i.e., they are task-agnostic, and hence might overlook the most discriminative dimensions for the current task. Existing work also shows that task-agnostic methods tend to suffer inferior generalization in other fields, such as image recognition [47, 101], NLP [66, 57], and information retrieval [53].
Second, actions are usually complicated and involve many sub-actions with different orders and offsets, which may cause the failure of existing temporal alignment metrics. For example, as shown in Figure 1(b), to make coffee, one can pour water before pouring coffee powder, or in the reverse order, hence it is hard for recent temporal alignment strategies to find the right correspondences. Thus a more flexible metric is required to cope with the misalignment.

Inspired by the above observations, we solve the few-shot action recognition problem by developing a novel Hybrid Relation guided temporal Set Matching algorithm, dubbed HyRSM++, which is architecturally composed of a hybrid relation module and a temporal set matching metric. In the hybrid relation module, we argue that the considerable relevant relations within and across videos are beneficial for generating a set of customized features that are discriminative for a given task. To this end, we first apply an intra-relation function to strengthen structural patterns within a video by modeling long-range temporal dependencies. Then an inter-relation function operates on different videos to extract rich semantic information and reinforce the features that are more relevant to query predictions, as shown in Figure 1(a). By this means, we can learn task-specific embeddings for the few-shot task. On top of the hybrid relation module, we design a novel temporal set matching metric, consisting of a bidirectional Mean Hausdorff Metric and a temporal coherence regularization, to calculate the distances between query and support videos, as shown in Figure 1(b). The objective of the bidirectional Mean Hausdorff Metric is to measure video distance from the set matching perspective. Concretely, we treat each video as a set of frames and relax the strictly ordered constraints to acquire better query-support correspondences.
Furthermore, to exploit long-range temporal order dependencies, we explicitly impose temporal coherence regularization on the input videos for a more stable measurement without introducing extra network parameters. In this way, by combining the hybrid relation module and the temporal set matching metric, the proposed HyRSM++ can sufficiently integrate semantically relational representations within the entire task and provide flexible video matching in an end-to-end manner. We evaluate the proposed HyRSM++ on six challenging benchmarks and achieve remarkable improvements against current state-of-the-art methods.

Although the intuition of HyRSM++ is straightforward, it is elaborately designed for few-shot action recognition. Can our HyRSM++ be applied to the more challenging semi-supervised or unsupervised action recognition tasks even if the settings are entirely different? To answer this question, we extend HyRSM++ to the semi-supervised and unsupervised objectives with minor task adaptation modifications, and experimental results indicate that HyRSM++ can be adapted to different scenarios well and achieves impressive performance.

In summary, we make the following four contributions:

(1) We propose a novel hybrid relation module to capture the intra- and inter-relations inside the episodic task, yielding task-specific representations for different tasks.

(2) We reformulate the query-support video pair distance metric as a set matching problem and develop a bidirectional Mean Hausdorff Metric, which is robust to complex actions. To utilize long-term temporal order cues, we further design a new temporal coherence regularization on videos without adding network parameters.

(3) We conduct extensive experiments on six challenging datasets to verify that the proposed HyRSM++ achieves superior performance over the state-of-the-art methods.
(4) We show that the proposed HyRSM++ can be directly extended to the more challenging semi-supervised few-shot action recognition and unsupervised few-shot action recognition tasks with minor modifications.

In this paper, we have extended our preliminary CVPR-2022 conference version [91] in the following aspects. i) We integrate the temporal coherence regularization and the set matching strategy into a temporal set matching metric so that the proposed metric can explicitly leverage temporal order information in videos and match flexibly. Note that the temporal coherence regularization does not introduce additional parameters and will not increase the burden of inference. ii) We conduct more comprehensive ablation studies to verify the effectiveness and efficiency of the proposed HyRSM++. iii) We clearly improve the few-shot action recognition performance over the previous version. Experimental results also show that HyRSM++ significantly surpasses existing competitive methods and achieves state-of-the-art performance. iv) We show that the proposed HyRSM++ can be easily extended to the more challenging semi-supervised few-shot action recognition and unsupervised few-shot action recognition tasks.

2 Related Work

In the literature, several lines of research are related to this paper, mainly including few-shot image classification, set matching, temporal coherence, semi-supervised few-shot learning, unsupervised few-shot learning, and few-shot action recognition. In this section, we briefly review them separately.

Few-shot Image Classification. Recently, research on few-shot learning [17, 55, 56] has proceeded roughly along the following directions: data augmentation, optimization-based, and metric-based.
Data augmentation is an intuitive way to increase the number of training samples and improve the diversity of data. Mainstream strategies include spatial deformation [70, 67] and semantic feature augmentation [9, 100]. Optimization-based methods learn a meta-learner model that can quickly adapt to a new task given a few training examples. These algorithms include the LSTM-based meta-learner [74], learning efficient model initialization [19], and learning a stochastic gradient descent optimizer [50]. Metric-based methods attempt to address the few-shot classification problem by "learning to compare". This family of approaches aims to learn a feature space and compare query and support images through Euclidean distance [76, 101, 99], cosine similarity [86, 98], or a learnable non-linear metric [80, 29, 47]. Our work is more closely related to the metric-based methods [47, 101] that share the same spirit of learning task-specific features, whereas we focus on solving the more challenging few-shot action recognition task with diverse spatio-temporal dependencies. In addition, we will further point out the differences and conduct performance comparisons in the experimental section.

Set Matching. The objective of set matching is to accurately measure the similarity of two sets, which has received much attention over the years. Set matching techniques can be used to efficiently process complex data structures [2, 72, 3] and have been applied in many computer vision fields, including face recognition [63, 93, 92], object matching [73, 107], etc. Among them, the Hausdorff distance is an important alternative for handling set matching problems.
The Hausdorff distance and its variants have been widely used in the field of image matching and have achieved remarkable results [34, 16, 35, 107, 82, 79]. Inspired by these great successes, we introduce set matching into the few-shot action recognition field for the first time.

Temporal Coherence. Videos naturally involve temporal continuity, and much effort has been devoted to exploring how to leverage this property effectively [11, 22, 27, 58]. The Inverse Difference Moment (IDM) [11] is a commonly used measure of local homogeneity, which assumes that in a sequence, two elements are more similar if they are located next to each other. The idea of IDM has been widely applied to texture feature extraction [60], face recognition [59], and unsupervised representation learning [22, 27], achieving remarkable performance. In this paper, we focus on constraining the few-shot matching process by exploiting temporal coherence.

Semi-supervised Few-shot Learning. In practical application scenarios, there are usually many unlabeled samples. Semi-supervised few-shot learning considers learning new concepts in the presence of extra unlabeled data. Ren et al. [71] first introduce the challenging semi-supervised few-shot learning paradigm and refine the prototypes by adopting a soft k-means on unlabeled data. LST [49] proposes a novel recursive-learning-based self-training strategy for robust convergence of the inner loop.
TransMatch [103] develops a new transfer learning framework by incorporating MixMatch [4] and existing few-shot learning methods. PTN [31] employs the Poisson learning model to obtain informative representations between the labeled and unlabeled data. PLCM [32] and iLPC [44] focus on cleaning predicted pseudo-labels and generating accurate confidence estimates. In the field of semi-supervised few-shot action recognition, LIM [113] utilizes a label-independent memory to preserve a feature bank and produces class prototypes for query classification.

Unsupervised Few-shot Learning. The objective of unsupervised few-shot learning is to utilize unlabeled samples to construct meta-tasks for few-shot training. CACTUs [30] and UFLST [36] construct many tasks by clustering embeddings and optimize the meta-learning process over the constructed tasks. UMTRA [38] generates artificial tasks by randomly sampling support examples from the training set and produces the corresponding queries by augmentation. ULDA [69] and AAL [1] follow this paradigm to randomly group augmented images for meta-learning and point out the importance of data augmentation. More recently, MetaUVFS [64] presents the first unsupervised meta-learning algorithm for few-shot action recognition and adopts a two-stream 2D and 3D CNN model to explore spatial and temporal features via contrastive learning.

Few-shot Action Recognition. The difference between few-shot action recognition and the previous few-shot learning approaches is that it deals with more complex, higher-dimensional video data instead of two-dimensional images. Existing methods mainly focus on metric-based learning.
OSS-Metric Learning [40] adopts the OSS-Metric of video pairs to match videos. TARN [5] learns an attention-based deep-distance measure from an attribute to a class center for zero-shot and few-shot action recognition. CMN [112] utilizes a multi-saliency embedding algorithm to encode video representations. AMeFu-Net [20] uses depth information to assist learning. Xian et al. [95] propose to learn a generative adversarial network and produce video features of novel classes for generalization. Coskun et al. [12] leverage object-object interaction, hand grasp, optical flow, and hand trajectory to learn an egocentric few-shot classifier. OTAM [7] preserves the frame ordering in video data and estimates distances with ordered temporal alignment. ARN [105] introduces a self-supervised permutation-invariant strategy for spatio-temporal modeling. ITANet [106] proposes a frame-wise implicit temporal alignment strategy to achieve accurate and robust video matching. TRX [68] matches actions by matching plentiful tuples of different sub-sequences. More recently, STRM [84] makes use of local and global enrichment mechanisms for spatio-temporal modeling based on TRX [68] and enforces class-separability at different phases. Some works [33, 94, 108, 62] propose to design multi-level metrics for few-shot action recognition. Note that most of the above methods focus on learning video embeddings independently.
Unlike these previous methods, our HyRSM++ improves the transferability of the embeddings by learning intra- and inter-relational patterns that can better generalize to unseen classes.

3 Method

In this section, we first formulate the definition of the few-shot action recognition task. Then we present our Hybrid Relation guided temporal Set Matching (HyRSM++) method.

3.1 Problem formulation

Few-shot action recognition aims to obtain a model that can generalize well to new classes when limited labeled video data is available. To make training more faithful to the test environment, we adopt the episodic training manner [86] for few-shot adaptation, as in previous work [86, 7, 68, 106]. In each episodic task, there are two sets, i.e., a support set $S$ and a query set $Q$. The support set $S$ contains $N \times K$ samples from $N$ different action classes, and each class contains $K$ support videos, termed the N-way K-shot problem. The goal is to classify the query videos in $Q$ into the $N$ classes with these support videos.
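To make the episodic protocol above concrete, the following Python sketch builds one N-way K-shot episode from a pool of labeled training videos; the dictionary layout, the variable names, and the number of query videos per class are illustrative assumptions rather than details specified in the paper.

```python
import random

def sample_episode(videos_by_class, n_way=5, k_shot=1, n_query=1):
    """Build one N-way K-shot episodic task: a support set S and a query set Q.

    videos_by_class: dict mapping class name -> list of video identifiers (assumed layout).
    Returns two lists of (video, episode_label) pairs.
    """
    classes = random.sample(sorted(videos_by_class.keys()), n_way)   # N classes for this episode
    support, query = [], []
    for label, cls in enumerate(classes):
        picks = random.sample(videos_by_class[cls], k_shot + n_query)
        support += [(v, label) for v in picks[:k_shot]]   # K labeled support videos per class
        query += [(v, label) for v in picks[k_shot:]]     # query videos to classify into the N classes
    return support, query

# Example usage on a hypothetical training split:
# support, query = sample_episode(train_videos_by_class, n_way=5, k_shot=1)
```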
3.2 HyRSM++

Pipeline. The overall architecture of HyRSM++ is illustrated in Figure 2. For each input video sequence, we first divide it into $T$ segments and extract a snippet from each segment, as in previous methods [88, 7]. In this way, in an episodic task, the support set can be denoted as $S = \{s_1, s_2, \dots, s_{N\times K}\}$, where $s_i = \{s_i^1, s_i^2, \dots, s_i^T\}$. For simplicity and convenience, we discuss the process of the N-way 1-shot problem, i.e., $K = 1$, and consider that the query set $Q$ contains a single video $q$. Then we apply an embedding model to extract the feature representations for each video sequence and obtain the support features $F_s = \{f_{s_1}, f_{s_2}, \dots, f_{s_N}\}$ and the query feature $f_q$, where $f_{s_i} = \{f_i^1, f_i^2, \dots, f_i^T\}$ and $f_q = \{f_q^1, f_q^2, \dots, f_q^T\}$. After that, we input $F_s$ and $f_q$ to the hybrid relation module to learn task-specific features, resulting in $\tilde{F}_s$ and $\tilde{f}_q$. Finally, the enhanced representations $\tilde{F}_s$ and $\tilde{f}_q$ are fed into the set matching metric to generate matching scores. Based on the output scores, we can train or test the whole framework.
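As a concrete illustration of the segment-wise sampling step above (in the spirit of the TSN-style strategy of [88]), the sketch below divides a decoded video into T equal segments and draws one snippet index per segment; random selection within each segment at training time and center selection at test time are common-practice assumptions, not choices stated here.

```python
import random

def sample_snippet_indices(num_frames, T=8, training=True):
    """Split a video with num_frames frames into T segments and pick one snippet index per segment."""
    seg_len = num_frames / float(T)
    indices = []
    for t in range(T):
        start = int(seg_len * t)
        end = max(int(seg_len * (t + 1)), start + 1)
        if training:
            indices.append(random.randrange(start, min(end, num_frames)))  # random snippet per segment
        else:
            indices.append(min((start + end) // 2, num_frames - 1))        # central snippet per segment
    return indices
```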
Fig. 2 Schematic illustration of the proposed Hybrid Relation guided temporal Set Matching (HyRSM++) approach on a 3-way 1-shot problem. Given an episode of video data, a feature embedding network is first employed to extract their feature vectors. Then, a hybrid relation module is applied to integrate rich information within each video and across videos with intra-relation and inter-relation functions. Finally, the task-specific features are fed forward into a temporal set matching metric for matching score prediction. Best viewed in color.

Hybrid relation module. Given the features $F_s$ and $f_q$ output by the embedding network, current approaches, e.g., OTAM [7], directly apply a classifier $C$ in this feature space, which can be formulated as:

$$y_i = C(f_{s_i}, f_q) \tag{1}$$

where $y_i$ is the matching score between $f_{s_i}$ and $f_q$. During training, $y_i = 1$ if they belong to the same class, otherwise $y_i = 0$. In the testing phase, $y_i$ can be adopted to predict the query label. From the perspective of probability theory, the decision is based only on the priors $f_{s_i}$ and $f_q$:

$$y_i = P\big((f_{s_i}, f_q) \mid f_{s_i}, f_q\big) \tag{2}$$

which is a typical task-agnostic method.
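For reference, such a task-agnostic classifier $C$ can be sketched as below, where each video is reduced to an isolated, temporally pooled embedding and matched by cosine similarity; this simple instantiation is only for illustration and is not the ordered alignment actually used by OTAM.

```python
import torch
import torch.nn.functional as F

def task_agnostic_scores(support_feats, query_feat):
    """support_feats: [N, T, C] features of the N support videos; query_feat: [T, C].
    Each video is embedded independently (no task-level interaction), then compared."""
    support_proto = support_feats.mean(dim=1)            # temporal average pooling -> [N, C]
    query_proto = query_feat.mean(dim=0, keepdim=True)   # [1, C]
    return F.cosine_similarity(query_proto, support_proto, dim=-1)   # matching scores y_i, shape [N]
```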
However, the task-agnostic embedding is often vulnerable to overfitting irrelevant representations [29, 47] and may fail to transfer to unseen classes not observed in the training stage.

Unlike the previous methods, we propose to learn task-specific features for each target task. To achieve this goal, we introduce a hybrid relation module to generate task-specific features by capturing rich information from different videos in an episode. Specifically, we design the hybrid relation module $H$ in the following form:

$$\tilde{f}_i = H(f_i, G); \quad f_i \in [F_s, f_q], \; G = [F_s, f_q] \tag{3}$$

That is, we improve the feature $f_i$ by aggregating semantic information across the video representations $G$ in an episodic task, allowing the obtained task-specific feature $\tilde{f}_i$ to be more discriminative than the isolated feature. For efficiency, we further decompose the hybrid relation module into two parts: an intra-relation function $H_a$ and an inter-relation function $H_e$.

The intra-relation function aims to strengthen structural patterns within a video by capturing long-range temporal dependencies. We express this process as:

$$f_i^a = H_a(f_i) \tag{4}$$

where $f_i^a \in \mathbb{R}^{T \times C}$ is the output of $f_i$ through the intra-relation function and has the same shape as $f_i$. Note that the intra-relation function has many alternative implementations, including multi-head self-attention (MSA), Transformer [85], Bi-LSTM [25], Bi-GRU [10], etc.; it is highly flexible and can be any one of them.
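Since the concrete choice of $H_a$ is left open, the sketch below instantiates the intra-relation function with standard multi-head self-attention over the $T$ frame features of a single video; the embedding width, head count, and residual layout are illustrative assumptions.

```python
import torch
import torch.nn as nn

class IntraRelation(nn.Module):
    """Intra-relation function H_a (Eq. 4): models long-range temporal dependencies within one video.
    Input and output shape: [T, C]; realized here with multi-head self-attention."""
    def __init__(self, dim=2048, num_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim=dim, num_heads=num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, f_i):                    # f_i: [T, C]
        x = f_i.unsqueeze(0)                   # add a batch dimension -> [1, T, C]
        out, _ = self.attn(x, x, x)            # every frame attends to all frames of the same video
        return self.norm(x + out).squeeze(0)   # residual + layer norm, back to [T, C]
```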
Based on the features generated by the intra-relation function, an inter-relation function is deployed to semantically enhance the features across different videos:

$$f_i^e = H_e(f_i^a, G^a) = \sum_{j=1}^{|G^a|} \kappa\big(\psi(f_i^a), \psi(f_j^a)\big) \cdot \psi(f_j^a) \tag{5}$$

where $G^a = [F_s^a, f_q^a]$, $\psi(\cdot)$ is a global average pooling layer, and $\kappa(f_i^a, f_j^a)$ is a learnable function that calculates the semantic correlation between $f_i^a$ and $f_j^a$. The underlying logic is that if the correlation score $\kappa(f_i^a, f_j^a)$ is high, the two videos tend to have the same semantic content, hence we can borrow more information from $f_j^a$ to elevate the representation $f_i^a$, and vice versa. In the same way, if the score $\kappa(f_i^a, f_i^a)$ is less than 1, it indicates that some irrelevant information in $f_i^a$ should be suppressed. In this way, we can improve the feature discrimination by taking full advantage of the limited samples in each episodic task. The inter-relation function has implementations similar to those of the intra-relation function but with a different target. After the inter-relation function, we employ an Expand-Concatenate-Convolution operation to aggregate information, as shown in Figure 2, where the output feature $\tilde{f}_i$ has the same shape as $f_i^e$. In the form of a prior, our method can be formulated as:

$$y_i = P\big((\tilde{f}_{s_i}, \tilde{f}_q) \mid H(f_{s_i}, G), H(f_q, G)\big); \quad G = [F_s, f_q] \tag{6}$$

Intuitively, compared with Equation 2, this is conducive to making better decisions because more priors are provided. In particular, the hybrid relation module is a plug-and-play unit. In the experiments, we fully explore different configurations of the hybrid relation module and further investigate its insertability.
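A minimal sketch of the inter-relation step of Eq. (5) together with the Expand-Concatenate-Convolution aggregation is given below. Here $\kappa$ is instantiated as a scaled dot-product correlation over pooled video descriptors and the channel width is fixed for illustration; both are assumptions, since the paper only requires $\kappa$ to be a learnable correlation function.

```python
import torch
import torch.nn as nn

class InterRelation(nn.Module):
    """Inter-relation function H_e (Eq. 5) followed by an Expand-Concatenate-Convolution aggregation."""
    def __init__(self, dim=2048):
        super().__init__()
        self.q_proj = nn.Linear(dim, dim)
        self.k_proj = nn.Linear(dim, dim)
        self.fuse = nn.Conv1d(2 * dim, dim, kernel_size=1)   # 1x1 convolution along the temporal axis

    def forward(self, feats_a):                # feats_a: [M, T, C], all M videos of the episode
        pooled = feats_a.mean(dim=1)           # psi: global average pooling -> [M, C]
        corr = self.q_proj(pooled) @ self.k_proj(pooled).t() / pooled.shape[-1] ** 0.5
        corr = corr.softmax(dim=-1)            # kappa(psi(f_i^a), psi(f_j^a)) for all pairs -> [M, M]
        f_e = corr @ pooled                    # cross-video aggregation of pooled descriptors -> [M, C]
        expanded = f_e.unsqueeze(1).expand_as(feats_a)           # expand over the T frames -> [M, T, C]
        fused = torch.cat([feats_a, expanded], dim=-1)           # concatenate -> [M, T, 2C]
        return self.fuse(fused.transpose(1, 2)).transpose(1, 2)  # convolve -> task-specific features [M, T, C]
```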
Temporal set matching metric. Many prior few-shot action recognition algorithms impose a strict temporal alignment strategy on the generated video representations for few-shot classification. However, they suffer from failed matches when encountering misaligned video instances. Instead, we develop a flexible metric based on set matching that explicitly discovers optimal frame matching pairs and is thus insensitive to misalignment. Concretely, the proposed temporal set matching metric contains two parts, a bidirectional Mean Hausdorff Metric (Bi-MHM) and a temporal coherence regularization, which we describe in detail below.

Given the relation-enhanced features $\tilde{F}_s$ and $\tilde{f}_q$, we present a novel metric to enable efficient and flexible matching. In this metric, we treat each video as a set of $T$ frames and reformulate distance measurement between videos as a set matching problem, which is robust to complicated instances, whether they are aligned or not. Specifically, we achieve this goal by modifying the Hausdorff distance, a typical set matching approach. The standard Hausdorff distance $D$ can be formulated as:

$$
d(\tilde{f}_i, \tilde{f}_q) = \max_{\tilde{f}_i^a \in \tilde{f}_i} \Big( \min_{\tilde{f}_q^b \in \tilde{f}_q} \big\| \tilde{f}_i^a - \tilde{f}_q^b \big\| \Big), \quad
d(\tilde{f}_q, \tilde{f}_i) = \max_{\tilde{f}_q^b \in \tilde{f}_q} \Big( \min_{\tilde{f}_i^a \in \tilde{f}_i} \big\| \tilde{f}_q^b - \tilde{f}_i^a \big\| \Big), \quad
D = \max\big(d(\tilde{f}_i, \tilde{f}_q), d(\tilde{f}_q, \tilde{f}_i)\big) \tag{7}
$$

where $\tilde{f}_i \in \mathbb{R}^{T \times C}$ contains $T$ frame features, and $\|\cdot\|$ is a distance measurement function, which is the cosine distance in our method. However, previous methods [102, 21, 111, 16] have pointed out that the Hausdorff distance can be easily affected by noisy examples, resulting in inaccurate measurements. Hence, they employ a directed modified Hausdorff distance that is robust to noise:

$$
d_m(\tilde{f}_i, \tilde{f}_q) = \frac{1}{N_i} \sum_{\tilde{f}_i^a \in \tilde{f}_i} \min_{\tilde{f}_q^b \in \tilde{f}_q} \big\| \tilde{f}_i^a - \tilde{f}_q^b \big\| \tag{8}
$$

where $N_i$ is the length of $\tilde{f}_i$, equal to $T$ in this paper.
Hausdorff distance and its variants have achieved great success in image matching [82, 16, 34] and face recognition [21, 79]. We thus propose to introduce the set matching strategy into the few-shot action recognition field and further design a novel bidirectional Mean Hausdorff Metric (Bi-MHM):

$$
D_b = \frac{1}{N_i} \sum_{\tilde{f}_i^a \in \tilde{f}_i} \Big( \min_{\tilde{f}_q^b \in \tilde{f}_q} \big\| \tilde{f}_i^a - \tilde{f}_q^b \big\| \Big) + \frac{1}{N_q} \sum_{\tilde{f}_q^b \in \tilde{f}_q} \Big( \min_{\tilde{f}_i^a \in \tilde{f}_i} \big\| \tilde{f}_q^b - \tilde{f}_i^a \big\| \Big)
\qquad (9)
$$

where $N_i$ and $N_q$ are the lengths of the support feature $\tilde{f}_i$ and the query feature $\tilde{f}_q$, respectively. The proposed Bi-MHM is a symmetric function, and its two terms are complementary to each other. From Equation 9, we can see that $D_b$ automatically finds the best correspondences between two videos, e.g., $\tilde{f}_i$ and $\tilde{f}_q$. Note that Bi-MHM is a non-parametric classifier and does not involve numerous non-parallel calculations, which helps to improve computing efficiency and transferability compared to previous complex alignment classifiers [7, 68]. Moreover, the hybrid relation module and Bi-MHM can mutually reinforce each other, consolidating the correlation between two videos collectively.
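The Bi-MHM of Equation 9 can be sketched in a few lines of NumPy; the query is then assigned to the support class with the smallest distance, with negative distances serving as logits during training. The cosine-normalization details below are illustrative assumptions.

```python
import numpy as np

def bi_mhm(fi, fq):
    """Bidirectional Mean Hausdorff Metric, Eq. (9).
    fi: [Ni, C] support frame features, fq: [Nq, C] query frame features."""
    fi = fi / np.linalg.norm(fi, axis=1, keepdims=True)
    fq = fq / np.linalg.norm(fq, axis=1, keepdims=True)
    d = 1.0 - fi @ fq.T                      # cosine distances, [Ni, Nq]
    forward = d.min(axis=1).mean()           # support frames -> nearest query frames
    backward = d.min(axis=0).mean()          # query frames -> nearest support frames
    return forward + backward

# Classification sketch over an N-way episode:
# dists = [bi_mhm(support_c, query) for support_c in supports]   # one entry per class
# prediction = int(np.argmin(dists))
```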
The Bi-MHM approach described above assumes that video sequence representations belonging to the same action share the same set structure in the feature space, and it does not explicitly utilize temporal order information. However, it would be much more general to take the inherent temporal information in videos into account. For this reason, we take advantage of the temporal coherence that naturally exists in sequential video data and construct a temporal coherence regularization to further constrain the matching process by incorporating temporal order information.

IDM [11] is a commonly used means of exploiting temporal coherence within videos, which can be formulated as:

$$
I(\tilde{f}_i) = \sum_{a=1}^{T} \sum_{b=1}^{T} \frac{1}{(a-b)^2 + 1} \cdot \big\| \tilde{f}_i^a - \tilde{f}_i^b \big\|
\qquad (10)
$$

where $\tilde{f}_i$ is the input video feature and $T$ is the temporal length of the video; this loss encourages frames that are close in time to also be close in the feature space. In addition, there is another way to use temporal order information in the literature [22, 59]:

$$
I(\tilde{f}_i; \tilde{f}_i^a, \tilde{f}_i^b) =
\begin{cases}
\big\| \tilde{f}_i^a - \tilde{f}_i^b \big\|, & \text{if } |a-b| = 1 \\
\max\big(0,\ m - \big\| \tilde{f}_i^a - \tilde{f}_i^b \big\|\big), & \text{if } |a-b| > 1
\end{cases}
\qquad (11)
$$

where $m$ is the size of the margin. Equation 11 exploits the video coherence property by pulling two frame features closer if they are adjacent and pushing them farther apart by a margin $m$ if they are not. Through observation, we can see that in Equation 10 all frames are pulled close regardless of temporal distance, whereas in Equation 11 all frame features that are not adjacent to the current frame are separated by the same margin $m$, i.e., all pairs are treated equally. Neither manner fully exploits the smooth and continuous changes of video. To this end, we propose a novel form to mine the temporal coherence property:

$$
I(\tilde{f}_i; \tilde{f}_i^a, \tilde{f}_i^b) =
\begin{cases}
\frac{1}{(a-b)^2 + 1} \cdot \big\| \tilde{f}_i^a - \tilde{f}_i^b \big\|, & \text{if } |a-b| \le \delta \\
\max\big(0,\ m_{ab} - \big\| \tilde{f}_i^a - \tilde{f}_i^b \big\|\big), & \text{if } |a-b| > \delta
\end{cases}
\qquad (12)
$$

where $\delta$ is a window size and $m_{ab} = 1 - e^{-\frac{(|a-b|-\delta)^2}{2\sigma^2}}$ provides a smooth, distance-dependent margin. Compared with the original forms, our proposed temporal coherence regularization better reflects the continuous change of video and thus leads to better performance.
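The regularizer in Equation 12 can be sketched as follows for a single video; the window size, σ, the per-pair averaging, and the use of cosine distance are illustrative choices rather than prescribed values.

```python
import numpy as np

def temporal_coherence_reg(f, delta=2, sigma=1.0):
    """Sketch of Eq. (12) for one video. f: [T, C] frame features."""
    T = f.shape[0]
    fn = f / np.linalg.norm(f, axis=1, keepdims=True)
    dist = 1.0 - fn @ fn.T                                   # pairwise cosine distances, [T, T]
    loss = 0.0
    for a in range(T):
        for b in range(T):
            if a == b:
                continue
            gap = abs(a - b)
            if gap <= delta:                                 # pull temporally close frames together
                loss += dist[a, b] / ((a - b) ** 2 + 1)
            else:                                            # push distant frames apart by a soft margin
                m_ab = 1.0 - np.exp(-((gap - delta) ** 2) / (2 * sigma ** 2))
                loss += max(0.0, m_ab - dist[a, b])
    return loss / (T * (T - 1))                              # average over frame pairs
```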
In the training phase, we take the negative distance for each class as the logit. Then we utilize the same cross-entropy loss as in [7, 68], the auxiliary semantic loss [46, 54], and the temporal coherence regularization to jointly train the model. The auxiliary semantic loss refers to the cross-entropy loss on the real action classes, which is widely used to improve training stability and generalization. During inference, we select the support class closest to the query for classification.

3.3 Extended applications of HyRSM++

3.3.1 Semi-supervised few-shot action recognition

The objective of semi-supervised few-shot action recognition [113] is to fully exploit the auxiliary information in unlabeled video data to boost few-shot classification. Compared with the standard supervised few-shot setting, in addition to the support set S and query set Q, an extra unlabeled set U is included in a semi-supervised few-shot task to alleviate data scarcity. We demonstrate that the proposed HyRSM++ can build a bridge between labeled and unlabeled examples, leading to higher classification performance.

Given an unlabeled set U, a common practice in the semi-supervised learning literature [110, 104, 77] is to adopt the pseudo-labeling technique [45], which assumes that the decision boundary usually lies in low-density areas and that data samples in a high-density area share the same label. Similarly, traditional semi-supervised few-shot learning methods [71, 49] usually produce pseudo labels for unlabeled data based on the known support set, and the generated high-confidence pseudo-labeled data is then added to augment the support set. In this paper, we follow this paradigm and utilize HyRSM++ to leverage unlabeled examples.
Algorithm 1 HyRSM++ for semi-supervised few-shot action recognition
Require: A labeled support set S, an auxiliary unlabeled set U, and a query set Q
Ensure: Optimized few-shot classifier HyRSM++
1: Feed the support set S and the unlabeled set U into HyRSM++ and obtain the category predictions for U based on Equation 9;
2: According to the prediction distribution, select high-confidence samples to generate pseudo-labels, and update S with the selected samples to obtain the augmented S′;
3: Apply the augmented S′ and the query set Q for supervised few-shot training as described in Section 3.2.

Since noisy videos usually incur higher losses during training, it is possible to leverage the strong HyRSM++ to distinguish between clean and noisy videos from the prediction scores. Based on this, we choose reliable pseudo-labeled samples in the unlabeled set according to the predictions and augment the support set with the high-confidence pseudo-labeled data. Subsequently, we take advantage of the augmented support set to classify the query videos as in the supervised few-shot task. During the training stage, many semi-supervised few-shot tasks are sampled to optimize the whole model, as shown in Algorithm 1. For inference, the evaluation is also conducted by sampling 10,000 episodic tasks.
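The high-confidence selection in Step 2 of Algorithm 1 can be sketched as below; the softmax over negative distances and the fixed 0.8 confidence threshold are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def select_pseudo_labels(dist_matrix, threshold=0.8):
    """dist_matrix: [num_unlabeled, N] Bi-MHM distances between each
    unlabeled video and the N support classes of the episode."""
    logits = -dist_matrix                                   # negative distance as class logits
    logits = logits - logits.max(axis=1, keepdims=True)     # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    conf = probs.max(axis=1)                                # confidence of the predicted class
    labels = probs.argmax(axis=1)                           # pseudo-label = nearest support class
    keep = conf >= threshold                                # keep only confident predictions
    return np.nonzero(keep)[0], labels[keep]

# indices, pseudo = select_pseudo_labels(dists)
# The selected unlabeled videos are then added to the support set S to form S'.
```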
3.3.2 Unsupervised few-shot action recognition

Unlike the previously described settings involving labeled data, unsupervised few-shot action recognition aims to use unlabeled data to construct few-shot tasks and learn to adapt to different tasks. We further extend HyRSM++ to this unsupervised task and verify its capability of transferring prior knowledge to deal with unseen tasks efficiently.

To perform unsupervised few-shot learning, constructing few-shot tasks is the first step. However, in the challenging unsupervised setting there are no label annotations that can be directly applied for few-shot learning. Following prior unsupervised few-shot algorithms [38, 36], we generate few-shot tasks by first adopting existing unsupervised learning approaches to learn initial feature embeddings of the input videos and then leveraging deep clustering techniques to construct pseudo-classes of the videos. According to the clustering results, we can produce few-shot tasks by sampling N-way K-shot episodes. We then use the constructed few-shot tasks to train HyRSM++. During the testing phase, we sample 10,000 episodes from the test set to obtain the performance, and the label information is only used for evaluation.
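For illustration, the episode construction from pseudo-classes could look like the sketch below. It substitutes plain k-means for the deep clustering step and assumes pre-computed video embeddings; the cluster count, the clustering method, and the function name are all illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_unsupervised_episode(embeddings, n_way=5, k_shot=1, n_query=1,
                               n_clusters=64, seed=0):
    """embeddings: [num_videos, D] features from unsupervised pre-training."""
    rng = np.random.default_rng(seed)
    pseudo_labels = KMeans(n_clusters=n_clusters, n_init=10,
                           random_state=seed).fit_predict(embeddings)
    # keep pseudo-classes large enough to supply both support and query samples
    valid = [c for c in range(n_clusters)
             if np.sum(pseudo_labels == c) >= k_shot + n_query]
    chosen = rng.choice(valid, size=n_way, replace=False)
    support, query = [], []
    for c in chosen:
        idx = rng.permutation(np.nonzero(pseudo_labels == c)[0])
        support.append(idx[:k_shot])
        query.append(idx[k_shot:k_shot + n_query])
    return np.stack(support), np.stack(query)   # video indices per pseudo-class
```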
Table 1 Comparison to recent few-shot action recognition methods on the meta-testing sets of SSv2-Full, Kinetics, Epic-kitchens, and HMDB51. The experiments are conducted under the 5-way setting, and results are reported as the number of shots increases from 1 to 5. "-" means the result is not available in published works, and the underline indicates the second-best result.

Method | Reference | Dataset | 1-shot | 2-shot | 3-shot | 4-shot | 5-shot
CMN++ [112] | ECCV'18 | SSv2-Full | 34.4 | - | - | - | 43.8
TRN++ [109] | ECCV'18 | SSv2-Full | 38.6 | - | - | - | 48.9
OTAM [7] | CVPR'20 | SSv2-Full | 42.8 | 49.1 | 51.5 | 52.0 | 52.3
TTAN [48] | ArXiv'21 | SSv2-Full | 46.3 | 52.5 | 57.3 | 59.3 | 60.4
ITANet [106] | IJCAI'21 | SSv2-Full | 49.2 | 55.5 | 59.1 | 61.0 | 62.3
TRX (Ω={1}) [68] | CVPR'21 | SSv2-Full | 38.8 | 49.7 | 54.4 | 58.0 | 60.6
TRX (Ω={2,3}) [68] | CVPR'21 | SSv2-Full | 42.0 | 53.1 | 57.6 | 61.1 | 64.6
STRM [84] | CVPR'22 | SSv2-Full | 43.1 | 53.3 | 59.1 | 61.7 | 68.1
MTFAN [94] | CVPR'22 | SSv2-Full | 45.7 | - | - | - | 60.4
Nguyen et al. [62] | ECCV'22 | SSv2-Full | 43.8 | - | - | - | 61.1
Huang et al. [33] | ECCV'22 | SSv2-Full | 49.3 | - | - | - | 66.7
HCL [108] | ECCV'22 | SSv2-Full | 47.3 | 54.5 | 59.0 | 62.4 | 64.9
HyRSM | CVPR'22 | SSv2-Full | 54.3 (+5.0) | 62.2 (+6.7) | 65.1 (+6.0) | 67.9 (+5.5) | 69.0 (+0.9)
HyRSM++ | - | SSv2-Full | 55.0 (+5.7) | 63.5 (+8.0) | 66.0 (+6.9) | 68.8 (+6.4) | 69.8 (+1.7)
MatchingNet [86] | NeurIPS'16 | Kinetics | 53.3 | 64.3 | 69.2 | 71.8 | 74.6
MAML [19] | ICML'17 | Kinetics | 54.2 | 65.5 | 70.0 | 72.1 | 75.3
Plain CMN [112] | ECCV'18 | Kinetics | 57.3 | 67.5 | 72.5 | 74.7 | 76.0
CMN-J [113] | TPAMI'20 | Kinetics | 60.5 | 70.0 | 75.6 | 77.3 | 78.9
TARN [5] | BMVC'19 | Kinetics | 64.8 | - | - | - | 78.5
ARN [105] | ECCV'20 | Kinetics | 63.7 | - | - | - | 82.4
OTAM [7] | CVPR'20 | Kinetics | 73.0 | 75.9 | 78.7 | 81.9 | 85.8
ITANet [106] | IJCAI'21 | Kinetics | 73.6 | - | - | - | 84.3
TRX (Ω={1}) [68] | CVPR'21 | Kinetics | 63.6 | 75.4 | 80.1 | 82.4 | 85.2
TRX (Ω={2,3}) [68] | CVPR'21 | Kinetics | 63.6 | 76.2 | 81.8 | 83.4 | 85.9
STRM [84] | CVPR'22 | Kinetics | 62.9 | 76.4 | 81.1 | 83.8 | 86.7
MTFAN [94] | CVPR'22 | Kinetics | 74.6 | - | - | - | 87.4
Nguyen et al. [62] | ECCV'22 | Kinetics | 74.3 | - | - | - | 87.4
Huang et al. [33] | ECCV'22 | Kinetics | 73.3 | - | - | - | 86.4
HCL [108] | ECCV'22 | Kinetics | 73.7 | 79.1 | 82.4 | 84.0 | 85.8
HyRSM | CVPR'22 | Kinetics | 73.7 (-0.9) | 80.0 (+0.9) | 83.5 (+1.1) | 84.6 (+0.6) | 86.1 (-1.3)
HyRSM++ | - | Kinetics | 74.0 (-0.6) | 80.8 (+1.7) | 83.9 (+1.5) | 85.3 (+1.3) | 86.4 (-1.0)
OTAM [7] | CVPR'20 | Epic-kitchens | 46.0 | 50.3 | 53.9 | 54.9 | 56.3
TRX [68] | CVPR'21 | Epic-kitchens | 43.4 | 50.6 | 53.5 | 56.8 | 58.9
STRM [84] | CVPR'22 | Epic-kitchens | 42.8 | 50.4 | 54.9 | 58.0 | 59.2
HyRSM | CVPR'22 | Epic-kitchens | 47.4 (+1.4) | 52.9 (+2.3) | 56.4 (+1.5) | 58.8 (+0.8) | 59.8 (+0.6)
HyRSM++ | - | Epic-kitchens | 48.0 (+2.0) | 54.9 (+4.3) | 57.5 (+2.6) | 59.6 (+1.6) | 60.8 (+1.6)
ARN [105] | ECCV'20 | HMDB51 | 45.5 | - | - | - | 60.6
OTAM [7] | CVPR'20 | HMDB51 | 54.5 | 63.5 | 65.7 | 67.2 | 68.0
TTAN [48] | ArXiv'21 | HMDB51 | 57.1 | - | - | - | 74.0
TRX [68] | CVPR'21 | HMDB51 | 53.1 | 62.5 | 66.8 | 70.2 | 75.6
STRM [84] | CVPR'22 | HMDB51 | 52.3 | 62.5 | 67.4 | 70.9 | 77.3
MTFAN [94] | CVPR'22 | HMDB51 | 59.0 | - | - | - | 74.6
Nguyen et al. [62] | ECCV'22 | HMDB51 | 59.6 | - | - | - | 76.9
Huang et al. [33] | ECCV'22 | HMDB51 | 60.1 | - | - | - | 77.0
HCL [108] | ECCV'22 | HMDB51 | 59.1 | 66.5 | 71.2 | 73.8 | 76.3
HyRSM | CVPR'22 | HMDB51 | 60.3 (+0.2) | 68.2 (+1.7) | 71.7 (+0.5) | 75.3 (+1.5) | 76.0 (-1.3)
HyRSM++ | - | HMDB51 | 61.5 (+1.4) | 69.0 (+2.5) | 72.7 (+1.5) | 75.4 (+1.6) | 76.4 (-0.9)
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='5 (+1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='4) 69.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='0 (+2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='5) 72.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='7 (+1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='5) 75.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='4 (+1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='6) 76.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='4 (-0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='9) 4 Experiments In this section, the following key questions will be answered in detail: (1) Is HyRSM++ competitive to other state-of- the-art methods on challenging few-shot benchmarks?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' (2) What components play an integral role in HyRSM++ so that HyRSM++ can work well?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' (3) Can the proposed hybrid re- lation module be viewed as a simple plug-and-play unit and have the same effect for other methods?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' (4) Does the pro- posed temporal set matching metric have an advantage over other measure competitors?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' (5) Can HyRSM++ have stable performance in a variety of different video scenarios?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='1 Datasets and experimental setups Datasets.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' We evaluate our HyRSM++ on six standard public few-shot benchmarks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' For the Kinetics [8], SSv2-Full [23], and SSv2-Small [23] datasets, we adopt the existing splits proposed by [7, 112, 106, 68], and each dataset consists HyRSM++: Hybrid Relation Guided Temporal Set Matching for Few-shot Action Recognition 9 MSA Transformer Bi-LSTM Bi-GRU Inter-relation MSA Transformer Bi-LSTM Bi-GRU Intra-relation 54.' 
Fig. 3 Comparison between different components in the hybrid relation module on 5-way 1-shot few-shot action classification without temporal coherence regularization. Experiments are conducted on the SSv2-Full dataset.
Fig. 4 Comparison between different components in the hybrid relation module on 5-way 1-shot few-shot action classification with temporal coherence regularization. Experiments are conducted on the SSv2-Full dataset.
of 64 and 24 classes as the meta-training and meta-testing set, respectively. For UCF101 [78] and HMDB51 [42], we verify our proposed methods by leveraging existing splits from [105, 68]. In addition to the above, we also utilize the egocentric Epic-kitchens [14, 13] dataset to evaluate HyRSM++.
Implementation details. Following previous works [112, 7, 68, 106], ResNet-50 [28] initialized with ImageNet [15] pre-trained weights is utilized as the feature extractor in our experiments. We sparsely and uniformly sample 8 (i.e., T = 8) frames per video to construct the input frame sequence, which is in line with previous methods [7, 106]. In the training phase, we adopt basic data augmentation such as random cropping and color jitter, and use the Adam [39] optimizer to train our model. During the inference stage, we conduct few-shot action recognition evaluation on 10,000 randomly sampled episodes from the meta-testing set and report the mean accuracy.
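To make the sampling protocol concrete, the following is a minimal sketch (not the authors' released code) of sparse uniform sampling with T = 8: the video is divided into T equal segments and one frame index is taken per segment, randomly during training and at the segment centers at test time. The function name and the random-offset training variant are illustrative assumptions.

import numpy as np

def sparse_uniform_sample(num_frames, T=8, training=True):
    # Split the video into T equal segments and pick one frame index per segment.
    edges = np.linspace(0, num_frames, T + 1)
    if training:
        # Random offset inside each segment (a common light augmentation).
        idx = [np.random.randint(int(edges[i]), max(int(edges[i]) + 1, int(edges[i + 1])))
               for i in range(T)]
    else:
        # Deterministic: the centre of each segment.
        idx = [int((edges[i] + edges[i + 1]) / 2) for i in range(T)]
    return np.clip(np.array(idx), 0, num_frames - 1)

print(sparse_uniform_sample(100, training=False))  # -> [ 6 18 31 43 56 68 81 93]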
For many-shot classification, e.g., 5-shot, we follow ProtoNet [76] and calculate the mean features of the support videos in each class as the prototypes, then classify the query videos according to their distances to the prototypes.
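A minimal sketch of this prototype-based many-shot evaluation is given below, assuming pooled per-video embeddings have already been produced; plain Euclidean distance is used here for brevity, whereas HyRSM++ itself scores query-prototype pairs with its temporal set matching metric. Names and shapes are illustrative.

import torch

def prototype_predict(support_feats, support_labels, query_feats, n_way=5):
    # support_feats: [N*K, D] pooled embeddings of the support videos
    # support_labels: [N*K] class indices in [0, n_way)
    # query_feats:   [Q, D] pooled embeddings of the query videos
    prototypes = torch.stack([support_feats[support_labels == c].mean(dim=0)
                              for c in range(n_way)])   # [N, D] one prototype per class
    dists = torch.cdist(query_feats, prototypes)        # [Q, N] query-to-prototype distances
    return dists.argmin(dim=1)                          # nearest prototype gives the prediction

# Example: one 5-way 5-shot episode with 2048-d features and 10 query videos.
sup = torch.randn(25, 2048)
lab = torch.arange(5).repeat_interleave(5)
qry = torch.randn(10, 2048)
print(prototype_predict(sup, lab, qry))  # tensor of 10 predicted class indices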
4.2 Comparison with state-of-the-art
In this section, we validate the effectiveness of the proposed HyRSM++ by comparing it with state-of-the-art methods under various settings. As indicated in Table 1 and Table 2, the proposed HyRSM++ surpasses other advanced approaches significantly and achieves new state-of-the-art performance. For instance, HyRSM++ improves the state-of-the-art performance from 49.2% to 55.0% under the 1-shot setting on SSv2-Full and consistently outperforms our original conference version [91]. Specifically, compared with current strict temporal alignment techniques [7, 106] and complex fusion methods [48, 68], HyRSM++ produces superior results under most shot settings, which implies that our approach is considerably flexible and efficient. Note that the SSv2-Full and SSv2-Small datasets tend to be motion-based and generally focus on temporal reasoning, while Kinetics and UCF101 are partly appearance-related datasets where scene understanding is usually essential. Besides, Epic-kitchens and HMDB51 are relatively complicated and might involve diverse object interactions. Extensively evaluated on these benchmarks, HyRSM++ provides excellent performance, which reveals that it has strong robustness and generalization across different scenes. From Table 2, we observe that HyRSM++ outperforms current state-of-the-art methods on UCF101 and SSv2-Small under the 1-shot and 3-shot settings, which suggests that our HyRSM++ can learn rich and effective representations from extremely limited samples. It is worth noting that under the 5-shot evaluation, our HyRSM++ yields 95.9% on UCF101 and 58.0% on SSv2-Small, which is slightly behind STRM and HCL. We attribute this to the fact that STRM and HCL are ensemble methods that weight each sample with attention or use multiple metrics for few-shot classification, which makes them more suitable for the multi-shot case, whereas our HyRSM++ is a simple and general method that does not involve complex ensemble operations. Moreover, we observe that with the introduction of temporal coherence regularization, HyRSM++ achieves a significant improvement over HyRSM, which verifies the effectiveness of exploiting temporal order information during the set matching process.
4.3 Ablation study
For ease of comparison, we use a baseline method, ProtoNet [76], which applies global-average pooling to backbone representations to obtain a prototype for each class. We will explore the role and validity of our proposed modules in detail below.
Table 2 Results on 1-shot, 3-shot, and 5-shot few-shot classification on the UCF101 and SSv2-Small datasets. "-" means the result is not available in published works, and the underline indicates the second best result.
Method | Reference | UCF101 (1-shot / 3-shot / 5-shot) | SSv2-Small (1-shot / 3-shot / 5-shot)
MatchingNet [86] | NeurIPS'16 | - / - / - | 31.3 / 39.8 / 45.5
MAML [19] | ICML'17 | - / - / - | 30.9 / 38.6 / 41.9
Plain CMN [112] | ECCV'18 | - / - / - | 33.4 / 42.5 / 46.5
CMN-J [113] | TPAMI'20 | - / - / - | 36.2 / 44.6 / 48.8
ARN [105] | ECCV'20 | 66.3 / - / 83.1 | - / - / -
OTAM [7] | CVPR'20 | 79.9 / 87.0 / 88.9 | 36.4 / 45.9 / 48.0
TTAN [48] | ArXiv'21 | 80.9 / - / 93.2 | - / - / -
ITANet [106] | IJCAI'21 | - / - / - | 39.8 / 49.4 / 53.7
TRX [68] | CVPR'21 | 78.2 / 92.4 / 96.1 | 36.0 / 51.9 / 59.1
STRM [84] | CVPR'22 | 80.5 / 92.7 / 96.9 | 37.1 / 49.2 / 55.3
MTFAN [94] | CVPR'22 | 84.8 / - / 95.1 | - / - / -
Nguyen et al. [62] | ECCV'22 | 84.9 / - / 95.9 | - / - / -
Huang et al. [33] | ECCV'22 | 71.4 / - / 91.0 | 38.9 / - / 61.6
HCL [108] | ECCV'22 | 82.5 / 91.0 / 93.9 | 38.7 / 49.1 / 55.4
HyRSM | CVPR'22 | 83.9 (-1.0) / 93.0 (+0.3) / 94.7 (-2.2) | 40.6 (+0.8) / 52.3 (+0.4) / 56.1 (-5.5)
HyRSM++ | - | 85.8 (+0.9) / 93.5 (+0.8) / 95.9 (-1.0) | 42.8 (+3.0) / 52.4 (+0.5) / 58.0 (-2.6)
Table 3 Ablation study under 5-way 1-shot and 5-way 5-shot settings on the SSv2-Full dataset. "TCR" refers to temporal coherence regularization.
Intra-relation | Inter-relation | Bi-MHM | TCR | 1-shot | 5-shot
- | - | - | - | 35.2 | 45.3
✓ | - | - | - | 41.2 | 55.0
- | ✓ | - | - | 43.7 | 55.2
- | - | ✓ | - | 44.6 | 56.0
- | - | ✓ | ✓ | 45.3 | 57.1
✓ | ✓ | - | - | 48.1 | 60.5
- | ✓ | ✓ | - | 48.3 | 61.2
- | ✓ | ✓ | ✓ | 49.2 | 62.8
✓ | - | ✓ | - | 51.4 | 64.6
✓ | - | ✓ | ✓ | 52.4 | 65.8
✓ | ✓ | ✓ | - | 54.3 | 69.0
✓ | ✓ | ✓ | ✓ | 55.0 | 69.8
Table 4 Generalization of the hybrid relation module. We conduct experiments on SSv2-Full.
Method | 1-shot | 5-shot
OTAM [7] | 42.8 | 52.3
OTAM [7] + Intra-relation | 48.9 | 60.4
OTAM [7] + Inter-relation | 46.9 | 57.8
OTAM [7] + Intra-relation + Inter-relation | 51.7 | 63.9
Design choices of relation modeling. To systematically investigate the effect of different relation modeling operations in the hybrid relation module, we vary the components to construct several variants and report the results in Figure 3 and Figure 4. The comparison experiments are conducted on the SSv2-Full dataset under the 5-way 1-shot setting.
We can observe that different combinations have quite distinct properties; e.g., multi-head self-attention (MSA) and Transformer are more effective at modeling intra-class relations than Bi-LSTM and Bi-GRU. For example, utilizing multi-head self-attention to learn intra-relation produces at least a 2.5% improvement over Bi-LSTM. Nevertheless, compared with other recent algorithms [68, 106], the performance of each combination can still be improved, which strongly suggests the necessity of structural design for learning task-specific features.
Fig. 5 N-way 1-shot performance trends of our HyRSM++ and other state-of-the-art methods with different N on SSv2-Full. The comparison results prove the superiority of our HyRSM++.
Fig. 6 (a) Performance on SSv2-Full using a different number of frames under the 5-way 1-shot setting. (b) The effect of the number of heads on SSv2-Full.
For simplicity, we choose the same structure to explore intra-relation and inter-relation, and the configuration of multi-head self-attention is adopted in the experiments.
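For intuition, the following is a self-contained sketch of this kind of relation modeling built from multi-head self-attention: intra-relation attends over the T frames of each video, and inter-relation then exchanges information across the videos of an episode (here independently at each temporal position, an illustrative simplification rather than the exact HyRSM++ design). The class name, feature dimension, and residual wiring are assumptions.

import torch
import torch.nn as nn

class HybridRelationSketch(nn.Module):
    # Illustrative only: intra-video self-attention followed by cross-video attention.
    def __init__(self, dim=2048, heads=8):
        super().__init__()
        self.intra_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.inter_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, feats):
        # feats: [V, T, D] frame features of the V videos in one episode (support + query).
        x, _ = self.intra_attn(feats, feats, feats)   # each video attends over its own frames
        x = feats + x                                 # residual connection
        x_t = x.transpose(0, 1)                       # [T, V, D]: batch over temporal positions
        y, _ = self.inter_attn(x_t, x_t, x_t)         # attend across videos at each position
        return x + y.transpose(0, 1)                  # back to [V, T, D]

episode = torch.randn(6, 8, 2048)             # e.g. 5 support videos + 1 query, T = 8 frames
print(HybridRelationSketch()(episode).shape)  # torch.Size([6, 8, 2048])

Feeding only the support videos into the cross-video step, versus support and query together, is roughly the distinction evaluated as "Support-only" versus "Support&Query" in Table 7 below.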
Fig. 7 Comparison of the backbone with different depths on the SSv2-Full and Kinetics datasets.
Table 5 Comparative experiments on SSv2-Full using the Inception-v3 [81] feature extractor.
Method | 1-shot | 2-shot | 3-shot | 4-shot | 5-shot
OTAM [7] | 42.4 | 46.6 | 48.7 | 49.2 | 52.1
TRX [68] | 37.7 | 50.2 | 55.5 | 57.2 | 60.1
STRM [84] | 42.9 | 53.9 | 58.9 | 62.3 | 63.4
HyRSM++ | 53.3 | 62.7 | 65.3 | 67.8 | 69.3
Table 6 Performance comparison on SSv2-Full with self-supervised initialization weights [97].
Method | 1-shot | 2-shot | 3-shot | 4-shot | 5-shot
OTAM [7] | 41.2 | 45.9 | 48.8 | 50.1 | 51.0
TRX [68] | 37.5 | 43.8 | 49.9 | 51.6 | 52.1
STRM [84] | 38.0 | 46.2 | 49.9 | 53.4 | 54.4
HyRSM++ | 50.9 | 59.1 | 62.6 | 65.5 | 66.4
Table 7 Performance comparison with different relation modeling paradigms on SSv2-Full and Kinetics.
Setting | Method | Dataset | 1-shot | 5-shot
Support-only | HyRSM | SSv2-Full | 52.1 | 67.2
Support-only | HyRSM++ | SSv2-Full | 53.7 | 68.8
Support&Query | HyRSM | SSv2-Full | 54.3 | 69.0
Support&Query | HyRSM++ | SSv2-Full | 55.0 | 69.8
Support-only | HyRSM | Kinetics | 73.4 | 85.5
Support-only | HyRSM++ | Kinetics | 73.5 | 85.7
Support&Query | HyRSM | Kinetics | 73.7 | 86.1
Support&Query | HyRSM++ | Kinetics | 74.0 | 86.4
Analysis of the proposed components. Table 3 summarizes the ablation study of each module in HyRSM++. To evaluate the function of the proposed components, ProtoNet [76] is taken as our baseline.
From the ablation results, we can conclude that each component is highly effective. In particular, compared to the baseline, intra-relation modeling can respectively bring 6.0% and 9.7% performance
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='1 46.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='4 43.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='2 41.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='3 0% 10% 20% 30% 40% 50% Accuracy (%) 5-way 1-shot OTAM TRX STRM HyRSM++ 25 30 40 35 50 45 55 69 Noisy ratio Noisy ratio Fig.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' 8 Robustness comparison experiments in the presence of noisy samples.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' X% represents the proportion of noisy labels included in the dataset.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' Table 8 Comparison with recent temporal alignment methods on the SSv2-Full dataset under the 5-way 1-shot and 5-way 5-shot settings.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' Diagonal means matching frame by frame.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' Metric Bi-direction 1-shot 5-shot Diagonal 38.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='3 48.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='7 Plain DTW [61] 39.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='6 49.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='0 OTAM [7] � 39.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='3 47.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='7 OTAM [7] � 42.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='8 52.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='3 Bi-MHM � 44.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='6 56.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='0 Temporal set matching metric � 45.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='3 57.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='1 Table 9 Comparison of different set matching strategies on the SSv2- Full dataset.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' Metric Bi-direction 1-shot 5-shot Hausdorff distance � 32.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='4 38.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='2 Hausdorff distance � 34.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='5 39.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='1 Modified Hausdorff distance � 44.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='2 50.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='0 Bi-MHM � 44.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='6 56.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='0 Temporal set matching metric � 45.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='3 57.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='1 Table 10 Generalization of temporal coherence regularization.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' We conduct experiments on SSv2-Full.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' ”Hard margin” represents the method described in Equation 11.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' Method 1-shot 5-shot OTAM [7] 42.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='8 52.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='3 OTAM [7] + IDM 43.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='7 55.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='0 OTAM [7] + Hard margin 43.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='2 55.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='3 OTAM [7] + Temporal coherence regularization 44.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='1 55.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='8 Bi-MHM 44.' 
Table 10 Generalization of temporal coherence regularization. We conduct experiments on SSv2-Full. "Hard margin" represents the method described in Equation 11.

  Method                                          1-shot   5-shot
  OTAM [7]                                        42.8     52.3
  OTAM [7] + IDM                                  43.7     55.0
  OTAM [7] + Hard margin                          43.2     55.3
  OTAM [7] + Temporal coherence regularization    44.1     55.8
  Bi-MHM                                          44.6     56.0
  Bi-MHM + IDM                                    44.7     56.3
  Bi-MHM + Hard margin                            44.7     56.5
  Bi-MHM + Temporal coherence regularization      45.3     57.1
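To make the set matching rows in Tables 8 and 9 concrete, the following is a minimal PyTorch sketch of a bidirectional mean Hausdorff-style distance between the frame-feature sets of a query and a support video. It is an illustrative reading of Bi-MHM rather than our exact implementation, and the 8-frame, 2048-dimensional feature shapes are assumptions:

import torch

def bi_mhm(query_frames: torch.Tensor, support_frames: torch.Tensor) -> torch.Tensor:
    # query_frames: [Tq, C], support_frames: [Ts, C] frame-level features.
    # Pairwise Euclidean distances between every query and support frame: [Tq, Ts].
    dist = torch.cdist(query_frames, support_frames, p=2)
    # Query -> support: each query frame is matched to its closest support frame.
    q_to_s = dist.min(dim=1).values.mean()
    # Support -> query: each support frame is matched to its closest query frame.
    s_to_q = dist.min(dim=0).values.mean()
    # Averaging both directions gives a symmetric distance that tolerates a few
    # misaligned frames better than strict frame-by-frame (diagonal) alignment.
    return 0.5 * (q_to_s + s_to_q)

# Toy usage: 8 frames per video with 2048-d features (hypothetical shapes).
query = torch.randn(8, 2048)
support = torch.randn(8, 2048)
print(bi_mhm(query, support))  # lower distance -> more likely the same class

In a 5-way episode, the query would then be assigned to the support class with the smallest distance, or the negated distances could be fed into a softmax over classes.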
Pluggability of hybrid relation module. In Table 4, we experimentally show that the hybrid relation module generalizes well to other methods by inserting it into the recent OTAM [7]. In this study, OTAM with our hybrid relation module benefits from relational information and finally achieves 8.9% and 11.6% gains on 1-shot and 5-shot. This fully evidences that mining the rich information among videos to learn task-specific features is especially valuable.

N-way few-shot classification. In the previous experiments, all comparative evaluations were carried out under the 5-way setting. To further explore the influence of different N, in Figure 5 we compare N-way (N ≥ 5) 1-shot results on SSv2-Full and Kinetics. The results show that as N increases, the task becomes more difficult and performance decreases. Nevertheless, HyRSM++ remains consistently ahead of the recent state-of-the-art STRM [84], TRX [68] and OTAM [7], which shows the feasibility of our method to boost performance by introducing rich relations among videos and the power of the set matching metric.
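For completeness, the sketch below shows one way an N-way K-shot episode can be sampled for such evaluations; the class names and clip paths are placeholders, and the helper is schematic rather than the exact episode generator used above:

import random
from typing import Dict, List, Tuple

def sample_episode(videos_by_class: Dict[str, List[str]],
                   n_way: int = 5, k_shot: int = 1,
                   n_query: int = 1) -> Tuple[list, list]:
    # Draw N classes, then K support clips and n_query query clips per class.
    classes = random.sample(sorted(videos_by_class), n_way)
    support, query = [], []
    for label in classes:
        clips = random.sample(videos_by_class[label], k_shot + n_query)
        support += [(clip, label) for clip in clips[:k_shot]]
        query += [(clip, label) for clip in clips[k_shot:]]
    return support, query

# Toy pool with 6 classes and 3 clips each (placeholder names).
pool = {f"class_{i}": [f"class_{i}/clip_{j}.mp4" for j in range(3)] for i in range(6)}
support_set, query_set = sample_episode(pool, n_way=5, k_shot=1, n_query=1)
print(len(support_set), len(query_set))  # 5 support clips, 5 query clips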
Varying the number of frames. To demonstrate the scalability of HyRSM++, we also explore the impact of the number of input video frames on performance. Of note, the previous comparisons are performed with 8 input frames. Results in Figure 6(a) show that performance improves as the number of frames increases, and HyRSM++ gradually saturates beyond 7 frames.

Influence of head number. Previous analyses have shown that multi-head self-attention can focus on different patterns and is critical to capturing diverse features [41]. We investigate the effect of varying the number of heads in multi-head self-attention on performance in Figure 6(b). The experimental results indicate that the benefit of multiple heads is remarkable, and performance starts to saturate beyond a certain point.

Varying depth of the backbone. The proposed HyRSM++ is general and compatible with feature extractors of various capacities. Previous methods all utilize ResNet-50 as the backbone by default for a fair comparison, and the impact of backbone depth on performance is still under-explored. As presented in Figure 7, we attempt to answer this question by adopting ResNet-18 and ResNet-34 pre-trained on ImageNet as alternative backbones. The results demonstrate that the deeper network clearly benefits from greater learning capacity and yields better performance. In addition, we notice that our proposed HyRSM++ consistently outperforms the competitors (i.e., OTAM and TRX), which indicates that HyRSM++ is a generally effective framework.
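As a rough illustration of these two design knobs, the sketch below builds a frame encoder with a configurable ResNet depth and refines the resulting frame features with multi-head self-attention whose head number can be swept as in Figure 6(b). It relies only on standard PyTorch and torchvision modules and is a simplified stand-in for the actual model:

import torch
import torch.nn as nn
from torchvision.models import resnet18, resnet34, resnet50

def build_frame_encoder(depth: int = 50) -> nn.Module:
    # Configurable backbone depth; in practice the weights would be ImageNet pre-trained.
    backbone = {18: resnet18, 34: resnet34, 50: resnet50}[depth]()
    backbone.fc = nn.Identity()  # keep pooled features, drop the classification head
    return backbone

encoder = build_frame_encoder(depth=50)   # 2048-d features (512-d for ResNet-18/34)
frames = torch.randn(8, 3, 224, 224)      # 8 RGB frames of one video (toy input)
with torch.no_grad():
    feats = encoder(frames)               # [8, 2048] frame-level features

# Temporal multi-head self-attention; num_heads is the hyper-parameter varied above.
attn = nn.MultiheadAttention(embed_dim=2048, num_heads=8, batch_first=True)
refined, _ = attn(feats.unsqueeze(0), feats.unsqueeze(0), feats.unsqueeze(0))
print(refined.shape)                      # torch.Size([1, 8, 2048])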
Fig. 9 Similarity visualization of how query videos (rows) match to support videos (columns), without and with the hybrid relation module; panels are annotated with the resulting accuracy (Acc = 40%, 60%, 80%, or 100%), and the axes index the five support and five query videos of each task. The boxes of different colors correspond to: correct match and incorrect match. (a) Examples from SSv2-Full. (b) Examples from Kinetics. (Similarity heatmaps omitted.)

Fig. 10 Visualization of matching results with the proposed set matching metric on SSv2-Full and Kinetics, showing support and query frames for: (a) SSv2-Full: "pretending to open something without actually opening it"; (b) SSv2-Full: "showing that something is empty"; (c) Kinetics: "cutting watermelon". (Frame images omitted.)
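For reference, a Fig. 9-style similarity matrix can be produced schematically as follows; cosine similarity over video-level embeddings and the toy 5-way shapes are assumptions made purely for illustration:

import torch
import torch.nn.functional as F

# Rows: 5 query videos, columns: 5 support videos of one episode (toy embeddings).
query_emb = torch.randn(5, 2048)
support_emb = torch.randn(5, 2048)

# [5, 5] similarity matrix; each row shows how one query matches every support video.
sim = F.cosine_similarity(query_emb.unsqueeze(1), support_emb.unsqueeze(0), dim=-1)
pred = sim.argmax(dim=1)                          # best-matching support per query
acc = (pred == torch.arange(5)).float().mean()    # fraction of correct matches
print(sim.shape, f"Acc = {100 * acc:.0f}%")       # the per-task accuracy annotated in Fig. 9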
Influence of different backbones. To verify that our approach is not limited to ResNet-like structures, we further perform experiments on Inception-v3 and report the results in Table 5. From the comparison, we note that HyRSM++ is significantly superior to the other competitive algorithms. Compared with STRM [84], our proposed HyRSM++ leads to at least a 5.5% performance gain under various settings.

Impact of pretraining types. Supervised ImageNet initialization [15] is widely employed in many vision tasks [7, 113, 90] and achieves impressive success. Recently, self-supervised techniques have also received widespread attention and revealed excellent application potential. In Ta-
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='12 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='061 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='63 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='077 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='11 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='21 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='21 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='06 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='47 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='053 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='15 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='055 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='069 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='054 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='670.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='59 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='043 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='13 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='13 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='1 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='11 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='51 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='14 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='2 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='036 0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='13 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='27 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='44 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='091 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='068 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='27 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='14 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='17 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='35 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='066 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='05 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='057 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='044 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='038 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='810.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='19 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='24 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='11 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='2 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='25 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='27 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='16 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='21 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='23 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='13 0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='21 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='22 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='3 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='14 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='13 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='15 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='15 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='11 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='34 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='26 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='22 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='16 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='19 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='18 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='250.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='59 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='18 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='014 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='095 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='13 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='21 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='43 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='025 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='26 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='076 0.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='049 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='042 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='76 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='071 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='081 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='31 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='097 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='019 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='42 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='16 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='21 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='063 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='11 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='13 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='490.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='47 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='1 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='054 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='21 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='17 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='19 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='23 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='08 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='27 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='24 0.' 
In Table 6, we show the performance comparison with self-supervised pretraining weights [97]. The results demonstrate that our HyRSM++ is still powerful and is not limited to specific initialization weights.

Other relation modeling forms. Previous few-shot image classification methods that learn task-specific features have also achieved promising results [101, 47]. However, many of them rely on complex and fixed operations to learn the dependencies between images, while our method is straightforward and flexible. Moreover, most previous works only use the information within the support set to learn task-specific features, ignoring the correlation with query samples.
In our hybrid relation module, we instead add the query video to the pool of inter-relation modeling to extract relevant information suitable for query classification. As illustrated in Table 7, when the query video is removed from the pool in HyRSM++ (i.e., the Support-only variant), the 1-shot and 5-shot performance on SSv2-Full drops by 1.3% and 1.0%, respectively, and similar conclusions hold on the Kinetics dataset. This evidences that the proposed hybrid relation module is reasonable and can effectively extract task-related features, thereby promoting query classification accuracy.
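As a concrete illustration of this design, the sketch below builds the inter-relation pool from the support videos plus the query video and refines all video-level embeddings jointly with multi-head attention. It is a minimal sketch under our own assumptions (feature dimension, the use of nn.MultiheadAttention, the residual connection), not the exact HyRSM++ implementation.

import torch
import torch.nn as nn

class InterRelationPool(nn.Module):
    # Illustrative cross-video relation modeling over the support + query pool.
    # Embedding size, number of heads, and residual connection are assumptions
    # made for this sketch, not the exact HyRSM++ design.
    def __init__(self, dim=2048, num_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, support_feats, query_feat):
        # support_feats: [N, D] video-level features of the N support videos
        # query_feat:    [1, D] video-level feature of the query video
        pool = torch.cat([support_feats, query_feat], dim=0).unsqueeze(0)  # [1, N+1, D]
        refined, _ = self.attn(pool, pool, pool)        # every video attends to the whole task
        refined = self.norm(pool + refined).squeeze(0)  # residual + LayerNorm -> [N+1, D]
        return refined[:-1], refined[-1:]               # task-adapted support / query embeddings

# The "Support-only" variant in Table 7 corresponds to building the pool from
# support_feats alone, i.e., excluding the query video before attention.
support, query = torch.randn(5, 2048), torch.randn(1, 2048)   # e.g., a 5-way 1-shot episode
task_support, task_query = InterRelationPool()(support, query)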
Robustness to noisy labels. To demonstrate the robustness of HyRSM++ to noisy samples, we simulate the presence of noisy labels in the dataset in Figure 8. From the results, we observe that performance generally decreases as the proportion of noise rises. However, our HyRSM++ still exhibits higher performance than the other methods, which illustrates the robustness of our method and its adaptability to complex conditions.

4.4 Comparison with other matching approaches

Our proposed temporal set matching metric Bi-MHM aims to accurately find the corresponding video frames between video pairs by relaxing the strict temporal ordering constraints. The comparative experiments in Table 8 are carried out under identical experimental setups, i.e., we replace OTAM directly with our Bi-MHM while keeping all other settings unchanged.

Table 11 Complexity analysis for 5-way 1-shot SSv2-Full evaluation. The experiments are carried out on one Nvidia V100 GPU.

Method      Backbone    Param   FLOPs   Latency   Acc
HyRSM       ResNet-18   13.8M   3.64G   36.5ms    46.6
HyRSM++     ResNet-18   13.8M   3.64G   36.5ms    47.7
HyRSM       ResNet-34   23.9M   7.34G   67.5ms    50.0
HyRSM++     ResNet-34   23.9M   7.34G   67.5ms    50.4
OTAM [7]    ResNet-50   23.5M   8.17G   116.6ms   42.8
TRX [68]    ResNet-50   47.1M   8.22G   94.6ms    42.0
STRM [84]   ResNet-50   73.3M   8.27G   113.3ms   43.1
HyRSM       ResNet-50   65.6M   8.36G   83.5ms    54.3
HyRSM++     ResNet-50   65.6M   8.36G   83.5ms    55.0
Results show that our Bi-MHM performs well and outperforms other temporal alignment methods (e.g., OTAM). We further analyze different set matching approaches in Table 9; the results indicate that the standard Hausdorff distance is susceptible to noise interference, which causes mismatches and relatively poor performance, whereas our Bi-MHM is stable under noise and obtains better performance.
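For reference, the following is a minimal sketch of a bidirectional mean Hausdorff-style measure between two frame-feature sets, matching each frame to its nearest counterpart in the other video without any ordering constraint. The frame-level distance (1 - cosine similarity) and the final combination of the two directions are assumptions made for illustration and may differ from the exact Bi-MHM formulation.

import torch
import torch.nn.functional as F

def bi_mean_hausdorff(query, support):
    # query:   [T_q, D] frame features of the query video
    # support: [T_s, D] frame features of a support video
    q = F.normalize(query, dim=-1)
    s = F.normalize(support, dim=-1)
    dist = 1.0 - q @ s.t()                  # [T_q, T_s] pairwise frame distances
    q_to_s = dist.min(dim=1).values.mean()  # each query frame -> its closest support frame
    s_to_q = dist.min(dim=0).values.mean()  # each support frame -> its closest query frame
    return q_to_s + s_to_q                  # both directions, no temporal-ordering constraint

# Example: an 8-frame query against an 8-frame support video.
d = bi_mean_hausdorff(torch.randn(8, 2048), torch.randn(8, 2048))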
Furthermore, compared with a single-directional metric, our proposed bidirectional metric reflects the actual distance between videos more comprehensively and achieves better performance on few-shot tasks. In addition, we observe that the proposed temporal set matching metric achieves a clear improvement over Bi-MHM after incorporating temporal coherence. For instance, the temporal set matching metric obtains 0.7% and 1.1% performance gains on 5-way 1-shot and 5-way 5-shot SSv2-Full classification, respectively. This indicates the effectiveness of the proposed temporal set matching metric.

4.5 Comparison of temporal coherence manners

Pioneering works [11, 22, 59] also indicate the important role of temporal coherence and show remarkable results in face recognition [59] and unsupervised representation learning [22, 27]. However, they also have some limitations, as noted in Section 3.2, and thus the temporal coherence regularization is proposed to enforce smooth video coherence. Table 10 compares the proposed temporal coherence regularization with existing temporal coherence schemes based on OTAM and Bi-MHM. Results show that exploiting temporal coherence helps improve the classification accuracy of both metrics, which confirms our motivation for considering temporal order information during the matching process. In addition, our proposed temporal coherence regularization achieves more significant improvements than the other manners, and we attribute this to its smoothness property.
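As a rough illustration of this kind of smoothness term (not necessarily the regularizer actually used in HyRSM++, which is defined in Section 3.2), one can penalize abrupt changes in the soft matching positions of temporally adjacent query frames. The soft-argmax formulation, the temperature tau, and the squared penalty below are all assumptions made for this sketch.

import torch

def temporal_coherence_penalty(dist, tau=0.1):
    # dist: [T_q, T_s] frame-distance matrix between a query and a support video.
    _, T_s = dist.shape
    weights = torch.softmax(-dist / tau, dim=1)      # soft frame assignment per query frame
    idx = torch.arange(T_s, dtype=dist.dtype)
    positions = weights @ idx                        # expected support index per query frame, [T_q]
    steps = positions[1:] - positions[:-1]           # how the match moves between adjacent frames
    return ((steps - 1.0) ** 2).mean()               # prefer smooth, near-monotonic progress

# Example usage on a random 8x8 distance matrix.
penalty = temporal_coherence_penalty(torch.rand(8, 8))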
4.6 Visualization results

To qualitatively show the discriminative capability of the learned task-specific features in our proposed method, we visualize the similarities between query and support videos with and without the hybrid relation module. As depicted in Figure 9, adding the hybrid relation module significantly improves the discrimination of the features, contributing to more accurate predictions. Additionally, the matching results of the set matching metric are visualized in Figure 10, and we can observe that our Bi-MHM is considerably flexible in dealing with both alignment and misalignment.

Fig. 11 Visualization of activation maps with Grad-CAM [75]. Compared to OTAM [7], HyRSM++ focuses more precisely on classification-related regions. (The panels show support/query pairs for "tipping Sth over", "taking Sth out of Sth", and "showing Sth next to Sth" from SSv2-Full, and "riding elephant", "playing trumpet", and "filling eyebrows" from Kinetics, each comparing OTAM and HyRSM++.)

To further visually evaluate the proposed HyRSM++, we compare its activation visualization results with those of the competitive OTAM [7]. As shown in Figure 11, the features of OTAM usually contain non-target objects or ignore the most discriminative parts, since it lacks a mechanism for learning task-specific embeddings for feature adaptation. In contrast, our proposed HyRSM++ processes the query and support videos with an adaptive relation modeling operation, which allows it to focus on the different target objects. These qualitative experiments illustrate the rationality of our model design and the necessity of learning task-related features.
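For completeness, a compact sketch of the standard Grad-CAM computation on a single frame is given below. The ResNet-50 backbone, the choice of model.layer4 as the target layer, and the torchvision dependency are illustrative assumptions rather than the paper's exact visualization setup.

import torch
import torch.nn.functional as F
from torchvision.models import resnet50

def grad_cam(model, frame, class_idx, target_layer):
    # Standard Grad-CAM heatmap for a single frame of shape [1, 3, H, W].
    feats, grads = [], []
    h1 = target_layer.register_forward_hook(lambda m, i, o: feats.append(o))
    h2 = target_layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))
    try:
        logits = model(frame)
        model.zero_grad()
        logits[0, class_idx].backward()              # gradients of the target class score
    finally:
        h1.remove()
        h2.remove()
    act = feats[0]                                   # [1, C, h, w] activations of the target layer
    w = grads[0].mean(dim=(2, 3), keepdim=True)      # channel weights from spatially pooled gradients
    cam = F.relu((w * act).sum(dim=1, keepdim=True)) # weighted combination, ReLU
    cam = F.interpolate(cam, size=frame.shape[-2:], mode="bilinear", align_corners=False)
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)

# Illustrative call: heatmap for the top-1 class of a random frame with a ResNet-50.
model = resnet50(weights=None).eval()
frame = torch.randn(1, 3, 224, 224)
top1 = model(frame).argmax(dim=1).item()
heatmap = grad_cam(model, frame, top1, model.layer4)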
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='7 Limitations In order to further understand HyRSM++, Table 11 il- lustrates its differences with OTAM and TRX in terms of parameters, computation, and runtime.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' In the inference HyRSM++: Hybrid Relation Guided Temporal Set Matching for Few-shot Action Recognition 15 Table 12 Comparison to existing semi-supervised few-shot action recognition methods on the meta-testing set of Kinetics and SSv2-Small.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' The experiments are conducted under the 5-way setting, and results are reported as the shot increases from 1 to 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' ”w/o unlabeled data” indicates that there is no unlabeled set in a episode, i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=', the traditional few-shot action recognition setting, which can act as the lower bound of the semi- supervised counterpart.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' Dataset Method Backbone 1-shot 2-shot 3-shot 4-shot 5-shot Kinetics OTAM w/o unlabeled data [7] Inception-v3 68.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='6 72.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='7 74.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='1 75.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='7 76.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='9 DeepCluster CACTUs-MAML [30] Inception-v3 65.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='1 72.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='8 76.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='5 77.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='9 79.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='5 DeepCluster CACTUs-ProtoNets [30] Inception-v3 66.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='9 73.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='2 77.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='0 78.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='1 79.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='9 LIM [113] Inception-v3 69.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='8 75.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='9 78.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='3 80.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='4 82.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='6 HyRSM++ w/o unlabeled data Inception-v3 69.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='1 76.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='0 78.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='6 81.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='6 81.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='9 HyRSM++ Inception-v3 73.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='7 79.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='4 80.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='9 81.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='8 83.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='1 CMN w/o unlabeled data [112] ResNet-50 60.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='5 70.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='0 75.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='6 77.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='3 78.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='9 OTAM w/o unlabeled data [7] ResNet-50 73.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='0 75.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='9 78.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='7 81.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='9 85.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='8 LIM (ensemble) [113] ResNet-50, Inception-v3, ResNet-18 73.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='3 78.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='3 80.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='8 82.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='4 84.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='0 HyRSM++ w/o unlabeled data ResNet-50 74.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='0 80.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='8 83.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='6 85.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='3 86.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='4 HyRSM++ ResNet-50 79.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='1 84.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='3 85.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='4 86.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='4 86.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='8 SSv2-Small OTAM w/o unlabeled data [112] Inception-v3 36.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='7 41.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='0 43.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='6 44.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='1 46.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='9 DeepCluster CACTUs-MAML [30] Inception-v3 37.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='9 44.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='5 45.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='9 47.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='8 49.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='9 DeepCluster CACTUs-ProtoNets [30] Inception-v3 38.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='4 44.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='8 46.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='1 48.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='0 50.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='1 LIM [113] Inception-v3 41.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='1 46.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='9 48.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='0 51.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='5 53.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='0 HyRSM++ w/o unlabeled data Inception-v3 41.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='5 46.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='1 49.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='5 52.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='9 55.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='1 HyRSM++ Inception-v3 43.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='6 49.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='5 51.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='8 52.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='4 54.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='5 CMN w/o unlabeled data [112] ResNet-50 36.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='2 42.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='1 44.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='6 47.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='0 48.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='8 OTAM w/o unlabeled data [112] ResNet-50 36.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='4 42.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='9 45.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='9 46.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='8 48.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='0 LIM (ensemble) [113] ResNet-50, Inception-v3, ResNet-18 44.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='0 49.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='8 51.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='3 53.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='9 55.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='1 HyRSM++ w/o unlabeled data ResNet-50 42.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='8 47.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='1 52.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='4 54.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='7 58.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='0 HyRSM++ ResNet-50 45.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='4 51.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='1 55.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='2 57.' 
Fig. 12 Performance comparison of different amounts of unlabeled data for testing in an episode on Kinetics. (Figure: 1-shot to 5-shot accuracy curves over 0–200 unlabeled videos; plotted values omitted.)

phase, HyRSM++ adds no additional computational burden compared to HyRSM, because the temporal coherence regularization is not involved in this calculation. Notably, HyRSM++ introduces extra parameters (i.e., the hybrid relation module), resulting in increased GPU memory and computational consumption. Nevertheless, without complex non-parallel classifier heads, the overall inference speed of HyRSM++ is faster than that of OTAM and TRX. We will further investigate how to reduce complexity with no loss of performance in the future.
Fig. 13 Performance comparison of different amounts of unlabeled data for testing in an episode on the SSv2-Small dataset. (Figure: 1-shot to 5-shot accuracy curves over 0–200 unlabeled videos; plotted values omitted.)

5 Extension to Semi-supervised Few-shot Action Recognition

In this section, we demonstrate that the proposed HyRSM++ can be extended to address the more challenging semi-supervised few-shot action recognition problem. Following LIM [113], we utilize two common datasets (Kinetics [8] and SSv2-Small [23]) to perform comparative experiments. These two datasets are subsets of Kinetics-400 [8] and Something-Something-v2 [23], respectively, and the unlabeled examples in our experiments are collected from the remaining videos of the same categories as these subsets.
Table 13 Comparison to state-of-the-art unsupervised few-shot action recognition approaches on UCF101, HMDB51, and Kinetics. ∗ indicates that the algorithm adopts the same 2D ResNet-50 backbone as HyRSM++.

Method            Supervision    UCF101  HMDB51  Kinetics
MAML [19]         Supervised     -       -       54.2
CMN [112]         Supervised     -       -       60.5
TARN [5]          Supervised     -       -       66.6
ProtoGAN [43]     Supervised     57.8    34.7    -
ARN [105]         Supervised     66.3    45.2    63.7
3DRotNet [37]     Unsupervised   39.4    32.4    27.5
VCOP [96]         Unsupervised   32.9    27.8    26.5
IIC [83]          Unsupervised   56.8    34.7    37.7
Pace [87]         Unsupervised   25.6    26.2    22.4
MemDPC [83]       Unsupervised   49.3    30.3    42.0
CoCLR [26]        Unsupervised   52.0    31.3    37.6
MetaUVFS∗ [64]    Unsupervised   66.1    40.0    50.9
HyRSM++           Unsupervised   68.0    41.0    55.0
Fig. 14 Ablation study of different cluster numbers under 5-way 1-shot unsupervised few-shot settings. (Figure: accuracy versus the number of clusters, from 50 to 200, on UCF101, HMDB51, and Kinetics; plotted values omitted.)

To conduct the semi-supervised few-shot evaluation, we follow the mainstream distractor setting [30, 38, 113], where the unlabeled set contains other interference classes in each episodic task. This setting is more realistic and requires the model to be robust to the existence of noisy samples from other classes. In our experiments, we fixed the number of unlabeled videos in an episodic task to 100.
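To make this protocol concrete, the following minimal sketch shows one way such a semi-supervised episode with a distractor unlabeled set could be assembled. The function and variable names are our own illustrative assumptions rather than the authors' implementation; the only details carried over from the text are the 5-way support/query structure and the pool of 100 unlabeled videos that may contain videos of interference classes.

import random


def sample_semisupervised_episode(videos_by_class, n_way=5, k_shot=1,
                                  n_query=1, n_unlabeled=100,
                                  n_distractor_classes=5, seed=None):
    """Assemble one semi-supervised episode with a distractor unlabeled set.

    videos_by_class maps a class name to a list of video identifiers.
    Returns labeled support/query lists of (video, label) pairs plus an
    unlabeled list whose videos come from both the episode classes and
    extra distractor classes; their true labels are discarded.
    """
    rng = random.Random(seed)
    classes = list(videos_by_class)
    episode_classes = rng.sample(classes, n_way)
    distractors = rng.sample([c for c in classes if c not in episode_classes],
                             n_distractor_classes)

    support, query, unlabeled_pool = [], [], []
    for label, cls in enumerate(episode_classes):
        vids = rng.sample(videos_by_class[cls], k_shot + n_query)
        support += [(v, label) for v in vids[:k_shot]]
        query += [(v, label) for v in vids[k_shot:]]
        # leftover videos of the episode classes may enter the unlabeled pool
        unlabeled_pool += [v for v in videos_by_class[cls] if v not in vids]
    for cls in distractors:
        unlabeled_pool += list(videos_by_class[cls])  # interference videos

    unlabeled = rng.sample(unlabeled_pool, min(n_unlabeled, len(unlabeled_pool)))
    return support, query, unlabeled

The true labels of the sampled unlabeled videos are thrown away, so the model only ever sees them as an anonymous pool attached to the episode.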
Table 12 provides the comparison of our HyRSM++ against state-of-the-art methods on the two standard semi-supervised few-shot benchmarks. We find that HyRSM++ substantially surpasses previous approaches such as LIM [113]. Under the semi-supervised 5-way 1-shot scenario, HyRSM++ produces performance gains of 3.8% and 2.5% over LIM with the Inception-v3 backbone on Kinetics and SSv2-Small, respectively. In particular, when using the ResNet-50 backbone, our method is even superior to the multi-modal fusion method (i.e., LIM), which indicates that HyRSM++ assigns more accurate pseudo-labels to the unlabeled data and can then expand the support set to boost the classification accuracy of the query videos. In addition, compared to our supervised counterpart (i.e., HyRSM++ w/o unlabeled data), adding unlabeled data is beneficial for alleviating the data scarcity problem and promotes few-shot classification accuracy: when ResNet-50 is adopted as the backbone, HyRSM++ with unlabeled data improves by 5.1% over the variant without unlabeled data under the 5-way 1-shot Kinetics evaluation. To further investigate the effect of unlabeled videos in an episode, we conduct comparative experiments with varying numbers of unlabeled videos in Figure 12 and Figure 13. The results show that as the number of unlabeled samples increases, performance also increases gradually, indicating that introducing unlabeled data helps the model generalize to unseen categories. Furthermore, we notice that the improvement in the 1-shot setting is more significant than in the 5-shot setting, which shows that when labeled samples are scarce, unlabeled videos can more effectively improve the estimation of the distribution of new categories. Meanwhile, once the amount of unlabeled data increases beyond a certain level, the performance starts to saturate.
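The pseudo-labeling mechanism suggested by this analysis, i.e., labeling the unlabeled videos with the current few-shot classifier and folding the confident ones back into the support set, can be sketched as follows. This is only an illustration under simplifying assumptions: video-level features are taken as given, class scores are computed with a plain prototype distance instead of the hybrid-relation features and temporal set matching metric of HyRSM++, and the confidence threshold is an arbitrary placeholder.

import numpy as np


def expand_support_with_pseudo_labels(support_feats, support_labels,
                                      unlabeled_feats, n_way=5, threshold=0.8):
    """Pseudo-label unlabeled features and keep only the confident ones.

    support_feats: (N_s, D) array; support_labels: (N_s,) ints in [0, n_way);
    unlabeled_feats: (N_u, D) array. Returns an enlarged support set.
    """
    # class prototypes from the small labeled support set
    prototypes = np.stack([support_feats[support_labels == c].mean(axis=0)
                           for c in range(n_way)])                      # (n_way, D)

    # negative Euclidean distance as a score, softmax over classes as confidence
    dists = np.linalg.norm(unlabeled_feats[:, None, :] - prototypes[None], axis=-1)
    logits = -dists
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)

    pseudo_labels = probs.argmax(axis=1)
    confident = probs.max(axis=1) >= threshold          # keep only reliable videos

    new_feats = np.concatenate([support_feats, unlabeled_feats[confident]], axis=0)
    new_labels = np.concatenate([support_labels, pseudo_labels[confident]], axis=0)
    return new_feats, new_labels

In the full method the score between an unlabeled video and each support class would come from the learned matching metric rather than a prototype distance, but the overall expand-then-classify flow mirrors what the text describes.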
6 Extension to Unsupervised Few-shot Action Recognition

We also extend the proposed HyRSM++ to solve the challenging unsupervised few-shot action recognition task, where labels for the training videos are not available. Following previous work [38, 36], we adopt the "clustering first, then meta-learning" paradigm to construct few-shot tasks and exploit unlabeled data for training. Our experiments are based on an unsupervised ResNet-50 initialization [97], which is self-supervised pre-trained on Kinetics-400 [8] without accessing any label information. During the clustering process, we utilize the K-means clustering strategy for each dataset to obtain 150 clusters. As presented in Table 13, we compare HyRSM++ with current state-of-the-art methods on the UCF101, HMDB51, and Kinetics datasets under the 5-way 1-shot setting. Note that HyRSM++ and MetaUVFS [64] use the same ResNet-50 structure as the feature extractor, and our HyRSM++ shows better performance on each dataset. In particular, our method achieves 68.0% accuracy on the UCF101 dataset, a 1.9% improvement over MetaUVFS, and even surpasses the fully supervised ARN. The superior performance of HyRSM++ reveals that leveraging relations within and across videos together with the flexible metric is effective in the low-shot regime. Moreover, this phenomenon also demonstrates the potential of our method to learn a strongly robust few-shot model using only unlabeled videos, even though HyRSM++ is not specifically designed for the unsupervised few-shot action recognition task.
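As a rough illustration of the "clustering first, then meta-learning" recipe above, the snippet below groups self-supervised video features into pseudo-classes with K-means; episodes can then be sampled from these clusters as if they were labeled categories. The choice of scikit-learn's KMeans and the minimum-cluster-size filter are our own assumptions for the sketch; the paper only specifies K-means with 150 clusters per dataset.

import numpy as np
from collections import defaultdict
from sklearn.cluster import KMeans


def build_pseudo_classes(video_ids, features, n_clusters=150, seed=0):
    """Cluster self-supervised video features into pseudo-classes.

    video_ids: list of N video identifiers; features: (N, D) array of
    video-level features from the unsupervised backbone. Returns a dict
    mapping pseudo-class id -> list of video ids, which can stand in for
    ground-truth labels when sampling training episodes.
    """
    cluster_ids = KMeans(n_clusters=n_clusters, random_state=seed,
                         n_init=10).fit_predict(np.asarray(features))
    pseudo_classes = defaultdict(list)
    for vid, cid in zip(video_ids, cluster_ids):
        pseudo_classes[int(cid)].append(vid)
    # clusters that are too small cannot supply both support and query videos
    return {cid: vids for cid, vids in pseudo_classes.items() if len(vids) >= 2}

Few-shot training episodes are then drawn from these pseudo-classes in the same way as from ground-truth classes, for example with a sampler like the one sketched in the previous section.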
In the experiments, one parameter involved in applying HyRSM++ to the unsupervised few-shot setting is the number of clusters. In Figure 14, we display the performance comparison under different numbers of clusters. The results show that performance peaks when the number of clusters is 150: if the cluster number is too small, it may lead to under-clustering, while if it is too large, it may cause over-clustering, both of which damage performance.

7 Conclusion

In this work, we have proposed a hybrid relation guided temporal set matching (HyRSM++) approach for few-shot action recognition. Firstly, we design a hybrid relation module to model the rich semantic relevance within one video and across different videos in an episodic task to generate task-specific features. Secondly, built upon the representative task-specific features, an efficient set matching metric is proposed to be resilient to misalignment and to match videos accurately. During the matching process, a temporal coherence regularization is further imposed to exploit temporal order information. Furthermore, we extend HyRSM++ to solve the more challenging semi-supervised few-shot action recognition and unsupervised few-shot action recognition problems. Experimental results demonstrate that our HyRSM++ achieves state-of-the-art performance on multiple standard benchmarks.

Acknowledgements This work is supported by the National Natural Science Foundation of China under grant 61871435, the Fundamental Research Funds for the Central Universities no. 2019kfyXKJC024, the 111 Project on Computational Intelligence and Intelligent Control under Grant B18024, and Alibaba Group through the Alibaba Research Intern Program.

References
1. Antoniou A, Storkey A (2019) Assume, augment and learn: Unsupervised few-shot meta-learning via random labels and data augmentation. arXiv preprint arXiv:1902.09884
2. Bai Y, Ding H, Sun Y, Wang W (2018) Convolutional set matching for graph similarity. arXiv preprint arXiv:1810.10866
3. Bai Y, Ding H, Gu K, Sun Y, Wang W (2020) Learning-based efficient graph similarity computation via multi-scale convolutional set matching. In: AAAI, vol 34, pp 3219–3226
4. Berthelot D, Carlini N, Goodfellow I, Papernot N, Oliver A, Raffel CA (2019) MixMatch: A holistic approach to semi-supervised learning. In: NeurIPS, vol 32
5. Bishay M, Zoumpourlis G, Patras I (2019) TARN: Temporal attentive relation network for few-shot and zero-shot action recognition. In: BMVC, p 154
6. Caba Heilbron F, Escorcia V, Ghanem B, Carlos Niebles J (2015) ActivityNet: A large-scale video benchmark for human activity understanding. In: CVPR, pp 961–970
7. Cao K, Ji J, Cao Z, Chang CY, Niebles JC (2020) Few-shot video classification via temporal alignment. In: CVPR, pp 10618–10627
8. Carreira J, Zisserman A (2017) Quo vadis, action recognition? A new model and the Kinetics dataset. In: CVPR, pp 6299–6308
9. Chen Z, Fu Y, Zhang Y, Jiang YG, Xue X, Sigal L (2019) Multi-level semantic feature augmentation for one-shot learning. TIP 28(9):4594–4605
10. Cho K, Van Merriënboer B, Gulcehre C, Bahdanau D, Bougares F, Schwenk H, Bengio Y (2014) Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078
11. Conners RW, Harlow CA (1980) A theoretical comparison of texture algorithms. TPAMI (3):204–222
12. Coskun H, Zia MZ, Tekin B, Bogo F, Navab N, Tombari F, Sawhney H (2021) Domain-specific priors and meta learning for few-shot first-person action recognition. TPAMI
13. Damen D, Doughty H, Farinella G, Fidler S, Furnari A, Kazakos E, Moltisanti D, Munro J, Perrett T, Price W, et al. (2020) The EPIC-Kitchens dataset: Collection, challenges and baselines. TPAMI (01):1–1
14. Damen D, Doughty H, Farinella GM, Furnari A, Kazakos E, Ma J, Moltisanti D, Munro J, Perrett T, Price W, et al. (2020) Rescaling egocentric vision. arXiv preprint arXiv:2006.13256
15. Deng J, Dong W, Socher R, Li LJ, Li K, Fei-Fei L (2009) ImageNet: A large-scale hierarchical image database. In: CVPR, pp 248–255
16. Dubuisson MP, Jain AK (1994) A modified Hausdorff distance for object matching. In: ICPR, IEEE, vol 1, pp 566–568
17. Fei-Fei L, Fergus R, Perona P (2006) One-shot learning of object categories. TPAMI 28(4):594–611
18. Feichtenhofer C, Fan H, Malik J, He K (2019) SlowFast networks for video recognition. In: ICCV, pp 6202–6211
19. Finn C, Abbeel P, Levine S (2017) Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks. In: ICML
20. Fu Y, Zhang L, Wang J, Fu Y, Jiang YG (2020) Depth guided adaptive meta-fusion network for few-shot video recognition. In: ACMMM, pp 1142–1151
21. Gao Y (2003) Efficiently comparing face images using a modified Hausdorff distance. IEE Proceedings-Vision, Image and Signal Processing 150(6):346–350
22. Goroshin R, Bruna J, Tompson J, Eigen D, LeCun Y (2015) Unsupervised learning of spatiotemporally coherent metrics. In: ICCV, pp 4086–4093
23. Goyal R, Ebrahimi Kahou S, Michalski V, Materzynska J, Westphal S, Kim H, Haenel V, Fruend I, Yianilos P, Mueller-Freitag M, et al. (2017) The "something something" video database for learning and evaluating visual common sense. In: ICCV, pp 5842–5850
24. Grauman K, Westbury A, Byrne E, Chavis Z, Furnari A, Girdhar R, Hamburger J, Jiang H, Liu M, Liu X, et al. (2022) Ego4D: Around the world in 3,000 hours of egocentric video. In: CVPR, pp 18995–19012
25. Graves A, Mohamed Ar, Hinton G (2013) Speech recognition with deep recurrent neural networks. In: ICASSP, pp 6645–6649
26. Han T, Xie W, Zisserman A (2020) Self-supervised co-training for video representation learning. In: NeurIPS, vol 33, pp 5679–5690
27. Haresh S, Kumar S, Coskun H, Syed SN, Konin A, Zia Z, Tran QH (2021) Learning by aligning videos in time. In: CVPR, pp 5548–5558
28. He K, Zhang X, Ren S, Sun J (2016) Deep residual learning for image recognition. In: CVPR, pp 770–778
29. Hou R, Chang H, Ma B, Shan S, Chen X (2019) Cross attention network for few-shot classification. In: NeurIPS, pp 4003–4014
30. Hsu K, Levine S, Finn C (2018) Unsupervised learning via meta-learning. In: ICLR
31. Huang H, Zhang J, Zhang J, Wu Q, Xu C (2021) PTN: A Poisson transfer network for semi-supervised few-shot learning. In: AAAI, vol 35, pp 1602–1609
32. Huang K, Geng J, Jiang W, Deng X, Xu Z (2021) Pseudo-loss confidence metric for semi-supervised few-shot learning. In: ICCV, pp 8671–8680
33. Huang Y, Yang L, Sato Y (2022) Compound prototype matching for few-shot action recognition. In: ECCV, Springer, pp 351–368
34. Huttenlocher DP, Klanderman GA, Rucklidge WJ (1993) Comparing images using the Hausdorff distance. TPAMI 15(9):850–863
35. Jesorsky O, Kirchberg KJ, Frischholz RW (2001) Robust face detection using the Hausdorff distance. In: AVBPA, Springer, pp 90–95
36. Ji Z, Zou X, Huang T, Wu S (2019) Unsupervised few-shot learning via self-supervised training. arXiv preprint arXiv:1912.12178
37. Jing L, Yang X, Liu J, Tian Y (2018) Self-supervised spatiotemporal feature learning via video rotation prediction. arXiv preprint arXiv:1811.11387
38. Khodadadeh S, Boloni L, Shah M (2019) Unsupervised meta-learning for few-shot image classification. In: NeurIPS, vol 32
39. Kingma DP, Ba J (2014) Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980
40. Kliper-Gross O, Hassner T, Wolf L (2011) One shot similarity metric learning for action recognition. In: SIMBAD, Springer, pp 31–45
41. Koizumi Y, Yatabe K, Delcroix M, Masuyama Y, Takeuchi D (2020) Speech enhancement using self-adaptation and multi-head self-attention. In: ICASSP, pp 181–185
42. Kuehne H, Serre T, Jhuang H, Garrote E, Poggio T, Serre T (2011) HMDB: A large video database for human motion recognition. In: ICCV, DOI 10.1109/ICCV.2011.6126543
43. Kumar Dwivedi S, Gupta V, Mitra R, Ahmed S, Jain A (2019) ProtoGAN: Towards few shot learning for action recognition. In: ICCVW
44. Lazarou M, Stathaki T, Avrithis Y (2021) Iterative label cleaning for transductive and semi-supervised few-shot learning. In: ICCV, pp 8751–8760
45. Lee DH, et al. (2013) Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks. In: ICMLW, vol 3, p 896
46. Li A, Luo T, Xiang T, Huang W, Wang L (2019) Few-shot learning with global class representations. In: ICCV, pp 9715–9724
47. Li H, Eigen D, Dodge S, Zeiler M, Wang X (2019) Finding task-relevant features for few-shot learning by category traversal. In: CVPR, pp 1–10
48. Li S, Liu H, Qian R, Li Y, See J, Fei M, Yu X, Lin W (2021) TTAN: Two-stage temporal alignment network for few-shot action recognition. arXiv preprint arXiv:2107.04782
49. Li X, Sun Q, Liu Y, Zhou Q, Zheng S, Chua TS, Schiele B (2019) Learning to self-train for semi-supervised few-shot classification. In: NeurIPS, vol 32
50. Li Z, Zhou F, Chen F, Li H (2017) Meta-SGD: Learning to learn quickly for few-shot learning. arXiv preprint arXiv:1707.09835
51. Lin J, Gan C, Wang K, Han S (2020) TSM: Temporal shift module for efficient and scalable video understanding on edge devices. TPAMI
52. Liu L, Shao L, Li X, Lu K (2015) Learning spatio-temporal representations for action recognition: A genetic programming approach. TCYB 46(1):158–170
53. Liu X, Gao J, He X, Deng L, Duh K, Wang Yy (2015) Representation learning using multi-task deep neural networks for semantic classification and information retrieval. In: NAACL, pp 912–921
54. Liu Y, Zhang X, Zhang S, He X (2020) Part-aware prototype network for few-shot semantic segmentation. In: ECCV, pp 142–158
55. Lu J, Gong P, Ye J, Zhang C (2020) Learning from very few samples: A survey. arXiv preprint arXiv:2009.02653
56. Lu J, Jin S, Liang J, Zhang C (2020) Robust few-shot learning for user-provided data. TNNLS 32(4):1433–1447
57. Lu P, Bai T, Langlais P (2019) SC-LSTM: Learning task-specific representations in multi-task learning for sequence labeling. In: NAACL, pp 2396–2406
58. Mitra A, Biswas S, Bhattacharyya C (2016) Bayesian modeling of temporal coherence in videos for entity discovery and summarization. TPAMI 39(3):430–443
59. Mobahi H, Collobert R, Weston J (2009) Deep learning from temporal coherence in video. In: ICML, pp 737–744
60. Mohanaiah P, Sathyanarayana P, GuruKumar L (2013) Image texture feature extraction using GLCM approach. IJSRP 3(5):1–5
61. Müller M (2007) Dynamic time warping. Information Retrieval for Music and Motion, pp 69–84
62. Nguyen KD, Tran QH, Nguyen K, Hua BS, Nguyen R (2022) Inductive and transductive few-shot video classification via appearance and temporal alignments. In: ECCV, Springer, pp 471–487
63. Nishiyama M, Yuasa M, Shibata T, Wakasugi T, Kawahara T, Yamaguchi O (2007) Recognizing faces of moving people by hierarchical image-set matching. In: CVPR, pp 1–8
64. Patravali J, Mittal G, Yu Y, Li F, Chen M (2021) Unsupervised few-shot action recognition via action-appearance aligned meta-adaptation. In: ICCV, pp 8484–8494
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' Peng B, Lei J, Fu H, Zhang C, Chua TS, Li X (2018) Unsuper- vised video action clustering via motion-scene interaction con- straint.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' TCSVT 30(1):131–144 1 66.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' Peng M, Zhang Q, Xing X, Gui T, Fu J, Huang X (2019) Learning task-specific representation for novel words in sequence labeling.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' In: IJCAI 2 67.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' Perez L, Wang J (2017) The effectiveness of data augmenta- tion in image classification using deep learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' arXiv preprint arXiv:171204621 3 68.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' Perrett T, Masullo A, Burghardt T, Mirmehdi M, Damen D (2021) Temporal-relational crosstransformers for few-shot action recognition.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' In: CVPR, pp 475–484 2, 4, 6, 7, 8, 9, 10, 11, 12, 13 69.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' Qin T, Li W, Shi Y, Gao Y (2020) Diversity helps: Unsupervised few-shot learning via distribution shift-based data augmentation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' arXiv preprint arXiv:200405805 4 70.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' Ratner AJ, Ehrenberg HR, Hussain Z, Dunnmon J, R´e C (2017) Learning to compose domain-specific transformations for data augmentation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' In: NeurIPS, NIH Public Access, vol 30, p 3239 3 71.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' Ren M, Triantafillou E, Ravi S, Snell J, Swersky K, Tenenbaum JB, Larochelle H, Zemel RS (2018) Meta-learning for semi- supervised few-shot classification.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' In: ICLR 3, 7 72.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' Rezaei M, Fr¨anti P (2016) Set matching measures for external cluster validity.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' TKDE 28(8):2173–2186 3 73.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' Saito Y, Nakamura T, Hachiya H, Fukumizu K (2020) Exchange- able deep neural networks for set-to-set matching and learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' In: ECCV, Springer, pp 626–646 3 HyRSM++: Hybrid Relation Guided Temporal Set Matching for Few-shot Action Recognition 19 74.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' Santoro A, Bartunov S, Botvinick M, Wierstra D, Lillicrap T (2016) Meta-learning with memory-augmented neural networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' In: ICML, PMLR, pp 1842–1850 3 75.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' Selvaraju RR, Cogswell M, Das A, Vedantam R, Parikh D, Batra D (2017) Grad-cam: Visual explanations from deep networks via gradient-based localization.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' In: ICCV, pp 618–626 14 76.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' Snell J, Swersky K, Zemel R (2017) Prototypical networks for few-shot learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' In: NeurIPS, vol 30, pp 4077–4087 3, 9, 11 77.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' Sohn K, Berthelot D, Carlini N, Zhang Z, Zhang H, Raffel CA, Cubuk ED, Kurakin A, Li CL (2020) Fixmatch: Simplifying semi-supervised learning with consistency and confidence.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' In: NeurIPS, vol 33, pp 596–608 7 78.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' Soomro K, Zamir AR, Shah M (2012) Ucf101: A dataset of 101 human actions classes from videos in the wild.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' arXiv preprint arXiv:12120402 9 79.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' Sudha N, et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' (2007) Robust hausdorff distance measure for face recognition.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' Pattern Recognition 40(2):431–442 3, 6 80.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' Sung F, Yang Y, Zhang L, Xiang T, Torr PH, Hospedales TM (2018) Learning to compare: Relation network for few-shot learning.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' In: CVPR, pp 1199–1208 3 81.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' Szegedy C, Vanhoucke V, Ioffe S, Shlens J, Wojna Z (2016) Rethinking the inception architecture for computer vision.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' In: CVPR, pp 2818–2826 11 82.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' Takacs B (1998) Comparing face images using the modified hausdorff distance.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' Pattern recognition 31(12):1873–1881 3, 6 83.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' Tao L, Wang X, Yamasaki T (2020) Self-supervised video rep- resentation learning using inter-intra contrastive framework.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' In: ACMMM, pp 2193–2201 16 84.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' Thatipelli A, Narayan S, Khan S, Anwer RM, Khan FS, Ghanem B (2022) Spatio-temporal relation modeling for few-shot action recognition.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' In: CVPR 4, 8, 10, 11, 12, 13 85.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez AN, Kaiser Ł, Polosukhin I (2017) Attention is all you need.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' In: NeurIPS, pp 5998–6008 5 86.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' Vinyals O, Blundell C, Lillicrap T, Kavukcuoglu K, Wierstra D (2016) Matching Networks for One Shot Learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' In: NeurIPS, arXiv:1606.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='04080v2 2, 3, 4, 8, 10 87.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' Wang J, Jiao J, Liu YH (2020) Self-supervised video representa- tion learning by pace prediction.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' In: ECCV, Springer, pp 504–521 16 88.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' Wang L, Xiong Y, Wang Z, Qiao Y, Lin D, Tang X, Van Gool L (2018) Temporal segment networks for action recognition in videos.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' TPAMI 41(11):2740–2755 1, 4 89.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' Wang X, Zhang S, Qing Z, Shao Y, Gao C, Sang N (2021) Self- supervised learning for semi-supervised temporal action pro- posal.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' In: CVPR, pp 1905–1914 1 90.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' Wang X, Zhang S, Qing Z, Shao Y, Zuo Z, Gao C, Sang N (2021) Oadtr: Online action detection with transformers.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' ICCV 12 91.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' Wang X, Zhang S, Qing Z, Tang M, Zuo Z, Gao C, Jin R, Sang N (2022) Hybrid relation guided set matching for few-shot action recognition.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' In: CVPR 3, 9 92.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' Weng R, Lu J, Hu J, Yang G, Tan YP (2013) Robust feature set matching for partial face recognition.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' In: ICCV, pp 601–608 3 93.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' Weng R, Lu J, Tan YP (2016) Robust point set matching for par- tial face recognition.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' TIP 25(3):1163–1176 3 94.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' Wu J, Zhang T, Zhang Z, Wu F, Zhang Y (2022) Motion- modulated temporal fragment alignment network for few-shot action recognition.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' In: CVPR, pp 9151–9160 2, 4, 8, 10 95.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' Xian Y, Korbar B, Douze M, Torresani L, Schiele B, Akata Z (2021) Generalized few-shot video classification with video re- trieval and feature generation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' TPAMI 4 96.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' Xu D, Xiao J, Zhao Z, Shao J, Xie D, Zhuang Y (2019) Self- supervised spatiotemporal learning via video clip order predic- tion.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' In: CVPR, pp 10334–10343 16 97.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' Xu J, Wang X (2021) Rethinking self-supervised correspondence learning: A video frame-level similarity perspective.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' In: ICCV, pp 10075–10085 11, 13, 16 98.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' Ye HJ, Hu H, Zhan DC, Sha F (2020) Few-shot learning via embedding adaptation with set-to-set functions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' In: CVPR, pp 8808–8817 3 99.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' Ye HJ, Ming L, Zhan DC, Chao WL (2022) Few-shot learning with a strong teacher.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' TPAMI 3 100.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' Yoo S, Bahng H, Chung S, Lee J, Chang J, Choo J (2019) Col- oring with limited data: Few-shot colorization via memory aug- mented networks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' In: CVPR, pp 11283–11292 3 101.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' Yoon SW, Seo J, Moon J (2019) Tapnet: Neural network aug- mented with task-adaptive projection for few-shot learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' In: ICML, pp 7115–7123 2, 3, 13 102.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' Yu CB, Qin HF, Cui YZ, Hu XQ (2009) Finger-vein image recog- nition combining modified hausdorff distance with minutiae fea- ture matching.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' Interdisciplinary Sciences: Computational Life Sciences 1(4):280–289 6 103.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' Yu Z, Chen L, Cheng Z, Luo J (2020) Transmatch: A transfer- learning scheme for semi-supervised few-shot learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' In: CVPR, pp 12856–12864 4 104.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' Zhang H, Cisse M, Dauphin YN, Lopez-Paz D (2018) mixup: Beyond empirical risk minimization.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' In: ICLR 7 105.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' Zhang H, Zhang L, Qi X, Li H, Torr PH, Koniusz P (2020) Few- shot action recognition with permutation-invariant attention.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' In: ECCV, Springer, pp 525–542 1, 4, 8, 9, 10, 16 106.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' Zhang S, Zhou J, He X (2021) Learning implicit temporal align- ment for few-shot video classification.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' In: IJCAI 2, 4, 8, 9, 10 107.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' Zhao C, Shi W, Deng Y (2005) A new hausdorff distance for image matching.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' Pattern Recognition Letters 26(5):581–586 3 108.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' Zheng S, Chen S, Jin Q (2022) Few-shot action recognition with hierarchical matching and contrastive learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' In: ECCV, Springer, pp 297–313 2, 4, 8, 10 109.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' Zhou B, Andonian A, Oliva A, Torralba A (2018) Temporal rela- tional reasoning in videos.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' In: ECCV, pp 803–818 8 110.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' Zhou ZH, Li M (2005) Tri-training: Exploiting unlabeled data using three classifiers.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' TKDE 17(11):1529–1541 7 111.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' Zhou ZQ, Wang B (2009) A modified hausdorff distance using edge gradient for robust object matching.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' In: IASP, IEEE, pp 250–254 6 112.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' Zhu L, Yang Y (2018) Compound memory networks for few-shot video classification.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' In: ECCV, pp 751–766 1, 2, 4, 8, 9, 10, 15, 16 113.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' Zhu L, Yang Y (2020) Label independent memory for semi- supervised few-shot video classification.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content=' TPAMI 44(1):273–285, DOI 10.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='1109/TPAMI.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'} +page_content='3007511 4, 7, 8, 10, 12, 15, 16' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/BdE1T4oBgHgl3EQfpQWt/content/2301.03330v1.pdf'}