diff --git "a/7NAyT4oBgHgl3EQfpviE/content/tmp_files/load_file.txt" "b/7NAyT4oBgHgl3EQfpviE/content/tmp_files/load_file.txt" new file mode 100644--- /dev/null +++ "b/7NAyT4oBgHgl3EQfpviE/content/tmp_files/load_file.txt" @@ -0,0 +1,1857 @@ +filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf,len=1856 +page_content='SUBMISSION TO IEEE TRANSACTION ON MULTIMEDIA 1 Multi-Stage Spatio-Temporal Aggregation Transformer for Video Person Re-identification Ziyi Tang, Ruimao Zhang, Member, IEEE, Zhanglin Peng, Jinrui Chen, Liang Lin, Senior Member, IEEE Abstract—In recent years, the Transformer architec- ture has shown its superiority in the video-based person re-identification task.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Inspired by video representation learning, these methods mainly focus on designing mod- ules to extract informative spatial and temporal fea- tures.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' However, they are still limited in extracting local attributes and global identity information, which are critical for the person re-identification task.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' In this paper, we propose a novel Multi-Stage Spatial-Temporal Aggre- gation Transformer (MSTAT) with two novel designed proxy embedding modules to address the above issue.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Specifically, MSTAT consists of three stages to encode the attribute-associated, the identity-associated, and the attribute-identity-associated information from the video clips, respectively, achieving the holistic perception of the input person.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' We combine the outputs of all the stages for the final identification.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' In practice, to save the computational cost, the Spatial-Temporal Aggregation (STA) modules are first adopted in each stage to conduct the self-attention operations along the spatial and tem- poral dimensions separately.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' We further introduce the Attribute-Aware and Identity-Aware Proxy embedding modules (AAP and IAP) to extract the informative and discriminative feature representations at different stages.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' All of them are realized by employing newly designed self-attention operations with specific meanings.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Moreover, temporal patch shuffling is also introduced to further improve the robustness of the model.' 
Extensive experimental results demonstrate the effectiveness of the proposed modules in extracting informative and discriminative information from the videos, and illustrate that MSTAT can achieve state-of-the-art accuracy on various standard benchmarks.

Index Terms—Video-based Person Re-ID, Transformer, Spatial-Temporal Modeling, Deep Representation Learning

Ziyi Tang, Ruimao Zhang, and Jinrui Chen are with The Chinese University of Hong Kong (Shenzhen), and Ziyi Tang is also with Sun Yat-sen University (e-mail: tangziyi@cuhk.edu.cn, ruimao.zhang@ieee.org, and 120090765@link.cuhk.edu.cn). Zhanglin Peng is with the Department of Computer Science, The University of Hong Kong, Hong Kong, China (e-mail: zhanglin.peng@connect.hku.hk). Liang Lin is with the School of Computer Science and Engineering, Sun Yat-sen University (e-mail: linliang@ieee.org). This paper was done when Ziyi Tang was working as a Research Assistant at The Chinese University of Hong Kong (Shenzhen). The Corresponding Author is Ruimao Zhang.

I. INTRODUCTION

Fig. 1: Comparison between different Transformer-based frameworks for video re-ID. (a) shows a framework in which the Transformer fuses post-CNN features of the entire video. (b) is the Trigeminal Transformer [51], including three separate streams for temporal, spatial, and spatio-temporal feature extraction. (c) displays our multi-stage spatio-temporal aggregation Transformer, which consists of three stages, all with a spatio-temporal view but with different meanings.

Person Re-identification (re-ID) [6], [26], [28], which aims at matching pedestrians across different camera views at different times, is a critical task of visual surveillance. In the earlier stage, studies mainly focused on image-based person re-ID [26], [28], [46], which mines discriminative information in the spatial domain. With the development of monitoring sensors, multi-modality information has been introduced to the re-ID task [33], [71], [72]. Numerous methods have been proposed to break down barriers between modalities regarding their image styles [86], structural features [81], [84], [97], or network parameters [33], [82]. On the other hand, some studies have exploited multi-frame data and proposed various schemes [40], [62], [100] to extract informative temporal representations for video-based person re-ID. In such a setting, each time a non-labeled query tracklet clip is given, its discriminative feature representation needs to be extracted to retrieve the clips of the corresponding person from the non-labeled gallery. In practice, how to simultaneously extract such discriminative information from spatial and temporal dimensions is the key to improving the accuracy of video-based re-ID.
To address this issue, traditional methods [20] usually employ hierarchical convolutional architectures to update local patterns progressively. Furthermore, some attempts [14], [15], [48], [73], [94] adopt attention-based modules to dynamically infer discriminative information from videos. For instance, Wu et al. [72] embed body-part prior knowledge inside the network architecture via dense and non-local region-based attention. Although recent years have witnessed the success of convolution-based methods [12], [13], [20], [38], [43], [74], [94], [104], they have encountered a bottleneck in accuracy improvement, as convolution layers suffer from intrinsic limitations in spatial-temporal dependency modeling and information aggregation [96].

Recently, the Transformer architecture [24], [32], [54], [89] has attracted much attention in the computer vision area because of its excellent context modeling ability. The core idea of such a model is to construct interrelationships between local contents via a global attention operation. In the literature, some hybrid network architectures [19], [34], [51] have been proposed to tackle long-range context modeling in video-based re-ID. A widely used paradigm is to leverage the Transformer as a post-processing unit, coupled with a convolutional neural network (CNN) as the basic feature extractor. For example, as summarized in Fig. 1 (a),
He et al. [35] and Zhang et al. [95] adopt a monolithic Transformer to fuse frame-level CNN features. As shown in Fig. 1 (b), Liu et al. [51] take a step further and put forward a multi-stream Transformer architecture in which each stream emphasizes a particular dimension of the video features. In a hybrid architecture, however, the 2D CNN bottom encoder restricts long-range spatio-temporal interactions among local contents, which hinders the discovery of contextual cues. Later, to address this problem, some pure Transformer-based approaches were introduced to video-based re-ID. Nevertheless, the existing Transformer-based frameworks are mainly motivated by those in video understanding and concentrate on designing architectures to learn spatial-temporal representations efficiently. Most of them are still limited in extracting informative and human-relevant discriminative information from video clips, which is critical for large-scale matching tasks [39], [92], [98], [104].

To address the above issues, we propose a novel Multi-Stage Spatial-Temporal Aggregation Transformer framework, named MSTAT, which consists of three stages to respectively encode the attribute-associated, the identity-associated, and the attribute-identity-associated information from video clips. First, to save computational cost, the Spatial-Temporal Aggregation (STA) modules [4], [7] are adopted in each stage as building blocks to conduct self-attention operations along the spatial and temporal dimensions separately. Further, as shown in Fig. 1,
we introduce the plug-and-play Attribute-Aware Proxy and Identity-Aware Proxy (AAP and IAP) embedding modules into different stages, for the purpose of preserving informative attribute features and aggregating discriminative identity features, respectively. They are both implemented by self-attention operations but with different learnable proxy embedding schemes. For the AAP embedding module, AAPs play the role of attribute queries to preserve a diversity of implicit attributes of a person. Arguably, the combination of these attribute representations is informative and provides discriminative power complementary to the identity-only prediction. In contrast, the IAP embedding module maintains a group of IAPs as key-value pairs. With explicit constraints, they learn to successively match and aggregate the discriminative identity-aware features embedded in patch tokens. During similarity measurement, the output feature representations of the three stages are concatenated to form a holistic view of the input person.

In practice, a Transformer-specific data augmentation scheme, Temporal Patch Shuffling, is also introduced, which randomly rearranges patches temporally. With such a scheme, the enriched training data effectively improve the model's ability to learn invariant appearance features, improving its robustness. Extensive experiments on three public benchmarks demonstrate that our proposed framework is superior to the state of the art on different metrics. Concretely, we achieve the best performance of 91.8% rank-1 accuracy on MARS, which is the largest video re-ID dataset at present.

In summary, our contributions are three-fold.
(1) We introduce a Multi-Stage Spatial-Temporal Aggregation Transformer framework (MSTAT) for video-based person re-ID. Compared to existing Transformer-based frameworks, MSTAT better learns informative attribute features and discriminative identity features. (2) For different stages, we devise two different proxy embedding modules, named the Attribute-Aware and Identity-Aware Proxy embedding modules, to extract informative attribute features and aggregate discriminative identity features from the entire video, respectively. (3) A simple yet effective data augmentation scheme, referred to as Temporal Patch Shuffling, is proposed to consolidate the network's invariance to appearance shifts and enrich the training data.

II. RELATED WORKS

A. Image-Based Person Re-ID

Image-based person re-ID mainly focuses on person representation learning. Early works focus primarily on carefully designed handcrafted features [6], [26], [28], [44], [46], [103]. Recently, flourishing deep learning has become the mainstream approach for learning representations in person re-ID [43], [65], [67], [74], [77], [88]. For the last few years, CNNs have been widely used as feature extractors [1], [17], [41], [43]–[45], [65], [76], [94]. OSNet [104] fuses multi-scale features in an attention-style sub-network to obtain informative omni-scale features. Some works [18], [87], [98] focus on extracting and aligning semantic information to address misalignment caused by pose/viewpoint variations, imperfect person detection, etc.
To avoid being misled by noisy labels, Ye et al. [83] present a self-label refining strategy, deeply integrating annotation optimization and network training. So far, some works [19], [34] have also explored image-based person re-ID based on the Vision Transformer [24]. For example, TransReID [34] adopts a Transformer as the backbone and extracts discriminative features from randomly sampled patch groups.

B. Video-Based Person Re-ID

Compared to image-based person re-ID, video-based person re-ID usually performs better because it provides temporal information and mitigates occlusion by taking advantage of multi-frame information. To capture more robust and discriminative representations from frame sequences, traditional video-based re-ID methods usually focus on two areas: 1) encoding of temporal information; 2) aggregation of temporal information.

To encode additional temporal information, early methods [40], [62], [100] directly use temporal information as additional features. Some works [1], [49], [55], [73] use recurrent models, e.g., RNNs [56] and LSTMs [37], to process the temporal information. Other works [1], [12], [13], [53], [55], [60], [105] go further by introducing the attention mechanism to apply dynamic temporal feature fusion. Another class of works [21] introduces optical flow to capture temporal motion. Moreover, some works [2], [42], [63], [75], [91], [102] directly apply spatio-temporal pooling to video sequences and generate a global representation via CNNs.
Recently, 3D CNNs [29], [45] have been used to encode video features in a joint spatio-temporal manner. M3D [41] endows a 2D CNN with multi-scale temporal feature extraction ability via multi-scale 3D convolutional kernels. For aggregation, which aims to generate discriminative features from full video features, a class of approaches [55], [93], [105] applies average pooling over the time dimension to aggregate spatio-temporal feature maps. Recently, some attention-based methods [2], [15], [72], [80] have attained significant performance improvements by dynamically highlighting different video frames/regions so as to filter more discriminative features from these critical frames/regions. For instance, Liu et al. [51] introduce cross-attention to aggregate multi-view video features through pair-wise interaction between these views. Apart from the exploration of more effective architectural designs, a branch of works studies the effect of pedestrian attributes [10], [61], [101], such as shoes, bag, and down color, or the gait [11], [57], i.e., the walking style of pedestrians, as a more comprehensive form of pedestrian description. Chang et al. [11] closely integrate two coherent tasks, gait recognition and video-based re-ID, by using a hybrid framework that includes a set-based gait recognition branch. Some works [61], [101] embed attribute predictors into the network, supported by annotations obtained from a network pretrained on an attribute dataset.
Chai et al. [10] separate attributes into ID-relevant and ID-irrelevant ones and propose a novel pose-invariant and motion-invariant triplet loss to mine the hardest samples considering the distance of pose and motion states. Although the above methods have made significant progress in performance, the Transformer [66], which is deemed a more powerful architecture for processing sequence data, may raise the performance ceiling of video-based re-ID. To illustrate this, the Transformer can readily adapt to video data with the support of the global attention mechanism to capture spatio-temporal dependencies and temporal positional encoding to order spatio-temporal positions. In addition, the class token is off-the-shelf for Transformer-based models to aggregate spatio-temporal information. However, the Transformer suffers from multiple drawbacks [24], [70], [89], [90], and few works on Transformer-based video person re-ID have been released so far. In this work, we attempt to explore the potential of the Transformer in video-based person re-ID.

C. Vision Transformer

Recently, the Transformer has shown its ability as an alternative to CNNs. Inspired by the great success of the Transformer in natural language processing, recent researchers [24], [54], [70] have extended the Transformer to computer vision tasks and obtained promising results. Bertasius et al. [7] explore different video self-attention schemes in terms of their cost-performance trade-off, concluding that divided space-time self-attention is optimal. Similarly, ViViT [4] factorizes self-attention to compute self-attention spatially and then temporally.
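To make the factorized scheme concrete, the sketch below is a minimal PyTorch-style illustration of divided space-time self-attention in the spirit of [4], [7]; it is our own illustrative module (names, shapes, and the use of nn.MultiheadAttention are assumptions), not code from those works or from this paper. Spatial attention operates within each frame, and temporal attention operates across frames at each spatial position.

```python
import torch
import torch.nn as nn

class DividedSpaceTimeAttention(nn.Module):
    """Illustrative divided space-time self-attention:
    spatial attention within each frame, then temporal attention across frames."""
    def __init__(self, dim: int, num_heads: int):
        super().__init__()
        self.spatial_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.temporal_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, T, N, d) -- B clips, T frames, N patch tokens per frame, dim d
        B, T, N, d = x.shape

        # Spatial attention: tokens of the same frame attend to each other.
        xs = x.reshape(B * T, N, d)
        xs, _ = self.spatial_attn(xs, xs, xs)

        # Temporal attention: tokens at the same spatial position attend across frames.
        xt = xs.reshape(B, T, N, d).permute(0, 2, 1, 3).reshape(B * N, T, d)
        xt, _ = self.temporal_attn(xt, xt, xt)

        return xt.reshape(B, N, T, d).permute(0, 2, 1, 3)  # back to (B, T, N, d)

# Example: 2 clips, 8 frames, 128 patch tokens per frame, 768-dim embeddings.
attn = DividedSpaceTimeAttention(dim=768, num_heads=8)
out = attn(torch.randn(2, 8, 128, 768))
print(out.shape)  # torch.Size([2, 8, 128, 768])
```

Compared with joint space-time attention over all T x N tokens, this factorization reduces the attention cost from O((TN)^2) to O(TN^2 + NT^2) per block, which is the efficiency argument made by these works.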
Inspired by these works, we divide video self-attention into spatial attention followed by temporal attention, and we further propose an attribute-aware variant for video-based re-ID. Furthermore, little research has been done on Transformers for video-based person re-ID. The Trigeminal Transformer (TMT) [51] puts the input patch token sequence through a spatial, a temporal, and a spatio-temporal minor Transformer, respectively, and a cross-view interaction module fuses their outputs. Differently, MSTAT has three stages, all extracting spatio-temporal features but with different meanings: (1) attribute features, (2) identity features, and (3) attribute-identity features.
Fig. 2: The overall architecture of our proposed MSTAT, which consists of three stages, all based on the Transformer architecture. Stage I updates the spatio-temporal patch token sequence of the input video and aggregates them into a group of attribute-associated representations. Subsequently, Stage II aggregates discriminative identity-associated features and Stage III attribute-identity-associated features, relying upon their stage-specific class tokens. Here, we omit the input and output of each module except the attribute-aware proxy embedding module in Stage I.
At inference time, all these feature representations are combined through concatenation to jointly infer the pedestrian's identity.

III. METHOD

In Sec. III-A, we first give an overview of the proposed MSTAT framework. Then, Spatio-Temporal Aggregation (STA), the standard spatial-temporal feature extractor in MSTAT, is formulated in Sec. III-B. Along with it, we introduce the proposed Attribute-Aware Proxy (AAP) and Identity-Aware Proxy (IAP) embedding modules in Sec. III-C. Finally, Temporal Patch Shuffling (TPS), a newly introduced Transformer-specific data augmentation scheme, is presented in Sec. III-E.

A. Overview

This section briefly summarizes the workflow of MSTAT. The overall MSTAT framework is shown in Fig. 2. Given a video tracklet $V \in \mathbb{R}^{T \times 3 \times H \times W}$ with $T$ frames, where the resolution of each frame is $H \times W$, the goal of MSTAT is to learn a mapping from the tracklet $V$ to a $d$-dimensional representation space in which each identity is discriminative from the others. Specifically, as shown on the left of Fig. 2, MSTAT first linearly projects non-overlapping image patches of size $3 \times P \times P$ into $d$-dimensional patch tokens, where $d = 3P^2$ denotes the embedded dimension of the tokens.
Thus, a patch token sequence $X \in \mathbb{R}^{T \times N \times d}$ is obtained, where the number of patch tokens in each frame is denoted by $N = \frac{HW}{P^2}$. Meanwhile, a spatial positional encoding $E \in \mathbb{R}^{N \times d}$ is added to $X$ in an element-wise manner to preserve the spatial structure within each frame. Notably, we do not insert temporal positional encoding into $X$, since the temporal order is usually not conducive to video-based re-ID, as also demonstrated in [92]. Finally, a class token $c \in \mathbb{R}^{d}$ is associated with $X$ to aggregate a global identity representation.

Next, we feed the token sequence $X$ into Stage I of MSTAT. It takes $X$ and $c$ as input, and employs a stack of eight Spatio-Temporal Aggregation (STA) blocks for inter-frame and intra-frame correlation modeling. The output tokens are then fed into an Attribute-Aware Proxy (AAP) embedding module to mine rich visual attributes, a composite group of semantic cues that imply identity information, e.g., garments, handbags, and so on. Stage II includes a series of STA blocks (three in our experiments), followed by an Identity-Aware Proxy (IAP) embedding module which is able to screen out discriminative identity-associated information by inspecting the entire sequence in parallel. In Stage III, we first introduce a new class token to directly aggregate higher-level features. In addition, a stack of Attribute-STA (A-STA) blocks is used to fuse attributes from different frames. At last, an IAP embedding module is adopted to generate a discriminative representation of the person.
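To make the tokenization step above concrete, the following is a minimal PyTorch-style sketch. The frame size of 256 x 128 and patch size P = 16 are illustrative assumptions rather than values specified in this section, and the STA stacks and proxy embedding modules of the three stages are deliberately left out.

```python
import torch
import torch.nn as nn

class VideoPatchTokenizer(nn.Module):
    """Illustrative tokenization: non-overlapping 3 x P x P patches -> d-dim tokens
    (d = 3*P*P), plus a spatial positional encoding and a class token.
    Temporal positional encoding is deliberately omitted, as described above."""
    def __init__(self, frame_size=(256, 128), patch_size=16):
        super().__init__()
        H, W = frame_size
        P = patch_size
        self.d = 3 * P * P                      # embedded token dimension, d = 3P^2
        self.N = (H // P) * (W // P)            # patch tokens per frame, N = HW/P^2
        self.proj = nn.Conv2d(3, self.d, kernel_size=P, stride=P)  # linear patch projection
        self.spatial_pos = nn.Parameter(torch.zeros(1, 1, self.N, self.d))
        self.cls_token = nn.Parameter(torch.zeros(1, self.d))

    def forward(self, video: torch.Tensor):
        # video: (B, T, 3, H, W)
        B, T, C, H, W = video.shape
        x = self.proj(video.reshape(B * T, C, H, W))            # (B*T, d, H/P, W/P)
        x = x.flatten(2).transpose(1, 2)                        # (B*T, N, d)
        x = x.reshape(B, T, self.N, self.d) + self.spatial_pos  # spatial PE only
        cls = self.cls_token.expand(B, -1)                      # one class token per clip
        return x, cls

tok = VideoPatchTokenizer()
x, cls = tok(torch.randn(2, 8, 3, 256, 128))
print(x.shape, cls.shape)  # torch.Size([2, 8, 128, 768]) torch.Size([2, 768])
```

A full MSTAT pipeline would then pass the token sequence and class token through the three stages described above and, at inference, concatenate the resulting attribute representations and class tokens, as detailed next.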
In the training phase, the attribute representations extracted from Stage I and the class tokens of Stage II and Stage III are supervised separately by a group of losses. During testing, the attribute representations and the class tokens from the last two stages are concatenated for similarity measurement.

B. Spatio-temporal Aggregation

To begin with, we briefly review the vanilla Transformer self-attention mechanism first proposed in [66]. In practice, a visual Transformer embeds an image into a sequence of patch tokens, and the self-attention operation first linearly projects these tokens into the corresponding query Q, key K, and value V, respectively. Then, the scaled product of Q and K generates an attention map A, indicating the estimated relationships between the token representations in Q and K. Finally, V is re-weighted by the attention map A to obtain the output of the Transformer self-attention. In this way, patch tokens are reconstructed by leveraging their interactions with each other. Formally, the self-attention operation SA(·) can be formulated as follows:

Q, K, V = ŜW_q, ŜW_k, ŜW_v,
A = Softmax(QK^T / √d′),
SA(Ŝ) = AV,    (1)

where Ŝ ∈ R^{N̂×d} denotes a 2-dimensional input token sequence, and W_q, W_k, W_v ∈ R^{d×d′} denote three learnable parameter matrices of size d × d′. In the multi-head setting, we let d′ = d/n, where n indicates the number of attention heads. The function Softmax(·) denotes the softmax operation applied to each row.
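A minimal single-head sketch of Eqn. (1) in PyTorch is given below; the function name, the absence of an output projection, and the randomly drawn projection matrices are simplifications of ours rather than details of the actual implementation.

```python
import torch
import torch.nn.functional as F

def self_attention(S_hat, Wq, Wk, Wv):
    """Vanilla self-attention of Eqn. (1) for a 2-D token sequence S_hat of shape (N, d)."""
    Q, K, V = S_hat @ Wq, S_hat @ Wk, S_hat @ Wv          # linear projections, (N, d')
    A = F.softmax(Q @ K.T / K.shape[-1] ** 0.5, dim=-1)   # row-wise softmax of the scaled QK^T
    return A @ V                                          # re-weighted values, (N, d')

d, d_prime = 768, 64
S_hat = torch.randn(99, d)                                # e.g., 98 patch tokens + 1 class token
Wq, Wk, Wv = (torch.randn(d, d_prime) * 0.02 for _ in range(3))
out = self_attention(S_hat, Wq, Wk, Wv)                   # (99, 64)
```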
The scaling operation in Eqn. (1) eliminates the influence from the scale of the embedded dimension d′. In our Spatio-Temporal Aggregation block (STA), the self-attention operations along the time axis and along the space axis (i.e., temporal attention and spatial attention) are separately denoted as SA_t(·) and SA_s(·). Let S ∈ R^{T×N×d} denote an input spatio-temporal token sequence. Formally, SA_t(·) and SA_s(·) can be written as:

SA_t(S) = SA(Concat(S_{:,0}, …, S_{:,n}, …, S_{:,N−1})),
SA_s(S) = SA(Concat(S_{0,:}, …, S_{t,:}, …, S_{T−1,:})),    (2)

where T indicates the total number of frames in the video clip, N is the total number of spatial positions, and Concat(·) denotes the concatenation operation along the split dimension, e.g., the spatial position dimension in Eqn. (2).
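The regrouping in Eqn. (2) can be sketched with a standard attention module, as below; reusing one nn.MultiheadAttention for both directions, instead of the paper's SA(·) with its own projections, is a simplification of ours.

```python
import torch
import torch.nn as nn

attn = nn.MultiheadAttention(embed_dim=768, num_heads=12, batch_first=True)

def divided_attention(S):
    """Sketch of Eqn. (2): attention along time (one length-T sequence per spatial
    position) and along space (one length-N sequence per frame), run separately."""
    St = S.permute(1, 0, 2)                  # (N, T, d): temporal attention groups
    St, _ = attn(St, St, St)
    Ss, _ = attn(S, S, S)                    # (T, N, d): spatial attention groups
    return St.permute(1, 0, 2), Ss           # both returned as (T, N, d)

S = torch.randn(8, 98, 768)                  # T = 8 frames, N = 98 tokens, d = 768
out_t, out_s = divided_attention(S)
```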
Given SA_t(·) and SA_s(·), the STA block consecutively integrates these two self-attention modules to extract spatio-temporal features. As illustrated in Fig. 3, STA further extracts discriminative information from the patch tokens into the class token through the spatial attention SA_s(·). This is realized by concatenating copies of the class token to the token sequence of each frame before SA_s(·), and by averaging the class token copies after SA_s(·) so that the subsequent temporal aggregation can be applied. In this way, the general form of STA can be presented as:

S′ = S + α · SA_t(LN(S)),
STA(S, c) = Concat(S′, c) + β · SA_s(LN(Concat(S′, c))),    (3)

where LN(·) denotes Layer Normalization [5]. The hyper-parameters α and β are learnable scalar residual weights that balance temporal attention and spatial attention. Compared with the space-time joint attention in [7] and [4], which jointly processes all patches of a video, STA is more computation-efficient, reducing the complexity from O(T²N²) to O(T² + N²). In effect, it avoids operating on a single long sequence, whose length leads to quadratic growth of the computational complexity [31], [68].

Fig. 3: The detailed comparison between (a) the Spatio-Temporal Aggregation block (STA) and (b) the Attribute Spatio-Temporal Aggregation block (A-STA). Two additional Attribute-Aware Proxy (AAP) embedding modules are placed into the latter, before and after the temporal attention module. The class token broadcasting operation duplicates the class token for each frame so that it can attend spatial attention within that frame; conversely, class token averaging computes the average of all class token copies. Note that the Pre-Norm [79] layers before temporal attention and spatial attention are omitted.

Fig. 4: The detailed module design of the Attribute-Aware Proxy (AAP) embedding module. The Attribute-Aware Proxy Embedding denotes a learnable matrix that is used as the query of the attention operation. For simplicity, this figure only shows the single-head version of the AAP embedding module, and the scaling operation before the softmax is omitted.
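A condensed PyTorch sketch of the STA forward pass of Eqn. (3) follows, including the class-token broadcasting before spatial attention and the averaging afterwards. The placement of LayerNorm, the use of nn.MultiheadAttention, and the choice to prepend the class token are illustrative decisions of ours.

```python
import torch
import torch.nn as nn

class STABlock(nn.Module):
    """Sketch of Eqn. (3): temporal attention, then spatial attention with a
    broadcast class token, each added back with a learnable residual weight."""
    def __init__(self, dim=768, heads=12):
        super().__init__()
        self.norm_t = nn.LayerNorm(dim)
        self.norm_s = nn.LayerNorm(dim)
        self.attn_t = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.attn_s = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.alpha = nn.Parameter(torch.ones(1))   # residual weight for temporal attention
        self.beta = nn.Parameter(torch.ones(1))    # residual weight for spatial attention

    def forward(self, S, c):                       # S: (T, N, d), c: (1, d)
        T, N, d = S.shape
        # Temporal attention: one length-T sequence per spatial position.
        x = self.norm_t(S).permute(1, 0, 2)        # (N, T, d)
        x, _ = self.attn_t(x, x, x)
        S1 = S + self.alpha * x.permute(1, 0, 2)   # S' = S + alpha * SA_t(LN(S))
        # Broadcast the class token to every frame and run spatial attention.
        y = torch.cat([c.expand(T, 1, d), S1], dim=1)          # (T, N+1, d)
        z = self.norm_s(y)
        z, _ = self.attn_s(z, z, z)
        y = y + self.beta * z                      # Concat(S', c) + beta * SA_s(...)
        # Average the per-frame class-token copies back into a single class token.
        return y[:, 1:, :], y[:, :1, :].mean(dim=0)

S, c = torch.randn(8, 98, 768), torch.randn(1, 768)
S_out, c_out = STABlock()(S, c)                    # (8, 98, 768) and (1, 768)
```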
C. Attribute-Aware Proxy Embedding Module

Local patch tokens usually contain rich attributive information, e.g., glasses, umbrellas, logos, and so on. Even if a single attribute is not discriminative enough to recover one's identity, the combination of a pedestrian's rich attributes should be discriminative, as each attribute eliminates a certain degree of uncertainty. Rather than directly aggregating into a "coarse" class token, we introduce the Attribute-Aware Proxy (AAP) embedding module to directly extract attribute features from a single-frame or multi-frame patch token sequence.

Practically, the AAP embeddings are formed by a learnable matrix with anisotropic initialization to ensure the richness of the learned attributes. It can be considered an "attribute bank" that serves as the query of the attention operation to match the feature representations of the input patch tokens. Specifically, the AAP embeddings interact with the keys of the patch token sequence. Finally, the resulting attention map is used to re-weight the values, generating the attribute representations of the specific video clip with the same dimension as the AAPs. Formally, the AAP embedding module can be written as follows:

Q, K, V = P_Q, SW_k, SW_v,
AAP(S) = Softmax(QK^T / √d′) V,    (4)

where we use the multi-head version of the AAP embedding module in practice, with the same multi-head setting as SA(·) in Eqn. (1). Note that the spatio-temporal input S here can also be Ŝ ∈ R^{N̂×d} for spatial-only use. Compared with SA(·), the newly proposed AAP module treats the query Q in Eqn. (1) as a set of learnable parameters P_Q ∈ R^{N_a×d′}, where N_a ≪ N is a hyper-parameter that indicates the number of AAPs. By controlling N_a, the AAP module can be given a manually defined capacity, which provides flexibility for various real applications.

As shown in Fig. 2, both Stage I and Stage III employ the proposed AAP embedding modules. Specifically, in Stage I, the AAP module is first used to generate attribute representations from a multi-frame sequence of patch tokens S ∈ R^{T×N×d} for similarity measurement.
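Below is a single-head sketch of Eqn. (4) in which the query is the learnable proxy matrix P_Q rather than a projection of the input. The class name, the number of proxies, and the Gaussian initialization standing in for the anisotropic initialization are assumptions of ours.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AAPModule(nn.Module):
    """Single-head sketch of Eqn. (4): N_a learnable attribute proxies act as the
    query and pool attribute-specific features from the patch tokens."""
    def __init__(self, dim=768, num_proxies=16):
        super().__init__()
        # Learnable "attribute bank" P_Q; independent random init keeps the
        # proxies different from one another so they can attend to different cues.
        self.P_Q = nn.Parameter(torch.randn(num_proxies, dim) * 0.02)
        self.Wk = nn.Linear(dim, dim, bias=False)
        self.Wv = nn.Linear(dim, dim, bias=False)

    def forward(self, S):                     # S: (L, d) flattened patch tokens
        K, V = self.Wk(S), self.Wv(S)
        A = F.softmax(self.P_Q @ K.t() / K.shape[-1] ** 0.5, dim=-1)   # (N_a, L)
        return A @ V                          # (N_a, d) attribute representations

tokens = torch.randn(8 * 98, 768)             # multi-frame patch tokens, flattened
attrs = AAPModule()(tokens)                   # (16, 768): one vector per attribute proxy
```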
Although we do not have any attribute-level annotations, we hope the AAP module can automatically learn a rich set of implicit attributes from the entire training dataset, while the resulting attribute representations also present discriminative power complementary to ID-only representations. To achieve this goal, an ID-level supervision signal is first imposed on the combination of the learned attribute representations to constrain their discriminative power. In addition, we initialize the AAPs with anisotropic distributions to capture diverse implicit attribute representations. In practice, we find, somewhat surprisingly, that this anisotropy is maintained after model training, which means the optimized AAPs can respond to a set of differentiated attributes. Moreover, the number of AAPs can be relatively large compared with the single class token, so as to cover rich attribute information. In this sense, both the richness and the diversity of the learned implicit attributes can be guaranteed.

In Stage III, we further insert two intra-frame AAP embedding modules before and after the temporal attention of each STA to conduct attribute-aware temporal interaction. Such a modified STA block is named A-STA and is illustrated in Fig. 3. In A-STA, semantically related attributes in different frames undergo inter-frame interaction to model their temporal relations. In the end, after the temporal attention, we set N_a equal to N for the second AAP embedding module so that it outputs N tokens, keeping the input and output consistent.

D. Identity-Aware Proxy Embedding Module

Extracting a discriminative identity representation is also crucial for video-based re-ID.
To this end, the Identity-Aware Proxy (IAP) embedding module is proposed for effective and efficient discriminative representation generation. In previous works, joint space-time attention has shown promising results [4], [7], as it accelerates information aggregation by applying self-attention over the spatial and temporal dimensions jointly. However, its quadratic computational overhead limits its applicability. The IAP embedding module is proposed to address this issue: it performs joint space-time attention with high efficiency while maintaining the discrimination of the identity feature representation.

The IAP module contains a set of identity prototypes, which are represented as two learnable matrices. In practice, we exploit them to replace the keys {p_K^i}_{i=1}^M ∈ P_K and values {p_V^i}_{i=1}^M ∈ P_V of the attention operation, where P_K, P_V ∈ R^{M×d′}, and M ∈ N+ denotes the number of identity prototypes and determines the capacity of the IAP module (usually M ≪ N). As shown in Fig. 5, an attention map A ∈ R^{M×N} is first calculated to present the affinity between prototype-patch pairs, so that each element in A reflects how close a patch token is to a specific identity prototype. This attention map is then sparsified by successively applying an L1 normalization and a softmax normalization along M and N, respectively. At last, the class token c, i.e., the first row of V, is updated by the multiplication of V and A.
Such an operation aggregates the most discriminative identity features from the entire patch token sequence. Formally, given the spatio-temporal token sequence S, the output of the IAP module can be calculated as follows:

Q, K, V = SW_q, P_K, P_V,
A = Softmax(L1Norm(QK^T) / √d′),
IAP(S) = AV,    (5)

where K and V are not conditioned on the input S but are learnable parameters. Here, we insert an L1 normalization layer before the softmax operation in Eqn. (5), resulting in double normalization [30], [31]. Such a scheme performs patch token re-coding to reduce the noise of the patch representations, leading to robust identification results.

Fig. 5: The detailed module design of the Identity-Aware Proxy (IAP) embedding module. The IAP embedding denotes the learnable matrix used to calculate the key or value of the attention operation. Here we only show the single-head version of the IAP embedding module and omit the scaling operation. In such a scheme, the output token sequence can be considered a reconstruction by a group of IAPs, which tends to reserve the most discriminative identity features.

Specifically, the learnable matrix P_K matches the input tokens through the double normalization operation to generate the affinity map A. The input tokens are thereupon re-coded through a projection of P_V along A. Since the numbers of learnable vectors in P_K and P_V are much smaller than the number of input tokens, the above operation represents each token in a more compact space (i.e., as a linear combination of the vectors in P_V), effectively suppressing irrelevant information for re-ID.
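A single-head sketch of Eqn. (5) is shown below: the keys and values are M learnable identity prototypes, so the cost grows linearly with the number of input tokens. The axis choices for the double normalization follow our reading of the description above and may differ from the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class IAPModule(nn.Module):
    """Single-head sketch of Eqn. (5): keys/values are M learnable identity prototypes."""
    def __init__(self, dim=768, num_prototypes=8):
        super().__init__()
        self.Wq = nn.Linear(dim, dim, bias=False)
        self.P_K = nn.Parameter(torch.randn(num_prototypes, dim) * 0.02)
        self.P_V = nn.Parameter(torch.randn(num_prototypes, dim) * 0.02)

    def forward(self, S):                                   # S: (L, d), class token in row 0
        Q = self.Wq(S)                                      # (L, d)
        logits = Q @ self.P_K.t() / Q.shape[-1] ** 0.5      # token-prototype affinities, (L, M)
        # Double normalization: L1-normalize over the prototype axis, then softmax
        # over the token axis (our reading of "along M and N, respectively").
        A = F.softmax(F.normalize(logits, p=1, dim=-1), dim=0)
        tokens = A @ self.P_V                               # each token re-coded by prototypes
        return tokens[0]                                    # updated class token

seq = torch.cat([torch.randn(1, 768), torch.randn(8 * 98, 768)])   # [class; patch tokens]
cls_out = IAPModule()(seq)                                          # (768,)
```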
Moreover, IAP(·) has O(N) computational complexity, since the number of identity prototypes M is fixed and is usually much smaller than the total number of patch tokens of a specific video tracklet (e.g., 64 in our experiments). The proposed IAP embedding module therefore allows all spatio-temporal patch tokens to be processed in parallel for effective and efficient feature extraction.

E. Temporal Patch Shuffling

To improve the robustness of the model, we propose a novel data augmentation scheme termed Temporal Patch Shuffling (TPS). Suppose we have a patch sequence R_i = {r_{i1}, …, r_{it}, …, r_{iT}} from the same video clip, where the subscript i denotes a specific spatial location. As shown in Fig. 6, the proposed TPS randomly permutes the patch tokens in R_i and refills the shuffled sequence R̂_i to form a new video clip for training. As illustrated in Fig. 6, we can simultaneously select multiple spatial regions in one video clip for shuffling. In the inference phase, the original video clip is directly fed into the model for identification.
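A minimal sketch of TPS as a training-time operation on the (T, N, d) token tensor is given below; the fraction of shuffled spatial positions is an assumed knob, not a value taken from the paper.

```python
import torch

def temporal_patch_shuffle(X, shuffle_ratio=0.1):
    """Sketch of TPS: for a random subset of spatial positions, permute the T patch
    tokens of that position along the temporal axis (applied during training only)."""
    T, N, d = X.shape
    X = X.clone()
    num_shuffled = max(1, int(shuffle_ratio * N))
    positions = torch.randperm(N)[:num_shuffled]          # spatial positions to shuffle
    for i in positions:
        X[:, i, :] = X[torch.randperm(T), i, :]           # shuffle the sequence R_i along time
    return X

X = torch.randn(8, 98, 768)      # T frames, N patch tokens per frame
X_aug = temporal_patch_shuffle(X)
```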
TPS introduces appearance shifts and motion changes from which the network learns to extract generalizable and invariant visual cues. In addition, the scale of the available training data can be greatly extended by such a scheme, which helps to prevent the network from overfitting. In our experiments, we treat TPS as a plug-and-play operation and apply it at the stem of the network to promote the entire network to its best performance. The following section conducts ablation studies to explore where to insert TPS and to what extent TPS should be applied for optimal training results.

Fig. 6: Visualization of Temporal Patch Shuffling (TPS). f_t represents the t-th frame, and r_{it} the patch at spatial position i in the t-th frame. TPS is a built-in data augmentation scheme that randomizes the order of a patch sequence sampled from spatial position i. As a result, for example, the patch in the red box is transferred from the 5th frame to the 1st frame.

IV. EXPERIMENT

A. Datasets and evaluation protocols

In this paper, we evaluate our proposed MSTAT on three widely used video-based person re-ID benchmarks: iLIDS-VID [69], DukeMTMC-VideoReID (DukeV) [59], and MARS [102].

1) iLIDS-VID [69] is comprised of 600 video tracklets of 300 persons captured by two cameras. In these video tracklets, frame numbers range from 23 to 192. The test set contains 150 identities, the same number as the training set.
2) DukeMTMC-VideoReID [59] is a large-scale video-based benchmark which contains 4,832 videos sharing 1,404 identities. In the following sections, we use the abbreviation "DukeV" for the DukeMTMC-VideoReID dataset. The video sequences in the DukeV dataset are commonly longer than videos in other datasets, containing 168 frames on average.

3) MARS [102] is one of the largest video re-ID benchmarks, collecting 1,261 identities existing in around 20,000 video tracklets captured by 6 cameras. Frames within a video tracklet are relatively more misaligned since they are obtained by a DPM detector [27] and a GMMCP tracker [22] rather than hand drawing. Furthermore, around 3,200 distractor tracklets are mixed into the dataset to simulate real-world scenarios.

For evaluation on the MARS and DukeV datasets, we use two metrics: the Cumulative Matching Characteristic (CMC) curves [8] and mean Average Precision (mAP), following previous works [16], [51], [94], [99]. However, in the gallery set of iLIDS-VID, there is merely one correct match for each query. For this benchmark, only cumulative accuracy is reported.

TABLE I: Result comparison with state-of-the-art video-based person re-ID methods on MARS, DukeMTMC-VideoReID (Duke-V), and iLIDS-VID. * denotes the workshop of the conference.

Method           | Source  | Backbone         | MARS (R-1 / R-5 / mAP) | Duke-V (R-1 / R-5 / mAP) | iLIDS-VID (R-1 / R-5)
SCAN [94]        | TIP19   | Pure-CNN         | 87.2 / 95.2 / 77.2     | –                        | 88.0 / 96.7
VRSTC [39]       | CVPR19  | Pure-CNN         | 89.8 / – / 85.1        | 96.9 / – / 96.2          | 86.6 / –
M3D [39]         | AAAI19  | Pure-CNN         | –                      | 96.9 / – / 96.2          | 74.0 / 94.3
MG-RAFA [99]     | CVPR20  | Pure-CNN         | 88.8 / 97.0 / 85.9     | –                        | 88.6 / 98.0
AFA [16]         | ECCV20  | Pure-CNN         | 90.2 / 96.6 / 82.9     | –                        | 88.5 / 96.8
AP3D [29]        | ECCV20  | Pure-CNN         | 90.7 / – / 85.6        | 97.2 / – / 96.1          | 88.7 / –
TCLNet [16]      | ECCV20  | Pure-CNN         | 89.8 / – / 85.1        | 96.9 / – / 96.2          | 86.6 / –
A3D [15]         | TIP20   | Pure-CNN         | 86.3 / 95.5 / 80.4     | –                        | 86.7 / 98.6
GRL [52]         | CVPR21  | Pure-CNN         | 90.4 / 96.7 / 84.8     | 95.0 / 98.7 / 93.8       | 90.4 / 98.3
STRF [3]         | ICCV21  | Pure-CNN         | 90.3 / – / 86.1        | 97.4 / – / 96.4          | 89.3 / –
Fang et al. [25] | WACV21  | Pure-CNN         | 87.9 / 97.2 / 83.2     | –                        | 88.6 / 98.6
TMT [51]         | Arxiv21 | CNN-Transformer  | 91.2 / 97.3 / 85.8     | –                        | 91.3 / 98.6
Liu et al. [47]  | CVPR21* | CNN-Transformer  | 91.3 / – / 86.5        | 96.7 / – / 96.2          | –
STT [95]         | Arxiv21 | CNN-Transformer  | 88.7 / – / 86.3        | 97.6 / – / 97.4          | 87.5 / 95.0
ASANet [10]      | TCSVT22 | Pure-CNN         | 91.1 / 97.0 / 86.0     | 97.6 / 99.9 / 97.1       | –
MSTAT (ours)     | –       | Pure-Transformer | 91.8 / 97.4 / 85.3     | 97.4 / 99.3 / 96.4       | 93.3 / 99.3

B. Implementation details

Our proposed MSTAT framework is built based on the PyTorch toolbox [58]. In our experiments, it is run on a single NVIDIA A100 GPU (40 GB memory). We resize each video frame to 224 × 112 for the above benchmarks. Typical data augmentation schemes are involved in training, including horizontal flipping, random cropping, and random erasing. For all stages, the STA modules are pretrained on an action recognition dataset, K600 [9], while the other aforementioned modules are randomly initialized.
In the training phase, if not specified, we sample L = 8 frames each time for a video tracklet and set the batch size to 24. In each mini-batch, we randomly sample two video tracklets from different cameras for each person. We supervise the network with a cross-entropy loss with label smoothing [64], combined with the widely used BatchHard triplet loss [36]. Specifically, we impose supervision signals separately on the concatenated attribute representation from the AAP embedding module in Stage I and on the output class tokens from Stage II and Stage III. The learning rate is initially set to 1e-3 and is multiplied by 0.75 after every 25 epochs. The entire network is updated by an SGD optimizer in which the weight decay and Nesterov momentum are set to 5 × 10⁻⁵ and 0.9, respectively.

In the test phase, following [29], [95], we randomly sample 32 frames as a sequence from each original tracklet in either the query or the gallery. For each sequence, the attribute representation from Stage I and the output class tokens from Stage II and Stage III are concatenated as the overall representation. Following the widely used protocol, we compute the cosine similarity between each query-gallery pair using their overall representations. Then, the CMC curves and the mAP can be calculated based on the predicted ranking list and the ground-truth identity of each query. Note that we do not use any re-ranking technique.
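To make the supervision scheme concrete, the sketch below combines a label-smoothed cross-entropy loss with a batch-hard triplet loss over the three supervised outputs. The batch-hard implementation, the margin of 0.3, the smoothing factor of 0.1, and the equal loss weights are illustrative assumptions; only the loss types and the three supervised outputs come from the description above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

ce = nn.CrossEntropyLoss(label_smoothing=0.1)   # cross-entropy with label smoothing

def batch_hard_triplet(feats, labels, margin=0.3):
    """Minimal BatchHard triplet loss: hardest positive / hardest negative per anchor."""
    dist = torch.cdist(feats, feats)                        # pairwise Euclidean distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    pos = (dist + (~same) * -1e9).max(dim=1).values         # hardest positive per anchor
    neg = (dist + same * 1e9).min(dim=1).values             # hardest negative per anchor
    return F.relu(pos - neg + margin).mean()

def total_loss(stage_feats, stage_logits, labels):
    """Sum CE + triplet over the supervised outputs (the attribute representation of
    Stage I and the class tokens of Stages II and III)."""
    loss = 0.0
    for f, z in zip(stage_feats, stage_logits):
        loss = loss + ce(z, labels) + batch_hard_triplet(f, labels)
    return loss

# Toy usage with a single supervised output and 100 hypothetical identities.
feats, logits = torch.randn(8, 768), torch.randn(8, 100)
labels = torch.randint(0, 100, (8,))
loss = total_loss([feats], [logits], labels)
```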
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Compared with the state of the arts In Table I, we make a comparison on three bench- mark datasets between our method and video-based person re-ID methods from 2019 to 2021, including M3D [39], GRL [52], STRF [3], Fang et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' [25], TMT [51], Liu et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' [47], ASANet [10].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' According to their backbones, these re-ID methods can be roughly divided into the following types: Pure-CNN, CNN- Transformer Hybrid, and Pure-Transformer methods.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' In real-world applications, rank-1 accuracy [8] re- flects what extent a method can find the most confident positive sample [85], and relatively high rank-1 accu- racy can save time in confirmation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' As the first method based on Pure-Transformer for video-based re-ID so far, we achieve state-of-the-art results in rank-1 ac- curacy on three benchmarks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Our approach especially attains rank-1 accuracy of 91.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content='8% and rank-5 accuracy of 97.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content='4% on the largest-scale benchmark, MARS.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' It is noteworthy that our MSTAT outperforms the best pure CNN-based methods using ID annotations only by a margin of 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content='1% and a CNN-Transformer hybrid method, TMT, by 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content='6% in MARS rank-1 accuracy.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Compared to our proposed method, TCLNet [16] explicitly captures complementary features over dif- ferent frames, and GRL [52] devises a guiding mech- anism for reciprocating feature learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' However, the designed modules in these methods commonly take as input the deep spatial feature maps extracted by a CNN backbone (e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content='g.' 
Similar to ours, TMT [51] and M3D [39] process video tracklets from multiple views to extract and fuse multi-view features. Notably, in all stages of MSTAT, the intermediate features are spatio-temporal and can be iteratively updated to capture spatio-temporal cues with different emphases. ASANet [10] exploits explicit ID-relevant attributes (e.g., gender, clothes, and hair) and ID-irrelevant attributes (e.g., pose and motion) on a multi-branch network. Despite the performance gains, the demand for attribute annotations may limit its application in large-scale scenarios. In comparison with existing methods, our method aggregates spatio-temporal information in a unified manner and explicitly capitalizes on implicit attribute information to improve recognizability under challenging scenarios. In conclusion, our method achieves state-of-the-art performance of 91.8% and 93.3% rank-1 accuracy on MARS and iLIDS-VID, respectively.

TABLE II: Ablation study on the three stages of MSTAT on MARS. Test Protocol denotes the final feature representation used for similarity measurement. The network architecture and training hyper-parameters remain the same for each experiment.

Method   Test Protocol        Rank-1   Rank-5   mAP
MSTAT    Stage I              89.2     96.7     82.4
         Stage II             89.2     96.5     83.0
         Stage III            89.8     96.5     83.0
         Stage I & II         91.2     97.3     85.0
         Stage I & III        90.5     97.2     83.9
         Stage II & III       90.6     96.9     84.6
         Stage I, II, & III   91.8     97.4     85.3
D. Effectiveness of the Multi-Stage Framework Architecture

To evaluate the effectiveness of the three stages in our proposed MSTAT, we carry out a series of ablation experiments whose results are displayed in Table II. After the three stages are jointly trained, we first evaluate each stage separately using its output feature representation. Then, we concatenate the outputs of two or more stages to evaluate whether each stage is effective. Each single stage attains a rank-1 accuracy ranging from 89.2% to 89.8%, whereas their combinations yield a significant increase of over 0.8%. Remarkably, while Stage I and Stage II each secure only 89.2% rank-1 accuracy, their integration attains up to 91.2%, surpassing them by a 2% margin. One can attribute this result to their different emphases: one stage focuses on attribute-associated features and the other on identity-associated features. Eventually, when all three stages are used, MSTAT reaches 91.8% rank-1 accuracy, higher than any two-stage combination. Overall, these experiments demonstrate that the three stages have different preferences toward features and can complement each other through simple concatenation.
E. Effectiveness of Key Components

To demonstrate the effectiveness of our proposed MSTAT, we conduct a range of ablative experiments on the largest public benchmark, MARS.

1) Effectiveness of the Attribute-Aware Proxy Embedding Module: As shown in Fig. 7, we evaluate MSTAT with different numbers of AAPs (i.e., Na in Sec. III-C) in the AAP embedding module in the last layer of Stage I.

Fig. 7: Ablation study on the attribute-aware proxy (AAP) embedding module for attribute extraction on MARS. "Base" is the network without attribute extraction using AAP in training and testing. AAP-k (k = 8, 16, 24, 32) indicates the network whose AAP embedding module in Stage I has k AAPs.

Fig. 8: Ablation study on A-STA. "Base" is the network that consists of STA only. A-STA-k (k = 16, 32, 48, 64) represents the network in which Stage III is equipped with A-STA layers, each with k AAPs.

The figure reveals that 24 proxies are optimal for attributive information extraction, attaining the best performance in terms of rank-1 and rank-5 accuracy. In contrast to the baseline, MSTAT sees over 2% growth in rank-1 accuracy and around 1% in rank-5 accuracy. However, a redundant or insufficient number of AAPs may cause a minor performance drop, since the proxies may then attend to noisy or useless attributes.
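For intuition, the sketch below shows one plausible way an attribute-aware proxy embedding could be realized: Na learnable proxy vectors cross-attend over the patch tokens, their outputs are concatenated as the attribute representation, and the attention weights act as per-proxy spatial attention maps. This is an illustrative assumption, not the module defined in Sec. III-C, which may differ in its exact attention design.

    import torch
    import torch.nn as nn

    class AttributeAwareProxySketch(nn.Module):
        # Illustrative sketch only: Na learnable proxies attend over patch tokens.
        # dim must be divisible by num_heads.
        def __init__(self, dim, num_proxies=24, num_heads=8):
            super().__init__()
            self.proxies = nn.Parameter(torch.randn(num_proxies, dim) * 0.02)
            self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
            self.norm = nn.LayerNorm(dim)

        def forward(self, patch_tokens):
            # patch_tokens: (B, N, dim) spatial(-temporal) tokens of one stage.
            B = patch_tokens.size(0)
            queries = self.proxies.unsqueeze(0).expand(B, -1, -1)   # (B, Na, dim)
            out, attn_maps = self.attn(queries, patch_tokens, patch_tokens,
                                       need_weights=True)
            out = self.norm(out)                                    # (B, Na, dim)
            # Concatenated proxy outputs form the attribute representation;
            # attn_maps (B, Na, N) are the per-proxy attention maps that
            # Fig. 11-style visualizations would display.
            return out.flatten(1), attn_maps

Under this reading, AAP-24 in Fig. 7 corresponds to num_proxies = 24, and changing Na trades off attribute coverage against noisy or redundant proxies.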
In summary, the AAP embedding module for attribute clue extraction boosts rank-1 and rank-5 accuracy with negligible computational overhead.

Attribute-Aware Proxy (AAP) embedding modules are also used in A-STA, a variant of STA for attribute-aware temporal feature fusion in Stage III. As shown in Fig. 8, we conduct a series of experiments to explore whether A-STA is effective and how many AAPs are appropriate for it (also corresponding to Na in Sec. III-C). The results reveal that the baseline model fails to reach 90% rank-1 accuracy or 97% rank-5 accuracy. As the number of AAPs increases, these two metrics grow to 91.8% and 97.4%, respectively. We therefore attribute the performance gain to A-STA, which allows for attribute-aware temporal interaction and offers a viewpoint on videos different from that of Stage II. Moreover, due to the redundancy of temporal information in many video re-ID scenarios, as discussed in [16], A-STA with too many AAPs produces meaningless attributes, which can explain why the performance drops once A-STA has too many AAPs.
In conclusion, our proposed AAP embedding module can be used for (1) the extraction of informative attributes when plugged into any Transformer layer, and (2) attribute-aware temporal interaction when a temporal attention module is sandwiched between two of them. Both functionalities bring a significant increase in performance, demonstrating their effectiveness.

2) Effectiveness of the Identity-Aware Proxy Embedding Module: As shown in Table III, MSTAT without IAP embedding modules achieves only 88.2% rank-1 accuracy and 96.4% rank-5 accuracy.
However, an IAP embedding module improves rank-1 accuracy by 2.8% or 2.2% when it takes the place of STA in Stage II or Stage III, respectively. Finally, placing IAP embedding modules in the last layers of both Stage II and Stage III further improves rank-1 accuracy by 0.8% and rank-5 accuracy by 0.4%. These ablation results demonstrate the IAP embedding module's ability to generate discriminative representations efficiently. Intuitively, we place the IAP embedding module only in the last few layers because it may discard non-discriminative features that should be preserved in shallow layers.

3) Effectiveness of Temporal Patch Shuffling: To evaluate the effectiveness of Temporal Patch Shuffling (TPS), we assign different probabilities of applying TPS to each training video sample. Note that in the following experiments, the number of spatial positions to shuffle is set to 5 whenever TPS is applied to a sample. As shown in Table IV, a 20% probability provides the best result, leading to a gain of 0.3% in rank-1 accuracy. However, a 60% or 80% probability results in a 0.1% or 0.2% rank-1 accuracy drop, respectively, mainly due to heavy noise. In summary, a proper level of TPS is an effective data augmentation method for Transformer-based video person re-ID. Further, rather than preserving temporal motion (an ordered sequence of patches), TPS improves re-identification accuracy by encouraging the network to learn temporal coherence from shuffled patch tokens.
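To make the augmentation concrete, the following is a minimal sketch of Temporal Patch Shuffling as described above: with a given probability per training clip, the patches at a few randomly chosen spatial positions have their temporal order permuted. The tensor layout and the choice of applying the shuffle to embedded patch tokens (rather than raw image patches) are assumptions for illustration.

    import torch

    def temporal_patch_shuffling(tokens, prob=0.2, num_positions=5):
        # tokens: (T, N, D) patch tokens of one clip
        # (T frames, N spatial patch positions per frame, D channels).
        if torch.rand(()) > prob:
            return tokens                               # leave most clips untouched
        T, N, D = tokens.shape
        out = tokens.clone()
        positions = torch.randperm(N)[:num_positions]   # spatial positions to shuffle
        for p in positions:
            order = torch.randperm(T)                   # independent temporal permutation
            out[:, p, :] = tokens[order, p, :]
        return out

Applied per training sample, prob = 0.2 and num_positions = 5 correspond to the best-performing setting reported in Table IV.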
TABLE III: Ablation study on the IAP embedding module. Stage II and Stage III in this table mean that an IAP embedding module is appended to the last layer of Stage II or Stage III, respectively. The IAP embedding module brings improvements to every single stage, and MSTAT shows the best performance when it is placed in both stages.

Method                      Position          Rank-1   Rank-5
w/o IAP embedding module    -                 88.2     96.4
w/ IAP embedding module     Stage II          91.0     97.0
                            Stage III         90.4     97.0
                            Stage II & III    91.8     97.4

TABLE IV: Ablation study on Temporal Patch Shuffling. A proper level of shuffling brings a slight improvement, but learning may degrade as the shuffling becomes increasingly overwhelming.

Method          Prob.   Rank-1   Rank-5   mAP
MSTAT w/o TPS   0%      91.5     97.5     85.2
MSTAT w/ TPS    20%     91.8     97.4     85.3
                40%     91.7     97.3     85.2
                60%     91.4     97.5     85.1
                80%     91.3     97.1     85.1

Fig. 9: Study on the effect of the training video sequence length on MARS.

F. Effect of Video Sequence Length

To investigate how temporal noise influences the training of MSTAT, we conduct experiments with training videos of varied lengths. In Fig. 9, each experiment uses training tracklets of a different length, while all experiments share an identical evaluation setting with a fixed sequence length of 32. Every experiment is stopped once the loss has not decreased for ten epochs. On the one hand, rank-1 accuracy shows an upward trend as temporal noise gradually decreases, reaching a peak at a training sequence length of 8. On the other hand, temporal noise shows no apparent correlation with rank-5 accuracy or mAP. These results show that our model gains up to 0.6% rank-1 accuracy by learning better temporal features from the data.
However, rank-5 accuracy and mAP benefit little from the noise reduction, from which we can speculate that in most video re-ID cases, learning temporal features is less important than learning appearance features, as temporal features account for only 0.6% of rank-1 accuracy and 0.2% of rank-5 accuracy. Similar observations can be found in [51].

G. Comparison Among Metric Learning Methods

Metric learning aims to regularize the sample distribution in feature space. Usually, metric learning losses constrain the compactness of the intra-class distribution and the sparsity of the overall distribution. To explore which strategy cooperates best with our framework, we compare a range of classic metric learning loss functions on iLIDS-VID, as shown in Table V. Note that these losses are scaled to the same magnitude to ensure fairness. Notably, the OIM loss [78] and the BatchHard triplet loss [36], both widely used in re-ID, outperform the ArcFace [23] and SphereFace [50] losses by a large margin, since the latter two suffer from early overfitting in our experiments.
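As a concrete reference for the comparison above, the sketch below implements batch-hard triplet mining in the spirit of [36]: each anchor is paired with its hardest (farthest) positive and hardest (closest) negative within the mini-batch. The margin value and the use of Euclidean distance are assumptions, not settings reported here.

    import torch
    import torch.nn.functional as F

    def batch_hard_triplet_loss(feats, labels, margin=0.3):
        # feats: (B, D) embeddings, labels: (B,) identity labels.
        dist = torch.cdist(feats, feats, p=2)                 # (B, B) pairwise distances
        same = labels.unsqueeze(0) == labels.unsqueeze(1)     # same-identity mask (incl. self)
        eye = torch.eye(len(labels), dtype=torch.bool, device=feats.device)

        # Hardest positive: farthest sample with the same identity (excluding self).
        pos_dist = dist.masked_fill(~same | eye, float('-inf')).max(dim=1).values
        # Hardest negative: closest sample with a different identity.
        neg_dist = dist.masked_fill(same, float('inf')).min(dim=1).values

        return F.relu(pos_dist - neg_dist + margin).mean()

With the mini-batch sampling described earlier (two tracklets per identity), every anchor has at least one positive and one negative in the batch, which batch-hard mining relies on.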
TABLE V: Comparison among metric learning loss functions on iLIDS-VID, where * denotes the loss used in our implementation. For ArcFace and SphereFace, we test three margins and report the best result: (1) the default margin, (2) a margin 20% larger than the default, and (3) a margin 20% smaller than the default.

Metric learning loss     Rank-1   Rank-5
w/o metric learning      66.0     90.0
ArcFace [23]             73.3     90.7
SphereFace [50]          66.7     89.3
OIM [78]                 89.3     98.3
BatchHard* [36]          93.3     99.3
Fig. 10: Visualization of the similarity matrix of the attribute-aware proxies trained on MARS. The maximal similarity between all pairs is around 0.2, indicating that the AAPs learn to capture diverse attributes.

H. Visualization

To better understand how the proposed framework works, we visualize the AAP embedding module. In Fig. 10, we show the diversity of the implicit attributes through the similarity matrix of the 24 AAPs. The figure implies that the AAPs are anisotropic, covering different attribute features that appear in the training dataset. Specifically, as shown in Fig. 11, we randomly select the tracklets of two pedestrians and adopt attention map visualization as an indicator of each AAP's concentration. In practice, we process the raw attention maps first with several average filters and then with thresholding to deliver smooth visual effects instead of grid-like maps. In these heat maps, a brighter color denotes a higher attention value.
Despite the absence of attribute-level supervision, Fig. 11 shows that some AAPs learn to attend to a local region with a specific meaning as an identity cue. For example, the AAP shown in white in video clip (a) automatically learns to cover the logo on the T-shirt, while the one in (b) captures the head of the woman. Moreover, we display the t-SNE visualization of the iLIDS-VID test set in Fig. 12; for a better visual effect, it only contains the first 1/3 of the IDs in the test set. We also provide the corresponding quantitative evaluation results in Table VI, measured by the normalized average intra-class distance and the minimum inter-class distance (0-2) over the entire test set.

TABLE VI: Quantitative evaluation on iLIDS-VID. "Intra" denotes the normalized average intra-class distance, and "Inter" denotes the minimum inter-class distance. Here, * means that the metric is computed only on samples with a correct rank-1 match.

Method            Intra (↓)   Intra* (↓)   Inter (↑)   Rank-1 (↑)
Baseline [7]      0.4572      0.4495       0.4704      0.873
MSTAT w/o attr.   0.4517      0.4469       0.4644      0.913
MSTAT (ours)      0.4410      0.4389       0.5012      0.933
Fig. 11: Visualization of attribute-aware proxies for two different pedestrians, (a) and (b), on MARS. Attention heat maps of four consecutive frames from the AAP embedding module in Stage I are displayed.

As a result, MSTAT reduces the average intra-class distance from 0.4572 for the baseline to 0.4410 and enlarges the minimum inter-class distance from 0.4704 to 0.5012. Further, to eliminate the influence of accuracy, we also measure the intra-class distance only among correctly matched samples and observe a similar result. These results explain why the t-SNE visualization of MSTAT appears sparser.
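For reference, the sketch below shows one way the two distribution statistics in Table VI could be computed. The exact definitions are not spelled out in the text, so the sketch assumes L2-normalized features and cosine distance (whose 0-2 range matches the one stated above), averages intra-class distances within each identity and then across identities, and takes the minimum distance over all cross-identity pairs.

    import torch
    import torch.nn.functional as F

    def intra_inter_stats(feats, labels):
        # feats: (B, D) overall representations, labels: (B,) identity labels.
        feats = F.normalize(feats, dim=-1)
        dist = 1.0 - feats @ feats.t()                   # cosine distances in [0, 2]
        same = labels.unsqueeze(0) == labels.unsqueeze(1)

        intra_per_id = []
        for pid in labels.unique():
            idx = (labels == pid).nonzero(as_tuple=True)[0]
            if idx.numel() < 2:
                continue                                 # need at least two samples per identity
            d = dist[idx][:, idx]
            n = idx.numel()
            off_diag = d[~torch.eye(n, dtype=torch.bool, device=feats.device)]
            intra_per_id.append(off_diag.mean())
        intra = torch.stack(intra_per_id).mean()         # average intra-class distance

        inter = dist.masked_fill(same, float('inf')).min()  # minimum inter-class distance
        return intra.item(), inter.item()

The "Intra*" column restricts the same computation to samples with a correct rank-1 match, as noted in the caption of Table VI.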
Fig. 12: t-SNE visualization of the iLIDS-VID test set. The numbers on the plots indicate person IDs. Compared to the baseline, MSTAT shows an increase in intra-class compactness and in the minimum inter-class distance over the entire test set.

V. CONCLUSION

This paper proposes a novel framework for video-based person re-ID, referred to as the Multi-Stage Spatio-Temporal Aggregation Transformer (MSTAT). To tackle the simultaneous extraction of local attributes and global identity information, MSTAT adopts a multi-stage architecture that extracts (1) attribute-associated, (2) identity-associated, and (3) attribute-identity-associated information from video clips, with all layers inherited from the vanilla Transformer. Further, to preserve informative attribute features and aggregate discriminative identity features, we introduce two proxy embedding modules, the Attribute-Aware Proxy embedding module and the Identity-Aware Proxy embedding module, into different stages. In addition, a patch-based data augmentation scheme, Temporal Patch Shuffling, is proposed to force the network to learn invariance to appearance shifts while enriching the training data. Extensive experiments show that MSTAT can extract attribute-aware features that are consistent across frames while preserving discriminative global identity information at different stages to attain high performance. Finally, MSTAT outperforms most existing state-of-the-art methods on three public video-based re-ID benchmarks.

Future work may focus on mining hard instances or local informative attribute regions for contrastive learning to further improve the model's accuracy.
Moreover, leveraging more unlabeled and multi-modal data to improve the model's effectiveness is also a potential research direction.

ACKNOWLEDGMENT

The work is supported in part by the Young Scientists Fund of the National Natural Science Foundation of China under Grant No. 62106154, by the National Key R&D Program of China under Grant No. 2021ZD0111600, by the Natural Science Foundation of Guangdong Province, China (General Program) under Grant No. 2022A1515011524, by the Guangdong Basic and Applied Basic Research Foundation under Grant No. 2017A030312006, by the CCF-Tencent Open Fund, by the Shenzhen Science and Technology Program ZDSYS20211021111415025, and by the Guangdong Provincial Key Laboratory of Big Data Computing, The Chinese University of Hong Kong (Shenzhen).

REFERENCES

[1]
[2]
[3] A. Aich, M. Zheng, S. Karanam, T. Chen, A. K. Roy-Chowdhury, and Z. Wu. Spatio-temporal representation factorization for video-based person re-identification. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 152-162, 2021.
M. Dehghani, G. Heigold, C. Sun, M. Lučić, and C. Schmid. ViViT: A video vision transformer. arXiv preprint arXiv:2103.15691, 2021.
[5] J. L. Ba, J. R. Kiros, and G. E. Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
[6] S. Bak and P. Carr. One-shot metric learning for person re-identification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2990–2999, 2017.
[7] G. Bertasius, H. Wang, and L. Torresani. Is space-time attention all you need for video understanding? arXiv preprint arXiv:2102.05095, 2021.
[8] R. M. Bolle, J. H. Connell, S. Pankanti, N. K. Ratha, and A. W. Senior. The relation between the ROC curve and the CMC. In Fourth IEEE Workshop on Automatic Identification Advanced Technologies (AutoID'05), pages 15–20. IEEE, 2005.
[9] J. Carreira, E. Noland,
A. Banki-Horvath, C. Hillier, and A. Zisserman. A short note about Kinetics-600. arXiv preprint arXiv:1808.01340, 2018.
[10] T. Chai, Z. Chen, A. Li, J. Chen, X. Mei, and Y. Wang. Video person re-identification using attribute-enhanced features. IEEE Transactions on Circuits and Systems for Video Technology, 2022.
[11] Z. Chang, Z. Yang, Y. Chen, Q. Zhou, and S. Zheng. Seq-Masks: Bridging the gap between appearance and gait modeling for video-based person re-identification.
In 2021 International Conference on Visual Communications and Image Processing (VCIP), pages 1–5. IEEE, 2021.
[12] C. Chen, M. Ye, M. Qi, J. Wu, Y. Liu, and J. Jiang. Saliency and granularity: Discovering temporal coherence for video-based person re-identification. IEEE Transactions on Circuits and Systems for Video Technology, 2022.
[13] D. Chen, H. Li, T. Xiao, S. Yi, and X. Wang. Video person re-identification with competitive snippet-similarity aggregation and co-attentive snippet embedding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1169–1178, 2018.
[14] G. Chen,
J. Lu, M. Yang, and J. Zhou. Spatial-temporal attention-aware learning for video-based person re-identification. IEEE Transactions on Image Processing, 28(9):4192–4205, 2019.
[15] G. Chen, J. Lu, M. Yang, and J. Zhou. Learning recurrent 3D attention for video-based person re-identification. IEEE Transactions on Image Processing, 29:6963–6976, 2020.
[16] G. Chen, Y. Rao, J. Lu, and J. Zhou. Temporal coherence or temporal motion: Which is more critical for video-based person re-identification? In European Conference on Computer Vision, pages 660–676. Springer, 2020.
[17] T. Chen,
L. Lin, R. Chen, X. Hui, and H. Wu. Knowledge-guided multi-label few-shot learning for general image recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(3):1371–1384, 2022.
[18] T. Chen, T. Pu, H. Wu, Y. Xie, L. Liu, and L. Lin. Cross-domain facial expression recognition: A unified evaluation benchmark and adversarial graph learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021.
[19] X. Chen, J. Xu, J. Xu, and S. Gao. OH-Former: Omni-relational high-order transformer for person re-identification. arXiv preprint
arXiv:2109.11159, 2021.
[20] D. Cheng, Y. Gong, S. Zhou, J. Wang, and N. Zheng. Person re-identification by multi-channel parts-based CNN with improved triplet loss function. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1335–1344, 2016.
[21] D. Chung, K. Tahboub, and E. J. Delp. A two stream Siamese convolutional neural network for person re-identification. In Proceedings of the IEEE International Conference on Computer Vision, pages 1983–1991, 2017.
[22] A. Dehghan, S. Modiri Assari, and M. Shah. GMMCP tracker: Globally optimal generalized maximum multi clique problem for multiple object tracking.
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4091–4099, 2015.
[23] J. Deng, J. Guo, N. Xue, and S. Zafeiriou. ArcFace: Additive angular margin loss for deep face recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4690–4699, 2019.
[24] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020.
[25] P. Fang, P. Ji, L. Petersson, and M. Harandi. Set augmented triplet loss for video person re-identification. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 464–473, 2021.
[26] M. Farenzena, L. Bazzani, A. Perina, V. Murino, and M. Cristani. Person re-identification by symmetry-driven accumulation of local features. In 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 2360–2367. IEEE, 2010.
[27] P. F. Felzenszwalb, R. B. Girshick,
D. McAllester, and D. Ramanan. Object detection with discriminatively trained part-based models. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(9):1627–1645, 2009.
[28] D. Gray and H. Tao. Viewpoint invariant pedestrian recognition with an ensemble of localized features. In European Conference on Computer Vision, pages 262–275. Springer, 2008.
[29] X. Gu, H. Chang, B. Ma, H. Zhang, and X. Chen. Appearance-preserving 3D convolution for video-based person re-identification. In European Conference on Computer Vision, pages 228–243. Springer, 2020.
[30] M.-H. Guo,
J.-X. Cai, Z.-N. Liu, T.-J. Mu, R. R. Martin, and S.-M. Hu. PCT: Point cloud transformer. Computational Visual Media, 7(2):187–199, 2021.
[31] M.-H. Guo, Z.-N. Liu, T.-J. Mu, and S.-M. Hu. Beyond self-attention: External attention using two linear layers for visual tasks. arXiv preprint arXiv:2105.02358, 2021.
[32] K. Han, A. Xiao, E. Wu, J. Guo, C. Xu, and Y. Wang. Transformer in transformer. arXiv preprint arXiv:2103.00112, 2021.
[33] X. Hao, S. Zhao, M. Ye, and J. Shen. Cross-modality person re-identification via modality confusion and center aggregation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 16403–16412, 2021.
[34] S. He, H. Luo, P. Wang, F. Wang, H. Li, and
W. Jiang. TransReID: Transformer-based object re-identification. arXiv preprint arXiv:2102.04378, 2021.
[35] T. He, X. Jin, X. Shen, J. Huang, Z. Chen, and X.-S. Hua. Dense interaction learning for video-based person re-identification. arXiv preprint arXiv:2103.09013, 2021.
[36] A. Hermans, L. Beyer, and B. Leibe. In defense of the triplet loss for person re-identification. arXiv preprint arXiv:1703.07737, 2017.
[37]
S. Hochreiter and J. Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997.
[38] R. Hou, H. Chang, B. Ma, S. Shan, and X. Chen. Temporal complementary learning for video person re-identification. In European Conference on Computer Vision, pages 388–405. Springer, 2020.
[39] R. Hou, B. Ma, H. Chang, X. Gu, S. Shan, and X. Chen. VRSTC: Occlusion-free video person re-identification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7183–7192, 2019.
[40] S. Karanam, Y. Li, and R. J. Radke. Person re-identification with discriminatively trained viewpoint invariant dictionaries. In Proceedings of the IEEE International Conference on Computer Vision, pages 4516–4524, 2015.
[41] J. Li, S. Zhang, and T. Huang. Multi-scale 3D convolution network for video based person re-identification. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 8618–8625, 2019.
[42] M. Li, X. Zhu, and S. Gong. Unsupervised person re-identification by deep learning tracklet association. In Proceedings of the European Conference on Computer Vision (ECCV), pages 737–753, 2018.
[43] W. Li,
R. Zhao, T. Xiao, and X. Wang. DeepReID: Deep filter pairing neural network for person re-identification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 152–159, 2014.
[44] S. Liao, Y. Hu, X. Zhu, and S. Z. Li. Person re-identification by local maximal occurrence representation and metric learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2197–2206, 2015.
[45] X. Liao, L. He, Z. Yang, and C. Zhang. Video-based person re-identification via 3D convolutional networks and non-local attention. In Asian Conference on Computer Vision, pages 620–634. Springer, 2018.
[46] C. Liu, S. Gong, C. C. Loy, and X. Lin. Person re-identification: What features are important? In European Conference on Computer Vision, pages 391–401. Springer, 2012.
[47] C.-T. Liu, J.-C. Chen, C.-S. Chen, and S.-Y. Chien. Video-based person re-identification without bells and whistles. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1491–1500, 2021.
[48] C.-T. Liu, C.-W. Wu, Y.-C. F. Wang, and S.-Y. Chien. Spatially and temporally efficient non-local attention network for video-based person re-identification. arXiv preprint arXiv:1908.01683, 2019.
[49] H. Liu, Z. Jie, K. Jayashree, M. Qi, J. Jiang, S. Yan, and J. Feng. Video-based person re-identification with accumulative motion context. IEEE Transactions on Circuits and Systems for Video Technology, 28(10):2788–2802, 2017.
[50] W. Liu, Y. Wen, Z. Yu, M. Li, B. Raj, and L. Song. SphereFace: Deep hypersphere embedding for face recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 212–220, 2017.
[51] X. Liu, P. Zhang, C. Yu, H. Lu, X. Qian, and X. Yang. A video is worth three views: Trigeminal transformers for video-based person re-identification. arXiv preprint arXiv:2104.01745, 2021.
[52] X. Liu, P. Zhang, C. Yu, H. Lu, and X. Yang. Watching you: Global-guided reciprocal learning for video-based person re-identification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13334–13343, 2021.
[53] Y. Liu, Z. Yuan, W. Zhou, and H. Li. Spatial and temporal mutual promotion for video-based person re-identification. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 8786–8793, 2019.
[54] Z. Liu, Y. Lin, Y. Cao, H. Hu, Y. Wei, Z. Zhang, S. Lin, and B. Guo. Swin Transformer: Hierarchical vision transformer using shifted windows. arXiv preprint arXiv:2103.14030, 2021.
[55] N. McLaughlin, J. M. Del Rincon, and P. Miller. Recurrent convolutional network for video-based person re-identification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1325–1334, 2016.
[56] T. Mikolov, M. Karafiát, L. Burget, J. Cernocký, and S. Khudanpur. Recurrent neural network based language model. In Interspeech, volume 2, pages 1045–1048. Makuhari, 2010.
[57] A. Nambiar, A. Bernardino, and J. C. Nascimento. Gait-based person re-identification: A survey. ACM Computing Surveys (CSUR), 52(2):1–34, 2019.
[58] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, et al. PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems, 32:8026–8037, 2019.
[59] E. Ristani, F. Solera, R. Zou, R. Cucchiara, and C. Tomasi. Performance measures and a data set for multi-target, multi-camera tracking. In European Conference on Computer Vision, pages 17–35. Springer, 2016.
[60] Y. Shi, Z. Wei, H. Ling, Z. Wang, J. Shen, and P. Li. Person retrieval in surveillance videos via deep attribute mining and reasoning. IEEE Transactions on Multimedia, 23:4376–4387, 2020.
[61] W. Song, J. Zheng, Y. Wu, C. Chen, and F. Liu. A two-stage attribute-constraint network for video-based person re-identification. IEEE Access, 7:8508–8518, 2019.
[62] A. Subramaniam, A. Nambiar, and A. Mittal. Co-segmentation inspired attention networks for video-based person re-identification. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 562–572, 2019.
[63] Y. Suh, J. Wang, S. Tang, T. Mei, and K. M. Lee. Part-aligned bilinear representations for person re-identification. In Proceedings of the European Conference on Computer Vision (ECCV), pages 402–419, 2018.
[64] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2818–2826, 2016.
[65] R. R. Varior, M. Haloi, and G. Wang. Gated siamese convolutional neural network architecture for human re-identification. In European Conference on Computer Vision, pages 791–808. Springer, 2016.
[66] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998–6008, 2017.
[67] C. Wan, Y. Wu, X. Tian, J. Huang, and X.-S. Hua. Concentrated local part discovery with fine-grained part representation for person re-identification. IEEE Transactions on Multimedia, 22(6):1605–1618, 2019.
[68] S. Wang, B. Z. Li, M. Khabsa, H. Fang, and H. Ma. Linformer: Self-attention with linear complexity. 2020.
[69] T. Wang, S. Gong, X. Zhu, and S. Wang. Person re-identification by video ranking. In European Conference on Computer Vision, pages 688–703. Springer, 2014.
[70] W. Wang, E. Xie, X. Li, D.-P. Fan, K. Song, D. Liang, T. Lu, P. Luo, and L. Shao. Pyramid vision transformer: A versatile backbone for dense prediction without convolutions. arXiv preprint arXiv:2102.12122, 2021.
[71] A. Wu, W.-S. Zheng, and J.-H. Lai. Robust depth-based person re-identification. IEEE Transactions on Image Processing, 26(6):2588–2603, 2017.
[72] D. Wu, M. Ye, G. Lin, X. Gao, and J. Shen. Person re-identification by context-aware part attention and multi-head collaborative learning. IEEE Transactions on Information Forensics and Security, 17:115–126, 2021.
[73] L. Wu, Y. Wang, J. Gao, and X. Li. Where-and-when to look: Deep siamese attention networks for video-based person re-identification. IEEE Transactions on Multimedia, 21(6):1412–1424, 2018.
[74] S. Wu, Y.-C. Chen, X. Li, A.-C. Wu, J.-J. You, and W.-S. Zheng. An enhanced deep feature representation for person re-identification. In 2016 IEEE Winter Conference on Applications of Computer Vision (WACV), pages 1–8. IEEE, 2016.
[75] Y. Wu, Y. Lin, X. Dong, Y. Yan, W. Ouyang, and Y. Yang. Exploit the unknown gradually: One-shot video-based person re-identification by stepwise learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5177–5186, 2018.
[76] B. N. Xia, Y. Gong, Y. Zhang, and C. Poellabauer. Second-order non-local attention networks for person re-identification. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3760–3769, 2019.
[77] T. Xiao, H. Li, W. Ouyang, and X. Wang. Learning deep feature representations with domain guided dropout for person re-identification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1249–1258, 2016.
[78] T. Xiao, S. Li, B. Wang, L. Lin, and X. Wang. Joint detection and identification feature learning for person search. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3415–3424, 2017.
[79] R. Xiong, Y. Yang, D. He, K. Zheng, S. Zheng, C. Xing, H. Zhang, Y. Lan, L. Wang, and T. Liu. On layer normalization in the transformer architecture. In International Conference on Machine Learning, pages 10524–10533. PMLR, 2020.
[80] S. Xu, Y. Cheng, K. Gu, Y. Yang, S. Chang, and P. Zhou. Jointly attentive spatial-temporal pooling networks for video-based person re-identification. In Proceedings of the IEEE International Conference on Computer Vision, pages 4733–4742, 2017.
[81] M. Ye, C. Chen, J. Shen, and L. Shao. Dynamic tri-level relation mining with attentive graph for visible infrared re-identification. IEEE Transactions on Information Forensics and Security, 17:386–398, 2021.
[82] M. Ye, X. Lan, Q. Leng, and J. Shen. Cross-modality person re-identification via modality-aware collaborative ensemble learning. IEEE Transactions on Image Processing, 29:9387–9399, 2020.
[83] M. Ye, H. Li, B. Du, J. Shen, L. Shao, and S. C. Hoi. Collaborative refining for person re-identification with label noise. IEEE Transactions on Image Processing, 31:379–391, 2021.
[84] M. Ye, J. Shen, D. J. Crandall, L. Shao, and J. Luo. Dynamic dual-attentive aggregation learning for visible-infrared person re-identification. In European Conference on Computer Vision, pages 229–247. Springer, 2020.
[85] M. Ye, J. Shen, G. Lin, T. Xiang, L. Shao, and S. C. Hoi. Deep learning for person re-identification: A survey and outlook. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021.
[86] M. Ye, J. Shen, and L. Shao. Visible-infrared person re-identification via homogeneous augmented tri-modal learning. IEEE Transactions on Information Forensics and Security, 16:728–739, 2020.
[87] F. Yu, X. Jiang, Y. Gong, S. Zhao, X. Guo, W.-S. Zheng, F. Zheng, and X. Sun. Devil’s in the details: Aligning visual clues for conditional embedding in person re-identification. arXiv preprint arXiv:2009.05250, 2020.
[88] Z. Yu, Y. Zhao, B. Hong, Z. Jin, J. Huang, D. Cai, X. He, and X.-S. Hua. Apparel-invariant feature learning for person re-identification. IEEE Transactions on Multimedia, 2021.
[89] L. Yuan, Y. Chen, T. Wang, W. Yu, Y. Shi, F. E. Tay, J. Feng, and S. Yan. Tokens-to-token ViT: Training vision transformers from scratch on ImageNet. arXiv preprint arXiv:2101.11986, 2021.
[90] D. Zhang, H. Zhang, J. Tang, M. Wang, X. Hua, and Q. Sun.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Feature pyramid transformer.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' In European Conference on Computer Vision, pages 323–339.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Springer, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' [91] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Zhang, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Wang, and L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Zhang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Multi-shot pedestrian re- identification via sequential decision making.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 6781–6789, 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' [92] L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Zhang, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Shi, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Zhou, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content='-M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Cheng, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Liu, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content='-W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Bian, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Zeng, and C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Shen.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Ordered or orderless: A revisit for video based person re-identification.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' IEEE transactions on pattern analysis and machine intelligence, 43(4):1460–1466, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' [93] P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Zhang, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Xu, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Wu, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Huang, and X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Ben.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Learn- ing spatial-temporal representations over walking tracklet for long-term person re-identification in the wild.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' IEEE Transactions on Multimedia, 23:3562–3576, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' [94] R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Zhang, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Li, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Sun, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Ge, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Luo, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Wang, and L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Lin.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Scan: Self-and-collaborative attention network for video person re-identification.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' IEEE Transactions on Image Processing, 28(10):4870–4882, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' [95] T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Zhang, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Wei, L.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Xie, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Zhuang, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Zhang, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Li, and Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Tian.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Spatiotemporal transformer for video-based person re-identification.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' arXiv preprint arXiv:2103.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content='16469, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' [96] Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Zhang, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Li, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Liu, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Shuai, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Zhu, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Brattoli, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Chen, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Marsic, and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Tighe.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Vidtr: Video transformer without convolutions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 13577– 13587, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' [97] Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Zhang, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Zhao, Y.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Kang, and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Shen.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Modality synergy complement learning with cascaded aggregation for visible- infrared person re-identification.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' In European Conference on Computer Vision, pages 462–479.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Springer, 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' [98] Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Zhang, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Lan, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Zeng, and Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Chen.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Densely seman- tically aligned person re-identification.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 667–676, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' [99] Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Zhang, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Lan, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Zeng, and Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Chen.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Multi-granularity reference-aided attentive feature aggregation for video-based person re-identification.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10407–10416, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' [100] H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Zhao, M.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Tian, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Sun, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Shao, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Yan, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Yi, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Wang, and X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Tang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Spindle net: Person re-identification with human body region guided feature decomposition and fusion.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1077–1085, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' [101] Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Zhao, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Shen, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Jin, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Lu, and X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content='-s.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Hua.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Attribute- driven feature disentangling and temporal aggregation for video person re-identification.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' In Proceedings of the IEEE/CVF conference on computer vision and pattern recog- nition, pages 4913–4922, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' [102] L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Zheng, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Bie, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Sun, J.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Wang, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Su, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Wang, and Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Tian.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Mars: A video benchmark for large-scale person re- identification.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' In European Conference on Computer Vision, pages 868–884.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Springer, 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' [103] W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content='-S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Zheng, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Gong, and T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Xiang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Towards open-world person re-identification by one-shot group-based verification.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' IEEE transactions on pattern analysis and machine intelli- gence, 38(3):591–606, 2015.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' [104] K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Zhou, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Yang, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Cavallaro, and T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Xiang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' Omni-scale feature learning for person re-identification.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 3702–3712, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/7NAyT4oBgHgl3EQfpviE/content/2301.00531v1.pdf'} +page_content=' [105] Z.' 
Ziyi Tang is now pursuing his Ph.D. degree at Sun Yat-sen University. Before that, he was a research assistant at The Chinese University of Hong Kong, Shenzhen (CUHK-SZ), China. He received the B.E. degree from South China Agricultural University (SCAU), Guangzhou, China, in 2019 and the M.S. degree from the University of Southampton, Southampton, U.K., in 2020. He has won top places in data science competitions hosted by Kaggle and Huawei. His research interests include computer vision, vision-language joint modeling, and causal inference.

Ruimao Zhang is currently a Research Assistant Professor in the School of Data Science, The Chinese University of Hong Kong, Shenzhen (CUHK-SZ), China. He is also a Research Scientist at the Shenzhen Research Institute of Big Data. He received the B.E. and Ph.D. degrees from Sun Yat-sen University, Guangzhou, China, in 2011 and 2016, respectively. From 2017 to 2019, he was a Post-doctoral Research Fellow in the Multimedia Lab, The Chinese University of Hong Kong (CUHK), Hong Kong. After that, he joined SenseTime Research as a Senior Researcher until 2021. His research interests include computer vision, deep learning, and related multimedia applications. He has published about 40 peer-reviewed articles in top-tier conferences and journals such as TPAMI, IJCV, ICML, ICLR, CVPR, and ICCV. He has won a number of competitions and awards, including the gold medal in the 2017 YouTube-8M Video Classification Challenge and first place in the 2020 AIM Challenge on Learned Image Signal Processing Pipeline. He was rated as an Outstanding Reviewer of NeurIPS in 2021. He is a member of IEEE.

Zhanglin Peng is now pursuing her Ph.D. degree with the Department of Computer Science, The University of Hong Kong, Hong Kong, China. She received her B.E. and M.S. degrees from Sun Yat-sen University, Guangzhou, China, in 2013 and 2016, respectively. From 2016 to 2020, she was a researcher at SenseTime Research. Her research interests are computer vision and machine learning.

Jinrui Chen is currently pursuing the B.A. degree in Financial Engineering, conferred jointly by the School of Data Science, the School of Science and Engineering, and the School of Management and Economics, The Chinese University of Hong Kong, Shenzhen (CUHK-SZ), China. His research interests include deep learning and financial technology.

Liang Lin (M'09, SM'15) is a Full Professor of computer science at Sun Yat-sen University. He served as the Executive Director and Distinguished Scientist of SenseTime Group from 2016 to 2018, leading the R&D teams for cutting-edge technology transfer. He has authored or co-authored more than 200 papers in leading academic journals and conferences, and his papers have been cited more than 22,000 times. He is an associate editor of IEEE Transactions on Multimedia and IEEE Transactions on Neural Networks and Learning Systems, and has served as an Area Chair for numerous conferences such as CVPR, ICCV, SIGKDD, and AAAI. He is the recipient of numerous awards and honors, including the Wu Wen-Jun Artificial Intelligence Award, the First Prize of the China Society of Image and Graphics, an ICCV Best Paper Nomination in 2019, the Annual Best Paper Award by Pattern Recognition (Elsevier) in 2018, the Best Paper Diamond Award at IEEE ICME 2017, and a Google Faculty Award in 2012. His supervised Ph.D. students have received the ACM China Doctoral Dissertation Award, the CCF Best Doctoral Dissertation Award, and the CAAI Best Doctoral Dissertation Award. He is a Fellow of IET and IAPR.