diff --git "a/19E1T4oBgHgl3EQf5QWX/content/tmp_files/load_file.txt" "b/19E1T4oBgHgl3EQf5QWX/content/tmp_files/load_file.txt" new file mode 100644--- /dev/null +++ "b/19E1T4oBgHgl3EQf5QWX/content/tmp_files/load_file.txt" @@ -0,0 +1,647 @@ +filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19E1T4oBgHgl3EQf5QWX/content/2301.03510v1.pdf,len=646 +page_content='Parallel Reasoning Network for Human-Object Interaction Detection Huan Peng1,2, Fenggang Liu2, Yangguang Li2, Bin Huang2, Jing Shao2, Nong Sang1, Changxin Gao1 1Huazhong University of Science and Technology 2SenseTime Group {nsang,cgao}@hust.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19E1T4oBgHgl3EQf5QWX/content/2301.03510v1.pdf'} +page_content='edu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19E1T4oBgHgl3EQf5QWX/content/2301.03510v1.pdf'} +page_content='cn;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19E1T4oBgHgl3EQf5QWX/content/2301.03510v1.pdf'} +page_content=' liyangguang@sensetime.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19E1T4oBgHgl3EQf5QWX/content/2301.03510v1.pdf'} +page_content='com;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19E1T4oBgHgl3EQf5QWX/content/2301.03510v1.pdf'} +page_content=' {penghuan,liufenggang,huangbin1,shaojing}@senseauto.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19E1T4oBgHgl3EQf5QWX/content/2301.03510v1.pdf'} +page_content='com Abstract Human-Object Interaction (HOI) detection aims to learn how human interacts with surrounding objects.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19E1T4oBgHgl3EQf5QWX/content/2301.03510v1.pdf'} +page_content=' Previous HOI detection frameworks simultaneously detect human, objects and their corresponding interactions by using a predictor.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19E1T4oBgHgl3EQf5QWX/content/2301.03510v1.pdf'} +page_content=' Using only one shared predictor cannot differ- entiate the attentive field of instance-level prediction and relation-level prediction.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19E1T4oBgHgl3EQf5QWX/content/2301.03510v1.pdf'} +page_content=' To solve this problem, we pro- pose a new transformer-based method named Parallel Rea- soning Network(PR-Net), which constructs two indepen- dent predictors for instance-level localization and relation- level understanding.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19E1T4oBgHgl3EQf5QWX/content/2301.03510v1.pdf'} +page_content=' The former predictor concentrates on instance-level localization by perceiving instances’ extrem- ity regions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19E1T4oBgHgl3EQf5QWX/content/2301.03510v1.pdf'} +page_content=' The latter broadens the scope of relation region to reach a better relation-level semantic understanding.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19E1T4oBgHgl3EQf5QWX/content/2301.03510v1.pdf'} +page_content=' Ex- tensive experiments and analysis on HICO-DET benchmark exhibit that our PR-Net effectively alleviated this problem.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19E1T4oBgHgl3EQf5QWX/content/2301.03510v1.pdf'} +page_content=' Our PR-Net has achieved competitive results on HICO-DET and V-COCO benchmarks.' 
1. Introduction

The real world contains large amounts of complex human-centric activities, which are mainly composed of various human-object interactions (HOIs). For machines to better understand these complex activities, we need to detect all of these HOIs accurately. Specifically, HOI detection can be defined as detecting the human-object pairs and their corresponding interactions in an image. It can be divided into two sub-tasks: instance detection and interaction understanding. Only if both sub-tasks are completed can we build a good HOI detector.

Previously, different methods were used to process these two sub-tasks. Traditional methods like [4,11,23,28] first locate all instances and then extract their corresponding features with an off-the-shelf object detector like [12,29]. After that, instance matching and feature fusing approaches are used to construct human-object pairs that are more likely to have interactive relations.

Figure 1. The attention fields of the two different-level predictors in our PR-Net. The first column shows the input images. The second column exhibits the attention fields of the instance-level predictor, in which the model concentrates on the extremity regions of the human and the object. The third column exhibits the attention fields of the relation-level predictor, in which the model spreads its scope of attention to the relation-level region.
These pairs are then fed into an interaction parsing network, and the HOI categories are classified and output, so as to obtain the human-object positions and the corresponding interactive relation categories. In summary, these traditional two-stage approaches suffer from the isolated training of instance localization and interaction understanding, so they can neither localize interactive human-object pairs well nor understand complex HOI instances.

To alleviate the above problems, multitask learning manners [5,17,18,24,30,35,40,42] have been proposed to complete these two sub-tasks simultaneously. Among these approaches, [5,18,24,35,40] process the two sub-tasks concurrently, but they need an additional complex group-composition procedure to match the predictions of the two sub-tasks, which reduces computational efficiency. In addition, other one-stage methods [30,42] predict human-object pairs and corresponding interactions using one shared prediction head, without needing any matching or gathering process. However, they accomplish instance localization and interaction understanding in a mixed and tied manner. This naive mixed prediction manner can cause an inconsistent focus of the attentive fields between instance-level and relation-level prediction. This inconsistent focus limits interaction understanding for hard-negative HOIs, which leads to unsatisfactory HOI detection performance.

Therefore, we propose a new transformer-based approach named Parallel Reasoning Network (PR-Net) to alleviate the inconsistent focus of attentive fields between the two prediction levels. Specifically, two parallel predictors, an instance-level predictor and a relation-level predictor, are included in PR-Net.
The former focuses on instance-level localization, while the latter keeps a watchful eye on relation-level semantic understanding. As can be seen from the two examples in the second column of Figure 1, PR-Net's attention to instances is focused on the endpoints of the human skeleton and on particular edge regions of objects, indicating that the instance-level predictor can accurately locate humans and objects by focusing on these critical extremity regions of instances. From the two examples in the third column of Figure 1, PR-Net's attention to relational areas is focused on the contact areas between humans and objects and on contextual areas that help in understanding the interaction, which indicates that the relation-level predictor spreads its vision over the relation area to better understand the subtle relationships between humans and objects. In addition, the instance-level queries of our instance-level predictor strictly correspond, one to one, to the relation-level queries of our relation-level predictor, so no matching process is needed between them, which greatly reduces the computational cost [30].

Our contributions can be summarized in the following three aspects:

- We propose PR-Net, which leverages a parallel reasoning architecture to effectively alleviate the problem of inconsistent focus in attention fields between instance-level and relation-level prediction. PR-Net achieves a better trade-off between the two contradictory sub-tasks of HOI detection: the former needs more local information from the extremity regions of instances, while the latter is eager for more context information from the relation-level area.
- With a decoupled prediction manner, PR-Net can detect various HOIs simultaneously without any matching or recomposition process to link the instance-level and relation-level predictions.
- Equipped with additional techniques, including a Consistency Loss for better training and Trident-NMS for better post-processing, PR-Net achieves competitive results on both the HICO-DET and V-COCO benchmark datasets for HOI detection.
2. Related Works

2.1. Two-stage Approaches in HOI Detection

Most two-stage HOI detectors first detect all the human and object instances with a modern object detection framework such as Faster R-CNN or Mask R-CNN [12,29]. After instance-level feature extraction and contextual information collection, these approaches pair the human and object instances for interaction recognition. In the process of interaction recognition, various contextual features are aggregated to acquire a better relation-level semantic representation. InteractNet [9] introduces an additional branch for interaction prediction, and iCAN [8] captures contextual information using attention mechanisms for interaction prediction. TIN [23] further extends HOI detection models with a transferable knowledge learner. In-GraphNet [37] presents a novel graph-based interactive reasoning model to infer HOIs. VSGNet [31] utilizes relative spatial reasoning and structural connections to analyze HOIs. IDN [22] represents the implicit interaction in a transformation function space to learn better HOI semantics. Hou proposes fabricating object representations in feature space for few-shot learning [16] and learning to transfer object affordances for HOI detection [15]. Zhang [38] proposes to merge multi-modal features using a graphical model to generate more discriminative features.
2.2. One-stage Approaches in HOI Detection

One-stage approaches directly detect human-object interactions without complicated coarse-to-fine bounding-box regression [5,17,18,24,30,35,40,42]. Among these approaches, [24,36] introduce a keypoint-style interaction detection method which performs inference at each interaction key point, and [17] introduces a real-time method that predicts the interactions for each human-object union box. Recently, transformer-based detection approaches have been proposed to handle HOI detection as a sparse set prediction problem [5,30,42]. Specifically, [30] designed a transformer encoder-decoder architecture to predict human-object interactions directly in an end-to-end manner and introduced additional cost terms for interaction prediction. On the other hand, Kim et al. [19] and Chen et al. [6] propose an interaction decoder to be used alongside the DETR instance decoder, which is equally important for predicting interactions and matching related human-object pairs. These one-stage approaches have enormously boosted the performance of human-object interaction detectors.
Figure 2. The framework of our PR-Net. It is comprised of four components: the Image Feature Extractor, the Pairwise Instance (Instance-level) Predictor, the Relation-level Predictor, and the training and post-processing techniques.

3. Proposed Method

In this section, we present our Parallel Reasoning Network (PR-Net) for HOI detection, which is illustrated in Figure 2. PR-Net includes an Image Feature Extractor (a CNN backbone and a transformer encoder) and two parallel predictors (i.e., the Instance-level Predictor and the Relation-level Predictor). The two parallel predictors are designed to decode instance information (i.e., human box, object box, object class) and relation information (i.e., relation box, relation class), respectively.
Next, we introduce the proposed instance-level and relation-level loss functions, which learn the locations of instances and the interactions within each human-object pair. Finally, we introduce the proposed Trident-NMS, which is utilized to effectively filter duplicated HOI predictions.

3.1. Image Feature Extractor

The Image Feature Extractor consists of a standard CNN backbone $f_c$ and a transformer encoder $f_e$. The CNN backbone processes the input image $x \in \mathbb{R}^{3 \times H \times W}$ into a global context feature map $z \in \mathbb{R}^{c \times H' \times W'}$, in which the image is typically downsampled to a spatial shape of $(H', W')$ with $c$ channels. The global context feature map is then serialized into tokens by collapsing the spatial dimensions of the feature map into one dimension, resulting in $H' \times W'$ tokens. The tokens are linearly mapped to $T = \{t_i \mid t_i \in \mathbb{R}^{c'}\}_{i=1}^{N_q}$, where $N_q = H' \times W'$, and fed into the transformer encoder as a sequence. Each encoder layer follows the standard transformer architecture, consisting of a multi-head self-attention module and a feed-forward network (FFN). An additional position embedding $q_e \in \mathbb{R}^{c' \times H' \times W'}$ is added to the serialized tokens to supplement the positional information. With the self-attention mechanism, the encoder maps the global context feature map from the CNN to richer contextual information. Finally, the set of encoded image features $\{d_i \mid d_i \in \mathbb{R}^{c'}\}_{i=1}^{N_q}$ is formulated as the visual memory $E = f_e(T, q_e)$, which contains richer contextual information.
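For illustration, the token pipeline described above can be sketched in PyTorch as follows; the specific layer choices (a ResNet-50 backbone, a 1x1 convolution for channel reduction, torch.nn's encoder layers) and all class and variable names are our assumptions rather than the authors' released implementation.

```python
import torch
import torch.nn as nn
import torchvision


class ImageFeatureExtractor(nn.Module):
    """CNN backbone f_c plus transformer encoder f_e producing the visual memory E."""

    def __init__(self, d_model=256, nhead=8, num_layers=6):
        super().__init__()
        resnet = torchvision.models.resnet50(weights=None)
        # Drop the average pooling and classification head; output is (B, 2048, H', W').
        self.backbone = nn.Sequential(*list(resnet.children())[:-2])
        self.input_proj = nn.Conv2d(2048, d_model, kernel_size=1)  # reduce channels to c'
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)

    def forward(self, images, pos_embed):
        # images: (B, 3, H, W); pos_embed: positional encoding q_e of shape (B, H'*W', d_model)
        z = self.input_proj(self.backbone(images))   # global context feature map z
        tokens = z.flatten(2).transpose(1, 2)        # collapse spatial dims -> N_q tokens
        memory = self.encoder(tokens + pos_embed)    # visual memory E = f_e(T, q_e)
        return memory
```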
3.2. Instance-level Predictor

The Instance-level Predictor includes a standard transformer decoder $f_{ip}$ with just three layers. The decoder attends to the above visual memory $E$ according to a set of learnable instance query vectors $Q_p = \{q_i \mid q_i \in \mathbb{R}^{c'}\}_{i=1}^{N_q}$, to which a position embedding $p_l \in \mathbb{R}^{c' \times H' \times W'}$ is added. The instance-level query vectors are trained to learn more precise locations of instances, focusing on local information about the instance locations. The independent prediction heads are composed of three feed-forward networks (FFNs), namely the human-bounding-box FFN $\phi_{hb}$, the object-bounding-box FFN $\phi_{ob}$, and the object-class FFN $\phi_{oc}$, which are responsible for decoding the instance feature into the human box $\hat{b}^h$, the object box $\hat{b}^o$, and the object class $\hat{c}^o$, respectively. The formulation can be denoted as:

$\hat{b}^h = \phi_{hb}(f_{ip}(Q_p, p_l, E))$,
$\hat{b}^o = \phi_{ob}(f_{ip}(Q_p, p_l, E))$,
$\hat{c}^o = \phi_{oc}(f_{ip}(Q_p, p_l, E))$.   (1)
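A minimal sketch of such an instance-level predictor is given below; the decoder configuration and head shapes are assumptions (the position embedding $p_l$ is omitted for brevity), and the helper names are ours.

```python
import torch
import torch.nn as nn


def box_head(d_model, out_dim=4):
    """Three-layer FFN with ReLU, assumed here for the bounding-box heads."""
    return nn.Sequential(nn.Linear(d_model, d_model), nn.ReLU(),
                         nn.Linear(d_model, d_model), nn.ReLU(),
                         nn.Linear(d_model, out_dim))


class InstanceLevelPredictor(nn.Module):
    """Decoder f_ip over the visual memory E with instance queries Q_p and heads phi_hb, phi_ob, phi_oc."""

    def __init__(self, d_model=256, nhead=8, num_layers=3, num_queries=100, num_obj_classes=80):
        super().__init__()
        self.queries = nn.Embedding(num_queries, d_model)               # learnable Q_p
        layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=num_layers)
        self.human_box_ffn = box_head(d_model)                          # phi_hb -> b_h
        self.object_box_ffn = box_head(d_model)                         # phi_ob -> b_o
        self.object_cls_ffn = nn.Linear(d_model, num_obj_classes + 1)   # phi_oc -> c_o (+ no-object)

    def forward(self, memory):
        # memory: visual memory E of shape (B, N_tokens, d_model)
        q = self.queries.weight.unsqueeze(0).expand(memory.size(0), -1, -1)
        feat = self.decoder(q, memory)                                  # instance-level features
        return {
            "human_boxes": self.human_box_ffn(feat).sigmoid(),          # normalized boxes (assumption)
            "object_boxes": self.object_box_ffn(feat).sigmoid(),
            "object_logits": self.object_cls_ffn(feat),
        }
```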
3.3. Relation-level Predictor

We decouple the relation problem from HOI detection and use a Relation-level Predictor to reason about relationships from larger-scale semantics. We propose a relation box to guide the predictor to perceive the human-object relationship in the Relation-level Predictor. The Relation-level Predictor consists of a standard transformer decoder $f_{rd}$ and two independent prediction heads (FFNs). A separate set of relation-level queries $Q_r$ and a position embedding $p_r$ are randomly initialized and fed into the Relation-level Predictor. One head, $\phi_{ub}$, predicts the relation boxes $\hat{b}^u$; the other head, $\phi_{ac}$, decodes the relation class information $\hat{c}^a$. The relation boxes $\hat{b}^u$ and the relation class information $\hat{c}^a$ can be formulated as Eq. (2):

$\hat{b}^u = \phi_{ub}(f_{rd}(Q_r, p_r, E))$,
$\hat{c}^a = \phi_{ac}(f_{rd}(Q_r, p_r, E))$.   (2)

Attributed to the relation boxes, the decoder of the Relation-level Predictor is guided to enlarge its receptive field (as shown in Figure 1). The relation queries $Q_r$ can pay attention to the entire area where the human and the object interact, so the head $\phi_{ac}$ can predict a more accurate relation class. In addition, to match the relation class information $\hat{c}^a$ with the aforementioned human box $\hat{b}^h$, object box $\hat{b}^o$, and object class $\hat{c}^o$ from the Instance-level Predictor, we discard complex matching methods such as the HO pointers in HOTR. Instead, we simply match the relation class information $\hat{c}^a$ and the instance information $\hat{b}^h$, etc., one by one in order. Specifically, for a pair of instances $\{\hat{b}^h_i, \hat{b}^o_i, \hat{c}^o_i\}$, $i \in \{1, \dots, N_q\}$, $\hat{c}^a_i$ is the corresponding relation class. In this way, the instance-level query vector $Q_p$ and the relation-level query vector $Q_r$ at the same index represent the same human-object interaction, but each is able to focus on a different receptive field.
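The relation-level branch can be sketched analogously; layer sizes, the multi-label relation head, and all names are again assumptions, and the final comment illustrates the index-wise pairing described above.

```python
import torch
import torch.nn as nn


class RelationLevelPredictor(nn.Module):
    """Decoder f_rd with relation queries Q_r and heads phi_ub (relation box) and phi_ac (relation class)."""

    def __init__(self, d_model=256, nhead=8, num_layers=3, num_queries=100, num_rel_classes=117):
        super().__init__()
        self.queries = nn.Embedding(num_queries, d_model)                  # learnable Q_r
        layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=num_layers)
        self.relation_box_ffn = nn.Sequential(                             # phi_ub -> b_u
            nn.Linear(d_model, d_model), nn.ReLU(),
            nn.Linear(d_model, d_model), nn.ReLU(),
            nn.Linear(d_model, 4))
        self.relation_cls_ffn = nn.Linear(d_model, num_rel_classes)        # phi_ac -> c_a (multi-label)

    def forward(self, memory):
        q = self.queries.weight.unsqueeze(0).expand(memory.size(0), -1, -1)
        feat = self.decoder(q, memory)                                      # relation-level features
        return {
            "relation_boxes": self.relation_box_ffn(feat).sigmoid(),
            "relation_logits": self.relation_cls_ffn(feat),
        }


# Because the i-th relation query corresponds to the i-th instance query, an HOI triplet is read out
# simply by index: (human_boxes[:, i], object_boxes[:, i], object_logits[:, i], relation_logits[:, i]).
```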
3.4. Loss Functions

The overall loss function consists of an instance-level loss and a relation-level loss, applied to the Instance-level Predictor and the Relation-level Predictor, respectively. The instance-level loss supervises the Instance-level Predictor to predict the instance-level targets, i.e., human box, object box, and object class. The relation-level loss assists the Relation-level Predictor in predicting the relation class and relation box from the larger receptive field.

3.4.1 The Instance-level loss function $L_{IL}$ supervises the instance information, including the human box $\hat{b}^h$, the object box $\hat{b}^o$, and the object class $\hat{c}^o$. It consists of a human-box regression loss $L_{hr}$, an object-box regression loss $L_{or}$, and an object-class classification loss $L_{oc}$. $L_{hr}$ and $L_{or}$ are standard bounding-box regression losses, i.e., L1 losses, to locate the positions of the human and the object. $L_{oc}$ is a classification loss over the object categories. These losses are defined as Eq. (3):

$L_{hr} = \frac{1}{N} \sum_{i=1}^{N} \|\hat{b}^h_i - b^h_i\|$,
$L_{or} = \frac{1}{N} \sum_{i=1}^{N} \|\hat{b}^o_i - b^o_i\|$,
$L_{oc} = \frac{1}{N} \sum_{i=1}^{N} \mathrm{CE}(\hat{c}^o_i, c^o_i)$,   (3)

where CE is the cross-entropy loss and $c^o_i$ is the ground-truth object class. The instance-level loss function $L_{IL}$ is then defined as:

$L_{IL} = W_{hr} \cdot L_{hr} + W_{or} \cdot L_{or} + W_{oc} \cdot L_{oc}$.   (4)
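Assuming predictions have already been matched to ground-truth targets (e.g., by Hungarian matching as in DETR-style detectors), Eq. (3)-(4) can be sketched as follows; the weight values follow the coefficients listed later in Sec. 4.2 as an assumption, and the GIoU term used there is omitted for brevity.

```python
import torch
import torch.nn.functional as F


def instance_level_loss(pred, target, w_hr=2.5, w_or=2.5, w_oc=1.0):
    """Sketch of Eq. (3)-(4): L_IL = W_hr*L_hr + W_or*L_or + W_oc*L_oc.

    pred/target hold matched predictions and ground truths; boxes are (N, 4) tensors and
    object_labels is an (N,) tensor of class indices.
    """
    l_hr = F.l1_loss(pred["human_boxes"], target["human_boxes"])            # L_hr
    l_or = F.l1_loss(pred["object_boxes"], target["object_boxes"])          # L_or
    l_oc = F.cross_entropy(pred["object_logits"], target["object_labels"])  # L_oc (CE)
    return w_hr * l_hr + w_or * l_or + w_oc * l_oc
```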
3.4.2 The Relation-level loss function $L_{RL}$ primarily supervises the relationship information, i.e., the relation class $\hat{c}^a$. In addition, auxiliary relation boxes are also supervised to pay attention to the entire area where the interaction happens. Thus, the relation-level loss function consists of a relation-box regression loss $L_{ur}$, a relation-box consistency loss $L_{uc}$, and a relation-class loss $L_{ac}$. $L_{ac}$ is a classification loss over the interaction categories. The relation-box regression loss $L_{ur}$ is an L1 loss between the predicted relation boxes and their ground truth, where the ground-truth relation box is the outer bounding box of the human and object boxes. The relation-box regression loss helps the Relation-level Predictor become aware of the relational features of the human and the object. The consistency loss $L_{uc}$ is used to keep $\hat{b}^h$, $\hat{b}^o$, and $\hat{b}^u$ consistent. Specifically, a pseudo relation box $\hat{b}^{ho}$ is generated by taking the outer bounding box of $\hat{b}^h$ and $\hat{b}^o$, and an L1 loss is applied between $\hat{b}^u$ and $\hat{b}^{ho}$. With the relation box, the relation-class loss can supervise better relation semantics.

$L_{ur} = \frac{1}{N} \sum_{i=1}^{N} \|\hat{b}^u_i - b^u_i\|$,
$L_{uc} = \frac{1}{N} \sum_{i=1}^{N} \|\hat{b}^u_i - \hat{b}^{ho}_i\|$,
$L_{ac} = \frac{1}{N} \sum_{i=1}^{N} \mathrm{SigmoidCE}(\hat{c}^a_i, c^a_i)$.   (5)

The relation-level loss function $L_{RL}$ can be defined as:

$L_{RL} = W_{ur} \cdot L_{ur} + W_{uc} \cdot L_{uc} + W_{ac} \cdot L_{ac}$.   (6)
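A corresponding sketch of Eq. (5)-(6), including the outer-bounding-box construction used for both the relation-box ground truth and the pseudo box $\hat{b}^{ho}$, is shown below; boxes are assumed to be in xyxy format and the loss weights are assumptions.

```python
import torch
import torch.nn.functional as F


def outer_box(box_a, box_b):
    """Outer (enclosing) box of two xyxy boxes: used both for the relation-box ground truth
    and for the pseudo relation box b_ho built from the predicted human/object boxes."""
    x1 = torch.minimum(box_a[..., 0], box_b[..., 0])
    y1 = torch.minimum(box_a[..., 1], box_b[..., 1])
    x2 = torch.maximum(box_a[..., 2], box_b[..., 2])
    y2 = torch.maximum(box_a[..., 3], box_b[..., 3])
    return torch.stack([x1, y1, x2, y2], dim=-1)


def relation_level_loss(pred, target, w_ur=2.5, w_uc=0.5, w_ac=1.0):
    """Sketch of Eq. (5)-(6): L_RL = W_ur*L_ur + W_uc*L_uc + W_ac*L_ac (multi-label relation class)."""
    gt_relation_box = outer_box(target["human_boxes"], target["object_boxes"])   # b_u ground truth
    pseudo_relation_box = outer_box(pred["human_boxes"], pred["object_boxes"])   # b_ho
    l_ur = F.l1_loss(pred["relation_boxes"], gt_relation_box)                    # L_ur
    l_uc = F.l1_loss(pred["relation_boxes"], pseudo_relation_box)                # L_uc (consistency)
    l_ac = F.binary_cross_entropy_with_logits(                                   # L_ac (SigmoidCE)
        pred["relation_logits"], target["relation_labels"].float())
    return w_ur * l_ur + w_uc * l_uc + w_ac * l_ac
```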
In all, the overall loss function $L$ can be denoted as:

$L = L_{IL} + L_{RL}$.   (7)

3.5. Inference for HOI Detection

The inference process of our PR-Net can be divided into two parts: the calculation of the HOI predictions and the Trident-NMS post-processing technique.

HOI Prediction. To acquire the final HOI detection results, we predict the human bounding box, object bounding box, and object class using the instance-level predictions, and the relation class and relation box using the relation-level predictions. Based on these predictions, we calculate the final HOI prediction score as:

$s^{hoi}_i = \{\max_k s^o_i(k)\} \cdot s^{rel}_i$,   (8)

where $\max_k s^o_i(k)$ is the most probable class score of the i-th output object from the instance-level predictor, and $s^{rel}_i$ denotes the multi-class scores of the i-th output interaction from the relation-level predictor. Note that each human-object pair can only have one object of a certain class, but there may exist multiple human-object interactions within one pair.

Trident-NMS. For each predicted HOI class in one image, we filter duplicated predictions according to the above HOI prediction scores with our proposed Trident Non-Maximal Suppression (Trident-NMS). In detail, if the $\mathrm{TriIoU}(i, j)$ between the i-th and the j-th HOI predictions is higher than the threshold $\mathrm{Thres}_{nms}$, we filter out the prediction with the lower HOI score. $\mathrm{TriIoU}(i, j)$ is calculated as:

$\mathrm{TriIoU}(i, j) = \mathrm{IoU}(b^h_i, b^h_j)^{W_h} \times \mathrm{IoU}(b^o_i, b^o_j)^{W_o} \times \mathrm{IoU}(b^{rel}_i, b^{rel}_j)^{W_{rel}}$,   (9)

where $\mathrm{IoU}(b^h_i, b^h_j)$, $\mathrm{IoU}(b^o_i, b^o_j)$, and $\mathrm{IoU}(b^{rel}_i, b^{rel}_j)$ represent the Intersection over Union between the i-th and the j-th human boxes, object boxes, and relation boxes, and $W_h$, $W_o$, $W_{rel}$ are the weights of the human IoU, object IoU, and relation IoU.
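The inference procedure of Eqs. (8)-(9) can be sketched as follows for the predictions of a single image and a single relation class; torchvision's box_iou is used for the IoU terms, and the threshold and IoU weights shown are illustrative values rather than the tuned ones.

```python
import torch
from torchvision.ops import box_iou


def trident_nms(hoi, thresh=0.7, w_h=1.0, w_o=1.0, w_rel=1.0):
    """Sketch of Eq. (8)-(9) for one image and one relation class.

    hoi holds xyxy box tensors ("human_boxes", "object_boxes", "relation_boxes"),
    per-query object class scores ("object_scores", shape (N, C)), and the score of this
    relation class ("relation_scores", shape (N,)). Returns indices of kept predictions.
    """
    obj_score = hoi["object_scores"].max(dim=-1).values        # max_k s_o(k)
    score = obj_score * hoi["relation_scores"]                  # s_hoi, Eq. (8)
    order = score.argsort(descending=True)
    keep = []
    while order.numel() > 0:
        i = int(order[0])
        keep.append(i)
        rest = order[1:]
        if rest.numel() == 0:
            break
        tri_iou = (box_iou(hoi["human_boxes"][i:i + 1], hoi["human_boxes"][rest])[0] ** w_h
                   * box_iou(hoi["object_boxes"][i:i + 1], hoi["object_boxes"][rest])[0] ** w_o
                   * box_iou(hoi["relation_boxes"][i:i + 1], hoi["relation_boxes"][rest])[0] ** w_rel)
        order = rest[tri_iou <= thresh]                          # suppress duplicates, Eq. (9)
    return keep
```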
4. Experiment

4.1. Datasets and Evaluation Metrics

We evaluate our method on two large-scale benchmarks, the V-COCO [10] and HICO-DET [3] datasets. V-COCO includes 10,346 images containing 16,199 human instances in total and provides 26 common verb categories. HICO-DET contains 47,776 images, where 80 object categories and 117 verb categories compose 600 HOI categories. There are three HOI category sets in HICO-DET: (a) all 600 HOI categories (Full), (b) 138 HOI categories with fewer than 10 training instances (Rare), and (c) 462 HOI categories with 10 or more training instances (Non-Rare). Following the standard protocols, we use mean average precision (mAP) on HICO-DET [4] and role average precision (AP_role) on V-COCO [10] to report evaluation results.
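For clarity, the Full/Rare/Non-Rare aggregation described above can be expressed as a small helper over per-category APs; the dictionary-based interface is an assumption for illustration only.

```python
def hico_det_map_splits(ap_per_category, train_instance_counts):
    """Aggregate per-HOI-category AP into the Full / Rare / Non-Rare settings
    (Rare: fewer than 10 training instances). Both arguments map HOI category id -> value."""
    def mean(values):
        values = list(values)
        return sum(values) / len(values) if values else 0.0

    full = mean(ap_per_category.values())
    rare = mean(ap for cat, ap in ap_per_category.items() if train_instance_counts[cat] < 10)
    non_rare = mean(ap for cat, ap in ap_per_category.items() if train_instance_counts[cat] >= 10)
    return {"Full": full, "Rare": rare, "Non-Rare": non_rare}
```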
4.2. Implementation Details

We use ResNet-50 and ResNet-101 [13] as the backbone feature extractors. The transformer encoder consists of 6 layers with 8-head multi-head attention. The number of transformer layers in the Instance-level Predictor and the Relation-level Predictor is set to 3 for both. The reduced dimension of the visual memory is set to 256. The number of instance-level and relation-level queries is set to 100 for both the HICO-DET and V-COCO benchmarks. The human-, object-, and relation-box FFNs each have 3 linear layers with ReLU, while the object- and relation-category FFNs have one linear layer. During training, we initialize the network with the parameters of DETR [2] trained on the MS-COCO dataset. We set the weight coefficients of the bounding-box regression, Generalized IoU, object-class, relation-class, and consistency losses to 2.5, 1, 1, 1, and 0.5, respectively, following QPIC [30]. We optimize the network with AdamW [26] with a weight decay of $10^{-4}$. We train the model for 150 epochs with a learning rate of $10^{-5}$ for the backbone and $10^{-4}$ for the other parts, both decreased by a factor of 10 at the 100th and 130th epochs. All experiments are conducted on 8 Tesla A100 GPUs with CUDA 11.2 and a batch size of 16. At inference, we select the 100 detection results with the highest scores and then adopt Trident-NMS to filter the results further.
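The optimizer and schedule described above might be set up as in the following sketch; the parameter-group selection by name is an assumption.

```python
import torch


def build_optimizer(model, lr_backbone=1e-5, lr=1e-4, weight_decay=1e-4):
    """AdamW with a lower learning rate for the backbone and a step schedule that decays both
    rates by a factor of 10 at epochs 100 and 130, matching the schedule described above."""
    backbone = [p for n, p in model.named_parameters() if "backbone" in n and p.requires_grad]
    others = [p for n, p in model.named_parameters() if "backbone" not in n and p.requires_grad]
    optimizer = torch.optim.AdamW(
        [{"params": backbone, "lr": lr_backbone},
         {"params": others, "lr": lr}],
        weight_decay=weight_decay)
    scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[100, 130], gamma=0.1)
    return optimizer, scheduler
```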
Besides, our PR-Net obtains 32.86 mAP on HICO-DET (Default Full), a relative gain of 9.8% over the baseline. These results quantitatively demonstrate the efficacy of our method.

Table 1. Results on HICO-DET [4]. "COCO" denotes the COCO pre-trained detector; "HICO-DET" means the detector is further fine-tuned on HICO-DET. All numbers are mAP under the Default setting (Full / Rare / Non-Rare).

Method              Detector    Backbone        Full    Rare    Non-Rare
CNN-based
VCL [14]            COCO        ResNet-50       19.43   16.55   20.29
VSGNet [31]         COCO        ResNet-152      19.80   16.05   20.91
DJ-RN [21]          COCO        ResNet-50       21.34   18.53   22.18
PPDM [24]           HICO-DET    Hourglass-104   21.73   13.78   24.10
Bansal et al. [1]   HICO-DET    ResNet-101      21.96   16.43   23.62
TIN_DRG [23]        HICO-DET    ResNet-50       23.17   15.02   25.61
VCL [14]            HICO-DET    ResNet-50       23.63   17.21   25.55
GG-Net [40]         HICO-DET    Hourglass-104   23.47   16.48   25.60
IDN_DRG [22]        HICO-DET    ResNet-50       26.29   22.61   27.39
Transformer-based
HOI-Trans [42]      HICO-DET    ResNet-50       23.46   16.91   25.41
HOTR [18]           HICO-DET    ResNet-50       25.10   17.34   27.42
AS-Net [5]          HICO-DET    ResNet-50       28.87   24.25   30.25
QPIC [30]           HICO-DET    ResNet-50       29.07   21.85   31.23
PR-Net (Ours)       HICO-DET    ResNet-50       31.17   25.66   32.82
PR-Net (Ours)       HICO-DET    ResNet-101      32.86   28.03   34.30

Performance on V-COCO. Comparison results on V-COCO in terms of mAP_role are shown in Table 2. Our proposed PR-Net reaches 62.4 mAP, the best performance among all compared approaches. Although we adopt neither region-based feature learning (e.g., RPNN [41], Contextual Att [34]) nor additional human pose cues (e.g., PMFNet [32], TIN [23]), our method outperforms these approaches by sizable margins. Moreover, our method achieves an absolute gain of 3.6 points, a relative improvement of 6.1% over the baseline, validating its efficacy on the HOI detection task.
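For clarity, the relative figures above follow from simple arithmetic on the reported mAP values; the short check below is only a verification sketch, and which baseline rows of Tables 1 and 2 the quoted percentages are computed against is an assumption here.

```python
# Arithmetic sanity check of the gains quoted above; the numbers are copied
# from the text and tables, and the choice of baseline row is an assumption.
def relative_gain(ours: float, baseline: float) -> float:
    return (ours - baseline) / baseline * 100.0

# HICO-DET (Default Full): PR-Net 32.86 vs. the QPIC ResNet-101 baseline 29.90.
print(f"{relative_gain(32.86, 29.90):.1f}%")  # ~9.9%, matching the quoted ~9.8% up to rounding
# V-COCO (mAP_role): PR-Net 62.4 (as quoted) vs. the QPIC baseline 58.8.
print(f"+{62.4 - 58.8:.1f} pts, {relative_gain(62.4, 58.8):.1f}%")  # +3.6 pts, ~6.1%
```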
Table 2. Performance comparison on the V-COCO dataset (AP_role, %). "S1" and "S2" denote Scenario 1 and Scenario 2.

Method              Backbone          AP_role (S1)   AP_role (S2)
CNN-based
VSGNet [31]         ResNet-152        51.8           57.0
PMFNet [32]         ResNet-50-FPN     52.0           -
PD-Net [39]         ResNet-152-FPN    52.6           -
CHGNet [33]         ResNet-50-FPN     52.7           -
FCMNet [25]         ResNet-50         53.1           -
ACP [20]            ResNet-152        53.23          -
IDN [22]            ResNet-50         53.3           60.3
GG-Net [40]         Hourglass-104     54.7           -
DIRV [7]            EfficientDet-d3   56.1           -
Transformer-based
HOI-Trans [42]      ResNet-101        52.9           -
AS-Net [5]          ResNet-50         53.9           -
HOTR [18]           ResNet-50         55.2           64.4
QPIC [30]           ResNet-50         58.8           61.0
PR-Net (Ours)       ResNet-50         61.4           62.5
PR-Net (Ours)       ResNet-101        62.9           64.2

4.4. Ablation Analysis

To evaluate the contribution of the different components of our PR-Net, we first conduct a comprehensive ablation analysis on the HICO-DET dataset. Next, we analyze the impact of the number of predictor layers at each level. Finally, we analyze the effects of different post-processing strategies.

Contribution of different components. Compared with our baseline [30], the performance improvements of PR-Net come from three components: the Parallel Predictor, the Consistency Loss, and Trident-NMS. Table 3 reports the contribution of each. Among them, the Parallel Predictor is our core design: it brings a noticeable gain of 1.72 mAP on HICO-DET, which shows that the parallel reasoning structure significantly improves both instance localization and interaction understanding. Additionally, we design a consistency loss between the union box of the human-object pair and the relation box, which contributes about 0.25 mAP on the HICO-DET test set; constraining the union region of the instance-level predictions to agree with the relation region of the relation-level predictions is therefore meaningful and helpful. Finally, we design a more effective post-processing technique named Trident-NMS, which brings about 1.0 mAP gain on the HICO-DET test set. This reveals that set-prediction methods also benefit from duplicate filtering, and that post-processing such as NMS remains essential for HOI detection.

Table 3. Ablation analysis of the proposed PR-Net with a ResNet-101 backbone on the HICO-DET test set (mAP, Default setting). Parallel Predictor means instance-level locations and relation-level semantics are predicted in parallel. Consistency Loss means the union box of the human-object pair and the relation box are constrained to be consistent. Trident-NMS means duplicate filtering over human, object, and relation bounding boxes.

Parallel Predictor   Consistency Loss   Trident-NMS   Full    Rare    Non-Rare
-                    -                  -             29.90   23.92   31.69
✓                    -                  -             31.62   25.43   33.47
✓                    ✓                  -             31.87   27.59   33.14
✓                    ✓                  ✓             32.86   28.03   34.30
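To make the consistency constraint concrete, the sketch below computes the enclosing box of a predicted human-object pair and penalizes its disagreement with the relation box predicted by the relation-level predictor. The L1 + GIoU combination and the loss weights are assumptions borrowed from common DETR-style box losses, not necessarily the exact formulation used in PR-Net.

```python
import torch
import torch.nn.functional as F
from torchvision.ops import generalized_box_iou

def union_box(h_box: torch.Tensor, o_box: torch.Tensor) -> torch.Tensor:
    """Enclosing box of a human box and an object box, each of shape (N, 4) in (x1, y1, x2, y2)."""
    x1 = torch.min(h_box[:, 0], o_box[:, 0])
    y1 = torch.min(h_box[:, 1], o_box[:, 1])
    x2 = torch.max(h_box[:, 2], o_box[:, 2])
    y2 = torch.max(h_box[:, 3], o_box[:, 3])
    return torch.stack([x1, y1, x2, y2], dim=-1)

def consistency_loss(h_box, o_box, rel_box, l1_weight=1.0, giou_weight=1.0):
    """Penalize disagreement between the union of the predicted human-object pair
    and the predicted relation box (loss form and weights are illustrative)."""
    u_box = union_box(h_box, o_box)
    l1 = F.l1_loss(u_box, rel_box)
    giou = 1.0 - torch.diag(generalized_box_iou(u_box, rel_box)).mean()
    return l1_weight * l1 + giou_weight * giou
```

In training, such a term would presumably be added to the usual instance-level and relation-level matching losses.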
Impacts of different numbers of parallel predictors. The two parallel predictors are central to PR-Net, so we analyze in detail the impact of the number of layers in each. As Table 4 shows, PR-Net achieves its best mAP on the HICO-DET test set with three layers in both the instance-level predictor and the relation-level predictor, which means it significantly outperforms the baseline QPIC [30] without additional computational cost. Interestingly, even with only one layer per parallel predictor, PR-Net still outperforms the baseline equipped with a six-layer decoder.

Table 4. Ablation analysis of the number of instance-level predictor layers N_dec and relation-level predictor layers N_reldec.

Approach               Backbone     N_dec   N_reldec   Full    Rare    Non-Rare
QPIC (Baseline) [30]   ResNet-50    6       -          29.07   21.85   31.23
PR-Net (Ours)          ResNet-50    1       1          29.64   24.18   31.27
PR-Net (Ours)          ResNet-50    3       3          31.17   25.66   32.82
PR-Net (Ours)          ResNet-50    6       6          31.04   24.87   32.89
QPIC (Baseline) [30]   ResNet-101   6       -          29.90   23.92   31.69
PR-Net (Ours)          ResNet-101   1       1          30.26   23.27   32.34
PR-Net (Ours)          ResNet-101   3       3          32.86   28.03   34.30
PR-Net (Ours)          ResNet-101   6       6          32.52   27.04   34.16

Effects of different implementations of Trident-NMS. In Table 5, we analyze different implementations of Trident-NMS. We find that Product-based Trident-NMS performs better than Sum-based Trident-NMS. Additionally, we observe that increasing the weight of the Human-IoU in TriIoU improves HOI detection performance, which suggests that duplicated human boxes occur more frequently than duplicated object or relation boxes. In summary, with either the Product-based or the Sum-based TriIoU calculation, more attention should be paid to non-maximum suppression of the human box.
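The following sketch illustrates how such triplet-level duplicate filtering can be implemented. The greedy loop and the way the weights W_h, W_o, W_rel enter the Product and Sum variants (as exponents versus a normalized linear combination) are assumptions for illustration, and per-interaction-category handling is omitted.

```python
import numpy as np

def box_iou(a: np.ndarray, b: np.ndarray) -> float:
    """IoU of two boxes in (x1, y1, x2, y2) format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / max(area_a + area_b - inter, 1e-6)

def trident_nms(triplets, w_h=1.0, w_o=0.5, w_rel=0.5, thres=0.5, mode="product"):
    """Greedy duplicate filtering over HOI triplets, each a dict with
    'h_box', 'o_box', 'rel_box' (numpy boxes) and 'score'."""
    triplets = sorted(triplets, key=lambda t: t["score"], reverse=True)
    keep = []
    for cand in triplets:
        duplicate = False
        for kept in keep:
            iou_h = box_iou(cand["h_box"], kept["h_box"])
            iou_o = box_iou(cand["o_box"], kept["o_box"])
            iou_r = box_iou(cand["rel_box"], kept["rel_box"])
            if mode == "product":
                # Weighted geometric combination (weights as exponents) - an assumption.
                tri_iou = (iou_h ** w_h) * (iou_o ** w_o) * (iou_r ** w_rel)
            else:
                # Weighted arithmetic combination, normalized by the weight sum - an assumption.
                tri_iou = (w_h * iou_h + w_o * iou_o + w_rel * iou_r) / (w_h + w_o + w_rel)
            if tri_iou > thres:
                duplicate = True
                break
        if not duplicate:
            keep.append(cand)
    return keep
```

Under this reading, the best configuration in Table 5 would correspond to the product mode with w_h=1.0, w_o=0.5, w_rel=0.5 and a suppression threshold of 0.5.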
Table 5. Ablation analysis of the Trident-NMS module on the HICO-DET test set. Product means TriIoU is computed by multiplying the weighted Human-IoU, Object-IoU, and Relation-IoU; Sum means TriIoU is computed by adding them. W_h, W_o, and W_rel denote the weights of the Human-IoU, Object-IoU, and Relation-IoU, respectively. Thres_nms is the non-maximum suppression threshold.

Variant     W_h    W_o    W_rel   Thres_nms   Full    Rare    Non-Rare
(none)      -      -      -       -           31.87   27.59   33.14
Sum         0.33   0.33   0.33    0.5         30.61   27.00   31.69
Sum         0.33   0.33   0.33    0.7         32.53   27.88   33.91
Sum         0.4    0.4    0.2     0.7         32.63   27.96   34.02
Sum         0.5    0.4    0.1     0.7         32.66   27.91   34.00
Sum         0.6    0.3    0.1     0.7         32.56   27.70   34.01
Product     1.0    1.0    1.0     0.5         32.77   27.98   34.20
Product     1.0    1.0    0.5     0.5         32.81   28.02   34.25
Product     0.5    0.5    0.5     0.5         32.61   27.65   34.08
Product     0.5    1.0    0.5     0.5         32.61   27.67   34.09
Product     1.0    0.5    0.5     0.5         32.86   28.03   34.30

4.5. Visualization of Features

Using the t-SNE visualization technique [27], we visualize 20,000 samples of output features. The object and interaction features are extracted from the last layer of the Instance-level Predictor and the Relation-level Predictor of PR-Net, respectively. As Figure 3 shows, PR-Net clearly separates different classes of objects and interactions. Interestingly, the visualization suggests that PR-Net learns the complex interaction representations even better than the object representations, which we attribute to the parallel reasoning architecture.

Figure 3. Visualization of object features (left) and relation features (right) on the HICO-DET dataset via t-SNE.
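A minimal sketch of this visualization procedure is given below, assuming the decoder output features and their class labels have already been dumped to NumPy arrays; the file names and the t-SNE hyper-parameters are illustrative assumptions rather than settings taken from the paper.

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

# Hypothetical dumps of last-layer predictor features and their class labels.
features = np.load("relation_features.npy")   # shape: (20000, d)
labels = np.load("relation_labels.npy")       # shape: (20000,)

# Project to 2-D with t-SNE and color points by class.
embedded = TSNE(n_components=2, init="pca", perplexity=30,
                random_state=0).fit_transform(features)

plt.figure(figsize=(6, 6))
plt.scatter(embedded[:, 0], embedded[:, 1], c=labels, s=2, cmap="tab20")
plt.axis("off")
plt.savefig("tsne_relation_features.png", dpi=300)
```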
4.6. Qualitative Examples

As Figure 4 shows, PR-Net accurately detects the human box, object box, and relation box, as well as their corresponding interactions. From the first row, second column of Figure 4, PR-Net precisely distinguishes which man is riding the horse in the image. From the second row, third column, PR-Net precisely detects subtle and hard-to-discern HOIs. In summary, PR-Net correctly detects complex and difficult HOIs.

Figure 4. Visualization of HOI detection examples (top-1 result) produced by the proposed Parallel Reasoning Network on the HICO-DET test set.

5. Conclusion

In this paper, we propose a new human-object interaction detector named Parallel Reasoning Network (PR-Net), which consists of an instance-level predictor and a relation-level predictor, to alleviate the inconsistent focus of attentive fields between instance-level and interaction-level predictions. PR-Net thereby achieves a better trade-off between instance localization and interaction understanding. Furthermore, equipped with the Consistency Loss and Trident-NMS, PR-Net achieves competitive results on the two main HOI benchmarks, validating its efficacy in detecting human-object interactions.
References

[1] Ankan Bansal, Sai Saketh Rambhatla, Abhinav Shrivastava, and Rama Chellappa. Detecting human-object interactions via functional generalization. In AAAI, 2020.
[2] Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-end object detection with transformers. In ECCV, 2020.
[3] Yu-Wei Chao, Yunfan Liu, Xieyang Liu, Huayi Zeng, and Jia Deng. Learning to detect human-object interactions. In WACV, pages 381-389, 2018.
[4] Yu-Wei Chao, Yunfan Liu, Xieyang Liu, Huayi Zeng, and Jia Deng. Learning to detect human-object interactions. In WACV, 2018.
[5] Mingfei Chen, Yue Liao, Si Liu, Zhiyuan Chen, Fei Wang, and Chen Qian. Reformulating HOI detection as adaptive set prediction. In CVPR, 2021.
[6] Mingfei Chen, Yue Liao, Si Liu, Zhiyuan Chen, Fei Wang, and Chen Qian. Reformulating HOI detection as adaptive set prediction. In CVPR, 2021.
[7] Hao-Shu Fang, Yichen Xie, Dian Shao, and Cewu Lu. DIRV: Dense interaction region voting for end-to-end human-object interaction detection. In AAAI, 2021.
[8] Chen Gao, Yuliang Zou, and Jia-Bin Huang. iCAN: Instance-centric attention network for human-object interaction detection. In BMVC, page 41, 2018.
[9] Georgia Gkioxari, Ross B. Girshick, Piotr Dollár, and Kaiming He. Detecting and recognizing human-object interactions. In CVPR, pages 8359-8367, 2018.
[10] Saurabh Gupta and Jitendra Malik. Visual semantic role labeling. arXiv preprint arXiv:1505.04474, 2015.
[11] Tanmay Gupta, Alexander Schwing, and Derek Hoiem. No-frills human-object interaction detection: Factorization, layout encodings, and training techniques. In ICCV, 2019.
[12] Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. Mask R-CNN. In ICCV, pages 2961-2969, 2017.
[13] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, pages 770-778, 2016.
[14] Zhi Hou, Xiaojiang Peng, Yu Qiao, and Dacheng Tao. Visual compositional learning for human-object interaction detection. In ECCV, 2020.
[15] Zhi Hou, Baosheng Yu, Yu Qiao, Xiaojiang Peng, and Dacheng Tao. Affordance transfer learning for human-object interaction detection. In CVPR, 2021.
[16] Zhi Hou, Baosheng Yu, Yu Qiao, Xiaojiang Peng, and Dacheng Tao. Detecting human-object interaction via fabricated compositional learning. In CVPR, 2021.
[17] Bumsoo Kim, Taeho Choi, Jaewoo Kang, and Hyunwoo J. Kim. UnionDet: Union-level detector towards real-time human-object interaction detection. In ECCV, pages 498-514. Springer, 2020.
[18] Bumsoo Kim, Junhyun Lee, Jaewoo Kang, Eun-Sol Kim, and Hyunwoo J. Kim. HOTR: End-to-end human-object interaction detection with transformers. In CVPR, 2021.
[19] Bumsoo Kim, Junhyun Lee, Jaewoo Kang, Eun-Sol Kim, and Hyunwoo J. Kim. HOTR: End-to-end human-object interaction detection with transformers. In CVPR, 2021.
[20] Dong-Jin Kim, Xiao Sun, Jinsoo Choi, Stephen Lin, and In So Kweon. Detecting human-object interactions with action co-occurrence priors. In ECCV, pages 718-736. Springer, 2020.
[21] Yong-Lu Li, Xinpeng Liu, Han Lu, Shiyi Wang, Junqi Liu, Jiefeng Li, and Cewu Lu. Detailed 2D-3D joint representation for human-object interaction. In CVPR, 2020.
[22] Yong-Lu Li, Xinpeng Liu, Xiaoqian Wu, Yizhuo Li, and Cewu Lu. HOI analysis: Integrating and decomposing human-object interaction. In NeurIPS, volume 33, 2020.
[23] Yong-Lu Li, Siyuan Zhou, Xijie Huang, Liang Xu, Ze Ma, Hao-Shu Fang, Yanfeng Wang, and Cewu Lu. Transferable interactiveness knowledge for human-object interaction detection. In CVPR, pages 3585-3594, 2019.
[24] Yue Liao, Si Liu, Fei Wang, Yanjie Chen, Chen Qian, and Jiashi Feng. PPDM: Parallel point detection and matching for real-time human-object interaction detection. In CVPR, 2020.
[25] Yang Liu, Qingchao Chen, and Andrew Zisserman. Amplifying key cues for human-object-interaction detection. In ECCV, pages 248-265. Springer, 2020.
[26] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In ICLR, 2018.
[27] Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(Nov):2579-2605, 2008.
[28] Siyuan Qi, Wenguan Wang, Baoxiong Jia, Jianbing Shen, and Song-Chun Zhu. Learning human-object interactions by graph parsing neural networks. In ECCV, pages 407-423, 2018.
[29] Shaoqing Ren, Kaiming He, Ross B. Girshick, and Jian Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(6):1137-1149, 2017.
[30] Masato Tamura, Hiroki Ohashi, and Tomoaki Yoshinaga. QPIC: Query-based pairwise human-object interaction detection with image-wide contextual information. In CVPR, 2021.
[31] Oytun Ulutan, A S M Iftekhar, and B. S. Manjunath. VSGNet: Spatial attention network for detecting human object interactions using graph convolutions. In CVPR, June 2020.
[32] Bo Wan, Desen Zhou, Yongfei Liu, Rongjie Li, and Xuming He. Pose-aware multi-level feature network for human object interaction detection. In ICCV, pages 9469-9478, 2019.
[33] Hai Wang, Wei-Shi Zheng, and Ling Yingbiao. Contextual heterogeneous graph network for human-object interaction detection. In ECCV, 2020.
[34] Tiancai Wang, Rao Muhammad Anwer, Muhammad Haris Khan, Fahad Shahbaz Khan, Yanwei Pang, Ling Shao, and Jorma Laaksonen. Deep contextual attention for human-object interaction detection. In ICCV, pages 5694-5702, 2019.
[35] Tiancai Wang, Tong Yang, Martin Danelljan, Fahad Shahbaz Khan, Xiangyu Zhang, and Jian Sun. Learning human-object interaction detection using interaction points. 2020.
[36] Tiancai Wang, Tong Yang, Martin Danelljan, Fahad Shahbaz Khan, Xiangyu Zhang, and Jian Sun. Learning human-object interaction detection using interaction points. In CVPR, 2020.
[37] Dongming Yang and Yuexian Zou. A graph-based interactive reasoning for human-object interaction detection. In IJCAI, pages 1111-1117, 2020.
[38] Frederic Z. Zhang, Dylan Campbell, and Stephen Gould. Spatially conditioned graphs for detecting human-object interactions. In Int. Conf. Comput. Vis.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19E1T4oBgHgl3EQf5QWX/content/2301.03510v1.pdf'} +page_content=', pages 13319–13327, October 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19E1T4oBgHgl3EQf5QWX/content/2301.03510v1.pdf'} +page_content=' 2 [39] Xubin Zhong, Changxing Ding, Xian Qu, and Dacheng Tao.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19E1T4oBgHgl3EQf5QWX/content/2301.03510v1.pdf'} +page_content=' Polysemy deciphering network for human-object interaction detection.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19E1T4oBgHgl3EQf5QWX/content/2301.03510v1.pdf'} +page_content=' In ECCV, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19E1T4oBgHgl3EQf5QWX/content/2301.03510v1.pdf'} +page_content=' 6 [40] Xubin Zhong, Xian Qu, Changxing Ding, and Dacheng Tao.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19E1T4oBgHgl3EQf5QWX/content/2301.03510v1.pdf'} +page_content=' Glance and gaze: Inferring action-aware points for one- stage human-object interaction detection.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19E1T4oBgHgl3EQf5QWX/content/2301.03510v1.pdf'} +page_content=' In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 13234–13243, June 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19E1T4oBgHgl3EQf5QWX/content/2301.03510v1.pdf'} +page_content=' 1, 2, 6 [41] Penghao Zhou and Mingmin Chi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19E1T4oBgHgl3EQf5QWX/content/2301.03510v1.pdf'} +page_content=' Relation parsing neural network for human-object interaction detection.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19E1T4oBgHgl3EQf5QWX/content/2301.03510v1.pdf'} +page_content=' In Proceed- ings of the IEEE International Conference on Computer Vi- sion, pages 843–851, 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19E1T4oBgHgl3EQf5QWX/content/2301.03510v1.pdf'} +page_content=' 5 [42] Cheng Zou, Bohan Wang, Yue Hu, Junqi Liu, Qian Wu, Yu Zhao, Boxun Li, Chenguang Zhang, Chi Zhang, Yichen Wei, and Jian Sun.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19E1T4oBgHgl3EQf5QWX/content/2301.03510v1.pdf'} +page_content=' End-to-end human object interaction detection with hoi transformer.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19E1T4oBgHgl3EQf5QWX/content/2301.03510v1.pdf'} +page_content=' In CVPR, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19E1T4oBgHgl3EQf5QWX/content/2301.03510v1.pdf'} +page_content=' 1, 2, 6 9' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/19E1T4oBgHgl3EQf5QWX/content/2301.03510v1.pdf'}