diff --git "a/AtAzT4oBgHgl3EQf__8r/content/tmp_files/load_file.txt" "b/AtAzT4oBgHgl3EQf__8r/content/tmp_files/load_file.txt" new file mode 100644--- /dev/null +++ "b/AtAzT4oBgHgl3EQf__8r/content/tmp_files/load_file.txt" @@ -0,0 +1,944 @@ +filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtAzT4oBgHgl3EQf__8r/content/2301.01955v1.pdf,len=943 +page_content='Adaptively Clustering Neighbor Elements for Image Captioning Zihua Wang1,2 Xu Yang1 Haiyang Xu2* Hanwang Zhang3 Chenliang Li2 Songfang Huang2 Fei Huang2 Yu Zhang1* 1 School of Computer Science & Engineering, Key Lab of Computer Network & Information Integration (Ministry of Education), Southeast Univ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtAzT4oBgHgl3EQf__8r/content/2301.01955v1.pdf'} +page_content=', Nanjing, China 2Alibaba Group 3 School of Computer Science & Engineering, Nanyang Technological Univ.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtAzT4oBgHgl3EQf__8r/content/2301.01955v1.pdf'} +page_content=', Singapore.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtAzT4oBgHgl3EQf__8r/content/2301.01955v1.pdf'} +page_content=' {zihua, 101013120, zhang yu}@seu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtAzT4oBgHgl3EQf__8r/content/2301.01955v1.pdf'} +page_content='edu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtAzT4oBgHgl3EQf__8r/content/2301.01955v1.pdf'} +page_content='cn,{shuofeng.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtAzT4oBgHgl3EQf__8r/content/2301.01955v1.pdf'} +page_content='xhy, lcl193798, songfang.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtAzT4oBgHgl3EQf__8r/content/2301.01955v1.pdf'} +page_content='hsf, f.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtAzT4oBgHgl3EQf__8r/content/2301.01955v1.pdf'} +page_content='huang}@alibaba-inc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtAzT4oBgHgl3EQf__8r/content/2301.01955v1.pdf'} +page_content='com, hanwangzhang@ntu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtAzT4oBgHgl3EQf__8r/content/2301.01955v1.pdf'} +page_content='edu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtAzT4oBgHgl3EQf__8r/content/2301.01955v1.pdf'} +page_content='sg Abstract We design a novel global-local Transformer named Ada- ClustFormer (ACF) to generate captions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtAzT4oBgHgl3EQf__8r/content/2301.01955v1.pdf'} +page_content=' We use this name since each layer of ACF can adaptively cluster input el- ements to carry self-attention (Self-ATT) for learning lo- cal context.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtAzT4oBgHgl3EQf__8r/content/2301.01955v1.pdf'} +page_content=' Compared with other global-local Transform- ers which carry Self-ATT in fixed-size windows, ACF can capture varying graininess, e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtAzT4oBgHgl3EQf__8r/content/2301.01955v1.pdf'} +page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtAzT4oBgHgl3EQf__8r/content/2301.01955v1.pdf'} +page_content=', an object may cover dif- ferent numbers of grids or a phrase may contain diverse numbers of words.' 
To build ACF, we insert a probabilistic matrix C into the Self-ATT layer. For an input sequence {s_1, ..., s_N}, C_{i,j} softly determines whether the sub-sequence {s_i, ..., s_j} should be clustered for carrying Self-ATT. For implementation, C_{i,j} is calculated from the contexts of {s_i, ..., s_j}, so ACF can exploit the input itself to decide which local contexts should be learned. By using ACF to build the vision encoder and language decoder, the captioning model can automatically discover the hidden structures in both vision and language, which encourages the model to learn a unified structural space for transferring more structural commonalities. The experimental results demonstrate the effectiveness of ACF: it achieves a CIDEr score of 137.8, which outperforms most SOTA captioning models and is comparable to some BERT-based models. The code will be available in the supplementary material.

1. Introduction

Image Captioning (IC) aims to learn a shared vision-language representation space for facilitating the transfer of multimodal knowledge to generate visually grounded sentences [22]. Two prevailing deep learning techniques help the IC model learn such a space.
The first one is the vision encoder-language decoder pipeline [41], which back-propagates the language semantics to the visual encoder, and the other one is the attention mechanism [46], which directly bridges the vision and language domains for transferring multimodal knowledge. Transformers [39], which build the encoder and decoder on dense attention operations, have both of the above-mentioned advantages. Transformers have two types of attention operations: self-attention (Self-ATT) and cross-modal attention (Cross-ATT). From the perspective of structure learning, Self-ATT imposes a fully connected (FC) graph prior on the data sequence. By using Self-ATT in both the encoder and the decoder, the graph structures of both vision and language data can be discovered, and Cross-ATT helps transfer these structural commonalities for narrowing the modality gap. Therefore, Transformers prevail in IC tasks [10, 12, 13, 28].

Interestingly, structure learning is one of the most significant research directions of IC, since paired vision-language data usually share a unified internal semantic structure although they have diverse external appearances. Thus, if this unified semantic structure is captured, more structural commonalities can be transferred for generating better captions. Motivated by this, various IC models are proposed to exploit scene graphs [5, 21, 49] or hierarchy trees [43, 51] to narrow the domain gap. However, such structures need additional well-trained parsers. Moreover, vision and language parsers usually have domain gaps, so the parsed structures of a paired image and sentence may not match, which may even weaken the effectiveness of these IC models. We prefer an IC model that can adaptively discover the unified semantic structures to remove the costs of additional structure annotations and, more importantly, to learn a unified structural space for transferring structural commonalities.
Figure 1. (a) Transformer with fixed-size windows (size = 2); (b) ACF, which adjusts the window size according to the input; (c) ACF-based IC. The left/right part shows how the vision/language ACFs cluster image grids/language words for transferring structural commonalities.

Transformer seems to be a good starting point since it can implicitly build graphs by Self-ATT. However, it exploits the FC graph prior, while the useful semantic structure is usually sparse and hierarchical, like scene graphs or trees. To discover more sparse structures, researchers design various global-local Transformers [20, 29, 33]. As sketched in Figure 1(a), these Transformers gradually merge the neighbor elements in fixed-size windows into bigger clusters and carry Self-ATT in each cluster. For example, the 1-st layer clusters 2 neighboring elements like {s_1, s_2} to carry Self-ATT for local contexts, and the 2-nd layer merges {s_1, s_2} and {s_3, s_4} into a bigger one to learn more global contexts. Then a hierarchical structure is built from lower to higher layers, where local and global contexts are respectively captured.
However, these Transformers still do not satisfy our requirement, since vision and language data have diverse graininess, e.g., objects may cover varying numbers of grids and phrases may contain different numbers of words, while fixed-size windows cannot effectively capture such varying graininess.

To capture the varying graininess, we propose to Adaptively Cluster the neighbor elements to carry Self-ATT and name the novel Transformer Ada-ClustFormer (ACF). As shown in Figure 1(b), in each layer the window size is not fixed but is adjusted to each specific input sequence, e.g., in the 1-st layer, {s_1, s_2, s_3}, {s_4}, {s_5, s_6}, {s_7}, {s_8} are respectively clustered. The higher layers merge small clusters into bigger ones for global contexts, e.g., the 2-nd layer respectively merges {s_1, s_2, s_3, s_4, s_5, s_6} and {s_7, s_8} into two clusters to carry Self-ATT. To achieve this adaptive clustering, we insert a probabilistic clustering matrix C into the Self-ATT layer, where the probability C_{i,j} softly determines whether the sub-sequence {s_i, ..., s_j} should be clustered or not. To calculate C_{i,j}, we consider whether the next element s_j is similar to the mean pooling of {s_i, ..., s_{j-1}}. Thus, ACF can adjust the window of Self-ATT based on each specific data sample.
To construct an IC model based on ACF, besides building a 1-D ACF for the language decoder, we also extend it to a 2-D ACF as the vision encoder. In this way, both the visual encoder and the language decoder can automatically discover the hidden structures of the image and language data. This means that the ACF model does not need any additional structure annotations, as some previous IC models [2, 5] do, but still exploits the sparse structures implied in both vision and language data. For example, as shown in Figure 1(c), the visual ACF can merge smaller grids into bigger regions to capture both grid-level [15] and region-level [4] contexts, and the language ACF gradually clusters single words into phrases to generate the captions in an imaginary phrase-by-phrase manner [38, 48]. More importantly, compared with certain global-local Transformers that are exclusively developed in either the vision or the language domain [24, 47], the visual and language ACFs exploit the same way to discover hidden structures. So our ACF model is a homogeneous structure that helps transfer more structural commonalities between the vision and language domains, e.g., as shown in Figure 1(c), the patches of the object "snow board" are clustered in the image and, correspondingly, the phrase "a snow board" is also clustered in the language domain. In summary, our contributions can be listed as follows:
- We propose ACF, which can adaptively capture varying graininess.
- We extend ACF to the 2-D case for building a homogeneous IC model that learns a unified structural space for transferring more structural commonalities.
- The experimental results show that our ACF model outperforms the classic Transformers in IC.
2. Related Work

Image Captioning (IC). IC aims to generate descriptions according to the given images. Typically, an encoder-decoder paradigm is used to convert visual inputs into sequence outputs. In the early stage, image features are extracted by CNN-based encoders as the input of RNN-based decoders [4, 16, 35, 41]. For example, Up-Down [4] employs a Faster R-CNN [34] to extract image region features and LSTM networks to generate sentences. Nowadays, Transformer-based models have shown their might in Natural Language Processing (NLP) and replace RNN-based decoders in IC [12, 14, 19]. Subsequently, more advanced Transformer-based decoders are proposed, e.g., M2 Transformer [8] proposes a meshed-memory Transformer to interact with the low-level and high-level features; X-Linear Transformer [31] selectively capitalizes on the visual information from image regions by bilinear pooling. However, these models still use CNN-based feature extractors. More recently, witnessing the boom of Vision Transformers (ViT) [9, 24], researchers use ViT-based visual encoders for captioning. For instance, CPTR [23] introduces grid-based features extracted by ViT [9] instead of using ROI-based features; DLCT [25] fuses the ROI-based features with the grid-based features to overcome the shortcomings of both. Besides that, some models exploit the knowledge distilled from Vision-Language BERTs for better captions [18].
VinVL [52] and GRIT [28] propose improved object detection models for IC. ClipCAP [27] and LEMON [13] introduce large-scale pre-training to IC. Noteworthy, the methods above employ ViT [9] or Swin Transformer [24] as their backbones; thus, our ACF adopts the Swin Transformer as its encoder backbone.

Among the previous IC models, the Auto-Parsing Network (APN) [48] has a similar motivation to ours, which also inserts a clustering matrix into the Self-ATT layer. However, Ada-ClustFormer (ACF) calculates this matrix differently. APN only considers whether pairwise neighboring elements should be clustered or not, while we calculate this probability from a more global scope. Specifically, we consider whether the next element is similar to the previously clustered elements. More importantly, we extend our ACF to the 2-D case, which can adaptively cluster the visual patches into regions, while APN only treats a sequence of ROI features as the visual input and still applies a 1-D clustering matrix to it. More comparisons will be given in the supplementary material.

Global-Local Transformer. To alleviate the fully connected graph prior in the Transformer, researchers propose various global-local Transformers to learn sparse structures of language [6, 26]. For example, Global-local [26] introduces a fixed-size global and local attention model in neural machine translation. Longformer [6] proposes global and local window attentions, which provide inductive bias and long-sequence representation, respectively.
Hi-Transformer [44] learns sentence-level and document-level semantics through a hierarchical structure. The global-local Transformer mechanism is also effective in the vision area [7, 25, 53]. Pairwise and patchwise self-attention are proposed for image recognition [53]. Furthermore, GLiT [7] proposes to adaptively trade off the global and local information of the images. DLCT [25] explores the global and local information by combining grid-based features and ROI-based features. However, these models are exclusively developed in a single domain (either NLP or CV), while our ACF provides a general approach for both the vision and language domains. Thus, using ACF to build the IC model encourages learning a unified structural space for transferring more structural commonalities.

3. Ada-ClustFormer IC model

Compared with the classic Transformer, Ada-ClustFormer (ACF) inserts an adaptive clustering matrix C into each self-attention (Self-ATT) layer to adaptively control the scope of Self-ATT. The calculation of C is detailed in Section 3.1, where we first show the 1-D language case and then extend it to the 2-D vision case. By stacking these revised Self-ATT layers, ACF can be built for constructing the vision encoder and language decoder for captioning (cf. Section 3.2).

3.1. Ada-ClustFormer
Multi-Head Attention (MHA). ACF is built on the Transformer, whose most elemental building block is Multi-Head Attention (MHA). Given the query Q ∈ R^{N_Q×d}, key K ∈ R^{N_K×d}, and value V ∈ R^{N_V×d}, MHA calculates the output Z = MHA(Q, K, V) as:

    Input:      Q, K, V
    ATT:        A_l = \mathrm{Softmax}\big( Q W^Q_l (K W^K_l)^\top / \sqrt{d} \big)
    Head:       H_l = A_l V W^V_l
    Multi-Head: H = [H_1, H_2, \dots, H_h] W^H
    Output:     Z = \mathrm{LN}(H + Q)                                          (1)

where W^Q_l, W^K_l, W^V_l ∈ R^{d×d_h} and W^H ∈ R^{d×d} are all learnable parameters; h denotes the head number and d_h = d/h; A_l is the l-th attention matrix corresponding to the l-th head H_l; [·] is the concatenation operation; and LN denotes Layer Normalization.

Given an input sequence S = {s_1, ..., s_N}, if Q = K = V = S, Eq. (1) is also called self-attention (Self-ATT). Self-ATT captures the global contexts between any two elements s_i and s_j by calculating the pairwise attention weight in the "ATT" operation. From the perspective of structure learning [5], a single-head Self-ATT constructs a fully-connected (FC) graph whose nodes are the elements of S and whose pairwise edges are weighted by the pairwise attention weights. Correspondingly, an h-head Self-ATT constructs h FC graphs with different edge weights.
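To make the notation concrete, the following is a minimal PyTorch sketch of the MHA layer in Eq. (1). It is an illustrative re-implementation under the paper's notation (d, h, d_h = d/h), not the authors' code; the per-head projections are packed into single linear layers, and the scaling follows the sqrt(d) in Eq. (1).

import torch
import torch.nn as nn


class MultiHeadAttention(nn.Module):
    def __init__(self, d: int, h: int):
        super().__init__()
        assert d % h == 0
        self.d, self.h, self.d_h = d, h, d // h
        # W^Q_l, W^K_l, W^V_l for all heads, packed into single projections.
        self.w_q = nn.Linear(d, d, bias=False)
        self.w_k = nn.Linear(d, d, bias=False)
        self.w_v = nn.Linear(d, d, bias=False)
        self.w_h = nn.Linear(d, d, bias=False)   # W^H
        self.ln = nn.LayerNorm(d)

    def forward(self, q, k, v):
        # q: (B, N_Q, d), k/v: (B, N_K, d)
        B, N_q, _ = q.shape
        N_k = k.shape[1]

        def split(x, n):  # (B, n, d) -> (B, h, n, d_h)
            return x.view(B, n, self.h, self.d_h).transpose(1, 2)

        Q = split(self.w_q(q), N_q)
        K = split(self.w_k(k), N_k)
        V = split(self.w_v(v), N_k)

        # ATT: A_l = Softmax(Q W^Q_l (K W^K_l)^T / sqrt(d))
        A = torch.softmax(Q @ K.transpose(-2, -1) / self.d ** 0.5, dim=-1)
        H = A @ V                                  # Head: H_l = A_l V W^V_l
        H = H.transpose(1, 2).reshape(B, N_q, self.d)
        H = self.w_h(H)                            # Multi-Head: [H_1, ..., H_h] W^H
        return self.ln(H + q)                      # Output: Z = LN(H + Q)

Calling this module with Q = K = V = S gives the Self-ATT operation described above.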
Adaptive Clustering Matrix C. To sparsify this FC graph, researchers [9, 24] propose to carry Self-ATT in fixed-size windows, which is achieved by revising the "Head" operation in Eq. (1):

    C-based Head: H = \mathrm{Softmax}(A \otimes C) V W^V                       (2)

where ⊗ denotes the element-wise product and C is an N × N binary clustering matrix such that only the elements in the same window can attend to each other, i.e., if the window size is w, C_{i,j} = 1 if |i − j| ≤ w and C_{i,j} = 0 if |i − j| > w. However, language or vision data usually have diverse graininess, e.g., a phrase may contain different numbers of words or an object may cover diverse spatial regions, while fixed-size windows cannot capture the varying graininess. To amend this, we revise the binary C into a probabilistic one, where C_{i,j} softly determines whether to cluster the embeddings from s_i to s_j for carrying Self-ATT. Then, if C_{i,j} is small, the pairwise attention in A between s_i and s_j is weakened in Eq. (2), which means s_i and s_j are less likely to stay in the same cluster.
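A small sketch of the C-based head, reading Eq. (2) literally: the attention matrix A from the ATT step of Eq. (1) is modulated element-wise by the clustering matrix C and re-normalised with a softmax before being applied to the values. The function below is illustrative (the value projection W_v of one head is passed in as a plain tensor); it only shows where C enters the computation.

import torch


def c_based_head(A, C, V, W_v):
    """A: (B, h, N, N) attention from Eq. (1); C: (N, N) or (B, N, N) clustering
    matrix; V: (B, N, d) values; W_v: (d, d_h) value projection of one head."""
    if C.dim() == 2:                      # share one C across the batch
        C = C.unsqueeze(0)
    C = C.unsqueeze(1)                    # (B, 1, N, N): broadcast over heads
    H = torch.softmax(A * C, dim=-1)      # Softmax(A ⊗ C): weaken cross-cluster attention
    return H @ (V @ W_v).unsqueeze(1)     # (B, h, N, d_h)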
To adaptively decide the window size according to each specific input for capturing the varying graininess, we use the input itself to calculate C_{i,j}:

    C_{i,j} = P(s_i, \dots, s_j) = \prod_{k=i}^{j} P(s_k \mid s_i, \dots, s_{k-1})          (3)

where the joint distribution is decomposed into the product of conditional distributions P(s_k | s_i, ..., s_{k-1}), each of which softly decides whether to merge a new element s_k into the sub-sequence {s_i, ..., s_{k-1}}. In the implementation, P(s_k | s_i, ..., s_{k-1}) is calculated as:

    P(s_k \mid s_i, \dots, s_{k-1}) = \mathrm{Sigmoid}\big( \mathrm{FC}([s_k, s_{i:k-1}]) \big)   (4)

where s_{i:k-1} is the mean pooling of the embeddings from s_i to s_{k-1}. Intuitively, Eq. (4) exploits the context of the whole sub-sequence {s_i, ..., s_{k-1}} to decide whether to merge the new element s_k into this sub-sequence. Note that Eq. (3) and Eq. (4) only make sense when i < k. Since clustering the embeddings from s_i to s_k equals clustering from s_k to s_i, we set C_{i,k} = C_{k,i} if i > k, and since a single element s_i is itself a cluster, we set C_{i,i} = 1.
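Under the conventions just stated (C_{i,i} = 1 and C_{i,k} = C_{k,i}), Eq. (3) can be filled in row by row as a running product whose new factor is the sigmoid-FC score of Eq. (4). The sketch below is a deliberately simple loop-based PyTorch version with an illustrative FC layer; it is not the authors' implementation, and a practical version would vectorise the O(N^2) FC calls.

import torch
import torch.nn as nn


class AdaptiveClusterMatrix(nn.Module):
    def __init__(self, d: int):
        super().__init__()
        self.fc = nn.Linear(2 * d, 1)          # FC([s_k, mean(s_i .. s_{k-1})]) -> scalar

    def forward(self, s):                      # s: (B, N, d)
        B, N, _ = s.shape
        C = torch.ones(B, N, N, device=s.device)   # diagonal C_{i,i} = 1
        for i in range(N):
            for j in range(i + 1, N):
                ctx = s[:, i:j].mean(dim=1)    # mean pooling of s_i .. s_{j-1}
                p = torch.sigmoid(self.fc(torch.cat([s[:, j], ctx], dim=-1)))
                C[:, i, j] = C[:, i, j - 1] * p.squeeze(-1)   # running product of Eq. (3)
                C[:, j, i] = C[:, i, j]                       # symmetry C_{i,k} = C_{k,i}
        return C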
From Eq. (3), we can also find that:

    C_{i,j} = P(s_j \mid s_i, \dots, s_{j-1}) \times P(s_i, \dots, s_{j-1}) = P(s_j \mid s_i, \dots, s_{j-1}) \times C_{i,j-1}     (5)

Since P(s_j | s_i, ..., s_{j-1}) ≤ 1, we have C_{i,j} ≤ C_{i,j-1}, which means that two elements at a shorter distance are more likely to be clustered for carrying Self-ATT. In this way, local contexts are encouraged to be captured, as shown in Figure 2(a).

Stacking Revised Self-ATT. To learn global contexts, we can stack these revised Self-ATT layers. When stacking, we hope that the higher layers carry Self-ATT in bigger windows than the lower layers to capture the global contexts [43, 48]. To achieve this, for the m-th layer, we recalculate C^{(m)} as \tilde{C}^{(m)}:

    \tilde{C}^{(m)} = (1 - C^{(m)}) \, \tilde{C}^{(m-1)} + C^{(m)}                             (6)
Figure 2. (a) How to calculate C_{1,4} = C_{1,3} × P(s_4 | s_1, s_2, s_3), with P(s_4 | s_1, s_2, s_3) = Sigmoid(FC([s_4, s_{1:3}])); the shade denotes the probability value: the darker the color, the larger the probability. (b) The clustered elements in a lower layer will be further clustered in a higher layer, e.g., the color of {s_1, s_2, s_3} in the 2-nd layer is darker than in the 1-st layer.

Figure 3. (a) An example of the 2-D C: C_{1,4;1,3} is decomposed into the horizontal probability P_h(v_{1;1}, ..., v_{4;1}) and the vertical probability P_v(v_{1;1}, ..., v_{1;3}). (b) Overview of the Down-Up Sampling Strategy.
Then \tilde{C}^{(m)} is used in Eq. (2) when m > 1, and \tilde{C}^{(1)} = C^{(1)}. Since 0 ≤ C^{(m)}_{i,j} ≤ 1, \tilde{C}^{(m)}_{i,j} is a convex combination of \tilde{C}^{(m-1)}_{i,j} and 1, which means that \tilde{C}^{(m-1)}_{i,j} ≤ \tilde{C}^{(m)}_{i,j} ≤ 1. If \tilde{C}^{(m-1)}_{i,j} is large, i.e., the sub-sequence {s_i, ..., s_j} should be clustered in the (m-1)-th layer, then \tilde{C}^{(m)}_{i,j} must be larger, i.e., {s_i, ..., s_j} is also clustered in the m-th layer. For example, Figure 2(b) shows that the 2-nd layer will further cluster {s_1, s_2, s_3} since \tilde{C}^{(1)}_{1,3} ≤ \tilde{C}^{(2)}_{1,3}. Thus, the higher layers carry Self-ATT in bigger windows than the lower layers to learn more global contexts.
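A minimal sketch of the layer-wise accumulation in Eq. (6): each layer combines its own C^{(m)} with the accumulated matrix of the previous layer, so \tilde{C}^{(m)} can only grow and clusters never shrink across layers. The function and variable names are illustrative.

import torch


def accumulate_cluster_matrix(C_m, C_prev=None):
    """C_m: this layer's clustering matrix C^{(m)}, values in [0, 1].
    C_prev: accumulated \tilde{C}^{(m-1)} from the previous layer (None for m = 1)."""
    if C_prev is None:                    # \tilde{C}^{(1)} = C^{(1)}
        return C_m
    return (1.0 - C_m) * C_prev + C_m     # Eq. (6): convex combination of C_prev and 1

Iterating this over the stacked layers and feeding each \tilde{C}^{(m)} into Eq. (2) reproduces the behaviour shown in Figure 2(b).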
2-D Clustering Matrix. Eq. (3) shows how to calculate C when the input is a 1-D language sequence; next, we extend it to the 2-D vision surface. Given a 2-D feature map V = {v_{1,1}, ..., v_{H,W}}, we use C_{i,j;x,y} to denote the probability that softly decides whether a sub-region {v_{i,x}, ..., v_{j,y}} should be clustered or not, which is:

    C_{i,j;x,y} = P(v_{i;x}, \dots, v_{j;y}) = \prod_{k=i}^{j} \prod_{u=x}^{y} P(v_{k;u} \mid v_{i;x}, v_{i+1;x}, \dots, v_{k-1;u-1})     (7)

where i, j and x, y respectively denote the horizontal and vertical dimensions.

Figure 4. Overview of our ACF-based encoder-decoder IC model. "Add&LN" denotes the Add and Layer Normalization operations. m_e / m_d represent the number of encoder/decoder layers, respectively.
To cover all the sub-regions in an H × W map, it requires applying Eq. (4) O(H^2 × W^2) times to get all the probabilities. To reduce the computation burden, we apply an independence assumption to decompose the 2-D distribution into two independent ones, which respectively correspond to the horizontal and vertical dimensions:

    P(v_{i;x}, \dots, v_{j;y}) = P_h(v_{i;x}, \dots, v_{j;x}) \, P_v(v_{i;x}, \dots, v_{i;y})
                               = \prod_{k=i}^{j} P_h(v_{k;x} \mid v_{i;x}, \dots, v_{k-1;x}) \prod_{u=x}^{y} P_v(v_{i;u} \mid v_{i;x}, \dots, v_{i;u-1})     (8)
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtAzT4oBgHgl3EQf__8r/content/2301.01955v1.pdf'} +page_content=', vk−1;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtAzT4oBgHgl3EQf__8r/content/2301.01955v1.pdf'} +page_content='x) y � u=x Pv(vi;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtAzT4oBgHgl3EQf__8r/content/2301.01955v1.pdf'} +page_content='x|vi;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtAzT4oBgHgl3EQf__8r/content/2301.01955v1.pdf'} +page_content='x, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtAzT4oBgHgl3EQf__8r/content/2301.01955v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtAzT4oBgHgl3EQf__8r/content/2301.01955v1.pdf'} +page_content=', vi;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtAzT4oBgHgl3EQf__8r/content/2301.01955v1.pdf'} +page_content='u−1), (8) In this way, we only need to apply O(H2 + W 2) times for Eq.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtAzT4oBgHgl3EQf__8r/content/2301.01955v1.pdf'} +page_content=' (4) and once matrix production.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtAzT4oBgHgl3EQf__8r/content/2301.01955v1.pdf'} +page_content=' Noteworthy, as sketched in Figure 2, for the 2-D region which spans the horizontal axis from i to j and the vertical axis from x to y, we use the left-most vertical and top-most hor- izontal to calculate two 1-D distributions and then mul- tiply them to get Ci,j;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtAzT4oBgHgl3EQf__8r/content/2301.01955v1.pdf'} +page_content='x,y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtAzT4oBgHgl3EQf__8r/content/2301.01955v1.pdf'} +page_content=' As Figure 3(a) shows, to calculate C1,4;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtAzT4oBgHgl3EQf__8r/content/2301.01955v1.pdf'} +page_content='1,3, for the vertical distribution Pv, the horizontal ordinate is fixed to 1 and the vertical or- dinate changes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtAzT4oBgHgl3EQf__8r/content/2301.01955v1.pdf'} +page_content=' Ph(vk;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtAzT4oBgHgl3EQf__8r/content/2301.01955v1.pdf'} +page_content='1|v1;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtAzT4oBgHgl3EQf__8r/content/2301.01955v1.pdf'} +page_content='1, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtAzT4oBgHgl3EQf__8r/content/2301.01955v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtAzT4oBgHgl3EQf__8r/content/2301.01955v1.pdf'} +page_content=', vk−1;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtAzT4oBgHgl3EQf__8r/content/2301.01955v1.pdf'} +page_content='1)|k=1,2,3,4 and Pv(v1;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtAzT4oBgHgl3EQf__8r/content/2301.01955v1.pdf'} +page_content='u|v1;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtAzT4oBgHgl3EQf__8r/content/2301.01955v1.pdf'} +page_content='1, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/AtAzT4oBgHgl3EQf__8r/content/2301.01955v1.pdf'} +page_content='..' 
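As a concrete reading of Eq. (8) and the Down-Up Sampling Strategy, the NumPy sketch below assembles the 2-D clustering matrix from two 1-D conditional-probability tables and shows the pooled-then-replicated upsampling of \bar{C}. It is a minimal illustration under our own assumptions: the tables `p_h` and `p_v` and the helper `clustering_matrix_1d` stand in for the context-based model of Eqs. (3)-(4), not the paper's actual implementation.

```python
import numpy as np

def clustering_matrix_2d(p_h, p_v):
    """Assemble C[i, j, x, y] from two 1-D conditional-probability tables (Eq. (8)).

    p_h[i, k] ~ P_h(v_k | v_i, ..., v_{k-1}) along the horizontal strip (k >= i),
    p_v[x, u] ~ P_v(v_u | v_x, ..., v_{u-1}) along the vertical strip   (u >= x);
    both are assumed to come from an Eq. (4)-style context model.
    """
    H, W = p_h.shape[0], p_v.shape[0]
    prod_h = np.zeros((H, H))   # prod_h[i, j] = prod_{k=i..j} p_h[i, k]
    for i in range(H):
        for j in range(i, H):
            prod_h[i, j] = p_h[i, i] if j == i else prod_h[i, j - 1] * p_h[i, j]
    prod_v = np.zeros((W, W))   # prod_v[x, y] = prod_{u=x..y} p_v[x, u]
    for x in range(W):
        for y in range(x, W):
            prod_v[x, y] = p_v[x, x] if y == x else prod_v[x, y - 1] * p_v[x, y]
    # Eq. (8): C_{i,j;x,y} = prod_h[i, j] * prod_v[x, y] -- a single outer product,
    # instead of O(H^2 x W^2) separate evaluations of Eq. (4).
    return np.einsum('ij,xy->ijxy', prod_h, prod_v)

def clustering_matrix_1d(s_bar):
    """Hypothetical stand-in for the 1-D clustering matrix of Eqs. (3)-(4)."""
    n = s_bar.shape[0]
    return np.full((n, n), 0.5)

def down_up_sampled_C(S):
    """Down-Up Sampling Strategy for a 1-D sequence S of shape (L, d), L even."""
    L = S.shape[0]
    s_bar = 0.5 * (S[0::2] + S[1::2])        # mean-pool s_{2i-1}, s_{2i} -> (L/2, d)
    c_bar = clustering_matrix_1d(s_bar)      # (L/2, L/2) clustering matrix on the pooled sequence
    idx = np.arange(L) // 2                  # 0-based version of ceil(i/2)
    return c_bar[np.ix_(idx, idx)]           # upsample: C_{i,j} = C_bar_{ceil(i/2), ceil(j/2)}
```

For the 12 × 12 feature maps used in our experiments (cf. Section 4.1), the factorization replaces roughly 12^2 × 12^2 = 20,736 evaluations of Eq. (4) with 12^2 + 12^2 = 288 evaluations plus one outer product.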
3.2. Encoder-Decoder Architecture

As shown in Figure 4, we apply ACF to build both the vision encoder and the language decoder.
Compared with the classic Transformer, ACF introduces a clustering-constrained attention head. Specifically, in the encoder we calculate a 2-D clustering matrix C (cf. Eq. (7)) to softly cluster the elements for carrying Self-ATT; similarly, in the decoder the attention head is revised with the 1-D C (cf. Eq. (5)). The output of this encoder-decoder is used to calculate the word distributions Z.

To train our IC model, we first minimize the cross-entropy loss and then maximize a Reinforcement Learning (RL) [35] reward. The cross-entropy loss is

L_{CE} = -\log P(Z^*),    (9)

where Z^* denotes the ground-truth captions. We then further train the model by minimizing the negative reward:

L_{rl} = -\mathbb{E}_{Z_s \sim P(Z)} [S(Z^*, Z_s)],    (10)

where Z_s is sampled from Z, \mathbb{E} denotes the mathematical expectation, and S denotes an evaluation metric, e.g., CIDEr [40].
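To make the two-stage objective of Eqs. (9)-(10) concrete, here is a minimal PyTorch-style sketch of the two training steps. The model interface (`model`, `model.sample`, `model.greedy_decode`) and the reward helper `reward_fn` are hypothetical placeholders, and the greedy-decoding baseline is an SCST-style assumption in the spirit of [35] rather than a detail stated above.

```python
import torch
import torch.nn.functional as F

def xe_step(model, opt, images, gt_tokens, pad_id=0):
    """One cross-entropy step (Eq. (9)): maximize log P(Z*) with teacher forcing."""
    logits = model(images, gt_tokens[:, :-1])              # per-step word logits
    loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                           gt_tokens[:, 1:].reshape(-1), ignore_index=pad_id)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

def rl_step(model, opt, images, gt_captions, reward_fn):
    """One RL step (Eq. (10)), sketched as SCST with a greedy baseline (our assumption)."""
    sampled, log_probs = model.sample(images)               # Z_s ~ P(Z) and their per-token log-probs
    with torch.no_grad():
        baseline_caps = model.greedy_decode(images)         # baseline captions
    reward = reward_fn(sampled, gt_captions)                # S(Z*, Z_s), e.g. CIDEr
    baseline = reward_fn(baseline_caps, gt_captions)
    # Monte-Carlo estimate of -E[S]; the baseline only reduces gradient variance.
    loss = -((reward - baseline).detach() * log_probs.sum(dim=1)).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```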
4. Experiments

4.1. Dataset, Metrics, and Settings

MSCOCO. Following [8, 12, 14, 31, 48], we train and evaluate our model on MSCOCO [22], which contains 123,287 images, each annotated with 5 captions. In the experiments, we use the Karpathy split (113,287/5,000/5,000 train/val/test images) [16] for offline training and the official split (40,775 test images) for online testing.

Metrics. We adopt five widely used captioning metrics for evaluation: BLEU [32], METEOR [1], ROUGE-L [36], CIDEr [40], and SPICE [3].

Settings. During training, we convert all captions to lowercase and remove words that occur fewer than 6 times; the remaining 9,487 words form our vocabulary. We adopt the Swin Transformer [24] to extract the visual features; the feature map size is H × W = 12 × 12, and we apply the Down-Up Sampling Strategy (cf. Section 3.1). We train for 20/25 epochs in the cross-entropy/RL stage. In the cross-entropy stage, the Adam optimizer is used with a learning rate of 5 × 10^{-5}, decayed by 0.8 every 5 epochs. In the RL stage, the learning rate is initialized to 5 × 10^{-6} with the same decay policy for 10 epochs, after which the "Reduce-On-Plateau" strategy is applied with a decay rate of 0.5 and a patience of 3. The batch size is 40 throughout training.
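A short sketch of the optimizer and learning-rate schedule described above, using standard PyTorch schedulers; the exact switch-over between the step decay and the plateau decay in the RL stage is our own guess at a reasonable implementation, not a detail given in the paper.

```python
import torch

def build_xe_optimizer(model):
    """Cross-entropy stage: Adam at 5e-5, decayed by 0.8 every 5 epochs."""
    opt = torch.optim.Adam(model.parameters(), lr=5e-5)
    sched = torch.optim.lr_scheduler.StepLR(opt, step_size=5, gamma=0.8)
    return opt, sched

def build_rl_optimizer(model):
    """RL stage: restart Adam at 5e-6; same step decay for 10 epochs, then Reduce-On-Plateau."""
    opt = torch.optim.Adam(model.parameters(), lr=5e-6)
    step = torch.optim.lr_scheduler.StepLR(opt, step_size=5, gamma=0.8)
    plateau = torch.optim.lr_scheduler.ReduceLROnPlateau(opt, factor=0.5, patience=3)
    return opt, step, plateau
```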
Table 1. Comparison between models with and without Ada-ClustFormer layers.

Models   m_e   m_d   B@4    M     R      C      S
BASE     6S    6S    40.0   29.7  59.6   134.4  23.4
ACF 1    6C    6S    40.3   29.6  59.6   134.7  23.5
ACF 2    6S    6C    40.2   29.8  59.9   135.1  23.7
ACF      6C    6C    41.1   30.1  60.2   137.8  24.1

4.2. Ablation Studies

We conduct extensive ablations to quantify the difference between classic self-attention (Self-ATT) layers and Ada-ClustFormer (ACF) layers (cf. Section 4.2.1), the impact of the depth of the ACF layers (cf. Section 4.2.2), and the impact of the order of ACF and Self-ATT layers (cf. Section 4.2.3).

4.2.1 Differences Between ACF and Self-ATT

Comparing Methods. To evaluate the effectiveness of ACF, we compare it with the following baselines. BASE: 6 Self-ATT encoder layers and 6 Self-ATT decoder layers, denoted "6S" in Table 1. ACF 1 / ACF 2: the encoder/decoder is replaced with ACF layers, denoted "6C".

Results. The ablation results are listed in Table 1. Compared with BASE, using only the ACF encoder (ACF 1) or only the ACF decoder (ACF 2) brings marginal improvements of 0.3 and 0.7 CIDEr, respectively. However, combining the ACF encoder and decoder into a homogeneous architecture (ACF) yields a substantial improvement of 3.4 CIDEr. This comparison suggests that a homogeneous model transfers more structural commonalities and thus generates better captions.

4.2.2 Impact of the Layer Depth

Comparing Methods. ACF 3: the depth of both the encoder and the decoder is reduced to 3. ACF 4 / ACF 5: the number of encoder/decoder layers is set to 3 while the number of decoder/encoder layers remains 6.

Results. From Table 2, we observe that stacking 6 layers generally outperforms the 3-layer variants; our model with 6 ACF layers in both the encoder and decoder achieves the best performance. We further explore the influence of m_e by fixing m_d = 6; as Figure 5 shows, CIDEr increases approximately linearly as m_e increases.

4.2.3 Impact of the Layer Order

Comparing Methods.
We study combinations of ACF layers and Self-ATT layers. We freeze the depth of the decoder at m_d = 6 and vary the order of the encoder layers. ACF 5: it stacks 3 ACF layers. ACF 6 / ACF 7: both contain 3 ACF layers and 3 Self-ATT layers; the difference is that ACF 7 applies the 3 Self-ATT layers first.

Results. The results are listed in Table 3, where we can see that the performance is not sensitive to the order of ACF and Self-ATT layers, e.g., ACF 6 and ACF 7 differ by only 0.4 CIDEr. We can also find that replacing all the Self-ATT layers with ACF layers achieves the best captioning quality.

Table 2. Performance with different layer depths.

Models   m_e   m_d   B@4    M     R      C      S
ACF 3    3C    3C    38.9   28.4  58.8   132.3  22.0
ACF 4    6C    3C    39.3   28.9  59.1   135.9  23.7
ACF 5    3C    6C    40.2   29.8  59.7   136.0  24.0
ACF      6C    6C    41.1   30.1  60.2   137.8  24.1

Table 3. The impact of the layer orders.

Models   m_e     m_d   B@4    M     R      C      S
ACF 5    3C      6C    40.2   29.8  59.7   136.0  24.0
ACF 6    3C+3S   6C    40.7   29.7  59.9   135.7  23.8
ACF 7    3S+3C   6C    40.5   29.9  59.9   136.1  23.9
ACF 2    6S      6C    40.2   29.8  59.9   135.1  23.7
ACF      6C      6C    41.1   30.1  60.2   137.8  24.1

Figure 5. Impact of the number of encoder layers m_e (CIDEr rises from 135.97 at m_e = 3 to 136.6, 137.5, and 137.83 at m_e = 4, 5, and 6).
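For readability, the encoder variants ablated in Tables 1-3 can be summarized as short layer-type specifications. The sketch below is purely illustrative; `ACFLayer` and `SelfAttLayer` are hypothetical placeholder classes for an ACF ("C") and a standard Self-ATT ("S") layer, and the model width is an arbitrary choice.

```python
import torch.nn as nn

class SelfAttLayer(nn.Module):
    """Placeholder for a standard Self-ATT layer ("S")."""
    def __init__(self, d_model): super().__init__(); self.attn = nn.MultiheadAttention(d_model, 8)

class ACFLayer(nn.Module):
    """Placeholder for an ACF layer ("C"); the clustering matrix C would modulate Self-ATT here."""
    def __init__(self, d_model): super().__init__(); self.attn = nn.MultiheadAttention(d_model, 8)

# Layer-type specs mirroring the m_e column of Tables 1-3 ("S" = Self-ATT, "C" = ACF).
ENCODER_SPECS = {
    "BASE":  ["S"] * 6,
    "ACF 1": ["C"] * 6,
    "ACF 5": ["C"] * 3,
    "ACF 6": ["C", "C", "C", "S", "S", "S"],   # 3C + 3S
    "ACF 7": ["S", "S", "S", "C", "C", "C"],   # 3S + 3C (Self-ATT layers first)
    "ACF":   ["C"] * 6,
}

def build_encoder(spec, d_model=512):
    """Stack the requested mixture of ACF and Self-ATT layers."""
    return nn.ModuleList(ACFLayer(d_model) if k == "C" else SelfAttLayer(d_model) for k in spec)
```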
Qualitative Results. We visualize the hierarchical structures of the image and the generated captions in Figure 6, according to the 2-D and 1-D clustering matrices calculated in the 1-st, 3-rd, 5-th, and 6-th layers of the encoder and decoder. Inspecting the images and captions, we find that the patches and the words are respectively clustered, e.g., in the left part of (b), the patches in the "motorcycles" region are clustered, and in the right part, the words "sitting on motorcycles" are clustered into a phrase. More importantly, when uniting the image and the caption, we find that structural commonalities are transferred, e.g., in (b), the "motorcycle" region helps generate the phrase "sitting on motorcycles".

Figure 6. Examples of captions generated by the BASE and ACF models; the 2-D C and 1-D C of the 1-st, 3-rd, 5-th, and 6-th layers are visualized as clustered patches and word groups. The three examples are: Ground-truth "A woman in white and black dress with suitcase on train." (BASE: "A woman standing with a suitcase."; ACF: "A woman standing on the door of a train with a suitcase."); Ground-truth "Two people riding motorcycles on a city street." (BASE: "Two people riding black motorcycles."; ACF: "Two people sitting on motorcycles next to a stop sign."); and Ground-truth "A man with a hat and eye glasses holding a cell phone." (BASE: "A man with a cowboy hat holding a cell phone."; ACF: "A man wearing a cowboy hat taking a picture with a cell phone.").

4.3. Comparisons with SOTA

Comparing Methods. The SOTA of image captioning has been updated quickly in recent years, and these models can be categorized into three groups. The first group uses ROI-based features, including Up-Down [4], ORT [12], AoANet [14], M2 Transformer [8], Tree-Transformer [43], APN [48], and X-Transformer [31]. Among them, Up-Down [4] deploys the well-known architecture of a CNN-based encoder and an LSTM-based decoder, while ORT [12] applies a Transformer as the language decoder.
AoANet [14] and M2 Transformer [8] further improve the attention mechanism of the language decoder, while Tree-Transformer [43] and APN [48] show the value of exploiting sequence structure. To capture high-order interactions between the sequence and the regions, X-Transformer [31] introduces a bilinear pooling structure. The second group uses grid-based features: CPTR [23], Dual-Global [45], DLCT [25], and PureT [42]. Among them, Dual-Global [45] and DLCT [25] combine grid-based features with ROI-based features, while PureT [42] trains the whole model end-to-end; PureT-standard/PureT-Swin respectively use a Transformer [9]/Swin Transformer [24] as the vision encoder on visual features that are likewise extracted from a Swin Transformer. The third group distills knowledge from large-scale pretraining models: RSTNet [54] and ViTCAP [10]. Accordingly, we segment the performances in Table 4 into three parts, where the top/middle/bottom parts contain the ROI-based, grid-based, and BERT-based models, respectively. Note that for APN, besides the results reported in its paper [48], which are obtained with ROI-based features, we also report the performance obtained with the same visual features as ours, denoted "APN♯".

Results. From Table 4, we can see that ACF achieves performance comparable to most state-of-the-art ROI-based and grid-based models. Moreover,

Table 4. The performances of SOTA methods on the MSCOCO Karpathy split. "−" denotes results not reported.

                                   Cross-Entropy Loss              CIDEr optimization
Models                             B@4   M     R     C      S      B@4   M     R     C      S
ROI-based feature
Up-Down [4]                        36.2  27.0  56.4  113.5  20.3   36.3  27.7  56.9  120.1  21.4
ORT [12]                           35.5  28.0  56.6  115.4  21.2   38.6  28.7  58.4  128.3  22.6
AoANet [14]                        37.2  28.4  57.5  119.8  21.4   38.9  29.2  58.8  129.8  22.4
M2 Transformer [8]                 −     −     −     −      −      39.1  29.2  58.6  131.2  22.6
CATT [50]                          37.3  28.5  57.4  119.0  21.5   39.4  29.3  58.9  131.7  22.8
APN [48]                           −     −     −     −      −      39.6  29.2  59.1  131.8  23.0
X-Transformer [31]                 38.2  28.8  58.0  122.0  21.9   39.7  29.5  59.2  132.8  23.2
Grid-based feature
CPTR [23]                          −     −     −     −      −      40.0  29.1  59.4  129.4  −
APN♯ [48]                          −     −     −     −      −      40.1  29.4  59.4  133.2  23.3
Dual-Global [45]                   −     −     −     −      −      40.3  29.2  59.4  132.4  23.3
DLCT [25]                          −     −     −     −      −      40.8  29.9  59.8  137.5  23.3
End-to-End training
PureT-standard [42]                −     −     −     −      −      40.3  29.9  59.9  137.5  23.8
PureT-Swin [42]                    −     −     −     −      −      40.9  30.2  60.1  138.2  24.2
Visual-language BERT pretraining
RSTNet [54]                        −     −     −     −      −      40.1  28.9  59.5  135.6  23.3
ViTCAP-small [10]                  35.7  28.8  57.6  121.8  22.1   40.1  29.4  59.4  133.1  23.0
ViTCAP-large [10]                  36.3  29.3  58.1  125.2  22.6   41.2  30.1  60.1  138.1  24.1
ACF                                38.1  28.8  58.4  123.8  21.8   41.1  30.1  60.2  137.8  24.1
Table 5. The scores on the MSCOCO online test server.

Models               B@4 (c5/c40)   M (c5/c40)    R (c5/c40)    C (c5/c40)
Up-Down [4]          36.9 / 68.5    27.6 / 36.7   57.1 / 72.4   117.9 / 120.5
SGAE [49]            37.8 / 68.7    28.1 / 37.0   58.2 / 73.1   122.7 / 125.5
ETA [19]             38.9 / 70.2    28.6 / 38.0   58.6 / 73.9   122.1 / 124.4
APN [48]             38.9 / 70.2    28.8 / 38.0   58.7 / 73.7   126.3 / 127.6
NG-SAN [11]          38.8 / 70.2    29.0 / 38.4   58.7 / 74.0   126.3 / 128.6
Dual-Global [45]     39.1 / 71.2    28.9 / 38.4   58.9 / 74.4   126.3 / 129.2
AoANet [14]          39.4 / 71.2    29.1 / 38.5   58.9 / 74.5   126.9 / 129.6
M2 Transformer [8]   39.7 / 72.8    29.4 / 39.0   59.2 / 74.8   129.3 / 132.1
RSTNet [54]          39.7 / 72.5    29.3 / 38.7   59.2 / 74.2   130.1 / 132.4
ACF                  39.0 / 71.3    29.2 / 39.2   59.2 / 74.2   130.2 / 132.3
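The B@4, M, R, C, and S columns in Tables 4 and 5 are the standard BLEU-4 [32], METEOR [1], ROUGE-L [36], CIDEr [40], and SPICE [3] metrics. As a reference point, below is a minimal sketch of how such scores are conventionally computed with the publicly available COCO caption evaluation toolkit (pycocotools + pycocoevalcap); the file names are hypothetical placeholders, not the paper's actual paths.

```python
# Minimal sketch: scoring generated captions with the COCO caption evaluation toolkit.
from pycocotools.coco import COCO
from pycocoevalcap.eval import COCOEvalCap

annotation_file = "captions_val2014.json"   # ground-truth captions (placeholder path)
results_file = "generated_captions.json"    # [{"image_id": int, "caption": str}, ...] (placeholder)

coco = COCO(annotation_file)                # load reference captions
coco_res = coco.loadRes(results_file)       # load model-generated captions

coco_eval = COCOEvalCap(coco, coco_res)
coco_eval.params["image_id"] = coco_res.getImgIds()  # score only the captioned images
coco_eval.evaluate()

for metric, score in coco_eval.eval.items():
    # e.g. Bleu_4, METEOR, ROUGE_L, CIDEr, SPICE
    print(f"{metric}: {score:.3f}")
```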
ACF achieves performance comparable to ViTCAP-large [10], which distills knowledge from Google-CC [37], the SBU Caption dataset [30], MSCOCO [22], and the Visual Genome dataset [17], i.e., 9.9M image-text pairs and 4.1M independent images, to pretrain a detector-free IC model. In contrast, we only use the captions from MSCOCO to train ACF. Moreover, compared with APN♯ [48], which inserts an additional clustering matrix only into the Self-ATT layers of the decoder, ACF achieves higher performance because it inserts the clustering matrix into both the vision encoder and the language decoder, yielding a homogeneous model.

We also submit the single-model results to the online test server, as shown in Table 5. ACF achieves the best performance among the compared models, even though we do not ensemble results as AoANet [14], M2 Transformer [8], and RSTNet [54] do.
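As discussed above, the key ingredient of ACF is an input-dependent clustering matrix inserted into the Self-ATT layers of both the vision encoder and the language decoder, with its probability terms computed from the input itself. The following is a minimal sketch of how such a matrix can modulate standard self-attention; the neighbor-pair scorer, the cumulative span probability, and the log-bias injection are illustrative assumptions for exposition, not the authors' exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ClusteredSelfAttention(nn.Module):
    """Self-attention softly restricted by an input-dependent clustering matrix C (sketch)."""

    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Assumed parameterization: score whether neighbors s_i and s_{i+1} should be merged.
        self.pair_scorer = nn.Linear(2 * d_model, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, N, d) grid features (encoder) or word features (decoder).
        pair = torch.cat([x[:, :-1], x[:, 1:]], dim=-1)          # (B, N-1, 2d)
        p = torch.sigmoid(self.pair_scorer(pair)).squeeze(-1)    # (B, N-1): merge prob. of (i, i+1)
        log_p = torch.log(p.clamp_min(1e-6))

        # cum[:, k] = sum of log_p[:, :k]; then log C[i, j] = sum of log_p over the span {i..j},
        # i.e. the log-probability that all neighboring pairs inside the span are clustered.
        cum = torch.cumsum(F.pad(log_p, (1, 0)), dim=1)          # (B, N)
        log_C = -(cum.unsqueeze(1) - cum.unsqueeze(2)).abs()     # (B, N, N), zeros on the diagonal

        # Use log C as an additive attention bias: near 0 inside likely clusters, strongly
        # negative for unlikely spans, so attention is softly confined to adaptive clusters.
        bias = log_C.repeat_interleave(self.attn.num_heads, dim=0)  # (B * heads, N, N)
        out, _ = self.attn(x, x, x, attn_mask=bias)
        return out


# Toy usage: 2 sequences of 4 elements with dimension 32.
layer = ClusteredSelfAttention(d_model=32, n_heads=4)
y = layer(torch.randn(2, 4, 32))
print(y.shape)  # torch.Size([2, 4, 32])
```

Because the bias is derived from the input sequence itself, the effective attention neighborhoods vary per input, in contrast to attention restricted to fixed-size windows.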
Limitations and Potential Solutions. From Table 4, we can see that PureT-Swin [42] achieves a higher CIDEr than ours, for two major reasons. First, PureT-Swin extracts visual features with Swin Transformer [24] and then also uses Swin Transformer as the visual encoder to process the extracted features. ACF's vision encoder is quite different from Swin Transformer: Swin applies shifted fixed-size windows, whereas we insert an adaptive clustering matrix into the Transformer. As a result, the whole captioning model (including the vision extractor) is not a strictly homogeneous structure. Notably, ACF outperforms PureT-standard, which uses a standard Transformer as the vision encoder; once PureT is no longer homogeneous, its performance drops. Second, PureT trains the whole architecture end-to-end on captioning data, since Swin Transformer [24] provides well-trained parameters and the visual extractor does not need to be trained from scratch. End-to-end training of the visual extractor on image annotations, however, requires heavy computational resources that we cannot currently afford. Even with these two limitations, ACF still achieves performance comparable to PureT.

To address these limitations, we plan to extend our computational resources (e.g., GPU servers) to build a pure-vision global-local Transformer in which the ACF prior is used to learn hierarchical structure, and then to use this model to extract visual features for more vision-language tasks, e.g., by building a homogeneous ACF-based vision-language model.

5. Conclusion

We propose a novel global-local Transformer named Ada-ClustFormer (ACF) that adaptively clusters the input elements for carrying self-attention (Self-ATT) to learn global-local contexts.
Specifically, this is achieved by inserting a clustering matrix into the Self-ATT layer, whose probability terms are computed from the input data, so that ACF can adaptively cluster the elements. We then use ACF to build an image captioning model that transfers more structural commonalities for better captions. The experimental results confirm the effectiveness of the proposed model.

References

[1] Abhaya Agarwal and Alon Lavie. METEOR: An automatic metric for MT evaluation with high levels of correlation with human judgments. In Proceedings of WMT-08, 2007.
[2] Mahtab Ahmed, Muhammad Rifayat Samee, and Robert E. Mercer. You only need attention to traverse trees. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 316–322, 2019.
[3] Peter Anderson, Basura Fernando, Mark Johnson, and Stephen Gould. SPICE: Semantic propositional image caption evaluation. In European Conference on Computer Vision, pages 382–398. Springer, 2016.
[4] Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. Bottom-up and top-down attention for image captioning and visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6077–6086, 2018.
[5] Peter W. Battaglia, Jessica B. Hamrick, Victor Bapst, Alvaro Sanchez-Gonzalez, Vinicius Zambaldi, Mateusz Malinowski, Andrea Tacchetti, David Raposo, Adam Santoro, Ryan Faulkner, et al. Relational inductive biases, deep learning, and graph networks. arXiv preprint arXiv:1806.01261, 2018.
[6] Iz Beltagy, Matthew E. Peters, and Arman Cohan. Longformer: The long-document transformer. arXiv preprint arXiv:2004.05150, 2020.
[7] Boyu Chen, Peixia Li, Chuming Li, Baopu Li, Lei Bai, Chen Lin, Ming Sun, Junjie Yan, and Wanli Ouyang. GLiT: Neural architecture search for global and local image transformer. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 12–21, 2021.
[8] Marcella Cornia, Matteo Stefanini, Lorenzo Baraldi, and Rita Cucchiara. Meshed-memory transformer for image captioning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10578–10587, 2020.
[9] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In ICLR, 2021.
[10] Zhiyuan Fang, Jianfeng Wang, Xiaowei Hu, Lin Liang, Zhe Gan, Lijuan Wang, Yezhou Yang, and Zicheng Liu. Injecting semantic concepts into end-to-end image captioning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18009–18019, 2022.
[11] Longteng Guo, Jing Liu, Xinxin Zhu, Peng Yao, Shichen Lu, and Hanqing Lu. Normalized and geometry-aware self-attention network for image captioning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10327–10336, 2020.
[12] Simao Herdade, Armin Kappeler, Kofi Boakye, and Joao Soares. Image captioning: Transforming objects into words. In Advances in Neural Information Processing Systems, pages 11137–11147, 2019.
[13] Xiaowei Hu, Zhe Gan, Jianfeng Wang, Zhengyuan Yang, Zicheng Liu, Yumao Lu, and Lijuan Wang. Scaling up vision-language pre-training for image captioning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 17980–17989, 2022.
[14] Lun Huang, Wenmin Wang, Jie Chen, and Xiao-Yong Wei. Attention on attention for image captioning. In Proceedings of the IEEE International Conference on Computer Vision, pages 4634–4643, 2019.
[15] Huaizu Jiang, Ishan Misra, Marcus Rohrbach, Erik Learned-Miller, and Xinlei Chen. In defense of grid features for visual question answering. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10267–10276, 2020.
[16] Andrej Karpathy and Li Fei-Fei. Deep visual-semantic alignments for generating image descriptions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3128–3137, 2015.
[17] Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A. Shamma, et al. Visual Genome: Connecting language and vision using crowdsourced dense image annotations. International Journal of Computer Vision, 123(1):32–73, 2017.
[18] Hwanhee Lee, Seunghyun Yoon, Franck Dernoncourt, Doo Soon Kim, Trung Bui, and Kyomin Jung. ViLBERTScore: Evaluating image caption using vision-and-language BERT. In Proceedings of the First Workshop on Evaluation and Comparison of NLP Systems, pages 34–39, 2020.
[19] Guang Li, Linchao Zhu, Ping Liu, and Yi Yang. Entangled transformer for image captioning. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), October 2019.
[20] Jinpeng Li, Yichao Yan, Shengcai Liao, Xiaokang Yang, and Ling Shao. Local-to-global self-attention in vision transformers. arXiv preprint arXiv:2107.04735, 2021.
[21] Xiujun Li, Xi Yin, Chunyuan Li, Pengchuan Zhang, Xiaowei Hu, Lei Zhang, Lijuan Wang, Houdong Hu, Li Dong, Furu Wei, et al. Oscar: Object-semantics aligned pre-training for vision-language tasks. In European Conference on Computer Vision, pages 121–137. Springer, 2020.
[22] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. Microsoft COCO: Common objects in context. In European Conference on Computer Vision, pages 740–755. Springer, 2014.
[23] Wei Liu, Sihan Chen, Longteng Guo, Xinxin Zhu, and Jing Liu. CPTR: Full transformer network for image captioning. arXiv preprint arXiv:2101.10804, 2021.
[24] Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin Transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 10012–10022, 2021.
[25] Yunpeng Luo, Jiayi Ji, Xiaoshuai Sun, Liujuan Cao, Yongjian Wu, Feiyue Huang, Chia-Wen Lin, and Rongrong Ji. Dual-level collaborative transformer for image captioning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 2286–2293, 2021.
[26] Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. Effective approaches to attention-based neural machine translation. arXiv preprint arXiv:1508.04025, 2015.
[27] Ron Mokady, Amir Hertz, and Amit H. Bermano. ClipCap: CLIP prefix for image captioning. arXiv preprint arXiv:2111.09734, 2021.
[28] Van-Quang Nguyen, Masanori Suganuma, and Takayuki Okatani. GRIT: Faster and better image captioning transformer using dual visual features. arXiv preprint arXiv:2207.09666, 2022.
[29] Xuan-Phi Nguyen, Shafiq Joty, Steven Hoi, and Richard Socher. Tree-structured attention with hierarchical accumulation. In International Conference on Learning Representations, 2020.
[30] Vicente Ordonez, Girish Kulkarni, and Tamara Berg. Im2Text: Describing images using 1 million captioned photographs. Advances in Neural Information Processing Systems, 24, 2011.
[31] Yingwei Pan, Ting Yao, Yehao Li, and Tao Mei. X-Linear attention networks for image captioning. In CVPR, pages 10971–10980, 2020.
[32] Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. BLEU: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311–318, 2002.
[33] Samrudhdhi B. Rangrej, Kevin J. Liang, Tal Hassner, and James J. Clark. GliTr: Glimpse transformers with spatiotemporal consistency for online action prediction. arXiv preprint arXiv:2210.13605, 2022.
[34] S. Ren, K. He, R. Girshick, and J. Sun. Towards real-time object detection with region proposal networks. Advances in Neural Information Processing Systems, 2015.
[35] Steven J. Rennie, Etienne Marcheret, Youssef Mroueh, Jerret Ross, and Vaibhava Goel. Self-critical sequence training for image captioning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7008–7024, 2017.
[36] Chin-Yew Lin. ROUGE: A package for automatic evaluation of summaries. In Proceedings of the Workshop on Text Summarization of ACL, Spain, 2004.
[37] Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut. Conceptual Captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2556–2565, 2018.
[38] Ying Hua Tan and Chee Seng Chan. Phrase-based image caption generator with hierarchical LSTM network. Neurocomputing, 333:86–100, 2019.
[39] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in Neural Information Processing Systems, 30, 2017.
[40] Ramakrishna Vedantam, C. Lawrence Zitnick, and Devi Parikh. CIDEr: Consensus-based image description evaluation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4566–4575, 2015.
[41] Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. Show and tell: A neural image caption generator. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3156–3164, 2015.
[42] Yiyu Wang, Jungang Xu, and Yingfei Sun. End-to-end transformer based model for image captioning. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 2585–2594, 2022.
[43] Yau-Shian Wang, Hung-Yi Lee, and Yun-Nung Chen. Tree Transformer: Integrating tree structures into self-attention. arXiv preprint arXiv:1909.06639, 2019.
[44] Chuhan Wu, Fangzhao Wu, Tao Qi, and Yongfeng Huang. Hi-Transformer: Hierarchical interactive transformer for efficient and effective long document modeling. arXiv preprint arXiv:2106.01040, 2021.
[45] Tiantao Xian, Zhixin Li, Canlong Zhang, and Huifang Ma. Dual global enhanced transformer for image captioning. Neural Networks, 148:129–141, 2022.
[46] Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. Show, attend and tell: Neural image caption generation with visual attention. In International Conference on Machine Learning, pages 2048–2057. PMLR, 2015.
[47] Jianwei Yang, Chunyuan Li, Pengchuan Zhang, Xiyang Dai, Bin Xiao, Lu Yuan, and Jianfeng Gao. Focal attention for long-range interactions in vision transformers. Advances in Neural Information Processing Systems, 34:30008–30022, 2021.
[48] Xu Yang, Chongyang Gao, Hanwang Zhang, and Jianfei Cai. Auto-parsing network for image captioning and visual question answering. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2197–2207, 2021.
[49] Xu Yang, Kaihua Tang, Hanwang Zhang, and Jianfei Cai. Auto-encoding scene graphs for image captioning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10685–10694, 2019.
[50] Xu Yang, Hanwang Zhang, Guojun Qi, and Jianfei Cai. Causal attention for vision-language tasks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9847–9857, 2021.
[51] Ting Yao, Yingwei Pan, Yehao Li, and Tao Mei. Hierarchy parsing for image captioning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2621–2629, 2019.
[52] Pengchuan Zhang, Xiujun Li, Xiaowei Hu, Jianwei Yang, Lei Zhang, Lijuan Wang, Yejin Choi, and Jianfeng Gao. VinVL: Revisiting visual representations in vision-language models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5579–5588, 2021.
[53] Hengshuang Zhao, Jiaya Jia, and Vladlen Koltun. Exploring self-attention for image recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10076–10085, 2020.
[54] Luowei Zhou, Hamid Palangi, Lei Zhang, Houdong Hu, Jason Corso, and Jianfeng Gao. Unified vision-language pre-training for image captioning and VQA. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 13041–13049, 2020.