arXiv:2301.00989v1 [cs.CV] 3 Jan 2023

A New Perspective to Boost Vision Transformer for Medical Image Classification

Yuexiang Li (vicyxli@tencent.com), Yawen Huang (yawenhuang@tencent.com), Nanjun He (nanjunhe@tencent.com), Kai Ma (kylekma@tencent.com), Yefeng Zheng (yefengzheng@tencent.com)
Tencent Jarvis Lab, Shenzhen, China

Abstract

Transformer has achieved impressive successes on various computer vision tasks. However, most existing studies require pretraining the Transformer backbone on a large-scale labeled dataset (e.g., ImageNet) to achieve satisfactory performance, and such a dataset is usually unavailable for medical images. Additionally, due to the gap between medical and natural images, the improvement brought by ImageNet pretrained weights degrades significantly when the weights are transferred to medical image processing tasks. In this paper, we propose Bootstrap Own Latent of Transformer (BOLT), a self-supervised learning approach designed specifically for medical image classification with a Transformer backbone. Our BOLT consists of two networks, namely the online and target branches, for self-supervised representation learning.
Concretely, the online network is trained to predict the target network representation of the same patch embedding tokens under a different perturbation. To maximally excavate the potential of the Transformer on limited medical data, we propose an auxiliary difficulty ranking task. The Transformer is enforced to identify which branch (i.e., online or target) is processing the more difficult perturbed tokens. Overall, the Transformer endeavours to distill transformation-invariant features from the perturbed tokens to simultaneously achieve difficulty measurement and maintain the consistency of self-supervised representations. The proposed BOLT is evaluated on three medical image processing tasks, i.e., skin lesion classification, knee fatigue fracture grading and diabetic retinopathy grading. The experimental results validate the superiority of our BOLT for medical image classification, compared to ImageNet pretrained weights and state-of-the-art self-supervised learning approaches.

1 Introduction
Recently, the vision Transformer (ViT) [10] and its variants [23, 32, 36] have been introduced for various computer vision tasks (e.g., image classification [10, 18], object detection [9, 41], semantic segmentation [34, 39] and medical image processing [11, 15, 16, 31, 38]) and have gained increasing attention from the community. The common ViT usually requires pretraining on large-scale natural image datasets, e.g., ImageNet, to achieve satisfactory performance. For natural images, the labels of the pretraining dataset can be efficiently obtained by crowdsourcing, as even ordinary people possess the ability to effectively identify and label objects in natural images. However, the same strategy cannot be adopted for medical images, as professional expertise is mandatory for high-quality medical image annotations. Hence, the limited amount of annotated medical data is the major obstacle to improving diagnosis accuracy, even with the powerful vision Transformer.

Self-supervised learning (SSL) is a potential solution to tackle the challenge of insufficient annotated data. Typical self-supervised learning formulates a proxy task to extract representative features from unlabeled data, which can boost the accuracy of the subsequent target task. Existing studies have proposed various proxy tasks, including grayscale image colorization [19], patch re-ordering [25], and context restoration [27]. SSL was first brought to medical image processing by Zhang et al. [37]: the neural network was pretrained with a proxy task that sorted 2D slices from conventional 3D medical volumes for subsequent fine-grained body part recognition.
Zhu et al. [40] enforced 3D networks to play a Rubik's cube game for pretraining, which can be seen as an extension of 2D jigsaw puzzles [24]. Contrastive learning [13] has recently been popularized for self-supervised representation learning. Instead of permuting the contextual information of images to formulate self-supervised signals, these approaches enforce neural networks to spontaneously exploit useful information from pairs of positive and negative samples. He et al. [14] first introduced the idea of contrastive learning into the area of self-supervised learning. They proposed an approach, namely MoCo, which addressed the problem of the large number of negative samples required for contrastive learning by maintaining a memory bank of negative samples. Following this direction, various contrastive-learning-based self-supervised approaches have been proposed [4, 6, 7, 12, 26, 33, 35].

Inspired by the success of self-supervised learning for CNNs, researchers have begun to devote their efforts to ViT. Atito et al. [1] directly utilized existing SSL approaches, including rotation prediction, contrastive learning and image restoration, to pretrain vision Transformers, and several studies [2, 3] have followed this direction. However, taking the architectural difference between CNNs and ViT into account, i.e., a CNN takes the whole image as input while the input of ViT is the embedding tokens of image tiles, a self-supervised learning approach designed specifically for ViT is worthwhile to develop.
In a recent study, Chen et al. [7] proposed MoCo V3, a token-based contrastive learning approach designed specifically for ViT to extract self-supervised features from raw data. The network pretrained with MoCo V3 outperformed the ImageNet-pretrained one, which demonstrated the effectiveness of token-based self-supervised learning. In this paper, we follow this direction and propose a token-wise perturbation based self-supervised learning framework, namely Bootstrap Own Latent of Transformer (BOLT), specifically for medical image classification with the vision Transformer. Similar to the existing Bootstrap Your Own Latent (BYOL) [12], our BOLT consists of two networks, namely the online and target branches, for self-supervised representation learning. Instead of the image-wise transformation adopted by BYOL, the online network of our BOLT is trained to predict the target network representation of the same patch embedding tokens under a different perturbation. Moreover, to encourage the vision Transformer to deeply exploit useful information from limited medical data, we propose an auxiliary difficulty ranking task. The difference between the original patch embedding tokens and the perturbed ones is measured as the difficulty (i.e., a larger difference means the tokens are more difficult for the vision Transformer to process), which is then adopted as the supervision signal.
Figure 1: The architecture of our BOLT framework. Compared to the original BYOL, our BOLT consists of two main revisions: 1) the proposed BOLT generates two views of embedding tokens for self-supervised learning; 2) a novel difficulty-awareness loss is proposed to encourage the ViT to deeply exploit useful information from raw data. sg(·) means stop-gradient.
In other words, the vision Transformer is required to identify which branch (online/target) is processing the more difficult perturbed tokens. Under the co-supervision of the two tasks, the vision Transformer is encouraged to distill transformation-invariant features from the perturbed tokens, which should be capable of simultaneously measuring the difficulty and maintaining the consistency of the self-supervised representations.

In summary, the main contributions of our work are four-fold:

- A token perturbation based self-supervised learning approach, namely BOLT, designed specifically for the vision Transformer is proposed. A token perturbation module is integrated into the existing BYOL framework for more effective ViT pretraining.

- An auxiliary self-supervised task, i.e., difficulty ranking, is proposed to encourage ViTs to deeply exploit useful information from limited medical data. The self-supervised signal of this auxiliary task also derives from the perturbed tokens generated by our perturbation module. To our best knowledge, this is the first SSL framework based on the difficulty-awareness paradigm.

- The proposed BOLT is evaluated on three medical image processing tasks, i.e., skin lesion classification, knee fatigue fracture grading and diabetic retinopathy grading. The experimental results demonstrate the superiority of our BOLT compared to the widely-used ImageNet pretrained weights.

- Last but not least, we pretrain the ViT using different self-supervised learning approaches on a large-scale private fundus image dataset, captured from a collaborating hospital, for the diabetic retinopathy grading task.
  The dataset consists of 350,000 fundus images of a normal cohort and patients with various diseases, which may be the largest fundus image dataset in the world. Pretraining on this private large-scale dataset is verified to benefit the related downstream target task. To advance the development of automated fundus image processing, we will release the pretrained ViT models to the community.
Figure 2: The architecture of the proposed token perturbation module. The module consists of three operations (i.e., permutation, linear projection and split) to perturb the order and content of the embedded tokens. Note that the nine embedding tokens in this figure are taken as an example; the exact number N of embedding tokens is determined by HW/P², where H and W are the height and width of the original image, respectively, and (P, P) is the size of each image patch.

2 Method

In this section, we introduce the proposed BOLT framework in detail. The pipeline of our Bootstrap Own Latent of Transformer (BOLT) is illustrated in Fig. 1. Similar to BYOL, the proposed BOLT adopts two branches, i.e., the online and target branches, to extract useful information from raw data. The online branch consists of a set of weights θ, including a vision Transformer f_θ, a projector g_θ and a predictor q_θ. The target branch has the same architecture with a different set of weights ξ. The target branch generates the regression targets for the online branch to learn, and its parameters ξ are an exponential moving average of the online branch parameters θ, which can be defined as:

    ξ ← τ·ξ + (1 − τ)·θ,    (1)

where τ ∈ [0, 1] is the decay rate.
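As a concrete illustration of Eq. (1), a minimal PyTorch-style sketch of the momentum update is given below. It is not the authors' released code, and the default decay rate of 0.996 is an assumption borrowed from common BYOL practice rather than a value reported in the paper.

    import torch

    @torch.no_grad()
    def ema_update(target_net: torch.nn.Module, online_net: torch.nn.Module, tau: float = 0.996):
        # Eq. (1): xi <- tau * xi + (1 - tau) * theta, applied parameter-wise.
        for p_t, p_o in zip(target_net.parameters(), online_net.parameters()):
            p_t.data.mul_(tau).add_((1.0 - tau) * p_o.data)

In practice, such an update would be called once per training step, after the optimizer has updated the online branch.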
Compared to the existing BYOL [12], the proposed BOLT has two differences. First, instead of image-based perturbation, we implement a token-based perturbation module for contrastive learning. The underlying reason for the token-based perturbation is that the vision Transformer is insensitive to the order of the input embedded tokens due to the self-attention mechanism, which neutralizes the effectiveness of typical image-based transformations (e.g., the jigsaw puzzle permutation [24]) for the self-supervised learning of ViT. Inspired by recent studies [8, 36], our token perturbation module involves permutation, fusion and split operations to simultaneously disarrange the order and content of the tokens. Second, since a recent study [29] demonstrated that difficulty-awareness can boost the performance of CNNs, a difficulty-awareness auxiliary task, i.e., requiring the ViT to identify which branch (online/target) is processing the more difficult perturbed tokens, is integrated into the existing BYOL framework.

2.1 Token Perturbation Module

Instead of permuting the image content, we propose a token perturbation module that perturbs the order and content of the embedded tokens for the self-supervised learning of a vision Transformer. The architecture of our token perturbation module is presented in Fig. 2; it involves three operations, i.e., permutation, linear projection and split.
Permutation. Similar to the typical vision Transformer, the input image x ∈ R^{H×W×C} is cropped into a sequence of flattened 2D patches x_p ∈ R^{N×(P²·C)}, where H and W are the height and width of the original image, respectively, C is the number of channels, (P, P) is the size of each image patch, and N = HW/P² is the resulting number of patches. Therefore, the embedded tokens z_o can be written as:

    z_o = [x_p^1·E; x_p^2·E; ··· ; x_p^N·E],    (2)

where E ∈ R^{(P²·C)×D} is a trainable linear projection (D is the latent vector size of the vision Transformer). Then, the permuted tokens z_p are obtained using a permutation operation Perm(·), which randomly disarranges the order of z_o: z_p = Perm(z_o). Fig. 2 shows an example, where the order of z_o is disarranged to [z_o^6; z_o^1; z_o^5; z_o^7; z_o^3; z_o^9; z_o^8; z_o^2; z_o^4].
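For readers who prefer code, Eq. (2) corresponds to the standard ViT patch embedding. The sketch below is illustrative only; the patch size, embedding dimension and the use of nn.Unfold are assumptions, not details taken from the paper.

    import torch
    import torch.nn as nn

    class PatchEmbedding(nn.Module):
        # Maps an image x of shape (B, C, H, W) to N = H*W / P^2 tokens of dimension D (Eq. 2).
        def __init__(self, in_chans: int = 3, patch: int = 16, dim: int = 768):
            super().__init__()
            self.unfold = nn.Unfold(kernel_size=patch, stride=patch)  # flatten P x P patches
            self.proj = nn.Linear(patch * patch * in_chans, dim)      # the trainable projection E

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            patches = self.unfold(x).transpose(1, 2)   # (B, N, C*P*P) flattened patches x_p
            return self.proj(patches)                  # (B, N, D) embedded tokens z_o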
Linear Projection. After the permutation, we concatenate M adjacent tokens using a sliding window with a stride of S = W/P, which results in K = N/S long tokens z′_p with a length of M×D. The obtained long tokens are then fed to a linear projection layer E_fuse ∈ R^{(M·D)×(S·D)} for information fusion, which yields K content-perturbed long tokens z_l:

    z_l = z′_p·E_fuse.    (3)

Split. As previously mentioned, the typical vision Transformer uses a constant latent vector size D through all of its layers; hence, the fused tokens with a length of S×D need to be reshaped back to a length of D to fulfill the input requirement of the ViT. To achieve this, the proposed token perturbation module adopts a split operation that separates each long token into S tokens of length D. The split tokens z_s are then fed to the ViT for self-supervised learning.
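A minimal sketch of the whole module (permutation, sliding-window fusion and split) is given below, assuming the number of tokens N is divisible by the stride S. The circular padding of the window and the concrete values of M and S are illustrative assumptions rather than settings reported in the paper.

    import torch
    import torch.nn as nn

    class TokenPerturbation(nn.Module):
        # Sketch of the token perturbation module (Fig. 2): permutation -> fusion -> split.
        def __init__(self, dim: int, window: int = 3, stride: int = 3):
            super().__init__()
            self.M, self.S = window, stride
            # E_fuse: maps a concatenated long token (M*D) to a fused long token (S*D), Eq. (3).
            self.fuse = nn.Linear(window * dim, stride * dim)

        def forward(self, z_o: torch.Tensor) -> torch.Tensor:
            B, N, D = z_o.shape                              # z_o: embedded tokens
            perm = torch.randperm(N, device=z_o.device)
            z_p = z_o[:, perm, :]                            # permute the token order
            # Circular sliding window over the permuted tokens: K = N / S windows of M tokens.
            z_ext = torch.cat([z_p, z_p[:, : self.M - 1, :]], dim=1)
            windows = z_ext.unfold(1, self.M, self.S)        # (B, K, D, M)
            long_tok = windows.permute(0, 1, 3, 2).reshape(B, -1, self.M * D)
            z_l = self.fuse(long_tok)                        # content-perturbed long tokens (B, K, S*D)
            return z_l.reshape(B, N, D)                      # split back into N tokens of length D

In BOLT, the two perturbed views z_t and z′_t of the same z_o would be produced by applying such a module twice with independent random permutations.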
2.2 Loss Function

As shown in Fig. 1, our BOLT is jointly supervised by two loss functions, i.e., the similarity loss and the difficulty-awareness loss. The similarity loss is consistent with the existing BYOL framework. Concretely, for a set of embedded tokens z_o, our BOLT produces two perturbed views z_t and z′_t for the online and target branches, respectively. The perturbed tokens z_t are fed to the ViT f_θ, which yields a representation y_θ = f_θ(z_t) and a projection z_θ = g_θ(y_θ). For the perturbed tokens of the target branch, a representation y_ξ = f_ξ(z′_t) and a projection z_ξ = g_ξ(y_ξ) are accordingly generated. Consistent with BYOL, a prediction network q_θ(·) is adopted to yield a prediction of z_ξ, and the l2 norm is calculated for network training:

    L_θ = ||q_θ(z_θ) − z_ξ||₂²,    (4)

where θ denotes the network weights of the online branch, including f_θ, g_θ and q_θ. The loss L_θ^BOLT = L_θ + L̃_θ only optimizes the weights of the online branch θ, where L̃_θ is the symmetric loss of L_θ obtained by feeding z′_t and z_t to the online and target branches, respectively.
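A short sketch of the symmetric similarity loss in Eq. (4) follows; the stop-gradient on the target projection mirrors the sg(·) in Fig. 1, and the tensor names are placeholders rather than identifiers from the paper.

    import torch

    def similarity_loss(p_online: torch.Tensor, z_target: torch.Tensor) -> torch.Tensor:
        # Eq. (4): squared l2 distance between the online prediction q_theta(z_theta)
        # and the stop-gradient target projection z_xi.
        return ((p_online - z_target.detach()) ** 2).sum(dim=-1).mean()

    # Symmetric objective L_BOLT = L + L~, obtained by swapping which view goes to which branch:
    # p_1 = q(g(f_online(z_t))), z_2 = g(f_target(z_t_prime)); p_2, z_1 are the swapped counterparts.
    # loss_bolt = similarity_loss(p_1, z_2) + similarity_loss(p_2, z_1)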
Difficulty-awareness Loss. Apart from the similarity loss, inspired by curriculum learning [17], we propose an auxiliary task: identifying which branch is processing the tokens with the larger level of perturbation. Such an auxiliary task can drive ViTs to self-adaptively pay more attention to the hard cases and accordingly better exploit the semantic information of the embedded tokens, since they are required to understand the content of the tokens for accurate difficulty ranking. To formulate the auxiliary task, the self-supervised signal needs to be generated first. Denoting the perturbed tokens fed to the online and target branches as z_t and z′_t, respectively, the self-supervised signal y_self can be defined as:

    y_self = 0, if MSE(Perm⁻¹_{z_t}(z_t) − z_o) < MSE(Perm⁻¹_{z′_t}(z′_t) − z_o);
    y_self = 1, if MSE(Perm⁻¹_{z_t}(z_t) − z_o) ⩾ MSE(Perm⁻¹_{z′_t}(z′_t) − z_o),    (5)

where MSE(·) is the mean squared error function and Perm⁻¹(·) is the inverse permutation operation that rearranges the perturbed tokens back to the original order. After the self-supervision signal is obtained, the features extracted by the online and target ViTs (i.e., y_θ and y_ξ) are concatenated by Cat(·) and sent to a fully-connected layer FC(·) for difficulty classification. Specifically, the loss can be written as:

    L_{f_θ}^Diff = −y_self·log(p) − (1 − y_self)·log(1 − p),    (6)

where p = FC(Cat(y_θ, y_ξ)) is the predicted probability of y_self = 1. Similar to L_θ^BOLT, the difficulty-awareness loss only optimizes the online branch f_θ.

We notice that a recent study [29] has already proposed a difficulty-awareness loss for scleral spur localization. Hence, it is worthwhile to emphasize the difference between that loss and ours. Concretely, Tao et al. [29] explicitly enforced networks to predict the Dice score of input images using the segmentation ground truth to achieve difficulty-awareness. Due to the lack of manual annotations, few studies introduce the idea of difficulty-awareness into self-supervised learning (SSL). In this study, we obtain the difficulty-related information in a self-supervised manner using the token perturbation module, and implicitly formulate the difficulty-ranking proxy task. To our best knowledge, this is the first SSL framework based on the difficulty-awareness paradigm.

Overall Objective. Combining the aforementioned loss functions L^BOLT and L^Diff, the full objective L for the optimization of the online branch can be written as:

    L = L_θ^BOLT + α·L_{f_θ}^Diff,    (7)

where α = 0.1 is the loss weight of L_{f_θ}^Diff. According to Eq. (1), the weights of the target branch ξ are updated via the exponential moving average.
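To make Eqs. (5)-(7) concrete, a sketch is given below. It assumes the perturbation step records the permutation it applied (so that Perm⁻¹ can be taken), and it uses the logits form of the binary cross-entropy in Eq. (6) for numerical stability; the classifier head fc is a hypothetical nn.Linear(2·D, 1), not a component named in the paper.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def difficulty_signal(z_t, z_t_prime, z_o, inv_perm, inv_perm_prime):
        # Eq. (5): per-sample label, 1 if the online branch received the more heavily
        # perturbed tokens; inv_perm restores the original token order (Perm^-1).
        d_online = ((z_t[:, inv_perm, :] - z_o) ** 2).mean(dim=(1, 2))
        d_target = ((z_t_prime[:, inv_perm_prime, :] - z_o) ** 2).mean(dim=(1, 2))
        return (d_online >= d_target).float()

    def full_objective(loss_bolt, y_theta, y_xi, y_self, fc: nn.Linear, alpha: float = 0.1):
        # Eq. (6): difficulty classification from the concatenated branch features,
        # followed by Eq. (7): L = L_BOLT + alpha * L_Diff.
        logit = fc(torch.cat([y_theta, y_xi], dim=-1)).squeeze(-1)
        loss_diff = F.binary_cross_entropy_with_logits(logit, y_self)
        return loss_bolt + alpha * loss_diff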
3 Experiments

We evaluate the proposed BOLT on three target tasks, i.e., skin lesion classification, knee fatigue grading and diabetic retinopathy grading, using publicly available and private datasets. Conventional self-supervised learning approaches often pretrain the models on a large-scale unlabeled dataset (i.e., the proxy set), and then finetune them on a relatively smaller target set. In this paper, three different medical image processing tasks are involved for performance evaluation; the corresponding proxy and target datasets for each task (example images are shown in the Supplementary Material) are introduced in the following.

Skin Lesion Classification. The publicly available ISIC 2019 dataset (https://challenge2019.isic-archive.com/) is used to validate the effectiveness of the proposed BOLT. Specifically, the dataset [30] is provided by the ISIC 2019 challenge, which encourages researchers to develop automated systems predicting eight skin disease categories from dermoscopic images, i.e., squamous cell carcinoma, melanocytic nevus, benign keratosis, actinic keratosis, dermatofibroma, basal cell carcinoma, vascular lesion, and melanoma. The whole ISIC 2019 dataset, consisting of over 20,000 dermoscopic images, is adopted as the proxy set. Due to the class imbalance of the original ISIC dataset, consistent with [21], 628 images are randomly sampled from each class to establish a balanced target set; the two classes containing fewer than 628 images are taken into the target set in their entirety. After that, the balanced target set with 4,260 images is randomly separated into training, validation and test sets with a ratio of 70:10:20.
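For illustration, a class-balanced target set and the 70:10:20 split could be constructed as below. This is only a sketch of the described protocol: the dataframe layout (`image`, `label` columns) and the random seed are assumptions, not details taken from the paper.

```python
import pandas as pd

def build_balanced_target_set(df, per_class=628, seed=0):
    # Sample up to `per_class` images per label; classes with fewer images
    # (two classes in ISIC 2019) are kept in their entirety.
    parts = [g.sample(n=min(per_class, len(g)), random_state=seed)
             for _, g in df.groupby("label")]
    return pd.concat(parts).sample(frac=1.0, random_state=seed).reset_index(drop=True)

def split_70_10_20(df):
    # Random 70:10:20 split of the (already shuffled) balanced target set.
    n = len(df)
    return df.iloc[:int(0.7 * n)], df.iloc[int(0.7 * n):int(0.8 * n)], df.iloc[int(0.8 * n):]

# Hypothetical usage: `df` holds one row per dermoscopic image of the proxy set.
# train_df, val_df, test_df = split_70_10_20(build_balanced_target_set(df))
```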
Note that the ViT is first pretrained on the proxy set, finetuned on the training and validation sets, and then evaluated on the test set.

Knee Fatigue Grading. The publicly available MURA dataset (musculoskeletal radiographs, https://stanfordmlgroup.github.io/competitions/mura/) [28], a large collection of bone X-rays (over 40,000 images), is adopted as the proxy set to pretrain ViTs for the subsequent target task (i.e., knee fatigue grading). For the knee fatigue grading task, 2,725 X-ray images are collected from a collaborating hospital as the target set [20]. The positions of the fatigue fractures vary, i.e., navicular bone, tibia and fibula. Each X-ray image is labeled by three physicians, and the final grade is decided via majority voting. In particular, the target set contains 1,785 normal, 190 grade-1, 452 grade-2, 196 grade-3 and 102 grade-4 cases. For the evaluation on our private knee fatigue grading dataset, the target set is divided into training, validation and test sets with a ratio of 70:10:20. Similar to [20], due to the class imbalance (normal vs. fatigue fracture, and grade-2 vs. the other fracture grades), an equal number (20) of test images per category is randomly sampled to form a uniform-distribution set for performance evaluation, instead of using the whole test set.
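A minimal sketch of building this uniform-distribution evaluation set is shown below; the column name `grade`, the seed and the dataframe layout are hypothetical.

```python
def uniform_eval_subset(test_df, per_class=20, seed=0):
    # Draw an equal number of test images per fracture grade so that accuracy
    # is not dominated by the majority (normal) class.
    return (test_df.groupby("grade", group_keys=False)
                   .apply(lambda g: g.sample(n=per_class, random_state=seed)))
```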
Diabetic Retinopathy Grading. For the diabetic retinopathy grading task, we pretrain the ViT on a large-scale private dataset captured from a collaborating hospital (proxy set), with approval obtained from the institutional review board of the hospital. The dataset consists of 350,000 fundus images from a normal cohort and from patients with various diseases. The pretrained ViT is then finetuned on the publicly available APTOS 2019 blindness detection dataset (target set, https://www.kaggle.com/c/aptos2019-blindness-detection) for performance evaluation. In particular, the target set contains 3,662 fundus images, and the severity of diabetic retinopathy (DR) is graded into normal (1,805) and four DR grades: mild (370), moderate (999), severe (193) and proliferative (295). Consistent with [22], a five-fold cross-validation is conducted on this dataset. Since the ViT pretrained on our private large-scale dataset may benefit related downstream target tasks, we will release the pretrained ViT models to the community soon to advance the development of automated fundus image processing.
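The exact fold construction is not described in the paper, so the snippet below only sketches a standard stratified five-fold protocol that respects the skewed DR grade distribution; the function name and the seed are assumptions.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

def five_fold_splits(labels, seed=0):
    # Stratified folds keep the grade distribution similar across folds,
    # which matters given the strong class imbalance of APTOS 2019.
    labels = np.asarray(labels)
    skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=seed)
    return list(skf.split(np.zeros(len(labels)), labels))

# Hypothetical usage: finetune the BOLT-pretrained ViT on each training fold
# and report the cross-validated ACC/F1.
# for fold, (train_idx, val_idx) in enumerate(five_fold_splits(dr_grades)):
#     ...
```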
Baselines & Evaluation Criterion. To demonstrate the effectiveness of our BOLT pretraining, we finetune ViTs with ImageNet pretrained weights on the target tasks and evaluate their performance on the test sets. Consistent with MoCo V3 [7], the basic ViT-B/16 is adopted as the backbone. The original BYOL [12], the state-of-the-art self-supervised learning approach SimSiam [5] and the token-based self-supervised learning approach MoCo V3 [7] are assessed for comparison. It is worth mentioning that the backbones of the representation networks of BYOL and SimSiam implemented in this study are also ViT-B/16. The average classification accuracy (ACC) is adopted as the metric for performance evaluation.

Table 1: Classification accuracy (ACC, %) of ViTs using different training strategies with different amounts of training data on the ISIC 2019 test set.

Method                           100%   50%    10%
Train-from-scratch               39.4   35.2   31.3
ImageNet Pretrained              80.5   76.1   62.1
SimSiam [5]                      79.9   75.9   61.2
BYOL [12]                        80.1   75.4   61.3
MoCo V3 [7]                      80.3   75.2   61.2
BOLT w/o L_Diff                  80.8   75.8   62.1
BOLT (ours)                      81.5   76.6   62.4
ImageNet Pretrained ResNet-50    75.7   72.5   61.2

3.1 Performance Evaluation

In this section, we evaluate the effectiveness of the different training strategies on the three datasets and present the experimental results. The widely used ImageNet pretrained ResNet-50 is also adopted as a baseline for comparison. Some detailed discussions are presented in the Supplementary Material.

Skin Lesion Classification. First, the different training strategies are evaluated on the publicly available ISIC 2019 dataset. The evaluation results of models finetuned with all training data (100%) are listed in Table 1. The ImageNet pretrained ViT surpasses the ImageNet pretrained ResNet-50 by a large margin
(i.e., +4.8%), which demonstrates the superiority of the ViT for medical image classification. Compared to the state-of-the-art self-supervised learning approaches (i.e., SimSiam, BYOL and MoCo V3), our token-based BOLT achieves a higher ACC (80.8%) even without the difficulty-awareness loss. With the difficulty-awareness loss ($\mathcal{L}^{Diff}$), the ACC of BOLT is further improved to 81.5%, outperforming the runner-up (MoCo V3) by a margin of +1.2%.

The primary goal of self-supervised learning is to deal with insufficient training data. Hence, to better verify the superiority of our BOLT approach, we assess the performance of BOLT-pretrained ViTs with different numbers of labeled samples used for finetuning (i.e., the 50% and 10% columns in Table 1). It can be observed that BOLT effectively tackles the situation with few labeled training samples: the proposed BOLT with the difficulty-awareness loss achieves the best ACC under both the 50% and 10% settings.

Knee Fatigue Grading. Consistent with the previous study [20], apart from classification accuracy, the F1 score is also adopted for performance evaluation.
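For reference, the two reported metrics could be computed as below; macro averaging over the grades is an assumption, since the paper does not state the averaging mode of the F1 score.

```python
from sklearn.metrics import accuracy_score, f1_score

def grading_metrics(y_true, y_pred):
    # ACC and F1 in percent, matching the format of Tables 1 and 2.
    acc = 100.0 * accuracy_score(y_true, y_pred)
    f1 = 100.0 * f1_score(y_true, y_pred, average="macro")
    return acc, f1
```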
The experimental results on the uniform test set are listed in Table 2.

Table 2: Accuracy (ACC and F1 score, %) of different training strategies on the knee fatigue grading and diabetic retinopathy grading tasks.

                                 Knee Fatigue Grading    Diabetic Retinopathy Grading
Method                           ACC     F1              ACC     F1
Train-from-scratch               30.0    23.1            71.0    65.3
ImageNet Pretrained              51.0    49.4            83.6    83.2
SimSiam [5]                      52.0    51.1            84.5    84.3
BYOL [12]                        51.0    50.2            84.8    84.7
MoCo V3 [7]                      52.0    51.2            84.7    84.3
BOLT w/o L_Diff                  52.0    51.2            85.4    85.3
BOLT (ours)                      54.0    53.6            85.9    85.8
ImageNet Pretrained ResNet-50    36.0    31.7            81.7    82.0

As shown, the ViT pretrained with the proposed BOLT outperforms those using existing self-supervised learning approaches and the ImageNet pretrained weights, i.e., an ACC of 54.0% is achieved (+2.0% higher than the runner-up). A similar trend to ISIC 2019 is observed: the ACC of the ImageNet pretrained ViT (51.0%) is significantly higher than that of the ImageNet pretrained ResNet-50 (36.0%), demonstrating the effectiveness of the ViT backbone. We also notice that the improvement over train-from-scratch yielded by pretraining is more pronounced on our knee fatigue grading dataset (over +20%) than on the skin lesion classification task.
The reason may be that the target set for knee fatigue grading contains fewer training samples (around 1,000 X-ray images); thus, it is more difficult to train the model well from scratch, compared to the skin lesion classification task with a target set of 4,260 images.

Diabetic Retinopathy Grading. Consistent with [22], we split the APTOS 2019 dataset into five folds for cross-validation and adopt the F1 score for performance evaluation. The grading accuracy of models using different training strategies is shown in Table 2. The proposed BOLT-pretrained ViT achieves the best ACC (85.9%) and F1 score (85.8%) among the listed approaches, which are +1.1% and +1.1% higher than those of the original BYOL, respectively.

4 Conclusion

In this paper, a self-supervised learning approach, termed Bootstrap Own Latent of Transformer (BOLT), was proposed specifically for medical image classification with the vision Transformer backbone. The proposed BOLT involved online and target branches, which extracted the self-supervised representation from raw data via contrastive learning. Concretely, the online network was trained to predict the target network representation of the same patch embedding tokens with a different perturbation. Furthermore, we proposed an auxiliary difficulty ranking task to enable the vision Transformer to exploit diverse information from the limited medical data. The difference between the original patch embedding tokens and the perturbed ones was calculated as the difficulty measurement
(i.e., a larger difference means the tokens are more difficult for the vision Transformer to process), which was then adopted as the supervision signal for self-supervised learning. The vision Transformer was trained to identify the branch (online/target) processing the more difficult perturbed tokens, which enabled it to distill transformation-invariant features from the perturbed tokens. The experimental results on three medical image classification tasks (i.e., skin lesion classification, knee fatigue fracture grading and diabetic retinopathy grading) demonstrated the effectiveness of the proposed BOLT.

We notice several limitations of this study and plan to address them in future work:

Extension to Medical Image Segmentation Task. The proposed BOLT can be easily extended to medical image segmentation in a similar way to [40], i.e., pretraining the encoder and using a random initialization for the decoder (see the sketch at the end of this section). Yet, the randomly initialized decoder may neutralize the performance improvement. Therefore, we plan to explore a more effective way of extending our pretrained ViTs to the medical image segmentation task in the future.

Pretrained Weights for ViT Variants. Recently, many powerful ViT-based backbones, such as the Swin Transformer [23], have been proposed. The weights of these ViT variants pretrained on our large-scale fundus image dataset will be continuously provided in the future.
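As a rough illustration of the encoder-only transfer mentioned above, the snippet below copies BOLT-pretrained ViT weights into the encoder of a segmentation model while leaving the decoder randomly initialized. The checkpoint layout (an `encoder.` key prefix) and the `seg_model.encoder` attribute are assumptions; this is not the authors' released code.

```python
import torch

def load_bolt_encoder(seg_model, ckpt_path):
    # Keep only the encoder weights from the self-supervised checkpoint
    # (keys assumed to be prefixed with "encoder.") and load them non-strictly,
    # so the randomly initialized decoder is left untouched.
    state = torch.load(ckpt_path, map_location="cpu")
    encoder_state = {k[len("encoder."):]: v
                     for k, v in state.items() if k.startswith("encoder.")}
    missing, unexpected = seg_model.encoder.load_state_dict(encoder_state, strict=False)
    print(f"missing keys: {len(missing)}, unexpected keys: {len(unexpected)}")
    return seg_model
```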
References

[1] Sara Atito, Muhammad Awais, and Josef Kittler. SiT: Self-supervised vision Transformer. arXiv preprint arXiv:2104.03602, 2021.

[2] H. Bao, L. Dong, and F. Wei. BEiT: BERT pre-training of image Transformers. arXiv preprint arXiv:2106.08254, 2021.

[3] M. Caron, H. Touvron, I. Misra, H. Jegou, J. Mairal, P. Bojanowski, and A. Joulin. Emerging properties in self-supervised vision Transformers. arXiv preprint arXiv:2104.14294, 2021.

[4] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In International Conference on Machine Learning, 2020.

[5] Xinlei Chen and Kaiming He. Exploring simple Siamese representation learning. In IEEE Conference on Computer Vision and Pattern Recognition, 2021.

[6] Xinlei Chen, Haoqi Fan, Ross Girshick, and Kaiming He. Improved baselines with momentum contrastive learning. arXiv preprint arXiv:2003.04297, 2020.

[7] Xinlei Chen, Saining Xie, and Kaiming He. An empirical study of training self-supervised vision Transformers. arXiv preprint arXiv:2104.02057, 2021.

[8] Xiangxiang Chu, Zhi Tian, Bo Zhang, Xinlong Wang, Xiaolin Wei, Huaxia Xia, and Chunhua Shen. Conditional positional encodings for vision Transformers. arXiv preprint arXiv:2102.10882, 2021.

[9] Zhigang Dai, Bolun Cai, Yugeng Lin, and Junying Chen. UP-DETR: Unsupervised pre-training for object detection with Transformers. In IEEE Conference on Computer Vision and Pattern Recognition, 2021.

[10] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representations, 2021.

[11] Yunhe Gao, Mu Zhou, and Dimitris Metaxas. UTNet: A hybrid Transformer architecture for medical image segmentation. arXiv preprint arXiv:2107.00781, 2021.

[12] Jean-Bastien Grill, Florian Strub, Florent Altche, Corentin Tallec, Pierre H. Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Daniel Guo, Mohammad Gheshlaghi Azar, Bilal Piot, Koray Kavukcuoglu, Remi Munos, and Michal Valko. Bootstrap your own latent: A new approach to self-supervised learning. In Advances in Neural Information Processing Systems, 2020.

[13] R. Hadsell, S. Chopra, and Y. LeCun. Dimensionality reduction by learning an invariant mapping. In IEEE Conference on Computer Vision and Pattern Recognition, 2006.

[14] Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In IEEE Conference on Computer Vision and Pattern Recognition, 2020.

[15] Ge-Peng Ji, Yu-Cheng Chou, Deng-Ping Fan, Geng Chen, Debesh Jha, Huazhu Fu, and Ling Shao. Progressively normalized self-attention network for video polyp segmentation. arXiv preprint arXiv:2105.08468, 2021.

[16] Yuanfeng Ji, Ruimao Zhang, Huijie Wang, Zhen Li, Lingyun Wu, Shaoting Zhang, and Ping Luo. Multi-compound Transformer for accurate biomedical image segmentation. arXiv preprint arXiv:2106.14385, 2021.

[17] Hoel Kervadec, Jose Dolz, Éric Granger, and Ismail Ben Ayed. Curriculum semi-supervised segmentation. In International Conference on Medical Image Computing and Computer Assisted Intervention, 2019.

[18] Jack Lanchantin, Tianlu Wang, Vicente Ordonez, and Yanjun Qi. General multi-label image classification with Transformers. In IEEE Conference on Computer Vision and Pattern Recognition, 2021.

[19] G. Larsson, M. Maire, and G. Shakhnarovich. Colorization as a proxy task for visual understanding. In IEEE Conference on Computer Vision and Pattern Recognition, 2017.

[20] Yuexiang Li, Yanping Wang, Guang Lin, Yi Lin, Dong Wei, Qirui Zhang, Kai Ma, Zhiqiang Zhang, and Yefeng Zheng. Triplet-branch network with prior-knowledge embedding for fatigue fracture grading. In International Conference on Medical Image Computing and Computer Assisted Intervention, 2021.

[21] Zhuoyun Li, Changhong Zhong, Ruixuan Wang, and Wei-Shi Zheng. Continual learning of new diseases with dual distillation and ensemble strategy. In International Conference on Medical Image Computing and Computer Assisted Intervention, 2020.

[22] Shaoteng Liu, Lijun Gong, Kai Ma, and Yefeng Zheng. GREEN: A graph residual re-ranking network for grading diabetic retinopathy. In International Conference on Medical Image Computing and Computer Assisted Intervention, 2020.

[23] Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin Transformer: Hierarchical vision Transformer using shifted windows. arXiv preprint arXiv:2103.14030, 2021.

[24] M. Noroozi and P. Favaro. Unsupervised learning of visual representations by solving Jigsaw puzzles. In European Conference on Computer Vision, 2016.

[25] M. Noroozi, A. Vinjimoor, P. Favaro, and H. Pirsiavash. Boosting self-supervised learning via knowledge transfer. In IEEE Conference on Computer Vision and Pattern Recognition, 2018.

[26] Tian Pan, Yibing Song, Tianyu Yang, Wenhao Jiang, and Wei Liu. VideoMoCo: Contrastive video representation learning with temporally adversarial examples. In IEEE Conference on Computer Vision and Pattern Recognition, 2021.

[27] D. Pathak, P. Krähenbühl, J. Donahue, T. Darrell, and A. A. Efros. Context encoders: Feature learning by inpainting. In IEEE Conference on Computer Vision and Pattern Recognition, 2016.

[28] Pranav Rajpurkar, Jeremy Irvin, Aarti Bagul, Daisy Ding, Tony Duan, Hershel Mehta, Brandon Yang, Kaylie Zhu, Dillon Laird, Robyn L. Ball, Curtis Langlotz, Katie Shpanskaya, Matthew P. Lungren, and Andrew Y. Ng. MURA: Large dataset for abnormality detection in musculoskeletal radiographs. In International Conference on Medical Imaging with Deep Learning, 2018.

[29] Xing Tao, Chenglang Yuan, Cheng Bian, Yuexiang Li, Kai Ma, Dong Ni, and Yefeng Zheng. The winner of AGE challenge: Going one step further from keypoint detection to scleral spur localization. In IEEE International Symposium on Biomedical Imaging, 2021.

[30] Philipp Tschandl, Cliff Rosendahl, and Harald Kittler. The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions. Scientific Data, 5(1):1-9, 2018.

[31] Jeya Maria Jose Valanarasu, Poojan Oza, Ilker Hacihaliloglu, and Vishal M. Patel. Medical Transformer: Gated axial-attention for medical image segmentation. arXiv preprint arXiv:2102.10662, 2021.

[32] Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, and Ling Shao. Pyramid vision Transformer: A versatile backbone for dense prediction without convolutions. arXiv preprint arXiv:2102.12122, 2021.

[33] Xinlong Wang, Rufeng Zhang, Chunhua Shen, Tao Kong, and Lei Li. Dense contrastive learning for self-supervised visual pre-training. In IEEE Conference on Computer Vision and Pattern Recognition, 2021.

[34] Yuqing Wang, Zhaoliang Xu, Xinlong Wang, Chunhua Shen, Baoshan Cheng, Hao Shen, and Huaxia Xia. End-to-end video instance segmentation with Transformers. In IEEE Conference on Computer Vision and Pattern Recognition, 2021.

[35] Zhenda Xie, Yutong Lin, Zhuliang Yao, Zheng Zhang, Qi Dai, Yue Cao, and Han Hu. Self-supervised learning with Swin Transformers. arXiv preprint arXiv:2105.04553, 2021.

[36] Li Yuan, Yunpeng Chen, Tao Wang, Weihao Yu, Yujun Shi, Zihang Jiang, Francis E.H. Tay, Jiashi Feng, and Shuicheng Yan. Tokens-to-Token ViT: Training vision Transformers from scratch on ImageNet. arXiv preprint arXiv:2101.11986, 2021.

[37] P. Zhang, F. Wang, and Y. Zheng. Self-supervised deep representation learning for fine-grained body part recognition. In International Symposium on Biomedical Imaging, 2017.

[38] Yinglin Zhang, Risa Higashita, Huazhu Fu, Yanwu Xu, Yang Zhang, Haofeng Liu, Jian Zhang, and Jiang Liu. A multi-branch hybrid Transformer network for corneal endothelial cell segmentation. arXiv preprint arXiv:2106.07557, 2021.

[39] Sixiao Zheng, Jiachen Lu, Hengshuang Zhao, Xiatian Zhu, Zekun Luo, Yabiao Wang, Yanwei Fu, Jianfeng Feng, Tao Xiang, Philip H.S. Torr, and Li Zhang. Rethinking semantic segmentation from a sequence-to-sequence perspective with Transformers.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQfEPpQ/content/2301.00989v1.pdf'} +page_content=' In IEEE Conference on Computer Vision and Pattern Recognition, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQfEPpQ/content/2301.00989v1.pdf'} +page_content=' [40] Jiuwen Zhu, Yuexiang Li, Yifan Hu, Kai Ma, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQfEPpQ/content/2301.00989v1.pdf'} +page_content=' Kevin Zhou, and Yefeng Zheng.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQfEPpQ/content/2301.00989v1.pdf'} +page_content=' Ru- bik’s cube+: A self-supervised feature learning framework for 3D medical image anal- ysis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQfEPpQ/content/2301.00989v1.pdf'} +page_content=' Medical Image Analysis, 64:101746, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQfEPpQ/content/2301.00989v1.pdf'} +page_content=' [41] Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, and Jifeng Dai.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQfEPpQ/content/2301.00989v1.pdf'} +page_content=' De- formable DETR: Deformable Transformers for end-to-end object detection.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQfEPpQ/content/2301.00989v1.pdf'} +page_content=' arXiv preprint arXiv:2010.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQfEPpQ/content/2301.00989v1.pdf'} +page_content='04159, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/4tAzT4oBgHgl3EQfEPpQ/content/2301.00989v1.pdf'}