diff --git "a/29AyT4oBgHgl3EQf1vl1/content/tmp_files/load_file.txt" "b/29AyT4oBgHgl3EQf1vl1/content/tmp_files/load_file.txt" new file mode 100644--- /dev/null +++ "b/29AyT4oBgHgl3EQf1vl1/content/tmp_files/load_file.txt" @@ -0,0 +1,1249 @@ +filepath=/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf,len=1248 +page_content='P3DC-Shot: Prior-Driven Discrete Data Calibration for Nearest-Neighbor Few-Shot Classification Shuangmei Wanga,∗, Rui Maa,b,∗, Tieru Wua,b,∗∗, Yang Caoa,∗∗ aJilin University, No.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' 2699 Qianjin Street, Changchun, 130012, China bEngineering Research Center of Knowledge-Driven Human-Machine Intelligence, MOE, No.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' 2699 Qianjin Street, Changchun, 130012, China Abstract Nearest-Neighbor (NN) classification has been proven as a simple and effective approach for few-shot learning.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' The query data can be classified efficiently by finding the nearest support class based on features extracted by pretrained deep models.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' However, NN-based methods are sensitive to the data distribution and may produce false prediction if the samples in the support set happen to lie around the distribution boundary of different classes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' To solve this issue, we present P3DC-Shot, an improved nearest-neighbor based few-shot classification method empowered by prior- driven data calibration.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Inspired by the distribution calibration technique which utilizes the distribution or statistics of the base classes to calibrate the data for few-shot tasks, we propose a novel discrete data calibration operation which is more suitable for NN-based few-shot classification.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Specifically, we treat the prototypes representing each base class as priors and calibrate each support data based on its similarity to different base prototypes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Then, we perform NN classification using these discretely calibrated support data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Results from extensive experiments on various datasets show our efficient non-learning based method can outperform or at least comparable to SOTA methods which need additional learning steps.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Keywords: Few-Shot Learning, Image Classification, Prototype, Calibration 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Introduction Deep learning has triggered significant breakthroughs in many computer vision tasks, such as image classifi- cation [1, 2, 3], object detection [4, 5, 6], and seman- tic segmentation [7, 8, 9] etc.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' One key factor for the success of deep learning is the emergence of large-scale datasets, e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=', ImageNet [2], MSCOCO [10], Cityscapes [11], just to name a few.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' However, it is difficult and expensive to collect and annotate sufficient data sam- ples to train a deep model with numerous weights.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' The data limitation has become a main bottleneck for more broader application of deep leaning, especially for the tasks involving rarely seen samples.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' On the other hand, human can learn to recognize novel visual concepts from only a few samples.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' There is still a notable gap This work is supported in part by the National Key Research and Development Program of China (Grant No.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' 2020YFA0714103) and the National Natural Science Foundation of China (Grant No.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' 61872162 and 62202199).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' ∗Co-first authors.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' ∗∗Corresponding authors.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' between human intelligence and the deep learning based artificial intelligence.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Few-shot learning (FSL) aims to learn neural models for novel classes with only a few samples.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Due to its ability for generalization, FSL has attracted extensive interests in recent years [12, 13, 14].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Few-shot classification is the most widely studied FSL task which attempts to recognize new classes or classify data in an unseen query set.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Usually, few-shot classification is formulated in a meta-learning frame- work [15, 16, 17, 18, 19, 20, 21, 22, 23].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' In the meta-training stage, the N-way K-shot episodic training paradigm is often employed to learn generalizable clas- sifiers or feature extractors for data of the base classes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Then, in the meta-testing stage, the meta-learned clas- sifiers can quickly adapt to a few annotated but unseen data in a support set and attain the ability to classify the novel query data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Although meta-learning has shown the effectiveness for few-shot classification, it is unclear how to set the optimal class number (N) and per-class sample number (K) when learning the classifiers.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Also, the learned classifier may not perform well when the sample number K used in meta-testing does not match Preprint submitted to Elsevier January 3, 2023 arXiv:2301.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='00740v1 [cs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='CV] 2 Jan 2023 the one used in the meta-training [24].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' On the other hand, nearest-neighbor (NN) based clas- sification has been proven as a simple and effective ap- proach for FSL.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Based on features obtained from the meta-learned feature extractor [15, 16] or the pretrained deep image models [25], the query data can be effi- ciently classified by finding the nearest support class.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Specifically, the prediction is determined by measuring the similarity or distance between the query feature and the prototypes (i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=', average or centroid) of the support features.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' From the geometric view, NN-based classi- fication can be solved using a Voronoi Diagram (VD) which is a partition of the space formed by the support features [26, 27].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Given a query feature, its class can be predicted by computing the closest Voronoi cell that corresponds to a certain support class.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' With proper VD construction and feature distance metrics, the state-of- the-art performance can be achieved for few-shot clas- sification [28].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' However, due to the limited number of support samples, NN-based few-shot classification is sensitive to the distribution of the sampled data and may produce false prediction if the samples in the support set happen to lie around the distribution boundary of differ- ent classes (see Figure 1 left).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' To solve above issues, various efforts have been paid to more effectively utilize the knowledge or priors from the base classes for few-shot classification.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' One natural way is to learn pretrained classifiers or image encoders with the abundant labeled samples of base classes and then adapt them the novel classes via transfer learning [29, 30, 31, 23].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Meanwhile, it has been shown that variations in selecting the base classes can lead to dif- ferent performance on the novel classes [32, 33, 34] and how to select the base classes for better feature repre- sentation learning still needs more investigation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' On the other hand, a series of works [35, 36, 37, 38] per- form data calibration to the novel classes so that the re- sults are less affected by the limited number of support samples.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' One representative is Distribution Calibration (DC) [38] which assumes the features of the data fol- low the Gaussian distribution and transfers the statis- tics from the similar base classes to the novel classes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Then, DC trains a simple logistic regression classifier to classify the query features using features sampled from the calibrated distributions of the novel classes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Although DC has achieved superior performance than previous meta-learning [19, 21, 22] or transfer-learning [29, 30, 31, 23] based methods, it relies on the strong as- sumption for Gaussian-like data distribution and it can- not be directly used for NN-based few-shot classifica- tion.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' In this paper, we propose P3DC-Shot, an improved Support sample Query sample Calibrated support sample Figure 1: When samples in the support set lie around the distribution boundary of different classes, the NN classifier may produce false pre- diction.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' By performing discrete calibration for each support sample using priors from the base classes, the calibrated support data is trans- formed closer to the actual class centroid and can lead to less-biased NN classification.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' The colored regions represent the underlying data distribution of different classes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' The gray lines are the predicted deci- sion boundaries by the NN classifier.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' NN-based few-shot classification method that employs prior information from base classes to discretely cali- brate or adjust the support samples so that the calibrated data is more representative for the underlying data dis- tribution (Figure 1 right).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Our main insight is even the novel classes have not been seen before, they still share similar features to some base classes, and the prior in- formation from the base classes can serve as the context data for the novel classes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' When only a few support samples are available for the novel classes, performing prior-driven calibration can alleviate the possible bias introduced by the few-shot support samples.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' With the calibrated support samples, the query data can be more accurately classified by a NN-based classifier.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Specifically, for the prior information, we compute the prototype, i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=', the average of features, for each base class.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Then, we propose three different schemes for se- lecting the similar prototypes to calibrate the support data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Firstly, we propose the sample-level calibration which selects the top M most similar base prototypes for each support sample and then apply weighted averaging between each support sample and selected prototypes to obtain the calibrated support sample.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Secondly, to uti- lize more context from the base classes, we propose the task-level calibration which combines the most similar base prototypes for each support sample into a union and performs the calibration for the support samples us- ing each prototype in the union.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' In addition, we pro- pose a unified calibration scheme that combines the two above schemes so that the calibration can exploit dif- ferent levels of prior information from the base classes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' To utilize the calibrated support samples for the NN- based classification, we further obtain the prototypes of 2 the support class using an attention-weighted averaging, while the attention weights are computed between the query sample and each calibrated support sample.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Fi- nally, the classification of a query sample is simply de- termined by finding its nearest support prototype mea- sured by the cosine similarity.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Comparing to DC, our P3DC-Shot adopts the simi- lar idea of transferring the information or statistics from the base classes to the novel classes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' The key differ- ence is our data calibration is performed on each indi- vidual support sample rather than the distribution pa- rameters and we employ the NN-based classification in- stead of the learned classifier as in DC.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Comparing to other NN-based few-shot classification methods such as SimpleShot [25], since our support data is calibrated, the NN classification is less affected by the sampling bias for the support data, e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='g, the calibrated data is more likely to be close to the center of the corresponding novel class.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' We conduct extensive comparisons with re- cent state-of-the-art few-shot classificaiton methods on miniImageNet [2], tiredImageNet [39] and CUB [40] and the results demonstrate the superiority and general- izability of our P3DC-Shot.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Ablation studies on differ- ent calibration schemes, i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=', different weights between the sample-level and task-level calibration also show the necessity of combining two schemes for better results.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' In summary, our contributions are as follows: 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' We propose P3DC-Shot, a prior-driven dis- crete data calibration strategy for nearest-neighbor based few-shot classification to enhance the model’s robustness to the distribution of the sup- port samples.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Without additional training and expensive compu- tation, the proposed method can efficiently cali- brate each support sample using information from the prototypes of the similar base classes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' We conduct extensive evaluations on three discrete calibration schemes on various datasets and the re- sults show our efficient non-learning based method can outperform or at least comparable to SOTA few-shot classification methods.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Related Work In this section, we first review the representative meta-learning and transfer learning based few-shot clas- sification techniques.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Then, we summarize the nearest- neighbor and data calibration based approaches which are most relevant to our P3DC-Shot.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Meta-learning based few-shot classification.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Meta- learning [41] has been widely adopted for few-shot clas- sification.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' The core idea is to leverage the episodic training paradigm to learn generalizable classifiers or feature extractors using the data from the base classes in an optimization-based framework [18, 19, 20, 21, 22], as well as learn a distance function to measure the similarity between the support and query samples through metric-learning [42, 15, 17, 43, 44, 37].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' For example, MAML [19] is one of the most representa- tive optimization-based meta-learning method for few- shot classification and its goal is to learn good net- work initialization parameters so that the model can quickly adapt to new tasks with only a small amount of new training data from the novel classes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' For metric- learning based methods such as the Matching Networks [15], Prototypical Networks [16] and Relation Net- works [17], the network is trained to either learn an embedding function with a given distance function or learn both the embedding and the distance function in a meta-learning architecture.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Unlike the optimization and metric-learning based methods which require so- phisticated meta-learning steps, our method can directly utilize the features extracted by the pretrained models and perform the prior-driven calibration to obtain less- biased support features for classification.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Transfer learning based few-shot classification.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Transfer learning [45, 46, 47] is a classic machine learn- ing or deep learning technique that aims to improve the the learning of a new task through the transfer of knowledge from one or more related tasks that have al- ready been learned.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Pretraining a deep network on the base dataset and transferring knowledge to the novel classes via fine-tuning [31, 48, 30] has been shown as the strong baseline for the few-shot classification.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' To learn better feature representations which can lead to improved few-shot fine-tuning performance, Mangla et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' [29] propose S2M2, the Self-Supervised Manifold Mixup, to apply regularization over the feature mani- fold enriched via the self-supervised tasks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' In addition to training new linear classifiers based on the pretrained weights learned from the base classes, Meta-Baseline [23] performs meta-learning to further optimize the pre- trained weights for few-shot classification.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' On the other hand, it has been shown the results of the transfer learn- ing based methods depend on different selections of the base classes for pretraining [32, 33], while how to se- lect the base classes to achieve better performance is still challenging [34].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' In comparison, our P3DC-shot does not need the additional cost for feature represen- tation learning and can more effectively utilize the base classes in a NN-based classification framework.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Nearest neighbor based few-shot classification.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' NN-based classification has also been investigated for few-shot classification.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' The main idea is to compute the 3 prototypes of the support samples, i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=', the mean or cen- troid of the support features, and classify the query sam- ple using metrics such as L2 distance, cosine similarity or a learned distance function.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' In SimpleShot [25], it shows nearest neighbor classification with features sim- ply normalized by L2 norm and measured by Euclidean distance can achieve competitive few-shot classification results.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Instead of performing nearest neighbor classifi- cation on the image-level features, Li et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' [49] intro- duces a Deep Nearest Neighbor Neural Network which performs nearest neighbor search over the deep local descriptors and defines an image-to-class measure for few-shot classification.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' From a geometric view, Ma et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' [50] utilize the Cluster-induced Voronoi Diagram (CIVD) to incorporate cluster-to-point and cluster-to- cluster relationships to the nearest neighbor based clas- sification.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Similar to above methods, our method is based on the nearest prototype classification, while we perform the prior-driven data calibration to obtain less-biased support data for the prototype computation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Meanwhile, computing the attentive or reweighted pro- totypes [51, 52, 53] that are guided by the base classes or query samples has also been investigated recently.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' We follow the similar idea and compute the attention- weighted prototypes for NN-based classification.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Data calibration for few-shot classification.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Due to the limited number of samples, the prototypes or cen- troids computed from the few-shot support data may be biased and cannot represent the underlying data distri- bution.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Simply performing NN-based classification on these biased prototypes will lead to inaccurate classi- fication.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Several methods have been proposed to cali- brate or rectify the data to obtain better samples or pro- totypes of the support class [35, 36, 37, 54, 38].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Using the images in the base classes, RestoreNet [35] learns a class agnostic transformation on the feature of each image to move it closer to the class center in the fea- ture space.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' To reduce the bias caused by the scarcity of the support data, Liu et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=', [36] employ the pseudo- labeling to add unlabelled samples with high prediction confidence into the support set for prototype rectifica- tion.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' In [37], Guo et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' propose a Pair-wise Similar- ity Module to generate calibrated class centers that are adapted to the query sample.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Instead of calibrating in- dividual support samples, Distribution Calibration (DC) [38] aims to calibrate the underlying distribution of the support classes by transferring the Gaussian statistics from the base classes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' With sufficient new support data sampled from the calibrated distribution, an additional classifier is trained in [38] to classify the query sam- ple.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' In contrast to these methods, we do not require additional training or assumption of the underlying dis- tribution.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Instead, we directly use the prototypes of the base classes to calibrate each support sample individ- ually and we adopt the NN-based classification which makes the whole pipeline discrete and efficient.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' One recent work that is similar to ours is Xu et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' [54] which proposes the Task Centroid Projection Removing (TCPR) module and transforms all support and query features in a given task to alleviate the sample selection bias problem.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Comparing to [54], we only calibrate the support samples using the priors from the base classes and keep the query samples unchanged.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Method To effectively utilize the prior knowledge from the base classes, we first propose two independent calibra- tion strategies, i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=', sample-level calibration and task- level calibration, which exploit different levels of infor- mation from the base classes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Then, we combine the sample-level and task-level calibration together to ob- tain the final calibrated support samples which will be used for the nearest neighbor classification.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Figure 2 shows an illustration of the P3DC-Shot pipeline.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Given a pretrained feature extractor F and a set of prototypes of base classes, we perform the prior- driven discrete calibration to the normalized features of the support data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Initially, the query sample in green is closer to the support sample in yellow.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' After the proposed calibration using the related base class proto- types, the query sample becomes closer to the calibrated support sample in blue.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' In the following, we provide technical details of the P3DC-Shot for few-shot classi- fication.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Problem Statement In this paper, we focus on the few-shot image clas- sification which aims to classify the new image sam- ples from the novel classes with just a few labeled im- age samples.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Normally, the new data sample is called a query sample and the labelled samples are called sup- port samples.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' With the aid of a set of base classes rep- resented by their prototypes Pb = {pb i }nb i=1, our goal is to calibrate the support samples from novel-class so that they can be better matched with the query samples by a nearest neighbor classifier.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Here, all data samples are represented by the features computed from a pretrained feature extractor F(·) : X → Rd, while X is the domain of the image space and d is the dimension of the feature space;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' pb i is the prototype of a base class,' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' which is com- ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='puted as the average feature of the samples within the ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='4 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='L2 norm ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='Calibration ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='������������ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='Feature ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='extraction ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='������������1 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='������������2 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='������������ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='Support data ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='Query data ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='Final calibrated support features ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='Endpoint of sample-level calibration ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='Endpoint of task-level calibration ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='All base class prototypes ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='or ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='Selected prototypes for a sample ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='+ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='Selected prototypes for a task ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='̅������������1 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='̅������������1 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='̅������������2 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='̅������������2 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='������������1 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='������������ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='������������2 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='������������ ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='̅������������1 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='������������� ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='̅������������2 ' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='Figure 2: An illustration of the P3DC-Shot pipeline for the 2-way 1-shot scenario.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Note that the direct interpolation of the three triangle vertices return a feature on the triangle plane.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' After normalization, the final calibrated features ¯xu 1 and ¯xu 2 are on the hypersphere of the normalized space.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' class;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' nb is the number of all base classes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' For simplic- ity, we directly use xi to represent the feature F(xi) of an image xi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' We follow the conventional few-shot learning setting, i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=', build a series of N-way K-shot tasks where N is the number of novel classes and K is the number of sup- port samples in each task.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Formally, each task con- sists of a support set S = {(xi, yi)}N×K i=1 and a query set Q = {qi}N×K+N×Q i=N×K+1 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Here, yi is the label of the corre- sponding sample, which is known for the support set and unknown for the query set;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Q is the number of query sample for each novel class in the current task.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Given a support feature xi, we perform our prior-driven calibra- tion to obtain the calibrated support feature xc i = C(xi), where C(·) : Rd → Rd conducts feature transformation based on the information from the base classes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Then, we predict the label of a query feature by performing nearest neighbor classification w.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='r.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='t the novel class pro- totypes computed from the calibrated support feature(s).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Prior-Driven Discrete Data Calibration Before we perform calibration to the support data, we first apply L2 normalization to the support and query features.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' It is shown in SimpleShot [25] that using L2-normalized feature with a NN-based classifier can lead to competitive results for few-shot classification.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Hence, we obtain ¯xi for a support feature xi by: ¯xi = normalize(xi) = xi ∥xi∥2 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' (1) Similarly, the normalization of the query features are also computed: ¯qi = normalize(qi).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' By working with the normalized features, we can obviate the absolute scales of the features and focus on the similarities and differences on their directions.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Note that, the normal- ized features are used in the feature combination step (Eq.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' 7, 10 and 11) for obtaining the interpolation be- tween the normalized features and in the NN-based clas- sification step (Eq.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' 12) for performance improvement.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Next, we propose the sample-level and task-level cal- ibration, and their combination to utilize the priors from the base classes for obtaining the less-biased support features.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Sample-Level Calibration According to previous works [55, 38] which also use the information from base classes for classifying the new classes, the base classes with higher similarities to the query classes are more important than other base classes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Hence, we first propose to perform calibration based on the top similar base classes for each support sample.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Moreover, following DC [38], we apply the Tukeys’s Ladder of Powers transformation [56] to the features of the support samples before the calibration: ˜xi = � xλ i if λ � 0 log(xi) if λ = 0 (2) Here, λ is a hyperparameter which controls the distri- bution of the transformed feature, with a smaller λ can lead to a less skewed feature distribution.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' We set λ = 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='5 and obtain the transformed support feature ˜xi from the original feature xi.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Then, we select the top M base classes with higher similarities to a transformed support feature ˜xi: ΛM i = {pb j| j ∈ topM(Si)}, (3) where Si = {< ˜xi, pb j > | j ∈ {1, .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' nb}}.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' (4) Here, ΛM i stores the M nearest base prototypes with re- spect to a transformed support feature vector ˜xi;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' topM(·) is an operator that returns the index of top M elements from Si, the similarity set of ˜xi, while the similarity be- tween ˜xi and a base prototype pb j is computed by the 5 inner product < ·, · >.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' In DC [38], the distributions of the base and novel classes are assumed as Gaussian distribution and the statistics (mean and co-variance) of the base classes are used to calibrate the distribution of the novel classes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' In contrast, we directly use the sim- ilar base prototypes to calibrate each support feature.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Specifically, the calibration for ˜xi driven by base proto- types pb j ∈ ΛM i is computed as: si = ˜xi + � j∈ΛM i wijpb j, (5) where the weights of the M nearest base classes proto- types in ΛM i are obtained by applying Softmax to the similarities between ˜xi and these prototypes: wij = e<˜xi,pb j> � k∈ΛM i e<˜xi,pb k> , j ∈ ΛM i .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' (6) It should be noted that, in Eq.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' 5, the support feature ˜xi is a transformed feature, while the base prototypes are in the original feature space.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' This setting is the same as DC does for calibrating the distribution of the novel classes and it can be understood as follows: 1) the trans- formation can initially reduce the skewness of the few- shot-sampled support features;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' 2) the term wijpb j can be regarded as the projection of ˜xi w.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='r.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='t prototype pb j;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' 3) ˜xi is calibrated based on its projects to all of its similar base prototypes in ΛM i .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Finally, the sample-level calibration for a normalized support sample ¯xi is defined as: ¯xs i = normalize((1 − α)¯xi + α¯si), (7) where α ∈ [0, 1] is a parameter to linearly combine the normalized support feature ¯xi and normalized base- prototypes-driven calibration ¯si = norm(si).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' As shown in Figure 2, ¯xi and ¯si form a line in the normalized fea- ture space and ¯xs i is the normalization of a in-between point on this line.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' In general, the sample-level calibra- tion can rectify each support sample based on its own top M most similar base classes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Task-Level Calibration By performing the sample-level calibration, the bias induced by the few-shot support samples can be reduced to a certain degree.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' However, when the sampling bias is too large, e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=', the support sample is lying near the boundary of a class, the set of similar base classes ΛM i obtained by Eq.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' 3 may also be biased.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' To alleviate such bias, we propose the task-level calibration which utilizes the base prototypes related to all support samples when calibrating each individual support feature.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Concretely, for a support set S = {(xi, yi)}N×K i=1 w.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='r.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='t a task T , we col- lect the top M similar base prototypes for each support sample and form a union of related base prototypes for T : ΛT = N×K � i=1 ΛM i .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' (8) Then, for a transformed support sample ˜xi obtained by Eq.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' 2, the calibration using all of the task-related base prototypes is computed by: ti = ˜xi + � j∈ΛT wi jpb j, (9) where wi j is calculated in the similar way as Eq.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' 6, but the similarities are computed using the prototypes from ΛT instead of ΛM i .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' By involving more prototypes to cal- ibrate the support samples, the bias caused by only using nearby prototypes for a near-boundary support sample can be reduced.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Then, we define the task-level calibration for a nor- malized support sample ¯xi as: ¯xt i = normalize((1 − β)¯xi + β¯ti), (10) where ¯ti is the normalization of ti.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Similar to the sample- level calibration, ¯xi and ¯ti also form a line in the normal- ized feature space, while the calibration for each support sample is based on the union of all related base proto- types ΛT .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Unified Model The sample-level and task-level calibration utilize different levels of information from the base classes to rectify the support samples in a discrete manner.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' To fur- ther attain the merits of both calibration schemes, we propose a unified model which linearly combines the sample-level and task-level calibration: xc i = ¯xu i = normalize((1 − α − β)¯xi + α¯si + β¯ti).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' (11) Here, ¯xu i which is also denoted as xc i , is the final calibra- tion for a normalized support sample ¯xi .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Geometrically, xc i can be understood as the normalization of an interpo- lated feature point xu i locating in the triangle formulated by the three vertices ¯xi, ¯si and ¯ti, while 1 − α − β, α and β are the barycentric coordinates of xu i .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Different α and β values can lead to different calibration effects.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' When β = 0, the unified model degenerates to the sample- level calibration, while when α = 0, the model becomes to the task-level calibration.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' We quantitatively evaluate the effects of different α and β values in Section 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' 6 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Nearest Prototype Classifier With the calibrated support set Sc = {(xc i , yi)}N×K i=1 , we compute the prototypes {pn}N n=1 for the novel classes and perform cosine similarity based nearest classification for a query feature q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' To simplify the notation, we fur- ther represent Sc = {Sc n}N n=1, while Sc n = {(xc k, yk = n)}K k=1 is the support set for a novel class CLS n.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' For the 1-shot case, each calibrated support sample becomes one prototype and the class of the query fea- ture is predicted by the nearest prototype classifier: y∗ = max pn cos(¯q, pn), (12) where pn = xc n is the calibrated prototype for novel class CLS n and ¯q is the normalization of query q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' For the multi-shot case, one way to obtain the pro- totype for a novel class is simply to compute the av- erage of all support features for the given class as in Prototypical Networks [16].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' However, merely using the unweighted average of the support features as prototype does not consider the importance of the support samples w.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='r.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='t the query.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Therefore, we adopt the idea of attentive prototype which is proposed in recent works [51, 53] for query-guided prototype computation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' In our implemen- tation, we define the attention-weighted prototype as: pq n = � xc k∈Scn akxc k, (13) where ak = e � xcm∈Scn e .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' (14) Here, xc k and xc m are the calibrated support samples be- longing to the CLS n’s support set Sc n and ak is the atten- tion weight computed by applying Softmax to the sim- ilarities between query q and these calibrated support samples;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' pq n is the CLS n’s prototype guided by query q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Similar to Eq.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' 12, the prediction for a query q is obtained by finding the novel class with the nearest pro- totype pq n.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Experiments In this section, we perform quantitative compar- isons between our P3DC-Shot and state-of-the-art few-shot classification methods on three represen- tative datasets.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' We also conduct ablation studies on evaluating different hyperparameters and design choices for our methods.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Our code is available at: https://github.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='com/breakaway7/P3DC-Shot.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Datasets We evaluate our prior-driven data calibration strate- gies on three popular datasets for benchmarking few shot classificaiton: miniImageNet [2], tieredImageNet [39] and CUB [40].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' miniImageNet and tieredImageNet contain a broad range of classes including various an- imals and objects, while CUB is a more fine-grained dataset that focuses on various species of birds.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Specifically, the miniImageNet [2] is derived from the ILSVRC-2012 [58] and it contains a subset of 100 classes, each of which consisting of 600 images.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' We follow the split used in [18] and obtain 64 base, 16 val- idation and 20 novel classes for miniImageNet.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Comar- ing to miniImageNet, the tieredImageNet [39] is a larger subset of [58] which contains 608 classes and therefore more challenging.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' We follow [39] and split the tiered- ImageNet into 351, 97, and 160 classes for base, vali- dation, and novel classes, respectively.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' For CUB [40], it is the short name for Caltech-UCSD Birds 200 dataset, which contains a total of 11,788 images covering 200 categories of different bird species.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' We split the CUB dataset into 100 base, 50 validation and 50 novel classes following [31].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Note that the set formed by the base classes can also be regarded as the train set and the novel classes correspond to the test set.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Implementation Details For each image in the dataset, we represent it as a 640-dimensional feature vector which is extracted us- ing the WideResNet [59] pretrained by the S2M2 [29] work.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Our calibration pipeline can efficiently proceed in four steps: 1) find the M = 5 nearby base prototypes for each support sample xi;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' 2) compute the endpoint of the sample-level calibration for xi, i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=', si;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' 3) col- lect all nearby base prototypes for all support samples in the task and compute the endpoint of the task-level calibration for xi, i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=', ti;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' 4) combine the sample-level and task-level calibration and obtain the final calibrated support sample xc i .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' The parameter α and β for weighting the sample-level and task-level calibration are selected based on the best results obtained on the validation set for each dataset.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' All experiments are conducted on a PC with a 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='70GHz CPU and 16G memory.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' No GPU is needed during the calibration.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' On average, for a 5- way 5-shot task, it takes 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='027 seconds to calibrate the support samples and 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='002 seconds for performing the nearest prototype classification.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Comparison and Evaluation To evaluate the performance of our P3DC-Shot, we first conduct quantitative comparisons with some rep- resentative and state-of-the-art few-short classification 7 Table 1: Quantitative comparison on the test set of miniImageNet, tieredImageNet and CUB.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' The 5-way 1-shot and 5-way 5-shot classification accuracy (%) with 95% confidence intervals are measured.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Best results are highlighted in bold and second best are in italic.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' The last line shows the α and β selected based on the valiation set for each dataset.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' * 8 and 20 are the number of ensembles in DeepVoro and DeepVoro++.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' † The results of [54] on tieredImageNet are obtained using its released code.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Methods miniImageNet tieredImageNet CUB 5-way 1-shot 5-way 5-shot 5-way 1-shot 5-way 5-shot 5-way 1-shot 5-way 5-shot Meta-learning (metric-learning) MatchingNet [15] (2016) 64.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='03 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='20 76.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='32 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='16 68.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='50 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='92 80.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='60 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='71 73.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='49 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='89 84.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='45 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='58 ProtoNet [16] (2017) 54.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='16 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='82 73.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='68 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='65 65.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='65 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='92 83.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='40 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='65 72.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='99 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='88 86.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='64 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='51 RelationNet [17] (2018) 52.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='19 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='83 70.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='20 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='66 54.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='48 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='93 71.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='32 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='78 68.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='65 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='91 81.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='12 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='63 Meta-learning (optimization) MAML [19] (2017) 48.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='70 ± 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='84 63.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='10 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='92 51.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='67 ± 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='81 70.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='30 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='08 50.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='45 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='97 59.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='60 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='84 LEO [21] (2019) 61.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='76 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='08 77.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='59 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='12 66.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='33 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='15 81.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='44 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='09 68.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='22 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='22 78.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='27 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='16 DCO [22] (2019) 62.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='64 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='61 78.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='63 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='46 65.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='99 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='72 81.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='56 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='53 Transfer learning Baseline++ [31] (2019) 57.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='53 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='10 72.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='99 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='43 60.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='98 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='21 75.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='93 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='17 70.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='40 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='81 82.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='92 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='78 Negative-Cosine [57] (2020) 62.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='33 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='82 80.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='94 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='59 72.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='66 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='85 89.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='40 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='43 S2M2R [29] (2020) 64.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='65 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='45 83.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='20 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='30 68.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='12 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='52 86.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='71 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='34 80.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='14 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='45 90.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='99 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='23 Nearest neighbor SimpleShot [25] (2019) 64.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='29 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='20 81.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='50 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='14 71.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='32 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='22 86.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='66 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='15 DeepVoro(8)∗ [50] (2022) 66.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='45 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='44 84.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='55 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='29 74.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='02 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='49 88.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='90 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='29 80.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='98 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='44 91.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='47 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='22 DeepVoro++(20)∗ [50] (2022) 68.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='38 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='46 83.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='27 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='31 74.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='48 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='50 80.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='70 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='45 Data calibration RestoreNet [35] (2020) 59.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='28 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='20 74.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='32 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='91 DC [38] (2021) 67.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='79 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='45 83.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='69 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='31 74.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='24 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='50 88.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='38 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='31 79.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='93 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='46 90.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='77 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='24 MCL-Katz+PSM [37] (2022) 67.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='03 84.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='03 69.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='90 85.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='08 85.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='89 93.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='08 S2M2+TCPR† [54] (2022) 68.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='05 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='41 84.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='51 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='27 72.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='67 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='48 87.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='96 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='31 P3DC-Shot (α = 0, β = 0) 65.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='93 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='45 84.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='06 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='30 73.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='56 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='49 88.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='50 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='32 81.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='61 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='43 91.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='36 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='22 P3DC-Shot (α = 1, β = 0) 68.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='41 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='44 83.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='06 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='32 74.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='84 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='49 88.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='01 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='33 81.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='51 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='44 90.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='83 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='24 P3DC-Shot (α = 0, β = 1) 68.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='67 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='44 83.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='64 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='31 75.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='20 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='48 88.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='29 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='33 81.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='58 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='44 91.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='02 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='23 P3DC-Shot (α = 1 3, β = 1 3) 68.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='33 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='44 84.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='19 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='30 74.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='91 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='49 88.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='54 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='32 81.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='75 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='43 91.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='21 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='23 P3DC-Shot (selected α, β) 68.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='68 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='44 84.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='37 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='30 75.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='20 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='48 88.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='67 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='32 81.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='86 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='43 91.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='36 ± 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='23 (0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='0, 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='9) (0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='0, 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='4) (0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='0, 1.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='0) (0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='0, 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='3) (0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='2, 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='4) (0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='0, 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='4) methods.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Then, we compare with different data trans- formation or calibration schemes and provide qualita- tive visualization for showing the difference of our cali- bration results w.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='r.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='t existing works.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' In addition, we eval- uate the generalizability of our method by performing classification tasks with different difficulties.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Quantitative comparisons.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' As there are numerous efforts have been paid to the few-shot classification, we mainly compare our P3DC-Shot with representative and SOTA works which cover different types of few- shot learning schemes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' The compared methods include the metric-learning based meta-learning [15, 16, 17], optimization-based meta-learning [19, 21, 22], transfer learning [31, 57, 29], nearest neighbor [25, 50] and cal- ibration [35, 38, 37, 54] based methods.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' For certain methods such as [29, 28], we only compare with their basic versions and do not consider their model trained with data augmentation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Note that as not every method has conducted experiments on all three datasets, we mainly compare with their reported results.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' One excep- tion is for [54], we compare with its results generated using its released code.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' For our method, we report the results of our model with different hyperparameters α and β.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' In particular, we consider the case when α and β are both zero, which makes our method a simple NN-based method with no data calibration and only shows the effect for using the query-guided prototype computation (Eq.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' 13).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' We also compare with the results of α or β is 1, or both of them are equal to 1 3, which correspond to the cases that the endpoint of the sample-level or task-level calibration or the barycenter of the calibration triangle (Figure 2).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' In the end, we provide our best results with the α or β se- lected based on the validation set.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' For each dataset, we evaluate on the 5-way 1-shot and 5-way 5-shot classification setting.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' For each set- ting, 2,000 testing tasks, each of which contains 5 × K (K = 1 or 5) samples for the support set and 5 × 15 8 •• . 口� 本 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='. -\xad . . .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' ,,, " •• ·护h.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='. 心 .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=".了 '." metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' :.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' -� ~ .沁:..“心.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=". °'“).一." metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='.,.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='..' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' ·- --■一一- .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' 一护 . _ . I I . .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='. . . . .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' lJ .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' . . . . .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='1 � .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='. . .. . . . . . . •• 气 . 它v 二, □ 女 .. . . 炉 女 . . 女 0 v . `i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='.. . . . . . . 觅 . . . 炉 •• .. ', . . . . . . .' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=',.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' . •• (a) (b) (c) Figure 3: T-SNE visualization of the calibration on example support samples from the test set of miniImageNet (a), tieredImageNet (b), and CUB (c).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' The colored dots are data from the same underlying classes as the selected sample and the star is the center of each class.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Given a support sample (represented in square), the upside down triangle is our calibration result and the lozenge is the calibration result of DC [38].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' samples for the query set, are randomly generated from the test split of the corresponding dataset.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Table 1 shows the quantitative comparison results on three datasets.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' It can be seen that our best results outperform most meth- ods in the 5-way 1-shot setting and are comparable to the SOTA methods [28, 38] for the 5-way 5-shot set- ting.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Note that although [37] achieves best results on the CUB dataset, it is inferior on miniImageNet and tiered- ImageNet.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Moreover, since [37] follows a metric-based few-shot learning pipeline, it still requires to train the feature extractor and the metric module for each dataset.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' For [28], it performs generally well on all three datasets, but as an ensemble-based method, its computation time is much longer than our method, especially when the ensemble number is large.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' In contrast, our method does not require any training and only needs to perform an efficient calibration step for each testing task.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Also, from results of our method with different α and β values in Table 1, it can be found when α and β is zero, the query-guided prototype computation can lead to better performance than the simple NN-based Sim- pleShot [25].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' When either the sample-level or task-level calibration is applied, i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=', α or β is not zero, the results are better than the non-calibrated version, showing the calibration can indeed reduce the bias for the support samples.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Meanwhile, which calibration type is more suitable is depending on the underlying data distribu- tion of the dataset.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' By selecting the α and β based on the validation set of each dataset, the results are further improved.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' In the ablation study, we perform more ex- periments and analysis of different α and β values.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Comparison with different data transformation or calibration schemes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' To further verify the effectiveness Table 2: Comparison with different data transformation or calibration schemes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Accuracy (%) for 5-way 1-shot task on the test set of mini- ImageNet are measured.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Model miniImageNet CUB 5-way 1-shot 5-way 1-shot NN 47.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='50 76.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='40 L2N+NN 65.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='93 81.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='61 CL2N+NN 65.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='96 81.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='54 DC+L2N+NN 66.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='23 79.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='49 P3DC-Shot 68.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='68 81.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='86 (selected α, β) (0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='0,0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='9) (0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='2,0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='4) of our prior-driven data calibration, we compare with several NN-based baseline methods which perform dif- ferent data transformation or calibration schemes and the results are shown in Table 2.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' In this experiment, all methods are based on the pretrained WideResNet fea- tures.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Also, only the 5-way 1-shot classification ac- curacy is measured so that the comparison is focused on feature transformation instead of the prototype com- putation schemes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' The first baseline is NN, which is a naive inner product based nearest neighbor classifier.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Then, L2N and CL2N represent L2 normalization and centered L2 normalization which have been shown as effective in SimpleShot [25].' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' In addition, another base- line that follows the data calibration scheme in DC [38] is compared.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Comparing to the original DC, this base- line directly takes the calibrated and then normalized features and employs NN for classification instead of training new classifiers using the sampled data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' From Table 2, it can be observed the data normalization or cal- ibration can signi��cantly improve the NN-based classi- 9 ★Table 3: Generalizability test on different N in N-way 1-shot tasks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Accuracy (%) on the test set of miniImageNet are measured.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' For our P3DC-Shot, the same α = 0 and β = 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='9 selected based on the validation set for the 5-way 1-shot case are used for all experiments.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Models 5-way 7-way 9-way 11-way 13-way 15-way 20-way RestroreNet [35] 59.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='56 50.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='55 44.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='54 39.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='98 36.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='34 33.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='52 28.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='48 L2N+NN 65.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='93 57.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='86 52.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='45 48.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='25 44.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='80 42.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='12 37.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='06 CL2N+NN 65.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='96 57.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='69 52.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='23 47.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='93 44.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='36 41.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='85 36.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='65 P3DC-Shot 68.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='68 60.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='58 55.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='03 50.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='75 47.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='21 44.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='43 39.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='33 fication.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' In addition, our data calibration achieves the best results comparing to other baselines.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' The main rea- son is the L2N and CL2N only perform transformation rather than calibration using the base priors, while the modified DC does not consider the attentive similarity between the support samples and the base classes when performing the calibration.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Visualization of the calibration.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' To qualitatively verify the effectiveness of our calibration, we show the T-SNE [60] visualization of the calibration results for some example support samples in Figure 3.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' The results of calibrating the same sample using DC [38] are also compared.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' It can be seen from Figure 3 that our calibra- tion can more effectively transform the support samples closer to the center of the underlying classes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' For DC, the calibration may be minor or even be far away from the center.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' The reason is still due to it treats the nearby base classes with the same weights.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' In contrast, our cal- ibration pays more attention to the similar base classes when determining the weights for combining the base prototypes (Eq.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' 5 and 9).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Generalizability test on different N in N-way clas- sification.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Following [35], we conduct a series of N- way 1-shot experiments on miniImageNet to test the generalizability of the proposed calibration for differ- ent classification tasks.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Table 3 shows the results of the baseline methods [35], L2N and CL2N and ours.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Note that with the N increases, there are more data samples in a test task and the classification becomes more difficult.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' It can be observed that our P3DC-Shot achieves con- sistent best results comparing to the baseline methods, verifying our method is generalizable to classification tasks with different difficulties.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' 4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='4.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Ablation Study In this section, we perform ablation studies to ver- ify the effectiveness of different modules and design choices of our method.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' First, we conduct experiments on different hyperparameter α and β to see how the sample-level and task-level calibration can affect the fi- nal results.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Then, we perform the study on the effec- tiveness of using the query-guided attentive prototypes in the NN classification step.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Effect on different hyperparameter α, β.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Differ- ent α and β values correspond to different degrees of sample-level and task-level calibration applied to the in- put data.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Geometrically, α, β and 1 − α − β can also be understood as the coordinates of the calibration result w.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='r.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='t to the triangle formed by the three points ¯xi, si, ti.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' To quantitatively reveal how these two hyperparameters can affect the results, we enumerate different α and β values on both the validation and test sets of different datasets.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' From the results in Figure 4, it can be found the accuracy near the origin of the figures are smaller, which means performing calibration can improve upon using the original features for classification, i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=', α and β is zero.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Also, different datasets prefer different α and β combinations for achieving higher performance.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' For example, miniImageNet shows better results when α+β is around 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='9 and CUB prefers a relatively smaller cal- ibration, i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=', α + β is around 0.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='6.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' For tieredImageNet, better results are obtained around the topper left of the figure, showing the task-level calibration is more help- ful than the sample-level.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Overall, the trend on the test set is consistent with the validation set.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' From above ex- periments, it shows the sample-level and task-level cali- bration are consistently effective, while how to selecting the good α and β values are dataset dependent.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' There- fore, for our best results, we use the α and β selected based on the validation set and report their performance on the test set.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Effect on using attentive prototypes in NN classifi- cation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' To improve the conventional prototype based NN classificaiton, we propose to compute the query- guided attentive prototypes to represent the support class.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' To verify the effectiveness of this scheme, we per- form ablation study for 5-way 5-shot tasks on different tasks using different prototype computation schemes.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Specifically, we take the calibrated support features and compute the prototypes for the support classes by performing the conventional average operation or our query-guided attentive averaging (Eq.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' 13).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' The results 10 Figure 4: The effect of different α and β on the validation (top) and test (bottom) set of different datasets.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Accuracy (%) for 5-way 1-shot task on miniImageNet, tieredImageNet and CUB are measured.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' The warmer color corresponds to higher accuracy.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Table 4: Ablation study on using the query-guided attentive proto- types in NN classification.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Accuray (%) on the test set of miniIma- geNet, tieredImageNet and CUB are measured.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Model miniImageNet tieredImageNet CUB 5-way 5-shot 5-way 5-shot 5-way 5-shot Average 84.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='11 88.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='54 91.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='27 Attentive 84.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='37 88.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='67 91.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='36 in Table 4 show that the attentive prototypes can lead to better performance.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Hence, we adopt the attentive pro- totypes in our NN-based classification.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' 5.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Conclusion In this paper, we propose a simple yet effective frame- work, named P3DC-Shot, for few-shot classification.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Without any retraining and expensive computation, our prior-driven discrete data calibration method can effi- ciently calibrate the support samples based on prior- information from the base classes to obtain the less- biased support data for NN-based classification.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Exten- sive experiments show that our method can outperform or at least comparable to SOTA methods which need ad- ditional learning steps or more computation.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' One lim- itation of our method is we rely on the whole valida- tion set to select the good hyperparameters α and β to determine which degree of the sample-level and task- level calibration is more suitable for the given dataset.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Investigating a more general scheme to combine the sample-level and task-level calibration is an interesting future work.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Moreover, when exploring the combina- tion schemes, we only focus on exploring the inner area of the calibration triangle.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' It is worthy to extend the parameter search to a larger area, i.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=', by extrapolation of the calibration triangle, to find whether better results can be obtained.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' References [1] K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Simonyan, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Zisserman, Very deep convolutional net- works for large-scale image recognition, arXiv preprint arXiv:1409.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='1556 (2014).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' [2] O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Russakovsky, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Deng, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Su, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Krause, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Satheesh, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Ma, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Huang, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Karpathy, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Khosla, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Bernstein, et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=', Ima- genet large scale visual recognition challenge, Int.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Comput.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Vis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' 115 (2015) 211–252.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' 11 [3] K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' He, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Zhang, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Ren, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Sun, Deep residual learning for image recognition, in: IEEE Conf.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Comput.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Vis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Pattern Recog.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=', 2016, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' 770–778.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' [4] R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Girshick, Fast r-cnn, in: Int.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Conf.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Comput.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Vis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=', 2015, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' 1440–1448.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' [5] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Ren, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' He, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Girshick, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Sun, Faster r-cnn: Towards real- time object detection with region proposal networks, Adv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Neu- ral Inform.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Process.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Syst.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' 28 (2015).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' [6] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Redmon, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Divvala, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Girshick, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Farhadi, You only look once: Unified, real-time object detection, in: IEEE Conf.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Com- put.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Vis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Pattern Recog.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=', 2016, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' 779–788.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' [7] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Long, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Shelhamer, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Darrell, Fully convolutional networks for semantic segmentation, in: IEEE Conf.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Comput.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Vis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Pattern Recog.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=', 2015, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' 3431–3440.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' [8] K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' He, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Gkioxari, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Doll´ar, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Girshick, Mask r-cnn, in: Int.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Conf.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Comput.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Vis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=', 2017, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' 2961–2969.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' [9] L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='-C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Chen, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Papandreou, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Kokkinos, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Murphy, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Yuille, Deeplab: Semantic image segmentation with deep con- volutional nets, atrous convolution, and fully connected crfs, IEEE Trans.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Pattern Anal.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Mach.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Intell.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' 40 (2017) 834–848.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' [10] T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='-Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Lin, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Maire, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Belongie, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Hays, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Perona, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Ra- manan, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Doll´ar, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Zitnick, Microsoft coco: Common ob- jects in context, in: Eur.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Conf.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Comput.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Vis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=', 2014, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' 740–755.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' [11] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Cordts, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Omran, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Ramos, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Rehfeld, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Enzweiler, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Benenson, U.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Franke, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Roth, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Schiele, The cityscapes dataset for semantic urban scene understanding, in: IEEE Conf.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Comput.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Vis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Pattern Recog.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=', 2016, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' 3213–3223.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' [12] Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Wang, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Yao, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Kwok, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Ni, Generalizing from a few examples: A survey on few-shot learning, ACM Comput Surv 53 (2020) 1–34.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' [13] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Lu, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Gong, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Ye, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Zhang, Learning from very few sam- ples: A survey, arXiv preprint arXiv:2009.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='02653 (2020).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' [14] G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Huang, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Laradji, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' V´azquez, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Lacoste-Julien, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Ro- driguez, A survey of self-supervised and few-shot object de- tection, IEEE Trans.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Pattern Anal.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Mach.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Intell.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' (2022).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' [15] O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Vinyals, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Blundell, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Lillicrap, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Wierstra, et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=', Match- ing networks for one shot learning, Adv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Neural Inform.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Pro- cess.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Syst.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' 29 (2016).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' [16] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Snell, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Swersky, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Zemel, Prototypical networks for few- shot learning, Adv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Neural Inform.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Process.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Syst.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' 30 (2017).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' [17] F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Sung, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Yang, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Zhang, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Xiang, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Torr, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Hospedales, Learning to compare: Relation network for few- shot learning (2018) 1199–1208.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' [18] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Ravi, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Larochelle, Optimization as a model for few-shot learning (2016).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' [19] C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Finn, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Abbeel, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Levine, Model-agnostic meta-learning for fast adaptation of deep networks, in: Int.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Conf.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Mach.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Learn.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=', PMLR, 2017, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' 1126–1135.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' [20] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Jamal, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='-J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Qi, Task agnostic meta-learning for few-shot learning, in: IEEE Conf.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Comput.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Vis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Pattern Recog.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=', 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' [21] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Rusu, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Rao, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Sygnowski, O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Vinyals, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Pascanu, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Osindero, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Hadsell, Meta-learning with latent embedding optimization, in: Int.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Conf.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Learn.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Represent.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=', 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' [22] K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Lee, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Maji, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Ravichandran, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Soatto, Meta-learning with differentiable convex optimization, in: IEEE Conf.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Comput.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Vis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Pattern Recog.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=', 2019, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' 10657–10665.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' [23] Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Chen, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Wang, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Liu, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Xu, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Darrell, A new meta- baseline for few-shot learning, arXiv preprint arXiv:2003.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='11539 (2020).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' [24] T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Cao, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Law, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Fidler, A theoretical analysis of the num- ber of shots in few-shot learning, in: Int.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Conf.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Learn.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Repre- sent.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=', 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' [25] Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Wang, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='-L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Chao, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Weinberger, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' van der Maaten, Simpleshot: Revisiting nearest-neighbor classification for few- shot learning, arXiv preprint arXiv:1911.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='04623 (2019).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' [26] F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Aurenhammer, Voronoi diagrams—a survey of a fundamental geometric data structure, ACM Comput Surv 23 (1991) 345– 405.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' [27] D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Chen, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Huang, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Liu, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Xu, On clustering induced voronoi diagrams, SIAM Journal on Computing 46 (2017) 1679–1711.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' [28] C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Ma, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Huang, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Gao, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Xu, Few-shot learning as cluster- induced voronoi diagrams: A geometric approach, in: Int.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Conf.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Learn.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Represent.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=', 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' [29] P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Mangla, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Kumari, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Sinha, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Singh, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Krishnamurthy, V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Balasubramanian, Charting the right manifold: Manifold mixup for few-shot learning, in: WACV, 2020, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' 2218–2227.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' [30] Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Tian, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Wang, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Krishnan, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Tenenbaum, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Isola, Re- thinking few-shot image classification: a good embedding is all you need?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=', in: Eur.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Conf.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Comput.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Vis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=', Springer, 2020, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' 266–282.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' [31] W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='-Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Chen, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='-C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Liu, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Kira, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='-C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Wang, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='-B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Huang, A closer look at few-shot classification, in: Int.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Conf.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Learn.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Represent.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=', 2019.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' [32] W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Ge, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Yu, Borrowing treasures from the wealthy: Deep transfer learning through selective joint fine-tuning, in: IEEE Conf.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Comput.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Vis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Pattern Recog.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=', 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' [33] O.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Sbai, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Couprie, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Aubry, Impact of base dataset design on few-shot image classification, Eur.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Conf.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Comput.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Vis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' (2020).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' [34] L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Zhou, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Cui, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Jia, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Yang, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Tian, Learning to select base classes for few-shot classification, in: IEEE Conf.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Comput.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Vis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Pattern Recog.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=', 2020, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' 4624–4633.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' [35] W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Xue, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Wang, One-shot image classification by learning to restore prototypes, in: AAAI, volume 34, 2020, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' 6558–6565.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' [36] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Liu, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Song, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Qin, Prototype rectification for few-shot learning, in: Eur.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Conf.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Comput.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Vis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=', Springer, 2020, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' 741– 756.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' [37] Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Guo, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Du, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Li, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Xie, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Ma, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Dong, Learning cali- brated class centers for few-shot classification by pair-wise sim- ilarity, IEEE Trans.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Image Process.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' 31 (2022) 4543–4555.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' [38] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Yang, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Liu, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Xu, Free lunch for few-shot learning: Dis- tribution calibration, in: Int.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Conf.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Learn.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Represent.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=', 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' [39] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Ren, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Triantafillou, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Ravi, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Snell, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Swersky, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Tenenbaum, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Larochelle, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Zemel, Meta-learning for semi-supervised few-shot classification, in: Int.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Conf.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Learn.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Represent.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=', 2018.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' [40] C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Wah, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Branson, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Welinder, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Perona, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Belongie, The caltech-ucsd birds-200-2011 dataset (2011).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' [41] T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Hospedales, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Antoniou, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Micaelli, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Storkey, Meta- learning in neural networks: A survey 44 (2021) 5149–5169.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' [42] G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Koch, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Zemel, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Salakhutdinov, et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=', Siamese neural net- works for one-shot image recognition, in: ICML deep learning workshop, 2015.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' [43] W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Xu, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Xu, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Wang, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Tu, Attentional constellation nets for few-shot learning, in: Int.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Conf.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Learn.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Represent.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=', 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' [44] Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Liu, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Zheng, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Song, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Cai, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' He, Dmn4: Few-shot learn- ing via discriminative mutual nearest neighbor neural network, in: AAAI, volume 36, 2022, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' 1828–1836.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' [45] L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Torrey, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Shavlik, Transfer learning, in: Handbook of re- search on machine learning applications and trends: algorithms, methods, and techniques, IGI global, 2010, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' 242–264.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' [46] C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Tan, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Sun, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Kong, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Zhang, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Yang, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Liu, A survey on deep transfer learning, in: International conference on artificial neural networks, Springer, 2018, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' 270–279.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' [47] F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Zhuang, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Qi, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Duan, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Xi, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Zhu, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Zhu, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Xiong, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' He, A comprehensive survey on transfer learning, Proceed- ings of the IEEE 109 (2020) 43–76.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' [48] G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Dhillon, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Chaudhari, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Ravichandran, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Soatto, A baseline for few-shot image classification, in: Int.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Conf.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Learn.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Represent.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=', 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' 12 [49] W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Li, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Wang, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Xu, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Huo, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Gao, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Luo, Revisiting local descriptor based image-to-class measure for few-shot learning, in: IEEE Conf.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Comput.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Vis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Pattern Recog.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=', 2019, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' 7260– 7268.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' [50] C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Ma, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Huang, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Gao, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Xu, Few-shot learning as cluster- induced voronoi diagrams: A geometric approach (2022).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' [51] F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Wu, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Smith, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Lu, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Pang, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Zhang, Attentive proto- type few-shot learning with capsule network-based embedding, in: Eur.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Conf.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Comput.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Vis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=', 2020, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' 237–253.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' [52] Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Ji, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Chai, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Yu, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Zhang, Reweighting and information- guidance networks for few-shot learning, Neurocomputing 423 (2021) 13–23.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' [53] X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Wang, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Meng, B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Wen, F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Xue, Racp: A network with attention corrected prototype for few-shot speaker recognition using indefinite distance metric, Neurocomputing 490 (2022) 283–294.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' [54] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Xu, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Luo, X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Pan, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Pei, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Li, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Xu, Alleviating the sam- ple selection bias in few-shot learning by removing projection to the centroid, in: Adv.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Neural Inform.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Process.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Syst.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=', 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' [55] L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Zhou, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Cui, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Yang, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Zhu, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Tian, Learning to learn image classifiers with visual analogy, in: IEEE Conf.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Comput.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Vis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Pattern Recog.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=', 2019, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' 11497–11506.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' [56] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Tukey, et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=', Exploratory data analysis, volume 2, Read- ing, MA, 1977.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' [57] B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Liu, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Cao, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Lin, Q.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Li, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Zhang, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Long, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Hu, Nega- tive margin matters: Understanding margin in few-shot classifi- cation, in: Eur.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Conf.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Comput.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Vis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=', Springer, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' [58] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Deng, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Dong, R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Socher, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='-J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Li, K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Li, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Fei-Fei, Im- agenet: A large-scale hierarchical image database, in: IEEE Conf.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Comput.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Vis.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Pattern Recog.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=', Ieee, 2009, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' 248–255.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' [59] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Zagoruyko, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Komodakis, Wide residual networks, arXiv preprint arXiv:1605.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content='07146 (2016).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' [60] L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' van der Maaten, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' Hinton, Visualizing data using t-sne, Journal of Machine Learning Research 9 (2008) 2579–2605.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'} +page_content=' 13' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/29AyT4oBgHgl3EQf1vl1/content/2301.00740v1.pdf'}