Adversarial Attacks on Neural Models of Code via Code Difference Reduction

Zhao Tian, College of Intelligence and Computing, Tianjin University, Tianjin, China (tianzhao@tju.edu.cn)
Junjie Chen†, College of Intelligence and Computing, Tianjin University, Tianjin, China (junjiechen@tju.edu.cn)
Zhi Jin, Key Lab of High Confidence Software Technologies, Peking University, Beijing, China (zhijin@pku.edu.cn)
†Junjie Chen is the corresponding author.

Abstract—Deep learning has been widely used to solve various code-based tasks by building deep code models based on a large number of code snippets. However, deep code models are still vulnerable to adversarial attacks. As source code is discrete and has to strictly stick to grammar and semantics constraints, the adversarial attack techniques in other domains are not applicable. Moreover, the attack techniques specific to deep code models suffer from an effectiveness issue due to the enormous attack space. In this work, we propose a novel adversarial attack technique (i.e., CODA). Its key idea is to use the code differences between the target input and reference inputs (which have small code differences from the target input but different prediction results) to guide the generation of adversarial examples.
It considers both structure differences and identifier differences to preserve the original semantics. Hence, the attack space can be largely reduced to the one constituted by the two kinds of code differences, and thus the attack process can be largely improved by designing corresponding equivalent structure transformations and identifier renaming transformations. Our experiments on 10 deep code models (i.e., two pre-trained models with five code-based tasks) demonstrate the effectiveness and efficiency of CODA, the naturalness of its generated examples, and its capability of defending against attacks after adversarial fine-tuning. For example, CODA improves the state-of-the-art techniques (i.e., CARROT and ALERT) by 79.25% and 72.20% on average in terms of the attack success rate, respectively.

I. INTRODUCTION

In recent years, deep learning (DL) has been widely used to solve code-based software engineering tasks, such as code clone detection [1], vulnerability prediction [2], and code completion [3], by building DL models based on a large number of training code snippets (also called deep code models). Indeed, deep code models have achieved notable performance and largely promoted the process of software development and maintenance [4]–[7]. In particular, some industrial products built on deep code models have been released and received extensive attention, such as AlphaCode [8] and Codex [9].
Like DL models in other areas (e.g., image processing [10] and speech recognition [11]), the robustness of deep code models is also critical [12]. However, the existing adversarial attack techniques proposed in other areas are not applicable to deep code models. This is because these techniques perturb an input in a continuous space for altering the model prediction result, while the inputs (i.e., source code) for deep code models are discrete. Moreover, source code has to strictly stick to grammar and semantics constraints, i.e., the adversarial example generated from an original input should have no grammar errors and should preserve the original semantics. Indeed, some adversarial attack techniques specific to deep code models have been proposed recently, such as MHM [13], CARROT [12], and ALERT [14]. In general, they share two main steps: (1) designing a series of semantic-preserving code transformation rules (e.g., identifier renaming or dead code insertion), and (2) searching for ingredients from the space defined by the rules (e.g., a valid identifier name is an ingredient for the rule of identifier renaming) for transforming an input to a semantic-preserving adversarial example.
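To make these two steps concrete, the following is a minimal sketch (ours, not taken from any of the cited techniques) of the identifier renaming rule: the rule itself preserves semantics, while every valid identifier name is a candidate ingredient for the search. The regex-based rename is deliberately simplified and would not correctly handle string literals, comments, or scoping in real code.

```python
import re

def rename_identifier(code: str, old: str, new: str) -> str:
    """Rename one identifier, matching whole words only (simplified)."""
    return re.sub(rf"\b{re.escape(old)}\b", new, code)

snippet = "int add(int a, int b) { return a + b; }"
# Any valid identifier name could replace 'a', which is why the
# ingredient space of this single rule is already enormous.
print(rename_identifier(snippet, "a", "operand"))
# -> int add(int operand, int b) { return operand + b; }
```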
For example, CARROT designs two semantic-preserving code transformation rules (i.e., identifier renaming and dead code insertion), and uses the hill-climbing algorithm to search for the ingredients from the entire space with the guidance of gradients and changes of model prediction results. ALERT considers the rule of identifier renaming, and uses the naturalness (i.e., natural semantics of code) and changes of model prediction results to guide the ingredient search process in the entire space. Although some of them have been demonstrated to be effective to some degree, these existing techniques still suffer from major limitations:

- The ingredient space defined by the code transformation rules is enormous. For example, for the rule of identifier renaming, all valid identifier names could be the ingredients for renaming the target identifier. Hence, searching for the ingredients that can help attack the target model successfully is challenging. The existing techniques tend to utilize the changes of model prediction results after performing semantic-preserving transformations on the target input to guide the search process, which is very likely to fall into a local optimum in the enormous space and thus limits their attack effectiveness.

- Frequently invoking the target model can negatively affect the efficiency of adversarial attack techniques, as model invocation is the most costly part of the attack process [12]. Also, when the model is deployed remotely, frequent model invocations could be identified as malicious attacks and thus lead to blocking access to the model. However, the existing techniques often involve frequent model invocations due to calculating gradients or guiding the search direction via model prediction.
- Developers care about the natural semantics of code since it is helpful to assist human comprehension [15]. Hence, guaranteeing the naturalness of generated adversarial examples (i.e., source code in our task) is important. However, all the existing techniques (except ALERT [14]) do not consider this factor. For example, CARROT designs the rule of dead code insertion, but it may largely damage the naturalness of the generated examples (especially when a large amount of dead code is inserted).

Overall, a more effective adversarial attack technique specific to deep code models should enhance the attack effectiveness by improving the ingredient search process, guarantee the naturalness of generated adversarial examples as much as possible, and keep the number of model invocations as small as possible. Our work proposes such a technique, called CODA (COde Difference guided Attacking). To improve the attack effectiveness, the key idea of CODA is to use the inputs that have small code differences from the target input but different prediction results to largely reduce the ingredient space. For ease of presentation, we call such inputs reference inputs. Actually, reference inputs can be regarded as invalid successfully-attacking adversarial examples generated from the target input, where "invalid" refers to altering the original semantics and "successfully-attacking" refers to producing different prediction results. Over the target input, the code differences brought by reference inputs contribute to the invalid but successful attack to a large extent.
Hence, if we extract the ingredients from the code differences to support semantic-preserving transformations on the target input, their code differences can be gradually reduced without altering the original semantics, and thus a valid successfully-attacking adversarial example is likely to be generated. In this way, the ingredient space is effectively reduced to the one constituted by only the code differences between reference inputs and the target input, and thus the search process can be largely improved. Please note that taking reference inputs (especially the code differences brought by them) as the guidance for generating adversarial examples is an innovative perspective, which closely utilizes the unique characteristics of deep code models (e.g., source code is discrete).

To preserve the semantics of the target input during the attack process, CODA considers code structure differences and identifier differences, and thus extracts the ingredients to support equivalent structure transformations and identifier renaming transformations. Equivalent structure transformations (e.g., transforming a for loop to an equivalent while loop) do not affect the naturalness of generated examples, and thus CODA first applies this kind of transformations to reduce code differences for generating adversarial examples. Then, identifier renaming transformations are applied to further reduce code differences to improve the attack effectiveness. To ensure the naturalness of the examples generated by this kind of transformations, CODA measures the semantic similarity between identifiers for guiding iterative transformations. In particular, CODA involves only the necessary model invocations to check whether the generated example attacks successfully, without extra gradient calculation or a large amount of model prediction for guiding the search process.
We conducted an extensive study to evaluate CODA based on two popular pre-trained models (i.e., CodeBERT [6] and GraphCodeBERT [7]) and five code-based tasks (i.e., vulnerability prediction [2], clone detection [16], authorship attribution [17], functionality classification [18], and defect prediction [12]). In total, we used 10 subjects. Our results demonstrate the effectiveness and efficiency of CODA. For example, on average across the ten subjects, CODA improves the two state-of-the-art adversarial attack techniques specific to deep code models (i.e., CARROT [12] and ALERT [14]) by 79.25% and 72.20% respectively in terms of the attack success rate. The time spent by CODA on completing the attack process for the ten subjects is just 39.59 hours, while those by CARROT and ALERT are 159.19 hours and 198.89 hours, respectively. Also, we investigated the value of the generated adversarial examples by using them to improve the robustness of the target model via an adversarial fine-tuning strategy.
The results show that the models after fine-tuning with the examples generated by CODA can successfully defend against attacks from 63.64%, 66.96%, and 76.68% of the adversarial examples generated by CARROT, ALERT, and CODA on average, respectively. Besides, we conducted a user study to confirm the naturalness of the examples generated by CODA and an ablation experiment to confirm the contribution of each main component in CODA.

To sum up, our work makes the following four major contributions:

- Novel Perspective. We propose a novel perspective of utilizing code differences between reference inputs and the target input to guide the adversarial attack process for deep code models.
- Technique Implementation. We implement CODA following the novel perspective by measuring code structure and identifier differences and designing the corresponding semantic-preserving code transformation rules.
- Performance Evaluation. We conducted an extensive study on two popular pre-trained models and five code-based tasks, demonstrating the effectiveness and efficiency of CODA over two state-of-the-art techniques.
- Public Artifact. We released all the experimental data and our source code at the project homepage [19] for experiment replication, future research, and practical use.

II. BACKGROUND AND MOTIVATION

In this section, we first introduce the background of deep code models (Section II-A), define our problem (Section II-B), and motivate our key idea with an example (Section II-C).
A. Deep Code Models

In the area of software engineering, DL has been widely used to process source code [2], [3], [16], [20]. In particular, some popular pre-trained DL models have been constructed based on a large number of code snippets, among which CodeBERT [6] and GraphCodeBERT [7] are two state-of-the-art pre-trained models.

```c
/* Target input f1 -- ground-truth label: sort;
   prediction result: sort (96.52%) */
void f1(int a[], int n) {
    int i; int j; int k;
    for (i = 0; i < n; i++) {
        for (j = 0; j < ((n - i) - 1); j++) {
            if (a[j] > a[j + 1]) {
                k = a[j];
                a[j] = a[j + 1];
                a[j + 1] = k;
            }
        }
    }
}

/* Reference input f2 -- ground-truth label: palindrome;
   prediction result: palindrome (99.98%) */
int f2(int t[], int len) {
    int i; int j;
    i = 0; j = 0;
    while (len != 0) {
        t[i] = len % 10;
        len /= 10;
        i = i + 1;
    }
    while (j < i) {
        if (t[j] != t[(i - j) - 1]) return 0;
        j = j + 1;
    }
    return 1;
}
```
```c
/* Adversarial example f3 -- ground-truth label: sort;
   prediction result: palindrome (90.88%) */
void f3(int t[], int len) {
    int i; int j; int k;
    i = 0;
    while (i < len) {
        j = 0;
        while (j < ((len - i) - 1)) {
            if (t[j] > t[j + 1]) {
                k = t[j];
                t[j] = t[j + 1];
                t[j + 1] = k;
            }
            j = j + 1;
        }
        i = i + 1;
    }
}
```

Fig. 1. An illustrating example (the target input f1, a reference input f2, and a successfully-attacking adversarial example f3 generated from f1).

CodeBERT learns features from bimodal data in the form of programming languages and natural languages, while GraphCodeBERT takes into consideration the code structure and data flow information.
Same as the existing work [14], we used them in our evaluation (Section IV). These pre-trained models have brought breakthrough changes to many downstream code-based tasks [21], including both classification tasks and generation tasks, by fine-tuning them on the datasets of the corresponding tasks. The former makes classifications based on the given code snippets (e.g., clone detection [16] and vulnerability prediction [2]), while the latter produces a sequence of information based on code snippets or natural language descriptions (e.g., code completion [3] and code summarization [22]). Following most of the existing work on attacking deep code models [12]–[14], our work also focuses on classification tasks and leaves generation tasks as future work. In particular, in our study, we adopted all the tasks used in the studies evaluating the state-of-the-art attack techniques (i.e., CARROT [12] and ALERT [14]), i.e., five classification tasks: vulnerability prediction, clone detection, authorship attribution, functionality classification, and defect prediction.

B. Problem Definition

Given a code snippet x that is processed into the format required by the target deep code model M
(e.g., abstract syntax trees required by code2seq [4], control-flow graphs required by DGCNN [23], or data-flow graphs required by GraphCodeBERT [7]), M can predict a probability vector for x, where each element represents the probability of classifying x into the corresponding class. The class with the largest probability is the final prediction result of M for x. If the prediction result is different from the ground-truth label (denoted as y) of x, M makes a wrong prediction on x; otherwise, M makes a correct prediction. Although deep code models can achieve great performance on the given test sets, they may be vulnerable to adversarial examples [12]–[14]. The goal of our work is to generate as many successfully-attacking adversarial examples as possible, so as to improve the model robustness. As source code is discrete and has to stick to grammar and semantics constraints, the existing adversarial example generation techniques proposed in other domains are not applicable.

The existing attack techniques specific to deep code models always generate adversarial examples from a target input by performing a series of semantic-preserving code transformations [12]–[14], which is also followed by our work. For ease of understanding, we formally define our target problem as finding {x′ | x′ ∈ ϵ ∧ y = M(x) ≠ M(x′)} for a target input x and the target model M. Here, ϵ refers to the universal set of code snippets that satisfy the grammar constraints and preserve the semantics of x. y = M(x) means that we regard only the test inputs on which M makes correct predictions as target inputs, where M(x) refers to the prediction result of M on x. M(x) ≠ M(x′) means that x′ successfully attacks M, i.e., it is a successfully-attacking adversarial example generated from x. Besides, an effective attack technique should also be efficient in finding x′ and should ensure the naturalness of x′ (i.e., natural to human comprehension [14]); both aspects are carefully considered by the technique proposed in our work.
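As a minimal sketch of this definition (our illustration, not the paper's implementation), the success condition can be expressed as a simple predicate, where `model` stands in for M and `preserves_semantics` is an assumed helper standing in for the membership test x′ ∈ ϵ:

```python
def is_successful_attack(model, x, x_prime, y, preserves_semantics) -> bool:
    if model(x) != y:
        return False  # only correctly-predicted inputs are taken as targets
    if not preserves_semantics(x, x_prime):
        return False  # x' must be a grammatical, semantic-preserving variant
    return model(x_prime) != model(x)  # the prediction result must change
```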
C. Motivating Example

We then use a real-world example (simplified for ease of illustration) to help motivate our key idea: utilizing the code differences between reference inputs and the target input to guide the generation of adversarial examples. In Figure 1, the first code snippet f1 is the target input from the test set of the functionality classification task [18], and the two state-of-the-art techniques (i.e., CARROT [12] and ALERT [14]) do not generate successfully-attacking adversarial examples from it since they can fall into a local optimum in the enormous ingredient space. In this figure, the second code snippet f2 is a reference input from the training set of this task, which has a different label from f1. In fact, f2 can be regarded as an invalid successfully-attacking example from f1, as the two are semantically inconsistent and have different prediction results. The code differences between f1 and f2 mainly contribute to this phenomenon. From this perspective, to generate a valid successfully-attacking adversarial example (denoted as f3) from f1, we should perform semantic-preserving code transformations on f1, and the transformations should reduce the code differences between f1 and f2 in order to alter the prediction result of the target model on f1. That is, the ingredients supporting these transformations should be extracted from the code differences brought by f2.
Fig. 2. Overview of CODA (workflow: the initial snippet and the training data are the inputs; transformations are applied and the target model is invoked only to test whether the attack succeeds; the adversarial snippet is the output).

With this intuition, by performing equivalent structure transformations on f1 (i.e., transforming for loops to while loops, where while loops are the loop structure used in f2) and identifier renaming transformations (i.e., renaming a and n to t and len respectively, where t and len are the identifier names used in f2), f3 is generated as shown in the third code snippet in Figure 1 and indeed attacks successfully, i.e., it makes the model produce a wrong prediction (palindrome) with high confidence (90.88%). Based on the code differences between the target input and the reference input, the ingredient space is largely reduced. For example, the ingredient space defined by identifier renaming transformations is reduced from all valid identifier names (i.e., an almost infinite set) to the identifier names occurring in the reference input but not in the target input (i.e., only two identifier names in this simplified example). Hence, it can help improve the ingredient search process and thus improve the attack effectiveness.
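The reduced renaming space in this example can be illustrated with a small sketch (ours). Here `f1_ids` and `f2_ids` are the variable identifiers hand-collected from Figure 1; a real implementation would extract them with a parser:

```python
def renaming_ingredients(target_ids: set, reference_ids: set) -> set:
    # Candidates are identifiers occurring in the reference input
    # but not in the target input.
    return reference_ids - target_ids

f1_ids = {"a", "n", "i", "j", "k"}   # variables in the target input f1
f2_ids = {"t", "len", "i", "j"}      # variables in the reference input f2
print(renaming_ingredients(f1_ids, f2_ids))  # {'t', 'len'}
```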
On the other hand, too small an ingredient space could also lose too many ingredients useful for successful attacks, and thus we select a set of reference inputs (rather than only one reference input) for guiding the attack process, in order to balance the size of the ingredient space and the number of useful ingredients.

III. APPROACH

A. Overview

In this work, we propose a novel perspective to attack deep code models more effectively and more efficiently, which utilizes the code differences between reference inputs and the target input to guide the generation of adversarial examples. From this perspective, we design an effective attack technique, called CODA. Specifically, the code differences brought by reference inputs provide effective ingredients for altering the prediction result of the target input by transforming it with these ingredients, which can contribute to successful attacks in CODA. However, as the semantics of reference inputs and the target input are different, the ingredients from some kinds of code differences can alter the original semantics, which is not allowed in adversarial attacks on deep code models. Hence, in CODA, we consider structure differences and identifier differences for measuring the code differences between them, which can preserve the original semantics during the attack process. In this way, the ingredient space can be effectively reduced to the one constituted by the two kinds of code differences between reference inputs and the target input, and thus the ingredient search process (for generating adversarial examples) can be largely improved.
In fact, not all the inputs that have different prediction results from the target one can be regarded as effective reference inputs for improving the adversarial attack process. In other words, different inputs could have different degrees of capability for reducing the ingredient search space and providing effective ingredients for altering the prediction result of the target input. Therefore, the first step in CODA is to select effective reference inputs for the target input in order to improve the attack effectiveness as much as possible (to be presented in Section III-B). Based on the selected reference inputs, CODA then measures the structure differences and identifier differences over the target input, which support extracting the ingredients for two corresponding kinds of semantic-preserving code transformations (i.e., equivalent structure transformations and identifier renaming transformations). With the guidance of reducing their code differences based on the two kinds of transformations, the target input could be effectively transformed into a successfully-attacking adversarial example. As equivalent structure transformations do not affect the naturalness of generated examples, CODA first applies this kind of transformations to reduce the code differences for improving the attack effectiveness (to be presented in Section III-C). Then, we apply identifier renaming transformations to further reduce the code differences for improving the generation of successfully-attacking adversarial examples (to be presented in Section III-D). In particular, CODA measures the semantic similarity between identifiers to guarantee the naturalness of generated examples. Figure 2 shows the overview of CODA. In a nutshell, by successively applying equivalent structure transformations and identifier renaming transformations to the target input, with the ingredient space defined by the code differences between the selected reference inputs and the target one, adversarial examples can be generated in the direction of reducing the code differences without altering the original semantics. In this way, the prediction result of the target input is more likely to be changed, leading to a successfully-attacking adversarial example. Due to the smaller ingredient search space (which still includes effective ingredients) and the clearer attack direction, the attack effectiveness could be largely improved by CODA.
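The overview above can be summarized as a high-level sketch (ours, under simplifying assumptions); all helper functions are hypothetical placeholders for the components detailed in Sections III-B to III-D:

```python
def coda_attack(model, x, y, training_set):
    refs = select_reference_inputs(model, x, training_set)    # Section III-B
    candidate = x
    # Stage 1: reduce structure differences (naturalness-preserving).
    for candidate in equivalent_structure_transforms(candidate, refs):
        if model(candidate) != y:   # the only model invocations needed
            return candidate        # successfully-attacking example
    # Stage 2: reduce identifier differences, guided by semantic similarity.
    for candidate in identifier_renaming_transforms(candidate, refs):
        if model(candidate) != y:
            return candidate
    return None                     # attack failed within the budget
```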
B. Reference Inputs Selection

The goal of reference inputs is to largely reduce the ingredient space. Also, the reduced space should include the ingredients that are effective for transforming the target input into a successfully-attacking adversarial example. In this way, the adversarial attack process can be largely improved by searching for effective ingredients more efficiently.

TABLE I. DESCRIPTIONS OF EQUIVALENT STRUCTURE TRANSFORMATIONS

- R1-loop: equivalent transformation between the for structure and the while structure.
  Before: for ( i=0; i<9; i++ ) { Body; }
  After:  i=0; while ( i<9 ) { Body; i++; }
- R2-branch: equivalent transformation between the if-else(-if) structure and the if-if structure.
  Before: if ( A ) { BodyA; } else if ( B ) { BodyB; }
  After:  if ( A ) { BodyA; } if ( !A && B ) { BodyB; }
- R3-calculation: equivalent numerical calculation transformation, e.g., for ++, --, +=, -=, *=, /=, %=, <<=, >>=, &=, |=, ^=.
  Before: i += 1;
  After:  i = i + 1;
- R4-constant: equivalent transformation between a constant and a variable assigned by the same constant.
  Before: println("Hello, World!");
  After:  String i = "Hello, World!"; println(i);
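As an illustration of rule R1, here is a simplified textual sketch (ours, not CODA's implementation) that rewrites a canonical single-clause for loop into its equivalent while form; it assumes a brace-delimited body without nested braces, whereas a real implementation would rewrite the AST:

```python
import re

FOR_PATTERN = re.compile(
    r"for\s*\(\s*(?P<init>[^;]*);\s*(?P<cond>[^;]*);\s*(?P<update>[^)]*)\)"
    r"\s*\{(?P<body>[^{}]*)\}"
)

def for_to_while(code: str) -> str:
    def rewrite(m):
        init, cond, update, body = (m.group(g).strip() for g in
                                    ("init", "cond", "update", "body"))
        return f"{init}; while ({cond}) {{ {body} {update}; }}"
    return FOR_PATTERN.sub(rewrite, code)

print(for_to_while("for ( i=0; i<9; i++ ) { Body; }"))
# -> i=0; while (i<9) { Body; i++; }
```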
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content='A && B ) { BodyB;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' } R3-calculation equivalent numerical calculation transformation, e.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content='g.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=', i += 1;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' i = i + 1;' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' ++, --, +=, -=, *=, /=, %=, <<=, >>=, &=, |= , ˆ = R4-constant equivalent transformation between a constant and println("Hello, World!' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' ");' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' String i = "Hello, World!' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' ";' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' a variable assigned by the same constant println(i);' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Although all the inputs that have different prediction results with the target one can provide ingredients for altering the pre- diction result of the target one after transformations, their capa- bilities for successful attacks could be different.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' To transform the target input to a successfully-attacking adversarial example with fewer perturbations, CODA should select the reference inputs, which can provide the ingredients that are more likely to conduct successful attacks for the target input.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Similar to the existing work [24]–[27], we assume that the prediction result of the target input is more likely to be changed from its original class denoted as ci (with the largest probability predicted by the target model) to the class with the second largest probability (denoted as cj).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Hence, the ingredients in the inputs belonging to cj are more likely to attack successfully on the target input, and thus CODA selects the inputs belonging to cj as the initial set of reference inputs.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Please note that all the reference inputs are selected from the training set to avoid introducing the contents beyond the cognitive scope of the target model.' 
Meanwhile, we only consider the training inputs whose prediction results are consistent with their ground-truth labels, in order to avoid introducing noise. However, the number of inputs belonging to the same class (i.e., cj as above) could be large, and thus the ingredient space constituted by the code differences between them and the target input could also be large. Hence, to further reduce the ingredient space for more effective adversarial example generation, CODA selects a subset of inputs with high similarity to the target input from the initial set of reference inputs, as the final set of reference inputs used by CODA. This is because smaller code differences can effectively limit the number of ingredients, leading to a smaller ingredient space. CODA does not select only one reference input, as a too-small ingredient space could incur a high risk of missing too many ingredients that contribute to successful attacks. That is, CODA selects a small set of reference inputs following the above two steps of selection, to balance the ingredient space size and the amount of ingredients contributing to successful attacks in the space.

We further introduce how to measure the similarity between the target input (denoted as t) and a reference input (denoted as r) for the second step of selection in CODA. In general, we can adopt some pre-trained models to represent the code as a vector and then measure code similarity by calculating the vector distance, like many existing studies [5], [6], [28]. However, as presented in Section III-A, CODA first applies equivalent structure transformations (rather than identifier renaming transformations) to reduce code differences for adversarial attacks, as this kind of transformation does not affect the naturalness of generated examples.
Moreover, the identifiers used in different code snippets are usually different due to the enormous identifier space, which may lead to low similarity between various code snippets. Hence, when measuring code similarity, CODA eliminates the influence of identifiers by replacing them with a placeholder. Specifically, CODA first represents t and r (after placeholder replacement) as vectors based on CodeBERT [6] (one of the most widely-used pre-trained models [29]–[31]), and then calculates the cosine similarity between the two vectors. In descending order of the calculated similarity, CODA selects the Top-N reference inputs for the follow-up adversarial attack process. Please note that, to make the selection process efficient, we randomly sample U inputs from the initial set for the second step of selection. We will investigate the influence of both U and N on CODA in Section VI.
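As an illustration of this second selection step, here is a minimal sketch. It assumes a `mask_identifiers` helper (e.g., built on a parser such as tree-sitter) that replaces every identifier with a placeholder, and it mean-pools CodeBERT's hidden states to obtain a fixed-size code vector; the pooling strategy is our assumption, since the paper does not specify one.

```python
import random
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
encoder = AutoModel.from_pretrained("microsoft/codebert-base")

def embed(code: str) -> torch.Tensor:
    # Mean-pool the last hidden states as a simple fixed-size code vector
    # (an assumption; any sentence-level pooling would fit the description).
    inputs = tokenizer(code, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state  # (1, seq_len, 768)
    return hidden.mean(dim=1).squeeze(0)

def select_references(target_code, candidates_cj, mask_identifiers, u=256, n=64):
    """Top-N most similar inputs among U sampled ones.

    candidates_cj: training inputs of class cj that the model predicts
    correctly (the initial reference set from the first selection step).
    """
    sampled = random.sample(candidates_cj, min(u, len(candidates_cj)))
    t_vec = embed(mask_identifiers(target_code))
    scored = [
        (torch.cosine_similarity(t_vec, embed(mask_identifiers(c)), dim=0).item(), c)
        for c in sampled
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [code for _, code in scored[:n]]
```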
C. Equivalent Structure Transformation

Based on the small set of selected reference inputs, CODA then extracts ingredients from the space defined by their code differences over the target input. That is, CODA transforms the target input into an adversarial example in the direction of reducing code differences. CODA first reduces structure differences by applying equivalent structure transformations to the target input, as they do not affect the naturalness of generated examples. To preserve the semantics of the target input, we design four categories of equivalent structure transformations in CODA, inspired by the existing work in metamorphic testing and code refactoring [32], [33]. In particular, we systematically consider all common kinds of code structures, i.e., loop structures, branch structures, and sequential structures (including numerical calculation and constant usage). We explain the four categories in detail in Table I, each of which is also illustrated with an example.

TABLE I
DESCRIPTIONS OF EQUIVALENT STRUCTURE TRANSFORMATIONS

R1-loop: equivalent transformation between the for structure and the while structure.
  Before: for ( i=0; i<9; i++ ) { Body; }
  After:  i=0; while ( i<9 ) { Body; i++; }
R2-branch: equivalent transformation between the if-else(-if) structure and the if-if structure.
  Before: if ( A ) { BodyA; } else if ( B ) { BodyB; }
  After:  if ( A ) { BodyA; } if ( !A && B ) { BodyB; }
R3-calculation: equivalent numerical calculation transformation over ++, --, +=, -=, *=, /=, %=, <<=, >>=, &=, |=, ^=.
  Before: i += 1;
  After:  i = i + 1;
R4-constant: equivalent transformation between a constant and a variable assigned by the same constant.
  Before: println("Hello, World!");
  After:  String i = "Hello, World!"; println(i);

Each category of transformations may include several specific rules. For example, the rules transforming += and transforming -= belong to the category of R3-calculation, and the rules transforming a for loop into a while loop and a while loop into a for loop belong to the category of R1-loop. In total, CODA has 20 specific rules across the four categories of transformations. Please note that not all the rules are applicable to code written in every programming language. For example, ++ and -- in R3-calculation are not supported by Python. Also, in R4-constant, the newly-defined variable cannot share the name of an existing variable in the code; otherwise, it may incur grammar errors and alter the original semantics. Due to the space limit, we put more details about all these specific rules on our project homepage [19].

Then, we illustrate how to apply each rule for reducing code differences. Each rule involves two structures, i.e., the one before transformation (s_b) and the one after transformation (s_a). CODA first counts the occurrences of s_b and s_a in the set of selected reference inputs (denoted as n_b and n_a), and then calculates their occurrence distribution, i.e., $n_b/(n_b+n_a)$ and $n_a/(n_b+n_a)$. Further, CODA applies each rule in a probabilistic way to reduce the differences in the occurrence distributions of s_b and s_a between the reference inputs and the target input. In this way, the structure differences in terms of s_b and s_a can be reduced effectively. More specifically, for each occurrence of s_b in the target input, CODA applies this rule with probability $n_a/(n_b+n_a)$, which also means that the occurrence is retained with probability $n_b/(n_b+n_a)$. In this step, CODA obtains M inputs from the target input, each of which is generated by applying all the applicable rules in the above probabilistic way, and then selects the input with the highest average similarity (also measured by the method described in Section III-B) to the selected reference inputs as the one for the follow-up adversarial attack process.
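A minimal sketch of this probabilistic application step is shown below. It assumes each rule is exposed through a small hypothetical interface (`count_before`/`count_after` to count occurrences of s_b/s_a, `sites_before` to locate occurrences of s_b, and `apply_at` to rewrite one occurrence); these names are illustrative placeholders, not the authors' implementation.

```python
import random

def apply_rule_probabilistically(target_code, references, rule):
    """Move the (s_b, s_a) occurrence distribution of the target toward
    the distribution observed in the reference inputs."""
    # Count occurrences of the before/after structures in the references.
    n_b = sum(rule.count_before(r) for r in references)
    n_a = sum(rule.count_after(r) for r in references)
    if n_b + n_a == 0:
        return target_code  # this rule provides no ingredient here
    p_apply = n_a / (n_b + n_a)

    # Rewrite each occurrence of s_b with probability n_a / (n_b + n_a);
    # a real implementation must re-locate sites after each rewrite.
    for site in rule.sites_before(target_code):
        if random.random() < p_apply:
            target_code = rule.apply_at(target_code, site)
    return target_code

def structure_transform(target_code, references, rules, similarity, m=64):
    # Generate M candidates and keep the one with the highest average
    # similarity to the reference inputs, as described above.
    candidates = []
    for _ in range(m):
        cand = target_code
        for rule in rules:
            cand = apply_rule_probabilistically(cand, references, rule)
        candidates.append(cand)
    return max(candidates,
               key=lambda c: sum(similarity(c, r) for r in references) / len(references))
```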
D. Identifier Renaming Transformation

To facilitate the generation of successfully-attacking adversarial examples, CODA then applies identifier renaming transformations to further reduce code differences. Inspired by the existing work [12]–[14], an identifier renaming transformation in CODA refers to replacing the name of an identifier in the target input with the name of an identifier in the selected reference inputs. For ease of presentation, we denote the set of identifiers in the target input as Vt and the set of identifiers in the selected reference inputs as Vr. To preserve the semantics of the target input and guarantee the grammatical correctness of the generated example, CODA ensures that the identifier used for replacement does not already exist in the target input.

Then, we illustrate how to apply this kind of transformation to the input obtained from the last step (i.e., equivalent structure transformations). As demonstrated by the existing work [12]–[14], renaming identifiers is effective for generating successfully-attacking adversarial examples, but can negatively affect the naturalness of generated examples. To ensure the naturalness of generated examples, CODA considers the semantic similarity between identifiers and designs an iterative transformation process like ALERT [14]. Specifically, CODA measures the semantic similarity between each identifier in Vt and each identifier in Vr by representing each identifier as a vector via word embedding. Here, CODA builds the pre-trained language model with the FastText algorithm [34] and calculates the cosine similarity between vectors to measure their semantic similarity. Then, CODA prioritizes the pairs of identifiers in descending order of their semantic similarity, and iteratively applies this transformation based on each pair of identifiers in the ranking list, which ensures that more natural transformations are performed first. After each iteration, CODA invokes the target model to check whether a successfully-attacking adversarial example has been generated. The iterative attack process terminates when a successfully-attacking adversarial example is generated or all the pairs have been used by this transformation. Please note that in each iteration CODA ensures that the pair of identifiers will not introduce repetitive identifiers into the generated example; otherwise, the pair is discarded.

Overall, CODA only invokes the target model when checking whether a successfully-attacking adversarial example has been generated. These are necessary model invocations for this task. Hence, CODA can largely reduce the number of model invocations compared with the existing techniques (e.g., ALERT [14]), which is confirmed by our study (Section V-A).
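The iterative renaming step could be sketched as follows, assuming a FastText model (here via gensim, which is one possible implementation of the algorithm), a hypothetical `apply_rename` helper that performs the actual source-level renaming, and `predict` as the target model's inference function:

```python
from itertools import product
from gensim.models import FastText  # one possible FastText implementation

def rename_attack(code, v_t, v_r, ft_model: FastText,
                  apply_rename, predict, true_label):
    """Rename identifiers in `code` (identifier set v_t) to semantically
    similar identifiers from the references (v_r), most-similar pairs
    first, until the model's prediction flips."""
    # Rank all (target identifier, reference identifier) pairs by
    # FastText cosine similarity, descending; never pick a replacement
    # name that already exists in the target input.
    pairs = sorted(
        ((t, r) for t, r in product(v_t, v_r) if r not in v_t),
        key=lambda p: ft_model.wv.similarity(p[0], p[1]),
        reverse=True,
    )
    renamed, used = set(), set(v_t)
    for old, new in pairs:
        if old in renamed or new in used:
            continue  # would introduce a repetitive identifier; discard pair
        code = apply_rename(code, old, new)
        renamed.add(old)
        used.add(new)
        # The only model invocations happen here, once per iteration.
        if predict(code) != true_label:
            return code  # successfully-attacking adversarial example
    return None
```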
IV. EVALUATION DESIGN

In the study, we address four research questions (RQs):
RQ1: How does CODA perform in terms of effectiveness and efficiency compared with state-of-the-art techniques?
RQ2: Are the adversarial examples generated by CODA natural to humans?
RQ3: Are the adversarial examples generated by CODA useful for improving the robustness of deep code models?
RQ4: Does each main component contribute to the overall effectiveness of CODA?

A. Subjects

1) Datasets and Tasks: To sufficiently evaluate CODA, we consider all five code-based tasks used in the studies evaluating the state-of-the-art techniques (i.e., CARROT [12] and ALERT [14]). The statistics of the datasets are shown in the first four columns of Table II, which represent the task, the number of inputs in the training/validation/test set, the number of classes for the classification task, and the programming language of the inputs.

TABLE II
STATISTICS OF OUR USED SUBJECTS

Task                         | Train/Validate/Test | Class | Language | Acc. (CB / GCB)
Vulnerability Prediction     | 21,854/2,732/2,732  | 2     | C        | 63.76% / 63.65%
Clone Detection              | 90,102/4,000/4,000  | 2     | Java     | 96.97% / 97.36%
Authorship Attribution       | 528/–/132           | 66    | Python   | 90.35% / 89.48%
Functionality Classification | 41,581/–/10,395     | 104   | C        | 98.18% / 98.66%
Defect Prediction            | 27,058/–/6,764      | 4     | C/C++    | 84.37% / 83.98%
(CB is short for CodeBERT and GCB is short for GraphCodeBERT.)

The task of vulnerability prediction aims to predict whether a given code snippet has vulnerabilities. Its dataset is extracted from two C projects (i.e., FFmpeg [35] and Qemu [36]) by Zhou et al. [2] and has been integrated into the CodeXGLUE benchmark [30]. The task of clone detection aims to detect whether two given code snippets are semantically equivalent. Its dataset is from BigCloneBench [37], the most widely-used dataset for clone detection. The existing work [14] randomly sampled 90,102/4,000/4,000 inputs from the benchmark for training/validation/testing, to keep the experiment at a computationally friendly scale; we used the same dataset in our study. The task of authorship attribution aims to identify the author of a given code snippet.
Its dataset is the Google Code Jam (GCJ) dataset [17]. The task of functionality classification aims to classify the functionality of a given code snippet: if code snippets solve the same problem, they are regarded as having the same functionality [18]. Its dataset is the Open Judge (OJ) benchmark [38], which has also been integrated into the CodeXGLUE benchmark [30]. The task of defect prediction aims to predict whether a given code snippet is defective and, if so, its defect type. Its dataset is the CodeChef dataset [39], which is labeled by the execution results on the CodeChef platform (i.e., four defect types: no defect, wrong answer, timeout, and runtime error).

2) Models: Following the existing work [14], we adopted two state-of-the-art pre-trained models for code-based tasks, i.e., CodeBERT [6] and GraphCodeBERT [7], and then fine-tuned them on the five tasks based on the corresponding datasets, respectively. In total, we obtained 10 deep code models as the subjects. The last two columns of Table II show the used pre-trained model and the accuracy of the deep code model after fine-tuning, respectively. When fine-tuning CodeBERT and GraphCodeBERT on these tasks (except GraphCodeBERT on functionality classification and defect prediction), we used the same hyper-parameter settings provided by the existing work [12], [14].
As there is no instruction on the hyper-parameter settings for fine-tuning GraphCodeBERT on functionality classification and defect prediction, we used the same settings as those used for authorship attribution (they are all multi-class classification tasks). Indeed, the achieved model performance outperforms that of the models (e.g., TBCNN [38] and CodeBERT [6]) used in the existing work [12] on the same datasets, indicating that the transferred hyper-parameter settings are reasonable. Overall, the subjects used in our study are indeed diverse, involving different tasks, different pre-trained models, different numbers of classes, different programming languages, etc. This is very helpful for sufficiently evaluating the performance of CODA.

B. Compared Techniques

In the study, we compared CODA with two state-of-the-art techniques for attacking deep code models, i.e., CARROT [12] and ALERT [14], which have been introduced in Section I (the third paragraph). We adopted their implementations and the recommended parameter settings provided by the corresponding papers [12], [14]. As the original version of CARROT can only attack C/C++ code, we extended it to attack Python and Java code for a sufficient comparison.

C. Implementations

We implemented CODA in Python and adopted tree-sitter [40] to extract identifiers from code, following the existing work [14]. We set the parameters in CODA by conducting a preliminary experiment, i.e., U = 256, N = 64, and M = 64. We discuss the influence of the settings of the main parameters on CODA in Section VI. We released our code and experimental data at our project homepage [19]. All the experiments were conducted on a server running Ubuntu 20.04 with an Intel(R) Xeon(R) Silver 4214 CPU @ 2.20GHz, 256GB memory, and an NVIDIA GeForce RTX 2080 Ti GPU.

V. RESULTS AND ANALYSIS

A. RQ1: Effectiveness and Efficiency

1) Setup: For each deep code model, we applied CODA, CARROT, and ALERT to generate adversarial examples from each target input in the test set, respectively. We measured their effectiveness and efficiency based on the following metrics. To reduce the influence of randomness, we repeated all the experiments (including those for other RQs) 10 times and report the average results.

Following the existing work [14], we adopted the attack success rate (ASR) to measure the effectiveness of each technique. ASR is the percentage of target inputs from which an attack technique can generate a successfully-attacking adversarial example. Larger ASR values mean better attack effectiveness. Also, it is important to measure whether the prediction confidence (i.e., the probability of the ground-truth class of the target input) is decreased by the generated examples, even when no successfully-attacking adversarial example is generated from a target input.
Hence, we also calculated the prediction confidence decrement (PCD) to measure the effectiveness of each technique. PCD is calculated as the prediction confidence of the target input minus the minimum prediction confidence among the set of examples generated from the target input. If the former is smaller than the latter, we regard PCD as 0, indicating that the generated examples cannot decrease the prediction confidence of the target input. Larger PCD values mean better attack effectiveness. In addition, following the existing work [12], [14], we used the time spent on the overall attack process (i.e., completing the adversarial example generation process from all the target inputs) and the average number of model invocations for generating examples from one target input to measure the efficiency of each technique. Less time and fewer model invocations mean higher efficiency.
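For clarity, the two effectiveness metrics can be restated compactly as follows; the notation is ours, introduced only for this restatement, with T the set of target inputs, E(t) the set of examples generated from target t, and p_gt(x) the model's predicted probability of t's ground-truth class on input x.

```latex
\mathrm{ASR} = \frac{\bigl|\{\, t \in T \mid \exists\, x \in E(t)
      \text{ that successfully attacks the model} \,\}\bigr|}{|T|},
\qquad
\mathrm{PCD}(t) = \max\!\Bigl(0,\; p_{gt}(t) - \min_{x \in E(t)} p_{gt}(x)\Bigr).
```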
2) Results: Table III shows the comparison results among CARROT, ALERT, and CODA in terms of ASR. From this table, CODA always outperforms CARROT and ALERT on all the tasks based on both CodeBERT and GraphCodeBERT, demonstrating the stable attack effectiveness of CODA. On average, CODA achieves 70.11% and 89.83% higher ASR than CARROT and ALERT across all five tasks on CodeBERT, and 89.34% and 57.67% higher ASR on GraphCodeBERT, respectively.

TABLE III
EFFECTIVENESS COMPARISON IN TERMS OF ASR

Task                         | CodeBERT (CARROT / ALERT / CODA) | GraphCodeBERT (CARROT / ALERT / CODA)
Vulnerability Prediction     | 33.72% / 53.62% / 89.58%         | 37.40% / 76.95% / 94.72%
Clone Detection              | 20.78% / 27.79% / 44.65%         |  3.50% /  7.96% / 27.37%
Authorship Attribution       | 44.44% / 35.78% / 79.05%         | 31.68% / 61.47% / 92.00%
Functionality Classification | 44.15% / 10.04% / 56.74%         | 42.76% / 11.22% / 57.44%
Defect Prediction            | 71.59% / 65.15% / 95.18%         | 79.08% / 75.87% / 96.58%
Average                      | 42.94% / 38.48% / 73.04%         | 38.88% / 46.69% / 73.62%

Fig. 3. Comparison in terms of prediction confidence decrement: (a) PCD on attacking CodeBERT; (b) PCD on attacking GraphCodeBERT.

Figures 3(a) and 3(b) show the comparison results among the three techniques in terms of PCD on CodeBERT and GraphCodeBERT, respectively. From these figures, the upper quartile, median, and lower quartile of CODA are always larger than (or equal to) those of both CARROT and ALERT, regardless of the tasks and the pre-trained models, demonstrating that CODA produces more significant attacks for decreasing the prediction confidence of target inputs. For example, on CodeBERT, the average improvements of CODA over CARROT and ALERT are 101.88% and 520.65% across all the tasks in terms of average PCD, respectively. Similarly, on GraphCodeBERT, the average improvements of CODA over CARROT and ALERT are 76.35% and 560.15%, respectively.

Fig. 4. Comparison in terms of model invocation times (the y-axis refers to normalized values following the existing work [14]): (a) on attacking CodeBERT; (b) on attacking GraphCodeBERT.

Besides, CARROT, ALERT, and CODA take 159.19 hours, 198.89 hours, and 39.59 hours to complete the entire attack process on all five tasks, respectively. Further, we measured the number of model invocations for each target input during the attack process, whose results are shown in Figures 4(a) and 4(b). From these figures, CODA performs fewer model invocations than both CARROT and ALERT, regardless of the tasks and pre-trained models. On average, CODA performs 65.73% and 78.58% fewer model invocations than CARROT and ALERT across all the tasks on CodeBERT, and 34.07% and 75.31% fewer model invocations on GraphCodeBERT, respectively.
The results demonstrate that CODA is significantly the most efficient of the three techniques.

Answer to RQ1: CODA spends less time and fewer model invocations on completing the entire attack process, yet generates more successfully-attacking examples with a more significant prediction confidence decrement on all the subjects, than the state-of-the-art techniques (i.e., CARROT and ALERT).

B. RQ2: Naturalness of Adversarial Examples

1) Setup: It is important to check whether the generated adversarial examples are natural to human judges [14], [41]. Here, we conducted a user study to compare the naturalness of the examples generated by CODA, CARROT, and ALERT; our user study shares the same design as the one conducted in the existing work [14]:

Data Preparation. For each subject, we randomly sampled 10 target inputs, and then for each technique we randomly sampled an adversarial example from the set of examples generated from each sampled target input. That is, for each sampled target input, we constructed three pairs of code snippets, each of which contains the target input and an adversarial example generated by CODA, CARROT, or ALERT. In total, we obtained 300 pairs of code snippets for the user study, due to 10 subjects × 10 target inputs × 3 techniques.

Participants. As in the existing work [14], the user study involves four non-author participants, each of whom has a Bachelor's/Master's degree in Computer Science with at least five years of programming experience.

Process. For an objective evaluation, we did not tell the participants which technique generated the adversarial example in a pair of code snippets. Also, we highlighted the changes in each pair of code snippets to facilitate the manual evaluation. Then, each participant individually evaluated each pair by assessing to what extent the changes are natural in the code context and the changed identifiers preserve the original semantics, following the existing work [14]. Specifically, the participants gave a score for each pair based on a 5-point Likert scale [42] (1 means strongly disagree and 5 means strongly agree), following the existing work [14], [43].

Fig. 5. Average score evaluating the naturalness of examples per participant.

2) Results: Figure 5 shows the average score of the adversarial examples generated by each technique for each participant. From this figure, the conclusions from different participants are consistent: the naturalness of the adversarial examples generated by CODA and ALERT is comparably high (around 4.50 on average) and significantly higher than that of CARROT (just 2.89 on average). ALERT is a naturalness-aware technique whose core contribution is to ensure the naturalness of generated examples, but CODA achieves naturalness scores similar to it, demonstrating that CODA can also generate highly natural adversarial examples.

Answer to RQ2: The adversarial examples generated by CODA are close in naturalness to those of the state-of-the-art naturalness-aware attack technique (i.e., ALERT), which is consistently confirmed by the participants.
C. RQ3: Model Robustness Improvement

1) Setup: We studied the value of the generated adversarial examples by using them to improve the robustness of the target model via an adversarial fine-tuning strategy. For each subject, we divided the test set into two equal parts (S1 and S2) to avoid data leakage between the adversarial training set and the evaluation set constructed by the same technique. Specifically, we applied each technique to generate examples from S1 and selected one generated adversarial example for each target input, i.e., the one that successfully attacks the model or, if no successfully-attacking example is generated, the one that achieves the largest decrement in prediction confidence. The selected examples were integrated with the training set to form the adversarial training set, which was used for fine-tuning the model. Thus, for a given subject, the adversarial training sets constructed by the three techniques have the same size. After obtaining a fine-tuned model for each subject with each technique, we evaluated it on the evaluation sets of successfully-attacking examples generated from S2 by CODA, CARROT, and ALERT, respectively. Then, we measured the accuracy of the fine-tuned model on the three evaluation sets to measure its ability to defend against attacks.
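The construction of the adversarial training set can be sketched as follows. This is a minimal illustration, assuming hypothetical helpers: attack(model, target) returns candidate adversarial examples annotated with the model's prediction and its confidence on the true label, and fine_tune retrains the model on the augmented set:

    def build_adv_training_set(model, train_set, s1, attack):
        # S1 is one half of the test set; `attack` is any of the studied
        # techniques. Assumes at least one candidate is returned per target.
        selected = []
        for target in s1:
            candidates = attack(model, target)
            success = [c for c in candidates if c.prediction != target.label]
            if success:
                selected.append(success[0])  # a successfully-attacking example
            else:
                # fall back to the largest prediction confidence decrement
                selected.append(min(candidates, key=lambda c: c.confidence))
        return train_set + selected  # same size for every technique

    adv_model = fine_tune(model, build_adv_training_set(model, train_set, s1, attack))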
2) Results: Table IV shows the effectiveness of improving model robustness with the examples generated by each studied technique. The first header row denotes the technique that constructed the evaluation set, while the second denotes the technique that constructed the adversarial training set. The value in each cell is the ratio of the adversarial examples in the evaluation set that can be defended by the model fine-tuned on the corresponding adversarial training set. We found that on most subjects, CODA improves model robustness to defend against the largest ratio of the adversarial examples generated by CODA, CARROT, and ALERT, respectively. On average, the models fine-tuned with CODA can defend against 63.64%, 66.96%, and 76.68% of the successfully-attacking examples generated by CARROT, ALERT, and CODA, respectively, which improves over the models fine-tuned with CARROT by 6.35%, 25.69%, and 42.67%, and over those fine-tuned with ALERT by 32.65%, 11.70%, and 25.99%, respectively. Besides, the cross-technique defense results indicate that the examples generated by CODA could subsume those generated by CARROT and ALERT to a large extent. We also applied each fine-tuned model to the corresponding test set and found that its accuracy is almost consistent with the original accuracy (all absolute accuracy differences are less than 1%). The results demonstrate that CODA is more helpful than CARROT and ALERT for improving model robustness without damaging the original model performance. In four evaluation sets (constructed by ALERT or CARROT), CODA performs worse than ALERT or CARROT, as an adversarial training set and an evaluation set generated by the same technique could be more similar to each other.
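Concretely, each cell of Table IV can be read as a defense rate over the corresponding evaluation set; a minimal sketch of the computation (the predict and label accessors are hypothetical):

    def defense_rate(fine_tuned_model, evaluation_set):
        # Fraction of previously successful adversarial examples (from S2)
        # that the fine-tuned model now predicts correctly, i.e., defends.
        defended = sum(1 for ex in evaluation_set
                       if fine_tuned_model.predict(ex.code) == ex.label)
        return defended / len(evaluation_set)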
Answer to RQ3: CODA helps improve model robustness more effectively than CARROT and ALERT, in terms of defending against attacks from the adversarial examples generated by itself as well as those generated by the other two techniques.

D. RQ4: Contribution of Each Main Component

1) Setup: We studied the contribution of each main component in CODA, i.e., reference inputs selection (RIS), equivalent structure transformations (EST), and identifier renaming transformations (IRT), by constructing three variants of CODA (a sketch of the ablation switch follows this list):

w/o RIS: we replaced RIS with a method that randomly selects N inputs from the training data as reference inputs.

w/o EST: we removed EST from CODA, i.e., it performs identifier renaming transformations directly after selecting reference inputs.

w/o IRT: we removed IRT from CODA, i.e., it directly checks whether a successfully-attacking example is generated after equivalent structure transformations.
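The ablation switch for the w/o RIS variant can be sketched as follows; coda_select_references is a hypothetical stand-in for CODA's actual reference inputs selection:

    import random

    def select_references(target, train_data, n=64, use_ris=True):
        # "w/o RIS" draws N reference inputs uniformly at random from the
        # training data; full CODA uses its reference inputs selection.
        if use_ris:
            return coda_select_references(target, train_data, n)
        return random.sample(train_data, n)  # the w/o RIS variant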
TABLE IV
ROBUSTNESS IMPROVEMENT OF THE TARGET MODELS AFTER ADVERSARIAL FINE-TUNING

Evaluation set:                          CARROT                        ALERT                         CODA
Training set:                     CARROT   ALERT    CODA       CARROT   ALERT    CODA       CARROT   ALERT    CODA
Vulnerability   CodeBERT          29.14%   21.11%   29.69%     23.43%   26.27%   34.44%     32.16%   31.73%   38.82%
Prediction      GraphCodeBERT     12.37%   19.59%   21.65%     16.33%   17.35%   23.71%     25.77%   24.74%   34.02%
Clone           CodeBERT          83.15%   42.31%   94.44%     52.65%   72.46%   75.32%     38.51%   71.45%   89.78%
Detection       GraphCodeBERT     75.00%   66.67%   77.50%     79.17%   84.29%   92.31%     35.71%   57.69%   92.97%
Authorship      CodeBERT          45.06%   40.67%   41.03%     51.25%   56.25%   58.82%     45.67%   43.33%   76.47%
Attribution     GraphCodeBERT     81.75%   67.08%   72.40%     79.41%   78.67%   100.00%    45.59%   80.39%   84.75%
Functionality   CodeBERT          83.46%   72.80%   81.51%     70.83%   71.75%   79.41%     78.92%   71.18%   95.43%
Classification  GraphCodeBERT     67.53%   75.19%   77.27%     32.04%   52.62%   62.98%     91.22%   90.81%   93.08%
Defect          CodeBERT          52.73%   25.81%   66.03%     74.88%   75.87%   83.12%     76.86%   68.66%   85.36%
Prediction      GraphCodeBERT     68.20%   48.54%   74.88%     52.73%   63.91%   59.45%     67.08%   68.66%   76.14%
Average                           59.84%   47.98%   63.64%     53.27%   59.94%   66.96%     53.75%   60.86%   76.68%

TABLE V
ABLATION TEST FOR CODA IN TERMS OF AVERAGE ASR

Model           w/o RIS   w/o EST   w/o IRT   CODA
CodeBERT        30.83%    62.73%    35.14%    73.04%
GraphCodeBERT   29.49%    62.41%    26.24%    73.62%

TABLE VI
INFLUENCE OF HYPER-PARAMETER U

U               64        128       256       512       1024
CodeBERT        60.14%    67.90%    73.04%    75.27%    75.83%
GraphCodeBERT   61.92%    70.16%    73.62%    74.98%    75.69%
2) Results: Table V shows the average ASR of CODA and each variant across all the tasks on CodeBERT and GraphCodeBERT, respectively. Due to the space limit, the results on each task can be found at our project homepage [19]. We found that CODA outperforms all three variants in terms of average ASR, with improvements of 17.20%∼143.14%, demonstrating the contribution of each main component in CODA.
Also, reference inputs selection and identifier renaming transformations contribute more than equivalent structure transformations. A possible reason is that not all the rules of equivalent structure transformations are applicable to every target input, whereas identifier renaming transformations are applicable to all inputs. We can enrich the rules of equivalent structure transformations in the future to further improve the attack effectiveness.

Answer to RQ4: Reference inputs selection, equivalent structure transformations, and identifier renaming transformations all contribute to the overall effectiveness of CODA, demonstrating the necessity of each component.

VI. THREATS TO VALIDITY

The main threat to validity lies in the settings of the parameters in CODA. Here, we investigated the influence of the two important parameters in CODA (i.e., U and N introduced in Section III-B), which affect the selection of reference inputs (a schematic sketch of this two-step selection is given at the end of this section). Tables VI and VII show the influence of U and N in terms of average ASR across all the tasks.

TABLE VII
INFLUENCE OF HYPER-PARAMETER N

N               1         4         16        32        64        128
CodeBERT        28.08%    46.33%    61.07%    67.12%    73.04%    76.38%
GraphCodeBERT   31.84%    46.46%    60.40%    66.12%    73.62%    74.93%

As U increases, CODA performs better, since incorporating more inputs into the second step of selection increases the possibility of finding more effective reference inputs. Similarly, as N increases within our studied range, more effective ingredients can be included, leading to better effectiveness. However, the increase in average ASR becomes smaller as U and N grow, and meanwhile incorporating more inputs incurs more cost in similarity calculation and code transformations. Hence, balancing the effectiveness and efficiency of CODA, we set U to 256 and N to 64 as the default settings for practical use.
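The interaction of U and N can be illustrated with the following schematic sketch of a two-step selection; coarse_score and fine_score are hypothetical similarity measures used for illustration only, not the exact selection criteria of CODA:

    def select_reference_inputs(target, train_data, U=256, N=64):
        # Step 1: narrow the training data to the U most promising
        # candidates under a cheap measure.
        pool = sorted(train_data, key=lambda x: coarse_score(target, x),
                      reverse=True)[:U]
        # Step 2: re-rank the pool with a finer measure and keep N
        # reference inputs.
        return sorted(pool, key=lambda x: fine_score(target, x),
                      reverse=True)[:N]

A larger U widens the candidate pool available to the second step, while a larger N keeps more reference inputs, both at extra cost.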
VII. RELATED WORK

Besides the state-of-the-art techniques compared in our study (i.e., CARROT [12] and ALERT [14]), there are some other adversarial example generation techniques for deep code models. For example, Yefet et al. [44] proposed DAMP, which changes variables in the target input via gradient computation. It works only for models that use one-hot encoding to process code, and thus cannot attack models based on the state-of-the-art CodeBERT [6] and GraphCodeBERT [7], which use different encoding methods. Zhang et al. [13] proposed MHM, which iteratively performs identifier renaming transformations to generate adversarial examples based on the Metropolis-Hastings algorithm [45]–[47].
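Schematically, a Metropolis-Hastings style renaming attack can be sketched as follows; this illustrates the general acceptance rule rather than MHM's exact formulation, and rename_random_identifier, model.predict, and model.confidence are hypothetical helpers:

    import math
    import random

    def mh_rename_attack(model, code, label, vocab, iters=100, temp=0.1):
        # Propose a random identifier rename; accept it when the model's
        # confidence on the true label drops, and otherwise with a
        # probability that decays in the size of the confidence increase.
        current = code
        for _ in range(iters):
            proposal = rename_random_identifier(current, vocab)
            if model.predict(proposal) != label:
                return proposal  # attack succeeded
            delta = (model.confidence(current, label)
                     - model.confidence(proposal, label))
            if delta > 0 or random.random() < math.exp(delta / temp):
                current = proposal
        return None  # no adversarial example found within the budget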
MHM underperforms CARROT and ALERT, as presented in the existing studies [12], [14]. Pour et al. [48] proposed a search-based technique with an iterative refactoring-based process; it does not ensure the naturalness of the generated examples, especially with its rule of dead code insertion. All of these techniques still search for effective ingredients in an enormous space, which limits their effectiveness. Different from them, our work designs the first code-difference-guided attack technique, which largely reduces the ingredient space and thereby improves attack effectiveness. There are also many adversarial attack techniques in other domains, such as FGSM [49], JSMA [50], and BIM [10] in image processing. However, they are not applicable to attacking deep code models, as source code is discrete and has to strictly stick to grammar and semantics constraints.

VIII. CONCLUSION

To improve attack effectiveness against deep code models, we propose a novel perspective: exploiting the code differences between reference inputs and the target input to guide the generation of adversarial examples.
From this perspective, we design CODA, which reduces the ingredient space to the one constituted by structure and identifier differences, and designs equivalent structure transformations and identifier renaming transformations to preserve the original semantics. We conducted an extensive study on two popular pre-trained models with five tasks. The results demonstrate that CODA performs more successful attacks in less time than the state-of-the-art techniques (i.e., CARROT and ALERT), and confirm the naturalness of its generated examples as well as its capability of improving model robustness.

REFERENCES

[1] M. White, M. Tufano, C. Vendome, and D. Poshyvanyk, "Deep learning code fragments for code clone detection," in 2016 31st IEEE/ACM International Conference on Automated Software Engineering (ASE). IEEE, 2016, pp. 87–98.
[2] Y. Zhou, S. Liu, J. Siow, X. Du, and Y. Liu, "Devign: Effective vulnerability identification by learning comprehensive program semantics via graph neural networks," in Proceedings of the 33rd International Conference on Neural Information Processing Systems, 2019, pp. 10197–10207.
[3] J. Li, Y. Wang, M. R. Lyu, and I. King, "Code completion with neural attention and pointer networks," in Proceedings of the 27th International Joint Conference on Artificial Intelligence, 2018, pp. 4159–4165.
[4] U. Alon, S. Brody, O. Levy, and E. Yahav, "code2seq: Generating sequences from structured representations of code," in International Conference on Learning Representations, 2018.
[5] U. Alon, M. Zilberstein, O. Levy, and E. Yahav, "code2vec: Learning distributed representations of code," Proceedings of the ACM on Programming Languages, vol. 3, no. POPL, pp. 1–29, 2019.
[6] Z. Feng, D. Guo, D. Tang, N. Duan, X. Feng, M. Gong, L. Shou, B. Qin, T. Liu, D. Jiang et al., "CodeBERT: A pre-trained model for programming and natural languages," in Findings of the Association for Computational Linguistics: EMNLP 2020, 2020, pp. 1536–1547.
[7] D. Guo, S. Ren, S. Lu, Z. Feng, D. Tang, S. Liu, L. Zhou, N. Duan, A. Svyatkovskiy, S. Fu et al., "GraphCodeBERT: Pre-training code representations with data flow," in ICLR, 2021.
[8] Y. Li, D. Choi, J. Chung, N. Kushman, J. Schrittwieser, R. Leblond, T. Eccles, J. Keeling, F. Gimeno, A. D. Lago et al., "Competition-level code generation with AlphaCode," arXiv preprint arXiv:2203.07814, 2022.
[9] M. Chen, J. Tworek, H. Jun, Q. Yuan, H. P. d. O. Pinto, J. Kaplan, H. Edwards, Y. Burda, N. Joseph, G. Brockman et al., "Evaluating large language models trained on code," arXiv preprint arXiv:2107.03374, 2021.
[10] A. Kurakin, I. J. Goodfellow, and S. Bengio, "Adversarial examples in the physical world," in Artificial Intelligence Safety and Security. Chapman and Hall/CRC, 2018, pp. 99–112.
[11] Y. Qin, N. Carlini, G. Cottrell, I. Goodfellow, and C. Raffel, "Imperceptible, robust, and targeted adversarial examples for automatic speech recognition," in International Conference on Machine Learning. PMLR, 2019, pp. 5231–5240.
[12] H. Zhang, Z. Fu, G. Li, L. Ma, Z. Zhao, H. Yang, Y. Sun, Y. Liu, and Z. Jin, "Towards robustness of deep program processing models—detection, estimation, and enhancement," ACM Transactions on Software Engineering and Methodology (TOSEM), vol. 31, no. 3, pp. 1–40, 2022.
[13] H. Zhang, Z. Li, G. Li, L. Ma, Y. Liu, and Z. Jin, "Generating adversarial examples for holding robustness of source code processing models," in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, no. 01, 2020, pp. 1169–1176.
[14] Z. Yang, J. Shi, J. He, and D. Lo, "Natural attack for pre-trained models of code," arXiv preprint arXiv:2201.08698, 2022.
[15] C. Casalnuovo, E. T. Barr, S. K. Dash, P. Devanbu, and E. Morgan, "A theory of dual channel constraints," in 2020 IEEE/ACM 42nd International Conference on Software Engineering: New Ideas and Emerging Results (ICSE-NIER). IEEE, 2020, pp. 25–28.
[16] H. Wei and M. Li, "Supervised deep features for software functional clone detection by exploiting lexical and syntactical information in source code," in IJCAI, 2017, pp. 3034–3040.
[17] B. Alsulami, E. Dauber, R. Harang, S. Mancoridis, and R. Greenstadt, "Source code authorship attribution using long short-term memory based networks," in European Symposium on Research in Computer Security. Springer, 2017, pp. 65–82.
[18] J. Zhang, X. Wang, H. Zhang, H. Sun, K. Wang, and X. Liu, "A novel neural source code representation based on abstract syntax tree," in 2019 IEEE/ACM 41st International Conference on Software Engineering (ICSE). IEEE, 2019, pp. 783–794.
[19] "Coda," To be announced later, Accessed: 2022.
[20] Q. Huang, X. Xia, Z.
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Xing, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Lo, and X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Wang, “Api method recommendation without worrying about the task-api knowledge gap,” in 2018 33rd IEEE/ACM International Conference on Automated Software Engineering (ASE).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' IEEE, 2018, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' 293–304.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' [21] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Karmakar and R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Robbes, “What do pre-trained code models know about code?”' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' in 2021 36th IEEE/ACM International Conference on Automated Software Engineering (ASE).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' IEEE, 2021, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' 1332–1336.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' [22] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Allamanis, H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Peng, and C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Sutton, “A convolutional attention net- work for extreme summarization of source code,” in 33rd International Conference on Machine Learning: ICML 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' PMLR, 2016, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' 2091– 2100.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' [23] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Phan, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Le Nguyen, and L.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Bui, “Convolutional neural networks over control flow graphs for software defect prediction,” in 2017 IEEE 29th International Conference on Tools with Artificial Intelligence (ICTAI).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' IEEE, 2017, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' 45–52.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' [24] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Sharif, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Bhagavatula, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Bauer, and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Reiter, “Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition,” in Proceedings of the 2016 acm sigsac conference on computer and communications security, 2016, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' 1528–1540.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' [25] K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Grosse, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Manoharan, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Papernot, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Backes, and P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' McDaniel, “On the (statistical) detection of adversarial examples,” arXiv preprint arXiv:1702.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content='06280, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' [26] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Prakash, N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Moran, S.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Garber, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' DiLillo, and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Storer, “Deflecting adversarial attacks with pixel deflection,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2018, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' 8571– 8580.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' [27] W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Shen, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Li, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Chen, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Han, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Zhou, and B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Xu, “Multiple- boundary clustering and prioritization to promote neural network retrain- ing,” in Proceedings of the 35th IEEE/ACM International Conference on Automated Software Engineering, 2020, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' 410–422.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' [28] G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Zhao and J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Huang, “Deepsim: deep learning code functional similar- ity,” in Proceedings of the 2018 26th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering, 2018, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' 141–151.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' [29] X.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Zhou, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Han, and D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Lo, “Assessing generalizability of codebert,” in 2021 IEEE International Conference on Software Maintenance and Evolution (ICSME).' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' IEEE, 2021, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' 425–436.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' [30] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Lu, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Guo, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Ren, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Huang, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Svyatkovskiy, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Blanco, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Clement, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Drain, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Jiang, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Tang et al.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=', “Codexglue: A machine learning benchmark dataset for code understanding and generation,” in Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 1), 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' [31] C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Pan, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Lu, and B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Xu, “An empirical study on software defect prediction using codebert model,” Applied Sciences, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' 11, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' 11, p.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' 4793, 2021.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' [32] K.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Nakamura and N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Ishiura, “Random testing of c compilers based on test program generation by equivalence transformation,” in 2016 IEEE Asia Pacific Conference on Circuits and Systems (APCCAS).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' IEEE, 2016, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' 676–679.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' [33] H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Cheers, Y.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Lin, and S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Smith, “Spplagiarise: A tool for generating simulated semantics-preserving plagiarism of java source code,” in 2019 IEEE 10th International conference on software engineering and service science (ICSESS).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' IEEE, 2019, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' 617–622.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' [34] P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Bojanowski, E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Grave, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Joulin, and T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Mikolov, “Enriching word vectors with subword information,” Transactions of the association for computational linguistics, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' 5, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' 135–146, 2017.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' [35] “Ffmpeg,” https://www.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content='ffmpeg.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content='org/, Accessed: 2022.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' [36] “Qemu,” https://www.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content='qemu.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content='org/, Accessed: 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' 11 [37] J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Svajlenko, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' F.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Islam, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Keivanloo, C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Roy, and M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Mia, “Towards a big data curated benchmark of inter-project code clones,” in 2014 IEEE International Conference on Software Maintenance and Evolution.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' IEEE, 2014, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' 476–480.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' [38] L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Mou, G.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Li, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Zhang, T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Wang, and Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Jin, “Convolutional neural networks over tree structures for programming language processing,” in Thirtieth AAAI conference on artificial intelligence, 2016.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' [39] “Codechef,” https://codechef.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content='com/, Accessed: 2022.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' [40] “Tree-sitter,” https://tree-sitter.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content='github.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content='io/tree-sitter/, Accessed: 2022.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' [41] C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Szegedy, W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Zaremba, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Sutskever, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Bruna, D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Erhan, I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Good- fellow, and R.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Fergus, “Intriguing properties of neural networks,” in 2nd International Conference on Learning Representations, ICLR 2014, 2014.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' [42] A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Joshi, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Kale, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Chandel, and D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Pal, “Likert scale: Explored and explained,” British journal of applied science & technology, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' 7, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' 4, p.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' 396, 2015.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' [43] D.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Jin, Z.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Jin, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' T.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Zhou, and P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Szolovits, “Is bert really robust?' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' a strong baseline for natural language attack on text classification and entailment,” in Proceedings of the AAAI conference on artificial intelligence, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' 34, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' 05, 2020, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' 8018–8025.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' [44] N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Yefet, U.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Alon, and E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Yahav, “Adversarial examples for models of code,” Proceedings of the ACM on Programming Languages, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' 4, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' OOPSLA, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' 1–30, 2020.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' [45] N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Metropolis, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Rosenbluth, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Rosenbluth, A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' H.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Teller, and E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Teller, “Equation of state calculations by fast computing machines,” The journal of chemical physics, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' 21, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' 6, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' 1087–1092, 1953.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' [46] W.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' K.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Hastings, “Monte carlo sampling methods using markov chains and their applications,” 1970.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' [47] S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Chib and E.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Greenberg, “Understanding the metropolis-hastings algorithm,” The american statistician, vol.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' 49, no.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' 4, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' 327–335, 1995.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' [48] M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' V.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Pour, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Li, L.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Ma, and H.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Hemmati, “A search-based testing framework for deep neural networks of source code embedding,” in 2021 14th IEEE Conference on Software Testing, Verification and Validation (ICST).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' IEEE, 2021, pp.' 
metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' 36–46.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' [49] I.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Goodfellow, J.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Shlens, and C.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Szegedy, “Explaining and harnessing adversarial examples,” arXiv preprint arXiv:1412.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content='6572, 2014.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' [50] N.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Papernot, P.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' McDaniel, S.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Jha, M.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Fredrikson, Z.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' B.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Celik, and A.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' Swami, “The limitations of deep learning in adversarial settings,” in 2016 IEEE European symposium on security and privacy (EuroS&P).' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' IEEE, 2016, pp.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' 372–387.' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'} +page_content=' 12' metadata={'source': '/home/zjlab/wf/langchain-ChatGLM/knowledge_base/B9E0T4oBgHgl3EQfgAGB/content/2301.02412v1.pdf'}