[Research] Prof. Lee JeeHyong's Research Lab Publishes Three Papers at ACL 2023
- College of Computing and Informatics
- 2023-05-23
Paper #1: "DIP: Dead code Insertion based Black-box Attack for Programming Language Model", ACL 2023
(CheolWon Na, Integrated M.S.-Ph.D. Program, Department of Artificial Intelligence; YunSeok Choi, Ph.D. Program, Department of Software)
Paper #2: "BLOCSUM: Block Scope-based Source Code Summarization via Shared Block Representation", Findings of ACL 2023
(YunSeok Choi, Ph.D. Program, Department of Software; Hyojun Kim, M.S. Program, Department of Artificial Intelligence)
Paper #3: "CodePrompt: Task-Agnostic Prefix Tuning for Program and Language Generation", Findings of ACL 2023
(YunSeok Choi, Ph.D. Program, Department of Software)
(Paper #1)
[Abstract] Automatic processing of source code, such as code clone detection and software vulnerability detection, is very helpful to software engineers. Large pre-trained Programming Language (PL) models (such as CodeBERT, GraphCodeBERT, CodeT5, etc.) show very powerful performance on these tasks. However, these PL models are vulnerable to adversarial examples that are generated with slight perturbation. Unlike natural language, an adversarial example of code must be semantic-preserving and compilable. Due to these requirements, it is hard to directly apply the existing attack methods for natural language models. In this paper, we propose DIP (Dead code Insertion based Black-box Attack for Programming Language Model), a high-performance and efficient black-box attack method that generates adversarial examples using dead code insertion. We evaluate our proposed method on nine victim downstream-task large code models. Our method outperforms the state-of-the-art black-box attack in both attack efficiency and attack quality, while the generated adversarial examples remain compilable and preserve semantic functionality.
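To illustrate the core idea of a semantics-preserving perturbation, the following is a minimal Python sketch, not the paper's DIP implementation: an always-false guard is inserted at a chosen line, so the program still compiles and behaves identically, while the token sequence seen by a code model changes. The names insert_dead_code and DEAD_CODE_CANDIDATES are hypothetical; DIP itself selects insertion positions and candidate snippets far more carefully.

# Illustrative sketch of dead code insertion (assumed names, not the DIP code).
DEAD_CODE_CANDIDATES = [
    "if False:\n    unused_var = 0",
    "while False:\n    break",
]

def insert_dead_code(source: str, line_idx: int, snippet: str) -> str:
    """Insert a semantics-preserving dead-code snippet before the given line."""
    lines = source.splitlines()
    indent = len(lines[line_idx]) - len(lines[line_idx].lstrip())
    indented = "\n".join(" " * indent + l for l in snippet.splitlines())
    return "\n".join(lines[:line_idx] + [indented] + lines[line_idx:])

victim_code = (
    "def add(a, b):\n"
    "    total = a + b\n"
    "    return total\n"
)

# In a black-box setting, each (snippet, position) candidate would be scored by
# querying the victim model, keeping the perturbation that flips its prediction.
adversarial_code = insert_dead_code(victim_code, 2, DEAD_CODE_CANDIDATES[0])
print(adversarial_code)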
(Paper #2)
[Abstract] Code summarization, which aims to automatically generate natural language descriptions from source code, has become an essential task in software development for better program understanding. The Abstract Syntax Tree (AST), which represents the syntax structure of the source code, is helpful when utilized together with the sequence of code tokens to improve the quality of code summaries. Recent works on code summarization attempted to capture the sequential and structural information of the source code, but they gave less consideration to the fact that source code consists of multiple code blocks. In this paper, we propose BLOCSUM, BLOck scope-based source Code SUMmarization via shared block representation, which utilizes block-scope information by representing various structures of the code block. We propose a shared block position embedding to effectively represent the structure of code blocks and merge both the code and the AST. Furthermore, we develop variant ASTs to learn rich information such as block and global dependencies of the source code. To validate our approach, we perform experiments on two real-world datasets, the Java dataset and the Python dataset. We demonstrate the effectiveness of BLOCSUM through various experiments, including ablation studies and a human evaluation.
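The following is a rough PyTorch sketch of the shared block position embedding idea, under the assumption that code tokens and AST nodes belonging to the same code block share one block-position index and therefore one learned embedding. It is not the BLOCSUM implementation, and all tensor contents below are made-up toy values.

# Toy sketch of a shared block position embedding (assumptions, not BLOCSUM code).
import torch
import torch.nn as nn

VOCAB_SIZE, MAX_BLOCKS, DIM = 1000, 32, 64

tok_emb = nn.Embedding(VOCAB_SIZE, DIM)
blk_emb = nn.Embedding(MAX_BLOCKS, DIM)   # shared by code tokens and AST nodes

# Hypothetical inputs: token ids plus a block index per code token / AST node.
code_ids    = torch.tensor([[12, 45, 7, 99, 3]])
code_blocks = torch.tensor([[0, 0, 1, 1, 1]])    # last three tokens sit in block 1
ast_ids     = torch.tensor([[201, 202, 203]])
ast_blocks  = torch.tensor([[0, 1, 1]])          # same block indices as the code

code_repr = tok_emb(code_ids) + blk_emb(code_blocks)
ast_repr  = tok_emb(ast_ids)  + blk_emb(ast_blocks)

# Because blk_emb is shared, a code token and an AST node from the same block
# receive the same block-scope signal, letting an encoder merge the two views.
print(code_repr.shape, ast_repr.shape)  # torch.Size([1, 5, 64]) torch.Size([1, 3, 64])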
(Paper #3)
[Abstract] In order to solve the inefficient parameter update and storage issues of fine-tuning in Natural Language Generation (NLG) tasks, prompt-tuning methods have emerged as lightweight alternatives. Furthermore, efforts to reduce the gap between pre-training and fine-tuning have shown successful results in low-resource settings. As large Pre-trained Language Models (PLMs) for Program and Language Generation (PLG) tasks are constantly being developed, prompt tuning methods are necessary for these tasks as well. However, because the gap between pre-training and fine-tuning differs from that of PLMs for natural language, a prompt tuning method that reflects the traits of PLMs for programming language is needed. In this paper, we propose CodePrompt, a Task-Agnostic prompt tuning method for PLG tasks that combines an Input-Dependent Prompt Template (to bridge the gap between pre-training and fine-tuning of PLMs for program and language) and Corpus-Specific Prefix Tuning (to efficiently update the parameters of PLMs for program and language). We also propose a method to provide richer prefix word information under limited prefix lengths. We show that our method is effective on three PLG tasks, not only in the full-data setting but also in low-resource and cross-domain settings.
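For readers unfamiliar with prefix tuning, the following is a minimal PyTorch sketch of the general technique, not CodePrompt itself: the pre-trained model is frozen and only a small matrix of prefix embeddings, prepended to the input embeddings, is trained. CodePrompt's input-dependent prompt templates and corpus-specific prefixes are omitted here, and the class name PrefixTunedEncoder is hypothetical.

# Minimal prefix-tuning sketch (generic technique, not the CodePrompt method).
import torch
import torch.nn as nn

class PrefixTunedEncoder(nn.Module):
    def __init__(self, encoder: nn.Module, embed: nn.Embedding, prefix_len: int = 10):
        super().__init__()
        self.encoder, self.embed = encoder, embed
        dim = embed.embedding_dim
        # Only these prefix parameters receive gradients during tuning.
        self.prefix = nn.Parameter(torch.randn(prefix_len, dim) * 0.02)
        for p in self.encoder.parameters():
            p.requires_grad = False
        for p in self.embed.parameters():
            p.requires_grad = False

    def forward(self, input_ids: torch.Tensor) -> torch.Tensor:
        batch = input_ids.size(0)
        tok = self.embed(input_ids)                            # (B, L, D)
        pre = self.prefix.unsqueeze(0).expand(batch, -1, -1)   # (B, P, D)
        return self.encoder(torch.cat([pre, tok], dim=1))      # (B, P+L, D)

# Toy usage with a stand-in "pre-trained" encoder.
embed = nn.Embedding(1000, 64)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True),
    num_layers=2,
)
model = PrefixTunedEncoder(encoder, embed, prefix_len=8)
out = model(torch.randint(0, 1000, (2, 16)))
print(out.shape)  # torch.Size([2, 24, 64])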