Record: Assessing the Effectiveness of ChatGPT in Secure Code Development: A Systematic Literature Review

Title:
Assessing the Effectiveness of ChatGPT in Secure Code Development: A Systematic Literature Review
Contributors:
NSERC
Source:
ACM Computing Surveys, volume 57, issue 12, pages 1-32; ISSN 0360-0300 (print), 1557-7341 (online)
Publisher Information:
Association for Computing Machinery (ACM)
Publication Year:
2025
Document Type:
Journal article
Language:
English
DOI:
10.1145/3744553
Accession Number:
edsbas.C9233B9B
Database:
BASE

Further Information

ChatGPT, a Large Language Model (LLM) maintained by OpenAI, has demonstrated a remarkable ability to seemingly comprehend and contextually generate text. Among its myriad applications, its capability to autonomously generate and analyze computer code stands out as particularly promising. This functionality has attracted substantial interest due to its potential to streamline the software development process. However, this technological advancement also raises significant concerns about the security of code produced by LLMs. In this article, we survey recent research that examines the use of ChatGPT to generate secure code, detect vulnerabilities in code, or perform other tasks related to secure code development. Beyond categorizing and synthesizing these studies, we identify important insights into ChatGPT’s potential impact on secure programming. Key findings indicate that while ChatGPT shows great promise as an aid in writing secure code, challenges remain. Its effectiveness varies across security tasks, depending on the context of experimentation (programming language, CWE, code length, etc.) and the benchmark used for comparison, whether other LLMs, traditional analysis tools, or its own earlier versions. The overall trend indicates that GPT-4 consistently surpasses its predecessor in most tasks.