Software Vulnerability Detection Using Large Language Models



Software development was among the first demonstrations of using Large Language Models (LLMs) to enhance human productivity. This co-pilot paradigm envisions LLMs working side-by-side with human developers to assist in programming tasks. Ensuring the security of software products is critical to the success of such a paradigm. There have been various anecdotal reports of LLMs successfully detecting vulnerabilities in programs. This paper reports a set of experiments applying four well-known LLMs to two widely referenced public datasets to evaluate their performance in detecting software vulnerabilities. Our results show a significant performance gap between these LLMs and popular static analysis tools, primarily due to the LLMs' high false positive rates. However, the LLMs show great promise in identifying subtle patterns commonly associated with software vulnerabilities. This observation suggests a path forward: combining LLMs with other program analysis techniques to achieve better software vulnerability detection.
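The abstract does not specify the prompting or scoring setup used in the experiments. The Python sketch below is only illustrative of the kind of evaluation described: querying an LLM for a binary vulnerable/safe verdict on a code sample and computing the false positive rate discussed above. The `ask_llm` callable, the prompt wording, and the toy samples are assumptions for illustration, not the paper's actual protocol.

```python
from typing import Callable, List, Tuple

def classify_snippet(ask_llm: Callable[[str], str], code: str) -> bool:
    """Ask an LLM whether a code snippet is vulnerable; return True for 'vulnerable'."""
    prompt = (
        "Does the following function contain a security vulnerability? "
        "Answer with exactly one word: VULNERABLE or SAFE.\n\n" + code
    )
    answer = ask_llm(prompt).strip().upper()
    return answer.startswith("VULNERABLE")

def false_positive_rate(predictions: List[bool], labels: List[bool]) -> float:
    """Fraction of non-vulnerable samples that the model flags as vulnerable."""
    false_positives = sum(1 for p, y in zip(predictions, labels) if p and not y)
    negatives = sum(1 for y in labels if not y)
    return false_positives / negatives if negatives else 0.0

if __name__ == "__main__":
    # Toy labeled samples (code, is_vulnerable); real datasets contain thousands of functions.
    dataset: List[Tuple[str, bool]] = [
        ("void copy(char *s) { char buf[8]; strcpy(buf, s); }", True),   # unbounded copy
        ("int add(int a, int b) { return a + b; }", False),
    ]
    # Stand-in model: always answers VULNERABLE; a real study would call an LLM API here.
    fake_llm = lambda prompt: "VULNERABLE"
    preds = [classify_snippet(fake_llm, code) for code, _ in dataset]
    labels = [label for _, label in dataset]
    print("False positive rate:", false_positive_rate(preds, labels))
```

A model that over-flags safe code, as the placeholder above does, drives the false positive rate toward 1.0, which is the failure mode the paper attributes to the evaluated LLMs relative to static analysis tools.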

Cite this Paper (BibTeX)
@inproceedings{radford:20230101,
    author={Moumita Das Purba and Arpita Ghosh and Benjamin J. Radford and Bill Chu},
    title={Software Vulnerability Detection Using Large Language Models},
    booktitle={Proceedings of the Workshop on Machine Learning for Software Reliability, ISSRE 2023},
    year={2023}}