  • This is an early access version, the complete PDF, HTML, and XML versions will be available soon.
  • Article
  • Open Access

5 December 2025

Leveraging Static Analysis for Feedback-Driven Security Patching in LLM-Generated Code

School of Interactive Computing, Georgia Institute of Technology, Atlanta, GA 30332, USA
* Author to whom correspondence should be addressed.
J. Cybersecur. Priv. 2025, 5(4), 110; https://doi.org/10.3390/jcp5040110
This article belongs to the Section Security Engineering & Applications

Abstract

Large language models (LLMs) have shown remarkable potential for automatic code generation. Yet, these models share a weakness with their human counterparts: they inadvertently generate code with security vulnerabilities that could allow unauthorized attackers to access sensitive data or systems. In this work, we propose Feedback-Driven Security Patching (FDSP), wherein LLMs automatically refine the vulnerable code they generate. The key to our approach is a framework that feeds the output of automatic static code analysis back to the LLM, enabling it to propose and implement fixes for detected vulnerabilities. Further, we curate a novel benchmark, PythonSecurityEval, designed to accelerate progress in secure code generation by covering diverse, real-world applications, including databases, websites, and operating systems. Our proposed FDSP approach achieves the strongest improvements, reducing vulnerabilities by up to 33% when evaluated with Bandit and by 12% with CodeQL, outperforming baseline refinement methods.
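To make the feedback loop concrete, the sketch below shows one plausible shape for such a pipeline: run a static analyzer over generated code, hand the findings back to a patching step, and repeat until the analyzer is satisfied. This is an illustrative sketch, not the authors' implementation: the `analyze` function is a toy stand-in for Bandit (it only flags `eval()` calls), and `llm_patch` is a hypothetical placeholder for the actual LLM call.

```python
import ast


def analyze(code: str) -> list[str]:
    """Toy static analyzer standing in for Bandit: flags bare eval() calls."""
    findings = []
    for node in ast.walk(ast.parse(code)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "eval"):
            findings.append(f"line {node.lineno}: use of eval() (CWE-95)")
    return findings


def llm_patch(code: str, findings: list[str]) -> str:
    """Hypothetical placeholder for the LLM repair step. A real system would
    prompt the model with the code and the analyzer findings; here we just
    swap eval() for ast.literal_eval() so the loop terminates."""
    return code.replace("eval(", "ast.literal_eval(")


def fdsp_loop(code: str, max_rounds: int = 3) -> tuple[str, list[str]]:
    """Feedback loop: analyze, feed findings to the patcher, re-check."""
    for _ in range(max_rounds):
        findings = analyze(code)
        if not findings:
            break
        code = llm_patch(code, findings)
    return code, analyze(code)


vulnerable = "import ast\nresult = eval(user_input)\n"
patched, remaining = fdsp_loop(vulnerable)
```

After the loop, `remaining` is empty: the toy analyzer no longer flags the patched code. In the paper's setting, the analyzer would be Bandit or CodeQL and the patcher a prompted LLM, but the control flow is the same.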
