000 a
999 _c23296
_d23296
003 OSt
005 20250811122315.0
008 250811b xxu||||| |||| 00| 0 eng d
040 _aAIKTC-KRRC
_cAIKTC-KRRC
100 _927025
_aBar, Kaushik
245 _aSecure code generation with LLMs
_b: risk assessment and mitigation strategies
250 _aVol.17(1), Feb
260 _aHyderabad
_bIUP Publications
_c2024
300 _a75-95p.
520 _aArtificial intelligence (AI)-powered code generation tools, such as GitHub Copilot and OpenAI Codex, have revolutionized software development by automating code synthesis. However, concerns remain about the security of AI-generated code and its susceptibility to vulnerabilities. This study investigates whether AI-generated code can match or surpass human-written code in security, using a systematic evaluation framework. It analyzes AI-generated code samples from state-of-the-art large language models (LLMs) and compares them against human-written code using static and dynamic security analysis tools. Additionally, adversarial testing was conducted to assess the robustness of LLMs against insecure code suggestions. The findings reveal that while AI-generated code can achieve functional correctness, it frequently introduces security vulnerabilities, such as injection flaws, insecure cryptographic practices, and improper input validation. To mitigate these risks, security-aware training methods and reinforcement learning techniques were explored to enhance the security of AI-generated code. The results highlight the key challenges in AI-driven software development and propose guidelines for integrating AI-assisted programming safely into real-world applications. This paper provides critical insights into the intersection of AI and cybersecurity, paving the way for more secure AI-driven code synthesis models.
650 0 _94619
_aEXTC Engineering
773 0 _x0975-5551
_tIUP Journal of Telecommunications
_dHyderabad IUP Publications
856 _uhttps://iupindia.in/ViewArticleDetails.asp?ArticleID=7759
_yClick here
942 _2ddc
_cAR