
Secure code generation with LLMs : risk assessment and mitigation strategies

Publication details: Hyderabad: IUP Publications, 2024
Edition: Vol. 17(1), Feb
Description: 75-95p.
In: IUP Journal of Telecommunications
Summary: Artificial intelligence (AI)-powered code generation tools, such as GitHub Copilot and OpenAI Codex, have revolutionized software development by automating code synthesis. However, concerns remain about the security of AI-generated code and its susceptibility to vulnerabilities. This study investigates whether AI-generated code can match or surpass human-written code in security, using a systematic evaluation framework. It analyzes AI-generated code samples from state-of-the-art large language models (LLMs) and compares them against human-written code using static and dynamic security analysis tools. Additionally, adversarial testing was conducted to assess the robustness of LLMs against insecure code suggestions. The findings reveal that while AI-generated code can achieve functional correctness, it frequently introduces security vulnerabilities, such as injection flaws, insecure cryptographic practices, and improper input validation. To mitigate these risks, security-aware training methods and reinforcement learning techniques were explored to enhance the security of AI-generated code. The results highlight key challenges in AI-driven software development and propose guidelines for safely integrating AI-assisted programming in real-world applications. This paper provides critical insights into the intersection of AI and cybersecurity, paving the way for more secure AI-driven code synthesis models.
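
For illustration only (this sketch is not taken from the paper): a minimal Python example of the injection-flaw class the summary names, contrasting an insecure string-built SQL query of the kind code assistants are known to suggest with the parameterized form. The function names, table, and columns are hypothetical.

    import sqlite3

    def find_user_vulnerable(conn, username):
        # Insecure pattern: user input interpolated directly into the SQL
        # string, so crafted input can rewrite the query (SQL injection).
        query = "SELECT id, email FROM users WHERE name = '%s'" % username
        return conn.execute(query).fetchall()

    def find_user_safe(conn, username):
        # Parameterized query: the driver binds the value separately,
        # so the input can never change the query structure.
        return conn.execute(
            "SELECT id, email FROM users WHERE name = ?", (username,)
        ).fetchall()

    if __name__ == "__main__":
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE users (id INTEGER, name TEXT, email TEXT)")
        conn.execute("INSERT INTO users VALUES (1, 'alice', 'alice@example.com')")
        payload = "x' OR '1'='1"  # classic injection payload
        print(find_user_vulnerable(conn, payload))  # [(1, 'alice@example.com')]
        print(find_user_safe(conn, payload))        # []

With the injected payload, the vulnerable query collapses to WHERE name = 'x' OR '1'='1' and returns every row, while the parameterized version correctly returns nothing; static analysis tools of the kind the study uses flag the first pattern.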
Holdings
Item type: Articles Abstract Database
Current library: School of Engineering & Technology, Archival Section
Status: Not for loan
Barcode: 2025-1294
Total holds: 0

