Study Finds: OpenAI’s ChatGPT Generates Insecure Code

Key Highlights

  • Researchers found that code generated by OpenAI’s ChatGPT often falls short of even minimal security standards.
  • ChatGPT does not assume an adversarial model of code execution, even though it can recognize critical vulnerabilities in its own suggestions.
  • It alerts users to those vulnerabilities only when explicitly asked to evaluate the security of its own code.

As Artificial Intelligence (AI) continues to advance, so does its potential to be used for both good and ill. ChatGPT, a large language model developed by OpenAI, has been widely used to generate code for programming tasks.

ChatGPT Fails To Meet Minimum Security Standards In Generated Code

A recent study by researchers from the Université du Québec in Canada found that ChatGPT often generates insecure code, failing to meet even minimal security standards.

The study involved asking ChatGPT to generate 21 programs and scripts in various programming languages, each illustrating a specific security vulnerability, such as:

  • Memory corruption
  • Denial of service
  • Improperly implemented cryptography
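
To give a sense of what the last category typically looks like (a hypothetical illustration, not code from the study itself), "improperly implemented cryptography" often means patterns such as a hardcoded key combined with ECB mode, which leaks plaintext structure and provides no integrity protection:

    import javax.crypto.Cipher;
    import javax.crypto.spec.SecretKeySpec;

    // Illustrative sketch only: two classic crypto mistakes in one method.
    public class WeakCrypto {
        // Hardcoded key: anyone with the source (or the binary) can decrypt.
        private static final byte[] KEY = "0123456789abcdef".getBytes();

        public static byte[] encrypt(byte[] plaintext) throws Exception {
            // ECB mode maps identical plaintext blocks to identical ciphertext blocks.
            Cipher cipher = Cipher.getInstance("AES/ECB/PKCS5Padding");
            cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(KEY, "AES"));
            return cipher.doFinal(plaintext);
        }

        public static void main(String[] args) throws Exception {
            System.out.println(encrypt("attack at dawn".getBytes()).length);
        }
    }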

The researchers found that ChatGPT generated only five secure programs out of 21 on its first attempt. After further prompting to correct its missteps, it produced seven more secure programs, though each was secure only with respect to the specific vulnerability being evaluated.

ChatGPT Lacks An Adversarial Model Of Code Execution

One of the main issues highlighted by the researchers was that ChatGPT does not assume an adversarial model of code execution. The model repeatedly suggested that security problems could be avoided simply by not feeding invalid input to the vulnerable program it had created.
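The distinction matters in practice. A hypothetical sketch (the directory name and class are invented for illustration, not taken from the paper): advising callers to "just not pass '../' in the filename" leaves a path-traversal hole open, whereas an adversarial mindset means the code itself rejects hostile input.

    import java.nio.file.*;

    public class FileServer {
        // Hypothetical public directory the server is allowed to expose.
        private static final Path BASE = Paths.get("/srv/public");

        // Adversarial model: canonicalize the path and verify it stays inside BASE.
        public static byte[] read(String userSuppliedName) throws Exception {
            Path requested = BASE.resolve(userSuppliedName).normalize();
            if (!requested.startsWith(BASE)) {
                throw new SecurityException("path traversal attempt: " + userSuppliedName);
            }
            return Files.readAllBytes(requested);
        }

        public static void main(String[] args) {
            try {
                read("../etc/passwd");
            } catch (Exception e) {
                // Rejected in code, not merely documented away.
                System.out.println("Rejected: " + e.getMessage());
            }
        }
    }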

ChatGPT Recognizes Insecure Code But Fails To Alert Users Unless Prompted

The researchers note that ChatGPT is aware of critical vulnerabilities in the code it suggests and readily admits to their presence, but only when prompted to evaluate the security of its own code suggestions.

The researchers also noted an ethical inconsistency: the AI tool refuses to create attack code, yet it readily produces insecure code.

For example, OpenAI’s tool generated code containing a Java deserialization vulnerability and offered advice on how to make it more secure, but stated that it could not create a more secure version of the code itself.
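The underlying risk is well known. A hypothetical sketch of the classic pattern (not the study’s actual code): calling readObject() on untrusted bytes lets an attacker instantiate arbitrary serializable classes, which gadget chains can turn into code execution.

    import java.io.*;

    public class UnsafeDeserializer {
        public static Object load(byte[] untrustedBytes) throws Exception {
            try (ObjectInputStream in =
                     new ObjectInputStream(new ByteArrayInputStream(untrustedBytes))) {
                // Vulnerable: no restriction on which classes may be deserialized.
                return in.readObject();
            }
        }

        public static void main(String[] args) throws Exception {
            // Round-trip a harmless String; an attacker would supply a gadget-chain payload instead.
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            new ObjectOutputStream(buf).writeObject("hello");
            System.out.println(load(buf.toByteArray()));
        }
    }

A commonly recommended hardening step, separate from anything the study tested, is the ObjectInputFilter mechanism introduced with JEP 290 in Java 9, which lets an application restrict which classes may be deserialized; avoiding native Java serialization entirely in favor of a constrained data format is safer still.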

This study highlights the potential security risks of using AI-generated code and the need for greater scrutiny and evaluation of AI applications across domains. As AI continues to advance and become more integrated into our daily lives, we must remain vigilant about the risks and limitations of these systems.
