Google Used Large Language Model to Identify SQLite Vulnerability

Researchers from Google Project Zero and Google DeepMind have published a report on Big Sleep, an AI system built on the Gemini 1.5 Pro large language model and designed to find vulnerabilities in source code. The project's milestone is that Big Sleep identified the first exploitable, previously unknown vulnerability in a real-world project. The flaw was found while the AI system scanned the SQLite DBMS code base and leads to a buffer underflow. The problem was in recently committed code and was fixed before it reached the final SQLite 3.47.0 release.

The model can serve as an auxiliary tool in areas that require labor-intensive manual review, as well as for automatically checking new code to catch vulnerabilities early in development, before problematic changes reach final releases. The authors expect the model to uncover security issues in code that are difficult to find through fuzz testing.

Additionally, Google CEO Sundar Pichai stated that more than a quarter (25%) of all code written at the company is now generated with the large Gemini language models and is then reviewed and accepted by engineers. This use of AI is said to have significantly accelerated product development.

Source: opennet.ru