- Project Zero and DeepMind's "big AI" collaboration uncovers a security vulnerability
- Big Sleep finds a SQLite stack buffer underflow flaw before official release
- AI could revolutionize software development by discovering critical flaws
A collaborative “big AI” project between Google Project Zero and Google DeepMind has discovered a critical vulnerability in a piece of software before public release.
The Big Sleep AI agent was set to work analyzing the SQLite open source database engine, where it discovered a stack buffer underflow flaw, which was subsequently patched the same day.
This discovery potentially marks the first time an AI has uncovered a memory-safety flaw in a widely used application.
Fuzzed software out-fuzzed by AI
Big Sleep found the stack buffer underflow vulnerability in SQLite even though the database engine had already been 'fuzzed' many times.
Fuzzing is an automated software testing method that feeds malformed or randomized inputs to a program to surface flaws such as memory-safety issues, which attackers commonly exploit. However, it is not a foolproof method of vulnerability hunting: a bug that is found by fuzzing and patched may still exist as a variant elsewhere in the software and go undiscovered.
The methodology used by Google in this instance was to provide a previously patched vulnerability as a starting point for the Big Sleep agent, and then set it loose hunting for similar vulnerabilities elsewhere in the software.
While hunting for a similar vulnerability, Big Sleep encountered the flaw, reproduced it in a test case, gradually narrowed the potential causes down to a single issue, and generated an accurate summary of the…
Read full post on Tech Radar