Google Launches AI-Powered CodeMender, Expands Security Initiatives

Google Introduces CodeMender for Automated Code Security

On October 6, 2025, Google announced a suite of new security measures to address the rising threat of AI-driven cyberattacks. Central to this effort is CodeMender, an AI-powered agent designed to autonomously identify and fix critical code vulnerabilities. Leveraging the advanced reasoning capabilities of Google’s Gemini models, CodeMender aims to accelerate patch deployment across open-source projects, making proactive defense more scalable and efficient.
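
Google has not published CodeMender’s internals, but the announcement describes an agent that finds a flaw, drafts a fix, and validates it before it ships. The Python sketch below is a hypothetical illustration of that general shape only: every function name is an assumption, the scanner and model calls are stubs rather than any real Gemini API, and the test command is a placeholder.

```python
# Hypothetical sketch of a find-patch-validate loop of the kind the
# announcement describes. Not CodeMender's actual implementation.
import subprocess
from dataclasses import dataclass


@dataclass
class Finding:
    file: str
    line: int
    description: str


def scan_for_vulnerabilities(repo_path: str) -> list[Finding]:
    # Placeholder: a real agent would combine static analysis,
    # fuzzing signals, and model-driven code review.
    return []


def propose_patch(finding: Finding) -> str:
    # Placeholder: a real agent would ask a large language model
    # (Gemini, in CodeMender's case) to draft a fix as a diff.
    return ""


def validate_patch(repo_path: str, diff: str) -> bool:
    # Apply the candidate diff, then rerun the project's test suite;
    # only patches that apply cleanly and pass tests survive.
    applied = subprocess.run(["git", "apply", "-"],
                             input=diff.encode(), cwd=repo_path)
    if applied.returncode != 0:
        return False
    tests = subprocess.run(["make", "test"], cwd=repo_path)  # assumed test entry point
    return tests.returncode == 0


def run_agent(repo_path: str) -> None:
    for finding in scan_for_vulnerabilities(repo_path):
        diff = propose_patch(finding)
        if diff and validate_patch(repo_path, diff):
            print(f"validated fix for {finding.file}:{finding.line}")
            # A real pipeline would open an upstream pull request here,
            # with a human maintainer reviewing before merge.


if __name__ == "__main__":
    run_agent(".")
```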

Google framed AI as both a tool for innovation and a potential weapon, warning that “cybercriminals, scammers and state-backed attackers are already exploring ways to use AI to harm people and compromise systems around the world.” Against that backdrop, CodeMender is positioned as a major step in using AI for defense, helping defenders keep pace with increasingly sophisticated threats.

AI Vulnerability Reward Program Targets Security Research

To further strengthen the security ecosystem, Google is launching a dedicated AI Vulnerability Reward Program (VRP). This initiative clarifies the scope of eligible AI-related issues and provides a single set of rules and reward tables, streamlining the reporting process for researchers. The company has already paid out over $430,000 for AI-related vulnerabilities through its VRPs and aims to maximize incentives for discovering high-impact flaws.

The new AI VRP is intended to foster closer collaboration with the global security research community, which Google views as an “indispensable partner” in the fight against emerging threats.

Secure AI Framework 2.0 Expands Protection for AI Agents

Recognizing the growing risks posed by autonomous AI systems, Google has updated its Secure AI Framework (SAIF) to version 2.0. The revised framework introduces new guidance on the security risks associated with AI agents and defines controls to mitigate them. SAIF 2.0 is supported by a risk map and incorporates best practices from industry alliances such as the Coalition for Secure AI (CoSAI).
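
SAIF 2.0’s agent guidance is published as prose controls rather than code, but one recurring idea, deny-by-default tool access for autonomous agents, is easy to picture concretely. The sketch below is hypothetical and not drawn from the framework itself; the policy table, exception, and helper names are all illustrative assumptions.

```python
# Hypothetical sketch of a deny-by-default tool-access control for an
# AI agent. Illustrative only; not part of SAIF 2.0 itself.
ALLOWED_TOOLS = {
    "read_file": {"max_calls": 100},
    "search_code": {"max_calls": 50},
    # Deliberately absent: shell execution, file writes, network access.
}


class ToolPolicyError(Exception):
    """Raised when an agent requests a tool outside its policy."""


def guarded_invoke(tool_name, args, call_counts, registry):
    """Execute a tool call only if the allowlist and call budget permit."""
    policy = ALLOWED_TOOLS.get(tool_name)
    if policy is None:
        raise ToolPolicyError(f"tool {tool_name!r} is not allowlisted")
    if call_counts.get(tool_name, 0) >= policy["max_calls"]:
        raise ToolPolicyError(f"call budget exhausted for {tool_name!r}")
    call_counts[tool_name] = call_counts.get(tool_name, 0) + 1
    return registry[tool_name](**args)


# Usage: an allowlisted call runs; anything unlisted is refused.
registry = {"search_code": lambda query: [], "read_file": lambda path: b""}
counts = {}
guarded_invoke("search_code", {"query": "strcpy"}, counts, registry)
try:
    guarded_invoke("run_shell", {"cmd": "id"}, counts, registry)
except ToolPolicyError as err:
    print(err)  # tool 'run_shell' is not allowlisted
```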

Figure: Six core elements of the Secure AI Framework (SAIF)

Google’s security strategy emphasizes “secure by design” principles for AI agents and builds on prior successes with AI-powered tools like Big Sleep and OSS-Fuzz, which have uncovered zero-day vulnerabilities in widely used software.
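
OSS-Fuzz orchestrates continuous fuzzing at scale; the individual targets it runs are small harnesses like the one below. This sketch uses Atheris, the Python fuzzing engine OSS-Fuzz supports; parse_record is a deliberately buggy, hypothetical stand-in, not real project code.

```python
# Minimal Python fuzz target of the kind OSS-Fuzz runs via Atheris.
import sys
import atheris


def parse_record(data: bytes):
    # Hypothetical stand-in for code under test. Bug: it assumes a ';'
    # separator is present, so malformed input raises IndexError.
    fields = data.split(b";")
    return fields[0], fields[1]


def TestOneInput(data: bytes) -> None:
    # Atheris calls this with mutated inputs; any uncaught exception is
    # reported as a crashing finding, which is how a fuzzer surfaces bugs.
    parse_record(data)


atheris.Setup(sys.argv, TestOneInput)
atheris.Fuzz()
```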

Industry Collaboration and Long-Term Commitment

Google is also expanding its partnerships with government agencies such as DARPA and continues to play a leading role in industry alliances focused on AI security. The company asserts that its long-term ambition is to ensure AI remains a “decisive advantage for security and safety,” striving to tip the balance in favor of defenders as threats evolve.

With the launch of CodeMender, the AI VRP, and SAIF 2.0, Google is reinforcing its commitment to securing AI technologies and supporting the broader cybersecurity community in the face of rapidly evolving risks.