
Google Urges Developers to Keep Humans in the Loop When Filing Bug Reports

Mar 20, 2026 · 5 min read

Google has moved to clamp down on low-quality AI-generated submissions to its open-source vulnerability reward program, citing a mounting quality crisis — even as it simultaneously channels funding into a separate initiative designed to harness AI as a defensive tool for open-source security.

The Google Open Source Software Vulnerability Reward Program team has grown increasingly alarmed by the deteriorating signal-to-noise ratio of incoming AI-assisted bug reports. Many of these submissions arrive riddled with hallucinated exploitation scenarios — plausible-sounding but technically inaccurate descriptions of how a vulnerability might be triggered — or flag issues with negligible real-world security impact, draining the bandwidth of already-stretched triage teams.

"To ensure our triage teams can focus on the most critical threats, we will now require higher-quality proof (like OSS-Fuzz reproduction or a merged patch) for certain tiers to filter out low-quality reports and allow us to focus on real-world impact," Google stated in an official blog post. The updated policy effectively raises the evidentiary bar, demanding reproducible proof-of-concept artifacts or accepted code patches before submissions qualify for review — a significant shift from prior, more permissive intake standards.

Google is far from alone in confronting this challenge. The Linux Foundation has similarly flagged the surging volume of AI-generated bug submissions as an operational burden threatening to overwhelm its maintainer community. In response, the Foundation has secured financial commitments from a coalition of leading AI developers — Google, Anthropic, AWS, Microsoft, and OpenAI — amounting to a combined $12.5 million earmarked for bolstering open-source software security infrastructure.

"Grant funding alone is not going to help solve the problem that AI tools are causing today on open-source security teams," said Greg Kroah-Hartman of the Linux kernel project. "OpenSSF has the active resources needed to support numerous projects that will help these overworked maintainers with the triage and processing of the increased AI-generated security reports they are currently receiving." His remarks underscore a broader concern within the open-source community: that financial injections, while necessary, must be paired with purpose-built tooling to produce any meaningful operational relief.

Stewardship of the $12.5 million commitment will fall to two established open-source security organizations — Alpha-Omega and the Open Source Security Foundation (OpenSSF). Both organizations plan to direct the capital toward developing AI-powered triage assistance tools, effectively deploying AI as a countermeasure against the volume of AI-generated noise — a striking example of the technology being used to remediate problems of its own making.

"We are excited to bring maintainer-centric AI security assistance to the hundreds of thousands of projects that power our world," said Alpha-Omega co-founder Michael Winser — signaling an ambition to scale protections across the vast, often under-resourced ecosystem of critical open-source projects.

This article first appeared on InfoWorld.