Artificial Intelligence & Machine Learning , Next-Generation Technologies & Secure Development
Google AI Security Plan: Bug Bounty, Supply Chain Safety
Google Makes Announcements on New Bug Reporting Guidelines, Supply Chain SecuritySecurity researchers with novel ways to make Google artificial intelligence models leak sensitive training data or otherwise misbehave can now submit their findings to the internet giant's bug bounty program.
See Also: The SIEM Selection Roadmap: Five Features That Define Next-Gen Cybersecurity
The Mountain, View, Calif., giant also said in a Thursday morning announcement that it's expanding its work on supply chain security for AI.
The company characterizes today's announcements as follow up to its participation in a White House effort that obtains tech company pledges to a slew of voluntary AI security commitments (see: IBM, Nvidia, Others Commit to Develop 'Trustworthy' AI).
"Cyber threats evolve quickly and some of the biggest vulnerabilities aren’t discovered by companies or product manufacturers, but by outside security researchers," said Google security executive Royal Hansen
Hackers and researchers in multiple settings have found ways to circumvent controls on generative AI output, causing large language models to involuntarily participate in the production of malicious code or phishing emails. A study published Tuesday by IBM found it took just a handful of prompts to coax ChatGPT into writing phishing emails and tricking the model into developing highly convincing phishing emails (see: Phish Perfect: How ChatGPT Can Help Criminals Get There).
Google updated its criteria for reporting for the type of AI bugs that third party researchers can submit for a potential reward. Google paid out $12 million during 2022 to security researchers who identified issues otherwise overlooked by internal security teams.
Among the attack scenarios the company says it wants to know more about are prompt injections that are invisible to users. Should an attacker be able to obtain the "the exact architecture or weights of a confidential/proprietary model," Google wants to know. So too if an attacker can reliably trigger a "misclassification in a security control that can be abused for malicious use or adversarial gain."
Google additionally said it wants to apply the same supply chain security measures developed for software development including Supply-chain Levels for Software Artifacts and code signing to machine learning models. "In fact, since models are programs, many allow the same types of arbitrary code execution exploits that are leveraged for attacks on conventional software," wrote Google security executives.
SLSA is a framework for metadata that tells developers what's in the software and how it was built in order to identify known vulnerabilities and detect more advanced threats. Code singing allows users to deploy digital signatures to verify that the software wasn't tampered with.
"By incorporating supply chain security into the ML development lifecycle now, while we as an industry are still determining AI risks, we can jumpstart work with the open source community to establish industry standards that will help solve emerging problems," Google said.