The U.S. federal government is preparing to collect reports from foundational artificial intelligence model developers, including details about their cybersecurity defenses and red-teaming efforts. The Department of Commerce said it wants feedback on how the data should be safely collected and stored.
The underground market for illicit large language models is a lucrative one, said academic researchers who called for better safeguards against artificial intelligence misuse. "This laissez-faire approach essentially provides a fertile ground for miscreants to misuse the LLMs," they wrote.
A three-month-old startup promising safe artificial intelligence raised $1 billion in cash in a seed funding round. Co-founded by former OpenAI Chief Scientist Ilya Sutskever, Safe Superintelligence will reportedly use the funds to acquire computing power.
The Dutch data regulator is the latest agency to fine artificial intelligence company Clearview AI over its facial data harvesting and other privacy violations of GDPR rules, joining regulatory agencies in France, Italy, Greece and the United Kingdom.
While criminals may have an advantage in the AI race, banks and other financial services firms are responding with heightened vigilance, and a growing number of organizations are exploring AI tools to improve fraud detection and response to AI-driven scams.
Unifying fragmented network security technology under a single platform allows for consistent policy application across on-premises, cloud and hybrid environments, said Palo Alto Networks' Anand Oswal. Having a consistent policy framework simplifies management and improves security outcomes.
HackerOne has tapped F5's longtime product leader as its next chief executive to continue expanding its portfolio beyond operating vulnerability disclosure programs. The firm tasked Kara Sprague with building on existing growth in areas including AI red teaming and penetration testing as a service.
AI companies OpenAI and Anthropic made a deal with a U.S. federal body to provide early access to major models for safety evaluations. The agreements "are an important milestone as we work to help responsibly steward the future of AI," said U.S. AI Safety Institute Director Elizabeth Kelly.
California state lawmakers on Wednesday handed off a bill establishing first-in-the-nation safety standards for advanced artificial intelligence models to their Senate counterparts after weathering opposition from the tech industry and high-profile Democratic politicians.
ISMG's Virtual AI Summit brought together cybersecurity leaders to explore the intersection of AI and security. Discussions ranged from using AI for defense to privacy considerations and regulatory frameworks and provided organizations with valuable insights for navigating the AI landscape.
Cisco announced its intent to acquire Robust Intelligence to fortify the security of AI applications. With this acquisition, Cisco aims to address AI-related risks, incorporating advanced protection to guard against threats such as jailbreaking, data poisoning and unintentional model outcomes.
Microsoft says it fixed a security flaw in artificial intelligence chatbot Copilot that enabled attackers to steal multifactor authentication code using a prompt injection attack. Security researcher Johann Rehberger said he discovered a way to invisibly force Copilot to send data.
U.S. law enforcement is cracking down on people who use artificial intelligence to generate child sexual abuse material, stating there is no difference between material made by a computer and material from real life. "Put simply, CSAM generated by AI is still CSAM," said a U.S. attorney.
Artificial intelligence is transforming cybersecurity on both offensive and defensive fronts. Attackers use AI to iterate and modify exploits rapidly, making malicious code harder to detect, said Tim Gallo, head of global solutions architects at Google.
AI-assisted coding tools can speed up code production but often replicate existing vulnerabilities when built on poor-quality code bases. Snyk's Randall Degges discusses why developers must prioritize code base quality to maximize the benefits and minimize the risks of using AI tools.