
Biden Urges Congress to Take Action Following AI Order

Experts Praise Executive Order for Focusing on Security Risks Associated With AI
U.S. President Joe Biden in the White House on March 13, 2023 (Image: Shutterstock)

U.S. President Joe Biden called on Congress to pass comprehensive legislation on artificial intelligence after invoking Cold War-era executive powers over private industry in a sweeping executive order that aims to set new standards and regulations for AI systems.


"This executive order represents bold action, but we still need Congress to act," Biden said during a Monday signing ceremony. The president added that he was convening a bipartisan group of lawmakers at the White House on Tuesday "to underscore the need for congressional action" to mitigate the risks associated with AI systems.

The executive order invokes the Defense Production Act - the 1950 statute the White House relied on throughout the COVID-19 pandemic to speed production of vaccines and essential medical supplies - and directs developers of advanced AI models to report the results of their safety tests to the federal government.

The order also establishes an AI Safety and Security Board, expands grant programs for AI research in key areas and directs the National Institute of Standards and Technology to set standards for extensive red-team testing of new foundational models. AI developers will be required under the guidance to notify the government when training models that could pose a "serious risk" to national security.

Biden told the audience at the East Room signing ceremony that he has seen an AI deepfake video of himself. "I watched one of me. I said, 'When the hell did I say that?' But all kidding aside, a three-second recording of your voice to generate an impersonation good enough to fool your family - or you - it's mind-blowing," he said.

Senate Majority Leader Chuck Schumer, D-N.Y., agreed in a statement following the signing ceremony that Congress must "augment, expand and cement" actions that the administration has taken to promote the responsible development of AI systems with bipartisan legislation.

"This is a massive step forward, but of course more is needed," Schumer said. "Congress must now act with urgency and humility."

The executive order promotes innovation in AI research and development, including by granting AI researchers and students access to resources through the National AI Research Resource. It also highlights the importance of international collaboration and increased information sharing between the public and private sectors. The order tasks the departments of State and Commerce with leading efforts to establish international frameworks for managing AI risks, and it instructs NIST to play a pivotal role in advancing AI safety and security by developing guidelines and best practices for the responsible development and deployment of AI systems.

Mandy Andress, CISO of the security firm Elastic, told Information Security Media Group that the order takes a "pragmatic" approach to developing regulations and gaining buy-in from AI developers, with NIST leading the charge on developing standards and expanding information-sharing initiatives across government, including at the Cybersecurity and Infrastructure Security Agency. Elastic joined CISA's flagship public-private partnership, the Joint Cyber Defense Collaborative, earlier this year.

Security researchers praised the executive order for requiring AI developers to notify the government when training potentially harmful foundational models and for tasking NIST with establishing red-teaming standards. Those steps will help ensure "that the most powerful AI systems are rigorously tested to ensure they are safe before public deployment," said Paul Scharre, executive vice president and director of studies for the think tank Center for a New American Security.

The order sets a foundation for privacy protections in the United States around AI development and use, including prioritizing federal support for the development and use of privacy-preserving techniques and enabling AI systems to be trained with privacy safeguards in place. Without legislation, however, the order's ability to fully address the broad scope of privacy concerns posed by AI could quickly run into limits.

"While this executive order is an important first step, its impact is limited to the power of the federal agencies to lead by example through procurement and research," said Chris Lewis, president and CEO of non-profit public interest group Public Knowledge. "We hope to see Congress continue its work in developing a robust regulatory framework for AI."

"This would include passing a comprehensive privacy law, stronger antitrust protections, and an expert regulatory agency with the resources and authority to keep up with the pace of innovation," he added.

Though the U.S. lacks a comprehensive privacy statute or regulatory law that specifically focuses on AI and its potential risks, several recent bills and White House guidelines have sought to establish a foundation for AI ethics and accountability. The administration previously received voluntary commitments from leading AI companies to implement secure and transparent development processes for AI systems, and it published a blueprint for an AI bill of rights (see: US Lawmakers Warned That AI Needs a 'Safety Brake').

House Energy and Commerce Committee Chair Cathy McMorris Rodgers, R-Wash., said Congress is already implementing strategies to secure AI under the American COMPETES Act of 2020, which instructed the Department of Commerce to explore U.S. competitiveness in AI development.

"As companies begin to incorporate AI, we need to protect and secure the personal information of every American, especially our children, while preserving innovation in the process," she said. "I agree with President Biden that the best way to do this is by enacting a comprehensive data privacy and security law, which should be the first step toward cementing America’s leadership in AI" (see: US House Panel: AI Regulation Begins With Privacy).


About the Author

Chris Riotta

Managing Editor, GovInfoSecurity

Riotta is a journalist based in Washington, D.C. He earned his master's degree from the Columbia University Graduate School of Journalism, where he served as 2021 class president. His reporting has appeared in NBC News, Nextgov/FCW, Newsweek Magazine, The Independent and more.



