Feds Call for Certifying, Assessing Veracity of AI Systems
Biden Administration Wants to Ensure AI Tech Works as Intended Without Undue Bias
The Biden administration initiated a potential precursor to regulations for artificial intelligence tools such as ChatGPT amid mounting concern about the technology's unintended effects.
In a document soliciting public input over the next 60 days, the administration floats a range of possible mechanisms to ensure an "AI accountability ecosystem" that includes audits, reporting, testing and evaluation.
"Accountability policies will help us shine a light on these systems and verify whether they are safe, effective, responsible and lawful," said Alan Davidson, assistant secretary of commerce for communications and information, during a Tuesday speech at the University of Pittsburgh.
The announcement by the head of the National Telecommunications and Information Administration comes days after President Joe Biden told reporters that "it remains to be seen" whether AI is dangerous, and weeks after a slew of top tech executives called for a pause of at least six months in the development of advanced artificial intelligence systems (see: Tech Luminaries Call for Pause in AI Development).
Davidson said the NTIA is particularly looking for responses that address certifications AI systems might need ahead of deployment, the data sets needed to conduct AI audits and assessments, the designs developers should choose to prove their AI systems are safe, and the level of testing and assurance the public should expect before systems are released.
"People will benefit immensely from a world in which we reap the benefits of responsible AI while minimizing or eliminating the harms," Davidson said. "We have to move fast because these AI technologies are moving very fast in some ways," he told The Guardian.
Nat Beuse, chief safety officer at self-driving technology vendor Aurora, told attendees of the University of Pittsburgh event that anomaly resolution must be built into AI systems so that technology failing to perform up to expectations can be adjusted.
Regulators will likely be more interested in how AI systems are trained, Beuse said during a panel following Davidson's keynote, while customers typically want to learn more about uptime or issues in the field.
How much, or whether, to regulate AI is a question that has consumed increasing levels of Washington's attention. Eric Schmidt, former Google CEO and an AI proponent, told a House Oversight Committee hearing in early March that innovation should be the primary concern.
"Let American ingenuity, American scientists, the American government and American corporations invent this future and we’ll get something pretty close to what we want. Then [the government] can work on the edges where you have misuse," he said. Schmidt earlier this month told the Australian Financial Review he's against a six-month pause in AI development "because it will simply benefit China."
In Pittsburgh, Mozilla Fellow Deborah Raji sounded another concern. "Of course, there are concerns about fairness," she said. "But before all of that, we should actually be questioning whether what's being put on the market is really AI. Is it false marketing for a company to say its AI technology does certain things if the product doesn't live up to the claims?"