Congress deliberates AI regulation to avoid mistakes of the past

OpenAI CEO Sam Altman testifies before the Senate.
(Win McNamee/Getty Images)

Sam Altman has a clear mandate for regulators: police the AI industry.

The OpenAI CEO and co-founder’s appearance at a Senate Judiciary subcommittee hearing on Tuesday was a departure from the antagonistic showdowns that Silicon Valley executives such as Meta CEO Mark Zuckerberg and Alphabet CEO Sundar Pichai have recently faced on Capitol Hill.

The hearing didn’t settle what form regulations would take or on what timeline. But senators from both parties agreed they had failed to regulate prior tech revolutions and vowed not to miss the boat this time.

“Congress failed to meet the moment on social. We have an obligation to not miss it with AI,” said Sen. Richard Blumenthal, D-Conn., chair of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law.

Altman was joined in the hearing by Christina Montgomery, chief privacy officer at IBM, and professor Gary Marcus, an AI researcher at New York University.

“One thing is for certain,” Montgomery said, “this can’t be another era of move fast and break things.”

Training grounds

Senators repeatedly asked how large language models like ChatGPT and Google’s Bard are trained, how to regulate that training, and how to disclose what materials were fed into the program.

“Garbage in, garbage out: The principles still apply,” Blumenthal said.

One of the ideas floated by Blumenthal was a kind of “nutritional label” for AI programs, disclosing what was put into the models. A parallel idea was that independent groups could rate AI systems on several factors, much as government agencies test and grade agricultural products.

Sen. Marsha Blackburn, R-Tenn., grilled Altman on what kinds of copyright and data privacy protections were being implemented, raising concerns that generative AI could be used to profit from the work of artists in Nashville and around the world.

Blackburn suggested that third-party companies could license music and other artistic rights to those who want to use AI to create content in the style of famous artists. The senator also alluded to the need for a national data privacy law, now under discussion within the Senate Judiciary Committee, that could have wide-reaching implications for technology and social media.

Subjecting AI to a review-and-approval process similar to the Food and Drug Administration’s vetting of drugs was also raised several times.

Marcus told the committee that the rating and approval system would need to come from a new regulatory body that can also conduct research. He suggested a new Cabinet-level position to tackle AI affairs and regulation. Large startups and big tech companies cannot be trusted to regulate themselves, Marcus argued, noting that OpenAI has not disclosed what data is used to train its models.

“We are facing a perfect storm of corporate irresponsibility, widespread deployment, lack of adequate regulation and inherent unreliability,” Marcus said.

License to AI

The subject of AI licensing came up several times, with comparisons drawn to drone operators and pilots, who must hold special licenses to operate in the US and Europe. Lawmakers did not detail how such a licensing scheme would be set up or what body would issue the credentials.

Altman and some committee members agreed that global standards for large language models were necessary and that an agency like CERN, the European intergovernmental organization that conducts particle physics research, was needed to promote research, oversight and ethics.

A part of the government’s role, the witnesses agreed, should be providing funding for AI research and startups to combat misinformation and misuse of AI technologies.

“I think if this technology goes wrong, it can go quite wrong,” Altman said. “We want to work with the government to prevent that from happening.”
