California advances AI regulation bill amid Silicon Valley concerns
The California State Assembly’s Appropriations Committee today voted in favor of a proposed law to regulate the artificial intelligence industry, a measure that has drawn the ire of some in Silicon Valley as well as federal lawmakers.
SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, would require developers of “frontier” models — models that cost at least $100 million to train — to implement safeguards and safety testing frameworks. The bill would also require companies with such models to undergo audits and give “reasonable assurance” that the models won’t cause a catastrophe. Developers would have to report their safety work to state agencies.
According to Fast Company, the bill would also establish a new agency called the Frontier Model Division, which would help the California state government with enforcement of the bill and creation of new safety standards.
The bill, which has faced opposition from big tech companies such as Meta Platforms Inc. and Google LLC, also proposes the establishment of “CalCompute,” a publicly funded computer cluster program aimed at providing operational expertise and user support for creating “equitable” AI innovation.
Though the bill has strong support in the Democrat-dominated California Assembly, two Democratic members of Congress who represent Silicon Valley, Reps. Ro Khanna and Zoe Lofgren, have expressed concern that it could stifle innovation.
“As the representative from Silicon Valley, I have been pushing for thoughtful regulation around artificial intelligence to protect workers and address potential risks, including misinformation, deepfakes and an increase in wealth disparity,” Rep. Khanna said in a statement. “I agree wholeheartedly that there is a need for legislation and appreciate the intention behind SB 1047, but am concerned that the bill as currently written would be ineffective, punishing of individual entrepreneurs and small businesses, and hurt California’s spirit of innovation.”
Lofgren, the ranking member of the House Committee on Science, Space and Technology, said separately that the bill is “heavily skewed” toward addressing hypothetical risks “while largely ignoring demonstrable AI risks like misinformation, discrimination, nonconsensual deepfakes, environmental impacts and workforce displacement.”
Amid the ongoing criticism, the bill has received some minor amendments. TechCrunch reported that the revised bill no longer requires AI labs to submit certifications of safety test results “under penalty of perjury.” Instead, labs will be required only to submit public statements outlining their safety practices, without the threat of criminal liability.
Other opponents of the bill include Christopher Nguyen, chief executive of AI startup Aitomatic Inc., who told SiliconValley.com that the bill may affect startups that rely on large language models such as Meta’s Llama 3.1.
“We depend very much on this thriving ecosystem of open-source AI,” Nguyen said. “If we can’t keep state-of-the-art technology accessible, it will immediately impact the startup ecosystem, small businesses, and even the man on the street.”
The amended bill now heads to the full California Assembly for a vote. Given the Democratic majority, its passage seems only a matter of time, though the bill would still need the governor’s signature to become law.
Image: SiliconANGLE/Ideogram