SB 1047 Crisis in AI Regulation
As debate over SB 1047 intensifies, California State Senator Scott Wiener remains steadfast in his support for the bill, which he believes represents responsible regulation of artificial intelligence.
SB 1047, championed by Wiener, aims to make artificial intelligence (AI) technologies safer by requiring AI companies to conduct comprehensive safety assessments before making their models available to the public.
Wiener emphasized that the bill, which he introduced in February, is vital to public safety and national security. It has, however, drawn strong opposition from OpenAI, one of the industry's most prominent companies.
In a letter opposing SB 1047, addressed to California Governor Gavin Newsom and Wiener, OpenAI warned that such regulation could stifle innovation and drive talented workers out of California.
OpenAI’s chief strategy officer, Jason Kwon, argued that the bill could threaten California’s leading position in the tech industry. According to Kwon, the AI industry is still in its early stages, and overly restrictive state-level regulation could hamper the growth of this innovative field.
Kwon also argued that federal legislation, rather than state law, is the more appropriate way to regulate AI, a view shared by others in the technology sector.
But Wiener dismissed these concerns as tired and unfounded. In a press release published on August 21, he noted that OpenAI did not criticize a single specific provision of the bill, suggesting that the company’s objections stem more from a general fear of regulation.
Wiener called OpenAI’s claim that SB 1047 would harm companies in California unreasonable, emphasizing that the bill covers not only companies based in California but any company offering AI models in the state, so relocating would not exempt a company from its requirements.
SB 1047 requires AI companies to conduct comprehensive safety assessments of the models they release, with the aim of identifying the models’ potential risks. The bill also requires that developers be able to fully shut down models that pose serious risks.
According to Wiener, these provisions are both reasonable and necessary because they protect the public against the unpredictable dangers of advanced AI systems. Such safeguards only grow more important as rapid progress in AI makes the technology’s potentially harmful consequences harder to predict.
Wiener also noted that OpenAI had previously committed to conducting exactly this kind of safety assessment, calling the company’s opposition to the bill contradictory.
Wiener emphasized that despite OpenAI’s advocacy for federal regulation, Congress has yet to take meaningful action on AI safety. He pointed out that California previously passed data privacy legislation in the absence of a federal law, and that law became a model for other states. Wiener believes a similar path should be followed on AI safety.
In July, OpenAI announced its support for three Senate bills focused on the safety and accessibility of artificial intelligence. These bills (the Future of AI Innovation Act, the CREATE AI Act, and the NSF AI Education Act) address different aspects of AI policy.
Wiener, however, argues that SB 1047 lays the groundwork for such regulation and plays a critical role in protecting public safety.