Artificial intelligence companies are pushing back against California state lawmakers’ demand that they install a “kill switch” designed to mitigate potential dangers posed by the new technology — with some threatening to leave Silicon Valley altogether.
Scott Wiener, a Democratic state senator, introduced legislation that would force tech companies to comply with safety regulations, to be fleshed out by a new government-run agency, intended to prevent AI products from gaining "a hazardous capability" — such as the ability to start a nuclear war.
Wiener and other lawmakers want to install guardrails around “extremely large” AI systems that have the potential to spit out instructions for creating disasters — such as building chemical weapons or assisting in cyberattacks — that could cause at least $500 million in damages.
The measure, supported by some of the most renowned AI researchers, would also create a new state agency to oversee developers and provide best practices, including for still-more powerful models that don’t yet exist.
The state attorney general would also be able to pursue legal action against violators.
But tech firms are threatening to relocate away from California if the new legislation is enshrined into law.
The bill was passed last month by the state Senate. A state Assembly vote is scheduled for August. If the bill passes, it goes to the desk of Gov. Gavin Newsom.
A spokesperson for the governor told The Post: “We typically don’t comment on pending legislation.”
A senior Silicon Valley venture capitalist told the Financial Times on Friday that he has fielded complaints from tech startup founders who have mused about leaving California altogether in response to the proposed legislation.
“My advice to everyone that asks is we stay and fight,” the venture capitalist told FT. “But this will put a chill on open source and the start-up ecosystem. I do think some founders will elect to leave.”
Tech firms' biggest objection to the proposal is that it would stifle innovation by deterring software engineers from taking bold risks with their products, out of fear of hypothetical scenarios that may never come to pass.
“If someone wanted to come up with regulations to stifle innovation, one could hardly do better,” Andrew Ng, an AI expert who has led projects at Google and Chinese firm Baidu, told FT.
“It creates massive liabilities for science-fiction risks, and so stokes fear in anyone daring to innovate.”
Arun Rao, lead product manager for generative AI at Meta, wrote on X last week that the bill was “unworkable” and would “end open source in [California].”
“The net tax impact by destroying the AI industry and driving companies out could be in the billions, as both companies and highly paid workers leave,” he wrote.
Prominent Silicon Valley tech researchers have expressed alarm in recent years over the rapid advancement of artificial intelligence, saying that the consequences for humans could be dire.
“I think we’re not ready, I think we don’t know what we’re doing, and I think we’re all going to die,” AI theorist Eliezer Yudkowsky, who is viewed as particularly extreme by his tech peers, said in an interview last summer.
Yudkowsky echoed concerns voiced by the likes of Elon Musk and other tech figures who advocated a six-month pause on AI research.
Musk said last year that there’s a “non-zero chance” that AI could “go Terminator” on humanity.
Source: https://nypost.com/2024/06/07/business/california-lawmakers-demand-ai-firms-install-kill-switch/