AI Regulation: Distractions, Not Solutions
California's SB 1047 might have hurt big tech a little, but it was just a distraction. Governor Newsom's veto of the proposed AI regulation changes little in the AI game.
Earlier today, Governor Gavin Newsom of California, a flying car enthusiast, vetoed the proposed AI regulation bill SB 1047, even though it had passed 32-1 in a bipartisan Senate vote.
We shouldn’t be too surprised by this outcome, nor should we be too disappointed. I’ll explain why.
In his official statement on vetoing the bill, Newsom wrote:
“By focusing only on the most expensive and large-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology. Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047 - at the potential expense of curtailing the very innovation that fuels advancement in favor of the public good.”
In other words, Newsom is saying: this is very unfair to my friends in big tech, because it will slow them down and hold them accountable for any mistakes they make, while leaving the door wide open for smaller startups, less encumbered by pesky regulations, to enter the race and potentially disrupt the emerging field of AI tech.
This is all scripted, and Gavin’s friends in Silicon Valley will be drinking a cocktail of Soylent and painkillers in his honor tonight. For a while now, obscenely rich white guys who argue over tabs vs spaces have been gathering in places like Switzerland to compare notes on how to protect their monopolies in the burgeoning AI industry, and it’s become pretty obvious that they believe in regulation only when it hurts the little guys more. For the Fortune 500, regulation is a game that heavyweights can win; for startups, it lands a crippling blow.
Which makes Nancy Pelosi’s statement on the veto disappointingly obtuse:
“We have the opportunity and responsibility to enable small entrepreneurs and academia – not big tech – to dominate.”
Is this some kind of congressional reverse psychology? Because Nancy, passing this bill would have potentially achieved your goals! The Brookings Institution has called out this misrepresentation, and many more, in an interesting report published earlier this week.
Fueled by fears of rogue terminators that cause nuclear holocausts, unleash biological weapons, or turn us all into paperclips, SB 1047 was designed to “help prevent catastrophic harms” and tried to establish a “kill switch” in the event of imminent disaster. This is not what we need right now.
This focus on dystopian science fiction is misleading: it plays to big tech’s advantage, distracting us all with fears of the far future while they run unchecked with AI systems that impact our lives today.
All this talk of catastrophes and kill switches leads Congress to ignore less apocalyptic but still wide-reaching ethical, social, cultural, political, and environmental concerns. It veers us away from critical present-day topics: the copyright infringement facing writers and artists, security risks, ethical dilemmas, privacy concerns, job displacement, economic inequality, and the bias and discrimination at the heart of today’s AI rollout.
What’s ironic is that last month Newsom signed bills aimed at curtailing political deepfake ads, which caused no uproar in Silicon Valley but did provoke the ire of the general public, confused and whipped into a frenzy by right-wing X accounts spreading the catchphrase “Newsom hates memes”.
We need regulations for mitigating AI harms, but first, we’ve got to find our way through a storm of disinformation, misunderstanding, hype, and snake oil. We’ll never have self-driving cars if we can’t pay attention to the clear and present dangers right in front of us.
Thanks to the writer and technologist for this insightful piece, commissioned by HII.