There has been a wave of artificial intelligence regulatory news this week, and I thought it would be useful to collect a few of those stories in a single post.
Earlier this week, U.S. President Joe Biden issued an executive order:
My Administration places the highest urgency on governing the development and use of AI safely and responsibly, and is therefore advancing a coordinated, Federal Government-wide approach to doing so. The rapid speed at which AI capabilities are advancing compels the United States to lead in this moment for the sake of our security, economy, and society.
Reporting by Josh Boak and Matt O’Brien of the Associated Press indicates this executive order was informed by several experts in the technology and human rights sectors. Unfortunately, it seems that something I interpreted as a tongue-in-cheek reference to the adversary of the latest “Mission: Impossible” movie is being taken seriously and out of context by some.
Steven Sinofsky — who, it should be noted, is a board partner at Andreessen Horowitz, which still has as its homepage that ridiculous libertarian manifesto, which is, you know, foreshadowing — is worried about that executive order:
I am by no means certain if AI is the next technology platform the likes of which will make the smartphone revolution that has literally benefitted every human on earth look small. I don’t know sitting here today if the AI products just in market less than a year are the next biggest thing ever. They may turn out to be a way stop on the trajectory of innovation. They may turn out to be ingredients that everyone incorporates into existing products. There are so many things that we do not yet know.
What we do know is that we are at the very earliest stages. We simply have no in-market products, and that means no in-market problems, upon which to base such concerns of fear and need to “govern” regulation. Alarmists or “existentialists” say they have enough evidence. If that’s the case then so be it, but then the only way to truly make that case is to embark on the legislative process and use democracy to validate those concerns. I just know that we have plenty of past evidence that every technology has come with its alarmists and concerns and somehow optimism prevailed. Why should the pessimists prevail now?
This is a very long article with many arguments against the Biden order. It is worth reading in full; I have just pulled its conclusion as a summary. I think there is a lot to agree with, even if I disagree with its conclusion. The dispute is not between optimism and pessimism; it is between democratically regulating industry, and allowing industry to dictate the terms of if and how it is regulated.
That there are “no in-market products […] upon which to base such concerns” is probably news to companies like Stability AI and OpenAI, which sell access to Eurocentric and sexually biased models. There are, as some will likely point out, laws in many countries against bias in medical care, hiring, policing, housing, and other significant areas set to be revolutionized by A.I. in the coming years. That does not preclude the need for regulations specifically about how A.I. may be used in those circumstances, though.
Ben Thompson:
The point is this: if you accept the premise that regulation locks in incumbents, then it sure is notable that the early AI winners seem the most invested in generating alarm in Washington, D.C. about AI. This despite the fact that their concern is apparently not sufficiently high to, you know, stop their work. No, they are the responsible ones, the ones who care enough to call for regulation; all the better if concerns about imagined harms kneecap inevitable competitors.
[…]
In short, this Executive Order is a lot like Gates’ approach to mobile: rooted in the past, yet arrogant about an unknowable future; proscriptive instead of adaptive; and, worst of all, trivially influenced by motivated reasoning best understood as some of the most cynical attempts at regulatory capture the tech industry has ever seen.
There is a neat rhetorical trick in both Sinofsky’s and Thompson’s articles. It is too early to regulate, they argue, and doing so now would only stifle the industry and prevent it from reaching its best potential and highest aspirations. But it is a bit of a smokescreen to call this a nascent industry; even if the technology is new, many of the businesses working to make it a reality are among the world’s most valuable. And it only becomes more difficult to create rules as industries grow and businesses become giants — look, for example, to Sinofsky’s appropriate criticism of the patchwork approach to proposed privacy laws in several U.S. states, or Thompson’s explanation of how complicated it is to regulate “entrenched” corporations like Facebook and Google on privacy grounds, given their enormous lobbying might.
These are not contradictory arguments, to be clear; both writers are, in fact, raising a fair point. Regulations imposed on a nascent industry will hamper its growth, while waiting too long is good news for any company that can afford to write the laws. Of the two, the latter is the worse option. Yes, the former approach means a new industry faces constraints on its growth, in both speed and breadth. With a carefully crafted regulatory framework that leaves room for rapid adjustment, however, that can actually be a benefit. Instead of a well poisoned by years of risky industry experiments on the public, A.I. could come to be seen as safe and beneficial. Technologies made in countries with strict regulatory regimes may be seen as more dependable. There is the opportunity of a lifetime to avoid entrenching the same mistakes, biases, and problems we have been dealing with for generations.
Where I do agree with Sinofsky and Thompson is that such regulation should not be made by executive order. But however troublesome I find the mechanism of this policy, and however messy much of the order’s text, it is wrong to discard the very notion of A.I. regulation on that basis alone.
A group of academics published a joint paper concerning A.I. development, which I thought was less alarmist and more grounded than most of these efforts:
The rate of improvement is already staggering, and tech companies have the cash reserves needed to scale the latest training runs by multiples of 100 to 1000 soon. Combined with the ongoing growth and automation in AI R&D, we must take seriously the possibility that generalist AI systems will outperform human abilities across many critical domains within this decade or the next.
What happens then? If managed carefully and distributed fairly, advanced AI systems could help humanity cure diseases, elevate living standards, and protect our ecosystems. The opportunities AI offers are immense. But alongside advanced AI capabilities come large-scale risks that we are not on track to handle well. Humanity is pouring vast resources into making AI systems more powerful, but far less into safety and mitigating harms. For AI to be a boon, we must reorient; pushing AI capabilities alone is not enough.
John Davidson, a columnist at the Australian Financial Review, interviewed Andrew Ng, who co-founded Google Brain:
“There are definitely large tech companies that would rather not have to try to compete with open source [AI], so they’re creating fear of AI leading to human extinction.
“It’s been a weapon for lobbyists to argue for legislation that would be very damaging to the open-source community,” he said.
Ng is not an anti-regulation hardliner. He acknowledges the harms already caused by A.I. and supports oversight.
Dan Milmo and Kiran Stacey, of the Guardian, covered this week’s Bletchley Park A.I. safety summit:
The possibility that AI can wipe out humanity – a view held by less hyperbolic figures than Musk – remains a divisive one in the tech community. That difference of opinion was not healed by two days of debate in Buckinghamshire.
But if there is a consensus on risk among politicians, executives and thinkers, then it focuses on the immediate fear of a disinformation glut. There are concerns that elections in the US, India and the UK next year could be affected by malicious use of generative AI.
I do not love the mainstreaming of the idea that A.I. poses catastrophic risks to civilization, because it suggests one of two possibilities: either its proponents are wrong and are raising the alarm for cynical or attention-seeking purposes, or they are right. This is something which used to be regarded as ridiculous science fiction. That apparently serious and sober people now see it as plausible is discomforting.