On Steven Sinofsky’s Post on Regulating A.I. ⇥ technovia.co.uk
There is a myth in Silicon Valley that innovation is somehow an unalloyed good which must always be protected and should never be regulated, lest we stop some world-shaking discovery. It doesn’t take 20 seconds of thinking – or even any understanding of history – to see that’s not true. Yes, experimentation is how we learn, how we discover new things which benefit us all. But there is no sphere of knowledge, outside possibly the humanities, where experimentation goes completely unregulated. If you want to do nuclear research, good luck getting a permit to run your experimental reactor in the middle of a city. If you would like to do experimental chemistry, you’re going to be on the wrong side of the law if you do it in your garage.
All of those things “stifle innovation”. All of them are entirely justified. Given the world-changing hype around A.I. – hype created by technology business people themselves – those same people really should get used to a little stifling too.
It is possible to discount the cautionary letter signed by A.I. developers earlier this year, and to find fault with the executive order mechanism, while still agreeing with the overall thrust of Betteridge’s response. It should be possible to nurture the incredible possibilities of A.I. touted by its biggest proponents while creating guardrails to reduce its biggest risks.
One problem with talking about “regulating A.I.” is that “A.I.” is such a vague and expansive term. Some companies have rebranded existing processes with “A.I.” language, with varying degrees of seriousness. Generative text and media tools are also “A.I.”, and so is facial recognition, and so is a self-driving car. I think even the most optimistic person would acknowledge that autonomous vehicles need oversight, and that facial recognition carries so many privacy and surveillance implications that it desperately needs more aggressive regulation.