For the first time in more than a decade, it truly feels like we are experiencing massive changes in how we use computers now, and in how we will use them in the future. The ferocious, burgeoning industry of artificial intelligence, machine learning, LLMs, image generators, and other nascent inventions has become part of our lives first gradually, then suddenly. The growth of this new industry is an opportunity to reflect on how it ought to develop while avoiding the kinds of problems which have come before.
A frustrating quality of industries and their representatives is a general desire to avoid scrutiny of their inventions and practices. High technology is no different. Its representatives begin by claiming things are too new, or that worries are unproven, and that there is therefore no need for external policies governing their work. They argue industry-created best practices are sufficient to curtail bad behaviour. Then, after a period of explosive growth, when regulators become eager to corral growing concerns, those same industry voices protest that regulations will kill jobs and destroy businesses. It is a very clever series of arguments which can, luckily, be repurposed for any issue.
Eighteen years ago, EPIC reported on the failure of trusting data brokers and online advertising platforms to self-regulate. It compared them unfavourably to the telemarketing industry, which pretended to self-police for years before the Do Not Call list was introduced. The list was a rousing success at the time; unfortunately, regulators were underfunded and failed to keep pace with technological change. Amid overwhelming public frustration with the state of robocalls, the U.S. government began rolling out call verification standards in 2019, and Canadian regulators followed suit. For U.S. numbers, those verification standards will become even more stringent just nine days from now.
These are imperfect rules and they are producing mixed results, but they are at least an attempt to address a common problem, and one with some success. Meanwhile, a regulatory structure for personal privacy remains elusive. The data broker and advertising industry still believes self-regulation is effective despite all evidence to the contrary, as my regular readers are fully aware.
Artificial intelligence and machine learning services are growing in popularity across a wide variety of industries, which makes this the perfect opportunity to create a regulatory structure and a set of ideals for safer development. The European Union has already proposed a set of restrictions based on risk. Some capabilities, such as automated systems used in education, law enforcement, or hiring contexts, would be considered “high risk” and subject to ongoing assessment. Other services would face transparency requirements. I do not know if these specific rules are good ones but, on their face, the behavioural ideals the E.U. appears to be constructing are fair. The companies building these tools should be expected to disclose how their models were trained and, if they do not, there should be consequences. That is not unreasonable.
This is about establishing a set of principles to which new developments in this space must adhere. I am not sure exactly what those look like, but I do not think the correct answer is to let businesses figure it out on their own, only for regulators to struggle to catch up years later with lobbyist-influenced half-measures. Things can be different this time around if there is a demand and an expectation that they will be. Written and enforced correctly, regulations can help temper this industry's worst tendencies while allowing it to flourish.