In the regulatory context, discussion of privacy invariably means data privacy—the idea of protecting designated sensitive material from unauthorized access.
But there is a second, more fundamental sense of the word privacy, one which until recently was so common and unremarkable that it would have made no sense to try to describe it.
That is the idea that there exists a sphere of life that should remain outside public scrutiny, in which we can be sure that our words, actions, thoughts and feelings are not being indelibly recorded. This includes not only intimate spaces like the home, but also the many semi-private places where people gather and engage with one another in the common activities of daily life—the workplace, church, club or union hall. As these interactions move online, our privacy in this deeper sense withers away.
Young people already understand this second definition very well. They have separate private accounts on social networks, and they’re more careful about what they share online than many older people give them credit for.
Charlie Warzel, New York Times:
I called up Ceglowski after his trip to Washington to inquire about the experience and what he thinks we can do to make opting out less of a pipe dream. Like anyone with a decent understanding of how the web works, he has a healthy skepticism that we’ll rein in privacy violations, but his one potential area of optimism really stuck with me. It’s the concept of positive regulation.
Over the phone, he explained that, while it might seem small, if real people on the internet vote with their wallets to use privacy-focused services over big data-sucking platforms like Facebook and Google, the effect could be profound. He cited the telemarketing wars of the early 2000s as an example.
“When telemarketers were fighting the ‘do not call’ list they argued that people loved having the opportunity to hear about great deals and products via phone during dinner time,” he said. “But once the regulation passed, everyone signed up for that list and it became obvious that the industry’s argument was laughable.”
After years of relentless scandals driven by the surveillance economy,¹ I think there are plenty of users out there who would be interested enough in greater privacy to pay for it. But that’s only likely to be successful if the purveyors of privacy-robbing services are held accountable for their behaviour. So far, that just isn’t happening.
¹ Many of which, by the way, were reported in stories published on websites like the New York Times’, which share visitor data with Facebook and Google, as well as lots of other third-party tracking and advertising vendors.
For example, if I visit Warzel’s article with my content blockers turned off, over fifty additional HTTP requests are made and the page takes three times as long to load. Those requests include trackers from Optimizely, Scorecard Research, Oracle, and ChartBeat, along with advertising scripts from multiple vendors that double as trackers themselves.
I’m not innocent of this either. If you’re reading this on the web — as opposed to, say, in a feed reader — there’s an analytics script running on this page and an ad in the right-hand column. In my pathetic defence, my analytics script does not share anything with third parties; it minimizes information collection, fuzzes IP addresses, and lets you opt out entirely. As far as the ad goes, it is not behavioural, my Content Security Policy prevents any extra scripts or images of unknown origin from loading — like a Google tracking pixel, for instance — and it’s my understanding that the ad network does not collect any information from my readers unless the ad is clicked.
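A Content Security Policy of the kind described above can be set with a single response header. This is only an illustrative sketch, not my actual policy: it assumes an nginx server, and `ads.example.net` is a placeholder for whatever host the ad network actually serves from.

```nginx
# Allow scripts and images only from this site and the ad network's host;
# anything else -- a third-party tracking pixel, an injected script -- is
# refused by the browser before the request is ever made.
add_header Content-Security-Policy "default-src 'self'; script-src 'self' https://ads.example.net; img-src 'self' https://ads.example.net";
```

The useful property here is that the policy is enforced client-side by the browser, so even if an ad response tried to pull in an extra resource from an unlisted origin, the load would simply fail.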