Bowman Heiden and Nicolas Petit write for the Hill about how they think those who have an “absolute” interpretation of privacy are missing the point:
Privacy absolutism comes from a world of physical privacy where moral agents are the norm. However, the digital world is different than the physical world. Leaving aside legitimate fears like information leaks, incorrect predictions or systemic externalities that may require appropriate regulations or technical standards, our policymakers must understand that users do not view digital privacy in black and white, but rather in shades of gray.
This is a fair point, but it is aimed at a straw man: I don’t think anyone is asking for “absolute” privacy. The authors point to “doctors, lawyers and priests” as examples of people who frequently handle others’ private information, but we willingly give those professionals that information with our consent. That model has not translated to the ease, speed, and scale of personal information aggregation on the web.
Here’s a wacky thought experiment from their piece:
You and your spouse are sleeping upstairs in your house after a long day of work. Downstairs, the living room and kitchen are a mess. There just wasn’t time to manage it all. During the night, someone sneaks in, takes a photo of the disorder, and then proceeds to clean it up.
What is there to learn here? At first blush, all of us should find the privacy intrusion intolerable. And yet, on further thought, some of us may accept the trade-off of saving money on a cleaning service and enjoying breakfast in a tidy room. […]
Earlier this year, an almost identical incident happened to a Massachusetts homeowner, who found it “weird and creepy”. I don’t know anyone who would find this situation acceptable, regardless of the benefit of a clean house. But if I had requested a house cleaner, it would be fine, as I imagine it would be for anyone. Again: the difference is that permission has been granted.
Heiden and Petit also betray a level of ignorance that is scarcely believable for the co-authors of an article about privacy and tech companies:
It is not that hard to understand that no one at Facebook is passing moral judgment on photos of your carbon-intensive vacation, your meat consumption at a restaurant or your latest political rant. And when someone searches Google for “how to avoid taxes,” there’s no need to add, “I’m asking for a friend.” In both cases, there is just a set of algorithms seeing sequences of 1s and 0s. And when humans listen to your conversations with a digital assistant, they’re basically attempting to refine the accuracy of the translating machine that will one day replace them. How different is this from scientists in a laboratory looking at results from a clinical trial?
Let’s skip over the stock photo interpretation of how computers work and focus on the wildly myopic interpretation of how the accumulation of personal information at endpoints like Google and Facebook can be connected back to individuals. Google search data has been used as evidence in court; Facebook employees have been caught spying on users.
But these are minor blips in how collected data may be used and abused. The real problems — and, again, I must stress how badly the co-authors of this op-ed missed both of these things — are the lack of informed consent, and the scale and speed of personal information collection. We are only starting to understand the full unwelcome consequences of privacy-rejecting business models.
Of course, this wouldn’t be so bad if policymakers weren’t avid readers of the Hill. As it is, this tripe may help persuade some of them to relax their newfound and admirably aggressive stance on privacy.