Pixel Envy

Written by Nick Heer.

On the Apparently Unmoving FCC Investigation Into Phone Carriers Selling Subscriber Location Data

Dell Cameron, Gizmodo:

How the intimate data exchanging hands in these back-alley deals compares in size and scale to, say, what Cambridge Analytica acquired on Facebook users in 2016 is ultimately made irrelevant by the fact that it’s a thousand times more sensitive. This is data meant for hunting people down. In the most outrageous case documented by the press so far, a Motherboard reporter managed to pay a bounty hunter $300 to put a trace on a cellphone in New York. The coordinates he received proved accurate up to around a quarter mile. As one Democrat on the FCC put it, the trade in Americans’ location data is “a personal and national security issue that affects every American with a cell phone.”

There’s little evidence that it’s being treated as such. Lawmakers on Wednesday openly scolded [FCC Chairman Ajit] Pai over his handling of the investigation, which is thought to be nearing the end of its first year. (It remains unclear when the investigation was actually started.) During questioning, he refused outright to say whether he’d share basic information about the investigation with the FCC’s two Democratic commissioners, Jessica Rosenworcel and Geoffrey Starks.

Who knows? This could all be nothing; Pai’s office could just be bad at email. In that case, I sympathize.

But if this is a case of what appears to be partisan nonsense, it’s deeply troubling. These carriers were providing third parties access to customers’ real-time location data. That’s an unconscionable violation of privacy; I don’t think anyone would disagree. It would be an egregious breach of duty for the FCC to investigate this without urgency or priority.

Of course, this is the Republican Party serving the doctrine of Donald Trump, so there are always gratuitous conflicts of interest or the potential for grift:

Statutorily, the FCC has one year in most cases from the date of a violation to issue a notice of apparent liability. Neither commissioner can say whether the statute of limitation has expired on any particular infraction. But notably, more than a year has passed since Senator Ron Wyden first wrote to the FCC demanding this investigation take place. A New York Times expose about a business that sold phone-tracking services to state law enforcement officials without a warrant turned a year old last week. (Pai, incidentally, represented that business — Securus Technologies — seven years ago, while working in private practice.)

Shocking.

A Report From the AMP Advisory Committee Meeting

Terence Eden:

I don’t like AMP. I think that Google’s Accelerated Mobile Pages are a bad idea, poorly executed, and almost-certainly anti-competitive.

So, I decided to join the AC (Advisory Committee) for AMP. I don’t want them surrounded with sycophants and yes-men. A few weeks ago, a bunch of the AC met in London for our first physical meeting after several exploratory video calls.

I maintain that AMP is antithetical to the open web, and a stealthy anticompetitive threat. If Google did not restrict the top “carousel” of news results on mobile to AMP pages, I doubt it would have ever caught on. It has few merits of its own and is popular solely because it has been given undue weight due to Google’s influence.

Scrap it. Take what good albeit obvious lessons have been learned from AMP — limits on asset size, no arbitrary scripts, simpler page structures — and sink its corpse to the seabed of our collective conscience.

The Night the Lights Went Out

Drew Magary, Deadspin:

I remember hosting the Deadspin Awards in New York the night of December 5th and then heading over to a karaoke bar for a staff after-party, where I ate some pizza, drank a beer, sang one song (Tom Petty’s “You Got Lucky,” which would soon prove either fitting or ironic, depending upon your perspective), and that’s it. After that comes a great void. I don’t remember inexplicably collapsing in a hallway, fracturing my skull because I had no way to brace myself for the impact. I don’t remember sitting up after that, my co-workers alarmed at the sight of blood trickling out of the back of my head. I don’t remember puking all over Barry Petchesky’s pants, vomit being one of many fun side effects of your brain exploding, as he held my head upright to keep me from choking on my own barf. I don’t remember Kiran Chitanvis quickly calling 911 to get me help. I don’t remember getting into an ambulance with Victor Jeffreys and riding to an uptown hospital, with Victor begging me for the passcode to my phone so that he could call my wife. He says I made an honest effort to help, but my circuits had already shorted out and I ended up giving him sequences of four digits that had NOTHING to do with the code. Flustered, he asked me for my wife’s phone number outright. Instead, I unwittingly gave him a series of 10 digits unrelated to the number he sought.

I don’t remember that. I don’t remember bosswoman Megan Greenwell trailing behind the ambulance in a cab with her husband and staying at the hospital ALL NIGHT to plead with them to give me a closer look (at first, the staff thought I was simply inebriated; my injury had left me incoherent enough to pass as loaded) because she suspected, rightly, that something was very wrong with me. I don’t remember doctors finally determining that I had suffered a subdural hematoma, or a severe brain bleed: A pool of blood had collected in my brain and was pressing against my brain stem. I was then rushed to another hospital for surgery, where doctors removed a piece of my skull, drained the rogue blood, implanted a small galaxy in my brain to make sure my opinions remain suitably vast, put the hunk of skull back in, and also drilled a hole in the TOP of my head to relieve the pressure. They also pried my eyes open and peeled the contact lenses off my eyeballs. They then put me into a medically-induced coma (SO METAL) so that my brain could rest and heal without Awake Drew barging in and fucking everything up.

I don’t remember any of that. I told you I wouldn’t be a very reliable narrator.

This is many things. It is gutting, inspiring, saddening, frustrating, at times very funny because Drew Magary wrote it so of course it is, illuminating, and moving. But, as a piece of writing, it’s perfect. Put this on your reading list for the weekend, or read it now. I don’t care which; it’s worth your time.

A History of Data Collection in Mobile Games

Kaitlyn Tiffany, Vox:

So what do these third-party advertisers do that’s so bad? A study conducted last year by security researchers at UC Berkeley gives us some insight.

The study focused on children’s privacy and resettable advertising IDs — the string of numbers and letters that identify you and keep a log of your clicks, searches, purchases, and sometimes geographic location as you move through various apps — in contrast with non-resettable, persistent identifiers. Phone security experts recommend regularly resetting it to limit advertisers’ ability to track you. (You can do that in the Advertising section at the bottom of the Privacy settings on an iPhone, or in the Ads menu in the Services section of an Android device’s settings.)

The study found something alarming: Of 3,454 children’s apps that share resettable advertising IDs, 66 percent were sharing persistent identifiers as well. You could reset the advertising ID every 20 minutes on the device your child is using, if you wanted to, but it wouldn’t do anything to clear their history. The only way to reset that device ID is by factory-resetting the phone or tablet and starting from scratch. More importantly, the study found that 19 percent of children’s apps contained ad-targeting software with terms of service so predatory that they’re not even legal to include in apps designed for children. Kids under 13 aren’t supposed to be tracked between apps at all, especially for advertising purposes, and especially as part of a permanent history of their digital lives.

This is the kind of thing that I would like to see sifted out of apps before they make it into the App Store. There are honest justifications for developers to use an analytics framework to sort out bugs within their apps or figure out how often a feature is being used. Still, I would like to see more limitations placed on the monetization of data collected from app usage, and on persistent identifiers — particularly when they’re operated by a third party and can therefore be used to track users across multiple apps and even across devices.
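To make the distinction concrete, here is a minimal, hypothetical sketch (not code from the study) of an iOS app reading the user-resettable advertising identifier alongside a longer-lived one; resetting the ad ID in Settings changes the first value but not the second, which is what makes pairing them so corrosive to privacy.

```swift
import AdSupport
import UIKit

// Hypothetical sketch: the two kinds of identifier an iOS app can read.

// The advertising identifier is the one users can reset in Settings;
// with "Limit Ad Tracking" turned on, it is reported as all zeroes.
let adID = ASIdentifierManager.shared().advertisingIdentifier
let trackingAllowed = ASIdentifierManager.shared().isAdvertisingTrackingEnabled

// The vendor identifier is longer-lived: resetting the advertising ID does
// not change it, so logging both lets a tracker bridge across resets.
let vendorID = UIDevice.current.identifierForVendor

print("Advertising ID: \(adID.uuidString), tracking allowed: \(trackingAllowed)")
print("Vendor ID: \(vendorID?.uuidString ?? "unavailable")")
```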

By the way, do yourself a favour and opt out of ad targeting on your iOS devices.

Consistently Stupid U.S. Administration Solicits Complaints of ‘Bias’ by Websites

Tony Romm, Washington Post:

The White House on Wednesday escalated its war against Silicon Valley when it announced an unprecedented campaign asking Internet users to share if they had been censored on Facebook, Google and Twitter, tapping into President Trump’s long-running claim that tech giants are biased against conservatives.

The effort, which the White House said on Twitter was directed at users “no matter your views,” seeks to collect names, contact information and other details from Americans. The survey asks whether they have encountered problems on Facebook, Instagram, Google-owned YouTube, Twitter or other social media sites — companies the president frequently takes aim at for alleged political censorship.

This, on the very same day that the Trump administration announced it would not sign a statement pledging to take action to combat, and avoid amplifying, violent extremist rhetoric, ostensibly on First Amendment grounds. I’m not saying that they should necessarily sign such a statement, as I understand the free speech concerns — though the pledge does not require that government signatories do anything that would curtail freedom of expression — but the contrast is notable.

It’s horseshit anyway because this is pretty obviously a means to build the Trump 2020 campaign’s email list. Also, the U.S. government can’t require private companies to change how they moderate user behaviour because that would be a violation of the First Amendment — but you knew that.

Update: Casey Newton:

In the meantime, “bias” is defined ever downward. In conservative parlance, it now refers to any instance in which the user of a social platform did not have a desired outcome. You didn’t appear high enough in search results? Your video wasn’t promoted by an algorithm? You were suspended for threatening to kill someone? It’s all just “bias” now.

Far enough down the conspiracy hole, everything has meaning, which means nothing really does.

Recent Actions by Adobe Are a Case Study in Why Customers Don’t Like Subscriptions

A neat thing about the software-as-a-service model — which, by the way, is a loathsome phrase — is that you get updates to your apps all the time.

On the other hand, you must install those updates; many of these apps don’t even give you a choice. So if a feature no longer behaves as it used to, or it’s buggy, or a crucial option is dropped, you’re out of luck.

Throw Your Laptop Into the Sea, but the Surveillance Economy Will Still Win

Maciej Cegłowski:

In the regulatory context, discussion of privacy invariably means data privacy—the idea of protecting designated sensitive material from unauthorized access.

[…]

But there is a second, more fundamental sense of the word privacy, one which until recently was so common and unremarkable that it would have made no sense to try to describe it.

That is the idea that there exists a sphere of life that should remain outside public scrutiny, in which we can be sure that our words, actions, thoughts and feelings are not being indelibly recorded. This includes not only intimate spaces like the home, but also the many semi-private places where people gather and engage with one another in the common activities of daily life—the workplace, church, club or union hall. As these interactions move online, our privacy in this deeper sense withers away.

Young people already understand this second definition very well. They have separate private accounts on social networks, and they’re more careful about what they share online than many older people give them credit for.

Charlie Warzel, New York Times:

I called up Ceglowski after his trip to Washington to inquire about the experience and what he thinks we can do to make opting out less of a pipe dream. Like anyone with a decent understanding of how the web works, he has a healthy skepticism that we’ll rein in privacy violations, but his one potential area of optimism really stuck with me. It’s the concept of positive regulation.

[…]

Over the phone, he explained that, while it might seem small, if real people on the internet vote with their wallets to use privacy-focused services over big data-sucking platforms like Facebook and Google, the effect could be profound. He cited the telemarketing wars of the early 2000s as an example.

“When telemarketers were fighting the ‘do not call’ list they argued that people loved having the opportunity to hear about great deals and products via phone during dinner time,” he said. “But once the regulation passed, everyone signed up for that list and it became obvious that the industry’s argument was laughable.”

After years of relentless scandals driven by the surveillance economy,¹ I think there are plenty of users out there who would be interested enough in greater privacy to pay for it. But that’s only likely to be successful if the purveyors of privacy-robbing services are held accountable for their behaviour. So far, that just isn’t happening.


  1. Many of which, by the way, were reported in stories published on websites like the New York Times’, which share visitor data with Facebook and Google, as well as lots of other third-party tracking and advertising vendors.

    For example, if I visit Warzel’s article with my content blockers turned off, over fifty more HTTP requests are made and it takes three times as long to load the page. The additional requests include trackers from Optimizely, Scorecard Research, Oracle, and ChartBeat; there are also advertising scripts loaded from several vendors, which function as trackers in their own right.

    I’m not innocent of this either. If you’re reading this on the web — as opposed to, say, in a feed reader — there’s an analytics script running on this page and an ad in the righthand column. In my pathetic defence, my analytics script does not share anything with third parties, it minimizes information collection and fuzzes IP addresses, and you can entirely opt out of it. As far as the ad goes, it is not behavioural, my Content Security Policy prevents any extra scripts or images of unknown origin from loading — like a Google tracking pixel, for instance — and it’s my understanding that the ad network does not collect any information from my readers unless the ad is clicked. ↩︎
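    To make the Content Security Policy part less abstract: it is just an HTTP response header that tells the browser which origins it may load resources from. A hypothetical policy along these lines (the hostnames are placeholders, not the ones this site uses) would stop a page from loading scripts or images from anywhere it does not explicitly name, tracking pixels included:

        Content-Security-Policy: default-src 'self'; script-src 'self' stats.example.com; img-src 'self' ads.example.net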

Reuters Source: Facebook Facing 20-Year FTC Privacy Consent Agreement

Just a reminder that every Facebook privacy scandal you’ve heard about for the past seven years — Cambridge Analytica, passwords stored in plain text, that thing where they were demanding email account passwords, using two-factor phone numbers for user account lookup, the private data sent to Facebook by developers using the company’s SDK, and so on; I could do this all day — was committed while the company was already promising the FTC to not violate users’ privacy.

Why I (Still) Love Tech

This essay by Paul Ford, published in Wired, is magnificent. I’ve been letting it stew all day, re-reading it a couple of times here and there. It’s beautiful, haunting, gutting, and romantic. Two excerpts from a dozen or more I could have picked to share here. First:

I keep meeting people out in the world who want to get into this industry. Some have even gone to coding boot camp. They did all the exercises. They tell me about their React apps and their Rails APIs and their page design skills. They’ve spent their money and time to gain access to the global economy in short order, and often it hasn’t worked.

I offer my card, promise to answer their emails. It is my responsibility. We need to get more people into this industry.

But I also see them asking, with their eyes, “Why not me?”

And here I squirm and twist. Because— because we have judged you and found you wanting. Because you do not speak with a confident cadence, because you cannot show us how to balance a binary tree on a whiteboard, because you overlabored the difference between UI and UX, because you do not light up in the way that we light up when hearing about some obscure bug, some bad button, the latest bit of outrageousness on Hacker News. Because the things you learned are already, six months later, not exactly what we need. Because the industry is still overlorded by people like me, who were lucky enough to have learned the etiquette early, to even know there was an etiquette.

Tech is, of course, not the sole industry with an insular and specific culture; but, that culture is something that can be changed by readers of websites like this one, or Wired. Technology has been commoditized so that you see people of every age, race, gender, and personality walking around with a smartphone or a DSLR or a smartwatch or wireless headphones, but the creation of these things hasn’t followed suit at the same rate.

The second excerpt:

I have no desire to retreat to the woods and hear the bark of the fox. I like selling, hustling, and making new digital things. I like ordering hard drives in the mail. But I also increasingly enjoy the regular old networks: school, PTA, the neighbors who gave us their kids’ old bikes. The bikes represent a global supply chain; when I touch them, I can feel the hum of enterprise resource planning software, millions of lines of logistics code executed on a global scale, bringing the handlebars together with the brakes and the saddle onto its post. Then two kids ride in circles in the supermarket parking lot, yawping in delight. I have no desire to disrupt these platforms. I owe my neighbors a nice bottle of wine for the bikes. My children don’t seem to love computers as I do, and I doubt they will in the same way, because computers are everywhere, and nearly free. They will ride on different waves. Software has eaten the world, and yet the world remains.

This sounds dour and miserable but it isn’t all that — I promise. As much as Ford examines the failings of the industry in this essay, there’s an undercurrent of optimism.

In some ways, Ford’s piece reminds me of Frank Chimero’s 2018 essay about how web development is increasingly like building software instead of just writing a document. I remember when I learned that I could view the source of a webpage, and that’s how I began to learn how to build stuff for the web. That foundation drove my career and a passion for learning how things are made. Things are different now, of course. Common toolchains now generate gnarly HTML and indecipherable CSS; the web is less elegant and human-driven. But I’m not sure that different and harder are necessarily worse.

Thinking more comprehensively about Ford’s essay, perhaps there’s a new perspective that can be brought only by those new to tech. Having grown up with the stratospheric rise of the industry and seen how it has strained, they may read this piece with a context the rest of us lack.

AT&T to Pull WarnerMedia Shows from Competing Streaming Services

Melissa Repko, Dallas News:

AT&T chief executive Randall Stephenson said Tuesday that the company will pull popular TV shows and movies from streaming rivals and “bring that content back into the fold” as it launches its own Netflix-like video service.

AT&T “will be bringing a lot of these media rights, licensing rights back to ourselves to put on our own SVOD (subscription video-on-demand) product,” Stephenson said Tuesday morning at the JPMorgan Global Technology, Media and Communications Conference in Boston.

AT&T’s new subscription video service is expected to launch in late 2019. It will be anchored by HBO TV shows and movies, along with content from Warner Bros. studios and Turner Networks. AT&T became the owner of the valuable entertainment library last June when it bought Time Warner in a deal valued at about $108.7 billion, including debt.

This new era of media conglomerates is dismal for American consumers, who will have fewer choices and face greater opportunities for exploitation. There is a conscious push away from the channel-free future many had hoped for, and towards more expensive, siloed options.

Leonid Bershidsky Wrote Maybe the Dumbest Take on This WhatsApp Spyware Story

Leonid Bershidsky, writing for Bloomberg because of course a horrible infosec article will be published by Bloomberg:

The discovery that hackers could snoop on WhatsApp should alert users of supposedly secure messaging apps to an uncomfortable truth: “End-to-end encryption” sounds nice — but if anyone can get into your phone’s operating system, they will be able to read your messages without having to decrypt them.

In related news, your text messages are also less private if someone is looking at your screen over your shoulder.

These are merely applications running on top of an operating system, and once a piece of malware gets into the latter it can control the device in a multitude of ways. With a keylogger, a hacker can see only one side of a conversation. Add the ability to capture a user’s screen, and they can see the full discussion regardless of what security precautions are built into the app you are using.

“End-to-end encryption” is a marketing device used by companies such as Facebook to lull consumers wary about cyber-surveillance into a false sense of security.

End-to-end encryption is not mere marketing; everyone knows this, and it’s a jackass move to suggest otherwise. Exploits that gain system-wide access, like those used by NSO Group, are exceedingly rare. It is far more likely that data will be intercepted in transit. Encrypting anything as it travels across the world is not lip service or marketing — it’s good sense.
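For anyone unsure what that encryption actually buys you, here is a rough sketch of the basic idea, using Apple’s CryptoKit purely for illustration (WhatsApp uses the Signal protocol, which is far more involved): the two endpoints derive a shared key from each other’s public keys, and anything sitting between them only ever sees ciphertext.

```swift
import CryptoKit
import Foundation

// Rough sketch of end-to-end encryption, not a real messaging protocol.
do {
    // Each endpoint has its own private key; only the public halves are shared.
    let alice = Curve25519.KeyAgreement.PrivateKey()
    let bob = Curve25519.KeyAgreement.PrivateKey()

    // Alice derives a symmetric key from her private key and Bob's public key.
    // Bob can derive the same key from his private key and Alice's public key.
    let secret = try alice.sharedSecretFromKeyAgreement(with: bob.publicKey)
    let key = secret.hkdfDerivedSymmetricKey(using: SHA256.self, salt: Data(),
                                             sharedInfo: Data(), outputByteCount: 32)

    // This sealed box is all a relay server, or anyone on the wire, ever sees.
    let sealed = try AES.GCM.seal(Data("meet at noon".utf8), using: key)
    print(sealed.combined?.base64EncodedString() ?? "")

    // Only an endpoint holding the key can read it, which is why attackers
    // like NSO Group go after the device itself rather than the transport.
    let plaintext = try AES.GCM.open(sealed, using: key)
    print(String(decoding: plaintext, as: UTF8.self))
} catch {
    print("Crypto error: \(error)")
}
```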

It’s foolish for Bershidsky to have written this terrible article, and it beggars belief that any editor who has the first inkling of knowledge about encryption or information security would choose to run it. Alas, this is Bloomberg.

WhatsApp Voice Calls Used to Inject NSO Group Spyware on Phones

Mehul Srivastava, Financial Times:

WhatsApp, which is used by 1.5bn people worldwide, discovered in early May that attackers were able to install surveillance software on to both iPhones and Android phones by ringing up targets using the app’s phone call function. 

The malicious code, developed by the secretive Israeli company NSO Group, could be transmitted even if users did not answer their phones, and the calls often disappeared from call logs, said the spyware dealer, who was recently briefed on the WhatsApp hack.

This vulnerability feels a little like an echo of Apple’s FaceTime bug from earlier this year, except it’s much, much worse. All a target needed was to have WhatsApp installed and associated with their phone number; with just that, according to this report, an attacker could remotely install NSO Group’s Pegasus spyware.

The good news is that unless you’re a journalist, an activist, or a tech CEO exposing corruption in Saudi Arabia, in particular, you likely won’t be targeted with Pegasus spyware. Still, keep your devices up to date; Apple released iOS 12.3 today with a bunch of security fixes.

Update: The Dumpster Fire on Twitter:

So, Saudi Arabia has and has used the WhatsApp malware — which spies on phones, can even record audio and video — and Trump’s senior advisor/son-in-law Jared Kushner uses the app to communicate with the Crown Prince of Saudi Arabia… cool cool cool

Neat.

Lawsuit Targeting Apple’s 30% App Store Levy Is Allowed to Proceed, U.S. Supreme Court Rules

Bill Chappell and Nina Totenberg, NPR:

The theory of the lawsuit is that Apple’s 30% commission charge to app developers is often passed on to consumers — creating a higher-than-competitive price — and that competitors are shut out because Apple prevents iPhone owners from buying apps anywhere other than its App Store.

Apple sought to block the lawsuit, asserting that it had not set the prices on the apps and thus the iPhone owners had no standing to sue.

But the 9th Circuit Court of Appeals ruled against Apple, and on Monday the Supreme Court agreed.

This one is worth keeping an eye on, particularly as the E.U. also begins examining Spotify’s complaint alleging anticompetitive behaviour on Apple’s part.

Questions About 5G Safety Are Being Politicized

Parked atop the New York Times’ homepage right now — arguably one of the most influential positions in English-language media for any news story — is this story by William J. Broad about how RT America has been framing questions about the safety of 5G networking. Here’s a taste:

The Russian network RT America aired the segment, titled “A Dangerous ‘Experiment on Humanity,’” in covering what its guest experts call 5G’s dire health threats. U.S. intelligence agencies identified the network as a principal meddler in the 2016 presidential election. Now, it is linking 5G signals to brain cancer, infertility, autism, heart tumors and Alzheimer’s disease — claims that lack scientific support.

Yet even as RT America, the cat’s paw of Russia’s president, Vladimir Putin, has been doing its best to stoke the fears of American viewers, Mr. Putin, on Feb. 20, ordered the launch of Russian 5G networks in a tone evoking optimism rather than doom.

[…]

Hundreds of blogs and websites appear to be picking up the network’s 5G alarms, seldom if ever noting the Russian origins. Analysts call it a treacherous fog.

This story is right in claiming that RT’s let’s-call-it-reporting vastly overstates any known concerns about 5G networking. It’s fair to assume that RT, owing to its Kremlin connection and eagerness to hype conspiracy theories, is happy to exploit scientific illiteracy as a way to stoke fear. Broad explains the loaded terminology used by the network, and cites good sources and knowledgeable individuals who see little cause for health concern in the frequencies used by 5G.

However, this article also gets carried away in definitively stating the safety of 5G by too readily ascribing concerns to Russian propaganda.

An article published last month in Computer Weekly by a coalition of investigative journalists cited several scientific bodies and research institutes that have questions about the safety of 5G. They also quote David Carpenter, who, as the Times explained, is an inaccurate alarmist. Susan Crawford, in a piece for Wired, pointed out that the FCC’s health testing standards are possibly outdated, being based on thirty-year-old German research; but she also uses “some say” weasel words to insinuate connections between the telecom industry and the German research institute. An article by Mark Hertsgaard and Mark Dowie, published by the Nation last year, explored the wireless industry’s successful lobbying efforts.

Meanwhile, the source for Broad’s claim that RT’s propaganda is being widely circulated is a Google search for "RT America" "5G". Yeah, really. The way that sentence is phrased, you’d think that RT citations are appearing in loads of mainstream blogs. But, right now, that Google search is returning results for: this Times story; a bunch of stories and videos from RT America, of course; and several conspiracy websites. No mainstream blog or website that I can find has so far decided to use RT as a source for questions about 5G safety. On the contrary, bigger publications are asking scientists and industry representatives for their thoughts, as is responsible. The fact that RT’s stories are being circulated by idiots who would trust the network if it reported that the Pacific and Atlantic oceans had swapped places is not indicative of a successful propaganda campaign.

All of this is not to say that the Times’ story is wrong. Nor is it to establish false equivalency — there are not two equal sides here. There are thousands of scientists working around the world to try to answer the questions of whether wireless networking has any health risks, and whether 5G has any specific concerns. But the headline used by the Times — “Your 5G Phone Won’t Hurt You. But Russia Wants You to Think Otherwise.” — is another entry in a series of headlines that oversimplify a nuanced story, and the article itself and its push notification are too quick to blame questions about 5G’s safety on Russian propaganda.

It’s Not Enough to Break Up Tech Giants

Here’s something that rarely happens: I agree with Alex Stamos. Or, at least, I agree with his argument that we should not consider a breakup of Facebook as a panacea to its ills.

Facebook’s market domination is dangerous from a privacy perspective — inasmuch as it collects a lot of data about a lot of people — and the company ultimately benefits from the inherent influence of its size when it attempts to push the limits of what is socially acceptable.

Trevor Callaghan on Twitter:

I’ve always taken the view that access to information is the key power dynamic. [Facebook, Google, Amazon] and others all have unique, entirely proprietary stores of information that they use. On the one hand, you can (and should) regulate that where it intersects with privacy.

On the other, continue to be concerned about that information only being processed/controlled by those entities. If we really want to break control then you can certainly regulate use, but you ultimately need to either take control back entirely to the individual, or find a way to make those assets non-rivalrous. This is often dismissed as an extremist position, but think about what would happen if, for example, any other company (with your consent) could build a product with the social graph and interests [Facebook] have for you or a search engine tailored to your interests from [Google] data, but on a web index that was part of a data commons?

I think this is a fascinating idea worth discussing, but I also think that there’s a simpler option already available: interpret antitrust laws that are already on the books with more than the financial cost of goods and services in mind. Consider user data and anti-privacy business models as costs, as well. Google and Facebook already do.

Google’s New Privacy Features Put the Responsibility on Users

Lauren Goode, Wired:

But as Google increases the number of privacy features — part of an attempt to scrub its reputation clean of data-tracking dirt — the setup of the settings, toggles, and dashboards within its apps seems to put more responsibility on the individual user rather than the platform. As Pichai himself said, Google aims to give people “choices.” So it’s your choice if you want to take the time to adjust, monitor, take out, or toggle something off. Just like it’s Google’s choice to not change its fundamental approach to gathering data to help better target advertising and thus make heaps of money.

Google is fully aware that most people will not choose to go spelunking around their privacy settings to get things configured just so. Most people are just going to stick with the defaults. And those defaults will, for the foreseeable future, be skewed to protect Google’s data collection interests.

Taking Stock of Subscriptions

Joanna Stern, writing in the Wall Street Journal which, yes, you need a subscription to read:

The technology industry loves the term SaaS, or Software as a Service. It’s the idea that software isn’t just bought once and installed, but rather is subscribed to and always updating. Microsoft Office 365? SaaS. Google Drive? SaaS. Your kid’s coding app? SaaS again.

There’s also CaaS, Content as a Service. Netflix? Hulu? Spotify? Apple News+? All CaaS. And then there’s HaaS, hardware as a service. Your connected door lock, thermostat, security camera, maybe even your car or your toothbrush, now come with subscriptions.

Throw it all into one basket and call it Everything as a Service or — don’t hate me — “EaaS.”

I completely get the short-term allure of this from the perspective of platforms and accountants. It’s a steady, predictable, easy revenue stream — particularly if users are locked into year-long contracts.

But, especially over the long term, I think users will find it fatiguing — at best — to live in a world where we pay hundreds of dollars a month to listen to music, use software, and store files. There are advantages: we can listen to most music of our choosing on demand; our software is constantly up to date and regularly has new features; the files we store are synced across our devices.

Extrapolated over a longer term, however, these niceties start to feel like lock-in. What if your music listening habits don’t change all that much? What if you don’t really need all those new features, or you’re frustrated that you feel forced to relearn a piece of software you’ve relied upon for years because an update changed the UI dramatically? What if you only edit most of your files from the same device?

There are records that I’ve listened to a hundred times that I paid for once. That’s amazing to me. So is the fact that I paid for a license for Photoshop eight years ago and have consistently used it over that time. Now, it’s a subscription product.

More than anything, I submit to you that the things we are obligated to pay for on a set date every month are generally the things that we are least excited to spend our money on. Rent, utilities, insurance — these are things we need, but they do not do anything by themselves. An apartment is least exciting for what it is on its own; it’s only made interesting by how we use it and make it our home. An internet connection is just a wire to some panel somewhere until we start using it for other stuff.

I get excited when I sit down to listen to a new record or use new software. I’m not excited to pay bills.

See Also: “Hi, it’s me, the app you’ve never used that’s still billing you”.

Update: Matt Roszak received an email from Adobe stating that they’re discontinuing the older version of Animate — formerly Flash CC — that he uses. They have informed him that if he continues to use it, it is a violation of their terms and he could be sued.

Chris Hughes, a Facebook Co-Founder, Argues for the Breakup of Facebook

Chris Hughes in an op-ed for the New York Times:

Facebook’s dominance is not an accident of history. The company’s strategy was to beat every competitor in plain view, and regulators and the government tacitly — and at times explicitly — approved. In one of the government’s few attempts to rein in the company, the F.T.C. in 2011 issued a consent decree that Facebook not share any private information beyond what users already agreed to. Facebook largely ignored the decree. Last month, the day after the company predicted in an earnings call that it would need to pay up to $5 billion as a penalty for its negligence — a slap on the wrist — Facebook’s shares surged 7 percent, adding $30 billion to its value, six times the size of the fine.

The F.T.C.’s biggest mistake was to allow Facebook to acquire Instagram and WhatsApp. In 2012, the newer platforms were nipping at Facebook’s heels because they had been built for the smartphone, where Facebook was still struggling to gain traction. Mark responded by buying them, and the F.T.C. approved.

[…]

The alternative is bleak. If we do not take action, Facebook’s monopoly will become even more entrenched. With much of the world’s personal communications in hand, it can mine that data for patterns and trends, giving it an advantage over competitors for decades to come.

Mike Masnick of Techdirt wrote a counterargument which, I think, rather misses the point even as it identifies many of the errors in Hughes’ piece. Yes, Hughes mixes up patents and copyright infringement, and he employs flawed readings of CDA 230 and the First Amendment.

But the bulk of Hughes’ argument is strong: Facebook grew by acquiring competitors to establish an enormous user base over which it wields control of communications to an unprecedented degree. Breaking the company up into separate, smaller companies would allow users to join multiple platforms if they’d like, remain on a single one if that’s what pleases them, and prevent a single mass collection of data.

Facebook spokesperson and former Deputy Prime Minister of the U.K. Nick Clegg read Hughes’ editorial and responded predictably. Of course Facebook wants to muddy the waters by positioning themselves as just another tech company because, if all “big tech” companies are treated the same and Facebook gets to help write the rules on that, they can give themselves an advantage.

Driving Change

M. R. O’Connor, writing in the New Yorker, explored what it means to be a driver in an era when that very idea is shifting (the Roy here is Alex Roy, whom you may know for his Polizei 144 antics or for driving across the United States in just over 31 hours):

Finally, Roy points out that many of the problems autonomous cars promise to solve also have simpler, non-technological solutions. (This is true, of course, only if one assumes that driving isn’t a problem in itself.) To reduce traffic, governments can invest in mass-transit and road infrastructure. To diminish pollution, they can build bike lanes and encourage the adoption of electric cars. In Roy’s opinion, the best way to make driving safer has nothing to do with technology: it’s to raise licensing standards and improve driver education. Over lunch — a Niçoise salad — Roy argued that our fixation on driverless cars flows from our civic laziness. “It’s easier to imagine that technology can solve a problem that education or regulation could also fix,” he said. In place of the driverless utopia that technologists often picture, he asked me to consider another possibility: a congested urban hellscape in which autonomous vehicles are subsidized by companies that pump them full of advertising; in exchange for free rides, companies might require you to pass by particular stores or watch commercial messages displayed on the vehicles’ windows. (A future very much like this was recently imagined by T. Coraghessan Boyle, in his short story “Asleep at the Wheel.”) In such a world, Roy said, “The joy of the ride is taken away.”

[…]

Perhaps it was inevitable that a nascent right-to-drive movement would spring up in America, where — as fervent gun-rights advocates and anti-vaccinators have shown — we seem intent on preserving freedom of choice even if it kills us. “People outside the United States look at it with bewilderment,” Toby Walsh, an Australian artificial-intelligence researcher, told me. In his book “Machines That Think: The Future of Artificial Intelligence,” from 2018, Walsh predicts that, by 2050, autonomous vehicles will be so safe that we won’t be allowed to drive our own cars. Unlike Roy, he believes that we will neither notice nor care. In Walsh’s view, a constitutional amendment protecting the right to drive would be as misguided as the Second Amendment. “We will look back on this time in fifty years and think it was the Wild West,” he went on. “The only challenge is, how do we get to zero road deaths? We’re only going to get there by removing the human.”

I would love to hear from readers around the world whether Walsh’s perspective holds true. Is apprehension towards self-driving cars, or the desire to preserve a human’s right to take the wheel, a mostly American stance? For what it’s worth, it was a software control that could not easily be overridden that brought down two 737 Max airplanes.

Also, I thought this was an insightful observation in the context of platform freedom, obfuscated code, and increasingly locked-down hardware:

In his book “Shop Class as Soulcraft: An Inquiry Into the Value of Work,” from 2009, the political philosopher and motorcycle mechanic Matthew B. Crawford argues that manual competence — our ability to repair the machines and devices in our lives — is a kind of ethical practice. Knowing how to fix things ourselves creates opportunities for meaningful work and individual agency; it allows us to grasp more deeply the built world around us. The mass-market economy, Crawford writes, produces devices that are practically impenetrable. If we try to repair our microwaves or printers, we’ll quickly be discouraged by their complexity; many cars produced today lack even dipsticks to check their oil levels. Driving the Tesla Model 3 has been compared to using a giant iPhone: instead of controlling the car directly, one seems to pilot it by means of a user interface.

This is a great essay.

Vice Reporter Becomes a Victim of Google’s Featured Snippets

Lorenzo Franceschi-Bicchierai, Vice:

This keeps happening. In the last three days, I’ve gotten more than 80 phone calls. Just today, in the span of eight minutes, I got three phone calls from people looking to talk to Facebook. I didn’t answer all of them, and some left voicemails.

Initially, I thought this was some coordinated trolling campaign. As it turns out, if you Googled “Facebook phone number” on your phone earlier this week, you would see my cellphone as the fourth result, and Google has created a “card” that pulled my number out of the article and displayed it directly on the search page in a box. The effect is that it seemed like my phone number was Facebook’s phone number, because that is how Google has trained people to think.

It’s not a stretch to see how this feature could be abused. Imagine if searching for “Microsoft phone number” produced one of those fake tech support lines. It would feel even more trustworthy because you’re the one placing the call.

Marzipan Expectations

Craig Hockenberry of the Iconfactory:

In the early days of the iPhone, there were many apps developed by folks who brought their sensibilities from Windows or the Mac to a new platform. Those apps felt wrong and have largely disappeared because everyone has figured out that different interactions are needed for a small, handheld device. Don’t let the same thing happen when you bring your iOS expertise to a Mac.

If you’ve been developing on iOS for any period of time, you’ve probably had the “flip a switch” experience when starting an iPad app. It’s fairly easy to get things up and running, but then you realize that there are a lot of design and code changes needed for a larger screen. You’ll start reworking things with master/detail views and auto layout constraints. You might even need to adapt your app to support input from a Smart Keyboard.

Look at how many user interface idiom checks you have and you’ll start to get an idea of what lies ahead for a macOS app. If you’re someone who’s decided that an iPad version of your app is too much extra work, you’ll likely think the same thing about all the conditional checks required for a Mac.
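For anyone who has not written an iOS app, the idiom checks Hockenberry is talking about are branches like the following scattered throughout a codebase; this is a hypothetical sketch, and a Mac version means yet another decision at each of these points.

```swift
import UIKit

// Hypothetical example of a user interface idiom check: the same code path
// picks different layouts depending on the kind of device it is running on.
enum Layout {
    case compactSingleColumn
    case splitView
}

func preferredLayout() -> Layout {
    switch UIDevice.current.userInterfaceIdiom {
    case .phone:
        return .compactSingleColumn
    case .pad:
        // The iPad build already needs its own handling for split views,
        // keyboard input, and so on; a Mac build would add more still.
        return .splitView
    default:
        return .splitView
    }
}
```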

My initial pessimism towards Marzipan’s possibilities was mostly born of what shipped in Mojave. But, apparently, those apps weren’t even the best Apple could have done with the Marzipan tools available at the time. Again, I question why they were shipped in the first place; I don’t know that they would be greatly missed if they hadn’t made the final build.

Everything I’ve read from Steve Troughton-Smith over the past couple of months suggests that MacOS 10.15¹ will bring a promising if incomplete step towards a full declarative UI framework. It is slowly steering my mind towards a future where developers don’t have to write two entirely separate apps if they want their software on Apple’s biggest platforms. Make no mistake — that would be great news, if it is done well. Here’s hoping.


  1. Sonora, Kelso, or Tehachapi, perhaps? ↩︎

Section 230 of the CDA Does Not Create Definitions for ‘Platforms’ or ‘Publishers’

Mike Masnick, Techdirt:

This “publisher” v. “platform” concept is a totally artificial distinction that has no basis in the law. News publishers are also protected by Section 230 of the CDA. All CDA 230 does is protect a website from being held liable for user content or moderation choices. It does not cover content created by the company itself. In short, the distinction is not “platform” or “publisher” it’s “content creator” or “content intermediary.” Contrary to Coaston’s claims, Section 230 equally protects the NY Times and the Washington Post if it chooses to host and/or moderate user comments. It does not protect content produced by those companies itself, but similarly, Section 230 does not protect content produced by Facebook itself.

This is a misconception that seems to have become increasingly common as platforms grapple with moderating users’ posts. There is no legal obligation for any private website or service to be neutral, and there is — I believe — a strong moral argument for them not to be.

Unpacking Google’s Apparent Turnaround on Privacy

It looks like something spooked Facebook and Google. Instead of ignoring the privacy implications inherent to their business models, they both decided to reposition themselves as privacy-forward companies. Facebook did so by having an op-ed from its CEO published in a national newspaper, and by trying to redefine privacy itself. Google’s strategy has, so far, been similar.

Google CEO Sundar Pichai in an op-ed for the New York Times:

“For everyone” is a core philosophy for Google; it’s built into our mission to create products that are universally accessible and useful. That’s why Search works the same for everyone, whether you’re a professor at Harvard or a student in rural Indonesia. And it’s why we care just as much about the experience on low-cost phones in countries starting to come online as we do about the experience on high-end phones.

Our mission compels us to take the same approach to privacy. For us, that means privacy cannot be a luxury good offered only to people who can afford to buy premium products and services. Privacy must be equally available to everyone in the world.

That is a terrific point. I would very much like to live in a world where Apple cannot compete on privacy because every company must follow a strict set of rules governing the collection and use of private data.

But we do not live in that world, and Google has little intention of actually changing that. For example, they announced that Chrome will soon restrict third-party cookies and cross-site tracking, but John Wilander of Apple’s WebKit team says that their plan will be ineffective for privacy protection:

For a cookie policy to have meaningful effect on cross-site tracking, you also need to partition storage available to third-parties, such as LocalStorage, IndexedDB, ServiceWorkers, and cache. Safari is the only major browser to have such partitioning and we shipped it in 2013.

[…]

Safari’s default cookie policy since 10+ years is to deny third-parties to use cookies unless they have been first party at some point. This used to be “Allow cookies for sites I visit” in Safari settings. No other major browser has shipped cookie restrictions on by default yet.

For what it’s worth, it was this very setting that Google circumvented in 2012 so that they could track Safari users, a decision which resulted in a $22 million penalty.

Google also says that they’re giving users more privacy-centric options, but they couldn’t commit to not using facial recognition in their Nest products for ad personalization.

The overall picture of Google’s approach to privacy is perhaps best summarized by Ben Thompson:

At the same time, from a purely strategic perspective, the positive message makes sense. Presuming that everything about technology is bad is just as mistaken as the opposite perspective, and the fact of the matter is that lots of people like Google products, and reminding them of that fact is to Google’s long-term benefit.

Moreover, a world of assistants and machine-learning based products is very much to Google’s advantage: the argument to not simply tolerate Google’s collection of data, but to actually give them more, is less about some lame case about better-targeted ads but about making actually useful products better. The better-targeted ads are a Strategy Credit!

In short, Google’s argument is that they’re able to protect you from other companies’ privacy-rejecting technologies, but you can and should give them more of your private data. You can trust them. But, as far as I’m concerned, they haven’t yet earned that trust.

The Potential Advantages of a JavaScript Whitelist

Brent Simmons:

What I want is two related and similar things:

  • The ability to turn off JavaScript by default, and turn it on only for selected sites. (For me that would be sites like GitHub.)

  • The ability to turn off cookies by default, and, again, turn them on only for selected sites.

If it’s the opposite — if I have to blacklist instead of whitelist — then I’d be constantly blacklisting. And, the first time I go to a site, it gets to run code before I decide to allow it.

A cookie whitelist would, I think, be frustrating to non-technical users, but it would be nice to have as an option. And, I imagine, it could be extended to allowing any kind of local storage.

But a JavaScript whitelist is something I could absolutely get behind. When you think about it, it’s pretty nuts that we allow the automatic execution of whatever code a web developer wrote. We don’t do that for anything else, really — certainly not to the same extent of possibly hundreds of webpages visited daily, each carrying a dozen or more scripts.

The web’s openness is unlike that of other platforms, which have become more locked down. There are few permission requests when visiting a webpage. That’s both beautiful and potentially damaging, particularly as new JavaScript functionality has been added and browsers have increasingly prioritized JavaScript execution time. New engines run scripts far closer to the metal — as they say — but these speed improvements have come with increased risks. Two examples:

  1. I had a webpage open not too long ago that, astonishingly enough, was mining cryptocurrency with JavaScript. This was something I had previously heard about in the context of malware, but this was a legitimate page that was attempting to make some extra money by maxing out my CPU when I left the tab open. I only noticed it when my iMac’s fans started whirring like I was rendering video or something.

  2. The speculative vulnerabilities in Intel CPUs, revealed last year, were exploitable through JavaScript.

It’s baffling to me that trackers, ad networks, cryptocurrency miners, and image lightboxes are all written for the web in the same language and that there is little granularity in how they’re treated. You can either turn all scripts off and lose key functionality on some websites, or you can turn everything on and accept the risk that your CPU will be monopolized in the background.
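Something close to what Simmons describes is already expressible with Safari’s content blocker rules. Here is a rough sketch, assuming an app that hosts a WKWebView and treating github.com, the example from his post, as the lone whitelisted domain: block every script load by default, then exempt the sites you trust.

```swift
import WebKit

// Rough sketch of a script whitelist using Safari's content blocker rule
// format: block all script loads, then exempt a handful of trusted domains.
let rules = """
[
  { "trigger": { "url-filter": ".*", "resource-type": ["script"] },
    "action": { "type": "block" } },
  { "trigger": { "url-filter": ".*", "if-domain": ["*github.com"] },
    "action": { "type": "ignore-previous-rules" } }
]
"""

WKContentRuleListStore.default().compileContentRuleList(
    forIdentifier: "script-whitelist",
    encodedContentRuleList: rules
) { list, error in
    guard let list = list else {
        print("Could not compile rules: \(String(describing: error))")
        return
    }
    // Attach the compiled list to a web view's configuration before loading pages.
    let configuration = WKWebViewConfiguration()
    configuration.userContentController.add(list)
}
```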

Current and Former Employees Say the Apple Store Isn’t as Good as It Once Was

Mark Gurman and Matthew Townsend, Bloomberg:

In interviews, current and former Apple employees blame a combination of factors. They say the stores have become mostly an exercise in branding and no longer do a good job serving mission shoppers like Smith. Meanwhile, they say, the quality of staff has slipped during an 18-year expansion that has seen Apple open more than 500 locations and hire 70,000 people. The Genius Bar, once renowned for its tech support, has been largely replaced with staff who roam the stores and are harder to track down. That’s a significant drawback because people are hanging onto their phones longer these days and need them repaired.

This report mirrors many of my own complaints when I’ve had to go to an Apple Store in the past couple of years, particularly for service or support:

The overhaul of the Genius Bar has been especially controversial. Customers looking for technical advice or repairs must now check in with an employee, who types their request into an iPad. Then when a Genius is free, he or she must find the customer wherever they happen to be in the store. Ahrendts was determined to get rid of lineups, but now the stores are often crowded with people waiting for their iPhones to be fixed or batteries swapped out.

The store I most frequently visit when I need support has a really strange vibe around the Genius Bar. I guess the intent is that, while you’re waiting five to forty-five minutes for your technician, you can look around for stuff to buy. But I don’t see people doing that. I see lots of people sitting awkwardly waiting at tables with lots of other people also sitting awkwardly. All of us just want our products fixed so we can go home.

The last time I went to an Apple Store was at the end of March or the beginning of April. I had picked up a pair of AirPods with a wireless charging case, and I wanted to exchange them for the model with the regular charging case. As I walked in, I was welcomed to the store and I explained that I wanted to do a product exchange. The greeter gestured me towards the rear-middle of the store, near the Genius Bar area. So I walked over there and asked someone if they could help me with exchanging a product, and they pointed me to a different person, to whom I once again had to explain what I wanted to do. They seemed baffled, but had me wait for them to bring me a new set of AirPods which, I think, were hand-carried from the factory at that moment, all while I stood in the middle of a space where other customers were trying to browse products.

When I brought my Thunderbolt Display in a few years ago to have the Y-cable swapped after it started getting flaky, I was told that I was better off taking it to a third-party repair facility as it would take far longer for Apple to get the part in stock. In related news, please buy my Thunderbolt Display.

I’m sure the competition is worse; but, that’s not saying much. Apple shouldn’t shoot for good enough. They certainly don’t in architectural terms.