Month: November 2017

Christina Bonnington, writing for Slate:

Face ID is one of the hallmark features of the iPhone X. Using facial recognition, you can unlock your phone almost as quickly as if you had no device security enabled at all—all you have to do is stare at it. It’s convenient, and potentially more secure than a four- or six-digit passcode. And because your data is stored in the phone’s so-called secure enclave and not in the cloud (as Apple did with Touch ID’s fingerprint data), the impressively detailed digital map Apple makes of your face, and the more than 50 facial expressions it can recognize, are kept safe. For the most part.

“For the most part”? Oh, please, tell me something that I’ll be shocked by after reading the title of this page in my web browser: “Apple plans to share some iPhone X Face ID data. Uh oh.” What could possibly be next?

At launch, facial recognition data from Face ID will only be used by Apple to unlock your phone—and animate a handful of goofy emoji characters called Animoji. However, Apple plans to allow third-party app developers access to some of the biometric data Face ID collects. And this has some privacy experts concerned, as Reuters reports.

A stunning twist.

Fun fact: that Animoji link goes to another Slate article with the title “Three reasons why Apple’s iPhone X animojis are worrisome.” Those three reasons are: they are so good that users will be encouraged to use them! in public! with audio! and that can be annoying; that they are so good that they will become a selling tool for the iPhone X; and that the author gets confused about the difference between the Face ID feature and iOS’ ARKit APIs. A distinction which, as it turns out, Bonnington buries in her ostensibly panic-inducing article:

Facial recognition is everywhere these days. It’s how Facebook suggests friends you should tag in photos, how Snapchat’s lenses so masterfully morph onto your face, and how Google Photos can so intelligently collect and organize photos of people you photograph often. Apple already uses facial recognition in its Photos app on iOS, too. But until now, these companies have kept their facial recognition data private. Allowing developers to access some of that data — even if it’s only a rough map of your face and facial expressions, not the full dataset it uses for biometric identification — is new, potentially scary territory.

This is a completely confused paragraph. There is a difference between facial feature identification — the kind Snapchat uses for lenses and Facebook uses to suggest faces to tag in photos, variations of which are available in a bunch of GitHub repos — and recognition of specific faces, which Google and Apple use to annotate specific people in photo libraries.

Apple uses a very sophisticated version of the latter to make Face ID work, which they’ve detailed in a security white paper. But the face tracking that’s available to developers is not to be confused with Face ID; it is more like an enhanced version of facial feature identification. Even that has Bonnington worried:

To use your facial data, developers must first ask your permission in their apps, and must not sell that information to other parties. Still, while it’s forbidden under Apple developer guidelines, privacy experts worry that developers might sell this data or use it for marketing or advertising purposes. (Imagine, if you will, an ad-supported gaming app that uses your current facial expression on your avatar. How valuable would it be for an advertiser to monitor what facial expressions you make as you watch their commercial in between rounds of gameplay?)

That would, indeed, be pretty valuable and deeply creepy. Privacy experts are right to worry about the possibility of a company using any kind of facial identification data for marketing purposes, and that’s why Apple has prohibited it. And, yeah, they’re going to have to be pretty vigilant about that.
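It’s worth being concrete about what a third-party app actually receives, though. Here’s a minimal sketch using ARKit’s face tracking API (the view controller and property names are mine, invented for illustration) showing the kind of data developers get: an approximate face mesh and named expression coefficients, not the enrollment data Face ID keeps in the secure enclave.

```swift
import ARKit
import SceneKit
import UIKit

// A rough sketch of the developer-facing face tracking API. Third-party apps
// get an approximate face mesh and named expression coefficients through
// ARFaceAnchor, not the biometric enrollment data Face ID stores in the
// secure enclave.
class FaceTrackingViewController: UIViewController, ARSCNViewDelegate {
    let sceneView = ARSCNView(frame: .zero)

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        view.addSubview(sceneView)
        sceneView.delegate = self
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // Face tracking requires the TrueDepth camera, i.e. an iPhone X.
        guard ARFaceTrackingConfiguration.isSupported else { return }
        sceneView.session.run(ARFaceTrackingConfiguration())
    }

    // Called as the tracked face updates.
    func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
        guard let faceAnchor = anchor as? ARFaceAnchor else { return }
        // blendShapes maps named expressions to coefficients between 0 and 1:
        // the "rough map of your face and facial expressions" in question.
        let jawOpen = faceAnchor.blendShapes[.jawOpen]?.floatValue ?? 0
        let browInnerUp = faceAnchor.blendShapes[.browInnerUp]?.floatValue ?? 0
        print("jawOpen: \(jawOpen), browInnerUp: \(browInnerUp)")
    }
}
```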

But let’s not pretend that this is a brand new hypothetical concern that’s exclusive to the iPhone X. Theoretically, any app the user has granted camera access could also target ads using one of those open source facial identification libraries I mentioned earlier — something which is, of course, also prohibited by Apple.
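As an illustration (this sketch uses Apple’s own Vision framework rather than one of those GitHub libraries, and the function name is mine), basic face and landmark detection has been available to any iOS 11 app with an image to analyze, no TrueDepth camera or Face ID required:

```swift
import CoreGraphics
import Vision

// A rough sketch: any app that can obtain an image, say from the camera it
// has been granted access to, can run face landmark detection with Vision.
// This finds faces and approximate features; it does not identify whose face
// it is, and it has nothing to do with Face ID.
func detectFaceLandmarks(in image: CGImage) {
    let request = VNDetectFaceLandmarksRequest { request, _ in
        guard let faces = request.results as? [VNFaceObservation] else { return }
        for face in faces {
            // Each observation carries a bounding box and, if detected,
            // landmark regions such as eyes, nose, and mouth.
            print("Face at \(face.boundingBox); landmarks found: \(face.landmarks != nil)")
        }
    }
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try? handler.perform([request])
}
```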

The thing that confuses me most about this piece is that Bonnington is a damn good writer. On the same day that this poorly-researched article was published, she also wrote a fantastic take on those YouTube hands-on videos of the iPhone X that were published on Monday of last week. Can’t win ’em all, I guess.

From Apple’s knowledge base:

If you updated your iPhone, iPad, or iPod touch to iOS 11.1 and find that when you type the letter “i” it autocorrects to the letter “A” with a symbol, learn what to do.

Apple suggests creating a text replacement shortcut that substitutes an uppercase “I” when you type a lowercase “i”. Yeah, really. They also say that they’re going to fix this in an update soon.

This is an utterly ridiculous bug to have escaped Apple’s QA checks and beta testing among both developers and the public beta pool. I understand that this seems like an overreaction to a relatively minor bug, but I wasn’t kidding when I wrote last month that input devices should always work. That goes for virtual input devices, too.

With the release of High Sierra and iOS 11 in September, Apple introduced Intelligent Tracking Prevention, a machine-learning-based method for restricting the ability of retargeting scripts to track users across the web. Previously, Safari’s default setting tried to prevent this by only allowing cookies from websites the user had explicitly visited. Unfortunately, mischievous providers of ad retargeting, like Criteo, figured out a workaround:

Here’s what happens: when visiting a site that includes Criteo’s scripts, a bit of browser sniffing happens. If it’s a Safari variant — and only Safari — Criteo rewrites the internal links on the page to redirect through their domain, […]

The user is then sent to their intended destination page, and Criteo’s cookies are allowed to be set. All that’s needed is that split-second redirect for the first link clicked on the site.

Safari’s new tracking prevention mechanism is supposed to prevent this sort of creepy — and, arguably, unethical — behaviour. So, has it worked? Well, here’s what Criteo said in their most recent earnings report:

We believe our solution for Safari users currently allows us to mitigate about half of the potential impact from ITP. In the third quarter, ITP had a minimal net negative impact on our Revenue ex-TAC of less than $1 million. Given our expectations of the roll out of Apple’s iOS11 and our coverage of Safari users, we expect ITP to have a net negative impact on our Revenue ex-TAC in the fourth quarter of between 8% and 10% relative to our base case projections for the quarter. We will continue to improve and deploy our solution for Safari users over the coming quarters.

ITP clearly has some effect on Criteo’s shitty script, but they estimate that their workaround is still about 50% effective. Perhaps this is just petty of me, but I wish ITP reduced Criteo’s script to 0% efficacy. The lengths to which Criteo has gone — and will go, according to the last sentence of that quote — to keep tracking users are an indication that they aren’t following the spirit of users’ wishes.

I’m using Criteo as an example here, but AdRoll employs a similar technique. I think that both of these companies behave disreputably, and I hope Intelligent Tracking Prevention continues to improve so it can better protect Safari users.

Maya Salam, New York Times:

Google Docs threw some users for a loop on Tuesday when the service suddenly locked them out of their documents for violating Google’s terms of service. The weird part? The documents were innocuous. The alerts were caused by a glitch, but they served as a stark reminder that not much is truly private in the cloud.

[…]

“Obviously this is raising questions in a lot of people’s minds about the level of surveillance in internet tools, like cloud-based tools,” Rachael Bale, whose tweets gained traction, said on Tuesday.

Ms. Bale, a reporter for National Geographic’s Wildlife Watch, said that while what happened was “problematic,” she was not too taken aback. “We know Google has access to all kinds of information about us,” she said, adding that professionally, she avoids using Google Docs for “anything sensitive.”

Bale is like so many others; she seems totally okay with the idea that Google knows a lot about us. Here’s the thing: we’re not very good at judging what information is sensitive and should be withheld. Because so much of the web is either part of Google, funded by Google, or at least tracked by Google, the amount of data they collect on each of us is unfathomably great. Taken together, that data is likely far more dangerous than any single piece of “sensitive” information they might possess.