On Thursday, Apple fired pre-emptive warnings to at least two Chinese apps, telling them to cease and desist after naming a dozen parameters such as “setDeviceName” that could be used “to create a unique identifier for the user’s device”.
“We found that your app collects user and device information to create a unique identifier for the user’s device,” reads a screenshot of a warning to one developer who was using a new way of identifying users called CAID, which was developed by the state-backed China Advertising Association.
This is promising news; it appears that Apple will be taking seriously any attempt at tracking users without their permission, something which was unfortunately unclear in McGee and Yang’s earlier report.
There are two reasons this is noteworthy. The first is that this tracking ID and these developers are connected to the government of China, a country whose recent human rights record is sharply at odds with Apple’s professed values. Apple has been mostly compliant with escalating demands, presumably because of its manufacturing dependence. So, the thinking goes, would Apple risk challenging apps from politically connected companies?
The second reason is that Apple has also shown deference to rule-breaking by big-name developers. Uber, for example, was granted an in-person meeting after it was found to be tracking device serial numbers in a manner disguised from App Review by geofencing, and was not punished for this insidious privacy violation. As Michael Tsai observed:
That said, it’s got to be a tough situation for Apple to be in. They’re trying to protect their customers, but denying them access to an important transportation service would harm them far more than what Uber did. And what if this were an app that provided an essential medical function? The store is full of apps that flout the rules, but I don’t think Apple could ignore the geofencing. It looks like it tried to thread the needle by getting Uber to comply with the rules but then being lenient.
What if high-profile developers just stop playing by the App Store rules? If ByteDance implements the CAID tracking mechanism anyway, would Apple pull TikTok from the store, particularly as there is that ongoing Epic Games lawsuit? I recognize that Apple has nothing that competes with TikTok and, so, this is not a comparable case. Still, that would surely look like a risky move to pull with lawmakers watching.
But if Apple is deferential, that looks like it is permitting different rules for some developers: perhaps because they are from China, perhaps because they are well-known, or perhaps because of antitrust litigation. None of those are acceptable options.
The only choice is for Apple to permit no leeway for any developer, big or small, if they break its rules. Apple has long promised that this is the case anyhow, but it has granted plenty of exemptions. If it only wants to allow native iOS apps to be installed from its own moderated store, it must be especially careful in enforcing these privacy rules evenly.
It seemed entirely possible that Clearview AI would be sued, legislated or shamed out of existence. But that didn’t happen. With no federal law prohibiting or even regulating the use of facial recognition, Clearview did not, for the most part, change its practices. Nor did it implode. While it shut down private companies’ accounts, it continued to acquire government customers. Clearview’s most effective sales tool, at first, was a free trial it offered to anyone with a law-enforcement-affiliated email address, along with a low, low price: You could access Clearview AI for as little as $2,000 per year. Most comparable vendors — whose products are not even as extensive — charged six figures. The company later hired a seasoned sales director who raised the price. “Our growth rate is crazy,” Hoan Ton-That, Clearview’s chief executive, said.
Clearview has now raised $17 million and, according to PitchBook, is valued at nearly $109 million. As of January 2020, it had been used by at least 600 law-enforcement agencies; the company says it is now up to 3,100. […]
Any way you cut it, this is disturbing. The public’s reaction to news of Clearview’s existence was overwhelmingly negative, but police saw that article as an advertisement.
Shameless companies will not change from public pressure.
Clearview is now fighting 11 lawsuits in the state [Illinois], including the one filed by the A.C.L.U. in state court. In response to the challenges, Clearview quickly removed any photos it determined came from Illinois, based on geographical information embedded in the files it scraped — but if that seemed on the surface like a capitulation, it wasn’t.
Clearview assumes that it can scrape, store, and transform anything in the public realm unless it is certain it would be prohibited from doing so. Data is inherently valuable to the company, so it is incentivized to capture as much as possible.
But that means there is likely a whole bunch of stuff in its systems that it cannot legally use but has no way of knowing that. For example, there are surely plenty of photos taken in Illinois that do not have GPS coordinates in their metadata. Why would any of those be cleared from Clearview’s inventory? Clearview also allows people to request removal from its systems, but there are surely photographs from those people that are not positively matched, so the company has no way of identifying them as part of a removal request.
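To make the gap concrete, here is a toy sketch of geodata-based removal. All of the data and the bounding-box check are hypothetical illustrations, not Clearview’s actual pipeline — the point is only that a photo without embedded coordinates can never be flagged, no matter where it was taken:

```python
# Toy illustration: purging photos "from Illinois" using only embedded GPS
# metadata. Names, data, and the bounding box are hypothetical.

# Rough bounding box for Illinois (latitude, longitude)
IL_LAT = (36.97, 42.51)
IL_LON = (-91.51, -87.49)

photos = [
    {"id": 1, "gps": (41.88, -87.63)},  # Chicago; has coordinates, flagged
    {"id": 2, "gps": None},             # taken in Illinois, but no GPS data
    {"id": 3, "gps": (40.71, -74.01)},  # New York; has coordinates, kept
]

def looks_like_illinois(photo):
    gps = photo["gps"]
    if gps is None:
        return False  # no metadata, so the filter cannot flag it
    lat, lon = gps
    return IL_LAT[0] <= lat <= IL_LAT[1] and IL_LON[0] <= lon <= IL_LON[1]

kept = [p["id"] for p in photos if not looks_like_illinois(p)]
print(kept)  # photo 2 survives the purge despite being from Illinois
```

Only the photo with Chicago coordinates is removed; the Illinois photo with stripped metadata sails through, exactly the under-removal described above.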
This is an aside, but that raises an interesting question: if images scraped without legal consent were used to train Clearview’s machine learning models, is it truly possible to remove those illegal images?
If Clearview were even slightly more ethical, it would only scrape the images it has explicit permission to access. I would still disagree with that on its face, but at least it would be done with permission. But this is the perhaps inevitable consequence of the Uber-like fuck your rules philosophy — as Hill writes, it is a “gamble that the rules would successfully be bent in their favor”.
Sadly, that Silicon Valley indifference to legality and ethics will not remain localized. There is no way to know for certain that Clearview has complied with the Privacy Commissioner’s recommendation that the company must delete all collected data on Canadians.
Hill digs into Clearview’s origin story, too, which of course involves Peter Thiel and someone who is even more detestable:
After I broke the news about Clearview AI, BuzzFeed and The Huffington Post reported that Ton-That and his company had ties to the far right and to a notorious conservative provocateur named Charles Johnson. I heard the same about Johnson from multiple sources. So I emailed him. At first, he was hesitant to talk to me, insisting he would do so only off the record, because he was still frustrated about the last time he talked to a New York Times journalist, when the media columnist David Carr profiled him in 2014.
“Provocateur” is an awfully kind description of Johnson, though Hill expands further in the successive paragraphs. Just so we’re clear here, Johnson is a hateful subreddit in human form; a moron attached to a megaphone. Johnson has a lengthy rap sheet of crimes against intelligence, decency, facts, and ethics. He has denied the Holocaust, and did Jacob Wohl’s dumb bit before Wohl was old enough to vote.
Johnson is, apparently, a sort of unofficial cofounder of Clearview, and he agreed to talk with Hill because he thought it would rehabilitate his image. Reading between the lines, as of earlier this month he still held shares in a company that seeks to eradicate privacy on a global scale, so I am not sure how that is supposed to make me think more highly of him.
I thought this was amusing:
Johnson believes that giving this superpower only to the police is frightening — that it should be offered to anyone who would use it for good. In his mind, a world without strangers would be a friendlier, nicer world, because all people would be accountable for their actions.
I thought “cancel culture” was a scourge; I guess some fairly terrible people want to automate it.
Hermès’ approach to watch typography is unusually poetic. In reality, only a small and decreasing number of watchmakers go to the trouble of creating custom lettering for their dials. More often, watch brands use off-the-rack fonts that are squished and squeezed onto the dial’s limited real estate. Patek Philippe, for example, has used ITC American Typewriter and Arial on its high-end watches. French brand Bell & Ross deploys the playful 1980 typeface Isonorm for the numerals on many of its timepieces. Rolex uses a slightly modified version of Garamond for its logo. And Audemars Piguet has replaced the custom lettering on its watches with a stretched version of Times Roman.
Picture this: you sit yourself into the leather armchair that has sunk into the plush carpet on the jewellery store’s floor. You sip your complimentary sparkling water as a staff member passes you a soft-lined box, inside which lies your dream watch, a Patek Philippe 5207G Grand Complications. This is among the finest examples of watchmaking and you have convinced yourself that it is worth its seven-figure price tag. You lift it up to your eye and there you see it: Arial.
I’m not even joking — look at it. This watch has a tourbillon because of course it does, and the word Tourbillon is set in Arial of all typefaces. The calendar’s numbers also appear to be set in Arial and, to make matters worse, it appears to have been stretched vertically to fit, though it could be Arial Narrow. (Update: The numbers actually appear to be in a stretched Helvetica, so this watch has Arial and Helvetica on its face. Neat.) It is the same story across the Patek lineup, and it is a miserable detail in some truly fine work. Noted watch collector and typography enthusiast John Mayer would not be pleased.