This is a long profile of Mark Zuckerberg by Evan Osnos in the New Yorker and, while it paints a well-researched portrait, it mostly confirms what you probably already knew or suspected. For example, it catalogues Facebook’s internal belief that users will eventually come around to any new feature that launches to a negative reaction, even on issues of privacy; the withdrawal of Beacon is one notable exception where user feedback was actually heeded. And on the Alex Jones debacle:
Facebook relented, somewhat. On July 27th, it took down four of Jones’s videos and suspended him for a month. But public pressure did not let up. On August 5th, the dam broke after Apple, saying that the company “does not tolerate hate speech,” stopped distributing five podcasts associated with Jones. Facebook shut down four of Jones’s pages for “repeatedly” violating rules against hate speech and bullying. I asked Zuckerberg why Facebook had wavered in its handling of the situation. He was prickly about the suggestion: “I don’t believe that it is the right thing to ban a person for saying something that is factually incorrect.”
Jones seemed a lot more than factually incorrect, I said.
“O.K., but I think the facts here are pretty clear,” he said, homing in. “The initial questions were around misinformation.” He added, “We don’t take it down and ban people unless it’s directly inciting violence.” He told me that, after Jones was suspended, more complaints about him flooded in, alerting Facebook to older posts, and that the company was debating what to do when Apple announced its ban. Zuckerberg said, “When they moved, it was, like, O.K., we shouldn’t just be sitting on this content and these enforcement decisions. We should move on what we know violates the policy. We need to make a decision now.”
This confirms reporting by Charlie Warzel and Dylan Byers that Apple’s decision was the impetus for Facebook, among other companies, to make a move. Last week, Apple also banned Jones’ company from the App Store. “De-platforming” — as it is known — works, and it’s a decision that Apple, Facebook, and other companies should have made a long time ago.
This irks me:
For many years, Zuckerberg ended Facebook meetings with the half-joking exhortation “Domination!” Although he eventually stopped doing this (in European legal systems, “dominance” refers to corporate monopoly), his discomfort with losing is undimmed. A few years ago, he played Scrabble on a corporate jet with a friend’s daughter, who was in high school at the time. She won. Before they played a second game, he wrote a simple computer program that would look up his letters in the dictionary so that he could choose from all possible words. Zuckerberg’s program had a narrow lead when the flight landed. The girl told me, “During the game in which I was playing the program, everyone around us was taking sides: Team Human and Team Machine.”
I’m a hundred percent sure this was done in good fun. Nevertheless, it reminds me of something that has been rattling around in my head for a while. I’m a competitive person and I want to win at board games; but, I also want to have fun. I like playing with people who also make an effort to win, because it challenges me. Even when I know I’m going to lose, I still have a great time. But I dislike playing with people who need to win. They’re the kind of people who deliberately block all your routes in Ticket to Ride, or buy up one of every property colour in Monopoly. It’s not wrong to do those things, but it doesn’t actually make the game any good. People who have a problem with losing or being wrong sometimes are, generally speaking, destructive assholes.
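As an aside: the program Osnos describes is trivially easy to write, which rather undercuts the drama of Team Machine. I obviously haven’t seen Zuckerberg’s code, but a helper of the kind described — look up your rack in the dictionary, list every playable word — can be sketched in a few lines. The word list, scoring table, and function name below are mine, purely for illustration:

```python
from collections import Counter

# Standard English Scrabble tile values.
TILE_SCORES = {
    **dict.fromkeys("aeilnorstu", 1), **dict.fromkeys("dg", 2),
    **dict.fromkeys("bcmp", 3), **dict.fromkeys("fhvwy", 4),
    "k": 5, **dict.fromkeys("jx", 8), **dict.fromkeys("qz", 10),
}

def playable_words(rack, dictionary):
    """Return every dictionary word formable from the rack, best-scoring first."""
    tiles = Counter(rack.lower())
    # A word is playable when subtracting the rack's tiles leaves nothing over.
    candidates = [w for w in dictionary if not Counter(w) - tiles]
    return sorted(candidates,
                  key=lambda w: sum(TILE_SCORES[c] for c in w),
                  reverse=True)

words = ["rate", "tear", "ear", "jar", "zebra"]
print(playable_words("terax", words))  # → ['rate', 'tear', 'ear']
```

A real Scrabble assistant would also need to respect the board — anchor squares, cross-words, premium tiles — but for picking the best word from a rack, this is the whole trick.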
The New Yorker can spill thousands of words probing Zuckerberg’s psyche and speaking to colleagues about how he is growing into his unprecedented role as social media Pope to 2.2 billion users, but it’s still the same Zuckerberg who would apparently rather think about scaling and “community” than the real-world consequences his company may be implicated in.
Facebook has been aware of its role in violence and ethnic cleansing in Myanmar since at least 2014. It entered a market that it knew little about, where traditional media for informing the public were extremely limited, and found that it had built the perfect weapon for organizing mob violence and propaganda. We’ve seen similar situations in Sri Lanka, Libya, the Philippines, and India. One Sri Lankan official characterized the situation to the New York Times: “The germs are ours, but Facebook is the wind.”
But Zuckerberg keeps repeating the same talking points about being “slow” to recognize the problem and how it’s going to take time to fix it. He told the New Yorker that he plans to have 100 people working on translation and moderation in Myanmar by the end of the year. That a company can connect 2 billion people in a little over a decade but can’t hire 100 people over the course of a few years is telling. The deeper issue, though, is scale itself, and the inability of current moderation technology to keep up with it.
Facebook can’t play dumb here. According to Osnos’ profile, the “growth” team was the most celebrated and admired inside the company, and its goals were the company’s goals. If Facebook wanted to “dominate,” as Zuckerberg half-jokingly put it at the close of every meeting, it has no excuse for handling that dominance so badly once it was achieved, nor for continuing to handle it badly years later.
Thomas Reed of Malwarebytes, with a small collection of apps available on the Mac App Store that exfiltrate user data:
It’s blindingly obvious at this point that the Mac App Store is not the safe haven of reputable software that Apple wants it to be. I’ve been saying this for several years now, as we’ve been detecting junk software in the App Store for almost as long as I’ve been at Malwarebytes. This is not new information, but these issues reveal a depth to the problem that most people are unaware of.
We’ve reported software like this to Apple for years, via a variety of channels, and there is rarely any immediate effect. In some cases, we’ve seen offending apps removed quickly, although sometimes those same apps have come back quickly (as was the case with Adware Doctor). In other cases, it has taken as long as six months for a reported app to be removed.
In many cases, apps that we have reported are still in the store.
These are exactly the kinds of things I expect the app review process to catch before apps like these, and the aforementioned Adware Doctor, make it into the store. The Mac App Store should, if nothing else, be a place for any user to find safe software. Ideally, it’s also one with high-quality, useful, top-tier apps, but security and privacy ought to be the baseline.
There’s an argument to be made about social media as a force for political mobilization — or, say, making friends, whom I may speak to multiple times a week but see only two or three times a year, if ever; research shows shared hatreds are more binding than shared interests — but first I’d like to talk a little bit more about myself. When I wake up every morning I look at my phone to see what has transpired in the night, the final waking moment of which is usually the last time I looked at my phone. This is bad for my sleep cycle, I know, and for the nerves in my hands — I refuse to get one of those knobs you can put on the back of your phone to make it easier to hold, which I see as not just admitting I have a problem but resigning myself to it, as well as broadcasting to strangers who see me using my phone in public that I am a Phone Person (worse: a Phone Woman) — but more important, it is just bad. What I dislike about my life are not the facts of it but its texture, the false tension and paranoia and twitchiness. I exist in a state of “might always be checking something,” and along with being unpleasant, it’s embarrassing.
The sentence I quoted for this link’s title comes in the last paragraph of this essay, but not exactly in the context you might expect from an essay questioning the substantive value of constant connection. It’s very good.
[Security researcher Patrick Wardle], who shared his findings with TechCrunch, found that Adware Doctor requested access to users’ home directory and files — not unusual for an anti-malware or adware app that scans computers for malicious code — and used that access to collect Chrome, Safari, and Firefox browsing history, and recent App Store searches. The data is then zipped in a file called “history.zip” and sent to a server based in China via “adscan.yelabapp.com.” Two independent security researchers confirmed to Motherboard that Wardle’s report was accurate.
In his blog post, Wardle noted, “The fact that [this] application has been surreptitiously exfiltrating users’ browsing history, possibly for years, is, to put it mildly, rather f#@&’d up!”
Security researcher Privacy 1st tweeted that they initially contacted Apple about the Adware Doctor issue on Aug. 12.
One of the theoretical advantages of the Mac App Store (or any app marketplace with a review process) is that spyware like this can be caught before it is published. Yet Adware Doctor was in the Mac App Store for years, and it could have been pilfering user data for any portion of that time. Apple was even notified about it last month, but the app was not removed until today. Either Apple dropped the ball hard here, or there is some missing context that explains why this was apparently not a high-priority investigation.
mSpy, the makers of a software-as-a-service product that claims to help more than a million paying customers spy on the mobile devices of their kids and partners, has leaked millions of sensitive records online, including passwords, call logs, text messages, contacts, notes and location data secretly collected from phones running the stealthy spyware.
Less than a week ago, security researcher Nitish Shah directed KrebsOnSecurity to an open database on the Web that allowed anyone to query up-to-the-minute mSpy records for both customer transactions at mSpy’s site and for mobile phone data collected by mSpy’s software. The database required no authentication.
This kind of software is pretty gross to begin with. I’m not a parent, so I might be completely off-base here, but it seems to me that there’s an extraordinary amount of risk that is assumed in collecting everything your kid does relative to the actual benefits you might get out of doing so. Spying on your partner — or, potentially, employees — seems completely unethical.
Shah said when he tried to alert mSpy of his findings, the company’s support personnel ignored him.
“I was chatting with their live support, until they blocked me when I asked them to get me in contact with their CTO or head of security,” Shah said.
KrebsOnSecurity alerted mSpy about the exposed database on Aug. 30. This morning I received an email from mSpy’s chief security officer, who gave only his first name, “Andrew.”
This is a chickenshit response. Regardless of the ethical implications of mSpy’s spyware, a report of a security breach should be treated with more gravity than this. Why wouldn’t they prioritize this? Are they so afraid of making mistakes that they evade acknowledging, fixing, or apologizing for them?
In general, it is appalling to me the lengths that individuals and organizations alike will go to in order to cover up or hide from a mistake or a controversy. If you have any integrity whatsoever, you own your values and your actions. If they are seen as problematic, you try to understand why. If you want to stand by those actions, you should be able to produce evidence for your defence. But change can also be cathartic for everyone involved. There is no honour or benefit in trying to hide from actions that are being questioned.
The editors over at the Sweet Setup asked me to write a short piece on taking pictures with Halide and editing them in Darkroom. It’s the first thing I’ve written in which I specifically recommend not trespassing, so I think it’s worth reading for those curious about jumping beyond the built-in Camera and Photos apps for shooting and editing.
Alphabet Inc.’s Google and Mastercard Inc. brokered a business partnership during about four years of negotiations, according to four people with knowledge of the deal, three of whom worked on it directly. The alliance gave Google an unprecedented asset for measuring retail spending, part of the search giant’s strategy to fortify its primary business against onslaughts from Amazon.com Inc. and others.
Through this test program, Google can anonymously match these existing user profiles to purchases made in physical stores. The result is powerful: Google knows that people clicked on ads and can now tell advertisers that this activity led to actual store sales.
Google is testing the data service with a “small group” of advertisers in the U.S., according to a spokeswoman. With it, marketers see aggregate sales figures and estimates of how many they can attribute to Google ads — but they don’t see a shopper’s personal information, how much they spend or what exactly they buy. The tests are only available for retailers, not the companies that make the items sold inside stores, the spokeswoman said. The service only applies to its search and shopping ads, she said.
This appears to be part of the data set that the Washington Post previously reported was being used to attribute purchases to ads.
Initially, Google devised its own solution, a mobile payments service first called Google Wallet. Part of the original goal was to tie clicks on ads to purchases in physical stores, according to someone who worked on the product. But adoption never took off, so Google began looking for allies. A spokeswoman said its payments service was never used for ads measurement.
Since 2014, Google has flagged for advertisers when someone who clicked an ad visits a physical store, using the Location History feature in Google Maps. Still, the advertiser didn’t know if the shopper made a purchase. So Google added more. A tool, introduced the following year, let advertisers upload email addresses of customers they’ve collected into Google’s ad-buying system, which then encrypted them. Additionally, Google layered on inputs from third-party data brokers, such as Experian Plc and Acxiom Corp., which draw in demographic and financial information for marketers.
This entire program, and these two paragraphs in particular, indicates so much about how all of these companies view the consumer landscape. The solution to not-quite-precise-enough numbers has been to collect more data, and the response to privacy concerns has been to fuzz that data a little when it is shared between companies. Judging by its actions, the surveillance capitalism industry has not chosen the correct response: collecting less data.
It is worth noting that privacy was one of Apple’s goals in designing Apple Pay; according to this Bloomberg report, the complete opposite was true of Google Wallet. As much as we may view any company’s decisions as financially motivated, we should also remember to think of Google’s moves, and those of credit card companies and data brokers, as inherently creepy, invasive, and likely not in the best interests of consumers.
The Outline, the Joshua Topolsky-founded culture website, laid off the last of its two remaining staff writers today. On Twitter, one staff writer, Paris Martineau, announced the shakeup. I’ve confirmed that the other full-time staff member, Ann-Derrick Gaillot, has also been let go. And other non-editorial employees seem to be impacted too. Editors appear to be the only full-time editorial staff the site has left.
The source also noted that The Outline plans to slash its freelance budget despite the dearth of staff writers. The site will likely move from its current Lower East Side office to an undisclosed WeWork location.
These are worrying signs; an online magazine without any staff writers is hardly encouraging. I hope the Outline can recover: it is a particularly interesting publication, and Martineau was one of my favourite writers there.
Anyone who isn’t an expert on the internet would be hard-pressed to explain how tracking on the internet actually works. Some of the negative effects of unchecked tracking are easy to notice, namely eerily-specific targeted advertising and a loss of performance on the web. However, many of the harms of unchecked data collection are completely opaque to users and experts alike, only to be revealed piecemeal by major data breaches. In the near future, Firefox will — by default — protect users by blocking tracking while also offering a clear set of controls to give our users more choice over what information they share with sites.
This will be rolled out in two stages: Firefox 63 — two major releases away from the current build — will start blocking slow-loading trackers, while Firefox 65 will block cross-site tracking. The latter sounds a little bit like Safari’s Intelligent Tracking Prevention feature. However, instead of blocking scripts based on their behaviour, Firefox will rely upon a list of trackers maintained by Disconnect.
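List-based blocking is conceptually simple: check each third-party request’s host, and its parent domains, against a known set of tracker domains. The sketch below is mine, not Mozilla’s implementation; the `BLOCKLIST` contents and the `is_blocked` helper are stand-ins for a Disconnect-style list and Firefox’s actual matching logic:

```python
from urllib.parse import urlparse

# Stand-in for a Disconnect-style blocklist of known tracker domains.
BLOCKLIST = {"tracker.example", "ads.example"}

def is_blocked(request_url, page_url):
    """Block a request when it is third-party and its host, or any
    parent domain, appears on the blocklist."""
    req_host = urlparse(request_url).hostname or ""
    page_host = urlparse(page_url).hostname or ""
    # First-party requests are never blocked.
    if req_host == page_host:
        return False
    # Match "sub.tracker.example", then "tracker.example", but not the bare TLD.
    parts = req_host.split(".")
    return any(".".join(parts[i:]) in BLOCKLIST for i in range(len(parts) - 1))

print(is_blocked("https://ads.example/pixel.gif", "https://news.site/"))  # True
print(is_blocked("https://news.site/app.js", "https://news.site/"))       # False
```

The behaviour-based approach Safari takes instead infers which domains are trackers from observed cross-site activity, which is why the two features can block different things despite a similar goal.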
When pop-ups got out of control in the early ’00s Firefox took a stand and killed them all dead. Now Firefox is taking a stand against tracking on the web because it too has gotten out of control.
Firefox also spearheaded the renaissance of web standards over the past fifteen years or so, but I’m not sure whether it has the kind of sway it once did. Even so, the combination of Apple’s and Mozilla’s prioritization of user privacy is a formidable one.
Of course, Google still makes the world’s most popular browser. There’s simply no way they can join the club of companies that actually care about user privacy with their current business model.