Month: July 2021

Kashmir Hill, the New York Times:

Clearview AI is currently the target of multiple class-action lawsuits and a joint investigation by Britain and Australia. That hasn’t kept investors away.

The New York-based start-up, which scraped billions of photos from the public internet to build a facial-recognition tool used by law enforcement, closed a Series B round of $30 million this month.

The investors, though undeterred by the lawsuits, did not want to be identified. Hoan Ton-That, the company’s chief executive, said they “include institutional investors and private family offices.”

It makes sense that these investors would want their association with the company kept secret, since identifying them as supporters of a creepy facial recognition company is more embarrassing than their inability to understand irony. Still, it shows how the free market is betting that this company will grow and prosper despite its disregard for existing laws, proposed legislation, and a general sense of humanity or ethics.

Dismantle this company and legislate its industry out of existence. Expose the investors who are propping it up.

Mike Masnick, Techdirt:

I think it’s time that we bring back recognition of how innovation, and technology such as the open internet, can actually do tremendous good in the world. I’m not talking about a return to unfettered boosterism and unthinking cheerleading — but a new and better-informed understanding of how innovation can create important and useful outcomes. An understanding that recognizes and aims to minimize the potential downsides, taking the lessons of the techlash and looking for ways to create a better, more innovative world.

I appreciate this clear-eyed, optimistic approach, and am excited to see what Masnick has in store.

Omer Kabir and Hagar Ravet of Calcalist:

Perhaps due to the magnitude of the media interest in the investigation, NSO executives chose to break the secrecy that usually surrounds their company and answer questions directly. In an interview with Calcalist, NSO chief executive Shalev Hulio denied his software was being used for malicious activity. At the heart of his claims is the list of 50,000 phone numbers on which the investigation is based, and which it is claimed are potential NSO targets. The source of the list wasn’t revealed, and according to Hulio, it reached him a month prior to the publication of the investigation, and from a completely different source.

The publications behind the Pegasus Project assert that this list of phone numbers is, in the words of the Guardian, “an indication of intent”. This is clearly not a list of random phone numbers — several of the numbers on it are tied to phones with local evidence of Pegasus software, and many more of the numbers belong to high-profile targets. But, according to Hulio, it is impossible that this is entirely a list of targets:

According to Hulio, “the average for our clients is 100 targets a year. If you take NSO’s entire history, you won’t reach 50,000 Pegasus targets since the company was founded. Pegasus has 45 clients, with around 100 targets per client a year. In addition, this list includes countries that aren’t even our clients and NSO doesn’t even have any list that includes all Pegasus targets – simply because the company itself doesn’t know in real-time how its clients are using the system.”

Hulio says that NSO Group investigated these allegations by scanning the records of clients that agreed to an analysis, and could not find anything that matched the Pegasus Project’s list. But it is hard to believe he is being fully honest when he indulges in hubris like this:

“Out of 50,000 numbers they succeeded in verifying that 37 people were targets. Even if we go with that figure, which is severe in itself if it were true, we are saying that out of 50,000 numbers, which were examined by 80 journalists from 17 media organizations around the world, they found that 37 are truly Pegasus, so something is clearly wrong with this list. I’m willing to give you a random list of 50,000 numbers and it will probably also include Pegasus targets.”

If a list of just 50,000 random phone numbers — basically, everyone in a small town — contains Pegasus targets, Pegasus is entirely out of control. It is a catastrophic spyware emergency. Hulio was clearly being hyperbolic, but his bluster generated quite the response from Calcalist’s interviewer:

That isn’t accurate. Out of the 50,000 numbers they physically checked only 67 phones and in 37 of them, they found traces of Pegasus. It isn’t 37 out of 50,000. And there were 12 journalists among them. That is 12 too many.

NSO Group’s response, while impassioned, cannot be trusted. The company has not earned enough public goodwill for its CEO to use such colourful language. But the Pegasus Project’s publication partners also need to clarify what the list of phone numbers actually means, because something here is not adding up.

I used some of the Washington Post’s reporting on the Pegasus Project in my piece about its revelations and lessons, but I never really addressed the Post’s article. I hope you will read what I wrote, especially since this website was down for about five hours today around the time it started picking up traction. Someone kicked the plug out at my web host; what can I say?

Anyway, the Post’s story is also worth reading, despite its headline: “Despite the hype, iPhone security no match for NSO spyware”. iPhone security is not made of “hype” and marketing. On the contrary, the reason this malware is notable is because of its sophistication and capability in an operating system that, while imperfect, is far more secure than almost any consumer device before it, as the Post acknowledged just a few years ago when it claimed Apple was “protecting a terrorist’s iPhone”. According to the Post, the iPhone is simultaneously far too locked down for a consumer product and so insecure that all of its protections are mere hype.

Below the miserable headline and amid the typically cynical Reed Albergotti framing, there is a series of worthwhile interviews with current and former Apple employees claiming that the company’s security responses are too often driven by marketing concerns and the annual software release cycle. The Post:

Current and former Apple employees and people who work with the company say the product release schedule is harrowing, and, because there is little time to vet new products for security flaws, it leads to a proliferation of new bugs that offensive security researchers at companies like NSO Group can use to break into even the newest devices.

[…]

Apple also was a relative latecomer to “bug bounties,” where companies pay independent researchers for finding and disclosing software flaws that could be used by hackers in attacks.

Krstić, Apple’s top security official, pushed for a bug bounty program that was added in 2016, but some independent researchers say they have stopped submitting bugs through the program because Apple tends to pay small rewards and the process can take months or years.

Apple disputes the Post’s characterization of its security processes, the quality of its bug bounty program, the involvement of marketing in its responses, and its overall relationship with security researchers.

However, a suddenly very relevant post from Nicolas Brunner, writing last week, indicates that Apple’s bug bounty program is simply not good enough:

In my understanding, the idea behind the bounty program is that developers report bugs directly to Apple and remain silent about them until fixed in exchange for a security bounty pay. They also state very clearly, what issues do qualify for the bounty program payout on their homepage. Unfortunately, in my case, Apple never fulfilled their part of the deal (until now).

To be frank: Right now, I feel robbed. However I still hope, that the security bounty program turns out to be a win-win situation for both parties. In my current understanding however, I do not see any reason, why developers like myself should continue to contribute to it. In my case, Apple was very slow with responses (the entire process took 14 months), then turned me away without elaborating on the reasons and stopped answering e-mails.

A similarly frustrating experience with Apple’s security team was reported last month by Laxman Muthiyah:

The actual bounty mentioned for iCloud account takeover in Apple’s website is $100,000 USD. Extracting sensitive data from locked Apple device is $250,000 USD. My report covered both the scenarios (assuming the passcode endpoint was patched after my report). Even if they chose to award the maximum impact out of the two cases, it should still be $250,000 USD.

Selling these kind of vulnerabilities to government agencies or private bounty programs could have made a lot more money. But I chose the ethical way and I didn’t expect anything more than the outlined bounty amounts by Apple.

[…]

But $18,000 USD is not even close to the actual bounty. Lets say all my assumptions are wrong and Apple passcode verifying endpoint wasn’t vulnerable before my report. Even then the given bounty is not fair looking at the impact of the vulnerability as given below.

The one million dollars that Apple says it pays for a “zero-click remote chain with full kernel execution and persistence” — and the 50% premium on top of that for a zero-day in a beta version — pales compared to the two million dollars that Zerodium is paying for the same kind of exploit.

Steven Troughton-Smith, via Michael Tsai:

I’m not sure why one of the richest companies in the world feels like it needs to be so stingy with its bounty program; it feels far more like a way to keep security issues hidden & unfixed under NDA than a way to find & fix them. More micro-payouts would incentivize researchers.

Security researchers should not have to grovel to get paid for reporting a vulnerability, no matter how small it may seem. But why would anyone put themselves through this process when there are plenty of companies out there paying far more?

The good news is that Apple can get most of the way toward fixing this problem by throwing money at it. Apple has deep pockets; it can keep increasing payouts until the grey market cannot possibly compete. That may seem overly simplistic, but this is one security problem that truly is simple for Apple to solve.

This weekend’s first batch of stories from the “Pegasus Project” — a collaboration between seventeen different outlets invited by French investigative publication Forbidden Stories and Amnesty International — offers a rare glimpse into the infrastructure of modern espionage. This is a spaghetti junction of narratives: device security, privatized intelligence and spycraft, appropriate targeting, corporate responsibility, and assassination. It is as tantalizing a story as it is disturbing.

“Pegasus” is a mobile spyware toolkit created and distributed by NSO Group. Once successfully installed, it reportedly has root-level access and can, therefore, exfiltrate anything of intelligence interest: messages, locations, phone records, contacts, and photos are all obvious and confirmed categories. Pegasus can also create new things of intelligence value: it can capture pictures using any of the cameras and record audio using the microphone, all without the user’s knowledge. According to a 2012 Calcalist report, NSO Group is licensed by the Israeli Ministry of Defense to export its spyware to foreign governments, but not private companies or individuals.

There is little record of this software or capability on NSO Group’s website. Instead, the company says that its software helps “find and rescue kidnapped children” and “prevent terrorism”. It recently published a transparency report arguing that much of its software serves these more benign purposes. It acknowledged some abuse of Pegasus’ capabilities, but said such cases amount to a tiny number, and that the company does not sell to “55 countries […] for reasons such as human rights, corruption, and regulatory restrictions”. It does not say in this transparency report which countries’ governments it prohibits from using its intelligence-gathering products.

Much of this conflict is about the stories which NSO Group wants to tell compared to the stories it should be telling: how its software enables human rights abuses, spying on journalists, and expanding authoritarian power. In fact, that is an apt summary for much of the security reporting that comprises the Pegasus Project: the stories that we, the public, have, not the stories that we want to have.

One of the stories that we tell ourselves is that our devices are pretty secure, so long as we keep them up to date, and that we would probably notice an intrusion attempt. The reality, as verified by Citizen Lab at the University of Toronto, is that NSO Group is particularly good at developing spyware:

Citizen Lab independently documented NSO Pegasus spyware installed via successful zero-day zero-click iMessage compromises of an iPhone 12 Pro Max device running iOS 14.6, as well as zero-day zero-click iMessage attacks that successfully installed Pegasus on an iPhone SE2 device running iOS version 14.4, and a zero-click (non-zero-day) iMessage attack on an iPhone SE2 device running iOS 14.0.1. The mechanics of the zero-click exploit for iOS 14.x appear to be substantially different than the KISMET exploit for iOS 13.5.1 and iOS 13.7, suggesting that it is in fact a different zero-click iMessage exploit.

“Zero-day” indicates a vulnerability that has not already been reported to the vendor — in this case, Apple. “Zero-click” means exactly what it sounds like: this is an exploit delivered by iMessage that is executed without any user interaction, and it is wildly difficult to know if your device has been compromised. That is the bad news: the story we like to tell ourselves about mobile device security simply is not true.

But nor is it true that we are all similarly vulnerable to attacks like these, as Ivan Krstić, Apple’s Head of Security Engineering and Architecture, said in a statement to the Washington Post:

Apple unequivocally condemns cyberattacks against journalists, human rights activists, and others seeking to make the world a better place. […] Attacks like the ones described are highly sophisticated, cost millions of dollars to develop, often have a short shelf life, and are used to target specific individuals. […]

This situation is reminiscent of the 2019 zero-day attacks against iPhone-using Uyghurs, delivered through news websites popular with Uyghurs and presumably orchestrated by the Chinese government. Those vulnerabilities were quietly fixed at the beginning of that year, but their exploitation was not disclosed until Google’s Project Zero published a deep dive into their existence, at which point Apple issued a statement. I thought it was a poor set of excuses for a digital attack against an entire vulnerable population.

This time, it makes sense to focus on the highly-targeted nature of Pegasus attacks. The use of this spyware is not indiscriminate. But — with reportedly tens of thousands of attempted infections — it is being used in a more widespread way than I think many would assume. Like the exploits used on Uyghurs two years ago, it indicates that iPhone zero-click zero-days might not be the Pappy Van Winkle of the security world. Certainly, they are still rare, but it seems that there are some companies and nation-states that have stocked their pantries for a rainy day and might not be so shy about their use.

Still, nothing so far indicates that a typical person is in danger of falling victim to Pegasus, though the mere existence of zero-click full-device exploits is worrisome for every smartphone user. The Guardian reports that the victims of NSO Group’s customers are high-profile individuals: business executives, investigative journalists, world leaders, and close associates. That is not to minimize the effect of this spyware, but its reach is more deliberately limited. If anything, the focus of its deployment gives us mere mortals a glimpse of the unique security considerations faced by those at higher risk of targeted attack.

The thing is that many of those high-profile people use iPhones. The diplomats and friends of assassinated journalist Jamal Khashoggi profiled by the Washington Post all use iPhones. Many celebrities use iPhones, even when promoting Android devices. Jeff Bezos used an iPhone X.1 Many of the devices examined as part of the Pegasus Project are, indeed, iPhones, which has pushed the Washington Post team reporting on this investigation to conclude that this is largely an iPhone-specific problem:

Researchers have documented iPhone infections with Pegasus dozens of times in recent years, challenging Apple’s reputation for superior security when compared with its leading rivals, which run Android operating systems by Google.

The months-long investigation by The Post and its partners found more evidence to fuel that debate. Amnesty’s Security Lab examined 67 smartphones whose numbers were on the Forbidden Stories list and found forensic evidence of Pegasus infections or attempts at infections in 37. Of those, 34 were iPhones — 23 that showed signs of a successful Pegasus infection and 11 that showed signs of attempted infection.

If you read Amnesty’s full investigation into Pegasus — and I suggest you do as it is comprehensive — there is a different explanation for why the iPhone is overrepresented in its sample, and a clear warning against oversimplification:

Much of the targeting outlined in this report involves Pegasus attacks targeting iOS devices. It is important to note that this does not necessarily reflect the relative security of iOS devices compared to Android devices, or other operating systems and phone manufacturers.

In Amnesty International’s experience there are significantly more forensic traces accessible to investigators on Apple iOS devices than on stock Android devices, therefore our methodology is focused on the former. As a result, most recent cases of confirmed Pegasus infections have involved iPhones.

iOS clearly has many holes in its security infrastructure that need patching. Reporting from the Post suggests that the demands of launching a major new version of iOS every year — in addition to the four other operating systems Apple updates on an annual cycle — not only take a toll on the reliability of its software, but also mean some critical vulnerabilities take months to get patched. Apple is not alone in that regard, but it does raise questions about the security of the world’s information resting entirely in the hands of engineers at three companies on the American west coast. Is it a good thing that high-risk people only have a choice between iOS and Android? Does it make sense that many of the world’s biggest companies almost entirely run Windows? Is enough being done to counter the inherent risks of this three-way market?

The security story we have is one of great risk, with responsibility held by very few. There are layers of firewalls and scanners and obfuscation techniques and encryption and all of that — but a determined attacker knows there are limited variables. iOS is not especially weak, but it is exceptionally vertically-integrated. If the latest iPhone running the latest software updates is vulnerable, all iPhones probably are as well.

There are two more contrasting sets of stories I wish to touch on about the responsibility of NSO Group and companies like it in these attacks. First, NSO Group is careful to state that it is merely a vendor and, as such, “does not operate its technology, does not collect, nor possesses, nor has any access to any kind of data of its customers”. However, it is also adamant that its software had zero role in Khashoggi’s assassination. How is it possible to square that certainty with the company’s alleged lack of involvement in the affairs of customers it can neither confirm nor deny?

Second, I gave credit earlier this year to the notion that private marketplaces of security vulnerabilities might actually be beneficial — at least, compared to weakened encryption or some form of “back door”. NSO Group is the reverse side of that argument. The story I like to tell myself is that, given that there is an established market for zero-days, at least that means law enforcement can unlock encrypted smartphones without the need for a twenty-first-century Clipper Chip. But the story we have is that NSO Group develops espionage software over which, once sold, it has little control. The company’s spyware is now implicated in the targeting of tens of thousands of phones belonging to activists, human rights lawyers, journalists, businesspeople, demonstrators, investigators, world leaders, and friends and colleagues of all of the above. NSO Group is a private company that enables dictators and autocrats, and somehow gets to wash its hands of all responsibility.

The story it wants is of a high technology company saving children and fighting terrorists. The story it has is an abuse of power and a lack of accountability.


  1. You might remember that embarrassing texts and images were leaked from Jeff Bezos’ iPhone a couple of years ago that confirmed that he was cheating on his now ex-wife with his current partner Lauren Sanchez. Bezos got in front of the National Enquirer story with a heroic-seeming Medium post where he copped to the affair.

    In that post, he also insinuated that the Saudi royal family used NSO Group malware to breach his phone’s security and steal that incriminating evidence in retaliation for his ownership of the Washington Post and its coverage of the Saudi royalty’s role in Post contributor Jamal Khashoggi’s assassination. In addition, the Post had aggressively reported on the Enquirer’s catch-and-kill scheme to silence salacious stories.

    While that got huge amounts of coverage, a funny thing happened not too long after: the Wall Street Journal confirmed that the Enquirer did not get the texts and photos from some secret Saudi arrangement and, instead, simply paid Sanchez’ brother who had stolen them. A fuller story of this public relations score was reported earlier this year by Brad Stone in Bloomberg Businessweek. It seems that, contrary to contemporary reporting, there was little to substantiate rumours of a high-tech break-in by a foreign government.

    It is unclear whether Bezos was simply spinning a boring story in a politically-favourable way; a recent Mother Jones investigation found that Amazon’s public relations team is notorious among journalists for being hostile and telling outright lies. But if he was targeted by the Saudi Arabian royal family using NSO Group software, it is notable that Saudi Arabia is apparently not on the list of 55 countries that the company refuses to sell to on the basis of human rights abuses. ↥︎

Instagram head Adam Mosseri, a few weeks ago:

But today I actually want to talk a bit more about video. And I want to start by saying we’re no longer a photo-sharing app or a square photo-sharing app. The number one reason that people say that they use Instagram in research is to be entertained. So people are looking to us for that. […]

Leaning hard on video at the expense of everything else — now where have I heard that before?

Ben Thompson:

To this point I have framed Mosseri’s announced changes in the context of Instagram’s continual evolution as an app, from photo filters to network to video to algorithmic feed to Stories. All of those changes, though, were in the spirit of Systrom’s initial mission to capture and share moments. That is why perhaps the most momentous admission by Mosseri is that Instagram’s new mission is simply to be entertainment.

I have to wonder if it is in preparation for more than that, given this piece by Clive Thompson, writing for Medium’s the Debugger:

If you flew during the 90s and 00s, you probably remember SkyMall. It was a catalogue of completely loony products — often high-tech gadgets of dubious promise, such as “a vacuum cleaner to catch flies, an alien butler drink tray, a helmet that promises to regrow your hair using lasers.”

[…]

I can’t say precisely when my Instagram ads began to tip over into SkyMall territory. I’d been noticing the devolution for months, maybe years. But these days when I open up the app, every ad customized for me is some decidedly loopy gewgaw.

Maybe Instagram’s growth continues to be driven by the successful features it can lift directly from other photo- and video-based apps. But I wonder if this mix of ads for bizarre direct-to-consumer goods and the integrated e-commerce functionality are laying the foundation for a platform more like WeChat, Line, or Gojek. Perhaps Instagram does not expand into logistics operations, but why would it not push further into online payments, and buying and selling products? For many, shopping is entertainment. Why not facilitate that inside one of the world’s most popular mobile apps and take a cut of every purchase?

Even discarding my idle speculation, the name “Instagram” sure is beginning to feel outdated or, at least, disconnected.

Susan Heavey, Elizabeth Culliford, and Diane Bartz, Reuters:

Facebook is not doing enough to stop the spread of false claims about COVID-19 and vaccines, White House press secretary Jen Psaki said on Thursday, part of a new administration pushback on misinformation in the United States.

Facebook, which owns Instagram and WhatsApp, needs to work harder to remove inaccurate vaccine information from its platform, Psaki said.

From the White House transcript of that press briefing, in response to a reporter’s question about what actions the U.S. federal government is taking:

In terms of actions, Alex, that we have taken — or we’re working to take, I should say — from the federal government: We’ve increased disinformation research and tracking within the Surgeon General’s office. We’re flagging problematic posts for Facebook that spread disinformation. We’re working with doctors and medical professionals to connect — to connect medical experts with popular — with popular — who are popular with their audiences with — with accurate information and boost trusted content. So we’re helping get trusted content out there.

Psaki’s admission that the government is “flagging” posts with misinformation has caused quite the gnashing of teeth in pockets of the professional commentary circuit, with the Wall Street Journal’s editorial board calling it “censorship coordination”.

But, as Mike Masnick writes at Techdirt, that is not an accurate portrayal of what the Biden administration is doing:

It’s a simple fact: the US government should not be threatening or coercing private companies into taking down protected speech.

But, over the past few days there’s been an absolutely ridiculous shit storm falsely claiming that the White House is, in fact, doing this with Facebook, leading to a whole bunch of nonsense — mainly from the President’s critics. It began on Thursday, when White House press secretary Jen Psaki, in talking about vaccine disinfo, noted that the White House had flagged vaccine disinformation to Facebook. And… critics of the President completely lost their shit claiming that it was a “First Amendment violation” or that it somehow proved Donald Trump’s case against the social media companies.

It did none of those things.

I think Ken White’s messaging is better than the official White House version, but I do not think it would ameliorate the situation for those who believe the administration is colluding with Silicon Valley, or who are exploiting vaccine misinformation for their own gain.

Microsoft’s Claire Anderson, on the company’s Medium-based design blog (can Microsoft not host its own blog?):

As the world moves toward hybrid work scenarios that blend in-person with remote, expressive forms of digital communication are more important than ever. Over 1,800 emoji exist within Microsoft 365, and we’ve been working for the past year to dramatically refresh them by creating a system that is innately Fluent.

We opted for 3D designs over 2D and chose to animate the majority of our emoji. While you’ll see these roll out in product over the coming months, we wanted to share a sneak peek with you in honor of World Emoji Day. We’re also excited to unveil five brand-new emoji that signal our fresh perspective on work, expression, and the spaces in between.

Even though the video in the post is not entirely reflective of the actual textures and detail in these emoji, there is some beautiful design at play in these images. The faces, in particular, are playfully rendered, yet still legible even at the smaller size shown in the banner image. Many of the objects have soft lighting effects that, while slightly reducing contrast, do not seem to affect clarity too much. I am looking forward to seeing what they will look like in actual use.

Jennifer Daniel of Google, a company that hosts its own blog and uses its own top-level domain:

Well, it looks like giving some love to hundreds of emoji already on your keyboard — focusing on making them more universal, accessible and authentic — so that you can find an all-new fav emoji (I’m fond of 🎷🐛). And, you can find all of these emoji (yes, including the king, 🐢) across more of Google’s platforms including Android, Gmail, Chat, Chrome OS and YouTube.

I am much less fond of these.

Maddy Varner, the Markup:

If you’ve scrolled through any e-commerce sites lately, you’ve probably seen a version of it: A charming dinner plate costs $28 or “4 interest-free installments of $7.00 by Afterpay.” A pastoral checkered dress could run you $74.50 … or, alternatively, “4 interest-free payments of $18.62 with Klarna.”

In the past year, more and more merchants have started incorporating “buy now, pay later” options into their websites. They’re often prominently featured on product pages, where shoppers who might otherwise click away are encouraged to instead splurge and split their spending into periodic payments.

[…]

While BNPL companies present these loans as a smart budgeting tool, experts say costs can quickly add up, leaving shoppers with mounting debt. And regulators across the world have started to rein in these services, concerned that they can negatively impact the young consumers who tend to use them.

It is a bit disappointing but unsurprising that Apple is rumoured to be working on a competing offering. Once a company has got its feet wet in the murky sea of financial services, why would it be reluctant to go further?

Here is an excellent little single-purpose Mac utility: Unclack automatically mutes your mic when you are typing, and unmutes it when you stop. That’s it; that is all it does.

This was apparently released months ago, but I was only introduced to it recently, and it has been a very good thing to have. I am a loud typist pretty much always — it is a bad habit, I know — and this little utility means that I do not have to remember to mute and unmute my mic during online meetings. However, I have also discovered that I speak more while typing than I realized, thanks to this utility.

This is, for me, a perfect addition to my work-from-home software toolkit, and it is free. Recommended.
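
If you are curious how a utility like this can work, here is a minimal sketch of the general approach in Swift: watch for global key-down events, mute the input volume, and restore it after a brief pause in typing. To be clear, this is my own illustrative approximation and not Unclack’s actual implementation. A real utility would presumably drive Core Audio directly and restore the previous input level, while this sketch shells out to osascript and restores a fixed level for brevity. It can be run as a script with swift, and macOS requires granting the host process input monitoring permission before global key events are delivered.

```swift
import Cocoa

// A rough, illustrative approximation of the idea behind Unclack:
// mute the system input while keys are going down, and restore it
// after a one-second pause in typing.
var unmuteTimer: Timer?

func setInputVolume(_ percent: Int) {
    // Shelling out to osascript keeps the sketch short; a real utility
    // would adjust the input device's volume via Core Audio instead.
    let task = Process()
    task.executableURL = URL(fileURLWithPath: "/usr/bin/osascript")
    task.arguments = ["-e", "set volume input volume \(percent)"]
    try? task.run()
}

// Global monitors only receive events once the user grants this binary
// permission under Security & Privacy > Privacy > Input Monitoring.
NSEvent.addGlobalMonitorForEvents(matching: .keyDown) { _ in
    setInputVolume(0) // mute the moment typing starts

    // Debounce: each keystroke pushes the unmute one second further out.
    unmuteTimer?.invalidate()
    unmuteTimer = Timer.scheduledTimer(withTimeInterval: 1.0, repeats: false) { _ in
        setInputVolume(100) // restores a fixed level, not the prior one
    }
}

RunLoop.main.run()
```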

Hussein Kesvani, in an opinion piece for the Guardian:

There is an argument that by forcing people to reveal themselves publicly, or giving the platforms access to their identities, they will be “held accountable” for what they write and say on the internet. Though the intentions behind this are understandable, I believe that ID verification proposals are shortsighted. They will give more power to tech companies who already don’t do enough to enforce their existing community guidelines to protect vulnerable users, and, crucially, do little to address the underlying issues that render racial harassment and abuse so ubiquitous.

My pet theory is that our fractured relationship with other users of big online platforms has nothing to do with anonymity and everything to do with standards. Pseudonymity and anonymity have been a part of the internet since it was created. Many users of forums and, before them, BBSes were only known by their handles. The biggest thing that has changed in the last fifteen-or-so years is a weakening of moderation efforts and community standards. It used to be that you had to go to specific websites known for users’ ability to test the limits of good taste and free speech, but that approach was mainstreamed. In the earlier days of Twitter, company executives famously referred to it as the “free speech wing of the free speech party”. Alexis Ohanian repeatedly praised Reddit’s laissez-faire approach to speech, and Facebook has wrestled with moderation issues for well over a decade now. Many users may have been repelled by rampant abuse, and those who remained were able to set a standard for new users to grow accustomed to.

Lax moderation in the founding years of these platforms undoubtedly aided their growth, but that rapid ascendancy also compounded their inability to moderate as they grew. Mike Masnick of Techdirt has said that moderation is impossible at scale, but I think that is partly because platforms are not moderating at a small scale. Trying to embed community standards into a platform hosting hundreds of millions of users is a fraught exercise. It has to start when these platforms are nascent.

That is my little theory, but it is sort of irrelevant. There is no way to reset platforms to the size they were at their founding so that we can try this whole thing again. I do not know how we, collectively, find a better way to express ourselves online now that the standard has been set. I do not think banning anonymity is a realistic or effective solution. Platforms’ lowered tolerance for abuse is, I think, helpful, if long overdue. But some change perhaps comes from understanding that we are often communicating with real people. I am not arguing that it will solve racism, but Kesvani is right: requiring verified identification to use web platforms will only give a superficial impression of improving on that front, too.

Helen Rosner, of the New Yorker, interviewed Morgan Neville about his new film “Roadrunner”, a documentary about Anthony Bourdain’s life:

There is a moment at the end of the film’s second act when the artist David Choe, a friend of Bourdain’s, is reading aloud an e-mail Bourdain had sent him: “Dude, this is a crazy thing to ask, but I’m curious” Choe begins reading, and then the voice fades into Bourdain’s own: “… and my life is sort of shit now. You are successful, and I am successful, and I’m wondering: Are you happy?” I asked Neville how on earth he’d found an audio recording of Bourdain reading his own e-mail. Throughout the film, Neville and his team used stitched-together clips of Bourdain’s narration pulled from TV, radio, podcasts, and audiobooks. “But there were three quotes there I wanted his voice for that there were no recordings of,” Neville explained. So he got in touch with a software company, gave it about a dozen hours of recordings, and, he said, “I created an A.I. model of his voice.” In a world of computer simulations and deepfakes, a dead man’s voice speaking his own words of despair is hardly the most dystopian application of the technology. But the seamlessness of the effect is eerie. “If you watch the film, other than that line you mentioned, you probably don’t know what the other lines are that were spoken by the A.I., and you’re not going to know,” Neville said. “We can have a documentary-ethics panel about it later.”

Since Bourdain wrote the words generated by this faked audio, I can see how this might seem like a fine compromise if you twist your brain around a little bit, but it is diving headfirst into some murky ethical waters. In this specific case, it comes across as exploitative and disrespectful.

This email is apparently not the only generated audio in the film, and it is unclear what the circumstances are around other clips. A big reason why there are no clear answers is that Neville reportedly does not disclose the use of generated speech in the film — and that is inexcusable. For shame.

Kurt Wagner, Bloomberg:

When users get asked on iPhone devices if they’d like to be tracked, the vast majority say no. That’s worrying Facebook Inc.’s advertisers, who are losing access to some of their most valuable targeting data and have already seen a decrease in effectiveness of their ads. 

The new prompt from Apple Inc., which arrived in an iOS software update to iPhones in early June, explicitly asks users of each app whether they are willing to be tracked across their internet activity. Most are saying no, according to Branch, which analyzes mobile app growth. People are giving apps permission to track their behavior just 25% of the time, Branch found, severing a data pipeline that has powered the targeted advertising industry for years.

The opt-in numbers reported by Branch are similar to those last reported by Flurry for worldwide users.

The online advertising industry has been telling us for years that consumers overwhelmingly prefer personalized ads and are only too happy to give up private information. What a crock of lies. The ad tech industry has been relying on a lack of transparency and consent to drive its business. When given a choice, there is now large-scale evidence that people abhor tracking and will usually opt out.
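
For context on the mechanics: apps do not design this prompt themselves. It is produced by a single system framework call, and a decline renders the advertising identifier useless. Here is a minimal sketch of that flow in Swift; the requestTrackingPermission wrapper is my own illustrative naming, while the AppTrackingTransparency calls and the all-zeroes identifier behaviour are as Apple documents them.

```swift
import AppTrackingTransparency
import AdSupport

// A hypothetical wrapper illustrating the App Tracking Transparency flow.
// The app must declare an NSUserTrackingUsageDescription string in its
// Info.plist; that string appears in the body of the system prompt.
func requestTrackingPermission() {
    ATTrackingManager.requestTrackingAuthorization { status in
        switch status {
        case .authorized:
            // Only with consent can the app read a meaningful IDFA.
            let idfa = ASIdentifierManager.shared().advertisingIdentifier
            print("Tracking allowed; IDFA: \(idfa)")
        case .denied, .restricted, .notDetermined:
            // Without consent, the IDFA is all zeroes: useless for tracking.
            print("Tracking declined")
        @unknown default:
            print("Tracking status unknown")
        }
    }
}
```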

Also, for what it is worth, iOS 14.5, the update that launched App Tracking Transparency, was released in April and not “early June” as reported here. Never change, Bloomberg.

Ilya Brown of Twitter:

We built Fleets as a lower-pressure, ephemeral way for people to share their fleeting thoughts. We hoped Fleets would help more people feel comfortable joining the conversation on Twitter. But, in the time since we introduced Fleets to everyone, we haven’t seen an increase in the number of new people joining the conversation with Fleets like we hoped. Because of this, on August 3, Fleets will no longer be available on Twitter.

You may recall that the Fleets feature was launched globally in November, reportedly due to pressure from Paul Singer’s Elliott Management, though Jack Dorsey took issue with that characterization. I am not sure Wall Street is the best place to look for product ideas, but I admire Twitter’s willingness to experiment with copycat features apparently demanded by the same jerks who pushed Argentina to default. That’s innovation.

Joseph Cox, Vice:

Tech companies have repeatedly reassured the public that trackers used to follow smartphone users through apps are anonymous or at least pseudonymous, not directly identifying the person using the phone. But what they don’t mention is that an entire overlooked industry exists to purposefully and explicitly shatter that anonymity.

They do this by linking mobile advertising IDs (MAIDs) collected by apps to a person’s full name, physical address, and other personal identifiable information (PII). Motherboard confirmed this by posing as a potential customer to a company that offers linking MAIDs to PII.

While American lawmakers have been focused on allegations of criminally anticompetitive practices by bigger tech companies and American media has extensively covered Facebook and Google’s creepy tracking practices, the data “enrichment” industry has skated by with little attention outside of the tech-centric press. Its practices cannot be ignored.

A couple of years ago, records of over one billion people were found on an unprotected server, sourced from two different data enrichment companies. American cellular providers share subscriber information with advertisers and enrichment companies. This entire industry matches identifiers in different data sets to produce more comprehensive, more detailed, and more individualized profiles on people, which it sells back to the advertising industry, other data companies, resellers, and government agencies, according to the privacy policy of one of the two companies in this report.

I thought it might be useful to look at ways to opt out of this kind of associative data collection, so let’s examine those two companies.

FullContact cares so much about privacy that it provides a process for removing your data from its systems — but that is helpful only if you know the company exists. I followed the process and saw that FullContact had linked two of my email addresses and my phone number against a scraped copy of the LinkedIn profile I deleted many years ago, various social media profiles — remember FourSquare? — and my city. If you are familiar with the APIs provided by social media companies, this is probably an unsurprising data set. I sent a request to delete my data and, within an hour, I received an email confirming it was completed.

BIGDBM is much less transparent. On its Data Market page, it brags that it offers:

[…] a secure, cloud-based, self-service data platform that enables users to quickly and easily select data from billions of records. All BIGBDM records contain a persistent individual ID that keeps track of individuals in both online and offline data environments, allowing our customers for marketing to individuals using digital ads, or offline using phones and direct mail.

People seemingly have little control over whether BIGDBM has their identifier. On its privacy page, California users are able to request a copy of their data by completing a PDF form — which, as of writing, returns an error stating that the HTTPS certificate expired last year — and emailing it to the company. Then, BIGDBM may grant access to its California-specific database tool, at which point it appears that users may be allowed to delete their information. Apparently, this only applies to users in California; if you live elsewhere in the United States, BIGDBM may process your request if you ask a sympathetic company representative nicely.

It is unclear how the company treats information about non-Americans. Its privacy policy says that it does not collect information about people in the European Union “as a matter of course”, but how can it guarantee that? And what about people elsewhere?

Of course, all of this is only relevant if you have heard of BIGDBM. Companies like these are often unnamed in the user agreements and privacy policies most users do not read before registering for a service. In many cases, they fall under a generic term, like “vendors”, “partners”, or “other parties”.

It is onerous to require that individual users understand the full consequences of privacy policies like these. They grant most companies the freedom to share whatever information they feel like with whichever third parties they deem relevant to their business practices. Those parties might re-share it, or mix it with other records to increase its granularity. All of this is permitted under U.S. law. And, because many technology products and services are based in the U.S., it often means that non-Americans are subject to the same policies due to the jurisdiction clause in the user agreement.

New York Times reporters Sheera Frenkel and Cecilia Kang have written a new book about Facebook, “An Ugly Truth”, with an apt cover design. Last week, the Times published an excerpt about the fractured relationship between Sheryl Sandberg and Mark Zuckerberg. But it is the one published yesterday in the Telegraph that I think warrants further comment:

During a period spanning January 2014 to August 2015, the engineer who looked up his onetime date was just one of 52 Facebook employees fired for exploiting their access to user data. Men who looked up the Facebook profiles of women they were interested in made up the vast majority of engineers who abused their privileges. Most did little more than look up users’ information. But a few took it much further. One engineer used the data to confront a woman who had travelled with him on a European holiday; the two had gotten into a fight during the trip, and the engineer tracked her to her new hotel after she left the room they had been sharing. Another engineer accessed a woman’s Facebook page before they had even gone on a first date. He saw that she regularly visited Dolores Park, in San Francisco, and he found her there one day, enjoying the sun with her friends.

I do not know that Facebook will ever live down the reputation established by a teenaged Zuckerberg in an instant message to a friend shortly after he launched the site:

Zuck: They “trust me”

Zuck: Dumb fucks.

According to this excerpt, the limitations on engineer access to Facebook user data did not change much between the time Zuckerberg sent those messages and mid-2015. Mix that attitude with the goal Zuckerberg elucidated in a 2007 conversation with Sandberg before hiring her:

[…] he described his goal of turning every person in the country with an internet connection into a Facebook user.

There are now databases containing the personal details of about a third of the world’s population, held by a company at which, for a span of at least eighteen months, an engineer was fired an average of once every two weeks for improperly accessing users’ profiles, targeted advertising categories, or location data. This excerpt implies they were caught because they had used company-provided computers, and that they represent only a fraction of the “thousands” of engineers spying on Facebook users. This is an extraordinary abuse of power, akin to real-world stalking with fewer risks to the perpetrator.

In this excerpt and in a brief mention in the Times’ review, Alex Stamos comes out looking pretty good. I am curious about whether that holds in the full story.

You may need to log into a Telegraph account to read this link. Or you can just get the book; I placed a hold on a copy from my local library.

Issie Lapowsky, Protocol:

One of the web’s geekiest corners, the W3C is a mostly-online community where the people who operate the internet — website publishers, browser companies, ad tech firms, privacy advocates, academics and others — come together to hash out how the plumbing of the web works. It’s where top developers from companies like Google pitch proposals for new technical standards, the rest of the community fine-tunes them and, if all goes well, the consortium ends up writing the rules that ensure websites are secure and that they work no matter which browser you’re using or where you’re using it.

The W3C’s members do it all by consensus in public Github forums and open Zoom meetings with meticulously documented meeting minutes, creating a rare archive on the internet of conversations between some of the world’s most secretive companies as they collaborate on new rules for the web in plain sight.

But lately, that spirit of collaboration has been under intense strain as the W3C has become a key battleground in the war over web privacy. Over the last year, far from the notice of the average consumer or lawmaker, the people who actually make the web run have converged on this niche community of engineers to wrangle over what privacy really means, how the web can be more private in practice and how much power tech giants should have to unilaterally enact this change.

The “tech giant” framing of this piece obscures the multisided battle that is going on within these discussions. There are browser vendors — like Apple and Brave — that are more privacy-conscious, but with conflicts of interest, as well as people who advocate for these features with fewer conflicts. There are representatives of the big privacy-hostile tech companies: Google and Microsoft1 have web browsers, while Amazon and Facebook do not. And then there are ad tech companies that are smaller than the big tech companies but, as I have repeatedly argued, can be almost as creepy.


  1. Microsoft has a personalized ad network that tracks Windows users across their computers. ↥︎

Geoffrey A. Fowler, Washington Post:

I recently moved and needed to sign up for Internet and TV service. I chose a package that Comcast advertised would cost $90 per month.

When the first bill arrived, it totaled — surprise! — $127.72. That’s 42 percent more.

As I’ve learned, jacking up prices for service is perfectly legal. It’s also maddeningly common.

[…]

Comcast tells me this is exactly what its customers want. It said it disclosed its copious additional fees to me in various fine-print communications — though only after I entered my credit card number. “We conduct extensive consumer research and host focus groups and incorporate our findings into the way we present information to our customers, all in an effort to help ensure they have a positive experience and can easily understand the details of their service,” said Jennifer Khoury, Comcast’s chief communications officer.

There is nothing American consumers love more than ISPs and hidden fees, with the exception of pretty much anything else. Still, this is not solely a Comcast problem, nor is it only an American problem: one Canadian ISP’s website currently promises “TOTAL TV PLANS FOR $50/MO.” without making it clear that “Total TV” is a brand name rather than a description of the bundle pricing, and that the $50 per month rate applies only to the first month’s bill, after which it is $150 per month.

Russell Brandom, the Verge:

You don’t always get what you pay for in internet access. Most places only have one option, so you’re stuck picking the good plan or the bad plan from a single carrier, and if the expensive “broadband” plan turns out to be closer to dial-up speeds, there isn’t much you can do. And that’s without getting into the big swaths of the country that don’t even have a broadband option on the table.

So we’re joining with Consumer Reports to take a close look at the problem, collecting as many internet bills as we can to get a sense of which telecoms are holding up their end of the bargain — and which ones are falling short. The idea is to get a bird’s-eye view of the speeds people are actually getting, and what they’re paying for those speeds.

Consumer Reports promises that it will de-identify bills submitted to it. If you are American and would like to participate, you can complete the survey. I would happily participate if a similar study were offered in Canada.

DL Cade, Digital Photography Review:

Ever since Apple unveiled the M1 System on a Chip (SOC)—the CPU/GPU/RAM combo pack that powers the latest 13-inch MacBook Pro, MacBook Air, Mac mini, and the redesigned 24-inch iMac – the creative world has been buzzing. It’s fast, it’s power efficient, it barely needs to be cooled, and since it was designed by Apple for an Apple operating system, the M1 system is optimized to within an inch of its life.

[…]

The problem is that the M1 was never meant to power professional-grade hardware. It’s a preview of coming attractions – an extraordinary appetizer designed to serve the enthusiast and amateur community, while tantalizing pros with a mere taste of what’s possible. Seven months on, the pros are getting impatient.

Tim Bray:

DPReview just published Apple still hasn’t made a truly “Pro” M1 Mac – so what’s the holdup? Following on the good performance and awesome power efficiency of the Apple M1, there’s a hungry background rumble in Mac-land along the lines of “Since the M1 is an entry-level chip, the next CPU is gonna blow everyone’s mind!” But it’s been eight months since the M1 shipped and we haven’t heard from Apple. I have a good guess what’s going on: It’s proving really hard to make a CPU (or SoC) that’s perceptibly faster than the M1. Here’s why.

Bray’s speculation is well-considered, but perhaps misplaced.

Tom’s Guide:

As rumors swirl around a future M1X chip for the MacBook Pro 2021 and a possible M2 chip for the 2022 MacBook Air, Apple sees big things ahead for Apple Silicon, both in terms of achieving new designs and perhaps appealing to the most demanding audience of all — gamers. After all, many of the engineers building Apple’s chips are gamers themselves.

“Of course, you can imagine the pride of some of the GPU folks and imagining, ‘Hey, wouldn’t it be great if it hits a broader set of those really intense gamers,’” said [Apple VP Tim] Milet. “It’s a natural place for us to be looking, to be working closely with our Metal team and our Developer team. We love the challenge.”

The eagerness to see the M1 made to look like last year’s technology is understandable. But it has been just one year since Apple announced that it was making this transition, and the first products with the M1 were only announced and shipped in November. Good things take time, I say. A betting person might look at when Apple launched new Mac hardware over the past five years or so, and treat that as guidance for when Apple will announce the first slate of its high-end Macs running on its own silicon.

Update: Via Nut Bunnies on Twitter, it is also worth noting the ongoing chip shortage, which not one of these articles mentions. I still think beginning with consumer Macs and then adding higher-end models later is a perfectly sensible strategy, and there is little indication outside the company that professional models are delayed.

Saikat Chakrabarti on Twitter:

Every now and then I think about [how] bad readability on the Internet has gotten and it makes me sad.

Every major website now requires users to complete the same set of tedious tasks every seven or more days after their last visit, or whenever the site’s cookies expire. It is horrible. Between data-addicted advertisers and marketers, and well-meaning but flawed policies intended to impress upon users some semblance of informed consent, the web is increasingly hard to read.

Via Shoshana Wodinsky in the replies to that tweet, here is an excellent March 2020 piece by David Roth for Columbia Journalism Review:

Even on the websites of august institutions ads interrupt the text every two paragraphs; ads follow you down the sides of the page like store security; ads pop up in boxes that resist being closed, the elusive little x evading your cursor.

There have always been websites like this, usually the kind that we save for private browsing: places to stream out-of-market sporting events, or download bittorrents of hard-to-find films, or browse other things that no reasonable person would admit to.

Now, a great many websites are at least a little bit like this. Not all of these sites are as hard up as they appear, but all of them — the authentically desperate and the merely thirsty, the ones trying heroically to sell their way out of a downward spiral and those blithely steering into it — have made the same choice. Which is to look and feel and be more friendly to advertisers than readers.

The galling thing is that this strategy works — not for users, of course, but on an entirely commercial level it works. Now that we are all inured to the horrific experience created in service of anti-privacy advertising schemes, there is little incentive for mainstream websites to do things any other way. This makes money. News websites can experiment with paywalls, but this problem extends far beyond those kinds of websites. Go to the online store of any retailer and you will have to decline a newsletter box and hide some sort of coupon offer; you might have to do the latter twice because it will appear again if you move your cursor towards the tab bar, triggering what is known in the business as an exit intent popup. You are clearly there to browse and perhaps make a purchase, and the retailer still wants to inundate you with hard sales tactics.

The web has fallen so far in just the past ten years. I am worried about what is the next lowest bar online marketers will collectively decide websites no longer have to clear.