Month: February 2021

Are you stuck inside? Are you tired of the view out of your window? Do you like GeoGuessr but want it to have more motion?

You should give City Guesser a try. The goal is still to guess the city but, instead of Street View clues, it uses videos of people walking through cities. I’ve found a lot of overlap with my GeoGuessr house rules but with fewer geographic restrictions. Via Waxy.

It has been two and a half years since Bloomberg Businessweek published the now-legendary story of how servers made by Supermicro were compromised by Chinese intelligence at the time of manufacture — servers that ended up in data centres for “a major bank, government contractors”, Apple, and a company acquired by Amazon that counted among its clients the U.S. Department of Defense. Contemporary statements from the named affected companies were unequivocal: either the reporters were completely wrong, or these statements were lies that would carry severe penalties should evidence be found.

In the ensuing years, Jordan Robertson and Michael Riley, the two reporters on the story, have mostly stayed quiet despite frantic calls from security professionals for clarity. Its truthfulness has become something of an obsession for many, including me. On the first anniversary of its publication, I lamented the lack of followup: “either [it is] the greatest information security scoop of the decade or the biggest reporting fuck-up of its type”.

Nearly a year and a half has passed since I wrote that, and it has seemed like it would remain a bizarre stain on Bloomberg Businessweek’s credibility. And then, today, came the followup.

Jordan Robertson and Michael Riley, Bloomberg:

In 2010, the U.S. Department of Defense found thousands of its computer servers sending military network data to China — the result of code hidden in chips that handled the machines’ startup process.

In 2014, Intel Corp. discovered that an elite Chinese hacking group breached its network through a single server that downloaded malware from a supplier’s update site.

And in 2015, the Federal Bureau of Investigation warned multiple companies that Chinese operatives had concealed an extra chip loaded with backdoor code in one manufacturer’s servers.

Each of these distinct attacks had two things in common: China and Super Micro Computer Inc., a computer hardware maker in San Jose, California. They shared one other trait: U.S. spymasters discovered the manipulations but kept them largely secret as they tried to counter each one and learn more about China’s capabilities.

When I woke up this morning and saw Techmeme’s rewritten headline, “Sources: US investigators say hardware and firmware of Supermicro servers were tampered, with an extra chip loaded with a backdoor to send data to China”, I thought there must be some strange bug that is loading old news. Alas, this is a new story, with new sources — over fifty people spoke with the reporters, apparently — new evidence, and new allegations. But rather than clarifying the 2018 article, I find that I have many of the same questions now about two blockbuster articles.

Before I get into my confusion, a necessary caveat: I only have information that has been shared publicly and I am a hobbyist commentator, while Robertson and Riley are journalists who have been collecting details for years. These stories matter a lot, and their allegations are profound, but extraordinary claims demand extraordinary evidence. And based on everything that has been reported so far, I just don’t see it yet. Chalk it up to my own confusion and naïveté, but it seems like I am not alone in finding these reports insufficiently compelling.

Here’s the one-paragraph summary: Supermicro is a big company with lots of clients, any of which would be concerned about a backdoor to a foreign intelligence agency in their hardware. According to these reports, the U.S. intelligence apparatus was mobilized to counter the alleged threat. This has been a high-profile case since the first story was published. And I am supposed to believe that, in two and a half years, the only additional reporting that has been done on this story is from the same journalists at the same publication as the original. Why do I not buy that?

Robertson and Riley’s new report concerns the three specific incidents in the quoted portion above. There is no new information about the apparent victims described in their 2018 story. They do not attempt to expand upon stories about what was found on servers belonging to Apple or the Amazon-acquired company Elemental, nor do they retract any of those claims. The new report makes the case that this is a decade-long problem and that, if you believe the 2010, 2014, and 2015 incidents, you can trust those which were described in 2018. But if you don’t trust the 2018 reporting, it is hard to be convinced by this story.

This time around, there are many more sources, some of which agreed to be named. There is still no clear evidence, however. There are no photographs of chips or compromised motherboards. There are no demonstrations of this attack. There is no indication that any of these things were even shown to the reporters. The new incidents are often described by unnamed “former officials”, though there are a handful of people who are willing to have quotes attributed.

So let’s start with the claims of one of those on-the-record sources:

“In early 2018, two security companies that I advise were briefed by the FBI’s counterintelligence division investigating this discovery of added malicious chips on Supermicro’s motherboards,” said Mike Janke, a former Navy SEAL who co-founded DataTribe, a venture capital firm. “These two companies were subsequently involved in the government investigation, where they used advanced hardware forensics on the actual tampered Supermicro boards to validate the existence of the added malicious chips.”

Janke, whose firm has incubated startups with former members of the U.S. intelligence community, said the two companies are not allowed to speak publicly about that work but they did share details from their analysis with him. He agreed to discuss their findings generally to raise awareness about the threat of Chinese espionage within technology supply chains.

Do not be distracted by the description of Janke as a former Navy SEAL. It is irrelevant to this matter.

One of the companies that has received funding from DataTribe is Dragos, which promises “industrial strength cybersecurity for industrial infrastructure”. It is not clear whether Dragos was one of the firms that received an FBI briefing. However, Dragos’ CEO Robert M. Lee has been consistently critical of Robertson and Riley’s reporting. Lee continues to be skeptical of their claims, saying that they have “routinely shown they struggle on technical details”. That becomes apparent in a detail in this adjacent story of apparently compromised Lenovo ThinkPads used by U.S. forces in Iraq in 2008:

“A large amount of Lenovo laptops were sold to the U.S. military that had a chip encrypted on the motherboard that would record all the data that was being inputted into that laptop and send it back to China,” Lee Chieffalo, who managed a Marine network operations center near Fallujah, Iraq, testified during that 2010 case. “That was a huge security breach. We don’t have any idea how much data they got, but we had to take all those systems off the network.”

Three former U.S. officials confirmed Chieffalo’s description of an added chip on Lenovo motherboards. The episode was a warning to the U.S. government about altered hardware, they said.

That quote was pulled from a court transcript, and Chieffalo really did say “a chip encrypted on the motherboard”. That phrase is gibberish. It seems likely that Chieffalo meant to say “a chip embedded on the motherboard”, but the transcript includes no attempt at correction. More worrying for this story, Chieffalo was quoted wholesale without any note from the reporters. It seems reasonable that they could not speculate about the intended word choice, but surely they could have reached Chieffalo for clarification. If not, it seems like an odd choice to approvingly quote it; it undermines my trust in the writers’ understanding.

That trust is critical, particularly as this report implies a much more severe allegation. In 2018, Robertson and Riley wrote that Supermicro servers were compromised at the subcontractor level:

During the ensuing top-secret probe, which remains open more than three years later, investigators determined that the chips allowed the attackers to create a stealth doorway into any network that included the altered machines. Multiple people familiar with the matter say investigators found that the chips had been inserted at factories run by manufacturing subcontractors in China.

That suggests some distance between Supermicro itself and its allegedly compromised boards. If this is true, the company has some wiggle room there to disclaim awareness and terminate that supplier relationship. But in today’s report, Robertson and Riley step up the level of Supermicro’s involvement:

Manufacturers like Supermicro typically license most of their BIOS code from third parties. But government experts determined that part of the implant resided in code customized by workers associated with Supermicro, according to six former U.S. officials briefed on the findings.

Investigators examined the BIOS code in Defense Department servers made by other vendors and found no similar issues. And they discovered the same unusual code in Supermicro servers made by different factories at different times, suggesting the implant was introduced in the design phase.

Overall, the findings pointed to infiltration of Supermicro’s BIOS engineering by China’s intelligence agencies, the six officials said.

The report is careful to say that there is no evidence of executive involvement, and that these changes would have been made by people in a position to be working directly with Supermicro’s server technologies. But that still implies knowledge of this alleged compromise at much closer proximity than some factory in China.

The BIOS manipulation above is dated to 2013. The following year, the report says, the FBI detected nefarious chips on “small batches” of Supermicro boards:

Alarmed by the devices’ sophistication, officials opted to warn a small number of potential targets in briefings that identified Supermicro by name. Executives from 10 companies and one large municipal utility told Bloomberg News that they’d received such warnings. While most executives asked not to be named to discuss sensitive cybersecurity matters, some agreed to go on the record.

In 2018, Businessweek said there were up to thirty companies; it is not clear how much overlap there is with the eleven above. But, as Robertson and Riley write, not a single one has said they found evidence of infiltration. Some blamed a dearth of information from the FBI for their inability to find a problem with their servers, but what if the supposed rogue chips simply did not exist? That would make it especially hard to find evidence for them. Just because government agencies are providing briefings of a possible problem, it does not necessarily mean that problem exists as described.

Here’s one more named source with a funny story:

Darren Mott, who oversaw counterintelligence investigations in the bureau’s Huntsville, Alabama, satellite office, said a well-placed FBI colleague described key details about the added chips for him in October 2018.

“What I was told was there was an additional little component on the Supermicro motherboards that was not supposed to be there,” said Mott, who has since retired. He emphasized that the information was shared in an unclassified setting. “The FBI knew the activity was being conducted by China, knew it was concerning, and alerted certain entities about it.”

If there is a phrase that is jumping out to you in this quote, it is probably “October 2018” because that is when Robertson and Riley published their original “Big Hack” piece. It seems completely plausible to me that Mott’s colleague was describing that Businessweek article. There is nothing here that suggests the colleague was referring to independent knowledge. On the contrary, the fact that this was shared in an “unclassified setting” runs counter to the repeated assertions in both articles about the sensitivity and secrecy of these operations — so secret that, apparently, not even Supermicro was supposed to know.

There is one more incident described in detail. This time, Intel was the supposed target in 2014:

Intel’s investigators found that a Supermicro server began communicating with APT 17 shortly after receiving a firmware patch from an update website that Supermicro had set up for customers. The firmware itself hadn’t been tampered with; the malware arrived as part of a ZIP file downloaded directly from the site, according to accounts of Intel’s presentation.

This delivery mechanism is similar to the one used in the recent SolarWinds hack, in which Russians allegedly targeted government agencies and private companies through software updates. But there was a key difference: In Intel’s case, the malware initially turned up in just one of the firm’s thousands of servers — and then in just one other a few months later. Intel’s investigators concluded that the attackers could target specific machines, making detection much less likely. By contrast, malicious code went to as many as 18,000 SolarWinds users.

This posits an incredibly sophisticated attack — but, again, without supporting evidence. The report says that two steel companies based outside of the U.S. received compromised firmware in 2015 and 2018 from that update site. The Bloomberg story does not mention a 2016 case where Apple found an “infected driver” on one of its servers, which it determined to be accidental. All of these cases point back to an update server that Supermicro’s statement implies was not being served over HTTPS — pause for effect — until some time after that 2018 incident. That’s pretty bad security.
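An unencrypted update channel means clients have no way to know that the file they received is the file the vendor published. The sketch below is purely illustrative (the function name and parameters are mine, not anything from Supermicro’s actual tooling), but it shows the usual mitigation: checking a download against a checksum published out of band, which catches a tampered file no matter how it traveled:

```python
import hashlib

def verify_update(path: str, expected_sha256: str) -> bool:
    """Check a downloaded update against a checksum published out of band.

    HTTPS only protects the file in transit; comparing against a known
    digest also catches a file swapped out on a compromised server.
    """
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large firmware images are not loaded into memory at once.
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256
```

Of course, this only helps if the checksum itself is distributed through a channel the attacker does not control, which is why cryptographically signed updates are the stronger norm today.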

But is it possible that these were more isolated events, and not precise attacks? I am not doubting Intel’s investigative competence, but I am questioning whether the details of this internal presentation have been accurately relayed to Robertson and Riley. There is no indication that the reporters saw the presentation themselves. If you shed the narrative and look at what is being described here, it sounds like APT17 — an infiltration team that FireEye attributes to the Chinese government — might have compromised Supermicro’s update server and planted malware for its clients to inadvertently install. Both Apple and Intel have denied that this was of notable concern. Malware is certainly a worry, though I am having trouble after all this time trusting the reporting I am basing my theory on. But there is a vast chasm between what has become a routine breach of a supplier with high-value clientele, and the supply chain hardware attack that Bloomberg has been reporting for two and a half years now without turning up a single piece of direct evidence.

There is more in Robertson and Riley’s new piece that one can nitpick; Matt Tait put together a comprehensive Twitter thread of concerns, with an acceptable summary:

FWIW, my money is on this whole saga being, if you dig deeply enough, just briefings related to the 2016 supermicro bad firmware update incident filtered through so many games of telephone that it’s eventually twisted itself into a story about tiny chips that never happened.

The problem remains that we just do not know what is going on here. This is not a trivial matter: there are many companies that rely on Supermicro hardware, and they need to know if there is any chance that any of it is compromised. We now have two lengthy and deeply reported stories with ostensibly alarming conclusions that have produced more confusion than clear answers.

A key indicator of the risk seen in these reports is how Supermicro’s clients behaved after these incidents were disclosed. It turns out that many of them — including Intel, the Pentagon, and NASA — have continued to use Supermicro as a supplier. One would think that, if there were concerns about the security of the company’s products, clients would be cancelling contracts left and right.

Everything about this story is wild and hard to believe. Apparently, there were three different vectors of vulnerabilities in Supermicro products: BIOS manipulation, malicious chips, and insecure firmware updates. In Robertson and Riley’s telling, all three have been exploited over the last eleven years. These attacks cover a few dozen high-profile companies and are being investigated by U.S. intelligence agencies; those agencies are briefing other organizations about the danger. Yet there are only two journalists who have heard anything about this, despite this supposed supply chain attack being one of the most-watched information security stories in recent memory, and Supermicro still is not a prohibited vendor.

I would find this more compelling if this story were corroborated by more outlets with different sources, or if Robertson and Riley were able to produce more rigorous evidence. Then, at least, there would be some clarity. Right now, it feels like I’ve seen this movie before.

Will Oremus, OneZero:

When you join the fast-growing, invite-only social media app Clubhouse — lucky you! — one of the first things the app will ask you to do is grant it access to your iPhone’s contacts. A finger icon points to the “OK” button, which is also in a bolder font and more enticing than the adjacent “Don’t Allow” option. You don’t have to do it, but if you don’t, you lose the ability to invite anyone else to Clubhouse.


Granting an app access to your contacts is ethically dicey, even if it’s an app you trust. If you’re like most people, the contacts in your phone include not just your real-life friends, but also old acquaintances, business associates, doctors, bosses, and people you once went on a bad date with. For journalists, they might also include confidential sources (although careful journalists will avoid this). When you upload those numbers, not only are you telling the app developer that you’re connected to those people, but you’re also telling it that those people are connected to you — which they might or might not have wanted the app to know. For example, say you have an ex or even a harasser you’ve tried to block from your life, but they still have your number in their phone; if they upload their contacts, Clubhouse will know you’re connected to them and make recommendations on that basis.

I have previously written in passing about how invasive it is for apps and services to be able to vacuum up contacts. But I am not sure I have fully expressed how much of a catastrophe it is for privacy and consent.

In 2012, Apple began requiring explicit permission for apps to access a user’s contacts, after many apps — including Path and Foursquare — were shown to be uploading contact lists without users’ explicit knowledge or consent. The thing that has always bugged me about this arrangement is that there is no involvement in this decision from the people in the contact list.

Just because I have someone’s phone number or email address, that does not make it right to push that information into some app’s database. Likewise, when I give someone my contact details, it is not immediately apparent that they will likely, at some point, pass that along to a company that I would not elect to share that information with. And it isn’t just email addresses and phone numbers: contact directories contain all sorts of unique information about people, and it is trivial to merge identifiers to produce more comprehensive dossiers about individuals. This is not hypothetical; it is often marketed as a feature.

The permission dialog iOS presents users before an app is able to access their contacts is, in a sense, being presented to the wrong person: can you really consent on behalf of hundreds of friends, family members, and acquaintances? From a purely ethical perspective, the request ought to be pushed to every contact in the directory for approval, but that would obviously be a nightmare for everyone.

There are clearly legitimate uses for doing this. Allowing people to find contacts already using a service, as Clubhouse is doing, is a reasonable feature. It does not seem like something that can be done on-device, so the best solution that we have is, apparently, to grant apps permission to collect every contact on our phones. But that is a ludicrous tradeoff.
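To make the tradeoff concrete, here is roughly how naive contact matching works once the numbers reach the server. This is a hypothetical sketch, not any particular app’s implementation, and the names are mine. Even the common mitigation of hashing numbers before upload helps less than it appears to, because the space of valid phone numbers is small enough to reverse by brute force:

```python
import hashlib

def hash_number(number: str) -> str:
    """Normalize a phone number to its digits, then hash it before upload."""
    digits = "".join(ch for ch in number if ch.isdigit())
    return hashlib.sha256(digits.encode()).hexdigest()

def match_contacts(uploaded_hashes: set[str], registered: dict[str, str]) -> list[str]:
    """Server side: return the ids of registered users found in an upload.

    `registered` maps a user id to the hash of that user's number.
    """
    return [uid for uid, h in registered.items() if h in uploaded_hashes]
```

Because every plausible phone number can be hashed in bulk, the server — or anyone who obtains its database — can recover the original numbers, so hashing changes little about the consent problem described above.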

This is why it is so important for there to be strict privacy regulations — particularly in the United States. It should not be left up to individuals or businesses to decide to what extent they are comfortable allowing their users to violate the privacy of others. I do not think legitimate uses of contact matching should be banned; I think these features should be made safer.

Sarah Miller:

So, already feeling very out of sorts, let’s recall, I requested help from the university’s enrollment/grade/whateverthefuck portal help staff, and got into “line” behind 20 people. It did keep updating, you’re 16th, you’re 11th, you’re 6th! This was the best part of my day by far. I like to think I occupied each place in line with proper reverence.

My turn came. I explained the problem. The help guy said he would send me an email. And then he sent an email, to my email, the only email I ever use, the email that never doesn’t get emails, ever. I have no reason to disbelieve this. And I didn’t get it. He sent another one, I didn’t get it. He said I would have to call the university and talk to them about why I couldn’t get emails from the enrollment/grade/whateverthefuck portal. As soon as our chat ended, I got two emails from the help desk of the enrollment/grade/whateverthefuck portal I couldn’t get emails from, one saying, sorry you can’t get emails, we did everything we could, and the other a transcript of the conversation about how I couldn’t get emails. Just in case you’re not grasping this: I got two emails from the place I couldn’t get one email from about not getting that email.

There never seems to be an explanation for these things. This piece spoke to me — like Miller, I run into technical errors every day just after I have done all of the right things, clicked all of the right buttons, and said the exact right incantation. And then comes some cheerful error message telling me to try again now or later or, even worse, nothing at all. I read that circumstance as a good time to step away and peel an orange.

John D. McKinnon and Alex Leary, Wall Street Journal:

A U.S. plan to force the sale of TikTok’s American operations to a group including Oracle Corp. and Walmart Inc. has been shelved indefinitely, people familiar with the situation said, as President Biden undertakes a broad review of his predecessor’s efforts to address potential security risks from Chinese tech companies.


Whatever shape a possible TikTok deal takes, it is not likely to feature the $5 billion fund for education that Mr. Trump had said Oracle and Walmart were preparing to create as part of the deal, according to one of the people familiar with the situation.

To be clear, this was always a sale for show, it remains uncertain whether Oracle and Walmart were actually going to create an education fund, and the curriculum that would apparently have been promoted was the context-free 1776 Project.

Justin O’Beirne is so good at these articles. In this new one, he explores four primary questions:

  1. Why was the Look Around coverage in Canada available so soon after Apple began collecting it and across so much land, compared to every other country Apple has released Look Around imagery for so far?

  2. Why do points of interest differ significantly between Apple Maps’ flat view, Look Around imagery, and the real world?

  3. Why do many of Apple’s most recent Look Around releases lack points of interest?

  4. Whatever happened to the vans Apple showed off to Matthew Panzarino in June 2018? The most recent image I can find of one of those LiDAR vans was captured in September 2018, but none of that capture’s imagery is available in Maps, nor, as O’Beirne shows, is any of the imagery those vans collected.

This is one of my favourite pieces that O’Beirne has done, and not just because he has kind words for yours truly. I love how mysterious this process is and, in particular, how it compares to Google’s efforts. Google obviously has more practice with building maps and processing street-level images. Apple’s approach reveals some hiccups and question marks — I really want to know what those vans were capturing and why they are apparently no longer used.

Alex Birsan:

Apparently, it is quite common for internal package.json files, which contain the names of a javascript project’s dependencies, to become embedded into public script files during their build process, exposing internal package names. Similarly, leaked internal paths or require() calls within these files may also contain dependency names. Apple, Yelp, and Tesla are just a few examples of companies who had internal names exposed in this way.


From one-off mistakes made by developers on their own machines, to misconfigured internal or cloud-based build servers, to systemically vulnerable development pipelines, one thing was clear: squatting valid internal package names was a nearly sure-fire method to get into the networks of some of the biggest tech companies out there, gaining remote code execution, and possibly allowing attackers to add backdoors during builds.

One of the things that is most impressive and, therefore, terrifying about this attack vector is just how clean it is. It has echoes of malicious software updates but it is even more straightforward.
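The mechanics are worth spelling out. Many build setups, when a dependency name exists both in a private registry and on the public one, will take whichever version is higher. The toy resolver below is my own illustration of that failure mode, not any real package manager’s algorithm, and the package name in the example is made up:

```python
def resolve(name: str, internal: dict[str, str], public: dict[str, str]) -> tuple[str, str]:
    """Pick the highest version of `name` across both registries.

    Preferring the bigger version number, wherever it lives, is exactly
    what makes dependency confusion work.
    """
    candidates = []
    if name in internal:
        candidates.append(("internal", internal[name]))
    if name in public:
        candidates.append(("public", public[name]))
    if not candidates:
        raise KeyError(name)
    # Naive semver comparison: compare dotted version strings numerically.
    return max(candidates, key=lambda entry: tuple(int(p) for p in entry[1].split(".")))
```

If an attacker learns an internal name like `acme-auth` and publishes it publicly at an absurdly high version, the build pulls the attacker’s package. Pinning a registry per package, or using scoped names, closes the hole.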

Paul Ford, in a perfect essay for Wired:

Home is supposed to be a constant, steady place, a shelter for a family. It shouldn’t change very much. But an office is basically a big clock with humans for hands. And I find that the people who don’t want to go back to pre-pandemic office culture are the people who are the most concerned about their time. Sometimes this is their personality; they are engineers who look at travel as a waste, who seek efficiencies in their work and health. Sometimes they’re people with other stress, like parents of young children who triangulate between the day care’s schedule, their boss’s expectations, and kids’ needs. For a disabled person, working from home can save hours of daily, needless negotiation. All of these cases are utterly valid. And yet we’re going back. Maybe not all of us, maybe with hybrid schedules. But most of us. We all know it.

This is, as I say, a perfect essay. I would not change one word. But, though I am skeptical of my ability to make a contribution, I always like to add a little something to these links, so here goes.

The thing I miss most about living and working in different places is that it is necessary to travel between them. I began missing my commute so much that, last summer and autumn, I would often take long walks after a workday. That now feels like a luxury. As I write this, Environment Canada’s website tells me that it feels like thirty degrees below freezing and, if I had to go into the office tomorrow at my usual time, it is forecasted to feel ten degrees colder than that.

Much as I am glad to not be turned into a human icicle, the missing drumbeat of home-walk-work-walk-home has been a difficult adaptation. To make matters more monotonous, I am working from the same home office every day because my computer with all of my work stuff is an iMac and it is difficult to drag around. So it doesn’t matter what I am doing — writing email, meeting colleagues, having a quick chat — it all happens in exactly the same place staring at exactly the same screen sitting in exactly the same chair rolling around on exactly the same rug. The rug is a new addition for this year, actually; I thought it would help make my work space feel different and separate. It felt that way for only about a week.

I wonder, will my home office ever become part of my home again, or will it always feel like an extension of where I work?

I am linking to this with the caveat that it is a Bloomberg Businessweek story about supply chains in China, which is a particular genre that the magazine has completely bombed before without any public reckoning or accountability. One wonders why Bloomberg continues to leave its tattered reputation dangling in the wind, making it hard to trust stories from any of its writers.

With that in mind, here’s Austin Carr and Mark Gurman with a look into Apple’s supply chain and how Tim Cook grew Apple from a mere icon in 2011 to the world’s most valuable publicly-traded company:

Apple’s turnaround in the ensuing years has generally been attributed to Jobs’s product genius, beginning with the candy-colored iMacs that turned once-beige appliances into objets d’office. But equally important in Apple’s transformation into the economic and cultural force it is today was Cook’s ability to manufacture those computers, and the iPods, iPhones, and iPads that followed, in massive quantities. For that he adopted strategies similar to those used by HP, Compaq, and Dell, companies that were derided by Jobs but had helped usher in an era of outsourced manufacturing and made-to-order products.


Contract manufacturers worked with all the big electronics companies, but Cook set Apple apart by spending big to buy up next-generation parts years in advance and striking exclusivity deals on key components to ensure Apple would get them ahead of rivals. At the same time he was obsessed with controlling Apple’s costs. Daniel Vidaña, then a supply management director, says Cook particularly fussed over fulfillment times. Faster turnarounds made customers happier and also reduced the financial strain of storing unsold inventory. Vidaña remembers him saying that Apple couldn’t afford to have “spoiled milk.” Cook lowered the company’s month’s worth of stockpiles to days’ and touted, according to a former longtime operations leader, that Apple was “out-Dell-ing Dell” in supply chain efficiencies.

Since Apple made the call to remove a Hong Kong protest-mapping app from the App Store — a decision that was apparently made to appease a Chinese government that is worried about pro-democracy demonstrators — I have been intrigued by how closely Tim Cook’s ascendance dovetailed with that choice. Hong Kong’s sovereignty was returned to China in July 1997; Cook joined Apple less than a year later in March 1998. Cook was a primary force in moving Apple’s production lines to China, mostly to factories located just across the Sham Chun River, which separates mainland China and Hong Kong.

Over the last twenty years, Apple’s dependency on China has grown, as has China’s influence over Hong Kong. The two paths collided in 2019 when demonstrators in Hong Kong used an iOS app to alert others about the location of police barricades, and Apple under Cook’s leadership removed that app from the store. Some commentators saw this as protecting Apple’s access to the Chinese market; I bet its reliance on factories in that country was a greater motivator.

A recent Nikkei report indicated that Apple is seeking to reduce that dependency. But it does not seem to be going well, according to Carr and Gurman:

When Apple engineers started setting up manufacturing in Texas, sources familiar with the matter say, they had a difficult time finding local suppliers willing to invest in retooling their factories for a one-off Mac project. According to a former Apple supply chain worker, huge quantities of certain components needed to be imported from Asia, which caused a domino effect of delays and costs. If a shipment arrived with defective parts, for example, the Texas factory had to wait for the next air-cargo delivery; at factories in Shenzhen, supply replacements were a short drive away. It felt like the opposite of Gou’s ultra-efficient all-in-one Foxconn hubs. “We really emphasized with the suppliers to triple-check their product before they put it on a plane to Texas,” this worker says. “It was a pain.”


Meanwhile, Apple has moved some production of AirPods to Vietnam and iPhones to India, where the company has run into scale and quality issues, too. More significant manufacturing diversification is likely to take years, even as Cook faces pressure to decouple from China over censorship, human-rights violations, and criticism about labor conditions at mainland factories. In an all-hands meeting last year, an employee asked Dan Riccio, then Apple’s hardware chief, why the company continues to build products in China given these ethical problems. The crowd cheered. “Well, that’s above my pay grade,” he responded, before adding that Apple was still working to expand its manufacturing presence beyond China.

I do think that Apple’s executive team really believes in social justice and trying to do the right thing. The factories of its contract manufacturers in China undermine that, and I think they are cognizant of that. But modifying a supply chain as integrated and complex as Apple’s to give the company more leverage in its negotiations with Chinese government officials is an enormous task.

This report contains little new information, but it is an engaging summary of how this supply chain has evolved over time — right up until you get near the end and then there’s this weird paragraph:

In many ways, Cook is now applying the lessons Apple learned building its China manufacturing network to other parts of the business. Its operational prowess has enabled it to churn out more product permutations and accessories. And just as Apple uses its awesome buying power to extract concessions from suppliers, it’s now using its control over an equally impressive digital supply chain, which includes the company’s own subscription services, as well as third-party apps, to generate greater revenue from customers and software developers. In an October report on the tech industry, the House antitrust subcommittee said this influence of its App Store amounted to “monopoly power” and recommended that regulators step in.

Perhaps I am missing something, but the connection between the physical supply chain and the App Store’s distribution policies is tenuous. It is also incorrect: while Carr and Gurman say that Apple is exercising greater control to generate more revenue through its App Store, one of the few notable highlights for iOS developers last year was the announcement of the Small Business Program, which lowered Apple’s commission to 15% for developers earning up to one million dollars per year. There are many caveats and it is imperfect, but it is the opposite of the squeeze the company puts on its hardware suppliers, not its analogue.

Nathan Collier of Malwarebytes:

Late last December we started getting a distress call from our forum patrons. Patrons were experiencing ads that were opening via their default browser out of nowhere. The odd part is none of them had recently installed any apps, and the apps they had installed came from the Google Play store. Then one patron, who goes by username Anon00, discovered that it was coming from a long-time installed app, Barcode Scanner. An app that has 10,000,000+ installs from Google Play! We quickly added the detection, and Google quickly removed the app from its store.

The basic premise of corrupting a straightforward utility app is not new, but it is concerning. In 2018, Craig Silverman of Buzzfeed News revealed that an entire company with the unsubtle name We Purchase Apps acquired over a hundred of them to execute an ad fraud scheme on Android. Becky Hansmeyer wrote about how an iOS wallpaper app was flipped to another developer that packed it full of ads. Browser extensions are another popular vector, particularly for analytics companies that want to spy on users’ browsing. A particularly aggressive Chrome extension generated a 2016 FTC investigation.

Update: Google recently pulled another Chrome extension after its new owners added malware.

Steven Shen:

On iOS, if you turn on “Limit Adult Website” under Screen Time -> Content Restrictions, Safari blocks any website URL containing the word “asian”. Seriously, go try it, it’s unbelievable. I filed a [Feedback] a long time ago. Nothing changed.

Victoria Song, Gizmodo:

Other racial or ethnic search terms such as “Black,” “white,” “Korean,” “Arab” or “French” don’t seem to be impacted by the filter. Also confusingly, some popular pornographic search terms are blocked while others aren’t. For example, the search term “schoolgirl” isn’t blocked but “redhead” is.


Right now, it’s not clear how Apple decides which search terms are adult and which ones aren’t. It’s very possible that this is a goof from some AI program that just scrapes search results for popular porn terms. But at the same time, something like this should not be left to AI without some sort of human curation. The alternative — that a group of humans did oversee and approve this list of terms — would be exponentially worse. Regardless of whether this was or wasn’t an intentional decision, the fact this had been reported to Apple more than a year ago and still nothing has changed is massively disappointing.

It isn’t just iOS — the equivalent feature in MacOS also blocks “Asian” websites, but less consistently. If I enable website restrictions on MacOS, I am able to search Wikipedia and access articles with “Asian” in their titles and URLs; on iOS, those same searches and articles are blocked.

It is also not the sole racial or race-related term blocked by Screen Time, but it is one of very few. While you can visit Ebony Magazine’s website with parental controls enabled, searching Google for the lyrics to Paul McCartney and Stevie Wonder’s “Ebony and Ivory” is blocked. “Asian” is, however, the only blocked term that prohibits searches for an entire continent.

This is upsetting, made all the more so by Apple’s lack of acknowledgement of Shen’s bug report. I characterized this as a Scunthorpe problem in an earlier version of this post, but that is hardly the case: “Asian” — and “ebony”, for that matter — need more words and context before they could be considered sexualized.
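To make concrete why a filter like this over-blocks, here is a minimal sketch of a naive substring-based URL filter. The blocklist and function are hypothetical illustrations of the observed behaviour, not Apple’s actual implementation:

```python
# Hypothetical sketch of a substring-based "adult content" URL filter,
# illustrating the over-blocking described above. Not Apple's real code.

BLOCKED_SUBSTRINGS = {"asian", "ebony"}  # illustrative terms only

def is_blocked(url: str) -> bool:
    """Return True if the URL contains any blocked substring, anywhere."""
    lowered = url.lower()
    return any(term in lowered for term in BLOCKED_SUBSTRINGS)

# A substring match cannot distinguish sexualized uses from ordinary ones:
print(is_blocked("https://en.wikipedia.org/wiki/Asian_cuisine"))   # True
print(is_blocked("https://example.com/ebony-and-ivory-lyrics"))    # True
print(is_blocked("https://example.com/recipes"))                   # False
```

Because the match is on the raw URL string with no surrounding context, an encyclopedia article is indistinguishable from the material the filter is meant to catch — which is exactly the behaviour Shen and Song describe.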

Peter Guest, Sen Nguyen, and Randy Mulyanto, Rest of World:

Over the past few years, Grab and other ride-hailing players like Gojek Vietnam, owned by the Indonesian unicorn Gojek, have burned millions of dollars in investor cash trying to attract price-sensitive Vietnamese customers and capture as much market share as possible. The strategy included offering steep discounts to consumers and generous incentives to drivers, who work as independent contractors with few labor protections.

The problem is that now their investors are eager to turn a profit, and the well of cash is starting to dry up. In Vietnam and other Southeast Asian countries, like Indonesia, Grab and Gojek have begun raising prices and squeezing gig workers, especially after Covid-19 lockdowns drastically cut the number of people using ride-hailing apps.

But the two companies are also part of a pattern that started emerging around the world long before the pandemic. Enormous venture firms, most notably the Japanese mega-fund SoftBank, have poured billions into startups with the express purpose of winner-takes-all domination. Often, when it comes time to actually make money, it’s workers at the bottom who bear the burden.

This piece largely concerns speculation about a merger of Grab and Gojek, which would give the combined company a near-monopoly on ride hailing throughout Southeast Asia. Both companies also operate ancillary services that take advantage of their ubiquity; this would be worrying for workers in many industries in the region.

But this is also, in the periphery, about the distorting effect of massive infusions of venture capital. I’ve already written about how companies like Uber have bled billions of dollars on the premise that they can make self-driving cars, something which they offloaded in December. But it is equally worrying that the funding of some venture-backed companies is dependent on aspirations of becoming a monopoly in key markets — something that is enabled by decades of weak antitrust regulation worldwide.

This is my “something lighter” that I promised in the last post: an article about how all software will die one day.

Jason Snell:

Choosing change is tough, but sometimes you don’t choose, and there’s no obvious benefit at the end of the process. If one of your key tools is discontinued, or becomes incompatible with the next version of your operating system or the new hardware you just bought, you’re going to be forced to move eventually. Call Recorder users can keep using it for now, but as soon as they buy a Mac that’s running Apple silicon, the jig is up. Change is coming, inevitably.

I wish software were more durable over the long term, but that comes with its own baggage. That has long been the wrench in the spokes of Microsoft’s bicycle: some companies depend on software written when I was learning to walk.

On an individual level, we have to be okay with adaptation, but it is hard. Luckily, the whole world is constantly changing as well. My favourite instant messaging client no longer works — largely because the IM protocols I once relied upon no longer exist — but that is okay because my friends have moved on to phone-based messaging. It comes with unique benefits; it comes with new drawbacks. And we keep chatting along because it is fine.

One more policy-related post for a hat trick today, and then I promise I will link to something lighter.

There are several recommendations at the beginning of that NYU report I previously linked to. One of them is probably familiar to anyone who has read internet policy discussions for the past, say, three or four years:

Work with Congress to update Section 230. The controversial law should be amended so that its liability shield is conditional, based on social media companies’ acceptance of a range of new responsibilities related to policing content. One of the new platform obligations could be ensuring that algorithms involved in content ranking and recommendation not favor sensationalistic or unreliable material in pursuit of user engagement.

Well, it turns out that Senator Mark Warner introduced a new bill today, cosponsored by Sens. Mazie Hirono and Amy Klobuchar, called the SAFE TECH Act (PDF). Here’s a pause where you can try to work out what that acronym stands for.


It’s the Safeguarding Against Fraud, Exploitation, Threats, Extremism, and Consumer Harms Act. Nice try, Mark, but it’s no “USA PATRIOT Act”.

The impression given by the press release is that this legislation is merely a minor revision of Section 230 of the Communications Decency Act:

These changes to Section 230 do not guarantee that platforms will be held liable in all, or even most, cases. Proposed changes do not subject platforms to strict liability; and the current legal standards for plaintiffs still present steep obstacles. Rather, these reforms ensure that victims have an opportunity to raise claims without Section 230 serving as a categorical bar to their efforts to seek legal redress for harms they suffer – even when directly enabled by a platform’s actions or design.

But according to the legal experts that Dell Cameron of Gizmodo spoke with, these changes would be catastrophic:

“The ‘payment’ language appears to apply to more than just advertisements. There are a number of services, such as website hosting, for which service providers accept payment to make speech available,” said Jeff Kosseff, a cybersecurity law professor and author of the Section 230 book, The Twenty-Six Words That Created the Internet.

“Creating liability for all commercial relationships would cause web hosts, cloud storage providers and even paid email services to purge their networks of any controversial speech,” [Sen. Ron] Wyden added. “This bill would have the same effect as a full repeal of 230, but cause vastly more uncertainty and confusion, thanks to the tangle of new exceptions.”

Cathy Gellis, Techdirt:

But it’s the first part that nukes the entire Internet from orbit because it prohibits any site from in any way acquiring any money in any way to subsidize their existence as a platform others can use. That’s what “accepted payment to make the speech available” means. It doesn’t care if the platform actually earns a profit, or runs at a loss. It doesn’t care if it’s even a commercial venture out to make money in the first place. It doesn’t care how big or small it is. It doesn’t even care how the site acquired money so that it could exist to enable others’ expression. Wikipedia, for instance, is subsidized by donors, who provide “payment” so that Wikipedia can exist to make its users’ speech available. But if this bill should pass, then no more Section 230 protection for that site, or any other site that didn’t have an infinite pot of money at the outset to fund it forever. Any site that wants to be economically sustainable, or even simply recoup even some of the costs of operation – let alone actually profit – would have to do so without the benefit of Section 230 if this bill were to pass.

The thing about Section 230 is that it is very easy to argue that it should be modified, but nearly impossible to find a way to change it so as to avoid toppling the Jenga tower that is the web. This bill is a well-intentioned but poor attempt that, if passed as-is, would backfire.

If you are wondering why Republican state legislators are aggressively pushing a prewritten bill limiting the ability for social media companies to moderate their own platforms, look no further than the myths debunked by a new report from NYU’s Stern Center for Business and Human Rights. From the report (PDF):

The trouble with this belief — that tech companies are censoring political viewpoints they find objectionable — is that there is no reliable evidence to support it. There are no credible studies showing that Twitter removes tweets for ideological reasons or that Google manipulates search results to impede conservative candidates (see sidebar on Google on page 12).


The false bias narrative is an example of political disinformation, meaning an untrue assertion that is spread to deceive. In this instance, the deception whips up part of the conservative base, much of which already bitterly distrusts the mainstream media. To call the bias claim disinformation does not, of course, rule out that millions of everyday people sincerely believe it.

I do not think it is surprising that trust in media among Americans has dropped more-or-less steadily ever since Fox News launched in 1996. It is the modern wedge — the network that established itself as presenting the “other side” of the “liberal mainstream” of CBS, ABC, and NBC. None of that is true, but it fuelled the modern rise of the myth that there are exactly two ideological sides to everything. Other cable news networks have copied this formula, none quite as successfully.

Social media is now being treated to the same argument that the mainstream — by disallowing destructive conspiracy theories, attempted insurrectionists, and attacks on the basis of ethnicity, gender, and sexual orientation — is ideologically biased. It is not. These are merely the least these companies can do to restrict intolerant individuals’ ability to bully other users. Many people will never notice those rules because they are generally decent. For the people who cannot have a coherent discussion without resorting to personal attacks, there are miserable “alt-tech” platforms that are happy to brew a toxic environment.

Mike Masnick, Techdirt:

A bunch of Republican state legislators across the country are apparently unconcerned with either the 1st Amendment (or reality) have decided that they need to stop social media companies from engaging in any sort of content moderation. […]

Of course, it’s easy to just point at Florida and say “there goes Florida again…” but it’s actually Republican legislators in a whole bunch of states. And this wasn’t even the first such bill in Florida. A week or so earlier, Republican state Senator Joe Gruters introduced a bill called the “Stop Social Media Censorship Act” which bars any moderation of “religious or political speech.”

Gruters may have introduced the bill, but it doesn’t look like he wrote it. Because in Kentucky, Republican Senators Robby Mills and Phillip Wheeler introduced a nearly identical bill. Oh, and over in Oklahoma, Republican Senator Rob Standridge also introduced an identical bill. In Arizona, it’s Senator Sonny Borrelli who has introduced very similar legislation, though his looks a little different, and (insanely) would try to put into law that a social media website is “deemed to be a publisher” and “deemed not to be a platform” which is, you know, not a thing that actually matters. In North Dakota, there’s Republican State Rep. Tom Kading who’s similar bill also includes the nonsense publisher/platform distinction.

The same bill also recently surfaced in New Hampshire and Mississippi, but the earliest copy I found was pushed in 2018 in Arkansas (PDF). Do not go thinking Arkansas Representative Johnny Rye actually wrote the bill he was proposing, though.

John Moritz, of the Arkansas Democrat-Gazette in January 2019:

In late July, emails began appearing in some Arkansas lawmakers’ in-boxes from an out-of-state man who identified himself as Chris Severe.

He offered a half-dozen pieces of legislation, one of them on human trafficking. He made a simple request.

“We will let you pick who gets to sponsor what. Once you decide, we will reach out [to] them and brief them,” read one message, sent using a pseudonym.


The two pieces of legislation included one bill aimed at stopping “social media censorship” and another that would mandate that devices capable of accessing the Internet have software to block material defined as obscene under Arkansas law. Under the proposal, those devices could be unblocked if users paid a $20 fee.

Moritz’s reporting on the circumstances of this model legislation is truly excellent. Severe is actually Chris Sevier, whose surname is often also spelled Seviere, and he has a remarkable record of putting unconstitutional bills before lawmakers and encouraging their filing. In 2013 in Florida, Sevier sued Apple because the company’s devices can provide access to pornography. In 2017, Sevier was behind legislation pushed in many states that would require porn filtering on all devices. Last year in Mississippi, Sen. Chris McDaniel pushed a Sevier-created bill that would require media coverage of the outcomes of all cases against public figures.

Lest you think he’s just some laughable caricature of a man with a porn obsession, I should also clarify that he seems to be a homophobic jackass who once equated same-sex marriage with his relationship with his computer.

Moritz’s 2019 reporting links Sevier to a group called Special Forces of Liberty, which wrote this model legislation for all fifty states. The bill proposed in New Hampshire, for example, is nearly identical to Sevier’s pitch. These bills are all from the same template, pushed by an inactive attorney and fringe actor who is obsessed with creating bills that would fly in the face of the First Amendment. In a video about his social media censorship proposal, he dismisses its blatant unconstitutionality as a “scare tactic”. That is a pretty weak argument for waving away its fundamental illegality.

Anyway, that is the background of this legislation as best as I can dig it up. I am sure Masnick will have more on this if these proposals get anywhere near becoming law.

FlickType creator Kosta Eleftheriou:

The App Store has a big problem

You: an honest developer, working hard to improve your IAP conversions.

Your competitor: a $2M/year scam running rampant.

Natasha Lomas, TechCrunch:

The scam goes like this: A bunch of Watch keyboard apps are published that purport to have the same slick features as FlickType but instead lock users into paying eye-wateringly high subscription fees for what is, at best, a pale imitation.

You might expect quality to float to the top of the App Store but the trick is sustained by the clones being accompanied by scores of fake reviews/ratings which crowd out any genuine crowdsourced assessment of what’s being sold.

There is a threefold compounding problem here:

  1. There are many apps in the App Store that are effectively counterfeits.

  2. They plant fake reviews to establish legitimacy.

  3. They abuse expensive subscriptions.

To its credit, Apple was quick to pull the apps when Eleftheriou’s Twitter thread became popular. But this is not something that developers should have to police themselves, and it is not a new problem. Since its inception, Apple has promoted the App Store as a trustworthy and safe marketplace, and has referenced that in defending criticism (PDF) of its commission. The least App Review could do is screen for dime store knockoffs and scams like these.

The Office of the Privacy Commissioner of Canada has been investigating Clearview’s behaviour since Kashmir Hill of the New York Times broke the story a little more than a year ago. In its overview, the Office said:

Clearview did not attempt to seek consent from the individuals whose information it collected. Clearview asserted that the information was “publicly available”, and thus exempt from consent requirements. Information collected from public websites, such as social media or professional profiles, and then used for an unrelated purpose, does not fall under the “publicly available” exception of PIPEDA, PIPA AB or PIPA BC. Nor is this information “public by law”, which would exempt it from Quebec’s Private Sector Law, and no exception of this nature exists for other biometric data under LCCJTI. Therefore, we found that Clearview was not exempt from the requirement to obtain consent.

Furthermore, the Offices determined that Clearview collected, used and disclosed the personal information of individuals in Canada for inappropriate purposes, which cannot be rendered appropriate via consent. We found that the mass collection of images and creation of biometric facial recognition arrays by Clearview, for its stated purpose of providing a service to law enforcement personnel, and use by others via trial accounts, represents the mass identification and surveillance of individuals by a private entity in the course of commercial activity. We found Clearview’s purposes to be inappropriate where they: (i) are unrelated to the purposes for which those images were originally posted; (ii) will often be to the detriment of the individual whose images are captured; and (iii) create the risk of significant harm to those individuals, the vast majority of whom have never been and will never be implicated in a crime. Furthermore, it collected images in an unreasonable manner, via indiscriminate scraping of publicly accessible websites.

The Office said that Clearview should entirely exit the Canadian market and remove data it collected about Canadians. But, as Kashmir Hill says, it is not a binding decision, and it is much easier said than done:

The commissioners, who noted that they don’t have the power to fine companies or make orders, sent a “letter of intention” to Clearview AI telling it to cease offering its facial recognition services in Canada, cease the scraping of Canadians’ faces, and to delete images already collected.

That is a difficult order: It’s not possible to tell someone’s nationality or where they live from their face alone.

The weak excuse for a solution that Clearview has come up with is to tell Canadians to individually submit a request to be removed from its products. To be removed, you must give Clearview your email address and a photo of your face. Clearview expects that it is allowed to process facial recognition for every single person for whom images are available unless they manually opt out. It insists that it does not need consent because the images it collects are public. But, as the Office correctly pointed out, the transformative use of these images requires explicit consent:

Beyond Clearview’s collection of images, we also note that its creation of biometric information in the form of vectors constituted a distinct and additional collection and use of personal information, as previously found by the OPC, OIPC AB and OIPC BC in the matter of Cadillac Fairview.


In our view, biometric information is sensitive in almost all circumstances. It is intrinsically, and in most instances permanently, linked to the individual. It is distinctive, unlikely to vary over time, difficult to change and largely unique to the individual. That being said, within the category of biometric information, there are degrees of sensitivity. It is our view that facial biometric information is particularly sensitive. Possession of a facial recognition template can allow for identification of an individual through comparison against a vast array of images readily available on the Internet, as demonstrated in the matter at hand, or via surreptitious surveillance.

The Office also found that scraping online profiles does not match the legal definition of “publicly accessible”.
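The Office’s point about vectors is worth unpacking: once a face image is reduced to a numeric embedding, identifying a person becomes a simple nearest-neighbour comparison against an index of scraped profiles. A minimal sketch, with made-up three-dimensional vectors and hypothetical profile names (real systems use embeddings with hundreds of dimensions and millions of entries):

```python
import math

# Hedged illustration of why a biometric "vector" enables mass
# identification: matching is just a similarity search over an index
# built from scraped photos. All vectors and names here are invented.

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Index built by scraping public profiles (illustrative data).
scraped_index = {
    "profile_A": [0.9, 0.1, 0.3],
    "profile_B": [0.2, 0.8, 0.5],
}

def identify(probe, index, threshold=0.95):
    """Return the best-matching profile, or None below the threshold."""
    best = max(index, key=lambda name: cosine_similarity(probe, index[name]))
    return best if cosine_similarity(probe, index[best]) >= threshold else None

print(identify([0.88, 0.12, 0.31], scraped_index))  # → profile_A
print(identify([0.0, 1.0, 0.0], scraped_index))     # → None
```

This is why the regulators treat the vector itself as a distinct collection of personal information: the embedding, not the original photo, is what makes surreptitious identification cheap and repeatable.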

This is such a grotesque violation of privacy that there is no question in my mind that Clearview and companies like it cannot continue to operate. United States law has an unsurprisingly permissive attitude towards this sort of thing, but its failure to legislate on a national level should not be exported to the rest of the world.

Unfortunately, this requires global participation. Every country must have better regulation of this industry because, as Hill says, there is no way to determine nationality from a photo. If Clearview is outlawed in the U.S., what is there to stop it from registering in another country with similarly weak regulation?

Clearview is almost certainly not the only company scraping the web with the intent of eradicating privacy as we know it, too. Decades of insufficient regulation have brought us to this time. We cannot give up on the basic right to privacy. But I fear that it has been sacrificed to a privatized version of the police state.

If a government directly created something like the Clearview system, it would be seen as a human rights violation. How is there any moral difference when it is instead created by private industry?

Techdirt’s Mike Masnick on Twitter, in response to examples of harassment enabled by technology:

There are tons of these stories. The question is how do you deal with them in ways that don’t throw out all of the good aspects of the internet. How do you distinguish someone running a defamation campaign with someone with a legitimate grievance?


I guess my final point: humanity & society are messy. Sometimes the internet reflects that mess. We shouldn’t immediately jump the conclusion that because the internet reflects that mess that it’s the cause of that mess or that it can solve that mess.

Dare Obasanjo quoted from the above Twitter thread and added:

I like to frame up this question; who do you blame for drunk driving? The drivers, automobile companies, beer companies or bars?

A lot of the discussion about bad behavior online is like only blaming car companies for not requiring a breathalyzer as part of the ignition process.

I think this is a decent analogy, so let’s take it just a step further to explain why I think its implied conclusion misses the mark.

The reason drunk driving is a problem is not the intoxication itself, in a vacuum, but the effects it is likely to have on the driver, passengers, and others. To be perfectly clear, I am not condoning drunk driving in any context. But we are not worried about the action of literally drinking too much and then driving a car as much as we are about its likely effects. So we have widespread campaigns to dissuade people from driving under the influence. And that’s very good.

But we do not stop there, because some people will shamelessly ignore their personal responsibility and drive when they should not: when they are drunk, or high, or tired, or highly irritable, or using their phone. Some of the drivers on the road at any given time should not be behind the wheel, but that is where they are. Please stop using your phone while behind the wheel.

Cars and roads have also changed as regulators recognized that they, too, play a role in a driver’s safety. Seatbelts, airbags, always-on lights, and crumple zones are well-known improvements. Technology has helped usher in ABS, automatic braking, and traction control. There have been subtler changes to car cabins as designs and materials are now chosen to reduce the likelihood of injury when they impact passengers. Roads are now designed to more effectively drain water, improve visibility, and reduce sudden drop-offs. These are not directly a response to drinking and driving, but they help lessen its worst effects.

It was not so long ago that vehicle collisions were treated as a matter of personal responsibility or bad luck; it was widely accepted that you should expect to be injured or die when a bunch of steel impacts you, no matter whether you are a driver, passenger, pedestrian, or in another vehicle. But it is now understood that lots of people will crash lots of cars into lots of different things for lots of reasons, and there are many ways to reduce the likelihood of serious injury or death. Personal responsibility undoubtedly remains an important factor, but there are lots of things that can be adjusted to lessen the impact of bad decisions.

That brings me back to technology and platforms. The technology landscape of the early 2000s was obsessed with growth; in many ways, it still is. Venture capital firms were happy to lose huge sums of money over many years while platforms grew, with the hope that one day they could slap some ads on everything and call it a business. Moderation was treated more like an impediment to scale, and less like a safety obligation.

A decade and a half later, in the immortal words of @screaminbutcalm:

Me sowing: Haha fuck yeah!!! Yes!!

Me reaping: Well this fucking sucks. What the fuck.

Platform moderation is hard — unquestionably. It is more difficult for images and video than it is for plain text, and it only becomes harder as a platform becomes more popular. But we can only definitively say that is the case for platforms as they are designed and built today, with little advance consideration for reducing abuse.

If we used that “Men in Black” neuralyzer to forget how tech platforms work today and had to rebuild them with the dangers of lax moderation in mind, they would probably look and feel somewhat different. But they could be designed and built with more consideration for the real-world effects of abusive users.

If you want me to bring it back to a car analogy, consider the Lamborghini Countach. The original Bertone-designed shape was sadly made lumpier with every new version. But nothing uglified the car more than the plastic bumpers fitted to U.S. models to comply with new safety regulations. The problem was not with the regulations. It was that the car was not designed to accommodate them, so this attempt at compliance looked dreadful. Lamborghini still designs jaw-dropping cars. But now they have been designed to incorporate modern safety regulations so, in addition to looking amazing, they are safer for their occupants and the victims of collisions.

Online platforms face a similar reckoning today. It is not directly their fault that there are horrible people who do horrible things, but they ought to recognize that they can play a role in reducing the predictably horrible effects. Their current efforts resemble the Countach’s plastic bumper extension because the likelihood of abuse continues to be underestimated. They can try to do better, and I welcome these attempts. But I am skeptical that Facebook, Reddit, Twitter, or YouTube are willing to make radical changes. They can’t, really; they are big companies now.

Whatever is being designed and built now that will one day dethrone today’s giants ought to treat safety as a foundational tenet. That should be encouraged by its funding and business model, too, so it does not become something that is auctioned off at a later date.

John Herrman, New York Times:

So what’s the joke, exactly? For The_Donald, one “joke” was that a bunch of self-described losers could help Donald Trump become “God Emperor.” (They were happy with “President.”) For WallStreetBets, the “joke” was that a group of self-described losers (their preferred real descriptors are unprintable) could rig the financial system in their own favor. The punchline was GameStop, and tens of billions of dollars in actual market activity.

The bigger joke, shared by these communities and plenty of others, is, well, everything. Everything is a farce and a fraud, and the surest, or at least most available, way to get ahead is to treat it as such. This is a profoundly nihilistic worldview, and one that in plenty of other contexts might meet hard limits, or come with terrible costs.

I had a bunch of tabs open to read tonight; this story and Daring Fireball were among those. Herrman’s piece runs along similar lines to Gruber’s piece from earlier today about the Republican Party’s humouring, in November, of the former president’s ludicrous lies about voter fraud, and about the poor moderation of Facebook Groups. The idea that online discussions allowed to fester with increasingly extremist views could have real-world effects seems to be inconceivable every single time it happens.

That is not to say that the internet causes this behaviour, nor that anonymity does. There is a broader failing of social safety nets and good governance that can take sufficient blame for caustic nihilism. But this multiyear experiment in scant moderation of the discussion of hundreds of millions — or billions — of people is unworkable.