
A few weeks ago, my wife generously gifted me the Lego Concorde set. I got around to building it this weekend. It was the first time I had experienced a full Lego set in about twenty years, but it felt exactly as magical as I had remembered. As a kid, I would have felt all the joy and wonder of something so intricate — and so large; the Concorde model is over a metre long — replicating what has to be the most captivating airplane ever built.

Those emotions hit me pretty hard as an adult building it, too, but I could not help thinking about how differently real-life projects like these used to be conceived compared to how they are now. In the Verge’s coverage of this Lego set, Sean Hollister notes a resurgent interest in supersonic planes, with orders from several airlines for the Boom Overture. As it happens, I am currently reading Ashlee Vance’s “When the Heavens Went on Sale”, about the rapid privatization of space.

I am not an expert in any of this. This is not a history lesson, though I did try to avoid any factual errors. Call this little more than a late Sunday night ramble about how I am feeling right now. Pure vibes ahead.

Both the Concorde and the majority of space exploration are products of extraordinary engineering efforts backed by entire countries, endowing them with a larger purpose than simple business economics. To be sure, it is remarkable to read about how private spacecraft from Rocket Lab deliver into orbit tiny satellites made by Planet Labs, which produce global imagery that journalists have used, for example, to uncover oil tankers spoofing their location. That is incredible.

I also wish there were room for these kinds of big, national-level projects of collective pride. Landing on the Moon was a big moment for the entire world, but a particularly special one for people in the United States; the Space Shuttle continued that legacy of a national space icon. Concorde was a partnership between British and French interests, and it produced something (almost) entirely unique. I do not mean this in a nationalist sense, nor do I want to sound like a full-on communist — though it is strange to me how those two vastly different labels could be seen in the same set of things. With magnificent projects like these, it seems as though business motives were disregarded in favour of doing something really, really great. Something special.

It is hard to look at the 1950s or 1960s and want to bring almost anything to the present day. The world now is overall a much better place than it was then. Perhaps my rumination is misplaced. But it is bizarre to feel like efforts like these are not possible today because they lack a purpose we care about now. The closest approximation we have seems to be NASA’s attempt to send people to Mars in the 2030s. NASA is planning to build toward that by sending people to the Moon in around 2025, in a craft outsourced to SpaceX. This mission, like Boom’s existence, feels like a retro chic callback to a glorious past mixed with the financialized world of today. Perhaps I do not have an accurate point of reference, but it would be nice, I think, if projects like these were a more collective effort that entire nations or the whole world could get behind, as projects for the good of humanity.

In case you have not already heard, let me break the news: this year’s iPhones have a USB-C port where the Lightning port used to be. Aside from the physical attributes of each, the main difference between these ports is that Lightning is a proprietary port specific to Apple devices, while USB-C is an open specification which anybody can use. Or, rather, it is an open set of specifications, and everybody who wants to sell a product in Europe must have a compliant charging port beginning next year. This situation is mostly good today, if a little confusing, and I think it will still be pretty good years in the future.

The main problems with USB-C have been well documented, here and elsewhere. In short, while USB-C describes the shape of the plug, the indistinguishable cables and ports can each support a vast range of capabilities, from audio adapter mode, through power delivery and USB 2.0 speeds, all the way up to Thunderbolt 4. The plugs look the same, and the cords do not differ in any visible way other than thickness. This confusion is not exclusive to USB-C: the USB 3 standard was introduced with USB-A ports that looked the same as their USB 2 counterparts, especially in Apple’s implementation; and Lightning has supported USB 3 speeds on some iPad models, but only with some accessories that look very similar to their USB 2-only counterparts. But USB-C takes this mix-and-match approach to capabilities, ports, and cables to an extreme.
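To put rough numbers on that spread, here is a quick sketch of the published maximum signalling rates for data link modes which can share the same USB-C plug. The figures come from the USB and Thunderbolt specifications, not from anything Apple-specific.

```python
# Nominal maximum signalling rates for data link modes that can share the
# same USB-C plug. Figures are from the published specs; any given cable
# or port supports only a subset of these.
USB_C_LINK_MODES_MBPS = {
    "USB 2.0": 480,
    "USB 3.2 Gen 1 (formerly USB 3.0)": 5_000,
    "USB 3.2 Gen 2 (formerly USB 3.1)": 10_000,
    "USB 3.2 Gen 2x2": 20_000,
    "USB4 / Thunderbolt 3 and 4": 40_000,
}

for mode, mbps in USB_C_LINK_MODES_MBPS.items():
    # Express each mode as a multiple of USB 2.0 to show the spread.
    print(f"{mode}: {mbps:,} Mbps ({mbps // 480}x USB 2.0)")
```

That is an eighty-fold range between the slowest and fastest modes, all behind one identical plug.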

The confusion is the bad news. The good news is that I have read seemingly everywhere that most people do not sync their iPhones with a cable any longer. Apparently, there are “wireless” technologies which can do this stuff over the air. The living future is wild. If the only reason you connect your iPhone to a cable is to charge it, I have good news: basically none of this matters, and just about any USB-C cable should work fine. I swapped the Lightning cable on my nightstand for a spare USB-A to USB-C cable that came with my keyboard — standards are great. If you, like me, still have an ample supply of USB-A wall adapters and have purchased a non-Apple device in the past five years, you probably have one lying around as well. That is as relevant as any of this gets for most people.

But transfer speed matters to me and, well, this is my website. I still sync my iPhone over a wire: I do not trust Apple’s cloud music matching shenanigans and, while I have Wi-Fi syncing enabled, it is dramatically slower than even the miserable cable Apple has been shipping for a long time. I used iPerf to test the Wi-Fi transfer speed between my iMac and my MacBook Pro, and it reported an average of about 100 Mbps.1 That is a steep downgrade from even USB 2 speeds, never mind USB 3. This is important enough to me that it is one of the reasons why I bought the 15 Pro over the standard iPhone 15.

I currently sync around 107 GB of music from my Mac to my iPhone. While the math suggests that should be accomplished in around half an hour at USB 2 speeds, it is not so simple. From previous experience, I know a fresh sync usually takes at least 45 minutes, if not closer to a full hour. Unfortunately, while the iPhone 15 Pro is capable of transfer speeds twenty times faster than its predecessor, the cable in the box is not. However, I have a high-speed cable at my desk already — again, standards are great — so I could take advantage of this. The same fresh sync with my new iPhone took just twelve minutes.

Yeah, this rules.
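For context, the back-of-the-envelope math is easy to sketch. The 0.7 efficiency factor below is my own rough assumption about protocol overhead, and even it flatters the sync process:

```python
# Rough transfer-time estimates for a 107 GB music library at different
# nominal link speeds. The efficiency factor is an assumption standing in
# for protocol overhead; real iPhone syncs are slower still.
LIBRARY_GB = 107
EFFICIENCY = 0.7  # assumed fraction of nominal throughput actually achieved

links_mbps = {
    "Wi-Fi sync, as measured with iPerf": 100,
    "USB 2.0, via Lightning or the in-box cable": 480,
    "USB 3.2 Gen 2, iPhone 15 Pro with a fast cable": 10_000,
}

for name, mbps in links_mbps.items():
    megabits = LIBRARY_GB * 8_000  # 1 GB is 8,000 megabits
    minutes = megabits / (mbps * EFFICIENCY) / 60
    print(f"{name}: about {minutes:.0f} minutes")
```

At face value, a high-speed cable should finish in a couple of minutes; the twelve minutes I measured is that sync overhead at work.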

Okay, so that is not even close to a 20× improvement, and those of you who have Android phones are probably laughing at how long it took for the iPhone to get here. To the latter crowd: I hear you. To the former: I do not think music syncing has ever run at full USB speed, going back to when it was baked into iTunes. I have no idea why.

Nevertheless, that is a huge improvement, which raises the obvious question: what took so long? If Lightning was capable of USB 3 transfer speeds, why was it limited to a handful of iPad Pro accessories? Why did Apple replace Lightning with USB-C in the iPad in 2018 but retain Lightning in the iPhone, AirPods, and its “Magic” line of Mac accessories? I do not think it was driven by spite or ego, but Apple has not provided its own rationale. The closest it ever came to explaining Lightning’s longstanding presence was when it introduced the connector in 2012, when Phil Schiller said it was a “modern connector for the next decade”. That is less a defence than a commitment and, to be fair to Apple, it exceeded those expectations by one year.

So Apple established an ecosystem with its proprietary connector, and it said it would keep it around for ten years. But that does not preclude improving Lightning, as it did in the iPad Pro. There might be some great reasons why Apple never evolved the iPhone’s connector, but we are only able to speculate. That is also true of keeping a mix of USB-C and Lightning products in its lineup. It could have felt stung by the negative press coverage after the Lightning transition, or maybe it wanted to preserve its ecosystem of first- and third-party products for the sake of stability and, perhaps, control.2 But we do not know, because it has chosen to put its weight behind objecting to the E.U.’s mandated USB-C standard instead of explaining why Lightning is just so darn great.

So let us talk about that.

Apple’s Greg Joswiak, in an interview with Joanna Stern of the Wall Street Journal last year, said he prefers to understand what governments want to accomplish, but to leave the details to businesses like Apple. I kind of get where he is coming from on this. Joz also cites the E.U.’s previous dedication to standardizing around Micro USB, which likely would have sucked today, and it is not difficult to see a similar problem for the future. If Apple or some other business develops a wired connection which is better for the iPhone than USB-C but incompatible with its spec and plug shape, it will be hamstrung, right?

Not so: it already produces a dizzying array of iPhone SKUs, including region-specific variations: the United States has a version without a physical SIM card, while the version sold in China and Hong Kong supports dual physical SIM cards. In addition, there are two other versions depending on local cellular frequency requirements. When multiplied by colours and capacities, I count 232 individual iPhone 15 and 15 Pro SKUs. If Apple needs to make a special E.U. market version, it already has that capability — and if the rest of the world gets a much better iPhone, you can bet Apple will emphasize that in its marketing. E.U. users will complain to their representatives. It will work out just fine.
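That 232 figure is just multiplication. Here is the arithmetic as a sketch; the colour, capacity, and radio-variant breakdown is my own reconstruction from Apple’s configurator, not an official list.

```python
# Reconstructing the iPhone 15-generation SKU count. The per-model colour
# and capacity counts, and the four radio variants, are my own tally, not
# an official breakdown from Apple.
RADIO_VARIANTS = 4  # eSIM-only U.S., dual-SIM China/Hong Kong, two frequency builds

models = {
    # model: (colours, storage capacities)
    "iPhone 15": (5, 3),
    "iPhone 15 Plus": (5, 3),
    "iPhone 15 Pro": (4, 4),
    "iPhone 15 Pro Max": (4, 3),
}

per_variant = sum(colours * capacities for colours, capacities in models.values())
print(per_variant * RADIO_VARIANTS)  # 58 x 4 = 232
```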

This is all speculative, however, and these arguments would be far easier to believe had Apple updated its Lightning implementation on the iPhone since its launch. If it were constantly innovating in port design or speed, sure, I would have more sympathy. But it has not and, so, I do not. Besides, the rumour mill is certain the next major innovation in iPhone ports is no ports at all. Selfishly, I hope that is not the case; see my own experiences above. We can also only speculate about whether the iPhone was due to get USB-C this year regardless of the legislation pushed by the E.U., or whether Apple would have continued to ride the Lightning connector.

What I do know for sure is how much better my iPhone user experience has been out of the gate with this high-speed USB port. Setup was faster, accessories are more universal, it is a more capable product, and I faced no transition problems. Your experience may vary. Mine, though, is much better than it used to be, and I struggle to think of non-cynical reasons why it has taken this long to get here.


  1. I am not sure why this is the case, as the router and devices on the same network each report internet speeds of upwards of 200 Mbps, and the router shows it is connected to each of my Macs at somewhere between 500 Mbps and 1 Gbps. Alas, I would rather have a nail through my foot than spend a weekend diagnosing network problems. What matters is how fast Wi-Fi syncing is in the real world — in my real world — not in theory, and wired connections blow it out of the water every time. ↥︎

  2. The press coverage after the USB-C transition has been nothing like it was with Lightning eleven years ago. The most negative article I found was from the Telegraph and, frankly, that barely counts. CNBC went with a headline focusing on the cost of an adapter, Dan Moren, for Macworld, pointed out all the places Lightning still exists in Apple’s lineup, and USA Today published a weird article that places the iPhone’s cost on a timeline of its connector options. Maybe some publication crappier than the Telegraph posted something even click-baitier, but that is basically going to be disreputable by default. ↥︎

I thought this analysis by Wally Nowinski, of PerfectRec, was intriguing, but perhaps not completely convincing. Nowinski says the most recent batch of iPhones is, with the exception of the Pro Max, the “most affordable” since the product’s launch, when adjusting for inflation, and has the figures to prove it. The data is sound. It is the conclusion drawn from it that I did not find as compelling.

For a start, it is purportedly a price comparison of unlocked iPhone models in the United States, which immediately makes it less relevant to me. But it is worth noting that Apple did not sell unlocked iPhones in the U.S. until the iPhone 4 in June 2011 — just before the launch of the iPhone 4S — and that the original iPhone was $499 but only worked with AT&T when it came out.

A brief aside, as we are discussing the historical value of unlocked iPhones: while the original iPhone was released without a subsidized contract, it was so tied to AT&T that the first strategies for unlocking one for use on another carrier involved soldering the board. Unlocked iPhones were so coveted at the time that George Hotz swapped the second one he made for a Nissan 350Z and three (locked) iPhones.

Anyway.

This is going to seem circular, but Nowinski’s analysis holds for any fixed price point because that is how inflation works, and central banks have encouraged modest inflation. Everything else is getting more expensive and a dollar has less spending power, but a thousand-dollar iPhone is still a thousand dollars even as more money goes around. That has been true of any thousand-dollar product in the U.S. since the iPhone’s launch, save for a few months in 2020 and the fallout from the 2008 housing crash.

What is more curious to me is that this is the only metric by which the flagship iPhone has become less expensive since its 2007 release. According to this analysis, the base price of a flagship model iPhone has increased from an inflation-adjusted $732 for the original model to $999 for the iPhone 15 Pro. And the Pro models are, I think, the true successors of the iPhone lineage — they get the new SoCs, the new cameras, and new industrial designs. The non-Pro models now get last year’s Pro camera systems, last year’s SoC, and — in the case of the iPhone 15 — last year’s special industrial design touch by way of the Dynamic Island.1 They are a newer expression of the n–1 strategy.
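The adjustment behind figures like that $732 is a one-line ratio of consumer price indexes. A minimal sketch, using approximate U.S. CPI-U values rather than whatever series Nowinski used:

```python
# Inflation-adjusting the original iPhone's launch price into 2023 dollars
# by scaling with the consumer price index. CPI values are approximate
# U.S. CPI-U figures, not taken from Nowinski's dataset.
CPI_JUNE_2007 = 208.4
CPI_MID_2023 = 305.1

def adjust(price: float, cpi_then: float, cpi_now: float) -> float:
    """Scale a historical price by the change in the price index."""
    return price * (cpi_now / cpi_then)

print(f"${adjust(499, CPI_JUNE_2007, CPI_MID_2023):.0f}")  # about $730
```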

Across the history of each of the product lines, in fact, there are only two instances in which Nowinski’s pricing table shows price decreases:

  • The iPhone 5C is grouped with the iPhone SE line and, thus, appears to show a $150 decrease from the 5C to the SE.

    I do not think this is the correct grouping for these models. The simultaneous releases of the iPhone 5C and 5S were more akin to today’s regular and Pro split. The iPhone 5C was an iPhone 5 with a plastic case instead of aluminum, and Apple only shipped the iPhone 5 for one year. The 5C was not a budget line like the iPhone SE, but it was not the flagship that year, either.

  • The iPhone XR was $50 more expensive than the iPhone 8 of the previous year, and that was matched by a $50 decrease with the iPhone 11 the following year. But, again, not the flagship.

Though it is possible to find isolated cases of Apple dropping the nominal price of its products — like the second-generation iPod Mini, the iMac G4, and the MacBook Air — it has been fairly rare since the iPhone’s introduction. Instead, it tends to hold price points steady for as long as it can. For example, it has tried to maintain a consumer laptop at the sub-thousand-dollar price point since at least 2002. And, sure enough, this analysis basically holds true for that product, too: today’s 13-inch base model MacBook Air with the M2 chip may cost a hundred dollars more than an iBook G3 did in 2002, but the iBook would today cost an inflation-adjusted $1,700.

What I find more interesting about Nowinski’s analysis is the cost of the Plus-model iPhones. They had an inflation-adjusted cost that was just about a thousand dollars — the same price point that today’s “Pro” iPhones start at. Those were the models which introduced a multi-camera setup to the iPhone line and added new photography capabilities. It proved there are plenty of people who will put up with an enormous iPhone if it has the best camera — which sounds familiar. To be sure, many big iPhones are sold to people who want big iPhones. But I would bet a large number of iPhone 7 Pluses, iPhone 12 Pro Maxes, and — now — iPhone 15 Pro Maxes are sold to people who want the best camera in an iPhone, hands and pockets be damned.


  1. A ProMotion display with a higher refresh rate, however, remains a Pro-only thing. ↥︎

I want to try something a little bit different: a review of a product at what is likely the end of my time using it. Early product reviews are great buyer’s guides, but they tend to dwell on the novel, which is understandable when a product has only been used for a week or two. I have lived with my iPhone 12 Pro for nearly three full years — I got mine on its release day in October 2020 — so I know it very well. Here is what I am still impressed by, what has not held up as well, and what I will be looking for when I replace it this year.

This was one eye-catching phone out of the box. Compared to the standard iPhone 12’s glossy glass back, the bead-blasted glass of the Pro models is a subtly luxurious and almost soft finish. I chose the silver model, which I still think is the nicest of the four colours it was available in at launch — the others being graphite, gold, and a finish Apple insists on calling “pacific blue”, all lowercase. The flat polished steel of the phone’s edge trim, though, lost its magic after just a few months. I rarely use a case and, so, I was expecting scratches. But I did not anticipate some kind of corrosion or blooming on its top edge, which has made the stainless steel look more like chrome-mimic plastic. I bought a stainless steel wristwatch with similar polished surfaces the same year and, despite being knocked around a fair bit and sitting directly on my skin, it has held up far better.

The steel body is also pretty heavy. It is only fifteen grams heavier than the iPhone X it replaced in my pocket, which was also a steel phone, but I wish iPhones were trending in the other direction. Thinner and lighter may be widely mocked, but for devices carried every day, it is better for me if they dissolve into my life.

Thankfully, the iPhone 15 Pro is rumoured to be made of titanium which — all else being equal — is considerably lighter than steel. The standard iPhone 15 will likely continue to be made of aluminum, which means either model would likely be a lighter phone than the ones I have carried for the past six years. I do have some questions about the wear-and-tear I will be able to expect with a titanium body. Titanium has a mixed history at Apple, but retrospective reviews of Apple Watches made of the material indicate it is holding up far better.

The battery life of my iPhone 12 Pro has also seen some wear-and-tear. After three years of daily use and an uncareful charging regimen, the Battery Health screen says it has retained 87% of its from-new capacity. That is not too bad, especially considering some iPhone 14 Pro owners are reporting similar capacities after just one year. But this generation of iPhone was notable for a slight regression in battery life expectations compared to its predecessor when it was shiny and new, and I have felt that particularly when it is not connected to Wi-Fi. This phone has been used almost exclusively as an LTE phone — more on that later — and its cellular radio seems hungry.

I bought the 12 Pro over either of the standard iPhone 12 models primarily because of the 52mm-equivalent telephoto camera and the larger RAM package. And I am glad I did — around 43% of the photos I shot with this phone are from that telephoto camera, compared to 51% captured with the main camera, and only about 6% using the ultra-wide.
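Those percentages came from my library’s metadata. If you wanted to reproduce that kind of tally, the EXIF LensModel tag on exported photos is enough. A rough sketch, assuming Pillow is installed; the export folder is hypothetical, and HEIC files would need an extra plugin:

```python
# Tally which camera took each photo in a folder of exported JPEGs by
# reading the EXIF LensModel tag. The folder path is hypothetical.
from collections import Counter
from pathlib import Path

from PIL import ExifTags, Image

# Look up the numeric EXIF tag ID for LensModel in Pillow's tag table.
LENS_MODEL_TAG = next(
    tag for tag, name in ExifTags.TAGS.items() if name == "LensModel"
)

counts = Counter()
for path in Path("~/Pictures/iphone-12-pro-export").expanduser().glob("*.jpg"):
    exif = Image.open(path)._getexif() or {}
    counts[exif.get(LENS_MODEL_TAG, "unknown")] += 1

total = sum(counts.values())
for lens, n in counts.most_common():
    print(f"{lens}: {n / total:.0%}")
```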

These two cameras — the main and telephoto — have performed well. iPhone photos have leaned toward neutrality, with only a minor warm bias, and the images I have captured with the 12 Pro have been no exception. Images captured outdoors in bright daylight are an accurate representation of the scene, with clean HDR matching my own perception. Where this camera shines most is in low-light scenes indoors, and outdoors at night. This is the area where phone cameras have struggled — small sensors do not capture as much light as bigger sensors, of course — and software advancements have played a key role in creating images which look less noisy, more colourful, and better lit. Automatic Night Mode remains a difficult adjustment for me: three years into owning this phone, I still have not gotten used to the idea of holding it in near-stillness for longer than it takes me to tap the shutter button.

Neither image has been processed apart from straightening and cropping.
[Image: Photo at night of a dark hilly street and a taillight streak from a passing car.]
[Image: Photo at dusk of an empty outdoor ice rink with overhead industrial-style lights on cables.]

I have also noticed a dramatic improvement in images shot in Portrait Mode. While it is supposed to approximate the foreground and background separation you might see with a larger sensor and a portrait lens, I rarely used it on my iPhone X because subjects looked like they were crudely cut from the scene. It is a night-and-day difference with this iPhone: there is a more natural falloff from in-focus areas to backgrounds, the faux bokeh looks more realistic, and it does a better — though still imperfect — job of understanding glassware. I do not take many pictures of people; here are some photos of food I shot with Portrait Mode:

The food images have been processed; the bottle image has not. I do not know if I am in a position to give advice, but here is what I do for food photos: I use the “Vivid” filter to improve image brightness, colour, and contrast. Then, under Adjustments, I play with the image warmth after increasing the image’s magenta tint; the “Vivid” filter is often too green and cool for food.
[Image: Photo of a plate of cut, seasoned beets of various colours.]
[Image: Close-up photo of an open glass drink bottle.]
[Image: Photo of a tray of prepared falafel balls before frying.]
[Image: Photo of a dark bowl containing cut tomatoes made very glossy by olive oil.]

I still find Apple’s photo processing pipeline too eager to reduce noise and, consequently, detail, though this is somewhat offset by other parts of the pipeline like Deep Fusion. This is exacerbated in Night Mode, of course, because it is beginning with a grainier image. I understand why Apple uses high levels of noise reduction; shooting RAW on an iPhone will reveal what the sensor captures before it is put through that pipeline. A very grainy image is probably not going to be appreciated by most people. But these sensors are very good for their size and, in most lighting conditions, some grain is more tasteful to me.

The other thing I feel compelled to mention about the iPhone 12 Pro’s cameras is how they are not the same as those in the 12 Pro Max — unfortunately. The Pro Max had a much larger sensor in its main camera and better stabilization, and its telephoto camera was a little different as well. It is unfortunate because I am not interested in buying a larger phone; the smaller Pro size Apple has settled on is already too large for my liking. And, while successive model pairings — the 13 Pro/Pro Max and 14 Pro/Pro Max — shared identical camera systems between the smaller and larger sizes, rumours suggest the line will repeat the 12 Pro’s bifurcation. If that is true, I will be disappointed, even if it is for good and practical reasons. Not upset that physics cannot be bent to accommodate my purchasing preferences, mind you, just painfully aware of the compromise I would make with either choice.

The iPhone 12 lineup was the first to support the MagSafe accessory connector, and the first to support 5G cellular networking. I have used neither extensively. I do have an Apple case, which MagSafe identifies by its colour, but I never purchased a compatible charger or any other accessories. As for 5G, working from home for most of the past three years has meant little cellular data usage, so I would not have taken advantage of any possible improvements even if I had switched to a carrier which adopted 5G earlier. My provider only recently added 5G support and, in the interest of being comprehensive, I upgraded to a 5G plan to see what it would be like in my area. From my desk, using the Speedtest app, 5G transfer speeds were 129 Mbps down and 39 Mbps up; LTE from the same spot recorded 113 Mbps down and 28 Mbps up. I have seen LTE speeds as high as 156 Mbps down and 45 Mbps up from that spot. On my balcony, 5G tested at 178 Mbps down and 15 Mbps up, while LTE was 74 Mbps down and 18 Mbps up. Latency and jitter differences are a similar tossup. I was promised life-and-death stakes, and all I got was this slightly more expensive phone plan.

Neither of these features holds any weight for my iPhone 15 purchasing decisions. The iPhone 15 line will almost certainly switch to a USB-C port after eleven years of iPhones with slow, proprietary, and unchanged Lightning ports. Alas, that means the cables on my nightstand and desk — and in my bag and car — will need to be swapped, though one will be included in the box. I may have avoided noticing this change had I purchased MagSafe charging cables. But, at $55 Canadian a pop, it would have been an expensive way to make the transition easier.1 Since USB-C is an industry standard connector, I can buy all the cheap and fast cables I need.

The iPhone 12 Pro line was also the first phone from Apple to include a LiDAR sensor on the back, which apparently helps with autofocus in low light scenes, and enables better spatial tracking for augmented reality. It is hard for me to say whether I get faster or more accurate autofocus, but I have found the A.R. enhancements surprisingly useful and fun. It is not something I am using every day. But when I stumble across a furniture website with A.R. options, for example, it is immediately rewarding to see the piece in my space and get a pretty accurate impression of its size, with pretty stable real-world object tracking. The biggest knock against anything using the LiDAR sensor is the hit it takes on battery life, which you can feel by how warm the phone gets. Visually, A.R. experiences are smooth and fast, but the warmth you feel is an indication that this phone is being pushed to some kind of limit.

So, that is my three-year experience with the iPhone 12 Pro. I am not somebody who feels compelled to upgrade every year, and even before Apple announces this year’s iPhone lineup in less than one week’s time, I can already expect big changes based on the models available today: a brighter, faster display; better cameras paired to a better image processing pipeline; macro photography; and emergency rescue features I hope to never need. But there are also plenty of unknowns, like whether the new models will continue to increase battery life, or if the phone will feel more pocket-friendly — the iPhone 13 Pro was heavier than my phone, and the 14 Pro heavier still.

I have occasionally wondered whether the 12 Pro was worth the extra cost over the standard 12 for me. The standard models had way better colour options and a Mini version, and the 12 Pro is 15% heavier than the regular model of the same size. But the camera breakdown speaks for itself: I use the telephoto camera so often that it really is a no-brainer. That is what I am looking for most of all in an iPhone 15 model: a better telephoto camera and better battery life in a model that is lighter than this one.


  1. I will tell you what was expensive: the USB-C to Lightning cable I bought last year for my travel bag. I have gotten one year’s very light use out of that $25 cable. ↥︎

Stephen Hackett today updated a great, easy-to-follow guide for setting up a Time Machine server on your network. This is something I have been meaning to do for about a year, and I figured a Friday evening before a long weekend would be a superb time to make it happen. After all, I already had a Mac to use as a server — the MacBook Air I upgraded from last year — and a hard drive. And Hackett describes it as “easy”.

How hard could it be?

Well, the first series of steps in Hackett’s guide took me fifteen minutes, and ten of those minutes were spent trying to find a Thunderbolt cable. Then I got to the part in the guide where it says I should be able to authenticate and mount the drive, and I hit a wall: I could not move past the user name and password dialog. It was not that my password was being interpreted as though it was incorrect — that comes later — but that it would accept it and then show the dialog again. I could not even mount the external drive in Finder, and sometimes it struggled to mount any drive on the host MacBook Air. I kept seeing errors like “The operation can’t be completed because the original item for ‘Remote Backup’ can’t be found”, and “There was a problem connecting to the server ‘Remote Backup’. You do not have permission to access this server.”

At some point, I also managed to lose access to my MacBook Air through the Screen Sharing app on my MacBook Pro. I would type my user name and password and it would reject it, as though I got either one wrong. But if I launched Screen Sharing from the network-mounted MacBook Air in Finder, it worked fine.

Hours later, I found the solution. System370 on Reddit pointed out in a months-old thread that smbd needs to be granted Full Disk Access permissions in System Preferences on the host Mac. That is the SMB protocol daemon; SMB is the file sharing protocol used to mount the drive on a remote Mac. I enabled Full Disk Access for the daemon, completed Time Machine setup on my MacBook Pro, and it is now creating a Time Machine backup remotely.

I mean absolutely no criticism of Hackett or this guide. In fact, I am grateful for the reminder to set this thing up — finally. But none of the error messages I saw on either machine nor any of Apple’s support articles mention this simple yet critical step.

So, here it is again: if you are enabling File Sharing for a remote disk and something is not working, skip all the troubleshooting you find elsewhere on the web. First, ensure smbd has Full Disk Access, under System Preferences (or System Settings), Security & Privacy.
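For what it is worth, once smbd has that permission, the client-side half of the setup can be scripted. A minimal sketch, with a hypothetical host, share, and account; tmutil needs administrator rights, and none of this replaces the steps in Hackett’s guide:

```python
# Point Time Machine at an SMB share from the client Mac. The host name,
# share, and user below are hypothetical placeholders.
import subprocess

SHARE_URL = "smb://backupuser@macbook-air.local/Remote%20Backup"

# Mount the share once in Finder to confirm credentials and permissions.
subprocess.run(["open", SHARE_URL], check=True)

# Add the share as a backup destination; -a appends rather than replaces,
# and -p prompts for the share's password.
subprocess.run(["sudo", "tmutil", "setdestination", "-a", "-p", SHARE_URL], check=True)

# Start a backup immediately instead of waiting for the hourly schedule.
subprocess.run(["tmutil", "startbackup", "--auto"], check=True)
```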

I hope this keyword-filled post saves others some troublesome troubleshooting, and that Apple will reconsider its strategy of erroring in silence or with irrelevant messages. This is apparently a known problem of long standing because, yes, my host MacBook Air is stuck on Catalina.

Benedict Evans:

Whenever anyone proposes new rules or regulations, the people affected always have reasons why this is a terrible idea that will cause huge damage. This applies to bankers, doctors, farmers, lawyers, academics… and indeed software engineers. They always say ‘no’ and policy-makers can’t take that at face value: they discount it by some percentage, as a form of bargaining. But when people say ‘no’, they might actually mean one of three different things, and it’s important to understand the difference.

This is a good piece categorizing the three types of “no” offered by industries facing new policies: an aversion to lawmakers telling them what to do, plausible negative consequences of regulation, and technical impossibilities. But Evans does not give nearly enough weight to how often big industry players and their representatives simply lie. They often claim the effects of new regulations will be of the second or third type when there is no evidence to support their claims.

Corporations lying to get their way is not news, of course. A common thread among the examples cited by Evans is that policies which actually do fall into the categories of causing unintended negative effects or being impossible to achieve are noted as such by truly independent experts.

In 2015, after Uber launched in Calgary, the city proposed reasonable and sensible rules, which Uber claimed were entirely “unworkable” for ride sharing as a genre. Many, including popular media outlets, concurred with Uber and begged the city to fold. But it compromised on only a single rule; everything else was passed, meaning that Uber drivers were subject to the same sorts of regulations as taxi drivers because they do the same job. And guess what? Uber has been happily operating in Calgary ever since.

Apple spent years opposing repair legislation on the basis that people would hurt themselves replacing batteries, and that any state which passed such laws would become a “mecca for bad actors”. That line of argument was echoed by some, only for Apple to now support such legislation — with caveats — despite using exactly the same type of battery it says is dangerous for people to swap themselves.

After multiple reports — including several stories by Reveal in 2019 and 2020 — of serious injuries at Amazon warehouses caused in part by its infamously rigorous quotas, and a general avoidance of workers’ rights, lawmakers in California proposed corrective legislation in early 2021. Lobbyists freaked out. In the interim, Reveal found itself smeared “on background” by Amazon’s public relations team, which tells “outright lies” according to multiple reporters. The legislation was signed into law anyway in 2021. It is certainly too early to know its long-term effects, but injury rates at Amazon facilities fell in 2022, though its rates remain double (PDF) the rate of the rest of the industry.

Evans:

I think the structural problem here, across all three kinds of ‘no’, is that this is pretty new to most of us. I often compare regulation of tech to regulation of cars – we do regulate cars, but it’s complicated and there are many different kinds of question. ‘Should cars have different emissions requirements?’ is a different kind of question to ‘does the tax code favour too much low-density development?’ and both questions are complicated. It’s a lot easier to want less congestion in cities than to achieve it, and it’s a lot easier to worry about toxic content on social media than to solve it, or even agree what ‘solve’ would mean.

But we all grew up with cars. We have a pretty good idea of how roads work, and what gearboxes are, even if we’ve never seen one, and if someone proposed that cars should not come with seats or headlights because that’s unfair competition for third-party suppliers, we could all see the problem. When policy-makers ask for secure encryption with a back door, we do not always see that this would be like telling Ford and GM to stop their cars from crashing, and to make them run on gasoline that doesn’t burn. Well yes, that would be nice, but how? They say ‘no’? Easy – just threaten them with a fine of 25% of global revenue and they’ll build it!

The comparison to regulating cars is apt, though bungled by Evans. In the earliest days, cars killed a lot of people — drivers, passengers, and others. Some manufacturers introduced features like seatbelts, but safety was not an effective sales pitch (PDF). The U.S. federal government responded by passing the National Traffic and Motor Vehicle Safety Act, which mandated the installation of various safety features, the year after Ralph Nader published “Unsafe at Any Speed”. Laws were passed to encourage use of seatbelts and discourage drunk driving, and the rate of death and serious injury has fallen even as the sales and use of automobiles have risen. These laws were disputed by automakers and the public, but they worked well.

Ever since, most laws which govern cars have trended toward increasing their efficiency, decreasing their damage to the environment, and improving their safety. These laws have arguably not gone far enough. Because many of the pickup trucks and SUVs which are sold to suburban families who dodge potholes and puddles are very heavy, they are exempt from stricter U.S. fuel economy standards. And I would be filled with regret if I did not remind you of the extensive lobbying orchestrated by automakers over the past century-and-a-bit to make walking across a road a crime, to reduce taxes on auto sales, and to do lots of other things.

Lawmakers should of course be attentive to instances where everyone who knows about a topic is telling them it is impossible to do in the way they are proposing. It is not possible to create encryption which ensures no criminals or rogue states are able to intrude but permits execution of a secret wiretap warrant.

But can you really blame lawmakers when regulations are disputed by industry representatives? It sure does not help when the press and public repeat the myths they have created. If industries want to be regulated more effectively, they need to start by being honest. The press can help by more carefully scrutinizing corporate claims; even conservative, business-focused publications should be able to see that simply opposing regulation by parroting public relations pap is a worthless use of their time and words.

Threads’ user base seems to be an object of fascination among the tech press. Mark Zuckerberg says it is “on the trajectory I expect to build a vibrant long term app” with “10s of millions” of users returning daily. Meanwhile, third-party estimators have spent the weeks since Threads’ debut breaking the news that its returning user base is smaller than its total base, and that people are somewhat less interested in it than when it launched, neither of which is surprising or catastrophic.1 Elon Musk, for his part, says Twitter is more popular than ever but, then again, he says a lot of things that are not true.

All that merits discussion, I suppose, but I am more interested in the purpose of Threads. It is obviously a copy of Twitter at its core, but so what? Twitter is the progenitor of a genre of product, derived from instant messenger status messages and built into something entirely different. Everything is a copy, a derivative, a remix. It was not so long ago that many people were equating a person’s ban from mainstream social media platforms with suppression and censorship. That is plenty ridiculous on its face, but it does mean we should support more platforms because it does not make sense for there to be just one Twitter-like service or one YouTube-like video host.

So why is Threads, anyway? How does Meta’s duplication of Twitter — and, indeed, its frequent replication of other features and apps — fit into the company’s overall strategy? What is its strategy? Meta introduced Threads by saying it is “a new, separate space for real-time updates and public conversations”, which “take[s] what Instagram does best and expand[s] that to text”. Meta’s mission is to “[give] people the power to build community and bring the world closer together”. It is a “privacy-focused” set of social media platforms. It is “making significant investments” in its definition of a metaverse which “will unlock monetization opportunities for businesses, developers, and creators”. It is doing a bunch of stuff with generative artificial intelligence.

But what it sells are advertisements. It currently makes a range of products which serve both as venues for those ads, and as activity collection streams for targeting information. This leaves it susceptible to risks on many fronts, including privacy and platform changes, which at least partly explains why it is slowly moving toward its own immersive computing platform.

Ad-supported does not equate to bad. Print and broadcast media have been ad-supported for decades, and they are similarly incentivized to increase and retain their audience. But, in their case, they are producing or at least deliberately selecting media of a particular type — stories in a newspaper, songs on a radio station, shows on TV — and in a particular style. Meta’s products resemble that sort of arrangement, but do not strictly mimic it. Its current business model rewards maximizing user engagement and data collection but, in a digital space, prescribes little about format. Instagram’s image posts can be text-based; users can write an essay on Facebook; a Threads post can contain nothing more than a set of images.

So Meta has a bunch of things going for it:

  • a business model that incentivizes creating usage and behavioural data at scale,

  • a budget to experiment, and

  • an existing massive user base to drive adoption.

All this explains why Meta is so happy to keep duplicating stuff popularized elsewhere. It cloned Snapchat’s Stories format in Instagram to great success, so it tried cloning Snapchat in its entirety more than once; both attempts flopped. After Vine popularized short videos, Facebook launched Riff. After Twitter dumbly let Vine wither and die, and its place was taken by Musical.ly and then TikTok, Facebook launched Lasso, which failed, then Reels, and copied TikTok’s recommendation-heavy feed, moves which — with some help — have been successful. Before BeReal began to tank, it was copied by, uh, TikTok, but Meta was working on its own version, too.

But does any of this suggest to you an ultimate end goal or reason for being? To me, this just looks like Meta is throwing stuff at people in the hope any of it sticks enough for them to open the advertising spigot. In the same way a Zara store is just full of stuff, much of it ripping off the work of others, Meta’s product line does not point to a goal any more specific than its mission statement of “bring[ing] the world closer”. That is meaningless! The same corporate goal could be used by a food importer or a construction firm.

None of this is to say Meta is valueless as a company; clearly it is not. But it makes decisions that look scatterbrained as it fends off possible competitors while trying to build its immersive computing vision. That vision might be far enough away that it is sapping any here-and-now direction the company might have. Even if the ideas are copies — and, again, I do not see that as an inherent weakness — I can only think of one truly unique, Meta-specific, and successful take: Threads itself. It feels like a text-only Instagram app, not a mere Twitter clone, and it is more Meta-like for it. That probably explains why I use it infrequently, and why it seems to have been greeted with so much attention. Even so, I do not really understand where it fits into the puzzle of the Meta business as a whole. Is it always going to be a standalone app? Is it a large language model instruction farm? Is it just something the company is playing around with and seeing where it goes, along the lines of its other experimental products? That seems at odds with its self-described “year of efficiency”.

I wish I saw in Meta a more deliberate set of products. Not because I am a shareholder — I am not — but because I think it would be a more interesting business to follow. I wish I had a clearer sense of what makes a Meta product or service.


  1. Then there is the matter of how Sensor Tower and SimilarWeb measure app usage, given how restricted their visibility is on Android and, especially, iOS. Sensor Tower runs an ad-blocking VPN and several screen time monitoring products, which it uses in a way not dissimilar from how Meta used Onavo — something that was not disclosed in an analysis the company did with the New York Times.

    SimilarWeb has a fancy graphic illustrating its data acquisition and delivery process, which it breaks down into collection, synthesis, modelling, and digital intelligence. Is it accurate? Since neither Apple nor Google reports the kind of data SimilarWeb purports to know about apps, it is very difficult to know. But, as its name suggests, its primary business is in web-based tracking, so it is at least possible to compare its data against others’. It says the five most popular questions asked of Google so far this year are “what”, “what to watch”, “how to delete instagram account”, “how to tie a tie”, and “how to screenshot on windows”. PageTraffic says the five most-Googled questions are “what to watch”, “where’s my refund”, “how you like that”, “what is my IP address”, and “how many ounces in a cup”, and Semrush says the top five are “where is my refund”, “how many ounces in a cup”, “how to calculate bmi”, “is rihanna pregnant”, and “how late is the closest grocery store open”. All three use different data sources but are comparable data sets — that is, all from Google, all worldwide, and all from 2023. They also estimate wildly differing search volumes: SimilarWeb’s estimate of the world’s most popular question query, “what”, is searched about 2,015,720 times per month, while Semrush says “where is my refund” is searched 15,500,000 times per month. That is not even close.

    But who knows? Maybe the estimates from these marketing companies really can be extrapolated to determine real-world app usage. Colour me skeptical, though: if there is such wide disagreement in search analysis — a field which uses relatively open and widely accessible data — then what chance do they have of accurately assessing closed software platforms? ↥︎

Today, Meta announced it is beginning the process of ending Canadians’ access to news publications as a consequence of Bill C-18, now known as the Online News Act. This is an entirely predictable outcome of the Act, which requires big tech companies like Google and Meta to decide between having an unpredictable bill for all their news links, or having a fixed cost of zero dollars. In the fullness of time, we will find out if fewer Canadians are using their services because they cannot use them for news coverage, but it seems to me like a safe gamble on the part of Meta and, soon, Google.1

When I tried finding third-party coverage of this announcement, I turned to Google — DuckDuckGo’s news results are filled with articles syndicated at MSN and Yahoo — and had a hard time finding a non-wire report from a Canadian publication. Most were from the Canadian Press or Reuters. Happily, CBC News’ Darren Major filed an original report, but it did not take me long to find problems with both the reporting and the subjects it profiled:

Social media giant Meta says it has officially begun ending news availability on its platforms in Canada starting Tuesday.

Meta, which owns Facebook and Instagram, has been signalling the move was coming after the government passed its Online News Act, Bill C-18, in June.

Those “signals” include the company stating outright that “news availability will be ended on Facebook and Instagram for all users in Canada”, which is not so much a nod as it is a direct confirmation. The news here, as it were, is that Meta followed through on things it said repeatedly.

The law requires big tech giants like Google and Meta to pay media outlets for news content they share or otherwise repurpose on their platforms.

This is not accurate. It requires operators of large social media platforms and search engines to pay for news links either shared by users or indexed with publishers’ permission. The tech company does not meaningfully share links itself. The distinction is fuzzier for links found through search engines, but publishers retain control over whether their articles are included in results.

Major’s introduction to this story is not great. Perhaps even more objectionable material can be found in quotations from various elected officials. For example:

Newly appointed Heritage Minister Pascale St-Onge said Meta has refused to participate in the regulatory process.

“This is irresponsible,” she said in a statement. “They would rather block their users from accessing good quality and local news instead of paying their fair share to news organizations.”

This legislation preposterously demands that publications are owed some kind of “fair share” of revenue from referral sources, but nothing about that reflects how the web works. This feels like the “Mad Men” clip where Don tells Peggy “that is what the money is for”, except with links.2

The Canadian media landscape sucks. The online ad marketplace sucks. I concede that publications regularly post links to their own work on social media and permit search engines to index their websites out of an expectation that we will find them there, and that it is currently hard to know the true value of these third-party referrals to publishers’ revenue streams. But this legislation is a bad way to address that imbalance, and the dichotomy St-Onge presents is wrong. Canadians are not being blocked from accessing news, and there is no “fair share” to be had.

Conservative Leader Pierre Poilievre put the responsibility squarely at the feet of the Liberal government.

“It’s like Nineteen Eighty-Four,” he said. “Who would ever have imagined that in Canada the federal government would pass laws banning people from effectively seeing the news?”

Predictably, Poilievre is just plain wrong. Even the most charitable reading of this statement — one which moves the word “effectively” to between “would” and “pass” — is a ludicrous, bad-faith reading of this Act, and it is embarrassing that political rivals use this sort of language instead of trying to navigate the nuances of a particular topic.

Martin Champoux, heritage critic for the Bloc Québécois, accused Meta of trying to intimidate parliament into rescinding the law.

“This deplorable decision serves no one. In fact, the big losers are the users who will be deprived of their news on social networks,” Champoux said in a statement in French.

It is no secret that Meta has been arguing against the Online News Act since it was first proposed. But what is missing from this criticism is that Meta has actually been making good points all along, and the law is very bad. I am obviously no fan of Meta’s, and I trust policymakers more than most people in the technology commentary space seem to, but even I can acknowledge this Act is poor and these are reasonable concerns:

“In order to provide clarity to the millions of Canadians and businesses who use our platforms, we are announcing today that we have begun the process of ending news availability permanently in Canada,” Rachel Curran, Meta’s head of public policy in Canada, said in a statement.

[…]

“In the future, we hope the Canadian government will recognize the value we already provide the news industry and consider a policy response that upholds the principles of a free and open internet,” Curran said in her statement.

This is completely fair. As Casey Newton wrote earlier this year, there are plenty of other ways the government can support Canadian media without resorting to a link tax.3 But they decided to go after a fundamental component of the web in a way that will, as the Liberal Party acknowledges, create a precedent:

St-Onge said the government will continue to “stand its ground” and suggested other countries are considering drafting similar legislation.

This is a dangerous piece of legislation — one the government has dug in its heels over and pushed through despite protests from open web activists and major corporations alike. You do not have to be a fan of Google or Meta to recognize how flawed it is. You only need to see the obvious conclusion: requiring sources of web traffic to pay up is going to make them less likely to play that role.


  1. This is entirely unrelated to the topic at hand, but I think it is fascinating how successful the “Meta” brand has become compared to, say, Google’s attempt to introduce Alphabet. To be fair, the commitment to the “Alphabet” branding is pretty weak; the Meta website is more than just a landing page for investors. But something I noticed last week is how Meta still uses “Facebook” branding on its quarterly earnings statements (PDF). Place your bets now on whether “X” takes off. I think everyone except the Elon Musk fanbase will keep calling it Twitter, and keep calling posts “tweets”. ↥︎

  2. It is truly unfortunate that crappy Instagram gurus of business and men’s advice take “Mad Men” literally. ↥︎

  3. A paid link, unfortunately, but I believe you know where to find a workaround. ↥︎

Kevin Purdy, Ars Technica:

Ricky Panesar, CEO of UK repair firm iCorrect, told Forbes that screens replaced on newer iPad Pros (fifth and sixth-generation 12.9-inch and third and fourth-generation 11-inch models) do not deliver straight lines when an Apple Pencil is used to draw at an angle. “They have a memory chip that sits on the screen that’s programmed to only allow the Pencil functionality to work if the screen is connected to the original logic board,” Panesar told Forbes.

A Reddit post from May 23 from a user reporting “jittery” diagonal lines from an Apple Pencil on a newly replaced iPad mini screen suggests the issue may affect more than just the Pro line of iPads.

Usually I would point you to the original source — in this case, Forbes — rather than a rewrite, but I made an exception for the author of this Ars piece. If you recognize the name “Kevin Purdy”, it could be because he used to work for iFixit, which is acknowledged in this article.

At iFixit, Purdy was responsible for repeatedly claiming Apple was sabotaging device repairs, citing as evidence the inability to swap the camera between iPhone 12 units and the Face ID system between iPhone 13s. Unofficial iPhone 11 display swaps showed a message saying the display was not genuine and, importantly, True Tone also could not be enabled. And now we have this iPad and Apple Pencil issue to add to the list.

One thing all of these features — synchronicity between the Pencil and iPad screen, Face ID, cameras, and True Tone — have in common is that they require precise coordination between hardware and software. Different parts perform differently, even if they were made by the same company in the same factory at the same time. Apple achieves a level of uniform behaviour on its devices through a proprietary calibration process.

I find Purdy’s analysis of these issues frustratingly shallow. Every time one of these parts calibration problems arises, Purdy immediately ascribes it to a deliberate repair lockout strategy, even though software updates have kept these parts functional and have even fixed some problems spotted by iFixit. iOS 14.4 corrected camera problems for iPhone 12 models with swapped modules, and iOS 15.2 re-enabled Face ID for screen-swapped iPhone 13s. In general, I feel that a warning in Settings that a screen or camera is not an official part is a fair way to notify users about what is going on in the mystery box of electronic wizardry they are holding, especially since the iPhone’s resale market is stronger than that of any other company’s phones by a huge margin, and some of the components for which Apple requires calibration are for security features like Touch ID and Face ID.

The problem I have with a Machiavellian explanation for Apple’s repair idiosyncrasies is that it does not address the actual problems created by its decisions about device construction and maintenance. Hector Martin put it well:

You need to stop pushing these ridiculous conspiracy theories and instead focus on reality: these machines are complex, their production is complex, their repair is complex, and just swapping parts around willy nilly may not result in a quality result, and that is *normal*. Advocate for Apple to provide access to their calibration re-provisioning processes instead, so you can actually get things set up properly and working as intended by the manufacturer. Them not providing those tools sucks and is anti-repair. The product engineering that requires those tools for a proper outcome is not.

From the perspective of users, it does not matter whether Apple is actively making it harder to repair devices or if that is a side effect of other priorities. And, from the perspective of activists and policymakers, I am not sure it makes much difference either. If hardware and software need to go through some process to become better acquainted with one another and work properly, that is fine by me. If it is all a ruse, that sucks, but ultimately is not relevant.

People should be able to swap screens and batteries — at the very least — without having to find a specifically authorized technician with provisioned access to some internal Apple tool. Not every Apple device owner lives in a big city in a rich country, where Apple’s network of technicians is concentrated. The software should be part of the toolkit available to anyone repairing their device outside the Apple Authorized Service Provider channels.

In fact, that is not too far off from what Apple has been doing. In addition to making parts and tools available and improving device repairability, it recently announced it would be making the System Configuration step entirely self-guided. This is progress. Even so, I see room for improvement. The self-service program is offered through a website that barely looks legitimate, let alone connected to Apple; it is available in only a handful of countries, which do not currently include Canada; and Apple can alter it or stop providing it at any time. Good public policy could ensure most common repairs can be done by anyone who is inclined, with quality parts and tools made available.

Barring evidence proving otherwise, I am not convinced Apple’s final software calibration step is some kind of evil manoeuvre to subvert repairs and kneecap its products. Framing it in those terms is a distraction from effective right-to-repair activism. I am not someone who believes Apple cannot do bad things; any regular reader is well aware of that. But I do believe these kinds of motivations demand proof beyond typical and fair suspicion of big corporations.

At WWDC 2022, Apple previewed a new version of CarPlay. It promised deeper integration, taking over for things like ventilation controls, seat position, and dashboard dials. Such an update will basically require expansive screen space, and it will also permit CarPlay to span multiple screens.

A list of supported models is not expected until much later this year, but it appears we are beginning to see glimpses already in a raft of automaker announcements. Some manufacturers — mostly luxury brands like Porsche — were already toying with all-screen dashboards. That style is becoming increasingly standard and moving downmarket. New models from BMW, Chevrolet, Ford, Hyundai, and Lincoln each have a big, long screen stretching from at least the driver’s side across the centre console with digital dials replacing analogue gauges. While none of the mockups show the new version of CarPlay, this layout seems to be designed with it in mind. The mockup Lincoln showed was so similar to CarPlay that, when iMore asked about it, a spokesperson acknowledged it was “uncanny”.

To be clear, none of these mockups seem to show CarPlay, but they do show a new all-digital interface projected across the entire dashboard — exactly how CarPlay will be presented. Stay tuned for the inevitable acknowledgements by Apple and automakers later this year.

Whether people will like these changes is a different matter altogether. A recent JD Power survey indicates people are increasingly dissatisfied with manufacturer-created digital controls, and prefer integration with their phone, which suggests further development might be well-received. But people also prefer physical controls for common functions like turning on seat heaters or adjusting the air conditioning. Volkswagen is dropping the touch controls it added to its steering wheel in favour of real buttons, for example, while Hyundai says it will keep using buttons even as huge screens sweep across the dashboard of its newest models — see above, for example. Porsche may have been an early adopter of an all-screen dashboard with its Taycan, but the new Cayenne manages to retain tactile controls while also embracing digital ones.1

And I still have not acknowledged the potential for this proliferation of screens to make driving more dangerous. We already know that offloading common controls to screen-based interfaces is more distracting. Some of Apple’s mockups show a series of widgets spread across the dashboard with information about the weather, calendar appointments, smart home devices, music, and world clocks. All this while the vehicle is apparently travelling at around 44 miles per hour (70 kilometres per hour) approaching a crosswalk on a street which is signposted for 30 (50). Yes, I know it is a mockup, but it feels realistic: people really do check their calendar while speeding through intersections. Distractions like these are dangerous to everyone on or near a roadway, including cyclists and pedestrians. In the United States, pedestrian fatalities have soared to levels not seen in forty years.

But the story seems more complex than the one these U.S. statistics appear to tell. The Canadian auto market mimics the U.S. one, with a similar proportion of different body styles sold, and distracted driving responsible for fatal collisions at a similar rate. Even so, fatal collisions in Canada have been declining over the same period in which they have been rising south of the border. Crucially, after rising in 2018, pedestrian deaths as a share of fatalities declined through 2019–2021, and pedestrian injuries have also been trending downward. The same cannot be said of cyclists, though there is no clear pattern in either direction.

I am most frequently in those latter two categories of road user: I am usually a pedestrian, and often a cyclist. Despite the wide availability of smartphone integration for many years, I still see people in newer cars holding and looking at their phones while driving erratically. Windscreen mounts remain popular, often immediately in front of the driver.

After digging into what is to come in newer cars and recent statistics, I am left with concern and confusion. It seems that something is different in the United States compared to Canada, though the NHTSA recently announced a turnaround for the first part of 2023. But screen-based controls create increased risk, and I find it hard to believe that will be mitigated by bigger screens and more distractions. I worry that drivers five years from now will be sitting in a massive boxy SUV with a dashboard full of touch-activated widgets, and they will still be staring at the phone in their hand.

After years of different answers for ways to avoid touching phones while behind the wheel — CarPlay and its Android counterpart, voice controls, Bluetooth — it seems that is something some drivers will never be able to give up.


  1. In fact, it looks to me like some functionality is duplicated: there appears to be a seat heater icon in both the centre console and onscreen, suggesting the tactile switches could be stateless. ↥︎

It should have been a surprise to exactly nobody that Threads, Meta’s Twitter-alike, was going to seem hungrier for personal data than Bluesky or Mastodon.

That is how Meta makes its money: its products surveil as much of your behaviour as possible, then they let others buy targeted advertising. That differs from the other two services I mentioned. Bluesky says its business model “must be fundamentally different” because anyone will be able to spin up their own server; it has raised venture capital and is selling custom domain names. Mastodon is similarly free; its development is supported by various sponsors and it has a Patreon account pulling in nearly half a million dollars annually, while most individual servers are donationware.

Meta is not currently running advertising on Threads, but one day it will. Even so, its listings in Apple’s App Store and Google’s Play Store suggest a wide range of privacy infractions, and this has not gone unnoticed. Reece Rogers, of Wired, compared the privacy labels of major Twitter-like applications and services on iOS, and Tristan Louis did the same with a bunch of social apps on Android. Michael Kan, of PC Magazine, noticed before Threads launched that its privacy label indicated it collected all the same data as Twitter plus “Health & Fitness, Financial Info, Sensitive Info, and ‘Other Data’”. That seems quite thorough.

Just as quickly as these critical articles were published came those who rationalized and even defended the privacy violations implied by these labels. Dare Obasanjo — who, it should be noted, previously worked for Meta — said it was “a list of features used by the app framed in the scariest way possible”. Colin Devroe explained Threads had to indicate such an expansive data collection scheme because Threads accounts are linked to Instagram accounts. My interpretation is that, because you can shop through Instagram, for example, billing information can be linked to a profile.

That there is any coverage whatsoever of the specific privacy impacts of these applications is a testament to the direct language used in these labels. Even though Meta’s privacy policy and the supplemental policy for Threads have been written with comprehension in mind, they are nowhere near as readable as these simplified privacy labels.

Whether those labels are being accurately comprehended, though, is a different story, as indicated in the above examples. The number of apparent privacy intrusions in which Threads engages is alarming at first glance. But many of the categories, at least on iOS, require that users grant permission first, including health information, photos, contacts, and location. Furthermore, it is not clear to me how these data types are used. I only see a couple of passing references to the word “health” in Meta’s privacy policy, for example, and nothing says it communicates at all with the Health database in iOS. Notably, not only does Threads not ask for access to Health information, it also does not request access to any type of protected data — there is no way to look for contacts on Threads, for example — nor does it ask to track when launched. In covering all its bases, Meta has created a privacy label which suggests it is tracking possibly everything, but perhaps not, and there is no way to tell how close that is to reality nor exactly what is being done with that information.

This is in part because privacy labeling is determined by developers, and the consequences for misrepresentation are at the discretion of Apple and Google. Ironically, because each of the players involved is a giant business, it seems to me there may be limits to how strictly these privacy labels can be policed. If Apple or Google were to de-list or block Meta’s apps, you know some lawyers would be talking about possibly anti-competitive motives.

That is not to say privacy labels are useless. A notable difference between the privacy label for Threads and some other apps is not found in the list of categories of information collected. Instead, it is in the title of that list: “Data Linked to You”. It should not be worrisome for a developer to collect diagnostic information, for example, but is it necessary to associate it with a specific individual? Sure enough, while some apps — like Tapbots’ Ivory and the first-party Mastodon client — say they collect nothing, others, like Bluesky and Tooot — a Mastodon client — acknowledge collecting diagnostic information, but say they do not associate it with individual users. Apple is also pushing for greater transparency by requiring that developers disclose third-party SDKs which collect user data.

All of this, however, continues to place the onus on individual users. Somehow, we must individually assess the privacy practices of the apps we use and the SDKs they rely upon. We must be able to forecast how our granting of specific types of data access today will be abused tomorrow, and choose to avoid all features and apps which stray too far. Perhaps the most honest disclosure statements are in the form of the much-hated cookie consent screens which — at their best — give users the option of agreeing to each possible third-party disclosure individually, or agreeing or disagreeing in bulk. While they provide an aggressive freedom of choice, they are overwhelming and easily ignored.

A better option may be found in not giving users a choice.

The rate at which tech products have changed and evolved has made it impossible to foresee how today’s normal use becomes tomorrow’s privacy exploitation. Vacation photos and selfies posted ten or twenty years ago are now part of at least one massive facial recognition database. Doorbell cameras become a tool for vigilante justice. Using the web and everyday devices normally subjects everyone to unchecked surveillance, traces of which persist for years. The defaults are all configured against personal privacy, and it is up to individuals to find ways of opting out of this system where they can. Besides, blaming users for not fully comprehending all possible consequences of their actions is the weakest rebuttal to reasonable consumer protections.

Privacy labels, which appeared first in the App Store before being added to the Play Store, were inspired by food nutrition labels. I am happy to extend that metaphor. At the bottom of many restaurant menus is printed a statement which reads something like “eating raw or lightly cooked foods of animal origin may increase your risk of food poisoning”. There are good reasons (PDF) to be notified of that risk and make judgements based on your personal tolerance. But nobody expects the secret ingredient added by a restaurant to their hollandaise to be salmonella. This reasonable disclosure statement does not excuse kitchen staff from taking reasonable precautions to avoid poisoning patrons.

We can only guess at some pretty scary ways these everyday exploitations of our private data may be used, but we do not have to. We have plenty of evidence already that we need more protections against today’s giant corporations and tomorrow’s startups. It should not be necessary to compare ambiguous labels against company privacy policies and imagine what they could do with all that information just to have a text-based social media account. Frivolity should not be so poisoned.

Last week, the U.S. Federal Trade Commission filed suit against Amazon, alleging design patterns which made it easy to accidentally register for Amazon Prime, and difficult to cancel once enrolled. The lawsuit (PDF) is liberally redacted; what is visible attempts to paint a picture of a business which induces people into a monthly subscription. It is the latest example of Lina Khan’s vigorous scrutiny of massive businesses, and of the Commission’s recent focus on so-called “dark patterns”, also known as “deceptive design”.

These practices are not new — the FTC writes of “unscrupulous direct-mail and brick-and-mortar retailers” which engage in behaviour which is basically a scam — and U.S. regulators have a history of investigating such businesses. The sweepstakes company Publishers Clearing House is one such example which, in 1994, settled with 14 Attorneys General over its use of deceptive language. The company allegedly gave the impression that entering the contest by returning a postcard with a magazine subscription purchase would increase the odds of winning compared to simply returning the postcard, and used language like “finalist” to imply greater urgency. It did not admit wrongdoing in its settlement, but agreed to change its practices.

Times change and Publishers Clearing House no longer uses such suggestive language in postcards. Instead, as described in an $18 million FTC settlement today, it used deceptive language and dark patterns on its website and through email messages. It gave the impression to customers that a purchase would increase their chances of winning a sweepstakes prize, and used phrases in email messages to make them sound like government documents. “Protecting people from deceptive and unfair business practices like these is part of the FTC’s mandate” is a phrase I typed before looking it up and finding that is exactly how the FTC defines its mission. How embarrassing!

Anyway, the FTC’s complaint against Amazon has business-minded voices like Ben Thompson stepping in to defend the company. Thompson admits the FTC may have a point when it comes to Amazon’s purchase flow, pointing to several screenshots posted on Hacker News:

Are these UI decisions that are designed to make subscribing to Prime very easy? Yes, and that is a generous way to put it, to say the least! At the same time, you can be less than generous in your critique, as well. […]

Why should anyone be more sympathetic to Amazon’s perspective? This is the purchase flow for one of the world’s biggest online retailers; it is safe to say every decision here is deliberate and has been vetted by many people. The kindest assumption one can make is that Amazon intended to walk right up to the line of illegally deceiving people while trying not to cross it.

[…] The last image, for example, complains that Amazon is lying because the customer already qualifies for free shipping, while ignoring that the free shipping on offer from Prime arrives three days earlier! That seems like a meaningful distinction.

Thompson is referencing this screenshot, which shows an Amazon checkout page with a $5.99 shipping option preselected, even though a free shipping option is available. A second free shipping option is also presented, the selection of which would subscribe the user to Amazon Prime. The complaint of the user who posted the image is that Amazon has preselected a paid shipping option when a free option is available, knowing the free option would take longer to arrive than either the preselected paid option or the fastest Prime choice, and that Amazon presents a Prime subscription as a way to “save $5.99 on eligible items in this order”. A more honest screen would preselect free shipping and explain how subscribing to Prime would get the order there sooner.

Nevertheless, this section of Thompson’s piece is the closest he comes to a concession that Amazon’s practices might be shady. There is plenty of real-world evidence to support this position. A March survey by Which? found that one in eight Brits registered unintentionally; they were disproportionately older and poorer. Reporting by Eugene Kim at Insider indicates Amazon has known since 2017 that accidental Prime registrations “erode customer trust”.

This is why I do not think we need to be more sympathetic to Amazon, nor to blunt our criticisms. It is one of the biggest online retailers in the world, and its Prime membership program is a recurring revenue machine. Every decision it makes about registering for that program or buying products on its site is analyzed to death.

Thompson is only warming up. People who use Amazon without paying for Prime are free-riders, economically speaking, because they get some of the efficiencies of the company’s reorganization of its supply chain around a goal of offering same-day delivery:

In this view, Amazon “free-riders” get Prime benefits without paying for Prime; they earn this benefit by successfully navigating Amazon’s dark patterns, which, to be sure, are its own cost. I would also note that Amazon does benefit from free-riders: at the end of the day the most important driver of the company’s profitability is how much leverage it can gain on its massive costs; I would bet that from Amazon’s perspective a “free-rider” who buys things on Amazon is a net positive…as long as there aren’t too many of them.

This is an insane way to justify tricking some people into signing up for a monthly subscription. I do not think this is emphasized nearly enough in Thompson’s article: a Prime subscription is not some irritating email list people are not aware they are signing up for; it costs $15 per month. That may not be a lot to some people but, for others, it can make a difference in a monthly budget. A deliberately deceptive checkout process should not be a puzzle users are expected to solve lest they find themselves sending money every month to a business they might use only rarely.

I have told this story before but it is relevant here: many years ago, I was trying to purchase some clothes at Hudson’s Bay, and the salesperson asked me repeatedly to register for what they called a rewards and discount membership card. I asked if it was a credit card, and they said it was not. Since they had asked so many times and I just wanted to go home, I agreed, only to realize after I had paid that it was a credit card after all. I closed it immediately and returned the clothes.

This being a brick-and-mortar store, it was a particularly aggressive salesperson who was to blame, probably because they were trying to hit a goal set by higher-ups. Their behaviour could be changed at a store level — it was one of the few times in my life when I have complained to management. Amazon, by contrast, created a single complicated checkout process and launched it in many regions, all in an attempt to get you to spend $15 per month.

The allegedly deceptive Prime registration process is one thing the FTC has an issue with; it also does not like the Prime cancellation procedure. It says it is too easy to unintentionally create a $15 per month charge on your credit card, and too hard to cancel your membership.

On this, I find the FTC’s case less persuasive, but not insignificant. It is certainly not the worst cancellation process. However, it is worth pointing out that the project which created this multistep process was internally referred to as “Iliad”, suggesting its arduous qualities were very much the point. If the data from the survey conducted by Which? is generally fair — that is, those who registered for Prime unintentionally are disproportionately older and less familiar with Amazon as a whole — it seems to me this unnecessarily cumbersome cancellation process would be perceived as even more challenging, though I am wary of reading too much into that survey. Also, it is the one thing Amazon preemptively changed as the FTC was investigating the company, according to the Commission’s suit, so it seems Amazon thought the Commission’s complaints had at least a little merit.

These are the kinds of issues where we cannot trust the market to regulate itself. Unintentional registrations were a known issue at Amazon, but Kim’s reporting for Insider indicates the fundamental problems were not taken seriously. A difficult cancellation process is similarly not self-regulating. Consumer Intelligence Research Partners, an analyst firm, estimates well over 90% of Prime members remain subscribers. That is probably because most of them like it — CIRP also estimates that one-quarter of Prime members who pay monthly stop and restart their membership. But some of that stickiness over some period of time — likely months rather than years — might be because some users are confused by how to cancel, or they did not know they opted into a free thirty-day trial which automatically converted into a paid membership. Knowing that with a high degree of certainty is very difficult.

But making things better is a good course of action, doable, and maybe even easy. The FTC’s allegations echo 2021 complaints from the Norwegian Consumer Council. Last year, Amazon said it would change its cancellation process in Europe to one which takes just two steps and is clearly labelled. It is fair to argue that its current U.S. process is not that difficult, but it is obviously inferior to the E.U. version. Thompson protests “government regulators getting involved in product design on a philosophical level”, but it was exactly that kind of pressure which produced changes in both the U.S. and the E.U., resulting in better designed products for users.

Thompson:

What this means is that, to the extent the FTC is effective is the extent to which Amazon almost certainly makes delivery worse for non-Prime members (i.e. differentiates based on service level instead of dark pattern navigation capability) and/or simply makes Amazon.com Prime only, restricting availability to the people who the FTC insists ought not pay for faster delivery. It’s not clear to me how much of a win this is.

[…]

[…] the fundamental point is that the removal of friction leads to a different set of trade-offs. In the case of targeting and tracking, the payoff is a massive increase in consumer welfare by virtue of access to all of the world’s information (Google), and all of the world’s people (Meta); in the case of things like dark patterns and personal appeals, the payoff is ordering sunglasses for your upcoming fishing trip at 10am and having them in hand at 4pm, or, more broadly, to have access to anything you need no matter where you live.

I held on tight and kept my arms and legs inside the ride at all times, but this is where things took a turn. Thompson here is defending checkout and cancellation flows designed to trick people, which are apparently a necessary cost of making same-day shipping possible.

I do not wish to downplay how great Amazon and other retailers can be. Easy online shopping and rapid delivery are a convenience for many, but can be life-changing for people with disabilities. Even in Thompson’s example, the ability to get a hard-to-find product delivered in a rush clearly made a difference in his life.

But if the checkout form is redesigned to reduce unintentional Prime sign-ups and cancellations become easier, is Amazon’s entire infrastructure going to become meaningfully worse? If the way Amazon runs its online marketplace can only be maintained by coercing users into registering for Prime and making it hard for them to stop paying — to say nothing of dangerous and low-paid labour — that seems like a profound argument against the way Amazon works today, not in favour of it. It indicates a company which is deceptive to its core. Amazon needs to do better.

For the first time in more than a decade, it truly feels like we are experiencing massive changes in how we use computers, and in how we will use them in the future. The ferociously burgeoning industry of artificial intelligence, machine learning, LLMs, image generators, and other nascent inventions has become part of our lives first gradually, then suddenly. The growth of this new industry provides an opportunity to reflect on how it ought to grow while avoiding problems similar to those which have come before.

A frustrating quality of industries and their representatives is a general desire to avoid scrutiny of their inventions and practices. High technology is no different. They begin by claiming things are too new or that worries are unproven and, therefore, there is no need for external policies governing their work. They argue industry-created best practices are sufficient to curtail bad behaviour. After a period of explosive growth, as regulators are eager to corral growing concerns, those same industry voices protest that regulations will kill jobs and destroy businesses. It is a very clever series of arguments which can luckily be repurposed for any issue.

Eighteen years ago, EPIC reported on the failure of trusting data brokers and online advertising platforms to self-regulate. It compared them unfavourably to the telemarketing industry, which pretended to self-police for years before the Do Not Call list was introduced. At the time, it was a rousing success; unfortunately, regulators were underfunded and failed to keep pace with technological change. Due to overwhelming public frustration with the state of robocalls, the U.S. government began rolling out call verification standards in 2019, and Canadian regulators followed suit. For U.S. numbers, these verification standards will be getting even more stringent just nine days from now.

These are imperfect rules and they are producing mixed results, but they are at least an attempt at addressing a common problem with some success. Meanwhile, a regulatory structure for personal privacy remains elusive. That industry still believes self-regulation is effective despite all evidence to the contrary, as my regular readers are fully aware.

Artificial intelligence and machine learning services are growing in popularity across a wide variety of industries, which makes this a perfect opportunity to create a regulatory structure and a set of ideals for safer development. The European Union has already proposed a set of restrictions based on risk. Some capabilities — such as automated systems involved in education, law enforcement, or hiring decisions — would be considered “high risk” and subject to ongoing assessment. Other services would face transparency requirements. I do not know if these rules are good but, on their face, the behavioural ideals which the E.U. appears to be constructing are fair. The companies building these tools should be expected to disclose how models were trained and, if they do not do so, there should be consequences. That is not unreasonable.

This is about establishing a set of principles to which new developments in this space must adhere. I am not sure what those principles look like, but I do not think the correct answer is letting businesses figure it out first, leaving regulators to struggle to catch up years later with lobbyist-influenced half-measures. Things can be different this time around if there is a demand and an expectation for doing so. Written and enforced correctly, these regulations can help temper the worst tendencies of this industry while allowing it to flourish.

It has already been a busy year for Apple, and the company has not yet held a single presentation. Just two weeks into the new year, it launched new Macs and a refreshed HomePod, followed by some services updates, and new iPad software. All of those things — and more — were launched via press release instead of the full power of a real demo. They are all things which do not require much of a demo.

What Apple is rumoured to have in store for WWDC, however, demands the pomp and circumstance of one of its signature events.

The rumour mill paints a picture of a headset in the company style. The hardware is allegedly a technical masterstroke.1 But none of that is very interesting, nor does it tell the story of this product. Apple has not tried to quell the rumours and expectations leading up to Monday; on the contrary, it is marketing the conference as a “new era”. The single thing everyone will be asking going into this WWDC is what a mixed reality headset can do when it is developed by a company famously obsessed with the bigger picture.

Earlier entries in the field have come from the usual suspects, with familiar results. Google’s Glass was an interesting but antisocial experiment. Microsoft spent most of 2022 attempting to convince HoloLens users of the future of the device, but announced layoffs in January which affected its augmented and mixed-reality pursuits. Meta is as enthusiastic about these kinds of products as it is institutionally visionless.

The one thing these products have had in common is their lack of a use case that piques the interest of more than a niche audience. Make no mistake: this will disappoint anyone expecting a product which immediately and obviously usurps the iPhone’s place as the go-to, do-anything device for a billion people. I do not think it will feel as capable as a Mac, either, nor do I think it will be as limiting as an Apple Watch.

What it will be, undeniably, is fascinating. It could very well represent a vision of the future of how we all use computers, though it may not be immediately so at its introduction. But even if you lower the massive expectations for this product, it is at the very least a new Apple product category, which is inherently interesting. Apple may no longer be a company making just four Macs, but its product line still is not very large. Another category appearing in the main navigation on Apple’s website is a big deal.

Whatever it is, it will also likely represent the kind of product which few of us will buy immediately, even if we want to. If the rumours are correct, the price tag will make our eyes pop, the features will feel somewhat limited, and the hardware — while powerful and polished — will be obviously compromised. While many of us are waiting for a day many years from now when this category feels more attainable, we will be using our existing devices — two billion of them. If much of Apple’s own attention has been directed at the future, what does that mean for its here-and-now lineup?

This is an honest question, not just a rhetorical one. As Apple’s operating system line has grown from one to at least five — more if you count the HomePod’s audioOS and BridgeOS for Macs with T-series chips — the limitations of scale have begun to show. New versions of iPadOS oscillate between key feature updates to fundamental parts of the system, like multitasking, one year, and tepid improvements the next. iOS is a mature platform and so it makes sense for there to be fewer core feature updates, but one wishes the slower development cycle would bring increased stability and refinement; actual results have been mixed. MacOS is the system which feels like it ought to be the closest to some imagined finish line, but it also seems to be decaying in its most fundamental qualities — I am having problems with windows losing foregrounding or not becoming focused when they should. Also, why are Notifications still like that?

Whatever the future may bring, what I hope for this WWDC is what I hope for every year: bug fixes and performance improvements. If iPadOS represents one vision for the future of computing and xrOS is another, more distant one, the most mature products in Apple’s line should reflect a level of solidity and reliability not yet possible for its more ambitious ideas.

I believe coverage of its event should reflect that, too. As magnetic as an entirely new Apple product may be, I hope that can be balanced with scrutiny of the updates which affect the billions of devices already in use. After all, these operating systems and devices go hand-in-hand; neither is available without the other. That represents a great deal of trust between vendor and customer in a weakly competitive market. As excited as I am for what is new and what is next, I know my world for the foreseeable future will be tied to what is announced for the products I already own. They are the tools I use for work and play. I need to have confidence in them, which has been dimmed by Apple’s mediocre record for changes. I filed an average of something like three bug reports every week last year solely from a user-facing perspective. I would love to be able to close some of those and, by doing so, feel like the computers I use today are a solid foundation on which the next generation of digital environments will be built.


  1. Like some kind of reality distortion field. ↥︎

The biggest story in tech for the past fifteen years has been the convergence of a bag full of stuff into a single, pocket-sized, take-everywhere product. From its beginnings on the hips of Wall Street types, it rapidly became the best-selling piece of consumer electronics ever — and it is not even a close race.

I mean, of course it is a success without equal. Many of us can leave our houses with scarcely more than our phone and a set of keys, and the latter is becoming optional, too.

But its Jack-of-all-trades status of course implies it is a master of none. And, as great as a smartphone is, there are still things which other devices do better. That argument was the premise for the introduction of the iPad. It is the reason why I drafted the first notes for this on my phone, but I am currently writing it on a laptop. A smartphone is by no means the best camera you can buy, for example, so it is not uncommon to see people carrying a dedicated camera even if they own a smartphone. I am one of those people.

What if there are other categories for which most people currently find a smartphone useful, but which a dedicated device could do a better job? What if the big story in tech for the next fifteen years — aside from the rise of A.I. — is an undoing of this great convergence, at least in part?

This is not entirely speculative; or, at least, not any more so than the future of tech is in general. The device Humane previewed at TED earlier this year is approximately a standalone version of Siri, for example. Whether it will be a success is a good question, and I have doubts. But some people clearly believe there are customers who would buy one of these things to use in addition to a smartphone, if not to replace one entirely.

So, this is an article of mostly guesswork. I have no confidence in this; let us not even call them “predictions”. But there seems to be something worth exploring here and, since this website has no market swaying powers, I feel totally fine with spending a few hundred words thinking more deeply about this.

Back to Humane. Its product looks like an unbundled and perhaps better personal assistant. Smart speakers are already one example of a device extricated from the confines of the smartphone world, and Humane’s product is effectively one which you can wear, having seemingly similar benefits and restrictions. You cannot watch a movie on one, but you can ask it for nearby recommendations or to translate something. It is a peek at a world seamlessly augmented by high technology.

That future is something which is apparently in the works at every giant computer company. Microsoft released a video in 2008 — you can tell it was 2008 because everything is typeset in Gotham — predicting magic translation glasses by the year 2040. Google actually released augmented reality glasses in 2012 without success. Scaled-back attempts at similar devices have been released by Snap and Meta. The latter is also reportedly working on a more capable product, to the point that it staked its very identity on its ability to deliver. Apple might be working on some kind of augmented reality glasses as well.

The devices we have today already allow us a taste of an augmented reality experience. It works fine, I suppose. I have used it to place furniture in my living room and try on eyeglasses. I have also used it to plunk a giant skeleton inside my house.

The devices which have been released after the smartphone seem more specialized than ever. Perhaps that is in part because nearly anything looks more specialized than a smartphone, but there are also whole categories of seemingly niche products. Headphones were barely thought of as a device before the craze for wireless earbuds; the market for advanced fitness trackers and smart watches has been booming for years. These were niche markets, yes, until they were not.

This is what got me thinking about this more deeply: these are products which do not need to do everything better all of the time; they are things which can do a lot of things better some of the time, or a handful of things better a lot of the time.

Products with an increased degree of specialization have business justifications, too, since there are more products to sell. It may be very difficult to beat the smartphone in terms of raw sales of another single product, but it is possible to get similar results in the aggregate. It seems like this would benefit tightly integrated businesses, too.

One reason the smartphone is so popular is because it has become possible to make very good phones for not very much money — partly thanks to standardization, partly thanks to components no longer needing to be cutting-edge to be very good, and partly due to exploitative labour practices. As a result, it has become possible for people across income brackets and around the world to use a smartphone. As remarkable as that may be, it is worth remembering technology is not a panacea. Smartphones will not correct the inequality we see in cities or around the world. That said, these devices have been beneficial in developing regions and for individuals of a wide range of incomes. They are the best-selling devices ever created for a reason: they connect just about everyone. People are able to make a living by selling goods through WhatsApp, and can find jobs and services locally.

It therefore seems unlikely to me for the smartphone to disappear in the near future. But, for some, perhaps it becomes increasingly optional. Perhaps the story for them is of less convergence and more specialization. That was an early vision for the Apple Watch. Maybe some of those ideas, while premature, will finally begin to come to fruition in a more meaningful sense for more of us. For what it is worth, I cannot imagine giving up my iPhone but, then again, I could not imagine how truly great a smartphone could be before I saw one.

This is all going to sound very familiar. It is something society will re-litigate a few times a year until the internet goes away instead of, you know, learning.

On Tuesday evening, the BBC’s James Clayton scored an impromptu interview with Elon Musk, which was streamed live on Twitter Spaces. It ran for about ninety minutes and the most popular clip has been a brief segment in which Clayton pressed Musk on a rise in hate speech:

Clayton: You’ve asked me whether my feed, whether it’s got less or more. I’d say it’s got slightly more.

Musk: That’s why I’m asking for examples. Can you name one example?

Clayton: I honestly don’t — honestly…

Musk: You can’t name a single example?

Musk concludes by calling Clayton a liar. It is an awkward segment to watch because it is clear how unprepared Clayton was for this exchange — but his claim is not wrong.

Mike Wendling, BBC:

Several fringe characters that were banned under the previous management have been reinstated.

They include Andrew Anglin, founder of the neo-Nazi Daily Stormer website, and Liz Crokin, one of the biggest propagators of the QAnon conspiracy theory.

[…]

Anti-Semitic tweets doubled from June 2022 to February 2023, according to research from the Institute of Strategic Dialogue (ISD). The same study found that takedowns of such content also increased, but not enough to keep pace with the surge.

Of course, this followup story appears to be a case where the broadcaster would not only like to correct the record, but also stand behind a reporter who struggled with a line of questioning he, in hindsight, should have anticipated. That may undermine readers’ confidence in it. A reader may also doubt the independence of the ISD, as it counts government agencies, billionaires’ foundations, and large technology companies among its funders. Research like this demands access to Twitter’s API so, if billionaire funders are bothersome, prepare for it to get much worse.

I believe those aspects are worth considering, but are secondary concerns to the findings of the ISD report (PDF). In other words, if there are methodological problems or the study’s conclusions seem contrived, that is a more immediate concern. The ISD concluded by saying two things, and implying one other: Twitter is getting better at removing antisemitic tweets on a proportional basis; Twitter is not keeping up with a doubling of the number of antisemitic tweets being posted; and the total number of antisemitic tweets is still a tiny fraction of the total tweets published daily. That any antisemitic tweets are remaining on the site is obviously not good, but a doubling of a very small number is still a very small number.

The ISD is not the only source for this kind of research. Wendling cites other sources, including the BBC’s own, to make the case that hate speech on Twitter has climbed. Just a few days ago, a preprint study on Musk’s Twitter was released seeking to understand the presence of both hate speech and bots pre- and post-acquisition. Its authors found an increase in both — though, again, still at a relatively low percentage of all tweets.

But even if it is just a handful of posts which violate even a baseline understanding of what constitutes hate speech, it is harmful to the person who is targeted and — if you want the detached business case for it — may have a chilling effect on their use of the platform. From Twitter’s own policies:

We recognize that if people experience abuse on Twitter, it can jeopardize their ability to express themselves. Research has shown that some groups of people are disproportionately targeted with abuse online. For those who identify with multiple underrepresented groups, abuse may be more common, more severe in nature, and more harmful.

We are committed to combating abuse motivated by hatred, prejudice or intolerance, particularly abuse that seeks to silence the voices of those who have been historically marginalized. For this reason, we prohibit behavior that targets individuals or groups with abuse based on their perceived membership in a protected category.

That is one reason why it is so important for platforms to set guidelines for permissible speech, and enforce those rules clearly and vigorously. There will always be grey area, but when a platform advertises the grey area as a space for “difficult” or “controversial” arguments, it will begin to slide. From that preprint study (PDF):

Both analyses we performed show large increases in hate speech following Musk’s purchase, with no sign of hate speech returning to previously typical levels. Prior research highlights the consequences of online hate speech, including increased anxiety in users (Saha, Chandrasekharan, and De Choudhury 2019) and offline victimization of targeted groups (Lewis, Rowe, and Wiper 2019). The effects of Twitter’s moderation policies are thus likely far-reaching and will lead to negative consequences if left unchecked.

The researchers note no causal relationship between any specific Twitter rules and the baseline rise in hate speech on the platform. But Musk’s documented views on encouraging a platform environment with fewer guidelines have effectively done the same work as an official policy change. He is an influential figure who could use his unique platform to encourage greater understanding; instead, he spends his very busy day farting out puerile memes — you are welcome — and mocking anti-racist initiatives.

The efforts of some to minimize hateful conduct as being merely words, or as not being the responsibility of platforms, are grossly out of step with research. These are not new ideas, and we do not need to pretend that light-touch moderation for only the most serious offences is an effective strategy. Twitter may not be overrun with hate speech but, for its most frequent targets, it has an increasing presence.

This happens over and over again; you would think we would learn something. As I was writing this piece, a clip from the Verge’s “Decoder” podcast was published to TikTok, in which Nilay Patel asks Substack CEO Chris Best a pretty basic question about whether explicitly racist speech would be permitted in the platform’s new Notes section. That clip is not the result of crafty editing; the full transcript is an exercise by Best in equivocation and dodging. At one point, Best tries to claim “we are making a new thing […] we launched this thing one day ago”, but anyone can look at Notes and realize it is not really a new idea. If anything, its recent launch is even less of an excuse than Best believes — because it is new for Substack, it gives the company an opportunity to set reasonable standards from the first day. That it is not doing so and not learning from the work of other platforms and researchers is ridiculous.

Here are three relatively recent interactions I have had with independent software developers:

  • In November 2020,1 I suggested a separate display of the optional external_url property for JSON Feeds in NetNewsWire. I was not sure how to program this, but I thought it was a reasonable idea and, fortunately, Maurice Parker and Brent Simmons agreed. Within a week, it was part of the application. (Because this is open source software, I feel comfortable being precise.) For anyone unfamiliar with the property, there is a rough sketch after this list.

  • A reader emailed me with questions about iPhone photography. That gave me an idea, which I sent off to a developer, who responded positively to the suggestion.

  • I encountered a strange bug in a Safari extension. I emailed the developer with specific conditions and a screenshot, and received a reply mere hours later asking for more information. A busy week got in the way of my reply, so the developer emailed again several days later to follow up. I was no longer able to reproduce the bug but it was nice to be reminded.
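
To be clear about what that property does, here is a minimal sketch of a JSON Feed item using it. The feed, URLs, and text are invented for illustration; the field names come from the JSON Feed 1.1 specification.

    {
      "version": "https://jsonfeed.org/version/1.1",
      "title": "Example Linkblog",
      "items": [
        {
          "id": "1",
          "title": "A link post",
          "url": "https://example.com/2020/11/a-link-post",
          "external_url": "https://example.org/the-article-being-discussed",
          "content_text": "Commentary on the linked article."
        }
      ]
    }

The distinction is what makes displaying it separately useful in a feed reader: url points to the post itself, while external_url points to the page the post is about — exactly how link blogs work.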

These are just a few of the numerous pleasant experiences I have had with independent software developers. I cannot say the same is true of big corporate developers — not even close.

Of course, to expect otherwise would only be lying to oneself. The biggest companies in the world do not have the time or staff to handle the feedback from millions — billions — of customers on a personalized basis, so they need to triage. Common questions are handled by a voluminous collection of help desk articles. Bug reports are filed in some database to be compared against known problems. Feature suggestions are evaluated in the context of their effect which, on the scale of millions of users, will always be significant.

All of that is understandable. You all know that. And you also know I am totally preaching to the choir when I say that the more you experience that, the more it sucks the joy out of using a computer. When I buy and use software from an independent developer, it feels like I am establishing a relationship with the person or small team that built it; it feels like we both have a stake in the success of the product. But when I use software made by a massive company, I can feel the power imbalance in the pit of my stomach.

It seems unlikely that we can eschew software made by industry giants, and it may be unwise to try. There are advantages to using the same product as millions of other people for collaboration, common understanding, a common ecosystem, and security. While someone may be able to breach a Gmail account, the chances of them hacking into Gmail are vanishingly small. But just as these products may give us a sense of stability, the best of the independents indulge in the fun, the spirit, and the experimental side of software. They are the soul. The time I spend using any of my computers and devices would be so much worse without the indies.


  1. Okay, so only two “recent” interactions. ↥︎

Posts announcing a redesign are often boring and more than a little redundant. Here are four things I think you should know about this new iteration of Pixel Envy:

  1. I have joined this century and now use web fonts — specifically, I am using the excellent IBM Plex family. I am wary of its connection to a specific corporation, but I like its legibility, its support for oldstyle figures, and its vintage-modern look.

    I am aware some people are not big fans of web fonts, whether for privacy reasons, bandwidth, or preference. I get it. For what it is worth, I serve these files myself instead of offloading that task on some third party I know nothing about. But if you would prefer to use system fonts instead, I have updated my opt out page.

  2. There is now, at long last, full support for dark mode; a rough sketch of how this and the self-hosted fonts work follows this list.

  3. Speaking of things which feel completely outdated, I looked high and low for a current way of making a multi-resource favicon and these decade-old instructions are still the gold standard.

  4. I miss the way old record players and other bits of stereo equipment used specific applications of orange on otherwise near-white or grey objects. Orange is a good colour.
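
For the curious, the mechanics behind the first two items look something like the following. This is a minimal CSS sketch, not my actual stylesheet — the file path, colour values, and variable names are stand-ins:

    /* Self-hosted web font: files are served from this site, not a third party. */
    @font-face {
      font-family: "IBM Plex Serif";
      src: url("/fonts/IBMPlexSerif-Regular.woff2") format("woff2");
      font-display: swap;
    }

    /* Light mode colours, defined as variables. */
    :root {
      --background: #fffdf8;
      --text: #1a1a1a;
    }

    /* Dark mode: swap the variables when the reader's system requests it. */
    @media (prefers-color-scheme: dark) {
      :root {
        --background: #1d1d1d;
        --text: #e9e5dd;
      }
    }

Because prefers-color-scheme keys off the reader’s system preference, dark mode needs no toggle and no JavaScript.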

Let me know if anything is catastrophically broken. I am expecting problems with the odd article which uses images, but because there are so few, I am not too bothered, and will update as needed.

I am a fool.

When I linked to Josh Hill’s heartbreaking story of massive data loss in iCloud Photo Library, there was something I neglected to mention. Something I have been keeping secret for the past few years: my photo library outgrew my desktop Mac’s internal storage — the Mac I do all my photo editing on — and I did what nobody is supposed to do. I told Photos to optimize my Mac’s storage.

Yes, for the past few years, the only full copy of my photo library has been in iCloud and, yes, it has worried me just about every day since I changed that preference. This was a very stupid, very bad idea for someone who apparently cares about their photo library, and who has already experienced the pain of massive data loss.

We all have our flaws.

The day after I read Hill’s story, I ordered another external SSD, this time in a ghastly shade of blue — $90 seems like a steep price to pay for the more tasteful beige finish — and it arrived shortly thereafter. The two terabyte model gives me enough space for a local copy of my entire library, plus room to grow. I followed Apple’s documentation to move my photo library over and it was mostly straightforward. I do not need to bore you with tiny details. There are two things which surprised me:

  1. When you set a new photo library as the system default, you will see a warning message appear if you use iCloud features. It says “any photos and videos that have not been fully downloaded will be removed from this Mac”, which makes it feel like a destructive action is about to happen. But that media is, theoretically, in the cloud, so it will be re-downloaded later.

  2. After changing the system library location, Photos says “This Library isn’t searchable in Spotlight due to its location”. Apple says:

    The enhanced Spotlight Search can locate items in the System Photo Library. If you use other libraries, Spotlight does not locate items in those.

    So I assumed this message would disappear after my Mac figured out I had moved its library. A week later, it has not disappeared and images from Photos are, indeed, not searchable in Spotlight. Apple’s documentation implies Spotlight will work for whichever library is the system one, but the message in Photos implies that libraries stored on external drives will not be indexed.

I wish both of these things were clearer, but not as much as I do the status of media which has not been downloaded.

My Mac had been dutifully downloading tens of thousands of original media files from iCloud until earlier this week, when it decided to stop. The only information I have is a message in Photos saying there are 42 originals not yet downloaded — but which ones are missing is anyone’s guess. Photos has Smart Albums but, unlike Music, it has no filtering criterion for whether the original file has been downloaded. There does not appear to be any logging, nor any status window. While writing this paragraph, I can see the library file slowly increasing in size; however, the number of original files remaining to be downloaded has not budged.
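
The bluntest workaround I can think of, in the absence of any real progress indicator, is watching the library bundle grow from the command line. A minimal sketch, assuming a hypothetical path on the external drive:

    # Print the size of the Photos library once a minute.
    # The path is illustrative; substitute wherever your library actually lives.
    while true; do
      du -sh "/Volumes/External/Photos Library.photoslibrary"
      sleep 60
    done

It says nothing about which originals are outstanding, but it at least confirms whether anything is still being downloaded.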

Apple does not provide much guidance. If I have exhausted the steps in the iCloud Photos help document and the Photos for Mac guide, I can only try using the opaque library repair tool. Beyond Apple’s documentation, the only troubleshooting ideas I can find for this issue are time- and data-consuming. I am told I should try exporting all of my photos as original files which will force Photos to ensure all originals are downloaded, but this is impractical to do for an entire library. If that does not work, I can delete my local library, sign out of iCloud and then back in again, and trigger a library rebuild. None of these options makes much sense for a library of over 70,000 photos totalling 1.3 terabytes.

Happily, after repairing my library and waiting for it to reconcile with iCloud, it turned out there were only 21 missing original media files which needed a local copy, and they appear to have downloaded. I still do not know what they were. I only have myself to blame for getting to this point. Even so, the lack of any way for me to figure out which items are only in iCloud and not on my local drive is a baffling omission. It is not quite a silent failure but it is in the spirit of one, where Apple seems to have assumed that its software will perform correctly and users should never need to intervene. In the real world, I just wanted to know what it was waiting on.

Today is apparently World Backup Day and I am happy to have a local copy — or nearly so — of my photo library. Not only does it mean these precious images are stored on my own drive, it also means they can be backed up — in my case, to Backblaze, like everything on my Mac.1 Automatic backups are critically important. My photo library was the only thing I was not truly backing up, and the past few years of having just one copy has been an unnecessary source of stress in my life. After reading this article, I imagine you may be feeling similarly worried about anything you have not backed up. Think of it this way: re-creating your most important documents and rebuilding your local music library would be time consuming at best, but remaking your photo library is impossible.

We have all heard it countless times, but it bears repeating: our priorities are reflected in what we actually do with our time. Backups cost money, it is true. But seeing as most of our really important stuff is entirely digital and often hosted in someone else’s cloud, it is imperative that we keep our own copies and perform our own backups. Software and services need a warranty; until they have one, the responsibility for protecting the data we value rests entirely with us. That is the best we can do.


  1. If you want to sign up for Backblaze, using my affiliate link will lower the cost of my subscription. ↥︎

Reddit user “horizontalhole” discovered something curious:

From 2017–2022 the Vatican flag SVG on Wikimedia Commons contained a mistake. You can now tell which flag manufacturers/emoji platforms used the file.

I found this post via the Depths of Wikipedia Twitter account. Once you see the most noticeable difference — the tiara in the Wikimedia Commons version is filled red, while the tiara on the official flag is filled white — you begin to see examples everywhere. It seems a number of people in Iraq in 2021 were waving the Wikimedia version because a print shop produced a batch of them. It is also present in images from Thailand, Switzerland, and crowds at the Vatican itself.

But here is where things got weird: the version with a red-filled tiara also appeared on the Popemobile during a visit to Peru, was held outside the Pope’s plane in Rome, was raised outside the Vatican embassy in Italy, and hung at a Catholic Conference building in the United States. In addition to the red tiara, these flags have the brighter yellow fill in the key of the 2017 Wikimedia variant. While members of the public may have purchased a faulty flag, it seems unlikely to me that representatives of the Vatican would be using a knockoff. And if you go back far enough in the Getty Images archive, you will see the red variant in photos from Mexico City in 2016 and outside the United Nations in 2015 — both taken before the flag on Wikimedia was changed to the apparently incorrect version.

It gets stranger still. One official Vatican webpage about the flag’s history shows the tiara filled white but the cord element below the keys in red, which differs from another official Vatican page, where the cord is shown in white. A translated version of the latter page says nothing about what colour each element is supposed to be, aside from the yellow and white of the field.

Luckily, there is a definitive book by Rev. William M. Becker about the flags of the Vatican. On page 99, there is a picture of the flag from an appendix to the Vatican’s 2000 Fundamental Law, about which the Vatican says:

The flag of Vatican City State is constituted by two fields divided vertically, a yellow one next to the staff and a white one, and bears in the latter the tiara with the keys, all according to the model which forms attachment A of the present Law.

So it is settled, right? The version shown — which has a tiara in white, red-filled corded elements, and gold colours which differ from the yellow of the field — is the only official flag of Vatican City. Case closed?

Nope. On page 103, Becker writes:

State flags flown by Vatican buildings follow the basic constitutional design, but vary widely in details such as proportions, color shades, and emblem details. […]

Indeed, Becker includes a series of flags from the 1980s through 2013 with variations in the colours used for the keys, the cord element, and the tiara. Even well past the publication of what the Vatican deemed its official flag, versions shown in and around Vatican City differ in the shading of each of these elements. Becker goes on to write that these “variations suggest that Vatican authorities could clarify the flag’s details more precisely”, and laments how “local flagmakers often rely on questionable sources (e.g., Wikipedia)” (106).

It does seem that, officially, the version of the Vatican flag with a white-filled tiara is the most correct option. But even within Vatican City itself, and in official use, there is considerable variation. Perhaps most relevant to the original post, it is not necessarily true that a Vatican flag with a red-filled tiara derives from the 2017–2022 Wikimedia image. However, now that a more correct version sits in the world’s most-used encyclopaedia, this may yet prove a productive case of citogenesis.