Month: November 2021

Ben Thompson:

Given that impact, I can see why Elliott Management would look at Twitter and wonder why it is that the company can’t manage to make more money, but the fact that Twitter is the nexus of online information flow reflects the reality of information on the Internet: massively impactful and economically worthless, particularly when ads — which themselves are digital information — can easily be bought elsewhere.

[…]

So let’s review: there is both little evidence that Twitter can monetize via direct response marketing, and reason to believe that the problem is not simply mismanagement. At the same time, Twitter is absolutely essential to a core group of users who are not simply unconcerned with the problems inherent to Twitter’s public broadcast model (including abuse and mob behavior), but actually find the platform indispensable for precisely those reasons: Twitter is where the news is made, shaped, and battled over, and there is very little chance of another platform displacing it, in large part because no one is economically motivated to do so.

Given this, why not charge for access?

I tell you, when Thompson gets it, he really gets it. “Massively impactful and economically worthless” could be etched in the stone foundation of Twitter’s headquarters.

I happen to be one of the truly sick freaks whose favourite social network is Twitter, and I think that is true in part because I do not really understand it. Facebook, Instagram, TikTok, Pinterest — I get what they do and what they are for. But Twitter? It is brief bursts of shouting, shared links, photos, reply guys, and automated feeds. It is chaotic.

I like it so much that I would pay $50 a year to be a member — no joke. And I do not want more half-steps like Twitter Blue. Just charge me for access and I am sure I would pay it, as I would have at any point in the fourteen years I have had an account with the site.

Eliot Brown, Wall Street Journal:

In addition to the three fireball incidents, Reef has faced multiple citywide shutdowns over permitting and other regulatory violations, challenges connecting to local utilities, higher-than-expected costs and a labor shortage, said former executives and managers. Many former employees described the environment at Reef as chaotic.

[…]

Reef stands out among ghost-kitchen startups given its large amount of funding — over $1.5 billion — as well as its business model. While competitors tend to rely on large shared kitchens for numerous restaurant brands, Reef’s strategy is focused on putting trailer-size kitchens in parking lots near residential areas.

As a reminder, Reef Technologies is the product of two parking companies — ParkJockey and Impark — owned and generously funded by SoftBank to transform empty stalls into pop-up versions of city amenities. I used to live near one of these trailers: calling it a “ghost” kitchen is apt, given how busy it was with vehicle traffic while lacking any of the presence or warmth you would associate with a restaurant. On any night, I could watch delivery drivers pull up in a little hatchback, throw on their hazards, and rush to the fluorescent glow of that trailer in the middle of an otherwise-empty surface lot to fetch someone’s chicken wings and Coke.

All of this is brought to you by parking companies that squat on valuable downtown blocks, thereby helping make cities less friendly, less walkable, and less connected.

Twitter’s press release:

Twitter, Inc. today announced that Jack Dorsey has decided to step down as Chief Executive Officer and that the Board of Directors has unanimously appointed Parag Agrawal as CEO and a member of the Board, effective immediately. Dorsey will remain a member of the Board until his term expires at the 2022 meeting of stockholders. Bret Taylor was named the new Chairman of the Board, succeeding Patrick Pichette who will remain on the Board and continue to serve as chair of the Audit Committee. Agrawal has been with Twitter for more than a decade and has served as Chief Technology Officer since 2017.

Jack Dorsey tweeted a screenshot of his internal announcement email — unfortunately, without alt text. Dorsey has held the CEO title since he reclaimed it from Dick Costolo on July 1, 2015. In February 2020, Paul Singer’s Elliott Management firm made a sizeable investment in Twitter. Elliott’s goals? To replicate the “Stories” format made so popular by Snapchat and Instagram — which went spectacularly — and to get rid of Dorsey as CEO.

While serving as CTO, Agrawal has been shepherding Twitter’s “Bluesky” protocol efforts. Interesting days ahead.

Update: The now-routine context-free old tweet ransacking has begun for Agrawal.

Over a week ago, I requested a copy of my personal data from Amazon, after a few journalists reported some surprising finds in theirs. I am a very light user of Amazon’s services, so I did not expect anything remarkable, but I was curious.

Well, I just got a copy of it this evening, and the most surprising thing was how it was delivered. When you request a copy of your data from another company, it typically takes a few hours or perhaps a few days to become available. Apple says “up to seven days”;1 Google says “possibly hours or days”; Twitter says “24 hours or longer”.

Amazon does not promise to turn around its files nearly as quickly. It says that it can take up to thirty days to create the exported data. When it does become available, you are presented with a list of individual downloads labelled and categorized by function — in mine, there were 57.

And there is no “download all” button.

Oh, and none of the download buttons are actually direct links to each file; instead, each one links to an HTML page that fetches the correct download, which means you cannot save the files to a specific folder on your computer.

Remember, I am a light Amazon user, so mine mostly consisted of retail-related files, like my search history, order history, and payment data on file including the last four digits of credit cards. I was a little surprised to see a copy of every order status email Amazon has ever sent me, along with a database of the read status of each one.

Otherwise, there is very little to report, and I probably would not have written anything if the download process were not hysterically cumbersome. I just do not understand why all of this was not delivered as a single zip file. It is like I am being punished for having the audacity to request my data.
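For what it is worth, bundling a pile of export files into a single archive is a trivial amount of work. Here is a minimal sketch, assuming the individual files have already been saved into a local folder; the folder name and layout are hypothetical, not anything Amazon provides:

```python
# Minimal sketch: bundle an already-downloaded data export into a single zip.
# Assumes the individual files were saved into ~/Downloads/amazon-export/,
# which is a hypothetical location, not anything Amazon provides.
from pathlib import Path
from zipfile import ZIP_DEFLATED, ZipFile

export_dir = Path.home() / "Downloads" / "amazon-export"
archive = export_dir.with_suffix(".zip")

with ZipFile(archive, "w", compression=ZIP_DEFLATED) as zf:
    for file in sorted(export_dir.rglob("*")):
        if file.is_file():
            # Keep the category subfolders intact inside the archive.
            zf.write(file, arcname=file.relative_to(export_dir))

print(f"Bundled export into {archive}")
```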


  1. I did not request a copy of my iCloud Photos, iCloud Drive, or iCloud email inbox. The promise might be different had I asked for all of that. ↥︎

BuffaloCoward on Reddit, commenting on how often Apple Maps suggests doing a u-turn when using it for navigation directions:

I’ve always found u-turns to be stressful, and I try and avoid them if possible. Apple doesn’t know the turning radius of my car, I just want to turn u-turns off like you can toggle avoid toll roads or highways.

And the thing is, if you’re navigating and miss a turn the thing just turns into a[n] infinite loop of u-turns. No matter how many you skip it’ll just tell you to make another u-turn.

In Alberta, it is illegal to make a u-turn in many circumstances, including at any intersection with traffic lights. Even so, Apple Maps will demand you make a u-turn if you deviate from the route it has selected; it is the only way it seems to know how to return you to the route. Because I am not interested in committing traffic offences, if I miss my turn, I will make three rights in a row to get around the block, and then make a left turn to get back to where I started. That is a completely sensible alternative that Maps simply will not suggest, nor does it ever seem to re-route in a way that will let me follow the current road to a different intersection.

I was inspired by BuffaloCoward’s post, but where I disagree is in making u-turns an option. Apple Maps should simply be better at re-routing. A u-turn should be a last resort measure, only suggested when any other option would be either impossible or require an implausibly longer route. Every other way of changing my route after a missed turn is preferable to a u-turn and should be evaluated first.
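To illustrate the kind of preference I mean, here is a hypothetical sketch of how a router might score candidate detours, penalizing u-turns so heavily that they are only chosen when nothing else is remotely comparable. The weights and the Route shape are made up for illustration; none of this reflects how Apple Maps actually works.

```python
# Hypothetical sketch of last-resort u-turn routing; not how Apple Maps works.
from dataclasses import dataclass

@dataclass
class Route:
    description: str
    extra_seconds: float      # added travel time versus the original route
    requires_u_turn: bool

# A made-up penalty: treat a u-turn as if it cost ten extra minutes, so it
# only wins when every legal alternative is implausibly long.
U_TURN_PENALTY_SECONDS = 600

def score(route: Route) -> float:
    penalty = U_TURN_PENALTY_SECONDS if route.requires_u_turn else 0
    return route.extra_seconds + penalty

def best_reroute(candidates: list[Route]) -> Route:
    return min(candidates, key=score)

candidates = [
    Route("u-turn at the next signal", extra_seconds=45, requires_u_turn=True),
    Route("three rights around the block", extra_seconds=120, requires_u_turn=False),
    Route("continue to the next intersection and rejoin", extra_seconds=90, requires_u_turn=False),
]

print(best_reroute(candidates).description)
# -> "continue to the next intersection and rejoin"
```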

Good luck to the team trying to get Project Titan out the door in four years.

On Friday, while Apple was opening a store in Los Angeles, KTLA reporter Rich DeMuro spoke with Tim Cook for a few minutes and asked about the self-repair program. Cook:

Well, we realized that there were some people that wanted to do this [self-repair] and that are trained to do this. You know, they’re the Popular Mechanics crowd, if you will, which I love and have been focused on my entire life. So it feels good to put the manuals out there and get the parts out there — that enables people to do this. Still, if you’re not comfortable doing that, we encourage you to come in the Apple Store and get it done for you. That’s still the best way, perhaps, for most people. But if you’re a technician, then have at it. You’re able to do it yourself.

I am sure Popular Mechanics appreciates the shout-out.

MacRumors obtained an internal memo with a few more details. Joe Rossignol:

Apple’s memo also said that its online parts store will be operated by an unspecified third party. While no official reason was provided, it would certainly be logistically easier for Apple to outsource shipping and receiving of parts to and from customers. A similar system is already in place for Apple Authorized Service Providers.

It makes sense to operate the parts store for end users as a branch of the Authorized Service Providers’ store. Still, it is a little curious for a company as vertically integrated as Apple. The third-party sourcing for authorized repair shops is news to me.

Jeffrey Dastin, Chris Kirkham, and Aditya Kalra, Reuters:

Amazon’s lobbying against privacy protections aims to preserve the company’s access to detailed consumer data that has fueled its explosive online-retailing growth and provided an advantage in emerging technologies, according to the Amazon documents and former employees. The data Amazon amasses includes Alexa voice recordings; videos from home-camera systems; personal health data from fitness trackers; and data on consumers’ web-searching and buying habits from its e-commerce business.

Some of this information is highly sensitive. Under a 2018 California law that passed despite Amazon’s opposition, consumers can access the personal data that technology companies keep on them. After losing that state battle, Amazon last year started allowing all U.S. consumers to access their data. (Customers can request their data at this link.) Seven Reuters reporters obtained and examined their own Amazon dossiers.

Even setting aside its massive cloud computing business, it is staggering to imagine how much information Amazon has access to about its users, given its historically poor internal controls. For its heaviest users — Prime members who have Ring doorbells, Alexa devices in every room, read their Kindle most nights, and shop at Whole Foods — Amazon has a more-or-less complete picture of their lifestyle.

I am a very light Amazon user, with just one order made in 2021, and six in 2020. I do not have any Alexa or Kindle devices, and have never shopped at Whole Foods. So I was a little surprised when I requested my data on November 19 and was told that it would take up to a month for them to produce a copy. I delayed writing about this story because I wanted to have a copy of my own data in hand, but it has been five days and I have not received anything. Every other large technology company has produced a copy of my data within hours of my making the request, and even the slowest information brokers have taken just a couple of days. Is Amazon relying on an entirely manual process?

Some of the examples cited by Reuters are a little weak on their face:

Alexa devices also pulled in data from iPhones and other non-Amazon gear – including one reporter’s iPhone calendar entries, with names of people he was scheduled to contact.

I am not sure it is newsworthy that Alexa devices need to know about users’ calendar entries in order to respond to queries like “what time is my meeting with Leslie?”, for example. But perhaps it should be — if even this reporter was not aware of how much information a smart speaker needs to ingest and share with Amazon’s servers, it is understandable that it can feel like an invasion of privacy. If something can be done locally, it probably ought to be.

One more thing:

As executives edited the draft, Herdener summed up a central goal in a margin note: “We want policymakers and press to fear us,” he wrote. He described this desire as a “mantra” that had united department leaders in a Washington strategy session.

This is a terrible goal to even suggest in a margin note, and it is indicative of the kind of ruthless work culture that urgently needs to die.

Will Evans, Wired:

Around the tail end of 2016, a guy named Gary Gagnon — a cybersecurity executive with decades of experience, primarily in federal government work — flew to Seattle to discuss becoming Amazon’s new vice president of information security. His last interview of the day was with Wilke, the consumer CEO, who met Gagnon in a small conference room off of his modest office, dressed in a flannel button-down and jeans. The outfit was part of a tradition, Gagnon recalls Wilke explaining: He always dressed like a warehouse worker during the peak holiday shopping season, to remind folks at headquarters of the people who really kept Amazon churning.

[…]

As he settled into his new role, Gagnon quickly realized that all was not well with “information security” — as he was urged to call it — at Amazon. The size of the company’s network was astounding, but “it was all put together with tape and bubblegum,” a tangle of old and new software, Gagnon says. “It grew up out of a garage and it just kept going from there.” New consumer products were locked down with the utmost secrecy before launch, Gagnon says. But otherwise it seemed like everyone on the network had access to nearly everything, including customer information — and yet there was no insider threat program dedicated to preventing rogue employees from abusing their access while he was there. More fundamentally, he says, the team didn’t seem to have any systematic way of prioritizing its biggest security risks. “It was shocking to me,” Gagnon says.

Every section of this article is a gripping story of internal failures, corruption, and weak excuses. According to Evans’ reporting, Amazon prioritized growth to such an extent that even basic internal privacy controls were not implemented, and tens of thousands of employees had access to far more information than their jobs required. Customer details were routinely scavenged and sold, sometimes finding their way into the hands of sketchy third-party firms that blended together several data sources. Evans compares this to Facebook’s Cambridge Analytica scandal a little too often for my liking.

Yet, despite this exhaustive look at Amazon’s internal practices, Gagnon’s fate somehow gets only a passing mention. He was reportedly fired after a conference in London in circumstances “under dispute”. There is plenty more room for detail and it appears that Evans interviewed Gagnon, but we get no more information than Amazon’s acknowledgement of his termination. Strange.

Apple:

Apple today filed a lawsuit against NSO Group and its parent company to hold it accountable for the surveillance and targeting of Apple users. The complaint provides new information on how NSO Group infected victims’ devices with its Pegasus spyware. To prevent further abuse and harm to its users, Apple is also seeking a permanent injunction to ban NSO Group from using any Apple software, services, or devices.

NSO Group is one of four companies recently added to a list maintained by the U.S. Department of Commerce, which prohibits any U.S. company from selling products or services to NSO Group without U.S. government approval. If it were also legally prohibited from using any of Apple’s products or services, it would surely put a damper on the company’s ability to operate, though it would only be a little bit surprising if NSO Group managed to acquire devices through another route.

A copy of Apple’s complaint is available on CourtListener. This is the second time this legal strategy has been used against NSO Group — Facebook sued it in 2019. The “new information” about how this spyware works mostly appears to be these paragraphs from the suit:

On information and belief, Defendants created more than one hundred Apple IDs using Apple’s systems to be used in their deployment of FORCEDENTRY.

On information and belief, after obtaining Apple IDs, Defendants executed the FORCEDENTRY exploit first by using their computers to contact Apple servers in the United States and abroad to identify other Apple devices. Defendants contacted Apple servers using their Apple IDs to confirm that the target was using an Apple device. Defendants would then send abusive data created by Defendants through Apple servers in the United States and abroad for purposes of this attack. The abusive data was sent to the target phone through Apple’s iMessage service, disabling logging on a targeted Apple device so that Defendants could surreptitiously deliver the Pegasus payload via a larger file. That larger file would be temporarily stored in an encrypted form unreadable to Apple on one of Apple’s iCloud servers in the United States or abroad for delivery to the target.

One of the minor privacy flaws of iMessage is that it will automatically tell you whether someone else has enabled it. All you have to do is type an email address or a phone number into the “To:” field in Messages; if it turns blue, it is an iMessage account and, therefore, associated with an Apple ID and an Apple device. In a vacuum, this is not very meaningful, but it appears that NSO Group was using a similar technique to figure out where to send its spyware.
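As a rough illustration of why that kind of lookup is useful for reconnaissance, consider an attacker holding a list of phone numbers and email addresses, plus any service that answers “is this handle registered?” The sketch below uses a made-up is_registered function and a pretend directory as stand-ins; it is not Apple’s actual interface, and the complaint does not spell out the exact mechanism NSO Group used beyond contacting Apple’s servers.

```python
# Hypothetical sketch of abusing an account-existence lookup for target
# reconnaissance. REGISTERED is a pretend directory standing in for any
# service that reveals whether a handle has an account; it is NOT Apple's
# actual interface.

REGISTERED = {"+15551234567", "journalist@example.com"}  # made-up data

def is_registered(handle: str) -> bool:
    # In a real attack this would be a network lookup; here it is a set check.
    return handle in REGISTERED

def find_targets(handles: list[str]) -> list[str]:
    # An attacker only bothers delivering an exploit to handles that resolve
    # to an account on the target platform.
    return [h for h in handles if is_registered(h)]

print(find_targets(["+15551234567", "+15557654321", "journalist@example.com"]))
# -> ['+15551234567', 'journalist@example.com']
```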

Perhaps not as headline-making is this announcement:

Apple is notifying the small number of users that it discovered may have been targeted by FORCEDENTRY. Any time Apple discovers activity consistent with a state-sponsored spyware attack, Apple will notify the affected users in accordance with industry best practices.

I cannot find any reports of Apple notifying potential victims of state-sponsored attacks, so this appears to be a new policy. Twitter was doing this in 2015, and Google in 2012.

Update: As of November 24, Apple is now alerting possible targets. Ewa Wrzosek, a prosecutor in Poland, shared screenshots of what one of those warnings looks like. Wrzosek was notified by iMessage; others were sent emails.

The news was first erroneously summarized and mocked as Spotify removing the shuffle button from album pages at Adele’s request; the accurate announcement is far more tame.

Andrew Paul, Input:

First off, the change really only affects Premium users (surprise surprise), so all you plebians not paying your monthly Spotify tithes will still suffer the shuffle. Secondly, it remains easy to enable shuffling album tracks by going to the “Now Playing View” and selecting the shuffle icon. So yeah, less a “take shuffle button off all album pages” as the BBC says, and more a “Premium users get a slightly more streamlined method to play album tracks in order.”

Albums used to default to playing in a shuffled order on Spotify, and now they play according to the album sequence — that is the change. Frankly, it is long overdue and it seems silly to me that shuffle was ever the default for this particular play mode. Playlists? Sure. Albums? No way. Individual users should be able to choose if they wish to play an album on shuffle, but it is disrespectful to the art for a platform to make it the default behaviour.

I have read many of the stories about this change, and it still seems unclear how much Adele had to do with it. Alison Foreman of Mashable reported that Adele’s new record was first to receive the album-ordered default playback behaviour; but, when Adele tweeted about it, she quoted a story about how this applied to all albums. I am mostly sure it is not coincidental that the change in default behaviour rolled out the same weekend as Adele’s new record, but it does not seem certain that it came explicitly at her request, either.

In 2015, Facebook launched Instant Articles, which is sort of its version of Google’s Accelerated Mobile Pages format in the sense that it is a phone-first, fast-loading, proprietary webpage format. It allowed Facebook to capture the ads displayed on those pages.

Karen Hao, MIT Technology Review:

Instant Articles quickly fell out of favor with its original cohort of big mainstream publishers. For them, the payouts weren’t high enough compared with other available forms of monetization. But that was not true for publishers in the Global South, which Facebook began accepting into the program in 2016. In 2018, the company reported paying out $1.5 billion to publishers and app developers (who can also participate in Audience Network). In 2019, that figure had reached multiple billions.

Early on, Facebook performed little quality control on the types of publishers joining the program. The platform’s design also didn’t sufficiently penalize users for posting identical content across Facebook pages — in fact, it rewarded the behavior. Posting the same article on multiple pages could as much as double the number of users who clicked on it and generated ad revenue.

Clickbait farms around the world seized on this flaw as a strategy — one they still use today.

You may quibble with Hao’s use of the term “clickbait”; if so, feel free to replace it with something like “low-quality publishers” in your head. The results are the same.

Hao’s reporting is strong and I recommend this article, but it can also be seen, in part, as an updated and consolidated version of stories published since Instant Articles debuted:

  • In 2016, Kyle Chayka reported for the Verge that the generic and consistent layouts of pages powered by AMP and Instant Articles made it hard to distinguish between legitimate news sources and sketchy blogs.

  • In 2017, Sarah Perez wrote for TechCrunch about how Facebook would begin ranking faster-loading pages higher in users’ News Feeds, a decision that incidentally benefitted Instant Articles. Facebook says that Instant Articles are “ranked in News Feed by the same criteria” used for any other page.

  • Notably, Jane Lytvynenko reported for Buzzfeed News in 2018 that Instant Articles were gaining adoption among disreputable publishers. They also used Facebook’s advertising technology.

By advantaging their own formats — however incidental they may claim that advantage to be — while eschewing moderation, Google and Facebook must, in my eyes, be held at least partially responsible for the misinformation they helped fund and spread. I do not mean that in a legal sense; I am not a lawyer. But their moral culpability for this should be attached to them for as long as we think about them.

Mark Gurman, Bloomberg:

For the past several years, Apple’s car team had explored two simultaneous paths: creating a model with limited self-driving capabilities focused on steering and acceleration — similar to many current cars — or a version with full self-driving ability that doesn’t require human intervention.

Under the effort’s new leader — Apple Watch software executive Kevin Lynch — engineers are now concentrating on the second option. Lynch is pushing for a car with a full self-driving system in the first version, said the people, who asked not to be identified because the deliberations are private.

[…]

Apple is internally targeting a launch of its self-driving car in four years, faster than the five- to seven-year timeline that some engineers had been planning for earlier this year. But the timing is fluid, and hitting that 2025 target is dependent on the company’s ability to complete the self-driving system — an ambitious task on that schedule. If Apple is unable to reach its goal, it could either delay a release or initially sell a car with lesser technology.

This is the project I am most doubtful of — not just from Apple, but from the entire industry. I will believe in the possibility of a fully autonomous car when I see one driving like a human would in mixed weather conditions, construction zones, gravel roads, and twisty mountain passes — not until then.

Other companies have loudly trumpeted their attempts at autonomous vehicles with not great results, but Apple has, as you would expect, kept its efforts mostly to itself. I wonder how it is getting along. Gurman reports that a key milestone has been achieved that puts it on the path to launching in the foreseeable future, but I still cannot shake my doubts. It is not because of what we have seen from Tesla or Waymo or others; I think the best way to view Apple is through its own work. And that is a big problem because its history of automation, cartography, and machine learning has not been encouraging. From the company that brought you Apple Maps and Siri is not a great tagline for a vehicle weighing many tonnes and travelling at high speeds with only its own programming to guide it.

But if 2025, or even 2030, is seen internally as a reasonable timeframe for public availability of this thing, it can only be seen as a promising project. I refuse to be anywhere near one — inside or out — until it has proved its capabilities, but this is intriguing.

Matt McFarland, CNN:

I’d spent my morning so far in the backseat of the Model 3 using “full self-driving,” the system that Tesla says will change the world by enabling safe and reliable autonomous vehicles. I’d watched the software nearly crash into a construction site, try to turn into a stopped truck and attempt to drive down the wrong side of the road. Angry drivers blared their horns as the system hesitated, sometimes right in the middle of an intersection.

The Model 3’s “full self-driving” needed plenty of human interventions to protect us and everyone else on the road. Sometimes that meant tapping the brake to turn off the software, so that it wouldn’t try to drive around a car in front of us. Other times we quickly jerked the wheel to avoid a crash. (Tesla tells drivers to pay constant attention to the road, and be prepared to act immediately.)

Watch the video in which CNN editor Michael Ballaban drives — well, is present in — this thing. It looks terrifying. I am not sure about you, but I would prefer to be in control at all times, rather than relying on partial automation while I maintain a driving level of focus so I can rescue the car when it screws up.

There are caveats, certainly. This is beta software, and it is certainly impressive that it can do some basic driving on its own. But this is not a self-driving car — not even close.

Jack Wellborn:

I also get why people are excited about Microsoft in general. This new Microsoft surprises and delights by doing things that old Microsoft would never consider. They have Visual Studio for the Mac. They make PC hardware. They even include Linux support in Windows. This new Microsoft is exciting and different, but they’ve also been around long enough to show us who they are. Nothing exemplifies that more than Windows on ARM. I think it’s great Microsoft has spent five years pushing Windows on ARM, but no one in their right mind could say they’ve been as successful at it when compared to what Apple has just accomplished. The tech community likes to pretend that Windows on ARM and the Surface Pro X are viable, if not flawed, options when they’re really not.

Wellborn’s selection of quotes from enthusiastic press coverage of Microsoft’s lukewarm ARM efforts reminded me to go look for some reactions to the early rumours and the announcement that Apple would be switching to its own processors. I want to do this not just because these things are funny to read in hindsight, but also because they illustrate why media and analyst coverage often gets this stuff wrong in the first place — especially when it comes to Apple.

Let me take you back to springtime of 2018. Perennial speculation of a shift away from Intel processors in the Mac seemed to be confirmed when Ian King and Mark Gurman of Bloomberg reported on the in-progress transition. A flurry of responses from columnists and reporters followed.

Brian Barrett of Wired seemed to think the architecture shift would necessarily include radical software changes:

Apple could also find users flummoxed at its attempt at the MacOS-iOS mashup that would apparently accompany an ARM transition. It wasn’t so long ago, after all, that Microsoft flamed out spectacularly when it attempted to bring a mobile UI to the desktop in Windows 8, an overhaul that left users feeling mostly confused and annoyed. And while Cupertino has already made some adjustments to give its desktop and mobile operating systems some common ground—its Apple File System, introduced last spring, works across both—it will have to combat years of ingrained expectations about how Apple devices behave.

In fairness, Barrett called his imagined list of problems likely to arise during this transition “surmountable”. But, still, it is a list of doom-and-gloom thoughts about how hard it would be for Apple to move away from Intel, with the assumption that its own processors could only suit the most lightweight and entry-level uses, sort of like ARM laptops that run Windows.

Samuel Axon of Ars Technica speculated that Macs running on Apple’s processors could fit in the lineup like ARM-based Microsoft Surfaces, at least at first (emphasis mine):

While it makes sense for Apple to start sailing on this journey now, it likely won’t arrive at its destination (total independence from Intel) for several years—likely well beyond the 2020 date that Bloomberg names as the earliest launch window for a first Intel-free Mac. If an Apple-chip-powered Mac arrives in 2020, it could be a specialized product in a Mac lineup that still mostly includes Intel-based computers.

Joel Hruska of ExtremeTech was worried about entirely the wrong customer base:

But it’s genuinely surprising that Apple would choose to abandon CPU compatibility given the significant impact x86 had on its Mac product lines. Mac adoption rates shot upwards once people knew their hardware would be seamlessly compatible with Windows. Walking away from that same compatibility now seems foolish, at least as far as good customer support is concerned.

Windows on ARM theoretically presents a solution to this problem, but the WoA OS is limited to 32-bit applications, with no support for x86 drivers, Hyper-V, and limited API compatibility. Supposedly this transition won’t take place before 2020, which gives MS and Apple another 20 months to get their ducks in a row, but 20 months isn’t actually all that much time to perfect cross-OS compatibility, especially not if the goal is to add better and more robust support for 64-bit applications and various types of system drivers.

These analyses have many flaws, but one thing they share is the idea that Microsoft tried similar things and failed, so why should Apple be any different? I buy the argument that Apple’s attempt should not have been deemed a success until the company proved its bona fides, but these predictions are ludicrous: MacOS is nearly the same on Intel and M1 processors, Apple was not timid with its M1 introduction, and I do not imagine the question of Windows’ availability made anyone at Apple blink.

After Apple announced the transition at WWDC 2020 — but, critically, before it announced or shipped any consumer hardware — Alex Cranz of Gizmodo speculated on the company’s motivations with the help of an analyst:

Profits are the likely motivation behind Apple’s biggest moves — for any publicly-traded company’s biggest moves — even when those moves have altruistic outcomes like improving customer privacy. And Apple’s main profit driver is vertical integration: the practice of keeping as many elements of a supply chain in-house as possible to drive down costs, increase revenue, and maintain a hold on the markets it dominates.

“Apple hasn’t been very successful over the past five years with the Mac and most of the innovation has come from Windows vendors,” analyst Patrick Moorhead told Gizmodo. “I think Apple sees vertical integration as a way to lower costs and differentiate. We’ll see. It’s a risky and expensive move for Apple, and right now I’m scratching my head on why Apple would do this. There’s no clear benefit for developers or for users, and it appears Apple is trying to boost profits.”

Apple undoubtedly likes vertical integration, and not just for cost reasons. (By the way, have you noticed how often columnists and analysts write about Apple’s ostensible desire for control as though control itself is the goal, but rarely define the possible motivations for choosing to build integrated devices instead of collections of parts?) Moorhead’s inability to see benefits for anyone other than Apple looked silly at the time and has only aged worse.1

Neither Moorhead nor Cranz gives serious thought to the possibility that processors of Apple’s own design could be the foundation of Macs that perform better than their Intel counterparts and get far better battery life. Cranz dances around exploring it for a couple of paragraphs — maybe Apple’s chips will be competitive with those from Intel and AMD — but most of the article is dedicated to the supposedly taller walls of Apple’s garden. There is no clear reason why this is the case: Apple has only ever officially supported the Darwin-based versions of MacOS on its own hardware, no matter what instruction set or vendor its processors use. Moorhead, on the other hand, bet on Apple transitioning only laptops and consumer hardware to its own processors, and retaining Intel for its higher-performance Macs. “Fingers crossed,” Cranz wrote in response.

I assume few Mac users are now crossing their fingers that Apple keeps Intel processors in future products, even at the high end.

Re-reading some of the press from this time in the Mac’s history and comparing it to coverage quoted by Wellborn is a heck of a head-trip. Even without the knowledge that Apple’s own processors would instantly become the benchmark for the personal computer industry, it seems like the flaws in others’ efforts — Microsoft’s in particular — only become the focus once reliable rumours surface about Apple’s entry. Then, these writers oftentimes seem to view Apple’s attempts through exactly the same lens as any other company’s, somehow ignoring both the vertical integration that so distinguishes it and its own history of product development.

That is not to say the press should have assumed that ARM Macs would be brilliant; skepticism is often lacking in the tech press. But it seems especially egregious in the case of this transition because Apple’s previous processor architecture change is within recent memory. Why assume Apple would take a route similar to Microsoft’s with Windows on ARM when it always seemed more likely that it would mimic its own past success in moving from PowerPC to Intel?

But, no, the tech press looked to the attempts of other companies as instructive of what Apple would do, despite that being a flawed speculation strategy for decades.

I am reminded of that classic Macalope nugget:

It’s amazing how future Microsoft products beat current Apple products time and time again, isn’t it?

An interpolation of that: it is amazing how present Microsoft problems do not match the speculated doom of similar efforts from Apple, time and time again.


  1. In the 2018 Wired article, Moorhead is also quoted as an expert analyst voice:

    “Computationally I can see a Core i3 or low-end Core i5,” says Patrick Moorhead, founder of Moor Insights & Strategy, comparing ARM’s abilities to entry-level Intel chips. “I can’t imagine that by 2020 they’d have a processor anywhere near the capabilities of a Xeon or a Core i7.”

    Nobody seemed to predict the astonishing power of even a base model M1 MacBook Air. But when read alongside Moorhead’s analysis cited by Cranz in 2020, it looks like he was certain this was purely a play to spend less money with Intel rather than a serious effort to do better. Why would Apple go through the effort of switching for any other reason than because it wanted more than what Intel could offer? ↥︎

Thomas Claburn, the Register:

Future Chromium-based browsers under administrative control will be able to prevent users from viewing webpage source code for specific URLs, a capability that remained unavailable to enterprise customers for the past three years until a bug fix landed earlier this week.

Back on October 15, 2018 an employee of Amplified IT, a Google education partner since acquired by CDW, filed a bug report describing how the Chromium URL Blocklist – which administrators can set to conform with organization or enterprise policy – doesn’t actually work.

Evidently, tech savvy students were viewing the source code of web-based tests to determine the answers.

The rationale for fixing this bug seems pretty weak. If exam software is revealing answers in the page source, it should be rewritten. In this case, it was Google Forms, which makes this bug fix from Google’s Chromium project look especially hinky. But I am convinced a policy like this should behave as expected for all URLs, so it makes sense to make the correction even with the weak example. If you look solely at the facts of this bug and the limited scope of this fix, it should be uncontroversial.
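To be clear about the scope, this is an enterprise policy applied to managed browsers, not something a website can set. As a rough sketch of how such a policy is deployed, the snippet below writes a managed-policy file for Chrome on Linux using the documented URLBlocklist policy; the view-source pattern is my reading of the bug fix, so treat the specifics as an assumption rather than a verified recipe.

```python
# Sketch of deploying a managed browser policy on Linux. URLBlocklist and the
# managed-policy directory are documented Chrome enterprise mechanisms; the
# "view-source:*" pattern reflects my reading of the bug fix and should be
# treated as an assumption, not a verified recipe.
import json
from pathlib import Path

policy = {
    # Prevents users of managed browsers from opening view-source: URLs.
    "URLBlocklist": ["view-source:*"],
}

policy_dir = Path("/etc/opt/chrome/policies/managed")  # requires root to write
policy_dir.mkdir(parents=True, exist_ok=True)
(policy_dir / "block-view-source.json").write_text(json.dumps(policy, indent=2))
```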

Much dumber still was the hand-wringing about how this is some kind of plot to allow individual websites to block users from viewing markup, which is as technically illiterate as it is alarmist. I was shocked to see how many people spread this version of the story even well after it was clear this was an administrative policy for managed environments.

If the web were still primarily a venue for document viewing, as I naïvely believe it ought to be, I would see this through a more debilitating lens. But the web is basically an operating system and viewing the source tells you little these days. I think that is a bigger regression, but it is only tangentially related to this bug. This is a big, scary pile of nothing.

Apple:

Apple today announced Self Service Repair, which will allow customers who are comfortable with completing their own repairs access to Apple genuine parts and tools. Available first for the iPhone 12 and iPhone 13 lineups, and soon to be followed by Mac computers featuring M1 chips, Self Service Repair will be available early next year in the US and expand to additional countries throughout 2022. Customers join more than 5,000 Apple Authorized Service Providers (AASPs) and 2,800 Independent Repair Providers who have access to these parts, tools, and manuals.

The initial phase of the program will focus on the most commonly serviced modules, such as the iPhone display, battery, and camera. The ability for additional repairs will be available later next year.

Brian Heater, TechCrunch:

Apple hasn’t listed specific prices yet, but customers will get a credit toward the final fee if they mail in the damaged component for recycling. When it launches in the U.S. in early-2022, the store will offer some 200 parts and tools to consumers. Performing these tasks at home won’t void the device’s warranty, though you might if you manage to further damage the product in the process of repairing it — so hew closely to those manuals. After reviewing that, you can purchase parts from the Apple Self Service Repair Online Store.

And you thought Apple could no longer surprise? This makes sense in the context of right-to-repair bills progressing in the U.S. and around the world. Apple has been lobbying against that legislation, often with ludicrous arguments that look especially funny in light of today’s news.

There seem to be a handful of caveats. Most notably, the program is launching only for very recent iPhones in the U.S., and then gradually rolling out to more countries and offering repairs for M1 Macs. This program will not help me replace the battery in my partner’s iPhone X when it is needed. Support for other products currently sold, like Intel Macs and iPads, also has not been announced. I have little hope future Apple Watch and AirPods models will become repairable, but they should be.

While I am cautiously optimistic about this new program, it does not eliminate the rationale for oversight. Apple still controls the parts and repair channels, which means it can stop offering this at any time. As welcome as this new direction is, regulations can and should be used to set expectations. We should not be having this same discussion five or ten or twenty years from now.

Update: Maddie Stone, the Verge:

But Apple didn’t change its policy out of the goodness of its heart. The announcement follows months of growing pressure from repair activists and regulators — and its timing seems deliberate, considering a shareholder resolution environmental advocates filed with the company in September asking Apple to re-evaluate its stance on independent repair. Wednesday is a key deadline in the fight over the resolution, with advocates poised to bring the issue to the Securities and Exchange Commission to resolve.

This at least explains the timing.

Amir Shevat and Sonya Penn of Twitter:

With today’s updates, the Twitter API v2 is now officially the primary Twitter API. Over the past several months, we’ve shipped lots of new features and endpoints to the API v2 that weren’t previously available on v1.1, including endpoints for Spaces, posting polls in Tweets, and pinning and unpinning Lists (see a full list here). You can follow our product roadmap for a list of v2 endpoints in development and view our mapping of v1.1 to v2 endpoints.

Dan Brunsdon of Twitter:

Specifically, we’ve removed terms that restricted replication of the Twitter experience, including Twitter’s core features as well as terms that required permission to have high numbers of user tokens.

We know that building solutions that help people on Twitter often means a developer has to build (or replicate) some of the things that are available on Twitter. These changes to our Developer Policy are intended to drive clarity for the developer ecosystem and provide an open API platform that makes it easier for developers to build, innovate, and make an impact on the public conversation.

Perhaps this really is Twitter’s attempt to reignite a community of third-party clients. I hope so. There were dozens of clients ten years ago (via Elle) that offered better versions of the Twitter timeline as well as those that provided a more focused experience. But that feels a bit like history now in large part because of changes made in 2012.

This week’s announcement appears to be Twitter’s mea culpa, but developers are right to be cautious. A third-party client cannot search tweets older than one week, view likes or retweets with comments, use bookmarks, or vote in polls — among many other limitations. Some of these things are on Twitter’s roadmap for API V2, but it is unclear whether all of them will come to fruition. One thing seems certain: we are not going back to the days when users’ posts were available as an RSS feed.
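As an illustration of that one-week search limit, the endpoint available to most developers in API v2 is “recent” search, which only covers roughly the past seven days; full-archive search was, at the time, restricted to the academic research track. A minimal sketch, assuming a bearer token from a standard developer account stored in an environment variable:

```python
# Minimal sketch of Twitter API v2 "recent" search, which only covers roughly
# the past seven days of Tweets. Assumes TWITTER_BEARER_TOKEN comes from a
# standard developer account.
import os
import requests

BEARER_TOKEN = os.environ["TWITTER_BEARER_TOKEN"]

response = requests.get(
    "https://api.twitter.com/2/tweets/search/recent",
    headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
    params={
        "query": "from:nytimes -is:retweet",  # example query
        "max_results": 10,
        "tweet.fields": "created_at",
    },
    timeout=10,
)
response.raise_for_status()

for tweet in response.json().get("data", []):
    print(tweet["created_at"], tweet["text"])
```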

Dan Moren, Macworld:

But one challenge with continually moving the state of the art forward is that sometimes it comes at the expense of making sure the technology that’s already here works as well as it can. After all, if you have to add a dozen new features in a year, that could mean taking away from work enhancing reliability, and squashing bugs in existing features.

We’ve all encountered a slew of problems — some simple (if ridiculous) to fix, others are maddeningly difficult to troubleshoot. As our devices get more and more complex, it’s all too easy for some of those problems to persist for years. And though the best part of the Apple experience has long been “it just works,” the question is… what happens when it doesn’t?

I try not to write outright grief posts here because they are not very fun to read, but I have to get this off my chest.

I was too generous when I gave Apple’s software quality in 2020 a four out of five. It was certainly better than the preceding year, but I should have graded it a whole point lower, at least. 2021 has been even rockier for me, and not just with Apple’s software and services. I feel increasingly as though big software vendors are taking customers’ business for granted.

Quality used to be one of the factors that differentiated Apple’s products from its competitors — not just in the big picture of things “just working”, but also in the details. That feels much less true than it used to. There are big problems: MacOS Monterey bricked a bunch of T2 Macs, and the version of Shortcuts that debuted across Apple’s operating system lineup this year shipped in an unusable state. But the thousand tiny cuts are perhaps more grating: Preview windows do not open in the last-used position or size, unlike any other Mac app; audio does not always initiate in CarPlay, so you have to disconnect and reconnect your phone every so often; Music for MacOS is somehow getting more bloated and less usable with every update ever since it was called “iTunes”; the play/pause (F8) key behaviour is unpredictable and shitty all the time.

Then there are the error messages which, to Moren’s point, make it hard to know what to do when things go wrong. Sometimes, things just fail silently. When there is an error message, it is often unhelpful and vague. Last night, I was trying to edit a shared Pages document on my Mac. The moment I made an edit, I was told that a new version of Pages was available and I needed to update before making the change. So I clicked on the button to open the App Store, but did not see any updates available. It took a few minutes of back-and-forth before I noticed there was a new version but, because that Mac is stuck on Catalina, it was unavailable to me. So it turns out that a shared Pages document can be edited on a newer version which silently breaks compatibility, and the only way someone will find out is when they decode a cheery update notification. I would not mind except this sort of stuff happens all the time in software and services from Apple and plenty of other vendors.

I am not trying to use software; I am trying to get something done, and these tools are frequently an impediment as much as they are a boon.

Moren:

If Apple can’t improve the reliability of its software — and, to a certain degree, it can never guarantee that everything will work perfectly for everyone — it at least owes it to its users to create more robust resources for helping them help themselves. […]

This viewpoint is so ingrained for software that it shows up in licenses and end-user agreements. For example, in Apple’s MacOS Monterey agreement (PDF, corrected from the original all-uppercase formatting):

To the maximum extent permitted by applicable law, the Apple software and services are provided “as is” and “as available”, with all faults and without warranty of any kind, and Apple and Apple’s licensors (collectively referred to as “Apple” for the purposes of sections 8 and 9) hereby disclaim all warranties and conditions with respect to the Apple software and services, either express, implied or statutory, including, but not limited to, the implied warranties and/or conditions of merchantability, satisfactory quality, fitness for a particular purpose, accuracy, quiet enjoyment, and non-infringement of third party rights.

Why is this acceptable for software, including operating systems? Nearly anything else you buy — clothing, furniture, transportation, even the hardware the software runs on — has a warranty. Consumer protection laws require manufacturers to stand behind their products and ensure they perform as promised. But not consumer software.1

This is not solely an issue with Apple’s software and services. It is increasingly not the exception but the rule with software I use from larger companies — especially those that have adopted the software-as-a-service model. Rarely have I experienced this problem with software from smaller and medium-sized vendors, which is often built by developers who care about the experience of individual customers.

I am baffled that we are expected to rely on software, services, and operating systems made by companies that, legally, do not stand behind their quality.

See Also: Brilliant Hardware in the Valley of the Software Slump from Craig Mod last year, and my comments.


  1. Industrial software often comes with a warranty. Some professional software-as-a-service vendors offer a service level agreement, but this should not be confused with a warranty. If uptime dips below the agreed-upon level, the vendor may give a partial reimbursement, but there are often many loopholes, and there is no guarantee the problem will not return. ↥︎