Month: August 2021

Chance Miller, 9to5Mac:

In an internal memo distributed to the teams that worked on this project and obtained by 9to5Mac, Apple acknowledges the “misunderstandings” around the new features, but doubles down on its belief that these features are part of an “important mission” for keeping children safe.

[…]

The memo, which was distributed late last night and obtained by 9to5Mac, was written by Sebastien Marineau-Mes, a software VP at Apple. Marineau-Mes says that Apple will continue to “explain and detail the features” included in this suite of Expanded Protections for Children.

There are legitimate concerns about the technologies Apple announced yesterday, but there is also plenty of confusion. Sometimes, that is the result of oversimplified summaries, as with this tweet from Edward Snowden:

Apple says to “protect children,” they’re updating every iPhone to continuously compare your photos and cloud storage against a secret blacklist. If it finds a hit, they call the cops.

iOS will also tell your parents if you view a nude in iMessage.

Scanning only applies to photos stored in iCloud; Apple was already checking for known images (Update: Apple was not checking. I regret the error.); similar “blacklists” are used by many tech companies; the threshold is not public, but it is not one match; Apple does not call the cops; and the optional parent alert feature only applies to accounts belonging to children under the age of 13. Unfortunately, those caveats do not fit into a tweet, so Snowden was apparently obliged to spread misinformation.

But he is right to be worried about what is not known about this system, and what could be done with these same technologies. The hash databases used by CSAM scanning methods have little oversight. The fuzzy hashing used to match images that have been resized, compressed, or altered means that there is an error rate that, however small, may lead to false positives. And there are understandable worries about how systems like these could be expanded beyond these noble goals.

Some other confusion came from early reports published before Apple’s announcement. New York Times reporter Daisuke Wakabayashi on Twitter:

Like many parents, I have pictures of my kids without clothes on (try getting a 4 year old to wear clothes). How are they going to differentiate this from legitimate abuse? Are we to trust AI to understand the difference? Seems fraught.

Wakabayashi was far from the only person conflating the two image detection systems. The confusion even after Apple’s announcement shows that many people on Twitter simply do not read, but it also indicates that Apple’s explanations were perhaps unclear. Here is my attempt at differentiating them:

  • For devices signed into a U.S. Apple ID that is both associated with an iCloud Family and registered to a user under 18 years of age, Messages will attempt to intercept nude or sexually-explicit images. This uses on-device automated categorization to identify the images locally. Messages will display warnings in two cases: before images sent to the device are viewed, and before images taken with the device are sent. If the account belongs to a child under 13, parents can also be notified.

  • Before images are uploaded to iCloud Photos, they will be checked against known examples of CSAM. The images do not leave the device to make this comparison, and this is not a check against all apparently-nude images. This matching happens on the device using a local version of a hash database. The results of the match test are attached to the file when it is encrypted and uploaded to iCloud Photos. When a threshold is crossed in the number of positive matches, only the matched files and their test results will be decrypted for further analysis and possible referral to the NCMEC.
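
To make the shape of that second system a little more concrete, here is a deliberately simplified sketch in Python. It is not Apple’s implementation: the real design uses a perceptual hash (NeuralHash), private set intersection, and threshold secret sharing so that neither the device nor Apple learns individual match results before the threshold is crossed, and the actual threshold is not public. Every name and number below is a hypothetical stand-in.

```python
# A toy illustration of the flow described above, NOT Apple's system.
# Real implementation details (NeuralHash, private set intersection,
# threshold secret sharing, blinded hash databases) are omitted; all
# names and values here are hypothetical.

import hashlib

KNOWN_HASH_DATABASE = {"example-hash-a", "example-hash-b"}  # stand-in for the on-device hash set
MATCH_THRESHOLD = 30  # the real threshold is not public

def image_hash(image_bytes: bytes) -> str:
    # Stand-in for a perceptual hash; a cryptographic hash is used here
    # only to keep the sketch self-contained and runnable.
    return hashlib.sha256(image_bytes).hexdigest()

def make_safety_voucher(image_bytes: bytes) -> dict:
    # Computed on-device before upload. In Apple's design the result is
    # cryptographically hidden from both the device and the server.
    return {"matched": image_hash(image_bytes) in KNOWN_HASH_DATABASE}

def account_needs_review(vouchers: list[dict]) -> bool:
    # Server-side: nothing is decrypted or reviewed by a human until the
    # count of positive vouchers crosses the threshold.
    return sum(v["matched"] for v in vouchers) >= MATCH_THRESHOLD
```

The point of the structure is that no single photo is supposed to produce any reviewable signal on its own; only an accumulation of positive matches does.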

The Messages system may mark an innocent photo of a nude child, as in Wakabayashi’s example, if it was sent to or from a device signed into a minor’s Apple ID. The iCloud Photos system should, theoretically, not match such a photo because it is an original image. No matter how much resemblance it may bear to an image catalogued as part of a broader CSAM investigation, it should register as an entirely unique image.

In any case, all of this requires us to place trust in automated systems using unproven machine learning magic, run by technology companies, and given little third-party oversight. I am not surprised to see people worried by even this limited scope, never mind the possibilities of its expansion.

I hope this is all very clever and works damn near perfectly, but I am skeptical. I tried to sync my iPhone with my Mac earlier today. After twenty minutes of connecting it, disconnecting it, fiddling with settings, and staring at an indeterminate progress bar, I gave up — and that was just copying some files from one device to another over a cable, with every stage made by Apple. I find it hard to believe the same company can promise that it has written some automated systems that can accurately find collections of CSAM without impacting general privacy. I know that these are separate teams and separate people writing separate things. But it is still true that Apple is asking for a great deal of trust that it can get this critically important software right, and I have doubts that any company can.

Zack Whittaker, TechCrunch:

Later this year, Apple will roll out a technology that will allow the company to detect and report known child sexual abuse material to law enforcement in a way it says will preserve user privacy.

Apple told TechCrunch that the detection of child sexual abuse material (CSAM) is one of several new features aimed at better protecting the children who use its services from online harm, including filters to block potentially sexually explicit photos sent and received through a child’s iMessage account. Another feature will intervene when a user tries to search for CSAM-related terms through Siri and Search.

Apple’s announcement:

At Apple, our goal is to create technology that empowers people and enriches their lives — while helping them stay safe. We want to help protect children from predators who use communication tools to recruit and exploit them, and limit the spread of Child Sexual Abuse Material (CSAM).

[…]

These features are coming later this year in updates to iOS 15, iPadOS 15, watchOS 8, and macOS Monterey. [Features available in the U.S.]

This program is ambitious, and protecting children is an important responsibility. These efforts will evolve and expand over time.

To state the blindingly obvious, any amount of child abuse is too much, and we should celebrate any effort to reduce it when the privacy and security of the general public is not compromised. Apple’s announcement sounds like it is walking that fine line, but this is perhaps the biggest change to Apple’s privacy practices in years given that it intercepts messages and scans photos. There are experts in cryptography and privacy who are troubled by today’s announcement, and those concerns are worth taking seriously.

This announcement is fresh, and I am sure the fullness of time will bring more questions, confusion, and clarity. This post is not going to be comprehensive, but it will be an early look at what we do know.

There are three parts to Apple’s announcement:

  • For devices signed into an Apple ID belonging to a user under the age of 18 in an iCloud Family, Messages will flag images that on-device machine learning has determined may be sexually explicit. If the user chooses to view or send the image anyway, users with a Parent/Guardian role in the iCloud Family will be notified if the child is under the age of 13.

  • iPhones and iPads will locally scan images destined for iCloud Photos. Images will be compared on-device to hashes derived from known CSAM imagery, and will attach an encrypted “voucher” with the results when the photo is uploaded. When enough positive vouchers are associated with an account, the flagged photos and vouchers will be decrypted, and the account will be reviewed by a real person at Apple. If the flagged images in the account are actually CSAM and not a false-positive, Apple will suspend it and notify the National Center for Missing and Exploited Children.

  • Siri and Spotlight will display messages when users try to search for CSAM or want to report abuse.

The last of those is a good step, and it is relatively straightforward. I am skipping it for the purposes of this post. That leaves the interception of messages, and on-device scanning. Let’s start with the first one, which actually sounds pretty appealing as an option for all users. I think many people would appreciate it if their device flagged potentially sensitive images — solicited or unsolicited.

But the mechanism by which it determines whether an image is explicit is, like most machine learning, an unaudited mystery. If it errs on the side of caution, it may flag perfectly innocent images, which, like the car alarm I have been subjected to for the past minute and a half, makes any warning less likely to be taken seriously. If it is not strict enough, it may not be effective. Either way, the system must strike a careful balance to earn users’ trust.

That unaudited system also raises the possibility of abuse. This feature has only been announced for the U.S., but it will surely expand to devices in countries that have required Apple’s complicity in censorship. If it can be used to intercept potentially sexual images, there is no technical reason the same technology could not be trained to identify other kinds of images. The likelihood of that theoretical concern becoming material depends only on the pressure Apple will face, and whether it will risk being unable to sell in a country if it does not comply.

The second feature announced today, and the one that has generated the most discussion, is the way images destined for iCloud Photos will be checked for CSAM. It is also the one that has me the most confused. Partly, that is because the cryptography proof (PDF) is complete gibberish to me; mostly, it is because I am unclear why this is more concerning now than when Apple acknowledged last January it was checking images in iCloud Photos. I have asked for clarity and will update this post if I hear back. (Update: Apple was not checking iCloud Photos. I regret the error.)

Apple is not alone in scanning user materials for potential CSAM, either. Facebook, Twitter, and Microsoft use PhotoDNA, a database matching technology developed by Microsoft, to detect this imagery, while Google has used a different database since 2008. In all cases — including Apple’s — the implementation is broadly similar: hashes are created from different parts of known images of child abuse. When any image is uploaded to services provided by these companies, its hash is checked against that database and the image is flagged if there is a match.

What is perhaps different about Apple’s approach is that there is some machine learning magic that will generate the same hash for images that have been altered (PDF):

The main purpose of the hash is to ensure that identical and visually similar images result in the same hash, and images that are different from one another result in different hashes. For example, an image that has been slightly cropped or resized should be considered identical to its original and have the same hash.
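
To illustrate the general idea in that passage, here is a toy “difference hash”, one of the simplest perceptual hashing techniques. It is emphatically not NeuralHash, which uses a neural network to produce its descriptors; this sketch only shows why a slightly cropped, resized, or recoloured copy of an image can hash the same as the original. The file names in the example are hypothetical.

```python
# A minimal "difference hash" (dHash), a simple perceptual hash. This is
# not Apple's NeuralHash; it only illustrates why resized or recoloured
# copies of an image can still produce the same hash while genuinely
# different images do not.

from PIL import Image

def dhash(path: str, size: int = 8) -> int:
    # Shrinking to a tiny greyscale thumbnail throws away fine detail,
    # so small crops, resizes, and colour changes mostly disappear.
    img = Image.open(path).convert("L").resize((size + 1, size))
    pixels = list(img.getdata())

    bits = 0
    for row in range(size):
        for col in range(size):
            left = pixels[row * (size + 1) + col]
            right = pixels[row * (size + 1) + col + 1]
            # Each bit records whether brightness increases left-to-right.
            bits = (bits << 1) | (left < right)
    return bits

# Example: a greyscale or slightly resized copy should produce the same
# hash as the original, or one differing in only a few bits.
# print(hex(dhash("original.jpg")), hex(dhash("greyscale_copy.jpg")))
```

In practice, perceptual hash systems usually compare hashes by counting differing bits rather than requiring exact equality, which is how they tolerate minor alterations.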

Apple’s example shows that the same hash is generated by colour and greyscale versions of the same image. If you’re wondering about the fallibility of this magic system, you are not alone. Alongside this announcement, Apple also released a handful of independent assessments of its cryptographic bona fides. But they did not impress Matthew Green of Johns Hopkins University.

Green was the first person to break the news of this then-impending announcement, and is one of several security professionals to express concerns in a Financial Times report by Madhumita Murgia and Tim Bradshaw:

“It is an absolutely appalling idea, because it is going to lead to distributed bulk surveillance of … our phones and laptops,” said Ross Anderson, professor of security engineering at the University of Cambridge.

[…]

“This will break the dam — governments will demand it from everyone,” said Matthew Green, a security professor at Johns Hopkins University, who is believed to be the first researcher to post a tweet about the issue.

Alec Muffett, a security researcher and privacy campaigner who formerly worked at Facebook and Deliveroo, said Apple’s move was “tectonic” and a “huge and regressive step for individual privacy”.

These concerns could also apply to Apple’s existing scanning of iCloud Photos. Regardless, it is a privacy worry when any kind of personal document is being scanned and flagged.

Please do not misinterpret this as a defence for a perceived right to create or retain any of this vile material. But — hard pivot here — it is worth asking whether it makes sense to deputize any company to monitor user accounts for anything illegal.

Let’s apply this logic to something less heinous like copyrighted materials. If these files are merely stored in a cloud service and are not being shared to any third party, should the service provider be monitoring its systems and terminating user accounts with files matching known pirated copies? I am not sure that makes much sense.

Once these files are shared beyond the user’s personal drive, I am fully on board with scanning for legal purposes. Google Drive, for example, will allow users to store copyrighted materials but attempts to restrict sharing options if those files match valid takedown requests. Apple and Google check email attachments for child abuse materials, while Facebook scans users’ messages. That makes sense to me. As with social media companies moderating their platforms, I think there ought to be different expectations when things are broadcast to the world.

Perhaps CSAM should be regarded as a special case where cloud storage companies are deputized to proactively prevent any storage of this material whatsoever. I get that argument, and I think it is worth considering given the gravity of these specific abuses. But, while I am not a privacy absolutist, I think this is another step in a concerning direction where the default device setup gives Apple the keys to users’ data.

iCloud backups were the first step. iPhones and iPads will back up to iCloud by default, and Apple has the ability to turn over backup contents in response to legal demands. In some sense, this is a valid compromise: users benefit from encryption that protects device data from man-in-the-middle attacks and reduces the device’s usefulness in case of theft, while law enforcement still gets access to almost all device data when it is required for an investigation. If you do not like this situation, you must remember to disable iCloud backups and use your own computer instead. But the power of this default means that a user who has gone through the effort of switching off iCloud backups gives the impression of having something to hide, even if they do not.

iCloud Photos is not turned on by default but, as Michael Tsai writes, Apple implicitly encourages its use:

One takeaway is that, CSAM detection aside, Apple already has access to these photos. You shouldn’t upload anything to the cloud that you want to keep private. But Apple isn’t giving users much choice. It doesn’t let you choose a truly private cloud backup or photo syncing provider. If you don’t use iCloud Photo Library, you have to use Image Capture, which is buggy. And you can’t use iCloud to sync some photos but not others. Would you rather give Apple all your photos or risk losing them?

You can use third-party alternatives that keep an entirely separate library — Adobe Lightroom is one example — but it feels like a second-rate experience on iOS.

Tsai:

And, now that the capability is built into Apple’s products, it’s hard to believe that they won’t eventually choose to or be compelled to use it for other purposes. They no longer have the excuse that they would have to “make a new version of the iPhone operating system.”

This is true, but I think a fair counterargument is that Apple’s more proactive approach to child safety takes away one of law enforcement’s favourite complaints about commonplace encryption.

But it represents a similar trade-off to the aforementioned iCloud backups example. Outside of the privacy absolutist’s fictional world, all of privacy is a series of compromises. Today’s announcements raise questions about whether these are the right compromises to be making. What Apple has built here is a local surveillance system that all users are supposed to trust. We must believe that it will not interfere with our use of our devices, that it will flag the accounts of abusers and criminals, and that none of us innocent users will find ourselves falsely implicated. And we must trust it because it is something Apple will be shipping in a future iOS update, and it will not have an “off” switch.

Perhaps this is the only way to make a meaningful dent in this atrocious abuse, especially since the New York Times and the NCMEC shamed Apple for its underwhelming reporting of CSAM on its platforms. But are we prepared for the likely expansion of its capabilities as Apple and other tech companies are increasingly pressured to shoulder more responsibility for the use of their products? I do not think so. This is a laudable effort, but enough academics and experts in this field have raised red flags for me to have some early concerns and many questions.

You may recall ITV News’ reporting earlier this year that Amazon was destroying over one hundred thousand items every week in just one U.K. warehouse. Many of these products were brand new and only destroyed because it was cheaper than continuing to warehouse them, while others were returns.

A new program launched yesterday by Amazon U.K. is an attempt to correct those problems:

Selling partners who want to resell returned items can take advantage of “FBA Grade and Resell,” which is now available in the UK, and will be available in the U.S. by end of year, and in Germany, France, Italy and Spain by early 2022. This programme gives third party selling partners the option to sell returned products on Amazon as “used” items instead of having them sent back to them or donated.

[…]

“FBA Liquidations” gives sellers the option to use the company’s wholesale resale channel and technology to provide sellers with a way to recover a portion of their inventory cost from their returned or overstock inventory. The programme is live in the U.S., Germany, France, Italy and Spain, and is set to go live in the UK in August. Previously a seller would either need to have returned or overstock inventory sent back to them or let Amazon take care of this product through its FBA Donations programme. Now businesses selling on Amazon have a new hassle-free way to recover value on these items by reselling these items through Amazon’s bulk resale partners.

Sounds great, but there are some caveats worth mentioning. First, Amazon says that there will be processing fees for the FBA Grade and Resell program, though it is waiving those fees until the end of the year. The liquidations program, meanwhile, does not show sellers how much they will make until their products have been sold. And, if Amazon cannot sell the overstock, it is automatically placed back into inventory where storage fees will resume. Most notably, Amazon may have created some alternative channels to give sellers the option to reduce waste, but it does not appear to have disincentivized product destruction.

Six years ago, Apple redesigned its website to, among other things, remove the dedicated “Store” section and integrate buying into each product section. Since my entire professional career has centred around web design, this intrigued me. The thinking behind this, presumably, was that the entire website could function as a store because that was basically how many visitors used it anyway: they wanted to buy a product, or they wanted information about a product they were thinking about buying. I thought it was ingenious.

But this direct approach compromised the presentation of some of the secondary and tertiary products. While every section of the then-new site featured an “Accessories” page, it often felt cumbersome to find some of Apple’s oddball products, which became particularly noticeable when HomePods and AirTags were launched. There was a lot more hunting around than was required in the iOS Apple Store app.

The Store page, launched yesterday, seems to fix the clunkiness of that previous redesign. You can tell it is important because it is the first menu item after the Apple logo. It also, unfortunately, makes liberal use of horizontal scrolling. It feels like a page that was laid out before the designer knew what would be in it. At least it exists again.

Here are two little things that I learned from tweets in the last twelve hours or so. First, @c_eck shares a Markup tip:

I just learned how you can get clean markups by pausing at the end of drawing your shape, line or arrow.

For example, if you use the pen tool to circle something in an image, just pause at the end of drawing your probably-shaky shape to get a nice clean circle. I have no idea how anyone is supposed to figure this out on their own, but it works great once you know it’s there.

That came in handy when I wanted to reproduce an observation originally posted on Reddit, via the Halide team:

Big news: the latest iOS 15 beta automatically removes the famous ‘green orb’ lens flares we are so used to on iPhones.

Look to the left-hand side of the sky in the first image, just above the horizon, to see the lens flare. I just had to give it a shot, and it seems to work pretty well. This is the kind of seamless fix that computational photography enables, though this example is a simple patch that could also be done by a golden retriever with access to Photoshop. I will have to try this in some trickier contexts.

Greg Morris:

In April this year I chose to sell my A7iii camera due to lack of use, and wanting to slim down my possessions to a level that I was happy with. I am a minimalist, but sometimes I forget and tend to have to go through a purge every now and again to calm things down. I felt a huge pang when delivering it to its new owner, but due to seeing no end in sight of the pandemic knew I couldn’t leave it gathering dust much longer. Since then I didn’t think much of it, until two weeks ago.

Funny how similar experiences can diverge so radically: I have used my cameras more often in the past eighteen months than in the years prior, and a major reason was this pandemic.

We never had a true lockdown order where I live — arguably to our detriment — but, as in most places, we were encouraged to reduce unnecessary indoor trips and spend time outdoors. So, many times a week for the past year, I have slung my camera over my shoulder and gone for a walk. No destination, no schedule — just a need to walk and take pictures. In the words of Craig Mod in 2014:

While short walks can invigorate or move a stagnant mind, long walks nourish and regenerate.

Mod writes of walking for many hours — maybe days — but I have found two-to-four hours to be a sweet spot for my purposes. I am privileged to be a generic white guy and, so, unlikely to be questioned, harassed, or threatened. I wander through industrial areas after-hours and take pictures of businesses with signs that have not been updated in decades. My camera’s shutter has opened thousands of times in the past year, making images only for myself. It has been my nightly meditation; my comparatively luxurious coping mechanism.

Maybe I could have done this with my phone’s camera, but it would not have been as nice. One of the best things about these walks is that I rarely touch my phone. I will occasionally text my partner to let her know that I am doing okay, and perhaps take a handful of pictures that I will forget to share later. But most of the time, my phone stays in my pocket and my headphones stay silent. Just me, my camera, and one foot pushing the other forward.

In preparation for this preview of the Pixel 6, I was bemused by a quick read-through of the Verge’s reviews of Google’s Pixel phones. The way Dieter Bohn describes them ebbs and flows annually between the premium and the middle-ground. Here is the first from 2016:

[…] But the Pixel is different: although it is manufactured by HTC, it’s fully designed by Google. And Google designed it to compete at the top tier, so it’s priced to match the iPhone and the Galaxy S7. It has a couple incredibly obvious objectives in mind with this phone: make it familiar and make it powerful.

The Pixel 2 followed in 2017, the same year mainstream smartphones crept toward the $1,000 mark with products like the iPhone X and Samsung Galaxy Note 8. But the Pixel 2 started at the same $649 as its predecessor, and the Verge’s review emphasized its lack of flashiness:

The Pixel 2 isn’t the nice dining room table with the fancy silverware. It’s the kitchen counter where you actually eat. It’s not as impressive, but it’s much more comfortable. That’s what makes the design of this year’s Google Phones great. They’re meant to be of use, and they are.

So the first Pixel was designed as a flagship competitor, while its successor was a more mainstream offering. Any guesses for which direction the Pixel 3 drifted toward?

For three years now, the Pixel phones have claimed the mantle of “best Android phone,” but they’ve always done so with asterisks. Those asterisks involved bezels, screen quality, or some other thing. This year, Google aims to claim the mantle again with the Pixel 3 and 3 XL, minus the asterisks.

Accordingly, Google bumped the starting price to $799. Premium, mass-market, then premium again. And then there was the Pixel 4, which changed up the pattern:

Most new phones try to layer on one or two new features year over year. But the Pixel 4 has at least five major new hardware-based features: face unlock, Motion Sense, the new Google Assistant, the new 90Hz display, and a second telephoto camera lens. It’s also available on all four major US carriers for the first time.

Some of those were premium features; some, like the radar-based Motion Sense touchless control, were weird experiments that didn’t really go anywhere. Like its predecessor, it started at $799, and its specs reflected a more midrange phone.

Last year saw the debut of the Pixel 5, which Bohn summarized like so:

It may be disappointing to see Google shy away from the big leagues this year, but I think sticking to making a premium midrange phone is more true to the Pixel’s whole ethos. The Pixel 5 is not an especially exciting phone, but instead of overreaching, Google focused on the fundamentals: build quality, battery life, and, of course, the camera.

So, after attempts at premium, midrange, premium, weird, and midrange, it is about time for Google to try its hand at creating an upmarket phone again. That is, apparently, what the Pixel 6 is supposed to be. Dieter Bohn:

This fall, Google will release two slightly different Pixel phones: the Pixel 6 and the Pixel 6 Pro. If the final versions are anything like the prototypes I saw last week, they will be the first Pixel phones that don’t feel like they’re sandbagging when it comes to build quality. “We knew we didn’t have what it took to be in the ultra high end [in the past],” Osterloh admits. “And this is the first time where we feel like we really have it.”

Both versions of the Pixel were glass sandwiches with fit-and-finish that are finally in the same league as what Samsung, Huawei, and Apple have to offer. “We’ve definitively not been in the flagship tier for the past couple years, this will be different,” says Osterloh. He also admits that “it will certainly be a premium-priced product,” which I take to mean north of $1,000.

Google’s hardware division is a low-stakes gamble for such a large company. In 2019, it apparently sold just seven million units, and Nikkei Asia reported last September that it would make fewer than a million Pixel 5 models. So it is surprisingly conservative in its experiments. It should be able to play around a little more.

Perhaps this premium-positioned phone is the wildest experiment the company can think of. But if Google fails to explain its advantages and sales are similarly lacklustre, how do you think the Pixel 7 will be priced and marketed? I smell another mid-tier product in the not-too-distant future.

Olivia Solon, NBC News:

Ghada Oueiss, a Lebanese broadcast journalist at Al-Jazeera, was eating dinner at home with her husband last June when she received a message from a colleague telling her to check Twitter. Oueiss opened up the account and was horrified: A private photo taken when she was wearing a bikini in a jacuzzi was being circulated by a network of accounts, accompanied by false claims that the photos were taken at her boss’s house.

[…]

Oueiss is one of several high-profile female journalists and activists who have allegedly been targeted and harassed by authoritarian regimes in the Middle East through hack-and-leak attacks using the Pegasus spyware, created by Israeli surveillance technology company NSO Group. The spyware transforms a phone into a surveillance device, activating microphones and cameras and exporting files without a user knowing.

For Oueiss and several other women whose phones were allegedly targeted, a key part of the harassment and intimidation is the use of private photos. While these photos may seem tame by Western standards, they are considered scandalous in conservative societies like Saudi Arabia and were seemingly used to publicly shame these women and smear their reputations.

NSO Group previously claimed that it would not sell to over fifty countries it determined would abuse the capabilities of its spyware. I am sure that the women whose photos were stolen and whose privacy was violated are not convinced NSO Group does nearly enough to restrict heinous actions by users of its software.

This essay by Molly Wood, in the Atlantic, is framed as a piece about antitrust concerns, but I thought this line of inquiry was more correct and telling:

Microsoft does much more that we’re happy to call “evil” when other companies are involved. It defied its own workers in favor of contracts with the Department of Defense; it’s been quietly doing lots of business with China for decades, including letting Beijing censor results on its Bing search engine and developing AI that critics say can be used for surveillance and repression; it reportedly tried to sell facial-recognition technology to the DEA.

So why does none of it stick? Well, partly because it’s possible that Microsoft isn’t actually doing anything wrong, from a legal perspective.

Yet it’s so big and so dominant and owns so much expensive physical infrastructure that hardly any company can compete with it. Is that illegal? Should it be?

It is hard to say whether Microsoft’s products would improve with competition, but it would take a lot more than bugs and typical incompetence for many big organizations to migrate away from Windows, Exchange, and Office. Microsoft’s competitive position is pretty safe.

Meanwhile, it is surely advantageous for Microsoft to be kind of boring. Earlier this year, I discussed how the more anonymous back-end companies in a website’s stack face less pressure to moderate the use of their platforms compared to more user-facing services. Being boring and maybe a little bit inept are equally effective tactics. Microsoft faced public backlash earlier this year when it inadvertently censored worldwide Bing queries for the Tiananmen Square “tank man” image but, most of the time, there is little mention of Bing’s filtering in China because nobody outside of Redmond uses Bing.

Even so, it is striking to recognize how little attention Microsoft has received in discussions over the power of “big tech” companies. Microsoft is the second most valuable company in the world, and has been for some time. Its market cap has nearly doubled since January of last year, and it is one of only two companies worth over two trillion dollars.

Josh Centers, TidBITS:

The cascading crises of 2020 — with retail store closures, a shuttered Apple headquarters, and broken supply chains — were the ultimate test of Tim Cook’s leadership. In short, Apple not only survived, it’s once again shattering records (see “Apple’s Q3 2021: Still Making Money Hand Over Fist,” 27 July 2021). Mac sales are stronger than ever, and have been setting records for the past four quarters. After a nearly decade-long slump, iPad sales are higher than they’ve ever been apart from their 2012 peak. The iPhone 12 continues to be a smash hit near the end of its product cycle. Services and Wearables both continue stratospheric growth.

Tim Cook has transformed Apple into a truly antifragile company that actually improves under adversity.

Centers thoughtfully explains how the successes of the first paragraph were made possible due to a decade of planning. But I do not think the second paragraph is truly proven.

Apple is more successfully diversified today than at any point in its history. But I do not think it is true that its balance of products and services is such that, when one declines, another will necessarily rise:

Apple has not just a diverse portfolio, but a diverse portfolio of strong products backed by both physical and online distribution options that keep revenues balanced even in the toughest times. A brick-and-mortar retailer like Dollar General would be devastated by store closures, but for Apple, it was only an annoyance that could be mitigated by the Apple online store. Netflix lives or dies by its subscriber figures, but a dip in Apple TV+ subscriptions is mitigated by a music service, a credit card, warranties, and even a fitness service. HP is nothing without PC and printer sales, but the Mac can coast along at times thanks to Apple’s other offerings.

That Apple’s overall revenue can increase while some individual product offerings decline does not make the company “antifragile” so much as simply “big”. Centers shows that Apple was able to avoid many of the stressors that impacted other industries and companies, but not that it turned those stressors into gains any differently than other large technology companies in similar lines of business did. Because of in-person restrictions, this pandemic made technology companies very rich if they could in any way benefit from remote work or socializing. PC makers like Lenovo (PDF), ASUS, and Acer (PDF) all posted large revenue and profit gains. Software giants like Microsoft and Salesforce are booming, and the pandemic’s effects made Zoom a household name.

This is not a demonstration of “antifragility”. Apple was able to avoid many of the pandemic’s knock-on effects due to good long-term planning. But there is no guarantee that a different crisis would leave Apple’s revenue mix showing gains overall, or that a stressor that does not require large-scale investment in new technologies would benefit Apple’s businesses. Apple is a robust company, certainly; it has not been beleaguered or fragile for decades now. So, while great long-term planning has demonstrated Apple’s resilience, I do not think the effects of this pandemic prove that it has insulated itself in the way Centers describes.