Month: January 2021

From the Apple Newsroom on January 6, 2011:

Apple® today announced that the Mac® App Store℠ is now open for business with more than 1,000 free and paid apps. The Mac App Store brings the revolutionary App Store experience to the Mac, so you can find great new apps, buy them using your iTunes® account, download and install them in just one step. The Mac App Store is available for Snow Leopard® users through Software Update as part of Mac OS® X v10.6.6.

You know the first interesting thing about this? Apple issued a press release when the iOS App Store turned ten; Apple also posted one the day the Mac App Store turned ten, but it wasn’t about the Mac App Store:

As the world navigated an ever-changing new normal of virtual learning, grocery deliveries, and drive-by birthday celebrations, customers relied on Apple services in new ways, turning to expertly curated apps, news, music, podcasts, TV shows, movies, and more to stay entertained, informed, connected, and fit.

There’s a bit in the release touting the “commerce the App Store facilitates”, and Apple used it to announce $1.8 billion spent on the App Store between Christmas Eve and New Year’s Eve, but that’s it. Also, I want to thank the person who decided that Apple’s press releases do not need to contain intellectual property marks.

Perhaps it is not surprising that the Mac App Store did not get its own anniversary announcement. It could be the case that Apple considers the launch of the iPhone App Store the original, and everything else is simply part of that family. Apple also doesn’t indulge in anniversaries very often — the App Store press release was an exception rather than the rule.

But it also speaks to the Mac App Store’s lack of comparable influence. Joe Rossignol, MacRumors:

Since its inception, the Mac App Store has attracted its fair share of criticism from developers. Apple has addressed some of these complaints over the years by allowing developers to offer free trials via in-app purchase, create app bundles, distribute apps on multiple Apple platforms as a universal purchase, view analytics for Mac apps, respond to customer reviews, and more, but some developers remain unsatisfied with the Mac App Store due to Apple’s review process, the lack of upgrade pricing, the lack of sandboxing exceptions for trusted developers, the absence of TestFlight beta testing for Mac apps, and other reasons.

Michael Tsai:

Thinking back to the early days of the Mac App Store, I remember how its introduction killed a nascent third-party effort to build a similar store. And I recall how, just months after the store opened, Apple changed the rules to require that apps be sandboxed. […]

The Mac App Store has led a bizarre life in its first ten years — remember when system software updates, including operating system updates, came through the Mac App Store? A 2018 redesign made it look more modern, but it continues to feel like it was ported from another platform. Like the iOS App Store, it faces moderation problems, and most of its vast catalogue of apps is terrible.

There are some bright spots. I have found that good little utility apps — ABX testers, light audio processors, and the like — are easy to find in the Mac App Store. Much easier, I think, than finding them on the web. It is also a place where you can find familiar software from big developers alongside plenty of indies, software stays up to date with almost no user interaction, and there are no serial numbers to lose.

Unfortunately, there remain fundamental disagreements between Apple’s policies and developers’ wishes that often manifest in comical ways. Recently, for my day job, I needed to use one of Microsoft’s Office apps that I did not have installed. I was able to download it from the Mac App Store but, upon signing in to my workplace Office 365 account, I was told that the type of license on my account was incompatible with that version of the app. I replaced it with a copy from Microsoft’s website with the same version number and was able to log in. I assume there is a conflict between how enterprise licenses are sold and Apple’s in-app purchase rules. The problem was caused in part by Microsoft’s desire to sell its products under as many subtly different, similarly named SKUs as possible, and it resulted in an error message that App Store rules prohibited from being helpful. Regardless of the reasons, all I experienced as a user was confusion and frustration. Oftentimes, it is simply less pleasant to use the Mac App Store than to get software from the web.

Happy tenth birthday to the Mac App Store; it cannot be the best that Apple can do.

Jason Koebler, writing for Vice in 2018:

There’s not a ton of research on this, but the work that has been done so far is promising. A study published by researchers at Georgia Tech last year found that banning [Reddit’s] most toxic subreddits resulted in less hate speech elsewhere on the site, and especially from the people who were active on those subreddits.

Early results from Data and Society sent to an academic listserv in 2017 noted that it’s “unclear what the unintended effects of no platforming will be in the near and distant future. Right now, this can be construed as an incredibly positive step that platforms are making in responding to public complaints that their services are being used to spread hate speech and further radicalize individuals. However, there could be other unintended consequences. There has already been pushback on the right about the capacity and ethics of technology companies making these decisions. We’ve also seen an exodus towards sites like Gab.ai and away from the more mainstream social media networks.”

I linked to this two years ago when Facebook cracked down on extremist public figures using its platforms, but I figured I would re-up it today.

This is a significant test of deplatforming. It seems to work for media personalities and toxic average users, but will it work for someone who — let’s face it — is still the president of the United States? Will it have significant blowback? I have concerns that it will embolden die-hard followers to commit further acts of violence, but I also think that is a problem for law enforcement and American society as a whole.

I do not think national healing is hastened by broadcast media of any type continuing to permit reckless lies about election fraud from influential figures.

Twitter, perhaps knowing the stakes of suspending the personal account of the president, posted a comprehensive explanation of its reasoning. I have trimmed it to two salient paragraphs:

Due to the ongoing tensions in the United States, and an uptick in the global conversation in regards to the people who violently stormed the Capitol on January 6, 2021, these two Tweets must be read in the context of broader events in the country and the ways in which the President’s statements can be mobilized by different audiences, including to incite violence, as well as in the context of the pattern of behavior from this account in recent weeks. After assessing the language in these Tweets against our Glorification of Violence policy, we have determined that these Tweets are in violation of the Glorification of Violence Policy and the user @realDonaldTrump should be immediately permanently suspended from the service.

[…]

Plans for future armed protests have already begun proliferating on and off-Twitter, including a proposed secondary attack on the US Capitol and state capitol buildings on January 17, 2021.

I do not understand why Twitter calls this a “permanent suspension” instead of a ban, but that’s what it is.

Even the most powerful people must face consequences. There must be a generally agreed upon line that cannot be crossed. I guess the line for Twitter, Reddit, and Facebook is when their platforms are used to tacitly encourage people to overthrow a fair election in a stable democracy.

Big platforms experimented with taking the laissez-faire moderation style of 4chan mainstream and it backfired. It is long past time that they took a more active role in user moderation.

See Also: Ben Thompson’s piece from yesterday; Mike Masnick today. I often disagree with both on platform moderation issues — see preceding paragraph — but I think they have articulated well why they support a more hands-off approach to moderation more generally, and why they came to believe this ban is due.

Elamin Abdelmahmoud, Buzzfeed News:

The siege was no doubt terrifying to watch, and doubly so especially for the legislators and staff trapped in the building by raging QAnon followers and Trump dead-enders. Rioters wore shirts glorifying the Holocaust; some shouted what sounded like racial epithets and paraded Confederate flags. Guns were drawn. A woman was shot to death by police. It was a tense, perilous, violent assault on democracy.

But it was also quickly apparent that this was a very dumb coup. A coup with no plot, no end to achieve, no plan but to pose. Thousands invaded the highest centers of power, and the first thing they did was take selfies and videos. They were making content as spoils to take back to the digital empires where they dwell, where that content is currency.

Social media did not cause us to give undue influence to public figures with little concern for the weight of their words and actions, but it surely amplifies and exacerbates it.

Every four years, Americans go to the polls to pick a president and vice president; the following January, the House and Senate certify the results and confirm the winner. That January joint session is routine stuff — something so formal and arcane that it is hard to remember it from any past election. On this occasion, a mob encouraged and defended by the president decided to violently interject themselves into the proceedings because they did not like the result.

It was a shocking, terrifying, and entirely unsurprising escalation of the anti-democratic rhetoric frequently used by commentators and pundits in a specific media bubble. But it is also the product of a president who has used his status to elevate blatant lies, codswallop, and self-serving fictions. Most platforms have given him generous leeway to do so since he is a world leader by office if not by any other quality.

Ryan Mac, Buzzfeed News:

The insurrection isn’t just being televised. It’s being orchestrated, promoted, and broadcast on the platforms of companies with a collective value in the trillions of dollars.

And the platforms have let Trump persist. At 2:38 p.m. in DC, Trump issued a new message, in which he did not tell his supporters to stand down.

“Please support our Capitol Police and Law Enforcement. They are truly on the side of our Country. Stay peaceful!” he wrote on Twitter and Facebook, as members of his own party barricaded themselves in chambers and rooms and the vice president was forced to evacuate the building. Police were overwhelmed.

That tweet was posted well after rioters were in the Capitol, minutes after they were at the Senate doors, and just a few minutes before they got into the chamber. This attack was planned in the open and incited by the sitting president through, in part, the affordances of his social media presence. Platforms limited the reach of — and ultimately removed — videos and tweets he posted that could be read as encouraging the rioters. And then Facebook decided enough was enough.

Zoe Christien Jones, CBS News:

President Trump will no longer be able to use his official Facebook and Instagram accounts after the social media giant indefinitely banned him following the violent protests at the U.S. Capitol, Facebook CEO Mark Zuckerberg announced Thursday. Mr. Trump will be banned at least through the end of his presidential term.

“We believe the risks of allowing the President to continue to use our service during this period are simply too great,” Zuckerberg wrote in a Facebook post. “Therefore, we are extending the block we have placed on his Facebook and Instagram accounts indefinitely and for at least the next two weeks until the peaceful transition of power is complete.”

Twitter suspended the president for twelve hours, and other platforms responded similarly.

Will Oremus, OneZero:

None of this is to say that Facebook is wrong to ban Trump, or that Twitter would be wrong to follow suit. There’s a good case to be made they should have done it well before now. While I’ve made the case for newsworthiness exemptions in the past, particularly on Twitter, it’s perfectly reasonable for media platforms to make judgment calls about the balance between newsworthiness and, say, public health or safety — as long as they admit that is in fact what they’re doing. It’s what true media organizations do every day. The only thing worse than constantly changing the rules would be stubbornly sticking to them when it’s clear they’re inadequate or misguided.

But the dominant platforms have always been loath to own up to their subjectivity, because it highlights the extraordinary, unfettered power they wield over the global public square, and places the responsibility for that power on their own shoulders. That in turn would make it clear that the underlying problem here is not the rules themselves, but the fact that just a few, for-profit entities have such power over global speech and politics in the first place. So they hide behind an ever-changing rulebook, alternately pointing to it when it’s convenient and shoving it under the nearest rug when it isn’t.

These platforms are designed to get advertisements and posts from public figures in front of as many users as possible — similar to the way mass media has worked for a couple of decades now. So what do their leadership teams do when those qualities are abused by someone to threaten public safety and democracy itself? In the case of news media, there are editors who are theoretically able to make factual corrections and put misleading information in context. Unfortunately, the people in charge of those decisions often prefer shouting matches; it’s better television. But social media platforms do not have an equivalent; de-platforming, whether temporarily or permanently, is the closest thing they have short of a soup-to-nuts rearchitecting of how posts are presented.

Rethinking how prominent posts are presented and lies are treated is something platforms should have done a long time ago. Facebook and Twitter are clearly still making all of this up as they go along. It was painfully clear one or two or five years ago that they needed to have new ways of presenting items from world leaders, lawmakers, and their spokespersons that would minimize the use of these platforms for indoctrination and, now, insurrection. They have failed to do so. That is why they have a choice between heavy-handed responses like these and doing next to nothing. In this context, I think the heavy-handed approach is almost certainly the correct one. But none of this should have gone this far — and the failures of these platforms stand out as one of many reasons for the escalation of violent rhetoric from authoritative figures and the platforms’ aggressive response.

Michael Buckley of Panic:

Once upon a time, we made one of the earliest MP3 players for the Mac, Audion. We’ve come to appreciate that Audion captured a special moment in time, and we’ve been trying to preserve its history. Back in March, we revealed that we were working on converting Audion faces to a more modern format so they could be preserved.

Since then, we’ve succeeded in converting 867 faces, and are currently working on a further 15 faces, representing every Audion face we know of.

Today, we’d like to give you the chance to experience these faces yourself on any Mac running 10.12 or later. We’re releasing a stripped-down version of Audion for modern macOS to view these faces.

I must say that it is both odd and comforting to see a version of Audion with a MiniDisc player skin running natively on macOS Big Sur alongside lookalike modern apps.

If you have not yet read the story of how Audion almost became iTunes, now is a great time to do so.

Zac Bowden, Windows Central:

Microsoft is building a universal Outlook client for Windows and Mac that will also replace the default Mail & Calendar apps on Windows 10 when ready. This new client is codenamed Monarch and is based on the already available Outlook Web app available in a browser today.

Project Monarch is the end-goal for Microsoft’s “One Outlook” vision, which aims to build a single Outlook client that works across PC, Mac, and the Web. Right now, Microsoft has a number of different Outlook clients for desktop, including Outlook Web, Outlook (Win32) for Windows, Outlook for Mac, and Mail & Calendar on Windows 10.

Microsoft wants to replace the existing desktop clients with one app built with web technologies. The project will deliver Outlook as a single product, with the same user experience and codebase whether that be on Windows or Mac. It’ll also have a much smaller footprint and be accessible to all users whether they’re free Outlook consumers or commercial business customers.

Some reports have interpreted this as though Microsoft will discard the Mac app redesign it previewed in September. I am not sure that is the case. The new version of Outlook for Mac looks an awful lot like an Electron app already.

Like most web apps in a native wrapper, this sounds like a stopgap way of easing cross-platform development at the cost of usability, quality, speed, and platform integration. To be fair, I am not sure that anyone would pitch today’s desktop Outlook apps as shining examples of quality or speed, but I spend a lot of time from Monday through Friday in the Outlook web app and it is poor.

It feels like a website, of course, so everything performs just a little bit worse. You can open messages in new windows if you would like but, because websites do not know how to do multiwindowing, everything appears by default in the same tab. The app generates multiple <div>s masquerading as tabs within its own tab, but it is HTML with window and tab management brought to you by JavaScript, so how well do you think that works?

My favourite bug is that, when you are composing an inline reply, it sometimes interprets the delete key not as an instruction to remove the most recently typed character but as one to delete the current message thread. And, of course, you cannot undo that with a keyboard shortcut. If you miss the app’s built-in notification balloon, which appears nowhere near where you are typing and contains an “undo” button that does not look like a button, you’ll have to manually find the thread in the trash and move it back to the inbox.

Websites make for very bad apps.

Sara Morrison, Recode:

I gave away tons of personal data to get the things I needed. Food came from grocery and restaurant delivery services. Everything else — clothes, kitchen tools, a vanity ring light for Zoom calls, office furniture — came from online shopping platforms. I took an Uber instead of public transportation. Zoom became my primary means of communication with most of my coworkers, friends, and family. I attended virtual birthdays and funerals. Therapy was conducted over FaceTime. I downloaded my state’s digital contact tracing tool as soon as it was offered. I put a camera inside my apartment to keep an eye on things when I fled the city for several weeks.

Millions of Americans have had a similar pandemic experience. School went remote, work was done from home, happy hours went virtual. In just a few short months, people shifted their entire lives online, accelerating a trend that would have otherwise taken years and will endure after the pandemic ends — all while exposing more and more personal information to the barely regulated internet ecosystem. At the same time, attempts to enact federal legislation to protect digital privacy were derailed, first by the pandemic and then by increasing politicization over how the internet should be regulated.

Last year, much of the world became increasingly dependent on one of its most poorly regulated industries. We were held together by many of the same companies that had been shown, over the preceding several years, to be deeply flawed — especially when it comes to privacy.

And it could be so much worse.

Kirsten Han, Rest of World:

[Singaporean] authorities claim that such technologies have greatly strengthened their contact-tracing efforts. In early November, the health minister said that 25,000 close contacts of confirmed Covid-19 cases had been identified through TraceTogether, of which 160 eventually tested positive. The country reported zero cases of community transmission most days in November.

Despite these successes, the imposition of more intrusive data collection technology has unnerved privacy advocates, who worry that the pandemic will be used to justify the surveillance of citizens without consideration of the long-term consequences, and without sufficient checks and balances. 

Those concerns look increasingly well-founded. When Parliament reopened in January 2021, Desmond Tan, the Minister of State at the Ministry of Home Affairs, said that the police would also be able to access TraceTogether data for criminal investigations. The privacy statement on the TraceTogether website, which had previously stated that collected data would “only be used solely for contact tracing of persons possibly exposed to COVID-19,” was amended shortly afterwards.

This is wildly invasive and incredibly shortsighted. Device-based contact tracing and exposure notification already faced an uphill battle on privacy. It is now practically impossible in much of the world thanks to early but flawed contact tracing apps and broken promises about proximity data use. But not in Singapore, where the contact tracing app remains mandatory.

Update: “Location” in the last paragraph was changed to “proximity”. Thanks Stuart.

Matt Stoller:

I’ve written a lot about private equity. By ‘private equity,’ I mean financial engineers, financiers who raise large amounts of money and borrow even more to buy firms and loot them. These kinds of private equity barons aren’t specialists who help finance useful products and services, they do cookie cutter deals targeting firms they believe have market power to raise prices, who can lay off workers or sell assets, and/or have some sort of legal loophole advantage. Often they will destroy the underlying business. The giants of the industry, from Blackstone to Apollo, are the children of 1980s junk bond king and fraudster Michael Milken. They are essentially super-sized mobsters who burn down businesses for the insurance money.

In private equity takeovers of software, the gist is the same, with the players a bit different. It’s not Apollo and Blackstone, it’s Vista Equity Partners, Thoma Bravo, and Silver Lake, but it’s the same cookie cutter style deal flow, the same financing arrangements, and the same business model risks. But in this case, the private equity owner of SolarWinds burned down far more than just the firm.

U.S. intelligence agencies may have confirmed today that these attacks were perpetrated by Russians. But this particularly good piece from Stoller makes a satisfying case for the structural reasons behind this breach.

CBS’ 60 Minutes aired a story, reported by Scott Pelley, arguing that cases of harassment and abuse from online sources are enabled by Section 230 of the Communications Decency Act:

A priority of the new president and Congress will be reining in the giants of social media. On this, Democrats and Republicans agree. Their target is a federal law known as Section 230. In a single sentence it set off the ‘big bang’ helping to create the universe of Google, Facebook, Twitter and the rest. Some critics of the law say that it leaves social media free to ignore lies, hoaxes and slander that can wreck the lives of innocent people. One of those critics is Lenny Pozner. After a tragedy in his own life, Pozner has become a champion for victims of online lies, people including Maatje and Matt Benassi, who, overnight, became the target of death threats like these.

[…]

Right about now you might be thinking, they should sue. But that’s the problem. They can’t file hundreds of lawsuits against internet trolls hiding behind aliases. And they can’t sue the internet platforms because of that law known as Section 230 of the Communications Decency Act of 1996. Written before Facebook or Google were invented, Section 230 says, in just 26 words, that internet platforms are not liable for what their users post.

These cases are truly terrible — but they are not enabled by Section 230 as much as by the generosities afforded by the First Amendment combined with the scale of these platforms. And, as Mike Masnick of Techdirt points out, major platforms have eventually been responsive to user complaints:

Over and over again, the report blames Section 230 for all of this. Incredibly, at the end of the report, they admit that the video from that nutjob conspiracy theorist was taken down from YouTube after people complained about it. In other words Section 230 did exactly what it was supposed to do in enabling YouTube to pull down videos like that. But, of course, unless you watch the entire 60 Minutes segment, you’ll miss that, and still think that 230 is somehow to blame.

Facebook, Twitter, and YouTube have thankfully stepped up their moderation efforts in the last couple of years. But because of their scale — partially due to network effects, and partially because of a reluctance to use antitrust precedent to slow their roll — this increased moderation has been mistakenly referred to as “censorship”. None of this has anything to do with Section 230, however.

60 Minutes filmed a very good interview with Jeff Kosseff, an expert on Section 230, of which only a part made it into the final report. I am disappointed that they axed Kosseff’s historical context:

To understand why [Section 230] is necessary, you really have to go back to what the law was before Section 230, and that is: what is the liability for distributors of content that others create? Before the internet, that was bookstores and newsstands. And the general rule was that, if you are a distributor of someone else’s content, you’re only liable if you know or have reason to know if it’s illegal.

That compares favourably with Section 230, which requires platforms to remove illegal materials when they are notified and encourages them to moderate proactively.1 Because of the explosive growth of these platforms, moderation is extremely difficult.

Kosseff also fields a question from Pelley about news publishers:

Scott Pelley: But help me understand, the same is not true for other forms of media. If somebody says something defamatory on 60 Minutes or on Fox or CNN or in The New York Times, those organizations can be sued. So why not Google, YouTube, Facebook?

Jeff Kosseff: So the difference between a social media site and let’s say the Letters to the Editor page of The New York Times is the vast amount of content that they deliver. So I mean you might have five or ten letters to the editor on a page. You could have I think it’s 6,000 tweets per second. […]

One other difference is that the press relies upon human beings making a decision about what should be published and what should not. An interview subject can make a dubious and potentially defamatory claim, but it is up to the system of reporters and editors and fact-checkers to determine whether that claim ought to be shown to the public. Online platforms are more infrastructural. Making them legally liable for what their users publish would be like making it fair game to sue newsstands and grocery stores for selling copies of the Times containing an illegally defamatory story.

Perhaps owing to their unique scale and manipulated reach, I hope that platforms will continue to take a more active role in curbing high-profile bad faith use. I do not think making Twitter liable for my dumb tweets, or websites liable for their users’ comments, is a sensible way of getting there.


  1. Platforms’ own rules mean that what they disallow is not necessarily the same as what the law disallows. ↥︎

I liked Timothy Buck’s explanation of why accessibility matters in everything, and the simple list of tips to improve it in tech products. A key thing to think about is that, when you make things more accessible for more people, you make those things better for every user. Nobody wants things to be harder to use.

Update: As of November 17, 2021, Trieu Pham dropped this case. The original post follows.

In the lawsuit (PDF), Trieu Pham, the App Store reviewer, alleges he was harassed at work on the basis of race and national origin — he is of Vietnamese ancestry — and that he was fired for his 2018 support of an app created by a Chinese dissident that claimed to show corruption within the Chinese government.

Michael Tsai has a good summary of the suit and some related links, including this excerpt from the suit:

After plaintiff Pham approved the Guo Media App, the Chinese government contacted defendant Apple and demanded that the Guo Media App be removed from defendant Apple’s App Store. Defendant Apple then performed an internal investigation and identified plaintiff Pham as the App Reviewer who approved the Guo Media App.

In or around late September 2018, shortly after defendant Apple provided plaintiff Pham with the DCP, plaintiff Pham was called to a meeting to discuss the Guo Media App with multiple defendant Apple supervisors and managers. At this meeting, defendant Apple supervisors stated that the Guo Media App is critical of the Chinese government and, therefore, should be removed from the App Store. Plaintiff Pham responded stating the Guo Media App publishes valid claims of corruption against the Chinese government and Chinese Communist Party and, therefore, should not be taken down. Plaintiff Pham further told his supervisors that the Guo Media App does not contain violent content or incite violence; does not violate any of defendant Apple’s policies and procedures regarding Apps; and, therefore, it should remain on the App Store as a matter of free speech.

I think this is a more complicated story than its coverage suggests. It sounds like another clear-cut case of Apple’s deference to Chinese government interests — and that may be true. The judge in this case has rejected Pham’s claim that he was subjected to a harassing work environment, but is allowing him to make the case that he was fired in retaliation for approving this app.

However, the app in question is a complex story in its own right. Guo Media was formed by Guo Wengui, a billionaire who fled criminal charges in China in 2014 to hide in his massive Manhattan apartment overlooking Central Park. It was aboard Guo’s yacht where Steve Bannon was arrested last year on fraud charges; Bannon worked with Guo to raise funds and launch Guo Media.

According to the New York Times, many of Guo’s corruption claims appear valid or plausible; many appear to be fictional. Guo’s media company was responsible for the fictional story that the pandemic originated in a Wuhan bioweapons lab, and has a history of spreading disinformation. The G News app remains available in the Canadian App Store as of publishing. So, Guo Media is a shady company with potentially criminal founders, and G News publishes a lot of nonsense. But, according to Pham’s suit, three reviewers for the App Store in China approved it before Pham, and it was only then that Chinese government officials allegedly demanded its removal.

Apple’s dependency on its China-based manufacturing partners remains what I see as its biggest liability heading into 2021.1 Regardless of whether Pham’s claims turn out to be true, even the appearance of deference to a specific government’s censorship campaign is worrying. If government officials were so concerned about Guo Media, they could block it with the national firewall without involving Apple. But it appears that Apple is okay with being complicit. Apple has a China public relations problem because it has actual problems tied to its complex relationship with the country’s government.


  1. This is true to some extent for every participant in a worldwide economy that depends on manufacturing and supply chains in China. Apple’s situation is more complex and perhaps a greater liability because it distributes physical products, apps, and media under its own name. ↥︎

Pei Li, Reuters:

Apple removed 39,000 game apps on its China store Thursday, the biggest removal ever in a single day, as it set year-end as deadline for all game publishers to obtain a licence.

[…]

Including the 39,000 games, Apple removed more than 46,000 apps in total from its store on Thursday. Games affected by the sweep included Ubisoft title Assassin’s Creed Identity and NBA 2K20, according to research firm Qimai.

Qimai also said only 74 of the top 1,500 paid games on Apple store survived the purge.

Yuan Yang, Financial Times, reporting in July that App Store updates were frozen for games before the deadline was extended until the end of the year:

Until now, Apple has allowed Chinese games to be downloaded from the App Store while their developers wait for an official licence from Chinese regulators.

[…]

Analysts and lawyers in Beijing suggested that the Chinese government had decided to step up enforcement on Apple, the largest US company operating in China, after broader tensions between Washington and Beijing.

Apparently, getting a license for paid game titles in China is a huge pain in the ass that requires approval from government censors and an office within the country. But if Apple wants to continue providing apps through its own App Store, it has little choice but to comply with these requirements. Of course, requiring that iOS apps come from the App Store is also a choice, but one that increasingly comes with trade-offs for the company and third-party developers. Is it still a fair compromise?

Amphetamine is a simple free app that sits in the menu bar and keeps a Mac awake — the spiritual successor to Caffeine, which has not been updated in years. It is well-liked; Apple liked it so much they featured it in a Mac App Store story.

So it surely came as a surprise to William C. Gustafson, the app’s developer, when Apple decided that it was in violation of policies that prohibit glorification of controlled substances:

Apple then proceeded to threaten to remove Amphetamine from the Mac App Store on January 12th, 2021 if changes to the app were not made. It is my belief that Amphetamine is not in violation of any of Apple’s Guidelines. It is also my belief that there are a lot of people out there who feel the same way as me, and want to see Amphetamine.app continue to flourish without a complete re-branding.

[…]

Apple further specified: “Your app appears to promote inappropriate use of controlled substances. Specifically, your app name and icon include references to controlled substances, pills.”

I can see how this app could be interpreted as violating those policies. It has a pill for an icon, and amphetamines are controlled substances in most countries. But:

  1. It does not promote drug use any more than the MacOS feature named “Mission Control” gives users the impression they can now work at NASA.

  2. Apple gave this app a dedicated editorial feature in the App Store, thereby increasing awareness of an app called “Amphetamine” — and it is only now that it says the app’s name is incompatible with its policies? That seems like a bait and switch.

I get that App Review might not catch policy violators on a first pass or even after several updates. But surely there comes a point when Apple has to decide that it looks less petty to grandfather in a violation as minor as this one. If an app is featured by the App Store team, Apple ought to forfeit its right to complain about superficial rule-breaking — if that is even what this is, and I am still not convinced that Amphetamine violates the spirit of those policies.

The slightly good news here is that, unlike an iOS app, this Mac app would not cease to exist if it were removed from the store. The developer could choose to distribute it outside of the Mac App Store. But it should be allowed to remain.

Update: Gustafson says that Apple confirmed Amphetamine will stay in the store without a name change. In a parallel universe where this story did not receive press coverage, would the outcome be the same?

Michael Cavna, Washington Post:

The final “Calvin and Hobbes” strip was fittingly published on a Sunday — Dec. 31, 1995 — the day of the week on which Bill Watterson could create on a large color-burst canvas of dynamic art and narrative possibility, harking back to great early newspaper comics like “Krazy Kat.” The cartoonist bid farewell knowing his strip was at its aesthetic pinnacle.

“It seemed a gesture of respect and gratitude toward my characters to leave them at top form,” Watterson wrote in his introduction to “The Complete Calvin and Hobbes” box-set collection. “I like to think that, now that I’m not recording everything they do, Calvin and Hobbes are out there having an even better time.”

Calvin and Hobbes are two characters that felt like old friends from the moment I met them, and that has never faded. It is the finest American comic strip there has ever been.

Jon Gotow (via Michael Tsai):

Yes, the Open and Save dialogs keep appearing at their smallest possible sizes in Big Sur 11.1. It’s not just you, and it’s not something you’ve done wrong – it’s a bug in Big Sur.

[…]

Sadly, resizing the dialog so it’s larger only works on the current one. Every time you’re presented with an Open or Save dialog, it’ll be back to its uselessly small size again because Big Sur doesn’t remember the past size like it’s supposed to.

If this feels like déjà vu, it might be because of a similar bug in Yosemite, in which Open and Save dialogs grew by twenty-two pixels every time they were opened. Coincidentally, or perhaps not, Yosemite was the most recent major redesign of MacOS before Big Sur.