In our discussions, journalists and human rights defenders, including those from Myanmar, described fearing the weight of having to relentlessly prove what’s real and what is fake. They worried their work would become not just debunking rumors, but having to prove that something is authentic. Skeptical audiences and public factions second-guess the evidence to reinforce and protect their worldview, and to justify actions and partisan reasoning. In the US, for example, conspiracists and right-wing supporters dismissed former president Donald Trump’s awkward concession speech after the attack on the Capitol by claiming “it’s a deepfake.”
The TikTok videos of “Tom Cruise” are certainly impressive and terrifying, but they are also an edge case. If you have not yet seen them, I recommend checking them out. I think Gregory nails the real concern of deepfake videos: it is the paranoia, more than the videos themselves. It is the mere presence of deepfakes as a concept that is concerning, because it is yet another piece of technobabble that can manifest in the wrong hands as propaganda and conspiracy theory mongering.
Sure is bizarre to be living at a time when we are, as humankind, more scientifically literate than ever before while increasingly doubting the reality in front of our very eyes. Last year, Kirby Ferguson put together a terrific video about magical thinking. The subject matter is kind of heavy, but it is worth a watch for capturing the strangeness of this time.
Last week, Apple published an update to its Platform Security Guide. The PDF now weighs in at nearly two hundred pages and includes a lot of updates — particularly with the launch of the M1 Macs late last year. Unfortunately, because of its density, it is not exactly a breezy thing to write about.
As wonderful as the Apple Platform Security guide is as a resource, writing about it is about as easy as writing a hot take on the latest updates to the dictionary. Sure, the guide has numerous updates and lots of new content, but the real story isn’t in the details, but in the larger directions of Apple’s security program, how it impacts Apple’s customers, and what it means to the technology industry at large.
From that broader perspective, the writing is on the wall. The future of cybersecurity is vertical integration. By vertical integration, I mean the combination of hardware, software, and cloud-based services to build a comprehensive ecosystem. Vertical integration for increased security isn’t merely a trend at Apple, it’s one we see in wide swaths of the industry, including such key players as Amazon Web Services. When security really matters, it’s hard to compete if you don’t have complete control of the stack: hardware, software, and services.
Vertical integration in the name of privacy and security is the purest expression of the Cook doctrine that I can think of. We got a little preview of the acceleration of this strategy not too long ago — and a glimpse of its limitations in November — but it has been a tentpole of Apple’s security strategy for ages. Recall the way Touch ID was pitched when the iPhone 5S was introduced, for example. Phil Schiller repeatedly pointed to its deep software integration while Dan Riccio, in a promotional video, explained how the fingerprints were stored in the Secure Enclave.
All of this makes me wonder whatever happened to Project McQueen, Apple’s effort to eliminate its reliance on third-party data centres for iCloud. Surely this project did not die when some of the engineers responsible for it left the company, but Apple still depends on others for hosting. From page 109 of the guide:
Each file is broken into chunks and encrypted by iCloud using AES128 and a key derived from each chunk’s contents, with the keys using SHA256. The keys and the file’s metadata are stored by Apple in the user’s iCloud account. The encrypted chunks of the file are stored, without any user-identifying information or the keys, using both Apple and third-party storage services — such as Amazon Web Services or Google Cloud Platform — but these partners don’t have the keys to decrypt the user’s data stored on their servers.
Even though Amazon and Google absolutely cannot — and, even if these files were not strongly encrypted, would not — access users’ data, it is strange that Apple continues to rely on third-party data centres given its tight proprietary integration.
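The scheme Apple describes, deriving each chunk’s key from a hash of that chunk’s contents, is a form of convergent encryption. Here is a minimal sketch of the key-derivation step in Python. Note the chunk size and the truncation of SHA-256 output to a 128-bit AES key are my assumptions for illustration; Apple documents neither, and the actual encryption happens inside iCloud’s infrastructure:

```python
import hashlib

CHUNK_SIZE = 4 * 1024 * 1024  # hypothetical chunk size; not documented by Apple


def chunk_keys(data: bytes, chunk_size: int = CHUNK_SIZE):
    """Split data into chunks and derive a 128-bit key for each chunk
    from the SHA-256 hash of its own contents (convergent encryption)."""
    pairs = []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        # Identical chunks always yield identical keys, so a storage
        # provider can deduplicate ciphertext without holding the keys.
        key = hashlib.sha256(chunk).digest()[:16]  # truncate to AES-128 size
        pairs.append((chunk, key))
    return pairs
```

The useful property is that the party storing the encrypted chunks never needs the keys; Apple keeps those, alongside the file metadata, in the user’s iCloud account.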
I’ve been picking through this guide for the past week and trying to understand it as best I can. It is, as an Apple spokesperson explained to Jai Vijayan at Dark Reading, intended to be more reflective of security researchers’ wishes and needs, which you can read as being more comprehensive with a greater level of technical detail. One item I noticed on pages 14–15 is the new counter lockbox feature in recent devices:
Devices first released in Fall 2020 or later are equipped with a 2nd-generation Secure Storage Component. The 2nd-generation Secure Storage Component adds counter lockboxes. Each counter lockbox stores a 128-bit salt, a 128-bit passcode verifier, an 8-bit counter, and an 8-bit maximum attempt value. Access to the counter lockboxes is through an encrypted and authenticated protocol.
Counter lockboxes hold the entropy needed to unlock passcode-protected user data. To access the user data, the paired Secure Enclave must derive the correct passcode entropy value from the user’s passcode and the Secure Enclave’s UID. The user’s passcode can’t be learned using unlock attempts sent from a source other than the paired Secure Enclave. If the passcode attempt limit is exceeded (for example, 10 attempts on iPhone), the passcode-protected data is erased completely by the Secure Storage Component.
I read this as a countermeasure against devices, such as the GrayKey, that try to crack iPhones by guessing their passcodes through some vulnerability that permits unlimited attempts. I cannot find any record of a GrayKey successfully being used against an iPhone 12 model, but I did find an article highlighting a recent funding round for the company.
But not all are convinced by Grayshift’s long-term capabilities, given Apple’s consistent improvement of the iPhone’s security. The GrayKey is believed to be capable of hacking iPhones up to the iPhone 11, though it’s unclear how effective the tool is against the iPhone 12. “It’s most likely they can’t do much, if anything at all, with the iPhone 12 and iOS 14,” said Vladimir Katalov, CEO of another forensics company, Elcomsoft. “Perhaps they just want to cash out.”
Katalov previously speculated that iOS 12 defeated the GrayKey but, clearly, some new method was developed to keep it working — at least, until recently. A new method may be discovered again. But it seems that Apple is particularly keen to address concerns about passcode vulnerabilities exploited by third parties. It is too bad that iCloud backups remain a critically weak point.
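The erase-on-limit behaviour the guide describes can be modelled as a small state machine. This is strictly a toy: the real lockbox is a hardware component with an authenticated protocol, and the verifier, salt, and entropy formats here are stand-ins for illustration only:

```python
class CounterLockbox:
    """Toy model of a 2nd-generation Secure Storage Component counter
    lockbox: the stored entropy is irreversibly erased once the attempt
    counter reaches the maximum (e.g. 10 attempts on iPhone)."""

    def __init__(self, entropy: bytes, verifier: bytes, max_attempts: int = 10):
        self._entropy = entropy        # secret needed to unlock user data
        self._verifier = verifier      # stand-in for the passcode verifier
        self._counter = 0              # failed-attempt counter
        self._max = max_attempts

    def attempt(self, candidate_verifier: bytes):
        """One unlock attempt; returns the entropy on success, None on
        failure, and erases the entropy when the limit is exceeded."""
        if self._entropy is None:
            raise RuntimeError("entropy erased; data is unrecoverable")
        self._counter += 1
        if candidate_verifier == self._verifier:
            self._counter = 0          # successful unlock resets the count
            return self._entropy
        if self._counter >= self._max:
            self._entropy = None       # irreversible erasure in hardware
        return None
```

The point of doing this in a dedicated component is that the counter cannot be reset or bypassed by software, which is exactly the property a GrayKey-style guessing attack depends on breaking.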
During a scheduled companywide meeting, Andrew Bosworth, Facebook’s vice president of augmented and virtual reality, told employees that the company is currently assessing whether or not it has the legal capacity to offer facial recognition on devices that are reportedly set to launch later this year. Nothing had been decided, he said, and he noted that current state laws may make it impossible for Facebook to offer people the ability to search for others based on pictures of their face.
“Face recognition … might be the thorniest issue, where the benefits are so clear, and the risks are so clear, and we don’t know where to balance those things,” Bosworth said in response to an employee question about whether people would be able to “mark their faces as unsearchable” when smart glasses become a prevalent technology. The unnamed worker specifically highlighted fears about the potential for “real-world harm,” including “stalkers.”
Andrew Bosworth confirmed the report on Twitter with an unsurprisingly defensive tone:
We’ve been open about our efforts to build AR glasses and are still in the early stages. Face recognition is a hugely controversial topic and for good reason and I was speaking about was how we are going to have to have a very public discussion about the pros and cons.
In our meeting today I specifically said the future product would be fine without it but there were some nice use cases if it could be done in a way the public and regulators were comfortable with.
Anyone who thinks for even a second about the negative consequences of a feature like this knows that there is absolutely no circumstance in which this is an open-ended “discussion”, as Bosworth seems to think. Face blindness is certainly real, and we must be compassionate to those who live with it, but adding a technology layer from one of the world’s most privacy-hostile companies is not a solution. It is a catastrophe waiting to happen.
Services are also attempting to reduce the content-moderation load by reducing the incentives or opportunity for bad behavior. Pinterest, for example, has from its earliest days minimized the size and significance of comments, says Ms. Chou, the former Pinterest engineer, in part by putting them in a smaller typeface and making them harder to find. This made comments less appealing to trolls and spammers, she adds.
The dating app Bumble only allows women to reach out to men. Flipping the script of a typical dating app has arguably made Bumble more welcoming for women, says Mr. Davis, of Spectrum Labs. Bumble has other features designed to pre-emptively reduce or eliminate harassment, says Chief Product Officer Miles Norris, including a “super block” feature that builds a comprehensive digital dossier on banned users. This means that if, for example, banned users attempt to create a new account with a fresh email address, they can be detected and blocked based on other identifying features.
No matter how effective platforms become at removing unwanted and inappropriate media, it will always be preferable to me for these services and products to be designed to discourage the need for heavy moderation in the first place. It is unsurprising to me that the platforms taking this approach and highlighted here by Mims are used by women more than, say, Twitter, Reddit, or YouTube. I have long harboured a pet theory that it is a positive feedback loop due, in part, to considering the negative ramifications of specific features. These platforms are certainly not perfect, but their more thoughtful feature design means they are less prone to misuse, which means they are more appealing to women and other people who are more likely to face abuse online. By contrast, platforms that deploy features without that kind of foresight quickly become overwhelmed with misuse, driving away some of those who tend to be on the receiving end.
89,132,938 videos were removed globally in the second half of 2020 for violating our Community Guidelines or Terms of Service, which is less than 1% of all videos uploaded on TikTok. Of these videos, 11,775,777 were removed in the US.
92.4% of these videos were removed before a user reported them, 83.3% were removed before they received any views, and 93.5% were removed within 24 hours of being posted.
51,505 videos were removed for promoting COVID-19 misinformation. Of these videos, 86% were removed before they were reported to us, 87% were removed within 24 hours of being uploaded to TikTok, and 71% had zero views.
TikTok credits its automated systems for detecting violating videos before they were viewed. For comparison, around 94% of YouTube videos were automatically flagged, but only around 40% were removed with zero views.
TikTok’s moderation efforts do come with a bit of an asterisk, however, because the platform is owned by ByteDance, which also runs Douyin, the version of TikTok only available in China.
The truth is, political speech comprised a tiny fraction of deleted content. Chinese netizens are fluent in self-censorship and know what not to say. ByteDance’s platforms — Douyin, Toutiao, Xigua and Huoshan — are mostly entertainment apps. We mostly censored content the Chinese government considers morally hazardous — pornography, lewd conversations, nudity, graphic images and curse words — as well as unauthorized livestreaming sales and content that violated copyright.
It was certainly not a job I’d tell my friends and family about with pride. When they asked what I did at ByteDance, I usually told them I deleted posts (删帖). Some of my friends would say, “Now I know who gutted my account.” The tools I helped create can also help fight dangers like fake news. But in China, one primary function of these technologies is to censor speech and erase collective memories of major events, however infrequently this function gets used.
For clarity, TikTok and Douyin are entirely separate platforms. But one of the reasons TikTok’s moderation efforts are so effective — especially for a platform that has grown dramatically in such a short period of time — is, basically, because they have to be.
According to Kim Scheinberg, she and her husband John Kullmann decided to move back to the east coast in 2000. To make the move work, Kullmann needed a more independent project at Apple, so he started work on an Intel version of Mac OS X. Eighteen months later, in December 2001, his boss asked to see what he had been working on.
In JK’s office, Joe watches in amazement as JK boots up an Intel PC and up on the screen comes the familiar ‘Welcome to Macintosh’.
Joe pauses, silent for a moment, then says, “I’ll be right back.”
He comes back a few minutes later with Bertrand Serlet.
Max (our 1-year-old) and I were in the office when this happened because I was picking JK up from work. Bertrand walks in, watches the PC boot up, and says to JK, “How long would it take you to get this running on a (Sony) Vaio?” JK replies, “Not long” and Bertrand says, “Two weeks? Three?”
JK said more like two *hours*. Three hours, tops.
Bertrand tells JK to go to Fry’s (the famous West Coast computer chain) and buy the top of the line, most expensive Vaio they have. So off JK, Max and I go to Frys. We return to Apple less than an hour later. By 7:30 that evening, the Vaio is running the Mac OS. [My husband disputes my memory of this and says that Matt Watson bought the Vaio. Maybe Matt will chime in.]
I’m Canadian; this story is my only association with Fry’s. From what I’ve seen of the memories that have been posted across the web tonight, it held a special place in the hearts of many.
It appears that Facebook and the Australian government are resolving their differences. Facebook says that it will be restoring links to news on its platform; the government will make some adjustments to the law.
But while a country and a social media company were scuffling, the latter’s power became obvious to those in the South Pacific.
Dr Amanda Watson, a research fellow at the Australian National University’s Coral Bell School of Asia Pacific Affairs, and an expert in digital technology use in the Pacific, said there was widespread confusion across the Pacific about the practical ramifications of Facebook’s Australian news ban.
“Facebook is the primary platform, because a number of telco providers offer cheaper Facebook data, or bonus Facebook data. Many Pacific Islanders might know how to do some basic Facebooking, but it’s questionable if they would be able to open an internet search engine and search for news, or go to a particular web address. There are technical confidence issues, and that’s linked to education levels in the Pacific, and how long people have had access to the internet.”
Watson is describing the practice of zero-rating and one reason why it is so pernicious. Zero-rating sounds great on its face. It means that popular services can strike deals with telecom providers so, at its best, some of the things most people do on the web are not counted against data quotas.
In the case of Facebook Free Basics — formerly Internet.org, which is among the most specious branding exercises I can imagine — there are a handful of websites and services that are included in mobile plans. Many of the websites selected by Facebook to receive this special treatment are American, including Facebook itself, of course. The result of this is that, according to a 2015 survey, only 5% of Americans agreed with the statement that “Facebook is the internet” compared to 65% of respondents in Nigeria, 61% in Indonesia, and 58% in India — countries where Facebook Free Basics is available.
In the quote above, Watson cites a lack of technical confidence as one reason people do not venture beyond Facebook, but there is another major hurdle: cost. Data plans can be expensive, and many news websites are garbage. Sticking to the websites included in Facebook Free Basics is not just easier, it is an economic reality that Facebook is taking advantage of.
In much of the world, internet policy effectively is Facebook policy, and vice-versa. One reason for that is the ferocious speed at which Facebook grew and acquired potential competitors. Though an American company, WhatsApp was wildly popular mostly outside of the U.S. before Facebook bought it. That’s why treating antitrust as a solely American concern — or something of trivial relevance, or something that can be eradicated with the eventual passage of time — is such a frustrating response from those of us who live elsewhere.
So, yes, Australian policy requiring Facebook to pay Rupert Murdoch’s empire so that users of the former can link to the latter does seem pretty ridiculous. But it is extraordinary to see a huge chunk of the world’s ad spending redirected to two American companies headquartered within a ten-minute drive of each other. Many independent and local media entities around the world are bleeding so that Murdoch can buy another yacht with the money Facebook and Google should be using to pay their taxes.
Update: A reluctance to effectively govern in the United States is not the only way to gain technical dominance.
Apple relies on bugs reported through its Feedback systems. As this Spotlight bug isn’t easy to recognise, users and third-party developers are only now realising the effects of Dave’s simple coding error. Without thorough testing, Apple is almost completely reliant on Feedback to detect and diagnose bugs.
This system is both flawed and woefully inefficient, as any expert in quality management will tell you. It’s like letting cars roll off the production line with no windows, and waiting for customers to bring them back to have them installed. By far the best choice is to build correctly the first time, or, as second best, to detect and rectify defects before shipping. So long as shipping updates remains relatively cheap, and your customers are happy to report all the defects which you didn’t fix, it appears to work, at least in the short term.
I’ve now reached the stage where I simply don’t have time to report all these bugs, nor should I have to. Indeed, I’ve realised that in doing so, I only help perpetuate Apple’s flawed engineering practices.
I was thinking about this piece earlier today as I filed a handful of pretty standard bug reports based on some visual problems I noticed in Big Sur.
For each one, Feedback Assistant automatically collected whole-system diagnostics, which wholly consumes system resources for a few minutes as it spits out a folder of logs totalling well over a gigabyte, plus the same folder as a compressed archive. The archive file is submitted to Apple and the uncompressed folder is locally cached for a little while — the oldest one on my drive is from January 4. It does not matter what the feedback is related to; this is a minimum requirement of all bug reports. If you are filing a report about many system features — Bluetooth or Time Machine, for example — it will also require you to collect separate diagnostics.1
Often, I suspect, users will not attach all of the diagnostics needed for Apple’s developers to even find the bug. But I have to wonder how effective it is to be collecting so many system reports all of the time, and whether it is making a meaningful difference to the quality of software — particularly before it is shipped. I have hundreds of open bug reports, many of which are years old and associated with “more than ten” similar reports. How can any engineering team begin to triage all of this information to fix problems that have shipped?
To its credit, the quality of Apple’s software seems to have stabilized in the last year or so. But after the last several years, it feels more like the hole has stopped getting deeper and less like we are climbing out of it.
FB8993839 for the Time Machine bug. I have a recent top-of-the-line iMac connected by USB-C to a fast SSD and it’s still slow as hell. I do not understand this. ↩︎
In July last year, scientists shot another car to Mars from Earth. It touched down a few days ago and, for the first time, provided video of the descent and audio from the planet. There is also a helicopter aboard, which will apparently be flown near the rover within the next month. A little over a century ago, humans first took to the air on Earth; by the end of March, if all goes to plan, humans will remotely pilot an aircraft in another planet’s atmosphere. Incredible.
I linked to an article earlier this month from Mike Masnick of Techdirt, explaining that several similar bills were being pushed in U.S. state legislatures to combat so-called “social media censorship”. These bills share virtually all of their language and have obvious First Amendment problems. In that linked piece, I showed that these bills were likely the work of a bizarre campaign associated with Chris Sevier.
As of February 3, Sevier says that “28 states are moving forward on this”. I could not come up with a number anywhere near that. But there is one thing Sevier is right about: the bill template is now, technically, bipartisan, as Democratic lawmaker Mike Gabbard introduced it in the Hawaiian Senate.
It looks like Apple has started to crack down on scam attempts by rejecting apps that look like they have subscriptions or other in-app purchases with prices that don’t seem reasonable to the App Review team.
9to5Mac obtained access to a rejection email shared by a developer that provides a subscription service through their app. It shows a rejection message from Apple telling them that their app would not be approved because the prices of their in-app purchase products “do not reflect the value of the features and content offered to the user.” Apple’s email goes as far as calling it a “rip-off to customers” (you can read the full letter at the end of this post).
This is not Apple’s sole response to fighting App Store scams. iOS 14.5 has a subtly redesigned subscription sheet that more clearly displays the cost of the subscription and its payment term.
I have waffled a bit on whether it makes sense for Apple to be the filter for the appropriateness of app pricing. It has always been a little bit at the mercy of Apple’s discretion — remember the I Am Rich app? — but legitimate developers have concerns about whether their apps will be second-guessed by some reviewer as being too expensive. And I am quite sure that, if the hypothetical becomes a reality, it is likely to be resolved with a few emails. But developers’ livelihoods are often on the line; there are no alternative native app marketplaces on iOS.
The proof of this strategy’s success will be in Apple’s execution, but that in itself is a little worrisome. It is a largely subjective measure; who is an app reviewer to say whether an app is worth five dollars a week or five dollars a month? Apple does not have a history of wild incompetence with its handling of the App Store, but there are enough stories of mistakes and heavy-handedness that this is being viewed as a potential concern even by longstanding developers of high-quality apps.
I hope this helps. There are enough of these fleeceware scams in the store to be impacting its reputation. A crackdown is clearly necessary. The question is whether the App Store team is capable of executing it.
In recent years, and whether we realize it or not, biometric technologies such as face and iris recognition have crept into every facet of our lives. These technologies link people who would otherwise have public anonymity to detailed profiles of information about them, kept by everything from security companies to financial institutions. They are used to screen CCTV camera footage, for keyless entry in apartment buildings, and even in contactless banking. And now, increasingly, algorithms designed to recognize us are being used in border control. Canada has been researching and piloting facial recognition at our borders for a few years, but — at least based on publicly available information — we haven’t yet implemented it on as large a scale as the US has. Examining how these technologies are being used and how quickly they are proliferating at the southern US border is perhaps our best way of getting a glimpse of what may be in our own future—especially given that any American adoption of technology shapes not only Canada–US travel but, as the world learned after 9/11, international travel protocols.
Canada has tested a “deception-detection system,” similar to iBorderCtrl, called the Automated Virtual Agent for Truth Assessment in Real Time, or AVATAR. Canada Border Services Agency employees tested AVATAR in March 2016. Eighty-two volunteers from government agencies and academic partners took part in the experiment, with half of them playing “imposters” and “smugglers,” which the study labelled “liars,” and the other half playing innocent travellers, referred to as “non-liars.” The system’s sensors recorded more than a million biometric and nonbiometric measurements for each person and spat out an assessment of guilt or innocence. The test showed that AVATAR was “better than a random guess” and better than humans at detecting “liars.” However, the study concluded, “results of this experiment may not represent real world results.” The report recommended “further testing in a variety of border control applications.” (A CBSA spokesperson told me the agency has not tested AVATAR beyond the 2018 report and is not currently considering using it on actual travellers.)
These technologies are deeply concerning from a privacy perspective. The risks of their misuse are so great that their implementation should be prohibited — at least until a legal framework is in place, though I think forever. There is no reason to test them on a “trial” basis; biometric systems solve no new problem by being deployed sooner.
But I am curious about our relationship with their biases and accuracy. The fundamental concerns about depending on machine learning boil down to whether suspicions about its reliability are grounded in reality, and whether we are less prone to examining its results in depth. I have always been skeptical of machines replacing humans in jobs that require high levels of judgement. But I began questioning that very general assumption last summer after reading a convincing argument from Aaron Gordon at Vice that speed cameras are actually fine:
Speed and red light cameras are a proven, functional technology that make roads safer by slowing drivers down. They’re widely used in other countries and can also enforce parking restrictions like not blocking bus or bike lanes. They’re incredibly effective enforcers of the law. They never need coffee breaks, don’t let their friends or coworkers off easy, and certainly don’t discriminate based on the color of the driver’s skin. Because these automated systems are looking at vehicles, not people’s faces, they avoid the implicit bias quandaries that, say, facial recognition systems have, although, as Dave Cooke from the Union of Concerned Scientists tweeted, “the equitability of traffic cameras is dependent upon who is determining where to place them.”
Loath as I am to admit it, Gordon and the researchers in his article have got a point. There are few instances where something is as unambiguous as a vehicle speeding or running a red light. If the equipment is accurately calibrated and there is ample amber light time, the biggest frustration for drivers is that they can no longer speed with abandon or race through changing lights — which are things they should not have been doing in any circumstance. I am not arguing that we should put speed cameras every hundred metres on every road, nor that punitive measures are the only or even best behavioural correction, merely that these cameras can actually reduce bias. Please do not send hate mail.
Facial recognition, iris recognition, gait recognition — these biometrics methods are clearly more complex than identifying whether a car was speeding. But I have to wonder if there is an assumption by some that there is a linear and logical progression from one to the other, and there simply is not. Biometrics are more like forensics, and courtrooms still accept junk science. It appears that all that is being done with machine learning is to disguise the assumptions involved in matching one part of a person’s body or behaviour to their entire self.
It comes back to Maciej Cegłowski’s aphorism that “machine learning is money laundering for bias”:
When we talk about the moral economy of tech, we must confront the fact that we have created a powerful tool of social control. Those who run the surveillance apparatus understand its capabilities in a way the average citizen does not. My greatest fear is seeing the full might of the surveillance apparatus unleashed against a despised minority, in a democratic country.
What we’ve done as technologists is leave a loaded gun lying around, in the hopes that no one will ever pick it up and use it.
Well we’re using it now, and we have done little to assure there are no bystanders in the path of the bullet.
With a Canadian law being drafted that is similar to the one moving forward in Australia, I have been watching this story intently. My hope has been that Canadian lawmakers will see the responses to these policies and adjust theirs accordingly. I am particularly concerned about its effects on local media, like the excellent Sprawl here in Calgary.
Third, this incident provides an important reminder that independent and smaller media will bear the biggest brunt of these policies. The reality is that the Australian battle really pits Facebook against Rupert Murdoch’s media empire. In other words, giant vs. giant. In Canada, the large media companies such as Postmedia and Torstar are the most vocal lobbyists on this issue, but smaller, independent media have already indicated that they do not support the News Media Canada lobbying campaign and want the benefit of links from social media services.
These policy proposals seem to fundamentally misunderstand the use of links on the web. This is entirely speculative, but I have long wondered if the appearance of Open Graph tags has anything to do with confusion about what is part of Facebook and what is third-party material. Link previews have repeatedly been associated with bad framing and untrustworthy practices. I wonder if these thumbnails may also blur the lines too much with what most people consider an external link.
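To make the speculation concrete: a link preview is built from metadata the publisher embeds in its own page, which the platform then renders as a native-looking card. A minimal sketch, using Python’s standard-library HTML parser and an invented sample page (the `og:` properties are real Open Graph conventions; the sample values are mine):

```python
from html.parser import HTMLParser

# A hypothetical article page's head section, as a publisher might write it.
SAMPLE_PAGE = """<head>
<meta property="og:title" content="Example headline">
<meta property="og:image" content="https://example.com/photo.jpg">
<meta property="og:description" content="A summary shown in the preview card.">
</head>"""


class OGParser(HTMLParser):
    """Collect Open Graph meta tags, the way a platform does when it
    builds a preview card for a shared link."""

    def __init__(self):
        super().__init__()
        self.tags = {}

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            a = dict(attrs)
            prop = a.get("property", "")
            if prop.startswith("og:"):
                self.tags[prop[3:]] = a.get("content")


parser = OGParser()
parser.feed(SAMPLE_PAGE)
# parser.tags now holds the headline, image, and summary that Facebook
# renders in its own visual style, inside its own interface.
```

The headline, image, and summary are the publisher’s, but everything a reader sees is drawn in the platform’s chrome, which is precisely where the blurring between “Facebook” and “an external site” happens.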
On February 2, GameStop closed at $90, less than 20 percent of its all-time high, which it had reached just a few days earlier. Like many internet stories, the narrative may start with the “little guy” winning — David against Goliath — but they rarely end that way. The little guy loses, not because he is irrational and too emotional, but because of his relative power in society.
Similarly, Facebook was first celebrated for empowering dissidents during the Arab Spring, but just a few years later it was a key tool in helping Donald Trump win the presidency — and then, later, in clipping his wings, when it joined with other major social-media companies to deplatform him following the insurrection at the Capitol. The reality is that Facebook and Twitter and YouTube are not for or against the little guy: They make money with a business model that requires optimizing for engagement through surveillance. That explains a lot more than the “for or against” narrative. As historian Melvin Kranzberg’s famous aphorism goes: “Technology is neither good nor bad; nor is it neutral.”
As Tufekci writes, the social contract broken by those with power makes us want to see the relatively minor gains by the rest of us as more than they really are — at the same time as wealth and power continue to concentrate.
According to sections of a filing in the lawsuit that were unredacted on Wednesday, a Facebook product manager in charge of potential reach proposed changing the definition of the metric in mid-2018 to render it more accurate.
However, internal emails show that his suggestion was rebuffed by Facebook executives overseeing metrics on the grounds that the “revenue impact” for the company would be “significant”, the filing said.
The product manager responded by saying “it’s revenue we should have never made given the fact it’s based on wrong data”, the complaint said.
Casey Newton, writing earlier this week about Australia’s forthcoming law requiring platforms to pay publishers when the former links to the latter:
I appreciate that more countries are now taking an interest in how to shore up their ailing media companies. But it seems to me that any legislation ought to begin with the aim of creating sustainable media jobs, rather than simply parceling out payments to the country’s biggest publishers. For starters, Australia could invest directly in nonprofit public media, which has consistently been shown to have significant civic benefits.
Or it could head down its current path, which is aimed at reducing the power of the tech giants, but — like so much regulation now under consideration around the world — will likely only entrench them further. For journalism to become more sustainable in the long run, it can’t rely on handouts from the biggest tech companies of the moment to the biggest publishers of the moment.
And that’s why I half hope that Google and Facebook call Australia’s bluff, and pull their news links from the platforms. I’ve never been in love with the idea of Google or Facebook being a primary news destination for most people, anyway. And while a retreat from that world would have some real costs in the short term, any withdrawal would also likely be temporary.
Facebook did; Google did not, instead agreeing to give News Corp “significant payments”. The results of this policy do not appear to encourage quality journalism. Instead, Google has helped further entrench Rupert Murdoch’s longtime dominance of Australian media, while Facebook users will only be able to link to websites not informational enough to be considered news.
But here’s the thing: Peacock doesn’t have to be good to compete. It simply has to exist on Comcast. Peacock Premium (with ads) will be free if you are a Comcast or Cox subscriber, and all of those subscribers are getting Peacock’s ads and driving up Peacock’s subscriber numbers, which makes those ads more lucrative for NBCUniversal. And that should come as no surprise to anyone who knows that Comcast owns NBCUniversal and that Cox licenses Comcast’s streaming platform.
Instead of the old horizontal bundling — in which cable companies packaged a bunch of channels together so that people paid for some things they weren’t going to watch to get what they were — the new bundling is going to be vertical, where you pay for internet and get a streaming service in return. It’s not just Comcast that’s doing this. AT&T owns HBO, and it’s going to give premium AT&T mobile and broadband customers HBO Max (not to be confused, although you could definitely be forgiven for doing so, with the existing HBO Go and HBO Now apps) bundles at no extra charge.
As Netflix and Disney cement their leads over an array of rivals in the video-streaming market, top executives at Comcast and its NBCUniversal arm are in a quandary. Only 11.3 million households regularly watch NBCU’s Peacock service, far fewer than use competing services, according to recent internal data viewed by The Information. NBCU wants to ramp up Peacock’s growth, particularly among paying users, but without spending a lot of money.
One way that Peacock might grow its subscriptions would be to merge or bundle with another firm or service. The Information reported that NBCUniversal has pitched ViacomCBS about bundling with CBS All Access — soon to relaunch as Paramount+ — at a discounted rate, a pitch that evidently piqued ViacomCBS’s interest for a potential offering in overseas markets. But the outlet also cited NBCUniversal chief Jeff Shell, prior to taking his current position, as telling colleagues that the company would need to merge with WarnerMedia to remain competitive. The Information did, however, note that there’s no indication such a deal has been proposed and it’s possible that Shell’s position has changed.
Trendacosta nearly called it, but with a twist: why choose between vertical or horizontal integration when they could do both? Welcome to the era of everything you hate about cable news bundles mixed with everything you hate about monolithic conglomerates.
Technology giants Google and Facebook will be required to negotiate with Australian media companies over payment for news content and notify them of algorithm changes under a mandatory code of conduct.
Unfortunately, this means people and news organisations in Australia are now restricted from posting news links and sharing or viewing Australian and international news content on Facebook. Globally, posting and sharing news links from Australian publishers is also restricted. To do this, we are using a combination of technologies to restrict news content and we will have processes to review any content that was inadvertently removed.
Facebook’s response to this law differs from Google’s — the latter signed a bunch of agreements, including with News Corp, to pay publishers in exchange for showcasing their news. Facebook says that its relationship with publishers is different:
For Facebook, the business gain from news is minimal. News makes up less than 4% of the content people see in their News Feed. […]
Presumably, that means the remaining 96% is feed.
This is the biggest contemporary experiment in figuring out what it is like when publishers in one country no longer receive traffic from Facebook. It is unclear just how many clicks Facebook sends Australian news publishers. Facebook says it provided over five billion referrals last year, which obviously sounds like a lot, but that may be only a single-digit percentage of all news website visits in Australia.
Maybe this means that Australian Facebook users will become some of the best news consumers in the world because they will have to look elsewhere. They won’t rely on what Facebook thinks they want to see. It could be good for publishers, too, who will surely be happy to avoid Facebook’s algorithmic Jenga game.
But, if Facebook referrals are a significant amount of traffic to news websites, this law will have backfired in a quick and predictable way. Or, if Facebook detects “news” material imprecisely — which it will — it could permit the circulation of bullshit. The lies are free.
The North Dakota state Senate is jumping into a simmering feud between Apple and iOS software developers with a bill that would make it illegal for device makers to require the use of their app stores and payment systems.
The bill (PDF) has two main prongs. First, it would make it unlawful for companies such as Google and Apple to make their app stores the “exclusive means” of distributing apps on their platforms. Second, it would prohibit those providers from requiring third parties to use their digital transaction or in-app payment systems in their applications.
Apple Chief Privacy Engineer Erik Neuenschwander told the committee the bill “threatens to destroy iPhone as you know it” by mandating changes which he said would “undermine the privacy, security, safety, and performance that’s built into iPhone by design.”
“Simply put, we work hard to keep bad apps out of the App Store; (the bill) could require us to let them in,” he said.
This argument basically assumes that it’s App Review, not iOS’s security features, that’s protecting users. Yet we have numerous examples of the App Store failing to do so, while at the same time mistakenly blocking good apps and developers. This happens both because the review process doesn’t scale and because it’s technically impossible to completely review how an app will behave. People definitely have more confidence installing software from an app store, but it’s mostly false confidence. Decades of experience with platforms like the Mac and Android that allow sideloading show that a more open approach works just fine. macOS’s anti-malware features have never been better.
The North Dakota state senate voted 36-11 on Tuesday not to pass a bill that would have required app stores to enable software developers to use their own payment processing software and avoid fees charged by Apple and Google.
If this bill had passed, what do you think Apple would have done?
1. Stop offering products and services in North Dakota
2. Construct an entirely separate iOS and App Store model for the citizens of North Dakota
3. Upend its entire App Store business model
I know there are some developers who think the second and third options are likely, but North Dakota has less than a million residents. I think Apple could afford to forgo Fargo.
Microsoft’s Office app — a single app combining Word, Excel and PowerPoint features — is available for iPads from the Apple App Store as of today, February 16. Microsoft officials said earlier this month to expect this app to show up in the App Store, but didn’t say when that would happen.
Microsoft’s goal in creating these lightweight, combined Office apps was to address the needs of users for whom full versions of Word, Excel, and PowerPoint on their mobile devices were overkill in terms of features and download size. Microsoft also integrated its Lens technology into this app to make it easier for users to convert images into Word and Excel documents, scan PDFs, and capture whiteboards. The app also is meant to simplify the process of making quick notes, signing PDFs, and transferring files between devices.
Many of Office’s features are available for iPad users free of charge, but iPad Pro users require a subscription to do anything.
More broadly, I am finding it difficult to adapt to increasingly unified applications on my Mac and iPad. I am not sure if this is an age and experience thing — I am used to switching between apps with multiple documents or windows open. Aside from web browsers and development environments, I use tabs infrequently within any apps because I am often juggling between many files. The advantages of thinking in an application-based model are outweighed, for me, by a document-based model.
This unified Office app has many of the same problems as, for example, Electron apps and web apps generally. Each document consumes the entire app. You can use the app in split screen, as Apple now requires, but it does not fully support multitasking within the app. So it is not possible to, for example, build a PowerPoint presentation based on a Word document outline, or reference one Excel spreadsheet while working in another.
Microsoft’s discrete Office apps do support document-based multitasking, so this is perhaps an oversight. But it is consistent with the way some of Microsoft’s other Office apps work. Teams, for example, is almost entirely a single-window Electron app on macOS. The current meeting will open in a new window but, otherwise, everything happens in one window. There is a built-in “Files” feature — for documents shared with your team members — but it is not possible to open multiple directories at once. Nor is it possible to see a chat window and a calendar at the same time.
Perhaps the way I see applications, windows, and documents is outdated and incompatible with the increasingly web-centric model. I may be failing to adapt. But many of these restrictions seem designed mostly to help companies churn out updates across different operating systems at breakneck pace, without requiring as much platform-specific development. The software-as-a-service model seems to incentivize frequent updates, quality be damned. Unified apps — like cross-platform frameworks and lowest common denominator codebases — are yet another way to reduce friction in development. This trend is worrisome.
Please know that I do not blame individual developers for this. They are merely working within a system that has been created around them.
See Also: “A Step Back”, something I wrote last year about many of Apple’s one-window apps.
Are you stuck inside? Are you tired of the view out of your window? Do you like GeoGuessr but want it to have more motion?
You should give City Guesser a try. Guess the city but, instead of Street View clues, it uses videos of people walking through cities. I’ve found a lot of overlap with my GeoGuessr house rules but with fewer geographic restrictions. Via Waxy.
It has been two and a half years since Bloomberg Businessweek published the now-legendary story of how servers made by Supermicro were compromised by Chinese intelligence at the time of manufacture — servers that ended up in data centres for “a major bank, government contractors”, Apple, and a company acquired by Amazon that counted among its clients the U.S. Department of Defense. Contemporary statements from the named affected companies were unequivocal: either the reporters were completely wrong, or these statements were lies that would carry severe penalties should evidence be found.
In the ensuing years, Jordan Robertson and Michael Riley, the two reporters on the story, have mostly stayed quiet despite frantic calls from security professionals for clarity. Its truthfulness has become something of an obsession for many, including me. On the first anniversary of its publication, I lamented the lack of followup: “either [it is] the greatest information security scoop of the decade or the biggest reporting fuck-up of its type”.
Nearly a year and a half has passed since I wrote that, and it has seemed like it would remain a bizarre stain on Bloomberg Businessweek’s credibility. And then, today, came the followup.
In 2010, the U.S. Department of Defense found thousands of its computer servers sending military network data to China — the result of code hidden in chips that handled the machines’ startup process.
In 2014, Intel Corp. discovered that an elite Chinese hacking group breached its network through a single server that downloaded malware from a supplier’s update site.
And in 2015, the Federal Bureau of Investigation warned multiple companies that Chinese operatives had concealed an extra chip loaded with backdoor code in one manufacturer’s servers.
Each of these distinct attacks had two things in common: China and Super Micro Computer Inc., a computer hardware maker in San Jose, California. They shared one other trait: U.S. spymasters discovered the manipulations but kept them largely secret as they tried to counter each one and learn more about China’s capabilities.
When I woke up this morning and saw Techmeme’s rewritten headline, “Sources: US investigators say hardware and firmware of Supermicro servers were tampered, with an extra chip loaded with a backdoor to send data to China”, I thought there must be some strange bug that is loading old news. Alas, this is a new story, with new sources — over fifty people spoke with the reporters, apparently — new evidence, and new allegations. But rather than clarifying the 2018 article, I find that I have many of the same questions now about two blockbuster articles.
Before I get into my confusion, a necessary caveat: I only have information that has been shared publicly and I am a hobbyist commentator, while Robertson and Riley are journalists who have been collecting details for years. These stories matter a lot, and their allegations are profound, but extraordinary claims demand extraordinary evidence. And based on everything that has been reported so far, I just don’t see it yet. Chalk it up to my own confusion and naïveté, but it seems like I am not alone in finding these reports insufficiently compelling.
Here’s the one-paragraph summary: Supermicro is a big company with lots of clients, any of which would be concerned about a backdoor to a foreign intelligence agency in their hardware. According to these reports, the U.S. intelligence apparatus was mobilized to counter the alleged threat. This has been a high-profile case since the first story was published. And I am supposed to believe that, in two and a half years, the only additional reporting that has been done on this story is from the same journalists at the same publication as the original. Why do I not buy that?
Robertson and Riley’s new report concerns the three specific incidents in the quoted portion above. There is no new information about the apparent victims described in their 2018 story. They do not attempt to expand upon stories about what was found on servers belonging to Apple or the Amazon-acquired company Elemental, nor do they retract any of those claims. The new report makes the case that this is a decade-long problem and that, if you believe the 2010, 2014, and 2015 incidents, you can trust those which were described in 2018. But if you don’t trust the 2018 reporting, it is hard to be convinced by this story.
This time around, there are many more sources, some of which agreed to be named. There is still no clear evidence, however. There are no photographs of chips or compromised motherboards. There are no demonstrations of this attack. There is no indication that any of these things were even shown to the reporters. The new incidents are often described by unnamed “former officials”, though there are a handful of people who are willing to have quotes attributed.
So let’s start with the claims of one of those on-the-record sources:
“In early 2018, two security companies that I advise were briefed by the FBI’s counterintelligence division investigating this discovery of added malicious chips on Supermicro’s motherboards,” said Mike Janke, a former Navy SEAL who co-founded DataTribe, a venture capital firm. “These two companies were subsequently involved in the government investigation, where they used advanced hardware forensics on the actual tampered Supermicro boards to validate the existence of the added malicious chips.”
Janke, whose firm has incubated startups with former members of the U.S. intelligence community, said the two companies are not allowed to speak publicly about that work but they did share details from their analysis with him. He agreed to discuss their findings generally to raise awareness about the threat of Chinese espionage within technology supply chains.
Do not be distracted by the description of Janke as a former Navy SEAL. It is irrelevant to this matter.
One of the companies that has received funding from DataTribe is Dragos, which promises “industrial strength cybersecurity for industrial infrastructure”. It is not clear whether Dragos was one of the firms that received an FBI briefing. However, Dragos’ CEO Robert M. Lee has been consistently critical of Robertson and Riley’s reporting. Lee continues to be skeptical of their claims, saying that they have “routinely shown they struggle on technical details”. That becomes apparent in a detail in this adjacent story of apparently compromised Lenovo ThinkPads used by U.S. forces in Iraq in 2008:
“A large amount of Lenovo laptops were sold to the U.S. military that had a chip encrypted on the motherboard that would record all the data that was being inputted into that laptop and send it back to China,” Lee Chieffalo, who managed a Marine network operations center near Fallujah, Iraq, testified during that 2010 case. “That was a huge security breach. We don’t have any idea how much data they got, but we had to take all those systems off the network.”
Three former U.S officials confirmed Chieffalo’s description of an added chip on Lenovo motherboards. The episode was a warning to the U.S. government about altered hardware, they said.
That quote was pulled from a court transcript, and Chieffalo really did say “a chip encrypted on the motherboard”. That phrase is gibberish. It seems likely that Chieffalo meant to say “a chip embedded on the motherboard”, but the transcript includes no attempt at correction. More worrying for this story, Chieffalo was quoted wholesale without any note from the reporters. It seems reasonable that they could not speculate about the intended word choice, but surely they could have reached Chieffalo for clarification. If not, it seems like an odd choice to approvingly quote it; it undermines my trust in the writers’ understanding.
That trust is critical, particularly as this report implies a much more severe allegation. In 2018, Robertson and Riley wrote that Supermicro servers were compromised at the subcontractor level:
During the ensuing top-secret probe, which remains open more than three years later, investigators determined that the chips allowed the attackers to create a stealth doorway into any network that included the altered machines. Multiple people familiar with the matter say investigators found that the chips had been inserted at factories run by manufacturing subcontractors in China.
That suggests some distance between Supermicro itself and its allegedly compromised boards. If this is true, the company has some wiggle room there to disclaim awareness and terminate that supplier relationship. But in today’s report, Robertson and Riley step up the level of Supermicro’s involvement:
Manufacturers like Supermicro typically license most of their BIOS code from third parties. But government experts determined that part of the implant resided in code customized by workers associated with Supermicro, according to six former U.S. officials briefed on the findings.
Investigators examined the BIOS code in Defense Department servers made by other vendors and found no similar issues. And they discovered the same unusual code in Supermicro servers made by different factories at different times, suggesting the implant was introduced in the design phase.
Overall, the findings pointed to infiltration of Supermicro’s BIOS engineering by China’s intelligence agencies, the six officials said.
The report is careful to say that there is no evidence of executive involvement, and that these changes would have been made by people in a position to be working directly with Supermicro’s server technologies. But that still implies knowledge of this alleged compromise at much closer proximity than some factory in China.
The BIOS manipulation above is dated to 2013. The following year, the report says, the FBI detected nefarious chips on “small batches” of Supermicro boards:
Alarmed by the devices’ sophistication, officials opted to warn a small number of potential targets in briefings that identified Supermicro by name. Executives from 10 companies and one large municipal utility told Bloomberg News that they’d received such warnings. While most executives asked not to be named to discuss sensitive cybersecurity matters, some agreed to go on the record.
In 2018, Businessweek said there were up to thirty companies; it is not clear how much overlap there is with the eleven above. But, as Robertson and Riley write, not a single one has said they found evidence of infiltration. Some blamed a dearth of information from the FBI for their inability to find a problem with their servers, but what if the supposed rogue chips simply did not exist? That would make it especially hard to find evidence for them. Just because government agencies are providing briefings of a possible problem, it does not necessarily mean that problem exists as described.
Here’s one more named source with a funny story:
Darren Mott, who oversaw counterintelligence investigations in the bureau’s Huntsville, Alabama, satellite office, said a well-placed FBI colleague described key details about the added chips for him in October 2018.
“What I was told was there was an additional little component on the Supermicro motherboards that was not supposed to be there,” said Mott, who has since retired. He emphasized that the information was shared in an unclassified setting. “The FBI knew the activity was being conducted by China, knew it was concerning, and alerted certain entities about it.”
If there is a phrase that is jumping out to you in this quote, it is probably “October 2018” because that is when Robertson and Riley published their original “Big Hack” piece. It seems completely plausible to me that Mott’s colleague was describing that Businessweek article. There is nothing here that suggests the colleague was referring to independent knowledge. On the contrary, the fact that this was shared in an “unclassified setting” runs counter to the repeated assertions in both articles about the sensitivity and secrecy of these operations — so secret that, apparently, not even Supermicro was supposed to know.
There is one more incident described in detail. This time, Intel was the supposed target in 2014:
Intel’s investigators found that a Supermicro server began communicating with APT 17 shortly after receiving a firmware patch from an update website that Supermicro had set up for customers. The firmware itself hadn’t been tampered with; the malware arrived as part of a ZIP file downloaded directly from the site, according to accounts of Intel’s presentation.
This delivery mechanism is similar to the one used in the recent SolarWinds hack, in which Russians allegedly targeted government agencies and private companies through software updates. But there was a key difference: In Intel’s case, the malware initially turned up in just one of the firm’s thousands of servers — and then in just one other a few months later. Intel’s investigators concluded that the attackers could target specific machines, making detection much less likely. By contrast, malicious code went to as many as 18,000 SolarWinds users.
This posits an incredibly sophisticated attack — but, again, without supporting evidence. The report says that two steel companies based outside of the U.S. received compromised firmware in 2015 and 2018 from that update site. The Bloomberg story does not mention a 2016 case where Apple found an “infected driver” on one of its servers, which it determined to be accidental. All of these cases point back to an update server that Supermicro’s statement implies was not being served over HTTPS — pause for effect — until some time after that 2018 incident. That’s pretty bad security.
But is it possible that these were more isolated events, and not precise attacks? I am not doubting Intel’s investigative competence, but I am questioning whether the details of this internal presentation have been accurately relayed to Robertson and Riley. There is no indication that the reporters saw the presentation themselves. If you shed the narrative and look at what is being described here, it sounds like APT17 — an infiltration team that FireEye attributes to the Chinese government — might have compromised Supermicro’s update server and planted malware for its clients to inadvertently install. Both Apple and Intel have denied that this was of notable concern. Malware is certainly a worry, though I am having trouble after all this time trusting the reporting I am basing my theory on. But there is a vast chasm between what has become a routine breach of a supplier with high-value clientele, and the supply chain hardware attack that Bloomberg has been reporting for two and a half years now without turning up a single piece of direct evidence.
FWIW, my money is on this whole saga being, if you dig deeply enough, just briefings related to the 2016 supermicro bad firmware update incident filtered through so many games of telephone that it’s eventually twisted itself into a story about tiny chips that never happened.
The problem remains that we just do not know what is going on here. This is not a trivial matter: there are many companies that rely on Supermicro hardware, and they need to know if there is any chance that any of it is compromised. We now have two lengthy and deeply reported stories with ostensibly alarming conclusions that have produced more confusion than clear answers.
A key indicator of the risk seen in these reports is how Supermicro’s clients behaved after these incidents were disclosed. It turns out that many of them — including Intel, the Pentagon, and NASA — have continued to use Supermicro as a supplier. One would think that, if there were concerns about the security of the company’s products, clients would be cancelling contracts left and right.
Everything about this story is wild and hard to believe. Apparently, there were three different vectors of vulnerabilities in Supermicro products: BIOS manipulation, malicious chips, and insecure firmware updates. In Robertson and Riley’s telling, all three have been exploited over the last eleven years. These attacks cover a few dozen high-profile companies and are being investigated by U.S. intelligence agencies; those agencies are briefing other organizations about the danger. Yet there are only two journalists who have heard anything about this, despite this supposed supply chain attack being one of the most-watched information security stories in recent memory, and Supermicro still is not a prohibited vendor.
I would find this more compelling if this story were corroborated by more outlets with different sources, or if Robertson and Riley were able to produce more rigorous evidence. Then, at least, there would be some clarity. Right now, it feels like I’ve seen this movie before.
When you join the fast-growing, invite-only social media app Clubhouse — lucky you! — one of the first things the app will ask you to do is grant it access to your iPhone’s contacts. A finger icon points to the “OK” button, which is also in a bolder font and more enticing than the adjacent “Don’t Allow” option. You don’t have to do it, but if you don’t, you lose the ability to invite anyone else to Clubhouse.
Granting an app access to your contacts is ethically dicey, even if it’s an app you trust. If you’re like most people, the contacts in your phone include not just your real-life friends, but also old acquaintances, business associates, doctors, bosses, and people you once went on a bad date with. For journalists, they might also include confidential sources (although careful journalists will avoid this). When you upload those numbers, not only are you telling the app developer that you’re connected to those people, but you’re also telling it that those people are connected to you — which they might or might not have wanted the app to know. For example, say you have an ex or even a harasser you’ve tried to block from your life, but they still have your number in their phone; if they upload their contacts, Clubhouse will know you’re connected to them and make recommendations on that basis.
I have previously written in passing about how invasive it is for apps and services to be able to vacuum up contacts. But I am not sure I have fully expressed how much of a catastrophe it is for privacy and consent.
In 2012, Apple began requiring explicit permission for apps to access a user’s contacts, after many apps — including Path and Foursquare — were shown to be uploading contact lists without users’ explicit knowledge or consent. The thing that has always bugged me about this arrangement is that there is no involvement in this decision from the people in the contact list.
Just because I have someone’s phone number or email address, that does not make it right to push that information into some app’s database. Likewise, when I give someone my contact details, it is not immediately apparent that they will likely, at some point, pass that along to a company that I would not elect to share that information with. And it isn’t just email addresses and phone numbers: contact directories contain all sorts of unique information about people, and it is trivial to merge identifiers to produce more comprehensive dossiers about individuals. This is not hypothetical; it is often marketed as a feature.
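To see how trivial this merging is, here is a minimal sketch. Everything in it — the names, numbers, and field layout — is invented for illustration; it only shows how two separately uploaded address books can be joined on a shared identifier, a phone number, to build a fuller profile of someone who consented to neither upload.

```python
def merge_dossiers(*uploads):
    """Combine contact records from multiple uploads, keyed by phone number."""
    dossiers = {}
    for upload in uploads:
        for record in upload:
            entry = dossiers.setdefault(record["phone"], {})
            for key, value in record.items():
                # Keep the first non-empty value seen for each field.
                if key != "phone" and value:
                    entry.setdefault(key, value)
    return dossiers

# Two different users upload their address books; each knows
# different things about the same person.
upload_a = [{"phone": "+1-555-0100", "name": "J. Doe", "email": ""}]
upload_b = [{"phone": "+1-555-0100", "name": "",
             "email": "jdoe@example.com", "employer": "Acme Corp"}]

merged = merge_dossiers(upload_a, upload_b)
# One phone number now links a name, an email address, and an employer.
print(merged["+1-555-0100"])
```

A real data broker would be joining millions of records across many more identifiers, but the mechanics are no more complicated than this.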
The permission dialog iOS presents users before an app is able to access their contacts is, in a sense, being presented to the wrong person: can you really consent on behalf of hundreds of friends, family members, and acquaintances? From a purely ethical perspective, the request ought to be pushed to every contact in the directory for approval, but that would obviously be a nightmare for everyone.
There are clearly legitimate uses for doing this. Allowing people to find contacts already using a service, as Clubhouse is doing, is a reasonable feature. It does not seem like something that can be done on-device, so the best solution that we have is, apparently, to grant apps permission to collect every contact on our phones. But that is a ludicrous tradeoff.
This is why it is so important for there to be strict privacy regulations — particularly in the United States. It should not be left up to individuals or businesses to decide to what extent they are comfortable allowing their users to violate the privacy of others. I do not think legitimate uses of contact matching should be banned; I think these features should be made safer.
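One frequently proposed middle ground — sketched below with made-up numbers, and not how any specific app actually works — is to upload salted hashes of contacts instead of the contacts themselves, so the server only learns about matches. It is far from a complete fix: phone numbers occupy such a small keyspace that hashes are trivially brute-forced, which is why serious proposals reach for private set intersection instead. Still, it illustrates the shape of a safer design:

```python
import hashlib

# Illustrative only: match contacts by salted hash rather than raw numbers.
# Real deployments need more than this — phone numbers are easily
# brute-forced, so private set intersection is the better tool.

SALT = b"hypothetical-service-wide-salt"

def hash_contact(phone_number: str) -> str:
    digits = "".join(ch for ch in phone_number if ch.isdigit())
    return hashlib.sha256(SALT + digits.encode()).hexdigest()

# Server side: users who joined the service, stored only as hashes.
registered = {hash_contact("4035550100"), hash_contact("4035550199")}

# Client side: upload hashes of the address book, not the address book.
def find_matches(address_book):
    return [c for c in address_book if hash_contact(c) in registered]

print(find_matches(["403-555-0100", "403-555-0123"]))  # ['403-555-0100']
```

Even this toy version avoids handing the developer a plaintext copy of everyone you know, which is more than most contact-upload features can say.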
So, already feeling very out of sorts, let’s recall, I requested help from the university’s enrollment/grade/whateverthefuck portal help staff, and got into “line” behind 20 people. It did keep updating, you’re 16th, you’re 11th, you’re 6th! This was the best part of my day by far. I like to think I occupied each place in line with proper reverence.
My turn came. I explained the problem. The help guy said he would send me an email. And then he sent an email, to my email, the only email I ever use, the email that never doesn’t get emails, ever. I have no reason to disbelieve this. And I didn’t get it. He sent another one, I didn’t get it. He said I would have to call the university and talk to them about why I couldn’t get emails from the enrollment/grade/whateverthefuck portal. As soon as our chat ended, I got two emails from the help desk of the enrollment/grade/whateverthefuck portal I couldn’t get emails from, one saying, sorry you can’t get emails, we did everything we could, and the other a transcript of the conversation about how I couldn’t get emails. Just in case you’re not grasping this: I got two emails from the place I couldn’t get one email from about not getting that email.
There never seems to be an explanation for these things. This piece spoke to me — much like Miller, I find that many of the technical errors I encounter every day appear just after I have done all of the right things, clicked all of the right buttons, and said the exact right incantation. And then comes some cheerful error message telling me to try again now or later — or, even worse, nothing at all. I read that circumstance as a good time to step away and peel an orange.
John D. McKinnon and Alex Leary, Wall Street Journal:
A U.S. plan to force the sale of TikTok’s American operations to a group including Oracle Corp. and Walmart Inc. has been shelved indefinitely, people familiar with the situation said, as President Biden undertakes a broad review of his predecessor’s efforts to address potential security risks from Chinese tech companies.
Whatever shape a possible TikTok deal takes, it is not likely to feature the $5 billion fund for education that Mr. Trump had said Oracle and Walmart were preparing to create as part of the deal, according to one of the people familiar with the situation.
To be clear, this was always a sale for show. It remains uncertain whether Oracle and Walmart were actually going to create an education fund, and the curriculum that would apparently have been promoted was the context-free 1776 Project.
Justin O’Beirne is so good at these articles. In this new one, he explores four primary questions:
Why was the Look Around coverage in Canada available so soon after Apple began collecting it and across so much land, compared to every other country Apple has released Look Around imagery for so far?
Why do points of interest differ significantly between Apple Maps’ flat view, Look Around imagery, and the real world?
Why do many of Apple’s most recent Look Around releases lack points of interest?
Whatever happened to the vans Apple showed off to Matthew Panzarino in June 2018? The most recent image I can find with one of those LiDAR vans was captured in September 2018 — but none of its imagery is available in Maps nor, as O’Beirne shows, are any of the images captured by those vans.
This is one of my favourite pieces that O’Beirne has done, and not just because he has kind words for yours truly. I love how mysterious this process is and, in particular, how it compares to Google’s efforts. Google obviously has more practice with building maps and processing street-level images. Apple’s approach reveals some hiccups and question marks — I really want to know what those vans were capturing and why they are apparently no longer used.
From one-off mistakes made by developers on their own machines, to misconfigured internal or cloud-based build servers, to systemically vulnerable development pipelines, one thing was clear: squatting valid internal package names was a nearly sure-fire method to get into the networks of some of the biggest tech companies out there, gaining remote code execution, and possibly allowing attackers to add backdoors during builds.
One of the things that is most impressive and, therefore, terrifying about this attack vector is just how clean it is. It has echoes of malicious software updates but it is even more straightforward.
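The mechanics are simple enough to sketch. This is a naive model of pre-fix installer behaviour, not any real package manager’s actual code, and the package versions and index URLs are invented — but a resolver that merges candidates from a private index and the public one, then prefers whichever has the highest version number, will happily choose an attacker’s squat:

```python
# A hedged sketch of why "dependency confusion" works: a naive resolver
# merges candidates from every configured index and picks the highest
# version, regardless of which index offered it.

def parse_version(version: str) -> tuple:
    return tuple(int(part) for part in version.split("."))

def resolve(candidates):
    """candidates: list of (version, index_url) pairs for one package name."""
    return max(candidates, key=lambda c: parse_version(c[0]))

internal = ("1.2.3", "https://pypi.internal.example/simple")  # legitimate build
public = ("9000.0.0", "https://pypi.org/simple")  # attacker's squatted copy

version, origin = resolve([internal, public])
print(version, origin)  # the squatted public package wins on version alone
```

No phishing, no exploit, no compromised credentials — just a public package with the right name and an absurdly high version number.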
Home is supposed to be a constant, steady place, a shelter for a family. It shouldn’t change very much. But an office is basically a big clock with humans for hands. And I find that the people who don’t want to go back to pre-pandemic office culture are the people who are the most concerned about their time. Sometimes this is their personality; they are engineers who look at travel as a waste, who seek efficiencies in their work and health. Sometimes they’re people with other stress, like parents of young children who triangulate between the day care’s schedule, their boss’s expectations, and kids’ needs. For a disabled person, working from home can save hours of daily, needless negotiation. All of these cases are utterly valid. And yet we’re going back. Maybe not all of us, maybe with hybrid schedules. But most of us. We all know it.
This is, as I say, a perfect essay. I would not change one word. But, though I am skeptical of my ability to make a contribution, I always like to add a little something to these links, so here goes.
The thing I miss most about living and working in different places is that it is necessary to travel between them. I began missing my commute so much that, last summer and autumn, I would often take long walks after a workday. That now feels like a luxury. As I write this, Environment Canada’s website tells me that it feels like thirty degrees below freezing and, if I had to go into the office tomorrow at my usual time, it is forecasted to feel ten degrees colder than that.
Much as I am glad to not be turned into a human icicle, the missing drumbeat of home-walk-work-walk-home has been a difficult adaptation. To make matters more monotonous, I am working from the same home office every day because my computer with all of my work stuff is an iMac and it is difficult to drag around. So it doesn’t matter what I am doing — writing email, meeting colleagues, having a quick chat — it all happens in exactly the same place staring at exactly the same screen sitting in exactly the same chair rolling around on exactly the same rug. The rug is a new addition for this year, actually; I thought it would help make my work space feel different and separate. It felt that way for only about a week.
I wonder, will my home office ever become part of my home again, or will it always feel like an extension of where I work?
I am linking to this with the caveat that it is a Bloomberg Businessweek story about supply chains in China, which is a particular genre that the magazine has completely bombed before without any public reckoning or accountability. One wonders why Bloomberg continues to leave its tattered reputation dangling in the wind, making it hard to trust stories from any of its writers.
With that in mind, here’s Austin Carr and Mark Gurman with a look into Apple’s supply chain and how Tim Cook grew Apple from a mere icon in 2011 to the world’s most valuable publicly-traded company:
Apple’s turnaround in the ensuing years has generally been attributed to Jobs’s product genius, beginning with the candy-colored iMacs that turned once-beige appliances into objets d’office. But equally important in Apple’s transformation into the economic and cultural force it is today was Cook’s ability to manufacture those computers, and the iPods, iPhones, and iPads that followed, in massive quantities. For that he adopted strategies similar to those used by HP, Compaq, and Dell, companies that were derided by Jobs but had helped usher in an era of outsourced manufacturing and made-to-order products.
Contract manufacturers worked with all the big electronics companies, but Cook set Apple apart by spending big to buy up next-generation parts years in advance and striking exclusivity deals on key components to ensure Apple would get them ahead of rivals. At the same time he was obsessed with controlling Apple’s costs. Daniel Vidaña, then a supply management director, says Cook particularly fussed over fulfillment times. Faster turnarounds made customers happier and also reduced the financial strain of storing unsold inventory. Vidaña remembers him saying that Apple couldn’t afford to have “spoiled milk.” Cook lowered the company’s month’s worth of stockpiles to days’ and touted, according to a former longtime operations leader, that Apple was “out-Dell-ing Dell” in supply chain efficiencies.
Since Apple made the call to remove HKmap.live from the App Store — a decision that was apparently made to appease a Chinese government that is worried about pro-democracy demonstrators — I have been intrigued by how closely Tim Cook’s ascendance dovetailed with that choice. Hong Kong’s sovereignty was returned to China in July 1997; Cook joined Apple less than a year later in March 1998. Cook was a primary force in moving Apple’s production lines to China, mostly to factories that are located just across the Sham Chun River, which separates mainland China and Hong Kong.
Over the last twenty years, Apple’s dependency on China has grown, as has China’s influence over Hong Kong. The two paths collided in 2019 when demonstrators in Hong Kong used an iOS app to alert others about the location of police barricades, and Apple under Cook’s leadership removed that app from the store. Some commentators saw this as protecting Apple’s access to the Chinese market; I bet its reliance on factories in that country was a greater motivator.
A recent Nikkei report indicated that Apple is seeking to reduce that dependency. But it does not seem to be going well, according to Carr and Gurman:
When Apple engineers started setting up manufacturing in Texas, sources familiar with the matter say, they had a difficult time finding local suppliers willing to invest in retooling their factories for a one-off Mac project. According to a former Apple supply chain worker, huge quantities of certain components needed to be imported from Asia, which caused a domino effect of delays and costs. If a shipment arrived with defective parts, for example, the Texas factory had to wait for the next air-cargo delivery; at factories in Shenzhen, supply replacements were a short drive away. It felt like the opposite of Gou’s ultra-efficient all-in-one Foxconn hubs. “We really emphasized with the suppliers to triple-check their product before they put it on a plane to Texas,” this worker says. “It was a pain.”
Meanwhile, Apple has moved some production of AirPods to Vietnam and iPhones to India, where the company has run into scale and quality issues, too. More significant manufacturing diversification is likely to take years, even as Cook faces pressure to decouple from China over censorship, human-rights violations, and criticism about labor conditions at mainland factories. In an all-hands meeting last year, an employee asked Dan Riccio, then Apple’s hardware chief, why the company continues to build products in China given these ethical problems. The crowd cheered. “Well, that’s above my pay grade,” he responded, before adding that Apple was still working to expand its manufacturing presence beyond China.
I do think that Apple’s executive team really believes in social justice and trying to do the right thing. The factories of its contract manufacturers in China undermine that, and I think they are cognizant of that. But modifying a supply chain as integrated and complex as Apple’s to give the company more leverage in its negotiations with Chinese government officials is an enormous task.
This report contains little new information, but it is an engaging summary of how this supply chain has evolved over time — right up until you get near the end and then there’s this weird paragraph:
In many ways, Cook is now applying the lessons Apple learned building its China manufacturing network to other parts of the business. Its operational prowess has enabled it to churn out more product permutations and accessories. And just as Apple uses its awesome buying power to extract concessions from suppliers, it’s now using its control over an equally impressive digital supply chain, which includes the company’s own subscription services, as well as third-party apps, to generate greater revenue from customers and software developers. In an October report on the tech industry, the House antitrust subcommittee said this influence of its App Store amounted to “monopoly power” and recommended that regulators step in.
Perhaps I am missing something, but the connection between the physical supply chain and the App Store’s distribution policies is tenuous. It is also incorrect: while Carr and Gurman say that Apple is exercising greater control to generate more revenue through its App Store, one of the few notable highlights for iOS developers last year was the announcement of the Small Business Program which lowered Apple’s commission to 15% up to one million dollars. There are many caveats and it is imperfect, but it is the opposite of the squeeze the company puts on its hardware suppliers, not its analog.
Late last December we started getting a distress call from our forum patrons. Patrons were experiencing ads that were opening via their default browser out of nowhere. The odd part is none of them had recently installed any apps, and the apps they had installed came from the Google Play store. Then one patron, who goes by username Anon00, discovered that it was coming from a long-time installed app, Barcode Scanner. An app that has 10,000,000+ installs from Google Play! We quickly added the detection, and Google quickly removed the app from its store.
The basic premise of corrupting a straightforward utility app is not new, but it is concerning. In 2018, Craig Silverman of Buzzfeed News revealed that an entire company with the unsubtle name We Purchase Apps acquired over a hundred of them to execute an ad fraud scheme on Android. Becky Hansmeyer wrote about how an iOS wallpaper app was flipped to another developer that packed it full of ads. Browser extensions are another popular vector, particularly for analytics companies that want to spy on users’ browsing. A particularly aggressive Chrome extension generated a 2016 FTC investigation.
On iOS, if you turn on “Limit Adult Website” under Screen Time -> Content Restrictions, Safari blocks any website URL containing the word “asian”. Seriously, go try it, it’s unbelievable. I filed a [Feedback] a long time ago. Nothing changed.
Other racial or ethnic search terms such as “Black,” “white,” “Korean,” “Arab” or “French” don’t seem to be impacted by the filter. Also confusingly, some popular pornographic search terms are blocked while others aren’t. For example, the search term “schoolgirl” isn’t blocked but “redhead” is.
Right now, it’s not clear how Apple decides which search terms are adult and which ones aren’t. It’s very possible that this is a goof from some AI program that just scrapes search results for popular porn terms. But at the same time, something like this should not be left to AI without some sort of human curation. The alternative — that a group of humans did oversee and approve this list of terms — would be exponentially worse. Regardless of whether this was or wasn’t an intentional decision, the fact this had been reported to Apple more than a year ago and still nothing has changed is massively disappointing.
It isn’t just iOS — the equivalent feature in MacOS also blocks “Asian” websites, but less consistently. If I enable website restrictions on MacOS, I am able to search Wikipedia and access articles with “Asian” in their titles and URLs; on iOS, those same searches and articles are blocked.
It is also not the sole racial or racially-related term blocked by Screen Time, but it is one of very few. While you can visit Ebony Magazine’s website with parental controls enabled, searching Google for the lyrics to Paul McCartney and Stevie Wonder’s “Ebony and Ivory” is blocked. “Asian” is, however, the only term whose blocking prohibits searching for an entire continent.
This is upsetting, made all the more so by Apple’s failure to acknowledge Shen’s bug report. I described this as a Scunthorpe problem in an earlier version of this post, but that is hardly the case: “Asian” — and “ebony”, for that matter — would need more words and context before they could be considered sexualized.
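For what it’s worth, this kind of over-blocking is easy to model. A filter that checks whether a URL contains any blocklisted string — sketched below with an invented two-term blocklist, which is emphatically not Apple’s actual list — cannot distinguish a Wikipedia article about Asian cuisine from the content it means to block:

```python
# A toy model of keyword URL filtering; the blocklist here is invented
# for illustration and is not Apple's actual list.
BLOCKLIST = {"asian", "redhead"}

def is_blocked(url: str) -> bool:
    lowered = url.lower()
    return any(term in lowered for term in BLOCKLIST)

print(is_blocked("https://en.wikipedia.org/wiki/Asian_cuisine"))  # True
print(is_blocked("https://example.com/recipes"))  # False
```

Whatever Apple’s filter actually does internally, the observed behaviour is consistent with something this blunt — which is exactly why a bare demonym should never appear on such a list.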
Peter Guest, Sen Nguyen, and Randy Mulyanto, Rest of World:
Over the past few years, Grab and other ride-hailing players like Gojek Vietnam, owned by the Indonesian unicorn Gojek, have burned millions of dollars in investor cash trying to attract price-sensitive Vietnamese customers and capture as much market share as possible. The strategy included offering steep discounts to consumers and generous incentives to drivers, who work as independent contractors with few labor protections.
The problem is that now their investors are eager to turn a profit, and the well of cash is starting to dry up. In Vietnam and other Southeast Asian countries, like Indonesia, Grab and Gojek have begun raising prices and squeezing gig workers, especially after Covid-19 lockdowns drastically cut the number of people using ride-hailing apps.
But the two companies are also part of a pattern that started emerging around the world long before the pandemic. Enormous venture firms, most notably the Japanese mega-fund SoftBank, have poured billions into startups with the express purpose of winner-takes-all domination. Often, when it comes time to actually make money, it’s workers at the bottom who bear the burden.
This piece largely concerns speculation of a merger of Grab and Gojek, which would give the combined company a near-monopoly on ride hailing throughout Southeast Asia. Both companies also operate ancillary services that take advantage of their ubiquity; this would be worrying for workers in many industries in the region.
But this is also, in the periphery, about the distorting effect of massive infusions of venture capital. I’ve already written about how companies like Uber have bled billions of dollars on the premise that they can make self-driving cars, something which they offloaded in December. But it is equally worrying that the funding of some venture-backed companies is dependent on aspirations of becoming a monopoly in key markets — something that is enabled by decades of weak antitrust regulation worldwide.
This is my “something lighter” that I promised in the last post: an article about how all software will die one day.
Choosing change is tough, but sometimes you don’t choose, and there’s no obvious benefit at the end of the process. If one of your key tools is discontinued, or becomes incompatible with the next version of your operating system or the new hardware you just bought, you’re going to be forced to move eventually. Call Recorder users can keep using it for now, but as soon as they buy a Mac that’s running Apple silicon, the jig is up. Change is coming, inevitably.
I wish software were more durable over the long term, but that comes with its own baggage. That has long been the wrench in the spokes of Microsoft’s bicycle: some companies depend on software written when I was learning to walk.
On an individual level, we have to be okay with adaptation, but it is hard. Luckily, the whole world is constantly changing as well. My favourite instant messaging client no longer works — largely because the IM protocols I once relied upon no longer exist — but that is okay because my friends have moved on to phone-based messaging. It comes with unique benefits; it comes with new drawbacks. And we keep chatting along because it is fine.
One more policy-related post for a hat trick today, and then I promise I will link to something lighter.
There are several recommendations at the beginning of that NYU report I previously linked to. One of them is probably familiar to anyone who has read internet policy discussions for the past, say, three or four years:
Work with Congress to update Section 230. The controversial law should be amended so that its liability shield is conditional, based on social media companies’ acceptance of a range of new responsibilities related to policing content. One of the new platform obligations could be ensuring that algorithms involved in content ranking and recommendation not favor sensationalistic or unreliable material in pursuit of user engagement.
Well, it turns out that Senator Mark Warner introduced a new bill today, cosponsored by Sens. Mazie Hirono and Amy Klobuchar, called the SAFE TECH Act (PDF). Here’s a pause where you can try to work out what that acronym stands for.
It’s the Safeguarding Against Fraud, Exploitation, Threats, Extremism, and Consumer Harms Act. Nice try, Mark, but it’s no “USA PATRIOT Act”.
The impression given by the press release is that this legislation is merely a minor revision of Section 230 of the Communications Decency Act:
These changes to Section 230 do not guarantee that platforms will be held liable in all, or even most, cases. Proposed changes do not subject platforms to strict liability; and the current legal standards for plaintiffs still present steep obstacles. Rather, these reforms ensure that victims have an opportunity to raise claims without Section 230 serving as a categorical bar to their efforts to seek legal redress for harms they suffer – even when directly enabled by a platform’s actions or design.
But according to the legal experts that Dell Cameron of Gizmodo spoke with, these changes would be catastrophic:
“The ‘payment’ language appears to apply to more than just advertisements. There are a number of services, such as website hosting, for which service providers accept payment to make speech available,” said Jeff Kosseff, a cybersecurity law professor and author of the Section 230 book, The Twenty-Six Words That Created the Internet.
“Creating liability for all commercial relationships would cause web hosts, cloud storage providers and even paid email services to purge their networks of any controversial speech,” [Sen. Ron] Wyden added. “This bill would have the same effect as a full repeal of 230, but cause vastly more uncertainty and confusion, thanks to the tangle of new exceptions.”
But it’s the first part that nukes the entire Internet from orbit because it prohibits any site from in any way acquiring any money in any way to subsidize their existence as a platform others can use. That’s what “accepted payment to make the speech available” means. It doesn’t care if the platform actually earns a profit, or runs at a loss. It doesn’t care if it’s even a commercial venture out to make money in the first place. It doesn’t care how big or small it is. It doesn’t even care how the site acquired money so that it could exist to enable others’ expression. Wikipedia, for instance, is subsidized by donors, who provide “payment” so that Wikipedia can exist to make its users’ speech available. But if this bill should pass, then no more Section 230 protection for that site, or any other site that didn’t have an infinite pot of money at the outset to fund it forever. Any site that wants to be economically sustainable, or even simply recoup even some of the costs of operation – let alone actually profit – would have to do so without the benefit of Section 230 if this bill were to pass.
The thing about Section 230 is that it is very easy to argue that it should be modified, but nearly impossible to find a way to change it so as to avoid toppling the Jenga tower that is the web. This bill is a well-intentioned but poor attempt that, if passed as-is, would backfire.
The trouble with this belief — that tech companies are censoring political viewpoints they find objectionable — is that there is no reliable evidence to support it. There are no credible studies showing that Twitter removes tweets for ideological reasons or that Google manipulates search results to impede conservative candidates (see sidebar on Google on page 12).
The false bias narrative is an example of political disinformation, meaning an untrue assertion that is spread to deceive. In this instance, the deception whips up part of the conservative base, much of which already bitterly distrusts the mainstream media. To call the bias claim disinformation does not, of course, rule out that millions of everyday people sincerely believe it.
I do not think it is surprising that trust in media among Americans has dropped more-or-less steadily ever since Fox News launched in 1996. It is the modern wedge — the thing that established that it is presenting the “other side” of the “liberal mainstream” of CBS, ABC, and NBC. None of that is true, but it fuelled the modern rise of the myth that there are exactly two ideological sides to everything. Other cable news networks copied this formula, none quite as successfully.
Social media companies are now facing the same argument: that the mainstream — by disallowing destructive conspiracy theories, attempted insurrectionists, and attacks on the basis of ethnicity, gender, and sexual orientation — is ideologically biased. It is not. These rules are merely the least these companies can do to restrict intolerant individuals’ ability to bully other users. Many people will never notice those rules because they are generally decent. For the people who cannot have a coherent discussion without resorting to personal attacks, there are miserable “alt-tech” platforms that are happy to brew a toxic environment.
A bunch of Republican state legislators across the country are apparently unconcerned with either the 1st Amendment (or reality) have decided that they need to stop social media companies from engaging in any sort of content moderation. […]
Of course, it’s easy to just point at Florida and say “there goes Florida again…” but it’s actually Republican legislators in a whole bunch of states. And this wasn’t even the first such bill in Florida. A week or so earlier, Republican state Senator Joe Gruters introduced a bill called the “Stop Social Media Censorship Act” which bars any moderation of “religious or political speech.”
Gruters may have introduced the bill, but it doesn’t look like he wrote it. Because in Kentucky, Republican Senators Robby Mills and Phillip Wheeler introduced a nearly identical bill. Oh, and over in Oklahoma, Republican Senator Rob Standridge also introduced an identical bill. In Arizona, it’s Senator Sonny Borrelli who has introduced very similar legislation, though his looks a little different, and (insanely) would try to put into law that a social media website is “deemed to be a publisher” and “deemed not to be a platform” which is, you know, not a thing that actually matters. In North Dakota, there’s Republican State Rep. Tom Kading whose similar bill also includes the nonsense publisher/platform distinction.
The same bill also recently surfaced in New Hampshire and Mississippi, but the earliest copy I found was pushed in 2018 in Arkansas (PDF). Do not go thinking Arkansas Representative Johnny Rye actually wrote the bill he was proposing, though.
John Moritz, of the Arkansas Democrat-Gazette in January 2019:
In late July, emails began appearing in some Arkansas lawmakers’ in-boxes from an out-of-state man who identified himself as Chris Severe.
He offered a half-dozen pieces of legislation, one of them on human trafficking. He made a simple request.
“We will let you pick who gets to sponsor what. Once you decide, we will reach out [to] them and brief them,” read one message, sent using a pseudonym.
The two pieces of legislation included one bill aimed at stopping “social media censorship” and another that would mandate that devices capable of accessing the Internet have software to block material defined as obscene under Arkansas law. Under the proposal, those devices could be unblocked if users paid a $20 fee.
Moritz’s reporting on the circumstances of this model legislation is truly excellent. Severe is actually Chris Sevier, whose surname is often also spelled Seviere, and he has a remarkable record of putting unconstitutional bills before lawmakers and encouraging their filing. In 2013 in Florida, Sevier sued Apple because the company’s devices can provide access to pornography. In 2017, Sevier was behind legislation pushed in many states that would require porn filtering on all devices. Last year in Mississippi, Sen. Chris McDaniel pushed a Sevier-created bill that would require media coverage of the outcomes of all cases against public figures.
Lest you think he’s just some laughable caricature of a man with a porn obsession, I should also clarify that he seems to be a homophobic jackass who once equated same sex marriage with his relationship with his computer.
Moritz’s 2019 reporting links Sevier to a group called Special Forces of Liberty, which wrote this model legislation for all fifty states. The bill proposed in New Hampshire, for example, is nearly identical to Sevier’s pitch. These bills are all from the same template, pushed by an inactive attorney and fringe actor who is obsessed with creating bills that would fly in the face of the First Amendment. In a video about his social media censorship proposal, he dismisses its blatant unconstitutionality as a “scare tactic”. That is a pretty weak argument for waving away its fundamental illegality.
Anyway, that is the background of this legislation as best as I can dig it up. I am sure Masnick will have more on this if these proposals get anywhere near becoming law.
The scam goes like this: A bunch of Watch keyboard apps are published that purport to have the same slick features as FlickType but instead lock users into paying eye-wateringly high subscription fees for what is, at best, a pale imitation.
You might expect quality to float to the top of the App Store but the trick is sustained by the clones being accompanied by scores of fake reviews/ratings which crowd out any genuine crowdsourced assessment of what’s being sold.
There is a threefold compounding problem here:
There are many apps in the App Store that are effectively counterfeits.
They plant fake reviews to establish legitimacy.
They abuse expensive subscriptions.
To its credit, Apple was quick to pull the apps when Eleftheriou’s Twitter thread became popular. But this is not something that developers should have to police themselves, and it is not a new problem. Since its inception, Apple has promoted the App Store as a trustworthy and safe marketplace, and has referenced that in defending criticism (PDF) of its commission. The least App Review could do is screen for dime store knockoffs and scams like these.
Clearview did not attempt to seek consent from the individuals whose information it collected. Clearview asserted that the information was “publicly available”, and thus exempt from consent requirements. Information collected from public websites, such as social media or professional profiles, and then used for an unrelated purpose, does not fall under the “publicly available” exception of PIPEDA, PIPA AB or PIPA BC. Nor is this information “public by law”, which would exempt it from Quebec’s Private Sector Law, and no exception of this nature exists for other biometric data under LCCJTI. Therefore, we found that Clearview was not exempt from the requirement to obtain consent.
Furthermore, the Offices determined that Clearview collected, used and disclosed the personal information of individuals in Canada for inappropriate purposes, which cannot be rendered appropriate via consent. We found that the mass collection of images and creation of biometric facial recognition arrays by Clearview, for its stated purpose of providing a service to law enforcement personnel, and use by others via trial accounts, represents the mass identification and surveillance of individuals by a private entity in the course of commercial activity. We found Clearview’s purposes to be inappropriate where they: (i) are unrelated to the purposes for which those images were originally posted; (ii) will often be to the detriment of the individual whose images are captured; and (iii) create the risk of significant harm to those individuals, the vast majority of whom have never been and will never be implicated in a crime. Furthermore, it collected images in an unreasonable manner, via indiscriminate scraping of publicly accessible websites.
The Office said that Clearview should entirely exit the Canadian market and remove data it collected about Canadians. But, as Kashmir Hill says, it is not a binding decision, and it is much easier said than done:
The commissioners, who noted that they don’t have the power to fine companies or make orders, sent a “letter of intention” to Clearview AI telling it to cease offering its facial recognition services in Canada, cease the scraping of Canadians’ faces, and to delete images already collected.
That is a difficult order: It’s not possible to tell someone’s nationality or where they live from their face alone.
The weak excuse for a solution that Clearview has come up with is to tell Canadians to individually submit a request to be removed from its products. To be removed, you must give Clearview your email address and a photo of your face. Clearview expects that it is allowed to process facial recognition for every single person for whom images are available unless they manually opt out. It insists that it does not need consent because the images it collects are public. But, as the Office correctly pointed out, the transformative use of these images requires explicit consent:
Beyond Clearview’s collection of images, we also note that its creation of biometric information in the form of vectors constituted a distinct and additional collection and use of personal information, as previously found by the OPC, OIPC AB and OIPC BC in the matter of Cadillac Fairview.
In our view, biometric information is sensitive in almost all circumstances. It is intrinsically, and in most instances permanently, linked to the individual. It is distinctive, unlikely to vary over time, difficult to change and largely unique to the individual. That being said, within the category of biometric information, there are degrees of sensitivity. It is our view that facial biometric information is particularly sensitive. Possession of a facial recognition template can allow for identification of an individual through comparison against a vast array of images readily available on the Internet, as demonstrated in the matter at hand, or via surreptitious surveillance.
The Office also found that scraping online profiles does not match the legal definition of “publicly accessible”.
This is such a grotesque violation of privacy that there is no question in my mind that Clearview and companies like it cannot continue to operate. United States law has an unsurprisingly permissive attitude towards this sort of thing, but its failure to legislate at a national level should not be inflicted upon the rest of the world.
Unfortunately, this requires global participation. Every country must have better regulation of this industry because, as Hill says, there is no way to determine nationality from a photo. If Clearview is outlawed in the U.S., what is there to stop it from registering in another country with similarly weak regulation?
Clearview is almost certainly not the only company scraping the web with the intent of eradicating privacy as we know it. Decades of insufficient regulation have brought us to this point. We cannot give up on the basic right to privacy. But I fear that it has been sacrificed to a privatized version of the police state.
If a government directly created something like the Clearview system, it would be seen as a human rights violation. How is there any moral difference when it is instead created by private industry?
There are tons of these stories. The question is how do you deal with them in ways that don’t throw out all of the good aspects of the internet. How do you distinguish someone running a defamation campaign with someone with a legitimate grievance?
I guess my final point: humanity & society are messy. Sometimes the internet reflects that mess. We shouldn’t immediately jump the conclusion that because the internet reflects that mess that it’s the cause of that mess or that it can solve that mess.
I like to frame up this question; who do you blame for drunk driving? The drivers, automobile companies, beer companies or bars?
A lot of the discussion about bad behavior online is like only blaming car companies for not requiring a breathalyzer as part of the ignition process.
I think this is a decent analogy, so let’s take it just a step further to explain why I think its implied conclusion misses the mark.
The reason drunk driving is a problem is not the intoxication itself, in a vacuum, but the effects it is likely to have on the driver, passengers, and others. To be perfectly clear, I am not condoning drunk driving in any context. But we are not worried about the action of literally drinking too much and then driving a car, as much as we are about its likely effects. So we have widespread campaigns to dissuade people from driving under the influence. And that’s very good.
But we do not stop there, because some people will shamelessly ignore their personal responsibility and drive when they should not: when they are drunk, or high, or tired, or highly irritable, or using their phone. Some of the drivers on the road at any given time should not be behind the wheel, but that is where they are. Please stop using your phone while behind the wheel.
Cars and roads have also changed as regulators recognized that they, too, play a role in a driver’s safety. Seatbelts, airbags, always-on lights, and crumple zones are well-known improvements. Technology has helped usher in ABS, automatic braking, and traction control. There have been subtler changes to car cabins as designs and materials are now chosen to reduce the likelihood of injury when they impact passengers. Roads are now designed to more effectively drain water, improve visibility, and reduce sudden drop-offs. These are not directly a response to drinking and driving, but they help lessen its worst effects.
It was not so long ago that vehicle collisions were treated as a matter of personal responsibility or bad luck; it was widely seen that you should expect to be injured or die when a bunch of steel impacts you, no matter whether you are a driver, passenger, pedestrian, or in another vehicle. But it is now understood that lots of people will crash lots of cars into lots of different things for lots of reasons, and there are many ways to reduce the likelihood of serious injury or death. Personal responsibility undoubtedly remains an important factor, but there are lots of things that can be adjusted to lessen the impact of bad decisions.
That brings me back to technology and platforms. The technology landscape of the early 2000s was obsessed with growth; in many ways, it still is. Venture capital firms were happy to lose huge sums of money over many years while platforms grew, with the hope that one day they could slap some ads on everything and call it a business. Moderation was treated more like an impediment to scale, and less like a safety obligation.
Me reaping: Well this fucking sucks. What the fuck.
Platform moderation is hard — unquestionably. It is more difficult for images and video than it is for plain text, and it only becomes harder as a platform becomes more popular. But we can only definitively say that is the case for platforms as they are designed and built today, with little advance consideration for reducing abuse.
If we used that “Men in Black” neuralizer to forget how tech platforms work today and had to rebuild them with the dangers of lax moderation in mind, they would probably look and feel somewhat different. But they could be designed and built with more consideration for the real-world effects of abusive users.
If you want me to bring it back to a car analogy, consider the Lamborghini Countach. The original Bertone-designed shape was sadly made lumpier with every new version. But nothing uglified the car more than the plastic bumpers fitted to U.S. models to comply with new safety regulations. The problem was not with the regulations. It was that the car was not designed to accommodate them, so this attempt at compliance looked dreadful. Lamborghini still designs jaw-dropping cars. But now they have been designed to incorporate modern safety regulations so, in addition to looking amazing, they are safer for their occupants and the victims of collisions.
Online platforms face a similar reckoning today. It is not directly their fault that there are horrible people who do horrible things, but they ought to recognize that they can play a role in reducing the predictably horrible effects. Their current efforts resemble the Countach’s plastic bumper extension because the likelihood of abuse continues to be underestimated. They can try to do better, and I welcome these attempts. But I am skeptical that Facebook, Reddit, Twitter, or YouTube are willing to make radical changes. They can’t, really; they are big companies now.
Whatever is being designed and built now that will one day dethrone today’s giants ought to treat safety as a foundational tenet. That should be encouraged by its funding and business model, too, so it does not become something that is auctioned off at a later date.
So what’s the joke, exactly? For The_Donald, one “joke” was that a bunch of self-described losers could help Donald Trump become “God Emperor.” (They were happy with “President.”) For WallStreetBets, the “joke” was that a group of self-described losers (their preferred real descriptors are unprintable) could rig the financial system in their own favor. The punchline was GameStop, and tens of billions of dollars in actual market activity.
The bigger joke, shared by these communities and plenty of others, is, well, everything. Everything is a farce and a fraud, and the surest, or at least most available, way to get ahead is to treat it as such. This is a profoundly nihilistic worldview, and one that in plenty of other contexts might meet hard limits, or come with terrible costs.
I had a bunch of tabs open to read tonight; this story and Daring Fireball were among those. Herrman’s piece runs along similar lines to Gruber’s piece from earlier today about the Republican party’s humouring, in November, of the former president’s ludicrous lies about voter fraud, and the poor moderation of Facebook Groups. The idea that online discussions that are allowed to fester with increasingly extremist views could result in real-world effects apparently remains inconceivable every single time it happens.
That is not to say that the internet causes this behaviour, nor that anonymity does. There is a broader failing of social safety nets and good governance that can take sufficient blame for caustic nihilism. But this multiyear experiment in scant moderation of the discussion of hundreds of millions — or billions — of people is unworkable.
It’s hard to knock Big G. They’re still the biggest and best—a Morgan Stanley analyst estimated that Google Maps generated almost $3B in advertising revenue in 2019 alone. But I suspect we’re at the tail end of the golden era for Google Maps. They appear, to me, to be acting from a place of fear and conservatism rather than innovation.
What gives Google Maps an edge over other experiences? Today, I would argue the three pillars of its comparative advantage are Places, Street View, and 3D data. But in a few years, I think it will mostly just be Places. And shortly thereafter, it may well wind up with no remaining edge beyond convenience and brand loyalty.
Morrison presents a compelling argument that there are now several viable competitors to Google Maps. But, like most things map-related, it is hard to argue that this is globally true. Nearly every competitor cited by Morrison is based in the U.S. — most often in California — with the exception of OpenStreetMap, which is decentralized but was founded in the U.K. My experience with maps from U.S.-based companies is that they are often unreliable elsewhere, even in major cities. That’s not an anti-American knock; it is simply the reality of trying to build geographically sensitive products elsewhere.
For example, the Chinese company Tencent has comprehensive maps of China, Hong Kong, Taiwan, Japan, and Korea, but nowhere else. The Russian company Yandex says it has maps of most of the world. But a spot-check of a handful of cities shows it is missing most of the roads in Calgary, Toronto, Portland, Memphis, Osaka, Hyderabad, and Addis Ababa — to name just a few.
I do not think that any single competitor can replace Google Maps worldwide. I do not think that should be the goal, however, especially if we think more in terms of protocols rather than platforms. I like the sound of a more universal maps protocol that makes possible localized tiles and place data, and the nuances that such an approach engenders.
I’m excited to announce that this Q3 I’ll transition to Executive Chair of the Amazon Board and Andy Jassy will become CEO. In the Exec Chair role, I intend to focus my energies and attention on new products and early initiatives. Andy is well known inside the company and has been at Amazon almost as long as I have. He will be an outstanding leader, and he has my full confidence.
Bezos says that he’s not retiring, but is instead “focus[ing] on the Day 1 Fund, the Bezos Earth Fund, Blue Origin, The Washington Post, and my other passions”. The Day One Fund is Bezos’ charitable giving endeavour; the Earth Fund is a $10 billion commitment for climate change research. Blue Origin is his private spaceflight and space tourism company.
This news was delivered concurrent with Amazon’s fourth quarter earnings, where it announced by far the best quarter and year in the company’s history. Bezos is stepping out of the CEO role with a net worth of $196 billion, according to Forbes, up from $113 billion in April and from $73 billion in 2017. Bezos didn’t even crack the top ten list in 2015.
Calgary is about to enter a run of very cold days. Not bitterly cold, but much worse than the last two months have offered. This essay struck a different chord than it otherwise might have. I loved it; I think you will.
Apple today released the iOS 14.5 and iPadOS 14.5 beta updates for developers, and included in the new software is a feature that’s designed to make it easier to unlock an iPhone while wearing a mask by leveraging the Apple Watch.
An opt-in setting lets you turn on a feature that allows an iPhone to be unlocked with both Face ID and an authenticated Apple Watch combined. You can find this setting by opening up the Settings app, going to the Face ID & Passcode section, entering your passcode, and then toggling on “Unlock with Apple Watch.”
People have been begging for this for the last year but I am not surprised it has taken so long. The security architecture of this must be fascinating. An Apple Watch remains unlocked on your wrist by authenticating with the connected iPhone; now the iPhone is also authenticated by that same Apple Watch. Looks like I picked a bad time to not own a functioning Apple Watch.
This is not explicitly about famous journalists who now make hundreds of thousands of dollars per year with a Substack subscription and appearances on Fox, but it is not not about those writers either.
Weather apps have been a UI design playground for years, but no app I can think of has been quite as much a playground for the user as Carrot Weather 5. There is a remarkable level of customization available, but Brian Mueller, Carrot’s developer, has implemented it thoughtfully. I am loving this update.
[Gary] Babcock, a software engineer, got off the phone and Googled himself. The results were full of posts on strange sites accusing him of being a thief, a fraudster and a pedophile. The posts listed Mr. Babcock’s contact details and employer.
The images were the worst: photos taken from his LinkedIn and Facebook pages that had “pedophile” written across them in red type. Someone had posted the doctored images on Pinterest, and Google’s algorithms apparently liked things from Pinterest, and so the pictures were positioned at the very top of the Google results for “Guy Babcock.”
There is something cunning — albeit merciless and cruel — about this smear campaign. Google’s search ranking algorithms already know that websites like Ripoff Report and Cheater Bot are basically full of bullshit and potentially libellous claims, so these websites are almost never at the top of search results. Pinterest’s better Google ranking is being used to launder this trash.
Pinterest has a decent reputation on the moderation of its platform, but this is a concerning vector of attack. The potential for this to be repeated and abused to greater extent seems obvious to me.
Ripoff Report is one of hundreds of “complaint sites” — others include She’s a Homewrecker, Cheaterbot and Deadbeats Exposed — that let people anonymously expose an unreliable handyman, a cheating ex, a sexual predator.
But there is no fact-checking. The sites often charge money to take down posts, even defamatory ones. And there is limited accountability. Ripoff Report, like the others, notes on its site that, thanks to Section 230 of the federal Communications Decency Act, it isn’t responsible for what its users post.
This story is a near-perfect intersection of the unlimited distribution of the web and the lack of accountability that permits such a ruthless smear campaign. In a particularly distressing turn, Hill herself got caught up in it, too, after contacting the alleged perpetrator behind the attacks on Babcock.
However, entirely removing CDA Section 230 protections could make websites legally accountable for things posted by third parties. If that happens, most of the web that allows for user contributions would disappear. That means the end of comment sections, hosted blog platforms, photo and video platforms, and more. Only the biggest players would be able to support such a burden, and they would necessarily be radically altered.
I know the E.U.’s right to erasure rule — passed as part of GDPR — has a poor reputation; it is seen by some as a way to erase history. But, at least in this very narrow instance of websites, it seems to have had some effect. The websites mentioned are either prohibited from being accessed in the E.U. by their terms-of-service, or they block European visitors by geolocation. It is not a case of them being unable to comply with the right to erasure; they are simply unwilling to do so.
The Ripoff Report FAQ says that the company will never remove any complaint because doing so would amount to “rampant censorship”. I see it as a sociopathic level of indifference for the damage the website facilitates.
Jason Snell again graciously allowed me to participate in the annual Six Colors Apple report card, so I graded the performance of a multi-trillion-dollar company from my low-rent apartment. There simply aren’t enough column inches in his report card for all of my silly thoughts. I have therefore generously given myself some space here to share them with you.
As much as 2020 was a worldwide catastrophe, it was impressive to see Apple handle pandemic issues remarkably well and still deliver excellence in the hardware, software, and services that we increasingly depended on. Had there not been widespread disease, Apple’s year could have played out nearly identically, and I do not imagine it would have been received any differently.
Now, onto specific categories, graded from 1–5, 5 being best and 1 being Apple TV-iest. Spoiler alert!
It will be a while before we know if 2020 was to personal computers what 2007 was to phones, but the M1 Macs feel similarly impactful on the industry at large. Apple demonstrated a scarcely-believable leap by delivering Macs powered by its own SoCs that got great battery life and outperformed just about any other Mac that has ever existed. And to make things even more wild, Apple shoehorned this combination into the least-expensive computers it makes. A holy crap revolutionary year, and it is only an appetizer for forthcoming iMac and MacBook Pro models.
Aside from the M1 models, Apple updated nearly all of its Mac product range except the Mac Pro. The iMac Pro only dropped its 8-core config, but pretty much everything else is the same as when it debuted three years ago.
The best news, aside from the M1 lineup, is that the loathed butterfly keyboard was finally banished from the Mac. Good riddance.
MacOS Big Sur is a decent update by recent MacOS standards. The new design language is going in a good direction, but there are contrast and legibility problems. It is, thankfully, night-and-day more stable than Catalina, which I am thrilled that I skipped on my iMac and annoyed that I installed on a MacBook Air that will not get a Big Sur update. Fiddlesticks. But Big Sur has its share of new and old bugs that, while doing nothing so dramatic as forcing the system to reboot, indicate to me that the technical debt of years past is not being settled. More in the Software Quality section.
I picked a great year to buy a new iPhone; I picked a terrible year to buy a new iPhone. The five new phones released in 2020 made for the easiest product line to understand and the hardest to choose from. Do I get the 12 Mini, the size I have been begging Apple to make? Do I get the 12 Pro Max with its ridiculously good camera? How about one of the middle models? What about the great value of the SE? It was a difficult decision, but I got the Pro. And then, because I wish the Pro was lighter and smaller, I seriously considered swapping it for the Mini, but didn’t because ProRAW was released shortly after. Buying a telephone is just so hard.
iOS 14 is a tremendous update as well. Widgets are a welcome addition to everyone’s home screen and have spurred a joyous customization scene. ProRAW is a compelling feature for the iPhone 12 Pro models, and is implemented thoughtfully and simply. The App Drawer is excellent for a packrat like me.
2019 was a rough year for Apple operating system stability but, while iOS 13 was better for me than Catalina, iOS 14 has been noticeably less buggy and more stable. I hope this commitment to features and quality can be repeated every year.
Consider my 4-out-of-5 grade a very high 4, but not quite a 5. The iPhone XR remains in the lineup and feels increasingly out of place, and I truly wish the Pro came in a smaller and lighter package. I considered going for a perfect score but, well, it’s my report card.
The thing the iPad lineup has needed most since the late 2010s is clarity; for the past few years, that is what it has gotten. 2020 brought good hardware updates that have made each iPad feel more accurately placed in the line — with the exception of the Mini, which remains a year behind its entry-level sibling.
But the biggest iPad updates this year were in accessories and in software. Trackpad and mouse compatibility brought a legacy input method to a modern platform, and its introduction was complemented by the new Magic Keyboard case. iPadOS 14 brought further improvements like sidebars, pull-down menus, and components that no longer cover the entire screen.
Despite all of these changes, I remain hungry for more. This is only the second year the iPad has had “iPadOS” and, while it is becoming more of its own thing, its roots in a smartphone operating system are still apparent in a way that sometimes impedes its full potential.
After many difficult years, it seems like Apple is taking the iPad seriously again. I would like to see more steady improvements so that every version of iPadOS feels increasingly like its own operating system even if it continues to look largely like iOS. This one is tougher to grade. I have waffled between 3 and 4, but I settled on the lower number. Think of it as a positive and enthusiastic 3-out-of-5.
Wearables (including Apple Watch): 3
Grades were submitted separately for the Apple Watch and Wearables. I have no experience with the Apple Watch this year, so I did not submit a grade.
Only one new AirPods model was introduced in 2020 but it was big. The AirPods Max certainly live up to their name in weight alone.
Aside from that, rattling in the AirPods Pro models was a common problem from the moment they were released, and it took until October 2020, a full year after the product’s launch, for Apple to correct the problem. Customers can exchange their problematic pair for free, but the environmental waste of even a small percentage of flawed models is hard to bat away.
AirPods continue to be the iconic wireless headphone in the same way that white earbuds were to the iPod. I wish they were less expensive, though, particularly since the batteries have a lifespan of only a couple of years of regular use.
Apple TV: 1
I guess my lowest grade must go to the product that seems like Apple’s lowest priority. It is kind of embarrassing at this point.
The two Apple TV models on sale today were released three and five years ago, and have remained unchanged since. It isn’t solely a problem of device age or cost; it is that these products feel like they were introduced for a different era. This includes the remote, by the way. I know it is repetitive to complain about, but it still sucks and there appears to be no urgency to ship a new one.
On the software side, tvOS 14 contains few updates. It now supports 4K videos in YouTube and through AirPlay, and HomeKit camera monitoring. Meanwhile, the Music app still does not work well, screensavers no longer match the time of day so there are sometimes very bright screensavers at night, and the overuse of slow animations makes the entire system feel sluggish. None of these things are new in tvOS 14; they are all very old problems that remain unfixed.
The solution to a good television experience remains elusive — and not just for Apple.
No matter whether you look at Apple’s balance sheet or its product strategy, it is clear that it is now fully and truly a services company. That focus has manifested in an increasingly compelling range of things you can give Apple five or ten dollars a month for; or, if you are fully entranced, you can get the whole package for a healthy discount in the new Apple One bundle subscription. Cool.
It has also brought increased reliability to the service offerings. Apple’s internet products used to be a joke, but they have shown near-perfect stability in recent years. Cloud-based services had a rocky year for stability in 2020 and iCloud was no exception around Christmastime but, generally, the reliability of these services instills confidence.
New for this year were the well-received Fitness+ workout service and a bevy of new TV+ shows. Apple also rolled out services to a bunch more countries. But this focus on services has not come without its foibles, as Apple aggressively promotes subscriptions throughout its products in advertisements, up-sells, and push notifications to the irritation of anyone who wishes not to subscribe. Some of these services also introduce liabilities in antitrust and corporate behaviour, something which I will explore later.
I have no experience with HomeKit so I did not grade it.
Hardware Reliability: 3
2020 was the year we bid farewell to the butterfly keyboard and, with it, the most glaring hardware reliability problem in Apple’s lineup. A quick perusal of Apple’s open repair programs and the “hardware” tag on Michael Tsai’s blog shows a few notable quality problems:
“Stained” appearance with the anti-reflective coating on Retina display-equipped notebooks
Display problems with iPhone 11 models manufactured up to May 2020
AirPods Pro crackling problems that were only resolved a full year after the product’s debut
Overall, an average year for hardware quality, but an improvement in the sense that you can no longer buy an Apple laptop with a defective keyboard design.
I suppose this score could have gone one notch higher.
Software Quality: 4
The roller coaster ride continues. 2019? Not good! 2020? Pretty good!
Big Sur is stable, but its redesign contains questionable choices that impair usability, some of which I strongly feel should not have shipped — to name two, notifications and the new alert style. Outside of redesign issues, I have seen new graphical glitches when editing images in Photos or using Finder’s Quick Look feature on my iMac. The Music app, while better than the one in Catalina, is slower and more buggy than iTunes used to be. There are older problems, too: with PDF rendering in Preview, with APFS containers in Finder (and Finder’s overall speed), and with data loss in Mail.
iOS 14 is much more stable and without major bugs; or, at least, none that I have seen. There are animation glitches here and there, and I wish Siri suggestions were better.
On the other end of the scale, tvOS 14 is mediocre, some first-party apps have languished, and using Siri in any context is an experience that still ranges from lacklustre to downright shameful. I hope software quality improves in the future, particularly on the Mac. MacOS has never seemed less like it will cause a whole-system crash, but the myriad bugs introduced in the last several years have made it feel brittle.
I am now thinking I mixed up the scores for software and hardware quality. Oops.
Developer Relations: 2
An absolutely polarized year for developer relations.
On the one hand, Apple introduced a new mechanism to challenge rulings and added a program to reduce commissions to 15% for developers making less than $1 million annually. Running WWDC virtually was also a silver lining in a dark year. It’s the first WWDC I have attended, because hotel rooms cost thousands of dollars but my apartment carries no extra cost.
On the other — oh boy, where do we begin? Apple is being sued by Epic Games along antitrust lines; Epic’s arguments are being supported by Facebook, Microsoft, and plenty of smaller developers. One can imagine ulterior motives for the plaintiff’s side, but it does not speak well for Apple’s status among developers that it is being sued. Also, there was that matter of the Hey app’s rejection just days before WWDC, and the difficulty of trying to squeeze the streaming game app model into Apple’s App Store model. Documentation still stinks, and Apple still has communication problems with developers.
Apple’s relationship with developers hit its lowest point in recent memory in 2020, but it also spurred the company to make changes. Developers should be excited to build apps for the Mac instead of relying on shitty cross-platform frameworks like Electron. They should be motivated by the jewellery-like quality of the iPhone 12 models and build apps that match in fit and finish. But I have seen enough comments this year that indicate that everyone — from one-person shops to moderate indies to big names — is worried that their app will be pulled from the store for some new interpretation of an old rule, or that Apple’s services push will raid their revenue model. There must be a better way.
Social/Societal Impact: 2
As with its developer relations, Apple’s 2020 environmental and social record sits at the extreme ends of the scale.
Apple’s response to the pandemic is commendable, from what I could see on the outside. Its store closures often outpaced restrictions from local health authorities in Canada and the U.S., but it kept retail staff on and found ways for them to work from home. It was also quick to allow corporate employees to work remotely, something it generally resists.
In a year of intensified focus on racial inequities, Apple pledged $100 million to projects intended to help right long-standing wrongs, and committed to diversity-supporting corporate practices. There is much more progress that it can make internally, particularly in leadership roles, but its recent hiring practices indicate that it is trying to do better.
Apple continues to invest in privacy and security features across its operating system and services lineup, like allowing users to decline third-party tracking in iOS apps. It also bucked another ridiculous request from the Justice Department and disabled an enterprise distribution certificate used by the creepy facial recognition company Clearview AI.
But a report at the beginning of 2020 drew a connection between discussions with the FBI and Apple’s failure to encrypt iCloud backups. It remains unclear whether one directly followed the other. Apple’s encryption policies remain confusing when it comes to knowing exactly which parties have access to what data. Still, Apple’s record on privacy is a high standard that its peers will never meet unless they change their business models.
China remains Apple’s biggest liability on two fronts: its supply chain, and services like the App Store and Apple TV Plus. Several reports in 2020 said that Apple was uniquely deferential to Chinese government sensitivities in its App Store policies and its original media. Many other big name companies, wary of being excluded from the Chinese market, have also faced similar accusations. But it is hard to think of one other than Apple that must balance those demands against its entire manufacturing capability. No company can be complicit in the Chinese government’s inhumane treatment of Uyghurs.
Apple is also facing increased antitrust scrutiny around the world for the way it runs the App Store, the commissions it charges third-party developers, and the way it uses private APIs.
Apple’s environmental record is less of a mixed bag. It is recycling more of the materials used in its products, and new iPhones come in much smaller boxes containing nearly no plastic. Apple also says that its own operations are entirely carbon neutral, and says that its supply chain will follow by 2030.
For environmental reasons, many new products no longer ship with AC adapters in the box, and to prove it wasn’t screwing around, Apple made Lisa Jackson announce this while standing on the roof of its headquarters. Reactions to this change were predictably mixed, but it seems plausible that this has a big impact at Apple’s scale. I’m still not convinced that it makes sense to sell its charging mat without one.
Apple still isn’t keen on third-party repairs of its products, but it expanded its independent repair shop program to allow servicing of Macs.
If this were two separate categories, I would give Apple’s environmental record a 4/5 and its social record a 2/5 — at best. I am not averaging those grades because I consider the liabilities with China and antitrust too significant.
As I wrote at the top, 2020 was a standout year in Apple’s history — even without considering the many obstacles created by this ongoing pandemic. As my workflow is dependent on these products and services, I appreciate the hard work that has gone into improving their features, but I am even happier that everything I use is, on the whole, more reliable.
I have long enjoyed reading the annual Apple report card that Jason Snell organizes and publishes. It is a finger on the pulse of a company that remains uniquely interesting despite becoming a proper behemoth worth about six times as much now as it was when this survey began.
New in this year’s report card is a failing grade; guess which product it was given to. Also: high praise for Apple’s first own-silicon Macs, the iPhone lineup, and services, plus comments from many writers including yours truly. I’m grateful that Snell asked me to participate again.
I do not know that I have made it clear how much I appreciate many aspects of MacOS Big Sur’s visual design update, which is something even I did not fully understand until I was compelled to use Catalina full-time for a week.
Last year, my iMac developed some shadowing in the corner of its display, and it bugged me. Over the holiday break, I took it in for a new display which required leaving it at the Apple Store for about a week.1 I needed a workaround for my day job — my old MacBook Air running Catalina hooked up to my Thunderbolt Display did just fine. But, screen fidelity aside, it was clear after a day that using Catalina felt cramped and messy. Icons and text in the menu bar were not as well-aligned. Rows in the Finder were squished together like every pixel on the display remained precious real estate.
Big Sur changed all of that for the better. There is subtly more space around many interface elements, and there is a clearer sense of structure. But it also introduced problems for readability, many of which are the result of an obsession with translucency and brightness.
Translucent effects have been a part of modern MacOS since it debuted as Mac OS X in 2001 with the then-fresh “Aqua” visual interface language. But it was only with Yosemite in 2014 that Apple began using translucency more liberally throughout the system. No longer was its use confined to the Dock and menu bar. The linen texture in the login window and Notification Centre became a sheet of frosted glass. Application toolbars, once metallic, changed to a sheer grey gradient, and sidebars went from a solid light blue fill to yet another light grey glassy texture.
There seem to be some rules for translucency in the post-Yosemite MacOS environment that should, in theory, give it a sense of structure:
1. Translucent elements are only visible in foreground windows.
2. Elements in background windows are always opaque.
3. The Dock and menu bar — including the elements they spawn, like Notification Centre, pull-down menus, and Dock menus — are always translucent.
However, in practice, these rules introduce all sorts of problems for legibility, logic, and clarity.
Let’s start with the first rule and foreground windows. I usually keep my Mac in light mode, so the translucent elements in MacOS appear similar to their non-translucent predecessors. But adding translucency to those elements, à la rule one, negatively impacts contrast. A sidebar that shows through some of the colour of a very dark background element, for example, will necessarily reduce the contrast between the dark sidebar text and the now mid-grey sidebar fill.
MacOS attempts to mitigate contrast problems through more blending effects. The translucent textures in MacOS are called “Vibrancy”, since it is not a simple matter of dropping the opacity of the foreground elements. Elements with Vibrancy also blur what is in the background, and blend it with the foreground texture in ways that are intended to improve legibility no matter what the background layers contain. There are different levels of transparency and blur — what Apple calls “materials” — available to developers, and stacking them tends to produce bizarre effects. For example, take a look at the sidebar buttons in Reminders:
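These materials are what AppKit exposes to developers through NSVisualEffectView. As a rough sketch — my own illustration, not Apple’s code, and only an approximation of how an app like Reminders might end up layering them:

```swift
import AppKit

// Two nested effect views, each with its own material. Nesting them is
// how the stacked blur-and-blend passes described above come about.
let sidebar = NSVisualEffectView(frame: NSRect(x: 0, y: 0, width: 220, height: 400))
sidebar.material = .sidebar               // the translucent sidebar texture
sidebar.blendingMode = .behindWindow      // blends with whatever is behind the window
sidebar.state = .followsWindowActiveState // draws opaque when the window is inactive

let selection = NSVisualEffectView(frame: NSRect(x: 10, y: 350, width: 200, height: 32))
selection.material = .selection           // a second material, stacked on the first
selection.blendingMode = .withinWindow    // blends only with content inside the window
sidebar.addSubview(selection)
```

Each nested effect view applies its own blur and blend pass over whatever sits beneath it, which is where the smeared, low-contrast results in these examples come from.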
That is dark grey text atop a mid-grey button texture in a light grey sidebar. Subjectively, I find it unpleasant to look at; more objectively, it has insufficient contrast. It is the same with the Search field located in an application’s Help menu:
The menu material is a brighter translucent layer. The Search field is scarcely differentiated by a slightly different material containing dark grey text. On a dark background, it is difficult to read.
One more example of stacked layers of translucency, as seen in Big Sur’s new alert style:
Here, the Cancel button is a different material compared to that used for the alert window — though still translucent — and its text colour is a shade of the background. This combines in a smeared, hard-to-read mess.
The second rule of vibrancy in MacOS is that background windows are entirely opaque, for what I assume are performance reasons.2 This excludes system features spawned from the menu bar or Dock, which are always translucent, and these choices sometimes create illogical situations. In Notification Centre, this manifests as stacks of notifications where all of them appear translucent to background windows, but the foreground notification is not translucent to the notifications apparently stacked behind it:
It would surely be a terrible idea to treat each notification as a translucent sheet; I can imagine how much that would compromise their already-shaky legibility. But if translucency is supposed to help differentiate the frontmost layers of system components, surely the notifications behind it should be opaque.3
Nevertheless, the rest of the system behaves as though the foreground window were composed of panes of glass, and the background windows were made of solid plastic. Often, this means background windows actually have better contrast than windows in the foreground. Here’s Reminders again:
If the window control widgets were in full colour instead of grey, you would be forgiven for assuming this is just what the app looks like all the time. Several MacOS apps are similarly more legible when they are in the background: Music, Contacts, Calendar’s single day view, Dictionary, and Voice Memos — to name a handful.
Some other apps exhibit reduced contrast when they are in the background. But because the sidebar is often a critical interface element, compromising its legibility in foreground windows means going to more extreme lengths of illegibility for background windows. When Mail is in the background, the text labels and icons in the toolbar and sidebar become a mid-grey that, on the light grey background, is difficult to read. This makes it more obviously an inactive window, but it is so bright that it clashes with the active window and makes it hard to quickly see what is in the foreground:
Translucency in these areas has been part of MacOS for years, but Big Sur’s redesign introduces brighter windows that look nearly the same in the foreground and background. This is not directly because there are translucent elements in foreground windows, but the wildly insufficient contrast in background windows is necessary because foreground windows in Big Sur are already lower-contrast — because of their translucent elements. To borrow from a Lewis Black sketch on the “Daily Show”, it’s like Six Degrees of Kevin Bacon, except it’s just two degrees and Kevin Bacon is translucency.
There is a way to switch all of this off: in System Preferences, under Accessibility and then Display, you can tick the Reduce Transparency box. This makes everything opaque — though, unfortunately, it is indiscriminate, and I think the Dock works just fine as a translucent smear — but foreground and background windows in Big Sur are still similarly bright. You can also select Increase Contrast if you want a black border around interactive elements. But no matter how much I appreciate Apple’s accessibility efforts, I can’t help but think it is strange that there are two systemwide preferences for different degrees of making interface elements readable.
One of the critiques I often see of Apple’s recent visual interface direction is that it looks better in screenshots than in actual use. I disagree; I think translucency only makes sense when using MacOS or iOS. The smeared background of a toolbar looks worse in screenshots than it does in use. But I think this effect works far better in iOS than on the Mac.
The translucent glass effect was adapted from iOS 7, introduced the year prior to Yosemite. Compared to the Mac, iOS devices have strict rules about where application windows will be. On an iPhone, an app is always running full-screen; on an iPad, there are more options, including apps that float over others. But only a handful of system-level processes — Notification Centre, Control Centre, Spotlight, and, until recently, Siri — blur the contents of windows behind. Translucency in apps only affects the contents of that app: toolbars give an impression of what lies beyond the current scroll view; contextual menus sit in a layer overtop the app area. Translucency here adds depth to an interface with a singular context.
MacOS is messier than this. It encourages the use of multiple windows stacked haphazardly. Making elements of the foreground window translucent provides no new information. On the contrary, it creates confusion within itself and for background windows that are necessarily opaque. Everything has become lower-contrast, necessitating the use of preferences to make the system readable.
That doesn’t mean translucency should go away entirely. But if I were in charge of these things, here’s what I would change:
Sidebar translucency has got to go.
Stacked translucent elements — like the Search field in the Help menu, and buttons in dialog boxes — have also got to go. Translucency is fine as a background for some windows, but every item on top of it should be opaque.
Translucency in toolbars can remain, as its effects are limited by the window’s boundaries. Perhaps this would have good enough performance that it could affect all visible window areas, not just the window in the foreground.
The texture of foreground and background windows should be more obviously different.
The background of a pull-down menu should be nearly opaque.
I am not asking for much here, and I know design decisions like these are contentious. There are some people who think Apple should return to previous eras of visual interface design. I disagree; I think Big Sur is generally a positive iteration, particularly for its clearer structure. But Apple could learn a thing or two from its previous operating system design patterns. Most obviously: contrast is good.
Thankfully, covered by one of the few times I have actually purchased AppleCare. ↩︎
This is true for every app except Calculator, but it is true enough that I can call it a rule and treat Calculator as a special case. Why is Calculator so special? Great question. ↩︎
There is a similar story in that Reminders screenshot. Notice the sidebar is scrolled slightly but, aside from the shadow, the sidebar’s texture is seamless and there is no sign of the scrolled items in the background of the glass layer on top. ↩︎
Even the “retail” is plenty of well-off people. While most of the media coverage focuses on laid-off college-aged kids and uses the term “mom and pop” investors (is that even okay in 2021?), I can promise you plenty of friends in tech (who often make a lot more than friends in finance nowadays [Haha, sorry -Can]) and doctors are sending me $GME or $AMC gain porn. The already wealthy are having a laugh, catalyzed by r/WSB. I mean, the world’s richest man joined in [on] the fun. Can we please stop making this about the “little guy” vs. “Wall Street”?
But when that stock starts absolutely tanking — what happens? Does it stop exactly at some reasonable enterprise valuation? That’s not how momentum works. In trading, everyone loved the adage “don’t try to catch a falling knife” but while everyone enjoyed the lulz on the way up (other than Melvin Capital), it will be [GameStop CEO Jim] Bell’s job to catch the knife on the way down to try to keep as many of these people still employed as possible. Whatever happens, it will be dangerous and ugly. Real employees could lose their real jobs thanks to the lulz.
The rally around heavily-shorted stocks like GameStop and AMC has been a surprisingly gripping story for something so inherently dull — from GameStop’s staggering rise to trading apps intervening, and the backlash and lawsuits that resulted. If you are like me and not a Wall Street goon, the narrative of a bunch of Reddit users causing a hedge fund to require emergency funding to the tune of $2.75 billion is hilarious and satisfying. The effects of stock markets may impact us all, but it is only a game for the rich.
But Roy’s essay is the best thing I’ve read so far about this event. The part of this story that caught my eye from the beginning is that the rally’s earliest Reddit- and YouTube-based cheerleader bought over fifty thousand dollars’ worth of GameStop stock in 2019. That is not the kind of money that regular people can just throw at probably-failing mall companies. A handful of people who were probably wealthy to begin with are going to make a lot of money. A lot more people — from retail traders who don’t really know what they are doing, to GameStop employees — are going to be hurting after this is over. Big money traders are still running this thing and it still sucks for the rest of us.
If this puts a damper on your enthusiasm for this story, I’m sorry.
Apple’s upcoming App Tracking Transparency (ATT) policy will require developers to ask for permission when they use certain information from other companies’ apps and websites for advertising purposes, even if they already have user consent. […]
As for its own apps, Google says that it is working on privacy labelling:
When Apple’s policy goes into effect, we will no longer use information (such as IDFA) that falls under ATT for the handful of our iOS apps that currently use it for advertising purposes. As such, we will not show the ATT prompt on those apps, in line with Apple’s guidance. We are working hard to understand and comply with Apple’s guidelines for all of our apps in the App Store. As our iOS apps are updated with new features or bug fixes, you’ll see updates to our app page listings that include the new App Privacy Details.
Whenever that might be. Again, I do not see this as nefarious. Google has had plenty of time to prepare for this, but there was also loads of time to prepare for E.U. privacy regulations and Canadian anti-spam laws when those were enacted, and some companies were still in a rush to comply. It is embarrassing more than it is concerning.
Speaking of China, Apple sold more stuff between October and December in that country than it did in the entire world for all of 2006. Remember when people wondered what Apple would do after the iPod fad passed? How times change.
Apple is ramping up the production of iPhones, iPads, Macs and other products outside of China, Nikkei Asia has learned, in a sign that the tech giant is continuing to accelerate its production diversification despite hopes that U.S.-China tensions will ease under President Joe Biden.
Apple’s moves are part of a larger trend of global tech giants reducing their production dependence on China, long known as the world’s factory. The country’s rising labor costs, the prolonged trade tensions between Washington and Beijing and the outbreak of the coronavirus pandemic that severely disrupted the supply chain have all driven home the risks of depending too heavily on one country. The U.S. government, moreover, has initiated a “supply chain restructuring” campaign and urged tech suppliers to move away from China, the Nikkei reported earlier.
The reported scale of this diversification would have been unimaginable just a few years ago.
But it is key to remember that final assembly is the end link of a supply chain that remains dominated by Chinese manufacturers. These moves will certainly help avoid tariffs when finished devices are imported into other countries, but the dependency on Chinese companies further back along the supply chain remains a liability for companies that are also involved in, say, software distribution and original media.
On January 5, Google told TechCrunch that [privacy labelling] would be added to its iOS apps “this week or the next week,” but both this week and the next week have come and gone with no update. It has now been well over a month since Google last updated its apps.
When it said that an update was coming soon, Google gave no reason for the delay, and still has not offered up an explanation for the lengthy period of time between app updates. Google typically pushes updates much more frequently across its catalog of apps, and its Android apps have continued to be updated regularly.
A handful of Google’s apps have gained privacy labels. Clover mentions Translate, Authenticator, Motion Stills, Google Play Movies, and Google Classroom; in addition, I also found that Wear OS, Smart Lock, and Stadia have privacy labels. None have been updated with a new version in the last month.
I am skeptical that the potential for negative press coverage is a reason for Google to delay app updates when there is a much simpler explanation. A review of the version histories for Google’s most popular apps1 shows that there are sometimes big gaps in updates over the holidays. In 2019, Chrome was last updated on December 10, and it took until February 5 for the next version to be released. It is a similar story for Google News, which had a gap from December 9, 2019 until February 6, 2020, while Hangouts went from September 2019 until the end of February 2020 without an update.
Now, a caveat: the App Store only shows twenty-five of the most recent updates. I do not know why that is the case; it is a frustrating limitation in this case specifically. Some of Google’s apps were updated nearly every week at different points in 2020, so it was not possible for me to go back far enough.
I doubt this is nefarious. Apple has a limited number of categories in its privacy labelling feature, so the labels for some of Google’s most popular apps will be similar to Facebook’s — or possibly a little better. Facebook may have been the subject of attention when it was required to be more up front about how much user data it collected — so much so that it replied with a full-page newspaper ad. But the negative press coverage over Facebook’s privacy labels died off after a couple of days, and I doubt it has remained in the public consciousness.
These privacy labels are certainly helpful for many apps. But everyone already knows that Facebook and Google treat your personal data as an all-you-can-eat buffet. I doubt Google is avoiding the inevitable, simply because it has little reason to do so. Just think of it as a holiday delay.
Which, by the way, revealed to me that Apple continues to have a problem with App Store comment spam. What is the value of comments on the App Store, really? I say switch them off. ↩︎
The new Academic Research track could have major implications for researchers studying election security, misinformation and other big issues affecting Twitter. While the company has previously made this kind of data available to developers, it was prohibitively expensive for most researchers. But with the new API, approved researchers will be able to access a full history of all public conversations on Twitter, as well as advanced search and filtering tools for free.
There are, however, a few limitations. For one, it won’t be available to independent researchers. According to Twitter, the research API will be limited to Twitter-approved students or “research-focused employees” of academic institutions. Additionally, Twitter only provides historical data for accounts and conversations that are currently viewable on its platform. That means tweets from suspended accounts, or content that’s been removed, won’t be accessible to researchers. This could be a significant hurdle to people studying misinformation, extremism, hate speech, or other areas where content often violates Twitter’s rules.
Ironically, under these restrictions, the better Twitter gets at moderating its platform, the harder it will become for researchers to study its seedier qualities.
Tapbots released a new version of Tweetbot today. It looks a little nicer, the bird in the icon now looks stoned, and there are a handful of enhancements. But the big news is that most of the app is locked behind a subscription: if you want to use multiple accounts, filter your timeline, use push notifications, or send a tweet, you’ll have to pay up.
Each major version of Tweetbot, from its first release onward, was a paid upgrade until Tweetbot 5 in 2018, which was a free update. The previous version, released in 2015, cost ten dollars in the U.S. after launch pricing expired — about two dollars per year of use. At six dollars per year, is the new version worth the price jump and an annual commitment?
I have no issue with subscriptions conceptually, but they rightly carry the expectation that in return for regular payments, users will receive meaningful, periodic updates. Recognizing this, many developers time the move to a subscription with a substantial app update to start off on the right foot, which Tapbots hasn’t done. Tweetbot’s subscription is primarily based on the promise of future updates. […]
Tapbots’ ability to update Tweetbot is, alas, limited by how fast Twitter builds out its new more developer-friendly API. For example, while you can now view polls in Tweetbot, you cannot vote in them; it will prompt you to open the poll in the Twitter app if you try. You cannot view who liked a tweet or retweeted a post with a comment. You cannot search tweets from more than the last seven days. All of these limitations are on Twitter’s end and have nothing to do with Tweetbot specifically.
That is perhaps the hardest sell for Tweetbot. Your subscription fee is paying for Tapbots to translate Twitter’s API into a really nice app experience, but whatever work they can do is limited by what Twitter makes available to them. Over the last few years, Tweetbot updates have been as modest as Twitter’s API adjustments, and it is unclear how fast Twitter will roll out changes to its new API.
For what it is worth, I love the Tweetbot experience so much that I bought an annual subscription. But I wonder if, three years from now — after spending the Canadian pricing of $21 — I will feel like it is a good ongoing investment rather than an annual cost.
Huge collection of 6K wallpapers created by Hector Simpson based on the timeless Aqua pattern in Tiger. Free at phone and iPad sizes; just $3 for desktop options that include light and dark mode variants, plus two dynamic wallpapers. I bought this instantly.
The Great Deplatforming was a response to a singular and extreme event: Trump’s incitement of the Capitol attack. As journalist Casey Newton pointed out in his newsletter, Platformer, it was notable how quickly the full stack of tech companies reacted. We shouldn’t assume that Amazon will just start taking down any site because it did it this time. This was truly an unprecedented event. On the other hand, do we dare think for a moment that Bad Shit won’t keep happening? Buddy, bad things are going to happen. Worse things. Things we can’t even imagine yet!
Long before Facebook, Twitter, and YouTube were excusing their moderation failures with lines like “there’s always more work to be done” and “if you only knew about all the stuff we remove before you see it,” Something Awful, the influential message board from the early internet, managed to create a healthy community by aggressively banning bozos. As the site’s founder, Rich “Lowtax” Kyanka, told the Outline in 2017, the big platforms might have had an easier time of it if they’d done the same thing, instead of chasing growth at any cost: […]
This is the Oversight Board, a hitherto obscure body that will, over the next 87 days, rule on one of the most important questions in the world: Should Donald J. Trump be permitted to return to Facebook and reconnect with his millions of followers?
But the board has been handling pretty humdrum stuff so far. It has spent a lot of time, two people involved told me, discussing nipples, and how artificial intelligence can identify different nipples in different contexts. Board members have also begun pushing to have more power over the crucial question of how Facebook amplifies content, rather than just deciding on taking posts down and putting them up, those people said. In October, it took on a half-dozen cases, about posts by random users, not world leaders: Can Facebook users in Brazil post images of women’s nipples to educate their followers about breast cancer? Should the platform allow users to repost a Muslim leader’s angry tweet about France? It is expected to finally issue rulings at the end of this week, after what participants described as a long training followed by slow and intense deliberations.
That is certainly a range of topics, though one continues to wonder how much — if anything — can truly be offloaded to artificial intelligence.
[…] Our media, speech forums, and distribution systems are all run by cartels and monopolists whom governments can’t even tax – forget regulating them.
The most consequential regulation of these industries is negative regulation – a failure to block anticompetitive mergers and market-cornering vertical monopolies.
Doctorow calls this censorship; I disagree that what we have seen from tech giants truly amounts to that. Moderation is not synonymous with censorship. I do not think you can look at the vast landscape of publishing options available to just about anyone and conclude that speech is more restricted now than it was, say, ten or twenty years ago.
But Doctorow is right in observing that there are now bigger beasts that effectively function as unelected governments. They materialized because venture capital and regulators incentivized lower costs and faster growth. Our existing primary venues for online discussion are a product of this era, and it shows: Twitter’s newest solution for countering misinformation is adding comments from trusted users. It is such a Web 2.0 solution that it could have a wet floor effect logo and one of those shiny gummy-coloured “beta” callouts.
The way that we figure out how to create healthier communities is by trying new things in technology and antitrust policy. Ironically, I anticipate many of these new ideas will more closely model those from a previous generation, but necessarily updated for billions of people connected through the internet. It does seem unlikely to me that all of those people will be connected through the same platform — a new Facebook, for example — and more likely that they will be connected through the same protocols.
Apple today announced Dan Riccio will transition to a new role focusing on a new project and reporting to CEO Tim Cook, building on more than two decades of innovation, service, and leadership at Apple. John Ternus will now lead Apple’s Hardware Engineering organization as a member of the executive team.
Alex Kantrowitz, writing in his Big Technology newsletter:
On December 22, 2020, Facebook VP Andrew “Boz” Bosworth wrote his colleagues with a stark message on privacy. “The way we operated for a long time,” he said, “is no longer the best way to serve those who use our products.”
In an internal memo called “The Big Shift,” obtained by Big Technology and first reported here, Bosworth called on Facebook employees to prioritize privacy as they built their products, even to the detriment of the user’s experience. The public’s expectations on privacy were changing, he said, and Facebook’s old approach wasn’t cutting it anymore.
In a follow-up note to his division, Bosworth said they’d invert their product development process. “Instead of imagining a product and trimming it down to fit modern standards of data privacy and security,” he said, “we will start with the assumption that we can’t collect, use, or store any data. The burden is on us to demonstrate why certain data is truly required for the product to work.”
I cannot imagine what it must feel like to know that something you invented over thirty years ago underpins the computers on tens of millions of desks and in a billion pockets. It must be pretty incredible. Cox sure made his dent in the universe.
Google has reached an agreement with an association of French publishers over how it will pay for reuse of snippets of their content. This is a result of the application of a ‘neighbouring right’ for news, which was transposed into national law following a pan-EU copyright reform agreed back in 2019.
The tech giant was also keen to emphasize that French law and the EU copyright directive do not require consent for the use of links or “very short extracts”, adding that it’s paying for online use on its surfaces for publisher content that goes beyond links and very short extracts — such as a News Showcase panel curated by the publisher.
I selected paragraphs that are directly relevant to the next article, but I recommend reading the bits I trimmed out in Lomas’ piece. In a nutshell, after French authorities first told Google that it would have to pay for snippet use, Google removed the snippets. But French authorities pointed out that Google can’t ride copyright violations to a dominant market position and then jettison responsibility when it is asked to comply with the law — so Google was compelled to strike an agreement.
Google’s threat to cut off search to Australian users and walk away from $4 billion in revenue has sparked warnings the digital giants are not bluffing over laws designed to force them to pay for news.
The code aims to force digital platforms to pay media companies for news content, and follows a 12-month review into Google and Facebook by the competition watchdog. The legislation, which was introduced into the House of Representatives in December, comes amid a push by global governments to rein in the power of digital monopolies.
[Google’s lawyer] also revealed that news queries comprised only 1.25 per cent of all Google searches, but under intense questioning from senators, said the company was concerned the proposed Australian code would set an international precedent.
Canadian publishers are hoping for a similar scheme but I can’t imagine they will be successful. I wrote last year that these ideas are ill-advised. I was wrong about France, but the idea that — as the lawyer Hannah Marshall put it in the Herald article — web giants should “pay for the right to supply audience to the news publishers” is nonsensical. The French agreement appears to be more comprehensive than simply charging for the right to link. But should I pay TechCrunch and the Herald because I am sending you to their websites? I have not seen a good argument for that.
A home security technician admitted Thursday that he secretly accessed the cameras of more than 200 customers, particularly attractive women, to spy on while they undressed, slept, or had sex, federal prosecutors said.
Telesforo Aviles, a 35-year-old former employee for the security company ADT, admitted he secretly accessed the customers’ accounts more than 9,600 times over more than four years, according to a guilty plea submitted in court.
Hernandez did not include copies of those court documents, so I thought I would pull them to see what he pleaded guilty to. Here’s a shared folder with a handful of key documents in related cases.1 And you know what he pleaded guilty to in the federal case? Well, you probably do if you read the headline: a single count of computer fraud. He committed privacy violations against hundreds of people — including minors — but the crime he committed was accessing the cameras without permission. That is outrageous.
Some of the women who Aviles spied on attempted to file class action suits in Texas and Florida against ADT. But a judge stayed a case in Florida because ADT has an arbitration clause in its contract. That is, everyone who was affected by this extraordinary breach of trust must individually negotiate a settlement with ADT, after which they will be sworn to confidentiality.
This is a dismal combination of some of the worst attributes of modern life: insecure internet-of-things devices led to intrusion and spying but, because there is no federal privacy law in the United States, the victims are unable to seek relief on those grounds, and any legal standing they do have is subject to negotiation by arbitration.
I tried using iCloud’s folder sharing feature, but it is adamant that you add the folder to your own iCloud account before you can view its contents. Not great. ↩︎
Rosenworcel served as an FCC commissioner during both the Obama and Trump administrations. She supported net neutrality rules and opposed mega-mergers that came before the agency including that between T-Mobile and Sprint.
Her greatest focus, however, has been on shoring up the FCC’s subsidy programs and the broadband connectivity data they rely on.
Rosenworcel has particularly emphasized the need to close the “homework gap” — the divide between students who have fast, reliable in-home internet and those who don’t.
Lewis Day writes about TV Licencing in Britain, and the detector vans used to discover if a household is watching unlicensed BBC broadcasts:
Alternatively, a search warrant may be granted on the basis of evidence gleaned from a TV detector van. Outfitted with equipment to detect a TV set in use, the vans roam the streets of the United Kingdom, often dispatched to addresses with lapsed or absent TV licences. If the van detects that a set may be operating and receiving broadcast signals, TV Licencing can apply to the court for the requisite warrant to take the investigation further. The vans are almost solely used to support warrant applications; the detection van evidence is rarely if ever used in court to prosecute a licence evader. With a warrant in hand, officers will use direct evidence such as a television found plugged into an aerial to bring an evader to justice through the courts.
Historically, antennas were used to detect specific frequencies. These days, with streaming and flat panel TVs, it appears that vans are still used but the detection methods are somewhat obscured.
An official Adobe history describes the PDF’s goal as being able to “exchange information between machines, between systems, between users in a way that ensured that the file would look the same everywhere it went.” This meant creating “a digital interchange format that preserved author intent,” says David Parmenter, director of engineering for Adobe Document Cloud, “which is, at a really high level, what a PDF tries to do.”
Beneath the highly technical language is something pretty basic: The mission of the PDF is simply to be the digital version of old-fashioned paper.
For then-unforeseen reasons, that mission has proved to be somewhat frustrated by the invention of the smartphone:
Adobe’s most recent and ongoing efforts around the PDF have centered on adapting the format to the smartphone era. Late last year, the company debuted what it calls a “Liquid Mode” option that rejiggers PDFs for easier phone-screen-sized reading. On the creator and developer side, it has recently made the documents easier to embed in websites and is working on the ability to incorporate 3-dimensional renderings into PDFs.
Is there a market for 3D renderings in PDF form? I am skeptical; the format’s simplicity has surely been a key factor in its success.
In fact, if you are reading this on a Mac, an iPhone, or an iPad, you’re looking at PDFs hundreds of times every day. Icons across the system are PDFs — including those in toolbars, the menu bar, and throughout apps — because PDFs can contain infinitely-scalable vector graphics. Mac OS X has always contained PDF elements, but their use was expanded around the time Apple was working on a programmatic resolution-independent user interface defined by XML files. Ultimately, higher resolution interfaces were solved through pixel doubling along both axes, but I wonder what happened to that project.
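That simplicity is easy to demonstrate: a complete, well-formed PDF containing a single vector path fits in a few hundred bytes of mostly human-readable text. The sketch below is purely illustrative (the `minimal_pdf` helper is my own; real-world PDFs add fonts, compression, and metadata), but the object syntax, drawing operators, and cross-reference table it emits are the real format:

```python
# Assemble a minimal PDF containing one vector path (a stroked triangle).
# Byte offsets for the cross-reference table are computed as objects are
# appended, so the resulting file is structurally well-formed.

def minimal_pdf() -> bytes:
    # Content stream operators: m = move to, l = line to, h = close path, S = stroke.
    content = b"10 10 m 100 10 l 55 90 l h S"
    objects = [
        b"<< /Type /Catalog /Pages 2 0 R >>",
        b"<< /Type /Pages /Kids [3 0 R] /Count 1 >>",
        b"<< /Type /Page /Parent 2 0 R /MediaBox [0 0 110 100] /Contents 4 0 R >>",
        b"<< /Length %d >>\nstream\n%s\nendstream" % (len(content), content),
    ]
    out = bytearray(b"%PDF-1.4\n")
    offsets = []
    for num, body in enumerate(objects, start=1):
        offsets.append(len(out))  # byte offset of this object, for the xref table
        out += b"%d 0 obj\n%s\nendobj\n" % (num, body)
    xref_pos = len(out)
    out += b"xref\n0 %d\n0000000000 65535 f \n" % (len(objects) + 1)
    for off in offsets:
        out += b"%010d 00000 n \n" % off
    out += b"trailer\n<< /Size %d /Root 1 0 R >>\nstartxref\n%d\n%%%%EOF\n" % (
        len(objects) + 1, xref_pos)
    return bytes(out)

pdf = minimal_pdf()
print(len(pdf), "bytes")  # comfortably under a kilobyte
```

Writing `pdf` to a file named `triangle.pdf` produces something any PDF viewer can open: a resolution-independent triangle, which is the same property that makes the format work for system icons.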
Today’s inauguration ceremonies in the United States were notable for many reasons: Vice President Kamala Harris has ascended to the highest office yet held by a woman, and a woman of colour at that, in the U.S.; the ceremony was backdropped by evidence of a yearlong pandemic; it marked the return of full sentences and oratory skill above that of a startled goose.
But, aside from the procedural events, it is hard to think of a higher point than Amanda Gorman’s reading of her poem “The Hill We Climb”. A little less than six minutes of entirely captivating verse.
It’s hard to pinpoint exactly when we lost control of what we see, read — and even think — to the biggest social-media companies.
I put it right around 2016. That was the year Twitter and Instagram joined Facebook and YouTube in the algorithmic future. Ruled by robots programmed to keep our attention as long as possible, they promoted stuff we’d most likely tap, share or heart — and buried everything else.
If it were just about us and our friends and family, that would be one thing, but for years social media hasn’t been just about keeping up with Auntie Sue. It’s the funnel through which many now see and form their views of the world.
This is something we should continue to keep in mind as social media companies evolve. I am doubtful of the longevity of audience-specific platform clones — Parler, for example, or the Facebook copycat MeWe shown in Stern’s article — but I am certain that the major platforms will have to keep changing in response to the kinds of problems that have bubbled up in recent years. I hope there is increasing emphasis on quality and user control; as Stern’s article shows, platforms have already proved they can readily make adjustments like these.
This is no substitute for a better option, which is to avoid using social media platforms as a primary referral source for news. A better-designed algorithm is no substitute for keen human editors across multiple reliable publishers. But, since our collective dependence on social media is unlikely to subside, it is an ethical responsibility of these platforms to better tune how they sort users’ feeds.
Americans, most directly impacted by this presidential administration’s everything, are surely looking forward to welcoming a new administration, imperfect as it may be. But it is also going to allow the rest of us, who are incidentally impacted by the actions of the world’s most powerful nation, to breathe a little easier.
This president is so weak and sad that he’s going to get out of town early Wednesday morning. A fitting end to an administration defined by deliberate misery. Anyway, here’s a Tom Tomorrow strip.
As the FBI continues to round up rioters who stormed the U.S. Capitol on Jan. 6 to try to stop President-elect Joe Biden’s inauguration last week, it’s finding that a number of them seem to have openly confessed to crimes on open social media, a review of court documents shows.
The subject of another much-circulated photo, of a cheerful and waving bearded man walking through the Capitol with the speaker’s lectern, has been identified by the Bradenton Herald as Florida man Adam Johnson (not “Via Getty”). Johnson was arrested on Friday and hit with the same three charges as Barnett. The complaint against Johnson references photos posted on his own Facebook account that appear to show him inside the Capitol building and were sourced from a newspaper article about the riot. Additionally, someone who has a mutual friend with Johnson called the FBI to report that he was the man in the photo with the lectern.
Johnson’s lawyer admitted to reporters that the photograph of his client is “a problem.”
“I’m not a magician,” Dan Eckhart added. “We’ve got a photograph of our client in what appears to be inside a federal building or inside the Capitol with government property.”
An affidavit from an FBI special agent filed in court Tuesday says Eduardo Florea stockpiled more than 1,000 rounds of ammo and threatened to kill Sen.-elect Raphael Warnock of Georgia.
The affidavit says the FBI received records from Parler to identify the user behind the account “LoneWolfWar,” where the threats originated. Parler provided the phone number associated with the account, the affidavit says, and the FBI used it, and info from T-Mobile, to identify Florea.
Tinder, Bumble and other dating apps are using images captured from inside the Capitol siege and other evidence to identify and ban rioters’ accounts, causing immediate consequences for those who participated as police move toward making hundreds of arrests.
Amanda Spataro, a 25-year-old logistics coordinator in Tampa, called it her “civic duty” to swipe through dating apps for men who’d posted incriminating pictures of themselves. On Bumble, she found one man with a picture that seemed likely to have come from the insurrection; his response to a prompt about his “perfect first date” was: “Storming the Capitol.”
“Most people, you think if you’re going to commit a crime, you’re not going to brag about it,” Spataro said in an interview.
You would think that, wouldn’t you? But only if you, you know, think.
The Capitol riot was a boundary-busting event in almost every way, and its impact on the digital privacy debate was no different. The insurrectionists’ acts were so galling, so frightening, that suddenly, even those who might oppose digital surveillance and forensics techniques in other contexts, like, say, identifying peaceful protesters at a Black Lives Matter rally, feel justified in deploying those tools against the rioters. The shifting goalposts have sparked a tense debate among researchers of online extremism about the right way to stitch together the digital scraps of someone’s life to publicly accuse them of committing a crime — or whether there is a right way at all.
I think a piece by Astead W. Herndon in the New York Times is a good explanation of the false equivalence between Black Lives Matter protests and the criminal surge of U.S. Capitol rioting morons. But Lapowsky’s article raises good arguments about the dangers of false accusations, attempts at mob justice, and the risks faced by those identifying extremists.
A widely adopted, decentralized protocol is an opportunity for social networks to “pass the buck” on moderation responsibilities to a broader network, one person involved with the early stages of bluesky suggests, allowing individual applications on the protocol to decide which accounts and networks its users are blocked from accessing.
Social platforms like Parler or Gab could theoretically rebuild their networks on bluesky, benefitting from its stability and the network effects of an open protocol. Researchers involved are also clear that such a system would also provide a meaningful measure against government censorship and protect the speech of marginalized groups across the globe.
The internet itself is built on a series of decentralized protocols. While I don’t want to minimize the worries of those involved with the oddly-lowercased bluesky effort, a universal protocol for short messages seems more in line with the internet I remember before a handful of big American platforms corralled the worldwide market for communication. One could see it as “passing the buck”, but it is equally valid to see this as reducing singular influence and control.
It is baffling to me that, in 2021, I still do not know the security practices of the devices and cloud services I use more frequently than ever.
This became particularly worrisome last year when I began working my day job from my personal computer. I have several things in my favour: it is an iMac, not a portable computer, so there is dramatically less risk of unauthorized physical access; I keep an encrypted Time Machine backup and an encrypted Backblaze remote backup; I use pretty good passwords. But what about my phone and iCloud, for example? I do not use either for much work stuff, but I inevitably have some communications and two-factor authentication apps on my iPhone, and I use iCloud for backups.
Over the holidays, I immersed myself in an early copy of a new report by Johns Hopkins University students Maximilian Zinkus and Tushar Jois, and associate professor Matthew Green, as I tried to find answers for what should be simple questions. The researchers’ conclusions in the now-published report were eye-opening to me:
Limited benefit of encryption for powered-on devices. We observed that a surprising amount of sensitive data maintained by built-in applications is protected using a weak “available after first unlock” (AFU) protection class, which does not evict decryption keys from memory when the phone is locked. The impact is that the vast majority of sensitive user data from Apple’s built-in applications can be accessed from a phone that is captured and logically exploited while it is in a powered-on (but locked) state.
Limitations of “end-to-end encrypted” cloud services. Several Apple cloud services advertise “end-to-end” encryption in which only the user (with knowledge of a password or passcode) can access cloud-stored data. We find that the end-to-end confidentiality of some encrypted services is undermined when used in tandem with the iCloud backup service. More critically, we observe that Apple’s documentation and user settings blur the distinction between “encrypted” (such that Apple has access) and “end-to-end encrypted” in a manner that makes it difficult to understand which data is available to Apple. Finally, we observe a fundamental weakness in the system: Apple can easily cause user data to be re-provisioned to a new (and possibly compromised) [Hardware Security Module] simply by presenting a single dialog on a user’s phone. We discuss techniques for mitigating this vulnerability.
The muddy distinction between “encryption”, “end-to-end encryption”, and “true end-to-end encryption that even Apple cannot reverse” remains a source of consternation, especially as Apple’s own documentation is anything but precise.
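The report’s “available after first unlock” finding is easier to grasp with a toy model. To be clear, this is my own illustrative sketch of the two behaviours, not Apple’s implementation; the class names merely echo the terminology of Apple’s file protection constants:

```python
# Simplified model of two iOS Data Protection behaviours the report contrasts.
# COMPLETE evicts decryption keys whenever the device locks; AFTER_FIRST_UNLOCK
# keeps them resident from the first unlock after boot until power-off.

from enum import Enum, auto

class ProtectionClass(Enum):
    COMPLETE = auto()            # cf. NSFileProtectionComplete
    AFTER_FIRST_UNLOCK = auto()  # cf. NSFileProtectionCompleteUntilFirstUserAuthentication

class Device:
    def __init__(self):
        self.unlocked_once = False  # has the passcode been entered since boot?
        self.locked = True

    def first_unlock(self):
        self.unlocked_once = True
        self.locked = False

    def lock(self):
        self.locked = True

    def key_available(self, cls: ProtectionClass) -> bool:
        if cls is ProtectionClass.COMPLETE:
            # Key is evicted from memory whenever the device locks.
            return not self.locked
        # AFU: key stays in memory after the first unlock, even while locked.
        return self.unlocked_once

phone = Device()
phone.first_unlock()
phone.lock()
print(phone.key_available(ProtectionClass.COMPLETE))            # False
print(phone.key_available(ProtectionClass.AFTER_FIRST_UNLOCK))  # True
```

The second line of output is the researchers’ concern in miniature: a seized phone that is powered on but locked still has AFU-class keys in memory, and the report says a surprising amount of built-in app data sits in that weaker class.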
“It just really shocked me, because I came into this project thinking that these phones are really protecting user data well,” says Johns Hopkins cryptographer Matthew Green, who oversaw the research. “Now I’ve come out of the project thinking almost nothing is protected as much as it could be. So why do we need a backdoor for law enforcement when the protections that these phones actually offer are so bad?”
If there is one conclusion of this report that is damning by its silver lining, it is that it calls bullshit on law enforcement’s insistence that smartphone encryption creates some sort of investigative black hole. Maybe Apple has successfully created encryption that is strong enough for personal and business use with a strictly-controlled opening for legitimate legal use — by sticking itself in the middle of that chain. That is how I am reading between the lines of the statement an unidentified spokesperson provided Wired:
The researchers shared their findings with the Android and iOS teams ahead of publication. An Apple spokesperson told WIRED that the company’s security work is focused on protecting users from hackers, thieves, and criminals looking to steal personal information. The types of attacks the researchers are looking at are very costly to develop, the spokesperson pointed out; they require physical access to the target device and only work until Apple patches the vulnerabilities they exploit. Apple also stressed that its goal with iOS is to balance security and convenience.
There are security problems with iOS devices and iCloud services that Apple can and should fix, but I bet there are many that it will not because it is perhaps unwise to be a company that is explicitly trying to block any subpoena from having an effect. If that is the case, Apple ought to say so. It should be plainly clear to users what their security options are, and Apple ought to be more honest in its marketing and documentation of these features.
If Apple is appointing itself guardian of its users’ data — in iCloud form, including defaulted-to-on iCloud Backups of iPhones, iPads, and Apple Watches — that also means that it can respond to law enforcement requests at any level by any agency. Depending on how much you trust your local police and national intelligence services, perhaps that does not seem like a great idea to you. More worrying is that it leaves Apple open to potentially being a part of corrupt regimes’ human rights abuses if it is responsive to data requests for activists’ accounts, or if it complies with device search requests from border patrol.
Maybe there are only bad options, and this is the best bad option that strikes the least worst balance between individual security and mass security. But the compromises seem real and profound — and are, officially, undocumented.
Do you want to know what Apple’s 2021 Mac lineup looks like? Well, new reports from Ming-Chi Kuo and Mark Gurman that dropped in rapid succession today — almost like there was a meeting at Apple this week to discuss new products — paint a rosy picture.
Juli Clover, of the very appropriately named MacRumors:
According to Kuo, Apple is developing two models in 14 and 16-inch size options. The new MacBook Pro machines will feature a flat-edged design, which Kuo describes as “similar to the iPhone 12” with no curves like current models. It will be the most significant design update to the MacBook Pro in the last five years.
There will be no OLED Touch Bar included, with Apple instead returning to physical function keys. Kuo says the MagSafe charging connector design will be restored, though it’s not quite clear what that means as Apple has transitioned to USB-C. The refreshed MacBook Pro models will have additional ports, and Kuo says that most people may not need to purchase dongles to supplement the available ports on the new machines. Since 2016, Apple’s MacBook Pro models have been limited to USB-C ports with no other ports available.
All of the new MacBook Pro models will feature Apple silicon chips, and there will be no Intel chip options included.
These leaks were echoed by Mark Gurman, who also added that the displays in the new MacBook Pro models would be brighter and higher-contrast.
If these rumours are accurate, these products seem inspired by the early 2010s golden age of the MacBook Pro: lots of ports, MagSafe, and a great keyboard. All of these things were part of the much-loved models of that time before they were removed in favour of four USB-C and Thunderbolt combo ports which doubled as charging ports, and a poor keyboard. The latter problem was fixed; the former decision still feels like a compromise too much of the time. The excitement for these rumours seems telling. You’ve got to wonder what ports would be added; I can’t see USB-A or Ethernet making a comeback, and even HDMI and Micro SD ports feel like a stretch.
I would still love to read a deeply reported explanation of what happened with the Mac notebook range from 2012 through the present day. I think there must be an interesting story in there about being ready for the short-term backlash of trying new things, only to find that long-term compromises remain.
Apple Inc. is planning the first redesign of its iMac all-in-one desktop computer since 2012, part of a shift away from Intel Corp. processors to its own silicon, according to people familiar with the plans.
The new models will slim down the thick black borders around the screen and do away with the sizable metal chin area in favor of a design similar to Apple’s Pro Display XDR monitor. These iMacs will have a flat back, moving away from the curved rear of the current iMac. Apple is planning to launch two versions — codenamed J456 and J457 — to replace the existing 21.5-inch and 27-inch models later this year, the people said, asking not to be identified because the products are not yet announced.
Gurman also says that Apple is working on two new Mac Pro models — one of which he says may continue to use Intel’s processors, but that does not pass my sniff test — and a less-expensive standalone display.
The rumours that were published today represent nearly every Mac in Apple’s lineup that has yet to receive Apple’s own processors, with the exception of the iMac Pro. But, given the M1’s performance and the smaller Mac Pro model, it is possible the iMac Pro may simply be discontinued.
I’m just spitballing but, maybe if Apple’s feeling in a real retro mood, the new iMac could just be called the “Mac”. Just a thought.
Ben Thompson, on the different responses around the world to tech companies’ restrictions over the past week:
Make no mistake, Europe is far more restrictive on speech than the U.S. is, including strict anti-Nazi laws in Germany, the right to be forgotten, and other prohibitions on broadly defined “harms”; the difference from the German and French perspective, though, is that those restrictions come from the government, not private companies.
This sentiment, as I noted yesterday, is completely foreign to Americans, who whatever their differences on the degree to which online speech should be policed, are united in their belief that the legislature is the wrong place to start; the First Amendment isn’t just a law, but a culture. The implication of American tech companies serving the entire world, though, is that that American culture, so familiar to Americans yet anathema to most Europeans, is the only choice for the latter.
One of the reasons it is interesting to be a Canadian writing about tech is because, generally speaking, we take influence from both Western European and American perspectives on all sorts of matters. Our right of expression is not as wide-ranging as that of the U.S., but it lacks many European limitations as well. Like many in Europe, Canadians feel perfectly able to express their views in public — more than Americans in their country — and do not feel that the small number of legal limitations are restrictive.
This week’s sweeping restrictions of the social media accounts of the president of the United States and the deplatforming of Parler were a necessarily American response to problems in America. The president was not silenced or censored, but his association with private companies was revoked because they did not want to deal with his particular brand of nightmare fuel. But it is clearly not a solution for worldwide issues — especially when non-U.S. countries struggle to enforce their laws against American companies.
Thompson’s prediction for the future of the internet is intriguing:
Here technology itself will return to the forefront: if the priority for an increasing number of citizens, companies, and countries is to escape centralization, then the answer will not be competing centralized entities, but rather a return to open protocols. This is the only way to match and perhaps surpass the R&D advantages enjoyed by centralized tech companies; open technologies can be worked on collectively, and forked individually, gaining both the benefits of scale and inevitability of sovereignty and self-determination.
Apple last year pledged a hundred million dollars to a new Racial Equity and Justice Initiative, promising big investments in underrepresented individuals and communities in the United States, initially, and around the world.
Apple today announced a set of major new projects as part of its $100 million Racial Equity and Justice Initiative (REJI) to help dismantle systemic barriers to opportunity and combat injustices faced by communities of colour. These forward-looking and comprehensive efforts include the Propel Center, a first-of-its-kind global innovation and learning hub for Historically Black Colleges and Universities (HBCUs); an Apple Developer Academy to support coding and tech education for students in Detroit; and venture capital funding for Black and Brown entrepreneurs. Together, Apple’s REJI commitments aim to expand opportunities for communities of colour across the country and to help build the next generation of diverse leaders.
A former Apple employee who noted that he was “not Black or Hispanic” described his experience on a team that was developing speech recognition for Siri, the virtual assistant program. As they worked on different English dialects — Australian, Singaporean, and Indian English — he asked his boss: “What about African American English?” To this his boss responded: “Well, Apple products are for the premium market.”
Benjamin notes that this interaction took place a year after Apple acquired Dr. Dre’s Beats brand:
The irony, the former employee seemed to imply, was that the company could somehow devalue and value Blackness at the same time.
For what it is worth, in the video introducing Apple’s racial equity initiative in June, Cook acknowledged the need for thorough correction:
In our supply chain and professional service partners, we’re committed to increasing our total spending with Black-owned partners, and increasing representation across companies we do business with. […]
We’re taking significant new steps on diversity and inclusion within Apple, because there is more that we can and must do to hire, develop, and support those from underrepresented groups — especially our Black and Brown colleagues.
Change begins at the top. But the “top” is somewhat relative: it is true not only at the executive level, but at each layer of management. African American English is still not a language option in Siri five years later, apparently at least in part because it doesn’t fit the “premium market” — and that is just one example. These changes take time and the projects announced today are surely a terrific investment in the future, but it must be acknowledged that Apple continues to have internal deficiencies today that it has the power to correct.
The research began with the observation that in the offline world, healthy communities have traditionally been served by thriving public spaces: town squares, libraries, parks, and so on. Like digital social networks, these spaces are open to all. But unlike those networks, they are owned by the community rather than a corporation. As you would expect, that difference results in a very different experience for the user.
Public spaces display a number of features that build healthier communities, according to researchers. “Humans have designed spaces for public life for millennia,” they write, “and there are lessons here that can be helpful for digital life.”
Even if the specifics of this research may need ironing out, the gist of it is inspiring. I looked through Civil Signals’ slide deck; I thought this was an eye-opening observation about the language often used for the ways social networks ought to be improved:
Encourage Civility. What counts as civil is often defined by dominant groups.
Reduce Polarization. Polarization isn’t the problem — dehumanization and lack of cross-connection are.
Increase Diversity. Mere contact with other groups or their ideas does not increase tolerance.
Inform People. Not all information is equally valuable to citizens.
Increase Trust. Not all institutions or individuals deserve trust.
Allow Participatory Governance. We think this is an important idea, but outside the scope of this research.
Last point notwithstanding, these are excellent arguments against which apparent improvements should be tested.
WhatsApp rival Telegram has seen a 500 per cent increase in new users amid widespread dissatisfaction with the way the Facebook-owned app handles people’s data.
Telegram recorded 25 million new users over the last 72 hours, according to founder Pavel Durov, taking the total number of users above 500 million.
This is roughly a quarter of the estimated 2 billion WhatsApp users around the world, though many users of the world’s most popular messaging app took to social media this week to urge others to leave the platform due to privacy concerns.
Right-wing extremists are using channels on the encrypted communication app Telegram to call for violence against government officials on Jan. 20, the day President-elect Joe Biden is inaugurated, with some extremists sharing knowledge of how to make, conceal and use homemade guns and bombs.
The messages are being posted in Telegram chatrooms where white supremacist content has been freely shared for months, but chatter on the channels has increased since extremists have been forced off other platforms in the wake of the siege of the U.S. Capitol last week by pro-Trump rioters.
Ignore the scary use of “encrypted” here — Telegram Channels are not encrypted, and encryption itself is not a specific worry. What interests me are the ways that misinformation is spreading now.
Telegram Channels are public, and messages posted in there can be forwarded within Telegram to other Channels, Groups, and individuals. They can also be sent as standard web links, which is how I came across a semi-popular post — archived here — claiming that:
Apple is about to pull Telegram from the App Store
Apple is going to remotely delete all existing copies of Telegram installed on users’ devices
You can prevent your copy from being deleted by disabling the ability to delete apps in parental controls
I have no reason to suspect the first claim is true. Telegram is a widely-used messaging app popular around the world for mostly legitimate uses. The second claim is incongruent with Apple’s treatment of Parler; existing copies of that app are, theoretically, still functional. The third claim has absolutely no relevance to anything here, and is not how that feature works. But this one message has racked up well over two hundred thousand views.
It is not just Telegram, or even solely a problem of these apps. Ben Collins, also at NBC News, explains how these fictions are now circulating through text messages:
One viral false conspiracy theory shared across the U.S. implores users to disable automatic software updates on their cellphones, claiming that the next patch will disable an emergency broadcasting system message from President Donald Trump. The false rumors are usually attached to another urban legend about a blackout coming in the next two weeks, which say people should be “prepared with food and water.”
Another viral text is a link to a deceptively edited video, also known as a “cheapfake,” that first appeared on the Twitter-like social media platform Parler. It features a series of mashed-up speeches by Trump that are realigned to lead the viewer to falsely believe he is calling for an uprising on Jan. 20.
This reminds me of the days of chain social media posts — post this and tag three friends or you won’t wake up tomorrow, that kind of thing — and chain emails before that, and chain posts on BBSes before that, and chain letters with actual postage before that.
But those posts seem quaint compared to what we’re seeing today. The conspiracy theories of the past seemed to be based on historical events. The ones that are circulating now are creating an alternate reality for the here and now. Instead of overanalyzing specks of dust in the Zapruder film decades on, there are now people who make a living by denying an audience the reality of what is happening before their very eyes. Neither chain letters nor invented versions of events are new, but perhaps it is the combination of the two with the speed of technology and the lucrative careers available in professional grifting that has given these messages unwelcome power.
Gayle King of CBS This Morning interviewed Tim Cook for an initiative Apple is announcing tomorrow. King previewed it today by saying:
It is not a new product. We should say, it is not a new product. It’s something, I think, bigger and better than that.
So, if not a new product, what could it be?
Apple’s January events have often centred around education, so it would not surprise me if the tip that fell into my lap is correct. It seems that Apple is a founding partner of the new, soon-to-be-officially-unveiled Propel Center in Atlanta:
Spanning 50,000 square feet, Propel Center will include state-of-the-art spaces to accommodate lecture halls, learning labs and common areas to facilitate group learning. The physical Propel Center will serve as a centralized nexus and symbol for HBCU collaboration across the country.
In addition to the main building, Propel will also be offering on-campus labs equipped by Apple, and online instruction with master classes from Spike Lee, Lisa Jackson, and — awkwardly — Jack Ma. There’s lots more; the promo site is only one page, but it’s worth checking out.
Anyway, I am not saying this is definitely what Cook will be talking about, but I am as confident as I can be that you will hear more about this tomorrow.
Now consider the current Mac product line. It would be instantly recognizable to a visitor from the early 2010s.
Of course, Macs have evolved a lot in the intervening years on the inside. But the exteriors of Apple’s Macs look remarkably like they did in 2012, if not 2007. It’s been a decade or more of quiet iteration without really rethinking the fundamentals of the product — except that one time, which Apple rapidly came to regret.
I have written before about Apple’s deliberate strategy to keep the industrial design of its first M1 Macs identical to their Intel-based predecessors. But Snell is right: the Mac lineup used to be more experimental and hungry for evolution in its hardware. What changed? Or, more accurately, why so little change?
Apple has settled on a nearly all-aluminum line; the wildest configuration is the choice of gold for MacBook Air models. Aside from the lack of a glowing Apple logo, today’s MacBook Pro looks from not-too-far away much like the one from 2008.
I am not one to argue for change for its own sake. But, as Apple plays around with the industrial design of the iPhone, iPad, and Apple Watch every few years, I have to wonder if part of the reason for the largely stagnant Intel era was the Intel processors themselves. Now that Apple is working with its own processors that have different thermal constraints and can be entirely custom-engineered, will we perhaps see a renaissance of experimentation in materials and form? I am not so sure; I do not want to get ahead of myself. The MacBook Air may not be the ideal laptop design, but it is pretty darn close. There is a reason it is the computer copied by every Windows OEM.
Maybe it is just my age, but I miss the era of glossy white finishes. I’m looking at my white and silver iPhone and it looks as crisp and modern and futuristic as you’d expect, without looking chintzy. Just a thought.
Jillian C. York of the Electronic Frontier Foundation on her personal blog:
Since Twitter and Facebook banned Donald Trump and began “purging” QAnon conspiracists, a segment of the chattering class has been making all sorts of wild proclamations about this “precedent-setting” event. As such, I thought I’d set the record straight.
Everything in the social media ecosystem was once tilted in the favor of toxic forces, from the algorithms that push our content feeds toward extremism to the companies’ longstanding reticence to admit it. Imagine a foosball game on a slanted table. Yes, the little soccer players could try to stop each rush of the rolling ball, but all their spinning wouldn’t matter in the end. Over the past few years, however, that table has started to be righted. Driven by outside pressure over election disinformation, mass killings, and COVID-19 striking close to home — and perhaps most significantly, internal employee revolts — the companies’ leaders have put into place a series of measures that make it harder for toxic forces. From banning certain types of ads to de-ranking certain lies, these safeguards built up, piece by piece, culminating in the deplatforming of the Internet’s loudest voice.
While it is impossible for anyone to have complete foresight about this moment, one need only look back at the abuse hurled most often at already marginalized individuals and groups on these platforms to know that the warning signs were there all along. Trouble is that these platforms did not meaningfully change to address the causes of why they were being used in bad faith. I am under no illusions that horrible people would not exist on these or any other online platform. But I do think it is possible that, had those who make decisions at these companies taken more seriously the concerns of those on the receiving end of viral hate, they would have been better equipped to scale their moderation strategies.
My own opinion is that this collision of politics, society, and technology has been a long time coming. As far back as 2010, I have argued that the legislative challenges facing technology will be more acute than the technological changes themselves. My argument has been that these social platforms are essentially nation-states and require a higher level of social and civic etiquette established and enforced through official policies. When evaluating the performance of Twitter, Facebook, and others on this particular score, the phrase I have often used is “dereliction of duty.”
Malik doesn’t directly say this and I do not want to put words in his mouth, so this is my own extension of his piece: I think part of that duty must be in their careful moderation. That means creating limitations around problematic posts and users very quickly; it also means applying the lightest possible touch. For over a decade now, the largest social platforms have often been far too cautious about setting expectations of behaviour.
Concern about firefighting efforts doesn’t get us far enough when there are prolific arsonists.
Perhaps it is somewhat restrictive for Amazon to decide that it does not want to host a social network that is deliberately under-moderated — to the extent that it lacked basic controls against posting child sexual abuse imagery — on which an attempted insurrection was planned, and where many users in the days before and after that attack described their plans for assassinating lawmakers. I am not, however, convinced it says anything more general about the power of big tech companies. I imagine the moneyed backers of Parler can find another host for their Nazi-filled community of “free speech advocates”, or they can put some servers together themselves. They can surely pull themselves up by their gold-tipped bootstraps — unfortunately for society.
Update: Parler has now found a home with a hosting company that specializes in the same sort of websites. It is almost as though this was not censorship as much as it was one company not wishing to do business with another company for completely understandable reasons.
In an email sent this morning and obtained by BuzzFeed News, Apple wrote to Parler’s executives that there had been complaints that the service had been used to plan and coordinate the storming of the US Capitol by President Donald Trump’s supporters on Wednesday. The insurrection left five people dead, including a police officer.
“We have received numerous complaints regarding objectionable content in your Parler service, accusations that the Parler app was used to plan, coordinate, and facilitate the illegal activities in Washington D.C. on January 6, 2021 that led (among other things) to loss of life, numerous injuries, and the destruction of property,” Apple wrote to Parler. “The app also appears to continue to be used to plan and facilitate yet further illegal and dangerous activities.”
Apple gave Parler a day from when it sent its letter to submit a new version of the app alongside a moderation policy. Google did not wait; it pulled the app from the Play Store this afternoon.
From Apple’s letter, as quoted in the article:
Your CEO was quoted recently saying “But I don’t feel responsible for any of this and neither should the platform, considering we’re a neutral town square that just adheres to the law.” We want to be clear that Parler is in fact responsible for all the user generated content present on your service and for ensuring that this content meets App Store requirements for the safety and protection of our users. We won’t distribute apps that present dangerous and harmful content.
For what it is worth, it will still be possible to post to Parler from its website even if these apps are removed. It is not as though Parler will cease to exist on the iPhone after tomorrow, when, inevitably, the ostensibly unmoderated platform fails to produce a tighter moderation strategy.
This clearly relates to questions about whether it is fair that users’ native software choices on the iPhone are limited by Apple’s control over the platform and its only software distribution mechanism. It seems reasonable to me that Apple would choose not to provide a platform for apps that have little to no moderation in place. Both Apple and Google disallowed clients for Gab — Twitter but for explicit Nazis — in their respective stores. Apple rejected the app at submission time, while Google permitted it and then pulled it:
Google explained the removal in an e-mail to Ars. “In order to be on the Play Store, social networking apps need to demonstrate a sufficient level of moderation, including for content that encourages violence and advocates hate against groups of people,” the statement read. “This is a long-standing rule and clearly stated in our developer policies. Developers always have the opportunity to appeal a suspension and may have their apps reinstated if they’ve addressed the policy violations and are compliant with our Developer Program Policies.”
Gab now runs on Mastodon, which is a decentralized standard that allows different communities to moderate posts as they choose. There are many Mastodon clients in the App Store, likely because there is not really a singular Mastodon product as much as there are many posts collected through a standard format.
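As a rough illustration of that standard format: any Mastodon-compatible client can discover an account on any server through a WebFinger lookup, which is part of why no single store-approved product controls access to the network. Here is a minimal sketch in Python; the handle and instance name are hypothetical, and no network request is actually made.

```python
# Sketch: how a Mastodon-style client locates an account from a handle.
# "@alice@example.social" is a made-up handle on a made-up instance.
from urllib.parse import quote


def webfinger_url(handle: str) -> str:
    """Build the WebFinger lookup URL (RFC 7033) for a handle.

    A client resolves "@user@domain" by querying the domain's
    /.well-known/webfinger endpoint with an acct: resource.
    """
    user, _, domain = handle.lstrip("@").partition("@")
    resource = f"acct:{user}@{domain}"
    return f"https://{domain}/.well-known/webfinger?resource={quote(resource)}"


print(webfinger_url("@alice@example.social"))
# → https://example.social/.well-known/webfinger?resource=acct%3Aalice%40example.social
```

The response to that lookup points at the account’s ActivityPub profile, whose posts are plain ActivityStreams JSON, which is why “many Mastodon clients” can coexist without any one of them being the product.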
Apple® today announced that the Mac® App Store℠ is now open for business with more than 1,000 free and paid apps. The Mac App Store brings the revolutionary App Store experience to the Mac, so you can find great new apps, buy them using your iTunes® account, download and install them in just one step. The Mac App Store is available for Snow Leopard® users through Software Update as part of Mac OS® X v10.6.6.
You know the first interesting thing about this? Apple issued a press release when the iOS App Store turned ten; Apple also posted one the day the Mac App Store turned ten, but it wasn’t about the Mac App Store:
As the world navigated an ever-changing new normal of virtual learning, grocery deliveries, and drive-by birthday celebrations, customers relied on Apple services in new ways, turning to expertly curated apps, news, music, podcasts, TV shows, movies, and more to stay entertained, informed, connected, and fit.
There’s a bit in the release touting the “commerce the App Store facilitates”, and Apple used it to announce $1.8 billion spent on the App Store between Christmas Eve and New Year’s Eve, but that’s it. Also, I want to thank the person who decided that Apple’s press releases do not need to contain intellectual property marks.
Perhaps it is not surprising that the Mac App Store did not get its own anniversary announcement. It could be the case that Apple considers the launch of the iPhone App Store the original, and everything else is simply part of that family. Apple also doesn’t indulge in anniversaries very often — the App Store press release was an exception rather than the rule.
But it also speaks to the Mac App Store’s lack of comparable influence. Joe Rossignol, MacRumors:
Since its inception, the Mac App Store has attracted its fair share of criticism from developers. Apple has addressed some of these complaints over the years by allowing developers to offer free trials via in-app purchase, create app bundles, distribute apps on multiple Apple platforms as a universal purchase, view analytics for Mac apps, respond to customer reviews, and more, but some developers remain unsatisfied with the Mac App Store due to Apple’s review process, the lack of upgrade pricing, the lack of sandboxing exceptions for trusted developers, the absence of TestFlight beta testing for Mac apps, and other reasons.
Thinking back to the early days of the Mac App Store, I remember how its introduction killed a nascent third-party effort to build a similar store. And I recall how, just months after the store opened, Apple changed the rules to require that apps be sandboxed. […]
The Mac App Store has led a bizarre life in its first ten years — remember when system software updates, including operating system updates, came through the Mac App Store? A 2018 redesign made it look more modern, but it continues to feel like it was ported from another platform. Like the iOS App Store, it faces moderation problems, and its vast quantity of apps are mostly terrible.
There are some bright spots. I have found that good little utility apps — ABX testers, light audio processing, and the sort — are easy to find in the Mac App Store. Much easier, I think, than finding them on the web. It is also a place where you can find familiar software from big developers alongside plenty of indies, software remains up-to-date with almost no user interaction, and there are no serial numbers to lose.
Unfortunately, there remain fundamental disagreements between Apple’s policies and developers’ wishes that often manifest in comical ways. Recently, for my day job, I needed to use one of Microsoft’s Office apps that I did not have installed. I was able to download it from the Mac App Store but, upon signing in to my workplace Office 365 account, I was told that the type of license on my account was incompatible with that version of the app. I replaced it with a copy from Microsoft’s website with the same version number and was able to log in. I assume this is because there is a conflict between how enterprise licenses are sold and Apple’s in-app purchase rules. It was caused in part by Microsoft’s desire to sell its products under as many subtly-different similarly-named SKUs as possible, and resulted in an error message that was prohibited by App Store rules from being helpful. Regardless of the reasons, all I experienced as a user was confusion and frustration. Oftentimes, it is simply less nice to use the Mac App Store than getting software from the web.
Happy tenth birthday to the Mac App Store; it cannot be the best that Apple can do.
There’s not a ton of research on this, but the work that has been done so far is promising. A study published by researchers at Georgia Tech last year found that banning [Reddit’s] most toxic subreddits resulted in less hate speech elsewhere on the site, and especially from the people who were active on those subreddits.
Early results from Data and Society sent to an academic listserv in 2017 noted that it’s “unclear what the unintended effects of no platforming will be in the near and distant future. Right now, this can be construed as an incredibly positive step that platforms are making in responding to public complaints that their services are being used to spread hate speech and further radicalize individuals. However, there could be other unintended consequences. There has already been pushback on the right about the capacity and ethics of technology companies making these decisions. We’ve also seen an exodus towards sites like Gab.ai and away from the more mainstream social media networks.”
I linked to this two years ago when Facebook cracked down on extremist public figures using its platforms, but I figured I would re-up it today.
This is a significant test of deplatforming. It seems to work for media personalities and toxic average users, but will it work for someone who — let’s face it — is still the president of the United States? Will it have significant blowback? I have concerns that it will embolden die-hard followers to commit further acts of violence, but I also think that is a problem for law enforcement and American society as a whole.
I do not think national healing is hastened by broadcast media of any type continuing to permit reckless lies about election fraud from influential figures.
Twitter, perhaps knowing the stakes of suspending the personal account of the president, posted a comprehensive explanation of its reasoning. I have trimmed it to two salient paragraphs:
Due to the ongoing tensions in the United States, and an uptick in the global conversation in regards to the people who violently stormed the Capitol on January 6, 2021, these two Tweets must be read in the context of broader events in the country and the ways in which the President’s statements can be mobilized by different audiences, including to incite violence, as well as in the context of the pattern of behavior from this account in recent weeks. After assessing the language in these Tweets against our Glorification of Violence policy, we have determined that these Tweets are in violation of the Glorification of Violence Policy and the user @realDonaldTrump should be immediately permanently suspended from the service.
Plans for future armed protests have already begun proliferating on and off-Twitter, including a proposed secondary attack on the US Capitol and state capitol buildings on January 17, 2021.
I do not understand why Twitter calls this a “permanent suspension” instead of a ban, but that’s what it is.
Even the most powerful people must face consequences. There must be a generally agreed upon line that cannot be crossed. I guess the line for Twitter, Reddit, and Facebook is when their platforms are used to tacitly encourage people to overthrow a fair election in a stable democracy.
Big platforms experimented with taking the laissez-faire moderation style of 4chan mainstream and it backfired. It is long past time that they took a more active role in user moderation.
See Also: Ben Thompson’s piece from yesterday; Mike Masnick today. I often disagree with both on platform moderation issues — see preceding paragraph — but I think they have articulated well why they support a more hands-off approach to moderation more generally, and why they came to believe this ban is due.
The siege was no doubt terrifying to watch, and doubly so for the legislators and staff trapped in the building by raging QAnon followers and Trump dead-enders. Rioters wore shirts glorifying the Holocaust; some shouted what sounded like racial epithets and paraded Confederate flags. Guns were drawn. A woman was shot to death by police. It was a tense, perilous, violent assault on democracy.
But it was also quickly apparent that this was a very dumb coup. A coup with no plot, no end to achieve, no plan but to pose. Thousands invaded the highest centers of power, and the first thing they did was take selfies and videos. They were making content as spoils to take back to the digital empires where they dwell, where that content is currency.
Social media did not cause us to give undue influence to public figures with little concern for the weight of their words and actions, but it surely amplifies and exacerbates it.
Every four years, Americans go to the polls to pick a president and vice president; the following January, the House and Senate certify the results and confirm the winner. That January joint session is routine stuff — something so formal and arcane that it is hard to remember it even happening in past elections. On this occasion, a mob encouraged and defended by the president decided that they should violently interject themselves into proceedings because they did not like the result.
It was a shocking, terrifying, and entirely unsurprising escalation of the anti-democratic rhetoric frequently used by commentators and pundits in a specific media bubble. But it is also the product of a president who has used his status to elevate blatant lies, codswallop, and self-serving fictions. Most platforms have given him generous leeway to do so since he is a world leader by office if not by any other quality.
The insurrection isn’t just being televised. It’s being orchestrated, promoted, and broadcast on the platforms of companies with a collective value in the trillions of dollars.
And the platforms have let Trump persist. At 2:38 p.m. in DC, Trump issued a new message, in which he did not tell his supporters to stand down.
“Please support our Capitol Police and Law Enforcement. They are truly on the side of our Country. Stay peaceful!” he wrote on Twitter and Facebook, as members of his own party barricaded themselves in chambers and rooms and the vice president was forced to evacuate the building. Police were overwhelmed.
That tweet was posted well after rioters were in the Capitol, minutes after they were at the Senate doors, and just a few minutes before they got into the chamber. This attack was planned in the open and incited by the sitting president through, in part, the affordances of his social media presence. Platforms limited the reach of — and ultimately removed — videos and tweets he posted that could be read as encouraging the rioters. And then Facebook decided enough was enough.
President Trump will no longer be able to use his official Facebook and Instagram accounts after the social media giant indefinitely banned him following the violent protests at the U.S. Capitol, Facebook CEO Mark Zuckerberg announced Thursday. Mr. Trump will be banned at least through the end of his presidential term.
“We believe the risks of allowing the President to continue to use our service during this period are simply too great,” Zuckerberg wrote in a Facebook post. “Therefore, we are extending the block we have placed on his Facebook and Instagram accounts indefinitely and for at least the next two weeks until the peaceful transition of power is complete.”
None of this is to say that Facebook is wrong to ban Trump, or that Twitter would be wrong to follow suit. There’s a good case to be made they should have done it well before now. While I’ve made the case for newsworthiness exemptions in the past, particularly on Twitter, it’s perfectly reasonable for media platforms to make judgment calls about the balance between newsworthiness and, say, public health or safety — as long as they admit that is in fact what they’re doing. It’s what true media organizations do every day. The only thing worse than constantly changing the rules would be stubbornly sticking to them when it’s clear they’re inadequate or misguided.
But the dominant platforms have always been loath to own up to their subjectivity, because it highlights the extraordinary, unfettered power they wield over the global public square, and places the responsibility for that power on their own shoulders. That in turn would make it clear that the underlying problem here is not the rules themselves, but the fact that just a few, for-profit entities have such power over global speech and politics in the first place. So they hide behind an ever-changing rulebook, alternately pointing to it when it’s convenient and shoving it under the nearest rug when it isn’t.
These platforms are designed to get advertisements and posts from public figures in front of as many users as possible — similar to the way mass media has worked for a couple of decades now. So what do their leadership teams do when those qualities are abused by someone to threaten public safety and democracy itself? In the case of news media, there are editors who are theoretically able to make factual corrections and put misleading information in context. Unfortunately, the people in charge of those decisions often prefer shouting matches; it’s better television. But social media platforms do not have an equivalent; de-platforming, whether temporarily or permanently, is the closest thing they have short of a soup-to-nuts rearchitecting of how posts are presented.
Rethinking how prominent posts are presented and lies are treated is something platforms should have done a long time ago. Facebook and Twitter are clearly still making all of this up as they go along. It was painfully clear one or two or five years ago that they needed to have new ways of presenting items from world leaders, lawmakers, and their spokespersons that would minimize the use of these platforms for indoctrination and, now, insurrection. They have failed to do so. That is why they have a choice between heavy-handed responses like these and doing next to nothing. In this context, I think the heavy-handed approach is almost certainly the correct one. But none of this should have gone this far — and the failures of these platforms stand out as one reason of many for the escalation of violent rhetoric from authoritative figures and the platforms’ aggressive response.
Once upon a time, we made one of the earliest MP3 players for the Mac, Audion. We’ve come to appreciate that Audion captured a special moment in time, and we’ve been trying to preserve its history. Back in March, we revealed that we were working on converting Audion faces to a more modern format so they could be preserved.
Since then, we’ve succeeded in converting 867 faces, and are currently working on a further 15 faces, representing every Audion face we know of.
Today, we’d like to give you the chance to experience these faces yourself on any Mac running 10.12 or later. We’re releasing a stripped-down version of Audion for modern macOS to view these faces.
I must say that it is both odd and comforting to see a version of Audion with a MiniDisc player skin running natively on MacOS Big Sur alongside lookalike modern apps.
If you have not yet read the story of how Audion almost became iTunes, now is a great time to do so.
Microsoft is building a universal Outlook client for Windows and Mac that will also replace the default Mail & Calendar apps on Windows 10 when ready. This new client is codenamed Monarch and is based on the already available Outlook Web app available in a browser today.
Project Monarch is the end-goal for Microsoft’s “One Outlook” vision, which aims to build a single Outlook client that works across PC, Mac, and the Web. Right now, Microsoft has a number of different Outlook clients for desktop, including Outlook Web, Outlook (Win32) for Windows, Outlook for Mac, and Mail & Calendar on Windows 10.
Microsoft wants to replace the existing desktop clients with one app built with web technologies. The project will deliver Outlook as a single product, with the same user experience and codebase whether that be on Windows or Mac. It’ll also have a much smaller footprint and be accessible to all users whether they’re free Outlook consumers or commercial business customers.
Some reports have interpreted this as though Microsoft will discard the Mac app redesign it previewed in September. I am not sure that is the case. The new version of Outlook for Mac looks an awful lot like an Electron app already.
Like most web apps in a native wrapper, this sounds like a stopgap way of easing cross-platform development at the cost of usability, quality, speed, and platform integration. To be fair, I am not sure that anyone would pitch today’s desktop Outlook apps as shining examples of quality or speed, but I spend a lot of time from Monday through Friday in the Outlook web app and it is poor.
My favourite bug: when you are composing an inline reply, it sometimes interprets the delete key not as an instruction to remove the most recently typed character, but to delete the current message thread. And, of course, you cannot undo that with a keyboard shortcut. If you miss the app’s small notification balloon, which appears nowhere near where you are typing and contains an “undo” button that does not look like a button, you will have to manually find the thread in the trash and move it back to the inbox.
I gave away tons of personal data to get the things I needed. Food came from grocery and restaurant delivery services. Everything else — clothes, kitchen tools, a vanity ring light for Zoom calls, office furniture — came from online shopping platforms. I took an Uber instead of public transportation. Zoom became my primary means of communication with most of my coworkers, friends, and family. I attended virtual birthdays and funerals. Therapy was conducted over FaceTime. I downloaded my state’s digital contact tracing tool as soon as it was offered. I put a camera inside my apartment to keep an eye on things when I fled the city for several weeks.
Millions of Americans have had a similar pandemic experience. School went remote, work was done from home, happy hours went virtual. In just a few short months, people shifted their entire lives online, accelerating a trend that would have otherwise taken years and will endure after the pandemic ends — all while exposing more and more personal information to the barely regulated internet ecosystem. At the same time, attempts to enact federal legislation to protect digital privacy were derailed, first by the pandemic and then by increasing politicization over how the internet should be regulated.
Last year, much of the world became more dependent on one of its most poorly regulated industries. We were held together by many of the same companies that had spent the preceding several years proving themselves deeply flawed, especially when it comes to privacy.
[Singaporean] authorities claim that such technologies have greatly strengthened their contact-tracing efforts. In early November, the health minister said that 25,000 close contacts of confirmed Covid-19 cases had been identified through TraceTogether, of which 160 eventually tested positive. The country reported zero cases of community transmission most days in November.
Despite these successes, the imposition of more intrusive data collection technology has unnerved privacy advocates, who worry that the pandemic will be used to justify the surveillance of citizens without consideration of the long-term consequences, and without sufficient checks and balances.
This is wildly invasive and incredibly short-sighted. Device-based contact tracing and exposure notification already faced an uphill battle on privacy. It is now practically impossible in much of the world thanks to early but flawed contact tracing apps and broken promises about proximity data use. But not in Singapore, where the contact tracing app remains mandatory.
Update: “Location” in the last paragraph was changed to “proximity”. Thanks Stuart.
I’ve written a lot about private equity. By ‘private equity,’ I mean financial engineers, financiers who raise large amounts of money and borrow even more to buy firms and loot them. These kinds of private equity barons aren’t specialists who help finance useful products and services, they do cookie cutter deals targeting firms they believe have market power to raise prices, who can lay off workers or sell assets, and/or have some sort of legal loophole advantage. Often they will destroy the underlying business. The giants of the industry, from Blackstone to Apollo, are the children of 1980s junk bond king and fraudster Michael Milken. They are essentially super-sized mobsters who burn down businesses for the insurance money.
In private equity takeovers of software, the gist is the same, with the players a bit different. It’s not Apollo and Blackstone, it’s Vista Equity Partners, Thoma Bravo, and Silver Lake, but it’s the same cookie cutter style deal flow, the same financing arrangements, and the same business model risks. But in this case, the private equity owner of SolarWinds burned down far more than just the firm.
U.S. intelligence agencies may have confirmed today that these attacks were perpetrated by Russians. But this particularly good piece from Stoller makes a satisfying case for the structural reasons behind this breach.
CBS’ 60 Minutes aired a story, reported by Scott Pelley, arguing that cases of harassment and abuse from online sources are enabled by Section 230 of the Communications Decency Act:
A priority of the new president and Congress will be reining in the giants of social media. On this, Democrats and Republicans agree. Their target is a federal law known as Section 230. In a single sentence it set off the ‘big bang’ helping to create the universe of Google, Facebook, Twitter and the rest. Some critics of the law say that it leaves social media free to ignore lies, hoaxes and slander that can wreck the lives of innocent people. One of those critics is Lenny Pozner. After a tragedy in his own life, Pozner has become a champion for victims of online lies, people including Maatje and Matt Benassi, who, overnight, became the target of death threats like these.
Right about now you might be thinking, they should sue. But that’s the problem. They can’t file hundreds of lawsuits against internet trolls hiding behind aliases. And they can’t sue the internet platforms because of that law known as Section 230 of the Communications Decency Act of 1996. Written before Facebook or Google were invented, Section 230 says, in just 26 words, that internet platforms are not liable for what their users post.
These cases are truly terrible — but they are not enabled by Section 230 as much as by the generosities afforded by the First Amendment combined with the scale of these platforms. And, as Mike Masnick of Techdirt points out, major platforms have eventually been responsive to user complaints:
Over and over again, the report blames Section 230 for all of this. Incredibly, at the end of the report, they admit that the video from that nutjob conspiracy theorist was taken down from YouTube after people complained about it. In other words Section 230 did exactly what it was supposed to do in enabling YouTube to pull down videos like that. But, of course, unless you watch the entire 60 Minutes segment, you’ll miss that, and still think that 230 is somehow to blame.
Facebook, Twitter, and YouTube have thankfully stepped up their moderation efforts in the last couple of years. But because of their scale — partially due to network effects, and partially because of a reluctance to use antitrust precedent to slow their roll — this increased moderation has been mistakenly referred to as “censorship”. None of this has anything to do with Section 230, however.
60 Minutes filmed a very good interview with Jeff Kosseff, an expert on Section 230, of which only a part made it into the final report. I am disappointed that they axed Kosseff’s historical context:
To understand why [Section 230] is necessary, you really have to go back to what the law was before Section 230, and that is: what is the liability for distributors of content that others create? Before the internet, that was bookstores and newsstands. And the general rule was that, if you are a distributor of someone else’s content, you’re only liable if you know or have reason to know if it’s illegal.
That compares favourably with Section 230, which requires platforms to remove illegal materials when they are notified and encourages them to moderate proactively.1 Because of the explosive growth of these platforms, moderation is extremely difficult.
Kosseff also fields a question from Pelley about news publishers:
Scott Pelley: But help me understand, the same is not true for other forms of media. If somebody says something defamatory on 60 Minutes or on Fox or CNN or in The New York Times, those organizations can be sued. So why not Google, YouTube, Facebook?
Jeff Kosseff: So the difference between a social media site and let’s say the Letters to the Editor page of The New York Times is the vast amount of content that they deliver. So I mean you might have five or ten letters to the editor on a page. You could have I think it’s 6,000 tweets per second. […]
One other difference is that the press relies upon human beings making a decision about what should be published and what should not. An interview subject can make a dubious and potentially defamatory claim, but it is up to the system of reporters and editors and fact-checkers to determine whether that claim ought to be shown to the public. Online platforms are more infrastructural. Making them legally liable for what their users publish would be like making it fair game to sue newsstands and grocery stores for selling copies of the Times containing an illegally defamatory story.
Perhaps owing to their unique scale and manipulated reach, I hope that platforms will continue to take a more active role in curbing high-profile bad faith use. I do not think making Twitter liable for my dumb tweets, or websites liable for their users’ comments, is a sensible way of getting there.
I liked Timothy Buck’s explanation of why accessibility matters in everything, along with his simple list of tips for improving it in tech products. A key insight: when you make things more accessible to more people, you make them better for every user. Nobody wants things to be harder to use.
In the lawsuit (PDF), Trieu Pham, the App Store reviewer, alleges he was harassed at work on the basis of race and national origin — he is of Vietnamese ancestry — and that he was fired for his 2018 support of an app created by a Chinese dissident that claimed to show corruption within the Chinese government.
Michael Tsai has a good summary of the suit and some related links, including this excerpt from the suit:
After plaintiff Pham approved the Guo Media App, the Chinese government contacted defendant Apple and demanded that the Guo Media App be removed from defendant Apple’s App Store. Defendant Apple then performed an internal investigation and identified plaintiff Pham as the App Reviewer who approved the Guo Media App.
In or around late September 2018, shortly after defendant Apple provided plaintiff Pham with the DCP, plaintiff Pham was called to a meeting to discuss the Guo Media App with multiple defendant Apple supervisors and managers. At this meeting, defendant Apple supervisors stated that the Guo Media App is critical of the Chinese government and, therefore, should be removed from the App Store. Plaintiff Pham responded stating the Guo Media App publishes valid claims of corruption against the Chinese government and Chinese Communist Party and, therefore, should not be taken down. Plaintiff Pham further told his supervisors that the Guo Media App does not contain violent content or incite violence; does not violate any of defendant Apple’s policies and procedures regarding Apps; and, therefore, it should remain on the App Store as a matter of free speech.
I think this story is more complicated than much of the coverage suggests. It sounds like another clear-cut case of Apple’s deference to Chinese government interests, and that may be true. The judge in this case has rejected Pham’s claim that he was subjected to a harassing work environment, but is allowing him to make the case that he was fired in retaliation for approving this app.
According to the New York Times, many of Guo’s corruption claims appear valid or plausible; many appear to be fictional. Guo’s media company was responsible for the fictional story that the pandemic originated in a Wuhan bioweapons lab, and has a history of spreading disinformation. The G News app remains available in the Canadian App Store as of publishing. So, Guo Media is a shady company with potentially criminal founders, and G News publishes a lot of nonsense. But, according to Pham’s suit, three reviewers for the App Store in China approved it before Pham, and it was only then that Chinese government officials allegedly demanded its removal.
Apple’s dependency on its China-based manufacturing partners remains what I see as its biggest liability heading into 2021.1 Regardless of whether Pham’s claims turn out to be true, even the appearance of deference to a specific government’s censorship campaign is worrying. If government officials were so concerned about Guo Media, they could block it with the national firewall without involving Apple. But it appears that Apple is okay with being complicit. Apple has a China public relations problem because it has actual problems tied to its complex relationship with the country’s government.
This is true to some extent for every participant in a worldwide economy that depends on manufacturing and supply chains in China. Apple’s situation is more complex and perhaps a greater liability because it has physical products and apps and media distributed under its name. ↩︎
Apple removed 39,000 game apps on its China store Thursday, the biggest removal ever in a single day, as it set a year-end deadline for all game publishers to obtain a licence.
Including the 39,000 games, Apple removed more than 46,000 apps in total from its store on Thursday. Games affected by the sweep included Ubisoft title Assassin’s Creed Identity and NBA 2K20, according to research firm Qimai.
Qimai also said only 74 of the top 1,500 paid games on Apple store survived the purge.
Yuan Yang, Financial Times, reporting in July that App Store updates were frozen for games before the deadline was extended until the end of the year:
Until now, Apple has allowed Chinese games to be downloaded from the App Store while their developers wait for an official licence from Chinese regulators.
Analysts and lawyers in Beijing suggested that the Chinese government had decided to step up enforcement on Apple, the largest US company operating in China, after broader tensions between Washington and Beijing.
Apparently, getting a licence for paid game titles in China is a huge pain in the ass: it requires approval from government censors and an office within the country. But if Apple wants to continue providing apps through its own App Store, it has little choice but to comply with these requirements. Of course, requiring that iOS apps come from the App Store is also a choice, but one that increasingly comes with trade-offs for the company and third-party developers. Is it still a fair compromise?
Amphetamine is a simple, free app that sits in the menu bar and keeps a Mac awake, a spiritual successor to Caffeine, which has not been updated in years. It is well liked; Apple even featured it in a Mac App Store story.
So it surely came as a surprise to William C. Gustafson, the app’s developer, when Apple decided that it was in violation of policies that prohibit glorification of controlled substances:
Apple then proceeded to threaten to remove Amphetamine from the Mac App Store on January 12th, 2021 if changes to the app were not made. It is my belief that Amphetamine is not in violation of any of Apple’s Guidelines. It is also my belief that there are a lot of people out there who feel the same way as me, and want to see Amphetamine.app continue to flourish without a complete re-branding.
Apple further specified: “Your app appears to promote inappropriate use of controlled substances. Specifically, your app name and icon include references to controlled substances, pills.”
I can see how this app could be interpreted as violating those policies. It has a pill for an icon, and amphetamines are controlled substances in most countries. But:
It does not promote drug use any more than the MacOS feature named “Mission Control” gives users the impression they can now work at NASA.
Apple gave this app a dedicated editorial feature in the App Store, thereby increasing awareness of an app called “Amphetamine” — and it is only now that it says the app’s name is incompatible with its policies? That seems like a bait and switch.
I get that App Review might not catch a policy violation on a first pass, or even after several updates. But surely there comes a point when Apple must decide it looks less petty to grandfather in a violation as minor as this one. If the App Store team features an app, Apple ought to forfeit its right to complain about superficial rule-breaking later, if that is what this is; I am still not convinced that Amphetamine violates the spirit of those policies.
The slightly good news here is that, unlike an iOS app, the removal of this Mac app would not entirely destroy its existence. It could be distributed outside of the Mac App Store if the developer chooses. But it should be allowed to remain.
Update: Gustafson says that Apple confirmed Amphetamine will stay in the store without a name change. In a parallel universe where this story did not receive press coverage, would the outcome be the same?
The final “Calvin and Hobbes” strip was fittingly published on a Sunday — Dec. 31, 1995 — the day of the week on which Bill Watterson could create on a large color-burst canvas of dynamic art and narrative possibility, harking back to great early newspaper comics like “Krazy Kat.” The cartoonist bid farewell knowing his strip was at its aesthetic pinnacle.
“It seemed a gesture of respect and gratitude toward my characters to leave them at top form,” Watterson wrote in his introduction to “The Complete Calvin and Hobbes” box-set collection. “I like to think that, now that I’m not recording everything they do, Calvin and Hobbes are out there having an even better time.”
Calvin and Hobbes felt like old friends from the moment I met them, and that feeling has never faded. Theirs is the finest American comic strip there has ever been.
Yes, the Open and Save dialogs keep appearing at their smallest possible sizes in Big Sur 11.1. It’s not just you, and it’s not something you’ve done wrong – it’s a bug in Big Sur.
Sadly, resizing the dialog so it’s larger only works on the current one. Every time you’re presented with an Open or Save dialog, it’ll be back to its uselessly small size again because Big Sur doesn’t remember the past size like it’s supposed to.
If this feels like déjà vu, it might be because there was a similar bug in Yosemite, where Open and Save dialogs grew by twenty-two pixels every time they were opened. Coincidentally, or perhaps not, Yosemite was the most recent major redesign of MacOS before Big Sur.