Pixel Envy

Written by Nick Heer.

Archive for February, 2021

The Deepfake Danger Is Less About the Videos Themselves

Sam Gregory of Witness for Wired:

In our discussions, journalists and human rights defenders, including those from Myanmar, described fearing the weight of having to relentlessly prove what’s real and what is fake. They worried their work would become not just debunking rumors, but having to prove that something is authentic. Skeptical audiences and public factions second-guess the evidence to reinforce and protect their worldview, and to justify actions and partisan reasoning. In the US, for example, conspiracists and right-wing supporters dismissed former president Donald Trump’s awkward concession speech after the attack on the Capitol by claiming “it’s a deepfake.”

The TikTok videos of “Tom Cruise” are certainly impressive and terrifying, but they are also an edge case. If you have not yet seen them, I recommend checking them out. I think Gregory nails the real concern about deepfake videos: the paranoia matters more than the videos themselves. The mere existence of deepfakes as a concept is worrying because it is yet another piece of technobabble that, in the wrong hands, can be wielded for propaganda and conspiracy mongering.

Sure is bizarre to be living at a time when we are, as humankind, more scientifically literate than ever before while increasingly doubting the reality in front of our very eyes. Last year, Kirby Ferguson put together a terrific video about magical thinking. The subject matter is kind of heavy, but it is worth a watch for capturing the strangeness of this time.

Apple’s Platform Security Guide Places Greater Emphasis on Vertical Integration

Last week, Apple published an update to its Platform Security Guide. The PDF now weighs in at nearly two hundred pages and includes a lot of updates — particularly with the launch of the M1 Macs late last year. Unfortunately, because of its density, it is not exactly a breezy thing to write about.

Rich Mogull, TidBITS:

As wonderful as the Apple Platform Security guide is as a resource, writing about it is about as easy as writing a hot take on the latest updates to the dictionary. Sure, the guide has numerous updates and lots of new content, but the real story isn’t in the details, but in the larger directions of Apple’s security program, how it impacts Apple’s customers, and what it means to the technology industry at large.

From that broader perspective, the writing is on the wall. The future of cybersecurity is vertical integration. By vertical integration, I mean the combination of hardware, software, and cloud-based services to build a comprehensive ecosystem. Vertical integration for increased security isn’t merely a trend at Apple, it’s one we see in wide swaths of the industry, including such key players as Amazon Web Services. When security really matters, it’s hard to compete if you don’t have complete control of the stack: hardware, software, and services.

Vertical integration in the name of privacy and security is the purest expression of the Cook doctrine that I can think of. We got a little preview of the acceleration of this strategy not too long ago — and a glimpse of its limitations in November — but it has been a tentpole of Apple’s security strategy for ages. Recall the way Touch ID was pitched when the iPhone 5S was introduced, for example. Phil Schiller repeatedly pointed to its deep software integration while Dan Riccio, in a promotional video, explained how the fingerprints were stored in the Secure Enclave.

All of this makes me wonder whatever happened to Project McQueen, Apple’s effort to eliminate its reliance on third-party data centres for iCloud. Surely this project did not die when some of the engineers responsible for it left the company, but Apple still depends on others for hosting. From page 109 of the guide:

Each file is broken into chunks and encrypted by iCloud using AES128 and a key derived from each chunk’s contents, with the keys using SHA256. The keys and the file’s metadata are stored by Apple in the user’s iCloud account. The encrypted chunks of the file are stored, without any user-identifying information or the keys, using both Apple and third-party storage services — such as Amazon Web Services or Google Cloud Platform — but these partners don’t have the keys to decrypt the user’s data stored on their servers.

Even though Amazon and Google absolutely cannot — and, even if these files were not strongly encrypted, would not — access users’ data, it is strange that Apple still relies on third-party data centres given its otherwise tight vertical integration.
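As a loose illustration of the scheme Apple describes — files split into chunks, each chunk encrypted with a key derived from its own contents, and the keys kept with Apple rather than the storage host — here is a sketch. Everything in it is an assumption for illustration: the chunk size, the key derivation details, and the cipher itself. Apple does not publish its exact construction, and Python’s standard library has no AES, so a hash-based keystream stands in purely to show the data flow.

```python
import hashlib

def chunks(data: bytes, size: int = 64) -> list[bytes]:
    """Split a file into fixed-size chunks (the size here is illustrative)."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def chunk_key(chunk: bytes) -> bytes:
    """Derive a per-chunk key from the chunk's own contents via SHA-256,
    truncated to 128 bits to match the AES-128 key length in the guide.
    (Apple's exact derivation is not published; this is just the general
    shape of convergent encryption.)"""
    return hashlib.sha256(chunk).digest()[:16]

def encrypt_chunk(chunk: bytes, key: bytes) -> bytes:
    """Stand-in cipher: XOR against a SHA-256 counter keystream. The real
    system uses AES-128; this only illustrates that the blob is opaque
    without the key, and that the operation is symmetric."""
    stream = b""
    counter = 0
    while len(stream) < len(chunk):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(chunk, stream))

data = b"example file contents " * 10
blobs = []     # what the third-party host (AWS, GCP) would store
manifest = []  # per-chunk keys, held only in the user's iCloud account
for c in chunks(data):
    k = chunk_key(c)
    manifest.append(k)
    blobs.append(encrypt_chunk(c, k))

# Identical chunks yield identical keys and blobs (deduplication), but
# without the manifest the storage host cannot recover the plaintext.
restored = b"".join(encrypt_chunk(b, k) for b, k in zip(blobs, manifest))
assert restored == data
```

The design point this illustrates is why the partners “don’t have the keys”: the blobs and the key material travel on entirely separate paths.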

I’ve been picking through this guide for the past week and trying to understand it as best I can. It is, as an Apple spokesperson explained to Jai Vijayan at Dark Reading, intended to be more reflective of security researchers’ wishes and needs, which you can read as being more comprehensive with a greater level of technical detail. One item I noticed on pages 14–15 is the new counter lockbox feature in recent devices:

Devices first released in Fall 2020 or later are equipped with a 2nd-generation Secure Storage Component. The 2nd-generation Secure Storage Component adds counter lockboxes. Each counter lockbox stores a 128-bit salt, a 128-bit passcode verifier, an 8-bit counter, and an 8-bit maximum attempt value. Access to the counter lockboxes is through an encrypted and authenticated protocol.

Counter lockboxes hold the entropy needed to unlock passcode-protected user data. To access the user data, the paired Secure Enclave must derive the correct passcode entropy value from the user’s passcode and the Secure Enclave’s UID. The user’s passcode can’t be learned using unlock attempts sent from a source other than the paired Secure Enclave. If the passcode attempt limit is exceeded (for example, 10 attempts on iPhone), the passcode-protected data is erased completely by the Secure Storage Component.
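The attempt-limiting behaviour described in that passage can be sketched roughly as follows. The class layout, the verifier construction, and every name here are my guesses — Apple publishes only the high-level behaviour, not the protocol — but it captures the essentials: a salt, a passcode verifier, an attempt counter with a maximum, and erasure of the protected entropy once the limit is hit.

```python
import hashlib
import os

class CounterLockbox:
    """Illustrative model of a counter lockbox, not Apple's protocol."""

    def __init__(self, passcode_entropy: bytes, max_attempts: int = 10):
        self.salt = os.urandom(16)        # 128-bit salt
        # 128-bit passcode verifier; the real derivation is unpublished.
        self.verifier = hashlib.sha256(self.salt + passcode_entropy).digest()[:16]
        self.counter = 0                  # 8-bit counter in the guide
        self.max_attempts = max_attempts  # 8-bit maximum attempt value
        self._entropy = passcode_entropy  # entropy that unlocks user data

    def try_unlock(self, passcode_entropy: bytes):
        """One unlock attempt, as if from the paired Secure Enclave."""
        if self._entropy is None:
            raise RuntimeError("lockbox erased; protected data is gone")
        self.counter += 1
        candidate = hashlib.sha256(self.salt + passcode_entropy).digest()[:16]
        if candidate == self.verifier:
            self.counter = 0              # success resets the counter
            return self._entropy
        if self.counter >= self.max_attempts:
            self._entropy = None          # limit exceeded: erase entropy
        return None

box = CounterLockbox(b"derived-from-passcode-and-UID")
assert box.try_unlock(b"wrong guess") is None
assert box.try_unlock(b"derived-from-passcode-and-UID") is not None
```

Because the counter and the erasure live inside the storage component itself, a tool that bypasses iOS to spray guesses gains nothing: the component enforces the limit regardless of who is asking.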

I read this as a countermeasure against devices, such as the GrayKey, that try to crack iPhones by guessing their passcodes using some vulnerability that gives them infinite attempts. I cannot find any record of a GrayKey successfully being used against an iPhone 12 model, but I did find an article highlighting a recent funding round for the company.

Thomas Brewster, Forbes:

But not all are convinced by Grayshift’s long-term capabilities, given Apple’s consistent improvement of the iPhone’s security. The GrayKey is believed to be capable of hacking iPhones up to the iPhone 11, though it’s unclear how effective the tool is against the iPhone 12. “It’s most likely they can’t do much, if anything at all, with the iPhone 12 and iOS 14,” said Vladimir Katalov, CEO of another forensics company, Elcomsoft. “Perhaps they just want to cash out.”

Katalov previously speculated that iOS 12 defeated the GrayKey but, clearly, some new method was developed to keep it working — at least, until recently. A new method may be discovered again. But it seems that Apple is particularly keen to address concerns about passcode vulnerabilities exploited by third parties. It is too bad that iCloud backups remain a critically weak point.

Facebook Is Considering Facial Recognition for Its Upcoming Smart Glasses

Ryan Mac, Buzzfeed News:

During a scheduled companywide meeting, Andrew Bosworth, Facebook’s vice president of augmented and virtual reality, told employees that the company is currently assessing whether or not it has the legal capacity to offer facial recognition on devices that are reportedly set to launch later this year. Nothing had been decided, he said, and he noted that current state laws may make it impossible for Facebook to offer people the ability to search for others based on pictures of their face.

“Face recognition … might be the thorniest issue, where the benefits are so clear, and the risks are so clear, and we don’t know where to balance those things,” Bosworth said in response to an employee question about whether people would be able to “mark their faces as unsearchable” when smart glasses become a prevalent technology. The unnamed worker specifically highlighted fears about the potential for “real-world harm,” including “stalkers.”

Andrew Bosworth confirmed the report on Twitter with an unsurprisingly defensive tone:

We’ve been open about our efforts to build AR glasses and are still in the early stages. Face recognition is a hugely controversial topic and for good reason and I was speaking about was how we are going to have to have a very public discussion about the pros and cons.

In our meeting today I specifically said the future product would be fine without it but there were some nice use cases if it could be done in a way the public and regulators were comfortable with.

Anyone who thinks for even a second about the negative consequences of a feature like this knows that there is absolutely no circumstance in which this is an open-ended “discussion”, as Bosworth seems to think. Face blindness is certainly real, and we must be compassionate to those who live with it, but adding a technology layer from one of the world’s most privacy-hostile companies is not a solution. It is a catastrophe waiting to happen.

Tech Companies Are Finally Going Beyond the Bare Minimum for Moderation

Christopher Mims, Wall Street Journal:

Services are also attempting to reduce the content-moderation load by reducing the incentives or opportunity for bad behavior. Pinterest, for example, has from its earliest days minimized the size and significance of comments, says Ms. Chou, the former Pinterest engineer, in part by putting them in a smaller typeface and making them harder to find. This made comments less appealing to trolls and spammers, she adds.

The dating app Bumble only allows women to reach out to men. Flipping the script of a typical dating app has arguably made Bumble more welcoming for women, says Mr. Davis, of Spectrum Labs. Bumble has other features designed to pre-emptively reduce or eliminate harassment, says Chief Product Officer Miles Norris, including a “super block” feature that builds a comprehensive digital dossier on banned users. This means that if, for example, banned users attempt to create a new account with a fresh email address, they can be detected and blocked based on other identifying features.

No matter how effective platforms become at removing unwanted and inappropriate media, I will always prefer services and products designed to reduce the need for heavy moderation in the first place. It is unsurprising to me that the platforms taking this approach, highlighted here by Mims, are used by women more than, say, Twitter, Reddit, or YouTube. I have long harboured a pet theory that this is a positive feedback loop which begins, in part, with considering the negative ramifications of specific features before shipping them. These platforms are certainly not perfect, but their more thoughtful feature design makes them less prone to misuse, which makes them more appealing to women and others who are more likely to face abuse online. By contrast, platforms that deploy features without that kind of foresight quickly become overwhelmed with misuse, driving away some of those who tend to be on the receiving end.

Anyway, that is just a little speculation.

TikTok Says That It Removed 89 Million Videos in the Second Half of 2020, Most Before They Were Ever Seen

Michael Beckerman and Eric Han of TikTok:

89,132,938 videos were removed globally in the second half of 2020 for violating our Community Guidelines or Terms of Service, which is less than 1% of all videos uploaded on TikTok. Of these videos, 11,775,777 were removed in the US.

92.4% of these videos were removed before a user reported them, 83.3% were removed before they received any views, and 93.5% were removed within 24 hours of being posted.

[…]

51,505 videos were removed for promoting COVID-19 misinformation. Of these videos, 86% were removed before they were reported to us, 87% were removed within 24 hours of being uploaded to TikTok, and 71% had zero views.

TikTok credits its automated systems for detecting violating videos before they are viewed. For comparison, around 94% of the videos YouTube removed were first flagged automatically, but only around 40% of those were removed before receiving any views.

TikTok’s moderation efforts do come with a bit of an asterisk, however, because the platform is owned by ByteDance, which also runs Douyin, the version of TikTok only available in China.

A former ByteDance moderator referred to by the pseudonym Li An recently told their story to Shen Lu of Protocol:

The truth is, political speech comprised a tiny fraction of deleted content. Chinese netizens are fluent in self-censorship and know what not to say. ByteDance’s platforms — Douyin, Toutiao, Xigua and Huoshan — are mostly entertainment apps. We mostly censored content the Chinese government considers morally hazardous — pornography, lewd conversations, nudity, graphic images and curse words — as well as unauthorized livestreaming sales and content that violated copyright.

[…]

It was certainly not a job I’d tell my friends and family about with pride. When they asked what I did at ByteDance, I usually told them I deleted posts (删帖). Some of my friends would say, “Now I know who gutted my account.” The tools I helped create can also help fight dangers like fake news. But in China, one primary function of these technologies is to censor speech and erase collective memories of major events, however infrequently this function gets used.

For clarity, TikTok and Douyin are entirely separate platforms. But one of the reasons TikTok’s moderation efforts are so effective — especially for a platform that has grown dramatically in such a short period of time — is, basically, that they have to be.

Fry’s and Intel Macs

The news tonight that Fry’s Electronics will be closing forever reminded me of this story posted to Quora in 2012.

Arnold Kim of MacRumors:

According to Kim Scheinberg, she and her husband John Kullmann had decided to move back to the east coast in 2000. In order to make the move, Kullmann had to work on a more independent project at Apple. Ultimately, he started work on an Intel version of Mac OS X. Eighteen months later, in December 2001, his boss asks him to show him what he’s been working on.

Kim Scheinberg:

In JK’s office, Joe watches in amazement as JK boots up an Intel PC and up on the screen comes the familiar ‘Welcome to Macintosh’.

Joe pauses, silent for a moment, then says, “I’ll be right back.”

He comes back a few minutes later with Bertrand Serlet.

Max (our 1-year-old) and I were in the office when this happened because I was picking JK up from work. Bertrand walks in, watches the PC boot up, and says to JK, “How long would it take you to get this running on a (Sony) Vaio?” JK replies, “Not long” and Bertrand says, “Two weeks? Three?”

JK said more like two *hours*. Three hours, tops.

Bertrand tells JK to go to Fry’s (the famous West Coast computer chain) and buy the top of the line, most expensive Vaio they have. So off JK, Max and I go to Frys. We return to Apple less than an hour later. By 7:30 that evening, the Vaio is running the Mac OS. [My husband disputes my memory of this and says that Matt Watson bought the Vaio. Maybe Matt will chime in.]

I’m Canadian; this story is my only association with Fry’s. From what I’ve seen of the memories that have been posted across the web tonight, it held a special place in the hearts of many.

Zero-Rating Policies for Facebook Left Many in the South Pacific Without the News

It appears that Facebook and the Australian government are resolving their differences. Facebook says that it will be restoring links to news on its platform; the government will make some adjustments to the law.

But while a country and a social media company were scuffling, the latter’s power became obvious to those in the South Pacific.

Sheldon Chanel, the Guardian:

Dr Amanda Watson, a research fellow at the Australian National University’s Coral Bell School of Asia Pacific Affairs, and an expert in digital technology use in the Pacific, said there was widespread confusion across the Pacific about the practical ramifications of Facebook’s Australian news ban.

[…]

“Facebook is the primary platform, because a number of telco providers offer cheaper Facebook data, or bonus Facebook data. Many Pacific Islanders might know how to do some basic Facebooking, but it’s questionable if they would be able to open an internet search engine and search for news, or go to a particular web address. There are technical confidence issues, and that’s linked to education levels in the Pacific, and how long people have had access to the internet.”

Watson is describing the practice of zero-rating and one reason why it is so pernicious. Zero-rating sounds great on its face. It means that popular services can strike deals with telecom providers so, at its best, some of the things most people do on the web are not counted against data quotas.

In the case of Facebook Free Basics — formerly Internet.org, which is among the most specious branding exercises I can imagine — there are a handful of websites and services that are included in mobile plans. Many of the websites selected by Facebook to receive this special treatment are American, including Facebook itself, of course. The result of this is that, according to a 2015 survey, only 5% of Americans agreed with the statement that “Facebook is the internet” compared to 65% of respondents in Nigeria, 61% in Indonesia, and 58% in India — countries where Facebook Free Basics is available.

In the quote above, Watson attributes people’s reluctance to venture beyond Facebook to a lack of technical confidence, but there is another major hurdle: cost. Data plans can be expensive, and many news websites are garbage. Sticking to the websites included in Facebook Free Basics is not just easier, it is an economic reality that Facebook is taking advantage of.

In much of the world, internet policy effectively is Facebook policy, and vice-versa. One reason for that is the ferocious speed at which Facebook grew and acquired potential competitors. Though an American company, WhatsApp was wildly popular mostly outside of the U.S. before Facebook bought it. That’s why treating antitrust as a solely American concern — or something of trivial relevance, or something that can be eradicated with the eventual passage of time — is such a frustrating response from those of us who live elsewhere.

So, yes, Australian policy requiring Facebook to pay Rupert Murdoch’s empire so that users of the former can link to the latter does seem pretty ridiculous. But it is extraordinary to see a huge chunk of the world’s ad spending redirected to two American companies headquartered within a ten-minute drive of each other. Many independent and local media entities around the world are bleeding so that Murdoch can buy another yacht with the money Facebook and Google should be using to pay their taxes.

Update: A reluctance to effectively govern in the United States is not the only way to gain technical dominance.

Infinite Feedback Loop

Howard Oakley:

Apple relies on bugs reported through its Feedback systems. As this Spotlight bug isn’t easy to recognise, users and third-party developers are only now realising the effects of Dave’s simple coding error. Without thorough testing, Apple is almost completely reliant on Feedback to detect and diagnose bugs.

This system is both flawed and woefully inefficient, as any expert in quality management will tell you. It’s like letting cars roll off the production line with no windows, and waiting for customers to bring them back to have them installed. By far the best choice is to build correctly the first time, or, as second best, to detect and rectify defects before shipping. So long as shipping updates remains relatively cheap, and your customers are happy to report all the defects which you didn’t fix, it appears to work, at least in the short term.

I’ve now reached the stage where I simply don’t have time to report all these bugs, nor should I have to. Indeed, I’ve realised that in doing so, I only help perpetuate Apple’s flawed engineering practices.

I was thinking about this piece earlier today as I filed a handful of pretty standard bug reports based on some visual problems I noticed in Big Sur.

For each one, Feedback Assistant automatically collected whole-system diagnostics, which monopolizes system resources for a few minutes as it spits out a folder of logs totalling well over a gigabyte, plus the same folder as a compressed archive. The archive file is submitted to Apple and the uncompressed folder is locally cached for a little while — the oldest one on my drive is from January 4. It does not matter what the feedback is related to; this is a minimum requirement of all bug reports. If you are filing a report about many system features — Bluetooth or Time Machine, for example — it will also require you to collect separate diagnostics.1

Often, I suspect, users will not attach all of the diagnostics needed for Apple’s developers to even find the bug. But I have to wonder how effective it is to be collecting so many system reports all of the time, and whether it is making a meaningful difference to the quality of software — particularly before it is shipped. I have hundreds of open bug reports, many of which are years old and associated with “more than ten” similar reports. How can any engineering team begin to triage all of this information to fix problems that have shipped?

To its credit, the quality of Apple’s software seems to have stabilized in the last year or so. But after the last several years, it feels more like the hole has stopped getting deeper and less like we are climbing out of it.


  1. FB8993839 for the Time Machine bug. I have a recent top-of-the-line iMac connected by USB-C to a fast SSD and it’s still slow as hell. I do not understand this. ↩︎

Perseverance Touches Down on Mars

In July last year, scientists shot another car to Mars from Earth. It touched down a few days ago and, for the first time, provided video of the descent and audio from the planet. There is also a helicopter aboard, which will apparently be flown near the rover within the next month. A little over a century ago, humans first took to the air on Earth; by the end of March, if all goes to plan, humans will remotely pilot an aircraft in another planet’s atmosphere. Incredible.

Following Up on Copycat ‘Social Media Censorship’ Laws

I linked to an article earlier this month from Mike Masnick of Techdirt, explaining that several similar bills were being pushed in U.S. state legislatures to combat so-called “social media censorship”. These bills share virtually all of their language and have obvious First Amendment problems. In that linked piece, I showed that these bills were likely the work of a bizarre campaign associated with Chris Sevier.

Well, now I have some confirmation and a few more details. Emails released to me by the Florida state Senate show that Sevier and John Gunter Jr. have been lobbying hard for this bill for at least a year in that state alone. State Sen. Joe Gruters’ office asked for talking points to push the 2020 version of the bill. It was dropped last March, but it has been reconstituted for the 2021 session as House Bill 33, introduced by this fucking guy. Sevier and Gunter are responsible for pitching the same bill in Arkansas in 2018.

As of February 3, Sevier says that “28 states are moving forward on this”. I could not come up with a number anywhere near that. But there is one thing Sevier is right about: the bill template is now, technically, bipartisan, as Democratic lawmaker Mike Gabbard introduced it in the Hawaii state Senate.

Apple Cracks Down on Apps With ‘Irrationally High Prices’

Guilherme Rambo, 9to5Mac:

It looks like Apple has started to crack down on scam attempts by rejecting apps that look like they have subscriptions or other in-app purchases with prices that don’t seem reasonable to the App Review team.

9to5Mac obtained access to a rejection email shared by a developer that provides a subscription service through their app. It shows a rejection message from Apple telling them that their app would not be approved because the prices of their in-app purchase products “do not reflect the value of the features and content offered to the user.” Apple’s email goes as far as calling it a “rip-off to customers” (you can read the full letter at the end of this post).

This is not Apple’s sole response to fighting App Store scams. iOS 14.5 has a subtly redesigned subscription sheet that more clearly displays the cost of the subscription and its payment term.

I have waffled a bit on whether it makes sense for Apple to be the arbiter of appropriate app pricing. Pricing has always been a little bit at the mercy of Apple’s discretion — remember the I Am Rich app? — but legitimate developers worry that their apps will be second-guessed by some reviewer as too expensive. And I am quite sure that, if that hypothetical becomes a reality, it is likely to be resolved with a few emails. But developers’ livelihoods are often on the line; there are no alternative native app marketplaces on iOS.

The proof of this strategy’s success will be in Apple’s execution, but that in itself is a little worrisome. It is a largely subjective measure; who is an app reviewer to say whether an app is worth five dollars a week or five dollars a month? Apple does not have a history of wild incompetence with its handling of the App Store, but there are enough stories of mistakes and heavy-handedness that this is being viewed as a potential concern even by longstanding developers of high-quality apps.

I hope this helps. There are enough fleeceware scams in the store to damage its reputation, so a crackdown is clearly necessary. The question is whether the App Store team is capable of executing it.

Unproven Biometrics Are Increasingly Being Used at Border Crossings and by Law Enforcement

Hilary Beaumont, the Walrus:

In recent years, and whether we realize it or not, biometric technologies such as face and iris recognition have crept into every facet of our lives. These technologies link people who would otherwise have public anonymity to detailed profiles of information about them, kept by everything from security companies to financial institutions. They are used to screen CCTV camera footage, for keyless entry in apartment buildings, and even in contactless banking. And now, increasingly, algorithms designed to recognize us are being used in border control. Canada has been researching and piloting facial recognition at our borders for a few years, but — at least based on publicly available information — we haven’t yet implemented it on as large a scale as the US has. Examining how these technologies are being used and how quickly they are proliferating at the southern US border is perhaps our best way of getting a glimpse of what may be in our own future—especially given that any American adoption of technology shapes not only Canada–US travel but, as the world learned after 9/11, international travel protocols.

[…]

Canada has tested a “deception-detection system,” similar to iBorderCtrl, called the Automated Virtual Agent for Truth Assessment in Real Time, or AVATAR. Canada Border Services Agency employees tested AVATAR in March 2016. Eighty-two volunteers from government agencies and academic partners took part in the experiment, with half of them playing “imposters” and “smugglers,” which the study labelled “liars,” and the other half playing innocent travellers, referred to as “non-liars.” The system’s sensors recorded more than a million biometric and nonbiometric measurements for each person and spat out an assessment of guilt or innocence. The test showed that AVATAR was “better than a random guess” and better than humans at detecting “liars.” However, the study concluded, “results of this experiment may not represent real world results.” The report recommended “further testing in a variety of border control applications.” (A CBSA spokesperson told me the agency has not tested AVATAR beyond the 2018 report and is not currently considering using it on actual travellers.)

These technologies are deeply concerning from a privacy perspective. The risks of their misuse are so great that their implementation should be prohibited — at least until a legal framework is in place, but preferably forever. There is no reason we should test them on a “trial” basis; there is no new problem that biometric systems solve by being deployed sooner.

But I am curious about our relationship with their biases and accuracy. The fundamental concerns about depending on machine learning boil down to whether suspicions about its reliability are grounded in reality, and whether we are less prone to examining its results in depth. I have always been skeptical of machines replacing humans in jobs that require high levels of judgement. But I began questioning that very general assumption last summer after reading a convincing argument from Aaron Gordon at Vice that speed cameras are actually fine:

Speed and red light cameras are a proven, functional technology that make roads safer by slowing drivers down. They’re widely used in other countries and can also enforce parking restrictions like not blocking bus or bike lanes. They’re incredibly effective enforcers of the law. They never need coffee breaks, don’t let their friends or coworkers off easy, and certainly don’t discriminate based on the color of the driver’s skin. Because these automated systems are looking at vehicles, not people’s faces, they avoid the implicit bias quandaries that, say, facial recognition systems have, although, as Dave Cooke from the Union of Concerned Scientists tweeted, “the equitability of traffic cameras is dependent upon who is determining where to place them.”

Loath as I am to admit it, Gordon and the researchers in his article have got a point. There are few instances where something is as unambiguous as a vehicle speeding or running a red light. If the equipment is accurately calibrated and there is ample amber light time, the biggest frustration for drivers is that they can no longer speed with abandon or race through changing lights — which are things they should not have been doing in any circumstance. I am not arguing that we should put speed cameras every hundred metres on every road, nor that punitive measures are the only or even best behavioural correction, merely that these cameras can actually reduce bias. Please do not send hate mail.

Facial recognition, iris recognition, gait recognition — these biometrics methods are clearly more complex than identifying whether a car was speeding. But I have to wonder if there is an assumption by some that there is a linear and logical progression from one to the other, and there simply is not. Biometrics are more like forensics, and courtrooms still accept junk science. It appears that all that is being done with machine learning is to disguise the assumptions involved in matching one part of a person’s body or behaviour to their entire self.

It comes back to Maciej Cegłowski’s aphorism that “machine learning is money laundering for bias”:

When we talk about the moral economy of tech, we must confront the fact that we have created a powerful tool of social control. Those who run the surveillance apparatus understand its capabilities in a way the average citizen does not. My greatest fear is seeing the full might of the surveillance apparatus unleashed against a despised minority, in a democratic country.

What we’ve done as technologists is leave a loaded gun lying around, in the hopes that no one will ever pick it up and use it.

Well, we’re using it now, and we have done little to ensure there are no bystanders in the path of the bullet.

Some Warning Signs for Canada From the Australian Government’s Battle With Facebook

With a Canadian law being drafted that is similar to the one moving forward in Australia, I have been watching this story intently. My hope has been that Canadian lawmakers will see the responses to these policies and adjust theirs accordingly. I am particularly concerned about its effects on local media, like the excellent Sprawl here in Calgary.

Michael Geist:

Third, this incident provides an important reminder that independent and smaller media will bear the biggest brunt of these policies. The reality is that the Australian battle really pits Facebook against Rupert Murdoch’s media empire. In other words, giant vs. giant. In Canada, the large media companies such as Postmedia and Torstar are the most vocal lobbyists on this issue, but smaller, independent media have already indicated that they do not support the News Media Canada lobbying campaign and want the benefit of links from social media services.

These policy proposals seem to fundamentally misunderstand the use of links on the web. This is entirely speculative, but I have long wondered if the appearance of Open Graph tags has anything to do with confusion about what is part of Facebook and what is third-party material. Link previews have repeatedly been associated with bad framing and untrustworthy practices. I wonder if these thumbnails may also blur the lines too much with what most people consider an external link.

Neither Good, Nor Bad, Nor Neutral

Zeynep Tufekci, the Atlantic:

On February 2, GameStop closed at $90, less than 20 percent of its all-time high, which it had reached just a few days earlier. Like many internet stories, the narrative may start with the “little guy” winning — David against Goliath — but they rarely end that way. The little guy loses, not because he is irrational and too emotional, but because of his relative power in society.

Similarly, Facebook was first celebrated for empowering dissidents during the Arab Spring, but just a few years later it was a key tool in helping Donald Trump win the presidency — and then, later, in clipping his wings, when it joined with other major social-media companies to deplatform him following the insurrection at the Capitol. The reality is that Facebook and Twitter and YouTube are not for or against the little guy: They make money with a business model that requires optimizing for engagement through surveillance. That explains a lot more than the “for or against” narrative. As historian Melvin Kranzberg’s famous aphorism goes: “Technology is neither good nor bad; nor is it neutral.”

As Tufekci writes, the social contract broken by those with power makes us want to see the relatively minor gains by the rest of us for more than they really are — at the same time as wealth and power continue to concentrate.

Lawsuit: Facebook Overstated Advertising Reach Estimates for Years

Hannah Murphy, Financial Times:

According to sections of a filing in the lawsuit that were unredacted on Wednesday, a Facebook product manager in charge of potential reach proposed changing the definition of the metric in mid-2018 to render it more accurate. 

However, internal emails show that his suggestion was rebuffed by Facebook executives overseeing metrics on the grounds that the “revenue impact” for the company would be “significant”, the filing said.

The product manager responded by saying “it’s revenue we should have never made given the fact it’s based on wrong data”, the complaint said.

Several years ago, Facebook admitted that it grossly overstated video views at a time when many publishers were “pivoting to video” specifically for the platform. In 2017, a Wall Street firm found that the reach of Facebook ads in the U.S. for some demographics was estimated to be millions greater than the total number of living people in those same segments. In November, Facebook said it was exaggerating the conversion rate of some ad campaigns.

Awful strange how these problems always seem to benefit Facebook.

Australia’s Bad Bargain With Platforms

Casey Newton, writing earlier this week about Australia’s forthcoming law requiring platforms to pay publishers when the former links to the latter:

I appreciate that more countries are now taking an interest in how to shore up their ailing media companies. But it seems to me that any legislation ought to begin with the aim of creating sustainable media jobs, rather than simply parceling out payments to the country’s biggest publishers. For starters, Australia could invest directly in nonprofit public media, which has consistently been shown to have significant civic benefits.

Or it could head down its current path, which is aimed at reducing the power of the tech giants, but — like so much regulation now under consideration around the world — will likely only entrench them further. For journalism to become more sustainable in the long run, it can’t rely on handouts from the biggest tech companies of the moment to the biggest publishers of the moment.

And that’s why I half hope that Google and Facebook call Australia’s bluff, and pull their news links from the platforms. I’ve never been in love with the idea of Google or Facebook being a primary news destination for most people, anyway. And while a retreat from that world would have some real costs in the short term, any withdrawal would also likely be temporary.

Facebook did; Google did not, instead agreeing to give News Corp “significant payments”. The results of this policy do not appear to encourage quality journalism. Instead, Google has helped further entrench Rupert Murdoch’s longtime dominance of Australian media, while Facebook users will only be able to link to websites not informational enough to be considered news.

Streaming Services Are Slowly Reaching Their Ultimate Cable Bundle Form

The EFF’s Katharine Trendacosta, writing for Slate this time last year:

But here’s the thing: Peacock doesn’t have to be good to compete. It simply has to exist on Comcast. Peacock Premium (with ads) will be free if you are a Comcast or Cox subscriber, and all of those subscribers are getting Peacock’s ads and driving up Peacock’s subscriber numbers, which makes those ads more lucrative for NBCUniversal. And that should come as no surprise to anyone who knows that Comcast owns NBCUniversal and that Cox licenses Comcast’s streaming platform.

[…]

Instead of the old horizontal bundling — in which cable companies packaged a bunch of channels together so that people paid for some things they weren’t going to watch to get what they were — the new bundling is going to be vertical, where you pay for internet and get a streaming service in return. It’s not just Comcast that’s doing this. AT&T owns HBO, and it’s going to give premium AT&T mobile and broadband customers HBO Max (not to be confused, although you could definitely be forgiven for doing so, with the existing HBO Go and HBO Now apps) bundles at no extra charge.

Jessica Toonkel and Tom Dotan, the Information:

As Netflix and Disney cement their leads over an array of rivals in the video-streaming market, top executives at Comcast and its NBCUniversal arm are in a quandary. Only 11.3 million households regularly watch NBCU’s Peacock service, far fewer than use competing services, according to recent internal data viewed by The Information. NBCU wants to ramp up Peacock’s growth, particularly among paying users, but without spending a lot of money.

Catie Keck, Gizmodo:

One way that Peacock might grow its subscriptions would be to merge or bundle with another firm or service. The Information reported that NBCUniversal has pitched ViacomCBS about bundling with CBS All Access — soon to relaunch as Paramount+ — at a discounted rate, a pitch that evidently piqued ViacomCBS’s interest for a potential offering in overseas markets. But the outlet also cited NBCUniversal chief Jeff Shell, prior to taking his current position, as telling colleagues that the company would need to merge with WarnerMedia to remain competitive. The Information did, however, note that there’s no indication such a deal has been proposed and it’s possible that Shell’s position has changed.

Trendacosta nearly called it, but with a twist: why choose between vertical or horizontal integration when they could do both? Welcome to the era of everything you hate about cable bundles mixed with everything you hate about monolithic conglomerates.

Facebook Begins Restricting News Links for Australian Users and Australian Publishers Worldwide

Natalie Oliveri, reporting for 9 News in July:

Technology giants Google and Facebook will be required to negotiate with Australian media companies over payment for news content and notify them of algorithm changes under a mandatory code of conduct.

Today, this law was moved forward.

William Easton of Facebook:

Unfortunately, this means people and news organisations in Australia are now restricted from posting news links and sharing or viewing Australian and international news content on Facebook. Globally, posting and sharing news links from Australian publishers is also restricted. To do this, we are using a combination of technologies to restrict news content and we will have processes to review any content that was inadvertently removed.

This was stupidly framed by Matt Stoller as “bann[ing] the ENTIRE WORLD from getting Australian news content” and “censoring all of Australia”, which is a unique level of wrong and dumb.

Facebook’s response to this law differs from Google’s — the latter signed a bunch of agreements, including with News Corp, to pay publishers in exchange for showcasing their news. Facebook says that its relationship with publishers is different:

For Facebook, the business gain from news is minimal. News makes up less than 4% of the content people see in their News Feed. […]

Presumably, that means the remaining 96% is feed.

This is the biggest contemporary experiment in figuring out what it is like when publishers in one country no longer receive traffic from Facebook. It is unclear just how many clicks Facebook sends Australian news publishers. Facebook says it provided over five billion referrals last year, which obviously sounds like a lot, but that may be only a single-digit percentage of all news website visits in Australia.

Maybe this means that Australian Facebook users will become some of the best news consumers in the world because they will have to look elsewhere. They won’t rely on what Facebook thinks they want to see. It could be good for publishers, too, who will surely be happy to avoid Facebook’s algorithmic Jenga game.

But, if Facebook referrals are a significant amount of traffic to news websites, this law will have backfired in a quick and predictable way. Or, if Facebook detects “news” material imprecisely — which it will — it could permit the circulation of bullshit. The lies are free.

North Dakota Senate Rejects Bill That Would Require Alternative App Stores

Kate Cox, Ars Technica:

The North Dakota state Senate is jumping into a simmering feud between Apple and iOS software developers with a bill that would make it illegal for device makers to require developers to use their app stores and payment systems.

The bill (PDF) has two main prongs. First, it would make it unlawful for companies such as Google and Apple to make their app stores the “exclusive means” of distributing apps on their platforms. Second, it would prohibit those providers from requiring third parties to use their digital transaction or in-app payment systems in their applications.

Jack Dura, the Bismarck Tribune:

Apple Chief Privacy Engineer Erik Neuenschwander told the committee the bill “threatens to destroy iPhone as you know it” by mandating changes which he said would “undermine the privacy, security, safety, and performance that’s built into iPhone by design.”

“Simply put, we work hard to keep bad apps out of the App Store; (the bill) could require us to let them in,” he said.

Michael Tsai:

This argument basically assumes that it’s App Review, not iOS’s security features, that’s protecting users. Yet we have numerous examples of the App Store failing to do so, while at the same time mistakenly blocking good apps and developers. This happens both because the review process doesn’t scale and because it’s technically impossible to completely review how an app will behave. People definitely have more confidence installing software from an app store, but it’s mostly false confidence. Decades of experience with platforms like the Mac and Android that allow sideloading show that a more open approach works just fine. macOS’s anti-malware features have never been better.

Kif Leswing, CNBC:

The North Dakota state senate voted 36-11 on Tuesday not to pass a bill that would have required app stores to enable software developers to use their own payment processing software and avoid fees charged by Apple and Google.

If this bill had passed, what do you think Apple would have done?

  1. Stop offering products and services in North Dakota

  2. Construct an entirely separate iOS and App Store model for the citizens of North Dakota

  3. Upend its entire App Store business model

I know there are some developers who think the second and third options are likely, but North Dakota has less than a million residents. I think Apple could afford to forego Fargo.

Microsoft Now Offers a Unified Office App for iPad

Mary Jo Foley, ZDNet:

Microsoft’s Office app — a single app combining Word, Excel and PowerPoint features — is available for iPads from the Apple App Store as of today, February 16. Microsoft officials said earlier this month to expect this app to show up in the App Store, but didn’t say when that would happen.

[…]

Microsoft’s goal in creating these lightweight, combined Office apps was to address the needs of users for whom full versions of Word, Excel, and PowerPoint on their mobile devices was overkill in terms of features and download size. Microsoft also integrated its Lens technology into this app to make it easier for users to convert images into Word and Excel documents, scan PDFs and capture whiteboards. The app also is meant to simplify the process of making quick notes, signing PDFs and transferring files between devices.

Many of Office’s features are available for iPad users free of charge, but iPad Pro users require a subscription to do anything.

More broadly, I am finding it difficult to adapt to increasingly unified applications on my Mac and iPad. I am not sure if this is an age and experience thing — I am used to switching between apps with multiple documents or windows open. Aside from web browsers and development environments, I use tabs infrequently within apps because I am often juggling many files. For me, the advantages of an application-based model are outweighed by those of a document-based one.

This unified Office app has many of the same problems as, for example, Electron apps and web apps generally. Each document consumes the entire app. You can use the app in split screen, as Apple now requires, but it does not fully support multitasking within the app. So it is not possible to, for example, build a PowerPoint presentation based on a Word document outline, or reference one Excel spreadsheet while working in another.

Microsoft’s discrete Office apps do support document-based multitasking, so this is perhaps an oversight. But it is consistent with the way some of Microsoft’s other Office apps work. Teams, for example, is a single-window Electron app on MacOS; the current meeting will open in a new window but, otherwise, everything happens in one window. There is a built-in “Files” feature — for documents shared with your team members — but it is not possible to open multiple directories at once. Nor is it possible to see a chat window and a calendar at the same time.

Perhaps the way I see applications, windows, and documents is outdated and incompatible with the increasingly web-centric model. I may be failing to adapt. But many of these restrictions seem designed mostly to help companies churn out updates across different operating systems at breakneck pace, without requiring as much platform-specific development. The software-as-a-service model seems to incentivize frequent updates, quality be damned. Unified apps — like cross-platform frameworks and lowest common denominator codebases — are yet another way to reduce friction in development. This trend is worrisome.

Please know that I do not blame individual developers for this. They are merely working within a system that has been created around them.

See Also: A Step Back, something I wrote last year about many of Apple’s one-window apps.