Month: April 2024

In the 1970s and 1980s, in-house researchers at Exxon began to understand how crude oil and its derivatives were leading to environmental devastation. They were among the first to comprehensively connect the use of their company’s core products to the warming of the Earth, and they predicted some of the harms which would result. But Exxon treated their research as mere suggestion, because the obvious legislative response would “alter profoundly the strategic direction of the energy industry”. It would be a business nightmare.

Forty years later, the world has concluded its warmest year in recorded history by starting another. Perhaps we would have been better able to act had businesses like Exxon equivocated less all these years. Instead, they publicly sowed confusion and kept lawmakers in the dark. The continued success of their industry lay in keeping these secrets.


“The success lies in the secrecy” is a shibboleth of the private surveillance industry, as described in Byron Tau’s new book, “Means of Control”. It is easy to find parallels to my opening anecdote throughout, though, to be clear, a direct comparison to human-led ecological destruction is a knowingly exaggerated metaphor. The erosion of privacy and civil liberties is horrifying in its own right, and it shares key attributes: those in the industry knew what they were doing, and they allowed it to persist because it was lucrative and, in a post-9/11 landscape, ostensibly justified.

Tau’s byline is likely familiar to anyone interested in online privacy. For several years at the Wall Street Journal, he produced dozens of deeply reported articles about the intertwined businesses of online advertising, smartphone software, data brokers, and intelligence agencies. Tau no longer writes for the Journal, but “Means of Control” is an expansion of that earlier work, carefully arranged into a coherent set of stories.

Tau’s book, like so many others describing the current state of surveillance, begins with the terrorist attacks of September 11, 2001. Those were the early days, when Acxiom realized it could connect its consumer data set to flight and passport records. The U.S. government ate it up, and its appetite proved insatiable. Tau documents the growth of an industry that did not exist — could not exist — before the invention of electronic transactions, targeted advertising, virtually limitless digital storage, and near-universal smartphone use. This rapid transformation occurred not only with little regulatory oversight, but with government encouragement, including through investments in startups like Dataminr, GeoIQ, PlaceIQ, and PlanetRisk.

In near-chronological order, Tau tells the stories which have defined this era. Remember when documentation released by Edward Snowden showed how data created by mobile ad networks was being used by intelligence services? Or how a group of Colorado Catholics bought up location data to out priests who used gay dating apps? Or how a defence contractor quietly operates nContext, an adtech firm which permits the U.S. intelligence apparatus to effectively wiretap the global digital ad market? Regarding the latter, Tau writes of a meeting he had with a source who showed him a “list of all of the advertising exchanges that America’s intelligence agencies had access to”, and who told him American adversaries were doing the exact same thing.

What impresses most about this book is not the volume of specific incidents — though it certainly delivers on that front — but the way they are all woven together into a broader narrative perhaps best summarized by Tau himself: “classified does not mean better”. That can be true of the volume and variety of commercially available data, and it is also true of the relative ease with which that data can be acquired. Tracking someone halfway around the world no longer requires flying people in or even paying off people on the ground. Someone in a Virginia office park can just make that happen, and likely so, too, can other someones in Moscow and Sydney and Pyongyang and Ottawa, all powered by data from companies based in friendly and hostile nations alike.

The tension running through Tau’s book is in the compromise I feel he attempts to strike between acknowledging the national security utility of a surveillance state and describing how the U.S. has abdicated the standards of privacy and freedom it has long claimed are foundational rights. His reporting often reads as an understandable combination of awe and disgust. The U.S. has, it seems, slid in the direction of the kinds of authoritarian states its administration routinely criticizes. But Tau is right to clarify in the book’s epilogue that the U.S. is not, for example, China, separated from the standards of the latter by “a thin membrane of laws, norms, social capital, and — perhaps most of all — a lingering culture of discomfort” with concentrated state power. However, as the preceding chapters of the book show, those questions about power do not fully extend into the private sector, where there has long been pride in the scale and global reach of U.S. businesses alongside concern about their influence. Tau’s reporting shows how U.S. privacy standards have been exported worldwide. For a more pedestrian example, consider how often praise for Amazon, Meta, Starbucks, or Walmart, to throw a few names out there, comes sandwiched between complaints about their influence.

Corporate self-governance is an entirely inadequate response. Just about every data broker and intermediary from Tau’s writing which I looked up promised it was “privacy-first” or used similar language. Every business insists in its marketing literature that it is concerned about privacy and careful about how it collects and uses information, and businesses have been saying so for decades — yet here we are. Entire industries have been built on the backs of tissue-thin user consent and a flexible definition of “privacy”.

When polled, people say they are concerned about how corporations and the government collect and use their data. Still, when lawmakers mandate that users be given choices about data collection, the results do not appear to show a society that cares much about personal privacy.

In response to the E.U.’s General Data Protection Regulation, websites decided they wanted to continue collecting and sharing loads of data with advertisers, so they created the now-ubiquitous cookie consent sheet. The GDPR does not explicitly mandate this mechanism, and many of these sheets do not comply with either the rules or the intention of the law, but they have become a particularly common form of user consent. However, if you arrive at a website and it asks you whether you are okay with it sharing your personal data with hundreds of ad tech firms, are you providing meaningful consent with a single button click? Hardly.

Similarly, something like 10–40% of iOS users agree to allow apps to track them. In the E.U., opting out of Meta’s tracking will cost €6–10 per month, which, I assume, few people will pay.

All of these examples illustrate how inadequately we assess cost, utility, and risk. It is tempting to think of this as a personal responsibility issue akin to cigarette smoking but, as we are so often reminded, none of this data is particularly valuable in isolation — it must be aggregated in vast amounts. It is therefore much more like an environmental problem.

As with global warming, exposé after exposé after exposé is written about how our failure to act has produced extraordinary consequences. All of the technologies powering targeted advertising have enabled grotesque and pervasive surveillance, as Tau documents so thoroughly. Yet these are abstract concerns compared to a fee to use Instagram, or the prospect of reading hundreds of privacy policies with a lawyer and negotiating each of them so that one may have a smidge of control over their private information.

There are technical answers to many of these concerns, and there are also policy answers. There is no reason both should not be used.

I have become increasingly convinced the best legal solution is one which creates a framework limiting the scope of data collection, restricting it to only that which is necessary to perform user-selected tasks, and preventing mass retention of bulk data. Above all, users should not be able to choose a model that puts them in obvious future peril. Many of you probably live in a society where so much is subject to consumer choice, so what I wrote may sound pretty drastic. It is not. If anything, it is substantially less radical than the status quo, which permits such expansive surveillance on the basis that we “agreed” to it.

Any such policy should also be paired with something like the Fourth Amendment is Not For Sale Act in the U.S. — similar legislation is desperately needed in Canada as well — to prevent sneaky exclusions from longstanding legal principles.

Last month, Wired reported that Near Intelligence — a data broker you can read more about in Tau’s book — was able to trace dozens of individual trips to Jeffrey Epstein’s island. That could be a powerful investigative tool. It is also very strange and pretty creepy that all that information was held by some random company you have probably not heard of or thought about outside stories like these. I am obviously not defending the horrendous shit Epstein and his friends did. But it is really, really weird that Near is capable of producing this data set. When interviewed by Wired, Eva Galperin, of the Electronic Frontier Foundation, said, “I just don’t know how many more of these stories we need to have in order to get strong privacy regulations.”

Exactly. Yet I have long been convinced an effective privacy bill could not be implemented in either the United States or the European Union, and certainly not with any degree of urgency. And, no, Matt Stoller: de facto rules on the backs of specific FTC decisions do not count. Real laws are needed. But the products and services which would be affected are too popular, and the companies behind them too powerful. The E.U. is home to dozens of ad tech firms that promise full identity resolution. The U.S. would not want to destroy such an important economic sector, either.

Imagine my surprise when, while I was in the middle of writing this review, U.S. lawmakers announced the American Privacy Rights Act (PDF). If passed, it would give individuals more control over how their information — including biological identifiers — may be collected, used, and retained. Importantly, it requires data minimization by default. It would be the most comprehensive federal privacy legislation in the U.S., and it also promises various security protections and remedies, though I think lawmakers’ promise to “prevent data from being hacked or stolen” might be a smidge unrealistic.

Such rules would more or less match the GDPR in setting a global privacy regime that other countries would be expected to meet, since so much of the world’s data is processed in the U.S. or is otherwise under U.S. legal jurisdiction. The proposed law borrows heavily from the state-level California Consumer Privacy Act, too. My worry is that corporations will treat it much as they have treated the GDPR and the CCPA: continuing to offload decision-making to users while taking advantage of a deliberate imbalance of power. Still, any progress on this front is necessary.

So, too, is anything which helps us understand how corporations and governments have jointly benefitted from privacy-hostile technologies. Tau’s “Means of Control” is one such example. You should read it. It is a deep exploration of one specific angle of how data flows from consumer software to surprising recipients. You may think you know this story, but I bet you will learn something. Even if you are not a government target — I cannot imagine I am — it is a reminder that the global private surveillance industry functions only because we all participate, however unwillingly. People are tracked through their own devices, but also through the devices of those around them. That is perhaps among the most offensive conclusions of Tau’s reporting. We have all been conscripted for any government buying this data. It only works because it is everywhere and used by everybody.

For all their faults, democracies are not authoritarian societies. Without reporting like Tau’s, we would be unable to see what our own governments are doing and — just as important — how that differs from actual police states. As Tau writes, “in China, the state wants you to know you’re being watched. In America, the success lies in the secrecy”. Well, the secret is out. We now know what is happening despite the best efforts of an industry to keep it quiet, just like we know the Earth is heating up. Both problems massively affect our lived environment. Nobody — least of all me — would seriously compare the two. But we can say the same about each of them: now we know. We have the information. Now comes the hard part: regaining control.

I was perhaps a little too optimistic about Humane’s A.I. Pin. It seemed like an interesting attempt at doing something a little different, outside the mainstream device space. But the early reviews have dampened any intrigue I may have had.

In its current guise, it is a solution in search of problems. It does not even have a timer function — the one thing I can count on Siri to deliver. For someone with a disability, something like this could make a lot of sense if it worked reliably and quickly, but it seems too finicky for that.

Sherman Smith, Kansas Reflector:

Facebook’s unrefined artificial intelligence misclassified a Kansas Reflector article about climate change as a security risk, and in a cascade of failures blocked the domains of news sites that published the article, according to technology experts interviewed for this story and Facebook’s public statements.

Blake E. Reid:

The punchline of this story was, is, and remains not that Meta maliciously censored a journalist for criticizing them, but that it built a fundamentally broken service for ubiquitously intermediating global discourse at such a large scale that it can’t even cogently explain how the service works.

This was always a sufficient explanation for the Reflector situation, and one that does not require any level of deliberate censorship or conspiracy against such a small target. Yet many of those who boosted the narrative that Facebook blocks critical reporting cannot seem to shake it. I got the above link from Marisa Kabas, who commented:

They’re allowing shitty AI to run their multi-billion dollar platforms, which somehow knows to block content critical of them as a cybersecurity threat.

That is not an accurate summary of what transpired, especially if you read it with the wink-and-nod tone I infer from its phrasing. There is plenty to criticize about the control Meta exercises and the way in which it moderates its platforms without resorting to nonsense.

Nicole Lipman, N+1 magazine:

But both things can be true. SHEIN might be singled out as the worst fast-fashion retailer because the United States fears and envies China and has a particular interest in denigrating its successes, and it might be singled out because it is, in fact, the worst: the greatest polluter, the most flagrant IP thief, the largest violator of human rights, and — arguably worst of all — the most profitable. SHEIN has shown the world that unsustainability pays. Together with the companies that will follow its example of ultra-fast fashion, SHEIN will accelerate the already-rapid acceleration toward global catastrophe.

Consider the volume of critical press coverage, over decades, documenting outrageous practices in any number of consumer industries — fashion, technology, whatever — and then consider how those same industries, and even the same businesses, continue to grow and thrive. We now live in a world of Shein, Temu, and Amazon, all of which represent the exact opposite of the values we claim to hold, yet are hugely popular and growing. The worse they are, the more they are rewarded.

See Also: Michael Hobbes’ deep 2016 investigation, for the Huffington Post, about the “myth of the ethical shopper”.

Speaking of repairability, Samuel Gibbs reviewed, for the Guardian, the new Fairphone Fairbuds:

The Fairbuds cost £129 (€149) and are designed from the ground up to be as sustainable as possible, combining fair trade and recycled materials with replaceable parts that can be swapped in and out with a standard small screwdriver.

[…]

The earbuds have a little door hidden behind a silicone sleeve, which opens to reveal a small button battery ready to be replaced once it wears out. The design seems so simple you wonder why no one has tried it before.

Gibbs noted an audio sync issue which the company says it is working on. Otherwise, these seem to be perfectly fine water-resistant true wireless earbuds, with battery life approximately similar to that of Apple’s AirPods Pro.

It turns out I am currently in the market for a new set of wireless earbuds. My second-generation AirPods are down to just a few minutes of usable battery charge, and I have been reluctant to buy another set because of the fixed lifespan owing to the glued-in battery. I am sure there are ways these are less good than AirPods but, for my priorities, I think these are the right trade-off. Sadly, they are not yet available in Canada.

Apple, in a press release that does not once contain either of the words “Oregon” or “regulation”:

Today Apple announced an upcoming enhancement to existing repair processes that will enable customers and independent repair providers to utilize used Apple parts in repairs. Beginning with select iPhone models this fall, the new process is designed to maintain an iPhone user’s privacy, security, and safety, while offering consumers more options, increasing product longevity, and minimizing the environmental impact of a repair. Used genuine Apple parts will now benefit from the full functionality and security afforded by the original factory calibration, just like new genuine Apple parts.

Apple goes on to say that parts calibration will soon be done on-device, and it describes a genuinely good use of pairing: if parts are scavenged from iPhones with Activation Lock enabled, they will be “restricted” in some way.

This all sounds pretty great and, it would seem, entirely triggered by regulatory changes. But it also seems to me that it is designed to challenge the parts pairing section of Oregon’s right-to-repair law (PDF). Specifically, this portion:

(b) For consumer electronic equipment that is manufactured for the first time, and first sold or used in this state, after January 1, 2025, an original equipment manufacturer may not use parts pairing to:

[…]

(B) Reduce the functionality or performance of consumer electronic equipment; […]

A clause a little later in the same section says manufacturers are not obliged to “make available special documentation, tools, parts or other devices or implements that would disable or override, without an owner’s authorization, anti-theft” features set by the device owner. Read together, it sounds like Apple’s anti-theft restrictions on scavenged parts could be prohibited in Oregon because they reduce the functionality of the equipment, while the anti-theft carve-out gives Apple something to argue in response. That is my non-lawyer reading, anyway: the law creates an understandable reason for parts pairing, and grounds for Apple to fight it. Just a guess, but I bet this comes up later.

Supantha Mukherjee and Foo Yun Chee, Reuters:

Independent browser companies in the European Union are seeing a spike in users in the first month after EU legislation forced Alphabet’s Google, Microsoft, and Apple to make it easier for users to switch to rivals, according to data provided to Reuters by six companies.

The early results come after the EU’s sweeping Digital Markets Act, which aims to remove unfair competition, took effect on March 7, forcing big tech companies to offer mobile users the ability to select from a list of available web browsers from a “choice screen.”

I was skeptical about the efficacy of a browser ballot screen, but I guess I should not be surprised by this news. It turns out people may pick other options if you make the choice more prominent.

Via Ben Lovejoy, who covered the report for 9to5Mac but, as of publishing, has not linked to it, and writes:

Other browser companies claim that the process is convoluted, and provides no information on any of the browsers listed. They say this means iPhone users are more likely to simply pick the name they know, which is most likely to be Safari.

I have seen others suggest people may be picking third-party browsers because they are unclear about what a web browser is, or are unsure which one they want to use. I can see legitimacy in both arguments — but that is just how choice works. A lot of people buy the same brand of a product even when they have other options because it is the one they recognize; others choose based on criteria unrelated to the product itself. This is not a new phenomenon. What is fascinating to me is seeing how its application to web browsers on a smartphone is being treated as exotic.

An analogy some have turned to — including me — in describing the difference between first- and third-party apps on the iPhone is that it is something like the difference between generic store brands and national name brands. This has been misleading because users have not, in the case of competitors to first-party apps, been starting from a neutral position.

It has so far been a little bit like entering a store where you are handed a basket of house-brand products, and you have to decide which third-party options to add to the basket or swap in. Someone needs to really care in order to make the effort. Now, because of this ballot screen, the playing field is a little more level, and it seems some users are responding.

Over the past several years, consequences have been slowly dripping out regarding Apple’s decision to silently curb iPhone performance in cases of poor battery capacity. First, the French competition authorities fined the company, then Apple settled a U.S. class action. In March, the Canadian equivalent class action suit was settled.

Alisha Parchment, CBC News:

Current or former iPhone 6 and 7 users in Canada can now submit a settlement claim for a class-action lawsuit that could pay up to $150 to eligible users of the affected devices.

For clarity, it also covers current and former iPhone 6S and iPhone SE (first generation) owners. If you have owned one of those devices and can find the serial number, you can process a claim, or opt out, until September 3. Quebec residents are ineligible.

Sarah Perez, TechCrunch:

WordPress.com owner Automattic is acquiring Beeper, the company behind the iMessage-on-Android solution that was referenced by the Department of Justice in its antitrust lawsuit against Apple. The deal, which was for $125 million according to sources close to the matter, is Automattic’s second acquisition of a cross-platform messaging solution after buying Texts.com last October.

Matt Mullenweg:

A lot of people are asking about iMessage on Android… I have zero interest in fighting with Apple, I think instead it’s best to focus on messaging networks that want more engagement from power-user clients. This is an area I’m excited to work on when I return from my sabbatical next month.

Seems like a smart way for Beeper to become better resourced, and a bet by Automattic on more legislation like the Digital Markets Act enabling further interoperable messaging.

Louise Matsakis, Wired:

[Zen] Goziker worked at TikTok for only six months. He didn’t hold a senior position inside the company. His lawsuit, and a second one he filed in March against several US government agencies, makes a number of improbable claims. He asserts that he was put under 24-hour surveillance by TikTok and the FBI while working remotely in Mexico. He claims that US attorney general Merrick Garland, director of national intelligence Avril Haines, and other top officials “wickedly instigated” his firing. And he states that the FBI helped the CIA share his private information with foreign governments. The suits do not appear to include evidence for any of these claims.

Here is a copy of Goziker’s complaint, and it is quite the read, as you can probably imagine. He alleges, without evidence, that members of the Biden administration corruptly sought political favours from TikTok executives, effectively placing himself as the central character in a complex geopolitical plot.

Perhaps more believable is Goziker’s claim that he was the source for the recordings reported on in June 2022 by Emily Baker-White, then at Buzzfeed News, in an article pretentiously framed as the “TikTok Tapes”. While the story’s accuracy is not affected by a bloviating source, it sure makes me more concerned the clips were taken out of context. To be clear, I have no evidence of that and I am sure Baker-White was diligent in reporting out the story.

Zuha Siddiqui, Samriddhi Sakunia, and Faisal Mahmud, Rest of World:

To better understand air quality exposure among gig workers in South Asia, Rest of World gave three gig workers — one each in Lahore, New Delhi, and Dhaka — air quality monitors to wear throughout a regular shift in January. The Atmotube Pro monitors continually tracked their exposure to carcinogenic pollutants — specifically PM1, PM2.5, and PM10 (different sizes of particulate matter), and volatile organic compounds such as benzene and formaldehyde.

[…]

Although pollution can affect anyone exposed to it, delivery riders are particularly vulnerable owing to the nature of their work: They are outside for extended periods of time, often on congested streets, with little shelter from the smog.

These are obviously among the most extreme examples of what delivery workers’ lungs endure. Conditions similar to these are common across Southeast Asia and South Asia, but are not limited to those regions. According to IQAir, many cities in South Africa are dealing with dangerous levels of pollution, and winter months are particularly hazardous in Chile.

Back in the United States, John Oliver spent the main portion of the March 31 edition of “Last Week Tonight” talking about delivery workers. I have to wonder how any of these supposedly revolutionary “gig” jobs will last in their current form.

Update: Corrected to reflect that July is, in fact, winter in Chile. What a silly mistake.

Louie Mantia:

I used to instantly delete emails about a company’s policy changes, but now I’m taking a different approach. Before I delete the email, I delete the account.

[…]

But why am I the one who has to delete the account?

Companies are too comfortable modifying their policies passively over years, because they get to retain user data even if users don’t explicitly consent to a policy change.

Via Eric Schwarz:

I couldn’t agree more with the sentiment of this entire post. A few months ago, I decided to clean up old and unnecessary accounts and the amount of companies that either fought me on the request or hid behind the “you don’t live in California” excuse was frustrating. […]

Rodrigo Ghedin:

My face is in several places. Back there, before the facial recognition algorithms and the generative AIs, I thought it would be good to show the face to pass… credibility? Confidence? I don’t know. Maybe it wasn’t even a necessity as it’s today, because we didn’t have AIs that wrote convincing gibberish. Simpler times.

Three posts on a theme: our inability to forecast technological development or changing incentives. It used to be prohibitively difficult to retain collections of data. When records were physical, preserving them was the kind of thing only librarians and archivists could do, in buildings designed specifically for that purpose. There was built-in encouragement to purge old and irrelevant things. For a few decades now, it has instead seemed more costly to delete things — who knows what value some column in a database or a formerly active user’s account could generate? Better to hold onto it.

The Calgary Cassette Preservation Society:

Documenting Calgary’s music scene since 2007, this is the new home of the CCPS. We’re bringing over content from our old site and will be adding more stuff (including a long-dreamed of gig poster archive) in the coming months.

Via Boshika Gupta, CBC News:

The website is a work in progress as the team finishes migrating content to the site with the hope that the updated resource will make it super easy for music lovers to look up bands and listen to their recordings.

[Arif] Ansari is also aiming to include other ephemera from Calgary’s music scene, such as historic gig posters.

I am sad to say I had not heard of the original site until I read this CBC article. 2007 was a few years after I began going to gigs by many of the local artists documented on Ansari’s site, and I am hoping to see more recordings as people rummage around bins of old tapes.

Earlier this week, Dave Kendall, of documentary production company Prairie Hollow and formerly of a Topeka, Kansas PBS station, wrote an article in the Kansas Reflector criticizing Meta. Kendall says he tried to promote posts on Facebook for a screening of “Hot Times in the Heartland” but was prevented from doing so. A presumably automated message said the posts were not compliant with Meta’s political ads policy.

I will note Meta’s ambiguous and apparently fluid definition of which posts count as political. But Kendall comes to the ridiculous conclusion that “Meta deems climate change too controversial for discussion” based solely on his inability to “boost” an existing post. To be pedantic but correct: Meta did not prohibit discussion generally, just the ad.

I cannot fault Kendall’s frustration, however, as he correctly describes the non-specific support page and nonexistent support:

But in the Meta-verse, where it seems virtually impossible to connect with a human being associated with the administration of the platform, rules are rules, and it appears they would prefer to suppress anything that might prove problematic for them.

Exactly. This accurately describes the imbalanced power of even buying ads on Meta’s platforms. Advertisers are Meta’s customers and, unless one is a big spender, they receive little to no guidance. There are only automated checks and catch-all support contacts, neither of which is particularly helpful for anything other than obvious issues.

A short while later in the editorial, however, things take a turn for the wrong again:

The implications of such policies for our democracy are alarming. Why should corporate entities be able to dictate what type of speech or content is acceptable?

In a centralized social network like Facebook, the same automated technologies which flagged this post also flag and remove posts which genuinely degrade the community. We already know how lax policies turn out, and why those theories do not survive in the real world.

Of course, in a decentralized social network, it is possible to create communities with different policies. The same spec that underpins Mastodon, for example, also powers Gab and Truth Social. Perhaps that is more similar to the system which Kendall would prefer — but that is not how Facebook is built.

Whatever issue Facebook flagged regarding those ads — Kendall is not clear, and I suspect that is because Facebook is not clear either — the problems of its poor response intensified later that day.

Clay Wirestone and Sherman Smith, opinion editor and editor-in-chief, respectively, of the Kansas Reflector:

This morning, sometime between 8:20 and 8:50 a.m. Thursday, Facebook removed all posts linking to Kansas Reflector’s website.

This move not only affected Kansas Reflector’s Facebook page, where we link to nearly every story we publish, but the pages of everyone who has ever shared a story from us.

[…]

Coincidentally, the removals happened the same day we published a column from Dave Kendall that is critical of Facebook’s decision to reject certain types of advertising: “When Facebook fails, local media matters even more for our planet’s future.”

Marisa Kabas, writing in the Handbasket:

Something strange started happening Thursday morning: Facebook users who’d at some point in the past posted a link to a story from the Kansas Reflector received notifications that their posts had violated community standards on cybersecurity. “It looks like you tried to gather sensitive information, or shared malicious software,” the alert said.

[…]

Shortly after 4, it appeared most links to the site were posting properly on Meta properties — Facebook, Instagram, Threads — except for one: Thursday’s column critical of Facebook.

If you wanted to make a kind-of-lame modern conspiracy movie, this is where the music swells and it becomes a fast-paced techno-thriller. Kabas followed this article with one titled “Here’s the Column Meta Doesn’t Want You to See”, republishing Kendall’s full article “in an attempt to sidestep Meta’s censorship”.

While this interpretation of a deliberate effort by Facebook to silence critical reporting is kind of understandable, given its poor communication and the lack of adequate followup, it hardly strikes me as realistic. In what world would Meta care so much about tepid criticism published by a small news operation that it would take deliberate manual actions to censor it? Even if you believe Meta would be more likely to kneecap a less visible target than, say, a national news outlet, it does not make sense for Facebook to be this actively involved in hiding any of the commentary I have linked to so far.

Facebook’s explanation sounds more plausible to me. Sherman Smith, Kansas Reflector:

Facebook spokesman Andy Stone in a phone call Friday attributed the removal of those posts, along with all Kansas Reflector posts the day before, to “a mistaken security issue that popped up.” He wouldn’t elaborate on how the mistake happened and said there would be no further explanation.

[…]

“It was a security issue related to the Kansas Reflector domain, along with the News From The States domain and The Handbasket domain,” Stone added. “It was not this particular story. It was at the domain level.”

If some system at Meta erroneously flagged Kendall’s original attempt to boost a post as a threat, it makes sense that related stories and domains would also be flagged. Consider how beneficial this same chain of effects would be if there actually were a malicious link: not only does it block the main offending link, but also any adjacent links that look similar, and any copycats or references. That is an entirely fair way to prevent extreme platform abuse. In this case, with large numbers of people trying to post one link that had already been flagged, alongside other similar links, it is easy to see how Meta’s systems might detect suspicious behaviour.
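To illustrate the kind of cascade I mean, here is a minimal, entirely hypothetical sketch in Python. The names and the single rule are mine, not anything Meta has described; it just shows how flagging one URL at the domain level sweeps up every other link on that domain:

```python
from urllib.parse import urlparse

# Hypothetical domain-level blocklist, seeded by a single flagged URL.
flagged_domains: set[str] = set()

def flag_url(url: str) -> None:
    """Flagging one URL as a threat taints its entire domain."""
    host = urlparse(url).hostname
    if host:
        flagged_domains.add(host)

def is_blocked(url: str) -> bool:
    """Any post linking anywhere on a flagged domain is then removed."""
    return urlparse(url).hostname in flagged_domains

# One bad — or wrongly flagged — link takes every other link down with it.
flag_url("https://example-reflector.test/column-critical-of-facebook")
print(is_blocked("https://example-reflector.test/unrelated-2019-story"))  # True
```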

For an even simpler example, consider how someone who has forgotten the password for their account looks exactly the same as someone trying to break into it. On any website worth its salt, you will be slowed down or prevented from making more than some small number of password attempts, even if you are the actual account owner. This is common security behaviour; Meta’s is merely more advanced.
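As a sketch of that principle — and only a sketch; the threshold and window here are numbers I made up, not any real site’s policy — the throttling logic can be as simple as counting recent failures per account:

```python
import time

MAX_ATTEMPTS = 5          # assumed threshold, not any real site's policy
WINDOW_SECONDS = 15 * 60  # assumed lockout window

# Timestamps of recent failed logins, keyed by account name.
failed_attempts: dict[str, list[float]] = {}

def attempt_allowed(username: str) -> bool:
    """Permit a login attempt only if the account has had fewer than
    MAX_ATTEMPTS failures in the window. The check cannot distinguish
    a forgetful owner from an attacker, so it slows both equally."""
    now = time.time()
    recent = [t for t in failed_attempts.get(username, []) if now - t < WINDOW_SECONDS]
    failed_attempts[username] = recent
    return len(recent) < MAX_ATTEMPTS

def record_failure(username: str) -> None:
    """Call after every incorrect password."""
    failed_attempts.setdefault(username, []).append(time.time())
```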

This is not to say Meta got this right — not even a little bit. I have no reason to carry water for Meta and I have plenty to criticize; more on that later. Unfortunately, the coverage of this non-story has been wildly disproportionate and misses the actual problems. CNN reported that Meta was “accused of censoring” the post. The Wrap said definitively that it “block[ed] Kansas Reflector and MSNBC columnist over op-ed criticizing Facebook”. An article in PC Magazine claimed “Facebook really, really doesn’t want you to read” Kendall’s story.

This is all nonsense.

What is true, and deeply frustrating, is the weak approach of companies like Meta and Google toward customer service. Both have offloaded the administrative work of approving or rejecting ads to largely automated systems with often vague and unhelpful responses, because they have prioritized scale over quality from their earliest days.

For contrast, consider how apps made available in Apple’s App Store have always received human review. There are plenty of automated processes, too, which can detect obvious problems like the presence of known malware — but if an app passes those tests, a person sees it before approving or rejecting it. Of course, this system is also deeply flawed; see the vast number of articles and links I have posted over the years about the topic. Any developer can tell you that Apple’s support has problems, too. But you can see a difference in approaches between companies which have scaled with human intervention, and those which have avoided it.

Criticism of Meta in this case is absolutely warranted. It should be held to a higher standard, with more options available for disputing its moderation judgements, and its pathetic response in this case deserves the scrutiny and scorn it is receiving. This is particularly true as it rolls out its de-prioritization of “political” posts in users’ feeds, while continuing to dodge meaningful explanations of what will be affected.

Dion Lefler, the Wichita Eagle:

Both myself and Eagle investigative reporter Chance Swaim have tried to contact Facebook/Meta — although we knew before we started that it’s a waste of time and typing.

Their corporate phone number is a we-don’t-give-a-bleep recording that hangs up on you after two repeats. And their so-called media relations department is where press emails go to die.

Trying to understand how these megalithic corporations make decisions is painful enough, and their ability to dodge the press gives the impression they are not accountable to anybody. They may operate our social spaces and digital marketplaces, but they are oftentimes poor stewards. There will always be problems at this scale. Yet it often seems as though public-facing tech businesses, in particular, behave as though they are still scrappy upstarts with little responsibility to a larger public. Meta is proud to say its products “empower more than 3 billion people around the world”. I cannot imagine what it is like to design systems which affect that many people. But it is important to criticize the company when it messes up this badly without resorting to conspiracy theories or misleading narratives. The press can do better. But Meta also needs to be more responsive and less hostile, and to offer better explanations of how these systems work, because, as with just about any massive entity, nobody should have to take it at its word.

Do you want to block all YouTube ads in Safari on your iPhone, iPad, and Mac?

Then download Magic Lasso Adblock — the ad blocker designed for you.

It’s easy to set up, doubles the speed at which Safari loads, and blocks all YouTube ads.


Magic Lasso is an efficient, high-performance, native Safari ad blocker. With over 4,000 five-star reviews, it’s simply the best ad blocker for your iPhone, iPad, and Mac.

It blocks all intrusive ads, trackers, and annoyances — letting you enjoy a faster, cleaner, and more secure web browsing experience.

The app also blocks over 10 types of YouTube ads, including all:

  • video ads

  • pop-up banner ads

  • search ads

  • plus many more

And unlike some other ad blockers, Magic Lasso Adblock respects your privacy, doesn’t accept payment from advertisers and is 100% supported by its community of users.

So, join over 300,000 users and download Magic Lasso Adblock today.

My thanks to Magic Lasso Adblock for sponsoring Pixel Envy this week.

Gaby Del Valle, the Verge:

The complaint emphasizes that, unlike iMessages, iPhone users’ SMS communications with Android users — i.e., green bubble texts — lack encryption. 

“Apple forces other platforms to use SMS messaging. It doesn’t allow them to integrate with iMessage or another encrypted message platform built-in,” Cliff Steinhauer, director of information security and engagement at the National Cybersecurity Alliance, told The Verge in a phone interview. Since SMS messages aren’t encrypted, they’re less secure by default.

Apple has previously said its devices would begin supporting RCS, a more secure messaging protocol that will make communications with Android devices encrypted, later this year.

There is a theoretically good discussion in this story about the compromises of the iPhone’s privacy and security model, and its dependence on a benevolent dictator. But these three paragraphs are silly.

It is obviously true that the SMS standard does not have any support for encrypted messages, but that is also true of RCS. Del Valle links to the Verge’s own reporting on Apple’s RCS support, in which it says it “could enable support for encryption”, but any end-to-end encryption of RCS messages today is thanks to implementation decisions made by Google — and Apple will not match that support. Instead, it says it will advocate for end-to-end encryption standards in the RCS spec. The claim that it “will make communications with Android devices encrypted” is simply untrue.

The key phrase in what Steinhauer said is “built-in”, and that will not change when RCS support is added. In fact, it is not even clear to me that most conversation between iOS and Android users happens over SMS. I would not be surprised if it did, given that SMS is a universal standard, but most popular third-party messaging applications are now end-to-end encrypted, or are in the process of becoming so.

It seems to me like the rest of this article raises good arguments about how Apple runs the iPhone and the App Store, from a range of perspectives. One person says commercial spyware impacts Android phones more often; another says a moderate increase in risk is worth it for loosening Apple’s control; and so on. But it feels like a moot discussion because this article is nominally about the U.S. Department of Justice’s case against Apple — and its primary complaints are barely related to App Store policy. The closest the DoJ gets is with questions about super apps, cloud streaming game apps, and digital wallets, but most of its issues are with Apple’s restrictions around private APIs. The region with a big opening-up of app distribution on iOS is the E.U., and it will be a good experiment in which concerns shake out as true and which are mere fear-mongering.

Security is one thing to watch out for but, if there are privacy concerns, the U.S. should pass a sweeping nationwide legal framework for privacy. If individual privacy ought to be a right, then it should be spelled out in law, and no company should be able to use it as paper-thin justification for its platform choices. There are times when Apple’s policing decisions seem entirely legitimate, and there are times when it seems — as the DoJ memorably put it — like an “elastic shield”. It would be better for everyone, I think, if there were universal privacy standards that did not depend on the user’s hardware and software choices. Any company could be more restrictive if it would like, but there should be a baseline substantially higher than the one that exists today.

Maxwell Zeff, Gizmodo:

Just over half of Amazon Fresh stores are equipped with Just Walk Out. The technology allows customers to skip checkout altogether by scanning a QR code when they enter the store. Though it seemed completely automated, Just Walk Out relied on more than 1,000 people in India watching and labeling videos to ensure accurate checkouts. The cashiers were simply moved off-site, and they watched you as you shopped.

Zeff says, paraphrasing the Information’s reporting, that 70% of sales needed human review as of 2022, though Amazon says that is inaccurate.

Based on this story and reporting from the Associated Press, it sounds like Amazon is only ending Just Walk Out support in its own stores. According to the AP and Amazon’s customer website and retailer marketing page, several other stores will still use a technology it continues to say works by using “computer vision, sensor fusion, and deep learning”.

How is this not basically a scam? It certainly feels that way: if I were sold this ostensibly automated feat of technology, I would feel cheated by Amazon upon learning it was mostly possible because someone was watching a live camera feed and making manual corrections. If the Information’s reporting is correct, only 30% of transactions are as magically automated as Amazon claims. However, Amazon told Gizmodo that only a “small minority” of transactions need human review today — but, then again, Amazon has marketed this whole thing from the jump as though it is just computers figuring it all out.

Amazon says it will be replacing Just Walk Out with its smart shopping cart. Just like those from Instacart, it will show personalized ads on a cart’s screen.

In March, a massive amount of AT&T customer data was leaked on a well-known marketplace. The data included extremely sensitive subscriber information, including Social Security numbers that had apparently been decrypted from the form in which they were stored. AT&T initially denied its own systems were breached but, in a statement a couple of weeks later — apparently prompted by Zack Whittaker of TechCrunch — it acknowledged that it, or “one of its vendors”, could be the source.

AT&T also said the data was released on the “dark web” but, like, you can just Google the forum where it is available. It is a normal non-Tor website.

Anyway, Om Malik was a customer and expects some of his information is in this leak, and is not impressed with AT&T’s response:

These guys get in touch when you are late with your payment — but not when they can’t do their job. My initial reaction to the news was the all-too-familiar rage, and the all-too-often repeated four-letter words. AT&T wants you to sign up and get free monitoring from one of the three credit bureaus — which have been hacked at some point.

This is no different from what T-Mobile did when it was hacked. The problem with such actions is that it leads to nowhere — placing the entire responsibility on the citizen, who is left dealing with the mess created by large corporations through no fault of their own. […]

I think Malik is right. There is a sort of creeping pessimism that comes with a now-steady gush of data breaches because, it seems, so much has already been disclosed that the leak of another copy of your personal information only makes an already large pile a little bit bigger. But while bad security practices should not go unpunished, even a debilitating penalty for a corporation which fails to protect its records would do little to offset the years of misery facing each affected person.