Karl Bode, referencing a dumb Fortune article published last month:

Nothing that headline says is true. It doesn’t seem to matter. “CEO said a thing!” journalism involves, again, no actual journalism. Sam Altman, Mark Cuban, and Mark Zuckerberg are frequent beneficiaries of the U.S. corporate press’ absolute dedication to propping up extraction class mythologies via clickbait.

Nobody has benefited more from this style of journalism than Elon Musk. His fake supergenius engineer persona was propped up by a lazy press for the better part of two decades before the public even started to realize Musk’s primary skillset was opportunistically using familial wealth and the Paypal money he lucked into to saddle up to actual innovators and take singular credit for their work.

There is a symbiotic relationship these CEOs have with modern and traditional media alike. Musk goes on a three-hour “deeply researched” podcast and says some bullshit about how space will “be by far the cheapest place to put A.I. It will be space in 36 months or less. Maybe 30 months”. And then the host replies “36 months?” and Musk says “less than 36 months”, and then they are off for ten minutes discussing this as though it is a real thing that will really happen. Then real publications cover it like it is serious and real and, when asked for comment, Musk’s companies do not engage.

All these articles and videos bring in the views despite lacking the substance implied by either their publisher or, in the case of these video interviews, their length and serious tone. These CEOs know they can just say stuff. There is no reason to take them at their word, nor to publish a raft of articles based on whatever they say in some friendly and loose interview. Or a tweet, for that matter.

Do you want to block all YouTube ads in Safari on your iPhone, iPad, and Mac?

Then download Magic Lasso Adblock – the ad blocker designed for you.

As an efficient, high performance and native Safari ad blocker, Magic Lasso blocks all intrusive ads, trackers, and annoyances – delivering a faster, cleaner, and more secure web browsing experience.

Best in class YouTube ad blocking

Magic Lasso Adblock is easy to set up, doubles the speed at which Safari loads, and also blocks all YouTube ads — including all:

  • video ads

  • pop up banner ads

  • search ads

  • plus many more

With over 5,000 five star reviews, it’s simply the best ad blocker for your iPhone, iPad, and Mac.

And unlike some other ad blockers, Magic Lasso Adblock respects your privacy, doesn’t accept payment from advertisers, and is 100% supported by its community of users.

So, join over 350,000 users and download Magic Lasso Adblock today.

Kashmir Hill, Kalley Huang, and Mike Isaac report, in the New York Times, that Meta has been planning to bring facial recognition features to its smart glasses. There is a money quote in this article you may have seen on social media already, but I want to give it greater context (the facial recognition feature is called “Name Tag”, at least internally):

[…] The document, from May, described plans to first release Name Tag to attendees of a conference for the blind, which the company did not do last year, before making it available to the general public.

Meta’s internal memo said the political tumult in the United States was good timing for the feature’s release.

“We will launch during a dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns,” according to the document from Meta’s Reality Labs, which works on hardware including smart glasses.

The second part of this is a cynical view of public relations that would be surprising from most any company, yet seems pretty typical for Meta. This memo is apparently from May, a few months before a Customs and Border Protection agent wore Meta’s Ray-Bans to a raid, so I am not sure civil rights organizations would ignore the feature today. However, the first part of the quote I included also seems pretty cynical: releasing it as an accessibility feature first.

Facial recognition may well be useful to people with disabilities, assuming it works well, and I do not want to sweep that aside in the abstract. But this is Meta. It is a company with a notoriously terrible record on privacy, to the extent it is bound by a 20-year consent order (PDF) with the U.S. Federal Trade Commission, an order it violated in multiple ways, one of which concerned facial recognition features. Perhaps there is a way for technology to help people recognize faces that is safe and respectful but, despite positioning itself as a privacy-focused company — coincidentally, at the same time as the FTC said it violated its consent decree — Meta will not be delivering that future.

For one, Meta is still deciding which faces its glasses ought to recognize:

Meta is exploring who should be recognizable through the technology, two of the people said. Possible options include recognizing people a user knows because they are connected on a Meta platform, and identifying people whom the user may not know but who have a public account on a Meta site like Instagram.

The feature would not give people the ability to look up anyone they encountered as a universal facial recognition tool, two people familiar with the plans said.

Instagram has over three billion monthly users and, while that does not translate perfectly to three billion public personal accounts, it seems to me like a large proportion of people any of us randomly meet would be identifiable. Why should that suggestion even make it past the very first mention of it in some meeting long ago? Some ideas are obviously bad and should be quashed immediately.

Dell Cameron, Wired:

United States Customs and Border Protection plans to spend $225,000 for a year of access to Clearview AI, a face recognition tool that compares photos against billions of images scraped from the internet.

The deal extends access to Clearview tools to Border Patrol’s headquarters intelligence division (INTEL) and the National Targeting Center, units that collect and analyze data as part of what CBP calls a coordinated effort to “disrupt, degrade, and dismantle” people and networks viewed as security threats.

Lindsey Wilkinson, FedScoop:

In the last year, CBP has deployed several AI technologies, such as NexisXplore, to aid in open-source research of potential threats and to identify travelers. The Homeland Security organization last year began using Mobile Fortify as a facial comparison and fingerprint matching tool to quickly verify persons of interest. CBP Link is another AI use case that cropped up in the past year, streamlining facial recognition and real-time identity verification.

CBP began piloting Clearview AI’s technology in 2025, too, according to DHS’s AI inventory. The technology needed to be — and was — tuned to produce better results and limit misidentification. Guardrails have been identified to some degree.

A reminder of how Clearview works: it scrapes images across the web at massive scale — over sixty billion of them, according to Cameron — and associates them with specific individuals, drawing on sources like Facebook. This is not facial recognition of criminals or even people suspected of wrongdoing. It is recognition of anyone whose face has been photographed and shared even semi-publicly.

This contract is likely part of the suite of technologies used to identify incoming travellers, and not just in the U.S. — a 2022 article on the CBP website says other countries are using the CBP software that will likely have Clearview integration.

Howard Oakley:

What Apple doesn’t reveal is that it has improved, if not fixed, the shortcomings in Accessibility’s Reduced Transparency setting. When that’s enabled, at least some of the visual mess resulting from Liquid Glass, for example in the Search box in System Settings, is now cleaned up, as the sidebar header is now opaque. It’s a small step, but does address one of the most glaring faults in 26.2.

In apps like Messages and Preview, the toolbar finally has a solid background when Reduce Transparency is turned on, instead of the translucent gradient it had previously. The toolbar itself and the buttons within it remain ill-defined, however, unless you also turn on Increase Contrast, which Apple clearly does not want you to do because it makes the system look ridiculous. Also, when Reduce Transparency is turned on, Siri looks like this:

Siri on macOS Tahoe with illegible white text against a pastel background

One would assume this is the kind of thing someone at Apple would notice if there were people working there who used Siri, tested Accessibility features, and cared about contrast.

Adam Engst, TidBITS:

Two other Liquid Glass-related pecadillos fared less well. First, although Apple fixed a macOS 26.2 problem that caused the column divider handles to be overwritten by scroll bars (first screenshot below), if you hide both the path bar and status bar, an unseemly gap appears between the scroll bar and the handles (fourth screenshot below). Additionally, while toggling the path and status bars, I managed to get the filenames to overwrite the status bar (third screenshot below). Worse, all of these were taken with Reduce Transparency on, so why are filenames ever visible under the scroll bar?

The problem with a cross-platform, top-to-bottom redesign that puts translucency at the forefront is that it means addressing an ever-increasing number of control conditions. And then you are still stuck with Liquid Glass’ reflective quality. Even with Reduce Transparency turned on, the Dock will brighten — in light mode — when you drag an application window near it, because it is reflecting the large white expanse. Technically, the opacity of the Dock has not changed, but it still reads as a translucent area, with the same impact on contrast. Apple has written itself a whole new set of bugs to fix.

Logan McMillen, of the New Republic, is very worried about TikTok’s new ownership in the United States — so worried, in fact, that it deserves a conspiratorial touch:

The Americanization of TikTok has also introduced a more visible form of suppression through the algorithmic throttling of dissent. In the wake of recent ICE shootings in Minneapolis, users and high-profile creators alike reported that anti-ICE videos were instantly met with “zero views” or flagged as “ineligible for recommendation,” effectively purging them from the platform’s influential “For You” feed.

The new TikTok USDS Joint Venture LLC attributed these irregularities to a convenient data center power outage at its Oracle-hosted facilities. While the public attention this episode garnered will make it more conspicuous if user content gets throttled on TikTok again, the tools are there: By leveraging shadow bans and aggressive content moderation, TikTok can, if it wanted to, ensure that any visual evidence of ICE’s overreach is silenced before it reaches the masses.

If these claims sound familiar to you, it is probably because the same angle was used to argue for the divestiture of TikTok’s U.S. assets in the first place. The same implications and the same shadowy tones were invoked to make the case that TikTok was censoring users’ posts on explicit or implied instruction from Chinese authorities — and it was not convincing then, either.

These paragraphs appear near the bottom of the piece, where readers will find the following note:

This article originally misidentified TikTok’s privacy policy. It also misidentified the extant privacy policy as an updated one.

This article has been updated throughout for clarity.

That got me wondering, so I put the original article and the latest version into Diffchecker. The revisions are striking. Not only did the original version of the piece repeat the misleading claim that TikTok’s U.S. privacy policy changes were an effort to collect citizenship status, it suggested TikTok was directly “feed[ing] its data” to the Department of Homeland Security. On the power outage, quoted above, McMillen was more explicitly conspiratorial, originally writing “the timing suggests a more deliberate intention”.
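Diffchecker is a web tool, but the same comparison works locally. Here is a minimal sketch with Python’s standard difflib; the filenames are hypothetical, assuming you have saved both versions as plain text:

```python
import difflib

# Hypothetical filenames: save the original and revised article text locally.
with open("original.txt") as f:
    original = f.readlines()
with open("revised.txt") as f:
    revised = f.readlines()

# unified_diff yields only the changed lines plus a little context, which
# makes the scope of the revisions easy to see at a glance.
for line in difflib.unified_diff(original, revised, fromfile="original", tofile="revised"):
    print(line, end="")
```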

404 Media reporter Jason Koebler, in a Bluesky thread [sic]:

it’s important to keep our guard up and it can be very useful to speculate about where things may go. that is not the same as saying without evidence that these things are already happening, or seeing different capabilities and assuming they are all being mashed together on a super spy platform

The revised version of the article is more responsible than the original. That is not a high bar. There are discrete things McMillen gets right, but the sum of these parts is considerably less informative and less useful.

For example, McMillen points out how advertising identifiers can be used for surveillance, which is true but is not specific to TikTok either before or after the U.S. assets divestiture. It is a function of this massive industry in which we all participate to some extent. If we want a discussion — or, better yet, action — regarding better privacy laws, we can pursue that without the snowballing effect of mashing together several possible actions and determining it is definitely a coordinated effort between ICE, TikTok, Palantir, Amazon’s Ring, and the Supreme Court of the United States.

Speaking of infinite scrolling, Kyle Hughes just updated Information Superhighway, his app for endlessly reading randomized Wikipedia articles. It has a new icon and a Liquid Glass button — and that is the update, as far as I can tell.

The only kind of brain rot this will give you is more of an ageing and fermenting process: the result of scrolling from an article about a Japanese erotic black comedy-horror film to another about an archaeological site in Belize, and then to one about the British Pirate Party.

A free app, with no strings attached. Probably the best way to infinitely scroll your day away.

The European Commission:

The Commission’s investigation preliminarily indicates that TikTok did not adequately assess how these addictive features could harm the physical and mental wellbeing of its users, including minors and vulnerable adults.

For example, by constantly ‘rewarding’ users with new content, certain design features of TikTok fuel the urge to keep scrolling and shift the brain of users into ‘autopilot mode’. Scientific research shows that this may lead to compulsive behaviour and reduce users’ self-control.

Additionally, in its assessment, TikTok disregarded important indicators of compulsive use of the app, such as the time that minors spend on TikTok at night, the frequency with which users open the app, and other potential indicators.

It is fair for regulators to question the efficacy of measures claiming to “promote healthier sleep habits”. This wishy-washy verbiage is just as irritating as when it is employed by supplement companies, and it should be more strictly regulated.

Trying to isolate infinite scrolling as a key factor in encouraging unhealthy habits is, I think, oversimplifying the issue. Contrary to the conclusions drawn by some people, I am unsure if that is what the Commission is suggesting. The Commission appears to have found this is one part of a constellation of features that are intended to increase the time users spend in the app, regardless of the impact it may have on users. In an article published last year in Perspectives on Public Health, two psychologists sought to distinguish this kind of compulsive use from other internet-driven phenomena, arguing that short-form video “has been particularly effective at triggering psychological patterns that keep users in a continuous scrolling loop”, pointing to a 2023 article in Proceedings of the ACM on Human-Computer Interaction. It is a mix of the engaging quality of video with the unknown of what comes next — like flipping through television channels, only entirely tailored to what each user has previously become enamoured with.

Casey Newton reported on the Commission’s investigation and a similar U.S. lawsuit. Here is the lede:

The old way of thinking about how to make social platforms safer was that you had to make them do more content moderation. Hire more people, take down more posts, put warning labels on others. Suspend people who posted hate speech, and incitements to violence, or who led insurrections against their own governments.

At the insistence of lawmakers around the world, social platforms did all of this and more. But in the end they had satisfied almost no one. To the left, these new measures hadn’t gone nearly far enough. To the right, they represented an intolerable infringement of their freedom of expression.

I find the left–right framing of the outcomes of this entirely unproductive and, frankly, dumb. Even as a broad generalization, it makes little sense: there are plenty of groups across the political spectrum arguing their speech is being suppressed. I am not arguing these individual complaints are necessarily invalid. I just think Newton’s argument is silly.

Adequate moderation is an effective tool for limiting the spread of potentially harmful posts for users of all ages. While Substack is totally cool with Nazis, that stance rarely makes for a healthy community. Better behaviour, even from pseudonymous users, is encouraged by marginalizing harmful speech and setting relatively strict boundaries for what is permissible. Moderation is difficult to do well, impossible to do right, and insufficient on its own — of course — but it is not an old, outdated way of thinking, regardless of what Mark Zuckerberg argues.

Newton:

Of course, Instagram Reels and YouTube Shorts work in similar ways. And so, whether on the stand or before the commission, I hope platform executives are called to answer: if you did want to make your products addictive, how different would they really look from the ones we have now?

This is a very good argument. All of these platforms are deliberately designed to maximize user time. They are not magic, nor are they casting some kind of spell on users, but we are increasingly aware they have risks for people of all ages. Is it so unreasonable for regulators to have a role?

When I watched a bunch of A.I. company ads earlier this year, I noted Anthropic’s spot was boring and vague. Well, that did not last, as it began running a series of excellent ads mocking the concept of ads appearing in a chatbot. They are sharp and well-written. No wonder Anthropic aired them during the Super Bowl.

Anthropic also published a commitment to keep Claude ad-free. I doubt this will age well. Call me cynical, but my assumption is that Anthropic will one day have ads in its products, but perhaps not “Claude” specifically.

The reason anyone is discussing this is that ads are coming to OpenAI’s ChatGPT:

ChatGPT is used by hundreds of millions of people for learning, work, and everyday decisions. Keeping the Free and Go tiers fast and reliable requires significant infrastructure and ongoing investment. Ads help fund that work, supporting broader access to AI through higher quality free and low cost options, and enabling us to keep improving the intelligence and capabilities we offer over time. If you prefer not to see ads, you can upgrade to our Plus or Pro plans, or opt out of ads in the Free tier in exchange for fewer daily free messages.

Ads do not influence the answers ChatGPT gives you. Answers are optimized based on what’s most helpful to you. When you see an ad, they are always clearly labeled as sponsored and visually separated from the organic answer.

It is incredible how far we have come for these barely-distinguished placements to be called “visually separated”. Google’s ads, for example, used to have a coloured background, eventually fading to white. The “sponsored link” text turned into a little yellow “Ad” badge, eventually becoming today’s little bold “Ad” text. Apple, too, has made its App Store ads blend into normal results. In OpenAI’s case, it has opted to delineate ads with a grey background and a “Sponsored” label.

Now OpenAI has something different to optimize for. We can all pretend that free market forces will punish the company if it does not move carefully, if it inserts too many ads, or if organic results start to feel influenced by ad buyers. But we have already seen how this works with Google search, Instagram, YouTube, and elsewhere. These platforms are ad-heavy to the detriment and frustration of users, yet they remain successful and growing. No matter what you think of OpenAI’s goals already, ads are going to fundamentally change ChatGPT and the company as a whole.

Do you want to block ads and trackers across all apps on your iPhone, iPad, or Mac — not just in Safari?

Then download Magic Lasso Adblock — the ad blocker designed for you.

Magic Lasso: No ads, No trackers, No annoyances, No worries

The new App Ad Blocking feature in Magic Lasso Adblock v5.0 builds upon our powerful Safari and YouTube ad blocking, extending protection to:

  • News apps

  • Social media

  • Games

  • Other browsers like Chrome and Firefox

All ad blocking is done directly on your device, using a fast, efficient Swift-based architecture that follows our strict zero data collection policy.

With over 5,000 five star reviews, it’s simply the best ad blocker for your iPhone, iPad, and Mac.

And unlike some other ad blockers, Magic Lasso Adblock respects your privacy, doesn’t accept payment from advertisers, and is 100% supported by its community of users.

So, join over 350,000 users and download Magic Lasso Adblock today.

Geraldine McKelvie, the Guardian:

The global publishing platform Substack is generating revenue from newsletters that promote virulent Nazi ideology, white supremacy and antisemitism, a Guardian investigation has found.

I appreciate the intent of yet another article drawing attention to Substack’s willingness to host straightforward, no-ambiguity Nazi publications, but I wish McKelvie and the Guardian had given more credit to all the similar reporting that came before them. For example:

Among them are newsletters that openly promote racist ideology. One, called NatSocToday, which has 2,800 subscribers, charges $80 – about £60 – for an annual subscription, though most of its posts are available for free.

This is the very same account which, according to reporting by Taylor Lorenz last year, was promoted in a push notification from Substack. Substack told Lorenz the notification was “a serious error”. In the same article, Lorenz drew attention to NatSocToday’s recommendation of another explicitly Nazi publication hosted on Substack called the White Rabbit. This, too, is included as an example in McKelvie’s more recent report. Lorenz’s prior reporting goes unmentioned.

However, because both stories have contemporary screenshots of each Nazi publication’s profile, we can learn something — and this is another reason why I wish Lorenz’s story had been cited. NatSocToday’s 2,800 subscribers as of this week does not sound like very much, but when Lorenz published her article at the end of July, it had only 746 subscribers. It has grown by over 2,000 subscribers in just six months. The same appears true of the White Rabbit, which went from “8.6K+” subscribers to “10K+” in the same timeframe.

One thing McKelvie gets wrong is suggesting “subscribers” equates to “paying members”. Scrolling through the subscriber lists of both of the publications above shows a mix of paid and free members. This is supported by Substack’s documentation, which I wish I had thought of checking before visiting either hateful newsletter. That is, while Substack is surely making some money by mainstreaming and recommending the kind of garbage people used to have to deliberately try to find, it is not a ten percent cut of the annual rate multiplied by the subscriber count.
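To put rough numbers on that last point, here is an illustrative sketch. Only the subscriber count, the annual rate, and Substack’s ten percent cut come from the reporting; the paid share is entirely hypothetical, since neither report gives one:

```python
# Figures from the reporting: 2,800 subscribers, an $80 annual rate, and
# Substack's standard ten percent cut of subscription revenue.
subscribers = 2_800
annual_rate = 80
substack_cut = 0.10

# Naive upper bound: every subscriber pays the annual rate.
upper_bound = subscribers * annual_rate * substack_cut
print(f"Upper bound: ${upper_bound:,.0f} per year")  # $22,400 per year

# More realistic: only some fraction pays. The five percent here is a
# hypothetical share for illustration, not a figure from either report.
assumed_paid_share = 0.05
estimate = subscribers * assumed_paid_share * annual_rate * substack_cut
print(f"With a 5% paid share: ${estimate:,.0f} per year")  # $1,120 per year
```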

By the way, while I am throwing some stones here, I should point out that Lorenz herself launched her User Magazine newsletter about a year after Jonathan M. Katz’s article “Substack Has a Nazi Problem”. Based on its archive, Lorenz just repurposed her personal Substack newsletter and existing audience to create User Mag. But Substack’s whole premise is that you own your email list and can bring it elsewhere, so Lorenz could have chosen any platform. Substack was never just infrastructure — it is a social media website with longform posts as its substance, and indifferent moderation as a feature.

If you want to understand what goes into a big YouTube production, this behind-the-scenes look from the tenth most popular tech channel seems to be a good place to start. It is remarkable how Marques Brownlee has grown from being just a guy making webcam videos from home to having a dedicated production space full of staff — and it all kind of hinges on YouTube, a singular video hosting platform. That would make me anxious daily, but Brownlee has made it work for about nine years.

Another thing that surprised me about this behind-the-scenes look is just how far some companies will go to accommodate Brownlee. The Google Pixel team brought a bunch of unreleased devices to him for a first impressions video. That video was embargoed until an hour before Google was set to announce those devices. Brownlee, like anyone in the review space, has been accused of bias and favouritism, usually unfairly. If I were him, this kind of closeness would make me feel uncomfortable, as though Google were using me. I think Brownlee’s videos speak for themselves, however.

Josh Scott, BetaKit:

Earlier this week, Canada’s innovation ministry shared the results from the 30-day national consultation it held late last year to inform the country’s upcoming AI strategy. The consultation included feedback from 28 members of Canada’s AI task force, appointed by AI Minister Evan Solomon, which included representatives from industries and fields from tech to academia.

[…]

All of the AI strategy task force member submissions can be read in full here. The official government report on those submissions was put together with a mix of AI tools; BetaKit, instead, assigned a human reporter to review all 348 pages.

I read the high-level summary (PDF) and it very much has the vibe of being created with A.I. — it is unbearably dull.

Michael Geist:

There are many other examples, including differing views on digital sovereignty, regulatory sequencing (what should be prioritized first), and inclusion. Put 28 experts on a panel and there is obviously going to be differing views. Indeed, that’s the point of gathering a diversity of perspectives. In theory, the government got what it asked for with expert reports that frequently point to uncomfortable questions. However, many disappear in the government summary that smooths over urgency and re-frames hard choices as balanced policy choices.

Perhaps my favourite passage is on page 9 of the summary (PDF):

Stakeholders were divided between optimism for AI’s potential and skepticism about its risks. Supporters see opportunities for productivity gains and economic growth, while critics warn of ethical, environmental and social harms.

It is entirely accurate, yes, but it is so funny to me to summarize the arguments raised by critics in this breezy, balanced manner. It says nothing and yet gives the impression that the two positions are of similar weight.

Sharon Lerner and Andy Kroll, reporting for ProPublica in 2024:

An analysis of more than 2,000 public-records requests submitted by [Colin] Aamot, [Mike] Howell and [Roman] Jankowski to more than two dozen federal offices and agencies, including the State Department, the Department of Homeland Security and the Federal Trade Commission, shows an intense focus on hot-button phrases used by individual government workers.

Those 2,000 requests are just the tip of the iceberg, Howell told ProPublica in an interview. Howell, the executive director of the Oversight Project, estimated that his group had submitted more than 50,000 information requests over the past two years. He described the project as “the most prestigious international investigative operation in the world.”

Miranda Green, reporting for Columbia Journalism Review this week:

Founded in 2019, Metric has been criticized for jury tampering and tied to pay-for-play political schemes and fake newspapers that land in mailboxes ahead of key elections. Recently, it has focused on obtaining troves of public records. In the past year, an investigation by the Tow Center for Digital Journalism has found Metric filed more than nine thousand Freedom of Information Act requests across all fifty states.

[…]

Brian Timpone, one of the founders of Metric Media, disputed Tow’s FOIA count, saying, “We have sent far more FOIAs than that.” He did not respond to a list of detailed questions. Metric’s other founders, Dan Proft and Bradley Cameron, did not respond to requests for comment.

I recognize that, as a resident of a country outside the United States — in fact, a resident of one that is struggling with its own sunshine law problems — my commentary here is of little weight. I am, in principle, not opposed to a voluminous set of requests, and especially not when they are for things that should be public already.

However, while the Heritage Foundation portrays its efforts as “oversight”, the sheer number of requests made by these groups creates logjams for everyone, thereby preventing oversight. Oftentimes, they are merely requesting a huge, aimless dump of communications to sift through. For example, the most recent log (PDF) from the Department of Homeland Security’s Science and Technology Directorate mostly comprises requests from Aamot and Howell for emails and text messages.

Some people believe adding a fee to each request — perhaps only for commercial requests — would reduce the flood. But it is generally not free to file these kinds of requests in Canada, and we also have a massive backlog. Another option is proactive disclosure of high-ranking officials’ communications and calendar entries, but I imagine that will cause all sorts of tangential problems even with liberal use of redactions. Perhaps there are technological improvements that could allow for more automated discovery, but a real person would still need to verify the work and screen for exclusions. There is no way out of this without a significant increase in funding and staffing, especially not when it is easier than ever to flood agencies with automated requests that, regardless of their intentions, must be processed like any other.

Kirk McElhearn:

I use Apple News to keep up on topics that I don’t find in sources I pay for (The Guardian and The New York Times). But there’s no way I’m going to pay the exorbitant price Apple wants for Apple News+ – £13 – because, while you get more publications, you still get ads.

And those ads have gotten worse recently. Many if not most of them look like and probably are scams. Here are a few examples from Apple News today.

Apple promotes News by saying it offers “trusted sources” in an app that is “rewriting the reading experience”. And, when Apple partnered with Taboola, Sara Fischer at Axios reported it would “establish certain levels of [quality] control around which advertisers it will sell through to Apple apps”. Unsurprisingly, the highest-quality Taboola ads are still bait for some kind of fraud, and they are all over Apple’s ostensibly selective apps. Probably good for services revenue, though.

Bad news from the CIA. I mean, probably not what Senator Ron Wyden was referring to and, on a relative scale for the CIA, this is pretty tame. But, still, disappointing:

One of CIA’s oldest and most recognizable intelligence publications, The World Factbook, has sunset. The World Factbook served the Intelligence Community and the general public as a longstanding, one-stop basic reference about countries and communities around the globe. Let’s take a quick look into the history of The World Factbook.

Simon Willison:

In a bizarre act of cultural vandalism they’ve not just removed the entire site (including the archives of previous versions) but they’ve also set every single page to be a 302 redirect to their closure announcement.

The Factbook has been released into the public domain since the start. There’s no reason not to continue to serve archived versions – a banner at the top of the page saying it’s no longer maintained would be much better than removing all of that valuable content entirely.

I am just guessing here, but I think the CIA can afford to keep this stuff available online indefinitely. The Internet Archive can, and it is not being given tens of billions of dollars annually.

There have been a few stories recently involving the investigation of leaks by U.S. government employees and contractors, and the naked aggression shown toward leakers, and I thought it would be useful to round them up.

First, the U.S. Department of the Treasury, in a press release announcing the cancellation of contracts with Booz Allen Hamilton:

Most notably, between 2018 and 2020, Charles Edward Littlejohn — an employee of Booz Allen Hamilton — stole and leaked the confidential tax returns and return information of hundreds of thousands of taxpayers. To date, the IRS determined that the data breach affected approximately 406,000 taxpayers. Littlejohn has pled guilty to felony charges for disclosing confidential tax information without authorization.

Littlejohn was prosecuted under the Biden administration, and is being sued by the current president. The stories produced from the information he revealed, however, thankfully remain available. The New York Times and ProPublica each have stories revealing how little income tax is paid by the wealthiest Americans. It is not just a pittance relative to their net worth; in some cases, it is absolutely nothing.

Kim Zetter, Zero Day:

In February 2018, he was back in a position with access to IRS taxpayer data but didn’t immediately steal records. Prosecutors say he developed a “sophisticated” scheme to download the documents nine months later. This included not searching directly for documents related to the government official, which might have triggered a system alert, but querying the database “using more generalized parameters.” Prosecutors don’t specify the search terms Littlejohn used, but they note that the search parameters he used would have produced not only the tax records of the government official he sought to expose, but also those of other taxpayers he wasn’t targeting. By November 2018, he had extracted 15 years worth of tax records for President Trump, prosecutors say.

Because IRS protocols can detect and prevent “large downloads or uploads from IRS systems and devices,” according to prosecutors, Littlejohn avoided copying the records to removable media such as a USB stick — as Edward Snowden had done when he took documents from NSA servers. Instead Littlejohn “exploited a loophole in those controls” by transmitting the stolen tax records to a private website that he controlled, which was not accessible to the public.

Despite these careful steps, Littlejohn was ultimately caught, though I am not sure how. I read through the relevant docket entries and, unless I missed something, the government does not appear to have explained its investigation, particularly since Littlejohn pleaded guilty.

A different case — Richard Luscombe and Jeremy Barr, reporting for the Guardian, last month:

The FBI raided the home of a Washington Post reporter early on Wednesday in what the newspaper called a “highly unusual and aggressive” move by law enforcement, and press freedom groups condemned as a “tremendous intrusion” by the Trump administration.

Agents descended on the Virginia home of Hannah Natanson as part of an investigation into a government contractor accused of illegally retaining classified government materials.

Nikita Mazurov, the Intercept:

Federal prosecutors on January 9 charged Aurelio Luis Perez-Lugones, an IT specialist for an unnamed government contractor, with “the offense of unlawful retention of national defense information,” according to an FBI affidavit. The case attracted national attention after federal agents investigating Perez-Lugones searched the home of a Washington Post reporter. But overlooked so far in the media coverage is the fact that a surprising surveillance tool pointed investigators toward Perez-Lugones: an office printer with a photographic memory.

It is particularly rich for the Intercept to be pointing to the printer as a reason this individual was allegedly outed. Secret documents published by the site in 2017 included printer steganography that, while not directly implicated (PDF) in revealing the leaker’s identity, was insufficiently protective of the site’s source.

In the case of Perez-Lugones, investigators were apparently able to retrace his footsteps, as described in paragraphs 16 through 29 of the affidavit (PDF). It does not sound like he took particularly careful steps to avoid leaving a history of the documents he accessed and then printed. I have no illusions that my audience is full of people with top secret clearance and an urge to leak documents to the press, but anyone who is should consider reading — on their personal device in private browsing mode — the guidance provided by Freedom of the Press Foundation and NiemanLab.

Joseph Cox, 404 Media:

The FBI has been unable to access a Washington Post reporter’s seized iPhone because it was in Lockdown Mode, a sometimes overlooked feature that makes iPhones broadly more secure, according to recently filed court records.

Some general-audience publications, like the HuffPost, are promoting the use of Lockdown Mode as a “useful and simple built-in tool you should turn on ASAP” for anyone who “feels targeted by cybersecurity threats”. But we are all targeted, to some extent or another, by cybersecurity threats. Most people should not use Lockdown Mode. It is an enormously disruptive option that is only a reasonable trade-off for anyone who has good reason to believe they would be uniquely targeted.

Cox:

The FBI was still able to access another of Natanson’s devices, namely a second silver Macbook Pro. “Once opened, the laptop asked for a Touch Id or a Password,” the court record says. Natanson said she does not use biometrics for her devices, but after investigators told her to try, “when she applied her index finger to the fingerprint reader, the laptop unlocked.” The court record says the FBI has not yet obtained a full physical image of the device, which provides an essentially complete picture of what was stored on it. But the agents did take photos and audio recordings of conversations stored in the laptop’s Signal application, the court record says.

Warrants for seizing electronic devices have, for several years now, sometimes contained a clause reading something like “law enforcement personnel are authorized to […] press or swipe the fingers (including thumbs) of (the warrant subject) to the fingerprint scanner of the device(s) [and] hold the device(s) in front of the face of (the warrant subject) to activate the facial recognition feature”.

One thing every iPhone owner should know is that they can temporarily disable biometric features by pressing and holding the power button (on the right-hand side of the device) and either volume button for a few seconds, until the “slide to power off” option appears. To reactivate biometric features, you will need to enter your passcode. You can press these buttons while your phone is in your pocket. You should do this any time you are anticipating an interaction with law enforcement or those working on their behalf.

However, I cannot find a similar capability for a MacBook with a Touch ID sensor. If you are the kind of person who feels like Lockdown Mode might apply to you, you should consider turning off Touch ID, too, and sticking with a strong and memorable passphrase.

Paris Marx is trying to wean himself off U.S. tech services, in large part because of the leverage this dependence enables. On streaming music, and with a reasonable rejection of Sweden-based Spotify, Marx was left with a couple other options:

I’ve been on Apple Music for the past few years, but recently switched to Deezer 🇫🇷 and don’t see why I would need to go back given the catalogs of music-streaming services are pretty similar — unlike on the video side of things. Maybe another plus: Deezer isn’t trying to push video at me like Spotify does.

[…] Like streaming video though, there is another option: grabbing an old iPod or new MP3 player and loading it up with the music you want to listen to.

Well, Marx found his old iPod and is figuring out what needs modernizing and fixing up.

For Christmas this past year, a family member wanted a replacement for their iPod Nano. I looked up and down for options — something small and usable while running, but still a quality product with an easy syncing experience. I found I had basically two options: cheap iPod Shuffle lookalikes that are slow and difficult to use, and extremely expensive players for enthusiasts. So I bought a refurbished iPod Nano — and they love it.

While I was shopping in that store, it was awfully tempting to pick up a refurbished iPod Classic for myself. I still have a 60 GB fifth-generation model, though I do not think it can hold a charge. I remember when that felt like a lot of storage. I do not miss my iPod Touch or old iPhones, but I miss that iPod.

Lars Ingebrigtsen:

For some reason or other, people have been posting a lot of excerpts from old emails on Twitter over the last few days. The most vital question everybody’s asking themselves is: What’s up with all those equals signs?!

And that’s something I’m somewhat of an expert on. I mean, having written mail readers and stuff; not because I’ve been to Caribbean islands.

Good to know for anyone reading through a giant tranche of someone else’s email, which, through context clues, you may realize is the least creepy part of it all.
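For the curious, my guess, based on the usual culprit in old email dumps, is that those equals signs are quoted-printable encoding, which Python’s standard quopri module can undo. A minimal sketch with a made-up message:

```python
import quopri

# Quoted-printable uses "=" as an escape: "=E2=80=99" encodes the bytes of a
# curly apostrophe, and a bare "=" at the end of a line is a soft line break
# inserted to keep lines under 76 characters.
raw = b"Let=E2=80=99s meet at the caf=C3=A9 to discuss the =\r\nquarterly numbers."

print(quopri.decodestring(raw).decode("utf-8"))
# Let’s meet at the café to discuss the quarterly numbers.
```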