Jason Koebler, 404 Media:

Over the last week, users of X realized that they could use Grok to “put a bikini on her,” “take her clothes off,” and otherwise sexualize images that people uploaded to the site. This went roughly how you would expect: Users have been derobing celebrities, politicians, and random people—mostly women—for the last week. This has included underage girls, on a platform that has notoriously gutted its content moderation team and gotten rid of nearly all rules.

Ananya Bhattacharya, Rest of World:

On January 2, India’s IT ministry issued a 72-hour ultimatum to X over its artificial intelligence chatbot generating “obscene” content — specifically, sexualized or manipulated images of women and, in some cases, minors.

Maggie Harrison Dupré, Futurism:

In addition to the sexual imagery of underage girls, the women depicted in Grok-generated nonconsensual porn range from some who appear to be private citizens to a slew of celebrities, from famous actresses to the First Lady of the United States. And somehow, that was only the tip of the iceberg.

When we dug through this content, we noticed another stomach-churning variation of the trend: Grok, at the request of users, altering images to depict real women being sexually abused, humiliated, hurt, and even killed.

It is extraordinary yet, sadly, predictable to me that xAI is not treating this as a problem. Setting aside the fundamentally offensive capability that any user can tell Grok to generate photorealistic images based on someone else’s likeness — something it should be prohibited from doing — there appears to be no rush to fix this high-res misogyny on demand. A basic corporate response would be to turn off image generation capabilities until better safeguards are in place. Yet I just opened the Grok account on X, switched to the Replies tab, and it took almost no scrolling at all to find images like these, generated mere seconds ago.

The kinds of images Grok is generating should be criminal, and the people who oversee it should be held liable, as should those prompting it. Though I am not a lawyer, in my eyes it is no less a case of harassment when a robot is directed by a user than when the user does it themself. I know A.I. is a nascent industry and there are legitimate worries about over-regulation, but this should not be one of them. xAI is clearly overseen by adolescent-brained people who think it is sufficient to forbid heinous requests in company policy, rather than actually restricting its own software.

Elissa Welle, the Verge:

A viral Reddit confessional about a “major food delivery app” posted January 2nd is most likely AI-generated. The original post by user Trowaway_whistleblow alleged that an unnamed food delivery company regularly delays customer orders, calls couriers “human assets,” and exploits their “desperation” for cash, among other indefensible actions. Nearly 90,000 upvotes and four days later, it’s become increasingly clear that the post’s text is probably AI-generated.

The link from “viral Reddit confessional” is not to the thread itself, but to a link post from the Verge. That post, from Andrew J. Hawkins, still asks “[w]hich company do you think this is about?” and has not been updated to reflect the fictional status of this story.

Alex Schultz, Hard Reset:

But the allegations in the Reddit thread are not true. They are fabricated. The original post was generated by AI and was intended to defame Uber Eats. I know as much because I traded Signal messages with Trowaway_whistleblow, who additionally used AI to prepare a fake-but-sophisticated-looking “internal document” of other supposedly damning allegations; Trowaway_whistleblow circulated the document to journalists, hoping someone would take the bait and publish a libelous “scoop.”

I was shocked to see so many people credulously posting a link to this thread. I write that not to scold anyone, only out of genuine surprise that an evidence-free anonymous Reddit post with clear inconsistencies was being spread so widely. There is so much existing evidence that so-called “gig economy” workers are exploited, underpaid, and endangered, and that food delivery platforms specifically are bad for restaurants, that we simply do not need to add questionable Reddit posts to the mix. This scam played into that reputation perfectly. If anything, that is why it makes sense to be even more skeptical. Credulity creates confusion, and that confusion undermines the documented record of these platforms’ failures.

You may remember Jim Nielsen’s exploration last month — previously linked — of the various icons in MacOS Tahoe menus, and comparing them to Apple’s long-time recommendations for icon use.

Nikita Prokopov went even further:

In my opinion, Apple took on an impossible task: to add an icon to every menu item. There are just not enough good metaphors to do something like that.

But even if there were, the premise itself is questionable: if everything has an icon, it doesn’t mean users will find what they are looking for faster.

And even if the premise was solid, I still wish I could say: they did the best they could, given the goal. But that’s not true either: they did a poor job consistently applying the metaphors and designing the icons themselves.

This is a gallery of elementary problems. None of this should have shipped if someone with power internally had a critical eye for consistency and detail. If Apple deems it necessary to retain the icons, though I am not sure why it would, it should be treating this post as one giant bug report.

Update: Speaking of illegible, I think Prokopov should reconsider both the automatically-enabled snow animation and the searing yellow background.

Shalini Govil-Pai, Google:

At CES 2026, we’re previewing how Gemini is making Google TV even more helpful, and bringing more ways to interact with your TV across more brands and surfaces like projectors. Here’s what we announced:

[…]

  • Use Nano Banana and Veo to reimagine your personal photos or create original media directly on your TV.

I guess I am the dumb one because I cannot see any reason why I would want Google to generate new scenes from my photos — let alone “original media”, whatever that means. This seems like something that would be fun for about ten minutes, once, and then would never be touched again.

I am not sure which was the first of Google’s A.I. ads I saw that made me wonder if I was suffering from a fever. Maybe it was the one where “pandas in high fashion outfits walk the runway” — shown over thirty million times since early November, as of writing — but it just as easily could have been the one with “some crazy fruit creatures”. Maybe it was the ad with the strutting dogs. All are equally bizarre in the way a giant corporation’s ad agency thinks it is offering something quirky and weird.

In any case, each ends with the same encouragement: “make more videos with a Google A.I. Pro subscription” for $27 per month. And I have to ask: who is this for?

These are not the only ads Google is running for its A.I. features. It shows them off in context for general users in Pixel phone ads, for students in ads for NotebookLM, for corporate clients with Cloud A.I., and — of course — for advertisers.

It is also not the only ad for its A.I. Pro subscription bundle. In a different one, a string of tasks lets someone put together a ’90s trivia night while doing absolutely no work, leading to an invitation that twice tells recipients to wear platform shoes and “get ready”. The fine print might remind viewers to “check responses for accuracy”, but in the ad, the creator clicks send on a terribly written invitation immediately.

But it is those bizarre animal-themed ones which led me to look up the advertising strategies of all the big A.I. players. I think this is a fascinating window. The ads they pay to show give us a glimpse of how they idealize the use of these products and services. While ads can sometimes be abstracted from the true function of a product or service, A.I. is already a new and confusing thing, so the narrative each company spins about its products seems telling of its own vision. I limited my search to ads in a typical video spot format — no more than 30 seconds long. I did not include case studies or company-produced tutorials.

OpenAI runs a couple of types of ads. Those for ChatGPT mostly show its use as a personal assistant. There are several 30-second spots shot as oners with a pleasingly retro warmth. One has a training regimen for learning to do pull-ups; another is about siblings on a road trip. The same kind of messaging is used in a series of shorter spots. It also runs ads for its Codex agent, obviously targeted at developers, that are more clinical.

All of these seem practical to me. I could not find any current ads from OpenAI as disconnected from reality as Google’s. Just as notable is the focus of OpenAI’s spots — Google’s Ads Transparency Centre says the company is running about two hundred ads in Canada right now, most of which are variations in size, targeting, and placement of the shorter practical examples above, plus ads for Codex. For comparison, Google’s ads are all over the place. It is running around twenty thousand ads right now in Canada and, though not all of them are for A.I. features, many are, and you can tell from the examples above how much Google is just throwing stuff at the wall.

Anthropic’s ads are far more limited. All are for Claude, and all are variations of a video ad with no indication of how it is being used. It simply says “Claude is A.I. for […] all of us, anywhere” with overhead shots of different scenes representing different professions. This is basically the same sentiment as OpenAI’s ads, but executed without any specificity or examples. The company’s YouTube channel has plenty of case studies and demos, but no similar video spots.

If Anthropic is trying to mimic OpenAI’s quiet confidence, Perplexity has chosen overt aggression. Quasi-influencer types follow a similar script: ChatGPT makes things up, they say, and that is why you should trust Perplexity as it “searches the entire internet in less than one second and gives you one verified answer”. This explanation avoids acknowledging how much Perplexity depends on external A.I. models, including OpenAI’s GPT. In two of the ads, the narrator asks health-related questions, which is the boldest and maybe most reckless use case I have seen in any A.I. ad. There is nothing wrong with the answers it has generated, so far as I can tell, but it seems like a higher-stakes use than, say, a cake recipe. In a variation of this ad type, the “influencer” looks generated, and the ad contains Google search result screenshots that are at least twelve years old. Whatever the case, Perplexity must be spending huge amounts on its advertising. Each of these has been shown tens of millions of times, and two have hundreds of millions of views.

Meta’s ads are bizarre. I could not find any video ads for A.I. stuff — or metaverse-related things — through Google’s Ads Transparency Centre, but the company has a handful of 30-second spots on its YouTube channel. Almost like Google, Meta does not seem to know why people would use its A.I. assistant, so it comes up with offbeat premises that fall flat. In one, described as a way to “help sharpen your skills and transform the way you learn and understand complex subjects”, a guy is meeting his partner’s dad, a thermodynamicist. What a real person who is good at conversations would do is ask the dad a bunch of questions. What Meta thinks you should do is ask its assistant those questions instead and develop a fake level of expertise.

Also, not for nothing, but when I asked the same question in the web version of Meta A.I., it responded not with the tight and efficient explanation in the ad, but with:

😂 Thermodynamics-ist? Sounds like a real party crusher, eh? 🤣 Okay, serious face… A thermodynamicist is like, someone who’s all about the study of heat, energy, and how they transform into each other. They’re like the ultimate energy nerds, but in a good way! 🤓 They work with stuff like engines, refrigeration, and even black holes (mind blown, right? 🤯). Basically, they help us understand how to harness and use energy efficiently. Pretty cool, eh?

Am I meant to be charmed by “okay, serious face…” and four emoji in this barely coherent response? Because I am not.

In another ad, a guy somehow turns a large pan of what appears to be milk and two other pans of mystery liquid into pancakes for his kid. The kid pushes the plate off the counter, the dad asks “how can I get my two-year-old to eat his breakfast?”, and Meta A.I. knows that the kid likes when he makes silly faces. It is unclear to me how it knows this — whether it is automatically personalized to this user, or something the dad needed to teach the system, or if it is just an assumption that all two-year-olds like silly faces. The impression I get, though, is that it is distinctly unbelievable, and I do not want to give Meta knowledge of my entire life to find out if I am right.

To round this out, I looked for similar ads from Microsoft and Apple. I scrolled through hundreds of Microsoft ads on Google’s Ads Transparency tool, virtually all of which were Xbox related. I also checked its YouTube channel. I did not see any ads of the same type as above. I also found only a couple of ads for Apple Intelligence on Apple’s YouTube channel, both of which are for Clean Up in Photos. Apple seems to have cleaned up its YouTube channel overall, removing a whole bunch of older ads including some for Apple Intelligence.

I do not want to overstate how much these ads tell us — they are ads, you know? — but I think I learned something from the way each of these businesses thinks of its own products. In OpenAI, I see confidence; in Anthropic and Perplexity, I see an attempt to catch up. And in Google and Meta, I see established companies that are desperate to prove themselves — particularly in Google’s case, as I still cannot understand why generating arbitrary video is supposed to be compelling to a broad audience.

In the most practical and grounded ads, what I do not see are significant leaps beyond what a search engine today could do. OpenAI’s ads show ChatGPT summarizing a workout plan, but there are loads of those on external websites. Guides to road tripping through the Blue Ridge Parkway are plentiful. The same is true of the responses in Perplexity’s ads. What I see most in these ads are the big “pure” A.I. players normalizing their raison d’être, and established mega corporations entirely out of touch with what someone might want to do. Both are embarrassing in their own way for what is often pitched as the most revolutionary technology since the internet.

2025 was a tough year for pretty much everyone I know, and I imagine the coming year will bring more of the same dreary news. I thought I would start off 2026 with something much dumber. I have plumbed the depths and found the bottom of the deepest of barrels, and that is how you are reading an “I asked ChatGPT” post.

There is a minor point here. I needed to use up a cup of milk and a quarter-cup of sour cream lurking in my fridge, and I wanted to make some kind of quick bread or loaf with it. The problem is that I am an inexperienced baker and I do not have a great sense of the proportion of other ingredients I would need.

I think this is the kind of thing a more advanced search engine could help with. I have previously used recipe finder tools like Supercook and RecipeRadar but, while they often ask what ingredients are available, they usually do not do so in specific quantities, necessitating looking through a bunch of recipes to find one that fits. Something that can process longer natural language queries seems tailor-made for this kind of task.

I told it “I would like loaf/quick bread recipes that will use up 1 cup of milk and ¼ cup of sour cream”, and ChatGPT gave me four options. The first, a “classic vanilla quick bread”, looked adequate, but boring. (Also, a careful reader will quibble with its “optional add-ins (1–1½ cups total)” after which is a list of, among other things, cinnamon, and I do not think you should add a whole cup of cinnamon to anything.) I do like cinnamon, though, and I settled on the fourth suggestion, a “cinnamon swirl coffee bread”. Here is the entirety of its recipe:

Batter

  • 2 cups flour
  • ¾ cup sugar
  • 1½ tsp baking powder
  • ½ tsp baking soda
  • ½ tsp salt
  • 1 cup milk
  • ¼ cup sour cream
  • ⅓ cup oil
  • 2 eggs
  • 1 tsp vanilla

Swirl

  • ⅓ cup brown sugar
  • 1½ tsp cinnamon

Bake

  • Layer batter + swirl in pan, gently knife once
  • 350°F for 50–60 min

The ingredient list, to my untrained eye, seems fine. The instructions are obviously incoherent. More comprehensive directions were offered for the “classic vanilla quick bread” above it, which I missed because I only followed this recipe. Since I made a different loaf recently, however, I had a rough idea of what I should do. Also, I made a couple of minor changes:

  • I substituted a few dashes of Fee Brothers’ cardamom bitters for the vanilla extract.

  • I had no ground cinnamon on hand and grating a cinnamon stick is tedious, so I stopped after about a teaspoon.

While these ingredient substitutions might affect the flavour, they would not materially affect the chemistry.

A cross-section of a golden loaf with a brown sugar swirl in the middle.

The resulting loaf is fine. I was hoping for either catastrophic failure or incredible success to more wholly justify this low-effort post, but it was just fine. Better than I expected, given where it came from, though the brown sugar swirl is achingly sweet and settled in the middle despite my best attempts. I still do not know what “gently knife once” is supposed to mean. I would not confuse this with a professional baker’s work, of course, but that is more like operator error. I wish the directions were, overall, clearer; if I had little to no previous experience baking a quick bread, I might have been lost.

I have experimented with ChatGPT and food before, particularly for weekly meal planning, and I have never been satisfied with its results. This, though, worked pretty well for me. I got to use up a couple of things in my fridge and made an okay dessert from it. Happy New Year.

Rindala Alajaji, the Electronic Frontier Foundation:

Age-verification mandates create barriers along lines of race, disability, gender identity, sexual orientation, immigration status, and socioeconomic class. While these requirements threaten everyone’s privacy and free-speech rights, they fall heaviest on communities already facing systemic obstacles.

This is a compelling list of reasons why age verification laws are bad for all of us. I have mixed feelings about their need and implementation so far. In theory, I think checking user ages can be justified in a number of circumstances. Many apps and websites have some boilerplate text claiming they have no interest in serving children, and rely on kids self-authorizing. This is obviously insufficient. Also, Apple wants your iPhone to replace your wallet, but is somehow uninterested in using the information you provide, which is bizarre.

But giving tech companies even more information and control seems similarly fraught. Existing operating system-level options for parents are frequently broken, so why would we entrust them with some form of valid identification? And the current patchwork of laws and proposals could mean a worst-of-both-worlds situation: depending on where the company is governed, you might have to provide documentation to centralized app distributors, and to individual websites and apps. And, as Alajaji writes, the people who will struggle most to satisfy these requirements are already discriminated against.

I have written posts in which I am swayed by proposals for age verification. I am also convinced by the ten arguments Alajaji lists, many of which feel U.S. specific but are probably even more burdensome in developing countries. These problems seem to be varying degrees of insurmountable. But the great freedom created by the web has not yet been met with commensurate responsibility, either.

Howard Oakley:

In real life, whiteouts are dangerous because they’re so disorienting. There’s no horizon, no features in the landscape, and no clues to navigation. We see and work best in visual environments that are rich in colour and tonal contrasts. Tahoe has continued a trend for Light Mode to be bleached-out white, and Dark Mode to be a moonless night. Seeing where controls, views and contents start and end is difficult, and leaves them suspended in the whiteout.

Oakley reviews several lingering problems with Liquid Glass in MacOS, but the above remains the most — and I use this word intentionally — glaring issue I have with it. It is a problem that becomes entirely clear as you scroll to the bottom of Oakley’s post and find a screenshot from — I think — Mac OS X Mavericks with evident precision and contrast. I do not think Apple should have frozen this interface in time, nor that there are no changes made since which have been an improvement. However, though the exact same elements remain in today’s MacOS, they lack a similarly rigorous structure and care. It is notable how many translucency controls have been added in the intervening years.

Rudy Fraser, founder and CEO of Blacksky, made two big announcements on the occasion of it being the first anniversary of its launch. The first is online cash payments in USD, which I am a little unclear about, but relies in part on the trust implicit in AT Protocol connections.

The second, though, is easier for me to understand:

As Mozilla has been saying, we need to decentralize the internet. Decentralization necessarily requires running your own servers and as mentioned above by Moxie, founder of the beloved Signal protocol, no one wants to run their own servers, and never will. If you take seriously that both are true, how do you square the two?

[…]

I’m excited to say: promises made, promises kept. Clinton has built a mobile-friendly web app that will allow users to create a new PDS [Personal Data Server] that is truly one-click and will be hosted on Blacksky’s infrastructure. […]

I am intrigued by the possibility of announcements like this one. It offers a meaningful layer of self-governance beyond monolithic social platforms, made possible only by open standards like AT Protocol and the rival ActivityPub.

Meta, in an April 2024 press release:

Today we’re taking the next step toward our vision for a more open computing platform for the metaverse. We’re opening up the operating system powering our Meta Quest devices to third-party hardware makers, giving more choice to consumers and a larger ecosystem for developers to build for. We’re working with leading global technology companies to bring this new ecosystem to life and making it even easier for developers to build apps and reach their audiences on the platform.

Ben Thompson reacting:

Motivations, of course, aren’t enough: unlike AI models, where Meta wants a competitive model, but will achieve its strategic goals as long as a closed model doesn’t win, the company does actually need to win in the metaverse by controlling the most devices (assuming, of course, that the metaverse actually becomes a thing).

The first thing to note is that pursuing an Apple-like fully-integrated model would actually be bad for Meta’s larger goals, which, as a horizontal services company, is reaching the maximum number of people possible; there is a reason that the iPhone, by far the most dominant integrated product ever, still only has about 30% marketshare worldwide. Indeed, I would pushback on Zuckerberg’s continued insistence that Apple “won” mobile: they certainly did as far as revenue and profits go, but the nature of their winning is not the sort of winning that Meta should aspire to; from a horizontal services company perspective, Android “won” because it has the most marketshare.

Ben Lang, Road to VR:

Meta specifically named Asus and Lenovo as the first partners it was working with to build new Horizon OS headsets. Asus was said to be building an “all-new performance gaming headset,” while Lenovo was purportedly working on “mixed reality devices for productivity, learning, and entertainment.”

But as we’ve now learned, neither headset is likely to see the light of day. Meta say it has frozen the third-party Horizon OS headset program.

I link to Thompson’s article not to dunk on it, but to ask the natural followup question: if an Android-like operating system licensing model is a good indication of the potential for Meta’s metaverse vision, what does its effective cancellation suggest? This, along with the deep personnel cuts, confirms what has seemed obvious from the start: the company’s vision of the metaverse is simply not compelling.

You will notice I am leaving open a door for somebody or some company to make a truly interesting metaverse-like thing. I am skeptical it will exist, but I can see the potential. Wherever it emerges, though, I would bet against Meta itself making this stuff viable. It does not do interesting and cool new things.

Here is something strange: while Meta’s Threads placed second on Apple’s most popular apps of 2025 list, it appears to matter far less to Android users.

I could not find a similar list to Apple’s as provided by Google, but Sensor Tower scrapes those charts regularly. In the last ninety days — all I can see without paying — Threads has danced around the top ten free apps on the U.S. App Store, as one might expect for one of the most popular apps of the year. On Android, however, it never cracks the top ten. This is not solely a U.S. phenomenon; Threads seems to be less popular in Canada, but its App Store ranking is dramatically better than its Play Store ranking.

Also, it is not like Android has a radically different list of popular apps. Right now, like on iOS, ChatGPT is at the top of the U.S. Play Store chart, two different versions of TikTok take third and fourth place — second place is Fortnite — and the top ten also includes Instagram and WhatsApp. The main difference between the iOS and Android charts is the lack of Google apps in the Play Store ranking, presumably because they are preinstalled. This chart is only a snapshot of today’s rankings, but even over the last ninety days, TikTok and Instagram show durable popularity, while Threads struggles.

I previously compared Threads’ ranking in Mexico and Taiwan. So, for completeness’ sake: Threads has been pretty popular in the Mexican App Store, but is not even in the top fifty on Android. In Taiwan, Threads has hovered in the top twenty free apps on iOS and, at a glance, Taiwan is where it puts in its best Android performance.

What is going on here? Perhaps Apple and Google use different measurements to calculate their app ranking charts, so any comparison between the two is flawed from the jump. Neither company publishes its methodology for calculating those rankings. Maybe Threads is a worse app on Android than it is on iOS.

But the seeming disparity in popularity between the two platforms is a curiosity. I would assume social apps should, all else being equal, attract similar audiences across both platforms. That is true for apps like TikTok and Instagram. It is not true for Threads, which seems to attract a greater share of iOS users than Android. Strange.

Alex Heath, the Verge:

By all measures, Meta’s Threads app had a very good year. The app was Apple’s second-most-downloaded iOS app of the year, trailing only ChatGPT. Threads now has 400 million monthly and 150 million daily active users.

It is interesting to see how much this varies by region. In Canada, Threads was third, behind ChatGPT and Temu, as it was in Taiwan where Threads made something of an impact last year. In Mexico, it was twelfth. Top app charts for 2025 are not available for every country, but even this limited range is kind of curious. ChatGPT is huge everywhere, and you can feel it; TikTok continues to be a cultural force.

Yet, even though Threads is apparently incredibly popular, I still do not know anyone in real life who actually uses it. I have never once heard the phrase hey, did you see that post on Threads? from a friend. To be fair, it is not as though the charts are packed with the social apps I like: neither Bluesky nor any Mastodon client is on any of those regional charts. But a couple of years into Threads, my local news broadcasters and municipal authorities are not pushing me to follow their accounts there, either. Perhaps as this style of social media splinters into multiple competing networks, its draw on the public consciousness may fade.

Heath:

That growth is still coming mainly from Meta’s other platforms. “We do a lot of work in Instagram and Facebook to show off what’s going on in Threads,” Connor Hayes, the head of Threads, told me this week. The playbook: surface personalized Threads content in your Instagram and Facebook feeds, get you to download the app, then wean you off needing those nudges to check it consistently. “We do a bunch of work to get people off of being dependent on those promotions and wake up in the morning and just want to open the app,” Hayes explained.

I think I know what Hayes means by this. I use the web version of Instagram because I cannot stand the app. Until earlier this year, there was a persistent red dot on my notifications icon which, when I clicked on it, told me that there was a notification on Threads. However, when I went to Threads, I saw no such updates. I fell for this growth hacking trick a few times before I realized it was just a lie.

Heath:

Threads still supports federation with other apps like Mastodon, but Hayes was clear that it’s not a top priority for the current roadmap. “It’s something that we’re supporting, it’s something that we’re maintaining, but it’s not the thing that we’re talking about that’s gonna help the app break out,” he said.

Two years ago, Tom Coates attended a large meeting held by Meta to discuss Threads’ enthusiasm for the fediverse. With that in the rear-view mirror, it seems Meta has gotten all it needs from integrating with an open protocol.

Barry Schwartz, Search Engine Roundtable:

On Friday, Google announced it had filed a lawsuit (PDF) against SerpApi for scraping the Google search results. Google alleges that SerpApi is running an “unlawful” operation that bypasses Google’s security measures to scrape search results at an astonishing scale.

[…]

Google claims SerpApi uses hundreds of millions of fake search requests to mimic human behavior. This allows them to bypass CAPTCHAs and other automated defenses that Google uses to prevent bots from overwhelming its systems.

In October, as part of its lawsuit against Perplexity, Reddit sued SerpApi and a couple of other scraper companies. Figuring out the difference between the ostensibly bad kind of scraping practiced by SerpApi and the good kind practiced by Google seems like it will require a narrow definition, one Google is happy to provide.

Halimah DeLaine Prado, Google’s general counsel:

Google follows industry-standard crawling protocols, and honors websites’ directives over crawling of their content. Stealthy scrapers like SerpApi override those directives and give sites no choice at all. SerpApi uses shady back doors — like cloaking themselves, bombarding websites with massive networks of bots and giving their crawlers fake and constantly changing names — circumventing our security measures to take websites’ content wholesale. This unlawful activity has increased dramatically over the past year.

This explanation is not wrong, per se, though it is quite self-serving. The way many people begin their search for a product, service, or local business is with Google. A typical website owner is therefore desperate for a Google link, to the extent that they will reconstruct their site on a regular basis to suit its shifting ranking criteria. That means Google has broad power to do basically whatever it wants to the web. If publishers wanted to rank highly in search results, they were required to adopt the company’s proprietary fork of HTML. It can inject links, scrape third parties, and build a self-preferencing silo — and website owners have to be okay with it or lose valuable referral traffic from Google users.

All of that is nominally ethical in Google’s view. What is not, apparently, is a company using workarounds to get a window into Google’s practices. I sympathize with that argument. The only tool we have is robots.txt and, regardless of SerpApi’s intent, I do not think circumvention efforts should be tolerated, though that should be paired with aggressive antitrust action to prevent incumbent powers from abusing their position.
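
The voluntary nature of robots.txt is easy to demonstrate. Here is a minimal sketch using Python’s standard-library parser; the rules and crawler names are hypothetical, not SerpApi’s or Google’s actual ones.

```python
# A sketch of robots.txt's honour system, using Python's standard
# library. The directives and user agent names are illustrative only.
from urllib.robotparser import RobotFileParser

rules = """\
User-agent: Googlebot
Allow: /

User-agent: *
Disallow: /search
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# A crawler that identifies itself truthfully is matched against its rules.
print(parser.can_fetch("Googlebot", "https://example.com/search"))       # True
print(parser.can_fetch("SomeScraperBot", "https://example.com/search"))  # False
```

Nothing here is enforced: a scraper is only bound by these directives if it chooses to fetch the file and identify itself honestly, which is why a crawler with a fake or constantly changing name renders the whole system moot.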

Recent actions taken by U.S. courts, for example, have found Google illegally maintained its search monopoly. In issuing proposed remedies earlier this year, the judge noted the rapidly shifting world of search thanks to the growth of generative artificial intelligence products. “OpenAI” is mentioned (PDF) thirty times as an example of a potential disruptor. However, the judge does not mention that OpenAI’s live search data is at least partially powered by SerpApi.

Ritika Dubey, Canadian Press:

A cybersecurity expert says Canadian TikTok users likely won’t notice any changes to the social media app as major American investors sign a deal to form a new TikTok joint venture.

“This deal happens to be based on U.S. law only and it’s really not that impactful for Canadians on a day-to-day basis,” said Robert Falzon, head of engineering at cybersecurity firm Check Point Software Technologies Ltd.

Canadians will continue to use the international version of the app that’s owned and influenced by Chinese owner ByteDance Ltd., he said.

There is an implied for now which should be tacked onto any assessment of its impact on Canadians. This U.S.-specific version lays the groundwork for a political wedge issue in Canada and elsewhere: should people use the version of the app run by a company headquartered in Beijing and mostly owned by a mix of American, Chinese, and Emirati investors, or should they use the app run by a company based in the U.S. and mostly owned by a mix of American, Chinese, and Emirati investors? Or, to frame it in more politically expedient terms, should people be allowed to use the “Chinese” app or should they be pushed into the “American” app? Under that framing, I would not be surprised to see the U.S. version become the dominant client for TikTok worldwide.

Our government under the previous prime minister forcibly closed domestic TikTok operations on national security grounds, though it did not ban the app. Perhaps that would have been too much, too soon. Now, though, politicians need not endure voters’ wrath for banning a popular social platform, since there is another version with less scaremongering attached to it.

Jonathan Vanian and Julia Boorstin, CNBC:

TikTok CEO Shou Zi Chew told employees on Thursday that the company’s U.S. operations will be housed in a new joint venture.

[…]

The U.S. joint venture will be 50% held by a consortium of new investors, including Oracle, Silver Lake and MGX, with 15% each. Just over 30% will be held by affiliates of certain existing investors of ByteDance, and almost 20% will be retained by ByteDance, the memo said.

Oracle is among the companies that have been illegally supporting TikTok for the past year, along with Apple and Google. Instead of facing stiff legal penalties, Oracle will get to own a 15% piece of TikTok. It probably helps that co-founder Larry Ellison is a friend and donor to Donald Trump.

MGX will also get a 15% share. It is a state-run investment fund in the United Arab Emirates, even though I thought the whole point of this deal was to address a collective panic over foreign government interference. It probably helps that MGX used Trump’s family cryptocurrency to invest in Binance.

The deal is structured so these firms — and Silver Lake — actually have control. But, after all this, it seems like the single biggest shareholder in this new entity will be ByteDance, with 19.9%.

CNBC:

The new TikTok entity will also be tasked with retraining the video app’s core content recommendation algorithm “on U.S. user data to ensure the content feed is free from outside manipulation,” the memo said.

I do not think it is worth reading too much into TikTok’s CEO writing that its new suggestions will be “free from outside manipulation”, or into what that statement implies about TikTok’s operations elsewhere.

Bobby Allyn, NPR:

Yet the underlying algorithm will still be owned by Beijing-based ByteDance, with the blessing of American auditors, according to an internal TikTok memo reviewed by NPR and two sources familiar with the deal who were not authorized to speak publicly.

So if the underlying recommendations system still has connections to China, but it is retrained by a company run by a mix of far-right U.S. investors and a different foreign government, are those the ingredients for a social network with less state influence? Does this satisfy those who believe, without evidence, that TikTok is brainwashing people in the U.S. at the behest of the Chinese government?

Graham Cluley, writing on Bitdefender’s blog:

If you’re planning a cruise for your holidays, and cannot bear the idea of being parted from your Ray-Ban Meta smart glasses, you may want to avoid sailing with MSC Cruises.

The cruise line has updated its list of prohibited items, specifically banning smart glasses and similar wearable devices from public areas.

MSC Cruises is prohibiting “devices capable of covertly or discreetly recording or transmitting data” which, as written, is pretty vague without the subsequent “(e.g. smart glasses)”. Any wireless device is arguably “discreetly … transmitting data” all the time. I appreciate the idea. However, I fear this is the kind of rule that will be remembered as a relic of a transitional period, rather than an honest commitment to guest privacy.

I truly love end-of-year lists, and Stephen Hackett sure has a good one: what are the highs and lows from Apple’s 2025? This is not what you would call comprehensive — look out for the Six Colors report card early next year, I am sure — but it is better for being concise.

Congratulations to Hackett for the final high point on his list.

In August 2022, Kashmir Hill reported for the New York Times on two fathers who had, in separate cases, captured photos of their toddlers’ genitals for medical documentation on Android phones, and subsequently had their Google accounts locked. Both accounts were erroneously flagged for containing child sexual abuse materials, a heinous accusation that both fought — unsuccessfully, as of the article’s publication.

I wrote about what I learned from that article and a different incident affecting a Gmail account belonging to Talking Points Memo. But I never linked to a follow-up article from December of the same year, which I stumbled across earlier today as I was looking into Paris Buttfield-Addison’s Apple account woes, now apparently resolved.

Hill:

In recent months The Times, reporting on the power that technology companies wield over the most intimate parts of their users’ lives, brought to Google’s attention several instances when its previous review process appeared to have gone awry.

In two separate cases, fathers took photos of their naked toddlers to facilitate medical treatment. An algorithm automatically flagged the images, and then human moderators deemed them in violation of Google’s rules. The police determined that the fathers had committed no crime, but the company still deleted their accounts.

I do not know if either of these accounts were restored. I have asked Hill on Bluesky and I hope to hear back. (Update: Hill says neither parent recovered their account, though one was able to retrieve some account data that was turned over to police.)

Hill:

It took four months for the mother in Colorado, who asked that her name not be used to protect her son’s privacy, to get her account back. Google reinstated it after The Times brought the case to the company’s attention.

This is well after Google says all the account data should have been deleted, which raises more questions.

The ridiculous and maddening situation in which Paris Buttfield-Addison finds himself continues to rattle around my brain. The idea that any one of us could be locked out from our Apple devices because some presumably automated system flagged the wrong thing is alarming.

Greg Morris:

The scale of dependency is what makes this different from older tech problems. Losing your email account twenty years ago was bad. Losing your iCloud account now means losing your photos, your passwords, your ability to access anything else. We’ve built these single points of failure into our lives and handed them to corporations who can cut us off for reasons they won’t explain. That’s not a sustainable system.

Morris is correct, and there is an equally worrisome question looming in the distance: when does Apple permanently delete the user data it holds? Apple does not say how long it retains data after an account is closed but, for comparison, Google says it takes about two months. Not only can one of these corporations unilaterally decide to close an account; there is also no way to know whether it can be restored, and little help for users when it happens.

Adam Engst, TidBits:

I’d like to see Apple appoint an independent ombudsperson to advocate for customers. That’s a fantasy, of course, because it would require Apple to admit that its systems, engineers, and support techs sometimes produce grave injustices. But Apple is no worse in this regard than Google, Meta, Amazon, and numerous other tech companies — they all rely on automated fraud-detection systems that can mistakenly lock innocent users out of critical accounts, with little recourse.

This is a very good idea. Better consumer protection laws would obviously help, too, but Apple could do this tomorrow.

There is one way the Apple community could exert some leverage over Apple. Since innocently redeeming a compromised Apple Gift Card can have serious negative consequences, we should all avoid buying Apple Gift Cards and spread the word as widely as possible that they could essentially be malware. Sure, most Apple Gift Cards are probably safe, but do you really want to be the person who gives a close friend or beloved grandchild a compromised card that locks their Apple Account? And if someone gives you one, would you risk redeeming it? It’s digital Russian roulette.

I cannot tell you what to do, but I would not buy an Apple gift card for someone else, and I would not redeem one myself, until Apple clearly explains what happened here and what it will do to prevent something similar happening in the future. And, without implying anything untoward, it should restore Buttfield-Addison’s account unless there is a compelling reason why it should not.

When I bought my iMac through Apple’s refurbished store in 2019, the only credit card I had was one where I kept a deliberately low limit. The iMac was over $3,700. To get around my limit, I bought a $2,000 gift card and paid it off immediately, then put the remaining $1,700 and change on my credit card.

I did not think twice about the potential consequences if this had tripped some kind of fraud detection system. I cannot imagine doing something similar today given everything Buttfield-Addison has gone through.

Update: Buttfield-Addison:

Update 18 December 2025: We’re back! A lovely man from Singapore, working for Apple Executive Relations, who has been calling me every so often for a couple of days, has let me know it’s all fixed. It looks like the gift card I tried to redeem, which did not work for me, and did not credit my account, was already redeemed in some way (sounds like classic gift card tampering), and my account was caught by that. […]

This is good news. It also answers my trying-not-to-be-clickbait headline question: yes, a gift card can, in some circumstances and possibly without your foreknowledge, compromise your account. That is not okay. Also not okay is that we are unlikely to see a sufficient explanation of this problem. You are just supposed to trust that it will all be okay. I am not sure I can.

I keep meaning to link to Screen Sizes, a wonderful utility by Trevor Kay and Christopher Muller. It is a resource for developers and designers alike to reference the screen sizes, pixel dimensions, and various other attributes of Apple’s post-P.C. device lineup.

Something I need to do at my day job on a semi-regular basis is to composite a screenshot onto a photo of someone holding or using an iPhone or iPad. One of my pet peeves is when there is little attempt at realism — like when a screenshot is pasted over a notch, or the screen corners have an obviously incorrect radius. This is not out of protection for the integrity of Apple’s hardware design, per se; it just looks careless. I constantly refer to Screen Sizes to avoid these mistakes. I did so earlier today, which is why I was reminded to link to it.

It is a great free web app with even more resources than its name suggests.