Month: January 2026

Jason Koebler, 404 Media:

Over the last week, users of X realized that they could use Grok to “put a bikini on her,” “take her clothes off,” and otherwise sexualize images that people uploaded to the site. This went roughly how you would expect: Users have been derobing celebrities, politicians, and random people—mostly women—for the last week. This has included underage girls, on a platform that has notoriously gutted its content moderation team and gotten rid of nearly all rules.

Ananya Bhattacharya, Rest of World:

On January 2, India’s IT ministry issued a 72-hour ultimatum to X over its artificial intelligence chatbot generating “obscene” content — specifically, sexualized or manipulated images of women and, in some cases, minors.

Maggie Harrison Dupré, Futurism:

In addition to the sexual imagery of underage girls, the women depicted in Grok-generated nonconsensual porn range from some who appear to be private citizens to a slew of celebrities, from famous actresses to the First Lady of the United States. And somehow, that was only the tip of the iceberg.

When we dug through this content, we noticed another stomach-churning variation of the trend: Grok, at the request of users, altering images to depict real women being sexually abused, humiliated, hurt, and even killed.

It is extraordinary yet, sadly, predictable to me that xAI is not treating this as a problem. Setting aside the fundamentally offensive capability that any user can tell Grok to generate photorealistic images based on someone else’s likeness — something it should be prohibited from doing — there appears to be no rush to fix this high-res misogyny on demand. A basic corporate response would be to turn off image generation capabilities until better safeguards are in place. Yet I just opened the Grok account on X, switched to the Replies tab, and it took almost no scrolling at all to find it generating images like these mere seconds ago.

The kinds of images Grok is generating should be criminal, and the people who oversee it should be held liable, as should those prompting it. In my eyes, though I am not a lawyer, it is no less a case of harassment when a robot is directed by a user than when the user does it themself. I know A.I. is a nascent industry and there are legitimate worries about over-regulation, but this should not be one of them. xAI is clearly overseen by adolescent-brained people who think it is enough to prohibit heinous requests in company policy, rather than sufficiently restricting its own software.

Elissa Welle, the Verge:

A viral Reddit confessional about a “major food delivery app” posted January 2nd is most likely AI-generated. The original post by user Trowaway_whistleblow alleged that an unnamed food delivery company regularly delays customer orders, calls couriers “human assets,” and exploits their “desperation” for cash, among other indefensible actions. Nearly 90,000 upvotes and four days later, it’s become increasingly clear that the post’s text is probably AI-generated.

The link from “viral Reddit confessional” is not to the thread itself, but to a link post from the Verge. That post, from Andrew J. Hawkins, still asks “[w]hich company do you think this is about?” and has not been updated to reflect the fictional status of this story.

Alex Schultz, Hard Reset:

But the allegations in the Reddit thread are not true. They are fabricated. The original post was generated by AI and was intended to defame Uber Eats. I know as much because I traded Signal messages with Trowaway_whistleblow, who additionally used AI to prepare a fake-but-sophisticated-looking “internal document” of other supposedly damning allegations; Trowaway_whistleblow circulated the document to journalists, hoping someone would take the bait and publish a libelous “scoop.”

I was shocked to see so many people posting a link to this thread credulously. I write that not to scold anyone, only out of genuine surprise that an evidence-free anonymous Reddit post with clear inconsistencies was being spread so widely. There is so much existing evidence that so-called “gig economy” workers are exploited, underpaid, and endangered, and that food delivery platforms specifically are bad for restaurants, that we simply do not need to add questionable Reddit posts to the mix. This scam played into that reputation perfectly. If anything, that is why it makes sense to be even more skeptical. Being too credulous undermines the documented failures of these platforms through the confusion it creates.

You may remember Jim Nielsen’s exploration last month — previously linked — of the various icons in macOS Tahoe menus, and his comparison of them to Apple’s long-time recommendations for icon use.

Nikita Prokopov went even further:

In my opinion, Apple took on an impossible task: to add an icon to every menu item. There are just not enough good metaphors to do something like that.

But even if there were, the premise itself is questionable: if everything has an icon, it doesn’t mean users will find what they are looking for faster.

And even if the premise was solid, I still wish I could say: they did the best they could, given the goal. But that’s not true either: they did a poor job consistently applying the metaphors and designing the icons themselves.

This is a gallery of elementary problems. None of this should have shipped if someone with power internally had a critical eye for consistency and detail. If Apple deems it necessary to retain the icons, though I am not sure why it would, it should be treating this post as one giant bug report.

Update: Speaking of illegible, I think Prokopov should reconsider both the automatically enabled snow animation and the searing yellow background.

Shalini Govil-Pai, Google:

At CES 2026, we’re previewing how Gemini is making Google TV even more helpful, and bringing more ways to interact with your TV across more brands and surfaces like projectors. Here’s what we announced:

[…]

  • Use Nano Banana and Veo to reimagine your personal photos or create original media directly on your TV.

I guess I am the dumb one because I cannot see any reason why I would want Google to generate new scenes from my photos — let alone “original media”, whatever that means. This seems like something that would be fun for about ten minutes, once, and then would never be touched again.

I am not sure which was the first of Google’s A.I. ads I saw that made me wonder if I was suffering from a fever. Maybe it was the one where “pandas in high fashion outfits walk the runway” — shown over thirty million times since early November, as of writing — but it just as easily could have been the one with “some crazy fruit creatures”. Maybe it was the ad with the strutting dogs. All are equally bizarre in the way a giant corporation’s ad company thinks it is offering something quirky and weird.

In any case, each ends with the same encouragement: “make more videos with a Google A.I. Pro subscription” for $27 per month. And I have to ask: who is this for?

These are not the only ads Google is running for its A.I. features. It shows them off in context for general users in Pixel phone ads, for students in ads for NotebookLM, for corporate clients with Cloud A.I., and — of course — for advertisers.

Nor is that the only ad for its A.I. Pro subscription bundle. In a different one, a string of tasks lets someone put together a ’90s trivia night while doing absolutely no work, leading to an invitation that twice tells recipients to wear platform shoes and “get ready”. The fine print might remind viewers to “check responses for accuracy”, but in the ad, the creator clicks send on a terribly written invitation immediately.

But it is those bizarre animal-themed ones which led me to look up the advertising strategies of all the big A.I. players. I think this is a fascinating window. The ads they pay to show give us a glimpse of how they idealize the use of these products and services. While ads can sometimes be abstracted from the true function of a product or service, A.I. is already a new and confusing thing, so the narrative each company spins seems telling of its own vision. I limited my search to typical video spots — no more than 30 seconds long. I did not include case studies or company-produced tutorials.

OpenAI runs a couple of types of ads. Those for ChatGPT mostly show its use as a personal assistant. There are several 30-second spots shot as oners with a pleasingly retro warmth. One has a training regimen for learning to do pull-ups; another is about siblings on a road trip. The same kind of messaging is used in a series of shorter spots. It also runs ads for its Codex agent, obviously targeted at developers, that are more clinical.

All of these seem practical to me. I could not find any current ads from OpenAI as disconnected from reality as Google’s. Just as notable is the focus of OpenAI’s spots — Google’s Ads Transparency Centre says the company is running about two hundred ads in Canada right now, most of which are variations in size, targeting, and placement of the shorter practical examples above, plus ads for Codex. For comparison, Google’s ads are all over the place. It is running around twenty thousand ads right now in Canada and, though not all of them are for A.I. features, many are, and you can tell from the examples above how much Google is just throwing stuff at the wall.

Anthropic’s ads are far more limited. All are for Claude and feature a single video spot that gives no indication of how the product is being used. It simply says “Claude is A.I. for […] all of us, anywhere” over overhead shots of different scenes representing different professions. This is basically the same sentiment as OpenAI’s ads, but executed without any specificity or examples. The company’s YouTube channel has plenty of case studies and demos, but no similar video spots.

If Anthropic is trying to mimic OpenAI’s quiet confidence, Perplexity has chosen overt aggression. Quasi-influencer types follow a similar script saying ChatGPT makes things up, and that is why you should trust Perplexity as it “searches the entire internet in less than one second and gives you one verified answer”. This explanation avoids acknowledging how much Perplexity depends on external A.I. models, including OpenAI’s GPT. In two of the ads, the narrator asks health-related questions, which is the boldest and maybe most reckless use case I have seen in any A.I. ad. There is nothing wrong with the answers it has generated, so far as I can tell, but it seems like a higher-stakes use than, say, a cake recipe. In a variation of this ad type, the “influencer” looks generated, and the ad contains Google search result screenshots that are at least twelve years old. Whatever the case, Perplexity must be spending huge amounts on its advertising. Each of these has been shown tens of millions of times, and two have hundreds of millions of views.

Meta’s ads are bizarre. I could not find any video ads for A.I. stuff — or metaverse-related things — through Google’s Ads Transparency Centre, but the company has a handful of 30-second spots on its YouTube channel. Almost like Google, Meta does not seem to know why people would use its A.I. assistant, so it comes up with offbeat premises that fall flat. In one, described as a way to “help sharpen your skills and transform the way you learn and understand complex subjects”, a guy is meeting his partner’s dad, a thermodynamicist. What a real person who is good at conversations would do is ask the dad a bunch of questions. What Meta thinks you should do is ask its assistant those questions instead and develop a fake level of expertise.

Also, not for nothing, but when I asked the same question in the web version of Meta A.I., it responded not with the tight and efficient explanation in the ad, but with:

😂 Thermodynamics-ist? Sounds like a real party crusher, eh? 🤣 Okay, serious face… A thermodynamicist is like, someone who’s all about the study of heat, energy, and how they transform into each other. They’re like the ultimate energy nerds, but in a good way! 🤓 They work with stuff like engines, refrigeration, and even black holes (mind blown, right? 🤯). Basically, they help us understand how to harness and use energy efficiently. Pretty cool, eh?

Am I meant to be charmed by “okay, serious face…” and four emoji in this barely coherent response? Because I am not.

In another ad, a guy somehow turns a large pan of what appears to be milk and two other pans of mystery liquid into pancakes for his kid. The kid pushes the plate off the counter, the dad asks “how can I get my two-year-old to eat his breakfast?”, and Meta A.I. knows that the kid likes when he makes silly faces. It is unclear to me how it knows this — whether it is automatically personalized to this user, or something the dad needed to teach the system, or if it is just an assumption that all two-year-olds like silly faces. The impression I get, though, is that it is distinctly unbelievable, and I do not want to give Meta knowledge of my entire life to find out if I am right.

To round this out, I looked for similar ads from Microsoft and Apple. I scrolled through hundreds of Microsoft ads on Google’s Ads Transparency tool, virtually all of which were Xbox-related. I also checked its YouTube channel. I did not see any ads of the same type as above. I found only a couple of ads for Apple Intelligence on Apple’s YouTube channel, both of which are for Clean Up in Photos. Apple seems to have cleaned up its YouTube channel overall, removing a whole bunch of older ads, including some for Apple Intelligence.

I do not want to overstate how much these ads tell us — they are ads, you know? — but I think I learned something from the way each of these businesses thinks of its own products. In OpenAI, I see confidence; in Anthropic and Perplexity, I see an attempt to catch up. And in Google and Meta, I see established companies that are desperate to prove themselves — particularly in Google’s case, as I still cannot understand why generating arbitrary video is supposed to be compelling to a broad audience.

In the most practical and grounded ads, what I do not see are significant leaps beyond what a search engine today could do. OpenAI’s ads show ChatGPT summarizing a workout plan, but there are loads of those on external websites. Guides to road tripping through the Blue Ridge Parkway are plentiful. The same is true of the responses in Perplexity’s ads. What I see most in these ads are the big “pure” A.I. players normalizing their raison d’être, and established mega corporations entirely out of touch with what someone might want to do. Both are embarrassing in their own way for what is often pitched as the most revolutionary technology since the internet.

2025 was a tough year for pretty much everyone I know, and I imagine the coming year will bring more of the same dreary news. I thought I would start off 2026 with something much dumber. I have plumbed the depths and found the bottom of the deepest of barrels, and that is how you are reading an “I asked ChatGPT” post.

There is a minor point here. I needed to use up a cup of milk and a quarter-cup of sour cream lurking in my fridge, and I wanted to make some kind of quick bread or loaf with it. The problem is that I am an inexperienced baker and I do not have a great sense of the proportion of other ingredients I would need.

I think this is the kind of thing a more advanced search engine could help with. I have previously used recipe finder tools like Supercook and RecipeRadar but, while they often ask what ingredients are available, they usually do not do so in specific quantities, necessitating looking through a bunch of recipes to find one that fits. Something that can process longer natural language queries seems tailor made for this kind of task.

I told it “I would like loaf/quick bread recipes that will use up 1 cup of milk and ¼ cup of sour cream”, and ChatGPT gave me four options. The first, a “classic vanilla quick bread”, looked adequate, but boring. (Also, a careful reader will quibble with its “optional add-ins (1–1½ cups total)” after which is a list of, among other things, cinnamon, and I do not think you should add a whole cup of cinnamon to anything.) I do like cinnamon, though, and I settled on the fourth suggestion, a “cinnamon swirl coffee bread”. Here is the entirety of its recipe:

Batter

2 cups flour
¾ cup sugar
1½ tsp baking powder
½ tsp baking soda
½ tsp salt
1 cup milk
¼ cup sour cream
⅓ cup oil
2 eggs
1 tsp vanilla

Swirl

⅓ cup brown sugar
1½ tsp cinnamon

Bake

Layer batter + swirl in pan, gently knife once

350°F for 50–60 min

The ingredient list, to my untrained eye, seems fine. The instructions are obviously incoherent. More comprehensive directions were offered for the “classic vanilla quick bread” above it, which I missed because I only followed this recipe. Since I made a different loaf recently, however, I had a rough idea of what I should do. Also, I made a couple of minor changes:

  • I substituted vanilla extract for a few dashes of Fee Brothers’ cardamom bitters.

  • I had no ground cinnamon on hand and grating a cinnamon stick is tedious, so I stopped after about a teaspoon.

While these ingredient substitutions might affect the flavour, they would not materially affect the chemistry.

A cross-section of a golden loaf with a brown sugar swirl in the middle.

The resulting loaf is fine. I was hoping for either catastrophic failure or incredible success to more wholly justify this low-effort post, but it was just fine. Better than I expected, given where it came from, though the brown sugar swirl is achingly sweet and settled in the middle despite my best attempts. I still do not know what “gently knife once” is supposed to mean. I would not confuse this with a professional baker’s work, of course, but that is more like operator error. I wish the directions were, overall, clearer; if I had little to no previous experience baking a quick bread, I might have been lost.

I have experimented with ChatGPT and food before, particularly for weekly meal planning, and I have never been satisfied with its results. This, though, worked pretty well for me. I got to use up a couple of things in my fridge and made an okay dessert from it. Happy New Year.