Month: January 2026

Jason Snell, Six Colors:

I’ve been thinking about Apple’s relationship with computer displays lately. Maybe it was the report that the iMac Pro might somehow return, combined with John Voorhees of MacStories detailing how he gave up the Studio Display for an ASUS monitor? And, of course, there’s the prospect that we may be seeing new Apple-made standalone displays in 2026.

I don’t want to go back to a world where Apple no longer makes standalone displays. But that said, I think the company’s approach to display technology needs a serious upgrade.

Co-signed. When I bought my iMac 5K in early 2019,¹ it was the only big desktop display Apple sold. Now, as I think about an update, I am dismayed I will not officially be able to use it as an external display, though it seems like Luna Display might be worth a shot over Thunderbolt 3. If that does not work, it is gutting to me that I will have to effectively re-purchase this same 5K panel for my desk. And then there is the question of what I should do with my otherwise-good-just-outdated iMac.

I have been keeping an eye on Michael Tsai’s posts about displays, particularly those regarding non-Apple 27-inch 5K models. I know I have to adjust my expectations. Even so, these all look pretty tacky. Imagine being Asus or ViewSonic and thinking anyone wants to look at an ugly logo all day long. Still, a near-thousand-dollar difference is hard to dismiss, especially since Apple’s expensive display continues to have software problems.

And, of course, the only other display Apple sells is a 32-inch model that is over-engineered for most people, costs nearly eight thousand dollars in Canada with a stand, and has not been updated in over six years. Asus’ comparable 6K display is under $2,000, though it is not as bright and it looks pretty tragic.


  1. Arguably the worst time in modern Apple history to be shopping for a new Mac. Butterfly keyboards in all the laptops, stale desktops, and on the verge of the last batch of Intel-based Macs. ↥︎

Elizabeth Lopatto, the Verge:

Less than five years ago, I sat through the interminable Epic v. Apple antitrust trial. Real heads will remember that Apple’s lawyers heavily implied that a naked bananaman called Mr. Peely was somehow inappropriate for court. This came after a week where Apple argued that an indie storefront that users could install via Epic was a problem because it hosted porny games, calling games on Itch.io “offensive and sexualized.”

You know what’s “offensive and sexualized,” you worthless fucking cowards? Nonconsensual AI-generated images of women in bikinis spreading their legs, and of children with so-called “donut glaze” on their faces — which, by the way, were being generated at a rate of one per minute. I’d also call that “offensive, insensitive, upsetting, intended to disgust, in exceptionally poor taste” and especially “just plain creepy”! Do you need a back brace to stand up straight, buddy? Because at this point, I am certain you haven’t got a single vertebra.

Correctly righteous anger. That there has barely been any reckoning with this from Apple, Google, or even xAI despite at least a week of mainstream media coverage shows a callous indifference to the ongoing effects on the victims of this abuse. The best argument I can imagine — and it is a terrible argument — is that Apple’s lawyers advised the company against doing anything that looks anticompetitive since it is currently being sued by xAI. This is why it is a bad idea to rely on private corporations to do the job of regulators and law enforcement — but, still, Apple and Google should not be carrying apps from this company for as long as it continues to be a mass-scale abuse generator.

Hartley Charlton, MacRumors:

In the first week of January last year, 89.3% of MacRumors visitors used a version of iOS 18. This year, during the same time period, only 25.7% of MacRumors readers are running a version of iOS 26. In the absence of official numbers from Apple, the true adoption rate remains unknown, but the data suggests a level of hesitation toward iOS 26 that has not been seen in recent years.

The numbers being reported — 15% from StatCounter, 26% from MacRumors, and 55% from TelemetryDeck — are all over the place, but there is a clear-ish direction: people are not updating to iOS 26 like they have previous versions of iOS. There are lots of possible reasons why. Liquid Glass is the most visible explanation, but it is also possible the growing size of iOS plays a role. Apple Intelligence alone consumes nearly 7 GB of disk space on my iPhone.

Even so, the differences in these numbers are wild, especially compared to previous years where there was tighter agreement between different reports. For example, in January last year, TelemetryDeck reported about 78% were running iOS 18, StatCounter said it was 63%, and Apple itself said it was 68% of all iPhone users, rising to 76% adoption among users of devices four years old or newer. That is a fifteen-point spread. This year, with only third-party data so far available, it is a forty-point spread between StatCounter’s 15% and TelemetryDeck’s 55%. Something is not adding up.

Update: My iPhone running iOS 26.3 is detected by StatCounter’s user agent detection tool as an iOS 18.7 device. This is consistent with how StatCounter says it collects its figures. Two other devices running iOS 26 were also detected by StatCounter as iOS 18.7 devices; however, on one of them, using the Chrome browser, StatCounter correctly detected iOS 26.1. I also see this effect in my own limited analytics, where the only reports of iOS 26 versions come from non-Safari browsers. If an analytics package relies on the OS version string in the user agent, it will also misreport iOS 26 Safari users.

Yesterday, I linked to a report from Ed Hardy, Cult of Mac, pointing to a shockingly low iOS 26 adoption rate compared to previous years. Hardy relied on data from StatCounter, which uses web traffic at massive scale to measure all kinds of stuff, including operating system versions.

Given that StatCounter’s data has been similar to Apple’s own reporting of version adoption in previous years, I wrote:

[…] StatCounter’s figures might be off, but it would be shocking if they were out by 40-plus percent. That would point to a serious measurement error that, somehow, did not impact previous reporting.

Well, it turns out there is likely a measurement difference that would not have impacted iOS 18 or before.

Jen Simmons and others who work on WebKit, in September:

Also, now in Safari on iOS, iPadOS, and visionOS 26 the user agent string no longer lists the current version of the operating system. Safari 18.6 on iOS has a UA string of:

Mozilla/5.0 (iPhone; CPU iPhone OS 18_6 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/18.6 Mobile/15E148 Safari/604.1

And Safari 26.0 on iOS has a UA string of:

Mozilla/5.0 (iPhone; CPU iPhone OS 18_6 like Mac OS X) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/26.0 Mobile/15E148 Safari/604.1

In both, you will notice iPhone OS is set to “18_6” despite only one of them actually running iOS 18.6. If StatCounter were relying on this part of the user agent string for calculating operating system version numbers, it could be inaccurate. There is still a Safari version number in the latter part of the user agent string that could serve as a proxy for the operating system version, however. On my iPhone, running iOS 26.3, the relevant section reads Version/26.3 Mobile/15E148 Safari. The iPhone OS string reads “18_7”, which is also true for users running iOS 26.2.
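
To make the failure mode concrete, here is a minimal sketch in Python. The function names and regular expressions are my own assumptions, not StatCounter’s actual implementation; it only demonstrates that the frozen OS token reports both of the strings quoted above identically, while the Safari version token distinguishes them.

    import re

    UA_SAFARI_18_6 = (
        "Mozilla/5.0 (iPhone; CPU iPhone OS 18_6 like Mac OS X) "
        "AppleWebKit/605.1.15 (KHTML, like Gecko) "
        "Version/18.6 Mobile/15E148 Safari/604.1"
    )
    UA_SAFARI_26_0 = (
        "Mozilla/5.0 (iPhone; CPU iPhone OS 18_6 like Mac OS X) "
        "AppleWebKit/605.1.15 (KHTML, like Gecko) "
        "Version/26.0 Mobile/15E148 Safari/604.1"
    )

    def os_from_os_token(ua):
        # Naive approach: trust the "iPhone OS x_y" token, which
        # Safari froze at 18_6 (later 18_7) as of iOS 26.
        m = re.search(r"iPhone OS (\d+)_(\d+)", ua)
        return f"{m.group(1)}.{m.group(2)}" if m else None

    def os_from_safari_version(ua):
        # Alternative: read the Safari "Version/x.y" token, which,
        # per the strings above, still tracks the installed version.
        m = re.search(r"Version/(\d+\.\d+)", ua)
        return m.group(1) if m else None

    for ua in (UA_SAFARI_18_6, UA_SAFARI_26_0):
        print(os_from_os_token(ua), os_from_safari_version(ua))

    # Prints "18.6 18.6" then "18.6 26.0": the frozen OS token cannot
    # distinguish iOS 18.6 from iOS 26, but the Safari version token can.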

It is not like StatCounter has no data for iOS 26. It shows traffic from iOS 26.1 and 26.2, indicating it likely updated its tracking metrics. It is possible some of the 18.6 and 18.7 traffic is also iOS 26 — we just do not know how much.

Data from TelemetryDeck seems more robust, and suggests about 55% of iOS users have updated to iOS 26, compared to about 78% of users one year ago running iOS 18. Not as bad as StatCounter’s figures, but still a twenty-point gap between latest version uptake last year and this year.

Thanks to Sam Gross for pointing me in this direction.

Emanuel Maiberg, of 404 Media, has been following a Telegram group in which members find workarounds to guardrails in generative A.I. products. Instead of being used for interesting exploits or clever tricks, though, the loopholes are harnessed mostly to harass, abuse, and bully women. It is pretty revolting.

Maiberg:

It’s good and correct for people to be shocked and upset when they wake up one morning and see that their X feed is flooded with AI-generated images of minors in bikinis, but what is clear to me from following this Telegram community for a couple of years now is that nonconsensual sexual images of real people, including minors, is the cost of doing business with AI image generators. Some companies do a better job of preventing this abuse than others, but judging by the exploits I see on Telegram, when it comes to Grok, this problem will get a lot worse before it gets better.

It is clear Maiberg is just as disgusted with this as any person should be, so I am not trying to “um, actually” this, but I am not sure treating it as a “cost of doing business” is correct. The design and capabilities of these products matter immensely and, by permitting a relatively open-ended environment, xAI allows for experimentation to find its weak points. This is true of any generative A.I. product with a text box as its input. (As opposed to, say, a generative object removal tool in an image editor.) The degree of impact may also vary depending on the quality or style of the image — though, personally, I would still be alarmed if someone were harassing me with images even if they were cartoons.

Matt Burgess and Maddy Varner, Wired:

Unlike on X, where Grok’s output is public by default, images and videos created on the Grok app or website using its Imagine model are not shared openly. If a user has shared an Imagine URL, though, it may be visible to anyone. A cache of around 1,200 Imagine links, plus a WIRED review of those either indexed by Google or shared on a deepfake porn forum, shows disturbing sexual videos that are vastly more explicit than images created by Grok on X.

Caroline Haskins, Wired:

Over the past two years, Apple and Google removed a number of “nudify” and AI image-generation apps after investigations by the BBC and 404 Media found they were being advertised or used to effectively turn ordinary photos into explicit images of women without their consent.

But at the time of publication, both the X app and the stand-alone Grok app remain available in both app stores. Apple, Google, and X did not respond to requests for comment. Grok is operated by Musk’s multibillion-dollar artificial intelligence startup xAI, which also did not respond to questions from WIRED. In a public statement published on January 3, X said that it takes action against illegal content on its platform, including CSAM. “Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content,” the company warned.

X’s threat of “consequences” would be more convincing if it had functional site moderation generally.

Apps have been kicked off the App Store for far less than what X is doing today. Removing it — and the rest of xAI’s apps — would be a good start, but we should not expect private companies to do the job of law enforcement and regulators. There is a good case for banning X as long as it continues to permit this poorly-moderated image generator. People should be criminally charged, too.

Ed Hardy, Cult of Mac:

Nevertheless, iOS 26 adoption is extremely low. Roughly four months after launching in mid-September, only about 15% of iPhone users have some version of the new operating system installed. That’s according to data for January 2026 from StatCounter. Instead, most users hold onto previous versions.

For comparison, in January 2025, about 63% of iPhone users had some iOS 18 version installed. So after roughly the same amount of time, the adoption rate of Apple’s newest OS was about four times higher.

As far as I can tell, Apple updates its own iOS version stats twice annually — once with summer numbers, and again in the winter. Last year, it updated its stats on January 24, and it indicated 68% of iPhone users who transacted with the App Store on January 21 were using iOS 18. This is pretty close to StatCounter’s 63%. In February 2024, 66% were using the then-newest iOS 17; as Hardy writes, StatCounter reported it was around 54% at the time. A greater gap, to be sure, but it was clear well over half of iPhone users had updated. StatCounter’s figures might be off, but it would be shocking if they were out by 40-plus percent. That would point to a serious measurement error that, somehow, did not impact previous reporting.

Update: Turns out there might actually be large measurement differences that did not impact previous years’ reporting.

Update: Relying on StatCounter’s data could be flawed because Safari and third-party browsers report different iOS version numbers due to a change in iOS 26.

Juli Clover, MacRumors:

Language learning app Duolingo has apparently been using the iPhone’s Live Activity feature to display ads on the Lock Screen and the Dynamic Island, which violates Apple’s design guidelines.

According to multiple reports on Reddit, the Duolingo app has been displaying an ad for a “Super offer,” which is Duolingo’s paid subscription option.

I saw this, too.

Clover points to Apple’s Human Interface Guidelines, which once advised “avoid using a Live Activity to display ads or promotions”, but now explicitly say “don’t use a Live Activity” to show ads. But the HIG is not the App Store Guidelines, and there is nothing in there expressly prohibiting this behaviour, as far as I can see. Tacky.

Kate Conger, New York Times:

Elon Musk’s artificial intelligence company, xAI, said on Tuesday that it had raised $20 billion from investors to fund its expansion in the race to train the most intelligent chatbot.

[…]

Investors in xAI’s latest funding included Fidelity, the Qatar Investment Authority and Valor Equity Partners, a firm led by Mr. Musk’s friend and former Tesla board member Antonio Gracias. Nvidia, the maker of A.I. chips, also participated. In total, xAI has raised more than $42 billion, according to PitchBook.

Twenty billion dollars is a ghastly sum of money to give anyone, let alone someone who, in his official capacity, cut funding and killed over half a million people. This feels less like an investment for direct financial gain, and more like a way for a select group of people to gain influence through a company connected to a transparently corrupt government.

If you are wondering about the timing — immediately after a wave of coverage about xAI’s Grok generating a flurry of abusive and heinous imagery — it turns out this is not the first time Grok’s output has failed to affect major investment news. In July, shortly after Grok began parroting praise of Adolf Hitler, the U.S. government’s Chief Digital and Artificial Intelligence Office announced xAI would be one of several A.I. companies receiving up to $200 million. As of today, Grok’s stream of replies on X is still full of sexualized images it has generated of real women, mostly without their consent. People with lots of money seem to think all of this is completely fine.

Jason Koebler, 404 Media:

Over the last week, users of X realized that they could use Grok to “put a bikini on her,” “take her clothes off,” and otherwise sexualize images that people uploaded to the site. This went roughly how you would expect: Users have been derobing celebrities, politicians, and random people—mostly women—for the last week. This has included underage girls, on a platform that has notoriously gutted its content moderation team and gotten rid of nearly all rules.

Ananya Bhattacharya, Rest of World:

On January 2, India’s IT ministry issued a 72-hour ultimatum to X over its artificial intelligence chatbot generating “obscene” content — specifically, sexualized or manipulated images of women and, in some cases, minors.

Maggie Harrison Dupré, Futurism:

In addition to the sexual imagery of underage girls, the women depicted in Grok-generated nonconsensual porn range from some who appear to be private citizens to a slew of celebrities, from famous actresses to the First Lady of the United States. And somehow, that was only the tip of the iceberg.

When we dug through this content, we noticed another stomach-churning variation of the trend: Grok, at the request of users, altering images to depict real women being sexually abused, humiliated, hurt, and even killed.

It is extraordinary yet, sadly, predictable to me that xAI is not treating this as a problem. Setting aside the fundamentally offensive capability that any user can tell Grok to generate photorealistic images based on someone else’s likeness — something it should be prohibited from doing — there appears to be no rush to fix this high-res misogyny on demand. A basic corporate response would be to turn off image generation capabilities until better safeguards are in place. Yet I just opened the Grok account on X, switched to the Replies tab, and it took almost no scrolling at all to find it generating images like these mere seconds ago.

The kinds of images Grok is generating should be criminal, and the people who oversee it should be held liable, as should those who are prompting Grok. In my eyes, though I am not a lawyer, it is no less a case of harassment when a robot is directed by a user than when the user does it themself. I know A.I. is a nascent industry and there are things people are worried about over-regulating, but this should not be one of them. xAI is clearly overseen by adolescent-brained people who think it is sufficient to make it against company policy for users to make heinous requests, rather than sufficiently restricting its own software.

Elissa Welle, the Verge:

A viral Reddit confessional about a “major food delivery app” posted January 2nd is most likely AI-generated. The original post by user Trowaway_whistleblow alleged that an unnamed food delivery company regularly delays customer orders, calls couriers “human assets,” and exploits their “desperation” for cash, among other indefensible actions. Nearly 90,000 upvotes and four days later, it’s become increasingly clear that the post’s text is probably AI-generated.

The link from “viral Reddit confessional” is not to the thread itself, but to a link post from the Verge. That post, from Andrew J. Hawkins, still asks “[w]hich company do you think this is about?” and has not been updated to reflect the fictional status of this story.

Alex Schultz, Hard Reset:

But the allegations in the Reddit thread are not true. They are fabricated. The original post was generated by AI and was intended to defame Uber Eats. I know as much because I traded Signal messages with Trowaway_whistleblow, who additionally used AI to prepare a fake-but-sophisticated-looking “internal document” of other supposedly damning allegations; Trowaway_whistleblow circulated the document to journalists, hoping someone would take the bait and publish a libelous “scoop.”

I was shocked to see so many people posting a link to this thread credulously. I write that not to scold anyone, only out of genuine surprise that an evidence-free anonymous Reddit post with clear inconsistencies was being spread so widely. There is so much existing evidence that so-called “gig economy” workers are exploited, underpaid, and endangered, and that food delivery platforms specifically are bad for restaurants, that we simply do not need to add questionable Reddit posts to the mix. This scam played into that reputation perfectly. If anything, that is why it makes sense to be even more skeptical. Being too credulous undermines the documented failures of these platforms through the confusion it creates.

You may remember Jim Nielsen’s exploration last month — previously linked — of the various icons in MacOS Tahoe menus, comparing them to Apple’s long-time recommendations for icon use.

Nikita Prokopov went even further:

In my opinion, Apple took on an impossible task: to add an icon to every menu item. There are just not enough good metaphors to do something like that.

But even if there were, the premise itself is questionable: if everything has an icon, it doesn’t mean users will find what they are looking for faster.

And even if the premise was solid, I still wish I could say: they did the best they could, given the goal. But that’s not true either: they did a poor job consistently applying the metaphors and designing the icons themselves.

This is a gallery of elementary problems. None of this should have shipped if someone with power internally had a critical eye for consistency and detail. If Apple deems it necessary to retain the icons, though I am not sure why it would, it should be treating this post as one giant bug report.

Update: Speaking of illegible, I think Prokopov should reconsider both the automatically-enabled snow animation and the searing yellow background.

Shalini Govil-Pai, Google:

At CES 2026, we’re previewing how Gemini is making Google TV even more helpful, and bringing more ways to interact with your TV across more brands and surfaces like projectors. Here’s what we announced:

[…]

  • Use Nano Banana and Veo to reimagine your personal photos or create original media directly on your TV.

I guess I am the dumb one because I cannot see any reason why I would want Google to generate new scenes from my photos — let alone “original media”, whatever that means. This seems like something that would be fun for about ten minutes, once, and then would never be touched again.

I am not sure which was the first of Google’s A.I. ads I saw that made me wonder if I was suffering from a fever. Maybe it was the one where “pandas in high fashion outfits walk the runway” — shown over thirty million times since early November, as of writing — but it just as easily could have been the one with “some crazy fruit creatures”. Maybe it was the ad with the strutting dogs. All are equally bizarre in the way a giant corporation’s ad company thinks it is offering something quirky and weird.

In any case, each ends with the same encouragement: “make more videos with a Google A.I. Pro subscription” for $27 per month. And I have to ask: who is this for?

These are not the only ads Google is running for its A.I. features. It shows them off in context for general users in Pixel phone ads, for students in ads for NotebookLM, for corporate clients with Cloud A.I., and — of course — for advertisers.

It also is not the only ad for its A.I. Pro subscription bundle. In a different one, a series of tasks is strung together so someone can put together a ’90s trivia night while doing absolutely no work, leading to an invitation that twice tells recipients to wear platform shoes and “get ready”. The fine print might remind viewers to “check responses for accuracy”, but in the ad, the creator clicks send on a terribly written invitation immediately.

But it is those bizarre animal-themed ones which led to me looking up the advertising strategies of all the big A.I. players. I think this is a fascinating window. The ads they pay to show give us a glimpse of how they idealize the use of these products and services. While ads can sometimes be abstracted from the true function of a product or service, A.I. is already a new and confusing thing, so the narrative each company spins about its products seems telling of its own vision. I limited my search to typical video spots — no more than 30 seconds long. I did not include case studies or company-produced tutorials.

OpenAI runs a couple of types of ads. Those for ChatGPT mostly show its use as a personal assistant. There are several 30-second spots shot as oners with a pleasingly retro warmth. One has a training regimen for learning to do pull-ups; another is about siblings on a road trip. The same kind of messaging is used in a series of shorter spots. It also runs ads for its Codex agent, obviously targeted at developers, that are more clinical.

All of these seem practical to me. I could not find any current ads from OpenAI as disconnected from reality as Google’s. Just as notable is the focus of OpenAI’s spots — Google’s Ads Transparency Centre says the company is running about two hundred ads in Canada right now, most of which are variations in size, targeting, and placement of the shorter practical examples above, plus ads for Codex. For comparison, Google’s ads are all over the place. It is running around twenty thousand ads right now in Canada and, though not all of them are for A.I. features, many are, and you can tell from the examples above how much Google is just throwing stuff at the wall.

Anthropic’s ads are far more limited. All are for Claude and centre on a single video spot with no indication of how the product is being used. It simply says “Claude is A.I. for […] all of us, anywhere” with overhead shots of different scenes representing different professions. This is basically the same sentiment as OpenAI’s ads, but executed without any specificity or examples. The company’s YouTube channel has plenty of case studies and demos, but no similar video spots.

If Anthropic is trying to mimic OpenAI’s quiet confidence, Perplexity has chosen overt aggression. Quasi-influencer types follow a similar script, saying ChatGPT makes things up, and that is why you should trust Perplexity as it “searches the entire internet in less than one second and gives you one verified answer”. This explanation avoids acknowledging how much Perplexity depends on external A.I. models, including OpenAI’s GPT. In two of the ads, the narrator asks health-related questions, which is the boldest and maybe most reckless use case I have seen in any A.I. ad. There is nothing wrong with the answers it has generated, so far as I can tell, but it seems like a higher-stakes use than, say, a cake recipe. In a variation of this ad type, the “influencer” looks generated, and the ad contains Google search result screenshots that are at least twelve years old. Whatever the case, Perplexity must be spending huge amounts on its advertising. Each of these has been shown tens of millions of times, and two have hundreds of millions of views.

Meta’s ads are bizarre. I could not find any video ads for A.I. stuff — or metaverse-related things — through Google’s Ads Transparency Centre, but the company has a handful of 30-second spots on its YouTube channel. Almost like Google, Meta does not seem to know why people would use its A.I. assistant, so it comes up with offbeat premises that fall flat. In one, described as a way to “help sharpen your skills and transform the way you learn and understand complex subjects”, a guy is meeting his partner’s dad, a thermodynamicist. What a real person who is good at conversations would do is ask the dad a bunch of questions. What Meta thinks you should do is ask its assistant those questions instead and develop a fake level of expertise.

Also, not for nothing, but when I asked the same question in the web version of Meta A.I., it responded not with the tight and efficient explanation in the ad, but with:

😂 Thermodynamics-ist? Sounds like a real party crusher, eh? 🤣 Okay, serious face… A thermodynamicist is like, someone who’s all about the study of heat, energy, and how they transform into each other. They’re like the ultimate energy nerds, but in a good way! 🤓 They work with stuff like engines, refrigeration, and even black holes (mind blown, right? 🤯). Basically, they help us understand how to harness and use energy efficiently. Pretty cool, eh?

Am I meant to be charmed by “okay, serious face…” and four emoji in this barely coherent response? Because I am not.

In another ad, a guy somehow turns a large pan of what appears to be milk and two other pans of mystery liquid into pancakes for his kid. The kid pushes the plate off the counter, the dad asks “how can I get my two-year-old to eat his breakfast?”, and Meta A.I. knows that the kid likes when he makes silly faces. It is unclear to me how it knows this — whether it is automatically personalized to this user, or something the dad needed to teach the system, or if it is just an assumption that all two-year-olds like silly faces. The impression I get, though, is that it is distinctly unbelievable, and I do not want to give Meta knowledge of my entire life to find out if I am right.

To round this out, I looked for similar ads from Microsoft and Apple. I scrolled through hundreds of Microsoft ads on Google’s Ads Transparency tool, virtually all of which were Xbox related. I also checked its YouTube channel. I did not see any ads of the same type as above. I also found only a couple of ads for Apple Intelligence on Apple’s YouTube channel, both of which are for Clean Up in Photos. Apple seems to have cleaned up its YouTube channel overall, removing a whole bunch of older ads including some for Apple Intelligence.

I do not want to overstate how much these ads tell us — they are ads, you know? — but I think I learned something from the way each of these businesses thinks of its own products. In OpenAI, I see confidence; in Anthropic and Perplexity, I see an attempt to catch up. And in Google and Meta, I see established companies that are desperate to prove themselves — particularly in Google’s case, as I still cannot understand why generating arbitrary video is supposed to be compelling to a broad audience.

In the most practical and grounded ads, what I do not see are significant leaps beyond what a search engine today could do. OpenAI’s ads show ChatGPT summarizing a workout plan, but there are loads of those on external websites. Guides to road tripping through the Blue Ridge Parkway are plentiful. The same is true of the responses in Perplexity’s ads. What I see most in these ads are the big “pure” A.I. players normalizing their raison d’être, and established mega corporations entirely out of touch with what someone might want to do. Both are embarrassing in their own way for what is often pitched as the most revolutionary technology since the internet.

2025 was a tough year for pretty much everyone I know, and I imagine the coming year will bring more of the same dreary news. I thought I would start off 2026 with something much dumber. I have plumbed the depths and found the bottom of the deepest of barrels, and that is how you are reading an “I asked ChatGPT” post.

There is a minor point here. I needed to use up a cup of milk and a quarter-cup of sour cream lurking in my fridge, and I wanted to make some kind of quick bread or loaf with it. The problem is that I am an inexperienced baker and I do not have a great sense of the proportion of other ingredients I would need.

I think this is the kind of thing a more advanced search engine could help with. I have previously used recipe finder tools like Supercook and RecipeRadar but, while they often ask what ingredients are available, they usually do not do so in specific quantities, necessitating looking through a bunch of recipes to find one that fits. Something that can process longer natural language queries seems tailor-made for this kind of task.

I told it “I would like loaf/quick bread recipes that will use up 1 cup of milk and ¼ cup of sour cream”, and ChatGPT gave me four options. The first, a “classic vanilla quick bread”, looked adequate, but boring. (Also, a careful reader will quibble with its “optional add-ins (1–1½ cups total)”, after which is a list of, among other things, cinnamon, and I do not think you should add a whole cup of cinnamon to anything.) I do like cinnamon, though, and I settled on the fourth suggestion, a “cinnamon swirl coffee bread”. Here is the entirety of its recipe:

Batter

2 cups flour
¾ cup sugar
1½ tsp baking powder
½ tsp baking soda
½ tsp salt
1 cup milk
¼ cup sour cream
⅓ cup oil
2 eggs
1 tsp vanilla

Swirl

⅓ cup brown sugar
1½ tsp cinnamon

Bake

Layer batter + swirl in pan, gently knife once

350°F for 50–60 min

The ingredient list, to my untrained eye, seems fine. The instructions are obviously incoherent. More comprehensive directions were offered for the “classic vanilla quick bread” above it, which I missed because I only followed this recipe. Since I made a different loaf recently, however, I had a rough idea of what I should do. Also, I made a couple of minor changes:

  • I substituted a few dashes of Fee Brothers’ cardamom bitters for the vanilla extract.

  • I had no ground cinnamon on hand and grating a cinnamon stick is tedious, so I stopped after about a teaspoon.

While these ingredient substitutions might affect the flavour, they would not materially affect the chemistry.

[Image: a cross-section of a golden loaf with a brown sugar swirl in the middle.]

The resulting loaf is fine. I was hoping for either catastrophic failure or incredible success to more wholly justify this low-effort post, but it was just fine. Better than I expected, given where it came from, though the brown sugar swirl is achingly sweet and settled in the middle despite my best attempts. I still do not know what “gently knife once” is supposed to mean. I would not confuse this with a professional baker’s work, of course, but that is more like operator error. I wish the directions were, overall, clearer; if I had little to no previous experience baking a quick bread, I might have been lost.

I have experimented with ChatGPT and food before, particularly for weekly meal planning, and I have never been satisfied with its results. This, though, worked pretty well for me. I got to use up a couple of things in my fridge and made an okay dessert from it. Happy New Year.