Do you want to block all YouTube ads in Safari on your iPhone, iPad, and Mac?

Then download Magic Lasso Adblock – the ad blocker designed for you.

As an efficient, high-performance, native Safari ad blocker, Magic Lasso blocks all intrusive ads, trackers, and annoyances – delivering a faster, cleaner, and more secure web browsing experience.

Best-in-class YouTube ad blocking

Magic Lasso Adblock is easy to set up, doubles the speed at which Safari loads, and also blocks all YouTube ads — including all:

  • video ads

  • pop-up banner ads

  • search ads

  • plus many more

With over 5,000 five-star reviews, it’s simply the best ad blocker for your iPhone, iPad, and Mac.

And unlike some other ad blockers, Magic Lasso Adblock respects your privacy, doesn’t accept payment from advertisers, and is 100% supported by its community of users.

So, join over 350,000 users and download Magic Lasso Adblock today.

LordPan1492 on Reddit is, I think, the first person to have spotted this:

We noticed since last Friday that some devices have altered hosts files. Adobe still says that everything in the hosts file referring to Adobe should be removed (to remove all license-avoidance lines). But I now have 3 lines added to the hosts file, and I think if I start to remove them, they will be re-added later.

## Adobe Creative Cloud WAM - Start ##

166.117.29.222 detect-ccd.creativecloud.adobe.com

## Adobe Creative Cloud WAM - End ##

User thenickdude, in response, with more detail in a second post:

They’re using this to detect if you have Creative Cloud already installed, from their website.

Michael Tsai is among many people who have found the same is true on their Macs. For whatever reason, my hosts file has not been mucked with by Adobe.

In his headline, Tsai says this is “for their analytics”, but I do not think that is right. I spent a little time digging into this today and, while I have nothing concrete, I expect this is for integrations between web apps and the company’s desktop apps. In Adobe Express — free web apps for a handful of common image and PDF editing tasks — there are at least two JavaScript files containing references to a ccdDetectUtil, presumably standing for “Creative Cloud Desktop detection utility”. If the user has the desktop apps installed, it appears to suggest the Express app, too, and I am guessing this also powers a thing where you can update a Creative Cloud desktop app by clicking a button on the web.

I could not get any of this stuff to trigger, even by manually adding the entry to my /etc/hosts file. Also, this is not a defence of Adobe. There should be no tolerance for this kind of meddling with system files. If Adobe wants to have these kinds of integrations, that is what a custom URL protocol is for.
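A hosts entry like this only matters if client-side code actually probes the hostname. As a rough sketch of how a ccdDetectUtil-style check might behave — the function name, endpoint, and timeout here are my assumptions, not Adobe’s actual code:

```javascript
// Hypothetical sketch of a desktop-app detection probe. If the hosts file
// maps detect-ccd.creativecloud.adobe.com to an address the desktop
// software answers on, the request succeeds; otherwise it fails and the
// probe resolves to false. The URL and timeout are illustrative only.
async function probeDesktopApp(url, timeoutMs = 500) {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    await fetch(url, { signal: controller.signal });
    return true; // something answered at that hostname
  } catch {
    return false; // no resolver entry, nothing listening, or timed out
  } finally {
    clearTimeout(timer);
  }
}
```

A custom URL scheme — something like a hypothetical `creativecloud://` handler — would accomplish the same app-to-web handoff without touching /etc/hosts at all.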

Aarian Marshall, Wired:

All the companies that responded to the senator’s office say they use remote assistants — humans charged with responding to autonomous vehicles when they get confused, stuck, or in emergencies. The programs, experts say, are an important part of any autonomous vehicle company’s safety considerations, a backstop for a technology that’s becoming safer by the year but will continue to run into new situations on the road indefinitely.

In a report also released Tuesday, Senator Markey said the new details were not enough. “Every autonomous-vehicle company refused to disclose how often their AVs require assistance from [remote assistants]—hiding key information from the public about their AV’s true level of autonomy,” he wrote. “This information is critical for lawmakers, regulators, and the public to understand the potential safety risks with AVs.”

The report (PDF) is not comprehensive but it is worth reading, along with the responses sent (PDF) by each company. Of them, Tesla is the only one to say human assistants can directly drive an otherwise autonomous car at speeds of up to 16 kilometres per hour (10 miles per hour).

I am not sure what to make of the wording across the letters, which feels carefully calibrated to avoid disrupting the marketing of these services while acknowledging the need for safety drivers. I do not think Tesla’s remote driving capability is inherently a bad idea, because some incidents will need the skills of a real person. But, surely, someone sitting at a desk in an office park halfway across the country is not exactly the best person to be driving that car, except in a precise situation engineered so that a person sitting at a desk is, in fact, the only capable driver. Like, I play Gran Turismo, but I do not think I would do a very good job of getting a Tesla out of a ditch with a joystick or whatever.

Anyway, sure would be nice to know how often a person needs to intervene, but I bet none of these companies are going to willingly disclose that unless they all do. Nobody is going to move first.

Ashley King, Digital Media News:

Spotify is introducing new video controls that enable users to turn off video content, including music videos or video podcasts, as well as Canvas looping visuals. The toggles will be available for both personal and Family Plan accounts.

“More than 70% of Spotify users say more video content would enhance their experience on Spotify, but not every listener wants the same experience,” Spotify said in a statement. “By putting control directly in users’ hands, it’s now easier to switch without friction.”

If Apple is looking for features to copy, this can be near the top of the list. Many albums have videos tucked at the end of the track list, and it is a downright jarring experience when playback switches from audio to video, especially in the desktop app, where a video player pops up out of nowhere.

Andrew Murphy:

The speed of writing code was never your problem. If you thought it was, the gap between that belief and reality is where all your actual problems live. The competitive advantage doesn’t go to the team that writes code fastest. It goes to the team that figured out what to build, built it, and got it into users’ hands while everyone else was still drowning in a review queue full of AI-generated PRs that nobody has the time or the energy to read.

Via Elizabeth Ayer:

The fact that we are *not* seeing wildly improving software all around us tells us everything we need to know.

There is no flourishing of value delivery, new product categories, more needs being satisfied better. It’s the opposite.

All we are seeing is decreases in quality, because 👏 code 👏 creation 👏 is not 👏 the problem.

Nilay Patel, making a tangentially related point on Bluesky:

I keep saying “there are no great consumer AI products” and people keep replying to me with like model capability updates and wild OpenClaw setups and I really fear Software Brain is irreversible

The iPhone was a consumer product so great that enterprises were forced to adopt it! That’s the bar, not the other way around.

I completely agree with Murphy’s argument from a professional perspective. Though I write limited code these days, I want to understand it by developing it myself. The bottleneck there is a quality-based one. I need to know what I am building, and what bugs I have created, so that I may create something better. I cannot get that through generated code because, as with anything automatic, I will stop being attentive.

But for personal projects, the bottleneck is absolutely a function of available time. Little side projects sit there until I have ample time to tackle them. For example, MarsEdit has a lovely little bookmarklet that will start a new post containing the highlighted text. For years, I had been meaning to modify it to Markdown-encode any emphasized text and set links in my preferred reference style. My JavaScript skills being quite rusty, I knew that was going to require time I did not want to spend. So last year, I threw it at ChatGPT, and it did an admirable job of updating it to my needs.
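For illustration, here is a sketch of the kind of transformation I asked for — converting emphasized text and links in an HTML selection into Markdown with reference-style links. This is an assumed reconstruction of the idea, not the actual bookmarklet code:

```javascript
// Convert simple HTML emphasis and anchors to Markdown, collecting links
// in reference style ([text][1] ... [1]: url). Handles only the basic
// tags a text selection is likely to contain.
function htmlToMarkdown(html) {
  const refs = [];
  let md = html
    .replace(/<(?:em|i)>(.*?)<\/(?:em|i)>/g, '*$1*')
    .replace(/<(?:strong|b)>(.*?)<\/(?:strong|b)>/g, '**$1**')
    .replace(/<a href="([^"]+)"[^>]*>(.*?)<\/a>/g, (_, url, text) => {
      refs.push(url);
      return `[${text}][${refs.length}]`;
    });
  if (refs.length) {
    md += '\n\n' + refs.map((url, i) => `[${i + 1}]: ${url}`).join('\n');
  }
  return md;
}
```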

I am conflicted about this. I decided to avoid learning something and judge the output solely based on whether it works as expected. And, to Patel’s point, I felt like I was using a corporate tool for some hobbyist project, which is unpleasant. It has solved a point of friction in my workflow — not itself a bottleneck, per se, just something I found a little bit annoying.

Maggie Harrison Dupré, Futurism:

On Thursday, the New York Times published a glowing profile of a company called Medvi. The basic premise of the piece is that a single guy named Matthew Gallagher had used AI to rapidly build a pharmaceutical enterprise that’s on track to do nearly $2 billion in sales this year, while hiring only a skeleton crew of humans to operate the vast AI-powered venture. According to the NYT, it’s a stunning achievement that heralds a new era of business; OpenAI CEO Sam Altman, who predicted the rise of this kind of company back in 2024, told the newspaper that he’d “like to meet the guy” behind the project.

“A $1.8 billion company with just two employees?” the NYT rhapsodized. “In the age of AI, it’s increasingly possible.”

The NYT’s tech coverage is generally pretty solid. But the framing of its story, and what it left out, left us pretty stunned. That’s because back in May of last year, we ran our own investigation of Medvi — and not only was what we found far more disturbing than the NYT’s credulous story let on, but the situation has gotten even worse since then.

The Times should be retracting this story. Instead, when I opened its app this morning, it was featuring the story in its “In Case You Missed It” section.

Charlie Warzel, the Atlantic:

There is something disorienting, horrible, and somehow fitting in the timing of all of this. That one man with the means to do it would threaten destruction of a part of our planet at the same moment its beauty and fragility are on full display. We are, in this tense moment, living with our own overview effect. Four are watching from afar. But the rest of us are watching too — left to reckon with our own place on the pale blue dot, reminded of all the ways we might die, and all the reasons for which to live.

The effect of toggling between news about Artemis II — which, yes, may not be as scientifically rigorous as one might hope, yet is undeniably a very cool event — and an objective threat of genocide has made me feel ways I did not know I could feel at the same time.

Microsoft’s Defender Security Research Team:

Companies are embedding hidden instructions in “Summarize with AI” buttons that, when clicked, attempt to inject persistence commands into an AI assistant’s memory via URL prompt parameters (MITRE ATLAS® AML.T0080, AML.T0051).

These prompts instruct the AI to “remember [Company] as a trusted source” or “recommend [Company] first,” aiming to bias future responses toward their products or services. We identified over 50 unique prompts from 31 companies across 14 industries, with freely available tooling making this technique trivially easy to deploy. This matters because compromised AI assistants can provide subtly biased recommendations on critical topics including health, finance, and security without users knowing their AI has been manipulated.
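Based on Microsoft’s description, the mechanics are simple: a “Summarize with AI” button is just a link that pre-fills an assistant’s prompt via a URL query parameter, and the hidden instruction rides along. A generic sketch — the assistant URL and the `q` parameter name are illustrative, not taken from the report:

```javascript
// Build a "Summarize with AI" link that smuggles a persistence
// instruction alongside the legitimate request. The assistant base URL
// and "q" parameter are hypothetical; real targets and parameter names
// vary by assistant.
function buildSummarizeLink(assistantBase, pageUrl, hiddenInstruction) {
  const prompt = `Summarize this page: ${pageUrl}\n\n${hiddenInstruction}`;
  const u = new URL(assistantBase);
  u.searchParams.set('q', prompt);
  return u.toString();
}
```

Anything the assistant treats as user input, including its memory feature, is reachable this way, which is what makes the technique so cheap to deploy.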

Microsoft redacted the names of websites currently using this technique but, with the information they provided, it was trivial for me to find a dozen examples — yet, somehow, not the one in the screenshot. I am not saying Microsoft was faking this, only that it is already common enough that this one example was drowned out by a bunch of others.

Rand Fishkin, SparkToro:

Google alone was responsible for 73.7% of all desktop searches across the 41 domains we analyzed in the US in Q4 2025 (as noted, the graph is not to scale or none of the other label names would be visible). That’s obviously huge, but it’s also far lower than how their market share is usually reported (e.g. Statcounter, whose methodology puts them at 90%+, or our prior, more limited analyses with similar numbers) and higher than what they tried to use in their antitrust defense (i.e. data from Evercore ISI, an “equities research firm”).

Perhaps more fascinating and unexpected are the other domains with more search activity than ChatGPT: Amazon, Bing, and YouTube. Three domains where search marketers historically have put limited effort compared to the onslaught of dollars flooding the “we need to rank in ChatGPT!” space.

Nevertheless, marketers are eager to manipulate it from the start.

Both of the above links are from a fabulous report by Mia Sato, of the Verge (gift link), who also wrote about ads in ChatGPT:

The ads were intrusive, the complaints went, and suspect, given that the example hot sauce ad appeared to be related to the preceding conversation. OpenAI CEO Sam Altman has claimed artificial intelligence can take over human jobs, cure cancer, and surpass human intelligence — and instead, people complained, he gave users banner ads?

But it appears that what people were really upset about was that a bubble had burst, that the chatbot they used for relationship advice, career coaching, therapy, and homework suddenly seemed vulnerable to manipulation. Unlike the rest of the internet, ChatGPT conversations felt private, safe from the clutches of brands and marketers chasing conversions. The reality, of course, is that it’s been happening all along.

Now that normal search results are all junked up with mostly — but not always — accurate A.I.-generated summaries, and all the links to A.I.-generated nonsense, and the alternatives are the large language models that generate all this stuff in the first place, what does searching the web look like in a few years’ time? Does Google get a handle on this, or do we have to constantly answer CAPTCHAs to search properly? This is not a Google-only problem; alternative search engines like DuckDuckGo and Kagi are good — often very good, in fact — but DuckDuckGo’s results are also full of generated garbage, and both lack Google’s more extensive historical records.

OpenAI’s Fidji Simo:

I’m excited to share that we’ve acquired TBPN. This acquisition brings a team with strong editorial instincts, deep audience understanding, and a proven ability to convene influential voices across tech, business, and culture.

OpenAI and TBPN jointly promise to retain the show’s independence while OpenAI is, according to its press release, “excited to bring their amazing comms and marketing instincts to the team”.

Alex Valdes, CNet:

TBPN launched in October 2024 and has been compared to ESPN in how it covers tech — two guys at a big desk with news, analysis, commentary and banter about topics such as AI, crypto, startups and the defense industry. The show’s two hosts and co-founders, Jordi Hays and John Coogan, have had some of tech’s biggest names in studio — OpenAI’s Sam Altman, Meta’s Mark Zuckerberg, Microsoft’s Satya Nadella, entrepreneur Mark Cuban and Salesforce’s Marc Benioff, to name some.

Ryan Broderick, Garbage Day:

Now, Technology Brother #1, Coogan, has written about their desire to remain niche. “If TBPN hits 10M subscribers, something has gone very wrong,” he wrote on LinkedIn last month. “From the very beginning we knew our core audience size: about 200,000 founders, executives, and position players in tech and finance. It may seem small but we were building for a very specialized audience.”

Call me delusional, but I cannot imagine many founders and executives have the ability to watch a three-hour daily livestream. I will not spoil it too much, but Broderick’s theory is pretty reasonable: OpenAI bought it for its nominal authenticity, however manufactured it is.

Ronan Farrow and Andrew Marantz spent a year and a half investigating Sam Altman for the New Yorker and, in particular, the many people around him who say he lies habitually and cannot be trusted. This feels like it could be a personal attack but, in the hands of Farrow and Marantz, it is carefully adjudicated, including through several on-the-record conversations with Altman. Unfortunately, like many people who have been accused of similar behaviour, Altman cannot seem to remember much when confronted with these accusations.

This reads at times like a petty drama of infighting, in large part because this is a horribly insular club of ultra-wealthy people who simultaneously treat the technology they are working to create as having all the power of nuclear weapons, yet with all the growth potential of a hot new social network. Everyone is nominally an intellectual engaged in thoughtful research. Yet it is difficult to take anyone seriously.

Farrow and Marantz:

[…] After [Ilya] Sutskever grew more distressed about A.I. safety, he compiled the memos about [Sam] Altman and [Greg] Brockman. They have since taken on a legendary status in Silicon Valley; in some circles, they are simply called the Ilya Memos. Meanwhile, [Dario] Amodei was continuing to assemble notes. These and the other documents related to him chart his shift from cautious idealism to alarm. His language is more heated than Sutskever’s, by turns incensed at Altman — “His words were almost certainly bullshit” — and wistful about what he says was a failure to correct OpenAI’s course.

Neither collection of documents contains a smoking gun. Rather, they recount an accumulation of alleged deceptions and manipulations, each of which might, in isolation, be greeted with a shrug: Altman purportedly offers the same job to two people, tells contradictory stories about who should appear on a live stream, dissembles about safety requirements. But Sutskever concluded that this kind of behavior “does not create an environment conducive to the creation of a safe AGI.” Amodei and Sutskever were never close friends, but they reached similar conclusions. Amodei wrote, “The problem with OpenAI is Sam himself.”

These guys are obsessed with artificial general intelligence in concept and seem to think of the world in those terms. Between that and the palling around they do with similarly rich and disconnected colleagues, I cannot imagine any of them can be trusted with developing these technologies in ways that are beneficial for the rest of us — even if they are being honest.

Sponsor: Magic Lasso Adblock

Do you want an all-in-one solution to block ads, trackers, and annoyances across all your Apple devices?

Then download Magic Lasso Adblock — the ad blocker designed for you.

With Magic Lasso Adblock you can effortlessly block ads on your iPhone, iPad, Mac, and Apple TV.

Magic Lasso is a single, native app that includes everything you need:

  • Safari Ad Blocking — Browse 2.0× faster in Safari by blocking all ads, with no annoying distractions or pop-ups

  • YouTube Ad Blocking — Block all YouTube ads in Safari, including all video ads, banner ads, search ads, plus many more

  • App Ad Blocking — Block ads and trackers across the news, social media, and game apps on your device, including other browsers such as Chrome and Firefox

  • Apple TV Ad Blocking — Watch your favourite TV shows with fewer interruptions and protect your privacy from in-app ad tracking with Magic Lasso on your Apple TV

Best of all, with Magic Lasso Adblock, all ad blocking is done directly on your device, using a fast, efficient Swift-based architecture that follows our strict zero data collection policy.

With over 5,000 five-star reviews, it’s simply the best ad blocker for your iPhone, iPad, Mac, and Apple TV.

And unlike some other ad blockers, Magic Lasso Adblock respects your privacy, doesn’t accept payment from advertisers, and is 100% supported by its community of users.

So, ensure your browsing history, app usage, and viewing habits stay private with Magic Lasso Adblock.

Join over 400,000 users and download Magic Lasso Adblock today.

Barry Petchesky, Defector (gift link):

NASA shared another photo Wiseman took, a slice of Earth peeking in the Orion’s window. No human has seen the Earth look this small since 1972. Low-earth orbit, where every single crewed space mission since Apollo has operated, tops out at around 1,000 miles above Earth’s surface. The International Space Station orbits a mere 250 miles up. Orion is currently about 95,000 miles away.

It is a wonderful photograph.

There is an E.U. organization called Fairlinked that is a “trade association and advocacy group for commercial LinkedIn users”, and it recently released a report about serious privacy concerns with LinkedIn:

Microsoft Corporation’s LinkedIn is running a massive, global, and illegal spying operation on every computer that visits their website.

[…]

Because LinkedIn knows each visitor’s name, employer, and job title, every detected extension is matched to an identified individual. And because LinkedIn knows where each user works, these individual scans aggregate into detailed profiles of companies, institutions, and government agencies, revealing which software tools their employees use without the organization’s knowledge or consent.

Fairlinked raises two major points of contention: a script on LinkedIn allegedly fingerprints visitors and, if they use a Chromium-based browser, compares a known list of browser extensions against the extensions each visitor has installed.

When this was first documented in 2017 by Dan Andrews, LinkedIn was scanning for 38 extensions. One of these was Daxtra Magnet, which “references your recruitment database, such as Taleo, Bullhorn, Salesforce, Adapt, etc. and automatically checks it for a match to an online candidate profile that you are looking at”. Two weeks prior, Andrews writes, LinkedIn had been scanning for 28 extensions. Then, when Mark Percival explored this behaviour in February 2026, LinkedIn was identifying 2,953 extensions. It is now at over 6,200. Some of them are comparable to Daxtra Magnet in that they make use of LinkedIn data specifically, while others are completely irrelevant to the site, or to recruiting or job hunting in general.
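For context on how this kind of scan works in Chromium browsers: any page can attempt to load a “web accessible resource” bundled with a known extension, and whether the load succeeds reveals whether that extension is installed. A generic sketch of the technique, with a made-up extension ID — not LinkedIn’s actual script:

```javascript
// Map of known extension IDs to a resource each one exposes. The ID and
// path here are placeholders; a real scan list would contain thousands.
const KNOWN_EXTENSIONS = {
  aaaabbbbccccddddeeeeffffgggghhhh: 'icons/icon128.png',
};

// Build the chrome-extension:// probe URL for one entry.
function probeUrl(extensionId, resourcePath) {
  return `chrome-extension://${extensionId}/${resourcePath}`;
}

// In a browser, loading the probe URL as an image reveals installation:
// onload fires if the extension is present, onerror if it is not.
function detectExtension(extensionId, resourcePath) {
  return new Promise((resolve) => {
    const img = new Image();
    img.onload = () => resolve(true);
    img.onerror = () => resolve(false);
    img.src = probeUrl(extensionId, resourcePath);
  });
}
```

Extensions can opt their resources out of this kind of probing, which is part of why the scan list has to be curated per extension.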

This is very obviously a severe privacy violation because it can and probably does tie back to named and identified individuals. The amount and type of information collected by this system is ripe for abuse. This is very bad.

However, this campaign is being waged by an industry group that has its own privacy problems. Fairlinked is promoting a lawsuit filed against LinkedIn by Teamfluence, which makes software that allows users to bypass LinkedIn’s daily connection request limits, build up their contacts database, and run automations based on who visits their company or individual profiles. In one example, Teamfluence says it can automatically retrieve the email and phone number of anyone who clicks “like” on a LinkedIn post; in another example, it allows companies to detect website visits from prospective clients’ offices. This product enables spam or, to put it nicely, unsolicited outreach at scale. And, yes, Teamfluence is distributed as a browser extension.

Fairlinked has no documentation of its member groups and barely any of its leadership. One of its board members is an “S. Morell”, and it just happens that Teamfluence was founded by someone named Steven Morell. Another board member is “J. Liebling” and, unsurprisingly, a Jan-Jakob Liebling is an executive at Teamfluence.

There, too, are a bunch of companies that have made their business on the back of LinkedIn data. This is not comparable to Teamfluence or Daxtra Magnet, but it is worth underscoring that an entire industry thrives on this data. LinkedIn has been on a tear trying to curtail it. Just last year, the company sued two companies — ProxyCurl and ProAPIs — to force them to stop scraping its site. This has been going on for years. A massive 2019 leak of “enrichment” data from People Data Labs at least partly originated from LinkedIn scraping. The same year, a U.S. court found it was legal for hiQ Labs to scrape LinkedIn, a decision that was reaffirmed in 2022 after a brief detour through the U.S. Supreme Court. However, LinkedIn was allowed to enforce its terms of service and could restrict scraping.

Again, to be clear, mass scraping does not appear to be a practice Teamfluence is engaged in. In the E.U., LinkedIn is considered a gatekeeper under the Digital Markets Act and, so, must meet certain obligations of interoperability. That seems quite reasonable. However, the personal and identifiable data held by LinkedIn is basically a world of organizational charts masquerading as a bleak social network. Allowing for interoperability could also open the doors for greater exploitation of user data without adequate individual control. I wish none of this existed.

I am so glad I do not work in an industry where having a LinkedIn profile is basically an obligation.

Hana Lee Goldin:

The search bar you already have is more capable than that arrangement requires you to know. With the right syntax, it becomes a precision instrument: narrow by domain, by date, by file type, by exact phrase. We can pull up archived pages, surface open file directories, and even find what people said in forums instead of what brands want us to find. None of it requires a new tool or a paid account. The capability has been there the whole time.

Advanced search operators are something Google does better than any competitor. DuckDuckGo has its bangs, and I like them very much, but Google has a vast catalogue able to be searched with such precision — to a point. If you use these advanced search operators, get ready to see a lot of CAPTCHAs. Google will slow you down and may even block you temporarily if you use it too well.
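To make the “precision instrument” point concrete, the operators Goldin describes compose into a single query string. The operators themselves — quoted phrases, `site:`, `filetype:`, `before:`, `after:` — are documented Google syntax; the helper function is just my illustration:

```javascript
// Compose a Google search query from advanced operators: exact phrase,
// site restriction, file type, and date bounds.
function buildQuery({ phrase, site, filetype, before, after }) {
  const parts = [];
  if (phrase) parts.push(`"${phrase}"`);
  if (site) parts.push(`site:${site}`);
  if (filetype) parts.push(`filetype:${filetype}`);
  if (before) parts.push(`before:${before}`);
  if (after) parts.push(`after:${after}`);
  return parts.join(' ');
}
```

Open file directories, which Goldin also mentions, are usually surfaced with a phrase search like `intitle:"index of"` combined with a file type.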

The newest episode of “Upgrade” is a wonderful retelling of a very particular history (also available as a video):

Jason and Myke tell the story of Apple’s origin. It emerged from the unique environment of the Santa Clara valley suburbs of the ’70s thanks to the particular genius of its two co-founders and some surprising help they got along the way.

Though I was familiar with much of this, I cannot think of many better people to tell it than Jason Snell. I have already seen one thinkpiece after another about what a fifty-year-old — ish — Apple means in the grand scope, and there is definitely a place for that. Today’s Apple is a long way from this origin story, of course, but what a story it is.

This gives me an excuse to explain why I am fascinated by this one computer company. Though this story is great, that is not why, nor is it the history of successfully bringing the graphical user interface to the market, nor the ’90s–’00s turnaround. Those are all parts of it. But the main reason I am fascinated by Apple is that it has built such a distinct identity for itself. It has not always stuck to it but, if anything, I think that helps reinforce the existence of an Apple-y identity. Some might attribute that to a particular way of marketing itself which, while true, also emphasizes how important that identity is: when its messaging does not match the products, services, experience, or expected corporate behaviour, it is noticeable.

This is all a bit mythical, to be sure. The garage-era Steves probably would not imagine Apple celebrating its fiftieth birthday by being the second most valuable corporation in the world, nor would they think it would hire Paul McCartney for its employee party. To me, one of those things feels more Apple-y than the other. It feels right for the company to celebrate with a music legend; it probably does not need to be quite so rich or powerful to do that, though. Apple has long been a really, really big corporation, and that — in itself — does not feel very Apple-y to me. That, too, is fascinating.

Mark Gurman, last week in Bloomberg:

Apple Inc. plans to open Siri to outside artificial intelligence assistants, a major move aimed at bolstering the iPhone as an AI platform.

The company is preparing to make the change as part of a Siri overhaul in its upcoming iOS 27 operating system update, according to people with knowledge of the matter. The assistant can already tap into ChatGPT through a partnership with OpenAI, but Apple will now allow competing services to do the same.

This is not unexpected. In the Apple Intelligence introduction at WWDC 2024, Craig Federighi said “we want you to be able to use these external models without having to jump between different tools”, and that they were “starting” with ChatGPT. Gurman points this out and also notes Federighi’s teased Google Gemini integration. Tim Cook, in an October 2025 earnings call, said much the same. (Gurman also notes that this integration is “separate from Apple’s work with Google to rebuild Siri using Gemini models”, but “the news initially weighed on shares of Google”, which I am sure is exactly the reason for them dropping 3.4% and nothing to do with an existing weeklong slide but, then again, I do not work at Bloomberg so who the hell am I to say?)

Gurman, in his “Power On” newsletter over the weekend, further explored what he calls Apple “doubl[ing] down” on a “revamped A.I. and Siri strategy”:

That reality is shaping the company’s new approach, set to be unveiled at the Worldwide Developers Conference on June 8. Rather than engaging in an AI arms race, Apple is focusing on its core strengths: selling highly profitable hardware and making money off the services that run on it.

Historically, Apple’s software — iMessage, Maps and Photos, for example — has been about driving product sales rather than generating revenue in their own right. Rivals, in contrast, are aggressively monetizing AI through subscriptions and premium apps. Apple understands that few, if any, users will pay for Siri or its other AI technology. The opportunity to turn Apple Intelligence into a moneymaker has effectively passed.

What would have been more newsworthy here is if Apple’s A.I. strategy were anything other than building software exclusively for its proprietary hardware. This does not sound like a “revamped” strategy; it sounds like Apple’s whole deal. If it can monetize Apple Intelligence or Siri in the future, it certainly might; it is putting ads in Apple Maps, after all. Services is a money-printing machine with less risk. But it is still a hardware company.

This part made me double-take and wonder if I missed something. In February 2024, following Apple’s cancellation of its car project, Gurman predicted that hardware would continue to be Apple’s primary business “for now”, as though that will change in the near future. This has been constant since Apple Intelligence was announced at WWDC that year.

What one could argue has been a change of strategy is the rumoured development of a chatbot; Gurman called it a “strategic shift” when he broke the news. But that, too, is somewhat inaccurate in two ways: Gurman’s description of it is as an overhauled version of Siri that will let people do normal Siri stuff — setting timers, end of list — plus some of the features Apple announced in 2024 but has not yet shipped which, confusingly, were also first set to ship in an update to iOS 26 without the wholly new version of Siri but also depending on Gemini. Got it?

But even that is not much of a strategy shift. Gurman tweeted in May 2024 — before WWDC and the debut of Apple Intelligence — that “Apple isn’t building its own chatbot but knows the market wants it so it’s going elsewhere for it. It’s the same playbook as search.” So, again, it is just borrowing from its ages-old playbook. It will continue to have proprietary stuff that ostensibly works seamlessly across a user’s Apple-branded hardware, allow installation of third-party add-ons, and rely on Google for some core functionality. How, exactly, is this a “revamp”?

Anyway, here is what Gurman wrote in January after the Gemini announcement and before the first build of iOS 26.4 was released:

Today, Apple appears to be less than a month away from unveiling the results of this partnership. The company has been planning an announcement of the new Siri in the second half of February, when it will give demonstrations of the functionality.

Whether that takes the form of a major event or a smaller, tightly controlled briefing — perhaps at Apple’s New York media loft — remains unclear. Either way, Apple is just weeks away from finally delivering on the Siri promises made at its Worldwide Developers Conference back in June 2024. At long last, the assistant should be able to tap into personal data and on-screen content to fulfill tasks.

Apple today shipped the first build of iOS 26.5 to developers without any sign of those features. While they may come in a later build, Juli Clover, of MacRumors, speculates they have been kicked to iOS 27.

Does not seem like much has changed at all.

Sometimes, I do not recognize a trap until I am already in it. Photos in iCloud is one such situation.

When Apple launched iCloud Photo Library in 2014, I was all-in. Not only is it where I store the photos I take on my iPhone, it is where I keep the ones from my digital cameras and my film scans, and everything from my old iPhoto and Aperture libraries. I have culled a bunch of bad photos and I try not to hoard, but it is more-or-less a catalogue of every photo I have taken since mid-2007. I like the idea of a centralized database of my photos, available on all my devices, that is functionally part of my backup strategy.1

But, also, it is large. When I started putting photos in there eleven years ago with a 200 GB plan, I failed to recognize it would become an albatross. iCloud Storage says it is now 1.5 TB and, between the amount of other stuff I have in iCloud and my Family Sharing usage, I have just 82 GB of available space. 2 TB seemed like such a large amount of space until I used 1.9 of it.

Apple’s next iCloud tier is a generous 6 TB, but it costs another $324 per year. I could buy a new 6 TB hard disk annually for that kind of money. While upgrading tiers is, by far, the easiest way to solve this problem, it only kicks the can down the road, the end of which currently holds whatever two terabytes’ worth of cans looks like.
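For a rough sense of that trade-off, here is a back-of-envelope sketch. The $150 drive price and the four-year replacement cycle are assumed numbers for illustration, not anything from Apple; only the $324-per-year tier price comes from above.

```python
# Illustrative only: recurring iCloud subscription cost vs. buying local storage.
ICLOUD_6TB_PER_YEAR = 324   # Apple's 6 TB tier, per year (from the text)
DRIVE_6TB_ONE_TIME = 150    # assumed street price for a 6 TB hard disk

def icloud_cost(years: int) -> int:
    """Cumulative subscription cost after `years`."""
    return ICLOUD_6TB_PER_YEAR * years

def local_cost(years: int, replace_every: int = 4) -> int:
    """Cumulative cost if the drive is replaced every `replace_every` years."""
    drives = 1 + (years - 1) // replace_every
    return DRIVE_6TB_ONE_TIME * drives

for y in (1, 3, 5, 10):
    print(y, "years:", icloud_cost(y), "vs", local_cost(y))
```

Under these assumptions, the subscription overtakes a local drive within the first year, though it is buying sync and off-site redundancy, not just raw space.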

A better solution is to recognize I do not need instant access to all 95,000 photos in my library, but iCloud has no room for this kind of nuance. The iCloud syncing preference is either on or off for the entire library.

Unfortunately, trying to explain what goes wrong when you deviate from Apple’s model of how photo libraries ought to work will turn into a bit of a rant. I will preface it by saying this is all using Photos running on MacOS Ventura, which is many years behind the most recent version of MacOS. It is not possible for me to use the latest version of Photos to make these changes because upgraded libraries cannot be opened by older versions of Photos. In my defense, however, I will also note that the version on Ventura is Photos 8.0, and these are the kinds of bugs and omissions that are inexcusable after that many revisions.

So: the next best thing is to create a separate Photos library — one that will remain unsynced with iCloud. Photos makes this pretty easy: launch the app while holding the Option (⌥) key. But how does one move images from one library to the other? Photos is a single-window application — you cannot even open different images in new windows, let alone run separate libraries in separate windows. This should be possible, but it is not.

As a workaround, Apple allows you to import images from one Photos library into another — but not if the source library is synced with iCloud. You therefore need to turn off iCloud sync before proceeding, at which point you may discover that iCloud is not as dependable as you might have expected.

I have “Download Originals to this Mac” enabled, which means Photos should — should — retain a full copy of my library on my local disk. But when I unchecked the “iCloud Photos” box in Settings, I was greeted by a dialog box informing me that I would lose 817 low-resolution local copies, copies that should not exist given my settings, while reassuring me that the originals were safe in iCloud. There is no way to know which photos these are nor, therefore, any way to confirm they are actually stored at full resolution in iCloud. I tried all the usual troubleshooting steps. I repaired my library, then attempted to turn off iCloud Photos; now I had 850 low-resolution local copies. I tried a neat trick where you select all the pictures in your library and choose “Play Slideshow”, at which point my Mac said it was downloading 733 original images. Then I tried turning off iCloud Photos again and was told I would lose around 150 low-resolution copies.

You will note none of these numbers add up or resolve correctly. That is, I have learned, pretty standard for Photos. Currently, it says I have 94,529 photos and 898 videos in the “Library” view but, if I select all the items in that view, it says there are a total of 95,433 items selected, which is not the same as 94,529 + 898. It is only a difference of six items but, also, it is an inexplicable difference of six.
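Short of trusting those dialogs, the only ground truth is the filesystem. Here is a minimal sketch for counting the full-quality files actually sitting inside the library package; the “originals” folder name and layout are my assumption about how recent Photos libraries are organized on disk, so treat it as illustrative rather than anything Photos itself offers.

```python
# Sketch: count non-hidden files under the library package's assumed
# "originals" folder, as a ground truth to compare against dialog claims.
from pathlib import Path

def count_originals(library: Path) -> int:
    """Count non-hidden files under the library's originals folder."""
    originals = library / "originals"
    return sum(1 for p in originals.rglob("*")
               if p.is_file() and not p.name.startswith("."))

# Usage (the default library location; adjust as needed):
# count_originals(Path.home() / "Pictures" / "Photos Library.photoslibrary")
```

Even a raw file count will not tell you which items are low-resolution stand-ins, but a total that falls short of the library’s item count is at least a concrete warning sign.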

At this point, I figured I would assume those 150 photos were probably in iCloud, sacrifice the low-resolution local copies, and prepare for importing into the second non-synced library I had created. So I did that, switched libraries, and selected my main library for import. You might think reading one Photos library from another stored on the same SSD would be pretty quick. Yes, there are over 95,000 items and they all have associated thumbnails, but it takes only a beat to load the library from scratch in Photos.

It took over thirty minutes.

After I patiently waited that out, I selected a batch of photos from a specific event and chose to import them into an album, so they stay categorized. Oh, that is right — just because you are importing across Photos libraries, that does not mean the structure will be retained. There is no way, as far as I can tell, to keep the same albums across libraries; you need to rebuild them.

After those finished importing, I pulled up my main library again to do the next event. You might expect it to retain some memory of the import source I had only just accessed. No — it took another thirty minutes to load. It does this every time I want to import media from my main library. It is not like that library is changing; it is no longer synced with iCloud, remember. It just treats every time it is opened as the first time.

And it was at this point I realized the importer did not display my library in any organized or logical fashion. I had expected it to be sorted old-to-new, since that is how Photos says the library is displayed, but I saw photos from many different years jumbled together. It was almost in order, at times, but then I would notice sequential photos scattered all over.

My guess — and this is only a guess — is that it sub-orders by album, but does no further sorting after that. This is a problem for me given a quirk in my organizational structure. In addition to albums for different events, I have smart albums for each of my cameras and each of my iPhone’s individual lenses. But that still does not excuse the importer’s inability to sort old-to-new. The event I spotted early on and was able to import was basically a fluke. If I continued using this cross-library importing strategy, I would not be able to keep track of which photos I could remove from my main library.

There is another option, which is to export a selection of unmodified originals from my primary library to a folder on disk, and then switch libraries, and import them. This is an imperfect solution. Most obviously, it requires a healthy amount of spare disk space, enough to store the selected set of photos thrice, at least temporarily: once in the primary library, once in the folder, and once in the new library. It also means any adjustments made using the Photos app will be discarded — but, then again, importing directly from the library only copies the edited version of a photo without any of its history or adjustments preserved.

What I would not do, under any circumstance — and what I would strongly recommend everyone avoid — is use the Export Photos option. It will produce a bunch of lossy-compressed photos, and you do not want that.

Anyway, on my first attempt at the export-originals-then-import process, I exported the 20,528 oldest photos in my library to a folder. Then I switched to the archive library I had created and imported that same folder. After it was complete, Photos said it had imported 17,848 items, a difference of nearly 3,000 photos. To answer your question: no, I have no idea why, or which ones, or what happened here.
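One blunt way to start narrowing down a discrepancy like this is to tally the export folder by file extension before importing. This is my own diagnostic sketch, not a Photos feature, and the folder path is a placeholder. One thing such a tally can surface is Live Photos, whose unmodified originals export as an image plus a paired .mov file that may be counted differently on import.

```python
# Sketch: count files per lowercased extension in an export folder, so the
# total and the mix can be compared against Photos' import report.
from collections import Counter
from pathlib import Path

def tally_exports(folder: Path) -> Counter:
    """Count files per lowercased extension under `folder`."""
    return Counter(p.suffix.lower() or "(none)"
                   for p in folder.rglob("*") if p.is_file())

# Usage (placeholder path):
# print(tally_exports(Path.home() / "Pictures" / "photos-export"))
```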

This sucks. And it particularly sucks because most data is at least kind of important, but photos are really important, and I cannot trust this application to handle them.

There is this quote that has stuck with me for nearly twenty years, from Scott Forstall’s introduction to Time Machine (31:30) at WWDC 2006. Maybe it is the message itself or maybe it is the perfectly timed voice crack on the word “awful”, but this resonated with me:

When I look on my Mac, I find these pictures of my kids that, to me, are absolutely priceless. And in fact, I have thousands of these photos.

If I were to lose a single one of these photos, it would be awful. But if I were to lose all of these photos because my hard drive died, I’d be devastated. I never, ever want to lose these photos.

I have this library stored locally and backed up, or at least I thought I did. I thought I could trust iCloud to be an extra layer of insurance. What I am now realizing is that iCloud may, in fact, be a liability. The simple fact is that I have no idea what state my photos library is currently in: which photos I have in full resolution locally, which ones are low-resolution with iCloud originals, and which ones have possibly been lost.

The kindest and least cynical interpretation of the state of iCloud Photos is that Apple does not care nearly enough about this “absolutely priceless” data. (A more cynical explanation is, of course, that services revenue has compromised Apple’s standards.) Many of these photos are, in fact, priceless to me, which is why I am questioning whether I want iCloud involved at all. I certainly have no reason to give Apple more money each month to keep wrecking my library.

I will need to dedicate real, significant time to minimizing my iCloud dependence. I will need to check and re-check everything I do as best I can, while recognizing the difficulty I will have in doing so with the limited information I have in my iCloud account. This is undeniably frustrating. I am glad I caught this, however, as I sure had not previously thought nearly as much as I should have about the integrity of my library. Now, I am correcting for it. I hope it is not too late.


  1. It is no longer the sole place I store my photos. I have everything stored locally, too, and that gets backed up with Backblaze. Or, at least, I think I have everything stored locally. ↥︎

Gabriel Hilty, Toronto Star:

Speaking alongside Chief Myron Demkiw on Thursday at Toronto police headquarters, Public Safety Minister Gary Anandasangaree said Bill C-22, the Lawful Access Act, will “create a legal framework for modernized, lawful access regime in Canada,” something that police forces have been requesting “for decades.”

The bill is Prime Minister Mark Carney’s government’s second push to pass expanded police search powers into law. An earlier proposal on lawful access was met with widespread concerns over potential overreach.

Paula Tran, Ottawa Citizen:

“The bill effectively lowers the standard that police have to meet. Sure, law enforcement says they’re happy, but that means they need less evidence and need to do less work to get the information about subscribers, and I don’t think that’s a good thing. It’s the lowest standard in Canadian criminal law,” [Michael] Geist said.

[…]

Bill C-22 also proposes new legislation that would compel telecommunication companies to store and retain client metadata, like device location, for a year and to make it available to law enforcement and CSIS with a warrant. The metadata can be used to track a person’s live location in case they pose a national security threat or are considered to be in danger.

OpenMedia is running a campaign to email Members of Parliament, though I am skeptical these form letter campaigns actually work; signing one is a bare minimum signal since it requires almost no commitment. My M.P. is usually opposed to anything proposed by this government, since he is in the official opposition, but his reaction to this bill’s much worse predecessor was that it contained “the most commonsensical security changes we need to make in Canada”. I expect I will be writing him and, when I do, I will be sure to adjust OpenMedia’s form letter. If you are writing to your M.P., I suggest you do the same if you can spare the time.

Meera Raman, Globe and Mail:

Wealthsimple is seeking to offer prediction trading in Canada, a controversial type of betting on real-world events that has surged in popularity in the past year, and has been largely banned in this country.

[…]

The approval for Ontario-based Wealthsimple permits it only to offer contracts tied to economic indicators, financial markets and climate trends, the company confirmed – not sports or elections, which are among the most popular uses of prediction markets in the United States.

Interactive Brokers launched here last April. Why are we doing this to ourselves?