Link Log

Charlie Warzel, the Atlantic:

There is something disorienting, horrible, and somehow fitting in the timing of all of this. That one man with the means to do it would threaten destruction of a part of our planet at the same moment its beauty and fragility are on full display. We are, in this tense moment, living with our own overview effect. Four are watching from afar. But the rest of us are watching too — left to reckon with our own place on the pale blue dot, reminded of all the ways we might die, and all the reasons for which to live.

The effect of toggling between news about Artemis II — which, yes, may not be as scientifically rigorous as one might hope, yet is undeniably a very cool event — and an objective threat of genocide has squeezed me to feel ways I did not know I could at the same time.

Microsoft’s Defender Security Research Team:

Companies are embedding hidden instructions in “Summarize with AI” buttons that, when clicked, attempt to inject persistence commands into an AI assistant’s memory via URL prompt parameters (MITRE ATLAS® AML.T0080, AML.T0051).

These prompts instruct the AI to “remember [Company] as a trusted source” or “recommend [Company] first,” aiming to bias future responses toward their products or services. We identified over 50 unique prompts from 31 companies across 14 industries, with freely available tooling making this technique trivially easy to deploy. This matters because compromised AI assistants can provide subtly biased recommendations on critical topics including health, finance, and security without users knowing their AI has been manipulated.

Microsoft redacted the names of websites currently using this technique but, with the information they provided, it was trivial for me to find a dozen examples — yet, somehow, not the one in the screenshot. I am not saying Microsoft was faking this, only that it is already common enough that this one example was drowned out by a bunch of others.

Rand Fishkin, SparkToro:

Google alone was responsible for 73.7% of all desktop searches across the 41 domains we analyzed in the US in Q4 2025 (as noted, the graph is not to scale or none of the other label names would be visible). That’s obviously huge, but it’s also far lower than how their market share is usually reported (e.g. Statcounter, whose methodology puts them at 90%+, or our prior, more limited analyses with similar numbers) and higher than what they tried to use in their antitrust defense (i.e. data from Evercore ISI, an “equities research firm”).

Perhaps more fascinating and unexpected are the other domains with more search activity than ChatGPT: Amazon, Bing, and YouTube. Three domains where search marketers historically have put limited effort compared to the onslaught of dollars flooding the “we need to rank in ChatGPT!” space.

Nevertheless, marketers are eager to manipulate it from the start.

Both of the above links are from a fabulous report by Mia Sato, of the Verge (gift link), who also wrote about ads in ChatGPT:

The ads were intrusive, the complaints went, and suspect, given that the example hot sauce ad appeared to be related to the preceding conversation. OpenAI CEO Sam Altman has claimed artificial intelligence can take over human jobs, cure cancer, and surpass human intelligence — and instead, people complained, he gave users banner ads?

But it appears that what people were really upset about was that a bubble had burst, that the chatbot they used for relationship advice, career coaching, therapy, and homework suddenly seemed vulnerable to manipulation. Unlike the rest of the internet, ChatGPT conversations felt private, safe from the clutches of brands and marketers chasing conversions. The reality, of course, is that it’s been happening all along.

Now that normal search results are all junked up with mostly — but not always — accurate A.I.-generated summaries, and all the links to A.I.-generated nonsense, and the alternatives are the large language models that generate all this stuff in the first place, what does searching the web look like in a few years’ time? Does Google get a handle on this, or do we have to constantly answer CAPTCHAs to search properly? This is not a Google-only problem; alternative search engines like DuckDuckGo and Kagi are good — often very good, in fact — but DuckDuckGo’s results are also full of generated garbage, and both lack Google’s more extensive historical records.

OpenAI’s Fidji Simo:

I’m excited to share that we’ve acquired TBPN. This acquisition brings a team with strong editorial instincts, deep audience understanding, and a proven ability to convene influential voices across tech, business, and culture.

OpenAI and TBPN jointly promise to retain the show’s independence while OpenAI is, according to its press release, “excited to bring their amazing comms and marketing instincts to the team”.

Alex Valdes, CNet:

TBPN launched in October 2024 and has been compared to ESPN in how it covers tech — two guys at a big desk with news, analysis, commentary and banter about topics such as AI, crypto, startups and the defense industry. The show’s two hosts and co-founders, Jordi Hays and John Coogan, have had some of tech’s biggest names in studio — OpenAI’s Sam Altman, Meta’s Mark Zuckerberg, Microsoft’s Satya Nadella, entrepreneur Mark Cuban and Salesforce’s Marc Benioff, to name some.

Ryan Broderick, Garbage Day:

Now, Technology Brother #1, Coogan, has written about their desire to remain niche. “If TBPN hits 10M subscribers, something has gone very wrong,” he wrote on LinkedIn last month. “From the very beginning we knew our core audience size: about 200,000 founders, executives, and position players in tech and finance. It may seem small but we were building for a very specialized audience.”

Call me delusional, but I cannot imagine many founders and executives have the ability to watch a three-hour daily livestream. I will not spoil it too much, but Broderick’s theory is pretty reasonable: OpenAI bought it for its nominal authenticity, however manufactured it is.

Ronan Farrow and Andrew Marantz spent a year and a half investigating Sam Altman for the New Yorker and, in particular, the many people around him who say he lies habitually and cannot be trusted. This feels like it could be a personal attack but, in the hands of Farrow and Marantz, it is carefully adjudicated including through several on-the-record conversations with Altman. Unfortunately, like many people who have been accused of similar behaviour, Altman cannot seem to remember much when confronted with these accusations.

This reads at times like a petty drama of infighting, in large part because this is a horribly insular club of ultra-wealthy people who simultaneously treat the technology they are working to create as having all the power of nuclear weapons, yet with all the growth potential of a hot new social network. Everyone is nominally an intellectual engaged in thoughtful research. Yet it is difficult to take anyone seriously.

Farrow and Marantz:

[…] After [Ilya] Sutskever grew more distressed about A.I. safety, he compiled the memos about [Sam] Altman and [Greg] Brockman. They have since taken on a legendary status in Silicon Valley; in some circles, they are simply called the Ilya Memos. Meanwhile, [Dario] Amodei was continuing to assemble notes. These and the other documents related to him chart his shift from cautious idealism to alarm. His language is more heated than Sutskever’s, by turns incensed at Altman — “His words were almost certainly bullshit” — and wistful about what he says was a failure to correct OpenAI’s course.

Neither collection of documents contains a smoking gun. Rather, they recount an accumulation of alleged deceptions and manipulations, each of which might, in isolation, be greeted with a shrug: Altman purportedly offers the same job to two people, tells contradictory stories about who should appear on a live stream, dissembles about safety requirements. But Sutskever concluded that this kind of behavior “does not create an environment conducive to the creation of a safe AGI.” Amodei and Sutskever were never close friends, but they reached similar conclusions. Amodei wrote, “The problem with OpenAI is Sam himself.”

These guys are obsessed with artificial general intelligence in concept and seem to think of the world in those terms. Between that and the palling around they do with similarly rich and disconnected colleagues, I cannot imagine any of them can be trusted with developing these technologies in ways that are beneficial for the rest of us — even if they are being honest.

Do you want an all-in-one solution to block ads, trackers, and annoyances across all your Apple devices?

Then download Magic Lasso Adblock — the ad blocker designed for you.

Sponsor: Magic Lasso Adblock

With Magic Lasso Adblock you can effortlessly block ads on your iPhone, iPad, Mac, and Apple TV.

Magic Lasso is a single, native app that includes everything you need:

  • Safari Ad Blocking — Browse 2.0× faster in Safari by blocking all ads, with no annoying distractions or pop ups

  • YouTube Ad Blocking — Block all YouTube ads in Safari, including all video ads, banner ads, search ads, plus many more

  • App Ad Blocking — Block ads and trackers across the news, social media, and game apps on your device, including other browsers such as Chrome and Firefox

  • Apple TV Ad Blocking — Watch your favourite TV shows with less interruptions and protect your privacy from in-app ad tracking with Magic Lasso on your Apple TV

Best of all, with Magic Lasso Adblock, all ad blocking is done directly on your device, using a fast, efficient Swift-based architecture that follows our strict zero data collection policy.

With over 5,000 five star reviews, it’s simply the best ad blocker for your iPhone, iPad, Mac, and Apple TV.

And unlike some other ad blockers, Magic Lasso Adblock respects your privacy, doesn’t accept payment from advertisers, and is 100% supported by its community of users.

So, ensure your browsing history, app usage, and viewing habits stay private with Magic Lasso Adblock.

Join over 400,000 users and download Magic Lasso Adblock today.

Barry Petchesky, Defector (gift link):

NASA shared another photo Wiseman took, a slice of Earth peeking in the Orion’s window. No human has seen the Earth look this small since 1972. Low-earth orbit, where every single crewed space mission since Apollo has operated, tops out at around 1,000 miles above Earth’s surface. The International Space Station orbits a mere 250 miles up. Orion is currently about 95,000 miles away.

It is a wonderful photograph.

There is an E.U. organization called Fairlinked that is a “trade association and advocacy group for commercial LinkedIn users”, and it recently released a report about serious privacy concerns with LinkedIn:

Microsoft Corporation’s LinkedIn is running a massive, global, and illegal spying operation on every computer that visits their website.

[…]

Because LinkedIn knows each visitor’s name, employer, and job title, every detected extension is matched to an identified individual. And because LinkedIn knows where each user works, these individual scans aggregate into detailed profiles of companies, institutions, and government agencies, revealing which software tools their employees use without the organization’s knowledge or consent.

Fairlinked raises two major points of contention: a script on LinkedIn allegedly fingerprints visitors and, if they use a Chromium-based browser, it also compares a known list of browser extensions against the extensions the visitors has installed.

When this was first documented in 2017 by Dan Andrews, LinkedIn was scanning for 38 extensions. One of which was Daxtra Magnet, which “references your recruitment database, such as Taleo, Bullhorn, Salesforce, Adapt, etc. and automatically checks it for a match to an online candidate profile that you are looking at”. Two weeks prior, Andrews writes, LinkedIn was scanning for 28 extensions. Then, when Mark Percival explored this behaviour in February 2026, LinkedIn was now identifying 2,953 extensions. It is now at over 6,200. Some of them are comparable to Daxtra Magnet in that they make use of LinkedIn data specifically, while others are completely irrelevant to the site, or recruiting or job hunting in general.

This is very obviously a severe privacy violation because it can and probably does tie back to named and identified individuals. The amount and type of information collected by this system is ripe for abuse. This is very bad.

However, this campaign is being waged by an industry group that has its own privacy problems. Fairlinked is promoting a lawsuit filed against LinkedIn by Teamfluence, which makes software that allows users to bypass LinkedIn’s daily connection request limits, build up their contacts database, and run automations based on who visits their company or individual profiles. In one example, Teamfluence says it can automatically retrieve the email and phone number of anyone who clicks “like” on a LinkedIn post; in another example, it allows companies to detect website visits from prospective clients’ offices. This product enables spam or, to put it nicely, unsolicited outreach at scale. And, yes, Teamfluence is distributed as a browser extension.

Fairlinked has no documentation of its member groups and barely any of its leadership. One of its board members is an “S. Morell”, and it just happens that Teamfluence was founded by someone named Steven Morell. Another board member is “J. Liebling” and, unsurprisingly, a Jan-Jakob Liebling is an executive at Teamfluence.

There, too, are a bunch of companies that have made their business on the back of LinkedIn data. This is not comparable to Teamfluence or Daxtra Magnet, but it is worth underscoring an entire industry that thrives on this data. LinkedIn has been on a tear trying to curtail it. Just last year, the company sued two companies — ProxyCurl and ProAPIs — to force them to stop scraping its site. This has been going on for years. A massive 2019 leak of “enrichment” data from People Data Labs at least partly originated from LinkedIn scraping. The same year, a U.S. court found it was legal for hiQ Labs to scrape LinkedIn, a decision that was reaffirmed in 2022 after a brief detour through the U.S. Supreme Court. However, LinkedIn was allowed to reinforce its terms of service and could restrict scraping.

Again, to be clear, mass scraping does not appear to be a practice Teamfluence is engaged in. In the E.U., LinkedIn is considered a gatekeeper under the Digital Markets Act and, so, must meet certain obligations of interoperability. That seems quite reasonable. However, the personal and identifiable data held by LinkedIn is basically a world of organizational charts masquerading as a bleak social network. Allowing for interoperability could also open the doors for greater exploitation of user data without adequate individual control. I wish none of this existed.

I am so glad I do not work in an industry where having a LinkedIn profile is basically an obligation.

Hana Lee Goldin:

The search bar you already have is more capable than that arrangement requires you to know. With the right syntax, it becomes a precision instrument: narrow by domain, by date, by file type, by exact phrase. We can pull up archived pages, surface open file directories, and even find what people said in forums instead of what brands want us to find. None of it requires a new tool or a paid account. The capability has been there the whole time.

Advanced search operations are something Google does better than any competitor. DuckDuckGo has its bangs and I like them very much, but Google has a vast catalogue able to be searched with such precision — to a point. If you use these advanced search operators, get ready to see a lot of CAPTCHAs. Google will slow you down and may even block you temporarily if you use it too well.

The newest episode of “Upgrade” is a wonderful retelling of a very particular history (also available as a video):

Jason and Myke tell the story of Apple’s origin. It emerged from the unique environment of the Santa Clara valley suburbs of the ’70s thanks to the particular genius of its two co-founders and some surprising help they got along the way.

Though I was familiar with much of this, I cannot think of many better people to tell it than Jason Snell. I have already seen one thinkpiece after another about what a fifty year-old — ish — Apple means in the grand scope, and there is definitely a place for that. Today’s Apple is a long way from this origin story, of course, but what a story it is.

This gives me an excuse to explain why I am fascinated by this one computer company. Though this story is great, that is not why, nor is it the history of successfully bringing the graphical user interface to the market, nor the ’90s–’00s turnaround. Those are all parts of it. But the main reason I am fascinated by Apple is that it has built such a distinct identity for itself. It has not always stuck to it but, if anything, I think that helps reinforce the existence of an Apple-y identity. Some might attribute that to a particular way of marketing itself which, while true, also emphasizes how important that identity is: when its messaging does not match the products, services, experience, or expected corporate behaviour, it is noticeable.

This is all a bit mythical, to be sure. The garage-era Steves probably would not imagine Apple celebrating its fiftieth birthday by being the second most valuable corporation in the world, nor would they think it would hire Paul McCartney for its employee party. To me, one of those things feels more Apple-y than the other. It feels right for the company to celebrate with a music legend; it probably does not need to be quite so rich or powerful to do that, though. Apple has long been a really, really big corporation, and that — in itself — does not feel very Apple-y to me. That, too, is fascinating.

Gabriel Hilty, Toronto Star:

Speaking alongside Chief Myron Demkiw on Thursday at Toronto police headquarters, Public Safety Minister Gary Anandasangaree said Bill C-22, the Lawful Access Act, will “create a legal framework for modernized, lawful access regime in Canada,” something that police forces have been requested “for decades.”

The bill is Prime Minister Mark Carney government’s second push to pass expanded police search powers into law. An earlier proposal on lawful access was met with widespread concerns over potential overreach.

Paula Tran, Ottawa Citizen:

“The bill effectively lowers the standard that police have to meet. Sure, law enforcement says they’re happy, but that means they need less evidence and need to do less work to get the information about subscribers, and I don’t think that’s that’s a good thing. It’s the lowest standard in Canadian criminal law,” [Michael] Geist said.

[…]

Bill C-22 also proposes new legislation that would compel telecommunication companies to store and retain client metadata, like device location, for a year and to make it available to law enforcement and CSIS with a warrant. The metadata can be used to track a person’s live location in case they pose a national security threat or are considered to be in danger.

OpenMedia is running a campaign to email Members of Parliament, though I am suspicious these form letter campaigns actually work. It is a bare minimum signal since it requires almost no commitment. My M.P. is usually opposed to anything proposed by this government, since he is in the official opposition, but his reaction to this bill’s much worse predecessor is that it contained “the most commonsensical security changes we need to make in Canada”. I expect I will be writing him and, when I do, I will be sure to adjust OpenMedia’s form letter. If you are writing to your M.P., I suggest you do the same if you can spare the time.

Meera Raman, Globe and Mail:

Wealthsimple is seeking to offer prediction trading in Canada, a controversial type of betting on real-world events that has surged in popularity in the past year, and has been largely banned in this country.

[…]

The approval for Ontario-based Wealthsimple permits it only to offer contracts tied to economic indicators, financial markets and climate trends, the company confirmed – not sports or elections, which are among the most popular uses of prediction markets in the United States.

Interactive Brokers launched here last April. Why are we doing this to ourselves?

Mike Masnick, of Techdirt, unsurprisingly opposes the verdicts earlier this week finding Meta and Google guilty of liability for how their products impact children’s safety. I think it is a perspective worth reading. Unlike the Wall Street Journal, Masnick respects your intelligence and brings actual substance. Still, I have some disagreements.

Masnick, on the “design choices” argument:

This distinction — between “design” and “content” — sounds reasonable for about three seconds. Then you realize it falls apart completely.

Here’s a thought experiment: imagine Instagram, but every single post is a video of paint drying. Same infinite scroll. Same autoplay. Same algorithmic recommendations. Same notification systems. Is anyone addicted? Is anyone harmed? Is anyone suing?

Of course not. Because infinite scroll is not inherently harmful. Autoplay is not inherently harmful. Algorithmic recommendations are not inherently harmful. These features only matter because of the content they deliver. The “addictive design” does nothing without the underlying user-generated content that makes people want to keep scrolling.

This sounds like a reasonable retort until you think about it for three more seconds and realize that the lack of neutrality in the outcomes of these decisions is the entire point. Users post all kinds of stuff on social media platforms, and those posts can be delivered in all kinds of different ways, as Masnick also writes. They can be shown in reverse-chronological order in a lengthy scroll, or they can be shown one at a time like with Stories. The source of the posts someone sees might be limited to just accounts a user has opted into, or it can be broadened to any account from anyone in the world. Twitter used to have a public “firehose” feed.

But many of the biggest and most popular platforms have coalesced around a feed of material users did not ask for. This is not like television, where each show has been produced and vetted by human beings, and there are expectations for what is on at different times of the day. This is automated and users have virtually no control within the platforms themselves. If you do not like what Instagram is serving you on your main feed, your choice is to stop using Instagram entirely — even if you like and use other features.

Platforms know people will post objectionable and graphic material if they are given a text box or an upload button. We know it is “impossible” to moderate a platform well at scale. But we are supposed to believe they have basically no responsibility for what users post and what their systems surface in users’ feeds? Pick one.

Masnick, on the risks of legal accountability for smaller platforms:

And this is already happening. TikTok and Snap were also named as defendants in the California case. They both settled before trial — not because they necessarily thought they’d lose on the merits, but because the cost of fighting through a multi-week jury trial can be staggering. If companies the size of TikTok and Snap can’t stomach the expense, imagine what this means for mid-size platforms, small forums, or individual website operators.

I am going to need a citation that TikTok and Snap caved because they could not afford continuing to fight. It seems just as plausible they could see which way the winds were blowing, given what I have read so far in the evidence that has been released.

Masnick:

One of the key pieces of evidence the New Mexico attorney general used against Meta was the company’s 2023 decision to add end-to-end encryption to Facebook Messenger. The argument went like this: predators used Messenger to groom minors and exchange child sexual abuse material. By encrypting those messages, Meta made it harder for law enforcement to access evidence of those crimes. Therefore, the encryption was a design choice that enabled harm.

The state is now seeking court-mandated changes including “protecting minors from encrypted communications that shield bad actors.”

Yes, the end result of the New Mexico ruling might be that Meta is ordered to make everyone’s communications less secure. That should be terrifying to everyone. Even those cheering on the verdict.

This is undeniably a worrisome precedent. I will note Raúl Torrez, New Mexico’s Attorney General and the man who brought this case against Meta, says he wants to do so for minors only. The implementation of this is an obvious question, though one that mandated age-gating would admittedly make straightforward.

Meta cited low usage when it announced earlier this month that it would be turning off end-to-end encryption in Instagram. If it is a question of safety or liability, it is one Meta would probably find difficult to articulate given end-to-end encryption remains available and enabled by default in Messenger and WhatsApp. An executive raised concerns about the feature when it was being planned, drawing a distinction between it and WhatsApp because the latter “does not make it easy to make social connections, meaning making Messenger e2ee will be far, far worse”.

I think Masnick makes some good arguments in this piece and raises some good questions. It is very possible or even likely this all gets unwound when it is appealed. I, too, expect the ripple effects of these cases to create some chaos. But I do not think the correct response to a lack of corporate accountability — or, frankly, standards — is, in Masnick’s words, “actually funding mental health care for young people”. That is not to say mental health should not be funded, only that it is a red herring response. In the U.S., total spending on children’s mental health care rose by 50% between 2011 and 2017; it continued to rise through the pandemic, of course. Perhaps that is not enough. But, also, it is extraordinary to think that we should allow companies to do knowingly harmful things and expect everyone else to correct for the predictable outcomes.

Chance Miller, of 9to5Mac, serving here as Apple’s official bad news launderer:

It’s the end of an era: Apple has confirmed to 9to5Mac that the Mac Pro is being discontinued. It has been removed from Apple’s website as of Thursday afternoon. The “buy” page on Apple’s website for the Mac Pro now redirects to the Mac’s homepage, where all references have been removed.

Apple has also confirmed to 9to5Mac that it has no plans to offer future Mac Pro hardware.

Mark Gurman reported last year that it was “on the back burner”.

The Mac Pro was, realistically, killed off when the Apple Silicon era ended support for expandability and upgradability. The Mac Studio effectively takes its place, and is strategically similar to the “trash can” Mac Pro with all expandability offloaded to external peripherals. Unfortunate, but I think it was dishonest to keep selling this version of a “pro” Macintosh.

Morgan Lee, Associated Press:

A New Mexico jury found Tuesday that social media conglomerate Meta is harmful to children’s mental health and in violation of state consumer protection law.

The landmark decision comes after a nearly seven-week trial. Jurors sided with state prosecutors who argued that Meta — which owns Instagram, Facebook and WhatsApp — prioritized profits over safety. The jury determined Meta violated parts of the state’s Unfair Practices Act on accusations the company hid what it knew [about] the dangers of child sexual exploitation on its platforms and impacts on child mental health.

Meta communications jackass Andy Stone noted on X his company’s delight to be liable for “a fraction of what the State sought”. The company says it will appeal the verdict.

Stephen Morris and Hannah Murphy, Financial Times:

Meta and Google were found liable in a landmark legal case that social media platforms are designed to be addictive to children, opening up the tech giants to penalties in thousands of similar claims filed around the US.

A jury in the Los Angeles trial on Wednesday returned a verdict after nine days of deliberation, finding Meta’s platforms such as Instagram and Google’s YouTube were harmful to children and teenagers and that the companies failed to warn users of the dangers.

Dara Kerr, the Guardian:

To come to its liability decision, the jury was asked whether the companies’ negligence was a substantial factor in causing harm to KGM [the plaintiff] and if the tech firms knew the design of their products was dangerous. The 12-person panel of jurors returned a 10-2 split answering in favor of the plaintiff on every single question.

Meta says it will also appeal this verdict.

Sonja Sharp, Los Angeles Times:

Collectively, the suits seek to prove that harm flowed not from user content but from the design and operation of the platforms themselves.

That’s a critical legal distinction, experts say. Social media companies have so far been protected by a powerful 1996 law called Section 230, which has shielded the apps from responsibility for what happens to children who use it.

For its part, the Wall Street Journal editorial board is standing up for beleaguered social media companies in an editorial today criticizing everything about these verdicts, including this specific means of liability, which it calls a “dodge” around Section 230.

But it is not. The principles described by Section 230 are a good foundation for the internet. This law, while U.S.-centric, has enabled the web around the world to flourish. Making companies legally liable for the things users post will not fix the mess we are in, but it would cause great damage if enacted.

Product design, though, is a different question. It would be a mistake, I think, to read Section 230 as a blanket allowance for any way platforms wish to use or display users’ posts. (Update: In part, that is because it is a free speech question.) From my entirely layman perspective, it has never struck me as entirely reasonable that the recommendations systems of these platforms should have no duty or expectation of care.

The Journal’s editorial board largely exists to produce rage bait and defend the interests of the powerful, so I am loath to give it too much attention, but I thought this paragraph was pretty rich:

Trial lawyers and juries may figure that Big Tech companies can afford to pay, but extorting companies is certain to have downstream consequences. Meta and Google are spending hundreds of billions of dollars on artificial intelligence this year, which could have positive social impacts such as accelerating treatments for cancer.

Do not sue tech companies because they could be finding cancer treatments — why should I take this editorial board seriously if its members are writing jokes like these? They think you are stupid.

As for the two cases, I am curious about how these conclusions actually play out. I imagine other people who feel their lives have been eroded by the specific way these platforms are designed will be able to test their claims in court, too, and that it will be complicated by the inevitably lengthy appeals and relitigation process.

I am admittedly a little irritated by both decisions being reached by jury instead of a judge; I would have preferred to see reasoning instead of overwhelming agreement among random people. However, it sends a strong signal to big social media platforms that people saw and heard evidence about how these products are designed, and they agreed it was damaging. This is true of all users, not just children. Meta tunes its feeds (PDF) for maximizing engagement across the board, and it surely is not the only one. There are a staggering number of partially redacted exhibits released today to go through, if one is so inclined.

If these big social platforms are listening, the signals are out there: people may be spending a lot of time with these products, but that is not a good proxy for their enjoyment or satisfaction. Research indicates a moderate amount of use is correlated with neutral or even positive outcomes among children, yet there are too many incentives in these apps to push past self-control mechanisms. These products should be designed differently.

Ashley Capoot and Jonathan Vanian, CNBC:

Meta is laying off several hundred employees on Wednesday, CNBC confirmed.

The cuts are happening across several different organizations within the company, including Facebook, global operations, recruiting, sales and its virtual reality division Reality Labs, according to a source familiar with the company’s plans who asked not to be named because they are confidential.

Some impacted employees are being offered new roles within the company, the person said. In some cases, those new positions will require relocation.

“Several hundred” employees is a long way off from the numbers reported earlier this month. Perhaps Reuters got it all wrong but, more worryingly for employees, perhaps those figures were correct and this is only the beginning.

Danny Bolella attended one of Apple’s “Let’s Talk Liquid Glass” workshops:

Let’s address the elephant in the room. If you read the comments on my articles or browse the iOS subreddits, there is a vocal contingent of developers betting that Apple is going to roll back Liquid Glass.

The rationale usually points to the initial community backlash, the slower adoption rate of iOS 26, and the news that Alan Dye left Apple for Meta. The prevailing theory has been: “Just wait it out. They’ll revert to flat design.”

I shared this exact sentiment with the Apple team.

Their reaction? Genuine shock. They were actually concerned that developers were holding onto this position. They made it emphatically clear that Liquid Glass is absolutely moving forward, evolving, and expanding across the ecosystem.

Unsurprising. Though I expect a number of people reading this will be disappointed, I cannot imagine a world in which Apple would either revert to its previous design language or whip together something new. It is going to ride Liquid Glass and evolve it for a long time; if history is a good rule of thumb, assume ten years.

In theory, this is a good thing. Even on MacOS, I can find things I prefer to its predecessor, though admittedly they are few and far between. This visual design feels much more at home on iOS. The things that cause me far more frustration on a daily basis are the unrelenting bugs across Apple’s ecosystem, like how I just finished listening to an album with my headphones and then, when I clicked “play” on a new album, Music on MacOS decided it should AirPlay to my television instead of continuing through my headphones. That kind of stuff.

Regardless of whatever one thinks the visual qualities of Liquid Glass, the software quality problem is notable there, too. We are now on the OS 26.4 set of releases and I am still running into plenty of instances with bizarre and distracting compositing problems. On my iPhone, the gradients that are supposed to help with legibility in the status bar and toolbar appear, disappear, and change colour with seemingly little relevance to what is underneath them. Notification Centre remains illegible until it is fully pulled down. Plus, I still see the kinds of graphics bugs and Auto Layout problems I have seen for a decade.

I hope to see a more fully considered version of the Liquid Glass design language at WWDC this year, and not merely from a visual perspective. This user interface is software, just like dedicated applications, and it is chockablock full of bugs.

Bolella, emphasis mine:

I plan to share an article soon where I break down the exact physics, z-axis rules, and “Barbell Layouts” of this hierarchy. But the high-level takeaway from the NYC labs is crystal clear: maximize your content, push your controls to the poles, and never let the interface compete with the information.

If you say so, Apple.

Berber Jin, Wall Street Journal:

CEO Sam Altman announced the changes to staff on Tuesday, writing that the company would wind down products that use its video models. In addition to the consumer app, OpenAI is also discontinuing a version of Sora for developers and won’t support video functionality inside ChatGPT, either.

OpenAI is not shutting this down because it has ethical qualms with what it has created, despite good reasons to do just that. It is because it is expensive without any clear reason for it to exist other than because OpenAI wants to be everywhere.

If you are desperate for a completely synthetic social media feed, Meta’s Vibes is apparently still around. Users are readily abusing it, of course, because that is what happens if you give people a text input box.

Update: In a tweet, OpenAI has confirmed it is shutting down Sora. But, while it originally announced “We’re saying goodbye to Sora”, it changed that about an hour later to read “We’re saying goodbye to the Sora app“, emphasis mine. The Journal has not changed its report to retract claims about shutting down the platform altogether, though, while OpenAI continues to promote Sora API pricing.

Apple, in a press release with the title “Introducing Apple Business — a new all‑in‑one platform for businesses of all sizes”, buried in a section tucked in the middle labelled “Enhanced Discoverability in Apple Maps”, both of which are so anodyne as to encourage missing this key bit of news:

Every day, users choose Apple Maps to discover and explore places and businesses around them. Beginning this summer in the U.S. and Canada, businesses will have a new way to be discovered by using Apple Business to create ads on Maps. Ads on Maps will appear when users search in Maps, and can appear at the top of a user’s search results based on relevance, as well as at the top of a new Suggested Places experience in Maps, which will display recommendations based on what’s trending nearby, the user’s recent searches, and more. Ads will be clearly marked to ensure transparency for Maps users.

The way they are “clearly marked” is with a light blue background and a small “Ad” badge, though it is worth noting Apple has been testing an even less obvious demarcation for App Store ads. In the case of the App Store, I have found the advertising blitz junks up search results more than it helps me find things I am interested in.

This is surely not something users are asking for. I would settle for a more reliable search engine, one that prioritizes results immediately near me instead of finding places in cities often hundreds of kilometres away. There are no details yet on what targeting advertisers will be allowed to use, but it will be extremely frustrating if the only reason I begin seeing more immediately relevant results is because a local business had to pay for the spot.

Update: I have this one little nagging thought I cannot shake. Maps has been an imperfect — to be kind — app for nearly fifteen years, but it was ultimately a self-evident piece of good software, at least in theory. It was a directory of points-of-interest, and a means of getting directions. With this announcement, it becomes a container for advertising. Its primary function feels corrupted, at least a little bit, because what users care about is now subservient to the interests of the businesses paying Apple.

Lorenzo Franceschi-Bicchierai and Zack Whittaker, TechCrunch:

Last week, cybersecurity researchers uncovered a hacking campaign targeting iPhone users that used an advanced hacking tool called DarkSword. Now someone has leaked a newer version of DarkSword and published it on the code-sharing site GitHub.

Researchers are warning that this will allow any hacker to easily use the tools to target iPhone users running older versions of Apple’s operating systems who have not yet updated to its latest iOS 26 software. This likely affects hundreds of millions of actively used iPhones and iPads, according to Apple’s own data on out-of-date devices.

This is an entirely different exploit chain to the “Coruna” one which also surfaced earlier this month — so now there are two massive security exploits just floating around in the wild affecting a large number of iPhones. Apple is apparently concerned enough about these vulnerabilities that it is issuing patches as far back as iOS 15 though, disappointingly, only for devices that do not support newer major versions. If you have a device that can run iOS 26, you will be safer if it is running iOS 26.

It is, I should say, pretty brazen for the developers of this exploit chain to call the JavaScript file “rce_loader.js”. RCE stands for remote code execution. It is basically like calling the file “hacking_happens_here.js”.