Apple:

Apple today announced that Jennifer Newstead will become Apple’s general counsel on March 1, 2026, following a transition of duties from Kate Adams, who has served as Apple’s general counsel since 2017. She will join Apple as senior vice president in January, reporting to CEO Tim Cook and serving on Apple’s executive team.

In addition, Lisa Jackson, vice president for Environment, Policy, and Social Initiatives, will retire in late January 2026. The Government Affairs organization will transition to Adams, who will oversee the team until her retirement late next year, after which it will be led by Newstead. Newstead’s title will become senior vice president, General Counsel and Government Affairs, reflecting the combining of the two organizations. The Environment and Social Initiatives teams will report to Apple chief operating officer Sabih Khan.

What will tomorrow bring, I wonder?

Newstead has spent the past year working closely with Joel Kaplan and fighting the FTC’s case against Meta — successfully, I should add. Before that, she was a Trump appointee at the U.S. State Department. Well positioned, then, to fight the U.S. government’s antitrust lawsuit against Apple under a second-term Trump administration that has successfully solicited Apple’s money.

John Voorhees, MacStories:

Although Apple doesn’t say so in its press release, it’s pretty clear that a few things are playing out among its executive ranks. First, a large number of them are approaching retirement age, and Apple is transitioning and changing roles internally to account for those who are retiring. Second, the company is dealing with departures like Alan Dye’s and what appears to be the less-than-voluntary retirement of John Giannandrea. Finally, the company is reducing the number of Tim Cook’s direct reports, which is undoubtedly to simplify the transition to a new CEO in the relatively near future.

A careful reader will notice Apple’s newsroom page currently has press releases for these departures and, from earlier this week, John Giannandrea’s, but there is nothing about Alan Dye’s. In fact, even in the statement quoted by Bloomberg, Dye is not mentioned. In fairness, Adams, Giannandrea, and Jackson all still have bios on Apple’s leadership page. Dye’s has already been removed.

Starting to think Mark Gurman might be wrong about that FT report.

Jonathan Slotkin, a surgeon and venture capital investor, wrote for the New York Times about data released by Waymo indicating impressive safety improvements over human drivers through June 2025:

If Waymo’s results are indicative of the broader future of autonomous vehicles, we may be on the path to eliminating traffic deaths as a leading cause of mortality in the United States. While many see this as a tech story, I view it as a public health breakthrough.

[…]

There’s a public health imperative to quickly expand the adoption of autonomous vehicles. […]

We should be skeptical of all self-reported stats, but these figures look downright impressive.

Slotkin responsibly notes several caveats, though he neglects to mention the specific cities in which Waymo operates: Austin, Los Angeles, Phoenix, and San Francisco. These are warm cities with relatively low annual precipitation, almost none of which ever falls as snow. Slotkin’s enthusiasm for widespread adoption should be tempered somewhat by this narrow range of climate data. Still, Waymo’s data is compelling. These cars seem to crash less often than those driven by people in the same cities and, in particular, avoid causing serious injuries at an impressive rate.

It is therefore baffling to me that Waymo appears to be treating this as a cushion for experimentation.

Katherine Bindley, in a Wall Street Journal article published the very same day as Slotkin’s Times piece:

The training wheels are off. Like the rule-following nice guy who’s tired of being taken advantage of, Waymos are putting their own needs first. They’re bending traffic laws, getting impatient with pedestrians and embracing the idea that when it comes to city driving, politeness doesn’t pay: It’s every car for itself.

[…]

Waymo has been trying to make its cars “confidently assertive,” says Chris Ludwick, a senior director of product management with Waymo, which is owned by Google parent Alphabet. “That was really necessary for us to actually scale this up in San Francisco, especially because of how busy it gets.”

A couple of years ago, Tesla’s erroneously named “Full Self-Driving” feature began cruising through crosswalks if it judged it could get past a crossing pedestrian in time, and I wrote:

Advocates of autonomous vehicles often say increased safety is one of their biggest advantages over human drivers. Compliance with the law may not be the most accurate proxy for what constitutes safe driving, but not to a disqualifying extent. Right now, it is the best framework we have, and autonomous vehicles should follow the law. That should not be a controversial statement.

I stand by that. A likely reason for Waymo’s impressive data is that its cars behave with caution and deference. Substituting “confidently assertive” driving for that caution is a move in entirely the wrong direction. A Waymo should not roll through stop signs, even if its systems understand nobody is around. It should not mess up the order of an all-way stop intersection. I have problems with the way traffic laws are written, but it is not up to one company in California to develop a proprietary interpretation. Just follow the law.

Slotkin:

This is not a call to replace every vehicle tomorrow. For one thing, self-driving technology is still expensive. Each car’s equipment costs $100,000 beyond the base price, and Waymo doesn’t yet sell cars for personal use. Even once that changes, many Americans love driving; some will resist any change that seems to alter that freedom.

[…]

There is likely to be some initial public trepidation. We do not need everyone to use self-driving cars to realize profound safety gains, however. If 30 percent of cars were fully automated, it might prevent 40 percent of crashes, as autonomous vehicles both avoid causing crashes and respond better when human drivers err. Insurance markets will accelerate this transition, as premiums start to favor autonomous vehicles.

Slotkin is entirely correct in writing that “Americans love driving” — the U.S. National Household Travel Survey, last conducted in 2022, found 90.5% of commuters said they primarily used a car of some kind (table 7-2, page 50). 4.1% said they used public transit, 2.9% said they walked, and just 2.5% said they chose some other mode of transportation, a category in which taxicabs are grouped with bikes and motorcycles. Those figures are about the same as in 2017, though with an unfortunate decline in the number of transit commuters. Commuting is not the only reason for travelling, of course, but this suggests to me that even if every taxicab ride were in an autonomous Waymo, there would still be a massive gap to reach the 30% adoption rate Slotkin wants. And, if insurance companies begin incentivizing autonomous vehicles, it really means the reward will go to rich people who can afford to buy a new car.

Any argument about road safety has to be more comprehensive than what Slotkin is presenting in this article. Regardless of how impressive Waymo’s stats are, its vision of the future is an individualized solution to a systemic problem. I have no specialized knowledge in this area, but I am fascinated by it, and I read about this stuff obsessively. The things I want to see are things everyone can benefit from: improvements to street design that encourage drivers to travel at lower speeds, wider sidewalks that make walking more comfortable, and generous wheeling infrastructure for bicycles, wheelchairs, and scooters. We can encourage the adoption of technological solutions, too; if this data holds up, they would seem welcome. But we can do so much better for everyone, and on a more predictable timeline.

This is, as Slotkin writes, a public health matter. Where I live, record numbers of people are dying, in part because more people than ever are driving bigger and heavier vehicles with taller fronts while distracted. Many of those vehicles will still be on the road in twenty years’ time, even if we accelerate the adoption of autonomous vehicles. We do not need to wait for a headline-friendly technological upgrade. There are boring things cities can start doing tomorrow that would save lives.

Mark Gurman, Bloomberg:

Meta Platforms Inc. has poached Apple Inc.’s most prominent design executive in a major coup that underscores a push by the social networking giant into AI-equipped consumer devices.

The company is hiring Alan Dye, who has served as the head of Apple’s user interface design team since 2015, according to people with knowledge of the matter. Apple is replacing Dye with longtime designer Stephen Lemay, according to the people, who asked not to be identified because the personnel changes haven’t been announced.

Big week for changes in Apple leadership.

I am sure more will trickle out about this, but one thing I find notable is that Lemay has been a software designer at Apple for over 25 years. Dye, on the other hand, came from marketing and print design. I do not want to put too much weight on that — someone can be a sufficiently talented multidisciplinary designer — but I am curious to see what Lemay might do in a more senior role.

Admittedly I also have some (perhaps morbid) curiosity about what Dye will do at Meta.

One more note from Gurman’s report:

Dye had taken on a more significant role at Apple after Ive left, helping define how the company’s latest operating systems, apps and devices look and feel. The executive informed Apple this week that he’d decided to leave, though top management had already been bracing for his departure, the people said. Dye will join Meta as chief design officer on Dec. 31.

Let me get this straight: Dye personally launches an overhaul of Apple’s entire visual interface language, then leaves. Is that a good sign for its reception, either internally or externally?

Benj Edwards, Ars Technica:

Microsoft has lowered sales growth targets for its AI agent products after many salespeople missed their quotas in the fiscal year ending in June, according to a report Wednesday from The Information. The adjustment is reportedly unusual for Microsoft, and it comes after the company missed a number of ambitious sales goals for its AI offerings.

Based on Edwards’ summary — I still have no interest in paying for the Information — it seems this mostly affects sales of A.I. “agents”, a riskier technology proposition for businesses. That reads to me like more concrete evidence of a plateau in corporate interest than the surveys reported on by the Economist.

Todd Vaziri:

As far as I can tell, Paul Haine was the first to notice something weird going on with HBO Max’s presentation. In one of season one’s most memorable moments, Roger Sterling barfs in front of clients after climbing many flights of stairs. As a surprise to Paul, you can clearly see the pretend puke hose (that is ultimately strapped to the back side of John Slattery’s face) in the background, along with two techs who are modulating the flow. Yeah, you’re not supposed to see that.

It appears as though this represents the original photography, unaltered before digital visual effects got involved. Somehow, this episode (along with many others) does not include all the digital visual effects that were in the original broadcasts and home video releases. It’s a bizarro mistake for Lionsgate and HBO Max to make, and not to discover until after the show was streaming to customers.

Eric Vilas-Boas, Vulture:

How did this happen? Apparently, this wasn’t actually HBO Max’s fault — the streamer received incorrect files from Lionsgate Television, a source familiar with the exchange tells Vulture. Lionsgate is now in the process of getting HBO Max the correct files, and the episodes will be updated as soon as possible.

It just feels clumsy and silly for Lionsgate to supply the wrong files in the first place, and for nobody at HBO Max to verify they received the correct versions. An amateur mistake, frankly, for an ostensibly premium service costing U.S. $11–$23 per month. If I were king for a day, it would be illegal to sell or stream a remastered version of something — a show, an album, whatever — without the original being available alongside it.

Apple:

Apple today announced John Giannandrea, Apple’s senior vice president for Machine Learning and AI Strategy, is stepping down from his position and will serve as an advisor to the company before retiring in the spring of 2026. Apple also announced that renowned AI researcher Amar Subramanya has joined Apple as vice president of AI, reporting to Craig Federighi. Subramanya will be leading critical areas, including Apple Foundation Models, ML research, and AI Safety and Evaluation. The balance of Giannandrea’s organization will shift to Sabih Khan and Eddy Cue to align closer with similar organizations.

When Apple hired Giannandrea from Google in 2018, the New York Times called it a “major coup”, given that Siri was “less effective than its counterparts at Google and Amazon”. The world has changed a lot in the years since, though: Siri is now also worse than a bunch of A.I. products. Of course, Giannandrea’s role at Apple was not limited to Siri. He spent time on the Project Titan autonomous car, which was cancelled early last year, before moving to generative A.I. projects. The first results of that effort were shown at WWDC last year; the most impressive features have yet to ship.

I feel embarrassed and dumb for hoping Giannandrea would help shake the company out of its bizarre Siri stupor. Alas, he is now on the Graceful Executive Exit Express, where he gets to spend a few more months at Apple in a kind of transitional capacity — you know the drill. Maybe Subramanya will help move the needle. Maybe this ex-Googler will make it so. Maybe I, Charlie Brown, will get to kick that football.

The Economist:

On November 20th American statisticians released the results of a survey. Buried in the data is a trend with implications for trillions of dollars of spending. Researchers at the Census Bureau ask firms if they have used artificial intelligence “in producing goods and services” in the past two weeks. Recently, we estimate, the employment-weighted share of Americans using AI at work has fallen by a percentage point, and now sits at 11% (see chart 1). Adoption has fallen sharply at the largest businesses, those employing over 250 people. Three years into the generative-AI wave, demand for the technology looks surprisingly flimsy.

[…]

Even unofficial surveys point to stagnating corporate adoption. Jon Hartley of Stanford University and colleagues found that in September 37% of Americans used generative AI at work, down from 46% in June. A tracker by Alex Bick of the Federal Reserve Bank of St Louis and colleagues revealed that, in August 2024, 12.1% of working-age adults used generative AI every day at work. A year later 12.6% did. Ramp, a fintech firm, finds that in early 2025 AI use soared at American firms to 40%, before levelling off. The growth in adoption really does seem to be slowing.

I am skeptical of the metrics used by the Economist to produce this summary, in part because they are all over the place, and also because they are mostly surveys. I am not sure people always know they are using a generative A.I. product, especially when those features are increasingly just part of the modern office software stack.

While the Economist has an unfortunate allergy to linking to its sources, I wanted to track them down because a fuller context is sometimes more revealing. I believe the U.S. Census data is the Business Trends and Outlook Survey, though I am not certain because the Economist’s charts are just plain, non-interactive images. In any case, the estimate of falling — not stalling — adoption by workers is the Economist’s own, not one produced by the Census Bureau, which is curious given that two of its other sources indicate more of a plateau than a decline.

The Hartley, et al. survey is available here and contains some fascinating results beyond the specific figures highlighted by the Economist — in particular, that the construction industry has the fourth-highest adoption of generative A.I., that Gemini is shown in Figure 9 as more popular than ChatGPT even though the text on page 7 indicates the opposite, and that the word “Microsoft” does not appear once in the entire document. I have some admittedly uninformed and amateur questions about its validity. At any rate, this is the only source the Economist cites that indicates a decline.

The data point attributed to the tracker operated by the Federal Reserve Bank of St. Louis is curious. The Economist notes “in August 2024, 12.1% of working-age adults used generative A.I. every day at work. A year later 12.6% did”, but I am looking at the dashboard right now, and it says the share using generative A.I. daily at work is 13.8%, not 12.6%. In the same time period, the share of people using it “at least once last week” jumped from 36.1% to 46.9%. I have no idea where that 12.6% number came from.

Finally, Ramp’s data is easy enough to find. Again, I have to wonder about the Economist’s selective presentation. If you switch the chart from an overall view to a sector-based view, you can see adoption of paid subscriptions has more than doubled in many industries compared to October last year. This is true even in “accommodation and food services”, where I have to imagine use cases are few and far between.

Tracking down the actual sources of the Economist’s data has left me skeptical of this article’s premise. However, plateauing interest — at least for now — makes sense to me on a gut level. There is a ceiling to the work one can entrust to interns or entry-level employees, and a similar ceiling applies to many of today’s A.I. tools. There are also sector-level limits. Consider Ramp’s data showing high adoption in the tech and finance industries, with considerably less in sectors like healthcare and food services. (Curiously, Ramp says only 29% of the U.S. construction industry has a subscription to generative A.I. products, while Hartley, et al. says over 40% of the construction industry is using it.)

I commend any attempt to figure out how useful generative A.I. is in the real world. One of the problems with this industry right now is that its biggest purveyors are not public companies and, therefore, have fewer disclosure requirements. Like any company, they are incentivized to inflate their importance, but we have little understanding of how much they are exaggerating. If you want to hear some corporate gibberish, OpenAI interviewed executives at companies like Philips and Scania about their use of ChatGPT, but I do not know what I gleaned from either interview — something about experimentation and vague stuff about people being excited to use it, I suppose. It is not very compelling to me. I am not in the C-suite, though.

The biggest public A.I. firm is arguably Microsoft. It has rolled out Copilot to Windows and Office users around the world. Again, however, its press releases leave much to be desired. Levi Strauss employees, Microsoft says, “report the devices and operating system have led to significant improvements in speed, reliability and data handling, with features like the Copilot key helping reduce the time employees spend searching and free up more time for creating”. Sure. In another case study, Microsoft and Pantone brag about the integration of a colour palette generator that you can use with words instead of your eyes.

Microsoft has every incentive to pretend Copilot is a revolutionary technology. For people actually doing the work, however, its ever-nagging presence might be one of many nuisances getting in the way of the job that person actually knows how to do. A few months ago, the company replaced the familiar Office portal with a Copilot prompt box. It is still little more than a thing I need to bypass to get to my work.

All the stats and apparent enthusiasm about A.I. in the workplace are, as far as I can tell, a giant mess. A problem with this technology is that the ways in which it is revolutionary are often not very useful, its practical application in a work context is a mixed bag depending on industry and role, and its hype encourages otherwise respectable organizations to advertise their proximity to its promised future.

The Economist being what it is, much of this article revolves around the insufficiently realized efficiency and productivity gains, and that is certainly something for business-minded people to think about. But there are more fundamental issues with generative A.I. to struggle with. It is a technology built on a shaky foundation. It shrinks the already-scant field of entry-level jobs. Its results are unpredictable and can validate harm. The list goes on, yet it is being loudly inserted into our SaaS-dominated world as a top-down mandate.

It turns out A.I. is not magic dust you can sprinkle on a workforce to double its productivity. CEOs might be thrilled by having all their email summarized, but the rest of us do not need that. We need things like a better balance of work and real life, good benefits, and adequate compensation. Those are things a team leader cannot buy with a $25-per-month-per-seat ChatGPT business license.

Tyler Hall:

Maybe it’s because my eyes are getting old or maybe it’s because the contrast between windows on macOS keeps getting worse. Either way, I built a tiny Mac app last night that draws a border around the active window. I named it “Alan”.

A good, cheeky name. The results are not what I would call beautiful, but that is not the point, is it? It works well. I wish there were no need for an app that draws a big border around the currently active window; which window is active should be made sufficiently obvious by the system.

Unfortunately, this is a problem plaguing the latest versions of MacOS and Windows alike, which is baffling to me. The bar for what constitutes acceptable user interface design seems to have fallen low enough that it is tripping everyone at the two major desktop operating system vendors.

Hank Green was not getting a lot of traction on a promotional post on Threads about a sale on his store. He got just over thirty likes, which does not sound awful, until you learn that was over the span of seven hours and across Green’s following of 806,000 accounts on Threads.

So he tried replying to rage bait with basically the same post, and that was far more successful. But, also, it has some pretty crappy implications:

That’s the signal that Threads is taking from this: Threads is like oh, there’s a discussion going on.

It’s 2025! Meta knows that “lots of discussion” is not a surrogate for “good things happening”!

I assume the home feed ranking systems are similar for Threads and Instagram — though they might not be — and I cannot tell you how many times my feed is packed with posts from many days to a week prior. So many businesses I frequent use these platforms to promote time-bound things I learn about only after the fact. The same is true of Stories, since they are sorted by how frequently you interact with an account.

Everyone is allowed one conspiracy theory, right? Mine is that a primary reason Meta is hostile to reverse-chronological feeds is that algorithmic ranking forces businesses to buy advertising to reach their own followers. I have no proof to support this, but it seems entirely plausible.

You have seen Moraine Lake. Maybe it was on a postcard or in a travel brochure, or it was on Reddit, or in Windows Vista, or as part of a “Best of California” demo on Apple’s website. Perhaps you were doing laundry in Lucerne. But I am sure you have seen it somewhere.

Moraine Lake is not in California — or Switzerland, for that matter. It is right here in Alberta, between Banff and Lake Louise, and I have been lucky enough to visit many times. One time I was particularly lucky, in a way I only knew in hindsight. I am not sure the confluence of events occurring in October 2019 is likely to be repeated for me.

In 2019, the road up to the lake was open to the public from May until about mid-October, with the exact closing day depending on when it was safe to travel. This is one reason why so many pictures of it show only the faintest hint of snow capping the mountains behind — it is only really accessible in summer.

I am not sure why we decided to head up to Lake Louise and Moraine Lake that Saturday. Perhaps it was just an excuse to get out of the house. It was just a few days before the road was shut for the season.

We visited Lake Louise first and it was, you know, just fine. Then we headed to Moraine.

I posted a higher-quality version of this on my Glass profile.
[Photo: Moraine Lake, Alberta, frozen, with chunks of ice and rocks on its surface.]

Walking from the car to the lakeshore, we could see its surface was that familiar blue-turquoise, but it was entirely frozen. I took a few images from the shore. Then we realized we could just walk on it, as did the handful of other people who were there. This is one of several photos I took from the surface of the lake, the glassy ice reflecting that famous mountain range in the background.

I am not sure I would be able to capture a similar image today. Banff and Lake Louise have received more visitors than ever in recent years, to the extent private vehicles are no longer allowed to travel up to Moraine Lake. A shuttle bus is now required. The lake also does not reliably freeze at an accessible time and, when it does, it can be covered in snow or the water line may have receded. I am not arguing this is an impossible image to create going forward. I just do not think I am likely to see it this way again.

I am very glad I remembered to bring my camera.

Winston Cho, the Hollywood Reporter:

To rewind, authors and publishers have gained access to Slack messages between OpenAI’s employees discussing the erasure of the datasets, named “books 1 and books 2.” But the court held off on whether plaintiffs should get other communications that the company argued were protected by attorney-client privilege.

In a controversial decision that was appealed by OpenAI on Wednesday, U.S. District Judge Ona Wang found that OpenAI must hand over documents revealing the company’s motivations for deleting the datasets. OpenAI’s in-house legal team will be deposed.

Wang’s decision (PDF), to the extent I can read it as a layperson, examines OpenAI’s shifting story about why it erased the books 1 and books 2 data sets — apparently, the only time possible training materials were deleted.

I am not sure it has yet been proven OpenAI trained its models on pirated books. Anthropic settled a similar suit in September, and Meta and Apple are facing similar accusations. For practical purposes, however, it is trivial to show OpenAI used pirated data in general: if you have access to its Sora app, enter any prompt followed by the word “camrip”.

What is a camrip, a strictly law-abiding person might ask? It is a label added to a movie pirated in the old-fashioned way: by pointing a video camera at the screen in a theatre. As a result, these videos have a distinctive look and sound, which is reproduced perfectly by Sora. It is very difficult for me to see a way in which OpenAI could have trained this model to understand what a camrip is without feeding it a bunch of them, and I do not know of a legitimate source for such videos.

The Internet Archive released a WordPress plugin not too long ago:

Internet Archive Wayback Machine Link Fixer is a WordPress plugin designed to combat link rot—the gradual decay of web links as pages are moved, changed, or taken down. It automatically scans your post content — on save and across existing posts — to detect outbound links. For each one, it checks the Internet Archive’s Wayback Machine for an archived version and creates a snapshot if one isn’t available.

Via Michael Tsai:

The part where it replaces broken links with archive links is implemented in JavaScript. I like that it doesn’t modify the post content in your database. It seems safe to install the plug-in without worrying about it messing anything up. However, I had kind of hoped that it would fix the links as part of the PHP rendering process. Doing it in JavaScript means that the fixed links are not available in the actual HTML tags on the page. And the data that the JavaScript uses is stored in an invisible <div> under the attribute data-iawmlf-post-links, which makes the page fail validation.

I love the idea of this plugin, but I do not love this implementation. I think I understand why it works this way: for the nondestructive property mentioned by Tsai, and also to account for its dependence on a third-party service of varying reliability. I would love to see a demo of this plugin in action.
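Out of curiosity, here is a rough sketch of how I imagine the client-side replacement could work, going only by Tsai’s description. The data-iawmlf-post-links attribute name is his; the JSON shape, and everything else below, is my guess, not the plugin’s actual code:

    // Hypothetical reconstruction; not the plugin's source.
    document.addEventListener("DOMContentLoaded", () => {
      const holder = document.querySelector("[data-iawmlf-post-links]");
      if (!holder) {
        return;
      }

      // Assume the invisible <div> holds JSON mapping each broken
      // outbound URL to a Wayback Machine snapshot of it.
      const snapshots = JSON.parse(holder.dataset.iawmlfPostLinks);

      for (const anchor of document.querySelectorAll("a[href]")) {
        const archived = snapshots[anchor.href];
        if (archived) {
          anchor.href = archived; // e.g. https://web.archive.org/web/...
        }
      }
    });

If it works anything like this, it would explain both of Tsai’s observations: the post stored in the database is untouched, and the swapped-in archive URLs never appear in the page’s actual HTML.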

Nicholas Hune-Brown, the Local:

Every media era gets the fabulists it deserves. If Stephen Glass, Jayson Blair and the other late 20th century fakers were looking for the prestige and power that came with journalism in that moment, then this generation’s internet scammers are scavenging in the wreckage of a degraded media environment. They’re taking advantage of an ecosystem uniquely susceptible to fraud—where publications with prestigious names publish rickety journalism under their brands, where fact-checkers have been axed and editors are overworked, where technology has made falsifying pitches and entire articles trivially easy, and where decades of devaluing journalism as simply more “content” have blurred the lines so much it can be difficult to remember where they were to begin with.

This is likely not the first story you have read about a freelancer managing to land bylines in prestigious publications with a heavy dependence on A.I. tools, but it is one told very well.

Good tip from Jeff Johnson:

My business website has a number of “Download on the App Store” links for my App Store apps. Here’s an example of what that looks like:

[…]

The problem is that Live Text, “Select text in images to copy or take action,” is enabled by default on iOS devices (Settings → General → Language & Region), which can interfere with the contextual menu in Safari. Pressing down on the above link may select the text inside the image instead of selecting the link URL.

I love the Live Text feature, but it often conflicts with graphics like these. There is a good, simple, two-line CSS trick for web developers that should cover most situations. Also, if you rock a user stylesheet — and I think you should — it seems to work fine as a universal solution. Any issues I have found have been minor and not worth noting. I say give it a shot.

Update: Adding Johnson’s CSS to a user stylesheet mucks up the layout of Techmeme a little bit. You can exclude it by adding div:not(.ii) > before the selector, making the full rule div:not(.ii) > a:has(> img) { display: inline-block; }.

Quinn Nelson:

[…] at a moment when the Mac has roared back to the centre of Apple’s universe, the iPad feels closer than ever to fulfilling its original promise. Except it doesn’t, not really, because while the iPad has gained windowing and external display support, pro apps, all the trappings of a “real computer”, underneath it all, iPadOS is still a fundamentally mobile operating system with mobile constraints baked into its very DNA.

Meanwhile, the Mac is rumoured to be getting everything the iPad does best: touchscreens, OLED displays, thinner designs.

There are things I quibble with in Nelson’s video, including the above-quoted comparison to mere rumours about the Mac. The rest of the video is more compelling, as it presents real-world, head-to-head comparisons of the same or similar software on each platform.

Via Federico Viticci, MacStories:

I’m so happy that Apple seems to be taking iPadOS more seriously than ever this year. But now I can’t help but wonder if the iPad’s problems run deeper than windowing when it comes to getting serious work done on it.

Apple’s post-iPhone platforms are only as good as Apple will allow them to be. I am not saying it needs to be possible to swap out Bluetooth drivers or monkey around with low-level code, but without more flexibility, platforms like the iPad and Vision Pro are destined to progress only at the rate Apple says is acceptable, and with the third-party apps it says are permissible. These are apparently the operating systems for the future of computers. They are not required to share the iPhone’s limitations, yet they do anyway. Those restrictions are holding back the potential of these platforms.

Marina Dunbar, the Guardian:

Many of the most influential personalities in the “Make America great again” (Maga) movement on X are based outside of the US, including Russia, Nigeria and India, a new transparency feature on the social media site has revealed.

The new tool, called “about this account”, became available on Friday to users of the Elon Musk-owned platform. It allows anyone to see where an account is located, when it joined the platform, how often its username has been changed, and how the X app was downloaded.

This is a similar approach to adding labels or notes to tweets containing misinformation, in that it adds more speech and context. It is more automatic, but the function and intent are comparable, which means Musk’s hobbyist P.R. team must be all worked up. But I checked, and none of them seem particularly bothered. Maybe they actually care about trust and safety now, or maybe they are lying hacks.

Mike Masnick, Techdirt:

For years, Matt Taibbi, Michael Shellenberger, and their allies have insisted that anyone working on these [trust and safety] problems was part of a “censorship industrial complex” designed to silence political speech. Politicians like Ted Cruz and Jim Jordan repeated these lies. They treated trust & safety work as a threat to democracy itself.

Then Musk rolled out one basic feature, and within hours proved exactly why trust & safety work existed in the first place.

Jason Koebler, 404 Media, has been covering the monetization of social media:

This has created an ecosystem of side hustlers trying to gain access to these programs and YouTube and Instagram creators teaching people how to gain access to them. It is possible to find these guide videos easily if you search for things like “monetized X account” on YouTube. Translating that phrase and searching in other languages (such as Hindi, Portuguese, Vietnamese, etc) will bring up guides in those languages. Within seconds, I was able to find a handful of YouTubers explaining in Hindi how to create monetized X accounts; other videos on the creators’ pages explain how to fill these accounts with AI-generated content. These guides also exist in English, and it is increasingly popular to sell guides to make “AI influencers,” and AI newsletters, Reels accounts, and TikTok accounts regardless of the country that you’re from.

[…]

Americans are being targeted because advertisers pay higher ad rates to reach American internet users, who are among the wealthiest in the world. In turn, social media companies pay more money if the people engaging with the content are American. This has created a system where it makes financial sense for people from the entire world to specifically target Americans with highly engaging, divisive content. It pays more.

The U.S. market is a larger audience, too. But those of us in rich countries outside the U.S. should not get too comfortable; I found plenty of guides similar to the ones shown by Koebler for targeting Australia, Canada, Germany, New Zealand, and more. Worrisome — especially if, say, you live somewhere with an electorate trying to drive the place off a cliff.

Update: Several X accounts purporting to be Albertans supporting separatism appear to be from outside Canada, including a “Concerned 🍁 Mum”, “Samantha”, “Canada the Illusion”, and this “Albertan” all from the United States, and a smaller account from Laos. I tried to check more, but X’s fragile servers are aggressively rate-limited.

I do not think people from outside a country are forbidden from offering an opinion on what is happening within it. I would be a pretty staggering hypocrite if I thought that. Nor do I think we should automatically assume people who are stoking hostile politics on social media are necessarily external actors or bots. This phenomenon is more a reflection of who we are now, and of how easily that can be exploited.

Jonathan Weil, Wall Street Journal:

It seems like a marvel of financial engineering: Meta Platforms is building a $27 billion data center in Louisiana, financed with debt, and neither the data center nor the debt will be on its own balance sheet.

That outcome looks too good to be true, and it probably is.

The phrase “marvel of financial engineering” does not seem like a compliment. In addition to the evidence in Weil’s article, Meta is taking advantage of a tax exemption created by Louisiana’s state legislature, even as it argues it is merely a user of this data centre.

Also, colour me skeptical this data centre will truly be “the size of Manhattan” before the bubble bursts, despite the disruption to life in the area.

Update: Paris Martineau points to Weil’s bio noting he was “the first reporter to challenge Enron’s accounting practices”.

Fred Vogelstein, Crazy Stupid Tech — which, again, is a compliment:

We’re not only in a bubble but one that is arguably the biggest technology mania any of us have ever witnessed. We’re even back reinventing time. Back in 1999 we talked about internet time, where every year in the new economy was like a dog year – equivalent to seven years in the old.

Now VCs, investors and executives are talking about AI dog years – let’s just call them mouse years – which is internet time divided by five? Or is it by 11? Or 12? Sure, things move way faster than they did a generation ago. But by that math one year today now equals 35 years in 1995. Really?

A sobering piece that, unfortunately, is somewhat undercut by its lack of a single mention of layoffs, jobs, employment, or any other indication that this bubble will wreck the lives of people far outside its immediate orbit. In fairness, few of the related articles linked at the bottom mention that, either. Articles in Stratechery, the Brookings Institution, and the New York Times want you to think a bubble is just a sign of building something new and wonderful. A Bloomberg newsletter mentions layoffs only in the context of changing odds in prediction markets — I chuckled — while M.G. Siegler notes all the people being laid off as new A.I. hires get multimillion-dollar employment packages. Maybe all the pain and suffering likely to result from the implosion of this massive sector is too obvious for the MBA and finance types to mention. I think it is worth stating, though, not least because it acknowledges other people are worth caring about at least as much as innovation and growth and all that stuff.

Rohan Grover and Josh Widera, Techdirt:

Taking data disaffection into consideration, digital privacy is a cultural issue – not an individual responsibility – and one that cannot be addressed with personal choice and consent. To be clear, comprehensive data privacy law and changing behavior are both important. But storytelling can also play a powerful role in shaping how people think and feel about the world around them.

The correct answer to corporate and government contempt for our privacy must be legislation. A systemic problem is not solved by each of us individually fiddling with confusing settings. But we do not get to adequate laws by treating this as a lost argument.