Month: December 2024

Cristina Criddle and Hannah Murphy, Financial Times:

Meta is betting that characters generated by artificial intelligence will fill its social media platforms in the next few years as it looks to the fast-developing technology to drive engagement with its 3bn users.

[…]

“They’ll have bios and profile pictures and be able to generate and share content powered by AI on the platform … that’s where we see all of this going,” he [Meta’s Connor Hayes] added.

Imagine opening any of Meta’s products after this has taken over. Imagine how little you will see from the friends and family members you actually care about. Imagine how much slop you will be greeted with — a feed alternating between slop, suggested posts, and ads, with just enough of what you actually opened the app to see. Now consider how this will affect people who are more committed to Meta’s products, whether for economic reasons or social cohesion.

A big problem for Meta is that it is institutionally very dumb. I do not want to oversell this, but I truly believe it is the case. There are lots of smart people working there, and its leadership clearly understands something about how people use social media.

But there is a vast sense of dumb in its attempts to deliver the next generation of its products. Its social media products are dependent on “engagement”, which is sometimes a product of users’ actual interest and, at other times, an artifact of Meta’s success or failure in controlling what they see. Maybe its “metaverse” will be interesting one day, but it seems deeply embarrassing so far.

Patrick Beuth et al., in a German-language report in Der Spiegel, as translated by Apple’s built-in translator:

Because many of the vehicle data could be linked to the names and contact details of the drivers, owners or fleet managers. Precise location data could be viewed on 460,000 vehicles, which allowed conclusions to be drawn about the lives of the people behind the steering wheels – just like the two politicians.

[…]

It is a more than embarrassing breakdown for the already struggling group. It’s a shame. Especially in the software, where VW lags behind the competition anyway. Of all things, the security of private data, which the Germans like to cite as a location advantage over the much more lax USA.

Linus Neumann, of the Chaos Computer Club, also German-language, also translated by Safari:

The information collected by VW subsidiary Cariad contains precise information on the location and time of the ignition. The movement data is linked to other personal data. In this way, they also allow conclusions to be drawn about suppliers, service providers, employees or camouflage organizations of the security authorities.

Anthony Alaniz, Motor1:

The hacker group, the Chaos Computer Club, informed Cariad about the vulnerability, which quickly patched the issue. Cariad told Spiegel that the vulnerability was a “misconfiguration” and that the company doesn’t merge data that would allow someone to create a profile about a person. According to the company, the researchers had to combine different data sets by “bypassing several security mechanisms.” It also said it’s unaware of anyone accessing the data other than CCC.

Cariad has a lot of gall to issue a statement redirecting blame to someone defeating “security mechanisms” instead of acknowledging that all this stored data could be re-identified in the first place.

Matthew Green on Bluesky:

I love that Apple is trying to do privacy-related services, but this [“Enhanced Visual Search” setting] just appeared at the bottom of my Settings screen over the holiday break when I wasn’t paying attention. It sends data about my private photos to Apple.

The first mention of this preference I can find is a Reddit thread from August.

Apple says it is an entirely private process:

Enhanced Visual Search in Photos allows you to search for photos using landmarks or points of interest. Your device privately matches places in your photos to a global index Apple maintains on our servers. We apply homomorphic encryption and differential privacy, and use an OHTTP relay that hides IP address. This prevents Apple from learning about the information in your photos. […]

The company goes into more technical detail in a Machine Learning blog post. What I am confused about is what this feature actually does. It sounds like it compares landmarks identified locally against a database too vast to store locally, thus enabling more accurate lookups. It also sounds like matching is done with entirely visual data, and it does not rely on photo metadata. But because Apple did not announce this feature and poorly documents it, we simply do not know. One document says trust us to analyze your photos remotely; another says here are all the technical reasons you can trust us. Nowhere does Apple plainly say what is going on.

Jeff Johnson:

Of course, this user never requested that my on-device experiences be “enriched” by phoning home to Cupertino. This choice was made by Apple, silently, without my consent.

From my own perspective, computing privacy is simple: if something happens entirely on my computer, then it’s private, whereas if my computer sends data to the manufacturer of the computer, then it’s not private, or at least not entirely private. Thus, the only way to guarantee computing privacy is to not send data off the device.

I see this feature implemented with responsibility and privacy in nearly every way, but, because it is poorly explained and enabled by default, it is difficult to trust. Photo libraries are inherently sensitive. It is completely fair for users to be suspicious of this feature.

A press release from U.K. consumer advocacy group Which?:

The consumer champion rated products across four categories and gave them overall privacy scores for factors including consent and what data access they want. Researchers found data collection often went well beyond what was necessary for the functionality of the product – suggesting data could, in some cases, be being shared with third parties for marketing purposes. Which? is calling for firms to prioritise privacy over profits.

This includes products as pedestrian as air fryers, which apparently wanted the precise location of users and permission to record audio. There could be a valid reason for these permissions — for example, perhaps the app allows you to automate the air fryer to preheat when you return home; or, perhaps there is voice control functionality which, for understandable reasons, is not delineated in a permissions request for “recording” one’s voice.

I downloaded the Xiaomi app to look into these possibilities, but I was unable to proceed unless I created an account and connected a relevant product. I also looked at manuals for different smart air fryers from these brands, but that did not clear anything up because — wisely — these manufacturers do not include app-related information in their printed documentation.

Even if these permissions requests are perfectly innocent and were correctly documented — according to Which?, they are not — it is ridiculous that buyers need to consider all this just to use some appliance.

Matthew Gault, Gizmodo:

But it shouldn’t be this way. Every piece of tech shouldn’t be a devil’s bargain where we allow a tech company to read through our phone’s contact list so we can remotely shut off an oven. More people are pissed about this issue and complaining to their government. Watchdog groups in the U.K. and the U.S. are paying attention.

We can do something about this. We can have comprehensive privacy laws with the backing of well-funded regulators. But until that happens, everything “smart” is capable of lucrative contributions to the broader data broker and surveillance advertising markets, just because people want to use the product’s features.

Emily Rae Pasiuk, CBC News:

Celebrity investor Kevin O’Leary says he is planning to bankroll and build what he says will be the world’s largest artificial intelligence data centre.

The proposal — named Wonder Valley — is slated to be built in the District of Greenview, near Grande Prairie, Alta.

If you believe this pitch for a seventy billion dollar A.I. data centre campus, you are a mark. A sucker. You have been duped. Bamboozled. Hoodwinked.

Our premier is very excited.

Max Fawcett, the National Observer:

The website in question features a video that has to be seen to be believed — and even then, only as a punchline. The imagined bucolic landscape of rolling hills, gentle streams and friendly wildlife feels like a Disney-fied version of northern California rather than northern Alberta. The campus itself is straight out of Silicon Valley.

The narration is even more hilariously unrealistic, if that’s even possible. “There’s a valley,” we’re told, “where technology and nature will come together in perfect harmony. Where the future will pulse at every turn. Bursting with nature and wildlife, nurturing the present as it embraces the future. The valley where the brightest minds unite, where electrons, machines and data come together in an algorithmic dance.”

If moneyed knuckleheads want to try building this thing, that should be between them and the people of the Municipal District of Greenview — but leave the rest of us out of this. The province should not subsidize or contribute to it, but it is obviously enamoured with the new natural gas power plants to be constructed (emphasis mine):

“The GIG’s ideal cold-weather climate, a highly skilled labor force, Alberta’s pro-business policies and attractive tax regime make the GIG the perfect site for this project. We want to deliver transformative economic impact and the lowest possible carbon emissions afforded to us by the quality of gas in the area, our efficient design and the potential to add Geothermal power as well. Together, these factors create a blueprint for sustainability and success that can be recognized worldwide. This is the Greenview Model” said Mr. O’Leary.

Transparently dishonest.

Wired has been publishing a series of predictions about the coming state of the world. Unsurprisingly, most concern artificial intelligence — how it might impact health, music, our choices, the climate, and more. It is an issue of the magazine Wired describes as its “annual trends briefing”, but it is also kind of like a hundred-page op-ed section. It is a mixed bag.

A.I. critic Gary Marcus contributed a short piece about what he sees as the pointlessness of generative A.I. — and it is weak. That is not necessarily because of any specific argument, but because of how unfocused it is despite its brevity. It opens with a short history of OpenAI’s models, with Marcus writing “Generative A.I. doesn’t actually work that well, and maybe it never will”. Thesis established, he begins building the case:

Fundamentally, the engine of generative AI is fill-in-the-blanks, or what I like to call “autocomplete on steroids.” Such systems are great at predicting what might sound good or plausible in a given context, but not at understanding at a deeper level what they are saying; an AI is constitutionally incapable of fact-checking its own work. This has led to massive problems with “hallucination,” in which the system asserts, without qualification, things that aren’t true, while inserting boneheaded errors on everything from arithmetic to science. As they say in the military: “frequently wrong, never in doubt.”

Systems that are frequently wrong and never in doubt make for fabulous demos, but are often lousy products in themselves. If 2023 was the year of AI hype, 2024 has been the year of AI disillusionment. Something that I argued in August 2023, to initial skepticism, has been felt more frequently: generative AI might turn out to be a dud. The profits aren’t there — estimates suggest that OpenAI’s 2024 operating loss may be $5 billion — and the valuation of more than $80 billion doesn’t line up with the lack of profits. Meanwhile, many customers seem disappointed with what they can actually do with ChatGPT, relative to the extraordinarily high initial expectations that had become commonplace.

Marcus’ financial figures here are bizarre and incorrect. He quotes a Yahoo News-syndicated copy of a PC Gamer article, which references a Windows Central repackaging of a paywalled report by the Information — a wholly unnecessary game of telephone when the New York Times obtained financial documents with the same conclusion, and which were confirmed by CNBC. The summary of that Information article — that OpenAI “may run out of cash in 12 months, unless they raise more [money]”, as Marcus wrote on X — is somewhat irrelevant now after OpenAI proceeded to raise $6.6 billion at a staggering valuation of $157 billion.

I will leave analysis of these financials to MBA types. Maybe OpenAI is like Amazon, which took eight years to turn its first profit, or Uber, which took fourteen years. Maybe it is unlike either and there is no way to make this enterprise profitable.

None of that actually matters, though, when considering Marcus’ actual argument. He posits that OpenAI is financially unsound as-is, and that Meta’s language models are free. Unless OpenAI “come outs [sic] with some major advance worthy of the name of GPT-5 before the end of 2025”, the company will be in a perilous state and, “since it is the poster child for the whole field, the entire thing may well soon go bust”. But hold on: we have gone from ChatGPT disappointing “many customers” — no citation provided — to the entire concept of generative A.I. being a dead end. None of this adds up.

The most obvious problem is that generative A.I. is not just ChatGPT or other similar chat bots; it is an entire genre of features. I wrote earlier this month about some of the features I use regularly, like Generative Remove in Adobe Lightroom Classic. As far as I know, this is no different than something like OpenAI’s Dall-E in concept: it has been trained on a large library of images to generate something new. Instead of responding to a text-based prompt, it predicts how it should replicate textures and objects in an arbitrary image. It is far from perfect, but it is dramatically better than the healing brush tool before it, and clone stamping before that.

There are other examples of generative A.I. as features of creative tools. It can extend images and replace backgrounds pretty well. The technology may be mediocre at making video on its own terms, but it is capable of improving the quality of interpolated slow motion. In the technology industry, it is good at helping developers debug their work and generate new code.

Yet, if you take Marcus at his word, these things, and everything else generative A.I., “might turn out to be a dud”. Why? Marcus does not say. He does, however, keep underscoring how shaky he finds OpenAI’s business situation. But this Wired article is ostensibly about generative A.I.’s usefulness — or, in Marcus’ framing, its lack thereof — which is completely irrelevant to this one company’s financials. Unless, that is, you believe the reason OpenAI will lose five billion dollars this year is because people are unhappy with it, which is not the case. It simply costs a fortune to train and run.

The one thing Marcus keeps coming back to is the lack of a “moat” around generative A.I., which is not an original position. Even if this is true, I do not see this as evidence of a generative A.I. bubble bursting — at least, not in the sense of how many products it is included in or what capabilities it will be trained on.

What this looks like, to me, is commoditization. If there is a financial bubble, this might mean it bursts, but it does not mean the field is wiped out. Adobe is not disappearing; neither are Google or Meta or Microsoft. While I have doubts about whether chat-like interfaces will continue to be a way we interact with generative A.I., it continues to find relevance in things many of us do every day.

Chris Doel in a video on YouTube:

The rechargeable cells in disposable vapes are so valuable, why are they thrown away after one use?!

In this video I extract 130 disposable vape batteries and convert it into a 1500W ebike battery.

There are probably some money-minded people who are impressed by how commoditized lithium-ion batteries have become. But this seems to me like obvious waste. For one thing, it is a bad idea to throw lithium-ion batteries in the garbage, let alone litter with them. For another, this is a terribly inefficient use of resources, and the manufacturers must be aware of that. One of them writes, in its FAQ, that its disposable vape model “is a rechargeable device containing a non-replaceable lithium-ion battery”, but because it is not refillable, it cannot be re-used. It seems impossible to me to write a sentence like that without realizing its absurdity.

(Via Ian Betteridge, who had a great point about this but is currently doing a website migration and I am unable to find the post.)

Imogen West-Knights, the Dial:

I guess it depends on what you think learning a language means or should mean. When we say that we are “learning a language,” the assumption is that the goal is to reach conversational fluency. To speak to other speakers. When I was doing Duolingo Irish, I did not learn Irish, but I did learn a fair bit about Irish: how to pronounce Irish words, the way syntax works in the language, and so on. I do read French reasonably well. Even when I was barely able to speak a sentence in Swedish, it was still better to be able to understand what was being said around me than not to understand at all. But was I learning a language or was I learning translation? What is the difference?

I am approaching a year-long Duolingo streak, and this piece spoke to me in its criticisms of Duolingo and its appreciation of it. I am not bilingual and, though that would be a nice thing to achieve, I am fully aware I cannot achieve that through a few minutes per day in an app. But learning a little bit every day has — equally slowly — given me a glimpse into how to begin learning more. It opens the door to a different language for me in a way other techniques never have.

Suzanne Smalley, the Record:

The developer of the powerful Pegasus spyware was found liable on Friday for its role in the infection of devices belonging to 1,400 WhatsApp users.

The precedent-setting ruling from a Northern California federal judge could lead to massive damages against NSO Group, whose notorious spyware has been reportedly used, and often abused, by a roster of anonymous government clients worldwide.

Apple dropped its similar suit against NSO Group in September on the grounds it believed it would be unable to compel the production of evidence. But the judge in WhatsApp’s case, Phyllis J. Hamilton, was correctly furious (PDF) about NSO Group’s behaviour:

Overall, the court concludes that defendants have repeatedly failed to produce relevant discovery and failed to obey court orders regarding such discovery. Most significant is the Pegasus source code, and defendants’ position that their production obligations were limited to only the code on the AWS server is a position that the court cannot see as reasonable given the history and context of the case. Moreover, defendants’ limitation of its production such that it is viewable only by Israeli citizens present in Israel is simply impracticable for a lawsuit that is to be litigated in this district.

Accordingly, the court concludes that plaintiffs’ motion for sanctions must be GRANTED. […]

I am not a lawyer, and it seems Apple’s — presumably, very expensive — representation had good reason to worry about this case. But I wish they would have pursued it. It would have been nice to see NSO Group on the receiving end of two of these rulings instead of just one.

Liz Pelly, Harper’s:

According to a source close to the company, Spotify’s own internal research showed that many users were not coming to the platform to listen to specific artists or albums; they just needed something to serve as a soundtrack for their days, like a study playlist or maybe a dinner soundtrack. In the lean-back listening environment that streaming had helped champion, listeners often weren’t even aware of what song or artist they were hearing. As a result, the thinking seemed to be: Why pay full-price royalties if users were only half listening? It was likely from this reasoning that the Perfect Fit Content program was created.

After at least a year of piloting, PFC was presented to Spotify editors in 2017 as one of the company’s new bets to achieve profitability. According to a former employee, just a few months later, a new column appeared on the dashboard editors used to monitor internal playlists. The dashboard was where editors could view various stats: plays, likes, skip rates, saves. And now, right at the top of the page, editors could see how successfully each playlist embraced “music commissioned to fit a certain playlist/mood with improved margins,” as PFC was described internally.

Reading this article made me feel quite sad. The musicians responsible for tracks like these seem quite talented. Browsing the roster of Epidemic Sound — one of the production companies cited in Pelly’s story — indicates many of them are working in multiple genres. I did not hear anything particularly interesting or notable, but I do not want to suggest this is bad; there is a valid place for stock tracks. And, though this article focuses on Spotify, there is no shortage of Epidemic’s music and that of similar production companies in Apple Music playlists, either.

Despite their skill, none of these musicians make very much money from Epidemic Sound, according to Pelly, or in payouts from streaming companies. That is not a unique situation — streaming notoriously pays nearly all artists poorly — though it seems like an ongoing part of the devaluation of music. Filler tracks have long been commonplace, but they used to be confined to production libraries, not placed side-by-side with recognizable artists. For many musicians to earn enough, they become contributors to their own downfall.

(Via Jason Tate.)

Ed Zitron:

The tools we use in our daily lives outside of our devices have mostly stayed the same. While buttons on our cars might have moved around — and I’m not even getting into Tesla’s designs right now — we generally have a brake, an accelerator, a wheel, and a turn signal. Boarding an airplane has worked mostly the same way since I started flying, other than moving from physical tickets to digital ones. We’re not expected to work out “the new way to use a toilet” every few months because somebody decided we were finishing too quickly.

Yet our apps and the platforms we use every day operate by a totally different moral and intellectual compass. While the idea of an update is fairly noble (and not always negative) — that something you’ve bought can be maintained and improved over time is a good thing — many tech platforms see it as a means to further extract and exploit, to push users into doing things that either keep them on the app longer or take more-profitable actions.

A barn-burner of an essay, and a perfect way to summarize this year — this decade, just about — in technology. One may quibble with individual examples — I, for one, cannot remotely co-sign “[t]he destruction of Google Search […] should be written about like a war crime” — but the industry-wide trend leads directly to Zitron’s deserved and palpable frustration.

With my limited understanding of economics terms, I think of this in the context of reducing consumer surplus. A straightforward example is to increase prices — smartphone makers, for example, recognized years ago they were leaving money on the table and would have no problem selling phones for well over a thousand dollars.

Some of these software companies have figured out they are also charging less than they could. Microsoft Office used to cost $200 in the United States for an indefinite license; it is now $70 per year, which means that, after three years, users are paying more than the previous one-time fee. Adobe’s Creative Suite used to cost a whopping $2,600; the current pricing for Creative Cloud is $660 per year, so after four years, Creative Cloud costs more. There are advantages to subscription pricing, like ongoing updates; there are also disadvantages, like being required to pay indefinitely to keep using the software. Depending on how often one upgraded under the old model, subscription pricing might even be less expensive in the long run, but the choice is no longer the user’s.
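The break-even arithmetic above can be checked in a few lines. This is a rough sketch using only the prices cited; it ignores upgrade cycles, discounts, and inflation:

```python
# Years of subscribing after which the running total passes the old one-time price.
def breakeven_years(one_time: float, per_year: float) -> float:
    return one_time / per_year

# Microsoft Office: $200 once vs. $70 per year; costs more by year three.
print(round(breakeven_years(200, 70), 1))    # 2.9
# Adobe Creative Suite: $2,600 once vs. $660 per year; costs more by year four.
print(round(breakeven_years(2600, 660), 1))  # 3.9
```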

Remarkably, the consumer surplus is not closed solely through increased dollar amounts. There are, as Zitron ably documents, plenty of non-financial costs, too. I expect this to some extent in free-to-use products, but it infects everything. I am, in the long term, paying more to be annoyed more often — to the huge benefit of these corporations.

The end of the year is an exciting time because, try as I might, I cannot possibly listen to all the records released in a year. That means I miss a whole bunch of terrific albums until they are mentioned in some year-end roundup. These are some of my favourites:

  • The main link on this post goes to Album of the Year and its aggregate roundup of critics’ picks. You already know which album is number one.

  • The Quietus published its quirky-as-usual list earlier this month. There are also several genre-specific lists. Their picks are sometimes hard to like, but never boring.

  • Gramophone last week posted its picks for the best classical albums of the year. It is a big list and, even skimming it, I think there is something in there for just about everyone.

  • Anthony Fantano has begun List Week with some honourable mentions and his favourite EPs of the year. New videos daily with the best — and worst — the year had to offer.

I am sure there are terrific records that do not appear on any of these lists, just as I am sure there are album rankings you reference instead. If there is one you think I would like, please let me know. So far, these are the lists I prefer to help me find great albums I missed in 2024.

Mark Zuckerberg:

Threads strong momentum continues — now 300M+ monthly actives and 100M+ daily actives 💪

It looks like I was right to be skeptical of data from Similarweb which, a few weeks ago, said Threads’ daily user count in the U.K. and U.S. was only one-and-a-half times larger than Bluesky’s 3.5 million. Maybe there really are only 5.25 million Threads users in the U.S. and the other 95 million daily active users are elsewhere, but I doubt it.

Then again, who says Meta and Similarweb are measuring daily users in the same way? Meta is incentivized to inflate its figures to the maximum extent legally allowable. Does this count the number of people who viewed a Threads post as suggested in Instagram? I doubt it. I would like to see both companies show their work, however.

The Verge’s Tom Warren on Threads:

[A]pparently there are 100 million daily active users of Threads. I wonder how this is counted, and how many actually engage with Threads content every day. I wonder because Bluesky still feels a lot more alive than Threads, despite having a lot less users.

Bluesky, at the very least, does not feel like it is full of people trying to replace a lack of personality with Facebook-grade memes and a business consulting side hustle.

Charles Rollet, TechCrunch:

After announcing a crackdown in September, Telegram now says it has removed 15.4 million groups and channels related to harmful content like fraud and terrorism in 2024, noting this effort was “enhanced with cutting-edge AI moderation tools.”

This makes more sense to me if “A.I.” stands for “arrested and indicted”.

Telegram launched a new moderation dashboard. The charts are all shown with daily granularity and you can adjust the date range, but it is hard to get a clear sense of a trend. Rollet reports “there’s a noticeable increase in enforcement after Durov’s arrest in August”, but I am not sure that is so clear. I wanted to find out.

Happily, the chart source data is exposed in the page’s markup. A little copying-and-pasting and a little cleaning up yielded a Numbers document you can download with figures from December 2023 up to yesterday.1 The trend line shows a modest increase in total moderation actions and banned terrorism communities. CSAM bans are, however, slightly declining.
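That copy-and-paste cleanup can be sketched in a few lines of Python. The JSON shape here is a guess at how the chart points might be embedded in the page source, not Telegram’s actual markup:

```python
import re

def extract_chart_data(html: str) -> list[dict]:
    """Pull {date, value} data points out of raw page markup.

    Assumes points are embedded as JSON object literals with "date" and
    "value" keys — a hypothetical shape for illustration only.
    """
    points = []
    for match in re.finditer(r'\{"date":\s*"([^"]+)",\s*"value":\s*(\d+)\}', html):
        points.append({"date": match.group(1), "value": int(match.group(2))})
    return points

sample = '<script>var data = [{"date": "2024-08-01", "value": 120}];</script>'
print(extract_chart_data(sample))  # [{'date': '2024-08-01', 'value': 120}]
```

From there, the cleaned rows can be pasted into a spreadsheet to draw a trend line.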


  1. The site shows zero bans for CSAM and terrorism-related groups for today, but no other day has anything close to zero, so I assume this is an issue of incomplete data. I removed them. ↥︎

Thomas Claburn, the Register:

WordPress hosting firm Automattic and its CEO Matthew Mullenweg have been ordered to stop interfering with the business of rival WP Engine.

California District Court judge Araceli Martínez-Olguín on Tuesday issued a preliminary injunction [PDF] against Automattic and Mullenweg, finding that plaintiff WP Engine is likely to prevail in its claims.

Ernie Smith, Tedium:

Mullenweg’s anger about this situation, well-documented at this point, is rooted in the private equity ownership of WP Engine and how he felt it would threaten to damage the open-source community his company had built. And to be clear, the people who own the platforms we rely on? That is a real issue. The idea that a large company can undermine an open-source project? Point to Mullenweg. He has reasons to be nervous.

But I don’t think that’s really what’s been happening here. I think the concern, if we’re really being honest, reflects frustration that Mullenweg has struggled to make Automattic into the firm that WP Engine has become — the first choice for businesses and agencies looking to get a site online. His actions since September — which, mind you, included building a website promoting the number of WP Engine users that had left that platform — have only come to underline that. And despite his claims otherwise, his actions have clearly spoken in the other direction.

To wit, Samantha Cole, 404 Media:

WordPress co-founder and CEO of Automattic Matt Mullenweg is trolling contributors and users of the WordPress open-source project by requiring them to check a box that says “Pineapple is delicious on pizza.”

The change was spotted by WordPress contributors late Sunday, and is still up as of Monday morning. Trying to log in or create a new account without checking the box returns a “please try again” error.

It is still there now.

Nobody seems to know why this checkbox was changed rather than being removed, but this change was not Mullenweg’s doing — at least, not directly. No reason was provided by the responsible contributor. And this is the CMS upon which over 40% of the web is built, in the hands of these clowns?

Zak Vescera and Adrian Ghobrial, Investigative Journalism Foundation:

An IJF and CTV National News investigation has found 422 Richards St. is one of dozens of cases across Canada where multiple money services businesses (MSBs) are incorporated at the same address, sometimes without the knowledge or consent of the location’s actual occupant. 

Financial crime experts say it goes against the spirit of Canada’s registration requirements for such businesses, which are considered high-risk for money laundering and terrorist financing.

Brian Krebs:

According to [Richard] Sanders, all 122 of the services he tested are processing transactions through a company called Cryptomus, which says it is a cryptocurrency payments platform based in Vancouver, British Columbia. Cryptomus’ website says its parent firm — Xeltox Enterprises Ltd. (formerly certa-pay[.]com) — is registered as a money service business (MSB) with the Financial Transactions and Reports Analysis Centre of Canada (FINTRAC).

Sanders said the payment data he gathered also shows that at least 56 cryptocurrency exchanges are currently using Cryptomus to process transactions, including financial entities with names like casher[.]su, grumbot[.]com, flymoney[.]biz, obama[.]ru and swop[.]is.

Between this and TD Bank laundering drug money, Canada sure is making a name for itself as a great place for financial crime. We are not yet the United Kingdom, but it is good to have aspirations, I suppose.

The U.S. newsmagazine 60 Minutes recently broadcast a short segment about the silent human toll borne as a result of outsourced labelling for A.I. systems. I do not think it is as comprehensive as, for example, Josh Dzieza’s reporting last year — previously linked — but it is worth your time.

Fellow foreigners: I found yt-dlp quite useful.
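For anyone unfamiliar with yt-dlp, it is a command-line downloader, and a basic invocation is short. This is only a sketch — the URL below is a placeholder, not the actual segment — using the format selector syntax from yt-dlp’s own documentation:

```python
import shlex

# Placeholder URL, not the actual segment; substitute whatever you want to save.
url = "https://example.com/some-video"

# yt-dlp's documented format selector: best video no taller than 720p plus
# best audio, falling back to the best combined stream at that size.
cmd = ["yt-dlp", "--format", "bv*[height<=720]+ba/b[height<=720]", url]

# Print a shell-ready command line; paste it into a terminal, or hand the
# list to subprocess.run(cmd, check=True) to run it from Python.
print(shlex.join(cmd))
```

Dropping the `--format` flag entirely also works; yt-dlp defaults to the best available quality.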

Nerima Wako-Ojiwa, executive director of Siasa Place, interviewed by Lesley Stahl:

Our labor law is about 20 years old, it doesn’t touch on digital labor. I do think that our labor laws need to recognize it — but not just in Kenya alone. Because what happens is when we start to push back, in terms of protections of workers, a lot of these companies, they shut down and they move to a neighboring country.

These workers are essential for making these systems function, yet they are treated as disposable by both their immediate employers and their clients: big, rich technology companies. Those companies can and should do better for the people they depend on.

Kevin Purdy, Ars Technica:

It might not ever be fully dead, but Firefox calling it quits on Do Not Track (DNT) is a strong indication that an idealistic movement born more than 13 years ago has truly reached the end of its viable life.

[…]

Besides lacking regulatory teeth, DNT was also generally overcome by advancements in tracking. All the signals put out by a browser — plug-ins, time zone, monitor resolution, even the DNT option itself — could be used to effectively track a user, even across browsers. Apple dropped DNT from Safari in 2019, citing both its ineffectiveness and fingerprinting.

Unfortunately, the replacement for Do Not Track — the Global Privacy Control — is not quickly catching on despite claiming to have “broad industry support” and legal might.
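For reference, the mechanism itself is simple: under the GPC proposal, a participating browser sends the HTTP request header `Sec-GPC: 1` and exposes `navigator.globalPrivacyControl` to scripts. A server-side check could be as small as this sketch — the function name is mine, not anything from the spec:

```python
def honors_gpc_signal(headers: dict[str, str]) -> bool:
    """Return True when a request carries the Global Privacy Control signal.

    Per the GPC proposal, participating browsers send the request header
    "Sec-GPC: 1". Any other value, or a missing header, means no signal.
    """
    # HTTP header names are case-insensitive, so normalize before looking up.
    normalized = {name.lower(): value for name, value in headers.items()}
    return normalized.get("sec-gpc", "").strip() == "1"


print(honors_gpc_signal({"Sec-GPC": "1"}))        # True
print(honors_gpc_signal({"User-Agent": "curl"}))  # False
```

The hard part, of course, is not parsing the header — it is getting sites to honour it, which is exactly where Do Not Track failed.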

Graham Fraser, BBC News:

Apple Intelligence, launched in the UK earlier this week, uses artificial intelligence (AI) to summarise and group together notifications.

This week, the AI-powered summary falsely made it appear BBC News had published an article claiming Luigi Mangione, the man arrested following the murder of healthcare insurance CEO Brian Thompson in New York, had shot himself. He has not.

Fraser also points to an inaccurate summary attributed to the New York Times. Even scrolling through my notifications right now, I can see misleading summaries. One, which Apple Intelligence rendered as “vaccine lawyer helps pick health officials”, actually refers to an anti-vaccine lawyer who thinks immunizing against polio is dangerous. I have seen far dumber examples since this feature became part of beta builds over the past few months.

I am not opposed to the concept of summarized text. It can, in theory, be helpful to glance at my phone and know whether I need to respond to something sooner rather than later. But every error chips away at a user’s trust, to the point where they need to double-check everything for accuracy — and then the summary is creating additional work instead of saving it.

I can see why the BBC is upset about this, particularly after years of declining trust in media. I had notification summaries switched on for only a select few apps. I have now ensured they are turned off for news apps.

Update: Markus Müller-Simhofer:

If you are using macOS 15.2, please be careful with those priority alerts. It listed a fraud email as priority for me.

This is not just a one-off. I have also seen this in my own use. Mail also repeatedly decided to prioritize “guest contributor” spam emails over genuinely useful messages like shipping notifications. Sometimes, it works as expected and is useful; sometimes, the priority message feature simply does not show up. It is bizarre.

Manton Reece:

Trying Sora. It’s extraordinary that this works at all, and it’s even faster than I was expecting. OpenAI seems to have built a whole UI system around this app too.

Something I neglected to mention in my thoughts about Sora from earlier this week — which spiralled into something else entirely — is this U.I., which, though new to OpenAI, feels familiar. In screen recordings, it looks strikingly like Apple’s Photos app for Mac and on iCloud on the web, right down to the way an item’s metadata is displayed. I am sure other applications have a similar layout and appearance, but that was my immediate impression.