Month: August 2023

A recent story in the New York Times is the latest iteration of the theme that people — especially young people — are increasingly using subtitles when watching movies and shows. It is more-or-less a retread of a late June story in IndieWire, which was similar to an early June story in the Atlantic, which was an awful lot like a February article from ABC News, which reflected a January story from the Guardian, which was not dissimilar to a 2021 BBC News article — and I am slowly running out of ways to explain how much reporting there has been about this phenomenon.

Brian X. Chen, of the Times:

“It’s getting worse,” said Si Lewis, who has run Hidden Connections, a home theater installation company in Alameda, Calif., for nearly 40 years. “All of my customers have issues with hearing the dialogue, and many of them use closed captions.”

The garbled prattle in TV shows and movies is now a widely discussed problem that tech and media companies are just beginning to unravel with solutions such as speech-boosting software algorithms, which I tested. […]

In Chen’s reporting, this home theatre installer blames unclear audio on the tiny speakers in flat screen televisions, while a sound engineer blames the difficulty of transitioning from big theatres to smaller devices and a lack of consistent specifications in streaming.

But, surely, none of this represents a complete explanation. Movies have been enjoyed first in a theatre and then at home for decades, and streaming services have been around for long enough to become a known constant. It cannot be an issue only with flat screens, as I have a pair of decent speakers hooked up to my receiver and a lot of media still sounds like muddy garbage; music played through the same system, on the other hand, does not. A home audio system is never going to match a theatre’s quality. But if lots of people are complaining about this problem — and have been doing so for years — it surely means movies and shows are not being mixed for how people actually watch them.

Yair Rosenberg, the Atlantic:

In real life, if someone crashed a gathering of strangers and started disrupting conversations while shouting abuse, they’d quickly be bounced from the party. Yet on social media, this sort of caustic conduct is not only tolerated but sometimes celebrated. In our day-to-day lives, getting disciplined for misbehavior is how we learn to be better. But because such norms were never upheld on the internet, many spaces turned toxic, and many people never got the feedback they needed to grow out of their bad habits. Blocking is part of that feedback. […]

Though harassment and abuse are the most obvious cases for blocking another user, I find a low threshold is necessary for a more enjoyable use of these platforms. It removes from your view any user who spoils your experience for any reason. That is excellent. If anything, I think using the “block” button on social media is increasingly necessary, as platform owners have decided to decrease the extent to which users control their own experience.

A user’s Twitter feed used to contain only posts from the accounts they followed. Twitter then began injecting tweets “liked” by those accounts, and it now promotes all kinds of tweets from accounts users do not follow. The launch of Threads went in entirely the opposite direction: for the first several weeks of availability, the only feed you would see contained a mix of posts from accounts you followed and “suggestions” from other accounts.

There are benefits to this — Threads looked lively without a user needing to follow a single account. But it requires a liberal use of the “block” button to regain any semblance of control.

Paris Marx:

Microsoft is really hitting it out of the park with its AI-generated travel stories! If you visit Ottawa, it highly recommends the Ottawa Food Bank and provides a great tip for tourists: “Consider going into it on an empty stomach.”

Jay Peters, the Verge:

If you try to view the story at the link we originally included in this article, you’ll see a message that says “this page no longer exists.” However, Microsoft’s article is still accessible from another link.

Microsoft laid off journalists at Microsoft News and MSN in 2020 to replace them with artificial intelligence. Microsoft didn’t immediately respond to a request for comment.

The article was pulled for readers using British English, but it remains accessible in American English and, perhaps more relevantly, Canadian English. How hard can it be to remove all versions of this obviously dumb article?

Way of the future.

Update: Microsoft seems to have pulled the article entirely. I cannot find a language code which works.

Jesse Squires, last March:

I discovered earlier this week that my website is no longer being indexed by Bing and DuckDuckGo. In fact, it appears that it has been deliberately removed from their search indexes. On Bing, rather than display a “no results” message, it displays a “Some results have been removed” message, which is very concerning. Notably, however, Google search is working fine.

Jeff Johnson, last June:

Yesterday I was searching with DuckDuckGo and noticed to my dismay that my business web site https://underpassapp.com was missing entirely from the search results! (My personal web site https://lapcatsoftware.com is still in DuckDuckGo, however.)

Many people don’t realize that DuckDuckGo sources its search results from Microsoft Bing. I learned this when I was looking into advertising on DuckDuckGo, and I found that they also outsource their advertising to Microsoft. It turns out that my business web site was removed from Bing, which explains why it’s missing from DuckDuckGo.

Dave Rupert in January:

It came to my attention that my site does not appear on DuckDuckGo search results. Even when searching for “daverupert.com” directly. After some digging, DuckDuckGo used to get their site index from Yandex, but now gets their site index from Bing and sure enough… I didn’t appear on Bing either.

Jack Yan in January:

And when it comes to Bing’s index collapse—or whatever you wish to call it—it’s no more pronounced than here (well, at least among the sites that even get listed). For site:techdirt.com:

  • Google: 54,700 results, 393 visible

  • Mojeek: 48,818 results, 1,000 visible

  • Yandex: 2,000 results, 250 visible

  • Gigablast: 200 results, 200 visible

  • Yep: 10 results, 10 visible

  • Baidu: 1 result, 1 visible

  • Bing: 1 result, 1 visible

Mike Masnick, of Techdirt, in July:

This morning, however, someone alerted me to the fact that DuckDuckGo currently shows zero results for Techdirt. Not even some random old article. Zero. None. Zilch.

[…]

Bing appears to have deleted all links to Techdirt. Though at least it tells you that “some results have been removed.” Though it doesn’t say why.

Masnick again yesterday:

Either way, a few hours later DuckDuckGo added back… a single link(!) to Techdirt’s front page, which we mentioned in an update. The next day, I heard from a couple people who said they had reached out to people at Microsoft, and I was told that this sometimes happens, and that the Bing team will eventually fix it (though it might happen faster if something gets public attention). Either way, about a day after I had written about Techdirt being erased, we were back in both Bing and DuckDuckGo and I considered it a one-off bug that had been fixed.

But… it’s back. I happened to just check on Bing and saw that we’re gone again (though now there’s also a big obnoxious box trying to get me to chat):

Based on links collected by Michael Tsai, this appears to be a somewhat common issue where Bing — and, consequently, the many small search engines which rely on its results — will completely de-index a website for no apparent reason. Bing still shows zero results for a query for site:techdirt.com, though it does again say “some results have been removed” and links to a generic help page.

“Hazel” of “Team YouTube” on August 8:

Starting today, if you have YouTube watch history off and have no significant prior watch history, features that require watch history to provide video recommendations will be disabled – like your YouTube home feed. This means that starting today, your home feed may look a lot different: you’ll be able to see the search bar and the left-hand guide menu, with no feed of recommended videos thus allowing you to more easily search, browse subscribed channels and explore Topic tabs instead.

I received this change today and it is a pleasant update. My YouTube homepage no longer features rage bait, reaction videos, or body language experts — just a white screen, a search box, and a nag from YouTube for me to re-enable my history. This also applies to the YouTube iOS app, as you may expect.

Ethan Baron, Mercury News:

Owners of some older iPhone models are expected to receive about $65 each after a judge cleared the way for payments in a class-action lawsuit accusing Apple of secretly throttling phone performance.

The Cupertino cell phone giant agreed in 2020 to pay up to $500 million to resolve a lawsuit alleging it had perpetrated “one of the largest consumer frauds in history” by surreptitiously slowing the performance of certain iPhone models to address problems with batteries and processors.

I remain stunned that anyone at Apple thought it would be completely fine to kneecap iPhones with underperforming batteries without telling users. Asking for forgiveness instead of permission works when you borrow a coworker’s pen, not when you alter the product characteristics of millions of smartphones without a word of communication. It has got to be one of the stupidest decisions made by this company in the past decade.

Jason Snell wrote about the history of the iMac on its twenty-fifth anniversary for the Verge:

While PC makers spent many years trying (and failing, for the most part) to make iMac knockoffs, it was really a transitional device. While Apple still has a nice business selling iMacs to families, schools, and hotel check-in desks, most of the computers it sells are laptops.

Still, I think the iMac pointed the way to the era of ubiquitous laptops. (What is a laptop but an all-in-one computer? Fortunately, laptops don’t weigh 38 pounds like the iMac G3.) From the very beginning, the iMac was criticized as being limited and underpowered. Apple frequently used laptop parts in the iMac, whether it was for cost savings or miniaturization reasons. Today, Mac desktops use more or less the same parts as Mac laptops.

To wit, while Apple’s own Mac chips debuted in two laptops and a Mac Mini which all looked the same as the Intel models they replaced, the M1 iMac was the first of the family to sport a new industrial design language. Unfortunately, it has remained unchanged for 837 days as of writing — the longest delay between iMac updates in years, and one which will have knock-on effects.

New iMacs are expected in October, according to Mark Gurman, as part of the debut of the M3 lineup.

Sam Wolfson, the Guardian:

Over the past month, I’ve spoken with engineers at Apple about how Maps started to get good. They told me that as well as data from city officials, including digital dashboards that update maps automatically, they also monitor changes from riders themselves. They can see if there is an unusually large number of people riding bikes in a particular location and then send someone with a backpack or a car to see if there’s a new bike path in that area. Sometimes they get this information before it’s been officially recorded by the government.

These kinds of techniques mean that while Apple has become more competitive with Waze and Google Maps on driving instructions, it’s on cycling and public transit that Apple Maps has built perhaps the most impressive resource yet available — with incredibly detailed instructions that can open up a city even for a nervous cyclist (Eddy Cue, unsurprisingly, describes it as the best cycling map in the world).

Justin O’Beirne:

As of early August 2023, Google appears to be rolling out Apple Maps-style road markings in New York, Los Angeles, San Francisco, and Seattle:

[…]

There are interesting differences between Google’s and Apple’s approaches. Google, for instance, denotes parking spots along roads, while Apple doesn’t. But Apple denotes areas marked as bus stops, while Google doesn’t:

[…]

They also have different approaches to displaying cycling lanes:

Cue says all of the right things in the Guardian interview and the article makes Apple Maps look good. It is promising to hear Cue’s apparent passion for proper cycling directions, interior mapping at airports, and various points of interest. O’Beirne’s article is a worthy complement to Wolfson’s story as I think Apple’s presentation of cycle lanes, especially, is so much better than Google’s.

I hope to see such investment locally, and there are clues that Apple is working on it: Apple says it has begun collecting data with its backpack kit in several areas of Calgary, which appears to be a prerequisite for the Detailed City Experience.

As Cue himself recognises, “there are really only two mapmakers left in the world, in ourselves and Google” – and that monopoly of information, says Clancy Wilmott, a professor specialising in digital cartographies at Berkeley, has consequences.

As far as I can figure out, this is simply untrue. OpenStreetMap is a well-known and widely used alternative to maps from Apple and Google, Collins Bartholomew is a longtime mapmaker, and Felt is a newer startup. Here, Nokia’s old mapping business, is still around too, as is TomTom. I believe all gather data independently of one another and, crucially, two of them — OpenStreetMap and TomTom — provide data for Apple Maps.

But there is some truth to what Cue is saying, at least for consumers, and I am not sure the world is best served by depending on two Californian tech giants for its place and direction needs. Both offer more capability in the United States than in any other nation and, anecdotally, point-of-interest data is more accurate inside the U.S. than elsewhere.

iA:

The prevalent critique of the DMA often rests on vague condemnations, perpetuating the belief that the EU creates impractical laws. Our analysis showed quite the opposite.

Drafted by experts in law and technology, the DMA tackles numerous industry issues, benefiting consumers and smaller companies trying to compete with tech giants. It regulates market dominance, dark patterns, and unfair practices, aiming to prevent a few corporations from monopolizing choices.

Jesper:

I view this as a cornerstone of civil rights and customer rights in the same vein as the GDPR. The EU does not get everything right and are not the foremost authority on how this all should work. But they are in the same place as the United States Government was before passing the Clean Air Act and Clean Water Act. When the corporations involved have decided that they don’t feel like doing anything, what else is left to do?

I appreciated iA’s careful exploration of the Digital Markets Act. I hope that analysis is more accurate than the alarmist takes I have seen from this side of the Atlantic, as I am more optimistic than many about seeing what happens when platform owners are forced to compete.

There remain lingering concerns, like the requirement for interoperability among messaging platforms, which may impact privacy protections. Many E.U. member states have expressed interest in weakening end-to-end encryption. That is not part of this Act but is, I think, contextually relevant.

I am also worried that the tech companies affected by this Act will treat it with contempt and make users’ experiences worse instead of adapting in a favourable way. After GDPR was passed, owners of web properties did their best to avoid compliance. They could choose to collect less information and avoid nagging visitors with repeated confirmation of privacy violations. Instead, cookie consent sheets are simply added to the long list of things users need to deal with — alongside classics like Subscribe to our newsletter!, Enable notifications!, Share your location!, and It looks like you are shopping from Canada! — any time they visit most popular websites. It is hard to blame them, though, given how the online ad market works, and overhauling that seems like too great a burden to place on a single piece of privacy legislation.

Stephen Hackett:

Somehow, this page is still up on Apple’s website, albeit with a big red non-Retina banner reading “Apple TV app is the new home of iTunes Movie Trailers” that doesn’t actually link anywhere. What a sad way to die.

I am one of the approximately eight remaining users of the iTunes Movie Trailers page and its corresponding app. Both say trailers are migrating to the Apple TV app, but clicking or tapping on the banner has no effect. I also cannot find any trailers in the TV app, the name of which is rapidly becoming hilarious for all of the things it does.

Update: Several of you have written to tell me trailers are now in the TV app, under the Store tab. I believe you but I am still not seeing them, and I bet it is a Canadian availability problem.

For years, the Internet Archive has been carefully preserving 78 RPM records made of fragile shellac as part of something it calls the Great 78 Project. The idea is that it is impossible to tell whether digital recordings or original records will last longer, so it is archiving both, and has built up quite the library of recordings.

Andy Maxwell, TorrentFreak:

Record labels including UMG, Capitol and Sony have filed a copyright infringement lawsuit in the United States targeting Internet Archive and founder Brewster Kahle, among others. Filed in Manhattan federal court late Friday, the complaint alleges infringement of 2,749 works, recorded by deceased artists, including Frank Sinatra, Billie Holiday, Louis Armstrong and Bing Crosby.

I am not a lawyer; I do not know how valid this suit is. But, writing as a layperson, this threat stinks. While versions of some of these recordings are present in newer formats, there is to me a vast difference between preserving these specific pressings compared to making available any version. I have no idea if that makes a legal difference — again, not a lawyer — but there are artistic and technical reasons which should not be ignored. Different record pressings sound different, sometimes by a lot.

Besides, it is not as though people are treating the Great 78 Project as a replacement for a streaming service. The Internet Archive does not show total plays or downloads, but the most-viewed recording in the collection has less than 140,000 views as of writing. Notable for a 1942 folk recording, for sure, but the most popular song from the same artist on Spotify has over half a million plays.

For these record labels to claim that the Internet Archive is “undermin[ing] the value of music” is laughable. They are preserving specific recordings in an industry that sees all versions of an album as identical, and treated the loss of thousands of original masters as an inconvenience.

Michael Tsai replied to my comment on CNet’s decision to purge some of its older articles:

It’s quite possible the consultants were taking them for a ride or are just wrong. But it’s also possible that the SEO people who follow this stuff really closely for a living have figured out something non-intuitive and unexpected. Google obviously doesn’t want to say that it incentivizes sites to delete content, and the algorithms are probably not intentionally designed to do that, but that doesn’t mean this result isn’t an emergent property of complex algorithms and models that no one fully understands.

“makeitdouble” on Hacker News:

While CNET might not be the most reliable side, Google telling content owners to not play SEO games is also too biased to be taken at face value.

It reminds me of Apple’s “don’t run to the press” advice when hitting bugs or app review issues. While we’d assume Apple knows best, going against their advice totally works and is by far the most efficient action for anyone with enough reach.

Both of these are fair arguments for why it does not make sense to trust Google’s description of its own indexing and ranking criteria. I thought it was worthwhile bringing them to your attention.

Despite working closely with them in various capacities for much of my career, I have a bias against search optimization experts because I think most of what they suggest is either superstition, or done in greater volume and to a higher degree by scammers and machines. The latter have a nominally adversarial relationship with Google as it would prefer to direct people to articles which are useful and authoritative, not just regurgitating something taken from elsewhere.1 The objective of chum producers is to rank well on Google, and it does not matter much how or what it takes. They are not motivated to produce useful information, but to get as many clicks as possible because their websites are monetized through ads and referral links. The more they behave badly, the more legitimate businesses mimic their tactics, and the closer Google gets to adjusting its criteria to compensate. But the chum producers will always be one step ahead.

I look at Google search in much the same way I look at the stock market. I do not know if I need a disclaimer that this is not financial advice; it is obviously not. Here is what I figure: I can try to beat the market by buying shares in some business or hedging against some commodity in the hopes that I can beat the odds. But I am not a trader; I do not have time to commit to doing that for a living. The best thing I can do — and the only thing I actually do — is park some money in an index fund and hope for slow, gradual gains.

Google is kind of the same way, I figure. You can try all sorts of games to improve your website’s rankings, but there are people who are motivated to beat everyone else. Their tactics will motivate Google to adjust its criteria, and chum producers can adapt more quickly than a legitimate business. I think search optimization experts see a number of effects and do their best to ascribe them to causes. Because Google deliberately keeps its inner workings nonspecific, as Tsai quite rightly points out, it is plausible for older material to have some impact on the site as a whole. But it does not make sense to me to purge old stories from a news site like you would remove out-of-stock products from a storefront.

The whole point of a publisher like CNet is to chronicle an industry. It is too bad its new owners do not see that in either its history or its future.


  1. Google itself, though… ↥︎

Matt Birchler:

My top suggestion with a bullet is to stop having this [subscribe] modal appear every single time I read a post from a newsletter I’m not subscribed to. It’s even worse when I subscribe via RSS, so I’ve already subbed, even if they don’t know it.

Call this a co-sign. Substack’s modal subscribe dialog — and its sibling nag screen when I leave a tab open for a while — makes for a uniquely terrible way of reading blog posts from authors who have increasingly moved to this platform.

Jani Patokallio:

Do you like reading articles in publications like Bloomberg, the Wall Street Journal or the Economist, but can’t afford to pay what can be hundreds of dollars a year in subscriptions? If so, odds are you’ve already stumbled on archive.today, which provides easy access to these and much more: just paste in the article link, and you’ll get back a snapshot of the page, full content included.

[…]

The Internet Archive is a legitimate 501(c)(3) non-profit with a budget of $37 million and 169 full-time employees in 2019. archive.today, by contrast, is an opaque mystery. So who runs this and where did they come from?

This unique resource is as useful as it is likely to be fleeting. I am amazed it has survived for as long as it has.

I find the alarmist tone of this series of reports from NewsGuard to be largely unwarranted. People used to worry about blogs or the internet itself spreading misinformation, which has undeniably been the case, but you could make that argument about every publishing medium in history. Do not get me wrong; I understand why robots churning out fact-free nonsense in a trustworthy tone is concerning. I am just unsure that it is a unique problem. It is more nonsense in the sea in which we are already swimming.

This feels to me like the inverse of yesterday’s news that CNet is purging its archives. Just as a wave of sites embraces machine-generated articles — including CNet itself — it is erasing a legacy of contemporary coverage.

Thomas Germain, Gizmodo:

Tech news website CNET has deleted thousands of old articles over the past few months in a bid to improve its performance in Google Search results, Gizmodo has learned.

[…]

Whether or not deleting articles is an effective business strategy, it causes other problems that have nothing to do with search engines. For a publisher like CNET — one of the oldest tech news sites on the internet — removing articles means losing parts of the public record that could have unforeseen historical significance in the future. It also means the hundreds of journalists who’ve published articles on CNET could lose access to their body of work.

Google says this whole strategy is bullshit. A bunch of SEO types Germain interviewed swear by it, but they believe in a lot of really bizarre stuff. It sounds like nonsense to me. After all, Google also prioritizes authority, and a well-known website which has chronicled the history of an industry for decades is pretty damn impressive. Why would “a 1996 article about available AOL service tiers” — per the internal memo — cause a negative effect on the site’s rankings, anyhow? I cannot think of a good reason why a news site purging its archives makes any sense whatsoever.

The Canadian Association of Broadcasters and U.S.-based National Association of Broadcasters have issued a joint statement condemning Meta for preventing users from linking to news on its platforms in Canada following the passage of the Online News Act:

Meta – a nearly trillion-dollar company – repeatedly chooses to restrict news content for its users to avoid compensating news producers for the value it gains on their vital journalism. These retaliatory tactics demonstrate Meta’s monopolistic dominance over the advertising marketplace and its ability to dictate how radio and TV broadcasters, newspapers and others can reach audiences online. […]

CBC/Radio-Canada has co-signed this statement.

The nature of Meta’s business is that it gains value just about any time anyone interacts with its platforms, but broadcasters are in a class of their own for demanding payment for external links to their work. I understand their argument: if a business in Calgary wants to advertise online to an audience in Calgary, they will likely buy advertising from U.S. companies instead of through Calgary-based media outlets. But a tax on links is a poor way to address a problem with the business model. Also, Meta does not have a monopoly on online advertising, nor does it dominate.

The broadcasters are asking the Competition Bureau to investigate Meta over what they see as a “retaliatory” measure. The Bureau says it is looking into whether it has any reason to intervene.

Bryan Carney, the Tyee:

So how will you keep up with your favourite publishers during this war of brinkmanship? Well, one way might be Really Simple.

As in: Really Simple Syndication, or RSS.

Carney neglects to mention NetNewsWire, still the best feed reader for MacOS and iOS.

Update: The Beaverton is outraged Meta considers it “news”.

Threads’ user base seems to be an object of fascination among the tech press. Mark Zuckerberg says it is “on the trajectory I expect to build a vibrant long term app” with “10s of millions” of users returning daily. Meanwhile, third-party estimators have spent the weeks since Threads’ debut breaking the news that its returning user base is smaller than its total base, and that people are somewhat less interested in it than when it launched, neither of which is surprising or catastrophic.1 Elon Musk, for his part, says Twitter is more popular than ever but, then again, he does say a lot of things that are not true.

All that merits discussion, I suppose, but I am more interested in the purpose of Threads. It is obviously a copy of Twitter at its core, but so what? Twitter is the progenitor of a genre of product, derived from instant messenger status messages and built into something entirely different. Everything is a copy, a derivative, a remix. It was not so long ago that many people were equating a person’s ban from mainstream social media platforms with suppression and censorship. That is plenty ridiculous on its face, but it does mean we should support more platforms because it does not make sense for there to be just one Twitter-like service or one YouTube-like video host.

So why is Threads, anyway? How does Meta’s duplication of Twitter — and, indeed, its frequent replication of other features and apps — fit into the company’s overall strategy? What is its strategy? Meta introduced Threads by saying it is “a new, separate space for real-time updates and public conversations”, which “take[s] what Instagram does best and expand[s] that to text”. Meta’s mission is to “[give] people the power to build community and bring the world closer together”. It is a “privacy-focused” set of social media platforms. It is “making significant investments” in its definition of a metaverse which “will unlock monetization opportunities for businesses, developers, and creators”. It is doing a bunch of stuff with generative artificial intelligence.

But what it sells are advertisements. It currently makes a range of products which serve both as venues for those ads, and as activity collection streams for targeting information. This leaves it susceptible to risks on many fronts, including privacy and platform changes, which at least partly explains why it is slowly moving toward its own immersive computing platform.

Ad-supported does not equate to bad. Print and broadcast media have been ad-supported for decades and they are similarly incentivized to increase and retain their audience. But, in their case, they are producing or at least deliberately selecting media of a particular type — stories in a newspaper, songs on a radio station, shows on TV — and in a particular style. Meta’s products resemble that sort of arrangement, but do not strictly mimic it. Its current business model rewards maximizing user engagement and data collection. But, given the digital space, there is little prescription for format. Instagram’s image posts can be text-based; users can write an essay on Facebook; a Threads post can contain nothing more than a set of images.

So Meta has a bunch of things going for it:

  • a business model that incentivizes creating usage and behavioural data at scale,

  • a budget to experiment, and

  • an existing massive user base to drive adoption.

All this explains why Meta is so happy to keep duplicating stuff popularized elsewhere. It cloned Snapchat’s Stories format in Instagram to great success, so it twice tried cloning Snapchat in its entirety; both attempts flopped. After Vine popularized short videos, Facebook launched Riff. After Twitter dumbly let Vine wither and die, and its place was taken by Musical.ly and then TikTok, Facebook launched Lasso, which failed, then Reels, and copied TikTok’s recommendation-heavy feed, moves which — with some help — have been successful. Before BeReal began to tank, it was copied by, uh, TikTok, but Meta was working on its own version, too.

But does any of this suggest to you an ultimate end goal or reason for being? To me, this just looks like Meta is throwing stuff at people in the hope any of it sticks enough for them to open the advertising spigot. In the same way a Zara store is just full of stuff, much of it ripping off the work of others, Meta’s product line does not point to a goal any more specific than its mission statement of “bring[ing] the world closer”. That is meaningless! The same corporate goal could be used by a food importer or a construction firm.

None of this is to say Meta is valueless as a company; clearly it is not. But it makes decisions that look scatterbrained as it fends off possible competitors while trying to build its immersive computing vision. Yet that goal might be far enough away that it is sapping any here-and-now vision the company might have. Even if the ideas are copies — and, again, I do not see that as an inherent weakness — I can only think of one truly unique, Meta-specific, and successful take: Threads itself. It feels like a text-only Instagram app, not a mere Twitter clone, and it is more Meta-like for it. That probably explains why I use it infrequently, and why it seems to have been greeted with so much attention. Even so, I do not really understand where it fits into the puzzle of the Meta business as a whole. Is it always going to be a standalone app? Is it a large language model instruction farm? Is it just something the company is playing around with and seeing where it goes, along the lines of its other experimental products? That seems at odds with its self-described “year of efficiency”.

I wish I saw in Meta a more deliberate set of products. Not because I am a shareholder — I am not — but because I think it would be a more interesting business to follow. I wish I had a clearer sense of what makes a Meta product or service.


  1. Then there is the matter of how Sensor Tower and SimilarWeb measure app usage given how restricted their visibility is on Android and, especially, iOS. Sensor Tower runs an ad blocking VPN and several screen time monitoring products, which it uses in a way not dissimilar from how Meta used Onavo, something that was not disclosed in an analysis the company did with the New York Times.

    SimilarWeb has a fancy graphic illustrating its data acquisition and delivery process, which it breaks down into collection, synthesis, modelling, and digital intelligence. Is it accurate? Since neither Apple nor Google reports the kind of data SimilarWeb purports to know about apps, it is very difficult to know. But, as its name suggests, its primary business is in web-based tracking, so it is at least possible to compare its data against others’. It says the five most popular questions asked of Google so far this year are “what”, “what to watch”, “how to delete instagram account”, “how to tie a tie”, and “how to screenshot on windows”. PageTraffic says the five most-Googled questions are “what to watch”, “where’s my refund”, “how you like that”, “what is my IP address”, and “how many ounces in a cup”, and Semrush says the top five are “where is my refund”, “how many ounces in a cup”, “how to calculate bmi”, “is rihanna pregnant”, and “how late is the closest grocery store open”. All three use different data sources but are comparable data sets — that is, all from Google, all worldwide, and all from 2023. They also estimate wildly differing search volumes: SimilarWeb estimates the world’s most popular question query, “what”, is searched about 2,015,720 times per month, while Semrush says “where is my refund” is searched 15,500,000 times per month. That is not even close.

    But who knows? Maybe the estimates from these marketing companies really can be extrapolated to determine real-world app usage. Colour me skeptical, though: if there is such wide disagreement in search analysis — a field which uses relatively open and widely accessible data — then what chance do they have of accurately assessing closed software platforms? ↥︎

Alex Ivanovs, Stackdiary:

Zoom’s updated policy states that all rights to Service Generated Data are retained solely by Zoom. This extends to Zoom’s rights to modify, distribute, process, share, maintain, and store such data “for any purpose, to the extent and in the manner permitted under applicable law.”

What raises alarm is the explicit mention of the company’s right to use this data for machine learning and artificial intelligence, including training and tuning of algorithms and models. This effectively allows Zoom to train its AI on customer content without providing an opt-out option, a decision that is likely to spark significant debate about user privacy and consent.

Smita Hashim of Zoom (emphasis theirs):

We changed our terms of service in March 2023 to be more transparent about how we use and who owns the various forms of content across our platform.

[…]

To reiterate: we do not use audio, video, or chat content for training our models without customer consent.

Zoom is trialling a summary feature which uses machine learning techniques, and it appears administrators are able to opt out of data sharing while still having access to the feature. But why is all of this contained in a monolithic terms-of-service document? Few people read these things in full and even fewer understand them. It may appear simpler, but features which require this kind of compromise should have specific and separate documentation for meaningful explicit consent.

Preetika Rana, Wall Street Journal:

Uber Technologies posted its first-ever operating profit in the second quarter, a milestone in its long-term efforts to stem losses in its businesses carrying people and delivering food.

Since its 2009 founding, Uber had not once proved it is actually profitable — until now. It has been able to achieve this milestone for two reasons: it is showing more ads — which is not a convincing demonstration that its core model of underpaying drivers in pirate taxis is sustainable — and it is increasing its prices while cutting driver earnings. The end of free money is requiring Uber to behave like a real business.