Manton Reece:

Trying Sora. It’s extraordinary that this works at all, and it’s even faster than I was expecting. OpenAI seems to have built a whole UI system around this app too.

Something I neglected to mention in my thoughts about Sora from earlier this week — which spiralled into something else entirely — is this U.I. which, though new to OpenAI, feels familiar. In screen recordings, it looks strikingly like Apple’s Photos app for Mac and iCloud on the web, right down to the way an item’s metadata is displayed. I am sure there are other applications with a similar layout and appearance, but that was my immediate impression.

Jason Koebler, 404 Media:

TCL said it expects to sell 43 million televisions this year. To augment the revenue from its TV sales, it has created a free TV service called TCL+, which is supported by targeted advertising. A few months ago, TCL announced the creation of the TCL Film Machine, which is a studio that is creating AI-generated films that will run on TCL+. TCL invited me to the TCL Chinese Theater, which it now owns, to watch the first five AI-generated films that will air on TCL+ starting this week.

What an extraordinary paragraph. Tired of movies feeling increasingly derivative? Too bad; here are some that literally — by definition — are entirely so. You can watch all five on YouTube if you really want.

They are, with the exception of the visuals, short movies like any other. They were apparently written, directed, voice-acted, and edited by real people. How long do you think TCL will maintain that level of investment?

Joe Rosensteel:

As for the obviously terrible quality, Jason brings up the ol’ “this is the worst it will ever be” chestnut, and [TCL’s] Chris agrees, and elaborates. What’s left unsaid with TCL’s approach is that this is the best the TCL Channel will ever be, because it’s optimizing for this quality level. They’re setting this as the bar. If Chris is to be believed, and that TCL will always employ roughly this same number of people to make something, then that would indicate to me that they’ll simply be able to make more videos of this quality, not the same number of videos at a higher quality.

In a world of slop and content, these truly seem like both. I feel a little uncomfortable writing that because people worked on them, but that is how they come across. All programming on broadcast television is, to some extent, the thing luring you in so you will also watch some ads. These movies are not that. Regardless of the technology behind them, they are simply poor movies. They are not much fun to watch and I am not sure why I would sit through ad breaks to see anything like these shorts.

In some sense, they function in basically the same way as A.I. TikTok spam. They can be produced at volume as TCL inevitably relies less on human involvement than in these examples. But would you rather just watch Letterman re-runs?

Hannah Murphy and John Burn-Murdoch, Financial Times:

Since election day, app usage of Bluesky in the US and UK skyrocketed by almost 300 per cent to 3.5mn daily users, according to data from research group Similarweb. The site was boosted as academics, journalists and left-leaning politicians abandoned X, whose billionaire owner is a prominent supporter of the president-elect.

Prior to November 5, Threads had five times more daily active users in the US than Bluesky, which has just 20 full-time staff and was initially funded by Twitter when Jack Dorsey was its chief executive. Now, Threads is only 1.5 times larger than its rival, Similarweb said.

Raphael Boyd, the Guardian:

A mass departure from Elon Musk’s X has led to the site losing about 2.7 million active Apple and Android users in the US in two months, with its rival social media platform Bluesky gaining nearly 2.5 million over the same period.

[…]

According to the digital market intelligence company Similarweb, the number of daily active US users on X has dropped by 8.4% since early October, from 32.3 million to 29.6 million.

These are interesting numbers from a questionable source. I wanted to check the work of Similarweb against user numbers reported by any tech company but, unsurprisingly, I could not find any overlap. Such figures would be both redundant if they were accurate and embarrassing if they were not.

Twitter stopped reporting monthly active users in early 2019 owing to steadily declining numbers. In its then-most-recent figures, Twitter claimed between 66 and 69 million monthly active U.S. users — over twice as many as Similarweb reports X having now. However, despite their superficial similarity, the methodologies of these companies are likely very different; I could not find any pre-Musk monthly active user statistics for Twitter as reported by Similarweb for comparison. Also, do note the numbers in the first article — the one from the Financial Times — are daily active users, not monthly.
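For what it is worth, the percentages above check out. A quick sanity check of the quoted figures, leaving aside the daily-versus-monthly caveat:

```python
# Similarweb's reported drop in daily active U.S. users on X:
before, after = 32.3, 29.6  # millions, early October versus now
drop = (before - after) / before
print(f"{drop:.1%}")  # → 8.4%

# Twitter's 2019 monthly active U.S. users versus Similarweb's current figure,
# using the low end of Twitter's 66–69 million range:
ratio = 66 / 29.6
print(f"{ratio:.1f}x")  # → 2.2x
```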

These figures are clearly derived from something and not just made up. Even so, I would not read too much into them. But here is a bizarre possibility: perhaps all these traffic stats really are comparable. Maybe X has lost half its U.S. user base in the past five years, and services which did not exist then are growing fast. And even if Bluesky and Threads do not have as large a user base as X, it does not necessarily mean they are not meaningful. A perfect case study is Twitter itself, which has never had a user base like Facebook’s or Instagram’s. Even so, Twitter punched above its weight and swayed public conversation in a way Facebook rarely has, for better and worse.

A little over two years after OpenAI released ChatGPT upon the world, and about four years since Dall-E, the company’s toolset now — “finally” — makes it possible to generate video. Sora, as it is called, is not the first generative video tool released to the public; there are already offerings from Hotshot, Luma, Runway, and Tencent. OpenAI’s is the highest-profile so far, though: the one many people will use, and the one whose output we will all likely be exposed to.

A video generator is naturally best demonstrated in that format, and I think Marques Brownlee’s preview is a good place to start. The results are, as I wrote in February when Sora was first shown, undeniably impressive. No matter how complicated my views about generative A.I. — and I will get there — it is bewildering that a computer can, in a matter of seconds, transform noise into a convincing ten-second clip depicting whatever was typed into a text box. It can transform still images into video, too.

It is hard to see this as anything other than extraordinary. Enough has been written by now about “any sufficiently advanced technology [being] indistinguishable from magic” to bore, but this truly captures it in a Penn & Teller kind of way: knowing how it works only makes it somehow more incredible. Feed computers a vast quantity of video which has been labelled — partly by people, and partly by automated means reliant on this exact same training process — and they can average it into entirely new video that often appears plausible.1 I am basing my assessment on results generated by others because Sora requires a paid OpenAI account, and because there is currently a waiting list.

There are, of course, limitations of both technology and policy. Sora has problems with physics, the placement of objects in space, and consistency between and within shots. Sora does not generate audio, even though OpenAI has the capability. Prompts in text and images are checked for copyright violations, public figures’ likenesses, criminal usage, and so forth. But there are no meaningful restrictions on the video itself. This is not how things must be; this is a design decision.

I keep thinking about the differences between A.I. features and A.I. products. I use very few A.I. products; an open-ended image generator, for example, is technically interesting but not very useful to me. Unlike a crop of Substack writers, I do not think pretending to have commissioned art lends me any credibility. But I now use A.I. features on a regular basis, in part because so many things are now “A.I. features” in name and seemingly nothing else. Generative Remove in Adobe Lightroom Classic, for example, has become a terrific part of my creative workflow. There are edits I sometimes want to make which, without this feature, would require vastly more time than I may have, depending on the job. It is an image generator just like Dall-E or Stable Diffusion, but it is limited by design.

Adobe is not taking a principled stance; Photoshop contains a text-based image generator which, I think, does not benefit from being so open-ended. It would, for me, be improved if its functionality were integrated into more specific tools; for example, the crop tool could also allow generative reframing.

Sora, like ChatGPT and Dall-E, is an A.I. product. But I would find its capabilities more useful and compelling if they were a feature within a broader video editing environment. Its existence implies a set of tools which could benefit a video editor’s workflow. For example, the object removal and tracking features in Premiere Pro feel more useful to me than its ability to generate b-roll, which just seems like a crappy excuse to avoid buying stock footage or paying for a second unit.

Limiting generative A.I. in this manner would also make its products more grounded in reality and less likely to be abused. It would also mean withholding capabilities. Clearly, there are some people who see a demonstration of the power of generative A.I. as a worthwhile endeavour unto itself. As a science experiment, I get it, but I do not think these open-ended tools should be publicly available. Alas, that is not the future venture capitalists, and shareholders, and — I guess — the creators of these products have decided is best for us.

We are now living in a world of slop, and we have been for some time. It began as infinite reams of text-based slop intended to be surfaced in search results. It became image-based slop which paired perfectly with Facebook’s pivot to TikTok-like recommendations. Image slop and audio slop came together to produce image slideshow slop dumped into the pipelines of Instagram Reels, TikTok, and YouTube Shorts. Brace yourselves for a torrent of video slop about pyramids and the Bermuda Triangle. None of these were made using Sora, as far as I know; at least some were generated by Hailuo from Minimax. I had to dig a little bit for these examples, but not too much, and it is only going to get worse.

Much has been written about how all this generative stuff has the capability of manipulating reality — and rightfully so. It lends credence to lies, and its mere existence can cause unwarranted doubt. But there is another problem: all of this makes our world a little bit worse because it is cheap to produce in volume. We are on the receiving end of a bullshit industry, and the toolmakers see no reason to slow it down. Every big platform — including the web itself — is full of this stuff, and it is worse for all of us. Cynicism aside, I cannot imagine the leadership at Google or Meta actually enjoys using their own products as they wade through generated garbage.

This is hitting each of us in similar ways. If you use a computer that is connected to the internet, you are likely running into A.I.-generated stuff all the time, perhaps without being fully aware of it. The recipe you followed, the repair guide you found, the code you copy-and-pasted, and the images in the video you watched? Any of them could have been generated in a data farm somewhere. I do not think that is inherently bad, though it is an uncertain feeling.

I am part of the millennial generation. I grew up at a time in which we were told we were experiencing something brand new in world history. The internet allowed anyone to publish anything, and it was impossible to verify this new flood of information. We were taught to think critically and be cautious, since we never knew who created anything. Now we have a different problem: we are unsure what created anything.


  1. At first blush, it is interesting how generative A.I. has no problem creating realistic-seeming text as text, but struggles when the text is part of an image. With a little knowledge about how these things work, though, that makes sense. ↥︎

The RSS feed for this website runs through Feedpress and, at some point in November, I must have done something to cause it to behave unreliably. It took me a while to track down in part because I have the JSON feed in NetNewsWire, but not the RSS feed. A silly oversight, I admit.

I think it is fixed, but please let me know if I have still made a mess of things. I recommend subscribing to the JSON feed anyhow if that is an option for you.
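If you want to check a feed like this yourself, here is a minimal sketch of a JSON Feed validity check based on the required top-level fields in the JSON Feed 1.1 specification. The sample feed below is a made-up example, not this site’s actual feed:

```python
import json

def check_json_feed(raw: str) -> list[str]:
    """Return a list of problems with a JSON Feed document; an empty list means it looks valid."""
    problems = []
    try:
        feed = json.loads(raw)
    except json.JSONDecodeError as e:
        return [f"not valid JSON: {e}"]
    # The JSON Feed spec requires version, title, and items at the top level.
    for field in ("version", "title", "items"):
        if field not in feed:
            problems.append(f"missing required field: {field}")
    # Every item must have an id; feed readers use it to detect duplicates.
    for i, item in enumerate(feed.get("items", [])):
        if "id" not in item:
            problems.append(f"item {i} is missing an id")
    return problems

# A made-up, minimal feed for illustration:
sample = '{"version": "https://jsonfeed.org/version/1.1", "title": "Example", "items": [{"id": "1", "content_text": "Hello"}]}'
print(check_json_feed(sample))  # → []
```

Running it against a real feed is just a matter of passing in the fetched response body.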

Liv McMahon and Lily Jamali, BBC News:

TikTok’s bid to overturn a law which would see it banned or sold in the US from early 2025 has been rejected.

[…]

TikTok says it will now take its fight to the US Supreme Court, the country’s highest legal authority.

The court’s opinion (PDF) is not particularly long. As this is framed as a question of national security, the court gives substantial deference to the government’s assessment of TikTok’s threat. It also views the legislation passed earlier this year to limit data brokers as a complementary component of this TikTok divest-or-ban law.

I still do not find this argument particularly compelling. There is too much dependence on classified information and too little public evidence. A generous interpretation is that the court knows something I do not, and perhaps this is completely justified. But who knows? The paranoia over this app is leaking, but the proof is not.

Donald Trump’s victory in the 2024 US Presidential Election may also present a lifeline for the app.

Despite unsuccessfully attempting to ban TikTok during his first term in 2020, he said in the run-up to the November elections he would not allow the ban on TikTok to take effect.

I would be shocked if the incoming administration remains committed to overturning this ban, and not just because of its historically flaky reputation. This very decision references the actions of the first Trump presidency, though it owes more to the more tailored policies of the Biden administration.

If the U.S. Supreme Court does not stay this order and TikTok’s U.S. operations are not jettisoned from its global business, the ban will go into effect the day before Trump’s inauguration.

Last month, Brazilian competition authorities ruled against Apple, finding in an increasingly familiar pattern that its anti-steering App Store rules are illegal. It imposed a twenty-day deadline for compliance.

Filipe Espósito, 9to5Mac:

According to a new Valor Econômico report, a Brazilian Federal Court judge has ruled that the decision by Cade, the Brazilian regulator, is “disproportionate and unnecessary.” The judge understood that the measures imposed by the regulator “change, in a sensitive and structural way” Apple’s business operation.

Cade ruled on November 26 that Apple would have 20 days to comply with antitrust legislation, otherwise it would be fined R$250,000 (US$42,000) per day. Apple had previously appealed on the grounds that the changes requested were too complex and would take too long to be made, so the company wouldn’t be able to meet the 20-day deadline.

Twenty days does seem like a tight turnaround. I obviously have no idea what it would take to copy-and-paste the same policies Apple uses in Japan, Korea, and the United States, but perhaps it would be easier to rip off the bandage and do so worldwide.

Howard Oakley:

Over those 11 years, governments have come and gone, my grandchildren have grown up and one is now at university, we survived Covid, lost QuickTime and 32-bit code, and now use Apple silicon Macs. But one thing has remained unchanged through all of that, the Finder column width bug.

Maybe this is the year this bug will bubble up to the top of an intern’s to-fix list. As a dedicated user of the column view, I would not miss it.

Want to experience twice as fast load times in Safari on your iPhone, iPad and Mac?

Then download Magic Lasso Adblock — the ad blocker designed for you.

Magic Lasso Adblock: browse 2.0x faster

As an efficient, high performance, and native Safari ad blocker, Magic Lasso blocks all intrusive ads, trackers, and annoyances – delivering a faster, cleaner, and more secure web browsing experience.

By cutting down on ads and trackers, common news websites load 2× faster and browsing uses less data while saving energy and battery life.

Rely on Magic Lasso Adblock to:

  • Improve your privacy and security by removing ad trackers

  • Block all YouTube ads, including pre-roll video ads

  • Block annoying cookie notices and privacy prompts

  • Double battery life during heavy web browsing

  • Lower data usage when on the go

With over 5,000 five-star reviews, it’s simply the best ad blocker for your iPhone, iPad, and Mac.

And unlike some other ad blockers, Magic Lasso Adblock respects your privacy, doesn’t accept payment from advertisers, and is 100% supported by its community of users.

So, join over 350,000 users and download Magic Lasso Adblock today.

Go figure — just one day after writing about how Apple’s ambiguous descriptions of supposedly clever features have the potential to erode trust, my phone has become haunted.

I saw a suggestion from Siri that I turn on Do Not Disturb until the end of an event in my calendar — a reservation at a restaurant from 8:30 until 10:00 this morning. No such matching event was in Fantastical. It was, however, shown in the Calendar app as a Siri Suggestion.

What I think happened is that I was looking at that restaurant on OpenTable at perhaps 8:00 this morning. I was doing so in my web browser on my Mac, and I was not logged into OpenTable. My Mac and iPhone are both running operating system beta builds with Apple Intelligence enabled. Siri must have interpreted this mere browsing as me making a reservation, and then added it to my calendar without my asking, and then made a suggestion based on that fictional event.

This was not helpful. It was, in fact, perplexing and creepy. I do not know how all of these things were able to work together to produce this result, but I do not like it at all. It is obvious how this would make anyone question whether they can trust Apple Intelligence, A.I. systems generally, Siri, and their personal privacy. Truly bizarre.

Spencer Ackerman has been a national security reporter for over twenty years, and was partially responsible for the Guardian’s coverage of NSA documents leaked by Edward Snowden. He has good reason to be skeptical of privacy claims in general, and his experience updating his iPhone made him worried:

Recently, I installed Apple’s iOS 18.1 update. Shame on me for not realizing sooner that I should be checking app permissions for Siri — which I had thought I disabled as soon as I bought my device — but after installing it, I noticed this update appeared to change Siri’s defaults.

Apple has a history of changing preferences and of dark patterns. This is particularly relevant in the case of the iOS 18.1 update because it was the one that introduced Apple Intelligence, which creates new ambiguity between what happens on-device and what goes to a server farm somewhere.

Allen Pike:

While easy tasks are handled by their on-device models, Apple’s cloud is used for what I’d call moderate-difficulty work: summarizing long emails, generating patches for Photos’ Clean Up feature, or refining prose in response to a prompt in Writing Tools. In my testing, Clean Up works quite well, while the other server-driven features are what you’d expect from a medium-sized model: nothing impressive.

Users shouldn’t need to care whether a task is completed locally or not, so each feature just quietly uses the backend that Apple feels is appropriate. The relative performance of these two systems over time will probably lead to some features being moved from cloud to device, or vice versa.

It would be nice if it truly did not matter — and, for many users, the blurry line between the two is probably fine. Private Cloud Compute seems to be trustworthy. But I fully appreciate Ackerman’s worries. Someone in his position must understand what is being stored and processed in which context.

However, Ackerman appears to have interpreted this setting change incorrectly:

I was alarmed to see that even my secure communications apps, like Proton and Signal, were toggled by default to “Learn from this App” and enable some subsidiary functions. I had to swipe them all off.

This setting was, to Ackerman, evidence of Apple “uploading your data to its new cloud-based AI project”, which is a reasonable assumption at a glance. Apple, like every technology company in the past two years, has decided to loudly market everything as being connected to its broader A.I. strategy. Because these features have launched in a piecemeal manner, though, it is not clear to a layperson which parts of iOS are related to Apple Intelligence, let alone where those interactions take place.

However, this particular setting is nearly three years old and unrelated to Apple Intelligence. It instead controls Siri Suggestions, which appear throughout the system. For example, the widget stack on my home screen suggests my alarm clock app when I charge my iPhone at night. It suggests I open the Microsoft Authenticator app on weekday mornings. When I do not answer the phone for what is clearly a scammer, it suggests I return the missed call. It is not all going to be gold.

Even at the time of its launch, its wording had the potential for confusion — something Apple has not clarified within the Settings app in the intervening years — and it seems to have been enabled by default. While this data may play a role in establishing the “personal context” Apple talks about — both are part of the App Intents framework — I do not believe it is used to train off-device Apple Intelligence models. However, Apple says this data may leave the device:

Your personal information — which is encrypted and remains private — stays up to date across all your devices where you’re signed in to the same Apple Account. As Siri learns about you on one device, your experience with Siri is improved on your other devices. If you don’t want Siri personalization to update across your devices, you can disable Siri in iCloud settings. See Keep what Siri knows about you up to date on your Apple devices.

While I believe Ackerman is incorrect about the setting’s function and how Apple handles its data, I can see how he interpreted it that way. The company is aggressively marketing Apple Intelligence, even though it is entirely unclear which parts of it are available, how it is integrated throughout the company’s operating systems, and which parts are dependent on off-site processing. There are people who really care about these details, and they should be able to get answers to these questions.

All of this stuff may seem wonderful and novel to Apple and, likely, many millions of users. But there are others who have reasonable concerns. As with any new technology, there are questions which can only be answered by those who created it. Only Apple is able to clear up the uncertainty around Apple Intelligence, and I believe it should. A cynical explanation is that this ambiguity is all deliberate because Apple’s A.I. approach is so much slower than its competitors’ and, so, it is disincentivized from setting clear boundaries. That is possible, but there is plenty of trust to be gained by being upfront now. Americans polled by Pew Research and Gallup have concerns about these technologies. Apple has repeatedly emphasized its privacy bona fides. But these features remain mysterious and suspicious for many people regardless of how much a giant corporation swears it delivers “stateless computation, enforceable guarantees, no privileged access, non-targetability, and verifiable transparency”.

All of that is nice, I am sure. Perhaps someone at Apple can start the trust-building by clarifying what the Siri switch does in the Settings app, though.

Cade Metz, New York Times:

Mr. [Sam] Altman said he was “tremendously sad” about the rising tensions between the two one-time collaborators.

“I grew up with Elon as like a mega hero,” he said.

But he rejected suggestions that Mr. Musk could use his increasingly close relationship with President-elect Trump to harm OpenAI.

“I believe pretty strongly that Elon will do the right thing and that it would be profoundly un-American to use political power to the degree that Elon would hurt competitors and advantage his own businesses,” he said.

Alex Heath, the Verge:

Jeff Bezos and President-elect Donald Trump famously didn’t get along the last time Trump was in the White House. This time, Bezos says he’s “very optimistic” and even wants to help out.

“I’m actually very optimistic this time around,” Bezos said of Trump during a rare public appearance at The New York Times DealBook Summit on Wednesday. “He seems to have a lot of energy around reducing regulation. If I can help him do that, I’m going to help him.”

Emily Swanson, the Guardian:

“Mark Zuckerberg has been very clear about his desire to be a supporter of and a participant in this change that we’re seeing all around America,” Stephen Miller, a top Trump deputy, told Fox.

Meta’s president of global affairs, Nick Clegg, agreed with Miller. Clegg said in a recent press call that Zuckerberg wanted to play an “active role” in the administration’s tech policy decisions and wanted to participate in “the debate that any administration needs to have about maintaining America’s leadership in the technological sphere,” particularly on artificial intelligence. Meta declined to provide further comment.

There are two possibilities. The first is that these CEOs are all dummies with memory no more capacious than that of an earthworm. The second is that these people all recognize the transactional and mercurial nature of the incoming administration, and they have begun their ritualistic grovelling. Even though I do not think money and success are evidence of genius, I do not think these CEOs are so dumb they actually believe in the moral fortitude of these goons.

Ben Cohen, Wall Street Journal:

Only four years ago, when it was less popular for podcasts than both Spotify and Apple, YouTube becoming a podcasting colossus sounded about as realistic as Martin Scorsese releasing his next movie on TikTok.

But this year, YouTube passed the competition and became the most popular service for podcasts in the U.S., with 31% of weekly podcast listeners saying it’s now the platform they use the most, according to Edison Research.

This is notable, but Cohen omits important context for why YouTube is suddenly a key podcast platform: Google Podcasts was shut down this year, with users and podcasters alike instructed to move to YouTube. According to Buzzsprout’s 2023 analytics, Google Podcasts was used by only 2.5% of global listeners. YouTube is not listed in their report, perhaps because it exists in its own bubble instead of being part of the broader RSS-feed-reading podcast client ecosystem.

But where Google was previously bifurcating its market share, it aligned its users behind a single client. And, it would seem, that audience responded favourably.

John Herrman, New York magazine:

Then, just as the 2010s podcasting bubble was about to peak, TikTok arrived. Here was a video-first platform that was basically only a recommendation engine, minus the pretense and/or burden of sociality — a machine for automating and allocating virality. Its rapid growth drove older, less vibrant social-media platforms wild with envy and/or panic. They all immediately copied it, refashioning themselves as algorithmic short-video apps almost overnight. Suddenly, on every social-media platform — including YouTube, which plugged vertical video “Shorts” into its interface and rewarded creators who published them with followers, attention, and money — there was a major new opportunity for rapid, viral growth. TikTok’s success (and imitation by existing megaplatforms) triggered a formal explosion in video content as millions of users figured out what sorts of short videos worked in this new context: Vine-like comedy sketches; dances; product recommendations; rapid-fire confessionals. The list expanded quickly and widely, but one surprising category broke through: podcast clips.

Of the top twenty podcasts according to Edison Research, fifteen have what I would deem meaningful and regular video components. I excluded those with either a still piece of artwork or illustrated talking heads, and those which only occasionally have video.

Dave Winer:

[…] We’re losing the word “podcast” very quickly. It’s coming to mean video interviews on YouTube mostly. Our only hope is upgrading the open platform in a way that stimulates the imagination of creators, and there’s no time to waste. If you make a podcast client, it’s time to start collaborating with competitors and people who create RSS-based podcasts to take advantage of the open platforms, otherwise having a podcast will mean getting approved by Google, Apple, Spotify, Amazon etc. […]

I hope this is not the case. Luckily, YouTube seems to be an additional place for podcasters so far. I found every show in the top twenty available for download through Overcast in an audio-only format. YouTube channels have RSS feeds, too, though that is not very useful in an audio-only client like Overcast. Then again, Google’s commitment to RSS is about as good as the company’s commitment to anything.
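Since YouTube channel feeds follow a simple, stable URL pattern, a podcast client could at least construct a channel’s feed address from its ID. A minimal sketch; the channel ID here is a made-up placeholder, not a real channel:

```python
from urllib.parse import urlencode

def youtube_channel_feed(channel_id: str) -> str:
    """Build the Atom feed URL for a YouTube channel from its channel ID."""
    return "https://www.youtube.com/feeds/videos.xml?" + urlencode({"channel_id": channel_id})

# A made-up channel ID for illustration:
print(youtube_channel_feed("UCxxxxxxxxxxxxxxxxxxxxxx"))
# → https://www.youtube.com/feeds/videos.xml?channel_id=UCxxxxxxxxxxxxxxxxxxxxxx
```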

Out of the U.S. today comes a slew of new proposed restrictions against data brokers and their creepy practices.

The Consumer Financial Protection Bureau:

[…] The proposed rule would limit the sale of personal identifiers like Social Security Numbers and phone numbers collected by certain companies and make sure that people’s financial data such as income is only shared for legitimate purposes, like facilitating a mortgage approval, and not sold to scammers targeting those in financial distress. The proposal would make clear that when data brokers sell certain sensitive consumer information they are “consumer reporting agencies” under the Fair Credit Reporting Act (FCRA), requiring them to comply with accuracy requirements, provide consumers access to their information, and maintain safeguards against misuse.

The Federal Trade Commission:

The Federal Trade Commission will prohibit data broker Mobilewalla, Inc. from selling sensitive location data, including data that reveals the identity of an individual’s private home, to settle allegations the data broker sold such information without taking reasonable steps to verify consumers’ consent.

And also the Federal Trade Commission:

The Federal Trade Commission is taking action against Gravy Analytics Inc. and its subsidiary Venntel Inc. for unlawfully tracking and selling sensitive location data from users, including selling data about consumers’ visits to health-related locations and places of worship.

Both of the proposed FTC orders require these businesses to “maintain a sensitive location data program designed to develop a list of sensitive locations and prevent the use, sale, license, transfer, sharing, or disclosure of consumers’ visits to those locations”. In addition to those in the quotes above, these include, for example, shelters, labour union offices, correctional facilities, and military installations. This order was previewed last month in Wired.

As usual, I am conflicted about these policies. While they are yet another example of Lina Khan’s FTC and other government bureaucrats cracking down on individually threatening data brokers, it would be far better for everyone if this were not handled on a case-by-case basis. These brokers have already caused a wealth of damage around the world, and only they are being required to stop. Other players in the rest of the data broker industry will either self-govern or hope they do not fall into the FTC’s crosshairs, and if you believe the former is more likely, you have far greater faith in already-shady businesses than I do.

There is another wrench in these proposals: we are less than two months away from a second Trump presidency, and the forecast for the CFPB looks unfriendly. It was kneecapped during the first administration, and it is on the chopping block for those overseeing an advisory committee masquerading as a government agency. The future of the FTC is murkier, with some indicators suggesting it will continue on its current path — albeit from a Republican-skewed perspective — while others suggest a reversal.

The centring of the U.S. in the digital activity of the vast majority of us gives it unique power over privacy — power it has, so far, used in only very small doses. The future of regulatory agencies like these is relevant to all of us.

Enron is not really back. Someone managed to grab the Enron.com domain and put up an inspirational faux corporate video and a Shopify merch store. It is all very funny.

What is more amusing to me is stumbling across a preserved-in-amber Enron website. There is an earnings press release from July 2001, mere months before the whole thing went to hell in public. There are descriptions of the company’s vast array of products.

But this, too, is unofficial. It was created by Facundo Pignanelli to preserve this noteworthy chapter in corporate fraud. There is even an Instagram account. This is all very strange.

Do you want to block all YouTube ads in Safari on your iPhone, iPad and Mac?

Then download Magic Lasso Adblock – the ad blocker designed for you.

Magic Lasso Adblock - best in class YouTube ad blocking

As an efficient, high performance, and native Safari ad blocker, Magic Lasso blocks all intrusive ads, trackers, and annoyances – delivering a faster, cleaner, and more secure web browsing experience.

Magic Lasso Adblock is easy to set up, doubles the speed at which Safari loads, and also blocks all YouTube ads, including all:

  • video ads

  • pop-up banner ads

  • search ads

  • plus many more

With over 5,000 five-star reviews, it’s simply the best ad blocker for your iPhone, iPad, and Mac.

And unlike some other ad blockers, Magic Lasso Adblock respects your privacy, doesn’t accept payment from advertisers, and is 100% supported by its community of users.

So, join over 350,000 users and download Magic Lasso Adblock today.

Brendan Nystedt, reporting for Wired on a new generation of admirers of crappy digital cameras from the early 2000s:

For those seeking to experiment with their photography, there’s an appeal to using a cheap, old digital model they can shoot with until it stops working. The results are often imperfect, but since the camera is digital, a photographer can mess around and get instant gratification. And for everyone in the vintage digital movement, the fact that the images from these old digicams are worse than those from a smartphone is a feature, not a bug.

Om Malik attributes it to wabi-sabi:

Retromania? Not really. It feels more like a backlash against the excessive perfection of modern cameras, algorithms, and homogenized modern image-making. I don’t disagree — you don’t have to do much to come up with a great-looking photo these days. It seems we all want to rebel against the artistic choices of algorithms and machines — whether it is photos or Spotify’s algorithmic playlists versus manually crafted mixtapes.

I agree, though I do not see why we need to find just one cause — an artistic decision, a retro quality, an aesthetic trend, a rejection of perfection — when it could be driven by any number of these factors. Nailing down exactly which of these is the most important is not of particular interest to me; certainly not nearly as much as understanding that people, as a general rule, value feeling.

I have written about this before and it is something I wish to emphasize repeatedly: efficiency and clarity are necessary elements, but are not the goal. There needs to be space for how things feel. I wrote this as it relates to cooking and cars and onscreen buttons, and it is still something worth pursuing each and every time we create anything.

I thought about this with these two articles, but first last week when Wil Shipley announced the end of Delicious Library:

Amazon has shut off the feed that allowed Delicious Library to look up items, unfortunately limiting the app to what users already have (or enter manually).

I wasn’t contacted about this.

I’ve pulled it from the Mac App Store and shut down the website so nobody accidentally buys a non-functional app.

Delicious Library was many things: physical and digital asset management software, a kind of personal library, and a wish list. But it was also — improbably — fun. Little about cataloguing your CDs and books sounds like it ought to be enjoyable, but Shipley and Mike Matas made it feel like something you wanted to do. You wanted to scan items with your Mac’s webcam just because it felt neat. You wanted to see all your media on a digital wooden shelf, if for no other reason than it made those items feel as real onscreen as they are in your hands.

Delicious Library became known as the progenitor of the “delicious generation” of applications, which prioritized visual appeal as much as utility. It was not enough for an app to be functional; it needed to look and feel special. The Human Interface Guidelines were just that: guidelines. One quality of this era was the apparently fastidious approach to every pixel. Another was that these applications often had limited features, but were so much fun to use that it was possible to overlook their restrictions.

I do not need to relitigate the subsequent years of visual interfaces going too far, then being reeled in, and then settling in an odd middle ground where I am now staring at an application window with monochrome line-based toolbar icons, deadpan typography, and glassy textures throwing a heavy drop shadow. None of the specifics matter much. All I care about is how these things feel to look at and to use, something which can be achieved regardless of how attached you are to complex illustrations or simple line work. Like many people, I spend hours a day staring at pixels. Which parts of that are making my heart as happy as my brain? Which mundane tasks are made joyful?

This is not solely a question of software; it has relevance in our physical environment, too, especially as seemingly every little thing in our world is becoming a computer. But it can start with pixels on a screen. We can draw anything on them; why not draw something with feeling? I am not sure we achieve that through strict adherence to perfection in design systems and structures.

I am reluctant to place too much trust in my incomplete understanding of a foreign-to-me concept rooted in another country’s very particular culture, but perhaps the sabi is speaking loudest to me. Our digital interfaces never achieve a patina; in fact, the opposite is more often true: updates seem to erase the passage of time. It is all perpetually new. Is it any wonder so many of us ache for things which seem to freeze the passage of time in a slightly hazier form?

I am not sure how anyone would go about making software feel broken-in, like a well-worn pair of jeans or a lounge chair. Perhaps that is an unattainable goal for something on a screen; perhaps we never really get comfortable with even our most favourite applications. I hope not. It would be a shame if we lose that quality as software eats our world.

Barry Schwartz, Search Engine Roundtable:

Google launched a new feature in the Google App for iOS named Page Annotation. When you are browsing a web page in the Google App native browser, Google can “extract interesting entities from the webpage and highlight them in line.” When you click on them, Google takes you to more search results.

This was announced nearly two weeks ago in a subtle forum post. If there was a press release, I cannot find it. It was only picked up by the press thanks to Schwartz’s November 21 article, but those stories were not published until just before the U.S. Thanksgiving long weekend, so this news was basically buried.

Google is now injecting “Page Annotations”, which are kind of like Skimlinks but with search results. The results from a tapped Page Annotation are loaded in a floating temporary sheet, so it is not like users are fully whisked away — but that is almost worse. In the illustration from Google, a person is apparently viewing a list of Japanese castles, into which Google has inserted a link on “Osaka Castle”. Tapping on an injected link will show Google’s standard search results, which are front-loaded with details about how to contact the castle, buy tickets, and see a map. All of those things would be done better in a view that cannot be accidentally swiped away.

Maybe, you are thinking, it would be helpful to easily trigger a search from selected text, and that is fair. But the Google app already displays a toolbar with a search button when you highlight any text.

Owners of web properties can opt out only by completing a Google Form, and they must be signed into the same Google account they use for Search Console. Also, if a property is accessible at multiple URLs — for example, http and https, or www and non-prefixed — each variation must be submitted separately.

For Google to believe it has the right to inject itself into third-party websites is pure arrogance, yet it is nothing new for the company. It has long approached the web as its own platform over which it has control and ownership. It overlays dialogs without permission; it invented a proprietary fork of HTML and pushed its adoption for years. It can only do these things because it has control over how people use the web.

From the official Bluesky account:

With this release, you can now display replies by “hotness,” which weights liked replies that are more recent more heavily.

I believe this replaces the previous default of sorting replies from oldest to newest. People seem worried this can be gamed, but there is good news: you can just change it. There are options for oldest replies, newest replies, most-liked, and one that is completely randomized. Also, you can still set it to prioritize people you follow.

Imagine that: options for viewing social media that give control back to users. Threads is experimenting, but Meta still fundamentally distrusts users to make decisions like these.