Month: February 2023

Jason Torchinsky, the Autopian:

Tesla is recalling 362,758 vehicles after a new report from the federal National Highway Traffic Safety Administration called out its Full Self-Driving Beta (FSD Beta) software for a reason that, when you think about it, is surprisingly human for a computer-based AI system: it sometimes breaks traffic laws.

The reason I put “recall” in quotation marks in the headline is because, like many Tesla recalls, this does not require owners to bring them into a dealership for a parts exchange or something; Tesla says it will be fixing these problems with a software update. And the reason I put “full self-driving” in quotation marks is, well, you know.

From that report (PDF):

In certain rare circumstances and within the operating limitations of FSD Beta, when the feature is engaged, the feature could potentially infringe upon local traffic laws or customs while executing certain driving maneuvers in the following conditions before some drivers may intervene: 1) traveling or turning through certain intersections during a stale yellow traffic light; 2) the perceived duration of the vehicle’s static position at certain intersections with a stop sign, particularly when the intersection is clear of any other road users; 3) adjusting vehicle speed while traveling through certain variable speed zones, based on detected speed limit signage and/or the vehicle’s speed offset setting that is adjusted by the driver; and 4) negotiating a lane change out of certain turn-only lanes to continue traveling straight.

Sometimes Teslas run yellow lights when they should not, do not come to a textbook complete stop at stop signs, do not follow the speed limit, and ignore lane signage. The joke here is that a Tesla in autonomous mode drives like everyone else.

But it should not, right? Like, is that not the whole point of autonomous cars? Self-driving advocates routinely say that human beings are bad drivers and we should let computers take over. So, even while these systems are not commonplace and still depend on driver attention, they should follow the law or, at the very least, not drive like an asshole. The same is true for human drivers.

We should also fund the rest of the mobility pyramid with an embarrassment of riches.

Certo, a company which makes smartphone spyware detection software, published a story on its blog about spyware which abuses the iPhone’s Wi-Fi syncing feature to obtain copies of the phone’s data. It is not a well-written article. It starts by arguing that “Apple’s supposedly impenetrable security” is “one of the reasons for [users’] loyalty”, before explaining this is not really a security problem. It is also not new — the vulnerabilities of Wi-Fi syncing have been known since at least 2018.

That information does little to ameliorate these abuses, however. Wi-Fi syncing is a logical vector for being spied upon by a family member or spouse by its very design: when it works properly, it is an invisible, largely passive way to keep an iPhone in sync with a computer.

But there is something in here which I think is worth drawing attention to:

Historically you could perform a simple check in the Settings app on the phone to see if WiFi Sync was enabled (and therefore if you may be a victim of this type of spyware). It would even display the name of the computer that your iOS device was set up to sync with. However, in iOS 13 and all subsequent updates, Apple has removed this information from the Settings app, making it extremely difficult to tell if it is enabled.

The only way to know if an iPhone has Wi-Fi syncing turned on is by checking in Finder on the trusted Mac, or in iTunes on a Windows PC. If Apple is not retiring this feature, it should be possible to see if an iPhone has Wi-Fi syncing enabled on the phone itself.

Eric Van Aelstyn of Microsoft:

The out-of-support Internet Explorer 11 (IE11) desktop application was permanently disabled on certain versions of Windows 10 on February 14, 2023 through a Microsoft Edge update. Note, this update will be rolled out over the span of a few days up to a week, as is standard for Microsoft Edge updates.

[…]

Will iexplore.exe be removed from devices?

No, but if a user tries to access it, they will be unable to open IE11 and will be redirected to Microsoft Edge.

I am sure nobody is mourning this, and it is a long time coming. But this update is a little different: rather than replacing an older version of the software with a newer one, it prevents users from launching Internet Explorer at all.

I do not think Microsoft should have been required to support IE indefinitely and, to be sure, other software platforms have issued updates to remove features — remember when iOS came with a first-party YouTube app? It is an interesting update, though, if only because it is rare for any vendor, let alone Microsoft, to force users to stop using software.

After I linked to an example of a confident but wrong answer from Google Bard, reader Nick Jarman emailed me with a tip about a similar flub in Microsoft’s Bing demo queries.

If you follow the prompt suggested on the New Bing homepage for making a pop music trivia game, the fifth suggested question is about an artist’s discography:

5. Which pop artist released the albums Future Nostalgia, Confessions on a Dance Floor, and Chromatica?

A) Dua Lipa

B) Madonna

C) Lady Gaga

D) Kylie Minogue

Bing says “A” is correct, but the question has no single correct answer: “Future Nostalgia” is by Dua Lipa, “Confessions…” is by Madonna, and “Chromatica” is by Lady Gaga. And that is not the only showpiece query where Bing gets confused.

Dmitri Brereton documents several other examples in Bing’s demo materials:

I am shocked that the Bing team created this pre-recorded demo filled with inaccurate information, and confidently presented it to the world as if it were good.

I am even more shocked that this trick worked, and everyone jumped on the Bing AI hype train without doing an ounce of due diligence.

Bing AI is incapable of extracting accurate numbers from a document, and confidently makes up information even when it claims to have sources.

I am not surprised these products are getting things wrong — they are brand new, after all — but it is disappointing to see these kinds of problems in the demos and marketing materials from these two giant companies. A demo usually shows a company’s products in their best light, and the limitations are discovered elsewhere. Is this the best they can do? Credit for the honesty, however unintended, but it only exposes the gap between the excited marketing of these nascent products and their reality.

Thanks again to Jarman for tipping me off to that first flawed demo.

Ted Chiang, the New Yorker:

[…] Think of ChatGPT as a blurry JPEG of all the text on the Web. It retains much of the information on the Web, in the same way that a JPEG retains much of the information of a higher-resolution image, but, if you’re looking for an exact sequence of bits, you won’t find it; all you will ever get is an approximation. But, because the approximation is presented in the form of grammatical text, which ChatGPT excels at creating, it’s usually acceptable. You’re still looking at a blurry JPEG, but the blurriness occurs in a way that doesn’t make the picture as a whole look less sharp.

This analogy to lossy compression is not just a way to understand ChatGPT’s facility at repackaging information found on the Web by using different words. It’s also a way to understand the “hallucinations,” or nonsensical answers to factual questions, to which large language models such as ChatGPT are all too prone. […]

I shamelessly cribbed the Xerox example in the last post from this excellent article. The similarities between many of these machine learning models are apparent: computational photography uses trained guesswork to reconstruct detail in blurry images, just as GPT and similar models draw on a vast library of language to create a best guess at seemingly precise phrases.
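To make Chiang’s point concrete, here is a toy sketch in Python — assuming the Pillow library and a hypothetical local file named photo.png — showing that a lossy round trip hands back a close approximation rather than the original bits:

```python
# A minimal sketch, assuming Pillow is installed (pip install Pillow) and a
# hypothetical source image named photo.png exists in the working directory.
from PIL import Image, ImageChops

original = Image.open("photo.png").convert("RGB")
original.save("compressed.jpg", quality=30)        # lossy JPEG encode
roundtrip = Image.open("compressed.jpg").convert("RGB")

# The round-tripped image looks much the same, but the exact bits are gone.
diff = ImageChops.difference(original, roundtrip)
print("pixel-identical:", diff.getbbox() is None)  # almost always False
```

The picture still reads as sharp to the eye, which is exactly the quality Chiang is describing in ChatGPT’s output: plausible on the surface, approximate underneath.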

Will Yager, PetaPixel:

Significantly more objectionable are the types of approaches that impose a complex prior on the contents of the image. This is the type of process that produces the trash-tier results you see in my example photos. Basically, the image processing software has some kind of internal model that encodes what it “expects” to see in photos. This model could be very explicit, like the fake moon thing, an “embodied” model that makes relatively simple assumptions (e.g. about the physical dynamics of objects in the image), or a model with a very complex implicit prior, such as a neural network trained on image upscaling. In any case, the camera is just guessing what’s in your image. If your image is “out-of-band”, that is, not something the software is trained to guess, any attempts to computationally “improve” your image are just going to royally trash it up.

This article arrived at a perfect time as Samsung’s latest flagship is once again mired in controversy over a Moon photography demo. Marques Brownlee tweeted a short clip of the S23 Ultra’s one-hundredfold zoom mode, which combines optical and digital zoom and produces a remarkably clear photo of the Moon. As with similar questions about the S21 Ultra and S22 Ultra, it seems Samsung is treading a blurry line between what is real and what is synthetic.

Samsung is surely not floating a known image of the Moon over its spot in the sky when you point the camera in its direction. But the difference between what it can see and what it displays is also not the result of increasing the image’s contrast and sharpness. If you look at a side-by-side comparison of Samsung’s S22 from last year and an iPhone 14 Pro — which the photographer claims “look the same” but do not in any meaningful sense — the Ultra is able to pull stunning detail and clarity out of a handheld image. Much of that can be attributed to the S22’s 10× optical zoom which outstrips the iPhone’s 3× zoom. Another reason it is so detailed is that Samsung specifically trained the camera to take pictures of the Moon, among other scenes. The Moon is a known object with limited variability, so it makes sense to me that machine learning models would be able to figure out which landmarks and details to enhance.

How much of that is the actual photo and how much you might consider to be synthesized is a line I think each person draws for themselves. I think it depends on the context; Moon photography makes for a neat demo but it is rarely relevant. A better question is whether these kinds of software enhancements hallucinate errors along the same lines as what happened with Xerox copiers for years. Short of erroneously reconstructing an environment, I think these kinds of computational enhancements make sense, even though they are inconsistent with my personal taste. I would prefer less processed images — or, at least, photos that look less processed. But that is not the way the wind is blowing.

I cannot recall much of the news from when I was a kid, but I distinctly remember a story about a home invasion where some intruders supposedly destroyed a bunch of valuables and covered the walls in graffiti. That narrative fell apart when police noticed the homeowner’s most prized possessions were untouched; they concluded it was insurance fraud. Some guy vandalized his stuff — actually, mostly his spouse’s stuff — and assumed there would be little investigation before blaming reckless youths.

In a similar vein — though without the fraud — comes this story from Rob Walker in Fast Company, carrying the headline “How a Viral TikTok Trend Vandalized Kia’s Brand”. Sure sounds like the youth these days have some kind of vendetta against Kia specifically, right? How are the TikTokers ruining Kia’s brand?

The “Kia Challenge” — involving videos sharing a purportedly straightforward hack that made it fairly easy to steal certain Kia and Hyundai models — initially flared up last summer. It briefly became a sensational-news staple, with media reports focused on alleged instances of thrill-seeking teens ending up in fiery wrecks. The how-to videos were promptly purged from TikTok. But unlike some scary-sounding viral “trends” that turn out to be mostly media hype (Nyquil Chicken, etc.), the spread of this car-theft how-to has had legs.

This is the second paragraph of the piece, and what follows is a series of statistics about increasing theft of Hyundais and Kias across the United States. And it is alarming — according to the Insurance Institute for Highway Safety, the rate of theft reports for cars from these two manufacturers was nearly twice as high as for other makes, and their models claimed four spots in the top twenty list.

But that is in the U.S.; in Canada, the list looks very different. Not a single Hyundai or Kia model appears in the national top-ten list released by the Équité Association, and the only mention of either manufacturer is at number nine on the list for the combined Atlantic provinces. The Canadian auto market looks very similar to that of the U.S. because we are only one-ninth as populous and, so, automakers rarely develop or import vehicles specifically for our drivers. But the auto theft charts indicate something else is happening in this country.

I will address how that is the case in a moment, but first we need to see how Walker describes why these cars are getting stolen at such high rates:

Without getting into some finer points about how to steal a car, the process involves using a USB cable to bypass the ignition. Evidently, Kia models from 2011-2021 and Hyundai models from 2015-2021 lack an anti-theft “immobilizer” feature common to many contemporary ignition-technology systems.

This sounds like a fairly noteworthy design flaw, but at first most of the attention was focused on social media’s role in spreading nefarious information. […]

To be clear, it is a noteworthy design flaw, and it explains why the U.S. and Canadian auto theft statistics diverge so drastically. Since 2007, cars sold in Canada have been required by law to be equipped with immobilizers. Hyundai and Kia — which have mutual part-ownership and share some development costs — apparently realized they were not obligated to equip their vehicles with such technology in the U.S., so they neglected to do so. Whether that was for cost-saving reasons or simple laziness seems to be the kind of thing they would not be keen to advertise, but both makes began fitting new U.S. vehicles with immobilizers after reports of this oversight gained momentum.

Yes, the ease of spreading information on TikTok seems to have accelerated awareness of this design flaw. It is important for the company to moderate its platform when it notices it is being used to encourage theft rather than to advise owners of ways to protect their cars. But it is important to publicize flaws like these because owners of these cars need to be made aware of their vulnerabilities. In 2010, reporters at an ABC affiliate in Detroit found SUVs made by General Motors could be stolen within seconds because the manufacturer decided electronic security measures eliminated the need for mechanical measures. A specific model shared by Citroën, Peugeot, and Toyota was known to be easy to steal in the mid-2000s. A family friend was able to start his 1980s pickup truck by simply twisting the winged ignition switch without a key in it. It is possible knowing any of this information would help some criminals target certain models, but there are far more owners who should be aware of vulnerabilities so they can protect themselves.

To say TikTok caused Kia’s reputation to nosedive is an overreach. Kia and Hyundai decided not to equip their U.S.-market vehicles with the same anti-theft measure which is standard on cars sold in Canada thanks to regulation. While awareness could help some owners reduce their vulnerability to theft, prevention would have been an even more effective measure. Decreasing vehicle theft should be a worthwhile goal unto itself because, as Walker explains, it has knock-on effects:

“I don’t want to ride in a car with somebody who has a Kia, I’m not comfortable parking my car or being in a car with somebody who is going to park next to a Kia,” one disgruntled ex-Kia owner told a St. Louis news station. “I kind of just stay away from it because they’re like a target.”

Kia and Hyundai, like General Motors before them, cannot blame anyone else for their corner-cutting. It seems social media platforms are doing their best to minimize the spread of tutorial videos, but I do not think anyone should point at TikTok when criminals take advantage of the bad decisions these manufacturers made. I am not convinced a “challenge” hashtag turns most average people into car thieves.

All of this is yet another exhibit in the ongoing case for effective social media moderation and, more importantly, for regulators crafting rules to avert disaster before it occurs. This is true of TikTok itself. If we are serious about making things better — just, like, generally — we need expectations for what that ought to look like.

Dustin Shahidehpour of Meta earlier this week:

Facebook for iOS (FBiOS) is the oldest mobile codebase at Meta. Since the app was rewritten in 2012, it has been worked on by thousands of engineers and shipped to billions of users, and it can support hundreds of engineers iterating on it at a time.

[…]

FBiOS was never intentionally architected this way. The app’s codebase reflects 10 years of evolution, spurred by technical decisions necessary to support the growing number of engineers working on the app, its stability, and, above all, the user experience.

Mohsen Agsen of Meta in March 2020 (previously linked):

In the end, we reduced core Messenger code by 84 percent, from more than 1.7M lines to 360,000. We accomplished this by rebuilding our features to fit a simplified architecture and design. While we kept most of the features, we will continue to introduce more features over time. Fewer lines of code makes the app lighter and faster, and a streamlined code base means engineers can innovate more quickly.

[…]

Overall, our approach was simple. If the OS did something well, we used it. We leveraged the full capability of the OS without needing to wait for any framework to expose that functionality. If the OS didn’t do something, we would find or write the smallest possible library code to address the specific need — and nothing more. We also embraced platform-dependent UI and associated tooling. For any cross-platform logic, we used an operating extension built in native C code, which is highly portable, efficient, and fast. We use this extension for anything OS-like that’s globally suboptimal, or anything that’s not covered by the OS. For example, all the Facebook-specific networking is done in C on our extension.

I am sure there are valid reasons for Meta to treat these applications differently, but reading these posts back to back makes them sound like they are from two completely different companies.

Jason Koebler, Vice:

“For years we’ve had to deal with the fact that an entire copy of our phone lives on a server that’s outside of our control. Now the data on that server is under our control. That’s really all that’s changed here,” Matthew Green, associate professor at Johns Hopkins University, told Motherboard in an online chat. “I think it’s an extremely important development.”

In my tests, the process of setting up Advanced Data Protection was a bit buggy, but if the system delivers what it promises, iCloud’s new security add-on could be a game changer for people who have avoided cloud backup tools due to the lack of end-to-end encryption.

There are two reasons to mistrust cloud storage providers. The first is that other people, unauthorized by users, may be able to access the data they store; Advanced Data Protection solves that problem. The second is that users may be unable to access something they are authorized to see because of a bug, systems failure, or data loss.1

Do not get me wrong; I am impressed with the surprise release and rapid rollout of Advanced Data Protection. It is worth pointing out that this appears to include availability in China, contrary to comments from some speculators at the feature’s announcement. I feel more secure knowing iCloud Drive operates more like an extension of my personal devices. Where I would like to see progress next is in ongoing stability and quality improvements. iCloud is a higher quality service than it used to be, but it can be better.


  1. A third fear is the possibility of irreversible changes made by others, something which was apparently possible in iCloud Drive. ↥︎

I may have already linked to this year’s instalment of the Six Colors Apple Report Card, but I did not expand my commentary beyond what Jason Snell quoted in the piece, or even reveal the grade I gave each category. I am still not going to do that in full — see if you can guess which of these is mine — but there are a couple of things I think are worth expanding upon.

Michael Tsai:

Software Quality: 1

Most things feel kind of buggy, and the Mac is in a particularly bad state, with a large number of small bugs (many persisting for years) and some debilitating larger ones. I’ve documented some of them here.

Federico Viticci gave software quality two points and was similarly underwhelmed:

Most of my concerns about Apple’s software quality this year are about the poor, unfinished, confusing state they shipped Stage Manager in. I’m not going to rehash all that. Instead, I’d also point out that I was hoping to see more improvements on the Shortcuts front in 2022, and instead the app was barely touched last year. It received some new actions for built-in apps, but no deeper integration with the system. I continue to experience crashes and odd UI glitches when working on more complex shortcuts, and I’d like to see more polish and stability in the app.

I gave software quality three points out of five, and I was being generous to a perhaps unfair extent. I have a hard time knowing how to grade this category and I always regret whatever score I choose because it is a mixed bag within Apple’s ecosystem and across all software I use regularly.

In 2022, I filed something like two bug reports every week solely against Apple’s own applications and operating systems. I am not a developer, so none of these are problems with APIs or documentation; all are from a user’s perspective. Some of them are not so significant, but contribute to a general feeling of unfinishedness — for example, if you initiate Siri in MacOS Ventura, you will see a tiny shadow in the upper-right of the screen, as though there is a window in the foreground.1 Sometimes, it is a goofy bug in Safari that beachballs your Mac. But a lot of the time, I find bugs that make me wonder if anyone actually tested the product before shipping it — “death by a thousand cuts” kind of stuff. Apple News links which point to stories unavailable in News where I live, but which are available on the web, are a dead end;2 AirPlay remains unreliable unless you set your Apple TV to never sleep;3 Siri sometimes asks me which contact details to use for iMessage when I ask it to send a message to a recent contact.4 These are just a few of the bugs I reported in the past several months. Apple News has used inscrutable URLs since it launched and there is still no way to preview what a channel’s icon or logo looks like — both are bugs I reported eight years ago.

Is all of that deserving of a lower score? Probably, but I have a hard time figuring out whether this is abnormally poor or merely worse than it ought to be. I seem to be living and working in a sea of bugs no matter whether I am using my Macs at home, the Windows PC at work, Adobe’s suite of products, or my thermostat. It is disheartening to realize we have built our modern world on unstable, warranty-free systems.

Stephen Hackett:

Developer Relations: 2/5

It’s hard to think of a harder self-own than Apple’s rollout of additional App Store ads in late 2022. The App Store was instantly flooded with ads for low-brow titles like gambling and hook-up apps.

I am pretty sure I did not score developer relations, but the sentences above gave me some awful flashbacks. One positive developer note this year was the so-called “reader” app link entitlement. That is right: in the year 2022, developers are permitted to show an external link for user registration — if Apple deems them eligible. It is progress, however slow.


  1. FB11897294 ↥︎

  2. FB10908000; this Apple News link still goes nowhere for me. ↥︎

  3. FB10710546 ↥︎

  4. FB11699482 ↥︎

Clive Thompson:

Algorithmically-sorted feeds are good for some things. They let you know what are the big, popular conversations of the day, which is valuable! But if you stare at ’em too much, it’s intellectual monocropping. All you wind up knowing (and thinking about) are the same things everyone else knows and is thinking about.

So it’s also important to rewild your mind — to cultivate your own quirky, overgrown, weedy garden of culture.

I like this analogy.

Thompson advises following lots of sites; I think that is good advice, generally speaking, as there is no cost if you are not a completionist. But the best part of feed readers, I think, is how they reward people for subscribing to websites which are updated infrequently or sporadically. According to Feedbin, which is my preferred subscription backend, I have 210 active subscriptions. That includes an awful lot of newsletters and personal blogs — which are actually the same thing, but I do not think the investors backing Substack have noticed — but it does not include things I would check daily, like major news sites.

One thing you might not be aware of is how many websites offer feeds for specific sections of their site. For example, while I would never follow the New York Times’ homepage in a feed reader, I do subscribe to its Media feed. WordPress-powered sites, meanwhile, sometimes have feeds for specific tags or post categories.
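If you are curious what following one of those section feeds looks like in practice, here is a rough sketch assuming the Python feedparser library and hypothetical example.com URLs that follow WordPress’ conventional /category/<slug>/feed/ and /tag/<slug>/feed/ patterns:

```python
# A minimal sketch, assuming feedparser is installed (pip install feedparser).
# The example.com URLs are hypothetical; WordPress sites conventionally expose
# per-category and per-tag feeds at /category/<slug>/feed/ and /tag/<slug>/feed/.
import feedparser

section_feeds = [
    "https://example.com/category/media/feed/",  # hypothetical category feed
    "https://example.com/tag/climate/feed/",     # hypothetical tag feed
]

for url in section_feeds:
    feed = feedparser.parse(url)
    print(feed.feed.get("title", url))
    for entry in feed.entries[:5]:
        print("  -", entry.get("title"), entry.get("link"))
```

In practice, of course, a reader like Feedbin does this polling for you; the point is only that a section feed is just another URL you can subscribe to.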

I still think website feeds have something of a branding problem. “RSS” and “feed reader”, despite the former term being an acronym for “really simple syndication”, sound technical and hard to use. It is just as easy to follow a site with a reader as it is on any social media platform. We can and should make this a less scary proposition.

Maddie Stone, Grist:

The passage of the Digital Fair Repair Act last June reportedly caught the tech industry off guard, but it had time to act before Governor Kathy Hochul would sign it into law. Corporate lobbyists went to work, pressing Albany for exemptions and changes that would water the bill down. They were largely successful: While the bill Hochul signed in late December remains a victory for the right-to-repair movement, the more corporate-friendly text gives consumers and independent repair shops less access to parts and tools than the original proposal called for. (The state Senate still has to vote to adopt the revised bill, but it’s widely expected to do so.)

[…]

Hochul’s office sent TechNet’s revised draft to repair advocates to get their reaction. Those advocates shared the TechNet-edited version of the bill with Fahy’s staff, which gave it to the Federal Trade Commission, or FTC, the agency charged with protecting American consumers. Documents that Repair.org shared with Grist show that FTC staff were highly critical of many of the changes. The parts assembly provision, one commission staffer wrote in response to TechNet’s edits, “could be easily abused by a manufacturer” to create a two-tiered system in which individual components like batteries are available only to authorized repair partners. Another of TechNet’s proposed changes — deleting a requirement that manufacturers give owners and independent shops the ability to reset security locks in order to conduct repairs — could result in a “hollow right to repair” in which security systems thwart people from fixing their stuff, the staffer wrote.

TechNet is an industry lobbying group with members like Amazon, Apple, Google, Meta, and Samsung.

I do not think the concerns raised by TechNet should be dismissed out of hand as simple influence peddling. There are real security and privacy concerns if it is possible to disable lockout features, even if it is being done with the best of intentions and with full permission. But there ought to be a solution; I think John Bumstead’s idea is worth considering. Security risks should not be used as a convenient excuse for restricting third-party repairability.

James Vincent, the Verge:

Google demoed its latest advances in AI search at a live event in Paris on Wednesday — but the features pale in comparison to Microsoft’s announcement yesterday of the “new Bing,” which the company has demoed extensively to the press and offered limited public access to.

Martin Coulter and Greg Bensinger, Reuters:

A selloff of Alphabet Inc shares knocked $100 billion in market value from Google’s parent company on Wednesday after its new chatbot shared inaccurate information in a promotional video and a company event failed to dazzle, feeding worries the tech giant is losing ground to rival Microsoft Corp.

Alphabet shares slid nearly 9% at one point, while Microsoft shares jumped around 3% before paring gains. Reuters was first to point out an error in Google’s advertisement for chatbot Bard, which debuted Monday, about which satellite first took pictures of a planet outside the Earth’s solar system.

The Reuters headline — “Alphabet shares dive after Google AI chatbot Bard flubs answer in ad” — seems to connect the two events but, as far as I can tell, the selloff has more to do with a muted response to Google’s presentation and the still-unknown timing of Bard’s public debut. Also, Reuters was only “first to point out an error” if you ignore the earlier replies to Google’s tweet.

Still, it is embarrassing for Google to have shown a product which looks weak in comparison to an offering from, of all companies, Microsoft. This feels a little like an echo of the past, too, as Google has long struggled with its “Snippets” feature.

Joanna Stern, Wall Street Journal:

Microsoft’s new Bing and Edge became available in a limited preview Tuesday. You have to sign up on bing.com for the preview wait list, and once you are in, you’ll have to use the Edge browser (available for Windows and MacOS). Microsoft plans to bring it to other web browsers over time.

It’s far too early to call a winner in this AI search race. But after seeing the new Bing in action, I can confidently say this: A big change is coming to how we get information and how we interact with our computers.

Microsoft announced today’s event unveiling these developments midday yesterday, hours after Google announced its efforts in the space, as it has done before. I am not sure whether to read this as panic or excitement, though Meta’s caution is notable.

Late last night, I assembled a series of links with commentary about this nascent field. I think it holds up. While Microsoft may be the first to play its cards, I cannot imagine Google will not quickly respond.

The big question right now is, I think, where Amazon and Apple are at internally. Are they racing to make Alexa and Siri competitive? Are they maybe waiting it out to see if this is a real, exciting development, or yet more baseless hype like so many technology land rushes before it? Or, a third possibility: is either of them concerned about the ethical consequences of these kinds of products outweighing their benefits? I mean, I doubt it — business, baby! — but it would be nice if any company considered it.

There are many things which I was considering linking to in recent weeks, but it is easier to do so via a story told through block quotes. They share a common thread and theme.

I begin with Can Duruk, at Margins:

I’m old enough to remember when Google came out, which makes me old enough to remember at least 20 different companies that were touted as Google-killers. I applaud every single one of them! A single American company operating as a bottleneck behind the world’s information is a dangerous, and inefficient proposition and a big theme of the Margins is that monopolies are bad so it’s also on brand.

But for one reason and another, none of the Google competitors have seemed to capture the world’s imagination.

Until now, that is. Yes, sorry, I’m talking about that AI bot.

Dave Winer:

I went to ChatGPT and entered “Simple instructions about how to send email from a Node.js app?” What came back was absolutely perfect, none of the confusing crap and business models you see in online instructions in Google. I see why Google is worried.

Michael Tsai has a collection of related links. All of the above were posted within the month of January. It really felt like the narrative of competition between Google and ChatGPT was reaching some kind of peak.

Owen Yin last week:

Gone are the days of typing in specific keywords and struggling to find the information you need. Microsoft Bing is on the cusp of releasing its ChatGPT integration, which will allow users to ask questions in a natural way and get a tailored search experience that will reshape how we explore the internet.

I got a preview of Bing’s ChatGPT integration and managed to get some research in before it was shut off.

That is right: Microsoft was first to show a glimpse of the future of search engines with Bing, but only for a few people briefly. I tried to URL hack Bing to see if I could find any remnant of this and I think I got close: visiting bing.com/chat will show search results for “Bing AI”. If nothing else, it is a very good marketing tease.

Sundar Pichai today:

We’ve been working on an experimental conversational AI service, powered by LaMDA, that we’re calling Bard. And today, we’re taking another step forward by opening it up to trusted testers ahead of making it more widely available to the public in the coming weeks.

[…]

Now, our newest AI technologies — like LaMDA, PaLM, Imagen and MusicLM — are building on this, creating entirely new ways to engage with information, from language and images to video and audio. We’re working to bring these latest AI advancements into our products, starting with Search.

Via Andy Baio:

Google used to take pride in minimizing time we spent there, guiding us to relevant pages as quickly as possible. Over time, they tried to answer everything themselves: longer snippets, inline FAQs, search results full of knowledge panels.

Today’s Bard announcement feels like their natural evolution: extracting all value out of the internet for themselves, burying pages at the bottom of each GPT-generated essay like footnotes.

Google faced antitrust worries due, in part, to its Snippets feature, which automatically excerpts webpage text to answer queries without the searcher having to click. As of 2019, over half of Google searches ended without the user clicking through to another website.

The original point of search engines was to direct people to websites of interest. But that has not been the case for years. People are not interested in visiting websites about a topic; they, by and large, just want answers to their questions. Google has been strip-mining the web for years, leveraging its unique position as the world’s most popular website and its de facto directory to replace what made it great with what allows it to retain its dominance. Artificial intelligence — or some simulation of it — really does make things better for searchers, and I bet it could reduce some tired search optimization tactics. But it comes at the cost of making us all into uncompensated producers for the benefit of trillion-dollar companies like Google and Microsoft.

Baio:

Personally, I wish that the “code red” response that ChatGPT inspired at Google wasn’t to launch a dozen AI products that their red teams and AI ethicists have warned them not to release, but to combat the tsunami of AI-generated SEO spam bullshit that’s in the process of destroying their core product. Instead, they’re blissfully launching new free tools to generate even more of it.

It is fascinating to see Google make its “Bard” announcement in the weeks following CNet’s embarrassing and lucrative generated articles.

Jon Christian, Futurism:

Looking at the outraged public reaction to the news about CNET‘s AI articles, [director of search engine optimization, Jake] Gronsky realized that the company might have gone too far. In fact, he explained, he saw the whole scandal as a cautionary tale illustrating that Red Ventures shouldn’t mark its AI-generated content for readers at all.

“Disclosing AI content is like telling the IRS you have a cash-only business,” he warned.

Gronsky wasn’t just concerned about CNET and Bankrate, though. He was also worried that a Google crackdown could impact Red Ventures’ formidable portfolio of sites that target prospective college students, known internally as Red Ventures EDU.

To be clear, it seems like saner and more ethically conscious heads prevailed and CNet articles generated by automated means are marked.

The web is corrupt and it is only getting worse. Websites supported by advertising — itself an allegedly illegal Google monopoly — rely on clicks from searchers, so they have compromised their integrity to generate made-for-Google articles, many of which feature high-paying affiliate links. Search optimization experts have spent years in an adversarial relationship with Google in an attempt to get their clients’ pages to the coveted first page of results, often through means which make results worse for searchers. Artificial intelligence is, it seems, a way out of this mess — but the compromise is that search engines get to take from everyone while giving nothing back. Google has been taking steps in this direction for years: its results page has been increasingly filled with ways of discouraging people from leaving its confines. A fully automated web interpreter is a more fully realized version of this initiative. Searchers get the results they are looking for and, in the process, undermine the broader web. The price we all pay is the worsening of the open web for Google’s benefit.

New world, familiar worries.

Thoughtful commentary this year from a panel of smart people, and I also wrote some things. The year-over-year comparison chart at the top is telling: a couple of bummer categories, a few outstanding ones, and stagnating grades in several.

Update: I missed this at first, but it is my favourite thing in this report card:

Adam Engst wrote: “Everyone involved with System Settings for Ventura should be reassigned to work on fax drivers.”

Good one.

Jason Scott:

Where possible, save the original. Where possible, digitize the original or maintain a digital copy. Ephemera and transient content is just as important to maintain as products and projects. Digitize at the highest resolution and fidelity possible, but realize you’re never going to get it perfect and keep the originals around, if you can. Make digital copies as widely available as possible, all the time, so it finds its value to people seeking it.

[…]

I’ve been asked, in all manner of ways, what the most difficult part of the process is – is it tracking down items to work on, or finding the right order, or devising which video container codec is best for a ripping of a VHS tape, the DPI of a paper scan, or which equipment stack is best for the job?

No, none of that.

It’s the crushing loneliness.

The Internet Archive remains a uniquely brilliant resource on the web. Kudos to Scott for finding a solution for himself while continuing this often tedious but necessary work.

Mishaal Rahman tweeted a question: “how much of your phone’s storage is taken up by the system?” Respondents with Google Pixels seem to be doing okay, with about 16–18 GB consumed by the system. That is nothing compared to people with Samsung devices, some of whom reported 56, 62, 68, 75, and 203 GB of system files. My iPhone 12 Pro looks more acceptable, at first blush, with just 8.5 GB consumed by iOS, but the separate “System Data” section consumes nearly 20 GB. It has a beta profile on it, so it may be logging more than a typical iPhone, but it is in the right range.

I thought about this when I was having some iPad storage troubles last month. That device, which has a nearly fresh installation of iPadOS 16.3, reports 7 GB consumed by the operating system and another 4 GB in the catch-all “System Data” category. It is only a 32 GB model, but nearly a quarter of that space is consumed by iPadOS alone.

I continue to believe this rabid appetite for bytes is disrespectful to users. Sometimes, it is hard to see where those bytes are going. The iOS installer for my iPhone has grown by a full gigabyte since the version which came with the device. The model I had before that one came with iOS 9, the installer for which consumed 2 GB; the installer for its most recent version is over 5 GB. I cannot find numbers for the on-device disk space consumption after installation, but it has likely grown by similar amounts. I am sure there are not gigabytes of padding in iOS, but I also find it difficult to understand why it has grown by so much on the same device.

Megan Garber, the Atlantic:

In his 1985 book, Amusing Ourselves to Death, the critic Neil Postman described a nation that was losing itself to entertainment. What Newton Minow had called “a vast wasteland” in 1961 had, by the Reagan era, led to what Postman diagnosed as a “vast descent into triviality.” Postman saw a public that confused authority with celebrity, assessing politicians, religious leaders, and educators according not to their wisdom, but to their ability to entertain. He feared that the confusion would continue. He worried that the distinction that informed all others — fact or fiction — would be obliterated in the haze.

[…]

These are Postman’s fears in action. They are also Hannah Arendt’s. Studying societies held in the sway of totalitarian dictators — the very real dystopias of the mid-20th century — Arendt concluded that the ideal subjects of such rule are not the committed believers in the cause. They are instead the people who come to believe in everything and nothing at all: people for whom the distinction between fact and fiction no longer exists.

This is an unquestionably thoughtful piece that explores the seemingly agreed-upon phenomenon of citizens of the United States — in Garber’s terms, but I do not think it is limited to the one country — viewing society from an increasingly detached perspective. Instead of people and ongoing events, we see only characters in a storyline. I think it is very well worth your time and consideration.

But I think it is telling that Garber repeatedly references decades-old books and essays which say, more or less, the same thing. These are issues which society has grappled with for decades. Granted, public trust in government and institutional figureheads has been declining and, so, perhaps some people are filling in the mistrust with their imagination. It also feels forced, to me, to drag these well-trodden arguments into a more contemporary framing by using the word “metaverse”, which I am not sure entirely makes sense here despite Garber’s justifications.

That aside, I appreciate Garber’s updated exploration of the confused zone between what we consider the real world and what we treat as entertainment. The rapid pace at which studios option current events seems, to me, to be both a cause of this phenomenon and a product of it — all of those streaming services demand exclusive shows, and it is more exciting to watch a simplified and enhanced version of real life. I am surprised Garber did not cite the Social Network as an accelerant; I have noticed fictional events from the film being casually used when describing Facebook’s origins. People describe others as “real-life NPCs” and it can be hard to know when they are doing so earnestly or ironically.

One other thing; Garber:

The efforts to hold the instigators of the insurrection to account have likewise unfolded as entertainment. “Opinion: January 6 Hearings Could Be a Real-Life Summer Blockbuster,” read a CNN headline in May — the unstated corollary being that if the hearings failed at the box office, they would fail at their purpose. (“Lol no one is watching this,” the account of the Republican members of the House Judiciary Committee tweeted as the hearings were airing, attempting to suggest such a failure.)

Garber notes reasons the hearings actually drew large audiences. She does not mention one other explanation: the Committee hired former ABC News producers to make the hearings more compelling for television viewers, and they leveraged connections to put them in prime time slots. It is fair to portray this as a way to turn dry scraps of testimony and a confusing series of events into an understandable storyline. A more cynical read is that these producers treated real life as entertainment fodder. As this essay suggests, the line has fully blurred.