Month: June 2024

Ashley Belanger, reporting for Ars Technica in July 2022 in what I will call “foreshadowing”:

Despite all the negative feedback [over then-recent Instagram changes], Meta revealed on an earnings call that it plans to more than double the number of AI-recommended Reels that users see. The company estimates that in 2023, about a third of Instagram and Facebook feeds will be recommended content.

Ed Zitron:

In this document [leaked to Zitron], they discuss the term “meaningful interactions,” the underlying metric which (allegedly) guides Facebook today. In January 2018, Adam Mosseri, then Head of News Feed, would post that an update to the News Feed would now “prioritize posts that spark conversations and meaningful interactions between people,” which may explain the chaos (and rot) in the News Feed thereafter.

To be clear, metrics around time spent hung around at the company, especially with regard to video, and Facebook has repeatedly and intentionally made changes to manipulate its users to satisfy them. In his book “Broken Code,” Jeff Horwitz notes that Facebook “changed its News Feed design to encourage people to click on the reshare button or follow a page when they viewed a post,” with “engineers altering the Facebook algorithm to increase how often users saw content reshared from people they didn’t know.”

Zitron, again:

When you look at Instagram or Facebook, I want you to try and think of them less as social networks, and more as a form of anthropological experiment. Every single thing you see on either platform is built or selected to make you spend more time on the app and see more things that Meta wants you to see, be they ads, sponsored content, or suggested groups that you can interact with, thus increasing the amount of your “time spent” on the app, and increasing the amount of “meaningful interactions” you have with content.

Zitron is a little too eager, for my tastes, to treat Meta’s suggestions of objectionable and controversial posts as deliberate. It seems much more likely the company simply sucks at moderating this stuff at scale and is throwing in the towel.

Kurt Wagner, Bloomberg:

In late 2021, TikTok was on the rise, Facebook interactions were declining after a pandemic boom and young people were leaving the social network in droves. Chief Executive Officer Mark Zuckerberg assembled a handful of veterans who’d built their careers on the Big Blue app to figure out how to stop the bleeding, including head of product Chris Cox, Instagram boss Adam Mosseri, WhatsApp lead Will Cathcart and head of Facebook, Tom Alison.

During discussions that spanned several meetings, a private WhatsApp group, and an eventual presentation at Zuckerberg’s house in Palo Alto, California, the group came to a decision: The best way to revive Facebook’s status as an online destination for young people was to start serving up more content from outside a person’s network of friends and family.

Jason Koebler, 404 Media:

At first, previously viral (but real) images were being run through image-to-image AI generators to create a variety of different but plausibly believable AI images. These images repeatedly went viral, and seemingly tricked real people into believing they were real. I was able to identify a handful of the “source” or “seed” images that formed the basis for this type of content. Over time, however, most AI images on Facebook have gotten a lot easier to identify as AI and a lot more bizarre. This is presumably happening because people will interact with the images anyway, or the people running these pages have realized they don’t need actual human interaction to go viral on Facebook.

Sarah Perez, TechCrunch:

Instagram confirmed it’s testing unskippable ads after screenshots of the feature began circulating across social media. These new ad breaks will display a countdown timer that stops users from being able to browse through more content on the app until they view the ad, according to informational text displayed in the Instagram app.

These pieces each seem like they are circling a theme of a company finding the upper bound of its user base, and then squeezing it for activity, revenue, and promising numbers to report to investors. Unlike Zitron, I am not convinced we are watching Facebook die. I think Koebler is closer to the truth: we are watching its zombification.

Kevin Beaumont:

At a surface level, it [Recall] is great if you are a manager at a company with too much to do and too little time as you can instantly search what you were doing about a subject a month ago.

In practice, that audience’s needs are a very small (tiny, in fact) portion of Windows userbase — and frankly talking about screenshotting the things people in the real world, not executive world, is basically like punching customers in the face. The echo chamber effect inside Microsoft is real here, and oh boy… just oh boy. It’s a rare misfire, I think.

Via Eric Schwarz:

This fact that this feature is basically on by default and requires numerous steps to disable is going to create a lot of problems for people, especially those who click through every privacy/permission screen and fundamentally don’t know how their computer actually operates — I’ve counted way too many instances where I’ve had to help people find something and they have no idea where anything lives in their file system (mostly work off the Desktop or Downloads folders). How are they going to even grapple with this?

The problems with Recall remind me of the minor 2017 controversy around “brassiere” search results in Apple’s Photos app. Like Recall, it is entirely an on-device process with some security and privacy protections. In practice, automatically cataloguing all your photos which show a bra is kind of creepy, even if it is being done only with your own images on your own phone.

Liz Reid, head of Google Search, on the predictably bizarre results of rolling out its “A.I. Overviews” feature:

One area we identified was our ability to interpret nonsensical queries and satirical content. Let’s take a look at an example: “How many rocks should I eat?” Prior to these screenshots going viral, practically no one asked Google that question. You can see that yourself on Google Trends.

There isn’t much web content that seriously contemplates that question, either. This is what is often called a “data void” or “information gap,” where there’s a limited amount of high quality content about a topic. However, in this case, there is satirical content on this topic … that also happened to be republished on a geological software provider’s website. So when someone put that question into Search, an AI Overview appeared that faithfully linked to one of the only websites that tackled the question.

This reasoning sounds almost circular in the context of what A.I. answers are supposed to do. Google loves demonstrating how users can enter a query like “suggest a 7 day meal plan for a college student living in a dorm focusing on budget friendly and microwavable meals” and see a grouped set of responses synthesized from a variety of sources. That is surely a relatively uncommon query. I was going to prove that in the same way as Reid did, but when I enter it in Google Trends, I get a 400 error. Even a shortened version is searched so rarely it has no data.

The organic, non-A.I. search results for the long query are plentiful but do not exactly fulfill its specific criteria. Most of the links I saw are not microwave-only, or are simple lists not grouped into particular meal types. Nothing I could find specifically answers the question posed. In order to fulfill the query in the demo video, Google’s search engine has to look through everything it knows and find meals which cook in a microwave, and organize them into a daily plan of different meal types.

But Google is also blaming the novelty of the rocks query and the satirical information directly answering it for the failure of its A.I. features. In other words, the cool thing Google wants to say about its A.I. stuff is that it can handle unpopular or new queries by sifting through the web and merging together a bunch of stuff it finds. The bad thing about its A.I. stuff, it turns out, is basically the same.

Benj Edwards, Ars Technica:

Here we see the fundamental flaw of the system: “AI Overviews are built to only show information that is backed up by top web results.” The design is based on the false assumption that Google’s page-ranking algorithm favors accurate results and not SEO-gamed garbage. Google Search has been broken for some time, and now the company is relying on those gamed and spam-filled results to feed its new AI model.

Reid says Google has made a bunch of changes to address the issues raised, but none of them correct for a fundamental shift in what search results are. Google used to be a directory — admittedly one ranked by mysterious criteria — allowing users to decide which results best fit their needs. It has slowly repositioned itself toward answering their queries with authority. Its A.I. answers are a fuller realization of features like Featured Snippets and the Answer Box. That is: instead of seeing options which may match their query, Google is now giving searchers singular answers. It has transformed from a referrer into an omniscient responder.

Deviant Ollam gave a brand new talk at CackalackyCon this year about fire safety standards from a pentesting perspective. It is as entertaining as just about anything you may have seen from Ollam, despite being about two hours long.