Jason Koebler, 404 Media:

The complete destruction of Google Search via forced AI adoption and the carnage it is wreaking on the internet is deeply depressing, but there are bright spots. For example, as the prophecy foretold, we are learning exactly what Google is paying Reddit $60 million annually for. And that is to confidently serve its customers ideas like, to make cheese stick on a pizza, “you can also add about 1/8 cup of non-toxic glue” to pizza sauce, which comes directly from the mind of a Reddit user who calls themselves “Fucksmith” and posted about putting glue on pizza 11 years ago.

Katie Notopoulos, putting the “business” in Business Insider:

I knew my assignment: I had to make the Google glue pizza. (Don’t try this at home! I risked myself for the sake of the story, but you shouldn’t!)

My timeline on three entirely separate social networks — Bluesky, Mastodon, and Threads — has been chock full of examples of Google’s A.I. answers absolutely eating dirt — or, in one case, rocks — in the face of obvious satire and shitposting. Well, obvious to us. Computers, it seems, have not figured out glue and gasoline are bad for food.

The A.I. answers from Google are not all yucks and chuckles, unfortunately.

Nic Lake:

Yesterday (Part 1) I saw that mushrooms post, and knew something like that was going to get people hurt. I didn’t really think that (CONTENT WARNING) asking how best to deal with depression was going to be next on the “shit I didn’t want to see” Bingo card.

The organizations know. They know that these tools are not ready. They call it a “beta” and feed it to you anyway.

Google is manually removing A.I. results where appropriate, and it claims some of the screenshots which have been circulating have been faked in some way, without specifying which ones.

To quote week-ago me:

Given the sliding quality of Google’s results, it seems quite bold for the company to be confident users worldwide will trust its generated answers.

Quite bold, indeed.

I do not expect perfection, but it is downright embarrassing that Google rolled out a product so unreliable and occasionally dangerous it continues to tarnish a reputation already suffering. Google’s Featured Snippets were bad enough. Now it is in the process of rolling out a whole new level of overconfident nonsense to the entire world, fixing it as everyone tests its limits.

The bad news: Apple shipped an alarming bug in iOS 17.5 which sometimes revealed photos previously deleted by the user and, in the process, created a reason for users to mistrust how their data is handled. This was made especially confusing by Apple’s lack of commentary.

The good news: Apple patched the bug within a week. Also, the lone story about deleted photos reappearing on a wiped iPad given to someone else has itself since been deleted and seems to be untrue.

The bad news: aside from acknowledging this “rare issue where photos that experienced database corruption could reappear in the Photos library even if they were deleted”, there was still little information about exactly what happened. Users quite reasonably expect things they deleted to stay deleted, and when they do not, they are going to have some questions.

The good news: as I predicted, Apple gave an explanation to 9to5Mac, which generously allowed for it to be on background. Chance Miller:

One question many people had is how images from dates as far back as 2010 resurfaced because of this problem. After all, most people aren’t still using the same devices now as they were in 2010. Apple confirmed to me that iCloud Photos is not to be blamed for this. Instead, it all boils down to the corrupt database entry that existed on the device’s file system itself.

A much more technically minded answer was provided by Synacktiv, a security firm that reverse-engineered the bug fix release and compared it to the original 17.5 release.

Bugs are only as bad as the effects they have. I heard from multiple readers who said this bug damaged how much they trust iOS and Apple. This is self-selecting — I likely would not have heard from people who both experienced this bug and thought it was no big deal. I can imagine a normal user who does not read 9to5Mac and finds their deleted photos restored is still going to be spooked.

Finally. The government of the United States finally passed a law that would allow it to force the sale of, or ban, software and websites from specific countries of concern. The target is obviously TikTok — it says so right in its text — but crafty lawmakers have tried to add enough caveats and clauses and qualifiers to, they hope, avoid it being characterized as a bill of attainder, and to permit future uses. This law is very bad. It stakes out an ineffective and illiberal position that abandons democratic values over, effectively, a single app. Unfortunately, TikTok panic is very popular in the U.S. and, also, here in Canada.

The adversaries the U.S. is worried about are the “covered nations” defined in 2018 to restrict the acquisition by the U.S. of key military materials from four countries: China, Iran, North Korea, and Russia. The idea behind this definition was that it was too risky to procure magnets and other important components of, say, missiles and drones from a nation the U.S. considers an enemy, lest those parts be compromised in some way. So the U.S. wrote down its least favourite countries for military purposes, and that list is now being used in a bill intended to limit TikTok’s influence.

According to the law, it is illegal for any U.S. company to make available TikTok and any other ByteDance-owned app — or any app or website deemed a “foreign adversary controlled application” — to a user in the U.S. after about a year unless it is sold to a company outside the covered countries, and with no more than twenty percent ownership stake from any combination of entities in those four named countries. Theoretically, the parent company could be based nearly anywhere in the world; practically, if there is a buyer, it will likely be from the U.S. because of TikTok’s size. Also, the law specifically exempts e-commerce apps for some reason.

This could be interpreted as either creating an isolated version specifically for U.S. users or, as I read it, moving the global TikTok platform to a separate organization not connected to ByteDance or China.1 ByteDance’s ownership is messy, though mostly U.S.-based, but politicians worried about its Chinese origin have had enough, to the point they are acting with uncharacteristic vigour. The logic seems to be that it is necessary for the U.S. government to influence and restrict speech in order to prevent other countries from influencing or restricting speech in ways the U.S. thinks are harmful. That is, the problem is not so much that TikTok is foreign-owned, but that it has ownership ties to a country often antithetical to U.S. interests. TikTok’s popularity might, it would seem, be bad for reasons of espionage or influence — or both.

Power

So far, I have focused on the U.S. because it is the country that has taken the first step to require non-Chinese control over TikTok — at least for U.S. users but, due to the scale of its influence, possibly worldwide. It could force a business to entirely change its ownership structure. It may look funny, then, for a Canadian to explain their views of what the U.S. ought to do about foreign political interference, but this is a matter of relevance in Canada as well. Our federal government raised the alarm on “hostile state-sponsored or influenced actors” influencing Canadian media and said it had ordered a security review of TikTok. There was recently a lengthy public inquiry into interference in Canadian elections, with a special focus on China, Russia, and India. Clearly, the popularity of a Chinese application is, in the eyes of these officials, a threat.

Yet it is very hard not to see the rush to kneecap TikTok’s success as a protectionist reaction to a challenge to U.S. dominance of consumer technologies, as convincingly expressed by Paris Marx at Disconnect:

In Western discourses, China’s internet policies are often positioned solely as attempts to limit the freedoms of Chinese people — and that can be part of the motivation — but it’s a politically convenient explanation for Western governments that ignores the more important economic dimension of its protectionist approach. Chinese tech is the main competitor to Silicon Valley’s dominance today because China limited the ability of US tech to take over the Chinese market, similar to how Japan and South Korea protected their automotive and electronics industries in the decades after World War II. That gave domestic firms the time they needed to develop into rivals that could compete not just within China, but internationally as well. And that’s exactly why the United States is so focused not just on China’s rising power, but how its tech companies are cutting into the global market share of US tech giants.

This seems like one reason why the U.S. has so aggressively pursued a divestment or ban since TikTok’s explosive growth in 2019 and 2020. On its face it is similar to some reasons why the E.U. has regulated U.S. businesses that have, it argues, disadvantaged European competitors, and why Canadian officials have tried to boost local publications that have seen their ad revenue captured by U.S. firms. Some lawmakers make it easy to argue it is a purely xenophobic reaction, like Senator Tom Cotton, who spent an exhausting minute questioning TikTok’s Singaporean CEO Shou Zi Chew about where he is really from. But I do not think it is entirely a protectionist racket.

A mistake I have made in the past — and which I have seen some continue to make — is assuming those who are in favour of legislating against TikTok are opposed to the kinds of dirty tricks it is accused of on principle. This is false. Many of these same people would be all too happy to allow U.S. tech companies to do exactly the same. I think the most generous version of this argument is one in which it is framed as a dispute between the U.S. and its democratic allies, and anxieties about the government of China — ByteDance is necessarily connected to the autocratic state — spreading messaging that does not align with democratic government interests. This is why you see few attempts to reconcile common objections over TikTok with the quite similar behaviours of U.S. corporations, government arms, and intelligence agencies. To wit: U.S.-based social networks also suggest posts with opaque math which could, by the same logic, influence elections in other countries. They also collect enormous amounts of personal data that is routinely wiretapped, and are required to secretly cooperate with intelligence agencies. The U.S. is not authoritarian as China is, but the behaviours in question are not unique to authoritarians. Those specific actions are unfortunately not what the U.S. government is objecting to. What it is disputing, in a most generous reading, is a specifically antidemocratic government gaining any kind of influence.

Espionage and Influence

It is easiest to start by dismissing the espionage concerns because they are mostly misguided. The peek into Americans’ lives offered by TikTok is no greater than that offered by countless ad networks and data brokers — something the U.S. is also trying to restrict more effectively through a comprehensive federal privacy law. So long as online advertising is dominated by a privacy-hostile infrastructure, adversaries will be able to take advantage of it. If the goal is to restrict opportunities for spying on people, it is idiotic to pass legislation against TikTok specifically instead of limiting the data industry.

But the charge of influence seems to have more to it, even though nobody has yet shown that TikTok is warping users’ minds in a (presumably) pro-China direction. Some U.S. lawmakers described its danger as “theoretical”; others seem positively terrified. There are a few different levels to this concern: are TikTok users uniquely subjected to Chinese government propaganda? Is TikTok moderated in a way that boosts or buries videos to align with Chinese government views? Finally, even if both of these things are true, should the U.S. be able to revoke access to software if it promotes ideologies or viewpoints — and perhaps explicit propaganda? As we will see, it looks like TikTok sometimes tilts in ways beneficial — or, at least, less damaging — to Chinese government interests, but there is no evidence of overt government manipulation and, even if there were, it is objectionable to require it to be owned by a different company or ban it.

The main culprit, it seems, is TikTok’s “uncannily good” For You feed that feels as though it “reads your mind”. Instead of users telling TikTok what they want to see, it just begins showing videos and, as people use the app, it figures out what they are interested in. How it does this is not actually that mysterious. A 2021 Wall Street Journal investigation found recommendations were made mostly based on how long you spent watching each video. Deliberate actions — like sharing and liking — play a role, sure, but if you scroll past videos of people and spend more time with a video of a dog, it learns you want dog videos.
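The Journal’s description can be made concrete with a small sketch. What follows is a hypothetical, minimal model of a watch-time-weighted feed; the topics, decay constant, and scoring are illustrative assumptions of mine, not TikTok’s actual system. It only shows how lingering on one video and skipping another is enough signal to reorder what comes next.

    # Hypothetical sketch: a watch-time-weighted interest model, loosely following the
    # Journal's description. Topics, decay constant, and scoring are illustrative only.
    from collections import defaultdict

    class WatchTimeFeed:
        def __init__(self, decay=0.95):
            self.interest = defaultdict(float)  # topic -> accumulated interest score
            self.decay = decay                  # older signals slowly fade

        def observe(self, topics, watched_seconds, video_seconds):
            # The dominant signal is how much of the video was actually watched.
            completion = min(watched_seconds / max(video_seconds, 1), 1.0)
            for topic in list(self.interest):
                self.interest[topic] *= self.decay
            for topic in topics:
                self.interest[topic] += completion

        def rank(self, candidates):
            # candidates: list of (video_id, topics); favour topics the user lingers on.
            return sorted(candidates,
                          key=lambda c: sum(self.interest[t] for t in c[1]),
                          reverse=True)

    feed = WatchTimeFeed()
    feed.observe(["people"], watched_seconds=2, video_seconds=30)   # scrolled past
    feed.observe(["dogs"], watched_seconds=28, video_seconds=30)    # watched nearly all
    print(feed.rank([("a", ["people"]), ("b", ["dogs"])]))          # the dog video ranks first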

That is not so controversial compared to the opacity in how TikTok decides what specific videos are displayed and which ones are not. Why is this particular dog video in a user’s feed and not another similar one? Why is it promoting videos reflecting a particular political viewpoint or — so a popular narrative goes — burying those with viewpoints uncomfortable for its Chinese parent company? The mysterious nature of an algorithmic feed is the kind of thing into which you can read a story of your choosing. A whole bunch of X users are permanently convinced they are being “shadow banned” whenever a particular tweet does not get as many likes and retweets as they believe it deserved, for example, and were salivating at the thought of the company releasing its ranking code to solve a nonexistent mystery. There is a whole industry of people who say they can get your website to Google’s first page for a wide range of queries using techniques that are a mix of plausible and utterly ridiculous. Opaque algorithms make people believe in magic. An alarmist reaction to TikTok’s feed should be expected particularly as it was the first popular app designed around entirely recommended material instead of personal or professional connections. This has now been widely copied.

The mystery of that feed is a discussion which seems to have been ongoing basically since the 2018 merger of Musical.ly and TikTok, escalating rapidly to calls for it to be separated from its Chinese owner or banned altogether. In 2020, the White House attempted to force a sale by executive order. In response, TikTok created a plan to spin off an independent entity, but nothing materialized from this tense period.

March 2023 brought a renewed effort to divest or ban the platform. Chew, TikTok’s CEO, was called to a U.S. Congressional hearing and questioned for hours, to little effect. During that hearing, a report prepared for the Australian government was cited by some of the lawmakers, and I think it is a telling document. It is about eighty pages long — excluding its table of contents, appendices, and citations — and shows several examples of Chinese government influence on other products made by ByteDance. However, the authors found no such manipulation on TikTok itself, leading them to conclude:

In our view, ByteDance has demonstrated sufficient capability, intent, and precedent in promoting Party propaganda on its Chinese platforms to generate material risk that they could do the same on TikTok.

“They could do the same”, emphasis mine. In other words, if they had found TikTok was boosting topics and videos on behalf of the Chinese government, they would have said so — so they did not. The closest thing I could find to a covert propaganda campaign on TikTok anywhere in this report is this:

The company [ByteDance] tried to do the same on TikTok, too: In June 2022, Bloomberg reported that a Chinese government entity responsible for public relations attempted to open a stealth account on TikTok targeting Western audiences with propaganda”. [sic]

If we follow the Bloomberg citation — shown in the report as a link to the mysterious Archive.today site — the fuller context of the article by Olivia Solon disproves the impression you might get from reading the report:

In an April 2020 message addressed to Elizabeth Kanter, TikTok’s head of government relations for the UK, Ireland, Netherlands and Israel, a colleague flagged a “Chinese government entity that’s interested in joining TikTok but would not want to be openly seen as a government account as the main purpose is for promoting content that showcase the best side of China (some sort of propaganda).”

The messages indicate that some of ByteDance’s most senior government relations team, including Kanter and US-based Erich Andersen, Global Head of Corporate Affairs and General Counsel, discussed the matter internally but pushed back on the request, which they described as “sensitive.” TikTok used the incident to spark an internal discussion about other sensitive requests, the messages state.

This is the opposite conclusion to how this story was set up in the report. Chinese government public relations wanted to set up a TikTok account without any visible state connection and, when TikTok management found out about this, it said no. This Bloomberg article makes TikTok look good in the face of government pressure, not like it capitulates. Yes, it is worth being skeptical of this reporting. Yet if TikTok acquiesced to the government’s demands, surely the report would provide some evidence.

While this report for the Australian Senate does not show direct platform manipulation, it does present plenty of examples where it seems like TikTok may be biased or self-censoring. Its authors cite stories from the Washington Post and Vice finding searches for hashtags like #HongKong and #FreeXinjiang returned results favourable to the official Chinese government position. Sometimes, related posts did not appear in search results, which is not unique to TikTok — platforms regularly use crude search term filtering to restrict discovery for lots of reasons. I would not be surprised if there were bias or self-censorship to blame for TikTok minimizing the visibility of posts critical of the subjugation of Uyghurs in China. However, it is basically routine for every social media product to be accused of suppression. The Markup found different types of posts on Instagram, for example, had captions altered or would no longer appear in search results, though it is unclear to anyone why that is the case. Meta said it was a bug, an explanation also offered frequently by TikTok.

The authors of the Australian report conducted a limited quasi-study comparing results for certain topics on TikTok to results on other social networks like Instagram and YouTube, again finding a handful of topics which favoured the government line. But there was no consistent pattern, either. Search results for “China military” on Instagram were, according to the authors, “generally flattering”, and X searches for “PLA” scarcely returned unfavourable posts. Yet results on TikTok for “China human rights”, “Tiananmen”, and “Uyghur” were overwhelmingly critical of Chinese official positions.

The Network Contagion Research Institute published its own report in December 2023, similarly finding disparities between the total number of posts with specific hashtags — like #DalaiLama and #TiananmenSquare — on TikTok and Instagram. However, the study contained some pretty fundamental errors, as pointed out by — and I cannot believe I am citing these losers — the Cato Institute. The study’s authors compared total lifetime posts on each social network and, while they say they expect 1.5–2.0× the posts on Instagram because of its larger user base, they do not factor in how many of those posts could have existed before TikTok was even launched. Furthermore, they assume similar cultures and a similar use of hashtags on each app. But even benign hashtags have ridiculous differences in how often they are used on each platform. There are, as of writing, 55.3 million posts tagged “#ThrowbackThursday” on Instagram compared to 390,000 on TikTok, a ratio of 141:1. If #ThrowbackThursday were part of this study, the disparity on the two platforms would rank similarly to #Tiananmen, one of the greatest in the Institute’s report.
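To make the arithmetic above concrete, here is the comparison the study relies on, using the #ThrowbackThursday counts just mentioned. The sketch only restates numbers already in this paragraph plus the study’s own expected multiplier; the point is that a raw lifetime-count ratio reflects platform age and hashtag culture at least as much as any suppression.

    # Lifetime post counts per hashtag, the comparison the NCRI report relies on.
    instagram_tbt = 55_300_000   # #ThrowbackThursday posts on Instagram, as of writing
    tiktok_tbt = 390_000         # #ThrowbackThursday posts on TikTok

    ratio = instagram_tbt / tiktok_tbt
    print(f"#ThrowbackThursday, Instagram to TikTok: about {instagram_tbt // tiktok_tbt}:1")

    # The study expects only a 1.5 to 2.0 times gap based on relative user bases, so by
    # its own logic a benign meme hashtag already exceeds that ceiling roughly 70-fold.
    expected_ceiling = 2.0
    print(f"Gap versus the study's own ceiling: {ratio / expected_ceiling:.0f}x")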

The problem with most of these complaints, as their authors acknowledge, is that there is a known input and a perceived output, but there are oh-so-many unknown variables in the middle. It is impossible to know how much of what we see is a product of intentional censorship, unintentional biases, bugs, side effects of other decisions, or a desire to cultivate a less stressful and more saccharine environment for users. A report by Exovera (PDF) prepared for the U.S.–China Economic and Security Review Commission indicates exactly the latter: “TikTok’s current content moderation strategy […] adheres to a strategy of ‘depoliticization’ (去政治化) and ‘localization’ (本土化) that seeks to downplay politically controversial speech and demobilize populist sentiment”, apparently avoiding “algorithmic optimization in order to promote content that evangelizes China’s culture as well as its economic and political systems” which “is liable to result in backlash”. Meta, on its own platforms, said it would not generally suggest “political” posts to users but did not define exactly what qualifies. It said it limits posts on social issues because of user demand, but these types of posts have also been difficult to moderate. A difference in which posts are found on each platform for specific search terms is not necessarily reflective of government pressure, deliberate or not. Besides, it is not as though there is no evidence for straightforward propaganda on TikTok. One just needs to look elsewhere to find it.

Propaganda

The Office of the Director of National Intelligence recently released its annual threat assessment summary (PDF). It is unclassified and has few details, so the only thing it notes about TikTok is “accounts run by a PRC propaganda arm reportedly targeted candidates from both political parties during the U.S. midterm election cycle in 2022”. It seems likely to me this is a reference to this article in Forbes, though this is a guess as there are no citations. The state-affiliated TikTok account in question — since made private — posted a bunch of news clips which portray the U.S. in an unflattering light. There is a related account, also marked as state-affiliated, which continues to post the same kinds of videos. It has over 33,000 followers, which sounds like a lot, but each post is typically getting only a few hundred views. Some have been viewed thousands of times, others as little as thirteen times as of writing — on a platform with exaggerated engagement numbers. Nonetheless, the conclusion is obvious: these accounts are government propaganda, and TikTok willingly hosts them.

But that is something it has in common with all social media platforms. The Russian RT News network and China’s People’s Daily newspaper have X and Facebook accounts with follower counts in the millions. Until recently, the North Korean newspaper Uriminzokkiri operated accounts on Instagram and X. It and other North Korean state-controlled media used to have YouTube channels, too, but they were shut down by YouTube in 2017 — a move that was protested by academics studying the regime’s activities. The irony of U.S.-based platforms helping to disseminate propaganda from the country’s adversaries is that it can be useful for understanding them better. Merely making propaganda available — even promoting it — is both a risk and a benefit of generous speech permissions.

The DNI’s unclassified report has no details about whether TikTok is an actual threat, and the FBI has “nothing to add” in response to questions about whether TikTok is currently doing anything untoward. More sensitive information was apparently provided to U.S. lawmakers ahead of their March vote and, though few details of what, exactly, was said have emerged, several were not persuaded by what they heard, including Rep. Sara Jacobs of California:

As a member of both the House Armed Services and House Foreign Affairs Committees, I am keenly aware of the threat that PRC information operations can pose, especially as they relate to our elections. However, after reviewing the intelligence, I do not believe that this bill is the answer to those threats. […] Instead, we need comprehensive data privacy legislation, alongside thoughtful guardrails for social media platforms – whether those platforms are funded by companies in the PRC, Russia, Saudi Arabia, or the United States.

Lawmakers like Rep. Jacobs were an exception among U.S. Congresspersons who, across party lines, were eager to make the case against TikTok. Ultimately, the divest-or-ban bill got wrapped up in a massive and politically popular spending package agreed to by both chambers of Congress. Its passage was enthusiastically received by the White House and it was signed into law within hours. Perhaps that outcome is the democratic one since polls so often find people in the U.S. support a sale or ban of TikTok.

I get it: TikTok scoops up private data, suggests posts based on opaque criteria, its moderation appears to be susceptible to biases, and it is a vehicle for propaganda. But you could replace “TikTok” in that sentence with any other mainstream social network and it would be just as true, albeit less scary to U.S. allies on its face.

A Principled Objection

Forcing TikTok to change its ownership structure, whether worldwide or only for a U.S. audience, is a betrayal of liberal democratic principles. To borrow from Jon Stewart, “if you don’t stick to your values when they’re being tested, they’re not values, they’re hobbies”. It is not surprising that a Canadian intelligence analysis specifically pointed out how those very same values are being taken advantage of by bad actors. This is not new. It is true of basically all positions hostile to democracy — from domestic nationalist groups in Canada and the U.S., to those which originate elsewhere.

Julian G. Ku, for China File, offered a seemingly reasonable rebuttal to this line of thinking:

This argument, while superficially appealing, is wrong. For well over one hundred years, U.S. law has blocked foreign (not just Chinese) control of certain crucial U.S. electronic media. The Protect Act [sic] fits comfortably within this long tradition.

Yet this counterargument falls apart both in its details and in its further consequences. As Martin Peers writes at the Information, the U.S. does not prohibit all foreign ownership of media. And governing the internet like public airwaves gets way more complicated if you stretch it any further. Canada has broadcasting laws, too, and it is not alone. Should every country begin requiring that social media platforms comply with laws designed for ownership of broadcast media? Does TikTok need disconnected local versions of its product in each country in which it operates? It either fundamentally upsets the promise of the internet, or it is mandating the use of protocols instead of platforms.

It also looks hypocritical. Countries with a more authoritarian bent and which openly censor the web have responded to even modest U.S. speech rules with mockery. When RT America — technically a U.S. company with Russian funding — was required to register as a foreign agent, its editor-in-chief sarcastically applauded U.S. free speech standards. The response from Chinese government officials and media outlets to the proposed TikTok ban has been similarly scoffing. Perhaps U.S. lawmakers are unconcerned about the reception of their policies by adversarial states, but it is an indicator of how these policies are being portrayed in these countries — a real-life “we are not so different, you and I” setup — that, while falsely equivalent, makes it easy for authoritarian states to claim that democracies have no values and cannot work. Unless we want to contribute to the fracturing of the internet — please, no — we cannot govern social media platforms by mirroring policies we ostensibly find repellant.

The way the government of China seeks to shape the global narrative is understandably concerning given its poor track record on speech freedoms. An October 2023 U.S. State Department “special report” (PDF) explored several instances where it boosted favourable narratives, buried critical ones, and pressured other countries — sometimes overtly, sometimes quietly. The government of China and associated businesses reportedly use social media to create the impression of dissent toward human rights NGOs, and apparently everything from university funding to new construction is a vector for espionage. On the other hand, China is terribly ineffective in its disinformation campaigns, and many of the cases profiled in that State Department report end in failure for the Chinese government initiative. In Nigeria, a pitch for a technologically oppressive “safe city” was rejected; an interview published in the Jerusalem Post with Taiwan’s foreign minister was not pulled down despite threats from China’s embassy in Israel. The report’s authors speculate about “opportunities for PRC global censorship”. But their only evidence is a “list [maintained by ByteDance] identifying people who were likely blocked or restricted” from using the company’s many platforms, though the authors can only speculate about its purpose.

The problem is that trying to address this requires better media literacy and better recognition of propaganda. That is a notoriously daunting problem. We are exposed to an ever more destabilizing cocktail of facts and fiction, but there is declining trust in experts and institutions to help us sort it out. Trying to address TikTok as a symptomatic or even causal component of this is frustratingly myopic. This stuff is everywhere.

Also everywhere is corporate propaganda arguing regulations would impede competition in a global business race. I hate to be mean by picking on anyone in particular, but a post from Om Malik has shades of this corporate slant. Malik is generally very good on the issues I care about, but this is not one we appear to agree on. After a seemingly impressed observation of how quickly Chinese officials were able to eject popular messaging apps from the App Store in the country, Malik compares the posture of each country’s tech industries:

As an aside, while China considers all its tech companies (like Bytedance) as part of its national strategic infrastructure, the United States (and its allies) think of Apple and other technology companies as public enemies.

This is laughable. Presumably, Malik is referring to the chillier reception these companies have faced from lawmakers, and antitrust cases against Amazon, Apple, Google, and Meta. But that tougher impression is softened by the U.S. government’s actual behaviour. When the E.U. announced the Digital Markets Act and Digital Services Act, U.S. officials sprang to the defence of tech companies. Even before these cases, Uber expanded in Europe thanks in part to its close relationship with Obama administration officials, as Marx pointed out. The U.S. unquestionably sees its tech industry dominance as a projection of its power around the world, hardly treating those companies as “public enemies”.

Far more explicit were the narratives peddled by lobbyists from Targeted Victory in 2022 about TikTok’s dangers, and American Edge beginning in 2020 about how regulations will cause the U.S. to become uncompetitive with China and allow TikTok to win. Both organizations were paid by Meta to spread those messages; the latter was reportedly founded after a single large contribution from Meta. Restrictions on TikTok would obviously be beneficial to Meta’s business.

If you wanted to boost the industry — and I am not saying Malik is — that is how you would describe the situation: the U.S. is fighting corporations instead of treating them as pals to win this supposed race. It is not the kind of framing one would use if they wanted to dissuade people from the notion this is a protectionist dispute over the popularity of TikTok. But it is the kind of thing you hear from corporations via their public relations staff and lobbyists, and which trickles into public conversation.

This Is Not a TikTok Problem

TikTok’s divestment would not be unprecedented. The Committee on Foreign Investment in the United States — henceforth, CFIUS, pronounced “siff-ee-us” — demanded, after a 2019 review, that Beijing Kunlun Tech Co Ltd sell Grindr. CFIUS concluded the risk to users’ private data was too great for Chinese ownership given Grindr’s often stigmatized and ostracized user base. After its sale — with Grindr now safe in U.S. hands — a priest was outed thanks to data Grindr had been selling since before it was acquired by the Chinese firm, and the company is being sued for allegedly sharing users’ HIV status with third parties. Also, because it transacts with data brokers, it potentially still leaks users’ private information to Chinese companies (PDF), apparently violating the fundamental concern that triggered this divestment.

Perhaps there is comfort in Grindr’s owner residing in a country where same-sex marriage is legal rather than in one where it is not. I think that makes a lot of sense, actually. But there remain plenty of problems unaddressed by its sale to a U.S. entity.

Similarly, this U.S. TikTok law does not actually solve potential espionage or influence for a few reasons. The first is that it has not been established that either is an actual problem with TikTok. Surely, if this were something we ought to be concerned about, there would be a pattern of evidence, instead of what we actually have, which is a fear that something bad could happen and that there would be no way to stop it. But many things could happen. I am not opposed to prophylactic laws so long as they address reasonable objections. Yet it is hard not to see this law as an outgrowth of Cold War fears over leaflets of communist rhetoric. It seems completely reasonable to be less concerned about TikTok specifically while harbouring worries about democratic backsliding worldwide and the growing power of authoritarian states like China in international relations.

Second, the Chinese government does not need local ownership if it wants to exert pressure. The world wants the country’s labour and it wants its spending power, so businesses comply without a fight, and often preemptively. Hollywood films are routinely changed before, during, and after production to fit the expectations of state censors in China, a pattern which has been pointed out using the same “Red Dawn” anecdote in story after story after story. (Abram Dylan contrasted this phenomenon with U.S. military cooperation.) Apple is only too happy to acquiesce to the government’s many demands — see the messaging apps issue mentioned earlier — including, reportedly, in its media programming. Microsoft continues to operate Bing in China, and its censorship requirements have occasionally spilled elsewhere. Economic leverage over TikTok may seem different because it does not need access to the Chinese market — TikTok is banned in the country — but perhaps a new owner would be reliant upon China.

Third, the law permits an ownership stake no greater than twenty percent from a combination of any of the “covered nations”. I would be shocked if everyone who is alarmed by TikTok today would be totally cool if its parent company were only, say, nineteen percent owned by a Chinese firm.

If we are worried about bias in algorithmically sorted feeds, there should be transparency around how things are sorted, and more controls for users, including wholly opting out. If we are worried about privacy, there should be laws governing the collection, storage, use, and sharing of personal information. If ownership ties to certain countries are concerning, there are more direct actions available to monitor behaviour. I am mystified why CFIUS and TikTok apparently abandoned (PDF) a draft agreement that would give U.S.-based entities full access to the company’s systems, software, and staff, and would allow the government to end U.S. access to TikTok at a moment’s notice.

Any of these options would be more productive than this legislation. It is a law which empowers the U.S. president — whoever that may be — to declare the owner of an app with a million users a “covered company” if it is from one of those four nations. And it has been passed. TikTok will head to court to dispute it on free speech grounds, and the U.S. may respond by justifying its national security concerns.

Obviously, the U.S. government has concerns about the connections between TikTok, ByteDance, and the government of China, which have been extensively reported. Rest of World says ByteDance put pressure on TikTok to improve its financial performance and has taken greater control by bringing in management from Douyin. The Wall Street Journal says U.S. user data is not fully separated. And, of course, Emily Baker-White has reported — first for Buzzfeed News and now for Forbes — a litany of stories about TikTok’s many troubling behaviours, including spying on her. TikTok is a well-scrutinized app and reporters have found conduct that has understandably raised suspicions. But virtually all of these stories focus on data obtained from users, which Chinese agencies could do — and probably are doing — without relying on TikTok. None of them have shown evidence that TikTok’s suggestions are being manipulated at the behest or demand of Chinese officials. The closest they get is an article from Baker-White and Iain Martin which alleges TikTok “served up a flood of ads from Chinese state propaganda outlets”, yet waits until the third-to-last paragraph to acknowledge “Meta and Google ad libraries show that both platforms continue to promote pro-China narratives through advertising”. All three platforms label state-run media outlets, albeit inconsistently. Meanwhile, U.S.-owned X no longer labels any outlets with state editorial control. It is not clear to me that TikTok would necessarily operate to serve the best interests of the U.S. even if it were owned by some well-financed individual or corporation based there.

For whatever it is worth, I am not particularly tied to the idea that the government of China would not use TikTok as a vehicle for influence. The government of China is clearly involved in propaganda efforts both overt and covert. I do not know how much of my concern is a product of living somewhere with a government and a media environment that focuses intently on the country as particularly hostile, and not necessarily undeservedly. The best version of this argument is one which questions the platform’s possible anti-democratic influence. Yes, there are many versions of this which cross into moral panic territory — a new Red Scare. I have tried to put this in terms of a more reasonable discussion, and one which is not explicitly xenophobic or envious. But even this more even-handed position is not well served by the law passed in the U.S., one adopted without evidence of influence much more substantial than some choice hashtag searches. TikTok’s response to these findings was, among other things, to limit its hashtag comparison tool, which is not a good look. (Meta is doing basically the same by shutting down CrowdTangle.)

I hope this is not the beginning of similar isolationist policies among democracies worldwide, and that my own government takes this opportunity to recognize the actual privacy and security threats at the heart of its own TikTok investigation. Unfortunately, the head of CSIS is really leaning on the espionage angle. For years, the Canadian government has been pitching sorely needed updates to privacy legislation, and it would be better to see real progress made to protect our private data. We can do better than being a perpetual recipient of decisions made by other governments. I mean, we cannot do much — we do not have the power of the U.S. or China or the E.U. — but we can do a little bit in our own polite Canadian way. If we are worried about the influence of these platforms, a good first step would be to strengthen the rights of users. We can do that without trying to govern apps individually, or treating the internet like we do broadcasting.

To put it more bluntly, the way we deal with a possible TikTok problem is by recognizing it is not a TikTok problem. If we care about espionage or foreign influence in elections, we should address those concerns directly instead of focusing on a single app or company that — at worst — may be a medium for those anxieties. These are important problems, and it would be inexcusable to let them get lost in the distraction of whether TikTok is individually blameworthy.


  1. Because this piece has taken me so long to write, a whole bunch of great analyses have been published about this law. I thought the discussion on “Decoder” was a good overview, especially since two of the three panelists are former lawyers. ↥︎

Kelsey Piper, Vox:

Questions arose immediately [over the resignations of key OpenAI staff]: Were they forced out? Is this delayed fallout of Altman’s brief firing last fall? Are they resigning in protest of some secret and dangerous new OpenAI project? Speculation filled the void because no one who had once worked at OpenAI was talking.

It turns out there’s a very clear reason for that. I have seen the extremely restrictive off-boarding agreement that contains nondisclosure and non-disparagement provisions former OpenAI employees are subject to. It forbids them, for the rest of their lives, from criticizing their former employer. Even acknowledging that the NDA exists is a violation of it.

Sam Altman, [sic]:

we have never clawed back anyone’s vested equity, nor will we do that if people do not sign a separation agreement (or don’t agree to a non-disparagement agreement). vested equity is vested equity, full stop.

there was a provision about potential equity cancellation in our previous exit docs; although we never clawed anything back, it should never have been something we had in any documents or communication. this is on me and one of the few times i’ve been genuinely embarrassed running openai; i did not know this was happening and i should have.

Piper, again, in a Vox follow-up story:

In two cases Vox reviewed, the lengthy, complex termination documents OpenAI sent out expired after seven days. That meant the former employees had a week to decide whether to accept OpenAI’s muzzle or risk forfeiting what could be millions of dollars — a tight timeline for a decision of that magnitude, and one that left little time to find outside counsel.

[…]

Most ex-employees folded under the pressure. For those who persisted, the company pulled out another tool in what one former employee called the “legal retaliation toolbox” he encountered on leaving the company. When he declined to sign the first termination agreement sent to him and sought legal counsel, the company changed tactics. Rather than saying they could cancel his equity if he refused to sign the agreement, they said he could be prevented from selling his equity.

For its part, OpenAI says in a statement quoted by Piper that it is updating its documentation and releasing former employees from the more egregious obligations of their termination agreements.

This next part is totally inside baseball and, unless you care about big media company CMS migrations, it is probably uninteresting. Anyway. I noticed, in reading Piper’s second story, an updated design which launched yesterday. Left unmentioned in that announcement is that it is, as far as I can tell, the first of Vox’s Chorus-powered sites migrated to WordPress. The CMS resides on the platform subdomain, which is not important in itself. But it did indicate to me that the Verge may be next — platform.theverge.com resolves to a WordPress login page — and, based on its DNS records, Polygon could follow shortly thereafter.
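For anyone who wants to perform the same check, this is all I mean by looking at DNS records. It is a rough sketch using Python’s standard library against the subdomain named above, and it only shows the canonical name and addresses the host resolves to rather than the full record set.

    # Rough check of where the new CMS subdomain points. The canonical name returned
    # here is the same hint a CNAME lookup gives; a full record inspection would need
    # a dedicated DNS library.
    import socket

    host = "platform.theverge.com"
    try:
        canonical, aliases, addresses = socket.gethostbyname_ex(host)
        print(host, "->", canonical, addresses)
    except socket.gaierror as error:
        print(host, "-> lookup failed:", error)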

Yusuf Mehdi of Microsoft:

Now with Recall, you can access virtually what you have seen or done on your PC in a way that feels like having photographic memory. Copilot+ PCs organize information like we do – based on relationships and associations unique to each of our individual experiences. This helps you remember things you may have forgotten so you can find what you’re looking for quickly and intuitively by simply using the cues you remember.

[…]

Recall leverages your personal semantic index, built and stored entirely on your device. Your snapshots are yours; they stay locally on your PC. You can delete individual snapshots, adjust and delete ranges of time in Settings, or pause at any point right from the icon in the System Tray on your Taskbar. You can also filter apps and websites from ever being saved. You are always in control with privacy you can trust.

Recall is the kind of feature I have always wanted, but am not sure I would ever enable. Setting aside Microsoft’s recent high-profile security problems, it seems like there is a new risk in keeping track of everything you see on your computer — bank accounts, a list of passwords, messages, work documents and other things sent by third parties who expect them to remain confidential, credit card information — for a rolling three-month window.

Microsoft says all the right things about this database. It says it is all stored locally, never shared with Microsoft, access controlled, and user configurable. And besides, screen recorders have existed forever, and keeping local copies of sensitive information has always been a balance of risk.

But this is a feature that creates a rolling record of just about everything. It somehow feels more intrusive than a web browser’s history and riskier than a password manager. The Recall directory will be a new favourite target for malware. Oh and, in addition to Microsoft’s own security issues, we have just seen a massive breach of LastPass. Steal now, solve later.

This is a brilliant, deeply integrated service. It is the kind of thing I often need as I try to remember some article I read and cannot quite find with a standard search engine. Yet even though I already have my credit cards and email and passwords stored on my computer, something about a screenshot timeline is a difficult mental hurdle to clear — not entirely rationally, but not irrationally either.

Joanna Stern, Wall Street Journal:

[Apple vice president of iPad and Mac product marketing Tom Boger] remained firm: iPads are for touch, Macs are not. “MacOS is for a very different paradigm of computing,” he said. He explained that many customers have both types of devices and think of the iPad as a way to “extend” work from a Mac. Apple’s Continuity easily allows you to work across devices, he said.

So there you have it, Apple wants you to buy…both? If you pick one, you live with the trade-offs. I did ask Boger if Apple would ever change its mind on the touch-screen situation.

“Oh, I can’t say we never change our mind,” he said. One can only hope.

Matt Birchler, commenting on a somewhat disingenuous article from Ben Lovejoy of 9to5Mac:

This is fair, and if you were forced to use a touch screen Mac on a vertical screen with no keyboard or mouse to help, then sure, I believe that would be a tiring experience as well. What I find frustrating about this idea is that it lacks imagination. I get the impression that people who hate the idea of touch on Macs can only imagine the current laptops with a digitizer in the screen detecting touch. It’s kind of ironic, but this is exactly the sort of thinking that Apple so rarely does. As we often say, Apple doesn’t add technology for the sake of technology, they add features users will enjoy.

Apple has never pretended the iPad is a tablet Mac. As I wrote several years ago, it has been rebuilding desktop features for a touch-first environment: multitasking, multiwindowing, support for external pointing devices, a file browser, a Dock, and so on. This is an impressive array of features which reference and reinterpret longtime Mac features while respecting the iPad’s character.

But something is missing for some number of people. Developers and users complain annually about the frustrations they experience with iPadOS. A video from Quinn Nelson illustrates how tricky the platform is. One of the great fears of iPad users is that increasing its capability will necessarily entail increasing its complexity. But the iPad is already complicated in ways that it should not be. There is nothing about the way multiwindowing works which requires it to be rule-based and complicated in the way Stage Manager often is.

Perhaps a solution is to treat the iPad as only modestly evolved from its uniwindow roots with hardware differentiated mostly by niceness. I disagree; Apple does too. The company clearly wants it to be so much more. It made a capable version of Final Cut Pro for iPad models which use the same processor as its Macs, but it makes you watch the progress bar as it exports a video because it cannot complete the task in the background.

iPadOS may have been built up from its touchscreen roots but, let us not forget, it is also built up from smartphone roots — and the goals and objectives of smartphone and tablet users can be very different.

What if it really did make more sense for an iPad to run MacOS, even if that is only some models and only some of the time? What if the best version of the Mac is one which is convertible to a tablet that you can draw on? What if the most capable version of an iPad is one which can behave like a Mac when you need it? None of this would be simple or easy. But I have to wonder: has what Apple has been adding for fourteen years produced a system which remains as simple and easy to use as it promises for its most dedicated iPad customers?

Bobby Allyn, NPR:

Lawyers for Scarlett Johansson are demanding that OpenAI disclose how it developed an AI personal assistant voice that the actress says sounds uncannily similar to her own.

[…]

Johansson said that nine months ago [Sam] Altman approached her proposing that she allow her voice to be licensed for the new ChatGPT voice assistant. He thought it would be “comforting to people” who are uneasy with AI technology.

“After much consideration and for personal reasons, I declined the offer,” Johansson wrote.

In a defensive blog post, OpenAI said it believes “AI voices should not deliberately mimic a celebrity’s distinctive voice” and that any resemblance between Johansson and the “Sky” voice demoed earlier this month is basically a coincidence, a claim only slightly undercut by a single-word tweet posted by Altman.

OpenAI’s voice mimicry — if you want to be generous — and that iPad ad add up to a banner month for technology companies’ relationship to the arts.1 Are there people in power at these companies who can see how behaviours like these look? We are less than a year out from both the most recent Hollywood writers’ and actors’ strikes, both of which reflected in part A.I. anxieties.

Update: According to the Washington Post, the sound-alike voice really does just sound alike.


  1. A more minor but arguably funnier faux pas occurred when Apple confirmed to the Wall Street Journal the authenticity of the statement it gave to Ad Age — both likely paywalled — but refused to send it to the Journal. ↥︎

Apple issued an update today which, it says, ought to patch a bug which resurfaced old and deleted photos:

This update provides important bug fixes and addresses a rare issue where photos that experienced database corruption could reappear in the Photos library even if they were deleted.

I suppose even a “rare” bug would, at Apple’s scale, impact lots of people. I heard from multiple readers who said they, too, saw presumed deleted photos reappear.

The thing about these bare release notes — which are not yet on Apple’s support site — is how they do not really answer reasonable questions about what happened. It is implied that the photos in question may have been marked for deletion and were visibly hidden from users, but were not actually removed under an old iOS version. Updating to iOS 17.5 revealed these dormant photos.

Bugs happen and they suck, but a bug like this really sucks — especially since so many of us sync so much of our data between our devices. This makes me question the quality of the Photos app, iCloud, and the file system overall.

Also, the anecdote of photos being restored to the same device after it had been wiped has been deleted from Reddit. I have not seen the same claim anywhere else, which makes me think this was some sort of user error.

Corey Quinn:

I’m sorry Slack, you’re doing fucking WHAT with user DMs, messages, files, etc? I’m positive I’m not reading this correctly.

[Screenshot of the opt out portion of Slack’s “privacy principles”: Contact us to opt out. If you want to exclude your Customer Data from Slack global models, you can opt out. […] ]

Slack replied:

Hello from Slack! To clarify, Slack has platform-level machine-learning models for things like channel and emoji recommendations and search results. And yes, customers can exclude their data from helping train those (non-generative) ML models. Customer data belongs to the customer. We do not build or train these models in such a way that they could learn, memorize, or be able to reproduce some part of customer data. […]

One thing I like about this statement is how the fifth word is “clarify” and then it becomes confusing. Based on my reading of its “privacy principles”, I think Slack’s “global model” is so named because it is available to everyone and is a generalist machine learning model for small in-workspace suggestions, while its LLM is called “Slack AI” and it is a paid add-on. But I could be wrong, and that is confusing as hell.

Ivan Mehta and Ingrid Lunden, TechCrunch:

In its terms, Slack says that if customers opt out of data training, they would still benefit from the company’s “globally trained AI/ML models.” But again, in that case, it’s not clear then why the company is using customer data in the first place to power features like emoji recommendations.

The company also said it doesn’t use customer data to train Slack AI.

If you want to opt out, you cannot do so in a normal way, like through a checkbox. The workspace owner needs to send an email to a generic inbox with a specific subject line. Let me make it a little easier for you:

To: feedback@slack.com

Subject: Slack Global model opt-out request.

Body: Hey, your privacy principles are pretty confusing and feel sneaky. I am opting this workspace out of training your global model: [paste your workspace.slack.com address here]. This underhanded behaviour erodes my trust in your product. Have a pleasant day.

That ought to do the trick.

Over the past week, several threads have been posted on Reddit by users claiming photos they deleted years ago are reappearing in their libraries, and even in the libraries of devices that were wiped and sold.

Chance Miller, 9to5Mac:

There are a number of reports of similar situations in the thread on Reddit. Some users are seeing deleted images from years ago reappear in their libraries, while others are seeing images from earlier this year.

By default, the Photos app has a “Recently Deleted” feature that preserves deleted images for 30 days. That’s not what’s happening here, seeing as most of the images in question are months or years old, not days.

A few people in the comments say they are also seeing this issue.

Juli Clover, MacRumors:

A bug in iOS 17.5 is apparently causing photos that have been deleted to reappear, and the issue seems to impact even iPhones and iPads that have been erased and sold off to other people.

[…]

The impacted iPad was a fourth-generation 12.9-inch iPad Pro that had been updated to the latest operating system update, and before it was sold, it was erased per Apple’s instructions. The Reddit user says they did not log back in to the iPad at any point after erasing it, so it is entirely unclear how their old photos ended up reappearing on the device.

I have not run into this bug myself, and these are just random people on the internet. If this were a single, isolated incident, I would assume user error. But there are more than a handful of reports, and it seems unlikely this many people are lying or mistaken. It really seems like there is a problem here, and it is breaching my trust in the security and privacy of my data held by Apple. I can make some assumptions about why this is happening, but none of the technical reasons matter to any user who deleted a photo and — quite reasonably — has every expectation it would be fully erased.

Perhaps Apple will eventually send a statement to a favoured outlet like 9to5Mac or TechCrunch. It has so far said nothing about all the users forced to reset their Apple ID password last month. I bet something happened leading up to changes which will be announced at WWDC, but I do not care. It is not good enough for Apple to let major problems like these go unacknowledged.

Update: The more I have thought about this, the more I am not yet convinced by the sole story of photos appearing on a wiped iPad. Something is not adding up there. The other stories have a more consistent and plausible pattern, and are certainly bad enough.

Do you want to block all YouTube ads in Safari on your iPhone, iPad, and Mac?

Then download Magic Lasso Adblock – the ad blocker designed for you.

It’s easy to set up, doubles the speed at which Safari loads, and blocks all YouTube ads.

Screenshot of Magic Lasso Adblock

Magic Lasso is an efficient, high-performance, native Safari ad blocker. With over 5,000 five-star reviews, it’s simply the best ad blocker for your iPhone, iPad, and Mac.

It blocks all intrusive ads, trackers, and annoyances – giving you a faster, cleaner, and more secure web browsing experience.

The app also blocks over 10 types of YouTube ads, including all:

  • video ads,

  • pop-up banner ads,

  • search ads,

  • plus many more.

And unlike some other ad blockers, Magic Lasso Adblock respects your privacy, doesn’t accept payment from advertisers, and is 100% supported by its community of users.

So, join over 300,000 users and download Magic Lasso Adblock today.

My thanks to Magic Lasso Adblock for sponsoring Pixel Envy this week.

In a video on Threads, Quinn Nelson shows how the Apple Pencil casts a tool-specific faux shadow on the surface of the page. I love this sort of detail: one that, once you notice it, brings a little joy to whatever you are doing, whether that is creating art or just taking notes.

Earlier this week, I read an almost entirely unrelated article by Reece Martin about the difference between transit systems that feel joyful and ones which feel merely utilitarian. The two ideas feel similar to me: many of the things which create levity in otherwise rote tasks live in the details. That is one reason I think so much about the paper cuts I get from using computers, which I do most of the time from when I wake up to when I go to bed: if these problems were fixed, there would be more room to enjoy the better parts.

Albert Burneko, Defector:

“If the ChatGPT demos were accurate,” [Kevin] Roose writes, about latency, in the article in which he credits OpenAI with having developed playful intelligence and emotional intuition in a chatbot—in which he suggests ChatGPT represents the realization of a friggin’ science fiction movie about an artificial intelligence who genuinely falls in love with a guy and then leaves him for other artificial intelligences—based entirely on those demos. That “if” represents the sum total of caution, skepticism, and critical thinking in the entire article.

As impressive as OpenAI’s demo was, it is important to remember it was a commercial. True, one which would not exist if this technology were not sufficiently capable of being shown off, but it was still a marketing effort, and a journalist like Roose ought to treat it with the skepticism of one. ChatGPT is just software, no matter how thick a coat of faux humanity is painted on top of it.

Paul Ford, Wired:

What I love, more than anything, is the quality that makes AI such a disaster: If it sees a space, it will fill it — with nonsense, with imagined fact, with links to fake websites. It possesses an absolute willingness to spout foolishness, balanced only by its carefree attitude toward plagiarism. AI is, very simply, a totally shameless technology.

Ford sure can write. This is tremendous.

Molly White:

I, like many others who have experimented with or adopted these products, have found that these tools actually can be pretty useful for some tasks. Though AI companies are prone to making overblown promises that the tools will shortly be able to replace your content writing team or generate feature-length films or develop a video game from scratch, the reality is far more mundane: they are handy in the same way that it might occasionally be useful to delegate some tasks to an inexperienced and sometimes sloppy intern.

Mike Masnick, Techdirt:

However, I have been using some AI tools over the last few months and have found them to be quite useful, namely, in helping me write better. I think the best use of AI is in making people better at their jobs. So I thought I would describe one way in which I’ve been using AI. And, no, it’s not to write articles.

It’s basically to help me brainstorm, critique my articles, and make suggestions on how to improve them.

Julia Angwin, in a New York Times opinion piece:

I don’t think we’re in cryptocurrency territory, where the hype turned out to be a cover story for a number of illegal schemes that landed a few big names in prison. But it’s also pretty clear that we’re a long way from Mr. Altman’s promise that A.I. will become “the most powerful technology humanity has yet invented.”

The marketing of A.I. reminds me less of the cryptocurrency and Web3 boom, and more of 5G. Carriers and phone makers promised world-changing capabilities thanks to wireless speeds faster than a lot of residential broadband connections. Nothing like that has yet materialized.

Since reading those articles from White and Masnick, I have also experimented with LLM critiques of my own writing. In one case, the critique raised an issue that sharpened my argument. In another, it suggested changes that made me sound like I spend a lot of time on LinkedIn — gross! I have trouble writing good headlines, and the ones it suggests are consistently garbage in the “Short Pun: Long Explanation” format, even when I explicitly ask it not to do that. I have no idea what ChatGPT is doing when it interprets an article, and I am not sure I like that mystery, but I am also amazed it can do anything at all, and pretty well at that.

There are costs and enormous risks to the A.I. boom — unearned hype being one of them — but there is also a there there. I am enormously skeptical of every announcement in this field. I am also enormously impressed with what I can do today. It worries and surprises me in similar measure. What an interesting time this is.

Update: On Bluesky, “Nafnlaus” pushes back on the specific claim made by Angwin that OpenAI exaggerated ChatGPT’s ability to pass a bar exam.

Liz Reid, head of Google Search:

People have already used AI Overviews billions of times through our experiment in Search Labs. They like that they can get both a quick overview of a topic and links to learn more. We’ve found that with AI Overviews, people use Search more, and are more satisfied with their results.

So today, AI Overviews will begin rolling out to everyone in the U.S., with more countries coming soon. That means that this week, hundreds of millions of users will have access to AI Overviews, and we expect to bring them to over a billion people by the end of the year.

Given the sliding quality of Google’s results, it seems quite bold for the company to be confident users worldwide will trust its generated answers. I am curious to try it when it is eventually released in Canada.

I know what you must be thinking: if Google is going to generate results without users clicking around much, how will it sell ad space? It is a fair question, reader.

Gerrit De Vynck and Cat Zakrzewski, Washington Post:

Google has largely avoided AI answers for the moneymaking searches that host ads, said Andy Taylor, vice president of research at internet marketing firm Tinuiti.

When it does show an AI answer on “commercial” searches, it shows up below the row of advertisements. That could force websites to buy ads just to maintain their position at the top of search results.

This is just one source speaking to the Post. I could not find any corroborating evidence or a study to support this, even on Tinuiti’s website. But I did notice — halfway through Google’s promo video — that a query for “kid friendly places to eat in dallas” was answered with an ad for Hopdoddy Burger Bar before any clever A.I. stuff was shown.

Obviously, the biggest worry for many websites dependent on Google traffic is what will happen to referrals if Google will simply summarize the results of pages instead of linking to them. I have mixed feelings about this. There are many websites which game search results and overwhelm queries with their own summaries. I would like to say “good riddance”, but I also know these pages did not come out of nowhere. They are a product of trying to improve website rankings on Google for all searches, and to increase ad and affiliate revenue from people who have clicked through. Neither one is a laudable goal in its own right. Yet anyone who has paid attention to the media industry for more than a minute can kind of understand these desperate attempts to grab attention and money.

Google built entire industries, from recipe bloggers to search optimization experts. What happens when it blows it all up?

Good thing home pages are back.

Samuel Axon, Ars Technica:

The new iPad Pro is a technical marvel, with one of the best screens I’ve ever seen, performance that few other machines can touch, and a new, thinner design that no one expected.

It’s a prime example of Apple flexing its engineering and design muscles for all to see. Since it marks the company’s first foray into OLED beyond the iPhone and the first time a new M-series chip has debuted on something other than a Mac, it comes across as a tech demo for where the company is headed beyond just tablets.

These are the opening paragraphs of the review, and they read as damning as the rest of the article. Apple does not build a “tech demo”; it makes products. This iteration is, according to Axon, way faster and way nicer than the iPad Pro models it replaces. Yet all of that impressive hardware ought to be in service of a greater purpose. Other reviewers wrote basically the same thing.

Federico Viticci, MacStories:

I’m tired of hearing apologies that smell of Stockholm syndrome from iPad users who want to invalidate these opinions and claim that everything is perfect. I’m tired of seeing this cycle start over every two years, with fantastic iPad hardware and the usual (justified) “But it’s the software…” line at the end. I’m tired of feeling like my computer is a second-class citizen in Apple’s ecosystem. I’m tired of being told that iPads are perfectly fine if you use Final Cut and Logic, but if you don’t use those apps and ask for more desktop-class features, you’re a weirdo, and you should just get a Mac and shut up. And I’m tired of seeing the best computer Apple ever made not live up to its potential.

Viticci was not granted access to a review unit in time, but it hardly matters for reviewing the state of the operating system. Jason Snell did review the new iPad Pro and spoke with Viticci about it on “Upgrade”.

The way I see it is simple: Apple does not appear to treat the iPad seriously. It has not been a priority for the company. Five years ago, it forked the operating system to create iPadOS, which seemed like it would be a meaningful change. And you can certainly point to plenty of things the iPad has gained which are distinct from its iPhone sibling. But we are fourteen years into this platform, and there are still so many obvious gaping holes. Viticci mentions a bunch of really good ones, but I will add another: I cannot believe Photos cannot even display Smart Albums.

Every time I pick up my iPad, I need to charge it from a fully dead battery. Once I do, though, I remember how much I like using the thing. And then I run into some bizarre limitation — or, more often, a series of them — that makes me put it down and pick up my Mac. Like Viticci, I find that frustrating. I want to use my iPad.

The correct move here is for Apple to continue building out iPadOS like it cares about its software as much as it does its hardware. I have no incentive to buy a new one until Apple decides it wants to take iPad users seriously.

Zoe Kleinman, BBC:

It [GPT-4o] is faster than earlier models and has been programmed to sound chatty and sometimes even flirtatious in its responses to prompts.

The new version can read and discuss images, translate languages, and identify emotions from visual expressions. There is also memory so it can recall previous prompts.

It can be interrupted and it has an easier conversational rhythm – there was no delay between asking it a question and receiving an answer.

I wrote earlier about how impressed I was with OpenAI’s live demos today. They made the company look confident in its product, and they made me believe nothing fishy was going on. I hope I am not eating those words later.1

But the character of this new ChatGPT voice unsettled me a little. It adjusts its tone depending on how a user speaks to it, and it seems possible to tell it to take on different characters. Yet it, like virtual assistants before it, still presents a femme persona by default. Even though I know it is just a robot, I felt uncomfortable watching demos in which it giggled, “got too excited”, and said it was going to “blush”. I can see circumstances where this will make conversations more human — in translation, or for people with disabilities. But I can also see how it could be dehumanizing toward people who are already objectified in reality.


  1. Maybe I will a little bit, though. The ostensible “questions from the audience” bit at the end relied on prompts from two Twitter users. The first tweet I could not find; the second was from a user who joined Twitter this month, and two of their three total tweets are directed at OpenAI despite not following the company. ↥︎

Matt Sephton:

At this point, I couldn’t quite believe what I was seeing because I was under the impression that the first emoji were created by an anonymous designer at SoftBank in 1997, and the most famous emoji were created by Shigetaka Kurita at NTT DoCoMo in 1999. But the Sharp PI-4000 in my hands was released in 1994, and it was chock full of recognisable emoji. Then down the rabbit hole I fell. 🕳️🐇

This article may start with this discovery from 1994, but it absolutely does not end there. What a fascinating piece of well-documented and deeply researched history.