Month: April 2023

Kyle Barr, Gizmodo:

Microsoft’s Windows 11 Start Menu is becoming more and more like a dancing inflatable tube man gesticulating wildly outside a used car lot. The last Windows 11 update added advertisements for Microsoft’s OneDrive cloud backups for some users when they click on the little Windows icon on the desktop. Now the Redmond, Washington company wants to bombard you with more ads for its other “free” services every time you go to sign out.

Last year, Microsoft began testing upsell ads in File Explorer. Subscription services have created a conflict of interest for platform builders as they choose to relentlessly promote their revenue opportunities in parts of the system previously treated as users’ space.

Amy Hawkins and Helen Davidson, the Guardian:

In theory, many of the accusations that have been levelled against TikTok – such as that it is bad for children’s mental health or engages in censorship of political topics – should be less applicable to other Chinese apps that are popular in the west. Fast fashion and cheap cosmetics are less controversial than algorithmically delivered content that is seen as shaping young minds. And shopping apps like Temu and Shein are dependent on physical supply chains, so they are less able to change or mask their Chinese links.

But US lawmakers have warned that any Chinese-owned apps could be vulnerable to data privacy breaches or interference from the Chinese Communist party.

This was one of the problems with last month’s TikTok hearing: the lack of focus muddied any substantive discussion which may have accidentally occurred. Because some lawmakers were preoccupied with the effects of social media on the health of children, their questions to TikTok’s CEO were a mix of legitimate worries about all social media apps, private parenting concerns, and moral panic.1 Others spent time questioning the influence of the Chinese government, with one representative calling the app a “Chinese spy balloon in your pocket”.

Setting aside the questionable logic of that metaphor, if privacy and security are topics of concern, banning individual apps is a laughable countermeasure. Not only would it be an extraordinary amount of work — especially given the time the government would spend in court defending a legally dubious ban — but it would be useless so long as the United States lacks strong privacy legislation.

By the way, though this Guardian article and, therefore, my response are focused on the U.S., the same argument applies to countries around the world, including the one I live in. The privacy framework we have here is better than nothing, but it is nowhere near as effective as it needs to be. The question is whether the Canadian government will grow a spine and prohibit things which will necessarily render some parts of our digital economy unviable.


  1. The recently passed bill in Montana which, upon being enacted, will ban TikTok in the state contains this excellent preamble:

    WHEREAS, TikTok fails to remove, and may even promote, dangerous content that directs minors to engage in dangerous activities, including but not limited to throwing objects at moving automobiles, taking excessive amounts of medication, lighting a mirror on fire and then attempting to extinguish it using only one’s body parts, inducing unconsciousness through oxygen deprivation, cooking chicken in NyQuil, pouring hot wax on a user’s face, attempting to break an unsuspecting passerby’s skull by tripping him or her into landing face first into a hard surface, placing metal objects in electrical outlets, swerving cars at high rates of speed, smearing human feces on toddlers, licking doorknobs and toilet seats to place oneself at risk of contracting coronavirus, attempting to climb stacks of milkcrates, shooting passersby with air rifles, loosening lug nuts on vehicles, and stealing utilities from public places […]

    Moral panic as law. Many of these either predate TikTok or are complete fiction. ↥︎

GLAAD:

Prior to the rule change, Twitter’s Hateful Content Policy stated: “We prohibit targeting others with repeated slurs, tropes or other content that intends to dehumanize, degrade or reinforce negative or harmful stereotypes about a protected category. This includes targeted misgendering or deadnaming of transgender individuals.”

The final sentence, specific to transgender users, has since been removed. According to the Wayback Machine, the sentence referring to targeted misgendering and deadnaming was present on April 7th, but was stricken from the policy on April 8th. The previous version is here; the current version is here.

This is not the only change Twitter made to this policy, but it is a notable one. Twitter also removed a list of examples of groups of people who are more likely to be marginalized, and clarified places where it will restrict an offending tweet’s visibility.

GLAAD notes the policy against deadnaming and misgendering transgender people has been in place since 2018, and that its removal comes at a time when the humanity of trans and non-binary people is being used as a political issue by the worst kind of assholes, some of whom have associated the mere existence of people who are not cisgender with heinous criminal acts. To the extent Twitter still has policies against targeted harassment, it is — at best — unhelpful to make them more vague. But that is being generous; it is hard to read this as anything other than tacit permission for Twitter users to bully people who are trans or non-binary.

There are some people who seem to believe everything is up for debate — that we need to resolve everything from first principles over and over, just to make sure. And, look, sometimes it is fair to question whether the precedent which has been set is the correct one. But when policies are a de facto encouragement of discrimination and othering of people for who they are, it means the authors of that policy are unprincipled assholes. The humanity of people is not a debate, and policies which undermine that do not encourage careful thought. They encourage a mob.

Macworld published two separate articles in the past few days about the recently contradictory and ever-changing Apple rumour mill. The first is by Jason Cross:

Apple is a famously secretive company, and the rumor mill for its upcoming products has never been particularly reliable or steady. And the further in the future you try to predict, the worse it gets.

But lately, it seems even our most reliable analysts, leakers, and supply chain watchers are all over the place. Their predictions disagree with one another and then turn around entirely.

And the second from Dan Moren:

The most important thing to remember about Apple rumors is the things they lack visibility into. Details like marketing and pricing, for example, tend to be far more closely held, since that information is generally the purview of the company’s high-level executives, and just doesn’t make its way down to the supply chain.

Cross is a senior editor at Macworld, pleading with analysts and reporters to “step back, take a deep breath, and give it a little time” before rushing to publish the latest rumour. But, as he documents in his article, his very publication often rushes to cover whatever is the most recent version of the iPhone 15’s volume buttons or the augmented reality headset.

This is an editorial policy which can change; Cross and fellow rumour enthusiast Michael Simon — who is the executive editor of Macworld — can make that decision.

Obviously, their reasons for not doing so may reflect search traffic, competitiveness, or a desire to be viewed as always current. Those, too, are editorial choices, ones which may be weighed against the reputational risk of running questionable rumours. It seems as though Macworld has made its choice about the balance it would like to strike, which makes it a little rich to publish articles decrying the mercurial rumour mill.

CBC News, in an article posted without a byline:

Twitter has put a “government-funded media” label on CBC’s account in what is the latest move by the social media company to stamp public broadcasters with designations.

[…]

“Twitter’s own policy defines government-funded media as cases where the government ‘may have varying degrees of government involvement over editorial content,’ which is clearly not the case with CBC/Radio-Canada,” [CBC spokesperson Leon] Mar said.

On YouTube, CBC videos carry a notice that “CBC/Radio-Canada is a Canadian public broadcast service” and there is a Wikipedia link to learn more. It seems fair to acknowledge the funding of different media organizations, but these recently created labels seem intended to sow discontent and mistrust rather than to inform. You can tell because Twitter’s definition of the difference between “government-funded media” and “publicly funded media” is, at best, vague. At worst, it invites confusion between public broadcasters and state-controlled media. Trust in media is already at perilous lows, and it is harmful to imply similarities in editorial policy between media created for a public service, and media outlets which are a mouthpiece for the state.

For a fun exercise, flip the angle: call public media “democratic broadcasters”, private media “advertiser-funded broadcasters”, and those owned by billionaires, say, “Rupert Murdoch’s soapbox”. Or is that too on the nose?

Like NPR and PBS, the CBC has suspended its use of Twitter.

Update: Mere days after enacting this policy, Twitter ended this classification system for all types of media. That is, the CBC no longer carries a “government-funded media” tag, and RT — the Russian broadcaster that is funded by its government, and reflects in its editorial policy its overseers’ political goals — lacks its “state-affiliated media” designation. The “Government Media Labels” reference document published by Twitter now results in a 404 error page; I have updated its link above with one from the Internet Archive.

Thomas Bandt set up Windows 11 on a computer for his kid to use:

First, there was news about a mass shooting that had occurred only recently. In the middle of the search menu. The menu which was supposed to be one of the first touch points with that computer for the kid. Not okay. But after some time, I figured out how to switch that off.

[…]

So, there is basically little you can do with Windows out of the box but buy subscriptions and log into pre-installed social media apps. One thing I knew right on the spot: That’s not an environment I want my kid to make his first steps “on a real computer.” Not in a hundred years. Never.

Via Matt Birchler:

Oh, and to be clear, this isn’t some OEM addition, this is core Windows… you can’t escape this with a Surface device: this is the Windows experience as Microsoft sees it.

I am thankful I use a Windows 11 computer at my day job because it puts things into perspective. Apple’s operating systems are also full of ads for its services, but they are somewhat less intrusive than what I experience on my office desktop. Neither is good for users, however. The more computer companies see their operating systems as vehicles for converting users into subscription-paying, advertising-clicking customers, the more it feels like we are being taken advantage of.

Adnan Bhat, Rest of World:

The Indian government on April 6 announced a state-run fact-checking unit that will have sweeping powers to label any piece of information related to the government as “fake, false or misleading” and have it removed from social media. The country has tweaked its tech rules that now require platforms such as Facebook, Twitter, and Instagram to take down content flagged by the fact-checking body. Internet service providers are also expected to block URLs to such content. Failure to comply could result in the platforms losing safe harbor protection that safeguards them from legal action against any content posted by their users, said India’s minister of information technology, Rajeev Chandrasekhar.

Given Twitter’s new censorship-happy leadership, this effectively amounts to a worldwide policy.

Matt Binder, Mashable:

Numerous public service Twitter accounts have lost their ability to automatically post breaking news and events. Twitter has been removing API access, which allows many of these accounts to post in an authorized way by the platform, as it switches to Musk’s new high-priced paid API system.

Many of these affected Twitter accounts have automated updates, but aren’t the type of hands-off bot accounts that some may think of when they hear the term “bot.”

For example, numerous National Weather Service accounts that provide consistent updates, both automated and manually posted by humans, shared that they could no longer provide their up-to-the-minute, potentially life-saving updates.

These accounts are regional so users can follow the alerts most relevant to them. That means the National Weather Service operates hundreds of largely automated accounts. At $1,200 per year for basic API access, it very quickly becomes cheaper to hire people and pay them a healthy salary and benefits to manually tweet from these accounts. By some miracle, Twitter’s new management has found a way to reverse the ever-increasing threat of automation eliminating jobs.

By the way, Twitter finally canned the method I use for automatically posting. I have a fresh free API 2.0 key and am investigating a replacement posting method, but we should assume Twitter has little interest in being a stable long-term product. I recommend following the site directly through its JSON Feed, RSS, or on Mastodon.
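For context, the kind of automated linkblog posting described above boils down to one authenticated POST per entry against the v2 “create tweet” endpoint. Here is a minimal sketch; the endpoint and payload shape come from Twitter’s public v2 documentation, while the helper names and feed-handling logic are hypothetical:

```python
# Hypothetical sketch of building a v2 "create tweet" request for a
# linkblog entry. Only the request construction is shown; sending it
# requires an authenticated HTTP client and valid credentials.

TWEET_ENDPOINT = "https://api.twitter.com/2/tweets"

def build_tweet_request(text: str) -> dict:
    """Build the JSON body for the v2 create-tweet call.

    Tweets are capped at 280 characters, so truncate defensively
    rather than let an overlong title cause a rejected request.
    """
    if len(text) > 280:
        text = text[:279] + "…"
    return {"text": text}

def tweet_for_entry(title: str, url: str) -> dict:
    # Naive concatenation; a real implementation would account for
    # t.co link wrapping, which counts every URL as 23 characters.
    return build_tweet_request(f"{title} {url}")
```

Actually sending the request would be something like `requests.post(TWEET_ENDPOINT, json=body, auth=…)` with OAuth user-context credentials — which is exactly the access Twitter now gates behind its paid tiers.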

Alex Kantrowitz:

Like every tech company, Uber is adjusting to the end of zero interest rate policy, and its focus on drivers is partially a result. When interest rates were near zero — and growth mattered more than margin — Uber could fix a subpar driving experience by luring drivers in with bonuses. In 2021, for instance, Uber announced a $250 million ‘stimulus’ to get more drivers on the road. But now, due to market conditions, the company has less cash to throw around. So it has to focus on whether drivers actually enjoy working with it.

[Dara] Khosrowshahi’s appearance in the Wall Street Journal therefore seemed to address two audiences. 1) Wall Street investors who worry Uber can’t attract drivers without cash bonuses. 2) Drivers who might drive more via a better product experience.

For some investors, the article landed with a thud. […]

Apparently, Khosrowshahi was paid about $24 million last year. You know, when he and other executives tried using the product from a driver’s perspective — that is to say, the perspective of the people who make Uber a functional service.

Mike Masnick, Techdirt:

Say “we believe that writers on our platform can publish anything they want, no matter how ridiculous, or hateful, or wrong.” Don’t hide from the question. You claim you’re enabling free speech, so own it. Don’t hide behind some lofty goals about “freedom of the press” when you’re really enabling “freedom of the grifters.”

You have every right to allow that on your platform. But the whole point of everyone eventually coming to terms with the content moderation learning curve, and the fact that private businesses are private and not the government, is that what you allow on your platform is what sticks to you. It’s your reputation at play.

And your reputation when you refuse to moderate is not “the grand enabler of free speech.” Because it’s the internet itself that is the grand enabler of free speech. When you’re a private centralized company and you don’t deal with hateful content on your site, you’re the Nazi bar.

“[T]he internet itself […] is the grand enabler of free speech” is the thing to take away from any question about how to moderate the web.

Uwa Ede-Osifo, NBC News:

Montana lawmakers passed a bill Friday blocking downloads of TikTok, the most significant action yet by a state against the app.

The bill, SB 419, makes it illegal for app stores to give users the option to download the app and also illegal for the company to operate within the state.

The bill does not, however, make it illegal for people who already have TikTok to use the app. A previous version of the bill sought to force internet providers to block TikTok, but that language was later removed.

The bill is called, [sic] “Ban tik-tok in Montana”, and you have to give them credit for being on the nose. This has none of the weaselly “RESTRICT Act” phrasing; it is a straightforward ban of TikTok. Why mince words?

Montana’s governor still needs to sign the bill for it to become law, and he is expected to do so. It sure would be embarrassing for Montana’s lawmakers if those affected found some kind of loophole. As I have written before, I have no objection to policies which would enhance users’ privacy — but this law does not do that. It bans the TikTok app from the seventh least populous state in the U.S. without any restrictions on personal data collection.

Apple:

Apple today announced a major acceleration of its work to expand recycled materials across its products, including a new 2025 target to use 100 percent recycled cobalt in all Apple-designed batteries. Additionally, by 2025, magnets in Apple devices will use entirely recycled rare earth elements, and all Apple-designed printed circuit boards will use 100 percent recycled tin soldering and 100 percent recycled gold plating.

The cobalt supply chain is notoriously dirty, with its use in the batteries of everything from personal electronics to electric cars being an ongoing source for concern. In 2017, Apple pledged to one day make its devices wholly from recycled materials. While it did not announce a timeline for doing so, this announcement gets it closer and the company’s regular progress reports indicate steps in the right direction.

Oliver Darcy, in CNN’s Reliable Sources newsletter:

Vox Media is gearing up for its first Code Conference without Kara Swisher at the helm. The invite-only event, which attracts top tech execs and journalists, will be hosted this year by The Verge Editor-In-Chief Nilay Patel, Platformer founder Casey Newton, and CNBC senior reporter Julia Boorstin. Swisher, who co-founded the news-making conference with Walt Mossberg and hosted it for the past two decades, will still participate in the conference, albeit in a less outsized role. The conference, which will take place September 26 to 27, also moves to the Ritz Carlton, Laguna Niguel.

Replacing Swisher is no easy task, but I have a feeling these three hosts will be able to hold industry executives accountable.

This is all going to sound very familiar. It is something society will re-litigate a few times a year until the internet goes away instead of, you know, learning.

On Tuesday evening, the BBC’s James Clayton scored an impromptu interview with Elon Musk, which was streamed live on Twitter Spaces. It ran for about ninety minutes and the most popular clip has been a brief segment in which Clayton pressed Musk on a rise in hate speech:

Clayton: You’ve asked me whether my feed, whether it’s got less or more. I’d say it’s got slightly more.

Musk: That’s why I’m asking for examples. Can you name one example?

Clayton: I honestly don’t — honestly…

Musk: You can’t name a single example?

Musk concludes by calling Clayton a liar. It is an awkward segment to watch because it is clear how unprepared Clayton was for this exchange — but his claim is not wrong.

Mike Wendling, BBC:

Several fringe characters that were banned under the previous management have been reinstated.

They include Andrew Anglin, founder of the neo-Nazi Daily Stormer website, and Liz Crokin, one of the biggest propagators of the QAnon conspiracy theory.

[…]

Anti-Semitic tweets doubled from June 2022 to February 2023, according to research from the Institute of Strategic Dialogue (ISD). The same study found that takedowns of such content also increased, but not enough to keep pace with the surge.

Of course, this followup story appears to be a case where the broadcaster would like not only to correct the record, but also to stand behind a reporter who struggled with a line of questioning he should, in hindsight, have anticipated. That may undermine readers’ confidence in it. A reader may also doubt the independence of the ISD, as it counts government agencies, billionaires’ foundations, and large technology companies among its funders. Research like this demands access to Twitter’s API, so if billionaire funders are bothersome, prepare for it to get much worse.

I believe those aspects are worth considering, but are secondary concerns to the findings of the ISD report (PDF). In other words, if there are methodological problems or the study’s conclusions seem contrived, that is a more immediate concern. The ISD concluded by saying two things, and implying one other: Twitter is getting better at removing antisemitic tweets on a proportional basis; Twitter is not keeping up with a doubling of the number of antisemitic tweets being posted; and the total number of antisemitic tweets is still a tiny fraction of the total tweets published daily. That any antisemitic tweets are remaining on the site is obviously not good, but a doubling of a very small number is still a very small number.

The ISD is not the only source for this kind of research. Wendling cites other sources, including the BBC’s own, to make the case that hate speech on Twitter has climbed. Just a few days ago, a preprint study on Musk’s Twitter was released seeking to understand the presence of both hate speech and bots pre- and post-acquisition. Its authors found an increase in both — though, again, still at a relatively low percentage of all tweets.

But even if it is just a handful of posts which are violative of even a baseline understanding of what constitutes hate speech, it is harmful to the person who is targeted and — if you want the detached business case for it — may have a chilling effect on their use of the platform. From Twitter’s own policies:

We recognize that if people experience abuse on Twitter, it can jeopardize their ability to express themselves. Research has shown that some groups of people are disproportionately targeted with abuse online. For those who identify with multiple underrepresented groups, abuse may be more common, more severe in nature, and more harmful.

We are committed to combating abuse motivated by hatred, prejudice or intolerance, particularly abuse that seeks to silence the voices of those who have been historically marginalized. For this reason, we prohibit behavior that targets individuals or groups with abuse based on their perceived membership in a protected category.

That is one reason why it is so important for platforms to set guidelines for permissible speech, and enforce those rules clearly and vigorously. There will always be grey area, but when a platform advertises the grey area as a space for “difficult” or “controversial” arguments, it will begin to slide. From that preprint study (PDF):

Both analyses we performed show large increases in hate speech following Musk’s purchase, with no sign of hate speech returning to previously typical levels. Prior research highlights the consequences of online hate speech, including increased anxiety in users (Saha, Chandrasekharan, and De Choudhury 2019) and offline victimization of targeted groups (Lewis, Rowe, and Wiper 2019). The effects of Twitter’s moderation policies are thus likely far-reaching and will lead to negative consequences if left unchecked.

The researchers note no causal relationship between any specific Twitter rules and the baseline rise in hate speech on the platform. But Musk’s documented views on encouraging a platform environment with fewer guidelines have effectively done the same work as an official policy change. He is an influential figure who could use his unique platform to encourage greater understanding; instead, he spends his very busy day farting out puerile memes — you are welcome — and mocking anti-racist initiatives.

The efforts of some to minimize hateful conduct as merely words, or to disclaim platforms’ responsibility for it, are grossly out of step with research. These are not new ideas, and we do not need to pretend that light-touch moderation of only the most serious offences is an effective strategy. Twitter may not be overrun with hate speech but, for its most frequent targets, it is an increasing presence.

This happens over and over again; you would think we would learn something. As I was writing this piece, a clip from the Verge’s “Decoder” podcast was published to TikTok, in which Nilay Patel asks Substack CEO Chris Best a pretty basic question about whether explicitly racist speech would be permitted in the platform’s new Notes section. That clip is not the result of crafty editing; the full transcript is an exercise by Best in equivocation and dodging. At one point, Best tries to claim “we are making a new thing […] we launched this thing one day ago”, but anyone can look at Notes and realize it is not really a new idea. If anything, its recent launch is even less of an excuse than Best believes — because it is new for Substack, it gives the company an opportunity to set reasonable standards from the first day. That it is not doing so and not learning from the work of other platforms and researchers is ridiculous.

Chris Espinosa:

As of today, Tim Cook becomes Apple’s longest-serving CEO at 4250 days in the seat.

Espinosa appears to be excluding Steve Jobs’ time serving as the interim CEO from September 1997 through January 2000. I think that is fair but, if you include it, Cook still has until the end of July 2025 before he eclipses Jobs’ record.

Nate Rogers interviewed Paul Dochney, better known to everyone as dril, for the Ringer:

Perhaps no artist has done more to push forward the conversation about how social media can exist in the artistic realm than Jacob Bakkila, who ran Horse_ebooks as part of a larger artistic collaboration with Thomas Bender. The Horse_ebooks project was deliberately ended in 2013 — “No one wants to work on a painting forever,” Bakkila said at the time — and Bakkila, who now works in advertising in addition to his ongoing work as a multimedia artist, spoke thoughtfully to me over video call about the promise of art in the digital landscape. But of anyone I talked to, he was the most concerned about the risk of overintellectualizing Dril’s act — of being the type of person who, in his analogy, would study photosynthesis but forget to watch “the leaves change color.”

“He’s a poster,” Bakkila said. “And I think that there’s a great beauty to that because it’s also the native language of the internet. … It’s what the internet is designed to do, is to let you post on it. And it goes deeper — in that sense, it’s more profound than comedy, although obviously he’s very funny. And it’s more profound than art, although obviously he’s artistic. But I think first and foremost, he’s a poster. And he’s the best one we have.”

Agreed, obviously. Do not miss the deranged New Yorker-esque cartoons based on classic dril tweets.