Link Log

Peter Hoskins and Lily Jamali, BBC News:

A US federal court has told Google to pay $425m (£316.3m) for breaching users’ privacy by collecting data from millions of users even after they had turned off a tracking feature in their Google accounts.

The verdict comes after a group of users brought the case claiming Google accessed users’ mobile devices to collect, save and use their data, in violation of privacy assurances in its Web & App Activity setting.

Heck of a week for Google. Perhaps it should stop doing so much creepy and anticompetitive stuff.

This lawsuit was filed in July 2020 (PDF), and alleged various privacy violations surrounding Google’s “Web and App Activity” control — a known source of confusion — and Google’s data collection through other services like Firebase and Analytics. Perhaps Google should not operate such a sprawling empire of surveillance by both becoming a smaller business and doing less data collection. Alas, it will not do so voluntarily.

Teresa Ribera, the European Commission’s vice president of “Clean, Just, and Competitive Transition”:

Google abused its power by favouring its own online display advertising technology services to the detriment of its competitors, online advertisers and publishers.

As a result of Google’s illegal practices, advertisers faced higher marketing costs which they likely passed on to European consumers in the form of higher prices for products and services. Google’s tactics also reduced revenues for publishers, which may have led to lower service quality and higher subscription costs for consumers.

Google’s abusive behaviour therefore had a negative impact on all European citizens in their day-to-day use of the web.

This is illegal under EU competition rules and therefore our decision orders Google to pay a fine of €2.95 billion.

Jacob Parry, Politico:

Google now has until early November — or 60 days — to tell the Commission how it intends to resolve that conflict of interest and to remedy the alleged abuse.

The Commission said it would not rule out a structural divestiture of Google’s adtech assets — but it “first wishes to hear and assess Google’s proposal.”

Kevin Breuninger, CNBC:

President Donald Trump on Friday threatened to launch a trade investigation to “nullify” what he said were discriminatory fines being levied by Europe against U.S. tech firms such as Google and Apple.

A United States court has also found Google’s dominance of online advertising to be an illegal monopoly, and arguments over what to do about it will begin later this month. A different U.S. court’s resolution of the search monopoly trial earlier this week was not particularly substantial; Google’s stock went up after the judge announced remedies. Perhaps the advertising case will play out differently in the U.S., but I have my doubts.

Ashley Belanger, Ars Technica:

Authors revealed today that Anthropic agreed to pay $1.5 billion and destroy all copies of the books the AI company pirated to train its artificial intelligence models.

In a press release provided to Ars, the authors confirmed that the settlement is “believed to be the largest publicly reported recovery in the history of US copyright litigation.” Covering 500,000 works that Anthropic pirated for AI training, if a court approves the settlement, each author will receive $3,000 per work that Anthropic stole. “Depending on the number of claims submitted, the final figure per work could be higher,” the press release noted.

Foster Kamer, Futurism:

Kyle Chayka, a staff writer at The New Yorker whose work zeroes in on the intersection between technology, art, and culture, is the author of not one but two books that popped up in LibGen: 2024’s “Filterworld: How Algorithms Flattened Culture” and 2020’s “The Longing For Less: Living With Minimalism.” Also found in LibGen was the Italian translation of Filterworld. All in, he could stand to make upwards of $12K!

We asked Kyle: How does the sum of “$3,000 per class work” feel as a number given that his intellectual property was used to train an AI? Low, high, not worth it on principle, or about right?

“It should be a license, really,” he replied. “Because the training never goes away. So it could be $5,000 every 5 years, or $1,000 / year as long as they exist. But the price seems about right, honestly — a decent percentage of most book advances, and about the price of an institutional speaking gig.”

Yet another complication for the fair use arguments of generative A.I. companies, though one which was obviously undermined by using pirated data to begin with. Though I think it makes sense to focus on this case for now, the looming question is what precedent it sets. It does not singlehandedly eliminate the fair use argument for training on public information, but what about other illicitly reproduced information ingested into data sets?

Meta and now Apple are being sued in similar cases. If you are a published author, visit the case settlement website to register for your share of the pot.
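
The baseline arithmetic in the settlement is easy to check. A quick sketch, using only the headline figures quoted above ($1.5 billion across roughly 500,000 covered works):

```python
# Back-of-the-envelope check of the reported settlement terms.
total_settlement = 1_500_000_000  # dollars, per the press release
covered_works = 500_000           # pirated works covered by the settlement

baseline_per_work = total_settlement / covered_works
print(f"${baseline_per_work:,.0f} per work")  # prints "$3,000 per work"
```

As the press release notes, the final per-work figure could end up higher depending on how many claims are submitted; this is only the floor implied by the headline numbers.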

Katherine Bunt, Meredith McGraw, and Megan Bobrowsky, Wall Street Journal:

President Trump on Thursday led leaders of the world’s biggest technology companies in a version of his cabinet meetings, in which each participant takes a turn thanking and praising him, this time for his efforts to promote investments in chip manufacturing and artificial intelligence.

Present at the table were Sam Altman, Tim Cook, Sundar Pichai, David Sacks, and — immediately to Trump’s right — Mark Zuckerberg. Bill Gates was also there for some reason. Here is a fun exchange the Journal pulled from all the grovelling:

Trump also addressed Alphabet CEO Sundar Pichai about a federal judge’s ruling this week on an antitrust case related to Google’s monopoly in search. The judge levied relatively light penalties and rejected the most significant measures sought by the Justice Department, which filed the lawsuit in 2020.

“You had a very good day yesterday,” Trump said. “Do you want to talk about that big day you had yesterday?”

“I’m glad it’s over,” Pichai said.

“Biden was the one who prosecuted that lawsuit,” Trump said. “You know that, right?”

Beginning this section by reminding readers the suit was filed under the first Trump administration is a kind way of calling out the president’s flexible concepts of time and responsibility.

At least nobody gave him any solid gold statues this time, as far as I know.

Rebecca Bellan, TechCrunch:

U.S. District Court Judge Amit P. Mehta outlined remedies on Tuesday that would bar Google from entering or maintaining exclusive deals that tie the distribution of Search, Chrome, Google Assistant, or Gemini to other apps or revenue arrangements. For example, Google wouldn’t be able to condition Play Store licensing on the distribution of certain apps, or tie revenue-share payments to keeping certain apps.

Mehta’s full decision (PDF) is written to be read by non-lawyers. Even so, I admit I have only read the introduction — which includes some high-level explanations of the remedies — and skimmed much of the rest. It is a couple hundred pages long for those who want to put in a little more work than I did.

I did notice a potentially interesting deposition referenced on page 21 regarding the relative performance of Google’s A.I. search summaries compared to organic search results. I sure wish a transcript would be published.

Lily Jamali, BBC News:

The judge will also allow certain competitors to display Google search results as their own in a bid to give them the time and resources they need to innovate.

The judge is allowing Google to continue to pay companies like Apple and Samsung for distribution of its search engine on devices and browsers, but will bar Google from maintaining exclusive contracts.

The latter must be quite the relief to Apple, which gets tens of billions of dollars a year because people use Safari to search with Google, and to Mozilla, which gets to keep existing.

Cory Doctorow:

And then there’s Google’s data. Google is the world’s most prolific surveiller, and the company boasts to investors about the advantage that its 24/7 spying confers on it in the search market, because Google knows so much about us and can therefore tailor our results. Even if this is true – a big if – it’s nevertheless a fucking nightmare. Google has stolen every fact about our lives, in service to propping up a monopoly that lets it steal our money, too. Any remedy worth the name would have required Google to delete (“disgorge,” in law-speak) all that data […]

Some people in the antitrust world didn’t see it that way. Out of a misguided kind of privacy nihilism, they called for Google to be forced to share the data it stole from us, so that potential competitors could tune their search tools on the monopolist’s population-scale privacy violations.

And that is what the court has ordered.

Much like the Microsoft antitrust case that preceded Google’s by a couple of decades, the proposed solutions basically treat Google with kid gloves. The judge admitted in the introduction to treating this case with “humility”, having “no expertise in the business of [general search engines], the buying and selling of search text ads, or the engineering of GenAI technologies”. That is true enough. Yet judges are often expected to rule on cases with subject matter about which they have no particular expertise. The judge appears comfortable assuming the generative A.I. boom is providing Google with ample competition.

This conclusion ultimately seems as though Mehta is doing something, yet Google has to change very little. It is difficult for me to believe this will be disruptive to Google’s search monopoly. Again, as with the years-ago hesitancy to impose serious conditions on Microsoft or break it up, Google will probably emerge as an even stronger force while technically complying with a ruling finding its monopoly illegal.

Charlotte Tobitt, Press Gazette:

Wired and Business Insider have removed news features written by a freelance journalist after concerns they are likely AI-generated works of fiction.

Freedom of expression non-profit Index on Censorship is also in the process of taking down a magazine article by the same author after concerns were raised by Press Gazette. The publisher has concluded that it “appears to have been written by AI”.

Several other UK and US online publications have published questionable articles by the same person, going by the name of Margaux Blanchard, since April.

Tobitt reports at least six publications carried articles with the “Margaux Blanchard” byline. Wired, for its part, published an explanation that, unfortunately, does not make it look very good. For one thing, it is attributed to “Wired Management”, not any specific person. For another, the reason “Blanchard” was busted had nothing to do with the article itself:

Over the next several days, it became clear that the writer was unable to provide enough information to be entered into our payments system. They instead insisted on payment by PayPal or check. Now suspicious, a WIRED editor ran the story through two third-party AI-detection tools, both of which said that the copy was likely to be human-generated. A closer look at the details of the story, though, along with further correspondence from the writer, made it clear to us that the story had been an AI fabrication. After more due diligence from the head of our research desk, we retracted the story and replaced it with an editor’s note.

Wired says it did not fully vet the article before publishing, which seems a little bit obvious. This amount of transparency is admirable, however, in contrast with other publications that have merely replaced articles “by” “Blanchard” with a terse note.

This is obviously a huge mistake on the part of these media organizations. It is embarrassing and silly and all the rest of it, particularly for Wired, which has ramped up its political coverage with an aggressive but accurate bent. I also cannot help but feel it is indicative of what is, for lack of better phrasing, our current media climate. Trust in mass media has been declining for years, in part because financial pressures have triggered staffing cutbacks while online news has encouraged faster and more voluminous coverage. All of this is done with increasing sloppiness, which further erodes that trust. I am not making excuses for Wired or Business Insider running an insufficiently checked piece; they should have followed protocols. But a little of this problem may be attributable to the corner-cutting that encourages publishing more stories more quickly.

Also, who the hell is behind this “Margaux Blanchard” character anyway?

Tobitt, in a followup:

An internet hoaxer calling themselves Andrew Frelon has claimed responsibility for the fake (likely AI-generated) freelance journalist Margaux Blanchard.

Andrew Frelon is itself a pseudonym and their claims have to be treated with a high degree of scepticism.

“Frelon” is the same person who, earlier this year, claimed to be responsible for the Velvet Sundown A.I. music hoax, only to reveal that statement, itself, was a prank and he had nothing to do with the Velvet Sundown.

Also, we apparently know Frelon’s real name, according to Kevin Maimann, of CBC News:

The Quebec man who pranked journalists and music fans by saying he was behind a wildly successful AI band has revealed his identity as web platform safety and policy issues expert Tim Boucher.

Boucher’s own telling reads to me like the ramblings of someone a little too self-indulgent and self-important. In other words, he could very well be responsible for hijacking the publicity around the Velvet Sundown gag, and attempting to do the same for this “Margaux Blanchard” situation. Do not simply be skeptical; assume this is false until Boucher or “Frelon” provides proof. Press ought to stop giving this doofus the publicity he appears to crave.

Tim Bradshaw and Anna Gross, Financial Times:

The new [Investigatory Powers Tribunal] filing prepared by two judges sets out the “assumed facts” on which the case will be argued at a court hearing scheduled for early next year.

[…]

However, the new IPT filing states the [technical capability notice] “is not limited to” data stored under ADP, suggesting the UK government sought bulk interception access to Apple’s standard iCloud service, which is much more widely used by the company’s customers.

It is routine for law enforcement to request access to individual iCloud accounts, and Apple says it complies to the best of its ability with legal requests. But “bulk interception access” would go well beyond these kinds of targeted requests, reverting to the kind of global surveillance apparatus made public in 2013. The more cynical reader might imagine such a system still exists regardless of operating systems and web browsers defaulting to secure connections in the intervening years. I have not seen evidence of this. I think the Times’ reporting supports the notion that intelligence services can no longer monitor these kinds of communications in bulk as they once did.

The filing also apparently throws cold water on Tulsi Gabbard’s claim that the U.K. is rescinding its demands for an Advanced Data Protection backdoor. Again, the secrecy around this prevents us from gaining specificity or clarity. It even requires the judges to rely on “assumed facts” which, as Bradshaw and Gross write, are “not the same as asserting that [they are] factually the case”, because they cannot confirm the existence of the technical capability notice. Insert your personal favourite dystopian literary reference here.

Emanuel Maiberg, of 404 Media, published a pretty good story with a pretty bad headline. Here is his core argument:

Porn piracy, like all forms of content piracy, has existed for as long as the internet. But as more individual creators make their living on services like OnlyFans, many of them have hired companies to send Digital Millennium Copyright Act takedown notices against companies that steal their content. As some of those services turn to automation in order to handle the workload, completely unrelated content is getting flagged as violating their copyrights and is being deindexed from Google search. The process exposes bigger problems with how copyright violations are handled on the internet, with automated systems filing takedown requests that are reviewed by other automated systems, leading to unintended consequences.

Ignoring the titillating associations with porn, this is a new and valuable entry into the compendium of articles about failures in the automatic handling of DMCA complaints. The headline on the article, however, gives no indication of that and is, I think, misleading:

How OnlyFans Piracy Is Ruining the Internet for Everyone

Here are my problems with it:

  1. The piracy itself is not doing anything. It is the automation and mishandling of takedown requests for copyright claims.

  2. This is only slightly related to OnlyFans. It is more broadly applicable to the increasing appeal of solo or independent producers in music, video, podcasting, etc.

  3. If I am being pedantic, it is not the internet which is being ruined, but the web.

This headline is kind of clickbait, but it is also simply inaccurate in describing the subject of the article. I do not often flag issues with headlines, especially since they are typically written by editors instead of reporters. In this case, though, Maiberg is a co-owner of 404 Media, so I am sure he had some say in choosing a headline.

Do not let that criticism steer you away from what is otherwise an excellent article, however. Maiberg interviewed the CEO of Takedowns AI, a platform which used to be involved in generic material removal and reputation management before pivoting to a service focused on OnlyFans piracy specifically. I am linking to the Wayback Machine there because the company’s site is currently offline, perhaps its own attempt at reputation management after Kunal Anand, the CEO, explained what seems to be a loose approach to validating takedown requests.

This is an old problem, as Maiberg acknowledges:

It’s an issue at the intersection of several critical problems with the modern internet: Google’s search monopoly, rampant porn piracy, a DMCA takedown process vulnerable to errors and abuse, and now the automation of all of the above in order to operate at scale. No one I talked to for this story thought there was an easy solution to this problem.

I obviously have no answers here, only two observations. The first is that it shows a limitation of offloading legal processes to corporations. Fair use is, famously, a nebulous concept, and trying to figure out whether a single YouTuber’s video is in violation would take significant time and expense — and this simply is not feasible at YouTube’s scale. Second, automation has made some of this easier — it is harder to find full-length Hollywood films on YouTube than you might expect for a video-based website — while also requiring each party to more carefully check their work.

Earlier this month, Tesla was penalized by a jury when a car’s supervised autonomous vehicle features failed, leading to a collision. When I linked to the CBS News article about the story, this was one of several paragraphs that stood out to me:

The most important piece of evidence in the trial, according to the plaintiffs’ lawyers, was an augmented video of the crash that included data from the Autopilot computer. Tesla previously claimed the video was deleted, but a forensic data expert was able to recover it.

Now, thanks to Trisha Thadani and Faiz Siddiqui, of the Washington Post, we know more about what happened behind the scenes:

Years after a Tesla driver using Autopilot plowed into a young Florida couple in 2019, crucial electronic data detailing how the fatal wreck unfolded was missing. The information was key for a wrongful death case the survivor and the victim’s family were building against Tesla, but the company said it didn’t have the data.

Then a self-described hacker, enlisted by the plaintiffs to decode the contents of a chip they recovered from the vehicle, found it while sipping a Venti-size hot chocolate at a South Florida Starbucks. Tesla later said in court that it had the data on its own servers all along.

One supposed benefit of autonomous vehicle technologies is in public safety. That is how the simple features on my car are described and marketed, at least; the most basic of them is front-facing collision detection and automatic braking. Tesla’s system, among the most advanced on any car, failed in this case with tragic consequences. I am not saying my car would have performed any better. But I would view Tesla’s system differently if it began from a base of reliable safety features rather than the implications of a name like “Autopilot”.

One supposed benefit of ever greater data collection — even having several cameras constantly recording — is that we can better understand a collision. However, that only works as well as an automaker is trustworthy. It is hard to know what to make of Tesla’s defence here. Either it did not look very hard — which is bad — or the company actively avoided producing evidence until it became impossible for it to play dumb. I sure hope it is more compliant in future collision investigations. But I have no trust that it will be.

I worry that Tesla will learn the wrong lesson, and will instead be even more evasive. The media strategy at Elon Musk’s companies these days, including for articles about this crash, is to say nothing. Better for them to be silent when trust in media and institutions is perilously low. Not good for anyone else.

Update: Tesla is, of course, fighting the verdict.

Sarah Perez, TechCrunch:

The law, HB 1126, requires platforms to implement age verification for all users before they can access social networks like Bluesky. Recently, the Supreme Court justices decided to block an emergency appeal that would have prevented the law from going into effect as the legal challenges it faces played out in the courts. This forced Bluesky to make a decision of its own: either comply or risk hefty fines of up to $10,000 per user.

Users in Mississippi soon scrambled for a workaround, which tends to involve the use of VPNs.

However, others questioned why a VPN would be the necessary solution here. After all, decentralized social networking was meant to reduce the control and power the state — or any authority — would have over these social platforms.

Bluesky blocked access in Mississippi to avoid collecting more data about its users or risk stiff penalties. It points out the law there is more expansive and requires more data collection than the U.K.’s Online Safety Act. It is even, according to a note on JD Supra, more broad than the Texas legislation on which some of its language was based.

But, as Perez writes, decentralized networks are surely supposed to be resilient to exactly this kind of overbearing legislation. In a way, I guess they are — you can still use the AT Protocol, which underpins Bluesky, in Mississippi through other personal data servers. The same is true for ActivityPub and Mastodon instances, though Mastodon says it has no way to comply with the Mississippi law. That makes me wonder if individual Mastodon instances must each incorporate age validation. I do not see anything in the sloppy text of the law saying it applies only to services over a certain number of users. It seems to non-lawyer me this means any instance — or any Bluesky PDS — allowing interaction in Mississippi could be liable for penalties.

Alexander Gromnitsky:

At the time of writing, the most recent Adobe Reader 25.x.y.z 64-bit installer for Windows 11 weights 687,230,424 bytes. After installation, the program includes ‘AI’ (of course), an auto-updater, sprinkled ads for Acrobat online services everywhere, and 2 GUIs: ‘new’ and ‘old’.

For comparison, the size of SumatraPDF-3.5.2 installer is 8,246,744 bytes. It has no ‘AI’, no auto-updater (though it can check for new versions, which I find unnecessary, for anyone sane would install it via scoop anyway), and no ads for ‘cloud storage’.

The installed size of the latest version of Acrobat is, on my Mac, 2.18 GB — or, to spell it out as Gromnitsky did, 2,176,053,007 bytes. Of course, over 435 MB of that is because it includes a copy of the Chromium web browser engine. I primarily use this application to view, edit, and add form fields to text-based documents, and to dismiss ads for A.I. features and Adobe services. Gromnitsky is describing only Reader, which is far more limited than Acrobat, even more so than Apple’s own Preview software; you cannot even split a PDF into multiple files with Reader.
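
Gromnitsky’s byte counts make for a stark ratio. A quick sketch with the figures quoted above (note the installer-to-installer comparison is the fair one; my installed-size figure for Acrobat is not apples-to-apples):

```python
# Size comparison using the figures quoted above.
reader_installer = 687_230_424     # bytes, Adobe Reader 25.x Windows installer
sumatra_installer = 8_246_744      # bytes, SumatraPDF 3.5.2 installer
acrobat_installed = 2_176_053_007  # bytes, Acrobat as installed on my Mac

print(f"Reader installer: {reader_installer / sumatra_installer:.0f}x SumatraPDF's")
print(f"Acrobat on disk: {acrobat_installed / sumatra_installer:.0f}x SumatraPDF's installer")
```

Roughly an eighty-threefold difference between installers, for software whose core job is displaying the same file format.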

If I ever give the impression of being personally attacked when I find a Preview feature no longer works as well as it once did, this is why. Acrobat and Reader are perfect examples of software made without respect for users.

(Via Michael Tsai.)

Rajpreet Sahota, CBC News:

U.S. Customs and Border Protection (CBP) has released new data showing a sharp rise in electronic device searches at border crossings.

From April to June alone, CBP conducted 14,899 electronic device searches, up more than 21 per cent from the previous quarter (23 per cent over the same period last year). Most of those were basic searches, but 1,075 were “advanced,” allowing officers to copy and analyze device contents.

U.S. border agents have conducted tens of thousands of searches every year for many years, along a generally increasing trajectory, so this is not necessarily specific to this administration. Unfortunately, as the Electronic Frontier Foundation reminds us, people have few rights at ports of entry, regardless of whether they are a U.S. citizen.

There are no great ways to avoid a civil rights violation, either. As a security expert told the CBC, people carrying burner devices would attract scrutiny because a burner is obviously not someone’s main device. It stands to reason that someone travelling without any electronic devices at all would also be seen as more suspicious. Encryption is your best bet, but then you may need to have a whole conversation about why all of your devices are encrypted.

The EFF has a pocket guide with your best options.

If you, thankfully, missed Google’s Pixel 10 unveiling — and even if you did not — you will surely appreciate PetaPixel’s review of the Pro version of the phone from the perspective of photographers and videographers. This line of phones has long boasted computational photography bonafides over the competition, and I thought this was a good exploration of what is new and not-so-new in this year’s models.

Come for Chris and Jordan; stay for Chris’ “pet” deer.

Typepad:

After September 30, 2025, access to Typepad – including account management, blogs, and all associated content – will no longer be available. Your account and all related services will be permanently deactivated.   

I have not thought about Typepad in years, and I am certain I am not alone. That is not a condemnation; Typepad occupies a particular time and place on the web. As with anything hosted, however, users are unfortunately dependent on someone else’s interest in maintaining it.

If you have anything hosted at Typepad, now is a good time to back it up.

Kelefa Sanneh, the New Yorker:

[…] In 2018, the social-science blog “Data Colada” looked at Metacritic, a review aggregator, and found that more than four out of five albums released that year had received an average rating of at least seventy points out of a hundred — on the site, albums that score sixty-one or above are colored green, for “good.” Even today, music reviews on Metacritic are almost always green, unlike reviews of films, which are more likely to be yellow, for “mixed/average,” or red, for “bad.” The music site Pitchfork, which was once known for its scabrous reviews, hasn’t handed down a perfectly contemptuous score — 0.0 out of 10 — since 2007 (for “This Is Next,” an inoffensive indie-rock compilation). And, in 2022, decades too late for poor Andrew Ridgeley, Rolling Stone abolished its famous five-star system and installed a milder replacement: a pair of merit badges, “Instant Classic” and “Hear This.”

I have quibbles with this article, which I will get to, but I will front-load this with the twist instead of making you wait — this article is, in effect, Sanneh’s response to himself twenty-one years after popularizing the very concept of poptimism in the New York Times. Sanneh in 2004:

In the end, the problem with rockism isn’t that it’s wrong: all critics are wrong sometimes, and some critics (now doesn’t seem like the right time to name names) are wrong almost all the time. The problem with rockism is that it seems increasingly far removed from the way most people actually listen to music.

Are you really pondering the phony distinction between “great art” and a “guilty pleasure” when you’re humming along to the radio? In an era when listeners routinely — and fearlessly — pick music by putting a 40-gig iPod on shuffle, surely we have more interesting things to worry about than that someone might be lip-synching on “Saturday Night Live” or that some rappers gild their phooey. Good critics are good listeners, and the problem with rockism is that it gets in the way of listening. If you’re waiting for some song that conjures up soul or honesty or grit or rebellion, you might miss out on Ciara’s ecstatic electro-pop, or Alan Jackson’s sly country ballads, or Lloyd Banks’s felonious purr.

Here we are in 2025 and a bunch of the best-reviewed records in recent memory are also some of the most popular. They are well-regarded because critics began to review pop records on the genre’s own terms.

Here is one more bonus twist: the New Yorker article is also preoccupied with criticism of Pitchfork, a fellow Condé Nast publication. This connection is gestured toward twice in the article. Neither mention serves to deflate the discomfort, especially since the second is in the context of reduced investment in the site by Condé.

Speaking of Pitchfork, though, the numerical scores of its reviews have led to considerable analysis by the statistics-obsessed. For example, a 2020 analysis of reviews published between 1999 and early 2017 found the median score was 7.03. This is not bad at all, and it suggests the site is most interested in what it considers decent-to-good music, and cannot be bothered to review bad stuff. The researchers also found a decreasing frequency of very negative reviews beginning in about 2010, which fits Sanneh’s thesis. However, they also found fewer extremely high scores. The difference is more subtle — and you should ignore the dot in the “10.0” column because the source data set appears to also contain Pitchfork’s modern reviews of classic records — but notice how many dots are rated above 8.75 from 2004–2009 compared to later years. Another analysis, of reviews from 1999–2021, found a similar convergence toward the middle.

As for Metacritic, I had to go and look up the Data Colada article referenced, since the New Yorker does not bother with links. I do not think this piece reinforces Sanneh’s argument very well. What Joe Simmons, its author, attempts to illustrate is that Metacritic skews positive for bands with few aggregated reviews because most music publications are not going to waste time dunking on a nascent band’s early work. I also think Simmons is particularly cruel to a Modern Studies record.

Anecdotally, I do not know that music critics have truly lost their edge. I read and watch a fair amount of music criticism, and I still see a generous number of withering takes. I think music critics, as they become established and busier, recognize they have little time for bad music. Maroon 5 have been a best-selling act for a couple of decades, but Metacritic has aggregated just four reviews of its latest album, because you can just assume it sucks. Your time might be better spent with the great new Water From Your Eyes record.

Even though I am unsure I agree with Sanneh’s conclusion, I think critics should make time and column space for albums they think are bad. Negative reviews are not cruel — or, at least, they should not be — but it is the presence of bad that helps us understand what is good.

Tripp Mickle and Don Clark, New York Times:

Echoing IBM, Microsoft in 1985 built its Windows software to run on Intel processors. The combination created the “Wintel era,” when the majority of the world’s computers featured Windows software and Intel hardware. Microsoft’s and Intel’s profits soared, turning them into two of the world’s most valuable companies by the mid-1990s. Most of the world’s computers soon featured “Intel Inside” stickers, making the chipmaker a household name.

In 2009, the Obama administration was so troubled by Intel’s dominance in computer chips that it filed a broad antitrust case against the Silicon Valley giant. It was settled the next year with concessions that hardly dented the company’s profits.

This is a gift link because I think this one is particularly worth reading. The headline calls it a “long, painful downfall”, but the remarkable thing about it is that it is short, if anything. Revenue is not always the best proxy for this, but the cracks began to show in the early 2010s when Intel’s quarterly growth contracted; a few years of modest growth followed before the company was clobbered beginning in mid-2020. Every similar company in tech seems to have made a fortune off the combined forces of the covid-19 pandemic and artificial intelligence except Intel.

Tobias Mann, the Register:

For better or worse, the US is now a shareholder in the chipmaker’s success, which makes sense given Intel’s strategic importance to national security. Remember, Intel is the only American manufacturer of leading edge silicon. TSMC and Samsung may be setting up shop in the US, but hell will freeze over before the US military lets either of them fab its most sensitive chips. Uncle Sam awarded Intel $3.2 billion to build that secure enclave for a reason.

Put mildly, the US government needs Intel Foundry and Lip-Bu Tan needs Uncle Sam’s cash to make the whole thing work. It just so happens that right now Intel isn’t in a great position to negotiate.

Mann’s skeptical analysis is also worth your time. There is good sense in the U.S. government holding an interest in the success of Intel. Under this president, however, it raises altogether new questions and concerns.

Mary Cunningham, CBS News:

Tesla was found partly liable in a wrongful death case involving the electric vehicle company’s Autopilot system, with a jury awarding the plaintiffs $200 million in punitive damages plus additional money in compensatory damages.

[…]

“What we ultimately learned from that augmented video is that the vehicle 100% knew that it was about to run off the roadway, through a stop sign, through a blinking red light, through a parked car and through a pedestrian, yet did nothing other than shut itself off when the crash was unavoidable,” said Adam Boumel, one of the plaintiffs’ attorneys.

I continue to believe holding manufacturers legally responsible is the correct outcome for failures of autonomous driving technology. Corporations, unlike people, cannot go to jail; the closest thing we have to accountability is punitive damages.

Andy Baio:

This minute-long clip of a Will Smith concert is blowing up online for all the wrong reasons, with people accusing him of using AI to generate fake crowds filled with fake fans carrying fake signs. The story’s blown up a bit, with coverage in Rolling Stone, NME, The Independent, and Consequence of Sound.

[…]

But here’s where things get complicated.

The crowds are real. Every person you see in the video above started out as real footage of real fans, pulled from video of multiple Will Smith concerts during his recent European tour.

The lines, in this case, are definitely blurry. This is unlike any previous “is it A.I.?” controversy over crowds I can remember because — and I hope this is more teaser than spoiler — note Baio’s careful word choice in that last quoted paragraph.

Joseph Cox, 404 Media:

A man holds an orange and white device in his hand, about the size of his palm, with an antenna sticking out. He enters some commands with the built-in buttons, then walks over to a nearby car. At first, its doors are locked, and the man tugs on one of them unsuccessfully. He then pushes a button on the gadget in his hand, and the door now unlocks.

The tech used here is the popular Flipper Zero, an ethical hacker’s swiss army knife, capable of all sorts of things such as WiFi attacks or emulating NFC tags. Now, 404 Media has found an underground trade where much shadier hackers sell extra software and patches for the Flipper Zero to unlock all manner of cars, including models popular in the U.S. The hackers say the tool can be used against Ford, Audi, Volkswagen, Subaru, Hyundai, Kia, and several other brands, including sometimes dozens of specific vehicle models, with no easy fix from car manufacturers.

The Canadian government made headlines last year when it banned the Flipper Zero, only to roll it back in favour of a narrowed approach a month later. That was probably the right call. However, too many — including Hackaday and Flipper itself — were too confident in saying the device could not be used to steal cars. This is demonstrably untrue.

Michelle Bellefontaine, CBC News:

“Any publicly funded immunization in B.C. can be provided at no cost to any Canadian travelling within the province,” a statement from the ministry said.

“This includes providing publicly funded COVID-19 vaccine to people of Alberta.”

[…]

Alberta is the only Canadian province that will not provide free universal access to COVID-19 vaccines this fall.

The dummies running our province opened what they called a “vaccine booking system” earlier this month allowing Albertans to “pre-order” vaccines. However, despite these terms having defined meanings, the system did not allow anyone to book a specific day, time, or location to receive the vaccine, nor did it take payments or even show prices. The government’s rationale for this strategy is that it is “intended [to] help reduce waste”.

Now that pricing has been revealed, it sure seems like these dopes want us to have a nice weekend just over the B.C. border. A hotel room for a couple or a family will probably be about the same as the combined vaccination cost. Sure, a couple of meals would cost extra, but it is also a nice weekend away. Sure, it means people who are poor or otherwise unable will likely need to pay the $100 “administrative fee” to get their booster, and it means a whole bunch of pre-ordered vaccines will go to waste, thereby undermining the whole point of this exercise. But at least it plays to the anti-vaccine crowd. That is what counts for these jokers.