Anna Gross and Joe Miller, Financial Times:

Elon Musk has privately discussed with allies how Sir Keir Starmer could be removed as UK prime minister before the next general election, according to people briefed on the matter.

Musk, the world’s richest man and key confidant of US president-elect Donald Trump, is probing how he and his rightwing allies can destabilise the UK Labour government beyond the aggressive posts he has issued on his social media platform X, the people said.

Private Eye editor Ian Hislop appeared on Andrew Marr’s LBC show to discuss Musk’s absurd claims:

I mean, it is almost impossible to avoid him, and he has enormous power, because of a) his money, and b) his reach to people who have been persuaded over the last five years or so that the mainstream media hasn’t covered any stories.

Hislop says the award-winning story Musk is using to cause this frenzy was broken on the front page of the Times and has been covered for a decade or more. As Hank Green said, everything is a conspiracy theory when you do not trust anything and, as Mike Masnick said, when you do not bother to educate yourself.

I shudder to think what nonsense is coming for the Canadian election likely happening this year. It is going to be a nightmare.

Joseph Cox, 404 Media:

Hackers claim to have compromised Gravy Analytics, the parent company of Venntel which has sold masses of smartphone location data to the U.S. government. The hackers said they have stolen a massive amount of data, including customer lists, information on the broader industry, and even location data harvested from smartphones which show peoples’ precise movements, and they are threatening to publish the data publicly.

You remember Gravy Analytics, right? It is the one from the stories and the FTC settlements, though it should not be confused with all the other ones.

Cox, again, 404 Media:

Included in the hacked Gravy data are tens of millions of mobile phone coordinates of devices inside the US, Russia, and Europe. Some of those files also reference an app next to each piece of location data. 404 Media extracted the app names and built a list of mentioned apps.

The list includes dating sites Tinder and Grindr; massive games such as Candy Crush, Temple Run, Subway Surfers, and Harry Potter: Puzzles & Spells; transit app Moovit; My Period Calendar & Tracker, a period tracking app with more than 10 million downloads; popular fitness app MyFitnessPal; social network Tumblr; Yahoo’s email client; Microsoft’s 365 office app; and flight tracker Flightradar24. The list also mentions multiple religious-focused apps such as Muslim prayer and Christian Bible apps; various pregnancy trackers; and many VPN apps, which some users may download, ironically, in an attempt to protect their privacy.

This location data, some of it more granular than others, appears to be derived from real-time bidding on advertising, much like the Patternz case last year. In linking to — surprise — Cox’s reporting on Patternz, I also pointed to a slowly developing lawsuit against Google. In a filing (PDF) from the plaintiffs, so far untested in court, there are some passages that can help contextualize the scale and scope of real-time bidding data (emphasis mine):

As to the Court’s second concern about the representative nature of the RTB data produced for the plaintiffs (the “Plaintiff data”), following the Court’s Order, Google produced six ten-minute intervals of class-wide RTB bid data spread over a three-year period (2021-2023) (the “Class data”). Further Pritzker Decl., ¶ 17. Prof. Shafiq analyzed this production, encompassing over 120 terabytes of data and almost [redacted] billion RTB bid requests. His analysis directly answers the Court’s inquiry, affirming that the RTB data are uniformly personal information for the plaintiffs and the Class, and that the Plaintiff data is in fact representative of the Class as a whole.

[…]

[…] For the six ten-minute periods of Class data Google produced, Prof. Shafiq finds that there were at least [redacted] different companies receiving the bid data located in at least [redacted] countries, and that the companies included some of the largest technology companies in the world. […]

This is Google, not Gravy Analytics, but still — this entire industry is morally bankrupt. It should not be a radical position that using an app on your phone or browsing the web should not opt you into such egregious violations of basic elements of your privacy.
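To make concrete how this kind of data leaks in the first place, here is a minimal sketch of what a single real-time bid request can carry. The field names loosely follow the public OpenRTB convention, but every value, app name, and endpoint below is made up for illustration; none of it is drawn from the hacked Gravy Analytics files or the court filing.

```python
# A rough sketch of the data an ad SDK can attach to a single real-time
# bid request. Field names loosely follow the OpenRTB convention; every
# value and endpoint here is hypothetical, not taken from the hacked data
# or the court filing.
import json

bid_request = {
    "id": "req-0001",
    "app": {"bundle": "com.example.weather"},  # which app the person has open
    "device": {
        "ifa": "38400000-8cf0-11bd-b23e-10b96e40000d",  # advertising identifier
        "ip": "203.0.113.27",
        "geo": {"lat": 51.0447, "lon": -114.0719, "type": 1},  # type 1 = GPS
    },
}

# Every bidder the exchange contacts receives a copy of the request,
# win or lose, which is how "bidstream" location data spreads so widely.
hypothetical_bidders = ["dsp-a.example", "dsp-b.example", "dsp-c.example"]
for bidder in hypothetical_bidders:
    print(f"POST https://{bidder}/bid <- {json.dumps(bid_request)[:60]}...")
```

Multiply that by every ad slot in every app, and by every bidder who receives a copy whether or not it wins the auction, and the scale described in the filing stops being surprising.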

Sam Biddle, the Intercept:

Meta is now granting its users new freedom to post a wide array of derogatory remarks about races, nationalities, ethnic groups, sexual orientations, and gender identities, training materials obtained by The Intercept reveal.

There are examples in this article and, separately, in Casey Newton’s reporting about dehumanizing speech toward people who are transgender, non-binary, or genderfluid. I cannot imagine working on these products and being proud to see that such abusive language is allowed.

Four years ago this week, social media companies decided they would stop platforming then-outgoing president Donald Trump after he celebrated seditionists who had broken into the U.S. Capitol Building in a failed attempt to invalidate the election and allow Trump to stay in power. After two campaigns and a presidency in which he tested the limits of what those platforms would allow, enthusiasm for a violent attack on government was apparently one step too far. At the time, Mark Zuckerberg explained:

Over the last several years, we have allowed President Trump to use our platform consistent with our own rules, at times removing content or labeling his posts when they violate our policies. We did this because we believe that the public has a right to the broadest possible access to political speech, even controversial speech. But the current context is now fundamentally different, involving use of our platform to incite violent insurrection against a democratically elected government.

Zuckerberg, it would seem, now has regrets — not about doing too little over those and the subsequent years, but about doing too much. For Zuckerberg, the intervening four years have been stifled by “censorship” on Meta’s platforms; so, this week, he announced a series of sweeping changes to their governance. He posted a summary on Threads but the five-minute video is far more loaded, and it is what I will be referring to. If you do not want to watch it — and I do not blame you — the transcript at Tech Policy Press is useful. The key changes:

  1. Replace fact-checking with a Community Notes feature, similar to the one on X.

  2. Change the Hateful Conduct policies to be more permissive about language used in discussions about immigration and gender.

  3. Make automated violation detection tools more permissive and focus them on “high-severity” problems, relying on user reports for material the company thinks is of a lower concern.

  4. Roll back restrictions on the visibility and recommendation of posts related to politics.

  5. Relocate the people responsible for moderating Meta’s products from California to another location — Zuckerberg does not specify — and move the U.S.-focused team to Texas.

  6. Work with the incoming administration on concerns about governments outside the U.S. pressuring them to “censor more”.

Regardless of whether you feel each of these is a good or bad idea, I do not think you should take Zuckerberg’s word for why the company is making these changes. Meta’s decision to stop working directly with fact-checkers, for example, is just as likely a reaction to the demands of FCC commissioner Brendan Carr, who has a bananas view (PDF) of how the First Amendment to the U.S. Constitution works. According to Carr, social media companies should be forbidden from contributing their own speech to users’ posts based on the rankings of organizations like NewsGuard. According to both Carr and Zuckerberg, fact-checkers demand “censorship” in some way. This is nonsense: they were not responsible for the visibility of posts. I do not think much of this entire concept, but surely they only create more speech by adding context, much as Meta hopes will still happen with Community Notes. Since Carr will likely be Trump’s nominee to run the FCC, it is important for Zuckerberg to get his company in line.

Meta’s overhaul of its Hateful Conduct policies also shows the disparity between what Zuckerberg says and the company’s actions. Removing rules that are “out of touch with mainstream discourse” sounds fair. What it means in practice, though, is allowing people to be more racist about COVID-19, to demean women, and — of course — to discriminate against LGBTQ people in more vicious ways. I understand the argument for why these things should be allowed by law, but there is no obligation for Meta to carry this speech. If Meta’s goal is to encourage a “friendly and positive” environment, why increase its platforms’ permissiveness to assholes? Perhaps the answer is in the visibility of these posts — maybe Meta is confident it can demote harmful posts while still technically allowing them. I am not.

We can go through each of these policy changes, dissect them, and consider the actual reasons for each, but I truly believe that is a waste of time compared to looking at the sum of what they accomplish. Conservatives, particularly in the U.S., have complained for years about bias against their views by technology companies, an updated version of similar claims about mass media. Despite no evidence for this systemic bias, the myth stubbornly persists. Political strategists even have a cute name for it: “working the refs”. Jeff Cohen and Norman Solomon, Creators Syndicate, August 1992:

But in a moment of candor, [Republican Party Chair Rich] Bond provided insight into the Republicans’ media-bashing: “There is some strategy to it,” he told the Washington Post. “I’m the coach of kids’ basketball and Little League teams. If you watch any great coach, what they try to do is ‘work the refs.’ Maybe the ref will cut you a little slack next time.”

Zuckerberg and Meta have been worked — heavily so. The playbook of changes Meta outlined this week is a logical attempt to court scorned users, and not just through the policy changes described here. On Monday, Meta announced Dana White, UFC president and thrice-endorser of Trump, would be joining its board. Last week, it promoted Joel Kaplan, a former Republican political operative, to run its global policy team. Last year, Meta hired Dustin Carmack, who, according to his LinkedIn, directs the company’s policy and outreach for nineteen U.S. states, and who previously worked for the Heritage Foundation, the Office of the Director of National Intelligence, and Ron DeSantis. These are among the people forming the kinds of policies Meta is now prescribing.

This is not a problem solved through logic. If it were, studies showing a lack of political bias in technology company policy would change more minds. My bet is that these changes will not have what I assume is the desired effect of improving the company’s standing with far-right conservatives or the incoming administration. If Meta becomes more permissive for bigots, it will encourage more of that behaviour. If Meta does not sufficiently suggest those kinds of posts because it wants “friendly and positive” platforms, the bigots will cry “shadowban”. Meta’s products will corrode. That does not mean they will no longer be influential or widely used, however; as with its forthcoming A.I. profiles, Meta is surely banking that its dominant position and a kneecapped TikTok will continue driving users and advertisers to its products, however frustratedly.

Zuckerberg appears to think little of those who reject the new policies:

[…] Some people may leave our platforms for virtue signaling, but I think the vast majority and many new users will find that these changes make the products better.

I am allergic to the phrase “virtue signalling” but I am willing to try getting through this anyway. This has been widely interpreted as “because of their virtue signalling”, but I think it is just as accurate to read it as “because of our virtue signalling”. Zuckerberg has complained about media and government “pressure” to more carefully moderate Meta’s platforms. But he cannot ignore how this week’s announcement also seems tied to implicit pressure. Trump is not yet the president, true, but Zuckerberg met with him shortly after the election and, apparently, the day before these changes were announced. This is just as much “virtue signalling” — particularly moving some operations to Texas for reasons even Zuckerberg says are about optics.

Perhaps you think I am overreading this, but Zuckerberg explicitly said in his video introducing the changes that “recent elections also feel like a cultural tipping point towards once again prioritizing speech”. If he means elections other than those which occurred in the U.S. in November, I am not sure which. These are changes made from a uniquely U.S. perspective. To wit, the final commitment in the list above as explained by Zuckerberg (via the Tech Policy Press transcript):

Finally, we’re going to work with President Trump to push back on governments around the world. They’re going after American companies and pushing to censor more. The US has the strongest constitutional protections for free expression in the world. Europe has an ever-increasing number of laws, institutionalizing censorship, and making it difficult to build anything innovative there. Latin American countries have secret courts that can order companies to quietly take things down. China has censored our apps from even working in the country. The only way that we can push back on this global trend is with the support of the US government, and that’s why it’s been so difficult over the past four years when even the US government has pushed for censorship.

For its part, the E.U. rejected Zuckerberg’s characterization of its policies, and Brazilian officials are not thrilled, either.

These changes — and particularly this last one — are illustrative of the devil’s bargain of large U.S.-based social media companies: they export their policies and values worldwide, following whatever whims and trends are politically convenient at the time. Right now, it is important for Meta to avoid getting on the incoming Trump administration’s shit list, so it, like everyone else, is grovelling. If the rest of the world is subjected to U.S.-style discussions, so be it; we have been for a long time already. What is extraordinary about Meta’s changes is how many people will be impacted: billions, plural. Something like one-quarter of the world’s population.

The U.S. is no stranger to throwing around its political and corporate power in a way few other nations can. Meta’s changes are another entry into that canon. There are people in some countries who will benefit from having more U.S.-centric policies, but most everyone elsewhere will find them discordant with more local expectations. These new policies are not satisfying for people everywhere around the world, but the old ones were not, either.

It is unfair to expect any platform operator to get things right for every audience, especially at Meta’s scale. The options created by less centralized protocols like ActivityPub and AT Protocol are much more welcome. We should be able to have more control over our experience than we are currently trusted with.

Zuckerberg begins his video introduction by referencing a 2019 speech he gave at Georgetown University. In it, he speaks of the internet creating “significantly broader power to call out things we feel are unjust”. “[G]iving people a voice and broader inclusion go hand in hand,” he said, “and the trend has been towards greater voice over time”. Zuckerberg naturally centred his company’s products. But you know what is even more powerful than one company at massive scale? It is when no company needs to act as the world’s communications hub. The internet is the infrastructure for that, and we would be better off if we rejected attempts to build moats.

Zoe Kleinman, Liv McMahon, and Natalie Sherman, BBC News:

“Apple Intelligence features are in beta and we are continuously making improvements with the help of user feedback,” the company said in a statement on Monday, adding that receiving the summaries is optional.

“A software update in the coming weeks will further clarify when the text being displayed is summarization provided by Apple Intelligence. We encourage users to report a concern if they view an unexpected notification summary.”

I object to the “beta” excuse. Would Apple not be “continuously making improvements with the help of user feedback” if it were not a “beta” product? Of course it would make changes.

Jason Snell, Six Colors:

We shouldn’t be. Apple’s shipping a feature that frequently rewrites headlines to be wrong. That’s a failure, and it shouldn’t be shrugged off as being the nature of OS features in the 2020s.

Steve Troughton-Smith:

The Apple Intelligence vs BBC story is a microcosm of the developer story for the feature. We’re soon expected to vend up all the actions and intents in our apps to Siri, with no knowledge of the context (or accuracy) in which it will be presented to the user. Apple gets to launder the features and content of your apps and wrap it up in their UI as ‘Siri’ — that’s the developer proposition Apple has presented us. They get to market it as Apple Intelligence, you get the blame if it goes awry.

Guy English:

I agree with Jason. I’ll maybe go further—If Apple Intelligence summarizes your notifications then Apple *should* badge it with *their* Apple logo. Not some weird cog or brain or some other such icon. Put your name on it! […]

I agree. Apple should not be putting its name or logo on something it does not stand behind, and it should stand behind everything it ships. It supposedly cannot “ship junk”, but it is obviously not yet proud of the way these notifications were summarized — it is making changes, after all. But will it be courageous enough to attach its valuable brand to the output of its own large language model? I would bet against it, but it should.

Jon Milton, Canadian Centre for Policy Alternatives:

He also announced that Parliament — which has been consumed for months in Conservative-led procedural squabbling — would be prorogued until March 24. Prorogation is a type of temporary suspension of parliamentary activities. It is distinct from dissolution, which would trigger an election.

Prorogation is more like hitting the reset button on all legislation. All bills that haven’t yet been passed are now dead and would have to start from scratch. Importantly this includes both the spring budget and the fall economic statement, along with all other outstanding house business.

Among the bills killed is a package of privacy legislation contemplated since 2022. The Conservatives voted unanimously against these bills, so if they win the next federal election — and they are heavily favoured to do so — expect this whole process to begin again from scratch.

Tara Deschamps, Canadian Press:

The Online News Act aims to level the playing field by extracting compensation from search engine and social media companies with a total annual global revenue of $1 billion or more and 20 million or more Canadian average monthly unique visitors or average monthly active users. Google, along with Facebook and Instagram-owner Meta, are the only tech firms that currently meet these criteria.

Google secured a five-year exemption from the act by agreeing to pay $100 million a year to media organizations. Meta has avoided having to make any payments by blocking access to Canadian news on its platforms.

The way Google is “exempt” is a little odd. Instead of negotiating with individual publishers, Google is submitting a lump sum to be divided among publishers by the Canadian Journalism Collective, the organization designated to distribute the funds under the Online News Act.

This is a significant discount from the $172 million Google was expected to pay annually. You can tell it had the upper hand in these negotiations, at least compared with Meta. Canadian publications do not want to lose whatever is left of Google’s precious referrals before that dries up and is replaced with A.I. zero-click summaries.

Antonio G. Di Benedetto, the Verge:

The tech industry’s relentless march toward labeling everything “plus,” “pro,” and “max” soldiers on, with Dell now taking the naming scheme to baffling new levels of confusion. The PC maker announced at CES 2025 that it’s cutting names like XPS, Inspiron, Latitude, Precision, and OptiPlex from its new laptops, desktops, and monitors and replacing them with three main product lines: Dell (yes, just Dell), Dell Pro, and Dell Pro Max.

If you think that sounds a bit Apple-y and bland, you’re right. But Dell is taking it further by also adding a bit of auto industry parlance with three sub-tiers: Base, Plus, and Premium.

Di Benedetto knocks Dell for “stripping itself of some of its identity” but I disagree: this is exactly what I expect to see from Dell’s naming conventions. I attempted to configure a model of its new Dell Pro Premium laptop. Upon selecting a brighter and nicer display, I received an error message reading “Composite Rule Error: Invalid selection in Processor Branding”. Upon closing the error and returning to the configurator, I was told:

The Chassis Option requires the matching Memory size. The 16gb Memory is only available with the Ultra 5 236V/226V and Ultra 7 266V. The 32gb Memory is only available with the Ultra 5 238V and Ultra 7 268V.

This is almost nostalgic for me. Before I owned a Mac, I recall trying to shop Dell’s website and encountering gibberish like this all the time. That is the Dell charm I so vividly remember, no matter what combination of “premium”, “pro”, “max”, and “plus” they use.

Geoffrey A. Fowler, Washington Post:

When I try to cross my street at a marked crosswalk, the Waymo robotaxis often wouldn’t yield to me. I would step out into the white-striped pavement, look at the Waymo, wait to see whether it’s going to stop — and the car would zip right past.

It cut me off again and again on the path I use to get to work and take my kids to the park. It happened even when I was stuck in a small median halfway across the road. So I began using my phone to film myself crossing. I documented more than a dozen Waymo cars failing to yield in the span of a week. (You can watch some of my recordings below.)

The crosswalk in the video looks terrifying. On a road with a speed limit of 35 miles per hour (56 kilometres per hour), it seems many human drivers happily barrelled through that crosswalk, too. But, as Fowler writes, a key argument for automated cars is supposed to be safety. That cannot apply only to people in big metal boxes that are easy for a Waymo to spot. It must also — especially — be true for pedestrians.

The ads for Apple Intelligence have mostly been noted for what they show, but there is also something missing: in the fine print and in its operating systems, Apple still calls it a “beta” release, but not in its ads. Given the exuberance with which Apple is marketing these features, that label seems less like a way to inform users the software is unpolished, and more like an excuse for why it does not work as well as one might expect of a headlining feature from the world’s most valuable company.

“Beta” is a funny word when it comes to Apple’s software. Apple often makes preview builds of upcoming O.S. releases available to users and developers for feedback, compatibility testing, and building with new APIs. This is voluntary and done with the understanding that the software is unfinished, and bugs — even serious ones — can be expected.

Apple has also, rarely, applied the “beta” label to features in regular releases distributed to all users, not just those who signed up. This type of “beta” seems less honest. Instead of communicating “this feature is a work in progress”, it seems to say “we are releasing this before it is done”. Maybe that is a subtle distinction, but it is there. One type of beta is testing; the other asks users to disregard their expectations of polish, quality, and functionality so that a feature can be pushed out earlier than it should be.

We have seen this on rare occasions: once with Portrait mode; more notably, with Siri. Mat Honan, writing for Gizmodo in December 2011:

Check out any of Apple’s ads for the iPhone 4S. They’re promoting Siri so hard you’d be forgiven for thinking Siri is the new CEO of Apple. And it’s not just that first wave of TV ads, a recent email Apple sent out urges you to “Give the phone that everyone’s talking about. And talking to.” It promises “Siri: The intelligent assistant you can ask to make calls, send texts, set reminders, and more.”

What those Apple ads fail to report — at all — is that Siri is very much a half-baked product. Siri is officially in beta. Go to Siri’s homepage on Apple.com, and you’ll even notice a little beta tag by the name.

This is familiar.

The ads for Siri gave the impression of great capability. It seemed like you could ask it how to tie a bowtie, what events were occurring in a town or city, and more. The response was not shown for these queries, but the implication was that Siri could respond. What became obvious to anyone who actually used Siri is that it would show web search results instead. But, hey, it was a “beta” — for two years.

The ads for Apple Intelligence do one better and show features still unreleased. The fine print does mention “some features and languages will be coming over the next year”, without acknowledging the very feature in this ad is one of them. And, when it does actually come out, it is still officially in “beta”, so I guess you should not expect it to work properly.

This all seems like a convoluted way to evade full responsibility for the Apple Intelligence experience which, so far, has been middling for me. Genmoji is kind of fun, but Notification Summaries are routinely wrong. Priority messages in Mail is helpful when it correctly surfaces an important email, and annoying when it highlights spam. My favourite feature — in theory — is the Reduce Interruptions Focus mode, which is supposed to only show notifications when they are urgent or important. It is the kind of thing I have been begging for to deal with the overburdened notifications system. But, while it works pretty well sometimes, it is not dependable enough to rely on. It will sometimes prioritize scam messages written with a sense of urgency, but fail to notify me when my wife messages me a question. It still necessitates that I occasionally review the notifications suppressed by this Focus mode. It is helpful, but not consistently enough to be confidence-inspiring.

Will users frustrated by the questionable reliability of Apple Intelligence routinely return to try again? If my own experience with Siri is any guide — and I am not sure it is, but it is all I have — I doubt it. If these features did not work on the first dozen attempts, why would they work any time after? This strategy, I think, teaches people to set their expectations low.

This beta-tinged rollout is not entirely without its merits. Apple is passively soliciting feedback within many of its Apple Intelligence features, at a scale far greater than it could by restricting testing to only its own staff and contractors. But it also means the public becomes unwitting testers. As with Siri before, Apple heavily markets this set of features as the defining characteristic of this generation of iPhones, yet we are all supposed to approach this as though we are helping Apple make sure its products are ready? Sorry, it does not work like that. Either something is shipping or it is not, and if it does not work properly, users will quickly learn not to trust it.

Jason Snell, in the March 2000 issue of Macworld:

Suddenly, the future is now. Shortly after the calendar clicked over to 2000, Apple unveiled Mac OS X’s brand-new interface—named Aqua—giving the world its first glimpse of how we’ll all interact with our Macs for years to come. […]

[…]

Perhaps the most radical addition to the Mac OS interface in Mac OS X is the Dock, a strip that lives at the bottom of your screen and displays the contents of open windows (you can even opt to have it appear only when you move the cursor to the bottom of the screen, like the Windows task bar).

James Thomson:

The version he [Steve Jobs] showed was quite different to what actually ended up shipping, with square boxes around the icons, and an actual “Dock” folder in your user’s home folder that contained aliases to the items stored.

I should know – I had spent the previous 18 months or so as the main engineer working away on it. At that very moment, I was watching from a cubicle in Apple Cork, in Ireland. For the second time in my short Apple career, I said a quiet prayer to the gods of demos, hoping that things didn’t break. For context, I was in my twenties at this point and scared witless.

I was not using a Mac until after Mac OS X 10.2 was released, so I am by no means a good barometer for the Mac-iness of early releases. One thing I remember clearly, though, is being smitten with it from my earliest use; I was among many who downloaded Aqua Dock to get a taste of the experience on my Windows computer.

I still cannot believe it took until perhaps five years ago for me to become a Dock-on-the-side person, however.

Ann Telnaes:

I’ve worked for the Washington Post since 2008 as an editorial cartoonist. I have had editorial feedback and productive conversations — and some differences — about cartoons I have submitted for publication, but in all that time I’ve never had a cartoon killed because of who or what I chose to aim my pen at. Until now.

We can keep an open mind and accept that the editor rejected this cartoon for any number of reasons, while also considering the most obvious one: the editor acknowledges the owner of the Post is aligning himself with the incoming administration. Perhaps a more generous reading is that Jeff Bezos is directing the Post to be less adversarial than it was from 2016–2020. Either way, the effect is the same.

In the United States, donations to the extravagant presidential inauguration ceremony by U.S. citizens and corporations are unlimited. As a result, it is the perfect vehicle with which to get comfortable with the incoming administration. It is not a bribe, though. Money or goods given to holders of public office with the implication of favours is almost never bribery. If you call it a bribe, everyone involved seems to get mad. So do not call it a bribe.

Kathryn Watson and Libby Cathey, CBS News:

Amazon, run by billionaire Jeff Bezos, intends to donate $1 million to the president-elect’s inaugural fund and will stream the ceremony on Prime, amounting to another $1 million in-kind donation, according to a source familiar with the donations. The Wall Street Journal first reported Amazon’s plans.

Mark Zuckerberg’s Meta, the parent company of Facebook and Instagram, also plans to send $1 million to Trump’s inaugural fund.

OpenAI CEO Sam Altman plans to make a $1 million personal donation to Trump’s inaugural fund, according to an OpenAI spokesperson. Fox News Digital first reported Altman’s intended donation.

That makes three-for-three on billionaires who see nothing but good news in getting cozy with Trump administration figures.

Edward Helmore, the Guardian:

US business leaders are spending big on Donald Trump’s second inaugural fund, which is predicted to exceed even the record-setting $107m raised in 2017.

[…]

“EVERYBODY WANTS TO BE MY FRIEND!!!” Trump wrote in a post on Truth Social on Thursday.

I had blessedly forgotten what this seventy-eight-year-old sounds like.

Mike Allen, Axios:

Apple CEO Tim Cook will personally donate $1 million to President-elect Trump’s inaugural committee, sources with knowledge of the donation tell Axios.

[…]

Cook, a proud Alabama native, believes the inauguration is a great American tradition, and is donating to the inauguration in the spirit of unity, the sources said. The company is not expected to give.

The sources’ names? Cim Took and Ptim Kooc.

Call this what you want: bipartisanship, diplomacy, pragmatism, outright support, or “the spirit of unity”. But one thing you cannot call it is principled. We have become accustomed to business leaders sacrificing some of their personal principles to support their company in some way — for some reason, “it is just business” is a universal excuse for terrible behaviour — but all of these figures have already seen what the incoming administration does with power and they want to support it. For anyone who claims to support laws or customs, this is not principled behaviour.

Or, I guess, bribery.

Jason Koebler, 404 Media:

After the Cybertruck explosion outside of the Trump International Hotel in Vegas on Wednesday, Elon Musk remotely unlocked the Cybertruck for law enforcement and provided video from charging stations that the truck had visited to track the vehicle’s location, according to information released by law enforcement.

This comes just days after a Volkswagen subsidiary left vehicle tracking data exposed on an Amazon server.

While Clark County Police gave explicit credit to Musk, it is unclear what role he played. Even so, this demonstrates the power Tesla has over vehicles in owners’ hands. It can remotely interact with them and, because Tesla also provides the charging infrastructure, it can track vehicle use to a greater extent than its competitors.

“Software is eating the world” continues to sound as much like a threat as it does an inspiration.

Samantha Cole, 404 Media:

That law, passed as Act 440, was introduced by “sex addiction” counselor and state representative Laurie Schlegel and quickly copied across the country. The exact phrasing varies, but in most states, the details of the law are the same: Any “commercial entity” that publishes “material harmful to minors” online can be held liable—meaning, tens of thousands of dollars in fines and/or private lawsuits—if it doesn’t “perform reasonable age verification methods to verify the age of individuals attempting to access the material.”

These and other worries about minors’ access to technology increasingly convince me that a device-level age verification standard is around the corner. A requirement for Apple and Google to age-restrict their app stores was proposed by U.S. legislators in November, but this would not affect users’ access to the web. I bet something changes on this front — and soon.

Jonathan Stempel, Reuters:

Apple agreed to pay $95 million in cash to settle a proposed class action lawsuit claiming that its voice-activated Siri assistant violated users’ privacy.

A preliminary settlement was filed on Tuesday night in the Oakland, California federal court, and requires approval by U.S. District Judge Jeffrey White.

Alex Hern, who wrote the 2019 Guardian story forming the basis of many complaints in the lawsuit, today on Bluesky:

There’s two claims in one case and one of them Apple is bang to rights on (“Siri records accidental interactions”) and the other is worth far far more than $95m to disprove (“those recordings are shared with advertisers”)

The original complaint (PDF), filed just a couple of weeks after Hern’s story broke, does not once mention advertising. A revised complaint (PDF), filed a few months later, mentions it once and only in passing (emphasis mine):

Apple’s actions were at all relevant times knowing, willful, and intentional as evidenced by Apple’s admission that a significant portion of the recordings it shares with its contractors are made without use of a hot word and its use of the information to, among other things, improve the functionality of Siri for Apple’s own financial benefit, to target personalized advertising to users, and to generate significant profits. Apple’s actions were done in reckless disregard for Plaintiffs’ and Class Members’ privacy rights.

This is the sole mention in the entire complaint, and there is no citation or evidence for it. However, a further revision (PDF), filed in 2021, contains plenty of anecdotes:

Several times, obscure topics of Plaintiff Lopez’s and Plaintiff A.L.’s private conversations were used by Apple and its partners to target advertisements to them. For example, during different private conversations, Plaintiff Lopez and Plaintiff A.L. mentioned brand names including “Olive Garden,” “Easton bats,” “Pit Viper sunglasses,” and “Air Jordans.” These advertisements were targeted to Plaintiffs Lopez and A.L. Subsequent to these private conversations, Plaintiff Lopez and Plaintiff A.L., these products began to populate Apple search results and Plaintiffs also received targeted advertisements for these products in Apple’s Safari browser and in third party applications. Plaintiffs Lopez and A.L. had not previously searched for these items prior to the targeted advertisements. Because the intercepted conversations took place in private to the exclusion of others, only through Apple’s surreptitious recording could these specific advertisements be pinpointed to Plaintiffs Lopez and A.L.

I am filing this in the “needs supporting evidence” column alongside other claims of microphones being used to target advertising. I sympathize with the plaintiffs in this case, but nothing about their anecdotes — more detail on pages 8 and 10 of the complaint — is compelling, as alternative explanations are possible.

For example, one plaintiff discussed a particular type of surgery with his doctor, and then saw ads on his iPhone related to the condition it treats. While it seems possible that Siri was erroneously activated, that Apple received a copy of the recording, and that it then automatically transcribed and sold its contents to data brokers, this chain of events is massively speculative compared to what we know ad tech companies do. Perhaps the doctor’s office was part of a geofenced ad campaign. Or perhaps the doctor was searching related keywords and then, because the plaintiff’s phone was in proximity to the doctor’s devices, some cross-device targeting became confused. Neither of these explanations involves microphones, let alone Siri.
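To illustrate how mundane those alternative explanations are, here is a toy sketch of the two mechanisms: a geofenced campaign around the clinic, and cross-device targeting that links devices seen on the same network. Every name, coordinate, and rule in it is hypothetical; it is only meant to show that no microphone is required to produce the plaintiffs’ experience.

```python
# A toy model of the two non-microphone explanations above: a geofenced
# ad campaign around the clinic, and cross-device targeting that links
# devices seen on the same network. Every coordinate, identifier, and
# rule is hypothetical.
from dataclasses import dataclass, field

CLINIC_LAT, CLINIC_LON = 51.0447, -114.0719  # made-up clinic location

@dataclass
class Device:
    device_id: str
    lat: float
    lon: float
    network: str                      # e.g. the clinic's wifi
    recent_searches: list = field(default_factory=list)

def eligible_for_surgery_ads(device, all_devices):
    # 1. Geofencing: the device was observed inside the campaign radius
    #    around the clinic, so it is targeted. No audio involved.
    inside_geofence = (
        abs(device.lat - CLINIC_LAT) < 0.001 and abs(device.lon - CLINIC_LON) < 0.001
    )

    # 2. Cross-device targeting: another device on the same network (say,
    #    the doctor's) searched a relevant keyword, and the ad platform
    #    treats co-located devices as related.
    related_search = any(
        "surgery" in search
        for other in all_devices
        if other.network == device.network and other.device_id != device.device_id
        for search in other.recent_searches
    )
    return inside_geofence or related_search

devices = [
    Device("patient-phone", 51.0447, -114.0719, "clinic-wifi"),
    Device("doctor-desktop", 51.0447, -114.0719, "clinic-wifi", ["acl surgery recovery"]),
]
print(eligible_for_surgery_ads(devices[0], devices))  # True, without any microphone
```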

Yet, because Apple settled this lawsuit, it looks like it is not interested in fighting these claims. It creates another piece of pseudo-evidence for people who believe microphone-equipped devices are transforming idle conversations into perfectly targeted ads.

None of these stories has so far been proven, and there is not a shred of direct evidence this is occurring — but I can understand why people are paranoid. While businesses have exploited private data to sell ads for decades, we have dramatically increased the number of devices we own and the time we spend with them, with few meaningful steps taken toward user privacy. We are feeding every part of this nauseating industry more data with, in many countries, about the same regulatory oversight.

I could be entirely wrong. Apple could have settled this case because it is, indeed, doing more-or-less what the plaintiffs say. To that possibility, I say: show me real evidence. I have no problem admitting I got something wrong.

Update: Apple has issued a statement in which it says it has “never used Siri data to build marketing profiles, never made it available for advertising, and never sold it to anyone for any purpose”.

Cristina Criddle and Hannah Murphy, Financial Times:

Meta is betting that characters generated by artificial intelligence will fill its social media platforms in the next few years as it looks to the fast-developing technology to drive engagement with its 3bn users.

[…]

“They’ll have bios and profile pictures and be able to generate and share content powered by AI on the platform … that’s where we see all of this going,” he [Meta’s Connor Hayes] added.

Imagine opening any of Meta’s products after this has taken over. Imagine how little you will see from the friends and family members you actually care about. Imagine how much slop you will be greeted with — a feed alternating between slop, suggested posts, and ads, with just enough of what you actually opened the app to see. Now consider how this will affect people who are more committed to Meta’s products, whether for economic reasons or social cohesion.

A big problem for Meta is that it is institutionally very dumb. I do not want to oversell this too much, but I truly believe this is the case. There are lots of smart people working there and its leadership clearly understands something about how people use social media.

But there is a vast sense of dumb in its attempts to deliver the next generation of its products. Its social media products are dependent on “engagement”, which is sometimes a product of users’ actual interest and, at other times, an artifact of Meta’s success or failure in controlling what they see. Maybe its “metaverse” will be interesting one day, but it seems deeply embarrassing so far.

Patrick Beuth et al., in a German-language report in Der Spiegel, as translated by Apple’s built-in translator:

Because many of the vehicle data could be linked to the names and contact details of the drivers, owners or fleet managers. Precise location data could be viewed on 460,000 vehicles, which allowed conclusions to be drawn about the lives of the people behind the steering wheels – just like the two politicians.

[…]

It is a more than embarrassing breakdown for the already struggling group. It’s a shame. Especially in the software, where VW lags behind the competition anyway. Of all things, the security of private data, which the Germans like to cite as a location advantage over the much more lax USA.

Linus Neumann, of the Chaos Computer Club, also German-language, also translated by Safari:

The information collected by VW subsidiary Cariad contains precise information on the location and time of the ignition. The movement data is linked to other personal data. In this way, they also allow conclusions to be drawn about suppliers, service providers, employees or camouflage organizations of the security authorities.

Anthony Alaniz, Motor1:

The hacker group, the Chaos Computer Club, informed Cariad about the vulnerability, which quickly patched the issue. Cariad told Spiegel that the vulnerability was a “misconfiguration” and that the company doesn’t merge data that would allow someone to create a profile about a person. According to the company, the researchers had to combine different data sets by “bypassing several security mechanisms.” It also said it’s unaware of anyone accessing the data other than CCC.

Cariad has a lot of gall to issue a statement redirecting blame to someone defeating “security mechanisms” instead of acknowledging that all this stored data could be re-identified in the first place.

Matthew Green on Bluesky:

I love that Apple is trying to do privacy-related services, but this [“Enhanced Visual Search” setting] just appeared at the bottom of my Settings screen over the holiday break when I wasn’t paying attention. It sends data about my private photos to Apple.

The first mention of this preference I can find is a Reddit thread from August.

Apple says it is an entirely private process:

Enhanced Visual Search in Photos allows you to search for photos using landmarks or points of interest. Your device privately matches places in your photos to a global index Apple maintains on our servers. We apply homomorphic encryption and differential privacy, and use an OHTTP relay that hides IP address. This prevents Apple from learning about the information in your photos. […]

The company goes into more technical detail in a Machine Learning blog post. What I am confused about is what this feature actually does. It sounds like it compares landmarks identified locally against a database too vast to store locally, thus enabling more accurate lookups. It also sounds like matching is done with entirely visual data, and it does not rely on photo metadata. But because Apple did not announce this feature and poorly documents it, we simply do not know. One document says “trust us to analyze your photos remotely”; another says “here are all the technical reasons you can trust us”. Nowhere does Apple plainly say what is going on.
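Piecing together those two documents, the flow appears to be roughly the following. This is my own sketch of the general idea; every function name is a placeholder I invented, and the stubs stand in for machinery Apple describes only at a high level.

```python
# A loose sketch of what the Enhanced Visual Search flow appears to be,
# pieced together from Apple's support note and Machine Learning post.
# Every function here is a placeholder I made up to show the shape of
# the idea; this is not Apple's code or API.

def detect_landmark_embedding(photo):
    # Placeholder: an on-device model decides whether the photo likely
    # contains a landmark and, if so, produces a numeric embedding of it.
    return [0.12, 0.87, 0.33] if photo.get("has_landmark") else None

def homomorphically_encrypt(embedding):
    # Placeholder: the query is encrypted so the server can compute a
    # match against its global index without being able to read it.
    return {"ciphertext": embedding}

def query_via_ohttp_relay(ciphertext):
    # Placeholder: the request goes through an OHTTP relay, hiding the
    # device's IP address; the server returns an encrypted match.
    return {"encrypted_match": "Eiffel Tower"}

def decrypt_locally(encrypted_result):
    # Placeholder: only the device can decrypt the result, which becomes
    # a searchable place label in the local Photos index.
    return encrypted_result["encrypted_match"]

def enhanced_visual_search(photo):
    embedding = detect_landmark_embedding(photo)
    if embedding is None:
        return None  # nothing is sent anywhere
    result = query_via_ohttp_relay(homomorphically_encrypt(embedding))
    return decrypt_locally(result)

print(enhanced_visual_search({"has_landmark": True}))  # -> Eiffel Tower
```

If that reading is accurate, the important property is the one Apple claims: neither the photo nor a readable version of the query ever reaches Apple in a form it can inspect.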

Jeff Johnson:

Of course, this user never requested that my on-device experiences be “enriched” by phoning home to Cupertino. This choice was made by Apple, silently, without my consent.

From my own perspective, computing privacy is simple: if something happens entirely on my computer, then it’s private, whereas if my computer sends data to the manufacturer of the computer, then it’s not private, or at least not entirely private. Thus, the only way to guarantee computing privacy is to not send data off the device.

I see this feature implemented with responsibility and privacy in nearly every way, but, because it is poorly explained and enabled by default, it is difficult to trust. Photo libraries are inherently sensitive. It is completely fair for users to be suspicious of this feature.

A press release from U.K. consumer advocacy group Which?:

The consumer champion rated products across four categories and gave them overall privacy scores for factors including consent and what data access they want. Researchers found data collection often went well beyond what was necessary for the functionality of the product – suggesting data could, in some cases, be being shared with third parties for marketing purposes. Which? is calling for firms to prioritise privacy over profits.

This includes products as pedestrian as air fryers, which apparently wanted the precise location of users and permission to record audio. There could be a valid reason for these permissions — for example, perhaps the app allows you to automate the air fryer to preheat when you return home; or, perhaps there is voice control functionality which, for understandable reasons, is not delineated in a permissions request for “recording” one’s voice.

I downloaded the Xiaomi app to look into these possibilities, but I was unable to proceed unless I created an account and connected a relevant product. I also looked at manuals for different smart air fryers from these brands, but that did not clear anything up because — wisely — these manufacturers do not include app-related information in their printed documentation.

Even if these permissions requests are perfectly innocent and were correctly documented — according to Which?, they are not — it is ridiculous that buyers need to consider all this just to use some appliance.

Matthew Gault, Gizmodo:

But it shouldn’t be this way. Every piece of tech shouldn’t be a devil’s bargain where we allow a tech company to read through our phone’s contact list so we can remotely shut off an oven. More people are pissed about this issue and complaining to their government. Watchdog groups in the U.K. and the U.S. are paying attention.

We can do something about this. We can have comprehensive privacy laws with the backing of well-funded regulators. But until that happens, everything “smart” is capable of lucrative contributions to the broader data broker and surveillance advertising markets, just because people want to use the product’s features.