Month: December 2021

Natasha Lomas, TechCrunch:

France’s privacy watchdog said today that Clearview has breached Europe’s General Data Protection Regulation (GDPR).

In an announcement of the breach finding, the CNIL also gives Clearview formal notice to stop its “unlawful processing” and says it must delete user data within two months.

Good; keep these orders coming. Like previous deletion demands, there are likely problems with ascertaining who in Clearview’s database is covered, but at least there is collective action by countries that have laws concerning individuals’ privacy. It is a stance that declares Clearview’s entire operation an unacceptable violation. I see nothing wrong with putting Clearview out of business and discouraging others from replicating it.

Luming Yin:

macOS 12.2 beta is now available, featuring smoother scrolling in Safari on the latest MacBook Pro with ProMotion, and a native Apple Music and TV experience backed by AppKit views instead of web views.

Filipe Espósito, 9to5Mac:

Some parts of the Music app were already native, such as the music library. But now Mac users will notice that searching for new songs in Apple Music is much faster as the results pages are displayed with a native interface instead of as a webpage. Scrolling between elements has also become smoother with the beta app, and trackpad gestures are now more responsive.

[…]

Yin mentioned that the Apple TV app has also been rebuilt with a native backend. While this is indeed true, 9to5Mac found out that Apple had already updated the TV app with JET technology in macOS Monterey 12.1, which is available for everyone. Of course, more refinements are expected for both apps in the upcoming macOS 12.2 betas.

Michael Tsai:

Note that Music was always an AppKit app (not Catalyst). The difference in 12.2 seems to be that more content within the window now uses native controls. Personally, I didn’t notice a change, perhaps because I don’t use the Apple Music areas of the app.

These changes seem exclusive to the Apple Music parts, which — like the iTunes Store — have long been webpages rendered in the frame of a native Mac app. They have always felt slow and disconnected from the main app. In MacOS 12.2, these web-based sections are now translated into native Mac views, and Music feels noticeably faster because of it.1 Scrolling is smoother, and the spacebar now pauses and resumes playback correctly. These improvements, together with the significantly reduced CPU consumption in MacOS 12.1, make me believe that someone at Apple really does care about the Music app on MacOS. There is hope.

Then again, the preferences window in Music is still modal. Some things will never change.
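
For anyone unfamiliar with the pattern being replaced, here is a rough Swift sketch of what hosting store content in a web view inside an AppKit app can look like. This is my illustration, not Apple’s actual implementation; the class name and URL are placeholders.

    import Cocoa
    import WebKit

    // Illustrative only: remote store content hosted in a WKWebView
    // inside an otherwise native AppKit window.
    final class StorePageController: NSViewController {
        override func loadView() {
            let webView = WKWebView(frame: .zero, configuration: WKWebViewConfiguration())
            // Scrolling, hit testing, and key events (including the
            // spacebar) are handled by WebKit inside this view rather
            // than by the host app, which is one reason web-backed
            // sections can feel detached from the rest of the window.
            webView.load(URLRequest(url: URL(string: "https://music.apple.com/")!))
            view = webView
        }
    }

Swapping a view like this for native AppKit controls hands scrolling and keyboard handling back to the app itself, which lines up with the improvements described above.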


  1. I believe recent versions of the Mac App Store also use the Jet framework. ↥︎

Ian Beer and Samuel Groß, of Google’s Project Zero team:

Earlier this year, Citizen Lab managed to capture an NSO iMessage-based zero-click exploit being used to target a Saudi activist. In this two-part blog post series we will describe for the first time how an in-the-wild zero-click iMessage exploit works.

Based on our research and findings, we assess this to be one of the most technically sophisticated exploits we’ve ever seen, further demonstrating that the capabilities NSO provides rival those previously thought to be accessible to only a handful of nation states.

This is a breathtaking accomplishment by NSO Group. I thought I knew where this explanation was going, but then I got to the penultimate section and it left me amazed.

One thing I have long wondered is what avenue was chosen for delivering the first part of the payload. iMessage has allowed the delivery of many file types since it launched. Most — like video or some arbitrary file — require user interaction, so those are ruled out. That leaves webpage previews and images, and we know that webpage previews are generated on the send-side, not by the recipient. So:

Looking at the selector name, the intention here was probably to just copy the GIF file before editing the loop count field, but the semantics of this method are different. Under the hood it uses the CoreGraphics APIs to render the source image to a new GIF file at the destination path. And just because the source filename has to end in .gif, that doesn’t mean it’s really a GIF file.

Are image formats the only case in which iMessage interprets a file and then creates a new version of it on the recipient’s device? I am looking forward to the promised follow-up to this post.
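
To make the mechanism in that quote concrete, here is a minimal Swift sketch of the same kind of transcoding step using the public ImageIO framework. The function name is mine, and this is a sketch of the general technique rather than the code Apple shipped; the point is that the decoder is chosen by sniffing the file’s contents, so a .gif extension guarantees nothing about which parser runs.

    import Foundation
    import ImageIO
    import UniformTypeIdentifiers

    // Re-renders whatever image format the source file actually
    // contains, regardless of its .gif extension, to a new GIF.
    func transcodeToGIF(from source: URL, to destination: URL) -> Bool {
        guard let imageSource = CGImageSourceCreateWithURL(source as CFURL, nil),
              let image = CGImageSourceCreateImageAtIndex(imageSource, 0, nil),
              let gifDestination = CGImageDestinationCreateWithURL(
                  destination as CFURL, UTType.gif.identifier as CFString, 1, nil)
        else {
            return false
        }
        // The decode above is the dangerous step: ImageIO picks a codec
        // based on the bytes in the file, not its name, so a file named
        // "payload.gif" that is really some other supported format gets
        // handed to that format's parser.
        CGImageDestinationAddImage(gifDestination, image, nil)
        return CGImageDestinationFinalize(gifDestination)
    }

Every codec reachable through a path like this becomes zero-click attack surface, which is what makes receive-side re-rendering so consequential.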

Also, I recommend this article about how Xerox scanners screwed up documents and made files legally invalid — linked by Beer and Groß — just because it is really interesting.

Ian Sherr, CNet:

Apple has released a new Android app called Tracker Detect, designed to help people who don’t own iPhones or iPads to identify unexpected AirTags and other Find My network-equipped sensors that may be nearby.

The new app, which Apple released on the Google Play store Monday, is intended to help people look for item trackers compatible with Apple’s Find My network. “If you think someone is using AirTag or another device to track your location,” the app says, “you can scan to try to find it.”

I might be reading this wrong, but it seems like the selling points for Android users to download this app are:

  1. They would like to help owners of AirTags and other Find My items find their stuff.

  2. They think they may have a stalker.

Without undermining the seriousness of the second reason, it is not often that a company launches a companion app to detect someone else’s misuse of its products. That is the main reason someone would keep this app on their phone, right? And it is not only Apple thinking about this; Tile will ship a similar feature next year.

Perhaps we will all need to download apps for products we do not use so that our locations are not tracked by some unauthorized person. But this does not really apply to “all” of us equally: women have been — and will continue to be — targeted for stalking by beacons. I guess the market for high-tech key-finding devices is not going anywhere, so, at the very least, a universal anti-stalking measure should be part of the Bluetooth spec.
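
For a sense of what scanning for these beacons involves, here is a rough Swift sketch using CoreBluetooth. The payload constants are assumptions taken from public reverse-engineering of the Find My broadcast format, not from any documented Apple API; note, too, that Apple’s own platforms reportedly filter Apple manufacturer data out of third-party scan results, which is part of why a scanner like Tracker Detect makes the most sense on Android.

    import CoreBluetooth

    // A sketch of a tracker scanner. The 0x12 payload type for Find My
    // broadcasts is an assumption from third-party reverse-engineering,
    // not a documented format.
    final class TrackerScanner: NSObject, CBCentralManagerDelegate {
        private var central: CBCentralManager!

        override init() {
            super.init()
            central = CBCentralManager(delegate: self, queue: nil)
        }

        func centralManagerDidUpdateState(_ central: CBCentralManager) {
            guard central.state == .poweredOn else { return }
            // Trackers do not advertise a filterable service UUID, so
            // scan for everything and inspect manufacturer data.
            central.scanForPeripherals(withServices: nil, options:
                [CBCentralManagerScanOptionAllowDuplicatesKey: true])
        }

        func centralManager(_ central: CBCentralManager,
                            didDiscover peripheral: CBPeripheral,
                            advertisementData: [String: Any],
                            rssi RSSI: NSNumber) {
            guard let data = advertisementData[CBAdvertisementDataManufacturerDataKey] as? Data,
                  data.count >= 3,
                  data[0] == 0x4C, data[1] == 0x00, // Apple company identifier
                  data[2] == 0x12 // presumed Find My payload type
            else { return }
            print("Possible Find My beacon nearby, RSSI \(RSSI)")
        }
    }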

Every year, Bloomberg Businessweek’s writers and editors select stories from other publications they wish they had written. I love the concept, and this year’s “Jealousy List” is full of stuff I want to read.

In linking to the 2020 list, I said I would be jealous of the person or publication who could fully explain the now-infamous “Big Hack” feature: the dubious story of Chinese intelligence surreptitiously implanting chips on the boards of servers made by Supermicro. But that was last year; now there are two questionable stories involving multiple intrusion techniques, a decade of spycraft, dozens of companies, several U.S. government agencies — and the only journalists anywhere who can report even a hint of this are Jordan Robertson and Michael Riley of Bloomberg. I would love to know the story behind that.

Elizabeth Dwoskin, Will Oremus, Craig Timberg, and Nitasha Tiku, Washington Post:

Earlier this year, as Twitter raced to roll out Spaces, its new live audio chat feature, some employees asked how the company planned to make sure the service didn’t become a platform for hate speech, bullying and calls to violence.

In fact, there was no plan. In a presentation to colleagues shortly before its public launch in May, a top Twitter executive, Kayvon Beykpour, acknowledged that people were likely to break Twitter’s rules in the audio chats, according to an attendee who spoke on the condition of anonymity to describe internal matters. But he and other Twitter executives — convinced that Spaces would help revive the sluggish company — refused to slow down.

Fast forward six months and those problems have become reality. Taliban supporters, white nationalists, and anti-vaccine activists sowing coronavirus misinformation have hosted live audio broadcasts on Spaces that hundreds of people have tuned in to, according to researchers, users and screenshots viewed by The Washington Post. Other Spaces conversations have disparaged transgender people and Black Americans. These chats are neither policed nor moderated by Twitter, the company acknowledges, because it does not have human moderators or technology that can scan audio in real-time.

Abuse in and from live audio rooms is entirely predictable. It permits a massive audience for the worst people while being ephemeral. When Clubhouse — last year’s hot new thing — was just a few months old and still only available by invitation, Casey Newton, then at the Verge, explored the obvious problems with keeping users in check:

And for Clubhouse, moderation issues promise to be particularly difficult — and if the app is to ever escape closed beta successfully, will require sustained attention and likely some product innovation. Tatiana Estévez, who worked on moderation efforts at the question-and-answer site Quora, outlined Clubhouse’s challenges in a Twitter thread.

Audio is fast and fluid; will Clubhouse record it so that moderators can review bad interactions later? In an ephemeral medium, how will Clubhouse determine whether users have a bad pattern of behavior? And can Clubhouse do anything to bring balance to the age-old problem of men interrupting women?

“Is this impossible? Probably not,” Estévez wrote. “But in my experience, moderation and culture have to be a huge priority for both the founding team as well as for the community as a whole.”

Estévez in that Twitter thread:

Clubhouse has to deal with this problem both with policies (to kick off bad actors) and with culture. The culture needs to encourage listening, and valuing female voices. And to be honest, many early adopter tech men are bad listeners and don’t value hearing from women.

This was over a year ago and, perhaps unsurprisingly, Swathi Moorthy in Moneycontrol reported last week that Clubhouse still has problems with abuse.

I do not think we should expect apps like Clubhouse or Twitter Spaces to fix misogyny, but it is unethical to create spaces for it to intensify and target specific individuals. I am not arguing that it ought to be illegal to create a new platform without having a moderation solution in place, but I think it is painfully stupid to do so. I am struggling to understand what is gained by creating an audio version of 4chan where it is even more difficult to set boundaries and expectations.

Apparently the metaverse is just around the corner.

Patrick McGee, Financial Times:

Apple has allowed app developers to collect data from its 1bn iPhone users for targeted advertising, in an unacknowledged shift that lets companies follow a much looser interpretation of its controversial privacy policy.

In May Apple communicated its privacy changes to the wider public, launching an advert that featured a harassed man whose daily activities were closely monitored by an ever-growing group of strangers. When his iPhone prompted him to “Ask App Not to Track”, he clicked it and they vanished. Apple’s message to potential customers was clear — if you choose an iPhone, you are choosing privacy.

But seven months later, companies including Snap and Facebook have been allowed to keep sharing user-level signals from iPhones, as long as that data is anonymised and aggregated rather than tied to specific user profiles.

Is this actually a “shift” in the way this policy is interpreted? The way Apple has defined tracking in relation to the App Tracking Transparency feature has remained fairly consistent — compare the current page against a snapshot from January. Apps cannot access the device’s advertising identifier if the user opts out and, while Apple has warned developers against creating unique device identifiers, it does not promise it can prevent the tracking of users, especially not in aggregate.
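
For context, the technical mechanism behind that opt-out is narrow: App Tracking Transparency gates access to a single thing, the system advertising identifier. A minimal Swift sketch of the relevant calls:

    import AppTrackingTransparency
    import AdSupport

    func requestAdvertisingIdentifier() {
        ATTrackingManager.requestTrackingAuthorization { status in
            // If the user declines, this does not fail; the identifier
            // simply comes back as all zeroes.
            let idfa = ASIdentifierManager.shared().advertisingIdentifier
            print("Authorization status: \(status.rawValue), IDFA: \(idfa)")
        }
    }

Fingerprinting, probabilistic matching, and the aggregated signals described in the Financial Times report all operate outside this gate, which is why they are governed by Apple’s policy rather than by any technical control.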

It is concerning to me that Apple’s advertising and dialog box text may create the impression of a greater privacy effect than they realistically achieve. Perhaps Apple’s definition of “tracking” does not align with public expectations; or, perhaps, privacy should not be a product to sell.

Karl Bode, Techdirt:

We also get just an endless parade of semantics, like ISP claims they “don’t sell access to your data” (no, they just give massive “anonymized” datasets away for free as part of a nebulous, broader arrangement they do get paid for). We get tracking opt-out tools that don’t actually opt you out of tracking, or opt you back in any time changes are made. And we get endless proclamations about how everybody supports codifying federal privacy laws from companies that immediately turn around and spend millions of dollars lobbying to ensure even a basic privacy law never sees the light of day.

Privacy is not a luxury good, nor should it be up to individual companies to decide which infringements are acceptable.

Most of the manipulations highlighted by Marques Brownlee are just automatic versions of tasks photographers used to spend hours completing in Photoshop or Lightroom. It only generalizes a formerly specialized set of skills, but it seems to approach a fuzzy line around what kinds of manipulation are desirable.

I could swear I vaguely remember the iPhone camera feature Brownlee mentions, which would merge several images to produce a group photo in which nobody is blinking, but I think we may both be hallucinating that announcement. I cannot find evidence of it; the best I found was a blog post from last year begging for such a feature.

In his Sunday “Media Equation” column in the New York Times, Ben Smith said he obtained an internal document created for new TikTok employees:

The document, headed “TikTok Algo 101,” was produced by TikTok’s engineering team in Beijing. A company spokeswoman, Hilary McQuaide, confirmed its authenticity, and said it was written to explain to nontechnical employees how the algorithm works. The document offers a new level of detail about the dominant video app, providing a revealing glimpse both of the app’s mathematical core and insight into the company’s understanding of human nature — our tendencies toward boredom, our sensitivity to cultural cues — that help explain why it’s so hard to put down. The document also lifts the curtain on the company’s seamless connection to its Chinese parent company, ByteDance, at a time when the U.S. Department of Commerce is preparing a report on whether TikTok poses a security risk to the United States.

What is interesting to me is the lengths the Times went to in order to obscure this relatively mild piece of internal documentation. Unlike many other artifacts obtained by the Times, a copy was not linked within the article, and even the embedded diagrams were reproduced instead of the originals being shown.

Whether those were precautions born of a secrecy promise, or perhaps because the original documents had legibility problems, I feel like Smith buried the lede. After wading through an overwrought exploration of the file’s contents, Smith reports on the many lingering connections the ostensibly independent TikTok has with its predecessor app, Douyin:

TikTok’s development process, the document says, is closely intertwined with the process of Douyin’s. The document at one point refers TikTok employees to the “Launch Process for Douyin Recommendation Strategy,” and links to an internal company document that it says is the “same document for TikTok and Douyin.”

It turns out the Douyin version of that shared internal document has been circulating publicly for months.

Protocol’s Zeyi Yang, writing in the Source Code newsletter:

In fact, another closely related app uses the same secret sauce. In January, a document titled “Guide to a Certain Video APP’s Recommendation (Algorithm)” was released on several Chinese platforms. While it intentionally obscured which app it’s referencing, there are plenty of hints that it’s about Douyin, TikTok’s Chinese version.

For one, the Chinese document describes how it makes recommendations in the exact same formula and language (yes, word for word) as the document leaked to the Times. They also used the same challenge to the algorithm as a case study.

And in a Q&A entry about competitors, the document mentioned three other major Chinese apps — Toutiao, Kuaishou and Weibo — that rely on recommendation algorithms, but not Douyin, the app that does it the best.

The link above is now dead, but you can find plenty of copies on Chinese social networks — one that was uploaded to CSDN, for instance. It is in Chinese, but it appears to be exactly the same file.

Twitter has been making a lot of interesting moves lately, and its new reporting workflow is one of them.

Take a look at the screenshots in the post. It is more or less a change of language compared to the existing workflow, but it makes a world of difference to my eyes. Though there are more words on each page, it seems much clearer to me how to categorize a report of a tweet that could be grounds for removal, especially since you can choose multiple criteria.

This new workflow is only being tested among a small group of users right now, but I hope something like it proves successful. I welcome changes like these that aim to better guide people through a process that can be confusing or intimidating, particularly for someone who is being harassed.

Samuel Axon, Ars Technica:

Today, The Information published a lengthy report detailing Apple CEO Tim Cook’s efforts to establish strong relationships between Apple and Chinese government officials and agencies.

Citing both interviews and direct access to internal Apple documents about repeated visits by Cook to China in the mid-2010s, the report describes a $275 billion deal whereby Apple committed to investing heavily in technology infrastructure and training in the country.

The nonbinding, five-year deal was signed by Cook during a 2016 visit, and it was made partially to mitigate or prevent regulatory action by the Chinese government that would have had significant negative effects on Apple’s operations and business in the country.

Wayne Ma’s report is paywalled, of course, but I have a few choice observations. First, it confirms what analysts speculated in 2016 when Apple announced its uncharacteristic investment in ride hailing company Didi Chuxing — that it was basically a way to appease government officials in China. Cook wrote a glowing endorsement of Didi Chuxing CEO Jean Liu for Time’s “100 Most Influential People” feature in 2017.

Second, while this agreement may be officially non-binding, it is hard to imagine Apple running afoul of its spirit, given its dependency on suppliers and manufacturing in China. Ma reports that Apple acquiesced to many government demands, like building research and development centres in the country — including one with the university where Cook was later named chairman of the advisory board — assigning an executive specifically to business in China, and even changing the scale of disputed territories in Apple Maps.

However, it also seems that this deal has helped Apple avoid more stringent regulation in other areas, in ways that are beneficial to users’ rights. Even though Chinese users’ iCloud data is stored on servers located within the country and operated by a local partner — as required by law — Apple has been allowed to retain control over its encryption keys. The government has allowed Apple to retain control over its source code, too. But Ma has previously reported that many of Apple’s exemptions are being revoked, and now writes that key businesses, including the App Store, are in a sort of legal limbo.

It seems that Apple is relying on more Chinese suppliers due, in part, to an agreement to deepen its investment in the country while complying with increasingly nationalistic laws. Apple may have published its Commitment to Human Rights last year, but it is further entangling itself with a government that is committing genocide. This entire situation remains Apple’s biggest liability going into 2022 when, according to Ma’s reporting, the agreement could be extended through May.

Cade Metz and Neal E. Boudette, New York Times:

Unlike technologists at almost every other company working on self-driving vehicles, Mr. Musk insisted that autonomy could be achieved solely with cameras tracking their surroundings. But many Tesla engineers questioned whether it was safe enough to rely on cameras without the benefit of other sensing devices — and whether Mr. Musk was promising drivers too much about Autopilot’s capabilities.

Now those questions are at the heart of an investigation by the National Highway Traffic Safety Administration after at least 12 accidents in which Teslas using Autopilot drove into parked fire trucks, police cars and other emergency vehicles, killing one person and injuring 17 others.

I hope autonomous vehicle technologies really can improve safety for drivers and pedestrians alike. I hope more that mass transit gets better, but why not have both? Just know that I am not rooting for these efforts to fail.

One of the defences I often see is that there were only twelve accidents where Autopilot failed out of millions of vehicles on the road. That is likely better than the record of human drivers behind the wheel of any brand of car.

But what this angle misses is that this is effectively twelve accidents caused by the same driver. Autopilot may have been in different cars at the time and with different software versions, but it is all attributable to the same code. Tesla’s software is the driver. That is not a radical position — it is what Volvo argued six years ago for its own cars. Tesla should accept full responsibility when drivers use its autonomous features and not cower behind weak disclaimers that fail to match its own public rhetoric.

One more thing:

Amnon Shashua, chief executive of Mobileye, a former Tesla supplier that has been testing technology that is similar to the electric-car maker’s, said Mr. Musk’s idea of using only cameras in a self-driving system could ultimately work, though other sensors may be needed in the short term. He added that Mr. Musk might exaggerate the capabilities of the company’s technology, but that those statements shouldn’t be taken too seriously.

“One should not be hung up on what Tesla says,” Mr. Shashua said. “Truth is not necessarily their end goal. The end goal is to build a business.”

I hope this is not meant as praise. If it is not possible to build a business truthfully, we are in bad shape. But I am sure it was meant with tongue firmly in cheek, which, combined with Mobileye’s forthcoming IPO, makes this an opportune time to criticize a competitor in the press.

Eric Brain of Hypebeast interviewed Apple’s Evans Hankey and Stan Ng about the range of Apple Watch bands. It is unfortunately a pretty light interview — all marketing, no insight — but it made me reflect on how long Apple has been shipping some of these bands, virtually unchanged.

The big, as-yet unanswered question is what it would take for Apple to break backwards compatibility, or if that is something in the cards. Many Apple Watch owners have built up enormous collections of bands, and the longer Apple retains compatibility, the longer it will feel like a given.

So far, no strap has been exclusive to an Apple Watch series because of case size, though there are subtle fit issues when, say, putting a band designed for a 38mm model onto a 41mm Series 7. Apple says that the Solo Loop and Braided Solo Loop are only compatible with Series 4 or newer models, but the fit is not terrible on older models. It is not outright incompatible. There are also a handful of bands that have been exclusive to one of the smaller or larger models, like the Modern Buckle and now-discontinued Leather Loop.

In traditional watch terms, Apple has maintained a nearly consistent lug width in each size bracket. This fascinates me. It seems like every new iPhone has slightly different measurements, for justifiable reasons like a different camera system, and so needs a different case. Yet you can still use the exact same bands on a new Apple Watch as you did on one from six years ago, so long as you continue to buy either the small or the large model.

For comparison, Rolex has been making versions of its iconic Submariner for nearly seventy years, but it consistently took 20mm straps until last year. It would be unwise to speculate that Apple will also take decades to change, but Watch hardware itself has been fairly consistent year-over-year. It is similarly iconic.

Apple’s announcement last month that it would soon sell users the parts they need to repair devices themselves reignited discussion about the perceived advantages and drawbacks of self-repair, and prompted questions about how many users would actually take advantage of the program. My guess is that it will be proportionate to the number of people who repair their own vehicles: not many. That is a shame, because replacing an iPhone’s display or a MacBook Air’s battery is not very difficult, and I find it emotionally rewarding.

Regardless of whether that resonates with anyone else, one reason more people should be able to repair their own devices is to maintain control over their data. This is not theoretical.

Michael Brice-Saddler, reporting for the Washington Post in November 2019:

It was a sense of foreboding that prompted Gloria Fuentes to delete several apps from her phone ahead of an Apple Store appointment last week in Bakersfield, Calif.

[…]

It turns out Fuentes’s initial concerns were legitimate. When she got home, Fuentes turned on her phone and noticed a text that had been sent to an unknown number, she wrote. The message’s contents were even more harrowing: Fuentes alleged that the Apple employee had gone through her photos, retrieved a private picture and texted it to himself.

The picture in question was taken more than a year ago, she added.

In this article, Brice-Saddler mentions a handful of similar incidents from years past.

James Titcomb, reporting for the Telegraph in June:

Apple paid millions of dollars to a student after iPhone repair technicians posted explicit photos and videos from her phone to Facebook, legal documents have revealed.

The tech giant agreed a settlement with the 21-year-old after two employees at a repair facility uploaded the images from a phone she had sent to Apple to be fixed, resulting in “severe emotional distress”.

The repair facility was operated by Pegatron, but customers are not aware of that when turning their phones in to Apple for repair.

Ryne Hager, Android Police:

Over the week, two Pixel owners have publicly reported that devices sent back to Google for warranty service and replacement were used to violate their privacy. In one instance, someone allegedly took “nudes” from the device and posted them on a customer’s social media account before stealing a small sum via PayPal. Game designer and New York Times bestselling author Jane McGonigal also later tweeted out her own report detailing someone’s attempts to secure similar information from her account, trawling her Gmail, Google Drive, and other data backup sources after she sent her phone to Google for repair.

Stories of repair technicians taking advantage of their position are as disgusting as they are common. Employees like these are present in official channels, at contractors, and at independent repair shops. But even though the problem is a common one, it should surprise nobody that all of these stories are about men violating the privacy of women through their broken devices.

It is not as though other professions do not have their share of creeps. But medical professionals and lawyers have more to lose. When a doctor violates the confidentiality of their relationship with a patient, their name makes the news, and they may be stripped of credentials or expelled from their professional college. In many cases, the repair technicians found responsible for similarly egregious violations remain nameless, and could easily get hired elsewhere.

Other professions requiring a high degree of trust in confidential information have codes of conduct their practitioners must adhere to, and governing bodies that can discipline rule-breakers. Repair technicians do not; the qualifications Apple requires of Genius Bar staff are similar to those of retail floor staff. Perhaps that is something which ought to be considered: a self-governing body that sets a minimum standard of expertise for consumer-level repairs,1 and can de-certify anyone who abuses their position.

The above cases are symptomatic of the objectification of women, almost always by men, that is commonplace at all levels of society and which we desperately need to correct. But privacy concerns are not limited to these flagrant violations. All of us have things on our computers that we would be uncomfortable with a technician accessing. These privacy incursions are certainly less egregious, but they are damaging in their own way. We keep records of our conversations, banking history, health, and so much more on devices we would be reluctant to hand to a stranger on the street.

If you are concerned about someone else handling your device — and I think there are perfectly good, non-criminal reasons for being wary — a self-repair option might make sense for you. We should all expect privacy from technicians, and those who choose a full-service option are in no way asking to be taken advantage of. But self-repair offers another level of reassurance. Your device never leaves your hands. That peace of mind may, for some, be worth the modest learning curve.


  1. I am familiar with the kinds of certifications available to system administrators. ↥︎

Jon Keegan and Alfred Ng, the Markup:

Life360, a popular family safety app used by 33 million people worldwide, has been marketed as a great way for parents to track their children’s movements using their cellphones. The Markup has learned, however, that the app is selling data on kids’ and families’ whereabouts to approximately a dozen data brokers who have sold data to virtually anyone who wants to buy it.

In 2019, Apple pulled about a dozen parental control apps from the App Store over privacy concerns, since they abused Mobile Device Management, though I cannot find any reports that Life360 was among them. However, I did come across a Wired article from later that year in which Louise Matsakis reported that Life360’s public trading prospectus indicated the value it sees in mining its vast collection of user data — largely of children — for profit.

Last month, Life360 announced it would be acquiring Tile.

Andrew Paul, Input:

A new program innocuously titled the “Verizon Custom Experience” is sold to users as a way for the company to “personalize our communications with you, give you more relevant product and service recommendations, and develop plans, services and offers that are more appealing to you.” To accomplish this, all a Verizon subscriber needs to do is… allow the company access to all the websites you visit, apps you use, as well as see everyone you happen to call and text.

Well, okay, so that’s a bit misleading. You don’t “need” to allow access — Verizon already default granted it. You can manually go in and change a few settings to remedy the situation, though. Here’s how.

Emma Roth, the Verge:

In April, T-Mobile started automatically enrolling users in a program that shares your data with advertisers unless you manually opt-out from your privacy settings. On AT&T’s privacy center, the company says that it collects web and browsing information, along with the apps you use, and that you can manage these settings from AT&T’s site.

Even though this is a common practice among U.S. internet providers, it still disturbs me that they treat it as an opt-out arrangement. Each user has to somehow learn that this program exists in the first place, know what it is called — “Custom Experience” is a weaselly marketing way of avoiding the words tracking and profiling — and figure out how to disable it. This is a massive ISP-wide privacy violation that is completely legal, and entirely unethical.

Howard Oakley:

Monterey is a chance for Apple’s engineers to catch up with the backlog of bugs which have marred Big Sur and its predecessors. While plenty have already been fixed, there are still many to go. This brief survey lists some of those which have been niggling me since the release of macOS 12.0.1, with links to the more serious problems at the end. This is by no means complete, and I’m sure you’ll each know of many that haven’t yet irritated me. While I welcome your proposals, please be careful to outline how each bug can be reproduced, so that we can enjoy them for ourselves.

Michael Tsai:

If we’re talking annoyances, rather than bugs per se, the top of my list would have to be the narrow alerts.

I could pick and choose from the bugs I have filed in the past several months to build a similar list. I seldom find applications outright crashing, but there are plenty of entry-level user interaction problems: in several apps, scroll position is not preserved while using the app or when it is backgrounded; notifications fly in from the bottom edge of the screen when waking my Mac like there is a violent toaster on my desk; Music remains a small tragedy.

Individually, it would be hard to single out any of these problems as particularly upsetting or difficult. But they compound. Each one adds unnecessary friction to the tools I use all the time. You can add them up in a list but, for me at least, they multiply my annoyance. From where I am sitting, it is hard to know if these problems are being treated seriously, or if they are falling by the wayside as Apple races to get new features ready in time for WWDC 2022.

Mary Jo Foley, ZDNet:

Microsoft has been doing its best to force Windows 11 users to stick with its own Edge browser by making switching from it as difficult as possible. But there’s hope the company may do the right thing and stop this nonsense.

The latest Windows 11 Dev Channel test build released earlier this week, Build 22509, has a new browser Set default button, as discovered by Microsoft watcher Rafael Rivera. If and when this new button makes it into the commercially available Windows 11 release, users will again have a cleaner and simpler way to select a browser other than Edge.

It is not like Microsoft accidentally stumbled into the current chaotic browser selector. It made a choice to build something radically different from the Windows 10 picker. All it had to do was avoid user-hostile interactions, but Microsoft deliberately moved in that direction anyway.

If it ships, this change is for the better. But we should not forget how much negative coverage was required for Microsoft to act, and I bet the current antitrust climate helped. Good. Platform owners should be scared to make changes like this, and the pressure should be maintained until Microsoft reverts its other dark patterns. The web only exists through web browsers, so it is important to encourage competition among them.

Twitter Safety:

There are growing concerns about the misuse of media and information that is not available elsewhere online as a tool to harass, intimidate, and reveal the identities of individuals. Sharing personal media, such as images or videos, can potentially violate a person’s privacy, and may lead to emotional or physical harm. The misuse of private media can affect everyone, but can have a disproportionate effect on women, activists, dissidents, and members of minority communities. When we receive a report that a Tweet contains unauthorized private media, we will now take action in line with our range of enforcement options.

Emma Bowman, NPR:

Emerson Brooking, a senior fellow at the Atlantic Council’s Digital Forensic Research Lab, tweeted that the rule is “written so broadly that most anyone can lodge a complaint against anyone.”

Public figures are exempt from the policy, Twitter said. The social media company assured users that “context matters,” and that its private information policy “includes many exceptions in order to enable robust reporting on newsworthy events and conversations that are in the public interest.”

Brooking added that a lot hinges on those last two words.

Chad Loder is maintaining a thread of legitimate public interest stories that are being curtailed because of this policy. Accounts are being locked over months-old retweets of photos taken by journalists in public. Twitter’s whole thing is its firehose of information; its misapplication of this policy is kneecapping the use cases that make the platform so valuable.

Suzanne Vranica, the Wall Street Journal:

New privacy protections put in place by tech giants and governments are threatening the flow of user data that companies rely on to target consumers with online ads.

Great.

As a result, companies are taking matters into their own hands. Across nearly every sector, from brewers to fast-food chains to makers of consumer products, marketers are rushing to collect their own information on consumers, seeking to build millions of detailed customer profiles.

Not so great — but not as worrying for privacy as it sounds, either.

When the New York Times told Axios last year that it would be phasing out the use of third-party data for user targeting and relying on its own, I explained why this is a privacy benefit, even though it made the Times a collector of user data:

I would vastly prefer to revert to a pre-personalized ad world, but I still see this move as a step in the right direction. It may still collect data for targeting, but at least it does not involve the near-universal surveillance of companies like Facebook and Google. Reducing their ability to conduct broad and intrusive behavioural data collection is an important step towards a more private web.

This remains true. As much as I think an advertising marketplace should not target users based specifically on who they are and their activities, the least evil version of that is one where individual businesses leverage their existing relationships with people instead of depending on vast web-wide tracking.

But these companies are not exclusively using first-party data. The Journal is careful to acknowledge that targeting information from Facebook, Google, and other ad tech companies will still be used by businesses alongside their own. Furthermore, these data collection schemes are going beyond the typical granularity of loyalty programs, collecting attributes like device identifiers and tying them to names. Building these databases through QR codes and contest entries is sneaky, but not unique or new.

Make no mistake: this is not a slam-dunk win for privacy. I would like to see a regulatory framework scaling back the collection of this data by prohibiting its use for ad targeting, and banning its sale or sharing. But this is a less bad version of personalized advertising because it leverages existing opt-in relationships, rather than fishing for behavioural data with a Google-sized dragnet.