Month: June 2022

John Thornhill, Financial Times (this might be paywalled but I know you are a clever so-and-so):

A powerful case for why politicians need to act now to create a stronger legal framework for biometric technologies has been made by the barrister Matthew Ryder in an independent report published this week. (For disclosure: the report was commissioned by the Ada Lovelace Institute and I am on the charity’s board.) Until that comes into force, Ryder has called for a moratorium on the use of live facial recognition technology. Similar calls have been made by British parliamentarians and US legislators without prompting much response from national governments.

Three arguments are made as to why politicians have not yet acted: it is too early; it is too late; and the public does not care. All three ring hollow.

Despite widespread privacy concerns, facial recognition is everywhere. It is hard to overstate the impact of the global failure to restrict its use while governments permit and even encourage its widespread deployment. In just one example of how badly regulators are fumbling, the European Parliament called for a ban on facial recognition in a 2021 non-binding resolution while simultaneously developing a massive surveillance database. You can find similar examples worldwide for facial recognition and, increasingly, other forms of biometric identification. We are hurtling down a path to persistent surveillance with few safeguards, and our institutions are only encouraging it through either apathy or eager adoption.

Update: Several recognizable companies, like Amazon and Microsoft, stopped offering facial recognition technologies to law enforcement after the widespread protests triggered, in part, by the police murder of George Floyd. But those decisions can change at any time and, in any case, those household names have simply been replaced by lower-profile vendors. All the same problems, offered by companies most people have never heard of.

Apple:

The Telecommunications Business Act in South Korea was recently amended to mandate that apps distributed by app market operators in South Korea be allowed to offer an alternative payment processing option within their apps. To comply with this law, developers can use the StoreKit External Purchase Entitlement. This entitlement allows apps distributed on the App Store solely in South Korea the ability to provide an alternative in-app payment processing option. Developers who want to continue using Apple’s in-app purchase system may do so and no further action is needed.

John Voorhees, MacStories:

Developers who want to use a third-party payment processor must apply to Apple for a StoreKit External Purchase Entitlement. Apps with the new entitlement can only be released in South Korea’s App Store, which means that developers will need to make a separate version of any app that uses the entitlement.

In the Netherlands, Apple dropped the requirement for a separate app binary. Why is the policy in South Korea different?
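
For developers curious what adoption might look like, here is a minimal sketch of gating an alternative payment flow behind the entitlement. This is my reading of Apple’s External Purchase documentation, not shipping code; it assumes the com.apple.developer.storekit.external-purchase entitlement and an SKExternalPurchase Info.plist array listing “kr”.

```swift
import StoreKit

// A sketch, not confirmed shipping code: gating an alternative payment
// flow behind Apple's External Purchase entitlement. Assumes the
// com.apple.developer.storekit.external-purchase entitlement and an
// SKExternalPurchase Info.plist array containing "kr".
@available(iOS 15.4, *)
func startExternalPurchaseFlow() async {
    // canPresent is false if the entitlement is missing or the user's
    // storefront is not an eligible one.
    guard await ExternalPurchase.canPresent else { return }
    do {
        // Apple requires its disclosure sheet to be shown before the app
        // leaves the in-app purchase system.
        let result = try await ExternalPurchase.presentNoticeSheet()
        switch result {
        case .continued:
            presentThirdPartyPaymentUI() // hypothetical app-specific flow
        default:
            break // the user chose to stay with Apple's system
        }
    } catch {
        print("External purchase notice failed: \(error)")
    }
}

func presentThirdPartyPaymentUI() {
    // Hand off to the app's own payment processor (hypothetical).
}
```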

Charlie Warzel, in his fantastic Galaxy Brain newsletter:

When I read McCormick’s posts or many of the other pieces evangelizing for crypto-related projects, I tend to come away feeling kind of stupid — just as I did reading the Braintrust whitepaper. Now maybe I’m telling on myself here. Perhaps my small brain stopped accepting new ideas after the 10th version of the iPhone, and now I’m doomed to look foolish (I’ve written at length about this fear). But I also know I’m not at all alone when it comes to reading whitepapers and crypto Substacks and feeling like the breathless urgency or complexity doesn’t match with what’s really being described.

I am keeping an open mind, but this has also been my experience. I keep stumbling across these projects and asking what lies beneath the layers of jargon and marketing speak, and then how that differs from existing products and services. So far, I have been unsatisfied.

Since the parallel rise, over the past couple of years, of TikTok and of concerns about the service’s connections to China — or, more specifically, to its intelligence and military arms — I have been mulling over this piece. I have dropped bits and pieces of it elsewhere, but a letter sent jointly by FCC commissioner Brendan Carr to Tim Cook and Sundar Pichai makes now a good time to bring those thoughts together. Unfortunately, the letter has only been posted to Twitter as a series of images without descriptive text, because I guess Carr hates people who use screen readers:

I am writing the two of you because Apple and Google hold themselves out as operating app stores that are safe and trusted places to discover and download apps. Nonetheless, Apple and Google have reviewed and approved the TikTok app for inclusion in your respective app stores. Indeed, statistics show that TikTok has been downloaded in the U.S. from the Apple App Store and the Google Play Store nearly 19 million times in the first quarter of this year alone. It is clear that TikTok poses an unacceptable national security risk due to its extensive data harvesting being combined with Beijing’s apparently unchecked access to that sensitive data. But it is also clear that TikTok’s pattern of conduct and misrepresentations regarding the unfettered access that persons in Beijing have to sensitive U.S. user data — just some of which is detailed below — puts it out of compliance with the policies that both of your companies require every app to adhere to as a condition of remaining available on your app stores. Therefore, I am requesting that you apply the plain text of your app store policies to TikTok and remove it from your app stores for failure to abide by those terms.

As a reminder, Carr works for the FCC, not the FTC. Nor does Carr work for the Department of Commerce, which was most recently tasked with eradicating TikTok from the United States. While frequent readers will know how much I appreciate a regulator doing their job and making tough demands, I feel Carr’s fury is misplaced and, perhaps, a little disingenuous.

Carr’s letter follows Emily Baker-White’s reporting earlier this month for BuzzFeed News about the virtually nonexistent wall between U.S. user data collected by TikTok and employees at ByteDance, its parent company in China. The concerns, Baker-White says, are claims of persistent backdoors connected to Chinese military or intelligence which allow access to users’ “nonpublic data”. The ostensible severing of ties between ByteDance and TikTok’s U.S. users is referred to internally as “Project Texas”:

TikTok’s goal for Project Texas is that any data stored on the Oracle server will be secure and not accessible from China or elsewhere globally. However, according to seven recordings between September 2021 and January 2022, the lawyer leading TikTok’s negotiations with CFIUS and others clarify that this only includes data that is not publicly available on the app, like content that is in draft form, set to private, or information like users’ phone numbers and birthdays that is collected but not visible on their profiles. A Booz Allen Hamilton consultant told colleagues in September 2021 that what exactly will count as “protected data” that will be stored in the Oracle server was “still being ironed out from a legal perspective.”

In a recorded January 2022 meeting, the company’s head of product and user operations announced with a laugh that unique IDs (UIDs) will not be considered protected information under the CFIUS agreement: “The conversation continues to evolve,” they said. “We recently found out that UIDs are things we can have access to, which changes the game a bit.”

What the product and user operations head meant by “UID” in this circumstance is not clear — it could refer to an identifier for a specific TikTok account, or for a device. Device UIDs are typically used by ad tech companies like Google and Facebook to link your behavior across apps, making them nearly as important an identifier as your name.

It has become a cliché by now to point out that TikTok’s data collection practices are no more invasive or expansive than those of American social media giants. It is also a shallow comparison. The concerns raised by Carr and others are explicitly related to the company’s Chinese parentage, not simply the pure privacy violations of collecting all that information.

But, you know, maybe they should be worried about that simpler situation. I think Baker-White buried the lede in that big, long BuzzFeed story:

Project Texas’s narrow focus on the security of a specific slice of US user data, much of which the Chinese government could simply buy from data brokers if it so chose, does not address fears that China, through ByteDance, could use TikTok to influence Americans’ commercial, cultural, or political behavior.

This piece is almost entirely about users’ private data being accessible by staff apparently operating as agents of a foreign government; almost none of it is about its algorithm influencing behaviour.1 So it is wild to read, in the first half of this sentence, that a great deal of the piece’s concerns about TikTok collecting user data can be effectively undone if its management clicks the “Add to Cart” button on a data broker’s website. Those are the privacy concerns churning away in the back room. What is happening in the front office?

From Carr’s letter:

In March 2020, researchers discovered that TikTok, through its app in the Apple App Store, was accessing users’ most sensitive data, including passwords, cryptocurrency wallet addresses, and personal messages.

This seems to be a reference to Talal Haj Bakry and Tommy Mysk’s research showing that TikTok and a large number of other popular apps were automatically reading the iOS clipboard — or “pasteboard” in iOS parlance. It sounds bad, but it is not clear that TikTok actually received any of this pasted data.

There are also many non-insidious reasons why this could have been the case. In a statement to Ars Technica, TikTok said it was related to an anti-spam feature. I am not sure that is believable, but I also do not have a good reason to think it was secretly monitoring users’ copied data when there are other innocent explanations. Google’s Firebase platform, for example, automatically retrieves the pasteboard by default when using its Dynamic Links feature. TikTok probably does not use Firebase, but all I am saying is that pasteboard access is not, in and of itself, a reason to think the worst. At any rate, suspicious pasteboard access is one reason pasting stuff in iOS has become increasingly irritating and why there is a whole new paste control in iOS 16.
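
To make the mechanics concrete, here is a minimal sketch of both patterns using UIKit’s pasteboard APIs: the full read, which is what triggers the “pasted from” banner on iOS 14 and later, and the metadata-only check, which does not.

```swift
import UIKit

// The behaviour Mysk's research flagged: reading the shared pasteboard
// as soon as the app becomes active. On iOS 14 and later, this is what
// triggers the "pasted from" banner.
func checkPasteboardInvasively() {
    if let copied = UIPasteboard.general.string {
        // An anti-spam heuristic could inspect `copied` here — but so
        // could anything else, which is the whole concern.
        print("App just read: \(copied)")
    }
}

// The less invasive alternative: metadata checks like `hasStrings` let
// an app ask whether text exists on the pasteboard without reading the
// contents or triggering the banner.
func checkPasteboardPolitely() {
    if UIPasteboard.general.hasStrings {
        print("There is text on the pasteboard; contents unknown.")
    }
}
```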

Also Carr:

In August 2020, TikTok circumvented a privacy safeguard in Google’s Android operating system to obtain data that allowed it to track users online.

This appears to reference TikTok’s collection of MAC addresses — a behaviour which, while against Google’s policies, is not exclusive to TikTok. It may be more concerning when TikTok does it, but it could obtain the same information from data brokers (PDF).

That is really what this is all about. The problem of TikTok is really the problem of worldwide privacy failures. There are apps on your phone collecting historically unprecedented amounts of information about your online and offline behaviour. There are companies that buy all of this — often nominally de-identified — and many of them offer “enrichment” services that mix information from different sources to create more comprehensive records. Those businesses, in turn, provide those more complex profiles to other businesses — or journalists — which now have the ability to individually identify you. It is often counterproductive to do so, and they often promise they would never do such a thing, but it is completely possible and entirely legal — in the U.S., at least. It should be noted that, while the world is grappling with privacy problems, some of those failures are specific to the United States.
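
To make the “enrichment” step concrete, here is a toy sketch with entirely invented data and field names. Neither dataset identifies anyone on its own, but a join on a shared advertising identifier produces profiles that do.

```swift
// A toy illustration of data "enrichment" — invented data, invented
// fields. One dataset has locations keyed by an ad identifier; another
// has shipping names keyed by the same identifier. Joining them yields
// a record that is individually identifying in a way neither input was.
struct LocationRow { let adID: String; let frequentPlace: String }
struct PurchaseRow { let adID: String; let shippingName: String }

func enrich(_ locations: [LocationRow],
            _ purchases: [PurchaseRow]) -> [String: (name: String, place: String)] {
    // Index names by ad identifier, keeping the first on duplicates.
    let names = Dictionary(purchases.map { ($0.adID, $0.shippingName) },
                           uniquingKeysWith: { first, _ in first })
    var profiles: [String: (name: String, place: String)] = [:]
    for row in locations {
        if let name = names[row.adID] {
            // Neither dataset had a name *and* a location; the join does.
            profiles[row.adID] = (name, row.frequentPlace)
        }
    }
    return profiles
}
```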

One of the concerns Carr enumerated in his letter is TikTok’s collection of “biometric identifiers, including faceprints […] and voiceprints”. When this language was added last year, it seemed to give the company legal cover for features like automatic captions and face filters, both of which involve things that are considered biometric identifiers. But the change was only made to the U.S.-specific privacy policy; TikTok has three. When I looked at them, the U.S. one seemed more permissive than those for Europe or the rest of the world, but I am not a lawyer, and things may have changed since.2

One of the frustrating things about Carr’s letter is that he is, in many ways, completely right — I just wish he raised the same concerns about everything else to which they apply. From the perspective of a non-American, his concerns about intrusive surveillance reflect those I have about my data being stored under the control of American companies operating under American laws. Sure, Canada is both an ally and a participant in the Five Eyes group. But it is hard to be reassured by that when the U.S. has lost its moral high ground by wiretapping allies and entire countries.

It is worse that an authoritarian surveillance state may be doing the snooping. But the moral and ethical problems are basically the same, so it is hard to read even the most extreme interpretation of Carr’s letter without detecting some amount of hypocrisy. The U.S. decided not to pass adequate privacy legislation at any point in the past fifteen years of accelerating intrusive practices — by tech companies, internet service providers, ad networks, and the data brokers tying them all together — and has its own insidious and overreaching intelligence practices. Even if Carr has a point, TikTok is not the problem; it is one entity taking advantage of a wildly problematic system.


  1. On the question of influence, there is room for nuance. A response to that is a whole different article.

  2. TikTok is far from the only company to have region-specific privacy policies. I would love to see a legal analysis and comparison.

Glenn Fleishman wrote a fantastic introduction to the open passkey standard, coming soon to the world around you — and it sounds great:

The passkey is a modern replacement for passwords that rebuilds the security wall protecting standard account logins. Proximity—in the form of the device that stores your passkeys—is a powerful tool in reducing account hijacking and interception. Passkeys may seem scary and revolutionary, but they’re actually safer and, in some ways, a bit old-fashioned: they’re a bit of a throwback to a time when having access to a terminal provided proof you were authorized to use it.

It all sounds very promising, though I expect the rollout of this standard will take some time. If it works as promised, it is likely to be a marked improvement in both user experience and security — not an easy feat to pull off.
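
For the curious, here is a minimal sketch of what the registration half looks like with Apple’s AuthenticationServices framework. The relying party identifier is a placeholder, and the challenge and user handle would come from your server in a real WebAuthn ceremony.

```swift
import AuthenticationServices

// A minimal sketch of passkey registration on Apple platforms. The
// relying party identifier, challenge, and user ID below are
// placeholders; in practice all three are coordinated with a server.
func registerPasskey(challenge: Data, userID: Data, username: String,
                     delegate: ASAuthorizationControllerDelegate) {
    let provider = ASAuthorizationPlatformPublicKeyCredentialProvider(
        relyingPartyIdentifier: "example.com" // hypothetical domain
    )
    let request = provider.createCredentialRegistrationRequest(
        challenge: challenge, // server-generated, single-use
        name: username,
        userID: userID
    )
    let controller = ASAuthorizationController(authorizationRequests: [request])
    controller.delegate = delegate // weak: the caller must retain the delegate,
                                   // which receives the new public key credential
    controller.performRequests()
}
```

The appeal is visible even in a sketch this small: there is no password field anywhere, just a challenge signed by a key that never leaves the device.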

Ivan Mehta, TechCrunch:

US House of Representatives Speaker Nancy Pelosi said Tuesday that Democrats are considering the introduction of legislation that could protect the abortion rights of citizens. Notably, her letter addressed to Democrats pointed out that the legislation would also work on securing user data of reproductive apps.

This sounds necessary but narrow. News coverage has been overwhelmingly focused on period-tracking apps and the risks of such private information having basically no legal protections. But there are so many avenues for leaking data to extremist prosecutors in more theocratic U.S. states. Such limited protections are welcome, but not nearly enough.

It is not a good reflection of either public policy or privacy rights when people have to access healthcare with the kinds of precautions used by spies and dissidents dealing with classified information.

Ravi Kanneganti of Google:

First, starting today, people using Hangouts on mobile will see an in-app screen asking them to move to Chat in Gmail or the Chat app. Similarly, people who use the Hangouts Chrome extension will be asked to move to Chat on the web or install the Chat web app. In July, people who use Hangouts in Gmail on the web will be upgraded to Chat in Gmail.

While we encourage everyone to make the switch to Chat, Hangouts on the web will continue to be available until later this year. Users will see an in-product notice at least a month before Hangouts on the web starts redirecting to Chat on the web.

I just love how this is an ongoing story. Google has announced the forthcoming demise of Hangouts a few times. A merger with Allo seemed likely for a time, but Allo came and went while the company was trying to figure out what to do with Hangouts.

On the plus side, it really does seem like Google is figuring out its chat client strategy. You can now have video calls with Meet, Chat on Android, or Gmail on Android and iOS, though I think all of them use Google’s Meet back-end. Audio calls are possible with Chat, Gmail, and Google Voice, and text-only messages can be sent with any of Google’s messaging products. Believe it or not, this is an improvement.

Cristiano Lima, Washington Post:

An academic study finding that Google’s algorithms for weeding out spam emails demonstrated a bias against conservative candidates has inflamed Republican lawmakers, who have seized on the results as proof that the tech giant tried to give Democrats an electoral edge.

[…]

That finding has become the latest piece of evidence used by Republicans to accuse Silicon Valley giants of bias. But the researchers said it’s being taken out of context.

[Muhammad] Shahzad said while the spam filters demonstrated political biases in their “default behavior” with newly created accounts, the trend shifted dramatically once they simulated having users put in their preferences by marking some messages as spam and others as not.

Shahzad and the other researchers who authored the paper have disputed the sweeping conclusions of bias drawn by lawmakers. Their plea for nuance has been ignored. Earlier this month, a group of senators introduced legislation to combat this apparent bias. It would prohibit email providers from automatically flagging political messages as spam, and would require providers to publish quarterly reports detailing how many emails from each political party were filtered.
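
As a toy illustration of the researchers’ point — and emphatically not their model — consider a filter that blends a global prior with per-user feedback. A handful of explicit “spam” or “not spam” clicks quickly swamps the default behaviour, which is the shift Shahzad describes.

```swift
// A toy model (not the study's) of why user feedback dominates a spam
// filter's defaults: explicit votes on a sender are blended against a
// global prior, and the blend tilts toward the user as votes accumulate.
struct ToySpamFilter {
    // Invented default scores standing in for the filter's global prior.
    var globalSpamPrior: [String: Double] = [
        "fundraising@party-a.example": 0.8,
        "fundraising@party-b.example": 0.2,
    ]
    var userVotes: [String: (spam: Int, ham: Int)] = [:]

    mutating func markSpam(_ sender: String) {
        userVotes[sender, default: (spam: 0, ham: 0)].spam += 1
    }
    mutating func markNotSpam(_ sender: String) {
        userVotes[sender, default: (spam: 0, ham: 0)].ham += 1
    }

    func spamProbability(for sender: String) -> Double {
        let prior = globalSpamPrior[sender] ?? 0.5
        guard let votes = userVotes[sender] else { return prior }
        let total = Double(votes.spam + votes.ham)
        // Laplace-smoothed estimate from the user's own votes.
        let userEstimate = (Double(votes.spam) + 1) / (total + 2)
        // More votes, more trust in the user over the global prior.
        let weight = total / (total + 1)
        return (1 - weight) * prior + weight * userEstimate
    }
}
```

Mark a sender “not spam” two or three times and the blended score crosses back over whatever threshold the prior set, regardless of which party sent the email.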

According to reporting from Mike Masnick at Techdirt, it looks like this bill was championed by Targeted Victory, which also promoted the study to conservative media channels. You may remember Targeted Victory from their involvement in Meta’s campaign against TikTok.

Masnick:

Anyway, looking at all this, it is not difficult to conclude that the digital marketing firm that Republicans use all the time was so bad at its job spamming people, that it was getting caught in spam filters. And rather than, you know, not being so spammy, it misrepresented and hyped up a study to pretend it says something it does not, blame Google for Targeted Victory’s own incompetence, and then have its friends in the Senate introduce a bill to force Google to not move its own emails to spam.

I am of two minds about this. A theme you may have noticed developing on this website over the last several years is a deep suspicion of automated technologies, however they are branded — “machine learning”, “artificial intelligence”, “algorithmic”, and the like. So I do think some scrutiny may be warranted in understanding how automated systems determine a message’s routing.

But it does not seem at all likely to me that a perceived political bias in filtering algorithms is deliberate, so any public report indicating the number or rate of emails from each political party being flagged as spam is wildly unproductive. It completely de-contextualizes these numbers and ignores decades of spam filters occasionally misfiring for no discernible reason.

A better approach to transparency around automated systems would help the public understand how these decisions are made, without pandering to the perceived bias of parties with a victim complex. Simply counting the number of emails flagged as spam from each party is an idiotic approach. I, too, would like to know why many of the things I am recommended by algorithms are entirely misguided. This is not the way.

By the way, politicians have a long and proud history of exempting themselves from unfavourable regulations. Insider trading laws virtually do not apply to U.S. congresspersons, even with regulations to ostensibly rein it in. In Canada, politicians excluded themselves from laws governing unsolicited communications by phone and email. Is it any wonder that polls have shown declining trust in institutions for decades?

Perhaps you, like me, live in a city where some of your fellow citizens have decided to engage in a sticker campaign with anti-vaccine and conspiracy-minded messages. It is not just light vandalism; they are advertisements for misinformation. Maybe you, like me, attempt to peel them off when you see them, but wish you could correct their mistaken message instead.

Well, good news: Context Center, a project by Aram Zucker-Scharff, has a free printable sticker design with a QR code that points to the U.N.’s vaccine debunking page. Is it legal to put these stickers on things you do not own? Probably not. Would I rather see these than yet more fear mongering nonsense? I sure would.

This probably has no effect on the committed conspiracy theorists. They have found their narrative and have built their world around it. But for those who are maybe questioning health authorities because of some nonsense they have seen, this may help provide better information. I do not know if this is the best way to fix these issues, but it sure beats doing nothing.
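
If you would rather roll your own sticker, generating a QR code takes a few lines of Core Image. The destination URL below is a placeholder; point it wherever you like.

```swift
import CoreImage
import CoreImage.CIFilterBuiltins

// A minimal sketch of generating a debunk-link QR code with Core Image,
// in the spirit of the Context Center sticker.
func makeQRCode(for url: String, scale: CGFloat = 10) -> CGImage? {
    let filter = CIFilter.qrCodeGenerator()
    filter.message = Data(url.utf8)
    filter.correctionLevel = "H" // high error correction survives rough printing
    guard let output = filter.outputImage?
        .transformed(by: CGAffineTransform(scaleX: scale, y: scale)) else {
        return nil
    }
    return CIContext().createCGImage(output, from: output.extent)
}

// Placeholder URL — substitute the debunking page you want to link.
let sticker = makeQRCode(for: "https://www.example.org/vaccine-facts")
```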

Natasha Lomas, TechCrunch:

Another strike against use of Google Analytics in Europe: The Italian data protection authority has found a local web publisher’s use of the popular analytics tool to be non-compliant with EU data protection rules owing to user data being transferred to the U.S. — a country that lacks an equivalent legal framework to protect the info from being accessed by U.S. spooks.

Earlier this year, Austrian regulators found a publisher’s use of Google Analytics illegal on similar grounds and, as Lomas writes, French authorities also found it violated GDPR rules.

Jia Tolentino, the New Yorker:

If you become pregnant, your phone generally knows before many of your friends do. The entire Internet economy is built on meticulous user tracking — of purchases, search terms — and, as laws modelled on Texas’s S.B. 8 proliferate, encouraging private citizens to file lawsuits against anyone who facilitates an abortion, self-appointed vigilantes will have no shortage of tools to track and identify suspects. (The National Right to Life Committee recently published policy recommendations for anti-abortion states that included criminal penalties for anyone who provides information about self-managed abortion “over the telephone, the internet, or any other medium of communication.”) A reporter for Vice recently spent a mere hundred and sixty dollars to purchase a data set on visits to more than six hundred Planned Parenthood clinics. Brokers sell data that make it possible to track journeys to and from any location — say, an abortion clinic in another state. In Missouri, this year, a lawmaker proposed a measure that would allow private citizens to sue anyone who helps a resident of the state get an abortion elsewhere; as with S.B. 8, the law would reward successful plaintiffs with ten thousand dollars. The closest analogue to this kind of legislation is the Fugitive Slave Act of 1793.

Two data brokers, Safegraph and Placer.ai, said they removed Planned Parenthood visits from their data sets. They could reverse that decision at any time, and there is nothing preventing another company from offering its own package of users seeking a form of healthcare that is now illegal in a dozen states. People have little choice about which third-party providers receive data from the apps and services they use. Anyone using a period tracking app is at risk of that data being subpoenaed and, while some vendors say they do not pass health records to brokers, some of those same apps were found to be inadvertently sharing records with Facebook.

If the U.S. had more protective privacy laws, it would not make today’s ruling any less of a failure to uphold individuals’ rights in the face of encroaching authoritarian policies. But it would make it a whole lot harder for governments and those deputized on their behalf to impose their fringe views against medical practitioners, clinics, and people seeking a safe abortion.

Muyi Xiao, Paul Mozur, Isabelle Qian, and Alexander Cardia of the New York Times put together a haunting short documentary about the state of surveillance in China. It shows a complete loss of privacy, and any attempt to maintain one’s sense of self is regarded as suspicious. From my limited perspective, I cannot imagine making such a fundamental sacrifice.

This is why it is so important to match the revulsion we feel over things like that Cadillac Fairview surreptitious facial recognition incident or Clearview AI — in its entirety — with strong legislation. These early-stage attempts at building surveillance technologies that circumvent legal processes forecast an invasive future for everyone.

Look, I know you are probably here for technology and not cooking tips from some doofus, but I had a little epiphany the other day and, well, it is my website, so I will share it here.

I was making a dish not too long ago — maybe hot and sour eggplant, but I am not sure — with a corn starch slurry. The corn starch is added to thicken the sauce, yes, but it also helps it adhere to the ingredients — eggplants in this case, but noodles in others.

A lot of cooks will tell you to use pasta water in the same way when you cook a sauced pasta. In a typical telling, you might set aside half a cup of the water the pasta is cooking in just before you strain it, then return the pasta to the pot with a sauce you prepared separately and then some of your pasta water. Tossing the starchy water with the sauce and pasta is supposed to help it bind better.

The little epiphany I had is to add some of the water back first, without the sauce, and let it reduce to a more concentrated state. Here is Albert Burneko, in a recent Defector article about cacio e pepe, expressing the same sentiment:

What is happening all the while is, the pasta is absorbing a little bit of the moisture from that pasta water; at the same time, the heat is causing the rest of it to evaporate, slowly. As it does, it gradually reduces to a starchy goo, a sauce, which coats the pasta and makes it sticky and gives it a satin sheen. I promise this is happening while you are tossing, and I also promise that if it is not happening then it is because you and not I screwed up somehow.

I have tried this technique for a few different recipes and it is magical. Making this ultra-concentrated starch slurry does require you to undercook your pasta a little more than usual, but the results are worth it. You get this super silky sauce that completely clings to whatever pasta you have made, in a way I have not found possible even when I cook my pasta in relatively small amounts of water to create a more concentrated cooking liquid. Reducing it later produces much nicer results — I think.

Charlie Warzel, the Atlantic:

There’s a strange irony to all of this. For years, researchers, technologists, politicians, and journalists have agonized and cautioned against the wildness of the internet and its penchant for amplifying conspiracy theories, divisive subject matter, and flat-out false information. Many people, myself included, have argued for platforms to surface quality, authoritative information above all else, even at the expense of profit. And it’s possible that Google has, in some sense, listened (albeit after far too much inaction) and, maybe, partly succeeded in showing higher-quality results in a number of contentious categories. But instead of ushering in an era of perfect information, the changes might be behind the complainers’ sense that Google Search has stopped delivering interesting results. In theory, we crave authoritative information, but authoritative information can be dry and boring. It reads more like a government form or a textbook than a novel. The internet that many people know and love is the opposite — it is messy, chaotic, unpredictable. It is exhausting, unending, and always a little bit dangerous. It is profoundly human.

I am not sure this is the right conclusion to draw from the sometimes questionable results of a Google search these days. As has been repeatedly documented by others, the problem with Google is not that it is surfacing boring results, but that search engine spammers and machine-generated results are winning.

Earlier this year, our washing machine was not completing a cycle correctly. The model number seems to be one of those specific to a long-departed retailer, so, after failing to find a copy of the manual, I resorted to more general searches. Appliance troubleshooting turns out to be one of the more polluted genres of query. DuckDuckGo and Google searches alike returned page after page of keyword-filled junk intended solely to rank highly.

So many of my searches for all kinds of stuff go this way, and I am not the only one. It is the current state of Google’s ongoing adversarial relationship with spammers and marketers alike. I think Google has been more successful in burying the dregs of the web. All too often, what has replaced them are word-filled pages of emptiness.

Casey Liss:

Why is this the accepted way to get the attention of an engineer? For something as simple as a one-line code change, why are my only two options:

  • Wait for June and hope I get an audience with the right engineer at a lab

  • Use one of my two Technical Support Incidents and hope it’s fruitful… and that I don’t need that one for something else later in the year

I understand why Apple sets limits on Technical Support Incidents, but restricting developers to just two per year is a bit like giving employees two sick days per year. Developers can buy more if they run out, but they are not cheap and they expire a year after purchase, so their use is somewhat disincentivized.

Also, I learned something from Liss’ article. You know how Feedback Assistant disappears from your iOS or iPadOS device when you update to a release candidate version, therefore making it difficult to report bugs from your iPhone or iPad? It turns out the applefeedback:// URL protocol still opens Feedback Assistant on RC versions, and I made the world’s simplest Shortcut to trigger it.
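
That Shortcut is a single “Open URL” action. For completeness, here is the in-app equivalent, assuming the applefeedback:// scheme keeps working on release candidate builds.

```swift
import UIKit

// The entirety of the trick: open the applefeedback:// URL, which still
// launches Feedback Assistant on release candidate builds even when the
// app icon is hidden.
func openFeedbackAssistant() {
    guard let url = URL(string: "applefeedback://") else { return }
    UIApplication.shared.open(url) { opened in
        if !opened {
            print("Feedback Assistant is not available on this build.")
        }
    }
}
```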

Nicole Nguyen and Cordilia James, Wall Street Journal:

Different types of data, including information that can be subpoenaed from period trackers, can create an extremely detailed profile of you when combined. Prof. Fowler says she thinks it is likely that user data will have greater importance if more places criminalize abortion.

While period trackers collect and store health data, there aren’t typically special protections governing that information, said Prof. Fowler. Apps can use your data how they choose as outlined in their privacy policies, she said, adding that ideally the data would be stored on your devices — rather than in the cloud — and not be subject to third-party tracking.

Period tracking apps’ sometimes sketchy privacy policies, and the legal jeopardy in which they can place users, are explicitly called out in Sen. Elizabeth Warren’s announcement of a bill to curtail data brokers.

Apple’s first-party Health app is the only one that encrypts users’ data end-to-end. Unfortunately, it is halfway between an all-in-one health tracking app and a repository for other apps’ data. I do not have experience with entering a menstrual cycle, but I find manually adding cycling distance or — new in iOS 16 — medication to be confusing and inelegant.

Even if a period tracking app is sharing data with Health, it is worth remembering that its own in-app privacy and data use policies apply.
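
For context on how that sharing works, here is a minimal sketch of a third-party app writing a sample into the Health store, using a cycling distance entry as the example. The point stands either way: data written to Health gains Health’s protections, but whatever copy the app keeps for itself is still governed by its own policies.

```swift
import HealthKit

// A minimal sketch of how a third-party app writes a sample into
// Apple's Health store — here, a cycling distance entry.
func saveCyclingDistance(metres: Double, from start: Date, to end: Date,
                         store: HKHealthStore) async throws {
    guard let distanceType = HKQuantityType.quantityType(
        forIdentifier: .distanceCycling) else { return }
    // The user must grant write access before anything is stored.
    try await store.requestAuthorization(toShare: [distanceType], read: [])
    let sample = HKQuantitySample(
        type: distanceType,
        quantity: HKQuantity(unit: .meter(), doubleValue: metres),
        start: start,
        end: end
    )
    try await store.save(sample)
}
```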

Jon Brodkin, Ars Technica:

A bill introduced by Sen. Elizabeth Warren (D-Mass.) would prohibit data brokers from selling Americans’ location and health data, Warren’s office said Wednesday.

“Largely unregulated by federal law, data brokers gather intensely personal data such as location data from seemingly innocuous sources including weather apps and prayer apps—oftentimes without the consumer’s consent or knowledge,” a bill summary said. “Then, brokers turn around and sell the data in bulk to virtually any willing buyer, reaping massive profits.”

I do love the sound of this. Though Brodkin says it bans selling certain data types, it is actually more comprehensive — if passed, data brokers would be prohibited from doing just about anything with location and health data “declared or inferred”.

It seems too good to be true, and my hopes were quashed when I read this piece from Jeffrey Neuburger of the National Law Review:

[…] The bill makes exceptions for health information transfers done lawfully under HIPAA, publication of “newsworthy information of legitimate public concern” under the First Amendment, or disclosure for which the individual provides “valid authorization.” The FTC would be responsible for adapting the HIPAA-related term “valid authorization” to fit the location data context. It is possible that the conspicuous notice and consent processes surrounding the collection and use of the data — as is currently in place in many mobile applications — will suffice.

If all big ideas for protecting privacy come down to the same notice and consent laws that have had mixed results around the world, I do not think we will find ourselves in a better place. Everyone will simply be more irritated by the technology they use while finding few privacy benefits. I understand the value of someone consenting to have information collected and shared, but there needs to be a better model for confirming an opt-in and limitations on its use.

Julia Conley, Common Dreams:

Warren noted that location data has already been used by federal agencies to circumvent the Fourth Amendment by purchasing private data instead of obtaining it via a subpoena or warrant and to out LGBTQ+ people.

I continue to wonder how much of a factor it is that law enforcement and intelligence agencies rely on anti-privacy companies and data brokers as a workaround for more scrutinized legal measures.

José Adorno, 9to5Mac (via Matt Birchler):

I honestly think watchOS 9 is the greatest update for the Apple Watch in years, and the main reason is Apple Watch Series 3 going away for good. I know it’s a very affordable wearable for those who just want an Apple device to track their daily activities, but with new Watches coming soon and so many new technologies available, Apple is making the right move to support watchOS 9 only to newer Watches.

Supporting only newer Apple Watches is absolutely the right call. What is not is how Apple continues to sell the Series 3 with little acknowledgement of its forthcoming software dead end. As of writing, the Series 3 marketing webpage contains zero notice of WatchOS 9 incompatibility, and neither does the comparison tool. It is mentioned on the Apple Store page — “compatible with watchOS 8 and earlier versions”, as though one could install earlier versions if they wanted to — and on the WatchOS 9 preview page. WatchOS 9 may not yet be publicly available, but I still do not think it is fair for Apple to continue selling an incompatible model. And, in a perfect world, sales of the Series 3 would have ceased one or two years ago.

Dr. Drang:

You see, things have changed since my defense of the Series 3 back in September. I was still running watchOS 7 then and the experience — apart from OS updates — was perfectly fine. But a couple of months ago, I bit the bullet and went through the multi-hour experience to update to watchOS 8. I had read that watchOS 8 would make the pain of updates a thing of the past. What I had not read was that the occasional pain of long updates would be replaced by the daily pain of a watch that commonly takes a second or two to respond to taps and swipes — if it responds at all.

As someone with a 2012 MacBook Air destined to run MacOS Catalina until its circuits melt, I feel Drang’s pain. It is unfortunate when the final year of software compatibility coincides with an unsatisfactory release on that hardware.

Matthew Panzarino, TechCrunch:

I had a chance to talk briefly with Apple SVP of Software Engineering Craig Federighi last week about the new iPadOS features aimed at enhancing multitasking and multi-app work. We chatted about the timing, execution and reactions to these announcements.

[…]

Stage Manager takes advantage of the more powerful GPU, faster I/O in virtual memory, faster storage and more RAM that the M1 chips brought to the table. That all had to come together to keep the experience fluid and this year, they did, says Federighi.

Michael Tsai:

[…] I honestly don’t understand his argument. I don’t think it’s that pre-M1 iPads couldn’t support virtual memory, since even the A12Z in the DTK did. That processor also had great performance running more simultaneous apps than iPadOS supports. Stage Manager is also supported on older Macs with Intel processors — and older graphics — that are less capable than recent-but-not-M1 iPads.

Tsai’s post includes a roundup of commentary, including several people pointing out how multitasking has existed in MacOS for decades, even on systems running on the apparently asthmatic performance of Intel and PowerPC processors.

Manton Reece:

I think the root issue is that when people choose a computer to buy, they don’t expect the operating system to change significantly for different computer models. You buy a more expensive Mac because it has a larger screen, or is faster, or has more ports. You buy a more expensive iPhone because it has better cameras. You buy a more expensive iPad because it has the latest Pencil support. It is a hardware decision, not a software one.

I love a piece that makes me think more about something I considered settled, and this is one of those. I count myself among those who saw recent iPad Pro models in need of differentiation, and Stage Manager delivers.

Reece frames this limitation as “without precedent”: a set of system features, beyond wallpapers and grab-bag extras, gated by high hardware requirements. It is almost a reversal of the way system requirements used to work, where you could get bare minimum compatibility all around; if you had better hardware, you could get some bonus features. This is common in games, and readers of a certain age will remember when more capable hardware got you better UI performance in Mac OS X. But even if you ran the least-powerful Mac supported by the operating system, you still got full multitasking.

The restriction of Stage Manager and memory swapping in iPadOS 16 is the complete inverse. No matter which iPad model you have, you will see gorgeous graphics and use super fast flash memory — but you need a recent iPad Pro or Air model to more efficiently multitask. That is kind of weird. Buying a MacBook Pro does not unlock a better workflow model than what is available on a MacBook Air, but buying an iPad Pro means you get exclusive system capabilities. Strange — but perhaps something to get used to as a differentiator going forward.