Month: April 2020

Eric S. Yuan, Zoom’s CEO and founder, in a statement published to the company’s blog earlier this month:

First, some background: our platform was built primarily for enterprise customers – large institutions with full IT support. These range from the world’s largest financial services companies to leading telecommunications providers, government agencies, universities, healthcare organizations, and telemedicine practices. Thousands of enterprises around the world have done exhaustive security reviews of our user, network, and data center layers and confidently selected Zoom for complete deployment.

“Enterprise customers” who completed “exhaustive security reviews of our user, network, and data center layers and confidently selected Zoom” would, presumably, include Dropbox. The two companies promised tight integration and frequently promoted their partnership.

But, while the two companies were publicly demonstrating their collaborative efforts, Dropbox was privately shocked by the number of vulnerabilities in Zoom’s software — something they knew because they were actively finding security holes and pushing Zoom to fix them.

Natasha Singer and Nicole Perlroth, New York Times:

One year ago, two Australian hackers found themselves on an eight-hour flight to Singapore to attend a live hacking competition sponsored by Dropbox. At 30,000 feet, with nothing but a slow internet connection, they decided to get a head start by hacking Zoom, a videoconferencing service that they knew was used by many Dropbox employees.

The hackers soon uncovered a major security vulnerability in Zoom’s software that could have allowed attackers to covertly control certain users’ Mac computers. It was precisely the type of bug that security engineers at Dropbox had come to dread from Zoom, according to three former Dropbox engineers.

[…]

Dropbox grew so concerned that vulnerabilities in the videoconferencing system might compromise its own corporate security that the file-hosting giant took the unusual step of policing Zoom’s security practices itself, according to the former engineers, who spoke on the condition of anonymity because they were not authorized to publicly discuss their work.

Imagine how bad Zoom’s security would be if all those enterprise customers had not worked so diligently on, as Yuan claimed, “exhaustive security reviews”.

The thing we know for certain about bullshit is that, no matter how hard we try, it is virtually impossible to counter, eradicate, minimize, or undo. Harry Frankfurt described this phenomenon in “On Bullshit”:

[…] Someone who lies and someone who tells the truth are playing on opposite sides, so to speak, in the same game. Each responds to the facts as he understands them, although the response of the one is guided by the authority of the truth, while the response of the other defies that authority and refuses to meet its demands. The bullshitter ignores these demands altogether. He does not reject the authority of the truth, as the liar does, and oppose himself to it. He pays no attention to it at all. By virtue of this, bullshit is a greater enemy of the truth than lies are.

Alberto Brandolini summarized the problem in 2013:

The amount of energy needed to refute bullshit is an order of magnitude bigger than to produce it.

This rule remains true for the bullshit web — the marketing cruft, bloated advertising, tracking mechanisms, and Google’s unnecessary and toxic Accelerated Mobile Pages project that have come to dominate the web. Users have tried to fight back by adopting ad blockers and switching to more privacy-focused web browsers. But bullshit is stronger than that. It cannot be contained by browsers or by users’ preferences alone. The response by its purveyors illustrates how thoroughly bullshit resists control.

You’re probably familiar with ad blocker blockers like the one from Admiral, which calls itself a “visitor relationship management company” and has a website that contains an SVG animation which uses over 100% of one of my iMac’s CPU cores. Another popular service is Blockthrough. In its 2020 ad blocking report, it claims to be the “most popular dedicated provider of ad recovery” using an “Acceptable Ads” whitelist. Most ad blocker blockers are only a little irritating. They might show a notice guilting you into disabling your ad blocker; some will prevent the article from loading until the ad blocker is disabled, though many will offer an option to proceed anyway.

But some go much further, like Instart’s AppShield. They call their product an “Ad Integrity” feature, which works by encrypting most of the media on each page and requiring the company’s scripts to decode the page. If those scripts are blocked, so, too, will be everything from pictures to links. If you visited a CBS Interactive property like CNet or Metacritic at some point over the past few years with an ad or tracking blocker turned on, you may have noticed that many of the pages failed to fully load. Instart went to great lengths in an attempt to avoid detection of AppShield, from obfuscating code to monitoring whether the developer console is open.
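
Instart’s exact detection code was never published, but the general shape of a developer-console check is widely known. A minimal sketch of one common heuristic, comparing the window’s outer and inner dimensions, might look like this:

```typescript
// A sketch of a widely used devtools-detection heuristic, not Instart's
// actual code: when the developer console is docked, the window's outer
// dimensions exceed its inner dimensions by more than browser chrome alone
// would explain.
const DEVTOOLS_THRESHOLD = 160; // pixels; an assumed typical panel width

function devToolsLikelyOpen(): boolean {
  const widthGap = window.outerWidth - window.innerWidth;
  const heightGap = window.outerHeight - window.innerHeight;
  return widthGap > DEVTOOLS_THRESHOLD || heightGap > DEVTOOLS_THRESHOLD;
}

// Poll periodically; a script like AppShield could simply refuse to decode
// the page's obfuscated content while the console appears to be open.
setInterval(() => {
  if (devToolsLikelyOpen()) {
    document.body.textContent = "Please close the developer console.";
  }
}, 1000);
```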

It’s not just anti-ad blockers that are pervasive; a user’s attempts to withdraw from all sorts of the web’s bullshit are countered at every turn. Providers of push notification services for websites recognized that users could simply opt out of receiving requests to enable notifications on all websites, so they switched to JavaScript-based prompts that cannot be universally dismissed. Analytics providers are promoting tactics for “greater reliability” and creating workarounds for anti-tracking policies.
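
Those JavaScript prompts are ordinary page elements, so a browser setting that blocks notification permission requests never applies to them. A minimal sketch of the pattern, with hypothetical markup and copy, might look like this:

```typescript
// A minimal sketch of a "pre-prompt": a DOM element styled to look like a
// permission dialog. Because it is ordinary page content, blocking native
// notification prompts in the browser does nothing to suppress it.
function showNotificationPrePrompt(): void {
  const banner = document.createElement("div");
  banner.className = "push-pre-prompt"; // hypothetical class name
  banner.innerHTML = `
    <p>Get breaking news alerts?</p>
    <button id="pre-prompt-allow">Allow</button>
    <button id="pre-prompt-later">Not now</button>
  `;
  document.body.appendChild(banner);

  // Only after the visitor clicks the fake "Allow" button does the site ask
  // the browser for real permission, so the native prompt is rarely refused.
  banner.querySelector("#pre-prompt-allow")?.addEventListener("click", () => {
    Notification.requestPermission();
    banner.remove();
  });

  // "Not now" merely hides the banner; nothing stops it reappearing later.
  banner.querySelector("#pre-prompt-later")?.addEventListener("click", () => {
    banner.remove();
  });
}

showNotificationPrePrompt();
```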

Violations of users’ intent are nothing new. Ad tech companies like Criteo and AdRoll created workarounds specifically to track Safari users without their explicit consent; Google was penalized by the FTC for ignoring Safari users’ preferences. These techniques are arrogant and unethical. If a user has set their browser preferences to make tracking difficult or impossible, that decision should be respected. Likewise, if a browser ships with defaults that are not favourable to tracking, it is not the prerogative of ad tech companies to ignore or work around those defaults. Just because browsers have historically been receptive by default to all sorts of privacy-hostile technologies does not mean that those defaults are more correct.

A lack of user control is a worrisome theme amongst web bullshit purveyors. Think of all of the video files you have inadvertently streamed because they were included on a webpage that prioritized flashiness. Think of all of the times you have been stymied while trying to load some ostensibly simple page over a poor connection because of a wasteful use of resources.

This arrogance reaches its zenith when we test the limits of whether the integrity of a webpage is dependent on the entirety of its components. For instance, is it fair for a webpage to use a visitor’s CPU power to generate cryptocurrency? A recent analysis of the million most popular websites found that around a third of websites which use WebAssembly are running cryptocurrency miners; it was the most popular use of WebAssembly among the websites surveyed. The survey found that WebAssembly was being used in any capacity by only around 1,600 popular websites.

This stuff is like malware — except malware is usually relegated to the confines of software obtained through dubious means. Cryptocurrency miners, on the other hand, can be found on name-brand websites. Surely not all of these mining scripts were deliberately added; some end up on a webpage through an ad network or similar means.

Still, these scripts are out there. When I started writing JavaScript twenty years ago, I used it to swap button styling or create dropdown menus. Now, it is somehow possible for a news article to have buried in its ad tech package a few-kilobyte obfuscated JavaScript file that will maximize CPU cycles, destroy battery life, and spool up your computer’s fans as it generates cryptocurrency in the background. One could argue that this is just another revenue source for a website, but it isn’t that simple. It’s an insidious and hostile way of usurping power and control.
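
The production versions are obfuscated and wired to a mining pool, but the underlying shape is simple. This deliberately incomplete sketch just spins up a Web Worker that burns CPU on hashes off the main thread; the pool protocol and real proof-of-work are omitted:

```typescript
// A deliberately simplified sketch of the general shape of an in-browser
// miner: a Web Worker grinds through hashes off the main thread, so the page
// stays responsive while CPU use and battery drain climb.
const workerSource = `
  async function sha256(data) {
    const digest = await crypto.subtle.digest("SHA-256", data);
    return new Uint8Array(digest);
  }
  (async () => {
    let nonce = 0;
    const buffer = new Uint8Array(64);
    while (true) {                  // runs for as long as the tab stays open
      new DataView(buffer.buffer).setUint32(0, nonce++);
      await sha256(buffer);         // burn cycles; a real miner checks a target
      if (nonce % 1000000 === 0) postMessage(nonce);
    }
  })();
`;

const blob = new Blob([workerSource], { type: "application/javascript" });
const miner = new Worker(URL.createObjectURL(blob));
miner.onmessage = (event) => console.log("hashes computed:", event.data);
```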

If it were up to me, cryptocurrency mining would not be a capability offered by any web technology. But there is no way to remove just one CPU-sucking application from a web technology that enables many CPU-sucking applications. If WebAssembly were universally curtailed or dropped entirely, we would no longer face the scourge of cryptocurrency miners in browsers, but we might also lose web applications that require higher-performance programming languages. And, notably, the use of those languages has broadened beyond the previous confines of the World Wide Web. It’s not just crappy Electron apps — which is to say all Electron apps. Web technologies are everywhere in otherwise native apps, from Adobe’s Creative Cloud to plenty of Apple’s own, and there are many reasons why.

But it also means that any website you visit is brimming with capabilities it almost certainly does not need, and which can be used in ways that users have not consented to and have little control over or knowledge of. You can disable JavaScript entirely, but that makes some websites unusable. In some browsers — though not Safari — you can disable JavaScript on a per-website basis, but that is a blunt instrument: it stops desktop-class code and, at the same time, prevents the site from displaying a photo in an overlay.1

The bullshit web is not going away — on the contrary, it has leached into the desktop, threatening native applications, and it is this very growth that is allowing websites to become laden with toxic technical waste. One of the reasons the bullshit web is able to exist at all is because the web is increasingly a universal operating system. We build websites as applications, which encourages standards bodies to add app-like features to web languages, which means even more apps get built with web languages.

It is as much a tasteless exercise as it is progress, and we should not accept its creeping intrusion. You can still separate a great Mac app from a poor one, but there is a new basement, and it is found in apps which are partially or exclusively websites. It is foolish to run several instances of Chromium at once, and it is profoundly disrespectful for everything from our file syncing app to a videoconferencing app to be hundreds of megabytes each when their native-built equivalents are in the tens of megabytes. It is bizarre and poetic that the bullshit web has expanded to such an extent that we now construct it using a website disguised as an app.

All of this is to say that many of the surface-level indicators of the bullshit web may go away. Like popup ads and Flash, the worst qualities of today’s web will eventually run their course. I look forward to a time when I no longer open an AMP URL, change it to a real webpage address, then am required to set my cookie preferences, state that I do not wish to receive push notifications, pause an autoplaying video, close a form asking for my email address, and hide a half-page advertisement only to read a single article. A future without all of that trash would undeniably be a very good thing.

Under the hood, though, the bullshit web will be more insidious and all-encompassing. Users will have less control over what is allowed to execute on their computer while unscrupulous practices are normalized. The worst privacy offences may be curtailed by regulation enforcing strong rights, but the worst technical offences will be hard to stop so long as websites are apps and apps are websites.


  1. Yes, I know there are non-JavaScript implementations of photo lightbox scripts; please do not write. ↥︎

At Android Central, Harish Jonnalagadda writes that, with the new iPhone SE, “you’re not only getting a phone with reliable hardware, but you’re also getting great software and more updates than you would on Android”. At 9to5Google, Seth Weintraub says that it is “setting a high bar this year, particularly matching the Pixel (a) series price points”.

But over at Gizmodo, Sam Rutherford remains unsatisfied:

After months of rumors and leaks, the new iPhone SE has finally arrived with a starting price of $400. It’s a great deal. But deep down inside, I can’t help but feel like Apple could have done more with its affordable new iPhone.

[…]

More importantly, what’s kind of bumming me out are some of the things that Apple didn’t do. A good example is the iPhone SE’s 4.7-inch LCD screen. By going with what looks to be the same display used on the iPhone 8, Apple was probably able to save a little money. However, you have to compare it to other budget and mid-range phones like the Pixel 3a, Galaxy A50, and the OnePlus 7T (which got a recent price reduction down to just $500), that all have more colorful OLED displays (which is also what Apple uses on the iPhone 11 Pro). Apple’s decision to stick with LCD panels seems like a missed opportunity.

Rutherford provides no reason why the iPhone SE ought to have an OLED display other than saying it is “more colourful”. The iPhone SE’s LCD display has exactly the same P3 colour gamut as the iPad Pro’s LCD display and the iPhone 11 Pro’s OLED display.

On top of that, only including a single rear camera is somewhat disappointing, because while having multiple rear cameras used to be a luxury only available on expensive flagship handsets, a ton of affordable phones including the Moto G Power, Galaxy A50, and the upcoming TCL 10L all have two or more rear cameras while still sporting price tags of $350 or less.

But perhaps the biggest disappointment is that Apple didn’t really do anything to maximize the size of the iPhone SE’s screen. Last year, when Apple announced the iPhone 11, iPhone 11 Pro, and iPhone 11 Pro Max, it felt like Apple was finally ready to boldly move forward into a world centered around FaceID and thinner bezels. Seeing the iPhone SE still sporting fat bezels and TouchID is somewhat confusing (even if it’s a little welcome during a time we’re all having to wear masks outside of our homes).

Rutherford is the same guy who thought that it was perfectly reasonable for Samsung to charge $1,700 for the Galaxy Fold but it was unconscionable to price the iPhone X and XS at a thousand bucks. He now thinks that you should be able to get all of the features of an iPhone 11 Pro with a slightly smaller display for $399, and he will not be satisfied until that happens.

It’s a small wonder that Apple has not hired this guy to set the prices of all of its products. Imagine how successful it would be.

Dan Seifert, the Verge:

It’s been a bad year for small phone lovers. It’s no secret that the average size of new smartphones has increased dramatically over the past few years. But this year it feels like the idea of a small phone you’d actually want to use as a primary device (read: not whatever that Palm phone was trying to be a couple years back) is truly dead and gone.

It’s hilarious in hindsight to go back to reviews of the first Samsung Galaxy Note, which came with a “humungous” 5.3-inch display. Because of its chunky bezels, the phone was bigger than its display size would suggest, but it’s similar in length, depth, and weight to an iPhone 11, and only a little wider. This was a time when people used the word “phablet”, and when reviewers questioned if there was any reason to buy an iPad because these phones were just so gosh darn big. How quaint.

But, after reading this piece, I’m not sure why Seifert picks this as the year the truly small smartphone became extinct. He points out that Apple stopped selling its last four-inch iPhone in 2018, and new Android phones have been even larger for years.

TC Sottek:

[The] refusal of smartphone makers to acknowledge a huge part of the population who prefer smaller phones is just wild. [We] make pants in 80,000 sizes but no you can only have the XL phone or the XXL phone.

Even after carrying it every day for the last two and a half years, I still find that my iPhone X feels uncomfortable in my pocket. I could use one hand to solidly grip both sides of the iPhone 5S I used to have, and I almost never dropped that phone. I have dropped the similarly-sized 6S and X more times than I can count. I can’t imagine using either of the bigger iPhone 11 models.

Seifert:

It’s easy to see why Apple went with the larger design for the new model: the company claims this size is the most popular iPhone ever released, and on a technical level, it’s easier to fit components into a larger frame than a smaller one.

Plus, Apple has years of experience with this basic form factor, going all the way back to the iPhone 6. In fact, if you cast your memory back to that time, you might recall that the iPhone 6 and 6 Plus were worldwide blockbusters because Apple was finally meeting demand for big phones. What was big in 2014 is now small in 2020.

This form factor has never been my favourite, but its longevity is a testament to the skill and taste of Apple’s industrial designers.

Apple:

Apple today announced the second-generation iPhone SE, a powerful new iPhone featuring a 4.7-inch Retina HD display, paired with Touch ID for industry-leading security. iPhone SE comes in a compact design, reinvented from the inside out, and is the most affordable iPhone. The new iPhone SE is powered by the Apple-designed A13 Bionic, the fastest chip in a smartphone, to handle the most demanding tasks. iPhone SE also features the best single-camera system ever in an iPhone, which unlocks the benefits of computational photography including Portrait mode, and is designed to withstand the elements with dust and water resistance.

iPhone SE comes in three beautiful colors — black, white and (PRODUCT)RED — and will be available for pre-order beginning Friday, April 17, starting at just $399 (US).

Brian X. Chen, New York Times:

Apple typically holds splashy events to introduce new products. But with the pandemic, the company instead live-streamed a product executive showing a slide show of images of the new iPhone to make its announcement.

Those of you who were patiently waiting for Apple to announce a new iPhone in the vein of the original iPhone SE — a small screen in a classic body with the latest processor and camera — will find that this ticks most boxes. Apple says that it has the same SoC as the iPhone 11 and 11 Pro, dual SIM support, and faster wireless, all for $50 less than the iPhone 8. Sounds like a great deal to me.

One thing this phone seems to confirm, though, is that the four-inch display is not coming back to the iPhone. At its launch, the 2016 iPhone SE replaced the 5S as the smallest phone in Apple’s lineup; that is the case with this SE, too, but it uses the same still-large 4.7-inch display size as the iPhone 8. The iPod Touch remains available with a smaller display, but I get the feeling that is only the case because it is by far the cheapest iOS device Apple sells. There is no direct replacement for the iPhone 8 Plus — no “iPhone SE Plus” — so this single 4.7-inch model remains the last vestige of the home button.

The new white model, now with a black bezel around the screen, looks particularly sharp. Like iPhone 11 models in many countries, these phones only have an Apple logo on the back — and, in the case of the red version, a “(PRODUCT)RED” designation. I still can’t get used to the vertically centred logo.

Update: Now that the iPhone 8 has been discontinued, Apple no longer sells any iPhone with 3D Touch. Makes me wonder if Force Touch will be dropped from the Apple Watch as well.

Apple first announced that the Magic Keyboard would be available in May, but it is now available to order and will arrive on doorsteps next week.

A couple of years ago, Apple slipped into a bit of a pattern of missing already-announced release dates. It seems like whatever was gumming up the works has been corrected and the company has returned to form: the AirPods Pro, Mac Pro, and products announced in March were all released according to their public schedule — or, in the case of this Magic Keyboard, earlier than promised. Good stuff.

Zoe Schiffer, the Verge:

In many ways, this was Medium working as intended. The company operates both as a blogging platform and as a media outlet. Some articles, written by professional journalists who work at one of Medium’s publications, are fact-checked. Others are written by conspiracy theorists and contain dangerous misinformation. After all, the company’s mission, according to CEO Ev Williams, is to level the playing field and encourage “ideas that come from anywhere.”

As the pandemic disrupts life in the US, Medium has made strides to stop the spread of misleading health news. Its own publications, like OneZero and Elemental, have covered COVID-19 with the journalistic ethics, editing, and fact-checking you’d expect from a traditional outlet. Medium also started an official COVID-19 blog to promote articles from verified experts. It rolled out a coronavirus content policy and hired a team of science editors.

But the decision to curate some content — to hire professional journalists and promote verified articles — has made it harder to tell fact from fiction on the platform. While user-generated pieces now have a warning at the top telling users the content isn’t fact-checked, they look otherwise identical to those written by medical experts or reporters. In some ways, this is the promise of Medium: to make the work of amateurs look professional.

Much in the same way that social media networks tend to blur unique identities into a homogenous, context-averse feed, the uniform look of Medium posts makes it harder to identify the higher-quality work of its own publications. Medium is running a contributor network by another name, and experiencing similar results: lots of charlatans taking advantage of higher production values and an established domain name.

According to this Atlas Obscura article, the handful of drive-in theatres that are open right now in the United States are busier than they have been in recent memory. But DriveInMovie.com — a website that I fully believe is “the internet’s oldest drive-in movie resource” — says that there are no drive-in theatres in Alberta. The last one here in Calgary closed in 1999. Now would be as good a time as any to open one up.

Robert Mackey, the Intercept:

Donald Trump grinned broadly on Monday as he tricked the news networks into broadcasting a taxpayer-funded testimonial to his own leadership, in the form of a video highlight reel of presidential statements on the coronavirus crisis, set to stirring music, unveiled during the president’s 29th daily briefing on the pandemic.

[…]

The centerpiece of the video was a timeline of actions by Trump and his administration, highlighting the partial ban on travel from China he ordered on January 31, and his declaration of a national emergency on March 13.

But, as CBS News correspondent Paula Reid pointed out to Trump after the video ended, there was a huge gap in the timeline: It mentioned absolutely no action by him in February and there was, as the Times had noted, a period of “six long weeks” after the travel restrictions until he “finally took aggressive action to confront the danger the nation was facing.”

The president did not simply fail to answer Reid’s questions; he threw a tantrum while dodging the premise. There is nobody less equipped to take charge of a response to this pandemic than this guy — no person so self-obsessed, so quick to gloat, so thin-skinned, and so deliberately ignorant.

Julia Angwin, the Markup:

The idea is elegant in its simplicity: Google and Apple phones would quietly in the background create a database of other phones that have been in Bluetooth range — about 100 to 200 feet — over a rolling two-week time period. When users find out that they are infected, they can send an alert to all the phones that were recently in their proximity.

The broadcast would not identify the infected person, it would just alert the recipient that someone in his or her recent orbit had been infected. And, importantly, the companies say they are not collecting data on people’s identities and infection status. Nearly all of the data and communication would be stored on users’ phones.

But building a data set of people who have been in the same room together — data that would likely be extremely valuable to both marketers and law enforcement — is not without risk of exploitation, even stored on people’s phones, security and privacy experts said in interviews.

Maciej Cegłowski:

I appreciate any commentary on the privacy design of contact tracking that acknowledges the existence of an entire unregulated industry already trafficking in this information. The current effort is noble but fundamentally ridiculous, like painting bike lanes on the interstate.

[…]

I wish people had picked any other moment than the start of a pandemic to come to terms with the implications of living in a surveillance society. But by the iron law of 2020, the dumbest thing has to happen, and that means a principled debate about fictional forms of privacy.

Zack Whittaker and Darrell Etherington, TechCrunch:

TechCrunch joined a media call with Apple and Google representatives, allowing reporters to ask questions about their coronavirus tracing efforts.

[…]

The companies said only public health authorities will be allowed access to the contact tracing API.

This limited API use will be restricted in the same spirit that you restrict individual healthcare to licensed medical professionals like physicians. In the same way, use of the API will be restricted only to authorized public health organizations as identified by whatever government is responsible for designating such entities for a given country or region. There could be conflict about what constitutes a legitimate public health agency in some cases, and even disagreements between national and state authorities, conceivably, so this sounds like it could be a place where friction might occur, with Apple and Google on tricky footing as platform operators.

Casey Newton:

Third, the companies said they would prevent abuse of the system by routing alerts through public health agencies. (They are also helping those agencies, such as Britain’s National Health Service, build apps to do just that.) While the details are still being worked out, and may vary from agency to agency, Apple and Google said they recognized the importance of not allowing people to trigger alerts based on unverified claims of a COVID-19 infection. Instead, they said, people who are diagnosed will be given a one-time code by the public health agency, which the newly diagnosed will have to enter to trigger the alert.

Fourth, the companies promised to use the system only for contact tracing, and to dismantle the network when it becomes appropriate. Some readers have asked me whether the system might be put to other uses, such as targeted advertising, or whether non-governmental organizations might be given access to it. Today Apple and Google explicitly said no.

Aside from bad faith arguments from those who assume that this framework will be implemented in the dumbest possible way, I think the answers provided by Apple and Google representatives ought to assuage overall worries about the design of this contact tracing system. I am emphatically not saying that there are no further criticisms that can be levelled at it, only that they should be far more precise. It can be true that a reckoning is needed to correct the privacy failures of the past twenty years and also that this is a particularly unwelcome time to suddenly have that realization.

Dylan Smith, Digital Music News:

It’s hardly a secret that the coronavirus (COVID-19) crisis has brought the live-event industry to a screeching halt. Responding both to government mandates and health concerns, promoters have canceled (or delayed) sporting events, concerts, and essentially all other audience-based entertainment functions.

And predictably, a substantial number of would-be attendees are looking to receive refunds for the tickets they bought prior to the pandemic. In responding to this unprecedented cluster of repayment requests, Ticketmaster has quietly changed its refund policy to cover only canceled events — not the many functions that promoters have indefinitely “postponed” or rescheduled to a date/time that some ticketholders cannot make.

Ticketmaster is doing this in the middle of both a pandemic and an economic crisis. There are undoubtedly people who can no longer afford to attend a to-be-rescheduled show, particularly if it involves travel. Their best option will be to resell their tickets through Ticketmaster’s own scummy platform.

Matthew Prince and Sergi Isasi of Cloudflare:

Earlier this year, Google informed us that they were going to begin charging for reCAPTCHA. That is entirely within their right. Cloudflare, given our volume, no doubt imposed significant costs on the reCAPTCHA service, even for Google.

Again, this is entirely rational for Google. If the value of the image classification training did not exceed those costs, it makes perfect sense for Google to ask for payment for the service they provide. In our case, that would have added millions of dollars in annual costs just to continue to use reCAPTCHA for our free users. That was finally enough of an impetus for us to look for a better alternative.

Lindsey O’Donnell, Threatpost:

Google initially provided reCAPTCHA for free in exchange for data from the service, which was used to train its visual identification systems. But according to Prince, earlier this year Google said they plan to start charging for reCAPTCHA use. According to The Register, Google said there’s no charge for reCAPTCHA unless customers exceed one million queries per month (or 1,000 API calls per second).

I had never heard of hCAPTCHA before seeing it across Cloudflare-protected websites, but it is promising to see alternatives to Google-owned properties used at such a large scale. Cloudflare has a pretty good track record when it comes to privacy, too, so their choice of a specific alternative is a meaningful endorsement.

In general, it is a good thing to see fewer elements of the web’s infrastructure being controlled by the same handful of companies. I am painfully aware of how limited that line of argument is when the company that runs hCAPTCHA is touting in its press release that Cloudflare controls 12% of the web’s traffic. But, still, at least all that traffic is not being protected by the web’s biggest advertising network, too.

Derek Thompson, writing in the Atlantic earlier this week:

Our cellphones and smartphones have several means of logging our activity. GPS tracks our location, and Bluetooth exchanges signals with nearby devices. In its most basic form, cellphone tracing might go like this: If someone tests positive for COVID-19, health officials could obtain a record of that person’s cellphone activity and compare it with the data emitted by other phone owners. If officials saw any GPS overlaps (e.g., data showing that I went to a McDonald’s hot spot) or Bluetooth hits (e.g., data showing that I came within several feet of a new patient), they could contact me and urge me to self-isolate, or seek a test.

Casey Newton, the Verge:

All of these efforts seem to skip over the question of whether a Bluetooth-reported “contact event” is an effective method of contact tracing to begin with. On Thursday I spoke with Dr. Farzad Mostashari, the former national coordinator for health information technology at the Department of Health and Human Services. (Today he’s the CEO of Aledade, which makes management software for physicians.) Mostashari had recently posted a Twitter thread expressing skepticism over Bluetooth-based contact tracing, and I asked him to elaborate.

The first problem he described is getting a meaningful number of people to install the app and make sure it’s active as everyone makes their way through the world. Most countries have made app installation voluntary, and adoption has been low. In Singapore, Mostashari told me, adoption has been about 12 percent of the population. If the United States had similar adoption, you’ve now made your big contact-tracing bet on the likelihood that two people passing one another have both installed this app on your phone. The statistical likelihood of this is about 1.44 percent. (It could be higher in areas with greater population density or where the app was more widely installed.)
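
The 1.44 percent figure appears to be nothing more than the product of two independent 12 percent adoption rates; a quick check:

```typescript
// Back-of-the-envelope check of the quoted figure, assuming 12 percent
// adoption and that the two people install the app independently.
const adoption = 0.12;
const bothInstalled = adoption * adoption;
console.log(`${(bothInstalled * 100).toFixed(2)}%`); // prints "1.44%"
```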

A low adoption rate of contact tracing is something that could be resolved if, say, one or both of the world’s mobile operating system vendors hopped on this problem.

Which brings us to today’s news — Apple and Google jointly announcing a two-stage effort to identify people who may have been exposed to the novel coronavirus:

First, in May, both companies will release APIs that enable interoperability between Android and iOS devices using apps from public health authorities. These official apps will be available for users to download via their respective app stores.

Second, in the coming months, Apple and Google will work to enable a broader Bluetooth-based contact tracing platform by building this functionality into the underlying platforms. This is a more robust solution than an API and would allow more individuals to participate, if they choose to opt in, as well as enable interaction with a broader ecosystem of apps and government health authorities. Privacy, transparency, and consent are of utmost importance in this effort, and we look forward to building this functionality in consultation with interested stakeholders. We will openly publish information about our work for others to analyze.

The short version, from Ina Fried at Axios:

In mid-May, the companies will update their operating system to support the contact-sharing technique and allow for contact-tracing apps.

In the coming months, a further operating system update will allow the system to work without needing a specific app.

As reported in the New York Times, this partnership formed about two weeks ago, so it is not fully realized yet. However, to supplement this announcement, both companies have released draft technical documentation explaining how the system will be implemented. From the Bluetooth document:

  • The Contact Tracing Bluetooth Specification does not require the user’s location; any use of location is completely optional to the schema. In any case, the user must provide their explicit consent in order for their location to be optionally used.

  • Rolling Proximity Identifiers change on average every 15 minutes, making it unlikely that user location can be tracked via bluetooth over time.

  • Proximity identifiers obtained from other devices are processed exclusively on device.
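
The draft specification’s precise key schedule is more involved than this, but the rolling-identifier idea in the bullets above can be sketched roughly as follows. The key names and derivation here are illustrative assumptions rather than the published algorithm:

```typescript
// A minimal sketch of the rolling-identifier idea, not the spec's exact key
// schedule: a per-day key is combined with the current 15-minute interval
// number to yield a short-lived identifier for the BLE advertisement.
import { createHmac, randomBytes } from "crypto";

// Hypothetical per-day tracing key (the draft derives this from a long-term
// key; here it is simply random for illustration).
const dailyKey = randomBytes(16);

// Interval number: how many 15-minute windows have elapsed today (UTC).
function intervalNumber(date: Date): number {
  const secondsIntoDay =
    date.getUTCHours() * 3600 + date.getUTCMinutes() * 60 + date.getUTCSeconds();
  return Math.floor(secondsIntoDay / (15 * 60));
}

// Rolling proximity identifier: changes every 15 minutes and reveals nothing
// about the key or the device that broadcast it.
function rollingProximityIdentifier(key: Buffer, interval: number): Buffer {
  return createHmac("sha256", key)
    .update(`CT-RPI${interval}`)
    .digest()
    .subarray(0, 16); // truncated to fit a Bluetooth advertisement
}

const rpi = rollingProximityIdentifier(dailyKey, intervalNumber(new Date()));
console.log(rpi.toString("hex"));
```

Because each broadcast is an opaque, truncated MAC that rotates every 15 minutes or so, an observer cannot easily link a device’s identifiers over time unless its daily keys are later shared, which is the privacy property the bullets above describe.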

Nicole Nguyen:

What’s most interesting about this is that it seems to not actually be about location data—it’s about proximity data […]

This is a more privacy-sensitive version of some of the makeshift ideas documented by Thompson, above, and it addresses the problem of scale even though it’s still, rightly, opt-in.

Here’s another question that Newton raised in his piece:

The second problem is that when these Bluetooth chips do pass in the night, you should expect a large number of false positives.

“If I am in the wide open, my Bluetooth and your Bluetooth might ping each other even if you’re much more than six feet away,” Mostashari said. “You could be through the wall from me in an apartment, and it could ping that we’re having a proximity event. You could be on a different floor of the building and it could ping. You could be biking by me in the open air and it could ping.”

The snarky side of my brain wonders if this is a real problem given how unreliable Bluetooth connections often are in typical use. For what it’s worth, the Washington Post also pointed this out — so the complaint is that Bluetooth is apparently both too sensitive and not sensitive enough.

It is understandable why this is being raised as a concern, but it beggars belief that it is something that would not be considered by either Apple or Google. They know Bluetooth signals pass through walls and floors, and that walls and floors often separate apartments and offices. I do not see a clear explanation of how this will be resolved in any of their documentation, so it is an open question, but it is surely not one that either company is unfamiliar with.

Another concern is an overwhelming amount of data creating patterns where none exist. In a Twitter thread, Ashkan Soltani points out the importance of avoiding noise that could interfere with other efforts. There are plenty of plausible scenarios in Soltani’s thread where false negatives and false positives could occur. But nobody is proposing that this technology should be used as a substitute for widespread testing and other policy initiatives. It is a complementary option, not a replacement.

There are reasonable concerns we should keep in mind about this initiative; however, on balance, I think we must see this as a contribution that has the possibility to more accurately direct resources and testing. It should not be trusted without verification, but it also should not be dismissed. There is no substitute for testing but, in the absence of good governance in critical countries, it becomes necessary to create supplements wherever possible.

Update: Maciej Cegłowski:

What this does highlight is the problem of governance. Deploying a new, nationwide contact-tracing technology should be something decided by a functioning Congress in consultation with a functioning CDC, not a fun crypto project for Chad and Brad in Cupertino and Mountain View.

It speaks to some deep-seated cynicism that many people would, apparently, prefer tech giants creating a contact tracing system than the CDC. I don’t think it’s wrong that Apple and Google are creating this system, but I do think it’s upsetting that they seemingly must.

Jagadeesh Chandraiah of SophosLabs:

Since we began writing last year about the consumer-hostile trend in mobile apps that we’re calling fleeceware, the number of apps we’ve discovered that engage in this practice have only increased. In the first two articles we wrote about fleeceware, we covered various Android apps in the official Play Store charging very high subscriptions for apps of questionable quality or utility.

In this latest round of research, we found more than 30 apps we consider fleeceware in Apple’s official App Store.

Many of these apps charge subscription rates like $30 per month or $9 per week after a 3- or 7-day trial period. If someone kept paying that subscription for a year, it would cost $360 or $468, respectively. For an app.

Like we have seen before, most of these fleeceware apps are image editors, horoscope/fortune telling/palm readers, QR code/barcode scanners, and face filter apps for adding silly tweaks to selfies.

I downloaded a horoscope app to see what this world of fleeceware was like. Turns out it’s as bad as you might think. Immediately after launching the app, I was prompted to enter my Apple ID password. I tapped “cancel” and the app proceeded to run. I entered a name, a birthday, and a time of birth — for some reason — after which it scanned my “palm”. Then it asked for my Apple ID password again, so I tapped “cancel” again, and then it said that it could show me my horoscope results with a three-day free trial and, after that, would charge me $13.49 per week.

How could anyone pass up a deal like that?

Tapping the subscription button showed me the standard in-app purchase sheet, so I confirmed the purchase with Face ID. Then it prompted me for my Apple ID’s password again, so I tapped the cancel button again, after which I was shown a SwiftyStoreKit error. I tried the in-app purchase again and, with trepidation, entered my Apple ID’s password at the prompt. My trial was unlocked; I could at long last know what the stars and my palm have in store for me, or whatever.

I learned a few things while running this experiment:

  1. One advantage of requiring apps to use Apple’s own in-app purchases API is that all subscriptions are tied to an Apple ID and known at the system level. That means that Apple could theoretically solve the problem of erroneous subscriptions by notifying consumers when a free trial is expiring.

  2. Even though I ostensibly have a free trial for three days, the fine print suggests that I must cancel by day two or I will be charged for the first week.

  3. The system Apple ID password prompt still looks like a phishing scam. My understanding is that a developer could reproduce the overall look and feel of this dialog, but would be unable to read my Apple ID’s email address and the prompt would not persist after switching apps. So, while I am fairly confident that my password is not in the hands of some criminal enterprise, I will be changing it.

    This dialog is in desperate need of a redesign that clearly indicates that it is something that is generated by the system rather than an app. Perhaps the app could be shaded and zoom out slightly, as with the share sheet, and a sheet similar to the in-app purchase confirmation could prompt for the password. I’m not sure if this is the right solution, but it would more clearly indicate that this is a system-level action and that it’s safe to enter your password.

Ever since subscriptions were opened up to all types of apps, they have been a scammer’s best friend. When coupled with a free trial, there’s a low barrier to onboarding users and generating recurring revenue. Apps that offer subscriptions should be more closely scrutinized, particularly when the app is for jokey entertainment purposes or when there are a large number of negative reviews.

Update: Riley Tomasek pointed out that new rules are being imposed by Visa (PDF) regarding recurring payments. The requirements are set to go into effect on April 18, and seem to apply to vendors. I’m not sure there will be many changes in practice to App Store subscriptions, but Visa is now mandating that a reminder notification must be sent at least seven days in advance of a free trial ending — potentially meaning that free trials will need to be at least a week long.

Brian X. Chen, New York Times:

If there is something déjà vu about all of this, you aren’t wrong. That’s because we find ourselves dealing with the same situation over and over again, focusing on the convenience of easy-to-use tech products over issues like data security and privacy.

[…]

The lesson is one we need to learn and relearn. When a company fails to protect our privacy, we shouldn’t just continue to use its product — and tell the people we care about to use it — just because it works well and is simple to use. Once we lose our privacy, we rarely get it back again.

Chen is absolutely right in arguing that we need to be more discerning. Upon recognizing shoddy privacy and security practices, we should stop promoting the flawed app or service until it is fixed, because the harm those flaws cause cannot be undone.

However, setting this up as a contrast between ease-of-use and security is bananas:

If you are concerned about privacy, try an alternative. There are video chatting tools from companies with better reputations, like Google’s Hangouts, Cisco’s Webex and FaceTime for Apple devices. These products may not be as simple to use as Zoom, but they work and you can worry less.

Webex might be harder to use than Zoom — I’m not sure, as I haven’t tried it — but since when is FaceTime considered anything but simple? Same with Hangouts, for that matter.

The lesson we should be drawing from this Zoom debacle is that privacy and security should not be tacked on later; they are foundational elements of any product or service. You can design something to be secure and user-friendly, but it’s easier to start with that vision than to treat either as something to do whenever there’s some free time.

I know that there are lots of people who are still eager for Apple, for example, to make their own smart television. But discrete components allow for more flexibility and, overall, probably reduce the environmental impact of replacing old components. In my case, the TV is fine and it doesn’t need replacing any time soon, but my fourth-generation Apple TV will be upgraded whenever Apple feels like releasing a new version.

John Prine has died as a result of COVID-19. A year or so ago, he performed a short live set for the Strombo Show, ending with the last song he wrote for what is now his final album: “When I Get to Heaven”. It’s sweet, celebratory, and poignant in the way only Prine could do it.

Aaron Pressman, in an article for Fortune that can’t help but ask the question “should Apple be allowed to kill one of Android’s best weather apps?” (Fortune paywalls articles shortly after they are published, but you can find a mirror on the Internet Archive):

The killing of the Android app brought complaints from its fans and also criticism from antitrust experts who said the move, along with the decision to cut off licensing of Dark Sky’s data, is an example of Apple eliminating competition among weather apps.

“Ugh,” was the first reaction from Stanford University law professor Mark Lemley, a noted antitrust scholar who has studied how large tech firms concentrate their power by buying up innovative startups.

“It’s worth asking whether there is any reason we should allow this merger,” Lemley adds. “True, it’s not the most important app in the world, but it seems to make consumers unambiguously worse off.”

Andrew Gavil, a Howard University law professor who has also worked for the Federal Trade Commission, adds, “Given what they’ve said they will do, it’s obviously anticompetitive.” Cutting off the Android app could be seen as a play to make iOS preferable to consumers, at least for weather app fans, he says. And cutting off access to the data licensing makes it harder for new weather apps to launch.

Acquisitions often leave fewer choices in the marketplace, and Apple’s purchase of Dark Sky is no different in that respect. What makes this acquisition so disruptive is that Dark Sky is not simply an app — it is a full weather forecasting API used by dozens of other well-known apps and big organizations. And because the cross-platform app and Dark Sky’s API are being discontinued in varying stages over the next couple of years, users and developers are each going to have to find ways to replace it.

There’s no way around it: that means a lot of work for developers, and that means some apps may no longer be available. That sucks for users of those apps.

But it is not as though Dark Sky is the only weather API around. Dark Sky is clever about predicting near-future precipitation, but it’s not nearly as accurate for forecasting as data supplied by government agencies or private competitors. It simply isn’t the case that Apple is buying the market for weather APIs or apps.

By the way, how did Pressman react to previous acquisitions? Did he ask whether Google should be allowed to buy FitBit? He did not. Did he protest T-Mobile’s purchase of Sprint? Nope. Even in the Dark Sky piece, he references other acquisitions that he doesn’t seem to find quite as problematic:

Top tech companies like Apple, Google, and Facebook have been vacuuming up small app developers for many years, largely without much scrutiny beyond complaints from fans of the acquired software. Apps are often shut down, and features from the apps are sometimes integrated into the acquirer’s existing products. Apple bought a music streaming app called Swell in 2014, shuttered its app, and added features into its own Apple Music service, for example. Meanwhile, Microsoft plans to shut down acquired to-do list app Wunderlist later this year after buying it in 2015 and then creating its own to-do app.

For what it’s worth, Swell was a podcast client, not a music streaming app. But Pressman does not make the case in this piece that either of these apps, nor any others, should be kept running by the acquiring party.

At any rate, as a user of many apps that have been acquired, I prefer being given a hard deadline for when an app or service will be shut down, rather than the empty promise that it will stick around. We all know that, one day, it will disappear; better to rip the bandage off now than later.