Pixel Envy

Written by Nick Heer.

Archive for July, 2021

Allowing Tesla Owners to Beta Test Autonomous Functionality on Public Roads Is a Safety Hazard

Keith Barry, Consumer Reports:

FSD beta 9 is a prototype of what the automaker calls its “Full Self-Driving” feature, which, despite its name, does not yet make a Tesla fully self-driving. Although Tesla has been sending out software updates to its vehicles for years—adding new features with every release—the beta 9 upgrade has offered some of the most sweeping changes to how the vehicle operates. The software update now automates more driving tasks. For example, Tesla vehicles equipped with the software can now navigate intersections and city streets under the driver’s supervision.

“Videos of FSD beta 9 in action don’t show a system that makes driving safer or even less stressful,” says Jake Fisher, senior director of CR’s Auto Test Center. “Consumers are simply paying to be test engineers for developing technology without adequate safety protection.”

One owner’s car repeatedly drove over a double yellow line while creeping into an intersection. Another’s confused the moon with a yellow traffic light. Tesla excuses this by pointing to cautionary statements on the in-car display that state that it is “not a substitute for an attentive driver”, and says that it is only being rolled out to owners who signed up to participate in pre-release testing.

But the fact of the matter is that these features are being marketed as “Full Self-Driving” and “Autopilot”. Unlike other cars equipped with automatic lane keeping and radar-assisted cruise control, Tesla is not pitching these features as part of a safety enhancement package, but as autonomous vehicle technologies. There is no way the company does not know how owners are using these features and, consequently, subjecting other drivers, pedestrians, and cyclists to their beta testing experience at great risk to public safety.

It is also true that human drivers will make mistakes. Not every driver on the road is equally competent, and it is possible that Tesla’s system is better than some human drivers. But autonomous systems can lull the human operator into a false impression of safety, with sometimes deadly consequences.

Playdate Previews

Andrew Webster, the Verge:

And then there’s the Playdate from Panic. Whereas the aforementioned handhelds are almost uniformly technological upgrades, the Playdate offers something much weirder. It looks kind of like a Game Boy that comes from an alien world. There are familiar elements, like a D-pad and face buttons, but many of its games are controlled by a crank that slots into the side. And those games are only available in black and white, and they’ll eventually be released as part of weekly mystery drops.

It sounds strange and fascinating, and I had the chance to head into the Playdate’s parallel universe over the last few days with a near-final version of the device. It definitely is weird — but that’s also what makes it exciting.

Sam Machkovech, Ars Technica:

Nothing I’ve played on the Playdate thus far screams “revolutionary” or “must-have.” Two low-powered CPUs, intentionally lo-fi hardware, and a single rotary crank can only combine to deliver so much. These four test titles likely lack the scope or depth that some gamers hope for in a brand-new system’s launch library.

Yet everything I’ve played on the Playdate has been accessible, amusing, and unique, and getting four games at once has distributed the fun factor around in a way that I really appreciate. Two of the games are built with replayability in mind—one as a score chaser, the other as a puzzle-minded platformer with speedrunning potential. The other two titles are more linear but focus less on challenge and more on atmosphere; these show what developers can do within a wimpy system’s limits to deliver their own comfortable, unique games on black-and-white hardware.

Preorders for the Playdate begin one week from today, July 29. I am so excited about the possibilities of this weird little thing.

The NSO ‘Surveillance List’, What It Is and Isn’t

Kim Zetter’s Zero Day newsletter has been a consistently good read. Today’s issue, about that mysterious list of tens of thousands of phone numbers forming the basis of much of the Pegasus Project reporting, is a great example:

There is nothing on the list to indicate what purpose it’s meant to serve or who compiled it, according to the Post and other media outlets participating in the Pegasus reporting project. There is also nothing on the list that indicates if the phones were spied on, were simply added to the list as potential targets for spying or if the list was compiled for a completely different reason unrelated to spying.

[…]

Those varying descriptions have created confusion and controversy around the reporting and the list, with readers wondering exactly what the list is for. The controversy doesn’t negate the central thesis and findings, however: that NSO Group has sold its spy tool to repressive regimes, and some of those regimes have used it to spy on dissidents and journalists.

The reporting associated with the Pegasus Project has been enlightening so far, but not without its faults. The confusion about this list of phone numbers is one of those problems — and it is a big one. It undermines some otherwise excellent stories because it is not yet known why someone’s phone number would end up on this list. Clearly it is not random, but nor is it a list of individuals whose phones were all infected with Pegasus spyware. This murkiness has allowed NSO Group’s CEO to refocus media attention away from the ethical dumpster fire started when his company knowingly licensed spyware to authoritarian regimes.

A Priest Was Outed by His Phone’s Location Data, Likely Through Ad Tech Middle Parties

This is one of those stories that gets into some difficult territory as far as my writing about it goes. These are not light topics.

JD Flynn and Ed Condon, the Pillar:

Monsignor Jeffrey Burrill, former general secretary of the U.S. bishops’ conference, announced his resignation Tuesday, after The Pillar found evidence the priest engaged in serial sexual misconduct, while he held a critical oversight role in the Catholic Church’s response to the recent spate of sexual abuse and misconduct scandals.

[…]

According to commercially available records of app signal data obtained by The Pillar, a mobile device correlated to Burrill emitted app data signals from the location-based hookup app Grindr on a near-daily basis during parts of 2018, 2019, and 2020 — at both his USCCB office and his USCCB-owned residence, as well as during USCCB meetings and events in other cities.

I do not wish to devalue any reader’s faith; if you are Catholic, please know that I am not criticizing you specifically or your beliefs.

The Catholic Church has a history of opposing LGBTQ rights and treating queer people with a unique level of hatred — this report says that the use of Grindr and similar apps “present[s] challenges to the Church’s child protection efforts”, invoking the dehumanizing myth tying gay men to pedophilic behaviour, an association frequently made by the Catholic Church.1 I find it difficult to link to this story because of statements like these, and it offends me how this priest was outed.

But I also think it is important to give you, reader, the full context of what is disclosed, and what is not. For example, I understand that Catholic priests have an obligation to be celibate and, theoretically, the Pillar would investigate any clergy it believed was stepping out of line. But this specifically involves one priest and Grindr, and leaves a lot of questions unanswered. For a start, how did the Pillar know? Did it get tipped off about Burrill’s activities so it would know where to look, or did it receive data dumps related to the phones of significant American clergy? And what about other dating apps, like Tinder or Bumble? Surely, there must be priests in America using one of those apps to engage in opposite-sex relationships; why not an exposé on one of them? This report does not give any indication about how it began investigating. I find that odd, to say the least.

The reason I am linking to this is because of that data sharing angle. As reported by Shoshana Wodinsky at Gizmodo, Grindr has repeatedly insisted on the anonymity of its data collection and ad tech ties:

When asked about the Burrill case, a Grindr spokesperson told Gizmodo that it “[does] not believe Grindr is the source of the data behind the blog’s unethical, homophobic witch hunt.”

[…]

Obviously, only Grindr knows if Grindr is telling the truth. But these sorts of adtech middlemen the platform’s relying on have a years-long track record of lying through their teeth if it means it can squeeze platforms and publishers for a few more cents per user. Grindr, meanwhile, has a years-long track record of blithely accepting these lies, even when they mean multiple lawsuits from regulators and slews of irate users.

Wodinsky points to a piece at the Catholic News Agency — which both Pillar writers used to work for — claiming that an anonymous party had “access to technology capable of identifying clergy […] found to be using [dating apps] to violate their clerical vows”. It will come as no surprise to you that I find it revolting that someone can expose this behaviour through advertising data. It is a wailing klaxon for regulation and reform.

But, also, is it ethical for a news organization to acquire data like this for the purpose of publicly outing someone or sharing their private activities? In a 2018 story, the New York Times showed how it was possible to identify people using similar data. But the newsworthiness of that story was not in individuals’ habits and activities, it was about how easy it is to misuse advertising and tracking data. And where is the line on this? Are journalists and publications going to begin mining the surveillance of ad tech companies in search of news stories? I would be equally disturbed if this were instead a report that exposed the infidelity of a “family values”-type lawmaker. I think the Pillar exposed a worrisome capability with this report, and also initiated a rapid ethical slide.

Thank you for making it through this post. As compensation, please enjoy some impressive finger athletics.


  1. The authors clarify that they are ostensibly concerned about the relative ease with which minors are able to use dating and hookup apps. That is a fair criticism. But this digression cannot be separated from this harmful belief, nor from the Church’s history of sexual abuse of minors. That abuse was not wrong because the clergy involved were engaged in same-sex relations; it was wrong because they were powerful adults molesting children. ↩︎

A Case Against Security Nihilism

I get the feeling I am going to be linking to a lot of NSO Group-related pieces over the next little while. There are a couple of reasons for that — good reasons, I think. The main one is that I think it is important to understand the role of private security companies like NSO Group and their wares in the context of warfare. They function a little bit like mercenary teams — Academi, formerly Blackwater, and the like — except they are held to, improbably, an even lower standard of conduct.

The second reason is that I think it is necessary to consider how private exploit marketplaces can sometimes be beneficial, at great risk and with little oversight. There are few laws governing this market. There are attempts at self-regulation, often associated with changing the economics of the market through bug bounties and the like.

Which brings me to this piece from Matthew Green, cryptographer at Johns Hopkins University and mobile device security researcher:

NSO can afford to maintain a 50,000 number target list because the exploits they use hit a particular “sweet spot” where the risk of losing an exploit chain — combined with the cost of developing new ones — is low enough that they can deploy them at scale. That’s why they’re willing to hand out exploitation to every idiot dictator — because right now they think they can keep the business going even if Amnesty International or CitizenLab occasionally catches them targeting some human rights lawyer.

But companies like Apple and Google can raise both the cost and risk of exploitation — not just everywhere, but at least on specific channels like iMessage. This could make NSO’s scaling model much harder to maintain. A world where only a handful of very rich governments can launch exploits (under very careful vetting and controlled circumstances) isn’t a great world, but it’s better than a world where any tin-pot authoritarian can cut a check to NSO and surveil their political opposition or some random journalist.

Sounds appealing, except many of the countries NSO Group is currently selling to are fantastically wealthy and have abysmal human rights records. I must be missing something here, because I do not see a way to raise the cost of deploying privately-developed spyware high enough to keep it away from regimes that many people would consider uniquely authoritarian, since those regimes are often wealthy. Amnesty researchers found evidence of the use of NSO’s Pegasus on Azerbaijani phones, too: like Saudi Arabia, Azerbaijan is an oil-rich country with human rights problems. And then there is the matter of international trust: selling only to, for example, NATO member countries might sound like a fair compromise to someone living in the U.S. or the U.K. or Canada, but it clearly establishes this spyware as a tool of a specific political allegiance.

We must also consider that NSO Group has competitors on two fronts: the above-board, like Intellexa, and those on the grey market. NSO Group may not sell to, say, North Korea, but nobody is fooled into thinking that a particularly heinous regime could not invest in its own cybercrime and espionage capabilities — as, again, the North Korean ruling party has done and continues to do.

But — I appreciate the sentiment in Green’s post, and I think it is worthwhile to keep in mind as more bad security news related to this leak inevitably follows in the coming days and weeks.

Clearview AI Raises $30 Million From Unidentified Investors

Kashmir Hill, the New York Times:

Clearview AI is currently the target of multiple class-action lawsuits and a joint investigation by Britain and Australia. That hasn’t kept investors away.

The New York-based start-up, which scraped billions of photos from the public internet to build a facial-recognition tool used by law enforcement, closed a Series B round of $30 million this month.

The investors, though undeterred by the lawsuits, did not want to be identified. Hoan Ton-That, the company’s chief executive, said they “include institutional investors and private family offices.”

It makes sense that these investors would want their association with the company kept secret, since identifying them as supporters of a creepy facial recognition company is more embarrassing than their inability to understand irony. Still, it shows how the free market is betting that this company will grow and prosper despite its disregard for existing laws, proposed legislation, and a general sense of humanity or ethics.

Dismantle this company and legislate its industry out of existence. Expose the investors who are propping it up.

The Eternal October

Mike Masnick, Techdirt:

I think it’s time that we bring back recognition of how innovation, and technology such as the open internet, can actually do tremendous good in the world. I’m not talking about a return to unfettered boosterism and unthinking cheerleading — but a new and better-informed understanding of how innovation can create important and useful outcomes. An understanding that recognizes and aims to minimize the potential downsides, taking the lessons of the techlash and looking for ways to create a better, more innovative world.

I appreciate this clear-eyed optimistic approach, and am excited to see what Masnick has in store.

Calcalist Interviews NSO CEO Shalev Hulio

Omer Kabir and Hagar Ravet of Calcalist:

Perhaps due to the magnitude of the media interest in the investigation, NSO executives chose to break the secrecy that usually surrounds their company and answer questions directly. In an interview with Calcalist, NSO chief executive Shalev Hulio denied his software was being used for malicious activity. At the heart of his claims is the list of 50,000 phone numbers on which the investigation is based, and which it is claimed are potential NSO targets. The source of the list wasn’t revealed, and according to Hulio, it reached him a month prior to the publication of the investigation, and from a completely different source.

The publications behind the Pegasus Project assert that this list of phone numbers is, in the words of the Guardian, “an indication of intent”. This is clearly not a list of random phone numbers — several of the numbers on it are tied to phones with local evidence of Pegasus software, and many more of the numbers belong to high-profile targets. But, according to Hulio, it is impossible that this is entirely a list of targets:

According to Hulio, “the average for our clients is 100 targets a year. If you take NSO’s entire history, you won’t reach 50,000 Pegasus targets since the company was founded. Pegasus has 45 clients, with around 100 targets per client a year. In addition, this list includes countries that aren’t even our clients and NSO doesn’t even have any list that includes all Pegasus targets – simply because the company itself doesn’t know in real-time how its clients are using the system.”

Hulio says that NSO Group investigated these allegations by scanning the records of clients that agreed to an analysis, and could not find anything that matched the Pegasus Project’s list. But, given displays of hubris like this one, it is hard to believe he is being fully honest:

“Out of 50,000 numbers they succeeded in verifying that 37 people were targets. Even if we go with that figure, which is severe in itself if it were true, we are saying that out of 50,000 numbers, which were examined by 80 journalists from 17 media organizations around the world, they found that 37 are truly Pegasus, so something is clearly wrong with this list. I’m willing to give you a random list of 50,000 numbers and it will probably also include Pegasus targets.”

If a list of just 50,000 random phone numbers — basically, everyone in a small town — contains Pegasus targets, Pegasus is entirely out of control. It is a catastrophic spyware emergency. Hulio was clearly being hyperbolic, but his bluster generated quite the response from Calcalist’s interviewers:

That isn’t accurate. Out of the 50,000 numbers they physically checked only 67 phones and in 37 of them, they found traces of Pegasus. It isn’t 37 out of 50,000. And there were 12 journalists among them. That is 12 too many.

NSO Group’s response, while impassioned, cannot be trusted. The company has not earned enough public goodwill for its CEO to use such colourful language. But the Pegasus Project’s publication partners also need to clarify what the list of phone numbers actually means, because something here is not adding up.

Troubles With Apple’s Bug Bounty Program

I used some of the Washington Post’s reporting on the Pegasus Project in my piece about its revelations and lessons, but I never really addressed the Post’s article. I hope you will read what I wrote, especially since this website was down for about five hours today around the time it started picking up traction. Someone kicked the plug out at my web host; what can I say?

Anyway, the Post’s story is also worth reading, despite its headline: “Despite the hype, iPhone security no match for NSO spyware”. iPhone security is not made of “hype” and marketing. On the contrary, the reason this malware is notable is because of its sophistication and capability in an operating system that, while imperfect, is far more secure than almost any consumer device before it, as the Post acknowledged just a few years ago when it claimed Apple was “protecting a terrorist’s iPhone”. In the Post’s telling, the iPhone is somehow both far too locked down for a consumer product and secured by nothing more than hype.

Below the miserable headline and amid the typically cynical framing of Reed Albergotti, there is a series of worthwhile interviews with current and former Apple employees claiming that the company’s security responses are too often driven by marketing concerns and the annual software release cycle. The Post:

Current and former Apple employees and people who work with the company say the product release schedule is harrowing, and, because there is little time to vet new products for security flaws, it leads to a proliferation of new bugs that offensive security researchers at companies like NSO Group can use to break into even the newest devices.

[…]

Apple also was a relative latecomer to “bug bounties,” where companies pay independent researchers for finding and disclosing software flaws that could be used by hackers in attacks.

Krstić, Apple’s top security official, pushed for a bug bounty program that was added in 2016, but some independent researchers say they have stopped submitting bugs through the program because Apple tends to pay small rewards and the process can take months or years.

Apple disputes the Post’s characterization of its security processes, quality of its bug bounty program, involvement of marketing in its responses, and overall relationship with security researchers.

However, a suddenly very relevant post from Nicolas Brunner, writing last week, indicates that Apple’s bug bounty program is simply not good enough:

In my understanding, the idea behind the bounty program is that developers report bugs directly to Apple and remain silent about them until fixed in exchange for a security bounty pay. They also state very clearly, what issues do qualify for the bounty program payout on their homepage. Unfortunately, in my case, Apple never fulfilled their part of the deal (until now).

To be frank: Right now, I feel robbed. However I still hope, that the security bounty program turns out to be a win-win situation for both parties. In my current understanding however, I do not see any reason, why developers like myself should continue to contribute to it. In my case, Apple was very slow with responses (the entire process took 14 months), then turned me away without elaborating on the reasons and stopped answering e-mails.

A similarly frustrating experience with Apple’s security team was reported last month by Laxman Muthiyah:

The actual bounty mentioned for iCloud account takeover in Apple’s website is $100,000 USD. Extracting sensitive data from locked Apple device is $250,000 USD. My report covered both the scenarios (assuming the passcode endpoint was patched after my report). Even if they chose to award the maximum impact out of the two cases, it should still be $250,000 USD.

Selling these kind of vulnerabilities to government agencies or private bounty programs could have made a lot more money. But I chose the ethical way and I didn’t expect anything more than the outlined bounty amounts by Apple.

[…]

But $18,000 USD is not even close to the actual bounty. Lets say all my assumptions are wrong and Apple passcode verifying endpoint wasn’t vulnerable before my report. Even then the given bounty is not fair looking at the impact of the vulnerability as given below.

The one million dollars that Apple says it pays for a “zero-click remote chain with full kernel execution and persistence” — and the 50% premium on top of that for a zero-day in a beta version — pales in comparison to the two million dollars that Zerodium is paying for the same kind of exploit.

Steven Troughton-Smith, via Michael Tsai:

I’m not sure why one of the richest companies in the world feels like it needs to be so stingy with its bounty program; it feels far more like a way to keep security issues hidden & unfixed under NDA than a way to find & fix them. More micro-payouts would incentivize researchers.

Security researchers should not have to grovel to get paid for reporting a vulnerability, no matter how small it may seem. But why would anyone put themselves through this process when there are plenty of companies out there paying far more?

The good news is that Apple can get most of the way toward fixing this problem by throwing money at it. Apple has deep pockets; it can keep increasing payouts until the grey market cannot possibly compete. That may seem overly simplistic, but at least this security problem is truly very simple for Apple to solve.

Security Is the Story We Have, Not the Story We Want to Have

This weekend’s first batch of stories from the “Pegasus Project” — a collaboration between seventeen different outlets invited by French investigative publication Forbidden Stories and Amnesty International — offers a rare glimpse into the infrastructure of modern espionage. This is a spaghetti junction of narratives: device security, privatized intelligence and spycraft, appropriate targeting, corporate responsibility, and assassination. It is as tantalizing a story as it is disturbing.

“Pegasus” is a mobile spyware toolkit created and distributed by NSO Group. Once successfully installed, it reportedly has root-level access and can, therefore, exfiltrate anything of intelligence interest: messages, locations, phone records, contacts, and photos are all obvious and confirmed categories. Pegasus can also create new things of intelligence value: it can capture pictures using any of the cameras and record audio using the microphone, all without the user’s knowledge. According to a 2012 Calcalist report, NSO Group is licensed by the Israeli Ministry of Defense to export its spyware to foreign governments, but not private companies or individuals.

There is little record of this software or capability on NSO Group’s website. Instead, the company says that its software helps “find and rescue kidnapped children” and “prevent terrorism”. It recently published a transparency report arguing that it offers lots of software for other purposes. It acknowledged some abuse of Pegasus’ capabilities, but said that those amount to a tiny number and that the company does not sell to “55 countries […] for reasons such as human rights, corruption, and regulatory restrictions”. It does not say in this transparency report which countries’ governments it prohibits from using its intelligence-gathering products.

Much of this conflict is about the stories which NSO Group wants to tell compared to the stories it should be telling: how its software enables human rights abuses, spying on journalists, and expanding authoritarian power. In fact, that is an apt summary for much of the security reporting that comprises the Pegasus Project: the stories that we, the public, have, not the stories that we want to have.

One of the stories that we tell ourselves is that our devices are pretty secure, so long as we keep them up to date, and that we would probably notice an intrusion attempt. The reality, as verified by Citizen Lab at the University of Toronto, is that NSO Group is particularly good at developing spyware:

Citizen Lab independently documented NSO Pegasus spyware installed via successful zero-day zero-click iMessage compromises of an iPhone 12 Pro Max device running iOS 14.6, as well as zero-day zero-click iMessage attacks that successfully installed Pegasus on an iPhone SE2 device running iOS version 14.4, and a zero-click (non-zero-day) iMessage attack on an iPhone SE2 device running iOS 14.0.1. The mechanics of the zero-click exploit for iOS 14.x appear to be substantially different than the KISMET exploit for iOS 13.5.1 and iOS 13.7, suggesting that it is in fact a different zero-click iMessage exploit.

“Zero-day” indicates a vulnerability that has not already been reported to the vendor — in this case, Apple. “Zero-click” means exactly what it sounds like: this is an exploit delivered by iMessage that is executed without any user interaction, and it is wildly difficult to know if your device has been compromised. That is the bad news: the story we like to tell ourselves about mobile device security simply is not true.

But nor is it true that we are all similarly vulnerable to attacks like these, as Ivan Krstić, Apple’s Head of Security Engineering and Architecture, said in a statement to the Washington Post:

Apple unequivocally condemns cyberattacks against journalists, human rights activists, and others seeking to make the world a better place. […] Attacks like the ones described are highly sophisticated, cost millions of dollars to develop, often have a short shelf life, and are used to target specific individuals. […]

This situation is reminiscent of the 2019 zero-day attacks against iPhone-using Uyghurs, delivered through news websites popular with Uyghurs and presumably orchestrated by the Chinese government. Those vulnerabilities were quietly fixed at the beginning of that year, but their exploitation was not disclosed until Google’s Project Zero published a deep dive into their existence, at which point Apple issued a statement. I thought it was a poor set of excuses for a digital attack against an entire vulnerable population.

This time, it makes sense to focus on the highly-targeted nature of Pegasus attacks. The use of this spyware is not indiscriminate. But — with reportedly tens of thousands of attempted infections — it is being used in a more widespread way than I think many would assume. Like the exploits used on Uyghurs two years ago, it indicates that iPhone zero-click zero-days might not be the Pappy Van Winkle of the security world. Certainly, they are still rare, but it seems that there are some companies and nation-states that have stocked their pantries for a rainy day and might not be so shy about their use.

Still, nothing so far indicates that a typical person is in danger of falling victim to Pegasus, though the mere presence of zero-click full exploitations is worrisome for every smartphone user. The Guardian reports that the victims of NSO Group’s customers are high-profile individuals: business executives, investigative journalists, world leaders, and close associates. That is not to minimize the effect of this spyware, but its reach is more deliberately limited. If anything, the focus of its deployment teases for us mere mortals the unique security considerations faced by those at higher risk of targeted attack.

The thing is that many of those high-profile people use iPhones. The diplomats and friends of assassinated journalist Jamal Khashoggi profiled by the Washington Post all use iPhones. Many celebrities use iPhones, even when promoting Android devices. Jeff Bezos used an iPhone X.1 Many of the devices examined as part of the Pegasus Project are, indeed, iPhones, which has pushed the Washington Post team reporting on this investigation to conclude that this is largely an iPhone-specific problem:

Researchers have documented iPhone infections with Pegasus dozens of times in recent years, challenging Apple’s reputation for superior security when compared with its leading rivals, which run Android operating systems by Google.

The months-long investigation by The Post and its partners found more evidence to fuel that debate. Amnesty’s Security Lab examined 67 smartphones whose numbers were on the Forbidden Stories list and found forensic evidence of Pegasus infections or attempts at infections in 37. Of those, 34 were iPhones — 23 that showed signs of a successful Pegasus infection and 11 that showed signs of attempted infection.

If you read Amnesty’s full investigation into Pegasus — and I suggest you do as it is comprehensive — there is a different explanation for why the iPhone is overrepresented in its sample, and a clear warning against oversimplification:

Much of the targeting outlined in this report involves Pegasus attacks targeting iOS devices. It is important to note that this does not necessarily reflect the relative security of iOS devices compared to Android devices, or other operating systems and phone manufacturers.

In Amnesty International’s experience there are significantly more forensic traces accessible to investigators on Apple iOS devices than on stock Android devices, therefore our methodology is focused on the former. As a result, most recent cases of confirmed Pegasus infections have involved iPhones.

iOS clearly has many holes in its security infrastructure that need patching. Reporting from the Post suggests that the demands of launching a major new version of iOS every year — in addition to the four other operating systems Apple updates on an annual cycle — not only take a toll on the reliability of its software, but also mean some critical vulnerabilities take months to get patched. Apple is not alone in that regard, but it does raise questions about the security of the world’s information resting entirely in the hands of engineers at three companies on the American west coast. Is it a good thing that high-risk people only have a choice between iOS and Android? Does it make sense that many of the world’s biggest companies almost entirely run Windows? Is enough being done to counter the inherent risks of this three-way market?

The security story we have is one of great risk, with responsibility held by very few. There are layers of firewalls and scanners and obfuscation techniques and encryption and all of that — but a determined attacker knows there are limited variables. iOS is not especially weak, but it is exceptionally vertically-integrated. If the latest iPhone running the latest software updates is vulnerable, all iPhones probably are as well.

There are two more contrasting sets of stories I wish to touch on about the responsibility of NSO Group and companies like it in these attacks. First, NSO Group is careful to state that it is merely a vendor and, as such, “does not operate its technology, does not collect, nor possesses, nor has any access to any kind of data of its customers”. However, it is also adamant that its software had zero role in Khashoggi’s assassination. How is it possible to square that certainty with the company’s professed lack of involvement in the affairs of customers it will neither confirm nor deny?

Second, I gave credit earlier this year to the notion that private marketplaces of security vulnerabilities might actually be beneficial — at least, compared to weakened encryption or some form of “back door”. NSO Group is the reverse side of that argument. The story I like to tell myself is that, given that there is an established market for zero-days, at least that means law enforcement can unlock encrypted smartphones without the need for a twenty-first century Clipper Chip. But the story we have is that NSO Group develops espionage software over which, once sold, it has little control. The company’s spyware is now implicated in the targeting of tens of thousands of phones belonging to activists, human rights lawyers, journalists, businesspeople, demonstrators, investigators, world leaders, and friends and colleagues of all of the above. NSO Group is a private company that enables dictators and autocrats, and somehow gets to wash its hands of all responsibility.

The story it wants is of a high technology company saving children and fighting terrorists. The story it has is an abuse of power and a lack of accountability.


  1. You might remember that embarrassing texts and images were leaked from Jeff Bezos’ iPhone a couple of years ago that confirmed that he was cheating on his now ex-wife with his current partner Lauren Sanchez. Bezos got in front of the National Enquirer story with a heroic-seeming Medium post where he copped to the affair.

    In that post, he also insinuated that the Saudi royal family used NSO Group malware to breach his phone’s security and steal that incriminating evidence in retaliation for his ownership of the Washington Post and its coverage of the Saudi royalty’s role in Post contributor Jamal Khashoggi’s assassination. In addition, the Post had aggressively reported on the Enquirer’s catch-and-kill scheme to silence salacious stories.

    While that got huge amounts of coverage, a funny thing happened not too long after: the Wall Street Journal confirmed that the Enquirer did not get the texts and photos from some secret Saudi arrangement and, instead, simply paid Sanchez’s brother, who had stolen them. A fuller story of this public relations score was reported earlier this year by Brad Stone in Bloomberg Businessweek. It seems that, contrary to contemporary reporting, there was little to substantiate rumours of a high-tech break-in by a foreign government.

    It is unclear whether Bezos was simply spinning a boring story in a politically-favourable way; a recent Mother Jones investigation found that Amazon’s public relations team is notorious among journalists for being hostile and telling outright lies. But if he was targeted by the Saudi Arabian royal family using NSO Group software, it is notable that Saudi Arabia is apparently not on the list of 55 countries that the company refuses to sell to on the basis of human rights abuses. ↩︎

Instagram Has Become a Blend of TikTok and SkyMall

Instagram head Adam Mosseri, a few weeks ago:

But today I actually want to talk a bit more about video. And I want to start by saying we’re no longer a photo-sharing app or a square photo-sharing app. The number one reason that people say that they use Instagram in research is to be entertained. So people are looking to us for that. […]

Leaning hard on video at the expense of everything else — now where have I heard that before?

Ben Thompson:

To this point I have framed Mosseri’s announced changes in the context of Instagram’s continual evolution as an app, from photo filters to network to video to algorithmic feed to Stories. All of those changes, though, were in the spirit of Systrom’s initial mission to capture and share moments. That is why perhaps the most momentous admission by Mosseri is that Instagram’s new mission is simply to be entertainment.

I have to wonder if it is in preparation for more than that, given this piece by Clive Thompson, writing for Medium’s the Debugger:

If you flew during the 90s and 00s, you probably remember SkyMall. It was a catalogue of completely loony products — often high-tech gadgets of dubious promise, such as “a vacuum cleaner to catch flies, an alien butler drink tray, a helmet that promises to regrow your hair using lasers.”

[…]

I can’t say precisely when my Instagram ads began to tip over into SkyMall territory. I’d been noticing the devolution for months, maybe years. But these days when I open up the app, every ad customized for me is some decidedly loopy gewgaw.

Maybe Instagram’s growth continues to be driven by the successful features it can lift directly from other photo- and video-based apps. But I wonder if this mix of ads for bizarre direct-to-consumer goods and the integrated e-commerce functionality are laying the foundation for a platform more like WeChat, Line, or Gojek. Perhaps Instagram does not expand into logistics operations, but why would it not push further into online payments, and buying and selling products? For many, shopping is entertainment. Why not facilitate that inside one of the world’s most popular mobile apps and take a cut of every purchase?

Even discarding my idle speculation, the name “Instagram” sure is beginning to feel outdated or, at least, disconnected.

The White House Is Not Colluding With Facebook on Censorship

Susan Heavey, Elizabeth Culliford, and Diane Bartz, Reuters:

Facebook is not doing enough to stop the spread of false claims about COVID-19 and vaccines, White House press secretary Jen Psaki said on Thursday, part of a new administration pushback on misinformation in the United States.

Facebook, which owns Instagram and WhatsApp, needs to work harder to remove inaccurate vaccine information from its platform, Psaki said.

From the White House transcript of that press briefing, in response to a reporter’s question about what actions the U.S. federal government is taking:

In terms of actions, Alex, that we have taken — or we’re working to take, I should say — from the federal government: We’ve increased disinformation research and tracking within the Surgeon General’s office. We’re flagging problematic posts for Facebook that spread disinformation. We’re working with doctors and medical professionals to connect — to connect medical experts with popular — with popular — who are popular with their audiences with — with accurate information and boost trusted content. So we’re helping get trusted content out there.

Psaki’s admission that the government is “flagging” posts with misinformation has caused quite the gnashing of teeth in pockets of the professional commentary circuit, with the Wall Street Journal’s editorial board calling it “censorship coordination”.

But, as Mike Masnick writes at Techdirt, that is not an accurate portrayal of what the Biden administration is doing:

It’s a simple fact: the US government should not be threatening or coercing private companies into taking down protected speech.

But, over the past few days there’s been an absolutely ridiculous shit storm falsely claiming that the White House is, in fact, doing this with Facebook, leading to a whole bunch of nonsense — mainly from the President’s critics. It began on Thursday, when White House press secretary Jen Psaki, in talking about vaccine disinfo, noted that the White House had flagged vaccine disinformation to Facebook. And… critics of the President completely lost their shit claiming that it was a “First Amendment violation” or that it somehow proved Donald Trump’s case against the social media companies.

It did none of those things.

I think Ken White’s messaging is better than the official White House version, but I do not think it would ameliorate the situation for those who believe the administration is colluding with Silicon Valley, or who are exploiting vaccine misinformation for their own gain.

Microsoft and Google Have Redesigned Their Emoji

Microsoft’s Claire Anderson, on the company’s Medium-based design blog (can Microsoft not host its own blog?):

As the world moves toward hybrid work scenarios that blend in-person with remote, expressive forms of digital communication are more important than ever. Over 1,800 emoji exist within Microsoft 365, and we’ve been working for the past year to dramatically refresh them by creating a system that is innately Fluent.

We opted for 3D designs over 2D and chose to animate the majority of our emoji. While you’ll see these roll out in product over the coming months, we wanted to share a sneak peek with you in honor of World Emoji Day. We’re also excited to unveil five brand-new emoji that signal our fresh perspective on work, expression, and the spaces in between.

Even though the video in the post is not entirely reflective of the actual textures and detail in these emoji, there is some beautiful design at play in these images. The faces, in particular, are playfully rendered, yet still legible even at the smaller size shown in the banner image. Many of the objects have soft lighting effects that, while slightly reducing contrast, do not seem to affect clarity too much. I am looking forward to seeing what they will look like in actual use.

Jennifer Daniel of Google, a company that hosts its own blog and uses its own top-level domain:

Well, it looks like giving some love to hundreds of emoji already on your keyboard — focusing on making them more universal, accessible and authentic — so that you can find an all-new fav emoji (I’m fond of 🎷🐛). And, you can find all of these emoji (yes, including the king, 🐢) across more of Google’s platforms including Android, Gmail, Chat, Chrome OS and YouTube.

I am much less fond of these.

‘Buy Now, Pay Later’ Services

Maddy Varner, the Markup:

If you’ve scrolled through any e-commerce sites lately, you’ve probably seen a version of it: A charming dinner plate costs $28 or “4 interest-free installments of $7.00 by Afterpay.” A pastoral checkered dress could run you $74.50 … or, alternatively, “4 interest-free payments of $18.62 with Klarna.”

In the past year, more and more merchants have started incorporating “buy now, pay later” options into their websites. They’re often prominently featured on product pages, where shoppers who might otherwise click away are encouraged to instead splurge and split their spending into periodic payments.

[…]

While BNPL companies present these loans as a smart budgeting tool, experts say costs can quickly add up, leaving shoppers with mounting debt. And regulators across the world have started to rein in these services, concerned that they can negatively impact the young consumers who tend to use them.

It is a bit disappointing but unsurprising that Apple is rumoured to be working on a competing offering. Once a company has got its feet wet in the murky sea of financial services, why would it be reluctant to go further?

Unclack for MacOS

Here is an excellent little single-purpose Mac utility: Unclack automatically mutes your mic when you are typing, and unmutes it when you stop. That’s it; that is all it does.
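The post does not say how Unclack is actually implemented, but the core idea is simple enough to sketch. Here is a minimal, hypothetical version in Python — not Unclack’s real code — assuming the third-party pynput package for global keystroke monitoring and macOS’s AppleScript input-volume control:

```python
# A hedged sketch of the Unclack idea, not its actual implementation:
# mute the Mac's microphone input as soon as a key goes down, then
# restore it after a short pause in typing. Requires the third-party
# pynput package and, on modern macOS, accessibility permission for
# global keyboard monitoring.
import subprocess
import threading

from pynput import keyboard

UNMUTE_DELAY = 1.0   # seconds of keyboard silence before unmuting
RESTORE_LEVEL = 75   # input volume to restore, on macOS's 0-100 scale
unmute_timer = None

def set_input_volume(level: int) -> None:
    # macOS exposes the microphone level through AppleScript.
    subprocess.run(
        ["osascript", "-e", f"set volume input volume {level}"],
        check=True,
    )

def on_press(key) -> None:
    global unmute_timer
    set_input_volume(0)  # typing started (or continued): mute
    if unmute_timer is not None:
        unmute_timer.cancel()  # still typing, so push the unmute back
    unmute_timer = threading.Timer(
        UNMUTE_DELAY, set_input_volume, args=(RESTORE_LEVEL,)
    )
    unmute_timer.start()

# Listen for key presses system-wide until the process is interrupted.
with keyboard.Listener(on_press=on_press) as listener:
    listener.join()
```

The real utility presumably does this more efficiently through native audio APIs, but the mute-on-keydown, unmute-after-a-pause loop is the whole trick.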

This was apparently released months ago, but I was only introduced to it recently, and it has been a very good thing to have. I am a loud typist pretty much always — it is a bad habit, I know — and this little utility means that I do not have to remember to mute and unmute my mic during online meetings. However, I have also discovered that I speak more while typing than I realized, thanks to this utility.

This is, for me, a perfect addition to my work-from-home software toolkit, and it is free. Recommended.

Abolishing Online Anonymity Will Not Tackle Abuse

Hussein Kesvani, in an opinion piece for the Guardian:

There is an argument that by forcing people to reveal themselves publicly, or giving the platforms access to their identities, they will be “held accountable” for what they write and say on the internet. Though the intentions behind this are understandable, I believe that ID verification proposals are shortsighted. They will give more power to tech companies who already don’t do enough to enforce their existing community guidelines to protect vulnerable users, and, crucially, do little to address the underlying issues that render racial harassment and abuse so ubiquitous.

My pet theory is that our fractured relationship with other users of big online platforms has nothing to do with anonymity and everything to do with standards. Pseudonymity and anonymity have been a part of the internet since it was created. Many users of forums and, before them, BBSes were only known by their handles. The biggest thing that has changed in the last fifteen-or-so years is a weakening of moderation efforts and community standards. It used to be that you had to go to specific websites known for users’ ability to test the limits of good taste and free speech, but that approach was mainstreamed. In the earlier days of Twitter, company executives famously referred to it as the “free speech wing of the free speech party”. Alexis Ohanian repeatedly praised Reddit’s laissez-faire approach to speech, and Facebook has wrestled with moderation issues for well over a decade now. Many users may have been repelled by rampant abuse, and those who remained were able to set a standard for new users to grow accustomed to.

Lax moderation in the founding years of these platforms undoubtedly aided their growth, but that rapid ascendancy also compounded their inability to moderate as they grew. Mike Masnick of Techdirt has said that moderation is impossible at scale, but I think that is partly because platforms are not moderating at a small scale. Trying to embed community standards into a platform hosting hundreds of millions of users is a fraught exercise. It has to start when these platforms are nascent.

That is my little theory, but it is sort of irrelevant. There is no way to reset platforms to the size they were at their founding so that we can try this whole thing again. I do not know how we, collectively, find a better way to express ourselves online now that the standard has been set. I do not think banning anonymity is a realistic or effective solution. Platforms’ lowered tolerance for abuse is, I think, helpful, if long overdue. But some change perhaps comes from understanding that we are often communicating with real people. I am not arguing that it will solve racism, but Kesvani is right: requiring verified identification to use web platforms will only give a superficial impression of improving on that front, too.

‘Roadrunner’ Contains Undisclosed Generated Re-Creations of Anthony Bourdain’s Voice

Helen Rosner, of the New Yorker, interviewed Morgan Neville about his new film “Roadrunner”, a documentary about Anthony Bourdain’s life:

There is a moment at the end of the film’s second act when the artist David Choe, a friend of Bourdain’s, is reading aloud an e-mail Bourdain had sent him: “Dude, this is a crazy thing to ask, but I’m curious,” Choe begins reading, and then the voice fades into Bourdain’s own: “… and my life is sort of shit now. You are successful, and I am successful, and I’m wondering: Are you happy?” I asked Neville how on earth he’d found an audio recording of Bourdain reading his own e-mail. Throughout the film, Neville and his team used stitched-together clips of Bourdain’s narration pulled from TV, radio, podcasts, and audiobooks. “But there were three quotes there I wanted his voice for that there were no recordings of,” Neville explained. So he got in touch with a software company, gave it about a dozen hours of recordings, and, he said, “I created an A.I. model of his voice.” In a world of computer simulations and deepfakes, a dead man’s voice speaking his own words of despair is hardly the most dystopian application of the technology. But the seamlessness of the effect is eerie. “If you watch the film, other than that line you mentioned, you probably don’t know what the other lines are that were spoken by the A.I., and you’re not going to know,” Neville said. “We can have a documentary-ethics panel about it later.”

Since Bourdain wrote the words generated by this faked audio, I can see how this might seem like a fine compromise if you twist your brain around a little bit, but it is diving headfirst into some murky ethical waters. In this specific case, it comes across as exploitative and disrespectful.

This email is apparently not the only generated audio in the film, and it is unclear what the circumstances are around the other clips. A big reason there are no clear answers is that Neville reportedly does not disclose the use of generated speech anywhere in the film — and that is inexcusable. For shame.

Bloomberg: Three-Quarters of iOS Users Opt Out of Tracking

Kurt Wagner, Bloomberg:

When users get asked on iPhone devices if they’d like to be tracked, the vast majority say no. That’s worrying Facebook Inc.’s advertisers, who are losing access to some of their most valuable targeting data and have already seen a decrease in effectiveness of their ads. 

The new prompt from Apple Inc., which arrived in an iOS software update to iPhones in early June, explicitly asks users of each app whether they are willing to be tracked across their internet activity. Most are saying no, according to Branch, which analyzes mobile app growth. People are giving apps permission to track their behavior just 25% of the time, Branch found, severing a data pipeline that has powered the targeted advertising industry for years.

The opt-in numbers reported by Branch are similar to those last reported by Flurry for worldwide users.

The online advertising industry has been telling us for years that consumers overwhelmingly prefer personalized ads and are only too happy to give up private information. What a crock of lies. The ad tech industry has been relying on a lack of transparency and consent to drive its business. When given a choice, there is now large-scale evidence that people abhor tracking and will usually opt out.

Also, for what it is worth, iOS 14.5, the update that launched App Tracking Transparency, was released in April and not “early June” as reported here. Never change, Bloomberg.

Twitter Is Axing Fleets, the Stories-Like Feature Demanded by Activist Investors, Due to Low Usage

Ilya Brown of Twitter:

We built Fleets as a lower-pressure, ephemeral way for people to share their fleeting thoughts. We hoped Fleets would help more people feel comfortable joining the conversation on Twitter. But, in the time since we introduced Fleets to everyone, we haven’t seen an increase in the number of new people joining the conversation with Fleets like we hoped. Because of this, on August 3, Fleets will no longer be available on Twitter.

You may recall that the Fleets feature was launched globally in November, reportedly due to pressure from Paul Singer’s Elliott Management, though Jack Dorsey took issue with that characterization. I am not sure Wall Street is the best place to look for product ideas, but I admire Twitter’s willingness to experiment with copycat features apparently demanded by the same jerks who pushed Argentina to default. That’s innovation.

Meet BIGDBM and FullContact, Two Companies in the De-Anonymization Industry

Joseph Cox, Vice:

Tech companies have repeatedly reassured the public that trackers used to follow smartphone users through apps are anonymous or at least pseudonymous, not directly identifying the person using the phone. But what they don’t mention is that an entire overlooked industry exists to purposefully and explicitly shatter that anonymity.

They do this by linking mobile advertising IDs (MAIDs) collected by apps to a person’s full name, physical address, and other personal identifiable information (PII). Motherboard confirmed this by posing as a potential customer to a company that offers linking MAIDs to PII.

While American lawmakers have been focused on allegations of criminally anticompetitive practices by bigger tech companies and American media has extensively covered Facebook and Google’s creepy tracking practices, the data “enrichment” industry has skated by with little attention outside of the tech-centric press. Its practices cannot be ignored.

A couple of years ago, records of over one billion people were found on an unprotected server, sourced from two different data enrichment companies. American cellular providers share subscriber information with advertisers and enrichment companies. This entire industry matches identifiers in different data sets to produce more comprehensive, more detailed, and more individualized profiles on people, which it sells back to the advertising industry, other data companies, resellers, and government agencies, according to the privacy policy of one of the two companies in this report.
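To make that matching step concrete, here is an illustrative sketch in Python. The field names and records are invented for the example; real enrichment firms join far larger data sets with messier keys:

```python
# Invented example data: "anonymous" app events keyed by a mobile
# advertising ID (MAID), and a broker's table linking MAIDs to
# personally identifiable information (PII).
app_events = [
    {"maid": "8a2b-41c7", "app": "weather", "lat": 51.05, "lon": -114.07},
    {"maid": "8a2b-41c7", "app": "dating",  "lat": 51.04, "lon": -114.06},
    {"maid": "f309-77de", "app": "fitness", "lat": 49.28, "lon": -123.12},
]

pii_by_maid = {
    "8a2b-41c7": {"name": "Jane Doe", "address": "123 Main St",
                  "email": "jane@example.com"},
}

# The "enrichment" step is just a join on the shared identifier: each
# supposedly anonymous event becomes a named, located record the moment
# the identifiers line up in both data sets.
enriched = [
    {**event, **pii_by_maid[event["maid"]]}
    for event in app_events
    if event["maid"] in pii_by_maid
]

for record in enriched:
    print(record)
```

The join itself is trivial; the industry’s value is in amassing both sides of it at scale.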

I thought it might be useful to look at ways to opt out of this kind of associative data collection, so let’s examine those two companies.

FullContact cares enough about privacy to provide a process for removing your data from its systems — but that is helpful only if you know the company exists. I followed the process and saw that FullContact had linked two of my email addresses and my phone number to a scraped copy of the LinkedIn profile I deleted many years ago, various social media profiles — remember Foursquare? — and my city. If you are familiar with the APIs provided by social media companies, this is probably an unsurprising data set. I sent a request to delete my data and, within an hour, I received an email confirming it was completed.

BIGDBM is much less transparent. On its Data Market page, it brags that it offers:

[…] a secure, cloud-based, self-service data platform that enables users to quickly and easily select data from billions of records. All BIGBDM records contain a persistent individual ID that keeps track of individuals in both online and offline data environments, allowing our customers for marketing to individuals using digital ads, or offline using phones and direct mail.

People seemingly have little control over whether BIGDBM has their identifier. On its privacy page, California users are able to request a copy of their data by completing a PDF form — which, as of writing, returns an error stating that the HTTPS certificate expired last year — and emailing the company a copy. Then, BIGDBM may grant access to its California-specific database tool, at which point it appears that users may be allowed to delete their information. This apparently only applies to users in California; if you live elsewhere in the United States, BIGDBM may process your request if you ask a sympathetic company representative nicely.

It is unclear how the company treats information about non-Americans. Its privacy policy says that it does not collect information about people in the European Union “as a matter of course”, but how can it guarantee that? And what about people elsewhere?

Of course, all of this is only relevant if you have heard of BIGDBM. Companies like these are often unnamed in the user agreements and privacy policies most users do not read before registering for a service. In many cases, they fall under a generic term, like “vendors”, “partners”, or “other parties”.

It is onerous to require that individual users understand the full consequences of privacy policies like these. They grant most companies the freedom to share whatever information they feel like with whichever third parties they deem relevant to their business practices. Those parties might re-share it, or mix it with other records to increase its granularity. All of this is permitted under U.S. law. And, because many technology products and services are based in the U.S., it often means that non-Americans are subject to the same policies due to the jurisdiction clause in the user agreement.