Yeah. It’s like any given the price is going to be wrong. So we’ll just adjust it over time, as we see if the value proposition makes sense to people. I’m not thinking about this a lot right now. We need to make full self-driving work in order for it to be a compelling value proposition.
Tesla has been selling this service for years now, and Elon is saying that for it to be a “compelling value,” the company will “need to make full self-driving work.”
That sure sounds like he’s saying that anyone who has already paid for FSD does not yet have a system that works, and that their purchase was not a “compelling value.”
It sounds to me like Tesla has been treating the ten thousand dollar self-driving option as a sort of Kickstarter for maybe, eventually, making its existing fleet of vehicles fully autonomous. But what happens if Tesla is unable to deliver these future features with the hardware it has sold under the promise of “Full Self-Driving Capability”? How many Tesla owners optioned it now with the hope that it will be compelling value in the future, instead of waiting for that time — when it will likely be more expensive?
Tesla’s approach to autonomous vehicles is full of contradictions. The company calls the feature “Full Self-Driving”, but a car optioned with it does not currently drive itself, and it may never do so. The company keeps raising the cost of the option, creating pressure on buyers to spec it now or risk a greater expense if and when Tesla can deliver the promised features, but Musk also says that the option is not yet “compelling”.
Google’s parent company flexed its digital dominance, reporting its highest quarter ever for sales and profit behind a gusher of online advertising from businesses vying for customers across reopened economies.
Other tech companies have benefited from a soaring digital ad market. Snap Inc. last week reported revenue more than doubled behind strong user growth, while Twitter Inc. reported sales surged 74% behind increased advertising.
Advertising revenue growth in the second quarter of 2021 was driven by a 47% year-over-year increase in the average price per ad and a 6% increase in the number of ads delivered. Similar to the second quarter, we expect that advertising revenue growth will be driven primarily by year-over-year advertising price increases during the rest of 2021.
Facebook Inc said on Wednesday it expects revenue growth to “decelerate significantly,” sending the social media giant’s shares down 3.5% in extended trading even as it reported strong ad sales.
Facebook said it expects Apple’s recent update to its iOS operating system to impact its ability to target ads and therefore ad revenue in the third quarter. The iPhone maker’s privacy changes make it harder for apps to track users and restrict advertisers from accessing valuable data for targeting ads.
Facebook said much the same thing in its earnings press release last quarter. Perhaps its advertising revenues will begin to be impacted by App Tracking Transparency after all, but it seems likely that the feature will benefit the online advertising duopoly. In this riskier climate, advertisers seem to be favouring the known quantities of Google and Facebook. I will repeat what I wrote in April:
As is often the case for stories about privacy changes — whether regulatory or at a platform level — much of the coverage about App Tracking Transparency has been centred around its potential effects on the giants of the industry: Amazon, Facebook, and Google. But this may actually have a greater impact on smaller ad tech companies and data brokers. That is fine; I have repeatedly highlighted the surreptitious danger of these companies that are not household names. But Facebook and Google can adapt and avoid major hits to their businesses because they are massive — and they may, as Zuckerberg said, do even better. They are certainly charging more for ads.
Privacy should not be something that users must buy, nor should its violation be a key selling point. Privacy is something that should be there, for all of us, regardless of the device we use, the websites we visit, or the ad tech networks we unknowingly interact with.
If you parked your car in one of the thousands of parking spots across Calgary, there’s a good chance you paid the Calgary Parking Authority for the privilege. But soon you might be hearing from the authority after a recent security lapse exposed the personal information of vehicle owners.
But a logging server used to monitor the authority’s parking system for bugs and errors was left on the internet without a password. The server contained computer-readable technical logs, but also real-world events like payments and parking tickets that contained a driver’s personal information.
Nice to see my city being recognized by the international technology press. As of writing, the Calgary Parking Authority has not notified account holders, and I could not find any relevant local news stories.
The first time I remember shopping for music was at a Best Buy one day in 2001. I came home with two CDs: the Baha Men’s Who Let the Dogs Out and the pop compilation Now That’s What I Call Music! 5.
Each of those albums cost more than a month of streaming does today, which reflects all that happened to music listening in the intervening 20 years — Napster and LimeWire, iPods and iPhones, Spotify and TikTok. Every decade I’ve been alive, a new format has ascended. Tapes were displaced in the 1990s by CDs, which were displaced in the 2000s by mp3s, which were displaced in the 2010s by streaming. Now, instead of buying music, people rent it.
The music I’ve salvaged from earlier times is now part of my collection on Spotify, which I’ve been using since it launched in the United States, 10 years ago this month. But as I look back on the churn of the past couple of decades, I feel uneasy about the hundreds of playlists I’ve taken the time to compile on the company’s platform: 10 or 20 years from now, will I be able to access the music I care about today, and all the places, people, and times it evokes?
I still have the very first CD I remember buying, in 2003, at a particularly luxe A&B Sound location now occupied by a gym. I cannot remember when I last put that disc into a player. According to Music, the MP3s I ripped from that CD were most recently listened to in 2010, but I streamed that same record just a few weeks ago. That raises some interesting questions: Am I likely to ever play the MP3s I created all those years ago? Will they work next time I try? What am I most likely to do when I want to listen to that record and find that it has, for example, been pulled from streaming services? What new format will emerge in a decade’s time, and will it have that album on it?
In some sense, we have never stored music recordings in a permanent way. Vinyl records degrade with time and on playback. Manufacturers promised that CDs would last hundreds of years, but their actual lifespan is entirely variable. Hard drives degrade, and music streaming is an unproven business model with an oddly stagnant price point. Even so, transitioning so much of our listening to a deliberately temporary model seems short-sighted. We have replaced our hope of permanence with the promise of ephemerality, which is perhaps more honest, but it places all control and trust over our very personal attachment to art in the hands of big companies. I’m not sure about you, but that seems like a mistake.
Here’s a simple way to put the explosion of vinyl record sales in perspective: Pressing plants around the globe have the capacity to manufacture 160 million albums a year, according to the estimate of one executive with decades of experience in physical formats. But, he explains, the current “extraordinary” demand for vinyl looks to be more than double that: somewhere between 320 million and 400 million units.
We still want physical versions of many records even though much of our casual listening has been moved to a rental model. They may not last forever, but they cannot be removed from our shelves if a record label and a streaming service have a legal dispute.
Speaking of Safari, the developer of the OldOS app featuring a recreated iOS 4 interface built in SwiftUI has created something old and exciting: Safari 5. It is incomplete, sure, but it features all of the gradients, shiny icons, and Lucida Grande you would expect.
There is some good news: the “⋯” Button of Mystery has been scrapped and replaced with the standard share button. There’s also a reload button in the address bar right beside the URL — but it is grey, while every other tappable control in Safari is blue.
However, a whole-cloth web browser redesign is perhaps one of the most ambitious and difficult UI changes to make, and it shows. I appreciate that Apple has been trying to move user interface components toward the bottom of the display in several applications; phone screens are still growing, and notification bubbles can cover toolbars at the top. It makes sense to prioritize thumb-accessible areas for interactivity. But when Google prototyped a similar bottom-focused redesign, many users found it “disorienting”, according to Chris Lee. It is a similar story in iOS 15.
They’re already desperately trying to make this UI work *and it’s a brand new UI*; imagine if a year or two from now they want to add some new option to it.
I often get the impression that software vendors, in general, imagine that it is inherently good for them to ship frequent updates with noticeable changes, and that users must appreciate the knowledge that their software is being updated all the time. This is a hallmark of the “Agile” development model and the software-as-a-service world. But I would submit that most users just want to get stuff done in more-or-less the same way as they did before an update. Software should enable that as much as possible; it should not be a barrier, and whole-cloth redesigns like these are burdensome on users.
In this context, reconfiguring Safari so that the entire user interaction happens in the lower half of the screen is a win for usability, but a loss for muscle memory. I think this once-in-a-lifetime update could make sense in the long term. But when coupled with some of the space constraints created by this specific iteration and how cramped the controls are, it is hard to argue in favour of this interpretation of Safari.
Meanwhile, the latest version of iPadOS has gained a preference in Safari to toggle the new unified tab and address bar, similar to that introduced in the last MacOS Monterey beta seed, which ought to be a clue. I think adding options to, effectively, switch between new and old versions of an app is a tacit admission that a change is big enough to be troublesome for a large number of users.
None of the versions of Safari 15, including the one in Monterey, should be scrapped entirely. But many of the UI changes are either too ambitious or — in the case of colour-changing tabs — poorly considered. New versions of iOS and iPadOS will probably be rolling out to users in six to eight weeks, and I do not think this flagship app is close to being shippable.
I wish I was kidding at this point, but the Safari tab bar in iOS 15 beta 4 *can* get busier.
Here’s what happens if you do a Google search, have an extension active, and have just downloaded a file.
In the pursuit of simplicity, the first version of this Safari redesign hid almost everything so that the UI could be condensed into a single address bar. Just three revisions in, Safari now appears far more complicated than its predecessor.
In the headphones category on Amazon, 1,800 different products from 666 brands were among the top 100 best-sellers in the last twenty-four months. That’s nearly three new products from almost one new brand every day replacing current items in the best-sellers list. Those brands are pseudo-brands like NUBBYO, LAFITEAR, NANMING, AIWONS, or HWCONA.
Only five brands – Apple, Samsung, Sony, Soundcore, and Tozo – had a product in the headphones best-sellers list for the entire twenty-four months. Just twenty have been in it for over 500 days (70% of the time). More than half of brands were on the list for only five days or less; hundreds of brands that gained some momentum, all to get lost among the sea of lookalikes a few days later.
I appreciate this longer-term view into the staying power — or lack thereof — of the passthrough trademarks raised in a story by John Herrman of the New York Times that I linked to last year:
“For brand owners, enrolling provides you with powerful tools to help protect your trademarks, including proprietary text and image search and predictive automation,” the company declares. It gives owners control over product listings that contain their products, and the ability to protect themselves against unauthorized sellers using their names. Crucially, Amazon says on its site, “it gives you more access to advertising solutions, which can help you increase your brand presence on Amazon,” as well as to “utilize the Early Reviewer Program to gain initial reviews on new products” — a sanctioned method for improving a product’s search result.
If you’re feeding a brand-new listing into the Amazon machine, in other words, and doing so without a pre-existing brand or customers, getting into Brand Registry is extremely important. To achieve real and lasting success on Amazon, it’s vital.
As of 2017, it also requires a registered trademark.
Amazon’s policies have singlehandedly incentivized the creation of hundreds of these nonsense trademarks, and Kaziukėnas shows that they have no long-term staying power. These are entirely disposable brands for disposable goods: if you have a problem with your new pair of HWCONA headphones, where do you turn to get them fixed? What company is staking its reputation on the quality of these products? From a consumer’s perspective, there is nothing that gives these products any greater expectation of quality than some knock-off brand stocked in a dollar store.
I would love to see an investigation like the one Kaziukėnas did across dozens of different product categories to see if the results are similar.
Most Americans by now understand that our phones are tracking our movements, even if we don’t necessarily know all the gory details. And I know how easy it can be to feel angry resignation or just think, “so what?” I want to resist both of those reactions.
Hopelessness helps no one, although that’s often how I feel, too. Losing control of our data was not inevitable. It was a choice — or rather a failure over years by individuals, governments and corporations to think through the consequences of the digital age. We can now choose a different path.
Most articles about privacy tend to feel pretty bleak, but Ovide’s comes across as refreshingly optimistic. I appreciate that.
These are choices we can demand of our governments at different levels. It is particularly warranted in the United States, given that it is the headquarters of many privacy-hostile companies and, therefore, the jurisdiction in which users’ data is regulated.
Once upon a time, Apple offered an easy-to-understand business model. The company made personal computers, small, medium, and large. Successfully positioned in the affordable luxury market sector, Apple devices sold well with healthy margins. Those margins helped finance strong R&D investments and took good care of employees, investors, and Uncle Sam.
In the company’s latest SEC filing for the quarter ended in March 2021, Apple’s Services reached $16.9B, exactly as much as the $16.9B number for the combined Mac and iPad revenue, although still far from the $48B iPhone revenue for that quarter.
This changes the business model’s “center of gravity”.
Apple’s business model is still admirably simple compared to many of its biggest competitors. Facebook and Google sell advertisements against scraped user data and profiling; Microsoft and Amazon are more diversified, but a large slice of their revenue comes from enterprise and government contracts. Apple, for the most part, sells physical products, bytes, and service contracts to end users.
But this new focus on recurring services revenue — predictable monthly payments from as many buyers as possible — has created plenty of opportunities for Apple to degrade its existing product offerings. As the iTunes Store gave way to the Apple Music streaming model, iTunes was replaced with the much worse Music app, which feels like an old <frame>-based website given the façade of a desktop application. Applications across MacOS and iOS now interrupt users with advertisements in a nagging reminder that your multi-thousand-dollar purchase of a hardware product is merely the beginning of your financial relationship with Apple.
I understand why this is happening, but a pivot to services is a hard turn for Apple to make, and I feel it is not executing it as gracefully as it could — and should — be.
As Gassée writes, the definition Apple uses for reporting revenue in its Services category is pretty broad. This is how Apple described the category in its most recent annual filing:
Services net sales include sales from the Company’s advertising, AppleCare, digital content and other services. Services net sales also include amortization of the deferred value of Maps, Siri, and free iCloud storage and Apple TV+ services, which are bundled in the sales price of certain products.
One thing not mentioned by either Gassée or Apple is that about one-fifth to one-quarter of Apple’s services revenue comes from Google, which pays for its place as the default search engine across Apple’s ecosystem. I mentally subtract $3 billion from this category in the quarterly earnings report to create a truer estimation of how Apple’s own-brand services are performing.
FSD beta 9 is a prototype of what the automaker calls its “Full Self-Driving” feature, which, despite its name, does not yet make a Tesla fully self-driving. Although Tesla has been sending out software updates to its vehicles for years—adding new features with every release—the beta 9 upgrade has offered some of the most sweeping changes to how the vehicle operates. The software update now automates more driving tasks. For example, Tesla vehicles equipped with the software can now navigate intersections and city streets under the driver’s supervision.
“Videos of FSD beta 9 in action don’t show a system that makes driving safer or even less stressful,” says Jake Fisher, senior director of CR’s Auto Test Center. “Consumers are simply paying to be test engineers for developing technology without adequate safety protection.”
But the fact of the matter is that these features are being marketed as “Full Self-Driving” and “Autopilot”. Unlike other cars equipped with automatic lane keeping and radar-assisted cruise control, Tesla is not pitching these features as part of a safety enhancement package, but as autonomous vehicle technologies. There is no way the company does not know how owners are using these features and, consequently, subjecting other drivers, pedestrians, and cyclists to their beta testing experience at great risk to public safety.
It is also true that human drivers will make mistakes. Not every driver on the road is equally competent, and it is possible that Tesla’s system is better than some human drivers. But autonomous systems can lull the human operator into a false impression of safety, with sometimes deadly consequences.
And then there’s the Playdate from Panic. Whereas the aforementioned handhelds are almost uniformly technological upgrades, the Playdate offers something much weirder. It looks kind of like a Game Boy that comes from an alien world. There are familiar elements, like a D-pad and face buttons, but many of its games are controlled by a crank that slots into the side. And those games are only available in black and white, and they’ll eventually be released as part of weekly mystery drops.
It sounds strange and fascinating, and I had the chance to head into the Playdate’s parallel universe over the last few days with a near-final version of the device. It definitely is weird — but that’s also what makes it exciting.
Nothing I’ve played on the Playdate thus far screams “revolutionary” or “must-have.” Two low-powered CPUs, intentionally lo-fi hardware, and a single rotary crank can only combine to deliver so much. These four test titles likely lack the scope or depth that some gamers hope for in a brand-new system’s launch library.
Yet everything I’ve played on the Playdate has been accessible, amusing, and unique, and getting four games at once has distributed the fun factor around in a way that I really appreciate. Two of the games are built with replayability in mind—one as a score chaser, the other as a puzzle-minded platformer with speedrunning potential. The other two titles are more linear but focus less on challenge and more on atmosphere; these show what developers can do within a wimpy system’s limits to deliver their own comfortable, unique games on black-and-white hardware.
Preorders for the Playdate begin one week from today, July 29. I am so excited about the possibilities of this weird little thing.
Kim Zetter’s Zero Day newsletter has been a consistently good read. Today’s issue, about that mysterious list of tens of thousands of phone numbers forming the basis of much of the Pegasus Project reporting, is a great example:
There is nothing on the list to indicate what purpose it’s meant to serve or who compiled it, according to the Post and other media outlets participating in the Pegasus reporting project. There is also nothing on the list that indicates if the phones were spied on, were simply added to the list as potential targets for spying or if the list was compiled for a completely different reason unrelated to spying.
Those varying descriptions have created confusion and controversy around the reporting and the list, with readers wondering exactly what the list is for. The controversy doesn’t negate the central thesis and findings, however: that NSO Group has sold its spy tool to repressive regimes, and some of those regimes have used it to spy on dissidents and journalists.
The reporting associated with the Pegasus Project has been enlightening so far, but not without its faults. The confusion about this list of phone numbers is one of those problems — and it is a big one. It undermines some otherwise excellent stories because it is not yet known why someone’s phone number would end up on this list. Clearly it is not random, but nor is it a list of individuals whose phones were all infected with Pegasus spyware. This murkiness has allowed NSO Group’s CEO to refocus media attention away from the ethical dumpster fire started when his company knowingly licensed spyware to authoritarian regimes.
Monsignor Jeffrey Burrill, former general secretary of the U.S. bishops’ conference, announced his resignation Tuesday, after The Pillar found evidence the priest engaged in serial sexual misconduct, while he held a critical oversight role in the Catholic Church’s response to the recent spate of sexual abuse and misconduct scandals.
According to commercially available records of app signal data obtained by The Pillar, a mobile device correlated to Burrill emitted app data signals from the location-based hookup app Grindr on a near-daily basis during parts of 2018, 2019, and 2020 — at both his USCCB office and his USCCB-owned residence, as well as during USCCB meetings and events in other cities.
I do not wish to devalue any reader’s faith; if you are Catholic, please know that I am not criticizing you specifically or your beliefs.
The Catholic Church has a history of opposing LGBTQ rights and treating queer people with a unique level of hatred — this report says that the use of Grindr and similar apps “present[s] challenges to the Church’s child protection efforts”, invoking the dehumanizing myth tying gay men to pedophilic behaviour, an association frequently made by the Catholic Church.1 I find it difficult to link to this story because of statements like these, and it offends me how this priest was outed.
But I also think it is important to give you, reader, the full context of what is disclosed, and what is not. For example, I understand that Catholic priests have an obligation to be celibate and, theoretically, the Pillar would investigate any clergy it believed was stepping out of line. But this specifically involves one priest and Grindr, and leaves a lot of questions unanswered. For a start, how did the Pillar know? Did it get tipped off about Burrill’s activities so it would know where to look, or did it receive data dumps related to the phones of significant American clergy? And what about other dating apps, like Tinder or Bumble? Surely, there must be priests in America using one of those apps to engage in opposite-sex relationships; why not an exposé on one of them? This report does not give any indication about how it began investigating. I find that odd, to say the least.
The reason I am linking to this is because of that data sharing angle. As reported by Shoshana Wodinsky at Gizmodo, Grindr has repeatedly insisted on the anonymity of its data collection and ad tech ties:
When asked about the Burrill case, a Grindr spokesperson told Gizmodo that it “[does] not believe Grindr is the source of the data behind the blog’s unethical, homophobic witch hunt.”
Obviously, only Grindr knows if Grindr is telling the truth. But these sorts of adtech middlemen the platform’s relying on have a years-long track record of lying through their teeth if it means it can squeeze platforms and publishers for a few more cents per user. Grindr, meanwhile, has a years-long track record of blithely accepting these lies, even when they mean multiple lawsuits from regulators and slews of irate users.
Wodinsky points to a piece at the Catholic News Agency — which both Pillar writers used to work for — claiming that an anonymous party had “access to technology capable of identifying clergy […] found to be using [dating apps] to violate their clerical vows”. It will come as no surprise to you that I find it revolting that someone can expose this behaviour through advertising data. It is a wailing klaxon for regulation and reform.
But, also, is it ethical for a news organization to acquire data like this for the purpose of publicly outing someone or sharing their private activities? In a 2018 story, the New York Times showed how it was possible to identify people using similar data. But the newsworthiness of that story was not in individuals’ habits and activities, it was about how easy it is to misuse advertising and tracking data. And where is the line on this? Are journalists and publications going to begin mining the surveillance of ad tech companies in search of news stories? I would be equally disturbed if this were instead a report that exposed the infidelity of a “family values”-type lawmaker. I think the Pillar exposed a worrisome capability with this report, and also initiated a rapid ethical slide.
The authors clarify that they are ostensibly concerned about the relative ease with which minors are able to use dating and hookup apps. That is a fair criticism. But this digression cannot be separated from this harmful belief, nor from the Church’s history of sexual abuse of minors. That abuse was not caustic because the clergy involved were engaged in same-sex relations, it was because they were powerful adults molesting children. ↩︎
I get the feeling I am going to be linking to a lot of NSO Group-related pieces over the next little while. There are a couple of reasons for that — good reasons, I think. The main one is that I think it is important to understand the role of private security companies like NSO Group and their wares in the context of warfare. They function a little bit like mercenary teams — Academi, formerly Blackwater, and the like — except they are held to, improbably, an even lower standard of conduct.
The second reason is that it is necessary to consider how private exploit marketplaces can sometimes be beneficial, at great risk and with little oversight. There are few laws associated with this market. There are attempts at self-regulation, often associated with changing the economics of the market through bug bounties and the like.
NSO can afford to maintain a 50,000 number target list because the exploits they use hit a particular “sweet spot” where the risk of losing an exploit chain — combined with the cost of developing new ones — is low enough that they can deploy them at scale. That’s why they’re willing to hand out exploitation to every idiot dictator — because right now they think they can keep the business going even if Amnesty International or CitizenLab occasionally catches them targeting some human rights lawyer.
But companies like Apple and Google can raise both the cost and risk of exploitation — not just everywhere, but at least on specific channels like iMessage. This could make NSO’s scaling model much harder to maintain. A world where only a handful of very rich governments can launch exploits (under very careful vetting and controlled circumstances) isn’t a great world, but it’s better than a world where any tin-pot authoritarian can cut a check to NSO and surveil their political opposition or some random journalist.
Sounds appealing, except many of the countries NSO Group is currently selling to are fantastically wealthy and have abysmal human rights records. I must be missing something here, because I do not see a way to raise the cost of deploying privately developed spyware such that the regimes many people would consider uniquely authoritarian are priced out, since those regimes are often the wealthiest customers. Amnesty researchers found evidence of the use of NSO’s Pegasus on Azerbaijani phones, too: like Saudi Arabia, Azerbaijan is an oil-rich country with human rights problems. And then there is the matter of international trust: selling only to, for example, NATO member countries might sound like a fair compromise to someone living in the U.S. or the U.K. or Canada, but it clearly establishes this spyware as a tool of a specific political allegiance.
We must also consider that NSO Group has competitors on two fronts: the above-board, like Intellexa, and those on the grey market. NSO Group may not sell to, say, North Korea, but nobody is fooled into thinking that a particularly heinous regime could not invest in its own cybercrime and espionage capabilities — like, again, the North Korean ruling party has and does.
But — I appreciate the sentiment in Green’s post, and I think it is worthwhile to keep in mind as more bad security news related to this leak will inevitably follow in the coming days and weeks.
Clearview AI is currently the target of multiple class-action lawsuits and a joint investigation by Britain and Australia. That hasn’t kept investors away.
The New York-based start-up, which scraped billions of photos from the public internet to build a facial-recognition tool used by law enforcement, closed a Series B round of $30 million this month.
The investors, though undeterred by the lawsuits, did not want to be identified. Hoan Ton-That, the company’s chief executive, said they “include institutional investors and private family offices.”
It makes sense that these investors would want their association with the company kept secret, since identifying them as supporters of a creepy facial recognition company is more embarrassing than their inability to understand irony. Still, it shows how the free market is betting that this company will grow and prosper despite its disregard for existing laws, proposed legislation, and a general sense of humanity or ethics.
Dismantle this company and legislate its industry out of existence. Expose the investors who are propping it up.
I think it’s time that we bring back recognition of how innovation, and technology such as the open internet, can actually do tremendous good in the world. I’m not talking about a return to unfettered boosterism and unthinking cheerleading — but a new and better-informed understanding of how innovation can create important and useful outcomes. An understanding that recognizes and aims to minimize the potential downsides, taking the lessons of the techlash and looking for ways to create a better, more innovative world.
I appreciate this cognizant optimistic approach, and am excited to see what Masnick has in store.
Perhaps due to the magnitude of the media interest in the investigation, NSO executives chose to break the secrecy that usually surrounds their company and answer questions directly. In an interview with Calcalist, NSO chief executive Shalev Hulio denied his software was being used for malicious activity. At the heart of his claims is the list of 50,000 phone numbers on which the investigation is based, and which it is claimed are potential NSO targets. The source of the list wasn’t revealed, and according to Hulio, it reached him a month prior to the publication of the investigation, and from a completely different source.
The publications behind the Pegasus Project assert that this list of phone numbers is, in the words of the Guardian, “an indication of intent”. This is clearly not a list of random phone numbers — several of the numbers on it are tied to phones with local evidence of Pegasus software, and many more of the numbers belong to high-profile targets. But, according to Hulio, it is impossible that this is entirely a list of targets:
According to Hulio, “the average for our clients is 100 targets a year. If you take NSO’s entire history, you won’t reach 50,000 Pegasus targets since the company was founded. Pegasus has 45 clients, with around 100 targets per client a year. In addition, this list includes countries that aren’t even our clients and NSO doesn’t even have any list that includes all Pegasus targets – simply because the company itself doesn’t know in real-time how its clients are using the system.”
Hulio says that NSO Group investigated these allegations by scanning the records of clients that agreed to an analysis, and could not find anything that matched the Pegasus Project’s list. But it is hard to believe he is being fully honest given displays of hubris like this:
“Out of 50,000 numbers they succeeded in verifying that 37 people were targets. Even if we go with that figure, which is severe in itself if it were true, we are saying that out of 50,000 numbers, which were examined by 80 journalists from 17 media organizations around the world, they found that 37 are truly Pegasus, so something is clearly wrong with this list. I’m willing to give you a random list of 50,000 numbers and it will probably also include Pegasus targets.”
If a list of just 50,000 random phone numbers — basically, everyone in a small town — contains Pegasus targets, Pegasus is entirely out of control. It is a catastrophic spyware emergency. Hulio was clearly being hyperbolic, but his bluster generated quite the response from Calcalist’s interviewer:
That isn’t accurate. Out of the 50,000 numbers they physically checked only 67 phones and in 37 of them, they found traces of Pegasus. It isn’t 37 out of 50,000. And there were 12 journalists among them. That is 12 too many.
NSO Group’s response, while impassioned, cannot be trusted. The company has not earned enough public goodwill for its CEO to use such colourful language. But the Pegasus Project’s publication partners also need to clarify what the list of phone numbers actually means, because something here is not adding up.
I used some of the Washington Post’s reporting on the Pegasus Project in my piece about its revelations and lessons, but I never really addressed the Post’s article. I hope you will read what I wrote, especially since this website was down for about five hours today around the time it started picking up traction. Someone kicked the plug out at my web host; what can I say?
Anyway, the Post’s story is also worth reading, despite its headline: “Despite the hype, iPhone security no match for NSO spyware”. iPhone security is not made of “hype” and marketing. On the contrary, the reason this malware is notable is because of its sophistication and capability in an operating system that, while imperfect, is far more secure than almost any consumer device before it, as the Post acknowledged just a few years ago when it claimed Apple was “protecting a terrorist’s iPhone”. According to the Post, the iPhone is both way too locked down for a consumer product and also all of its security is mere hype.
Below the miserable headline and between the typically cynical Reed Albergotti framing, there is a series of worthwhile interviews with current and former Apple employees claiming that the company’s security responses are too often driven by marketing response and the annual software release cycle. The Post:
Current and former Apple employees and people who work with the company say the product release schedule is harrowing, and, because there is little time to vet new products for security flaws, it leads to a proliferation of new bugs that offensive security researchers at companies like NSO Group can use to break into even the newest devices.
Apple also was a relative latecomer to “bug bounties,” where companies pay independent researchers for finding and disclosing software flaws that could be used by hackers in attacks.
Krstić, Apple’s top security official, pushed for a bug bounty program that was added in 2016, but some independent researchers say they have stopped submitting bugs through the program because Apple tends to pay small rewards and the process can take months or years.
Apple disputes the Post’s characterization of its security processes, quality of its bug bounty program, involvement of marketing in its responses, and overall relationship with security researchers.
However, a suddenly very relevant post from Nicolas Brunner, writing last week, indicates that Apple’s bug bounty program is simply not good enough:
In my understanding, the idea behind the bounty program is that developers report bugs directly to Apple and remain silent about them until fixed in exchange for a security bounty pay. They also state very clearly, what issues do qualify for the bounty program payout on their homepage. Unfortunately, in my case, Apple never fulfilled their part of the deal (until now).
To be frank: Right now, I feel robbed. However I still hope, that the security bounty program turns out to be a win-win situation for both parties. In my current understanding however, I do not see any reason, why developers like myself should continue to contribute to it. In my case, Apple was very slow with responses (the entire process took 14 months), then turned me away without elaborating on the reasons and stopped answering e-mails.
The actual bounty mentioned for iCloud account takeover in Apple’s website is $100,000 USD. Extracting sensitive data from locked Apple device is $250,000 USD. My report covered both the scenarios (assuming the passcode endpoint was patched after my report). Even if they chose to award the maximum impact out of the two cases, it should still be $250,000 USD.
Selling these kind of vulnerabilities to government agencies or private bounty programs could have made a lot more money. But I chose the ethical way and I didn’t expect anything more than the outlined bounty amounts by Apple.
But $18,000 USD is not even close to the actual bounty. Lets say all my assumptions are wrong and Apple passcode verifying endpoint wasn’t vulnerable before my report. Even then the given bounty is not fair looking at the impact of the vulnerability as given below.
Apple says that it pays one million dollars for a “zero-click remote chain with full kernel execution and persistence”, plus a 50% premium for a zero-day in a beta version. That pales compared to the two million dollars Zerodium is paying for the same kind of exploit.
I’m not sure why one of the richest companies in the world feels like it needs to be so stingy with its bounty program; it feels far more like a way to keep security issues hidden & unfixed under NDA than a way to find & fix them. More micro-payouts would incentivize researchers.
Security researchers should not have to grovel to get paid for reporting a vulnerability, no matter how small it may seem. But why would anyone put themselves through this process when there are plenty of companies out there paying far more?
The good news is that Apple can get most of the way toward fixing this problem by throwing money at it. Apple has deep pockets; it can keep increasing payouts until the grey market cannot possibly compete. That may seem overly simplistic, but at least this security problem is truly very simple for Apple to solve.
This weekend’s first batch of stories from the “Pegasus Project” — a collaboration between seventeen different outlets invited by French investigative publication Forbidden Stories and Amnesty International — offers a rare glimpse into the infrastructure of modern espionage. This is a spaghetti junction of narratives: device security, privatized intelligence and spycraft, appropriate targeting, corporate responsibility, and assassination. It is as tantalizing a story as it is disturbing.
“Pegasus” is a mobile spyware toolkit created and distributed by NSO Group. Once successfully installed, it reportedly has root-level access and can, therefore, exfiltrate anything of intelligence interest: messages, locations, phone records, contacts, and photos are all obvious and confirmed categories. Pegasus can also create new things of intelligence value: it can capture pictures using any of the cameras and record audio using the microphone, all without the user’s knowledge. According to a 2012 Calcalist report, NSO Group is licensed by the Israeli Ministry of Defense to export its spyware to foreign governments, but not private companies or individuals.
There is little record of this software or capability on NSO Group’s website. Instead, the company says that its software helps “find and rescue kidnapped children” and “prevent terrorism”. It recently published a transparency report arguing that it offers lots of software for other purposes. It acknowledged some abuse of Pegasus’ capabilities, but said that those amount to a tiny number and that the company does not sell to “55 countries […] for reasons such as human rights, corruption, and regulatory restrictions”. It does not say in this transparency report which countries’ governments it prohibits from using its intelligence-gathering products.
Much of this conflict is about the stories which NSO Group wants to tell compared to the stories it should be telling: how its software enables human rights abuses, spying on journalists, and expanding authoritarian power. In fact, that is an apt summary for much of the security reporting that comprises the Pegasus Project: the stories that we, the public, have, not the stories that we want to have.
One of the stories that we tell ourselves is that our devices are pretty secure, so long as we keep them up to date, and that we would probably notice an intrusion attempt. The reality, as verified by Citizen Lab at the University of Toronto, is that NSO Group is particularly good at developing spyware:
Citizen Lab independently documented NSO Pegasus spyware installed via successful zero-day zero-click iMessage compromises of an iPhone 12 Pro Max device running iOS 14.6, as well as zero-day zero-click iMessage attacks that successfully installed Pegasus on an iPhone SE2 device running iOS version 14.4, and a zero-click (non-zero-day) iMessage attack on an iPhone SE2 device running iOS 14.0.1. The mechanics of the zero-click exploit for iOS 14.x appear to be substantially different than the KISMET exploit for iOS 13.5.1 and iOS 13.7, suggesting that it is in fact a different zero-click iMessage exploit.
“Zero-day” indicates a vulnerability that has not already been reported to the vendor — in this case, Apple. “Zero-click” means exactly what it sounds like: this is an exploit delivered by iMessage that is executed without any user interaction, and it is wildly difficult to know if your device has been compromised. That is the bad news: the story we like to tell ourselves about mobile device security simply is not true.
But nor is it true that we are all similarly vulnerable to attacks like these, as Ivan Krstić, Apple’s Head of Security Engineering and Architecture, said in a statement to the Washington Post:
Apple unequivocally condemns cyberattacks against journalists, human rights activists, and others seeking to make the world a better place. […] Attacks like the ones described are highly sophisticated, cost millions of dollars to develop, often have a short shelf life, and are used to target specific individuals. […]
This situation is reminiscent of the 2019 zero-day attacks against iPhone-using Uyghurs, delivered through news websites popular with Uyghurs and presumably orchestrated by the Chinese government. Those vulnerabilities were quietly fixed at the beginning of that year, but their exploitation was not disclosed until Google’s Project Zero published a deep dive into their existence, at which point Apple issued a statement. I thought it was a poor set of excuses for a digital attack against an entire vulnerable population.
This time, it makes sense to focus on the highly-targeted nature of Pegasus attacks. The use of this spyware is not indiscriminate. But — with reportedly tens of thousands of attempted infections — it is being used in a more widespread way than I think many would assume. Like the exploits used on Uyghurs two years ago, it indicates that iPhone zero-click zero-days might not be the Pappy Van Winkle of the security world. Certainly, they are still rare, but it seems that there are some companies and nation-states that have stocked their pantries for a rainy day and might not be so shy about their use.
Still, nothing so far indicates that a typical person is in danger of falling victim to Pegasus, though the mere presence of zero-click full exploitations is worrisome for every smartphone user. The Guardian reports that the victims of NSO Group’s customers are high-profile individuals: business executives, investigative journalists, world leaders, and close associates. That is not to minimize the effect of this spyware, but its reach is more deliberately limited. If anything, the focus of its deployment teases for us mere mortals the unique security considerations faced by those at higher risk of targeted attack.
Researchers have documented iPhone infections with Pegasus dozens of times in recent years, challenging Apple’s reputation for superior security when compared with its leading rivals, which run Android operating systems by Google.
The months-long investigation by The Post and its partners found more evidence to fuel that debate. Amnesty’s Security Lab examined 67 smartphones whose numbers were on the Forbidden Stories list and found forensic evidence of Pegasus infections or attempts at infections in 37. Of those, 34 were iPhones — 23 that showed signs of a successful Pegasus infection and 11 that showed signs of attempted infection.
If you read Amnesty’s full investigation into Pegasus — and I suggest you do as it is comprehensive — there is a different explanation for why the iPhone is overrepresented in its sample, and a clear warning against oversimplification:
Much of the targeting outlined in this report involves Pegasus attacks targeting iOS devices. It is important to note that this does not necessarily reflect the relative security of iOS devices compared to Android devices, or other operating systems and phone manufacturers.
In Amnesty International’s experience there are significantly more forensic traces accessible to investigators on Apple iOS devices than on stock Android devices, therefore our methodology is focused on the former. As a result, most recent cases of confirmed Pegasus infections have involved iPhones.
iOS clearly has many holes in its security infrastructure that need patching. Reporting from the Post suggests that the demands of launching a major new version of iOS every year — in addition to the four other operating systems Apple updates on an annual cycle — not only take a toll on the reliability of its software, but also mean some critical vulnerabilities take months to get patched. Apple is not alone in that regard, but it does raise questions about the security of the world’s information resting entirely in the hands of engineers at three companies on the American west coast. Is it a good thing that high-risk people only have a choice between iOS and Android? Does it make sense that many of the world’s biggest companies almost entirely run Windows? Is enough being done to counter the inherent risks of this three-way market?
The security story we have is one of great risk, with responsibility held by very few. There are layers of firewalls and scanners and obfuscation techniques and encryption and all of that — but a determined attacker knows there are limited variables. iOS is not especially weak, but it is exceptionally vertically-integrated. If the latest iPhone running the latest software updates is vulnerable, all iPhones probably are as well.
There are two more contrasting sets of stories I wish to touch on about the responsibility of NSO Group and companies like it in these attacks. First, NSO Group is careful to state that it is merely a vendor and, as such, “does not operate its technology, does not collect, nor possesses, nor has any access to any kind of data of its customers”. However, it is also adamant that its software had zero role in Khashoggi’s assassination. How is it possible to square that certainty with the company’s alleged lack of involvement in the affairs of customers it can neither confirm nor deny?
Second, I gave credit earlier this year to the notion that private marketplaces of security vulnerabilities might actually be beneficial — at least, compared to weakened encryption or some form of “back door”. NSO Group is the reverse side of that argument. The story I like to tell myself is that, given that there is an established market for zero-days, at least that means law enforcement can unlock encrypted smartphones without the need for a twenty-first century Clipper Chip. But the story we have is that NSO Group develops espionage software over which, once sold, it has little control. The company’s spyware is now implicated in the targeting of tens of thousands of phones belonging to activists, human rights lawyers, journalists, businesspeople, demonstrators, investigators, world leaders, and friends and colleagues of all of the above. NSO Group is a private company that enables dictators and autocrats, and somehow gets to wash its hands of all responsibility.
The story it wants is of a high technology company saving children and fighting terrorists. The story it has is an abuse of power and a lack of accountability.
You might remember that embarrassing texts and images were leaked from Jeff Bezos’ iPhone a couple of years ago that confirmed that he was cheating on his now ex-wife with his current partner Lauren Sanchez. Bezos got in front of the National Enquirer story with a heroic-seeming Medium post where he copped to the affair.
In that post, he also insinuated that the Saudi royal family used NSO Group malware to breach his phone’s security and steal that incriminating evidence in retaliation for his ownership of the Washington Post and its coverage of the Saudi royalty’s role in Post contributor Jamal Khashoggi’s assassination. In addition, the Post had aggressively reported on the Enquirer’s catch-and-kill scheme to silence salacious stories.
While that got huge amounts of coverage, a funny thing happened not too long after: the Wall Street Journal confirmed that the Enquirer did not get the texts and photos from some secret Saudi arrangement and, instead, simply paid Sanchez’ brother who had stolen them. A fuller story of this public relations score was reported earlier this year by Brad Stone in Bloomberg Businessweek. It seems that, contrary to contemporary reporting, there was little to substantiate rumours of a high-tech break-in by a foreign government.
It is unclear whether Bezos was simply spinning a boring story in a politically-favourable way; a recent Mother Jones investigation found that Amazon’s public relations team is notorious among journalists for being hostile and telling outright lies. But if he was targeted by the Saudi Arabian royal family using NSO Group software, it is notable that it is apparently not on the list of 55 countries that the company refuses to sell to on the basis of human rights abuses. ↩︎
But today I actually want to talk a bit more about video. And I want to start by saying we’re no longer a photo-sharing app or a square photo-sharing app. The number one reason that people say that they use Instagram in research is to be entertained. So people are looking to us for that. […]
To this point I have framed Mosseri’s announced changes in the context of Instagram’s continual evolution as an app, from photo filters to network to video to algorithmic feed to Stories. All of those changes, though, were in the spirit of Systrom’s initial mission to capture and share moments. That is why perhaps the most momentous admission by Mosseri is that Instagram’s new mission is simply to be entertainment.
I have to wonder if it is in preparation for more than that, given this piece by Clive Thompson, writing for Debugger, a Medium publication:
I can’t say precisely when my Instagram ads began to tip over into SkyMall territory. I’d been noticing the devolution for months, maybe years. But these days when I open up the app, every ad customized for me is some decidedly loopy gewgaw.
Maybe Instagram’s growth continues to be driven by the successful features it can lift directly from other photo- and video-based apps. But I wonder if this mix of ads for bizarre direct-to-consumer goods and the integrated e-commerce functionality are laying the foundation for a platform more like WeChat, Line, or Gojek. Perhaps Instagram does not expand into logistics operations, but why would it not push further into online payments, and buying and selling products? For many, shopping is entertainment. Why not facilitate that inside one of the world’s most popular mobile apps and take a cut of every purchase?
Even discarding my idle speculation, the name “Instagram” sure is beginning to feel outdated or, at least, disconnected.
Facebook is not doing enough to stop the spread of false claims about COVID-19 and vaccines, White House press secretary Jen Psaki said on Thursday, part of a new administration pushback on misinformation in the United States.
Facebook, which owns Instagram and WhatsApp, needs to work harder to remove inaccurate vaccine information from its platform, Psaki said.
From the White House transcript of that press briefing, in response to a reporter’s question about what actions the U.S. federal government is taking:
In terms of actions, Alex, that we have taken — or we’re working to take, I should say — from the federal government: We’ve increased disinformation research and tracking within the Surgeon General’s office. We’re flagging problematic posts for Facebook that spread disinformation. We’re working with doctors and medical professionals to connect — to connect medical experts with popular — with popular — who are popular with their audiences with — with accurate information and boost trusted content. So we’re helping get trusted content out there.
Psaki’s admission that the government is “flagging” posts with misinformation has caused quite the gnashing of teeth in pockets of the professional commentary circuit, with the Wall Street Journal’s editorial board calling it “censorship coordination”.
But, as Mike Masnick writes at Techdirt, that is not an accurate portrayal of what the Biden administration is doing:
It’s a simple fact: the US government should not be threatening or coercing private companies into taking down protected speech.
But, over the past few days there’s been an absolutely ridiculous shit storm falsely claiming that the White House is, in fact, doing this with Facebook, leading to a whole bunch of nonsense — mainly from the President’s critics. It began on Thursday, when White House press secretary Jen Psaki, in talking about vaccine disinfo, noted that the White House had flagged vaccine disinformation to Facebook. And… critics of the President completely lost their shit claiming that it was a “First Amendment violation” or that it somehow proved Donald Trump’s case against the social media companies.
It did none of those things.
I think Ken White’s messaging is better than the official White House version, but I do not think it would ameliorate the situation for those who believe the administration is colluding with Silicon Valley, or who are exploiting vaccine misinformation for their own gain.
Microsoft’s Claire Anderson, on the company’s Medium-based design blog (can Microsoft not host its own blog?):
As the world moves toward hybrid work scenarios that blend in-person with remote, expressive forms of digital communication are more important than ever. Over 1,800 emoji exist within Microsoft 365, and we’ve been working for the past year to dramatically refresh them by creating a system that is innately Fluent.
We opted for 3D designs over 2D and chose to animate the majority of our emoji. While you’ll see these roll out in product over the coming months, we wanted to share a sneak peek with you in honor of World Emoji Day. We’re also excited to unveil five brand-new emoji that signal our fresh perspective on work, expression, and the spaces in between.
Even though the video in the post is not entirely reflective of the actual textures and detail in these emoji, there is some beautiful design at play in these images. The faces, in particular, are playfully rendered, yet still legible even at the smaller size shown in the banner image. Many of the objects have soft lighting effects that, while slightly reducing contrast, do not seem to affect clarity too much. I am looking forward to seeing what they will look like in actual use.
Jennifer Daniel of Google, a company that hosts its own blog and uses its own top-level domain extension:
Well, it looks like giving some love to hundreds of emoji already on your keyboard — focusing on making them more universal, accessible and authentic — so that you can find an all-new fav emoji (I’m fond of 🎷🐛). And, you can find all of these emoji (yes, including the king, 🐢) across more of Google’s platforms including Android, Gmail, Chat, Chrome OS and YouTube.
If you’ve scrolled through any e-commerce sites lately, you’ve probably seen a version of it: A charming dinner plate costs $28 or “4 interest-free installments of $7.00 by Afterpay.” A pastoral checkered dress could run you $74.50 … or, alternatively, “4 interest-free payments of $18.62 with Klarna.”
In the past year, more and more merchants have started incorporating “buy now, pay later” options into their websites. They’re often prominently featured on product pages, where shoppers who might otherwise click away are encouraged to instead splurge and split their spending into periodic payments.
While BNPL companies present these loans as a smart budgeting tool, experts say costs can quickly add up, leaving shoppers with mounting debt. And regulators across the world have started to rein in these services, concerned that they can negatively impact the young consumers who tend to use them.
It is a bit disappointing but unsurprising that Apple is rumoured to be working on a competing offering. Once a company has got its feet wet in the murky sea of financial services, why would it be reluctant to go further?
Here is an excellent little single-purpose Mac utility: Unclack automatically mutes your mic when you are typing, and unmutes it when you stop. That’s it; that is all it does.
This was apparently released months ago, but I was only introduced to it recently, and it has been a very good thing to have. I am a loud typist pretty much always — it is a bad habit, I know — and this little utility means that I do not have to remember to mute and unmute my mic during online meetings. However, I have also discovered that I speak more while typing than I realized, thanks to this utility.
This is, for me, a perfect addition to my work-from-home software toolkit, and it is free. Recommended.
There is an argument that by forcing people to reveal themselves publicly, or giving the platforms access to their identities, they will be “held accountable” for what they write and say on the internet. Though the intentions behind this are understandable, I believe that ID verification proposals are shortsighted. They will give more power to tech companies who already don’t do enough to enforce their existing community guidelines to protect vulnerable users, and, crucially, do little to address the underlying issues that render racial harassment and abuse so ubiquitous.
My pet theory is that our fractured relationship with other users of big online platforms has nothing to do with anonymity and everything to do with standards. Pseudonymity and anonymity have been a part of the internet since it was created. Many users of forums and, before them, BBSes were only known by their handles. The biggest thing that has changed in the last fifteen-or-so years is a weakening of moderation efforts and community standards. It used to be that you had to go to specific websites known for users’ ability to test the limits of good taste and free speech, but that approach was mainstreamed. In the earlier days of Twitter, company executives famously referred to it as the “free speech wing of the free speech party”. Alexis Ohanian repeatedly praised Reddit’s laissez-faire approach to speech, and Facebook has wrestled with moderation issues for well over a decade now. Many users may have been repelled by rampant abuse, and those who remained were able to set a standard for new users to grow accustomed to.
Lax moderation in the founding years of these platforms undoubtedly aided their growth, but that rapid ascendancy also compounded their inability to moderate as they grew. Mike Masnick of Techdirt has said that moderation is impossible at scale, but I think that is partly because platforms are not moderating at a small scale. Trying to embed community standards into a platform hosting hundreds of millions of users is a fraught exercise. It has to start when these platforms are nascent.
That is my little theory, but it is sort of irrelevant. There is no way to reset platforms to the size they were at their founding so that we can try this whole thing again. I do not know how we, collectively, find a better way to express ourselves online now that the standard has been set. I do not think banning anonymity is a realistic or effective solution. Platforms’ lowered tolerance for abuse is, I think, helpful, if long overdue. But some change perhaps comes from understanding that we are often communicating with real people. I am not arguing that it will solve racism, but Kesvani is right: requiring verified identification to use web platforms will only give a superficial impression of improving on that front, too.