I would hate to begin any post here in the way that some first-year college student would start an essay: with a definition. But the meaning of “privacy” is so variable that I invite you to see how different entities explain it. NIST has a few different explanations, while the NTIA has a much longer exploration. PC Magazine’s glossary entry is pretty good, too, and closely mimics Steve Jobs’ thesis.
So, with such varied understandings of what privacy represents — at least in an abstract sense — parts of this article by Benedict Evans come across as hollow even as it makes several great arguments. I’m going to start by quoting the second paragraph, because it begins “first”:
First, can we achieve the underlying economic aims of online advertising in a private way? Advertisers don’t necessarily want (or at least need) to know who you are as an individual. As Tim O’Reilly put it, data is sand, not oil – all this personal data actually only has value in the aggregate of millions. Advertisers don’t really want to know who you are – they want to show diaper ads to people who have babies, not to show them to people who don’t, and to have some sense of which ads drove half a million sales and which ads drove a million sales. […]
Already, I find myself wondering if Evans is being honest with himself. The argument that advertisers want to work in bulk more often than at the individual level is an outdated one in an era of ads that can be generated to uncanny specificity. Even conceding that Facebook’s influence on the 2016 election was overstated, the Trump campaign was “running 40,000 to 50,000 variants of its ads” every day. This ain’t the world of high-quality, thoughtful advertising — not any more. This is a numbers game: scaled individualization driven by constant feedback and iteration. If advertisers believe more personal information will make ads more effective, they will pursue that theory as far as they can take it.
Evans acknowledges that consumer demands and Apple’s industry influence have pushed the technology industry to try improving user privacy. On-device tracking systems are seen, he says, as a more private way of targeting advertising without exposing user data to third parties.
This takes me to a second question – what counts as ‘private’, and how can you build ‘private’ systems if we don’t know?
Apple has pursued a very clear theory that analysis and tracking is private if it happens on your device and is not private if [it] leaves your device or happens in the cloud. Hence, it’s built a complex system of tracking and analysis on your iPhone, but is adamant that this is private because the data stays on the device. People have seemed to accept this (so far), but acting on the same theory Apple also created a CSAM scanning system that it thought was entirely private – ‘it only happens [on] your device!’ – that created a huge privacy backlash, because a bunch of other people think that if your phone is scanning your photos, that isn’t ‘private’ at all. […]
I will get back to the first part of this quoted section at the end of this response because I think it is the most important thing in Evans’ entire piece.
For clarity, the backlash over CSAM scanning seems less about privacy than it does about device ownership and agency. This is, to some extent, perhaps a distinction without a difference. Many of the definitions I cited in the first paragraph describe privacy as a function of control. But I think there is a subtle point of clarity here: Apple’s solution probably is more private than checking those photos server-side, but it means that a user’s device is more than a mere client connected to cloud services — it is acting as a local agent of those services.
Continued from above:
[…] So is ‘on device’ private or not? […]
This feels like a trick question or a false premise, to which the only acceptable answer is “it depends”. In general, probably, but there are reasonable concerns about Google’s on-device FLoC initiative.
On / off device is one test, but another and much broader one is the first party / third party test: that it’s OK for a website to track what you do on that website but not OK for adtech companies to track you across many different websites. This is the core of the cookie question, and sounds sensible, and indeed one might think that we do have a pretty good consensus on ‘third party cookies’ – after all, Google and Apple are getting rid of them. However, I’m puzzled by some of the implications. “1p good / 3p bad” means that it’s OK for the New York Times to know that you read ten New York Times travel pieces and show you a travel ad, but not OK for the New Yorker to know that and show you the same ad. […]
This is where this piece starts to go off the rails. I have read the last sentence of this quoted paragraph several times and I cannot figure out if this is a legitimate question Evans is asking.
If we engage with it on its premise, of course it is not okay for the New Yorker to show an ad based on my Times browsing history. It is none of their business what I read elsewhere. It would be like if I went to a clothing store and then, later at a restaurant, a waiter told me that I should have bought the other shirt I tried on because they think it looked better. That would be creepy! And if any website could show me ads based on what I viewed somewhere else, that means that my web browsing history is public knowledge. It violates both the first- and third-party definition and the on- and off-device definition.
But the premise is wrong — or, at least, incomplete. The New Yorker contains empty frames that can be filled by whatever a series of unknown adtech companies decide is the best fit for me based on the slice of my browsing history they collect, like little spies with snippets of information. If it were a direct partnership to share advertising slots, at least we could imply that a reader of both may see them as similarly trustworthy organizations, given that they read both. But this is not a decision between the New Yorker and the Times. There may be a dozen other companies involved in selecting the ad, most of which a typical user has never heard of. How much do you, reader, trust Adara, Dataxu, GumGum, MadHive, Operative, SRAX, Strossle, TelMar, or Vertoz? I do not know if any of them have ever been involved in ad spots in the New Yorker or the Times, but they are all real companies that are really involved in placing ads across the web — and they are only a few names in a sea of thousands.
At this point one answer is to cut across all these questions and say that what really matters is whether you disclose whatever you’re doing and get consent. Steve Jobs liked this argument. But in practice, as we’ve discovered, ‘get consent’ means endless cookie pop-ups full of endless incomprehensible questions that no normal consumer should be expected to understand, and that just train people to click ‘stop bothering me’. Meanwhile, Apple’s on-device tracking doesn’t ask for permission, and opts you in by default, because, of course, Apple thinks that if it’s on the device it’s private. Perhaps ‘consent’ is not a complete solution after all.
Evans references Jobs’ consent-based explanation of privacy that I cited at the top of this piece — a definition which, unsurprisingly, Apple continues to favour. But an over-dependency on a consent model offloads the responsibility for privacy onto individual users. At best, this allows the technology and advertising industries to distance themselves from their key role in protecting user privacy; at worst, it allows them to exploit whatever they are permitted to gather by whatever technical or legal means possible.
The Jobs definition of privacy and consent is right, but it becomes even more right if you expand its scope beyond the individual. As important as it is for users to confirm who is collecting their data and for what purpose, it is more important that there are limits on the use and distribution of collected information. This sea of data is simply too much to keep track of. Had you heard of any of the ad tech companies mentioned above? What about data brokers that trade and “enrich” personal information? Even if users affirm that they are okay with an app or a website tracking them, they may not be okay with how a service that app relies on ends up reselling or sharing user data.
Good legislation can restrict these industries. I am sure Canada’s is imperfect, but there has to be a reason why the data broker industry here is, thankfully, almost nonexistent compared to the industry in the United States.
But the bigger issue with consent is that it’s a walled garden, which takes me to a third question – competition. Most of the privacy proposals on the table are in absolute, direct conflict with most of the competition proposals on the table. If you can only analyse behaviour within one site but not across many sites, or make it much harder to do that, companies that have a big site where people spend lots of time have better targeting information and make more money from advertising. If you can only track behaviour across lots of different sites if you do it ‘privately’ on the device or in the browser, then the companies that control the device or the browser have much more control over that advertising (which is why the UK CMA is investigating FLoC).
With GDPR, we have seen the product of similarly well-intentioned privacy legislation that restricts the abilities of smaller companies while further entrenching the established positions of giants. I think regulators were well aware of that consequence, and it is a valid compromise position between where the law existed several years ago and where it ought to be going.
As regulations evolve, these competition problems deserve greater focus. It is no good if the biggest companies on the planet, or those that are higher up the technology stack — like internet service providers — are able to use their position to abuse user privacy. But it would be a mistake to loosen policies on privacy and data collection just to make sure smaller companies have a chance of competing. Regulations must go in the other direction.
And, as an aside, if you can only target on context, not the user, then Hodinkee is fine but the Guardian’s next landmark piece on Kabul has no ad revenue. Is that what we want? What else might happen?
This is not a new problem for newspapers. Advertisers have always been worried that their ads will be placed alongside “hard news” stories. You can find endless listicles of examples — here’s one from Bored Panda. In order to avoid embarrassing associations, it is commonplace for print advertisers to ask for exceptions: a car company, for example, may request their ad not be placed alongside stories about collisions.
This has been replicated online at both ends of the ad buying market. The New York Times has special tags to limit or remove ads on some stories, while advertisers can construct lists of words and domains they want to avoid placement alongside. But what is new about online news compared to its print counterpart is that someone will go from the Guardian story about Kabul to Hodinkee without “buying” the rest of the Guardian, or even looking at it. This is a media-wide problem that has little to do with privacy-sensitive ad technologies. If serving individualized ads tailored based on a user’s browsing history were so incredible, you would imagine the news business would be doing far better than it is.
All of this leads to the final paragraph in Evans’ piece, which I think raises worthwhile questions:
These are all unresolved questions, and the more questions you ask the less clear things can become. I’ve barely touched on a whole other line of enquiry – of where all the world’s $600bn of annual ad spending would be reallocated when all of this has happened (no, not to newspapers, sadly). Apple clearly thinks that scanning for CSAM on the device is more private than the cloud, but a lot of other people think the opposite. You can see the same confusion in terms like ‘Facebook sells your data’ (which, of course, it doesn’t) or ‘surveillance capitalism’ – these are really just attempts to avoid the discussion by reframing it, and moving it to a place where we do know what we think, rather than engaging with the challenge and trying to work out an answer. I don’t have an answer either, of course, but that’s rather my point – I don’t think we even agree on the questions.
Regardless of whether we disagree on the questions, or if you — as I do — think that Evans is misstating concerns without fully engaging, I think he is entirely right here. Questions about user privacy on the web are often flawed because of the expansive and technical nature of the discussion. We should start with simpler questions about what we hope to achieve, and with fundamental statements about what “privacy” really looks like. There should be at least some ground-level agreement about what information is considered personal and confidential. At the very least, I would argue that this applies to data points like non-public email addresses, personal phone numbers, dates of birth, government identification numbers, and advertiser identifiers that are a proxy for an individual or a device.
But judging by the popularity of data enrichment companies, it does not appear that there is broad agreement that anything is private any more — certainly not among those in advertising technologies. The public is disillusioned and overwhelmed, and it is irresponsible to leave it to individuals to unpack this industry. There is no such thing as informed consent in marketing technologies when there is no corresponding legislation requiring the protection of collected data. These kinds of fundamental concerns must be addressed before moving on to more abstract questions about how the industry will cope.
Apple today announced it has acquired Primephonic, the renowned classical music streaming service that offers an outstanding listening experience with search and browse functionality optimized for classical, premium-quality audio, handpicked expert recommendations, and extensive contextual details on repertoire and recordings.
With the addition of Primephonic, Apple Music subscribers will get a significantly improved classical music experience beginning with Primephonic playlists and exclusive audio content. In the coming months, Apple Music Classical fans will get a dedicated experience with the best features of Primephonic, including better browsing and search capabilities by composer and by repertoire, detailed displays of classical music metadata, plus new features and benefits.
Existing Primephonic users have been given only a week’s notice of the service’s impending shutdown. But this sure is an interesting development. All major streaming services suck at classical music. I would like to see one of them crack this nut and, if it happens to be the service I subscribe to, even better.
I can’t imagine why Apple would build a standalone classical music app unless they were planning to charge extra for it (on top of usual Apple Music subscription). Primephonic on its own now is $8 or $15.
Perhaps, but Apple added lossless streaming free of charge even though Tidal charges an extra $10 per month. I think it is more likely that classical music needs a completely different presentation. Works and movements and composers are not displayed very well in any of Apple’s current music apps.
Hopscotch co-founder and CEO Samantha John published a Twitter thread yesterday documenting her experiences with the rejection of a minor app update. I recommend reading the whole thing, but I am quoting the conclusion:
Hopscotch is a small company, I’m the CEO, AND I write code. And that’s how a lot of the best apps work! My time is limited and precious to me. The way that Apple wasted my energy, gaslighted me, and sucked my time away made me furious.
There’s a lot of talk about the 30% tax that Apple takes from every app on the App Store. The time tax on their developers to deal with this unfriendly behemoth of a system is just as bad if not worse.
Hopscotch is not some scrappy single-developer app with a dozen users. It is a hugely popular learning tool targeted at children that has been selected by Apple as an “Editors’ Choice”. That is not to say that this runaround experience would be appropriate for any developer, but it is defeating to see this is how Apple continues to treat longtime high-profile developers. John is not the only one; scroll through the quote tweets and you will see plenty of people sharing similar stories.
Also, sorry, one more rant. With the exception of maybe Uber and Airbnb, App Review isn’t kidding when they say they treat all developers the same, as every good app in the App Store, no matter how beloved, has at least five horror stories just like this […]
The semi-open qualities of iOS are a constant strain on developers’ time and morale. It is not an insular console-type system, nor is it a free-for-all — developer policies for iOS sit in an awkward middle ground that demands a great deal of attention and rewards it poorly.
Stuff like this is why yesterday’s too-proud announcement of a proposed class action settlement read like a slap in the face to so many developers. It was apparent to me that the settlement was basically inconsequential for Apple, but I missed the condescending tone that the release struck. Its headline spells out who did not win:
Apple, US developers agree to App Store updates that will support businesses and maintain a great experience for users
Still not a great experience for developers, though.
Following a productive dialogue, Apple and the plaintiffs in the Cameron et al v. Apple Inc. developer suit reached an agreement that identifies seven key priorities shared by Apple and small developers, which has been submitted to the judge presiding over the case for her approval.
To give developers even more flexibility to reach their customers, Apple is also clarifying that developers can use communications, such as email, to share information about payment methods outside of their iOS app. As always, developers will not pay Apple a commission on any purchases taking place outside of their app or the App Store. Users must consent to the communication and have the right to opt out.
I wonder if this means App Review will be less strict if developers’ websites contain some non-in-app-purchase payment methods that may, somehow, be accessible inside their apps. I doubt it; the proposed settlement is narrow and precludes, for instance, the use of push notifications to mention external purchasing options. But developers are now permitted to contact users by the email they provided within the app and prompt them to subscribe elsewhere.
Apple also says that it will maintain the Small Business Program for at least three years, publish an annual report on App Review, and allow for more pricing tiers. It sounds like Apple’s concessions are pretty minor, especially since developers are still not allowed to mention alternative purchase avenues within apps.
Update: Hagens Berman, the law firm representing the class of iOS developers affected by this suit, clarifies that even the weak commitments Apple made only apply to U.S. developers. This settlement is a walkover for Apple and a sweet payday for the lawyers involved, but gives developers next to nothing.
Google’s 16 years of messenger wheel-spinning has allowed products from more focused companies to pass it by. Embarrassingly, nearly all of these products are much younger than Google’s messaging efforts. Consider competitors like WhatsApp (12 years old), Facebook Messenger (nine years old), iMessage (nine years old), and Slack (eight years old) — Google Talk even had video chat four years before Zoom was a thing.
Because no single company has ever failed at something this badly, for this long, with this many different products (and because it has barely been a month since the rollout of Google Chat), the time has come to outline the history of Google messaging. Prepare yourselves, dear readers, for a non-stop rollercoaster of new product launches, neglected established products, unexpected shut-downs, and legions of confused, frustrated, and exiled users.
Perhaps the most striking thing about this lengthy history lesson is that Google — despite being synonymous with web services for fifteen years — has never had a single clear messaging strategy. Around it, as Amadeo recalls, every other internet company seemed to be doing okay with its own version of instant messaging. Even Apple, a company that has a long and embarrassing history of failed online services, figured out a decent messaging product ten years ago.
Meanwhile, Google has launched three-and-a-half chat apps this year. What is going on in Mountain View?
Millions of people are watching high-quality, pirated online versions of Hollywood’s top movies sooner than ever after their releases, undermining potential ticket sales and subscriber growth as the industry embraces streaming.
Copies of several of the year’s most popular films, from “The Suicide Squad” and “Godzilla vs. Kong” to “Jungle Cruise” and “Black Widow,” shot up almost immediately after their premieres to the top of the most-downloaded charts on piracy websites such as the Pirate Bay and LimeTorrents, according to piracy-tracking organizations.
“Pirates behave like consumers do,” said Carnegie Mellon University professor and piracy expert Michael D. Smith. “If you make it sufficiently hard for them to get something free, they’ll pay for it.”
The reverse seems as likely to be true: people will seek alternative channels when it is unnecessarily difficult for them to spend money on a new release. Sure, not all of those who downloaded ripped copies of these movies would have paid for them if the ripped versions were somehow unavailable. But I think a lot of them would be happy to watch these movies on the streaming service of their choice. Because studios are so desperate to re-create the cable television experience through exclusivity demands and siloed libraries, piracy is appealing again.
The marketplace for exploits and software of an ethically questionable nature is a controversial one, but something even I can concede has value. If third-party vendors are creating targeted surveillance methods, it means that the vast majority of us can continue to have secure and private systems without mandated “back doors”. It seems like an agreeable compromise so long as those vendors restrict their sales to governments and organizations with good human rights records.
NSO Group, creators of Pegasus spyware, seems to agree. Daniel Estrin, reporting last month at NPR:
NSO says it has 60 customers in 40 countries, all of them intelligence agencies, law enforcement bodies and militaries. It says in recent years, before the media reports, it blocked its software from five governmental agencies, including two in the past year, after finding evidence of misuse. The Washington Post reported the clients suspended include Saudi Arabia, Dubai in the United Arab Emirates and some public agencies in Mexico.
Pegasus can have legitimate surveillance use, but it has great potential for abuse. NSO Group would like us to believe that it cares deeply about selling only to clients that will use the software to surveil possible terrorists and valuable criminal targets. So, how is that going?
We identified nine Bahraini activists whose iPhones were successfully hacked with NSO Group’s Pegasus spyware between June 2020 and February 2021. Some of the activists were hacked using two zero-click iMessage exploits: the 2020 KISMET exploit and a 2021 exploit that we call FORCEDENTRY.
At least four of the activists were hacked by LULU, a Pegasus operator that we attribute with high confidence to the government of Bahrain, a well-known abuser of spyware. One of the activists was hacked in 2020 several hours after they revealed during an interview that their phone was hacked with Pegasus in 2019.
As Citizen Lab catalogues, Bahrain’s record of human rights failures and internet censorship should have indicated to NSO Group that misuse of its software was all but guaranteed.
NSO Group is just one company offering software with dubious ethics. Remember Clearview? When Buzzfeed News reported last year that the company was expanding internationally, Hoan Ton-That, Clearview’s CEO, brushed aside human rights concerns:
“Clearview is focused on doing business in USA and Canada,” Ton-That said. “Many countries from around the world have expressed interest in Clearview.”
Later last year, Clearview went a step further and said it would terminate private contracts, and its Code of Conduct promises that it only works with law enforcement entities and that searches must be “authorized by a supervisor”. You can probably see where this is going.
Like a number of American law enforcement agencies, some international agencies told BuzzFeed News that they couldn’t discuss their use of Clearview. For instance, Brazil’s Public Ministry of Pernambuco, which is listed as having run more than 100 searches, said that it “does not provide information on matters of institutional security.”
But data reviewed by BuzzFeed News shows that individuals at nine Brazilian law enforcement agencies, including the country’s federal police, are listed as having used Clearview, cumulatively running more than 1,250 searches as of February 2020. All declined to comment or did not respond to requests for comment.
Documents reviewed by BuzzFeed News also show that Clearview had a fledgling presence in Middle Eastern countries known for repressive governments and human rights concerns. In Saudi Arabia, individuals at the Artificial Intelligence Center of Advanced Studies (also known as Thakaa) ran at least 10 searches with Clearview. In the United Arab Emirates, people associated with Mubadala Investment Company, a sovereign wealth fund in the capital of Abu Dhabi, ran more than 100 searches, according to internal data.
As noted, this data only covers up until February last year; perhaps the policies governing acceptable use and clientele were only implemented afterward. But it is alarming to think that a company which bills itself as the world’s best facial recognition provider ever felt comfortable enabling searches by regimes with poor human rights records, private organizations, and individuals in non-supervisory roles. It does jibe with Clearview’s apparent origin story, and that should be a giant warning flag.
These companies can make whatever ethical promises they want, but money talks louder. Unsurprisingly, when faced with a choice about whether to allow access to their software judiciously, they choose to gamble that nobody will find out.
For example, the documents explain how Google employs revenue-sharing and licensing agreements with Android partners (OEMs) to maintain Google Play as the dominant app store. One filing describes “Anti-Fragmentation Agreements” that prevent partners from modifying the Android operating system to offer app downloads in a way that competes with Google Play.
“Google’s documents show that it pushes OEMs into making Google Play the exclusive app store on the OEMs’ devices through a series of coercive carrots and sticks, including by offering significant financial incentives to those that do so, and withholding those benefits from those that do not,” the redlined complaint says.
These agreements allegedly included the Premiere Device Program, launched in 2019, to give OEMs financial incentives like 4 per cent, or more, of Google Search revenues and 3-6 per cent of Google Play spending on their devices in return for ensuring Google exclusivity and the lack of apps with APK install rights.
There seems to be some overlap between what is claimed in Epic’s filing and what we know from a different lawsuit. Last month, a group of thirty-seven attorneys general sued Google for abusing its power over Android OEMs to cement the Play Store’s dominance in Android app distribution. At the time, Google’s Wilson White responded:
We also built an app store, Google Play, that helps people download apps on their devices. If you don’t find the app you’re looking for in Google Play, you can choose to download the app from a rival app store or directly from a developer’s website. We don’t impose the same restrictions as other mobile operating systems do.
So it’s strange that a group of state attorneys general chose to file a lawsuit attacking a system that provides more openness and choice than others. This complaint mimics a similarly meritless lawsuit filed by the large app developer Epic Games, which has benefitted from Android’s openness by distributing its Fortnite app outside of Google Play.
White claims that “most Android devices ship with two or more app stores preloaded”, and cites Samsung’s Galaxy Store as an example. But Epic’s lawyers claim, beginning on page 45, that Google pressured Samsung into allowing only Google Play and the Samsung Galaxy Store on its phones, thereby scuttling a distribution deal with Epic for its own app store. This was a shift away from what the suit describes as Google’s intent, since 2011, to prevent Samsung from running its own app marketplace altogether.
The results of Google’s deals with Samsung allegedly became part of “Project Agave”, thus forming the basis for the suit filed by the attorneys general this year.
Remember when Google used to put some effort into pretending that Android was open? Those were the days.
One of the curious side effects of a sprawling lawsuit like Epic Games v. Apple is that documents surface which can clarify past reporting.
Take, for example, a 2010 email from Steve Jobs to Apple’s executive team that was first disclosed during Apple’s lawsuit against Samsung. It described the agenda for Apple’s 2011 “Top 100” meeting. In that version, there was one bullet point that was redacted, but contained second-level items like “cost goal” and “show model”. We now know that line read “iPhone Nano plan”.
The sheer number of exhibits released can also create newsworthy items of its own. Sean Hollister at the Verge assembled a large list of interesting tidbits. One of those items was a February 2020 iMessage discussion between Eric Friedman and Herve Sibert. Friedman is responsible for Apple’s Fraud Engineering Algorithms and Risk team, while Sibert manages Security and Fraud. From that discussion, Hollister snipped this back-and-forth:
Friedman The spotlight at Facebook etc is all on trust and safety (fake accounts, etc). In privacy, they suck.
Friedman Our priorities are the inverse.
Friedman Which is why we are the greatest platform for distributing child porn, etc.
Sibert Really? I mean, is there a lot of this in our ecosystem? I thought there were even more opportunities for bad actors on other file sharing systems.
The snippet Hollister posted ends there, and it formed the basis for articles by John Koetsier at Forbes and Ben Lovejoy at 9to5Mac. Both writers seized on the third text Friedman sent and quoted it in their headlines.
But this is clearly only a segment of a conversation — a single-page glimpse into a much longer iMessage discussion. Page 17 of 31, as it turns out, in this exhibit document. Given how incendiary Friedman’s statement was, even in the context of a casual chat, I think it is worth being precise about its context.
In preceding messages, Friedman writes about a presentation the two managers have been working on to be shown to Eddy Cue later that morning. Friedman shows a slide describing features within iOS that have revealed fraud and safety issues. The two relevant concerns are reports of child grooming in social features — like iMessage and in-app chat — and in App Store reviews, of all places. Subsequent messages indicate that this is partly what Friedman was referring to.
Here’s the transcript beginning immediately after Friedman responded “Yes” in the above quote:
Friedman But — and here’s the key — we have chosen to not know in enough places where we really cannot say.
Friedman The NYTimes published a bar graph showing how companies are doing in this area. We are on it, but I think it’s an undererport. [sic]
Friedman Also, we KNOW that developers on our platform are running social media integrations that are inherently unsafe. We can do things in our ecosystem to help with that. For example “ask to chat” is a feature we could require developers to adopt and use for U13 accounts.
Sibert There are also lots of rapidly changing trends in public focus
Friedman Let the parents make a decision
Friedman We could introduce a fine distinction between malware and software that is behaviorally fraught, guiding parents to have a conversation with kids about their choices.
Friedman discusses how this could be implemented through families set up in iCloud, which sounds similar to one of Apple’s child safety initiatives. But this discussion is not limited to Apple’s first-party features; it appears to cover a range of vectors through which children’s safety could be at risk.
I raise this subtle distinction because the simplified, headline-friendly version gave rise to a bizarre line of questioning in Lovejoy’s article:
Eric Friedman stated, in so many words, that “we are the greatest platform for distributing child porn.” The revelation does, however, raise the question: How could Apple have known this if it wasn’t scanning iCloud accounts… ?
One possibility not raised in Lovejoy’s article is that Friedman was typing imprecisely in this iMessage thread, and that his claim was a reasonable guess from the head of fraud and risk at Apple — one of the world’s biggest providers of cloud storage and the maker of some of the most popular third-party developer platforms. Even though Apple has not been checking iCloud Photos or iCloud Drive against a CSAM hash list, it is reasonable to speculate that a billion active devices will — sad to say — involve a lot of CSAM in those cloud services.
But Friedman is right: Apple has almost certainly been underreporting because of the current design of its systems. According to the National Center for Missing and Exploited Children, many companies made millions of reports of CSAM uploaded by users, but Apple does not even appear on the chart the Times created. Given the types of services Apple offers, this is certainly a lack of detection rather than a lack of material.
But Apple does make some reports to NCMEC. So, if it is not scanning its cloud storage services — yet — where are those reports coming from?
But in Apple’s case, its staff is clearly being more helpful, first by stopping emails containing abuse material from being sent. A staff member then looks at the content of the files and analyzes the emails. That’s according to a search warrant in which the investigating officer published an Apple employee’s comments on how they first detected “several images of suspected child pornography” being uploaded by an iCloud user and then looked at their emails. (As no charges have been filed against that user, Forbes has chosen to publish neither his name nor the warrant.)
Apple also confirmed to Lovejoy this week that it has automatically checked hashes of email attachments against known CSAM since 2019.
I was able to find the warrant referenced here by Brewster — but, for the same reasons, I will not link to it — and I was struck by the similarities between its existing CSAM protocol and the description of its forthcoming child safety projects. In both cases, when there is a hash match, someone at Apple verifies the legitimacy of the match before submitting a report.
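The mechanics of that protocol are straightforward to sketch. Below is a minimal, hypothetical Python illustration of the “hash match, then human review” flow described in the warrant; every name in it is invented, and real systems use proprietary perceptual hashing (so visually similar images also match) rather than the exact SHA-256 comparison shown here.

```python
import hashlib

# Hypothetical list of hashes of known abuse material. In practice this
# would come from an organization like NCMEC, not be hard-coded.
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"known-bad-file").hexdigest(),  # placeholder entry
}

def flag_for_review(attachments: list[bytes]) -> list[int]:
    """Return indices of attachments whose hash matches the known list.

    A match is not an automatic report: as described in the warrant,
    a human reviewer verifies each match before anything is escalated.
    """
    flagged = []
    for i, blob in enumerate(attachments):
        if hashlib.sha256(blob).hexdigest() in KNOWN_BAD_HASHES:
            flagged.append(i)
    return flagged
```

The key property, in both this toy version and the real thing, is that the service only ever inspects content that matched a pre-existing hash; everything else passes through untouched.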
I hope Apple does not offload this emotionally damaging work onto some minimum wage contractor.
Apple’s announcement two weeks ago set up a high-stakes reorientation of the balance between the privacy of its users and the risks created by its hardware, software, and services. Those risks were also identified by Friedman and Sibert in the iMessage chat above, along with some loose ideas for countermeasures. Whether Apple’s recently proposed projects are a good compromise is still the topic of rigorous debate. But it seems the exhibits exposed in this lawsuit, combined with great reporting from Brewster, create a fuller picture of the nascent days of these child safety efforts, and of how Apple’s current processes might scale.
Today, the U.S. Food and Drug Administration approved the first COVID-19 vaccine. The vaccine has been known as the Pfizer-BioNTech COVID-19 Vaccine, and will now be marketed as Comirnaty (koe-mir’-na-tee), for the prevention of COVID-19 disease in individuals 16 years of age and older. The vaccine also continues to be available under emergency use authorization (EUA), including for individuals 12 through 15 years of age and for the administration of a third dose in certain immunocompromised individuals.
Fantastic news, and something that may sway a handful of people worried about receiving this vaccine and will pave the way for mandates. But I doubt that this will move the needle for the truly anti-vaccine crowd, and I am bemused that some media outlets are surprised that the goalposts have been moved.
One of the theories that has gained favour among some anti-vaccine people is that Bill Gates was involved in COVID vaccines in order to upload every individual’s consciousness to the cloud so that bankers can control your transactions. This is not an exaggeration or a misrepresentation. Though not all those who are anti-vaccine believe everything this extreme, it is a mainstream view among a non-mainstream group of people. Anyone who thinks that FDA approval will change minds set in these kinds of beliefs is only kidding themselves.
Anyone who can get vaccinated must do so, and as quickly as possible. If you can, do your best to convince the unconvinced. But there are some who simply will not accept that COVID-19 is real, and that widespread vaccination is good public health policy. I cannot imagine how difficult it would be to reorient someone’s entire perception of the world back to reality.
Earlier this week, the FTC took a second crack at accusing Facebook of illegal anticompetitive behaviour:
“Facebook lacked the business acumen and technical talent to survive the transition to mobile. After failing to compete with new innovators, Facebook illegally bought or buried them when their popularity became an existential threat,” said Holly Vedova, FTC Bureau of Competition Acting Director. “This conduct is no less anticompetitive than if Facebook had bribed emerging app competitors not to compete. The antitrust laws were enacted to prevent precisely this type of illegal activity by monopolists. Facebook’s actions have suppressed innovation and product quality improvements. And they have degraded the social network experience, subjecting users to lower levels of privacy and data protections and more intrusive ads. The FTC’s action today seeks to put an end to this illegal activity and restore competition for the benefit of Americans and honest businesses alike.”
The FTC filed the amended complaint today in the U.S. District Court for the District of Columbia, following the court’s June 28 ruling on the FTC’s initial complaint. The amended complaint includes additional data and evidence to support the FTC’s contention that Facebook is a monopolist that abused its excessive market power to eliminate threats to its dominance.
I am looking forward to seeing what comes out of this case; I hope for an outcome that reduces Facebook’s overwhelming market power. But, as a non-lawyer, I find it illuminating to look at the counterarguments.
That said, there is more evidence in this complaint that Facebook deliberately sought to undermine competition at a variety of different points. And if the FTC can convince the court that (1) the market definition it has is correct, and (2) that Facebook has monopolistic power in that market, perhaps it can move the case forward. But, again, the complaint focuses heavily on the Instagram and WhatsApp acquisitions, both of which happened many years ago — at a time when Facebook was nowhere near as big or powerful as it is today. And, importantly, there aren’t really examples of them doing the same thing recently. Indeed, we keep seeing new entrants showing up in the social media market — including Snap, TikTok, and Clubhouse. Those all undermine the argument that Facebook can stop competitors.
Of those three would-be competitors, Snap is the most successful. I would be surprised if Clubhouse proves enduring rather than fleeting.
Meanwhile, the FTC defines the product category that Facebook’s brands occupy in a way that excludes TikTok, and I think there is a reasonable argument for that. TikTok is not really designed for following close sets of friends and family members in the same way that Facebook and WhatsApp are, but which Instagram is increasingly not.
The FTC seems to be arguing that acquiring Instagram and WhatsApp reduced competition in the personal social networking market it has defined, and that those acquisitions cemented Facebook’s position. But Masnick points out a couple of internal inconsistencies in the FTC’s reasoning.
I don’t have a law degree, so I feel unqualified to assess the legal merits of the FTC’s market definition.
But what I can capably assess is the FTC’s arguments around Facebook’s control of advertising prices on its platform. Multiple times throughout the complaint, the FTC declares that Facebook’s monopoly control over the market for personal social networking resulted in unnaturally high “advertising prices.” This is simply incorrect, and it reveals a lack of understanding of the digital advertising ecosystem and how advertising inventory is priced.
This seems like a pretty significant error for making the case that Facebook’s market position is an economic concern. But if the FTC’s complaint is mostly about how Facebook has used its size and power to disadvantage competitors, I wonder if it matters. Regardless, it is worrying that the FTC seems to not fully understand the arguments it is making.
Musk’s “robot” was just a person dancing around in a skintight full-body suit, but he promises that his electric car company really is working on something. And he really wants you to believe him this time.
“The Tesla bot will be real,” Musk said emphatically, trying to usher his fake robot off-stage on Thursday.
Almost as funny: the credulous media coverage that dutifully repeated all of Musk’s claims.
My guess is that Tesla realized earlier this week that it did not have enough to talk about at its “A.I. Day” presentation. It hastily assembled some drawings, hired a company to render a few surfaces, and made an intern put on a full body suit and dance for a few seconds. There is no reason to believe this is a real project.
Of the many questions raised by Facebook’s “Widely Viewed Content” report, released Wednesday, one was about timing: why did Facebook choose to release a report covering April through June? Why start now rather than, say, in January?
Facebook had prepared a similar report for the first three months of the year, but executives never shared it with the public because of concerns that it would look bad for the company, according to internal emails sent by executives and shared with The New York Times.
In that report, a copy of which was provided to The Times, the most-viewed link was a news article with a headline suggesting that the coronavirus vaccine was at fault for the death of a Florida doctor. The report also showed that a Facebook page for The Epoch Times, an anti-China newspaper that spreads right-wing conspiracy theories, was the 19th-most-popular page on the platform for the first three months of 2021.
Given the widespread mockery of what ended up being released, it makes me wonder if Facebook will scrap the very concept of this report rather than commit to a quarterly release schedule. Its goal seems to be creating a counterpoint to reporting that Facebook enables the spread of conspiracy theories and disinformation, but nobody seems to be convinced. Why would Facebook prepare an update to this in three months’ time — especially if, like the draft from the first quarter of this year, it will spur another round of bad press?
Let’s say you are an evil genius. You hatch a brilliant plan that is probably not legal and might backfire in the future. Do you immediately transcribe it in plain language in an email?
I am cautious; I would not. But apparently one of the secrets of being an executive at a tech company is brazenly documenting things that may eventually lead to criminal allegations or, at least, be unsavoury and damaging to your company’s image.
The FTC filed the amended complaint today in the U.S. District Court for the District of Columbia, following the court’s June 28 ruling on the FTC’s initial complaint. The amended complaint includes additional data and evidence to support the FTC’s contention that Facebook is a monopolist that abused its excessive market power to eliminate threats to its dominance.
Maintaining its monopoly through acquisition was a natural choice for Facebook. The company has long sought to achieve and maintain dominance through acquisitions rather than competition, reflecting a deeply rooted view within Facebook that, as Mr. Zuckerberg put it in a June 2008 internal email, “it is better to buy than compete.” Facebook’s acquisitions have often focused on arresting the growth of potential rivals: for example, following Facebook’s failed 2008 attempt to acquire Twitter, Mr. Zuckerberg wrote: “I was looking forward to the extra time that would have given us to get our product in order.” […]
Meanwhile, Sean Hollister at the Verge read through many of the documents revealed in Epic v. Apple and pulled out a bunch of interesting tidbits.1 Here’s one:
In the agenda notes for a 2011 corporate strategy presentation, to be delivered by Jobs himself, he calls 2011 the “Year of the Cloud,” and makes one of its three tenets to “tie all of our products together, so we further lock customers into our ecosystem.”
That agenda was sent in an email to Apple’s executive team, judging by the abbreviation in the “To:” field. It was previously released during Apple’s lawsuit against Samsung, but the line reading “iPhone Nano plan” was redacted at the time.
For Back to School, Eddy Cue suggests that Apple should bundle iTunes gift cards with new devices instead of putting them on sale — specifically to lock customers into Apple’s ecosystem and dissuade them from switching phones. “Who’s going to buy a Samsung phone if they have apps, movies, etc already purchased? They now need to spend hundreds more to get to where they are today,” says Cue.
Once again I am begging executives to simply use the telephone instead of putting their dastardly schemes in an email.
It is stunning to read the blunt discussions of tech executives in these emails. I picked just one quote from the FTC’s filing, and it is clear that Facebook’s management — especially Zuckerberg — either assumed that none of this could ever be found illegal, or that they did not care if it were. It is shocking.
In Apple’s case, there is Jobs’ typically straightforward explanation of “further lock[ing] customers into” the company’s products and services. Lock-in tactics are not inherently illegal. But Apple’s private discussions paint an unflattering picture compared to the way it presents this information publicly, often using words like “seamless”.
Similarly brazen emails were discovered during the anti-poaching lawsuit several years ago. It surprised me even then to see direct demands from a senior-level HR executive at Apple to “add Google to your ‘hands-off’ list” and “be sure to honor our side of the deal”. Then there’s Eric Schmidt’s email to Jobs stating that he would “prefer [Google executive Omid Kordestani] do it verbally since I don’t want to create a paper trail” — which, I repeat, he wrote in an email.
From a justice perspective, I am glad so many of these schemes appear in emails with so little room for interpretation. But I will always be surprised to see the architecture of potential crimes laid bare in plain text.
Perhaps these executives believe they are effectively immune from prosecution. In the U.S., companies are often fined and otherwise punished for criminal behaviour, but it is rare for individuals to be held to account. If it were more common to make executives responsible for the criminal actions of the companies they control, would there be fewer corporate crimes, or would there just be fewer emails describing them in plain English? I have to wonder.
I could not help but notice one thing Hollister got wrong:
Last July, Apple CEO Tim Cook testified under oath before Congress that “we treat every developer the same.” But as previously unearthed documents revealed, Apple was more than willing to give Netflix all sorts of special treatment to keep its cut of Netflix’s subscription fees.
Here’s something I haven’t seen previously reported, though: it appears Apple had already given Netflix a sweetheart deal where it took just 15 percent of subscriptions Netflix sold in its app. […]
This deal was previously reported in two different capacities: once in 2015 by Recode’s Peter Kafka, and again in 2019 by Edmund Lee in the New York Times. In Kafka’s case, it only covered subscriptions started through the Apple TV, while Lee reported that Netflix’s 15% cut was for subscriptions sold through apps on all Apple devices. ↩︎
Today we are announcing changes to our commercial pricing for Microsoft 365 — the first substantive pricing update since we launched Office 365 a decade ago. This updated pricing reflects the increased value we have delivered to our customers over the past 10 years. Let’s take a look at some of the innovations we’ve delivered over the past decade in three key areas — communications and collaboration, security and compliance, and AI and automation — as well as the addition of audio conferencing capabilities that we’re announcing today.
Another way to read this is that Microsoft used its unique market position to subsidize the cost of its dramatically expanded feature set, and is now coming to collect.
“The single most important decision in evaluating a business is pricing power. If you have the power to raise prices without losing business to a competitor, you’ve got a very good business.”
Sure, but it could also mean that the business has no true competitors. What other fully-integrated enterprise software suite are disgruntled Microsoft 365 customers going to switch to?
Google, its closest competitor, has just 10% of the market compared to Microsoft’s 87%, according to the most recent figures I can find, which are about a year old. Google may have increased its active user count to 2.6 billion by October last year, but the vast majority of those are not paying. If Microsoft is increasing its pricing, it clearly does not see Google as a major threat.
Microsoft will raise Office 365 business subscription prices in 2022 // putting this in perspective relative to “boxed” software. A consumer might have paid $150 for Office once every 4 yrs. A biz paying $150/yr now pays $450/yr (and can’t stop). #saas
That is great news for Microsoft’s bottom line. It is less good for businesses that may not need these new capabilities but are forced to keep paying. In the old boxed software days, a business could choose whether the new features justified the cost of an upgrade. Now, in the days of software-as-a-service, Microsoft will simply charge more because it can, and customers are going to accept it because they have no other choice.
What do people see most on Facebook? Recipes, cute cat GIFs or highly charged political partisanship?
That question has been hard to answer, because the social network keeps a tight lid on so much of its data.
Now, Facebook is for the first time making public some information on what content gets the most views every quarter as the company pushes back against claims its platform is dominated by inflammatory, highly partisan and even misleading posts.
The content that’s seen by the most people isn’t necessarily the content that also gets the most engagement. Read more about how engagement — the likes, shares, and comments a Page or post generates — doesn’t equate to its reach, the number of people who actually see it.
That seems to be a pretty direct attempt to rebut New York Times reporter Kevin Roose’s daily top ten roundup, which has consistently shown that the most-engaged Facebook posts in the U.S. are from conservative media personalities and publications. Roose on Twitter:
I’m all for more data, but this new FB “widely viewed content” report (which I think represents the most labor-intensive effort ever made to dunk on me personally?) is just a tremendously weird document.
The most basic thing is: despite FB’s objections to @FacebooksTop10, there is no inherent reason that reach (how many people see a post in their feeds) is a better measure of popularity than engagement (how many people like/share/comment on it).
I agree. These are two metrics that show different results but are both valid. If the situation were reversed — if idiots like Ben Shapiro got the most views, while links from YouTube, Unicef, and a Green Bay Packers alumni page were engaged with most frequently — I bet Facebook would be trumpeting engagement as a more valid metric.
But who the hell is playeralumniresources.com? They’re the #9 most common domain in the Facebook news feed, and their homepage is the #1 most common URL in Facebook’s set, with 87.2 million views. Well, they’re a speaking agency of former Green Bay Packers players, who are available to join you for a round of golf, a fishing expedition or to sign autographs. And while I personally would be willing to pay a good deal of money to catch walleye with William Henderson (#33, legendary Packers fullback and Superbowl champion), the popularity of this page suggests that there’s something wacky about this data set.
I had the same reaction to this link’s appearance. It makes no sense to see it sandwiched between Spotify and ABC News, and well above CNN and Google, in a list of the top twenty domains on Facebook.
I took a quick look; it looks like the cause may be the marketing strategy its creator, Chris Jacke, has apparently taken: to share the link in every single post/comment/etc. he makes.
10,000 posts seen 90k times each is quite doable for an ex-NFL player.
There are several oddball links and domains like this in Facebook’s report mixed with expected ones, like YouTube, all presented entirely without context. What YouTube and Vimeo videos are people sharing? That is something Facebook does not elaborate on. One of the few remarkable notes about this report is that no specific YouTube video was popular enough on Facebook to be listed.
It is a really bizarre report that raises more questions than it answers and, as far as I am concerned, is not the counterpoint to Roose’s daily top ten it seems intended to be.
A reporter working for the Apple-focused website 9to5Mac paid a source around $500 in bitcoin in exchange for leaked data from the company in 2018, Motherboard has learned.
The article revealed that Apple would launch two iPads with only Wi-Fi and two with LTE connectivity, the type of processor the new iPad would have, details about the screen, and the fact that it would have a USB-C port instead of the traditional lightning port. [Andrey Shumeyko] told Motherboard that the article was based on data extracted from an iPhone XR prototype, which he said he obtained from a collector.
The story, published in October 2018, now reads “This post has been removed due to 9to5mac’s sourcing policies”. The iPad in question was announced less than a month after it was written. Was that scoop of, frankly, little value worth $500 and a marred ethical record?
T-Mobile says it is investigating a forum post claiming to be selling a mountain of personal data. The forum post itself doesn’t mention T-Mobile, but the seller told Motherboard they have obtained data related to over 100 million people, and that the data came from T-Mobile servers.
The data includes social security numbers, phone numbers, names, physical addresses, unique IMEI numbers, and driver licenses information, the seller said. Motherboard has seen samples of the data, and confirmed they contained accurate information on T-Mobile customers.
Und0xxed said the hackers found an opening in T-Mobile’s wireless data network that allowed access to two of T-Mobile’s customer data centers. From there, the intruders were able to dump a number of customer databases totaling more than 100 gigabytes.
They claim one of those databases holds the name, date of birth, SSN, drivers license information, plaintext security PIN, address and phone number of 36 million T-Mobile customers in the United States — all going back to the mid-1990s.
The hacker(s) claim the purloined data also includes IMSI and IMEI data for 36 million customers. These are unique numbers embedded in customer mobile devices that identify the device and the SIM card that ties that customer’s device to a telephone number.
T-Mobile US Inc. said an investigation confirmed about 7.8 million current users had information stolen along with more than 40 million records from past or prospective customers who’d applied for credit in a cyberattack.
The stolen information included customers’ full names, dates of birth, social security numbers, and IDs such as drivers licenses, the Bellevue, Washington-based company said in a statement on Wednesday. The hack doesn’t appear to have included credit card details or other financial information, it said.
Even though the number of affected accounts is lower than initially claimed, T-Mobile’s official statement acknowledges the leak is worse than originally thought because the dump also includes personal information from people who merely applied.
Here’s the kicker from Thomson’s article:
T-Mobile’s shares were little changed in New York trading on Tuesday. The stock has gained 4.3% this year.
Massive data breaches that compromise the identities of tens of millions of people? That does not move the needle for investor confidence. It is just another day.