If you’re a shareholder, you’re probably thrilled right now. A huge quarter for the iPhone, services, and AirPods, plus notable growth in Brazil, Malaysia, Thailand, and Vietnam.
It’s not all rosy, however. I think there are a couple of low points — or, well, less high points — that are worth pointing out. Earlier this year, Apple hinted at a strong quarter for services; indeed, it was. On the conference call today, Luca Maestri said that the company has around 480 million total paying subscribers to services. But, aside from Apple Music, Apple has so far provided no breakdown of how many paying subscribers it has for any specific service. Apple Arcade and Apple News Plus were each mentioned only once. Apple TV Plus got a fair bit more airtime, but Maestri acknowledged that there aren’t loads of paying subscribers yet:
And so when you take the combination of paid subscribers and bundle subscribers, you get the Apple TV+ revenue. Of course, because we’ve launched the service very recently, the amount of revenue that we recognized during the quarter was immaterial to our results.
The other thing that stood out to me was a year-over-year decline in iPad sales. It may have been the tenth anniversary of the iPad yesterday, but this was its fourth-lowest holiday quarter. I imagine that many users are hanging onto their older iPads, as iPadOS 13 supports models all the way back to the five-year-old iPad Air 2. But I imagine that not updating the iPad Pro at all in 2019 muted sales somewhat.
Anyway, I imagine Apple’s biggest shareholders have each gone home tonight to jump into a pile of cash, Scrooge McDuck style, because they are well-adjusted people just like you and me.
Ten years later, though, I don’t think the iPad has come close to living up to its potential. By the time the Mac turned 10, it had redefined multiple industries. In 1984 almost no graphic designers or illustrators were using computers for work. By 1994 almost all graphic designers and illustrators were using computers for work. The Mac was a revolution. The iPhone was a revolution. The iPad has been a spectacular success, and to tens of millions it is a beloved part of their daily lives, but it has, to date, fallen short of revolutionary.
It’s tempting to dwell on the Jobs point — I really do think the iPad is the product that misses him the most — but the truth is that the long-term sustainable source of innovation on the iPad should have come from third-party developers. Look at Gruber’s example for the Mac of graphic designers and illustrators: while MacPaint showed what was possible, the revolution was led by software from Aldus (PageMaker), Quark (QuarkXPress), and Adobe (Illustrator, Photoshop, Acrobat). By the time the Mac turned 10, Apple was a $2 billion company, while Adobe was worth $1 billion.
There are, needless to say, no companies built on the iPad that are worth anything approaching $1 billion in 2020 dollars, much less in 1994 dollars, even as the total addressable market has exploded, and one big reason is that $4.99 price point. Apple set the standard that highly complex, innovative software that was only possible on the iPad could only ever earn 5 bucks from a customer forever (updates, of course, were free).
Universal apps are the worst thing that ever happened to the iPad.
The economics for developers are to make a big iPhone app or ignore the device altogether. No business model = no innovation.
A selection of iPad-optimized apps may continue to differentiate it from its competitors, but it has taken forever to get the biggest developers on board with creating real versions of their software for the iPad. You would have thought Adobe, in particular, would be clamoring to release a true version of Photoshop for the iPad, but it took them until late last year — and it’s still very much a work in progress. Microsoft was much faster, but it still took them over four years after the iPad’s debut to launch a compatible version of Office.
One thing these apps have in common is that they are now subscription-based. On that front, the App Store on the iPhone and iPad has been revolutionary. Apple first encouraged developers to price their iPad software far below Mac equivalents — GarageBand was $5 on the iPad, but Apple charged $79 for iLife on the Mac, of which GarageBand was one part — and has never had an official mechanism for offering paid updates through the App Store. Developers realized that they could instead offer their apps for free and require a paid account; Apple made this arrangement official in 2016. Neither the iPad nor the App Store are singlehandedly responsible for the software-as-a-service business model, but they have each been a beneficiary of it.
Unfortunately, the simplicity of buying a license to use a piece of software has all but vanished.
Today is the tenth anniversary of the day that Steve Jobs took the stage at Yerba Buena and introduced the world to the iPad. It went on sale in April 2010 and ended up being Apple’s fastest-selling new product ever.
Plenty of writers have been acknowledging this anniversary today — Tom Warren at the Verge and John Voorhees at MacStories both wrote articles worth your time; Ryan Houlihan of Input interviewed Imran Chaudhri and Bethany Bongiorno, both of whom worked on the original iPad.
The iPad at 10 is, to me, a grave disappointment. Not because it’s “bad”, because it’s not bad — it’s great even — but because great though it is in so many ways, overall it has fallen so far short of the grand potential it showed on day one. To reach that potential, Apple needs to recognize they have made profound conceptual mistakes in the iPad user interface, mistakes that need to be scrapped and replaced, not polished and refined. I worry that iPadOS 13 suggests the opposite — that Apple is steering the iPad full speed ahead down a blind alley.
I agree with Gruber’s criticism of the iPad’s multitasking model in design terms, but I find myself increasingly frustrated by the myriad ways using an iPad makes simple tasks needlessly difficult — difficulties that should not remain ten years on.
There are small elements of friction, like how the iPad does not swap memory to disk, so the system tends to evict applications from memory when it runs low. There are developer limitations that make it difficult for apps to interact with each other. There are still system features that occupy the entire display. Put all of these issues together and they make a chore of something as ostensibly simple as writing.
Writing this post, for example, involved tapping a bookmarklet and saving the title and link URL as a draft. I rewrote the title, selected it — with some difficulty, as text selection on the iPad remains a mysterious combination of swipes and taps — then tapped the “Share” option and passed the selection to the Text Case app. The title case-converted text was placed on my clipboard with a tap, as there’s no way for the app to simply replace the selected text inline, and then another incantation was performed to select the title again and replace it with the text on the clipboard. As I typed out the body text, words were inexplicably selected and the cursor was moved around. Sometimes, after holding the delete key to remove a few words, the keyboard would be in uppercase mode. To get all of the links for the second paragraph, I had to open a few Safari tabs. I received a message notification midway through this and needed to open Notification Centre to read it, which took over the whole display for a handful of balloons half its width. I tapped to reply, then switched back to Safari. It had apparently been dumped from memory in the background, perhaps because I opened the photo picker in Messages, so the tabs I opened before had to reload.
Each of these problems is tiny but irksome. Combined, they make the iPad a simplistic multitasking environment presented with inexplicable complexity.
No device or product I own has inspired such a maddening blend of adoration and frustration for me as the iPad, and certainly not for as long in so many of the same ways.
Last month, you may remember, Avast’s web browser extensions were caught collecting users’ complete browsing histories for sale through its Jumpshot subsidiary. Those extensions were pulled, and the company insisted that the data had no personal information attached:
As a final assurance, [Avast CEO Ondrej Vlcek] told Forbes he recognizes customers use Avast to protect their information and so it can’t do anything that might “circumvent the security of privacy of the data including targeting by advertisers.”
“So we absolutely do not allow any advertisers or any third party … to get any access through Avast or any data that would allow the third party to target that specific individual,” he adds. […]
Instead of behaving more ethically, Avast decided to turn their free antivirus software into a piece of spyware.
The documents, from a subsidiary of the antivirus giant Avast called Jumpshot, shine new light on the secretive sale and supply chain of peoples’ internet browsing histories. They show that the Avast antivirus program installed on a person’s computer collects data, and that Jumpshot repackages it into various different products that are then sold to many of the largest companies in the world. Some past, present, and potential clients include Google, Yelp, Microsoft, McKinsey, Pepsi, Sephora, Home Depot, Condé Nast, Intuit, and many others. Some clients paid millions of dollars for products that include a so-called “All Clicks Feed,” which can track user behavior, clicks, and movement across websites in highly precise detail.
Avast claims to have more than 435 million active users per month, and Jumpshot says it has data from 100 million devices. Avast collects data from users that opt-in and then provides that to Jumpshot, but multiple Avast users told Motherboard they were not aware Avast sold browsing data, raising questions about how informed that consent is.
The data collected is so granular that clients can view the individual clicks users are making on their browsing sessions, including the time down to the millisecond. And while the collected data is never linked to a person’s name, email or IP address, each user history is nevertheless assigned to an identifier called the device ID, which will persist unless the user uninstalls the Avast antivirus product.
“Most of the threats posed by de-anonymization — where you are identifying people — comes from the ability to merge the information with other data,” said Gunes Acar, a privacy researcher who studies online tracking.
He points out that major companies such as Amazon, Google, and branded retailers and marketing firms can amass entire activity logs on their users. With Jumpshot’s data, the companies have another way to trace users’ digital footprints across the internet.
Of course, Avast knows de-anonymization is trivial. That’s why it sells an anti-tracking product that explicitly promises to “disguise your online behavior so that no one can tell it’s you” for just $65 per year. That’s nice of Avast: it will sell your identity, and also sell you a product that promises to prevent companies from selling your identity.
Bruce Schneier, in an op-ed for the New York Times:
Regulating this system means addressing all three steps of the process. A ban on facial recognition won’t make any difference if, in response, surveillance systems switch to identifying people by smartphone MAC addresses. The problem is that we are being identified without our knowledge or consent, and society needs rules about when that is permissible.
Similarly, we need rules about how our data can be combined with other data, and then bought and sold without our knowledge or consent. The data broker industry is almost entirely unregulated; there’s only one law — passed in Vermont in 2018 — that requires data brokers to register and explain in broad terms what kind of data they collect. The large internet surveillance companies like Facebook and Google collect dossiers on us more detailed than those of any police state of the previous century. Reasonable laws would prevent the worst of their abuses.
Finally, we need better rules about when and how it is permissible for companies to discriminate. Discrimination based on protected characteristics like race and gender is already illegal, but those rules are ineffectual against the current technologies of surveillance and control. When people can be identified and their data correlated at a speed and scale previously unseen, we need new rules.
Motorola released a whole slew of YouTube videos Sunday about its new Razr, a revamped throwback to its mid-2000s flip phone of the same name, in celebration of the phone’s pre-order launch. But with them came a disclaimer about the foldable phone: “Screen is made to bend; bumps and lumps are normal.”
Pre-sales became available today exclusively through Verizon at $1,499, though the phones won’t ship out until Feb. 14, according to Verizon’s website, and not on Feb. 6 as Motorola previously announced.
Nothing builds confidence in a fifteen-hundred-dollar device like the manufacturer reassuring me that it’s normal for it to deform over time.
It seems like ever such a long time ago that tech writers were complaining about smartphone prices — and those were finished, reliable products.
Ron Amadeo reviewed the Samsung Galaxy Fold for Ars Technica and does not seem impressed:
The inside screen would benefit a lot from being bigger. While the inner aspect ratio is the same as an iPad Mini, in practice the two devices are nothing alike. The 7.3-inch Galaxy Fold display is noticeably smaller than a 7.9-inch iPad Mini, and, critically, the iPad doesn’t have to waste space on an on-screen navigation bar and a giant camera notch. An iPad aspect ratio doesn’t work when you have to chop off sections of the screen like this—iOS dedicates nearly the entire display to the app area, and the Galaxy Fold does not. Overall, there’s just not enough room on the Fold display for apps to make it a significant improvement, or any improvement at all, over a regular smartphone.
A wider body would also allow for a smartphone-sized front screen instead of the tiny, useless screen that is on the front now. It could display apps at a normal size, with a normal width, and the keyboard would be usable. A wider body would also allow for a wider interior screen, which would be better for split screen, better for media, better for multi-pane tablet apps, and more normal for most Android games.
However Samsung arrived at this form factor for the Galaxy Fold, it’s a disaster. Nothing justifies this shape. Neither screen is good for its intended purpose, and this is something anyone could figure out if they just tried the phone for a few minutes next to a normal smartphone. You don’t see more Android content on the bigger screen, the app area is not the right aspect ratio for split screen, and most forms of media would benefit from a wider, less square screen. With such a considerable increase in price and heft over a normal smartphone, the Galaxy Fold just isn’t worth it.
This is a review of the Samsung Galaxy Fold as a product, and it is brutal. But let’s be realistic: the Fold should not be a product. It is a prototype that you can, for some reason, buy today. Its hardware is ill-considered; its software feels like a stretched smartphone rather than a shrunken tablet. This is a two-thousand-dollar way to say “first”.
A quick primer on the now-industry-standard SAE International rules on how to discuss self-driving abilities: Level 0 is no automation whatsoever. Level 1 is partial assistance with certain aspects of driving, like lane keep assist or adaptive cruise control. Level 2 is a step up to systems that can take control of the vehicle in certain situations, like Tesla’s Autopilot or Cadillac’s Super Cruise, while still requiring the driver to pay attention.
Get past that and we enter the realm of speculation: Level 3 promises full computer control without supervision under defined conditions during a journey, Level 4 is start-to-finish autonomous tech limited only by virtual safeguards like a geofence, and Level 5 is the total hands-off, go literally anywhere at the push of a button experience where the vehicle might not even have physical controls.
Sitting down with WardsAuto at the Consumer Electronics Show in Las Vegas last week, VW Autonomy’s Alex Hitzinger said Level 4 might be the realistic limit for what automakers can build. He wasn’t shy in pointing out the relative difficulty of trying for full Level 5 autonomy.
I am skeptical that generally available cars will make the jump from third-level autonomy to fourth-level within this decade, and I have no expectation that any car will get to fifth-level autonomy in my lifetime. I simply don’t think it’s reasonable that a vehicle will be able to drive itself anywhere on Earth that can be traversed by cars today under any weather conditions — without the intervention of a human driver.
One of the reasons auto manufacturers have given for their interest in autonomous vehicles is their ability to reduce collisions. If that’s the case, why not set a goal of making entirely reliable collision avoidance systems? I know that’s less cool than a car that can drive itself, but it’s much more practical.1
I am also prepared to eat humble pie.
Better still would be greater investment in public transit which, in some circumstances, is fully automated. I know this is even duller than collision avoidance systems, but it’s also better for cities. ↩︎
One of the bigger mysteries associated with the hack of Jeff Bezos’ iPhone X is how, exactly, it was breached. A report yesterday by Sheera Frenkel in the New York Times appeared to shed some light on that:
On the afternoon of May 1, 2018, Jeff Bezos received a message on WhatsApp from an account belonging to Saudi Arabia’s crown prince, Mohammed bin Salman.
The two men had previously communicated using WhatsApp, but Bezos, Amazon’s chief executive, had not expected a message that day — let alone one with a video of Saudi and Swedish flags with Arabic text.
The video, a file of more than 4.4 megabytes, was more than it appeared. Hidden in 14 bytes of that file was a separate bit of code that most likely implanted malware, malicious software, that gave attackers access to Bezos’ entire phone, including his photos and private communications.
The detail attributing the breach to fourteen bytes of malware was entirely new information, and not reported elsewhere. But I’m linking here to the Chicago Tribune’s syndicated copy because the version currently on the Times’ website no longer makes the same specific claim:
The video, a file of more than 4.4 megabytes, was more than it appeared, according to a forensic analysis that Mr. Bezos commissioned and paid for to discover who had hacked his iPhone X. Hidden in that file was a separate bit of code that most likely implanted malware that gave attackers access to Mr. Bezos’ entire phone, including his photos and private communications.
Despite this material change, there is no correction notice at the bottom of the article. The forensic report (PDF) acknowledges that “the file containing the video is slightly larger than the video itself”, but does not cite a specific figure. It does, however, state that the video file is 4.22 MB, not “more than 4.4” as stated in the Times report.
I know this seems ridiculously pedantic, but I want to know how this discrepancy can be explained. The UN press release also does not contain any more specific details. Is this just a weird instance of miscommunication that wasn’t fact-checked? Or is this perhaps news that hasn’t been fully confirmed? For example, is there another forensic report that hasn’t yet been made public?
This matters, I think, because it could suggest a difference between whether the H.264 MP4 video decoder on iOS has a vulnerability, or if it’s something specific to the WhatsApp container. If the former is true, that means that this isn’t just something that WhatsApp users need to watch out for.
It used to be the case that vulnerabilities like these were kept extremely close to the vest and only used on specific high-value targets. But, ever since we found out that China was attacking Uyghur iPhone users broadly, I’m no longer as convinced that not being a prominent individual is enough to avoid being a target.
Update: Ben Somers points out that 4.22 MiB roughly converts to 4.4 MB, which may be the source of that part of the discrepancy. The fourteen bytes are still unaccounted for.
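The unit conversion is easy to check. A quick sketch — the 4.22 figure is from the forensic report; everything else here is just arithmetic:

```python
# 4.22 MiB (mebibytes, base-1024) expressed in MB (megabytes, base-1000)
size_mib = 4.22
size_bytes = size_mib * 1024 ** 2  # 4,424,990.72 bytes
size_mb = size_bytes / 1000 ** 2   # ≈ 4.42 MB

print(f"{size_mib} MiB ≈ {size_mb:.2f} MB")  # → 4.22 MiB ≈ 4.42 MB
```

So a reporter rounding 4.22 MiB into “more than 4.4 MB” is plausible, though it does nothing to explain the fourteen-byte figure.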
Also, it’s worth mentioning that one reason that I wanted to draw attention to this story is because the Times often fails to post correction notices for online stories that have been updated after publication. I think this practice is ridiculous.
Update: A paragraph later in the story references the fourteen byte mystery, now with more context:
The May 2018 message that contained the innocuous-seeming video file, with a tiny 14-byte chunk of malicious code, came out of the blue, according to the report and additional notes obtained by The New York Times. In the 24 hours after it was sent, Mr. Bezos’ iPhone began sending large amounts of data, which increased approximately 29,000 percent over his normal data usage.
This wasn’t in the story last time I checked. There still isn’t a corrections or updates notice appended to the Times article. Thanks to Lawrence Velázquez for bringing it to my attention.
Ryan Mac, Caroline Haskins, and Logan McDonald, Buzzfeed News:
Originally known as Smartcheckr, Clearview was the result of an unlikely partnership between Ton-That, a small-time hacker turned serial app developer, and Richard Schwartz, a former adviser to then–New York mayor Rudy Giuliani. Ton-That told the Times that they met at a 2016 event at the Manhattan Institute, a conservative think tank, after which they decided to build a facial recognition company.
While Ton-That has erased much of his online persona from that time period, old web accounts and posts uncovered by BuzzFeed News show that the 31-year-old developer was interested in far-right politics. In a partial archive of his Twitter account from early 2017, Ton-That wondered why all big US cities were liberal, while retweeting a mix of Breitbart writers, venture capitalists, and right-wing personalities.
It is revealing that the people behind tools that are borderline unethical and threaten our privacy expectations often also happen to be aggressively protective of their own privacy.
In the second part of the presentation, Scott Forstall (then Apple’s software chief) invoked the App Store, which had already become wildly successful after less than two years in operation. It was the App Store’s Gold Rush era, and Forstall’s message was clear: There’s a new Gold Rush coming, and it’s in iPad apps. And if developers wanted their apps to be prominently featured on the App Store for iPad, Forstall pointed out, those apps would need to be updated to support it. iPhone-only apps would run, but they’d do so in a diminished compatibility mode and be relegated to the back pages of the App Store.
The iPad was introduced in January, but it didn’t ship until April — and Apple released tools for developers to build iPad apps the very day the product was announced. The message was clear: Build iPad apps and a flood of users will come your way. You’ve got three months.
Forstall was also quick to point out that good iPad apps were more than just blown-up versions of iPhone apps. Several compliant developers were brought out to demo how they’d already begun work on reconceptualizing their iPhone apps for a larger screen, including MLB At Bat and the New York Times.
Really great apps designed for the iPad do make it a uniquely worthwhile experience. It’s a shame, then, that some developers in the past few years — Apple included — have been putting less effort into designing apps specifically for the iPad. It remains somewhat dispiriting when the biggest difference between the iPhone and iPad versions of an app is an always-visible sidebar in the left third of the screen. Even system features like Siri are half-assed ports on the iPad.
But that could all change soon. One of the most interesting things to happen in 2019 was the bifurcation of iOS into “iOS” for the iPhone and iPod Touch, and “iPadOS” for the iPad. So far, this change has largely been in name only, but I am hopeful that this signals a future in which iPad apps and features are more specific to the platform. Apple has also been steadily updating the iPad at both the high and low ends of its lineup, which is equally good news for the platform.
Forensic experts hired by Jeff Bezos have concluded with “medium to high confidence” that a WhatsApp account used by Saudi Crown Prince Mohammed bin Salman was directly involved in a 2018 hack of the Amazon founder’s phone.
A report on the hack, which has been seen by the Financial Times, says Mr Bezos’ phone started surreptitiously sharing vast amounts of data immediately after receiving an apparently innocuous, but encrypted video file from the prince’s WhatsApp account in May 2018.
That file shows an image of the Saudi Arabian flag and Swedish flags and arrived with an encrypted downloader. Because the downloader was encrypted this delayed or further prevented “study of the code delivered along with the video.”
Investigators determined the video or downloader were suspicious only because Bezos’ phone subsequently began transmitting large amounts of data. “[W]ithin hours of the encrypted downloader being received, a massive and unauthorized exfiltration of data from Bezos’ phone began, continuing and escalating for months thereafter,” the report states.
Investigators say in the report that their efforts were hampered somewhat by WhatsApp’s encryption, but they have suggested that a followup step would be to jailbreak Bezos’ iPhone to examine its file system.
Also of note: Bezos creates an encrypted backup of his iPhone using iTunes; he has iCloud Backups disabled. But investigators were not able to extract the encrypted backup. It’s unclear whether Bezos forgot his password or was unable to supply it for another reason.
[…] To encrypt a backup in the Finder or iTunes for the first time, turn on the password-protected Encrypt Backup option. Backups for your device will automatically be encrypted from then on. You can also make a backup in iCloud, which automatically encrypts your information every time.
There is nothing technically incorrect about this explanation. iCloud backups are, indeed, encrypted every time; local backups have encryption as an option. But whether a backup is “encrypted” is not enough information to decide which method is more secure. Apple holds the keys to iCloud backups, but only users know their local backup key.
It’s worth noting that Apple has been evaluating whether to offer encrypted iCloud backups since at least February 2016, and possibly earlier. Today’s Reuters report suggests that Apple dropped that plan at some point in early 2018, though an October 2018 interview with Tim Cook in Spiegel indicated that the company was still working on it. I’m not sure what the correct timeline is, but I hope that renewed public pressure can encourage the company to make it a priority. It is imperative that users know exactly how their data is being used, and there is no reason that enabling backups should compromise their security and privacy.
This reminds me of the Facebook 2FA fiasco, a more egregious case of something positive (2FA) being needlessly tainted (abuse of SMS numbers for non-2FA purposes).
This is exactly right. One of the effects of Apple’s confusing language around backups and encryption is that some people may not trust either. It’s not a great day when Apple is getting unfavourably compared to Facebook on privacy and security matters.
Right now an ordinary person still can’t, for free, take a random photo of a stranger and find the name for him or her. But with Yandex they can. Yandex has been around a long time and is one of the few companies in the world that is competitive to Google. Their index is heavily biased to Eastern European data, but they have enough global data to find me and Andrew Yang.
If you use Google Photos or Facebook you’ve probably encountered their facial recognition. It’s magic, the matching works great. It’s also very limited. Facebook seems to only show you names for faces that people you have some sort of Facebook connection to. Google Photos similarly doesn’t volunteer random names. They could do more; Facebook could match a face to any Facebook user, for instance. But both services seem to have made a deliberate decision not to be a general purpose facial recognition service to identify strangers.
At the time that I linked to the Bellingcat report, I wondered why Google’s reverse image search, in particular, was so bad in comparison. In tests, it even missed imagery from Google Street View, despite Google regularly promoting its abilities in machine learning, image identification, and so on. It is now clear to me that Google’s image search is so bad at matching faces because Google designed it that way. Otherwise, Google would have launched something like Yandex or Clearview AI, and that would be dangerous.
Google’s restraint is admirable. What’s deeply worrying is that it is optional — that Google could, at any time, change its mind. There are few regulations in the United States that would prevent Google or any other company from launching its own invasive and creepy facial recognition system. Recall that the only violation that could be ascribed to Clearview’s behaviour — other than an extraordinary violation of simple ethics — is that the company scraped social media sites’ images without permission. It’s a pretty stupid idea to solely rely upon copyright law as a means of reining in facial recognition.
Mike Masnick of Techdirt responds to the Reuters report from earlier today claiming that Apple dropped a plan to implement end-to-end encryption on iCloud backups at the behest of the FBI:
Of course, the other way one might look at this decision is that if Apple had gone forward with fully encrypting backups, then the DOJ, FBI and other law enforcement would have gone even more ballistic in demanding a regulatory approach that blocks pretty much all real encryption. If you buy that argument, then failing to encrypt backups is a bit of appeasement. Of course, with Barr’s recent attacks on device encryption, it seems reasonable to argue that this “compromise” isn’t enough (and, frankly, probably would never be enough) for authoritarian law enforcement folks like Barr, and thus, it’s silly for Apple to even bother to try to appease them in such a manner.
Indeed, all of this seems like an argument for why Apple should actually cooperate less with law enforcement, rather than more, as the administration keeps asking. Because even when Apple tries to work with law enforcement, it gets attacked as if it has done nothing. It seems like the only reasonable move at this point is to argue that the DOJ is a hostile actor, and Apple should act accordingly.
Even though Apple attempts to explain how iCloud backups work, I don’t think it does a good job, and that is one reason today’s Reuters report had such a profound impact: a lot of people were surprised that their iCloud backups are less private than their phones. Yet, as bad as this is for Apple, it is an equally poor look for the Department of Justice, which has publicly been whining about its inability to extract device data while privately accepting Apple’s cooperation.
More than two years ago, Apple told the FBI that it planned to offer users end-to-end encryption when storing their phone data on iCloud, according to one current and three former FBI officials and one current and one former Apple employee.
Under that plan, primarily designed to thwart hackers, Apple would no longer have a key to unlock the encrypted data, meaning it would not be able to turn material over to authorities in a readable form even under court order.
In private talks with Apple soon after, representatives of the FBI’s cyber crime agents and its operational technology division objected to the plan, arguing it would deny them the most effective means for gaining evidence against iPhone-using suspects, the government sources said.
When Apple spoke privately to the FBI about its work on phone security the following year, the end-to-end encryption plan had been dropped, according to the six sources. Reuters could not determine why exactly Apple dropped the plan.
Apple describes both local iPhone storage and iCloud backups as “encrypted”, but that word means different things in each context. An iPhone’s files cannot be decrypted unless the passcode is known, which typically means that only the device’s user has full access. But an iCloud backup’s key is held by Apple, so the company has just as much access as the user does. Importantly, that also means there is a way of recovering the data should the user’s key fail for some reason. It is possible that Apple scrapped its plan for end-to-end encryption of iCloud backups partly because customers would be frustrated that they could not recover their backups in some circumstances.
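That key-custody distinction can be sketched in a few lines of code. To be clear, this is a toy illustration and not Apple’s actual design; the function names, iteration count, and passcodes are invented for the example. The point is only that a key derived from a user’s passcode exists nowhere the provider can reach, while a provider-generated key can be used (and surrendered) without the user’s involvement:

```python
# Conceptual sketch of two key-custody models -- not Apple's implementation.
import hashlib
import secrets

def derive_key(passcode: str, salt: bytes) -> bytes:
    # Device-side model: the key can only be reproduced by someone who
    # knows the passcode, i.e. the device's user.
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), salt, 100_000)

salt = secrets.token_bytes(16)
device_key = derive_key("1234", salt)

# Provider-side model: the provider generates and stores the key itself,
# so it can decrypt backups, recover them when a user forgets a passcode,
# and hand them over in response to a court order.
server_key = secrets.token_bytes(32)

# Without the correct passcode, the device-side key cannot be reconstructed.
assert derive_key("1234", salt) == device_key
assert derive_key("0000", salt) != device_key
```

The trade-off in the paragraph above falls out of this directly: in the device-side model a forgotten passcode means the data is gone for good, which is exactly the customer-frustration scenario that may have given Apple pause.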
However, it is more troubling if such a plan never came to fruition because of government pressure. I don’t think it should be the goal of Apple or any company to deliberately make the work of law enforcement impossible, but decisions like these should be made in the best interests of users. And I would expect that many users believe storing their device’s backup in iCloud should not compromise their security and privacy. At the very least, encrypting backups with a secret known only to the user should be an option for iOS users; after all, it apparently is an option on Android. Apple also ought to make it plainly obvious who holds the key to encrypted data at every level, to help reduce moronic takes like this one from David Carroll:
Apple’s new position on protecting iCloud data from the United States government is now remarkably similar to its position on protecting iCloud data stored in the People’s Republic of China.
This simply isn’t true on any level. For a start, this is not a “new position”, and it does not solely apply to the United States government. Apple makes public what it can and cannot supply to law enforcement, and how it responds to those requests (PDF):
All iCloud content data stored by Apple is encrypted at the location of the server. When third-party vendors are used to store data, Apple never gives them the keys. Apple retains the encryption keys in its U.S. data centers. iCloud content, as it exists in the subscriber’s account, may be provided in response to a search warrant issued upon a showing of probable cause.
For law enforcement agencies outside the U.S. (PDF), the last sentence is replaced with this paragraph:
All requests from government and law enforcement agencies outside of the United States for content, with the exception of emergency circumstances (defined above in Emergency Requests), must comply with applicable laws, including the United States Electronic Communications Privacy Act (ECPA). A request under a Mutual Legal Assistance Treaty or Agreement with the United States is in compliance with ECPA. Apple Inc. will provide subscriber content, as it exists in the subscriber’s account, only in response to such legally valid process.
The worry in China is not necessarily that the government can subpoena iCloud data; the worry is that user data is stored on servers belonging to a company run, in part, by a corrupt single-party regime. The government of the U.S. and its various criminal justice and national security branches are worrying for myriad reasons, but they cannot accurately be compared to the situation in China.
Kashmir Hill, reporting for the New York Times on Clearview AI and its founder, Hoan Ton-That:
His tiny company, Clearview AI, devised a groundbreaking facial recognition app. You take a picture of a person, upload it and get to see public photos of that person, along with links to where those photos appeared. The system — whose backbone is a database of more than three billion images that Clearview claims to have scraped from Facebook, YouTube, Venmo and millions of other websites — goes far beyond anything ever constructed by the United States government or Silicon Valley giants.
Federal and state law enforcement officers said that while they had only limited knowledge of how Clearview works and who is behind it, they had used its app to help solve shoplifting, identity theft, credit card fraud, murder and child sexual exploitation cases.
Until now, technology that readily identifies everyone based on his or her face has been taboo because of its radical erosion of privacy. Tech companies capable of releasing such a tool have refrained from doing so; in 2011, Google’s chairman at the time said it was the one technology the company had held back because it could be used “in a very bad way.” Some large cities, including San Francisco, have barred police from using facial recognition technology.
But without public scrutiny, more than 600 law enforcement agencies have started using Clearview in the past year, according to the company, which declined to provide a list. The computer code underlying its app, analyzed by The New York Times, includes programming language to pair it with augmented-reality glasses; users would potentially be able to identify every person they saw. The tool could identify activists at a protest or an attractive stranger on the subway, revealing not just their names but where they lived, what they did and whom they knew.
And it’s not just law enforcement: Clearview has also licensed the app to at least a handful of companies for security purposes.
This investigation was published on Saturday. I’ve read it a few times and it has profoundly disturbed me on every pass, but I haven’t been surprised by it. I’m not cynical, but it doesn’t surprise me that an entirely unregulated industry motivated to push privacy ethics to their revenue-generating limits would move in this direction.
Clearview’s technology makes my skin crawl; the best you can say about the company is that its limited access prevents the most egregious privacy violations. When something like this is more widely available, it will be dangerous for those who already face greater threats to their safety and privacy — women, in particular, but also those who are marginalized for their race, skin colour, gender, and sexual orientation. Nothing will change on this front if we don’t set legal expectations that limit how technologies like this may be used.
Susan Heavey and Andrea Shalal, reporting for Reuters in an article with the headline “Mnuchin urges Apple, other tech companies to work with law enforcement”:
Apple Inc and other technology companies should cooperate with U.S. investigators, Treasury Secretary Steven Mnuchin said on Wednesday as law enforcement officials continued probing last month’s fatal shooting at a U.S. naval base in Florida.
Mnuchin later told reporters at the White House that he had not discussed the issue with Apple and did not know the specifics at hand. “I know Apple has cooperated in the past on law enforcement issues, and I expect they would continue … to cooperate.”
The Reuters article itself notes that Apple is, in fact, cooperating with investigators by turning over everything it has on the iPhones in question, contrary to the implication of Mnuchin’s remarks. But the headline on this article is misleading.
When framed that way, it’s obviously dumb. But anyone reading Reuters’ coverage of the issue won’t get that. They’ll think that Apple is somehow taking some sort of stand against US law enforcement. This is what Trump, Barr, and apparently Mnuchin, would like people to think, but it’s not true, and it’s fundamentally bad journalism for Reuters to frame it that way.
To be clear, it is likely not the reporters’ fault that the story was framed with this headline. But it’s unnecessarily carrying water for a Department of Justice that is exploiting a terrorist attack and public confusion over this issue to neuter encryption.
Jamie Leach of Google, announcing the new search results page design last year:
The name of the website and its icon appear at the top of the results card to help anchor each result, so you can more easily scan the page of results and decide what to explore next. Site owners can learn more about how to choose their preferred icon for organic listings here.
When you search for a product or service and we have a useful ad to show, you’ll see a bolded ad label at the top of the card alongside the web address so you can quickly identify where the information is coming from.
Last year, our search results on mobile gained a new look. That’s now rolling out to desktop results this week, presenting site domain names and brand icons prominently, along with a bolded “Ad” label for ads.
All you get, as far as identifying where a search result comes from, is a tiny 16-by-16-point favicon and small grey text with the URL. If it’s an ad, the favicon is replaced with a little “Ad” label, but there are no other advertising identifiers. Just a few years ago, Google used to indicate ads much more prominently. If the ads are truly as “useful” as Google claims, surely it doesn’t need to trick users into clicking on them instead of regular results.
Update: Google says that they’re going to experiment with different ad and search result appearances.