If you’re a shareholder, you’re probably thrilled right now. A huge quarter for the iPhone, services, and AirPods, plus notable growth in Brazil, Malaysia, Thailand, and Vietnam.
It’s not all rosy, however. I think there are a couple of low points — or, well, less high points — that are worth pointing out. Earlier this year, Apple hinted at a strong quarter for services; indeed, it was. On the conference call today, Luca Maestri said that the company has around 480 million total paying subscribers to services. But, aside from Apple Music, Apple has so far provided no breakdown of how many paying subscribers it has for any specific service. Apple Arcade and Apple News Plus were each mentioned only once. Apple TV Plus got a fair bit more airtime, but Maestri acknowledged that there aren’t loads of paying subscribers yet:
And so when you take the combination of paid subscribers and bundle subscribers, you get the Apple TV+ revenue. Of course, because we’ve launched the service very recently, the amount of revenue that we recognized during the quarter was immaterial to our results.
The other thing that stood out to me was a year-over-year decline in iPad sales. It may have been the tenth anniversary of the iPad yesterday, but this was its fourth-lowest holiday quarter. I imagine that many users are hanging onto their older iPads, as iPadOS 13 supports models all the way back to the five-year-old iPad Air 2. But I suspect that not updating the iPad Pro at all in 2019 muted sales somewhat.
Anyway, I imagine Apple’s biggest shareholders have each gone home tonight to jump into a pile of cash, Scrooge McDuck style, because they are well-adjusted people just like you and me.
Ten years later, though, I don’t think the iPad has come close to living up to its potential. By the time the Mac turned 10, it had redefined multiple industries. In 1984 almost no graphic designers or illustrators were using computers for work. By 1994 almost all graphic designers and illustrators were using computers for work. The Mac was a revolution. The iPhone was a revolution. The iPad has been a spectacular success, and to tens of millions it is a beloved part of their daily lives, but it has, to date, fallen short of revolutionary.
It’s tempting to dwell on the Jobs point — I really do think the iPad is the product that misses him the most — but the truth is that the long-term sustainable source of innovation on the iPad should have come from third-party developers. Look at Gruber’s example for the Mac of graphic designers and illustrators: while MacPaint showed what was possible, the revolution was led by software from Aldus (PageMaker), Quark (QuarkXPress), and Adobe (Illustrator, Photoshop, Acrobat). By the time the Mac turned 10, Apple was a $2 billion company, while Adobe was worth $1 billion.
There are, needless to say, no companies built on the iPad that are worth anything approaching $1 billion in 2020 dollars, much less in 1994 dollars, even as the total addressable market has exploded, and one big reason is that $4.99 price point. Apple set the standard that highly complex, innovative software that was only possible on the iPad could only ever earn 5 bucks from a customer forever (updates, of course, were free).
Universal apps are the worst thing that ever happened to the iPad.
The economics for developers are to make a big iPhone app or ignore the device altogether. No business model = no innovation.
A selection of iPad-optimized apps may continue to differentiate it from its competitors, but it has taken forever to get the biggest developers on board with creating real versions of their software for the iPad. You would have thought Adobe, in particular, would be clamoring to release a true version of Photoshop for the iPad, but it took them until late last year — and it’s still very much a work in progress. Microsoft was much faster, but it still took them over four years after the iPad’s debut to launch a compatible version of Office.
One thing these apps have in common is that they are now subscription-based. On that front, the App Store on the iPhone and iPad has been revolutionary. Apple first encouraged developers to price their iPad software far below Mac equivalents — GarageBand was $5 on the iPad, but Apple charged $79 for iLife on the Mac, of which GarageBand was one part — and has never had an official mechanism for offering paid updates through the App Store. Developers realized that they could instead offer their apps for free and require a paid account; Apple made this arrangement official in 2016. Neither the iPad nor the App Store is singlehandedly responsible for the software-as-a-service business model, but they have each been a beneficiary of it.
Unfortunately, the simplicity of buying a license to use a piece of software has all but vanished.
Today is the tenth anniversary of the day that Steve Jobs took the stage at Yerba Buena and introduced the world to the iPad. It went on sale in April 2010 and ended up being Apple’s fastest-selling new product ever.
Plenty of writers have been acknowledging this anniversary today — Tom Warren at the Verge and John Voorhees at MacStories both wrote articles worth your time; Ryan Houlihan of Input interviewed Imran Chaudhri and Bethany Bongiorno, both of whom worked on the original iPad.
The iPad at 10 is, to me, a grave disappointment. Not because it’s “bad”, because it’s not bad — it’s great even — but because great though it is in so many ways, overall it has fallen so far short of the grand potential it showed on day one. To reach that potential, Apple needs to recognize they have made profound conceptual mistakes in the iPad user interface, mistakes that need to be scrapped and replaced, not polished and refined. I worry that iPadOS 13 suggests the opposite — that Apple is steering the iPad full speed ahead down a blind alley.
I agree with Gruber’s criticism of the iPad’s multitasking model in design terms, but I find myself increasingly frustrated by the myriad ways using an iPad makes simple tasks needlessly difficult — difficulties that should not remain ten years on.
There are small elements of friction, like how the iPad does not swap memory to disk, so the system tends to evict applications from memory when it runs low. There are developer limitations that make it difficult for apps to interact with each other. There are still system features that occupy the entire display. Put all of these issues together and it makes a chore of something as ostensibly simple as writing.
Writing this post, for example, involved tapping a bookmarklet and saving the title and link URL as a draft. I rewrote the title, selected it — with some difficulty, as text selection on the iPad remains a mysterious combination of swipes and taps — then tapped the “Share” option and passed the selection to the Text Case app. The title case-converted text was placed on my clipboard with a tap, as there’s no way for the app to simply replace the selected text inline, and then another incantation was performed to select the title again and replace it with the text on the clipboard. As I typed out the body text, words were inexplicably selected and the cursor was moved around. Sometimes, after holding the delete key to remove a few words, the keyboard would be in uppercase mode. To get all of the links for the second paragraph, I had to open a few Safari tabs. I received a message notification midway through this and needed to open Notification Centre to read it, which took over the whole display for a handful of balloons half its width. I tapped to reply, then switched back to Safari. It had apparently been dumped from memory in the background, perhaps because I opened the photo picker in Messages, so the tabs I opened before had to reload.
Each of these problems is tiny but irksome. Combined, they make the iPad a simplistic multitasking environment presented with inexplicable complexity.
No device or product I own has inspired such a maddening blend of adoration and frustration for me as the iPad, and certainly not for as long in so many of the same ways.
Last month, you may remember, Avast’s web browser extensions were caught collecting every website users visited for sale by its Jumpshot subsidiary. Those extensions were pulled, and the company insisted that the data had no personal information attached:
As a final assurance, [Avast CEO Ondrej Vlcek] told Forbes he recognizes customers use Avast to protect their information and so it can’t do anything that might “circumvent the security or privacy of the data including targeting by advertisers.”
“So we absolutely do not allow any advertisers or any third party … to get any access through Avast or any data that would allow the third party to target that specific individual,” he adds. […]
Instead of behaving more ethically, Avast decided to turn their free antivirus software into a piece of spyware.
The documents, from a subsidiary of the antivirus giant Avast called Jumpshot, shine new light on the secretive sale and supply chain of peoples’ internet browsing histories. They show that the Avast antivirus program installed on a person’s computer collects data, and that Jumpshot repackages it into various different products that are then sold to many of the largest companies in the world. Some past, present, and potential clients include Google, Yelp, Microsoft, McKinsey, Pepsi, Sephora, Home Depot, Condé Nast, Intuit, and many others. Some clients paid millions of dollars for products that include a so-called “All Clicks Feed,” which can track user behavior, clicks, and movement across websites in highly precise detail.
Avast claims to have more than 435 million active users per month, and Jumpshot says it has data from 100 million devices. Avast collects data from users that opt-in and then provides that to Jumpshot, but multiple Avast users told Motherboard they were not aware Avast sold browsing data, raising questions about how informed that consent is.
The data collected is so granular that clients can view the individual clicks users are making on their browsing sessions, including the time down to the millisecond. And while the collected data is never linked to a person’s name, email or IP address, each user history is nevertheless assigned to an identifier called the device ID, which will persist unless the user uninstalls the Avast antivirus product.
“Most of the threats posed by de-anonymization — where you are identifying people — comes from the ability to merge the information with other data,” said Gunes Acar, a privacy researcher who studies online tracking.
He points out that major companies such as Amazon, Google, and branded retailers and marketing firms can amass entire activity logs on their users. With Jumpshot’s data, the companies have another way to trace users’ digital footprints across the internet.
Of course, Avast knows de-anonymization is trivial. That’s why it sells an anti-tracking product that explicitly promises to “disguise your online behavior so that no one can tell it’s you” for just $65 per year. That’s nice of Avast: it will sell your identity, and also sell you a product that promises to prevent companies from selling your identity.
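The merging Acar describes is, mechanically, just a join on shared fields. Here is a minimal sketch with entirely hypothetical data, assuming one dataset carries the persistent device ID and another ties the same activity to a real identity:

```python
# Hypothetical "anonymous" clickstream keyed by a persistent device ID,
# like the one Jumpshot attaches to each browsing history.
clickstream = [
    {"device_id": "abc123", "url": "example-store.com/checkout", "ts": "12:01:32.115"},
    {"device_id": "abc123", "url": "example-news.com/article", "ts": "12:04:10.902"},
]

# A second dataset, e.g. a retailer's own order log, that links the same
# visit to a named account. Any shared, precise detail (timestamp, URL) works.
orders = [
    {"name": "Jane Doe", "url": "example-store.com/checkout", "ts": "12:01:32.115"},
]

# Join on the shared fields to attach a name to the device ID, and thereby
# to the entire browsing history tied to that ID.
identified = {
    click["device_id"]: order["name"]
    for click in clickstream
    for order in orders
    if (click["url"], click["ts"]) == (order["url"], order["ts"])
}
print(identified)  # → {'abc123': 'Jane Doe'}
```

Once one row is matched, every other row sharing that device ID is de-anonymized too, which is why millisecond-precision timestamps make the persistent identifier so dangerous.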
Bruce Schneier, in an op-ed for the New York Times:
Regulating this system means addressing all three steps of the process. A ban on facial recognition won’t make any difference if, in response, surveillance systems switch to identifying people by smartphone MAC addresses. The problem is that we are being identified without our knowledge or consent, and society needs rules about when that is permissible.
Similarly, we need rules about how our data can be combined with other data, and then bought and sold without our knowledge or consent. The data broker industry is almost entirely unregulated; there’s only one law — passed in Vermont in 2018 — that requires data brokers to register and explain in broad terms what kind of data they collect. The large internet surveillance companies like Facebook and Google collect dossiers on us more detailed than those of any police state of the previous century. Reasonable laws would prevent the worst of their abuses.
Finally, we need better rules about when and how it is permissible for companies to discriminate. Discrimination based on protected characteristics like race and gender is already illegal, but those rules are ineffectual against the current technologies of surveillance and control. When people can be identified and their data correlated at a speed and scale previously unseen, we need new rules.
Motorola released a whole slew of YouTube videos Sunday about its new Razr, a revamped throwback to its mid-2000s flip phone of the same name, in celebration of the phone’s pre-order launch. But with them came a disclaimer about the foldable phone: “Screen is made to bend; bumps and lumps are normal.”
Pre-sales became available today exclusively through Verizon at $1,499, though the phones won’t ship out until Feb. 14, according to Verizon’s website, and not on Feb. 6 as Motorola previously announced.
Nothing builds confidence in a thousand-dollar device like the manufacturer reassuring me that it’s normal for it to deform over time.
It seems ever so long ago that tech writers were complaining about the prices of smartphones — and those were finished, reliable products.
Ron Amadeo reviewed the Samsung Galaxy Fold for Ars Technica and does not seem impressed:
The inside screen would benefit a lot from being bigger. While the inner aspect ratio is the same as an iPad Mini, in practice the two devices are nothing alike. The 7.3-inch Galaxy Fold display is noticeably smaller than a 7.9-inch iPad Mini, and, critically, the iPad doesn’t have to waste space on an on-screen navigation bar and a giant camera notch. An iPad aspect ratio doesn’t work when you have to chop off sections of the screen like this—iOS dedicates nearly the entire display to the app area, and the Galaxy Fold does not. Overall, there’s just not enough room on the Fold display for apps to make it a significant improvement, or any improvement at all, over a regular smartphone.
A wider body would also allow for a smartphone-sized front screen instead of the tiny, useless screen that is on the front now. It could display apps at a normal size, with a normal width, and the keyboard would be usable. A wider body would also allow for a wider interior screen, which would be better for split screen, better for media, better for multi-pane tablet apps, and more normal for most Android games.
However Samsung arrived at this form factor for the Galaxy Fold, it’s a disaster. Nothing justifies this shape. Neither screen is good for its intended purpose, and this is something anyone could figure out if they just tried the phone for a few minutes next to a normal smartphone. You don’t see more Android content on the bigger screen, the app area is not the right aspect ratio for split screen, and most forms of media would benefit from a wider, less square screen. With such a considerable increase in price and heft over a normal smartphone, the Galaxy Fold just isn’t worth it.
This is a review of the Samsung Galaxy Fold as a product, and it is a brutal indictment. But let’s be realistic: the Fold should not be a product. It is a prototype that you can, for some reason, buy today. Its hardware is ill-considered; its software feels like a stretched smartphone rather than a shrunken tablet. This is a two-thousand-dollar way to say “first”.
A quick primer on the now-industry-standard SAE International rules on how to discuss self-driving abilities: Level 0 is no automation whatsoever. Level 1 is partial assistance with certain aspects of driving, like lane keep assist or adaptive cruise control. Level 2 is a step up to systems that can take control of the vehicle in certain situations, like Tesla’s Autopilot or Cadillac’s Super Cruise, while still requiring the driver to pay attention.
Get past that and we enter the realm of speculation: Level 3 promises full computer control without supervision under defined conditions during a journey, Level 4 is start-to-finish autonomous tech limited only by virtual safeguards like a geofence, and Level 5 is the total hands-off, go literally anywhere at the push of a button experience where the vehicle might not even have physical controls.
Sitting down with WardsAuto at the Consumer Electronics Show in Las Vegas last week, VW Autonomy’s Alex Hitzinger said Level 4 might be the realistic limit for what automakers can build. He wasn’t shy in pointing out the relative difficulty of trying for full Level 5 autonomy.
I am skeptical that generally available cars will make the jump from third-level autonomy to fourth-level within this decade, and I have no expectation that any car will get to fifth-level autonomy in my lifetime. I simply don’t think it’s reasonable that a vehicle will be able to drive itself anywhere on Earth that can be traversed by cars today under any weather conditions — without the intervention of a human driver.
One of the reasons auto manufacturers have given for their interest in autonomous vehicles is their ability to reduce collisions. If that’s the case, why not set a goal of making entirely reliable collision avoidance systems? I know that’s less cool than a car that can drive itself, but it’s much more practical.¹
I am also prepared to eat humble pie.
Better still would be greater investment in public transit which, in some circumstances, is fully automated. I know this is even duller than collision avoidance systems, but it’s also better for cities. ↩︎
One of the bigger mysteries associated with the hack of Jeff Bezos’ iPhone X is how, exactly, it was breached. A report yesterday by Sheera Frenkel in the New York Times appeared to shed some light on that:
On the afternoon of May 1, 2018, Jeff Bezos received a message on WhatsApp from an account belonging to Saudi Arabia’s crown prince, Mohammed bin Salman.
The two men had previously communicated using WhatsApp, but Bezos, Amazon’s chief executive, had not expected a message that day — let alone one with a video of Saudi and Swedish flags with Arabic text.
The video, a file of more than 4.4 megabytes, was more than it appeared. Hidden in 14 bytes of that file was a separate bit of code that most likely implanted malware, malicious software, that gave attackers access to Bezos’ entire phone, including his photos and private communications.
The detail attributing the breach to fourteen bytes of malware was entirely new information, and not reported elsewhere. But I’m linking here to the Chicago Tribune’s syndicated copy because the version currently on the Times’ website no longer makes the same specific claim:
The video, a file of more than 4.4 megabytes, was more than it appeared, according to a forensic analysis that Mr. Bezos commissioned and paid for to discover who had hacked his iPhone X. Hidden in that file was a separate bit of code that most likely implanted malware that gave attackers access to Mr. Bezos’ entire phone, including his photos and private communications.
Despite this material change, there is no correction notice at the bottom of the article. The forensic report (PDF) acknowledges that “the file containing the video is slightly larger than the video itself”, but does not cite a specific figure. It does, however, state that the video file is 4.22 MB, not “more than 4.4” as stated in the Times report.
I know this seems ridiculously pedantic, but I want to know how this discrepancy can be explained. The UN press release also does not contain any more specific details. Is this just a weird instance of miscommunications that haven’t been fact-checked? Or is this perhaps news that hasn’t been fully confirmed? For example, is there another forensic report that hasn’t yet been made public?
This matters, I think, because it could suggest a difference between whether the H.264 MP4 video decoder on iOS has a vulnerability, or if it’s something specific to the WhatsApp container. If the former is true, that means that this isn’t just something that WhatsApp users need to watch out for.
It used to be the case that vulnerabilities like these were kept extremely close to the vest and only used on specific high-value targets. But, ever since we found out that China was attacking Uyghur iPhone users broadly, I’m no longer as convinced that not being a prominent individual is enough to avoid being a target.
Update: Ben Somers points out that 4.22 MiB roughly converts to 4.4 MB, which may be the source of that part of the discrepancy. The fourteen bytes are still unaccounted for.
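Somers’ conversion checks out. A quick sketch, assuming the forensic report measured in binary mebibytes (base 1024) while the Times rounded in decimal megabytes (base 1000):

```python
# Convert mebibytes (MiB, base 1024) to megabytes (MB, base 1000).
def mib_to_mb(mib: float) -> float:
    return mib * 1024 ** 2 / 1000 ** 2

# The report's 4.22 MiB works out to about 4.42 MB, which the Times
# could plausibly have described as "more than 4.4 megabytes".
print(round(mib_to_mb(4.22), 2))  # → 4.42
```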
Also, it’s worth mentioning that one reason that I wanted to draw attention to this story is because the Times often fails to post correction notices for online stories that have been updated after publication. I think this practice is ridiculous.
Update: A paragraph later in the story references the fourteen-byte mystery, now with more context:
The May 2018 message that contained the innocuous-seeming video file, with a tiny 14-byte chunk of malicious code, came out of the blue, according to the report and additional notes obtained by The New York Times. In the 24 hours after it was sent, Mr. Bezos’ iPhone began sending large amounts of data, which increased approximately 29,000 percent over his normal data usage.
This wasn’t in the story last time I checked. There still isn’t a corrections or updates notice appended to the Times article. Thanks to Lawrence Velázquez for bringing it to my attention.
Ryan Mac, Caroline Haskins, and Logan McDonald, Buzzfeed News:
Originally known as Smartcheckr, Clearview was the result of an unlikely partnership between Ton-That, a small-time hacker turned serial app developer, and Richard Schwartz, a former adviser to then–New York mayor Rudy Giuliani. Ton-That told the Times that they met at a 2016 event at the Manhattan Institute, a conservative think tank, after which they decided to build a facial recognition company.
While Ton-That has erased much of his online persona from that time period, old web accounts and posts uncovered by BuzzFeed News show that the 31-year-old developer was interested in far-right politics. In a partial archive of his Twitter account from early 2017, Ton-That wondered why all big US cities were liberal, while retweeting a mix of Breitbart writers, venture capitalists, and right-wing personalities.
It is revealing that the people behind tools that are borderline unethical and threaten our privacy expectations often also happen to be aggressively protective of their own privacy.
In the second part of the presentation, Scott Forstall (then Apple’s software chief) invoked the App Store, which had already become wildly successful after less than two years in operation. It was the App Store’s Gold Rush era, and Forstall’s message was clear: There’s a new Gold Rush coming, and it’s in iPad apps. And if developers wanted their apps to be prominently featured on the App Store for iPad, Forstall pointed out, those apps would need to be updated to support it. iPhone-only apps would run, but they’d do so in a diminished compatibility mode and be relegated to the back pages of the App Store.
The iPad was introduced in January, but it didn’t ship until April — and Apple released tools for developers to build iPad apps the very day the product was announced. The message was clear: Build iPad apps and a flood of users will come your way. You’ve got three months.
Forstall was also quick to point out that good iPad apps were more than just blown-up iPhone versions. Several compliant developers were brought out to demo how they’d already begun work on reconceptualizing their iPhone apps for a larger screen, including MLB At Bat and the New York Times.
Truly great apps designed for the iPad do make it a uniquely worthwhile experience. It’s a shame, then, that some developers in the past few years — Apple included — have been putting less effort into designing apps specifically for the iPad. It remains somewhat dispiriting when the biggest difference between the iPhone and iPad versions of an app is an always-visible sidebar occupying the left third of the screen. Even system features like Siri are half-assed ports on the iPad.
But that could all change soon. One of the most interesting things to happen in 2019 was the bifurcation of iOS into “iOS” for the iPhone and iPod Touch, and “iPadOS” for the iPad. So far, this change has largely been in name only, but I am hopeful that this signals a future in which iPad apps and features are more specific to the platform. Apple has also been steadily updating the iPad at both the high and low ends of its lineup, which is equally good news for the platform.
Forensic experts hired by Jeff Bezos have concluded with “medium to high confidence” that a WhatsApp account used by Saudi Crown Prince Mohammed bin Salman was directly involved in a 2018 hack of the Amazon founder’s phone.
A report on the hack, which has been seen by the Financial Times, says Mr Bezos’ phone started surreptitiously sharing vast amounts of data immediately after receiving an apparently innocuous, but encrypted video file from the prince’s WhatsApp account in May 2018.
That file shows an image of the Saudi Arabian and Swedish flags and arrived with an encrypted downloader. Because the downloader was encrypted, this delayed or further prevented “study of the code delivered along with the video.”
Investigators determined the video or downloader were suspicious only because Bezos’ phone subsequently began transmitting large amounts of data. “[W]ithin hours of the encrypted downloader being received, a massive and unauthorized exfiltration of data from Bezos’ phone began, continuing and escalating for months thereafter,” the report states.
Investigators say in the report that their efforts were hampered somewhat by WhatsApp’s encryption, but they have suggested that a followup step would be to jailbreak Bezos’ iPhone to examine its file system.
Also of note: Bezos creates an encrypted backup of his iPhone using iTunes; he has iCloud Backups disabled. But investigators were not able to extract the encrypted backup. It’s unclear whether Bezos forgot his password or was unable to supply it for another reason.
[…] To encrypt a backup in the Finder or iTunes for the first time, turn on the password-protected Encrypt Backup option. Backups for your device will automatically be encrypted from then on. You can also make a backup in iCloud, which automatically encrypts your information every time.
There is nothing technically incorrect about this explanation. iCloud backups are, indeed, encrypted every time; local backups have encryption as an option. But whether a backup is “encrypted” is not enough information to decide which method is more secure. Apple holds the keys to iCloud backups, but only users know their local backup key.
It’s worth noting that Apple has been evaluating whether to offer encrypted iCloud backups since at least February of 2016, and possibly earlier. Today’s Reuters report suggests that Apple dropped that plan at some point in early 2018, though an October 2018 interview with Tim Cook in Spiegel indicated that the company was still working on it. I’m not sure what the correct timeline is, but I hope that renewed public pressure can encourage the company to make it a priority. It is imperative that users know exactly how their data is being used, and there is no reason that enabling backups should compromise their security and privacy.
This reminds me of the Facebook 2FA fiasco, a more egregious case of something positive (2FA) being needlessly tainted (abuse of SMS numbers for non-2FA purposes).
This is exactly right. One of the effects of Apple’s confusing language around backups and encryption is that some people may not trust either. It’s not a great day when Apple is getting unfavourably compared to Facebook on privacy and security matters.
Right now an ordinary person still can’t, for free, take a random photo of a stranger and find the name for him or her. But with Yandex they can. Yandex has been around a long time and is one of the few companies in the world that is competitive with Google. Their index is heavily biased to Eastern European data, but they have enough global data to find me and Andrew Yang.
If you use Google Photos or Facebook you’ve probably encountered their facial recognition. It’s magic, the matching works great. It’s also very limited. Facebook seems to only show you names for faces that people you have some sort of Facebook connection to. Google Photos similarly doesn’t volunteer random names. They could do more; Facebook could match a face to any Facebook user, for instance. But both services seem to have made a deliberate decision not to be a general purpose facial recognition service to identify strangers.
At the time that I linked to the Bellingcat report, I wondered why Google’s reverse image search, in particular, was so bad in comparison. In tests, it even missed imagery from Google Street View, despite Google regularly promoting its abilities in machine learning, image identification, and so on. In what I can only explain as a massive and regrettable oversight on my part, it is now clear to me that Google’s image search is so bad because Google designed it that way. Otherwise, Google would have launched something like Yandex or Clearview AI, and that would be dangerous.
Google’s restraint is admirable. What’s deeply worrying is that it is optional — that Google could, at any time, change its mind. There are few regulations in the United States that would prevent Google or any other company from launching its own invasive and creepy facial recognition system. Recall that the only violation that could be ascribed to Clearview’s behaviour — other than an extraordinary violation of simple ethics — is that the company scraped social media sites’ images without permission. It’s a pretty stupid idea to solely rely upon copyright law as a means of reining in facial recognition.
Mike Masnick of Techdirt responds to the Reuters report from earlier today claiming that Apple dropped a plan to implement end-to-end encryption on iCloud backups at the behest of the FBI:
Of course, the other way one might look at this decision is that if Apple had gone forward with fully encrypting backups, then the DOJ, FBI and other law enforcement would have gone even more ballistic in demanding a regulatory approach that blocks pretty much all real encryption. If you buy that argument, then failing to encrypt backups is a bit of appeasement. Of course, with Barr’s recent attacks on device encryption, it seems reasonable to argue that this “compromise” isn’t enough (and, frankly, probably would never be enough) for authoritarian law enforcement folks like Barr, and thus, it’s silly for Apple to even bother to try to appease them in such a manner.
Indeed, all of this seems like an argument for why Apple should actually cooperate less with law enforcement, rather than more, as the administration keeps asking. Because even when Apple tries to work with law enforcement, it gets attacked as if it has done nothing. It seems like the only reasonable move at this point is to argue that the DOJ is a hostile actor, and Apple should act accordingly.
Even though Apple attempts to explain how iCloud backups work, I don’t think it does a good job, and that is one reason the Reuters report today had such a profound impact: a lot of people were surprised that their iCloud backups are less private than their phones. Yet, as bad as this is for Apple, it is an equally poor look for the Department of Justice, which has publicly whined about its inability to extract device data while privately accepting Apple’s cooperation.
More than two years ago, Apple told the FBI that it planned to offer users end-to-end encryption when storing their phone data on iCloud, according to one current and three former FBI officials and one current and one former Apple employee.
Under that plan, primarily designed to thwart hackers, Apple would no longer have a key to unlock the encrypted data, meaning it would not be able to turn material over to authorities in a readable form even under court order.
In private talks with Apple soon after, representatives of the FBI’s cyber crime agents and its operational technology division objected to the plan, arguing it would deny them the most effective means for gaining evidence against iPhone-using suspects, the government sources said.
When Apple spoke privately to the FBI about its work on phone security the following year, the end-to-end encryption plan had been dropped, according to the six sources. Reuters could not determine why exactly Apple dropped the plan.
Apple describes both local iPhone storage and iCloud backups as “encrypted”, but that word means different things in each context. An iPhone’s files cannot be decrypted unless the passcode is known, which typically means that only the device’s user has full access. But an iCloud backup’s key is held by Apple, so the company has just as much access as the user does. Importantly, it also means that there is a way of recovering the data should the user’s key fail for some reason. It is possible that part of the reason Apple scrapped a plan for end-to-end encryption of iCloud backups is that it would frustrate customers who could not recover their backups in some circumstances.
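The difference in key custody can be sketched in a few lines of Python. This is a toy illustration, not Apple’s actual design (which involves hardware-bound keys and the Secure Enclave); it only shows why a passcode-derived key locks out everyone without the passcode, while an escrowed key enables both account recovery and compelled disclosure:

```python
import hashlib
import os

def device_key(passcode: str, salt: bytes) -> bytes:
    # On-device model: the key is derived from the user's passcode,
    # so nobody who lacks the passcode can reconstruct it.
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), salt, 100_000)

class BackupService:
    # Escrow model: the service generates and holds the key itself, so it
    # can recover a user's data after a reset -- or hand it over under a
    # court order, whether or not the user cooperates.
    def __init__(self):
        self.escrowed_key = os.urandom(32)

    def recover_key(self) -> bytes:
        return self.escrowed_key  # works even if the user forgets everything

salt = os.urandom(16)
assert device_key("1234", salt) == device_key("1234", salt)  # passcode unlocks
assert device_key("1234", salt) != device_key("0000", salt)  # wrong passcode fails
```

The two `assert` lines capture the trade-off: the derived key is recoverable only with the passcode, while the escrowed key never needed one in the first place.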
However, it is more troubling if such a plan never came to fruition because of government pressure. I don’t think it should be the goal of Apple or any company to deliberately make the work of law enforcement impossible, but decisions like these should be made in the best interests of users. And I would expect many users believe that storing their device’s backup in iCloud should not compromise their security and privacy. At the very least, encrypting backups using a secret known only to the user should be an option for iOS users; after all, it is apparently an option on Android. Apple also ought to make it plainly obvious who holds the key to encrypted data at every level to help reduce moronic takes like that from David Carroll:
Apple’s new position on protecting iCloud data from the United States government is now remarkably similar to its position on protecting iCloud data stored in the People’s Republic of China.
This simply isn’t true on any level. For a start, this is not a “new position”, and it does not solely apply to the United States government. Apple makes public what it can and cannot supply to law enforcement, and how it responds to those requests (PDF):
All iCloud content data stored by Apple is encrypted at the location of the server. When third-party vendors are used to store data, Apple never gives them the keys. Apple retains the encryption keys in its U.S. data centers. iCloud content, as it exists in the subscriber’s account, may be provided in response to a search warrant issued upon a showing of probable cause.
For law enforcement agencies outside the U.S. (PDF), the last sentence is replaced with this paragraph:
All requests from government and law enforcement agencies outside of the United States for content, with the exception of emergency circumstances (defined above in Emergency Requests), must comply with applicable laws, including the United States Electronic Communications Privacy Act (ECPA). A request under a Mutual Legal Assistance Treaty or Agreement with the United States is in compliance with ECPA. Apple Inc. will provide subscriber content, as it exists in the subscriber’s account, only in response to such legally valid process.
The worry in China is not necessarily that the government can subpoena for iCloud data; the worry is that user data is stored on servers belonging to a company run, in part, by a corrupt single-party regime. The government of the U.S. and its various criminal justice and national security branches are worrying for myriad reasons, but they cannot accurately be compared to the situation in China.
His tiny company, Clearview AI, devised a groundbreaking facial recognition app. You take a picture of a person, upload it and get to see public photos of that person, along with links to where those photos appeared. The system — whose backbone is a database of more than three billion images that Clearview claims to have scraped from Facebook, YouTube, Venmo and millions of other websites — goes far beyond anything ever constructed by the United States government or Silicon Valley giants.
Federal and state law enforcement officers said that while they had only limited knowledge of how Clearview works and who is behind it, they had used its app to help solve shoplifting, identity theft, credit card fraud, murder and child sexual exploitation cases.
Until now, technology that readily identifies everyone based on his or her face has been taboo because of its radical erosion of privacy. Tech companies capable of releasing such a tool have refrained from doing so; in 2011, Google’s chairman at the time said it was the one technology the company had held back because it could be used “in a very bad way.” Some large cities, including San Francisco, have barred police from using facial recognition technology.
But without public scrutiny, more than 600 law enforcement agencies have started using Clearview in the past year, according to the company, which declined to provide a list. The computer code underlying its app, analyzed by The New York Times, includes programming language to pair it with augmented-reality glasses; users would potentially be able to identify every person they saw. The tool could identify activists at a protest or an attractive stranger on the subway, revealing not just their names but where they lived, what they did and whom they knew.
And it’s not just law enforcement: Clearview has also licensed the app to at least a handful of companies for security purposes.
This investigation was published on Saturday. I’ve read it a few times and it has profoundly disturbed me on every pass, but I haven’t been surprised by it. I’m not cynical, but it doesn’t surprise me that an entirely unregulated industry motivated to push privacy ethics to their revenue-generating limits would move in this direction.
Clearview’s technology makes my skin crawl; the best you can say about the company is that its limited access prevents the most egregious privacy violations. When something like this is more widely available, it will be dangerous for those who already face greater threats to their safety and privacy — women, in particular, but also those who are marginalized for their race, skin colour, gender, and sexual orientation. Nothing will change on this front if we don’t set legal expectations that limit how technologies like this may be used.
Susan Heavey and Andrea Shalal, reporting for Reuters in an article with the headline “Mnuchin urges Apple, other tech companies to work with law enforcement”:
Apple Inc and other technology companies should cooperate with U.S. investigators, Treasury Secretary Steven Mnuchin said on Wednesday as law enforcement officials continued probing last month’s fatal shooting at a U.S. naval base in Florida.
Mnuchin later told reporters at the White House that he had not discussed the issue with Apple and did not know the specifics at hand. “I know Apple has cooperated in the past on law enforcement issues, and I expect they would continue … to cooperate.”
The Reuters article notes that Apple is, in fact, cooperating with investigators by turning over everything they have on the iPhones in question, counter to Mnuchin’s claim. But the headline on this article is misleading.
When framed that way, it’s obviously dumb. But anyone reading Reuters’ coverage of the issue won’t get that. They’ll think that Apple is somehow taking some sort of stand against US law enforcement. This is what Trump, Barr, and apparently Mnuchin, would like people to think, but it’s not true, and it’s fundamentally bad journalism for Reuters to frame it that way.
To be clear, it is likely not the reporters’ fault that the story was framed with this headline. But it’s unnecessarily carrying water for a Department of Justice that is exploiting a terrorist attack and public confusion over this issue to neuter encryption.
Jamie Leach of Google, announcing the new search results page design last year:
The name of the website and its icon appear at the top of the results card to help anchor each result, so you can more easily scan the page of results and decide what to explore next. Site owners can learn more about how to choose their preferred icon for organic listings here.
When you search for a product or service and we have a useful ad to show, you’ll see a bolded ad label at the top of the card alongside the web address so you can quickly identify where the information is coming from.
Last year, our search results on mobile gained a new look. That’s now rolling out to desktop results this week, presenting site domain names and brand icons prominently, along with a bolded “Ad” label for ads.
All you get, as far as identifying where a search result comes from, is a tiny 16-by-16-point favicon and small grey text with the URL. If it’s an ad, the favicon is replaced with a little “Ad” label, but there are no other advertising identifiers. Just a few years ago, Google used to indicate ads much more prominently. If the ads are truly as “useful” as Google claims, surely it doesn’t need to trick users into clicking on them instead of regular results.
Update: Google says that it will experiment with different ad and search result appearances.
Comcast and NBCUniversal announced today that Peacock will be available in three tiers: a free option (Peacock Free) that comes with limited programming; an ad-supported complete version that is free to existing Comcast customers and $5-a-month for everyone else; and a $10-a-month ad-free subscription option that is open to anyone. That one is known as Peacock Premium.
Peacock Free consists of 7,500 hours of programming, including next-day access to current seasons of first-year NBC shows, Universal movies, and curated content such as SNL, Vault, and Family Movie Night. The two premium tiers come in at $4.99 per month with ads and $9.99 per month with no ads. Both of these tiers will include live sports and early access to late-night shows. Peacock Premium will include non-televised Premier League soccer games beginning in August.
Between weak antitrust enforcement, mergers designed to create vertical integration, the demise of net neutrality, and exclusive distribution contracts, it’s like a slow return to the old Hollywood studio system at even greater scale and scope.
A recent cold snap seems to have increased my propensity to experience bugs. I’m usually a walking commuter to my day job, but I’ve happily accepted a lift from my partner all week long as temperatures dropped below the –30° C mark every morning. As I got into the car this morning, I noticed a strange notification on my lock screen:
This appears to be a Siri suggestion — a nudge by the system to show a hopefully-useful shortcut to a common task. As Apple puts it:
As Siri learns your routines, you get suggestions for just what you need, at just the right time. For example, if you frequently order coffee mid morning, Siri may suggest your order near the time you normally place it.
Since I go to work at a similar time every day, it tells me how long my commute will take and gives me the option to get directions. Nice, right?
Except something is plainly not right: it’s going to take me over a day to get to work? Here’s the route it thinks I should take:
I found this hilarious — obviously — but also fascinating. How did it get this so wrong?
My assumption was that my phone knew that I commuted to work daily, so it figured out the address of my office. And then, somehow, it got confused between the location it knows and the transcribed address it has stored, and then associated that with an address in or near Rochester, New York. But that doesn’t seem right.
Then, I thought that perhaps the details in my contact card were wrong. My work address is stored in there, and Siri mines that card for information. But there’s a full address in that card including country and postal code, so I’m not sure it could get it so wrong.
I think the third option is most likely: I have my work hours as a calendar appointment every day, and the address only includes the unit and street name, not my city, country, or postal code. I guess Apple Maps’ search engine must have searched globally for that address and ended up in upstate New York.
But why? Why would it think that an appointment in my calendar is likely to be anywhere other than near where I live, particularly when it’s recurring? Why doesn’t Apple Maps’ search engine or Siri — I don’t know which is responsible in this circumstance — prioritize nearby locations? Why doesn’t it prioritize frequent locations?
If you look closely, you’ll also notice another discrepancy: the notification says that it’s going to give me directions to “12th St”, but the directions in Maps are to “12 Ave SE”. Why would this discrepancy exist?
It’s not just the bug — or, more likely, the cascading series of bugs — that fascinates me, nor the fact that it’s so wrong. It’s this era of mystery box machine learning, where sometimes the results look like magic and, at other times, they are incomprehensible. Every time some lengthy IF-ELSE chain helpfully suggests driving directions across the continent, or thinks I only ever message myself, my confidence in my phone’s ability to do basic tasks is immediately erased. How can I trust it when it makes such blatant mistakes, especially when there’s no way to tell it that it’s wrong?
David Sparks, commenting on the FBI’s latest challenge to encryption:
I sympathize with law enforcement for wanting access to this data. I worked briefly in the criminal justice system and I know how maddening it would be to know you have a magic envelope with evidence in it and no way to open that envelope. I just think the sacrifice involved with creating a back door is too much to ask.
I do think this discussion isn’t over though. Apple sells into a lot of countries. Any one of them could require they install a back door as a condition of access to the market. Apple’s principles are on a collision course with a massive loss of income. Is it just a question of time before governmental regulation and market pressures make this period of time, where all citizens have relatively secured data and communications, only a temporary phase? I sure hope not.
Apple has proved itself somewhat amenable to compromise. It stores iCloud data for Chinese and Russian users on servers located in those countries. In the case of China, the data centre provider is a state-run company. Apple maintains that it holds the encryption keys, that it won’t disclose user data without legal authorization, and that the governments of both countries have no way of getting users’ data without going through legal proceedings. But, still, the legal systems of both countries are notoriously favourable to authoritarian policies, so it’s hard to believe that Apple’s control is anything more than theoretical.
Notably, both China and Russia have extreme restrictions on end-to-end encryption. In Russia, telecom and software companies are required to retain messages and encryption data for months; but, as Telegram offers messaging that’s encrypted end-to-end, they have no encryption keys to retain, resulting in its ban in the country. A similar law recently came into effect in China. Yet, iMessage remains available in both countries and, presumably, Apple has made no concessions on its end-to-end encryption. In 2018, meanwhile, the Australian government passed a law requiring tech companies to assist in decrypting users’ data. Apple has continued to sell products and services encrypted by default in all three countries.
Sparks is right: there will come a time that Apple will need to choose whether it will stand behind strong privacy and security, or if the monetary cost of doing so is simply too high. And Apple is, ultimately, a business legally required to do what is best for its shareholders.
For most people, watching the occasional stupid video isn’t that big of a problem because people tell Google who they are and the algorithm shows them what they want to watch. They do this by letting Google track their browsers or by logging into Google services. But I browse the web with tracking protection turned on and never log into Google. The algorithm remembers nothing about me, and as a result I am always shown stupid videos.
In 2020 I am watching less stupid on YouTube by skipping the algorithm. Instead of letting YouTube decide which videos it wants to show me, I am watching only the videos I want to see by subscribing to my favorite content creators via RSS.
This is a great trick. Brand explains it well, though it’s a little clunky for anyone not familiar with viewing the source of a webpage.
YouTube isn’t the only website that buries its RSS feeds in this manner. I don’t know that it’s deliberate — in the sense that they’re trying to discourage the use of RSS. I think it might be a result of product teams convincing themselves that RSS is something used only by the technically-proficient, so it’s put in a place where that group can find it. The trouble is that only the technically-proficient will end up using it, so it’s cyclical.
Update: A few people have written me to point out that you can try adding the standard channel URL to your feed reader, as many readers will automatically discover the RSS feed.
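For anyone who would rather skip viewing source entirely, the feed URL can also be constructed directly from the channel ID, the string that follows /channel/ in a channel’s address. A quick sketch in Python (the channel ID shown is a made-up placeholder):

```python
def youtube_feed_url(channel_id: str) -> str:
    # YouTube serves a per-channel Atom feed from this endpoint, even though
    # channel pages no longer link to it anywhere visible.
    return f"https://www.youtube.com/feeds/videos.xml?channel_id={channel_id}"

# Example with a hypothetical channel ID; paste the result into a feed reader.
print(youtube_feed_url("UCxxxxxxxxxxxxxxxxxxxxxx"))
```

Vanity URLs (youtube.com/user/…) have a parallel endpoint that takes `user=` instead of `channel_id=`.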
Lorenzo Franceschi-Bicchierai, reporting for Vice in 2018:
The FBI argued that it had no technical way to unlock the phone or hack into it without Apple’s help. Apple argued that helping the FBI would’ve put all iPhone users in danger because it would’ve required the company to weaken the security of all iPhones. The battle ended with a whimper when an unknown “third party” gave the FBI a way to hack in and the FBI abandoned its legal request.
As it turned out, the FBI’s own hackers didn’t start working with vendors to find a way to hack into Farook’s iPhone until “the eve” of the FBI’s initial court filing demanding Apple’s assistance on February 16, 2016. Moreover, two different teams within the FBI’s Operational Technology Division (OTD), a department tasked with giving technological assistance to investigations, didn’t communicate with each other to find a solution until late in the investigation, according to the OIG report.
The tech team initially helping with the case was the Cryptologic and Electronics Analysis Unit (CEAU). It was only after a meeting on February 11 that another hacking team within the FBI, the Remote Operations Unit or ROU, started looking into it and started contacting contractors and vendors asking for help.
It beggars belief that today’s FBI is struggling to breach an iPhone 7 Plus and an iPhone 5, the latter being a model of smartphone that is eight years old and is stuck on iOS 10. That’s especially suspicious given that investigators in another case were recently able to unlock an iPhone 11 Pro. What is it about the much older phones in this case that are proving so iron-clad against the United States’ elite digital forensics teams? Are they even trying? Or does the Department of Justice just want to fight?
In 2016, ABC News’ David Muir interviewed Tim Cook about why Apple was fighting the FBI’s order to create a modified version of iOS that would allow the forced unlocking of the iPhone used by one of the San Bernardino shooting perpetrators. Memorably, he called the development of any backdoor the “software equivalent of cancer”. He also described what the FBI was asking for: a version of iOS, but without the preference to erase data after ten attempts, and with the ability for the FBI to try an unlimited number of passcodes as fast as a computer could enter them. Now, they seem to be asking for something similar; the FBI, once again, wants Apple to do something to help decrypt iPhones for law enforcement.
At no point — then or now — has Cook or anyone at Apple publicly confirmed how such a backdoor may be installed, or if it’s even possible. Presumably, it would use the iOS update mechanism, but how could permission be granted if the passcode to the iPhone isn’t known? After all, you must enter your iPhone’s passcode to install a software update. When you plug an iPhone into a computer, you must enter the passcode on the phone to enable a trusted data connection. But I thought there might be a way around all of this with one of iOS’ recovery modes.
Coincidentally, I have an almost perfect environment in which to test this. I recently had to install a clean copy of MacOS Catalina on my MacBook Air¹ and had not yet connected my iPhone to that laptop, so I had something which could simulate a stranger’s computer to perform an update. And, happily, Apple released a new seed of iOS 13.3.1 today, so I had something to update to.
In the interest of testing this, I risked spending all evening restoring my iPhone’s data and followed Apple’s directions to enter recovery mode.² I was able to update my iPhone to a newer version of iOS from a local .ipsw package without once entering my passcode.
I downloaded the software update package from Apple’s developer website. Presumably, this means that any software update signed by Apple could be used instead.
I connected my iPhone to my MacBook Air and forced a reboot, which cleared the temporary Face ID passcode authorization from the phone’s memory. I restarted it again, this time into recovery mode.
MacOS prompted me to update or restore the phone. I picked “Cancel” to close the dialog box, then option-clicked on the “Update” button in Finder so I could select the software update package instead of using one from Apple’s server. It began and completed installation, then prompted for my passcode twice before it switched to an “Attempting Data Recovery” screen. After this process completed, my iPhone booted normally.
To be clear, my iPhone still prompted for its passcode when the update had finished its installation process. This did not magically unlock my iPhone. It also doesn’t prove that passcode preferences could be changed without first entering the existing valid passcode.
But it did prove the existence of one channel where an iPhone could be forced to update to a compromised version of iOS. One that would be catastrophic in its implications for iPhones today, into the future, and for encrypted data in its entirety. It is possible; it is terrible.
Update: I’ve had a few people ask questions about what this proves, or express doubt that this would enable an iPhone to be unlocked. To be perfectly clear, a compromised software package with the workarounds the FBI has asked for would have to be signed with Apple’s key for it to be installed. The passcode would still have to be cracked for user data to be extracted from the phone. But if Apple were legally compelled to comply with the FBI’s request in San Bernardino, this proves that a software update package containing the workarounds can be installed on an iPhone without having to enter a passcode.
Long story short, my MacBook Air contains a battery and an SSD from two different third-party vendors. The battery is one year old and comes from an organization well-known for their advocacy of right-to-repair legislation, and its capacity has already been reduced by over a third. I’ve been trying to get a replacement, even though it’s just out of warranty, and had to perform a series of tests to verify the age and wear on the battery. While trying to do these tests, the third-party SSD — from a different company that’s similarly well-known for their stance on repairing electronics — also started to fail. I replaced the third-party SSD with the original one that came with the MacBook Air, wiped the drive, and did a clean install of MacOS Catalina on it.
I have two takeaways. First, I am receiving a free replacement battery, even though the one-year warranty has lapsed. I haven’t been so lucky with the SSD. I am admittedly a month and a bit outside of the manufacturer’s three-year warranty, but it is fairly disappointing that I have all sorts of SSDs and spinning rust drives that have outlived that drive.
The second takeaway is that, even though I share some principles and sympathy with right-to-repair advocates, I would be much more convinced about its merits if they shipped higher quality products that lasted longer. It’s entirely anecdotal and probably bad luck, in part, if not in full. But this experience underscores that — in addition to environmental and ethical reasons for device repair rather than replacement — the biggest advocates are businesses that sell parts. ↩︎
On a Mac with macOS Catalina 10.15, open Finder. On a Mac with macOS Mojave 10.14 or earlier, or on a PC, open iTunes. If iTunes is already open, close it.
There’s no way to read this that makes sense. “If you’re using Mojave, open iTunes. If iTunes is now open, close it.” is the first and most literal way to read this, but is clearly wrong. “If you’re using Mojave, open iTunes. If you had iTunes open first, close it, then reopen it.” is the second way to read this, but it also seems silly. ↩︎
Apple’s iOS 13 update, released in September, includes regular reminders when apps are sucking up a user’s location data. The pop-up gives a user a chance to choose from the following options: allowing data collection at all times, or only when the app is open — or only one time. Four months in, ad tech sources are reporting the result that some observers had predicted: There’s less location data coming from apps.
Right now opt-in rates to share data with apps when they’re not in use are often below 50%, said Benoit Grouchko, who runs the ad tech business Teemo that creates software for apps to collect location data. Three years ago those opt-in rates were closer to 100%, he said. Higher opt-in rates prevailed when people weren’t aware that they even had a choice. Once installed on a phone, many apps would automatically start sharing a person’s location data.
Apple did not dither around with some balance of allowing advertisers to keep collecting location data at will while nominally protecting user privacy. Apple didn’t even block background location access. It just changed iOS so that users must deliberately allow background access, and the system now reminds users when apps actually use that access. That’s all. Yet, these simple changes have made it difficult for companies you’ve never heard of to monetize information you didn’t know you were sharing.
Apple, not coincidentally and unlike some of its competitors, is not a company making its money off personalized advertising.
After initial dialogue with the web community, we are confident that with continued iteration and feedback, privacy-preserving and open-standard mechanisms like the Privacy Sandbox can sustain a healthy, ad-supported web in a way that will render third-party cookies obsolete. Once these approaches have addressed the needs of users, publishers, and advertisers, and we have developed the tools to mitigate workarounds, we plan to phase out support for third-party cookies in Chrome. Our intention is to do this within two years. But we cannot get there alone, and that’s why we need the ecosystem to engage on these proposals. We plan to start the first origin trials by the end of this year, starting with conversion measurement and following with personalization.
Google’s Privacy Sandbox plans still require the cooperation and support of the web’s standards bodies, which is why the company is pretending to be hindered from making privacy-supportive changes unilaterally. It probably is, ultimately, a privacy-friendly move, albeit one undercut by suspicions that it will further entrench Google’s business.
That wouldn’t be true if the world’s most popular browser were not owned by a personalized advertising company. C’est la vie.
Twice now, the U.S. Department of Justice has pushed Apple to help decrypt iPhones involved in high-profile crimes. Twice, Apple has pushed back. And, twice, the popular press has framed these cases in terms that do not help their general-audience readers understand why Apple is refusing demands to cooperate; instead, they use language that implicitly helps those who believe that our rights should be compromised to a lowest common denominator.
The first time the Department of Justice began this campaign was in the aftermath of the December 2015 mass shooting in San Bernardino, California. Two individuals murdered fourteen people in a workplace terrorist attack motivated by extremist views. The perpetrators were killed. One had an iPhone and, while Apple was able to provide investigators with a copy of the data stored in iCloud, they were unable to assist with the phone’s unknown passcode. The Department of Justice attempted to use the All Writs Act to compel the company to disable any passcode-guessing countermeasures that might be enabled, and Apple refused on the grounds that it would universally undermine its products’ security and set a precedent against encryption — more on that later. The two parties fought and nearly ended up in court before the FBI enlisted a third-party vendor to crack the passcode. Ultimately, nothing of investigative value was on the phone.
It has been over four years since that case began, and officials did not, in that time, again attempt to compel Apple into weakening the security of its products. That is, despite nearly seven thousand devices being apparently inaccessible in the first eleven months of 2017 alone, the Department of Justice made no further requests for unlocking assistance from Apple.
Until recently, that is, when a case of horrible déjà vu struck. In December 2019, one person motivated by extremist views murdered three people in a terrorist attack at his workplace. The perpetrator had two iPhones, one of which he shot before being killed by police. Apple has provided investigators with the data it was able to access, but is not assisting with the decryption of the iPhones in question.
Which is how we arrive at today’s announcement from U.S. Attorney General William Barr that he wants more “substantive assistance” from Apple in decrypting the two phones used by the perpetrator in this most recent attack — and, more specifically, Katie Benner’s report for the New York Times:
Mr. Barr’s appeal was an escalation of an ongoing fight between the Justice Department and Apple pitting personal privacy against public safety.
This is like three paragraphs in and it is already setting up the idea that personal privacy and public safety are two opposing ends of a gradient. That’s simply not true. A society that has less personal privacy does not inherently have better public safety; Russia and Saudi Arabia are countries with respectable HDI scores, brutal censorship and surveillance, and higher murder rates than Australia, Denmark, France, and the United Kingdom.
More worrisome, however, is how easily the issue of encryption is minimized as being merely about personal privacy, when it is far more versatile, powerful, and useful than that. The widespread availability of data encryption is one reason many companies today are okay with employees working remotely, since company secrets can’t be obtained by those not authorized. Encryption helps journalists get better information from sources who must remain anonymous. Encryption is why I haven’t had a printed bank statement in ten years, and how you know you’re giving your health care information to your insurance provider. Encryption helps marginalized people bypass unjust and oppressive laws where they may travel or live. It makes commerce work better. It prevents messages from being read by an abusive ex-partner. It gives you confidence that you can store your work and personal life on a single device.
The U.S. Department of Justice is trying to compel Apple to weaken the encryption of every iOS device. That will set a precedent that those who implement encryption technologies ought to loosen them upon request. And that means that everything we gain from it is forever undermined.
Public safety would not be improved if encryption were weakened — it would be gutted.
Knowing all that helps one see why a summary like this is wildly inaccurate:
Apple has given investigators materials from the iCloud account of the gunman, Second Lt. Mohammed Saeed Alshamrani, a member of the Saudi air force training with the American military, who killed three sailors and wounded eight others on Dec. 6. But the company has refused to help the F.B.I. open the phones themselves, which would undermine its claims that its phones are secure.
Apple is not declining to decrypt these iPhones for marketing reasons. If anything, the public reaction to their stance will be highly negative, as it was in 2015. They are refusing Barr’s request because it means, in effect, that we must forego all of the benefits we have gained from encryption.
We are continuing to work with the FBI, and our engineering teams recently had a call to provide additional technical assistance. Apple has great respect for the Bureau’s work, and we will work tirelessly to help them investigate this tragic attack on our nation.
We have always maintained there is no such thing as a backdoor just for the good guys. Backdoors can also be exploited by those who threaten our national security and the data security of our customers. Today, law enforcement has access to more data than ever before in history, so Americans do not have to choose between weakening encryption and solving investigations. We feel strongly encryption is vital to protecting our country and our users’ data.
This is the same thing experts keep telling lawmakers, who persist in believing that it’s a matter of hard work and willpower rather than a limitation of reality — and then cast their lack of understanding in profoundly offensive terms:
“Companies shouldn’t be allowed to shield criminals and terrorists from lawful efforts to solve crimes and protect our citizens,” Senator Tom Cotton, Republican of Arkansas, said in a statement. “Apple has a notorious history of siding with terrorists over law enforcement. I hope in this case they’ll change course and actually work with the F.B.I.”
Setting aside how stupid and disliked Cotton has proved himself to be, it’s revealing that his best argument is to claim that Apple sides with terrorists. He really hasn’t got a clue.
Back to the Times report:
The San Bernardino dispute was resolved when the F.B.I. found a private company to bypass the iPhone’s encryption. Tensions between the two sides, however, remained; and Apple worked to ensure that neither the government nor private contractors could open its phones.
This is one of those instances where a reporter is so close to getting it, but ends up missing the mark and landing in dangerous territory. Apple fixed a bunch of iOS security problems; these changes simultaneously prevent investigators and criminals from gaining access to devices because both are unauthorized, as far as the security infrastructure is concerned. Any breach of that may help law enforcement, but it will also help people trying to break into, for example, the President’s iPhone. Weakening security for one weakens it for everyone.
Apple did not “ensure” that it locked law enforcement out of its products. It fixed bugs.
Apple did not respond to a request for comment. But it will not back down from its unequivocal support of encryption that is impossible to crack, people close to the company said.
This Times piece was published before Apple responded at length to reporters — as linked above — but their position has been admirably consistent. Much in the same way that it’s impossible to draw a line between security holes for good people and security holes for bad people, it’s also hard not to see this ending with encryption compromised everywhere. If the Department of Justice thinks it should be breached for this device, why not the apparently thousands of devices in storage lockers? If encryption should not apply to devices belonging to the dead, why not devices belonging to the living? If they can get into encrypted devices at rest, why wouldn’t they feel the right to decrypt communications in transit? If the relatively stable and reputable law enforcement of the United States can gain access, what about officers in other countries? Why not other branches of the justice system, or even officials more broadly? Are there any countries that you would exclude from access?
There is a point at which I expect many people will start to push back against this ever-expanding list of those allowed access to encrypted communications. From a purely technical perspective, it doesn’t matter where you draw the line: whether you don’t think a particular corrupt regime should be allowed to decrypt communications and devices on demand, or you object to other branches of a government having access, or you think that this policy should only apply to devices with dead owners. It simply doesn’t matter. Because, from a technical perspective, once you allow one point of access, you allow them all. Code doesn’t care whether a back door was accessed by an investigator with a warrant, a stalker, a corrupt official, or a thief.
This story is not a case of a stubborn tech company feeling like they are above the law. It is about an opportunistic Department of Justice that is making an impossible demand: that devices should allow access to the authorized user, law enforcement agencies, and nobody else. They haven’t pressed that demand since the last high-profile terrorist attack, so this isn’t about principle. It’s about taking advantage of a situation they know will be a hard public relations battle for Apple — in large part because the public at large doesn’t understand the unfeasibility of what is being asked. Articles like this one do nothing to explain that, and only help to push the government’s dangerous position.
After the past few years of all “big tech companies” being lumped into the same pile of public distrust, I fear they might win this time. As a result, we, the public, will lose our electronic privacy, security, and safety.
My search confirmed my initial hunch that there is only one official remaining use of the word “Macintosh” by today’s Apple: the default “Macintosh HD” name of the internal drive on a new Mac. Many Mac users personalize that name immediately, although less experienced Mac users often don’t realize they’re allowed to change it. (If you’ve never done it, just click the name once to select it and a second time to start editing it, just like a file or folder.)
What’s most curious about this vestigial naming is that everything about it is wrong. Besides the anachronistic use of “Macintosh,” the “HD” abbreviation for “hard disk” or “hard drive” refers to a spinning disk drive, whereas most Macs rely on SSDs (solid-state drives). Even the case-less hard drive icon in the Quick Look preview window incorrectly uses an image of a spinning disk to represent an SSD.
As Engst illustrates, the CoreTypes bundle in MacOS contains icons for all kinds of drives: different external drive types, shared drives, and even fossils like ZIP drives. But MacOS stubbornly names the built-in drive “Macintosh HD” by default and still assigns it a spinning disk icon — which, by the way, has been redrawn twice since Apple launched Macs with default SSD options.
I kind of like it. Also, I still keep hard drive icons on my desktop, so maybe this is more indicative of the kind of person I am.
U.S. music streams on services like Spotify Technology AB, Apple Music and YouTube rose 30% last year to top one trillion for the first time, according to Nielsen Music’s annual report, fueled by big releases from artists like Taylor Swift, Billie Eilish and Post Malone.
Streaming services have upended how people listen to and pay for music, and now account for 82% of music consumption in the U.S., according to Nielsen. Sales of physical albums, meanwhile, dropped off 19% in 2019 and now make up just 9% of overall music consumption.
Since 2016, streaming has been far bigger business than digital sales ever were. Meanwhile, vinyl records are projected to surpass compact discs in 2019 sales. This makes complete sense to me: if you’re passively listening to music, you’ll just stream it because you don’t have to pay more, but if you want to make your music listening an — and I am already regretting this word choice — experience, you’ll pick up a physical item with presence.
Of the litany of things you can buy for $10 — a sandwich, a box of pens, and a print magazine among them — unlimited access to a catalog of 50 million songs is one of the most bang-for-your-buck options out there. But that’s how much music-streaming subscriptions have cost for their entire existence.
The coming year may be when that finally changes, for a number of reasons. First, Spotify, the leader of the music-streaming market, recently entered its second decade of existence towing 250 million users, 110 million of whom are paying subscribers; when tech companies hit such major growth milestones, they tend to hike up prices to begin recouping previous years’ lost revenue, which is why Uber rides, Seamless deliveries, ClassPass sessions, and the products of other startups-turned-behemoths are more expensive now than they were at the start.
I can’t imagine most people a decade ago were spending $120 per year on music; given the RIAA’s sales data, it appears that streaming alone now generates more revenue than the entire recorded music business did in any year from 2010 through 2017. But now we are, and we can probably expect to spend much more than that pretty soon.
Ophir Harpaz just wanted to get a good deal on a flight to London. She was on travel website OneTravel, scouring various options for her trip. As she browsed, she noticed a seemingly helpful prompt: “38 people are looking at this flight”. A nudge that implied the flight might soon get booked up, or perhaps that the price of a seat would rise as they became scarcer.
Except it wasn’t a true statement. As Harpaz looked at that number, “38 people”, she began to feel sceptical. Were 38 people really looking at that budget flight to London at the same exact moment?
Being a cyber-security researcher, she was familiar with web code so she decided to examine how OneTravel displayed its web pages. (Anyone can do this by using the “inspect” function on web browsers like Firefox and Chrome.) After a little bit of digging she made a startling discovery – the number wasn’t genuine. The OneTravel web page she was browsing was simply designed to claim that between 28 and 45 people were viewing a flight at any given moment. The exact figure was chosen at random.
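The mechanism Harpaz uncovered is trivially simple to reproduce. As a rough sketch — the function name and range are illustrative, not OneTravel’s actual code — the “social proof” number could be generated like this:

```python
import random

# Illustrative reconstruction of the dark pattern Harpaz found: the
# "N people are looking at this flight" figure is not live telemetry,
# just a random integer drawn from a fixed range on each page render.
def fake_viewer_count(low=28, high=45):
    return random.randint(low, high)  # inclusive on both ends

print(f"{fake_viewer_count()} people are looking at this flight")
```

Every refresh produces a different, equally fictitious number — which is exactly what Harpaz observed when she inspected the page.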
I have some travel coming up, so I’ve spent a few weeks trying to get a good deal on a flight and a hotel room. I cannot imagine that any website is thirstier for you to act immediately than a travel booking website. I’d do everything I could to limit my accommodation choices to just those within my budget and in a specific location, but I’d still be offered sold-out five-star hotels nowhere near where I wanted to be — I suppose this was to encourage me to book something, anything, quickly.
Also, many of the biggest travel booking websites are owned by just a couple of companies: Booking Holdings runs Booking.com, Priceline, Kayak, and Cheapflights; the Expedia Group owns Expedia, Hotels.com, Hotwire, Orbitz, Travelocity, and Trivago. Each group shares the same inventory, and they all use the same tactics. Users simultaneously get the impression that they’re shopping around and competing with other users, when neither is true.
The App Store is the world’s safest and most vibrant app marketplace, with over half a billion people visiting each week. It remains the safest place for users to find software and provides developers of all sizes access to customers in 155 countries. Since the App Store launched in 2008, developers have earned over $155 billion, with a quarter of those earnings coming from the past year alone. As a measure of the excitement going into 2020, App Store customers spent a record $1.42 billion between Christmas Eve and New Year’s Eve, a 16 percent increase over last year, and $386 million on New Year’s Day 2020 alone, a 20 percent increase over last year and a new single-day record.
Big numbers. Investors sure seem pleased — the stock hit a new high.
Apple News draws over 100 million monthly active users in the US, UK, Australia and Canada and has revolutionized how users access news from all their favourite sources. Apple News+ offers an all-in-one subscription to hundreds of the world’s top magazines and major newspapers.
Apple does not disclose how many paying subscribers they have for Apple News Plus, nor for Arcade or Music. Perhaps it’s simply a matter of disclosure rules regarding the company’s upcoming quarterly earnings report. Of course, there’s another possibility.
As the 2020 campaign gains speed, Facebook is taking measures to protect against foreign interference and stop the spread of misinformation. Social media is a fertile space for civic participation, and Facebook is at the forefront of encouraging civil discourse. But with the company’s huge platform comes huge responsibility.
Five women across Facebook and Instagram — Katie Harbath, Sarah Schiff, Monica Lee, Antonia Woodford, and Crystal Patterson — are key to ensuring the integrity of the 2020 election on Facebook. Behind the scenes, these women have helped overhaul the company’s approach to protecting elections, creating a new ad library to ensure transparency and partnering with over 55 third party fact-checking organizations. With just under a year until the election, Teen Vogue spoke with Facebook to learn more about what they’ve been up to.
This looks like a regular Teen Vogue article. On the homepage, it’s indistinguishable from older and newer pieces, aside from its lack of byline. For all intents and purposes, it is a Teen Vogue article — until you read it.
It is egregiously bad to omit a byline on pretty much any story, let alone a puff piece like this! What is happening!
No credit on these very PR-friendly photos, either. Curious!
It’s stranger than that: all of the photos in the article are screenshots, carrying file names like Screen Shot 2020-01-06 at 3.21.21 PM.png. Setting aside how unwise it is to present photographs as PNG files — they’re all well over 2 MB — the only time I’ve seen this technique used is to mask the source of the photograph, so no Exif data is retained from the camera or photographer. I’m not claiming that’s the reason Teen Vogue decided to screenshot these pictures instead of uploading the originals, but why wouldn’t they publish photographs as photographs?
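For the curious, the Exif point is easy to verify yourself: a camera JPEG carries its metadata in an APP1 segment tagged “Exif”, while a PNG screenshot simply has no such segment. Here is a crude, stdlib-only sketch — the function name is mine, and real Exif parsing is considerably more involved:

```python
JPEG_SOI = b"\xff\xd8"                # JPEG start-of-image marker
PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"  # every PNG file begins with these bytes

def looks_like_exif_jpeg(data: bytes) -> bool:
    """Crude check: is this a JPEG whose header region contains an Exif APP1 segment?"""
    if data.startswith(PNG_SIGNATURE):
        return False  # a PNG screenshot: no camera Exif segment to find
    if not data.startswith(JPEG_SOI):
        return False
    head = data[:64 * 1024]
    # Exif metadata lives in an APP1 marker (0xFFE1) whose payload starts "Exif\x00\x00".
    return b"\xff\xe1" in head and b"Exif\x00\x00" in head
```

Run against the images in the article, a check like this would come up empty for every one — not because the original photographs lacked metadata, but because screenshotting re-encoded them without it.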
Max Tani, about an hour after the article’s publication:
This piece was just updated with an editor’s note saying it’s “sponsored editorial content.”
If this was an example of native advertising, why wasn’t it initially and clearly identified as such? Why was it so easy to confuse it with an article? If it was just a regular article, why wasn’t it bylined? Why was it so easy to confuse with an ad?
Newest in teenvoguegate: Facebook sponcon *was* supposed to be sponcon: “We had a paid partnership with Teen Vogue related to their women’s summit, which included sponsored content. Our team understood this story was purely editorial, but there was a misunderstanding.” – FB spox
How to parse that, per source: FB piece was supposed to be sponcon, tied to Facebook sponsorship of a Teen Vogue event last fall. Then, supposedly, FB decides they don’t want/need the sponcon after all.
But! Sponcon was created anyway, and was floating around the Teen Vogue CMS, and then…
Condé Nast, which publishes Teen Vogue, began using their editorial staff to write stuff like this about five years ago, rather than leaving it up to their ad teams. This seems like a predictable consequence.
There’s a silly dismissal of privacy laws that goes something like this: because these laws require that data processors get opt-in consent from users, they empower Facebook and Google, which means these laws are failures on a grand scale. I thought this argument was absurd when it first appeared last year in relation to Europe’s GDPR, but California’s new CCPA has made it ripe and juicy again.
We warned folks that these big attempts to “regulate” the internet as a way to “punish” Google and Facebook would only help those companies. Last fall, about six months into the GDPR, we noted that there appeared to be one big winner from the law: Google. And now, the Wall Street Journal notes that it’s increasingly looking like Facebook and Google have grown thanks to the GDPR, while the competition has been wiped out.
“GDPR has tended to hand power to the big platforms because they have the ability to collect and process the data,” says Mark Read, CEO of advertising giant WPP PLC. It has “entrenched the interests of the incumbent, and made it harder for smaller ad-tech companies, who ironically tend to be European.”
So, great work, EU. In your hatred for the big US internet companies, you handed them the market, while destroying the local European companies.
The result is that not only is there a privacy/convenience tradeoff that users must navigate, there’s a privacy/competition one that regulators must navigate as well.
You want users to have transparent, wide-ranging choice in how their data is used, with companies they know?
Then you’ve got to limit data use to first-party companies with a big public brand and lots of public scrutiny, rather than a complex ecosystem of many data producers and vendors.
There is absolutely nothing wrong with making it harder for any company — large or small, American or European — to abuse users’ privacy. Besides, it isn’t as though most big websites carry only one tracker. The fewer companies that are able to build highly personalized profiles, the better.
More relevant, though, is that you probably can’t name many of these smaller ad tech companies, but you can name the three biggest ones: Google, Facebook, and Amazon. That’s probably because you have a profile with at least one of them, if not all three, so of course it’s easier for them to get consent from you. If you have a user account, they already have your consent.
I doubt that compliance costs — in the sense of documentation or technical support — are what is preventing smaller firms from competing with the big three. It’s the first-party relationship that these companies have with their users. Remember: Google is not a software and services company, it is an advertising company with several interactive and useful features. Facebook is not a family of social networks and chat apps, but a personalized advertising company that entices you to give them as much data as you can. Amazon — well, they’re everything, but they’re also a big fan of advertising Amazon listings to you for the things you just bought on Amazon.
Complying with GDPR really is much harder for a company nobody has ever heard of that asks permission to keep a copy of your name, phone number, email address, and anything else you submit to an unrelated service. But why shouldn’t it be?
These privacy laws are not perfect, yet they’ve had an immediate impact. In the year and a half since GDPR came into effect, hundreds of millions of euros’ worth of fines have been issued. Plenty of companies have had to tighten their privacy and security measures as a result. But, yes, Google, Facebook, and Amazon have become stronger as a result of their ease of compliance.
GDPR and CCPA are largely good — if imperfect — first steps towards regulating the unhinged worlds of advertising technology firms and data brokerages. We should encourage our public representatives to set broad expectations about how our data may be collected and used. We also ought to fight for more people-friendly interpretations of antitrust law. It isn’t a failure that privacy laws fail to address antitrust concerns any more than it is a failure that restaurant sanitation requirements don’t rein in corn subsidies.
It’s possible to do both, and it isn’t indicative of poor policy that we should do both. Well, it isn’t indicative of poor privacy regulations, anyhow; it absolutely does point to missed opportunities for decades. Now is as good a time as any to fix those shortcomings.
Imagine a startup with $12 billion of revenue, 125%+ YoY revenue growth (two years in a row), and Apple-esque gross margins (30-50%). Without knowing anything else about the business, what would you value it at? $50 billion? $100 billion? More?
That’s Apple’s AirPods business, the fastest-growing segment of the world’s most valuable company.
Credit the shift in sentiment to Apple’s focus on tapping an ecosystem of nearly 1.5 billion users to generate a steady stream of profit. The increasing contribution from services like iCloud storage and Apple Music is making its business more stable and therefore deserving of a higher multiple, according to Gene Munster, a long-time Apple analyst and founder of Loup Ventures.
“Investors are slowly getting more comfortable with the concept that a company that has a combination of software, hardware and services can be a dependable business,” Munster said.
Between the blockbuster sales of new products like the AirPods and Apple Watch, subscriptions to services new and old — plus the continued success of the rest of the lineup — it’s little wonder that Apple’s stock hit an all-time high last week.
Apple’s mammoth 2019, which took the stock up 86% for the year and 50% in the second half alone, has created a conundrum for analysts with Buy ratings on the stock but target prices below the current level. While the stock is just shy of $300, the average target is $266. Analysts need to decide: turn more cautious or boost the target?
So far, the consensus move is to push out the goal post.
Now is as good a time as any to look back at nearly a decade of analyst commentary advising the replacement of Tim Cook as CEO. Any shareholder that sold their stake in Apple after those complaints would have kicked themselves silly across 2019.
Sounds great. But the wild gains in service subscriptions haven’t come out of nowhere.
But I worry that with its services push, Apple is turning into an advertising company too. It’s just advertising its own services. In iOS 13 they put an ad for AppleCare at the very top of Settings. They use push notifications to ask you to sign up for Apple Pay and Apple Card, and subscribe to Apple Music, TV, and Arcade. The free tier of Apple News is now a non-stop barrage of ads for Apple News+ subscriptions. Are we at the “hellscape” stage with Apple? No, not even close. But it’s a slippery slope. What made Apple Apple is this mindset: “Ship great products and the profits will follow” — not “Ship products that will generate great profits”.
With the AirPods and Apple Watch, Cook has proven himself to be a CEO that can define entire product categories by bringing visionary ideas to market. But Apple’s services business isn’t nearly as revolutionary. Lots of people are subscribed to Apple Music, but that’s probably partly because it’s a default option, and partly because there’s little profound differentiation in music streaming platforms. MacOS and iOS badger you to upgrade your iCloud storage when you’re running low. I’m sure a lot of people are Apple TV Plus subscribers, but that’s because a lot of people bought an Apple product in the past few months and got a free year.
I am not suggesting that Apple’s services are not successful, nor am I saying that Apple is misrepresenting them. But they’re pushing services hard in a way that I worried about after their March event. I’m not saying that services sell themselves; Apple clearly has to promote them somehow. It’s just that such a hard sell cheapens their hardware and operating systems. Users should want to subscribe to Apple News Plus because it is a good service — you know, hypothetically — not because it is irritating to use Apple News without it.
Update: Kevin Rooke’s analysis has a bunch of holes in it, as Neil Cybart points out. I regret linking to it. Even so, by Cybart’s estimates, Apple’s AirPods business generated closer to $7.5 billion in revenue in 2019.
The FBI is asking Apple Inc. to help unlock two iPhones that investigators think were owned by Mohammed Saeed Alshamrani, the man believed to have carried out the shooting attack that killed three people last month at Naval Air Station Pensacola, Florida.
In a letter sent late Monday to Apple’s general counsel, the FBI said that although it has court permission to search the contents of the phones, both are password-protected. “Investigators are actively engaging in efforts to ‘guess’ the relevant passcodes but so far have been unsuccessful,” it said.
The letter, from FBI General Counsel Dana Boente, said officials have sought help from other federal agencies, as well as from experts in foreign countries and “familiar contacts in the third-party vendor community.” That may be a reference to the undisclosed vendor that helped the FBI open the locked phone of Syed Farook, the gunman who attacked a city meeting in San Bernardino, California, in 2015. The Justice Department took Apple to court in an effort to get the company to help the FBI open that phone.
The U.S. government also has access to many tools that can help acquire data from iPhones, Androids and myriad other mobile devices. For instance, Cellebrite tools and Grayshift’s GrayKey have long been able to grab data from iPhones, and the FBI is one of many agencies that own hacking tech from both.
Forbes recently obtained a search warrant from Ohio, signed off on in October 2019, showing an FBI-owned GrayKey was able to extract data from an iPhone 12.5, though no device exists (neither does iOS 12.5). In the search warrant application, the government doesn’t specify what model of iPhone it was, but an image shows it has three camera lenses on the back of the device. Only Apple’s top of the range iPhone 11 Pro and iPhone 11 Pro Max models have three cameras. Though it’s not clear the iPhone was locked prior to being searched by the FBI, a photo of the front of the device shows it on a locked screen with a handful of missed calls.
“iPhone 12.5” in the warrant is presumably “iPhone12,5”, Apple’s model identifier for the iPhone 11 Pro Max. That warrant is unrelated to the Pensacola case, but it’s news alone that the GrayKey can unlock the newest iPhone models.
As with the San Bernardino case, Apple says that it is cooperating with authorities. But, unlike that case, the FBI hasn’t yet tried to legally compel Apple into, for example, creating a special version of iOS that has no restrictions on passcode attempts. As with that case, it would set a troubling precedent that encryption should be weakened. So far, there is simply no practical or realistic way of doing so without breaking every user’s security.
Aqua was a huge leap over the classic MacOS’ Platinum appearance, and far richer than anything on Windows at the time. It felt alive, with subtly pulsating buttons and progress bars that looked like they were some sort of modern-day barber poles, turned on their sides.
I remember seeing Aqua for the first time in its Jaguar iteration. By then, the roughest edges of the earliest implementations had been ironed out and Mac OS X was finally fast enough on a family friend’s Power Mac G4 that the system felt like it was supposed to.
I was young; I knew none of this. All I knew was that the copy of Windows XP that I used at home was, in an instant, hilariously outdated.
Windows remains a spectacular example of tasteless and unrefined user interface design. Meanwhile, MacOS has aged gracefully with basically the same interface elements as twenty years ago. Even the old stuff still holds up — mostly. The pinstripes of the earliest versions of Mac OS X are pretty garish, and the dark drop shadows behind seemingly every UI widget and menu label are pretty heavy-handed.
But you could have shown me a copy of Catalina fifteen years ago and told me that this was the way the next version of Mac OS X was going to look, and I would have entirely believed you. That’s evolving gracefully. That’s refinement.
See also: Jimmy Grewal of the Macintosh IE 5 team. IE 5 was demoed by Steve Jobs as an example of a Carbon app on Mac OS X using the Aqua user interface elements — with two caveats: first, Carbon apps did not automatically inherit Aqua UI components; and, second, the IE 5 team independently arrived at an Aqua-like UI without ever having seen Aqua.
From Apple TV+ to Apple Music, from Apple Pay to Apple News+, Cook’s company is now the gateway through which millions of us live our lives. We watch movies, pay for groceries, read the news, go to the gym, adjust our heating and monitor our hearts through Apple services, which is now the company’s fastest growing division.
Living within this carefully curated ecosystem, soon to be bolstered by new augmented reality products, the company’s 1.4 billion active users have become less like customers and more like citizens. We no longer just live our lives on Apple’s phones, but in them.
Apple’s market valuation is roughly equal to the national net worth of Denmark, the 28th wealthiest country in the world. It has as many users as China has citizens. Its leader has a close relationship with the US president and other heads of state. In all but name, this is a superpower, wielding profound influence over our lives, our politics and our culture.
That’s why Tortoise has decided to report on Apple as if it is a country: the first instalment in a year-long project we are calling Tech Nations, which will cover all the main technology giants. Here, we’ll examine Apple’s economy, its foreign policy and its cultural affairs. We’ll dig into its leadership, its security operation and its lobbying spend. We’ll identify the executives likely to succeed Cook, and the areas where Apple is falling behind in the global tech race.
This is a curious way to frame this series of reports. It’s not hard to see this as the corporate insider counterpart to the countless “Apple is a religion” takes that have littered the media world since the 1980s. Most of those were entirely thoughtless; this seems to be more considered, but it verges on the ridiculous. While many of us may be “citizens” of Apple, we have every ability to leave that world by replacing their products and services with competitors’ offerings. But it is virtually impossible to never use anything from Amazon, Google, or Microsoft — even if you are not consciously using products from any of those companies. And that makes fear-mongering sentiments, like the clause in the last sentence of this paragraph, seem wildly mistargeted:
There is a sense of necessity, even of wisdom, about these shifts. After all, consumers have become less willing to pay out for iteratively improved phones, so new ways of making money from the phones they already have must be found. The idea is to expand the Apple ecosystem so far that consumers never need to – or never can – leave it.
The “shifts” described are in Apple’s emphasis of its more private products and services:
Yet Cook has made some defining interventions. Other companies, such as Facebook and Google, are happy for a sort of chaos to prevail: an online world that’s sprawling, messy and mostly unregulated, where data can be plucked from the air and passed on to advertisers. Cook is trying to create a refuge: a unified world of hardware, software and services, all under Apple’s flag, where citizens can expect their data to remain their own.
Far from hoping that Apple will be the sole provider of “refuge”, Cook has argued for privacy legislation. It’s fair to retort that this would be in the best interests of Apple, but I also think it would be better for everyone if Apple could not be one of the few companies that can claim to care about privacy. Privacy should not be a unique selling point. Users of any product or service should expect their personal data to be treated with due respect, without exploitation.
Imagine if Apple had to compete primarily on the quality of their services, with privacy as a guaranteed benchmark for them and their competitors.
Instead, we’re seeing the opposite: exploitative ad companies like Google and Facebook are re-framing themselves as respectful of user privacy, despite making no meaningful business model changes. While many of Apple’s services — like iCloud, Messages, and Maps — have been getting much better, they’re now tasked with also redefining their unique offering in the absence of meaningful privacy regulation. It’s entirely fair for them to keep beating that drum.
Under the nearly two-decade-old Free File deal, the industry agreed to make free versions of tax filing software available to lower- and middle-income Americans. In exchange, the IRS promised not to compete with the industry by creating its own online filing system. Many developed countries have such systems, allowing most citizens to file their taxes for free. The prohibition on the IRS creating its own system was the focus of years of lobbying by Intuit. The industry has seen such a system as an existential threat. Now, with the changes to the deal, the prohibition has been dropped.
The addendum also expressly bars the companies from “engaging in any practice” that would exclude their Free File offerings “from an organic internet search.” […]
Product naming will also be standardized so that Intuit can’t offer two slightly different “free” versions of TurboTax, where the main difference is that one of them is actually free.
Apple began rolling out its updated mapping app to customers starting in iOS 12, and at the 2019 Worldwide Developer Conference, Apple said all customers in the United States would receive the improved Maps app by the end of the year.
Apple has made good on that promise with the rollout of the new mapping terrain to large swathes of the United States, and the updated Maps are now available across most of the country. It could still take some time for all users in the Central and Southeastern areas of the U.S. to see the new content.
Apple drove and walked throughout cities around the world in 2019, including where I live in Calgary. I’m looking forward to seeing the results of their efforts — and hopefully soon.
Apple has rightfully caught a lot of flak for the quality of their web services this past decade. I think they’ve done a lot to shed their lacklustre reputation — with iCloud, iMessage, Apple Music, and Apple Maps. The latter, in particular, is far from perfect; Apple’s place data is still infuriatingly incomplete and inconsistent. But it is no longer ridiculous to consider Apple’s services viable competitors to those from longstanding giants of the web.
Millions of people in California are now seeing notices on many of the apps and websites they use. “Do Not Sell My Personal Information,” the notices may say, or just “Do Not Sell My Info.”
But what those messages mean depends on which company you ask.
Stopping the sale of personal data is just one of the new rights that people in California may exercise under a state privacy law that takes effect on Wednesday. Yet many of the new requirements are so novel that some companies disagree about how to comply with them.
I’m not sure they’re “novel” as much as they are poorly defined. A couple of weeks ago, a piece in the Wall Street Journal explained how Facebook would be taking advantage of the confusion — at least until the actual expectations of these new regulations are hashed out in court in the coming years.
Laws like the CCPA, and GDPR in Europe, are necessary steps forward, even though they are incomplete and imperfect. But that’s part of the process; and, as new test cases come forward to challenge their measures, they will become better-defined. They should be paired with better antitrust enforcement, too, so that those companies which have huge market advantages cannot unfairly coerce consumers.
The first and most important piece of advice on this topic cannot be stressed enough: Google reverse image search isn’t very good.
As of this guide’s publication date, the undisputed leader of reverse image search is the Russian site Yandex. After Yandex, the runners-up are Microsoft’s Bing and Google. A fourth service that could also be used in investigations is TinEye, but this site specializes in intellectual property violations and looks for exact duplicates of images.
Toler isn’t kidding around. In these comparisons, it isn’t even a close call between Yandex and any other reverse image search engine. Google even failed to identify an image from Google Street View.
It makes me wonder why Google’s search engine is so comparatively bad. It isn’t like they don’t do facial recognition or object identification; they frequently promote their abilities in both. How are they so far behind when it comes to indexing the web’s photos?
Former Uber CEO Travis Kalanick will step down from the board, effective Dec. 31, and a spokesperson said Tuesday he has sold all of his stock in the ride-hailing company he co-founded 10 years ago.
It’s unclear how much Kalanick’s total stake is worth, but the latest public filings show it’s about $2.5 billion.
Uber’s disastrous second quarter this year resulted in losses of $5.2 billion, of which $3.9 billion was attributable to IPO payouts. In the third quarter, the company lost $1.2 billion, which is close to its second-quarter non-IPO losses. So, if those losses hold steady, and if Kalanick’s $2.5 billion worth of now-sold shares was locked up until the current quarter, Uber could be in for another multibillion-dollar loss and will probably cross $20 billion in total losses since 2016.
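That back-of-envelope arithmetic can be checked in a few lines. All figures are in billions of US dollars and come from the reported quarterly results above; the fourth-quarter number is only a guess at the run rate, not a reported figure:

```python
# Uber's 2019 losses, in billions of USD, as cited above.
q2_total = 5.2         # reported Q2 loss
q2_ipo_payouts = 3.9   # portion attributed to IPO-related payouts
q2_operating = q2_total - q2_ipo_payouts  # non-IPO Q2 loss

q3_loss = 1.2          # reported Q3 loss, close to q2_operating

# Assumption: Q4 lands near the Q3 run rate.
q4_estimate = q3_loss

print(f"Q2 non-IPO loss: ~${q2_operating:.1f}B")  # ~$1.3B
print(f"Q4 estimate:     ~${q4_estimate:.1f}B")   # ~$1.2B
```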
Uber is a predatory company, and it sure seems that even Kalanick is not convinced of its long-term prospects.
Did you give or receive any books for Christmas? If so, were they physical books, or electronic ones? I suspect that, while many of us have exchanged real, printed books as presents, eBooks were far less popular, and unless you give a voucher, they’re almost impossible to give as presents anyway. So why have eBooks failed so miserably, when other media such as movies and music now sell and rent so well online?
There are surely plenty of people who own and use an Amazon Kindle, and others who have stuck with Apple Books on their iPad — you may be among them. But a physical book has remained a far more attractive premise. In the past few years, I’ve read several dozen paper books, but only a handful of ebooks.
Finally, apart from enhanced search facilities, few eBooks offer any advantage in use over their physical equivalents. eBook readers are still incredibly primitive, and won’t even let you refer to two or more sections of the book at the same time. You can’t photocopy them, copy quotations, or do anything remotely advantageous. What should have been a liberation from the printed page turns out to be the imposition of more restrictive rules.
What amazes me about all this is that the many penalties and drawbacks of eBooks aren’t the result of the medium itself, but have been cunningly devised and implemented by eBook publishers. It’s almost as if they don’t want us to license eBooks in the first place. Or have they just become so greedy that they think they’ll win either way?
These two paragraphs close Oakley’s piece, and I find them contradictory. Many of the drawbacks I experience — in addition to those in the preceding paragraph — are an inherent result of the medium: ebook software is subject to the most irritating aspects of its device, like software updates, battery consumption, and a distracting environment. The often multipurpose nature of these devices also flattens the context of books; when they’re as interchangeable in the same frame as a webpage or a video, I find it harder to get lost in them.
Ebooks should have an advantage over music and movies, though: because they’re often just text, ebook files are a few hundred kilobytes, or maybe a couple of megabytes. They’re tiny. For those who are okay with the drawbacks of the medium, ebooks should be as successful as Netflix. They’re even available in libraries, though publishers are doing their damnedest to scuttle that advantage. But, of course, libraries also offer physical books, and which would you rather have?
With the arrival of Mac Catalyst this summer, as promised by Apple last year, the Mac has started to benefit from apps originally developed for iOS. But I predicted that it would be a major onslaught that would dramatically change the Mac forever, and this was my biggest miss. Some combination of a rough summer of developer betas and limitations of the technology itself mean that there aren’t nearly as many Catalyst apps as I thought, and a bunch of my favorite iOS apps still aren’t anywhere close to shipping Mac versions. Catalyst may still change the Mac forever, but it’s going to take a lot more than one year to make it happen.
For users and for the Mac ecosystem, I hope that Catalyst gets better. But there’s a little voice in my head that thinks it would be better if SwiftUI makes the difference and Catalyst is, in the future, a sad and largely-forgotten experiment. There’s a long way to go between today’s iPad apps running on the Mac, and the promised cross-platform apps that feel right on each device.
Obviously, we won’t see an end to such gadgets because too many products rely on what economists refer to as the “two-part tariff,” where you buy the product (razor, floss dispenser, coffee maker) and then pay a per-unit fee for the items (blades, floss, coffee pods) that make the product usable. Every subscription razor blade company has this figured out: it’s why the razor itself is usually relatively inexpensive, but the specialized blades are pricey.
However, the gadgets that are flooding the marketplace (and Kickstarter) now are a generation removed from razor blades, which actually do take some precision to manufacture. The gadgets I’m ranting about are ones that try to convince you to spend more for a relatively inexpensive, readily available product: floss dispensers with proprietary floss (it’s just string, people); garbage cans with specialty garbage bags; even a manicure machine that paints each fingernail individually using — wait for it — pods of its proprietary nail polish.
Investigations by ProPublica and BuzzFeed News this year revealed that drivers delivering Amazon packages had been involved in more than 60 crashes that led to serious injuries, including 10 deaths. Since then, the news organizations have learned of three more deaths.
Amazon, which keeps a tight grip on how drivers working for contractors do their jobs, has told courts around the country it was not responsible when delivery vans crashed or workers were exploited. It is a position that is facing more legal and legislative challenges, as some states seek to force tech companies such as Uber to take more financial responsibility for the contract workers who underlie their businesses.
Officials are racing to keep track of the numerous warehouses sprouting up, to create more zones for trucks to unload and to encourage some deliveries to be made by boat as the city struggles to cope with a booming online economy.
The average number of daily deliveries to households in New York City tripled to more than 1.1 million shipments from 2009 to 2017, the latest year for which data was available, according to the Rensselaer Polytechnic Institute Center of Excellence for Sustainable Urban Freight Systems.
“It is impossible to triple the amount,” said José Holguín-Veras, the center’s director and an engineering professor at Rensselaer, “without paying consequences.”
Households now receive more shipments than businesses, pushing trucks into neighborhoods where they had rarely ventured.
It is perhaps inevitable that some accidents will occur, and Amazon’s total of at least sixty-three since 2015 is lower than, say, Uber’s — they reported nearly one hundred fatal accidents in 2017 and 2018. But delivery contractors for Amazon are typically driving larger and heavier vehicles that present greater danger to drivers of smaller cars, cyclists, and pedestrians.
In the case of both companies, however, these injuries and deaths — and the congestion described in the Times article — are a result of unproven but eagerly-adopted developments. Many vehicles might not be on the road if it were not for Amazon’s high-pressure rush delivery options. It is worrying that these companies are aware of the negative results of programs like Amazon Prime, but are slow to make changes for the better as they continue to hurry packages along. If they bore some responsibility for the damage they inflict, I imagine things may be different.
Stuart A. Thompson and Charlie Warzel, New York Times:
Every minute of every day, everywhere on the planet, dozens of companies — largely unregulated, little scrutinized — are logging the movements of tens of millions of people with mobile phones and storing the information in gigantic data files. The Times Privacy Project obtained one such file, by far the largest and most sensitive ever to be reviewed by journalists. It holds more than 50 billion location pings from the phones of more than 12 million Americans as they moved through several major cities, including Washington, New York, San Francisco and Los Angeles.
Each piece of information in this file represents the precise location of a single smartphone over a period of several months in 2016 and 2017. The data was provided to Times Opinion by sources who asked to remain anonymous because they were not authorized to share it and could face severe penalties for doing so. The sources of the information said they had grown alarmed about how it might be abused and urgently wanted to inform the public and lawmakers.
After spending months sifting through the data, tracking the movements of people across the country and speaking with dozens of data companies, technologists, lawyers and academics who study this field, we feel the same sense of alarm. In the cities that the data file covers, it tracks people from nearly every neighborhood and block, whether they live in mobile homes in Alexandria, Va., or luxury towers in Manhattan.
The page that delivers this alarming news also contains analytics scripts from a dizzying number of third-party providers. Their app contains user tracking SDKs provided by ComScore, Google (Firebase), and Localytics. If you’re a U.S.-based print subscriber, the Times will sell your address to third-party companies without telling you.
Yes, there absolutely should be laws in place that restrict how this data may be collected and shared. But the parties controlling websites and apps also bear responsibility for the privacy-destroying software they include, and business practices that are similarly compromising.
Update: To their credit, the Times provides a decent guide to setting up your phone for better privacy protections, and they have previously acknowledged that they collect visitor data. It is not the fault of Thompson and Warzel that their employer uses tracking technologies on the very investigation that points to abuses of collecting this information. But it is their employer’s responsibility to understand what they’re reporting and change their practices.
Podcasts should be the optimal poster child for Catalyst because it exists on many platforms, and I think it could be made to work so much better by being responsive to ways in which a Mac is not an iOS device. But what upsets me is not a lack of polish as such, it’s that this was deemed anywhere near good enough to ship. It’s not a good podcasting app, it’s not a good Catalyst example, it’s not a good macOS citizen and it’s not even a good reincarnation of the Podcasts app. It’s just a mess.
It was somewhat concerning to see a collection of tech demos ship as user-facing apps in Mojave last year. But to still be hearing recurring complaints about basic MacOS features a year later — why the hell are picker controls still touch-based spinners? — is inexcusable.
Is it good for MacOS, as a platform, to have a bunch of officially-supported bad iOS ports? It sure is not a great sign that Apple enables and seemingly encourages abject laziness.
In 2018, Kashmir Hill reported for Gizmodo that Facebook was allowing advertisers to target users based on user data provided solely — according to Facebook — for security purposes, such as two-factor authentication phone numbers. Additional reporting by Zack Whittaker of TechCrunch from earlier this year indicated that Facebook indexed two-factor phone numbers in their friend finder tool, without a way to opt out.
Facebook Inc will no longer feed user phone numbers provided to it for two-factor authentication into its “people you may know” feature, as part of a wide-ranging overhaul of its privacy practices, the company told Reuters.
It had already stopped allowing those phone numbers to be used for advertising purposes in June, the company said, and is now beginning to extend that separation to friend suggestions.
I believe this is the first explicit acknowledgement that two-factor phone numbers were being used for People You May Know, in addition to the ways in which they were previously exploited.
Sounds good, right? Well, this is Facebook, so it’s not like this change is available now and applies to all accounts:
Michel Protti, a long-time Facebook executive who took over as chief privacy officer for product this summer and is leading the overhaul, told Reuters the two-factor authentication update was an example of the company’s new privacy model at work.
The change – which is happening in Ecuador, Ethiopia, Pakistan, Libya and Cambodia this week and will be introduced globally early next year – will prevent any phone numbers provided during sign-up for two-factor authentication from being used to make friend suggestions.
Existing users of the tool will not be affected, but can de-link their two-factor authentication numbers from the friend suggestion feature by deleting them and adding them again.
So, unless you reconfigure your two-factor authentication settings — and live in one of the five named countries — the phone number you thought would be used solely for security will keep being usurped for building Facebook’s people finding tools.
If this is an example of Facebook’s radical new privacy-focused business model, well, that seems about right to me.
The best web page I’ve found recently, which I will bestow upon you now, is this IMDb guide to “Best Movies Less Than 100 Minutes Running Time.” This will be the list I turn to when I am in the mood to watch something but not in the mood to commit to what feels like half my life to a movie that probably isn’t even that good. Movies are getting longer, but they are not getting better, and I have had enough.
I am fully aware of how much I sound like an old grouch when I write that movies are way too goddamn long. “Avengers: Endgame”, the biggest movie of the year, was over three hours long, and I felt every single minute of it. The second-biggest movie of the year, “The Lion King”, was just a shade under two hours long — which doesn’t sound too bad, until you compare it to the original, and realize that the extra thirty minutes in the terrible new one added nothing.
It’s not just movies, either: many albums are also way too long, despite the inherent flexibility of music streaming. Drake is notorious for padding the run time of his records. He has released a new full-length solo album every year for the past four years; the shortest one is one hour and thirteen minutes long, and it’s just a collection of off-cuts and outtakes.
[…] Developers using first party tools from Apple shouldn’t have to swim upstream to build cohesive Mac versions of their apps. I am not saying that the existence of any incongruous Catalyst ports is worrisome — incongruous ports are inevitable and Catalyst is an opportunity to make them better — what’s worrisome is that incongruity seems to be the default with Catalyst.
Look no further than Apple’s own Catalyst ports. Developers have enjoyed a variety of good first party examples on both Mac OS and iOS. Mail, TextEdit, Preview, Notes serve as examples of what good cohesive apps on those platforms should look like. Outside of maybe the Podcasts app, Apple made Catalyst apps feel like ports.
Wellborn’s use of the word “cohesive” is inspired and speaks to the root of my worries about Catalyst. Great MacOS apps are often not entirely consistent, but they are cohesive — apps with unique user interfaces, like Coda and Tweetbot, feel very Mac-like, even though both are from third-party developers and neither one looks especially like Mail or Preview.
It worries me that some of Apple’s own MacOS apps lack cohesion; and, though Catalyst is the purest expression of this concern, it is not solely at fault. The redesigned Mac App Store that debuted in Mojave certainly looks like a Mac app, but it feels and functions like a crappy port from some distant platform. It launches by zooming in from the desktop; editorial collections open with an absurd sliding animation and cover the entire app like a sheet; it contains a truly bizarre combination of large click targets and tiny buttons. And none of this can be blamed on a bad Catalyst port, because the Mac App Store is not a Catalyst app, as far as I can tell.
It is frustrating to see Apple release a mediocre porting utility like Catalyst, but it is truly concerning that some of their newer Mac apps feel this incongruous. It would be one thing if it were a third-party developer, but these are first-party apps. As much as I am excited by the prospects of SwiftUI, I have to wonder whether this current stage of cross-platform-influenced Mac development is a distraction, or portends the lazy Mac experience of the future.
Update: Now that my MacBook Air is running MacOS Catalina, I see that the Mac App Store is improved compared to Mojave. It no longer features the comical zoom animation at launch, and the escape key can now be used to dismiss editorial collections. But it still doesn’t feel like a Mac app. Also, the JetEngineMac framework has been moved from within the Mac App Store app package to /System/Library/PrivateFrameworks, along with a new JetUI framework.
Walt Mossberg emerged briefly from retirement today to publish a decade’s-end column about Apple for the Verge. The nutshell version of his column goes something like this: Apple’s biggest hardware introductions came in 2010 with the iPad, iPhone 4, and the modern MacBook Air — all of which were done while Steve Jobs was still in charge. But, while Apple has grown beyond anyone’s wildest imagination, the company under Tim Cook’s tenure has failed to produce a “blockbuster” product. I disagree:
Both of these Cook-era hardware innovations [AirPods and Apple Watch] made the top 10 in The Verge’s list of the 100 top gadgets of the decade. In fact, Apple not only took first place, but placed a total of four products in the top 10, the only company with more than one product in that tier.
Still, neither of these hardware successes has matched the impact or scale of Jobs’ greatest hits. Even the iPad, despite annual unit sales that are sharply down from its heyday, generated almost as much revenue by itself in fiscal 2019 as the entire category of “wearables, home and accessories” where the Apple Watch and AirPods are slotted by Apple.
This wasn’t entirely Cook’s fault. Industries go through secular phases, and this hasn’t been a decade of new blockbuster consumer gadgets on the scale of the iPhone for any company. The closest thing may be Amazon’s Echo smart speaker and Alexa voice assistant, but they’re no match for the smartphone in sales or impact — at least, not yet.
Let’s acknowledge that comparing just about any product category to the modern smartphone, as defined by the iPhone, is a flawed premise. I maintain that the smartphone is a generation-defining convergence device of near-universal functionality, with an impact that might not be replicated by any foreseeable device.
But I find it hard not to consider either the AirPods or Apple Watch blockbuster products. While both are accessories, I don’t think that diminishes their impact on Apple, the tech industry, and society at large — in fact, if anything, it indicates that we should not be so quick to dismiss the power of a dependent product. After all, it wasn’t too long ago that the iPhone and iPad couldn’t update their operating system without requiring an iTunes connection.
By the sole criterion of how they set the standard for their respective categories, the AirPods and Apple Watch are absolutely blockbusters. The sales figures implied by Apple’s earnings only reinforce that.
Other Cook-era products that I feel have been wild successes include class-leading biometrics — first with Touch ID, and then the successful transition to the Face ID era — the percolation of Retina displays across the entire product line, the company’s rapid advancements in chip design, and the breakthroughs in iPhone camera quality.
Mossberg also addressed a troubled decade for the Mac:
But Cook does bear the responsibility for a series of actions that screwed up the Macintosh for years. The beloved mainstream MacBook Air was ignored for five years. At the other end of the scale, the Mac Pro, the mainstay of professional audio, graphics, and video producers, was first neglected then reissued in 2013 in a way that put form so far ahead of function that it enraged its customer base.
I think these, and even the notebook keyboard fiasco, are smaller issues than this decade’s decline in software quality. Even in the best scenario, it would take years to dig out, and so far Apple does not seem to be on that path. Cook is also responsible for the services strategy, still in the early stages, which is infecting the software design by making it AAPL-first rather than customer-first.
These are, I think, the most worrying trends of the last decade. The MacBook Air is Apple’s most popular Mac; it should have evolved more than a megahertz bump in the five years between 2013 and 2018. The Mac Pro’s six-year stagnation remains scarcely believable.
And then there’s the software. It’s hard to reconcile the leaps-and-bounds improvements in iCloud services, and radically great new features like CarPlay, with the egregious regressions across Apple’s platforms. It’s almost like all development energy is put toward new features, leaving no room for refinements, fixing bugs, and getting out of technical debt.
For a long time, our problem was there were not enough things to choose from. Then with big box stores, followed by the internet, there were too many things to choose from. Now there are still too many things to choose from, but also a seemingly infinite number of ways to choose, or seemingly infinite steps to figuring out how to choose. The longer I spend trying to choose, the higher the premium becomes on choosing correctly, which means I go on not choosing something I need pretty badly, coping with the lack of it or an awful hacked-together solution (in the case of gloves, it’s “trying to pull my sleeves over my hands but they are too short for this”) for way, way too long, and sometimes forever.
The degree to which you feel this problem definitely depends on your income, or at least, being in the privileged position of not having to make do with the only thing you can afford. But for people with even a limited ability to make an investment purchase, if it’s worth it, there’s even more pressure to get it right. Knowing you wasted a big chunk of money on a cheaper, worse thing that falls apart when you could have spent a little more money on a thing that is good and lasts feels like failure. You’ve then wasted your money, wasted your time, you’ve contributed to global warming, and now you have to start the entire thing over again and hope you don’t somehow end up making the exact same mistake.
We’ve known since at least the 1970s that too much choice feels far from freeing; it is anxiety-inducing and causes us to feel paralyzed.¹ In a bid to narrow down our options, we’ll probably turn to professional reviews — particularly from sites like the Wirecutter, where Johnston was a senior editor.
This obsessive tendency is as obviously silly as it is widespread. Culture journalist Eliza Brooke pointed out that “Google searches for ‘best’ have been steadily rising for years.” Product recommendation sites have been springing up across the internet, including scientific reviews and influencer reviews and trend reviews and aggregated reviews and bad SEO-driven reviews and even worse copies of all of the above. “Googling ‘best air fryer’ is not a path to enlightenment, but into a spiral of comparison between publications,” Alyssa Bereznak said of the impact at the Ringer.
The worst part, though, is that I don’t actually care about pants or pillows or travel mugs. I just want to be a man with warm coffee, a covered crotch, and no neck pain. Mediocre products would suffice. But I can’t help but enter the product review trap for every little item because I live in the United States in 2019 and so I am constantly taught that I must make the best purchases because buying good things is also a moral good.
I get why people want the “best” of something, but I think that’s the wrong term for review sites to be using. I assume it’s for Google ranking reasons that they do.
Review websites are fantastic starting points for product categories that you know virtually nothing about, and for larger purchases that are supposed to last a long time. As an example, I’ve been trying to find a decent portable vacuum for cleaning detritus out of the car, and a few review roundups saved me from buying a model that wasn’t going to be powerful enough, even though it was from a well-known brand.
But the “best” product for you may vary from what reviewers recommend. You’ll know this if you know a particular product category well, or if you have fairly specific requirements. For example, when the Wirecutter tested food storage containers, they suggested Pyrex’s tempered glass containers. What Pyrex markets as an “eighteen piece set” — which is actually nine containers of various sizes and nine matching lids — costs about $30 in the United States. Their plastic pick was similar, except made by Snapware and about $10 less expensive for the same-sized set. I get the allure of both of these. But neither option fulfills three criteria that I consider essential: they must be perfectly stackable with and without a fitted lid, so they sit securely in my fridge or pantry when in use, but are compact when not in use; they must be cheap enough to leave behind, so they don’t feel precious; and they must all fit the same lid, so I don’t have to go hunting for a specific one in an oft-disorganized cupboard. And, for those reasons, I own fifty-count sleeves of half- and whole-litre heavy-weight plastic deli containers, and fifty lids that fit both sizes. I bought them from a restaurant supply store where I get a lot of my kitchen gear; this “hundred and fifty piece set”, as the marketing department might put it, cost me $15. Oh, and they’re microwaveable and machine-washable.
I think review websites could do a better job of making their criteria more apparent. I also think Amazon should make their website easier to use, especially for categories with thousands of options. Nobody needs that much choice. But we can do a better job of understanding the role of professional reviewers. They provide recommendations, but if you know better or have specific requirements, you shouldn’t take their “best” choice too literally.
I think I’ve mentioned before how much I loathe shopping for toothpaste. Of all the goods in the world, why can I select from so many variations of that? ↩︎
Bowman Heiden and Nicolas Petit write for the Hill about how they think those who have an “absolute” interpretation of privacy are missing the point:
Privacy absolutism comes from a world of physical privacy where moral agents are the norm. However, the digital world is different than the physical world. Leaving aside legitimate fears like information leaks, incorrect predictions or systemic externalities that may require appropriate regulations or technical standards, our policymakers must understand that users do not view digital privacy in black and white, but rather in shades of gray.
This is a fair point on its surface, but it argues against a strawman. I don’t think anyone asks for “absolute” privacy; the authors point to “doctors, lawyers and priests” as examples of people who frequently handle others’ private information. But we willingly give those professionals that information — and that model of consent has not translated to the ease, speed, and scale of personal information aggregation on the web.
Here’s a wacky thought experiment from their piece:
You and your spouse are sleeping upstairs in your house after a long day of work. Downstairs, the living room and kitchen are a mess. There just wasn’t time to manage it all. During the night, someone sneaks in, takes a photo of the disorder, and then proceeds to clean it up.
What is there to learn here? At first blush, all of us should find the privacy intrusion intolerable. And yet, on further thought, some of us may accept the trade-off of saving money on a cleaning service and enjoying breakfast in a tidy room. […]
Earlier this year, an almost identical incident occurred to a Massachusetts homeowner, who found it “weird and creepy”. I don’t know anyone who would find this situation acceptable, regardless of the benefit of a clean house. But if I were to request a house cleaner, this would be fine, as I imagine it would be for anyone. Again: the difference is that permission has been granted.
Heiden and Petit also betray a level of ignorance that is scarcely believable for the co-authors of an article about privacy and tech companies:
It is not that hard to understand that no one at Facebook is passing moral judgment on photos of your carbon-intensive vacation, your meat consumption at a restaurant or your latest political rant. And when someone searches Google for “how to avoid taxes,” there’s no need to add, “I’m asking for a friend.” In both cases, there is just a set of algorithms seeing sequences of 1s and 0s. And when humans listen to your conversations with a digital assistant, they’re basically attempting to refine the accuracy of the translating machine that will one day replace them. How different is this from scientists in a laboratory looking at results from a clinical trial?
But these are minor blips in how collected data may be used and abused. The real problems — and, again, I must stress how badly the co-authors of this op-ed missed both of these things — are the lack of informed consent, and the scale and speed of personal information collection. We are only starting to understand the full unwelcome consequences of privacy-rejecting business models.
Of course, this wouldn’t be so bad if policymakers weren’t avid readers of the Hill. As it is, this tripe may help persuade some of them to relax their newfound and admirably aggressive stance on privacy.
For that reason, it can be hard to distinguish genuine intent from rage-clicking through a dumb-looking website. A user named Frankbill161 was apparently furious at the operators of the sports-betting website FanDuel for not refunding his money, which, he told Yura, “ruined my life.” So he’d paid $6,232 to order the murder of the customer service representative who delivered the bad news to him over the phone. This sort of spontaneous anger, which might otherwise be spent on a Twitter or Reddit thread, can now be unleashed on sites where users believe their clicks can kill.
So far, according to [Chris Monteiro], eight people have been arrested for ordering murders through Yura’s websites, on the basis of evidence Monteiro passed to law enforcement. One of them, a young Californian named Beau Brigham, had paid less than $5 toward a hit on his stepmother. Nevertheless, he was found guilty of soliciting murder and sentenced to three years in prison.
Murder marketplaces may force us to reexamine — and redefine — what constitutes criminal intent. Though judgments have been somewhat inconsistent, courts seem to regard making a payment of any amount as proof that the desire for harm is sincere. David Crichton, a doctor in the United Kingdom, was acquitted of attempted-murder charges after ordering a hit on a financial adviser who’d lost most of his pension, because he had never transferred any money to Yura. In court, Crichton claimed he had been trying to “clear his head” of his own suicidal thoughts, and that he’d never really wanted the killing to happen.
Marketplaces offering murder-for-hire have existed on the web for decades. But it is only relatively recently that legitimized storefronts, difficult-to-track payment systems, and what I perceive to be a societal shift — at best increasingly nihilistic, at worst misanthropic — have coalesced into what Merchant describes as the first documented death purchased through these means.
Once the California Consumer Privacy Act takes effect Jan. 1, websites with third-party trackers must add to their home page a button that says “Do Not Sell My Personal Information.” If a consumer clicks that button, the site is barred from transactions that send data to hundreds of third parties.
Those transfers have underpinned the digital-ad market for more than a decade, and are a core component of Facebook’s and Google’s powerful tools for collecting data on consumers and delivering relevant ads to them online.
In advance of the California law taking effect, Google has created a new protocol so sites won’t send data to the company if consumers have opted out. A Google spokeswoman said the protocol was intended to “help them [advertisers] as they work to comply with” the California law.
Facebook, however, has told advertisers that its trackers’ data collection doesn’t constitute “selling” data under the California law and that it therefore doesn’t believe it is required to make changes.
At first glance, this seems consistent with the framing privacy-insensitive companies like Google and Facebook have been using for this debate: they don’t sell personal data; they merely allow advertisers to bid on it. Similarly, a business does not sell Facebook information about who is browsing their website; they only share it.
“Sell,” “selling,” “sale,” or “sold,” means selling, renting, releasing, disclosing, disseminating, making available, transferring, or otherwise communicating orally, in writing, or by electronic or other means, a consumer’s personal information by the business to another business or a third party for monetary or other valuable consideration.
While businesses generate no immediate financial benefit from using a Facebook pixel, its ad performance and analytics characteristics seem to me to be “valuable consideration”. But, of course, regulators will decide that beginning next year.
This is one of those cases where it seems that the letter of the law ought to have been far clearer to correctly convey its spirit.
Welcome! I’m excited about your upcoming stay at my home! I’m using the term “home” loosely here, as the living space you’ll be inhabiting is simply a piece of real estate I have purchased with the intent of skirting local zoning laws in order to turn a profit.
A cool thing about these “disruptive” enterprises is seeing the constant real-world flaws in their premise, and why we have laws and regulations in the first place.
In a hearing of the Senate Judiciary Committee yesterday, while their counterparts in the House were busy with articles of impeachment, senators questioned New York District Attorney Cyrus Vance, University of Texas Professor Matt Tait, and experts from Apple and Facebook over the issue of gaining legal access to data in encrypted devices and messages. And committee chairman Sen. Lindsey Graham (R-S.C.) warned the representatives of the tech companies, “You’re gonna find a way to do this or we’re going to do it for you.”
Graham and ranking member Sen. Dianne Feinstein (D-Calif.)—who referenced throughout the hearing the 2015 San Bernardino mass shooting and the confrontation between Apple and the Federal Bureau of Investigation that resulted from mishandling of the shooter’s county-owned iCloud account by administrators directed by the FBI—closed ranks on the issue.
“Everyone agrees that having the ability to safeguard our personal data is important,” Feinstein said. “At the same time, we’ve seen criminals increasingly use technology, including encryption, in an effort to evade prosecution. We cannot let that happen. It is important that all criminals, whether foreign or domestic, be brought to justice.”
As always, lawmakers are exhibiting profound resistance to learning about encryption from experts in the field. There is simply no way to encrypt data such that it can be unlocked under a warrant, independent of any person’s cooperation, while remaining otherwise secure. Former law enforcement personnel said as much in response to the San Bernardino case, as has pretty much every expert in this field.
Lawmakers urgently need to understand that what they are asking for is not possible. It never has been; it likely never will be. Every solution that has been proposed so far — hardcoded super secret credentials, escrow systems, forcing the creation of special software patches, and the like — has already been evaluated and discarded.
A few months ago, the Carnegie Endowment for International Peace’s Encryption Working Group published an overview of what the encryption debate looks like today:
There will be no single approach for requests for lawful access that can be applied to every technology or means of communication. More work is necessary, such as that initiated in this paper, to separate the debate into its component parts, examine risks and benefits in greater granularity, and seek better data to inform the debate. Based on our attempt to do this for one particular area, the working group believes that some forms of access to encrypted information, such as access to data at rest on mobile phones, should be further discussed. If we cannot have a constructive dialogue in that easiest of cases, then there is likely none to be had with respect to any of the other areas. Other forms of access to encrypted information, including encrypted data-in-motion, may not offer an achievable balance of risk vs. benefit, and as such are not worth pursuing and should not be the subject of policy changes, at least for now. We believe that to be productive, any approach must separate the issue into its component parts.
One reason lawmakers are struggling against technologists in this debate but winning with the public overall is that encryption seems like a black box. I disagree with this report’s suggestion that there is a balance to be found between encrypting data at rest securely and somehow allowing law enforcement alone to access it, but I think it’s worth investigating. And if experts come back again to say that this is not possible, lawmakers need to understand that they are not being hyperbolic or defeatist. Some things simply are not possible.
Ring’s website states that supplying a home address enables Neighbors to “create a radius around your home” in order to share alerts from “within that radius.” (Users aren’t required to provide accurate information.) As such, users presumably expect that their own posts are, likewise, visible only to the neighbors in whose radii they fall. Ring’s website implies as much: “Conversely, if you share an alert on the [Neighbors app] about a crime or safety issue in your radius,” it says, “your neighbors will also get a notification on their phones and tablets.”
A Ring spokesperson said elsewhere the company characterizes posts to Neighbors as “public” and allows users to link to specific posts on social media. Gizmodo found that Google has indexed almost 2,000 Ring videos so far. However, it’s unclear whether users understand that posts, including those containing accurate location information, can be easily viewed by anyone, from anywhere on the planet.
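The “radius” matching Ring describes is, at bottom, just a distance check: show a post to any user whose saved home location falls within some distance of the incident. Here is a minimal sketch of that idea, assuming a great-circle distance calculation; the coordinates, the 1 km radius, and the function names are all my illustrative assumptions, not anything Ring has published.

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlmb = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def in_alert_radius(home, incident, radius_km=1.0):
    """True if an incident falls within a user's alert radius."""
    return haversine_km(*home, *incident) <= radius_km

# A point roughly 550 metres away is inside a 1 km radius…
print(in_alert_radius((51.0447, -114.0719), (51.0497, -114.0719)))  # True
# …but a point roughly 5.5 km away is not.
print(in_alert_radius((51.0447, -114.0719), (51.0947, -114.0719)))  # False
```

The privacy question, of course, is not the geometry — it is that the post itself, location and all, turns out to be visible far beyond any radius.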
The density of cameras that this reporting revealed is shocking to me. For a country as ostensibly wary of government interference in personal freedom as the United States, many residents of American cities are certainly eager to give Amazon eyes on every block.
It reminds me of the steady drumbeat of reporting from the United Kingdom that it is among the world’s most-surveilled nations. It seems that a handful of reports are released every year about Britain’s big brother problem — but few acknowledge that most of these cameras are privately owned, and are subject to far more permissive privacy laws. In many regions of the world, private surveillance can operate with few rules in broadly public spaces, so long as there is notice that cameras are being used.
In his excellent book on surveillance, Bruce Schneier has pointed out that we would never agree to carry tracking devices and report all our most intimate conversations if the government made us do it.
But under such a scheme, we would enjoy more legal protections than we have now. By letting ourselves be tracked voluntarily, we forfeit all protection against how that information is used.
Those who control the data gain enormous power over those who don’t. The power is not overt, but implicit in the algorithms they write, the queries they run, and the kind of world they feel entitled to build.
When it comes to the ethics of monitoring individuals, is there really a difference between when it’s done by the government or one of the biggest companies on the planet? I don’t believe there is.
Twitter is funding a small independent team of up to five open source architects, engineers, and designers to develop an open and decentralized standard for social media. The goal is for Twitter to ultimately be a client of this standard.
This isn’t going to happen overnight. It will take many years to develop a sound, scalable, and usable decentralized standard for social media that paves the path to solving the challenges listed above. Our commitment is to fund this work to that point and beyond.
We’re calling this team @bluesky. Our CTO [Parag Agrawal] will be running point to find a lead, who will then hire and direct the rest of the team. Please follow or DM @bluesky if you’re interested in learning more or joining!
If you’re worried about the dominance of certain social media platforms, or if you’re concerned about privacy online, or if you’re uncomfortable with leaving the decisions for how content moderation works in the hands of a few internet company bosses — this is big news and something you should be paying attention to. It won’t change the way the web works overnight. Indeed, it might never have that big of an impact. But it certainly has the potential to be one of the most significant directional shifts for the mainstream internet in decades. Keep watching.
After a closer reading of Jack’s tweets, though, I think my first interpretation wasn’t quite right. Twitter isn’t necessarily interested in decentralizing content or even identity on their platform. Why would they be? Their business is based around having all your tweets in one place.
This “burden on people” is the resources it would take for Twitter to actively combat hate and abuse on their platform. Facebook, for example, has hired thousands of moderators. If Twitter is hoping to outsource curation to shared protocols, it should be in addition to — not a replacement for — the type of effort that Facebook is undertaking. I’ve outlined a better approach in my posts on open gardens and 4 parts to fixing social networks, which don’t seem compatible with Twitter’s current business.
As is the Twitter way, Dorsey does not seem to have fully considered this proposal. On the relatively simple question of whether this would be based on existing standards or if Twitter would be inventing an entirely new spec, he said that it was to be determined. As Dorsey acknowledges in the thread, Twitter already has an open API; it could be updated. Why not use that? Mastodon is a decentralized social networking platform — why not adopt its features?
This is a spitball at this stage — barely more than a napkin sketch. There might be something to show for it, sometime, in some capacity, but there are a lot of buzzwords in this announcement and no product. That suggests a high likelihood of vapourware to me.
For the third time in our nation’s history, a President is facing an impeachment vote in the House of Representatives. As with the Mueller Report (and the DOJ IG Report), we at The Bulwark think the public needs to read the articles of impeachment thoroughly, carefully, as citizens — not as lawyers.
The House of Representatives is moving toward a momentous decision about whether to impeach a president for only the third time in U.S. history. The charges brought against President Trump by the House Judiciary Committee on Tuesday are clear: that he abused his office in an attempt to induce Ukraine’s new president to launch politicized investigations that would benefit Mr. Trump’s reelection campaign, and that he willfully obstructed the subsequent congressional investigation.
Because of that unprecedented stonewalling, and because House Democrats have chosen to rush the impeachment process, the inquiry has failed to collect important testimony and documentary evidence that might strengthen the case against the president. Nevertheless, it is our view that more than enough proof exists for the House to impeach Mr. Trump for abuse of power and obstruction of Congress, based on his own actions and the testimony of the 17 present and former administration officials who courageously appeared before the House Intelligence Committee.
I was too young to remember Bill Clinton’s impeachment over lying under oath and obstructing justice, and not yet born when Richard Nixon resigned for his corrupt abuse of power — and also obstructing justice. So it is a momentous occasion for me to witness what is, in my view, an instance where the President of the United States abused his power in an attempt to damage a political rival, and obstructed investigations of these actions. Today will be etched into my brain for as long as I live; I wanted to make sure it was recorded in my public diary, too.
Of course, I am Canadian. American politics has no immediate consequence nor direct impact on my life, but the uniquely close relationship of my country with the United States — not to mention the latter’s power and influence over nearly all countries — means that I am not far removed from its effects. This process is important, it is right, and it is a just decision to proceed with curtailing a president who does not value the law.
Earlier today, Apple began accepting orders for the all-new Mac Pro, which will start shipping to customers in 1-2 weeks. Reminiscent of what Apple did when it released the iMac Pro, the new Mac Pro was provided to a very limited set of reviewers with video production experience in advance of pre-orders.
Marques Brownlee shares his impressions after using the Mac Pro and two Pro Display XDRs to edit all of his YouTube videos for the past two weeks. His main takeaways? “One, it’s really quiet. Two, it’s really fast.” So fast, in fact, that he was able to render 8K video more quickly than it would take to watch it.
Brownlee’s video is the only one I’ve watched so far, but his first impressions blew me away. I will never own one of these things, but I will live vicariously through those who need nothing but the fastest Mac for their specialized workflows.
Apple is on the cusp of shipping a new Mac Pro — a phrase many of us did not expect to be writing three years ago, when the last Mac Pro passed its third birthday without an update. The test will be whether that stagnation happens again. We already know of a few Mac Pro updates that Apple is readying, including Radeon Pro W5700X configuration options and up to 8 TB of storage. Apple also isn’t yet taking orders for the rack-mounted Mac Pro, which is “coming soon”. After that, though, it would be worrying if the Mac Pro once again stagnated for several years. I don’t expect a Mac Pro update every year, but I would hope to see gradual improvements — both to keep up with new technology, and to demonstrate Apple’s commitment to the niche customers who depend on the Mac.
I have previously shared with you Balk’s Law (“Everything you hate about The Internet is actually everything you hate about people”) and Balk’s Second Law (“The worst thing is knowing what everyone thinks about anything”). Here I will impart to you Balk’s Third Law: “If you think The Internet is terrible now, just wait a while.” The moment you were just in was as good as it got. The stuff you shake your head about now will seem like fucking Shakespeare in 2016. I like to think of myself as an optimist, but I have a hard time seeing a future where anything gets better. Do you know why? Because everything is terrible and only getting worse. We won’t all be dead in twenty years, but we’ll all wish we were. I used to have hopes that once the Internet got completely unbearable some of the smart people would peel off and start something new, but with each passing day it seems ever less likely. (If anyone peels off to start something new it’s going to be teens, and we know what idiots they are.) No, the Internet is going to keep getting worse and there will be no chance for escape. It’s a massive torrent of sewage blasted at you at all hours and you pay handsomely for the privilege of having a hand-held cannon you carry with you at all times to spray more shit-sludge at yourself whenever you’re bored or anxious. Some of you sleep with it right next to your head in case you wake in the middle of the night and need to deliver another turgid shot to your wide-open mouth.
By 2010, personal blogs were thriving, Tumblr was still in its prime, and meme-makers were revolutionizing the form. Snapchat was created in 2011 and Vine, the beloved six-second video app, was born in 2012. People still spent time posting to forums, reading daily entries on sites like FML, and watching Shiba Inus grow up on 24-hour puppy cams. On February 26, 2015 — a day that now feels like an iconic marker of the decade — millions of people on the internet argued about whether a dress was blue or gold, and watched live video of two llamas on the lam in suburban Arizona. Sites like Gawker, the Awl, Rookie, the Hairpin, and Deadspin still existed. Until they didn’t. One by one, they were destroyed by an increasingly unsustainable media ecosystem built for the wealthy.
There may be an element of rosy retrospection to Chang’s piece, but the thesis of the argument fully summarizes a decade-long shift: the internet really has lost joy — lightness and vibrancy have been upended and replaced by gravitas. A veneer of marketing coats surfaces already soaked in cynicism. It becomes increasingly difficult to choose our own web adventure as it is more frequently dictated by opaque recommendations.
I am an optimistic person; this stuff can be changed. I’ve been writing about the misplaced trust of an advertising-driven techno-utopia for most of this decade, and I’m encouraged by the increasing awareness of its faults. Correcting these problems doesn’t require us to entirely give up on social media, or advertising, or smartphone apps, or any of that stuff. The 2020s should be marked by a conscientious effort to bring the fun back. It will be a slow process, but I think it has already begun.
Larry Page and Sergey Brin spent the first fifteen years of their careers building the greatest information network the world has ever known and the last five trying to escape it. Having made everything visible, they made themselves invisible. Larry has even managed to keep the names of his two kids secret, an act of paternal love that is also, given Google’s mission “to organize the world’s information and make it universally accessible and useful,” an act of corporate treason.
Larry Page’s efforts have made it trivial to find virtually anyone’s contact information, date of birth, siblings, and address, but effectively impossible to find his children’s names. Mark Zuckerberg wants to make everyone in the world closer, except his neighbours.
They’re fully aware of the problems they have played a part in creating, but the business models of the companies they started are dependent on everyone else not figuring that out.
Avast, the multibillion-dollar Czech security company, doesn’t just make money from protecting its 400 million users’ information. It also profits in part because of sales of users’ Web browsing habits and has been doing so since at least 2013.
That’s led to some labelling its tools “spyware,” the very thing Avast is supposed to be protecting users from. Both Mozilla and Opera were concerned enough to remove some Avast tools from their add-on stores earlier this month.
But recently appointed chief executive Ondrej Vlcek tells Forbes there’s no privacy scandal here. All that user information that it sells cannot be traced back to individual users, he asserts.
Here’s how it works, according to Vlcek: Avast users have their Web activity harvested by the company’s browser extensions. But before it lands on Avast servers, the data is stripped of anything that might expose an individual’s identity, such as a name in the URL, as when a Facebook user is logged in. All that data is analysed by Jumpshot, a company that’s 65%-owned by Avast, before being sold on as “insights” to customers. Those customers might be investors or brand managers.
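For what it’s worth, the pre-upload scrubbing Vlcek describes can be sketched in a few lines. The specific patterns below — usernames in URL paths, well-known tracking query parameters — are my assumptions about what such a pipeline might strip; Avast has not published its actual ruleset, and scrubbing URLs alone is no guarantee of anonymity.

```python
import re
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Query parameters that commonly carry identity or tracking tokens
# (an illustrative list, not Avast's actual ruleset).
SENSITIVE_PARAMS = {"user", "username", "email", "uid", "token", "fbclid", "gclid"}

# Path segments that look like usernames or long opaque identifiers.
IDENTIFIER_SEGMENT = re.compile(r"^(@\w+|[0-9a-f]{16,}|\d{6,})$", re.IGNORECASE)

def strip_identifiers(url: str) -> str:
    """Drop path segments and query parameters that may identify a person."""
    parts = urlsplit(url)
    segments = [s for s in parts.path.split("/")
                if not IDENTIFIER_SEGMENT.match(s)]
    query = urlencode([(k, v) for k, v in parse_qsl(parts.query)
                       if k.lower() not in SENSITIVE_PARAMS])
    return urlunsplit((parts.scheme, parts.netloc, "/".join(segments), query, ""))

print(strip_identifiers("https://example.com/profile/@jane/photos?fbclid=abc123"))
# https://example.com/profile/photos
```

Even a perfectly scrubbed clickstream, though, can still be re-identified when enough of one person’s browsing is linked together — which is exactly the fingerprinting risk Avast’s own marketing warns about.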
On the marketing webpage for their anti-tracking product, Avast says that VPNs don’t secure your privacy enough because “advertisers can still track you and identify you based on your device and browser settings”. They also say that you’re not anonymous to trackers because your “online habits, along with your device and browser settings make up your unique digital fingerprint, allowing advertisers to identify you from a crowd of visitors”.
For four years during college, I bought and scalped tickets on the side. I didn’t use bots and I wasn’t good at it. I ultimately lost a lot of money. But I did learn quite a lot about the ticket scalping industry. And I learned enough to know that the “anti-scalper” strategies Ticketmaster has deployed in recent years benefit scalpers, not fans.
It is the full-time job of thousands of people in the U.S. and around the world to buy tickets during hectic Ticketmaster onsales and sell them at jacked-up prices. When Ticketmaster tweaks how sales work, scalpers have lots of time and incentive to learn how to optimize for its new systems and to circumvent its anti-scalper tech. By making onsales more complicated, Ticketmaster is hurting average fans who buy tickets using the site only a couple times a year and helping the people who buy tickets every single day, in dozens of different onsales.
I was reminded of this article today as I attempted to buy a couple of tickets to a low-demand show that definitely isn’t seeing mass orders by scalpers. Point of clarity: scumbags they may be, ticket scalpers do not actually collect human scalps.
I started on my phone, because I was in the kitchen making coffee. I have the Ticketmaster app, but it had logged me out at some point. So I had to go through all of its prompts to pick bands and artists to get emailed about — no, thank you — to switch on push notifications, and all the rest of it. I signed in using my complicated saved password, which had apparently expired, so I had to go through their whole password reset process. Expiring passwords are bullshit.
I switched over to my Mac and tried in Safari, Chrome — the browser for people who don’t give a shit about their privacy — and I even brushed the dust off my copy of Firefox. I got the same error in all of them. I tried again on my phone using LTE, and had the same problem. Their website is apparently so secure that I simply cannot use it to buy tickets; last year, however, a Canadian investigation found that Ticketmaster was complicit in scalping. Live Nation Entertainment — the parent company of both Ticketmaster and Live Nation, which were somehow permitted to merge in 2010 — has exclusive contracts with some of the biggest venues in North America, too, so they’re impossible to avoid.
So I guess I’ll try buying tickets in person, at a booth, the way my ancestors once did.
The feature “rollout” is a staple of tech launches. A feature technically goes live, but when it will actually reach all users is left vague. Dashboards tabulating screen time rolled out last year, making their way to users over the course of weeks. Instagram’s anti-bullying tools rolled out a couple of months ago. A year ago, a feature to unsend messages in Messenger went live … in Bolivia, Colombia, Lithuania, and Poland, until eventually making its way to everyone else. This rollout tactic gives major tech platforms a way to create the illusion that they are for everyone. Tech companies get outlets to write up press releases about features going live, even if the features are not, in many cases, actually live.
A cautionary approach to rolling out new features by testing and refining them in smaller markets is not a problem. The problem is that these features are often announced in press releases and news stories as though they are widely available when they aren’t. In the New York Times’ coverage of Facebook’s new tool to control data collection across the web, it isn’t mentioned until the very last paragraph that it is only available in Ireland, South Korea, and Spain, with no timeline for U.S. or worldwide access. There’s no sign that Facebook is restricting the feature to these markets for licensing, translation, or legal reasons; it is a strategic decision to test how it works for users, and how much it impacts the company’s data gathering. Reporters should reserve praise and more accurately describe these soft launches for what they are: tests in specific markets.
The policy explains users can disable all location services entirely with one swipe (by navigating to Settings > Privacy > Location Services, then switching “Location Services” to “off”). When one does this, the location services indicator — a small diagonal upward arrow to the left of the battery icon — no longer appears unless Location Services is re-enabled.
The policy continues: “You can also disable location-based system services by tapping on System Services and turning off each location-based system service.” But apparently there are some system services on this model (and possibly other iPhone 11 models) which request location data and cannot be disabled by users without completely turning off location services, as the arrow icon still appears periodically even after individually disabling all system services that use location.
“Ultra wideband technology is an industry standard technology and is subject to international regulatory requirements that require it to be turned off in certain locations,” an Apple spokesperson told TechCrunch. “iOS uses Location Services to help determine if an iPhone is in these prohibited locations in order to disable ultra wideband and comply with regulations.”
“The management of ultra wideband compliance and its use of location data is done entirely on the device and Apple is not collecting user location data,” the spokesperson said.
That seems to back up what experts have discerned so far. Will Strafach, chief executive at Guardian Firewall and iOS security expert, said in a tweet that his analysis showed there was “no evidence” that any location data is sent to a remote server.
Apple said it will provide a new dedicated toggle option for the feature in an upcoming iOS update.
This makes complete sense to me and appears to be nothing more than a mistake in not providing a toggle specifically for UWB. It seems that a risk of marketing a company as uniquely privacy-friendly is that any slip-up is magnified a hundredfold and treated as evidence that every tech company is basically the same.
One of the more noticeable changes in recent iOS releases is just how many of them there are. There were ten versions each of iOS 6 and 7, but there were sixteen versions of iOS 11, and fifteen of iOS 12.
iOS 13 has distinguished itself by racing to an x.2 version number faster than any other iOS release family — on October 28 — and has received two further version increments since. This rapid-fire pace of updates has been noticeable, to say the least, and helps illustrate a shift in the way iOS releases are handled.
Which brings me to a confession: I’ve slightly misled you. Merely counting the number of software updates isn’t necessarily a fair way of assessing how rapidly each version changes. For example, while both iOS 6 and 7 had ten versions each, they were clustered in low version numbers. iOS 6 had three 6.0 releases and, oddly, a whole bunch under 6.1; iOS 7’s were the reverse.1
In fact, it used to be rare for iOS to go beyond an x.2 release at all. The first version to get to an x.3 release was iOS 4, but that was also the year the company merged the iPhone and iPad versions in 4.2. You have to skip all the way to iOS 8 to find another x.3 release; after that, though, every version of iOS has gotten to x.3, and iOS 8, 10, and 11 have each seen a series of x.4 releases as well.
iOS 13 is currently at 13.2.3; the developer beta is at 13.3, and 13.4 is being tested internally. Excluding the beta seeds, there have already been eight versions of iOS 13 released so far, and it has been available to the general public for less than three months.
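Incidentally, the tallying behind the footnoted spreadsheet is simple to sketch in code. Here is a minimal Python example that groups release versions into their x.y minor families; the version lists are a hypothetical, abbreviated sample rather than the complete release history:

```python
# Tally how releases cluster within each minor (x.y) version family.
# The lists below are illustrative samples, not the full release history.
from collections import Counter

releases = {
    "iOS 6": ["6.0", "6.0.1", "6.0.2", "6.1", "6.1.1", "6.1.2",
              "6.1.3", "6.1.4", "6.1.5", "6.1.6"],
    "iOS 13": ["13.0", "13.1", "13.1.1", "13.1.2", "13.1.3",
               "13.2", "13.2.2", "13.2.3"],
}

def minor_clusters(versions):
    """Count releases per x.y family, e.g. '13.2.3' counts toward '13.2'."""
    return Counter(".".join(v.split(".")[:2]) for v in versions)

for family, versions in releases.items():
    print(family, len(versions), dict(minor_clusters(versions)))
```

Run against real data, this makes the clustering obvious at a glance: iOS 6’s ten releases bunch under 6.0 and 6.1, while iOS 13’s spread across 13.0, 13.1, and 13.2 within weeks.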
And, again, just looking at the number of versions belies the impact of their contents. In addition to myriad bug fixes, iOS 13’s updates have introduced or reintroduced features that were announced at WWDC, but which did not appear in the gold master of 13.0. A similar pattern occurred with iOS 11 and 12: Apple announced, demoed, and often even released into beta features that were ultimately pulled from the x.0 version, before reappearing in a later update.
This indicates a shift in Apple’s product release strategy — not just from monumental updates to iterative ones, but also from just-in-time feature announcements to early previews. At WWDC, the iOS announcement was implied to be an indication of everything that would be available in the x.0 release; now, it’s a peek at everything that will be available across the entire release cycle.
I do not think that this is inherently problematic, or even concerning. But, so far, it does not seem to be a deliberate strategy. From the outside, it feels far more like an accidental result of announcing features too early — a predictable consequence of which is that announcements may have to be walked back. There are plenty of examples of this in Apple’s history, as well as in the tech market and other industries as a whole. You may recall, for instance, that Apple’s push notification service was announced for an iPhone OS 2 release, but was delayed to iPhone OS 3 due to scalability concerns. So this is not a new problem, but it has become a more frequent one lately, as features are increasingly deferred to later software updates.
I would rather features be stable; I do not think there is any reason that Apple should rush to release something before it’s ready. But I do wish this new strategy came across as a deliberate choice rather than what I perceive to be a lack of internal coordination.
I’ve experienced the tedium of plotting the iOS version release history as a spreadsheet so you don’t have to. ↩︎
Apple CarPlay in BMW vehicles is finally going to be free. Hallelujah! Autocar reported earlier today that BMW is eliminating the subscription charge for folks in the U.K., and we just received confirmation from BMW that the change applies to U.S. BMW owners as well.
A BMW spokesperson told us that they “can confirm that this change does also apply to the U.S. market.” When we asked why the sudden change of heart, the same spokesperson sent us this statement: “BMW is always looking to satisfy our customers’ needs and this policy change is intended to provide BMW owners with a better ownership experience.”
Then it was time for opening statements. Taylor Wilson, a partner at L. Lin Wood and a lawyer for the plaintiff, put up a chart I couldn’t see with a lot of dates on it. (The chart was aimed at the jury and would continue to obscure my view all day.) He then walked through the dates of the basic action around the tweets with the energy of a nervous middle schooler doing a monologue at the school play. Not only did Musk call Unsworth a “pedo guy,” Wilson pointed out, when Kevin Beaumont sarcastically called the tweet “classy,” Musk replied “bet you a signed dollar it’s true.” (The “signed dollar” tweet has also been deleted.)
Musk apologized on July 17, but that wasn’t the end of it. Wilson rather irritably told the court that despite the apology, Musk did not retract his “worldwide accusation on Twitter” that Unsworth was a pedophile. Wilson then told the court that Musk’s family office retained a PI to look into Unsworth and on August 28th, instructed the investigator to leak negative information to the press. (It would later emerge that the PI was, in fact, a con man.)
Musk is not coming across particularly well — which is not surprising for someone who broadcast an insinuation, without any shred of evidence, that a barely-public person was a pedophile. I still cannot understand why he didn’t settle and retract his claims. Arrogance, perhaps.
You will thank your comment-blocking browser extension when reading this and seemingly all articles reporting on the trial, as it prevents you from enduring a toxic wasteland of moronic pseudo-legal arguments and Musk worship. Lopatto’s piece, on the other hand, is terrific.
Today, in 2019, if the company was a person, it would be a young adult of 21 and it would be time to leave the roost. While it has been a tremendous privilege to be deeply involved in the day-to-day management of the company for so long, we believe it’s time to assume the role of proud parents — offering advice and love, but not daily nagging!
With Alphabet now well-established, and Google and the Other Bets operating effectively as independent companies, it’s the natural time to simplify our management structure. We’ve never been ones to hold on to management roles when we think there’s a better way to run the company. And Alphabet and Google no longer need two CEOs and a President. Going forward, Sundar will be the CEO of both Google and Alphabet. He will be the executive responsible and accountable for leading Google, and managing Alphabet’s investment in our portfolio of Other Bets. We are deeply committed to Google and Alphabet for the long term, and will remain actively involved as Board members, shareholders and co-founders. In addition, we plan to continue talking with Sundar regularly, especially on topics we’re passionate about!
This seems like huge news — and I suppose it inherently is a big deal for co-founders to step back from their company — but it does not mean that Brin and Page won’t be involved in Alphabet’s direction. This announcement contains nothing about the co-founders’ holding of unique shares that give them extraordinary control over the company. It also doesn’t clarify why the Alphabet holding company was created, what purpose it serves now, and why it needs to be distinct from Google.
The first time I tried to publish new images to Flickr, Lightroom aborted, and the OS put up a dialog warning that the app “magick” isn’t signed and might be dangerous, so it wouldn’t let it launch. “magick” is part of the ImageMagick graphics tool suite, a commonly-used set of image manipulation tools; as of today, the developers haven’t signed it with a developer certificate from Apple, so Apple’s Gatekeeper will reject it.
You can tell the OS to let the app run, but it’s not obvious where to do that. Here’s how:
Try to export some images and get the warning dialog. Then open up the System Preferences app and navigate to the “Security and Privacy” section and the “General” tab. At the bottom of that tab, you should see some text similar to the warning you got in the dialog. There’s an “Allow” button there. If you click it, you’re approving that app as something that’s okay to be launched.
When launching an app directly, the workaround is easier: you can Control-click and choose Open from the contextual menu.
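There is also a command-line route worth sketching: downloaded files carry a quarantine extended attribute, and removing it stops Gatekeeper from flagging that particular file. This is a hedged sketch rather than official guidance, and the path to the magick binary is a hypothetical assumption; adjust it for your own install:

```shell
# Sketch: clear the quarantine flag that triggers Gatekeeper's warning.
# The path below is a hypothetical install location; adjust for your system.
APP="/usr/local/bin/magick"

if command -v xattr >/dev/null 2>&1 && [ -e "$APP" ]; then
  # Downloaded files carry the com.apple.quarantine extended attribute;
  # removing it tells Gatekeeper not to block this file again.
  xattr -d com.apple.quarantine "$APP" 2>/dev/null || true
  result="quarantine attribute cleared"
else
  # Not on macOS, or the binary isn't at the assumed path; do nothing.
  result="skipped: xattr or $APP not present"
fi
echo "$result"
```

Note this only exempts the one file you target; it does not weaken Gatekeeper for anything else you download.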
In both cases, why doesn’t the alert tell you how to resolve the problem (if you do, in fact, trust the software)? In my view, this is poor design and essentially security through obscurity. Apple decided that they don’t want you to run unsigned software, but they don’t want to (or realistically can’t) completely forbid it, so they provide an escape hatch but keep it hidden. macOS doesn’t trust the user to make the right decision, so it acts as though there’s no choice.
The solution to these errors reminds me a little of the de facto standard for burying rarely-toggled options in hidden preferences set via the command line. It’s a pretty clever trick. But the dialog provides no indication that this is possible; it treats unsigned apps as inherently dangerous, not just a risk for the user to take. I know about the secondary-click-to-open trick, but I always forget it when I launch an unsigned app and get spooked before remembering how to proceed.
Perhaps this is the intention, but it makes security far too visible to the user and makes solutions far too opaque. The dialog is unhelpful for average users, and irksome for more technically-capable users. It’s not striking a good balance.
Descriptive error messages are useful; silent failures, misleading dialogs, and vague errors are not.
Russian President Vladimir Putin on Monday signed legislation requiring all smartphones, computers and smart TV sets sold in the country to come pre-installed with Russian software.
The country’s mobile phone market is dominated by foreign companies including Apple, Samsung and Huawei. The legislation signed by Putin said the government would come up with a list of Russian applications that would need to be installed on the different devices.
According to a senior official, informal conversations over the summer indicated that the bill’s main target is Apple, which the law would oblige to preinstall Russian applications on iPhones and iPads. But iOS, the operating system Apple uses, does not allow the preinstallation of third-party applications at all.
At one of the meetings, Apple representatives warned that the introduction of such standards would force the company to “revise its business model in Russia,” Vedomosti wrote in the summer. As of September, the company’s position had not changed, the official said. “The company took this position: we will show you the middle finger; your market is a very small segment of our business, and its loss is insignificant,” he says. Perhaps the authors of the bill were inspired by the example of China, where no one left the market after similar rules were adopted, The Bell’s interlocutor admits. But Russia is not China; there are no levers of pressure on Apple, he states.
I’m not sure what Chinese law the writers are referring to. The only Chinese laws restricting smartphone apps that I can find include one that prohibits preinstalled apps which invade users’ privacy without permission — presumably, this does not include government-monitored services — and one that requires preinstalled apps to be removable. I cannot find a record of a Chinese law requiring the installation of software on devices sold in the country.
This Russian law really is something else. While I could see a situation in which certain apps aren’t available in Russia, I cannot imagine that Apple would sell iPhones specially customized in accordance with the Russian government’s wishes. That’s an indefensible precedent. Russia’s internet policy goals are increasingly distant from the rest of the world. If isolation is what they wish for, the rest of us should not be dragged along.
Now, when you want to share a photo, you no longer have to create an entire album. You can send a one-off message to a friend, so long as they also have Google Photos installed, that contains a photo, just as you would on Instagram, Snapchat, SMS, or any other chat app. If you want to turn that thread into a conversation, you can both start chatting, as well as react to the photos with likes and share more. That way, the photos become a starting point for a conversation, much in the way photos have become just another form of communicating on social platforms.
Since Google Photos is now, effectively, a standalone messaging app in addition to being a place for your photo library, it brings the total count of apps made by the company which have some sort of chat functionality up to six.
Lesley Stahl of CBS’ 60 Minutes interviewed Susan Wojcicki about the state of YouTube:
And what about medical quackery on the site? Like turmeric can reverse cancer; bleach cures autism; vaccines cause autism.
Once you watch one of these, YouTube’s algorithms might recommend you watch similar content. But no matter how harmful or untruthful, YouTube can’t be held liable for any content, due to a legal protection called Section 230.
Lesley Stahl: The law under 230 does not hold you responsible for user-generated content. But in that you recommend things, sometimes 1,000 times, sometimes 5,000 times, shouldn’t you be held responsible for that material, because you recommend it?
Susan Wojcicki: Well, our systems wouldn’t work without recommending. And so if—
Lesley Stahl: I’m not saying don’t recommend. I’m just saying be responsible for when you recommend so many times.
Susan Wojcicki: If we were held liable for every single piece of content that we recommended, we would have to review it. That would mean there’d be a much smaller set of information that people would be finding. Much, much smaller.
I entirely buy the near-impossibility of moderating a platform where hundreds of hours of video are uploaded every minute. It seems plausible that uploads could be held for initial machine review, with a human-assisted second stage — particularly for new accounts — but that’s kind of nitpicking at YouTube’s scale. It would not be preferable to hold YouTube legally accountable for the videos users upload.
However, I do not buy for one second that YouTube should not be held morally accountable for the videos it recommends. The process and intention of recommendations are entirely in YouTube’s hands, and the company can adjust them as it chooses. Watching a video from a reputable newspaper should not suggest a video from a hate group in its “Up Next” feature. Conspiracy theories should not be the first search result, for example; they should be far harder to find. YouTube clearly agrees, and has been making changes as a result. But it isn’t enough. It’s misleading to paint uploads and recommendations with the same brush, and it is worrying that a lack of legal obligations is used to justify moral inaction.
“I did expect some people to be unhappy with the decision, I expected some pushback,” he told The Register, adding: “But the level of pushback has been very strong.”
He was aware, he says, that people would not like two key aspects of the decision: the move from a non-profit model to a for-profit one; and the lack of consultation. […]
Translation: “I, Andrew Sullivan, thought I could get away with exploiting charities and not-for-profits so long as I did so quietly. However, this plan has backfired spectacularly because it turns out that people actually pay attention to this stuff. Who knew?”
[…] He had explanations ready for both: “The registry business is still a business, and this represented a really big opportunity, and one that is good for PIR [Public Interest Registry].”
As for the lack of consultation: “We didn’t go looking for this. If we had done that [consulted publicly about the sale .org], the opportunity would have been lost. If we had done it in public, it would have created a lot of uncertainty without any benefit.”
Translation: “If we had told people about this before the sale, it would have meant answering awkward questions that I very much wish to avoid — then and now.”
Just why is ISOC approving this deal, going back on nearly two decades of non-profit stewardship and infuriating many of its ardent supporters? Is it just money?
Yes and no.
“The lump sum is definitely a benefit,” he admits, before arguing passionately about ISOC’s core missions. “The work ISOC does is focused on policies and connecting the unconnected. There is already a community organisation that covers domain names – and that’s ICANN.”
There are four main issues libraries are having when it comes to accessing ebooks and e-audiobooks, she said. The first is cost: ebooks or e-audiobooks can cost up to five times the price of a print copy for a library, she said.
The second issue is the rise in metered access or expiry dates for ebook licenses. More and more, publishers are making ebook licenses expire after two years, or after a certain number of uses.
Third, some e-audiobooks just aren’t available to libraries at all. That’s because companies like Audible have exclusivity rights on certain titles, blocking libraries from accessing them.
And of course, there’s the recent change by MacMillan, a new type of restriction.
Can you imagine the hysterical reaction if someone had suggested the creation of public libraries today. ‘For free? How are you going to pay for that, STALIN?’
This is not a unique observation in the world of tweet-based observations, but it has remained a nagging thought in the back of my head for years. Libraries have nimbly adapted as they continue to serve community needs, in spite of ridiculous doubts about their continued relevance and twenty-first century roadblocks like those reported above. Libraries deserve ongoing support for the greater good; DRM and other gatekeepers to learning are antithetical to their mission and role.
This Crimea situation is a real shitshow. And so is Apple’s response to it.
Last night, I oversimplified my reaction to Apple’s compliance with Russia’s requirement that maps display Crimea as Russian territory when those maps are viewed in Russia. There’s some subtlety that I neglected to dive into that doesn’t change my objection to Apple’s acquiescence, but helps provide some clarity on why it is objectionable.
The first thing to know is that Apple is not unique in how it recognizes Crimea and disputed territory elsewhere. Google has a similar policy, even saying to Tass, a Russian news agency, that they “fixed a bug” that indicated Crimea was Ukrainian territory. This is similar to the obviously misleading language used by Russia to describe Apple’s change yesterday. Here WeGo — originally developed by Nokia before being spun off as its own company — shows Crimean addresses as Russian when browsing from within Russia, and Ukrainian when browsing elsewhere.
But other mapping software still retains Ukraine’s territorial claim over Crimea, even when browsing using a Russian proxy, including Microsoft’s Bing Maps. OpenStreetMap — used by Facebook, Foursquare, and others — seems to take a middle-ground approach with Crimean addresses shown as being within Ukraine, but with a border around the entire peninsula as though it’s its own country.
This is also a situation that is not entirely unique to Ukraine, Russia, and Crimea. Maps with countries and cities and borders are inherently political — it’s right there in the name — and there are dozens of disputes over borders and sovereignty all around the world. The display of this disputed land is handled differently depending on mapping software and region but, due to the nature of things that are location dependent, this is devilishly difficult to test. I am still not entirely confident in what I found. For example, the region of Kashmir displays in Apple Maps and Google Maps on my iPad as disputed territories; but, if I use Google Maps on the web and switch its region to India, it becomes solidly Indian. A 1961 law prohibits making maps of India that are incongruous with the one made by the Survey of India, so I imagine that Apple’s map would follow suit — but I cannot verify that.
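To make the mechanism concrete, here is a toy sketch of how region-dependent labelling might be modelled behind the scenes. The data and structure are made up for illustration and reflect no vendor’s actual implementation:

```python
# Toy model of region-dependent map labelling (illustrative only):
# the sovereign shown for a disputed territory depends on the region
# the request comes from.
DISPUTED = {
    # territory: (label shown by default, per-viewer-region overrides)
    "Crimea": ("Ukraine", {"RU": "Russia"}),
}

def label_territory(territory, viewer_region):
    """Return the sovereign label a viewer in `viewer_region` would see."""
    default, overrides = DISPUTED.get(territory, (territory, {}))
    return overrides.get(viewer_region, default)

print(label_territory("Crimea", "RU"))  # Russia
print(label_territory("Crimea", "US"))  # Ukraine
```

This also shows why testing is so awkward: the only way to observe the override branch is to make the request appear to come from the relevant region, which is exactly what proxying attempts to simulate.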
I haven’t mentioned Israel and Palestine which, suffice to say, as Jon Stewart once put it, is a “bottomless cup of sadness”.
So it’s not a situation that is specific to Apple’s maps app, nor is it specific to Russia’s occupation of Ukrainian territory. But it remains one of several recent examples of tyrannical leaders wielding influence over American tech companies to further their propaganda campaigns. Apple and Google have little choice but to comply with the laws of the regions in which they operate, no matter how authoritarian.
But they would not have to serve as vehicles for disinformation if they chose not to operate within countries that require such compliance. This doesn’t have to be a wholesale withdrawal. Apple doesn’t have to include Weather or Maps on iPhones sold in Russia, for example; Google could prevent its own maps app from being accessed from within the country. I’m not saying that either company should do this, and I’m sure this solution was at least suggested at both and shot down for reasons not publicly known.
This also becomes vastly more difficult when it comes to Apple’s relationship with Chinese authorities. In August, Google’s Project Zero team announced that iOS vulnerabilities that were patched earlier in the year were actively exploited. Reporters put together the clues and established that the Chinese government was likely responsible for hacking into websites that targeted the oppressed Uyghur population. But Apple’s response mostly nitpicked Google’s description and did not acknowledge the real damage these security bugs caused. Did they worry about whether their Chinese manufacturing facilities would be impacted by a more complete response that acknowledged the damage these vulnerabilities inflicted upon Uyghurs? I don’t know, but it’s awfully concerning that it’s a question that can reasonably be asked. If this was a worry, I maintain that Apple ought to have stayed silent and let press reports do the talking — but that is a last-ditch option that is only slightly more preferable than a complete response. Their purely defensive response was misleading, weak, and capitulating.
Quite simply, any company operating worldwide must set a line that it will not cross. There cannot be limitless ethical bending to appease an audience of countries ranging from liberal democracies to ruthless authoritarian states. Otherwise, products and services will morph from tools for customers into tools for dictators. There is unambiguous precedent.
I’m sure the founders of today’s tech giants did not consider any of this in their nascent days spent in proverbial Silicon Valley garages. Nevertheless, they must respect their responsibility now.
Apple has complied with Moscow’s demands to show Crimea, annexed from Ukraine in 2014, as Russian territory. Crimea & the cities of Sevastopol & Simferopol are now displayed as Rus. territory on Apple’s map & weather apps when used in Russia.
The United Nations continues to recognize Crimea as a Ukrainian territory, describing Russia’s presence on the peninsula as an “occupation”. The Russian state spun Apple’s labelling as an “inaccuracy”, as they are wont to do.
Earlier this year, Foreign Policy reported that Russia had successfully compelled Apple to store Russian users’ data on servers in Russia — adding that if it follows Russian counterterrorism law, it would be forced to decrypt and surrender user data to the government.
In 2017, Apple removed LinkedIn from the App Store in Russia, and there was some speculation that Apple had quietly stopped updating Telegram in the wake of Russia’s call for a ban on the app. (It eventually did make the updates.)
Earlier this year, I linked to the Foreign Policy report on Apple’s migration of Russian users’ iCloud data to local servers, wondering where the company would draw the line. Apple’s limits haven’t been found yet, as it slowly but surely capitulates to strongman leaders and authoritarian states. Give them an inch, they’ll take a peninsula.
Many readers probably believe they can trust links and emails coming from U.S. federal government domain names, or else assume there are at least more stringent verification requirements involved in obtaining a .gov domain versus a commercial one ending in .com or .org. But a recent experience suggests this trust may be severely misplaced, and that it is relatively straightforward for anyone to obtain their very own .gov domain.
Earlier this month, KrebsOnSecurity received an email from a researcher who said he got a .gov domain simply by filling out and emailing an online form, grabbing some letterhead off the homepage of a small U.S. town that only has a “.us” domain name, and impersonating the town’s mayor in the application.
The webpage for the DotGov registry, operated by the General Services Administration, hilariously states that “bona fide government services should be easy to identify on the internet”. They sure should.
By the way, the .gov domain extension is a bizarrely U.S.-only feature of the web that should eventually be abolished. Virtually every other country has its government services associated with a second-level domain with a country-specific domain extension — in Canada, for instance, we use .gc.ca; in the U.K., it’s .gov.uk. American government institutions should be required to use a specific .us address for consistency and equality. Arguably, .mil should follow suit in being decommissioned, and .edu could become available worldwide.
Historically, app makers could ask users for permission to track their location even when they’re not using the app. That was helpful for services that tracked where a user parked their car or where they may have lost a device paired to the phone. But in the new update, app makers can no longer ask for that functionality when an app is first set up — a potentially devastating blow to competitors such as Tile, maker of Bluetooth trackers that help people find lost items.
By contrast, Apple tracks iPhone users’ location at all times — and users can’t opt out unless they go deep into Apple’s labyrinthine menu of settings.
It isn’t exactly true that iPhone users’ locations are always being tracked by the system. Users are asked when setting up their iOS device whether they would like to enable location-based services; they are not automatically opted in. But once a user sets their device up, it’s unlikely they’ll change that setting. There is huge power in being the default, particularly when that’s the default across the entire system for any and all of Apple’s own services that require location access.
There is a fair argument for why this makes sense. Buyers, presumably, have an implied trust in the first-party device manufacturer that cannot be extended to third-party developers. Apple’s track record on privacy is generally good; it would be a false equivalence to compare its requirement of system-forced permission requests with companies like Facebook and Google that inhale user data and spit out creepy advertisements.
“I’m increasingly concerned about the use of privacy as a shield for anti-competitive conduct,” said Rep. David N. Cicilline (R.I.), who serves as chairman of the House Judiciary antitrust subcommittee. “There is a growing risk that without a strong privacy law in the United States, platforms will exploit their role as de facto private regulators by placing a thumb on the scale in their own favor.”
Cicilline is correct: the duty of regulating this stuff should not be passed off to companies motivated less by ethical concerns than revenue.
Twitter will begin deleting accounts that have been inactive for more than six months, unless they log in before an 11 December deadline.
The cull will include users who stopped posting to the site because they died — unless someone with that person’s account details is able to log in.
It is the first time Twitter has removed inactive accounts on such a large scale.
The site said it was because users who do not log in were unable to agree to its updated privacy policies.
My timeline has been humming today with people excited to claim usernames that will likely be freed up, but it seems as though Twitter has — as ever — failed to fully think through their plans. I imagine there are plenty of people out there who occasionally check in on the Twitter accounts of deceased friends and family; Twitter simply has no solution to preserve those memories.
The first is the tax we each pay so that companies can bid against each other to buy traffic from Google. Because their revenue model is (cleverly) built on both direct marketing and an auction, they are able to keep a significant portion of the margin from many industries. They’ve become the internet’s landlord.
The second is harder to see: Because Google has made it ever more difficult for sites to be found, previously successful businesses like Groupon, Travelocity and Hipmunk suffer. As a result, new web companies are significantly harder to fund and build. If you’re dependent on being found in a Google search, it’s probably worth rethinking your plan.
I think there’s a widespread assumption that Google’s search engine is a relatively benevolent and impartial directory of the web at large. The Wall Street Journal’s recent investigation sure makes it sound like that’s the expectation; the authors seemed surprised by how often the ranking parameters are adjusted so that spam, trash, and marketing pablum doesn’t find its way to the top — albeit twisting their findings to imply that Google is pushing a political agenda. There simply isn’t a good way to make search engines truly neutral; that’s fine, but users need to understand that.
Non-Google search engines also need to be more competitive, but it takes time to chip away at a company with complete market share dominance — particularly when they use it as leverage for obtaining an advantage in other markets.
The Washington Post and New York Times have both now struck deals with cellular providers to hype 5G networking for journalism; neither has explained what, exactly, faster cellular networks will do to make journalism any better — where by “better”, in the case of journalism, I mean “more accurate, situated in context, and comprehensive”.
Here’s what the Times said they’d be using 5G to do in their partnership with Verizon:
The Times has journalists reporting on stories from over 160 countries. Getting their content online often requires high bandwidth and reliable internet connections. At home, too, covering live events means photographers might take thousands of photos without access to a reliable connection to send data back to our media servers. We’re exploring how 5G can help our journalists automatically stream media — HD photos, videos and audio, and even 3D models — back to the Newsroom in real-time, as they are captured.
In addition, as news breaks throughout the country, The Post plans to experiment with reporters using millimeter wave 5G+ technology to transmit their stories, photos and videos faster and more reliably, whether they are covering forest fires on the West Coast or hurricane weather in the southeast.
Most journalism is still text. The Times and Post are absolutely doing wonderful things with video, but most of what they produce is still text, and text doesn’t need speed. I can see how photos and video would get to the newsroom faster, but is the speed of delivery really improving journalism?
I hope that the most time-consuming part of a journalist’s job is, and remains, the analysis and research of a story — and having a faster connection does not inherently make someone a better researcher.
[…] It’s pretty telling of the era that nobody at either paper thought such a partnership could potentially represent a possible conflict of interest as they cover one of the most heavily hyped tech shifts in telecom history.
I don’t think either publication would jeopardize its integrity to spike stories about its corporate partners. But as antitrust questions increasingly circle tech companies, it is only a matter of time before questions about the lack of competition amongst ISPs and cellular providers cannot be ignored by lawmakers any longer. These are among the most important stories of our time. Should inherently skeptical publications be cozying up to the subjects of their investigations?
Late last week, people on Twitter started noticing sponsored tweets promoting the island of Eroda, linking to a website advertising its picturesque views, marine life, and seaside cuisine.
The only catch? Eroda doesn’t exist. It’s completely fictional. Musician/photographer Austin Strifler was the first to notice, bringing attention to it in a long thread that unraveled over the last few days.
The creators of the Visit Eroda campaign covered their tracks well. According to Andy Baio, they didn’t leave any identifying information in image metadata, domain records, or in the site’s markup.
I verified a connection between @visiteroda and @Harry_Styles. The Eroda page is using a [Facebook] pixel installed on http://hstyles.co.uk. You can only track websites you have control of. They are related.
I’m not arguing that a promotional campaign for Harry Styles’ new record should be taken as a serious privacy violation; I am, in fact, quite sober. But I think there’s a lesson in how difficult it was for this campaign to keep its identifying data completely disassociated. Its reliance on behaviourally-targeted advertising is what ultimately made it easy to tie the anonymous website back to its creators.
See also: A 2011 article by Andy Baio in which he describes how he was able to figure out the author of an ostensibly anonymous blog because of a shared Google Analytics account.
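Baio’s attribution technique, comparing the tracking IDs embedded in each site’s markup, is simple enough to sketch. A minimal illustration in Python follows; the pixel ID and sample markup are invented, but real pages do carry a numeric account ID inside Facebook’s standard `fbq('init', …)` snippet, and the same ID appearing on two sites means both are wired to one advertising account:

```python
import re

# Facebook's pixel snippet initializes itself with a numeric account ID,
# e.g. fbq('init', '1234567890'). Two sites sharing an ID are tracked
# by the same advertising account, which links their operators.
PIXEL_RE = re.compile(r"fbq\(\s*['\"]init['\"]\s*,\s*['\"](\d+)['\"]")

def pixel_ids(html: str) -> set:
    """Return every Facebook pixel ID embedded in a page's markup."""
    return set(PIXEL_RE.findall(html))

def shared_pixels(html_a: str, html_b: str) -> set:
    """IDs present on both pages: evidence of a common operator."""
    return pixel_ids(html_a) & pixel_ids(html_b)

# Invented markup standing in for the two sites' pages:
eroda_page = "<script>fbq('init', '2712000000000001'); fbq('track', 'PageView');</script>"
hstyles_page = "<script>fbq('init', '2712000000000001');</script>"

print(shared_pixels(eroda_page, hstyles_page))
```

The same comparison works for Google Analytics property IDs, which is exactly how Baio unmasked an anonymous blog back in 2011.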
Sir Tim Berners-Lee has launched a global action plan to save the web from political manipulation, fake news, privacy violations and other malign forces that threaten to plunge the world into a “digital dystopia”.
The Contract for the Web requires endorsing governments, companies and individuals to make concrete commitments to protect the web from abuse and ensure it benefits humanity.
The “contract” — a term I use very loosely, as the only punishment for a signatory’s failure to uphold its terms is to be removed from the list of organizations which support it — is endorsed by usual suspects like the Electronic Frontier Foundation and DuckDuckGo. It also counts as supporters Google, Facebook, and Twitter. Two of the nine principles of the Contract for the Web are about respecting users’ privacy in meaningful ways. You do the math.
So it’s flabbergasting to now see Berners-Lee in the New York Times sidestepping any accountability, and instead promoting himself as the restorer of the web’s virtue. Berners-Lee is pushing what he calls the Contract for the Web, which he describes, with no irony, as a “global plan of action … to make sure our online world is safe, empowering and genuinely for everyone.” He assures us that “the tech giants Google, Facebook, [and] Microsoft” are all “committing to action.” What a relief! Berners-Lee still seems to think Big Tech can do no wrong, even at a time when public and political opinion are going the opposite direction.
I’m not sure I share Butterick’s cynical view of this effort, but I do not see it making a lick of difference in the behaviour or business models of behavioural advertising companies with interactive front-ends.
On October 16, 2019 Bob Diachenko and Vinny Troia discovered a wide-open Elasticsearch server containing an unprecedented 4 billion user accounts spanning more than 4 terabytes of data.
A total count of unique people across all data sets reached more than 1.2 billion people, making this one of the largest data leaks from a single source organization in history. The leaked data contained names, email addresses, phone numbers, LinkedIn and Facebook profile information.
What makes this data leak unique is that it contains data sets that appear to originate from 2 different data enrichment companies.
It’s entirely possible that this data came from a PDL subscriber and not PDL themselves. Someone left an Elasticsearch instance wide open and by definition, that’s a breach on their behalf and not PDL’s. Yet it doesn’t change the fact that PDL is indicated as the source in the data itself and it definitely doesn’t change the fact that my data (and probably your data too), is available freely to anyone who wishes to query their API. I signed up for a free API key just to see how much they have on me (they’ll give you 1k free API calls a month) and the result was rather staggering.
And this is the real problem: regardless of how well these data enrichment companies secure their own system, once they pass the data downstream to customers it’s completely out of their control. My data — almost certainly your data too — is replicated, mishandled and exposed and there’s absolutely nothing we can do about it. Well, almost nothing…
I also signed up for an API key and found records associated with my name and one of my email addresses. Everything in it appears to be scraped from public sources — my name matched outdated LinkedIn data from the time that I thought it was an excellent idea to have a LinkedIn profile, while my email address surfaced a mixed data set.
I am, of course, responsible for putting my information out into the world — if someone can see it, they can copy it. But should they be allowed to store it for as long as they like? I deleted my LinkedIn profile years ago, but People Data Labs still has my employment history from there. Furthermore, my email address was not public or visible on any of my social media profiles, but PDL still managed to connect all of them because it used each social media company’s API to scrape user details. I have little recourse for getting rid of PDL’s copy of this information, short of contacting it and every other “data enrichment” company individually to request deletion. That seems entirely wrong.
At the end of last week, the Internet Society (ISOC) announced that it has sold the rights to the .org registry for an undisclosed sum to a private equity company called Ethos Capital. The deal is set to complete in the first quarter of next year.
The decision shocked the internet industry, not least because the .org registry has always been operated on a non-profit basis and has actively marketed itself as such. The suffix “org” on an internet address – and there are over 10 million of them – has become synonymous with non-profit organizations.
However, overnight and without warning that situation changed when the registry was sold to a for-profit company. The organization that operates the .org registry, PIR – which stands for Public Interest Registry – has confirmed it will discard the non-profit status it has held since 2003 as a result of the sale.
It’s not just a bleak turn of events for the millions of charities and non-profit organizations worldwide that are tied to their domains; Kieren McCarthy’s investigation found suspicious undercurrents behind the sale. Truly one of the year’s most upsetting stories about the web.
After the storm, I was determined to find out why the ‘Report An Outage’ page was so painful to download.
There were 4.6MB of unused code being downloaded on a page whose main and only content, apart from a sign in button, is a form to submit your address.
Paul Boag tweeted an excellent illustration of the benefits of designing for accessibility, dividing barriers into permanent, temporary, and situational occurrences. Performance could easily be framed the same way: some people have permanently restricted bandwidth because of where they live or the device they use; a temporary disruption, like the storm Stimac faced, can knock out connectivity; and situations as simple as getting into an elevator or standing in a crowded city can degrade performance.
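To put 4.6 MB of unused code in perspective, here is a quick back-of-envelope calculation of how long that payload alone takes to transfer at a few link speeds. The speeds are illustrative round numbers, not measurements from Stimac’s post, and the model ignores latency, packet loss, and compression:

```python
# Rough time to transfer 4.6 MB of unused JavaScript at various link
# speeds, ignoring latency, packet loss, and compression.
PAYLOAD_BYTES = 4.6 * 1024 * 1024

def seconds_to_download(bits_per_second: float) -> float:
    """Transfer time for the payload over an idealized link."""
    return PAYLOAD_BYTES * 8 / bits_per_second

for label, bps in [
    ("2G EDGE (~236 kbps)", 236_000),
    ("Degraded 3G (~1.5 Mbps)", 1_500_000),
    ("Typical LTE (~12 Mbps)", 12_000_000),
]:
    print(f"{label}: about {seconds_to_download(bps):.0f} seconds")
```

Even on a serviceable 3G connection, the unused code alone adds close to half a minute of waiting before an outage-report form becomes usable.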
On November 7th, tens of thousands of people across the US woke up to strange text messages from friends and loved ones, occasionally from people who were no longer in their lives, like an ex-boyfriend or a best friend who had recently died. The messages had actually been sent months earlier, on Valentine’s Day, but had been frozen in place by a glitched server and were only shot out when the system was finally fixed nine months later, in the middle of the night.
AT&T, T-Mobile, and Sprint currently use Syniverse to route text messages to people on other networks, according to data available to Tyntec, a smaller messaging services company that spoke with The Verge. T-Mobile confirmed that it uses Syniverse, AT&T declined to comment, and Sprint did not respond to a request for comment. Verizon confirmed that it uses a competitor, SAP.
But for years, industry figures have been sounding the alarm about just such a scenario. The very same Valentine’s Day that the SMS server froze up, a mobile services executive named Thorsten Trapp had flown into Washington to warn lawmakers about Syniverse’s dominance in messaging and other carrier services. He came armed with a series of slide decks laying out Syniverse’s dominance in SMS and MMS messaging, as well as in providing critical services for 2G, 3G, and roaming.
“This thing is monopolized. You have literally only one provider who makes sense in the messaging world,” says Trapp, the chief technology officer of Tyntec. “No innovation, no nothing.” His company is currently suing Syniverse for alleged anticompetitive behavior.
Imagine a parallel universe where antitrust law still had teeth.
Apple Inc. is overhauling how it tests software after a swarm of bugs marred the latest iPhone and iPad operating systems, according to people familiar with the shift.
Software chief Craig Federighi and lieutenants including Stacey Lysik announced the changes at a recent internal “kickoff” meeting with the company’s software developers. The new approach calls for Apple’s development teams to ensure that test versions, known as “daily builds,” of future software updates disable unfinished or buggy features by default. Testers will then have the option to selectively enable those features, via a new internal process and settings menu dubbed Flags, allowing them to isolate the impact of each individual addition on the system.
The news in this story is not that Apple has added a system to hide unfinished changes and new features. Such a process is already in place; that’s how they try to prevent unannounced stuff from showing up in external builds. Nor is it particularly newsworthy that Apple is working on iOS 14. Gurman provides no details about the release, other than writing that it will “rival iOS 13 in the breadth of its new capabilities”, despite the HTML page title implying that the article describes iOS 14 features.
The news seems to be entirely contained in this sentence:
The new approach calls for Apple’s development teams to ensure that test versions, known as “daily builds,” of future software updates disable unfinished or buggy features by default.
From the outside, this feels like something of a rehash of the internal meeting after iOS 11’s similarly buggy release. Federighi announced that the company was pushing features scheduled for iOS 12 into the following year so that there would be a renewed focus on quality. It’s worrying that this is an issue that needs to be emphasized again, and so soon.
Tim Cook isn’t the only tech CEO making friends with the big wet President. But if I were on the same short list as Mark Zuckerberg, I might want to take that as a cue to reconsider my stance.
Also reportedly dining with Zuckerberg and the President was Peter Thiel, a man who once said that he “no longer [believes] that freedom and democracy are compatible”.
Update: For clarification, I understand that working dinners with the President are fairly common for CEOs and other prominent business leaders. They are obviously valuable for in-person lobbying, but I think they create an uncomfortable compromise. The less-formal and cozier setting is unbecoming of CEOs who wish to distance themselves from a discriminatory President.
President Trump just toured a Texas plant that has been making Apple computers since 2013 and took credit for it, suggesting the plant opened today. “Today is a very special day.”
Tim Cook spoke immediately after him and did not correct the record.
The President later made the same claim on Twitter, taking credit for “[bringing] high paying jobs back to America”, which is a lie: it is a manufacturing facility that has been producing the same low-volume product for the past six years. I wish Cook had corrected him, and, since Apple now runs a news subscription business, that he had also defended the reporters subjected to the President’s abuse.
The plant toured on Wednesday, operated by Flex, assembles the Mac Pro, a high-end computer that starts at $6,000. A previous model of the computer was made in the same facility starting in 2013. Apple doesn’t own or operate its own manufacturing and instead contracts with companies like Flex. A Flex spokesperson declined to comment.
This isn’t the first time the big wet President said some bullshit about Apple manufacturing products in the United States. In 2017, he claimed that Apple would open “three big plants, beautiful plants” in the U.S., because he doesn’t know how to match adjectives and nouns. While Apple has invested in American manufacturing, they have not built three factories in the U.S., not even small and ugly ones.
The FCC’s Orwellian-named “Restoring Internet Freedom” order certainly did kill rules preventing internet service providers (ISPs) from abusing their broadband monopolies to harm competitors and consumers. And it did so in a flurry of controversy and fraud, all while ignoring the opinions of a bipartisan majority of Americans who wanted to keep net neutrality in place.
But the industry-backed repeal quietly had a much broader objective: It all-but obliterated the FCC’s authority to hold ISPs accountable for any number of other bad behaviors. Instead, it dumped most telecom oversight on a Federal Trade Commission (FTC) that experts say lacks the resources or authority to police the sector and punish bad behavior.
“The fight over net neutrality has always been about gutting the FCC’s legal authority to protect consumers and promote competition,” said Gigi Sohn, a former FCC lawyer and advisor who helped craft the agency’s original 2015 net neutrality rules.
If there is any consistent theme to this administration and its agencies, it is that they are being plundered for personal gain while being dismantled from the inside, with obviously devastating consequences that will only fully be realized in the years to come.
Ford’s newly revealed electric Mustang SUV, the Mach-E, is quickly becoming one of the more buzzed-about car reveals of the last few years. But while the new EV looked competent at its LA Auto Show debut, the company pretty much whiffed on one really important part of the Mustang Mach-E: the software.
The performance and practicality of the Mustang Mach-E will be big determinants of its success, but the new Sync 4 software that will power the giant 15.5-inch touchscreen at the center of the dashboard will have a major impact on day-to-day life inside this car. That’s why it was disappointing that Ford didn’t offer much of a chance to interact with the software, and in some cases was actively discouraging people from trying to use it.
That’s pretty embarrassing, but so is the Mach-E’s approach to automotive interior design. Just go look at the pictures: there’s a big 15-inch laptop screen just sort of screwed into the dash. It’s not just Ford, either; Volkswagen’s otherwise nice-looking electric wagon concept has the same problem. I’d think it was a case of these companies aping Tesla, but new cars from Mercedes-Benz, Mazda, and Hyundai — among many others — also have poorly-integrated screens of various sizes. The Mercedes and Volkswagen examples are particularly ridiculous because both companies have shown they can do better: the integration of the display in the E-Class is fine, and the screen in our Golf sits perfectly in the centre console.