“Over the last two years, we’ve shown Google irrefutable evidence again and again that they are displaying lyrics copied from Genius,” said Ben Gross, Genius’s chief strategy officer, in an email message. The company said it used a watermarking system in its lyrics that embedded patterns in the formatting of apostrophes. Genius said it found more than 100 examples of songs on Google that came from its site.
Starting around 2016, Genius said, the company made a subtle change to some of the songs on its website, alternating the lyrics’ apostrophes between straight and curly single-quote marks in exactly the same sequence for every song.
When the two types of apostrophes were converted to the dots and dashes used in Morse code, they spelled out the words “Red Handed.”
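As described, the scheme is simple steganography: treat one apostrophe style as a Morse dot and the other as a dash, and cycle that fixed sequence through a song's apostrophes. Here is a minimal sketch in Python (the mapping of straight apostrophes to dots and curly ones to dashes is my assumption; Genius has not published the exact details):

```python
# A sketch of the watermark as described. The straight-to-dot, curly-to-dash
# mapping is an assumption for illustration; only the "Red Handed" payload
# and the alternating-apostrophe mechanism come from the reporting.
STRAIGHT, CURLY = "'", "\u2019"  # ' and ’

# Only the letters needed to spell the payload.
MORSE = {"R": ".-.", "E": ".", "D": "-..", "H": "....", "A": ".-", "N": "-."}

def watermark_sequence(message: str) -> str:
    """Spell a message in Morse, with a straight apostrophe standing in
    for a dot and a curly apostrophe for a dash."""
    morse = "".join(MORSE[letter] for letter in message)
    return morse.replace(".", STRAIGHT).replace("-", CURLY)

def apply_watermark(lyrics: str, sequence: str) -> str:
    """Rewrite each apostrophe in the lyrics to the next style in the
    watermark sequence, cycling when the sequence runs out."""
    result, i = [], 0
    for ch in lyrics:
        if ch in (STRAIGHT, CURLY):
            result.append(sequence[i % len(sequence)])
            i += 1
        else:
            result.append(ch)
    return "".join(result)
```

The sequence for “REDHANDED” is 22 apostrophe styles long, so any lyric with at least 22 apostrophes carries the full phrase; checking whether a scraped copy reproduces that exact alternation is what made the copying provable.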
Google previously scraped material wholesale from Celebrity Net Worth, Yelp, TripAdvisor, and Amazon. The Knowledge Panels and Featured Snippets features are unreliable, and they power a huge scale of Business Insider-style aggregation that discourages reading the source. Google denies any wrongdoing here and has passed the buck to unidentified partners, but it’s disgraceful that the company either cannot stop scraping from smaller businesses or cannot be bothered to verify that materials from partners are reliable and original.
Incidentally, the case of Genius is interesting because song lyrics are not exactly their intellectual property — they are the property of the songwriters, and are published under license. However, there is no centralized lyrics database to which artists can publish; therefore, the lyrics shown on Genius are best-guess original transcriptions, some of which are verified by artists. The lack of a canonical lyrics database is also why the lyrics shown in, say, Apple Music may differ from those on a lyrics website or in the booklet of a physical copy.
Om Malik — who, I think, has been on a roll lately:
“Content” is the black hole of the Internet. Incredibly well-produced videos, all sorts of songs, and articulate blog posts — they are all “content.” Are short stories “content”? I hope not, since that is one of the most soul-destroying of words, used to strip a creation of its creative effort.
You can tell a lot about a person and how they think about their work based on whether or not they use “content” to describe what they do. A photographer who says that he is creating “content” for his YouTube channel is nothing more than a marketer churning out fodder to fill the proverbial Internet airwaves with marketing noise.
I go out of my way to avoid using “content” in any of my writing. That word is only accurate in describing artistic materials that are a vehicle for other purposes, like advertising or product marketing, and which could easily be interchanged with any other material. It’s offensive to anything made with craft or earnest care.
I sincerely hope that what I am about to say is already obvious to anyone empowered to be involved in negotiating legal, political, or labor policy, but: The one thing they don’t want to give you is the thing that you need to get. This offer from Uber and Lyft is like a kidnapper offering you a softer blanket, as long as you agree not to ever escape. No thanks. These companies know very well that once their workers become actual employees, they will get a host of benefits automatically, and they can formally unionize to win themselves many more benefits and increased pay. These companies, which have never made a dollar even while exploiting their workers, fear this. So they offer some concessions.
Uber’s S-1 filing is revealing for why they are fighting so desperately against treating drivers as employees:
If, as a result of legislation or judicial decisions, we are required to classify Drivers as employees (or as workers or quasi-employees where those statuses exist), we would incur significant additional expenses for compensating Drivers, potentially including expenses associated with the application of wage and hour laws (including minimum wage, overtime, and meal and rest period requirements), employee benefits, social security contributions, taxes, and penalties.
Meanwhile, taxicab drivers have been receiving confirmation across the United States that they are employees, not independent contractors. The U.S. Department of Labor found that Stanford Cab in San Jose misclassified drivers, and courts in California and New Jersey have ruled that taxi drivers are employees. If drivers are found to be employees, Uber and Lyft would no longer be able to undercut taxicab companies by not providing full compensation.
While people argue over the balance to strike between environmental preservation and economic activity, no one now denies that this tradeoff exists — that some technologies and ways of earning money must remain off limits because they are simply too harmful.
This regulatory project has been so successful in the First World that we risk forgetting what life was like before it. Choking smog of the kind that today kills thousands in Jakarta and Delhi was once emblematic of London. The Cuyahoga River in Ohio used to reliably catch fire. In a particularly horrific example of unforeseen consequences, tetraethyl lead added to gasoline raised violent crime rates worldwide for fifty years.
None of these harms could have been fixed by telling people to vote with their wallet, or carefully review the environmental policies of every company they gave their business to, or to stop using the technologies in question. It took coordinated, and sometimes highly technical, regulation across jurisdictional boundaries to fix them. In some cases, like the ban on commercial refrigerants that depleted the ozone layer, that regulation required a worldwide consensus.
We’re at the point where we need a similar shift in perspective in our privacy law. The infrastructure of mass surveillance is too complex, and the tech oligopoly too powerful, to make it meaningful to talk about individual consent. Even experts don’t have a full picture of the surveillance economy, in part because its beneficiaries are so secretive, and in part because the whole system is in flux. Telling people that they own their data, and should decide what to do with it, is just another way of disempowering them.
A lawyer for Facebook argued last year that users can have no expectation of privacy if they interact with the company’s products, which I think is dismissive to the point of callousness toward whether users should have an expectation of privacy. A lack of meaningful legislation protecting individual and collective privacy freedoms is increasingly a failure of ethics and responsibility.
All of this is starting to remind me of my ISP, which is not a great sign:
I loathe my ISP.
There is no easy way to increase prices, but there are plenty of wrong ways, and this feels like one of them. Dropbox used the old trick of adding a few features to justify a dramatic cost increase, forcing a choice between a stripped-down plan and one that’s overkill for many individual users’ needs.
My ISP does the same thing: you can get a pathetic 15 Mbps connection for $82 per month — eighty-two dollars per month, in 2019, for speeds that would have been embarrassing on a DSL line ten years ago — or a 100 Mbps connection for $92 per month. That’s not really a choice, of course — it’s a way to get a minimum of another $120 per year from every customer.
In much the same way, Dropbox offers users the choice between their free plan — with just 2 gigabytes of storage and syncing between only three devices — or their 2 terabyte Plus plan, which costs nearly $160 per year in Canada. Will you be having the Dom Pérignon tonight or a glass of mop water?
For my use, Dropbox is practically a utility. I sync some folders and share a few of them with friends. The individual users I know treat it similarly. Steve Jobs may have been overly flippant when he described Dropbox as a feature, but I don’t think he was that far off. A syncing service, much like an internet connection, is supremely useful, but not inherently interesting. My ISP tries to pretend that it’s a high-tech company by, for example, marketing its rebranded mesh WiFi routers, but I just want to pay them as little money as I can every month to have an adequate and reliable internet connection. I think of Dropbox similarly: let me pay a reasonable amount of money to sync stuff between my devices.
I suppose Dropbox’s new client is indicative of their increased emphasis on enterprise customers. It sure seems like they’re more eager to compete with Slack and Microsoft than they are to provide syncing tools to individual customers. I’ll respond accordingly by making sure no files or apps I rely upon are dependent on Dropbox.
During the keynote at the recently concluded WWDC 2019, Apple executives made a big deal about the massive improvements in the Maps. This is a brand new Maps, Apple said. It is rebuilt and has more detailed information about everything from terrain to roads to landmarks. Apple said it drove four million miles to get better, richer data. The new Maps will also allow you to add favorite places and create a list of personal locations that can be shared with friends and family members. It will have a feature called “Look Around,” which is like Google’s Street View but with maybe slightly slicker and smoother visuals.
My reaction to Apple Maps was a shrug. So, they are finally catching up to Google — but will they ever be able to catch up with Google Maps? The WWDC hoopla around this tells me that Apple thinks of Apple Maps as an application, whereas in reality, maps are all about data — something Google understands better than anyone. Google maps are getting richer with data by the day. The more people use those maps to find locations, the deeper their data set gets. In my last visit to Old Delhi, I was able to find antique stores in back alleys with no difficulty at all. Apple Maps was nowhere close.
By the time Apple launched their maps product in 2012, Google had at least a seven year head start from the launch of Google Maps. Seven years is a hell of a long time to collect data, display it, make updates based on feedback, and establish a process for making cartographic and data changes. Coincidentally, Apple Maps is about as old now as Google Maps was when Apple launched their product.
Even so, in my area, Apple Maps has generally equalled Google Maps for my typical low-demand use. Its turn-by-turn directions are decent, its business listings are mostly fine — even though store hours are of dubious reliability — and it has the added bonus of not pestering me to log into an account. But the nature of mapping products is such that their quality is entirely location-dependent. In regions where fewer people use iPhone, the quality of Apple’s maps noticeably suffers. And if Malik’s central thesis is correct — that Apple Maps will forever be a distant second or third to Google Maps — then it is somewhat worrying if usage is as correlated with data quality as it appears to be.
FCC chairman Ajit Pai repeatedly emphasized that eliminating the rules would help smaller ISPs in particular bring competition to the market. “They told us that these rules prevented them from extending their service because they had to spend money on lawyers and accountants,” he said in a June 2018 statement.
A year later, the bargain looks unfulfilled. Evidence remains scant of ISPs saving money from this regulatory rollback, or working to give consumers faster or better broadband options. But they also don’t seem to be using their new power, much less abusing it.
On the other hand, we haven’t seen telco execs indulge their dreams of surcharging sites.
Consumers don’t need a technical understanding of data collection processes in order to protect their personal information. Instead of explaining the excruciatingly complicated inner workings of the data marketplace, privacy policies should help people decide how they want to present themselves online. We tend to go on the internet privately – on our phones or at home – which gives the impression that our activities are also private. But, often, we’re more visible than ever.
Most privacy policies are not documents that would befit their name. They are not policies that ensure the privacy of visitors, users, or customers. They are most often contracts that allow for as much freedom for the company and whatever third parties it designates and as few remedies as possible for signatories. They are terrific examples of the corruption of the definition of privacy.
By yesterday evening, Zupan’s tweet had been collectively shared tens of thousands of times. Even Chrissy Teigen retweeted it to her 11.2 million followers. But the viral tweet’s claim is false, and its premise — that photos at sites of tragedy are inherently self-serving and in poor taste — is misleading.
While the area surrounding the destroyed reactor has undeniably morphed into a tourist destination, and interest in the disaster has spiked since the premiere of HBO’s miniseries Chernobyl, the Instagram geotag offers zero evidence of any uptick in lifestyle influencers visiting the site. Three of the four people that Zupan chose to highlight in his tweet aren’t influencers at all.
Photos like those highlighted in the tweet have been posted on Instagram for years; I know that because I’ve been looking at them for years. I’m not interested in disaster tourism but I’ve long wanted to visit the Duga radar arrays in the Exclusion Zone.
If this silly tweet proves anything about influence, it’s that a television show can spike interest in pretty typical photographs shared on the web — albeit of a place of pain and suffering — and people might ascribe horrible motivations to their subjects.
That’s where Google’s web-dominating Chrome browser (and its nominally free/open cousin, Chromium) come in: these have become the de facto standard for web browsing, serving as the core for browsers like Microsoft Edge and Opera.
And while you can use or adapt Chromium to your heart’s content, your new browser won’t work with most internet video unless you license a proprietary DRM component called Widevine from Google. The API that connects to Widevine was standardized in 2017 by the World Wide Web Consortium, whose members narrowly voted down a proposal to change the membership rules for the W3C to require members not to abuse the DMCA to prevent DRM from becoming a tool to undermine competition.
Prior to 2017, all W3C standards were free for anyone to implement, allowing free/open browser developers to create their own rivals to the big companies’ offerings. But now, a key W3C standard requires a proprietary component to be functional, and that component is under Google’s control, and the company will not authorize free/open source developers to use that component.
Meanwhile, spending on ads in podcasts just keeps growing — most of that is not targeted and only barely tracked through the use of referral or discount codes. Patience Haggin, Wall Street Journal:
U.S. advertisers spent $479.1 million advertising on podcasts in 2018, up 53% from about $313.9 million a year earlier, according to a new report from the industry group Interactive Advertising Bureau and accounting firm PricewaterhouseCoopers LLC.
Podcast advertising is expected to rise to $678.7 million this year, the report said.
Podcasters and their advertisers are still waiting for a standard, widely used method of measuring listens, which would provide more precise audience metrics than download counts.
Podcasts don’t need more specific numbers. Television, radio, and magazines did really well for decades on approximate audience sizes. The appearance of precision is a relatively new phenomenon for advertisers, and it is often inaccurate and, in many ways, inherently flawed.
Also, I’d be willing to bet that podcasters and advertisers aren’t the ones pushing for more data on listeners; ad tech companies conveniently represented by the Interactive Advertising Bureau almost certainly are.
Assistant Attorney General of the United States Makan Delrahim delivered a speech today at the New Frontiers of Antitrust Conference in Tel Aviv:
AT&T held near-monopoly positions in its telephone equipment and its telecommunications service businesses. Its maintenance of those monopolies triggered a series of antitrust complaints. In 1974, the United States sued AT&T for monopolization and alleged a long list of restrictive practices. The company defended its practices and its “integrated” structure by arguing that it offered the public superior price, performance, and innovation. This argument was not successful. After years of litigation, the company agreed to be broken up into separate local and long-distance companies – the “baby Bells” – in 1982 by the Reagan Administration.
President Donald Trump suggested this morning that Attorney General Bill Barr might go after the big tech companies, seeming to confirm rumors that the U.S. Justice Department would soon launch an onslaught against Silicon Valley. The DOJ is reportedly thinking about an antitrust investigation of Google but Trump’s comments today hint that “antitrust” might be a smokescreen for other motives.
What could the real reason be for Trump pursuing a case against Big Tech? Trump has previously railed against American tech giants for being “biased” against conservatives and recently called for a boycott of AT&T because the company owns CNN. Trump’s attacks on CNN are solely based on the fact that the news network will report on Trump’s various crimes against humanity without the rosy shine that viewers might see on Fox News.
Ben Thompson is skeptical of the formidability of antitrust cases against the big tech companies:
Ultimately, when it comes to antitrust actions against tech companies in the U.S., there really isn’t nearly as much there as all of the attendant fervor would suggest. Google is absolutely vulnerable, Apple somewhat less so, and it is very hard to see any sort of case against Facebook or Amazon.
The validity of these rumoured antitrust cases must, of course, be judged independently of the proclivities of the stupid oaf currently spinning in his chair in the Oval Office. Whether Thompson is correct can only be evaluated in courts through long legal proceedings. But it’s hard not to see the remarks Novak describes above as an equal motivator for these investigations in the public square, even if they have no legal standing, and that’s an obvious worry.
The U.S. Justice Department recently signaled it was exploring possible antitrust investigations into Google and Apple. Leading those investigations—assuming they actually happen—would be none other than Assistant Attorney General Makan Delrahim, a former lobbyist that presidential hopeful and Massachusetts Senator Elizabeth Warren says should recuse himself due to his history with both companies.
In a letter to Delrahim, Warren notes Google paid Delrahim about $100,000 in 2007 to lobby federal antitrust officials for the company’s acquisition of DoubleClick Inc, an online ad company. That gig paid off, as it eventually culminated in a $3.1 billion merger. As for Delrahim’s ties to Apple, Warren notes he also lobbied on behalf of the Cupertino giant regarding patent reforms. The letter further emphasizes that Delrahim worked as a corporate lobbyist until 2016, and counted Anthem, Pfizer, Qualcomm, and Caesars among his clients.
I maintain that Google’s purchase of DoubleClick must rank near the top of the list of acquisitions that should never have been allowed.
The archive in Building 6197 was UMG’s main West Coast storehouse of masters, the original recordings from which all subsequent copies are derived. A master is a one-of-a-kind artifact, the irreplaceable primary source of a piece of recorded music. According to UMG documents, the vault held analog tape masters dating back as far as the late 1940s, as well as digital masters of more recent vintage. It held multitrack recordings, the raw recorded materials — each part still isolated, the drums and keyboards and strings on separate but adjacent areas of tape — from which mixed or “flat” analog masters are usually assembled. And it held session masters, recordings that were never commercially released.
UMG maintained additional tape libraries across the United States and around the world. But the label’s Vault Operations department was managed from the backlot, and the archive there housed some of UMG’s most prized material. There were recordings from dozens of record companies that had been absorbed by Universal over the years, including several of the most important labels of all time. The vault housed tape masters for Decca, the pop, jazz and classical powerhouse; it housed master tapes for the storied blues label Chess; it housed masters for Impulse, the groundbreaking jazz label. The vault held masters for the MCA, ABC, A&M, Geffen and Interscope labels. And it held masters for a host of smaller subsidiary labels. Nearly all of these masters — in some cases, the complete discographies of entire record labels — were wiped out in the fire.
This lengthy article is one gut punch after another. Some recordings that were held in Universal’s vault were never digitized; others were digitized long before there were any standards for doing so. And there’s no guarantee that the digital recordings will be adequately preserved, either. We are not yet great at preserving our greatest cultural works.
Tennessee-based Perceptics prides itself as “the sole provider of stationary LPRs [license plate readers] installed at all land border crossing lanes for POV [privately owned vehicle] traffic in the United States, Canada, and for the most critical lanes in Mexico.”
In fact, Perceptics recently announced, in a pact with Unisys Federal Systems, it had landed “a key contract by US Customs and Border Protection to replace existing LPR technology, and to install Perceptics next generation License Plate Readers (LPRs) at 43 US Border Patrol check point lanes in Texas, New Mexico, Arizona, and California.”
On Thursday this week, however, an individual using the pseudonym “Boris Bullet-Dodger” contacted The Register, alerting us to the hack, and provided a list of files exfiltrated from Perceptics’ corporate network as proof. We’re assuming this is the same “Boris” involved in the CityComp hack last month. Boris declined to answer our questions.
Customs officials said in a statement Monday that the images, which included photos of people’s faces and license plates, had been compromised as part of an attack on a federal subcontractor.
CBP would not say which subcontractor was involved. But a Microsoft Word document of CBP’s public statement, sent Monday to Washington Post reporters, included the name “Perceptics” in the title: “CBP Perceptics Public Statement.”
There’s a lot wrong with this. It’s understandable why Customs and Border Protection would have all collected data stored in connected repositories, but it is inexcusable for this data to be unencrypted.
I also get why a contractor would be involved in creating this system, but it’s outrageous that the contractor would have general access to any data after implementation.
Anyway, that’s a lot to unpack before we even get to this part of Harwell and Fowler’s report:
One U.S. official, who spoke on condition of anonymity due to lack of authorization to discuss the breach, said it was being described inside CBP as a “major incident.” The official said Perceptics was attempting to use the data to refine its algorithms as part of a CBP-sanctioned pilot program to match up license plates with the faces of a car’s occupants, which the official said was outside of CBP’s sanctioned use. The official said data from travelers crossing the Canadian border was also included.
This paragraph is unclear in its specifics — how exactly can using data collected in a CBP-sanctioned program be outside of the sanctioned use of that data? I’m sure this makes sense in some way, but it isn’t explained here — but the gist of it is pretty awful. It’s one thing to collect records of individuals entering and leaving the country; it’s wildly different to train facial recognition to associate persons with vehicles and keep track of them, particularly as over half of Americans live in areas where CBP has extra authority.
My face is probably in this breach. Hooray and also sorry.
Apple has since updated its website following the end of WWDC, however, revealing that the new Mac Pro and Pro Display XDR are “coming in September.” This date is listed on Apple’s homepage in an overlay that pops open after clicking on “notify me” under each product, although only in the United States.
Update: Apple’s homepage now says the new Mac Pro and Pro Display XDR are “coming this fall.” It’s unclear if the “September” timeframe was simply a mistake or prematurely revealed information. We’ve yet to hear back from Apple.
In the keynote and the relevant press release, Apple only promised a “fall” availability which, as John Siracusa pointed out, technically gives them until December 20 to meet the self-imposed deadline.
Also, Apple’s non-American websites still show the old Mac Pro model’s marketing pages.
WWDC this year was so big that it will easily eat the news cycle for at least part of this week as well. There’s a lot to discuss, and Patrick Balestra put together a great list of some of the things that didn’t get as much press as, say, the Pro Display XDR’s stand.
The debuts of Project Catalyst and SwiftUI this week reminded me of the way the API layer of Mac OS X was described nearly twenty years ago. Classic was a way to run Mac OS 9 apps without modification, but also without gaining new benefits. Carbon was a way to get some new functionality but without needing too much of a rewrite. Cocoa was the fully modern object-oriented end goal for most developers. It’s unsurprising that AppKit and UIKit, Project Catalyst, and SwiftUI slot fairly well into a similar roadmap.
The biggest difference with this week’s keynote is that these evolutionary stages were, I think, more clearly and transparently described in the Mac OS X introduction.
Though I don’t discount Catalyst’s usefulness — we will get lots of apps new to the Mac — the real news this week was about SwiftUI and the Combine framework. This, finally, is a new way of writing apps, and it’s based on Swift and not on Objective-C. It’s very much not from NeXT.
It’s early. It has bugs. It’s not nearly complete. Sure. But it’s also how we’re going to write apps in the future.
I know a lot of developers who have been working with Apple’s products for decades.
The overwhelming consensus is that we’re seeing something that will change our lives for decades to come.
1976 -> 1984 -> 1996 -> 2008 -> 2019
If SwiftUI were the only thing at WWDC this year, it would have been a gigantic year. It only got about six minutes of stage time during the WWDC keynote; I encourage you to check out Session 204 and Session 216. SwiftUI fits the way my brain works in a way that I’ve never felt with anything other than, like, HTML.
It is impossible to overstate how important the NeXT acquisition and subsequent era was for Apple. But, now, we’re in a new generation, and I could not be more excited to see what the next twenty years is going to look like as a result of this week’s announcements.
Michael Simon, Macworld (content blockers required, obviously):
All said, one of each Pro product will cost you about $25,000, depending on which MacBook Pro and iPad Pro you decide to buy. And that’s the entry-level price. Over the past decade or so, Apple’s Pro products have skyrocketed in price, and now we have a gorgeous Mac Pro and display that costs more than a small sedan. That’s not an Apple tax, that’s an Apple mortgage.
I can already hear the rationalizations: the Mac Pro isn’t for you! That’s why Apple sells the iPad Air! You can buy a MacBook Air! Sure, but for the most part, Apple’s non-Pro products don’t merely represent cheaper versions of their Pro counterparts. They’re completely different machines with older tech. The iPad Air has a home button, the MacBook Air doesn’t have a Touch Bar, etc. (OK, having no Touch Bar might be a benefit, but still.) Apple products have always been luxury items, but it wasn’t that long ago when the most expensive Mac tower topped out at $3,400. Now that doesn’t even get you in the door.
I am sympathetic to arguments that Apple is charging more — and in the case of some products, a lot more — than they used to. It hasn’t gone unnoticed. But I think this article is fairly silly. For one thing, why judge the cost of pro products by adding up the cost of every pro product? Who buys an 11-inch iPad Pro and a 12.9-inch iPad Pro and a 13-inch MacBook Pro and a 15-inch MacBook Pro? What a ludicrous metric.
For another, I’m not even sure that the prices in this article are accurate. Has the most expensive Mac tower ever really “topped out at $3,400”? I tried using the Mac Pro configurator from 2009 through the Internet Archive, and just upgrading the RAM from the single 6 GB stick it shipped with to 32 GB cost a whopping $3,700.
The Pro Display XDR — for which this article contains the now legally required amount of hand-wringing over the cost of the stand — could best be compared to the 30-inch Cinema Display. When that product shipped in 2004, it cost $3,299, and you needed to buy a $599 graphics card to run it. The inflation-adjusted cost of that display combination is just shy of $5,300 in 2019 terms.
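For what it’s worth, that inflation adjustment checks out arithmetically. A quick sketch (the CPI-U annual averages here are approximate figures I’m supplying, not numbers from the article):

```python
# Rough sanity check of the inflation-adjusted figure above.
# The CPI-U annual averages are approximations, not from the article.
display_2004 = 3299   # 30-inch Cinema Display
gpu_2004 = 599        # required graphics card
cpi_2004 = 188.9      # approximate U.S. CPI-U annual average, 2004
cpi_2019 = 255.7      # approximate U.S. CPI-U annual average, 2019

adjusted = (display_2004 + gpu_2004) * cpi_2019 / cpi_2004
print(round(adjusted))  # 5276 — just shy of $5,300, as stated
```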
But there are two other reasons I think this is a poor explanation of the cost of being a pro customer. The first is that some of Apple’s products have actually come down in price. In his pro products list, Simon includes Final Cut Pro X and Logic Pro X, which cost about $300 and $200, respectively. But Final Cut Studio was $1,299 and Logic Studio cost $499. These apps are much less expensive today, even if you add on the cost of Motion and Compressor, which are now sold separately.
The second is that the barrier to entry for doing professional-grade work has dropped dramatically at a technical level. Editing HD video is no longer the rarefied duty of the highest-end Macs, and it hasn’t been that way for a while; my iMac does an acceptable job of manipulating 4K video. Every developer wants their code to compile faster, and a well-specced iMac Pro might be more apt for their needs; an even higher-end Mac may not have the right balance of cost to benefit. A machine like this might have more of a niche use. In combination with my previous argument that Apple’s pro apps have come down in price, a lower hardware barrier to entry also means that a media editing workflow including software may actually be less expensive than it used to be.
I don’t want to make this seem like an apology for Apple’s prices generally. The stuff in the $1,200 to $2,000 price bracket, or thereabouts, is of particular concern to me as I think it’s more of a mixed bag of compromises than it used to be. Putting Retina displays and SSDs in nearly every Mac results in far better but more expensive products, and the company’s reduced profit margins in recent years back that up. But I don’t think the pricing of the new Mac Pro or Apple’s pro products in general is as dire as Simon makes it out to be. It’s just reflective of a more niche customer; and maybe an ostensibly “pro” workflow no longer requires Pro-branded hardware.
Multiple segments of Apple’s Worldwide Developers’ Conference keynote presentation today indicated that Apple is rushing into spaces where other tech companies have already deeply soured customers’ ability to trust them. The presentation doubled down on Apple’s recent privacy-themed advertising campaign, but the problem with this kind of privacy has never been a company’s intentions in the moment; it’s that companies appear to be unable to resist the intense pull of how lucrative customer data can be. As Apple moves into services while its hardware sales slow down, the recent betrayals of other tech companies who implicitly or explicitly promised to be careful with their users’ data loom very large.
Johnston gives examples of how Google and Facebook started out as ostensibly privacy-aware, but have caved to exploiting user data; she questions whether Apple will be different over the long term, and how we can trust that it will remain so. What happens if the next CEO doesn’t care at all about privacy? Surely, users are owed a deeper commitment to the privacy of their data than company culture can guarantee.
I think Apple mostly gets that right by encrypting user data in ways that the company cannot decrypt — in other words, it’s only accessible by the user. Therefore, it is less necessary to trust that they will not abuse user data, as they are not collecting it in a way where they can abuse it. If you have iCloud Backups turned off, much of this data isn’t stored by Apple at all.
This article raises a really great point about privacy’s long-term commitments. Maciej Cegłowski has previously highlighted a hypothetical instance of a queer Russian blogger writing on LiveJournal before its acquisition by a Russian company; shortly thereafter, Russia passed strict homophobic laws, which could put that blogger at risk. Or consider how many apps have scooped up your contact list with your permission — who owns those lists now? What if an indie developer with your contact list in its database gets acquired by a social media giant with a pathological objection to privacy for anyone but its CEO?
It is therefore critically important that user data is encrypted in a way that is impossible for anyone else to decode. Users should be entirely in control of their own data now and forever.