“While everyone was focused on the latest headline crisis coming out of the White House, Congress was able to roll back privacy,” said former Federal Communications Commission chairman Tom Wheeler, who worked for nearly two years to pass the rules.
The process to eliminate them took only a matter of weeks. The blowback was immediate.
Constituents heckled several of the lawmakers at town halls. “You sold my privacy up the river!” one person yelled at Sen. Jeff Flake (R-Ariz.) — lead sponsor of the Senate bill — at a gathering in April. Several late-night comedians roasted congressional Republicans: “This is what’s wrong with Washington, D.C. I guarantee you there is not one person, not one voter of any political stripe anywhere in America who asked for this,” Stephen Colbert said.
I still can’t find anyone who thinks that undoing these rules was a good idea. Even the Republicans’ rationale, summarized in Kindy’s article, is so flimsy that it falls apart under even the most cursory questioning:
The industry, Republican FCC commissioners and lawmakers said the restrictions were too broad and should be limited to highly sensitive data, such as personal medical information, not data gathered from activities like online car shopping. The rules, they said, would cause consumers to miss out on customized promotions. And, opponents said, the threat to privacy was overstated — a provider might learn that a person visited a website but would not typically know what the person did while there.
Do Americans want to see more targeted advertising? No. Do Americans want their internet service provider to retain a full record of all of the websites they visit? Hell no. No shit.
Another revelation in Kindy’s article:
By January, trade groups for tech companies such as Facebook and Google had joined the fight to undo the privacy rules, according to records and interviews. Those companies are regulated by a different government body, the Federal Trade Commission, but they worried that Congress might someday find a way to expand the reach of the rules so that they apply to all technology companies.
One can only hope that explicit opt-in rules do become the norm, and are similarly applied to ISPs and technology companies.
Last week David Sparks wrote a nice little article about text and screen effects in Messages and how Apple is missing the boat by not updating it with new effects, allowing the feature to get stale. It’s a good article in its own right, but it’s also a template. Apple introduces so many things with great fanfare and then forgets to follow up.
There’s a good list in this post, but I have a couple of additions:
Remember Live and Dynamic wallpapers? Neither has been updated since their introductions in 2015 and 2013, respectively.
Remember the “Learn to Play” feature in GarageBand? It was introduced in 2009, and hasn’t been updated since 2010. The artist lesson store is exactly the same as the day it launched nearly eight years ago.
I get that times and priorities change, but it sort of seems like Apple released all of these things, and then instantly forgot about them in the pursuit of the next big thing.
One of the things I love most about the automotive industry is the wild variety of stuff that’s possible from a base of four wheels and a powertrain. Most models are designed to be practical, and that’s why the best-selling cars all look very similar to each other and have basically the same functions, with the notable exception of the Ford F-150.
Even with the vast majority of cars being made by companies focused primarily on practicality, there’s still room in the marketplace for boutique manufacturers. Some of them produce fewer than one hundred units annually, with stratospheric prices: Spyker, Koenigsegg, and Pagani, to name a few. That money doesn’t just buy exclusivity — it also pays for radical innovations. Koenigsegg’s Regera doesn’t have a gearbox, for example, while Pagani is well-known for its innovations in carbon fibre composites.
Other companies price their cars more accessibly, but still have a recalcitrant attitude towards any notion of practicality or real-world usability. Alfa Romeo has a solid track record of making cars with more personality than sense. The Giulia Quadrifoglio, for example, is billed as being a performance sports sedan that can run with the likes of BMW and Mercedes, but Patrick George tested one for Jalopnik and found that it’s still an Alfa Romeo:
This feels readily apparent when you step inside. Boy, does it want to be a BMW 3 Series in there. The gear selector, the dashboard, the center console, the shape of the arm rest, the shape and location of the infotainment system’s control knob — all of it feels like it was traced over from the Bavarians, but badly.
The inside is rife with rough and cheap-feeling plastic, not to mention a persistent rattle from the dash plagued us on our weeklong test.
But that’s okay, says George, because the Giulia goes like stink and sounds like heaven.
And that’s something only Alfa Romeo — and companies like them — can get away with. If BMW’s next M4 drove perfectly but had a crap interior, people would be furious. That’s right: BMWs are, according to their marketing, “ultimate driving machines”, but Alfas have always been more like pure fun, in sheet metal form.
Of course, this kind of separation between mass-market efficiency and small-market experimentation has been happening in the fashion world since the industrial revolution created large-scale manufacturing. Smaller design houses have the opportunity to find a niche for themselves by designing and making garments that transcend clothing, and become wearable art.
Even the camera market shows a clear division between the two biggest camera companies and the rest. Canon and Nikon have always been reliable and safe bets, but you have to go to a company like Leica to find a monochromatic digital camera, or to Ricoh to get a feature like Full Press Snap.1 It’s not that the big two manufacturers can’t introduce models and features like these; it’s that they’re geared for making models for lots of people, rather than for specific people.
So why isn’t there a boutique manufacturer of smartphones, like there is in many other industries? Why isn’t there a company doing interesting things with the basic smartphone formula of a screen, a battery, and a cellular radio? Is there room for one in the marketplace?
It feels like these are the kinds of questions that Andy Rubin is trying to answer with Essential, his new company. They’re planning on making an Amazon Echo competitor and a full “ambient” operating system for internet-of-things devices, but they’re starting with a smartphone called, simply, the Essential Phone.
Most people look at smartphones and see one of the largest and most competitive markets in history, one with no room (or profits) for anyone but Apple or Samsung. And most people complain that there’s no innovation. Rubin disagrees. Vehemently. He sees loads of innovation, but believes companies don’t take advantage of it because they’re simply too big. “When Apple finds some new technology, they’re like, ‘Great, can I have 50 million next quarter?’ Manufacturers are like, ‘No, you can’t. We just invented it,’” he says. Meanwhile, companies design by committee — with too much input from supply chain experts and accountants — and everything moves slowly.
If Essential sells 50 million phones this quarter, Jason Keats, the company’s head of product architecture, is totally screwed. Essential simply cannot produce that many phones. That’s the point. “We’ve gone after technologies and methods of manufacturing that aren’t designed to support 50 million devices,” he says.
I like this attitude. Rubin and the rest of the people who run Essential are smart enough to know that they almost certainly won’t outsell smartphones from Apple or Samsung. But they might be able to produce a far more interesting product, and I think that counts for something.
So, is the Essential Phone interesting? Pierce’s article mentions that it doesn’t use radically different components and it isn’t waterproof, and the Essential website really only points to two noteworthy differences between it and, say, a Samsung Galaxy S8.
The first is that the chassis is made of titanium, which Essential says allows the frame to perform better in drop tests. But after dropping a smartphone, even very finicky people — like me — are much less concerned about the condition of the case than of the display. Even though it’s fifth-generation Gorilla Glass — the same kind first used last year in the Galaxy Note 7 — it’s still prone to shattering on impact.
The second noteworthy difference is the inclusion of magnetic power connectors. Even though we’ve seen similar functionality before in the Microsoft Surface and iPad Pro, I think that’s a cool addition.
The magnetic accessory connectors are probably the most interesting thing about this phone. Aside from that, it still runs stock Android and uses the same kind of internals as plenty of other smartphones. That’s a bit disappointing because, while the Essential Phone may be a perfectly functional device, it’s not as adventurous as I had hoped from a company that’s totally fine with selling fewer units every quarter. If they really are, in the words of their head of product architecture, trying to find “technologies and methods of manufacturing that aren’t designed to support 50 million devices”, I’d love to see more.
Perhaps my expectations are too high here. Perhaps it isn’t possible to have an experimental smartphone company. Cars and fashion are symbols of power, money, prestige, and sex appeal; cameras — even digital ones — are tactile and ultimately personal objects that capture memories. But smartphones have, so far, been utilitarian objects above all else. Is it possible for a consumer tech product to rise to the level of high fashion?
That’s without getting into the inherent uniqueness of the products from more obscure companies. Practically every smartphone, including the Essential, uses parts from the same supply chains and, unless the phone is from Apple, is probably going to run Android. Is it truly possible to have a boutique smartphone company when so much of the phone’s hardware and functionality is predetermined and shared with other phones?
More curiously, I wonder if a boutique smartphone company is something we might even want. One of the most revolutionary aspects of the devices you and I use every day is that they’re the exact same products used by some of the wealthiest people on Earth. The commoditization of technology is probably the greatest equalizer in modern commerce since the invention of the printing press.
Unfortunately, the closest thing the smartphone industry has to a firm making niche devices today is Vertu, a company that charges an absolute fortune for basic Android phones wrapped in leather and gold.
Perhaps that’s the only innovation that’s left: changing the case, while sharing technology with everyone else. But the premise of Essential suggests that there’s so much more that can be done from a company that’s okay with selling fewer units, and not having to worry about working at a phenomenal scale.
On the other hand, maybe the most boutiquey smartphone company is the one that makes the most units of any single model: maybe it’s Apple. They’re building phones using techniques previously reserved for prototyping and small-scale production, designing their own CPUs, and might even start making their own wireless chipsets. They build their own operating system that nobody else can use, and they make design decisions that have a certain Apple-y quality. They build software and hardware for hundreds of millions of people around the world, and must weigh interesting decisions — like a complete redesign of the operating system, or the removal of the headphone jack — against their impact on that many people. Even so, they still do radical things.
That’s what I’m hoping to see from Essential. Maybe it’s all marketing bullshit, but I really like the idea of a company that is more comfortable experimenting with ideas than gunning for sales. It’s early days, so I hope to see the kinds of technologies that can only be built into phones at a scale of, say, hundreds of thousands of units instead of tens of millions. Any market is better when there are more entrants and crazier ideas. The cool thing about a company deliberately limiting their production capacity — as Essential says they are — is that their ideas don’t need to be judged by how well they sell. But I’m still not sold on the idea that a functional consumer electronics device can, truly, be cool.
I’m fully aware that I’m stretching the definition of “boutique” by using Ricoh as an example, by the way. ↩︎
I’ve previously linked to Justin O’Beirne’s well-illustrated essays about Apple Maps and Google Maps, but I think this might be his best yet.
There are effectively two sections in this piece. The first is an exploration of how many changes Google made to a specific section of San Francisco compared to how many times Apple changed the same area, and what those changes are. It’s clear that Google is iterating more on their mapping product with the intention of surfacing more places, more accurately, more of the time.
The second section of O’Beirne’s piece is an extension of that last point. It’s about how Google changed their cartography and priorities over the past year, and it’s worth reading.
I hope Apple’s on-the-ground data collection indicates that they’re pushing for a big improvement soon. But, while they may be working really hard, Google’s designers and engineers aren’t twiddling their thumbs either, and Google is starting from a much stronger base. This article is so good that Apple could almost use it as a to-do list. And they probably should.
Past Apple documentation claimed that device backups in iCloud were encrypted, but that didn’t include some user data like Notes, iMessages, and SMS messages. I don’t know why I didn’t verify this before posting, but I apologize for the error.
Now, I’m correcting the record yet again, because I think I was right the first time: iCloud backups may be encrypted, but not in the same way that iTunes backups are.
I still think this is misleading because it ignores the fact that iCloud backups are encrypted with a key that’s in Apple’s possession. We know this because you can buy a new iPhone and restore your backup simply by entering your Apple ID and password. And we know that your password itself is not the key because Apple’s support people can restore your account access if you forget your password.
Backup keybag is created when an encrypted backup is made by iTunes and stored on the computer to which the device is backed up. A new keybag is created with a new set of keys, and the backed-up data is re-encrypted to these new keys.
And page 17:
iCloud Backup keybag is similar to the backup keybag. All the class keys in this keybag are asymmetric (using Curve25519, like the Protected Unless Open Data Protection class), so iCloud backups can be performed in the background. For all Data Protection classes except No Protection, the encrypted data is read from the device and sent to iCloud. The corresponding class keys are protected by iCloud keys.
It also differs from the expectations set by that knowledgebase article, which says that iCloud “always encrypts your backups” while iTunes “offers encrypted backups (off by default)”.
My — admittedly, entry-level — understanding of everything I’ve read about this is that device backups are, indeed, encrypted in iCloud but users don’t hold the keys — Apple does. The comparison they make to iTunes in that knowledgebase isn’t fair because encrypted backups made using iTunes are entirely in the user’s control, while encrypted backups made using iCloud are in Apple’s control.
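The asymmetric property described in the keybag excerpts is worth making concrete. The sketch below uses toy Diffie-Hellman over a small prime as a stand-in for Curve25519 — an illustration of the general flow, not of Apple’s actual implementation, and definitely not secure cryptography. The point is that a device holding only a public key can encrypt new backup data at any time, in the background, while only whoever escrows the private key can decrypt.

```python
import hashlib
import secrets

# Toy Diffie-Hellman over a Mersenne prime: a stand-in for Curve25519.
# NOT cryptographically secure; it only illustrates the asymmetric flow.
P = 2 ** 127 - 1
G = 3

def keypair():
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

def encrypt_to(recipient_pub, plaintext):
    # An ephemeral key means the encryptor never needs any long-held
    # secret, so a device can encrypt backup data in the background.
    eph_priv, eph_pub = keypair()
    shared = pow(recipient_pub, eph_priv, P)
    keystream = hashlib.sha256(shared.to_bytes(16, "big")).digest()
    ciphertext = bytes(b ^ keystream[i % 32] for i, b in enumerate(plaintext))
    return eph_pub, ciphertext

def decrypt(recipient_priv, eph_pub, ciphertext):
    shared = pow(eph_pub, recipient_priv, P)
    keystream = hashlib.sha256(shared.to_bytes(16, "big")).digest()
    return bytes(b ^ keystream[i % 32] for i, b in enumerate(ciphertext))

# The escrow agent — Apple, in this analogy — keeps the private key;
# the device only ever holds the public half.
escrow_priv, device_pub = keypair()
eph_pub, blob = encrypt_to(device_pub, b"backup chunk")
assert decrypt(escrow_priv, eph_pub, blob) == b"backup chunk"
```

Nothing in this sketch requires the user’s passcode at encryption time, which is why “encrypted” and “only the user can read it” are not the same claim.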
I should have been clearer in my initial link to Ritchie’s article: iCloud should offer an encrypted device backup option with a key tied to the Apple ID password, or to a secondary device. That means that if a user were to change their Apple ID password, the backup would become invalid and a fresh one would need to be created; but, it also makes iCloud backups that much safer.
I think I got this right this time, but please do let me know if I goofed again.
To power its multibillion-dollar advertising juggernaut, Google already analyzes users’ Web browsing, search history and geographic locations, using data from popular Google-owned apps like YouTube, Gmail, Google Maps and the Google Play store. All that information is tied to the real identities of users when they log into Google’s services.
The new credit-card data enables the tech giant to connect these digital trails to real-world purchase records in a far more extensive way than was possible before. But in doing so, Google is yet again treading in territory that consumers may consider too intimate and potentially sensitive. Privacy advocates said few people understand that their purchases are being analyzed in this way and could feel uneasy, despite assurances from Google that it has taken steps to protect the personal information of its users.
This feature was initially launched as part of Google’s Analytics 360 suite last year, but a free version is now being made available as well. According to this Post story, Google says that the way both Attribution products work is by using a broader data collection set, and that various formulas are used to “double-blind” purchases.
However, even if store owners and Google employees never see who purchased what, this still feels wrong on so many levels. For this to be effective, there has to be some association made between a purchaser, whether they have seen an ad, and how that campaign was delivered — through social media, a general website, and so forth. Therefore, there must be enough information to correlate the three factors, which is enough information for specific purchases to be tracked back to an individual. If there isn’t that level of granularity, the service is pointless, isn’t it?
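To make the correlation argument concrete, here is one plausible construction of a “double-blind” match — my speculation, not Google’s documented method. Both parties hash a shared identifier before comparing, so raw identifiers are never exchanged; the salt, email addresses, and set names below are all invented for illustration.

```python
import hashlib

def blind(identifier, salt):
    """Hash an identifier so it is never exchanged in the clear."""
    return hashlib.sha256((salt + identifier).encode()).hexdigest()

SALT = "per-campaign-shared-salt"  # hypothetical value both parties agree on

# The ad platform's side: blinded IDs of users who were shown the ad.
ad_viewers = {blind(e, SALT) for e in ["alice@example.com", "bob@example.com"]}

# The payment network's side: blinded IDs of people who bought in-store.
purchasers = {blind(e, SALT) for e in ["bob@example.com", "carol@example.com"]}

# The overlap can be counted without either side seeing raw identifiers,
# yet each blinded ID still corresponds to exactly one person.
conversions = len(ad_viewers & purchasers)
assert conversions == 1  # only Bob both saw the ad and purchased
```

Notice that even in this blinded form, each matched record still maps to exactly one individual — which is the per-person granularity the service needs to be useful at all.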
The efficacy of Google Attribution leads to another problem: Google is both the seller of advertising, and the company reporting on whether it’s effective. Yuyu Chen, Digiday:
The issue of Google and Facebook grading their own homework is still a big concern for marketers, as recently underscored by WPP CEO Martin Sorrell. Because of the inherent conflict of interest, Crossmedia CEO Kamran Asghar said his agency would never use attribution services from Google or Facebook.
“We do our best to avoid any vendors — be it media or tech — that pose a conflict of interest,” said Asghar. “Google is a media company, and, therefore, clients should monitor it — and all channels — with credible third parties who are independent of selling media.”
Facebook launched a similar offline conversion product last year. Both products rely on treading a very fine line between determining the success of an ad campaign and tracking users on an uncomfortably fine level, and I think they’re overstepping that line in a big way. This feels downright creepy, and it’s the kind of thing only Google and Facebook can do because they’re entrenched into the fabric of the web. That should probably scare you in its own right: these two companies know exactly who uses the web better than anyone else. And, now, they know your offline activities too.
When Apple launched Apple Pay, they made a point of stating that they don’t track transactions over time. I don’t think Apple’s privacy protections necessarily prevent Google and Facebook from associating purchases with ad views, but it can’t hurt to consider using services from companies that build privacy protections into their products and services, instead of those that try to find the thinnest tightrope they can walk between what is and is not considered creepy.
Walt Mossberg penned what is officially the last column of his career for the Verge and Recode, and it’s — as you might imagine — about tech’s journey since he began covering it regularly in 1991, and where it’s going:
I expect that one end result of all this work will be that the technology, the computer inside all these things, will fade into the background. In some cases, it may entirely disappear, waiting to be activated by a voice command, a person entering the room, a change in blood chemistry, a shift in temperature, a motion. Maybe even just a thought.
Your whole home, office and car will be packed with these waiting computers and sensors. But they won’t be in your way, or perhaps even distinguishable as tech devices.
This is ambient computing, the transformation of the environment all around us with intelligence and capabilities that don’t seem to be there at all.
It sounds like Mossberg is excited about this future, if apprehensive about the lack of privacy and security regulations that surround it. While I’m sure he won’t be writing a weekly column, I’d be surprised if we never hear from Mossberg again when there’s so much to discuss.
It is frequently said that the point is not to remove the rules themselves, just to change the enforcing authority to something a little less heavy-handed.
This is a puzzling assertion to make when the proposal itself asks over and over again whether the “bright line” rules of no blocking, no throttling, and so on should be removed. It’s pretty clear that proponents don’t think the rules are necessary and will eliminate them if they can. Just because they frame their preference in the form of a question doesn’t make it any less obvious.
A sort of corollary to this argument is that internet providers will voluntarily adhere to suggested practices. This is a pretty laughable suggestion, and even if it were true, it self-destructs: if companies have no problem subjecting themselves to these restrictions, how can they be as onerous as they say?
We’ll know more about what is and isn’t on the chopping block when the final text of the proposed rules is made available, at which point I’ll update this story.
That weaselly framing has, indeed, persisted in the FCC’s proposal (PDF):
In the Title II Order, despite virtually no quantifiable evidence of consumer harm, the Commission nevertheless determined that it needed bright line rules banning three specific practices by providers of both fixed and mobile broadband Internet access service: blocking, throttling, and paid prioritization. The Commission also “enhanced” the transparency rule by adopting additional disclosure requirements. Today, we revisit these determinations and seek comment on whether we should keep, modify, or eliminate the bright line and transparency rules.
Make no mistake: the FCC is seeking to hamper or eradicate these rules, as Ajit Pai suggested last month, and replace them with a pinky promise.
The TL;DR of it is that demands for the data stored on our iPhones, iPads, and Macs are, unsurprisingly, up.
In this context, it’s important to remember that while Apple protects messages and other personal data with end-to-end encryption, Apple has to turn over iCloud backups when and if required to do so by law.
Unlike local backups, no option is available to encrypt iCloud backups. Possible technical hangups notwithstanding, I’m surprised that’s something that hasn’t yet been made available in iCloud. If iMessages are worth encrypting in transit, surely they’re worth encrypting in a backed-up state as well.
Update: Well this is embarrassing. Via Laurent Boileau, it appears that iCloud backups are, indeed, encrypted (page forty-one of that PDF). Past Apple documentation claimed that device backups in iCloud were encrypted, but that didn’t include some user data like Notes, iMessages, and SMS messages. I don’t know why I didn’t verify this before posting, but I apologize for the error.
The seventy-five-page document (PDF) released today by the FCC represents the clearest view yet of Ajit Pai’s point of view on what ISPs offer, how to regulate providers, and what he sees as the Commission’s role in making sure that the open web continues to thrive. And, in short, it’s a crock of shit.
I anticipate that Karl Bode and Jon Brodkin will explore this proposal — titled “Restoring Internet Freedom”, like a gigantic middle finger to anyone who truly cares about freedom on the internet — on a much deeper level than I can, but I’d like to present a few excerpts for your review.
Americans cherish a free and open Internet. And for almost twenty years, the Internet flourished under a light-touch regulatory approach. It was a framework that our nation’s elected leaders put in place on a bipartisan basis. President Clinton and a Republican Congress passed the Telecommunications Act of 1996, which established the policy of the United States “to preserve the vibrant and competitive free market that presently exists for the Internet … unfettered by Federal or State regulation.”
During this time, the Internet underwent rapid, and unprecedented, growth. Internet service providers (ISPs) invested over $1.5 trillion in the Internet ecosystem and American consumers enthusiastically responded. Businesses developed in ways that the policy makers could not have fathomed even a decade ago.
These are the opening sentences of the proposal, and they already hint at a misleading document. In the context of this proposal, the implication is that internet service providers’ heavy investment in the nineteen years prior to the 2015 decision to classify them under Title II is responsible for the rapid expansion and overwhelming success of online businesses and services. The proposal then goes on to blame Title II classification for an apparent destruction of the internet’s economy:
The Commission’s Title II Order has put at risk online investment and innovation, threatening the very open Internet it purported to preserve. Investment in broadband networks declined. Internet service providers have pulled back on plans to deploy new and upgraded infrastructure and services to consumers. This is particularly true of the smallest Internet service providers that serve consumers in rural, low-income, and other underserved communities. Many good-paying jobs were lost as the result of these pull backs. And the order has weakened Americans’ online privacy by stripping the Federal Trade Commission — the nation’s premier consumer protection agency — of its jurisdiction over ISPs’ privacy and data security practices.
This is complete myth building. ISPs themselves state that Title II has not affected their infrastructure plans, and the vast majority of publicly-traded ISPs actually saw an increase in capital expenditures from 2015–2016, compared to the two years prior. There is no indication that the classification of ISPs as common carriers has impacted either their business or the internet economy as a whole: the stock prices of all major American ISPs have increased over the past five years and, with the exception of Verizon, dramatically so. Of the ten most valuable publicly-traded companies in the world, five are American tech companies — all have a far higher valuation than they did five years ago. Put simply: the internet economy isn’t dying; it’s bigger than it ever has been, and the common carrier designation hasn’t made a dent in that trajectory.
Furthermore, the Commission’s claim that consumer privacy has been affected by the classification of ISPs under Title II is wildly misleading.
But the outright falsehoods in this proposal aren’t nearly as egregious as the way the Commission misinterprets the role of an ISP. The 2015 common carrier designation is based on the FCC’s classification of ISPs as telecommunications companies, rather than information service providers. I’ll get to the latter categorization in a moment, but first, a quick word from the Commission on why ISPs — which categorize themselves in SEC filings as “telecommunications service” companies — are not telecommunications companies:
In contrast, Internet service providers do not appear to offer “telecommunications,” i.e., “the transmission, between or among points specified by the user, of information of the user’s choosing, without change in the form or content of the information as sent and received,” to their users. For one, broadband Internet users do not typically specify the “points” between and among which information is sent online. Instead, routing decisions are based on the architecture of the network, not on consumers’ instructions, and consumers are often unaware of where online content is stored.
What a load of hot garbage. A user specifies what internet connections they wish to make by typing or selecting URLs or addresses over other protocols. That the route chosen by the infrastructure is not directly controlled by the user is immaterial.
The FCC’s argument is akin to stating that someone isn’t driving to a specific destination because they have to pass through other towns along the way, which is simply how roads work, or that FedEx isn’t a courier company because a shipper doesn’t get to choose whether their parcel goes through Memphis.
So how does the FCC define “information service provider”, and why do they think the internet falls under that categorization?
Section 3 of the Act defines an “information service” as “the offering of a capability for generating, acquiring, storing, transforming, processing, retrieving, utilizing, or making available information via telecommunications, and includes electronic publishing, but does not include any use of any such capability for the management, control, or operation of a telecommunications system or the management of a telecommunications service.”
Whether posting on social media or drafting a blog, a broadband Internet user is able to generate and make available information online. Whether reading a newspaper’s website or browsing the results from a search engine, a broadband Internet user is able to acquire and retrieve information online. Whether it’s an address book or a grocery list, a broadband Internet user is able to store and utilize information online. Whether uploading filtered photographs or translating text into a foreign language, a broadband Internet user is able to transform and process information online. In short, broadband Internet access service appears to offer its users the “capability” to perform each and every one of the functions listed in the definition — and accordingly appears to be an information service by definition.
This is the part where things get necessarily lawyerly. For that, we’ll turn to page twenty-seven of a June 2016 ruling (PDF) from the D.C. Circuit of the U.S. Court of Appeals:
In support of its second conclusion — that from the user’s point of view, the standalone offering of broadband service provides telecommunications — the Commission explained that “[u]sers rely on broadband Internet access service to transmit ‘information of the user’s choosing,’ ‘between or among points specified by the user,’” without changing the form or content of that information. … The Commission grounded that determination in record evidence that “broadband Internet access service is marketed today primarily as a conduit for the transmission of data across the Internet.”
The Commission then cited ISPs’ marketing in defence of their position, arguing that their very own ads sell ISPs on the basis of speed and reliability of arbitrary data transfer. That is, they sell themselves as dumb pipes. The Court of Appeals upheld the 2015 Title II reclassification in this and many other decisions.
But the significance of all of this is kind of moot, as Mike Masnick explains:
For Pai to successfully roll back those rules, he’d need to show that there was some major change in the market since the rules were put in place less than two years ago. That’s… almost certainly going to fail in court. Again, this is important: Pai can change the rules, but that rule change will almost definitely be shot down in court.
Congressional net neutrality haters (e.g. those receiving massive campaign contributions from big broadband players…) are well aware that Pai’s plans have no chance in court. Yet, they want there to be this kind of uproar over the plans. They want the public to freak out and to say that this is bad for the internet and all that. Because this will allow them to do two things. First, they will fundraise off of this. They will go to the big broadband providers and act wishy washy on their own stance about changing net neutrality rules, and will smile happily as the campaign contributions roll in. It’s how the game is played.
The second thing they will do… is come to “the rescue” of net neutrality. That is, they will put forth a bill — written with the help of broadband lobbyists — that on its face pretends to protect net neutrality, but in reality absolutely guts net neutrality as well as the FCC’s authority to enforce any kind of meaningful consumer protection. We’ve already seen this with a plan from Senator Thune and this new bill from Senator Mike Lee.
This is really important to keep an eye on. Because, as bad as the proposal released today is — and it’s really bad — the fight won’t be over even if these rules pass, and are then overturned. I’m not very confident that the highly divided and very partisan Congress will get this right.
There are a couple of things you can do if you’re American. First, file a comment with the FCC registering your support for retaining Title II classification for ISPs. Comments become part of the public record, so if this proposal passes over the objections of millions of people, there will be a clear sign that it isn’t in the public interest.
The second thing you can do — if this ever becomes a Congressional issue — is call your public representatives. Urge them to keep the common carrier designation for ISPs. I get that everyone seems to be telling you to call your representatives for a laundry list of reasons, but this is really important. Most everyone seems to agree with keeping the ’net neutral if it’s explained to them, but it can be hard to explain what’s going on here and what is at stake.
And that brings me to the third thing you can do: tell your friends about this, particularly those less technically inclined. Get them engaged, and get them to call as well. Every voice counts, even when it seems like those accountable aren’t listening. They absolutely will be listening if they fuck up the ’net for a generation.
Maybe leveraging those Content Delivery Networks will let you get away with it. But maybe it won’t.
Consider, too, the weight of code that isn’t being written directly, instead being added by plugins and addons. Analytics, advertising, and retargeting scripts tend to consume far too many resources each — combined, they’re a recipe for a website that’s disrespectful to its users by being bloated and invasive. That can’t be saved by code minification.
Apple’s collaborations with Nike on the Watch make so much sense to me; the Watch really is the new iPod, in many ways. I’d love to see a band made out of Nike’s terrific Flyknit material, too — I bet it would feel like a super flexible, super soft version of Apple’s nylon bands.
All we’ve had to go on about Facebook’s guiding principles have been generic platitudes from Zuckerberg until a few months ago, when he gave us a few thousand words of generic platitudes. The company has always clung mightily to vagueness – and secrecy. Facebook says it wants to protect free speech and to avoid censorship. But censorship is something to be avoided because it’s a mis-calibration: Something valuable was prohibited or erased. The banned book was worth reading. The activist’s speech needed to be heard. The silencing was a problem because of the values it acted against. Facebook has never understood that. They’ve operated at the level of the particular, and they have studiously avoided the theoretical that makes that particular worth fighting for.
Sure, if Facebook had decided to take an actual stand, they’d have had detractors. But if they’d been transparent about why, their users would have gotten over it. If you have principles, and you stick to them, people will adjust.
Instead, Facebook seems to change their policies based on the level of outrage that is generated. It contributes to a perception of them as craven and exploitative. This is why Facebook lurches from stupid controversy to stupid controversy, learning the hard way every. single. time.
I think Hazlett is right — Facebook ought to take some sort of stand. But I don’t think they will, because it’s too easy to coast between controversies that most people forget about after a day or two. We have regulatory bodies for a reason; without them, participants in many industries would be similarly reactionary, responding only for as long as the outrage lasts.
Maciej Cegłowski, in an infinitely quotable transcript of a talk he gave at re:publica in Berlin:
The danger facing us is not Orwell, but Huxley. The combo of data collection and machine learning is too good at catering to human nature, seducing us and appealing to our worst instincts. We have to put controls on it. The algorithms are amoral; to make them behave morally will require active intervention.
The second thing we need is accountability. I don’t mean that I want Mark Zuckerberg’s head on a pike, though I certainly wouldn’t throw it out of my hotel room if I found it there. I mean some mechanism for people whose lives are being brought online to have a say in that process, and an honest debate about its tradeoffs.
Cegłowski points out, quite rightly, that the data-addicted tech industry is unlikely to effectively self-regulate to accommodate these two needs. They’re too deeply invested in tracking and data collection, and their lack of ethics has worked too well from a financial perspective.
But real problems are messy. Tech culture prefers to solve harder, more abstract problems that haven’t been sullied by contact with reality. So they worry about how to give Mars an earth-like climate, rather than how to give Earth an earth-like climate. They debate how to make a morally benevolent God-like AI, rather than figuring out how to put ethical guard rails around the more pedestrian AI they are introducing into every area of people’s lives.
The tech industry enjoys tearing down flawed institutions, but refuses to put work into mending them. Their runaway apparatus of surveillance and manipulation earns them a fortune while damaging everything it touches. And all they can think about is the cool toys they’ll get to spend the profits on.
The message that’s not getting through to Silicon Valley is one that your mother taught you when you were two: you don’t get to play with the new toys until you clean up the mess you made.
I don’t see any advantage to having a regulated web. I do see advantages to having regulated web companies.
All of us need to start asking hard questions of ourselves — both as users, and as participants in this industry. I don’t think users are well-informed enough to be able to make decisions about how their data gets used. Even if they read through the privacy policies of every website they ever visited, I doubt they’d have enough information to be able to decide whether their data is being used safely, nor do I think they would have any idea about how to control that. I also don’t think many tech companies are forthcoming about how, exactly, users’ data is interpreted, shared, and protected.
Update: If you — understandably — prefer to watch Cegłowski speak, a video of this talk has been uploaded to YouTube. Thanks to Felix for sending me the link.
Yet these blueprints may also alarm free speech advocates concerned about Facebook’s de facto role as the world’s largest censor. Both sides are likely to demand greater transparency.
I would wager that it’s impossible to come up with a single set of guidelines that can clearly guide the moderation policy for two billion users spread across hundreds of countries. Even being more aware of their existing rulebook is unlikely to be helpful — someone acting nefariously could use them as guidance, while others will certainly see the rules as needlessly prohibitive and claim that Facebook shouldn’t censor any viewpoint, no matter how objectionable.
Facebook currently gets to decide its own level of squeamishness — it’s a private company, of course. But is there a size or scale at which it’s no longer okay for a company to be its own oversight? Until now, no single company has ever connected a quarter of the world’s population. Is it okay for that many people in so many places to be communicating under a rulebook developed by twenty- and thirty-somethings in California?
As Google looks for ways to keep people using its own mobile search to discover content — in competition with apps and other services like Facebook’s Instant Articles — the company is announcing some updates to AMP, its collaborative project to speed up mobile web pages.
Today at the Google I/O developer conference, Google announced that there are now over 2 billion AMP pages covering some 900,000 domains. These pages are also loading twice as fast as before via Google Search. Lastly, the AMP network is now expanding to more e-commerce sites and covering more ad formats.
In Google’s post announcing that AMP pages load faster — which Lunden links to — they also explain some additional capabilities offered to AMP pages:
Many of AMP’s e-commerce capabilities were previewed at the AMP Conf and the amp-bind component is now available for origin trials, creating a new interaction model for elements on AMP pages.
Forms and interactive elements were previously verboten in AMP land, but they’re now allowed through a proprietary — albeit open source — and nonstandard fork of HTML largely developed and popularized by one of the biggest web companies out there.
Quite a few high-profile web developers have weighed in with criticism this year and some, following a Google conference dedicated to AMP, have cautioned users about diving in with both feet.
These, in my view, don’t go far enough in stating the problem and I feel this needs to be said very clearly: Google’s AMP is bad – bad in a potentially web-destroying way. Google AMP is bad news for how the web is built, it’s bad news for publishers of credible online content, and it’s bad news for consumers of that content. Google AMP is only good for one party: Google. Google, and possibly, purveyors of fake news.
I’ve been pretty open about my distrust of ISPs. I think the FCC’s likely destruction of the net neutrality rules will be regarded as a dogma-driven decision that could easily have been averted, and one that will ruin the open web. At the same time, though, we cannot ignore Google’s slow takeover of the web. The world wide web is slowly becoming a Google product, and that’s just as fundamentally flawed as if the web were a division of Comcast.
In the words of Marco Arment, an “email-like product” that doesn’t follow IMAP standards and is, in many ways, a proprietary interpretation of email. ↩︎
I’ve generally had pretty good luck with Spotlight on iOS, but I’ve long noticed that results are delayed or nonexistent after not using it for a little while, particularly if I haven’t rebooted my phone recently. I thought I was losing my mind a little bit, until I found a tip on Twitter from Anand Iyer:
Settings > General > Spotlight Search > toggle Slack off
Just like that, Spotlight seems to be running quickly again. Every query I’ve tried is fast and reliable, even if I don’t use my phone for a while. I don’t know why Slack, in particular, seems to make Spotlight perform so poorly — other apps surely index thousands of messages and require network lookups to complete — but this one weird trick seems to make Spotlight performance issues disappear.
Sounds like there might be widespread problems with Spotlight indexing on iOS 10, because a bunch of readers have written to say they have the same problem but don’t even have Slack installed.
I’ve seen this Slack trick working for a few other people, so I wonder what the common thread is between those of us with Slack installed and those without. Perhaps there are issues with indexing large numbers of items, or perhaps toggling a setting simply rebuilds the Spotlight cache.
Update: I’ve seen reports that 10.3.2 fixes this bug altogether. Daniel Shockley says that he improved Spotlight performance by toggling languages, which makes me think that it is — or was — a bug that can be worked around by clearing a cache.
That wasn’t always the case. It wasn’t that long ago when Sergey Brin enlisted a group of skydivers to introduce the world to Google Glass. Or when Google’s Advanced Technology and Projects (ATAP) division, the skunkworks behind moonshot ideas like modular smartphones, gesture-sensing radar, and clothing with embedded sensors, was a reliable source of shock and awe for I/O attendees.
This year, though, there was no sign of ATAP, which lost its chief visionary Regina Dugan to Facebook last year.
I get the point that Bell is making here: Google has a reputation for having a bit of a quirky attitude that bubbles through their products and services. But I disagree — I’m glad that Google is being a bit more honest in admitting that they are a bona fide corporate entity, not a gigantic startup. Yeah, it’s a bit boring, but it’s the truth.
Of the experiments that Bell mentions, two — Google Glass and Project Ara — are officially on hold, and I wouldn’t bet on either of them returning any time soon. The gestural control system and connected jacket are scheduled to ship later this year, but that also seems to be the status of a lot of Google products: perpetually coming soon.
This is why I think that Pichai’s “boring” opening was a great thing. No, there wasn’t the belligerence of early Google IOs, insisting that Android could take on the iPhone. And no, there wasn’t the grand vision of Nadella last week, or the excitement of an Apple product unveiling. What there was was a sense of certainty and almost comfort: Google is about organizing the world’s information, and given that Pichai believes the future is about artificial intelligence, specifically the machine learning variant that runs on data, that means that Google will succeed in this new world simply by being itself.
Before I/O began this year, Matt Birchler reflected on last year’s event:
Google’s I/O conference last year was big on flash, but little in substance that will actually move users away from iOS. Google Assistant has proven to be a big win for the company, as it has asserted itself as the best voice assistant out there for a lot of things. Google Home, which I don’t own yet, is a strong competitor to the Amazon Echo which has been gaining popularity.
But beyond the Assistant-related announcements, everything else was a bit of a letdown.
This year’s event was nowhere near as flashy. The Android updates seemed a bit obvious — the system now has notification badges for app icons, for example — and that’s probably a good thing. Google’s big-company reality doesn’t really match their wacky persona, and a dizzying array of new messaging apps every year is confusing in the real world. It’s boring, but it’s okay that Google is becoming more reliable and, well, normal.
Surprising absolutely nobody, the FCC today voted 2-1 along strict party lines to begin dismantling net neutrality protections for consumers. The move comes despite the fact that the vast majority of non-bot comments filed with the FCC support keeping the rules intact. And while FCC boss Ajit Pai has breathlessly insisted he intended to listen to the concerns of all parties involved, there has been zero indication that this was a serious commitment as he begins dismantling all manner of broadband consumer protections, not just net neutrality.
The commission will now consider Pai’s proposal, which would repeal the reclassification of broadband providers as “common carriers” (a little like utilities) under Title II of the Telecommunications Act. Pai’s proposed rulemaking would also “seek comment” on the so-called “bright line” rules—no blocking, throttling, or paid prioritization of internet traffic—likely meaning those rules would be watered down or even erased. We won’t know for sure until closer to the final vote, but without Title II authority, the FCC might not be able to enforce those rules anyway.
Much like the conditions a repeal of net neutrality would create, this vote is a clear demonstration that the interests of a few powerful companies are prioritized far above those of the millions of people who don’t have a boatload of cash to spare. This result may have been expected, but that doesn’t make it any less of a pile of horse shit.
Meanwhile, the other Republican, Mike O’Rielly, laid the groundwork for ignoring pro-net neutrality comments that have already flooded in and will likely continue to do so before the vote, saying FCC rules aren’t decided “like a ‘Dancing With the Stars’ contest.” More than 2.1 million comments have already been filed, though as we’ve reported, hundreds of thousands of those appear to be astroturfed, possibly bot-filed anti-net neutrality comments, submitted under the names of other people. But as much as O’Rielly might want to dismiss the comment process, every comment in favor of net neutrality makes it more obvious that Pai’s proposal is something that only ISPs want.
O’Rielly’s disparagement of democracy and Pai’s refusal to take seriously the millions of comments in favour of Title II regulation say everything you need to know about what these jackasses think of Americans’ values and voices.
I’ve been using Things as my primary todo app for as long as I can remember, and it has always been a well-designed and thoughtful app from good people. But the last major version of Things was launched in 2012 and, any way you cut it, that’s a really long time ago for any piece of software. It’s a testament to how good the app is that I — and many others — have stuck with it for so long.
And, now, there’s a new version. I’ve been using Things 3 on all my devices for a while and it’s amazing. I can promise you that this is one of the best-designed apps to grace any Apple platform in a very long time — not just the way it looks, but what it does.
As with many other task managers, you’ll find a plus button in the bottom area of the screen to add new tasks. But in Things for iOS, that button has a special name: the Magic Plus Button.
In one of the most clever methods of task entry I’ve seen, the Magic Plus Button can be dynamically moved around the screen as a way to add additional data. While its default location will always be the lower right corner, the button can be dragged and dropped into different spaces of the app to do different things. Tap and drag the button into your list of projects to create a new project. Drop it into a list of tasks in Today to create a new task in that exact spot. Drop it into the Inbox icon that appears in the lower left corner to create the task in your Inbox. And, my personal favorite, when viewing your Upcoming list, drag and drop the button on to the day when that task needs to be acted on, and you’ve just assigned its start date.
The idea of a persistent button floating in the lower-right sounds very much like it’s pulled from Google’s Material Design guidelines, but it doesn’t feel that way. Cultured Code has clearly given a lot of thought to the way the Magic Plus Button should work, and its visual appearance is a reflection of that — not the other way around. My favourite little tip for this button: drag it to the left side of the screen within a project to create a section.
There’s lots more to love, like calendar integration and the redesigned Areas function, but at its core, it’s still Things. That means bulletproof sync, lots of little details, and a stubborn refusal to compromise their vision for what apps like this should be. I really like this set of updates.
I’ve written frequently here about supporting developers, the race to the bottom in App Store pricing, and the lack of good apps on the Mac App Store. Cultured Code bucks the trend of reducing the prices of their apps or introducing a subscription model, and their apps are better for it. Supporting good developers comes at a real monetary cost: $10 for the iPhone app, $20 on the iPad, and $50 on the Mac. But if a great task management app is what you’re looking for and you don’t want a company doing sketchy stuff with your data, Things might be worth the investment for you. I know that it is for me.