Month: December 2017

Apple:

To address our customers’ concerns, to recognize their loyalty and to regain the trust of anyone who may have doubted Apple’s intentions, we’ve decided to take the following steps:

  • Apple is reducing the price of an out-of-warranty iPhone battery replacement by $50 — from $79 to $29 — for anyone with an iPhone 6 or later whose battery needs to be replaced, starting in late January and available worldwide through December 2018. Details will be provided soon on apple.com.

  • Early in 2018, we will issue an iOS software update with new features that give users more visibility into the health of their iPhone’s battery, so they can see for themselves if its condition is affecting performance.

  • As always, our team is working on ways to make the user experience even better, including improving how we manage performance and avoid unexpected shutdowns as batteries age.

Apple’s PR strategy tends to err on the side of silence. This time, that bit them in the ass, at least initially; but, this is a fantastic response. To my eyes, it ticks the boxes of every complaint I had except timeliness: I don’t see anything here — with the possible exception of the reduction in battery replacement costs — that Apple could not have said back when they first released the 10.2.1 software update.

Kirk McElhearn:

But after ten years, it’s fair to say that the smartphone has become commoditized. The feature set of this device is essentially limited, and there aren’t many new bells and whistles that can be added. So smartphone manufacturers focus on two areas: the camera, since many smartphone owners buy a phone in part to have a good or better camera, and details, including things like security features (Touch ID and Face ID), displays, water resistance, and more. None of these latter features are “killer” features, they are all incremental enhancements. Gone is the day when a new device added, say, the ability to play videos, or faster network access. All the essential features are there. (To be fair, the new iPhone adds augmented reality, but this technology is still too young for this to be a killer feature.)

[…]

Since the announcement of the iPhone X, as a “second” iPhone line, I have been thinking that Apple would keep the “number” iPhone for another generation – iPhone 9 and 9 plus – and release the iPhone X2, before moving all iPhones to the “X” line. They would be able to refine the new interface used to control the iPhone (see the Daring Fireball article linked above for more on the difference in iOS of the iPhone X), and slowly phase it in. But at $200-$300 more than the “number” iPhone, plus a steeper cost for AppleCare, this is a luxury item.

The pricing of the iPhone X is interesting — and I’ll get back to that — but I’m not sure McElhearn is right that the features now common to pretty much any smartphone have commoditized the market. There are still basic features implemented poorly, even on expensive devices. The $699 Essential Phone shipped with a terrible camera app and the $799 Pixel 2 XL has an abysmal display, so there’s still plenty of growth that can happen in the market. Yes, it’s a far more mature market with a higher level of base expectations than, say, five years ago, but it isn’t like we’re swimming in a sea of inexpensive smartphones with excellent screens, cameras, battery life, and apps.

There are also network improvements right around the corner that could mean faster speeds and lower latency. These sorts of improvements could unlock as-yet-unforeseen capabilities that may comfortably qualify as “killer” features — we just don’t know yet.

From a pricing standpoint, though, the iPhone X can be compared to the rest of the iPhone lineup in a similar way to the first Retina MacBook Pro and the rest of Apple’s laptop lineup. A standard 15-inch MacBook Pro started at $1,799 in the United States before and after the introduction of the Retina model; the Retina model started at $400 more, but came with an SSD, twice the RAM, and a double-resolution display. A few months after the 15-inch model was released, a 13-inch Retina MacBook Pro was launched at a price premium of $500 over the standard 13-inch model, at $1,699.

The Retina MacBook Pro eventually became the only model, but still at a price premium over the outgoing models. For example, the 13-inch Retina MacBook Pro started at $1,299 for several years — $200 more than the standard 13-inch model when it was most recently available. The Retina model is, of course, a much better computer, but there’s now a higher barrier to entry. The 15-inch model, meanwhile, now starts at $2,399 — $200 more than the previous starting price for the 15-inch Retina model, and a huge $600 more than the starting price for the non-Retina 15-inch MacBook Pro. Again, aside from the foibles of the most recent MacBook Pro models, I can’t imagine anyone would choose a non-Retina model over anything in the current lineup; but, all of the new MacBook Pros cost a lot more money.

I made a similar argument to McElhearn’s in July, when rumours of the thousand-dollar iPhone price point were brewing, and I wonder where pricing goes from here across Apple’s lineup. Perhaps the introduction of higher pricing tiers with next-generation features gives Apple room to introduce lower-cost products.1 Apple has long been a company that offers accessible premium goods, and I hope that more luxurious products aren’t the new standard.


  1. For example, I think the more-expensive iPad Pro lineup helped make way for the $329 iPad. ↥︎

As usual, Michael Tsai has put together a definitive reference. Lots of great articles in here, including one from Andrei Frumusanu of Anandtech:

The first unique characteristic separating Apple iPhones from other smartphones is that Apple is using a custom CPU architecture that differs a lot from those of other vendors. It’s plausible that the architecture is able to power down and power up in a much more aggressive fashion compared to other designs and as such has stricter power regulation demands. If this is the case, then another question arises: if this is indeed just a transient load issue, why was the power delivery system not designed to be sufficiently robust to cope with such loads at more advanced levels of battery wear? While cold temperature and advanced battery wear are understandable conditions under which a device might not be able to sustain its normal operating conditions, the state of charge of a battery under otherwise normal conditions should be taken into account during the design of a device (battery, SoC, PMIC, decoupling capacitors) and its operating tolerances.

If the assumptions above hold true then logically the issue would also be more prevalent in the smaller iPhone as opposed to the iPhone Plus models as the latter’s larger battery capacity would allow for greater discharge rates at a given stable voltage. This explanation might also be one of many factors as to why flagship Android and other devices don’t seem to exhibit this issue, as they come with much larger battery cells.
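Frumusanu’s capacity point is easy to make concrete with some rough arithmetic. Here’s a minimal sketch — the 5 W transient load and cell voltage are assumptions, and the capacities are approximate published ratings, not measurements:

```swift
import Foundation

// Back-of-the-envelope only: real transient draws are spikier than a
// single steady number, but the ratio is what matters here.
let peakDraw = 5.0   // watts — hypothetical transient SoC load
let voltage  = 3.8   // volts — nominal lithium-ion cell voltage

// Approximate rated capacities, in amp-hours.
let capacities = ["iPhone 6s": 1.715, "iPhone 6s Plus": 2.750]

for (model, ampHours) in capacities.sorted(by: { $0.key < $1.key }) {
    let amps  = peakDraw / voltage   // current the load demands: I = P / V
    let cRate = amps / ampHours      // discharge rate relative to capacity
    print("\(model): \(String(format: "%.2f", cRate))C")
}
// iPhone 6s: 0.77C; iPhone 6s Plus: 0.48C. The smaller cell is pushed
// roughly 60% harder by the same load, so its voltage sags sooner as it
// wears — consistent with the smaller iPhones being more affected.
```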

And a fair point from Tsai himself:

Lastly, how long should we expect a phone to last? Especially one like the iPhone X? With higher prices, the move away from carrier contracts, and diminishing returns for the camera and other new features, it seems natural that people will want to keep their phones longer. But that seems totally at odds with the design and battery choices Apple is making.

On that note, there seems to be some confusion about whether fast charging impacts long-term battery life and degradation. Rene Ritchie says that it does; John Gruber asked Apple and they said it doesn’t.

In hindsight, I think I was too nice in my first piece on this. What I wrote yesterday was that “I don’t think they communicated this very well”. What I should have written was that they didn’t communicate this at all on the record, and that’s not acceptable. I still think that reducing CPU performance is a reasonable choice to make, but perhaps it’s a choice they had to make because of other decisions, like the balance of battery capacity to maximum CPU power draw.

Dan Goodin, Ars Technica:

For a decade, some security professionals have held out extended validation certificates as an innovation in website authentication because they require the person applying for the credential to undergo legal vetting. That’s a step up from less stringent domain validation that requires applicants to merely demonstrate control over the site’s Internet name. Now, a researcher has shown how EV certificates can be used to trick people into trusting scam sites, particularly when targets are using Apple’s Safari browser.

Researcher Ian Carroll filed the necessary paperwork to incorporate a business called Stripe Inc. He then used the legal entity to apply for an EV certificate to authenticate the Web page https://stripe.ian.sh/. When viewed in the address bar, the page looks eerily similar to https://stripe.com/, the online payments service that also authenticates itself using an EV certificate issued to Stripe Inc.

Ian Carroll:

Let’s look at the user interfaces of browsers. On Safari, the URL is completely hidden! This means the attacker does not even need to register a convincing phishing domain. They can register anything, and Safari will happily cover it with a nice green bar. The below screenshot is from this site. Hard to tell, right?

With Chrome, the story is slightly better, but only if you bother to look at the full URL. Chrome has no native way to view anything other than the company name and country of the certificate. Newer versions of Chrome will open the system certificate viewer with two mouse clicks (older versions completely removed viewing the certificate), but the system certificate viewer is useless for any normal user.

By default, Safari will only show the company name in the address bar when a website is loaded with an extended validation certificate; users can reveal the full URL beside the company name by opening Safari preferences and checking the “Show full website address” box under the Advanced tab.

Over the past couple of weeks, you’ve probably noticed a resurgence of the old rumour that Apple deliberately slows down older iPhones with software updates, presumably to encourage users to upgrade. Here’s the post on Reddit from “TeckFire” that, I think, sparked recent rumours to that effect:

[…] Wear level was somewhere around 20% on my old battery. I did a Geekbench score, and found I was getting 1466 Single and 2512 Multi. This did not change whether I had low power mode on or off. After changing my battery, I did another test to check if it was just a placebo. Nope. 2526 Single and 4456 Multi. From what I can tell, Apple slows down phones when their battery gets too low, so you can still have a full day’s charge. […]

John Poole of Primate Labs, the company that runs Geekbench, effectively confirmed the post by examining Geekbench users’ scores in aggregate, and concluded:

If the performance drop is due to the “sudden shutdown” fix, users will experience reduced performance without notification. Users expect either full performance, or reduced performance with a notification that their phone is in low-power mode. This fix creates a third, unexpected state. While this state is created to mask a deficiency in battery power, users may believe that the slow down is due to CPU performance, instead of battery performance, which is triggering an Apple-introduced CPU slow-down. This fix will also cause users to think, “my phone is slow so I should replace it”, not, “my phone is slow so I should replace its battery”. This will likely feed into the “planned obsolescence” narrative.

Here’s what Apple says about this:

Our goal is to deliver the best experience for customers, which includes overall performance and prolonging the life of their devices. Lithium-ion batteries become less capable of supplying peak current demands when in cold conditions, have a low battery charge or as they age over time, which can result in the device unexpectedly shutting down to protect its electronic components.

Last year we released a feature for iPhone 6, iPhone 6s and iPhone SE to smooth out the instantaneous peaks only when needed to prevent the device from unexpectedly shutting down during these conditions. We’ve now extended that feature to iPhone 7 with iOS 11.2, and plan to add support for other products in the future.

This statement is via Matthew Panzarino, who writes:

As that battery ages, iOS will check its responsiveness and effectiveness actively. At a point when it becomes unable to give the processor all of the power it needs to hit a peak of power, the requests will be spread out over a few cycles.

Remember, benchmarks, which are artificial tests of a system’s performance levels, will look like peaks and valleys to the system, which will then trigger this effect. In other words, you’re always going to be triggering this when you run a benchmark, but you definitely will not always trigger this effect when you’re using your iPhone like normal.

Apple’s solution is quite clever here: to make a device last longer during the day with a battery in poor condition, the system simply caps peak performance. Since most activities don’t require that level of performance, users shouldn’t notice this cap in typical usage.

However, for the tasks that do make full use of the CPU, the effect of performance capping can be very noticeable. If search is indexing in the background, or the user is playing a game, or Safari is rendering a complex webpage, the device will feel much slower because it’s hitting the wall of reduced peak performance.1
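To put that in more concrete terms, here’s a minimal sketch of the general idea — all names, frequencies, and thresholds are hypothetical, and this is not Apple’s implementation: the system picks the fastest performance state whose current demand the battery can still safely meet.

```swift
// Power (W) = voltage (V) × current (A). An aged or cold cell cannot
// supply as much peak current without its voltage sagging — which is
// what caused the unexpected shutdowns.
struct BatteryState {
    var voltage: Double           // measured cell voltage, volts
    var peakCurrentLimit: Double  // safe peak draw at this wear level, amps
}

// Candidate CPU performance states, fastest first (Hz).
let frequencySteps = [2.34e9, 1.7e9, 1.4e9, 1.1e9]

// Rough power draw at a given frequency. Linear scaling is illustrative;
// real silicon scales worse than linearly with clock speed.
func powerDraw(atFrequency hz: Double) -> Double {
    let peakWatts = 5.0
    return peakWatts * (hz / frequencySteps[0])
}

// Pick the fastest step the battery can sustain without browning out.
func allowedFrequency(for battery: BatteryState) -> Double {
    for hz in frequencySteps {
        let amps = powerDraw(atFrequency: hz) / battery.voltage
        if amps <= battery.peakCurrentLimit { return hz }
    }
    return frequencySteps.last!  // worst case: slowest step
}

// A healthy battery runs at full speed; a worn one is quietly capped.
let healthy = BatteryState(voltage: 3.8, peakCurrentLimit: 1.4)
let worn    = BatteryState(voltage: 3.6, peakCurrentLimit: 0.8)
print(allowedFrequency(for: healthy))  // 2.34e9 — no cap
print(allowedFrequency(for: worn))     // 1.1e9 — capped well below peak
```

Since everyday tasks rarely demand the top step, a cap like this is mostly invisible — and benchmarks, which always demand it, are exactly what exposed the behaviour.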

Even though I think Apple’s solution is clever and, arguably, right, I don’t think they communicated this very well. I don’t know why Apple would even consider keeping something like this hidden — there are hundreds of millions of iPhones in active use around the world, so it’s guaranteed to be discovered. I understand why they would be reluctant to communicate this to users because it shatters the apparent simplicity of the product, but it would also be trivial to present users with a first-run dialog indicating that the battery is in a poor state and the phone will run with reduced performance until it is repaired. By choosing to implement this quietly, it appears more nefarious than it really is. That doesn’t engender trust.

Update: Apple has long been very good about managing expectations. When an item is backordered in their online store, they almost always beat their own shipping time estimates. The web is awash with stories from users who were pleasantly surprised with free or inexpensive repairs when they went to an Apple Store. This is an instance where they blew it — needlessly, I think.

Update: It would be interesting to know how Android devices handle battery degradation — whether manufacturers employ similar throttling mechanisms, or simply let their phones shut off when CPU power requests exceed what the battery can deliver.


  1. The Geekbench chart for iPhone 6S models running iOS 11.2 has peaks in single-core scores of about 1100, 1400, 1700, 2200, and a big spike at about 2500. The 1400 score is roughly comparable to the speed of an iPhone 6. ↥︎

Justin O’Beirne is back with another one of his well-illustrated essays on the state of digital maps. This one is mostly about Google and the power of knowing about buildings:

At some point, Google realized that just as it uses shadings to convey densities of cities, it could also use shadings to convey densities of businesses. And it shipped these copper-colored shadings last year as part of its Summer redesign, calling them “Areas of Interest”:

[…]

With “Areas of Interest”, Google has a feature that Apple doesn’t have. But it’s unclear if Apple could add this feature to its map in the near future.

The challenge for Apple is that AOIs aren’t collected — they’re created. And Apple appears to be missing the ingredients to create AOIs at the same quality, coverage, and scale as Google.

With Maps in particular, Google has truly learned the value of what Apple has known for quite some time: it pays to own your systems. The data Google has been able to collect for Maps has created staggering competitive advantages for them, and has enabled them to do things none of their competitors are even close to attempting. It makes you wonder why so much of Apple’s mapping effort is, as O’Beirne illustrates, so clearly dependent on third-party data. It also makes you wonder if anyone can catch Google at their rate of progress.

Valentina Palladino, Ars Technica:

Today, Facebook announced that it will start using its facial recognition technology to find photos of you across its site, even if you aren’t tagged in those photos. The idea is to give you more control over your identity online by informing you when your face appears in a photo, even those you don’t know about. According to a Facebook blog post, the new feature is powered by the same AI technology used to suggest friends you may want to tag in your own uploaded images.

The feature, dubbed Photo Review, has one caveat: you’ll only be notified of an untagged photo of yourself if you’re in the intended “audience” of that photo. “We always respect the privacy setting people select when posting a photo on Facebook (whether that’s friends, public, or a custom audience), so you won’t receive a notification if you’re not in the audience,” the blog post says.

To be clear, Facebook is now only making public what they’ve been doing privately for years: building a massive catalogue of recognized faces matched to names, birthdays, locations, and so on. It also means that they likely have an enormous catalogue of faces matched to people who are not members and, therefore, also know those people’s relationships to Facebook users. Kashmir Hill of Gizmodo was told by a Facebook representative that this facial recognition capability is not used for the People You May Know feature, though, so it won’t expose that information publicly right now.

For what it’s worth, Photo Review will not be made available to European or Canadian Facebook users because of local privacy laws. I am completely fine with that.

On Friday, just one day after the FCC’s historic1 vote to kill net neutrality rules, Ajit Pai made an appearance on the “Fox and Friends” morning show to deliver a truly baffling statement.

John Bowden, the Hill:

Federal Communications Commission (FCC) Chairman Ajit Pai said Friday that supporters of net neutrality provisions that were repealed Thursday have been proven wrong, as internet users wake up still able to send emails and use Twitter after the regulations were struck down.

Of course, Pai isn’t stupid, and he knows that this is a completely disingenuous defence. For one thing, the repeal will not take effect until sixty days after it is published in the Federal Register. Yet, even though this explanation is bullshit, it is enabled by two related phenomena.

The first is that net neutrality is a fairly esoteric policy issue, despite sounding simple on the surface. Indeed, net neutrality policies are pretty simple for ISPs and consumers to understand: all traffic passing over the network is treated equally. ISPs must neither favour nor hinder any data. But the pragmatic consequences of not having net neutrality policies in place are much harder to grok.

That has led to people trying to explain the negative impacts of yesterday’s vote with all sorts of analogies and situations, to the point of farce. That’s the second phenomenon: a muddying of the waters by well-meaning activists, writers, and public figures. An internet that lacks these policies means that ISPs have far more power over the data transmitted via their networks. That theoretically means that they could make Twitter load just one word at a time, or they could charge subscribers five dollars per month to access Facebook. But, realistically, that won’t happen.

What is far more likely, in my mind, is a quieter and more insidious campaign by ISPs to create private marketplaces under their control. I don’t think it’s unreasonable to imagine tiered internet plans where “premium” video services — the Netflixes and YouTubes of the world — would get guaranteed smooth service at faster speeds, while other media services would be streamed at the same lower speed as the webpages you read and emails you receive.

Of course, the only way this guarantee would be made is if the ISP were to strike a deal with the media service, and it’s unlikely that consumers would know about this until Netflix inevitably bumps up their rates to make up for their increased costs. Consumers also probably wouldn’t be wholly aware of the dynamics of this scheme if, say, Vimeo were to refuse to pay to be included in the hypothetical premium media package. When every website is loading slowly, you blame your computer, WiFi connection, or ISP; when only a single website is slow, you probably blame the website.

ISPs might even charge enticingly lower rates for a service like this, hoping they’ll make up the difference in increased subscribers and contractually-obligated fees from media services. It will look appealing, especially if the majority of your web experience centres around the giants of the web. But it will also mean that competing services will be fighting against established players that have paid to more deeply entrench themselves in consumers’ web habits.

As major ISPs increasingly consolidate into media conglomerates,2 there’s also reason to worry that they will favour their own media in ways that may not technically violate American antitrust law. Even if they do, regulators may be hesitant to prosecute in a generally-weakened antitrust climate.

But let’s be optimistic for a moment and, like Ajit Pai and Ben Thompson, let’s assume that ISPs will act in consumers’ best interests and somehow innovate with their utility-like service. In other words, let’s assume that the internet of two or three years from now works pretty much the same as it does today, just faster, cheaper, and more available to everyone. Why would anyone want rules like these in place?

In short, they’re a legally-binding guarantee that ISPs must not engage in the kind of behaviour described above. They also prevent ISPs from doing the kinds of things Pai cited as evidence that the internet didn’t collapse the day after his vote: blocking email and Twitter. Given the staggering influence ISPs have over the information and media we consume, the amount of soft power they now have is greatly concerning. They won’t do the things described earlier, but they could; they could do it, but they won’t. They promise not to, because doing so would be absurd in the minds of consumers; it would run counter to our learned expectations of how the internet works.

If you buy the ISPs’ argument that these rules are just needless nannying on the part of the federal government, it’s understandable why they and their lobbyists would want to tell everyone before the FCC’s vote that we should all calm down. That there was nothing to fear, because they profit best when everyone uses their internet connections as they do today. But, as often happens, what ISPs said after the FCC voted to rescind these regulations differs greatly from the picture they painted before.

Jacob Kastrenakes, the Verge:

We reached out to 10 big or notable ISPs to see what their stances are on three core tenets of net neutrality: no blocking, no throttling, and no paid prioritization. Not all of them answered, and the answers we did get are complicated.

[…]

In particular, none of the ISPs we contacted will make a commitment — or even a comment — on paid fast lanes and prioritization. And this is really where we expect to see problems: ISPs likely won’t go out and block large swaths of the web, but they may start to give subtle advantages to their own content and the content of their partners, slowly shaping who wins and loses online.

No kidding.

There’s a reason Comcast removed from their website a promise to not engage in paid prioritization. There’s a reason ISPs fought so hard against Title II and it isn’t because ISPs just love innovation so gosh darn much; and, of course, there’s a reason Comcast is eager to help write new legislation.

So you’ve heard this story before. You know that analogies that try to explain net neutrality often muddy the waters of an already complex issue. You know that ISPs are lobbying the hell out of Congress to try to get a law passed that would take the FCC out of the equation for good. Ajit Pai and the other FCC Commissioners who voted with him know all of this, too. Why write it all again?

Well, there is now a glimmer of hope for you to have your say. Being an appointed and independent body, the FCC is not democratic and is not subject to continued public approval, per se.3 Now, though, this issue has been bunted over to elected officials. Divided as the United States may be today, an overwhelming majority of Americans disagree with the Title II net neutrality repeal. So when your representatives are helping write the new net neutrality rulebook, make sure they know just how much you approve of maintaining Title II-like regulations.

It’s a complicated topic, but I’m sure if you explain it to them really, really slowly, they’ll get it.


  1. Much in the same way that, for example, the Commodity Futures Modernization Act was also historic. ↥︎

  2. Rob Rousseau on Twitter:

    this whole year has been beyond parody but net neutrality rules being reversed against massive popular support on the same day that Disney effectively becomes the world’s only media company is still a bit of a stretch

    ↥︎

  3. I still think that the FCC erred by not taking into account the public comments posted on this issue. ↥︎

Juli Clover, MacRumors:

When the Apple Watch Series 3 first launched, carriers in the United States and other countries where the LTE version of the device is available offered three free months of service and waived activation fees.

That fee-free grace period is coming to an end, and customers are getting their first bills that include the $10 per month service charge.

If you have an Apple Watch Series 3 with LTE functionality, you’ve probably already learned that $10 is not all it’s going to cost per month. On carriers like AT&T and Verizon, there are additional service charges and fees, which means it’s not $10 per month for an Apple Watch, it’s more like $12-$14.

I still think it’s egregious that carriers are charging anything more than an administrative fee — at most — to use an LTE Apple Watch on their network. You don’t get any additional data allotment by adding an Apple Watch to your plan; if anything, the data a Watch will use will be dramatically less than that used by a smartphone. Yet another subscription is the kind of thing that makes me wary of an LTE Apple Watch — not necessarily because of the price, but because of the ethics. I bet most consumers can tell that this is nothing more than a money grab by carriers.

Jason Koebler, Vice:

A core Republican talking point during the net neutrality battle was that, in 2015, President Obama led a government takeover of the internet, and Obama illegally bullied the independent Federal Communications Commission into adopting the rules. In this version of the story, Ajit Pai’s rollback of those rules Thursday is a return to the good old days, before the FCC was forced to adopt rules it never wanted in the first place.

[…]

But internal FCC documents obtained by Motherboard using a Freedom of Information Act request show that the independent, nonpartisan FCC Office of Inspector General — acting on orders from Congressional Republicans — investigated the claim that Obama interfered with the FCC’s net neutrality process and found it was nonsense. This Republican narrative of net neutrality as an Obama-led takeover of the internet, then, was wholly refuted by an independent investigation and its findings were not made public prior to Thursday’s vote.

When little to no proof supports the arguments that you are making, and what evidence does exist actually refutes your stance, do you simply hide it? Congratulations — you, too, could be a Republican FCC commissioner.

As expected and in spite of overwhelming public and business support for net neutrality rules, the FCC just voted along party lines to strip themselves of the power to meaningfully regulate internet service providers. But just because appointed FCC Commissioners like Ajit Pai have no respect for the public, that doesn’t mean this is over.

Kayleigh Rogers, Vice:

The first course of immediate action will be for net neutrality proponents to pressure Congress to use the Congressional Review Act to pass a resolution of disapproval. This is a mechanism that allows Congress to overrule any regulations enacted by federal agencies. You might remember it’s the tool that the GOP used to eliminate broadband privacy protections earlier this year.

“The CRA is our best option on Capitol Hill for the time being,” said Timothy Karr, a spokesperson for the Free Press Action Fund, an open internet advocacy group. “We’re not interested in efforts to strike a Congressional compromise that are being championed by many in the phone and cable lobby. We don’t have a lot of confidence in the outcome of a legislative fight in a Congress where net neutrality advocates are completely outgunned and outspent by cable and telecom lobbyists.”

A lot more work needs to be done. Title II regulations are an effective and well-rounded way to treat ISPs more like the utility providers they really are, but a bill could be passed that places a Title II-style framework into a modern context for the internet, if there’s enough public pressure to do so. Time for Americans to get to work.

Update: New York Attorney General Eric Schneiderman is suing to block this repeal. He pointed out yesterday that millions of comments on this topic were posted under real people’s names without their knowledge or consent, and that the FCC has refused to allow an investigation into this matter.

Garrett M. Graff, Wired:

The most dramatic cybersecurity story of 2016 came to a quiet conclusion Friday in an Anchorage courtroom, as three young American computer savants pleaded guilty to masterminding an unprecedented botnet — powered by unsecured internet-of-things devices like security cameras and wireless routers — that unleashed sweeping attacks on key internet services around the globe last fall. What drove them wasn’t anarchist politics or shadowy ties to a nation-state. It was Minecraft.

Minecraft may have been the motive and three college students may have been the perpetrators, but the reason this attack was so successful is that so many internet-of-things device manufacturers don’t prioritize security, and nobody really checks to make sure any of these products have been tested for trivial loopholes.

We’re used to extension cords being certified that they won’t burst into flames when you plug them in. Microwaves and cellphones get tested by regulatory bodies to ensure that they won’t fry living organisms. We expect our cars to be built to withstand moderate collisions. These processes don’t prevent all problems, but they do help maintain standards and provide third-party verification that the manufacturer did a good job.

But there are millions and millions of devices out there — including medical devices — connected to the same network that people use to play Minecraft, and there’s no certification process in place or agreed-upon standards outside of industry practices. In the United States, there’s a division of the Department of Homeland Security called US-CERT that monitors devices and software for vulnerabilities, but only after they go on sale. The FDA is perhaps at the forefront of keeping devices safe: they monitor consumer medical devices and maintain software standards.

I’m not necessarily arguing that every device and software update ought to go through an extensive pentesting process, but there is a reasonable argument to be made that internet-of-things devices should be subject to a little more scrutiny. The industry is currently not doing a good enough job regulating itself, and their failures can have global effects. Some sort of standards body probably would slow down the introduction of these products, but is the possibility of a global attack on the internet’s infrastructure a reasonable price to pay for bringing a device to the market a little bit faster?

Maya Kosoff, Vanity Fair:

When Ajit Pai, the Trump-appointed head of the Federal Communications Commission, announced his intention to roll back Obama-era net-neutrality guidelines, gutting rules that prevent Internet service providers from charging companies for faster access or from slowing down or speeding up services like Netflix or YouTube, he was quick to claim that critics of his plan—Internet freedom groups and smaller Internet companies that can’t afford so-called “fast lanes”—were overreacting. “They greatly overstate the fears about what the Internet will look like going forward,” Pai said on Fox & Friends. Pai’s proposal, which would put in place a voluntary system reliant on written promises from I.S.P.s not to stall competitors’ traffic or block Web sites, essentially serves as a road map to radically reshape the Internet. But like Pai, I.S.P.s and others in the telecom industry have curiously insisted that consumers and smaller companies have nothing to fear when it comes to net-neutrality reform.

ISPs also promise to be at your house between noon and 3:00 PM to check out your slow internet connection which, of course, is because they oversold your neighbourhood and overstated likely end-user speeds; but, sure, let’s trust them to play fair when they have few incentives to do so.

Mark Gurman and Ian King in a February 2017 report for Bloomberg:

Apple Inc. is designing a new chip for future Mac laptops that would take on more of the functionality currently handled by Intel Corp. processors, according to people familiar with the matter.

The chip, which went into development last year, is similar to one already used in the latest MacBook Pro to power the keyboard’s Touch Bar feature, the people said. The updated part, internally codenamed T310, would handle some of the computer’s low-power mode functionality, they said. The people asked not to be identified talking about private product development. It’s built using ARM Holdings Plc. technology and will work alongside an Intel processor.

[…]

The current ARM-based chip for Macs is independent from the computer’s other components, focusing on the Touch Bar’s functionality itself. The new version in development would go further by connecting to other parts of a Mac’s system, including storage and wireless components, in order to take on the additional responsibilities. Given that a low-power mode already exists, Apple may choose to not highlight the advancement, much like it has not marketed the significance of its current Mac chip, one of the people said.

It sounds like this is the chip that is included in the iMac Pro, even though Gurman and King cite lower-power tasks as the focus of its development. Steven Troughton-Smith in November:

This looks like the iMac Pro’s coprocessor (Bridge2,1) will be an A10 Fusion chip with 512MB RAM […] So first Mac with an A-series chip

Rene Ritchie tweeted today that the A10 has been rebranded “T2” — as in, a successor to the T1 chip in Touch Bar MacBook Pro models.

Cabel Sasser of Panic received an iMac Pro review unit from Apple, and tweeted about the T2’s functionality:

It integrates previously discrete components, like the SMC, ISP for the camera, audio control, SSD control… plus a secure enclave, and a hardware encryption engine.

This new chip means storage encryption keys pass from the secure enclave to the hardware encryption engine in-chip — your key never leaves the chip. And it allows for hardware verification of OS, kernel, boot loader, firmware, etc. (This can be disabled…)

In addition to the enhanced security measures Sasser notes, a couple more things are very exciting about Apple’s gradual rollout of a proprietary coprocessor in their Mac lineup. The T2 sounds like it expands upon some of the input mechanism security measures of the T1, so the keyboard and built-in camera are more secure than previous implementations. And, as Guilherme Rambo noticed, it can enable “Hey, Siri” functionality on the Mac. Apple hasn’t enabled that functionality yet, so now it’s a question of when.

John Voorhees, MacStories:

Apple updated its website with news that the iMac Pro is shipping beginning on December 14, 2017. The pro-level iMac features a long list of impressive specifications. The desktop computer, which was announced in June at WWDC, comes in 8, 10, and 18-core configurations, though the 18-core model will not ship until 2018. The new iMac can be configured with up to 128GB of RAM and can handle SSD storage of up to 4TB. Graphics are driven with the all-new Radeon Pro Vega, which Apple said offers three times the performance over other iMac GPUs.

Apple provided Marques Brownlee (MKBHD) and another YouTuber, Jonathan Morrison, with review units, and they seem effusively positive, with the exception of some concerns about the machine’s lack of post-purchase upgradability.

Of note, there’s nothing on the iMac Pro webpage nor in either of the review videos about the Secure Enclave that’s apparently in the machine, nor is there anything about an A10 Fusion chip or “Hey, Siri” functionality. These rumours were supported by evidence in MacOS; it isn’t as though the predictions came out of nowhere. It’s possible that these features will be unveiled on Thursday when the iMac Pro becomes available, or perhaps early next year with a software update, but I also haven’t seen any reason for the Secure Enclave — the keyboard doesn’t have a Touch Bar, nor is there Touch ID anywhere on this Mac.

Update: Filmmaker and photographer Vincent Laforet:

I found a very consistent set of results: a 2X to 3X boost in speed (relative to my current iMac and MacBook Pro 15”) — a noticeable leap from most generational jumps, which are generally ten times smaller.

Whether you’re editing 8K RED video, H.264 4K Drone footage, 6K 3D VR content or 50 Megapixel RAW stills – you can expect a 200-300% increase in performance in almost every industry leading software with the iMac Pro.

Mechanical and aerospace engineer Craig Hunter:

Most of my apps have around 20,000-30,000 lines of code spread out over 80-120 source files (mostly Obj-C and C with a teeny amount of Swift mixed in). There are so many variables that go into compile performance that it’s hard to come up with a benchmark that is universally relevant, so I’ll simply note that I saw reductions in compile time of between 30-60% while working on apps when I compared the iMac Pro to my 2016 MacBook Pro and 2013 iMac. If you’re developing for iOS you’ll still be subject to the bottleneck of installing and launching an app on the simulator or a device, but when developing for the Mac this makes a pretty noticeable improvement in repetitive code-compile-test cycles.

These are massive performance gains, even at the 10-core level; imagine what the 18-core iMac Pro is going to be like. And then remember that this isn’t the Mac Pro replacement — it’s just a stopgap while they work on the real Mac Pro replacement.

Update: Rene Ritchie says that the A10 Fusion SoC is, indeed, present in the iMac Pro, albeit rebranded as a T2 coprocessor.

Matt Birchler used a Pixel 2 instead of his usual iPhone for a couple of months, and has started publishing pieces about his experience and impressions. It’s worth your time to start with part one and work your way through his thoughts, but this bit from the “Performance and Stability” section stood out to me:

[…] My time with Android has shown it to be anything but the “stable” alternative to iOS. Just 31 days into my time with the Pixel 2, I had to restore my phone to factory settings to fix the errors I was experiencing. In addition to the issues raised in that post, I have also had issues with apps crashing, notifications staying on silent even though I have them set to vibrate, random reboots, and more. After a few weeks I actually stopped reporting my Android bugs to Twitter because it was getting too depressing.

As Birchler points out, everyone’s bug experiences are different because, even with the relatively limited configuration options available for mobile devices — compared to, say, PCs — there are still billions of possible combinations of languages, WiFi networks, apps, settings, and so on. In light of recent iOS bugs, though, it’s remarkable to recognize that there’s still a lot of work to be done all around. Bugs shake the trust we place in our devices and may even make us consider switching, but user reports like these are a reminder that the alternative isn’t any more stable. I’m not mocking Android users here or the OS itself; it’s just something worth recognizing.

Hey, remember how Andy Rubin temporarily stepped away from Essential after reporters from the Information started asking questions about what they called an “inappropriate relationship” between him and a woman who worked for him while at Google? Theodore Schleifer of Recode has the latest:

Andy Rubin, the founder of smartphone startup Essential, has already returned to his company less than two weeks after it was announced that he took a leave of absence amid questions about an alleged inappropriate relationship.

[…]

Even while on leave from Essential, Rubin was still able to show up to work at the same physical workplace. That’s because he did not take a similar leave from Playground Global, the venture capital firm he founded, which shares the same office space as Essential.

It will come as no surprise to you that Playground Global also has an investment in Essential, so what did Rubin’s leave of absence truly mean?

Jon Brodkin, Ars Technica:

There have been two major versions of the FCC’s transparency requirements: one created in 2010 with the first net neutrality rules, and an expanded version created in 2015. Both sets of transparency rules survived court challenges from the broadband industry.

The 2010 requirement had ISPs disclose pricing, including “monthly prices, usage-based fees, and fees for early termination or additional network services.”

That somewhat vague requirement will survive Pai’s net neutrality repeal. But Pai is proposing to eliminate the enhanced disclosure requirements that have been in place since 2015.

The 2015 disclosures that Ajit Pai’s proposal would undo include transparency on data caps and on additional monthly fees for things like modem rentals. ISPs also wouldn’t necessarily have to make these disclosures public on their own websites; they can tell the FCC about them, and the FCC will publish the disclosures on their byzantine website.

Pai has claimed that his proposed rollback will encourage net neutrality practices without regulation because it will require ISPs to be fully transparent. In a shocking turn of events for statements and policies originating from the top minds of this administration, that claim turns out to be a complete lie: ISPs won’t have to be as open and transparent about their pricing and policies, and they have repeatedly stated that they would use tactics like paid prioritization to manipulate network traffic if given the opportunity.

The undoing of the 2015 net neutrality rules is likely to pass during the FCC’s December 14 meeting despite the rules’ overwhelming public support.

Update: Klint Finley of Wired debunks Pai’s claim that net neutrality regulations negatively impact broadband investment.

I don’t have an Amazon Prime subscription, so I don’t really have a reason to download this app; but, by all accounts, it is shockingly bad.

Netflix’s is also pretty awful — it now autoplays a preview of the selected show or movie at the top of the screen, with sound, and I can’t find any way to disable this. It also doesn’t behave like a typical tvOS app: the app navigation is displayed as tiles, shows and movies are also displayed as tiles, and they’re mixed together in an infinitely-scrolling grid.

Hulu isn’t available in Canada, but its tvOS app is apparently poor as well.

Why is it that three of the biggest players in streaming video can’t seem to find the time and resources to build proper tvOS apps? Is it not worth the effort because the Apple TV isn’t popular enough? Is it because these companies simply don’t care?

I don’t think it’s right to stymie experimentation amongst app developers, but tvOS has a very particular set of platform characteristics. If Apple isn’t going to encourage developers’ compliance with those characteristics, it’s up to users to provide feedback and encourage developers like these to do better.

Mike Masnick of Techdirt responds to the anti-Title II article that Ben Thompson wrote, and which Ajit Pai just can’t stop tweeting:

The larger point here, though, is that while there certainly were a number of reasons to be hesitant about supporting Title II or even explicit rules from the FCC a decade ago, enough things have happened that if you support net neutrality, supporting Title II is the only current way to get it. Ajit Pai’s plan gets rid of net neutrality. The courts have made it clear. The (non) competitive market has made it clear. The statements of the large broadband providers have made it clear. The concerns of the small broadband providers have made it clear. If Ben does support net neutrality, as he claims, then he should not support Pai’s plan. It does not and will not lead to the results he claims he wants. It is deliberately designed to do the opposite.

So, yes. For a long time — like Ben does now — I worried about an FCC presenting rules. But the courts made it clear that this was the only way to actually keep neutrality — short of an enlightened Congress. And the deteriorating market, combined with continued efforts and statements from the big broadband companies, made it clear that it was necessary. You can argue that the whole concept of net neutrality is bad — but, if you support the concept of net neutrality, and actually understand the history, then it’s difficult to see how you can support Pai’s plan. I hope that Ben will reconsider his position — especially since Pai himself has been retweeting Ben’s posts and tweets on this subject.

If I didn’t convince you to disagree with Thompson’s misleading piece, maybe Masnick will. If you live in the United States, it’s vital that the FCC — particularly Ajit Pai, Michael O’Rielly, and Brendan Carr — and your representatives hear your concerns.

Update: Another great piece from Erica Portnoy and Jeremy Gillula of the Electronic Frontier Foundation, “The FCC Still Doesn’t Know How the Internet Works”:

There are at least two possible explanations for all of these misunderstandings and technical errors. One is that, as we’ve suggested, the FCC doesn’t understand how the Internet works. The second is that it doesn’t care, because its real goal is simply to cobble together some technical justification for its plan to kill net neutrality. A linchpin of that plan is to reclassify broadband as an “information service,” (rather than a “telecommunications service,” or common carrier) and the FCC needs to offer some basis for it. So, we fear, it’s making one up, and hoping no one will notice.

Whether the FCC’s commissioners are being malicious or truly don’t understand how the internet works, either should disqualify them from running the Commission.