Apple’s reported numbers were well within — and even at the upper bounds of — their guidance from last quarter, but that doesn’t make for a stellar quarter. They sold fewer iPads this quarter than in any quarter since 2011, while “other” product revenue, which includes the Apple Watch, was down year over year — no surprises there.
We’ve seen strong iPhone growth in many markets around the world including Canada, Latin America, Western Europe, Eastern Europe, the Middle East, India, and South Asia. iPhone sales in Greater China declined during the quarter, but initial customer response to iPhone 7 and 7 Plus gives us confidence that our December quarter performance in China will be significantly better on a year-over-year basis than our September quarter results, even as we lap the all-time-record period from a year ago.
Worldwide demand for iPhone 7 and 7 Plus has significantly outpaced supply, particularly on iPhone 7 Plus. And we’re working very hard to get the new iPhones into the hands of our customers as quickly as possible.
Pretty solid for a phone derided by plenty of tech journalists as a recycled design. And — did you hear? — it doesn’t have a headphone jack. Crazy. Apple also reported solid growth in services, though they didn’t release an updated subscriber count for Apple Music.
Guidance for Q1 2017 forecasts revenue of $76–78 billion, which is a similar range to their Q1 2016 forecast, and slightly above actual revenue for that quarter.
Kara Swisher, as quoted by Karis Hustad at Harvard’s Tech Conference 22:
When you look up at a board room and you look around and you see 10 white men and you don’t understand you have a problem, I want to know what happens to you. How can’t you see it? It’s a huge problem, and from a business point of view it is ridiculous. If half the women are using the Internet and half of people of color are using it, that’s how it should be represented. My god, how can you make a product for half the human race and not have half the human race be represented?
The goal is not just about numbers, but about equal representation of ideas and equal consideration of issues raised. But it is impossible to get to a point where equal thought is given to the specific concerns of women or people of colour if boards and employees are overwhelmingly white and male. Diversity is not — and cannot be — a checkbox item, as Swisher points out:
They’re not trying. They’re not looking hard enough. [They say] “Oh, it’s hard.” I don’t care. I don’t give a fuck if it’s hard. You need to bring me 10 great candidates and you have to be thinking hard about different kinds of candidates, different ages, different races — and we’re not always going to be successful, but I think from the very top you have to say, you’re being lazy about this. You’re being easy. You’re pattern matching.
I’ve spent the last seventeen years blogging, and for some of that time I ran PVRblog and for 15 years I ran MetaFilter, both of which are ad-supported sites. I’ve had lots of ups and downs with both, and at some point in the mid-2000s I built a whole Amazon product recommendation subsite for MetaFilter that never launched. Readers of PVRblog back in its heyday used to ask me to write a “buyers guide” every holiday season and though I recognized the utility of such a thing, I never made one, fearing it would constantly need updating to stay current with the latest news.
I don’t think I’ve ever met Brian Lam face to face, and we’ve only talked online a handful of times, but I’m immensely impressed with what he’s built. I don’t think any news I read today about this deal gave him enough credit for what he did, so I want to break it down.
I used the word “atypical” in the title of this post very deliberately. The Wirecutter and the Sweethome didn’t use a brand-new business model, but they became one of the most effective implementations of affiliate linking — something that has been around for ages. But it is a model that’s atypical amongst today’s VC-and-PPC-ad-funded media companies. Lam deserves a lot of credit.
Earlier today, Apple released software updates for the Mac, Watch, iOS devices, and Apple TV. iOS 10.1 includes the new Depth Effect mode and brings public transit directions to major Japanese cities, amongst lots of other bug fixes and adjustments. I’ve also noticed better battery life than on iOS 10.0.
watchOS 3.1 mostly has “bug fixes and performance improvements”. I’ve noticed a significant improvement in battery life compared to 3.0. I recommend installing this at your earliest convenience, provided you don’t fuck up your stand goal.
Adds an automatic smart album in Photos for Depth Effect images taken on iPhone 7 Plus
A smart album is still missing for Live Photos on iOS and macOS. I get the implication that Apple wants you to leave Live Photos turned on all of the time, but I don’t think most people keep it on. Regardless, it remains unforgivable that you can’t search for photos by type: screenshot, panorama, Live Photo, and so on.
Maybe not everyone is convinced they need a smartwatch? According to a new industry report from IDC out this morning, smartwatch shipments experienced “significant” declines in the third quarter, as total shipments were down 51.6 percent from the same time last year. Just 2.7 million units were shipped in Q3 2016 versus 5.6 million in Q3 2015. While IDC offers several explanations as to why sales are dropping – including issues related to launch timings, Android Wear delays, and more – the numbers still indicate how smartwatches are having a hard time finding traction among a majority of consumers.
Of course, we need to keep in mind that Apple Watch is the market leader among smartwatches – its Series One device accounted for the majority of shipments in the quarter (1.1 million units shipped, a 72 percent year-over-year decline). That means its ups and downs will have an outsize impact on the industry’s numbers at large.
To make matters worse, the new Apple Watches didn’t go on sale until two weeks before the end of the third quarter, and the Nike+ model won’t be available until this Friday.
Still, these numbers aren’t great. I still think it’s a nascent market, the potential for which will be revealed over time as more people get their hands on smartwatches — or, well, smartwatches on their wrists. It’s certainly not going to be a smartphone-sized market yet or, possibly, ever, but there’s definitely something catching buyers’ eyes. Anecdotally, I’ve had more people ask me about how much I wear and like my first-generation Apple Watch over the past month than I had in the previous year.
After last week’s massive web outage was understood to have been the result of a botnet originating from insecure web-connected devices — DVRs and cameras, mostly — a number of people, including me, pointed to Bruce Schneier’s Vice article on why it’s important to regulate the security of these devices. In short:
The market can’t fix this because neither the buyer nor the seller cares. Think of all the CCTV cameras and DVRs used in the attack against Brian Krebs. The owners of those devices don’t care. Their devices were cheap to buy, they still work, and they don’t even know Brian. The sellers of those devices don’t care: they’re now selling newer and better models, and the original buyers only cared about price and features. There is no market solution because the insecurity is what economists call an externality: it’s an effect of the purchasing decision that affects other people. Think of it kind of like invisible pollution.
The persistent rumor is that an IoT botnet is being used. So everything is calling for regulations to secure IoT devices. This is extraordinarily bad. First of all, most of the devices are made in China and shipped to countries not in the United States, so there’s little effect our regulations can have. Except they would essentially kill the Kickstarter community coming up with innovative IoT devices. Only very large corporations can afford the regulatory burden involved.
As with public school textbooks in Texas, regulating the largest markets can have the effect of regulating every market. There are lots of significant markets for these devices, but the United States and Europe are certainly two of the biggest. If those two regions — and, ideally, China and Korea — were to impose security screenings for these devices, manufacturers would likely comply worldwide, since it costs less for them to deploy the same software in every sales region.
Of course, this raises the question of how it would be most efficient to secure devices like these. A penetration test before an import certificate is granted would probably do a good job of weeding out the less-secure products, but it’s unrealistic for such a test to be imposed with every software update.
It’s a tricky problem. The solution that Graham tweeted is to have the NSA brick vulnerable devices, but that seems like a gross overreach of power. Imposing regulations is a softer form of influence, and I think it reduces the “Team America” feeling of the NSA acting as the global internet police.
Thomas Gryta and Keach Hagey, Wall Street Journal:
AT&T Inc. has reached an agreement to buy Time Warner Inc. for between $105 and $110 a share, with a deal likely to be announced as soon as Saturday evening, according to people familiar with the plans.
The boards of the two companies are meeting on Saturday to approve the transaction, the people said. The deal is half cash and half stock, according to one of the people.
Of note, this does not include Time Warner Cable, which was acquired by Charter Communications. Time Warner owns CNN, HBO, DC Entertainment, and 10% of Hulu, amongst a huge list of other brands. It is one of the largest media conglomerates in the world.
AT&T, meanwhile, is the highest-earning telecommunications company in the world, with over 130 million customers (PDF) and a market capitalization of $226 billion. Should that remain consistent, the combined valuation of over $300 billion would make the resulting company worth more than Comcast and Disney combined.
Meanwhile, CBS and Viacom are reportedly exploring a merger that would create a company with a combined worth of $40 billion, and just three years ago, Comcast completed their acquisition of NBC.
I’m unconvinced that the slow merging of many news and media organizations is in the best interests of the general public. What net positive arises for consumers from having large telecommunications companies also in control of what gets delivered over their wires? If anything, the effect of this will be to create a vastly larger, more powerful, and more influential entity, capable of gobbling up some of the largest companies in the world.
Update: Dennis K. Berman has posted a graphic of the composition of today’s AT&T. The near-reversal of the 1982 breakup of Bell’s monopoly is pretty astonishing.
Criminals this morning massively attacked Dyn, a company that provides core Internet services for Twitter, SoundCloud, Spotify, Reddit and a host of other sites, causing outages and slowness for many of Dyn’s customers.
It’s incredible — and more than a little irresponsible — that we’ve taken something as decentralized as the web and made it largely dependent upon a handful of popular providers.
According to researchers at security firm Flashpoint, today’s attack was launched at least in part by a Mirai-based botnet. Allison Nixon, director of research at Flashpoint, said the botnet used in today’s ongoing attack is built on the backs of hacked IoT devices — mainly compromised digital video recorders (DVRs) and IP cameras made by a Chinese hi-tech company called XiongMai Technologies. The components that XiongMai makes are sold downstream to vendors who then use it in their own products.
“It’s remarkable that virtually an entire company’s product line has just been turned into a botnet that is now attacking the United States,” Nixon said, noting that Flashpoint hasn’t ruled out the possibility of multiple botnets being involved in the attack on Dyn.
What this all means is that the IoT will remain insecure unless government steps in and fixes the problem. When we have market failures, government is the only solution. The government could impose security regulations on IoT manufacturers, forcing them to make their devices secure even though their customers don’t care. They could impose liabilities on manufacturers, allowing people like Brian Krebs to sue them. Any of these would raise the cost of insecurity and give companies incentives to spend money making their devices secure.
Of course, this would only be a domestic solution to an international problem. The internet is global, and attackers can just as easily build a botnet out of IoT devices from Asia as from the United States. Long term, we need to build an internet that is resilient against attacks like this. But that’s a long time coming. In the meantime, you can expect more attacks that leverage insecure IoT devices.
Be sure to read Krebs’ article on the cause of today’s attack. In it, he notes that many of the devices used in the attack are vulnerable to a ridiculously obvious flaw: a hardcoded root password for Telnet and SSH. Any security researcher worth their salt would find this problem in a heartbeat, but it’s up to the manufacturers of these devices to do their due diligence in getting them tested. Perhaps a rudimentary penetration test should be part of the certification process by consumer protection agencies.
The practical result of the change is that the DoubleClick ads that follow people around on the web may now be customized to them based on the keywords they used in their Gmail. It also means that Google could now, if it wished to, build a complete portrait of a user by name, based on everything they write in email, every website they visit and the searches they conduct.
Google also happens to run the most popular website analytics suite, estimated to be used on tens of millions of websites. They say that they are currently keeping browsing data separate from other Google activity, but they’re leaving the door open for that to change in the future.
I’m not trying to spread F.U.D., but Google’s change to their integration of DoubleClick data is significant. Datanyze estimates that DoubleClick holds a 75% market share within the top million websites, as ranked by Alexa. That’s more than enough to get a remarkably accurate picture of a user’s browsing history. If you use Chrome in signed-in mode, there’s already an option to make the websites you visit part of your Google profile. If Google is willing to reverse their stance on DoubleClick and has an option to track your Chrome history, a quiet policy shift towards blending analytics data doesn’t seem that far off.
There is no company that can do a better job of tying your name to nearly everything you do online. If any other company — or, indeed, a government — were to do this, there would be outrage. Yet, Google has largely managed to avoid deep concerns. Most people still use Google search, carry Android phones, watch YouTube videos, and trust Google Maps to get them where they’re going. What would it take for users to recognize just how risky this is? If this year has shown us anything, it’s that even the largest companies are susceptible to catastrophic breaches of security.
I wouldn’t be surprised if Apple never even mentions next year that 2017 is the 10th anniversary of the original iPhone. And if they do mention it, I think it will be a brief passing reference on stage, not a part of any advertising or marketing campaign.
If they do mention it, I think it will be a lot like the way Phil Schiller alluded to the original Mac when introducing the 27-inch Retina iMac:
It’s the thirtieth birthday of the Mac this year, and [this lineup is] the best ever. […] I think [the Retina iMac is] the perfect fitting to the thirtieth birthday of Macintosh.
Today’s Retina iMac is obviously different from the original Macintosh in pretty much every way. But if you put them side-by-side, you’d notice the familial similarity. The Retina iMac is that original Macintosh with every single element pushed to the ragged edge.
If the next iPhone is similar to what the rumours say, it’s going to be that kind of upgrade. It’s very likely that you’ll be able to place it beside the original iPhone and acknowledge the similarities, while seeing it as possibly the purest expression of what a smartphone can be. Yet, while it may be a fitting tribute on the iPhone’s tenth birthday, that’s not why it’s being released next year. Whatever form the iPhone takes next year, it will be because that’s the best Apple can do.
The report from Cellular Insights finds that iPhone 7’s equipped with Qualcomm’s MDM9645M modem, which powers the (A1660, A1661) Verizon, Sprint, and SIM-free models, features better cellular performance than the (A1778, A1784) Intel version. Not only that, but the Qualcomm version’s ability to take advantage of Ultra HD Voice has been disabled as well according to the report.
I wouldn’t read too much into this report. Remember last year’s brief controversy about the performance differences between the dual-sourced A9 SoCs? It quickly fizzled out after Apple noted that all iPhones experience slight differences in processor performance and battery life due to variances in manufacturing processes. There’s no reason to suspect that Apple has dual-sourced their modems this year without assuring comparable real-world performance.
Of all of the features of Google’s new Pixel phones, the camera is receiving perhaps the loudest praise. It’s no wonder: most of the images I’ve seen look fantastic, especially in low light.
Sam Byford of the Verge spoke with Google’s Marc Levoy about how they used software to eke out the best photos they could from a fairly standard smartphone camera sensor:
The traditional way to produce an HDR image is to bracket: you take the same image multiple times while exposing different parts of the scene, which lets you merge the shots together to create a final photograph where nothing is too blown-out or noisy. Google’s method is very different — HDR+ also takes multiple images at once, but they’re all underexposed. This preserves highlights, but what about the noise in the shadows? Just leave it to math.
Google also claims that, counterintuitively, underexposing each HDR shot actually frees the camera up to produce better low-light results. “Because we can denoise very well by taking multiple images and aligning them, we can afford to keep the colors saturated in low light,” says Levoy. “Most other manufacturers don’t trust their colors in low light, and so they desaturate, and you’ll see that very clearly on a lot of phones — the colors will be muted in low light, and our colors will not be as muted.” But the aim isn’t to get rid of noise entirely at the expense of detail; Levoy says “we like preserving texture, and we’re willing to accept a little bit of noise in order to preserve texture.”
This sounds like a very clever workaround for getting great images from a sensor smaller than a postage stamp, and the results so far seem to support that.
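The core of that denoising trick, capturing several short exposures and averaging them together, can be sketched in a few lines of Python with NumPy. This is a simplified illustration of burst averaging, not Google’s actual HDR+ pipeline, and it assumes the frames have already been aligned:

```python
import numpy as np

def merge_burst(frames):
    """Average a burst of aligned frames to reduce sensor noise.

    Averaging N frames cuts random noise by roughly a factor of
    sqrt(N), which is the headroom that lets the merged shot keep
    saturated colours a single noisy, underexposed frame could not.
    """
    stack = np.stack([f.astype(np.float64) for f in frames])
    return stack.mean(axis=0)

# Simulate a burst: one scene captured nine times, with independent
# sensor noise (standard deviation 4) added to each exposure.
rng = np.random.default_rng(0)
scene = rng.uniform(0, 64, size=(8, 8))   # deliberately "underexposed"
burst = [scene + rng.normal(0, 4, size=scene.shape) for _ in range(9)]

merged = merge_burst(burst)
single_noise = np.abs(burst[0] - scene).mean()  # noise in one frame
merged_noise = np.abs(merged - scene).mean()    # noise after merging
```

With nine frames, averaging cuts the noise by about a factor of three, which is roughly the square root of the frame count.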
However, some reviewers seem to prefer the warmer tones of the iPhone’s camera, and the Pixel doesn’t have the wide colour capture of the iPhone. The former is a matter of preference; the latter is becoming increasingly noticeable: the iPads Pro, the iMac, the iPhone 7, and — likely — next week’s MacBook Pros all support a wider colour gamut.
Of course, there’s a followup question worth asking: which of those is more important for a smartphone?
It’s official: Apple’s next event will be held at 10:00 Pacific on October 27 at their campus in Cupertino. They’re giving this event a pretty bold title, again. Maybe there will be iMac-related news at this event after all.
Walt Disney Co. decided not to pursue a bid for Twitter Inc. partly out of concern that bullying and other uncivil forms of communication on the social media site might soil the company’s wholesome family image, according to people familiar with management’s thinking.
“What’s happened is, a lot of the bidders are looking at people with lots of followers and seeing the hatred,” Cramer said on CNBC’s “Squawk on the Street,” citing a recent conversation with Benioff. “I know that the haters reduce the value of the company…I know that Salesforce was very concerned about this notion.”
It would be awful if the only reason Twitter decides to get a handle on the worst parts of their service is because the company is unsaleable otherwise. Awful, but entirely expected.
Porsche’s and Apple’s design philosophies are similar. Much like the 356, the original iPhone was about defining a foundation for the future. It was different from other phones on the market — it made a rectangular touchscreen the main way to interact, displacing buttons and keypads. Now the iPhone is the essence of a phone.
This is not a new argument, but it is a very good one. There is an expectation of what an iPhone should look like and, though it has morphed in form and materials since its debut, the iPhone 7 still looks like an iPhone, and that’s right. So is this:
The seamless interaction between the technologies hidden behind the screen, the software, and our services is good design. Apple thus far has made sure that it gets most of that experience right — especially the stuff under the hood. Perhaps the next time someone criticizes its designs, we should remember: good design means your phone doesn’t explode.
I’ve written a fair bit about Siri over the past month or so: in my iOS 10 review, in response to Walt Mossberg’s piece, and in response to a piece from Stephen Hackett. I keep bringing it up because I think Siri’s design is currently fundamentally flawed.
The way I see it, Siri requires three streams of improvement that can roughly be prioritized in terms of their complexity and perceived intelligence. At the highest level, it should be able to maintain context over the course of several requests. That is what a good human assistant would be able to do, and it’s a request for Siri that I’ve often seen expressed in tech circles.
Something slightly less complex but, arguably, of similar value is to increase the number of things Siri knows and can do. Frustrations with Siri’s limited knowledge have partially been alleviated through the introduction of SiriKit, but it is a limited set of APIs. Trivia and news items should be more frequently updated, and that’s something only Apple can do.
But there are usability concerns that run much deeper. For example, holding up my Apple Watch and saying “Hey Siri, text Michel” will return an onscreen button that must be tapped in order to dictate my text. I’ve mentioned this previously, but I’ll bring it up again because it grates on me for a couple of reasons. First, a task initiated by voice should continue using vocal interaction because the user has indicated that their hands are occupied. Second, this is something Apple already knows because the same command on an iPhone responds with an audible prompt for dictation. On the iPhone, this is an example of good design; on the Apple Watch, it’s poorly designed in a pretty obvious way.
If Siri is to be the interaction mechanism of the future — as is indicated by bringing it to the Mac, using it as a primary user interface for the Apple TV, and introducing the AirPods, which can’t even adjust the volume without depending on Siri — it deserves an appropriate amount of attention to its design and functionality.
Apple’s most recent hires and acquisitions indicate that, behind the scenes, Siri is being given a high priority within the company. Yet, it’s hard to square those acquisitions with Siri’s age: Apple has had five years to work on this stuff. It would be ridiculous to argue that they blew their chance — not with over a billion Siri-capable devices in active use — but there’s definitely an impression that Apple isn’t yet good enough at artificial intelligence and machine learning. More worrying for me is that the user interface component of Siri — a field where Apple typically excels — simply isn’t good enough.
This event has to be one of the closest to the winter holidays in Apple’s recent history. I’m excited.
Update: Realistically, I’m expecting a significant update to the MacBook Pro line, with a minor update to the 13-inch MacBook Air. I doubt we’ll see updates to the iMac or Mac Mini, let alone the Mac Pro. Mark Gurman is hinting that the 11-inch MacBook Air is being discontinued.
Field Notes’s quarterly edition is called “Lunacy” this time around, and it looks gorgeous. However, I’m more interested in another product they launched today: the Brand’s Hall pen:
We’ve partnered with Allegory Goods of Chicago to produce a limited-edition, fine rollerball pen using wood reclaimed from an iconic Chicago building, which was constructed in the aftermath of the Great Fire of 1871. The body of the Brand’s Hall Pen is made from salvaged Old-Growth White Pine (likely from Western Michigan) that has been turned by hand on a lathe and then individually sanded, embossed and polished in Chicago. No two are exactly alike.
Accompanying the pen is the history of Brand’s Hall, and it’s fascinating — it’s what this post links to. The pen is a little spendy, but I’ve ordered one. I’m a sucker for stuff like this.
We all know that Project Titan is one of the most difficult ideas Apple has undertaken, second only to updating their Mac lineup. But relief is, reportedly, nearly here, according to Mac Otakara:
Furthermore, it seems that they are also going to announce the new MacBook Pro at the same time [in October], which will go on to replace the entire MacBook Pro series.
It seems all of these models are developed with support for the USB-C and Thunderbolt 3 ports in mind, and will no longer be compatible with the USB-A connector, and the MagSafe 2 and Thunderbolt 2 ports.
The MagSafe is such an Apple-y connector: it brings so many improvements over a standard power connector that it justifies its nonstandard design.
Dropping it ten years after its introduction is equally Apple-y. If the MacBook is anything to go by, they’re basically saying that charging your computer is something you should do so infrequently that tripping over your cable is a thing of the past, because you’re probably asleep.
Speaking of the MacBook, Mac Otakara also says that the 11-inch Air might be dropped, which makes sense, given the amount of overlap between the little Air and the 12-inch MacBook. Mac Otakara has no news on any other Macs, and it appears that the iPad won’t be refreshed until springtime.
Update: As Macs slowly move from USB-A to USB-C, at what point does the iPhone start shipping with a USB-C to Lightning cable in the box?
Chou had spent her twenties working at places such as Google, Facebook, and Quora before landing at Pinterest as an engineer, and as she expanded her networks she started informally keeping track of the number of female engineers at tech firms. It was deeply ironic, she thought, that in a data-driven industry that prides itself on running experiments, performing A/B testing, and measuring outcomes, there was no official, easily accessible data about the number of women actually working in the field. And so she wrote:
As an engineer and someone who’s had ‘data-driven design’ browbeaten into me by Silicon Valley, I can’t imagine trying to solve a problem where the real metrics, the ones we’re setting our goals against, are obfuscated. Vanity metrics are dangerous; just pointing to the happy numbers, like those on Grace Hopper conference attendance, doesn’t do anything except make people feel good while the real issues fester, unaddressed.
With her employer’s blessing, she then shared the number of female engineers at Pinterest — 11 out of 89 — and encouraged her readers to do the same. They did. Within a week, employees from over 50 companies had submitted data, including Dropbox, Rent the Runway, Reddit, and Mozilla — and the companies kept on coming.
In the three years since Chou first coaxed tech companies into releasing their diversity figures, the motivation to do so has somewhat fizzled out. Of the eight large companies that I compare annually, four — Amazon, LinkedIn, Yahoo, and Twitter — haven’t released their numbers for 2016. Countless smaller companies also haven’t; I spot-checked Reddit, Mozilla, and Dropbox, and the most recent report from those three companies is Dropbox’s, from January.
A lack of diverse employees clearly remains a defining issue of most of the tech companies that we rely upon daily. Chou’s work laid the foundation for every company to be transparent and to do better. But, without constant pressure, it seems that many major tech companies would rather avoid releasing their internal stats.
[Peter Thiel], a non-employee (a ‘part-time partner’), is directly supporting Donald Trump at a massive scale — over a million dollars! — after we’ve learned even more of Trump’s horrendous statements, positions, and past actions than we could’ve ever imagined.
This isn’t voting for an economic or social policy — this is literally paying a huge amount of money to directly support a racist, sexist bigot with rapidly mounting allegations of multiple sexual assaults.
Much like Brendan Eich’s contributions to the “yes” vote on Proposition 8, this isn’t merely a difference of opinion. I will always stand up for the ability for others to have political opinions that differ from my own, but I have no tolerance for those who purchase the power to discriminate.
Power doesn’t surrender power w/out a struggle. In this struggle to end or uphold straight/white/male/cis supremacy, actions do the talking. Peter Thiel’s actions have demonstrated where he falls in this struggle. YC’s actions should demonstrate the same.
We have hope for YC; YC has openly acknowledged bias and harassment problems in tech, and it has made progress in diversity and inclusion in its own organization over the last few years. We saw an opportunity to work with YC companies interested in building vibrant and diverse organizations, and we actively invited YC as a contributor to our VC Include program to gain access to its nearly 1,000 companies and CEOs, who are greatly admired and emulated.
But Thiel’s actions are in direct conflict with our values at Project Include. Because of his continued connection to YC, we are compelled to break off our relationship with YC. We hope this situation changes, and that we are both willing to move forward together in the future. Today it is clear to us that our values are not aligned.
Apple Inc. has drastically scaled back its automotive ambitions, leading to hundreds of job cuts and a new direction that, for now, no longer includes building its own car, according to people familiar with the project. […]
New leadership of the initiative, known internally as Project Titan, has re-focused on developing an autonomous driving system that gives Apple flexibility to either partner with existing carmakers, or return to designing its own vehicle in the future, the people also said. Apple has kept staff numbers in the team steady by hiring people to help with the new focus, according to another person.
Apple executives have given the car team a deadline of late next year to prove the feasibility of the self-driving system and decide on a final direction, two of the people said. Apple spokesman Tom Neumayr declined to comment.
If Apple elects to just build the software platform, does that mean they license it to other car companies? That seems unlikely to me. More likely, if this report is accurate, would be a collaboration — or series of collaborations — that give Apple some control over the car itself, similar to their co-branded Watches.
A collaborative path still feels unlike Apple. But, perhaps due to the nature of a product like this, that may prove to be a good thing.
Update: A collaborative strategy might also make it easier to do multiple price points, particularly at the higher end. After the performance of the first-generation Apple Watch Edition, it might make more sense to work with an automotive brand already positioned to sell high-end cars.
Mark Bramhill announced today that he’s bringing back his podcast, Welcome to Macintosh, for a third season. Regular readers here will know that I’m not a big podcast guy, but Welcome to Macintosh is one of the few that I love. It’s a well-edited, fast-paced show, and every episode revolves around a single narrative.
Bramhill wants to raise $10,000 to bring it back. He needs money to travel, license music, and more. I’ve contributed. If you like the show, I hope you will too.
Back in 2009, Digg started wrapping external links on their site in an iframe using a URL shortener. The “DiggBar”, as it was called, was widely derided for, among other things, stealing search rankings — this was back when Digg was relevant — and breaking bookmarking. It was adjusted shortly after launching, only to be killed off about a year later.
So, lesson learned, right? Well, not as far as Google is concerned. Alex Kras enabled AMP on his self-hosted WordPress site, only to find it ruined his URLs:
Most importantly, I was surprised to find out that instead of redirecting users to an optimized version hosted on my server, Google was actually serving a snapshot of the page from their own cache. To make things worse, Google was injecting a large toolbar at the top of the snapshot encouraging users to get back to Google search results (a functionality already provided by the back button) and making it harder to get to the original site.
A while back, I was trying to get access to the regular version of a page because there was an element that was broken on the AMP version. To do so, I had to adjust the URL manually — there appeared to be no other way to get to the HTML page.
I’m a sucker for a good SR-71 story. In this one for War is Boring, Robert Beckhusen writes about the attempts by industry watchers to figure out what Lockheed was building before the SR-71 was declassified. In some cases, it feels remarkably similar to today’s Apple rumour mill:
By March, [retired admiral John B. Pearson] and his coworkers studied shipments of liquid hydrogen and oxygen fuel, movements of test pilots and even subcontractors working on specialized precision valves to deduce not only the existence of a new spy plane, but guess its specifications. They weren’t dead-on accurate, but they were close.
The problem with always rounding halves up is that in doing so, you introduce a persistent bias in whatever calculations you do with the rounded number. If you’re adding a list of rounded numbers, for example, the sum will be biased high.
If you round halves to the nearest even number, though, the bias from upward roundings tends to be negated by an equal number of downward roundings. Overall, you get better results.
This article is technically about a change in PCalc, but it’s worth reading for these two paragraphs alone. I was always taught to round all halves upwards, but the round-to-even rule makes far more sense, especially when working with large sets of numbers. Consider me enlightened.
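The difference between the two rules is easy to see in a short sketch. This is illustrative only — `roundHalfToEven` is a hypothetical helper, not a built-in (JavaScript's `Math.round` always rounds halves up):

```javascript
// Round-half-to-even ("banker's rounding"): exact halves go to the
// nearest even integer; everything else rounds normally.
function roundHalfToEven(x) {
  const floor = Math.floor(x);
  if (x - floor !== 0.5) return Math.round(x);
  return floor % 2 === 0 ? floor : floor + 1;
}

// Summing a run of exact halves shows the bias from always rounding up:
const halves = [0.5, 1.5, 2.5, 3.5, 4.5, 5.5, 6.5, 7.5]; // true sum: 32
const roundedUp = halves.map(Math.round).reduce((a, b) => a + b, 0);
const roundedEven = halves.map(roundHalfToEven).reduce((a, b) => a + b, 0);
console.log(roundedUp);   // 36 — biased high
console.log(roundedEven); // 32 — matches the true sum
```

The upward roundings and downward roundings cancel out, which is exactly why the even rule gives better results over large sets of numbers.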
There are some fundamental differences between Apple and Google when it comes to privacy, and I believe those differences will allow Google to continue to lead in the area of digital assistants infused with artificial intelligence. However, consumer privacy has nothing to do with some of the simple tasks Siri still fails at doing.
Siri falls back to a Bing search results page way too often. I expect my virtual assistant to be able to parse information from the Internet and read it back to me as I drive or am in the kitchen with my hands dirty. Reading a bunch of search results completely defeats the purpose of using Siri to begin with.
I’ve noticed examples of this on my Apple Watch, too. I might get a text message while cooking, read it, and then say “Hey Siri, reply to that text from Geoff”. Siri will show me an onscreen button with a microphone, which I must tap with my probably-messy hand to begin dictating my reply.
This example, and inconsistencies between similar queries, and returning a list of Bing results — these are all examples of Siri’s functional pitfalls. Improving all of these aspects doesn’t require a compromise on privacy. This is just about the software doing the right thing.
Early Tuesday morning, Samsung announced it has permanently discontinued and stopped promoting the Galaxy Note 7, and has asked its customers to return their devices for a refund or exchange. A Samsung spokesperson told me the phones will not be repaired, refurbished, or resold ever again: “We have a process in place to safely dispose of the phones,” the company said.
This sounds reasonable, but the fact is that besides sitting in your nightstand drawer for eternity (a fate that will surely befall some of these phones) or being thrown into a garbage dump or chucked into the bottom of a river, being recycled is the worst thing that can happen to a smartphone.
The consumer electronics industry has made significant improvements towards reducing its environmental footprint, but there’s a long way to go. It’s particularly egregious here because the word recycling connotes a sense of environmental responsibility. But so little of a phone is typically recycled that the word feels misleading, at best.
Coincidentally, Amelia Urry looked into Apple’s iPhone-dismantling robot, “Liam”, earlier this week for Grist. It’s better than traditional recycling techniques, but nowhere near as great as one might think. The drift towards a three-year refresh cycle due to higher-quality, better-performing smartphones ought to be encouraged for its ecological benefits.
If you try and treat Siri like a truly intelligent assistant, aware of the wider world, it often fails, even though Apple presentations and its Siri website suggest otherwise. (And I’m not talking about getting your voice wrong. In my recent experience, Siri has become quite good at transcribing what I’m asking, just not at answering it.)
In recent weeks, on multiple Apple devices, Siri has been unable to tell me the names of the major party candidates for president and vice president of the United States. Or when they were debating. Or when the Emmy awards show was due to be on. Or the date of the World Series. When I asked it “What is the weather on Crete?” it gave me the weather for Crete, Illinois, a small village which — while I’m sure it’s great — isn’t what most people mean when they ask for the weather on Crete, the famous Greek island.
Mossberg isn’t alone. Earlier today, Neven Mrgan asked Siri to play music recently added to his library. No variation of that request was successful. Last week, I asked Siri what the weather would be like in Banff the next day, and it provided me with a weekly forecast, not the hourly forecast anyone would expect for that question.
These requests are not complex — Mossberg says that Apple fixed many of these commands in the weeks after he tweeted about them, which suggests to me that it’s trivial to reprogram a given query. I tested some of Mossberg’s questions about the 2016 U.S. election and found many of my questions were answered, but not consistently or reliably.
This comes down to two key gaps in the Siri development chain. First, Apple says they update Siri every other week; I maintain that it should be updated far more frequently than that. Second, Apple told Mossberg that they don’t prioritize trivia:
It puts much less emphasis on what it calls “long tail” questions, like the ones I’ve cited above, which in some cases, Apple says, number in only the hundreds each day.
My hunch is that questions like these are asked less frequently not because people don’t try, but because users have tried and Siri didn’t answer. Over time, users teach themselves that Siri simply isn’t good for this kind of information, and they stop trying.
These sort of glaring inconsistencies are almost as bad as universal failures. The big problem Apple faces with Siri is that when people encounter these problems, they stop trying. It feels like you’re wasting your time, and makes you feel silly or even foolish for having tried. I worry that even if Apple improves Siri significantly, people will never know it because they won’t bother trying because they were burned so many times before. In addition to the engineering hurdles to actually make Siri much better, Apple also has to overcome a “boy who cried wolf” credibility problem.
Entirely agreed, with one minor exception: I think the inconsistencies are worse than outright failure. The inability to answer a query implies a limitation which, while not ideal, is understandable. Inconsistency, on the other hand, makes Siri feel untrustworthy. If I can’t reliably expect the same result with basic queries that are almost identical, I’m much less likely to find it dependable.
Robin Linus demonstrates an impressively straightforward technique for fingerprinting users by testing which social networks have an active login token. This reminds me a little of Epic Marketing’s equally simple history sniffing technique, albeit in a much more limited capacity.
You should know that this link sniffs several popular sites with login forms, which, naturally, includes a couple of porn sharing sites. Depending on your workplace security settings, that might raise a red flag or two.
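The core trick, roughly, is that many sites redirect a request for their login page to a known image — a favicon, say — when a session cookie is present, and serve an HTML login form otherwise. A minimal sketch of that idea follows; the domain and redirect parameter are hypothetical stand-ins, not any particular site's real URLs:

```javascript
// Loading a "login URL that redirects to an image" as an <img> succeeds
// only when the visitor already has a session cookie for that site:
// a logged-in user's request lands on a real image (onload fires), while
// a logged-out user gets an HTML login page back (onerror fires).
function probeLogin(redirectUrl, onResult) {
  const img = new Image();
  img.onload = () => onResult(true);   // landed on an image: logged in
  img.onerror = () => onResult(false); // got HTML back: not logged in
  img.src = redirectUrl;
}

// Usage in a browser (hypothetical URL):
// probeLogin("https://social.example/login?next=/favicon.ico", loggedIn =>
//   console.log(loggedIn ? "has a session" : "no session"));
```

Note that no same-origin violation is involved: the page never reads the response, it only observes whether the load succeeded, which is why this leaks through otherwise ordinary browser behaviour.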
Samsung Electronics Co. is ending production of its problematic Galaxy Note 7 smartphones, taking the drastic step of killing off a device that became a major headache for South Korea’s largest company.
Samsung had already recalled the Note 7 once last month after early models exploded and the latest move comes after customers reported that replacement phones were also catching fire. Samsung will be without its highest-end smartphone that was supposed to compete against Apple Inc.’s iPhones and other premium devices during the holiday shopping season.
After even the replacement phones started catching fire, I’m not surprised by Samsung’s decision. Based on Lee and Kim’s report, it seems that they don’t have a satisfactory resolution yet, and it’s far riskier to keep trying new things when the Note 7’s reputation is so damaged.
I’m not sure I really believe that Samsung will have nothing like the Note 7 to show for the all-important holiday quarter. If they manage to find the fault and fix it within the month, they could easily re-launch the phone under a different name — all of the tooling and production capabilities are already in place. The question is: would anyone buy it, or, indeed, any Samsung phone at this rate? No other Android OEM has the name recognition or marketing prowess of Samsung’s Galaxy line, so where do all of those sales go now that their brand is in the toilet?
On a somewhat related note, it happens to be National Fire Prevention Week in the United States and Canada. Stay safe and promptly return your Galaxy Note 7.
Last time we discussed Apple’s removal of Dash from the App Store, the situation around it was murky and still unfolding. Today, some of that confusion has ended with two articles. First, Rene Ritchie of iMore received a statement from Apple:
“Almost 1,000 fraudulent reviews were detected across two accounts and 25 apps for this developer so we removed their apps and accounts from the App Store,” an Apple spokesperson told iMore. “Warning was given in advance of the termination and attempts were made to resolve the issue with the developer but they were unsuccessful. We will terminate developer accounts for ratings and review fraud, including actions designed to hurt other developers. This is a responsibility that we take very seriously, on behalf of all of our customers and developers.”
That’s quite the accusation. The existence of a second account explains why nobody found the low-quality utility apps allegedly from the same developer, and why so many people rallied behind the developer, Bogdan Popescu: there’s simply no need to create fraudulent reviews for a well-regarded niche app like Dash.
However, Popescu provided an explanation for the secondary account today:
What I’ve done: 3-4 years ago I helped a relative get started by paying for her Apple’s Developer Program Membership using my credit card. I also handed her test hardware that I no longer needed. From then on those accounts were linked in the eyes of Apple. Once that account was involved with review manipulation, my account was closed.
Popescu recorded a call on Saturday with an Apple developer relations representative. In the call, the representative says that Popescu would have his account reinstated if he wrote a blog post that stated that his account was linked to another that was involved in fraudulent activity, and that he was working with Apple to unlink the accounts and get back into the program. That seems fair. Popescu apparently sent a draft of the post to Apple that night, and heard nothing back until today, when Apple sent the statement to iMore and other press outlets.
Popescu concludes his response by publishing a recording of a phone call with an Apple representative. Popescu did himself no favors by doing so. For one thing, it’s a breach of trust. But for another, I think Apple comes off well in this recording. They’re bending over backwards to give Popescu another chance and have his account reinstated.
It’s also notable that Apple investigated this and tried to resolve it as well as they did. If it were any other company — say, Google for a suspended AdSense or YouTube account — I suspect the amount of effort devoted to it would be much lower.
Agreed on all points.
I don’t think the reaction to the initial news of Dash’s removal was a waste, nor was it outsized. If a developer’s livelihood is largely dependent on the App Store and their apps are well-regarded, any decision from Apple that affects that ought to be scrutinized.
A public fight isn’t ideal from a PR perspective, but it seems to be what it takes to get an adequate answer. In his first post on the subject, Popescu said that he asked developer relations why Dash was removed and initially received no answer. They later contacted him and told him about fraudulent activity on his account — something he maintains he has never participated in.
I wouldn’t be surprised if Apple bans dozens of developer accounts every week for fraud, and almost none of those will be reported because there’s no disputing the facts. Popescu’s case is far more unusual: from Apple’s perspective, he was operating two accounts, one of which was dabbling in fraud. Popescu said that he knew nothing about the fraudulent operations of the second account and was unable to see any of Apple’s warnings.
Based on everything released so far, I don’t think Apple made a mistake. As far as they knew, it was the same account with a lot of black marks on its record. However, their process remains opaque enough that it has taken a rather public back-and-forth for Popescu to clarify fundamental aspects of why Dash was pulled. Everyone ought to have learned something here. I do hope Popescu gets his developer account back.
Update: “Frumpsnake” on the MacRumors forum found compelling evidence that Popescu used to manage all of the apps in his two accounts, and that he placed Dash into its own account to try to appear legitimate. Apple would surely have access to his account history, too.
I was clearly too optimistic about the situation with these two accounts, but I stand by what I wrote earlier: I think it’s right to assume the best from the developer, especially since Apple has mistakenly removed apps before. But this is not one of those circumstances.
Joshua Ho and Brandon Chester subjected the iPhones 7 to the rigorous battery of tests unique to AnandTech, and it’s a screamer: insane performance jumps over the already-fast iPhones 6S, paired with big leaps in battery life. Yet:
As Apple has rapidly added new features, UI performance has taken a hit, and the nature of the performance problems is such that throwing more hardware at them won’t make them go away because they’re often due to circumstances where rendering is blocked or is being done in a manner such that even large improvements in performance would not bring things back to 60fps. While I’m not going to comb through the entire OS to find all the cases where this happens, it happens enough that it’s something I would describe as a significant pain point in my experience as a user.
It’s nowhere near as egregious as the performance hiccups on Android phones, but iOS is increasingly adding instances where animations aren’t as smooth as they should be. Activating Notification Centre, scrolling through widgets in the Today view, and pulling down to show Spotlight are all instances where it’s reliably easy to cause a suboptimal animation.
Catchy name aside, the DP700C6A-X01US is distinguished from a lot of its competitors because it’s cylindrical. Jon Phillips, PC World:
It would be easy to call the ArtPC Pulse a rip-off of Apple’s Mac Pro, but that position just ignores another competitor in the “let’s make a computer shaped like a cylinder!” race. The HP Pavilion Wave is currently for sale in HP’s online store, and, frankly, it looks way more sophisticated than Samsung’s bid for exactly the same market.
Go ahead: click on that link and tell me that the Samsung DP700C6A-X01US looks more like HP’s extruded Reuleaux triangle than it does a Mac Pro. I dare you.
On the plus side, perhaps the introduction of the DP700C6A-X01US will remind Apple that they do still sell the Mac Pro, and maybe it might be a good idea to, you know, update it.
To celebrate their fifth anniversary, the Verge is publishing several interviews with key members of their staff, culminating in a redesign set to launch November 1. The first of those interviews was published today, with “engagement editor” Helen Havlak, who’s basically in charge of getting as many Verge readers as possible.
Nilay Patel, who conducted the interview, thought he’d help out by tossing in a particularly inflammatory statement, because there are no better readers than baited readers:
It was a good run, open web! So sorry that Apple killed you by turning Safari into the new IE and forbidding alternative browsers to innovate on iOS.
Not this shit again.
Since Patel left this hanging in the air with no supporting context, I assume he’s referring to Nolan Lawson’s whining and moaning about Safari’s then-lacklustre IndexedDB support. No matter how valid Lawson’s point may have been, to compare Safari to Internet Explorer is laughable at best.
But IndexedDB doesn’t really apply to a news site like the Verge. In fact, I can’t think of any features or APIs missing from Safari that will help the Verge deliver largely text-based articles. Even Lawson admits that the features he’s looking for in Safari are mostly there for web-based applications. A news site doesn’t — and, arguably, shouldn’t — need the same level of resources as Google Docs.
So what’s the solution that the Verge has come up with to enhance the way they deliver pages to mobile visitors?
“AMP is coming to eat our mobile page views,” says Helen, “But AMP loads super, super quickly and is simply a better experience right now. So can we add enough design to make an AMP page feel like The Verge? […]”
In just two sentences, though, Havlak effectively admitted that the Verge’s mobile web experience is far worse than AMP’s. Why would that be? Well, it could be something to do with the typical weight of a Verge article: this article is about 1,200 words and includes just one visual of substance, yet it downloaded over 12 MB of stuff, most of it in the form of 56 different scripts, a bunch of ads, and a 2.6 MB decorative GIF at the top. This is not atypical — Google reportedly uses the Verge as part of a series of automated performance tests for Android.
The message here is simple: AMP may provide a better reading experience right now, but Patel and Havlak have control over that. They can improve the way that the Verge loads by removing third-party scripts and comments, just like AMP does. They can make the choice to include a small JPEG at the top, if they want to decorate their articles, instead of a large animated GIF. They can choose to reduce the number of different analytics scripts on any given page. All of these options are available for them to improve the reading experience of a typical Verge article while retaining the building blocks of the open web, as Patel so clearly would prefer:
You could also make a fine wine out of the tears I weep each night as the open web dies anew, but that’s neither here nor there.
Instead, they’ve chosen to embrace AMP, a technology that fractures the web. Why?
Our search traffic largely comes from Google, which already serves our AMP pages in Google News. Google is also switching mobile search results to AMP links, and that means almost all of our search visitors will see AMP pages instead of the mobile web.
In short: revenue.
To a certain extent, that’s fair. The Verge is a business and, like most others, they’re going to continue to try to expand in as many ways and generate as much money as they can.
But the Verge isn’t just adding support for AMP. They’re going all-in on it (emphasis mine):
So if we aren’t going to deliver The Verge on the mobile web, what do we have to figure out in order to deliver our brand to the digital audiences of the future?
It sounds like they’re making a conscious choice to skip most typical optimizations for the open mobile web, instead embracing platform-specific distribution to Facebook, Google, Apple News, and the desktop web. More than that, it sounds to me like Patel will stand up for “open” and “free” until it impacts business. Remember when he published that diatribe against the headphone jack-less iPhone 7 prior to its announcement?
Restricting audio output to a purely digital connection means that music publishers and streaming companies can start to insist on digital copyright enforcement mechanisms. We moved our video systems to HDMI and got HDCP, remember? Copyright enforcement technology never stops piracy and always hurts the people who most rely on legal fair use, but you can bet the music industry is going to start cracking down on “unauthorized” playback and recording devices anyway.
The message here is simple: the headphone jack was free and open, while digital audio has the potential for being closed and proprietary. It becomes dependent on the whims and business models of providers, labels, and technology companies.
The Verge has shifted their business model, too. Instead of relying upon traffic from third-party sources, they’re now entirely reliant upon third-party platforms. That seems pretty risky to me. What if, for instance, the Verge pursued a new initiative that was largely dependent on Facebook and, in particular, Facebook Video? And then what if, say, Facebook inflated the success of your video venture by 60–80%? That would make you re-think your strategy, no?
And here’s the trend: almost all of our growth is in video, particularly Facebook video. In particular, look at those Circuit Breaker numbers — most of the content posted to the Circuit Breaker Facebook page never makes it to The Verge’s website, but it’s still way out ahead of YouTube and our custom player, all of which get boosted when we embed them on article pages on the web.
Anyway, back to this article about the headphone jack. Point number six on the list:
No one is asking for this
Raise your hand if the thing you wanted most from your next phone was either fewer ports or more dongles.
I didn’t think so. You wanted better battery life, didn’t you? Everyone just wants better battery life.
Raise your hand if the thing you wanted from a website was more tracking, more ads, greater consumption of your allotted mobile data, or having to go through a third-party platform to access the site in a proprietary format.
I didn’t think so.
We’re so out of ideas that actively making [phones] shittier and more user-hostile is the only innovation left.
Replace “phones” with “websites” and it kinda holds true, doesn’t it?
Robert Graham of Errata Security posted a well-considered critique of Joseph Menn’s blockbuster report on Yahoo:
My point is this: the story is full of mangled details that really tell us nothing. I can come up with multiple, unrelated scenarios that are consistent with the content in the story. The story certainly doesn’t say that Yahoo did anything wrong, or that the government is doing anything wrong (at least, wronger than we already know).
Menn’s initial article, while revelatory, should have been clearer from the time it was posted. Vague reporting on the details of security matters damages our ability to argue for better privacy protections in the long term, because we will be unable to accurately address specific violations of it.
Free space is what we’ve always known it to be. It’s space on disk where there’s nothing, that’s ready to have data poured into it. Purgeable space is different. Purgeable space is a collection of files that are really on disk, ready to be read or modified or added to at any time—stuff like files stored in iCloud, dictionaries you haven’t used recently, certain large fonts (especially of Asian languages) that you may never or rarely use, movies and TV shows you’ve already watched (and are re-downloadable from iTunes), and photos and videos that are synced with iCloud Photo Library (if the Optimize Mac Storage setting is turned on in Photos preferences).
These are real files, but Apple considers them expendable. They can be deleted immediately, without warning, in order to free up disk space, because they can always be downloaded again later.
I don’t remember hearing about the handling of purgeable space when Sierra was launched, but that’s okay. Most users won’t notice anything different going on under the hood; MacOS will, probably, just do the right thing.
Ubiquity, [Viv cofounder Dag] Kittlaus said in an interview, is the reason Viv is trundling into Samsung’s bosom. Specifically, when I asked him why Samsung, he said this:
“They ship 500 million devices a year. You asked me onstage about what our real goal is, and I said ubiquity.
If you take a look around what’s going on in the market these days, and our readiness to really expand on our distribution, it made perfect sense when we discovered that our visions are so completely aligned, and our assets using the core technology in this huge distribution, the opportunity that now is the right time, and Samsung’s the right partner.”
Kittlaus previously helped build Siri, both pre- and post-acquisition, but Viv is a leaps-and-bounds improvement. If the future of mobile technology is, indeed, through virtual assistants, and if Viv behaves in the real world anything like it did in its first demos, this is a critical acquisition for Samsung. It also happens to decrease their reliance upon Google.
[Brian Green, the owner of the phone] said that he had powered down the phone as requested by the flight crew and put it in his pocket when it began smoking. He dropped it on the floor of the plane and a “thick grey-green angry smoke” was pouring out of the device. Green’s colleague went back onto the plane to retrieve some personal belongings and said that the phone had burned through the carpet and scorched the subfloor of the plane.
He said the phone was at around 80 percent of battery capacity when the incident occurred and that he only used a wireless charger since receiving the device.
Running the phone’s IMEI (blurred for privacy reasons) through Samsung’s recall eligibility checker returns a “Great News!” message saying that Green’s Galaxy Note 7 is not affected by the recall.
Samsung’s Galaxy Note 7 recall keeps getting worse, especially for customers who must now be extremely wary of even their replacement phones. How does anyone trust any Samsung phone after this?
Yesterday’s hardware announcements from Google came with some pretty intriguing updates to their software, as well. Chief among them: Google says that Pixel owners will be able to back up their entire photo and video library in full resolution, for free. As far as I’m concerned, that’s huge. Regardless of the misgivings someone — me — may have about giving my entire photo collection to Google, it’s probably one of the most precious libraries of data I have. I never want to lose my photos.
Among all of Apple’s iCloud offerings, iCloud Photo Library has been the most successful for me, and I generally trust that it will remain more secure than Google Photos. It’s one of the few iCloud products that I actually trust, the others being iCloud Keychain and Contacts syncing. However, this peace of mind comes at a price: a literal one.
Thomas Ricker wrote about iCloud’s storage and pricing for the Verge:
Most of the time I’m happy to have gone all-in on Apple. But I feel backed into a corner when it comes to paying for even more iCloud storage when it’s necessitated by Apple’s increasingly cloud-centric app bundles. See, the best way to live inside of the Apple ecosystem is to use the company’s free (as it loves to remind us) apps. But Apple caps its free iCloud storage tier at a paltry 5GB — capacity that’s quickly filled with Live Photos, iOS app data, 4k video, GIFs everyone’s sending you in the new iMessages; and critically, iOS device backups. So in reality, Apple’s apps are not free — Apple charges you for them indirectly by requiring you to purchase more and more storage over time.
I don’t agree with Ricker’s assertion that iCloud storage fees make these apps not free, nor that Apple is being deceptive by marketing them that way — nobody ever complains that free computer programs are not actually free because their data takes up hard drive space, and it’s possible to use many of these apps without touching iCloud.
I do think that the iCloud storage tiers become increasingly stingy with every passing year. iCloud launched with 5GB of free storage, and it has remained so for five years. Over that same time period, Apple has introduced tiered storage upgrades that are priced more competitively than they used to be, but I bet most Apple users are still on the free tier and simply tolerate the messages that say “iCloud Storage Full”, particularly when Apple’s online services efforts occasionally feel half-assed.
There are, I think, a few things Apple could do to make iCloud feel like a serious commitment: increase the space allotted at the free tier, exclude iOS device backups from iCloud storage limits, and improve its reliability to Google or Amazon levels. Apple’s executives may put on a brave face when speaking to the press, but if they’re not concerned about iCloud internally, I find that deeply worrying.
It’s high time that Apple made up for a recent dearth of dumb App Store rejections and removals.
Yesterday I sent Apple a request to migrate my account from an individual one to a company one. Once I verified my company with its D-U-N-S Number, they notified me that some features in iTunes Connect won’t be available during account migration.
A while later my iTunes Connect account started showing as “CLOSED” and my apps were removed from sale. I thought this was normal and part of the migration.
Today I called them and they confirmed my account migration went through and that everything is okay as far as they can tell. A few hours ago I received a “Notice of Termination” email, saying that my account was terminated due to fraudulent conduct. I called them again and they said they can’t provide more information.
This is clearly a mistake, but it’s having a real impact on Popescu’s livelihood, as he’s the sole developer of Dash. To make matters worse, poor communication from Apple’s developer relations team — something we’ve discussed many times before — is preventing him from understanding what happened or what he can do to fix it.
Update: Popescu just updated the post with some followup from Apple developer relations:
Apple contacted me and told me they found evidence of App Store review manipulation. This is something I’ve never done.
Apple’s decision is final and can’t be appealed.
Either something is awry with Apple’s automated processes for detecting fraudulent reviews, or someone is screwing with Popescu. Regardless, a final decision from Apple with no opportunity for recourse is indefensible, as far as I’m concerned. The Mac App Store has been an awful place for developers for a long time; this is not making it better.
I did look into this situation when I read about it today. I am told this app was removed due to repeated fraudulent activity.
We often terminate developer accounts for ratings and review fraud, including actions designed to hurt other developers. This is a responsibility that we take very seriously, on behalf of all of our customers and developers.
I don’t see why Popescu would lie about his alleged involvement in manipulating reviews. Dash is a widely-used and highly-regarded developer resource.
The scope of Alphabet’s ambition for the Google brand is clear: It wants Google’s information organizing brain to be embedded right at the domestic center — i.e. where it’s all but impossible for consumers not to feed it with a steady stream of highly personal data. (Sure, there’s a mute button on the Google Home, but the fact you have to push a button to shut off the ear speaks volumes… )
In other words, your daily business is Google’s business.
This is not a new argument, but it is astonishing to reflect on how far the goalposts have been moved for what is considered a reasonable expectation of privacy.
Before news broke today that Yahoo was searching through mass amounts of user data by government request, the ACLU posted an article about a client they have been defending — Open Whisper Systems, makers of the Signal messaging app — against a government user data request.
ACLU attorney Brett Max Kaufman:
As the documents show, the government’s effort did not amount to much — not because OWS refused to comply with the government’s subpoena (it complied), but because the company simply does not keep the kinds of information about their customers that the government sought (and that too many technology companies continue to amass). All OWS was able to provide were the dates and times for when the account was created and when it last connected to Signal’s servers.
A policy of not collecting or retaining user data remains a very effective way of protecting users’ privacy. It’s really quite simple.
Sam Biddle of the Intercept asked EFF attorney Andrew Crocker what he thought of the all-encompassing directive served to Yahoo:
Crocker said the Yahoo program seems “in some ways more problematic and broader” than previously revealed NSA bulk surveillance programs like PRISM or Upstream collection efforts. “It’s hard to think of an interpretation” of the Reuters report, he explained, “that doesn’t mean Yahoo isn’t being asked to scan all domestic communications without a warrant” or probable cause.
Biddle also asked other major tech companies if they had been subjected to a similar directive. Responses are still coming in, but this one from Twitter is pretty telling:
Asked whether Twitter had ever received such a directive aimed at its messaging system, Nu Wexler, the company’s public policy communications chief, replied that “Federal law prohibits us from answering your question, and we’re currently suing the Justice Department for the ability to disclose more information about government requests.”
Only Microsoft issued an outright denial on the record, while Apple quoted a previously-issued statement. Google, Yahoo, and others have yet to respond.
Yahoo Inc last year secretly built a custom software program to search all of its customers’ incoming emails for specific information provided by U.S. intelligence officials, according to people familiar with the matter.
The company complied with a classified U.S. government directive, scanning hundreds of millions of Yahoo Mail accounts at the behest of the National Security Agency or FBI, said two former employees and a third person apprised of the events.
According to Menn, the demand was issued by an intelligence agency just last year, and Marissa Mayer complied with it without legal objection. I very much doubt this will affect Verizon’s proposed acquisition the way the 2014 breach of a Yahoo user database might, for obvious reasons.
But, like security breaches, it’s likely that the biggest civil liberties breach since Snowden is not this, but something else that we don’t yet know about. As Menn points out, it’s likely that similar demands were made of other email providers.
You would think less than a month after a very popular HackerNews article on how Dropbox Hacks Your Mac, the file sharing company would be careful about the activities they are performing in their software distribution. Nope, not Dropbox. Today they released an update that adds a hacky overlay UI element to finder that cannot be disabled!
This is an experiment that is being tested with a fraction of users primarily on beta releases (which Seth is on, as evidenced by the version number in his screenshots). We haven’t shipped it to everyone so that we can continue to iterate and incorporate feedback. I checked with the team about the “Finder Toolbar” drop down and it looks like it requires a restart of the Dropbox client in order to take affect — let us know if that doesn’t work.
Vargo said that [the Finder Toolbar preference] didn’t work. (And does “primarily” mean some non-beta users?)
I’m not on the beta stream for Dropbox, yet I recently received an update that enabled the toolbar.
It seems increasingly clear to me that one of the primary reasons Dropbox feels so magical is that it messes with aspects of the system that typical developers wouldn’t dare touch. Dropbox has a history of excelling at this, despite the risks inherent in modifying low-level aspects of an operating system, but what happens when they screw up? A history of haxies and swizzled kernel extensions suggests that Dropbox’s relatively stable run isn’t likely to last forever.
As a component of lithium-ion batteries, cobalt has become one of the most important materials in modern industry. However, the conditions in which it is mined are often dangerous and cramped, and the mines frequently use child labour.
Back in January, Amnesty International released a heartbreaking report (PDF) describing the brutal conditions many cobalt miners face, particularly in the Democratic Republic of the Congo:
Not only are state officials aware of the mining activities taking place in unauthorized locations, but they also financially benefit from them. Officials from a range of different government and security agencies control access to unauthorized mining sites and demand illegal payments from artisanal miners. During their visits, researchers saw officials wearing uniforms and others who identified themselves as working for governmental agencies at three unauthorized mining sites. In Kasulo, they saw uniformed police and were approached by two men out of uniform, and one in uniform, who said that they were members of the mines police. These men told the researchers to leave the area as foreigners were not allowed to visit mines sites without official permission, and demanded that the researchers pay a fine. Artisanal miners at these and other sites complained that the mines police and other officials demand payments from them for each sack of ore or to work on the site.
A Ministry of Mines official confirmed to researchers that none of these agencies are authorized to collect payments (referred to in the mining areas as “taxes”) from artisanal miners. The evidence suggests that the state officials are extorting illegal payments from artisanal miners, while turning a blind eye to the unsafe conditions in which miners work that breach DRC’s own laws, including the prohibition on child labour in mines.
As with other, similar, reports of human rights abuses deep within the global supply chain, I assumed that this would rapidly be forgotten. After all, cobalt has been on the U.S. Department of Labor’s rather depressing “List of Goods Produced by Child Labor or Forced Labor” since 2009 (PDF), and very little has come of it.
However, Todd C. Frankel recently investigated cobalt mining for the Washington Post and it’s making headlines all over again:
The Post traced this cobalt pipeline and, for the first time, showed how cobalt mined in these harsh conditions ends up in popular consumer products. It moves from small-scale Congolese mines to a single Chinese company — Congo DongFang International Mining, part of one of the world’s biggest cobalt producers, Zhejiang Huayou Cobalt — that for years has supplied some of the world’s largest battery makers. They, in turn, have produced the batteries found inside products such as Apple’s iPhones — a finding that calls into question corporate assertions that they are capable of monitoring their supply chains for human rights abuses or child labor.
These raw materials are extremely hard to trace — I encourage you to read both reports. This is particularly true when there is effectively one large company controlling much of the global supply of cobalt, and there is financial incentive to make the cobalt as difficult to trace as possible.
There are lots of ways to address this, but something that may prove most effective is to treat cobalt as a conflict material. Under Dodd-Frank, any products from American companies that use tin, tantalum, tungsten, and gold are required to be certified conflict-free. Amnesty International notes how powerful an adjustment to that legislation could be:
Yet it is clear that these companies are currently failing to operationalise the OECD’s five step due diligence process beyond whatever measures they have put in place for 3T and gold. One company explicitly admitted and others implied that this is because cobalt is not covered under US legislation, clearly underscoring the value of law in influencing corporate behaviour.
Unlike with Amnesty’s report, some major tech and automotive companies responded to the specific allegations within the Post’s story. Just one — Apple — has committed to conflict-free certification:
Starting in 2017, Apple will internally treat cobalt as a conflict mineral, requiring all cobalt refiners to agree to outside supply-chain audits and conduct risk assessments. The company also will soon, for the first time, include cobalt in an annual update of due-diligence efforts for its conflict-minerals supply chain. This goes beyond what current OECD guidelines call for. Apple also supports adding cobalt to the U.S. conflict-minerals law, which currently requires American firms to try to verify the source of tin, tungsten, titanium and gold used in their products.
As the American consumer electronics industry is one of the world’s largest, this policy change could have a significant positive impact on the worldwide cobalt industry.
But, while there are people working in tremendously awful labour conditions around the world in many industries, we should expect that every company practices due diligence and ensures that the raw materials used in its products were not unearthed by slaves, children, or people risking their lives. This should not need to be stated, nor should Apple’s commitment feel like something that differentiates their products from their competitors’. This is basic human decency: an expectation, more than anything.
This isn’t to say that Twitter isn’t worth billions. As this election cycle makes evident, it is unquestionably the place to talk politics and the media’s coverage of it, among other things. Twitter may have struggled to keep up with the growth of other social-media companies, but when was the last time that you heard someone say, “Did you see what Trump said on SnapChat?” Or, “I can’t believe Clinton posted that on Instagram!” Instead, the conversation is all taking place on Twitter. (One of my morning stops for Trump news isn’t FiveThirtyEight or The New York Times; it’s Sopan Deb’s Twitter feed.) But for investors, the question is whether people will still be ready to slurp up the service after November 8th. (My take: if Trump wins, yes; if it’s Clinton, probably not. And, please God, let’s not let Trump win.)
Twitter is still the place that’s synonymous with real-time public commentary. It’s where TV anchors tell you to follow them for updates, and where news gets made and discussed. There’s nothing else quite like it for information junkies, regardless of whether it’s an election year.
But Twitter — the company — has failed to make Twitter — the product — a compelling story, while allowing abusive users and communities of hatred to fester. There’s the core of a good idea and a good product in there, but it’s squandered by a lack of leadership.
Stu Maschwitz — an actual photographer and filmmaker — tried the new Portrait camera mode on his iPhone 7 Plus and he’s amazed:
So don’t ask if Depth Effect is perfect. A better question is if its failures are distracting. And I have certainly taken some test photos where they are. But the funny thing about test photos is that there’s often nothing worth photographing in them, so you just stare at the problems. In my own testing, whenever I’ve pointed Portrait Mode at something I actually care about, the results have been solid.
That’s a good way of putting it. I still don’t think it’s enough to tempt me into using the bigger model full-time, but the results are very compelling. And, according to Maschwitz, it’s definitely not a simple gaussian blur that gets applied to the background:
To my eye, Apple’s blur is obviously not gaussian, or even gaussian-esque. It’s some kind of sharp-edged circular blur kernel, maybe computed at a lower spatial resolution than the final image, which would account for some of the softness — and the miraculous speed with which the iPhone 7 Plus can do the job.
This seems to have been more or less confirmed. In his TechCrunch article previewing Portrait mode, Matthew Panzarino initially called it a gaussian blur, but the article was updated with a comment from Apple after publishing:
Once it has these nine slices, it can then pick and choose which layers are sharp and which get a gaussian blur effect applied to them. Update: On additional inquiry, Apple clarified for me that it is in fact not gaussian effect but instead a custom disc blur. This is a blur with a more defined, circular shape than gaussian blur. So if you’re one of the few that was hankering to know exactly what kind of blur was applied here, now you know.
I can’t find too much on the web about disc blurring, but this explanation from Matt Pettineo seems pretty good:
Samples of the original color render target are taken along a disc, with the radius of the disc based on the blur factor calculated from the depth texture. This can produce a more realistic bokeh effect, and also opens up the possibility to simulate the bokeh produced by various lens types.
In this context, the disc blur effect is being applied to a 3D render, but Apple’s application of it to photography likely utilizes a similar technique. Very clever, and far more realistic than an ultra-soft gaussian blur.
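The difference between the two kernels is easy to see in miniature. Below is a toy Python sketch of a disc blur, nothing like Apple’s optimized implementation, but enough to show why an impulse of light spreads into a hard-edged circle rather than a soft gaussian blob:

```python
def disc_kernel(radius):
    """Normalized disc blur kernel as a dict of (dy, dx) -> weight.
    Every sample inside the circle gets equal weight, unlike a
    gaussian's soft, centre-weighted falloff."""
    taps = [(dy, dx)
            for dy in range(-radius, radius + 1)
            for dx in range(-radius, radius + 1)
            if dy * dy + dx * dx <= radius * radius]
    weight = 1.0 / len(taps)
    return {tap: weight for tap in taps}

def blur_pixel(image, y, x, kernel):
    """Apply the kernel at one pixel, treating everything outside
    the image as black (zero padding)."""
    height, width = len(image), len(image[0])
    total = 0.0
    for (dy, dx), weight in kernel.items():
        yy, xx = y + dy, x + dx
        if 0 <= yy < height and 0 <= xx < width:
            total += image[yy][xx] * weight
    return total

# Blur a single bright pixel: a disc kernel spreads it into a uniformly
# bright, hard-edged circle, the characteristic bokeh shape, instead of
# a blob that fades gradually toward its edges.
image = [[0.0] * 15 for _ in range(15)]
image[7][7] = 1.0
kernel = disc_kernel(3)
blurred = [[blur_pixel(image, y, x, kernel) for x in range(15)]
           for y in range(15)]
```

Because every tap inside the disc carries equal weight, a blurred point light source stays uniformly bright out to a crisp boundary, which is closer to what a real lens produces than the gradual falloff of a gaussian.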
After publishing our second article about the phenomenon, which is being called “touch disease,” my inbox flooded with stories from people who—many out of blind brand loyalty to Apple—have continued replacing their iPhone 6 Pluses with refurbished units that are just as likely to break as their old ones.
As we’ve detailed in those stories, “touch disease” is an iPhone 6 Plus flaw related to “bendgate” in which the two tiny “Touch IC” connectors, which translate touchscreen presses into a machine input, become unseated from the phone’s logic board. It can be recognized by flickering gray bars along the top of the phone, and is associated with intermittent or total touchscreen failure.
In the last 24 hours, I’ve gotten emails from 27 separate iPhone 6 Plus owners who have encountered this problem and were unaware that Apple internally considers it a known issue. Many of them have been put through lengthy tech support protocols on obviously broken phones only to be told that they would have to pay $329 for a refurbished phone that is still fundamentally flawed. Others have had to put up with months of forcefully bending or twisting the phone in order to get its Touch IC connectors to intermittently work for a few minutes, hours, or days before the problem inevitably resurfaces.
It seems pretty unfair to expect customers to shell out for a refurbished phone that will exhibit the same issue because of the engineering defect. Moreover, a few of the customers Koebler cites seem to have experienced unhelpful customer service while trying to get the problem resolved. This is something Apple ought to be making right, not making more complicated.
When there was a known engineering defect in my mid-2007 MacBook Pro, I took it in for the out-of-warranty repair program. They didn’t have any of my model’s logic boards in stock, so they replaced it with an upgraded version that had a better video card and a faster processor, at no charge. That’s the kind of customer service users who are reporting this problem should be getting: either upgrade them to a 6S Plus, which doesn’t exhibit the same issue, or refurbish the 6 Plus in such a way that the ICs are reinforced.
Without diminishing the effort that’s been put into this new standard, I’m not convinced there’s a plausible rationale for it. It would impose significant costs on type designers, provide no obvious advantage to our customers, and mostly benefit a small set of wealthy corporate sponsors.
Butterick has many well-considered objections to packaging different weights of type into a single font file — OpenType Variations, in a nut — but I vehemently disagree with his objections to the OpenType working group’s file size argument. As written by John Hudson, member of the working group:
A variable font is a single binary with greatly-reduced comparable file size and, hence, smaller disc footprint and webfont bandwidth. This means more efficient packaging of embedded fonts, and faster delivery and loading of webfonts.
As far as I’m concerned, this is one of the best arguments in favour of OpenType Variations, though there are significant problems — see Butterick’s article. But Butterick’s refutation of this argument is incredibly flawed, even in its most basic premise:
“But customers benefit from smaller file sizes too, because that makes web pages faster.” Certainly, that was true in 1996. And some web developers persist with political objections. But with today’s faster connections — even on mobile — optimizing for file size is less useful than ever.
On the contrary, optimizing for file size continues to be paramount, especially on mobile. This is because most mobile connections — particularly in North America — continue to have a monthly data allotment. It would be impolite to serve, say, an entirely textual thousand-word article as a 4 MB document.1
That is, of course, a problem of today. It’s possible that your cellular carrier will suddenly become charitable and allow everyone to use large amounts of data at very low monthly costs, but — if the way ISPs have behaved for the past twenty years is any guide — I foresee an increase in costs to users, not a decrease.
More than that, Butterick refutes his own objections to the working group’s assertion:
For reasons unclear, this claim about network latency has always provoked howls of outrage among the web-dev Twitterati. Folks, let’s work from evidence, not superstition. For example, here’s a quick test I did this week, with home pages ranked in order of load time. As you can see, load time correlates more strongly with number of requests than download size. And Practical Typography beats everyone but the world’s biggest corporation.
I’m seeing the transfer of eleven different font files every time I load a Practical Typography page without caching. Butterick could cut those requests down to just three — one for each typeface used on the page — by using OpenType Variations, and have a faster site as a result.
There are plenty of factors other than raw file size that affect load times, of course: the speed of the host’s connection, what kind of servers they use, the number of requests, the route that the connection takes between host and destination, and more. But optimizing all of these things is absolutely critical if you care about how your site loads if a visitor happens to have one or two bars or a crappy Midtown connection. Even if they have five bars or they’re using a gigabit connection, it’s just polite, especially when a site is little more than text.
I pay $65 per month for 2 GB of data from my cellular provider. Loading this document on my phone would, therefore, cost me about $0.13. Considering that there are all sorts of background processes and other apps requiring my cellular data, it is only responsible for web developers to be cognizant of the amount of data required for each page load, and to try to reduce it wherever possible. ↩︎
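The arithmetic behind that figure is worth making explicit. A quick sketch, using the rates from the footnote above (the plan pricing is mine, and obviously varies by carrier):

```python
def page_cost_dollars(page_mb, plan_dollars=65.0, cap_gb=2.0):
    """Marginal cost of one page load on a metered data plan:
    dollars per megabyte times megabytes transferred."""
    dollars_per_mb = plan_dollars / (cap_gb * 1024)
    return page_mb * dollars_per_mb

# A 4 MB page of mostly text, versus the same article served leanly.
heavy = page_cost_dollars(4.0)   # roughly $0.13 on a $65 / 2 GB plan
light = page_cost_dollars(0.25)  # well under a cent
```

Thirteen cents per page load sounds trivial until you multiply it across every article a reader opens in a month, which is why trimming page weight remains a courtesy, not a relic of 1996.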
Yahoo has blamed state actors for the attack, but it was actually elite hackers-for-hire who did it, according to InfoArmor, which claims to have some of the stolen information.
The independent security firm found the alleged data as part of its investigation into “Group E,” a team of five professional hackers believed to be from Eastern Europe.
It’s currently unclear how reliable InfoArmor’s analysis is:
Vitali Kremez, a cybercrime analyst at Flashpoint, is more skeptical of InfoArmor’s findings. “They might have jumped the gun too early on this,” he said.
He questioned discrepancies between the database that InfoArmor obtained and what Yahoo said was stolen. For example, Yahoo said passwords hashed with the bcrypt algorithm and security questions may have been lifted as part of the breach. The data InfoArmor uncovered only contains passwords hashed with the MD5 algorithm, and no mention of security questions, he said.
Yahoo’s announcement of the breach said only that “the vast majority” of passwords were hashed with bcrypt. It’s possible that InfoArmor’s subset of data is just the first few million database rows, which would likely be the oldest accounts — I suspect those would use MD5.
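The distinction matters: MD5 is a fast, unsalted, general-purpose hash, while bcrypt is deliberately slow and salted per user. A small Python sketch of the gap (bcrypt isn’t in the standard library, so PBKDF2 stands in here to illustrate the same idea of a tunable work factor; the password and iteration count are arbitrary):

```python
import hashlib
import os
import timeit

password = b"hunter2"

# Unsalted MD5, as in the older records InfoArmor describes: one cheap
# hash, and every user with the same password gets the same digest.
md5_digest = hashlib.md5(password).hexdigest()

# A modern scheme pairs a random per-user salt with a slow, tunable KDF,
# so digests differ between users and each guess is far more expensive.
salt = os.urandom(16)
kdf_digest = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)

md5_time = timeit.timeit(lambda: hashlib.md5(password).digest(), number=5)
kdf_time = timeit.timeit(
    lambda: hashlib.pbkdf2_hmac("sha256", password, salt, 100_000), number=5)
```

The KDF call is orders of magnitude slower per guess, which is the entire point: an attacker with a dump of unsalted MD5 digests can test billions of candidate passwords per second on commodity hardware.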
Finally conceding defeat in a battle lost long ago to Apple Inc. and Samsung Electronics Co., BlackBerry is handing over production of the phones to overseas partners and turning its full attention to the more profitable and growing software business. It’s the formalization of a move in the making since Chief Executive Officer John Chen took over nearly three years ago and outsourced some manufacturing to Foxconn Technology Group. Getting the money-losing smartphone business off BlackBerry’s books will also make it easier for the company to consistently hit profitability.
According to their Q1 2017 results — which were released in June, because BlackBerry’s fiscal years are bizarre — revenue from software and services increased by $29 million compared to the year-ago quarter, while revenue from hardware sales and service access fees1 declined by $117 million and $146 million, respectively. Reducing manufacturing costs is how another fruit-themed company helped stem its losses, too.
BlackBerry said it struck a licensing agreement with an Indonesian company to make and distribute branded devices. More deals are in the works with Chinese and Indian manufacturers. It will still design smartphone applications and an extra-secure version of Alphabet Inc.’s Android operating system.
BlackBerry phones — and BBM, their proprietary messaging app — are still fairly popular in Indonesia. However, the iPhone is a significant status symbol there, and the country’s middle class is growing (along with a rapid rise in inequality). If those growth projections hold, the market for BlackBerry will be eaten up by iPhones and Android phones within just a few years. Maybe BlackBerry can make a comeback, but I doubt it.
Update: Three of the best journalists in tech have compiled a short list of reasons why BlackBerry has struggled with its hardware sales these past few years.
Service access fees apply only to older BlackBerry phones, not their BB10 or Android models. ↩︎
Dan Moren, writing for Macworld regarding the new object and face detection features in Photos:
People don’t like to feel that their personal and private photos are being pored over, even if “just” by a machine. But these local silos have, at least at the moment, made the feature less useful, because the analysis happens on each device that the new Photos is on. That means even if all the photos on your iPhone are scanned for faces, when you upgrade your Mac to Sierra, the Photos app there doesn’t benefit from the information on your phone—even if they’re all the same photos.
Not only does that seem remarkably inefficient, but it also runs into possible collisions. For example, I store my pictures in iCloud Photo Library: My MacBook Air running Sierra and my iPhone 7 running iOS 10 both have my entire 23,154 photo library synced. And yet, if I look at the People album in iOS 10, it identifies 12 people; my Mac’s People album has only 11. Moreover, the total numbers of photos for each of those people largely differs between the two. For example, my phone identified 523 pictures of me; my Mac, only 306. Those are some pretty disparate numbers, and a search for photos on one is sure to look substantially different from the other. And if I make changes in one place to add in more photos to a certain person, I’m just going to have to repeat that process on my other devices.
It sounds like to have things work the way you’d want, you would have to re-tag all your photos on each device. And, I guess, forget about doing anything with faces from the Web interface.
Apple has previously stated that identified objects and people won’t sync — at least, not initially — for privacy reasons. I was under the impression that iCloud was extremely private and secure — again, this is what Apple has been saying for a while. Like Tsai, I don’t get how storing photos in iCloud is totally private, but it’s not yet possible to securely keep associated metadata in sync, too.
At WWDC this year, Craig Federighi strongly disagreed with the argument that privacy and cloud features are competing interests. I’m inclined to believe him; I don’t think that privacy must be compromised in order to provide services that are proactive or user-tailored. I hope that my belief isn’t too idealistic, but Apple’s argument gets harder to agree with when rudimentary gaps remain in existing features.
The University of Pennsylvania’s Wharton School reviewed a new book by Cathy O’Neil:
Most of us, unless we’re insurance actuaries or Wall Street quantitative analysts, have only a vague notion of algorithms and how they work. But they actually affect our daily lives by a considerable amount. Algorithms are a set of instructions followed by computers to solve problems. The hidden algorithms of Big Data might connect you with a great music suggestion on Pandora, a job lead on LinkedIn or the love of your life on Match.com.
These mathematical models are supposed to be neutral. But former Wall Street quant Cathy O’Neil, who had an insider’s view of algorithms for years, believes that they are quite the opposite. In her book, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, O’Neil says these WMDs are ticking time-bombs that are well-intended but ultimately reinforce harmful stereotypes, especially of the poor and minorities, and become “secret models wielding arbitrary punishments.”
This article is striking for its summary of how dependent we’ve become on opaque algorithms we know very little about. The developers of these applications treat them as proprietary, and resist any sort of public or regulatory scrutiny. The combination of little oversight and wide use is, to put it mildly, quite terrifying. I will absolutely be reading O’Neil’s book.
The spreadsheet can’t be opened “for some reason”?? What sort of error message is that??
“For some reason.” How this ever got past any sort of quality assurance I cannot imagine. Did the engineer/s assign an out-of-bounds error code to the problem, and the operating system can’t decide what to say and so falls back to “for some reason”?
I got the same error message back in April while opening a Pages file from a shared drive after a reboot. I had no clue what it meant then, and it was fixed by restarting Pages, so it doesn’t seem to share a cause with Arthur’s error. It does seem to be specific to iWork apps, though.
Buzzfeed’s Reggie Ugwu spoke with Zane Lowe, Jimmy Iovine, and Bozoma Saint John about the updates to Apple Music that debuted alongside iOS 10 and MacOS Sierra. The article is pretty light and fluffy, but there’s a noteworthy scoop:
The other big change is the addition of two new personalized playlists: My Favorites Mix and My New Music Mix. The playlists are generated by algorithms, a first for the service, which has largely relied on human curation for its playlists up to this point. Revealing how the mixes operate for the first time to BuzzFeed News, Apple claimed a potential advantage over similar algorithmically personalized playlists, including Spotify’s Discover Weekly and Pandora’s Thumbprint Radio: deep historical knowledge of individual users’ tastes and habits, based on years of data carried over from iTunes.
Anecdotally, the recommendations I’ve received from Spotify have generally been more well-rounded in almost all regards, despite my using that service less than I do Apple Music or local playback. I also appreciate Spotify’s large variety of community-created playlists.
But I’d rather pay for just one streaming service, and I’d prefer to use the one that’s integrated into the applications I use most: Music on iOS, and iTunes on my Mac. Unfortunately, as I’ve written before, its recommendations have been lacklustre.
Just now, I opened up Music and tapped the “For You” tab to see what it thinks I should be listening to. For some reason, it’s suggesting Linkin Park and Coheed and Cambria, two artists I’m not fond of. So, I went ahead and tapped the “dislike” button on both. Both albums are still in my recommendations and, presumably, will remain there until For You is refreshed. There appears to be no way to do that manually.
Similarly, I think that the items displayed in “Browse” should be somewhat tailored as well. There’s no reason to suggest the new Luke Bryan record — nothing in my playback history suggests that I would be even remotely interested in that record.
There’s a clear solution to a lot of these issues, and Apple has already implemented it. The two new playlists cited by Buzzfeed are among the most accurate I’ve seen, but there are just two of them, and they’re refreshed weekly. It remains a complete mystery to me why the rest of Apple Music’s recommendation features are not using iTunes playback and rating data, nor the previously-collected Genius data.
Of all the rumoured buyers so far — Google, Microsoft, Salesforce, Facebook, and others — Disney is my preferred option. The others are either institutional (Microsoft and Salesforce) or horrific for privacy (Google and Facebook). Disney is a media company, and that’s rapidly what Twitter is becoming — and, arguably, what Twitter has always been.
Korolova and her student Jun Tang discovered that Apple had tucked the mention of differential privacy under two different diagnostic sections in iOS 10. With iOS 10, opting in to having diagnostic and usage data sent automatically to app developers means that users are also automatically subjected to data collection using differential privacy. It seems that if a user wants to submit diagnostic data to developers, but not be subject to the collection of this new data, they’re out of luck.
Most of the non-technical people I know will try to get through the long iOS setup process as quickly as they can, and they don’t necessarily read each page in full. Virtually everyone I know has disallowed the submission of diagnostics and usage data and, consequently, opted out of differential privacy features as well.
If differential privacy allows Apple to collect data while keeping it entirely non-specific and unidentifiable, it should be presented as a great way to make every iOS device smarter while keeping information private.
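For a sense of how that pitch could work, the classic randomized response technique — a simple precursor to the differential privacy methods Apple describes, and not Apple’s actual implementation — shows how accurate aggregate statistics can be recovered even though no individual report can be trusted:

```python
import random

def randomized_response(truth: bool, p_honest: float = 0.75) -> bool:
    """Report the true answer with probability p_honest; otherwise
    report a coin flip. Any single report is deniable, because it
    may simply be noise."""
    if random.random() < p_honest:
        return truth
    return random.random() < 0.5

def estimate_rate(reports, p_honest: float = 0.75) -> float:
    """Invert the noise at the population level:
    E[observed rate] = p_honest * true_rate + (1 - p_honest) * 0.5,
    so solve for true_rate."""
    observed = sum(reports) / len(reports)
    return (observed - (1 - p_honest) * 0.5) / p_honest
```

With enough participants, the estimated rate converges on the true rate, which is the trade Apple is asking users to make: noisy individual data, useful collective data.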
But the entire setup process also ought to be shorter, while allowing users a similar level of control over their privacy and security. Though this may seem paradoxical, I think the critical factor in the unfriendliness of the setup process is the number of pages and options presented. This could be made less intimidating by, for instance, storing as many options and settings as possible in iCloud, and allowing the user to confirm them on a single page during setup. Something like that would go a long way towards making a shorter setup process that asks less of the user, gets them using their device sooner, and maintains their privacy.
Chris began working for Apple in July, but didn’t tell anyone at The Verge that he’d taken a new job until we discovered and verified his dual-employment in early September. Chris continued actively working at The Verge in July, but was not in contact with us through most of August and into September. During that period, in the dark and concerned for Chris, we made every effort to contact him and to offer him help if needed. We ultimately terminated his employment at The Verge and Vox Media the same day we verified that he was employed at Apple.
I have so many questions about this, but I have just one big one: how did nobody notice this? If I didn’t turn up for work for a month, people would notice. People would ask questions. Heck, I would ask questions like How am I maintaining two full-time jobs at the same time? and Why am I trying to maintain two jobs that have a clear conflict of interest for both parties?
I’m sure Patel and the rest of the Verge’s editorial staff are furious about this, but I bet a small part of them is a little bemused by Ziegler’s pluck.
I don’t know what Ziegler is doing at Apple, but he’s not the first journalist they’ve hired over the past couple of years: Anand Lal Shimpi and Chris Breen both left their publications for Apple, and the company has sought more journalists for Apple News, too.
Apple Inc. and Google made tweaks to their popular mobile web browsers recently to enable video content to play automatically in web pages, provided audio is muted.
The changes could result in a boost in mobile video consumption for online publishers if they allow their videos to play automatically, and it could unlock new revenue opportunities as a result.
Looking forward to my cellular carrier finding more revenue opportunities for all of the data this is going to use. Canadians already pay the highest prices for cellular contracts amongst developed countries. I certainly don’t want to pay for overages when a website like Mic or iMore decides to load an autoplaying video ad somewhere on the page.
I’ve set up a rule in 1Blocker, in the Hide Page Elements package, to block video[~autoplay]. This is broad, but it should prevent autoplaying video elements from loading in Safari.
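For comparison, a sketch of roughly what an equivalent rule looks like in the JSON format that Safari content blocker extensions use. Note that the selector below is the standard CSS attribute syntax; how 1Blocker’s `video[~autoplay]` shorthand translates internally is an assumption on my part:

```python
import json

# A minimal WebKit content-blocker rule: on every page, hide any
# <video> element carrying the autoplay attribute.
rules = [
    {
        "trigger": {"url-filter": ".*"},
        "action": {
            "type": "css-display-none",
            "selector": "video[autoplay]",
        },
    }
]

blocker_json = json.dumps(rules, indent=2)
print(blocker_json)
```

This only hides the element rather than preventing the request, so it won’t necessarily save data — for that, a rule with a `"block"` action targeting video resources would be needed.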
“We take these things not just seriously, but personally,” said Young Smith in an interview in the atrium of 1 Infinite Loop. “I have been grieved over this … that someone may have had this kind of an experience.”
“Commensurate actions have been taken,” Young Smith said, noting that disciplinary actions can range from an informal conversation to dismissal. She declined to say what was done in these specific cases, citing privacy concerns.
I certainly hope circumstances like these are not as pervasive at Apple as Melanie Ehrenkranz’s initial report for Mic suggested. However, a followup report contains new allegations that are somewhat evocative of that Amazon article from last year. That’s concerning.
The massive 2014 breach disclosed today by Yahoo is just one of three reported hacks from the past four years. As noted previously, there was also a 2012 breach of 200 million accounts, and Emptywheel has pointed to an individual account hacked earlier this year.
There’s something very unsettling about the way tech companies are responding to these big security breaches: none of them informed their users with anything resembling a sense of urgency. Dropbox waited four years to tell users about their 2012 hack, and only did so after lying about why they were resetting users’ passwords. Tumblr waited three years.
And then there’s Yahoo. They didn’t tell users about the breach in 2012, even after Vice’s Joseph Cox asked about it earlier this year. Today, Kara Swisher and Kurt Wagner of Recode have a comment from Verizon — who are currently in the process of acquiring Yahoo — stating that they didn’t know about the 2012 breach until two days ago, and they only discovered the 2014 hack while investigating the one from 2012.
All of these responses are incredibly irresponsible. Nobody should be finding out that their personal details have been floating around underground message boards for years. These breaches ought to have been publicly acknowledged immediately.
Earlier today, Kara Swisher reported that Yahoo would be confirming the breach of 200 million accounts said to have been compromised in 2012. Swisher was, unfortunately, wrong — the breach turns out to have occurred in 2014, and the size of it is unprecedented.
Based on the ongoing investigation, Yahoo believes that information associated with at least 500 million user accounts was stolen and the investigation has found no evidence that the state-sponsored actor is currently in Yahoo’s network. Yahoo is working closely with law enforcement on this matter.
This far eclipses the impact of the previous record-holding breach — a set of nearly 360 million MySpace accounts, ostensibly leaked by the same hacker, “Peace” (PDF), responsible for the 2012 breach.
Also, you read that right: Yahoo is blaming this attack on a “state-sponsored actor”. They haven’t disclosed any more than that, but in a June interview with Wired, Peace claimed to be Russian and working on behalf of a Russian “‘team,’ if you want to call it that”.
Peace is also responsible for the earlier leak of 65 million Tumblr accounts, originating sometime in 2013. It’s unclear whether there’s some overlap between the two data sets, as Yahoo acquired Tumblr that same year.
Update: Clarified the role of Peace in the 2012 attack.
The version of Allo rolling out today will store all non-incognito messages by default — a clear change from Google’s earlier statements that the app would only store messages transiently and in non-identifiable form. The records will now persist until the user actively deletes them, giving Google default access to a full history of conversations in the app. Users can also avoid the logging by using Allo’s Incognito Mode, which is still fully end-to-end encrypted and unchanged from the initial announcement.
According to Google, the change was made to improve Allo’s smart reply feature, which generates suggested responses to a given conversation. Like most machine learning systems, the smart replies work better with more data. As the Allo team tested those replies, they decided the performance boost from permanently stored messages was worth giving up privacy benefits of transient storage.
Or, to put it another way, Google made their development of Allo easier by making it significantly less friendly to your privacy.
If you’ve skipped here to see how the heck it works, I don’t blame you. The short answer: incredibly, miraculously well in many instances. And pretty rough in others. Apple says this is still in beta and it is. It has trouble with leaves, with chain link fences and patterns and with motion. But it also handles things so well that I never thought possible like fine children’s hair and dog fur, shooting pictures with people facing away and objects that are not people at all.
Panzarino’s test shots look decent, but when Serenity Caldwell posted a side-by-side comparison with a DSLR, it was obvious to me which was which. There is an inherently more natural falloff from the point of focus that can’t be simulated with the nine slices of depth generated by the iPhone’s dual cameras.
But these photos are extremely impressive, especially from a smartphone. It’s a simulation, sure, but a very convincing one when these photos are shared on Instagram or Facebook.
William Wilkinson also posted some tests on Twitter — featuring a cat instead of a baby — that are worth taking a look at. I’d love to give this a try, but I’m not sure it’s enough to convince me to choose the Plus model over the regular iPhone.
Update: According to Jeff Benjamin at 9to5Mac, Portrait mode works with inanimate objects:
I was almost sure that it would be a people-only thing, at least initially, due to the way Apple was wording the feature during its event and in its press materials. As Apple often does, it under-promised and over-delivered; that much is obvious, even in this early beta stage of the game.
If you’re part of Apple’s public beta program, you should be getting this update by the end of the week. ↩︎
There is a familiar set of rituals the tech press follows in the weeks after an Apple event. It starts as the keynote happens, with hurriedly-written takes on the lack of surprises from someone who kept up with industry rumour blogs leading up to the event. Initial impressions from the hands-on area follow, most of which seem to laud the quality and impressiveness of the products just announced. Then, after one-to-two weeks of mixed takes and boredom, the reviews follow, bringing with them an enthusiastic bout of excitement for the products.
And then, shortly after the first round of product delivery, a mood sets in that I like to call the “Post-Launch Hangover” — a sort of But what have you done for us lately? feeling that takes over the editorial pages of major tech publications.
This isn’t a new phenomenon, of course. Here’s Ben Woods writing for ZDNet a couple of months after WWDC 2012 and about a month before the introduction of the iPhone 5:
While Apple says it has hardware to beat all-comers, I’d argue it doesn’t: it has beautifully designed devices with close to, but not quite, top-of-the-range specs. It’s true, though, that this has been good enough for it to maintain excellent margins on massive volumes of sales and to keep people eager for more.
But to my mind, Apple is in danger of becoming boring.
I’m not sure why Woods hedged his assessment of Apple with an “in danger of” clause. His article is almost entirely about how tired he is of Apple’s hardware in the present, rather than in the future.
Then, in February of 2013, Ashraf Eassa wrote an article for Seeking Alpha about his boredom with Apple:
The point here is that everyone is busy trying new things and really pushing the boundaries while Apple sticks to the tried-and-true formula. While in the short term Apple’s momentum will continue and the profit train won’t suddenly crash, the long-term picture is somewhat grim given that the company derives over 50% of its operating profits from the iPhone.
Dan Nosowitz repeated a similar notion in a 2014 article for Fast Company:
There’s nothing wrong with embracing Apple’s style; It uses fine, sturdy materials which are very functional and very inoffensive. This aesthetic signifies “hip” without alienating anyone; who could possibly object to the style of a MacBook Air? It’s all black and silver and glass. It goes with everything. It is a slim pair of dark jeans. It is fine. But design is a creative field built around evolving ideas, and when it comes to consumer technology, things have become stagnant.
For proof of that, you need look no further than Apple itself. Read any iPhone 6 review, and the design talk paints Apple in the same light as always; Apple design is good. The iPhone is beautiful. But new iPhone designs have typically brought new ideas with it. The iPhone 6 simply adopts the pre-existing design language of the iPads and covers it in ugly antenna lines.
Then, in a January 2015 Engadget article, Aaron Souppouris expressed the same sentiment:
In less than a decade, Apple completely changed the world of personal computing, and the music industry in the process. First came the iPod and the iTunes Store; then the iPhone and App Store; and then the iPad. The Apple of the 2000s was an exciting company to follow. It’s just not that company anymore. Instead, it’s spent the past few years slowly improving its admittedly great cash cows, iterating and iterating and iterating. It’s made cheaper iPhones, bigger iPhones and even gave in and made a phablet. It’s made cheaper iPads, smaller iPads and is apparently planning a bigger iPad. It’s made cheaper MacBooks, smaller MacBooks… you get the point. Its latest project, the Apple Watch, sure looks like a smartwatch, and it might be very successful, but is it doing anything totally unique? Is it really exciting? No.
Should you think that these reactions are limited to the past five years of Tim Cook’s Apple, I humbly submit this turd of a quote from Sébastien Page’s first look at the iPhone 3GS for iDownloadBlog:
I think what I hate the most about the iPhone 3G S is the design which is exactly identical to the iPhone 3G. When I pay $560.16 for a new phone, I expect to have something that looks different from everybody else. Yes, the iPhone is a phone for the elite, I admit it. I kinda miss the days of the first iPhone, when people came to me and candidly asked me “wow, is this the iPhone?”. I was proud of it. Now everyone has an iPhone, and even worse, everyone has an iPhone that looks similar.
So: a long and not particularly proud tradition of tech journalists collectively yawning at the new products that Apple releases.
But most of these writers aren’t really noteworthy. I’m not picking on kids writing in the Verge’s comments section, but none of these contributors are particularly distinguished. For that, one must turn to Farhad Manjoo of the New York Times, shortly after the introduction of the iPhone 7:1
The absence of a jack is far from the worst shortcoming in Apple’s latest product launch. Instead, it’s a symptom of a deeper issue with the new iPhones, part of a problem that afflicts much of the company’s product lineup: Apple’s aesthetics have grown stale.
In an article posted today, John Gruber effectively dismantles this notion piece-by-piece:2
You need to recognize a Porsche 911 as a 911. An iPhone needs to look like an iPhone. The design needs to evolve, not transform. The thing to keep in mind is that the iPhone itself, what it looks like in your hand, is the embodiment of the iPhone brand. There is nothing printed on the front face of an iPhone because there doesn’t need to be. The Apple logo is the company’s logo. The iPhone’s logo is the iPhone itself.
A couple of days ago, Rene Ritchie posted a photo on Instagram of his original iPhone stacked on top of an iPhone 7 Plus. Even if you were to remove the Apple logo from both cases, there is a clear lineage. The iPhone 7 is — again, as Gruber notes — simultaneously new and familiar, and that’s a hell of a feat.
Even Dustin Curtis, a writer and designer whose work I’ve long respected, thinks that Apple’s aesthetics are “stagnating”:
There was a time, not too long ago, when Apple used to test radically new designs all of the time–the iMac used to change almost every year, the iPod changed even more often than that, and though some of those changes were failures (remember the iMac’s swivel screen?), most led to groundbreaking improvements that were eventually adopted by the whole computer industry. The G4 Cube was interesting, if short-lived. The Titanium PowerBook was a statement. But Apple’s recent designs have been much more reserved, much more careful, than designs of the past. I fear they’ve become more boring.
It’s curious that Curtis mentions the TiBook while writing about his exhaustion with Apple’s industrial design team. At the time, it was a radical new design, but Christopher Phin of Macworld compared it to a 15-inch MacBook Pro, and the two are clearly cousins. Apple’s professional laptops seem to have changed very little in their immediate aesthetics since the TiBook was introduced. But that’s fine.
Apple has long been a company of iterative processes. While the iPod Nano was an ever-changing product, the iPod Classic of 2007 differed little in its overall aesthetic from the model launched in 2001. Similarly, the iPhone has arguably changed its design language just twice from the time it was introduced: to flat edges, with the iPhone 4; and back to curved sides, with the iPhone 6.
I dug up my first-generation iPod Touch for this photo:
Those two products may have been released eight years apart, but they are clearly of the same family. The sameness goes back even further: the original iPhone had a solid coloured front and a metal back, like the original iPod, and that aesthetic was brought to the Mac as well.
There are some who will see this as laziness or a lack of creativity, but design is so much more than the way these products look. Apple has become very good at lots of things over the years, but their main innovations in the past few have been in elevating the quality of their high-volume products while trying new ideas on smaller scales.
My iPhone 6S feels like a flattened original iPhone. I’ve previously made known my dislike of the antenna lines and the misaligned camera bump — blessedly fixed in the iPhone 7. But, aside from the size of its display and its thickness, the original iPhone feels noticeably different. It feels like multiple parts, rather than the singular form it seems to be. There are gaps between parts and minor misalignments that would drive today’s Apple absolutely crazy.
My 6S, on the other hand, feels like a continuous shape. The edge of the curved glass meets the rounded edges flawlessly. None of the buttons move in any direction other than inwards. The power button and volume controls all feel the same. The iPhone 7, particularly in the new Jet Black finish, seeks to push this even farther by making the entire phone feel and look like a singular form. And they’re doing this at the scale of tens of millions of iPhones every quarter.
A similar obsession with solidity and uniformity has spread to Apple’s other product lines. The iMac, the MacBook, and the iPad are all designed to feel like the most rigid, solid products you can buy in their respective classes. The little things Apple has been chipping away at — solid-state trackpads, laminated displays, and lower tolerances — add up to make the latest generation of each of their product lines feel less like they were assembled from multiple components, and more as though they were spawned into existence as entirely finished units.
I do understand Curtis’ frustration with the lack of the new, however. One thing you’ll notice is that many of the innovations he cites — from the original iMac to the iPhone 4 — came about as a result of a change in materials. And Apple has, indeed, been working with new and different materials at relatively small scales.
It’s safe to say that no other company understands aluminum as well as Apple due to the scale at which they use it and study it; few others parallel their knowledge of glass, too. But they’ve also used gold, sapphire, Liquidmetal, and — with the new Apple Watch Edition — ceramic. It seems to me that they want to be as rigorous as they can be in their research of new materials, and that often means trying these materials in smaller quantities or with less-impactful applications.
Sapphire, you will recall, was first used for the camera lens cover on the iPhone 5, before making its way onto the home button and Touch ID sensor in the 5S. The 63 was, according to many early rumours, supposed to be fitted with a sapphire display cover, but supplier troubles required a change of plan. This is the risk with using new materials at the scale of the iPhone.
The Apple Watch, on the other hand, is shaping up to be similar to the iPod. The aluminum model is the least expensive, and it’s the material Apple has the most experience working with. Stainless steel is something Apple has worked with less frequently, but they’ve previously used it on the iPhone 4 and 4S and it’s one of the most commonly-used materials in the world.
The Edition, though, is Jony Ive’s playground. The first model was made of high-grade gold alloys, while the Series 2 model is made of ceramic. Both of these materials are new to Apple, and because the Edition sells in such low quantities, there’s plenty of space to try them on a more sedate production line. I don’t know if the next iPhone will be ceramic — in fact, I doubt it will — but their process for making it might yield unique results that are applicable to other product lines.4
But that’s then; this is now, in our Post-Launch Hangover. I have very little idea what the future may hold, but I know what the present holds. And I don’t see anything boring about what I’ve seen this year, nor do I see this as some kind of downfall for Apple’s famed industrial design team. They’re pursuing the same strategy they always have: refine, iterate, repeat. It’s slow, tedious work, but it results in products that are built with unparalleled care and finesse at unprecedented scales. Evolution is slow to see when you’re witnessing it in real-time, but when the iPhone 14 — or whatever — is released, we will almost certainly be able to look back and see that it is a clear descendant of the original iPhone while still looking new. That’s not boring; that’s designing an icon.
I began the article you’re reading right now last night after Tze-Ho Tan sent me a link on Twitter. While I was at work today, Gruber published his excellent piece. It’s merely coincidental that the topics are similar and published on the same day, but I think that offers some light support for my “Post-Launch Hangover” theory. ↩︎
Based on Tim Bajarin’s reporting and the typical iPhone production ramp-up, I believe sapphire displays were more likely targeted for the iPhone 6S, not the 6. I very much doubt that the display material of the iPhone 6 was switched “several weeks” before it was launched. ↩︎
The Mac Pro, described by Curtis as “the last truly staggering piece of industrial design work that Apple released”, is another lower-volume product with which they can experiment. It utilizes some innovative production techniques, but perhaps none more so than being built in the United States. ↩︎
Transit has always been my favourite public transportation app, and the 4.0 update adds some pretty killer features. Most notably, “GO”:
Using voice and push notifications, GO manages each aspect of your trip so you don’t have to think about…
when to leave
where to get off
or whether you’ll reach your destination in time
It notifies you when you should leave to catch the train or bus, to walk faster if it detects that you’re going to miss your ride, and more. For straightforward daily commutes, this probably won’t be that useful. But for those of us who take public transit everywhere or for unfamiliar cities, this is going to be amazing. If you’re dependent on public transit and you use any app other than Transit, I’m not sure we can be friends.
Update: Jonas Wisser tried the GO feature on an hour-long commute and found that it’s hard on battery life. Keep that in mind when you try it out.
Speaking of Siri, Joanna Stern’s column this week for the Wall Street Journal looks at some of the improvements and changes Apple has made to it in iOS 10:
Most of Siri’s third-party integration has been so reliable and accurate that it’s spurred me to start talking to it more and more. In some cases, I’ve been impressed with how much Siri has learned about how we speak. She understands and responds to casual phrases like, “Shoot an email to Geoff” or “What’s up with the weather today?”
Sometimes, though, I have to carefully phrase the question. “When’s the next train coming?” pulls up the definition of “train” on Wikipedia. “Show me transit directions,” however, shows me the latest train schedule. And when it comes to general knowledge, Siri comes in third place behind Google’s and Amazon’s assistants.
Unfortunately, most of the apps I use regularly either haven’t yet been updated to support SiriKit, or don’t fit into one of the specific domains that SiriKit supports right now.
I’m looking forward to trying Siri with more data sources, though. I think that these kinds of improvements will reduce the psychological impediment that I — and, perhaps, you — face when talking to technology. If it doesn’t feel entirely right, it feels wrong.
Apple says Siri is updated every other week with new information.
That’s not frequent enough. Not even close. Breaking news stories, in particular, should be indexed by Siri as soon as they’re published.
Available now on the Mac App Store — remember the Mac App Store? — macOS Sierra brings Siri to the Mac, allows you to offload storage of old files to iCloud, and adds Apple Pay to Safari, amongst miscellaneous updates and improvements.
All conversations with Siri have a small button with a plus symbol. Clicking it opens Notification Center (which now sports a white theme to match iOS) and adds Siri’s results to the top of the stack of widgets.
Here’s the clever bit: the content of these is constantly being updated by the system.
This leads to all sorts of possibilities. Creating a widget during a sports game would keep the real-time score just a swipe away. Creating a Twitter search with a keyword can help you keep updated on what people are saying about your brand. The possibilities are nearly endless.
It’s a little frustrating that this kind of stuff is gated behind a spoken Siri command. Not only does this require talking to your computer — a task which I still find a little bit weird — it also means that the computer must interpret what you’re saying absolutely perfectly for this feature to be of any use. Siri still isn’t accurate enough for my liking, even on the Sierra betas; so, while I’ll try it out on my Mac, I’m not sure I’ll use it regularly.
Meanwhile, when Jason Snell tried the new iCloud storage optimization feature, he found it working like many of Apple’s other iCloud products:
Here’s what happened: I was editing a podcast in Apple’s Logic Pro X, and my project was stored on the Desktop. All of a sudden, the voice of one of my podcast panelists simply vanished from the mix. I quit and re-launched Logic, only to be told that the file in question was missing. Sure enough, a visit to Finder revealed that Sierra had “optimized” my storage and removed that file from my local drive. I’ll grant you, the file was a couple of weeks old, and very large as most audio files are. But I was also actively using it within a Logic project. Apparently that didn’t count for anything?
That’s not good. The automated storage features in iCloud have been a mixed bag: iCloud Photo Library has worked perfectly for me so far, but iCloud Music Library has been fairly unreliable — so much so that I refuse to enable it. I doubt I’ll be touching the storage optimization feature in Sierra for a while.1
The iPhone and Watch ads are energetic yet moody, and emphasize the water resistance of both products. The Watch ad, in particular, feels like it has strains of the original iPod campaign’s DNA in it; I will never say no to anything featuring Nina Simone.
The Apple Music ad is entirely different. It’s kind of odd seeing Eddy Cue and Bozoma Saint John in an ad, but it’s pretty fun. I really liked it. James Corden plays himself, and the Apple executives — also including Jimmy Iovine — are on hand to bat down his ridiculous ad pitches. It doesn’t take itself too seriously, and it’s better for it.
Dr. Raymond M. Soneira of DisplayMate summarizes the iPhone 7’s new, wide-gamut LCD panel (capitalization his):
The display on the iPhone 7 is a Truly Impressive Top Performing Display and a major upgrade and enhancement to the display on the iPhone 6. It is by far the best performing mobile LCD display that we have ever tested, and it breaks many display performance records.
Just count the number of superlatives in this review. Soneira is a notoriously tough data-driven reviewer, but his commentary on the iPhone 7’s display is effusive.
The next major iPhone redesign is rumoured to include a change to an OLED display. Soneira addresses this:
LCDs are a great cutting edge high performance display technology for Tablets to TVs, but for handheld Smartphones, OLED displays provide a number of significant advantages over LCDs including: being much thinner, much lighter, with a much smaller bezel providing a near rimless design, they can be made flexible and into curved screens, plus they have a very fast response time, better viewing angles, and an always-on display mode. Many of the OLED performance advantages result from the fact that every single sub-pixel in an OLED display is individually directly powered, which results in better color accuracy, image contrast accuracy, and screen uniformity.
With only the super-saturated displays of Samsung and LG smartphones as reference points, I didn’t think that OLEDs could be calibrated to the accuracy of an LCD. But, after seeing that my Apple Watch is a near-perfect match for the colour profile in my iPhone, I have hope that a hypothetical OLED-equipped iPhone will be as accurate and clear as the LCD reviewed here. I still don’t think greys are quite perfect on my Watch, but they’re close. There are few companies that calibrate displays the way Apple does, and there’s nobody else doing it at their scale and across their entire product line.
The National Security Agency and the FBI are tapping directly into the central servers of nine leading U.S. Internet companies, extracting audio and video chats, photographs, e-mails, documents, and connection logs that enable analysts to track foreign targets, according to a top-secret document obtained by The Washington Post.
That article revealed the existence of the NSA’s PRISM program. It was amongst the earliest articles published from Edward Snowden’s disclosures, and the first from the Washington Post, whose reporters were given access to the leaked documents. The Post shared a Pulitzer Prize with the Guardian for their work in reporting on the NSA’s domestic and foreign surveillance programs over the course of the rest of the year, including for that article on PRISM.
Another program, PRISM, disclosed by the Guardian and The Washington Post, allows the NSA and the FBI to obtain online data including e-mails, photographs, documents and connection logs. The information that can be assembled about any one person — much less organizations, social networks and entire communities — is staggering: What we do, think and believe.
And, in April 2014, Paul Farhi wrote for the Post about the Pulitzer Prize they had just earned:
In both the NSA and Pentagon Papers stories, the reporting was based on leaks of secret documents by government contractors. Both Snowden and Daniel Ellsberg — who leaked the Pentagon Papers to Times reporter Neil Sheehan — were called traitors for their actions. And both the leakers and the news organizations that published the stories were accused by critics, including members of Congress, of enabling espionage and harming national security.
But Post Executive Editor Martin Baron said Monday that the reporting exposed a national policy “with profound implications for American citizens’ constitutional rights” and the rights of individuals around the world.
“Disclosing the massive expansion of the NSA’s surveillance network absolutely was a public service,” Baron said. “In constructing a surveillance system of breathtaking scope and intrusiveness, our government also sharply eroded individual privacy. All of this was done in secret, without public debate, and with clear weaknesses in oversight.”
Mr. Snowden’s defenders don’t deny that he broke the law — not to mention oaths and contractual obligations — when he copied and kept 1.5 million classified documents. They argue, rather, that Mr. Snowden’s noble purposes, and the policy changes his “whistle-blowing” prompted, justified his actions. Specifically, he made the documents public through journalists, including reporters working for The Post, enabling the American public to learn for the first time that the NSA was collecting domestic telephone “metadata” — information about the time of a call and the parties to it, but not its content — en masse with no case-by-case court approval. The program was a stretch, if not an outright violation, of federal surveillance law, and posed risks to privacy. Congress and the president eventually responded with corrective legislation. It’s fair to say we owe these necessary reforms to Mr. Snowden.
The complication is that Mr. Snowden did more than that. He also pilfered, and leaked, information about a separate overseas NSA Internet-monitoring program, PRISM, that was both clearly legal and not clearly threatening to privacy. (It was also not permanent; the law authorizing it expires next year.)
It was the Post’s choice to report on that. They seized on a scoop and published dozens of articles and editorials arguing that the NSA’s surveillance efforts — including PRISM — amounted to a significant intrusion into our personal privacy. These articles earned them admiration, praise, and a Pulitzer Prize. And now, they’re arguing that these programs were fine and that their source should be sent to prison?
The Post continues:
No specific harm, actual or attempted, to any individual American was ever shown to have resulted from the NSA telephone metadata program Mr. Snowden brought to light.
On the contrary, all of these programs — and, in particular, the cited Verizon metadata collection court order — have harmed the privacy rights of every American, not to mention billions of others around the world, as pointed out by Post executive editor Martin Baron.
In contrast, his revelations about the agency’s international operations disrupted lawful intelligence-gathering, causing possibly “tremendous damage” to national security, according to a unanimous, bipartisan report by the House Permanent Select Committee on Intelligence. What higher cause did that serve?
“Higher cause”? How about providing Americans some information on the decisions being made in secret that directly contributed to the mass collection of their communications on an unprecedented scale?
The Editorial Page is separate from the news organization and does not speak for the latter; I seriously doubt the journalists or editors at the Post who worked on these news stories would agree with any of that editorial. But still, if the Post Editorial Page Editors now want to denounce these revelations, and even call for the imprisonment of their paper’s own source on this ground, then they should at least have the courage to acknowledge that it was the Washington Post — not Edward Snowden — who made the editorial and institutional choice to expose those programs to the public. They might want to denounce their own paper and even possibly call for its prosecution for revealing top secret programs that they now are bizarrely claiming should never have been revealed to the public in the first place.
Snowden knew full well that leaking millions of pages of classified and top secret operational data was illegal, and that the American government would throw every law in the book at him. It’s reasonable for him to have left the United States because remaining at home would land him in prison without a fair trial, as the Post’s editorial acknowledges:
Ideally, Mr. Snowden would come home and hash out all of this before a jury of his peers. That would certainly be in the best tradition of civil disobedience, whose practitioners have always been willing to go to jail for their beliefs. He says this is unacceptable because U.S. secrecy-protection statutes specifically prohibit him from claiming his higher purpose and positive impact as a defense — which is true, though it’s not clear how the law could allow that without creating a huge loophole for leakers.
Holding a fair trial with a defence built around a greater public good is not a “loophole” — it’s the bare minimum for a fair trial in a democracy.
But there’s no current means for such a trial to occur, due in no small part to the legal mechanisms revealed through documents leaked by Snowden. As a result, the only method available for him to argue his case is a pardon. It’s not cowardly or an attempt to trivialize his actions; it’s the only avenue. And we only know that because of documents and stories selected for publishing by the Post. As a result, the Post should be supporting Snowden’s bid for a pardon, if for no “higher purpose” than their own journalistic integrity and ego.
And, if their editorial board continues to argue that Snowden not be pardoned, I submit that the Post should return their Pulitzer. They clearly don’t think of their own work as having a “higher cause”.
Reddit user “AWPrahWinfrey” picked up their iPhone 7 yesterday and, because they’re in Singapore, they had a good half-day of usage before anyone in the United States got their hands on one. Better still, they went to the Singapore Grand Prix and took a lot of photos, which are worth checking out. The bigger sensor and wider aperture clearly make a huge difference.
The Coalition for Better Ads was announced today in Cologne, Germany, where the Dmexco conference has been taking place this week. The coalition’s founding members include Google, Facebook, Procter & Gamble, Unilever, the 4As, the Association of National Advertisers, the World Federation of Advertisers, The Washington Post, GroupM and the Interactive Advertising Bureau.
The consortium aims to monitor the quality of online advertising using technology currently being developed at the IAB’s Tech Lab. Digital ad campaigns will be scored on everything from creative to load time, and the coalition will come up with standards based on data gleaned from the system as well as from consumer feedback and input from marketers.
Of course, there’s nothing in these regulations that attempts to address ever-increasing data collection and user profiling from ads. In fact, if I’m reading the last sentence correctly, they’re increasing the amount of data they collect with these “better” ads.
This is all academic, frankly. After the chairman of an adtech company proposed to the IAB that member sites should block access for anyone using an ad blocker — with no success — and the IAB launched an entirely-voluntary “LEAN” ad initiative, which has seen little success, I doubt these regulations will create meaningful improvements in the quality, design, or speed of web ads. The advertising industry does not have a good track record of self-governance or improvement.
The constant reminders of potential combustibility have further dented Samsung’s reputation and shaved as much as $14 billion off its market value, just when it looked to be gaining ground on Apple, its longtime rival, with its new line of sleek Galaxy smartphones. They also raise questions about whether Samsung’s rush to take back the phones created more problems.
Experts say it led to a ham-handed effort that confused customers, frustrated regulators and continued to generate headlines both in the United States and at home. Data from the mobile analytics firm Apteligent showed that while Samsung’s recall appeared to have stopped new sales of the phone, a majority of people who had the affected phones were continuing to use them.
As I said earlier this week, Samsung’s recall started off strong and prompt. That was encouraging: an acknowledgement of the problem, and a promise to fix it. The story since then, however, has been a disaster: the CPSC recall program was only announced today, a full two weeks after Samsung announced that they’d be recalling all Galaxy Note 7s sold so far.
But this isn’t the first time Samsung has had a problem with dealing with consumer complaints about fire hazards. Back in May 2015, Brian X. Chen’s Samsung oven melted the side of his kitchen cabinets, and he had a woeful time trying to get the company to acknowledge the problem and issue a refund.
Fairfax Media can reveal Samsung made a potentially fatal error in its mammoth recall of 144,500 washing machines with a waterproofing fault that has burnt down homes.
In response to an email from Ms Teitzel in May 2015, a product safety officer assured her that, based on the serial number, her unit was manufactured after February 28, 2013, and therefore had been “modified”.
This assessment was incorrect. The machine was manufactured in January 2013 and the fault had never been repaired.
Speaking of the Mac Pro, it crossed a big milestone on Tuesday: one thousand days since it was last updated. With the exception of the effectively-deprecated non-Retina MacBook Pro, no other product has gone for so long without an update. Even the iPod lineup was refreshed more recently.
I think it’s far past time Apple knocked something off the price of this machine. Selling three-year-old hardware at its launch price is pretty insulting to pro users.
Apple’s silicon team has outdone themselves with the A10: it benchmarks faster than my MacBook Air and, indeed, any MacBook Air before or after it. It also beats the 12-core Mac Pro in single-core testing. The iPhone 7 might be the third iteration of a similar exterior design, but it’s one of the biggest leaps forward that they’ve ever made.
According to Apple’s most recent diversity report, women make up 32% of its global workforce. About a dozen of those women joined Danielle on a recent email thread, shared with Mic by an Apple employee, in which they commiserated on their experiences working in a company dominated by men. The thread included stories of discrimination and workplace harassment and was sparked after another Apple employee shared Danielle’s experience in an attempt to galvanize the necessary support to mobilize and enact change.
This thread is just one part of over 50 pages of emails obtained by Mic from current and former Apple employees.
There are several users on Hacker News who claim to be Apple employees, and who say that the specific complaint “Danielle” (not her real name) raised concerned an inappropriate reference, for which the offender apologized. There are plenty of other incidents in this article that appear far more serious:
Claire* said that she faced retaliation from her male colleagues for reporting them. According to Claire, when someone finally came in to investigate the issue of the harassment she reported up, Apple admitted to her that she was in a hostile work environment. But instead of working to ameliorate her situation, she said, the company gave her a choice: stay in the position or take a lower ranking, lower paying job on another team.
Claire took the demotion.
Brianna Wu says that she’s heard similar stories as well.
The craziest part of reading an article like this is that it feels all too familiar. The culture of Silicon Valley is such that these sorts of experiences and reports are depressingly common. That’s deeply troubling.
All of the major tech firms need to do better, but Apple — in particular — says that they stand for more than this. They should back that up with meaningful action. A little bit of empathy for those taking issue with these incidents would go a long way.
[The] way I summarize this issue: it takes [a] certain type of woman to survive in this industry and it shouldn’t.
Mic has a website that’s even worse than iMore’s. Hundreds of HTTP requests, nearly 10 MB of transferred data, and a load time of nearly 13 seconds on my broadband connection. Lots of email prompts, autoplaying video ads, and page takeover garbage complete the experience. ↩︎
Despite all of the things I thought Apple did right in iOS 10, I found their lack of support for the iPad as a unique platform to be disappointing. I know they can’t hit every item on their internal wish list with each release, but after the robust enhancements to the iPad experience in iOS 9, seeing many of this year’s improvements be scaled-up versions of the iPhone experience was not encouraging. In particular, the lack of significant improvements for the 12.9-inch iPad Pro seems worrying.
On the big iPad Pro, though, the new version of iWork includes a touch-optimized version of the formatting sidebar that appears in all three desktop apps. It’s pretty clear that there are people within Apple who want the iPad to be far more robust and capable, but it’s too bad that more of that focus didn’t make it into iOS 10.
Another iMessage sticker pack I think many of you will enjoy:
Panic Co-Founders Cabel and Steve are for some reason now available in iMessage sticker form at last! Enjoy peppering your important message communications with, for example, Steven dressed as a sea captain, or Cabel riding on top of the Transmit icon. With an icon for every emotion, Steve and Cabel will be happy to enhance your words.
Why do you want this? More like: why don’t you want this?
I touched a little on it in my iOS 10 review, but Ben McCarthy — the guy behind Obscura — wrote a great article for iMore about the increased range and depth available when shooting RAW:
In all these tests, JPEG is to the left; RAW to the right. Directly out of the camera, the JPEG looks a little more interesting: It has more contrast, and there appears to be more detail. The RAW image looks downright drab in comparison.
But as Apple SVP Phil Schiller noted on stage during the iPhone 7 event, Apple does a lot of work to process images behind the scenes using its ISP (image signal processor). It makes the images more vibrant and ready to display on the iPhone’s beautiful screen. But it does mean that the image is being altered as you take it — and that can be a detriment when you want to make further changes beyond what the ISP had in mind.
The built-in camera app has always been my go-to capture app, but ever since Ben sent me a build of Obscura with RAW capture support, I’ve used it almost exclusively. I spent some time yesterday shooting with the newest version of Manual, which also has RAW capture support — not that there’s any difference in the RAW files they create, mind you.1 There’s so much more depth and vitality to a RAW file if you’re willing to spend more time editing it. And, if you already spend a lot of time in post-production, you should be shooting RAW.
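A toy sketch helps show where that extra editing headroom comes from. The bit depths and gain below are illustrative assumptions, not the iPhone’s actual pipeline: the point is simply that a deeper capture format quantizes shadow tones more finely, so an aggressive push in post lands much closer to the intended result.

```python
# Toy illustration: brightening a dim shadow tone after 8-bit (JPEG-like)
# quantization vs. 12-bit (RAW-like) quantization of the same sensor signal.

def quantize(value, bits):
    """Round a 0.0-1.0 sensor value to the nearest code at the given bit depth."""
    levels = (1 << bits) - 1
    return round(value * levels) / levels

signal = 0.01  # a deep shadow tone, as a fraction of full scale
gain = 40      # an aggressive exposure push in post-production

jpeg_like = quantize(signal, 8)   # 8 bits: 256 levels
raw_like = quantize(signal, 12)   # 12 bits: 4,096 levels

target = signal * gain  # what the brightened tone should be
jpeg_error = abs(jpeg_like * gain - target)
raw_error = abs(raw_like * gain - target)
# The 8-bit version lands noticeably farther from the true brightened tone;
# rounding error in the shadows gets multiplied along with everything else.
```

Real RAW files carry other advantages too — no sharpening or noise reduction baked in — but quantization alone explains a lot of the “depth and vitality” difference once you start pushing sliders.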
Every year, Austin Mann gets a prerelease iPhone from Apple, jets off to someplace awesome, and shoots a lot of great photos. It’s a shitty job, but someone’s gotta do it.
Anyway, this year, he went to Rwanda and the photos he captured on both phones — but particularly on the Plus variant — are gorgeous. Even the time-lapse function is better on these models, with significantly reduced flicker when the phone compensates for changing lighting conditions.
Totally fun new iMessage app that I’ve been testing. You can begin with a familiar emoji’s base shape, then add lips, eyes, accessories, and all sorts of things to build your own variation. It sounds ridiculous — and, believe me, it is — but there’s nothing like a large, disco dancing “pile of poo” emoji to add some pizazz to your conversations.
Back in June, I had this crazy idea that I was going to review iOS 10 and WatchOS 3 this year, both in my usual long-but-not-too-long style. I drafted an entry for each in MarsEdit, made my notes, and then — nothing. Some real life stuff got in the way, I procrastinated, and I ended up only being able to finish my annual iOS review. I’m okay with that, because I’m pretty happy with the way it turned out this year, but I wish I got the time to write up my thoughts on WatchOS 3 as well.
That being said, I think Matt Birchler has done an outstanding job with his review. He touches on all the main points, in a beautifully-designed review, to boot.
Let’s get something out of the way upfront: iOS 10 is a big release. It’s not big in an iOS 7 way, with a full-system redesign, nor does it introduce quite so many new APIs and features for developers as iOS 8 did. But it’s a terrific combination of those two sides of the spectrum, with a bunch of bug fixes tossed in for some zest.
For some perspective, there has been more time between the release of iOS 10 and the original iPhone than between the release of the first iMac and the first iPhone. It’s come a long way, baby, and it shows.
Installing iOS 10 is a straightforward affair, particularly with the enhancements to the software update process initiated in iOS 9. It requires less free space than its predecessors to upgrade, and you can ask iOS to update overnight. Nice.
iOS 10 is compatible with most of the devices that iOS 9 was, but it does drop support for some older devices. A5-generation devices and the third-generation iPad are all incompatible; the iPhone 5 is the oldest device that supports iOS 10.
There are additional limitations for the few 32-bit devices that remain supported: Memories and “rich” notifications are only supported on 64-bit devices. Raise to Wake is only supported on the iPhone 6S and newer; it is not supported on any iPad or the iPod Touch. I find that a curious choice — surely Raise to Wake would be just as helpful, if not more so, on the iPad, given its much larger size. And it’s not like a lack of an M-class motion co-processor is an excuse, because both iPads Pro contain a derivative of the A9 processor in the iPhone 6S with the same embedded M9 co-processor.
Lock Screen, Widgets, and Notifications
Goodbye, Slide to Unlock
Back when I bought my first iPhone OS device in 2007 — a first-generation iPod Touch, as the iPhone wasn’t yet available in Canada — I was constantly being asked to demo two features for anyone who asked: slide to unlock, and pinch to zoom. Everyone I know wanted to plunk their finger onto the little arrow nubby and slide it across the bar.
Once again proving that they give nary a shit about legacy or tradition, Apple is dropping “slide to unlock”. Like any major shift — the transition from the thirty-pin dock connector to Lightning, or, say, the removal of the headphone jack — there will be detractors. But I’m not one of them.
Let’s start from the beginning. Back in the days when our iPhones were made of wood and powered by diesel, it made sense to place an interactive barrier on the touch screen between switching the phone on and accessing its functions. It prevented accidental unlocks, and it provided a deliberate delineation between waking the phone and using it.
The true tipping point for “slide to unlock” was the introduction of Touch ID. Instead of requiring an onscreen interaction, it became easier to press the Home button and simply leave your thumb on the button for a little longer to unlock the device. iOS 10 formalizes the completion of the transition to Touch ID. The expectation is that you have a passcode set on your device and that you’re using Touch ID; iOS 10 supports just four devices that don’t have Touch ID home buttons.
But I happen to have one of those devices: an iPad Mini 2. Because it’s an iPad — and, therefore, much larger than an iPhone — I’m far more likely to use the home button to wake it from sleep than I am the sleep/wake button. It took me a while to lose the muscle memory developed over many years to slide the home screen to unlock my iPad. I’m used to interacting with the hardware first, and the onscreen controls immediately after; iOS 10 upends all of this by requiring me to press the home button twice, followed by typing my passcode onscreen. It’s only slightly different, but it sent my head for a bit of a trip for a month or so. I still, on occasion, try to slide to unlock, and curse myself for doing so.
The lock screen interaction feels much better on my iPhone 6S for two reasons. First, my iPhone has the benefit of having the best Touch ID sensor Apple has ever shipped, which means that pressing once on the home button and leaving my finger on the sensor for a bit longer unlocks my phone — almost exactly the same interaction as before, with no additional friction. That’s something that you’ll find across most of the devices compatible with iOS 10, as most of those devices have Touch ID.
The second reason for the vastly improved lock screen experience on my iPhone is that it supports the new Raise to Wake feature. The Windows 10 phone I used for a week earlier this year had a similar feature, and I loved it then; I’m thrilled to see it come to the iPhone. Raise to Wake allows you to pick up your iPhone or pull it out of your pocket to switch on the screen. Awaiting notifications appear with a subtle zoom effect, as though they’re bubbling onscreen from the ether. I suspect a lot of lessons learned from developing the wrist activation on the Apple Watch went into building Raise to Wake, and it shows: I’ve found it to be extremely reliable when pulling my phone out of its pocket, and only a little less so when lifting my phone off a desk.
Throughout this section, I’ve been using the word “unlock” to refer to the same action it’s always been used for: going from the lock screen to the home screen. But this isn’t quite correct any more because it’s now possible to wake and unlock an iOS device without moving away from the lock screen. This is useful for, say, viewing private data in widgets, but it leads to a complication of terminology — when I say that I unlocked my phone, did I go to the home screen or did I remain on the lock screen?
To clarify the terminology, Apple is now referring to the once-“unlocking” act of going to the home screen as “opening” an iOS device. That makes a lot of sense if you think of your iPhone as a door; as I don’t have a Plus model, I do not.
No matter what iOS device you use, the lock screen is now even more powerful. The familiar notifications screen sits in what is the middle of a sort of lock screen sandwich, with widgets on the left, and the camera to the right.
The widgets screen is actually just a copy of the Today view in Notification Centre; it’s also available to the left of the first home screen. That makes three places where widgets are available; yet, sadly, all three are identical. It seems to me that there are differences in the way one might use widgets in each location: on the lock screen, you may prefer widgets for the weather, your calendar, and the time of the next bus; in your Notification Centre, you may prefer to see your latest Pinboard bookmarks and what the next episode of your favourite TV show will be.
Widgets and notifications now share a similar frosted glass style, but older style widgets don’t suffer from a loss of contrast — if they haven’t been updated for iOS 10, they get a dark grey background instead. Widgets, notifications, the new Control Centre, and many UI components previously rendered as rounded rectangles are now drawn with a superellipse shape, similar to an expanded version of the shape of the icons on the home screen, or the iPhone itself. It’s a shape that’s simultaneously softer-looking and more precise, without the sharp transition between the rounded corner and the straight edge. I really liked this shape when it appeared on the home screen, and to see it used throughout apps and in widgets makes the whole system feel tied-together. It feels sophisticated, and very deliberately so.
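Apple hasn’t published the exact curve it uses, but the shape family it resembles — the superellipse, or “squircle” — is easy to sketch. A minimal Python example, assuming the commonly-cited exponent of n ≈ 5; the function and parameter names are mine:

```python
import math

def superellipse(a, b, n, steps=360):
    """Generate points on the superellipse |x/a|^n + |y/b|^n = 1.

    n = 2 gives an ordinary ellipse; larger n pushes the curve toward a
    rectangle, with corners that blend smoothly into the straight-ish edges
    instead of switching abruptly from arc to line like a rounded rectangle.
    """
    points = []
    for i in range(steps):
        t = 2 * math.pi * i / steps
        # Raise |cos|,|sin| to 2/n and restore the sign to trace all quadrants.
        x = a * math.copysign(abs(math.cos(t)) ** (2 / n), math.cos(t))
        y = b * math.copysign(abs(math.sin(t)) ** (2 / n), math.sin(t))
        points.append((x, y))
    return points

points = superellipse(1.0, 1.0, 5)
```

Every generated point satisfies the defining equation, and that continuous curvature — no sudden transition between corner and edge — is exactly what makes the shape read as softer and more precise at once.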
In previous versions of iOS, the only place that widgets would appear is in the Today view, and if you have automatic app updates enabled, the only time you’d figure out if your favourite app had a new widget available was to scroll to the bottom of Today and find it. And, if you wanted to use a particular widget occasionally, but not always, you had to add and remove it from the Today view as you needed it.
In addition to the Today view in Notification Centre and on the home and lock screens, apps updated for iOS 10 also get to show their widgets in the 3D Touch menu that accompanies the app’s home screen icon. I think this is terrifically clever. It balances new functionality with the familiarity of the home screen that has remained effectively unchanged in its purpose and appearance in over nine years.
In iOS 10, the similarities in style between notifications and widgets are not coincidental: notifications have been rewritten from the ground up to allow for far more interactivity directly from the notification itself. Notifications can now show dynamic content and images, and they support live updates. Their additional functionality probably explains why they’re so huge, too: it serves as an indication that each notification is interactive. Even so, their size and heavy emphasis on structure makes for a certain lack of elegance. They’re not ugly, but there’s something about the screen to the right that’s not particularly inviting, either.
Pressing on a notification from Messages, for instance, will display the past messages from that thread directly in the notification balloon; or, if the iPhone is unlocked, you can see several past messages. This is particularly nice as a way to reestablish context when someone has replied to an hours- or days-old thread. However, there’s no way to scroll back within a notification balloon — they’re not as fully interactive as they seem to be.
This year also marks the return of my least favourite bug from iOS 8: if you’re typing a quick reply and you tap outside of the keyboard or notification balloon, you lose everything you’ve typed. This bug was fixed in iOS 8.3, but has surfaced again in iOS 10. I’ve lost my fair share of texts due to a misplaced tap; I’m not sure why this remains an issue.
Apple also provides examples of rich data within an expanded notification balloon, like showing the position of a car on a map for a ride hailing app’s notification, or updating a sports score notification as more pucks are dunked in the goalpost. Or whatever. I didn’t have the opportunity to test those features, but I’m looking forward to developers making greater use of Notification Centre as a place to complete tasks without having to open the app.
Notification Centre also borrows a trick from the Apple Watch: you can now press on the clear button in the upper-right to clear all notifications. It really is everything you could have wanted.
After a seemingly-endless climb in the number of preinstalled applications on fresh copies of iOS, finally, a plateau — this year’s total is the same as last year’s, at 33. The Home app is new, but the Game Centre app has been removed, though the framework remains.
But 33 apps is still a lot, particularly when plenty of them will be squirrelled away by most users in a folder marked “Junk”, or the more-cleverly named “Crapple”. I’d make a handsome wager that a majority of iPhone users who have changed their home screen layout have placed the Stocks app in such a folder. Many others will do the same for Calculator, Clock, Contacts, Compass, and Voice Memos. People who don’t own an Apple Watch have no need for the Watch app, so they dump it in there, too.
We don’t really want this folder full of apps we never use on our phones. What we want is to remove them from our phones entirely, never to be seen again. And that’s kind of what you get in iOS 10: just tap and hold on any icon, and you’ll see delete buttons where you’ve never seen them before. Most of the apps you’d expect to be removable are; you can even remove some you might not expect, like Music, Maps, and Mail. As a result of this broad-reaching change, the on/off switch for the iCloud Drive app has been removed as well.
Want to restore an app? That’s pretty easy, too — just open the App Store and search for it.
There are, unfortunately, a couple of caveats that come with this new power. First, it’s important to know that the app isn’t being deleted from your iPhone — it’s simply being removed from the Home screen. This is in large part for security, according to Craig Federighi:
We’re not actually deleting the application binary, and the reason is really pretty two-fold. One, they’re small, but more significantly, the whole iOS security architecture around the system update is this one signed binary, where we can verify the integrity of that with every update.
That also means that even though the default apps appear in the App Store, they won’t get individual updates.
I see this as a limitation due to the way iOS has been built for the past decade, but I don’t necessarily see it always being this way. It would require a large effort to make these core apps independent of the system, but it’s not inconceivable that, one day, updates to these apps might be delivered via the App Store instead of rolling them into monolithic iOS versions.
So if the binary isn’t being removed, what is? Federighi, again:
[When] you remove an app, you’re removing it from the home screen, you’re removing all the user’s data associated from it, you’re moving all of the hooks it has into other system services. Like, Siri will no longer try to use that when you talk and so forth.
In most cases, this works entirely smoothly. If you remove Calculator, for example, it will also be removed from Control Centre. Even if you remove Calendar, it won’t break your ability to add new events or open .ics calendar files.
But if you remove Mail, be prepared to be in for a world of hurt. Mail is the only app permitted to open mailto: links, and no other app can be set to handle those should Mail not be present. When you tap on an email address or a mailto: link, you’ll be prompted to restore Mail; and, because all of its settings are removed when the app is hidden, you’ll have to rebuild your entire email setup. If you have just one email account, you’ll probably be fine, but if you have several, it’s a pain in the ass.
In earlier betas, tapping on a mailto: link would result in a Safari error page. While the shipping solution is slightly better — insomuch as something actually happens when tapping an email link — I wouldn’t consider this resolved by any stretch. Either it should be impossible to remove Mail, or it ought to be possible to select a third-party app to handle mailto: links.
Bad news, everyone: aside from the blue-green waterfall image we’ve seen in the betas, there are no new wallpapers in iOS 10. In fact, with just fifteen still images included, and the removal of all but one of the “feather” images from iOS 9, and the loss of all but three of the ones added in iOSes 7 and 8, I think the wallpaper selection in iOS 10 might be at its most pitiful since the iPhone’s launch.
Luckily, we can set our own still images as wallpaper, but we have no way to add a custom dynamic wallpaper. And, for the third year in a row, there isn’t a single new dynamic wallpaper in iOS. I’m not sure if it’s something Apple forgot they added back in iOS 7, or if there are simply no new ideas beyond some bouncing circles. There are also no new live wallpapers.
Since its introduction in iOS 7, Control Centre has been a bit of a grab bag of quick-access shortcuts. To sort out all of its functionality, Apple created five groups of related items: toggles for system settings, a screen brightness slider, audio playback controls, AirDrop and AirPlay controls, and lightweight app shortcuts.
But having all of these controls on a single sheet is less than ideal. At a glance, there’s not quite enough space between disparate controls, which means that your thumb can easily tap the wrong thing when looking for a particular button. And that’s without adding new functionality, like a control for Night Shift — the kludgy-looking five-across row at the bottom is a clue that it doesn’t fit into the existing layout — or quick access controls for HomeKit.
Something clearly had to change, and Apple has addressed it in a rather dramatic fashion: a thorough redesign of Control Centre. It’s now split across two “sheets” — three, if you have connected HomeKit devices.
The initial response to the splitting of Control Centre, as I observed on Twitter and in industry press, was, at best, contentious. Adrian Kingsley-Hughes, in a widely-circulated ZDNet article published weeks after the first beta was released:
The iOS 10 Control Center ranks not only as one of the worst user interface designs by Apple, but as one of the worst by any major software developer.
That’s harsh — undeservedly so, I feel. In fact, I’d go so far as to say that the revised Control Centre is one of the smartest and best-considered user interfaces in iOS.
Let’s start with the actual act of splitting it up into multiple pages. As I noted earlier, there’s only so much Apple could do with the existing single-page layout. Since nobody would seriously propose that Control Centre should not gain any new functionality, there are only a few ways for it to be expanded while remaining on a single page: the controls could get smaller, Control Centre could get taller, or the available controls could be customizable.
Making the controls smaller is no good because thumbs aren’t getting any smaller. If anything, some controls — like the track position scrubber — are already far too small for my liking. Making Control Centre taller, meanwhile, isn’t good for usability either, because thumbs aren’t getting any longer.
As for customizing Control Centre, while I’ve heard rumours that it’s being worked on, it clearly hasn’t progressed to a public release yet. It’s a valid solution, but one that also has its own drawbacks and complexities — it could very quickly become a second-level home screen when the doors of customization are opened. That’s not to say it’s not a solvable problem; rather, that the solution hasn’t yet been finalized.
So: extending it over two panels makes sense. And, when you add to the mix the space requirements of several HomeKit devices, having a third page become available makes even more sense.
The beauty of this UI, though, is that it remembers which page you left it on. If you use the music playback controls as frequently as I do, that means you can turn Control Centre into an ever-present remote control for audio, with some additional controls available if, for some reason, you need to toggle WiFi.
Across the bottom of the first page of Control Centre sits a familiar array of quick actions: flashlight, timer, calculator, and camera. The icons in this array now support 3D Touch, so it’s even faster to set a timer, and you can set the flashlight to three different levels of intensity. Unfortunately, it isn’t possible to use 3D Touch on the top row of toggles. It would be helpful, for example, to be able to launch WiFi settings from its toggle, or to have the option to lock the screen in a horizontal orientation on the iPhone.
I think the large buttons for AirPlay and AirDrop are fairly nice. They look like buttons, provide the additional information required by both services in a fairly compact space, and are adequately thumb-sized. However, the gigantic Night Shift button leaves me perplexed. When I first saw it, I assumed that it would be split in half for a True Tone toggle. However, not only does the iPhone 7 not have a True Tone display, the only iOS device with one — the 9.7-inch iPad Pro — doesn’t feature this split toggle. This button is unnecessarily large, and I probably find it particularly grating because Night Shift makes my iPhone look like it has a diseased liver.
Music and News: Back to the Drawing Board
I don’t remember the last time Apple introduced an app in one version of their software, only to radically redesign it just a year later; I certainly can’t think of two instances where that’s happened. But it did, this year, with Music and News.
I’ve always had a funny relationship with the Music app on iOS. In many ways, it has long been one of the finest apps Apple has ever shipped with the platform, featuring prominently in the original iPhone demo and in plenty of ads; but, deep down, there are some baffling design and functionality choices. That imbalance reached something of a high point in iOS 8.4, when Apple Music was added to the mix. Because Apple Music, by design, blurs the delineation between music you own and music you stream, the UI decisions made to add that layer of functionality increased the complexity of Music.
News, meanwhile, was a fine app last year, but it wasn’t particularly imaginative. There was very little distinctive about it; it looked a bit generic, if anything.
Both of these apps have received a complete makeover this year. I’m bundling them together because both of them — and the new HomeKit front-end app called Home — share a common design language unlike anything else on the system. Their UIs are defined by very heavy weights of San Francisco, stark white backgrounds, and big imagery. I read an article just after WWDC — which, regrettably, I cannot find — that described these apps as having “editorial” interfaces, and I think that’s probably the most fitting adjective for this design language.
I’m struggling to understand why it’s being used in these three contexts, though — why in Music, News, and Home, but nowhere else? What do these three apps have in common? Music and News provide personalized recommendations and serve as windows into different media, but Home isn’t akin to either. Home and Music both provide direct control elements, but News doesn’t. If anyone can explain to me why these three apps get the same UI language that’s entirely different from any other app, I’d be happy to hear it.
Incongruity aside, I love the way Music and News look; Home is an app I’ve only seen in screenshots, because every time I try to launch it in my entirely HomeKit-free apartment, it just sits on this screen and spins away:
I’ve no idea what’s going on here. I don’t know if there’s simply no timeout, or maybe there is but it’s set to the year 2022, or maybe you’re just not supposed to be an idiot like me and launch Home if you don’t have any HomeKit devices. (This is also why I was unable to comment on the third, Home-centric page of Control Centre.)
That aside, I think this new design language is fantastic. It’s bold and full of character, but not in a way that feels gaudy or overbearing. They feel like glossy interactive magazines, at least on the surface. As you get deeper into each app, the big, bold titles are minimized — secondary and tertiary interfaces look pretty standard compared with the primary screens of each app.
I think it would be interesting if this design language made its way into more apps on iOS. I think Phone, Reminders, and even Mail could take to this style quite well. Of course, there’s the bigger question of how permanent this style is: it appears in one app that’s brand new, and two others that were redesigned within twelve months of their launch. That’s not to say it can’t or won’t last, but its currently limited application makes it perhaps more experimental than other design languages Apple has implemented throughout the system.
I’ve been an ardent supporter of Apple’s interface design direction over the past few years. Though some critics may bemoan a generally less expressive experience with the iconography and human interface components of many apps, I’ve found that expressiveness to surface in other ways — primarily, as it turns out, through motion and animation. From the subtle parallax effects in Weather and Photos to the new super goofy iMessage effects — more on that later — animations have become as much a part of the iOS user interface as are buttons and icons.
Unfortunately, many of the Springboard animations added in iOS 7 felt like they slowed down actually using the system. While they looked great the first few times, waiting for a long and softly-eased animation to complete for every task became, rather quickly, an irritation more than a pleasant experience. This was exacerbated by the inability to cancel any of these animations: if you opened the wrong app or folder on your phone, you had to wait for the “opening” and “closing” animations to play before you could try again. In the grand scheme of things, not the worst UI crime imaginable, but a frustration nonetheless.
In iOS 10, animations have been tweaked throughout the system to feel far faster. In fact, I’d convinced myself that all of the animations actually were faster, until I compared them to an iPhone 5S running iOS 9 and found them to be virtually identical.
But there is one very subtle change that makes a world of difference: it’s now possible to cancel animations before they complete. Tapped on Mail rather than Messages in your dock? Just hit the home button and it instantly responds. It’s the same story for folders, too; but, sadly, not for multitasking or opening Notification Centre.
Other animations still look and feel as slow as they were when iOS 7 debuted, including the icons flying in after unlocking. This animation has always grated on me. It takes about a full second to play; I wish it took about half that time because it makes the system feel much slower than it actually is.
Animations like these are most effective when they imply meaning — a sense of space, or an action. This has long been something that iOS does pretty well. For example, when you tap on a message in the Mail inbox, the whole UI slides to the left to show the message, as though it were lying just to the right of what the screen could contain. This animation is combined with the familiar right chevron (›) that’s placed in each cell, completing the spatial relationship between the inbox and each message.
In iOS 7, the rather confusing spatial relationship between Springboard elements was organized into a more straightforward hierarchy. However, some animations and interactions were not fully considered; as a result, this hierarchy did not maintain consistency. The folder animation, in particular, was confusing: tapping on it would hide all of the home screen icons and perform some kind of hyperspace zoom into the folder area.
This has been fixed in iOS 10. Folders now appear to expand and sit overtop the rest of the home screen which, naturally, blurs. This animation feels a lot faster and more logical, while preserving the order of depth established in iOS 7.
The Hidden UI
You may have noticed that many of the most exciting new features I’ve mentioned so far — like additional options in Control Centre, and expanding notifications — make heavy use of 3D Touch. Plenty more of the enhancements that I’ll chat about later do too. In iOS 10, 3D Touch has been upgraded from a curious optional extra to a functional aspect of the system, and there are some complexities that are inherent to such a shift.
Because 3D Touch adds depth to a system that is, by the nature of pixels on a piece of glass, flat, its functionality is not obvious unless you know it’s there first. Paradoxically, the expansion of 3D Touch ought to make it feel much more like an expectation than an option, but there remains a steep learning curve for users to understand that 3D Touch is not necessarily consistent between apps.
3D Touch is also a bit of an anomaly across the iOS lineup. Apple says that they have over a billion iOS devices in use around the world, but only the iPhone 6S and the soon-to-be-released 7 support it. They sold a little over 200 million iPhones in the year since the 6S was introduced, which means that a maximum of about 20% of the entire iOS base is able to use those features.
Without 3D Touch, the user experience of a feature like rich notifications really breaks down. Instead of pressing on the notification bubble, it’s necessary to swipe the notification to the left and tap the “View” button that appears to see its options. Of course, this is a compromise that will scarcely be a memory in a couple of years, but about 80% of existing iOS device users will, on launch day, have a less-than-satisfactory experience.
Of all of the images of Steve Jobs onstage at an Apple event, there are few more instantly memorable than this moment at Macworld 2007:
You might remember Jobs explaining that the keyboards “fixed in plastic” are a core issue with these phones, and that changing to a touch screen would allow for optimized controls for each application.
But one thing he didn’t mention — at least, not explicitly — is that the keyboard itself would see significant changes over the next nine versions of the operating system. From international keyboards and dictation, to the Predictive bar and case switching on the keycaps, the keyboard has come a long way since 2007. But it has always primarily been an explicit, active means of user input.
In iOS 10, the keyboard becomes a little more passive and a lot smarter by way of the QuickType bar. Instead of merely predicting what word you should type next based on what you’ve been typing so far, it now suggests inputs based on contextual prompts.
For example, if a webpage has a field for your email address, QuickType will suggest two of your email addresses. Or, if a friend texts you asking “Where are you?”, the keyboard will prompt you to send your current location.
And the improvements to the QuickType bar just keep getting better: as you’re typing, it can also suggest an appropriate emoji. Type “love” and you’ll see a heart; type “ugh”, and you’ll be prompted to add a straight-faced emoji. Unfortunately, as Apple is a strenuously PG-rated company, typing “shit” will not suggest the “pile of poo” emoji — though “crap” will — and typing “penis” won’t suggest the eggplant.
There are also some improvements to autocorrect. For users who type in multiple languages or mix languages, iOS now ostensibly handles corrections and suggestions in those other languages on the fly. For the most part, I’m monolingual, but I know a few sentences in other languages. Even after adding those languages as keyboards in Settings, I wasn’t able to get autocorrect to switch to them without manually selecting their keyboards.
The only time I ever saw a language switch in the QuickType bar without manually selecting another keyboard is when my girlfriend sent me a text reading “Yup yup yup”. QuickType decided that I should reply in what appears to be Turkish. I’ve noticed that these reviews get harder to write when I’m able to explain less about how the system works.
I’m entirely the wrong person to be trying this out; that it didn’t work for me means nothing. Maybe read Ticci’s review — that guy knows what he’s talking about.
3D Touch support has also been enhanced in the keyboard. The trackpad gesture now works far more reliably, and pressing harder on the delete key will erase text at about twice the speed.
Apple has long prided itself on standing up for the privacy of its users. They’ve fought the FBI, and have long resisted taking the relatively easy route of uploading all of their users’ data to their own servers to diddle around with in any way they want.
But there comes a time when even they will agree that it’s in the best interests of their users to detect trends, for instance, or enhance certain machine learning qualities.
In iOS 10, Apple is using a fairly esoteric field of study to enhance their machine learning capabilities. It’s called “differential privacy”, and for now they’re using it only in the keyboard, to learn new words.
You’ve probably heard a million explanations of how differential privacy works, so here’s the elevator pitch version, for reference: the keyboard tracks the words that you enter and how Autocorrect responds, and blends all of that with a lot of statistical noise. The data from you and hundreds of millions of other iOS users gets combined and the noise is averaged out, leaving certain trending words behind when they’re used by a significant number of people.
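Apple hasn’t published its exact mechanism, but the classic building block for this kind of aggregation is randomized response. Here’s a minimal, purely illustrative sketch — the function names and the choice of a coin-flip mechanism are my own, not anything Apple has described:

```python
import random
from collections import Counter

def randomized_report(typed_word, vocabulary, p_truth=0.5):
    """Report the real word with probability p_truth; otherwise report a
    uniformly random word from the vocabulary. No single report can be
    trusted to reveal what this user actually typed."""
    if random.random() < p_truth:
        return typed_word
    return random.choice(vocabulary)

def estimate_frequencies(reports, vocabulary, p_truth=0.5):
    """Invert the noise in aggregate: each word's expected count is
    p_truth * true_count + (1 - p_truth) * n / len(vocabulary),
    so subtracting the uniform noise floor recovers the true counts."""
    n = len(reports)
    counts = Counter(reports)
    noise_floor = (1 - p_truth) * n / len(vocabulary)
    return {w: max(0.0, (counts[w] - noise_floor) / p_truth) for w in vocabulary}
```

With hundreds of millions of reports, the noise averages out and genuinely trending words rise above the floor, while any individual report remains plausibly deniable.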
This isn’t a technique invented by Apple, but they’re the first to deploy it at this kind of scale. Some people doubt it will succeed, but there’s no way to tell whether it’s making a meaningful impact on our typing until iOS 10 reaches mass deployment.
As part of the iOS 10 update, Apple has redesigned most of the characters in the “Smileys & People” category, along with a bunch of others in several more categories. The redesigned characters look a little more saturated to my eye, and a tiny bit softer. I really like them.
In addition to the redesigned characters, there are also a bunch of new and more diverse emoji that depict women in professions and activities previously represented only by men, as well as more variations for family characters. This is a good step forward — showing police officers, detectives, and swimmers as men while displaying women only as brides and princesses was clearly not representative of reality.
However, unlike on MacOS, there still isn’t a means to search for emoji in iOS 10. The keyboard may provide suggestions while typing, but it’s not the same as search: there’s only one suggestion, which necessitates a more precise guess to find the right emoji. I wish I could swipe down on the emoji keyboard to see a proper search field.
Before I jump into what’s new in Siri this year, I want to elaborate a little bit on where I see Siri today. To understand the current state of Siri is to understand why there are now APIs available to third parties.
The best place to start, I think, is with Steven Levy’s August profile of Apple’s artificial intelligence and machine learning technologies:
As far as the core [Siri] product is concerned, [Eddy] Cue cites four components of the product: speech recognition (to understand when you talk to it), natural language understanding (to grasp what you’re saying), execution (to fulfill a query or request), and response (to talk back to you). “Machine learning has impacted all of those in hugely significant ways,” he says.
I think it’s critical that we understand all four of these components: how they work on their own, in sequence, and how the unreliability of any component affects Siri as a whole.
So, let’s start with the first: speech recognition. One thing that has become consistently better with Siri’s ongoing development is its ability to clearly and accurately transcribe our speech. Even just a few years ago, it. was. necessary. to. speak. to. Siri. in. a. stilted. manner. William Shatner likely had few problems with Siri, but the rest of us found this frustrating.
In 2014, Apple transitioned Siri from a backend largely reliant upon third parties to one of their own design. The result was a noticeable and, perhaps, dramatic improvement in Siri’s speed and accuracy, to the extent that Apple felt confident enough to add real-time dictation with iOS 8.
But the quality of Siri’s transcription of homonyms and more esoteric words often leaves a lot to be desired, due in part to inconsistencies with the second component cited by Cue: the interpretation of what is being said. Here’s an easily reproducible example that you can try right now: tell Siri “remind me to sew my cardigan tomorrow at noon”. Siri doesn’t understand the context of the word “sew” nor its relationship to the word “cardigan”, so it always — or, at least, every time I’ve tried this — transcribes it as “so”.
Speech recognition and interpretation are, I would argue, two parts of a single “input” step in a given Siri interaction. The next two parts — execution and response — can also be combined into a single “output” step, and I think that step has far deeper and more fundamental problems.
Nearly any frustration we have with any computer or any piece of software tends to boil down to a single truth: the output is not what we had expected, based on our input. Whether that’s because we open an app and it crashes, or our email doesn’t refresh on a timely basis, or perhaps because autocorrect inserts the wrong word every ducking time — these are regular irritations because they defy our expectations.
In many ways, Siri is truly amazing, typically answering our requests faster than we could ever type them out. But because Siri can do so much, we experiment, and rightfully expect that similar queries would behave similarly in their response.
Let’s start with a basic request — for instance, “hey Siri, how long would it take me to drive to work?” As expected, Siri will happily respond with information about the current traffic conditions and the amount of time it will take to get there. Now, change the word “drive” to “walk” in the exact same query, and witness an entirely different result:
These requests are nearly identical, but are treated vastly differently. The driving example works perfectly; the walking example doesn’t answer my question — I’m not looking for directions, I’m asking for a time estimate.
Worse still is when Siri fails to provide an answer to a specific request. Siri is akin to pushing the “I’m Feeling Lucky” button in Google: it ought to be the shortest, straightest line between asking something and getting an answer. If I ask Siri to “find me a recipe for banana bread”, I want a recipe, not a web search that gives me a choice of recipes. If I wanted options, I would have asked for them.
As Siri’s speech recognition and interpretation become more reliable, this becomes more of a problem. Based solely on anecdotal observations, I think that users will be more tolerant of an occasional mismatched result than of extra rounds of interaction with Siri, so long as it remains fast and reliable.
With that, I’d like to propose a few guidelines for what a virtual assistant ought to be and do.
Speech recognition and transcription should prioritize context over a direct phonetic interpretation.
Similar commands should perform similarly.
Returning an absolute answer should be the highest priority. A web search should be seen as a last-ditch fallback effort, and every effort should be made to minimize its use. User interaction should, overall, be minimized.
These bullet points are, I’m sure, much more difficult to implement than I’ve made them out to be. Contextualizing a phrase to interpret which words are most likely to be spoken in relation to one another requires a great depth of machine learning, for example; however, I see these guidelines as a baseline for all virtual assistants to behave predictably.
SiriKit and Intents
While Apple is busy working on the fundamental components of Siri, they’ve opened up its capabilities to third-party developers who have been champing at the bit since Siri was launched in 2011. Much like multitasking in iOS 4, the functionality of SiriKit is limited to unique scopes or domains:
These individual scopes each have their own “Intents” and vocabulary, and these can be defined by developers. For example, Uber provides different levels of ride hailing service, and they can define those levels for Siri in their app’s metadata; or, a payment service could define different methods of payment. Developers can include shorthand and alternate variants of their app’s terminology within their app’s vocabulary metadata.
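To illustrate the idea in the abstract — the vocabulary table, phrases, and function here are hypothetical, not the SiriKit API — an app’s vocabulary metadata maps shorthand and alternate phrasings onto canonical intent values, which the assistant can resolve before executing a request:

```python
# Hypothetical sketch of app-supplied vocabulary for a ride-hailing
# intent; the phrases and canonical values are invented examples.
RIDE_VOCABULARY = {
    "uberx": "UberX",
    "uber black": "UberBLACK",
    "black car": "UberBLACK",
    "pool": "UberPOOL",
}

def resolve_ride_option(utterance):
    """Return the canonical ride option mentioned in the utterance,
    preferring the longest matching phrase, or None if nothing matches."""
    lowered = utterance.lower()
    matches = [(phrase, canonical)
               for phrase, canonical in RIDE_VOCABULARY.items()
               if phrase in lowered]
    if not matches:
        return None
    # Longest phrase wins, so "uber black" beats a shorter partial match.
    return max(matches, key=lambda m: len(m[0]))[1]
```

The point of the metadata is exactly this kind of resolution: Siri doesn’t need to know what “black car” means globally, only that this app has declared it as a synonym for one of its service levels.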
All of this stuff sounds like it’s going to be a great way to expand the capabilities of Siri without Apple having to chase down individual partnerships. Unfortunately, these precise app categories tend to be dominated by big players who wouldn’t care to let me test their new apps. I’m looking forward to seeing what I can do with these apps once they’re released into the wild, though, because I have lots of questions.
The first thing you’ll notice about Maps in iOS 10 is that it’s received a complete makeover. With its bold card-based layout, floating controls, and Proactive suggestions, it now looks like the app Apple has wanted it to be since they dropped Google and went their own way back in iOS 6. It has taken on some of the design cues established in Music and News, though not to the same degree. I find it even easier to use than the old version, though it does retain some of its — uh — charms.
The bigger news in Maps doesn’t come from Apple, though: third-party developers can now integrate their apps directly into Maps’ UI, using Intents much like those in SiriKit. Only two kinds of Intents are available for Maps integration: ride hailing and restaurant reservations. Third-party restaurant reservation integration is only supported in Maps; Siri has supported OpenTable integration since iOS 6. It’s not a particularly glamorous integration, but it is a useful one. This could be taken one step further by adding an Intent for delivery services as well.
I couldn’t test any of the ride hailing stuff because Uber threw a hissy-fit over Calgary’s requirements that drivers carry proper licensing and that vehicles are inspected, so they don’t offer ride sharing here.
About a year ago, Benedict Evans posed an intriguing question: how many photos are being taken today? Given that there are a couple of billion smartphones in the world, it’s probably a lot:
How many more were taken and not shared? Again, there’s no solid data for this (though Apple and Google probably have some). Some image sharing is probably 1:1 for taken:shared (Snapchat, perhaps) but other people on other services will take hundreds and share only a few. So it could be double the number of photos shared or it could be 10x. Meanwhile, estimates of the total number of photos ever taken on film range from 2.5-3.5 trillion. That in turn would suggest that more photos will be taken this year than were taken on film in the entire history of the analogue camera business.
That was last year; this year, there will no doubt be a far greater number of photos taken due to the continued proliferation of smartphones worldwide. We all know this, and we all know how difficult it has become to manage those photos.
A few months before Evans wrote that article on photos, Google tried to combat this problem by introducing Photos, to much critical and public acclaim. Instead of worrying about storing those photos on your device — a worry that will be present so long as companies like Apple continue to include inadequate local storage in their smartphone lineups — Google reasoned that it would make more sense to allow users to stash their photos in a cloud storage system. Not only does this free up local space on the device, it allows photos to benefit from the ridiculous redundancy built into Google’s cloud storage facilities.
To sweeten the deal, Google built software that would analyze the photos as they’re stored in Google Photos. It could identify objects and people within photos, which means that finding that one photo of your dog licking a burger became as quick and easy as a Google search.
By all accounts, Google Photos has been a rousing success; it became quite clear in the intervening year that these kinds of improvements were expected from Apple, too. But this intelligence has long been presumed to require a sacrifice on user privacy — a sacrifice that has seemed unlikely for Apple to make. Om Malik wrote what is perhaps the most cogent explanation of this assumed contradiction for the New Yorker in June 2015:
The battle between Google and Apple has shifted from devices, operating systems, and apps to a new, amorphous idea called “contextual computing.” We have become data-spewing factories, and the only way to make sense of it all is through context. Google’s approach to context is using billions of data points in its cloud and matching them to our personal usage of the Google-powered Web; Apple’s approach is to string together personal streams of data on devices, without trying to own any of it. If Google is taking an Internet approach to personal context, then Apple’s way is like an intranet.
From the surface, Google’s approach seems superior. Understanding context is all about data, and the company is collecting a lot more of it. Apple has your phone; Google has access to almost everything. […]
And one day, I wouldn’t be surprised to see an executive from Apple come onstage at the Moscone Center, take a page from its rival, and say that they’re doing the same things with your data that Google is.
That day came, kind of, about a year later, on June 13, 2016. An Apple executive — Craig Federighi, naturally — took the stage at the Bill Graham Auditorium to explain that they’re not doing the same things with your data that Google is. Apple claimed that they were able to do the same kind of facial and scene recognition on your photos entirely locally.
That sounds pretty compelling: a marriage of privacy and capabilities. All of the power, yet none of the drawbacks. So: has it worked?
Well, there are lots of criteria one could use to judge that. At its core, it’s a simple question of Can you search for objects and see photos you’ve taken of them?, to which the answer is “yes, probably”. But it would be disingenuous and irresponsible of me to view Photos in a vacuum.
While this won’t be a full Apple Photos vs. Google Photos comparison, it seems appropriate to have at least some idea of a benchmark. With that in mind, I uploaded about 1,400 photos that I’d taken through June and July to Google Photos; those same photos also live in my iCloud Photo Library. But, before we get to that, let’s see what Photos has to offer on its own terms.
Upon updating to iOS 10, your existing photo library will be analyzed while your iPhone or iPad is plugged in and locked. How long this will take obviously depends on how many photos you have — my library of about 22,000 photos took a few days of overnight analysis to complete. However, new photos taken on an iPhone are analyzed as they make their way into the Camera Roll. Apple says that they make eleven billion calculations on each photo to determine whether there’s a horse in it. For real:
In fact, we do 11 billion computations per photo to be able to detect things like there’s a horse, there’s water, there’s a mountain.
And those calculations have determined that there are, in fact, horses in some of my photos:
There are lots more searches that are possible, too — an article from earlier this year by Kay Yin pegs the total number of scenes and objects that Photos will detect at 4,432. Yin told me in an email that they acquired the list through an analysis of Apple’s private PhotoAnalysis.framework. It includes everything from the obvious — food, museums, and musical instruments, to name a few — to the peculiar and surprising: ungulates, marine museums, and tympans all make an appearance on the list.
Weirdly, though, some searches still return zero results in Photos. You can’t search for photos by type — like screenshot, panorama, or Live Photos — nor can you search by camera brand or model. This information is embedded in pretty much any photo’s metadata, but is not indexed by Photos for reasons not entirely clear to me. Perhaps very few people will search for photos taken on their Canon DSLR, but it doesn’t make much sense to me to not allow that. It feels like an artificial limitation. The only way to find Live Photos within your library on your iPhone is still to thumb through each photo individually until you see some sense of movement.
For the myriad keywords Photos does support, however, there’s plenty of good news. After it has finished analyzing and indexing the photo library, searches are fast and respectably accurate, but it’s not perfect. In that “horse” screenshot above, you can see a photo of a dragonfly, for instance. A search of my library for “receipts” shows plenty of receipts that were indexed, but also some recipes, a photo of a railway timetable, and a photo of my wristband from when I was in the hospital a couple of years ago. In general, it seems to err in favour of showing too many photos — those that might be, say, a 70-80% match — rather than being too fine-grained and excluding potential matches.
Perhaps my biggest complaint with Photos’ search is that it isn’t available in the media picker. That wasn’t as big a deal in previous versions of iOS, but with the depth and quality of indexing in iOS 10, it would be really nice to be able to search within Messages or in an image picking sheet for a specific photo to send.
Apple’s facial recognition is also quite good, generally speaking. It’s reasonably adept at identifying photos of the same person when the face is somewhat square with the camera, but differences in hair length, glasses, mediocre lighting, and photos with a more sideways profile-like perspective tend to trip it up.
If you’re a completionist about this sort of thing, you’ll likely become frustrated with the most obvious mechanism for dealing with photos misidentified as being from different people. It’s not that it’s hard; it is, however, extremely tedious. To get to it, tap on the Albums tab within Photos, then tap People, then tap Add People. You’ll be presented with a grid of all of the faces identified in your photos.
The thumbnails are sorted in descending order of the number of photos found per face detected. The first few screens of these thumbnails will look fine — 21 instances of a face here, 40-odd there — but as you scroll, the number of photos per face drops precipitously. I got about a quarter of the way through my thumbnails before I started seeing instances of a single photo per detected face. You can add each to your People album, and assign a set of photos to a contact. If you’ve already started collecting photos with a specific contact in them, it will offer to merge any new photos you add to that contact.
Tapping more than one thumbnail in the Add People view will activate the Merge button in the lower-left corner. This allows you to select multiple photos featuring the same face and teach Photos that they are the same person. Unfortunately, it’s still quite laborious to sort through photos one-by-one, in some cases. To make matters worse, thumbnails will sometimes feature the faces of two people, making it difficult to determine which of them is being detected in this instance.
This is a time-consuming way of handling multiple photos from a single person. Despite its utility, I find this view to be frustrating.
Happily, there’s an easier method of teaching Photos which faces belong to which contact. If you tap on one of the faces you’ve already taught Photos about and scroll to the bottom of the screen, past the Details view — more on that later — you’ll see a Confirm Additional Photos option. Tap on it, and you’ll get a well-designed “yes” or “no” way of confirming additional photos of that person. There’s even some really pleasant audio feedback, making it feel a little bit like a game.
Unlike object detection, which seems to err on the side of including too many photos so as to miss as few potential matches as possible, facial detection errs on the side of caution. It may be much pickier about which faces are of the same person, but I haven’t seen a single false positive. If there is one, the process for disassociating a photo from a person is a bit bizarre: the button for Not This Person is hidden in the Share sheet.
But is all of this stuff as good as Google Photos? I’m not adequately prepared to fully answer that question, but here’s the short version: it seems real close.
My praise for both is similar: each successfully identified obvious objects within photos most of the time. Each also had the occasional miss — identifying an object incorrectly, or failing to identify it at all. Both struggle with plurals in searches, too: a search for “mushroom” in both apps returns photos I took of a cluster of mushrooms at the base of a tree, but searching “mushrooms” does not.
I found that both apps were similarly successful at recognizing faces, with a slight edge for Google. However, I’m not sure the pool of photos I uploaded to Google was comprehensive enough for me to figure out how good it was for recognizing a lot of different faces; my iCloud Photo Library has far more images in it with lots more faces. I’d love if someone uploaded an identical batch of tens of thousands of photos to both, and did a more thorough investigation.
My main concern with Apple’s attempt at photo recognition and categorization was that it wouldn’t be anywhere near competitive with Google’s offering. My (admittedly brief) comparison indicates that this simply isn’t the case. Apple’s offering is properly good.
But, perhaps because it’s doing all of the object and facial recognition locally, on the device, it doesn’t sync any of this stuff within iCloud Photo Library. I hope you enjoyed the tedium of assigning names to faces and confirming which photos contain each of your friends, because you’re going to have to do that for every device that you own. Have fun!
There’s also a rather nice Details view for each photo. You can tap on the Details button in the upper right or, in a completely non-obvious manoeuvre, you can scroll the photo vertically. There, you’ll see a map of where the photo was taken, any people identified within the image, and related Memories.
And I haven’t even mentioned my favourite new feature.
Teddy told me that in Greek, “nostalgia” literally means “the pain from an old wound.” It’s a twinge in your heart far more powerful than memory alone. This device isn’t a spaceship, it’s a time machine. It goes backwards, and forwards… it takes us to a place where we ache to go again.
You’ve probably read that quote in a dozen other reviews and articles about photo-related things, but there’s a good reason for that: photographs are a near-flawless conduit from your eyeballs to your heart. Trite as that excerpt may be, I couldn’t think of a better summary of Memories in iOS 10.
See, the photo analysis that iOS 10 does is not “dumb”; it doesn’t simply leave the data that it collects sitting there for you to comb through. Photos actually tries to do something with the information embedded in all of those images and videos: locations, dates, and — now, in photos — people and objects. It assesses that data looking for anything that might tie certain sets of images together, like those taken within a certain timeframe, or a set of photos from a trip abroad. It automatically groups those photos and videos together into small albums, and creates a short slideshow video from them. That, in a nutshell, is Memories.
I usually take pictures of buildings and empty fields, so my first set of Memories were not particularly inspiring. Don’t get me wrong — the albums were fine, but none were moving or emotional.
And then, one day, Photos surprised me by generating an album of photos of me and my girlfriend over the course of the past year. I guess it figured out that there are a few photos on my phone of us together, and a lot of her, and it put together an album and a video.
Despite all that I know about how automated and mechanized this stuff is, I was and remain deeply touched by the effect.
I’m trying not to sound too sappy here, but, in a way, I want to be extra sappy — I want you to know how powerful this feature can be. Sure, it’s all made by shuffling some bits around and associating whatever data it can find, but when you wake up to find a slideshow of you and your significant other over the past year, it really is pretty powerful.
I can’t confirm this, but Memories seems to prefer edited and liked photos, which makes sense — those are probably the best ones in a given set. It also incorporates Live Photos and video in a really nice way.
If you don’t like the auto-generated video, you can customize it. A theme is assigned by default, but you can pick your own, too, from options ranging from “sentimental” and “dreamy” to “epic” and “extreme”, with music and title styles to match. If you don’t like the soundtrack, just tap the adjustment button in the lower-right corner and you can pick from nearly one hundred provided songs, plus all of the ones in your library. You can also adjust nearly all attributes of the video, including title style and the precise photo selection. But I’ve found that the auto-generated Memories are, generally speaking, just fine.
The nature of this feature is such that most of the ones that it made for me are quite personal in nature — more on that in a minute. Thankfully, I do take enough photos of buildings and whatnot that it has produced a couple that I feel comfortable sharing. First up is one that was generated entirely automatically:
Here, as they say, is one I made earlier, with a few modifications:
You can imagine that if these were videos of your family or your significant other, they would be much more meaningful. I hope these examples provide you with a sense of what’s possible.
There’s something else unique about Memories, compared to — say — Timehop, or the “On This Day” posts that Facebook dredges up from years past. Because apps like these tend to use primarily public posts, they’re pre-selected based on the kind of image we project of ourselves. But we take far more photos that never get posted for all kinds of reasons.
I have a series of photos from mid-August of a trio of ducks fighting over a fish. I didn’t post them publicly because they’re of little-to-no interest to anyone, I presume, but they remind me of watching those ducks duke it out on the river. That’s a memory particular to me, and it’s the kind of thing that will one day be served up by Memories.
I will say that I’ve seen it have some problems with facial recognition when cropping portrait-oriented photos to fit within a 16:9 video frame. More than once, the people in the photos have had their heads cut off. Sometimes, it’s only my head visible, instead of the faces of those I’m with; that seems to be the inverse of the most appropriate way to crop an image — who wants to look at themselves?
Regardless, Memories is probably my favourite new feature in iOS 10’s Photos app, and maybe in the entirety of iOS 10. It’s a beautifully-executed and completely effortless high-test nostalgia delivery system.
RAW and Live Photos
iOS 10 unlocks new capabilities for developers as well. Third-party applications can now shoot Live Photos, and encode and decode RAW images. The former capability is fine — I’m happy to have it for those times I want to have a little more control over a Live Photo than the default camera app can provide.
The latter capability, though: oh my. The default camera app doesn’t encode RAW images, but the newest versions of Obscura and Manual can, and they’re totally worth buying just to try RAW shooting on your phone. It’s clear that a lot of detail is obliterated when the photo is processed and compressed as a JPEG; a RAW image is three-to-four times the file size of the same JPEG, and it’s completely lossless. The easiest way to demonstrate the difference is with a simple, unedited comparison of two photos I shot one after another:
In the image shot as a JPEG, the trees become a blocky, gestural mess. The fine lines on the outside of the green building on the left are incomplete and chunky. The whole thing looks a little more like an oil painting than a photograph.
In order to process the RAW for web use, I simply applied Photoshop’s “auto” Camera Raw setting; it may have flattened out the shadows, which is why the roof of the castle-looking building looks darker in the JPEG. But, even with that minimal processing, you can clearly see individual tree branches instead of a blocky mess. The train tracks on the overpass are clearly distinct. You can almost make out the windows on the sandstone school in the distance, in the middle of this crop. Every detail is captured far better.
Of course, the JPEG variant looks far better at the typical size of a photo viewed on Facebook, for example, where these photos typically go. And, naturally, the lack of any processing means that a full spectrum of noise is also captured; it’s not quite fine enough to be considered pleasantly grainy. But for those of us who want some more control over individual attributes, the capability of shooting RAW is extremely exciting. It presents far more flexibility, provided third-party photo editing apps jump on the bandwagon. Snapseed already handles RAW in post, and I’ve heard from the developers of several other apps that they will soon support RAW editing, too.
For an utterly unfair comparison, I shot a similar photo on my DSLR, a Canon XSi with a 12 megapixel sensor — the same resolution as the one in my iPhone. Of course, the APS-C sensor in it is far larger, and the lens I have on it — the cheap and cheerful 40mm pancake — is much nicer, with a completely different field of view from my iPhone’s. Even so:
There’s a long way for the iPhone’s camera to go to become comparable to a professional DSLR — in fact, I’m not sure it ever can compete on that level. But, with RAW shooting capabilities, I see this as one of the single biggest jumps in image quality in the history of the iPhone. It is properly, exceedingly, brilliantly good.
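For developers curious about what this looks like in code, the relevant iOS 10 addition is AVCapturePhotoOutput. Here’s a minimal sketch of the RAW capture flow — session setup and error handling elided, and the class name is mine, not Apple’s:

```swift
import AVFoundation

// Illustrative sketch only; assumes a running AVCaptureSession
// with this photo output already attached to it.
final class RawCaptureSketch: NSObject, AVCapturePhotoCaptureDelegate {
    let photoOutput = AVCapturePhotoOutput()

    func captureRaw() {
        // Not every device supports RAW capture; ask the output first.
        guard let bayerFormat = photoOutput.availableRawPhotoPixelFormatTypes.first else {
            return
        }
        let settings = AVCapturePhotoSettings(rawPixelFormatType: bayerFormat.uint32Value)
        photoOutput.capturePhoto(with: settings, delegate: self)
    }

    // iOS 10 hands the RAW data back through this delegate callback.
    func capture(_ captureOutput: AVCapturePhotoOutput,
                 didFinishProcessingRawPhotoSampleBuffer rawSampleBuffer: CMSampleBuffer?,
                 previewPhotoSampleBuffer: CMSampleBuffer?,
                 resolvedSettings: AVCaptureResolvedPhotoSettings,
                 bracketSettings: AVCaptureBracketedStillImageSettings?,
                 error: Error?) {
        guard let raw = rawSampleBuffer,
              let dng = AVCapturePhotoOutput.dngPhotoDataRepresentation(
                  forRawSampleBuffer: raw,
                  previewPhotoSampleBuffer: previewPhotoSampleBuffer) else { return }
        // `dng` is an Adobe DNG file, ready to be written to disk
        // or saved to the photo library.
    }
}
```

Apple packages the sensor data as a DNG — a standard format — which is presumably why editors like Snapseed could adopt it so quickly.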
Like most of you, I can think of few apps I use more on my iPhone than Messages — Safari, Mail, and Tweetbot are the only three that come to mind as contenders. Its popularity is a direct result of its simplicity and versatility; few apps make text-based conversation as straightforward.
Perhaps because of that simplicity, Messages has seen few updates in its lifetime. iPhone OS 3 brought MMS support, iOS 5 introduced iMessage, and iOS 8 added more messaging types and a better Details view. But the ubiquity and flexibility of Messages means that those improvements effected some of the most significant changes to the utility of any app on iOS. While Apple hasn’t released their monthly active user count for iMessage, for example, I bet that it’s one of the most popular messaging standards in the world.
But, while you’ve always been able to send text, pictures, and video through iMessage, the experience has always been rather static. Until now.
In iOS 10, you can now send handwritten and Digital Touch messages through iMessage on your iOS device. What was once a niche feature for Apple Watch owners takes very kindly to the larger displays of the iPhone and iPad, allowing you to send Snapchat-like sketches through iMessage. The heartbeat option is even available if an Apple Watch is paired, and you can mark up photos and videos right from the media picker. In some ways, it seems that Apple is still chasing a bit of Snapchat’s unique style of photo-based messaging.
The media picker has, by the way, been completely redesigned. There’s now a tiny camera preview right in the picker, alongside a double row of recent photos. Swiping right on the picker will show buttons to open the camera or show the entire Camera Roll.
This redesign is simultaneously brilliant and confusing. I love the camera preview, and I think the recent photo picker is fine. But the hidden buttons are far too hidden for my liking, and it’s somewhat easy to miss the small arrow that provides a visual clue. Once you find them, they’re easy; but I have, on more than one occasion, forgotten where the button to access the full Camera Roll picker now resides.
But what if you want to communicate in a more textual way? Well, iOS 10 has plenty of new features there. After you type out a message, you can tap on the keyboard switcher icon to replace any words in your message with emoji. Relevant words or phrases will be highlighted in orange, and tapping on the words will either suggest emoji to replace them with, or simply replace the words if only one character seems to fit the phrase. Yet, despite the extent to which I already communicate through emoji, I could never really get the hang of this feature. The QuickType bar provides good-enough emoji suggestions throughout the OS, so I never built the habit of tapping the emoji icon after typing the words I intend to replace — a flow that exists only in Messages. It simply doesn’t match the way I think when I bash out a text message. Your mileage may vary.
And then there’s the stuff I got a little obsessed with while testing iOS 10 this summer. Apple has added a whole host of weird and wonderful effects for when you send an iMessage. Press on the Send button, and a full-screen sheet will appear with a bunch of available effects. Some message effects display inline, while others will take over the entire screen the first time the message is read. Some are interactive: “Invisible Ink” requires the recipient to touch over the message to reveal it. An effect like “Lasers” turns the whole display into a nightclub, replete with bangin’ choons. What’s more, sending some messages — like “Happy Birthday” or “Congrats!” — will automatically fill the recipient’s screen with balloons.
I make no bones about how much I love these effects. I’ve only been screwing around with them for the past few months with a handful of people, but they bring so much excitement and joy to any conversation that they’re easy to over-use, potentially to the chagrin of anyone else you’re talking to.
If you hate fun, you’ll probably be disappointed that there’s no way to opt out of receiving them, with the exception of switching on the “Reduce Motion” option in Accessibility settings — but that has all sorts of other side effects, too.
I’ve also noticed that these effects don’t degrade very gracefully. Users on devices running older versions of iOS or OS X will see the message followed by a second message reading “(sent with Loud Effect)”, or whatever the effect might be.
Messages has also learned some lessons from Slack. Links to webpages now show inline previews if the message was sent from a device running iOS 10 or MacOS Sierra. These previews can be pretty clever, too: a pasted Twitter link will show the whole tweet, the user it’s from, and any attached media; and, for YouTube links, you can actually play the video inline (but, curiously, not for Vimeo links). You can also react to individual messages with one of six different emotions by tapping and holding on a message bubble, a feature Apple calls “Tapback”, or with stickers from apps — more on that in a moment. Messages containing just emoji, up to three, will display much larger. All of these relatively small tweaks combine to produce some of the most welcome improvements to an app we use dozens of times a day.
Curiously enough, Messages in iOS 10 actually loses some functionality as well. In iOS 8, Apple attempted their take on Snapchat. You’ll recall that tapping and sliding on the camera icon would immediately send a disappearing photo or video. There is no longer a way to do that in iOS 10. Not that anyone would notice, of course — as I noted at the time, that feature was often more frustrating than helpful. I don’t know anyone who used that shortcut to send photos. I suspect few will notice its removal.
But I think that everyone will notice that developers can now add to Messages in a really big way.
iMessage Apps and Stickers
For the past few releases of iOS, Apple has rapidly been opening up their first-party apps to third-party developers. From sharing sheets to Safari, extension points now exist throughout iOS to make the system vastly more capable, efficient, and personalized. And now, they’re giving developers perhaps one of the biggest opportunities in years: apps and stickers in Messages.
Stickers are probably easiest to understand because they sound exactly like what they are: packs of images — still or animated — that you can stick to messages in a conversation. If the success of stickers in every other chat app is any indication, they’re going to be one of the hottest new features for users and developers alike.
Actually, even saying “developers” is a misnomer here. Creating a sticker pack does not require writing a single line of code. The only things anyone needs to build a sticker pack are Xcode, correctly-sized artwork for the stickers in common image file formats, and an icon in different sizes, which means that virtually any idiot can make one. And I can tell you that because this idiot, right here, made a sticker pack in about ten minutes, excluding the amount of time I spent fighting with Xcode. It could scarcely be simpler: drag your sticker image assets into one tab of Xcode, drag a bunch of icon sizes into the other, and build. Unfortunately, you do have to subscribe to Apple’s Developer Program in order to test the app on your device; you can’t use a free Apple ID to build a sticker pack just for yourself.
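To give a sense of just how little is involved — this layout is illustrative, with made-up file names, not a canonical template — a finished sticker pack project is essentially just an asset catalog:

```
DuckStickers/                         ← the Xcode project
└── DuckStickers.xcstickers/          ← an asset catalog, nothing more
    ├── iMessage App Icon.stickersiconset/
    │   └── (your icon, in each required size)
    └── Sticker Pack.stickerpack/
        ├── duck-wave.png             ← stickers can be PNG, APNG,
        └── duck-angry.png               or GIF files
```

Xcode handles everything else when you build.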
As a result of this simplicity, I think a lot of artists and designers are going to have a field day making all kinds of sticker packs and selling them. Aside from free stickers — plenty of which will be from brands half-assing their marketing efforts — I’m guessing that the one-dollar price point will be the sweet spot for a typical pack.
From a user’s perspective, these stickers will be a fun addition to pretty much any conversation. They can be dropped — with a slick animation — on top of any message, or they can be sent as plain images in the chat. Some users may get frustrated that stickers typically overlap a message, which can make it hard to read. You can tap and hold on any message bubble to temporarily hide stickers and get more information about what stickers were used.
Stickers are a hoot for users and developers alike. But, of course, if you want more functionality, you’re going to have to write some code and put together an app for Messages. Apple says that developers can create all sorts of interactive environments, optimized for short-duration usage: think back-and-forth games, peer-to-peer payments, and the like.
It’s telling that they call these “iMessage Apps”, and not “Apps for Messages” or some variant thereof. While apps that confine themselves to sending just images or links will work fine over SMS, any of the truly cool interactive apps won’t work.
Apple ships two examples with iOS 10: Music and “#images”. The former, of course, lets you share your most recently-played tracks with friends. Instead of having to switch to Music from a conversation and tapping on the Share button, the track is served to you from within the thread. When combined with rich previews for Apple Music links, the app provides a totally seamless experience.
The “#images” app — I will continue to use quotation marks because I cannot stand that name — is a much-needed enhancement for those of us who like to spice up any conversation with various movie and TV references. It appears to use the same Bing-powered image search engine as Siri on the Mac, except perhaps more tailored for Messages. That is to say, it seems more GIF-oriented, and it appears to suggest images based on the conversation. There are even two buttons across the top that are pre-populated with likely search terms. Like any Messages app or sticker pack, you can tap on the arrow in the lower-right corner to expand its view, but in “#images” you can also press on any image’s thumbnail to see a full preview.
“#images” has been the bane of my friends’ discussions with me for the past few months. GIFs are way better than emoji, of course, and any opportunity to reply to a message with Homer Simpson starting a bowl of corn flakes on fire really is a tremendous ability. If I’m completely honest, though, I don’t really need every movie reference on the planet; I only need clips from the Simpsons. I do hope a Frinkiac app is on the way.
Unlike other app extensions, apps running in Messages are entirely independent, and don’t require a container app; however, developers can use their existing and new iOS apps, if they so choose.
And, like pretty much every other extension point on the system, there’s no indication when a newly-installed app includes a Messages extension. Unlike every other extension point, though, there’s a switch that allows you to automatically activate any new Messages apps. I think a similar option should be available for other extension types, like keyboards and share sheet items, as the current method of determining whether an app has installed a new extension is, at best, a matter of trial and error.
Apps and sticker packs are installed in Messages similarly, in a pseudo-Springboard sheet that appears in place of the keyboard. It behaves like Springboard, too: you can tap and hold on an icon to change the order of the apps, or tap the x in the corner to remove the extension. There’s even a row of page indicator dots across the bottom; if you install a lot of apps, it doesn’t scale particularly gracefully.
I managed to run up this tally just by downloading all of the iOS 10 updates issued to apps already on my phone. Nearly every app that I updated today included a Messages extension. Imagine what it’s going to be like if you really dive deep into the iMessage App Store.
I’m sure that these apps are going to be insanely popular. Consider, for comparison, the popularity of emoji keyboards like Bitmoji or Kimoji. Perhaps a handful of apps will take over, but I anticipate plenty of users overrunning the page dot capacity. I’m surprised that this is not handled more gracefully.
I wrote at length earlier about the interface design changes in Music and News; here, I want to spend a little more time on how those updates affect the usability of the app.
I want to start with the five tabs across the bottom. To me, their relatively subtle change has radically improved how I use Music. Previously, the tabs in Music were, from left to right: For You, What’s New, Radio, Connect, and Library.
The redesigned version of Music makes a subtle but critical change to its overall usability, simply by adjusting the five tabs that appear across the bottom: Library, For You, Browse, Radio, and Search. The implication of this change is a promotion of Library from the lowest priority item to the highest, where it belongs.
Arguably the most significant usability improvement gained from the adjusted tab bar is the promotion of Search. After all, when you’re looking for something — whether in Apple Music or your local library — you probably use search. Its previous placement, in the toolbar across the top, was awkward, primarily because results ended up in the New tab, for reasons I can’t quite explain. By simply adding a Search tab bar item, the usability of Music is far better than it used to be.
Even the rather vaguely-named Browse tab is a boon. The old New tab indicated that you’d only find new releases within; Browse, while more generic, allows Apple to add sub-categories for Curated Playlists, Top Charts, Genres, and the previously-buried Videos feature.
Meanwhile, the Connect features have been moved to the For You tab, and individual artist pages. I don’t know if that will improve its popularity among artists or users; I suspect not.
Within the Library tab, Music loses the weird drop-down picker that previously allowed you to browse by artists, genres, and so forth. This has been replaced by a simple, straightforward list, and it’s much better for it. There’s very little hunting around in this version of the Music app; most everything is pretty much where you’d expect it.
But, while Apple resolved most of the usability issues of the old app, they created a few new ones as well. “Loving” tracks and playlists — a critical component of the Apple Music experience and the only way to train the For You selection — is now a multi-step process. There is no longer a heart button on the lock screen, nor is there one on the playback screen. Instead, you need to unlock your device and tap the ellipsis icon on the playback screen, or beside the item in a list. It’s a little odd to see so much emphasis placed on the ellipsis icon; it’s a metaphor that’s more frequently used on Android.
The playback screen is, overall, probably the least-successful element of the redesigned Music app, from a usability perspective. It took me a few days with it before I realized that it was possible to scroll the screen vertically, exposing the shuffle and repeat buttons, adjustable playback queue, and lyrics, when available. There’s simply no visual indicator that it’s possible to scroll this screen. My bug report on this was marked as a duplicate, so I suppose I’m not the only person who feels this way.
There are some holes in other parts of the app as well. There’s still no option to sort albums from an artist by year, otherwise known as “the only acceptable way to sort albums by a single artist”. There’s still no way to filter or search for music by year.
If you want a list of songs from a particular artist, you’ll want to use the Songs menu item to get a giant list of all songs, sorted by artist. There’s no way to do this from within the Artists menu item, which makes no sense to me. If I’m looking for songs by an artist, I’m going to start by looking in Artists; I bet you’d probably do the same.
Aside from the occasional usability bafflement, I’m certain that this version of Music is a much more successful organization of its myriad features. I’ve said many times that my ideal streaming service would feel like a massively extended version of my local library, and Music in iOS 10 comes closest to accomplishing that, even without enabling iCloud Music Library.
So what about some of the new features in the app, like lyrics support and new recommendations in Apple Music? Well, while lyrics are ostensibly supported, I had a hell of a time finding a song where that’s the case. After trying a bunch of different tracks from lots of different genres, I found that lyrics were shown for tracks from Drake’s “Views” album and Kanye West’s “The Life of Pablo”.
Lyrics only display for Apple Music songs, and I do mean only. My purchased-from-iTunes copy of “Views” doesn’t have lyrics, but if I stream the same song from that album on Apple Music, it does.
However, with the notable exception of Kim Mitchell’s truly terrible “Patio Lanterns”, just being able to read the lyrics doesn’t usually communicate the intent or meaning of a song. For that, you need something like Genius — not to be confused with the iTunes feature of the same name. I think it would be more useful if there were some substance behind displaying the lyrics.
While there’s no indication that adjustments have been made to the recommendation algorithms that power For You, there are two playlists that are served up on a weekly basis: the Favourites Mix, refreshed every Wednesday, and the New Releases Mix, refreshed every Friday. Unlike most of the pre-made playlists on Apple Music, these are algorithmically generated, but I’ve found them to be pretty good.
The first New Releases Mix that I got was a decent sampler plate of new music that I generally enjoyed. Of the 25 tracks in the first mix, I’d say that only two or three were misses. From my experience with both Apple Music and Spotify, that success rate compares favourably to the Discover Weekly mix in the latter service. Apple’s mix is, however, focused entirely on new releases a user might like; there doesn’t appear to be an automatically-generated playlist in the vein of Spotify’s.
All told, I think this iteration of Music is markedly more successful than the outgoing one, which grated on me more and more as the year wore on. I haven’t felt that with this version. Though it’s not yet perfect, it’s far better than its predecessor.
After launching with a robust set of initial features last year, the overarching concept of Continuity has been updated to support a frequently-requested feature: a universal clipboard.
The idea is simple: copy a piece of text, an image, a URL, or whatever on one device you own, and paste it on a completely different device. Apps like Copied, CloudClip, and Command-C filled the gap left by the lack of official support for this functionality.
But, now, there is official support for clipboard sync, and it’s pretty good for my very basic uses. Like Handoff, Apple says that the clipboard is encrypted and synced entirely locally over WiFi and Bluetooth; your iCloud account is only used to ensure that it’s you copying or pasting on both devices.
As I said, my use-case for this is extraordinarily simple. Sometimes, I’ll have read something on my iPhone and want to link to it within a post. I can either open a new Safari tab on my iPad or Mac and wade through my iCloud Tabs until I find the right one, or I can just copy it on my iPhone and paste it on my other device. Or, sometimes, I’ll have something copied in a Mac-only app like TextMate that I can paste into an email message on my iPad. It’s pretty cool.
There’s no visual indication of when an item is available to paste from a different device, but I haven’t yet run into an instance where I’ve pasted the entirely wrong thing. The lack of an indicator strikes me as very deliberate: Universal Clipboard isn’t something you should have to think about — it “just works”.
Universal Clipboard lacks some of the more power-friendly options of the third-party apps mentioned earlier, like clipboard history and saved snippets, but it does a perfectly cromulent job fulfilling a basic use case for clipboard syncing. It works pretty well for me.
Apple Pay was only introduced in Canada this June, but I’ve already become accustomed to paying for all kinds of stuff with it. Most payment terminals have supported tap-to-pay for a long time, but Apple Pay is more secure and, from my experience, faster and more reliable.
That it’s come to the web is a good thing; that I no longer have to use PayPal or submit my credit card details to an online store is a very good thing.
None of the places I typically order online from have yet added Apple Pay to their checkout options, so I tried using Stripe’s Apple Pay demo and it seemed to work pretty well.
I’ve dumped this feature into the Continuity section because Apple Pay is also supported in Safari on MacOS Sierra. You just start the purchase on your Mac, and authenticate on your iPhone. Strangely, though, this same cross-device functionality isn’t supported to authenticate an iPad purchase using an iPhone.
After several years of minor adjustments tailored for the iPad, iOS 9 brought serious systemwide improvements: proper multitasking, keyboard shortcuts, ⌘-Tab application switching, and lots more. iOS 9 was the significant boost the iPad needed, particularly since there are now two iPads named “Pro”. I, perhaps naïvely, thought that this was a renaissance for the iPad — a wakeup call for a platform that should feel like its own experience.
I was wrong. iOS 10 brings very few changes specifically designed for the iPad, and a whole lot of changes that feel like they were scaled-up from the iPhone.
The scaled-up Today view of Notification Centre pulls off an amazing visual trick: it manages to look both cramped and inefficient in its use of the iPad’s larger display:
Control Centre also looks a bit odd on the iPad’s larger display, featuring gigantic buttons for AirDrop, AirPlay Mirroring, and Night Shift:
Half the space in the second Control Centre tile is occupied by a playback output destination list:
Instead of a list of output devices — something which I doubt most users will be adjusting with enough frequency to merit its equal priority to playback controls — why not show the “What’s Next” queue or additional Apple Music controls?
There are plenty of instances where the iPad simply doesn’t utilize the available screen space effectively. While not every pixel should be filled, shouldn’t playlist descriptions in Apple Music expand to fill the available space?
Shouldn’t I see more than this in my library?
Shouldn’t the timer feel a little more deliberate?
Then there are the aspects of the iPad’s interface and features that remain, inexplicably, unchanged. The 12.9-inch iPad Pro retains the 5 × 4 (plus dock) home screen layout of the original iPad. The Slide Over drawer still shows the same large rounded cells around each icon, and its lack of scalability has become more apparent as more apps support Slide Over.
That’s not to say that no new iPad features debuted this year. You can now run two instances of Safari side-by-side on iPads that support multitasking; however, it is the only app where this is possible.
The limitations created by the iPad’s form factor — a finger-based touch screen with a bare minimum of hardware buttons — have required ingenious design solutions for common tasks. Windowing, complex toolbars, and other UI components taken for granted were, and are, either impossible or impractical on the iPad. Similar problems were solved when the iPhone was developed. But, while there’s a good argument for retaining some consistency with the iPhone, the iPad is its own experience, and it should be treated as such.
There’s a glimmer of hope for iPad users: Federico Viticci has heard that more iPad-specific features are “in the pipeline”, presumably for an iOS 10.x release. Their absence from the 10.0 release is, however, noteworthy.
As ever, in addition to the big headlining updates to iOS, there are a bunch of smaller updates to all sorts of apps. This year, though, there’s a deep-level system update as well.
Of all of the enhancements rumoured to be coming to iOS, not one revolved around a new file system. Yet, that’s one of the things that’s coming to iOS 10. It’s not finished yet, and it is projected to arrive as part of a system update next year, but it sounds like a thoroughly modern, fast, and future-friendly file system. I’m nowhere near intelligent enough to fully understand APFS, as it’s called, but Lee Hutchinson of Ars Technica wrote a very good early look at it back in June that you should read.
Phone and Contacts
It’s telling that I’ve buried what is ostensibly the core functionality of a smartphone — that is, being a telephone — all the way down here. We don’t really think of our iPhone as a telephone; it’s more of an always-connected internet box in our pants. But that doesn’t mean that its phone functions can’t be improved.
For those of you who use alternative voice or video calling apps, there’s a new API that allows those apps to present an incoming call screen similar to the default Phone app’s. And there’s another API that allows third-party apps to flag incoming phone calls as spam and scams. I don’t get that many unsolicited calls, blessedly, but I hope that apps like these can help get rid of the telemarketing industry once and for all.
The phone app also promises to transcribe voicemails using Siri’s speech-to-text engine. My cellular provider doesn’t support visual voicemail, so I wasn’t able to test this feature.
In addition, Apple says that you can set third-party apps as the primary means of contact for different people.
Mail has an entirely new look within a message thread, with a conversational view very similar to that of Mail on the Mac. This makes a long conversation much easier to follow, and allows you to take action on individual messages by sliding them to either side.
Additionally, there’s a new button in the bottom-left of message lists to filter which messages are shown. After tapping the filter button, you can tap the “filtered by” text that appears in the middle of the bottom toolbar to select filtering criteria; the default is unread messages across all inboxes.
This filter is similar to the Unread inbox introduced in iOS 9; but, with the ability to define much more stringent criteria, it’s far more powerful. I’ve been using it for the past couple of months to try to tame my unruly inbox with an unread count that keeps spiralling out of control.
Mail also offers to unsubscribe you when it detects a message was sent from a mailing list. That can save a great deal of time hunting through the email to find the unsubscribe link and then, inevitably, being asked to fill out a survey or getting caught in some other UI dark pattern. I used it on a couple of newsletters and it seems to have worked with just a tap.
Safari now supports “unlimited” tabs, up from 36 in iOS 8, and 24 prior to that. I checked this claim out, and got to 260 open tabs before I got bored. Of course, not all those tabs will be in memory, but they’ll be open for your tab hoarding pleasure. In addition, a long-press on the tab button in the lower-right lets you close all 260 of those tabs at once, should A&E show up with a film crew.
Ever since iOS 8 allowed third-party developers to add actions from their apps to the Share sheet, I’ve wanted to see this feature enabled systemwide for pretty much anything I could conceivably share. As you can imagine, I save a lot of links to Pinboard and Instapaper. I also subscribe to a bunch of great newsletters, like NextDraft and CNN’s excellent Reliable Sources. But, while third-party email apps have long allowed you to share the contents of emails using the system Share sheet, the default Mail client hasn’t.
It’s a similar story in Safari: you’ve been limited to sharing just the frontmost tab’s URL using the Share sheet, and no other links on the page.
Previously, touching and holding on a link would pull up a series of options, one of which was to send the link to your Reading List. Now, for those of us who don’t use Safari’s Reading List, there’s a far better option available: touching and holding on any link will display a “Share…” option, which launches the system Share sheet. It’s terrific — truly, one of my favourite details in iOS 10.
As previewed in the week prior to WWDC, this year’s round of major updates brings with it some changes to the way the App Store works. Most of these changes have trickled out in a limited way this summer, including faster review times, and a beta test of ads in the American App Store. I’m Canadian, so I’m still not seeing ads, and that’s fine with me.
One thing that wasn’t clarified initially was the handling of the new Share feature for every third-party app. At the time, I wrote:
I sincerely hope that’s not just an additional item in every third-party app’s 3D Touch menu, because that will get pretty gross pretty fast.
Well, guess what?
That’s exactly how that feature works.
It isn’t as bad as I was expecting it to be. The Share menu item is always the farthest from the app icon in the 3D Touch list, and this change means that every icon on the home screen is 3D Touch-able, even if the app hasn’t been updated in ages.
For TestFlight apps, the Share item becomes a “Send Beta Feedback” item, which is a truly helpful reminder to do that.
Improvements for Apple Watch
While I won’t be writing a WatchOS 3 review — at least, not for today — there are a couple of noteworthy changes for Apple Watch owners on the iPhone.
There’s a new tab along the bottom of the Watch app for a “Face Gallery”. In this tab, Apple showcases different ways to use each of the built-in faces and how they look with a multitude of options and Complications set. I’m not one to speculate too much, but this appears to set the groundwork for many more faces coming to the Watch. I don’t think just any developer will be able to create faces any time soon, but I wouldn’t be surprised to see more partnerships with fashion and fitness brands on unique faces.
In addition, the Apple Watch has been added to the Find My iPhone app — and, yes, it’s still called “Find My iPhone”, despite finding iPhones being literally one-quarter of its functionality. Your guess is as good as mine.
With all sorts of systemwide adjustments comes the annual reshuffling of the Settings app. This year, the longstanding combined “Mail, Contacts, Calendars” settings screen has become the separate “Mail”, “Contacts”, and “Calendars” settings screens, as it’s now possible to individually delete any of those apps.
Additionally, Siri has been promoted from being buried in “General” to a top-level item, complete with that totally gorgeous new icon. It doesn’t really go with the rest of the icons in Settings, to be fair, but it is awfully pretty.
As Game Centre no longer has a front-end interface, its options have been scaled back to the point of near-pointlessness. There is no longer an option to allow invitations from friends, nor can you enable friend recommendations from Contacts or Facebook. The only option under “Friends Management” is to remove all friends in Game Centre. There is no longer a way to find a list of your Game Centre friends anywhere on iOS or MacOS. Yet, for some reason, the framework lives on. Given these ill-considered omissions, if I were a developer, I wouldn’t necessarily build a new app that’s dependent on it. Just a hunch.
There are a bunch of little tweaks throughout Settings as well. It now warns you if you connect to an insecure WiFi network, and — for some reason — the option to bypass password authentication for free apps has been removed.
There may not be any new wallpapers in iOS this year, but a few of the system sounds have been refreshed. Instead of the noise of a padlock clicking shut, the revised lock sound is more reminiscent of a door closing. Perhaps it’s my affinity for the old lock sound, but the new one hasn’t grown on me. It feels comparatively light and thin — more like a screen door than a bank vault.
The new keyboard clicks, however, sound good enough that I kept them on for most of the beta period, and I really hate keyboard noises on smartphones. There’s a subtle difference in the noise between a letter key and a function key — such as shift or the return key — which should help those with reduced vision and those of us who type while walking.
I should say, however, that my dislike of keyboard sounds eventually caught up with me and I switched them back off. It’s a smartphone, not a typewriter.
iOS 10 is a fascinating update to me. Every other version of iOS has had a single defining feature, from the App Store in iPhone OS 2 and multitasking in iOS 4, to the iOS 7 redesign, iOS 8’s inter-app interoperability, and iOS 9’s iPad focus.
iOS 10 seems to buck this trend with its sheer quantity of updates. Developers have been asking for a Siri API for years, and it’s here, albeit in a limited form. The number of developers using the deep integrations in Messages and Maps is already higher than I had anticipated at this stage of iOS 10’s release, and I’m writing this the night before it launches.
Then there are the little things sprinkled throughout the system that I didn’t have time to cover in this review: breaking news notifications and subscriptions in individual News channels, a redesigned back button, CarPlay integrations, and so much more.
I may regularly bemoan individual parts of iOS. There are certain places where I wish Apple had made more progress than they did, but there are also aspects of the system that have been greatly enhanced in ways I’d never have expected. Saying that iOS 10 is the best release of iOS yet is a bit trite — you’d kind of hope the latest version would be, right?
But there’s so much that has gone into this version of iOS that I deeply appreciate. The experience of using it, from when I wake up in the morning to when I go to bed at night — oh, yeah, there’s this great bedtime alarm thing built into the Clock app — is so cohesive that I can’t imagine going back to a previous version of iOS, or to a different platform. It feels like a unified, integrated system across all of my devices. Some people may call this sort of thing “lock-in”, but I like to think of it as a form of customer appreciation.
Whatever the case, I highly recommend updating to iOS 10 as soon as you can. I hope it surprises and delights you the way it did for me the first time someone sent me an iMessage with a goofy effect, or the last time it made a Memories slide show for me. These are little things, and they’re mechanized and automated like crazy, but they feel alive, in a sense. iOS 10 isn’t just the best version of iOS to date; it’s the most human release.
A big thank you to Sam Gross for proof-reading this review, and to Janik Baumgartner for assisting with some iPad verification. Thanks must also go to all the developers who provided beta versions of their apps.
In iOS 10, when you enable “Limit Ad Tracking”, [the Identifier for Advertisers] now returns a string of zeroes. So for the estimated 15-20% of people who enable this feature, they will all have the same IDFA instead of unique ones. This makes the IDFA pretty much useless when “Limit Ad Tracking” is on, which is a bonus, as this is what users will expect when they enable the feature. These users will still be served ads, but it’s more likely they will not be targeted to them based on their behaviour.
Of course, there are lots of other ways nefarious ad tech companies can try to build tracking profiles, from device profiling to requiring a user account. This is a step in the right direction, though.
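The practical effect of the change described above is easy to model: every opted-out device reports the identical all-zeroes identifier, so any tracking profile keyed on the IDFA collapses into one anonymous bucket. Here’s a quick illustrative sketch — the zeroed string is the actual value returned in iOS 10, but the helper function is hypothetical, just one way an ad SDK might handle it:

```python
# With "Limit Ad Tracking" enabled in iOS 10, the IDFA comes back as a
# string of zeroes rather than a unique per-device identifier.
ZEROED_IDFA = "00000000-0000-0000-0000-000000000000"

def usable_idfa(idfa):
    """Return the IDFA if it can identify a device, or None if it is the
    shared all-zeroes sentinel that every opted-out user reports."""
    return None if idfa == ZEROED_IDFA else idfa
```

Since an estimated 15–20% of users all map to the same sentinel value, a well-behaved ad network has little choice but to treat it as no identifier at all — which is exactly what people enabling the setting would expect.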
A week after Samsung’s “voluntary” recall of the Galaxy Note 7, customers have yet to be clearly told when and how they’ll be able to replace their devices — devices that could set cars, hotel rooms, or garages on fire — with new, working models. Samsung last week said customers would be able to exchange their phones for a refund or a new device but customers don’t have a clear idea on who to contact or when replacement devices might be available. Samsung USA has not replied to a request for comment from Gizmodo.
The US Consumer Product Safety Commission (CPSC) has now officially weighed in, urging all Galaxy Note 7 owners to power down their devices and not use them. The CPSC says it is working with Samsung to announce a formal recall soon, which would result in clearer guidelines for consumers.
This is not only frustrating for consumers who are informed and trying to get a replacement, but properly dangerous for those who haven’t heard about this problem. Samsung was very quick to respond to early reports, but instead of immediately initiating a recall through the CPSC, they’ve let this issue escalate.
I’m having a hard time writing about this without comparing it to the flurry of press coverage about “Antennagate”, followed by Apple’s quick response. That would be in bad taste.
Facebook users looking for more context on why the Sept. 11 terrorist attack anniversary was trending on the platform on Friday were, for a time, directed to a tabloid article claiming that “experts” had footage that “proves bombs were planted in Twin Towers.”
I don’t know about you, but I’m beginning to think that canning most or all of the Trending Topics editors wasn’t Facebook’s best decision, especially when it starts giving nut jobs one of the world’s largest megaphones.
A few weeks ago the Norwegian author Tom Egeland posted an entry on Facebook about, and including, seven photographs that changed the history of warfare. You in turn removed the picture of a naked Kim Phuc, fleeing from the napalm bombs – one of the world’s most famous war photographs.
Listen, Mark, this is serious. First you create rules that don’t distinguish between child pornography and famous war photographs. Then you practice these rules without allowing space for good judgement. Finally you even censor criticism against and a discussion about the decision – and you punish the person who dares to voice criticism.
As of yesterday, Facebook was insisting that this was a feature, not a bug, telling reporters that “it’s difficult to create a distinction between allowing a photograph of a nude child in one instance and not others” — even when that Pulitzer Prize-winning photograph is one of the best-known images in the world.
That’s a load of horse shit. Any human being with half a brain cell can tell the difference between child porn and a photograph of war, just like anyone can tell the difference between child porn and a parent posting a photo of their kid in a bath. There is no overlap. For Facebook to state otherwise is infuriating.
Facebook has now said that they’re going to allow people to share this photo, and they’re working on reinstating any posts that were removed. However, they haven’t committed to a change of policy so that this sort of thing doesn’t happen again.
The company decided to stop the practice because the number of phones sold during the period has become more a reflection of Apple’s supply than demand, a company spokeswoman said, when asked whether Apple will be releasing the figure.
“As we have expanded our distribution through carriers and resellers to hundreds of thousands of locations around the world, we are now at a point where we know before taking the first customer pre-order that we will sell out of iPhone 7,” Apple spokeswoman Kristin Huguet said. “These initial sales will be governed by supply, not demand, and we have decided that it is no longer a representative metric for our investors and customers.”
I bet initial supply of the iPhone 7 will be really restricted this year. If you want one on launch day, you’d better be staying up late tonight.
There may be some lingering issues to resolve with the removal of the headphone jack from the iPhone, but at least it isn’t catching on fire. The Associated Press:
U.S. aviation safety officials took the extraordinary step late Thursday of warning airline passengers not to turn on or charge a new-model Samsung smartphone during flights following numerous reports of the devices catching fire.
The Federal Aviation Administration also warned passengers not to put the Galaxy Note 7 phones in their checked bags, citing “recent incidents and concerns raised by Samsung” about the devices. It is extremely unusual for the FAA to warn passengers about a specific product.
And then there’s this report from Chris Welch of the Verge:
Shortly after returning from a Labor Day yard sale on Monday in St. Petersburg, Florida, a man looked out the window to see his family’s Jeep Grand Cherokee in flames. Nathan Dornacher would later say that he’d left his four-day-old Galaxy Note 7 charging in the vehicle’s center console moments before the fire began.
In a separate incident, a man says he believes the Note 7 is to blame for a garage fire that resulted in his house being condemned. Wesley Hartzog of Horry County, South Carolina left his Samsung phablet plugged into a wall outlet where fire investigators believe the Sunday blaze began.
While Samsung is requesting the return of the Galaxy Note 7, they haven’t yet issued an official recall. That means that customers aren’t necessarily being notified, and that’s a big problem for a phone that’s literally too hot to handle.
From the outset, Apple has positioned the Watch as a multifaceted product — complex, but not complicated. At the Watch’s introduction, Tim Cook used the rule of threes to define its purpose: a health and fitness device, a timepiece, and a means of facilitating communication.
One of the byproducts of this is a device that helps out with a bunch of tasks, but is nearly impossible to explain or demo. Any time anyone asked me for a demo of its features, I meekly fumbled with a few things that I think are cool — raise to wake, Activity, and so on — but it has never been as easy to demonstrate in a pinch as, say, an iPhone or an iPad.
After the “Spring Forward” event held last, well, spring, I found the Apple Watch interesting, but not necessarily compelling in the way it was presented:
And that brings me to the big unanswered question of today: what problems, specifically, does the Watch solve? Apple has traditionally introduced products to the market that addressed specific shortcomings in existing product categories. They have refined and defined markets time and time again. The iPod solved the question of what CDs to bring with you for your Discman, and the iPhone defined the future of the phone in myriad ways, creating the perfect convergence device. They created the perfect travelling or kick-back-on-the-couch companion with the iPad.
But the Watch doesn’t have an easy story like these. There are a bunch of ways Apple suggests you use it: you can now have your calendar chime on your Mac, your iPhone, your iPad, and your Watch at approximately the same time; you can track your workouts; you can use miniaturized versions of your iPhone apps on it; you can pay for stuff with it; and you can communicate with other Apple Watch wearers in subtle ways.
While I felt Apple did not clearly define the story of the Watch at the outset, customers and owners have helped do so over the past year and a quarter that it has been on sale.
Today’s presentation focused heavily on the health and fitness aspects of the Series 2 Watches, almost to the exclusion of the other two focus areas Cook mentioned two years ago. From built-in GPS to waterproofing for swimmers, and from a ceramic back on all models — with higher-quality lens covers for the heart rate monitor — to a partnership with Nike, this year’s Apple Watches are all about fitness.
You’ll even note that Apple has dropped the “Sport” branding on the models, choosing instead to differentiate the aluminum and stainless models purely by their case materials. The exceptions are the Hermès models, still called “Apple Watch Hermès”, and the new ceramic Apple Watch Edition. If you wanted to read, perhaps a little too much, into that, the implication is that the models in the standard Series 2 lineup are all appropriate for physical activity.
There’s no question in my mind that this is the right area for Apple to be focusing on with the Watch. Even in my own day-to-day use pattern, the thing I care most about is that I close my activity rings; I suspect many of my readers feel the same. Notifications, apps, answering calls on my wrist, checking the weather — these are all things that are very nice to have. But being mindful of my physical activity while working a sedentary office job is the reason I put my Apple Watch on every day instead of my analogue Boccia.
Fitness was, of course, not the only area Jeff Williams focused on today. He noted the enhancements coming to all Apple Watches with WatchOS 3, and I can testify to the performance improvements: it’s night and day. I don’t know where they found all that power while keeping the battery life the same, but it’s there, and it’s remarkable. In the Series 2 models and — amazingly — in the slightly-revised Series 1 models, there is now a dual-core processor which should help performance even more.
The new Edition model, meanwhile, looks really special. It’s made of ceramic that’s polished to a shine. Unlike last year’s ferociously expensive gold models, this one starts under $1,300 USD. It’s not a bargain, but I wouldn’t be surprised to see a hell of a lot more of these than I did of the gold models.1 In pictures, it looks terrific. I can’t wait to see it in person.
In a bit of a peculiar move, the price of the Watch has actually risen over last year by $20. However, they’ve softened the price bump by carrying forward the now-christened Series 1 model and giving it a faster processor, for $269. The $369 starting price of the Series 2 is entirely reasonable. For comparison, there are plenty of GPS sport watches going for well over $369, and they’re so ugly and cumbersome that you’ll only want to wear them while exercising. The Apple Watch remains a fashion accessory as much as it is a piece of technology.
The main event, as it were. Whether you got your fill of rumours years in advance or just as the keynote was starting, you were probably aware of the gist of the iPhone 7’s headlining features: industrial design that’s similar to the 6 and 6S, water resistance, dual cameras in the Plus model, and a new polished black option.
So, where to begin? I wasn’t at the keynote — my invitation must have gotten lost in the series of tubes — so I have very few first impressions beyond what I could see in the presentation and in Apple’s marketing materials. From what I can tell, the Jet Black finish is unquestionably beautiful, but is apparently more susceptible to scratching:
The high-gloss finish of the jet black iPhone 7 is achieved through a precision nine-step anodization and polishing process. Its surface is equally as hard as other anodized Apple products; however, its high shine may show fine micro-abrasions with use. If you are concerned about this, we suggest you use one of the many cases available to protect your iPhone.
I’ve seen some remarks around the web that paint this as a repeat of the iPod Nano scratching crisis of 2005, but I’m not so sure it will be. The iPod’s face was made of plastic; the iPhone is made of aluminum and glass. I’ve no doubt that some scratches will form on Jet Black iPhones, but there’s a certain wear-and-tear patina that develops. Some people are okay with that, and they’re probably people who don’t use cases. But I would wager that the overwhelming majority of iPhone owners put a case on their phone.
At any rate, I anticipate very low Jet Black stock over the next couple of months, even though it’s limited to the 128 and 256 GB configurations. If you’re aching for that colour, I hope you’re very quick with your pre-order tomorrow night.
While I’m tangentially on the topic of capacities, I should note how happy I am about the near-demise of the 16 GB configuration. The iPod Touch and the iPhone SE are the only iOS devices currently offered with a 16 GB option; even the iPads got a bump today, something which wasn’t mentioned during the keynote. I can think of few changes that so clearly merit the word: finally.
Apple’s processor team, meanwhile, has clearly been very busy. Onstage, Phil Schiller showed a slide with a Bezos chart of the iPhone’s processor speed since launch, and it looks like a hockey stick. My iPhone 6S feels ridiculously fast, but the gulf between it and the performance of the A10 in the iPhone 7 is simply gigantic.
The camera enhancements look equally impressive. Any improvement to the quality of photos in low-light situations is always welcomed, and the new cameras apparently deliver that in spades. The dual camera situation on the Plus model looks particularly intriguing, especially with the rich depth mapping capabilities coming later this year. The Plus model is simply too big for my liking, so I’ll have to wait until these improvements come to the smaller iPhone, but they do make the case for a Plus much more compelling.
And then there’s the display, and the stereo speaker setup, and the new flash, and the vastly improved front-facing camera, the bigger battery, and the solid state home button — there’s a lot in this model, even if it looks similar to its predecessor.
But, of course, there’s only one thing that anyone is talking about today. So, let’s do this.
We all knew it was coming. Ever since the rumour broke in November of last year that the iPhones 6S would be the last with the standard 3.5mm headphone jack, we knew that this would be the dominating controversy of the iPhone 7. And, like clockwork, when Schiller announced that the rumour was, indeed, true, the web erupted once again.
I can see why. The headphone jack has, as was acknowledged during the keynote, been with us for over a hundred years. That’s more than enough time for it to become entrenched — its inclusion in consumer electronics has, for a long time, been an expectation.
So why hasn’t it changed? Well, it has a lot going for it: it’s small, its cylindrical shape makes it nearly perfect from a usability perspective, and it requires no licensing or royalty fees to be paid. It has long been the right solution to connect speakers of any size to just about any device.
But the headphone jack has its flaws, too. Headphone cabling tends to be thin, which means the connectors must be robust. Headphone cables get tangled, which is a source of frustration for pretty much everyone. The port itself is extremely limited, requiring the use of a hacky method to provide remote controls.
But is that enough to replace it in a flagship product? I’m not sure, but Apple is trying to find out.
Apple has three solutions that they think span the gamut of iPhone 7 users: Lightning EarPods, wireless AirPods, and a Lightning-to-3.5mm adaptor.
Lightning EarPods are exactly what they sound like: the EarPods used by hundreds of millions of people every day, with a Lightning connector on the end instead of a 3.5mm plug. They’re offered at the same $29 price in the U.S. and, like the old EarPods, are included with every iPhone 7. I know a lot of people who use EarPods. For them, nothing changes on their iOS devices, but there appears to be no solution for those who want to connect the same headphones to their Mac. And I know a few people who do that every single day.
For iPhone owners who don’t use the included EarPods, Apple is also including a Lightning-to-3.5mm adaptor. If you prefer a particular kind of headphones that are only available with a 3.5mm connector, or if you regularly switch your headphones between your iPhone and a computer, you’ll probably get a lot of use out of this adaptor.
But Apple tends to be very deliberate when they make these kinds of choices. In fact, during the keynote, Schiller laid out the justification for removing the headphone jack:
We have a vision for how audio should work on mobile devices. […] It makes no sense to tether ourselves with cables to our mobile devices, but until someone takes on these challenges, that’s what we do.
After a bit more preamble, Schiller cut to a video introducing the new AirPods, with Jony Ive’s soothing voiceover:
We believe in a wireless future.
This wireless future is, clearly, not quite there yet. Including two means of “tethering” ourselves to every new iPhone, while making the wireless option a $160 extra, makes it feel like it’s still very early days. That’s how Ive positions it later in the video, too:
We’re just at the beginning of a truly wireless future we’ve been working towards for many years, where technology enables the seamless and automatic connection between you and your devices.
That’s a compelling argument. AirPods are, clearly, very advanced. There’s a ridiculously great pairing process that uses the flip top of the charging case to signal a connection, and they will apparently transition between different devices in a seamless fashion. Apple is also promising a reliable listening experience, completely unlike existing Bluetooth headphones. And I think it’s absolutely right that Apple goes wireless with their headphones in such a manner.
But there are things from this announcement that aren’t yet sitting right with me, and it comes down to the proprietary nature of the proposed solutions. To use a Lightning connector with MFi certification, manufacturers must pay a royalty rate of $2 per product, according to two contracts I reviewed. If the product includes only one Lightning connector, this royalty is baked into the cost of purchasing that connector. The MFi program also regulates what kind of digital-to-analogue converter must be used, some packaging specifications, and other product attributes. That may well be a good thing — I believe it is. But it also means that Apple’s review board now controls which wired headphones may take advantage of the Lightning port, and there are certain additional fixed costs for manufacturers to consider. This is absolutely their prerogative, of course: it’s their proprietary connector. But it’s one more layer of control that will necessarily limit the market of available wired headphones for the iPhone 7.
The wireless option is a bit of a mixed bag, too. Bluetooth headphones are, generally speaking, unreliable, frustrating, battery-sucking half-steps towards a wireless experience. If you want Apple’s far better experience, you’ll need headphones that use their new W1 chip. It’s based on Bluetooth, but “[covered] in a lot of secret sauce”. Three new sets of Beats headphones include it, as do the new AirPods, but it’s currently unclear whether it’s going to be made available to third-party manufacturers as well. That’s important to me because I dislike all three Beats options, and if the AirPods are of a similar size and shape to the existing EarPods — and I believe that is the case — they simply don’t fit into my ears.
Please don’t misread this as a condemnation of Apple’s decision today. I don’t think it was a mistake to prefer a wireless option, nor do I necessarily think it was a mistake for the headphone jack to be removed. I would love to try a pair of AirPods — it sounds like a truly brilliant product. But there are compounding factors, many of which have only been confirmed today, that make the transition harder for me.
But there’s one thing that seems pretty clear to me: making this transition this year paves the way for a much smoother rollout of next year’s massive iPhone redesign. There will be plenty of Lightning headphone options and, perhaps, some more wireless models that include the W1, depending on its MFi status. And the total refresh of the iPhone next year won’t be overshadowed by the controversy over its lack of a headphone jack.
Schiller also framed it as “courageous” to drop the headphone jack. I get what he meant by that, but I think “bold” or “audacious” would have been better words to use. “Courage” is the word we typically use for people battling cancer, or activists standing up to injustice. ↩︎
The included Lightning-to-3.5mm adaptor assuages these concerns, but do you expect Apple to continue including — or even offering — that adaptor in a few years? I don’t. ↩︎
Right around the time James Corden and Tim Cook were laughing it up at the irony of internet leaks about the security of the iPhone 7, Apple’s official Twitter account was busy leaking many of its new features. It appears that all of the tweets they posted were promoted, which meant that they wouldn’t appear in their timeline or to followers. Still, a little embarrassing for a company so focused on the surprise unveil.
The length of names hasn’t really been the problem; it’s the keyword spamming at the end of the name.
But the 50 character limit is still interesting to consider, so I dug through my App Store metadata cache to see just how many apps would be affected. It looks like only around 9% of apps currently have names that are longer than 50 characters (around 200k).
Of the ones that do have names longer than 50 characters — all the way up to a hard 255 character limit — many are stuffed with often irrelevant keywords in a bid to capture searches from users looking for mainstream apps.
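The filter I ran over that metadata cache is trivial to reproduce. Here’s a minimal sketch, assuming a JSON-lines dump of app records with a `name` field per entry — the schema and sample names are illustrative, not Apple’s actual metadata format:

```python
import json

LIMIT = 50  # Apple's new app name length limit

def count_long_names(lines):
    """Count how many JSON-lines records have a name over LIMIT characters."""
    total = over = 0
    for line in lines:
        name = json.loads(line).get("name", "")
        total += 1
        if len(name) > LIMIT:
            over += 1
    return over, total

if __name__ == "__main__":
    # Illustrative sample: one normal name, one keyword-stuffed name.
    sample = [
        json.dumps({"name": "Tweetbot"}),
        json.dumps({"name": "Flashlight - best free bright LED torch "
                            "light app for video photo camera"}),
    ]
    over, total = count_long_names(sample)
    print(f"{over} of {total} names exceed {LIMIT} characters")
    # → 1 of 2 names exceed 50 characters
```

Nothing clever here — just a length check — which is exactly why the keyword-stuffed names stand out so starkly in the data.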
I’m more surprised that this crap is let into the App Store. The review guidelines are full of references to this being exactly the sort of stuff they don’t want:
If your app looks like it was cobbled together in a few days, or you’re trying to get your first practice app into the store to impress your friends, please brace yourself for rejection. We have lots of serious developers who don’t want their quality apps to be surrounded by amateur hour.
This has always been in the guidelines, but appears to be rarely enforced. For proof, just search for a popular app in the Store: Instagram, Tweetbot, and NYTimes will all work. Then, just scroll right to the bottom and find yourself wading through the App Store equivalent of a back-alley flea market.
I would rather the App Store have much tighter restrictions than it currently seems to have. I know a plethora of choice is advantageous to consumers, but the minimum bar for quality should be much, much higher.
Two months after the Ohio announcement, Amazon leased 20 more jets from Atlas Air, an air cargo company based in Purchase, N.Y. Amazon has also purchased 4,000 truck trailers. Meanwhile, a company subsidiary in China has obtained a freight-forwarding license that analysts say enables it to sell space on container ships traveling between Asia and the U.S. and Europe. In short, Amazon is becoming a kind of e-commerce Walmart with a FedEx attached.
With any other company, an expansion like this would be preposterous. But Amazon’s growth has been preposterous. In 2010 its annual revenue was $34 billion; last year, $107 billion. In 2010 the company employed 33,700 workers. By this June, it had 268,900. To have enough office space for its swelling headquarters staff, Amazon has swallowed Seattle’s South Lake Union neighborhood, and it’s building three tree-filled biospheres in the city that will allow workers to take contemplative breaks, like so many Ralph Waldo Emersons in Jetsonian luxury. The company is the fifth-most valuable in the world: Its market capitalization is about $366 billion, which is roughly equal to the combined worth of Walmart, FedEx, and Boeing.
Few companies are operating at the same kind of scale as Amazon. The shipping and delivery chain is to Amazon what the supply chain is to Apple — the more they optimize and refine it, the better their company can perform. Of course, the best way for either company to do that is to own as much as possible of the chain, and it seems that Amazon wants to own every part of the process until the product gets to your door.
Over the years, I have talked with various ODM and manufacturing equipment makers, and many have told me Apple’s real secret to success is how deep the company goes into the overall manufacturing process.
Very few companies go to that level of detail when it comes to their supply chain. Besides Intel, Apple is one of the only other major tech companies I know of that will actually invent the manufacturing equipment needed to bring a new product to market. Most others accept the limitations of the equipment, and instead design the product around the things these machines can do with as little customization as possible.
I think the well-known product designer Greg Koenig would disagree with the idea that Apple outright invents the manufacturing equipment. My understanding is that their most significant manufacturing innovations involve applying techniques usually reserved for one-offs at a massive scale.
But, even though they don’t own the factories their products are built in, they often own the manufacturing technologies themselves. In a well-known story, when Apple wanted to make the battery and iSight indicator lights completely invisible until lit, they bought the company that made microscopic laser drilling instruments. What was once a technique used only at the smallest of scales was then used to make every MacBook Pro, wireless keyboard, and Magic Trackpad for years.