Samuel Axon, Ars Technica:
In the wake of the Apple Silicon announcement, I spoke at length with John Giannandrea, Apple’s Senior Vice President for Machine Learning and AI Strategy, as well as with Bob Borchers, VP of Product Marketing. They described Apple’s AI philosophy, explained how machine learning drives certain features, and argued passionately for Apple’s on-device AI/ML strategy.
Both Giannandrea and Borchers made an impassioned case in our conversation that the features we just went over are possible because of—not in spite of—the fact that all the work is done locally on the device.
Borchers and Giannandrea both repeatedly made points about the privacy implications of doing this work in a data center, but Giannandrea said that local processing is also about performance.
If you spend your free time reading the white papers Apple prepares about various iOS and MacOS features, you probably know a fair amount of what Giannandrea and Borchers discuss in this interview. If, like me, you do not, you will likely learn from it.
One thing Axon appears not to have asked is how Apple grades the success of a machine learning model. For example, I noticed an apparent degradation in automatic typing corrections on my iPhone that coincided with iOS 10's switch from an autocorrect engine that prioritized nearest-neighbour keys weighted by dictionary likelihood to one built on a differential privacy model. I have no idea whether there was an actual quality reduction, nor whether it was connected to the change in autocorrect engine. Perhaps the new engine is more reliable at correcting the spelling of obscure place names and public figures' names, for example. But it continues to bizarrely capitalize common words, and to change words I typed several words earlier when it decides the context has changed.
Is this an actual difference? Am I misremembering the way autocorrect used to work for me? How does Apple’s machine learning team know when a change to something as crucial to the device as the keyboard is a success?
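For what it's worth, the older heuristic I described — ranking candidate words by their proximity on the keyboard, weighted by how common they are in a dictionary — can be sketched roughly like this. Everything here is invented for illustration: the adjacency map is partial, the frequencies are made up, and this is in no way Apple's actual implementation.

```python
# A toy nearest-neighbour-plus-frequency autocorrect scorer.
# All names, numbers, and the scoring formula are hypothetical.

# Which keys sit next to each other on a QWERTY layout (partial map).
ADJACENT = {
    "q": "wa", "w": "qes", "e": "wrd", "r": "etf", "t": "ryg",
    "y": "tuh", "u": "yij", "i": "uok", "o": "ipl", "p": "o",
    "a": "qsz", "s": "awdx", "d": "sefc", "f": "drgv", "g": "fthb",
    "h": "gyjn", "j": "hukm", "k": "jil", "l": "ko",
    "z": "ax", "x": "zsc", "c": "xdv", "v": "cfb", "b": "vgn",
    "n": "bhm", "m": "nj",
}

# Toy dictionary with made-up relative frequencies.
FREQUENCIES = {"the": 1.0, "they": 0.4, "thr": 0.001, "tie": 0.2}

def proximity(typed: str, candidate: str) -> float:
    """Fraction of positions where the candidate's letter matches the
    typed letter exactly or sits on an adjacent key."""
    if len(typed) != len(candidate):
        return 0.0
    hits = sum(
        1 for t, c in zip(typed, candidate)
        if t == c or c in ADJACENT.get(t, "")
    )
    return hits / len(typed)

def best_correction(typed: str) -> str:
    """Pick the dictionary word maximizing proximity times frequency."""
    return max(FREQUENCIES, key=lambda w: proximity(typed, w) * FREQUENCIES[w])

print(best_correction("thr"))  # "thr" is a plausible slip for "the"
```

A scheme like this is easy to reason about and easy to grade offline against a corpus of known typos, which is part of why the shift to an opaque, privately-trained model makes the "how do you know it got better?" question so hard to answer from the outside.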