Riccardo Mori reacts to my piece on Siri and other “fuzzy” UIs:
Siri’s raison d’être is assisting, is being helpful. And indeed, Siri is the kind of interface where, when everything works, there’s a complete lack of friction. But when it does not work, the amount of friction involved rapidly increases: you have to repeat or rephrase the whole request (sometimes more than once), or take the device and correct the written transcription. Both actions are tedious — and defeat the purpose. It’s like having a flesh-and-bone assistant with hearing problems. Furthermore, whatever you do to correct Siri, you’re never quite sure whether your correcting action will have an impact on similar interactions in the future (it doesn’t seem to have one, from my experience).
What I said in my piece still holds: Siri can only learn the world's accents, speech patterns, and enunciation levels if it is used the world over, all the time. Mori is right that it asks a lot of users to keep retrying commands while Siri turns a deaf ear. Real-time dictation, added in iOS 8, lets us see where Siri is going wrong, but it remains frustrating that there is so little we can do about it.