How Siri Learns

Stephen Nellis, Reuters:

At Apple, the company starts working on a new language by bringing in humans to read passages in a range of accents and dialects, which are then transcribed by hand so the computer has an exact representation of the spoken text to learn from, said Alex Acero, head of the speech team at Apple. Apple also captures a range of sounds in a variety of voices. From there, an acoustic model is built that tries to predict word sequences.
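To make the shape of that first step concrete: the training corpus pairs each recording with an exact hand-made transcript, spanning many speakers and accents. Here's a minimal sketch of how such paired data might be organized; the names (`Utterance`, `Corpus`, and so on) are hypothetical, and nothing here reflects Apple's actual tooling.

```python
from dataclasses import dataclass, field

@dataclass
class Utterance:
    """One recording paired with its exact hand-made transcript."""
    audio_path: str   # path to the raw recording
    transcript: str   # verbatim human transcription (the ground truth)
    accent: str       # e.g. "en-AU", "en-IN"
    speaker_id: str

@dataclass
class Corpus:
    """Training data for a new language: many speakers, many accents."""
    language: str
    utterances: list[Utterance] = field(default_factory=list)

    def by_accent(self, accent: str) -> list[Utterance]:
        return [u for u in self.utterances if u.accent == accent]

# The acoustic model then learns a mapping from audio features to likely
# word sequences, with the hand transcripts serving as ground truth.
corpus = Corpus(language="en")
corpus.utterances.append(
    Utterance("clips/0001.wav", "set a timer for ten minutes", "en-AU", "spk_17")
)
print(len(corpus.by_accent("en-AU")))  # 1
```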

Then Apple deploys “dictation mode,” its speech-to-text feature, in the new language, Acero said. When customers use dictation mode, Apple captures a small percentage of the audio recordings and anonymizes them. The recordings, complete with background noise and mumbled words, are transcribed by humans, a process that helps cut the speech recognition error rate in half.
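That “error rate” is presumably measured against the human transcripts as word error rate (WER), the standard metric: word-level edit distance divided by the length of the reference transcript. Here's a self-contained sketch of that computation; the sampled, anonymized recording in the example is, of course, made up.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over words, via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# Human transcript of a sampled recording vs. the recognizer's guess.
print(word_error_rate("play some jazz in the kitchen",
                      "play some jams in the kitchen"))  # 1/6 ≈ 0.167
```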

After enough data has been gathered and a voice actor has been recorded to play Siri in a new language, Siri is released with answers to what Apple estimates will be the most common questions, Acero said. Once released, Siri learns more about what real-world users ask and is updated every two weeks with more tweaks.
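The launch recipe, as described, amounts to shipping with pre-seeded answers to anticipated questions and then refreshing them on a regular cadence. A toy illustration of that pattern, with entirely hypothetical question strings and answers:

```python
# Hypothetical sketch: launch with answers to anticipated common questions,
# then refresh the table periodically as real-world queries roll in.
seeded_answers = {
    "what's the weather": "Fetch the local forecast.",
    "set an alarm": "Create an alarm for the requested time.",
}

def answer(query: str) -> str:
    normalized = query.lower().strip("?! .")
    return seeded_answers.get(normalized, "Sorry, I don't know that one yet.")

def biweekly_update(new_answers: dict[str, str]) -> None:
    """Stand-in for the every-two-weeks tweak cycle described above."""
    seeded_answers.update(new_answers)

print(answer("Set an alarm?"))          # matches a seeded question
biweekly_update({"play music": "Start playback."})
print(answer("play music"))             # answered after the update
```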

For the past year or two, I’ve noticed that Siri’s track record for understanding my speech has been outstanding. What remains lacklustre, increasingly, is its comprehension of my intentions. Just as with a real person, it’s not enough for Siri simply to hear well; it must do something with the information I’ve given it, and that something needs to be the right thing more often than not.