Gene Munster and Will Thompson of Loup Ventures:
We recently tested four smart speakers by asking Alexa, Siri, Google Assistant, and Cortana 800 questions each. Google Assistant was able to answer 88% of them correctly vs. Siri at 75%, Alexa at 73%, and Cortana at 63%. Last year, Google Assistant was able to answer 81% correctly vs. Siri (Feb-18) at 52%, Alexa at 64%, and Cortana at 56%.
Aside from HomePod’s 22 point increase due to the enabling of more domains in the past year, Alexa had the most noticeable improvement. The largest advancement came in the Information section, where Alexa was much more capable with follow-on questions and providing things like stock quotes without having to enable a skill. We also believe we may be seeing the early effects of the new Alexa Answers program which allows humans to crowdsource answers to questions that Alexa currently doesn’t have answers to. For example, this round, Alexa correctly answered, “who did Thomas Jefferson have an affair with?” and “what is the circumference of a circle when its diameter is 21?”
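For reference, that second question is simple arithmetic: the circumference of a circle is π times its diameter. A minimal Python sketch of the answer Alexa would need to produce (the `circumference` helper is my own naming, not anything from the test):

```python
import math

def circumference(diameter: float) -> float:
    """Circumference of a circle from its diameter: C = pi * d."""
    return math.pi * diameter

# The question Alexa was asked: circumference when the diameter is 21
print(round(circumference(21), 2))  # roughly 65.97
```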
Their tests indicated that all four smart speakers transcribed virtually all of their queries perfectly, and that all are improving their responses compared to previous tests. However, this applies only to smart speakers; Siri, in particular, is tuned differently for each device. In about seven months, Loup Ventures will publish the results of its digital assistant test performed on smartphones. That’s the test I’m most interested in seeing, because smartphones have a much wider reach.
Even though Siri is improving, I think it will take a lot to win back those who have become too frustrated with it to use it for anything other than setting timers. It’s the same problem Apple Maps faces: it’s much better than it used to be, but plenty of people gave up on it once they couldn’t rely on it, and the competition hasn’t stood still either.
It’s a lot to ask, but I wish Loup Ventures would release a full list of all questions asked and how each device answered.