The Information: Siri’s Flagging Quality Is the Result of ‘Turf Wars’ ⇥ macrumors.com
Normally, I would not link to something for which I have not read the source story. In this case, I will make an exception, as the original is by Wayne Ma of the Information, who has a solid track record. I hope these two summaries are accurate reflections of Ma’s reporting.
Hartley Charlton, MacRumors:
The extensive paywalled report explains why former Apple employees who worked in the company’s AI and machine learning groups believe that a lack of ambition and organizational dysfunction have hindered Siri and the company’s AI technologies. Apple’s virtual assistant is apparently “widely derided” inside the company for its lack of functionality and minimal improvement over time.
Apple executives are said to have dismissed proposals to give Siri the ability to conduct extended back-and-forth conversations, claiming that the feature would be difficult to control and gimmicky. Apple’s uncompromising stance on privacy has also created challenges for enhancing Siri, with the company pushing for more of the virtual assistant’s functions to be performed on-device.
Samuel Axon, Ars Technica:
For example, it reveals that the team that has been working on Apple’s long-in-development mixed reality headset was so frustrated with Siri that it considered developing a completely separate, alternative voice control method for the headset.
But it goes beyond just recounting neutral details; rather, it lays all that information out in a structured case to argue that Apple is ill-prepared to compete in the fast-moving field of AI.
By the sound of it, Ma is making a similar argument to the one reported by Brian X. Chen, Nico Grant, and Karen Weise in the New York Times last month. I linked to it noting two things: first, that the headline’s proclamation that Apple has “lost the A.I. race” is premature; second, that the vignette in the lede is factually incorrect. But there was a detail I think is worth mentioning in the context of Siri’s capabilities:
Siri also had a cumbersome design that made it time-consuming to add new features, said [former Apple employee John] Burkey, who was given the job of improving Siri in 2014. Siri’s database contains a gigantic list of words, including the names of musical artists and locations like restaurants, in nearly two dozen languages.
That made it “one big snowball,” he said. If someone wanted to add a word to Siri’s database, he added, “it goes in one big pile.”
This is a claim sourced to a single person, but it would not surprise me if the entire Siri backend really is a simple database of known queries and expected responses. Sources the Times reporters spoke to say this structure cannot be adapted to fit a large language model system, and so Apple is far behind.
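The “one big pile” structure Burkey describes can be sketched as a flat lookup table. This is a toy illustration only, not anything drawn from Apple’s actual implementation (the phrases, responses, and matching logic here are all invented for the example), but it shows the basic association schema: every new word or phrase lands in one global table, and anything outside it fails outright.

```python
# Toy sketch of a flat query-to-response table. Purely illustrative;
# the entries below are made up and do not reflect Apple's code.
RESPONSES = {
    "what time is it": lambda: "It is 6:02 PM.",
    "play the beatles": lambda: "Playing The Beatles.",
}

def handle(query: str) -> str:
    # Normalize the spoken phrase and look it up; any query not in
    # the table is simply unrecognized, however close it comes.
    key = query.lower().strip("?!. ")
    action = RESPONSES.get(key)
    return action() if action else "Sorry, I didn't get that."
```

A real assistant would layer normalization, synonyms, and entity lists (artists, restaurants, place names) in dozens of languages onto that same table, which is presumably what makes the “snowball” so hard to restructure around a language model.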
Maybe all that is true. But what I cannot understand is why anyone would think users would want to have a conversation with Siri, when many would probably settle for a version of that basic database association schema working correctly.
Siri is infamously frustrating to use. It has unknowable limits to its capabilities — for example, requesting a scoreboard works for some sports but not others, and asking for a translation is only available between a small number of languages. It, like other voice assistants, assumes a stage-practiced speech cadence, which impairs its usability for those with atypical speech, or for queries containing pauses or corrections. But the things which bum me out in my own use of Siri are the ways in which it does not seem to be built by the same people who made the phone it runs on.
I know reading a list of bugs is boring, so here are two small examples:
My wife, driving home, texts me while I am making dinner to ask if there is anything she should pick up. I see the notification come in on the Lock Screen, but my hands are dirty, so I say “hey Siri, reply to [her name]”. Instead of being asked “okay, what would you like to say?”, I am asked “okay, which one should I use?” with a list of the phone numbers from her contact card.
There are three things wrong with this: my query uses the word “reply”, so it should compose a message to whatever contact method she sent the message from; for several versions of iOS now, Messages has consolidated conversations from the same contact, so Siri’s behaviour should work the same way; and I am trying to send something to one of my most-messaged contacts, so it feels particularly dumb.
Siri is, as of a recent version of iOS, hardwired to associate music-related commands with Apple Music. It will sometimes ask if the user wants to use an alternative app. But this also means it does not reliably play music from a local library, and it has no awareness of whether one has turned off cellular data use for Music.
So if you are driving along, with a local library full of songs, and you ask Siri to play one of them, it will stream it from Apple Music instead; or, if you have cellular data off for Music, it will read out an error message. Meanwhile, the songs are sitting right there, in the library.
Neither of these examples, as far as I can see, should require a humanlike level of deep language understanding. In fact, both of these queries used to work as expected before becoming broken. It seems likely to me the latter was a deliberate change made to promote Apple’s services. In a similar vein, Ma, via Charlton, reports “specific decisions [were made] to exclude information such as iPhone prices from Siri to push users directly to Apple’s website instead”. If true, it is a cynical decision that has no benefit to users. The first problem I listed is simply baffling.
Perhaps these kinds of bugs would be less common if Siri were based on large language models — this is completely outside my field and my inbox is open — but I find that hard to believe. It is not the case that Siri is failing to understand what I am asking it to do. Rather, it is faltering at simple hurdles and functioning as an ad for other Apple services. I would be fine with a Siri that was simply a database, if it performed reliably and predictably, and excited for the possibilities of one fronted by more capable artificial intelligence. What I am, though, is doubtful — doubtful that basic tasks like these will become meaningfully better, instead of presenting a different set of bugs and obstacles I will need to learn.
Ma reports, via Charlton, that some people working on Siri left because there was too much human intervention. I wish it felt anything like that.