Amazon’s Halo Wristband Seems Pretty Terrible

Gadgets these days are reliably okay, at the very least, and often very good, so it’s almost newsworthy when a new gadget is crap. And that brings us to the first reviews of Amazon’s Halo, a new health and fitness wristband.

Brian X. Chen of the New York Times gave it a try and found that it overestimated his body fat:

When the Halo arrived, I installed the app, removed my T-shirt and propped up my phone camera. Here’s what happened next: The Halo said I was fatter than I thought — with 25 percent body fat, which the app said was “too high.”

I was skeptical. I’m a relatively slim person who has put on two pounds since last year. I usually cook healthy meals and do light exercises outdoors. My clothes still fit.
After reviewing my results, Dr. Cheskin jotted down my height and weight to calculate my body mass index, which is a metric used to estimate obesity. A man my age (36) with my body mass index, he said, is highly unlikely to have 25 percent body fat.

The Halo also overestimated a friend’s body fat. Dr. Cheskin said that this stat lacked context and could potentially lead people to overemphasize its importance.

Geoffrey A. Fowler and Heather Kelly of the Washington Post tested the device’s speech tone recognition feature, which is a diabolical sentence if I’ve ever heard one. Apparently, the Halo’s microphone will passively assess your speech patterns throughout the day and tell you when it thinks you’re being an asshole. But, in the words of Maciej Cegłowski, machine learning is “money laundering for bias”, so there are predictable results:

Our sample size of two isn’t sufficient to conclude whether Amazon’s AI has gender bias. But when we both analyzed our weeks of tone data, some patterns emerged. The three most-used terms to describe each of us were the same: “focused,” “interested” and “knowledgeable.” The terms diverged when we filtered just for ones with negative connotations. In declining order of frequency, the Halo described Geoffrey’s tone as “sad,” “opinionated,” “stern” and “hesitant.” Heather, on the other hand, got “dismissive,” “stubborn,” “stern” and “condescending.”

She doesn’t dispute that she may have sounded like that, especially while talking to her children. But some of the terms, including “overbearing” and “opinionated,” hit Heather differently than they might a male user. The very existence of a tone-policing AI that makes judgment calls in those terms feels sexist. Amazon has created an automated system that essentially says, “Hey, sweetie, why don’t you smile more?”

Just scrap it.