Elizabeth Dwoskin, Will Oremus, Craig Timberg, and Nitasha Tiku, Washington Post:
Earlier this year, as Twitter raced to roll out Spaces, its new live audio chat feature, some employees asked how the company planned to make sure the service didn’t become a platform for hate speech, bullying and calls to violence.
In fact, there was no plan. In a presentation to colleagues shortly before its public launch in May, a top Twitter executive, Kayvon Beykpour, acknowledged that people were likely to break Twitter’s rules in the audio chats, according to an attendee who spoke on the condition of anonymity to describe internal matters. But he and other Twitter executives — convinced that Spaces would help revive the sluggish company — refused to slow down.
Fast forward six months and those problems have become reality. Taliban supporters, white nationalists, and anti-vaccine activists sowing coronavirus misinformation have hosted live audio broadcasts on Spaces that hundreds of people have tuned in to, according to researchers, users and screenshots viewed by The Washington Post. Other Spaces conversations have disparaged transgender people and Black Americans. These chats are neither policed nor moderated by Twitter, the company acknowledges, because it does not have human moderators or technology that can scan audio in real-time.
Abuse in and from live audio rooms is entirely predictable: they give the worst people a massive audience while remaining ephemeral. When Clubhouse — last year’s hot new thing — was just a few months old and still invitation-only, Casey Newton, then at The Verge, explored the obvious problems with keeping users in check:
And for Clubhouse, moderation issues promise to be particularly difficult — and if the app is ever to escape closed beta successfully, they will require sustained attention and likely some product innovation. Tatiana Estévez, who worked on moderation efforts at the question-and-answer site Quora, outlined Clubhouse’s challenges in a Twitter thread.
Audio is fast and fluid; will Clubhouse record it so that moderators can review bad interactions later? In an ephemeral medium, how will Clubhouse determine whether users have a bad pattern of behavior? And can Clubhouse do anything to bring balance to the age-old problem of men interrupting women?
“Is this impossible? Probably not,” Estévez wrote. “But in my experience, moderation and culture have to be a huge priority for both the founding team as well as for the community as a whole.”
Estévez in that Twitter thread:
Clubhouse has to deal with this problem both with policies (to kick off bad actors) and with culture. The culture needs to encourage listening, and valuing female voices. And to be honest, many early adopter tech men are bad listeners and don’t value hearing from women.
That was over a year ago and, perhaps unsurprisingly, Swathi Moorthy of Moneycontrol reported last week that Clubhouse still has problems with abuse.
I do not think we should expect apps like Clubhouse or Twitter Spaces to fix misogyny, but it is unethical to create spaces where it can intensify and target specific individuals. I am not arguing that it ought to be illegal to launch a new platform without a moderation solution in place, but I think it is painfully stupid to do so. I am struggling to understand what is gained by creating an audio version of 4chan, where it is even more difficult to set boundaries and expectations.
Apparently the metaverse is just around the corner.