Day: 12 October 2021

Will Knight, Wired:

The company’s cofounder and CEO, Hoan Ton-That, tells WIRED that Clearview has now collected more than 10 billion images from across the web — more than three times as many as has been previously reported.

[…]

Some of Clearview’s new technologies may spark further debate. Ton-That says it is developing new ways for police to find a person, including “deblur” and “mask removal” tools. The first takes a blurred image and sharpens it using machine learning to envision what a clearer picture would look like; the second tries to envision the covered part of a person’s face using machine learning models that fill in missing details of an image using a best guess based on statistical patterns found in other images.

I am stunned Clearview is allowed to remain in business, let alone continue to collect imagery and advance new features, given how invasive, infringing, and dangerous its technology is.

Sometimes, it makes sense to move first and wait for laws and policies to catch up. Facial recognition is not one of those times. And, to make matters worse, policymakers have barely gotten started in many jurisdictions. We are accelerating toward catastrophe and Clearview is leading the way.
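For what it is worth, the "mask removal" technique Ton-That describes is standard image inpainting: a model trained to guess hidden pixels from statistical patterns learned on many other faces. Here is a minimal, hypothetical PyTorch sketch (the architecture, shapes, and training setup are my illustration, not Clearview's system) showing why such output is a statistical guess rather than evidence:

```python
# Minimal, hypothetical sketch of image inpainting, the family of techniques
# behind "mask removal" tools: a network learns to guess hidden pixels from
# statistical patterns in other faces. Nothing here is Clearview's code.
import torch
import torch.nn as nn

class Inpainter(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder compresses the visible pixels plus a mask channel;
        # the decoder hallucinates a full image back out.
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, image, mask):
        # Zero out the covered region and tell the network where it is.
        x = torch.cat([image * mask, mask], dim=1)
        return self.decoder(self.encoder(x))

model = Inpainter()
image = torch.rand(1, 3, 64, 64)   # stand-in for a face photo in [0, 1]
mask = torch.ones(1, 1, 64, 64)
mask[:, :, 32:, :] = 0             # lower half of the face is "covered"

reconstruction = model(image, mask)

# Training would minimize reconstruction error on the hidden region over a
# large corpus of faces, so the filled-in pixels are a dataset-average best
# guess, not a recovery of the actual covered face.
loss = ((reconstruction - image) ** 2 * (1 - mask)).mean()
```

The gap between a best guess and the truth is exactly what makes marketing these tools to police so dangerous.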

One of the reasons I linked to coverage of the Ozy meltdown at the end of last month is that I was apparently one of its email subscribers, though I could not remember registering. I did notice that my earliest emails from the company were co-branded with Wired, to which I was subscribed at the time. Is that a coincidence?

Jemima McEvoy, Forbes:

Ozy Media boasts that it has more than 26 million subscribers for its newsletters, but former employees say this is another example of deceptive tactics at the embattled digital media company, with most of the email addresses on its newsletter lists either purchased, taken from other companies without their permission or added back to the lists after the recipients unsubscribed — a potentially illegal act (representatives from Ozy have not responded to Forbes’ repeated requests for comment).

[…]

Among the companies they say Ozy collectively accumulated millions of email addresses from were the McClatchy newspaper chain and the technology magazine Wired, according to two of the former employees (McClatchy and Condé Nast, the parent company of Wired, did not respond to requests for comment from Forbes).

It is not a coincidence.

Evelyn Douek, the Atlantic:

Recent Senate hearings — convened under the banner of “Protecting Kids Online” — focused on a whistleblower’s revelations regarding what Facebook itself knows about how its products harm teen users’ mental health. That’s an important question to ask. But if there’s going to be a reckoning around social media’s role in society, and in particular its effects on teens, shouldn’t lawmakers also talk about, um, the platforms teens actually use? The Wall Street Journal’s “Facebook Files” reports, after all, also showed that Facebook itself is petrified of young people abandoning its platforms. To these users, Facebook just isn’t cool.

So TikTok is not a passing fad or a tiny start-up in the social-media space. It’s a cultural powerhouse, creating superstars out of unknown artists overnight. It’s a career plan for young influencers and a portable shopping mall full of products and brands. It’s where many young people get their news and discuss politics. And sometimes they get rowdy: In June 2020, TikTok teens allegedly pranked then-President Donald Trump’s reelection campaign by overbooking tickets to a rally in Tulsa, Oklahoma, and then never showing.

TikTok is an unmitigated sensation, and the best argument available to those who insist that Facebook’s acquisitions of Instagram and WhatsApp have not meaningfully diminished competition in social media.

Its privacy and moderation policies are also worrying. Though they are similar to the policies of platforms created in the U.S. and elsewhere, TikTok’s moderators have also censored videos, and there is more (emphasis mine):

[…] The platform’s content moderation is opaque, but there are plenty of reasons to be concerned: It has suppressed posts of users deemed ugly, poor, or disabled; removed videos on topics that are politically sensitive in China; and automatically added beauty filters to users’ videos. The “devious licks” challenge, which prompted kids to remove soap dispensers in schools, might sound comical, but school administrators aren’t laughing. Connecticut’s attorney general wants to know what’s going on with the “slap a teacher” dare, although TikTok says that’s not its fault.

The last claim appears to be invented, or at least exaggerated, by media outlets that cannot get enough of the latest teen trend.

One thing that is certainly concerning is TikTok’s ability to steer users deeper into niche video categories. Like many other things here, this is not unique to TikTok — YouTube is notorious for a recommendation system that used to push users down some pretty dark paths.

An investigation by the Wall Street Journal this summer found that TikTok primarily uses the time spent watching each video to signal what users are most interested in. That weighting is clever in its simplicity. Liking, re-sharing, or commenting on something requires deliberate effort, and it is often public. Those actions tell a recommendation algorithm only which interests we are comfortable showing other people. But the amount of time we spend looking at something is a far more revealing measure of what actually captivates us.

Which is kind of creepy when you think about it.
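As a rough illustration of the Journal’s finding, here is a toy Python sketch of watch-time-weighted scoring; the categories, weights, and viewing history are invented for illustration, and TikTok’s actual system is obviously far more complex:

```python
# Toy sketch of watch-time-weighted recommendation scoring. Categories,
# weights, and history are invented; this is not TikTok's algorithm.
from collections import defaultdict

# (video_category, seconds_watched, video_length_seconds, liked)
history = [
    ("cooking", 3, 30, False),
    ("cooking", 2, 25, False),
    ("politics", 28, 30, False),   # watched nearly all of it, no like
    ("politics", 30, 30, False),
    ("pets", 15, 30, True),        # public like, but only half watched
]

scores = defaultdict(float)
for category, watched, length, liked in history:
    completion = watched / length    # private signal: how long we lingered
    scores[category] += completion
    if liked:                        # public signal: deliberate, performative
        scores[category] += 0.5      # assumed weight, weaker than dwell time

# The feed leans toward what we actually watch, not what we admit to liking.
for category, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{category}: {score:.2f}")
```

Even with an explicit like weighted in, the category quietly watched to completion wins.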

The fact that our base instincts are revealed by how often we rubberneck at the scene of a car accident will, unsurprisingly, create pathways to mesmerizing but ethically dubious videos. A Journal investigation last month found that demo user accounts that appeared to be aged 13–15 were quickly directed to videos about drinking, drug use, and eating disorders, as well as those from users who indicated their videos were for an adult audience only.

I get why this is alarming, but I have to wonder how different it is from past decades’ moral panics. Remember the vitriol directed at Marilyn Manson’s music in the late 1990s and early 2000s? Parents ought to have saved that anger for now, when it really matters. Rap and hip-hop have been blamed for all kinds of youth wrongdoing, as have MTV, television, and the internet more broadly. Is there something different about hearing and seeing this stuff in video form instead of in song lyrics or on message boards?

I think this recent Media Matters study by Olivia Little and Abbie Richards is a better illustration of the social failure of TikTok’s recommendation engine:

After we interacted with anti-trans content, TikTok’s recommendation algorithm populated our FYP [For You Page] feed with more transphobic and homophobic videos, as well as other far-right, hateful, and violent content.

Exclusive interaction with anti-trans content spurred TikTok to recommend misogynistic content, racist and white supremacist content, anti-vaccine videos, antisemitic content, ableist narratives, conspiracy theories, hate symbols, and videos including general calls to violence.

That looks like a pathway to radicalization to me, especially for users in balkanized and politically fragile regions, or places with high levels of anxiety. That seems to describe much of the world right now.