Racist Job Hunter Is Actually an A.I. Ad for A.I. Recruitment Software ⇥ cbc.ca
David Michael Lamb, Ashley Fraser, and Andrew Kitchen, CBC News:
Most of the videos feature what looks like a white man in his 20s named “Josh,” who speaks to the camera and makes racially charged statements about immigrants and their role in the job market. In fact, “Josh” is created by AI and doesn’t exist.
[…]
It’s part of a trend known as “fake-fluencing.” That’s when companies create fake personas with AI in order to make it look like a real person is endorsing a product or service. The company in this case is Nexa, an AI firm that develops software that other companies can use to recruit new hires. Some of the videos feature Nexa logos in the scene. The company’s founder and CEO Divy Nayyar calls that a “subconscious placement” of advertising.
These videos are not massively popular on TikTok, so I am not sure how effective they are as advertising for this company. Perhaps this story is the marketing it was hoping to get. That seems desperate.
In any case, the videos still had the tiny Google Veo watermark in the lower-right corner, and that got me thinking: why are these A.I. video generators so coy about the origins of their products? Surely this is a marvel of technical achievement. Google’s technology generates convincing video with synced audio to match. That is incredible. So, why not shout about it? Make that watermark bigger, I say, and make it say what it is — “A.I. generated by Google Veo”, or something similar.
I think I know why Google and OpenAI are not doing this, and I think you do as well. In any other industry, hiding or masking the origins of a product raises suspicions. When a clothing company does not want to talk about their factories, we understand why that is a problem. It is the same thing here. Traceability matters in physical goods and digital ones, too.
Oh, and I doubt anyone is calling this trend “fake-fluencing”.