On A.I. Video Watermarks ⇥ technologymagazine.com
I recently posted a brief item about the tiny watermarks Google and OpenAI stamp on generated videos, and I wanted to expand on it:
[…] Surely this is a marvel of technical achievement. Google’s technology generates convincing video and synced audio to match. That is incredible. So, why not shout about it? Make that watermark bigger, I say, and make it say what it is — “A.I. generated by Google Veo”, or something similar.
I think I know why Google and OpenAI are not doing this, and I think you do as well. […]
Instead of leaving this hanging, let me answer why I think videos generated by Google’s Veo and OpenAI’s Sora embed subtle visual watermarks instead of more obvious ones.
This is going to sound more cynical than I think it really is, but here goes: both Google and OpenAI are happy to remove that watermark for users of their most expensive paid plans, marketed to professionals who want to use A.I. in their work. Stories like the one I linked to basically serve as advertisements for these subscriptions, even though they are also illustrations of how the technology can be abused. Someone using A.I. in a professional workflow might be less likely to use it in this manner.
But the watermark needs to be small because, otherwise, people would be less likely to use these services. Even if it were not very obnoxious, posting videos generated by these tools would feel like an advertisement if they carried a more honest disclaimer of their origins. Hence, the truly incredible feat of generating video and synced audio from a text prompt is buried and, therefore, only comes up when it is being used to further scams, fraud, hatred, or advertising campaigns.