A.I. Mistakes Are Way Weirder Than Human Mistakes ⇥ spectrum.ieee.org
Last week, I published some thoughts on Meta’s eventual repositioning as a kind of television channel stocked with generated material for any given user:
Then TikTok came around and did away with two expectations: that you should have to work to figure out what you want to be entertained by, and that your best source of entertainment is your friend group. Meta is taking it a step further: what if the best source of entertainment is generated entirely for you? I find that thought revolting. The magic of art and entertainment is in the humanity of it. Thousands of years of culture are built on storytelling, and it is not as though that model has been financially unsuccessful. That is not the lens through which I view art, but it is obviously relevant to Meta’s goals.
I have one small addition to this: there is also humanity in the mistakes we make in creating art. Take any of the examples pointed out by Todd Vaziri recently. Notice how human they are: a period-correct prop license plate falls off, exposing the modern plate beneath it; a camera crew is visible in a reflection. These are evidence of the human hands responsible for this art.
Compare this to the mistakes common in generated A.I. images and video, which only serve to underscore the lack of human involvement. When there are errors, they are sometimes human-esque, or at least plausibly so; much of the time, however, they are unnerving.
Writing for IEEE Spectrum in January, Bruce Schneier and Nathan E. Sanders gave a pretty good overview of what this looks like from a mostly text-based perspective:
To the extent that AI systems make these human-like mistakes, we can bring all of our mistake-correcting systems to bear on their output. But the current crop of AI models — particularly LLMs — make mistakes differently.
AI errors come at seemingly random times, without any clustering around particular topics. LLM mistakes tend to be more evenly distributed through the knowledge space. A model might be equally likely to make a mistake on a calculus question as it is to propose that cabbages eat goats.
However, the ways mistakes appear in generated visual material are difficult for me to rationalize in the same way. They are unnerving in straightforward videos generated from reality-based prompts, and noticeable even in those intended to be unsettling. It is as if surrealism were expressed through a fungus-infected mind made of Play-Doh.