The Limits of Computational Photography petapixel.com

Will Yager, PetaPixel:

Significantly more objectionable are the types of approaches that impose a complex prior on the contents of the image. This is the type of process that produces the trash-tier results you see in my example photos. Basically, the image processing software has some kind of internal model that encodes what it “expects” to see in photos. This model could be very explicit, like the fake moon thing, an “embodied” model that makes relatively simple assumptions (e.g. about the physical dynamics of objects in the image), or a model with a very complex implicit prior, such as a neural network trained on image upscaling. In any case, the camera is just guessing what’s in your image. If your image is “out-of-band”, that is, not something the software is trained to guess, any attempts to computationally “improve” your image are just going to royally trash it up.

This article arrived at a perfect time as Samsung’s latest flagship is once again mired in controversy over a Moon photography demo. Marques Brownlee tweeted a short clip of the S23 Ultra’s 100× zoom mode, which combines optical and digital zoom and produces a remarkably clear photo of the Moon. As with similar questions about the S21 Ultra and S22 Ultra, it seems Samsung is treading a blurry line between what is real and what is synthetic.

Samsung is surely not floating a known image of the Moon over its spot in the sky when you point the camera in its direction. But the difference between what it can see and what it displays is also not the result of merely increasing the image’s contrast and sharpness. If you look at a side-by-side comparison of last year’s Samsung S22 Ultra and an iPhone 14 Pro — which the photographer claims “look the same” but do not in any meaningful sense — the Ultra is able to pull stunning detail and clarity out of a handheld image. Much of that can be attributed to the S22 Ultra’s 10× optical zoom, which outstrips the iPhone’s 3× zoom. Another reason it is so detailed is that Samsung specifically trained the camera to take pictures of the Moon, among other scenes. The Moon is a known object with limited variability, so it makes sense to me that machine learning models would be able to figure out which landmarks and details to enhance.
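To make the idea of a trained prior concrete, here is a deliberately crude sketch in NumPy. It is not Samsung’s pipeline, or any real camera’s: the “prior” is literally a stored high-detail Moon, and the “enhancer” simply paints that expected detail onto a blurry capture. The point it illustrates is Yager’s: the same step that sharpens a real Moon shot will happily invent craters on anything that is merely Moon-shaped.

```python
import numpy as np

# Toy sketch only -- not how any real camera works. The "learned prior" is a
# stored, high-detail reference of the Moon; the enhancer paints that detail
# onto whatever blurry capture it is handed.

rng = np.random.default_rng(0)

def box_blur(img, k=7):
    """Crude blur standing in for a tiny sensor at extreme zoom."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def capture(scene, noise=0.03):
    """Simulate what the camera actually records: blur plus sensor noise."""
    return np.clip(box_blur(scene) + rng.normal(0, noise, scene.shape), 0, 1)

def enhance(shot, prior):
    """Add back the high-frequency detail the prior *expects* to be there.

    If the scene really is the Moon, this looks like recovered sharpness; if
    the scene is merely Moon-shaped, the detail is hallucinated, not recovered.
    """
    expected_detail = prior - box_blur(prior)  # what the prior assumes blur removed
    return np.clip(shot + expected_detail, 0, 1)

# A fake 64x64 "Moon": a bright disc with random texture standing in for craters.
yy, xx = np.mgrid[0:64, 0:64]
disc = ((yy - 32) ** 2 + (xx - 32) ** 2 < 24 ** 2).astype(float)
moon_prior = disc * (0.7 + 0.3 * rng.random((64, 64)))  # the model's stored expectation

real_moon_shot = capture(moon_prior)   # in-distribution scene
featureless_ball = capture(disc * 0.85)  # out-of-band: a plain white disc

print("enhanced real Moon vs prior, mean abs error:",
      np.abs(enhance(real_moon_shot, moon_prior) - moon_prior).mean().round(3))
print("detail painted onto featureless disc:",
      np.abs(enhance(featureless_ball, moon_prior) - featureless_ball).mean().round(3))
```

Running it, the first number is small because the prior genuinely matches the scene; the second shows how much texture gets painted onto a blank disc that never had any.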

How much of that is the actual photo and how much you might consider synthesized is a line I think each person draws for themselves. I think it depends on the context; Moon photography makes for a neat demo but it is rarely relevant. A better question is whether these kinds of software enhancements hallucinate errors along the same lines as Xerox copiers, which for years silently swapped digits in scanned documents because of overly aggressive compression. Short of erroneously reconstructing an environment like that, I think these kinds of computational enhancements make sense, even if they are not to my personal taste. I would prefer less processed images — or, at least, photos that look less processed. But that is not the way the wind is blowing.