Since Google introduced its Pixel 8 phones earlier this month, it has been interesting, and a little amusing, to read the reactions to its image manipulation tools. It feels like we have been asking the same questions every year — questions like what is a photograph, anyway?, and has technology gone too far? — since Google went all-in on computational photography with its original Pixels in 2016. In fact, people have been asking these things about photography since its early development. Arguments about Google’s complicity in fakery seem to be missing some historical context. Which means, unfortunately, a thousand-word summary.
As it happens, I took a photo history course when I was in university many years ago. I distinctly remember the instructor showing us an 1851 image shot by Edouard Baldus, and revealing to us that it was not a single photo, but instead a series of exposures cut and merged into a single image in a darkroom. That blew my mind at the time because, until then, I had thought of photo manipulation as a relatively recent thing. I had heard about Joseph Stalin’s propaganda efforts to remove officials who displeased him. But, surely, any manipulation that required precisely cutting negatives or painting over people was quite rare until Photoshop came along, right?
No. Not even close. The legacy of photography is a legacy of lies and liars.
In the introductory essay for the 2012 exhibition “Faking It: Manipulated Photography Before Photoshop” — sponsored by Adobe — Mia Fineman writes of the difference between darkroom techniques to adjust regions of a photo for exposure or cropping for composition, and photos where “the final image is not identical to what the camera ‘saw’ in the instant at which the negative was exposed”.1 The catalogue features nearly two hundred years of images which fit this description: from subtle enhancements, like compositing clouds into an overexposed sky, to artistic or humorous choices — “Man on a Rooftop with Eleven Men in Formation on His Shoulders” is an oft-cited delight — to dastardly projections of political power. Perhaps the most insidious examples are those which seem like journalistic “straight” images; one version of an image of the Animas Canyon by William Henry Jackson includes several fictional elements not present in the original.
Even at the time of manipulation-by-negative, there were questions about the legitimacy and ethics of these kinds of changes. In his 1869 essay “Pictorial Effect in Photography”, Henry Peach Robinson writes “[p]hotographs of what it is evident to our senses cannot visibly exist should never be attempted”, concluding that “truth in art may exist without an absolute observance of facts”. Strangely, Robinson defends photographic manipulation that would enhance the image, but disagrees with adding elements — like a “group of cherubs” — which would be purely fantastical.
This exhibition really was sponsored by Adobe — that was not a joke — and the company’s then-senior director of digital imaging Maria Yap explained why in a statement (sic):2
[…] For more than twenty years — since its first release, in 1990 — Adobe® Photoshop® software has been accused of undermining photographic truthfulness. The implicit assumption has been that photographs shot before 1990 captured the unvarnished truth and that manipulations made possible by Photoshop compromised that truth.
Now, “Faking It” punctures this assumption, presenting two hundred works that demonstrate the many ways photographs have been manipulated since the early days of the medium to serve artistry, novelty, politics, news, advertising, fashion, and other photographic purposes. […]
It was a smart public relations decision: however you phrase it, Adobe got to remind everyone that it is not responsible for manipulated images. In fact, a few years after this exhibition debuted at New York’s Metropolitan Museum of Art, Adobe acknowledged the twenty-fifth anniversary of Photoshop with a microsite that included a “Real or Photoshop” quiz. Years later, there are now games to test your ability to identify which person is real.
The year after Adobe’s anniversary celebration, Google introduced its first Pixel phone. Each generation has leaned harder into its computational photography capabilities, with notable highlights like astrophotography in the Pixel 4, Face Unblur and the first iteration of Magic Eraser in the Pixel 6, and enhanced Super Res Zoom in the Pixel 7 Pro. With each iteration, these technologies have moved farther away from reproducing a real scene as accurately as possible, and toward synthesizing a scene based on real-life elements.
The Pixel 8 continues this pattern with three features causing some consternation: an updated version of Magic Eraser, which now uses machine learning to generate patches for distracting photo elements; Best Take, which captures multiple stills of group photos and lets you choose the best face for each person; and Magic Editor, which uses more generative software to allow you to move around individual components of a photo. Google demonstrated the latter feature by removing a trampoline to make it look like someone really did make that sick slam dunk. Jay Peters, of the Verge, is worried:
There’s nothing inherently wrong with manipulating your own photos. People have done it for a very long time. But Google’s tools put powerful photo manipulation features — the kinds of edits that were previously only available with some Photoshop knowledge and hours of work — into everyone’s hands and encourage them to be used on a wide scale, without any particular guardrails or consideration for what that might mean. Suddenly, almost any photo you take can be instantly turned into a fake.
Peters is right in general, but I think his specific pessimism is misguided. Tools like these are not exclusive to Google’s products, and they are not even that new. Adobe recently added Generative Fill to Photoshop, for example, which does the same kind of stuff as the Magic Eraser and Magic Editor. It augments the Content Aware Fill option which has been part of Photoshop since 2010. The main difference is that Content Aware Fill works the way the old Magic Eraser used to: by sampling part of the real image to create a patch. Even so, Adobe was marketing it as an “artificial intelligence” feature before the current wave of “A.I.” hype began.
For what it is worth, I tried Content Aware Fill on one of the examples from Google’s Pixel 8 video. You know that scene where the Magic Editor is used to remove the trampoline from a slam dunk?
I roughly selected the area around the trampoline, and used Content Aware Fill to patch that area. It took two passes but was entirely automatic:
Is it perfect? No, but it is fine. This is with technology that debuted thirteen years ago. I accomplished this in about ten seconds and not, as Peters claims, “hours”. It required barely any knowledge of the software.
The worries about Content Aware Fill are familiar, too. When it came out, Dr. Bob Carey, then president of the U.S.-based National Press Photographers Association, was quoted in a Photoshelter blog post saying that “if an image has been altered using technology, the photo consumer needs to know”. Without an adequate disclaimer of manipulation, “images will cease to be an actual documentation of history and will instead become an altered history”.3 According to Peters, Google says the use of its “Magic” generative features will add metadata to the image file, though “Best Take” images will not get the same treatment. Metadata can be manipulated with software like ExifTool. Even data wrappers explicitly intended to prevent manipulation, like digital rights management, can be altered or removed. We are right back where we started: photographs are purportedly light captured in time, but this assumption has always been undermined by changes which may not be obvious or disclosed.
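To illustrate just how fragile that metadata is: in practice you would reach for ExifTool (something like `exiftool -all= photo.jpg` deletes every tag), but even without any special software, EXIF data is just a segment of bytes in the file. The following is a rough pure-Python sketch — not a robust parser, and it ignores edge cases a real tool handles — that strips the APP1 segment, which is where EXIF (and any provenance tags stored there) lives in a JPEG:

```python
def strip_exif(jpeg: bytes) -> bytes:
    """Return a copy of a JPEG byte stream with APP1 (EXIF) segments removed.

    A simplified sketch: assumes a well-formed baseline JPEG where every
    segment before the scan data carries a two-byte big-endian length.
    """
    assert jpeg[:2] == b"\xff\xd8", "not a JPEG (missing SOI marker)"
    out = bytearray(jpeg[:2])  # keep the Start Of Image marker
    i = 2
    while i < len(jpeg):
        if jpeg[i] != 0xFF:
            # Unexpected byte; copy the remainder verbatim and stop parsing.
            out += jpeg[i:]
            break
        marker = jpeg[i + 1]
        if marker in (0xD9, 0xDA):
            # EOI (end of image) or SOS (start of scan): copy the rest as-is.
            out += jpeg[i:]
            break
        # Segment length includes the two length bytes, not the marker.
        length = int.from_bytes(jpeg[i + 2:i + 4], "big")
        if marker != 0xE1:  # drop APP1, where EXIF data is stored
            out += jpeg[i:i + 2 + length]
        i += 2 + length
    return bytes(out)
```

Ten lines of logic, and any "this image was generated" tag written into EXIF is gone — which is why metadata alone cannot serve as proof of authenticity.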
Here is where I come clean: while it may seem like I did a lot of research for this piece, I cannot honestly say I did. This is based on writing about this topic for years, a lot of articles and journal papers I read, one class I took a long time ago, and an exhibition catalogue I borrowed from the library. I also tried my best to fact-check everything here. Even though I am not an expert, it made my head spin to see the same concerns dating back to the mid-nineteenth century. We are still asking the same things, like can I trust this photo?, and it is as though we have not learned the answer is that it depends.
I, too, have criticized computational photography. In particular, I questioned the ethics of Samsung’s trained image model, made famous by its Moon zoom feature. Even though I know there is a long history of inauthentic images, something does feel different about a world in which cameras are, almost by default, generating more perfect photos for us — images that are based on a real situation, but not accurately reflecting it.
The criticisms I have been seeing about the features of the Pixel 8, however, feel like we are only repeating the fears of the past two hundred years. We have not been able to wholly trust photographs pretty much since they were invented. The only things which have changed in that time are the ease with which the manipulations can happen, and their availability. That has risen in tandem with a planet full of people carrying a camera everywhere. If you believe the estimates, we now take more photos every two minutes than were taken in the first hundred and fifty years after photography’s invention. In one sense, we are now fully immersed in an environment where we cannot be certain of the authenticity of anything.
Then again, Bigfoot and Loch Ness monster sightings are on a real decline.
We all live with a growing sense that everything around us is fraudulent. It is striking to me how these tools have been introduced as confidence in institutions has declined. It feels like a death spiral of trust — not only are we expected to separate facts from their potentially misleading context, we increasingly feel doubtful that any experts are able to help us, yet we keep inventing new ways to distort reality.
Even this article cannot escape that spectre, as you cannot be certain I did not generate it with a large language model. I did not; I am not nearly enough of a dope to use that punchline. I hope you can believe that. I hope you can trust me, because that is the same conclusion drawn by Fineman in “Faking It”:4
Just as we rely on journalists (rather than on their keyboards) to transcribe quotes accurately, we must rely on photographers and publishers (rather than on cameras themselves) to guarantee the fidelity of photographic images when they are presented as facts.
The questions that are being asked of the Pixel 8’s image manipulation capabilities are good and necessary because there are real ethical implications. But I think they need to be more fully contextualized. There is a long trail of exactly the same concerns and, to avoid repeating ourselves yet again, we should be asking these questions with that history in mind. This era feels different. I think we should be asking more precisely why that is.
I am writing this in the wake of another Google-related story that dominated the tech news cycle this week, after Megan Gray claimed, in an article for Wired, that the company had revealed it replaces organic search results with ones which are more lucrative. Though it faced immediate skepticism and Gray presented no proof, the claim was widely re-published; it feels true. Despite days of questioning, the article stayed online without updates or changes — until, it seems, the Atlantic’s Charlie Warzel asked about it. The article has now been replaced with a note acknowledging it “does not meet our [Wired’s] editorial standards”.
Gray also said nothing publicly in response to questions about the article’s claims between its publication on Monday morning and its retraction. In an interview with Warzel published after the article was pulled, Gray said “I stand by my larger point — the Google Search team and Google ad team worked together to secretly boost commercial queries” — but this, too, is not supported by available documentation, and it is something Google also denies. This was ultimately a mistake. Gray, it seems, interpreted a slide shown briefly during the trial in the way her biases favoured. Wired chose to publish the article in its “Ideas” opinion section despite the paucity of evidence. I do not think there was an intent to deceive, though I find the response of both responsible parties lacking — to say the least.
Intention matters. If a friend showed you a photo of them apparently making an amazing slam dunk, you would mentally check it against what you know about their basketball skills. If it does not make sense, you might start asking whether the photo was edited, or carefully framed or cropped to remove something telling, or a clever composite. This was true before you knew about that Pixel 8 feature. What is different now is that it is a little bit easier for that friend to lie to you. But that breach of trust is because of the lie, not because of the mechanism.
The questions we ask about generative technologies should acknowledge that we already have plenty of ways to lie, and that lots of the information we see is suspect. That does not mean we should not believe anything, but it does mean we ought to be asking questions about what is changed when tools like these become more widespread and easier to use.
We put our trust in people to help us evaluate information. Even people who have no faith in institutions and experts have something they see as reputable, regardless of whether it actually is. Generative tools only add to the existing inundation of questionably sourced media. Something feels different about them, but I am not entirely sure anything is actually different. We still need to skeptically — but not cynically — evaluate everything we see.
Update: Corrected my thrice-written misuse of “jump shot” to “slam dunk” because I am bad at sports. Also, I have replaced the use of “bench” with “trampoline” because that is what that object in the photo is.