For decades, the camera provided a fragile contract with reality: a frame, a shutter, a trace of light. Not a guarantee of truth — but an agreement that something once stood in front of the lens. Today, writer and educator Fred Ritchin sees that contract collapsing under the weight of synthetic imagery. AI-generated visuals borrow the codes of photography — depth of field, grain, natural light — yet they refer to nothing lived, nothing witnessed. They encourage, he says, a new epistemology: images that “reflect our own conceptions of the real,” replacing the world with our fantasies. What was once evidence becomes décor, and the viewer is left unable to distinguish a testimony from an illusion. It is not a marginal concern but a civic one. Photography, he warns, is being “counterfeited,” and with it, our ability to share a common understanding of events.
Fred Ritchin has spent more than four decades studying these shifts from inside the institutions that shaped the medium. As Dean Emeritus of the International Center of Photography and former picture editor of The New York Times Magazine, he has watched photography move from film to pixels, from retouching to recomposition, from witnessing to simulation. The danger is not that AI makes new images, but that it makes them without needing experience, encounter, or risk. Anyone, from their living room, can fabricate worlds, identities, crises that never occurred. Meanwhile, journalism still relies on photographs to bear witness, to anchor discourse, to resist propaganda. For Ritchin, the question is no longer whether AI will alter photography, but whether photography can survive as nonfiction. The future will depend on trust — not in the camera, but in the person behind it; not in the image, but in the context and responsibility that accompany it.
In the following interview, he shares with Blind his thoughts on the rise of AI images.
In recent years, photography has shifted from capturing the world to generating it. From a sociological point of view, how do you think this affects our collective understanding of what is “visible” and what is “real”?
AI-generated images, even if photorealistic in appearance, are not photographs. They are synthetic images that can be generated by a computer in multiple styles, including ones that imitate the look of photographs. Their impact on photography is already enormous, making viewers skeptical of actual photographs as they no longer know what to believe, a conundrum that is now rapidly emerging for video as well. The counterfeiting of the currency of photography, the undermining of the previous understanding that we had that the camera and the lens provided visual reference points to help viewers comprehend what is going on in the world or what happened in the past, creates both uncertainty and chaos. It makes it increasingly difficult for people to parse the real from the fake, posing a growing threat not only to the functioning of democratic societies but to our sense of a shared reality. How can we vote if we don’t know what is going on? How can we communicate with others if we live in separate universes bounded by our own preconceptions that can no longer be effectively challenged by photographs and videos?
Artificial-intelligence images borrow the visual codes of photography: grain, depth of field, light. If a synthetic image behaves like a photograph, are we still responding to the medium itself or to the cultural expectations we’ve built around it?
We are responding to the expectations that something that looks like a photograph is, to use author John Berger’s description, a quotation from appearances. This has never meant that the photograph is the truth, only that it provides reference points considered to be trustworthy in contemplating realities other than one’s own. The AI-generated photorealistic imagery encourages people to construct imagery that reflects their own conceptions of the real, displacing what is actually going on with their own preconceived notions.
This, of course, was also a problem in photography as people with little understanding of another culture made photographs that reflected their own biases. But the problem is much worse now as people no longer have to leave their own homes to make images that not only reflect their biases but also allow them to fabricate realistic-looking imagery of people and places that never existed. We can now easily remake the world, as was the title of my first book in 1990, “in our own image.”
You’ve written often about the “truth-value” of photography in journalism. Do you think we need to abandon the idea that a photograph is a proof — or do we need new systems of trust, like provenance or certification?
One answer may be that we trust the photographer, not the camera, just as we trust the writer, not the words. That is why several of us founded Writing with Light (wwlight.org) to emphasize authorship of the photograph as being the key, recognizing the person behind the camera and not placing all our faith in the camera itself.
We all know that it is possible to stage a scene and make a photograph that distorts what is going on, as has routinely been done in politics, for example. In these cases, the photograph distorts events even if the image itself was not manipulated by software or in the darkroom. And it would also be helpful if we began to think of journalistic and most documentary photographs as within the category of “non-fiction” imagery, just as we use the term with words, to clarify the differences between those images that interpret reality and those that fabricate it.
Furthermore, it would be helpful to begin to define a journalistic photograph as a “quotation from appearances,” thinking of the frame of the photograph as the equivalent of quotation marks. Just as nothing that is quoted in words can be modified without informing the reader with an ellipsis or bracket, whatever is within the frame of a journalistic photograph cannot be changed — other than minor adjustments such as cleaning up visual noise, cropping, and modest changes in contrast — without alerting the reader. This would make it clear to the reader that what they are looking at is a recording of what occurred, not a fabrication. The credit might then be amended to read “Journalistic photograph by [name of the photographer].”
Deep learning models are trained on billions of images — many of them personal, private, or culturally sensitive. What long-term cultural consequences do you foresee from having these datasets shape our visual imagination?
In my experience, while a large number of AI-generated images are banal, historically incorrect, racist, misogynist, and self-serving, there are some that are revelatory. As a picture editor, I often had a strong sense of what images a photographer would make on assignment and would often ask the photographer to surprise me, to do something different than the conventional image — such as a portrait of a writer inevitably showing the person in their office with large numbers of books on the shelves. AI-generated imagery for which I have written the text prompts have at times surprised me in fascinating ways, showing in response to a prompt for “the perfect family” a formally posed image of two men with children, “the greatest mothers” eliciting an image of an ape with her children, “Trump’s favorite dictators” a depiction with two Trumps, and so on. The AI sometimes seems to be able to confound expectations, transcending the imagery that it has been trained upon, although recently I have had much less success as the imagery generated has become more conventional and unsurprising.
Again, photojournalism claims to bear witness. Today, synthetic imagery can re-create events that never existed or re-invent scenes that did. Is there still a role for photography as evidence?
Yes, there is a major role for photography as evidence, but it will require more contextualization and, at times, the presence of the photographer, those depicted in the image, or bystanders, to corroborate it as a fair representation of what happened. This has been the case in courts of law for some time, and now we will need to think of how to provide more context in mass media. A single image may not be enough.
For example, a button allowing the reader to view a sequence of images — the equivalent of a contact sheet from which an image was selected — or other photographs and videos from the same scene, would be helpful. When I worked many years ago as a curator on a Magnum exhibition of 400 photographs intended to represent forty years of world history, I realized that I was only looking at four seconds of history if one considers that the camera’s shutter was only open for about 1/100 of a second for each photograph. Especially within the layers of a digital environment, much more can be done to provide context and corroboration.
But we must be transparent with readers, indicating what kind of images they are looking at, and have this labeling system widely adopted. The assistance of big tech companies would be essential to help filter out various kinds of imagery. And there need to be penalties for those who mislabel their images.
Many smartphone users unknowingly collaborate with AI: computational HDR, skin smoothing, sky replacement. Do you think this invisible mediation is reshaping our collective sense of what “normal” images should look like?
I think that this is a manifestation of “consumer entitlement,” the idea that what the consumer (or in this case the prosumer) wants he or she then gets. This is often done in the camera, with presets determined by the manufacturer. We have gone from photography that involves light passing through a lens which is then recorded on film, to a computational photography in which that which passes through the lens is routinely modified by software, to AI-generated imagery in which neither lens nor camera is necessary.
In the process, photography has gone from being dialectical, showing us that which we might not have wanted to see, to giving us what camera manufacturers, as well as ourselves, think we should want, to fabricating imagery that may have little or nothing to do with anything that exists. As a result, we are made to become minor deities of worlds that do not exist, while the world in which we actually live is increasingly ignored as it rapidly degenerates due to climate change, wars, dictatorships, etc.
Historically, new technologies in photography have always produced new aesthetics. With AI, we are seeing not only new styles, but new mythologies — “plausible memories,” fabricated archives, imagined futures. What do these narratives tell us about our time?
While there are many ways that AI-generated imagery will expand our horizons in productive ways, I think some of its current uses are signs of the confusion and anxiety in which we currently live, clutching at illusions while finding it difficult to confront the realities that need to be met.
Many of us have, for example, lost a sense of a larger community to which we belong, including reading the same or similar newspapers and magazines with front pages that would help to focus our society on specific issues, to give us common goals. As a result, we have no iconic photographs that can rally the world to a cause, such as the famous 1968 “Earthrise” photograph by astronaut William Anders that helped to kickstart the environmental movement, or the agonizing photographs of Emmett Till, the African-American child who was brutalized and killed by racists in the 1950s, that gave momentum to the civil rights movement, or that of two-year-old Syrian refugee Alan Kurdi, who was shown to have drowned on a Turkish beach, which provoked individuals and governments to be vastly more supportive of refugees.
Certainly AI-generated images will be helpful in ways that we cannot now predict, including in exploring that which may be outside the realm of photography – thoughts and dreams, ancient civilizations, future events such as what might occur in various localities due to climate change, quantum universes, different worldviews, biological processes, etc. – but what we must not do is to allow the widespread use of AI to usurp our own sense of actuality and our abilities to understand and confront it. Right now, it is being increasingly weaponized by governments and other groups to intimidate and terrify people, to destabilize our ability to perceive what is around us, fracturing societies and making dictatorships more likely.
Art institutions are beginning to exhibit AI-generated works as visual arts, yet audiences often feel uneasy or even betrayed. What is at stake ethically when museums, festivals, or media present artificial imagery in documentary contexts?
One question that has not been addressed is what are the rights of a photographer to prevent others from making AI-generated imagery based upon their own work? What are the parameters that are ethically permissible, and what should be the legal consequences? How can we protect critical imagery, such as those explaining pivotal moments in history, from misuse by those aiming to maliciously distort the past, such as turning the perpetrators of violence into its victims?
These are questions that institutions must explore in depth. Otherwise, it is all too easy to celebrate such work, whether made with the assistance of AI or in other ways, simply because it is novel. The “moral right” of the authors of the photographs needs to be respected, as do the rights of the victims of atrocities who are now being revictimized by imagery that distorts what happened to them.
But the photographic industry has been slow to respond to this. As someone who has written four books on the subject beginning in 1990 (my most recent, The Synthetic Eye: Photography Transformed in the Age of AI, was published this year), and a piece for The New York Times Magazine on the same subject in 1984, I can attest to the continuing reluctance of institutions to take these profound changes seriously, to meet together and decide on a common response.
If you had to advise young photographers today, would you encourage them to master AI as a tool, or to resist it?
We can consider AI to be a “digital tool,” but we can also consider it to be part of a digital environment. The digital differs enormously from the analog in many ways – imagery is made of a mosaic of pixels, each capable of being changed quickly and undetectably; it is non-linear, unlike printed books and magazines; it allows everyone to be not only a producer but a publisher at very low cost; and digital technology such as AI can easily usurp authorship.
So, I would advise photographers to understand these and other differences, to stop viewing photography as if it were the same medium practiced during the last century, and to think about which strategies are best for them to communicate their own understandings of the world.
Would it be helpful to incorporate video and audio in their practices, to write a short code of ethics explaining their approaches that can be retrieved by clicking on their name under the image, to provide considerably greater context for their work, etc.?
Or would it sometimes make sense to have people describe how they were abused when no photographer or camera was present, and then help them make AI-generated imagery that depicts how they were mistreated, as was done by Exhibit A-I in Australia to depict the plight of refugees? Would this be an ethical practice and a useful one?
Yes, everyone should know how AI works and make their own decisions as to whether it helps advance what they are doing or undermines it and be transparent in their choices. Just as the invention of photography helped to provoke a renaissance among painters, so too the invention of AI may help photographers to see new possibilities.
More information on Fred Ritchin here.