Images Under Influence: How AI Has Slipped Into Our Everyday Photography

2025 is the year artificial intelligence came into its own. Unsettling as it remains, it has quietly infiltrated the very practice of photography, influencing aesthetic choices, enriching visual narratives, automating tasks once performed manually, and opening new narrative territories for amateurs and professionals alike.

Less than 10 years ago, artificial intelligence still belonged to the realm of research labs and futuristic speculation. In just a few years, it has slipped into our phones, search engines, email inboxes, photo-editing software, and even into the finger that taps to take an image. Today, it helps compose, assemble, and imagine images that no camera has ever captured. Photography has always been closely tied to technological innovation—from daguerreotypes to film, from analog to digital—but this relationship has never been transformed as profoundly as it is now.

AI’s arrival in the world of images was initially discreet. Adobe introduced automatic optimization functions based on machine learning as early as 2016. Google Photos began classifying visuals by theme or face without human intervention. But the true rupture came in 2022, when tools like DALL·E, Midjourney, or Stable Diffusion allowed anyone to generate an image in seconds from a simple sentence—the prompt. The boundary between capture and creation, document and invention, suddenly blurred.

For photographers, this mutation raises as many opportunities as it does questions. Should these AI-generated visuals be considered photography or a new medium? Are they original works, or the result of algorithms trained on preexisting visual archives? And what becomes of authorship when the camera itself disappears from the equation? Some see an existential threat. “AI is not just a tool; it redefines the very notion of the image,” warns art historian Antonio Somaini. “What we see is no longer necessarily the trace of a moment or a place, but the statistical projection of thousands of past images.”

Peter Trimmel wins first prize for his UHY fennel at the Kooma Giants Show in Limburg, 1956, New Farmer series, 2023. © Bruce Eesly

Others see an extraordinary field of experimentation. German photographer Boris Eldagsen, who won a Sony prize in 2023 with an AI-generated image before publicly rejecting it, called it “an opportunity to question our relationship to images and truth.” Artist Bruce Eesly, invited to the Rencontres d’Arles in 2024 with “New Farmer,” a series using AI to draw analogies between the history of industrial agriculture and the techno-optimism of our time, believes that his work “blends fact and fiction to disrupt commonly accepted historical narratives.”

Beyond theoretical debates, one fact remains: artificial intelligence is now an essential component of the visual ecosystem. It infiltrates professional practices, reshapes professions, changes audience expectations, and redraws the borders of art. From invisible retouching to fully synthetic creations, from fashion photography to historical archives, AI appears everywhere—often without our full awareness.

AI: new tool or new creator?

The first uses of artificial intelligence in photography date back to the 2010s, well before spectacular generators flooded social networks. Back then, AI was limited to invisible tasks: automatic exposure correction, face recognition, noise reduction. Google Photos already classified albums by places or people. Adobe introduced “smart adjustment” tools in Photoshop capable of identifying a sky or a face to edit them separately. But the image was still captured, not produced.

The shift occurred in 2022. That year, the simultaneous arrival of DALL·E 2, Midjourney, and Stable Diffusion changed everything. For the first time, AI could create an image from scratch using a simple sentence. No sensor, no lens, not even a real scene—an image emerged from a prompt, a textual statement the algorithm interprets and transforms into a visual. In seconds, it became possible to conjure a realistic portrait of an astronaut on Mars in the style of Richard Avedon, or a Parisian street scene as if photographed by Henri Cartier-Bresson.

This shift triggered an explosion of uses. Since 2022, more than 15 billion images have been generated by AI tools, according to data from Everypixel, a historic leap in visual creation. Midjourney, one of the most popular generators, claims an active Discord community estimated at 19 to 21 million members, according to independent analysts. “We are witnessing an unprecedented democratization of visual creation,” says Sam Altman, CEO of OpenAI. “It is no longer reserved for artists or specialists: it is a natural extension of human imagination.”

This unprecedented democratization is transforming photography into a testing ground for millions of amateurs and professionals. “AI is not just a gadget,” observes artist Refik Anadol, a pioneer in the field and creator of generative installations exhibited at MoMA. “It is redefining how we produce images, just as the camera did in the 19th century.”

Unsupervised © Refik Anadol / MoMA

This mass adoption is also reshaping photographic tools. Adobe integrated its Firefly model, which can add, delete, or generate elements in an existing image in a few clicks. Platforms like Runway Gen-2 convert a photo into video. Topaz Labs uses neural networks to restore old images in ultra-high definition. In studios, fashion photographers now generate entire sets without leaving their computer. Some photo editors rely on AI to colorize archival images or reconstruct missing frames in historical sequences.

These tools raise a fundamental question: are they merely auxiliaries—or becoming co-authors? French artist François Bellabas, who combines photography and algorithmic generation, refuses to decide: “AI is neither a paintbrush nor a photographer. It is another way of thinking about images, a partner whose language you must learn.” Authorship itself becomes blurred: who is the author of an image produced by billions of parameters trained on collective archives? The person who wrote the request—or the algorithm itself?

A new visual imagination

The eruption of AI in image creation has produced, within a few years, a new visual aesthetic: disorienting, fascinating, sometimes unsettling. Where traditional photography relies on capturing reality, AI generates visions oscillating between hyperrealism and surrealism. The result is neither fully documentary nor fully imaginary; it belongs to a gray zone, the plausible image.

AI x Future Cities © Manas Bhatia

A single word can now conjure a universe. Platforms like Midjourney or Firefly translate concepts into visuals that once existed only as thoughts. Photographers use them to explore scenarios impossible to stage. Architect Manas Bhatia, through his series “AI x Future Cities,” generated sustainable urban visions: algae-covered towers that filter air, structures blending vegetation and architecture in a futuristic world.

This algorithmic aesthetic is built on unprecedented stylistic hybridization. AI absorbs centuries of art and photography history in an instant. It can combine Rembrandt’s light, Diane Arbus’s framing, and Saul Leiter’s palette to produce new images. Boris Eldagsen put it this way: “AI gives us access to a collective visual memory. It functions like an immense photographic unconscious.” His series “Pseudomnesia” (false memories) reproduces the look of old family photographs that never existed. Presented at Paris Photo and sold for more than €20,000 each, these works opened a major debate on the nature of images.

© Boris Eldagsen. Sony World Photography Awards

AI-generated visuals adopt the codes of documentary photography—grain, depth of field, natural light—while escaping its material constraints. They may look photographed, even when they are not. This ambiguity fascinates both artists and audiences. “I strive to create images that look as if they have always existed,” says American artist Stephanie Dinkins, who uses AI to explore Afro-American narratives. “The important thing is not whether they are ‘real,’ but what they say about our collective imagination.”

AI aesthetics are not limited to appearance—they also reshape narrative forms. Some artists create series spanning fictitious generations; others reconstruct scenes erased from history. In “The Unseen Archive,” presented at the Venice Biennale in 2024, artists “photographed” overlooked women scientists, imagining the portraits they never had. The goal was not to deceive, but to remedy an absence.

AI in our pockets

Every time we take a smartphone photo, we already collaborate with artificial intelligence—often without knowing it. Behind every image, algorithms analyze, correct, recompute, and interpret the scene, transforming a simple gesture into a complex technological process. Far from being an artist’s or laboratory’s tool, AI has settled in everyday life, embedded in our phones, apps, and visual habits.

This silent revolution relies on what engineers call computational photography. Today, an iPhone, Google Pixel, or Samsung Galaxy does not merely record light: smartphones capture several images simultaneously, merge them, balance contrast, recover highlights, refine textures. “Computational photography uses techniques like stacking, depth mapping, or scene segmentation to transform the image from the moment of capture,” explains a report by Visionary.ai. In less than a second, billions of operations generate an “ideal” image—closer to the user’s expectation than to raw reality.
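The frame-stacking idea behind these pipelines can be sketched in a few lines. This is a deliberately minimal illustration using NumPy, not any vendor's actual algorithm: real pipelines also align frames, reject outliers, and weight exposures before merging. The `stack_frames` helper and the simulated burst are assumptions for the demo.

```python
import numpy as np

def stack_frames(frames):
    """Merge a burst of noisy exposures by averaging.

    Averaging N independent frames reduces random sensor noise by
    roughly a factor of sqrt(N) -- the core trick behind smartphone
    night modes (real pipelines also align and weight the frames).
    """
    burst = np.stack([f.astype(np.float64) for f in frames])
    return burst.mean(axis=0)

# Simulate a burst: one "true" scene plus independent noise per frame.
rng = np.random.default_rng(0)
scene = rng.uniform(0, 255, size=(4, 4))
burst = [scene + rng.normal(0, 20, scene.shape) for _ in range(16)]

merged = stack_frames(burst)
single_err = np.abs(burst[0] - scene).mean()
merged_err = np.abs(merged - scene).mean()
print(merged_err < single_err)  # the merged frame is closer to the scene
```

With 16 frames, the merged image carries roughly a quarter of the noise of any single exposure, which is why night modes fire a burst rather than one long shot.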

Tech giants are racing to innovate. Google turned its Night Sight mode into a showcase of embedded AI, illuminating scenes nearly invisible to the naked eye. Apple combines up to nine images per shutter press to create a single frame with optimized dynamic range. And some projects go further: “We want to transform digital photography through AI, not simply improve it afterward,” said former Apple engineer Ziv Attar in an interview with CXOTalk. “Our goal is to invent devices made possible by artificial intelligence.”

These transformations extend beyond capture. Automated retouching has become a universal reflex. In Google Photos or Lightroom Mobile, one tap smooths a sky, softens a face or removes a passer-by. Google’s Magic Editor, rolled out in 2024, can reposition a subject or recompute an entire scene in seconds—tasks that would have required hours of Photoshop work 10 years ago.

Night Sight by Google
Magic Editor by Google

Some brands have been accused of artificially ‘beautifying’ photos. Samsung’s Moon mode, starting in 2023, combined multiple shots and algorithmic detail enhancement to such a degree that users claimed the images were ‘recreated’ rather than captured. The essential question follows: to what extent is an image generated or augmented by AI still a trace of reality?

More than 1.8 billion images are produced daily through smartphones. Almost all undergo algorithmic treatment before being seen or shared. Art historian Daniel Palmer observes that this “hybridization between capture and computation” reshapes our collective relationship to the image: “It changes visual memory, the circulation of images, and our trust in what we see.”

As AI slips into every pixel of everyday life, it becomes an implicit aesthetic filter. Default styles, automatic corrections, and preset enhancements now shape collective imagination. In this new visual age, understanding and mastering technological mediation becomes as essential as framing or choosing light.

AI serving photographers

If AI has disrupted the aesthetics of images, it has also deeply transformed the profession itself. Few photographers now work without relying, in one way or another, on algorithmic tools. From automated retouching to hybrid image generation, these technologies infiltrate every step of visual production—often without the viewer noticing.

Image editing is one of the fields where AI has the greatest impact. So-called ‘smart’ correction tools have made dramatic progress thanks to deep learning. They apply not only to classic parameters like brightness, contrast, and color balance. Adobe Photoshop’s Generative Fill can now extend an image’s frame, inventing new realistic areas from existing content. Tools like Topaz Photo AI or Luminar Neo restore blurry shots, remove unwanted elements, and increase resolution without losing quality.

Generative Fill by Adobe Photoshop

AI also shapes preparation and planning. With tools such as Kaiber or Runway, photographers can preview a scene’s outcome or simulate lighting conditions without leaving their studio. In fashion or commercial photography, studios generate digital ‘twins’ of models before a shoot, drastically reducing logistical costs. Stock image platforms illustrate this shift: Shutterstock now derives a major share of its revenue from licensing data to AI, with more than $104 million recorded in 2023 and $138 million in 2024. Automation touches every phase of visual production, from editing to distribution, and reshapes studio skill sets.

Photojournalism, traditionally anchored in reality, is not spared. AI first serves as a tool of analysis and archiving. Adobe’s Content Authenticity Initiative can trace image origins and certify authenticity. Reuters’ visual search engine uses machine learning to index millions of photos by content, assisting editorial teams. Some reporters use AI to reconstruct missing frames in documentary sequences or colorize archives to contextualize a subject. Others provoke debate, like Magnum photographer Jonas Bendiksen, who asks: “To what extent will our community of photographers and editors be able to distinguish deepfakes from real images?”

This transformation affects dissemination and monetization. Stock platforms deploy AI systems capable of predicting which images will be most successful, influencing editorial choices. “AI has become a silent assistant guiding our production,” says Spanish commercial photographer Javier Cortés. “It shows us trends, optimizes metadata, and helps us sell faster.”

Yet this omnipresence raises questions about identity and authorship. Some fear algorithmic standardization may impoverish creativity. Others see opportunity. “If you ask me why I take photos… they move people and make them see the world as I’ve seen it. And that is still possible with an AI image,” says artist Rankin.

Photography and truth: new ethical and legal stakes

Since its origins, photography has been perceived as a reliable witness. An image was proof. The arrival of AI shattered that certainty. When anyone can generate a fictitious news scene or a portrait of a nonexistent person within seconds, how do we distinguish truth from fabrication? This is not a theoretical question—it has become a major challenge for photographers, media, and society as a whole.

Deepfakes are the most striking example. In 2023, an image of Pope Francis wearing a white Balenciaga-style puffer coat spread widely across social networks. It had been generated using Midjourney, without malicious intent, yet it was enough to show how easily our gaze can be misled. According to a Stanford University study, nearly 20% of Americans have already shared an AI-generated image believing it to be authentic. For photojournalists, this confusion is a direct threat to their credibility. For those whose work is more subjective, the question seems less critical. “AI doesn’t worry me at all,” says Annie Leibovitz. “Photography itself is not really real. I use every tool available to me.”

Pope Francis parading around in a Balenciaga down jacket. Image generated by Midjourney / Twitter

The proliferation of synthetic images fuels growing distrust. According to a 2024 Reuters Institute study, in several countries, a majority of people feel uncomfortable when information is produced mainly by AI. In the United States, 52% of respondents say they would be uneasy reading newspapers entirely produced by algorithmic systems. A 2025 Pew Research Center survey reveals a long-lasting tension: many Americans believe AI should be transparent, yet doubt their own ability to judge its use in visual content. These data confirm that one of the greatest challenges is restoring trust in images in the age of algorithmic generation.

Newsrooms and cultural institutions have begun to respond. The Associated Press adopted a strict charter in 2023 banning any AI-generated image in editorial content unless explicitly labeled. Agence France-Presse introduced a verification protocol based on metadata detection and pixel analysis tools. Since 2024, Adobe’s Content Credentials—developed within the Content Authenticity Initiative—enables authentication at the source: each file contains a ‘signature’ indicating whether the image was captured or generated.
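The principle behind such provenance signatures can be illustrated with a toy example. This sketch is not the C2PA/Content Credentials format itself (which uses certificate-backed asymmetric signatures and embedded manifests); it only shows the underlying idea that a record bound to the file's hash breaks the moment the pixels are edited. All names here are illustrative.

```python
import hashlib
import hmac
import json

SECRET_KEY = b"demo-signing-key"  # real systems use asymmetric keys and certificates

def sign_image(image_bytes, origin):
    """Attach a provenance record: a hash of the file plus a keyed signature."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    record = {"origin": origin, "sha256": digest}
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_image(image_bytes, record):
    """Re-derive the signature; any edit to the bytes invalidates the record."""
    claim = {"origin": record["origin"], "sha256": record["sha256"]}
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, record["signature"])
            and hashlib.sha256(image_bytes).hexdigest() == record["sha256"])

photo = b"\x89PNG...raw bytes of a captured image"
credential = sign_image(photo, "captured")
print(verify_image(photo, credential))            # True: untouched file
print(verify_image(photo + b"edit", credential))  # False: bytes changed
```

The practical point is the asymmetry: producing a valid record requires the signing key, but anyone can check one, which is what lets a newsroom verify an image it did not create.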

But the issue of truth extends beyond manipulation. It also concerns copyright. AI-generated images are often produced using datasets of preexisting photographs protected by copyright. Creators such as comedian Sarah Silverman and German photographer Matthias Koch have filed lawsuits against model publishers, accusing them of using their work without authorization. “We are facing a legal gray zone,” says American jurist Daniel Gervais. “Who owns an image produced by an algorithm trained on millions of works? The person who wrote the prompt? The developer of the model? Or the authors of the source images?”

Cherry Airlines, 2024 © Pascal Sgro

Responsibility is equally uncertain. What happens when a generated image causes harm, by spreading misinformation or damaging a person’s reputation? National legal frameworks struggle to keep up. The European Union’s AI Act imposes transparency requirements for AI-generated content. In China, a law introduced in 2023 requires that every AI-produced image be clearly labeled. In South Korea, digital advertising screens featuring synthetic photos or videos must explicitly state this.

These issues also redefine photographic ethics. Many call for responsible use of AI, among them Danish photojournalist Mads Nissen: “We must be clear about what we show. If an image was not captured by a camera, it cannot be presented as photography.” Schools and festivals are multiplying workshops to educate young photographers, proof that photography is no longer only an act of creation, but also an act of transparency.

New visions and institutional recognition

Since 2022, artists have embraced generative tools to test, imagine, and create new visions. 2025 marks a turning point in institutional recognition of these works. In January, Brussels took a clear step with “AImagine – Photography and Generative Images” at Hangar, the first major group exhibition entirely dedicated to AI-created images in a photography venue. Co-curated by historian Michel Poivert and the Hangar team, it gathered 18 projects selected through an open call and curatorial process. “Who is afraid of artificial intelligence? Photographers, perhaps—but not all,” said Poivert, advocating for clear labeling to “educate audiences.” Director Delphine Dumont spoke of a “need to explore the future of the image.” Presented as part of the PhotoBrussels Festival, the exhibition was extended until June 25, 2025.

The exhibition offered concrete references. Brodbeck & de Barbuat retraced photography’s history in “Une Histoire parallèle” (2022–2023), revealing the “failures” and biases of models as narrative material. Alexey Yurenev trained a model on 35,000 WWII images for “Silent Hero” (2019– ), filling silences in his family history. Pascal Sgro revived 1950s imagery with “Cherry Airlines” (2024). Nearby, Delphine Diallo (“Kush,” 2024) and Nicolas Grospierre (“Giant Inscrutable Matrices,” 2024) projected AI toward symbolic or architectural futures, while Justine Van den Driessche (“The Progress,” 2023) and Jordan Beal (“Lineaments,” 2024) blurred timelines and landscapes.

Une Histoire parallèle, 2022-2023 © Brodbeck & de Barbuat
We are at War, Planches Contact 2024 © Phillip Toledano

A few months earlier, at the Planches Contact festival in Deauville (France), Phillip Toledano created every image on display using AI, particularly Midjourney. His projects referenced familiar public imagery like “Another America,” an unsettling rewriting of American history, or “We Are at War,” which visually resurrected the Normandy landings for their 80th anniversary. The artist indirectly proposed to recreate the images Robert Capa allegedly produced on the beach—negatives later lost or damaged. The series oscillates between historical facts and fake news in an era of conspiracism. “I do not use Robert Capa’s imagery or his style to create prompts,” Toledano explained. “I use his story as a vehicle to talk about what artificial intelligence is capable of.” Laura Serani, then the festival’s artistic director, added: “Behind Phillip’s images, there is intelligence, subtlety, narrative, beauty, power, and a dramaturgy very inspired by cinema. It is not the machine that creates these elements. It is the person behind the machine.”

This spring, the Jeu de Paume (Paris) presented “The world according to AI” (April 11–September 21, 2025), taking a different approach: revealing what usually remains invisible. Chief curator Antonio Somaini summarized the stakes: “Algorithms transform our experience of images,” and the exhibition unveiled the “discreet operations” and “invisible processes” of AI. Highlights included Kate Crawford & Vladan Joler’s maps (“Calculating Empires,” 2023), Trevor Paglen’s critique of object detection (“The Treachery of Object Recognition,” 2019), Julian Charrière’s works on the material cost of AI (“Buried Sunshines Burn,” 2023; “Metamorphism,” 2016), Nouf Aljowaysir’s critical erasure of subjects, and Grégory Chatonsky’s “posthumous anticipation” (“La quatrième mémoire,” 2025).

Elsewhere, another institutional milestone emerged: Refik Anadol at MoMA. His installation “Unsupervised” (November 19, 2022–October 29, 2023) dreamed the museum’s collection: a model processed 138,151 collection images to generate real-time abstract visual flows. “What would a machine dream after ‘seeing’ MoMA’s collection?” Anadol asked. With this data-born work, the institution endorsed AI as an exhibition language in its own right.

Unsupervised © Refik Anadol / MoMA
De Beauvoir, 2019 © Trevor Paglen
La quatrième mémoire, 2025 © Gregory Chatonsky
Giant Inscrutable Matrices, 2024 © Nicolas Grospierre

This movement fits into a longer chronology. As early as 2019, the Barbican in London presented “AI: More than Human,” a sweeping panorama blending artists and researchers. Though not centered on photography, it offered a context to understand the rise of generative models—from recognition to creation—now entering photographic institutions.

These exhibitions reveal two things. First, AI is no longer an add-on to photography: it shapes its codes and spaces from within. Second, the narrative is shifting—from “true/false” to “why and how the image is produced, displayed, explained.” “We live in a time when lens technology is fully developed and anything is possible in image creation. With the rise of artificial intelligence, many images may become statistical hallucinations,” says curator Erik Kessels. “The ‘how’ loses importance, the ‘why’ rises. That is precisely why new ideas and original photographic narratives become essential.” For Antonio Somaini, the goal is not to “illustrate” AI but to reveal its circuits: infrastructures, data, labor, biases. And to confront audiences with a medium that, from Hangar to Jeu de Paume, MoMA to the Barbican, is now fully exhibited.

AI in photojournalism and media: opportunities and risks

Nowhere has AI’s arrival raised more debate than in photojournalism. At the Visa pour l’Image festival last September, director Jean-François Leroy reminded audiences: “Faced with the proliferation of artificial intelligence and manipulated images, Visa pour l’Image 2025 presents nuanced, verified information from the field, not from social networks; images made by humans, not generative AI.” For the press, more than anywhere else, images are not merely aesthetic objects: they are testimony, evidence, collective memory. When fiction and reality blur, the credibility of information collapses.

On one hand, AI offers unprecedented possibilities to journalists and agencies. Newsrooms use it not only to index images, but to guarantee authenticity. Initiatives like the Content Authenticity Initiative (CAI) and the C2PA standard embed digital signatures in files to attest their origin. Some media experiment with non-documentary AI—for example, visualizing inaccessible places or reconstructing historical events from textual descriptions. Clearly labeled synthetic images do not replace documentary photography; they extend narrative scope and editorial impact.

AI also contributes to new forms of documentary storytelling. In investigative projects, journalists use generative models to visualize situations impossible to photograph: erased crime scenes, destroyed archives, vanished landscapes. In a New York Times project, AI helped reconstruct the faces of migrants who died in the Mediterranean from partial descriptions and biometric data—a sensitive approach that does not replace photography, but prolongs it.

But these opportunities are balanced by real dangers. The proliferation of false images has become a major issue. In March 2023, a viral photo showing Donald Trump’s fictional arrest was shared millions of times before being debunked. According to a University of Amsterdam study, nearly 60% of respondents admitted having already doubted an image’s authenticity. For newsrooms, pervasive distrust complicates the mission: producing images no longer suffices—one must now prove they are real.

This situation prompted collective mobilization. In 2023, Associated Press, AFP, Getty Images, and others joined the Content Authenticity Initiative to incorporate “origin certificates” directly into photo files. Each image produced by their photographers now contains an unalterable digital trace attesting authenticity. Several agencies also require that any AI-generated image be clearly labeled, under penalty of exclusion.

Photojournalists themselves adapt their practice. Some, like Jonas Bendiksen, used AI to test limits: at Visa pour l’Image in 2021, he presented “The Book of Veles,” a series entirely generated by computer. The project, designed as an intellectual trap, fooled many editors and curators before he revealed the ruse. “It wasn’t to ridicule anyone,” he later said, “but to show how easily our trust in images can be manipulated.”

On April 4, 2023, photographer Michael Christopher Brown’s project “90 Miles” also ignited debate in the world of photojournalism. He described it as a “post-photographic experiment illustrating news reports generated by artificial intelligence (AI)” that “explored the historical events and realities of Cuban life that motivated Cubans to cross the ocean separating Havana from Florida, a distance of 90 miles.”

90 Miles, 2023 © Michael Christopher Brown

The lesson is clear: the era of the photograph as unquestioned proof is over. In this new context, images must be accompanied by transparency, contextual data, or complementary documentation to regain public trust. As photographer Nicole Tung summarizes: “Our responsibility is twofold: to show the world as it is, but also to explain how and why we show these images.”

Artificial intelligence will probably never replace photojournalism, because it cannot witness reality. But it forces the profession to reinvent itself, strengthen protocols, and redefine its relationship with audiences. In this sense, it is not a threat, but a revealer—exposing both the fragilities and strengths of a profession built more than ever on trust.

A silent revolution in the economics of images

If AI transformed practices and imaginations, its most radical impact lies in photography’s economy. In a few years, traditional business models have been deeply reshaped—historic players see margins collapse while new markets emerge at breakneck speed.

The first upheaval concerns production costs. Creating a professional-quality image no longer requires entire teams, expensive studios, or days of post-production. A fashion photographer can now generate a 3D set or replace an absent model with a synthetic version at a fraction of the price. A food photographer can create an AI-generated dish more appetizing than their own. According to a 2024 Deloitte study, average visual production budgets in advertising dropped by 32% since generative tools became widespread, a decline that deepened further in 2025. “Where a campaign required ten shooting days, it can now be produced in three,” explains Laura McLean, creative director at Saatchi & Saatchi London.

Falling production costs reshape traditional models. Stock agencies like Getty or Shutterstock now invest heavily in on-demand image generation rather than traditional production. Shutterstock announced that its AI licensing activity reached over $100 million in 2023, a rapidly growing segment reflecting the shift in value toward data exploitation. Meanwhile, Getty’s “Creative” segment declined 4.5% in 2024, a sign of demand shifting toward generated imagery. The market is actively recomposing, evidenced by a proposed $3.7 billion merger between the two giants.

Advertising campaign for Martini generated by AI

The art market—historically slower to evolve—is not spared. Auction houses now offer AI works. In 2018, “Edmond de Belamy,” an algorithmic piece by the collective Obvious, sold at Christie’s for $432,500, marking a symbolic moment. Since then, specialized galleries, such as Bitforms (New York) or Gazelli Art House (London), exhibit hybrid photographic projects blending real capture and synthetic generation. AI thus becomes a recognized medium—not just a technical tool but an artistic practice.

This economic upheaval also raises the question of value. What is an image produced instantly and infinitely worth? How can scarcity—a foundational concept of the photography market—be preserved in an age of abundance? For art historian Fred Ritchin, former dean of the International Center of Photography (ICP), “value no longer lies in the object, but in its context. What will matter is the story behind the image, the vision that shaped it, the reflection that accompanies it.” In other words, it is less the photograph itself that will be monetized than the entire intellectual and narrative project that produced it.

AI also reshuffles labor. New professions emerge—prompt designers, creative engineers, image authentication specialists—while others transform. Photographers become data curators, image scriptwriters, or supervisors of automated creative pipelines. “What AI can’t do is create in place of humans,” explains Florence Moll, an agent for photographers who regularly take on commissions for luxury brands and commercial campaigns. “It doesn’t have its own imagination or creativity. It’s a brand-new tool that unleashes creative intelligence.”

This silent but relentless economic revolution deeply alters photography’s ecosystem. It creates new hierarchies, redistributes value, and redefines the author’s role. The image is no longer just a finished product; it becomes a service, an experience, a process. In this new visual order, photography moves away from the unique object and enters the age of infinite production.

Photographing tomorrow

AI’s rapid rise has upended the world of images in less than three years. Yet what we are experiencing today is likely only a beginning. As models grow more powerful and their analytical and generative capabilities improve, the boundary between photography and synthetic imagery will continue to dissolve.

In art and photography schools, the change is already underway. Prompt engineering—the craft of shaping an effective request to guide AI—joins courses on composition, lighting, or darkroom printing. Institutions such as the ICP in New York have created hybrid workshops where students must produce series combining real captures and generated imagery. “Understanding how AI works is also understanding how it transforms our vision,” says Fred Ritchin, one of the first theorists to integrate AI into photographic education.

This hybridization appears in many artistic practices. Some, like Sofia Crespo, use AI to explore biological forms impossible to photograph. Others, like Trevor Paglen, use it to reveal invisible structures—databases, surveillance systems—that shape our world. Halfway between documentary and speculation, these approaches suggest a new form of photography: no longer centered on capturing the real, but on modeling and narrating it. “AI is not here to replace photography,” Paglen explains. “It is here to help us imagine what we cannot always see.”

The Progress, 2023 © Justine Van den Driessche

Startups are also developing hybrid devices capable of generating complementary imagery directly through the viewfinder or enriching the scene in real time with synthetic elements. The experimental Paragraphica project, for example, uses no sensor or lens: it collects location, weather, and environmental data to generate an image via AI instead of a traditional photograph. Other prototypes like the CMR M-1 embed diffusion algorithms in the device, letting photographers modulate generative output in real time using personalized LoRA models. Canon is already exploring these possibilities with its latest hybrid cameras, such as the EOS R5 Mark II, whose autofocus relies on deep learning modules that anticipate subject movement, adjust exposure, and automatically improve image quality through onboard algorithms. Together, these innovations outline a near future in which the camera no longer simply records what the photographer sees but anticipates, interprets, and enriches it at the moment of capture.
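A lensless device in the spirit of Paragraphica essentially turns ambient metadata into a text prompt before any model is called. The helper below is a hypothetical sketch of that first step (the function name and fields are invented for illustration; the project's actual pipeline is not public in this form).

```python
def build_prompt(location, weather, time_of_day, nearby=()):
    """Compose ambient metadata into a descriptive prompt for an image model.

    A lensless device gathers data about where it is rather than light;
    this hypothetical helper shows how such signals could be merged into
    a single sentence to hand to a text-to-image generator.
    """
    parts = [
        f"A photo taken at {location}",
        f"at {time_of_day}",
        f"in {weather} weather",
    ]
    if nearby:
        parts.append("near " + ", ".join(nearby))
    return ", ".join(parts) + "."

prompt = build_prompt(
    "Place de la République, Paris",
    weather="overcast",
    time_of_day="dusk",
    nearby=("a café", "a metro entrance"),
)
print(prompt)
```

The resulting sentence, not a burst of photons, is what the generative model "develops" into a picture, which is exactly why such images are traces of data rather than of light.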

Other tools of automated storytelling associate text, image, and data to create complete multimedia reports. In this new ecosystem, the photographer becomes less a technical operator and more a creative director—an author capable of orchestrating complex flows of human and synthetic imagery.

Photography’s history has always been one of technological ruptures: from wet collodion to film, from black-and-white to color, from analog to digital, from mechanical shutters to CMOS sensors. Each time, resistance preceded reinvention. Artificial intelligence is no exception. It is probably not the end of photography, but its next chapter—a chapter where photographing will no longer mean simply capturing light, but also conversing with machines capable of inventing new forms of it.
