Hana Katoba approaches image-making like a traveler returning to a familiar landscape only to find it transformed. Trained as a photographer and shaped by years of observing light, gesture, and atmosphere, she entered the realm of AI without the urgency of novelty. Instead, she treats it as an extension of her visual practice: another room in the house she has been building since she first began taking pictures. The camera is no longer a physical object but a space where intuition and memory guide the composition.
For Katoba, artificial intelligence is not a shortcut but a medium of exploration. She sketches concepts as she once would with a lens, moving through prompts the way she once moved through a city or a set—slowly, letting uncertainty lead. The resulting images carry the same chromatic delicacy that marked her photographic work: dreamlike portraits, spectral architectures, fragmented landscapes that echo the surreal without abandoning coherence. The fantasy in her compositions is not escapism; it is a way to touch the emotional residue of reality.
In her commercial collaborations, Katoba adapts this language to the needs of brands and editorials, without relinquishing authorship. She sees the core challenge not in using AI, but in retaining one’s voice within it. The technology can multiply possibilities, she says, but only the artist can set direction.
In the following interview with Blind, Hana Katoba explains how she navigates that space—between photography and invention, discipline and play—and how AI has become less a tool than a companion shaping her visual imagination.
How did you first begin integrating AI into your photographic practice?
I started integrating AI very organically, almost out of curiosity. I had been working in photography for many years, and when I discovered the possibilities of generative models, I felt I could expand my visual language beyond what the camera and my resources allowed. It wasn’t so much an abrupt change as it was a natural transition: a way to translate ideas that previously only existed as sensations or mental sketches.
Was there a particular project or moment when you realized AI could play a central role in your visual language?
There wasn’t one specific project that marked the moment; it was more of a gradual revelation. While working with AI, I discovered I could transform very abstract ideas into images without the technical and budgetary limitations of traditional media. That new, borderless space allowed my imagination to expand with total freedom. What really made me understand its potential was realizing that the process awakened a way of creating that felt very close to play. I deeply believe that, as humans, learning through play is innate; it’s how we grow, explore, and discover the world for the first time. With AI, I recovered that primal impulse: to experiment without fear, to follow my intuition, to be surprised. That combination of freedom and play—that possibility of building from the most essential curiosity—was what made me understand that AI could become a central part of my visual language.
How does your use of AI differ between your personal projects and your commercial assignments?
In my personal projects, I allow myself a freer, more emotional exploration, where AI acts as a space for introspection. In the commercial realm, my use of AI takes on a more polished and strategic character. It’s not so much about letting go, but about translating concepts into images that dialogue with the client’s vision without losing my visual identity. That process can be challenging because there is a risk that your own voice might get diluted among so many external guidelines. For me, the interesting part is managing to provide sensitivity, authenticity, and a personal perspective even within a tighter framework.
In commercial work, what expectations or boundaries do clients typically have regarding AI-generated or AI-enhanced imagery?
Clients usually look for innovation, but with clear limits: aesthetic coherence, respect for brand identity, and technical viability. Many are interested in the narrative or conceptual potential of AI, but they need the result to be controlled, reproducible, and aligned with the project’s purpose.
When you start an image involving AI, what is your entry point — a photograph, a prompt, a sketch, a concept?
Normally, I start with a concept. Sometimes I develop it into a quick sketch or a written description that serves as a starting point. Little by little, I unravel all the characteristics that will shape the image: how the composition will be organized, what kind of light fits best, and what color palette can sustain the atmosphere I’m looking for. That starting point helps me maintain coherence when I later incorporate AI into the process.
But what really fascinates me is the exploration of latent space. It’s a technical term, yes, but I’d explain it like this: imagine being able to travel from an initial image or idea toward any other visual destination, and during that journey, unexpected findings appear, just like when you travel without a fixed plan. For me, generating images with AI becomes that journey: I can change direction, experiment, discover, and take control at any moment. Each image becomes a different station within an infinite creative route.
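Her travel metaphor corresponds to a concrete operation: interpolating between two points in a model's latent space. The sketch below is illustrative only — the four-dimensional vectors are stand-ins for real diffusion latents, which have thousands of dimensions — but the spherical interpolation (slerp) it implements is the blend commonly used to move smoothly between latents:

```python
import math

def slerp(a, b, t):
    """Spherical interpolation between two latent vectors a and b.

    t = 0 returns a, t = 1 returns b; intermediate values trace an arc
    on the hypersphere, which tends to keep diffusion latents
    in-distribution better than a straight linear blend would.
    """
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    omega = math.acos(max(-1.0, min(1.0, dot / norm)))  # angle between vectors
    if omega < 1e-6:  # nearly parallel: fall back to a linear blend
        return [(1 - t) * x + t * y for x, y in zip(a, b)]
    s = math.sin(omega)
    return [
        (math.sin((1 - t) * omega) / s) * x + (math.sin(t * omega) / s) * y
        for x, y in zip(a, b)
    ]

# Two hypothetical latents standing in for "departure" and "destination".
start = [1.0, 0.0, 0.0, 0.0]
end = [0.0, 1.0, 0.0, 0.0]
midpoint = slerp(start, end, 0.5)  # one "stop" along the journey
```

Sampling `t` at many values between 0 and 1 produces the chain of intermediate images she describes as stations along the route.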
Could you walk us step-by-step through your typical workflow for creating an AI-assisted image, from idea to final result?
● Idea / Concept: Everything starts with a clear intention: an emotion, an atmosphere, or an idea I want to address.
● Brief Description: I usually make a small sketch or write a few lines defining the direction. It doesn’t have to be elaborate, but enough to set the “mental framing,” as if I were planning a photo shoot.
● First Iterations with AI: From there, I build the prompts, keeping fundamental concepts very present: lighting type, style, color, angle. I test different word combinations and observe how the images respond.
● Journey through Latent Space: Here begins the most stimulating part of the process. I like to think of latent space as an infinite map where all possible images are connected to each other. Exploring that map is like traveling without a fixed destination: I start from an initial idea, and while testing iterations, I discover unexpected paths. I can deviate, return, or take a completely new course, always guided by the original emotion. Each generated image is a different stop on that creative journey.
● Image Selection: Of all the variants, I choose the one that is not only “correct” on a technical level but the one that best captures the initial emotion or idea. It’s a process very similar to editing a photography series: it’s about finding the image that sustains the message.
● Editing / Post-production: Afterward, I work on the image in post-production as I would with a photograph: tone, color, contrast, texture, depth perception. This part is key to unifying the piece with the rest of my work so it isn’t perceived as something isolated.
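The concept-to-iteration loop she describes can be sketched in code. Everything here is a stand-in, not her actual tooling: `build_prompt` assembles the components she names (lighting type, style, color, angle), and `generate_image` is a stub for whatever model or ComfyUI graph would actually run.

```python
def build_prompt(concept, lighting, style, palette, angle):
    """Assemble a prompt from the components named in the workflow:
    lighting type, style, color palette, and camera angle."""
    return f"{concept}, {lighting} lighting, {style} style, {palette} palette, {angle}"

def generate_image(prompt, seed):
    # Stub for a real model call. It records what would be generated
    # so the iteration/selection loop can be shown without a model.
    return {"prompt": prompt, "seed": seed}

prompt = build_prompt(
    concept="empty hotel corridor at dawn",
    lighting="soft diffused",
    style="analog film",
    palette="muted pastel",
    angle="low wide-angle",
)

# First iterations: vary only the seed, keep every candidate for review.
# Selection then happens by eye, against the initial emotion or idea.
candidates = [generate_image(prompt, seed) for seed in range(4)]
```

The point of the sketch is the separation of concerns: the concept is fixed up front, the model supplies variation, and the human selection step stays outside the loop.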
How do you build prompts in a way that keeps the results personal and aligned with your aesthetic rather than generic?
What keeps my results personal isn’t the prompt, but my perspective (“mirada”) behind the whole process and the journey. Honestly, the prompt isn’t the most important part at all; it’s just an initial point, a rough sketch. As a whole, that combination—my way of seeing, my visual language, and my images as a guide—prevents the result from being generic and ensures each piece maintains coherence with my work.
Which AI models, software, or platforms are central to your process today, and why?
My main tool is ComfyUI. It could be described as the “operating system” for AI-generated imagery: an environment where I can connect different models and, at the same time, control in detail every parameter involved in constructing an image or video. In my personal work, I sometimes allow for more experimentation and chance, but in commercial projects, that level of control becomes essential: being able to adjust specific details, respond to concrete changes, and guarantee an exact result is part of the professional process.
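ComfyUI can also be driven programmatically: a locally running instance accepts a workflow graph as JSON over HTTP. The sketch below assumes a default install listening on 127.0.0.1:8188; the two-node graph, its node IDs, and its inputs are placeholders (a real graph exported from the interface contains many more nodes), so this shows the shape of the call, not a working pipeline.

```python
import json
import urllib.request

# Illustrative fragment of an API-format workflow graph: each key is a
# node ID, each value names a node class and wires up its inputs.
# "model.safetensors" and the prompt text are placeholders.
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "model.safetensors"}},
    "2": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "spectral architecture, soft light",
                     "clip": ["1", 1]}},  # link to output 1 of node "1"
}

payload = json.dumps({"prompt": workflow}).encode("utf-8")

def queue_prompt(server="127.0.0.1:8188"):
    """POST the graph to a locally running ComfyUI server for execution."""
    req = urllib.request.Request(
        f"http://{server}/prompt",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)
```

Driving the graph this way is what makes the control she mentions reproducible: the same JSON, resubmitted with one parameter changed, yields a precisely adjusted result.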
How do you manage resolution, texture, and fine detail — common limitations of AI-generated images?
I generate the base image, and when I find the version that works, I upscale it to gain quality, sharpness, and realism. From there, I intervene manually in post-production: I adjust texture, light, and detail. That final work is what ensures the image has visual coherence and an organic feel, far removed from the artificial.
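The upscale step can be illustrated independently of any particular model. The toy nearest-neighbour resize below, in pure Python on a grid of grayscale values, is only the naive baseline — production pipelines replace it with a learned upscaler (ESRGAN-family models are a common choice), which synthesises plausible detail rather than repeating pixels — but the interface is the same: small image in, larger image out.

```python
def upscale_nearest(pixels, factor):
    """Enlarge a 2-D grid of pixel values by an integer factor using
    nearest-neighbour sampling: every pixel is repeated factor times
    horizontally and vertically."""
    out = []
    for row in pixels:
        wide = [p for p in row for _ in range(factor)]  # repeat columns
        out.extend([list(wide) for _ in range(factor)])  # repeat rows
    return out

# A 2x2 "image" of grayscale values, doubled to 4x4.
tiny = [[10, 20],
        [30, 40]]
big = upscale_nearest(tiny, 2)
```

A learned upscaler would instead infer the missing high-frequency texture, which is why the manual post-production pass she describes afterwards still matters: it unifies that generated detail with the rest of the image.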
What are the main technical pitfalls that newcomers should avoid when integrating AI into a photographic workflow?
One of the most common mistakes is trusting completely in what the AI generates and setting aside one’s own judgment. It also happens that, with so many options available, it’s easy to lose aesthetic clarity: either by adding too many effects or by following descriptions too literally. It’s not that these elements are a problem in themselves, but they can divert attention from what matters most: the intention of the image. The key lies in maintaining a clear direction, working with patience, and remembering that AI is a tool within a broader creative process.

What balance do you aim for between AI-generated elements and traditional photographic or post-production work?
AI intervenes in my process, but it doesn’t define the image. What really sustains each piece is my gaze as a photographer: the way I understand light, color, space, and the themes I explore. I don’t delegate those decisions; they come from years of training and an eye trained to build images with intention. AI helps me generate initial material or open up new visual possibilities, but I am the one who marks the direction from the start and who, later, gives cohesion to the work in post-production. In short, AI expands the process, but the visual identity comes from me and is present at every stage, not just in the final finish.
Do people ever misunderstand what “AI photography” means in the context of your work?
Yes, it happens frequently. When they see a generated image that preserves certain photographic codes—the light, the composition, the depth—they tend to interpret it as a real scene, as if it had existed in front of a camera. But in my case, there is no captured instant: there is an imagined scene. AI allows for building images that lean on the visual language of photography but don’t have a tangible origin. It’s a way of giving visual form to that which normally stays in the mind: ideas, emotions, fuzzy memories… or even something similar to capturing a dream right before it fades away.
Your images have a very distinctive atmosphere. How do you maintain a coherent style when working with tools that can sometimes be unpredictable?
The coherence of my images comes from my background and my way of understanding photography. Light, color, composition, and atmosphere aren’t random elements to me: they are decisions I’ve been refining for years. Although AI can be unpredictable, my criteria act as a constant filter. I review, adjust, and select until the image breathes my language.
How do you preserve the emotional and sensory dimension of photography when using algorithmic tools?
I try to make emotion the starting point and not the result. AI helps me build atmospheres, but I am the one who decides where to place the silence, the tension, the humor. I think of each image as a small space where the viewer can stop and feel.
Could you describe a personal project where AI significantly transformed your initial idea, and explain how you built the final image?
It’s not something that happened in just one specific project, but in all those where I allow myself to experiment. Sometimes I start with a very basic guide and travel through the latent space down unexpected paths. That openness usually leads me to results that are much more creative than I imagined. It happens especially in images with a humorous or surreal tone, where there is room to play and let more “out of the box” visual solutions appear.
If you were teaching someone new to AI, what is one concrete technique you would recommend they start with?
I would invite anyone starting out to create a small “style moodboard”: compile 5 of their own images that represent their visual taste and use them as a recurring reference. This way, you learn from the beginning to build aesthetic coherence and not depend solely on what the model proposes. But, above any technique, I would invite them to maintain an attitude of play: experiment and let yourself be surprised. Creativity with AI is learned by playing.
How do you see AI reshaping the role of photographers in the coming years?
I think it will expand the photographer’s role, not replace it. It will turn us into designers of worlds, not just images. The camera will no longer be the only medium to capture, but one of many to build. The essential thing will continue to be the gaze.
Are there new tools or technologies you’re excited to experiment with next?
I am especially excited about the arrival of real-time world-generation tools. We are already seeing advances in video models, getting closer to that idea, but we still lack the necessary immediacy. I imagine a very near future where we can move from one image to another—from one visual universe to another—fluidly, modifying any element instantly. When that happens, the creative experience will be almost like inhabiting a dream that we can design and control in the first person. A space where not only images are generated, but entire realities to interact with. And when that technology arrives… I’ll probably get lost in there, so if you look for me, you know where to find me.
Looking ahead, what place do you imagine AI will have in your artistic practice in five or ten years?
I imagine a future where “traveling” through latent space isn’t just a metaphor, but a literal experience: being able to move from one image to another, from one atmosphere to another, within a visual universe in constant transformation, where any element can be modified instantly. We won’t just generate images anymore: we will inhabit them. That leap will turn creation into something immersive, almost like daydreaming and having total control over the dream. Current tools are already pointing that way, but when that technology matures, it will completely change my process… and probably my relationship with the image too. The exciting thing is that it’s hard to imagine how far we’ll go; the pace of evolution is dizzying. But if one thing is clear to me, it’s that I’ll be there, exploring it, playing… and happily getting lost in those worlds.
More information about Hana Katoba is available on her website and Instagram. Hana Katoba is represented by FMA Le Bureau.