Anger, loathing and disbelief
Survey
November 2025

I have always been quite pragmatic about AI imagery and new technologies; I don’t expect the end of photography or the end of humanity any time soon, and I don’t fear a manipulative future in which we can’t trust the photographic image. Social media, filter bubbles and platform capitalism handle that very well, without any need for faked images. Deepfakes have always been the one area I felt was truly dangerous and abusive. But I want to react to two images I came across recently, which triggered something new in me and which, somewhat surprisingly, really upset me.
The first is a deepfake image of an influencer with Down syndrome, used mostly as clickbait for adult sites such as OnlyFans and Fanvue. Such images sexualize and fetishize women with a genetic condition, whose faces are inserted onto the bodies of random “hot” women. We can assume that all the source images were used without the women’s consent, and, as various articles have shown, the same body appears with various faces. The second image is a Ghibli version of Israeli soldiers, published by the official X account of the Israel Defense Forces (IDF).

In April, OpenAI launched a Ghibli filter for ChatGPT without notice to, or consent from, the Japanese studio; the IDF image puts it in the service of trivializing warfare against Palestinian civilians, who have been relentlessly bombed and persecuted since the 7 October terrorist attacks against Israeli civilians. Both examples are deeply problematic on many levels. But while quite different, they both reflect a new, assumed, multi-layered disregard for any kind of moral compass or ethical perspective. In the Ghibli IDF image, the core problem is not even that OpenAI appropriates a cherished aesthetic by an important artist, inspired by films painstakingly created over years by a multitude of animators, nor the ensuing copyright issues. One might remember that big companies sued Napster and torrent sites because, or so they claimed, they were fighting for the rights of artists. Nor is it (only) the normativity and the sexualization of women with Down syndrome that is problematic here. In a broader interrogation of cultural shifts, it is in both examples the multi-layered abuse, the sheer, banal and callous disregard for humanity, the manipulation of both the depicted people and the potential audiences, which puzzles. And which perhaps explains the strength of my reaction: a mix of anger, loathing and disbelief.
The blatant disconnection from reality, mediated through superficial synthetic imagery, crosses many red lines, merely for commercial gain or for war communication dressed up as entertainment. Strangely, these AI images create in me a sense of suffering and unease similar to that provoked by photojournalistic imagery of hurt or killed people, which I have always struggled to look at. As Harun Farocki effectively showed in his film Inextinguishable Fire (1968), society cannot address violence by showing violence; you cannot address the use of napalm in warfare by showing images of burnt victims. I have always been unable to look at violent images; images of drowned migrants or war casualties are unbearable to me. To paraphrase Farocki: if we show you images of violence, we will hurt you, you will feel manipulated, and you will look away. I strongly agree with Farocki, although I hear the argument that it could or should be otherwise, that we shouldn’t show images of violence. Strangely, these disconnected AI images of models with Down syndrome and of Ghibli war communication provoke similar reactions, despite being severed from any indexical imprint. Entirely synthetic though they may be, they do refer to the exploitation of women’s bodies and to the systematic bombing and starving of civilians. And the key issue these examples raise is a significant reconfiguration of the ways we connect images with reality. Photorealism does not seem to be the connecting factor, yet a connection between the real and the image endures; and one conclusion of this reconfiguration seems to be that emotions, subjectivity and affect play a central role.
Do you see a change of medium/category between generative images and what we've previously experienced in the history of art/photography?
One obvious answer lies in the fact that AI images rely predominantly on formal and stylistic features, the technology being unable to understand an image in any manner other than statistically. A direct consequence of these limitations, combined with the actual uses of these images, is the reinforcement of pre-existing forms. Still-prevalent visual norms (whiteness, the objectification of women, alpha males, dominant pop-culture phenomena) are simply reinforced, as Alan Warburton has shown in the context of pop culture and Roland Meyer in the context of platform realism and right-wing propaganda.
The less obvious answer, although a direct consequence of these technological determinations, is probably a new relationship to reality. The idea of photography as an imprint or documentation of something has lost relevance, and images can increasingly be seen as potential, almost self-referential, visual forms. They are created to respond to needs and uses, tailored for specific applications, and thus somehow echo the emergence of stock imagery on the internet in the late 2000s and 2010s. AI images should be assessed by looking at their uses and functions rather than by focusing strictly on their non-indexical nature. The notion of the “potential image” seems a better way of understanding these images through their actual uses and our interactions with them. It seems better suited than notions like latent space, which I feel is just a metaphor reflecting our inability to understand the ways these images are produced.