
Speculative Violence as IdeoShock: How AI Visually Scripts the Unthinkable

Donatella Della Ratta explores how AI-generated images prepare the public to accept violent political agendas.

Donatella Della Ratta, 15 July 2025


Who would have guessed, only a few months ago, that an AI-generated 'fantasy', dismissed as dystopian kitsch and an over-the-top expression of political unorthodoxy, would so rapidly take shape as a tenable geopolitical strategy? When virally shared by US President Donald Trump in February 2025, the surreal “Riviera of the Middle East” clip portraying Gaza as a beachfront utopia of corporate peace and real estate investment was met with outrage, satire, and disbelief. With its glossy towers, palm-lined avenues, and Trump himself sipping cocktails with Netanyahu while sunbathing on a Gaza beach, the video was too quickly written off as propaganda, provocation, or simply another grotesque (and fun) meme in the age of shock politics.

But, as the Israeli Prime Minister's July visit to Washington has shown, the outlines of that fantasy are becoming disturbingly real. Just before Netanyahu's departure, Israeli Defense Minister Katz announced that the IDF had been instructed to prepare for the relocation of at least 600,000 Palestinians into what he described as a “humanitarian city”, complete with entry screenings and movement restrictions. Simultaneously, Reuters reported on a $2 billion blueprint to construct “Humanitarian Transit Areas”, large-scale camps inside and potentially outside Gaza, submitted by the US-backed Gaza Humanitarian Foundation to the Trump administration. Echoing Netanyahu's remarks to the press, the plan envisions these camps as transit hubs from which Palestinians might “relocate if they wish,” under what the Israeli Prime Minister described as their own “free will.” Tony Blair and the consultancy firm BCG are also reportedly involved in further developing Trump's Gaza plan, though Blair's institute maintains that it has merely joined “in listening mode,” under the guise of the former UK Prime Minister's interest in building “a better Gaza for Gazans.”

This language only rhetorically conceals what's underway: the logistical staging of ethnic cleansing and the mass displacement of the Palestinian people. The truth is, the AI video has never operated as critique, cautionary tale, or even simply “just satire”, which its makers have indicated was its original purpose before the US President decided to spread it on a global scale. Rather, it functioned as a prophecy. Generative AI acted as a quintessential visual infrastructure, a prophetic apparatus making visible what had previously been politically unspeakable and unthinkable. By rendering such visions in photorealistic form, AI granted them legitimate entry into the global public imagination. What once could hardly be whispered was made visually plausible, and in aesthetically pleasing form.

Under the guise of these apparently harmless pictures, so cheap in their digital polish (too crisp, too smooth, too obviously computer-generated), AI becomes the conduit for what I call speculative violence: a form of anticipatory harm in which synthetic images pre-stage acts of destruction, erasure, and forced displacement before they even occur. By shaping public perception and rendering contentious outcomes possible by virtue of visualization, these images gradually condition political and emotional acceptance, softening public resistance. They operate at the level of what Walter Benjamin called 'the optical unconscious', referring to dimensions of reality that escape human perception and can only be revealed through technological mediation. Just as psychoanalysis makes visible the psyche's hidden drives, photography uncovers unnoticed structures in the visual field. Benjamin believed that the photographic camera awakens a mimetic power which is not simply about imitating the real. Rather, it is a sort of sensory-affective capacity to register likeness not in appearance, but in resonance, rhythm, and affinity. Similarly, generative AI does not mirror the real, but extracts patterns and latent affinities from data, recomposing them into plausible visualizations of the future. Like the photographic gaze, AI does not just show what is, but hints at what could (or should) be by activating the imagination's latent potential to recognize and accept suggested combinations as meaningful.

It wasn’t Trump himself who commissioned the infamous “Riviera of the Middle East” video. Its makers—L.A.-based Israeli artists and AI entrepreneurs Solo Avital and Ariel Vromen—claim to have no idea how it ended up on the US president’s feed. But does that even matter? Whether it was meant as satire, critique, or provocation is beside the point. What really counts is that the grotesque video was viewed over 10.5 million times on Instagram and shared some 2,400 times on Trump’s Truth Social by the morning after it was posted, becoming “one of the most viral videos ever”. What really counts is that we have seen these images, and that they have stayed emotionally undigested, too problematic to be consciously acknowledged, yet impossible to unsee.

Virality has further enhanced their normalization, feeding them into the collective eye and naturalizing them. Their spreadability is unstoppable: because they do not depict explicit horror, they do not trigger social media alerts or algorithmic censorship. This is how speculative violence operates: gently, under the radar. It does not explode, does not bleed, does not scream, yet it seeps into the unconscious, preparing the mind for a future that has not yet arrived but has already been seen. It presents the destruction before it unfolds, wrapped in high-definition serenity and disguised as an image of futurity. The “Riviera of the Middle East” is not an AI fantasy. It is a violent and gloomy prefiguration of what corporate-imposed peace will be about: genocide and ecocide, forced displacement, ethnic cleansing, and the replacement of a people with a consumer-friendly mirage.

French far-right philosopher Guillaume Faye called such provocations ideoshocks, deliberate “ideological electroshocks” designed to crash through the “soft ideas” of liberal democracy. In Archeofuturism, his 1998 manifesto, Faye called for the return of “archaic values” through a techno-futurist vision built just for the few, cleansed of humanitarianism and egalitarianism. His declared goal was to prepare minds for the collapse of the old world and the birth of a new order, done with modernity and its hypocritical values.

Back in September 2024, I encountered a clip that felt very much like an ideological electroshock. Scrolling through my social media feed, I saw al-Aqsa mosque, Islam's third holiest site, wrapped in flames. For a few long moments, I believed it. Only after replaying it several times did I realize that it was AI-generated. The clip had been circulated by Zionist channels and quickly debunked by fact-checkers. After that first clip, I began encountering more: some showed al-Aqsa burning again; others went further, depicting the construction of the Third Temple on the mosque's historical site on the Temple Mount in Jerusalem. The “Third Temple” refers to a hypothetical future reconstruction of a Jewish temple in Jerusalem which, according to this eschatological vision, would pave the way for the coming of the messianic age.

But these cheaply made photorealistic clips are much more than AI crap. They are ideoshocks. They are detonators of speculative violence, just like Trump's “Riviera of the Middle East”. In the altered political landscape after October 7, where Israel's far right has surged and eschatological ideas have entered mainstream discourse, such visions no longer seem far-fetched or mere fringe theological dreams. Violations of the status quo in place in Jerusalem since its occupation in 1967 have dramatically escalated since October 7, 2023. On May 25, 2025, marking the anniversary of the city's takeover, thousands of Israelis joined a state-funded march through the Muslim quarter of the Old City chanting racist slogans, including “Gaza is ours”, while far-right minister Itamar Ben-Gvir led public prayers atop the Temple Mount, yet another blatant violation of the status quo. And on July 7, as Katz outlined his “humanitarian city” and Netanyahu flew to Washington to advance Gaza's deportation plan, another video emerged, this time filmed, not generated, in which settlers, protected by Israeli police, openly performed Talmudic rituals in the al-Aqsa courtyard.

Against this backdrop of real-world events, the AI-generated images of burning mosques and rising temples no longer read as digital nonsense or divertissement. They function as hyperstitions, speculative projections that help bring about the very futures they depict. Coined by the Cybernetic Culture Research Unit in the 1990s, the term anticipated this phenomenon, except that today's generative AI gives such fantasies a visual immediacy and emotional force that were once impossible. In an AI-driven media ecology, speculative violence translates hyperstitions into ideoshocks, training the public eye to accept radical transformation as normal and legitimate.

The synthetic appears real, the future inevitable. And the end goal behind them advances not through politics first, but through ideological shocks visually rendered in the aesthetic language of our time. The absence of visible violence is not its negation, but its concealment. Beneath the artificial waterways of Gaza's imagined Riviera, and under the polished stones of a synthetic Third Temple, lies the hidden harm: displacement, erasure, domination. Rather than laugh off these images as provocation or kitsch, we must see them for what they are: not warnings, but blueprints. Not fictions, but premonitions. Not satire, but a declaration of war.