Contesting official accounts of state violence
An excerpt from Investigative Aesthetics: Conflicts and Commons in the Politics of Truth by Matthew Fuller and Eyal Weizman.
Violence lands. Hundreds of troops break into a city. At that moment the city starts recording its pain. Bodies are torn and punctured. Inhabitants memorise the assault in stutters and fragments refracted by trauma. Before the Internet is switched off, thousands of phone cameras light up. People risk their lives to record the hell surrounding them. As they frantically call and text each other, their communication erupts into hundreds of star-shaped networks. Others throw signals into the void of social media and encrypted messaging, hoping they will be picked up by someone. Meanwhile, the environment captures traces. Unpaved ground registers the tracks of long columns of armoured vehicles. Leaves on vegetation receive the soot of their exhaust while the soil absorbs and retains the identifying chemicals released by banned ammunition. The broken concrete of shattered homes records the hammering collision of projectiles. Pillars of smoke and debris are sucked up into the atmosphere, rising until they mix with the clouds, anchoring this strange weather at the places the bombs hit.
Each person, substance, plant, structure, technology and code in this incident records in a different way. Some traces accumulate so fast and haphazardly that they erase previous traces. These records, traces of destruction and pain, are both modes of aesthetic registration and modes of erasure. When they remain, such traces may, given the right techniques, be read for different purposes: some for furthering violence, others for opposing it or simply to stay alive somehow. Those that are obfuscated or repressed are more difficult to access.
Those delivering violence have recourse to higher-resolution sensors: cameras on drones, planes and satellites to record clashes from multiple perspectives. Their overwhelming power relies on weapons, but also on access to information – gathered as floods of images and signals – and the means of working through these streams of data, using AI for interpretation and prediction. While massive collection takes place, such violence also consists of a simultaneous attempt to impose on those experiencing the attack a uniform and impenetrable block of information, one that reads as information from one side and as noise on the other.
That difference between signal and noise will also be used to allow officials of all kinds to lie about what happened, spread disinformation, marshal or manipulate data, and deny the most basic of facts. Later, those experiencing or resisting the violence will testify. Perhaps a soldier will have the guts to reveal what they or their comrades have done – either publicly or by leaking secretly downloaded files. Another might do so accidentally, while bragging on their social network.
However, there can also be a counter-reading, a counter-narrative that gathers all these different kinds of trace, and is attuned to their erasure. Reworking what sometimes are merely weak signals – forming a composite from all of these recordings – can show what happened and what political conditions gave rise to it. Interpreting weak signals and faint traces is complicated as only an act of close reading can be. Weaving these signals in relation to each other is not only a scientific or technical endeavour but a cultural, ethical and political one. It involves wide and varied ways of paying close attention to the accounts of people, matter and code. To those experiencing violence first-hand who lead the struggle for something approximating justice, the question is always how finding the truth about current events may also reveal the shadow of long-term historical processes, and how telling history from the lived perspective of present violence can support their political struggles. To be effective, contesting official accounts of what has happened is a question of investigation, of history and of solidarity, and such telling is only as good as the political process of which it is part.
Towards the end of the second decade of this century the flood of images from significant incidents turned into a torrent, and there were simply too many for human researchers alone to sieve through. One of the differences between the Hong Kong protests of 2014 and those of 2020, and between two of the major phases of the Black Lives Matter protests in the USA (2015 and 2020), is that hundreds more hours of video were posted online and streamed by participants. Working at the invitation of protest organisations in both places, Forensic Architecture [a research group led by Eyal Weizman] began to use artificial intelligence-based machine vision to automate the act of seeing.
These programmatic eyes had to be taught to see by being shown thousands of labelled and annotated images. The repetitions of training a neural network on datasets of images so that it could learn to differentiate implied a kindergarten-style reading of images: ‘This is a bomb. This is a tank. But this is not a police-grade tear gas canister.’ Using machine-learning tools meant having to account for the ways these technologies come with their own baggage and biases. Recognising the quirks of one medium, and working with and against them by combining them with those of another, is one way of building in the necessary introspection. As such technologies increasingly become the medium in which the investigation takes place, introspection – a critical examination of the way a digital tool operates, its own aesthetics – becomes more important. More broadly, though critical and investigative work on software intensified in the last decade of the twentieth century, it has become an increasingly crucial field of investigation as more social and economic processes move online.
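The kindergarten-style logic of learning from labelled examples can be caricatured in a few lines of code. This is a minimal sketch, nothing like the machine-vision systems described above: the feature vectors and labels are invented for illustration, and a toy nearest-centroid rule stands in for a trained neural network.

```python
# Toy sketch of supervised classification from labelled examples.
# Feature vectors and labels are invented for illustration; real
# machine-vision pipelines learn from thousands of annotated images.

from math import dist

# "Training set": each item pairs a feature vector (two hand-made
# numbers standing in for image features) with a human-given label.
labelled_examples = [
    ((0.9, 0.1), "tear-gas canister"),
    ((0.8, 0.2), "tear-gas canister"),
    ((0.1, 0.9), "other object"),
    ((0.2, 0.8), "other object"),
]

def centroids(examples):
    """Average the feature vectors for each label."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: tuple(v / counts[label] for v in acc)
            for label, acc in sums.items()}

def classify(features, model):
    """Assign the label of the nearest centroid."""
    return min(model, key=lambda label: dist(features, model[label]))

model = centroids(labelled_examples)
print(classify((0.85, 0.15), model))  # prints "tear-gas canister"
```

The point of the caricature is that the machine knows nothing but the labels it was shown: whatever biases enter through the annotation enter the classifier too.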
For instance, the artist Trevor Paglen worked with critical-AI scholar Kate Crawford on the ImageNet Roulette project to illustrate the ways in which images are labelled and assigned meaning in a database used by many AI systems, often reproducing the biases and racist attitudes of some of those doing the tagging: low-paid and often disenchanted crowd-sourced workers. Images thus processed would be labelled with categories such as ‘rage’, ‘deviance’, ‘radical’ or ‘risk’ in ways that were culturally, and often racially, inflected.
In 2019, also responding to ImageNet, programmer and artist Nicolas Malevé produced work exposing the curious structure of this database, showing all 14 million photographs at a rate of ninety milliseconds per image over two months, pausing the incomprehensible torrent every now and then to show a randomly selected image and its metadata. Images were often tagged with bizarre connections by ImageNet workers, linking image data to spur-of-the-moment misjudgements. These would be mildly significant glitches were they not embedded in a system used by other software to develop ‘solutions to societal problems’.
These examples show that, for investigation, the technological context changed, as did the volume of images, as more and more footage from devices such as smartphones and from streaming services came online. But crucially, the organisational context also changed. It was not only that the field of expertise radically shifted and opened up, but also that investigations had become a more collective endeavour, undertaken by extensive networks based on intense collaboration and strong solidarity.
Such networks form commons that might include groups of different nature and standing that might previously have been thought of as incompatible: the community who had experienced violence, who recorded their local environment and often led the struggle; the people risking their lives to take and upload such images; citizen and self-taught journalists, bloggers, image- and film-makers, artists and architects. These are allied with a remote network of volunteers, activists and human rights lawyers. In turn, open-source researchers scattered around the world pore over images in search of clues. Academics of different kinds, such as archaeologists, oceanographers, historians and scientists, working voluntarily with them, also collaborate with film editors, artists and curators to produce and display the cases.