
Fanon and (digital) self-determination

We need to wrest digital self-determination from surveillance capitalism and the state. How can the work of Frantz Fanon help us in this struggle?

Lizzie O'Shea 9 July 2019

Fanon was committed to the idea of self-determination. How can his thinking influence our fight for digital self-determination? An excerpt from Future Histories by Lizzie O'Shea.

Future Histories: What Ada Lovelace, Tom Paine, and the Paris Commune Can Teach Us about Digital Technology is on sale for 50% off until Tuesday, July 16th at 11:59PM EST as part of our Beach Reads sale!

Fanon wrote about seeing himself, a black man, in a world ordered by white supremacy. He explained how his identity was not his own, how the white man had “woven me out of a thousand details, anecdotes, stories.” He viewed his own self through the prism of white supremacy:

I was responsible at the same time for my body, for my race, for my ancestors. I subjected myself to an objective examination, I discovered my blackness, my ethnic characteristics; and I was battered down by tom-toms, cannibalism, intellectual deficiency, fetishism, racial defects, slave-ships, and above all else, above all: “sho good eatin’.”

He was, in short, a collection of stereotypes and assumptions, drawn from past experiences of that same system of supremacy. The experience of being black was, by virtue of being black, not an experience. Racist logic rendered it nonexistent—defined by others, the subjective experience of the black person could not be bridged or translated into the real world. There was no agency in how his identity was determined, no way to escape the judgments about him, no glimmer of autonomy. His identity was “fixed”: “I am overdetermined from the exterior. I am not the slave of the ‘idea’ that others have of me but of my appearance.” This is the theoretical basis of colonialism, and it is how the idea of race is socially constructed.

These ideas also apply to the experience of living in the digital age. How we appear online—our abstract identity—is being generated and fixed by the data mining industry. It is shunting us into reputation silos and being used by the government to make decisions about us. We must urgently find ways to live free from presumptions and categories, ways to limit how the twin powers of surveillance capitalism and the state seek to define our sense of self.

Digital technology already exists that allows us to impose some limits on this power. We can evade some of the invasive practices of surveillance that define our abstract identities. Technologically, we are already able to communicate in secrecy and anonymity: we can use encryption and “The Onion Router” (Tor). Encryption especially has become commonplace, as a standard requirement of mail and web browsing after the revelations of NSA surveillance by Edward Snowden. For better or worse, this is also a function of capitalism: because encryption underpins the operation of the international banking system, its widespread adoption is vital to the economy. Similarly, Tor is (at least in part) a product of the military and its need to communicate anonymously. To put it differently, privacy-enhancing tools often serve multiple purposes, which is why they are improving and getting easier to use. But there is more political mileage to be gained in this space for the left if we put our minds to it.
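
To make this concrete, here is a minimal, purely illustrative sketch of symmetric encryption, assuming Python and the third-party cryptography package (an assumption, not anything discussed above): a message locked with a shared key is unreadable to anyone who intercepts it without that key.

```python
# A minimal illustration of symmetric encryption, assuming Python and the
# third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()    # a secret shared only between the correspondents
cipher = Fernet(key)

token = cipher.encrypt(b"meet at the usual place")
print(token)                   # unreadable ciphertext to anyone without the key
print(cipher.decrypt(token))   # b'meet at the usual place'
```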

There is still work to be done in ensuring that this kind of technology is used more widely, to resist the power of companies and governments to define our sense of self. Technologists need to build bridges with everyday people and help popularize these tools, with empathy and humility. People without a technological background need to learn more about what they are up against. The left’s job is to build communities and organizations where people can both learn and teach tactics for digital self-defense.

My point here is that, from a technological perspective at least, certain kinds of privacy are possible—namely secrecy and anonymity. But to some degree this misses the point: it means very little if people still spend significant time on social media sites, apps and platforms that collect and use data in ways that undermine the usefulness of these tools. Most people do not face challenges of the kind Edward Snowden did after he leaked NSA documents—very few of us will find ourselves in situations where we are trying to communicate privately while under direct scrutiny by the world’s largest intelligence agencies. Most people have to live in the real world, where many everyday transactions require giving up some privacy. This is why many of us, when confronted with the enormity of state surveillance and the monumental effort it takes to personally circumvent it, often give up. Why not give Facebook your data, if all that happens is you get better ads? Why bother with slow and fiddly browsers and complicated passwords if you’re a nobody the government is unlikely to care about? What if you cannot afford to learn about various privacy tools that seem designed only for the tech-savvy? For these reasons, people often end up tolerating surveillance as a neutral or relatively benign phenomenon, or as a tool used on individuals who warrant being watched—people who have something to hide. Privacy gets depoliticized. Its supposed importance starts to wear thin.

The kind of privacy that can be protected by these digital tools reflects only a thin slice of the kind of freedom we should be able to enjoy in digital society. We need to reframe how we understand the problem of surveillance at a remove from the purely personal or straightforwardly technical right to privacy. We also need to junk the associated, underlying premise that it is necessary to give up privacy in the interests of security.

With reference to his experiences in Algeria, Fanon wrote about how the “degrading and infantilizing structures” of colonial relations disappeared during the independence struggle. “The Algerian has brought into existence a new, positive, efficient personality, whose richness is provided … by his certainty that he embodies a decisive moment of the national consciousness.” Fanon used this idea metaphorically: the new personality of the Algerian was autonomously self-defined. By opening up the idea of social relationships outside oppressive colonial traditions, the struggle in Algeria created space for a whole new sense of personality to come to the fore.

In Algeria, this took a specifically technological form. Fanon wrote about how prior to the war of independence, radio had “an extremely important negative valence” for Algerians, who understood it to be “a material representation of the colonial configuration.” But after the revolution broke out, Algerians began to make their own news, and radio was a critically important way of distributing it cheaply to a population with minimal literacy. Radio represented access not just to news but also “to the only means of entering into communication with the Revolution, of living with it.” Radio transformed from a technology of the oppressor into something that allowed Algerians to define their own sense of self, “to become a reverberating element of the vast network of meanings born of the liberating combat.”

To reapply this in a modern context, therefore: digital privacy—and its philosophical twin, freedom—involves anonymity, secrecy, and autonomy. Autonomy is not just evading surveillance. Autonomy means the freedom to act without being controlled by others or manipulated by covert influences. This kind of freedom is not only jeopardized by spooks and cops. It is also being eroded by the practices of technology capitalism. Our understanding of privacy needs to engage with the imprint we leave on the web, which is collected by companies, categorized and manipulated. Just as the Algerians took control of their sense of self using the technology of radio, so too can we do something similar with digital platforms today.

Bernard Harcourt talks about us all having “digital doppelgangers,” or algorithmically matched versions of ourselves. They follow us around digital spaces, reminding us of what we have done and channeling us into a particular future. This process of abstract identification is built on assumptions that are path-dependent, accumulating data mindlessly, beyond our control—and increasingly an inescapable part of the modern experience of being human. People find dates online, they keep in touch with relatives using social media, they visit health sites to research embarrassing conditions, they buy things, they apply for loans using the web and perhaps miss a payment or two. We cannot expect the problem of diminishing privacy to be resolved by social abstinence and daily inconvenience. For privacy to be meaningful, it needs to be about winning back control over our own sense of self—demanding our rights collectively. It needs to drive a stake through the heart of these zombie digital doppelgangers.

A better way to understand what we mean when we talk about privacy, then, is to see it as a right to self-determination. Self-determination is about self-governance, or determining one’s own destiny. Its origins as a legal concept stretch back to the American Declaration of Independence, which states that governments derive “their just powers from the consent of the governed.” It has always featured as a right of some description in international law, usually in the framework of nationhood and governance of territory. But with the explosion of postcolonial struggles in the latter half of the twentieth century, it gained new meaning—not least in the struggle for Algerian independence that Fanon was involved in. In South Africa, Zimbabwe (then Rhodesia), the Democratic Republic of Congo and elsewhere, mass social movements struggled for recognition outside the confines of colonial settler states. Later these places often found themselves burdened with postcolonial systems that reproduced familiar hierarchies. The right to self-determination took on a renewed and deeper urgency, raising questions about how to empower people culturally, socially and politically, outside of the European ideals that offered lofty language but had also legitimized colonialism.

Self-determination in this latter sense is the kind of thinking we need to take into the twenty-first century. Rather than seeing it solely as about the right to vote or national governance, it is a right that can find new meaning with the help of digital technology. The ideals of nationhood have (justly enough) less relevance in digital environments. Self-determination is both a collective and individual right, an idea of privacy that is much more expansive and politically oriented. It is about allowing people to communicate, read, organize and come up with better ways of doing things, sharing experiences across borders, without scrutiny or engineering, a kind of cyberpunk internationalism. If we think about the practice of abstract identification discussed in the earlier parts of this book—the practice of weaving us out of a “thousand details, anecdotes, stories,” in Fanon’s words—we need to think about how we can wrest back control of these processes, to give people the power to define their own identities, a form of personal data sovereignty. We need to transform the technology of surveillance and oppression into a tool of liberation for defining ourselves autonomously.

So what could digital self-determination mean, in practical terms? The first demand must be that public and private actors be held to meaningful standards of disclosure about how information is collected, stored and used. Everyone should have the right to know what is known about them. Everyone should have the right to see the data that go into these processes of identification and, in doing so, the power to alter them. Self-determination ought to include the right to meet your digital doppelganger, as a way of understanding yourself and regaining control of your identity. Such transparency would mean that companies could more readily be held to legal standards of nondiscrimination, much in the way that banks were the focus of community activism and legislative reform to address redlining in the mid-twentieth century.

We should consider resetting the legal nature of our relationship with companies that collect and hold data from and about us. Law professor Jack M. Balkin argues that we should think of these companies as holding our data in the same way a doctor or lawyer would—that is, by virtue of a relationship of trust. “Certain kinds of information constitute matters of private concern,” writes Balkin, “not because of their content, but because of the social relationships that produce them.” He argues that we should think of these companies as information fiduciaries, and just as we would not allow our doctor or lawyer to sell information about us to data brokers, the same restrictions should apply to companies. Under this area of law, fiduciaries owe a duty of care and a duty of loyalty, and breaches of these duties are penalized by courts. The kind of information held about us by companies is personal, and potentially damaging if made public; it ought to be subject to similar regulation.

Companies should not be permitted to sell to third parties the data we give them, and we should not be able to contract out of this via terms of service. It would be illegal for a doctor to sell medical information, even if the patient consented. The same ought to apply to our abstract identities. Such a measure could strike at the heart of surveillance capitalism’s business model and all the associated technologies of state surveillance that draw on the data collected as a result of it.

Recasting our personal legal relations with the data mining industry is an important first step. But it must be more radical than a mere renegotiation of rights. “The Algerian combatant is not only up in arms against the torturing parachutists,” wrote Fanon. “Most of the time he has to face problems of building, of organizing, of inventing the new society that must come into being.” It is not enough to address wrongdoing or limit the worst excesses: we have to be more ambitious in our thinking. Digital self-determination also asks us to rethink how we design and build our information systems in aid of a more democratic society. It forces us to question who we allow to be the gatekeepers of our knowledge and how this power could be redistributed. We have to actively build, organize and invent a new digital society.

The systems as they are currently designed are vulnerable. The centralized approach to managing personal data underpins surveillance capitalism. It allows companies to accumulate data and draw conclusions that can be used for segregated marketing. This is not a consumer choice problem: the terms of service are imposed upon us rather than freely entered into.

But there are alternatives. It is possible to design data storage systems that are decentralized and give individuals control over their personal data, including its portability, and the ways in which it is shared with third parties. One example of this idea in action is Solid. Analogous to a post office box, Solid aims to create pods of data. Instead of handing over data to companies wholesale, either directly or by granting permission to access devices, the idea is that users give permission to certain applications to access certain information, and that information can be updated as necessary. The outcome is that an individual can create their own personalized terms of service. These kinds of systems allow data to be stored anywhere—in the cloud or on a hard drive—while also giving the individual full control over it. It is an example of data sovereignty—control over one’s personal information, creating the proper basis for meaningful consent. Other examples of projects that center on the interests of users include FreedomBox, a router with open source firmware built in to give users control over and protection for their data, and Diaspora, a decentralized, federated social media platform. Just as Algerians repurposed radio—from a platform of the colonialist project into a tool for liberation—so too can we adapt existing digital infrastructure to decentralize data flows and take back power from companies and governments.
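
As a rough illustration of that permission model, the toy sketch below (all names invented for the purpose; this is not the actual Solid API) shows an owner keeping data in a pod and granting each application access to named fields only, with consent that can be revoked at any time.

```python
# A toy model of per-application permissions over personal data.
# All names here are invented for illustration; this is not the real Solid API.

class DataPod:
    def __init__(self, owner):
        self.owner = owner
        self._data = {}     # the owner's information, stored wherever they choose
        self._grants = {}   # application name -> set of fields it may read

    def store(self, field, value):
        self._data[field] = value

    def grant(self, app, fields):
        self._grants.setdefault(app, set()).update(fields)

    def revoke(self, app):
        self._grants.pop(app, None)   # consent can be withdrawn at any time

    def read(self, app, field):
        if field not in self._grants.get(app, set()):
            raise PermissionError(f"{app} may not read '{field}'")
        return self._data[field]


pod = DataPod("amira")
pod.store("email", "amira@example.org")
pod.store("blood_type", "O+")

pod.grant("newsletter_app", {"email"})
print(pod.read("newsletter_app", "email"))    # allowed: the owner said so
# pod.read("newsletter_app", "blood_type")    # would raise PermissionError
```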

These alternatives aim to “re-decentralize” the web—that is, to return to its original architecture, before its centralization into large platforms. These projects make up “a robust and fertile community of experimenters developing promising software” in this space, which involves “deeply exciting new ideas.” To become viable alternatives, these software programs will need to overcome serious hurdles, including proper resourcing and management, as well as mass adoption. But in them we see the building blocks of a new and different way of structuring our online lives that combines security and autonomy, a re-decentralization with respect for privacy baked in.

Electronic medical data could be stored in a similarly decentralized way, rather than on centralized servers run by government authorities. We would need backup processes and secure storage, but such a system reduces the incentive for hackers to steal data for profit (by eliminating the honeypot) and also has the potential to reduce the impact of cyberattacks like the WannaCry ransomware worm. Our data would no longer be concentrated in the hands of a few key players, with access points that can be used to surveil or force us to pay a ransom. We could share only what we wanted to share, on our own terms, and keep our information up-to-date under our own control.

These kinds of decentralized systems require planning. Unlike updates to some other kinds of personal information, updates to medical data must be verifiable—to limit doctor-shopping for drugs, for instance, or to get an accurate picture of events after a particular treatment or a medical mishap. Blockchain technology has great potential in this regard. Blockchain is a digital ledger that records transactions, without storing that record in a central place. Instead the record or ledger is dispersed and saved in multiple places. Blockchain involves creating a chain of data transactions (or blocks), and the evidence of the chain is distributed across numerous computers. The upshot is that there is no central gatekeeper or repository of knowledge, while the distributed nature of the ledger makes it theoretically impossible, or at least immensely difficult, to tamper with.
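
For readers who want the mechanics, the simplified sketch below (illustrative only, not any production system) shows the core idea: each block stores the hash of the one before it, so a retroactive edit anywhere in the chain is immediately detectable.

```python
# A simplified hash-chained ledger: each block records the hash of the previous
# block, so altering any earlier entry breaks the chain. Illustrative only.
import hashlib
import json

def make_block(record, prev_hash):
    body = {"record": record, "prev_hash": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

def verify(chain):
    for prev, curr in zip(chain, chain[1:]):
        body = {"record": curr["record"], "prev_hash": curr["prev_hash"]}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if curr["prev_hash"] != prev["hash"] or curr["hash"] != recomputed:
            return False
    return True

chain = [make_block("genesis", "0")]
chain.append(make_block("2019-07-01: prescription issued", chain[-1]["hash"]))
chain.append(make_block("2019-07-08: follow-up consultation", chain[-1]["hash"]))

print(verify(chain))                    # True
chain[1]["record"] = "tampered entry"   # a retroactive edit...
print(verify(chain))                    # False: the tampering is detectable
```

In a real system the chain would also be replicated across many machines, which is what removes the central gatekeeper; the sketch shows only the tamper-evidence that chaining provides.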

Perhaps most famously, blockchain is the technical basis of Bitcoin. Because blockchain removes the need for a central ledger (in this case, a bank) by devising a distributed ledger of transactions, it is possible to create a currency without a state. But cryptocurrency is not the only potential application of this technology. Certainly there has been some over-hyping of both bitcoin and blockchain in recent years. But there still remains plenty of scope to use the latter for other kinds of transactions and record-keeping systems.

Provided we can find ways to protect patient anonymity and confidentiality and obtain informed consent, it is possible to imagine medical records being stored in a decentralized way that is also verifiable. Various combinations of these technologies could also allow people to perform many other activities online that involve giving access to personal information. The information could be parceled for sharing with banks and insurers, for example. In other words, digital technology has the potential to give people power over their own data in ways that are accountable and effective.

The WannaCry worm created havoc in numerous universities, health systems and workplaces globally. Nearly a quarter of a million computers were affected in over 150 countries. The experience spotlighted the risks of centralized data on an enormous scale; it is urgent to plan and build systems that are resilient against these risks. Information systems that protect privacy—that give people control over their data—improve security for everyone. We have the technology to begin designing networks that are far more impervious to attack.

So why haven’t these gone mainstream? Part of the reason at least is that the proliferation of technologies like decentralized data pods and blockchain undermine the ability of technology capitalism to monitor and profit from our digital behavior, and they limit the capacity of governments to tap into this infrastructure for their own surveillance purposes. In other words, this kind of data diffusion challenges the two central repositories of power in our society: capital and the state. The re-decentralization of the web may be a technological design issue, but it will only be achieved if we understand it as a political objective.

Here we encounter a “classic Fanonian theme” as described by Gordon: “There is no reciprocal respect without confrontation.” In the words of Fanon himself: “You do not disorganize a society … if you are not determined from the very start to smash every obstacle encountered.” It is critically important to disorganize the society currently built around digital doppelgangers and segregated marketing. This is unlikely to happen of its own accord. “We do not expect this colonialism to commit suicide,” wrote Fanon. “It is altogether logical for it to defend itself fanatically.” In such circumstances, relying on the benevolence of state and capital to restructure digital society is a mistake. As Fanon concluded: “It is the colonial peoples who must liberate themselves from colonialist domination.”

