The Ideology Behind Technology

Last week I found myself in the banquet room of a far-suburban conference and retreat center, giving a talk to an audience made up mostly of software engineers and the sales and marketing people charged with merchandising their work to the world. I was there to present to them an approach to thinking about the so-called smart city – the notion that every object, surface and relation of the contemporary urban environment should be subjected to algorithmic optimization, in the name of efficiency, sustainability and convenience. And it was not going at all well.

I was doing my best to explain to these otherwise superbly capable professionals why I thought the smart city as it exists today was, at best, a misguided ambition, founded on a very shallow understanding of the processes we know to be responsible for generating decent, humane urban environments and richly-textured urban experiences. Now in truth, this is more or less the same talk I’ve given all over the world in the past five or so years, and it’s generally fairly well-received. But on this occasion, even though I’d taken some trouble to modulate my most pointed commentary, it clearly wasn’t making any converts. Early on in my presentation I could tell – as you very often can from the podium, if you’re at all attentive to the nuances of eye contact and facial expression and body language – that I was losing my audience. Not from boredom, thankfully, but from something that looked, from where I was standing, a whole lot more like outright rejection.

During the Q&A session after the talk, which is generally far and away the best bit of any speaking engagement I do, some of the reasons for this audience’s reluctance to cede any rhetorical ground whatsoever came into sharper focus. “It seems to me,” one of my interlocutors began, “that you are saying Technology is Bad, while I think most of the people in this room would agree that Technology is Good.” And he went on to list a few of the reasons he thought that.

How to explain to him that far from thinking technology is Bad, or even Neutral, I find it impossible to discuss “technology” as a unitary, autonomous, reified thing at all? How might I convey to someone unfamiliar with it – on the spot, in real time, neither patronizingly nor in a way that reduces nuanced positions to caricatures – the long debate between technodeterminism and the social constructionism which challenges it? How to account for the relatively recent rise of actor-networky perspectives, that in turn problematize social constructionism’s refusal to grant nonhuman entities agency? Or the McLuhanite perspective, that simply engaging with the world through a particular mediating technology conditions one’s perceptions to a far greater degree than whatever content is nominally being transmitted?

“I get it: you believe in all that revolutionary stuff,” offered another audience member, referring to images I’d earlier thrown up on the screen of large protest crowds in Madrid, Hong Kong and New York; Seoul, Montréal, Istanbul and Brasília. To him, if I can gloss his sentiments fairly, 15M and Occupy, Hong Kong’s Umbrella movement and the million-strong crowds agitating for Korean president Park Geun-hye’s impeachment were evidently very little more than impertinences – something ginned up by disruptive layabouts, inconveniences that kept all right-thinking people from getting to their jobs on time.

How to explain that many of us understand even the most massive protests of this sort as nothing other than the ordinary working of democracy, arguably (depending on your perspective) a sign of the system’s healthy functioning or an indication that the multitude’s righteously transformative fervor has succumbed to mere liberal reformism? And how, further, to convey an understanding so obvious and trivial to readers of this blog that it feels odd even for me to type it out – that whatever else such demonstrations may be, whether prefigurative enactments of a more participatory politics to come, counterproductive venting of a pressure that might otherwise drive genuine change or futile supplication of the powers that be, they are not in any event remotely “revolutionary”?

After three or four such questions – each rife with the kind of mutual incomprehension that crops up whenever two parties use the same words to describe a situation, without quite twigging to the fact that they mean very different things by them – it was painfully clear to me what was going on. I realized that it would be all but impossible for me to have a genuine discussion of this set of issues with this particular group of people without first undertaking a whole lot of what my academic friends, for better or worse, call “unpacking.”

What I really mean, actually, is helping them unpack. Because while both my interlocutors and I brought to our encounter our own body of assumptions, I like to believe mine have been arrived at consciously. And another name for an unconscious body of assumptions is “ideology.”

***

My unhappy encounter with the engineers is a stand-in for why it’s so often difficult to have usefully deep conversations about the networked digital technologies that suddenly seem to be everywhere in our lives, from Snapchat and Fitbit to Uber and Tinder.

Too many of us don’t recognize that the decisions made in the design of these products and services constitute a coherent ideology, let alone wonder where that ideology comes from. Too many of us fail to see these products and services as places where distinct values are being enacted. And as a result, too many of us fail to understand these products and services as contested, or at least eminently contestable, sites. (This includes a surprising number of people who pride themselves on their degree of wokeness in virtually every other facet of their lives.) It becomes far easier to perceive these aspects of the world around us, though, if we take a little time to understand how software functions.

Software is a forked thing, a way of doing work in the world that has a necessary dualism inscribed in it. Its ability to perform that work relies, on the one hand, on a set of rules or instructions that establish the basic parameters of what is to be done, called code, and on the other hand the things these instructions operate on: data, a representation of some aspect of the world translated into ones and zeros, the universal binary language of digital information-processing engines. In the universe as rendered in software, there is no order without code, no traction or meaning without data.
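The dualism is easy to see in even the most trivial program. Here is a minimal sketch, in Python, of my own devising – the rule and the numbers are invented for illustration, and stand in for any standing instruction and any record of the world:

```python
# Code: a rule fixed in advance by a designer, inert on its own.
def exceeds_threshold(reading: int) -> bool:
    return reading > 14

# Data: observed facts about the world, rendered as numbers.
observations = [3, 9, 21]

# Neither half does anything alone: the rule needs data to fire,
# and the data means nothing until a rule interprets it.
flags = [exceeds_threshold(r) for r in observations]
print(flags)  # [False, False, True]
```

The point of the sketch is simply that the two halves are separable, and each is separately a site where decisions get made.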

We can get a sense for how this works in the everyday by considering what happens, say, at the checkout counter whenever someone equipped with a supermarket loyalty card does their grocery shopping.

The software that runs the checkout terminal consists, in effect, of a long series of conditional instructions – each one a standing order, inscribed in a few lines of code, that will trigger specified patterns of response whenever certain conditions arise.

But which conditions, and which responses? At the checkout counter, both conditions and responses correspond with clear, direct commercial imperatives. Buy cat food often enough over the course of a year, and the profile associated with your loyalty card will evolve to reflect the strong likelihood that you have a companion cat in the household. The loyalty software will leverage this state of affairs in its attempts to cross-sell or upsell, directing you to the grocery chain’s favored partners by offering you a discount coupon for cat litter on your next trip, and in doing so pushing you through the steps of a business logic. The code establishes that there is such a category as Cat In Household (and more distantly and abstractly, that there are such social facts as households in the first place). But it’s the data – the running tally of observed facts about your purchases which you increment on every trip to the market – that operationalizes that definition.
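Sketched in Python, the loyalty scheme’s logic might look something like the following. Everything here – the threshold, the field names, the coupon – is hypothetical, invented to make the structure visible rather than drawn from any real system:

```python
# Hypothetical threshold: purchases per year before the category is inferred.
CAT_FOOD_THRESHOLD = 6

def update_profile(profile: dict, item: str) -> dict:
    """Data: increment the running tally of observed purchases."""
    tally = profile.setdefault("purchase_counts", {})
    tally[item] = tally.get(item, 0) + 1
    # Code: the standing rule that operationalizes the category "Cat In Household".
    if tally.get("cat food", 0) >= CAT_FOOD_THRESHOLD:
        profile["cat_in_household"] = True
    return profile

def next_offer(profile: dict):
    """Business logic: cross-sell against the inferred category."""
    if profile.get("cat_in_household"):
        return "discount coupon: cat litter (partner brand)"
    return None

profile: dict = {}
for _ in range(6):                    # six trips to the market, six tins of cat food
    update_profile(profile, "cat food")
print(next_offer(profile))            # discount coupon: cat litter (partner brand)
```

Notice where the category Cat In Household lives: it is written into the code before you ever swipe your card, and your purchases merely trip the wire.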

There’s a trade-off involved in all this, of course. You surrender to the supermarket chain (as well as its vendors, partners and customers) data about the time, whereabouts and frequency of your purchases, and in return you receive discounts, upgrades or other offers. Here, once you’ve accepted the fundamental terms of this bargain, there’s not a whole lot of scope for the expression of values. You endorse the belief systems implicit in the loyalty scheme by participating in it, and your opportunity to reject those belief systems begins and ends with the right of refusal.

In the case of the smart city, though, as with virtually all facets of metropolitan experience, the ambit of behavior and response is hugely more complicated. And this is where the largely preconscious values and conceptions of urban life held by such a system’s designers come into play.

Code is more obviously a site of intervention in the affairs of a city. When you establish rule sets, instructions that apply to the allocation of civic resources and have the force of command, you are obviously intervening in the distribution of possibilities, even lifechances. It is code that specifies a relationship between wait times at an intersection and the behavior of traffic signals elsewhere in the road network, code that sets the threshold at which a neighborhood or even a specific individual receives more attention from the police, code that defines a blob on a video feed as a potential nexus of dissent crystallizing in space and time. These specifications are clearly salient to the ways in which we differentially experience the city.
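To make the shape of such rule sets concrete, here is a deliberately simplified sketch. The thresholds and field names are mine, invented for illustration – no real traffic-management or predictive-policing system is quoted here – but the structure, a numeric threshold deciding who gets attention, is the point:

```python
# Hypothetical thresholds, fixed at design time by someone, somewhere.
GREEN_EXTENSION_THRESHOLD = 90  # seconds of accumulated wait before intervention
PATROL_THRESHOLD = 5            # incident reports before a neighborhood is flagged

def adjust_signal(wait_seconds: int) -> str:
    # Code intervening in the distribution of possibilities: who waits, who flows.
    if wait_seconds > GREEN_EXTENSION_THRESHOLD:
        return "extend green phase"
    return "hold cycle"

def flag_for_patrol(incident_reports: int) -> bool:
    # The same mechanism, applied to people rather than traffic.
    return incident_reports >= PATROL_THRESHOLD

print(adjust_signal(120))   # extend green phase
print(flag_for_patrol(3))   # False
```

Each constant in a sketch like this is a decision with distributional consequences, however neutral it looks sitting in a source file.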

But so, more subtly, is data. The way one chooses to collect it, and the very taxonomies and ontologies it is sorted into, concretely articulate the conditions of possibility we confront in the networked city. If your software, for example, specifies that individuals have an attribute called Gender, and the range of values your database will accept is limited to Female and Male, well, that is a decision. As of course so is the belief that this attribute is somehow salient to the way in which people will be treated in the first place.
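In code, a decision of this kind is often just a constraint on a column, and it might look as banal as this. The sketch is hypothetical – the names are mine – but it mirrors the kind of validation that sits in front of countless real databases:

```python
# The decision, fixed at design time: two values, and only two.
ALLOWED_GENDER_VALUES = {"Female", "Male"}

def validate_person(record: dict) -> dict:
    # Everyone the system will ever see must fit one of two buckets;
    # anyone who doesn't is simply unrepresentable here.
    if record.get("gender") not in ALLOWED_GENDER_VALUES:
        raise ValueError(f"unrepresentable gender value: {record.get('gender')!r}")
    return record

validate_person({"name": "A. Citizen", "gender": "Female"})     # accepted
# validate_person({"name": "B. Citizen", "gender": "Nonbinary"})  # raises ValueError
```

The person the schema cannot represent does not trigger a debate; they trigger an exception, or a silently discarded record.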

As vexingly complicated as the interactions between these terms and the things they represent seem, it’s not so very difficult to prise them apart. Data is the decision to acquire and measure bone-length dimensions from faces moving through the field of vision of a municipal CCTV camera. Code is the sorting of people into gendered buckets based on the results of those measurements. Policy is treating people differently depending on which bucket the system has placed them in. There is a politics and a system of values operating at every level.
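The three layers can even be written down as three separate functions. Everything in this sketch is invented for illustration – the measurement, the cutoff, the treatments – which is itself the point: each layer is a distinct, authored decision:

```python
def measure(frame: dict) -> float:
    """Data: the decision to extract a facial measurement at all."""
    return frame["bone_length_ratio"]  # stand-in for a CCTV measurement pipeline

def classify(ratio: float) -> str:
    """Code: sorting people into gendered buckets from the measurement."""
    return "bucket_A" if ratio < 1.0 else "bucket_B"

def treatment(bucket: str) -> str:
    """Policy: treating people differently depending on the bucket."""
    return {"bucket_A": "standard screening", "bucket_B": "extra screening"}[bucket]

frame = {"bone_length_ratio": 1.2}
print(treatment(classify(measure(frame))))  # extra screening
```

Dispute any one of the three – the measuring, the sorting, or the differential treatment – and the whole pipeline becomes contestable.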

Perhaps such sorting is defensible in some contexts, and less so in others. But the point is that such decisions should always be made consciously, and in the fullest possible awareness of the values they reproduce. And in my experience, anyway, they so very rarely are. Only the most enlightened software development organizations weigh such matters, or give them even the slightest consideration. Which means that when decisions are made about how software-based systems are going to handle and mediate gender, class, ethnicity or caste, about what does or does not constitute a crime or what public space is for, all of those decisions will be made on the spot, by engineers who are generally both immersed in a particular way of seeing the world and unaware that their way of seeing the world may not at all be universal or unquestioned.

Now, finally, we’re in a position to understand the questioner who – with an expression on his face that was something between surprise and open horror – took issue with my assertion that one of the aims of municipal technology ought to be “preventing capture of the commons for private advantage.” Isn’t that the whole point of capitalist enterprise, he wondered? Yes, I agreed, it was. Then why on Earth would you ever want to design software that might prevent that from happening? It had evidently never occurred to him that capitalism itself might be a value, or a system of values, shared neither by the designers of civic software nor by the people whose lifechances were shaped by its operation.

This isn’t simply a classic Two Cultures problem, nor merely a matter of semantic distinctions (or “wordsmithing,” as every blithely ignorant boss you’ve ever had has characterized the notion that precision in language matters, and that different words actually have different meanings). History is replete with examples of software engineers who were both entirely conscious of the values enacted by the systems they devised, and intended for those systems to realize noncapitalist ends. In our time, though, after four solid decades of a regnant and seemingly unassailable neoliberalism in the core settings and institutions of global power, the overwhelming majority of those currently working on the smart city (as, indeed, on the algorithmic products and apps which now mediate so much of everyday experience) subscribe to that framework of values, more or less unconsciously. And they reproduce those values in every line of code they touch and every container they devise for the collection, storage and analysis of data.

And it matters profoundly. If we are to have any hope whatsoever of establishing the conditions of justice in the cities of the twenty-first century, we will need to raise the values embedded in software to the surface and force them to speak themselves. We will need to demand that the engineers who will craft the code that determines all the million material ways in which the networked city interacts with the people who live in it, and give it shape and meaning, are able to consciously articulate the things they believe (even, at the very most basic level, whether or not they conceive of the distribution of civic goods as a zero-sum game). We will have to stop treating the various networked technologies around us as givens, let alone uncomplicated gifts, and learn to see them anew as bearers of ideology. And we’ll need to understand the design of software as the level at which that ideology operates.

None of this will be straightforward or easy, especially for those of us whose eyes begin to glaze over the moment things become even the slightest bit technical. But I don’t think it’s any exaggeration to say that our capacity to live, act, associate and create meaning as we will in the years ahead will depend on our ability to do so.

--

Adam Greenfield was previously a rock critic, bike messenger and psychological operations specialist in the US Army. He has spent over a decade working in the design and development of networked digital information technologies and is Senior Urban Fellow at the LSE Cities centre of the London School of Economics.  He is the author of Radical Technologies.