In the Name of Efficiency, the Time of Monsters has Arrived
Ron Salaj traces the origins of efficiency and the algorithmic technologies that shape our current political landscape.

The great promise of modernity has always been efficiency. From Charles Babbage’s dream of mechanical calculation to the algorithmic optimization of labor in today’s AI-driven workplaces, efficiency is treated as an unquestioned virtue, a force that separates progress from stagnation, winners from losers, the optimized from the obsolete. It is the logic that fuels industrial revolutions, reorganizes economies, rationalizes war, and justifies mass firings at the click of a button. In the name of efficiency, entire social structures have been dismantled, bureaucracies streamlined, and workers rendered disposable. But what if efficiency is not merely a tool of progress, but an ideology of control—one that, when left unchecked, operates as a purge?
History offers no shortage of examples where efficiency became not just a mechanism for productivity, but a justification for radical transformation, exclusion, and, at times, outright elimination. In the nineteenth century, the polymath Charles Babbage designed the “Difference Engine”, the first computing machine and a celebrated precursor of modern computers. Babbage’s Difference Engine, as Matteo Pasquinelli notes in his book “The Eye of the Master”, aimed to “automate the calculation of logarithms and sell error-free logarithmic tables, which were crucial in astronomy and for maintaining British hegemony in maritime trade and [its] aggressive colonial expansion”[1]. Babbage was not only a pioneer in computational design but also a significant figure in the development of industrial management practices. From the beginning, his computational theories were envisioned as tools for automating and disciplining labor.
Meredith Whittaker’s essay “Origin Stories: Plantations, Computers, and Industrial Control” sheds light on Babbage’s influence beyond the factory floor, highlighting how his vision for mechanization and computational tools was deeply intertwined with British colonial administration. By automating complex calculations and standardizing processes, Babbage’s innovations aimed to streamline colonial governance, making it not only more efficient but also more controllable. His approach facilitated the management of vast colonial territories and mirrored the oppressive structures of plantation systems, where control over enslaved labor was paramount.
The plantation, as both an economic and social institution, served as a precursor to many industrial control mechanisms. Techniques originally developed to manage enslaved populations—such as meticulous record-keeping, surveillance, and regimented work schedules—were later adapted and refined in industrial settings. Babbage’s work drew from these methodologies, embedding them into his designs for mechanical computation and labor management. Thus, the very foundations of modern computing and industrial control are intertwined with practices rooted in exploitation and domination.
Today, these same principles underpin AI-driven labor management, where algorithmic oversight tracks workers in warehouses with biometric precision, predictive policing systems disproportionately target marginalized communities, and automated hiring platforms reinforce historical patterns of exclusion. Much like early industrial systems optimized human labor through disciplinary control and statistical forecasting, contemporary AI optimizes efficiency through backpropagation—the fundamental learning process in neural networks. By continuously minimizing error and adjusting internal weights, AI refines decision-making without human intervention, shifting efficiency from a managerial tool to an autonomous algorithmic process.
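The error-minimizing loop at the heart of this process can be made concrete with a toy example. The sketch below is a deliberately minimal illustration of gradient-descent learning on a single linear “neuron” (my own example, not drawn from the essay): a prediction is made, the error is measured, and the internal weights are nudged in whatever direction shrinks that error, with no human adjusting them by hand.

```python
# Minimal sketch: one "neuron" learning by error minimization,
# the same basic loop that backpropagation runs at scale in deep networks.
def train(data, lr=0.1, epochs=200):
    w, b = 0.0, 0.0  # internal weights, adjusted on every pass
    for _ in range(epochs):
        for x, target in data:
            pred = w * x + b          # forward pass: make a prediction
            error = pred - target     # measure the error
            w -= lr * error * x       # adjust weights in the direction
            b -= lr * error           # that shrinks the error
    return w, b

# Learn y = 2x + 1 from examples alone; no human sets the weights.
w, b = train([(0, 1), (1, 3), (2, 5), (3, 7)])
```

After a few hundred passes the weights settle near w ≈ 2 and b ≈ 1: the rule was recovered purely by iterated error correction, which is the sense in which efficiency here becomes an autonomous, self-adjusting process rather than a managerial decision.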
The drive for efficiency, once a tool for industrial discipline and colonial administration, would take on even more sinister dimensions in the twentieth century. The same logic that rationalized plantations and mechanized labor under Babbage found its ultimate expression in the bureaucratic machinery of war and genocide. Nowhere was this more evident than in Nazi Germany’s embrace of IBM’s subsidiary, Dehomag, which provided the punch-card technology that enabled the systematic classification, tracking, and extermination of millions. If Babbage’s computational vision sought to optimize the movement of goods and workers, Dehomag’s innovations streamlined the movement of human lives toward death camps.
[book-strip index="1"]
When Adolf Hitler became chancellor in 1933, his regime quickly moved to enforce racial laws that excluded Jews and Roma, among others, from public life, dispossessed them of their businesses, and ultimately sought their total extermination. But executing such a mass-scale purge required more than just ideology—it required data, classification, and organization. As historian Harry Murphy points out in his paper “Dealing with the devil: the triumph and tragedy of IBM’s business with the Third Reich”, Hitler needed the technology and databases to count, organize, and number the population to distinguish those deemed “undesirable” or a threat to the Nazi racial order—Jews, Roma, disabled individuals, LGBTQ+ people, political dissidents, and others—from the rest of society. This required vast amounts of data, and not just any data. As part of this effort, a nationwide census was scheduled for May 1939, explicitly designed to gather detailed information on the religion of each member of the population. To implement this operation, Nazi Germany turned to Dehomag, a company that, in practice, functioned as an extension of IBM.
With IBM’s monopoly on tabulating technology in Germany—holding 95% of the market share—the company, under the leadership of Thomas J. Watson, directly modified its machines to meet the Nazi regime’s demands. IBM did not simply supply the hardware; it also trained personnel, developed custom punch-card systems, and actively managed the processing of population data. When the Nazis conducted their May 1939 census, IBM designed it to extract granular information on religion, ancestry, and racial identity—data that would later be used to systematically isolate, deport, and murder millions.
The Hollerith punch card system did not just facilitate census operations; it became an integral part of the Nazi concentration camp infrastructure. Upon entering a camp, prisoners were assigned a five-digit Hollerith code, tattooed onto their bodies, corresponding to punch cards that stored their demographic data and their fate. Every aspect of their existence—ethnicity, sexuality, labor capability, and eventual “departure” from the camp—was processed, cataloged, and recorded through IBM’s technology. The system’s efficiency was so seamless that in the Westerbork concentration camp, a prisoner named Rudolf Cheim, who worked in a labor service office, managed to decode its logic. According to the “Papers of Rudolph Martin Cheim”[2], he observed: “Columns 3 and 4 stated ‘reason for delivery,’ and code five designated a Jew, while code two signified a homosexual. Column thirty-four entailed ‘reason for departure,’ and the egregious code six stated ‘special handling.’” While scrutinizing the machines, Cheim recalled: “[There was] never a name, only the assigned numbers.”
In this system, names disappeared, replaced by numbers processed by machines—a chilling example of how the ideology of bureaucratic efficiency reduced human lives to sortable data points. This transformation aligns with what Giorgio Agamben describes as "bare life" (la nuda vita)—the stripping away of political and personal identity, reducing individuals to mere biological existence within a system that decides who is expendable.
IBM’s role in the Holocaust was not an accidental byproduct of technology; it was a deliberate collaboration, motivated by profit and optimized for genocidal efficiency. As thousands were processed and murdered in the camps, IBM’s CEO Thomas J. Watson reaped immense profits, further entrenching the company’s legacy of efficiency at any cost.
But IBM’s entanglement with computation, governance, and control did not end with the war. The same efficiency-driven logic that optimized labor on the plantation and mechanized genocide in Nazi Germany was not abandoned in 1945; it was refined, repurposed, and rebranded in the postwar era of cybernetics, automation, and data governance.
One striking example of IBM’s postwar entanglement with efficiency ideology can be found in its collaboration with the psychologist B.F. Skinner, whose vision for "teaching machines" sought to mechanize learning through principles of operant conditioning. Inspired by behaviorist principles, Skinner proposed an automated system of education, where students would interact with mechanical devices that reinforced correct responses while eliminating inefficiencies in traditional teaching. IBM supported these early efforts, envisioning a future where computational efficiency could shape human learning much like it had optimized industrial production and wartime logistics. Though these projects were ultimately never fully realized, they reveal how the pursuit of efficiency extended far beyond factories and war offices—into the very fabric of social and cognitive development.
At the same time, the postwar period saw the rise of cybernetics. Developed by Norbert Wiener, cybernetics was originally conceived as a scientific approach to understanding communication, feedback, and control systems in both machines and living organisms. However, it quickly expanded beyond its initial scope, becoming a foundational theory for systems engineering, AI, and even political and economic management. Cybernetics took the wartime logic of data-driven governance and applied it to a new era of predictive modeling, automation, and algorithmic regulation—ushering in a technocratic vision of governance that continues to shape the modern world.
The 1950s and 60s saw cybernetic ideas being integrated into military strategy, industrial automation, and state planning. The U.S. government, for example, relied on cybernetics to develop early AI models, nuclear deterrence strategies, and real-time battlefield decision-making systems. The Pentagon’s Semi-Automatic Ground Environment (SAGE) system, a network of early computers designed to process and respond to Cold War threats, reflected how cybernetics transformed military efficiency into a logic of automated surveillance and rapid response.
The cybernetic dream of predictive control and real-time adaptation evolved into modern algorithmic governance, AI, and predictive policing. The Big Data systems that fuel contemporary AI—whether in state surveillance, financial markets, or corporate management—operate on the same cybernetic principles of feedback loops, automation, and real-time decision-making.
Today, AI-driven efficiency models increasingly influence corporate hiring decisions, predictive policing, financial market forecasting, and automated supply chain logistics. The pursuit of optimization through data-driven governance, a direct descendant of cybernetics, has become so deeply embedded in society that it now shapes not just economies and governments but individual behavior itself—from social media algorithms that shape public discourse to AI-powered decision-making in healthcare, law enforcement, and warfare.
The question that emerges from this cybernetic evolution is not simply whether efficiency leads to progress, but rather: Who controls the systems that define efficiency?
And as history has shown, it repeats itself—first as tragedy, then as farce. Or, to paraphrase Antonio Gramsci: “the old world is dying and the new one struggles to be born. Now is the time of monsters.” Gramsci’s concept of the interregnum—a period of crisis in which old hegemonies are collapsing but new ones have yet to emerge—perfectly describes our current moment. When traditional institutions falter, reactionary forces and business opportunists fill the vacuum, creating an era of uncertainty, instability, and political extremism.
This interregnum is precisely where we find ourselves today, with the rise of Trump, the far-right’s infiltration of federal agencies, the corporate-state alliance of figures like Elon Musk, and the unchecked power of the Department of Government Efficiency (DOGE).
In this interregnum, the convergence of political power and technological influence has become increasingly pronounced. Silicon Valley, once a bastion of innovation and liberal ideals, now finds itself intertwined with the current administration's agenda. Prominent tech leaders, including Elon Musk, have assumed significant roles within the government, signaling a shift towards a techno-fascist regime. Historian Janis Mimura describes this fusion of governmental and industrial power as "techno-fascism," drawing parallels to pre-World War II Japan, where technocrats drove state-dictated industrialization at the expense of liberal norms and labor rights.
Elon Musk's influence is particularly noteworthy. As the head of DOGE, Musk leads efforts to cut federal spending and streamline operations, despite controversy and scrutiny. Alongside Musk, members of the so-called "PayPal Mafia," including Peter Thiel and David Sacks, have taken key positions within the administration, pushing policies aligned with their technological and conservative ethos. Their influence spans various departments, such as NASA, the Commodity Futures Trading Commission, and the Office of Science and Technology Policy. Critics argue that their fast-paced, disruption-prone approach, typically successful in the private sector, might not be suitable for government operations where democratic checks and balances require transparency, deliberation, and accountability rather than unilateral decision-making and rapid, efficiency-driven restructuring. Concerns also arise regarding potential conflicts of interest, given their ties to tech and defense industries.
[book-strip index="2"]
The integration of AI into government functions further exemplifies this alliance. The administration announced a half-trillion-dollar project called "Stargate," aiming to build up AI infrastructure. This venture, involving collaborations with companies like OpenAI, Oracle, and SoftBank, seeks to bolster U.S. AI capabilities. However, the close ties between these tech giants and the government raise questions about the potential for unchecked surveillance and the erosion of civil liberties.
Companies like Palantir, co-founded by Peter Thiel, have become instrumental in this new paradigm. Palantir’s AI-powered big data analytics are reportedly used to target and kill, highlighting the company’s deep integration into military and intelligence operations. Its CEO Alex Karp has recently acknowledged this controversial reality, stating bluntly: “our product is used to scare our enemies and on occasion kill them.”
But Palantir’s influence is deeply entwined with the ideological and political forces shaping governance in the US. Peter Thiel’s worldview provides a critical foundation for understanding this shift. He has openly expressed skepticism about the compatibility of democracy and freedom, declaring in an essay for the Cato Institute: “I no longer believe that freedom and democracy are compatible.” This statement encapsulates the core tenets of the neo-reactionary “Dark Enlightenment” movement, which Thiel has helped popularize. Its ideology rejects egalitarian democracy in favor of elite rule and corporate, neo-feudal models of governance, where “kingdoms would instead look like corporations, with CEOs as sovereigns.”
One of Thiel’s closest political protégés is J.D. Vance, current Vice-President of the United States, once a venture capitalist-turned-politician who has rapidly risen to prominence within the Trump-aligned right. Thiel heavily funded Vance’s Senate campaign, seeing in him a vehicle for advancing a post-democratic, corporate-aligned conservative order. Vance, once a critic of Trump, has since become one of his most loyal defenders, echoing Thiel’s neo-reactionary rhetoric by advocating for strong executive power, government purges, and a rollback of democratic institutions in favor of a nationalist, authoritarian realignment.
Their vision is not just a right-wing populist movement, but a fundamental restructuring of American governance—and beyond, as seen in Vice President J.D. Vance’s recent declarations during the Munich Security Conference. At Munich, Vance signaled a radical shift in U.S. foreign policy, suggesting that American alliances should be re-evaluated not only on ideological grounds, but through a strict lens of national interest and transactional efficiency. His remarks echoed the neo-reactionary rejection of liberal democracy as an outdated model, advocating instead for a geopolitical Darwinism where alliances, aid, and diplomacy are dictated by raw cost-benefit analysis rather than shared democratic values.
But while figures like Vance, Thiel, and Musk push for a consolidation of power in the hands of technopolitical elites, another strain of the same ideology goes further—toward a world where governance itself is rendered obsolete by AI. This is the vision articulated in Marc Andreessen’s Techno-Optimist Manifesto: a future where AI permeates all facets of human life, effectively automating decision-making across governance, economics, and warfare. Andreessen’s AI accelerationism is not just an economic thesis—it is a worldview with unsettling historical precedents. In the manifesto, he explicitly references and praises Filippo Tommaso Marinetti, the founder of Italian Futurism, whose 1909 Futurist Manifesto exalted speed, war, technology, and authoritarian control. Marinetti’s ideas would later be absorbed into early fascist ideology, as he co-authored the 1919 Fascist Manifesto, which advocated for a technocratic state led by industrialists and military elites, justified by the logic of efficiency and force. Andreessen’s embrace of Marinetti’s rhetoric—stripped of its historical context—reveals the deeper ideological underpinnings of his vision. He does not simply imagine a world shaped by AI; he envisions a technopolitical order where the logic of efficiency obliterates political deliberation itself.
As Umberto Eco warned in Towards a Semiological Guerrilla Warfare, modern power no longer relies solely on armies and tanks but on seizing the means of communication. Today, however, it is not just communication in the traditional sense—it is the seizure of code itself. Figures like Elon Musk, through political projects like DOGE, his companies xAI and Starlink, and media platforms such as X, are not merely reshaping media control but rewiring the entire informational architecture of the state. The ideological battle is no longer simply over who controls the message but over who controls the algorithm that decides what messages can be sent, received, and even thought. The digital sphere, once seen as an open landscape of innovation, has become the primary battleground for corporate-state consolidation, where efficiency is a weapon and mass purges—whether bureaucratic, political, or algorithmic—are enacted in real time.
It is precisely in these moments that monsters emerge. They have names and gestures. The question is not whether they will seize power, but whether we will allow them to claim the future before it is even formed. The battle over AI, algorithmic governance, and corporate-state control is not merely a debate about technology—it is a battle over the very conditions of political life and of democracy.
As Europe faces this new reality, we must remind ourselves what is at stake before it is too late. Frantz Fanon’s words remain as urgent now as when he wrote them in 1961: “The European people must first decide to wake up and shake themselves, use their brains, and stop playing the stupid game of the Sleeping Beauty.”