Think Piece

Against Redeeming Catastrophe: In Memory of Ruth Klüger

By Jonathon Catlin

Last month saw the passing of Ruth Klüger, one of the most influential memoirists of the Holocaust, at age 88. Born to a Jewish family in Vienna in 1931, Klüger was deported to Theresienstadt with her family shortly before her eleventh birthday, then transferred to Auschwitz and a women’s subcamp of Gross-Rosen concentration camp. She survived Auschwitz by lying about her age, after a female clerk told her to say she was 15 when she was only 12 in order to be transferred for work in a labor camp. In her memoir, Klüger presents this moment, to which she owed her life, as neither heroic nor redemptive, but human: “What more do you need for an example of perfect goodness?” (Still Alive, 108). Klüger and her mother ultimately survived by escaping from a death march. Her brother and father were murdered. After the war Klüger studied in Regensburg and then in 1947 emigrated with her mother to the United States, where she graduated from Hunter College and went on to earn a Ph.D. from Berkeley. She became a professor of German literature and taught at a number of universities, including Princeton (where she became the first female full professor of German) and the University of California, Irvine.

At Irvine, Klüger oversaw a study abroad program in Göttingen, Germany, where she was struck by a bicycle in 1989. As she emerged from a days-long coma, she began to recall long-repressed memories and to write. As Michael Rothberg writes, “In her reconstruction of her thoughts as the bicyclist bore down on her, the bicycle and headlight are transformed into barbed wire and a spotlight” (130). Klüger’s memoir, Weiter Leben, published in German in 1992 (“German, strange as this statement may sound, is a Jewish language” (Still Alive, 205)), when she was in her sixties, was a bestseller and won major awards. It can be said to exemplify what Freud called Nachträglichkeit: trauma registered belatedly. Rothberg praised her memoir for depicting “a site of extreme violence as a borderland of extremity and everydayness” and described it as a model of a literary mode he termed “traumatic realism,” which breaks through the deadlock between “realist” and “antirealist” Holocaust representation (109). The English version, Still Alive, was published in 2001. The book is widely taught in university courses and is used as eyewitness testimony in major works of Holocaust historiography, including Saul Friedländer’s Nazi Germany and the Jews (1997/2007) and Peter Fritzsche’s Life and Death in the Third Reich (2008).

Klüger stands out among survivors for her outspoken critical stance toward Holocaust remembrance culture. She once said to the German Studies Association, “the present memorial cult that seeks to inflict certain aspects of history and their presumed lessons on our children, with its favorite mantra, ‘Let us remember, so the same thing doesn’t happen again,’ is unconvincing. To be sure, a remembered massacre may serve as a deterrent, but it may also serve as a model for the next massacre” (GSA, 392). She called out Holocaust tourism, asking what the “carefully tended, unlovely remains” of former concentration camps could possibly communicate of the experience of being imprisoned there (Still Alive, 63). “I don’t go to these concentration camp memorial sites,” she once flatly told Der Spiegel, and she even asked the Auschwitz Museum to take down her poems, which were displayed there against her wishes (111). “I don’t hail from Auschwitz, I come from Vienna,” she wrote, and it seems that this attitude led her to opt to have the number tattooed on her arm removed in later life (112). She remained especially critical of German attempts to identify with victims from the Nazi era. “Defeat,” she reflected, “bred its own variants of nationalism”—a kind of negative exceptionalism about German crimes (163).

Writing in the wake of popular milestones like Holocaust (1978) and Schindler’s List (1993), Klüger echoed critics of “the Americanization of the Holocaust” and “the end of the Holocaust,” for whom brutal historical reality was displaced by redemptive and enjoyable kitsch. The problem of this oxymoronic “Holocaust aesthetics” became apparent to Klüger after a young woman approached her at a book signing and said, with a smile, “I love the Holocaust.” Klüger was taken aback. She understood that the woman loved not the event itself, but reading about it. “But her naïve and undisguised pleasure brought up the question: Should she love to read about the Holocaust? Should we in any shape or form feel positive and empowered or cathartically purged when we contemplate the extinction of a people? My impulse was to say to this woman: You shouldn’t. Stop reading these books, including mine, if you enjoy them” (GSA, 393).

This anecdote, Klüger reflects, validates continued discussion of Theodor Adorno’s famous claim that to write poetry after Auschwitz is barbaric, however much one may disagree with it. She worried about Holocaust literature slipping into “kitsch” and reflected on the popularity of Anne Frank’s diary: “it is easier to shed tears over an unfortunate little girl than to agree or disagree with an eminently rational Italian chemist with pronounced opinions that was Primo Levi” (GSA, 400). Telling are the two other survivor authors she singles out. One is the Hungarian Imre Kertész, who reflected on the paradox that once the Holocaust is written about, experience is substituted by the “artefact” of narrative representation, and who was likewise critical of “Holocaust culture.” The other is the Polish writer Tadeusz Borowski, whose acerbic This Way to the Gas, Ladies and Gentlemen (1946) she criticizes for its antisemitism and fictional embellishments, while also recognizing “the starkness with which Borowski portrays the slide from indifference to depravity at the edge of human existence and endurance” (GSA, 402).

Still Alive is also notable for its “feminist” discussions of nonconsensual sex work in the camps, which debunked popular “whorehouse fantasies” and “a thriving cottage industry” of Holocaust-themed pornography (184), while also problematizing Holocaust “photos as an instrument of sublimated voyeurism” (159). But Still Alive stands out most for its bitter tone. A reviewer wrote that Klüger “unsentimentally redefined the conventional mythos of the heroic Holocaust survivor,” and indeed, she rejected the idea that suffering united victims as “sentimental rubbish.” Instead she challenged her readers to “rearrange a lot of furniture in their inner museum of the Holocaust,” reminding them that “Surviving was not normal. Death was normal.” As the Wiener Zeitung fittingly titled its obituary, Klüger modeled “surviving relentlessly.”

As Klüger’s former colleague Gail Hart reflects, “I think she was a little bit too hard on those who, say, visited Holocaust memorials and Holocaust museums because she said that, you know, they were trying to admire themselves for hating the Nazis.” In Still Alive (as once quoted by Slavoj Žižek), Klüger recounts a conversation with German graduate students in Göttingen:

One reports how in Jerusalem he made the acquaintance of an old Hungarian Jew who was a survivor of Auschwitz, and yet this man cursed the Arabs and held them all in contempt. How can someone who comes from Auschwitz talk like that? the German asks. I get into the act and argue, perhaps more hotly than need be. What did he expect? Auschwitz was no instructional institution…You learned nothing there, and least of all humanity and tolerance. Absolutely nothing good came out of the concentration camps, I hear myself saying, with my voice rising, and [the student] expects catharsis, purgation, the sort of thing you go to the theatre for? They were the most useless, pointless establishments imaginable. That is the one thing to remember about them if you know nothing else. (Still Alive, 65)

Klüger refuses to cast herself as a saint on account of her experience (unlike the canonized but also criticized figure of Elie Wiesel). On the contrary, she even recounts a disturbing incident from the postwar years, while she waited in vain for her father and brother to return from the camps, in which she accidentally killed a little dog by forgetting to turn off the gas in the kitchen where he slept. “Of course I was mortified the next morning when I found his lifeless body and was despondent for days. Accident or symptom? And if the latter, symptom of what?” (Still Alive, 158). A kindred recognition of moral fallibility led fellow Auschwitz survivor Primo Levi to coin the term “the grey zone” to describe how the camps generated amorality, not morality, among victims. Yet while Levi similarly characterized Auschwitz as a “Black Hole,” he also, in sharp contrast to Klüger, called it his “true university” for understanding human nature.

Gabriele Annan spoke for many when she called Klüger “merciless” and “outspoken to the point of aggressiveness”: “She also resents all views on the Holocaust that do not tally exactly with her own, and gets indignant about everyone who criticizes her….Levi told a similar story without making one feel so hectored.” Speaking before Angela Merkel and the German Bundestag on Holocaust Remembrance Day in 2016, Klüger called her own approach “bitter and aggressive,” yet also positively connected her experience with the plight of other groups, praising the “heroic” act of welcoming refugees from war-torn Syria. (She similarly opposed inhumane treatment of asylum-seekers at the U.S.–Mexico border.) Hart, her former colleague, reflects of Klüger, “She is a very direct interrogator. She doesn’t have any of these conciliatory mechanisms that most women have. And she seeks argument. And at first I found that a little bit off-putting and I kind of avoided her.” As Klüger admitted herself, “Damn right. I am hard to satisfy” (Still Alive, 184).

Carolyn J. Dean’s Aversion and Erasure: The Fate of the Victim after the Holocaust (2010) interrogates (sometimes gendered) contemptuous responses to testimonies like Klüger’s—to the point that some of Klüger’s poems “were removed from German-language anthologies of poems from the camps because their desire for vengeance undercut the depiction of an ‘innocent victim worthy of pity’” (AE, 151–2). As Klüger writes in Still Alive, a German newspaper to which she submitted her poetry in 1945 published her poems in an “eviscerated” form that depicted her with “a drawing of a ragged, terrorized child” and reduced her writing to “a maudlin, hand-wringing text, in effect asking the public for pity” (154). Feeling misunderstood partly explains why she waited nearly fifty years to write her memoir—that and the wish not to hurt her mother, whom she describes as a paranoid woman of unsound mind: “in Auschwitz [she was] finally in a place where the social order (or social chaos) had caught up with [her] delusions” (104).

Klüger shatters stereotypes about victims by challenging what Dean calls “the power of particular self-protective, normative rhetorical constructions of victims as modest, reticent (they proclaim their suffering quietly and never angrily), life affirming, or innocent,” a mold that Dean argues “stifles other possible representations, including those prominent in a great deal of victim testimony” (AE, 153). In recurring preferences for testimonies like Levi’s, Dean observes, “again and again…modesty and [the] embrace of ordinary virtues stand in for the veracity of memory” (AE, 53). Dean explains this as a desire to maintain a coherent moral universe, one which preserves a sense of cause and effect, by valorizing self-restraint among victims and excluding abjection and other reactions deemed “excessive” or histrionic.

Klüger writes that for many in the postwar years, being a survivor was a sign of moral degradation: “the Jewish catastrophe was mainly and merely a resounding humiliation to them, not the tragedy of saints and martyrs that our own propaganda has made of it since” (Still Alive, 187). Klüger bristled at some of her readers “mixing up empathy with sentimentality,” being somehow moved by her work but missing what she actually had to say; many already knew what they wanted to hear from Klüger, and she refused to provide it (“Talking to Ruth Kluger,” 245). As Dean shows, such critical preferences often feed mythic notions about victims and can fall back upon narratives about survivors seeking advantages and making “exorbitant” claims to memory and victimhood. What do we lose when abject, angry, or silent traumatized voices become unhearable to us—erased—and we instead stick to the rational, heroic, and redemptive narratives we are primed to hear—often those written by unrepresentative men who survived by having certain advantages in the camps? Dean warns historians that “pursuing one kind of truth may well lead to the neglect of the very voices they seek to recuperate” (Emotions, 409).

Klüger writes that she even felt misunderstood by her longtime friend from university, the famous writer Martin Walser, a German war veteran who “forgets” that she survived Auschwitz, thinking, Klüger muses, that “a concentration camp was something for grown men, not for little girls” (167). This is just one instance of Klüger’s conflicted relationship with Germany and Austria. She calls Walser’s moral failings but beautiful writing “the epitome of what attracts and repels me about his country” (169). The two entered “a conversation…that is still ongoing and will never, can never, have a satisfactory end” (165). “We survivors are not responsible for forgiveness,” she once told reporters in Austria, a country she never ceased to see as antisemitic. “I perceived resentment as an appropriate feeling for an injustice that can never be atoned for.”

Klüger also explored this topic in her scholarly writings. In her book Katastrophen: Über deutsche Literatur (1993), she writes that she approaches the two hundred years between the Enlightenment and the “Final Solution” in which “the Jews were part of German culture” with “the indignation and nostalgia of a latecomer” (7). All she could get hold of was “one last corner” of that tradition, “under which the abyss of the Jewish catastrophe opens up.” She argued in that book that there was still a “Jewish problem” in German postwar literature, criticizing several instances of “Wiedergutmachungsfantasien,” fantasies of reparation for the Holocaust that make unusually good treatment of Jews appear the norm rather than the exception. “The fulfillment of secret wishes, or the ‘rectification’ of a harsh reality, is certainly one of the therapeutic functions of literature,” she wrote, “but when fantasy presents itself as realism, it becomes by definition kitsch” (13). Such works, she wrote, “create a quasi-reality that shifts what actually happened in the direction of an attempt at rehabilitation for the German population of that time. They hardly overlap with the Jewish memory of the events of those years” (15–16). She thus rejected the popular use of fantasy in postwar German literature to pose the redemptive question, “what would it have been like if people had acted differently?” As she wrote in Still Alive, “The Germans hadn’t lost their hatred and contempt of Jews, but it had become subliminal. What else could you expect? We survivors reminded the population through our mere existence of what had happened, and what they and their people had done” (151).

Klüger urged her readers against “feeling good about the obvious drift of my story away from the gas chambers and the killing fields and towards the postwar period, where prosperity beckons” (Still Alive, 138). The cutting style and substance of Klüger’s writings bar the redemption of the Jewish catastrophe she felt was impertinently expected of her as a survivor. Referring to herself, her mother, and the young woman who survived with them, Klüger wrote, “you cannot deduct our three paltry lives from the sum of those who had no lives after the war….I was with them when they were alive, but now we are separated. We who escaped do not belong to the community of those victims, my brother among them, whose ghosts are unforgiving.”

Jonathon Catlin is a Ph.D. Candidate in the Department of History and the Interdisciplinary Doctoral Program in the Humanities (IHUM) at Princeton University. His dissertation is a conceptual history of “catastrophe” in modern European thought. He tweets @planetdenken.

Featured Image: Daniel A. Anderson, Ruth Klüger, courtesy of UC Irvine.

Think Piece

Dialectics of the Overman: Nietzsche in the Frankfurt School

By Sid Simpson

A typical genealogy of the Frankfurt School traces the roots of its critical theory back to what Martin Jay calls the “intellectual ferment” of mid-nineteenth-century German intellectual history (41). Seeking to transcend Hegel’s heady idealism and wrest his legacy from the politically conservative Right-Hegelians, the so-called Left-Hegelians, among them Karl Marx, re-grounded philosophy in progressive practice by developing a materialist approach to apprehending social reality. Ironically, this framework would eventually reify into a scientistic metaphysics of its own. Thus, so the story goes, Frankfurt School forerunners György Lukács and Karl Korsch went beyond vulgar Marxism by looking back to its Hegelian roots and in doing so inspired the Hegelian-Marxism now closely associated with the Institute for Social Research. While no doubt a largely accurate account, this story leaves out a particular voice in the nineteenth-century German philosophical scene that profoundly shaped the Institute’s work. As Theodor Adorno himself made clear in a 1963 lecture, “to tell the truth, of all the so-called great philosophers I owe [Nietzsche] the greatest debt—more even than to Hegel” (172).

The more one looks, the more one finds Friedrich Nietzsche both implicitly and explicitly in the writings of the first generation of the Frankfurt School. In Dialectic of Enlightenment (1944), Adorno and Max Horkheimer place Nietzsche alongside Hegel in “recogniz[ing] the dialectic of enlightenment. He formulated the ambivalent relationship of enlightenment to power” (36). In this way we might see, as Gillian Rose has argued, Nietzsche’s diagnosis of modernity’s self-undermining, indictment of modern culture, and critique of reason as foreshadowing many of Adorno and Horkheimer’s arguments in Dialectic. The continuities are more than merely thematic, however. For example, Nietzsche’s aphoristic writing style is the guiding structure for the fragmentary form of Adorno’s Minima Moralia: Reflections from Damaged Life (1951), which he describes as a product of his “melancholy science”—a knowing inversion of Nietzsche’s famous “gay science” (in addition to the Magna Moralia once attributed to Aristotle). Likewise, Nietzsche and the early Frankfurt School can be linked on account of a shared ethics of radical creation in the face of stupefying conformity. Herbert Marcuse’s Essay on Liberation (1969) is a case in point: Marcuse explicitly invokes Nietzsche and the very same “gay science” in order to articulate an aesthetic ethos of liberation. Nietzsche’s influence and omnipresence were such that Rolf Wiggershaus, noted historian of the Frankfurt School, claimed that the original members of the Institute “find in him, as in no other philosopher, their own desires confirmed and accentuated” (145).

Despite these clear continuities, we must also take stock of how the early Frankfurt School broke from and, in some cases, condemned Nietzsche. In Minima Moralia, a work already so indebted to Nietzsche, Adorno laments that the seemingly inescapable horrors of the twentieth century transformed the well-known Nietzschean liberatory dictum amor fati into nothing more than a conservative “love of stone walls and barred windows,” the “last resort of someone who sees nothing and has nothing else to love” (98). At the same time, Horkheimer points out that the master morality Nietzsche articulated in response to the Christian slavishness of the nineteenth century metamorphosed into a projection of the oppressed masses, who, having lost their spontaneity, lionize the antics of faux-supermen (159–160). But the most stringent critique of all appears in Dialectic of Enlightenment, the very same text in which Adorno and Horkheimer also praise him for his insight into the interpenetration of reason and power. There, Nietzsche is accused of “maliciously celebrat[ing] the mighty and their cruelty” (77) and advocating a “cult of strength” which, taken to its “absurd conclusion” as a “world-historical doctrine,” results in atrocities like German fascism (79). Further, Adorno and Horkheimer dig into the Nazi connotations of Nietzsche’s infamous blond beast, quoting at length from the Genealogy of Morals to describe its “pleasure in destruction” and “taste for cruelty.” Finally, they make the strong claim that “from Kant’s Critique to Nietzsche’s Genealogy of Morals, the hand of philosophy had traced the writing on the wall; one individual put that writing into practice, in all its details” (68). As the text makes clear, that one individual is Adolf Hitler.

While we might reason that the early members of the Frankfurt School invoked no one uncritically, their all-but-explicit association of Nietzsche with fascism in Dialectic is bizarre given their explicit interest elsewhere in rescuing him from these trappings. For example, in his early essay “Egoism and the Freedom Movement: On the Anthropology of the Bourgeois Era” (1936), Horkheimer writes that Nietzsche’s superman “has been interpreted along the lines of the philistine bourgeois’ wildest dreams, and has been confused with Nietzsche himself” but is precisely the “opposite of this inflated sense of power” (108–109). Wiggershaus, who already described the Nietzschean streak in their writing, contended that transcripts of early conversations between Adorno, Horkheimer, Marcuse, and a handful of other associated thinkers display a clear agreement that “Nietzsche must be rescued from fascist and racist appropriations” (145). If that’s true, what is going on in Dialectic? How do we make sense of this relationship, and what does it teach us about the Frankfurt School itself?

In an article forthcoming in Contemporary Political Theory I argue that the early Frankfurt School saw in Nietzsche not only their desires confirmed, as Wiggershaus puts it, but also their fears: that a radical critique of reason risks courting fascism if it cedes all recourse to reason. In this way, the striking presentation of Nietzsche in the pages of Dialectic is less an interpretive aberration than a performative exaggeration of how Nietzsche’s critical insights made possible his own misappropriation. In other words, the authors wish to draw attention to how Nietzsche is himself an exemplar of enlightenment’s relapse into barbarism. What’s more, knowing that Nietzsche would balk at his uptake by the ressentiment-fueled National Socialists, Adorno and Horkheimer nevertheless exaggerate the proximity between master morality and National Socialism as a way to differentiate themselves profoundly on the question of reason: whereas Nietzsche asks, “why not unreason?”, Adorno and Horkheimer remain committed to an admittedly elusive form of positive enlightenment precisely so that they have a bulwark against the barbarism that coopted Nietzsche. As they make clear in the preface to Dialectic, though enlightenment thinking already contains within it the “germ of the regression that is taking place everywhere today,” it is at the same time indispensable for freedom (xvi).

In the final analysis Nietzsche, like Hegel, is turned on his head in the writings of the first generation of Frankfurt School thinkers. Elements of his radical critique were extended, while his articulation of master morality and morally generative radical autonomy was written off as an abstract negation of bourgeois reality that risked appropriation by fascists. For Adorno and Horkheimer in particular, Nietzsche’s most powerful contribution was exposing the domination at the heart of enlightenment. That his moral and political visions could not be separated from this violence was his prime limitation. And yet, Nietzsche’s shortcoming was of critical interest given their own hostility to abstract utopia and their skittishness about articulating the positive form of enlightenment that would serve as the boundary between their own critique of enlightenment rationality and irrationalism. Right alongside their admiration for Nietzsche’s artistic style and critical subtlety was their anxiety about its repercussions and about how they might proceed anew. Whereas most descriptions of the early Frankfurt School place them solely within a Hegelian-Marxist framework (though perhaps with an occasional mention of Freud), they also took up and reworked this fundamental Nietzschean problematic.

In some ways, labelling the first generation of critical theory (but especially Adorno and Horkheimer) “Nietzschean” illuminates not only their own project but also the contours of the Frankfurt School more broadly. Adorno and Horkheimer’s concern in Dialectic with distinguishing their own disposition toward reason from Nietzsche’s would ironically become a site of contestation within the Frankfurt School, between Adorno and Horkheimer on the one hand and their protégé Jürgen Habermas on the other. In much the same way that Dialectic took issue with Nietzsche’s total evacuation of reason, Habermas writes in his Theory of Communicative Action (1981) that Adorno and Horkheimer’s reliance on the elusive mimetic impulse (their “positive” form of enlightenment) results in an aporia. As he puts it, “the ‘dialectic of enlightenment’ is an ironic affair: It shows the self-critique of reason the way to truth, and at the same time contests the possibility ‘that at this stage of complete alienation the idea of truth is still accessible’” (383). Habermas’s Philosophical Discourse of Modernity (1985) drives the wedge between the first and second generations of the Frankfurt School even deeper: given the aporetic culmination of Adorno and Horkheimer’s appeal to mimesis, Habermas charges them with precisely the irrationalism they attempted to differentiate themselves from in Nietzsche. Here, Habermas pulls no punches: “Horkheimer and Adorno find themselves in the same embarrassment as Nietzsche: If they do not want to renounce the effect of a final unmasking and still want to continue with critique, they will have to leave at least one rational criterion intact for their explanation of the corruption of all rational criteria” (126–127). Thus, paying attention to Nietzsche’s hyperbolized presentation in Dialectic brings into relief the distinct irony that Adorno and Horkheimer’s own attempt to “rescue” some form of enlightenment (the title Horkheimer proposed for a sequel volume that was never written) was subjected to the same charges of irrationalism that they performatively raised against Nietzsche.

If Nietzsche looms large both in the early Frankfurt School’s theoretical approach and in the critiques leveled against it, a further chapter in the story of their dialectical relationship seems to be on the horizon. Dialectic was in part an attempt to understand how Nietzsche was coopted and mobilized by the far right—a tradition that persists up to the present day (one need only note Richard Spencer as an example). However, it may well be the Frankfurt School’s own turn to be conscripted into the culture wars; as Jay lays out in a recently published collection of essays, a vulgar right-wing meme is emerging that traces the “cultural Marxism” ostensibly responsible for the “subversion of Western civilization” directly back to the Institute for Social Research (155).

In the same way that Nietzsche’s critical acumen was perverted and mobilized for precisely the ends it sought to undermine, the legacy of Adorno, Horkheimer, and the other early lights of the Frankfurt School is currently being twisted into precisely the prohibition on thinking they vehemently rejected. For the far right, “Critical Theory” has become shorthand for the academic and intellectual forces ostensibly seeking to stifle free speech with “wokeness” and “social justice,” ultimately destroying Western culture. It’s no surprise, then, that right-wing critics seeking to attack Angela Davis—to many the modern face of radical communism and identity politics in the United States—emphasize her “radical Marxist” training under Marcuse and Adorno. This vast conspiracy theory has proven deadly: Anders Breivik, a Norwegian neo-fascist who believed himself to be combatting “cultural Marxism” when he murdered 77 people in a 2011 killing spree, even recommended Jay’s Dialectical Imagination as a resource to his imagined followers. More recently, Trump’s “Patriotic Education Commission” likewise defines itself as a response to Critical Theory—though Trump has increasingly singled out Critical Race Theory, apparently unaware that the two terms do not mean the same thing.

Of course, even ironic reversals of historical narratives are not total. Despite Nietzsche’s continued misuse and caricature by the right—notably on the issue of antisemitism—his work remains a critical inspiration for progressive social theory after Adorno and Horkheimer, from Gilles Deleuze, Jacques Derrida, and Michel Foucault on through contemporaries such as Judith Butler and Wendy Brown. In a short piece debunking the facile argument that “postmodernism gave us Trump,” Ethan Kleinberg opens a way to recuperate the critical Nietzschean legacy of “French Theory” for combatting the far-right “post-truth” ideologies it is wrongly conflated with. In much the same way, the Frankfurt School continues to be a deeply generative critical tradition regardless of its embattled position in current right-wing ideology.

Sid Simpson is Perry-Williams Postdoctoral Fellow in Philosophy and Political Science at the College of Wooster.

Featured Image: Friedrich Nietzsche (circa 1875), courtesy of Wikimedia.

Think Piece

“The Original Qi of the State”: Zheng Guanying’s Democratic Trade War

By Gabriel Groz

Trade war, protectionism, ‘industrial policy’: the concepts used by today’s ascendant nationalists to articulate their political economy are a marked departure from decades of neoliberal consensus on international trade. But different does not mean new. More than intellectual innovation, the resurgence of protectionist discourse around the world signals a return to an older vision of how states should manage international commerce: mercantilism’s zero-sum logic, with its fixation on trade imbalances, industrial subsidies, and interstate commercial warfare.

In today’s United States, most mercantilist policy and projection—whether Donald Trump’s tariffs or Joe Biden’s ‘Buy American’ campaign—is directed against China. Ironically, however, many of the key themes of the American mercantilist position—anxiety over trade agreements, attempts to protect industries from foreign competition, a bellicose tariff strategy—resemble a program that a generation of nineteenth-century Chinese nationalists developed to defend their own markets. Before the tariff returned to the American policy arsenal as an anti-China cudgel, the idea of “trade war” (shangzhan) cast a long shadow over the late Qing Dynasty (1644-1912) and early Republic, as Chinese mercantilists confronted the West’s “imperialism of free trade.” With the current trade war between the United States and China unlikely to end soon, the career of shangzhan is an episode in the history of economic ideas worth returning to, and one not without subtleties.

The idea of treating economic policy as a species of warfare did not originate with either Chinese or Western mercantilists but has a long history in early-modern China. Many Chinese reformers like Huang Zongxi and Gu Yanwu, living through the chaos of the Ming-Qing transition (1618-1683), proposed a political economy that integrated agricultural and defensive capabilities through a system of self-sufficient military colonies. Although these proposals were met with scant support from the state, the idea of an agrarian political economy that stationed troops as farmers remained influential in the eighteenth and nineteenth centuries. What distinguished nineteenth-century Chinese mercantilism from this earlier discourse was its emphasis on state support for commercial development—radical in a political culture outwardly hostile towards mercantile activity—as well as the unprecedented nature of the Western challenge.

The first Chinese reformer to advance an explicitly mercantilist political economy was Zheng Guanying (1842-1922), the pioneering late-Qing theorist of “trade war.” Zheng came to appreciate firsthand the connection between mercantile and state power and the severity of China’s crisis through his work as a Shanghai comprador. Influenced by earlier statecraft writers like Feng Guifen (1809-1874) and Wei Yuan (1794-1857), Zheng published a mercantilist tract titled Shengshi weiyan, or Words of Warning for a Gilded Age (hereafter SSWY) in 1893.[1] Especially after China’s defeat in the 1895 war with Japan, SSWY was hugely influential in the final decades of Qing rule, reaching a readership that included both the Guangxu Emperor and a young Sun Yat-sen. By the 1910s, SSWY was still widely read by reformers, including in Hunan, where Zheng’s essays were included in Mao Zedong’s political economy reading list.

Zheng’s passionate rhetoric in SSWY introduced a radical, new approach to international politics. Where the Mencius—the mainstay of classical Confucian education—began with a denunciation of any profit basis to politics, Zheng announced in SSWY that the rules of engagement had fundamentally changed. “Today all under heaven,” Zheng proclaimed, “is all about profit” (SSWY, 9.9a). This was lamentable; “fraud follows profit,” he conceded, “like shadow follows form.” But “making good use of” the profit impulse and its effects, Zheng argued, was key to appreciating the novelty of, and responding to, the political-economic challenge Western imperialism posed to China.

“Since the advent of Sino-foreign trade relations,” Zheng wrote, “the foreigners have engaged in endless deception, while our people are endlessly humiliated.” Revenge fantasies were only human; “of all those living,” Zheng asked, “is there anyone who does not wish to be the man, to grab a dagger and decide things once and for all?” (SSWY, 2.35b) This was the attitude that had inspired the Tongzhi Restoration, a suite of conservative reforms introduced under the Tongzhi Reign (1862-1874). But those reforms, Zheng insisted, were misguided. “We bought warships, created forts, firearms and naval mines, reformed our navy and infantry, and have endlessly discussed military policy,” Zheng wrote, “all for naught; the foreigners laugh at our expense.” The Western “scheme against us,” he continued “is an attempt to devour our flesh and blood, not our skin and hair; they attack our capital and assets, not our troop formations,” using “treaties as weapons of war” (SSWY, 2.36a). It was time to beat the West at its own game. “Studying military war,” Zheng concluded, “is inferior to studying shangzhan—trade war.”

Zheng then outlined his shangzhan program. First, the state needed to assemble statistics to understand “its gains and losses in commercial affairs” (SSWY, 2.41a). China’s trade deficit was severe, the result of unequal treaties and industrial stagnation. But there were solutions. The first step was to redirect the state’s energies, distracted by militarism and agrarian conservatism, towards commercial matters: “at its core,” Zheng explained, “the program of trade war aims to revive the domestic tea and silk industries” through state investment and collaboration with merchant groups. Zheng continued with a ten-point plan that would, among other things, allow domestic cultivation of opium, subsidize modern textile factories, facilitate import substitution, develop forges, copper and coal mines, and remove silver currency from circulation.

Each measure would “win a specific victory.” But a broader economic strategy was still necessary. While insisting that China fight to restore its tariff autonomy lost in the Opium Wars, Zheng promoted a deregulated system of internal trade, abolishing or reducing the internal tariffs on domestic trade that the Qing had relied on to raise funds since the Taiping Rebellion. This, he hoped, combined with boycotts of foreign goods and import substitution, would “reverse currency outflow.” Identifying the state’s interests with those of its merchants—no small matter in an officially agrarian, anti-commercial political culture—Zheng insisted that the government “use the power of its officials when merchants are in need,” which would “solidify the basis for trade war and amplify warlike power” (SSWY, 2.38b; 2.40b).

Beyond boosting domestic industries, Zheng’s concept of trade war in SSWY entailed a holistic program of state-building. Zheng encouraged the reorganization of the (long-underfunded) national postal service, state-sponsored development of railroads, and the formation of a network of free libraries to provide readers with “useful knowledge.” As part of an assertive foreign policy, modeled on Meiji Japan’s diplomatic successes, Zheng encouraged state visits abroad as a means of comparing different development strategies. In domestic administration, he demanded a role for merchants in commercial policy decisions, and advocated for the creation of a Department of Commerce with branches in every province, each run by an elected merchant-director, for “commercial affairs,” Zheng concluded, “are the original qi of the state” (SSWY, 2.9a; 2.15b).

It is for shangzhan that SSWY is remembered today. But Zheng’s mercantilism is not the full story of his ideas. In the book’s 1894 edition, Zheng grouped a dozen chapters together in a section titled “On Developing Resources.” Rather than focusing on economic development, the volume’s four initial chapters addressed very different themes: “Parliaments,” “Public Elections,” “Public Law,” and “Newspapers.” Here, Zheng the mercantilist abruptly became Zheng the constitutional democrat. Relying on the age-old Confucian dialectic of substance and function (ti and yong), Zheng’s preface to SSWY insisted that any “functional” reforms China adopted on its way to “wealth and power,” among them railroads and technological upgrades, could not be separated from “substantial” reforms, chief among them “the discussion of politics in parliaments,” necessary for achieving “unity between sovereign and people, and common spirit between higher and lower classes” (SSWY, “Author’s Preface,” 1b).

In other words: for trade war to succeed, a parliament was necessary. In “On Parliaments,” among the longer essays in the 1898 edition of SSWY, Zheng heaped praise on Britain, with its “lower house that initiates legislation,” its “upper house that comments on the legislation passed,” and its monarch who plays a key but non-initiatory role. While supporting an appointed upper house comprised of merchants, scholars, and gentry, Zheng recommended an open franchise for a representative lower house, in which “the members are elected by the people according to a population ratio” (SSWY, 4.3b; 4.1b). Zheng argued from history that parliaments and constitutions were preconditions for national power. “Some may ask,” Zheng wrote, “whether parliaments suit the West but not China.” He dismissed such quibblers as “having no sense of the big picture, and no deep knowledge of the causes of [national] profit and affliction in either society.” A parliament was a precondition for national renewal and power; “how else can it be that Britain, that tiny country,” has an empire that encompassed “twenty times the territory of the original state?” It was, Zheng concluded, “the obvious effect of parliamentary government.” Here Zheng also mentioned Japan, which “instituted a parliament and quickly underwent revival, outpacing the West” (SSWY, 4.3a). Given the facts, “how,” Zheng wondered, “could anyone in China say that parliamentary government is impracticable?”

Constitutionalism would make the state powerful but would also render it moral. “Joining sovereign and people as one body,” parliamentary democracy was a precondition for the renewal of Chinese politics. “The rise and fall of states is the result of human talent (rencai),” Zheng wrote, drawing on a pervasive meritocratic strain in Confucian discourse; “elections facilitate this perfectly.” Zheng further suggested that elections were the “lost meaning” of the “village examination system” instituted in the ancient Three Dynasties of lore, in which talents were rewarded by sage-kings (SSWY, 4.6b). Constitutionalism was therefore key to the moral revival of the Chinese state and, as the examples of Britain and Japan demonstrated, would also secure the state against international threats that might compromise its economic integrity and very existence. Thus as an economic strategy, trade war required parliamentary constitutionalism, and the national unity of purpose it would bring in its wake, to succeed. But a trade war was meaningful in the first place only because the state fighting it embodied constitutional, popular principles.

Zheng’s vision of international trade as a commercial war between states endured in China long after his death in 1922. Whether through Chiang Kai-shek’s campaign for tariff autonomy or Mao’s autarkic-nationalist version of Marxism, deformed versions of Zheng’s mercantilism have continued to inform Chinese political economy to this day. But Zheng’s original formulation, in which constitutional democracy was the prerequisite for interstate economic warfare, disappeared. Racial nationalism and then Marxism filled the ideological void left by constitutionalism and served as justifications for an aggressive international political economy. Trade war and constitutional democracy were entirely separable.

How, then, to evaluate Zheng’s program and its fate? Far from being simply a product of 1890s China, SSWY underscores a broader trend in the history of economic ideas: efforts to join democratic development with a mercantilist policy in democracy’s defense. Zheng’s attempt to thread this needle connects him to a lineage of “reason of state” thinkers, both Chinese and non-Chinese, seeking to negotiate, as the German historian Friedrich Meinecke famously wrote, “between behavior prompted by the power-impulse and behavior prompted by moral responsibility” in the struggle to meet the “necessity of state” (5). In Zheng’s conception, trade war as “function” satisfied the needs for state power, while constitutionalism as “substance” met its moral needs, especially after Confucianism’s failure to confront capitalist modernity head-on.

Zheng insisted that there was a necessary relationship between trade war and democracy; his intellectual descendants called his bluff. But the concept of a dual commitment to economic nationalism and democracy retains significant appeal, as recent events would suggest. The failure of Zheng’s dual vision to materialize says little about what the future may bring, especially as the relationship between globalization and democracy remains contentious. The dream of a democratic state that can defend its economic interests is very much alive. Even if ultimately unsatisfying, by thinking through the relationship between international economic competition and democracy, Shengshi weiyan has us asking: trade war for what?

[1] The version of the text cited in this piece is the standard expanded edition, Zengding zhengxu Shengshi weiyan (Shanghai: Liuxian shuju, 1894). I am grateful to McKinsey Crozier for her help in retrieving the edition from Yale’s Sterling Memorial Library. The phrase shengshi usually has the sense of “golden age”; Zheng’s deployment of the term was ironic, and resonates well with Mark Twain’s (roughly contemporaneous) idea of a ‘gilded age.’

Gabriel Groz studies the history of early-modern China and Britain at the University of Chicago.

Featured Image: Zheng Guanying, 1910s, via Wikimedia Commons.

Think Piece

Living Archives, Dying Wards

By Marissa Mika

1. Fieldnotes from the Cancer Archives

I want you to imagine yourself for a moment in a dusty former pharmacy store for an AIDS research trial, where you can still see the word cotrimoxazole (an antibiotic frequently used for HIV patients) written in Sharpie on the wooden shelving. It’s a warm day, not too hot, not too cold, not too rainy, not too dry. It’s a typical day in Kampala. You are here in this room, now repurposed as the Uganda Cancer Institute’s INACTIVE records room, surrounded, and I mean literally surrounded, floor to ceiling with the files of mainly dead or long forgotten cancer patients. There is a digital SLR camera on a tripod for taking photographs of a selective sample of the thousands of patient records housed in this room dating back to the 1960s. There is rat excrement on the floor. Some of the files are water damaged. This large former supply closet, no more than twelve feet by six feet, runs along the wall of the current records room, which is bustling today. Through the metal cage of the door, you can hear the records staff interface with patients just coming in to open up files for the first time. Their information—name, tribe, age, village, cell phone number—is written into “Face Sheets” which have remained in the same format for the past 40 years. 

This field site, essentially an archive, is where I spent a good portion of 2012 while conducting historical and ethnographic fieldwork on the history of cancer research in Kampala, Uganda. In brief, my research traces the history of how a small experimental chemotherapy research site established by the US National Cancer Institute and the Makerere Medical School’s department of surgery in 1967 remained open through a long period of political instability, the HIV/AIDS epidemic, and governmental neglect. Today, thanks to research collaborations and newfound institutional autonomy from Mulago Hospital in 2009, drug stocks are more plentiful, more nurses are on the wards, and the number of Ugandan oncologists has increased from one in the year 2000 to nearly 20 in 2020. The number of patients, everyone agrees, has also increased dramatically, crowding the two original wards that were never designed to provide comprehensive cancer care for a population catchment of 40 million.  

The Uganda Cancer Institute was founded on the heels of a set of chemotherapy experiments conducted by Denis Burkitt and colleagues in which children with Burkitt’s lymphoma were given single doses of cytotoxic drugs to see if this would create tumor regression. The results of these experiments were impressive enough that the UCI was established to study the responsiveness of tumors to combination chemotherapies in a context where far more attention was paid to patient retention and the informed consent of parents. The current director of the Uganda Cancer Institute notes that the sort of “heroic medicine” conducted by Denis Burkitt in the 1960s would not be cleared by a Ugandan or American Institutional Review Board today. But the reality still stands that certain sorts of questions and patient populations are available in a Ugandan setting, such as children with Kaposi’s sarcoma or endemic AIDS-related malignancies, that are just not as prevalent in the United States or the United Kingdom. It is perhaps no wonder that “Research is Our Resource” is the current slogan of the UCI.

When I first found out that the Uganda Cancer Institute had not thrown away its old patient records, I was stunned. And the INACTIVE records room is a treasure trove. Not only are there patient records from the 1960s to the present, but there are old personnel files, log books marking the events of a night’s shift on the wards, and old oncology journals from the 1970s. There are home visit reports from epidemiology studies in the 1960s. There are patient records written out on student exercise notebook paper in the 1980s and assembled with tiny strips of gauze—a signal of just how scarce things were during Uganda’s civil war in the early 1980s. The archive is in remarkably good shape given the years of benign neglect behind a padlock. As Dr. John Ziegler, the founding director of the UCI said to me in email correspondence about these materials, “Uganda is extraordinary in that nothing is discarded. Offices are like museums.” 

I’ve been thinking quite a bit about this museum as I make final revisions on my book manuscript now in 2020. I’ve returned to my old handwritten ethnographic fieldnotes, oral history interview transcripts, and scientific journal articles. I’m also beginning to pore over old patient records and institutional correspondence once more through my digital photo records, which are housed on a hard drive in California. As I revisit these records, I am all the more convinced that it would be a tragedy to let these materials sitting in the Kampala museum crumble to dust. And at the same time, as I look back through these material remains of cancer patients–notes from medical ward rounds, photographs of shrinking tumors, biopsy reports, orders to wrap up the body and take it to the morgue, maps to the homes of patients and their families dozens of miles off tarmac–I am deeply unsettled by the high degree of access I have to these intimate, bodily stories and the glimpses they offer into these all too abruptly extinguished lives. 

In this first post, I discuss the actual labor of working at the UCI’s archives and in the United Kingdom during an intensive year of fieldwork in 2012. I discuss both the embodied work of fieldwork in an archive and some of the practical considerations of what the obligations of a historian ethnographer are to her field site. Following Nicholas Dirks and his ethnography of the archive, I revisit old ethnographic fieldnotes from this period and use them as a source to reflect upon the politics, ethics, and practicalities of working with these materials. Subsequent blog posts will discuss the ethical and financial challenges of creating a durable digital and physical archive for these materials, as well as other forms of preservation through photographs and public history with Andrea Stultiens.

a. Of Ownership and Access—January 2012

Since my return to Uganda in early January, I’ve been surprised by both how easy and how difficult it has been to readjust to living in this country. A large part of my social and practical work for this past month has involved turning conversations of support and bureaucratic letters of research clearance into the concrete action of a formal introduction. Although the director of the Institute warmly supports the research I am doing here, a misstep in terms of performing official introductions has set my work back by about two weeks and counting. 

This wrinkle came up after I met with the Uganda Cancer Institute director and gave him an update. I laid out a sensible plan up until the end of March, and at the end of the meeting, there was a green light to start from the director. 

But we forgot one essential thing. When in Uganda, formal introductions and a letter with an official stamp and signature are needed. It slipped both of our minds. At first, this did not seem to be a problem. I had a wonderful meeting with the current records manager, who was happy, after a bit of a negotiation, to show me what was behind the door marked “Inactive” in the records room. 

But the welcome soured the following day, when I showed up at the records room expecting to get started on an inventory. The current records manager stepped out for a while, and then came back and said, “I’m sorry, but I need a letter and a proposal before you can work here.”

The substance of the negotiations that followed between me, the records manager, and the head of research at the UCI wasn’t about stamps or procedures or methods at all. It was about ownership. If I take an inventory of the Inactive records room, which is desperately needed if the Institute applies for funding to preserve these materials, who owns the inventory? Is an archival inventory the same thing as a dataset created from the bodies and fluids and outcomes of the Institute’s patients? If I take digital photos of patient files, am I obligated to provide copies? Where does the past of the Institute end and the present begin? These ethical issues of “who owns what” also seem to be shorthand for an even bigger issue about whose expertise counts. What if the historian gets something clinical “wrong” and makes a false generalization about how cytoxan dosing practices changed over time? Shouldn’t it be the clinicians who are writing the history of cancer care in Uganda? 

I do not know how common it is to renegotiate access to a field site after receiving Institutional Review Board clearance and an institutional go-ahead for historical and ethnographic research. But I do think these conversations about access, ownership, and expertise that unfolded need to be understood within a politics of hospitality. The expatriate researcher is, and will always be, a guest.  Formal letterhead, official stamps, and proper introductions all work to transform the expatriate researcher into a guest. And a guest who behaves badly will be asked to leave. I also think these discussions cannot be separated from the historical context of the UCI itself, where over the years expatriate scientists have sent tissues and biopsies and blood to freezers and laboratories overseas. In at least one such instance, samples taken in the 1960s and 1970s were kept at the US National Cancer Institute in Bethesda and went missing (either lost or thrown out) without any sort of infrastructural redundancy in Uganda.

Of Cataloging, Standardizing, and Organizing—March 2012

I fortuitously found myself able to sit in on a meeting about long term plans for the Inactive room. When I initially spoke with Dr J, he was like, “no, this is a managerial meeting,” but I’m really happy I just didn’t bother to move. So they are planning on renovating and reorganizing the Inactive records room to make more space for Deaths Files and Inactive records. The Inactive room houses much of “M’s [the former director’s] things,” and this is true—the medical journals, the boxes of research articles and correspondence, the personnel files—these are Institutional archives to be sure, but are different from the records. So the plan is for cleaning, organizing, and I hope cataloging. What’s great about this is that it means that the UCI will purchase materials like protective gear and a ladder and spearhead this reorganization project, thus making everyone take it a little more seriously, and thus making it less like I am a patron and more like I am just a person with some expertise helping out.

But there is a downside to this. The labor of creating archival space where one might be able to do historical or medical research sanitizes the materiality of these forgotten rooms. The old telephones, the pillows, and the forgotten AZT boxes all get thrown out and discarded as trash. The memories of the director’s long time secretary walking into the room and telling you how a group of men moved the contents of the director’s office into this storage room—these memories get washed away the more you attempt to organize the materials.

b. Of Privacy and Patients—August 2012

As I’m in the dusty room, pulling down patient files, photographing what I can selectively, a mzee (elderly man) comes into the records room. He is wearing a smart chocolate brown suit, a tie, and thick spectacles. I basically work in a room that’s like a cage, and so I’m hearing this conversation as it unfolds. The gentleman is unfailingly polite to A, who mistakenly identifies the man as a member of the clergy, when in fact this is a former army person. “I want to thank you for taking care of my wife here. She was a patient here in 2005 on this ward. She became good friends with the nurses. They loved her and would laugh with her. Every year around this time when I am remembering my wife, I come here to thank you for the care that you all gave her.” A suggests that this gentleman go and visit the Director to extend his thanks. I think to myself as I go back into the vault. . .that behind every single patient record that I’ve gone through, there is a constellation of family members and friends who have lost someone. And I’ll only get to sample a handful. 

So back to the patient records. I pull out a bundle of Lymphoma Treatment Center files from the 1960s and begin to go through them. And about half way through the bundle, I realize that I have in my hand the file of Dr. X, who was a patient here in the 1960s and then worked at the UCI as a medical officer in the 1970s. Earlier on in the day, I was wondering if I would ever locate Dr. X’s file in the hundreds. . .it seemed like a very tiny needle in a very big haystack. And here it was in my hands, and it looked so very different from what I was expecting. No photographs, and there on the last page of clinical notes, Dr. X’s own handwriting signing off on blood work—the last entry in his file signaling a permanent remission from his lymphoma in the 1980s. 

On this day, perhaps more than other days, it seemed that the living and the dead were colliding in the records room in unexpected ways. I had eaten delicious tandoori baked tilapia with Dr. X just a few weeks prior to finding his file springing to life in my hands. I had in all likelihood walked by the same nursing sisters who took care of mzee’s wife in 2005 earlier that day. 

Later on, I would ask Dr. A if patients were ever allowed to look at their files. I gave Dr. A, one of the oncologists at the UCI, a sketch of the situation, but did not identify Dr. X, a key informant and one of the first people to be successfully treated at the UCI in the late 1960s. Dr. A said, “No, Marissa, that is very bad. Patients under no circumstances are ever allowed to look at their files.” Even if they work here? “Even if they worked here.” 

At the Uganda Cancer Institute, the medical records very explicitly say “Not to be handled by the patient.” Consider for a moment how different this is from the US context, where HIPAA protections grant patients the right to request and read their medical records. In this context, who owns a patient record? Does it belong to the institution? Does it belong to the patient? To the family? To the person who created the record? And what about the research afterlives of these records? More often than not patient records are used for cancer epidemiology studies or cancer registration. A master’s research student may read through hundreds of records to determine how many patients come to the UCI with an advanced stage of Kaposi’s sarcoma. A cancer registrar will work with these records to determine the number of breast cancer or stomach cancer patients that are coming to the UCI in any given year. Far less common is to read these records as a form of personal biography. 

c. “Only in Africa” 

After I left Uganda in October 2012, I worked at various archives in the United Kingdom, tracking down fragments of the Uganda Cancer Institute’s early histories now scattered amongst the personal papers of various British colonial medical officers who worked in Uganda in the 1950s and 1960s. Of these gentlemen, Denis Burkitt, the surgeon who took an interest in a large, rapidly growing jaw tumor, has left patient records in several different archives. These patient records are inaccessible to the public, in one case because the Wellcome Library has yet to catalog the 36 boxes of his material, many of which directly relate to patients, and in another, because Trinity College in Dublin has marked them off as “private.” 

In the midst of the UCI’s archival working conditions, it is sometimes difficult to remember that it is largely the disorder and the general neglect and a lack of standardization of materials—the designation of this stuff somewhere between bureaucratic detritus and preciously private—that has been instrumental both for its preservation and its accessibility. What you can access in Uganda is not the same as what you can access in the United Kingdom. 

Adriana Petryna’s notion of ethical variability in the context of conducting clinical trials is helpful here for conceptualizing how geographies, entitlements to health care, and many other things fundamentally shape how and in what ways people conduct experiments in the Global South. In the Uganda Cancer Institute’s case, experiments do travel and are fundamentally shaped by their local context. In a context where the history of experiments goes back nearly half a century, there’s also a question of what to do after the experiments have ended. When do these materials themselves become historical artifacts rather than confidential patient records and the remains of clinical trials past? 

For historian-ethnographers, what are the ethics of preservation here? Despite the standardizing work of Institutional Review Boards to smooth out what constitutes privacy and confidentiality, national legal frameworks to ensure patient privacy matter. So do local cultures of what becomes private in a place where oncology is itself quite public. And then there is the problem of preserving the logic of the past. To what extent do you treat these places as archaeological sites? How do you preserve the logic of not throwing things away even as you attempt to make an archival site legible enough to be used? 

Finally, and most importantly, there is the reality that archival research, especially in a setting like the Uganda Cancer Institute, is an embodied, collaborative practice that cannot be stripped away from the present. How do you forge a collegial and mutual interest in preserving a collection like the UCI’s archives while attending to the very real concerns of patient privacy, an overstretched staff, and ongoing crises around space? 

If permissions from all parties were secured, would you sit down and talk with Dr. X about his file? Or would you tuck it back into the sea of pinks, manilas, and browns, bundled with string, allowing it to float in the anonymity of thousands of people turned into patients, many of whom have final entries of either “lost to follow up” or “died this morning”?


A note on the text: This piece is part of a blog series based on lectures I’ve given over the years about “what is to be done” with these rich materials. 

Marissa Mika is a writer, historian-ethnographer, and academic. Her scholarship examines the past and future of science, medicine, and technology. For the past fifteen years, she has worked primarily in eastern and southern Africa on the techno-politics of global health. Her current book project, Africanizing Oncology, examines the histories of survival, experiments, and creativity in times of crisis at a cancer hospital in Uganda. She holds a PhD in the History and Sociology of Science from the University of Pennsylvania, an MHS in International Health from Johns Hopkins, and a BA in Development Studies from UC Berkeley. Find her on Twitter @dr_mmika.

Featured Image: Lymphoma Treatment Center, 2012. Photograph by Andrea Stultiens.

Think Piece

The Ghostly Presence of the Police

By Juliane Baruck

Discrimination, excessive force, fatal assaults—is police violence a problem that can be explained by reference to the proverbial ‘bad apples’? For the philosophers Walter Benjamin and Jacques Derrida, it points to deeper, structural causes: the contradictory constitutional logic of the police as an institution. Benjamin’s essay Critique of Violence [Zur Kritik der Gewalt] from 1921 is shaped by the experience of World War I and the violent postwar period that saw the strengthening of military and police forces at the end of the November Revolution and the political clashes of the Weimar Republic. Written seventy years later in 1991, Derrida’s lecture Force of Law, by contrast, marks a reflection on the Cold War with its proliferation of anti-democratic and terrorist forces. Despite the distinct historical backgrounds they reflect on, however, both thinkers use the example of the police in the modern nation-state to reveal the structure of law as mythical—as something that is not naturally given. Rereading their texts on law and violence together, I propose, can help interpret the structural position of the police within the modern state apparatus and contextualize the persistence of systemic police violence beyond individual transgressions.

On a basic level, the police constitute the means of the state to exercise, enforce, observe, and safeguard the legal order. As an institution, they establish the binding nature of legal norms and uphold the state’s sovereignty. In other words, the relationship between law and police is that of means and ends; the police (as the means) are legitimized by the law (the ends) to exercise violence for the purpose of the latter’s preservation. In reality, however, it quickly becomes clear that the police often fail to fulfill their function as a mere means of upholding the law, and that turning to or even against them is simply not an option for many people.

Walter Benjamin addresses this peculiar position of the modern state police in his Critique of Violence. Notorious for being inimical to English translation, the essay deliberately exploits the ambiguity of the German term Gewalt, which can be translated as both violence and legitimate power, to illustrate that the logic of law is always already accompanied by violence. Attributing to the police the status of a “spectral mixture,” Benjamin argues that the police’s power exceeds that of a law-preserving institution because it can set its own ends. The police, in Benjamin’s terms, hold both law-preserving and law-making power (Gewalt). While the former grants the police the right to act in order to pursue legal ends (the “right of disposition”), the latter provides them with the right to “decide these ends within wide limits”—and the police thus also possess the “right of decree.” (242) For Benjamin, exactly this entanglement between law-making and law-preserving power (Gewalt) constitutes the “ignominy” (242) of the police.

Walter Benjamin, 1892-1940

However, the systematic connection between these two forms of police power often remains concealed. The police’s room to pursue self-imposed ends is to a certain extent limited, as it is restricted by the specific legal system within which the police operate, whose vagueness they have to exploit. Within this vagueness, police law finds its legitimized place:

Rather the ‘law’ of the police really marks the point at which the state, whether from impotence or because of the immanent connections within any legal system, can no longer guarantee through the legal system the empirical ends that it desires at any price to attain. Therefore, the police intervene ‘for security reasons’ in countless cases where no clear legal situation exists […]

Benjamin, Critique of Violence, 243

Hence, the vaguer the legal system is, the more room for interpretation it allows, and it consequently becomes easier for the police to disguise their actions as the mere exercise of law. In fact, this ambiguous space within the law requires the police to set themselves ends that, on a structural level, differ from the ends of the legal system. In this constellation, Benjamin argues, the police can easily set themselves ends that are “without the slightest relation to legal ends,” while they are “accompanying the citizen as a brutal encumbrance through a life regulated by ordinances, or simply supervising him.” (243) For Benjamin, the police thus have relative autonomy of action, especially in those cases where the state has no special laws to protect specific groups or regulate distinct situations. Extending the powers of police in states of emergency or crisis further increases the scope for police intervention. For example, when the police during protests are sent out to ‘calm down the demonstrations’ and ‘secure the city,’ there is no real legal framework detailing the exact actions that are implied in such an order.

According to Benjamin, the ignominy that marks the authority of the police arises from the fact that they are exempt from providing any grounds on which they could justify the exercise of both powers (Gewalten). Law-making power is usually “required to prove its worth in victory” (243), i.e. by defeating the enemy the winner gains the legitimacy necessary in order to establish a new legal order. The police’s authority, by contrast, is always-already legitimized by the existing legal system. And while law-preserving power typically “may not set itself new ends” (243), the police are also exempt from this condition because they are authorized by the state to do precisely that. The traditional separation of the two powers (Gewalten) is thus clearly suspended in the case of the police: “Its power is formless, like its nowhere-tangible, all-pervasive, ghostly presence in the life of civilized states” (243). This ghostly nature results from the police’s ability to not only apply, interpret, and change laws, but also to present their violations of the law as supposedly lawful. Because the police are not policed themselves and their limits of action remain necessarily blurry, they turn into an abstract and continuously present object of fear. In sum, the institution’s capacity structurally transgresses the framework of legitimacy that a legal system considers appropriate for preserving itself.

This transgression, in turn, has three far-reaching, structurally related consequences in Benjamin’s reading. First, the police forfeit their a priori legitimacy as a means of maintaining the law. As they exercise both law-preserving and law-making power (Gewalt), they systematically cannot be legitimized by the law itself. It is therefore precisely the empowerment of the police to act autonomously along certain lines that leads to the transgression of a legal structure meant to legitimize the institution of the police as a necessary means to attain certain legal ends. As a result, by exceeding the mere function of preserving the law, the police’s relationship to the law becomes one of consubstantiality, as succinctly put by Daniel Loick in his recent reading of Benjamin (119).

Second, in its dependence on the police, which is the consequence of its inability to ground its validity in itself, the law is exposed as arbitrary, or mythical. For Benjamin, this renders visible the violence (Gewalt) that is inherent in every act of law-making: The legal order itself is not naturally given but arbitrarily imposed, manifested or forced upon a state. Hence, Benjamin describes this kind of arbitrary necessity, which exists structurally in every legal system, as the “mythic manifestation of immediate violence” (249). What constitutes the fateful character of this mythical violence is that the legal order decides on the justification of means, and thus ultimately on all violence. (248) Because its ultimate end is the preservation of itself, that fateful, arbitrarily-necessary law seeks to monopolize all violence in itself. (239) Any other, i.e. external violence, whether individual or collective, poses a threat to the law not (only) in its particular form as a distinct statute, but to the law as such. Thus, the law exists in a permanent state of fear vis-à-vis any other violence because of the latter’s inherent law-making capacity and, consequently, its potential to entirely overturn the existing order. (239) The monopolization of all violence within the law implies that police violence, in turn, can simply be covered up as a defense of the law.

The subsequent third outcome is that the police pose a fundamental threat to the sovereignty of the law. The police, in their capacity to assert their own sovereignty, always pose a threat to the state. This finding, then, undermines the rule of law as a legitimate end. In other words, as argued by Loick, the conflation of law-making and law-preserving power (Gewalt) in the police must result in a contradiction: the police are instituted to preserve the law while at the same time, in their capacity as law-making power, they pose an immediate threat to that same law. (116-17) Although the autonomy of the police is initially restricted by the law, it is, as Benjamin argues, precisely the necessity to apply laws and to interpret and decide “where no clear legal situation exists” (243) that provides the condition of possibility for the police to turn against the state as the framework of their legitimization—above all because their actions are taking place in the present, while the establishment of law necessarily precedes this present.

For Jacques Derrida, who built upon Benjamin’s essay in his lecture Force of Law, it is this third consequence that exposes the fundamental character of the law: it always contains a threat to itself. Following Benjamin, Derrida posits that the law cannot be grounded in or justified by itself; any attempt to do so must inevitably be condemned as “legitimate fiction” (12), as something that presents itself as naturally given and thus ultimately legitimate in a Platonic sense. Yet, while Benjamin is concerned with illustrating the impossibility of providing legal legitimacy to the police, Derrida turns this argument around and contends that the existence of the police necessarily follows from the law’s lack of legitimacy: “By definition, the police are present or represented everywhere that there is force of law.” (44) This leads him to deduce that “[t]hat which threatens law already belongs to it, […] to the origin of the law”. (35) In other words, Derrida assumes that, because law is never able to be grounded in an absolute truth but instead develops in a process of mutual influence with the contingencies of history and the historical understanding of justice, the possibility of its self-destruction is inscribed in every law—precisely because of its contingency. The police, then, are the very instance of the law that enables it, by applying and thus iterating it.

Jacques Derrida, 1930-2004

According to Derrida, rules in the sense of the law are only valid when they are enforceable. Thus, the existence of laws is conditioned by the possibility of their enforcement. (6) The mixture of law-preserving and law-making violence evident in the police thus draws attention to a central aspect of the law, which can be described as auto-deconstructive.

What threatens the rigor of the distinction between the two types of violence is at bottom the paradox of iterability. Iterability requires the origin to repeat itself originarily, to alter itself so as to have the value of origin, that is, to conserve itself. Right away there are police and the police legislate, not content to enforce a law that would have had no force before the police.

Derrida, Force of Law, 43

The institution of the police paradigmatically reveals that the establishment of a law always implies its preservation in a form that requires an iterated foundation. Already in the act of foundation, the law “inscribes the possibility of repetition at the heart of the originary.” (38) Consequently, in Derrida’s understanding, there is no more “a pure founding violence, than there is a purely conservative violence.” (38) Every law contains a re-founding, “so that it can conserve what it claims to found.” (38) Here, Derrida adopts from Benjamin the aspect of the circular self-weakening of the law-making by the law-preserving power (Gewalt) (cf. 55; 251).

This means, however, that for Derrida, the threat to sovereignty does not come from outside, i.e. from the police as an independent, autonomous force in opposition to the law. Rather, the police illustrate a fundamental and constitutive element of every law: the necessity of its re-founding always implies the possibility of its own destruction. And vice versa, this possibility of the destruction of the law is the motor of its constant self-authorization, of its continuation. This circumstance fundamentally enables the structural facilitation and disguise of police violence (Gewalt), since the police act to enforce a law and, by refounding it over and over again, legitimize it. Being this refounding force, the police structurally reach beyond the law.

What follows from Benjamin’s and Derrida’s thoughts on the police? Both essentially argue that law has a mythical origin, that there is no absolute law that could claim universal validity. By emphasizing the ghostly presence of the police, and their mixing of law-preserving and law-making powers (Gewalten), Benjamin is particularly concerned with showing that it is impossible to legitimize violence (Gewalt) as a means of preserving the law, and thus aims to expose the violence (Gewalt) that is implicit in any legislation. Analyzing this same fundamental lack of legitimation of the law, Derrida argues that the police reveal the threat of destruction inherent in every law, precisely because of the impossibility of its self-legitimation. Ultimately, it is this legitimation that is, and will continue to be, the task of law.

In conclusion, police violence is legitimized by the structural coupling of law with the police, which themselves lack any firm legitimation. Therefore, for both Benjamin and Derrida, the danger is not that individual police officers become criminals, as police violence is not external to the police’s existence as a means. Rather, they indicate a structural problem that is rooted in the very constitution of the police: by means of a legitimate fiction, the police are legitimized by a law that they themselves must uphold. This contradiction cannot be resolved.

Juliane Baruck is a graduate student and research assistant in the Department of Philosophy at the Freie Universität Berlin. Her current research focuses on Political Critical Theory and particularly on the intersection between democracy, justice, and responsibility. 

Featured Image: “phantom (cc)” by Martin Fisch, licensed under CC BY 2.0.

Think Piece

Labor in the Mind of Masters: Antebellum Slaveholder Ideas of Work

By Alec Israeli

In his opening address to the May 1853 Convention to Organize an Agricultural Association of the Slaveholding States, chairman William C. Daniell mounted a detailed defense of slavery. Much of his speech articulated a critique of free wage labor that could have come from a working-class radical— “strife” between labor and capital was inevitable; machinery made labor dependent on capital. Yet he did not attack capitalism itself. Rather, he claimed the superiority of the economic form that bound labor and capital as practiced by his fellow “capitalists”— slaveowners. He also defended slavery in terms beyond political economy, claiming that slavery’s benefit lay in the “wholesome pupilage” provided by the master to the slave (“An Address,” American Cotton Planter, Feb. 1854).

Daniell’s apparent ambivalence about how to define the role of slaveowners in relation to their slaves was representative of a broader tension felt by the master class. As studies in the New History of Capitalism now aim to center American slavery in the development of capitalism, it is worth examining exactly how slaveowners themselves understood their status vis-à-vis the labor, free and slave, that made their wealth possible. I suggest that they attempted to maintain identities of both employer and master, manager and paterfamilias, which were often in contradiction, but united by an elevation of planters’ mental labor. Personal class identity notwithstanding, however, planters could never abandon the reality that their enslaved laborers served their bottom line.

As scholars like Laurence Shore have written, planters consciously sought to prove the compatibility of slave labor with classical political economy and to solidify their status as responsible capitalists. By the mid-nineteenth century, this tendency engendered both an anxiety over the South’s lagging economy and a reformist call for planters to improve Southern domestic production, so as to relinquish dependence on Northern imports. Among the many reformist planter journals that sprang up was the American Cotton Planter (ACP), which presented itself as “Devoted to Improved Plantation Economy, Manufactures, and the Mechanic Arts.” Most articles discussed applications of so-called “book” farming, with various suggestions for practicing crop rotation, maintaining soil fertility, and experimenting with different cultivars. But efficient farming was not the ACP’s only concern.

Peppered throughout practical advice columns were intimations of slaveholders’ growing consciousness as managers of business enterprise. As one 1854 article on crop rotation offered, “proper management” of a plantation would secure the “labor and pains-taking of the proprietor.” Such management in turn “reward[ed] the industrious labor of the merely plower and hoer”—that plower and hoer, of course, being the slave (“System and Rotation in Cotton Culture,” Dec. 1854). Notably, the author described the work of both the planter and the slave in the same terms of “labor,” rather than taking pride in the planter’s status in the leisure class. The distinction was not in labor itself (emphasized in either case), but the kind of labor: while the slave “merely” plowed and hoed, the planter thought about how to do so. When slaveholders engaged in modern management, they saw themselves as modern brainworkers.

This aspiration towards membership in the bourgeois world of brainwork was a common line throughout the ACP. Farming, its writers insisted, was not drudgery. “I don’t care,” complained one letter to the editor, “how practical any man may be […] he must apply his mind to work […]” It was “high time,” indeed, “that intellect make it[s] mark upon Southern Agriculture,” (“Agricultural Improvement,” Feb. 1854). One article taken from a peer publication, the American Agriculturalist, likewise made the case that agriculture, “not simply the art of plowing” but “viewed in all its […] scientific bearings”, should be understood as a respectable career path for college-educated boys with a “culture of mind,” and take its place alongside the “professions.” (“Agriculture and the Professions,” Dec. 1854).

Granted, the “professions” are unspecified here beyond a passing mention of politicians; they likely refer to law, medicine, and perhaps the then-growing world of business management (clerks, administrators, and so on). Other writers, though, were more specific in their connection between cultivating the mind and professionalizing planting. A letter in the ACP from Carlisle P.B. Martin of Georgia, taken from another peer publication, The Soil of the South, maintained the premise of planting as educated brainwork, but pushed the timeline back for schoolboys-cum-planters. Martin called for the creation of a “scientific and practical College” to teach incipient planters modern agriculture, complete with an on-site farm for study. He insisted: “I do not propose to make it a MANUAL LABOR SCHOOL.” The plowing and hoeing was to be left to slaves, “as on any other plantation.” Martin was well-received; the ACP published a glowing companion piece to this letter in the same issue (“Agricultural Education,” “Agricultural School,” Oct. 1855).

The planter identification as brainworkers went beyond technical training and plantation work. A short article about the need for physical recreation cast its readers as quintessential Americans: “an unhappy people […] Perpetually absorbed in business, with our mental faculties constantly on the stretch […] Bending all our energies to the one object of making money […]” (“Physical Recreation,” May 1856). The ACP seemed to relish that planters could join white-collar workers in anxiety-ridden productivity—soon to be medicalized as a condition peculiar to the American brainworker class.

It is hard to know how much the planter readership saw itself in this image offered in the ACP. It is clear, though, that these kinds of publications were avidly read, and their advice taken to heart. The Mississippi cotton planter Francis Terry Leak, for example, was the consummate scientific farmer. As Caitlin Rosenthal has discussed, Leak kept daily picking records in hand-drawn tables, made projections for future planting, and wrote detailed records of fertilization experiments on his plantation. Looking to his expense reports, one can see that he was a book farmer: Leak subscribed to De Bow’s Review, the Southern Cultivator, and the Soil of the South, all forums for reformist planters.

Leak’s financial records also point to his work as a manager of free as well as slave labor. He recorded three year-long contracts for overseers, with a set salary, often including room and board. Overseers, as free laborers, were employed at will; as Leak’s contracts suggest, planters were scrupulous employers, and overseer turnover was high. Overseers were responsible for plantation management when planters were absent, but that did not mean they could escape the watchful eye of their employer. Writing to his business partner Rice C. Ballard in July 1846, Mississippi planter Samuel Boyd reported that upon a brief visit to one of their plantations, he had to scold the overseer Cox about his overly-harsh “discipline of the negroes.” Cox himself had previously written that a slave had escaped for fear of being whipped.

This brief episode, stringing together the employer/master, the free laborer, and the slave, offers some insight into the dual nature of the planter. On the one hand, here is a manager chastising his employee (the overseer) for damaging his productive property (the slave). On the other hand, though, is a paternalistic master distinguishing between the mere worker whom he pays, and the enslaved ward, for whose life he sees himself responsible. Cox’s apparent deviation from the master’s will constituted a disturbance in the metaphysics of plantation power.

The proper relationship between master and overseer was a matter of some contention in the planting community. A debate on the subject unfolded in the November 1854 ACP. One reader wrote in, asserting that, “An overseer is an agent, invested with authority to act for his employer, and if he has no authority to act for or instead of his employer, he is no agent, or overseer, but a driver,” (“Reply to Dr. Phillips,” Nov. 1854). In other words, overseers should have some license to exercise their own will in plantation management.

M.W. Phillips—a prominent scientific planter, and frequent contributor to the ACP and other similar periodicals— disagreed. He wrote in reply that “I regard the overseer to be an agent, and no more than an agent.” The overseer was “bound to obey instructions,” (“Overseers, Agents,” Nov. 1854). Phillips expected the overseer to extend the will of the planter, and nothing more.

This, it seems, was the correct view for enlightened book farmers. The next issue of the ACP opened with an essay on “The Duties of the Overseer” from Thomas Affleck’s Cotton Plantation Record and Account Book (one of the more commonly used pre-printed recordkeeping books) which affirmed the overseer’s purpose to “carry out the orders of [his] employer […] and […] to study his interests,” (“The Duties of an Overseer,” Dec. 1854). Another article in this same issue affirmed this sentiment, advising planters to “place the overseer’s copy of the contract” in their copies of the Account Book so “that it may be before [the overseer] every evening.” Advice like this suggests that the ACP’s managerial tips were meant for a readership of planter-proprietors rather than overseers. Indeed, this article echoed Phillips’ top-down directive in stating that “The overseer is the agent of the employer, to carry out his will in the promotion of his interest in the management of his plantation, hands and stock,” (“Overseers,” Dec. 1854).

Here, the overseer is but a surrogate for the planter, mediating between the “will” of the planter and its material extension through the “hands”—the slaves. The overseer was the connective tissue of a plantation body-politic, with the planter as the head controlling the movements of its enslaved limbs. The plantation in this formulation becomes an organic form; brainworkers and manual workers, those units of capitalist organization strictly divided by skill and class, now become mind and flesh bound together in preordained purpose. Both articulations were espoused by these book farmers— a dual identity of mental capacity not without contradictions.

“Hands” as a metonym for enslaved laborers offers a curious case. The word itself, of course, suggests corporeal form. But its simplicity as a designation also lent it to planters’ use in their attempt to fully commoditize slave labor by rendering slaves fungible. Rosenthal has written how a single “hand” became a reference point for a slave at maximum productivity, such that slaves could then be described in terms of fractional hands to determine their sale value. But such methods to render slaves uniform belied the subtle ways planters capitulated to the impossibility of the aim. Letters to Ballard from his overseers refer to “hands”, but likewise mention slaves by name; the diary of planter William Ethelbert Ervin followed a similar pattern. Leak, as carefully as he tallied cotton picking, still noted slave births, deaths, and tasks by name.

Likewise, planters’ capitalist-conscious desire to encourage Southern domestic production ironically engendered a paternalistic vision of the plantation as a self-reproducing, market-free being. Articles throughout the ACP called on planters to develop a “proper system of plantation economy” to avoid import dependence, for example, by planting plenty of grain alongside their cotton (“Prospects of the Crop,” Nov. 1854; “System and Rotation in Cotton Culture,” Dec. 1854).

And yet, commodity production took precedence over reproductive plantation labor for in situ slaves. One of Ballard’s overseers wrote that he would not be able to do much “surplus work”— in this case, building corn sheds— aside from “the crop”; another had been hesitant to direct slaves to begin repair work, as picking season approached. Leak avoided the dilemma entirely, contracting out painting, bricklaying, fencing, and other such work to free laborers.

Try as they might, planters could not escape the pull of the market of free labor beyond. Nor could they resist organizing their enslaved laborers to cater to the demands of this market. By necessity (and in practice), the old body-politic submitted to the new brainworker, but the former was still a salient articulation of hierarchy that structured slaveholders’ understanding of labor. In itself, the significance of this atavism should not be forgotten as continuing scholarship reveals the brutal historical intertwining of capitalism and slavery.

Alec Israeli is a senior in the Department of History at Princeton University, pursuing certificates in African American Studies, European Cultural Studies, and Humanistic Studies. With research interests in the overlaps of labor and intellectual history in the 19th-century Atlantic, his senior thesis advised by Matthew Karp will examine the labor ideologies of antebellum cotton planters in the Lower Mississippi Valley. His work has also appeared in the Vanderbilt Historical Review, the Columbia Journal of History, the Princeton Progressive Magazine, and the Mudd Library Blog. He can be reached at

Featured Image: The American Cotton Planter, vol. 4, no. 5, 1 May 1856. American Historical Periodicals from the American Antiquarian Society. Accessed 9 Oct. 2020.