Historiography

A Pandemic of Bloodflower’s Melancholia: Musings on Personalized Diseases

By Editor Spencer J. Weinreich


Peter Bloodflower? (actually Samuel Palmer, Self Portrait [1825])

I hasten to assure the reader that Bloodflower’s Melancholia is not contagious. It is not fatal. It is not, in fact, real. It is the creation of British novelist Tamar Yellin, her contribution to The Thackery T. Lambshead Pocket Guide to Eccentric & Discredited Diseases, a brilliant and madcap medical fantasia featuring pathologies dreamed up by the likes of Neil Gaiman, Michael Moorcock, and Alan Moore. Yellin’s entry explains that “The first and, in the opinion of some authorities, the only true case of Bloodflower’s Melancholia appeared in Worcestershire, England, in the summer of 1813” (6). Eighteen-year-old Peter Bloodflower was stricken by depression, combined with an extreme hunger for ink and paper. The malady abated in time and young Bloodflower survived, becoming a friend and occasional muse to Shelley and Keats. Yellin then reviews the debate about the condition among the fictitious experts who populate the Guide: some claim that the Melancholia is hereditary and has plagued all successive generations of the Bloodflower line.

There are, however, those who dispute the existence of Bloodflower’s Melancholia in its hereditary form. Randolph Johnson is unequivocal on the subject. ‘There is no such thing as Bloodflower’s Melancholia,’ he writes in Confessions of a Disease Fiend. ‘All cases subsequent to the original are in dispute, and even where records are complete, there is no conclusive proof of heredity. If anything we have here a case of inherited suggestibility. In my view, these cannot be regarded as cases of Bloodflower’s Melancholia, but more properly as Bloodflower’s Melancholia by Proxy.’

If Johnson’s conclusions are correct, we must regard Peter Bloodflower as the sole true sufferer from this distressing condition, a lonely status that possesses its own melancholy aptness. (7)

One is reminded of the grim joke, “The doctor says to the patient, ‘Well, the good news is, we’re going to name a disease after you.’”

Master Bloodflower is not alone in being alone. The rarest disease known to medical science is ribose-5-phosphate isomerase deficiency, of which only one sufferer has ever been identified. Not much commoner is Fields’ Disease, a mysterious neuromuscular disease with only two observed cases, the Welsh twins Catherine and Kirstie Fields.

Less literally, Bloodflower’s Melancholia, RPI-deficiency, and Fields’ Disease find a curious conceptual parallel in contemporary medical science—or at least the marketing of contemporary medical science: personalized medicine and, increasingly, personalized diseases. Witness a recent commercial for a cancer center, in which the viewer is told, “we give you state-of-the-art treatment that’s very specific to your cancer,” and that “[t]he radiation dose you receive is your dose, sculpted to the shape of your cancer.”

Put the phrase “treatment as unique as you are” into a search engine, and a host of providers and products appear, from rehab facilities to procedures for Benign Prostatic Hyperplasia, from fertility centers in Nevada to orthodontist practices in Florida.

The appeal of such advertisements is not difficult to understand. Capitalism thrives on the (mass-)production of uniqueness. The commodity becomes the means of fashioning a modern “self,” what the poet Kate Tempest describes as “The joy of being who we are / by virtue of the clothes we buy” (94). Think, too, of the “curated”—as though carefully and personally selected just for you—content online advertisers supply. It goes without saying that we want this in healthcare, to feel that the doctor is tailoring their questions, procedures, and prescriptions to our individual case.

And yet, though we can and should see the market mechanisms at work beneath “treatment as unique as you are,” the line encapsulates a very real medical-scientific phenomenon. In 1998, for example, Genentech and UCLA released Trastuzumab, an antibody extremely effective against (only) those breast cancers linked to the overproduction of the protein HER2 (roughly one-fifth of all cases). More ambitiously, biologist Ross Cagan proposes to use a massive population of genetically engineered fruit flies, keyed to the makeup of a patient’s tumor, to identify potential cocktails among thousands of drugs.

Personalized medicine does not depend on the wonders of twenty-first-century technology: it is as old as medicine itself. Ancient Greek physiology posited that the body was made up of four humors—blood, phlegm, yellow bile, and black bile—and that each person combined the four in a unique proportion. In consequence, treatment, be it medicine, diet, exercise, physical therapies, or surgery, had to be calibrated to the patient’s particular humoral makeup. Here, again, personalization is not an illusion: professionals were customizing care, using the best medical knowledge available.

Medicine is a human activity, and thus subject to the variability of human conditions and interactions. This may be uncontroversial: even when the diagnoses are identical, a doctor justifiably handles a forty-year-old patient differently from a ninety-year-old one. Even a mild infection may be lethal to an immunocompromised body. But there is also the long and shameful history of disparities in medical treatment among races, ethnicities, genders, and sexual identities—to say nothing of the “health gaps” between rich and poor societies and rich and poor patients. For years, AIDS was a “gay disease” or confined to communities of color, while cancer only slowly “crossed the color line” in the twentieth century, as a stubborn association with whiteness fell away. Women and minorities are chronically under-medicated for pain. If medication is inaccessible or unaffordable, a “curable” condition—from tuberculosis (nearly two million deaths per year) to bubonic plague (roughly 120 deaths per year)—is anything but.

Let us think with Bloodflower’s Melancholia, and with RPI-deficiency and Fields’ Disease. Or, let us take seriously the less-outré individualities that constitute modern medicine. What does that mean for our definition of disease? Are there (at least) as many pneumonias as there have ever been patients with pneumonia? The question need not detain medical practitioners too long—I suspect they have more pressing concerns. But for the historian, the literary scholar, and indeed the ordinary denizen of a world full to bursting with microbes, bodies, and symptoms, there is something to be gained in probing what we talk about when we talk about a “disease.”


Colonies of M. tuberculosis

The question may be put spatially: where is disease? Properly schooled in the germ theory of disease, we instinctively look to the relevant pathogens—the bacterium Mycobacterium tuberculosis as the avatar of tuberculosis, the human immunodeficiency virus as that of AIDS. These microscopic agents often become actors in historical narratives. To take one eloquent example, Diarmaid MacCulloch writes, “It is still not certain whether the arrival of syphilis represented a sudden wanderlust in an ancient European spirochete […]” (95). The price of evoking this historical power is anachronism, given that sixteenth-century medicine knew nothing of spirochetes. The physician may conclude from the mummified remains of Ramses II that it was M. tuberculosis (discovered in 1882), and thus tuberculosis (clinically described in 1819), that killed the pharaoh, but it is difficult to know what to do with that statement. Bruno Latour calls it “an anachronism of the same caliber as if we had diagnosed his death as having been caused by a Marxist upheaval, or a machine gun, or a Wall Street crash” (248).

The other intuitive place to look for disease is the body of the patient. We see chicken pox in the red blisters that form on the skin; we feel the flu in fevers, aches, coughs, shakes. But here, too, analytical dangers lurk: many conditions are asymptomatic for long periods of time (cholera, HIV/AIDS), while others’ most prominent symptoms are only incidental to their primary effects (the characteristic skin tone of Yellow Fever is the result of the virus damaging the liver). Conversely, Hansen’s Disease (leprosy) can present in a “tuberculoid” form that does not cause the stereotypical dramatic transformations. Ultimately, diseases are defined through a constellation of possible symptoms, any number of which may or may not be present in a given case. As Susan Sontag writes, “no one has everything that AIDS could be” (106); in a more whimsical vein, no two people with chicken pox will have the same pattern of blisters. And so we return to the individuality of disease. So is disease no more than a cultural construction, a convenient umbrella-term for the countless micro-conditions that show sufficient similarities to warrant amalgamation? Possibly. But the fact that no patient has “everything that AIDS could be” does not vitiate the importance of describing these possibilities, nor their value in defining “AIDS.”

This is not to deny medical realities: DNA analysis demonstrates, for example, that the Mycobacterium leprae preserved in a medieval skeleton found in the Orkney Islands is genetically identical to modern specimens of the pathogen (Taylor et al.). But these mental constructs are not so far from how most of us deal with most diseases, most of the time. Like “plague,” at once a biological phenomenon and a cultural product (a rhetoric, a trope, a perception), so for most of us Ebola or SARS remain caricatures of diseases, terrifying specters whose clinical realities are hazy and remote. More quotidian conditions—influenza, chicken pox, athlete’s foot—present as individual cases, whether our own or those around us, analogized to the generic condition by memory and common knowledge (and, nowadays, internet searches).

Perhaps what Bloodflower’s Melancholia—or, if you prefer, Bloodflower’s Melancholia by Proxy—offers is an uneasy middle ground between the scientific, the cultural, and the conceptual. Between the nebulous idea of “plague,” the social problem of a plague, and the biological entity Yersinia pestis is the individual person and the individual body, possibly infected with the pathogen, possibly to be identified with other sick bodies around her, but, first and last, a unique entity.


Newark Bay, South Ronaldsay

Consider the aforementioned skeleton of a teenage male, found when erosion revealed a Norse Christian cemetery at Newark Bay on South Ronaldsay (one of the Orkney Islands). Radiocarbon dating can place the burial somewhere between 1218 and 1370, and DNA analysis demonstrates the presence of M. leprae. The team that found this genetic signature was primarily concerned with the scientific techniques used, the hypothetical evolution of the bacterium over time, and the burial practices associated with leprosy.

But this particular body produces its particular knowledge. To judge from the remains, “the disease is of long standing and must have been contracted in early childhood” (Taylor et al., 1136). The skeleton, especially the skull, indicates the damage done in a medical sense (“The bone has been destroyed…”), but also in the changes wrought to his appearance (“the profile has been greatly reduced”). A sizable lesion has penetrated through the hard palate all the way into the nasal cavity, possibly affecting breathing, speaking, and eating. This would also have been an omnipresent reminder of his illness, as would the several teeth he had probably lost (1135).

What if we went further? How might the relatively temperate, wet climate of the Orkneys have impacted this young man’s condition? What treatments were available for leprosy in the remote maritime communities of the medieval North Sea—and how would they interact with the symptoms caused by M. leprae? Social and cultural history could offer a sense of how these communities viewed leprosy; clinical understandings of Hansen’s Disease some idea of his physical sensations (pain—of what kind and duration? numbness? fatigue?). A forensic artist, with the assistance of contemporary symptomatology, might even conjure a semblance of the face and body our subject presented to the world. Of course, much of this would be conjecture, speculation, imagination—risks, in other words, but risks perhaps worth taking to restore a few tentative glimpses of the unique world of this young man, who, no less than Peter Bloodflower, was sick with an illness all his own.

What has Athens to do with London? Plague.

By Editor Spencer J. Weinreich


Map of London by Wenceslas Hollar, c.1665

It is seldom recalled that there were several “Great Plagues of London.” In scholarship and popular parlance alike, only the devastating epidemic of bubonic plague that struck the city in 1665 and lasted the better part of two years holds that title, which it first received in early summer 1665. To be sure, the justice of the claim is incontrovertible: this was England’s deadliest visitation since the Black Death, carrying off some 70,000 Londoners and another 100,000 souls across the country. But note the timing of that first conferral. Plague deaths would not peak in the capital until September 1665, the disease would not take up sustained residence in the provinces until the new year, and the fire was more than a year in the future. Rather than any special prescience among the pamphleteers, the nomenclature reflects the habit of calling every major outbreak in the capital “the Great Plague of London”—until the next one came along (Moote and Moote, 6, 10–11, 198). London experienced a major epidemic roughly every decade or two: recent visitations had included 1592, 1603, 1625, and 1636. That 1665 retained the title is due in no small part to the fact that no successor arose; this was to be England’s last major outbreak of bubonic plague.

Serial “Great Plagues of London” remind us that epidemics, like all events, stand within the ebb and flow of time, and draw significance from what came before and what follows after. Of course, early modern Londoners could not know that the plague would never return—but they assuredly knew something about its past.

Early modern Europe knew bubonic plague through long and hard experience. Ann G. Carmichael has brilliantly illustrated how Italy’s communal memories of past epidemics shaped perceptions of and responses to subsequent visitations. Seventeenth-century Londoners possessed a similar store of memories, but their plague-time writings mobilize a range of pasts and historiographical registers that includes much more than previous epidemics or the history of their own community: from classical antiquity to the English Civil War, from astrological records to demographic trends. Such richness accords with the findings of the formidable scholarly phalanx investigating “the uses of history in early modern England” (to borrow the title of one edited volume), which informs us that sixteenth- and seventeenth-century English people had a deep and sophisticated sense of the past, instrumental in their negotiations of the present.

Let us consider a single, iconic strand in this tapestry: invocations of the Plague of Athens (430–26 B.C.E.). Jacqueline Duffin once suggested that writing about epidemic disease inevitably falls prey to “Thucydides syndrome” (qtd. in Carmichael 150n41). In the centuries since the composition of the History of the Peloponnesian War, Thucydides’s hauntingly vivid account of the plague (II.47–54) has influenced writers from Lucretius to Albert Camus. Long lost to Latin Christendom, Thucydides was slowly reintegrated into Western European intellectual history beginning in the fifteenth century. The first (mediocre) English edition appeared in 1550, superseded in 1628 with a text by none other than Thomas Hobbes. For more than a hundred years, then, Anglophone readers had access to Thucydides, while Greek and Latin versions enjoyed a respectable, if not extraordinary, popularity among the more learned.


Michiel Sweerts, Plague in an Ancient City (1652), believed to depict the Plague of Athens

In 1659, the churchman and historian Thomas Sprat, booster of the Royal Society and future bishop of Rochester, published The Plague of Athens, a Pindaric versification of the accounts found in Thucydides and Lucretius. Sprat’s Plague has been convincingly interpreted as a commentary on England’s recent political history—viz., the Civil War and the Interregnum (King and Brown, 463). But six years on, the poem found fresh relevance as England faced its own “too ravenous plague” (Sprat, 21). The savvy bookseller Henry Brome, who had arranged the first printing, brought out two further editions in 1665 and 1667. Because the poem was prefaced by the relevant passages of Hobbes’s translation, an English text of Thucydides was in print throughout the epidemic. It is of course hardly surprising that at moments of epidemic crisis, the locus classicus for plague should sell well: plague-time interest in Thucydides is well-attested before and after 1665, in England and elsewhere in Europe.

But what does the Plague of Athens do for authors and readers in seventeenth-century London? As the classical archetype of pestilence, it functions as a touchstone for the ferocity of epidemic disease and a yardstick by which the Great Plague could be measured. The physician John Twysden declared, “All Ages have produced as great mortality and as great rebellion in Diseases as this, and Complications with other Diseases as dangerous. What Plague was ever more spreading or dangerous than that writ of by Thucidides, brought out of Attica into Peloponnesus?” (111–12).

One flattering rhymester welcomed Charles II’s relocation to Oxford with the confidence that “while Your Majesty, (Great Sir) shines here, / None shall a second Plague of Athens fear” (4). In a less reassuring vein, the societal breakdown depicted by Thucydides warned England what might ensue from its own plague.

Perhaps with that prospect in mind, other authors drafted Thucydides as their ally in catalyzing moral reform. The poet William Austin (who was in the habit of ruining his verses by overstuffing them with classical references) seized upon the Athenians’ passionate devotions in the face of the disaster (History, II.47). “Athenians, as Thucidides reports, / Made for their Dieties new sacred courts. / […] Why then wo’nt we, to whom the Heavens reveal / Their gracious, true light, realize our zeal?” (86). In a sermon entitled The Plague of the Heart, John Edwards enlisted Thucydides in the service of his conceit of a spiritual plague that was even more fearsome than the bubonic variety:

The infection seizes also on our memories; as Thucydides tells us of some persons who were infected in that great plague at Athens, that by reason of that sad distemper they forgot themselves, their friends and all their concernments [History, II.49]. Most certain it is that by the Spirituall infection men forget God and their duty. (8)

Not dissimilarly, the tailor-cum-preacher Richard Kingston paralleled the plague with sin. He characterizes both evils as “diffusive” (23–24), citing Thucydides to the effect that the plague began in Ethiopia and moved thence to Egypt and Greece (II.48).

On the supposition that, medically speaking, the Plague of Athens was the same disease they faced, early modern writers treated it as a practical precedent for prophylaxis, treatment, and public health measures. Thucydides was one of several classical authorities cited by the Italian theologian Filiberto Marchini to justify open-field burials, based on their testimony that wild animals shunned plague corpses (Calvi, 106). Rumors of plague-spreading also stoked interest in the History, because Thucydides records that the citizens of Piraeus believed the epidemic arose from the poisoning of wells (II.48; Carmichael, 149–50).


Peter Paul Rubens, Hippocrates (1638)

It should be noted that Thucydides was not the only source for early modern knowledge about the Plague of Athens. One William Kemp, extolling the preventative virtues of moderation, tells his readers that it was temperance that preserved Socrates during the disaster (58–59). This anecdote comes not from Thucydides but from Claudius Aelianus, who relates of the philosopher’s constitution and moderate habits that “[t]he Athenians suffered an epidemic; some died, others were close to death, while Socrates alone was not ill at all” (Varia historia, XIII.27, trans. N. G. Wilson). (Interestingly, 1665 saw the publication of a new translation of the Varia historia.) Elsewhere, Kemp relates how Hippocrates organized bonfires to free Athens of the disease (43), a story that originates with the pseudo-Galenic On Theriac to Piso, but probably reached England via Latin intermediaries and/or William Bullein’s A Dialogue Against the Fever Pestilence (1564). Hippocrates’s name, and his supposed victory over the Plague of Athens, were used to advertise cures and preventatives.

 

With the exception of Sprat—whose poem was written in 1659—these are all fleeting references, but that is in some sense the point. The Plague of Athens, Thucydides, and his History had entered the English imaginary, a shared vocabulary for thinking about epidemic disease. To quote Raymond A. Anselment, Sprat’s poem (and other invocations of the Plague of Athens) “offered through the imitation of the past an idea of the present suffering” (19). In the desperate days of 1665–66, the mere mention of Thucydides’s name, regardless of the subject at hand, would have been enough to conjure the specter of the Athenian plague.

Whether or not one built a public health plan around “Hippocrates’s” example, or looked to the History of the Peloponnesian War as a guide to disease etiology, the Plague of Athens exerted an emotional and intellectual hold over early modern English writers and readers. In part, this was merely a sign of the times: early modern Europeans were profoundly invested in the past as a mirror for and guide to the present and the future. In England, the Great Plague came at the height of a “rage for historical parallels” (Kewes, 25)—and no corner of history offered more distinguished parallels than classical antiquity.

And let us not undersell the affective power of such parallels. The value of recalling past plagues was the simple fact of their being past. Awful as the Plague of Athens had been, it had eventually passed, and Athens still stood. Looking backwards was a relief from a present dominated by the epidemic, and from the plague’s warped temporality: the interruption of civic and liturgical rhythms and the ordinary cycle of life and death. Where “an epidemic denies time itself” (Calvi, 129–30), history restores it, and offers something like orientation—even, dare we say, hope.

 

A Story of Everything

By guest contributor Nuala F. Caomhánach


Nasser Zakariya, A Final Story: Science, Myth, and Beginnings (University of Chicago Press, 2017)

In A Final Story: Science, Myth, and Beginnings (2017), Nasser Zakariya pries open a Latourian black box to reveal how natural philosophers and later scientists constructed “scientific epics” using four possible “genres of synthesis”—historic, fabular, scalar, foundational—to frame all branches of scientific knowledge. Their totalizing aspirations displaced outliers, contradictions, and obstructions to elevate a universal, global history of the universe. Zakariya highlights the paradox of the process of science from the 1830s to the present. He shows how the parallel forces of narration and scientific explanatory methods merely continued to confirm discursive, epistemological, and ontological pluralism. The desire to tame this pluralism legitimized the boundaries of science through pedagogy, priority, and authority. A panel of historians recently met to discuss the book at New York University, including the author (UC Berkeley), Myles Jackson (NYU), Hent de Vries (NYU), and Marwa Elshakry (Columbia University), moderated by Stefanos Geroulanos (NYU) (an audio recording of the event is available above).


Prof. Myles Jackson (NYU)

Jackson agreed that the invention of a myth or tradition “usually deals with origin stories and tend[s] to be universal.” Tradition “result[s] from some sort of conflict, or debate, shaken identity, boundary dispute and…[has] a moral dimension.” Jackson emphasized how critical the 1820s and 1830s proved as science began to specialize alongside the invention of the term ‘scientist’ (1834) by William Whewell. He wholeheartedly agreed with Zakariya’s interpretation that for such natural philosophers as John Herschel, the “scientific context is irrelevant, precisely because science is universal.” Jackson elaborated on the conflictual divisions between artisans and natural philosophers whereby makers of scientific instrumentation—crucial for advances in science—were denied the status of philosophers themselves. This division proved social, cultural, and political at the turn of the nineteenth century, as knowledge became commodified and natural philosophers legitimized their role as creators and curators of science. Jackson mapped out the “contextual moral” for this transition, and pointed to the Industrial Revolution and its effects on Handwerk and Kopfwerk. Natural philosophers insisted that artisans “should reveal their secrets so that their knowledge could be managed and applied universally, a great Enlightenment trope.” Their interest in economic efficiency focused on replacing artisan skills and human calculators with controllable machines. For these natural philosophers, “the key was the unity of science serving as models for other forms of knowledge.” Yet, as Jackson concluded, “there is an ethics for scientific imperative… the usefulness of useless knowledge… a moral argument that we must do it because it is about knowledge itself.”


Prof. Nasser Zakariya (UC Berkeley)

Zakariya agreed that there are “still richer contexts” in the analysis of “matters of material practices.” He acknowledged that his actors “were deeply engaged in reconstruction of both technical craft they were working through, and the theorization of that technical craft.” Discourse drew Zakariya away from material practice, toward his actors’ resistance to a historical synthesis. Their anxieties rested with who they imagined had the expertise to undertake this synthesis. Therefore, the synthesis “starts to construct… despite their democratizing impulses… a particular kind of elite that will carry out that democratization.” For Herschel, an author like Alexander von Humboldt in his Kosmos suggested that “if [synthesis] were possible… people like Humboldt [were the philosophers] to do it and yet we find Humboldt is insufficient to be able to do this.” For Zakariya, what is at stake are the discursive maneuvers involved in trying to articulate “what is and what is not possible” within these genres. His actors are not stating that synthesis is not possible, only that a historical narrative of the universe was not possible.


Prof. Hent de Vries (NYU)

For de Vries, what is at stake is contemporary scholarship that has returned to the “age-old appeal to myth.” He met Zakariya‘s use of the term “myth” with suspicion, albeit agreeing with its premise. Zakariya echoed Adorno’s and Horkheimer’s concerns in Dialectic of Enlightenment by arguing that “[J]ust as myths already entail enlightenment, with every step enlightenment entangles itself more deeply in mythology. Receiving all its subject matter from myths, in order to destroy them, it falls as judge under the spell of myth” (10). Myth and enlightenment co-evolve into a constricting knot, despite the notion that the foundation of the inductive sciences was based on “the rejection of tradition, mythic authority” (10). With additional knowledge of the physical and natural world, de Vries pointed out the possibility “that there is ‘a final story’ to be told about the emergence of the frames or ‘genres for synthesizing’ knowledge in question.” He emphasized this as “a meta- or mega-narrative, a myth of myth,” but questioned whether final theories or stories “offer just that,” because they are built on “empirical finalities that are… particular, not general and decidedly partial, but also on account of a fundamental, call it transcendentally grounded, incompleteness, of sorts.” “Or is it?” he asked.


Prof. Marwa Elshakry (Columbia)

Elshakry also probed Zakariya’s categories of “myth,” “epic,” and “universal histories,” out of “genuine curiosities.” She found the main tension was conceptual, not semantic, and was “connected ultimately to [the] alpha and omega of universal history, with myth concerning ultimately and uncomfortably the notion of final ends [and] epic… primarily a concern with origins.” The quest of the scientists, as they vacillated “between the known and unknown,” is to begin to recognize that “this very heroic quest” may also reveal the “story of self-destruction rather than point the way to cures and wonders or the idea that being human… engages us in an extended historical process of self-destruction.” She wondered about the logic behind the pursuit of “scientific realism as a Hegelian process of negation and death.” Surely this pursuit suggested that humans can “induce and deduce our own ultimate species death and extinction… and yet we cannot.” Therefore, there was an inherent tension between “the secular humanist order and a sacred one.” She concluded with a tantalizing question: “what is the final story—if in our own minds own science narratives or cosmic epics, come up with a good origin, but [we seem in our] collective species being imaginaries incapable of dealing with the problem of death itself.”

Zakariya tackled these questions eloquently. He explained how these scientists did not endorse myth uncritically, acknowledging their awareness of the paradoxes they had adopted. This paradox was a “tendency to have this eruption of a kind of mythic status to the project of knowledge, despite the project of knowledge seeing itself often as undercutting the grounds upon [which] myth stands.” These natural philosophers’ and scientists’ totalizing ambitions forced them to question the very axioms with which the framework was constructed. Zakariya noted the constant reinscription “of the work of doing the totalizing” as these men argued that science was the most effective and natural discipline to tell this scientific epic. Their frameworks were limitless, but as they enlarged these structures, the edges became frayed, and they were forced to brood over questions that “[brought] us back to critiques of reason.”

In response to Elshakry, Zakariya revealed that she had uncovered “a number of elements [he] hadn’t quite thought about.” In answer, he discussed Hermann von Helmholtz’s views on the idea of universal history. In a period when thermodynamics was emerging around the contradictory concepts of entropy, enthalpy, and conservation, the new physics began to suggest the impossibility of an infinitely old universe. By integrating thermodynamics into a scientific epic, Helmholtz realized that one must “bear up to this idea that it spells a conclusion of ourselves.” Similarly, these epics, for Zakariya, “forc[e] us to dwell on our mortality as a species.”

This book is a must for scholars in both the sciences and the humanities. Zakariya’s intervention, fusing the physical, natural and human histories, shows how the historical narrative in epic form was not self-evident, and a “strict chronology was not the ultimate arbiter” (126). Political contexts and contests influenced competing worldviews of humanity, the Earth, and the universe, in the professional and the public sphere. He demonstrates how a scientific worldview grows from the kind of questions asked, how these physical and metaphysical spaces are symbiotic, complementing and contradicting at the same moment, and how many universes can emerge. At the core of this narrative is a selection of personalities, from Mary Somerville to Steven Weinberg, who oversaw the totalizing visions circulating between professional and popular epics. As they shoe-horned these visions into a single narrative, some made the synthesis, many others sank, and some were transformed, leaving behind the forces, traces, and circumstances they had come up against. Slowly, the “suburban position of humanity and the earth” revealed the real limits of science (313). In this anxiety, the voice of the scientists metamorphosed into the only voice for the planet itself, thus claiming hegemony over the history of the universe.  The reader travels through Zakariya’s mindfully researched and vividly written tales of the attempt to stage the construction of a whole of knowledge, of everything. Thus, “whatever the future condition of species being and knowing, the universal human story must be maintained in the generic form of an epic” (339).

Nuala F. Caomhánach is a Ph.D. student in the History Department at New York University, and a research associate in the Invertebrate Zoology Department at the American Museum of Natural History.

A Man Walks Into A Bar; or the possibilities of the individual in international history.

By Editor Sarah Claire Dunstan

One summer’s afternoon in 1923, a French barrister was enjoying a drink in a Parisian café. A man of broad experience and education, the barrister was also a medical doctor who had served in the First World War. This service had allowed him to become a French citizen in 1915, a privilege denied previously because he was a native of the former Kingdom of Dahomey, now a French colonial territory.


Comtesse de Ségur of the Comédie Française

Kojo Tovalou Houénou was not just from Dahomey; he also claimed the title of Prince on the basis that his mother was the sister of the last King. Contemporaries and later scholars doubted the veracity of this claim, but it made him of much interest to the Parisian dailies. In their pages, tales of his exploits amongst bohemian circles – notably his on-again, off-again affair with the Comtesse de Ségur of the Comédie Française – were reported with glee.

On this particular August afternoon, Houénou was simply a French man. Or at least he was until a group of drunk Americans sat down at a table nearby. He thought little of them until they began to object, loudly, to his presence. The waiters, virtuous Frenchmen one and all, refused to eject Houénou from the café, but the Americans grew rowdier. Finally, the foreigners stood up, dragged him from the café, beat him up, and threw him into the gutter. This example of American racism shocked Houénou, awakening him to the reality of black experiences outside of la belle France. He resolved to do all that he could to extend and uphold the principles of French civilization and to protect the less fortunate amongst his race. To this end, Houénou founded the Ligue Universelle pour la défense de la race noire and its journal, Les Continents. This very tale was printed in one of the early issues and reiterated as the origin story for the Ligue by other press outlets such as the African American journal the Crisis and Marcus Garvey’s newspaper, the Negro World, as well as by Houénou himself in speeches delivered to mainly black audiences in Paris and New York.

Although primarily concerned with abuses being perpetrated towards the indigenous populations in the French colonies, Les Continents became one of the first francophone print forums for collaborations between African American activists and thinkers and their French counterparts, crafting a bridge between Harlem and the Parisian left bank. The Ligue itself had a mission statement that articulated its desire to ‘develop the bonds of solidarity and universal brotherhood between all members of the black race.’ Celebrated Harlem Renaissance figures from Alain Locke and Langston Hughes through to Countee Cullen published in the journal. Under Houénou’s leadership, the group built relationships with the American National Association for the Advancement of Colored People and Marcus Garvey’s Universal Negro Improvement Association. As a result, the Ligue has received some scholarly attention as an institution that fostered black international solidarity (most notably in Brent Hayes Edwards’s wonderful The Practice of Diaspora, Christopher L. Miller’s Nationalists and Nomads, and Michael Goebel’s Anti-Imperial Metropolis). More than that, Houénou’s neat origin story has much in common with those employed contemporaneously by other black activists as they attempted to leverage the potential of French civilization against the specter of American racial discord and to agitate against racism in France. Insofar as the existing scholarship is concerned, Houénou tends to appear in histories of black internationalism that focus upon institutional organization or ideological mechanisms. Where his activism is given credence, it is as a corrective to the scholarship’s tendency to focus upon the African American presence in movements towards black internationalism. Always, Houénou’s experience is subsumed in the institutions he founded or participated in.


From left to right: Marc Quenum, Kojo Tovalou Houénou and Marcus Garvey in Harlem, 1924.

This is due in part to the scarcity and nature of remaining sources. No archive holds Houénou’s personal papers. Fragments of his life have to be pieced together from newspaper articles from his heyday in the Parisian social landscape, or from letters appearing in other collections such as that of W.E.B. Du Bois. The Service de contrôle et d’assistance des indigènes, established by the French Minister for the Colonies Albert Sarraut in 1923, offers perhaps the most comprehensive chronology of Houénou’s life. Given that Sarraut utilized the Service for surveillance of those deemed threatening to the French imperial system, this tends to emphasize his involvement in black activist organizations rather than pay heed to his individual behavior. All the more so given the French authorities’ tendency to conflate all Pan-Africanist organization with Garveyism, and all Garveyism with insurrectionist and usually Bolshevik politics. When the Senegalese politician Blaise Diagne successfully sued Les Continents for libel in 1924, the paper and the organization folded, leaving Houénou bankrupt. He was forced to leave Paris and to renounce his diasporan affiliations (specifically any connection with Marcus Garvey) before he was allowed back into Dahomey. Black international solidarity at this moment, then, appeared to crumble in the face of the machinery of the French Third Republic.

Inverting the study to map an international history through Houénou’s individual perspective, however, changes the narrative from one of failure at the hands of unstoppable empire. Instead, it allows us to re-position the way we think about the spatial geography of black internationalism, which is often characterized in terms of experiences in Northern hemisphere metropoles. Houénou himself participated in the construction of this narrative with his repeated telling and refashioning of the café incident. The Ligue and the other black activist organizations he participated in certainly were rooted in Paris and New York. Moreover, the freedom of speech permitted in Paris as opposed to the colonies created a space for black internationalism that would not have been possible elsewhere. However, his own individual experiences belie the story he constructed.

In 1921, two years prior to his ‘racial awakening’, he had visited Dakar. Whilst there, he spoke to the Senegalese tirailleurs who had been abandoned by the French Government after fulfilling their conscripted duties. The reality of their exploitation was only too visible and Houénou spoke out to local authorities about it. He was ignored. Soon afterwards he published a little-read book entitled L’Involution des métamorphoses et des métempsychoses de l’univers. In it, he attacked European assumptions of cultural superiority by arguing that each people and culture comprised equal parts of a universal civilization. Early in 1923, in the aftermath of rioting in Porto-Novo in Dahomey, he criticized the colonial administrators’ handling of the issues, to little avail. True, neither incident was quite so personal and dramatic as being beaten up in a Parisian café but they do indicate a public engagement with the question of race on an imperial, if not an international, level much earlier than narratives focusing upon the Ligue or his UNIA support allow. It also locates the site of his racial awakening outside the colonial metropole.

This reframes our understanding of the valency of a racial awakening in Paris rather than Porto-Novo or Dakar, pointing to the way that gestures of black solidarity were sometimes easier to perform in the metropole than elsewhere. In particular, it demonstrates the crucial symbolic role that examples of US racism played in francophone black activism at this time. This is especially clear when one looks beyond Houénou’s sanctioned version of the story to the one relayed in other sources such as the Parisian press: it was a French bartender who threw Houénou from the premises and beat him, not the crowd of racist Americans who bayed for his removal. Moreover, Houénou’s activities after the collapse of the Ligue and his departure from Paris lead the historian away from the print formulations of universal black brotherhood found in Les Continents to their application on the ground in Africa.

Hardly a year after his relocation to Dahomey, Houénou and a group of unnamed allies attempted to overthrow French colonial rule there. His movement was small, ill-equipped, and failed spectacularly. Forced to flee to Togo, Houénou was quickly caught and imprisoned. Some reports indicate that he was incarcerated for five years, others three. What we do know is that he was never allowed to enter Dahomey again. Instead, he went to Senegal by 1930, possibly as early as 1928, and became heavily involved in Senegalese politics. At first he supported Ngalandou Diouf against Blaise Diagne in the elections of 1932. He would switch candidates for the following election of 1934, supporting Lamine Gueye against Diouf. In both cases, Houénou applied a committed Pan-Africanism of the type that the French colonial authorities feared Garveyism represented: the call for the recognition of the equality of all races and the independence of African territories from colonial rule. Neither Diouf nor Gueye was quite so radical in his views. Indeed, Houénou’s platform was far removed from the Parisian story that played American racism off against la belle France. His early cries for universal black brotherhood had been transformed, by his treatment at the hands of the colonial authorities, into support for total independence for Africa.

Houénou’s involvement in Senegalese politics is usually not considered in the context of black internationalism. To be strictly honest, it has not exactly earned him a noteworthy place in the annals of Senegalese history either. He met an ignominious end in the electoral campaign of 1936 when the meeting he was running exploded into violence. Nevertheless, by focusing on Houénou’s own story, rather than solely upon his involvement in the international and diasporic institutions he helped to build, it is possible to shift the geography of black internationalism away from imperial metropoles back to the African continent.

Sarah Claire Dunstan is an ARC Postdoctoral Fellow with the International History Laureate at the University of Sydney (@IntHist). She is an intellectual historian of twentieth-century France and the United States with a particular interest in questions of race, rights, and gender. She can be found on Twitter @sarahcdunstan.

The Historical Origins of Human Rights: A Conversation with Samuel Moyn

By guest contributor Pranav Kumar Jain


Professor Samuel Moyn (Yale University)

Since the publication of The Last Utopia: Human Rights in History, Professor Samuel Moyn has emerged as one of the most prominent voices in the field of human rights studies and modern intellectual history. I recently had a chance to interview him about his early career and his views on human rights and recent developments in the field of history.

Moyn was educated at Washington University in St. Louis, where he studied history and French literature. In St. Louis, he fell under the influence of Gerald Izenberg, who nurtured his interest in modern French intellectual history. After college, he proceeded to Berkeley to pursue his doctorate under the supervision of Martin Jay. However, unexcited at the prospect of becoming a professional historian, he left graduate school after taking his orals and enrolled at Harvard Law School. After a year in law school, he decided that he did want to finish his Ph.D. after all. He switched the subject of his dissertation to a topic that could be done on the basis of materials available in American libraries. Drawing upon an earlier seminar paper, he decided to write about the interwar moral philosophy of Emmanuel Levinas. After graduating from Berkeley and Harvard in 2000-01, he joined Columbia University as an assistant professor in history.

Though he had never written about human rights before, he had become interested in the subject in law school and during his work in the White House at the time of the Kosovo bombings. At Columbia, he decided to pursue his interest in human rights further and began to teach a course called “Historical Origins of Human Rights.” The conversations in this class were complemented by those with two newly arrived faculty members, Mark Mazower and Susan Pedersen, both of whom were then working on the international history of the twentieth century. In 2008, Moyn decided that it was finally time to write about human rights.


Samuel Moyn, The Last Utopia: Human Rights in History (Cambridge: Harvard University Press, 2012)

In The Last Utopia, Moyn’s aim was to contest the theories about the long-term origins of human rights. His key argument was that it was only in the 1970s that the concept of human rights crystallized as a global language of justice. In arguing thus, he sharply distinguished himself from the historian Lynn Hunt, who had suggested that the concept of human rights stretched all the way back to the French Revolution. Before Hunt published her book on human rights, Moyn told me, his class had shared some of her emphasis. Both scholars, for example, were influenced by Thomas Laqueur’s account of the origins of humanitarianism, which focused on the upsurge of sympathy in the eighteenth century. Laqueur’s argument, however, had not even mentioned human rights. Hunt’s genius (or mistake?), Moyn believes, was to make that connection.

Moyn, however, is not the only historian to see the 1970s as a turning point. In his Age of Fracture (2012), intellectual historian Daniel Rodgers has made a similar argument about how the American postwar consensus came under increasing pressure and finally shattered in the 70s. But there are some important differences. As Moyn explained to me, Rodgers’s argument is more about the disappearance of alternatives, whereas his is more concerned with how human rights survived that difficult moment. Furthermore, Rodgers’s focus on the American case makes his argument unique because, in comparison with transatlantic cases, the American tradition does not have a socialist starting point. Both Moyn and Rodgers, however, have been criticized for failing to take neoliberalism into account. Moyn says that he has tried to address this in his forthcoming book Not Enough: Human Rights in an Unequal World.

Some have come to see Moyn’s book as mostly about President Jimmy Carter’s contributions to the human rights revolution. Moyn himself, however, thinks that the book is ultimately about the French Revolution and its abandonment in modern history for an individualistic ethics of rights, including the Levinasian ethics which he once studied. In Moyn’s view, human rights are a part of this “ethical turn.” While he was working on the book, Moyn’s own thinking underwent a significant revolution. He began to explore the place of decolonization in the story he was trying to tell. Decolonization was not something he had thought about very much before but, as arguably one of the biggest events of the twentieth century, it seemed indispensable to the human rights revolution. In the book, he ended up making the very controversial argument that human rights largely emerged as the response of westerners to decolonization. Since they had now lost the interventionist tool of empire, human rights became a new universalism that would allow them to think about, care about, and perhaps intervene in places they had once ruled directly.

Though widely acclaimed, Moyn’s thesis has been challenged on a number of fronts. For one thing, Moyn himself believes that the argument of the book is problematic because it globalizes a story that is mostly about French intellectuals in the 1970s. Then there are critics such as Stefan-Ludwig Hoffmann, a German historian at UC Berkeley, who have suggested, in Moyn’s words, that “Sam was right in dismissing all prior history. He just didn’t dismiss the 70s and 80s.” Moyn says that he finds Hoffmann’s arguments compelling and that, if we think of human rights primarily as a political program, the 90s do deserve the lion’s share of attention. After all, Moyn’s own interest in the politics of human rights emerged during the 90s.


Eleanor Roosevelt with a Spanish-language copy of the Universal Declaration of Human Rights

Perhaps one of Moyn’s most controversial arguments is that the field of the history of human rights no longer has anything new to say. Most of the questions about the emergence of the human rights movements and the role of international institutions have already been answered. Given the major debate provoked by his own work, I am skeptical that this is indeed the case. Plus, there are a number of areas which need further research. For instance, we need to better understand the connections between signature events such as the adoption of the Universal Declaration of Human Rights, and the story that Moyn tells about the 1970s. But I think Moyn made a compelling point when he suggested to me that we cannot continue to constantly look for the origins of human rights. In doing so, we often run the risk of anachronism and misinterpretation. For instance, some scholars have tried to tie human rights back to early modern natural law. However, as Moyn put it, “what’s lost when you interpret early modern natural law as fundamentally a rights project is that it was actually a duties project.”

Moyn is ambivalent about recent developments in the study and practice of history in general. He thinks that the rise of global and transnational history is a welcome development because, ultimately, there is no reason for methodological nationalism to prevail. However, in his view, this has had a somewhat adverse effect on graduate training. When he went to grad school, he took courses that focused on national historiographical canons and many of the readings were in the original language. With the rise of global history, it is not clear that such courses can be taught anymore. For instance, no teacher could demand that all the students know the same languages. Consequently, Moyn says, “most of what historians were doing for most of modern history is being lost.” This is certainly an interesting point, and it raises the question of how graduate programs can train their students to strike a balance between the wide perspectives of global history and the deep immersion of a more national approach.

Otherwise, however, in contrast with many of his fellow scholars, Moyn is surprisingly upbeat about the current state and future of the historical profession. He thinks that we are living in a golden age of historiography with many impressive historians producing outstanding works. There is certainly more scope for history to be more relevant to the public. But historians engaging with the public shouldn’t do so in crass ways, such as suggesting that there is a definitive relevance of history to public policy. History does not have to change radically. It can simply continue to build upon its existing strengths.


Professor Lynn Hunt (UCLA)

In the face of Lynn Hunt’s recent judgment that the field of “history is in crisis and not just one of university budgets,” this is a somewhat puzzling conclusion. However, it is one that I happen to agree with. Those who suggest that historians should engage with policy makers certainly have a point. However, instead of emphasizing the uniqueness of history, their arguments devolve into claims about what historians can do better than economists and political scientists. In the process, they often lose sight of the fact that, more than anything, historians are storytellers. History rightly belongs in the humanities rather than the social sciences. It is only in telling stories that inspire and excite the public’s imagination that historians can regain the respect that many think they have lost in the public eye.

Pranav Kumar Jain is a doctoral student in early modern history at Yale University.

In Dread of Derrida

By guest contributor Jonathon Catlin

According to Ethan Kleinberg, historians are still living in fear of the specter of deconstruction; their attempted exorcisms have failed. In Haunting History: For a Deconstructive Approach to the Past (2017), Kleinberg fruitfully “conjures” this spirit so that historians might finally confront it and incorporate its strategies for representing elusive pasts. A panel of historians recently discussed the book at New York University, including Kleinberg (Wesleyan), Joan Wallach Scott (Institute for Advanced Study), Carol Gluck (Columbia), and Stefanos Geroulanos (NYU), moderated by Zvi Ben-Dor Benite (NYU). A recording of the lively two-hour exchange is available at the bottom of this post.


Left to Right: Profs Geroulanos, Gluck, Kleinberg, and Scott

History’s ghost story goes back some decades. Hayden White’s Metahistory roiled the profession in 1973 by effectively translating the “linguistic turn” of French deconstruction into historical terms: historical narratives are no less “emplotted” in genres like romance and comedy, and hence no less unstable, than literary ones. White sparked fierce debate, notably about the limits of representing the Holocaust, which took place alongside probes into the ethics of those of deconstruction’s heroes with ties to Nazism, including Martin Heidegger and Paul de Man. The intensity of these battles was arguably a product of hatred for one theorist in particular: Jacques Derrida, whose work forms the backbone of Kleinberg’s book. Yet despite decades of scholarship undermining the nineteenth-century, Rankean foundations of the historical discipline, the regime of what Kleinberg calls “ontological realism” apparently still reigns. His book is not simply the latest in a long line of criticism of such work, but rather a manifesto for a positive theory of historical writing that employs deconstruction’s linguistic and epistemological insights.

This timely intervention took place, as Scott remarked, “in a moment when the death of theory has been triumphantly proclaimed, and indeed celebrated, and when many historians have turned with relief to accumulating big data, or simply telling evidence-based stories about an unproblematic past.” She lamented that

the self-reflexive moment and the epistemological challenge associated with names like Foucault, Irigaray, Derrida, and Lacan—all those dangerous French theorists who interrogated the very ground on which we stood—reality, truth, experience, language, the body—that moment is said to be past, a wrong turn taken; thankfully we’re now on the right course.

Scott praised Kleinberg’s book for haunting precisely this sense of “triumphalism.”

Kleinberg began his remarks with a disappointed but unsurprised reflection that most historians still operate under the spell of what he calls “ontological realism.” This methodology is defined by the attempt to recover historical events, which, insofar as they are observable, become “fixed and immutable.” This elides the difference between the “real” past and history (writing about the past), unwittingly taking “the map of the past,” or historical representation, as the past itself. It implicitly operates as if the past is a singular and discrete object available for objective retrieval. While such historians may admit their own uncertainty about events, they nevertheless insist that the events really happened in a certain way; the task is only to excavate them ever more exactly.

This dogmatism reigns despite decades of deconstructive criticism from the likes of White, Frank Ankersmit, and Dominick LaCapra in the pages of journals like History and Theory (of which Kleinberg is executive editor), which has immeasurably sharpened the self-consciousness of historical writing. In his 1984 History and Criticism, LaCapra railed against the “archival fetishism” then evident in social history, whereby the archive became “more than the repository of traces of the past which may be used in its inferential reconstruction” and took on the quality of “a stand-in for the past that brings the mystified experience of the thing itself” (p. 92, n. 17). If historians had read their Derrida, however, they would know that the past inscribed in writing “is ‘always already’ lost for the historian.” Scott similarly wrote in a 1991 Critical Inquiry essay: “Experience is at once always already an interpretation and is in need of interpretation.” As she cited from Kleinberg’s book, meaning is produced by reading a text, not released from it or simply reflected. Every text, no matter how documentary, is a “site of contestation and struggle” (15).

Kleinberg’s intervention is to remind us that this erosion of objectivity is not just a tragic story of decline into relativism, for a deconstructive approach also frees historians from the shackles of objectivism, opening up new sources and methodologies. White famously concluded in Metahistory that there were at the end of the day no “objective” or “scientific” reasons to prefer one way of telling a story to another, but only “moral or aesthetic ones” (434). With the acceptance of what White called the “Ironic” mode, which refused to privilege certain accounts of the past as definitive, also came a new freedom and self-consciousness. Kleinberg similarly revamps White’s Crocean conclusion that “all history is contemporary history,” reminding us that our present social and political preoccupations determine which voices we seek out and allow to speak in our work. We can never tell the authoritative history of a subject, but only construct a possible history of it.

Kleinberg relays the upside of deconstructive history more convincingly than White ever did: Opening up history beyond ontological realism makes room for “alternative pasts” to enter through the “present absences” in historiography. Contrary to historians’ best intentions, the hold of ontological positivism perversely closes out and renders illegible voices that do not fit with the dominant paradigm, marginalizing them to obscurity under the authority of each self-enclosed narrative. Hence making some voices legible too often makes others illegible, as when E. P. Thompson foregrounded the working class only to sideline women. The alternative is a porous account that allows itself to be penetrated by alterity and unsettled by the ghosts it has excluded. The latent ontology of holding onto some “real,” to the exclusion of others, would thus give way to a hauntology (Derrida’s play on the ambiguous sound of the French ontologie) whereby the text acknowledges and allows in present absences. Whereas for Kleinberg Foucault has been “tamed” by the historical discipline, this Derridean metaphor remains unsettling. Reinhart Koselleck’s notion of “non-simultaneity” (Ungleichzeitigkeit) further informs Kleinberg’s view of “hauntology as a theory of multiple temporalities and multiple pasts that all converge, or at least could converge, on the present,” that is, on the historian in the act of writing about the past (133).

Kleinberg fixates on the metaphor of the ghost because it represents the liminal in-between of absent presences and present absences. Ghosts are unsettling because they obey no chronology, flitting between past and present, history and dream. Yet deconstructive hauntology stands to enrich narratives because destabilized stories become porous to previously excluded voices. In his response, Geroulanos pressed Kleinberg to consider several alternative monster metaphors: ghosts who tell lies, not bringing back the past “as it really was” but making up alternative claims; and the in-between figure of the zombie, the undead past that has not passed.

Even in the theory-friendly halls of NYU, Kleinberg was met with some of the same suspicion and opposition that White faced decades ago. While all respondents conceded the theoretical import of Kleinberg’s argument, the question remained how to write such a history in practice. Preempting this question, Kleinberg’s conclusion includes a preview of a parallel book he has been writing on the Talmudic lectures Emmanuel Levinas presented in postwar Paris. He hopes to enact what Derrida called a “double session.” The first half of the book provides a secular intellectual history of how Levinas, prompted by the Holocaust, shifted from Heidegger to Talmud; but the second half tells this history from the perspective of revelation, inspired by “Levinas’s own counterhistorical claim that divine and ethical meaning transcends time,” telling a religious counter-narrative to the standard secular one. Scott praised the way Kleinberg’s two narratives provide two positive accounts that nonetheless unsettle one another. Kleinberg writes: “The two sessions pull at each other, creating cracks in any one homogenous history, through which portions of the heterogeneous and polysemic past that haunts history can rise and be activated.” This “dislodging” and “irruptive” method “marks an irreducible and generative multiplicity” of alternate histories (149). Active haunting prevents Kleinberg’s method from devolving into mere perspectivism; each narrative actively throws the other into question, unsettling its authority.

A further decentering methodology Kleinberg proposed was breaking through the “analog ceiling” of print scholarship into the digital realm. Gluck emphasized how digital or cyber-history has the freedom to be more associative than chronological, interrupting texts with links, alternative accounts, and media. Thus far, however, digital history, shackled by big data and “neoempiricism,” has largely remained in the grip of ontological realism, producing linear narratives. Still, there was some consensus that these technologies might enable new deconstructive approaches. In this sense, Kleinberg writes, “Metahistory came too soon, arriving before the platforms and media that would allow us to explore the alternative narrative possibilities that were at our ready disposal” (117).

Listening to Kleinberg, I thought of a recent experimental book by Yair Mintzker, The Many Deaths of Jew Süss: The Notorious Trial and Execution of an Eighteenth-Century Court Jew (2017). It tells the story of the death of Joseph Oppenheimer, the villain of the infamous Nazi propaganda film Jud Süss (1940) produced at the behest of Nazi propaganda minister Joseph Goebbels. Mintzker was inspired by the narrative model of the film Rashomon (1950), which Geroulanos elaborated in some depth. Director Akira Kurosawa famously presents four different and conflicting accounts of how a samurai traveling through a wooded grove ends up murdered, from the perspectives of his wife, the bandit they encounter, a bystander, and the samurai himself speaking through a medium. Mintzker’s narrative choice is not postmodern fancy, but in this case a historiographical necessity. Because Oppenheimer, as a Jew, was not entitled to give testimony in his own trial, the only extant accounts come from four similarly self-interested and conflicting sources: a judge, a convert, a Jew, and a writer. Mintzker’s work would seem to demonstrate the viability of Kleinbergian hauntology well outside twentieth-century intellectual history.

Kleinberg mused in closing: “If there’s one thing I want to do…it’s to take this book and maybe scare historians a little bit, and other people who think about the past. To make them uncomfortable, in the end, I hope, in a productive way.” Whether historians will welcome this unsettling remains to be seen, for as with White the cards remain stacked against theory. Yet our present anxiety about living in a “post-truth era” might just provide the necessary pressure for historians to recognize the ghosts that haunt the interminable task of engaging the past.

 

Jonathon Catlin is a PhD student in History at Princeton University. He works on intellectual responses to catastrophe in German and Jewish thought and the Frankfurt School of critical theory.

 

 

Aristotle in the Sex Shop and Activism in the Academy: Notes from the Joint Atlantic Seminar in the History of Medicine

By Editor Spencer J. Weinreich

Four enormous, dead doctors were present at the opening of the 2017 Joint Atlantic Seminar in the History of Medicine. Convened in Johns Hopkins University’s Welch Medical Library, the Seminar met in a room dominated by a canvas of mammoth proportions, a group portrait by John Singer Sargent of the four founders of Johns Hopkins Hospital. Dr. William Welch, known in his lifetime as “the dean of American medicine” (and the library’s namesake). Dr. William Halsted, “the father of modern surgery.” Dr. Sir William Osler, “the father of modern medicine.” And Dr. Howard Kelly, who established the modern field of gynecology.

John Singer Sargent, Professors Welch, Halsted, Osler, and Kelly (“The Four Doctors,” 1905), oil on canvas, 298.6 × 213.3 cm, Johns Hopkins University School of Medicine, Baltimore, MD

Beneath the gazes of this august quartet, graduate students and faculty from across the United States and the United Kingdom gathered for the fifteenth iteration of the Seminar. This year, the program’s theme was “Truth, Power, and Objectivity,” explored in thirteen papers ranging from medical testimony before the Goan Inquisition to the mental impact of First World War bombing raids, from Booker T. Washington’s National Negro Health Week to the emergence of Chinese traditional medicine. It would not do justice to the papers or their authors to cover them all in a post; instead I shall concentrate on the two opening sessions: the keynote lecture by Mary E. Fissell and a faculty panel with Nathaniel Comfort, Gianna Pomata, and Graham Mooney (all of Johns Hopkins University).

I confess to some surprise at the title of Fissell’s talk, “Aristotle’s Masterpiece and the Re-Making of Kinship, 1820–1860.” Fissell is known as an early modernist, her major publications exploring gender, reproduction, and medicine in seventeenth- and eighteenth-century England. Her current project, however, is a cultural history of Aristotle’s Masterpiece, a book on sexuality and childbirth first published in 1684 and still being sold in London sex shops in the 1930s. The Masterpiece was distinguished by its discussion of the sexual act itself, and its consideration (and copious illustrations) of so-called “monstrous births.” It was, in Fissell’s words, a “howling success,” seeing an average of one edition a year for 250 years, on both sides of the Atlantic.

It should be explained that there is very little Aristotle in Aristotle’s Masterpiece. In early modern Europe, the Greek philosopher was regarded as the classical authority on childbirth and sex, and so offered a suitably distinguished peg on which to hang the text. This allowed for a neat trick of bibliography: when the Masterpiece was bound together with other (spurious) works, like Aristotle’s Problems, the spine might be stamped with the innocuous (indeed impressive) title “Aristotle’s Works.”


El Greco, John the Baptist (c.1600)

At the heart of Aristotle’s Masterpiece, Fissell argued, was genealogy: how reproduction—“generation,” in early modern terms—occurred and how the traits of parents related to those of their offspring. This genealogy is unstable, the transmission of traits open to influences of all kinds, notably the “maternal imagination.” The birth of a baby covered in hair, for example, could be explained by the pregnant mother’s devotion to an image of John the Baptist clad in skins. Fissell brilliantly drew out the subversive possibilities of the Masterpiece, as when it “advised” women that adultery might be hidden by imagining one’s husband during the sex act, thus ensuring that the child would look like him. Central though family resemblance is to reproduction, it is “a vexed sign,” with “several jokers in every deck,” because women’s bodies are mysterious and have the power to disrupt lineage.

Fissell principally considered the Masterpiece’s fortunes in the mid-nineteenth-century Anglophone world, as the unstable generation it depicted clashed with contemporary assumptions about heredity. Here she framed her efforts as a “footnote” to Charles Rosenberg’s seminal essay, “The Bitter Fruit: Heredity, Disease, and Social Thought in Nineteenth-Century America,” which traced how discourses of heredity pervaded all branches of science and medicine in this period. George Combe’s Constitution of Man (1828), an exposition of the supposedly rigid natural laws governing heredity (with a tilt toward self-discipline and self-improvement), was the fourth-bestselling book of the period (after the Bible, Pilgrim’s Progress, and Robinson Crusoe). Other hereditarian works sketched out the gendered roles of reproduction—what children inherited from their mothers versus from their fathers—and the possibilities for human action (proper parenting, self-control) for modulating genealogy. Wildly popular manuals for courtship and marriage advised young people on the formation of proper unions and the production of healthy children, in terms shot through with racial and class prejudices (though not yet solidified into eugenics as we understand that term).

The fluidity of generation depicted in Aristotle’s Masterpiece became conspicuous against the background of this growing obsession with a law-like heredity. Take the birth of a black child to white parents. The Masterpiece explains that the mother was looking at a painting of a black man at the moment of conception; hereditarian thought identified a black ancestor some five generations back, the telltale trait slowly but inevitably revealing itself. Thus, although the text of the Masterpiece did not change much over its long career, its profile changed dramatically, because of the shifting bibliographic contexts in which it moved.

In the mid-nineteenth century, the contrasting worldviews of the Masterpiece and the marriage manuals spoke to the forms of familial life prevalent at different social strata. The more chaotic picture of the Masterpiece reflected the daily life of the working class, characterized by “contingent formations,” children born out of wedlock, wife sales, abandonment, and other kinds of “marital nonconformity.” The marriage manuals addressed themselves to upper-middle-class families, but did so in a distinctly aspirational mode. They warned, for example, against marrying cousins, precisely at a moment when well-to-do families were “kinship hot,” in David Warren Sabean’s words, favoring serial intermarriage among a few allied clans. This was a period, Fissell explained, in which “who and what counted as family was much more complex” and “contested.” The ambiguity—and power—of this issue manifested in almost every sphere, from the shifting guidelines for census-takers on how a “family” was defined, to novels centered on complex kinship networks, such as John Lang’s Will He Marry Her? (1858), to the flood of polemical literature surrounding a proposed law permitting a man to marry his deceased wife’s sister—a debate involving many more people than could possibly have been affected by the legislation.

After a rich question-and-answer session, we shifted to the faculty panel, with Professors Comfort, Pomata, and Mooney asked to reflect on the theme of “Truth, Power, and Objectivity.” Comfort, a scholar of modern biology, began by discussing his work with oral histories—“creating a primary source as you go, and in most branches of history that’s considered cheating.” Here perfect objectivity is not necessarily helpful: “when you make yourself emotionally available to your subjects […] you can actually gain their trust in a way that you can’t otherwise.” Equally, Comfort encouraged the embrace of sources’ unreliability, suggesting that unreliability might itself be a source—the more unreliable a narrative is, the more interesting and the more indicative of something meaningful it becomes. He closed with the observation that different audiences required different approaches to history and to history-writing—it is not simply a question of tone or language, but of what kind of bond the scholar seeks to form.

Professor Pomata, a scholar of early modern medicine, insisted that moments of personal contact between scholar and subject were not the exclusive preserve of the modern historian: the same connections are possible, if in a more mediated fashion, for those working on earlier periods. In this interaction, respect is of the utmost importance. Pomata quoted lines from W. B. Yeats’s “He wishes for the Cloths of Heaven”:

I have spread my dreams under your feet;

Tread softly because you tread on my dreams.

As a historian of public health—which he characterized as an activist discipline—Mooney declared, “I’m not really interested in objectivity. […] I’m angry about what I see.” He spoke compellingly about the vital importance of that emotion, properly channeled toward productive ends. The historian possesses power: not simply as the person setting the terms of inquiry, but as a member of privileged institutions. In consequence, he called on scholars to undermine their own power, to make themselves uncomfortable.

The panel was intended to be open-ended and interactive, so these brief remarks quickly segued into questions from the floor. Asked about the relationship between scholarship and activism, Mooney insisted that passion, even anger, is essential, because it drives the scholar into the places where activism is needed—and cautioned that it is ultimately impossible to be the dispassionate observer we (think we) wish to be. With beautiful understatement, Pomata explained that she went to college in 1968, when “a lot was happening in the world.” Consequently, she conceived of scholarship as having to have some political meaning. Working on women’s history in the early 1970s, “just to do the scholarship was an activist task.” Privileging “honesty” over “objectivity,” she insisted that “scholarship—honest scholarship—and activism go together.” Comfort echoed much of this favorable account of activism, but noted that some venues are more appropriate for activism than others, and that there are different ways of being an activist.

Dealing with the horrific—eugenics was the example offered—requires, Mooney argued, both the rigor of a critical method and sensitive emotional work. Further, all three panelists emphasized crafting, and speaking in, one’s own voice, eschewing the temptation to imitate more prominent scholars and embracing the first person (and the subjectivity it marks). Voice, Comfort noted, isn’t natural, but something honed, and both he and Pomata recommended literature as an essential tool in this regard.

Throughout, the three panelists concurred in urging collaborative, interdisciplinary work, founded upon respect for other knowledges and humility—which, Comfort insightfully observed, is born of confidence in one’s own abilities. Asking the right questions is crucial, the key to unlocking the stories of the oppressed and marginalized within sources created by those in power. Visual sources have the potential to express things inexpressible in words—Comfort cited a photograph that wonderfully captured the shy, retiring nature of Dr. Barton Childs—but they must be used as sources, not mere illustrations. The question about visual sources was the last of the evening, and Professor Pomata had the last word. Her final comment offers the perfect summation of the creativity, dedication, and intellectual ferment on display in Baltimore that weekend: “we are artists, don’t forget that.”

William Plumer and the Politics of History Writing

By guest contributor Emily Yankowitz

On December 30, 1806, on the inner cover of his first attempt at writing a historical work, the New Hampshire statesman William Plumer wrote, “An historian, like a witness, is bound to relate the truth, the whole truth, & nothing but the truth.” He would take up his project of writing a “History of North America” in November 1809, after three years of research. In what appears to be typical of Plumer’s personality, he intended to write a history of the United States government, but the project quickly expanded into “a general history of the United States” from its discovery by Europeans to his own time. It was to include accounts of administrations, laws, presidents, heads of departments, members of Congress, judiciary, foreign relations, negotiations, relations with Indian tribes, purchases of lands, and commerce. Reaching even further into the past, he began with an overview of classical history, including the invention of hieroglyphics, and a detailed study of European political events, before arriving at the settlement of Jamestown in 1607 over 220 pages later. Yet having worked on the project for nine years and seeing little progress, Plumer unceremoniously put it aside, writing, “The undertaking I have abandoned” on the last page.


William Plumer, engraving by Charles Balthazar Julien Fevret de Saint-Mémin (1806). Photo credit: Library of Congress

A Federalist senator in a Congress dominated by President Thomas Jefferson and the Republicans, Plumer had little hope of influencing politics. Watching his vision of the world collapse around him, Plumer recalled that with nearly every measure Jefferson proposed, he was reminded of the angel’s declaration to Ezekiel, “Turn, & thou shall behold yet greater abominations” (Plumer to Jeremiah Smith, January 27, 1803, quoted in Turner, “Thomas Jefferson,” 207). These “abominations” included the Louisiana Purchase, the Twelfth Amendment, and the impeachment of New Hampshire judge John Pickering. Frustrated and alarmed, Plumer helped to plan a scheme for New England secession in 1803–1804, hoping to create a “Northern confederacy.” But the project quickly fell apart, although intransigent Federalists would take up a similar plan at the 1814–1815 Hartford Convention.

 

Amid a career in jeopardy and anxieties about the future, Plumer found solace in historical pursuits. Overwhelmed by his country’s fast-paced development, he saw in history a method of “preserving facts & opinions” that were “rapidly hasting to oblivion” as a result of the “changes & revolution of time and parties” (May 2, 1805). Unlike other senators who indulged in horse racing and gambling, Plumer spent his free time hidden for hours in the Congressional Library, reading voraciously. This curiosity was one of Plumer’s most pronounced traits; the son of a farmer, Plumer received little formal schooling beyond elementary studies, and pursued much of his education through books.

Over time, Plumer’s intellectual interests expanded. Spotting a mound of scattered government documents in the damp, mildewed lumber room above the Senate chamber, he devoted himself to preserving them, methodically sorting through the soiled records. Over the next four years, Plumer collected journals of every Congress from 1774 to his own, enough to fill between four and five hundred bound volumes. He eventually came to possess one of the largest and most complete collections of public papers held by a private citizen, even after he donated a substantial amount to the Massachusetts Historical Society. This effort not only rescued valuable documents from destruction but also provided Plumer with a substantial number of sources for his later historical works. According to his son, it was this collecting effort that inspired Plumer to write a history of the country (for more information, see Freeman, Affairs of Honor, 262–64).


President Thomas Jefferson, painted by Rembrandt Peale (1800)

With the end of his term approaching, Plumer set about preparing for this enormous task—consulting with government officers, copying private letters shown to him by friends, and corresponding with antiquarians and scholars. He conferred with Albert Gallatin, Secretary of the Treasury, who offered him any materials needed from the Treasury department. Not everyone was supportive—at least one friend advised Plumer to publish his history posthumously to avoid giving “mortal offence” to contemporaries (February 28, 1807). His meeting with President Jefferson showed how complex the publication of his history might be. Plumer observed that Jefferson’s “countenance […] repeatedly changed”: at moments Jefferson expressed “uneasiness and embarrassment—at other [moments] he seemed pleased.” Seemingly affected by a range of emotions, Jefferson alternated between looking at Plumer and staring at the floor. Jefferson’s reaction perplexed Plumer, who reasoned that Jefferson must have been “embarrassed,” and “disapproved” of the project (February 4, 1807). But he also discussed Jefferson’s strange response with John Quincy Adams, who informed him that Jefferson “cannot be a lover of history,” as he did not want certain “prominent traits in his character” and “important actions in his life” to be outlined and communicated to posterity (February 9, 1807). Jefferson’s own actions appear to echo this sentiment. Out of a desire to control how he would be remembered, Jefferson later professed to have “no materials whatever” for Plumer’s project despite its usefulness to the country.

Plumer’s background and personality did not make him a particularly obvious candidate for the project. In his diary, he mulled over his doubts about his efforts, noting his personal shortcomings, the complications of his private life, and the magnitude of the project. He was not a “scholar” or a “master of the English grammar,” he noted, and could not read any foreign language or express his ideas quickly on paper. Regarding his personal life, his wife was often sick and he himself had a “weak & feeble constitution.” However, Plumer was also highly aware of the shortcomings of existing “historic performances,” namely state histories, which were written too quickly. They contained factual errors, had a “loose & slovenly” style, and “fall short of the true style & dignity of history.” He found Benjamin Trumbull’s Complete History of Connecticut to be “written in the style of a low dull Chronicle,” while James Sullivan’s History of the District of Maine was a “jumble of fact & fable” (July 22, 1806). Yet his task would take “indefatigable industry, & patient labour to render it useful to others and honorable to myself.” Virgil took twelve years to write the Aeneid, Plumer worried, while Edward Gibbon took twenty years to write The History of the Decline and Fall of the Roman Empire. Plumer would exceed both Virgil and Gibbon, devoting the remainder of his life to historical works that ultimately remained unpublished.

While Plumer believed the work would be useful for “future statesmen,” he also hoped to enhance his reputation. If he successfully produced the work, it would be an “imperishable monument that would perpetuate” his name. Highlighting the inextinguishable impact of history, Plumer noted that it would exist when “columns of marble are dissolved & crumbled to dust.” However, if he did not execute it well it would “tarnish & destroy” the little “fame” he had acquired (July 22, 1806). Thus, writing history had political as well as personal consequences.


William Plumer, Jr., depicted in The Granite State Monthly (1889)

Plumer was not alone in using history to achieve a recognition he would never receive through politics. In fact, one of his sons, William Plumer Jr., would take up a similar project in 1830, after completing his term as a representative. Reflecting on the project, he noted that if “executed with any tolerable success, it would be a more important service rendered to the public than I can hope in any other way to perform” and he might be able to acquire a “reputation, however small” if the work was successfully produced (“Manuscript History of the United States”). While the boundaries of Plumer Jr.’s intended project were smaller (he planned to begin with Columbus’s voyage in 1492), he made little progress.

 

Unable to acquire national political fame, Plumer sought recognition through history, while also pursuing a political (though nonpartisan) agenda. Even after he had formally gone over to the Republicans, Plumer retained much of his Federalist view of the world, in part because of his own distaste for partisanship and in part because he lived in Federalist-concentrated New England. In particular, much like the Federalists of the 1790s, Plumer never fully supported the existence of political parties, viewing them as agents of division that distracted men from effectively evaluating candidates based on their abilities. Just as Plumer disapproved of partisanship in politics, he also disapproved of it in historical writing. For example, he wrote that historians and biographers should have “no other object than faithfully narrate facts & justly delineate characters,” for when they “stoop to the support of a party or a sect” their “facts are misstated and their reasoning is sophistry” (May 25, 1808). Plumer argued that a historian should be “of no party in politic’s [sic] … without prejudice, & have more judgement than fancy” (October 1, 1807). Thus, for Plumer, historians did a disservice not only to the integrity of their subject, but also to the influence of their work, if they espoused partisan views.

Looking a bit further into the nineteenth century, historians would divide over whether it was acceptable to combine history and politics. In particular, following the decline of the Federalist party and the rise of Andrew Jackson, New England historians attempted to use history as a mechanism of regaining the power and influence they had lost in politics. Some followed both paths, like George Bancroft, who pursued a political career while working on his History of the United States, while others such as William Prescott and Jared Sparks believed that the two disciplines were incompatible (Cheng, The Plain and Noble Garb of Truth, 36-41). However, many members of both groups believed that history could be used as a method of advancing political agendas.

In an attempt to save their party from destruction in the wake of the Hartford Convention, some Federalists wrote historical works that tried (largely unsuccessfully) to shape how posterity remembered the event. Prompted in part by the publication of Matthew Carey’s wildly successful The Olive Branch and by the Nullification Crisis, Federalists turned to writing histories to justify their actions. These works included Theodore Lyman’s 1823 A Short Account of the Hartford Convention, Harrison Gray Otis’ 1824 Letters in Defence of the Hartford Convention, and the People of Massachusetts, and Theodore Dwight’s 1833 History of the Hartford Convention.

Eager to shape both policies and how they would be remembered, early American politicking occurred both in the halls of Congress and in the pages of books. Plumer hoped to play a central role in constructing the young nation’s emerging identity and its memories of the early figures of the founding era. Thus, his historical writings—which he would continue for decades after his failed “History,” but largely never publish—serve as a reminder that our very understanding of the past has often been shaped by the individuals in the moment who had the foresight to record it. Given how the historical discipline has changed over time, it is perhaps tempting to dismiss early historians’ writings. However, they nonetheless offer a useful perspective on how contemporaries perceived the world around them and how they wanted it to be remembered.

Emily Yankowitz recently graduated from Yale University and is an incoming M.Phil. student in American History at the University of Cambridge. She is interested in the intersection of politics, culture, and memory in the early American republic.

The state, and revolution, Part II: View from a Public Square Closed to the Public

By guest contributor Dr. Dina Gusejnova

This is the third and final installment of “The state, and revolution,” following the introduction and “Part I: The Revolution Reshuffled.”

The new age needed only the hide of the revolution—and this was being flayed off people who were still alive. Those who then slipped into it spoke the language of the Revolution and mimicked its gestures, but their brains, lungs, livers and eyes were utterly different.

—Vasily Grossman, Life and Fate (1960), trans. Robert Chandler (2006)

Scholarly interpretations of modern revolutions used to revolve around the idea of the state as the main structure for understanding them—mostly in national, sometimes in comparative, perspective. Since the last decade of the Cold War, however, many of the revolutions that used to be known as English, French, American, Chinese, Irish, Russian, or Cuban have been gradually placed in a different kind of order: like Grossman’s words, they began to enter into dialogue with other post-revolutionary legacies, aligned on an imperial meridian, put on a global scale, or, on the contrary, shrunk to the space of a single house. While some of the national labels have disappeared behind inverted commas, the very idea of ‘revolution’ has recently been replaced by a new interest in civil wars and the ‘roads not taken’. Peace itself is increasingly seen as a postwar pretext for new disputes over sovereignty, and the hybrid realities of paramilitary violence are being examined in terms of their effects on mass migration. This kind of revisionism is no longer just a reaction to the supposed end of history but, arguably, the beginning of a new response to the issues we are all facing in the present.

In contrast to this academic trend, most public responses to the latest centenaries are still wrapped in national flags, or at least in national kinds of silences. In March 2016, I was briefly in Dublin, just before the centenary of the Easter Rising. A minimal common narrative of events appeared to have emerged, as the city prepared for large crowds, many of them from abroad.


A stack of books on 1916, Dublin Airport (photo by Dina Gusejnova)


Poster announcing the parade (photo by Dina Gusejnova)

Some public history projects even revived the language of revolution to establish a connection between the events of Easter 1916, modern Irish sovereignty, and other world events. In Parnell Square, a uniformed “Patrick Pearse” read aloud the 1916 Proclamation every day at midday.

In 1916, one of the buildings in Parnell Square, the Ambassador Theatre, had served as the backdrop to a famous photo marking the defeat of the Rising by the British, who posed with an inverted Irish flag which they had captured from the Citizen Army. In 2016, an exhibition by Sinn Féin used the building to show some original objects from the revolution, and a reconstruction of Kilmainham Gaol, where fourteen of the Rising’s sixteen executed leaders had been shot.


The Ambassador Theatre at Parnell Square (photo by Dina Gusejnova)

Visitors were encouraged to take selfies and portraits while listening to recordings of the executed men’s last words, and it was particularly striking to see a mother doing a photo-shoot of her children in front of the sandbags.


Photo by Dina Gusejnova

What a contrast to Russia, where, in April 2017, nobody was reading the April Theses aloud, either in St. Petersburg or in Moscow. Granted, Moscow’s Red Square was certainly not as central to the revolution as Petrograd’s Palace Square had been, but it was, still, an important site of revolutionary action in November and December of 1917. Since the Bolsheviks had transferred the capital here, channeling the older, Muscovite center of Russian power, it remained the symbol of Soviet and now post-Soviet claims to global influence. Yet the one set of events that epitomizes this universal aspiration does not suit current plans. Instead, as always at the end of April, preparations were in full swing for the celebrations of an anniversary that the government felt more comfortable with: the Victory of 1945. In April 2017, the public square was therefore routinely closed to the public.

One of the visitors to the Square that month was Richard Bourke, professor of the history of political thought and co-director of the Centre for the Study of the History of Political Thought at Queen Mary University of London. He had travelled to Moscow to attend a conference at the Higher School of Economics. Bourke’s recent intellectual biography of Edmund Burke places Burke’s responses to the revolutions of his age in an imperial, transatlantic, and party-political context, disentangling Burke from his later image as a rhetorician of reaction. With Ian McBride, Bourke has recently also co-edited the Princeton History of Modern Ireland, and, with Quentin Skinner, Popular Sovereignty in Historical Perspective. I could not, therefore, miss the occasion to ask a few questions about the contrasting revolutionary legacies in Ireland and Russia, as they engage with the burden of anniversaries of 1916 and 1917.

We stood by the walls of the Kremlin, near a plaque marking the place where the eighteenth-century author Alexander Radishchev had been held prisoner before being deported to Siberia: a compelling setting for the discussion. A view of one-way traffic beneath the Kremlin towers and a reference to W. B. Yeats conclude these reflections on the politics and ethics of commemorations.

Video by Kseniya Babushkina

 

“Well, that is disappointing. This is my first visit, but when I arrive, it transpires that the Square is closed to the public.

Revolution as a foundation for political legitimacy—prudentially, that has to be discarded in Russia, surely; I can’t imagine the current government wanting to embrace it. Secondly, and equally challenging, there is the communist legacy itself: the attitude to capitalism and private property. Since attitudes to the original ideology have been so utterly transformed, what is there for the establishment today to take ownership of? 

For its part, Ireland is full of commemorations. So, in this case, historians tend to greet such festivities as an irresistible opportunity to publicize their views, and to generate putatively deep, manifestly more penetrating analyses than politicians can muster… whereas I think that risks ending up with a confusion of roles.

Before the Good Friday Agreement—before, that is, the current settlement of the Irish problem—commemoration had the power to rock the state. It was, in other words, a very serious thing. So, the peaceable passing of 2016 in Ireland is, from a political point of view, entirely gratifying.

The political utility of 1917—one can’t see that quite so readily at all. Hence, presumably, the reluctance to celebrate.

I see commemorations as essentially pieces of political theatre. I don’t regard directing them as the business of the historian. Presumably, in the Russian case now, a shared narrative is far more difficult to achieve by comparison with Ireland. There is a will to disavow the revolutionary legacy without that having ever been overtly articulated. On the other hand, in the recent Irish case, the Southern Irish state’s commitment to abjuring certain versions of the 1916 legacy during the thirty years of the Troubles [1968–1998] had already passed, and consequently the need for revolutionary disavowal had (as it were) already been “worked through” the polity by 2016.

With Ireland, you have to remember, in 1966—just two years before the ‘reinauguration’ of the Troubles in 1968—and then over the next thirty years, the Southern state had to disown much of the legacy of 1916. So, with the end of the Troubles, as a result, a certain distance between the Southern Irish state and the history of its own militancy was possible. Also, generally speaking, a mood of collaboration around a possible shared narrative emerged. There was a commitment all round to manufacturing—because these are essentially manufactured stories—to manufacturing a liberal, cosmopolitan vision: excavating the diverse roles of peoples in 1916; children in 1916; women in 1916—so a diversified picture, by comparison with the original “16 Dead Men” narrative. It was a sort of attempt to bring all parties on board: the British state could have a role, because they’d accepted all that now; Irish republicans could have a role; we could pretend that Northern Irish Protestants might have a role; we could pretend that we can fully acknowledge that the First World War at the time was a far bigger event in Irish history than 1916 had been—certainly, considerably larger numbers died. In effect, there was a mood of opening up to these diverse possibilities. Actually, it was quite a constrained vision, to be honest. But nonetheless, the self-congratulatory story was that tremendous “openness” was prospering, then and now. Having said that—having just put it critically—I was there in Ireland at the time for the centenary celebrations, and in truth I don’t think it was at all badly done. There was no inappropriate pomp: I went with my children, and it was perfectly inoffensive to be there. I am no purist: states habitually resort to such rites of passage, and it’s just a matter of coming up with productive versions of the fanfare—a conducive version of it.

There is one poem, just a single poem, which has had as large an influence on the interpretation of the events of 1916 on subsequent historiography as any other document or text—and that is, of course, W. B. Yeats’ poem of that title: ‘Easter 1916’. Many, many historical studies of the period invoke its version of what transpired. The final stanza poses a rhetorical question: Was it needless death after all? So, the poem has a provocative question at its very heart. And, in a way, that has the effect of casting doubt on the whole enterprise: it seems it was needless death, a vain exercise! That’s another way of asking: Was this whole undertaking without any positive justification? But then there’s a gear change in the poem, which amounts to proclaiming that, given the fact that a ‘terrible beauty’ has indeed been born, the national poet has no choice but to lay claim to the legacy of this martyrdom, and that’s what the author proceeds to do in the poem.

I am currently working on a book, which is on the relationship between the philosophy of history, on the one hand—that is to say, fundamental views about what drives the historical process, and its direction of travel—and, on the other hand, the effect of one’s philosophical commitment to a given vision of the kind upon one’s investment in particular historical narratives. So, basically, I am concerned with conceptions of progress, specifically the notion that history is progressing—a perspective that emerged in the eighteenth century as a basic, almost a priori assumption about historical development. I am interested in the connection between that assumption and the impulse to read events themselves as progressive or retrogressive. That amounts, in turn, to an interest in the very idea of being “on the right side of history” in the familiar sense—of deeming oneself to be making the right moral choices because these choices coincide with the overarching directionality of history. It is fascinating to reflect on how this mode of thinking about our world first emerged, and now frames our approach to the past and the future.

Despite the long shadow cast by the philosophy of history, practicing historians ought to think more multi-perspectivally about the past, and therefore in less partisan and party-driven ways. I think that’s an honorable vocation for historians, though it’s not always the one they choose.”

Dina Gusejnova is Lecturer in Modern History at the University of Sheffield. She is the author of European Elites and Ideas of Empire, 1917-57 (Cambridge University Press, 2016) and the editor of Cosmopolitanism in Conflict: Imperial Encounters from the Seven Years’ War to the Cold War (Palgrave Macmillan, forthcoming later in 2017).

The state, and revolution: A site-specific view of centenaries. Part I: The revolution reshuffled: Statelessness and civil war in the museum

By guest contributor Dr. Dina Gusejnova

The introduction to “The state, and revolution” can be found here.

Museums and libraries are the kinds of places that promise to transport you to any other time or place. But some people experience their structure as a constraint on their imagination. One reaction to their static and state-centered character might be to give up on the structure of museums altogether and resort to watching films instead. It is not surprising that this medium was most successful in marking the first decade after the October Revolution—celebrating it as a leaderless movement, without an obvious protagonist and certainly no national teleology. In fact, most of today’s museums have embedded films in their displays. Yet if you want to resist path-dependent constraints in interpreting revolutions, films are hardly a solution: they are the products of fixed scripts, of a specially built set, narrative music, and so on. (October was first performed to the sound of the Marseillaise, before new tunes could be composed).

Is a museum of the revolution necessarily an oxymoron? As a type of space, most museums have the advantage of being physical sites. In such places, visitors can recognize what they thought of as ownership of the present as a mere tenancy, which places them not only in a subordinate relationship to the landlord, but also in an imaginary relationship to the previous tenants, who may even have left things behind. From then on, it is up to them how many degrees of separation they establish between themselves and this past.

The Russian Revolution exhibition at the British Library—its interior designed in the style of a grand opera set—is one example of this kind of possibility. The Communist Manifesto is placed at the entrance as a relic of one of the Library’s most famous users, yet it is as feeble a guide to the Russian Revolution as Rousseau had been to the French. If anything, the curators emphasise, the Manifesto discouraged the Communists of its time from transporting ideas of revolution to unsuitable locations like Russia. Like the gimmicky poster of a Bolshevik, it functions merely as a hook, because what you find instead of a party line is an aspect of the revolution as the product of a social process of intellectual contagion. Connoisseurs of magical realism will appreciate this opportunity to trace how the revolution as an idea “became” an event in and through the library itself. What sorts of studies in the library collections led Lenin, who, between 1902 and 1911, identified himself to the library as Jacob Richter, supposedly a German subject of the Russian empire, to call for a revolution in Russia six years later under the more ubiquitous pseudonym of Lenin? For Marx, contemplation itself had been a kind of action, since he preferred a Victorian library to the barricades of Paris. But where did Marx’s theories of how to “change the world” connect to the Bolshevik practices of terror and violence? The exhibition hints at the unlikely friendship between the Victorian library curator Richard Garnett, Dostoyevsky’s first English translator, Constance Garnett (his daughter-in-law), and the exiled People’s Will activist Sergei Stepnyak. Without connections like this, would Lenin have found sufficient reading material on “the land question”?

Finally, how did readers decide where to change the world? Ideas did not just migrate from book to book in a Republic of Letters, nor were they confined to their author’s “home” states. In a postwar world governed by new frontiers, visas, and immigration detention centres, it was the librarians who mattered. In twentieth-century collections, you are more likely to find a folio edition of counter-revolutionary thoughts than a revolutionary manifesto, but the exiled socialists made sure that ephemeral pamphlets were also collected. Lenin’s wife, Nadezhda Krupskaya, had been a librarian in St Petersburg before the 1905 revolution, working together with Nikolai Roubakine, who is introduced in this exhibition only as a social statistician of the late Russian empire. As an exiled revolutionary of 1905, Roubakine had started a new library in Switzerland, which also supplied Lenin with reading material during this time of his exile.

Instead of a state withering away, the visitor is confronted with the notion of a civil war that is only “Russian” in inverted commas. The protracted statelessness of the “white émigré” exiles in the West coexisted alongside a Bolshevik-run Soviet apparatus in the East, which was eventually signed out of existence in a Byelorussian forest with the Belavezha agreement of 1991, as Katie McElvaney reminds us in her timeline. At the end of the magic, there is also the reality of censorship. Apparently, in 1922, a British library consultant concluded that some materials calling for revolution beyond Russia were not “desirable to be released to readers.” We may not know if the Library caused this or any other revolution, but we can certainly see that it had tried not to cause it.

To get away from issues of representation to the memory of revolutionary action, however, I had to travel further, to Finland, where, in March 2017, Tampere University had organized a conference called “Reform and Revolution in Europe, 1917-1919.” Like many attendees, I was struck by the range of new insights into the Revolution that Russia’s former periphery offers, through the transnational perspective of the First World War in the work of Richard Bessel, and the concept of civil war as contextualized by Bill Kissane. Formerly an underdeveloped outpost of the Russian Empire, Finland had risen to the status of an autonomous Grand Duchy by the time of the Revolution. As such, it was the first post-imperial polity to gain sovereignty from the Russian empire, by Lenin’s decree—and to keep it, for the most part.

In the summer of 1917, Lenin was in Tampere as he worked on The State and Revolution. Eleven years before that, he had his first fateful encounter with Stalin here. The site of their encounter, a former Workers Hall, is the space for a newly redesigned Lenin Museum, which first opened here in 1946, under the close watch of Soviet authorities—one of the more visible effects of what is now called “Finlandization.” Its new curators have resorted to a combination of history and humor to tell the story:

Reproduced with kind permission from the Lenin Museum, https://museot.fi/en.php

The rest of the Lenin Museum has little to do with Lenin, and more to do with the history of Finnish democracy and the vicissitudes of European integration, after decades of civil war, partial Soviet occupation, and collaboration with National Socialism, before the gradual emergence of a Finnish brand of Social Democracy.

Seeing the city itself, surrounded by its stunning landscape, also offers other opportunities to reflect on how ideas might relate to the places in which they are formulated. How could this ethereally calming landscape inspire someone to work on a book called The State and Revolution? Could Lenin have instead become a twentieth-century Lake Poet?

Lake views in Tampere (photos by Dina Gusejnova)

As I walked through a working-class neighborhood of today’s Tampere, I noticed that its outer lake was still frozen, so I borrowed some skates to have a final look at the skyline: two days, two seasons. Lenin, of course, had missed the Finnish ice-skating season, with the revolution gaining speed in Petrograd just as the ice had begun to thicken. I thought about the remarkable contrast between the long-term outcomes of the revolution for Finland and for Ukraine—another imperial province, but with a much shorter history of post-imperial sovereignty, and an incomparably higher death toll in the twentieth century. This is a complex issue for historians, and one which, perhaps, will always call for the assistance of a writer like Vasily Grossman.

In the Labor Museum, a three-year-long exhibition (2014–17) marks the cultural memory of the revolution of 1917 from the perspective of the Finnish Civil War of 1918, which the exhibition laconically identifies to its visitors as “a short, but traumatic and sorrowful period.” This exhibition is a unique, if slightly quixotic, place. The visitor will look in vain for any kind of partisanship here, with the Reds or the Whites, the Russians or the Finns, workers, peasants, or bystanders. What they see is a memorial to civil violence, a focus on human experience. It is challenging to try to capture a war inside the walls of a museum, but Tampere has clearly learned from commemorative practice in France and other countries, with their focus on reconciliation. The site of the museum belongs to one of the largest cotton weaving halls in the Nordic countries, Finlayson & Compagnie, a focus of socialist mobilization in 1917. The company’s last Finnish factories closed in 1995, but it continues to sell its products in Europe. Founded by the Scottish industrialist James Finlayson, the firm is also a reminder that a civil war always has not just local and imperial, but also trans-imperial dimensions. At the museum, I met social historian Richard Bessel, a first-time visitor to the city, and social theorist Rebecca Boden, who has recently moved there.

Rebecca Boden is a professor of critical management. She is interested in the effects of regimes of accounting and management on sites of knowledge creation, and the relationship between individuals and the state. She recently joined the University of Tampere as the Director of Research of the New Social Research Centre. Professor Boden also attended the conference “Reform and Revolution in Europe, 1917-1919,” held at the University of Tampere in March 2017.

I’ve never lived in this part of the world, and as a British person, I know very little about it. So what strikes me is how little people brought up and educated in Britain know about Central and Eastern Europe. I’ve felt ashamed about some of the questions I’ve had to ask about the Finnish Civil War, in terms of understanding this part of the world. And I suspect that is because my upbringing took place during the Cold War and the Iron Curtain, so Central and Eastern Europe was very much an unknown quantity to people in the West.

What’s interesting to me is, in Britain, you’ve got a reversal of trends in history. There is greater and greater interest in British history, especially British imperial history, and that becomes dangerously xenophobic, and insular, and parochial. And I think the thing for Finland is—and I can say this as an outsider, they never would, because they are quite humble, quiet, understated sort of people generally—Finland has so many interesting things about it, it is such an interesting geopolitical space, it achieved so much so well, that I am urging people to get to know the Finnish story quite urgently.

A lot of the quiet places—very far from anywhere, on the periphery, small population, very thinly spread—they have to move themselves to make themselves heard. All the isomorphic tendencies, policies and practices and cultures tend to move in the other direction. And it would be quite good to have the quiet places listened to. But part of it is, the quiet places have to find their voice. And that’s partly what I am doing, helping Finland to find their voice and engage with the outside world in a really proactive kind of way.

Richard Bessel is professor of twentieth-century history at the University of York. He works on the social and political history of modern Germany, the aftermath of the two World Wars, and the history of policing, and is currently co-editing, with Dorothee Wierling, Inside World War One? The First World War and its Witnesses (Oxford University Press, 2018). In March 2017, he travelled to Finland to attend the conference on “Reform and Revolution in Europe, 1917-1919,” at which he delivered a keynote lecture.

I’ve never been to Finland, and it’s just a really interesting place to come to. And I thought it would be an interesting intellectual challenge to try to think about revolution and its relationship to the First World War, if not globally, certainly focusing more on Eastern Europe rather than on Western Europe.

I am finding Tampere very interesting, and … this is my first time in Finland! To be in a city which, as we see here, had such a fundamentally different history, with violence right in the middle of it. The differences, I just hadn’t thought about the differences to that extent. What in many ways looks and feels similar to Sweden, but then you scratch the surface, and you realize it’s not. And that surprised me, I hadn’t really quite expected that.

As I get older, it becomes more important both to me and also to colleagues: we talk about our families a lot. When I was younger, I wouldn’t do that professionally. When I was younger, we wrote history in the third person, and now we use the first person.

I’ve just been working through a book, an edited collection on ego-documents of the First World War, with a colleague of mine, which is also very much about the East and the South.

There is one question that I always wanted to set on an exam, but nobody would allow me to do it. And the question is: when did the twentieth century begin?

Dina Gusejnova is Lecturer in Modern History at the University of Sheffield. She is the author of European Elites and Ideas of Empire, 1917-57 (Cambridge University Press, 2016) and the editor of Cosmopolitanism in Conflict: Imperial Encounters from the Seven Years’ War to the Cold War (Palgrave Macmillan, forthcoming later in 2017).