
A Pandemic of Bloodflower’s Melancholia: Musings on Personalized Diseases

By Editor Spencer J. Weinreich


Peter Bloodflower? (actually Samuel Palmer, Self Portrait [1825])

I hasten to assure the reader that Bloodflower’s Melancholia is not contagious. It is not fatal. It is not, in fact, real. It is the creation of British novelist Tamar Yellin, her contribution to The Thackery T. Lambshead Pocket Guide to Eccentric & Discredited Diseases, a brilliant and madcap medical fantasia featuring pathologies dreamed up by the likes of Neil Gaiman, Michael Moorcock, and Alan Moore. Yellin’s entry explains that “The first and, in the opinion of some authorities, the only true case of Bloodflower’s Melancholia appeared in Worcestershire, England, in the summer of 1813” (6). Eighteen-year-old Peter Bloodflower was stricken by depression, combined with an extreme hunger for ink and paper. The malady abated in time and young Bloodflower survived, becoming a friend and occasional muse to Shelley and Keats. Yellin then reviews the debate about the condition among the fictitious experts who populate the Guide: some claim that the Melancholia is hereditary and has plagued all successive generations of the Bloodflower line.

There are, however, those who dispute the existence of Bloodflower’s Melancholia in its hereditary form. Randolph Johnson is unequivocal on the subject. ‘There is no such thing as Bloodflower’s Melancholia,’ he writes in Confessions of a Disease Fiend. ‘All cases subsequent to the original are in dispute, and even where records are complete, there is no conclusive proof of heredity. If anything we have here a case of inherited suggestibility. In my view, these cannot be regarded as cases of Bloodflower’s Melancholia, but more properly as Bloodflower’s Melancholia by Proxy.’

If Johnson’s conclusions are correct, we must regard Peter Bloodflower as the sole true sufferer from this distressing condition, a lonely status that possesses its own melancholy aptness. (7)

One is reminded of the grim joke, “The doctor says to the patient, ‘Well, the good news is, we’re going to name a disease after you.’”

Master Bloodflower is not alone in being alone. The rarest disease known to medical science is ribose-5-phosphate isomerase deficiency, of which only one sufferer has ever been identified. Not much commoner is Fields’ Disease, a mysterious neuromuscular disease with only two observed cases, the Welsh twins Catherine and Kirstie Fields.

Less literally, Bloodflower’s Melancholia, RPI-deficiency, and Fields’ Disease find a curious conceptual parallel in contemporary medical science—or at least the marketing of contemporary medical science: personalized medicine and, increasingly, personalized diseases. Witness a recent commercial for a cancer center, in which the viewer is told that “we give you state-of-the-art treatment that’s very specific to your cancer” and that “[t]he radiation dose you receive is your dose, sculpted to the shape of your cancer.”

Put the phrase “treatment as unique as you are” into a search engine, and a host of providers and products appear, from rehab facilities to procedures for Benign Prostatic Hyperplasia, from fertility centers in Nevada to orthodontist practices in Florida.

The appeal of such advertisements is not difficult to understand. Capitalism thrives on the (mass-)production of uniqueness. The commodity becomes the means of fashioning a modern “self,” what the poet Kate Tempest describes as “The joy of being who we are / by virtue of the clothes we buy” (94). Think, too, of the “curated”—as though carefully and personally selected just for you—content online advertisers supply. It goes without saying that we want this in healthcare, to feel that the doctor is tailoring their questions, procedures, and prescriptions to our individual case.

And yet, though we can and should see the market mechanisms at work beneath “treatment as unique as you are,” the line encapsulates a very real medical-scientific phenomenon. In 1998, for example, Genentech and UCLA released Trastuzumab, an antibody extremely effective against (only) those breast cancers linked to the overproduction of the protein HER2 (roughly one-fifth of all cases). More ambitiously, biologist Ross Cagan proposes to use a massive population of genetically engineered fruit flies, keyed to the makeup of a patient’s tumor, to identify potential cocktails among thousands of drugs.

Personalized medicine does not depend on the wonders of twenty-first-century technology: it is as old as medicine itself. Ancient Greek physiology posited that the body was made up of four humors—blood, phlegm, yellow bile, and black bile—and that each person combined the four in a unique proportion. In consequence, treatment, be it medicine, diet, exercise, physical therapies, or surgery, had to be calibrated to the patient’s particular humoral makeup. Here, again, personalization is not an illusion: professionals were customizing care, using the best medical knowledge available.

Medicine is a human activity, and thus subject to the variability of human conditions and interactions. This may be uncontroversial: even when the diagnoses are identical, a doctor justifiably handles a forty-year-old patient differently from a ninety-year-old one. Even a mild infection may be lethal to an immunocompromised body. But there is also the long and shameful history of disparities in medical treatment among races, ethnicities, genders, and sexual identities—to say nothing of the “health gaps” between rich and poor societies and rich and poor patients. For years, AIDS was a “gay disease” or confined to communities of color, while cancer only slowly “crossed the color line” in the twentieth century, as a stubborn association with whiteness fell away. Women and minorities are chronically under-medicated for pain. If medication is inaccessible or unaffordable, a “curable” condition—from tuberculosis (nearly two million deaths per year) to bubonic plague (roughly 120 deaths per year)—is anything but.

Let us think with Bloodflower’s Melancholia, and with RPI-deficiency and Fields’ Disease. Or, let us take seriously the less-outré individualities that constitute modern medicine. What does that mean for our definition of disease? Are there (at least) as many pneumonias as there have ever been patients with pneumonia? The question need not detain medical practitioners too long—I suspect they have more pressing concerns. But for the historian, the literary scholar, and indeed the ordinary denizen of a world full to bursting with microbes, bodies, and symptoms, there is something to be gained in probing what we talk about when we talk about a “disease.”


Colonies of M. tuberculosis

The question may be put spatially: where is disease? Properly schooled in the germ theory of disease, we instinctively look to the relevant pathogens—the bacterium Mycobacterium tuberculosis as the avatar of tuberculosis, the human immunodeficiency virus as that of AIDS. These microscopic agents often become actors in historical narratives. To take one eloquent example, Diarmaid MacCulloch writes, “It is still not certain whether the arrival of syphilis represented a sudden wanderlust in an ancient European spirochete […]” (95). The price of evoking this historical power is anachronism, given that sixteenth-century medicine knew nothing of spirochetes. The physician may conclude from the mummified remains of Ramses II that it was M. tuberculosis (discovered in 1882), and thus tuberculosis (clinically described in 1819), that killed the pharaoh, but it is difficult to know what to do with that statement. Bruno Latour calls it “an anachronism of the same caliber as if we had diagnosed his death as having been caused by a Marxist upheaval, or a machine gun, or a Wall Street crash” (248).

The other intuitive place to look for disease is the body of the patient. We see chicken pox in the red blisters that form on the skin; we feel the flu in fevers, aches, coughs, shakes. But here, too, analytical dangers lurk: many conditions are asymptomatic for long periods of time (cholera, HIV/AIDS), while others’ most prominent symptoms are only incidental to their primary effects (the characteristic skin tone of Yellow Fever is the result of the virus damaging the liver). Conversely, Hansen’s Disease (leprosy) can present in a “tuberculoid” form that does not cause the stereotypical dramatic transformations. Ultimately, diseases are defined through a constellation of possible symptoms, any number of which may or may not be present in a given case. As Susan Sontag writes, “no one has everything that AIDS could be” (106); in a more whimsical vein, no two people with chicken pox will have the same pattern of blisters. And so we return to the individuality of disease. Is disease, then, no more than a cultural construction, a convenient umbrella-term for the countless micro-conditions that show sufficient similarities to warrant amalgamation? Possibly. But the fact that no patient has “everything that AIDS could be” does not vitiate the importance of describing these possibilities, nor their value in defining “AIDS.”

This is not to deny medical realities: DNA analysis demonstrates, for example, that the Mycobacterium leprae preserved in a medieval skeleton found in the Orkney Islands is genetically identical to modern specimens of the pathogen (Taylor et al.). But such cultural constructs are not so far from how most of us deal with most diseases, most of the time. “Plague” is at once a biological phenomenon and a cultural product (a rhetoric, a trope, a perception); likewise, for most of us Ebola or SARS remain caricatures of diseases, terrifying specters whose clinical realities are hazy and remote. More quotidian conditions—influenza, chicken pox, athlete’s foot—present as individual cases, whether our own or those of the people around us, analogized to the generic condition by memory and common knowledge (and, nowadays, internet searches).

Perhaps what Bloodflower’s Melancholia—or, if you prefer, Bloodflower’s Melancholia by Proxy—offers is an uneasy middle ground between the scientific, the cultural, and the conceptual. Between the nebulous idea of “plague,” the social problem of a plague, and the biological entity Yersinia pestis stands the individual person and the individual body, possibly infected with the pathogen, possibly to be identified with other sick bodies around her, but, first and last, a unique entity.


Newark Bay, South Ronaldsay

Consider the aforementioned skeleton of a teenage male, found when erosion revealed a Norse Christian cemetery at Newark Bay on South Ronaldsay (one of the Orkney Islands). Radiocarbon dating can place the burial somewhere between 1218 and 1370, and DNA analysis demonstrates the presence of M. leprae. The team that found this genetic signature was primarily concerned with the scientific techniques used, the hypothetical evolution of the bacterium over time, and the burial practices associated with leprosy.

But this particular body produces its particular knowledge. To judge from the remains, “the disease is of long standing and must have been contracted in early childhood” (Taylor et al., 1136). The skeleton, especially the skull, indicates the damage done in a medical sense (“The bone has been destroyed…”), but also in the changes wrought to his appearance (“the profile has been greatly reduced”). A sizable lesion has penetrated through the hard palate all the way into the nasal cavity, possibly affecting breathing, speaking, and eating. This would also have been an omnipresent reminder of his illness, as would the several teeth he had probably lost (1135).

What if we went further? How might the relatively temperate, wet climate of the Orkneys have impacted this young man’s condition? What treatments were available for leprosy in the remote maritime communities of the medieval North Sea—and how would they interact with the symptoms caused by M. leprae? Social and cultural history could offer a sense of how these communities viewed leprosy; clinical understandings of Hansen’s Disease some idea of his physical sensations (pain—of what kind and duration? numbness? fatigue?). A forensic artist, with the assistance of contemporary symptomatology, might even conjure a semblance of the face and body our subject presented to the world. Of course, much of this would be conjecture, speculation, imagination—risks, in other words, but risks perhaps worth taking to restore a few tentative glimpses of the unique world of this young man, who, no less than Peter Bloodflower, was sick with an illness all his own.

Reading Saint Augustine in Toledo

By Editor Spencer J. Weinreich


Antonio Rodríguez, Saint Augustine

In his magisterial history of the Reformation, Diarmaid MacCulloch wrote, “from one perspective, a century or more of turmoil in the Western Church from 1517 was a debate in the mind of long-dead Augustine.” MacCulloch riffs on B. B. Warfield’s pronouncement that “[t]he Reformation, inwardly considered, was just the ultimate triumph of Augustine’s doctrine of grace over Augustine’s doctrine of the Church” (111). There can be no denying the centrality to the Reformation of Thagaste’s most famous son. But Warfield’s “triumph” is only half the story—forgivably so, from the last of the great Princeton theologians. Catholics, too, laid claim to Augustine’s mantle. Not least among them was a Toledan Jesuit by the name of Pedro de Ribadeneyra, whose particular brand of personal Augustinianism offers a useful tonic to the theological and polemical Augustine.


Pedro de Ribadeneyra

To quote Eusebio Rey, “I do not believe there were many religious writers of the Siglo de Oro who internalized certain ascetical aspects of Saint Augustine to a greater degree than Ribadeneyra” (xciii). Ribadeneyra translated the Confessions, the Soliloquies, and the Enchiridion, as well as the pseudo-Augustinian Meditations. His own works of history, biography, theology, and political theory are filled with citations, quotations, and allusions to the saint’s oeuvre, including such recondite texts as the Contra Cresconium and the Answer to an Enemy of the Laws and the Prophets. In short, like so many of his contemporaries, Ribadeneyra invoked Augustine as a commanding authority on doctrinal and philosophical issues. But there is another component to Ribadeneyra’s Augustinianism: his spiritual memoir, the Confesiones.

Composed just before his death in September 1611, Ribadeneyra’s Confesiones may be the first memoir to borrow Augustine’s title directly (Pabel 456). Yet, a title does not a book make. How Augustinian are the Confesiones?

Pierre Courcelle, the great scholar of the afterlives of Augustine’s Confessions, declared that “the Confesiones of the Jesuit Ribadeneyra seem to have taken nothing from Augustine save the form of a prayer of praise” (379). Of this commonality there can be no doubt: Ribadeneyra effuses with gratitude to a degree that rivals Augustine. “These mercies I especially acknowledge from your blessed hand, and I praise and glorify you, and implore all the courtiers of heaven that they praise and forever thank you for them” (21). Like the Confessions, the Confesiones are written as “an on-going conversation with God to which […] readers were deliberately made a party” (Pabel 462). That said, reading the two side-by-side reveals deeper connections, as the Jesuit borrows from Augustine’s life story in narrating his own.

Though Ribadeneyra could not recount flirtations with Manicheanism or astrology, he could follow Augustine in subjecting his childhood to unsparing critique. His early years furnished—whose do not?—sufficient petty rebellions to merit Augustinian laments for “the passions and awfulness of my wayward nature” (5–6). In one such incident, Pedro stubbornly demands milk as a snack; enraged by his mother’s refusal, he runs from the house and begins roughhousing with his friends, resulting in a broken leg. Sin inspired by a desire for dairy sets up an echo of Augustine’s rebuke of

the jealousy of a small child: he could not even speak, yet he glared with livid fury at his fellow-nursling. […] Is this to be regarded as innocence, this refusal to tolerate a rival for a richly abundant fountain of milk, at a time when the other child stands in greatest need of it and depends for its very life on this food alone? (I.7,11)


Luis Tristán, Santa Monica

Ribadeneyra’s mother, Catalina de Villalobos, unsurprisingly plays the role of Monica, the guarantor of his Catholic future (while pregnant, Catalina vows that her son will become a cleric). She was not the only premodern woman to be thus canonized by her son: Jean Gerson tells us that his mother, Élisabeth de la Charenière, was “another Monica” (400n10).

Leaving Toledo, Pedro comes to Rome, which he casts as one of Augustine’s perilous earthly cities. Hilmar Pabel points out that the Jesuit’s description of the city as “Babylonia” imitates Augustine’s jeremiad against Thagaste as “Babylon” (474). Like its North African predecessor, this Italian Babylon threatens the soul of its young visitor. Foremost among these perils are teachers: in terms practically borrowed from the Confessions, Ribadeneyra decries “those who ought to be masters, [who] are seated in the throne of pestilence and teach a pestilent doctrine, and not only do not punish the evil they see in their vassals and followers, but instead favor and encourage them by their authority” (7–8).

After Ribadeneyra left for Italy, Catalina’s duties as Monica passed to Ignatius of Loyola, who combined them with those of Ambrose of Milan—the father-figure and guide encountered far from home. Like Ambrose, Ignatius acts como padre, one whose piety is the standard that can never be met, who combines affection with correction.


A young Ignatius of Loyola

The narrative climax of the Confessions is Augustine’s tortured struggle culminating in his embrace of Christianity. No such conversion could be forthcoming for Ribadeneyra, its place taken by tentacion, an umbrella term encapsulating emotional upheavals, doubts over his vocation, the fantasy of returning to Spain, and resentment of Ignatius. Famously, Augustine agonizes until he hears a voice that seems to instruct him, tolle lege (VIII.29). Ribadeneyra structures the resolution of his own crises in analogous fashion, his anxieties dissolved by a single utterance of Ignatius’s: “I beg of you, Pedro, do not be ungrateful to one who has shown you so many kindnesses, as God our Lord.” “These words,” Ribadeneyra tells us, “were as powerful as if an angel come from heaven had spoken them,” his tentacion forever banished (37).

I am not suggesting Ribadeneyra fabricated these incidents in order to claim an Augustinian mantle. But the choices of what to include and how to narrate his Confesiones were shaped, consciously and unconsciously, by Augustine’s example.

Ribadeneyra’s story also diverges from its Late Antique model, and at times the contrast is such as to favor the Jesuit, however implicitly. Ribadeneyra professes an unmistakably early modern Marian piety that has no equivalent in Augustine. Where Monica is reassured by a vision of “a young man of radiant aspect” (III.11,19), Catalina de Villalobos makes her vow to vuestra sanctíssima Madre y Señora nuestra (3). Augustine addresses his gratitude to “my God, my God and my Lord” (I.2, 2), while Ribadeneyra, who mentions his travels to Marian shrines like Loreto, is more likely to add the Virgin to his exclamations: “and in particular I implored your most glorious virgin-mother, my exquisite lady, the Virgin Mary” (11). The Confessions mention Mary only twice, solely as the conduit for the Incarnation (IV.12, 19; V.10, 20). Furthermore, Ribadeneyra’s early conquest of his tentaciones produces a much smoother path than Augustine’s erratic embrace of Christianity; thus the Jesuit declares, “I never had any inclination for a way of life other than that I have” (6). His rhapsodic praise of chastity—“when could I praise you enough for having bound me with a vow to flee the filthiness of the flesh and to consecrate my body and soul to you in a clean and sweet sacrifice” (46)—is a far cry from the infamous “Make me chaste, Lord, but not yet” (VIII.17).

When Ribadeneyra translated Augustine’s Confessions into Spanish in 1596, his paratexts lauded Augustine as the luz de la Iglesia and God’s signal gift to the Church. There is no hint—anything else would have been highly inappropriate—of equating himself with Augustine, whose ingenio was “either the greatest or one of the greatest and most excellent there has ever been in the world.” As a last word, however, Ribadeneyra mentions the previous Spanish version, published in 1554 by Sebastián Toscano. Toscano was not a native speaker, “and art can never equal nature, and so his style could not match the dignity and elegance of our language.” It falls to Ribadeneyra, in other words, to provide the Hispanophone world with the proper text of the Confessions; without ever saying so, he positions himself as a privileged interpreter of Augustine.

The Confessions is a profoundly personal text, perhaps the seminal expression of Christian subjectivity—told in a searingly intense first-person. Ribadeneyra himself writes that in the Confessions “is depicted, as in a masterful portrait painted from life, the heavenly spirit of Saint Augustine, in all its colors and shades.” Without wandering into the trackless wastes of psychohistory, it must have been a heady experience for so devoted a reader of Augustine to compose—all translation being composition—the life and thought of the great bishop.

Ribadeneyra was of course one of many Augustinians in early modern Europe, part of an ongoing Catholic effort to reclaim the Doctor from the Protestants, but we will misunderstand his dedication if we regard the saint as no more than a prime piece of symbolic real estate. For scholars of early modern Augustinianism have rooted the Church Father in philosophical schools and the cut-and-thrust of confessional conflict. To MacCulloch and Warfield we might add Meredith J. Gill, Alister McGrath, Arnoud Visser, and William J. Bouwsma, for whom early modern thought was fundamentally shaped by the tidal pulls of two edifices, Augustinianism and Stoicism.

There can be no doubt that Ribadeneyra was convinced of Augustine’s unimpeachable Catholicism and opposition to heresy—categories he had no hesitation in mapping onto Reformation-era confessions. Equally, Augustine profoundly influenced his own theology. But beyond and beneath these affinities lay a personal bond. Augustine, who bared his soul to a degree unmatched among the Fathers, was an inspiration, in the most immediate sense, to early modern believers. Like Ignatius, the bishop of Hippo offered Ribadeneyra a model for living.

That early modern individuals took inspiration from classical, biblical, and late antique forebears is nothing new. Bruce Gordon writes that, influenced by humanist notions of emulation, “through intensive study, prayer and conduct [John] Calvin sought to become Paul” (110). Mutatis mutandis, the sentiment applies to Ribadeneyra and Augustine. Curiously, Stephen Greenblatt’s seminal Renaissance Self-Fashioning does not much engage with emulation, concerning itself with self-fashioning as creation ex nihilo—that is to say, a new self, not geared toward an existing model (Greenblatt notes in passing, and in contrast, the tradition of imitatio Christi). Ribadeneyra, in reading, translating, interpreting, citing, and imitating Augustine, was fashioning a self after another’s image. As his Catholicized Confesiones indicate, this was not a slavish and literal-minded adherence to each detail. He recognized the great gap of time that separated him from his hero, a distance that demanded creativity and alteration in the fashioning of a self. This need not be a thought-out or even conscious plan, but simply the cumulative effect of a lifetime of admiration and inspiration. Without denying Ribadeneyra’s formidable mind or his fervent Catholicism, there is something to be gained from taking emotional significance as our starting point, from which to understand all the intellectual and personal work the Jesuit, and others of his time, could accomplish through a hero.

What has Athens to do with London? Plague.

By Editor Spencer J. Weinreich


Map of London by Wenceslas Hollar, c.1665

It is seldom recalled that there were several “Great Plagues of London.” In scholarship and popular parlance alike, only the devastating epidemic of bubonic plague that struck the city in 1665 and lasted the better part of two years holds that title, which it first received in early summer 1665. To be sure, the justice of the claim is incontrovertible: this was England’s deadliest visitation since the Black Death, carrying off some 70,000 Londoners and another 100,000 souls across the country. But note the timing of that first conferral. Plague deaths would not peak in the capital until September 1665, the disease would not take up sustained residence in the provinces until the new year, and the fire was more than a year in the future. Rather than any special prescience among the pamphleteers, the nomenclature reflects the habit of calling every major outbreak in the capital “the Great Plague of London”—until the next one came along (Moote and Moote, 6, 10–11, 198). London experienced a major epidemic roughly every decade or two: recent visitations had included 1592, 1603, 1625, and 1636. That 1665 retained the title is due in no small part to the fact that no successor arose; this was to be England’s last major outbreak of bubonic plague.

Serial “Great Plagues of London” remind us that epidemics, like all events, stand within the ebb and flow of time, and draw significance from what came before and what follows after. Of course, early modern Londoners could not know that the plague would never return—but they assuredly knew something about its past.

Early modern Europe knew bubonic plague through long and hard experience. Ann G. Carmichael has brilliantly illustrated how Italy’s communal memories of past epidemics shaped perceptions of and responses to subsequent visitations. Seventeenth-century Londoners possessed a similar store of memories, but their plague-time writings mobilize a range of pasts and historiographical registers that includes much more than previous epidemics or the history of their own community: from classical antiquity to the English Civil War, from astrological records to demographic trends. Such richness accords with the findings of the formidable scholarly phalanx investigating “the uses of history in early modern England” (to borrow the title of one edited volume), which informs us that sixteenth- and seventeenth-century English people had a deep and sophisticated sense of the past, instrumental in their negotiations of the present.

Let us consider a single, iconic strand in this tapestry: invocations of the Plague of Athens (430–26 B.C.E.). Jacqueline Duffin once suggested that writing about epidemic disease inevitably falls prey to “Thucydides syndrome” (qtd. in Carmichael 150n41). In the centuries since the composition of the History of the Peloponnesian War, Thucydides’s hauntingly vivid account of the plague (II.47–54) has influenced writers from Lucretius to Albert Camus. Long lost to Latin Christendom, Thucydides was slowly reintegrated into Western European intellectual history beginning in the fifteenth century. The first (mediocre) English edition appeared in 1550, superseded in 1628 with a text by none other than Thomas Hobbes. For more than a hundred years, then, Anglophone readers had access to Thucydides, while Greek and Latin versions enjoyed a respectable, if not extraordinary, popularity among the more learned.


Michiel Sweerts, Plague in an Ancient City (1652), believed to depict the Plague of Athens

In 1659, the churchman and historian Thomas Sprat, booster of the Royal Society and future bishop of Rochester, published The Plague of Athens, a Pindaric versification of the accounts found in Thucydides and Lucretius. Sprat’s Plague has been convincingly interpreted as a commentary on England’s recent political history—viz., the Civil War and the Interregnum (King and Brown, 463). But six years on, the poem found fresh relevance as England faced its own “too ravenous plague” (Sprat, 21). The savvy bookseller Henry Brome, who had arranged the first printing, brought out two further editions in 1665 and 1667. Because the poem was prefaced by the relevant passages of Hobbes’s translation, an English text of Thucydides was in print throughout the epidemic. It is of course hardly surprising that at moments of epidemic crisis, the locus classicus for plague should sell well: plague-time interest in Thucydides is well-attested before and after 1665, in England and elsewhere in Europe.

But what does the Plague of Athens do for authors and readers in seventeenth-century London? As the classical archetype of pestilence, it functions as a touchstone for the ferocity of epidemic disease and a yardstick by which the Great Plague could be measured. The physician John Twysden declared, “All Ages have produced as great mortality and as great rebellion in Diseases as this, and Complications with other Diseases as dangerous. What Plague was ever more spreading or dangerous than that writ of by Thucidides, brought out of Attica into Peloponnesus?” (111–12).

One flattering rhymester welcomed Charles II’s relocation to Oxford with the confidence that “while Your Majesty, (Great Sir) shines here, / None shall a second Plague of Athens fear” (4). In a less reassuring vein, the societal breakdown depicted by Thucydides warned England what might ensue from its own plague.

Perhaps with that prospect in mind, other authors drafted Thucydides as their ally in catalyzing moral reform. The poet William Austin (who was in the habit of ruining his verses by overstuffing them with classical references) seized upon the Athenians’ passionate devotions in the face of the disaster (History, II.47). “Athenians, as Thucidides reports, / Made for their Dieties new sacred courts. / […] Why then wo’nt we, to whom the Heavens reveal / Their gracious, true light, realize our zeal?” (86). In a sermon entitled The Plague of the Heart, John Edwards enlisted Thucydides in the service of his conceit of a spiritual plague that was even more fearsome than the bubonic variety:

The infection seizes also on our memories; as Thucydides tells us of some persons who were infected in that great plague at Athens, that by reason of that sad distemper they forgot themselves, their friends and all their concernments [History, II.49]. Most certain it is that by the Spirituall infection men forget God and their duty. (8)

Not dissimilarly, the tailor-cum-preacher Richard Kingston parallels the plague with sin. He characterizes both evils as “diffusive” (23–24), citing Thucydides to the effect that the plague began in Ethiopia and moved thence to Egypt and Greece (II.48).

On the supposition that, medically speaking, the Plague of Athens was the same disease they faced, early modern writers treated it as a practical precedent for prophylaxis, treatment, and public health measures. Thucydides was one of several classical authorities cited by the Italian theologian Filiberto Marchini to justify open-field burials, based on their testimony that wild animals shunned plague corpses (Calvi, 106). Rumors of plague-spreading also stoked interest in the History, because Thucydides records that the citizens of Piraeus believed the epidemic arose from the poisoning of wells (II.48; Carmichael, 149–50).


Peter Paul Rubens, Hippocrates (1638)

It should be noted that Thucydides was not the only source for early modern knowledge about the Plague of Athens. One William Kemp, extolling the preventative virtues of moderation, tells his readers that it was temperance that preserved Socrates during the disaster (58–59). This anecdote comes not from Thucydides, but Claudius Aelianus, who relates of the philosopher’s constitution and moderate habits, “[t]he Athenians suffered an epidemic; some died, others were close to death, while Socrates alone was not ill at all” (Varia historia, XIII.27, trans. N. G. Wilson). (Interestingly, 1665 saw the publication of a new translation of the Varia historia.) Elsewhere, Kemp relates how Hippocrates organized bonfires to free Athens of the disease (43), a story that originates with the pseudo-Galenic On Theriac to Piso, but probably reached England via Latin intermediaries and/or William Bullein’s A Dialogue Against the Fever Pestilence (1564). Hippocrates’s name, and supposed victory over the Plague of Athens, was used to advertise cures and preventatives.

 

With the exception of Sprat—whose poem was written in 1659—these are all fleeting references, but that is in some sense the point. The Plague of Athens, Thucydides, and his History had entered the English imaginary, a shared vocabulary for thinking about epidemic disease. To quote Raymond A. Anselment, Sprat’s poem (and other invocations of the Plague of Athens) “offered through the imitation of the past an idea of the present suffering” (19). In the desperate days of 1665–66, the mere mention of Thucydides’s name, regardless of the subject at hand, would have been enough to conjure the specter of the Athenian plague.

Whether or not one built a public health plan around “Hippocrates’s” example, or looked to the History of the Peloponnesian War as a guide to disease etiology, the Plague of Athens exerted an emotional and intellectual hold over early modern English writers and readers. In part, this was merely a sign of the times: sixteenth- and seventeenth-century Europeans were profoundly invested in the past as a mirror for and guide to the present and the future. In England, the Great Plague came at the height of a “rage for historical parallels” (Kewes, 25)—and no corner of history offered more distinguished parallels than classical antiquity.

And let us not undersell the affective power of such parallels. The value of recalling past plagues was the simple fact of their being past. Awful as the Plague of Athens had been, it had eventually passed, and Athens still stood. Looking backwards was a relief from a present dominated by the epidemic, and from the plague’s warped temporality: the interruption of civic and liturgical rhythms and the ordinary cycle of life and death. Where “an epidemic denies time itself” (Calvi, 129–30), history restores it, and offers something like orientation—even, dare we say, hope.

 

A Book of Battle: Marcelino Menéndez y Pelayo and La ciencia española

By Editor Spencer J. Weinreich


Statue of Marcelino Menéndez Pelayo at the Biblioteca Nacional de España

Marcelino Menéndez y Pelayo’s La ciencia española (first ed. 1876) is a battlefield long after the guns have fallen silent: the soldiers dead, the armies disbanded, even the names of the belligerent nations changed beyond recognition. All the mess has been cleared up. Like his contemporaries Leopold von Ranke, Arnold Toynbee, or Jacob Burckhardt, Menéndez Pelayo has been enshrined as one of the nineteenth-century tutelary deities of intellectual history. Seemingly incapable of writing except at great length and in torrential cascades of erudition, he produced an oeuvre that lends itself to reverence—and frightens off most readers. And while reverence is hardly undeserved, we do a disservice to La ciencia española and its author if we leave the marmoreal exterior undisturbed. The challenge for the modern reader is to recover the passions—intellectual, political, and personal—animating what Menéndez Pelayo himself called “a book of battle [un libro de batalla]” (2:268).


Gumersindo de Azcárate

La ciencia española is a multifarious collection of articles, reviews, speeches, and letters that takes its name from its linchpin, a feisty exchange over the history of Spanish learning (la ciencia española). The casus belli came from an 1876 article by the distinguished philosopher and jurist Gumersindo de Azcárate, who argued that early modern Spain had been intellectually stunted by the Catholic Church. Menéndez Pelayo responded with an essay vociferously defending the honor of Spanish learning, exonerating the Church, and decrying the neglect of early modern Spanish intellectual history. Azcárate never replied, but his colleagues Manuel de la Revilla, Nicolás Salmerón, and José del Perojo took up his cause, trading articles with Menéndez Pelayo in which they debated these and related issues—was there such a thing as “Spanish philosophy”?—in excruciating detail.

The exchange showcases the driving concerns of Menéndez Pelayo’s scholarly career: the greatness of the Spanish intellectual tradition, critical bibliography, Catholicism as the national genius of Spain, and an almost-frightening sense of how much these issues matter. This last is the least accessible element of La ciencia española: the height of its stakes. Why should Spain’s very identity rest upon abstruse questions of intellectual history? How did a group of academics merit the label “the eternal enemies of religion and the patria [los perpetuos enemigos de la Religión y de la patria]” (1:368)?

Here we must understand that La ciencia española is but one rather pitched battle in a broader war. Nineteenth-century Spain was in the throes of an identity crisis, the so-called “problem of Spain.” In the wake of the loss of a worldwide empire, serial revolutions and civil wars, a brief flirtation with a republic, endemic corruption, and economic stagnation, where was Spain’s salvation to be found—in the past or in the future? With the Church or with the Enlightenment? By looking inward or looking outward?


Karl Christian Friedrich Krause

Menéndez Pelayo was a self-declared neocatólico, part of a movement of conservative Catholics for whom Spain’s identity was indissolubly linked to the Church. He also stands as perhaps the foremost exponent of casticismo, a literary and cultural nationalism premised on a return to Spain’s innate, authentic identity. All of Menéndez Pelayo’s antagonists in that initial exchange—Azcárate, Revilla, Salmerón, and Perojo—were Krausists, from whom not much is heard these days. Karl Christian Friedrich Krause was a student of Schelling, Hegel, and Fichte, long (and not unjustly) overshadowed by his teachers. But Krause found an unlikely afterlife among a cohort of liberal thinkers in Restoration Spain. These latter-day Krausists aimed at the intellectual rejuvenation of Spain, which they felt had been stifled by the Catholic Church. Accordingly, they called for religious toleration, academic freedom, and, above all, an end to the Church’s monopoly over education.

To Menéndez Pelayo, Krausism threatened the very wellsprings of the national culture. The Krausists were “a horde of fanatical sectarians […] murky and repugnant to every independent soul” (qtd. in López-Morillas, 8). He acidly denied both that Spain’s learning had declined, and that the Church had in any way hindered it:

For this terrifying name of “Inquisition,” the child’s bogeyman and the simpleton’s scarecrow, is for many the solution to all problems, the deus ex machina that comes as a godsend in difficult situations. Why have we had no industry in Spain? Because of the Inquisition. Why have we had bad customs, as in all times and places, save in the blessed Arcadia of the bucolics? Because of the Inquisition. Why are we Spaniards lazy? Because of the Inquisition. Why are there bulls in Spain? Because of the Inquisition. Why do Spaniards take the siesta? Because of the Inquisition. Why were there bad lodgings and bad roads and bad food in Spain in the time of Madame D’Aulnoy? Because of the Inquisition, because of fanaticism, because of theocracy. [Porque ese terrorífico nombre de Inquisición, coco de niños y espantajo de bobos, es para muchos la solución de todos los problemas, el Deus ex machina que viene como llovido en situaciones apuradas. ¿Por qué no había industria en España? Por la Inquisición. ¿Por qué había malas costumbres, como en todos tiempos y países, excepto en la bienaventurada Arcadia de los bucólicos? Por la Inquisición. ¿Por qué somos holgazanes los españoles? Por la Inquisición. ¿Por qué hay toros en España? Por la Inquisición. ¿Por qué duermen los españoles la siesta? Por la Inquisición. ¿Por qué había malas posadas y malos caminos y malas comidas en España en tiempo de Mad. D’Aulnoy? Por la Inquisición, por el fanatismo, por la teocracia.]. (1:102–03)

What was called for was not—perish the thought—a move away from dogmatism, but a renewed appreciation for Spain’s magnificent heritage. “I desire only that the national spirit should be reborn […] that spirit that lives and beats at the base of all our systems, and gives them a certain aspect of their parentage, and connects and ties together even those most discordant and opposed [Quiero sólo que renazca el espíritu nacional […], ese espíritu que vive y palpita en el fondo de todos nuestros sistemas, y les da cierto aire de parentesco, y traba y enlaza hasta a los más discordes y opuestos]” (2:355).


Title page of Miguel Barnades Mainader’s Principios de botanica (1767)

Menéndez Pelayo practiced what he preached. He is as comfortable discussing such obscure peons of the Republic of Letters as the Portuguese theologian Manuel de Sá and the Catalan botanist Miguel Barnades Mainader as he is extolling Juan Luis Vives, arguing over the influence of Thomas Aquinas, or establishing the birthplace of Raymond Sebond. Menéndez Pelayo writes with genuine pain at “the lamentable oblivion and neglect in which we hold the nation’s intellectual glories [del lamentable olvido y abandono en que tenemos las glorias científicas nacionales]” (1:57). His fellow neocatólico Alejandro Pidal y Mon imagines Menéndez Pelayo as a necromancer, calling forth the spirits of long-dead intellectuals (1:276), a power on extravagant display in La ciencia española. The third volume of La ciencia española comprises nearly three hundred pages of annotated bibliography, on every conceivable branch of the history of knowledge in Spain.

I am aware how close I have strayed to the kind of pedestal-raising I deprecated at the outset. Fortunately, we do not have to look far to find the clay feet that will be the undoing of any such monument. Menéndez Pelayo’s lyricism should not disguise the reactionary character of his intellectual project, with its nationalism and loathing of secularism, religious toleration, and any challenge to Catholic orthodoxy. His avowed respect for the achievements of Jews and Muslims in medieval Spain is cheapened by a pervasive, muted anti-Semitism and Islamophobia: La ciencia española speaks of “the scientific poverty of the Semites [La penuria científica de los semitas]” (2:416) and the “decadence [decadencia]” of contemporary Islam. When he writes, “I am, thanks be to God, an Old Christian [gracias a Dios, soy cristiano viejo]” (2:265), we cannot pretend he is ignorant of the pernicious history of that term. Of the colonization of the New World he baldly states, “we sowed religion, science, and blood with a liberal hand, later to reap a long harvest of ingratitudes and disloyalties [sembramos a manos llenas religión, ciencia y sangre, para recoger más tarde larga cosecha de ingratitudes y deslealtades]” (2:15).

It is no coincidence that Menéndez Pelayo’s prejudices are conveyed in superlative Spanish prose—ire seems to have brought out the best of his wit. “I cannot but regret that Father [Joaquín] Fonseca should have felt himself obliged, in order to vindicate Saint Thomas [Aquinas] from imagined slights, to throw upon me all the corpulent folios of the saint’s works [no puedo menos de lastimarme de que el Padre Fonseca se haya creído obligado, para desagraviar a Santo Tomás de ofensas soñadas, a echarme encima todos los corpulentos infolios de las obras del Santo]” (2:151). “Mr. de la Revilla says that he has never belonged to the Hegelian school. Congratulations to him—his philosophical metamorphoses are of little interest to me [El Sr. de la Revilla dice que nunca ha pertenecido a la escuela hegeliana. En hora buena: me interesan poco sus transformaciones filosóficas]” (1:201). On subjects dear to his heart, baroque rhapsodies could flow from his pen. He spends three pages describing the life of the medieval Catalan polymath Ramon Llull, whom he calls the “knight errant of thought [caballero andante del pensamiento]” (2:372).

At the same time, many pages of La ciencia española make for turgid reading, bare catalogues of obscure Spanish authors and their yet more obscure publications.

*     *     *

Menéndez Pelayo died in 1912. Azcárate, his last surviving interlocutor, passed away five years later. Is the battle over? In the intervening decades, Spain has found neither cultural unity nor political coherence—and not for lack of trying. Reactionary Catholic and conservative though he was, Menéndez Pelayo does not fit the role of Francoist avant la lettre, in spite of the regime’s best efforts to coopt him. La ciencia española shows none of Franco’s Castilian chauvinism and suspicion of regionalism. Menéndez Pelayo chides an author for using the phrase “the Spanish language [la lengua española]” when he means “Castilian.” “The Catalan language is as Spanish as Castilian or Portuguese [Tan española es la lengua catalana como la castellana o la portuguesa]” (2:363).

Today the Church has indeed lost its iron grip on the Spanish educational system, and the nation is not only no longer officially Catholic, but has embraced religious toleration and even greater heterodoxies, among them divorce, same-sex marriage, and abortion. We are all Krausists now.

If the crusade against the Krausists failed, elements of Menéndez Pelayo’s intellectual project have fared considerably better. We are witnessing a flood of scholarly interest in early modern Spain’s intellectual history—historiography, antiquarianism, the natural sciences, publishing. Whether they know it or not, these scholars are answering a call sounded more than a century before. And never more so than when they turn their efforts to those Menéndez Pelayo sympathetically called “second-order talents [talentos de segundo orden]” (1:204). In the age of USTC, EEBO, Cervantes Virtual, Gallica, and countless similar resources, the discipline of bibliography he so cherished is expanding in directions he could never have imagined.


Charles II of Spain

Spain’s decline continues to inspire debate among historians—and will continue to do so, I expect, so long as there are historians to do the debating. The foreword to J. H. Elliott’s still-definitive survey, Imperial Spain: 1469–1716, places the word “decline” in inverted commas, but the prologue acknowledges the genuine puzzle of explaining the shift in Spain’s fortunes over the early modern period. Menéndez Pelayo could hardly deny that Charles II ruled an altogether less impressive realm than had his great-grandfather, but would presumably counter that whatever the geopolitics, Spanish letters remained vibrant. As for the Spanish Inquisition, his positivity prefigures that of Henry Kamen, who has raised not a few eyebrows with his favorably inclined “historical revision.”

La ciencia española is at once the showcase for a prodigious young talent, a call to arms for intellectual traditionalism, and a formidable if flawed collection of insights and reflections. As the grand old man of Spanish letters, a caricature of conservatism and Catholic partisanship, Menéndez Pelayo furnishes an excellent foil—or strawman, for those less charitably inclined—against whom generations can and should sharpen their pens and their arguments.

La lutte continue.

The Historical Origins of Human Rights: A Conversation with Samuel Moyn

By guest contributor Pranav Kumar Jain


Professor Samuel Moyn (Yale University)

Since the publication of The Last Utopia: Human Rights in History, Professor Samuel Moyn has emerged as one of the most prominent voices in the field of human rights studies and modern intellectual history. I recently had a chance to interview him about his early career and his views on human rights and recent developments in the field of history.

Moyn was educated at Washington University in St. Louis, where he studied history and French literature. In St. Louis, he fell under the influence of Gerald Izenberg, who nurtured his interest in modern French intellectual history. After college, he proceeded to Berkeley to pursue his doctorate under the supervision of Martin Jay. However, unexcited at the prospect of becoming a professional historian, he left graduate school after taking his orals and enrolled at Harvard Law School. After a year in law school, he decided that he did want to finish his Ph.D. after all. He switched the subject of his dissertation to a topic that could be done on the basis of materials available in American libraries. Drawing upon an earlier seminar paper, he decided to write about the interwar moral philosophy of Emmanuel Levinas. After graduating from Berkeley and Harvard in 2000-01, he joined Columbia University as an assistant professor in history.

Though he had never written about human rights before, he had become interested in the subject in law school and during his work in the White House at the time of the Kosovo bombings. At Columbia, he decided to pursue his interest in human rights further and began to teach a course called “Historical Origins of Human Rights.” The conversations in this class were complemented by those with two newly arrived faculty members, Mark Mazower and Susan Pedersen, both of whom were then working on the international history of the twentieth century. In 2008, Moyn decided that it was finally time to write about human rights.


Samuel Moyn, The Last Utopia: Human Rights in History (Cambridge: Harvard University Press, 2012)

In The Last Utopia, Moyn’s aim was to contest the theories about the long-term origins of human rights. His key argument was that it was only in the 1970s that the concept of human rights crystallized as a global language of justice. In arguing thus, he sharply distinguished himself from the historian Lynn Hunt who had suggested that the concept of human rights stretched all the way back to the French Revolution. Before Hunt published her book on human rights, Moyn told me, his class had shared some of her emphasis. Both scholars, for example, were influenced by Thomas Laqueur’s account of the origins of humanitarianism, which focused on the upsurge of sympathy in the eighteenth century. Laqueur’s argument, however, had not even mentioned human rights. Hunt’s genius (or mistake?), Moyn believes, was to make that connection.

Moyn, however, is not the only historian to see the 1970s as a turning point. In his Age of Fracture (2012), intellectual historian Daniel Rodgers has made a similar argument about how the American postwar consensus came under increasing pressure and finally shattered in the 70s. But there are some important differences. As Moyn explained to me, Rodgers’s argument is more about the disappearance of alternatives, whereas his is more concerned with how human rights survived that difficult moment. Furthermore, Rodgers’s focus on the American case makes his argument unique because, in comparison with transatlantic cases, the American tradition does not have a socialist starting point. Both Moyn and Rodgers, however, have been criticized for failing to take neoliberalism into account. Moyn says that he has tried to address this in his forthcoming book Not Enough: Human Rights in an Unequal World.

Some have come to see Moyn’s book as mostly about President Jimmy Carter’s contributions to the human rights revolution. Moyn himself, however, thinks that the book is ultimately about the French Revolution and its abandonment in modern history for an individualistic ethics of rights, including the Levinasian ethics which he once studied. In Moyn’s view, human rights are a part of this “ethical turn.” While he was working on the book, Moyn’s own thinking underwent a significant revolution. He began to explore the place of decolonization in the story he was trying to tell. Decolonization was not something he had thought about very much before but, as arguably one of the biggest events of the twentieth century, it seemed indispensable to the human rights revolution. In the book, he ended up making the very controversial argument that human rights largely emerged as the response of westerners to decolonization. Since they had now lost the interventionist tool of empire, human rights became a new universalism that would allow them to think about, care about, and perhaps intervene in places they had once ruled directly.

Though widely acclaimed, Moyn’s thesis has been challenged on a number of fronts. For one thing, Moyn himself believes that the argument of the book is problematic because it globalizes a story that is mostly about French intellectuals in the 1970s. Then there are critics such as Stefan-Ludwig Hoffmann, a German historian at UC Berkeley, who have suggested, in Moyn’s words, that “Sam was right in dismissing all prior history. He just didn’t dismiss the 70s and 80s.” Moyn says that he finds Hoffmann’s arguments compelling and that, if we think of human rights primarily as a political program, the 90s do deserve the lion’s share of attention. After all, Moyn’s own interest in the politics of human rights emerged during the 90s.


Eleanor Roosevelt with a Spanish-language copy of the Universal Declaration of Human Rights

Perhaps one of Moyn’s most controversial arguments is that the field of the history of human rights no longer has anything new to say. Most of the questions about the emergence of the human rights movements and the role of international institutions have already been answered. Given the major debate provoked by his own work, I am skeptical that this is indeed the case. Plus, there are a number of areas which need further research. For instance, we need to better understand the connections between signature events such as the adoption of the Universal Declaration of Human Rights, and the story that Moyn tells about the 1970s. But I think Moyn made a compelling point when he suggested to me that we cannot continue to constantly look for the origins of human rights. In doing so, we often run the risk of anachronism and misinterpretation. For instance, some scholars have tried to tie human rights back to early modern natural law. However, as Moyn put it, “what’s lost when you interpret early modern natural law as fundamentally a rights project is that it was actually a duties project.”

Moyn is ambivalent about recent developments in the study and practice of history in general. He thinks that the rise of global and transnational history is a welcome development because, ultimately, there is no reason for methodological nationalism to prevail. However, in his view, this has had a somewhat adverse effect on graduate training. When he went to grad school, he took courses that focused on national historiographical canons and many of the readings were in the original language. With the rise of global history, it is not clear that such courses can be taught anymore. For instance, no teacher could demand that all the students know the same languages. Consequently, Moyn says, “most of what historians were doing for most of modern history is being lost.” This is certainly an interesting point, and it raises the question of how graduate programs can train their students to strike a balance between the wide perspectives of global history and the deep immersion of a more national approach.

Otherwise, however, in contrast with many of his fellow scholars, Moyn is surprisingly upbeat about the current state and future of the historical profession. He thinks that we are living in a golden age of historiography with many impressive historians producing outstanding works. There is certainly more scope for history to be more relevant to the public. But historians engaging with the public shouldn’t do so in crass ways, such as suggesting that there is a definitive relevance of history to public policy. History does not have to change radically. It can simply continue to build upon its existing strengths.

lynn-hunt

Professor Lynn Hunt (UCLA)

In the face of Lynn Hunt’s recent judgment that the field of “history is in crisis and not just one of university budgets,” this is a somewhat puzzling conclusion. However, it is one that I happen to agree with. Those who suggest that historians should engage with policy makers certainly have a point. However, instead of emphasizing the uniqueness of history, their arguments often devolve into claims about what historians can do better than economists and political scientists. In the process, they often lose sight of the fact that, more than anything, historians are storytellers. History rightly belongs in the humanities rather than the social sciences. It is only in telling stories that inspire and excite the public’s imagination that historians can regain the respect that many think they have lost in the public eye.

Pranav Kumar Jain is a doctoral student in early modern history at Yale University.

Alexander and Wilhelm von Humboldt, Brothers of Continuity

By guest contributor Audrey Borowski

At the beginning of the nineteenth century, a young German polymath ventured into the heart of the South American jungle, climbed the Chimborazo volcano, crawled through the Andes, conducted experiments on animal electricity, and delineated climate zones across continents.  His name was Alexander von Humboldt (1769–1859). With the young French scientist Aimé Bonpland and equipped with the latest instruments, Humboldt tirelessly collected and compared data and specimens, returning after five years to Paris with trunks filled with notebooks, sketches, specimens, measurements, and observations of new species. Throughout his travels in South America, Russia and Mongolia, he invented isotherms and formulated the idea of vegetation and climate zones. Crucially, he witnessed the continuum of nature unfold before him and set forth a new understanding of nature that has endured up to this day. Man existed in a great chain of causes and effects in which “no single fact can be considered in isolation.” Humboldt sought to discover the “connections which linked all phenomena and all forces of nature.” The natural world was teeming with organic powers that were incessantly at work and which, far from operating in isolation, were all “interlaced and interwoven.” Nature, he wrote, was “a reflection of the whole” and called for a global understanding. Humboldt’s Essay on the Geography of Plants (1807) was the world’s first book on ecology in which plants were grouped into zones and regions rather than taxonomic units and analogies drawn between disparate regions of the globe.

In this manner, Alexander sketched out a Naturgemälde, a “painting of nature” that fused botany, geology, zoology and physics in one single picture, and thereby broke away from prevailing taxonomic representations of the natural world. His was a fundamentally interdisciplinary approach, at a time when scientific inquiry was becoming increasingly specialized. The study of the natural world was no abstract endeavor and was far removed from the mechanistic philosophy that had held sway up till then. Nature was the object of scientific inquiry, but also of wonder, and as such it exerted a mysterious pull. Man was firmly relocated within a living cosmos broader than himself, which appealed equally to his emotions and imagination. From the heart of the jungle to the summit of volcanoes, “nature everywhere [spoke] to man in a voice that is familiar to his soul” and what spoke to the soul, Humboldt wrote, “escapes our measurements” (Views of Nature, 217-18). In this, Humboldt followed in the footsteps of Goethe, his lifelong friend, and the German philosopher Friedrich Schelling, in particular the latter’s Naturphilosophie (“philosophy of nature”). Nature was a living organism that had to be grasped in its unity, and its study should steer away from “crude empiricism” and the “dry compilation of facts” and instead speak to “our imagination and our spirit.” Rigorous scientific method was thus wedded to art and poetry, and the boundaries between the subjective and the objective, the internal and the external, were blurred. “With an aesthetic breeze,” Goethe wrote, Alexander had lit science into a “bright flame” (quoted in Wulf, The Invention of Nature, 146).

Alexander von Humboldt’s older brother, Wilhelm (1767-1835), a government official with a great interest in reforming the Prussian educational system, had been similarly inspired. While his brother had ventured out into the jungle, Wilhelm, for his part, had devoted much of his life to the exploration of the linguistic realm, whether in his study of Native American and ancient languages or in his attempts to grasp the relation between linguistic and mental structures. Like the German philosopher and literary critic Johann Gottfried Herder before him, Humboldt posited that language, far from being merely a means of communication, was the “formative organ” (W. Humboldt, On the Diversity of Human Language, 54) of thought. According to this view, man’s judgmental activity was inextricably bound up with his use of language. Humboldt’s linguistic thought relied on a remarkable interpretation of language itself: language was an activity (energeia) as opposed to a work or finished product (ergon). In On the Diversity of Human Language Construction and its Influence on the Mental Development of the Human Species (1836), his major treatise on language, Wilhelm articulated a forcefully expressivist conception of language, in which he brought to bear the interconnectedness and organic nature of all languages and, by extension, of various worldviews. Far from being a “dead product,” an “inert mass,” language appeared as a “fully-fashioned organism” that, within the remit of an underlying universal template, was free to evolve spontaneously, allowing for maximum linguistic diversity (90).

Weimarer_Klassik

Left to Right: Friedrich Schiller, Wilhelm von Humboldt, Alexander von Humboldt, and Johann Wolfgang von Goethe, depicted by Adolph Müller (c.1797)

To the traditional objectification of language, Wilhelm opposed a reading of language that was heavily informed by biology and physiology, in keeping with the scientific advances of his time. Within this framework, language could not be abstracted, interwoven as it was with the fabric of everyday life. Henceforth, there was no longer one “objective” way of knowing the world, but a variety of different worldviews. Like his brother, Wilhelm strove to understand the world in its individuality and totality.

At the heart of the linguistic process lay an in-built mechanism, a feedback loop that accounted for language’s ability to generate itself. This consisted in the continuous interplay between an external sound-form and an inner conceptual form, whose “mutual interpenetration constitute[d] the individual form of language” (54). In this manner, rhythms and euphonies played a role in expressing internal mental states. The dynamic and self-generative aspect of language was therefore inscribed in its very core. Language was destined to remain in perpetual flux and renewal, effecting a continuous generation and regeneration of its world-making capacity. A powerfully spontaneous and autonomous force, it brought about “something that did not exist before in any constituent part” (473).

As much as the finished product could be analyzed, the actual linguistic process defied any attempt at scientific scrutiny, remaining inherently mysterious. Language may well abide by general rules, but it was fundamentally akin to a work of art, the product of a creative outburst which “cannot be measured out by the understanding” (81). Language, as much as it was rule-governed and called for empirical and scientific study, originated somewhere beyond semio-genesis. “Imagination and feeling,” Wilhelm wrote, “engender individual shapings in which the individual character […] emerges, and where, as in everything individual, the variety of ways in which the thing in question can be represented in ever-differing guises, extends to infinity” (81). Wilhelm therefore elevated language to a quasi-transcendental status, endowing it with a “life-principle” of its own and consecrating it as a “mental exhalation,” the manifestation of a free, autonomous spiritual force. He denied that language was the product of voluntary human activity, viewing it instead as a “gift fallen to [the nations] by their own destiny” (24), partaking in a broader spiritual mission. In this sense, the various nations constituted diverse individualities, each pursuing an inner spiritual path of its own, with each language existing as a spiritual creation and gradual unfolding:

If in the soul the feeling truly arises that language is not merely a medium of exchange for mutual understanding, but a true world which the intellect must set between itself and objects by the inner labour of its power, then the soul is on the true way toward discovering constantly more in language, and putting constantly more into it (135).

While he seemed to share his brother’s intellectual demeanor, Wilhelm disapproved of many of Alexander’s life-choices, from living in Paris rather than Berlin (particularly during the wars of liberation against Napoleon), which he felt was most unpatriotic, to leaving the civilized world in his attempts to come closer to nature (Wulf 151). Alexander, the natural philosopher and adventurer, for his part reproached his brother for his conservatism and social and political guardedness. In a time marred by conflict and the growth of nationalism, science, for him, had no nationality and he followed scientific activity wherever it took him, especially to Paris, where he was widely celebrated throughout his life. In a European context of growing repression and censorship in the wake of Napoleon’s defeat, he encouraged the free exchange of ideas and information, and pleaded for international collaborations between scientists and the collection of global data; truth would gradually emerge from the confrontation of different opinions. He also gave many lectures during which he would effortlessly hop from one subject to another, thereby helping to popularize science. More generally, he would help other scholars whenever he could, intellectually or financially.

As the ideas of 1789 failed to materialize, giving way instead to a climate of censorship and repression, Alexander slowly grew disillusioned with politics. His extensive travels had provided him with insights not only into the natural world but also into the human condition. “European barbarity,” especially in the shape of colonialism, tyranny and serfdom, had fomented dissent and hatred. Even the newly-born American Republic, with its founding principles of liberty and the pursuit of happiness, was not immune to this scourge (Wulf 171). Man, with his greed, violence and ignorance, could be as barbaric to his fellow man as he was to nature. Nature was inextricably linked with the actions of mankind, and the latter often left a trail of destruction in its wake through deforestation, ruthless irrigation, industrialization and intensive cultivation. “Man can only act upon nature and appropriate her forces to his use by comprehending her laws,” Alexander would later write, and failure to do so would eventually leave even distant stars “barren” and “ravaged” (Wulf 353).

Furthermore, while Wilhelm was perhaps the more celebrated in his time, it was Alexander’s legacy that would prove the more enduring, inspiring new generations of nature writers and scientists: Henry David Thoreau, the American transcendentalist, who intended his masterpiece Walden as an answer to Humboldt’s Cosmos; John Muir, the great preservationist; and Ernst Haeckel, who discovered radiolarians and coined the term “ecology.” Another noteworthy influence was on Darwin and his theory of evolution. Darwin took Humboldt’s web of complex relations a step further and turned it into a tree of life from which all organisms stem. Humboldt sought to upend the ideal of “cultivated nature,” most famously perpetuated by the French naturalist the Comte de Buffon, whereby nature had to be domesticated, ordered, and put to productive use. Crucially, he inspired a whole generation of adventurers, from Darwin to Joseph Banks, and revolutionized scientific practice by tearing the scientist away from the library and back into the wilderness.

For all their many criticisms and disagreements, both brothers shared a strong bond. Alexander, who survived Wilhelm by twenty-four years, emphasized again and again Wilhelm’s “greatness of the character” and his “depth of emotions,” as well as his “noble, still-moving soul life.” Both brothers carved out unique trajectories for themselves, the first as a jurist, a statesman and a linguist, the second arguably as the first modern scientist; yet both still remained beholden to the idea of totalizing systems, each setting forth insights that remain more pertinent than ever.

2390168a

Alexander and Wilhelm von Humboldt, from a frontispiece illustration of 1836

Audrey Borowski is a historian of ideas and a doctoral candidate at the University of Oxford.

In Dread of Derrida

By guest contributor Jonathon Catlin

According to Ethan Kleinberg, historians are still living in fear of the specter of deconstruction; their attempted exorcisms have failed. In Haunting History: For a Deconstructive Approach to the Past (2017), Kleinberg fruitfully “conjures” this spirit so that historians might finally confront it and incorporate its strategies for representing elusive pasts. A panel of historians recently discussed the book at New York University, including Kleinberg (Wesleyan), Joan Wallach Scott (Institute for Advanced Study), Carol Gluck (Columbia), and Stefanos Geroulanos (NYU), moderated by Zvi Ben-Dor Benite (NYU). A recording of the lively two-hour exchange is available at the bottom of this post.

Processed with VSCO with f2 preset

Left to Right: Profs Geroulanos, Gluck, Kleinberg, and Scott

History’s ghost story goes back some decades. Hayden White’s Metahistory roiled the profession in 1973 by effectively translating the “linguistic turn” of French deconstruction into historical terms: historical narratives are no less “emplotted” in genres like romance and comedy, and hence no less unstable, than literary ones. White sparked fierce debate, notably about the limits of representing the Holocaust, a debate that took place alongside probes into the ethics of those of deconstruction’s heroes with ties to Nazism, including Martin Heidegger and Paul de Man. The intensity of these battles was arguably a product of hatred for one theorist in particular: Jacques Derrida, whose work forms the backbone of Kleinberg’s book. Yet despite decades of scholarship undermining the nineteenth-century, Rankean foundations of the historical discipline, the regime of what Kleinberg calls “ontological realism” apparently still reigns. His book is not simply the latest in a long line of criticism of such work, but rather a manifesto for a positive theory of historical writing that employs deconstruction’s linguistic and epistemological insights.

This timely intervention took place, as Scott remarked, “in a moment when the death of theory has been triumphantly proclaimed, and indeed celebrated, and when many historians have turned with relief to accumulating big data, or simply telling evidence-based stories about an unproblematic past.” She lamented that

the self-reflexive moment and the epistemological challenge associated with names like Foucault, Irigaray, Derrida, and Lacan—all those dangerous French theorists who interrogated the very ground on which we stood—reality, truth, experience, language, the body—that moment is said to be past, a wrong turn taken; thankfully we’re now on the right course.

Scott praised Kleinberg’s book for haunting precisely this sense of “triumphalism.”

Kleinberg began his remarks with a disappointed but unsurprised reflection that most historians still operate under the spell of what he calls “ontological realism.” This methodology is defined by the attempt to recover historical events, which, insofar as they are observable, become “fixed and immutable.” This elides the difference between the “real” past and history (writing about the past), unwittingly taking “the map of the past,” or historical representation, as the past itself. It implicitly operates as if the past is a singular and discrete object available for objective retrieval. While such historians may admit their own uncertainty about events, they nevertheless insist that the events really happened in a certain way; the task is only to excavate them ever more exactly.

This dogmatism reigns despite decades of deconstructive criticism from the likes of White, Frank Ankersmit, and Dominick LaCapra in the pages of journals like History and Theory (of which Kleinberg is executive editor), which has immeasurably sharpened the self-consciousness of historical writing. In his 1984 History and Criticism, LaCapra railed against the “archival fetishism” then evident in social history, whereby the archive became “more than the repository of traces of the past which may be used in its inferential reconstruction” and took on the quality of “a stand-in for the past that brings the mystified experience of the thing itself” (p. 92, n. 17). If historians had read their Derrida, however, they would know that the past inscribed in writing “is ‘always already’ lost for the historian.” Scott similarly wrote in a 1991 Critical Inquiry essay: “Experience is at once always already an interpretation and is in need of interpretation.” As she cited from Kleinberg’s book, meaning is produced by reading a text, not released from it or simply reflected. Every text, no matter how documentary, is a “site of contestation and struggle” (15).

Kleinberg’s intervention is to remind us that this erosion of objectivity is not just a tragic story of decline into relativism, for a deconstructive approach also frees historians from the shackles of objectivism, opening up new sources and methodologies. White famously concluded in Metahistory that there were at the end of the day no “objective” or “scientific” reasons to prefer one way of telling a story to another, but only “moral or aesthetic ones” (434). With the acceptance of what White called the “Ironic” mode, which refused to privilege certain accounts of the past as definitive, also came a new freedom and self-consciousness. Kleinberg similarly revamps White’s Crocean conclusion that “all history is contemporary history,” reminding us that our present social and political preoccupations determine which voices we seek out and allow to speak in our work. We can never tell the authoritative history of a subject, but only construct a possible history of it.

Kleinberg relays the upside of deconstructive history more convincingly than White ever did: Opening up history beyond ontological realism makes room for “alternative pasts” to enter through the “present absences” in historiography. Contrary to historians’ best intentions, the hold of ontological positivism perversely closes out and renders illegible voices that do not fit the dominant paradigm and are marginalized into obscurity by the authority of each self-enclosed narrative. Hence making some voices legible too often makes others illegible, as when E. P. Thompson foregrounded the working class only to sideline women. The alternative is a porous account that allows itself to be penetrated by alterity and unsettled by the ghosts it has excluded. The latent ontology of holding onto some “real,” to the exclusion of others, would thus give way to a hauntology (Derrida’s play on the ambiguous sound of the French ontologie) whereby the text acknowledges and allows in present absences. Whereas for Kleinberg Foucault has been “tamed” by the historical discipline, this Derridean metaphor remains unsettling. Reinhart Koselleck’s notion of “non-simultaneity” (Ungleichzeitigkeit) further informs Kleinberg’s view of “hauntology as a theory of multiple temporalities and multiple pasts that all converge, or at least could converge, on the present,” that is, on the historian in the act of writing about the past (133).

Kleinberg fixates on the metaphor of the ghost because it represents the liminal in-between of absent presences and present absences. Ghosts are unsettling because they obey no chronology, flitting between past and present, history and dream. Yet deconstructive hauntology stands to enrich narratives because destabilized stories become porous to previously excluded voices. In his response, Geroulanos pressed Kleinberg to consider several alternative monster metaphors: ghosts who tell lies, not bringing back the past “as it really was” but making up alternative claims; and the in-between figure of the zombie, the undead past that has not passed.

Even in the theory-friendly halls of NYU, Kleinberg was met with some of the same suspicion and opposition White was decades ago. While all respondents conceded the theoretical import of Kleinberg’s argument, the question remained how to write such a history in practice. Preempting this question, Kleinberg’s conclusion includes a preview of a parallel book he has been writing on the Talmudic lectures Emmanuel Levinas presented in postwar Paris. He hopes to enact what Derrida called a “double session.” The first half of the book provides a secular intellectual history of how Levinas, prompted by the Holocaust, shifted from Heidegger to Talmud; but the second half tells this history from the perspective of revelation, inspired by “Levinas’s own counterhistorical claim that divine and ethical meaning transcends time,” telling a religious counter-narrative to the standard secular one. Scott praised the way Kleinberg’s two narratives provide two positive accounts that nonetheless unsettle one another. Kleinberg writes: “The two sessions pull at each other, creating cracks in any one homogenous history, through which portions of the heterogeneous and polysemic past that haunts history can rise and be activated.” This “dislodging” and “irruptive” method “marks an irreducible and generative multiplicity” of alternate histories (149). Active haunting prevents Kleinberg’s method from devolving into mere perspectivism; each narrative actively throws the other into question, unsettling its authority.

A further decentering methodology Kleinberg proposed was breaking through the “analog ceiling” of print scholarship into the digital realm. Gluck emphasized how digital or cyber-history has the freedom to be more associative than chronological, interrupting texts with links, alternative accounts, and media. Thus far, however, digital history, shackled by big data and “neoempiricism,” has largely remained in the grip of ontological realism, producing linear narratives. Still, there was some consensus that these technologies might enable new deconstructive approaches. In this sense, Kleinberg writes, “Metahistory came too soon, arriving before the platforms and media that would allow us to explore the alternative narrative possibilities that were at our ready disposal” (117).

Listening to Kleinberg, I thought of a recent experimental book by Yair Mintzker, The Many Deaths of Jew Süss: The Notorious Trial and Execution of an Eighteenth-Century Court Jew (2017). It tells the story of the death of Joseph Oppenheimer, the villain of the infamous Nazi propaganda film Jud Süss (1940) produced at the behest of Nazi propaganda minister Joseph Goebbels. Mintzker was inspired by the narrative model of the film Rashomon (1950), which Geroulanos elaborated in some depth. Director Akira Kurosawa famously presents four different and conflicting accounts of how a samurai traveling through a wooded grove ends up murdered, from the perspectives of his wife, the bandit they encounter, a bystander, and the samurai himself speaking through a medium. Mintzker’s narrative choice is not postmodern fancy, but in this case a historiographical necessity. Because Oppenheimer, as a Jew, was not entitled to give testimony in his own trial, the only extant accounts available come from four similarly self-interested and conflictual sources: a judge, a convert, a Jew, and a writer. Mintzker’s work would seem to demonstrate the viability of Kleinbergian hauntology well outside twentieth-century intellectual history.

Kleinberg mused in closing: “If there’s one thing I want to do…it’s to take this book and maybe scare historians a little bit, and other people who think about the past. To make them uncomfortable, in the end, I hope, in a productive way.” Whether historians will welcome this unsettling remains to be seen, for as with White the cards remain stacked against theory. Yet our present anxiety about living in a “post-truth era” might just provide the necessary pressure for historians to recognize the ghosts that haunt the interminable task of engaging the past.

 

Jonathon Catlin is a PhD student in History at Princeton University. He works on intellectual responses to catastrophe in German and Jewish thought and the Frankfurt School of critical theory.

 

 

Aristotle in the Sex Shop and Activism in the Academy: Notes from the Joint Atlantic Seminar in the History of Medicine

By Editor Spencer J. Weinreich

Four enormous, dead doctors were present at the opening of the 2017 Joint Atlantic Seminar in the History of Medicine. Convened in Johns Hopkins University’s Welch Medical Library, the gathering was dominated by a canvas of mammoth proportions, a group portrait by John Singer Sargent of the four founders of Johns Hopkins Hospital. Dr. William Welch, known in his lifetime as “the dean of American medicine” (and the library’s namesake). Dr. William Halsted, “the father of modern surgery.” Dr. Sir William Osler, “the father of modern medicine.” And Dr. Howard Kelly, who established the modern field of gynecology.

1905 Professors Welch, Halsted, Osler and Kelly (aka The Four Doctors) oil on canvas 298.6 x 213.3 cm Johns Hopkins University School of Medicine, Baltimore MD

John Singer Sargent, Professors Welch, Halsted, Osler, and Kelly (1905)

Beneath the gazes of this august quartet, graduate students and faculty from across the United States and the United Kingdom gathered for the fifteenth iteration of the Seminar. This year, the program’s theme was “Truth, Power, and Objectivity,” explored in thirteen papers ranging from medical testimony before the Goan Inquisition to the mental impact of First World War bombing raids, from Booker T. Washington’s National Negro Health Week to the emergence of Chinese traditional medicine. It would not do justice to the papers or their authors to cover them all in a post; instead I shall concentrate on the two opening sessions: the keynote lecture by Mary E. Fissell and a faculty panel with Nathaniel Comfort, Gianna Pomata, and Graham Mooney (all of Johns Hopkins University).

I confess to some surprise at the title of Fissell’s talk, “Aristotle’s Masterpiece and the Re-Making of Kinship, 1820–1860.” Fissell is known as an early modernist, her major publications exploring gender, reproduction, and medicine in seventeenth- and eighteenth-century England. Her current project, however, is a cultural history of Aristotle’s Masterpiece, a book on sexuality and childbirth first published in 1684 and still being sold in London sex shops in the 1930s. The Masterpiece was distinguished by its discussion of the sexual act itself, and its consideration (and copious illustrations) of so-called “monstrous births.” It was, in Fissell’s words, a “howling success,” seeing an average of one edition a year for 250 years, on both sides of the Atlantic.

It should be explained that there is very little Aristotle in Aristotle’s Masterpiece. In early modern Europe, the Greek philosopher was regarded as the classical authority on childbirth and sex, and so offered a suitably distinguished peg on which to hang the text. This allowed for a neat trick of bibliography: when the Masterpiece was bound together with other (spurious) works, like Aristotle’s Problems, the spine might be stamped with the innocuous (indeed impressive) title “Aristotle’s Works.”

st-john-the-baptist-el-greco-c-1600

El Greco, John the Baptist (c.1600)

At the heart of Aristotle’s Masterpiece, Fissell argued, was genealogy: how reproduction—“generation,” in early modern terms—occurred and how the traits of parents related to those of their offspring. This genealogy is unstable, the transmission of traits open to influences of all kinds, notably the “maternal imagination.” The birth of a baby covered in hair, for example, could be explained by the pregnant mother’s devotion to an image of John the Baptist clad in skins. Fissell brilliantly drew out the subversive possibilities of the Masterpiece, as when it “advised” women that adultery might be hidden by imagining one’s husband during the sex act, thus ensuring that the child would look like him. Central though family resemblance is to reproduction, it is “a vexed sign,” with “several jokers in every deck,” because women’s bodies are mysterious and have the power to disrupt lineage.

Fissell principally considered the Masterpiece’s fortunes in the mid-nineteenth-century Anglophone world, as the unstable generation it depicted clashed with contemporary assumptions about heredity. Here she framed her efforts as a “footnote” to Charles Rosenberg’s seminal essay, “The Bitter Fruit: Heredity, Disease, and Social Thought in Nineteenth-Century America,” which traced how discourses of heredity pervaded all branches of science and medicine in this period. George Combe’s Constitution of Man (1828), an exposition of the supposedly rigid natural laws governing heredity (with a tilt toward self-discipline and self-improvement), was the fourth-bestselling book of the period (after the Bible, Pilgrim’s Progress, and Robinson Crusoe). Other hereditarian works sketched out the gendered roles of reproduction—what children inherited from their mothers versus from their fathers—and the possibilities for human action (proper parenting, self-control) for modulating genealogy. Wildly popular manuals for courtship and marriage advised young people on the formation of proper unions and the production of healthy children, in terms shot through with racial and class prejudices (though not yet solidified into eugenics as we understand that term).

The fluidity of generation depicted in Aristotle’s Masterpiece became conspicuous against the background of this growing obsession with a law-like heredity. Take the birth of a black child to white parents. The Masterpiece explains that the mother was looking at a painting of a black man at the moment of conception; hereditarian thought identified a black ancestor some five generations back, the telltale trait slowly but inevitably revealing itself. Thus, although the text of the Masterpiece did not change much over its long career, its profile changed dramatically, because of the shifting bibliographic contexts in which it moved.

In the mid-nineteenth century, the contrasting worldviews of the Masterpiece and the marriage manuals spoke to the forms of familial life prevalent at different social strata. The more chaotic picture of the Masterpiece reflected the daily life of the working class, characterized by “contingent formations,” children born out of wedlock, wife sales, abandonment, and other kinds of “marital nonconformity.” The marriage manuals addressed themselves to upper-middle-class families, but did so in a distinctly aspirational mode. They warned, for example, against marrying cousins, precisely at a moment when well-to-do families were “kinship hot,” in David Warren Sabean’s words, favoring serial intermarriage among a few allied clans. This was a period, Fissell explained, in which “who and what counted as family was much more complex” and “contested.” The ambiguity—and power—of this issue manifested in almost every sphere, from the shifting guidelines for census-takers on how a “family” was defined, to novels centered on complex kinship networks, such as John Lang’s Will He Marry Her? (1858), to the flood of polemical literature surrounding a proposed law forbidding a man to marry his deceased wife’s sister—a debate involving many more people than could possibly have been affected by the legislation.

After a rich question-and-answer session, we shifted to the faculty panel, with Professors Comfort, Pomata, and Mooney asked to reflect on the theme of “Truth, Power, and Objectivity.” Comfort, a scholar of modern biology, began by discussing his work with oral histories—“creating a primary source as you go, and in most branches of history that’s considered cheating.” Here perfect objectivity is not necessarily helpful: “when you make yourself emotionally available to your subjects […] you can actually gain their trust in a way that you can’t otherwise.” Equally, Comfort encouraged the embrace of sources’ unreliability, suggesting that unreliability might itself be a source—the more unreliable a narrative is, the more interesting and the more indicative of something meant it becomes. He closed with the observation that different audiences required different approaches to history and to history-writing—it is not simply a question of tone or language, but of what kind of bond the scholar seeks to form.

Professor Pomata, a scholar of early modern medicine, insisted that moments of personal contact between scholar and subject were not the exclusive preserve of the modern historian: the same connections are possible, if in a more mediated fashion, for those working on earlier periods. In this interaction, respect is of the utmost importance. Pomata quoted a line from W. B. Yeats’s “He wishes for the Cloths of Heaven”:

I have spread my dreams under your feet;

Tread softly because you tread on my dreams.

As a historian of public health—which he characterized as an activist discipline—Mooney declared, “I’m not really interested in objectivity. […] I’m angry about what I see.” He spoke compellingly about the vital importance of that emotion, properly channeled toward productive ends. The historian possesses power: not simply as the person setting the terms of inquiry, but as a member of privileged institutions. In consequence, he called on scholars to undermine their own power, to make themselves uncomfortable.

The panel was intended to be open-ended and interactive, so these brief remarks quickly segued into questions from the floor. Asked about the relationship between scholarship and activism, Mooney insisted that passion, even anger, are essential, because they drive the scholar into the places where activism is needed—and cautioned that it is ultimately impossible to be the dispassionate observer we (think we) wish to be. With beautiful understatement, Pomata explained that she went to college in 1968, when “a lot was happening in the world.” Consequently, she conceived of scholarship as having to have some political meaning. Working on women’s history in the early 1970s, “just to do the scholarship was an activist task.” Privileging “honesty” over “objectivity,” she insisted that “scholarship—honest scholarship—and activism go together.” Comfort echoed much of this favorable account of activism, but noted that some venues are more appropriate for activism than others, and that there are different ways of being an activist.

Dealing with the horrific—eugenics was the example offered—requires, Mooney argued, both the rigor of a critical method and sensitive emotional work. Further, all three panelists emphasized crafting, and speaking in, one’s own voice, eschewing the temptation to imitate more prominent scholars and embracing the first person (and the subjectivity it marks). Voice, Comfort noted, isn’t natural, but something honed, and both he and Pomata recommended literature as an essential tool in this regard.

Throughout, the three panelists concurred in urging collaborative, interdisciplinary work, founded upon respect for other knowledges and humility—which, Comfort insightfully observed, is born of confidence in one’s own abilities. Asking the right questions is crucial, the key to unlocking the stories of the oppressed and marginalized within sources created by those in power. Visual sources have the potential to express things inexpressible in words—Comfort cited a photograph that wonderfully captured the shy, retiring nature of Dr. Barton Childs—but they must be used as sources in their own right, not as mere illustrations. The question about visual sources was the last of the evening, and Professor Pomata had the last word. Her final comment offers the perfect summation of the creativity, dedication, and intellectual ferment on display in Baltimore that weekend: “we are artists, don’t forget that.”

Ultimate Evil: Cultural Sociology and the Birth of the Supervillain

By guest contributor Albert Hawks, Jr.

In June 1938, editor Jack Leibowitz found himself in a time crunch. Needing to get something to the presses, Leibowitz approved a recent submission for a one-off round of prints. The next morning, Action Comics #1 appeared on newsstands. On the cover, a strongman in bright blue circus tights and a red cape was holding a green car above his head while people ran in fear. Other than the dynamic title “ACTION COMICS”, there was no text explaining the scene. In an amusing combination of hubris and prophecy, the last panel of Action Comics #1 proclaimed: “And so begins the startling adventures of the most sensational strip character of all time!” Superman was born.

Action_Comics_1

Comics are potentially incomparable resources given the cultural turn in the social sciences (a shift in the humanities and social sciences in the late twentieth century toward a more robust study of culture and meaning and away from positivism). The sheer volume of narrative—somewhere in the realm of 10,000 sequential Harry Potter books—and social saturation—approximately 91–95% of children between the ages of six and eleven read comics regularly, according to a 1944 psychological study—remain singular today (Lawrence 2002, Parsons 1991).

Cultural sociology has shown us that “myth and narrative are elemental meaning-making structures that form the bases of social life” (Woodward, 671). In a lecture on Giacometti’s Standing Woman, Jeffrey Alexander pushes forward a way of seeing iconic experiences as central, salient components of social life. He argues:

 Iconographic experience explains how we feel part of our social and physical surroundings, how we experience the reality of the ties that bind us to people we know and people we don’t know, and how we develop a sense of place, gender, sexuality, class, nationality, our vocation, indeed our very selves (Alexander, 2008, 7).

He further suggests these experiences informally establish social values (Alexander, 2008, 9). Relevant to our purposes, Alexander stresses Danto’s work on “disgusting” and “offensive” as aesthetic categories (Danto, 2003) and Simmel’s argument that “our sensations are tied to differences” with higher and lower values (Simmel, 1968).

This suggests that theoretically the comic book is a window into pre-existing, powerful, and often layered morals and values held by the American people that also in turn helped build cultural moral codes (Brod, 2012; Parsons, 1991).

The comic book superhero, as invented and defined by the appearance of Superman, is a highly culturally contextualized medium that expresses particular subgroups’ anxieties, hopes, and values and their relationship to broader American society.

But this isn’t a history of comics, accidental publications, or even the most famous hero of all time. As Ursula K. Le Guin says, “to light a candle is to cast a shadow.” It was likely inevitable that the superhero—brightest of all the lights—would cast a very long shadow. Who, after all, could pose a challenge to Superman? Or what could occupy the world’s greatest detective? The world needed supervillains. The emergence of the supervillain offers a unique slice of moral history and a potentially powerful way to investigate the implicit cultural codes that shape society.

I want to briefly trace the appearance of recurring villains in comic books and note what their characteristics suggest about latent concepts of evil in society at the time. Given our limited space, I’m here only considering the earliest runs of the two most iconic heroes in comics: Superman (Action Comics #1-36) and Batman (Detective Comics #27-; Batman #1-4).

Ultra First Appearance

Initially, Superman’s enemies were almost exclusively one-off problems tied to socioeconomic situations. It wasn’t until June 1939 that readers met the first recurring comic book villain: the Ultrahumanite. Pursuing a lead on some run-of-the-mill racketeers, Superman comes across a bald man in a wheelchair: “The fiery eyes of the paralyzed cripple burn with terrible hatred and sinister intelligence.” His “crippled” status is mentioned regularly. The new villain wastes no time explaining that he is “the head of a vast ring of evil enterprises—men like Reynolds are but my henchmen” (Reynolds is a criminal racketeer introduced earlier in the issue), immediately signaling something new in comics. The man then formally introduces himself, not bothering with subtlety.

I am known as the ‘Ultra-humanite’. Why? Because a scientific experiment resulted in my possessing the most agile and learned brain on earth! Unfortunately for mankind, I prefer to use this great intellect for crime. My goal? Domination of the world!!

In issue 20, Superman discovers that, somehow, Ultra has become a woman. He explains to the Man of Steel: “Following my instructions, they kidnapped Dolores Winters yesterday and placed my mighty brain in her young vital body!” (Action Comics 20).

ultra dolores winters

Superman found his first recurring foil in unfettered intellect divorced from physicality. It’s hard not to wonder if this reflected a general distrust of the ever-increasing destructive power of science as World War II dawned. It’s also fascinating to note how consistently the physical status of the Ultrahumanite is emphasized, suggesting a deep social desire for physical strength, confidence, and respect.

After Ultra’s death, our hero would not be without a domineering, brilliant opponent for long. Action Comics 23 saw the advent of Lex Luthor. First appearing as an “incredibly ugly vision” of a floating face and lights, Luthor’s identity unfolds as a mystery. Superman pursues a variety of avenues, finding only a plot to draw countries into war and thugs unwilling to talk for fear of death. Lois actually encounters Luthor first, describing him as a “horrible creature”. When Luthor does introduce himself, it nearly induces déjà vu: “Just an ordinary man—but with the brain of a super-genius! With scientific miracles at my fingertips, I’m preparing to make myself supreme master of th’ world!”

dr death

The Batman develops his first supervillain at nearly the same time as Superman. In July 1939, one month after the Ultrahumanite appeared, readers are introduced to Dr. Death. Dr. Death first appears in a lavish study speaking with a Cossack servant (subtly implying Dr. Death is anti-democratic) about the threat Batman poses to their operations. Death is much like what we would now consider a cliché of a villain—he wears a suit, has a curled mustache and goatee, a monocle, and smokes a long cigarette while he plots. His goal: “To extract my tribute from the wealthy of the world. They will either pay tribute to me or die.” Much like Superman’s villains, he uses science—chemical weapons in particular—to advance these sinister goals. In their second encounter, Batman prevails and Dr. Death appears to burn to death. Of course, in comics the dead rarely stay that way; Dr. Death reappears the very next issue, his face horribly scarred.

hugostrange

The next regularly recurring villain to confront Batman appears in February 1940. Batman himself introduces the character to the reader: “Professor Hugo Strange. The most dangerous man in the world! Scientist, philosopher, and a criminal genius… little is known of him, yet this man is undoubtedly the greatest organizer of crime in the world.” Elsewhere, Strange is described as having a “brilliant but distorted brain” and a “malignant smile”. While he naturally is eventually captured, Strange becomes one of Batman’s most enduring antagonists.

The very next month, in Batman #1, another iconic villain appears: none other than the Joker himself.

Once again a master criminal stalks the city streets—a criminal weaving a web of death about him… leaving stricken victims behind wearing a ghastly clown’s grin. The sign of death from the Joker!

Also utilizing chemicals for his plots, the Joker is portrayed as a brilliant, conniving plotter who leads the police and Batman on a wild hunt. Unique to the Joker among the villains discussed is his characterization as a “crazed killer” with no aims of world power. The Joker wants money and murder. He’s simply insane.

DSC_0737

Some striking commonalities appear across our two early heroes’ comics. First, physical “flaws” are a critical feature. These deformities are regularly referenced, whether disability, scarring, or just a ghastly smile. Second, virtually all of these villains are genius-level intellects who use science to pursue selfish goals. And finally, among the villains, superpowers are at best a secondary feature, suggesting a close tie between physical health, desirability, and moral superiority. Danto’s aesthetic categories of “disgusting” and “offensive” certainly ring true here.

This is remarkably revealing and likely connected to deep cultural moral codes of the era. If Superman represents the “ideal type,” supervillains such as the Ultrahumanite, Lex Luthor, and the Joker are necessary and equally important iconic representations of those deep cultural moral codes. Such a brief overview cannot definitively draw out the moral world as revealed through comics and confirmed in history. Rather, my aims have been more modest: (1) to trace the history of the birth of the supervillain, (2) to draw a connective line between the strong cultural program, materiality, and comic books, and (3) to suggest the utility of comics for understanding the deep moral codes that shape a society. Cultural sociology allows us to see comics in a new light: as an iconic representation of culture that both reveals preexisting moral codes and in turn contributes to the ongoing development of said moral codes that impact social life. Social perspectives on evil are an actively negotiated social construct and comics represent a hyper-stylized, exceedingly influential, and unfortunately neglected force in this negotiation.

Albert Hawks, Jr. is a doctoral student in sociology at the University of Michigan, Ann Arbor, where he is a fellow with the Weiser Center for Emerging Democracies. He holds an M.Div. and S.T.M. from Yale University. His research concerns comparative Islamic social movements in Southeast and East Asia in countries where Islam is a minority religion, as well as in the American civil sphere.

Melodrama in Disguise: The Case of the Victorian Novel

By guest contributor Jacob Romanow

When people call a book “melodramatic,” they usually mean it as an insult. Melodrama is histrionic, implausible, and (therefore) artistically subpar—a reviewer might use the term to suggest that serious readers look elsewhere. Victorian novels, on the other hand, have come to be seen as an irreproachably “high” form of art, part of a “great tradition” of realistic fiction beloved by stodgy traditionalists: books that people praise but don’t read. But in fact, the nineteenth-century British novel and the stage melodrama that provided the century’s most popular form of entertainment were inextricably intertwined. The two forms have been linked from the beginning: many of the greatest Victorian novels are prose melodramas themselves. But from the Victorian period on down, critics, readers, and novelists have waged a campaign of distinctions and distractions aimed at disguising and denying the melodramatic presence in novelistic forms. The same process that canonized what were once massively popular novels as sanctified examples of high art scoured those novels of their melodramatic contexts, leaving our understanding of their lineage and formation incomplete. It’s commonly claimed that the Victorian novel was the last time “popular” and “high” art were unified in a single body of work. But the case of the Victorian novel reveals the limitations of constructed, motivated narratives of cultural development. Victorian fiction was massively popular, absolutely—but that popularity rested in significant part on the presence of “low” melodrama around and within those classic works.

image-2

A poster of the dramatization of Charles Dickens’s Oliver Twist

Even today, thinking about Victorian fiction as a melodramatic tradition cuts against many accepted narratives of genre and periodization; although most scholars will readily concede that melodrama significantly influences the novelistic tradition (sometimes to the latter’s detriment), it is typically treated as an external tradition whose features are being borrowed (or else as an alien encroaching upon the rightful preserve of a naturalistic “real”). Melodrama first arose in France around the French Revolution and quickly spread throughout Europe; A Tale of Mystery, an uncredited translation from the French by Thomas Holcroft (himself a novelist) and generally considered the first English melodrama, appeared in 1802. By the accession of Victoria in 1837, it had long been the dominant form on the English stage. Yet major critics have shown melodramatic method to be fundamental to the work of almost every major nineteenth-century novelist, from George Eliot to Henry James to Elizabeth Gaskell to (especially) Charles Dickens, often treating these discoveries as particular to the author in question. Moreover, the practical relationship between the novel and melodrama in Victorian Britain helped define both genres. Novelists like Charles Dickens, Wilkie Collins, Edward Bulwer-Lytton, Thomas Hardy, and Mary Elizabeth Braddon, among others, were themselves playwrights of stage melodramas. But the most common connection, like film adaptations today, was the widespread “melodramatization” of popular novels for the stage. Blockbuster melodramatic productions were adapted not only from popular crime novels of the Newgate and sensation schools like Jack Sheppard, The Woman in White, Lady Audley’s Secret, and East Lynne, but also from canonical works including David Copperfield, Jane Eyre, Rob Roy, The Heart of Midlothian, Mary Barton, A Christmas Carol, Frankenstein, Vanity Fair, and countless others, often in multiple productions for each. In addition to so many major novels being adapted into melodramas, many major melodramas were themselves adaptations of more or less prominent novels, for example Planché’s The Vampire (1820), Moncrieff’s The Lear of Private Life (1820), and Webster’s Paul Clifford (1832). As in any process of adaptation, the stage and print versions of each of these narratives differ in significant ways. But the interplay between the two forms was both widespread and fully baked into the generic expectations of the novel; the profusion of adaptation, with or without an author’s consent, makes clear that melodramatic elements in the novel were not merely incidental borrowings. In fact, melodramatic adaptation played a key role in the success of some of the period’s most celebrated novels. Dickens’s Oliver Twist, for instance, was dramatized even before its serialized publication was complete! And the significant rate of illiteracy among melodrama’s audiences meant that for novelists like Dickens or Walter Scott, the melodramatic stage could often serve as the only point of contact with a large swath of the public. As critic Emily Allen aptly writes: “melodrama was not only the backbone of Victorian theatre by midcentury, but also of the novel.”

 

This question of audience helps explain why melodrama has been separated out of our understanding of the novelistic tradition. Melodrama proper was always “low” culture, associated with its economically lower-class and often illiterate audiences in a society that tended to associate the theatre with lax morality. Nationalistic sneers at the French origins of melodrama played a role as well, as did the Victorian sense that true art should be permanent and eternal, in contrast to the spectacular but transient visual effects of the melodramatic stage. And like so many “low” forms throughout history, melodrama’s transformation of “higher” forms was actively denied even while it took place. Victorian critics, particularly those of a conservative bent, would often actively deny melodramatic tendencies in novelists whom they chose to praise. In the London Quarterly Review’s 1864 eulogy “Thackeray and Modern Fiction,” for example, the anonymous reviewer writes that “If we compare the works of Thackeray or Dickens with those which at present win the favour of novel-readers, we cannot fail to be struck by the very marked degeneracy.” The latter, the reviewer argues, tend towards the sensational and immoral, and should be approached with a “sentiment of horror”; the former, on the other hand, are marked by their “good morals and correct taste.” This is revisionary literary history, and one of its revisions (I think we can even say the point of its revisions) is to eradicate melodrama from the historical narrative of great Victorian novels. The reviewer praises Thackeray’s “efforts to counteract the morbid tendencies of such books as Bulwer’s Eugene Aram and Ainsworth’s Jack Sheppard,” ignoring Thackeray’s classification of Oliver Twist alongside those prominent Newgate melodramas. The melodramatic quality of Thackeray’s own fiction (not to mention the highly questionable “morality” of novels like Vanity Fair and Barry Lyndon), let alone the proactively melodramatic Dickens, is downplayed or denied outright. And although the review offers qualified praise of Henry Fielding as a literary ancestor of Thackeray, it ignores their melodramatic relative Walter Scott. The review, then, is not just a document of midcentury mainstream anti-theatricality, but also a document that provides real insight into how critics worked to solidify an antitheatrical novelistic canon.

image

Photographic print of Act 3, Scene 6 from The Whip, Drury Lane Theatre, 1909
Gabrielle Enthoven Collection, Museum number: S.211-2016
© Victoria and Albert Museum

Yet even after these very Victorian reasons have fallen aside, the wall of separation between novels and melodrama has been maintained. Why? In closing, I’ll speculate about a few possible reasons. One is that Victorian critics’ division became a self-fulfilling prophecy in the history of the novel, bifurcating the form into melodramatic “low” and self-consciously anti-melodramatic “high” genres. Another is that applying historical revisionism to the novel in this way only mirrored and reinforced a persistent feature of melodrama’s own theatrical criticism, which has consistently used “melodrama” derogatorily, differentiating the melodramas of which it approved from “the old melodrama”—a dynamic that took root even before any melodrama was legitimately “old.” A third factor is surely the rise of so-called dramatic realism, and the ensuing denial of melodrama’s role in the theatrical tradition. And a final reason, I think, is that we may still wish to relegate melodrama to the stage (or the television serial) because we are not really comfortable with the roles that it plays in our own world: in our culture, in our politics, and even in our visions for our own lives. When we recognize the presence of melodrama in the “great tradition” of novels, we will be better able to understand those texts. And letting ourselves find melodrama there may also help us find it in the many other places where it hides in plain sight.

Jacob Romanow is a Ph.D. student in English at Rutgers University. His research focuses on the novel and narratology in Victorian literature, with a particular interest in questions of influence, genre, and privacy.