
A Pandemic of Bloodflower’s Melancholia: Musings on Personalized Diseases

By Editor Spencer J. Weinreich

Peter Bloodflower? (actually Samuel Palmer, Self Portrait [1825])
I hasten to assure the reader that Bloodflower’s Melancholia is not contagious. It is not fatal. It is not, in fact, real. It is the creation of British novelist Tamar Yellin, her contribution to The Thackery T. Lambshead Pocket Guide to Eccentric & Discredited Diseases, a brilliant and madcap medical fantasia featuring pathologies dreamed up by the likes of Neil Gaiman, Michael Moorcock, and Alan Moore. Yellin’s entry explains that “The first and, in the opinion of some authorities, the only true case of Bloodflower’s Melancholia appeared in Worcestershire, England, in the summer of 1813” (6). Eighteen-year-old Peter Bloodflower was stricken by depression, combined with an extreme hunger for ink and paper. The malady abated in time and young Bloodflower survived, becoming a friend and occasional muse to Shelley and Keats. Yellin then reviews the debate about the condition among the fictitious experts who populate the Guide: some claim that the Melancholia is hereditary and has plagued all successive generations of the Bloodflower line.

There are, however, those who dispute the existence of Bloodflower’s Melancholia in its hereditary form. Randolph Johnson is unequivocal on the subject. ‘There is no such thing as Bloodflower’s Melancholia,’ he writes in Confessions of a Disease Fiend. ‘All cases subsequent to the original are in dispute, and even where records are complete, there is no conclusive proof of heredity. If anything we have here a case of inherited suggestibility. In my view, these cannot be regarded as cases of Bloodflower’s Melancholia, but more properly as Bloodflower’s Melancholia by Proxy.’

If Johnson’s conclusions are correct, we must regard Peter Bloodflower as the sole true sufferer from this distressing condition, a lonely status that possesses its own melancholy aptness. (7)

One is reminded of the grim joke, “The doctor says to the patient, ‘Well, the good news is, we’re going to name a disease after you.’”

Master Bloodflower is not alone in being alone. The rarest disease known to medical science is ribose-5-phosphate isomerase deficiency, of which only one sufferer has ever been identified. Not much commoner is Fields’ Disease, a mysterious neuromuscular disease with only two observed cases, the Welsh twins Catherine and Kirstie Fields.

Less literally, Bloodflower’s Melancholia, RPI-deficiency, and Fields’ Disease find a curious conceptual parallel in contemporary medical science—or at least the marketing of contemporary medical science: personalized medicine and, increasingly, personalized diseases. Witness a recent commercial for a cancer center, in which the viewer is told, “we give you state-of-the-art treatment that’s very specific to your cancer,” and that “the radiation dose you receive is your dose, sculpted to the shape of your cancer.”

Put the phrase “treatment as unique as you are” into a search engine, and a host of providers and products appear, from rehab facilities to procedures for Benign Prostatic Hyperplasia, from fertility centers in Nevada to orthodontist practices in Florida.

The appeal of such advertisements is not difficult to understand. Capitalism thrives on the (mass-)production of uniqueness. The commodity becomes the means of fashioning a modern “self,” what the poet Kate Tempest describes as “The joy of being who we are / by virtue of the clothes we buy” (94). Think, too, of the “curated”—as though carefully and personally selected just for you—content online advertisers supply. It goes without saying that we want this in healthcare, to feel that the doctor is tailoring their questions, procedures, and prescriptions to our individual case.

And yet, though we can and should see the market mechanisms at work beneath “treatment as unique as you are,” the line encapsulates a very real medical-scientific phenomenon. In 1998, for example, Genentech and UCLA released Trastuzumab, an antibody extremely effective against (only) those breast cancers linked to the overproduction of the protein HER2 (roughly one-fifth of all cases). More ambitiously, biologist Ross Cagan proposes to use a massive population of genetically engineered fruit flies, keyed to the makeup of a patient’s tumor, to identify potential cocktails among thousands of drugs.

Personalized medicine does not depend on the wonders of twenty-first-century technology: it is as old as medicine itself. Ancient Greek physiology posited that the body was made up of four humors—blood, phlegm, yellow bile, and black bile—and that each person combined the four in a unique proportion. In consequence, treatment, be it medicine, diet, exercise, physical therapies, or surgery, had to be calibrated to the patient’s particular humoral makeup. Here, again, personalization is not an illusion: professionals were customizing care, using the best medical knowledge available.

Medicine is a human activity, and thus subject to the variability of human conditions and interactions. This may be uncontroversial: even when the diagnoses are identical, a doctor justifiably handles a forty-year-old patient differently from a ninety-year-old one. Even a mild infection may be lethal to an immunocompromised body. But there is also the long and shameful history of disparities in medical treatment among races, ethnicities, genders, and sexual identities—to say nothing of the “health gaps” between rich and poor societies and rich and poor patients. For years, AIDS was a “gay disease” or confined to communities of color, while cancer only slowly “crossed the color line” in the twentieth century, as a stubborn association with whiteness fell away. Women and minorities are chronically under-medicated for pain. If medication is inaccessible or unaffordable, a “curable” condition—from tuberculosis (nearly two million deaths per year) to bubonic plague (roughly 120 deaths per year)—is anything but.

Let us think with Bloodflower’s Melancholia, and with RPI-deficiency and Fields’ Disease. Or, let us take seriously the less-outré individualities that constitute modern medicine. What does that mean for our definition of disease? Are there (at least) as many pneumonias as there have ever been patients with pneumonia? The question need not detain medical practitioners too long—I suspect they have more pressing concerns. But for the historian, the literary scholar, and indeed the ordinary denizen of a world full to bursting with microbes, bodies, and symptoms, there is something to be gained in probing what we talk about when we talk about a “disease.”

Colonies of M. tuberculosis

The question may be put spatially: where is disease? Properly schooled in the germ theory of disease, we instinctively look to the relevant pathogens—the bacterium Mycobacterium tuberculosis as the avatar of tuberculosis, the human immunodeficiency virus as that of AIDS. These microscopic agents often become actors in historical narratives. To take one eloquent example, Diarmaid MacCulloch writes, “It is still not certain whether the arrival of syphilis represented a sudden wanderlust in an ancient European spirochete […]” (95). The price of evoking this historical power is anachronism, given that sixteenth-century medicine knew nothing of spirochetes. The physician may conclude from the mummified remains of Ramses II that it was M. tuberculosis (discovered in 1882), and thus tuberculosis (clinically described in 1819), that killed the pharaoh, but it is difficult to know what to do with that statement. Bruno Latour calls it “an anachronism of the same caliber as if we had diagnosed his death as having been caused by a Marxist upheaval, or a machine gun, or a Wall Street crash” (248).

The other intuitive place to look for disease is the body of the patient. We see chicken pox in the red blisters that form on the skin; we feel the flu in fevers, aches, coughs, shakes. But here, too, analytical dangers lurk: many conditions are asymptomatic for long periods of time (cholera, HIV/AIDS), while others’ most prominent symptoms are only incidental to their primary effects (the characteristic skin tone of Yellow Fever is the result of the virus damaging the liver). Conversely, Hansen’s Disease (leprosy) can present in a “tuberculoid” form that does not cause the stereotypical dramatic transformations. Ultimately, diseases are defined through a constellation of possible symptoms, any number of which may or may not be present in a given case. As Susan Sontag writes, “no one has everything that AIDS could be” (106); in a more whimsical vein, no two people with chicken pox will have the same pattern of blisters. And so we return to the individuality of disease. So is disease no more than a cultural construction, a convenient umbrella-term for the countless micro-conditions that show sufficient similarities to warrant amalgamation? Possibly. But the fact that no patient has “everything that AIDS could be” does not vitiate the importance of describing these possibilities, nor their value in defining “AIDS.”

This is not to deny medical realities: DNA analysis demonstrates, for example, that the Mycobacterium leprae preserved in a medieval skeleton found in the Orkney Islands is genetically identical to modern specimens of the pathogen (Taylor et al.). But these mental constructs are not so far from how most of us deal with most diseases, most of the time. Like “plague,” at once a biological phenomenon and a cultural product (a rhetoric, a trope, a perception), so for most of us Ebola or SARS remain caricatures of diseases, terrifying specters whose clinical realities are hazy and remote. More quotidian conditions—influenza, chicken pox, athlete’s foot—present as individual cases, whether our own or those around us, analogized to the generic condition by memory and common knowledge (and, nowadays, internet searches).

Perhaps what Bloodflower’s Melancholia—or, if you prefer, Bloodflower’s Melancholia by Proxy—offers is an uneasy middle ground between the scientific, the cultural, and the conceptual. Between the nebulous idea of “plague,” the social problem of a plague, and the biological entity Yersinia pestis stands the individual person and the individual body, possibly infected with the pathogen, possibly to be identified with other sick bodies around her, but, first and last, a unique entity.

Newark Bay, South Ronaldsay

Consider the aforementioned skeleton of a teenage male, found when erosion revealed a Norse Christian cemetery at Newark Bay on South Ronaldsay (one of the Orkney Islands). Radiocarbon dating can place the burial somewhere between 1218 and 1370, and DNA analysis demonstrates the presence of M. leprae. The team that found this genetic signature was primarily concerned with the scientific techniques used, the hypothetical evolution of the bacterium over time, and the burial practices associated with leprosy.

But this particular body produces its particular knowledge. To judge from the remains, “the disease is of long standing and must have been contracted in early childhood” (Taylor et al., 1136). The skeleton, especially the skull, indicates the damage done in a medical sense (“The bone has been destroyed…”), but also in the changes wrought to his appearance (“the profile has been greatly reduced”). A sizable lesion has penetrated through the hard palate all the way into the nasal cavity, possibly affecting breathing, speaking, and eating. This would also have been an omnipresent reminder of his illness, as would the several teeth he had probably lost (1135).

What if we went further? How might the relatively temperate, wet climate of the Orkneys have impacted this young man’s condition? What treatments were available for leprosy in the remote maritime communities of the medieval North Sea—and how would they interact with the symptoms caused by M. leprae? Social and cultural history could offer a sense of how these communities viewed leprosy; clinical understandings of Hansen’s Disease some idea of his physical sensations (pain—of what kind and duration? numbness? fatigue?). A forensic artist, with the assistance of contemporary symptomatology, might even conjure a semblance of the face and body our subject presented to the world. Of course, much of this would be conjecture, speculation, imagination—risks, in other words, but risks perhaps worth taking to restore a few tentative glimpses of the unique world of this young man, who, no less than Peter Bloodflower, was sick with an illness all his own.


What has Athens to do with London? Plague.

By Editor Spencer J. Weinreich

Map of London by Wenceslas Hollar, c.1665

It is seldom recalled that there were several “Great Plagues of London.” In scholarship and popular parlance alike, only the devastating epidemic of bubonic plague that struck the city in 1665 and lasted the better part of two years holds that title, which it first received in early summer 1665. To be sure, the justice of the claim is incontrovertible: this was England’s deadliest visitation since the Black Death, carrying off some 70,000 Londoners and another 100,000 souls across the country. But note the timing of that first conferral. Plague deaths would not peak in the capital until September 1665, the disease would not take up sustained residence in the provinces until the new year, and the fire was more than a year in the future. Rather than any special prescience among the pamphleteers, the nomenclature reflects the habit of calling every major outbreak in the capital “the Great Plague of London”—until the next one came along (Moote and Moote, 6, 10–11, 198). London experienced a major epidemic roughly every decade or two: recent visitations had included 1592, 1603, 1625, and 1636. That 1665 retained the title is due in no small part to the fact that no successor arose; this was to be England’s last outbreak of bubonic plague.

Serial “Great Plagues of London” remind us that epidemics, like all events, stand within the ebb and flow of time, and draw significance from what came before and what follows after. Of course, early modern Londoners could not know that the plague would never return—but they assuredly knew something about its past.

Early modern Europe knew bubonic plague through long and hard experience. Ann G. Carmichael has brilliantly illustrated how Italy’s communal memories of past epidemics shaped perceptions of and responses to subsequent visitations. Seventeenth-century Londoners possessed a similar store of memories, but their plague-time writings mobilize a range of pasts and historiographical registers that includes much more than previous epidemics or the history of their own community: from classical antiquity to the English Civil War, from astrological records to demographic trends. Such richness accords with the findings of the formidable scholarly phalanx investigating “the uses of history in early modern England” (to borrow the title of one edited volume), which informs us that sixteenth- and seventeenth-century English people had a deep and sophisticated sense of the past, instrumental in their negotiations of the present.

Let us consider a single, iconic strand in this tapestry: invocations of the Plague of Athens (430–26 B.C.E.). Jacqueline Duffin once suggested that writing about epidemic disease inevitably falls prey to “Thucydides syndrome” (qtd. in Carmichael 150n41). In the centuries since the composition of the History of the Peloponnesian War, Thucydides’s hauntingly vivid account of the plague (II.47–54) has influenced writers from Lucretius to Albert Camus. Long lost to Latin Christendom, Thucydides was slowly reintegrated into Western European intellectual history beginning in the fifteenth century. The first (mediocre) English edition appeared in 1550, superseded in 1628 with a text by none other than Thomas Hobbes. For more than a hundred years, then, Anglophone readers had access to Thucydides, while Greek and Latin versions enjoyed a respectable, if not extraordinary, popularity among the more learned.

Michiel Sweerts, Plague in an Ancient City (1652), believed to depict the Plague of Athens

In 1659, the churchman and historian Thomas Sprat, booster of the Royal Society and future bishop of Rochester, published The Plague of Athens, a Pindaric versification of the accounts found in Thucydides and Lucretius. Sprat’s Plague has been convincingly interpreted as a commentary on England’s recent political history—viz., the Civil War and the Interregnum (King and Brown, 463). But six years on, the poem found fresh relevance as England faced its own “too ravenous plague” (Sprat, 21). The savvy bookseller Henry Brome, who had arranged the first printing, brought out two further editions in 1665 and 1667. Because the poem was prefaced by the relevant passages of Hobbes’s translation, an English text of Thucydides was in print throughout the epidemic. It is of course hardly surprising that at moments of epidemic crisis, the locus classicus for plague should sell well: plague-time interest in Thucydides is well-attested before and after 1665, in England and elsewhere in Europe.

But what does the Plague of Athens do for authors and readers in seventeenth-century London? As the classical archetype of pestilence, it functions as a touchstone for the ferocity of epidemic disease and a yardstick by which the Great Plague could be measured. The physician John Twysden declared, “All Ages have produced as great mortality and as great rebellion in Diseases as this, and Complications with other Diseases as dangerous. What Plague was ever more spreading or dangerous than that writ of by Thucidides, brought out of Attica into Peloponnesus?” (111–12).

One flattering rhymester welcomed Charles II’s relocation to Oxford with the confidence that “while Your Majesty, (Great Sir) shines here, / None shall a second Plague of Athens fear” (4). In a less reassuring vein, the societal breakdown depicted by Thucydides warned England what might ensue from its own plague.

Perhaps with that prospect in mind, other authors drafted Thucydides as their ally in catalyzing moral reform. The poet William Austin (who was in the habit of ruining his verses by overstuffing them with classical references) seized upon the Athenians’ passionate devotions in the face of the disaster (History, II.47). “Athenians, as Thucidides reports, / Made for their Dieties new sacred courts. / […] Why then wo’nt we, to whom the Heavens reveal / Their gracious, true light, realize our zeal?” (86). In a sermon entitled The Plague of the Heart, John Edwards enlisted Thucydides in the service of his conceit of a spiritual plague that was even more fearsome than the bubonic variety:

The infection seizes also on our memories; as Thucydides tells us of some persons who were infected in that great plague at Athens, that by reason of that sad distemper they forgot themselves, their friends and all their concernments [History, II.49]. Most certain it is that by the Spirituall infection men forget God and their duty. (8)

Not dissimilarly, the tailor-cum-preacher Richard Kingston paralleled the plague with sin. He characterizes both evils as “diffusive” (23–24), citing Thucydides to the effect that the plague began in Ethiopia and moved thence to Egypt and Greece (II.48).

On the supposition that, medically speaking, the Plague of Athens was the same disease they faced, early modern writers treated it as a practical precedent for prophylaxis, treatment, and public health measures. Thucydides was one of several classical authorities cited by the Italian theologian Filiberto Marchini to justify open-field burials, based on their testimony that wild animals shunned plague corpses (Calvi, 106). Rumors of plague-spreading also stoked interest in the History, because Thucydides records that the citizens of Piraeus believed the epidemic arose from the poisoning of wells (II.48; Carmichael, 149–50).

Peter Paul Rubens, Hippocrates (1638)

It should be noted that Thucydides was not the only source for early modern knowledge about the Plague of Athens. One William Kemp, extolling the preventative virtues of moderation, tells his readers that it was temperance that preserved Socrates during the disaster (58–59). This anecdote comes not from Thucydides, but Claudius Aelianus, who relates of the philosopher’s constitution and moderate habits, “[t]he Athenians suffered an epidemic; some died, others were close to death, while Socrates alone was not ill at all” (Varia historia, XIII.27, trans. N. G. Wilson). (Interestingly, 1665 saw the publication of a new translation of the Varia historia.) Elsewhere, Kemp relates how Hippocrates organized bonfires to free Athens of the disease (43), a story that originates with the pseudo-Galenic On Theriac to Piso, but probably reached England via Latin intermediaries and/or William Bullein’s A Dialogue Against the Fever Pestilence (1564). Hippocrates’s name, and supposed victory over the Plague of Athens, was used to advertise cures and preventatives.

 

With the exception of Sprat—whose poem was written in 1659—these are all fleeting references, but that is in some sense the point. The Plague of Athens, Thucydides, and his History had entered the English imaginary, a shared vocabulary for thinking about epidemic disease. To quote Raymond A. Anselment, Sprat’s poem (and other invocations of the Plague of Athens) “offered through the imitation of the past an idea of the present suffering” (19). In the desperate days of 1665–66, the mere mention of Thucydides’s name, regardless of the subject at hand, would have been enough to conjure the specter of the Athenian plague.

Whether or not one built a public health plan around “Hippocrates’s” example, or looked to the History of the Peloponnesian War as a guide to disease etiology, the Plague of Athens exerted an emotional and intellectual hold over early modern English writers and readers. In part, this was merely a sign of the times: sixteenth- and seventeenth-century Europeans were profoundly invested in the past as a mirror for and guide to the present and the future. In England, the Great Plague came at the height of a “rage for historical parallels” (Kewes, 25)—and no corner of history offered more distinguished parallels than classical antiquity.

And let us not undersell the affective power of such parallels. The value of recalling past plagues was the simple fact of their being past. Awful as the Plague of Athens had been, it had eventually passed, and Athens still stood. Looking backwards was a relief from a present dominated by the epidemic, and from the plague’s warped temporality: the interruption of civic and liturgical rhythms and the ordinary cycle of life and death. Where “an epidemic denies time itself” (Calvi, 129–30), history restores it, and offers something like orientation—even, dare we say, hope.

 


Aristotle in the Sex Shop and Activism in the Academy: Notes from the Joint Atlantic Seminar in the History of Medicine

By Editor Spencer J. Weinreich

Four enormous, dead doctors were present at the opening of the 2017 Joint Atlantic Seminar in the History of Medicine. Convened in Johns Hopkins University’s Welch Medical Library, the Seminar met in a room dominated by a canvas of mammoth proportions, a group portrait by John Singer Sargent of the four founders of Johns Hopkins Hospital. Dr. William Welch, known in his lifetime as “the dean of American medicine” (and the library’s namesake). Dr. William Halsted, “the father of modern surgery.” Dr. Sir William Osler, “the father of modern medicine.” And Dr. Howard Kelly, who established the modern field of gynecology.

John Singer Sargent, Professors Welch, Halsted, Osler, and Kelly (1905)

Beneath the gazes of this august quartet, graduate students and faculty from across the United States and the United Kingdom gathered for the fifteenth iteration of the Seminar. This year, the program’s theme was “Truth, Power, and Objectivity,” explored in thirteen papers ranging from medical testimony before the Goan Inquisition to the mental impact of First World War bombing raids, from Booker T. Washington’s National Negro Health Week to the emergence of Chinese traditional medicine. It would not do justice to the papers or their authors to cover them all in a post; instead I shall concentrate on the two opening sessions: the keynote lecture by Mary E. Fissell and a faculty panel with Nathaniel Comfort, Gianna Pomata, and Graham Mooney (all of Johns Hopkins University).

I confess to some surprise at the title of Fissell’s talk, “Aristotle’s Masterpiece and the Re-Making of Kinship, 1820–1860.” Fissell is known as an early modernist, her major publications exploring gender, reproduction, and medicine in seventeenth- and eighteenth-century England. Her current project, however, is a cultural history of Aristotle’s Masterpiece, a book on sexuality and childbirth first published in 1684 and still being sold in London sex shops in the 1930s. The Masterpiece was distinguished by its discussion of the sexual act itself, and its consideration (and copious illustrations) of so-called “monstrous births.” It was, in Fissell’s words, a “howling success,” seeing an average of one edition a year for 250 years, on both sides of the Atlantic.

It should be explained that there is very little Aristotle in Aristotle’s Masterpiece. In early modern Europe, the Greek philosopher was regarded as the classical authority on childbirth and sex, and so offered a suitably distinguished peg on which to hang the text. This allowed for a neat trick of bibliography: when the Masterpiece was bound together with other (spurious) works, like Aristotle’s Problems, the spine might be stamped with the innocuous (indeed impressive) title “Aristotle’s Works.”

El Greco, John the Baptist (c.1600)

At the heart of Aristotle’s Masterpiece, Fissell argued, was genealogy: how reproduction—“generation,” in early modern terms—occurred and how the traits of parents related to those of their offspring. This genealogy is unstable, the transmission of traits open to influences of all kinds, notably the “maternal imagination.” The birth of a baby covered in hair, for example, could be explained by the pregnant mother’s devotion to an image of John the Baptist clad in skins. Fissell brilliantly drew out the subversive possibilities of the Masterpiece, as when it “advised” women that adultery might be hidden by imagining one’s husband during the sex act, thus ensuring that the child would look like him. Central though family resemblance is to reproduction, it is “a vexed sign,” with “several jokers in every deck,” because women’s bodies are mysterious and have the power to disrupt lineage.

Fissell principally considered the Masterpiece’s fortunes in the mid-nineteenth-century Anglophone world, as the unstable generation it depicted clashed with contemporary assumptions about heredity. Here she framed her efforts as a “footnote” to Charles Rosenberg’s seminal essay, “The Bitter Fruit: Heredity, Disease, and Social Thought in Nineteenth-Century America,” which traced how discourses of heredity pervaded all branches of science and medicine in this period. George Combe’s Constitution of Man (1828), an exposition of the supposedly rigid natural laws governing heredity (with a tilt toward self-discipline and self-improvement), was the fourth-bestselling book of the period (after the Bible, Pilgrim’s Progress, and Robinson Crusoe). Other hereditarian works sketched out the gendered roles of reproduction—what children inherited from their mothers versus from their fathers—and the possibilities for human action (proper parenting, self-control) for modulating genealogy. Wildly popular manuals for courtship and marriage advised young people on the formation of proper unions and the production of healthy children, in terms shot through with racial and class prejudices (though not yet solidified into eugenics as we understand that term).

The fluidity of generation depicted in Aristotle’s Masterpiece became conspicuous against the background of this growing obsession with a law-like heredity. Take the birth of a black child to white parents. The Masterpiece explains that the mother was looking at a painting of a black man at the moment of conception; hereditarian thought identified a black ancestor some five generations back, the telltale trait slowly but inevitably revealing itself. Thus, although the text of the Masterpiece did not change much over its long career, its profile changed dramatically, because of the shifting bibliographic contexts in which it moved.

In the mid-nineteenth century, the contrasting worldviews of the Masterpiece and the marriage manuals spoke to the forms of familial life prevalent at different social strata. The more chaotic picture of the Masterpiece reflected the daily life of the working class, characterized by “contingent formations,” children born out of wedlock, wife sales, abandonment, and other kinds of “marital nonconformity.” The marriage manuals addressed themselves to upper-middle-class families, but did so in a distinctly aspirational mode. They warned, for example, against marrying cousins, precisely at a moment when well-to-do families were “kinship hot,” in David Warren Sabean’s words, favoring serial intermarriage among a few allied clans. This was a period, Fissell explained, in which “who and what counted as family was much more complex” and “contested.” The ambiguity—and power—of this issue manifested in almost every sphere, from the shifting guidelines for census-takers on how a “family” was defined, to novels centered on complex kinship networks, such as John Lang’s Will He Marry Her? (1858), to the flood of polemical literature surrounding a proposed law forbidding a man to marry his deceased wife’s sister—a debate involving many more people than could possibly have been affected by the legislation.

After a rich question-and-answer session, we shifted to the faculty panel, with Professors Comfort, Pomata, and Mooney asked to reflect on the theme of “Truth, Power, and Objectivity.” Comfort, a scholar of modern biology, began by discussing his work with oral histories—“creating a primary source as you go, and in most branches of history that’s considered cheating.” Here perfect objectivity is not necessarily helpful: “when you make yourself emotionally available to your subjects […] you can actually gain their trust in a way that you can’t otherwise.” Equally, Comfort encouraged the embrace of sources’ unreliability, suggesting that unreliability might itself be a source—the more unreliable a narrative is, the more interesting and the more revealing it becomes. He closed with the observation that different audiences require different approaches to history and to history-writing—it is not simply a question of tone or language, but of what kind of bond the scholar seeks to form.

Professor Pomata, a scholar of early modern medicine, insisted that moments of personal contact between scholar and subject were not the exclusive preserve of the modern historian: the same connections are possible, if in a more mediated fashion, for those working on earlier periods. In this interaction, respect is of the utmost importance. Pomata quoted a line from W. B. Yeats’s “He wishes for the Cloths of Heaven”:

I have spread my dreams under your feet;

Tread softly because you tread on my dreams.

As a historian of public health—which he characterized as an activist discipline—Mooney declared, “I’m not really interested in objectivity. […] I’m angry about what I see.” He spoke compellingly about the vital importance of that emotion, properly channeled toward productive ends. The historian possesses power: not simply as the person setting the terms of inquiry, but as a member of privileged institutions. In consequence, he called on scholars to undermine their own power, to make themselves uncomfortable.

The panel was intended to be open-ended and interactive, so these brief remarks quickly segued into questions from the floor. Asked about the relationship between scholarship and activism, Mooney insisted that passion, even anger, are essential, because they drive the scholar into the places where activism is needed—and cautioned that it is ultimately impossible to be the dispassionate observer we (think we) wish to be. With beautiful understatement, Pomata explained that she went to college in 1968, when “a lot was happening in the world.” Consequently, she conceived of scholarship as having to have some political meaning. Working on women’s history in the early 1970s, “just to do the scholarship was an activist task.” Privileging “honesty” over “objectivity,” she insisted that “scholarship—honest scholarship—and activism go together.” Comfort echoed much of this favorable account of activism, but noted that some venues are more appropriate for activism than others, and that there are different ways of being an activist.

Dealing with the horrific—eugenics was the example offered—requires, Mooney argued, both the rigor of a critical method and sensitive emotional work. Further, all three panelists emphasized crafting, and speaking in, one’s own voice, eschewing the temptation to imitate more prominent scholars and embracing the first person (and the subjectivity it marks). Voice, Comfort noted, isn’t natural, but something honed, and both he and Pomata recommended literature as an essential tool in this regard.

Throughout, the three panelists concurred in urging collaborative, interdisciplinary work, founded upon respect for other knowledges and humility—which, Comfort insightfully observed, is born of confidence in one’s own abilities. Asking the right questions is crucial, the key to unlocking the stories of the oppressed and marginalized within sources created by those in power. Visual sources have the potential to express things inexpressible in words—Comfort cited a photograph that wonderfully captured the shy, retiring nature of Dr. Barton Childs—but they must be used as evidence, not as mere illustrations. The question about visual sources was the last of the evening, and Professor Pomata had the last word. Her final comment offers the perfect summation of the creativity, dedication, and intellectual ferment on display in Baltimore that weekend: “we are artists, don’t forget that.”

Dr. Barton Childs

What was life like as a female singer 3400 years ago?

By guest contributor Lynn-Salammbô Zimmermann

A singer and a musician on the royal standard of Ur.

In the mid-14th century BCE, a group of young female singers contracted an unknown disease. A corpus of letters from Nippur, a religious and administrative center in the Middle Babylonian kingdom (modern-day Iraq), tells us about the medical condition of these young women, who trained as singers while sharing the same quarters (cf. BE 17, 31, 32, 33, 47, N 969, and PBS 1/2, 71, 72, 82).

The letters about these girls’ medical conditions were exchanged between the physician Šumu-libši (and his colleague Bēlu-muballiṭ) and the governor of Nippur, Enlil-kidinnī. Šumu-libši provides the governor with meticulous reports of the girls’ symptoms, as well as his attempts to cure them. The symptoms include an inflammation of the chest, fever, perspiration, and coughing. The girls are treated with poultices on the chest. Thus it is likely that Šumu-libši was an asû, a physician, and not an exorcist. An asû would have concentrated on the natural causes of symptoms, applying drugs and using the scalpel when dealing with the physical side of the disease, while an exorcist would have also spoken incantations (Geller, 2001: 27-33, 43-48, 56-61). So far, research has unfortunately focused only on the sender and recipient of these letters, not on the female patients, owing to the scarcity of information about them and their passive role in the narrative. This article aims to shift that perspective.

Unfortunately, we really do not know much more about these girls. We do not even know their names. They are all called “the daughter of NN” with the exception of a woman named Eṭirtu, who may have been in a higher position, such as that of a supervisor or a teacher.

The girls were most likely trained to become singers in a palace or a temple complex (Sallaberger, Vulliet, 2005: 634). Every report by Šumu-libši begins with the greeting: “Your servant Šumu-libši: I may die as my lord’s substitute. The male and female musicians, Eṭirtu and the house of my lord are well.” The governor, who is inquiring after the girls’ health, was not only responsible for the provincial administration of Nippur, but also for its temples, as he also held the position of the highest priest in the city (Petschow, 1983: 143-155; Sassmannshausen, 2001: 16-21). Additionally, he owned large estates, so we cannot exclude the possibility that he would employ singers for his private entertainment there. Since the kingdom had a patrimonial structure, and the concept of “privacy” separate from an official’s public role did not exist until later, “the house of my lord” could apply not only to the various official households under Enlil-kidinnī’s command, but also to his own estates.

Musicians and singers in Girsu. Louvre Museum, Paris. Photo by Lynn-Salammbô Zimmermann.

In general, musicians, both male and female, had a high status at the royal courts of the Old Babylonian period. This is consistent with the fact that the governor, who held the most important office of the Middle Babylonian kingdom, made inquiries about the young singers’ health. Despite the fact that the girls are rather passive in the letters, they can apparently give orders to the healing specialists, as is reported in the letter BE 17, 47, ll. 4-5: “they bandaged her with a poultice as (she) requested” (Sibbing Plantholt, 2014: 180).

During the Middle Babylonian period, Elamite and Subarean singers can be found at the royal court in Dūr-Kurigalzu (Ambos, 2008: 502). Foreign singers were exchanged as precious diplomatic gifts. Young female musicians often ended up in the royal harems (Ziegler, 2006: 247, 349). Nonetheless, in Mesopotamia—and especially in the “international” Middle Babylonian period—the ethnicity of a person cannot automatically be deduced from the language of their name. That being said, the majority of the names of the fathers of “our” girls appear to be Babylonian, one father bearing a supposedly Hurrian name (Hölscher, 1996: 85).

We can find out more about Šumu-libši’s patients by comparing their situation with that of other female singers in Mesopotamia. This unfortunate case of an epidemic infecting apprentice musicians is reminiscent of another disease among female singers at a royal court, some 400 years earlier (Ziegler, 1999: 28-29). The archive of this royal court, that of king Zimri-Lim (1775-1762 BC) in the city state of Mari (modern-day Syria), documents the presence of a large number of female musicians (Ziegler, 1999: 69-82; Ziegler, 2006: 245). Many of the female musicians at court were actually concubines. We know this because some of them received oil after successfully giving birth, and since they were “unmarried”, we can conclude that they became pregnant by the king as members of his harem. One of Zimri-Lim’s favourite wives actually supervised a number of female musicians, who must have been very young, since according to the oil accounts they only received small allotments. We can see in the accounts of oil for their toilette and for the lighting of the palace quarters that there existed a strict hierarchy among these women (Ziegler, 1999: 22-24, 29-30; Ziegler, 2006: 346). According to their rank, the women received larger or smaller rations. The female singers were among the lower classes of the harem, being supervised by a governess (Lafont, 2001: 135-136). In the Middle Babylonian letters, Eṭirtu might have been such a governess.

A model of the royal palace of Mari. The women’s quarters are in the lower right corner. Louvre Museum, Paris. Photo by Lynn-Salammbô Zimmermann.

Contrary to our imagination of an oriental harem, it is attested that these women could move beyond the scope of their quarters (Lafont, 2001: 136; Ziegler, 1999: 15-20). In the later Middle Assyrian harem edicts, however, which were issued in Assyria during the Middle Babylonian period, the freedom of the women at court was much more limited, rendering them completely dependent on the king and palace officials (Roth, 1997: 196-209). If we assume that the Middle Babylonian patients were singers at court, then—according to the contemporary Middle Assyrian harem edicts—they were kept under strict surveillance by palace officials.

In both cases we see that the apprentices apparently shared the same quarters and had close daily contact with one another. This may not only have led to the spread of a contagious disease, but also to conflicts: quarrels between women at court were addressed in the Assyrian edicts (Roth, 1997: 201-202). While “our” Middle Babylonian singers’ lives were valuable enough to their employer to receive medical care, the king of Mari ordered his queen in two letters to isolate sick women from the rest of the harem (Lafont, 2001: 138-139). In one of these letters (ARM X, 129), Zimri-Lim writes that a sick woman had infected other women in the palace. Therefore he orders his queen: “[G]ive strict orders that no one is to drink from the cup from which she drinks, or sit on the seat where she sits, or lie on the bed where she lies, so that she does not infect many women by her contact alone” (Lafont, 2001: 138). In the second letter (MARI III, 144), Zimri-Lim orders his queen to let the isolated woman die (illnesses were believed to be a divine punishment, cf. the arnu principle in Neumann, 2006: 36): “So let this woman die, she alone, and that will cause the illness to abate” (Lafont, 2001: 138-139).

Where were Zimri-Lim’s concubines from? Apparently the king had his pick among the women whom he had brought back as booty from campaigns to the north. In the Middle Babylonian letters, however, nothing indicates that “our” girls were booty—not even the fathers’ names. It is also possible that the girls’ families wanted them to become singers, because it was a prestigious position at court or in a temple.

Heads of votive figures of priestesses or ladies of the court at Mari. Louvre Museum. Photo by Lynn-Salammbô Zimmermann

How did the young women in Mari become singers? Since they were not only used for entertainment and/or the cult, but also functioned as concubines, physical attributes were the main criteria, rather than artistic or musical talents. Thus the king orders his queen to pick the prettiest ones (ARM X, 126): “Choose some thirty of them […] who are perfect and flawless, from their toenails to the hair of their head.” Only afterwards does the king want them to learn how to sing. Once the concubines were picked, they were also expected to keep their weight in line with the king’s orders: “Give [also] instructions concerning their food, so that their appearance may not be displeasing” (Lafont, 2001: 138). Such appearance-related pressure presumably applied to “our” girls as well. Even if they worked in temple premises at Nippur and not in a royal harem, the religious cult would have required an immaculate body due to purity regulations.

The Middle Babylonian (14th century BCE) letters themselves do not offer much information about “our” young female patients. This is consistent with the patriarchal nature of Mesopotamian society: the textual evidence is mostly written from the male perspective, discussing women in terms of their looks, their fertility, and their usefulness as a workforce (note, though, that women had some legal rights, e.g. appearing in court and acting as contracting partners, and, especially in the Middle Babylonian period, serving as single heads of their families; cf. Paulus, 2014: 240-245). Research, focusing on the available information, has consequently followed this perspective. However, drawing parallels to the conditions of female singers at court 400 years earlier offers us a plausible glimpse into the possible living conditions of “our” female patients.

Lynn-Salammbô Zimmermann is a D.Phil. candidate in Assyriology at the University of Oxford, writing her thesis about the Middle Babylonian/Kassite period administration. She completed her undergraduate and graduate studies in Egyptology, Assyriology and Religious Studies in Münster, Germany.


Prophetic Medicine in the Indian Yūnānī Tradition

by guest contributor Deborah Schlein

When Greek medical texts were transmitted and translated in the ʿAbbasid capital of Baghdad in the ninth and tenth centuries, they paved the way for original Arabic medical sources that built on Greek humoral theory (the four humors: blood, phlegm, yellow bile, and black bile; in Arabic: dam, balgham, ṣafrāʾ, and sawdāʾ). The most famous of these sources is Ibn Sīnā’s (d. 1037) Qānūn, Latinized to Avicenna’s Canon. The Qānūn is often cited as the foundation of what became known as Yūnānī Ṭibb, or Greek medicine, hearkening back to its use of Greek humoral theory as the basis of aetiology, diagnosis, and treatment. With the movement and transmission of texts such as the Qānūn, the study and practice of Yūnānī Ṭibb flourished and adapted to new surroundings.

While Yūnānī medicine has a long history in the Islamic world, popular medicine also drew enthusiastically on other traditions. Practices included the use of amulets, local knowledge of flora and their medicinal properties, prayer, and al-Ṭibb al-Nabawī, or Prophetic medicine. This last is characterized by the use of folk remedies, medical traditions cited in the Qur’an, and, most notably, the use of medical ḥadīth, or sayings of the Prophet Muḥammad, which were collected in book form.

Both al-Ṭibb al-Nabawī and Yūnānī Ṭibb had a large following in the Islamic world, and still do to this day. India is a perfect example of the staying power of these kinds of medicine. When Yūnānī arrived in South Asia, scholars and intellectuals fleeing the Mongol invasions of the thirteenth century brought with them medical knowledge based on Arabic sources, beginning a medical tradition which would adapt and thrive from the period of the Delhi Sultanate (1206-1526) into the modern day. Knowledge of al-Ṭibb al-Nabawī also accompanied these scholars to India. Today, Yūnānī colleges are supported by the Indian government, and medical practice in the region is a mixture of the traditions that flourished there, including Yūnānī, Ayurveda, al-Ṭibb al-Nabawī, and allopathy (often called Western medicine).

Yet, too often, the medical traditions are discussed separately, without mention of the ways in which they influenced one another, particularly in regard to Yūnānī‘s adoption of treatments from al-Ṭibb al-Nabawī. Even a cursory glance at the sources, however, can tell a reader how these medical traditions interacted and shaped each other over the centuries. A study of Yūnānī manuscripts and their reception gives a clearer picture of that mix of Yūnānī Ṭibb and al-Ṭibb al-Nabawī during such earlier periods as the Mughal empire, showing that the different bodies of knowledge in fact interacted.

One way to better understand the reception of these texts and the interactions of these medical traditions is to study the marginal notations in the premodern manuscripts. These notes are a window into the thoughts of the readers themselves: they refer to other medical sources, describe prescriptions the readers used and knew to be beneficial, and relate the realities of the medical traditions in practice. One single manuscript can have marginal notations with references to Galen, Ibn Sīnā, and the Prophet Muḥammad, all concerned, for example, with the best remedy for toothache. These notes, therefore, tell us a great deal about the usage and understanding of the text at hand.

The major medical encyclopedia of Najīb al-Dīn al-Samarqandī (d. 1222), al-Asbāb wa al-ʿAlāmāt (The Causes and the Symptoms), and its attendant commentaries follow Yūnānī medical theory. Copies of both the commentaries and the original work number in the hundreds in the Indian manuscript collections, not far behind Ibn Sīnā’s Qānūn and its commentaries. Al-Samarqandī’s sources come from medical greats such as al-Rāzī (d. 925), al-Majūsī (d. 994), and, of course, Ibn Sīnā, but unlike the five-volume medical compendium that is the Qānūn, al-Samarqandī’s al-Asbāb wa al-ʿAlāmāt is a handbook of medical diagnoses and treatments that was meant for personal use, to be referred to and utilized in practice. Other medical scholars, such as Nafīs b. ʿIwad al-Kirmānī (flourished 1437) and Muḥammad Akbar Arzānī (flourished 1700) took up the text and wrote major commentaries on it, in Arabic and Persian respectively. I now turn to an Indian manuscript of al-Kirmānī’s Sharḥ [commentary of] al-Asbāb wa al-ʿAlāmāt in an effort to shine light on the interactions of Yūnānī Ṭibb and al-Ṭibb al-Nabawī.

Al-Kirmānī dedicated this Sharḥ to his patron, the Timurid ruler Ulugh Beg, in whose royal court he was a physician. Copies of the Sharḥ can be found all over India, and are even more common in the region than al-Samarqandī’s original text, upon which the commentary is based. The Raza Library in Rampur, Uttar Pradesh holds six manuscripts of al-Kirmānī’s Sharḥ al-Asbāb wa al-ʿAlāmāt, ranging in date from the seventeenth to the nineteenth centuries and covering the transition of power from the Mughals to the British Raj. One particular manuscript, No. 3999 (Raza Library, Acc. No. 4195 M), is an eighteenth-century copy of al-Kirmānī’s Sharḥ, and its margins are littered with explanations, prescriptions, and references to other medical sources, mostly in Arabic. While some notes offer quotes from Galen or Ibn Sīnā, others refer to the works of al-Samarqandī himself. What makes this manuscript important to the study of Yūnānī and Prophetic medicine’s interactions, however, are the many notations citing early Islamic and, in some cases, pre-Islamic medical advice.

The margins of fourteen folios exhibit references to the Prophet’s advice and actions in the realm of medical practice. These various ḥadīth are reported by a total of twelve different companions and members of the Prophet’s family, and they showcase Muḥammad’s own knowledge of the region’s flora and their medical benefits, as well as the traditional folk medicine of the Arabian peninsula. For example, the mid-point of al-Kirmānī’s Sharḥ advocates the use of medicaments to rid the body of excess fluid to relieve dhāt al-janb, or pleurisy, which is an inflammation of the tissue lining the lungs and the chest cavity. The marginal note on this page relates the report of Zayd b. Arqam, a companion of the Prophet, who says that Muḥammad named zayt (oil) and wars (memecylon tinctorium, a Yemenite dye-yielding plant) as treatment for pleurisy (MS. No. 3999, f. 166b). Similarly, while al-Kirmānī explains al-Samarqandī’s definition of kulf, or freckles, as localized changes of color in the face to shades of black or red, the ḥadīth states that Umm Salama, one of the wives of Muḥammad, related that the Prophet spoke of the use of wars (seemingly, a common medicament at the time) to coat the affected areas of the face in order to counteract these spots (MS. No. 3999, f. 336a). Here, these marginalia serve to underscore the accuracy of the lessons of the text’s author, but they also give more specificity to how the ailment should be treated.

One additional notation is worth noting because it predates Islam: it is attributed to Luqmān the Ḥakīm (literally, wise man), a pre-Islamic sage who is mentioned in the Qur’an. His treatments (Elaj-e-Lokmani, or “treatment of Lokman”) are still practiced today in an orally-transmitted medical tradition in Eastern India, particularly Bengal. Luqmān’s medical advice, like the ḥadīth of the Prophet, recalls the medicine practiced in Arabia at the time. The notation before the text begins prescribes a treatment using gharghara (a gargle) and julāb (julep, a fruit- or petal-infused drink) for problems originating in the stomach (MS. No. 3999, f. 1a) and is written in Persian. The Arabic note following it describes the above treatment’s source, denoting Luqmān the Ḥakīm as its originator. This reference to a pre-Islamic sage’s medical advice brings to the fore the Arabian medicine upon which al-Ṭibb al-Nabawī is based. These references reveal the thoughts of the manuscript’s reader, and force the scholar to question the boxes to which these medical traditions have often been assigned.

It is clear that the early Arab medicine described by the Prophet, and practiced before and during his lifetime, was very much alive and influential throughout the time of Yūnānī medical manuscript production and study in India. The treatments explained in al-Kirmānī’s Sharḥ must have reminded the reader of the Prophet’s own medical advice. He may have written these thoughts down as a memory aid, for future readers of the text, or to underscore the benefits of these remedies. Whatever the reasoning behind these notations, the margins of this particular Yūnānī manuscript show that there was an awareness of al-Ṭibb al-Nabawī in the study of Yūnānī Ṭibb, and the two were not at all mutually exclusive.

Deborah Schlein is a Ph.D. candidate in Near Eastern Studies at Princeton University. She is currently pursuing archival research in India with the support of a Fulbright-Nehru grant.


Claude Eatherly, the Bomb, and the Atomic Age

by contributing editor Carolyn Taratko

In late May, President Obama laid a wreath at the Hiroshima Peace Memorial, making him the first sitting U.S. President to visit the city that was the target of the first atomic bomb on August 6th, 1945. He called for the pursuit of “a future in which Hiroshima and Nagasaki are known not as the dawn of atomic warfare but as the start of our own moral awakening.” The mere suggestion of the President’s visit proved incendiary to many Americans, who argued that it would be seen as an apology for acts that official consensus holds ended the war and saved hundreds of thousands of lives in the process. Obama made no such apology, though. After expressing generalized remorse at the devastation, he used the occasion to call for non-proliferation, albeit on a timescale beyond his own lifetime. It was a poignant moment of remembrance, but then there were other pressing issues to attend to. The survivors of Hiroshima and Nagasaki are, after all, the reminders of the immediate dangers of these weapons. At home, in the US, who feels this fear acutely and every day?

Major Claude Eatherly, 1966 (Waco Tribune)

On June 3rd, 1959, an Austrian philosopher addressed a letter to a former US Air Force pilot from Texas. The Austrian, Günther Anders, initiated this correspondence after learning through the media that the American, Claude Eatherly, had once again been committed to the psychiatric ward of the V.A. Hospital in Waco. Eatherly had flown the mission to scout the weather above Japan before giving the ‘ok’ to drop the atom bomb on Hiroshima. After returning to civilian life, he was wracked by guilt over the consequences of his mission. Multiple suicide attempts and petty crimes ensued over the years that followed. Each time he was acquitted on psychiatric grounds. These offences and his outspoken insistence on his own guilt in partaking in the bombing mission left the Air Force and V.A. administration unsettled. Unwilling to risk another incident and wary of Eatherly’s growing media presence, the Air Force and V.A. kept him under medical supervision in the Waco hospital, at first voluntarily and then against his will. The Anders-Eatherly correspondence bears witness to this difficult time for the man who wanted to draw attention to the perils of nuclear warfare by making himself the first example.

It also bears witness to an attempt between two men of vastly different backgrounds to grapple with moral questions haunting the postwar world. In Anders’ first letter, he outlines his philosophical schema in which he sees Eatherly as an improbable hero. Anders dismissed the claims of Eatherly’s psychiatric disturbance and instead praised his vigorous moral health. He described the way that humans can become “guiltlessly guilty” as a result of the vast and complicated technology that humans have created (Letter 1). This condition is unprecedented; the imaginations of our forebears outpaced their ability to act, whereas in modern times— which he alternately calls the “Atomic Age” and the “Age of the Apparatus”—the opposite proves true. Technology is increasingly complicated, danger lurks at a new scale, and miscalculation threatens the existence of humans at a planetary level. This new epoch distinguishes itself from previous ones in that it is the first time that “the capacity of man’s imagination cannot compete with that of our praxis. As a matter of fact, our imagination is unable to grasp the effect of that which we are producing” (Anders, Commandments in the Atomic Age). For Anders, this new age called for, above all else, the widening of man’s moral fantasy to encompass his new technological aptitude and both its intended and unintended effects. Eatherly had grasped this and the two men discussed the implications of this new moral burden in their letters over the course of two years.

Eatherly’s plane, the Straight Flush

The epistolary form is ideally suited for viewing the ethical challenges of nuclear proliferation. The letters are at once intimately private and also global in their concerns. Through them, Anders outlines his view of the problem of increased specialization and expertise, which cultivates a feeling of helplessness among the lay population. His warning that nuclear proliferation should not be left to the military and politicians because of its effects on mankind serves to further justify his activism on Eatherly’s behalf. Questions of morality recognize no neat divisions, and concern for others must lie at the heart of an ethical project (a view later elaborated by Philip Kitcher, who elsewhere takes up the subject of the compatibility between increasingly complex science and democratic values). The degree of intimacy which develops between Eatherly and Anders, who never met in person, is striking. United not only by their concern over nuclear proliferation, but out of concern for humanity and its many faces, Eatherly quickly accepts Anders as a trusted friend and advocate. Anders comes across as a bit pedantic at times, and Eatherly as naïve and rendered helpless by his situation. In spite of this, Anders treats Eatherly with respect. With Eatherly’s mental health repeatedly called into question, Anders shows a willingness to pull out all the stops to defend the freedom and sanity of his interlocutor.

The letters center upon Eatherly’s personal drama, but events out in the world make their mission more pressing. The capture of Adolf Eichmann in May of 1960 and his subsequent trial works its way into the letters. Although Anders despairs at this point, having not heard from Eatherly in five months, he writes to deliver the news of his capture and delineates how Eatherly is the “antipode of Eichmann” (Letter 65). While Eichmann defended his complicity in the planning and execution of genocide by calling himself a “cog in the machine,” a man who lacked agency, and therefore culpability, in an expansive system, Eatherly rejected this excuse in his own situation.

Having secured Eatherly’s permission, Anders published their exchanges (with commentary, and some redaction) in 1961 in Germany, then one year later in the US, in an attempt to gain recognition for Eatherly, who was still fighting the V.A. for his freedom, and for the cause of non-proliferation. The publication aligned with Anders’ twin convictions: that Eatherly’s “problem” was not a private mental health issue, and that nuclear proliferation was not only for a cadre of experts, but touched every citizen. Turning their letters out into the reading public, Anders assumed a position at the center of the anti-nuclear movement of the 1960s. Despite the urgency with which Eatherly saw the need to halt nuclear proliferation, both his story and the issue of proliferation itself have largely faded from public discourse. And, despite growing resistance to the idea of nuclear weaponry, the majority of Americans still believes that the dropping of the bombs on Hiroshima and Nagasaki was justified, and that it spared American soldiers. Even beyond the exigencies of wartime, Eatherly was rejected as the conscience of a generation. Nuclear weapon states and their stockpiles survive, insulated from serious criticism by the rhetoric of security and national prestige. All the same, the public cannot, and should not, refrain from asking whether these weapons serve as a means of self-regulation or rather, to paraphrase the warning of a former US Secretary of Defense, an invitation to inevitable catastrophe.