Categories: Music, Political history

“It’s Coming Back Around Again”: Rage Against The Machine as Radical Historians

By guest contributor Jake Newcomb

The music world has been abuzz this year with the reunion of Rage Against The Machine, whose world tour includes a headlining stint at Coachella in April. Rumors of an imminent return have abounded since a spin-off band, Prophets of Rage, formed in 2016 to protest the “mountain of election-year bullshit” that emerged that year. Prophets of Rage’s lineup consisted of the instrumentalists of Rage Against The Machine, with Chuck D (of Public Enemy) and B-Real (of Cypress Hill) performing vocals in lieu of Zack De La Rocha, Rage Against The Machine’s vocalist. Guitarist Tom Morello stated back in 2016 that they “could no longer stand on the side of history. Dangerous times demand dangerous songs. Both Donald Trump and Bernie Sanders are constantly referred to in the media as raging against the machine. We’ve come back to remind everyone what raging against the machine really means.” Prophets of Rage embarked on their “Make America Rage Again” tour in 2016, even staging a demonstration outside the Republican National Convention in Cleveland, an attempted repeat of Rage Against The Machine’s renowned street performance directly outside the Democratic National Convention in Los Angeles in 2000. Now, Zack De La Rocha has returned to complete the reunion. Their “Public Service Announcement” tour was scheduled to begin on March 26th in El Paso, Texas, as a response to the domestic terror attack there last August, but in light of the global COVID-19 pandemic, they have postponed all the shows scheduled between March and May. The July and August legs of their world tour are, as of now, still on schedule.

Aside from their signature sound, Rage Against The Machine (hereafter RATM) are most commonly beloved and denounced for their commitment to radical politics, which has commanded significant attention from fans and critics alike. Their songs are public stances taken on some of America’s most polarizing topics: police brutality, wealth inequality, globalization, racism, the two-party system, the media, and education. They also publicly embraced radical movements outside of the United States, like the Zapatista movement against NAFTA during the 1990s. Culturally, their fame and left-wing politics have seen them associated with figures like Michael Moore and Noam Chomsky, both of whom RATM have worked with in some capacity. Their politics are often discussed as inseparable from their music (aside from the bizarre case of Paul Ryan, who claimed to enjoy their sound but hate their lyrics), since their political stances and statements are viewed as a key component of their entire act. What is much less discussed, or analyzed by scholars, however, is RATM’s presentation of history. This is surprising, because RATM’s music engages in a “re-casting” of history, not unlike Howard Zinn’s A People’s History of the United States, with the past a recurring element of their lyrics. The historical narratives in the songs identify the downtrodden as the protagonist, continuously battling multiple, interlocking spheres of oppression (a.k.a. The Machine) over centuries. This generations-long struggle, and the consistent oppression of the poor and weak, gives urgency to lyrics such as, “Who controls the past now, controls the future. Who controls the present now, controls the past,” a direct homage to George Orwell. Breaking out of this cycle of history is what RATM preaches.

On their self-titled debut album, released in 1992, RATM’s songs argued that the education system, the media, and the state worked in tandem to brainwash the population into believing false historical narratives and fake news. De La Rocha specifically took aim at public school curricula and teachers that forced “one-sided” Eurocentric histories down the throats of pupils. This false narrative of American history, accordingly, celebrates “Manifest Destiny” ideology while obscuring its violent realities, and strips non-white students of their historical and cultural identities in order to assimilate them into American society. The true narrative, according to the lyrics, is a history of racial and economic oppression at the hands of both the state and private corporations, who have succeeded in no small part over the centuries by means of actual and cultural genocide. Further, this false narrative of history interlocks with contemporary false media reports and psychologically manipulative advertising that keep the population docile, obsessed with consumer products, and supportive of oppressive class and racial relations. They sing that the United States is trapped in a loop that perpetuates injustice, ignorance of that injustice, and ignorance of the history of that injustice. This is the loop they first called their fans to rally against.

Despite the unique rap-metal denunciation of “The Machine” that RATM presented on this first album, those familiar with the historiography of the 1980s and 1990s will recognize the similarities between their presentation of America’s past and those of other writers of the period. In popular historiography, RATM makes longue durée claims similar to those of A People’s History of the United States and Lies My Teacher Told Me, according to which the long history of oppression and exploitation in the United States has been obscured by false, nationalistic history. Like RATM’s albums, these books were massively successful, although in the books’ case, their popularity derived explicitly from their depiction of history. RATM’s presentation of history was there all along, but it was (and is) obscured by their denunciation of contemporary politics, their revolutionary slogans, and their distinctive sound. Of course, these efforts to change the dominant narrative of history also emerged in academic historiography, as with the Subaltern Studies group. Scholars like Ranajit Guha and Gyan Prakash published works on India that tried to move beyond the British colonial and Indian nationalist narratives that obscured the lives of “subaltern” Indian populations and the exploitation they suffered at the hands of colonialism and industrialization alike. Women’s and gender historians also prominently emerged at this time to analyze the long-term subjugation of women and gender minorities, as well as to address the neglect of women’s historical contributions within academic historiography. RATM’s music can be viewed as an extension of these historiographic shifts into the world of music, specifically the emerging world of alternative rock and rap. Their place alongside this historiography also points to a broader cultural moment, in which traditional historical narratives broke down.

RATM continued to expand their historical commentary throughout their initial run in the 1990s, even going so far as to open their second album with the lyrics, “Since 1516, Mayans attacked and overseen…” in the song “People of the Sun.” The song is an anthem of support for the Zapatista movement in Southern Mexico, whom De La Rocha visited before writing the second album. While politically it was written as a statement of solidarity with the Zapatistas, the song places their struggle within a long history of oppression in Mexico, dating back to Spanish colonization. So on their second album, RATM continued to address long-term historical patterns that repeat over time, which they asked their listeners to fight against. They bring these long-term trends into their third studio album as well, 1999’s The Battle of Los Angeles. For example, in the song “Sleep Now In The Fire,” De La Rocha identifies many difficult historical topics as aspects of the same long-term phenomenon: violent greed, specifically in the contexts of colonialism, slavery, and war. The crews of the Niña, the Pinta, and the Santa Maria are part of the same lineage as the overseers of antebellum plantations and the wielders of Agent Orange and nuclear weapons. De La Rocha also suggests in the lyrics that Jesus Christ has historically been invoked as the ultimate justification for various forms of greed or intense violence, pushing that lineage back millennia.

While music as history is nothing new (in fact, for some cultures, history has traditionally been expressed through music), it is rare to find such an explicit historical dimension in contemporary popular music in the West (although some intrepid historians have begun interpreting Western music and art as history). Not only did RATM present their fans with a unique sound and highly charged politics in the 1990s, but they also advocated for a historiographical framing that paralleled changes happening in popular and academic historiography. Along with Subaltern Studies and A People’s History of the United States, for example, RATM asked listeners to shift their historical focus to the lives and stories of the oppressed, instead of glorifying the rich and famous. This historical framing, no doubt, was tied to RATM’s political project, as were the writings of Zinn and Guha to theirs. And like Guha and Zinn, RATM’s productions (cultural rather than intellectual) became both highly influential and targeted by critics. RATM have not announced any plans to record and release new albums, so the jury is still out on whether there will be any new takes on history from De La Rocha and co. What is likely, though, is that thousands of fans will pack stadiums this summer to sing along with RATM’s radical history, if the COVID-19 pandemic subsides in the United States and Europe.

Jake Newcomb is an MA student in the Rutgers History Department, and he is also a musician. He can be followed on Twitter and Instagram at @jakesamerica 

Categories: Early modern Europe, Religious history

Believing in Witches and Demons

By Jan Machielsen

How do we assess whether a claim is worthy of belief? What does it mean to treat it with scepticism? Do we reject it outright as a fiction or lie? Or do we simply refuse to act while we wait for further confirmation? After all, as the French essayist Michel de Montaigne (1533–1592) observed, ‘it is putting a very high price on one’s conjectures to have a man roasted alive because of them.’ Montaigne was writing about witch-burnings, but the question as to whether to act on witchcraft belief was, for him, as much a matter of temperament—a question of trusting one’s beliefs—as it was about reasoned argument. When he was given the opportunity to interview a group of convicted witches while traveling through Germany, he was not convinced, deciding the women needed hellebore instead of hemlock—that is, they were mentally ill, rather than deserving of death. And yet, Montaigne’s witchcraft scepticism was not certain knowledge of a falsehood. It was not knowing. Witches might well exist—Montaigne did not know, and if they did, it might not even matter.

Michael Pacher, The Devil holding up the Book of Vices to St. Augustine (1483)

We do not typically think about the early modern witch-hunt in this way. We tend to see witchcraft almost as axiomatically false, as a falsehood which will wilt away when exposed to reason. US Supreme Court Justice Louis D. Brandeis used witches in that sense in a famous 1927 free speech case: ‘men feared witches and burnt women. It is the function of speech to free men from the bondage of irrational fears.’ Witchcraft, unfortunately, did not die because more and better speech was available. Indeed, as Thomas Waters has shown in a marvellous recent study, witchcraft was nothing if not free-speech-resistant. Yet the concepts and categories—credulity, superstition, bigotry—meant to contain the irrational have been equally persistent. Credulous folk and bigoted inquisitors believe in superstitions, and those superstitious beliefs demonstrate their credulity and/or bigotry. Do not prod this further. Whatever the cost, witchcraft belief can never be reasonable.

Inevitably, it was nineteenth-century historians keen to banish witchcraft into the past who transformed it into an eschatological battle between reason and superstition, between science and (a perversion of) religion. Andrew Dickson White and his student George Lincoln Burr co-opted witchcraft in their A History of the Warfare of Science with Theology (1896) as one more battleground between reactionaries and ‘the thinking, open-minded, devoted men … who are evidently thinking the future thought of the world.’ This reduced the early modern witch-hunt into a conflict between ‘bigots and pedants’ and their heroic opponents, who risked their lives for ‘some poor mad or foolish or hysterical creature.’ This struggle for reason, in which White and Burr were still very much engaged, was very much the preserve of men alone. White’s principal reason for supporting women’s education at Cornell, the university he co-founded in 1865, was to ‘smooth the way for any noble thinkers who are to march through the future’—by ‘increas[ing] the number of women who, by an education which has caught something from manly methods, are prevented from … throwing themselves hysterically across their pathway.’

The early modern witch-hunt has served many moral purposes since then—noble yet doomed peasant revolt, Wiccan holocaust, or feminist ‘gynocide’—but the structures sustaining such readings have long collapsed. Witch-hunting was, for the most part, not an organized affair, instigated by elites. Instead, it was the product of daily interactions between villagers who did not get along. Nor was the witch-hunt particularly severe. With most estimates ranging between 40,000 and 50,000 victims—across a continent and multiple centuries—it is easy to list a number of recent environmental catastrophes that cost as many lives in weeks, days, even seconds. The persistence of these moral readings tells us more about our own time than it does about the early modern period.

Jan Luyken, Two Bamberg Girls taken to Their Execution Site (1685)

Now, even the containment field of irrationality no longer appears to be holding. This may also reflect the present, when truth claims seem to have lost their value and the world’s most powerful figure proclaims himself to be the victim of a witch-hunt. In that sense, to study the historiography of witchcraft really is to study ourselves. Yet the demise of irrationality has been a long time coming. In his seminal Thinking with Demons (1997), Stuart Clark taught us that witchcraft belief, far from being the preserve of a fringe group of demonologists, was embedded in larger modes of political, religious, and—indeed—scientific thought. Yet the question he was effectively pointing to, and with which we opened, has only more recently begun to be answered: what did it mean to believe, or not believe, in witches? Precisely because it appears to us as almost axiomatically false, the early modern witch-hunt invites us to think about what it means to believe in anything.

Historians of early modern demonology have mostly stopped dividing authors into (irrational) believers and (rational) sceptics. As the example of Montaigne shows, belief can take on many forms. It may be cautious acceptance, or indistinguishable from certain knowledge. It can be highly reasoned, or entirely unthinking. It can also be entirely passive—part of a wider subscription package. Certainly, many eighteenth-century elite thinkers, most notably the founder of Methodism, John Wesley, treated witchcraft in this way: as proof of the existence of the spirit world, but without any expectation of ever meeting a witch. Witchcraft belief could be partial and caveated, or it could be extreme. The heterodox political thinker Jean Bodin believed that the devil could even break the laws of nature (because God would permit him), while King James VI of Scotland was virtually alone among the major demonologists in supporting the ducking or swimming of witches. Belief could be sustained, or discredited, by direct experience with witches and their bodies, which could be tortured and examined for a devil’s mark. Or it could be textual, founded on a wide range of biblical, patristic, and classical texts whose authority was incontrovertible. Nor was belief founded on fear alone. For those ensconced in the safe comfort of their study, tales of witchcraft delighted and entertained as much as any horror story today. And partly because of different and shifting emotional registers, belief in witchcraft could also change over time. Scepticism yielded to belief, and vice versa.

Seen from the angle of what it meant to believe—and why, how much, and when—the entire field of early modern demonology looks very different. It no longer resembles a battlefield between two opposing camps, nor can it sustain an opposition between irrationality and reason, between false belief and knowledge of falsehood. The science of demons is much messier and hence, for historians, much more interesting. It consisted of many conflicts and disagreements, both major and minor. Could witches transform into mice and thus enter homes through keyholes? The Lorraine judge Nicolas Remy said yes; the Flemish-Spanish Jesuit Martin Delrio said no. Witchcraft also looked very different from different vantage points and at different points in time. One ardent Catholic, the Jesuit Juan Maldonado, living in Paris in the run-up to the 1572 St Bartholomew’s Day Massacre, could see Protestantism and witchcraft as the devil’s twin attacks. Like any good brothel-keeper, the devil transformed beautiful courtesans (heretics) into procurers (witches) when they lost their physical appeal. Writing twenty years later in the midst of the Trier ‘super-hunt’, the Dutch Catholic priest Cornelius Loos considered witchcraft belief to be diabolical in origin, making the witch-hunters the devil’s true human allies. Yet Loos was not, as White and Burr once supposed, a harbinger of the enlightenment they believed themselves to embody. A religious exile from the Dutch Republic, he repeatedly called for a universal crusade against all Protestants.

Unshackled from moral straitjackets and the concepts that defined them, the early modern witch-hunt can actually teach us a great deal. On the level of human interactions, it shows us how forced daily interactions can foster resentment. (A colleague once suggested I write a book about witch-hunting as office politics.) It reveals the processes by which we demonize those with whom we disagree. At the level of belief, following in Montaigne’s footsteps, it should make us question why we believe what we believe, and how we know what we think we know. Most importantly, the early modern witch-hunt, when studied properly, teaches us that we are, when push comes to shove, not very different from those who came before us. And that is perhaps the most sobering thought of all.

Jan Machielsen is Senior Lecturer in Early Modern History at Cardiff University. He is the author of Martin Delrio: Demonology and Scholarship in the Counter-Reformation (Oxford University Press, 2015), and the editor of The Science of Demons: Early Modern Authors Facing Witchcraft and the Devil, published by Routledge on April 13, 2020. 

Categories: Think Piece

Tory Marxism

by Charles Troup 

For many on the Right today, describing something as “Marxist” is sufficient to mark it out as something every decent conservative should stand against. Indeed, at first glance Marxism and conservatism may even look diametrically opposed. One is radically egalitarian, whilst the other has always found it necessary to defend inequality in some form. One demands that a society’s institutions express social justice, whilst the other asks principally that they be stable and capable of managing change. One proceeds from principle; the other prefers pragmatism.

But things weren’t always this way. The Right’s most creative thinkers have often drawn on an eclectic range of sources when expressing and renewing their creed—Marx not excepted. On the British Right, in fact, we can find surprisingly frank engagement with Marxism as recently as the 1980s: in particular amongst the Salisbury Group, a collection of “traditionalists” skeptical about the doctrines of neoliberalism which were conquering right-wing parties in the Western world one by one.

Roger Scruton

The influence of Marx is plain, for instance, in the philosopher Roger Scruton’s 1980 book The Meaning of Conservatism. Here Scruton made the striking claim that Marxism was a more suitable philosophical tradition than liberalism for conservatives to engage with in dialogue, because ‘it derives from a theory of human nature that one might actually believe’. This was because liberalism, for Scruton, began with a fictitious ideal of autonomous individual agents and believed that they could not be truly free under authority unless they had somehow consented to it. For Scruton, however, this notion “isolates man from history, from culture, from all those unchosen aspects of himself which are in fact the preconditions of his subsequent autonomy.” Liberalism lacked an account of how society and the self deeply interpenetrated each other. Scruton believed that individuals yearned to see themselves reflected at some profound level in the way their society was organized, in its culture, and in the forms of collective membership it offered. Yet liberalism presented no idea of the self above its desires, and no idea of self-fulfillment other than their satisfaction.

Marxism, on the other hand, possessed a philosophical anthropology which was much friendlier to the sort of “Hegelian” conservatism which Scruton advocated. He was particularly impressed with the concept of “species-being” or “human essence,” which Marx had borrowed from Ludwig Feuerbach and employed in the Manuscripts of 1844. It was this notion, Scruton reminded his readers, that underpinned the whole centrality of labour for Marxists, since they regarded it as an essential, intrinsically valuable human activity. Moreover, it was the estrangement of the individual from their labour under capitalism which caused the malaise of ‘alienation’: that condition of spiritual disaffection which, Scruton believed, conservatives should recognise in their own instincts about modernity’s deficiencies. Of course, the conservative would seek to ‘present his own description of alienation, and to try to rebut the charge that private property is its cause’; but Marxists should be praised for recognising ‘that civil order reflects not the desires of man, but the self of man’.

There was an urgent political stake in this discussion. Scruton had welcomed Thatcher’s victory in 1979 as an opportunity to recast British conservatism after its post-war dalliance with Keynesianism and redistributive social policy. Still, he felt a sense of foreboding about the ideological forces which had ushered her to victory. The Conservative Party, he complained, “has begun to see itself as the defender of individual freedom against the encroachments of the state, concerned to return to people their natural right of choice.” The result was damaging ‘urges to reform’, stirred by the newly ascendant language of “economic liberalism.” Scruton implored his fellow conservatives not to mistake this for true conservatism, but to recognize it as a derivation of its “principal enemy.”

In doing so, he once again compared Marxism and liberalism to demonstrate to conservatives the limitations of the latter. “The political battles of our time,” he wrote, “concern the conservation and destruction of institutions and forms of life: nothing more vividly illustrates this than the issues of education, political unity, the role of trade unions and the House of Lords, issues with which the abstract concept of ‘freedom’ fails to make contact.” Marxists at least understood that “the conflict concerns not freedom but authority, authority vested in a given office, institution or arrangement.” Their approach of course was “to demystify the ideal of authority” and “replace it with the realities of power,” which Scruton thought reductive. But “in preferring to speak of power, the Marxist puts at the centre of politics the only true political commodity, the only thing which can actually change hands”—it “correctly locates the battleground.”

Scruton wasn’t the only figure in the Salisbury Group to engage with Marxism. So too did the historian Maurice Cowling, doyen of the “Peterhouse school” associated with the famously conservative Cambridge college. He believed that Marxism’s “explanatory usefulness can be considerable” and was even described by one of his admirers as a “Tory Marxist jester.”

Maurice Cowling

Cowling hated the Whiggish historians who dominated the English academy in the first half of the 20th century, and welcomed the rise of the English Marxist school in the 1950s—those figures around the journal Past & Present like E. P. Thompson, Eric Hobsbawm, Dona Torr, and Christopher Hill—as a breath of fresh air. Whereas Whig liberals gave bland and credulous accounts of the motive forces of British political history, the English Marxists were cynical and clear-eyed about power and conflict. As he explained in a 1987 radio interview for the BBC, he agreed with them that “class struggle” was “a real historical fact” and that we should “always see a cloven hoof beneath a principle.” Marxists knew that any set of institutions unequally apportioned loss amongst the social classes, making the business of politics that of deciding in whose image this constitution would be made.

This was one point for Cowling where Marxists and conservatives parted ways: accepting the reality of class struggle didn’t mean picking the same side of the barricades. But Cowling believed that conservatives also diverged analytically from Marxists. One of their great errors, he wrote, was to believe that all forms of cultural or social attachment which entailed hierarchy were reducible to false consciousness; but Cowling believed that these were more concrete, especially if they connected to a sort of national consciousness he often referred to in quasi-mystical terms. The error made Marxists naïve about “the fertility and resourcefulness of established regimes.” For Cowling, it was the job of conservative political elites to enact this “resourcefulness:” to tap into the deep well of national sentiment and renew it for successive generations, and thus to blunt class conflict and insulate Britain’s political system from popular pressure.

We can see Cowling applying these ideas to contemporary politics most explicitly in the Salisbury Group’s first publication, the 1978 edited collection Conservative essays. Here he criticized Thatcher’s political rhetoric. Adam Smith might be a useful name to deploy against socialism, he wrote, but if carried to its “rationalistic” pretensions his political language was too rigid and unimaginative for the great task facing conservative elites. “If there is a class war—and there is—it is important that it should be handled with subtlety and skill […] it is not freedom that Conservatives want; what they want is the sort of freedom that will maintain existing inequalities or restore lost ones.” No class war could be managed by “politicians who are so completely encased in a Smithian straitjacket that they are incapable of recognizing that it is going on.” Conservatives needed to read more widely in search of insights to press into service against the reformers and revolutionaries of the age.

Marx rapidly fell out of favor as a source for creative borrowing, however. The collapse of the USSR was hailed by many conservatives as the ultimate indictment of socialism and Marx’s whole system along with it – something many on the Right still believe. Even Scruton became more reluctant to engage with Marx as the Cold War wore on (Cowling criticized him for making the journal Salisbury Review “crudely anti-Marxist” under his editorship). The frank openness to learning from Marx that we find in these texts looks like a historical curiosity today.

The story of the Salisbury Group is also something of a historiographical curiosity. The conservative revival of the 1970s has been the subject of much excellent work in recent British history; but the Group, despite its reputation on the Right and the status of its most prominent figures, has with a few exceptions been passed over for study. Thatcherism and its genealogy have understandably drawn the eye, but this has sometimes unhelpfully excluded its conservative critics or more skeptical fellow-travellers. Historians should seek now to tell more complex stories about the intellectual history of conservatism in this period: after all, the ascendance on the Right of the doctrines and rhetoric of neoliberalism was, in the words of philosopher John Gray, “perhaps the most remarkable and among the least anticipated developments in political thought and practice throughout the Western world in the 1980s.”

As for the present, whilst we shouldn’t expect a conservative re-engagement with Marx we should expect to see more creative re-appropriation of thinkers beyond the typical right-wing canon. This is especially so because the Tory Marxists of the 1970s were looking for something still sought by many conservatives today. That is a counterpoint to a neoliberalism which in its popular idiom increasingly rests upon a notion of individual freedom which fewer and fewer people experience as cohering with their aspirations, values or attachments; or which appeals to moralistic maxims about personal grit, endeavour and innovation which are belied by the inequalities and precarities of contemporary economic life. They seek a political perspective which issues from a holistic analysis of society and its constituent forces rather than individualistic axioms about entitlements and incentives, and which can speak to alienation and to conflict over authority. We can see this process underway already on the French Right, as Mark Lilla made clear in a recent article, where a new generation of intellectuals count the ‘communitarian’ socialists Alasdair MacIntyre, Christopher Lasch and Charles Péguy among their lodestars. And in a perhaps less self-conscious way we can see it on the American Right too, as the long-durable “fusionist” coalition between social conservatives and business libertarians comes under strain: witness Patrick Deneen’s surprise bestseller Why Liberalism Failed and the much-publicized debate between Sohrab Ahmari and David French over whether conservatives should reject or reconcile themselves to liberal institutions and norms. In this moment especially, we should expect to see more inspiration on the intellectual Right from strange places.


Charles Troup is a second-year Ph.D. student in Modern European History at Yale University. 

Categories: Book reviews, Think Piece

Personal Memory and Historical Research

By Contributing Editor Pranav Kumar Jain

Eric Hobsbawm, Interesting Times (2002)

During a particularly bleak week in the winter of 2013, I picked up a copy of Eric Hobsbawm’s modestly titled autobiography Interesting Times: A Twentieth-Century Life (2002), perhaps under the (largely correct) impression that the sheer energy and power of Hobsbawm’s prose would provide a stark antidote to the dullness of a Chicago winter. I had first encountered Hobsbawm the year before, when he died a day before my first history course in college. The sadness of the news hung heavy over the initial course meeting, and I was curious to find out more about the historian who had left such a deep impression on my professor and several classmates. Over the course of the next year or so, I read through several of his most important works, and ending with his autobiography seemed like a logical way of contextualizing his long life and rich corpus.

Needless to say, Interesting Times was an absolutely riveting read. Hobsbawm’s attempt to bring his unparalleled observational skills and analytical shrewdness to his own work and career revealed a life full of great adventures and strong convictions. Yet throughout the book, apart from marveling at his encounters with figures like the gospel singer and civil rights activist Mahalia Jackson, I was most struck by what can best be described as the intersection of historical techniques and personal memory. Though much of the narrative is written from his prodigious memory, Hobsbawm regularly references his own diary, especially when discussing his days as a Jewish teenager in early 1930s Berlin and then as a communist student in Cambridge. In one instance, it allows his later self to understand why he didn’t mingle with his schoolmates in mid-1930s London (his diary indicates that he considered himself intellectually superior to the whole lot). In another, it helps him chart, at least in his view, the beginnings of peculiarly British Marxist historical interpretations. Either way, I was fascinated by his reading of what was, in effect, a primary source written by himself. He naturally brought the historian’s skepticism to this unique primary source, repeatedly questioning his own memory against the version of events described in the diary and vice versa. This intermixing of personal memory with the historian’s interrogation of primary sources has long stayed with me, and I have repeatedly sought out similar examples since then.

In recent years, there has been a remarkable flowering of memoirs or autobiographies written by historians. Amongst others, Carlos Eire and Sir J. H. Elliott’s memoirs stand out. Eire’s unexpectedly hilarious but ultimately depressing tale of his childhood in Cuba is a moving attempt to recover the happy memories long buried by the upheavals of the Cuban Revolution. In a different vein, Elliott ably dissects the origins of his interests in Spanish history and a Protestant Englishman’s experiences in the Catholic south. The intermingling of past and present is a constant theme. Elliott, for example, was once amazed to hear the response of a Barcelona traffic policeman when he asked him for directions in Catalan instead of Castilian. “Speak the language of the empire [Hable la lengua del imperio],” said the policeman, which was the exact phrase that Elliott had read in a pamphlet from the 1630s that attacked Catalans for not speaking the “language of the empire.” As Elliott puts it, “it seemed as though, in spite of the passage of three centuries, time had stood still” (25). (There are also three memoirs by Sheila Fitzpatrick and one by Hanna Holborn Gray, none of which, regrettably, I have yet had a chance to read.)

Mark Mazower, What You Did Not Tell (2017)

Yet, while Eire and Elliott’s memoirs are notably rich in a number of ways, they have little to offer in terms of the Hobsbawm-like connection between historical examination and personal memory that had started me on this quest in the first place. However, What You Did Not Tell (2017), Mark Mazower’s recent account of his family’s life in Tsarist Russia, the Soviet Union, Nazi Germany, France, and the tranquil suburbs of London, provides a wonderful example of the intriguing nexus between historical research and personal memory.

In some ways, it is quite natural that I have come to see affinities between Hobsbawm’s autobiography and Mazower’s memoir. Both are stories of an exodus from persecution in Central and Eastern Europe to the relative safety and stability of London. But the surface-level similarities perhaps stop there. While Hobsbawm, of course, is writing mostly about himself, Mazower is keen to tell the remarkable story of his grandfather’s transformation from a revolutionary Bundist leader in the early twentieth century to a somewhat conservative businessman in London (though, as he learned in the course of his research, the earlier revolutionary connections did not fade away easily, and his grandparents’ household was always a welcome refuge for activists and revolutionaries from across the world). However, on a deeper level, the similarities persist. For one thing, the attempt to measure personal memories against a historical record of some sort is what drives most of Mazower’s inquiries in the memoir.

The memories at work in Mazower’s account are of two kinds. The first, mostly about his grandfather, whom he never met (Max Mazower died six years before his grandson Mark was born), are inherited from others and largely concern silences—hence the title What You Did Not Tell. Though Max Mazower was a revolutionary pamphleteer, amongst other things, in the Russian Empire, he kept quiet about his radical past during his later years. His grandfather’s silence appears to have perturbed Mazower, and it plays a central role in his bid to dig deeper into archives across Europe to uncover traces of his grandfather’s extraordinary life. The other kind of memories, largely about his father, are more personal and urge Mazower to understand how his father came to be the gentle, practical, and affectionate man that Mazower remembered him to be. Naturally, in the course of phoning old acquaintances, acquiring information through historian friends with access to British Intelligence archives, and poring over old family documents such as diaries and letters, Mazower’s memories have been both confirmed and challenged.

Mark Mazower

In the case of his grandfather, while Mazower is able to solve quite a few puzzles through expert archival work and informed guessing, there are some that continue to evade satisfactory resolution. Perhaps the thorniest amongst these is the parentage of his father’s half-brother André. Though most relatives knew that André had been Max’s son from a previous relationship with a fellow revolutionary named Sofia Krylenko, André himself came to doubt his paternity later in life, a fact that much disturbed Mazower’s father, who saw André’s doubts as a repudiation of their father and everything he stood for. Mazower’s own research into André’s paternity, through naturalization papers and birth certificates, appears to have both further confused and enlightened him. While he concludes that André’s doubts were most likely unfounded, a tinge of unresolved tension about the matter runs through the pages.

With his father, Mazower is naturally more certain of things. Yet, as he writes towards the beginning of the memoir, after his father’s death he realized that there was much about his life that he did not know. In most cases, he was pleasantly surprised with his discoveries. For instance, he seems to take satisfaction in the fact that, in his younger years, his father had a more competitive streak than he had previously assumed. But reconstructing the full web of his father’s friendships proved to be quite challenging. At one point, he called a local English police station from Manhattan to ask if they could check on a former acquaintance of his father whose phone had been busy for a few days. After listening to him sympathetically, the duty sergeant told him that this was not reason enough for the police to go knocking on someone’s door. Only later did he learn that he was unable to reach the person in question because she had been living in a nursing home and had died around the time that he had first tried to get in touch.

The Pandora’s box opened by my reading of Hobsbawm’s autobiography is far from shut. It has led me from one memoir to another, and each has presented a distinct dimension of the question of how historical research intersects with personal memories. In Hobsbawm’s case, there was the somewhat peculiar case of a historian using a primary source written by himself. Mazower’s multi-layered account, of course, moves across multiple types of memories, interweaving straightforward archival research with personal impressions.

While these different examples hamper any attempt at offering a grand theory of personal memory and historical research, they do suggest an intriguing possibility. The now not so incipient field of memory studies has spread its wings from memories of the Reformation in seventeenth and eighteenth-century England to testimonies of Nazi and Soviet soldiers who fought at the Battle of Stalingrad. Perhaps it is now time to bring historians themselves under its scrutinizing gaze.

Pranav Kumar Jain is a doctoral student at Yale where his research focuses on religion and politics in early modern England.

Categories: Think Piece

A Pandemic of Bloodflower’s Melancholia: Musings on Personalized Diseases

By Editor Spencer J Weinreich

Peter Bloodflower? (actually Samuel Palmer, Self Portrait [1825])
I hasten to assure the reader that Bloodflower’s Melancholia is not contagious. It is not fatal. It is not, in fact, real. It is the creation of British novelist Tamar Yellin, her contribution to The Thackery T. Lambshead Pocket Guide to Eccentric & Discredited Diseases, a brilliant and madcap medical fantasia featuring pathologies dreamed up by the likes of Neil Gaiman, Michael Moorcock, and Alan Moore. Yellin’s entry explains that “The first and, in the opinion of some authorities, the only true case of Bloodflower’s Melancholia appeared in Worcestershire, England, in the summer of 1813” (6). Eighteen-year-old Peter Bloodflower was stricken by depression, combined with an extreme hunger for ink and paper. The malady abated in time and young Bloodflower survived, becoming a friend and occasional muse to Shelley and Keats. Yellin then reviews the debate about the condition among the fictitious experts who populate the Guide: some claim that the Melancholia is hereditary and has plagued all successive generations of the Bloodflower line.

There are, however, those who dispute the existence of Bloodflower’s Melancholia in its hereditary form. Randolph Johnson is unequivocal on the subject. ‘There is no such thing as Bloodflower’s Melancholia,’ he writes in Confessions of a Disease Fiend. ‘All cases subsequent to the original are in dispute, and even where records are complete, there is no conclusive proof of heredity. If anything we have here a case of inherited suggestibility. In my view, these cannot be regarded as cases of Bloodflower’s Melancholia, but more properly as Bloodflower’s Melancholia by Proxy.’

If Johnson’s conclusions are correct, we must regard Peter Bloodflower as the sole true sufferer from this distressing condition, a lonely status that possesses its own melancholy aptness. (7)

One is reminded of the grim joke, “The doctor says to the patient, ‘Well, the good news is, we’re going to name a disease after you.’”

Master Bloodflower is not alone in being alone. The rarest disease known to medical science is ribose-5-phosphate isomerase deficiency, of which only one sufferer has ever been identified. Not much commoner is Fields’ Disease, a mysterious neuromuscular disease with only two observed cases, the Welsh twins Catherine and Kirstie Fields.

Less literally, Bloodflower’s Melancholia, RPI-deficiency, and Fields’ Disease find a curious conceptual parallel in contemporary medical science—or at least the marketing of contemporary medical science: personalized medicine and, increasingly, personalized diseases. Witness a recent commercial for a cancer center, in which the viewer is told, “we give you state-of-the-art treatment that’s very specific to your cancer.” “The radiation dose you receive is your dose, sculpted to the shape of your cancer.”

Put the phrase “treatment as unique as you are” into a search engine, and a host of providers and products appear, from rehab facilities to procedures for Benign Prostatic Hyperplasia, from fertility centers in Nevada to orthodontist practices in Florida.

The appeal of such advertisements is not difficult to understand. Capitalism thrives on the (mass-)production of uniqueness. The commodity becomes the means of fashioning a modern “self,” what the poet Kate Tempest describes as “The joy of being who we are / by virtue of the clothes we buy” (94). Think, too, of the “curated” content online advertisers supply—as though carefully and personally selected just for you. It goes without saying that we want this in healthcare: to feel that the doctor is tailoring their questions, procedures, and prescriptions to our individual case.

And yet, though we can and should see the market mechanisms at work beneath “treatment as unique as you are,” the line encapsulates a very real medical-scientific phenomenon. In 1998, for example, Genentech and UCLA released Trastuzumab, an antibody extremely effective against (only) those breast cancers linked to the overproduction of the protein HER2 (roughly one-fifth of all cases). More ambitiously, biologist Ross Cagan proposes to use a massive population of genetically engineered fruit flies, keyed to the makeup of a patient’s tumor, to identify potential cocktails among thousands of drugs.

Personalized medicine does not depend on the wonders of twenty-first-century technology: it is as old as medicine itself. Ancient Greek physiology posited that the body was made up of four humors—blood, phlegm, yellow bile, and black bile—and that each person combined the four in a unique proportion. In consequence, treatment, be it medicine, diet, exercise, physical therapies, or surgery, had to be calibrated to the patient’s particular humoral makeup. Here, again, personalization is not an illusion: professionals were customizing care, using the best medical knowledge available.

Medicine is a human activity, and thus subject to the variability of human conditions and interactions. This may be uncontroversial: even when the diagnoses are identical, a doctor justifiably handles a forty-year-old patient differently from a ninety-year-old one. Even a mild infection may be lethal to an immunocompromised body. But there is also the long and shameful history of disparities in medical treatment among races, ethnicities, genders, and sexual identities—to say nothing of the “health gaps” between rich and poor societies and rich and poor patients. For years, AIDS was a “gay disease” or confined to communities of color, while cancer only slowly “crossed the color line” in the twentieth century, as a stubborn association with whiteness fell away. Women and minorities are chronically under-medicated for pain. If medication is inaccessible or unaffordable, a “curable” condition—from tuberculosis (nearly two million deaths per year) to bubonic plague (roughly 120 deaths per year)—is anything but.

Let us think with Bloodflower’s Melancholia, and with RPI-deficiency and Fields’ Disease. Or, let us take seriously the less-outré individualities that constitute modern medicine. What does that mean for our definition of disease? Are there (at least) as many pneumonias as there have ever been patients with pneumonia? The question need not detain medical practitioners too long—I suspect they have more pressing concerns. But for the historian, the literary scholar, and indeed the ordinary denizen of a world full to bursting with microbes, bodies, and symptoms, there is something to be gained in probing what we talk about when we talk about a “disease.”

Colonies of M. tuberculosis

The question may be put spatially: where is disease? Properly schooled in the germ theory of disease, we instinctively look to the relevant pathogens—the bacterium Mycobacterium tuberculosis as the avatar of tuberculosis, the human immunodeficiency virus as that of AIDS. These microscopic agents often become actors in historical narratives. To take one eloquent example, Diarmaid MacCulloch writes, “It is still not certain whether the arrival of syphilis represented a sudden wanderlust in an ancient European spirochete […]” (95). The price of evoking this historical power is anachronism, given that sixteenth-century medicine knew nothing of spirochetes. The physician may conclude from the mummified remains of Ramses II that it was M. tuberculosis (discovered in 1882), and thus tuberculosis (clinically described in 1819), that killed the pharaoh, but it is difficult to know what to do with that statement. Bruno Latour calls it “an anachronism of the same caliber as if we had diagnosed his death as having been caused by a Marxist upheaval, or a machine gun, or a Wall Street crash” (248).

The other intuitive place to look for disease is the body of the patient. We see chicken pox in the red blisters that form on the skin; we feel the flu in fevers, aches, coughs, shakes. But here, too, analytical dangers lurk: many conditions are asymptomatic for long periods of time (cholera, HIV/AIDS), while others’ most prominent symptoms are only incidental to their primary effects (the characteristic skin tone of Yellow Fever is the result of the virus damaging the liver). Conversely, Hansen’s Disease (leprosy) can present in a “tuberculoid” form that does not cause the stereotypical dramatic transformations. Ultimately, diseases are defined through a constellation of possible symptoms, any number of which may or may not be present in a given case. As Susan Sontag writes, “no one has everything that AIDS could be” (106); in a more whimsical vein, no two people with chicken pox will have the same pattern of blisters. And so we return to the individuality of disease. So is disease no more than a cultural construction, a convenient umbrella-term for the countless micro-conditions that show sufficient similarities to warrant amalgamation? Possibly. But the fact that no patient has “everything that AIDS could be” does not vitiate the importance of describing these possibilities, nor their value in defining “AIDS.”

This is not to deny medical realities: DNA analysis demonstrates, for example, that the Mycobacterium leprae preserved in a medieval skeleton found in the Orkney Islands is genetically identical to modern specimens of the pathogen (Taylor et al.). But these mental constructs are not so far from how most of us deal with most diseases, most of the time. Like “plague,” at once a biological phenomenon and a cultural product (a rhetoric, a trope, a perception), so for most of us Ebola or SARS remain caricatures of diseases, terrifying specters whose clinical realities are hazy and remote. More quotidian conditions—influenza, chicken pox, athlete’s foot—present as individual cases, whether our own or those around us, analogized to the generic condition by memory and common knowledge (and, nowadays, internet searches).

Perhaps what Bloodflower’s Melancholia—or, if you prefer, Bloodflower’s Melancholia by Proxy—offers is an uneasy middle ground between the scientific, the cultural, and the conceptual. Between the nebulous idea of “plague,” the social problem of a plague, and the biological entity Yersinia pestis stands the individual person and the individual body, possibly infected with the pathogen, possibly to be identified with other sick bodies around her, but, first and last, a unique entity.

Newark Bay, South Ronaldsay

Consider the aforementioned skeleton of a teenage male, found when erosion revealed a Norse Christian cemetery at Newark Bay on South Ronaldsay (one of the Orkney Islands). Radiocarbon dating can place the burial somewhere between 1218 and 1370, and DNA analysis demonstrates the presence of M. leprae. The team that found this genetic signature was primarily concerned with the scientific techniques used, the hypothetical evolution of the bacterium over time, and the burial practices associated with leprosy.

But this particular body produces its particular knowledge. To judge from the remains, “the disease is of long standing and must have been contracted in early childhood” (Taylor et al., 1136). The skeleton, especially the skull, indicates the damage done in a medical sense (“The bone has been destroyed…”), but also in the changes wrought to his appearance (“the profile has been greatly reduced”). A sizable lesion has penetrated through the hard palate all the way into the nasal cavity, possibly affecting breathing, speaking, and eating. This would also have been an omnipresent reminder of his illness, as would the several teeth he had probably lost (1135).

What if we went further? How might the relatively temperate, wet climate of the Orkneys have impacted this young man’s condition? What treatments were available for leprosy in the remote maritime communities of the medieval North Sea—and how would they interact with the symptoms caused by M. leprae? Social and cultural history could offer a sense of how these communities viewed leprosy; clinical understandings of Hansen’s Disease some idea of his physical sensations (pain—of what kind and duration? numbness? fatigue?). A forensic artist, with the assistance of contemporary symptomatology, might even conjure a semblance of the face and body our subject presented to the world. Of course, much of this would be conjecture, speculation, imagination—risks, in other words, but risks perhaps worth taking to restore a few tentative glimpses of the unique world of this young man, who, no less than Peter Bloodflower, was sick with an illness all his own.

Categories: Think Piece

What has Athens to do with London? Plague.

By Editor Spencer J. Weinreich

Map of London by Wenceslas Hollar, c.1665

It is seldom recalled that there were several “Great Plagues of London.” In scholarship and popular parlance alike, only the devastating epidemic of bubonic plague that struck the city in 1665 and lasted the better part of two years holds that title, which it first received in early summer 1665. To be sure, the justice of the claim is incontrovertible: this was England’s deadliest visitation since the Black Death, carrying off some 70,000 Londoners and another 100,000 souls across the country. But note the timing of that first conferral. Plague deaths would not peak in the capital until September 1665, the disease would not take up sustained residence in the provinces until the new year, and the fire was more than a year in the future. Rather than any special prescience among the pamphleteers, the nomenclature reflects the habit of calling every major outbreak in the capital “the Great Plague of London”—until the next one came along (Moote and Moote, 6, 10–11, 198). London experienced a major epidemic roughly every decade or two: recent visitations had included 1592, 1603, 1625, and 1636. That 1665 retained the title is due in no small part to the fact that no successor arose; this was to be England’s last great outbreak of bubonic plague.

Serial “Great Plagues of London” remind us that epidemics, like all events, stand within the ebb and flow of time, and draw significance from what came before and what follows after. Of course, early modern Londoners could not know that the plague would never return—but they assuredly knew something about its past.

Early modern Europe knew bubonic plague through long and hard experience. Ann G. Carmichael has brilliantly illustrated how Italy’s communal memories of past epidemics shaped perceptions of and responses to subsequent visitations. Seventeenth-century Londoners possessed a similar store of memories, but their plague-time writings mobilize a range of pasts and historiographical registers that includes much more than previous epidemics or the history of their own community: from classical antiquity to the English Civil War, from astrological records to demographic trends. Such richness accords with the findings of the formidable scholarly phalanx investigating “the uses of history in early modern England” (to borrow the title of one edited volume), which informs us that sixteenth- and seventeenth-century English people had a deep and sophisticated sense of the past, instrumental in their negotiations of the present.

Let us consider a single, iconic strand in this tapestry: invocations of the Plague of Athens (430–26 B.C.E.). Jacqueline Duffin once suggested that writing about epidemic disease inevitably falls prey to “Thucydides syndrome” (qtd. in Carmichael 150n41). In the centuries since the composition of the History of the Peloponnesian War, Thucydides’s hauntingly vivid account of the plague (II.47–54) has influenced writers from Lucretius to Albert Camus. Long lost to Latin Christendom, Thucydides was slowly reintegrated into Western European intellectual history beginning in the fifteenth century. The first (mediocre) English edition appeared in 1550, superseded in 1628 with a text by none other than Thomas Hobbes. For more than a hundred years, then, Anglophone readers had access to Thucydides, while Greek and Latin versions enjoyed a respectable, if not extraordinary, popularity among the more learned.

Michiel Sweerts, Plague in an Ancient City (1652), believed to depict the Plague of Athens

In 1659, the churchman and historian Thomas Sprat, booster of the Royal Society and future bishop of Rochester, published The Plague of Athens, a Pindaric versification of the accounts found in Thucydides and Lucretius. Sprat’s Plague has been convincingly interpreted as a commentary on England’s recent political history—viz., the Civil War and the Interregnum (King and Brown, 463). But six years on, the poem found fresh relevance as England faced its own “too ravenous plague” (Sprat, 21). The savvy bookseller Henry Brome, who had arranged the first printing, brought out two further editions in 1665 and 1667. Because the poem was prefaced by the relevant passages of Hobbes’s translation, an English text of Thucydides was in print throughout the epidemic. It is of course hardly surprising that at moments of epidemic crisis the locus classicus for plague should sell well: plague-time interest in Thucydides is well attested before and after 1665, in England and elsewhere in Europe.

But what does the Plague of Athens do for authors and readers in seventeenth-century London? As the classical archetype of pestilence, it functions as a touchstone for the ferocity of epidemic disease and a yardstick by which the Great Plague could be measured. The physician John Twysden declared, “All Ages have produced as great mortality and as great rebellion in Diseases as this, and Complications with other Diseases as dangerous. What Plague was ever more spreading or dangerous than that writ of by Thucidides, brought out of Attica into Peloponnesus?” (111–12).

One flattering rhymester welcomed Charles II’s relocation to Oxford with the confidence that “while Your Majesty, (Great Sir) shines here, / None shall a second Plague of Athens fear” (4). In a less reassuring vein, the societal breakdown depicted by Thucydides warned England what might ensue from its own plague.

Perhaps with that prospect in mind, other authors drafted Thucydides as their ally in catalyzing moral reform. The poet William Austin (who was in the habit of ruining his verses by overstuffing them with classical references) seized upon the Athenians’ passionate devotions in the face of the disaster (History, II.47). “Athenians, as Thucidides reports, / Made for their Dieties new sacred courts. / […] Why then wo’nt we, to whom the Heavens reveal / Their gracious, true light, realize our zeal?” (86). In a sermon entitled The Plague of the Heart, John Edwards enlisted Thucydides in the service of his conceit of a spiritual plague that was even more fearsome than the bubonic variety:

The infection seizes also on our memories; as Thucydides tells us of some persons who were infected in that great plague at Athens, that by reason of that sad distemper they forgot themselves, their friends and all their concernments [History, II.49]. Most certain it is that by the Spirituall infection men forget God and their duty. (8)

Not dissimilarly, the tailor-cum-preacher Richard Kingston paralleled the plague with sin. He characterizes both evils as “diffusive” (23–24), citing Thucydides to the effect that the plague began in Ethiopia and moved thence to Egypt and Greece (II.48).

On the supposition that, medically speaking, the Plague of Athens was the same disease they faced, early modern writers treated it as a practical precedent for prophylaxis, treatment, and public health measures. Thucydides was one of several classical authorities cited by the Italian theologian Filiberto Marchini to justify open-field burials, based on their testimony that wild animals shunned plague corpses (Calvi, 106). Rumors of plague-spreading also stoked interest in the History, because Thucydides records that the citizens of Piraeus believed the epidemic arose from the poisoning of wells (II.48; Carmichael, 149–50).

Peter Paul Rubens, Hippocrates (1638)

It should be noted that Thucydides was not the only source for early modern knowledge about the Plague of Athens. One William Kemp, extolling the preventative virtues of moderation, tells his readers that it was temperance that preserved Socrates during the disaster (58–59). This anecdote comes not from Thucydides but from Claudius Aelianus, who writes of the philosopher’s constitution and moderate habits that “[t]he Athenians suffered an epidemic; some died, others were close to death, while Socrates alone was not ill at all” (Varia historia, XIII.27, trans. N. G. Wilson). (Interestingly, 1665 saw the publication of a new translation of the Varia historia.) Elsewhere, Kemp relates how Hippocrates organized bonfires to free Athens of the disease (43), a story that originates with the pseudo-Galenic On Theriac to Piso but probably reached England via Latin intermediaries and/or William Bullein’s A Dialogue Against the Fever Pestilence (1564). Hippocrates’s name, and his supposed victory over the Plague of Athens, were used to advertise cures and preventatives.


With the exception of Sprat—whose poem was written in 1659—these are all fleeting references, but that is in some sense the point. The Plague of Athens, Thucydides, and his History had entered the English imaginary, a shared vocabulary for thinking about epidemic disease. To quote Raymond A. Anselment, Sprat’s poem (and other invocations of the Plague of Athens) “offered through the imitation of the past an idea of the present suffering” (19). In the desperate days of 1665–66, the mere mention of Thucydides’s name, regardless of the subject at hand, would have been enough to conjure the specter of the Athenian plague.

Whether or not one built a public health plan around “Hippocrates’s” example, or looked to the History of the Peloponnesian War as a guide to disease etiology, the Plague of Athens exerted an emotional and intellectual hold over early modern English writers and readers. In part, this was merely a sign of the times: sixteenth- and seventeenth-century Europeans were profoundly invested in the past as a mirror for and guide to the present and the future. In England, the Great Plague came at the height of a “rage for historical parallels” (Kewes, 25)—and no corner of history offered more distinguished parallels than classical antiquity.

And let us not undersell the affective power of such parallels. The value of recalling past plagues was the simple fact of their being past. Awful as the Plague of Athens had been, it had eventually passed, and Athens still stood. Looking backwards was a relief from a present dominated by the epidemic, and from the plague’s warped temporality: the interruption of civic and liturgical rhythms and the ordinary cycle of life and death. Where “an epidemic denies time itself” (Calvi, 129–30), history restores it, and offers something like orientation—even, dare we say, hope.