Author: spencejw

Personal Memory and Historical Research

By Contributing Editor Pranav Kumar Jain


Eric Hobsbawm, Interesting Times (2002)

During a particularly bleak week in the winter of 2013, I picked up a copy of Eric Hobsbawm’s modestly titled autobiography Interesting Times: A Twentieth-Century Life (2002), perhaps under the (largely correct) impression that the sheer energy and power of Hobsbawm’s prose would provide a stark antidote to the dullness of a Chicago winter. I had first encountered Hobsbawm the year before when he had died a day before my first history course in college. The sadness of the news hung heavy on the initial course meeting and I was curious to find out more about the historian who had left such a deep impression on my professor and several classmates. Over the course of the next year or so, I had read through several of his most important works, and ending with his autobiography seemed like a logical way of contextualizing his long life and rich corpus.

Needless to say, Interesting Times was an absolutely riveting read. Hobsbawm’s attempt to bring his unparalleled observational skills and analytical shrewdness to his own work and career revealed a life full of great adventures and strong convictions. Yet throughout the book, apart from marveling at his encounters with figures like the gospel singer and civil rights activist Mahalia Jackson, I was most struck by what can best be described as the intersection of historical techniques and personal memory. Though much of the narrative is written from his prodigious memory, Hobsbawm regularly references his own diary, especially when discussing his days as a Jewish teenager in early 1930s Berlin and then as a communist student in Cambridge. In one instance, it allows his later self to understand why he didn’t mingle with his schoolmates in mid-1930s London (his diary indicates that he considered himself intellectually superior to the whole lot). In another, it helps him chart, at least in his view, the beginnings of peculiarly British Marxist historical interpretations. Either way, I was fascinated by his readings of what counts as a primary source written by himself. He naturally brought the historian’s skepticism to this unique primary source, repeatedly questioning his own memory against the version of events described in the diary and vice versa. This intermixing of personal memory with the historian’s interrogation of primary sources has long stayed with me, and I have repeatedly sought out similar examples since then.

In recent years, there has been a remarkable flowering of memoirs or autobiographies written by historians. Amongst others, Carlos Eire and Sir J. H. Elliott’s memoirs stand out. Eire’s unexpectedly hilarious but ultimately depressing tale of his childhood in Cuba is a moving attempt to recover the happy memories long buried by the upheavals of the Cuban Revolution. In a different vein, Elliott ably dissects the origins of his interests in Spanish history and a Protestant Englishman’s experiences in the Catholic south. The intermingling of past and present is a constant theme. Elliott, for example, was once amazed to hear the response of a Barcelona traffic policeman when he asked him for directions in Catalan instead of Castilian. “Speak the language of the empire [Hable la lengua del imperio],” said the policeman, which was the exact phrase that Elliott had read in a pamphlet from the 1630s that attacked Catalans for not speaking the “language of the empire.” As Elliott puts it, “it seemed as though, in spite of the passage of three centuries, time had stood still” (25). (There are also three memoirs by Sheila Fitzpatrick and one by Hanna Holborn Gray, none of which, regrettably, I have yet had a chance to read.)


Mark Mazower, What You Did Not Tell (2017)

Yet, while Eire and Elliott’s memoirs are notably rich in a number of ways, they have little to offer in terms of the Hobsbawm-like connection between historical examination and personal memory that had started me on the quest in the first place. However, What You Did Not Tell (2017), Mark Mazower’s recent account of his family’s life in Tsarist Russia, the Soviet Union, Nazi Germany, France, and the tranquil suburbs of London, provides a wonderful example of the intriguing nexus between historical research and personal memory.

In some ways, it is quite natural that I have come to see affinities between Hobsbawm’s autobiography and Mazower’s memoir. Both are stories of an exodus from persecution in Central and Eastern Europe for the relative safety and stability of London. But the surface-level similarities perhaps stop there. While Hobsbawm, of course, is writing mostly about himself, Mazower is keen to tell the remarkable story of his grandfather’s transformation from a revolutionary Bundist leader in the early twentieth century to a somewhat conservative businessman in London (though, as he learned in the course of his research, the earlier revolutionary connections did not fade away easily, and his grandparents’ household was always a welcome refuge for activists and revolutionaries from across the world). However, on a deeper level, the similarities persist. For one thing, the attempt to measure personal memories against a historical record of some sort is what drives most of Mazower’s inquiries in the memoir.

The memories at work in Mazower’s account are of two kinds. The first, mostly about his grandfather whom he never met (Max Mazower died six years before his grandson Mark was born), are inherited from others and largely concern silences—hence the title What You Did Not Tell. Though Max Mazower was a revolutionary pamphleteer, amongst other things, in the Russian Empire, he kept quiet about his radical past during his later years. His grandfather’s silence appears to have perturbed Mazower, and this plays a central role in his bid to dig deeper in archives across Europe to uncover traces of his grandfather’s extraordinary life. The other kind of memories, largely about his father, are more personal and urge Mazower to understand how his father came to be the gentle, practical, and affectionate man that Mazower remembered him to be. Naturally, in the course of phoning old acquaintances, acquiring information through historian friends with access to British Intelligence archives, and poring over old family documents such as diaries and letters, Mazower’s memories have both been confirmed and challenged.


Mark Mazower

In the case of his grandfather, while Mazower is able to solve quite a few puzzles through expert archival work and informed guessing, there are some that continue to evade satisfactory resolution. Perhaps the thorniest amongst these is the parentage of his father’s half-brother André. Though most relatives knew that André had been Max’s son from a previous relationship with a fellow revolutionary named Sofia Krylenko, André himself came to doubt his paternity later in life, a fact that much disturbed Mazower’s father, who saw André’s doubts as a repudiation of their father and everything he stood for. Mazower’s own research into André’s paternity through naturalization papers and birth certificates appears to have both further confused and enlightened him. While he concludes that André’s doubts were most likely unfounded, a tinge of unresolved tension about the matter runs through the pages.

With his father, Mazower is naturally more certain of things. Yet, as he writes towards the beginning of the memoir, after his father’s death he realized that there was much about his life that he did not know. In most cases, he was pleasantly surprised with his discoveries. For instance, he seems to take satisfaction in the fact that, in his younger years, his father had a more competitive streak than he had previously assumed. But reconstructing the full web of his father’s friendships proved to be quite challenging. At one point, he called a local English police station from Manhattan to ask if they could check on a former acquaintance of his father whose phone had been busy for a few days. After listening to him sympathetically, the duty sergeant told him that this was not reason enough for the police to go knocking on someone’s door. Only later did he learn that he was unable to reach the person in question because she had been living in a nursing home and had died around the time that he had first tried to get in touch.

The Pandora’s Box opened by my reading of Hobsbawm’s autobiography is far from shut. It has led me from one memoir to another and each has presented a distinct dimension of the question of how historical research intersects with personal memories. In Hobsbawm’s case, there was the somewhat peculiar case of a historian using a primary source written by himself. Mazower’s multi-layered account, of course, moves across multiple types of memories interweaving straightforward archival research with personal impressions.

While these different examples hamper any attempt at offering a grand theory of personal memory and historical research, they do suggest an intriguing possibility. The now not so incipient field of memory studies has spread its wings from memories of the Reformation in seventeenth and eighteenth-century England to testimonies of Nazi and Soviet soldiers who fought at the Battle of Stalingrad. Perhaps it is now time to bring historians themselves under its scrutinizing gaze.

Pranav Kumar Jain is a doctoral student at Yale where his research focuses on religion and politics in early modern England.

Geneva’s Calvin

By Editor Spencer J. Weinreich

How the mightily Protestant have fallen. Almost five hundred years after Geneva deposed its (absentee) bishop and declared for the Reformation, there are nearly three Catholics and two agnostics/atheists for every Protestant Genevan. This, the city of John Calvin, acclaimed by his Scottish follower John Knox as “the most perfect school of Christ that ever was in the earth since the days of the apostles” (qtd. in Reid, 15), revered (and reviled) as the “Protestant Rome.”

Of course, a lot changes in five centuries, as well it should. No longer a fief of the House of Savoy or a satellite dependent on the military might of Bern, Geneva has become the epitome of a global city, home to more international organizations than any other place on the planet. The rich diversity of twenty-first-century Geneva is a transformation undreamt-of in the days of Calvin—and one reflected in the astonishing diversity of the reformed tradition in contemporary Christianity, whose adherents are more likely to hail from Nigeria and Indonesia, Madagascar and Mexico, than from Geneva or Lausanne.

In a sense, I came to Geneva looking for John Calvin, as a student of the Reformation and more particularly of the city Calvin remolded in his three decades as the spiritual leader of Geneva. I came to participate in a summer course offered by the Université de Genève’s Institut d’histoire de la Réformation, whose very existence owes much to the special relationship between this city and the religious transformations of the sixteenth century. I came, too, to immerse myself in Geneva’s exceptional archives—principally the Archives d’Etat de Genève (the cantonal archives) and the Bibliothèque de Genève (a public research library operated by the city government)—to understand how Calvin and the structures he created maintained the vision and the day-to-day realities of a godly city.

So my eyes were peeled for the footprints of the reformer, far more than the average visitor to this beautiful city at the far western edge of Lake Léman. And as much as the intervening years have changed the city, I did not need to look too far. For Calvin remains the most iconic (how he would have hated to be called iconic!) figure of whom Geneva can boast (though he was born some four hundred miles to the north and west, in Noyon). One of the city’s most famous tourist attractions is the International Monument to the Reformation, usually known as the Reformation Wall, a massive relief that spans one side of the Parc des Bastions on the grounds of the Université de Genève. Erected in 1909—the quatercentenary of Calvin’s birth and the 350th anniversary of the university’s foundation—the centerpiece of the memorial is a larger-than-life sculpture of Calvin flanked by three of his associates: Guillaume Farel (the French reformer who convinced Calvin to stay in Geneva), Theodore Beza (Calvin’s protégé and successor as the leader of the Genevan church), and Knox. The Reformation Wall offers a curious vision of Calvin: the gaunt, dour likeness of the reformer, in Bruce Gordon’s felicitous phrase, “casts him to look like some forgotten figure of Middle Earth” (147).


The central relief of the Reformation Wall

The Reformation Wall is the most prominent monument to Calvin in Geneva. There is a memorial in the Cimitière des Rois at a grave long thought to be his, but the true location of his remains has been unknown since his death in 1564. There is the Auditoire de Calvin, a chapel next to the cathedral where the great man taught Scripture every morning. There is the Rue Jean-Calvin, running through the very heart of the Old Town. And, in a more diffuse fashion, there is Geneva’s abiding relationship with the Reformation: the Musée International de la Réforme, the Ecumenical Centre housing groups like the World Council of Churches, and the reformed services that take place each Sunday across the city.

Yet what has struck me in the past month has been the extent to which Calvin’s presence in Geneva slips the bounds of Reformation Studies, the early modern period, and even the persona of the reformer himself. Thus a restaurant in the Eaux-Vives neighborhood, whose logo traces out the features of its namesake. A learned friend mooted the possibility—which I suspect has not been actualized—of a restaurant run according to Calvinist theology: “One’s choice of dish is not conditional on how good the dish actually is.” Thus, too, Calvinus, a popular local brand of lager. (The man himself was certainly fond of good wine, at least [Gordon, 147].)


Spotted in the gift shop of the Musée International de la Réforme

But perhaps my favorite sighting of Calvin came in the gift shop of the Musée International de la Réforme. In the children’s section, in pride of place among the illustrated biographies and primers on the world’s religions, were several volumes of that sublime theological exploration, Calvin and Hobbes.

It is worth noting that Bill Watterson, the peerless thinker behind said magnum opus, chose the names of his protagonists as deliberate nods to the early modern thinkers (1995, 21–22). And in their turn scholars have taken Watterson’s pairing as a jumping-off point for analyzing early modern thought: next to one of the albums of Calvin and Hobbes in the gift shop—rather incongruous in the children’s section—was Pierre-François Moreau, Olivier Abel, and Dominique Weber’s Jean Calvin et Thomas Hobbes: Naissance de la modernité politique (Labor et Fides, 2013), one of several scholarly works to juxtapose the authors of the Institutes and Leviathan. Charmingly, the influence occasionally flows in the other direction, as another friend flagged with the delightful art of Nina Matsumoto.


Nina Matsumoto, John Calvin and Thomas Hobbes, used by kind permission of the artist.

Calvin is by no means unique in having his image and persona coopted by the new devotions of consumerism and mass media. Nabil Matar ended his keynote lecture, “The Protestant Reformation in Arabic Sources, 1517–1798,” at this year’s Renaissance Society of America meeting with the use of Luther’s likeness to advertise cold-cuts. Think of Caesar’s salads, King Arthur’s flour, Samuel Adams’s beer.


Calvinus

I hasten to say that this post should not be taken as a lament for the (mythical) theological and intellectual rigor of yesteryear. I may not be thrilled that many Genevans will know Calvin first and foremost as the face on a bottle of lager, but nor would I particularly welcome a reinstatement of the kind of overwhelming public religiosity the man himself enforced on this city. Things change. Calvin’s Geneva is long gone, for better and for worse, and as a historian it is no bad thing that I can—must—look at it from without.

More to the point, the demise of Calvin the theologian is easy to exaggerate. The very fact that Calvin is used to sell beer and to brand restaurants indicates the enduring currency of his cultural profile. Furthermore, countless visitors to the Reformation Wall, to the Musée International de la Réforme, and to Geneva’s historic churches are devout members of one branch or other of the Calvinist tradition, coming to pay their respects to, and to learn something about, the place where their faith took shape. For millions of Christians across the world, John Calvin remains a towering spiritual presence, the forceful and penetrating thinker whose efforts even now structure their beliefs and practices. God isn’t quite dead, certainly not in “the most perfect school of Christ.”

The Cold War Counter-Enlightenment

By guest contributor Jonathon Catlin

Nicolas Guilhot (Centre National de la Recherche Scientifique) spoke on his new book, After the Enlightenment: Political Realism and International Relations in the Mid-Twentieth Century (Cambridge, 2017) at the New York University Intellectual History Workshop on May 16, 2018. He was introduced by Stefanos Geroulanos (NYU), while Gisèle Sapiro (École des Hautes Études en Sciences Sociales) and Hugo Drochon (Cambridge) provided responses. An audio recording of the discussion is available at the bottom of this post.

 

After the Enlightenment is a collection of six essays that have been reworked to tell an intellectual history of realist political thought in twentieth-century America. It tracks a gradual displacement within American political science and foreign policy in the mid-twentieth century: the triumph of “political realism” and the fledgling discipline it took hold of, International Relations (IR). Initially premised on the contingency of power and decision, the field ultimately became wedded to “rational choice,” a “new basis on which political decisions could be taken without democratic mandate” because they promised potentially “unanimous consent” (24). Guilhot convincingly argues that even as the rationalized field of IR moved toward systems theory and cybernetics, it never fully abandoned its roots in Christian values and aristocratic traditions of decision-making and leadership. Realism earned the backing of powerful institutions like the Rockefeller Foundation—the “midwife” of IR—and turned out to be one of the most enduring offspring of the rationalistic social sciences’ heyday in the early Cold War era (42).

 

Guilhot opens with pressing stakes for why we should care about political realism’s enduring legacy:

We are still…capable of great uprisings against a recognized threat or danger. But we are so confused in our thoughts as to which positive goals should guide our action that a general fear of what will happen after the merely negative task of defense against danger has been performed paralyzes our planning and thinking in terms of political ideas and ideals. (1)


John Herz

These words were written in 1951 by John Herz, a German-Jewish refugee scholar, yet they “nonetheless resonate uncannily with our present situation.” After 9/11, Guilhot writes, “we too have become engulfed by our own concern with security and confused about the more general meaning and purpose of politics.” In the wake of that catastrophe, “security has become the universal framework of political thinking and the primary deliverable of any policy, foreign or domestic, often overriding well-established constitutional rights and provisions.” Yet the pursuit of this narrow goal ultimately displaces normative political theory, the construction of positive ideals, and the pursuit of a more just world. Realism thus amounts to a form of anti-politics.

One hardly needs to look far for instances in which “facing and confronting ‘a recognized threat or danger’ has become the essence of government as well as a new source of legitimacy” (2). Guilhot’s native and adoptive countries, France and the United States, are home to two of the most egregious biopolitical defense and surveillance regimes today. In such states, “references to a permanent state of exception now sound like academic platitudes glossing over the obvious.” When “the notion of security has expanded to become the all-encompassing horizon of human experience,” he writes, “security itself has become an ideal—maybe the only ideal left.”


Nicolas Guilhot (CNRS)

Guilhot’s work exemplifies a new wave of intellectual history bringing together political theory, policy, and institutional history of the Cold War. This includes work by Daniel Bessner, with whom Guilhot co-edited The Decisionist Imagination: Sovereignty, Social Science, and Democracy in the Twentieth Century (Berghahn, forthcoming 2018). Bessner is also author of Democracy in Exile: Hans Speier and the Rise of the Defense Intellectual (Cornell, 2018) and a forthcoming history of the RAND corporation (Princeton). It also includes the work on the “militant democracy” of Karl Loewenstein by Jan-Werner Müller and Udi Greenberg, with whom Bessner authored a popular critique of “the Weimar analogy” in Jacobin in the wake of the 2016 election of Donald Trump.

After the Enlightenment also engages present debates on the origins of neoliberalism such as Quinn Slobodian’s Globalists (Harvard, 2018). These works track how domains of social life in the mid-twentieth century, from international relations to market economics, fell under new regimes of scientific and technological management that shielded them from democratic contestation—and hence from politics itself, according to an ancient line of political theory equating politics with deliberative rationality and public speech running from Aristotle, to Arendt, to Habermas. As a recent review of Globalists aptly characterized one of its key insights: “the neoliberal program was not simply a move in the distributional fight, but rather about establishing a social order in which distribution was not a political question at all. For money and markets to be the central organizing principle of society, they have to appear natural—beyond the reach of politics.”


Carl Schmitt

The anti-democratic thrust of mid-century realism stems from its foundational premise, what Herz called the “security dilemma”: the ever-present possibility of conflict as “a basic fact of human life” (3). Herz was nearly unique in trying to resist the most cynical and conservative implications of this premise; he strove “to strike a balance between the grim necessities of power and the striving for ideals,” alternately calling his project “liberal realism” or “realist liberalism.” Yet like many other realists, Herz ultimately capitulated to conservatism, abandoning liberalism, socialism, and internationalism. He was a student of Hans Kelsen, the Viennese legal positivist and author of the interwar Austrian constitution. Both were assimilated Jews who fled Europe for the United States after 1938 and found homes in American universities and policy circles. In her response, Gisèle Sapiro rightly pressed Guilhot to reflect on the significance of the experiences of exile and Judaism—even in secular, assimilated forms—for his thinkers’ realism. Herz gradually drifted away from Kelsen towards his arch-rival, Carl Schmitt, identifying his realist liberalism with Schmittian decisionism. For Guilhot, the failure of Herz’s liberal project is instructive: “It suggests that realism places limits upon the kind of political goals that one can pursue and indeed makes it difficult if not impossible to pursue positive or transformative goals” (4).

In order to retain the appearance of a politics of ideals, realism rewrote the history of political thought, appropriating the “glorious lineage” of Thucydides, Machiavelli, and Hobbes as realism’s forefathers. By linking a new politics of decision to the tradition of political “republicanism,” realists came to develop a school of thought that could justify “dictatorial measures in the defense of freedom” (26). Political realism thus conflated two distinct forms of realism in order to establish its “historical legitimacy.” First, the ethical realism of Machiavelli, which does not “imply a pessimistic anthropology or a regressive social ontology,” but simply proposes “prudential conduct” that is “naturalistic, pragmatic, and concrete.” The cunning of political realism in the mid-twentieth century was to wed this practical wisdom to the needs of Cold Warrior ideology. The hybrid that resulted is by definition a “conservative realism” insofar as it “stifles the capacity to elaborate any political project beyond the maintenance of order.” Realism was a direct reaction to the utopian aims of the Atlantic revolutions and the rise of mass democracy. It was its era’s most influential representative of the Counter-Enlightenment.

Guilhot’s critique of realism targets not only its vision of global power, but especially the ways it perpetuates an “exhaustion of alternatives” (5). He thus remains deeply skeptical of those today turning back to early realism “as a potentially progressive intellectual project.” Realism is considered one of the only “grand narratives” still standing, even by some on the Left. Notably, it has been invoked to critique America’s slip from “soft-power” “democracy promotion” in the 1980s into costly militarized intervention under recent administrations. Realism has also been hailed as one of the last genuinely “political” responses to neoliberal globalism that can still be voiced in policy circles. Yet Guilhot reveals that the progressive attempt to reclaim realism today “to oppose neoliberal depoliticization fundamentally misunderstands realism and ignores how much it has in common with neoliberalism” (6). Already in his 1955 essay “The Political Thought of Neo-Liberalism,” Carl J. Friedrich, a German refugee, argued that neoliberalism was nearly indistinguishable from realism; Guilhot calls them “twin ideological movements born in the crisis of the 1930s that reacted to the crisis of liberalism and to the rise of totalitarianism.” Both were essentially defensive movements, sharing a neo-Burkean anthropology. Both thought liberalism could only be saved by illiberal means and saw themselves as building a “concretely managed order” that sought to “insulate from democracy core domains of decision-making, including foreign and economic policy, and to entrust them to a select elite of expert decision-makers” (7). Like Bessner, Guilhot argues that “decisionism” had appeal across the political spectrum and was hardly evidence of Schmitt lurking behind every realist thinker.
One of the early realists’ most influential ideas was their conception of politics as an art, not a science; in genuinely political circumstances, there are no rational answers, only force and the wisdom of experience and leadership required to execute it. Yet their belief in the irrationality of public opinion led them to a new God, rational choice, “to legitimate economic and political decisions.”


Hans Morgenthau

Hans Morgenthau is often considered the father of realism. He was a German Jew forced from Europe by Nazism who ended up as a professor of political science at the University of Chicago. He was also, among the realists, one of the most explicit critics of the dangers of mass democracy. For Morgenthau’s generation, “Even their analysis of totalitarianism was premised upon a critique of its democratic origins” (15). “Fascism,” he wrote in a 1966 review of Ernst Nolte’s Der Faschismus in seiner Epoche, “can be considered the consummation of the equalitarian and fraternal tenets of 1789.” As the Harvard political theorist Judith Shklar—yet another Jewish refugee—wrote of the realists, “rationalism sooner or later must and did lead to totalitarianism” (67). Yet Guilhot shows that realism’s success ultimately lies in its deviation from this initial opposition to rationalism and liberalism toward compromise with these leading values of its era.

While Morgenthau became the figurehead of IR, Guilhot shows that he shared much of his worldview with figures in very different fields, including Isaiah Berlin and the American theologian Reinhold Niebuhr (who went so far as to call Augustine the first realist). Together they forged “a powerful intellectual program that blended anti-liberal and Christian conservative elements”—especially a lapsarian Christian negative anthropology and suspicion of science—“with a rhetoric of the defense of liberalism” (15). As Hugo Drochon put it, for Guilhot’s realists there was a natural affinity between the Christians’ “we have only God” and the decisionists’ “we have only the nation state.” While Carl Schmitt actually reviewed Morgenthau’s first book, Drochon argued that the realists didn’t really need him; as the example of Niebuhr illustrates, religion could have grounded realism on its own. Extending realism’s Christian and conservative lineage back to earlier reappraisals of Machiavelli such as Friedrich Meinecke’s 1924 Die Idee der Staatsräson in der Neueren Geschichte, Drochon challenged Guilhot’s framing of realism as a postwar, Cold War phenomenon.

Considering American political culture bereft of the necessary moral resources to combat totalitarianism, the realists, many of them witnesses to the collapse of Weimar, argued that “liberalism, if left to its own devices, was incapable of ensuring its own survival.” Given similar anxieties today, Guilhot’s critical reassessment of mid-century realism could not be more timely. By reconstructing the rich beginnings of realist ideas still influential today, he reveals their latent commitments to be complicit with technocratic and unrepresentative forms of politics now under fire. Realism was once hailed as a scientifically unimpeachable solution to democratic crisis; Guilhot leads us to see it rather as partly responsible for our present crisis of democratic representation.

Jonathon Catlin is a Ph.D. student in the Department of History at Princeton University. His work focuses on intellectual responses to catastrophe, especially in German-Jewish thought and the Frankfurt School of critical theory.

 

Graduate Forum: Expanding Subjects, Race, and Global Contexts: Tisa Wenger’s Religious Freedom and Developments in the History of Religious Ideas

This is the first in a series of commentaries in our Graduate Forum on Pathways in Intellectual History, which will be running this summer. Our goal is to engage with the diversity of ways that graduate students are approaching the History of Ideas across academic sub-fields. After the final commentary, Professor Anthony Grafton will add his thoughts to the conversation.

This first piece is by guest contributor Andrew Klumpp.


Tisa Wenger, Religious Freedom: The Contested History of an American Ideal (Chapel Hill: University of North Carolina Press, 2017).

Kiram II

Evangelists, statesmen, and academics once dominated many histories of religious ideas, but in Tisa Wenger’s recent Religious Freedom: The Contested History of an American Ideal (UNC Press, 2017), they are eclipsed. Figures like the Moro Muslim sultan Kiram II, black nationalist Marcus Garvey, archbishop of the Philippine Independent Church Gregorio Aglipay, the founder of the Moorish Science Temple Noble Drew Ali, and the Native American Shaker prophet John Slocum take center stage. Spanning the decades from the Spanish-Cuban-Filipino-American War until WWII, this study weaves together diverse communities ranging from Filipino Muslims to Jewish immigrants, the Nation of Islam to practitioners of Native American traditions such as Peyote and the Ghost Dance. Although globetrotting in scope and diverse in its subjects, Religious Freedom’s persistent attention to how interpretations of religious freedom relied on the formative relationship between race, religion, and empire effectively ties the study together.


Tisa Wenger, We Have a Religion: The 1920s Pueblo Indian Dance Controversy and American Religious Freedom (Chapel Hill: University of North Carolina Press, 2009).

Much like Wenger’s first book, We Have a Religion (UNC Press, 2009), which investigated the formation and use of the category of religion among Pueblo Indians, Religious Freedom focuses on a fundamental concept, in this case religious freedom, and explores its formation and re-formation by those outside of the halls of power in the United States. In doing so, it offers a fresh perspective on central questions about religious liberty, nation-states, and global empire. It also provides a window into engaging trends in how historians of religion and ideas approach their work and frame their questions.

Forged at the nexus of race, religion, and empire, religious freedom, Wenger argues, was not always universally beneficent. Consequently, when various communities invoked the ideal, winners and losers emerged. She convincingly demonstrates that white American Protestants, in particular, deployed the ideal of religious freedom to assert their supremacy and highlight the deficiencies of others.

Native American Ghost Dance, ca. 1900

In order to further these broader arguments, Religious Freedom explores contexts ranging from Muslim settlements in the Philippines to Native American reservations in the Dakotas. For example, perceived racial inferiority meant that Christianity served as a key civilizing force in the Philippines after the U.S. conquest of the islands. Nevertheless, this otherness also kept Filipino Catholic priests under the thumb of a Western bishop. Racial otherness also cast suspicion on Native American religious practices. Missionaries and government officials nursed a longstanding skepticism about Native American religious practices including Peyote, the Ghost Dance, and the Native American Shaker tradition, even when these traditions exhibited some of the trappings of Christianity.

Wenger also deftly reveals how some groups leveraged their religious identities to minimize perceived racial otherness and inferiority. American Jews in both the Reform and Orthodox communities tended to lean into their religious identities and minimize their racial and ethnic distinctiveness. By framing their differences as religious rather than racial or ethnic, Jews, as well as Catholics, wielded the language of religious freedom to secure a spot standing shoulder to shoulder with white American Protestants.

Alice Fletcher speaking to Native Americans

Overall, Wenger’s book is remarkably successful. It explores an understudied chapter in the definition and deployment of religious freedom ideology and expands its study to a broad and engaging collection of individuals and communities. Nevertheless, Religious Freedom is, somewhat disappointingly, largely a story of men. Only a handful of women appear throughout the entire text. This is particularly unfortunate given the rich scholarly tradition of exploring the relationship between gender, race, and civilization; that vein of inquiry remains underexplored here. Some of this is due to the sources Wenger uses and the prominence of men in the military, government agencies, and religious leadership at the time. Nevertheless, a woman like Alice Fletcher—discussed as an example of the interplay between race, gender, and civilization in Louise Michele Newman’s White Women’s Rights (Oxford University Press, 1999)—was active in the Women’s National Indian Association, and women like Fletcher might provide valuable insight about this topic. In this way, Wenger’s work provides an excellent platform for future explorations of the role of women in discourses about religious freedom, race, and empire.

Alice Fletcher speaking with Native Americans

Archbishop Gregorio Aglipay

Taken on its own, Wenger’s work is excellent scholarship and a valuable contribution to the field, yet it also provides an enlightening lens through which to appreciate some of the trends in the history of ideas, particularly as it relates to those of us who study religious history in the United States. At the most fundamental level, Religious Freedom deftly balances many of the lessons learned from social and cultural history—notably its attention to minority communities and global contexts—while remaining decidedly a history of ideas. Wenger’s study looks beyond politicians, preachers, and government officials, and that significantly reorients her history. Men like William Howard Taft, the governor-general of the Philippines and future president, or W.E.B. Du Bois appear only briefly. The focus remains on the leaders and members of each of the religious communities she explores.

Marcus Garvey

In doing so, Wenger offers her take on whose ideas contribute to intellectual history. Is the history of ideas primarily about those who wrote books, set policy, and preached sermons, or is it broader than that? Wenger unequivocally answers with the latter definition. To be sure, she does not discount men like Theodore Roosevelt or William James, yet her attention to Jewish immigrants and colonized Filipinos (both Catholic and Muslim), as well as Native Americans and African-American religious movements, tells a more complicated and robust story. Though government agents and prominent religious thinkers engaged with the ideal of religious freedom, this approach to the history of ideas does not privilege their voices. Wenger illumines an expansion in the subjects studied by those of us who work in the history of ideas by intentionally integrating previously overlooked voices.

The centrality of race and racial hierarchies to the history of religion, both in the United States and in a global context, is another significant historiographical theme that Wenger continues to develop. She joins a bevy of excellent scholarship that has emerged at this intersection of religion and race, such as J. Spencer Fluhman’s “A Peculiar People:” Anti-Mormonism and the Making of Religion in Nineteenth-Century America, Rebecca Goetz’s The Baptism of Early Virginia: How Christianity Created Race, and Max Perry Mueller’s Race and the Making of the Mormon People. The history of religious ideas in the United States cannot be divorced from conversations about race. Wenger joins a growing chorus in making that point. As she argues in her book, race must be considered as a constitutive part of the development of religious ideals, not simply another factor to consider.

The consistent theme of the American empire also looms over this study. It forms a key part of Wenger’s argument and gestures toward another development in the history of religious ideas. Global contexts increasingly appear in the history of U.S. religion and religious ideas, even when focused on an “American ideal,” as Wenger’s title suggests. This study does just that, but it is not alone. For example, Cara Burnidge’s A Peaceful Conquest: Woodrow Wilson, Religion, and the New World Order or David Hollinger’s recent Protestants Abroad: How Missionaries Tried to Change the World but Changed America are just two examples of works that touch on similar themes of empire, American religion, and global contexts. Religious Freedom not only emphasizes this growing theme but also furthers the argument that a global lens is necessary for understanding the history of religious ideas in the United States.

A Midwestern sod house

The themes that Wenger highlights—diverse voices, race, and global contexts—also resonate in the prairies and farmsteads that populate my own work as a historian of religion in rural America during the Gilded Age and Progressive Era. In particular, her contention that global forces shaped far-flung communities rings true of the rural communities that I study. Increasingly in my own work, I find that the tiny hamlets strewn across the Midwestern plains engaged with global forces and positioned themselves to benefit from them. Markets and commodities form the best-known examples of rural America’s place in a global network, yet I find that ideas flowed just as freely as crops, livestock, and other essential goods. For example, when rural communities rejected or embraced religious movements like the Social Gospel or Pentecostalism, they often drew upon their own self-conceptions of how their community fit into regional, national, and international contexts. Relatedly, these same networks allowed relatively homogenous rural communities to take part in discourses about race, revealing yet another central theme of this work that appears in my own. The international network that shaped, exchanged, and refined ideals of religious freedom did not pass over rural America.

A Dutch advertisement encouraging immigrants to settle in the rural Midwest

Religious Freedom is engaging, rich, and valuable scholarship on its own, but when placed within the field, it is also an instructive guide to themes that are shaping the scholarship about the history of religious ideas in the United States. It answers the question of whose ideas can become the subject of intellectual history by casting a wider net, in a way that resonates with my own efforts to introduce rural voices into the scholarly conversation. Ultimately, this excellent piece engages readers, integrates compelling new voices, and inspires others to do the same.

Andrew Klumpp is a Ph.D. candidate in American Religious History at Southern Methodist University in Dallas, TX. He holds degrees from Northwestern College in Orange City, Iowa and Duke University in Durham, North Carolina. His research explores how rural Midwestern communities engaged in nineteenth-century debates about religious liberty, racial strife and social reform.

A Pandemic of Bloodflower’s Melancholia: Musings on Personalized Diseases

By Editor Spencer J Weinreich

Peter Bloodflower? (actually Samuel Palmer, Self Portrait [1825])

I hasten to assure the reader that Bloodflower’s Melancholia is not contagious. It is not fatal. It is not, in fact, real. It is the creation of British novelist Tamar Yellin, her contribution to The Thackery T. Lambshead Pocket Guide to Eccentric & Discredited Diseases, a brilliant and madcap medical fantasia featuring pathologies dreamed up by the likes of Neil Gaiman, Michael Moorcock, and Alan Moore. Yellin’s entry explains that “The first and, in the opinion of some authorities, the only true case of Bloodflower’s Melancholia appeared in Worcestershire, England, in the summer of 1813” (6). Eighteen-year-old Peter Bloodflower was stricken by depression, combined with an extreme hunger for ink and paper. The malady abated in time and young Bloodflower survived, becoming a friend and occasional muse to Shelley and Keats. Yellin then reviews the debate about the condition among the fictitious experts who populate the Guide: some claim that the Melancholia is hereditary and has plagued all successive generations of the Bloodflower line.

There are, however, those who dispute the existence of Bloodflower’s Melancholia in its hereditary form. Randolph Johnson is unequivocal on the subject. ‘There is no such thing as Bloodflower’s Melancholia,’ he writes in Confessions of a Disease Fiend. ‘All cases subsequent to the original are in dispute, and even where records are complete, there is no conclusive proof of heredity. If anything we have here a case of inherited suggestibility. In my view, these cannot be regarded as cases of Bloodflower’s Melancholia, but more properly as Bloodflower’s Melancholia by Proxy.’

If Johnson’s conclusions are correct, we must regard Peter Bloodflower as the sole true sufferer from this distressing condition, a lonely status that possesses its own melancholy aptness. (7)

One is reminded of the grim joke, “The doctor says to the patient, ‘Well, the good news is, we’re going to name a disease after you.’”

Master Bloodflower is not alone in being alone. The rarest disease known to medical science is ribose-5-phosphate isomerase deficiency, of which only one sufferer has ever been identified. Not much commoner is Fields’ Disease, a mysterious neuromuscular disease with only two observed cases, the Welsh twins Catherine and Kirstie Fields.

Less literally, Bloodflower’s Melancholia, RPI-deficiency, and Fields’ Disease find a curious conceptual parallel in contemporary medical science—or at least the marketing of contemporary medical science: personalized medicine and, increasingly, personalized diseases. Witness a recent commercial for a cancer center, in which the viewer is told, “we give you state-of-the-art treatment that’s very specific to your cancer.” “The radiation dose you receive is your dose, sculpted to the shape of your cancer.”

Put the phrase “treatment as unique as you are” into a search engine, and a host of providers and products appear, from rehab facilities to procedures for Benign Prostatic Hyperplasia, from fertility centers in Nevada to orthodontist practices in Florida.

The appeal of such advertisements is not difficult to understand. Capitalism thrives on the (mass-)production of uniqueness. The commodity becomes the means of fashioning a modern “self,” what the poet Kate Tempest describes as “The joy of being who we are / by virtue of the clothes we buy” (94). Think, too, of the “curated”—as though carefully and personally selected just for you—content online advertisers supply. It goes without saying that we want this in healthcare, to feel that the doctor is tailoring their questions, procedures, and prescriptions to our individual case.

And yet, though we can and should see the market mechanisms at work beneath “treatment as unique as you are,” the line encapsulates a very real medical-scientific phenomenon. In 1998, for example, Genentech and UCLA released Trastuzumab, an antibody extremely effective against (only) those breast cancers linked to the overproduction of the protein HER2 (roughly one-fifth of all cases). More ambitiously, biologist Ross Cagan proposes to use a massive population of genetically engineered fruit flies, keyed to the makeup of a patient’s tumor, to identify potential cocktails among thousands of drugs.

Personalized medicine does not depend on the wonders of twenty-first-century technology: it is as old as medicine itself. Ancient Greek physiology posited that the body was made up of four humors—blood, phlegm, yellow bile, and black bile—and that each person combined the four in a unique proportion. In consequence, treatment, be it medicine, diet, exercise, physical therapies, or surgery, had to be calibrated to the patient’s particular humoral makeup. Here, again, personalization is not an illusion: professionals were customizing care, using the best medical knowledge available.

Medicine is a human activity, and thus subject to the variability of human conditions and interactions. This may be uncontroversial: even when the diagnoses are identical, a doctor justifiably handles a forty-year-old patient differently from a ninety-year-old one. Even a mild infection may be lethal to an immunocompromised body. But there is also the long and shameful history of disparities in medical treatment among races, ethnicities, genders, and sexual identities—to say nothing of the “health gaps” between rich and poor societies and rich and poor patients. For years, AIDS was a “gay disease” or confined to communities of color, while cancer only slowly “crossed the color line” in the twentieth century, as a stubborn association with whiteness fell away. Women and minorities are chronically under-medicated for pain. If medication is inaccessible or unaffordable, a “curable” condition—from tuberculosis (nearly two million deaths per year) to bubonic plague (roughly 120 deaths per year)—is anything but.

Let us think with Bloodflower’s Melancholia, and with RPI-deficiency and Fields’ Disease. Or, let us take seriously the less-outré individualities that constitute modern medicine. What does that mean for our definition of disease? Are there (at least) as many pneumonias as there have ever been patients with pneumonia? The question need not detain medical practitioners too long—I suspect they have more pressing concerns. But for the historian, the literary scholar, and indeed the ordinary denizen of a world full to bursting with microbes, bodies, and symptoms, there is something to be gained in probing what we talk about when we talk about a “disease.”

Colonies of M. tuberculosis

The question may be put spatially: where is disease? Properly schooled in the germ theory of disease, we instinctively look to the relevant pathogens—the bacterium Mycobacterium tuberculosis as the avatar of tuberculosis, the human immunodeficiency virus as that of AIDS. These microscopic agents often become actors in historical narratives. To take one eloquent example, Diarmaid MacCulloch writes, “It is still not certain whether the arrival of syphilis represented a sudden wanderlust in an ancient European spirochete […]” (95). The price of evoking this historical power is anachronism, given that sixteenth-century medicine knew nothing of spirochetes. The physician may conclude from the mummified remains of Ramses II that it was M. tuberculosis (discovered in 1882), and thus tuberculosis (clinically described in 1819), that killed the pharaoh, but it is difficult to know what to do with that statement. Bruno Latour calls it “an anachronism of the same caliber as if we had diagnosed his death as having been caused by a Marxist upheaval, or a machine gun, or a Wall Street crash” (248).

The other intuitive place to look for disease is the body of the patient. We see chicken pox in the red blisters that form on the skin; we feel the flu in fevers, aches, coughs, shakes. But here, too, analytical dangers lurk: many conditions are asymptomatic for long periods of time (cholera, HIV/AIDS), while others’ most prominent symptoms are only incidental to their primary effects (the characteristic skin tone of Yellow Fever is the result of the virus damaging the liver). Conversely, Hansen’s Disease (leprosy) can present in a “tuberculoid” form that does not cause the stereotypical dramatic transformations. Ultimately, diseases are defined through a constellation of possible symptoms, any number of which may or may not be present in a given case. As Susan Sontag writes, “no one has everything that AIDS could be” (106); in a more whimsical vein, no two people with chicken pox will have the same pattern of blisters. And so we return to the individuality of disease. So is disease no more than a cultural construction, a convenient umbrella-term for the countless micro-conditions that show sufficient similarities to warrant amalgamation? Possibly. But the fact that no patient has “everything that AIDS could be” does not vitiate the importance of describing these possibilities, nor their value in defining “AIDS.”

This is not to deny medical realities: DNA analysis demonstrates, for example, that the Mycobacterium leprae preserved in a medieval skeleton found in the Orkney Islands is genetically identical to modern specimens of the pathogen (Taylor et al.). But these mental constructs are not so far from how most of us deal with most diseases, most of the time. Like “plague,” at once a biological phenomenon and a cultural product (a rhetoric, a trope, a perception), Ebola or SARS remain for most of us caricatures of diseases, terrifying specters whose clinical realities are hazy and remote. More quotidian conditions—influenza, chicken pox, athlete’s foot—present as individual cases, whether our own or those around us, analogized to the generic condition by memory and common knowledge (and, nowadays, internet searches).

Perhaps what Bloodflower’s Melancholia—or, if you prefer, Bloodflower’s Melancholia by Proxy—offers is an uneasy middle ground between the scientific, the cultural, and the conceptual. Between the nebulous idea of “plague,” the social problem of a plague, and the biological entity Yersinia pestis stands the individual person and the individual body, possibly infected with the pathogen, possibly to be identified with other sick bodies around her, but, first and last, a unique entity.

Newark Bay, South Ronaldsay

Consider the aforementioned skeleton of a teenage male, found when erosion revealed a Norse Christian cemetery at Newark Bay on South Ronaldsay (one of the Orkney Islands). Radiocarbon dating can place the burial somewhere between 1218 and 1370, and DNA analysis demonstrates the presence of M. leprae. The team that found this genetic signature was primarily concerned with the scientific techniques used, the hypothetical evolution of the bacterium over time, and the burial practices associated with leprosy.

But this particular body produces its particular knowledge. To judge from the remains, “the disease is of long standing and must have been contracted in early childhood” (Taylor et al., 1136). The skeleton, especially the skull, indicates the damage done in a medical sense (“The bone has been destroyed…”), but also in the changes wrought to his appearance (“the profile has been greatly reduced”). A sizable lesion has penetrated through the hard palate all the way into the nasal cavity, possibly affecting breathing, speaking, and eating. This would also have been an omnipresent reminder of his illness, as would the several teeth he had probably lost (1135).

What if we went further? How might the relatively temperate, wet climate of the Orkneys have impacted this young man’s condition? What treatments were available for leprosy in the remote maritime communities of the medieval North Sea—and how would they interact with the symptoms caused by M. leprae? Social and cultural history could offer a sense of how these communities viewed leprosy; clinical understandings of Hansen’s Disease some idea of his physical sensations (pain—of what kind and duration? numbness? fatigue?). A forensic artist, with the assistance of contemporary symptomatology, might even conjure a semblance of the face and body our subject presented to the world. Of course, much of this would be conjecture, speculation, imagination—risks, in other words, but risks perhaps worth taking to restore a few tentative glimpses of the unique world of this young man, who, no less than Peter Bloodflower, was sick with an illness all his own.

Reading Saint Augustine in Toledo

By Editor Spencer J. Weinreich

Antonio Rodríguez, Saint Augustine

In his magisterial history of the Reformation, Diarmaid MacCulloch wrote, “from one perspective, a century or more of turmoil in the Western Church from 1517 was a debate in the mind of long-dead Augustine.” MacCulloch riffs on B. B. Warfield’s pronouncement that “[t]he Reformation, inwardly considered, was just the ultimate triumph of Augustine’s doctrine of grace over Augustine’s doctrine of the Church” (111). There can be no denying the centrality to the Reformation of Thagaste’s most famous son. But Warfield’s “triumph” is only half the story—forgivably so, from the last of the great Princeton theologians. Catholics, too, laid claim to Augustine’s mantle. Not least among them was a Toledan Jesuit by the name of Pedro de Ribadeneyra, whose particular brand of personal Augustinianism offers a useful tonic to the theological and polemical Augustine.

Pedro de Ribadeneyra

To quote Eusebio Rey, “I do not believe there were many religious writers of the Siglo de Oro who internalized certain ascetical aspects of Saint Augustine to a greater degree than Ribadeneyra” (xciii). Ribadeneyra translated the Confessions, the Soliloquies, and the Enchiridion, as well as the pseudo-Augustinian Meditations. His own works of history, biography, theology, and political theory are filled with citations, quotations, and allusions to the saint’s oeuvre, including such recondite texts as the Contra Cresconium and the Answer to an Enemy of the Laws and the Prophets. In short, like so many of his contemporaries, Ribadeneyra invoked Augustine as a commanding authority on doctrinal and philosophical issues. But there is another component to Ribadeneyra’s Augustinianism: his spiritual memoir, the Confesiones.

Composed just before his death in September 1611, Ribadeneyra’s Confesiones may be the first memoir to borrow Augustine’s title directly (Pabel 456). Yet, a title does not a book make. How Augustinian are the Confesiones?

Pierre Courcelle, the great scholar of the afterlives of Augustine’s Confessions, declared that “the Confesiones of the Jesuit Ribadeneyra seem to have taken nothing from Augustine save the form of a prayer of praise” (379). Of this commonality there can be no doubt: Ribadeneyra effuses with gratitude to a degree that rivals Augustine. “These mercies I especially acknowledge from your blessed hand, and I praise and glorify you, and implore all the courtiers of heaven that they praise and forever thank you for them” (21). Like the Confessions, the Confesiones are written as “an on-going conversation with God to which […] readers were deliberately made a party” (Pabel 462). That said, reading the two side-by-side reveals deeper connections, as the Jesuit borrows from Augustine’s life story in narrating his own.

Though Ribadeneyra could not recount flirtations with Manicheanism or astrology, he could follow Augustine in subjecting his childhood to unsparing critique. His early years furnished—whose do not?—sufficient petty rebellions to merit Augustinian laments for “the passions and awfulness of my wayward nature” (5–6). In one such incident, Pedro stubbornly demands milk as a snack; enraged by his mother’s refusal, he runs from the house and begins roughhousing with his friends, resulting in a broken leg. Sin inspired by a desire for dairy sets up an echo of Augustine’s rebuke of

the jealousy of a small child: he could not even speak, yet he glared with livid fury at his fellow-nursling. […] Is this to be regarded as innocence, this refusal to tolerate a rival for a richly abundant fountain of milk, at a time when the other child stands in greatest need of it and depends for its very life on this food alone? (I.7,11)

Luis Tristán, Santa Monica

Ribadeneyra’s mother, Catalina de Villalobos, unsurprisingly plays the role of Monica, the guarantor of his Catholic future (while pregnant, Catalina vows that her son will become a cleric). She was not the only premodern woman to be thus canonized by her son: Jean Gerson tells us that his mother, Élisabeth de la Charenière, was “another Monica” (400n10).

Leaving Toledo, Pedro comes to Rome, which was cast as one of Augustine’s perilous earthly cities. Hilmar Pabel points out that the Jesuit’s description of the city as “Babylonia” imitates Augustine’s jeremiad against Thagaste as “Babylon” (474). Like its North African predecessor, this Italian Babylon threatens the soul of its young visitor. Foremost among these perils are teachers: in terms practically borrowed from the Confessions, Ribadeneyra decries “those who ought to be masters, [who] are seated in the throne of pestilence and teach a pestilent doctrine, and not only do not punish the evil they see in their vassals and followers, but instead favor and encourage them by their authority” (7–8).

After Ribadeneyra left for Italy, Catalina’s duties as Monica passed to Ignatius of Loyola, who combined them with those of Ambrose of Milan—the father-figure and guide encountered far from home. Like Ambrose, Ignatius acts como padre, one whose piety is the standard that can never be met, who combines affection with correction.

A young Ignatius of Loyola

The narrative climax of the Confessions is Augustine’s tortured struggle culminating in his embrace of Christianity. No such conversion could be forthcoming for Ribadeneyra, its place taken by tentacion, an umbrella term encapsulating emotional upheavals, doubts over his vocation, the fantasy of returning to Spain, and resentment of Ignatius. Famously, Augustine agonizes until he hears a voice that seems to instruct him, tolle lege (VIII.29). Ribadeneyra structures the resolution of his own crises in analogous fashion, his anxieties dissolved by a single utterance of Ignatius’s: “I beg of you, Pedro, do not be ungrateful to one who has shown you so many kindnesses, as God our Lord.” “These words,” Ribadeneyra tells us, “were as powerful as if an angel come from heaven had spoken them,” his tentacion forever banished (37).

I am not suggesting Ribadeneyra fabricated these incidents in order to claim an Augustinian mantle. But the choices of what to include and how to narrate his Confesiones were shaped, consciously and unconsciously, by Augustine’s example.

Ribadeneyra’s story also diverges from its Late Antique model, and at times the contrast is such as to favor the Jesuit, however implicitly. Ribadeneyra professes an unmistakably early modern Marian piety that has no equivalent in Augustine. Where Monica is reassured by a vision of “a young man of radiant aspect” (III.11,19), Catalina de Villalobos makes her vow to vuestra sanctíssima Madre y Señora nuestra (3). Augustine addresses his gratitude to “my God, my God and my Lord” (I.2, 2), while Ribadeneyra, who mentions his travels to Marian shrines like Loreto, is more likely to add the Virgin to his exclamations: “and in particular I implored your most glorious virgin-mother, my exquisite lady, the Virgin Mary” (11). The Confessions mention Mary only twice, solely as the conduit for the Incarnation (IV.12, 19; V.10, 20). Furthermore, Ribadeneyra’s early conquest of his tentaciones produces a much smoother path than Augustine’s erratic embrace of Christianity; thus the Jesuit declares, “I never had any inclination for a way of life other than that I have” (6). His rhapsodic praise of chastity—“when could I praise you enough for having bound me with a vow to flee the filthiness of the flesh and to consecrate my body and soul to you in a clean and sweet sacrifice” (46)—is a far cry from the infamous “Make me chaste, Lord, but not yet” (VIII.17).

When Ribadeneyra translated Augustine’s Confessions into Spanish in 1596, his paratexts lauded Augustine as the luz de la Iglesia and God’s signal gift to the Church. There is no hint—anything else would have been highly inappropriate—of equating himself with Augustine, whose ingenio was “either the greatest or one of the greatest and most excellent there has ever been in the world.” As a last word, however, Ribadeneyra mentions the previous Spanish version, published in 1554 by Sebastián Toscano. Toscano was not a native speaker, “and art can never equal nature, and so his style could not match the dignity and elegance of our language.” It falls to Ribadeneyra, in other words, to provide the Hispanophone world with the proper text of the Confessions; without ever saying so, he positions himself as a privileged interpreter of Augustine.

The Confessions is a profoundly personal text, perhaps the seminal expression of Christian subjectivity—told in a searingly intense first-person. Ribadeneyra himself writes that in the Confessions “is depicted, as in a masterful portrait painted from life, the heavenly spirit of Saint Augustine, in all its colors and shades.” Without wandering into the trackless wastes of psychohistory, it must have been a heady experience for so devoted a reader of Augustine to compose—all translation being composition—the life and thought of the great bishop.

Ribadeneyra was of course one of many Augustinians in early modern Europe, part of an ongoing Catholic effort to reclaim the Doctor from the Protestants, but we will misunderstand his dedication if we regard the saint as no more than a prime piece of symbolic real estate. For scholars of early modern Augustinianism have rooted the Church Father in philosophical schools and the cut-and-thrust of confessional conflict. To MacCulloch and Warfield we might add Meredith J. Gill, Alister McGrath, Arnoud Visser, and William J. Bouwsma, for whom early modern thought was fundamentally shaped by the tidal pulls of two edifices, Augustinianism and Stoicism.

There can be no doubt that Ribadeneyra was convinced of Augustine’s unimpeachable Catholicism and opposition to heresy—categories he had no hesitation in mapping onto Reformation-era confessions. Equally, Augustine profoundly influenced his own theology. But beyond and beneath these affinities lay a personal bond. Augustine, who bared his soul to a degree unmatched among the Fathers, was an inspiration, in the most immediate sense, to early modern believers. Like Ignatius, the bishop of Hippo offered Ribadeneyra a model for living.

That early modern individuals took inspiration from classical, biblical, and late antique forebears is nothing new. Bruce Gordon writes that, influenced by humanist notions of emulation, “through intensive study, prayer and conduct [John] Calvin sought to become Paul” (110). Mutatis mutandis, the sentiment applies to Ribadeneyra and Augustine. Curiously, Stephen Greenblatt’s seminal Renaissance Self-Fashioning does not much engage with emulation, concerning itself with self-fashioning as creation ex nihilo—that is to say, a new self, not geared toward an existing model (Greenblatt notes in passing, and in contrast, the tradition of imitatio Christi). Ribadeneyra, in reading, translating, interpreting, citing, and imitating Augustine, was fashioning a self after another’s image. As his Catholicized Confesiones indicate, this was not a slavish and literal-minded adherence to each detail. He recognized the great gap of time that separated him from his hero, and the changes it wrought demanded creativity and alteration in the fashioning of a self. This need not have been a thought-out or even conscious plan, but simply the cumulative effect of a lifetime of admiration and inspiration. Without denying Ribadeneyra’s formidable mind or his fervent Catholicism, there is something to be gained from taking emotional significance as our starting point, from which to understand all the intellectual and personal work the Jesuit, and others of his time, could accomplish through a hero.

What has Athens to do with London? Plague.

By Editor Spencer J. Weinreich


Map of London by Wenceslas Hollar, c.1665

It is seldom recalled that there were several “Great Plagues of London.” In scholarship and popular parlance alike, only the devastating epidemic of bubonic plague that struck the city in 1665 and lasted the better part of two years holds that title, which it first received in early summer 1665. To be sure, the justice of the claim is incontrovertible: this was England’s deadliest visitation since the Black Death, carrying off some 70,000 Londoners and another 100,000 souls across the country. But note the timing of that first conferral. Plague deaths would not peak in the capital until September 1665, the disease would not take up sustained residence in the provinces until the new year, and the fire was more than a year in the future. Rather than any special prescience among the pamphleteers, the nomenclature reflects the habit of calling every major outbreak in the capital “the Great Plague of London”—until the next one came along (Moote and Moote, 6, 10–11, 198). London experienced a major epidemic roughly every decade or two: recent visitations had included 1592, 1603, 1625, and 1636. That 1665 retained the title is due in no small part to the fact that no successor arose; this was to be England’s last outbreak of bubonic plague.

Serial “Great Plagues of London” remind us that epidemics, like all events, stand within the ebb and flow of time, and draw significance from what came before and what follows after. Of course, early modern Londoners could not know that the plague would never return—but they assuredly knew something about its past.

Early modern Europe knew bubonic plague through long and hard experience. Ann G. Carmichael has brilliantly illustrated how Italy’s communal memories of past epidemics shaped perceptions of and responses to subsequent visitations. Seventeenth-century Londoners possessed a similar store of memories, but their plague-time writings mobilize a range of pasts and historiographical registers that includes much more than previous epidemics or the history of their own community: from classical antiquity to the English Civil War, from astrological records to demographic trends. Such richness accords with the findings of the formidable scholarly phalanx investigating “the uses of history in early modern England” (to borrow the title of one edited volume), which informs us that sixteenth- and seventeenth-century English people had a deep and sophisticated sense of the past, instrumental in their negotiations of the present.

Let us consider a single, iconic strand in this tapestry: invocations of the Plague of Athens (430–26 B.C.E.). Jacqueline Duffin once suggested that writing about epidemic disease inevitably falls prey to “Thucydides syndrome” (qtd. in Carmichael 150n41). In the centuries since the composition of the History of the Peloponnesian War, Thucydides’s hauntingly vivid account of the plague (II.47–54) has influenced writers from Lucretius to Albert Camus. Long lost to Latin Christendom, Thucydides was slowly reintegrated into Western European intellectual history beginning in the fifteenth century. The first (mediocre) English edition appeared in 1550, superseded in 1628 by a version from none other than Thomas Hobbes. For more than a hundred years, then, Anglophone readers had access to Thucydides, while Greek and Latin versions enjoyed a respectable, if not extraordinary, popularity among the more learned.


Michiel Sweerts, Plague in an Ancient City (1652), believed to depict the Plague of Athens

In 1659, the churchman and historian Thomas Sprat, booster of the Royal Society and future bishop of Rochester, published The Plague of Athens, a Pindaric versification of the accounts found in Thucydides and Lucretius. Sprat’s Plague has been convincingly interpreted as a commentary on England’s recent political history—viz., the Civil War and the Interregnum (King and Brown, 463). But six years on, the poem found fresh relevance as England faced its own “too ravenous plague” (Sprat, 21). The savvy bookseller Henry Brome, who had arranged the first printing, brought out two further editions in 1665 and 1667. Because the poem was prefaced by the relevant passages of Hobbes’s translation, an English text of Thucydides was in print throughout the epidemic. It is of course hardly surprising that at moments of epidemic crisis, the locus classicus for plague should sell well: plague-time interest in Thucydides is well-attested before and after 1665, in England and elsewhere in Europe.

But what does the Plague of Athens do for authors and readers in seventeenth-century London? As the classical archetype of pestilence, it functions as a touchstone for the ferocity of epidemic disease and a yardstick by which the Great Plague could be measured. The physician John Twysden declared, “All Ages have produced as great mortality and as great rebellion in Diseases as this, and Complications with other Diseases as dangerous. What Plague was ever more spreading or dangerous than that writ of by Thucidides, brought out of Attica into Peloponnesus?” (111–12).

One flattering rhymester welcomed Charles II’s relocation to Oxford with the confidence that “while Your Majesty, (Great Sir) shines here, / None shall a second Plague of Athens fear” (4). In a less reassuring vein, the societal breakdown depicted by Thucydides warned England what might ensue from its own plague.

Perhaps with that prospect in mind, other authors drafted Thucydides as their ally in catalyzing moral reform. The poet William Austin (who was in the habit of ruining his verses by overstuffing them with classical references) seized upon the Athenians’ passionate devotions in the face of the disaster (History, II.47). “Athenians, as Thucidides reports, / Made for their Dieties new sacred courts. / […] Why then wo’nt we, to whom the Heavens reveal / Their gracious, true light, realize our zeal?” (86). In a sermon entitled The Plague of the Heart, John Edwards enlisted Thucydides in the service of his conceit of a spiritual plague that was even more fearsome than the bubonic variety:

The infection seizes also on our memories; as Thucydides tells us of some persons who were infected in that great plague at Athens, that by reason of that sad distemper they forgot themselves, their friends and all their concernments [History, II.49]. Most certain it is that by the Spirituall infection men forget God and their duty. (8)

Not dissimilarly, the tailor-cum-preacher Richard Kingston parallels the plague with sin. He characterizes both evils as “diffusive” (23–24), citing Thucydides to the effect that the plague began in Ethiopia and moved thence to Egypt and Greece (II.48).

On the supposition that, medically speaking, the Plague of Athens was the same disease they faced, early modern writers treated it as a practical precedent for prophylaxis, treatment, and public health measures. Thucydides was one of several classical authorities cited by the Italian theologian Filiberto Marchini to justify open-field burials, based on their testimony that wild animals shunned plague corpses (Calvi, 106). Rumors of plague-spreading also stoked interest in the History, because Thucydides records that the citizens of Piraeus believed the epidemic arose from the poisoning of wells (II.48; Carmichael, 149–50).


Peter Paul Rubens, Hippocrates (1638)

It should be noted that Thucydides was not the only source for early modern knowledge about the Plague of Athens. One William Kemp, extolling the preventative virtues of moderation, tells his readers that it was temperance that preserved Socrates during the disaster (58–59). This anecdote comes not from Thucydides but from Claudius Aelianus, who writes of the philosopher’s constitution and moderate habits, “[t]he Athenians suffered an epidemic; some died, others were close to death, while Socrates alone was not ill at all” (Varia historia, XIII.27, trans. N. G. Wilson). (Interestingly, 1665 saw the publication of a new translation of the Varia historia.) Elsewhere, Kemp relates how Hippocrates organized bonfires to free Athens of the disease (43), a story that originates with the pseudo-Galenic On Theriac to Piso, but probably reached England via Latin intermediaries and/or William Bullein’s A Dialogue Against the Fever Pestilence (1564). Hippocrates’s name, and his supposed victory over the Plague of Athens, were used to advertise cures and preventatives.


With the exception of Sprat—whose poem was written in 1659—these are all fleeting references, but that is in some sense the point. The Plague of Athens, Thucydides, and his History had entered the English imaginary, a shared vocabulary for thinking about epidemic disease. To quote Raymond A. Anselment, Sprat’s poem (and other invocations of the Plague of Athens) “offered through the imitation of the past an idea of the present suffering” (19). In the desperate days of 1665–66, the mere mention of Thucydides’s name, regardless of the subject at hand, would have been enough to conjure the specter of the Athenian plague.

Whether or not one built a public health plan around “Hippocrates’s” example, or looked to the History of the Peloponnesian War as a guide to disease etiology, the Plague of Athens exerted an emotional and intellectual hold over early modern English writers and readers. In part, this was merely a sign of the times: early modern Europeans were profoundly invested in the past as a mirror for and guide to the present and the future. In England, the Great Plague came at the height of a “rage for historical parallels” (Kewes, 25)—and no corner of history offered more distinguished parallels than classical antiquity.

And let us not undersell the affective power of such parallels. The value of recalling past plagues was the simple fact of their being past. Awful as the Plague of Athens had been, it had eventually passed, and Athens still stood. Looking backwards was a relief from a present dominated by the epidemic, and from the plague’s warped temporality: the interruption of civic and liturgical rhythms and the ordinary cycle of life and death. Where “an epidemic denies time itself” (Calvi, 129–30), history restores it, and offers something like orientation—even, dare we say, hope.


A Book of Battle: Marcelino Menéndez y Pelayo and La ciencia española

By Editor Spencer J. Weinreich


Statue of Marcelino Menéndez Pelayo at the Biblioteca Nacional de España

Marcelino Menéndez y Pelayo’s La ciencia española (first ed. 1876) is a battlefield long after the guns have fallen silent: the soldiers dead, the armies disbanded, even the names of the belligerent nations changed beyond recognition. All the mess has been cleared up. Like his contemporaries Leopold von Ranke, Arnold Toynbee, or Jacob Burckhardt, Menéndez Pelayo has been enshrined as one of the nineteenth-century tutelary deities of intellectual history. Seemingly incapable of writing except at great length and in torrential cascades of erudition, he produced an oeuvre that lends itself to reverence—and frightens off most readers. And while reverence is hardly undeserved, we do a disservice to La ciencia española and its author if we leave the marmoreal exterior undisturbed. The challenge for the modern reader is to recover the passions—intellectual, political, and personal—animating what Menéndez Pelayo himself called “a book of battle [un libro de batalla]” (2:268).


Gumersindo de Azcárate

La ciencia española is a multifarious collection of articles, reviews, speeches, and letters that takes its name from its linchpin, a feisty exchange over the history of Spanish learning (la ciencia española). The casus belli came from an 1876 article by the distinguished philosopher and jurist Gumersindo de Azcárate, who argued that early modern Spain had been intellectually stunted by the Catholic Church. Menéndez Pelayo responded with an essay vociferously defending the honor of Spanish learning, exonerating the Church, and decrying the neglect of early modern Spanish intellectual history. Azcárate never replied, but his colleagues Manuel de la Revilla, Nicolás Salmerón, and José del Perojo took up his cause, trading articles with Menéndez Pelayo in which they debated these and related issues—was there such a thing as “Spanish philosophy”?—in excruciating detail.

The exchange showcases the driving concerns of Menéndez Pelayo’s scholarly career: the greatness of the Spanish intellectual tradition, critical bibliography, Catholicism as the national genius of Spain, and an almost-frightening sense of how much these issues matter. This last is the least accessible element of La ciencia española: the height of its stakes. Why should Spain’s very identity rest upon abstruse questions of intellectual history? How did a group of academics merit the label “the eternal enemies of religion and the patria [los perpetuos enemigos de la Religión y de la patria]” (1:368)?

Here we must understand that La ciencia española is but one rather pitched battle in a broader war. Nineteenth-century Spain was in the throes of an identity crisis, the so-called “problem of Spain.” In the wake of the loss of a worldwide empire, serial revolutions and civil wars, a brief flirtation with a republic, endemic corruption, and economic stagnation, where was Spain’s salvation to be found—in the past or in the future? With the Church or with the Enlightenment? By looking inward or looking outward?


Karl Christian Friedrich Krause

Menéndez Pelayo was a self-declared neocatólico, an adherent of a movement of conservative Catholics for whom Spain’s identity was indissolubly linked to the Church. He also stands as perhaps the foremost exponent of casticismo, a literary and cultural nationalism premised on a return to Spain’s innate, authentic identity. All of Menéndez Pelayo’s antagonists in that initial exchange—Azcárate, Revilla, Salmerón, and Perojo—were Krausists, a school from which not much is heard these days. Karl Christian Friedrich Krause was a student of Schelling, Hegel, and Fichte, long (and not unjustly) overshadowed by his teachers. But Krause found an unlikely afterlife among a cohort of liberal thinkers in Restoration Spain. These latter-day Krausists aimed at the intellectual rejuvenation of Spain, which they felt had been stifled by the Catholic Church. Accordingly, they called for religious toleration, academic freedom, and, above all, an end to the Church’s monopoly over education.

To Menéndez Pelayo, Krausism threatened the very wellsprings of the national culture. The Krausists were “a horde of fanatical sectarians […] murky and repugnant to every independent soul” (qtd. in López-Morillas, 8). He acidly denied both that Spain’s learning had declined, and that the Church had in any way hindered it:

For this terrifying name of “Inquisition,” the child’s bogeyman and the simpleton’s scarecrow, is for many the solution to all problems, the deus ex machina that comes as a godsend in difficult situations. Why have we had no industry in Spain? Because of the Inquisition. Why have we had bad customs, as in all times and places, save in the blessed Arcadia of the bucolics? Because of the Inquisition. Why are we Spaniards lazy? Because of the Inquisition. Why are there bulls in Spain? Because of the Inquisition. Why do Spaniards take the siesta? Because of the Inquisition. Why were there bad lodgings and bad roads and bad food in Spain in the time of Madame D’Aulnoy? Because of the Inquisition, because of fanaticism, because of theocracy. [Porque ese terrorífico nombre de Inquisición, coco de niños y espantajo de bobos, es para muchos la solución de todos los problemas, el Deus ex machina que viene como llovido en situaciones apuradas. ¿Por qué no había industria en España? Por la Inquisición. ¿Por qué había malas costumbres, como en todos tiempos y países, excepto en la bienaventurada Arcadia de los bucólicos? Por la Inquisición. ¿Por qué somos holgazanes los españoles? Por la Inquisición. ¿Por qué hay toros en España? Por la Inquisición. ¿Por qué duermen los españoles la siesta? Por la Inquisición. ¿Por qué había malas posadas y malos caminos y malas comidas en España en tiempo de Mad. D’Aulnoy? Por la Inquisición, por el fanatismo, por la teocracia.]. (1:102–03)

What was called for was not—perish the thought—a move away from dogmatism, but a renewed appreciation for Spain’s magnificent heritage. “I desire only that the national spirit should be reborn […] that spirit that lives and beats at the base of all our systems, and gives them a certain aspect of their parentage, and connects and ties together even those most discordant and opposed [Quiero sólo que renazca el espíritu nacional […], ese espíritu que vive y palpita en el fondo de todos nuestros sistemas, y les da cierto aire de parentesco, y traba y enlaza hasta a los más discordes y opuestos]” (2:355).


Title page of Miguel Barnades Mainader’s Principios de botanica (1767)

Menéndez Pelayo practiced what he preached. He is as comfortable discussing such obscure peons of the Republic of Letters as the Portuguese theologian Manuel de Sá and the Catalan botanist Miguel Barnades Mainader as he is extolling Juan Luis Vives, arguing over the influence of Thomas Aquinas, or establishing the birthplace of Raymond Sebond. Menéndez Pelayo writes with genuine pain at “the lamentable oblivion and neglect in which we hold the nation’s intellectual glories [del lamentable olvido y abandono en que tenemos las glorias científicas nacionales]” (1:57). His fellow neocatólico Alejandro Pidal y Mon imagines Menéndez Pelayo as a necromancer, calling forth the spirits of long-dead intellectuals (1:276), a power on extravagant display in La ciencia española. The third volume of La ciencia española comprises nearly three hundred pages of annotated bibliography, on every conceivable branch of the history of knowledge in Spain.

I am aware how close I have strayed to the kind of pedestal-raising I deprecated at the outset. Fortunately, we do not have to look far to find the clay feet that will be the undoing of any such monument. Menéndez Pelayo’s lyricism should not disguise the reactionary character of his intellectual project, with its nationalism and loathing of secularism, religious toleration, and any challenge to Catholic orthodoxy. His avowed respect for the achievements of Jews and Muslims in medieval Spain is cheapened by a pervasive, muted anti-Semitism and Islamophobia: La ciencia española speaks of “the scientific poverty of the Semites [La penuria científica de los semitas]” (2:416) and the “decadence [decadencia]” of contemporary Islam. When he writes, “I am, thanks be to God, an Old Christian [gracias a Dios, soy cristiano viejo]” (2:265), we cannot pretend he is ignorant of the pernicious history of that term. Of the colonization of the New World he baldly states, “we sowed religion, science, and blood with a liberal hand, later to reap a long harvest of ingratitudes and disloyalties [sembramos a manos llenas religión, ciencia y sangre, para recoger más tarde larga cosecha de ingratitudes y deslealtades]” (2:15).

It is no coincidence that Menéndez Pelayo’s prejudices are conveyed in superlative Spanish prose—ire seems to have brought out the best of his wit. “I cannot but regret that Father [Joaquín] Fonseca should have felt himself obliged, in order to vindicate Saint Thomas [Aquinas] from imagined slights, to throw upon me all the corpulent folios of the saint’s works [no puedo menos de lastimarme de que el Padre Fonseca se haya creído obligado, para desagraviar a Santo Tomás de ofensas soñadas, a echarme encima todos los corpulentos infolios de las obras del Santo]” (2:151). “Mr. de la Revilla says that he has never belonged to the Hegelian school. Congratulations to him—his philosophical metamorphoses are of little interest to me [El Sr. de la Revilla dice que nunca ha pertenecido a la escuela hegeliana. En hora buena: me interesan poco sus transformaciones filosóficas]” (1:201). On subjects dear to his heart, baroque rhapsodies could flow from his pen. He spends three pages describing the life of the medieval Catalan polymath Ramon Llull, whom he calls the “knight errant of thought [caballero andante del pensamiento]” (2:372).

At the same time, many pages of La ciencia española make for turgid reading, bare catalogues of obscure Spanish authors and their yet more obscure publications.

*     *     *

Menéndez Pelayo died in 1912. Azcárate, his last surviving interlocutor, passed away five years later. Is the battle over? In the intervening decades, Spain has found neither cultural unity nor political coherence—and not for lack of trying. Reactionary Catholic and conservative though he was, Menéndez Pelayo does not fit the role of Francoist avant la lettre, in spite of the regime’s best efforts to coopt him. La ciencia española shows none of Franco’s Castilian chauvinism and suspicion of regionalism. Menéndez Pelayo chides an author for using the phrase “the Spanish language [la lengua española]” when he means “Castilian.” “The Catalan language is as Spanish as Castilian or Portuguese [Tan española es la lengua catalana como la castellana o la portuguesa]” (2:363).

Today the Church has indeed lost its iron grip on the Spanish educational system, and the nation is not only no longer officially Catholic, but has embraced religious toleration and even greater heterodoxies, among them divorce, same-sex marriage, and abortion. We are all Krausists now.

If the crusade against the Krausists failed, elements of Menéndez Pelayo’s intellectual project have fared considerably better. We are witnessing a flood of scholarly interest in early modern Spain’s intellectual history—historiography, antiquarianism, the natural sciences, publishing. Whether they know it or not, these scholars are answering a call sounded more than a century before. And never more so than when they turn their efforts to those Menéndez Pelayo sympathetically called “second-order talents [talentos de segundo orden]” (1:204). In the age of USTC, EEBO, Cervantes Virtual, Gallica, and countless similar resources, the discipline of bibliography he so cherished is expanding in directions he could never have imagined.


Charles II of Spain

Spain’s decline continues to inspire debate among historians—and will continue to do so, I expect, so long as there are historians to do the debating. The foreword to J. H. Elliott’s still-definitive survey, Imperial Spain: 1469–1716, places the word “decline” in inverted commas, but the prologue acknowledges the genuine puzzle of explaining the shift in Spain’s fortunes over the early modern period. Menéndez Pelayo could hardly deny that Charles II ruled an altogether less impressive realm than had his great-grandfather, but would presumably counter that whatever the geopolitics, Spanish letters remained vibrant. As for the Spanish Inquisition, his positivity prefigures that of Henry Kamen, who has raised not a few eyebrows with his favorably inclined “historical revision.”

La ciencia española is at once the showcase for a prodigious young talent, a call to arms for intellectual traditionalism, and a formidable if flawed collection of insights and reflections. As the grand old man of Spanish letters, a caricature of conservatism and Catholic partisanship, Menéndez Pelayo furnishes an excellent foil—or strawman, for those less charitably inclined—against whom generations can and should sharpen their pens and their arguments.

La lutte continue.

The Historical Origins of Human Rights: A Conversation with Samuel Moyn

By guest contributor Pranav Kumar Jain


Professor Samuel Moyn (Yale University)

Since the publication of The Last Utopia: Human Rights in History, Professor Samuel Moyn has emerged as one of the most prominent voices in the field of human rights studies and modern intellectual history. I recently had a chance to interview him about his early career and his views on human rights and recent developments in the field of history.

Moyn was educated at Washington University in St. Louis, where he studied history and French literature. In St. Louis, he fell under the influence of Gerald Izenberg, who nurtured his interest in modern French intellectual history. After college, he proceeded to Berkeley to pursue his doctorate under the supervision of Martin Jay. However, unexcited at the prospect of becoming a professional historian, he left graduate school after taking his orals and enrolled at Harvard Law School. After a year in law school, he decided that he did want to finish his Ph.D. after all. He switched the subject of his dissertation to a topic that could be done on the basis of materials available in American libraries. Drawing upon an earlier seminar paper, he decided to write about the interwar moral philosophy of Emmanuel Levinas. After graduating from Berkeley and Harvard in 2000-01, he joined Columbia University as an assistant professor in history.

Though he had never written about human rights before, he had become interested in the subject in law school and during his work in the White House at the time of the Kosovo bombings. At Columbia, he decided to pursue his interest in human rights further and began to teach a course called “Historical Origins of Human Rights.” The conversations in this class were complemented by those with two newly arrived faculty members, Mark Mazower and Susan Pedersen, both of whom were then working on the international history of the twentieth century. In 2008, Moyn decided that it was finally time to write about human rights.


Samuel Moyn, The Last Utopia: Human Rights in History (Cambridge: Harvard University Press, 2012)

In The Last Utopia, Moyn’s aim was to contest the theories about the long-term origins of human rights. His key argument was that it was only in the 1970s that the concept of human rights crystallized as a global language of justice. In arguing thus, he sharply distinguished himself from the historian Lynn Hunt, who had suggested that the concept of human rights stretched all the way back to the French Revolution. Before Hunt published her book on human rights, Moyn told me, his class had shared some of her emphasis. Both scholars, for example, were influenced by Thomas Laqueur’s account of the origins of humanitarianism, which focused on the upsurge of sympathy in the eighteenth century. Laqueur’s argument, however, had not even mentioned human rights. Hunt’s genius (or mistake?), Moyn believes, was to make that connection.

Moyn, however, is not the only historian to see the 1970s as a turning point. In his Age of Fracture (2012), intellectual historian Daniel Rodgers has made a similar argument about how the American postwar consensus came under increasing pressure and finally shattered in the 70s. But there are some important differences. As Moyn explained to me, Rodgers’s argument is more about the disappearance of alternatives, whereas his is more concerned with how human rights survived that difficult moment. Furthermore, Rodgers’s focus on the American case makes his argument unique because, in comparison with transatlantic cases, the American tradition does not have a socialist starting point. Both Moyn and Rodgers, however, have been criticized for failing to take neoliberalism into account. Moyn says that he has tried to address this in his forthcoming book Not Enough: Human Rights in an Unequal World.

Some have come to see Moyn’s book as mostly about President Jimmy Carter’s contributions to the human rights revolution. Moyn himself, however, thinks that the book is ultimately about the French Revolution and its abandonment in modern history for an individualistic ethics of rights, including the Levinasian ethics which he once studied. In Moyn’s view, human rights are a part of this “ethical turn.” While he was working on the book, Moyn’s own thinking underwent a significant revolution. He began to explore the place of decolonization in the story he was trying to tell. Decolonization was not something he had thought about very much before but, as arguably one of the biggest events of the twentieth century, it seemed indispensable to the human rights revolution. In the book, he ended up making the very controversial argument that human rights largely emerged as the response of westerners to decolonization. Since they had now lost the interventionist tool of empire, human rights became a new universalism that would allow them to think about, care about, and perhaps intervene in places they had once ruled directly.

Though widely acclaimed, Moyn’s thesis has been challenged on a number of fronts. For one thing, Moyn himself believes that the argument of the book is problematic because it globalizes a story that is mostly about French intellectuals in the 1970s. Then there are critics such as Stefan-Ludwig Hoffmann, a German historian at UC Berkeley, who have suggested, in Moyn’s words, that “Sam was right in dismissing all prior history. He just didn’t dismiss the 70s and 80s.” Moyn says that he finds Hoffmann’s arguments compelling and that, if we think of human rights primarily as a political program, the 90s do deserve the lion’s share of attention. After all, Moyn’s own interest in the politics of human rights emerged during the 90s.


Eleanor Roosevelt with a Spanish-language copy of the Universal Declaration of Human Rights

Perhaps one of Moyn’s most controversial arguments is that the field of the history of human rights no longer has anything new to say. Most of the questions about the emergence of the human rights movements and the role of international institutions have already been answered. Given the major debate provoked by his own work, I am skeptical that this is indeed the case. Plus, there are a number of areas which need further research. For instance, we need to better understand the connections between signature events, such as the adoption of the Universal Declaration of Human Rights, and the story that Moyn tells about the 1970s. But I think Moyn made a compelling point when he suggested to me that we cannot continue to constantly look for the origins of human rights. In doing so, we often run the risk of anachronism and misinterpretation. For instance, some scholars have tried to tie human rights back to early modern natural law. However, as Moyn put it, “what’s lost when you interpret early modern natural law as fundamentally a rights project is that it was actually a duties project.”

Moyn is ambivalent about recent developments in the study and practice of history in general. He thinks that the rise of global and transnational history is a welcome development because, ultimately, there is no reason for methodological nationalism to prevail. However, in his view, this has had a somewhat adverse effect on graduate training. When he went to grad school, he took courses that focused on national historiographical canons, and many of the readings were in the original language. With the rise of global history, it is not clear that such courses can be taught anymore. For instance, no teacher could demand that all the students know the same languages. Consequently, Moyn says, “most of what historians were doing for most of modern history is being lost.” This is certainly an interesting point, and it raises the question of how graduate programs can train their students to strike a balance between the wide perspectives of global history and the deep immersion of a more national approach.

Otherwise, however, in contrast with many of his fellow scholars, Moyn is surprisingly upbeat about the current state and future of the historical profession. He thinks that we are living in a golden age of historiography, with many impressive historians producing outstanding works. There is certainly scope for history to become more relevant to the public. But historians engaging with the public shouldn’t do so in crass ways, such as insisting that history has a definitive relevance to public policy. History does not have to change radically. It can simply continue to build upon its existing strengths.


Professor Lynn Hunt (UCLA)

In the face of Lynn Hunt’s recent judgment that the field of “history is in crisis and not just one of university budgets,” this is a somewhat puzzling conclusion. However, it is one that I happen to agree with. Those who suggest that historians should engage with policy makers certainly have a point. However, instead of emphasizing the uniqueness of history, their arguments often devolve into claims about what historians can do better than economists and political scientists. In the process, they often lose sight of the fact that, more than anything, historians are storytellers. History rightly belongs in the humanities rather than the social sciences. It is only in telling stories that inspire and excite the public’s imagination that historians can regain the respect that many think they have lost in the public eye.

Pranav Kumar Jain is a doctoral student in early modern history at Yale University.

Alexander and Wilhelm von Humboldt, Brothers of Continuity

By guest contributor Audrey Borowski

At the beginning of the nineteenth century, a young German polymath ventured into the heart of the South American jungle, climbed the Chimborazo volcano, crawled through the Andes, conducted experiments on animal electricity, and delineated climate zones across continents. His name was Alexander von Humboldt (1769–1859). With the young French scientist Aimé Bonpland and equipped with the latest instruments, Humboldt tirelessly collected and compared data and specimens, returning after five years to Paris with trunks filled with notebooks, sketches, specimens, measurements, and observations of new species. Throughout his travels in South America, Russia and Mongolia, he invented isotherms and formulated the idea of vegetation and climate zones. Crucially, he witnessed the continuum of nature unfold before him and set forth a new understanding of nature that has endured up to this day. Man existed in a great chain of causes and effects in which “no single fact can be considered in isolation.” Humboldt sought to discover the “connections which linked all phenomena and all forces of nature.” The natural world was teeming with organic powers that were incessantly at work and which, far from operating in isolation, were all “interlaced and interwoven.” Nature, he wrote, was “a reflection of the whole” and called for a global understanding. Humboldt’s Essay on the Geography of Plants (1807) was the world’s first book on ecology, in which plants were grouped into zones and regions rather than taxonomic units and analogies were drawn between disparate regions of the globe.

In this manner, Alexander sketched out a Naturgemälde, a “painting of nature” that fused botany, geology, zoology and physics in one single picture, and in this manner broke away from prevailing taxonomic representations of the natural world. His was a fundamentally interdisciplinary approach, at a time when scientific inquiry was becoming increasingly specialized. The study of the natural world was no abstract endeavor and was far removed from the mechanistic philosophy that had held sway up till then. Nature was the object of scientific inquiry, but also of wonder, and as such it exerted a mysterious pull. Man was firmly relocated within a living cosmos broader than himself, which appealed equally to his emotions and imagination. From the heart of the jungle to the summit of volcanoes, “nature everywhere [spoke] to man in a voice that is familiar to his soul” and what spoke to the soul, Humboldt wrote, “escapes our measurements” (Views of Nature, 217-18). In this manner Humboldt followed in the footsteps of Goethe, his lifelong friend, and the German philosopher Friedrich Schelling, in particular the latter’s Naturphilosophie (“philosophy of nature”). Nature was a living organism that had to be grasped in its unity, and its study should steer away from “crude empiricism” and the “dry compilation of facts” and instead speak to “our imagination and our spirit.” In this manner, rigorous scientific method was wedded to art and poetry, and the boundaries between the subjective and the objective, the internal and the external, were blurred. “With an aesthetic breeze,” Alexander’s long-time friend Goethe wrote, the former had lit science into a “bright flame” (quoted in Wulf, The Invention of Nature, 146).

Alexander von Humboldt’s older brother, Wilhelm (1767-1835), a government official with a great interest in reforming the Prussian educational system, had been similarly inspired. While his brother had ventured out into the jungle, Wilhelm, for his part, had devoted much of his life to the exploration of the linguistic realm, whether in his study of Native American and ancient languages or in his attempts to grasp the relation between linguistic and mental structures. Like the German philosopher and literary critic Johann Gottfried Herder before him, Humboldt posited that language, far from being merely a means of communication, was the “formative organ” (W. Humboldt, On the Diversity of Human Language, 54) of thought. According to this view, man’s judgmental activity was inextricably bound up with his use of language. Humboldt’s linguistic thought relied on a remarkable interpretation of language itself: language was an activity (energeia) as opposed to a work or finished product (ergon). In On the Diversity of Human Language Construction and its Influence on the Mental Development of the Human Species (1836), his major treatise on language, Wilhelm articulated a forcefully expressivist conception of language, in which he brought to bear the interconnectedness and organic nature of all languages and, by extension, various worldviews. Far from being a “dead product,” an “inert mass,” language appeared as a “fully-fashioned organism” that, within the remit of an underlying universal template, was free to evolve spontaneously, allowing for maximum linguistic diversity (90).


Left to Right: Friedrich Schiller, Wilhelm von Humboldt, Alexander von Humboldt, and Johann Wolfgang von Goethe, depicted by Adolph Müller (c.1797)

To the traditional objectification of language, Wilhelm opposed a reading of language that was heavily informed by biology and physiology, in keeping with the scientific advances of his time. Within this framework, language could not be abstracted, interwoven as it was with the fabric of everyday life. Henceforth, there was no longer one “objective” way of knowing the world, but a variety of different worldviews. Like his brother, Wilhelm strove to understand the world in its individuality and totality.

At the heart of the linguistic process lay an in-built mechanism, a feedback loop that accounted for language’s ability to generate itself. This consisted in the continuous interplay between an external sound-form and an inner conceptual form, whose “mutual interpenetration constitute[d] the individual form of language” (54). In this manner, rhythms and euphonies played a role in expressing internal mental states. The dynamic and self-generative aspect of language was therefore inscribed in its very core. Language was destined to exist in perpetual flux and renewal, effecting a continuous generation and regeneration of its world-making capacity. A powerfully spontaneous and autonomous force, it brought about “something that did not exist before in any constituent part” (473).

As much as the finished product could be analyzed, the actual linguistic process defied any attempt at scientific scrutiny, remaining inherently mysterious. Language may well abide by general rules, but it was fundamentally akin to a work of art, the product of a creative outburst which “cannot be measured out by the understanding” (81). Language, as much as it was rule-governed and called for empirical and scientific study, originated somewhere beyond semio-genesis. “Imagination and feeling,” Wilhelm wrote, “engender individual shapings in which the individual character […] emerges, and where, as in everything individual, the variety of ways in which the thing in question can be represented in ever-differing guises, extends to infinity” (81). Wilhelm therefore elevated language to a quasi-transcendental status, endowing it with a “life-principle” of its own and consecrating it as a “mental exhalation,” the manifestation of a free, autonomous spiritual force. He denied that language was the product of voluntary human activity, viewing it instead as a “gift fallen to [the nations] by their own destiny” (24), partaking in a broader spiritual mission. In this sense, the various nations constituted diverse individualities, each pursuing an inner spiritual path of its own, with each language existing as a spiritual creation and gradual unfolding:

If in the soul the feeling truly arises that language is not merely a medium of exchange for mutual understanding, but a true world which the intellect must set between itself and objects by the inner labour of its power, then the soul is on the true way toward discovering constantly more in language, and putting constantly more into it (135).

While he seemed to share his brother’s intellectual demeanor, Wilhelm disapproved of many of Alexander’s life-choices, from living in Paris rather than Berlin (particularly during the wars of liberation against Napoleon), which he felt was most unpatriotic, to leaving the civilized world in his attempts to come closer to nature (Wulf 151). Alexander, the natural philosopher and adventurer, for his part reproached his brother for his conservatism and social and political guardedness. In a time marred by conflict and the growth of nationalism, science, for him, had no nationality, and he followed scientific activity wherever it took him, especially to Paris, where he was widely celebrated throughout his life. In a European context of growing repression and censorship in the wake of Napoleon’s defeat, he encouraged the free exchange of ideas and information, and pleaded for international collaborations between scientists and the collection of global data; truth would gradually emerge from the confrontation of different opinions. He also gave many lectures during which he would effortlessly hop from one subject to another, in this manner helping to popularize science. More generally, he would help other scholars whenever he could, intellectually or financially.

As the ideas of 1789 failed to materialize, giving way instead to a climate of censorship and repression, Alexander slowly grew disillusioned with politics. His extensive travels had provided him with insights not only into the natural world but also into the human condition. “European barbarity,” especially in the shape of colonialism, tyranny and serfdom, had fomented dissent and hatred. Even the newly-born American Republic, with its founding principles of liberty and the pursuit of happiness, was not immune to this scourge (Wulf 171). Man with his greed, violence and ignorance could be as barbaric to his fellow man as he was to nature. Nature was inextricably linked with the actions of mankind, and the latter often left a trail of destruction in its wake through deforestation, ruthless irrigation, industrialization and intensive cultivation. “Man can only act upon nature and appropriate her forces to his use by comprehending her laws,” Alexander would write later in his life, and failure to do so would eventually leave even distant stars “barren” and “ravaged” (Wulf 353).

Furthermore, while Wilhelm was perhaps the more celebrated in his time, it was Alexander’s legacy that would prove the more enduring, inspiring new generations of nature writers: Henry David Thoreau, the American founder of the transcendentalist movement, who intended his masterpiece Walden as an answer to Humboldt’s Cosmos; John Muir, the great preservationist; and Ernst Haeckel, who discovered radiolarians and coined the term for our modern science of ecology. Another noteworthy influence was on Darwin and his theory of evolution. Darwin took Humboldt’s web of complex relations a step further and turned them into a tree of life from which all organisms stem. Humboldt sought to upend the ideal of “cultivated nature,” most famously perpetuated by the French naturalist the Comte de Buffon, whereby nature had to be domesticated, ordered, and put to productive use. Crucially, he inspired a whole generation of adventurers, from Darwin to Joseph Banks, and revolutionized scientific practice by tearing the scientist away from the library and back into the wilderness.

For all their many criticisms and disagreements, both brothers shared a strong bond. Alexander, who survived Wilhelm by twenty-four years, emphasized again and again Wilhelm’s “greatness of the character” and his “depth of emotions,” as well as his “noble, still-moving soul life.” Both brothers carved out unique trajectories for themselves, Wilhelm as a jurist, a statesman and a linguist, Alexander arguably as the first modern scientist; yet both remained beholden to the idea of totalizing systems, each setting forth insights that remain more pertinent than ever.


Alexander and Wilhelm von Humboldt, from a frontispiece illustration of 1836

Audrey Borowski is a historian of ideas and a doctoral candidate at the University of Oxford.