Historiography

Colonial Knowledge, South Asian Historiography, and the Thought of the Eurasian Minority

This is the fifth in a series of commentaries in our Graduate Forum on Pathways in Intellectual History, which is running this summer and fall. The first piece was by Andrew Klumpp, the second by Cynthia Houng, the third by Robert Greene II, and the fourth by Gloria Yu.

This last piece is by contributing writer Brent Howitt Otto, a PhD student in History at UC Berkeley.

It is hard to overstate the contemporary and enduring impact of British colonialism on the Indian Subcontinent. Bernard Cohn compellingly argued that the British conquest of India was a conquest of knowledge as much as it was of land, peoples and markets. By combining the disciplinary tools of history and anthropology, Cohn helped birth a generation of historiography that has examined how the discursive categories of religion, caste and community (approximate to ‘ethnicity’ in South Asian usage) were deeply molded, and in some instances created, by bureaucratic attempts to rationalize and systematize the exercise of colonial power over diverse peoples (Nicholas B. Dirks, Castes of Mind). These colonial knowledge systems not only helped colonial officials to think about India and Indians but subsequently affected how Indians of all classes, castes and religions came to think about themselves in relation to one another and to the state. The anti-colonial nationalism of the late British Raj, far from freeing India of colonial categories and divisions, demonstrated their enduring and deepening power.

When discontent with British rule began to ferment in various forms of nationalist organizing and mobilization in the late nineteenth century, a preoccupation emerged among Indian minorities—Muslims, Untouchables, Sikhs, and even the relatively small community of Eurasians (later known as Anglo-Indians)—that swaraj (self-rule), or indeed independence, would ultimately create a tyranny of the majority. Would the British Raj simply be replaced by a Hindu Raj, in which minorities would lose their already tenuous position in politics and society?


B. R. Ambedkar

Fear ran deepest among Muslims, who had been scapegoated by the British as the group responsible for the Sepoy Rebellion of 1857. Their fears were not irrational, for the Indian National Congress, as the largest expression of the nationalist movement, struggled to appear as anything but a party of English-educated elite Hindus. Despite Gandhi’s exhortation of personal moral conversion to a universal regard for all people, his message came packaged in the iconic form and practice of a deeply religious Hindu ascetic. Gandhi famously disagreed with the desire of B. R. Ambedkar, a leader of the Untouchables, to abolish the caste ‘system’. Muslims and other minorities called for ‘separate electorates,’ protected seats and separate voting mechanisms, to ensure that minorities were represented.

In part to pacify the anxieties of minorities and in part to further a ‘divide and rule’ agenda to prolong colonial rule, the British responded with a series of Round Table Conferences from 1930 to 1932 in which India’s minorities represented their views. This resulted in the Communal Award of separate electorates for Muslims, Buddhists, Sikhs, Indian Christians, Anglo-Indians, Europeans and Depressed Classes (Scheduled Castes). Gandhi’s opposition rested on the principle that separate electorates would only impede unity and sow greater division, both in the movement to end British rule and in the hope of a unified nation thereafter. Yet in the Poona Pact of September 1932 Gandhi acquiesced to separate electorates while coercing Ambedkar, through a fast unto death, to renounce them for Dalits.


Mohammad Ali Jinnah

British colonial knowledge had constructed blunt categories of India’s minorities, which failed to acknowledge their internal diversity. Muslims included numerous sects, schools of jurisprudence, regions and languages. Eurasians were divided internally by region (north, south, Burma), occupation (railways, government services, private trade and industry), lineage (Portuguese, English, Dutch, French) and class. The same was true of other minorities, and yet the British insisted upon dealing with each group by recognizing an organization and its leader as the ‘sole spokesman’ for that ‘community’s’ interests. For Muslims it was Mohammad Ali Jinnah and the Muslim League (Ayesha Jalal, The Sole Spokesman). For Eurasians (Anglo-Indians) it was the All India Anglo-Indian Association, led by Sir Henry Gidney (1919–42) and then by Frank Anthony (1942 onward), which by no means could claim a membership sufficient to represent the interests of a majority of Anglo-Indians.

Who is allowed to speak for the group? Which voices are suppressed or silenced? These are crucial questions for historians who seek to reconstruct accurately the textures and contours of a group’s thinking over time: its unity and disunity, its internal dynamics, and the ways its members see themselves and others. Otherwise the scholar will only be able to conjure up a historical narrative that coheres with the sympathies of power, but gets no closer to representing the group on its own terms. The archive is often limited in what it can say, for it too is a construction of power: the editorial discretion of a newspaper, the policy and practice of record keeping and classification in an organization or a government, and the status and education implicit in any literary production. This has been a foremost concern and debate of Subaltern historiography in South Asia (see the journal Subaltern Studies and Gayatri Spivak, “Can the Subaltern Speak?”), and a motivating problem addressed by Anthro-History.

The scholarship on the mixed-race people of colonial South Asia manifests some of these problems. Some histories have been written by important Anglo-Indian leaders and politicians, such as Herbert Alick Stark and Frank Anthony, constituting less an academic history than their own rhetorical attempt to shape Anglo-Indians’ view of themselves and others’ views of Anglo-Indians. Indeed, these constitute primary sources that portray particular dominant, though not representative, perspectives within the community. Even serious academic studies have erred by leaning too heavily on official sources to substantiate the community’s attitudes (e.g., Alison Blunt, Domicile and Diaspora) or by inordinate attachment to a social-scientific theory such as “marginality” to explain the social position and self-consciousness of Anglo-Indians, at times entertaining untenable generalizations and ignoring facts (see Noel P. Gist and Roy Dean Wright, Marginality and Identity, or Paul Frederick Cressey, “The Anglo-Indians”). Other studies are too narrowly focused on Anglo-Indians of a particular place and time to include much dialogue with the greater Anglo-Indian community or with other interlocutors such as the state (e.g., Kuntala Lahiri-Dutt, In Search of a Homeland, or Robyn Andrews, Christmas in Calcutta).

The new monograph by Uther Charlton-Stevens, Anglo-Indians and Minority Politics in South Asia: Race, Boundary Making and Communal Nationalism (London: Routledge, 2018), is a deeply textured historical study of the Eurasian community over its lengthy history. Uninterested in presenting a uniform narrative, Charlton-Stevens digs deeply into diverse sources to show the various interlocutors that Anglo-Indians and their leaders had, and the often discordant positions they took with respect to their own history, concepts of race, Indian nationalism, the colonial state, and plans for their post-colonial future. Anglo-Indians were neither univocal nor insular. Views among Anglo-Indians were diverse and power over them was contested. Charlton-Stevens skillfully traces these crisscrossing strands to show that Anglo-Indians were embedded in a web of local, colonial and international discourses, interacting with and speaking about concepts as diverse and far-reaching as nationhood and national self-determination, Zionism, and eugenics. Although the community had a sole spokesman as far as government was concerned, the voices of dissenting and contesting positions were louder and clearer than prior scholarship has ever made out.

Charlton-Stevens refreshingly situates the question of Anglo-Indian identity in the crucial context of the race and eugenic theories current from the late 19th to the mid-20th centuries. He explores in depth the writings of two Anglo-Indian figures who were not community leaders, yet who articulated complex accounts of mixed race. Millicent Wilson of Bangalore argued that Anglo-Indians’ whiteness (and thus superiority) should be acknowledged on the supposed grounds of the dominance of white genes, and thus their predominance in mixed-race people. Wilson regarded Americans and Australians as exemplars of the success of whitening an admittedly hybrid race. In effect she argued against extreme theories of racial purity, while continuing to support a concept of racial hierarchy that presumed the relative superiority of whiteness (Charlton-Stevens, 177–79, 194–96). Though Wilson is seldom referenced in other studies of Anglo-Indians, Charlton-Stevens shows that her work was read and responded to by Anglo-Indians, and that she engaged in disputes with Anglo-Indian leaders and critiqued those who promoted Anglo-Indian emigration from India. Though not conforming to the official positions of the Anglo-Indian Association, Wilson surely represents a strand of Anglo-Indian thinking on race.

Quite different from Wilson’s belief in a racial hierarchy into which she wanted to insinuate Anglo-Indians as ‘white’ stand the writings of the Anglo-Indian social scientist Cedric Dover. Contesting the alleged superiority of racial purity, Dover argued instead that hybridization promoted genetic vigor. He predicted that mixed races would therefore define the future and spell the ultimate end of racial difference. He was a vocal opponent of the Nazi eugenics of racial purity, while himself promoting a eugenics of genetic mixing. As for his own community, Dover believed Anglo-Indians should identify as ‘Eurasians,’ a more expansive category than ‘Anglo-Indian,’ and forge a pan-Eurasian solidarity with other Eurasians outside British India. This view was largely at odds with the stated aims and positions of the community’s official leaders. While Dover’s book most explicitly directed at Anglo-Indians is noted in the historiography, Charlton-Stevens goes further, demonstrating the effects and resonances of Dover’s ideas and other works on Anglo-Indian discourse about themselves and their future. At the same time, drawing on Nico Slate’s Colored Cosmopolitanism: The Shared Struggle for Freedom in the United States and India (Cambridge, MA: Harvard University Press, 2012), he shows how Dover found, through his academic work in the United States and the examples of W. E. B. Du Bois and Booker T. Washington, a model of mixed-race success that supported his claims and that he recommended to Anglo-Indians (Charlton-Stevens, 191–96).

Charlton-Stevens then carefully explores the numerous projects Anglo-Indians undertook as they prepared for a post-colonial future. Several schemes proposed domestic colonial settlements—Abbott Mount (1920s), Whitefield (1882), McCluskigunge (1933) (Charlton-Stevens, 179–91). Others suggested overseas colonization—of the Andaman and Nicobar Islands (in 1922–23 and 1946), or the creation of a “Eurasia” in the former German New Guinea with League of Nations support, an idea which surfaced in the 1930s and again in the 1950s (196–206). The Anglo-Indian promoters of these projects envisioned a degree of self-sufficiency, “emancipation” from dependency and colonial oppression, and a “national homeland.” Through a close reading of correspondence, committee reports, organizational records, and letters to the editor in Anglo-Indian and English-language church-sponsored newspapers in India, Charlton-Stevens shows that these aims had not merely an incidental resonance with but a direct connection to larger international discourses on race and to the post-World War I “balkanization” that accompanied ethnic or racial conceptions of nationality and national self-determination, and that their promoters drew on foreign models such as the Zionist success in the Palestine Mandate. Finally, numerous other associations and individuals promoted emigration—especially in the two years between the end of World War II and Independence—contrary to the stated position of the All India Anglo-Indian Association that the community should remain in India. This even included as unlikely a destination as Brazil: under a scheme ideologically branded as “Mestizism,” its promoters believed that as a mixed-race Christian people Anglo-Indians would be accepted in a largely mixed-race Christian country. Others mainly sought to settle elsewhere within the British Commonwealth.

These are but a few of the most significant contributions of Charlton-Stevens’ book, which I have selected because they break new ground in foregrounding how diverse Anglo-Indians were in their thought, despite being forced to accept a sole spokesman who at times was the target of considerable resistance, and how far they engaged with broader Indian and international discourses. Charlton-Stevens achieves this textured treatment of the ideas of Anglo-Indians on their own terms through a close, broad and critical reading of the archive, as well as (in parts not discussed above) ethnographic work and oral history, highlighting the value of non-textual sources to a thoroughgoing historical account that interrogates power, expects diversity, and eschews easy generalizations.

Brent Howitt Otto is a graduate student in UC Berkeley’s Department of History.

Graduate Forum: The Radical African American Twentieth Century

This is the third in a series of commentaries in our Graduate Forum on Pathways in Intellectual History, which is running this summer. The first piece was by Andrew Klumpp, the second by Cynthia Houng.

This third piece is by guest contributor Robert Greene II.

“Remember the ladies.” This is a line from Abigail Adams’ famous letter to her husband, John Adams, defending the idea of rights and equality for women. “Remember the ladies,” however, could easily also serve as the defining idea of modern African American intellectual history. Many historians of the African American intellectual tradition have taken great pains to emphasize the importance—indeed, the centrality—of African American women to that intellectual milieu. At the same time, other fundamental questions have been raised: not just whom to privilege in this new turn in African American intellectual history, but what sources are appropriate for intellectual history. Finally, the ways in which the public remembers the past animate newer trends in African American intellectual history. In short, African American intellectual history’s recent historiographic turns offer much food for thought for all intellectual historians.


The field of African American intellectual history has come a long way since the heyday of August Meier and Earl E. Thorpe, both prominent in the then-nascent field in the 1960s. Meier’s Negro Thought in America, 1880-1915 and Thorpe’s The Mind of the Negro: An Intellectual History of Afro-Americans set the standard for African American intellectual history for decades to come. Both books, however, focused heavily on male intellectuals, and, along with so much of African American history up until the late 1980s, left out the important voices of many African American women.

The rise of historians like Evelyn Brooks Higginbotham in the early 1990s ushered in new ways of understanding the intersection of race and gender throughout American history. Her book Righteous Discontent (1993) and essay “African American Women’s History and the Metalanguage of Race” (1992) both provided templates for melding women’s history and African American history into texts that became essential works for understanding the past through viewpoints and sources normally ignored by most male historians.

Today, the field of African American intellectual history is being shaped by the evolution of two related fields: African American women’s history and Black Power studies. Both have attempted to overturn older assumptions about African American history by focusing on previously marginalized sources and historical figures. Much of the recent historiographic movement in African American history—namely, a deeper understanding of Black Nationalism and its relationship to broader ideological trends in both Black America and the African Diaspora—would not have been possible without both a deeper understanding of the importance of gender to African American history and a willingness to expand the definition of who counts as an “important” intellectual “worthy” of study.

In the last year alone, numerous books about the intersection of Black Nationalism and gender have challenged earlier assumptions about the histories of both in relation to African American history. Both Keisha Blain’s Set the World On Fire: Black Nationalist Women and the Global Struggle for Freedom (University of Pennsylvania Press, 2018) and Ashley D. Farmer’s Remaking Black Power: How Black Women Transformed an Era (UNC Press, 2017) stretch the time period in which historians should understand the origins of Black Power—moving beyond the 1960s-era context and situating Black Power and larger Black Nationalist trends within a long era of resistance and struggle led and strategized by African American women.

Set the World on Fire follows up on other works about the Black Nationalism of the 1920s, arguing that it did not end with Marcus Garvey’s deportation from the United States in 1927. Instead, argues Blain, it was women such as his spouse Amy Jacques Garvey who kept Black Nationalist fervor alive across the United States. Meanwhile, Farmer’s book shows how the ideas of women associated with the Black Power movement of the 1960s owe a great deal to the longer arc of radical black women’s history in the twentieth century—from the agitation of black women within the Communist left of the 1930s and stretching well into the 1970s and 1980s. For Farmer, the history of a radical black nationalism does not end with the collapse of the Black Panther Party in the late 1970s.


Amy Jacques Garvey, with her husband, Marcus Garvey.


Derrick White’s The Challenge of Blackness: The Institute of the Black World and Political Activism in the 1970s (Gainesville: University Press of Florida, 2011)

Meanwhile, other trends within African American intellectual history point to the utilization of previously ignored or forgotten sources to provide a deeper understanding of the past. Derrick White’s The Challenge of Blackness: The Institute of the Black World and Political Activism in the 1970s (University Press of Florida, 2011) argues for diving deeper into relatively recent African American intellectual history to provide a fuller picture of the post-Civil Rights Movement era. For White, the Institute of the Black World, an African American think tank, was an important ideological clearinghouse not just for African Americans but for the broader Left in the 1970s.


A third movement within the field is the study of the writing of African American history itself. Pero Dagbovie has led the way here, writing several key works detailing the rise of African American history over a broad timespan. Works such as African American History Reconsidered (University of Illinois Press, 2010) and The Early Black History Movement (University of Illinois Press, 2007) detail not only historiographic trends in the field but also the ways in which the institutions necessary for the growth of African American history were born and nurtured against the backdrop of Jim Crow segregation.

Finally, attention to memory has changed the way African American intellectual historians think about the intersection of ideas with public discourse. In reality, much of what these historians mean by “memory” concerns forgetting by the public at large. Books such as Jeanne Theoharis’s A More Beautiful and Terrible History (Beacon Press, 2018) emphasize how much of the American mainstream media—along with most politicians—has been complicit in hiding the deeper, more complicated histories of the Black freedom struggle in the United States.

African American intellectual history offers plenty of new opportunity for scholars interested in linking intellectual history to other sub-fields. African American activists and intellectuals never existed in a vacuum, whether geographic or ideological. They made alliances with a variety of groups and forces, all for the sake of freedom across the African diaspora. The new turns in African American intellectual history reflect this aspect of black history.

Robert Greene II is a Visiting Assistant Professor of History at Claflin University. He studies American intellectual and political history since 1945 and is the book review editor for the Society for U.S. Intellectual History.

Personal Memory and Historical Research

By Contributing Editor Pranav Kumar Jain


Eric Hobsbawm, Interesting Times (2002)

During a particularly bleak week in the winter of 2013, I picked up a copy of Eric Hobsbawm’s modestly titled autobiography Interesting Times: A Twentieth-Century Life (2002), perhaps under the (largely correct) impression that the sheer energy and power of Hobsbawm’s prose would provide a stark antidote to the dullness of a Chicago winter. I had first encountered Hobsbawm the year before, when he died a day before my first history course in college. The sadness of the news hung heavy over the initial course meeting, and I was curious to find out more about the historian who had left such a deep impression on my professor and several classmates. Over the course of the next year or so, I read through several of his most important works, and ending with his autobiography seemed like a logical way of contextualizing his long life and rich corpus.

Needless to say, Interesting Times was an absolutely riveting read. Hobsbawm’s attempt to bring his unparalleled observational skills and analytical shrewdness to his own work and career revealed a life full of great adventures and strong convictions. Yet throughout the book, apart from marveling at his encounters with figures like the gospel singer and civil rights activist Mahalia Jackson, I was most struck by what can best be described as the intersection of historical technique and personal memory. Though much of the narrative is written from his prodigious memory, Hobsbawm regularly references his own diary, especially when discussing his days as a Jewish teenager in early 1930s Berlin and then as a communist student in Cambridge. In one instance, the diary allows his later self to understand why he didn’t mingle with his schoolmates in mid-1930s London (it indicates that he considered himself intellectually superior to the whole lot). In another, it helps him chart, at least in his view, the beginnings of peculiarly British Marxist historical interpretations. Either way, I was fascinated by his reading of what counts as a primary source written by himself. He naturally brought the historian’s skepticism to this unique primary source, repeatedly questioning his own memory against the version of events described in the diary and vice versa. This intermixing of personal memory with the historian’s interrogation of primary sources has long stayed with me, and I have repeatedly sought out similar examples since then.

In recent years, there has been a remarkable flowering of memoirs and autobiographies written by historians. Amongst others, the memoirs of Carlos Eire and Sir J. H. Elliott stand out. Eire’s unexpectedly hilarious but ultimately depressing tale of his childhood in Cuba is a moving attempt to recover the happy memories long buried by the upheavals of the Cuban Revolution. In a different vein, Elliott ably dissects the origins of his interest in Spanish history and a Protestant Englishman’s experiences in the Catholic south. The intermingling of past and present is a constant theme. Elliott, for example, was once amazed by the response of a Barcelona traffic policeman when he asked him for directions in Catalan instead of Castilian. “Speak the language of the empire [Hable la lengua del imperio],” said the policeman, the exact phrase that Elliott had read in a pamphlet from the 1630s attacking Catalans for not speaking the “language of the empire.” As Elliott puts it, “it seemed as though, in spite of the passage of three centuries, time had stood still” (25). (There are also three memoirs by Sheila Fitzpatrick and one by Hanna Holborn Gray, none of which, regrettably, I have yet had a chance to read.)


Mark Mazower, What You Did Not Tell (2017)

Yet, while Eire’s and Elliott’s memoirs are notably rich in a number of ways, they have little to offer in terms of the Hobsbawm-like connection between historical examination and personal memory that had started me on this quest in the first place. However, What You Did Not Tell (2017), Mark Mazower’s recent account of his family’s life in Tsarist Russia, the Soviet Union, Nazi Germany, France, and the tranquil suburbs of London, provides a wonderful example of the intriguing nexus between historical research and personal memory.

In some ways, it is quite natural that I have come to see affinities between Hobsbawm’s autobiography and Mazower’s memoir. Both are stories of an exodus from persecution in Central and Eastern Europe to the relative safety and stability of London. But the surface-level similarities perhaps stop there. While Hobsbawm, of course, is writing mostly about himself, Mazower is keen to tell the remarkable story of his grandfather’s transformation from a revolutionary Bundist leader in the early twentieth century to a somewhat conservative businessman in London (though, as he learned in the course of his research, the earlier revolutionary connections did not fade away easily, and his grandparents’ household was always a welcome refuge for activists and revolutionaries from across the world). However, on a deeper level, the similarities persist. For one thing, the attempt to measure personal memories against a historical record of some sort is what drives most of Mazower’s inquiries in the memoir.

The memories at work in Mazower’s account are of two kinds. The first, mostly about the grandfather he never met (Max Mazower died six years before his grandson Mark was born), are inherited from others and largely concern silences—hence the title What You Did Not Tell. Though Max Mazower had been, among other things, a revolutionary pamphleteer in the Russian Empire, he kept quiet about his radical past during his later years. His grandfather’s silence appears to have perturbed Mazower, and it plays a central role in his bid to dig deeper in archives across Europe to uncover traces of his grandfather’s extraordinary life. The other kind of memories, largely about his father, are more personal and urge Mazower to understand how his father came to be the gentle, practical, and affectionate man that Mazower remembered him to be. Naturally, in the course of phoning old acquaintances, acquiring information through historian friends with access to British Intelligence archives, and poring over old family documents such as diaries and letters, Mazower’s memories have been both confirmed and challenged.


Mark Mazower

In the case of his grandfather, while Mazower is able to solve quite a few puzzles through expert archival work and informed guessing, some continue to evade satisfactory resolution. Perhaps the thorniest amongst these is the parentage of his father’s half-brother André. Though most relatives knew that André had been Max’s son from a previous relationship with a fellow revolutionary named Sofia Krylenko, André himself came to doubt his paternity later in life, a fact that much disturbed Mazower’s father, who saw André’s doubts as a repudiation of their father and everything he stood for. Mazower’s own research into André’s paternity through naturalization papers and birth certificates appears to have both further confused and enlightened him. While he concludes that André’s doubts were most likely unfounded, a tinge of unresolved tension about the matter runs through the pages.

With his father, Mazower is naturally more certain of things. Yet, as he writes towards the beginning of the memoir, after his father’s death he realized that there was much about his life that he did not know. In most cases, he was pleasantly surprised by his discoveries. For instance, he seems to take satisfaction in the fact that, in his younger years, his father had a more competitive streak than he had previously assumed. But reconstructing the full web of his father’s friendships proved quite challenging. At one point, he called a local English police station from Manhattan to ask if they could check on a former acquaintance of his father whose phone had been busy for a few days. After listening to him sympathetically, the duty sergeant told him that this was not reason enough for the police to go knocking on someone’s door. Only later did he learn that he had been unable to reach the person in question because she had been living in a nursing home and had died around the time that he had first tried to get in touch.

The Pandora’s box opened by my reading of Hobsbawm’s autobiography is far from shut. It has led me from one memoir to another, and each has presented a distinct dimension of the question of how historical research intersects with personal memory. In Hobsbawm’s case, there was the somewhat peculiar spectacle of a historian using a primary source written by himself. Mazower’s multi-layered account, of course, moves across multiple types of memories, interweaving straightforward archival research with personal impressions.

While these different examples hamper any attempt at offering a grand theory of personal memory and historical research, they do suggest an intriguing possibility. The now not-so-incipient field of memory studies has spread its wings, from memories of the Reformation in seventeenth- and eighteenth-century England to testimonies of Nazi and Soviet soldiers who fought at the Battle of Stalingrad. Perhaps it is now time to bring historians themselves under its scrutinizing gaze.

Pranav Kumar Jain is a doctoral student at Yale where his research focuses on religion and politics in early modern England.

A Pandemic of Bloodflower’s Melancholia: Musings on Personalized Diseases

By Editor Spencer J. Weinreich


Peter Bloodflower? (actually Samuel Palmer, Self Portrait [1825])

I hasten to assure the reader that Bloodflower’s Melancholia is not contagious. It is not fatal. It is not, in fact, real. It is the creation of British novelist Tamar Yellin, her contribution to The Thackery T. Lambshead Pocket Guide to Eccentric & Discredited Diseases, a brilliant and madcap medical fantasia featuring pathologies dreamed up by the likes of Neil Gaiman, Michael Moorcock, and Alan Moore. Yellin’s entry explains that “The first and, in the opinion of some authorities, the only true case of Bloodflower’s Melancholia appeared in Worcestershire, England, in the summer of 1813” (6). Eighteen-year-old Peter Bloodflower was stricken by depression, combined with an extreme hunger for ink and paper. The malady abated in time and young Bloodflower survived, becoming a friend and occasional muse to Shelley and Keats. Yellin then reviews the debate about the condition among the fictitious experts who populate the Guide: some claim that the Melancholia is hereditary and has plagued all successive generations of the Bloodflower line.

There are, however, those who dispute the existence of Bloodflower’s Melancholia in its hereditary form. Randolph Johnson is unequivocal on the subject. ‘There is no such thing as Bloodflower’s Melancholia,’ he writes in Confessions of a Disease Fiend. ‘All cases subsequent to the original are in dispute, and even where records are complete, there is no conclusive proof of heredity. If anything we have here a case of inherited suggestibility. In my view, these cannot be regarded as cases of Bloodflower’s Melancholia, but more properly as Bloodflower’s Melancholia by Proxy.’

If Johnson’s conclusions are correct, we must regard Peter Bloodflower as the sole true sufferer from this distressing condition, a lonely status that possesses its own melancholy aptness. (7)

One is reminded of the grim joke, “The doctor says to the patient, ‘Well, the good news is, we’re going to name a disease after you.’”

Master Bloodflower is not alone in being alone. The rarest disease known to medical science is ribose-5-phosphate isomerase deficiency, of which only one sufferer has ever been identified. Not much commoner is Fields’ Disease, a mysterious neuromuscular disease with only two observed cases, the Welsh twins Catherine and Kirstie Fields.

Less literally, Bloodflower’s Melancholia, RPI-deficiency, and Fields’ Disease find a curious conceptual parallel in contemporary medical science—or at least the marketing of contemporary medical science: personalized medicine and, increasingly, personalized diseases. Witness a recent commercial for a cancer center, in which the viewer is told, “we give you state-of-the-art treatment that’s very specific to your cancer.” “The radiation dose you receive is your dose, sculpted to the shape of your cancer.”

Put the phrase “treatment as unique as you are” into a search engine, and a host of providers and products appear, from rehab facilities to procedures for Benign Prostatic Hyperplasia, from fertility centers in Nevada to orthodontist practices in Florida.

The appeal of such advertisements is not difficult to understand. Capitalism thrives on the (mass-)production of uniqueness. The commodity becomes the means of fashioning a modern “self,” what the poet Kate Tempest describes as “The joy of being who we are / by virtue of the clothes we buy” (94). Think, too, of the “curated” content online advertisers supply, as though carefully and personally selected just for you. It goes without saying that we want this in healthcare, to feel that the doctor is tailoring their questions, procedures, and prescriptions to our individual case.

And yet, though we can and should see the market mechanisms at work beneath “treatment as unique as you are,” the line encapsulates a very real medical-scientific phenomenon. In 1998, for example, Genentech and UCLA released Trastuzumab, an antibody extremely effective against (only) those breast cancers linked to the overproduction of the protein HER2 (roughly one-fifth of all cases). More ambitiously, biologist Ross Cagan proposes to use a massive population of genetically engineered fruit flies, keyed to the makeup of a patient’s tumor, to identify potential cocktails among thousands of drugs.

Personalized medicine does not depend on the wonders of twenty-first-century technology: it is as old as medicine itself. Ancient Greek physiology posited that the body was made up of four humors—blood, phlegm, yellow bile, and black bile—and that each person combined the four in a unique proportion. In consequence, treatment, be it medicine, diet, exercise, physical therapies, or surgery, had to be calibrated to the patient’s particular humoral makeup. Here, again, personalization is not an illusion: professionals were customizing care, using the best medical knowledge available.

Medicine is a human activity, and thus subject to the variability of human conditions and interactions. This may be uncontroversial: even when the diagnoses are identical, a doctor justifiably handles a forty-year-old patient differently from a ninety-year-old one. Even a mild infection may be lethal to an immunocompromised body. But there is also the long and shameful history of disparities in medical treatment among races, ethnicities, genders, and sexual identities—to say nothing of the “health gaps” between rich and poor societies and rich and poor patients. For years, AIDS was a “gay disease” or confined to communities of color, while cancer only slowly “crossed the color line” in the twentieth century, as a stubborn association with whiteness fell away. Women and minorities are chronically under-medicated for pain. If medication is inaccessible or unaffordable, a “curable” condition—from tuberculosis (nearly two million deaths per year) to bubonic plague (roughly 120 deaths per year)—is anything but.

Let us think with Bloodflower’s Melancholia, and with RPI-deficiency and Fields’ Disease. Or, let us take seriously the less-outré individualities that constitute modern medicine. What does that mean for our definition of disease? Are there (at least) as many pneumonias as there have ever been patients with pneumonia? The question need not detain medical practitioners too long—I suspect they have more pressing concerns. But for the historian, the literary scholar, and indeed the ordinary denizen of a world full to bursting with microbes, bodies, and symptoms, there is something to be gained in probing what we talk about when we talk about a “disease.”


Colonies of M. tuberculosis

The question may be put spatially: where is disease? Properly schooled in the germ theory of disease, we instinctively look to the relevant pathogens—the bacterium Mycobacterium tuberculosis as the avatar of tuberculosis, the human immunodeficiency virus as that of AIDS. These microscopic agents often become actors in historical narratives. To take one eloquent example, Diarmaid MacCulloch writes, “It is still not certain whether the arrival of syphilis represented a sudden wanderlust in an ancient European spirochete […]” (95). The price of evoking this historical power is anachronism, given that sixteenth-century medicine knew nothing of spirochetes. The physician may conclude from the mummified remains of Ramses II that it was M. tuberculosis (discovered in 1882), and thus tuberculosis (clinically described in 1819), that killed the pharaoh, but it is difficult to know what to do with that statement. Bruno Latour calls it “an anachronism of the same caliber as if we had diagnosed his death as having been caused by a Marxist upheaval, or a machine gun, or a Wall Street crash” (248).

The other intuitive place to look for disease is the body of the patient. We see chicken pox in the red blisters that form on the skin; we feel the flu in fevers, aches, coughs, shakes. But here, too, analytical dangers lurk: many conditions are asymptomatic for long periods of time (cholera, HIV/AIDS), while others’ most prominent symptoms are only incidental to their primary effects (the characteristic skin tone of Yellow Fever is the result of the virus damaging the liver). Conversely, Hansen’s Disease (leprosy) can present in a “tuberculoid” form that does not cause the stereotypical dramatic transformations. Ultimately, diseases are defined through a constellation of possible symptoms, any number of which may or may not be present in a given case. As Susan Sontag writes, “no one has everything that AIDS could be” (106); in a more whimsical vein, no two people with chicken pox will have the same pattern of blisters. And so we return to the individuality of disease. Is disease, then, no more than a cultural construction, a convenient umbrella term for the countless micro-conditions that show sufficient similarities to warrant amalgamation? Possibly. But the fact that no patient has “everything that AIDS could be” does not vitiate the importance of describing these possibilities, nor their value in defining “AIDS.”

This is not to deny medical realities: DNA analysis demonstrates, for example, that the Mycobacterium leprae preserved in a medieval skeleton found in the Orkney Islands is genetically identical to modern specimens of the pathogen (Taylor et al.). But these mental constructs are not so far from how most of us deal with most diseases, most of the time. Like “plague,” at once a biological phenomenon and a cultural product (a rhetoric, a trope, a perception), so for most of us Ebola or SARS remain caricatures of diseases, terrifying specters whose clinical realities are hazy and remote. More quotidian conditions—influenza, chicken pox, athlete’s foot—present as individual cases, whether our own or those around us, analogized to the generic condition by memory and common knowledge (and, nowadays, internet searches).

Perhaps what Bloodflower’s Melancholia—or, if you prefer, Bloodflower’s Melancholia by Proxy—offers is an uneasy middle ground between the scientific, the cultural, and the conceptual; between the nebulous idea of “plague,” the social problem of a plague, and the biological entity Yersinia pestis. That middle ground is the individual person and the individual body, possibly infected with the pathogen, possibly to be identified with other sick bodies around her, but, first and last, a unique entity.


Newark Bay, South Ronaldsay

Consider the aforementioned skeleton of a teenage male, found when erosion revealed a Norse Christian cemetery at Newark Bay on South Ronaldsay (one of the Orkney Islands). Radiocarbon dating can place the burial somewhere between 1218 and 1370, and DNA analysis demonstrates the presence of M. leprae. The team that found this genetic signature was primarily concerned with the scientific techniques used, the hypothetical evolution of the bacterium over time, and the burial practices associated with leprosy.

But this particular body produces its particular knowledge. To judge from the remains, “the disease is of long standing and must have been contracted in early childhood” (Taylor et al., 1136). The skeleton, especially the skull, indicates the damage done in a medical sense (“The bone has been destroyed…”), but also in the changes wrought to his appearance (“the profile has been greatly reduced”). A sizable lesion has penetrated through the hard palate all the way into the nasal cavity, possibly affecting breathing, speaking, and eating. This would also have been an omnipresent reminder of his illness, as would the several teeth he had probably lost (1135).

What if we went further? How might the relatively temperate, wet climate of the Orkneys have impacted this young man’s condition? What treatments were available for leprosy in the remote maritime communities of the medieval North Sea—and how would they interact with the symptoms caused by M. leprae? Social and cultural history could offer a sense of how these communities viewed leprosy; clinical understandings of Hansen’s Disease some idea of his physical sensations (pain—of what kind and duration? numbness? fatigue?). A forensic artist, with the assistance of contemporary symptomatology, might even conjure a semblance of the face and body our subject presented to the world. Of course, much of this would be conjecture, speculation, imagination—risks, in other words, but risks perhaps worth taking to restore a few tentative glimpses of the unique world of this young man, who, no less than Peter Bloodflower, was sick with an illness all his own.

What has Athens to do with London? Plague.

By Editor Spencer J. Weinreich


Map of London by Wenceslas Hollar, c.1665

It is seldom recalled that there were several “Great Plagues of London.” In scholarship and popular parlance alike, only the devastating epidemic of bubonic plague that struck the city in 1665 and lasted the better part of two years holds that title, which it first received in early summer 1665. To be sure, the justice of the claim is incontrovertible: this was England’s deadliest visitation since the Black Death, carrying off some 70,000 Londoners and another 100,000 souls across the country. But note the timing of that first conferral. Plague deaths would not peak in the capital until September 1665, the disease would not take up sustained residence in the provinces until the new year, and the fire was more than a year in the future. Rather than any special prescience among the pamphleteers, the nomenclature reflects the habit of calling every major outbreak in the capital “the Great Plague of London”—until the next one came along (Moote and Moote, 6, 10–11, 198). London experienced a major epidemic roughly every decade or two: recent visitations had included 1592, 1603, 1625, and 1636. That 1665 retained the title is due in no small part to the fact that no successor arose; this was to be England’s last outbreak of bubonic plague.

Serial “Great Plagues of London” remind us that epidemics, like all events, stand within the ebb and flow of time, and draw significance from what came before and what follows after. Of course, early modern Londoners could not know that the plague would never return—but they assuredly knew something about its past.

Early modern Europe knew bubonic plague through long and hard experience. Ann G. Carmichael has brilliantly illustrated how Italy’s communal memories of past epidemics shaped perceptions of and responses to subsequent visitations. Seventeenth-century Londoners possessed a similar store of memories, but their plague-time writings mobilize a range of pasts and historiographical registers that includes much more than previous epidemics or the history of their own community: from classical antiquity to the English Civil War, from astrological records to demographic trends. Such richness accords with the findings of the formidable scholarly phalanx investigating “the uses of history in early modern England” (to borrow the title of one edited volume), which informs us that sixteenth- and seventeenth-century English people had a deep and sophisticated sense of the past, instrumental in their negotiations of the present.

Let us consider a single, iconic strand in this tapestry: invocations of the Plague of Athens (430–26 B.C.E.). Jacqueline Duffin once suggested that writing about epidemic disease inevitably falls prey to “Thucydides syndrome” (qtd. in Carmichael 150n41). In the centuries since the composition of the History of the Peloponnesian War, Thucydides’s hauntingly vivid account of the plague (II.47–54) has influenced writers from Lucretius to Albert Camus. Long lost to Latin Christendom, Thucydides was slowly reintegrated into Western European intellectual history beginning in the fifteenth century. The first (mediocre) English edition appeared in 1550, superseded in 1628 with a text by none other than Thomas Hobbes. For more than a hundred years, then, Anglophone readers had access to Thucydides, while Greek and Latin versions enjoyed a respectable, if not extraordinary, popularity among the more learned.


Michiel Sweerts, Plague in an Ancient City (1652), believed to depict the Plague of Athens

In 1659, the churchman and historian Thomas Sprat, booster of the Royal Society and future bishop of Rochester, published The Plague of Athens, a Pindaric versification of the accounts found in Thucydides and Lucretius. Sprat’s Plague has been convincingly interpreted as a commentary on England’s recent political history—viz., the Civil War and the Interregnum (King and Brown, 463). But six years on, the poem found fresh relevance as England faced its own “too ravenous plague” (Sprat, 21). The savvy bookseller Henry Brome, who had arranged the first printing, brought out two further editions in 1665 and 1667. Because the poem was prefaced by the relevant passages of Hobbes’s translation, an English text of Thucydides was in print throughout the epidemic. It is of course hardly surprising that at moments of epidemic crisis the locus classicus for plague should sell well: plague-time interest in Thucydides is well attested before and after 1665, in England and elsewhere in Europe.

But what does the Plague of Athens do for authors and readers in seventeenth-century London? As the classical archetype of pestilence, it functions as a touchstone for the ferocity of epidemic disease and a yardstick by which the Great Plague could be measured. The physician John Twysden declared, “All Ages have produced as great mortality and as great rebellion in Diseases as this, and Complications with other Diseases as dangerous. What Plague was ever more spreading or dangerous than that writ of by Thucidides, brought out of Attica into Peloponnesus?” (111–12).

One flattering rhymester welcomed Charles II’s relocation to Oxford with the confidence that “while Your Majesty, (Great Sir) shines here, / None shall a second Plague of Athens fear” (4). In a less reassuring vein, the societal breakdown depicted by Thucydides warned England what might ensue from its own plague.

Perhaps with that prospect in mind, other authors drafted Thucydides as their ally in catalyzing moral reform. The poet William Austin (who was in the habit of ruining his verses by overstuffing them with classical references) seized upon the Athenians’ passionate devotions in the face of the disaster (History, II.47). “Athenians, as Thucidides reports, / Made for their Dieties new sacred courts. / […] Why then wo’nt we, to whom the Heavens reveal / Their gracious, true light, realize our zeal?” (86). In a sermon entitled The Plague of the Heart, John Edwards enlisted Thucydides in the service of his conceit of a spiritual plague that was even more fearsome than the bubonic variety:

The infection seizes also on our memories; as Thucydides tells us of some persons who were infected in that great plague at Athens, that by reason of that sad distemper they forgot themselves, their friends and all their concernments [History, II.49]. Most certain it is that by the Spirituall infection men forget God and their duty. (8)

Not dissimilarly, the tailor-cum-preacher Richard Kingston paralleled the plague with sin. He characterizes both evils as “diffusive” (23–24), citing Thucydides to the effect that the plague began in Ethiopia and moved thence to Egypt and Greece (II.48).

On the supposition that, medically speaking, the Plague of Athens was the same disease they faced, early modern writers treated it as a practical precedent for prophylaxis, treatment, and public health measures. Thucydides was one of several classical authorities cited by the Italian theologian Filiberto Marchini to justify open-field burials, based on their testimony that wild animals shunned plague corpses (Calvi, 106). Rumors of plague-spreading also stoked interest in the History, because Thucydides records that the citizens of Piraeus believed the epidemic arose from the poisoning of wells (II.48; Carmichael, 149–50).


Peter Paul Rubens, Hippocrates (1638)

It should be noted that Thucydides was not the only source for early modern knowledge of the Plague of Athens. One William Kemp, extolling the preventative virtues of moderation, tells his readers that it was temperance that preserved Socrates during the disaster (58–59). This anecdote comes not from Thucydides but from Claudius Aelianus, who writes of the philosopher’s constitution and moderate habits, “[t]he Athenians suffered an epidemic; some died, others were close to death, while Socrates alone was not ill at all” (Varia historia, XIII.27, trans. N. G. Wilson). (Interestingly, 1665 saw the publication of a new translation of the Varia historia.) Elsewhere, Kemp relates how Hippocrates organized bonfires to free Athens of the disease (43), a story that originates with the pseudo-Galenic On Theriac to Piso but probably reached England via Latin intermediaries and/or William Bullein’s A Dialogue Against the Fever Pestilence (1564). Hippocrates’s name, and his supposed victory over the Plague of Athens, was used to advertise cures and preventatives.


With the exception of Sprat—whose poem was written in 1659—these are all fleeting references, but that is in some sense the point. The Plague of Athens, Thucydides, and his History had entered the English imaginary, a shared vocabulary for thinking about epidemic disease. To quote Raymond A. Anselment, Sprat’s poem (and other invocations of the Plague of Athens) “offered through the imitation of the past an idea of the present suffering” (19). In the desperate days of 1665–66, the mere mention of Thucydides’s name, regardless of the subject at hand, would have been enough to conjure the specter of the Athenian plague.

Whether or not one built a public health plan around “Hippocrates’s” example, or looked to the History of the Peloponnesian War as a guide to disease etiology, the Plague of Athens exerted an emotional and intellectual hold over early modern English writers and readers. In part, this was merely a sign of the times: early modern Europeans were profoundly invested in the past as a mirror for and guide to the present and the future. In England, the Great Plague came at the height of a “rage for historical parallels” (Kewes, 25)—and no corner of history offered more distinguished parallels than classical antiquity.

And let us not undersell the affective power of such parallels. The value of recalling past plagues was the simple fact of their being past. Awful as the Plague of Athens had been, it had eventually passed, and Athens still stood. Looking backwards was a relief from a present dominated by the epidemic, and from the plague’s warped temporality: the interruption of civic and liturgical rhythms and the ordinary cycle of life and death. Where “an epidemic denies time itself” (Calvi, 129–30), history restores it, and offers something like orientation—even, dare we say, hope.


A Story of Everything

By guest contributor Nuala F. Caomhánach


Nasser Zakariya, A Final Story: Science, Myth, and Beginnings (University of Chicago Press, 2017)

In his A Final Story: Science, Myth, and Beginnings (2017), Nasser Zakariya pries open a Latourian black box to reveal how natural philosophers and later scientists constructed “scientific epics,” using four possible “genres of synthesis”—historic, fabular, scalar, foundational—to frame all branches of scientific knowledge. Their totalizing aspirations displaced outliers, contradictions, and obstructions to elevate a universal, global history of the universe. Zakariya highlights the paradox of the process of science from the 1830s to the present: the parallel forces of narration and scientific explanatory methods merely continued to confirm discursive, epistemological and ontological pluralism. The desire to tame this pluralism legitimized the boundaries of science through pedagogy, priority and authority. A panel of historians recently met at New York University to discuss the book, including the author (UC Berkeley), Myles Jackson (NYU), Hent de Vries (NYU), and Marwa Elshakry (Columbia University), moderated by Stefanos Geroulanos (NYU).


Prof. Myles Jackson (NYU)

Jackson agreed that the invention of a myth or tradition “usually deals with origin stories and tend to be universal.” Tradition “result[s] from some sort of conflict, or debate, shaken identity, boundary dispute and…[has] a moral dimension.” Jackson emphasized how critical the 1820s and 1830s proved as science began to specialize, alongside William Whewell’s invention of the term ‘scientist’ (1834). He wholeheartedly agreed with Zakariya’s interpretation that for such natural philosophers as John Herschel, the “scientific context is irrelevant, precisely because science is universal.” Jackson elaborated on the conflictual divisions between artisans and natural philosophers, whereby the makers of scientific instrumentation—crucial for advances in science—were denied the status of philosophers themselves. This division proved social, cultural and political at the turn of the nineteenth century, as knowledge became commodified and natural philosophers legitimized their role as creators and curators of science. Jackson mapped out the “contextual moral” of this transition, pointing to the Industrial Revolution and its effects on Handwerk and Kopfwerk (handwork and headwork). Natural philosophers insisted that artisans “should reveal their secrets so that their knowledge could be managed and applied universally, a great Enlightenment trope.” Their interest in economic efficiency focused on replacing artisan skills and human calculators with controllable machines. For these natural philosophers, “the key was the unity of science serving as models for other forms of knowledge.” Yet, as Jackson concluded, “there is an ethics for scientific imperative… the usefulness of useless knowledge… a moral argument that we must do it because it is about knowledge itself.”


Prof. Nasser Zakariya (UC Berkeley)

Zakariya agreed that there are “still richer contexts” in the analysis of “matters of material practices.” He acknowledged that his actors “were deeply engaged in reconstruction of both technical craft they were working through, and the theorization of that technical craft.” Discourse drew Zakariya away from material practice, toward his actors’ resistance to a historical synthesis. Their anxieties rested with who they imagined had the expertise to undertake this synthesis. Therefore, the synthesis “starts to construct… despite their democratizing impulses… a particular kind of elite that will carry out that democratization.” For Herschel, an author like Alexander von Humboldt in his Kosmos suggested that “if [synthesis] were possible… people like Humboldt [were the philosophers] to do it and yet we find Humboldt is insufficient to be able to do this.” For Zakariya, what is at stake are the discursive maneuvers in trying to articulate “what is and what is not possible” within these genres. His actors did not claim that synthesis as such was impossible, only that a historical narrative of the universe was not.


Prof. Hent de Vries (NYU)

For de Vries, what is at stake is contemporary scholarship that has returned to the “age-old appeal to myth.” He met Zakariya’s use of the term “myth” with suspicion, albeit agreeing with its premise. Zakariya echoed Adorno’s and Horkheimer’s concerns in Dialectic of Enlightenment by arguing that “[J]ust as myths already entail enlightenment, with every step enlightenment entangles itself more deeply in mythology. Receiving all its subject matter from myths, in order to destroy them, it falls as judge under the spell of myth” (10). Myth and enlightenment co-evolve into a constricting knot, despite the notion that the foundation of the inductive sciences was based on “the rejection of tradition, mythic authority” (10). With additional knowledge of the physical and natural world, de Vries pointed out the possibility “that there is ‘a final story’ to be told about the emergence of the frames or ‘genres for synthesizing’ knowledge in question.” He emphasized this as “a meta- or mega-narrative, a myth of myth,” but problematized whether final theories of stories “offer just that,” because they are built on “empirical finalities that are… particular, not general and decidedly partial, but also on account of a fundamental, call it transcendentally grounded, incompleteness, of sorts.” “Or is it?” he asked.


Prof. Marwa Elshakry (Columbia)

Elshakry also probed Zakariya’s categories of “myth,” “epic,” and “universal histories,” out of “genuine curiosities.” She found the main tension was conceptual, not semantic, and was “connected ultimately to [the] alpha and omega of universal history, with myth concerning ultimately and uncomfortably the notion of final ends [and] epic… primarily a concern with origins.” The quest of the scientists, as they vacillated “between the known and unknown,” is to begin to recognize that “this very heroic quest” may also reveal the “story of self-destruction rather than point the way to cures and wonders or the idea that being human… engages us in an extended historical process of self-destruction.” She wondered about the logic behind the pursuit of “scientific realism as a Hegelian process of negation and death.” Surely this pursuit suggested that humans can “induce and deduce our own ultimate species death and extinction… and yet we cannot.” Therefore, there was an inherent tension between “the secular humanist order and a sacred one.” She concluded with a tantalizing question: “what is the final story—if in our own minds [our] science narratives or cosmic epics come up with a good origin, but [we seem in our] collective species being imaginaries incapable of dealing with the problem of death itself.”

Zakariya tackled these questions eloquently. He explained how these scientists did not endorse myth uncritically, acknowledging their awareness of the paradoxes they had adopted. This paradox was a “tendency to have this eruption of a kind of mythic status to the project of knowledge, despite the project of knowledge seeing itself often as undercutting the grounds upon [which] myth stands.” These natural philosophers’ and scientists’ totalizing ambitions forced them to question the very axioms with which the framework was constructed. Zakariya noted the constant reinscription “of the work of doing the totalizing” as these men argued that science was the most effective and natural discipline to tell this scientific epic. Their frameworks were limitless, but as they enlarged these structures, the edges became frayed, and they were forced to brood over questions that “[brought] us back to critiques of reason.”

In response to Elshakry, Zakariya revealed that she had uncovered “a number of elements [he] hadn’t quite thought about.” In answer, he discussed Hermann von Helmholtz’s views on the idea of universal history. In a period when thermodynamics was emerging around the contradictory concepts of entropy, enthalpy, and conservation, the new physics began to suggest the impossibility of an infinitely old universe. By integrating thermodynamics into a scientific epic, Helmholtz realized that one must “bear up to this idea that it spells a conclusion of ourselves.” Similarly, these epics, for Zakariya, “forc[e] us to dwell on our mortality as a species.”

This book is a must for scholars in both the sciences and the humanities. Zakariya’s intervention, fusing the physical, natural and human histories, shows how the historical narrative in epic form was not self-evident, and a “strict chronology was not the ultimate arbiter” (126). Political contexts and contests influenced competing worldviews of humanity, the Earth, and the universe, in the professional and the public sphere. He demonstrates how a scientific worldview grows from the kind of questions asked, how these physical and metaphysical spaces are symbiotic, complementing and contradicting at the same moment, and how many universes can emerge. At the core of this narrative is a selection of personalities, from Mary Somerville to Steven Weinberg, who oversaw the totalizing visions circulating between professional and popular epics. As they shoehorned these visions into a single narrative, some made the synthesis, many others sank, and some were transformed, leaving behind the forces, traces, and circumstances they had come up against. Slowly, the “suburban position of humanity and the earth” revealed the real limits of science (313). In this anxiety, the voice of the scientists metamorphosed into the only voice for the planet itself, thus claiming hegemony over the history of the universe. The reader travels through Zakariya’s mindfully researched and vividly written tales of the attempt to stage the construction of a whole of knowledge, of everything. Thus, “whatever the future condition of species being and knowing, the universal human story must be maintained in the generic form of an epic” (339).

Nuala F. Caomhánach is a Ph.D. student in the History Department at New York University and a research associate in the Invertebrate Zoology Department at the American Museum of Natural History.

A Man Walks into a Bar; or, the Possibilities of the Individual in International History

By Editor Sarah Claire Dunstan

One summer’s afternoon in 1923, a French barrister was enjoying a drink in a Parisian café. A man of broad experience and education, the barrister was also a medical doctor who had served in the First World War. This service had allowed him to become a French citizen in 1915, a privilege denied previously because he was a native of the former Kingdom of Dahomey, now a French colonial territory.


Comtesse de Ségur of the Comédie Française

Kojo Tovalou Houénou was not just from Dahomey; he also claimed the title of Prince on the basis that his mother was the sister of the last King. Contemporaries and later scholars doubted the veracity of this claim, but it made him of much interest to the Parisian dailies. In their pages, tales of his exploits amongst bohemian circles – notably his on-again, off-again affair with the Comtesse de Ségur of the Comédie Française – were reported with glee.

On this particular August afternoon, Houénou was simply a French man. Or at least he was until a group of drunk Americans sat down at a table nearby. He thought little of them until they began to object, loudly, to his presence. The waiters, virtuous Frenchmen one and all, refused to eject Houénou from the café, but the Americans grew rowdier. Finally, the foreigners stood up, dragged him from the café, beat him up, and threw him in the gutter. This example of American racism shocked Houénou, awakening him to the reality of black experiences outside of la belle France. He resolved to do all that he could to extend and uphold the principles of French civilization and to protect the less fortunate amongst his race. To this end, Houénou founded the Ligue Universelle pour la défense de la race noire and its journal, Les Continents. This very tale was printed in one of the journal’s early issues and reiterated as the origin story for the Ligue by other press outlets such as the African American journal the Crisis and Marcus Garvey’s newspaper, the Negro World, as well as by Houénou himself in speeches delivered to mainly black audiences in Paris and New York.

Although primarily concerned with abuses being perpetrated against the indigenous populations of the French colonies, Les Continents became one of the first francophone print forums for collaborations between African American activists and thinkers and their French counterparts, crafting a bridge between Harlem and the Parisian left bank. The Ligue itself had a mission statement that articulated its desire to ‘develop the bonds of solidarity and universal brotherhood between all members of the black race.’ Celebrated Harlem Renaissance figures from Alain Locke and Langston Hughes through to Countee Cullen published in the journal. Under Houénou’s leadership, the group built relationships with the American National Association for the Advancement of Colored People and Marcus Garvey’s Universal Negro Improvement Association. As a result, the Ligue has received some scholarly attention as an institution that fostered black international solidarity (most notably in Brent Hayes Edwards’s wonderful The Practice of Diaspora, Christopher L. Miller’s Nationalists and Nomads, and Michael Goebel’s Anti-Imperial Metropolis). More than that, Houénou’s neat origin story has much in common with those employed contemporaneously by other black activists as they attempted to leverage the potential of French civilization against the specter of American racial discord and to agitate against racism in France. Insofar as the existing scholarship is concerned, Houénou tends to appear in histories of black internationalism that focus upon institutional organization or ideological mechanisms. Where his activism is given credence, it is as a corrective to the scholarship’s tendency to focus upon the African American presence in movements towards black internationalism. Always, Houénou’s experience is subsumed in the institutions he founded or participated in.


From left to right: Marc Quenum, Kojo Tovalou Houénou and Marcus Garvey in Harlem, 1924.

This is due in part to the scarcity and nature of the remaining sources. No archive holds Houénou’s personal papers. Fragments of his life have to be pieced together from newspaper articles from his heyday in the Parisian social landscape, or from letters appearing in other collections such as that of W.E.B. Du Bois. The Service de contrôle et d’assistance des indigènes, established by the French Minister for the Colonies Albert Sarraut in 1923, offers perhaps the most comprehensive chronology of Houénou’s life. Given that Sarraut utilized the Service for surveillance of those deemed threatening to the French imperial system, this record tends to emphasize Houénou’s involvement in black activist organizations rather than pay heed to his individual behavior. All the more so given the French authorities’ tendency to conflate all Pan-Africanist organization with Garveyism, and all Garveyism with insurrectionist and usually Bolshevik politics. When the Senegalese politician Blaise Diagne successfully sued Les Continents for libel in 1924, the paper and the organization folded, leaving Houénou bankrupt. He was forced to leave Paris and to renounce his diasporan affiliations (specifically any connection with Marcus Garvey) before he was allowed back into Dahomey. Black international solidarity at this moment, then, appeared to crumble in the face of the machinery of the French Third Republic.

Inverting the study to map an international history through Houénou’s individual perspective, however, changes the narrative from one of failure at the hands of an unstoppable empire. Instead it allows us to re-position the way we think about the spatial geography of black internationalism, which is often characterized in terms of experiences in Northern hemisphere metropoles. Houénou himself participated in the construction of this narrative with his repeated telling and refashioning of the café incident. The Ligue and the other black activist organizations he participated in certainly were rooted in Paris and New York. Moreover, the freedom of speech permitted in Paris, as opposed to the colonies, created a space for black internationalism that would not have been possible elsewhere. However, his own individual experiences belie the story he constructed.

In 1921, two years prior to his ‘racial awakening’, he had visited Dakar. Whilst there, he spoke to the Senegalese tirailleurs who had been abandoned by the French Government after fulfilling their conscripted duties. The reality of their exploitation was only too visible, and Houénou spoke out to local authorities about it. He was ignored. Soon afterwards he published a little-read book entitled L’Involution des métamorphoses et des métempsychoses de l’univers. In it, he attacked European assumptions of cultural superiority by arguing that each people and culture comprised equal parts of a universal civilization. Early in 1923, in the aftermath of rioting in Porto-Novo in Dahomey, he criticized the colonial administrators’ handling of the issues, to little avail. True, neither incident was quite so personal and dramatic as being beaten up in a Parisian café, but both indicate a public engagement with the question of race on an imperial, if not an international, level much earlier than narratives focusing upon the Ligue or his UNIA support allow. They also locate the site of his racial awakening outside the colonial metropole.

This reframes our understanding of the valency of a racial awakening in Paris rather than Porto-Novo or Dakar, pointing to the way that gestures of black solidarity were sometimes easier to perform in the metropole than elsewhere.  In particular, it demonstrates the crucial symbolic role that examples of US racism played in francophone black activism at this time. This is especially clear when one looks beyond Houénou’s sanctioned version of the story to the one relayed in other sources such as the Parisian press: it was a French bartender who threw Houénou from the premises and beat him, not the crowd of racist Americans who bayed for his removal.  Moreover, Houénou’s activities after the collapse of the Ligue and his departure from Paris lead the historian away from the print formulations of universal black brotherhood found in Les Continents to their application on the ground in Africa.

Hardly a year after his relocation to Dahomey, Houénou and a group of unnamed allies attempted to overthrow French colonial rule there. His movement was small and ill-equipped, and it failed spectacularly. Forced to flee to Togo, Houénou was quickly caught and imprisoned. Some reports indicate that he was incarcerated for five years, others three. What we do know is that he was never allowed to enter Dahomey again. Instead, he went to Senegal, arriving by 1930 and possibly as early as 1928, and became heavily involved in Senegalese politics. At first he supported Ngalandou Diouf against Blaise Diagne in the elections of 1932. He would switch candidates for the following election of 1934, supporting Lamine Gueye against Diouf. In both cases, Houénou applied a committed Pan-Africanism of the type that the French colonial authorities feared Garveyism represented: the call for the recognition of the equality of all races and the independence of African territories from colonial rule. Neither Diouf nor Gueye was quite so radical in his views. Indeed, Houénou’s platform was far removed from the Parisian story that played American racism off against la belle France. His early calls for universal black brotherhood had been transformed, by his treatment at the hands of the colonial authorities, into support for the total independence of Africa.

Houénou’s involvement in Senegalese politics is usually not considered in the context of black internationalism. To be strictly honest, it has not exactly earned him a noteworthy place in the annals of Senegalese history either. He met an ignominious end in the electoral campaign of 1936 when the meeting he was running exploded into violence. Nevertheless, by focusing on Houénou’s own story, rather than solely upon his involvement in the international and diasporic institutions he helped to build, it is possible to shift the geography of black internationalism away from imperial metropoles back to the African continent.

Sarah Claire Dunstan is an ARC Postdoctoral Fellow with the International History Laureate at the University of Sydney (@IntHist). She is an intellectual historian of twentieth-century France and the United States with a particular interest in questions of race, rights and gender. She can be found on Twitter @sarahcdunstan.

The Historical Origins of Human Rights: A Conversation with Samuel Moyn

By guest contributor Pranav Kumar Jain


Professor Samuel Moyn (Yale University)

Since the publication of The Last Utopia: Human Rights in History, Professor Samuel Moyn has emerged as one of the most prominent voices in the field of human rights studies and modern intellectual history. I recently had a chance to interview him about his early career and his views on human rights and recent developments in the field of history.

Moyn was educated at Washington University in St. Louis, where he studied history and French literature. In St. Louis, he fell under the influence of Gerald Izenberg, who nurtured his interest in modern French intellectual history. After college, he proceeded to Berkeley to pursue his doctorate under the supervision of Martin Jay. However, unexcited at the prospect of becoming a professional historian, he left graduate school after taking his orals and enrolled at Harvard Law School. After a year in law school, he decided that he did want to finish his Ph.D. after all. He switched the subject of his dissertation to a topic that could be done on the basis of materials available in American libraries. Drawing upon an earlier seminar paper, he decided to write about the interwar moral philosophy of Emmanuel Levinas. After graduating from Berkeley and Harvard in 2000-01, he joined Columbia University as an assistant professor in history.

Though he had never written about human rights before, he had become interested in the subject in law school and during his work in the White House at the time of the Kosovo bombings. At Columbia, he decided to pursue his interest in human rights further and began to teach a course called “Historical Origins of Human Rights.” The conversations in this class were complemented by those with two newly arrived faculty members, Mark Mazower and Susan Pedersen, both of whom were then working on the international history of the twentieth century. In 2008, Moyn decided that it was finally time to write about human rights.


Samuel Moyn, The Last Utopia: Human Rights in History (Cambridge: Harvard University Press, 2012)

In The Last Utopia, Moyn’s aim was to contest the theories about the long-term origins of human rights. His key argument was that it was only in the 1970s that the concept of human rights crystallized as a global language of justice. In arguing thus, he sharply distinguished himself from the historian Lynn Hunt, who had suggested that the concept of human rights stretched all the way back to the French Revolution. Before Hunt published her book on human rights, Moyn told me, his class had shared some of her emphasis. Both scholars, for example, were influenced by Thomas Laqueur’s account of the origins of humanitarianism, which focused on the upsurge of sympathy in the eighteenth century. Laqueur’s argument, however, had not even mentioned human rights. Hunt’s genius (or mistake?), Moyn believes, was to make that connection.

Moyn, however, is not the only historian to see the 1970s as a turning point. In his Age of Fracture (2012), intellectual historian Daniel Rodgers has made a similar argument about how the American postwar consensus came under increasing pressure and finally shattered in the 70s. But there are some important differences. As Moyn explained to me, Rodgers’s argument is more about the disappearance of alternatives, whereas his is more concerned with how human rights survived that difficult moment. Furthermore, Rodgers’s focus on the American case makes his argument unique because, in comparison with transatlantic cases, the American tradition does not have a socialist starting point. Both Moyn and Rodgers, however, have been criticized for failing to take neoliberalism into account. Moyn says that he has tried to address this in his forthcoming book Not Enough: Human Rights in an Unequal World.

Some have come to see Moyn’s book as mostly about President Jimmy Carter’s contributions to the human rights revolution. Moyn himself, however, thinks that the book is ultimately about the French Revolution and its abandonment in modern history for an individualistic ethics of rights, including the Levinasian ethics which he once studied. In Moyn’s view, human rights are a part of this “ethical turn.” While he was working on the book, Moyn’s own thinking underwent a significant revolution. He began to explore the place of decolonization in the story he was trying to tell. Decolonization was not something he had thought about very much before but, as arguably one of the biggest events of the twentieth century, it seemed indispensable to the human rights revolution. In the book, he ended up making the very controversial argument that human rights largely emerged as the response of westerners to decolonization. Since they had now lost the interventionist tool of empire, human rights became a new universalism that would allow them to think about, care about, and perhaps intervene in places they had once ruled directly.

Though widely acclaimed, Moyn’s thesis has been challenged on a number of fronts. For one thing, Moyn himself believes that the argument of the book is problematic because it globalizes a story that is mostly about French intellectuals in the 1970s. Then there are critics such as Stefan-Ludwig Hoffmann, a German historian at UC Berkeley, who have suggested, in Moyn’s words, that “Sam was right in dismissing all prior history. He just didn’t dismiss the 70s and 80s.” Moyn says that he finds Hoffmann’s arguments compelling and that, if we think of human rights primarily as a political program, the 90s do deserve the lion’s share of attention. After all, Moyn’s own interest in the politics of human rights emerged during the 90s.


Eleanor Roosevelt with a Spanish-language copy of the Universal Declaration of Human Rights

Perhaps one of Moyn’s most controversial arguments is that the field of the history of human rights no longer has anything new to say. Most of the questions about the emergence of the human rights movements and the role of international institutions have already been answered. Given the major debate provoked by his own work, I am skeptical that this is indeed the case. Moreover, there are a number of areas that need further research. For instance, we need to better understand the connections between signature events such as the adoption of the Universal Declaration of Human Rights and the story that Moyn tells about the 1970s. But I think Moyn made a compelling point when he suggested to me that we cannot continue to constantly look for the origins of human rights. In doing so, we often run the risk of anachronism and misinterpretation. For instance, some scholars have tried to tie human rights back to early modern natural law. However, as Moyn put it, “what’s lost when you interpret early modern natural law as fundamentally a rights project is that it was actually a duties project.”

Moyn is ambivalent about recent developments in the study and practice of history in general. He thinks that the rise of global and transnational history is a welcome development because, ultimately, there is no reason for methodological nationalism to prevail. However, in his view, this has had a somewhat adverse effect on graduate training. When he went to grad school, he took courses that focused on national historiographical canons, and many of the readings were in the original language. With the rise of global history, it is not clear that such courses can be taught anymore. For instance, no teacher could demand that all the students know the same languages. Consequently, Moyn says, “most of what historians were doing for most of modern history is being lost.” This is certainly an interesting point, and it raises the question of how graduate programs can train their students to strike a balance between the wide perspectives of global history and the deep immersion of a more national approach.

In contrast with many of his fellow scholars, however, Moyn is surprisingly upbeat about the current state and future of the historical profession. He thinks that we are living in a golden age of historiography, with many impressive historians producing outstanding works. There is certainly scope for history to become more relevant to the public. But historians engaging with the public shouldn’t do so in crass ways, such as suggesting that there is a definitive relevance of history to public policy. History does not have to change radically. It can simply continue to build upon its existing strengths.


Professor Lynn Hunt (UCLA)

In the face of Lynn Hunt’s recent judgment that the field of “history is in crisis and not just one of university budgets,” this is a somewhat puzzling conclusion. However, it is one that I happen to agree with. Those who suggest that historians should engage with policy makers certainly have a point. However, instead of emphasizing the uniqueness of history, their arguments devolve into claims about what historians can do better than economists and political scientists. In the process, they often lose sight of the fact that, more than anything, historians are storytellers. History rightly belongs in the humanities rather than the social sciences. It is only in telling stories that inspire and excite the public’s imagination that historians can regain the respect that many think they have lost in the public eye.

Pranav Kumar Jain is a doctoral student in early modern history at Yale University.

In Dread of Derrida

By guest contributor Jonathon Catlin

According to Ethan Kleinberg, historians are still living in fear of the specter of deconstruction; their attempted exorcisms have failed. In Haunting History: For a Deconstructive Approach to the Past (2017), Kleinberg fruitfully “conjures” this spirit so that historians might finally confront it and incorporate its strategies for representing elusive pasts. A panel of historians recently discussed the book at New York University, including Kleinberg (Wesleyan), Joan Wallach Scott (Institute for Advanced Study), Carol Gluck (Columbia), and Stefanos Geroulanos (NYU), moderated by Zvi Ben-Dor Benite (NYU).


Left to Right: Profs Geroulanos, Gluck, Kleinberg, and Scott

History’s ghost story goes back some decades. Hayden White’s Metahistory roiled the profession in 1973 by effectively translating the “linguistic turn” of French deconstruction into historical terms: historical narratives are no less “emplotted” in genres like romance and comedy, and hence no less unstable, than literary ones. White sparked fierce debate, notably about the limits of representing the Holocaust, which took place alongside probes into the ethics of those of deconstruction’s heroes with ties to Nazism, including Martin Heidegger and Paul de Man. The intensity of these battles was arguably a product of hatred for one theorist in particular: Jacques Derrida, whose work forms the backbone of Kleinberg’s book. Yet despite decades of scholarship undermining the nineteenth-century, Rankean foundations of the historical discipline, the regime of what Kleinberg calls “ontological realism” apparently still reigns. His book is not simply the latest in a long line of criticism of such work, but rather a manifesto for a positive theory of historical writing that employs deconstruction’s linguistic and epistemological insights.

This timely intervention took place, as Scott remarked, “in a moment when the death of theory has been triumphantly proclaimed, and indeed celebrated, and when many historians have turned with relief to accumulating big data, or simply telling evidence-based stories about an unproblematic past.” She lamented that

the self-reflexive moment and the epistemological challenge associated with names like Foucault, Irigaray, Derrida, and Lacan—all those dangerous French theorists who [interrogated] the very ground on which we stood—reality, truth, experience, language, the body—that moment is said to be past, a wrong turn taken; thankfully we’re now on the right course.

Scott praised Kleinberg’s book for haunting precisely this sense of “triumphalism.”

Kleinberg began his remarks with a disappointed but unsurprised reflection that most historians still operate under the spell of what he calls “ontological realism.” This methodology is defined by the attempt to recover historical events, which, insofar as they are observable, become “fixed and immutable.” This elides the difference between the “real” past and history (writing about the past), unwittingly taking “the map of the past,” or historical representation, as the past itself. It implicitly operates as if the past is a singular and discrete object available for objective retrieval. While such historians may admit their own uncertainty about events, they nevertheless insist that the events really happened in a certain way; the task is only to excavate them ever more exactly.

This dogmatism reigns despite decades of deconstructive criticism from the likes of White, Frank Ankersmit, and Dominick LaCapra in the pages of journals like History and Theory (of which Kleinberg is executive editor), which has immeasurably sharpened the self-consciousness of historical writing. In his 1984 History and Criticism, LaCapra railed against the “archival fetishism” then evident in social history, whereby the archive became “more than the repository of traces of the past which may be used in its inferential reconstruction” and took on the quality of “a stand-in for the past that brings the mystified experience of the thing itself” (p. 92, n. 17). If historians had read their Derrida, however, they would know that the past inscribed in writing “is ‘always already’ lost for the historian.” Scott similarly wrote in a 1991 Critical Inquiry essay: “Experience is at once always already an interpretation and is in need of interpretation.” As she cited from Kleinberg’s book, meaning is produced by reading a text, not released from it or simply reflected. Every text, no matter how documentary, is a “site of contestation and struggle” (15).

Kleinberg’s intervention is to remind us that this erosion of objectivity is not just a tragic story of decline into relativism, for a deconstructive approach also frees historians from the shackles of objectivism, opening up new sources and methodologies. White famously concluded in Metahistory that there were at the end of the day no “objective” or “scientific” reasons to prefer one way of telling a story to another, but only “moral or aesthetic ones” (434). With the acceptance of what White called the “Ironic” mode, which refused to privilege certain accounts of the past as definitive, also came a new freedom and self-consciousness. Kleinberg similarly revamps White’s Crocean conclusion that “all history is contemporary history,” reminding us that our present social and political preoccupations determine which voices we seek out and allow to speak in our work. We can never tell the authoritative history of a subject, but only construct a possible history of it.

Kleinberg relays the upside of deconstructive history more convincingly than White ever did: opening up history beyond ontological realism makes room for “alternative pasts” to enter through the “present absences” in historiography. Contrary to historians’ best intentions, the hold of ontological positivism perversely closes out and renders illegible voices that do not fit with the dominant paradigm, marginalized to obscurity by the authority of each self-enclosed narrative. Hence making some voices legible too often makes others illegible; E. P. Thompson, for example, foregrounded the working class only to sideline women. The alternative is a porous account that allows itself to be penetrated by alterity and unsettled by the ghosts it has excluded. The latent ontology of holding onto some “real,” to the exclusion of others, would thus give way to a hauntology (Derrida’s play on the ambiguous sound of the French ontologie) whereby the text acknowledges and allows in present absences. Whereas for Kleinberg Foucault has been “tamed” by the historical discipline, this Derridean metaphor remains unsettling. Reinhart Koselleck’s notion of “non-simultaneity” (Ungleichzeitigkeit) further informs Kleinberg’s view of “hauntology as a theory of multiple temporalities and multiple pasts that all converge, or at least could converge, on the present,” that is, on the historian in the act of writing about the past (133).

Kleinberg fixates on the metaphor of the ghost because it represents the liminal in-between of absent presences and present absences. Ghosts are unsettling because they obey no chronology, flitting between past and present, history and dream. Yet deconstructive hauntology stands to enrich narratives because destabilized stories become porous to previously excluded voices. In his response, Geroulanos pressed Kleinberg to consider several alternative monster metaphors: ghosts who tell lies, not bringing back the past “as it really was” but making up alternative claims; and the in-between figure of the zombie, the undead past that has not passed.

Even in the theory-friendly halls of NYU, Kleinberg was met with some of the same suspicion and opposition that White faced decades ago. While all respondents conceded the theoretical import of Kleinberg’s argument, the question remained how to write such a history in practice. Preempting this question, Kleinberg’s conclusion includes a preview of a parallel book he has been writing on the Talmudic lectures Emmanuel Levinas presented in postwar Paris. He hopes to enact what Derrida called a “double session.” The first half of the book provides a secular intellectual history of how Levinas, prompted by the Holocaust, shifted from Heidegger to Talmud; the second half tells this history from the perspective of revelation, inspired by “Levinas’s own counterhistorical claim that divine and ethical meaning transcends time,” offering a religious counter-narrative to the standard secular one. Scott praised the way Kleinberg’s two narratives provide two positive accounts that nonetheless unsettle one another. Kleinberg writes: “The two sessions pull at each other, creating cracks in any one homogenous history, through which portions of the heterogeneous and polysemic past that haunts history can rise and be activated.” This “dislodging” and “irruptive” method “marks an irreducible and generative multiplicity” of alternate histories (149). Active haunting prevents Kleinberg’s method from devolving into mere perspectivism; each narrative actively throws the other into question, unsettling its authority.

A further decentering methodology Kleinberg proposed was breaking through the “analog ceiling” of print scholarship into the digital realm. Gluck emphasized how digital or cyber-history has the freedom to be more associative than chronological, interrupting texts with links, alternative accounts, and media. Thus far, however, digital history, shackled by big data and “neoempiricism,” has largely remained in the grip of ontological realism, producing linear narratives. Still, there was some consensus that these technologies might enable new deconstructive approaches. In this sense, Kleinberg writes, “Metahistory came too soon, arriving before the platforms and media that would allow us to explore the alternative narrative possibilities that were at our ready disposal” (117).

Listening to Kleinberg, I thought of a recent experimental book by Yair Mintzker, The Many Deaths of Jew Süss: The Notorious Trial and Execution of an Eighteenth-Century Court Jew (2017). It tells the story of the death of Joseph Oppenheimer, the villain of the infamous Nazi propaganda film Jud Süss (1940) produced at the behest of Nazi propaganda minister Joseph Goebbels. Mintzker was inspired by the narrative model of the film Rashomon (1950), which Geroulanos elaborated in some depth. Director Akira Kurosawa famously presents four different and conflicting accounts of how a samurai traveling through a wooded grove ends up murdered, from the perspectives of his wife, the bandit they encounter, a bystander, and the samurai himself speaking through a medium. Mintzker’s narrative choice is not postmodern fancy, but in this case a historiographical necessity. Because Oppenheimer, as a Jew, was not entitled to give testimony in his own trial, the only extant accounts available come from four similarly self-interested and conflictual sources: a judge, a convert, a Jew, and a writer. Mintzker’s work would seem to demonstrate the viability of Kleinbergian hauntology well outside twentieth-century intellectual history.

Kleinberg mused in closing: “If there’s one thing I want to do…it’s to take this book and maybe scare historians a little bit, and other people who think about the past. To make them uncomfortable, in the end, I hope, in a productive way.” Whether historians will welcome this unsettling remains to be seen, for as with White the cards remain stacked against theory. Yet our present anxiety about living in a “post-truth era” might just provide the necessary pressure for historians to recognize the ghosts that haunt the interminable task of engaging the past.


Jonathon Catlin is a PhD student in History at Princeton University. He works on intellectual responses to catastrophe in German and Jewish thought and the Frankfurt School of critical theory.


Aristotle in the Sex Shop and Activism in the Academy: Notes from the Joint Atlantic Seminar in the History of Medicine

By Editor Spencer J. Weinreich

Four enormous, dead doctors were present at the opening of the 2017 Joint Atlantic Seminar in the History of Medicine. Convened in Johns Hopkins University’s Welch Medical Library, the gathering was dominated by a canvas of mammoth proportions, a group portrait by John Singer Sargent of the four founders of Johns Hopkins Hospital. Dr. William Welch, known in his lifetime as “the dean of American medicine” (and the library’s namesake). Dr. William Halsted, “the father of modern surgery.” Dr. Sir William Osler, “the father of modern medicine.” And Dr. Howard Kelly, who established the modern field of gynecology.


John Singer Sargent, Professors Welch, Halsted, Osler, and Kelly (1905)

Beneath the gazes of this august quartet, graduate students and faculty from across the United States and the United Kingdom gathered for the fifteenth iteration of the Seminar. This year, the program’s theme was “Truth, Power, and Objectivity,” explored in thirteen papers ranging from medical testimony before the Goan Inquisition to the mental impact of First World War bombing raids, from Booker T. Washington’s National Negro Health Week to the emergence of Chinese traditional medicine. It would not do justice to the papers or their authors to cover them all in a post; instead I shall concentrate on the two opening sessions: the keynote lecture by Mary E. Fissell and a faculty panel with Nathaniel Comfort, Gianna Pomata, and Graham Mooney (all of Johns Hopkins University).

I confess to some surprise at the title of Fissell’s talk, “Aristotle’s Masterpiece and the Re-Making of Kinship, 1820–1860.” Fissell is known as an early modernist, her major publications exploring gender, reproduction, and medicine in seventeenth- and eighteenth-century England. Her current project, however, is a cultural history of Aristotle’s Masterpiece, a book on sexuality and childbirth first published in 1684 and still being sold in London sex shops in the 1930s. The Masterpiece was distinguished by its discussion of the sexual act itself, and its consideration (and copious illustrations) of so-called “monstrous births.” It was, in Fissell’s words, a “howling success,” seeing an average of one edition a year for 250 years, on both sides of the Atlantic.

It should be explained that there is very little Aristotle in Aristotle’s Masterpiece. In early modern Europe, the Greek philosopher was regarded as the classical authority on childbirth and sex, and so offered a suitably distinguished peg on which to hang the text. This allowed for a neat trick of bibliography: when the Masterpiece was bound together with other (spurious) works, like Aristotle’s Problems, the spine might be stamped with the innocuous (indeed impressive) title “Aristotle’s Works.”


El Greco, John the Baptist (c.1600)

At the heart of Aristotle’s Masterpiece, Fissell argued, was genealogy: how reproduction—“generation,” in early modern terms—occurred and how the traits of parents related to those of their offspring. This genealogy is unstable, the transmission of traits open to influences of all kinds, notably the “maternal imagination.” The birth of a baby covered in hair, for example, could be explained by the pregnant mother’s devotion to an image of John the Baptist clad in skins. Fissell brilliantly drew out the subversive possibilities of the Masterpiece, as when it “advised” women that adultery might be hidden by imagining one’s husband during the sex act, thus ensuring that the child would look like him. Central though family resemblance is to reproduction, it is “a vexed sign,” with “several jokers in every deck,” because women’s bodies are mysterious and have the power to disrupt lineage.

Fissell principally considered the Masterpiece’s fortunes in the mid-nineteenth-century Anglophone world, as the unstable generation it depicted clashed with contemporary assumptions about heredity. Here she framed her efforts as a “footnote” to Charles Rosenberg’s seminal essay, “The Bitter Fruit: Heredity, Disease, and Social Thought in Nineteenth-Century America,” which traced how discourses of heredity pervaded all branches of science and medicine in this period. George Combe’s Constitution of Man (1828), an exposition of the supposedly rigid natural laws governing heredity (with a tilt toward self-discipline and self-improvement), was the fourth-bestselling book of the period (after the Bible, Pilgrim’s Progress, and Robinson Crusoe). Other hereditarian works sketched out the gendered roles of reproduction—what children inherited from their mothers versus from their fathers—and the possibilities for human action (proper parenting, self-control) for modulating genealogy. Wildly popular manuals for courtship and marriage advised young people on the formation of proper unions and the production of healthy children, in terms shot through with racial and class prejudices (though not yet solidified into eugenics as we understand that term).

The fluidity of generation depicted in Aristotle’s Masterpiece became conspicuous against the background of this growing obsession with a law-like heredity. Take the birth of a black child to white parents. The Masterpiece explains that the mother was looking at a painting of a black man at the moment of conception; hereditarian thought identified a black ancestor some five generations back, the telltale trait slowly but inevitably revealing itself. Thus, although the text of the Masterpiece did not change much over its long career, its profile changed dramatically, because of the shifting bibliographic contexts in which it moved.

In the mid-nineteenth century, the contrasting worldviews of the Masterpiece and the marriage manuals spoke to the forms of familial life prevalent at different social strata. The more chaotic picture of the Masterpiece reflected the daily life of the working class, characterized by “contingent formations,” children born out of wedlock, wife sales, abandonment, and other kinds of “marital nonconformity.” The marriage manuals addressed themselves to upper-middle-class families, but did so in a distinctly aspirational mode. They warned, for example, against marrying cousins, precisely at a moment when well-to-do families were “kinship hot,” in David Warren Sabean’s words, favoring serial intermarriage among a few allied clans. This was a period, Fissell explained, in which “who and what counted as family was much more complex” and “contested.” The ambiguity—and power—of this issue manifested in almost every sphere, from the shifting guidelines for census-takers on how a “family” was defined, to novels centered on complex kinship networks, such as John Lang’s Will He Marry Her? (1858), to the flood of polemical literature surrounding a proposed law forbidding a man to marry his deceased wife’s sister—a debate involving many more people than could possibly have been affected by the legislation.

After a rich question-and-answer session, we shifted to the faculty panel, with Professors Comfort, Pomata, and Mooney asked to reflect on the theme of “Truth, Power, and Objectivity.” Comfort, a scholar of modern biology, began by discussing his work with oral histories—“creating a primary source as you go, and in most branches of history that’s considered cheating.” Here perfect objectivity is not necessarily helpful: “when you make yourself emotionally available to your subjects […] you can actually gain their trust in a way that you can’t otherwise.” Equally, Comfort encouraged the embrace of sources’ unreliability, suggesting that unreliability might itself be a source—the more unreliable a narrative is, the more interesting and telling it becomes. He closed with the observation that different audiences required different approaches to history and to history-writing—it is not simply a question of tone or language, but of what kind of bond the scholar seeks to form.

Professor Pomata, a scholar of early modern medicine, insisted that moments of personal contact between scholar and subject were not the exclusive preserve of the modern historian: the same connections are possible, if in a more mediated fashion, for those working on earlier periods. In this interaction, respect is of the utmost importance. Pomata quoted a line from W. B. Yeats’s “He Wishes for the Cloths of Heaven”:

I have spread my dreams under your feet;

Tread softly because you tread on my dreams.

As a historian of public health—which he characterized as an activist discipline—Mooney declared, “I’m not really interested in objectivity. […] I’m angry about what I see.” He spoke compellingly about the vital importance of that emotion, properly channeled toward productive ends. The historian possesses power: not simply as the person setting the terms of inquiry, but as a member of privileged institutions. In consequence, he called on scholars to undermine their own power, to make themselves uncomfortable.

The panel was intended to be open-ended and interactive, so these brief remarks quickly segued into questions from the floor. Asked about the relationship between scholarship and activism, Mooney insisted that passion, even anger, is essential, because it drives the scholar into the places where activism is needed—and cautioned that it is ultimately impossible to be the dispassionate observer we (think we) wish to be. With beautiful understatement, Pomata explained that she went to college in 1968, when “a lot was happening in the world.” Consequently, she conceived of scholarship as having to have some political meaning. Working on women’s history in the early 1970s, “just to do the scholarship was an activist task.” Privileging “honesty” over “objectivity,” she insisted that “scholarship—honest scholarship—and activism go together.” Comfort echoed much of this favorable account of activism, but noted that some venues are more appropriate for activism than others, and that there are different ways of being an activist.

Dealing with the horrific—eugenics was the example offered—requires, Mooney argued, both the rigor of a critical method and sensitive emotional work. Further, all three panelists emphasized crafting, and speaking in, one’s own voice, eschewing the temptation to imitate more prominent scholars and embracing the first person (and the subjectivity it marks). Voice, Comfort noted, isn’t natural, but something honed, and both he and Pomata recommended literature as an essential tool in this regard.

Throughout, the three panelists concurred in urging collaborative, interdisciplinary work, founded upon respect for other knowledges and upon humility—which, Comfort insightfully observed, is born of confidence in one’s own abilities. Asking the right questions is crucial, the key to unlocking the stories of the oppressed and marginalized within sources created by those in power. Visual sources have the potential to express things inexpressible in words—Comfort cited a photograph that wonderfully captured the shy, retiring nature of Dr. Barton Childs—but they must be used as evidence, not as mere illustrations. The question about visual sources was the last of the evening, and Professor Pomata had the last word. Her final comment offers the perfect summation of the creativity, dedication, and intellectual ferment on display in Baltimore that weekend: “we are artists, don’t forget that.”