
Alexander and Wilhelm von Humboldt, Brothers of Continuity

By guest contributor Audrey Borowski

At the beginning of the nineteenth century, a young German polymath ventured into the heart of the South American jungle, climbed the Chimborazo volcano, trekked through the Andes, conducted experiments on animal electricity, and delineated climate zones across continents. His name was Alexander von Humboldt (1769–1859). With the young French scientist Aimé Bonpland and equipped with the latest instruments, Humboldt tirelessly collected and compared data and specimens, returning after five years to Paris with trunks filled with notebooks, sketches, specimens, measurements, and observations of new species. Throughout his travels in South America, Russia, and Mongolia, he devised isotherms and formulated the idea of vegetation and climate zones. Crucially, he witnessed the continuum of nature unfold before him and set forth a new understanding of nature that has endured to this day. Man existed in a great chain of causes and effects in which “no single fact can be considered in isolation.” Humboldt sought to discover the “connections which linked all phenomena and all forces of nature.” The natural world was teeming with organic powers that were incessantly at work and which, far from operating in isolation, were all “interlaced and interwoven.” Nature, he wrote, was “a reflection of the whole” and called for a global understanding. Humboldt’s Essay on the Geography of Plants (1807) was the world’s first book on ecology: in it, plants were grouped into zones and regions rather than taxonomic units, and analogies were drawn between disparate regions of the globe.

In this manner, Alexander sketched out a Naturgemälde, a “painting of nature” that fused botany, geology, zoology, and physics in one single picture, and thereby broke away from prevailing taxonomic representations of the natural world. His was a fundamentally interdisciplinary approach, at a time when scientific inquiry was becoming increasingly specialized. The study of the natural world was no abstract endeavor and was far removed from the mechanistic philosophy that had held sway up till then. Nature was the object of scientific inquiry, but also of wonder, and as such it exerted a mysterious pull. Man was firmly relocated within a living cosmos broader than himself, one which appealed equally to his emotions and imagination. From the heart of the jungle to the summit of volcanoes, “nature everywhere [spoke] to man in a voice that is familiar to his soul,” and what spoke to the soul, Humboldt wrote, “escapes our measurements” (Views of Nature, 217–18). Here Humboldt followed in the footsteps of his lifelong friend Goethe and of the German philosopher Friedrich Schelling, in particular the latter’s Naturphilosophie (“philosophy of nature”). Nature was a living organism that had to be grasped in its unity, and its study should steer away from “crude empiricism” and the “dry compilation of facts” and instead speak to “our imagination and our spirit.” Rigorous scientific method was thus wedded to art and poetry, and the boundaries between the subjective and the objective, the internal and the external, were blurred. “With an aesthetic breeze,” Goethe wrote, Alexander had lit science into a “bright flame” (quoted in Wulf, The Invention of Nature, 146).

Alexander von Humboldt’s older brother, Wilhelm (1767–1835), a government official with a great interest in reforming the Prussian educational system, had been similarly inspired. While his brother had ventured out into the jungle, Wilhelm, for his part, devoted much of his life to the exploration of the linguistic realm, whether in his study of Native American and ancient languages or in his attempts to grasp the relation between linguistic and mental structures. Like the German philosopher and literary critic Johann Gottfried Herder before him, Humboldt posited that language, far from being merely a means of communication, was the “formative organ” (W. Humboldt, On the Diversity of Human Language, 54) of thought. According to this view, man’s judgmental activity was inextricably bound up with his use of language. Humboldt’s linguistic thought relied on a remarkable interpretation of language itself: language was an activity (energeia) as opposed to a work or finished product (ergon). In On the Diversity of Human Language Construction and its Influence on the Mental Development of the Human Species (1836), his major treatise on language, Wilhelm articulated a forcefully expressivist conception of language, bringing out the interconnectedness and organic nature of all languages and, by extension, of the various worldviews they embody. Far from being a “dead product,” an “inert mass,” language appeared as a “fully-fashioned organism” that, within the remit of an underlying universal template, was free to evolve spontaneously, allowing for maximum linguistic diversity (90).


Left to Right: Friedrich Schiller, Wilhelm von Humboldt, Alexander von Humboldt, and Johann Wolfgang von Goethe, depicted by Adolph Müller (c.1797)

To the traditional objectification of language, Wilhelm opposed a reading of language that was heavily informed by biology and physiology, in keeping with the scientific advances of his time. Within this framework, language could not be abstracted, interwoven as it was with the fabric of everyday life. Henceforth, there was no longer one “objective” way of knowing the world, but a variety of different worldviews. Like his brother, Wilhelm strove to understand the world in its individuality and totality.

At the heart of the linguistic process lay an in-built mechanism, a feedback loop that accounted for language’s ability to generate itself. This consisted in the continuous interplay between an external sound-form and an inner conceptual form, whose “mutual interpenetration constitute[d] the individual form of language” (54). In this manner, rhythms and euphonies played a role in expressing internal mental states. The dynamic and self-generative aspect of language was therefore inscribed in its very core. Language was destined to be in perpetual flux and renewal, effecting a continuous generation and regeneration of its world-making capacity. A powerfully spontaneous and autonomous force, it brought about “something that did not exist before in any constituent part” (473).

As much as the finished product could be analyzed, the actual linguistic process defied any attempt at scientific scrutiny, remaining inherently mysterious. Language may well abide by general rules, but it was fundamentally akin to a work of art, the product of a creative outburst which “cannot be measured out by the understanding” (81). Language, as much as it was rule-governed and called for empirical and scientific study, originated somewhere beyond semio-genesis. “Imagination and feeling,” Wilhelm wrote, “engender individual shapings in which the individual character […] emerges, and where, as in everything individual, the variety of ways in which the thing in question can be represented in ever-differing guises, extends to infinity” (81). Wilhelm therefore elevated language to a quasi-transcendental status, endowing it with a “life-principle” of its own. He denied that language was the product of voluntary human activity, viewing it instead as a “mental exhalation,” the manifestation of a free, autonomous spiritual force, a “gift fallen to [the nations] by their own destiny” (24) partaking in a broader spiritual mission. In this sense, the various nations constituted diverse individualities pursuing inner spiritual paths of their own, with each language existing as a spiritual creation and gradual unfolding:

If in the soul the feeling truly arises that language is not merely a medium of exchange for mutual understanding, but a true world which the intellect must set between itself and objects by the inner labour of its power, then the soul is on the true way toward discovering constantly more in language, and putting constantly more into it (135).

While he seemed to share his brother’s intellectual demeanor, Wilhelm disapproved of many of Alexander’s life-choices, from living in Paris rather than Berlin (particularly during the wars of liberation against Napoleon), which he felt was most unpatriotic, to leaving the civilized world in his attempts to come closer to nature (Wulf 151). Alexander, the natural philosopher and adventurer, for his part reproached his brother for his conservatism and social and political guardedness. In a time marred by conflict and the growth of nationalism, science, for him, had no nationality, and he followed scientific activity wherever it took him, especially to Paris, where he was widely celebrated throughout his life. In a European context of growing repression and censorship in the wake of Napoleon’s defeat, he encouraged the free exchange of ideas and information and pleaded for international collaboration between scientists and the collection of global data; truth would gradually emerge from the confrontation of different opinions. He also gave many lectures, during which he would effortlessly hop from one subject to another, helping to popularize science. More generally, he would help other scholars whenever he could, intellectually or financially.

As the ideas of 1789 failed to materialize, giving way instead to a climate of censorship and repression, Alexander slowly grew disillusioned with politics. His extensive travels had provided him with insights not only into the natural world but also into the human condition. “European barbarity,” especially in the shape of colonialism, tyranny, and serfdom, had fomented dissent and hatred. Even the newly-born American Republic, with its founding principles of liberty and the pursuit of happiness, was not immune to this scourge (Wulf 171). Man with his greed, violence, and ignorance could be as barbaric to his fellow man as he was to nature. Nature was inextricably linked with the actions of mankind, and the latter often left a trail of destruction in its wake through deforestation, ruthless irrigation, industrialization, and intensive cultivation. “Man can only act upon nature and appropriate her forces to his use by comprehending her laws,” Alexander would later write, and failure to do so would eventually leave even distant stars “barren” and “ravaged” (Wulf 353).

Furthermore, while Wilhelm was perhaps the more celebrated in his time, it was Alexander’s legacy that would prove the more enduring, inspiring new generations of nature writers and scientists: Henry David Thoreau, the American transcendentalist, who intended his masterpiece Walden as an answer to Humboldt’s Cosmos; John Muir, the great preservationist; and Ernst Haeckel, who discovered radiolarians and coined the term for our modern science of ecology. Another noteworthy influence was on Darwin and his theory of evolution: Darwin took Humboldt’s web of complex relations a step further and turned it into a tree of life from which all organisms stem. Humboldt also sought to upend the ideal of “cultivated nature,” most famously perpetuated by the French naturalist the Comte de Buffon, whereby nature had to be domesticated, ordered, and put to productive use. Crucially, he inspired a whole generation of adventurers, Darwin among them, and revolutionized scientific practice by tearing the scientist away from the library and back into the wilderness.

For all their many criticisms and disagreements, both brothers shared a strong bond. Alexander, who survived Wilhelm by twenty-four years, emphasized again and again Wilhelm’s “greatness of character” and his “depth of emotions,” as well as his “noble, still-moving soul life.” Both brothers carved out unique trajectories for themselves, Wilhelm as a jurist, statesman, and linguist, Alexander arguably as the first modern scientist; yet both remained beholden to the idea of totalizing systems, each setting forth insights that remain more pertinent than ever.


Alexander and Wilhelm von Humboldt, from a frontispiece illustration of 1836

Audrey Borowski is a historian of ideas and a doctoral candidate at the University of Oxford.

In Dread of Derrida

By guest contributor Jonathon Catlin

According to Ethan Kleinberg, historians are still living in fear of the specter of deconstruction; their attempted exorcisms have failed. In Haunting History: For a Deconstructive Approach to the Past (2017), Kleinberg fruitfully “conjures” this spirit so that historians might finally confront it and incorporate its strategies for representing elusive pasts. A panel of historians recently discussed the book at New York University, including Kleinberg (Wesleyan), Joan Wallach Scott (Institute for Advanced Study), Carol Gluck (Columbia), and Stefanos Geroulanos (NYU), moderated by Zvi Ben-Dor Benite (NYU). A recording of the lively two-hour exchange is available at the bottom of this post.


Left to Right: Profs Geroulanos, Gluck, Kleinberg, and Scott

History’s ghost story goes back some decades. Hayden White’s Metahistory roiled the profession in 1973 by effectively translating the “linguistic turn” of French deconstruction into historical terms: historical narratives are no less “emplotted” in genres like romance and comedy, and hence no less unstable, than literary ones. White sparked fierce debate, notably about the limits of representing the Holocaust, which took place alongside probes into the ethics of deconstruction’s heroes with ties to Nazism, including Martin Heidegger and Paul de Man. The intensity of these battles was arguably a product of hatred for one theorist in particular: Jacques Derrida, whose work forms the backbone of Kleinberg’s book. Yet despite decades of scholarship undermining the nineteenth-century, Rankean foundations of the historical discipline, the regime of what Kleinberg calls “ontological realism” apparently still reigns. His book is not simply the latest in a long line of criticism of such work, but rather a manifesto for a positive theory of historical writing that employs deconstruction’s linguistic and epistemological insights.

This timely intervention took place, as Scott remarked, “in a moment when the death of theory has been triumphantly proclaimed, and indeed celebrated, and when many historians have turned with relief to accumulating big data, or simply telling evidence-based stories about an unproblematic past.” She lamented that

the self-reflexive moment and the epistemological challenge associated with names like Foucault, Irigaray, Derrida, and Lacan—all those dangerous French theorists who interrogated the very ground on which we stood—reality, truth, experience, language, the body—that moment is said to be past, a wrong turn taken; thankfully we’re now on the right course.

Scott praised Kleinberg’s book for haunting precisely this sense of “triumphalism.”

Kleinberg began his remarks with a disappointed but unsurprised reflection that most historians still operate under the spell of what he calls “ontological realism.” This methodology is defined by the attempt to recover historical events, which, insofar as they are observable, become “fixed and immutable.” This elides the difference between the “real” past and history (writing about the past), unwittingly taking “the map of the past,” or historical representation, as the past itself. It implicitly operates as if the past is a singular and discrete object available for objective retrieval. While such historians may admit their own uncertainty about events, they nevertheless insist that the events really happened in a certain way; the task is only to excavate them ever more exactly.

This dogmatism reigns despite decades of deconstructive criticism from the likes of White, Frank Ankersmit, and Dominick LaCapra in the pages of journals like History and Theory (of which Kleinberg is executive editor), which has immeasurably sharpened the self-consciousness of historical writing. In his 1984 History and Criticism, LaCapra railed against the “archival fetishism” then evident in social history, whereby the archive became “more than the repository of traces of the past which may be used in its inferential reconstruction” and took on the quality of “a stand-in for the past that brings the mystified experience of the thing itself” (p. 92, n. 17). If historians had read their Derrida, however, they would know that the past inscribed in writing “is ‘always already’ lost for the historian.” Scott similarly wrote in a 1991 Critical Inquiry essay: “Experience is at once always already an interpretation and is in need of interpretation.” As she cited from Kleinberg’s book, meaning is produced by reading a text, not released from it or simply reflected. Every text, no matter how documentary, is a “site of contestation and struggle” (15).

Kleinberg’s intervention is to remind us that this erosion of objectivity is not just a tragic story of decline into relativism, for a deconstructive approach also frees historians from the shackles of objectivism, opening up new sources and methodologies. White famously concluded in Metahistory that there were at the end of the day no “objective” or “scientific” reasons to prefer one way of telling a story to another, but only “moral or aesthetic ones” (434). With the acceptance of what White called the “Ironic” mode, which refused to privilege certain accounts of the past as definitive, also came a new freedom and self-consciousness. Kleinberg similarly revamps White’s Crocean conclusion that “all history is contemporary history,” reminding us that our present social and political preoccupations determine which voices we seek out and allow to speak in our work. We can never tell the authoritative history of a subject, but only construct a possible history of it.

Kleinberg relays the upside of deconstructive history more convincingly than White ever did: opening up history beyond ontological realism makes room for “alternative pasts” to enter through the “present absences” in historiography. Contrary to historians’ best intentions, the hold of ontological positivism perversely closes out and renders illegible voices that do not fit the dominant paradigm, marginalizing them to obscurity by the authority of each self-enclosed narrative. Hence making some voices legible too often makes others illegible, as when E. P. Thompson foregrounded the working class only to sideline women. The alternative is a porous account that allows itself to be penetrated by alterity and unsettled by the ghosts it has excluded. The latent ontology of holding onto some “real,” to the exclusion of others, would thus give way to a hauntology (Derrida’s play on the ambiguous sound of the French ontologie) whereby the text acknowledges and allows in present absences. Whereas for Kleinberg Foucault has been “tamed” by the historical discipline, this Derridean metaphor remains unsettling. Reinhart Koselleck’s notion of “non-simultaneity” (Ungleichzeitigkeit) further informs Kleinberg’s view of “hauntology as a theory of multiple temporalities and multiple pasts that all converge, or at least could converge, on the present,” that is, on the historian in the act of writing about the past (133).

Kleinberg fixates on the metaphor of the ghost because it represents the liminal in-between of absent presences and present absences. Ghosts are unsettling because they obey no chronology, flitting between past and present, history and dream. Yet deconstructive hauntology stands to enrich narratives because destabilized stories become porous to previously excluded voices. In his response, Geroulanos pressed Kleinberg to consider several alternative monster metaphors: ghosts who tell lies, not bringing back the past “as it really was” but making up alternative claims; and the in-between figure of the zombie, the undead past that has not passed.

Even in the theory-friendly halls of NYU, Kleinberg was met with some of the same suspicion and opposition White met decades ago. While all respondents conceded the theoretical import of Kleinberg’s argument, the question remained how to write such a history in practice. Preempting this question, Kleinberg’s conclusion includes a preview of a parallel book he has been writing on the Talmudic lectures Emmanuel Levinas presented in postwar Paris. He hopes to enact what Derrida called a “double session.” The first half of the book provides a secular intellectual history of how Levinas, prompted by the Holocaust, shifted from Heidegger to Talmud; the second half tells this history from the perspective of revelation, inspired by “Levinas’s own counterhistorical claim that divine and ethical meaning transcends time,” offering a religious counter-narrative to the standard secular one. Scott praised the way Kleinberg’s two narratives provide two positive accounts that nonetheless unsettle one another. Kleinberg writes: “The two sessions pull at each other, creating cracks in any one homogenous history, through which portions of the heterogeneous and polysemic past that haunts history can rise and be activated.” This “dislodging” and “irruptive” method “marks an irreducible and generative multiplicity” of alternate histories (149). Active haunting prevents Kleinberg’s method from devolving into mere perspectivism; each narrative actively throws the other into question, unsettling its authority.

A further decentering methodology Kleinberg proposed was breaking through the “analog ceiling” of print scholarship into the digital realm. Gluck emphasized how digital or cyber-history has the freedom to be more associative than chronological, interrupting texts with links, alternative accounts, and media. Thus far, however, digital history, shackled by big data and “neoempiricism,” has largely remained in the grip of ontological realism, producing linear narratives. Still, there was some consensus that these technologies might enable new deconstructive approaches. In this sense, Kleinberg writes, “Metahistory came too soon, arriving before the platforms and media that would allow us to explore the alternative narrative possibilities that were at our ready disposal” (117).

Listening to Kleinberg, I thought of a recent experimental book by Yair Mintzker, The Many Deaths of Jew Süss: The Notorious Trial and Execution of an Eighteenth-Century Court Jew (2017). It tells the story of the death of Joseph Oppenheimer, the villain of the infamous Nazi propaganda film Jud Süss (1940), produced at the behest of propaganda minister Joseph Goebbels. Mintzker was inspired by the narrative model of the film Rashomon (1950), which Geroulanos elaborated in some depth. Director Akira Kurosawa famously presents four different and conflicting accounts of how a samurai traveling through a wooded grove ends up murdered, from the perspectives of his wife, the bandit they encounter, a bystander, and the samurai himself speaking through a medium. Mintzker’s narrative choice is not postmodern fancy, but in this case a historiographical necessity. Because Oppenheimer, as a Jew, was not entitled to give testimony in his own trial, the only extant accounts available come from four similarly self-interested and conflicting sources: a judge, a convert, a Jew, and a writer. Mintzker’s work would seem to demonstrate the viability of Kleinbergian hauntology well outside twentieth-century intellectual history.

Kleinberg mused in closing: “If there’s one thing I want to do…it’s to take this book and maybe scare historians a little bit, and other people who think about the past. To make them uncomfortable, in the end, I hope, in a productive way.” Whether historians will welcome this unsettling remains to be seen, for as with White the cards remain stacked against theory. Yet our present anxiety about living in a “post-truth era” might just provide the necessary pressure for historians to recognize the ghosts that haunt the interminable task of engaging the past.

 

Jonathon Catlin is a PhD student in History at Princeton University. He works on intellectual responses to catastrophe in German and Jewish thought and the Frankfurt School of critical theory.

 

 

Aristotle in the Sex Shop and Activism in the Academy: Notes from the Joint Atlantic Seminar in the History of Medicine

By Editor Spencer J. Weinreich

Four enormous, dead doctors were present at the opening of the 2017 Joint Atlantic Seminar in the History of Medicine. Convened in Johns Hopkins University’s Welch Medical Library, the seminar met in a room dominated by a canvas of mammoth proportions, a group portrait by John Singer Sargent of the four founders of Johns Hopkins Hospital. Dr. William Welch, known in his lifetime as “the dean of American medicine” (and the library’s namesake). Dr. William Halsted, “the father of modern surgery.” Dr. Sir William Osler, “the father of modern medicine.” And Dr. Howard Kelly, who established the modern field of gynecology.

John Singer Sargent, Professors Welch, Halsted, Osler, and Kelly (a.k.a. The Four Doctors), 1905. Oil on canvas, 298.6 × 213.3 cm. Johns Hopkins University School of Medicine, Baltimore, MD

Beneath the gazes of this august quartet, graduate students and faculty from across the United States and the United Kingdom gathered for the fifteenth iteration of the Seminar. This year, the program’s theme was “Truth, Power, and Objectivity,” explored in thirteen papers ranging from medical testimony before the Goan Inquisition to the mental impact of First World War bombing raids, from Booker T. Washington’s National Negro Health Week to the emergence of Chinese traditional medicine. It would not do justice to the papers or their authors to cover them all in a post; instead I shall concentrate on the two opening sessions: the keynote lecture by Mary E. Fissell and a faculty panel with Nathaniel Comfort, Gianna Pomata, and Graham Mooney (all of Johns Hopkins University).

I confess to some surprise at the title of Fissell’s talk, “Aristotle’s Masterpiece and the Re-Making of Kinship, 1820–1860.” Fissell is known as an early modernist, her major publications exploring gender, reproduction, and medicine in seventeenth- and eighteenth-century England. Her current project, however, is a cultural history of Aristotle’s Masterpiece, a book on sexuality and childbirth first published in 1684 and still being sold in London sex shops in the 1930s. The Masterpiece was distinguished by its discussion of the sexual act itself, and its consideration (and copious illustrations) of so-called “monstrous births.” It was, in Fissell’s words, a “howling success,” seeing an average of one edition a year for 250 years, on both sides of the Atlantic.

It should be explained that there is very little Aristotle in Aristotle’s Masterpiece. In early modern Europe, the Greek philosopher was regarded as the classical authority on childbirth and sex, and so offered a suitably distinguished peg on which to hang the text. This allowed for a neat trick of bibliography: when the Masterpiece was bound together with other (spurious) works, like Aristotle’s Problems, the spine might be stamped with the innocuous (indeed impressive) title “Aristotle’s Works.”


El Greco, John the Baptist (c.1600)

At the heart of Aristotle’s Masterpiece, Fissell argued, was genealogy: how reproduction—“generation,” in early modern terms—occurred and how the traits of parents related to those of their offspring. This genealogy is unstable, the transmission of traits open to influences of all kinds, notably the “maternal imagination.” The birth of a baby covered in hair, for example, could be explained by the pregnant mother’s devotion to an image of John the Baptist clad in skins. Fissell brilliantly drew out the subversive possibilities of the Masterpiece, as when it “advised” women that adultery might be hidden by imagining one’s husband during the sex act, thus ensuring that the child would look like him. Central though family resemblance is to reproduction, it is “a vexed sign,” with “several jokers in every deck,” because women’s bodies are mysterious and have the power to disrupt lineage.

Fissell principally considered the Masterpiece’s fortunes in the mid-nineteenth-century Anglophone world, as the unstable generation it depicted clashed with contemporary assumptions about heredity. Here she framed her efforts as a “footnote” to Charles Rosenberg’s seminal essay, “The Bitter Fruit: Heredity, Disease, and Social Thought in Nineteenth-Century America,” which traced how discourses of heredity pervaded all branches of science and medicine in this period. George Combe’s Constitution of Man (1828), an exposition of the supposedly rigid natural laws governing heredity (with a tilt toward self-discipline and self-improvement), was the fourth-bestselling book of the period (after the Bible, Pilgrim’s Progress, and Robinson Crusoe). Other hereditarian works sketched out the gendered roles of reproduction—what children inherited from their mothers versus from their fathers—and the possibilities for human action (proper parenting, self-control) for modulating genealogy. Wildly popular manuals for courtship and marriage advised young people on the formation of proper unions and the production of healthy children, in terms shot through with racial and class prejudices (though not yet solidified into eugenics as we understand that term).

The fluidity of generation depicted in Aristotle’s Masterpiece became conspicuous against the background of this growing obsession with a law-like heredity. Take the birth of a black child to white parents. The Masterpiece explains that the mother was looking at a painting of a black man at the moment of conception; hereditarian thought identified a black ancestor some five generations back, the telltale trait slowly but inevitably revealing itself. Thus, although the text of the Masterpiece did not change much over its long career, its profile changed dramatically, because of the shifting bibliographic contexts in which it moved.

In the mid-nineteenth century, the contrasting worldviews of the Masterpiece and the marriage manuals spoke to the forms of familial life prevalent at different social strata. The more chaotic picture of the Masterpiece reflected the daily life of the working class, characterized by “contingent formations,” children born out of wedlock, wife sales, abandonment, and other kinds of “marital nonconformity.” The marriage manuals addressed themselves to upper-middle-class families, but did so in a distinctly aspirational mode. They warned, for example, against marrying cousins, precisely at a moment when well-to-do families were “kinship hot,” in David Warren Sabean’s words, favoring serial intermarriage among a few allied clans. This was a period, Fissell explained, in which “who and what counted as family was much more complex” and “contested.” The ambiguity—and power—of this issue manifested in almost every sphere, from the shifting guidelines for census-takers on how a “family” was defined, to novels centered on complex kinship networks, such as John Lang’s Will He Marry Her? (1858), to the flood of polemical literature surrounding a proposed law forbidding a man to marry his deceased wife’s sister—a debate involving many more people than could possibly have been affected by the legislation.

After a rich question-and-answer session, we shifted to the faculty panel, with Professors Comfort, Pomata, and Mooney asked to reflect on the theme of “Truth, Power, and Objectivity.” Comfort, a scholar of modern biology, began by discussing his work with oral histories—“creating a primary source as you go, and in most branches of history that’s considered cheating.” Here perfect objectivity is not necessarily helpful: “when you make yourself emotionally available to your subjects […] you can actually gain their trust in a way that you can’t otherwise.” Equally, Comfort encouraged the embrace of sources’ unreliability, suggesting that unreliability might itself be a source: the more unreliable a narrative is, the more interesting and the more indicative of something meant it becomes. He closed with the observation that different audiences required different approaches to history and to history-writing—it is not simply a question of tone or language, but of what kind of bond the scholar seeks to form.

Professor Pomata, a scholar of early modern medicine, insisted that moments of personal contact between scholar and subject were not the exclusive preserve of the modern historian: the same connections are possible, if in a more mediated fashion, for those working on earlier periods. In this interaction, respect is of the utmost importance. Pomata quoted from W. B. Yeats’s “He Wishes for the Cloths of Heaven”:

I have spread my dreams under your feet;

Tread softly because you tread on my dreams.

As a historian of public health—which he characterized as an activist discipline—Mooney declared, “I’m not really interested in objectivity. […] I’m angry about what I see.” He spoke compellingly about the vital importance of that emotion, properly channeled toward productive ends. The historian possesses power: not simply as the person setting the terms of inquiry, but as a member of privileged institutions. In consequence, he called on scholars to undermine their own power, to make themselves uncomfortable.

The panel was intended to be open-ended and interactive, so these brief remarks quickly segued into questions from the floor. Asked about the relationship between scholarship and activism, Mooney insisted that passion and even anger are essential, because they drive the scholar into the places where activism is needed—and cautioned that it is ultimately impossible to be the dispassionate observer we (think we) wish to be. With beautiful understatement, Pomata explained that she went to college in 1968, when “a lot was happening in the world.” Consequently, she conceived of scholarship as having to have some political meaning. Working on women’s history in the early 1970s, “just to do the scholarship was an activist task.” Privileging “honesty” over “objectivity,” she insisted that “scholarship—honest scholarship—and activism go together.” Comfort echoed much of this favorable account of activism, but noted that some venues are more appropriate for activism than others, and that there are different ways of being an activist.

Dealing with the horrific—eugenics was the example offered—requires, Mooney argued, both the rigor of a critical method and sensitive emotional work. Further, all three panelists emphasized crafting, and speaking in, one’s own voice, eschewing the temptation to imitate more prominent scholars and embracing the first person (and the subjectivity it marks). Voice, Comfort noted, isn’t natural, but something honed, and both he and Pomata recommended literature as an essential tool in this regard.

Throughout, the three panelists concurred in urging collaborative, interdisciplinary work, founded upon respect for other knowledges and humility—which, Comfort insightfully observed, is born of confidence in one’s own abilities. Asking the right questions is crucial, the key to unlocking the stories of the oppressed and marginalized within sources created by those in power. Visual sources have the potential to express things inexpressible in words—Comfort cited a photograph that wonderfully captured the shy, retiring nature of Dr. Barton Childs—but they must be used as sources, not mere illustrations. The question about visual sources was the last of the evening, and Professor Pomata had the last word. Her final comment offers the perfect summation of the creativity, dedication, and intellectual ferment on display in Baltimore that weekend: “we are artists, don’t forget that.”

Ultimate Evil: Cultural Sociology and the Birth of the Supervillain

By guest contributor Albert Hawks, Jr.

In June 1938, editor Jack Leibowitz found himself in a time crunch. Needing to get something to the presses, Leibowitz approved a recent submission for a one-off round of prints. The next morning, Action Comics #1 appeared on newsstands. On the cover, a strongman in bright blue circus tights and a red cape was holding a green car above his head while people ran in fear. Other than the dynamic title “ACTION COMICS”, there was no text explaining the scene. In an amusing combination of hubris and prophecy, the last panel of Action Comics #1 proclaimed: “And so begins the startling adventures of the most sensational strip character of all time!” Superman was born.

Cover of Action Comics #1 (June 1938)

Comics are potentially incomparable resources given the cultural turn in the social sciences (a shift in the humanities and social sciences in the late twentieth century toward a more robust study of culture and meaning and away from positivism). The sheer volume of narrative—somewhere in the realm of 10,000 sequential Harry Potter books—and their social saturation—approximately 91–95% of children between the ages of six and eleven read comics regularly, according to a 1944 psychological study—remain singular today (Lawrence 2002; Parsons 1991).

Cultural sociology has shown us that “myth and narrative are elemental meaning-making structures that form the bases of social life” (Woodward, 671). In a lecture on Giacometti’s Standing Woman, Jeffrey Alexander puts forward a way of seeing iconic experiences as central, salient components of social life. He argues:

 Iconographic experience explains how we feel part of our social and physical surroundings, how we experience the reality of the ties that bind us to people we know and people we don’t know, and how we develop a sense of place, gender, sexuality, class, nationality, our vocation, indeed our very selves (Alexander, 2008, 7).

He further suggests these experiences informally establish social values (Alexander, 2008, 9). Relevant to our purposes, Alexander stresses Danto’s work on “disgusting” and “offensive” as aesthetic categories (Danto, 2003) and Simmel’s argument that “our sensations are tied to differences” with higher and lower values (Simmel, 1968).

This suggests that, in theory, the comic book is a window into the pre-existing, powerful, and often layered morals and values held by the American people, values that in turn helped build cultural moral codes (Brod, 2012; Parsons, 1991).

The comic book superhero, as invented and defined by the appearance of Superman, is a highly culturally contextualized medium that expresses particular subgroups’ anxieties, hopes, and values and their relationship to broader American society.

But this isn’t a history of comics, accidental publications, or even the most famous hero of all time. As Ursula K. Le Guin says, “to light a candle is to cast a shadow.” It was likely inevitable that the superhero—brightest of all the lights—would cast a very long shadow. Who, after all, could pose a challenge to Superman? Or what could occupy the world’s greatest detective? The world needed supervillains. The emergence of the supervillain offers a unique slice of moral history and a potentially powerful way to investigate the implicit cultural codes that shape society.

I want to briefly trace the appearance of recurring villains in comic books and note what their characteristics suggest about latent concepts of evil in society at the time. Given our limited space, I’m here only considering the earliest runs of the two most iconic heroes in comics: Superman (Action Comics #1-36) and Batman (Detective Comics #27-; Batman #1-4).

Initially, Superman’s enemies were almost exclusively one-off problems tied to socioeconomic situations. It wasn’t until June 1939 that readers met the first recurring comic book villain: the Ultrahumanite. Pursuing a lead on some run-of-the-mill racketeers, Superman comes across a bald man in a wheelchair: “The fiery eyes of the paralyzed cripple burn with terrible hatred and sinister intelligence.” His “crippled” status is mentioned regularly. The new villain wastes no time explaining that he is “the head of a vast ring of evil enterprises—men like Reynolds are but my henchmen” (Reynolds is a criminal racketeer introduced earlier in the issue), immediately signaling something new in comics. The man then formally introduces himself, not bothering with subtlety:

I am known as the ‘Ultra-humanite’. Why? Because a scientific experiment resulted in my possessing the most agile and learned brain on earth! Unfortunately for mankind, I prefer to use this great intellect for crime. My goal? Domination of the world!!

In issue 20, Superman discovers that, somehow, Ultra has become a woman. He explains to the Man of Steel: “Following my instructions, they kidnapped Dolores Winters yesterday and placed my mighty brain in her young vital body!” (Action Comics 20).

The Ultrahumanite in Dolores Winters’s body (Action Comics #20)

Superman found his first recurring foil in unfettered intellect divorced from physicality. It’s hard not to wonder if this reflected a general distrust of the ever-increasing destructive power of science as World War II dawned. It’s also fascinating to note how consistently the physical status of the Ultrahumanite is emphasized, suggesting a deep social desire for physical strength, confidence, and respect.

After Ultra’s death, our hero would not be without a domineering, brilliant opponent for long. Action Comics 23 saw the advent of Lex Luthor. First appearing as an “incredibly ugly vision” of a floating face and lights, Luthor’s identity unfolds as a mystery. Superman pursues a variety of avenues, finding only a plot to draw countries into war and thugs unwilling to talk for fear of death. Lois actually encounters Luthor first, describing him as a “horrible creature”. When Luthor does introduce himself, it nearly induces déjà vu: “Just an ordinary man—but with the brain of a super-genius! With scientific miracles at my fingertips, I’m preparing to make myself supreme master of th’ world!”

The Batman develops his first supervillain at nearly the same time as Superman. In July 1939, one month after the Ultrahumanite appeared, readers are introduced to Dr. Death. Dr. Death first appears in a lavish study speaking with a Cossack servant (subtly implying Dr. Death is anti-democratic) about the threat Batman poses to their operations. Death is much like what we would now consider a cliché of a villain—he wears a suit, sports a curled mustache and goatee and a monocle, and smokes a long cigarette while he plots. His goal: “To extract my tribute from the wealthy of the world. They will either pay tribute to me or die.” Much like Superman’s villains, he uses science—chemical weapons in particular—to advance these sinister goals. In their second encounter, Batman prevails and Dr. Death appears to burn to death. Of course, in comics the dead rarely stay that way; Dr. Death reappears the very next issue, his face horribly scarred.

The next regularly recurring villain to confront Batman appears in February 1940. Batman himself introduces the character to the reader: “Professor Hugo Strange. The most dangerous man in the world! Scientist, philosopher, and a criminal genius… little is known of him, yet this man is undoubtedly the greatest organizer of crime in the world.” Elsewhere, Strange is described as having a “brilliant but distorted brain” and a “malignant smile.” While he naturally is eventually captured, Strange becomes one of Batman’s most enduring antagonists.

The very next month, in Batman #1, another iconic villain appears: none other than the Joker himself.

Once again a master criminal stalks the city streets—a criminal weaving a web of death about him… leaving stricken victims behind wearing a ghastly clown’s grin. The sign of death from the Joker!

Also utilizing chemicals in his schemes, the Joker is portrayed as a brilliant, conniving plotter who leads the police and Batman on a wild hunt. Unique to the Joker among the villains discussed is his characterization as a “crazed killer” with no aims of world power. The Joker wants money and murder. He’s simply insane.


Some striking commonalities appear across our two early heroes’ comics. First, physical “flaws” are a critical feature. These deformities are regularly referenced, whether disability, scarring, or just a ghastly smile. Second, virtually all of these villains are genius-level intellects who use science to pursue selfish goals. And finally, among the villains, superpowers are at best a secondary feature, suggesting a close tie between physical health, desirability, and moral superiority. Danto’s aesthetic categories of “disgusting” and “offensive” certainly ring true here.

This is remarkably revealing and likely connected to deep cultural moral codes of the era. If Superman represents the “ideal type,” supervillains such as the Ultrahumanite, Lex Luthor, and the Joker are necessary and equally important iconic representations of those codes. Such a brief overview cannot definitively draw out the moral world as revealed through comics and confirmed in history. Rather, my aims have been more modest: (1) to trace the history of the birth of the supervillain, (2) to draw a connective line between the strong cultural program, materiality, and comic books, and (3) to suggest the utility of comics for understanding the deep moral codes that shape a society. Cultural sociology allows us to see comics in a new light: as an iconic representation of culture that both reveals preexisting moral codes and in turn contributes to the ongoing development of those codes as they impact social life. Social perspectives on evil are actively negotiated social constructs, and comics represent a hyper-stylized, exceedingly influential, and unfortunately neglected force in this negotiation.

Albert Hawks, Jr. is a doctoral student in sociology at the University of Michigan, Ann Arbor, where he is a fellow with the Weiser Center for Emerging Democracies. He holds an M.Div. and S.T.M. from Yale University. His research concerns comparative Islamic social movements in Southeast and East Asia in countries where Islam is a minority religion, as well as in the American civil sphere.

Melodrama in Disguise: The Case of the Victorian Novel

By guest contributor Jacob Romanow

When people call a book “melodramatic,” they usually mean it as an insult. Melodrama is histrionic, implausible, and (therefore) artistically subpar—a reviewer might use the term to suggest that serious readers look elsewhere. Victorian novels, on the other hand, have come to be seen as an irreproachably “high” form of art, part of a “great tradition” of realistic fiction beloved by stodgy traditionalists: books that people praise but don’t read. But in fact, the nineteenth-century British novel and the stage melodrama that provided the century’s most popular form of entertainment were inextricably intertwined. The two forms have been linked from the beginning: indeed, many of the greatest Victorian novels are prose melodramas themselves. But from the Victorian period on down, critics, readers, and novelists have waged a campaign of distinctions and distractions aimed at disguising and denying the melodramatic presence in novelistic forms. The same process that canonized what were once massively popular novels as sanctified examples of high art scoured those novels of their melodramatic contexts, leaving our understanding of their lineage and formation incomplete. It is commonly claimed that the Victorian novel was the last time “popular” and “high” art were unified in a single body of work. But the case of the Victorian novel reveals the limitations of such constructed, motivated narratives of cultural development. Victorian fiction was massively popular, absolutely—but that popularity rested in significant part on the presence of “low” melodrama around and within those classic works.


A poster of the dramatization of Charles Dickens’s Oliver Twist

Even today, thinking about Victorian fiction as a melodramatic tradition cuts against many accepted narratives of genre and periodization; although most scholars will readily concede that melodrama significantly influenced the novelistic tradition (sometimes to the latter’s detriment), it is typically treated as an external tradition whose features were borrowed (or else as an alien encroaching upon the rightful preserve of a naturalistic “real”). Melodrama first arose in France around the French Revolution and quickly spread throughout Europe; A Tale of Mystery, an uncredited translation from the French by Thomas Holcroft (himself a novelist), appeared in 1802 and is considered the first English melodrama. By the accession of Victoria in 1837, melodrama had long been the dominant form on the English stage. Yet major critics have shown melodramatic method to be fundamental to the work of almost every major nineteenth-century novelist, from George Eliot to Henry James to Elizabeth Gaskell to (especially) Charles Dickens, often treating these discoveries as particular to the author in question. Moreover, the practical relationship between the novel and melodrama in Victorian Britain helped define both genres. Novelists like Charles Dickens, Wilkie Collins, Edward Bulwer-Lytton, Thomas Hardy, and Mary Elizabeth Braddon, among others, were themselves playwrights of stage melodramas. But the most common connection, like film adaptations today, was the widespread “melodramatization” of popular novels for the stage. Blockbuster melodramatic productions were adapted not only from popular crime novels of the Newgate and sensation schools like Jack Sheppard, The Woman in White, Lady Audley’s Secret, and East Lynne, but also from canonical works including David Copperfield, Jane Eyre, Rob Roy, The Heart of Midlothian, Mary Barton, A Christmas Carol, Frankenstein, Vanity Fair, and countless others, often in multiple productions for each. In addition to so many major novels being adapted into melodramas, many major melodramas were themselves adaptations of more or less prominent novels, for example Planché’s The Vampire (1820), Moncrieff’s The Lear of Private Life (1820), and Webster’s Paul Clifford (1832). As in any process of adaptation, the stage and print versions of each of these narratives differ in significant ways. But the interplay between the two forms was both widespread and fully baked into the generic expectations of the novel; the profusion of adaptation, with or without an author’s consent, makes clear that melodramatic elements in the novel were not merely incidental borrowings. In fact, melodramatic adaptation played a key role in the success of some of the period’s most celebrated novels. Dickens’s Oliver Twist, for instance, was dramatized even before its serialized publication was complete! And the significant rate of illiteracy among melodrama’s audiences meant that for novelists like Dickens or Walter Scott, the melodramatic stage could often serve as the only point of contact with a large swath of the public. As the critic Emily Allen aptly writes: “melodrama was not only the backbone of Victorian theatre by midcentury, but also of the novel.”

 

This question of audience helps explain why melodrama has been separated out of our understanding of the novelistic tradition. Melodrama proper was always “low” culture, associated with its economically lower-class and often illiterate audiences in a society that tended to associate the theatre with lax morality. Nationalistic sneers at the French origins of melodrama played a role as well, as did the Victorian sense that true art should be permanent and eternal, in contrast to the spectacular but transient visual effects of the melodramatic stage. And like so many “low” forms throughout history, melodrama’s transformation of “higher” forms was actively denied even while it took place. Victorian critics, particularly those of a conservative bent, would often actively deny melodramatic tendencies in novelists whom they chose to praise. In the London Quarterly Review’s 1864 eulogy “Thackeray and Modern Fiction,” for example, the anonymous reviewer writes that “If we compare the works of Thackeray or Dickens with those which at present win the favour of novel-readers, we cannot fail to be struck by the very marked degeneracy.” The latter, the reviewer argues, tend towards the sensational and immoral, and should be approached with a “sentiment of horror”; the former, on the other hand, are marked by their “good morals and correct taste.” This is revisionary literary history, and one of its revisions (I think we can even say the point of its revisions) is to eradicate melodrama from the historical narrative of great Victorian novels. The reviewer praises Thackeray’s “efforts to counteract the morbid tendencies of such books as Bulwer’s Eugene Aram and Ainsworth’s Jack Sheppard,” ignoring Thackeray’s classification of Oliver Twist alongside those prominent Newgate melodramas. The melodramatic quality of Thackeray’s own fiction (not to mention the highly questionable “morality” of novels like Vanity Fair and Barry Lyndon), let alone the proactively melodramatic Dickens, is downplayed or denied outright. And although the review offers qualified praise of Henry Fielding as a literary ancestor of Thackeray, it ignores their melodramatic relative Walter Scott. The review, then, is not just a document of midcentury mainstream anti-theatricality, but also a document that provides real insight into how critics worked to solidify an antitheatrical novelistic canon.


Photographic print of Act 3, Scene 6 from The Whip, Drury Lane Theatre, 1909
Gabrielle Enthoven Collection, Museum number: S.211-2016
© Victoria and Albert Museum

Yet even after these very Victorian reasons have fallen aside, the wall of separation between novels and melodrama has been maintained. Why? In closing, I’ll speculate about a few possible reasons. One is that Victorian critics’ division became a self-fulfilling prophecy in the history of the novel, bifurcating the form into melodramatic “low” and self-consciously anti-melodramatic “high” genres. Another is that applying historical revisionism to the novel in this way only mirrored and reinforced a longstanding habit of melodrama’s theatrical criticism, which has itself consistently used “melodrama” derogatorily, differentiating the melodramas of which it approved from “the old melodrama”—a dynamic that took root even before any melodrama was legitimately “old.” A third factor is surely the rise of so-called dramatic realism, and the ensuing denial of melodrama’s role in the theatrical tradition. And a final reason, I think, is that we may still wish to relegate melodrama to the stage (or the television serial) because we are not really comfortable with the roles that it plays in our own world: in our culture, in our politics, and even in our visions for our own lives. When we recognize the presence of melodrama in the “great tradition” of novels, we will be better able to understand those texts. And letting ourselves find melodrama there may also help us find it in the many other parts of plain sight where it’s hiding.

Jacob Romanow is a Ph.D. student in English at Rutgers University. His research focuses on the novel and narratology in Victorian literature, with a particular interest in questions of influence, genre, and privacy.

“Doctrine according to need”: John Henry Newman and the History of Ideas

By guest contributor Burkhard Conrad O.P.L.

Any history of ideas and concepts hinges on the observation that ideas and concepts change over time. This notion seems to be so self-evident that the question of why they change is rarely addressed.

Interestingly enough, it was only during the course of the nineteenth century that the notion of a history of ideas, concepts, and doctrines became widespread. Ideas increasingly came to be seen as contingent or “situational,” as we might phrase it today. Ideas were no longer regarded as quasi-metaphysical entities unaffected by time and change. This hermeneutic shift from metaphysics to history, however, was far from sudden. It came about gradually and is still ongoing.


John Henry Newman in 1844, a year before he wrote his Essay on the Development of Christian Doctrine (portrait by George Richmond)

The theologian and controversialist John Henry Newman (1801–1890) must be regarded as one of the intellectual protagonists of this shift from metaphysics to history. An eminent intellectual within the high-church Anglican Oxford Movement, Newman decided mid-life to join the Church of Rome, eventually becoming one of the foremost voices of nineteenth-century Roman Catholicism in Britain. His influence extends well into our time, with figures such as Joseph Ratzinger (Pope Benedict XVI) paying tribute to his thought; Benedict eventually beatified Newman in 2010.

Rarely quoted in any non-theological study in the history of ideas, Newman’s work is eminently important for understanding both the quest for a historical understanding of ideas and the anxious existential situation of those thinkers who found themselves in the middle of a momentous intellectual revolution. In Newman’s day, it was not uncommon for such intellectual and personal queries to go hand in hand.

In Newman’s case, this phenomenon becomes particularly obvious when we look at his contribution to the history of ideas. In 1845, during his phase of conversion from Anglicanism to Roman Catholicism, Newman wrote his famous Essay on the Development of Christian Doctrine. I wish to focus on two of Newman’s central claims within the Essay. Firstly, he states that it is justifiable to argue that doctrines and ideas change over time. Confronting a more “scriptural”—i.e. Protestant—understanding, Newman affirms that ecclesiastical doctrines hinge not only on the acknowledgement of divine, biblical revelations, but also on a tradition of teaching that has evolved since the early church. Newman writes in one of his sermons that “Scripture (…) begins a series of developments which it does not finish; that is to say, in other words, it is a mistake to look for every separate proposition of the Catholic doctrine in Scripture” (§28). He complements this thought by adding in the Essay that “time is necessary for the full comprehension and perfection of great ideas” (29), which is to say that doctrine is not something given only once, but rather possesses a dynamic quality.

But why does doctrine evolve in the first place? Why is it important to speak of a history of ideas? It is remarkable that Newman answers those questions in much the same way as we would do today. Doctrine, according to Newman, is ever-evolving because there is a need for such transformation. Ideas are simply “keeping pace with the ever-changing necessities of the world, multiform, prolific, and ever resourceful” (56).

Newman speaks of “a gradual supply [of doctrine] for a gradual necessity” (149). The logic behind doctrinal change may thus be explained as follows: a given or conventional doctrine comes into contact with or is challenged by alternative expressions of doctrine. These alternative expressions come to be regarded as false, as heresy. Hence, an adequate doctrinal reaction is necessary. This reaction may take the form either of a doctrinal change through the absorption of novel thought, or of a transformation through counter-reaction. Indeed, Newman is even able to rejoice in the fact that false teaching may arise. He writes in the aforementioned sermon: “Wonderful, to see how heresy has but thrown that idea into fresh forms, and drawn out from it farther developments, with an exuberance which exceeded all questioning, and a harmony which baffled all criticism” (§6).


Hans Blumenberg © Bildarchiv der Universitätsbibliothek Gießen und des Universitätsarchivs Gießen, Signatur HR A 603 a

The necessity for doctrinal development, hence, is born within situations of discursive tension. These situations—the philosopher Hans Blumenberg once called them “rhetorical situations” (Ästhetische und metaphorologische Schriften, 417)—demand a continuous, diachronic stream of cognitive solutions. The need has to be answered. As Quentin Skinner once noted, “Any statement (…) is inescapably the embodiment of a particular intention, on a particular occasion, addressed to the solution of a particular problem, and thus specific to its situation in a way that it can only be naïve to try to transcend” (in Tully, Meaning and Context, 65). The need for development and change, thus, is born in distinctive historical settings with concrete utterances and equally concrete counter-utterances. That is why ideas and concepts change over time: they simply react. “One proposition necessarily leads to another” (Newman, Essay, 52). This change is required in order to come to terms with new challenges, both rhetorical and real.

To be frank, Newman would hardly agree with Skinner on the last part of his statement, namely, that statements are never able to transcend their particular context. For all his historical consciousness, Newman was, like most of his contemporaries, fixated on the idea that, despite its dynamic character, the teaching of the Church forms a harmonious, logical, and self-explaining system of thought, a higher truth. He writes, “Christianity being one, and all its doctrines are necessarily developments of one, and, if so, are of necessity consistent with each other, or form a whole” (96).

Newman was also convinced—again contrary to Quentin Skinner—that the history of theological ideas had to be looked at with a normative bias. He identified certain “corruptions” in the history of theological ideas (169). It was important to Newman to be able to distinguish between “genuine developments” and these “corruptions.” A large part of his Essay is devoted to setting out apparently objective criteria for such a normative classification.

Who is to decide what is genuine and what is corrupted? Newman’s second claim in the Essay deals with this question. According to Newman, the only true channel of any genuine tradition is found within the Roman Catholic Church, with the pope as the final and supreme arbiter. Newman could not do without such a clear attribution of doctrinal decision-making power. It was necessary “for putting a seal of authority upon those developments;” that is, those which ought to be regarded as genuine (79).

In consequence, Newman mobilized his first claim about the dynamic nature of doctrines with regard to his second claim, the idea of papal infallibility in deciding matters of doctrine. It was clear even to the nineteenth-century theologian that the notion of papal infallibility was not explicitly contemplated by anyone in the early Church. But, according to Newman, the requirement for an “infallible arbitration in religious disputes” became more and more pronounced as centuries of doctrinal disputes passed (89). And Newman says of his own century that “the absolute need of a spiritual supremacy is at present the strongest of arguments in favour of the fact of its supply” (89). The idea of infallibility thus came into existence because, according to Newman, it was needed to settle doctrinal uncertainty within his own time.


Cardinal John Henry Newman, painted in 1889 by Emmeline Deane

Once Newman had admitted the pope’s supremacy in matters of doctrine, an existential decision had to follow. He had no choice but to leave Canterbury for Rome. It is only fair to say that Newman required the idea of a final arbitrator, someone to decide on genuine doctrinal development, in order to fulfill his own need for certainty and a spiritual home. Even a sympathetic interpreter like the historian and theologian Jaroslav Pelikan had to concede “that here Newman’s existential purpose did get in the way of his historical vision” (Development of Christian Doctrine, 144).

And so Newman’s account of a history of ideas developing according to need was born as much out of an intellectual interest in the history of theological ideas as out of biographical motives. But which twenty-first-century scholar in the humanities could claim that his or her research interests had nothing to do with personal circumstances?

Burkhard Conrad O.P.L., Ph.D., taught politics at the University of Hamburg, Germany. He is now an independent scholar working for the archdiocese of Hamburg. Among his research interests are political theology, Søren Kierkegaard, and the Oxford Movement. He is a lay Dominican and writes a blog at www.rotsinn.wordpress.com.

What We’re Reading: Week of 11th September

Here are a few interesting articles and pieces we found around the web this week. If you come across something that other intellectual historians might enjoy, please let us know in the comments section.

 

Disha

Pankaj Mishra, “What Is Great About Ourselves” (LRB)

Rembert Browne, “Colin Kaepernick Has a Job” (Bleacher Report)

Toni Morrison, “The Color Fetish” (The New Yorker)

 

Spencer

Wyatt Mason, “Violence and Creativity” (NYRB)

Ruth Graham, “Could Father Mychal Judge Be the First Gay Saint?” (Slate)

Geoffrey Stone and Eric J. Segall, “Faith, Law, and Dianne Feinstein” (NYT), responding to Noah Feldman, “Feinstein’s Anti-Catholic Questions Are an Outrage” (Bloomberg)

 

Sarah

Jon Baskin, “Philosophy and the Gods of the City: Benjamin Aldes Wurgaft’s ‘Thinking in Public,’” (LARB)

Adrian Chen, “The Fake News Fallacy,” (The New Yorker)

Oona A. Hathaway and Scott J. Shapiro, “Making war illegal changed the world. But it’s becoming too easy to break the law,” (Guardian)

Parul Sehgal, “The Gloom, Doom and Occasional Joy of Writing Life,” (NYT)

 

Derek

Maura Ewing, “How One Agency is Fixing American Amnesia about Reconstruction” (Pacific Standard)

Jennet Conant, “From Triumph to Terror” (Lithub)

James McCorkle, “A History of Barbed Wire” (New England Review)

Kristin

John Lanchester, “The Case Against Civilization” (The New Yorker)

Charlotte Gao, “One Man, One Road: A Funny Tale of Civic Protest in China” (The Diplomat)

Leah Donnella, Kat Chow, and Gene Demby, “What Our Monuments (Don’t) Teach Us About Remembering the Past” (NPR)

Cynthia

Susan Sontag, “Simone Weil” (NYRB)

Okwui Enwezor with David Carrier & Joachim Pissaro, “In Conversation” (Brooklyn Rail)

Mary Jo Bang, “Five Hundred Glass Negatives” (The Paris Review)

Ken Gordon, “Narration vs. Curation” (Design Observer)

 

Erin

The fall books issue of the NYRB is excellent! I’ve enjoyed these pieces:
Tim Flannery, “Gone Fishing” (NYRB)
Geoffrey O’Brien, “Five Magnificent Years” (NYRB)
Edmund White, “Under a Spell” (NYRB)
Joyce Carol Oates, “The Poet of Freakiness” (NYRB)

Film Series:
New Yorkers should check out Anthology’s “The Cinema of Transgression: Trans Film,” screening September 15-25 and Metrograph’s “UCLA Festival of Preservation,” September 15-20.

William Plumer and the Politics of History Writing

By guest contributor Emily Yankowitz

On December 30, 1806, on the inner cover of his first attempt at writing a historical work, the New Hampshire statesman William Plumer wrote, “An historian, like a witness, is bound to relate the truth, the whole truth, & nothing but the truth.” He would take up his project of writing a “History of North America” in November 1809, after three years of research. In what appears to have been typical of Plumer’s personality, he intended to write a history of the United States government, but the project quickly expanded into “a general history of the United States” from its discovery by Europeans to his own time. It was to include accounts of administrations, laws, presidents, heads of departments, members of Congress, the judiciary, foreign relations, negotiations, relations with Indian tribes, purchases of lands, and commerce. Reaching even further into the past, he began with an overview of classical history, including the invention of hieroglyphics, and a detailed study of European political events, before arriving at the settlement of Jamestown in 1607 over 220 pages later. Yet having worked on the project for nine years and seeing little progress, Plumer unceremoniously put it aside, writing, “The undertaking I have abandoned” on the last page.


William Plumer, engraving by Charles Balthazar Julien Fevret de Saint-Mémin (1806). Photo credit: Library of Congress

A Federalist senator in a Congress dominated by President Thomas Jefferson and the Republicans, Plumer had little hope of influencing politics. Watching his vision of the world collapse around him, Plumer recalled that with nearly every measure Jefferson proposed, he was reminded of the angel’s declaration to Ezekiel, “Turn, & thou shall behold yet greater abominations” (Plumer to Jeremiah Smith, January 27, 1803, quoted in Turner, “Thomas Jefferson,” 207). These “abominations” included the Louisiana Purchase, the Twelfth Amendment, and the impeachment of New Hampshire judge John Pickering. Frustrated and alarmed, Plumer helped to plan a scheme for New England secession in 1803–1804, hoping to create a “Northern confederacy.” But the project quickly fell apart, although intransigent Federalists would take up a similar plan at the 1814–1815 Hartford Convention.

 

With his career in jeopardy and his anxieties about the future mounting, Plumer found solace in historical pursuits. To Plumer, overwhelmed by his country’s fast-paced development, history offered a method of “preserving facts & opinions” that were “rapidly hasting to oblivion” as a result of the “changes & revolution of time and parties” (May 2, 1805). Unlike other senators who indulged in horse racing and gambling, Plumer spent his free time hidden for hours in the Congressional Library, reading voraciously. This curiosity was one of Plumer’s most pronounced traits; the son of a farmer, Plumer received little formal schooling beyond elementary studies, and pursued much of his education through books.

Over time, Plumer’s intellectual interests expanded. Spotting a mound of scattered government documents in the damp, mildewed lumber room above the Senate chamber, he devoted himself to preserving them, methodically sorting through the soiled records. Over the next four years, Plumer collected journals of every Congress from 1774 to his own, enough to fill between four and five hundred bound volumes. He eventually came to possess one of the largest and most complete collections of public papers held by a private citizen, even after he donated a substantial amount to the Massachusetts Historical Society. This effort rescued valuable documents from destruction and provided Plumer with a substantial number of sources for his later historical works. According to his son, it was this collecting effort that inspired Plumer to write a history of the country (for more information, see Freeman, Affairs of Honor, 262–4).


President Thomas Jefferson, painted by Rembrandt Peale (1800)

With the end of his term approaching, Plumer set about preparing for this enormous task—consulting with government officers, copying private letters shown to him by friends, and corresponding with antiquarians and scholars. He conferred with Albert Gallatin, Secretary of the Treasury, who offered him any materials needed from the Treasury department. Not everyone was supportive—at least one friend advised Plumer to publish his history posthumously to avoid giving “mortal offence” to contemporaries (February 28, 1807). His meeting with President Jefferson showed how complex the publication of his history might be. Plumer observed that Jefferson’s “countenance […] repeatedly changed.” Jefferson expressed “uneasiness and embarrassment—at other [moments] he seemed pleased.” Seemingly affected by a range of emotions, Jefferson alternated between looking at Plumer and staring at the floor. Jefferson’s reaction perplexed Plumer, who reasoned that Jefferson must have been “embarrassed,” and “disapproved” of the project (February 4, 1807). But he also discussed Jefferson’s strange response with John Quincy Adams, who informed him that Jefferson “cannot be a lover of history,” as he did not want certain “prominent traits in his character” and “important actions in his life” to be outlined and communicated to posterity (February 9, 1807). Jefferson’s own actions appear to echo this sentiment. Out of a desire to control how he would be remembered, Jefferson later professed to have “no materials whatever” for Plumer’s project despite its usefulness to the country.

Plumer’s background and personality did not make him a particularly obvious candidate for the project. In his diary, he mulled over his doubts about his efforts, noting his personal shortcomings, the complications of his private life, and the magnitude of the project. He was not a “scholar” or a “master of the English grammar,” he noted, and could not read any foreign language or express his ideas quickly on paper. Regarding his personal life, his wife was often sick and he himself had a “weak & feeble constitution.” However, Plumer was also highly aware of the shortcomings of existing “historic performances,” namely state histories, which were written too quickly. They contained factual errors, had a “loose & slovenly” style, and “fall short of the true style & dignity of history.” He found Benjamin Trumbull’s Complete History of Connecticut to be “written in the style of a low dull Chronicle,” while James Sullivan’s History of the District of Maine was a “jumble of fact & fable” (July 22, 1806). Yet his task would take “indefatigable industry, & patient labour to render it useful to others and honorable to myself.” Virgil took twelve years to write the Aeneid, Plumer worried, while Edward Gibbon took twenty years to write The History of the Decline and Fall of the Roman Empire. Plumer would exceed both Virgil and Gibbon, devoting the remainder of his life to historical works that ultimately remained unpublished.

While Plumer believed the work would be useful for “future statesmen,” he also hoped to enhance his reputation. If he successfully produced the work, it would be an “imperishable monument that would perpetuate” his name. Highlighting the inextinguishable impact of history, Plumer noted that it would exist when “columns of marble are dissolved & crumbled to dust.” However, if he did not execute it well, it would “tarnish & destroy” the little “fame” he had acquired (July 22, 1806). Thus, writing history had political as well as personal consequences.


William Plumer, Jr., depicted in The Granite State Monthly (1889)

Plumer was not alone in using history to achieve a recognition he would never receive through politics. In fact, one of his sons, William Plumer Jr., would take up a similar project in 1830, after completing his term as a representative. Reflecting on the project, he noted that if “executed with any tolerable success, it would be a more important service rendered to the public than I can hope in any other way to perform” and he might be able to acquire a “reputation, however small” if the work was successfully produced (“Manuscript History of the United States”). While the boundaries of Plumer Jr.’s intended project were smaller (he planned to begin with Columbus’s voyage in 1492), he made little progress.

 

Unable to acquire national political fame, Plumer sought recognition through history, while also pursuing a political (though nonpartisan) agenda. Even after he formally changed his party allegiance to the Republicans, Plumer retained much of his Federalist view of the world, in part because of his own distaste for partisanship and in part because he lived in Federalist-dominated New England. In particular, much like the Federalists of the 1790s, Plumer never fully supported the existence of political parties, viewing them as agents of division that distracted men from effectively evaluating candidates based on their abilities. Just as Plumer disapproved of partisanship in politics, he also disapproved of it in historical writing. For example, he wrote that historians and biographers should have “no other object than faithfully narrate facts & justly delineate characters,” for when they “stoop to the support of a party or a sect” their “facts are misstated and their reasoning is sophistry” (May 25, 1808). Plumer argued that a historian should be “of no party in politic’s [sic] … without prejudice, & have more judgement than fancy” (October 1, 1807). Thus, for Plumer, historians did a disservice not only to the integrity of their subject, but also to the influence of their work, if they espoused partisan views.

Looking a bit further into the nineteenth century, historians would divide over whether it was acceptable to combine history and politics. In particular, following the decline of the Federalist party and the rise of Andrew Jackson, New England historians attempted to use history as a mechanism of regaining the power and influence they had lost in politics. Some followed both paths, like George Bancroft, who pursued a political career while working on his History of the United States, while others such as William Prescott and Jared Sparks believed that the two disciplines were incompatible (Cheng, The Plain and Noble Garb of Truth, 36-41). However, many members of both groups believed that history could be used as a method of advancing political agendas.

In an attempt to save their party from destruction in the wake of the Hartford Convention, some Federalists wrote historical works that tried (largely unsuccessfully) to shape how posterity remembered the event. Prompted in part by the publication of Matthew Carey’s wildly successful The Olive Branch and by the Nullification Crisis, Federalists turned to writing histories to justify their actions. These works included Theodore Lyman’s 1823 A Short Account of the Hartford Convention, Harrison Gray Otis’ 1824 Letters in Defence of the Hartford Convention, and the People of Massachusetts, and Theodore Dwight’s 1833 History of the Hartford Convention.

Eager to shape both policies and how they would be remembered, early American politicians worked both in the halls of Congress and in the pages of books. Plumer hoped to play a central role in constructing the young nation’s emerging identity and its memories of the early figures of the founding era. Thus, his historical writings—which he would continue for decades after his failed “History,” but largely never publish—serve as a reminder that our very understanding of the past has often been shaped by the individuals in the moment who had the foresight to record it. Given how the historical discipline has changed over time, it is perhaps tempting to dismiss early historians’ writings. However, they nonetheless offer a useful perspective on how contemporaries perceived the world around them and how they wanted it to be remembered.

Emily Yankowitz recently graduated from Yale University and is an incoming M.Phil. student in American History at the University of Cambridge. She is interested in the intersection of politics, culture, and memory in the early American republic.

Mystery Attracts Mystery: The Forgotten Partnership of H. P. Lovecraft and Harry Houdini

By Editor Spencer J. Weinreich

Pulp is one of the great unheralded archives of American cultural history. Ephemeral by its very nature, the pulp magazine or paperback brought millions of readers the derring-do of detectives and superheroes, the misadventures of doomed lovers, and the horrors of gruesome monsters. They were the birthplaces of Tarzan and Zorro, and published the work of such luminaries as Agatha Christie, F. Scott Fitzgerald, Mark Twain, and Tennessee Williams.


The cover of the May-June-July 1924 issue of Weird Tales

In 1924, readers of the fantasy and horror pulp Weird Tales found a more familiar figure alongside the usual crowd of ghouls, corpses, and scantily clad women. The cover story of the May–June–July issue was “Imprisoned with the Pharaohs,” by none other than Harry Houdini. The magician tells of his voyage to Egypt, where he is captured by nefarious locals and imprisoned beneath a pyramid, to be sacrificed to horrid monsters of untold age. With his trademark skills, Houdini frees himself and reaches the surface, insisting—despite his injuries—that it was nothing more than a dream.

Fans of horror fiction know this bizarre story under a different name and authorship: H. P. Lovecraft’s “Under the Pyramids.” Each in their own way icons of early twentieth-century America, Lovecraft and Houdini led strikingly different lives. The magician was an international celebrity, drawing rapturous crowds wherever he went. He performed for the Russian royal family. He amassed a personal fortune sufficient to purchase, among other things, a dress once worn by Queen Victoria (a gift for his mother) and a 1907 Voisin biplane (complete with mechanic). His funeral was attended by two thousand members of the public. By contrast, Lovecraft’s biographer S. T. Joshi holds that the writer “as he lay dying […] was envisioning the ultimate oblivion that would overtake his work.” All but one of his stories were unpublished or moldering away in back issues of pulp magazines. It was only posthumously that his writings found their audience, eventually attaining the cult status they enjoy today.

 


H. P. Lovecraft in 1934 (photograph by Lucius B. Truesdell)

The original idea for “Imprisoned with the Pharaohs” came from Houdini, whom the proprietor of Weird Tales, J. C. Henneberger, had retained as a columnist to boost flagging sales. Henneberger and Edwin Baird, the magazine’s editor, tapped Lovecraft to ghostwrite what Houdini was claiming to be a true story. “Lovecraft quickly discovered that the account was entirely fictitious, so he persuaded Henneberger to let him have as much imaginative leeway as he could in writing up the story” (Joshi, A Dreamer and a Visionary, 191).

“Imprisoned with the Pharaohs” was by no means Houdini’s only fictional exploit. As early as 1906, he had been making films of his tricks, and between 1918 and 1923 he starred in and/or produced several silent movies. Though these films do not present themselves as Houdini’s own experiences, there was little attempt to hide the fact that the magician was their raison d’être and main selling point. Their protagonists—given telling names like Harvey Hanford or Harry Harper—spend most of their time onscreen being straitjacketed, chained, thrown into rivers, suspended from airplanes or cliffs, or otherwise discomfited so as to give audiences the greatest possible opportunity to see Houdini do what he did best.

 


Harry Houdini in 1918

Lovecraft understood that readers wanted Houdini the escape artist, and he obliged. During a nocturnal visit to the Pyramids, “Houdini” is attacked, bound, and thrown into the deep recesses of an underground temple. Our hero is undaunted.

The first step was to get free of my bonds, gag, and blindfold; and this I knew would be no great task, since subtler experts than these Arabs had tried every known species of fetter upon me during my long and varied career as an exponent of escape, yet had never succeeded in defeating my methods.

At the same time, the story bears all the hallmarks of Lovecraftian “cosmic horror”: ghastly and ancient things lurking beneath ordinary life, grotesque monsters compounded from all manner of anatomies and mythologies, the inability of the human mind to comprehend the awful truth, and his unique—to put it kindly—prose style.

It was the ecstasy of nightmare and the summation of the fiendish. The suddenness of it was apocalyptic and daemoniac—one moment I was plunging agonisingly down that narrow well of million-toothed torture, yet the next moment I was soaring on bat-wings in the gulfs of hell; swinging free and swoopingly through illimitable miles of boundless, musty space; rising dizzily to measureless pinnacles of chilling ether, then diving gaspingly to sucking nadirs of ravenous, nauseous lower vacua. . . . Thank God for the mercy that shut out in oblivion those clawing Furies of consciousness which half unhinged my faculties, and tore Harpy-like at my spirit! That one respite, short as it was, gave me the strength and sanity to endure those still greater sublimations of cosmic panic that lurked and gibbered on the road ahead.

Lovecraft’s reputation is rightly tarnished by his racism. Though “Under the Pyramids” is by no means the worst offender within his corpus, Egypt offered ample scope for his prejudices: “the crowding, yelling, and offensive Bedouins,” “squalid Arab settlement,” “filthy Bedouins.” The orientalist mode is out in force, as “Houdini” and his wife arrive in Cairo only to be disappointed that “amidst the perfect service of its restaurant, elevators, and generally Anglo-American luxuries the mysterious East and immemorial past seemed very far away.” Once they journey deeper into the city, they find what they are looking for—“in the winding ways and exotic skyline of Cairo, the Bagdad of Haroun-al-Raschid seemed to live again.” Lovecraft summons every trope of the orientalized Middle East: bazaars and camels, secret tombs and perfidious natives, the call of the muezzin and the scent of spice and incense. Egypt, “Houdini” is told by one of his captors, “is very old; and full of inner mysteries and antique powers.”


Statue of Khafra, Egyptian Museum, Cairo (photograph by Juan R. Lazaro)

Within these fantasies, however, are a few kernels of genuine Egyptiana. The figureheads of “Houdini’s” Egypt, for instance, are the undead “King Khephren and his ghoul-queen Nitokris,” who reign over a legion of unnatural things. Denuded of Lovecraft’s nightmarish trappings, both are historical personages with unsavory reputations. “Khephren,” usually known as Khafra, was the builder of the second-largest pyramid at Giza and (probably) the Great Sphinx, but Herodotus and other ancient historians remember him as a cruel and heretical ruler who closed Egypt’s temples and plunged the land into misery (II.127–28). Nitocris is the subject of Egyptological debate: some scholars accept ancient accounts naming her as a pharaoh of the late Sixth Dynasty (2345–2181 B.C.E.), others deny her very existence. Describing his unease near even the smallest pyramid, “Houdini” explains, “was it not in this that they had buried Queen Nitokris alive in the Sixth Dynasty; subtle Queen Nitokris, who once invited all her enemies to a feast in a temple below the Nile, and drowned them by opening the water-gates?” An anecdote worthy of the horror writer, to be sure, but not his invention. Herodotus relates how Nitocris avenged herself on her brother’s killers:

She built a spacious underground chamber; then […] she gave a great feast, inviting to it those Egyptians whom she knew to have been most concerned in her brother’s murder; and while they feasted she let the river in upon them by a great and secret channel. This was all that the priests told of her, save that also when she had done this she cast herself into a chamber full of hot ashes, thereby to escape vengeance. (II.100, trans. A. D. Godley)

The association of Nitocris with the Pyramid of Menkaure, third and smallest of the Pyramids of Giza, comes from the priest-historian Manetho (early third century B.C.E.). He calls Nitocris “the noblest and loveliest of the women of her time, of fair complexion, the builder of the third pyramid” (The History of Egypt, 55).

Though Houdini is (justifiably) remembered more fondly than Lovecraft, his career was by no means free from discomfiting racial politics. John F. Kasson links Houdini to Tarzan, early bodybuilders like Eugen Sandow, and others, as focal points of anxiety about the white body. In this and in many other respects, the superstar magician and the obscure writer had more in common than might be suspected. Each cultivated a supernatural mystique in which he did not believe, a persona that took on a life of its own. Both sought to satisfy an early twentieth-century hunger for excitement. Certainly, the two men got on well: Houdini asked Lovecraft to write a now-lost article about astrology and tried (unsuccessfully) to help the young writer secure employment with a newspaper. They were planning to collaborate, with Lovecraft’s friend and fellow pulp author C. M. Eddy, on an anti-spiritualist book, The Cancer of Superstition, when Houdini died on October 31, 1926. Only Lovecraft’s outline and thirty-odd pages of Eddy’s manuscript survive—together with “Under the Pyramids,” these are the only witnesses to the strange partnership of the magician and the horror writer.

Reptiles, Amphibians, Herptiles, and other Creeping Things: Variations on a Taxonomic Theme

By Contributing Editor Spencer J. Weinreich

King Philip Came Over For Good Soup. Kingdom, Phylum, Class, Order, Family, Genus, Species. Few mnemonics are as ubiquitous as that of the monarch whose dining habits have helped generations of biology students remember the levels of the taxonomic system. Though the progress of the field has introduced domains (above kingdoms), tribes (between family and genus), and a whole array of lesser taxa (subspecies, subgenus, and so on), the system remains central to identifying and thinking about organic life.


Green anaconda (Eunectes murinus): a reptile, not an amphibian (photo credit: Smithsonian’s National Zoo)

Consider “reptiles.” Many a precocious young naturalist learns—and impresses upon their parents with zealous (sometimes exasperated) insistence—that snakes are not slimy. The snake is a reptile, not an amphibian, covered with scales rather than a porous skin. Reaching high school biology, this distinction takes on taxonomic authority: in the Linnaean system, reptiles and amphibians belong to separate classes (Reptilia and Amphibia, respectively). The division has much to recommend it, given the two groups’ considerable divergences in physiology, life-cycle, behavior, and genetics. But, like all scientific categories, the distinction between reptiles and amphibians is a historical creation, and of surprisingly recent vintage at that.

When Carl Linnaeus first published his Systema Naturæ in 1735, what we know as reptiles and amphibians were lumped together in a class named Amphibia. The class—“naked or scaly body; no molar teeth, the others always present; no fins” (“Corpus nudum, vel squamosum. Dentes molares nulli: reliqui semper. Pinnæ nullæ”)—was divided among turtles, frogs, lizards, and snakes. Linnaeus concludes his outline with these words:

“the benignity of the Creator chose not to extend the class of amphibians any further; indeed, if it should enjoy as many genera as the other classes of animals include, or if that which the teratologists fantasize about dragons, basilisks, and such monsters were true, the human race could hardly inhabit the earth” (“Amphibiorum Classem ulterius continuare noluit benignitas Creatoris; Ea enim si tot Generibus, quot reliquæ Animalium Classes comprehendunt, gauderet; vel si vera essent quæ de Draconibus, Basiliscis, ac ejusmodi monstris si οι τετραλόγοι [sic] fabulantur, certè humanum genus terram inhabitare vix posset”) (n.p.).


Sand lizard (Lacerta agilis), an amphibian according to Linnaeus (photo credit: Friedrich Böhringer)

In the canonical tenth edition of the Systema (1758), Linnaeus provided a more elaborate set of characteristics for Amphibia: “a heart with a single ventricle and a single atrium, with cold, red blood. Lungs that breathe at will. Incumbent jaws. Double penises. Frequently membranaceous eggs. Senses: tongue, nose, eyes, and, in many cases, ears. Covered in naked skin. Limbs: some multiple, others none” (“Cor uniloculare, uniauritum; Sanguine frigido, rubro. Pulmones spirantes arbitrarie. Maxillæ incumbentes. Penes bini. Ova plerisque membranacea. Sensus: Lingua, Nares, Oculi, multis Aures. Tegimenta coriacea nuda. Fulcra varia variis, quibusdam nulla”) (I.12). Interestingly, Linnaeus now divides Amphibia into three, based on their mode of locomotion:

  1. Reptiles (“those that creep”), including turtles, lizards, frogs, and toads;
  2. Serpentes (“those that slither”), including snakes, worm lizards, and caecilians;
  3. Nantes (“those that swim”), including lampreys, rays, sharks, sturgeons, and several other types of cartilaginous fish (I.196).

Smokey jungle frog (Leptodactylus pentadactylus), a reptile according to Laurenti and Brongniart (photo credit: Trisha M. Shears)

Linnaeus’s younger Austrian contemporary, Josephus Nicolaus Laurenti, also grouped modern amphibians and reptiles together, even as he excluded the fish Linnaeus had categorized as swimming Amphibia. Laurenti was the first to call this group Reptilia (19), and though its denizens have changed considerably in the intervening centuries, he is still credited as the “auctor” of class Reptilia. The French mineralogist and zoologist Alexandre Brongniart also subordinated “batrachians” (frogs and toads) within the broader class of reptiles. All the while, exotic specimens continued to test taxonomic boundaries: “late-eighteenth-century naturalists tentatively described the newly discovered platypus as an amalgam of bird, reptile, and mammal” (Ritvo, The Platypus and the Mermaid, 132).

It was not until 1825 that Brongniart’s compatriot and contemporary Pierre André Latreille, in his Familles naturelles du règne animal, separated Reptilia and Amphibia as adjacent classes. The older, joint classification survives in the field of herpetology (the study of reptiles and amphibians) and the sadly underused word “herptile” (“reptile or amphibian”).

“Herptile” is a twentieth-century coinage. “Reptile,” by contrast, appears in medieval English; derived from the Latin reptile, reptilis—itself from rēpō (“to creep”)—“reptile” originally meant simply “a creeping or crawling animal” (“reptile, n.1” in OED). The first instance cited by the Oxford English Dictionary is from John Gower’s Confessio Amantis (c.1393): “And every neddre and every snake / And every reptil which mai moeve, / His myht assaieth for to proeve, / To crepen out agein the sonne” (VII.1010–13). The Vulgate Latin Bible uses reptile, reptilis to translate the “creeping thing” (רֶמֶשׂ) described in Genesis 1, a usage carried over into medieval English, as in the “Adam and Eve” of the Wheatley Manuscript (BL Add. MS 39574), where Adam is made lord “to ech creature & to ech reptile which is moued on þe erþe” (fol. 60r). Eventually, these “creeping things” became a distinct group of animals: an early sixteenth-century author enumerates “beestes, byrdes, fysshes, reptyll” (“reptile, n.1” in OED). I suspect the identification of the “reptile” (creeping thing) with herptiles owes something to the Serpent in the Garden of Eden being condemned by God to move “upon thy belly” (Gen. 3:14). The adjective “amphibian” is attested in English as early as 1637, but in the sense of “having two modes of existence.” Not until 1835—after the efforts of Latreille and his English popularizer, T. H. Huxley—does the word come to refer to a particular class of animals (“amphibian, adj. and n.” in OED).

The crucial point here is that the distinction between the two groups, grounded though it may be in biology and phylogenetics, is an artifact of taxonomy, not a self-evident fact of the natural world. For early modern, medieval, and ancient observers, snakes and salamanders, turtles and toads all existed within an ill-defined territory of creeping, crawling things.


Fourteenth-century icon of Saint George and a very snakelike dragon (photo credit: Museum of Russian Icons, Moscow)

The farther back we go, the more fantastic the category becomes, encompassing dragons, sea serpents, basilisks, and the like. Religion, too, played its part, as we have seen with Eve’s serpentine interlocutor: Egypt’s plague of frogs, the dragon of Revelation, the Leviathan, and the scaly foes of saints like George and Margaret all fell within the same “reptilian”—creeping, crawling—family. To be sure, the premodern observer was perfectly aware of the differences between frogs and lizards, and between different species (what could be eaten and what could not, what was dangerous and what was not). But they would not—and had no reason to—erect firm ontological boundaries between the two sorts of creatures.

When we go back to the key works of medieval and ancient natural philosophy, the same nebulosity prevails. Isidore of Seville’s magisterial Etymologies of the early seventh century includes an entry “On Serpents” (“De Serpentibus”), which notes,

“the serpent, however, takes that name because it crawls [serpit] by hidden movements; it creeps not with visible steps, but with the minute pressure of its scales. But those which go upon four feet, such as lizards and geckoes [stiliones could also refer to newts], are not called serpents but reptiles. Serpents are also reptiles, since they creep on their bellies and breasts” (“Serpens autem nomen accepit quia occultis accessibus serpit, non apertis passibus, sed squamarum minutissimis nisibus repit. Illa autem quae quattuor pedibus nituntur, sicut lacerti et stiliones, non serpentes, sed reptilia nominantur. Serpentes autem reptilia sunt, quia ventre et pectore reptant.”) (XII.iv.3).

The forefather of premodern zoology, Aristotle, opines in Generation of Animals that “there is a good deal of overlapping between the various classes”; he groups snakes with fish because they have no feet as easily as he links them with lizards because they are oviparous (II.732b, trans. A. L. Peck).


Atlantic puffin (Fratercula arctica), a reptile according to modern phylogenetics (photo credit: NOAA Photo Library—anim1991)

As it turns out, our modern category of “reptile” (class Reptilia) has proved similarly elastic. In evolutionary terms, this is because “reptiles” are not a clade—a group of organisms defined by a single ancestor species and all its descendants. Though visually closer to lizards, for example, genetically speaking the crocodile is a nearer relative to birds (class Aves). Scientists and science writers have thus claimed—sometimes facetiously—that the very category of reptile is a fiction (see Welbourne, “There’s no such thing as reptiles”). The clade Sauropsida, including reptiles and birds (as a subset thereof), was first mooted by Huxley and subsequently resurrected in the twentieth century to address the problem. Birds are now reptiles, though they seldom creep. If I may be permitted a piece of Isidorean etymological fantasy, perhaps this is the true import of the “reptile” as “creeping thing,” as they creep across and beyond taxonomic boundaries, eternally frustrating and fascinating those who seek to understand them.