Think pieces

Alexander and Wilhelm von Humboldt, Brothers of Continuity

By guest contributor Audrey Borowski

At the beginning of the nineteenth century, a young German polymath ventured into the heart of the South American jungle, climbed the Chimborazo volcano, crawled through the Andes, conducted experiments on animal electricity, and delineated climate zones across continents.  His name was Alexander von Humboldt (1769–1859). With the young French scientist Aimé Bonpland and equipped with the latest instruments, Humboldt tirelessly collected and compared data and specimens, returning after five years to Paris with trunks filled with notebooks, sketches, specimens, measurements, and observations of new species. Throughout his travels in South America, Russia and Mongolia, he invented isotherms and formulated the idea of vegetation and climate zones. Crucially, he witnessed the continuum of nature unfold before him and set forth a new understanding of nature that has endured up to this day. Man existed in a great chain of causes and effects in which “no single fact can be considered in isolation.” Humboldt sought to discover the “connections which linked all phenomena and all forces of nature.” The natural world was teeming with organic powers that were incessantly at work and which, far from operating in isolation, were all “interlaced and interwoven.” Nature, he wrote, was “a reflection of the whole” and called for a global understanding. Humboldt’s Essay on the Geography of Plants (1807) was the world’s first book on ecology in which plants were grouped into zones and regions rather than taxonomic units and analogies drawn between disparate regions of the globe.

In this manner, Alexander sketched out a Naturgemälde, a “painting of nature” that fused botany, geology, zoology and physics in one single picture, thereby breaking away from prevailing taxonomic representations of the natural world. His was a fundamentally interdisciplinary approach, at a time when scientific inquiry was becoming increasingly specialized. The study of the natural world was no abstract endeavor and was far removed from the mechanistic philosophy that had held sway up till then. Nature was the object of scientific inquiry, but also of wonder, and as such it exerted a mysterious pull. Man was firmly relocated within a living cosmos broader than himself, which appealed equally to his emotions and imagination. From the heart of the jungle to the summit of volcanoes, “nature everywhere [spoke] to man in a voice that is familiar to his soul” and what spoke to the soul, Humboldt wrote, “escapes our measurements” (Views of Nature, 217–18). Here Humboldt followed in the footsteps of his lifelong friend Goethe and the German philosopher Friedrich Schelling, in particular the latter’s Naturphilosophie (“philosophy of nature”). Nature was a living organism that had to be grasped in its unity, and its study should steer away from “crude empiricism” and the “dry compilation of facts” and instead speak to “our imagination and our spirit.” Rigorous scientific method was thus wedded to art and poetry, and the boundaries between the subjective and the objective, the internal and the external, were blurred. “With an aesthetic breeze,” Goethe wrote, Alexander had lit science into a “bright flame” (quoted in Wulf, The Invention of Nature, 146).

Alexander von Humboldt’s older brother, Wilhelm (1767–1835), a government official with a great interest in reforming the Prussian educational system, had been similarly inspired. While his brother had ventured out into the jungle, Wilhelm, on his side, had devoted much of his life to the exploration of the linguistic realm, whether in his study of Native American and ancient languages or in his attempts to grasp the relation between linguistic and mental structures. Like the German philosopher and literary critic Johann Gottfried Herder before him, Humboldt posited that language, far from being merely a means of communication, was the “formative organ” (W. Humboldt, On the Diversity of Human Language, 54) of thought. According to this view, man’s judgmental activity was inextricably bound up with his use of language. Humboldt’s linguistic thought relied on a remarkable interpretation of language itself: language was an activity (energeia) as opposed to a work or finished product (ergon). In On the Diversity of Human Language Construction and its Influence on the Mental Development of the Human Species (1836), his major treatise on language, Wilhelm articulated a forcefully expressivist conception of language, in which he emphasized the interconnectedness and organic nature of all languages and, by extension, of the various worldviews they expressed. Far from being a “dead product,” an “inert mass,” language appeared as a “fully-fashioned organism” that, within the remit of an underlying universal template, was free to evolve spontaneously, allowing for maximum linguistic diversity (90).


Left to Right: Friedrich Schiller, Wilhelm von Humboldt, Alexander von Humboldt, and Johann Wolfgang von Goethe, depicted by Adolph Müller (c.1797)

To the traditional objectification of language, Wilhelm opposed a reading of language that was heavily informed by biology and physiology, in keeping with the scientific advances of his time. Within this framework, language could not be abstracted, interwoven as it was with the fabric of everyday life. Henceforth, there was no longer one “objective” way of knowing the world, but a variety of different worldviews. Like his brother, Wilhelm strove to understand the world in its individuality and totality.

At the heart of the linguistic process lay an in-built mechanism, a feedback loop that accounted for language’s ability to generate itself. This consisted in the continuous interplay between an external sound-form and an inner conceptual form, whose “mutual interpenetration constitute[d] the individual form of language” (54). In this manner, rhythms and euphonies played a role in expressing internal mental states. The dynamic and self-generative aspect of language was therefore inscribed in its very core. Language was destined to remain in perpetual flux and renewal, effecting a continuous generation and regeneration of its world-making capacity. A powerfully spontaneous and autonomous force, it brought about “something that did not exist before in any constituent part” (473).

As much as the finished product could be analyzed, the actual linguistic process defied any attempt at scientific scrutiny, remaining inherently mysterious. Language may well abide by general rules, but it was fundamentally akin to a work of art, the product of a creative outburst which “cannot be measured out by the understanding” (81). Language, as much as it was rule-governed and called for empirical and scientific study, originated somewhere beyond semio-genesis. “Imagination and feeling,” Wilhelm wrote, “engender individual shapings in which the individual character […] emerges, and where, as in everything individual, the variety of ways in which the thing in question can be represented in ever-differing guises, extends to infinity” (81). Wilhelm therefore elevated language to a quasi-transcendental status, endowing it with a “life-principle” of its own and consecrating it as a “mental exhalation,” the manifestation of a free, autonomous spiritual force. He denied that language was the product of voluntary human activity, viewing it instead as a “gift fallen to [the nations] by their own destiny” (24), partaking in a broader spiritual mission. In this sense, the various nations constituted diverse individualities pursuing inner spiritual paths of their own, with each language existing as a spiritual creation and gradual unfolding:

If in the soul the feeling truly arises that language is not merely a medium of exchange for mutual understanding, but a true world which the intellect must set between itself and objects by the inner labour of its power, then the soul is on the true way toward discovering constantly more in language, and putting constantly more into it (135).

While he seemed to share his brother’s intellectual demeanor, Wilhelm disapproved of many of Alexander’s life-choices, from living in Paris rather than Berlin (particularly during the wars of liberation against Napoleon), which he felt was most unpatriotic, to leaving the civilized world in his attempts to come closer to nature (Wulf 151). Alexander, the natural philosopher and adventurer, on his side reproached his brother for his conservatism and social and political guardedness. In a time marred by conflict and the growth of nationalism, science, for him, had no nationality and he followed scientific activity wherever it took him, especially to Paris, where he was widely celebrated throughout his life. In a European context of growing repression and censorship in the wake of Napoleon’s defeat, he encouraged the free exchange of ideas and information, and pleaded for international collaborations between scientists and the collection of global data; truth would gradually emerge from the confrontation of different opinions. He also gave many lectures during which he would effortlessly hop from one subject to another, in this manner helping to popularize science. More generally, he would help other scholars whenever he could, intellectually or financially.

As the ideas of 1789 failed to materialize, giving way instead to a climate of censorship and repression, Alexander slowly grew disillusioned with politics. His extensive travels had provided him insights not only into the natural world but also into the human condition. “European barbarity,” especially in the shape of colonialism, tyranny and serfdom, had fomented dissent and hatred. Even the newly-born American Republic, with its founding principles of liberty and the pursuit of happiness, was not immune to this scourge (Wulf 171). Man with his greed, violence and ignorance could be as barbaric to his fellow man as he was to nature. Nature was inextricably linked with the actions of mankind, and the latter often left a trail of destruction in its wake through deforestation, ruthless irrigation, industrialization and intensive cultivation. “Man can only act upon nature and appropriate her forces to his use by comprehending her laws,” Alexander would write later in life, and failure to do so would eventually leave even distant stars “barren” and “ravaged” (Wulf 353).

Furthermore, while Wilhelm was perhaps the more celebrated in his time, it was Alexander’s legacy that would prove the more enduring, inspiring new generations of nature writers, including Henry David Thoreau, the American founder of the transcendentalist movement, who intended his masterpiece Walden as an answer to Humboldt’s Cosmos; John Muir, the great preservationist; and Ernst Haeckel, who discovered radiolarians and coined the term for our modern science of ecology. Another noteworthy influence was on Darwin and his theory of evolution. Darwin took Humboldt’s web of complex relations a step further and turned it into a tree of life from which all organisms stem. Humboldt sought to upend the ideal of “cultivated nature,” most famously perpetuated by the French naturalist the Comte de Buffon, whereby nature had to be domesticated, ordered, and put to productive use. Crucially, he inspired a whole generation of adventurers, from Darwin to Joseph Banks, and revolutionized scientific practice by tearing the scientist away from the library and back into the wilderness.

For all their many criticisms and disagreements, both brothers shared a strong bond. Alexander, who survived Wilhelm by twenty-four years, emphasized again and again Wilhelm’s “greatness of the character” and his “depth of emotions,” as well as his “noble, still-moving soul life.” Both brothers carved out unique trajectories for themselves, Wilhelm as a jurist, statesman and linguist, Alexander arguably as the first modern scientist; yet both remained beholden to the idea of totalizing systems, each setting forth insights that remain more pertinent than ever.


Alexander and Wilhelm von Humboldt, from a frontispiece illustration of 1836

Audrey Borowski is a historian of ideas and a doctoral candidate at the University of Oxford.

Ultimate Evil: Cultural Sociology and the Birth of the Supervillain

By guest contributor Albert Hawks, Jr.

In June 1938, editor Jack Liebowitz found himself in a time crunch. Needing to get something to the presses, Liebowitz approved a recent submission for a one-off round of prints. The next morning, Action Comics #1 appeared on newsstands. On the cover, a strongman in bright blue circus tights and a red cape was holding a green car above his head while people ran in fear. Other than the dynamic title “ACTION COMICS”, there was no text explaining the scene. In an amusing combination of hubris and prophecy, the last panel of Action Comics #1 proclaimed: “And so begins the startling adventures of the most sensational strip character of all time!” Superman was born.

Cover of Action Comics #1 (June 1938)

Comics are potentially incomparable resources given the cultural turn in the social sciences (a shift in the humanities and social sciences in the late twentieth century toward a more robust study of culture and meaning and away from positivism). The sheer volume of narrative (somewhere in the realm of 10,000 sequential Harry Potter books) and degree of social saturation (approximately 91–95% of children between the ages of six and eleven read comics regularly, according to a 1944 psychological study) remain singular today (Lawrence 2002; Parsons 1991).

Cultural sociology has shown us that “myth and narrative are elemental meaning-making structures that form the bases of social life” (Woodward, 671). In a lecture on Giacometti’s Standing Woman, Jeffrey Alexander pushes forward a way of seeing iconic experiences as central, salient components of social life. He argues:

 Iconographic experience explains how we feel part of our social and physical surroundings, how we experience the reality of the ties that bind us to people we know and people we don’t know, and how we develop a sense of place, gender, sexuality, class, nationality, our vocation, indeed our very selves (Alexander, 2008, 7).

He further suggests these experiences informally establish social values (Alexander, 2008, 9). Relevant to our purposes, Alexander stresses Danto’s work on “disgusting” and “offensive” as aesthetic categories (Danto, 2003) and Simmel’s argument that “our sensations are tied to differences” with higher and lower values (Simmel, 1968).

This suggests that theoretically the comic book is a window into pre-existing, powerful, and often layered morals and values held by the American people that also in turn helped build cultural moral codes (Brod, 2012; Parsons, 1991).

The comic book superhero, as invented and defined by the appearance of Superman, is a highly culturally contextualized medium that expresses particular subgroups’ anxieties, hopes, and values and their relationship to broader American society.

But this isn’t a history of comics, accidental publications, or even the most famous hero of all time. As Ursula K. Le Guin says, “to light a candle is to cast a shadow.” It was likely inevitable that the superhero—brightest of all the lights—would necessarily cast a very long shadow. Who, after all, could pose a challenge to Superman? Or what could occupy the world’s greatest detective? The world needed supervillains. The emergence of the supervillain offers a unique slice of moral history and a potentially powerful way to investigate the implicit cultural codes that shape society.

I want to briefly trace the appearance of recurring villains in comic books and note what their characteristics suggest about latent concepts of evil in society at the time. Given our limited space, I’m here only considering the earliest runs of the two most iconic heroes in comics: Superman (Action Comics #1-36) and Batman (Detective Comics #27-; Batman #1-4).

First appearance of the Ultra-humanite

Initially, Superman’s enemies were almost exclusively one-off problems tied to socioeconomic situations. It wasn’t until June 1939 that readers met the first recurring comic book villain: the Ultrahumanite. Pursuing a lead on some run-of-the-mill racketeers, Superman comes across a bald man in a wheelchair: “The fiery eyes of the paralyzed cripple burn with terrible hatred and sinister intelligence.” His “crippled” status is mentioned regularly. The new villain wastes no time explaining that he is “the head of a vast ring of evil enterprises—men like Reynolds are but my henchmen” (Reynolds is a criminal racketeer introduced earlier in the issue), immediately signaling something new in comics. The man then formally introduces himself, not bothering with subtlety.

I am known as the ‘Ultra-humanite’. Why? Because a scientific experiment resulted in my possessing the most agile and learned brain on earth! Unfortunately for mankind, I prefer to use this great intellect for crime. My goal? Domination of the world!!

In issue 20, Superman discovers that, somehow, Ultra has become a woman. He explains to the Man of Steel: “Following my instructions, they kidnapped Dolores Winters yesterday and placed my mighty brain in her young vital body!” (Action Comics 20).

The Ultra-humanite in the body of Dolores Winters

Superman found his first recurring foil in unfettered intellect divorced from physicality. It’s hard not to wonder if this reflected a general distrust of the ever-increasing destructive power of science as World War II dawned. It’s also fascinating to note how consistently the physical status of the Ultrahumanite is emphasized, suggesting a deep social desire for physical strength, confidence, and respect.

After Ultra’s death, our hero would not be without a domineering, brilliant opponent for long. Action Comics 23 saw the advent of Lex Luthor. Luthor first appears as an “incredibly ugly vision” of a floating face and lights, and his identity unfolds as a mystery. Superman pursues a variety of avenues, finding only a plot to draw countries into war and thugs unwilling to talk for fear of death. Lois actually encounters Luthor first, describing him as a “horrible creature”. When Luthor does introduce himself, it nearly induces déjà vu: “Just an ordinary man—but with the brain of a super-genius! With scientific miracles at my fingertips, I’m preparing to make myself supreme master of th’ world!”

The Batman develops his first supervillain at nearly the same time as Superman. In July 1939, one month after the Ultrahumanite appeared, readers are introduced to Dr. Death. Dr. Death first appears in a lavish study speaking with a Cossack servant (subtly implying Dr. Death is anti-democratic) about the threat Batman poses to their operations. Death is much like what we would now consider a cliché of a villain—he wears a suit, has a curled mustache and goatee, a monocle, and smokes a long cigarette while he plots. His goal: “To extract my tribute from the wealthy of the world. They will either pay tribute to me or die.” Much like Superman’s villains, he uses science—chemical weapons in particular—to advance these sinister goals. In their second encounter, Batman prevails and Dr. Death appears to burn to death. Of course, in comics the dead rarely stay that way; Dr. Death reappears the very next issue, his face horribly scarred.

The next regularly recurring villain to confront Batman appears in February 1940. Batman himself introduces the character to the reader: “Professor Hugo Strange. The most dangerous man in the world! Scientist, philosopher, and a criminal genius… little is known of him, yet this man is undoubtedly the greatest organizer of crime in the world.” Elsewhere, Strange is described as having a “brilliant but distorted brain” and a “malignant smile”. While he naturally is eventually captured, Strange becomes one of Batman’s most enduring antagonists.

The very next month, in Batman #1, another iconic villain appears: none other than the Joker himself.

Once again a master criminal stalks the city streets—a criminal weaving a web of death about him… leaving stricken victims behind wearing a ghastly clown’s grin. The sign of death from the Joker!

Also utilizing chemicals for his plots, the Joker is portrayed as a brilliant, conniving plotter who leads the police and Batman on a wild hunt. Unique to the Joker among the villains discussed is his characterization as a “crazed killer” with no aims of world power. The Joker wants money and murder. He’s simply insane.


Some striking commonalities appear across our two early heroes’ comics. First, physical “flaws” are a critical feature. These deformities are regularly referenced, whether disability, scarring, or just a ghastly smile. Second, virtually all of these villains are genius-level intellects who use science to pursue selfish goals. And finally, among the villains, superpowers are at best a secondary feature, suggesting a close tie between physical health, desirability, and moral superiority. Danto’s aesthetic categories of “disgusting” and “offensive” certainly ring true here.

This is remarkably revealing and likely connected to deep cultural moral codes of the era. If Superman represents the “ideal type,” supervillains such as the Ultrahumanite, Lex Luthor, and the Joker are necessary and equally important iconic representations of those deep cultural moral codes. Such a brief overview cannot definitively draw out the moral world as revealed through comics and confirmed in history. Rather, my aims have been more modest: (1) to trace the history of the birth of the supervillain, (2) to draw a connective line between the strong cultural program, materiality, and comic books, and (3) to suggest the utility of comics for understanding the deep moral codes that shape a society. Cultural sociology allows us to see comics in a new light: as an iconic representation of culture that both reveals preexisting moral codes and in turn contributes to the ongoing development of said moral codes that impact social life. Social perspectives on evil are an actively negotiated social construct and comics represent a hyper-stylized, exceedingly influential, and unfortunately neglected force in this negotiation.

Albert Hawks, Jr. is a doctoral student in sociology at the University of Michigan, Ann Arbor, where he is a fellow with the Weiser Center for Emerging Democracies. He holds an M.Div. and S.T.M. from Yale University. His research concerns comparative Islamic social movements in Southeast and East Asia in countries where Islam is a minority religion, as well as in the American civil sphere.

The Emotional Life of Laissez-Faire: Emulation in Eighteenth-Century Economic Thought

By guest contributor Blake Smith

Capitalism is often understood by both critics and defenders as an economic system that gives self-interested individuals free rein to acquire, consume, and compete. There are debates about the extent to which self-interest can be ‘enlightened’ and socially beneficial, yet there seems to be a widespread consensus that under capitalism, the individual, egoist self is the basic unit of economic action. For many intellectuals on the right and the left, capitalism seems, by letting such economic agents pursue their private interests, to erode traditional social structures and collective identities, in a process that is either a bracing, liberating movement towards freedom or an alienating, disorienting dissolution in which, as Marx famously phrased it, “all that is solid melts into air.”

Economic activity unfettered by government regulation was not always so obviously linked to the self-interest of atomized individuals. In fact, as historians inspired by the work of István Hont show, the first wave of laissez-faire political economists, who transformed eighteenth-century Europe and laid the foundation for modern capitalism, claimed that they were creating the conditions for a new era of mutual admiration and affective connections among economic agents. For these thinkers, members of the ‘Gournay Circle’, emotions, rather than mere self-interest, were the motor of economic activity. They specifically identified the kind of activity they wanted to promote with the feeling of ’emulation’, and touted the abolition of traditional protections on workers and consumers as a means of stoking this noble passion.

Emulation has only recently been given center stage in the history of economic thought, thanks to scholars like John Shovlin, Carol Harrison and Sophus Reinert, but it has a long history. From Greco-Roman antiquity down through the Renaissance, it was understood as a force of benign mutual rivalry among people working in the same field. Emulation was said to set in motion a virtuous circle in which competitors, bound by mutual admiration and affection, pushed each other to ever-higher levels of achievement. When a sculptor, for example, sees a magnificent statue made by one of his fellow artists, he should experience an uplifting feeling of emulation that will inspire him to learn from his rival in order to make a still more magnificent statue of his own. Emulation thus leads to higher standards of production, generating a net gain for society as a whole; it also, critically, unites potential rivals in a bond of shared esteem rooted in a common identity. This form of friendly competition sustains communities.

Emulation was not for everyone, however. It was understood to exist only in the world of male elites, and only in non-economic domains where they could pursue glory without the taint of financial interest: the arts, politics, and war. The circle of young male artists in the orbit of the great painter Jacques-Louis David (1748–1825), as Thomas Crow notes, could present their (not always harmonious) competition for attention, resources, and patronage in terms of emulation. In the homosocial space of the studio, anything so petty as jealousy or avarice was abolished, and such artists as Antoine-Jean Gros, Anne-Louis Girodet and Jean-Germain Drouais could appear, at least in public, as a set of friends who admired and encouraged each other. Women, meanwhile, were largely excluded from the art world’s apprenticeships, studios, and galleries, on the grounds that their delicate psyches were not suited to the powerful emotions that drove emulation.


Vincent de Gournay

The discourse of emulation shaped access to the arts, but, in a stroke of public relations genius, the members of the Gournay Circle realized that it could also reshape France’s mercantilist economy. Beginning in the 1750s, this group of would-be reformers coalesced around the commercial official and political economist Vincent de Gournay (1712-59). Largely forgotten today (but now increasingly visible thanks to scholars such as Felicia Gottmann), Gournay and his associates inspired the liberal political economists of the next generation, from Physiocrats in France to Adam Smith in Scotland. The Gournay Circle and those who moved in its wake called for the abolition of restrictions on foreign imports, price controls on grain, state monopolies, guilds—the institutions and practices around which economic life in Europe had been organized for centuries.

The Gournay Circle spoke to a France fearful of falling behind Great Britain, its rival for colonial and commercial power. Gournay and his associates argued that France was not making the most of its merchants, entrepreneurs and manufacturers, whose energies were hemmed in by antiquated regulations. To those worried that unleashing economic energies might heighten social tensions, making France weaker and more divided instead of stronger, the Gournay Circle gave reassurance. There was no conflict between fostering social harmony and deregulating the economy, because economic activity was not motivated by self-interested desires for personal gain. Rather, buying and selling, production and distribution were inspired by emulation, the same laudable hunger for the esteem of one’s peers that motivated painters, orators and warriors.

Just as a sculptor admires and strives to outdo the work of his colleagues, laissez-faire advocates argued, a merchant or manufacturer regards those in his own line of work with a spirit of high-minded, warm-hearted camaraderie. Potential competitors identify with each other, forging an emotional bond based on their shared effort to excel. Thus, for example, demolishing traditional guild controls on the number of individuals who could enter into a given field of trade would not only encourage competition, raise the quality of goods and reduce prices—most importantly, it would draw a greater number of people into emulation with each other. Social classes, too, would be drawn to emulate each other, and rather than stoking economic conflict among competing interests, deregulation would encourage economic actors to earn the admiration of their fellows. National wealth and national unity would both be promoted, joined by a common logic of affect.

Under the banner of emulation, reformers challenged the guilds and associations that had long offered some limited protections to workers. Since the Middle Ages, guilds throughout Europe had set standards of production, provided training for artisans, and offered forms of unemployment insurance. Critics observed that they also kept up wages by limiting the number of workers who could enter into specific trades, and further accused guilds of thwarting the introduction of new technologies. Anne Robert Jacques Turgot (1727-1781), a political economist linked to the Gournay Circle, believed that the best way to “incite emulation” among workers was “by the suppression of all the guilds.”

For a brief moment, Turgot (still hailed as a hero in libertarian circles) was able to put his pro-emulation agenda into action. Appointed Comptroller-General of Finances (the equivalent of a modern Minister of Finance) in 1774, Turgot launched a laissez-faire campaign that included the abolition of guilds and the suspension of state controls on the price and circulation of grain. The royal decree announcing his most infamous batch of policies, the Six Edicts, declared: “we wish thus to nullify these arbitrary institutions… which cast away emulation.” Turgot’s policies provoked outrage across French society, from peasants who feared bread prices would spiral out of control, to guild members who faced the competition of unregulated production. He was forced to resign in 1776; his most hated policies were reversed. But the damage had been done. The guild system, permanently enfeebled, straggled on for another generation. Peasants and workers, to whom the fragility of the institutions that protected their access to food and labor had been made brutally obvious, remembered the lesson. Their outrage in the mid-1770s was a rehearsal for 1789.


“Carte d’Entrée” for the first annual meeting of the Société d’Emulation

Emulation gradually faded away as a justification for liberal economic policies, although throughout much of the nineteenth century ’emulation clubs’ (sociétés d’émulation) remained a fixture of French municipal life, promoting business ventures while excluding groups whose capacity for emulation was considered questionable: women, Jews, and Protestants. As a key, albeit forgotten, concept in the development of modern economic thought, emulation reveals the extent to which the notion of the self-interested individual as the essential subject of economic activity is not in fact essential to capitalist ideology. In eighteenth-century France, laissez-faire policies aimed at increasing economic growth were justified in terms of their contribution to social harmony and emotional fulfillment. In the rhetoric that promoted these policies, the imagined economic subject was not an isolated, calculating egoist but a passionate striver who wanted, more than mere utility, welfare or profit, the admiration of other members of his community (that this community should exclude certain groups of people went without saying). Such arguments may well have been deployed by cynical activists agitating on behalf of powerful financial interests, yet they nevertheless speak to an affective dimension of economic life that is too often occluded. In its short-lived role as an economic concept, emulation showed that the history of capitalism is necessarily entangled with the history of emotions.

Blake Smith holds a PhD from Northwestern University and the Ecole des Hautes Etudes en Sciences Sociales. He is currently a Max Weber Fellow at the European University Institute, where he is preparing a study of the eighteenth-century French Orientalist Abraham Hyacinthe Anquetil-Duperron.

The Suffrage Postcard Project: A Replica Archive

by guest contributor Ana Stevenson

At the 2017 Australian Historical Association Conference, in a panel about digital history, Professor Victoria Haskins discussed what she described as a “replica archive.”  Haskins’ research is concerned with Indigenous domestic servants in Australia and the United States – women whose lives, she rightly notes, are often difficult to uncover in the archives.  Technology, however, has fundamentally changed the relationship historians have with archives.  Following the hours and hours of archival research undertaken across her long and distinguished career, Haskins has amassed copious photographs and photocopies which feature the voices of these women.  Bringing together these photographic fragments from many archives, Haskins suggests, creates a new archive – a replica archive.

 

The Suffrage Postcard Project can likewise be seen as a replica archive.  Women’s suffrage postcards, though considered ephemeral at their time of production, were numerous.  Postcard scholar Kenneth Florey suggests that more than 1000 suffrage-related postcards were printed in the United States during the 1910s and approximately 2000 in Britain.[1]  Suffrage memorabilia more generally was received enthusiastically by the American and British public, especially in the years prior to World War I.[2]

The majority of the women’s suffrage postcards were printed during the 1910s, a decade which would see the acquisition of qualified suffrage for British women in 1918 and the ratification of the Nineteenth Amendment in the United States in 1920.  This era is broadly described by scholars as the “golden age” of picture postcards.[3]

 


 Image courtesy of the Catherine H. Palczewski Postcard Archive, University of Northern Iowa, Cedar Falls, IA. 

Women’s suffrage postcards were so numerous, in fact, that even today such ephemera is hardly hidden in the archives.  Many archival collections, especially those which focus upon women’s history, hold large numbers of suffrage postcards – for example, Harvard University’s Arthur and Elizabeth Schlesinger Library on the History of Women in America and the Women’s Library at the London School of Economics.  Such collections feature both pro-suffrage and anti-suffrage postcards, which were predominantly produced during the first two decades of the twentieth century.  Suffrage organizations and commercial publishers alike produced women’s suffrage postcards.

But the partial nature of such collections, together with the geographical dispersion of the archives themselves, means scholars can only ever gain a fragmentary perspective.  Though archives such as these are partially digitized, they are often largely inaccessible to the public.  Aware of such limitations, Florey published his seminal work, American Woman Suffrage Postcards: A Study and Catalog (2015).  Bringing together digitally as many women’s suffrage postcards as possible, The Suffrage Postcard Project goes a step further.

The result is a replica archive.  The Suffrage Postcard Project features postcards from the personal collections of Catherine H. Palczewski, Joan Iverson, Ann Lewis, and Kenneth Florey, as well as postcards from various special collections in the United States.  This replica archive centers upon women’s suffrage postcards in a way that fragmented collections cannot, and it is also easily accessible to the public.

 


 Image courtesy of the Catherine H. Palczewski Postcard Archive, University of Northern Iowa, Cedar Falls, IA.

The postcards are now available as an ever-expanding digital corpus.  The field of digital humanities has presented other pertinent questions for conceptualizing such a digital corpus, specifically in relation to the nature and meaning of “the archive.”  Historians, literary and feminist scholars, and library and archive professionals have very different understandings of what constitutes an archive.  “In a digital environment,” Kenneth M. Price concludes, “archive has gradually come to mean a purposeful collection of surrogates.”[4]  Kate Theimer further argues that it is important for digital humanists to understand the differing ways in which archivists understand what constitutes “archive” and how collections are created.[5]  Haskins’s concept of the replica archive might help reconcile these disciplinary, methodological, and conceptual differences, as it forces practitioners to remain cognizant of the created and curated nature of the digital archive.

This format enables scholars to apply new research methods.  Tagging the themes which appear in women’s suffrage postcards necessitates finding language to describe visual themes.  Jacqueline Wernimont and Julia Flanders discuss the process whereby they encode literary texts for the Women Writers Project.  This process, they argue, entails “many of the same difficulties encountered when reading it.”  Indeed, issues relating to “categorisation, explication, and description [are] central to digital text markup, forcing the digital scholar to grapple consciously with formal issues that might otherwise remain latent.”[6]

So how do we identify the visual themes in the postcards?  The process is called “tagging,” wherein specific words are used to identify repetitive themes.  Our preliminary response was to consider how to apply thematic tags such as “public” versus “private,” “domestic space,” “wife” or “woman” versus “mother,” “husband” or “man” versus “father,” and the subtle but nonetheless significant semantic differences associated with each individual choice.  Even the application of seemingly clear-cut concepts such as “pro-suffrage” and “anti-suffrage” could sometimes be nebulous.  As my co-founder Kristin Allukian and I worked together and alongside our research assistants, our discussions led us to expand upon how we initially conceptualized our approach to tagging the visual themes.

Such digital methods, then, enable scholars to ask unprecedented research questions about the early-twentieth-century women’s suffrage movement and its many detractors, and to reconsider old assumptions.

For example, observable trends become incontrovertible when analyzed using digital methods.  A scholar might discern that upper-middle-class adult white women are the primary subjects of suffrage cartoons.  However, when this impression is considered across hundreds of postcards, other trends emerge: children and animals are ubiquitous; men often appear as the subject of debate; white working-class people are depicted somewhat regularly; racial stereotypes about Irish and Chinese immigrants are evident, although rare; and African Americans are conspicuous by their absence.  Scholars were not previously unaware of such trends, but a digital humanities approach provides stronger evidence for such thematic claims.

 


 Image courtesy of the Catherine H. Palczewski Postcard Archive, University of Northern Iowa, Cedar Falls, IA.

Such research will contribute to the fields of women’s history and feminist visual culture, but also has significance for the interpretation of images in intellectual history.  My colleagues and I are using digital humanities methods to gain new insights into questions of print pigmentation, gender, race, class, and parenthood as represented in women’s suffrage postcards.

The Suffrage Postcard Project also presents undergraduates with opportunities for intellectual development.  Since 2015, undergraduate and master’s research assistants from the Georgia Institute of Technology and the University of South Florida have supported the digitization of the postcards.  In addition to acquiring valuable digital humanities and public history skills, these students have based research projects around the women’s suffrage postcards.

At the University of South Florida’s 2017 Undergraduate Research and Arts Colloquium, the 2016–2017 research assistants were interviewed for The Intersection podcast.

https://soundcloud.com/ashely-tisdale/thats-how-we-dh

The Suffrage Postcard Project is always looking for new additions to our digital corpus, contributions which can enrich our replica archive.  Should any interested reader have women’s suffrage postcards from a personal or institutional collection they might like to share, please do not hesitate to get in touch.  Our Twitter handle is @Suff_Postcards.

Ana Stevenson is a Postdoctoral Research Fellow in the International Studies Group at the University of the Free State, South Africa.  Her research centers upon the development of feminism in transnational social movements in the United States, Australia, and South Africa.  Follow her on Twitter @DrAnaStevenson.

[1] Kenneth Florey, American Woman Suffrage Postcards: A Study and Catalog (Jefferson: McFarland & Company, 2015).

[2] Kenneth Florey, Women’s Suffrage Memorabilia: An Illustrated Historical Study (Jefferson: McFarland & Company, 2013).

[3] Catherine H. Palczewski, “The Male Madonna and the Feminine Uncle Sam: Visual Argument, Icons, and Ideographs in 1909 Anti-Woman Suffrage Postcards,” Quarterly Journal of Speech 91, no. 4 (2005): 365; Florey, American Woman Suffrage Postcards, 4.

[4] Kenneth M. Price, “Edition, Project, Database, Archive, Thematic Research Collection: What’s in a Name?” Digital Humanities Quarterly 3, no. 3 (2009).

[5] Kate Theimer, “Archives in Context and as Context,” Journal of Digital Humanities 1, no. 2 (2012).

[6] Jacqueline Wernimont and Julia Flanders, “Feminism in the Age of Digital Archives: The Women Writers Project,” Tulsa Studies in Women’s Literature 29, no. 2 (2010): 432 and 427-428.

The First of Nisan, The Forgotten Jewish New Year, Part II

By guest contributor Joel S. Davidi

In my last post on the history of the first of Nisan as a Jewish new year I discussed the history of this now mostly forgotten holiday into the tenth century. Until this point, the festival was celebrated among the Jews of Eretz Israel as well as their satellite communities across the Middle East (including a small “Palestinian-rite” contingent in Iraq itself). Over the next one hundred years, however, the celebration of the first of Nisan became the domain of only a very small minority of Jews. In large measure, this was due to the long-standing disagreements between the two great centers of Jewish learning at the time, Eretz Israel and Babylonia/Iraq.

All in all, the competition between Babylonia and Eretz Israel ended in a decisive Babylonian victory. This was due to several factors, not least of which is the fact that Babylonian Jewry experienced much more stability under Sassanian and later Islamic rule, while its Eretz Israel counterpart was constantly experiencing persecution and uprooting. The final death knell for Minhag Eretz Yisrael was delivered in July of 1099, when an army of Crusaders broke through the walls of Jerusalem and massacred the city’s Jewish inhabitants: its Babylonian-rite, Palestinian-rite, and Karaite communities. With the destruction of its center began the decline and eventual disappearance of many unique Eretz Israel customs. It is only due to the discovery of the Cairo Genizah that scholars have become aware of many of those long-lost traditions and customs. At this time Babylonia’s prominence also began to decline as the Sephardic communities of the Iberian Peninsula and the Ashkenazic communities of France and Germany were increasingly in the ascendancy. Both of these communities, however, maintained the Babylonian rite. (As Israel Ta-Shma points out in his book on early Ashkenazic prayer, both the Sephardic and Ashkenazic rites have Eretz Israel elements. These are more evident in the Ashkenazic rite, probably due to the ties between the proto-Ashkenazim and the Palestinian Academy in Byzantine Palestine.)

The latest evidence for the celebration of the first of Nisan comes to us from the thirteenth century, and it would seem that even by this time it had been all but stamped out by those who were determined to establish the primacy of the Babylonian school. This period coincides with the increased activism of Rabbi Abraham Maimonides, the son of Moses Maimonides, the great Spanish codifier of Jewish law. Rabbi Abraham, who championed standardization based on his father’s codification, exerted great pressure on the Synagogue of the Palestinians in Fustat, Old Cairo, to bring its ritual into line with Babylonian standards. He was for the most part successful but, as we have already seen, this unique custom has been retained (albeit in diminished form) among Egyptian Jews to this very day.

In an April 20, 1906 article for the English Jewish Chronicle, Herbert Loewe provides an eyewitness account of an Al-Tawhid ceremony in the fashionable Abbasiya neighborhood of Cairo. Two years later, a more detailed description was recorded by the Chief Rabbi of Egypt, Refael Aharon ben Shimon, in his book Nehar Misrayim (pp. 65–66).

I reproduce it here (courtesy of Hebrewbooks.org):

[Facsimiles of the relevant pages of Nehar Misrayim]

After extolling this “beautiful custom”, ben Shimon laments how the custom had become so weakened and how so many had become lax in keeping it. He states that this was largely due to the fact that the city had experienced such large scale expansion and many members of the Jewish community had relocated to the suburbs. He concludes on an optimistic note with the hope that the custom will experience a renaissance in the near future.

Two other North African Jewish communities that I know of retain more pared-down versions of the celebration of the first of Nisan. In the communities of Tunisia and Libya, the ceremony is referred to as bsisa (and also maluhia). Bsisa is also the name given to a special dish that is prepared for this day, made of wheat and barley flour mixed with olive oil, fruits, and spices. Several prayers for the new year are recited, whereupon the celebrants exchange new year greetings with each other. Many of these prayers contain themes similar to the Egyptian-Jewish Tawhid prayers I discussed in part I of this article. (For example: “Shower down upon us from your bounty and we shall give it over to others. That we shall never experience want– and may this year be better than the previous year.”) As in the Egyptian community, however, the new year aspect of the celebration is not especially stressed. As the eminent historian and expert on North African Jewry Nahum Slouschz points out in an article in Davar (April 7, 1944), “It is impossible not to see in these customs the footprints of an ancient rosh hashanah which was abandoned with the passage of time because of the tediousness of the Passover holiday and in favor of the holiness of the traditional [Tishrei] Rosh Hashana.”

(For more on the roots and contemporary practice of bsisa and maluhia see here, here, here, and here. For videos of the bsisa/ maluhia ceremony see here, here, and here.)

Although the observance of the first of Nisan is no longer as prominent as it once was in rabbinic Judaism, the two most prominent non-rabbinic Jewish communities, the Karaites and the Samaritans, have maintained the holiday into recent times. The Cairo Genizah contains leaves from a Karaite prayer book with a service for the first of Nisan. This custom eventually fell out of the Karaite textual record as Karaite traditions fell in line with Rabbanite ones over the later Middle Ages. In his monumental study of the now extinct European Karaite community, historian Mikhail Kizilov discusses how Eastern European Karaites underwent a gradual process of “dejudaization” and “turkification” in the 1910s–20s. This was largely due to the work of their spiritual and political head, Seraya Shapshal, who, aware of growing anti-Semitism in Europe, was determined to present his flock as genetically unrelated to the Jews (claiming instead that they were descendants of Turkic and Mongol tribes). He likewise sought to recast Karaism as a syncretistic Jewish-Christian-Muslim-pagan creed. Among the reforms instituted by Shapshal was a change to the Karaite calendar. Although the Karaites of old had begun the calendar year in Nisan, as per Exodus, they had long since assimilated the Rabbinic custom of beginning the year in Tishrei. Shapshal sought to avoid the Karaite and Rabbanite new years falling together, which is why he moved the Karaite new year to March–April, thereby ironically reverting to the ancient Karaite custom. This particular reform never took off, and the community continued to celebrate the new year in Tishrei. Even the official Karaite calendars printed that date (which, like the Rabbanites, they called “Rosh Hashana”). Currently, Karaites do not actually celebrate this day or recite any special liturgy; however, they do nominally recognize it as Rosh Hashanah and exchange new year tidings.

Samaritans preserve the most extensive observance of this day. According to the Samaritan elder and scholar Benyamim Sedaka, the Samaritans celebrate the evening of the first day of the first month – the Month of Aviv – as the actual Hebrew New Year. They engage in extended prayers on the day, followed by festive family gatherings. They likewise bless one another with the traditional new year greeting “Shana Tova” and begin the observance, as the followers of the Palestinian rite once did, on the Sabbath preceding the day. The entire liturgy for the holiday is found in A. E. Cowley’s The Samaritan Liturgy. The fact that the Samaritans, who have functioned as a religious community distinct from Jews since at least the second century BCE, observe this tradition is the greatest indicator of its antiquity. The antiquity of this custom is also suggested by the fact that the springtime new year is likewise celebrated by many other ethnic communities of the Middle East, including the Persians and Kurds (who call it Nowruz) and also, much closer to Jews linguistically and culturally, the Arameans and Chaldeans/Assyrians, who call their new year Kha (or Khada) B’Nisan (the first of Nisan).


Siddur Eretz Yisrael, published by Machon Shiloh

Minhag Eretz Israel is now effectively extinct. Today, however, there is a small community of predominantly Ashkenazic Jews in Israel who seek to reconstruct this rite. Using the work of scholars who have labored to piece the Palestinian rite together from the Cairo Genizah, this community endeavors to put it back into practical usage. Among many other customs, they celebrate the first of Nisan. The flagship institution of this movement is called Machon Shiloh, and its founder and leader is an Australian-Israeli rabbi named David bar Hayyim. In correspondence with me, Yoel Keren, a member of Machon Shiloh, stated that his community observes the festival in the manner prescribed by the Genizah fragments. On the eve of the first of Nisan, the community waits outside to sight the new moon, then recites the kiddush prayer and finally sits down to a festive meal. The community has also recently published a prayer book called Siddur Eretz Yisrael, which is based on the ancient rite. You can listen to some prayers recited in this rite here, here, and here.

For an interesting interview with Rabbi Bar-Hayyim about the rite and its contemporary usage see here.

 

Appendix to Part I

Since I published my original post about the first of Nisan’s history as a Jewish holiday, a few other sources have come to light about the history of the day’s significance. Here are a few of the earliest sources that mention the day as a holiday (my thanks to Rabbi Reuven Chaim Klein for bringing some of these to my attention).

The earliest of these comes from the book of Ezekiel (45:18-19):

Thus saith the Lord GOD: In the first month, in the first day of the month, thou shalt take a young bullock without blemish; and thou shalt purify the sanctuary.

Ezekiel contains numerous laws and festivals that are not found in the Pentateuch. Many interpret these as being meant for a future (third) Temple. Ezekiel does not explicitly describe the first of Nisan as a celebration of the new year per se, but this description is nonetheless the earliest evidence of the day having special significance.

We find a similar reference in the Temple Scroll (11Q19) of the Dead Sea Scrolls. The Temple Scroll describes the ideal Temple of the Qumran sectarians. The festival of the first day of the first month (Nisan) is one of three extra-biblical festivals mentioned in this work:

On the first day of the [first] month [the months (of the year) shall start; it shall be the first month] of the year [for you. You shall do no] work. [You shall offer a he-goat for a sin-offering.] It shall be offered by itself to expiate [for you. You shall offer a holocaust: a bullock], a ram, [seven yearli]ng ram lambs [without blemish] … [ad]di[tional to the bu]r[nt-offering for the new moon, and a grain-offering of three tenths of fine flour mixed with oil], half a hin [for each bullock, and wi]ne for a drink-offering, [half a hin, a soothing odour to YHWH, and two] tenths of fine flour mixed [with oil, one third of a hin. You shall offer wine for a drink-offering,] one th[ird] of a hin for the ram, [an offering by fire, of soothing odour to YHWH; and one tenth of fine flour], a grain-offerin[g mixed with a quarter of a hin of oil. You shall offer wine for a drink-offering, a quarter of a hin] for each [ram] … lambs and for the he-g[oat].

Joel S. Davidi is an independent ethnographer and historian. His research focuses on Eastern and Sephardic Jewry and the Karaite communities of Crimea, Egypt, California and Israel. He is the author of the forthcoming book Exiles of Sepharad That Are In Ashkenaz, which explores the Iberian Diaspora in Eastern Europe during the sixteenth and seventeenth centuries. He blogs on Jewish history at toldotyisrael.wordpress.com.

Violence, Intimate and Public, in Bel-Ami’s Republic

By Contributing Editor Eric Brandom

Mme Forestier, who was playing with a knife, added:

–Yes…yes…it is good to be loved…

And she seemed to press her dream further, to think of things she dared not say.


“L’argent” (“Money”), from Félix Vallotton’s series “Intimités” (image credit: Van Gogh Museum)

These are lines from a dinner scene early in Guy de Maupassant’s 1885 Bel-Ami (I have consulted, but in places substantially modified or replaced, the Sutton translation). The novel follows the talentless and superficial George Duroy—eventually Du Roy, since the sparkle of aristocracy is all the more fascinating en République—as he makes his social ascent through seduction, daring, and a little judiciously applied journalism. Duroy is driven by desire, especially for wealth, status, to be adored by people in general, and to possess women. His lack of moral feeling for anyone but himself means that he is able to make good use of his one real advantage, which is that women find him uncommonly attractive. Robert Pattinson played him, perhaps without the requisite physicality, in the 2012 film. In this post, I want to think about violence in Maupassant’s novel. Indeed I would like to use the experience of reading to give historical depth and complexity to the notoriously ambiguous and freighted concept of violence.

 

Bel-Ami is a rich text, taking as major themes not only great passion betrayed, but also journalism, gender, and colonial politics in the early Third Republic. It does not appear particularly violent compared to, for instance, Zola’s Germinal (1885), which breathes misery and social politics from every page, or the same author’s Nana (1880), also about an implausibly sexually attractive individual. For just this reason it seems to me that we may learn something from Maupassant about what counts as violent, what registered as dangerous violence in the Third Republic. As the lines quoted above suggest, violence is by no means absent here. Violence is both presented to the reader in the action of the plot, often at an ironic distance, and also is an effect produced in the reader. These two sorts of violence do not line up. So here I consider several “violent” incidents, including those that are physically—manifestly or naively—violent and those that are not. Indeed it seems to me that in this novel, and perhaps in the larger society out of which it came, we might look for the most dangerous violence at the juncture of what is spoken and what one does not dare say, of the public and the intimate.

Bel-Ami opens with Duroy as flâneur, going down the boulevard with barely enough money in his pocket to last out the month. He has a powerful thirst for a bock (beer), and covets the wealth of those he can see enjoying the pleasures of life in the cafés. He has just finished two years in “Africa” and the memories are not far away: “A cruel and happy smile passed over his lips at the memory of an escapade which had cost the lives of three men of the Ouled-Alane tribe, and secured for himself and his comrades twenty chickens, two sheep, some money, and something to laugh about for six months.” The novel’s plot is launched when Duroy by chance meets Charles Forestier, an old friend from the military. He is introduced to the borderline honorable profession of newspaper work at the fictional La vie française, owned by “le juif Walter.” We are introduced to Madeline Forestier, whose talent for political journalism and willingness to ghostwrite propelled her husband Charles into prominence, and will now do the same for Duroy.

French expansion in Africa is woven into the plot. Indeed the way in which the novel takes journalism in general, and the actualités of Third Republic colonial ventures in particular, as a theme is one source of scholarly interest. Duroy’s first publication in the newspaper, which meets with a success he is never able to emulate again without the assistance of Madeline, draws on his experiences in Africa. But there is a larger colonial venture in the background of the novel. Put briefly, the minister Laroche-Mathieu connives with Walter to convince the public that the French will never go into Tunisia. This has the effect of driving down to practically nothing the price of Tunisian government bonds. Then Laroche-Mathieu’s government does decide to invade, determining among other things to guarantee the solvency of the bonds. Walter turns out to own a great quantity of them. From merely wealthy he becomes among the richest men in Paris—from “le juif” he becomes “le riche Israélite.” This subplot ties the novel both to the current events of the 1880s and to Maupassant’s own newspaper career.

But colonial experience is manifest in the novel on quite other levels. Through no fault of his own Duroy becomes involved in an affair of honor, a duel with a reporter from another paper. In a darkly comic scene Duroy, who is capable of self-reflection only in the mode of self-justification, considers the ridiculous possibility that he will die. His military past returns, above all in its irrelevance: “He had been a soldier, he had shot at Arabs without much danger to himself, it is true, a little as one shoots at a boar on a hunt.” Unfortunately for him, “in Paris, it was something else.” The duel takes place, as it must; both parties fire and miss; honor is maintained. Such duels were relatively common among bourgeois men and especially among journalists on the right like Duroy. So well institutionalized was the practice of risking one’s life—even if relatively few people died—for one’s honor that it could be seized by women to criticize the gender divisions of the Third Republic. In an elaborate set piece that farcically repeats his own experience, Duroy attends a charity banquet involving a series of épée and saber duels as entertainment. One section of the spectacle is women sparring to the erotic delight or forbearance of all.

The violence of Algeria has no existential weight for Duroy, as little as do the semi-nude fencers. This has nothing to do with the victims (Duroy has no feelings for anyone beyond himself, Ouled-Alane or French, man or woman) or, more surprisingly, even with the objective risk of death. In Paris, there are other men looking at him. It is fame, unequal recognition—to seduce Paris—that Duroy really wants. The duel is violence that does not take place, mere potential violence, as meaningless as the long late-night monologue in which the poet-columnist Norbert de Varenne spills out to Duroy all that he has learned about life and death.

The duel, staged and public, is a comic event for the reader and, at least as he tells it in retrospect, for Duroy. But there are also many moments of intimate violence that are less comical. Charles Forestier succumbs to a long-term illness, and Duroy proposes himself to Madeline as a replacement at the deathbed. Eventually, she agrees. Later, however, Madeline stands in the way of Duroy’s plans to marry Suzanne, the prettier of the now fabulously wealthy Walter’s two daughters. But how to rid one’s self of a wife? Duroy brings in the police to make a public discovery of Madeline in a compromising situation with Laroche-Mathieu, and force a divorce (this, too, was topical; debates around its legalization in 1884 were intense). Duroy breaks down the door to the furnished apartment and the police commissioner follows him in. The policeman demands an account of what has obviously been going on from Madeline. When she is silent: “From the moment that you no longer wish to confess it, Madame, I will be obliged to verify it” (“Du moment que vous ne voulez pas l’avouer, madame, je vais être contraint de le constater”). Duroy is able to turn the revelation—which of course is nothing of the sort—to his own advantage not only by divorcing Madeline but also, in a series of newspaper articles, by destroying the career of Laroche-Mathieu.


“La santé de l’autre” (“The Health of the Other”), from Félix Vallotton’s series “Intimités” (image credit: Van Gogh Museum)

This elaborately public scene with Madeline is to be contrasted with the scene between Duroy and his longtime mistress and benefactor, Clotilde de Marelle. They are together in the apartment that she rented for that purpose long ago; she has just learned, elsewhere, about his impending marriage to Suzanne Walter. Marelle, processing what he has done, how he has kept her in the dark about the plan, abuses him: “Oh! How crooked and dangerous you are!” He gets self-righteous when she describes him as “crapule” and threatens to throw her out of the apartment—a misstep, because she has been paying for the apartment all along, from back when he had no money at all. She accuses him of sleeping with Suzanne in order to force the marriage. As it happens, Duroy has not, and this, it seems, is a bridge too far—at least so he can tell himself. He hits her; she continues to accuse him, and “He pitched onto her and, holding her underneath him, struck her as though he were hitting a man.” After he recovers his “sang-froid” he washes his hands and tells her to return the key to the concierge when she goes. As he himself exits he tells the doorman, “You will tell the owner that I am giving notice for the 1st of October. It is now the 16th of August; I am therefore within the limits.” It is almost as though Duroy were compelled to assert to a man, of whatever class, that he was “dans les limites.” As Eliza Ferguson succinctly remarks in her rich study of Parisian judicial records related to cases of intimate violence, “the proper use of violence was an integral component of masculine honorability.” In certain situations juries and even the law itself recognized that an honorable man might inflict even fatal violence on a woman. Duroy is of course not an example of honorable masculinity, but he is intensely concerned with that appearance. Familiar with his style, Marelle simply will not accept the appearance he wants to impose in the space of their intimate life. He resorts to physical violence of an extreme sort.

 

The only scene in the novel that does not follow Duroy in close third takes place between the Walters, when Madame Walter discovers that her daughter Suzanne has disappeared, doubtless with Duroy. Duroy, of course, had earlier seduced, used, and then grown bored with Madame Walter, a devout Catholic who had never previously done anything so immoral. Her relationship with Duroy is, again of course, unspeakable. As she explains to her husband that Duroy has made off with their daughter, Walter responds in a practical way. Rather than raging at the betrayal, he is impressed by Duroy’s audacity: “Ah! How that rascal has played us… Anyway, he is impressive. We might have found someone with a much better position, but not such intelligence and future. He is a man of the future. He will be deputy and minister.” His wife cannot explain the depths of betrayal she feels, at least not without admitting her own culpability, so that she is rendered hypocritical even in her righteous anger. The public face of things, carefully arranged by Duroy, brings appalling suffering to the private.


“L’absoute” (“Absolution”), Félix Vallotton (image credit: Gallica)

The novel’s most violent moments come at junctures like these, when the not-always-unspoken code of illicit intimacy is broken. Violence is generally inflicted on women by Duroy, who wields publicity: his capacity to apply the logic of public life to that of private life, of honor to desire. In Duroy’s Third Republic the deepest moral corruption, the most serious violence, is not corruption in the usual sense of the word, the turning of the public to private ends, which is how one might normally think of the Tunisian affair, but rather the brutal and repeated enforcement of the public in the intimate. Here, then, is a way of thinking about differentiation within the broader category of violence. Some violence mattered more than other violence in the Third Republic. Men beating women, French soldiers killing Arabs out of boredom, a duel in defense of masculine honor—all of this was violent, but not serious. The interruption of the logics of intimacy and desire by the logics of publicity, the betrayal of a tacit agreement by spoken law: these are the sorts of transgressions that are not so easily sanitized by ironic distance.

 

Melodrama in Disguise: The Case of the Victorian Novel

By guest contributor Jacob Romanow

When people call a book “melodramatic,” they usually mean it as an insult. Melodrama is histrionic, implausible, and (therefore) artistically subpar—a reviewer might use the term to suggest that serious readers look elsewhere. Victorian novels, on the other hand, have come to be seen as an irreproachably “high” form of art, part of a “great tradition” of realistic fiction beloved by stodgy traditionalists: books that people praise but don’t read. But in fact, the nineteenth-century British novel and the stage melodrama that provided the century’s most popular form of entertainment were inextricably intertwined. The historical reality is that the two forms have been linked from the beginning: indeed, many of the greatest Victorian novels are prose melodramas themselves. But from the Victorian period on down, critics, readers, and novelists have waged a campaign of distinctions and distractions aimed at disguising and denying the melodramatic presence in novelistic forms. The same process that canonized what were once massively popular novels as sanctified examples of high art scoured those novels of their melodramatic contexts, leaving our understanding of their lineage and formation incomplete. It’s commonly claimed that the Victorian novel was the last time “popular” and “high” art were unified in a single body of work. But the case of the Victorian novel reveals the limitations of such constructed, motivated narratives of cultural development. Victorian fiction was massively popular, absolutely—but that popularity rested in significant part on the presence of “low” melodrama around and within those classic works.


A poster of the dramatization of Charles Dickens’s Oliver Twist

Even today, thinking about Victorian fiction as a melodramatic tradition cuts against many accepted narratives of genre and periodization; although most scholars will readily concede that melodrama significantly influenced the novelistic tradition (sometimes to the latter’s detriment), it is typically treated as an external tradition whose features were borrowed (or else as an alien encroaching upon the rightful preserve of a naturalistic “real”). Melodrama first arose in France around the French Revolution and quickly spread throughout Europe; A Tale of Mystery, an uncredited translation from the French by Thomas Holcroft (himself a novelist) and generally considered the first English melodrama, appeared in 1802. By the accession of Victoria in 1837, melodrama had long been the dominant form on the English stage. Yet major critics have shown melodramatic method to be fundamental to the work of almost every major nineteenth-century novelist, from George Eliot to Henry James to Elizabeth Gaskell to (especially) Charles Dickens, often treating these discoveries as particular to the author in question. Moreover, the practical relationship between the novel and melodrama in Victorian Britain helped define both genres. Novelists like Charles Dickens, Wilkie Collins, Edward Bulwer-Lytton, Thomas Hardy, and Mary Elizabeth Braddon, among others, were themselves playwrights of stage melodramas. But the most common connection, like film adaptations today, was the widespread “melodramatization” of popular novels for the stage. Blockbuster melodramatic productions were adapted not only from popular crime novels of the Newgate and sensation schools like Jack Sheppard, The Woman in White, Lady Audley’s Secret, and East Lynne, but also from canonical works including David Copperfield, Jane Eyre, Rob Roy, The Heart of Midlothian, Mary Barton, A Christmas Carol, Frankenstein, Vanity Fair, and countless others, often in multiple productions for each.
In addition to so many major novels being adapted into melodramas, many major melodramas were themselves adaptations of more or less prominent novels, for example Planché’s The Vampire (1820), Moncrieff’s The Lear of Private Life (1820), and Webster’s Paul Clifford (1832). As in any process of adaptation, the stage and print versions of each of these narratives differ in significant ways. But the interplay between the two forms was both widespread and fully baked into the generic expectations of the novel; the profusion of adaptation, with or without an author’s consent, makes clear that melodramatic elements in the novel were not merely incidental borrowings. In fact, melodramatic adaptation played a key role in the success of some of the period’s most celebrated novels. Dickens’s Oliver Twist, for instance, was dramatized even before its serialized publication was complete! And the significant rate of illiteracy among melodrama’s audiences meant that for novelists like Dickens or Walter Scott, the melodramatic stage could often serve as the only point of contact with a large swath of the public. As critic Emily Allen aptly writes: “melodrama was not only the backbone of Victorian theatre by midcentury, but also of the novel.”

 

This question of audience helps explain why melodrama has been separated out of our understanding of the novelistic tradition. Melodrama proper was always “low” culture, associated with its economically lower-class and often illiterate audiences in a society that tended to associate the theatre with lax morality. Nationalistic sneers at the French origins of melodrama played a role as well, as did the Victorian sense that true art should be permanent and eternal, in contrast to the spectacular but transient visual effects of the melodramatic stage. And like so many “low” forms throughout history, melodrama’s transformation of “higher” forms was actively denied even while it took place. Victorian critics, particularly those of a conservative bent, often flatly denied melodramatic tendencies in novelists whom they chose to praise. In the London Quarterly Review’s 1864 eulogy “Thackeray and Modern Fiction,” for example, the anonymous reviewer writes that “If we compare the works of Thackeray or Dickens with those which at present win the favour of novel-readers, we cannot fail to be struck by the very marked degeneracy.” The latter, the reviewer argues, tend towards the sensational and immoral, and should be approached with a “sentiment of horror”; the former, on the other hand, are marked by their “good morals and correct taste.” This is revisionary literary history, and one of its revisions (I think we can even say the point of its revisions) is to eradicate melodrama from the historical narrative of great Victorian novels. The reviewer praises Thackeray’s “efforts to counteract the morbid tendencies of such books as Bulwer’s Eugene Aram and Ainsworth’s Jack Sheppard,” ignoring Thackeray’s own classification of Oliver Twist alongside those prominent Newgate melodramas.
The melodramatic quality of Thackeray’s own fiction (not to mention the highly questionable “morality” of novels like Vanity Fair and Barry Lyndon), let alone the proactively melodramatic Dickens, is downplayed or denied outright. And although the review offers qualified praise of Henry Fielding as a literary ancestor of Thackeray, it ignores their melodramatic relative Walter Scott. The review, then, is not just a document of midcentury mainstream anti-theatricality, but also a document that provides real insight into how critics worked to solidify an antitheatrical novelistic canon.


Photographic print of Act 3, Scene 6 from The Whip, Drury Lane Theatre, 1909
Gabrielle Enthoven Collection, Museum number: S.211-2016
© Victoria and Albert Museum

Yet even after these very Victorian reasons have fallen away, the wall of separation between novels and melodrama has been maintained. Why? In closing, I’ll speculate about a few possible reasons. One is that Victorian critics’ division became a self-fulfilling prophecy in the history of the novel, bifurcating the form into melodramatic “low” and self-consciously anti-melodramatic “high” genres. Another is that applying historical revisionism to the novel in this way mirrored and reinforced a long-standing fact of melodrama’s theatrical criticism, which has itself consistently used “melodrama” derogatorily, persistently differentiating the melodramas of which it approved from “the old melodrama”—a dynamic that took root even before any melodrama was legitimately “old.” A third factor is surely the rise of so-called dramatic realism, and the ensuing denial of melodrama’s role in the theatrical tradition. And a final reason, I think, is that we may still wish to relegate melodrama to the stage (or the television serial) because we are not really comfortable with the roles that it plays in our own world: in our culture, in our politics, and even in our visions for our own lives. When we recognize the presence of melodrama in the “great tradition” of novels, we will be better able to understand those texts. And letting ourselves find melodrama there may also help us find it in the many other places where it hides in plain sight.

Jacob Romanow is a Ph.D. student in English at Rutgers University. His research focuses on the novel and narratology in Victorian literature, with a particular interest in questions of influence, genre, and privacy.

You Should Learn Descriptive Bibliography

By editor Erin Schreiner

This summer, I spent a week at Rare Book School at the University of Virginia doing something new, and I loved it. I was a newcomer to the group of lab instructors guiding students through a weeklong intensive course, the Introduction to the Principles of Descriptive Bibliography, otherwise known as Des Bib boot camp. Over the course of the week, students spend a solid six to eight hours each day in lectures, in curated museums of printing, typography, and paper, and in something of a trial by fire: homework and “lab” sessions. In the last two, students go to battle with books, writing collational formulas and statements of signing and pagination that describe, in a language codified by Fredson Bowers, the book’s structure. In the lab periods, students sit down with an instructor to see if the descriptions they wrote actually represent the book at hand.

It’s this last bit that’s the trickiest part of learning to write coherent, accurate, and concise bibliographical descriptions, because in order to describe a book you’ve got to understand why it looks the way it does now, how and when it got to be that way, and the questions to ask and the sources to consult to figure all that stuff out. Determining book format – folio? quarto? octavo? duodecimo? 32mo or 24mo? – requires not just an understanding of what those words mean, but also a substantial knowledge of historical printing, papermaking, and binding techniques. In other words, competent bibliographical description depends upon competent bibliographical analysis, and students learn to do both in this course at Rare Book School. It’s a lot to teach, but students catch on fast and many have a lot of fun with it.

Most students in this course fall into three categories: rare book curators, library catalogers, and booksellers. Academics, typically historians of English literature, have also been a fixture in the course, and in recent years their numbers have grown, particularly thanks to Rare Book School’s Mellon Fellowship Program. Curators and booksellers must know how to read and write descriptions because their reputations, their livelihood, and the collections they help to build depend on it. The value of a book depends upon whether or not it is complete and on the place it holds within that text’s publication history. The edition, issue, or state of a specific copy of a text impacts its monetary and scholarly value, and parties on both ends of the transaction must carefully examine the book at hand in order to know precisely what is on offer. Catalogers, too, must learn to read and write descriptions so that they can accurately represent the books in their institution’s collection to the reading public consulting its catalog.

For curators, catalogers, and booksellers, the need to read and write detailed, Bowers-style bibliographical descriptions brings them to Charlottesville for the week. And this, in part, explains why fewer academics (even academics who work in bibliographically oriented areas of study like the history of books and reading) typically take the course: reading and writing Bowers-style formulas is not an essential skill for their scholarship. But after a week of living and breathing the Rare Book School curriculum – which relies heavily on Bowers’ Principles of Bibliographical Description and Philip Gaskell’s A New Introduction to Bibliography – I want to urge academics to consider how learning the basics of descriptive bibliography can benefit their work as scholars and teachers.

At Rare Book School, students learn to write collations for what’s known as the ideal copy of the text, which Bowers defines as “a book which is complete in all its leaves as it ultimately left the printer’s shop in perfect condition and in the complete state that he considered to represent the final and most perfect state of it.” (Principles, 13) Perhaps the stickiest wicket in all of bibliography, ideal copy addresses what G. Thomas Tanselle describes as “a central truth that affects everything a bibliographer does… the fact that books are not meant to be unique items and are normally printed in runs of what purport [my emphasis] to be duplicates.”


Studying a forme of type on the bed of a Vandercook Press at Rare Book School.

But bookmakers and book buyers have many marvellous ways of interfering with the consistent reproduction and distribution of a text. In the print shop, proofreaders stop the press to correct errors they’ve discovered during production, pieces of type break or fall out of place, pressworkers lose focus, and sheets are mislaid on the press. In the bindery, gatherings might be bound out of order, sheets from one book might be bound into another, or left out altogether by accident. Readers, of course, do all kinds of things to their books – they tear leaves out and add leaves in, bind one book with a text to which it is completely unrelated as far as publication is concerned, and leave inked notes about the text or anything else in the margins and on blank pages. Analytical bibliography is the practice of discovering and diagnosing these kinds of issues; descriptive bibliography is the practice of synthesizing analytical observations and recording them accurately from copy to copy and across an edition.

 

When thinking like a descriptive bibliographer, one must consider such changes with respect to their impact on ideal copy, and with every book in hand one asks, “what do other copies look like and how many can I get my hands on?” This develops an essential scholarly habit of mind, one in which the concept of ideal copy as it relates to a specific edition drives the very close examination and analysis of that text in multiple copies. By comparing a book in multiple copies and making sense of what one finds, the scholar bibliographer establishes a well-researched and materially grounded context for their research. Understood in these terms, intellectual historians and historians of books and reading in particular can turn to analytical and descriptive bibliography to uncover the material context that defines a historical reader’s experience of a text on the micro- and macro-levels. This is particularly true when one’s use of descriptive bibliography incorporates the theoretical and practical approaches of scholars like Don McKenzie. His “Printers of the Mind” and Bibliography and the Sociology of Texts cleared a new path for the discipline by articulating some of the pitfalls of the method when used exclusively, without the kinds of archival and secondary sources that book historians rely upon to establish historical context for their reading of a text. A printer’s relationship with an author or bookseller, for example, might impact the printed text, and that relationship might be revealed in the author’s letters or in booksellers’ ledgers. A careful analysis of bibliographical clues will aim to uncover such details, and an accurate bibliographical description will record those facts alongside a description of the printed traces of those contextual details with precision.

Close readers will have noticed that I’ve often used the word accurate in reference to description. An accurate description might seem like an obvious necessity for the scholar bibliographer, but it is not easily achieved. As a teacher of descriptive bibliography, I aim to provide students with the tools they need to make well-reasoned decisions about what they know they can say about a book at hand, and about how to communicate conjecture. At the copy-specific level, this type of description is useful for scholars studying a text in multiple copies, because it is helpful to have a consistent method for taking notes about the books you see in far-flung libraries. But more broadly, it’s also a useful tool for teaching students how to build a strong argument (or recognize a weak one) using material and textual evidence, which in part depends upon one’s ability to recognize what one does not or cannot know.

When I talk to my students about writing collational formulas, I tell them that they are writing a condensed argument about the way this book is, and that they can explain how it got that way in the longer-form areas of their descriptions. In our lab sessions, we bounce from book to collation and back to book to see how the two match up, studying the evidence and understanding what it can lead us to conclude – or not – about that object. And while we look to Bowers for guidance on how to write all this stuff out clearly and concisely, learning descriptive bibliography is not an exercise in slavish adherence to the rules of a system of notation devised by a scholar of Elizabethan drama, nor is it applicable only to books of the handpress period. Learning descriptive bibliography is about learning to look at as many instantiations of a text as possible, and knowing how to identify, synthesize, and interpret the material evidence presented in each copy.

Those of you who have followed my writing for the JHI Blog will know that I’m not particularly interested in handpress-era books. I started collecting Whole Earth Catalogs some years ago because I found The Last Updated Whole Earth Catalog in a bookshop and read the “How to Make a Whole Earth Catalog” section as a guide to the bibliographical analysis of twentieth-century counterculture books. I have applied what I learned there to all kinds of twentieth-century printed matter I encounter in my personal and professional life. Without a background in descriptive bibliography, I wouldn’t have read it that way, or started seeing so much in a set of books that I was naturally curious about. Studying bibliography taught me to see more, and more clearly, and I’m not the only one. There might be a whole new set of questions under your nose, just waiting for you to learn how to see them. As we tell our students in Des Bib, start reading Gaskell and see what you’ve been missing.

What was life like as a female singer 3400 years ago?

By guest contributor Lynn-Salammbô Zimmermann


A singer and a musician on the royal standard of Ur.

In the mid-14th century BCE, a group of young female singers contracted an unknown disease. A corpus of letters from Nippur, a religious and administrative center in the Middle Babylonian kingdom (modern-day Iraq), tells us about the medical condition of these young women, who were training to become singers while sharing the same quarters (cf. BE 17, 31, 32, 33, 47, N 969, and PBS 1/2, 71, 72, 82).

The letters about these girls’ medical conditions were exchanged between the physician Šumu-libši (and his colleague Bēlu-muballiṭ) and the governor of Nippur, Enlil-kidinnī. Šumu-libši provides the governor with meticulous reports of the girls’ symptoms, as well as his attempts to cure them. The symptoms include an inflammation of the chest, fever, perspiration, and coughing. The girls are treated with poultices on the chest. It is therefore likely that Šumu-libši was an asû, a physician, and not an exorcist. An asû would have concentrated on the natural causes of symptoms, applying drugs and using the scalpel when dealing with the physical side of the disease, while an exorcist would have also spoken incantations (Geller, 2001: 27-33, 43-48, 56-61). So far, research has unfortunately focused only on the sender and recipient of these letters, not on the female patients, owing to the scarcity of information about them and their passive role in the narrative. This article aims to shift the perspective.

Unfortunately, we really do not know much more about these girls. We do not even know their names. They are all called “the daughter of NN” with the exception of a woman named Eṭirtu, who may have been in a higher position, such as that of a supervisor or a teacher.

The girls were most likely trained to become singers in a palace or a temple complex (Sallaberger, Vulliet, 2005: 634). Every report by Šumu-libši begins with the greeting: “Your servant Šumu-libši: I may die as my lord’s substitute. The male and female musicians, Eṭirtu and the house of my lord are well.” The governor, who is inquiring after the girls’ health, was not only responsible for the provincial administration of Nippur, but also for its temples, as he also held the position of the highest priest in the city (Petschow, 1983: 143-155; Sassmannshausen, 2001: 16-21). Additionally, he owned large estates, so we cannot exclude the possibility that he would employ singers for his private entertainment there. Since the kingdom had a patrimonial structure, and the concept of “privacy” separate from an official’s public role did not exist until later, “the house of my lord” could apply not only to the various official households under Enlil-kidinnī’s command, but also to his own estates.


Musicians and singers in Girsu. Louvre Museum, Paris. Photo by Lynn-Salammbô Zimmermann.

In general, musicians, both male and female, had a high status at the royal courts of the Old Babylonian period. This is consistent with the fact that the governor, who held the most important office of the Middle Babylonian kingdom, made inquiries about the young singers’ health. Despite the fact that the girls are rather passive in the letters, they can apparently give orders to the healing specialists, as is reported in the letter BE 17, 47, ll. 4-5: “they bandaged her with a poultice as (she) requested” (Sibbing Plantholt, 2014: 180).

During the Middle Babylonian period, Elamite and Subarean singers can be found at the royal court in Dūr-Kurigalzu (Ambos, 2008: 502). Foreign singers were exchanged as precious diplomatic gifts. Young female musicians often ended up in the royal harems (Ziegler, 2006: 247, 349). Nonetheless, in Mesopotamia—and especially in the “international” Middle Babylonian period—the ethnicity of a person cannot automatically be deduced from the language of their name. That being said, the majority of the names of the fathers of “our” girls appear to be Babylonian, one father bearing a supposedly Hurrian name (Hölscher, 1996: 85).

We can find out more about Šumu-libši’s patients by comparing their situation with that of other female singers in Mesopotamia. This unfortunate case of an epidemic infecting apprentice musicians is reminiscent of another disease among female singers at a royal court some 400 years earlier (Ziegler, 1999: 28-29). The archive of this royal court, that of king Zimri-Lim (1775-1762 BCE) in the city state of Mari (modern-day Syria), documents the presence of a large number of female musicians (Ziegler, 1999: 69-82; Ziegler, 2006: 245). Many of the female musicians at court were actually concubines. We know this because some of them received oil after successfully giving birth, and since they were “unmarried,” we can conclude that they became pregnant by the king as members of his harem. One of Zimri-Lim’s favourite wives actually supervised a number of female musicians, who must have been very young, since according to the oil accounts they received only small allotments. We can see in the accounts of oil for their toilette and for the lighting of the palace quarters that there existed a strict hierarchy among these women (Ziegler, 1999: 22-24, 29-30; Ziegler, 2006: 346). According to their rank, the women received larger or smaller rations. The female singers were among the lower classes of the harem, being supervised by a governess (Lafont, 2001: 135-136). In the Middle Babylonian letters, Eṭirtu might have been such a governess.


A model of the royal palace of Mari. The women’s quarters are in the lower right corner. Louvre Museum, Paris. Photo by Lynn-Salammbô Zimmermann.

Contrary to our imagination of an oriental harem, it is attested that these women could move beyond the scope of their quarters (Lafont, 2001: 136; Ziegler, 1999: 15-20). In the later Middle Assyrian harem edicts, however, which were issued in Assyria during the Middle Babylonian period, the freedom of the women at court was much more limited, rendering them completely dependent on the king and palace officials (Roth, 1997: 196-209). If we assume that the Middle Babylonian patients were singers at court, then—according to the contemporary Middle Assyrian harem edicts—they were kept under strict surveillance by palace officials.

In both cases we see that the apprentices apparently shared the same quarters and had close daily contact with one another. This might not only have led to the spread of a contagious disease, but also to conflicts: quarrels between women at court were addressed in the Assyrian edicts (Roth, 1997: 201-202). While “our” Middle Babylonian singers’ lives were valuable enough to their employer for them to receive medical care, the king of Mari ordered his queen in two letters to isolate sick women from the rest of the harem (Lafont, 2001: 138-139). In one of these letters (ARM X, 129), Zimri-Lim writes that a sick woman had infected other women in the palace. Therefore he orders his queen: “[G]ive strict orders that no one is to drink from the cup from which she drinks, or sit on the seat where she sits, or lie on the bed where she lies, so that she does not infect many women by her contact alone” (Lafont, 2001: 138). In the second letter (MARI III, 144), Zimri-Lim orders his queen to let the isolated woman die (illnesses were believed to be a divine punishment, cf. the arnu principle in Neumann, 2006: 36): “So let this woman die, she alone, and that will cause the illness to abate” (Lafont, 2001: 138-139).

Where were Zimri-Lim’s concubines from? Apparently the king had his pick among the women whom he had brought back as booty from campaigns to the north. In the Middle Babylonian letters, however, nothing indicates that “our” girls were booty—not even their fathers’ names. It is also possible that the girls’ families wanted them to become singers, because it was a prestigious position at court or in a temple.


Heads of votive figures of priestesses or ladies of the court at Mari. Louvre Museum. Photo by Lynn-Salammbô Zimmermann

How did the young women in Mari become singers? Since they were not only used for entertainment and/or the cult, but also functioned as concubines, physical attributes were the main criteria, rather than artistic or musical talent. Thus the king orders his queen to pick the prettiest ones (ARM X, 126): “Choose some thirty of them […] who are perfect and flawless, from their toenails to the hair of their head.” Only afterwards does the king want them to learn how to sing. Once the concubines were picked, they were also to keep their weight according to the king’s orders: “Give [also] instructions concerning their food, so that their appearance may not be displeasing” (Lafont, 2001: 138). Such appearance-related pressure presumably applied to “our” girls as well. Even if they worked in temple premises at Nippur and not in a royal harem, the religious cult would have required an immaculate body due to purity regulations.

The Middle Babylonian (14th century BCE) letters themselves do not offer much information about “our” young female patients. This is consistent with the patriarchal nature of Mesopotamian society: the textual evidence was mostly written from the male perspective, reporting on women with reference to their looks, their fertility, and their use as a workforce. (Note, though, that women had some legal rights, i.e. appearing at court and acting as contracting partners, and, especially in the Middle Babylonian period, serving as single heads of their families; cf. Paulus, 2014: 240-245.) Research, focusing on the available information, has consequently followed this perspective. However, drawing parallels to the conditions of female singers at court 400 years earlier offers us a plausible glimpse into the possible living conditions of “our” female patients.

Lynn-Salammbô Zimmermann is a D.Phil. candidate in Assyriology at the University of Oxford, writing her thesis about the Middle Babylonian/Kassite period administration. She completed her undergraduate and graduate studies in Egyptology, Assyriology and Religious Studies in Münster, Germany.

“Doctrine according to need”: John Henry Newman and the History of Ideas

By guest contributor Burkhard Conrad O.P.L.

Any history of ideas and concepts hinges on the observation that ideas and concepts change over time. This notion seems to be so self-evident that the question of why they change is rarely addressed.

Interestingly enough, it was only during the course of the nineteenth century that the notion of a history of ideas, concepts, and doctrines became widespread. Ideas increasingly came to be seen as contingent or “situational,” as we might phrase it today. Ideas were no longer regarded as quasi-metaphysical entities unaffected by time and change. This hermeneutic shift from metaphysics to history, however, was far from sudden. It came about gradually and is still ongoing.


John Henry Newman in 1844, a year before he wrote his Essay on the Development of Christian Doctrine (portrait by George Richmond)

The theologian and controversialist John Henry Newman (1801–1890) must be regarded as one of the intellectual protagonists of this shift from metaphysics to history. An eminent intellectual within the high-church Anglican Oxford Movement, Newman decided mid-life to join the Church of Rome, eventually becoming one of the foremost voices of nineteenth-century Roman Catholicism in Britain. Newman exerts an influence well into our time, with figures such as Joseph Ratzinger (Pope Benedict XVI) paying tribute to his thought. Benedict eventually beatified Newman in 2010.

Rarely quoted in any non-theological study in the history of ideas, Newman’s work is eminently important for understanding both the quest for a historical understanding of ideas and the anxious existential situation of those thinkers who found themselves in the middle of a momentous intellectual revolution. In Newman’s day, it was not uncommon for such intellectual and personal queries to go hand in hand.

In Newman’s case, this phenomenon becomes particularly obvious when we look at his contribution to the history of ideas. In 1845, during his phase of conversion from Anglicanism to Roman Catholicism, Newman wrote his famous Essay on the Development of Christian Doctrine. I wish to focus on two of Newman’s central claims within the Essay. Firstly, he states that it is justifiable to argue that doctrines and ideas change over time. Confronting a more “scriptural”—i.e. Protestant—understanding, Newman affirms that ecclesiastical doctrines hinge not only on the acknowledgement of divine, biblical revelations, but also on a tradition of teaching that has evolved since the early church. Newman writes in one of his sermons that “Scripture (…) begins a series of development which it does not finish; that is to say, in other words, it is a mistake to look for every separate proposition of the Catholic doctrine in Scripture” (§28). He complements this thought by adding in the Essay that “time is necessary for the full comprehension and perfection of great ideas” (29), which is to say that doctrine is not something given only once, but rather possesses a dynamic quality.

But why does doctrine evolve in the first place? Why is it important to speak of a history of ideas? It is remarkable that Newman answers those questions in much the same way as we would do today. Doctrine, according to Newman, is ever-evolving because there is a need for such transformation. Ideas are simply “keeping pace with the ever-changing necessities of the world, multiform, prolific, and ever resourceful” (56).

Newman speaks of “a gradual supply [of doctrine] for a gradual necessity” (149). The logic behind doctrinal change may thus be explained as follows: a given or conventional doctrine comes into contact with or is challenged by alternative expressions of doctrine. These alternative expressions come to be regarded as false, as heresy. Hence, an adequate doctrinal reaction is necessary. This reaction may take the form either of a doctrinal change through the absorption of novel thought, or of a transformation through counter-reaction. Hence, Newman is even able to rejoice in the fact that false teaching may arise. He writes in the aforementioned sermon: “Wonderful, to see how heresy has but thrown that idea into fresh forms, and drawn out from it farther developments, with an exuberance which exceeded all questioning, and a harmony which baffled all criticism”(§6).


Hans Blumenberg © Bildarchiv der Universitätsbibliothek Gießen und des Universitätsarchivs Gießen, Signatur HR A 603 a

The necessity for doctrinal development, hence, is born within situations of discursive tension. These situations—the philosopher Hans Blumenberg once called them “rhetorical situations” (Ästhetische und metaphorologische Schriften, 417)—demand a continuous, diachronic stream of cognitive solutions. The need has to be answered. As Quentin Skinner once noted, “Any statement (…) is inescapably the embodiment of a particular intention, on a particular situation, addressed to the solution of a particular problem, and thus specific to its situation in a way that it can only be naïve to try to transcend” (in Tully, Meaning and Context, 65). The need for development and change, thus, is born in distinctive historical settings with concrete utterances and equally concrete counter-utterances. That is why ideas and concepts change over time: they simply react. “One proposition necessarily leads to another” (Newman, Essay, 52). This change is required in order to come to terms with new challenges, both rhetorical and real.

To be frank, Newman would hardly agree with Skinner on the last part of his statement, namely, that statements are never able to transcend their particular context. For all his historical consciousness, Newman was, like most of his contemporaries, fixated on the idea that, despite its dynamic character, the teaching of the Church forms a harmonious, logical and self-explaining system of thought, a higher truth. He writes, “Christianity being one, and all its doctrines are necessarily developments of one, and, if so, are of necessity consistent with each other, or form a whole” (96).

Newman was also convinced—again contrary to Quentin Skinner—that the history of theological ideas had to be looked at with a normative bias. He identified certain “corruptions” in the history of theological ideas (169). It was important to Newman to be able to distinguish between “genuine developments” and these “corruptions.” A large part of his Essay is devoted to setting out apparently objective criteria for such a normative classification.

Who is to decide what is genuine and what is corrupted? Newman’s second claim in the Essay deals with this question. According to Newman, the only true channel of any genuine tradition is found within the Roman Catholic Church, with the pope as the final and supreme arbiter. Newman could not do without such a clear attribution of doctrinal decision-making power. It was necessary “for putting a seal of authority upon those developments;” that is, those which ought to be regarded as genuine (79).

In consequence, Newman mobilized his first claim about the dynamic nature of doctrines with regard to his second claim, the idea of papal infallibility in deciding matters of doctrine. It was clear even to the nineteenth-century theologian that the notion of papal infallibility was not explicitly contemplated by anyone in the early Church. But, according to Newman, the requirement for an “infallible arbitration in religious disputes” became more and more pronounced as centuries of doctrinal disputes passed (89). And Newman says of his own century that “the absolute need of a spiritual supremacy is at present the strongest of arguments in favour of the fact of its supply” (89). The idea of infallibility thus came into existence because, according to Newman, it was needed to settle doctrinal uncertainty within his own time.


Cardinal John Henry Newman, painted in 1889 by Emmeline Deane

Having admitted the pope’s supremacy in matters of doctrine, Newman faced an existential consequence: he had no choice but to leave Canterbury for Rome. It is only fair to say that Newman required the idea of a final arbiter, someone to decide on genuine doctrinal development, in order to fulfill his own need for certainty and a spiritual home. Even a sympathetic interpreter like the historian and theologian Jaroslav Pelikan had to concede “that here Newman’s existential purpose did get in the way of his historical vision” (Development of Christian Doctrine, 144).

And so Newman’s account of a history of ideas developing according to need was born as much out of an intellectual interest in the history of theological ideas as it was triggered by biographical motives. But which twenty-first-century scholar in the humanities could claim that his or her research interests had nothing to do with personal circumstances?

Burkhard Conrad O.P.L., Ph.D., taught politics at the University of Hamburg, Germany. He is now an independent scholar working for the archdiocese of Hamburg. Among his research interests are political theology, Søren Kierkegaard, and the Oxford Movement. He is a lay Dominican and writes a blog at www.rotsinn.wordpress.com.