What We’re Reading: Week of 29th January

Woman Reading by Vasile Ion.

Here are some pieces from around the internet that have caught the eyes of our editorial team this week:

Derek:

“Garbage, Genius, or Both? Three Ways of Looking at Infinite Jest” (LitHub)

Editors, “Debating the Uses and Abuses of ‘Neoliberalism’: Forum” (Dissent)

Sean Wilentz, “The High Table Liberal” (NYRB)

Karen Kelsky, “When will we stop elevating predators?” (Chronicle of Higher Education)

 

Spencer:

Nick Richardson, “Even What Doesn’t Happen is Epic” (LRB)

Frederic Raphael, “Aryan Ghetto of One” (TLS)

David Dabydeen, “From royal trumpeter to chief diver, Miranda Kaufmann uncovers the Africans of Tudor Britain” (New Statesman)

Mark A. Michelson and John Ryle, “Remembering Paul Robeson” (NYRB)

Alex Ross, “The Rediscovery of Florence Price” (New Yorker)

Bennett Gilbert, “The Dreams of an Inventor in 1420” (Public Domain Review)

 

Sarah:

Charlotte Higgins, “The Cult of Mary Beard,” (Guardian)

Cressida Leyshon, “Jhumpa Lahiri on Writing in Italian,” (New Yorker)

Erik Moshe, “What I’m Reading: An Interview with Historian Ashley D. Farmer,” (History News Network)

Susan Pedersen, “One-Man Ministry,” (LRB)

 

Disha:

Bridget Minamore, “Black Men Walking: a hilly hike through 500 years of black British history” (The Guardian)

Gavin Walker and Ken Kawashima, “Surplus Alongside Excess: Uno Kōzō, Imperialism, and the Theory of Crisis” (Viewpoint Magazine)

The Origins of Autonomy: Not as Lonesome as You Might Expect

By Contributing Writer Molly Wilder

Autonomous man is–and should be–self-sufficient, independent, and self-reliant, a self-realizing individual who directs his efforts towards maximizing his personal gains. His independence is under constant threat from other (equally self-serving) individuals: hence he devises rules to protect himself from intrusion. Talk of rights, rational self-interest, expedience, and efficiency permeates his moral, social, and political discourse. (Lorraine Code 1991, What Can She Know? Feminist Theory and the Construction of Knowledge, p78)

Thus Lorraine Code describes the conception of autonomy in the popular imagination–and often in the academy as well. This conception of autonomy is obsessed with the self, as evidenced by the language Code uses to articulate it: “self-sufficient,” “self-reliant,” “self-realizing,” and “rational self-interest.” And the word ‘autonomous’ originally meant “self-rule” (derived from the Greek αὐτόνομος, from αὐτο-, ‘self’, and νόμος, ‘rule, law’). The image of the self that Code evokes is that of a citadel, forever warding off external attacks. These attacks are characterized as coming primarily from contact with other people—suggesting that relationships with other people are in themselves dangerous to the self. Though relationships may be valuable in some ways, they are a constant threat to the self’s interests.

Feminist philosophers have largely found this conception both accurate and deeply problematic. Though some feminists have therefore rejected the value of autonomy altogether, many have instead sought to reclaim autonomy as a feminist value. Since the late 1980s, feminists have proposed and argued for a myriad of alternative conceptions of autonomy, which have collectively come to be known as theories of “relational autonomy.”

Theories of relational autonomy vary widely. Some, like Marilyn Friedman’s, still recognize the value of independence and conceive of autonomy as an internal procedure that is available to people of many different beliefs and circumstances. Such an internal procedure requires some sort of critical reflection on attitudes and actions, but places no limits on the outcome of the procedure. Thus, this sort of procedure makes it possible for a person to count as autonomous even if she endorses attitudes or actions that may seem incongruous with a liberal Western image of autonomy, such as discounting her own right to be respected or remaining in an abusive relationship. In contrast, theories like Marina Oshana’s put stringent requirements on the kind of actual practical control necessary for autonomy, significantly limiting those who can count as autonomous. Such theories might consider a person autonomous only if her circumstances meet certain conditions, such as economic independence or a wide range of available social opportunities—conditions that might not be met, for example, by a person in an abusive relationship.

And there are theories that aim somewhere in the middle, such as Andrea Westlund’s, whose conception of autonomy requires some accountability and connection to the outside world, but does so in a way that provides latitude for many different belief systems and social circumstances. Specifically, on Westlund’s account, a person is autonomous only if she holds herself open to criticism from other people. While this dialogical accountability is not a purely internal procedure like Friedman’s, as it involves people other than the agent herself, it does not inherently limit the outcome of the procedure as Oshana’s does. See this collection of essays for more on the theories of Friedman, Oshana, and Westlund, as well as other contemporary theorists of relational autonomy.

These theories, while diverse, share a rejection of the idea that autonomy is inherently threatened by relationships with others. On the contrary, they argue that certain relationships are in fact necessary to the development of autonomy, its maintenance, or both. These theories have provided a much needed new perspective on the concept of autonomy, and continue to provide new insights, particularly with respect to understanding the effect of oppression on selves.

But their core idea, that autonomy requires relationships, is an old one. Long before autonomy became so closely aligned with the protection of the self from others, a prominent strain of philosophy recognized relationships with others as crucial to the well-being of the self—rather than as a threat. To illustrate, consider these excerpts from an ancient philosopher, Aristotle, and a modern philosopher, Spinoza.

For Aristotle, the ultimate good in life, a kind of long-term happiness, is a self-sufficient good. The word he uses is ‘αὐτάρκης’ (derived from αὐτο-, “self,” and ἀρκέω, “to suffice”). He clarifies: “And by self-sufficient we mean not what suffices for oneself alone, living one’s life as a hermit, but also with parents and children and a wife, and friends and fellow citizens generally, since the human being is by nature meant for a city.” (Nicomachean Ethics, 1097b9-11, tr. Joe Sachs) Aristotle, then, explicitly understands self-sufficiency with respect to happiness to require certain kinds of relationships—those of family, friends, and political compatriots.

Though Aristotle does not discuss the concept of autonomy, this passage and others suggest that his ideal of independence was one that required intimate relationships, rather than being threatened by them. Aristotle famously wrote of humans as “political animals.” On a first reading of this phrase, it is apparent that humans are political simply in the sense that they tend to form social institutions by which to govern themselves. But the phrase might also be read to suggest that even at their most independent, humans are the kind of animals that rely on one another.

Spinoza, likewise, identifies the well-being of the self with happiness, and he argues that happiness consists in having the power to seek and acquire what is advantageous to oneself. One might reasonably summarize Spinoza’s view of happiness as the achievement of one’s rational self-interest. For a contemporary reader, Spinoza’s language naturally evokes the conception of autonomy articulated by Code, a conception in which the wellbeing of the self is naturally threatened by others.

Yet Spinoza explicitly argues that “nothing is more advantageous to man than man” (Ethics, IV, P18, Sch., trans. Samuel Shirley). On Spinoza’s view, the only effective, and therefore rational, way for an individual to seek her own advantage is with the help of others. In general, Spinoza criticizes those thoughts and emotions that push people apart—and he argues that when we fall prey to these things, we not only lose power, but we fail to act in the interest of our true selves. An individual’s true self-interest, he argues, is necessarily aligned with the true self-interest of others.

The examples I’ve given remind us that, despite the apparent radicalism of arguing that the concept of autonomy is inherently relational in our contemporary cultural context, the conjunction of terms of self and terms of relationality is both ancient and long-lived. The very concepts that Code uses to describe the kind of autonomy that sees relationships as a threat—self-sufficiency and rational self-interest—were once thought of as concepts that in fact required relationships.

Nearly thirty years after she wrote it, Code’s depiction of autonomy as an atomistic individualism threatened by others still aptly captures the general sense Americans have of autonomy. Although feminist philosophers have been fairly successful in gaining wide recognition of the importance of relationships to autonomy among philosophers who study autonomy, their impact has not been as wide as might be expected given the strength of their arguments. One major exception has been the field of bioethics, in which the discussion of feminist theories of relational autonomy is quite lively. Yet these theories have not been robustly taken up in other professional fields such as business or legal ethics. Nor have they been taken up in a pervasive way in mainstream philosophical ethics or political theory.

Moreover, they have been decidedly less successful in changing the popular conception of autonomy, particularly within the United States, where the threatened-self conception of autonomy is so revered in the nation’s mythology. Indeed, many Americans might be surprised to learn about the history of this conception and its relative novelty. While some philosophers are already doing this, perhaps it would be fruitful, going forward, for people in all fields to spend some time tracing the development of their conceptions of autonomy and self—they might be surprised at what they find.

Perhaps one reason relational theories haven’t been taken up more widely is their feminist origins. Some of the wariness, surely, is simply sexism, both explicit and implicit. But beyond that, there may be a perception that the theories are specifically tied to the interests of women. Yet, to borrow a delightfully biting phrase from Spinoza, if someone were to pay a modicum of attention, they would see that this is not the case. The historical precursors of these ideas demonstrate as much. While the contemporary standard-bearers of relational autonomy may be feminists, the basic ideas are as old and general as philosophy itself, and if the ideas are true, they should prompt Americans to seriously reconsider their national assumptions and priorities. If autonomy is in fact relational, it calls into question standard American justifications and understandings of a huge array of policies and practices, everything from gun control to education to marriage.

 

Molly has just received her law degree from Georgetown University Law Center and is currently developing a dissertation that brings together the professional ethics of lawyers, neo-Aristotelian virtue ethics, and feminist theories of relational autonomy. She wants to know, can you be a (really) good person and a (really) good lawyer at the same time? Beyond her dissertation, Molly has varied philosophical interests, including philosophy of tort law, children’s rights, privacy, and communication. When not philosophizing, Molly enjoys reading children’s fantasy, finding places to eat great vegan food, and engaging in witty banter.

Silencing the Berbers

By guest contributor Rosalie Calvet

 A little less than a year ago, a prestigious American university hosted a conference about French-Algerian history, gathering the leading specialists of the topic.

A prominent French scholar closed his presentation by opening the debate to the audience. Immediately, one of his North American fellows asked “Since you do not speak Arabic, do you feel somewhat limited in your work on French Algeria?”

“I see what you mean,” he replied, “but fortunately, we have the archives of the colonial administration, so French is enough.”

Suddenly, a man sitting in the first row of the audience stood up and, speaking in French, replied “I am Algerian. I was born before the Independence. You taught us French and nothing else. We had to learn Arabic after the War of Liberation. Arabic must come back to Algeria.”

And then, another man, sitting next to him, added “Arabic … and Berber. Nobody talks about Berber. Historians have forgotten that North Africa is the land of the Berbers.”

Who are the Berbers?

The indigenous population of North Africa, the Berbers call themselves i-Mazigh-en, “free men” or “noble” in Tamazight. Though over the centuries the Berbers have split into smaller communities (the Chleuhs in Morocco, the Tuaregs in Libya, and the Kabyles in Algeria), they have remained faithful to a clear sense of unity. The history of the Berbers is that of an identity constantly reshaped by internal and external mutations, of cultural blending and ongoing intellectual developments and innovations. Invaded by the Phoenicians around 800 BC, the Berbers were incorporated into the Roman Empire around 200 BC, and their land constituted the cradle of European Christianity. The Arab Conquest of the seventh century led to the merging of Berber and Arab culture, the conversion to Islam, and the fall of the Christian Church. Between the eighth and ninth centuries, a series of Muslim-Berber dynasties ruled over the Maghreb (the Arabic name for North Africa), achieving its territorial and political unity. Most of the region, except for Morocco, passed under Ottoman domination in 1553 and remained part of the empire until the nineteenth century. During this period, the three political entities composing modern North Africa emerged. While Tunisia and Morocco were to become protectorates of France, in 1881 and 1912 respectively, Algeria was to be French for over a century.

During the first decades of colonial rule (1830-1871), the French authorities privileged the Berbers over their Arab neighbors (8). The main goal of the administration was to eradicate Islam from Algerian identity (23). According to French observers, the Berbers seemed keener to renounce their Muslim legacy, as they more closely resembled the French and shared their Christian roots.


Eugène Delacroix, Fantasia Arabe (1833), Städel Museum, Frankfurt, Germany. One of Delacroix’s most famous representations of a “fantasia” (a traditional Berber military game played on horseback), which he witnessed in North Africa. The composition, centered on three moving figures, reflects Delacroix’s fascination with the ‘wildness’ of the cultures he depicts.

To fuel this narrative, the French progressively constructed the “Kabyle Myth.” In 1826, the Abbé Raynal claimed that the Kabyles were of “Nordic descent, directly related to the Vandals; they are handsome with blue eyes and blond hair, their Islam is mild.” Tocqueville wrote in 1837 that the “Kabyle soul” was open to the French (182). Ten years later, the politician Eugène Daumas claimed that the “Kabyle people, of German descent […] had accepted the Koran but had not embraced it [and that on many aspects] the Kabyles still lived according to Christian principles” (423). This is why French colonial officer Henri Aucapitaine concluded that “in one hundred years, the Kabyles will be French” (142).

The situation shifted in 1871 when two hundred and fifty Kabyle tribes, or a third of the Algerian population, revolted against the colonial authorities. The magnitude of the uprising was such that the French decided to “fight the Berber identity […] which in the [long-run] empowered the Arabs.”

From then on, the differences between the Berbers and the Arabs became irrelevant to France’s main priority: to maintain its control over the local populations by fighting Islam. The idea emerged that to be assimilated to the French Republic, Algerian subjects needed to be “purified” from their religious beliefs.

By the Senatus-Consulte of July 14th, 1865, the French had ruled that “Muslim Algerians were granted the right to apply for French citizenship […] once they had renounced their personal status as Muslims”(444). This law, which had established a direct link between religion on the one hand and political rights on the other, now further reflected the general sense of disregard towards the diversity of cultural groups in Algeria, all falling into the same overarching category of Muslim. After the 1880s, the French gave up on the Kabyle myth, marginalizing the Berbers who had become a source of agitation.


Henri Rousseau, La Baie d’Alger (1880), Private Collection. In this view of the Bay of Algiers, the Douanier Rousseau pictures a Berber tribe.

As the independent Republic of Algeria triumphed in the fall of 1962, the newly founded regime identified the Berbers as posing an “existential threat to the Arabo-Muslim identity of the country” (103).

Repeating the French practice of destroying those regional identities allegedly challenging the legitimacy of an aggressively centralized and centralizing state, the leaders of Algeria denounced the political claims of the Berbers as a “separatist conspiracy,” and after 1965 the Arabization policy became systematic throughout the country.

To debate the respective impact of colonization, of nineteenth- and twentieth-century pan-Arab nationalist ideologies, and of post-independence Algerian leaders on the persecution of the Kabyles after 1962 is a somewhat limited exercise.

It is, however, critical to acknowledge the responsibility of the French state in the marginalization of the Berbers after the 1871 Kabyle riot. Progressively, the colonial administration changed a model of mixed and complex identities strongly rooted in the Maghreb tradition into a binary model (235). Within this two-term model, people could only define themselves on one side or the other of a rigid frontier separating authentic French culture from supposedly authentic colonized culture. As the Franco-Tunisian historian Jocelyne Dakhlia argues in Remembering Africa, “the consequence of such a dualistic opposition of colonial identities was […] that the anticolonial movement stuck to this idea of an authentic native Muslim Arabic identity, excluding the Berbers” (235).

The very existence of the Berbers thwarts any attempt to analyze Algerian society through a rigid grid, whether in racial, cultural, or religious terms.

This is probably the reason why the French, and after them the independent Algerian state, have utterly repressed the legacy of Berber culture in the country: for the Berbers could not exist in the dualistic narrative underlying both colonial and anti-colonial projects. As historian Michel-Rolph Trouillot would argue, they became unthinkable, and were silenced and excluded from History.

Yet the most curious factor in this non-history is the paucity of French scholarship on the issue (50). While some academics do focus on creating conversations and producing literature on the question of Berber identity, the most renowned French scholars systematically fail to do so. As a direct consequence, most French academic discourse reproduces and maintains the somewhat convenient imperial division opposing the “Arabs” in the North to the “Blacks” in the South of Africa, thereby forgetting that the Sahara is not a rigid racial frontier, and that for centuries the Berbers have circulated throughout the region.


Centenaire de l’Algérie – Grandes Fêtes Sahariennes, Affiche, Musée de l’Immigration, Paris. This poster, issued by the French government in 1930, is an invitation to a military parade featuring colonial soldiers to commemorate the centenary of the 1830 conquest of Algiers.

Ultimately, the Berbers blur the lines between colonial and post-independence notions of identity in North Africa. To acknowledge the Berbers would require scholars to accept their fluidity – a direct threat to the Western penchant for systematic and pseudo-universalist thinking, still prevalent in French academia despite the emergence of post-colonial studies in the 1960s.

Recognizing the Berbers necessitates first asking, as Algerian scholar Daho Djerbal has urged: who is the subject of History? This is the only way in which one can hope to put an end to the overly simplistic politics of identity imposed by political power—on both sides of the Mediterranean Sea, on both shores of the Atlantic Ocean.

Rosalie Calvet is a paralegal working in New York City, a freelance journalist, and a Columbia class of 2017 graduate. As a history major, Rosalie specialized in the historiography of French imperial history. Her senior thesis, “Thwarting the Other: A Critical Approach to the Historiography of French Algeria,” was awarded the Charles A. Beard History Prize. In the future, Rosalie wishes to continue reflecting on otherness in the West—both through legal and academic lenses. More about Rosalie and her work is available on her website.

What We’re Reading: Week of 22nd January.


The Orange Book by Allen Tucker. Undated oil on canvas.

Here are a few pieces that have caught the attention of our editorial team this week:

Sarah:

Andy Beckett, “Post-Work: The Radical Idea of a World Without Jobs,” (Guardian)

Alison Croggan, “Now The Sky is Empty,” (overland)

Richard Eldridge, “What Was Liberal Education?” (LARB)

Julie Phillips, “The Subversive Imagination of Ursula K. Le Guin,” (New Yorker)

 

Spencer

Martin Puchner, “The Technological Shift Behind the World’s First Novel” (The Atlantic)

Robert Bird, “Gateless Fortress” (TLS)

Michael Prodger, “The Cavalier Collection” (New Statesman)

Morten Høi Jensen, “Darwin on Endless Trial,” (LARB)

Simon Callow, “The Emperor Robeson” (NYRB)

 

Derek

Kathryn Schulz, “The Lost Giant of American Literature” (New Yorker)

Charlotte Gordon, “Mary Shelley: Abandoned by her creator and rejected by society” (LitHub)

Savannah Marquardt, “The Nashville Parthenon Glorifies Ancient Greece — and the Confederacy” (Eidolon)

Lisa Bitel, “What a medieval love saga says about modern-day sexual harassment” (The Conversation)

 

Disha:

Maximillian Alvarez, “The Year History Died” (The Baffler)

D.J. Fraser, “I Swear to Be Your Citizen Artist” (Canadian Art)

Jack Halberstam, “Towards a Trans* Feminism” (Boston Review)

Margarita Rosa, “Du’as of the Enslaved: The Malê Slave Rebellion in Bahía, Brazil” (Yaqeen Institute)

 

Eric:

Shuja Haider, “Postmodernism Did Not Take Place” (Viewpoint).

Daniel Rodgers, Julia Ott, Mike Konczal, NDB Connolly, Timothy Shenk, Forum on Rodgers and ‘Neoliberalism.’ (Dissent).

Colette Shade, “How to Build a Segregated City” (Splinter).

David Shaftel, “All Good Magazines Go to Heaven” (NYTStyle).

 

 

The Historical Origins of Human Rights: A Conversation with Samuel Moyn

By guest contributor Pranav Kumar Jain


Professor Samuel Moyn (Yale University)

Since the publication of The Last Utopia: Human Rights in History, Professor Samuel Moyn has emerged as one of the most prominent voices in the field of human rights studies and modern intellectual history. I recently had a chance to interview him about his early career and his views on human rights and recent developments in the field of history.

Moyn was educated at Washington University in St. Louis, where he studied history and French literature. In St. Louis, he fell under the influence of Gerald Izenberg, who nurtured his interest in modern French intellectual history. After college, he proceeded to Berkeley to pursue his doctorate under the supervision of Martin Jay. However, unexcited at the prospect of becoming a professional historian, he left graduate school after taking his orals and enrolled at Harvard Law School. After a year in law school, he decided that he did want to finish his Ph.D. after all. He switched the subject of his dissertation to a topic that could be done on the basis of materials available in American libraries. Drawing upon an earlier seminar paper, he decided to write about the interwar moral philosophy of Emmanuel Levinas. After graduating from Berkeley and Harvard in 2000-01, he joined Columbia University as an assistant professor in history.

Though he had never written about human rights before, he had become interested in the subject in law school and during his work in the White House at the time of the Kosovo bombings. At Columbia, he decided to pursue his interest in human rights further and began to teach a course called “Historical Origins of Human Rights.” The conversations in this class were complemented by those with two newly arrived faculty members, Mark Mazower and Susan Pedersen, both of whom were then working on the international history of the twentieth century. In 2008, Moyn decided that it was finally time to write about human rights.


Samuel Moyn, The Last Utopia: Human Rights in History (Cambridge: Harvard University Press, 2012)

In The Last Utopia, Moyn’s aim was to contest the theories about the long-term origins of human rights. His key argument was that it was only in the 1970s that the concept of human rights crystallized as a global language of justice. In arguing thus, he sharply distinguished himself from the historian Lynn Hunt who had suggested that the concept of human rights stretched all the way back to the French Revolution. Before Hunt published her book on human rights, Moyn told me, his class had shared some of her emphasis. Both scholars, for example, were influenced by Thomas Laqueur’s account of the origins of humanitarianism, which focused on the upsurge of sympathy in the eighteenth century. Laqueur’s argument, however, had not even mentioned human rights. Hunt’s genius (or mistake?), Moyn believes, was to make that connection.

Moyn, however, is not the only historian to see the 1970s as a turning point. In his Age of Fracture (2011), intellectual historian Daniel Rodgers has made a similar argument about how the American postwar consensus came under increasing pressure and finally shattered in the 70s. But there are some important differences. As Moyn explained to me, Rodgers’s argument is more about the disappearance of alternatives, whereas his is more concerned with how human rights survived that difficult moment. Furthermore, Rodgers’s focus on the American case makes his argument unique because, in comparison with transatlantic cases, the American tradition does not have a socialist starting point. Both Moyn and Rodgers, however, have been criticized for failing to take neoliberalism into account. Moyn says that he has tried to address this in his forthcoming book Not Enough: Human Rights in an Unequal World.

Some have come to see Moyn’s book as mostly about President Jimmy Carter’s contributions to the human rights revolution. Moyn himself, however, thinks that the book is ultimately about the French Revolution and its abandonment in modern history for an individualistic ethics of rights, including the Levinasian ethics which he once studied. In Moyn’s view, human rights are a part of this “ethical turn.” While he was working on the book, Moyn’s own thinking underwent a significant revolution. He began to explore the place of decolonization in the story he was trying to tell. Decolonization was not something he had thought about very much before but, as arguably one of the biggest events of the twentieth century, it seemed indispensable to the human rights revolution. In the book, he ended up making the very controversial argument that human rights largely emerged as the response of westerners to decolonization. Since they had now lost the interventionist tool of empire, human rights became a new universalism that would allow them to think about, care about, and perhaps intervene in places they had once ruled directly.

Though widely acclaimed, Moyn’s thesis has been challenged on a number of fronts. For one thing, Moyn himself believes that the argument of the book is problematic because it globalizes a story that is mostly about French intellectuals in the 1970s. Then there are critics such as Stefan-Ludwig Hoffmann, a German historian at UC Berkeley, who have suggested, in Moyn’s words, that “Sam was right in dismissing all prior history. He just didn’t dismiss the 70s and 80s.” Moyn says that he finds Hoffmann’s arguments compelling and that, if we think of human rights primarily as a political program, the 90s do deserve the lion’s share of attention. After all, Moyn’s own interest in the politics of human rights emerged during the 90s.


Eleanor Roosevelt with a Spanish-language copy of the Universal Declaration of Human Rights

Perhaps one of Moyn’s most controversial arguments is that the field of the history of human rights no longer has anything new to say. Most of the questions about the emergence of the human rights movements and the role of international institutions have already been answered. Given the major debate provoked by his own work, I am skeptical that this is indeed the case. Plus, there are a number of areas which need further research. For instance, we need to better understand the connections between signature events such as the adoption of the Universal Declaration of Human Rights, and the story that Moyn tells about the 1970s. But I think Moyn made a compelling point when he suggested to me that we cannot continue to constantly look for the origins of human rights. In doing so, we often run the risk of anachronism and misinterpretation. For instance, some scholars have tried to tie human rights back to early modern natural law. However, as Moyn put it, “what’s lost when you interpret early modern natural law as fundamentally a rights project is that it was actually a duties project.”

Moyn is ambivalent about recent developments in the study and practice of history in general. He thinks that the rise of global and transnational history is a welcome development because, ultimately, there is no reason for methodological nationalism to prevail. However, in his view, this has had a somewhat adverse effect on graduate training. When he went to grad school, he took courses that focused on national historiographical canons and many of the readings were in the original language. With the rise of global history, it is not clear that such courses can be taught anymore. For instance, no teacher could demand that all the students know the same languages. Consequently, Moyn says, “most of what historians were doing for most of modern history is being lost.” This is certainly an interesting point, and it raises the question of how graduate programs can train their students to strike a balance between the wide perspectives of global history and the deep immersion of a more national approach.

Otherwise, however, in contrast with many of his fellow scholars, Moyn is surprisingly upbeat about the current state and future of the historical profession. He thinks that we are living in a golden age of historiography with many impressive historians producing outstanding works. There is certainly more scope for history to be more relevant to the public. But historians engaging with the public shouldn’t do so in crass ways, such as suggesting that there is a definitive relevance of history to public policy. History does not have to change radically. It can simply continue to build upon its existing strengths.


Professor Lynn Hunt (UCLA)

In the face of Lynn Hunt’s recent judgment that the field of “history is in crisis and not just one of university budgets,” this is a somewhat puzzling conclusion. However, it is one that I happen to agree with. Those who suggest that historians should engage with policy makers certainly have a point. However, instead of emphasizing the uniqueness of history, their arguments devolve into claims about what historians can do better than economists and political scientists. In the process, they often lose sight of the fact that, more than anything, historians are storytellers. History rightly belongs in the humanities rather than the social sciences. It is only in telling stories that inspire and excite the public’s imagination that historians can regain the respect that many think they have lost in the public eye.

Pranav Kumar Jain is a doctoral student in early modern history at Yale University.

Firebrand Infrastructures: Insights from the Society for the History of Alchemy and Chemistry Postgraduate Workshop

By guest contributor Alison McManus

For less populated fields of history, a conference designed for intellectual exchange can occasionally double as an existence proof. The workshop for the Society for the History of Alchemy and Chemistry must have appeared to serve that double function when, during the concluding remarks, attendees addressed the question, “Why does the academy no longer advertise for historians of chemistry?” While I cannot dispute the relative lack of job searches that cater specifically to my chosen field, I will note that the impression of the field I gleaned from this month’s SHAC workshop was anything but one of obscurity. To the contrary, my impression was one of robust materiality, critical for historical studies of science and of biology in particular.

Perhaps reflecting Europe’s special relationship with alchemy, the Society for the History of Alchemy and Chemistry held its first seven Postgraduate Workshops on the eastern side of the Atlantic. The eighth annual workshop was held in the United States for the first time on December 1st and 2nd at the Chemical Heritage Foundation in Philadelphia. This year’s workshop was titled “(Al)Chemical Laboratories: Imagining and Creating Scientific Work-Spaces.” As a graduate student, I was fortunate to attend the second day of the workshop, which emphasized chemistry in the 20th century. Focusing on materials, practices, and infrastructure, the SHAC workshop demonstrated the utility of fine-grained technical attention in the history of chemistry. Anchored in physical detail, the history of chemistry came alive through an alchemical demonstration, and when paired with the history of 20th-century biology, it imbued grander narratives of development with much-needed empirical nuance.

In historical studies of science, the relationship between 20th-century chemistry and biology has taken a variety of forms, few of which have been favorable for the former discipline. In Lavoisier and the Chemistry of Life (1987), Frederic Lawrence Holmes famously attributed Lavoisier’s chemical system to the influence of biological theories of respiration. Given Lavoisier’s foundational role in modern chemistry, Holmes implicitly recognized biology as the progenitor of modern chemistry itself. At SHAC, keynote speaker Angela Creager (Princeton) advocated a reversal of this causality. Her address and upcoming Ambix paper, “A Chemical Reaction to the History of Biology,” began with a simple observation: historians of science write the history of 20th century biology in one of three ways, as the story of genetics, of evolution, or of the neo-Darwinian synthesis of the two. Creager characterized Ernst Mayr’s The Growth of Biological Thought (1982) as a founding example of the third genre, in which genetics offers a mechanism to reconcile Mendelian heredity with Darwinian natural selection.

Drawing from scholars such as Vasiliki Smocovitis and Joe Cain, Creager suggested that teleological narratives of synthesis marginalize biological fields less preoccupied with issues of heredity, including physiology, ecology, and endocrinology. Such an historiographical oversight may be political in origin; biological subdisciplines further afield from evolutionary theory simply lack comparable socio-political clout. Here chemistry offers a solution. By focusing on material practices and laboratory infrastructure, Creager illuminated the “cryptic centrality” of chemistry to 20th century biology, at once reversing Holmes’s causal account and expanding the list of relevant biological subdisciplines beyond genetics and evolutionary theory. In line with her earlier work on radioisotopes, Creager recounted the story of G. Evelyn Hutchinson’s 1940s limnological experiments, in which radioisotopes enabled the study of phosphorus cycling in pond ecosystems. The centrality of chemical infrastructure to Hutchinson’s experiments suggested that chemistry did not merely act as cousin or offspring of 20th-century biology but rather offered it new tools for making sense of life.

Appropriately, the final panel at SHAC featured two scholars working outside genetics and evolutionary biology. Gina Surita (Princeton) discussed Elwood V. Jensen’s discovery of the estrogen receptor, and CHF Fellow Lijing Jiang presented her research on Socialist China’s race to synthesize insulin during the Great Leap Forward. Juxtaposed with Creager’s keynote address, Jiang’s research lent the impression that the story of neo-Darwinian synthesis may resonate rather little with Chinese histories of 20th century biology. Due to the influence of Lysenkoism in Socialist China, the Insulin Project coincided with a ban on genetic engineering. Thus a high-profile research campaign operated in the absence of one major element of the historiographical canon.


The final step of Jennifer Rampling’s and Lawrence Principe’s alchemical demonstration, in which the “gliding fire” corresponds to the gradual oxidation of lead. Image courtesy of Angela Creager.

The workshop concluded with a joint alchemical talk and presentation by Jennifer Rampling (Princeton) and Lawrence Principe (Johns Hopkins). Together with William Newman, Principe pioneered the genre of alchemical reenactment in the late 1980s and early 1990s. When applied to alchemical manuscripts, his chemical training has elucidated the central role of contaminants in the success of alchemical experiments, and in so doing, it has cast alchemy as an experimental rather than wholly imaginative field. Taking textual correspondence to reality as a given, Principe and Rampling sought to recreate Sam Norton’s 16th-century alchemical synthesis of the “vegetable stone,” a substance widely revered for its life-giving properties. Successful replication depended upon both historical and chemical expertise. Rampling recently demonstrated that an essential ingredient known to the alchemists as “sericon” in fact represented two possible ingredients, red lead and antimony, depending upon the age of the alchemical recipe. These components were identified by tracing the recipe’s historical origins. Likewise, Principe’s knowledge of silver refining suggested that copper was an essential contaminant that allowed the recipe to proceed as described.

Experiencing an alchemical reenactment was an exercise in humility. While I cannot attest to the reinvigorating properties of the “vegetable stone” (such claims must surely be relegated to the realm of alchemists’ imaginations), I was nonetheless struck by the correspondence between textual description and my own empirical observations. Sam Norton’s seemingly imaginative claim that “Fire will glide” through grey feces in the final step mapped quite reasonably onto the oxidation of lead, in which patches of bright orange and yellow lead (II, IV) oxide expanded slowly across grey powder. Furthermore, Principe was quick to emphasize a central problem in alchemical reenactments, namely the issue of accounting for failed replication. A gap between historical text and contemporary practice may reflect a misleading claim by the alchemist, but alternately, one may fault the modern experimenter’s chemical and historical competence. Nevertheless, relentless experimentation with material alchemy offers a means to close the gap.

 

At the conclusion of the workshop, I found myself attempting to reconcile a dissonance between the status of the discipline and the expository and corrective work underway within it. I now wonder to what extent that dissonance might itself be productive. During the SHAC workshop, the material history of chemistry operated both for its own sake and as a much-needed auxiliary to the history of biology. Surely, scholars working in the history of chemistry may yet expect to search for jobs defined primarily by period or region. Still, I might suggest that the lack of “historian of chemistry” jobs is far more pertinent to academics’ self-fashioning than the ranking of the field’s relevance. In providing infrastructure to 20th-century biology, the discipline of chemistry at once makes itself essential and leaves itself vulnerable to being overlooked. Restoring attention to these infrastructural elements enables the more modest field to issue a correction from below. In this sense, might humble fields be particularly insightful ones?

Alison McManus is a PhD student in History of Science at Princeton University, where she studies 20th century chemical sciences. She is particularly interested in the development and deployment of chemical weapons technologies.

What We’re Reading: Week of 15th January.


Girl Reading by Georgios Jakobides c. 1882.

Here are a few pieces that have caught the attention of our editorial team this week:

Derek

Brandon M. Terry, “MLK Now” (Boston Review)

Teresa Kroeger et al., “The state of graduate student employee unions” (Economic Policy Institute)

Lewis Lapham, “The Enchanted Loom” (Lapham’s Quarterly)

 

Spencer

“A Strategy for Ruination: An Interview with China Miéville” (Boston Review)

Gavin Francis, “The Untreatable” (LRB)

James A. Marcum, “The Revolutionary Ideas of Thomas Kuhn” (TLS)

George Prochnik, “The Hasidic Question” (LARB)

Zac Bronson, “Thinking Weirdly with China Miéville” (LARB)

 

Sarah:

Neil Davidson, “History from Below,” (Jacobin)

Colin Kidd, “You Know Who You Are,” (LRB)

Angela Naimou, “Preface,” (Humanity)

Neil Roberts, “Black Aesthetics and the Philosophy of Race: Paul C. Taylor,” (Black Perspectives)

 

Disha:

Billy-Ray Belcourt, “Settler Structures of Bad Feeling” (Canadian Art)

Max Read, “The Awl and The Hairpin’s Best Stories, Remembered By Their Writers” (New York Magazine) (including “Negroni Season”, “Text Messages from a Ghost” and “When Alan Met Ayn: Atlas Shrugged and Our Tanked Economy”)

Doreen St. Félix, “Trump’s Fixation on Haiti, and the Abiding Fear of Black Self-Determination” (The New Yorker)

 

Cynthia:

From where I sit, I can watch the lights on the FDR, curving ribbons of white and red, flowing slow as molasses. I can also watch the planes take off from LaGuardia, small blips of light tracing diagonals against the night sky. The FDR is, according to Wikipedia, a “9.44-mile (15.19 km) freeway-standard parkway on the east side of the New York City borough of Manhattan.” Besides cute 3-letter acronyms, the FDR and LGA have something else in common–Robert Moses. It was Moses who provided the original designs for the FDR (in 1934), and it was also Moses who determined that there would be no train or subway service to LGA airport.

In his obituary of Moses, Paul Goldberger wrote, “His guiding hand made New York, known as a city of mass transit, also the nation’s first city for the automobile age. […] The Moses vision of New York was less one of neighborhoods and brownstones than one of soaring towers, open parks, highways and beaches – not the sidewalks of New York but the American dream of the open road.” And that is why, if you’ve travelled to Manhattan via LGA, your experience of entering (or exiting) the city usually involves considerable time on the FDR.

Of course, we know where this story goes, and most of us are familiar with the Jane Jacobs critique. (And perhaps also with the critiques of the Jane Jacobs critique… Though, as Adam Gopnik noted, it is hard to criticize her.)

In the same essay, Gopnik observes: “London, Paris, New York, and Rome—whose political organizations and histories are radically unlike, and which live under regimes with decidedly different attitudes toward the state and toward enterprise—have followed an eerily similar arc during the past twenty-five years. After decades in which cities decline, the arrow turns around. The moneyed classes drive the middle classes from their neighborhoods, and then the middle classes, or their children, drive the working classes from theirs.”

New York was a different city in the midst of that decline. It created the conditions for the production of entirely new types of art–among them, the abandoned buildings that Gordon Matta-Clark carved into “urban equivalent[s] of Land Art.” Matta-Clark is now the subject of a retrospective at the Bronx Museum, a fact that underscores Gopnik’s point about the rising arc. In the 1970s, when New York was deep in its decline, Matta-Clark began thinking about using the city’s derelict buildings as a medium for his art. His first act of “anarchitecture” was to cut holes into the walls and floors of abandoned Bronx apartments. A short film, “Day’s End,” documents the artist creating a “sun and water temple” out of an abandoned pier near Gansevoort Street.

A line of descent runs from Matta-Clark to Rachel Whiteread, whose cast-concrete “House” (1993, destroyed 1994) was also a comment on the conditions of the city–though the city, in this case, was London, and the “conditions” in question were those that Gopnik might ascribe to the upward arc, the arrow turning around. Or we might call it gentrification. “House” was a cast of a specific London house, one located at 193 Grove Road, in a part of the East End called Wennington Green. A piece by Digby Warde-Aldam, published on the tenth anniversary of “House,” ruefully noted: “To say the early 1990s were a time of upheaval for the then-predominantly working class neighbourhoods of the East End doesn’t come close; the Conservative government had put an enormous amount of faith into constructing a new financial centre around Canary Wharf, a few miles south of Wennington Green. In the surrounding areas, ‘regeneration’ became a mantra. The terraces around the Green, heavily bombed during the Second World War, were among the first places marked for demolition: ‘Thatcher wanted to create a “green corridor” around Canary Wharf’, Rachel Whiteread says, ‘I had my studio nearby and used to cycle past. I was very conscious of the fact it was all about to change’.”

Whiteread, too, is getting a major retrospective. Her work can be undeniably beautiful, as the Tate’s installation of Untitled (One Hundred Spaces, 1995) demonstrates. One hundred cast resin volumes, each delineating the volume of space beneath a specific chair, carefully arrayed in an austere, neoclassical space. They look, to me, like so many pieces of pâte de fruit, laid out against a backdrop of clean, neutral stone. Eleanor Birne described their effect beautifully: “At the Tate, when the sun comes out over the long glass roof of the gallery, the coloured blocks glow like boiled sweets, or like jewels. When the sun goes in they grow duller, mute. I visited on a bright sunny-cloudy day and they lit up and grew dim over and over again.”

If Matta-Clark was born of the city’s decline, Whiteread is undoubtedly a product of the upward arc. No deconstruction here. In the new city, the city awash in wealth, we create cast relics, we ‘mummify the air.’ We relish the thingness of things, their taste, their touch. One could eat Whiteread’s Due Porte (2016), with their hard candy sheen, ingesting both their beauty and the history imprinted in that translucent blue resin. The city changes, yes, but we can hold onto history here. We can almost taste it in our mouths.

 

Life and Likeness at the Portland Museum of Art

By Editor Derek O’Leary, in conversation with curator Diana Greenwold

It can be easy to imagine the early American republic as rushing headlong into the future during its first half-century—westward with the suppression of Indian society, seaborne to new markets with the products of southern plantations and western farms, upward in the growth of manufacturing hubs and cities, and in all cases away from the colonial past.  Newspaperman and staple of any US history survey, John O’Sullivan celebrated in this “Great Nation of Futurity” “our disconnected position as regards any other nation; that we have, in reality, but little connection with the past history of any of them, and still less with all antiquity, its glories, or its crimes.”

This forward orientation was a common enough sentiment during these decades, yet one bound up in a much broader and Janus-faced preoccupation with the nation’s place in time. Biography burgeoned as a literary form (finely explored in Scott Casper’s Constructing American Lives, 1999); leading authors leveraged historical fiction to fashion a mythic colonial and revolutionary past; historical, antiquarian, and genealogical societies flourished as civic institutions. And in innumerable households, individuals and families marked their passage through time during years of seemingly unprecedented change.

The Portland Museum of Art’s exhibit “Model Citizens: Art and Identity from 1770-1830” (on view through January 28) provides fascinating insight into that latter world. It assembles a diverse array of household and commercial practices of marking pivotal stages of life in the early United States. Drawing on rich collections in Maine and New England art, it places in conversation a range of self-representation, organized around the life cycle: birth and childhood, marriage, adulthood, death and mourning. The exhibition recognizes its bounds within the white household, but in this space explores a far greater variety of lives and likenesses than we would typically see from this period.


Gilbert Stuart (United States, 1755-1828), Major General Henry Dearborn (1751-1829), 1812, oil on panel, Gift of Mary Gray Ray in memory of Mrs. Winthrop G. Ray, 1917.23

Diana Greenwold, PhD, who curated the exhibition, situated “stalwarts of the permanent collections”—such as the large and familiar oil portraits by Gilbert Stuart—alongside less elite likenesses produced in households and more accessible markets, such as samplers, shadow cutters, paintings by itinerant artists, and mourning embroidery (shown below). “For a long time,” she explains, “that type of folk portraiture was understood as being less sophisticated and telling of the moment,” a bias which the exhibition helps to revise. “By using different media,” she continues, “you open up the opportunity to show how different social classes can get at a similar goal. Not everyone can engage Gilbert Stuart, but cut paper can serve in a similar way for families to represent themselves, to both themselves and those around them.”

In depicting the shared life cycle of individuals of such disparate means, the exhibition thoughtfully examines the uses of these varied self-representations. Sewn samplers produced by middle- and upper-class girls in finishing schools served as stages to perform discipline, literacy, numeracy, and piety. But alongside sewn renditions of the alphabet, numbers, and biblical verses, girls might also inscribe their own name and age, or indeed, as in this peculiar rendition of a genealogy, a truncated, textual family tree.


Mary Ann McLellan (United States, 1803-1831), Stephen McLellan Genealogy Sampler, circa 1816, cotton on linen, museum purchase, 1981.1063

By the 1830s, genealogy would develop into a widespread household and academic practice, equipped with institutions, periodicals, and specialists who cultivated transatlantic networks. (François Weil’s Family Trees (2013) is the recent major work on this phenomenon in the US.) Often, it sought to link the living in an unbroken chain backward, at least to the first Anglo-American settlements, and ideally eastward to their English origins. Yet, in a decade when genealogy had yet to emerge as a widespread practice, Mary Ann McLellan’s genealogical sampler (above) is striking: it places her father atop a small familial hierarchy, above his first and second wife and their four children. Paternity is overt; maternity only deducible by examining dates of birth and death. In this riff on a genealogical tree, more important than connecting the present to the past is inscribing an inter-generational duty: overseen by an elder generation, undertaken through a younger, promised to a future. “Let us live so in youth that we blush not in age,” the poem insists. The admonishment is surely a basic feature of gendered household management, but one cannot help but hear echoes of the broader national anxiety about the character and prospect of the country during these years, when the trope of the cyclical rise, corruption, and fall of republics was most potent.

Expanding on this analogy, Greenwold explains that “these domestically-scaled ways of representing self or family could become proxies for larger questions of national identity.” Especially in the works of childhood (produced both of and by children), “for a person in the early US, memorializing their children as the first generation of native-born citizens could be an act of establishment, visualized in a permanent and lasting way around virtues that were stressed for a new republic: industry, piety, family, etc. This notion of a domestically-scaled object had bearing not just on how folks were understanding their own families, but within a larger-scale participation in a budding national family.”

Though many of these were household products intended for a domestic space, perusing the exhibition, one can also imagine the markets for likenesses springing up in these decades, before the more mechanized means of the daguerreotype and its successors. The shadow-cutters are the cheapest, visually starkest, and perhaps most arresting of the works on display. Greenwold notes that these profiles, cut into beige paper and pasted on a black background (and sometimes vice versa), were produced variously by itinerant artists, as a popular parlor game, or in such venues as Charles Willson Peale’s Museum by means of a physiognotrace. The exhibition explains the special appeal of the profile—which features the chin, nose, and forehead—in the field of physiognomy, which sought to discern character in the subject’s facial features. In these shadow cutters of the women of the Stone family, distinctive hair styles have been inked around the silhouettes. Historian Sarah Gold McBride, whose work examines the significance and uses of hair in the nineteenth century, argues that in addition to physiognomy, hair style, texture, and color conveyed clues to one’s character in this period. (See her dissertation “Whiskerology: Hair and the legible body in nineteenth-century America” (2017) for more on this.)


Unidentified artist (United States, 19th century), Cut paper silhouettes of the Stone Family, 1917.11-.18

These were often products of fleeting popular or commercial transactions. However, in addition to revelations of character, as small and easily transferable objects these likenesses, and more specifically portrait miniatures painted in watercolor on ivory, could be more intimate than the finely painted portraits of Stuart or John Singleton Copley. Greenwold elaborates that “they are meant to be very portable and physically held, near the heart or the body. That sort of physical embodiment of a loved one does something categorically different than something that hangs in a parlor, such as an oil on canvas portrait, that would be both for family and the larger group of visitors who would be in your home.”

If girls mainly executed the works of childhood in this collection, and mostly men those of adulthood, women undertook the tasks of mourning represented here. Though there are cases of men making some of the outlines of such mourning embroidery, Greenwold notes that “in this nineteenth-century moment, women were becoming the vessels through which a family performs its mourning—the public face through which a family expresses grief, for instance.” Bedecked in Greco-Roman iconography, balanced around a central urn inscribed with the dates of the departed, these “classically-draped figures intertwine with a lexicon of Christianity, forming a hybrid language of pagan and Christian.” It is a common aesthetic aspired to by the middle to affluent classes in this period, but one which suggests again how the marking and performance of the life cycle in the early US was enmeshed with the larger concerns of the place of the American citizen and republic in history.

Memorial to Mrs. Lydia Emery

Susan Merrill (United States, 1791-1868), Memorial to Mrs. Lydia Emery, 1811, watercolor and needlework on silk, Gift of Helen Harrington Boyd in memory of Susan Merrill Adams Boyd, 1968.4

An “Extreme Turn”? Some Thoughts on Material Culture, Exploration, and Interdisciplinary Directions

By Contributing Writer Sarah Pickman

In 1848 Peter Halkett, a lieutenant in the British Royal Navy, published his designs for a most curious invention. Halkett was interested in the numerous exploratory expeditions the Navy had sent to the Canadian Arctic during the previous few decades. In particular, he’d learned that British explorers wanted small, lightweight boats that could be carried overland when not in use. Halkett’s proposed solution, illustrated in a series of published engravings, was the “Boat-Cloak or Cloak-Boat,” an inflatable craft made of waterproof rubberized cloth – with a stylish windowpane check pattern, it might be added. Deflated, the boat could be worn as an outer cloak. When confronted with a body of water, the wearer could simply take off the cloak and inflate it. While Halkett’s craft was designed for polar explorers and not urban dandies, the figure in his illustrations wearing the deflated boat cuts a dashing silhouette for an 1840s London gentleman. Since Halkett assumed his wearers would be carrying walking sticks and umbrellas, he proposed that these fashionable accessories be used as paddle shafts and sails, respectively.

 

Images from Boat-Cloak or Cloak-Boat, Constructed of MacIntosh India-rubber Cloth, Umbrella-sail, Bellows, &c. Also, an Inflated Indiarubber Cloth-boat for Two Paddlers. Invented by Lieutenant Peter Halkett, R.N., 1848. Image reproductions from National Maritime Museum, Greenwich.

While the Navy never adopted Halkett’s design for general use, the “Boat-Cloak” was an early example of a solution to the challenges posed by Western exploratory voyages in extreme environments that also had an eye towards style. This melding of utilitarian expedition gear and high design is the subject of the exhibition Expedition: Fashion from the Extreme, now on view at the Museum at the Fashion Institute of Technology (MFIT) in New York. The exhibition, curated by MFIT’s deputy director Patricia Mears, is the first major study to address the work of high fashion designers inspired by Western exploration, particularly by expeditions of the last two centuries. It’s organized around five types of “extreme environments” that have been the subject of exploratory interest: polar, deep sea, outer space, mountains, and savannah/grasslands. Within each of the five environments, the exhibition draws on MFIT’s rich holdings and several unique loans, such as an Inuit-made fur ensemble worn by Matthew Henson, to juxtapose the work of twentieth- and twenty-first-century fashion designers with the expedition garments that inspired them. For example, in the mountaineering section visitors can view original Eddie Bauer down-filled jackets and pants, made for high-altitude mountaineering in the 1930s, alongside iconic high fashion “puffer coats” by Charles James (1937), Norma Kamali (1978’s famous “sleeping bag coat”), and Joseph Altuzarra (2011) that were inspired by utilitarian down-filled outerwear. The interplay between designer, utilitarian, and, in the polar section, indigenous-made garments not only blurs the lines between categories like “fashionable” and “functional,” but also asks visitors to consider the creative ways humans respond to extreme environments, or to their perceptions of such environments, and their impact on them.

 


Norma Kamali “sleeping bag” coat, c. 1977, Museum at the Fashion Institute of Technology. Gift of Linda Tain.

I was fortunate to be able to contribute an essay to the exhibition catalogue on the role of dress in polar exploration at the turn of the twentieth century. As someone interested in the material culture of exploration, especially clothing, it was gratifying to see Expedition: Fashion from the Extreme come to fruition, since both exhibition and catalogue (and also a symposium on the topic of “Fashion, Science, and Exploration,” organized in conjunction with the exhibition) are contributions from fashion scholarship to the growing body of work in the humanities on humans and the “extreme environment.”

In the last two decades, there has been a noticeable increase in the number of academic books on aspects of exploratory history, particularly European and Euro-American exploration from the eighteenth to the twentieth centuries. Ten years ago in an essay for History Compass, Dane Kennedy identified two main strands of inquiry in this area: one encompassing “the institutional, social, and intellectual forces…that inspired the exploration of other lands and oversaw its operations,” and the other addressing the “cultural encounter between explorers and indigenous peoples.” Kennedy himself has been a standard-bearer for such work, along with Michael F. Robinson, Felix Driver, Michael Bravo, Beau Riffenburgh, Helen Rozwadowski, Lisa Bloom, Johannes Fabian, and D. Graham Burnett, to name just a few. To the areas Kennedy identified we can now add studies of textual and visual media produced by expeditions; historiography of explorers; material tools of exploration; work on race and gender in exploration; and broad global surveys of exploration. This is to say nothing of the rich bodies of writing across the humanities with ties to exploration: work from history of science on scientific fieldwork and the role of local informants or go-betweens; studies of representations of landscape from art historians; work across disciplines on genealogies of natural history collecting and scientific museums. And the list goes on.

Along with the “cultural encounter between explorers and indigenous peoples” Kennedy described, “exploration” as a category provides a space for thinking through different human encounters with, and approaches to, environments. In this space, we might dovetail the growing body of work on exploration with new scholarship from the history of science on the history of physiology in extreme environments. In this category we can include recent and forthcoming work from Rebecca M. Herzig, Sarah W. Tracy, Philip Clements, Matthew Wiseman, Matthew Farish, David P. D. Munns, and Vanessa Heggie, whose article “Why Isn’t Exploration a Science?” is a succinct entry point for thinking about knowledge produced in the context of exploratory expeditions. These studies (by no means an exhaustive list) of European and Euro-American actors examining bodies in extreme environments – largely in the polar regions, on mountains, and in outer space – might be seen in conversation with scholarship on histories of tropical medicine, albeit in different geographic contexts.

Yet an examination of science in extreme environments specifically also provides a bridge between the “heroic” exploratory voyages of the long nineteenth century and the development of modern field-based sciences. It also allows us to think through how we, as humanities scholars, use the categories of “extreme” and “normal.” In other disciplines these terms are fairly well defined. In biology, for example, “extreme environment” is a category that has been in widespread use since the 1950s. Textbooks note that it describes places hostile to all forms of organic life save for some very highly adapted microorganisms. These places range from the rocky deserts of Antarctica, to extraordinarily alkaline lakes, to the bottom of the Mariana Trench, the deepest place in the world’s oceans. In 1974, R. D. MacElroy introduced the term “extremophile” in an article in the journal Biosystems as a grouping for these lifeforms.

But the history of the category of “extreme environment” as it pertains to human life has less to do with common inherent features of those environments and more to do with the kinds of historical actors interested in them. As Vanessa Heggie discusses in her forthcoming book Higher and Colder: A History of Extreme Physiology, by the first half of the twentieth century some field physiologists were beginning to group the Arctic, Antarctica, and high-altitude mountain ranges together. They often shared research and advice for traveling through these areas. Occasionally seasoned mountaineers took part in polar expeditions, and vice versa. But as Heggie notes, “There is an artificiality to these connections, since they are inventions of the human mind rather than necessarily reflecting an objective ‘natural’ relationship between very different geographical regions…So what really connects these environments is human beings – their motivations and specific interests.” “Specific interests,” in this case, referred to the performance of what Heggie calls “temperate-climate bodies” in these places. When indigenous populations existed in these places, nineteenth-century European and Euro-American explorers usually ignored them or employed them as guides but later downplayed their contributions to expeditions. Over the course of the twentieth century, some American and British physiologists were increasingly interested in isolating what they assumed must be innate biological features that allowed the indigenous inhabitants of these regions to thrive, but often with the goal of using this information to select soldiers for mountain or polar combat. (The U.S. military’s Arctic, Desert, and Tropic Information Center, established in 1942, was an example of grouping disparate environments together based on their challenges to conventional Western warfare).

While “extreme environment” may be a twentieth-century actors’ category, we can find earlier antecedents for grouping environments together in this way. By the late nineteenth century, there were numerous organizations in Europe and the United States that supported exploration, such as the Royal Geographical Society in Britain and the Explorers Club in the U.S., and their ranks were filled with members interested in a wide range of geographic settings, from the rainforests of Central Africa to the icy Arctic Ocean. Though they may not have used the term “extreme,” the members of these clubs arguably created a social space in which these disparate places could be talked about in the same breath. These were locations that tantalized Western explorers as “prizes” to be claimed via expeditions, while at the same time (or precisely because) their environmental conditions resisted agriculture-based settler colonialism. Arguably, one can find the roots of the “extreme environment” even earlier in the “sublime environment” of the late eighteenth and early nineteenth centuries, which overwhelmed the viewer and provoked awe and terror at nature’s grandeur – but which was also predicated on Western ways of seeing and understanding.

In short, the historical study of the human body in the extreme environment – taking in exploration, field science, lab-based physiology, recreation, anthropology, travel, and other related areas – is a fruitful space for scholars, one with the potential for productive interdisciplinary work across the humanities and a way to reach beyond them to the sciences. It poses questions for historical research: How did this “extreme” grouping work for historical actors, and how did they conceptualize the “normal” body in opposition to one transformed by harsh environments? How do extreme field science’s roots in heroic exploration inform the work of current scientists, such as those published in journals like Journal of Human Performance in Extreme Environments and Extreme Physiology and Medicine? Of all the ways of pursuing knowledge, why did certain actors choose paths not in spite of their high risk of bodily harm or death, but because of it, pursuing what Michael F. Robinson has described as research where “Danger is not the cost of admission, but the feature attraction”? Most fundamentally, who sets the terms for which environments are considered extreme, particularly in places with indigenous populations? What’s at stake when one’s home region is the extreme to someone else’s normal, when human populations are considered to be biological extremophiles? It is important that we fully historicize our definitions of “normal” and “extreme” in the contexts of the body and the environment, especially at a time when anthropogenic climate change, biohacking, post-humanism, and commercial space travel – not to mention terrestrial “adventure tourism” – have the potential to shift them. The body of recent historical research cited here provides a way to tackle these questions. Does this research constitute the cusp of an “extreme turn”? Possibly. But even if it is too soon to call it a “turn,” it is already a rich pool for study, and, with work currently being undertaken by emerging scholars, a pool that is not likely to dry up soon.

I’d like to suggest that museums have a critical role to play in this ongoing conversation about the extreme, as spaces to engage not just with texts, but also with objects, which represent the tangible ways humans mediate bodily experience of environments. It’s notable that organizations like the Royal Geographical Society or the Arctic, Desert, and Tropic Information Center often served as clearinghouses for information about appropriate gear for explorers and soldiers headed to particular places. As Dehlia Hannah and Cynthia Selin have written, climate “must be understood as a lived abstraction,” and clothing especially “is a sensitive indicator and rich site for the critical exposition of our increasingly turbulent seasons.” Put another way, what we put on, in, and around our bodies reflects how we conceptualize our normal environment, and in contrast to it, the extreme environment. For example, let’s return to Halkett’s boat-cloak. It is an object that, at first glance, appears comically unusual. But the device was Halkett’s attempt to solve a problem posed by an unfamiliar environment – how to traverse both land and water, without carrying extraneous, heavy gear – while also appealing to the Victorian British sense of the comfortable and the familiar, by reconfiguring the expedition boat as an extension of the ubiquitous gentleman’s cloak. The polar environment might require the explorer to do something extraordinary, outside of his comfort zone. But rather than turning to, say, indigenous Arctic technologies, Halkett’s invention reassured users that recognizable British items could solve any problem with enough foresight and some creative reconfiguration. The boat-cloak demonstrates the power of the extreme, as a frame, to make sense of unusual things, and to reveal which boundaries, both physical and cultural, historical actors were and weren’t willing to cross.

Objects can provide entry points into how historical actors understood these categories, and since the study of material culture has always been interdisciplinary, it also allows a way of thinking about extremes that is interdisciplinary as well. “Fashion’s greatest designers have…continued to pursue the outer limits of their own creativity as they seek inspiration from the extreme,” Patricia Mears writes in the catalogue for Expedition. Likewise, historians, historians of science, and other humanist scholars can find in the idea of the “extreme” a space to push the boundaries of their own research in exciting and productive ways.

Expedition: Fashion from the Extreme is on view at the Museum at the Fashion Institute of Technology in New York until January 6, 2018. The accompanying catalogue, which contains the author’s essay “Dress, Image, and Cultural Encounter in the Heroic Age of Polar Exploration,” is available from Thames & Hudson.

The author would like to thank Michael F. Robinson, for his thoughtful comments on an early draft of this post, and Vanessa Heggie, for sharing a draft of her forthcoming book Higher and Colder: A History of Extreme Physiology.

Sarah Pickman is a Ph.D. student in History of Science and Medicine at Yale University. Her research centers on American and British exploration, anthropology, and natural history museums in the long nineteenth century, with a focus on the material culture of expeditions, particularly in the exploration of the Arctic and Antarctica. She holds a B.A. in Anthropology from the University of Chicago and an M.A. in Decorative Arts, Design History, and Material Culture from the Bard Graduate Center of Bard College.

What We’re Reading: Week of January 8

 


Reading the Scriptures by Thomas Waterman Wood, 1874.

Here are a few of the pieces the team at the JHI blog have been reading over the last week:

Derek

“The World in Time”: Interview with Eric Foner (Lapham’s Quarterly)

Anton Jaeger, “The Myth of Populism” (Jacobin)

Richard Holmes, “Out of Control” (NYRB)

Irvine Loudon, “A brief history of homeopathy” (JRSM)

 

Eric

Branko Milanovic, “What these early-20th-century scholars got right” (Vox)

Eleanor Robertson, “Intersectional Identity and the Path to Progress” (Meanjin)

Nathan Robinson, “Orders from above” (Current Affairs)

Michelle Wolff, “Symposium: Ahmed, Living a Feminist Life” (syndicate)

 

Disha

Zoë Carpenter, “If We Lose Our Healthcare, We Will Begin to Die” (The Nation)

Moira Donegan, “My Name Is Moira Donegan” (The Cut)

Joan W. Scott, “How The Right Weaponized Free Speech” (The Chronicle of Higher Education)

Anthony Oliveira, “The Year in Apocalypses” (Hazlitt)

 

Brendan

Melinda Cooper, “Family Values: Between Neoliberalism and the New Social Conservatism”

Polygon’s Year In Review, especially Charles Yu on Universal Paperclips (play the game, too)

My friend Ilan Moscovitz’s series on AI: (parts one, two, three and four) (The Motley Fool)

 

Spencer

Ayelet Wenger, “Hokhmat Nashim” (The Lehrhaus)

Andrew Butterfield, “Divine Lust” (NYRB)

Marcel Theroux, “The post-truth Gospel” (TLS)

Brook Wilensky-Lanford, “Jonas Bendikson: Among the Messiahs” (Guernica)

Michael Valinsky, “To and from the Linguistic Shore of Ismail Kadare’s ‘A Girl in Exile’” (LARB)

 

Sarah

Bryan A. Banks and Erica Johnson, “Religion and the French Revolution: A Global Perspective,” (Age of Revolutions)

Julie Green, “Movable Empire,” (Jacobin)

Jennifer Anne Hart, “The Crown Goes to Ghana? Media Representation, Global Politics and African Histories,” (Ghana on the Go)

Cynthia

If you are anything like me, you are probably still writing 2017 when you actually mean 2018. I spent New Year’s Eve and New Year’s Day moving, so I had precious little time to make New Year’s resolutions. Still, that didn’t stop me from considering how I might improve myself over the course of these next 52 weeks. Or are there now only 50 left? I guess that gives me two fewer weeks to improve myself to death. Alexandra Schwartz’s piece will have you alternately making — and unmaking — resolutions. Maybe just take her suggestion and go read a novel. In any case, I should probably refrain from buying either self-improvement manuals or novels for quite some time, as the act of unpacking my library has left me exhausted. If only I could find my copy of Illuminations, I could read Walter Benjamin’s “Unpacking My Library.”

Note that Benjamin never wrote an essay on unpacking his wardrobe. But clothes, like books, are repositories for memories. Fashion’s ability to conjure worlds out of memories is used to great effect in a trio of recent movies: Lady Bird, I, Tonya, and Call Me By Your Name. Lady Bird’s director Greta Gerwig told Sam Levy, the film’s director of photography, that she wanted the movie to “look like a memory.” In an interview with Vanity Fair, Levy, Lady Bird’s costume designer April Napier, and production designer Chris Jones discuss how they achieved this aesthetic. (An unexpected source of inspiration: Lise Sarfati’s portraits of young women.) Giulia Persanti took a very different approach to the costume design for Call Me By Your Name. Unlike the pointed specificity of Lady Bird’s costumes, Persanti’s costumes were only loosely anchored in the film’s time period (1980s Italy). In an interview with British Vogue, Persanti said, “My main focus was to make a period film in which the costumes didn’t stand out as too ‘period-y’. More important was to send a clear message of the personality and origins of each character, choosing to give a casual, timeless, intimate style with a hint of inhibited adolescent sexuality.” I, Tonya has a very different relationship with clothing and memory. Jennifer Jones, the costume designer for I, Tonya, wanted to avoid saddling her characters with excessive nostalgia or kitsch. Jones discusses her approach in interviews with Entertainment Weekly and Deadline. Jones also wanted to avoid creating a caricature of Tonya. Here, the clothes help open up the audience’s understanding of each character’s psychological development. We knew Tonya, and the people around her, primarily as tabloid and TV fodder. Jones hoped her costumes would restore some measure of their humanity, the humanity denied them when they were mocked, reviled, and played for laughs.