Think pieces

John Parkinson and the Rise of Botany in the 17th Century

By Guest Contributor Molly Nebiolo


John Parkinson, depicted in his monumental Theatrum botanicum (1640).

The roots of contemporary botany have been traced back to the botanical systems laid out by Linnaeus in the eighteenth century. Yet going back further in time reveals the key figures whose ideas and publications first brought horticulture forward as a science. John Parkinson (1567–1650) is one of the foremost in that community of scientists. Although “scientist” was a word coined in the nineteenth century, I will be using it because it embodies the systematic acts of observation and experimentation to understand how nature works that I take Parkinson to be exploring. While “natural philosophy” was the term more commonly in use at the time, the simple word “science” will be used for brevity and to stress the links between Parkinson’s efforts and contemporary fields. Parkinson’s works on plants and gardening in England remained integral to botany, herbalism, and medicinal healing for decades after his death, and he was one of the first significant botanists to introduce exotic flowers into England in the 17th century to study their healing properties. He was a true innovator for the field of botany, yet his work has not been heavily analyzed in the literature on the early modern history of science. The purpose of this post is to underline some of the achievements that can be attributed to Parkinson, and to examine his first major text, Paradisi in sole paradisus terrestris, a groundbreaking botanical work of the mid-1600s.

Parkinson was apprenticed to an apothecary from the age of fourteen, and rose quickly through the ranks of society to become royal apothecary to James I. His success brought many opportunities to collect plants outside of England, including trips to the Iberian Peninsula and northern Africa in the first decade of the seventeenth century. At the turn of the seventeenth century, collectors would commonly accompany trading expeditions to gather botanical specimens and determine whether they could prosper in the English climate. As the first to grow the great Spanish daffodil in England, and as the cultivator of over four hundred plants in his own garden by the end of his life, Parkinson was looked up to as a pioneer in the nascent field of botanical science. He assisted fellow botanists in their own work, was a founder of the Worshipful Society of Apothecaries, and wrote two major texts as well.

His first book, Paradisi in sole paradisus terrestris (Park-in-Sun’s Terrestrial Paradise), reveals a humorous side to Parkinson, as the title puns on his surname: “Park-in-Sun.” This text, published in 1628, along with his second, more famous work published in 1640, Theatrum botanicum (The Theater of Plants), was immensely influential on the horticultural and botanical corpora that were emerging during the first half of the 17th century. Just in the titles of both, we can see how much reverence Parkinson had for the intersection of the fields he worked in: horticulture, botany, and medicine. By titling his second book The Theater of Plants, he creates a vivid picture of how he perceived gardens. Referencing the commonly used metaphor of the theater of the world, Parkinson casts plants as the actors in the garden’s theatrum. It is also in Theatrum botanicum that Parkinson details the medicinal uses of the hundreds of plants that make up simple (medicinal) gardens in England. While both texts are rich for analysis, I want to turn attention specifically to Paradisus terrestris because I think it is a strong example of how botany and gardening were evolving into a new form of science in Europe during the seventeenth century.


Title page woodcut image for Paradisus Terrestris. Image courtesy of the College of Physicians Medical Library, Philadelphia, PA.

The folio pages of Paradisus terrestris are as large and forbidding as those of any early modern edition of the Bible. Chock full of thousands of detailed notes on the origins, appearance, and medical and social uses of plants for pleasure gardens, kitchen gardens, and orchards, the book makes one wonder how long it took Parkinson to collect this information. Paradisus terrestris was one of the first real attempts by a botanist to organize plants into what we would now term genera and species. This encyclopedia of meticulously detailed, illustrated, and grouped plants was a new way of displaying horticultural and botanical information when it was first published. While it was not the first groundbreaking example of the science behind gardens and plants in western society, Luca Ghini potentially being the first, Parkinson’s reputation and network within his circle of botanist friends and the Worshipful Society of Apothecaries bridged the separation between the two fields. Over the course of the century, the medicinal properties of plants were coherently circulated in comprehensive texts like Parkinson’s as the Scientific Revolution and the colonization of the New World steadily increased access to new specimens and the tools to study them.


Paradisus terrestris includes many woodcut images of the flowers Parkinson writes about to help the reader better study and identify them. Image courtesy of the Linda Hall Library, Kansas City, MO.

Another thing to note in Paradisus terrestris is the way Parkinson writes about plants in the introduction. While most of the book is a how-to narrative on growing a pleasure garden, kitchen garden, or orchard, the preface to the volume reveals much about Parkinson as a botanist. Gardens to Parkinson are integral to life; they are necessary “for Meat or Medicine, for Use or for Delight” (2). The symbiotic relationship between humans and plants recurs in his discussion of how gardens should be situated in relation to the house, and of how minute details in the way a person interacts with a garden space can affect the plants. “The fairer and larger your allies [sic] and walks be the more grace your Garden shall have, the lesse [sic] harm the herbs and flowers shall receive…and the better shall your Weeders cleanse both the beds and the allies” (4). The preface divulges the level of respect and adoration Parkinson has towards plants. It illustrates his deep enthusiasm and curiosity towards the field, two traits of the botanist that were equally characteristic of natural philosophers and collectors of the time.

John Parkinson was one of the first figures in England to merge the formalized study of plants with horticulture and medicine. Although herbs and plants had been used as medicines for thousands of years, it was in the first half of the seventeenth century that the medicinal uses of plants became a scientific attribute of a plant, as they were categorized and defined in texts like Paradisi in sole paradisus terrestris and Theatrum botanicum. Parkinson is a strong example of the way a collector’s mind worked in the early modern period, from the way he titled his texts to the adoration that can be felt in the introduction of Paradisus terrestris. From explorer to collector, horticulturist, botanist, and apothecary, the many hats Parkinson wore throughout his professional career, and the way he wove them together, exemplify the lives many of these early scientists lived as they brought about the rise of these new sciences.

Molly Nebiolo is a PhD student in History at Northeastern University. Her research covers early modern science and medicine in North America and the Atlantic world and she is completing a Certificate in Digital Humanities. She also writes posts for the Medical Health and Humanities blog at Columbia University.

Trinquier’s Dichotomy: Adding Ideology to Counterinsurgency

By guest contributor Robert Koch

After two world wars, the financial and ideological underpinnings of European colonial domination in the world were bankrupt. Yet European governments responded to aspirations for national self-determination with undefined promises of eventual decolonization. Guerrilla insurgencies backed by clandestine organizations were one result. By 1954, new nation-states in China, North Korea, and North Vietnam had adopted socialist development models, perturbing the Cold War’s balance of power. Western leaders turned to counterinsurgency (COIN) to confront national liberation movements. In doing so, they recast the motives that had driven colonization as a defense of their domination over faraway nations.

COIN is a type of military campaign designed to maintain social control, or “the unconditional support of the people,” while destroying clandestine organizations that use local populations as camouflage, thus sustaining political power (Trinquier, Modern Warfare, 8). It is characterized by a different mission set than conventional warfare. Operations typically occur amidst civilian populations. Simply carpet-bombing cities (or even rural areas, as seen in the Vietnam War) over an extended period of time results in heavy collateral damage that strips governments of popular support and, eventually, political power. The more covert, surgical nature of COIN means that careful justifying rhetoric can still be called upon to mitigate the ensuing political damage.

Vietnam was central to the saga of decolonization. The Viet Minh, communist cadres leading peasant guerrillas, won popular support to defeat France in the First Indochina War (1945–1954) and the United States in the Second (1955–1975), consolidating their nation-state. French leaders, already sour from defeat in World War II, took their loss in Indochina poorly. Some among them saw it as the onset of a global struggle against communism (Paret, French Revolutionary Warfare, 25-29; Horne, Savage War for Peace, 168; Evans, Algeria: France’s Undeclared War, Part 2, 38-39). Despite Vietnam’s centrality, it was in “France,” that is, colonial French Algeria, that ideological significance was given to the tactical procedures of COIN. French Colonel Roger Trinquier added this component while fighting for the French “forces of order” in the Algerian War (1954-1962) (Trinquier, Modern Warfare, 19). Trinquier’s ideological contribution linked the West’s “civilizing mission” with enduring imperialism.

In his 1961 thesis on COIN, Modern Warfare, Trinquier offered moral justification for harsh military applications of strict social control, a job typically reserved for police, and therefore for the violence that followed. The associated use of propaganda, characterized by a dichotomizing rhetoric meant to mitigate political fallout, proved a useful addition to the counterinsurgent’s repertoire. This book, essentially providing a modern imperialist justification for military violence, was translated into English, Spanish, and Portuguese, and remains popular among Western militaries.

Trinquier’s experiences before Algeria influenced his theorizing. In 1934, as a lieutenant in Chi-Ma, Vietnam, he learned the significance of local support while pursuing opium smugglers in the region known as “One Hundred Thousand Mountains” (Bernard Fall in Trinquier, Modern Warfare, x). After the Viet Minh began their liberation struggle, Trinquier led the “Berets Rouges” Colonial Parachutists Battalion in counterguerrilla operations. He later commanded the Composite Airborne Commando Group (GCMA), executing guerrilla operations in zones under Viet Minh control. This French-led indigenous force grew to include 20,000 maquis (rural guerrillas) and had a profound impact on the war (Trinquier, Indochina Underground, 167). Though France would lose its colony, Trinquier had learned effective techniques for countering clandestine enemies.

Upon evacuating Indochina in 1954, France immediately deployed its paratroopers to fight a nationalist liberation insurgency mobilizing in Algeria. Determined to avoid another loss, Trinquier (among others) sought to apply the lessons of Indochina against the Algerian guerrillas’ Front de Libération Nationale (FLN). He argued that conventional war, which emphasized controlling strategic terrain, had been supplanted. Trinquier believed adjusting to “modern warfare” required four key reconceptualizations: of the battlefield, of the enemy, of how to fight that enemy, and of the repercussions of failure. Trinquier contended that warfare had become “an interlocking system of action – political, economic, psychological, military,” and that the people themselves were now the battleground (Trinquier, Modern Warfare, 6-8).

Trinquier prioritized winning popular support, and to achieve this he blurred insurgent motivations by lumping guerrillas under the umbrella term “terrorist.” Linking the FLN to a global conspiracy guided by Moscow was helpful in the Cold War, and a frequent claim in the French military, but this gimmick was actually of secondary importance to Trinquier. When he did mention communism, it was not as the guerrillas’ guiding light but in the sense that communist parties, many of which publicly advocated democratic means to political power, had been compromised. The FLN was mainly a nationalist organization that shunned communists, especially in leadership positions, something Trinquier would have known as a military intelligence chief (Horne, Savage War for Peace, 138, 405). Although he accepted the claim that the FLN was communist, in Modern Warfare he used the word “communist” only four times (Trinquier, Modern Warfare, 13, 24, 59, 98). The true threat was “terrorists,” a term used thirty times (Trinquier, Modern Warfare, 8, 14, 16-25, 27, 29, 34, 36, 43-5, 47-49, 52, 56, 62, 70, 72, 80, 100, 103-104, 109). The FLN did terrorize Muslims to compel support (Evans, Algeria: France’s Undeclared War, Part 2, 30). Yet obscuring the FLN’s cause by labeling them terrorists complicated consideration of their more relatable aspirations for self-determination. Even “atheist communists” acted in hopes of improving the human condition. The terrorist was different: no civilized person could support the terrorist.

Trinquier’s careful wording reflects his strategic approach and gives COIN rhetoric greater adaptability. His problem was not any particular ideology, but “terrorists.” Conversely, he called counterinsurgents the “forces of order” twenty times (Trinquier, Modern Warfare, 19, 21, 32-33, 35, 38, 44, 48, 50, 52, 58, 66, 71, 73, 87, 100). A dichotomy was created: people could choose terror or order. Having crafted an effective dichotomy, Trinquier addressed the stakes of “modern warfare.”

The counterinsurgent’s mission was no less than the defense of civilization. Failure to adapt as required, however distasteful it might feel, would mean “sacrificing defenseless populations to unscrupulous enemies” (Trinquier, Modern Warfare, 5). Trinquier evoked the Battle of Agincourt in 1415 to demonstrate the consequences of such a failure. French knights were dealt a crushing defeat after taking a moral stand and refusing to sink to the level of the English and their unchivalrous longbows. He concluded that if “our army refused to employ all the weapons of modern warfare… national independence, the civilization we hold dear, our very freedom would probably perish” (Trinquier, Modern Warfare, 115). His “weapons” included torture, death squads, and the secret disposal of bodies – “dirty war” tactics that hardly represent “civilization” (Aussaresses, Battle of the Casbah, 21-22; YouTube, “Escuadrones de la muerte. La escuela francesa,” time 8:38-9:38). Trinquier was honest and consistent about this, defending dirty war tactics years afterward on public television (YouTube, “Colonel Roger Trinquier et Yacef Saadi sur la bataille d’Alger. 12 06 1970”). Momentary lapses of civility were acceptable if they meant defending civilization, whether by adopting the enemy’s longbow or terrorist methods, to even the battlefield dynamics of “modern warfare.”

Trinquier’s true aim was preserving colonial domination, which had always been based on the possession of superior martial power. In order to blur distinctions between nationalists and communists, he linked any insurgency to a Soviet plot. Trinquier warned of the loss of individual freedom and political independence. The West, he warned, was being slowly absorbed by socialist—terrorist—insurgencies. Western Civilization would be doomed if it did not act against the monolithic threat.  His dichotomy justifies using any means to achieve the true end – sustaining local power. It is also exportable.

Trinquier’s reconfiguration of imperialist logic gave the phenomenon of imperialism new life. Its intellectual genealogy stretches back to the French mission civilisatrice. In the Age of Empire (1850-1914), European colonialism violently subjugated millions while claiming European tutelage could tame and civilize “savages” and “semi-savages.” During postwar decolonization, fresh off defeat in Indochina and facing the FLN, Trinquier modified this justification. The “civilizing” mission of “helping” became a defense of (lands taken by) the “civilized,” while insurgencies epitomized indigenous “savagery.”

The vagueness Trinquier ascribed to the “terrorist” enemy and his rearticulation of imperialist logic had unforeseeable longevity. What are “terrorists” in the postcolonial world but “savages” with modern weapons? His dichotomizing polemic continues to be useful to justify COIN, the enforcer of Western imperialism. This is evident in Iraq and Afghanistan, two countries that rejected Western demands and were subsequently invaded, as well as COIN operations in the Philippines and across Africa, places more peripheral to the public’s attention. Western counterinsurgents almost invariably hunt “terrorists” in a de facto defense of the “civilized.” We must carefully consider how rhetoric is used to justify violence, and perhaps how this logic shapes the kinds of violence employed. Trinquier’s ideas and name remain in the US Army’s COIN manual, linking US efforts to the imperialist ambitions behind the mission civilisatrice (US Army, “Annotated Bibliography,” Field Manual 3-24 Counterinsurgency, 2).

Robert Koch is a Ph.D. candidate in history at the University of South Florida.

The Principle of Theory: Or, Theory in the Eyes of its Students

By contributing writer John Handel. This and Jonathon Catlin’s “Theory Revolt and Historical Commitment” respond to the May 2018 “Theses on Theory and History” by Ethan Kleinberg, Joan Wallach Scott, and Gary Wilder.

“It is impossible, now more than ever, to dissociate the work we do, within one discipline or several, from a reflection on the political and institutional conditions of that work. Such a reflection is unavoidable. It is no longer an external complement to teaching and research; it must make its way through the very objects we work with, shaping them as it goes, along with our norms, procedures, and aims. We cannot not speak of such things.”

-Jacques Derrida (1983)

In 1967 Jacques Derrida famously asserted that “there is no outside-text.” This was not a literary formalist screed against historical context; it was rather that context as such was an unstable category–that context itself was endlessly shifting and was never a stable given. Derrida’s injunction to us, as scholars, in whichever discipline we worked, was to question and deny these categories of certainty, to see things in their most capacious sense, to know that our own sight was always already constrained and limited. It was imperative to constantly question the adequacy of our vision. Theory Revolt is a welcome, deconstructive reminder on this front, as it lays bare many of the central assumptions of the epistemology of the historical profession and what they problematically overlook. Yet, despite its protestations that “Critical historians are self-reflexive” and “recognize that they are…implicated in their objects of study,” there is little impetus on the part of Theory Revolt to question this further, to turn its sights inwards to the institutional site of knowledge production: the university (III.6). Bringing the ethics of deconstructive criticism back to the university, especially in the context of the colonialist, military-industrial production of post-war American knowledge, was a critical move for Derrida, and likewise is a necessary mode of critique that Theory Revolt must take up if it is to effect the radical change in the profession that it desires.

Without actively thinking through the institutional forms of power and politics that structure the production of the historical knowledge it wishes to critique, Theory Revolt’s project is itself implicated in perpetuating them. “Structures of…politics, or even identity that do not conform with convention are ruled out or never seen at all,” they write. Indeed. In focusing so intently on assaulting the “guild” mentality of the current historical profession and its epistemological assumptions and practices, Theory Revolt misses what constitutes their central problem, for it is impossible to explain why we stopped arguing about theory without attending to the political-economic transformation of the university: the collapse in undergraduate humanities enrollments, the implosion of the academic job market, and the subsequent proliferation of the precarious labor which manages to keep the university afloat.

This is, in large part, a generational blindness. Theory Revolt argues that the historical profession sees “theory as one more turn (a wrong one) in the ever-turning kaleidoscope of historical investigation. The lure of theory is taken to be an aberrant stage in the intellectual history of the discipline, happily outgrown, replaced by a return to more solidly grounded observation” (II.3). But, as James Cook has argued in an essay entitled “The Kids Are Alright: On the ‘Turning’ of Cultural History,” “‘turning’ becomes synonymous with a generational rite of passage—most typically, from the new social history of the 1970s to the new cultural history of the 1980s.” Theory Revolt’s emplotment of its own intellectual history rests on the specters of a generational revolt, in particular on the radical critique posed by post-structuralism, which lost the generational fight to a new empiricism. But it is hard to have a generational revolt when your generation has been systematically excluded from the academy. The kids are definitely not alright.

My own brief trajectory at Berkeley provides some anecdotal evidence for this shift. I arrived at Berkeley in the fall of 2015 for graduate school, excited to come to the birthplace of cultural history. I expected to find the longtime center of radicalism in the United States, the institution that founded the journal Representations and prided itself on being “happily at the margins” of the mainstream historical profession. Instead, I found an institution rife with crises: a culture of systemic sexual harassment, a structural budget deficit that seemed to spell the end of the public research university, an endemic housing crisis, and a campus overrun by right-wing trolls and heavily armored police. These were inauspicious years to be at Berkeley.

Somewhere amid the institutional decay that blighted the landscape of Berkeley (not to mention the rest of higher education), the Berkeley of the cultural turn had also seemed to disappear. Where I thought I would find intellectual iconoclasm, all I found was methodological malaise. For instance, at the institution whose connection to Foucault helped bring his work into the mainstream of U.S. academic culture, one of the professors teaching the department’s “Historical Methods & Theory” course notoriously refused to read Foucault with us. “You’re all more or less cultural historians at Berkeley,” the professor informed us; “you all need to read Foucault in chronological order, start to finish, on your own, anyways.” What was the use of actually discussing it in a seminar? The next year this course was replaced with one on professionalization—on using Twitter in academic contexts, on building a CV, on networking at conferences, etc. Professionalization for an almost non-existent academic job market had replaced theoretical stakes.

And therein lies the unseen problem of Theory Revolt’s critique of the mainstream practice of history. In ignoring the neoliberal transformation of the university, they miss the reason we stopped arguing about theory in the first place. It is hard to engage in a collaborative rethinking of historical methodology when most of your generation will not end up in the academy. The disciplinary and professional pressures of the historical guild weigh especially heavy when you are fighting for one of the few jobs available in your field. In the midst of institutional crisis, how can we orient ourselves to address both that crisis and the epistemological malaise of the historical profession that Theory Revolt rightly critiques? Ben Mechen argues that as graduate students we can use our precarious position in the university to inform both a new politics of solidarity and critique of the power structures that produced such precarity. Can Theory Revolt’s critiques be joined to this purpose?

One way Theory Revolt’s mode of critical history might work itself in and through its objects of study is in relation to the New History of Capitalism (HoC), a field born in direct response to the new political realities after 2008, as well as to new institutional directives within universities. But the History of Capitalism remains notoriously undertheorized. In a programmatic state-of-the-field essay in 2014, Seth Rockman claimed that HoC “has minimal investment in a fixed or theoretical definition of capitalism,” and that “the empirical work of discovery takes precedence over the application of theoretical categories.” In the words of Theory Revolt, this type of empiricism only serves to “reinforce disciplinary history’s tendency to artificially separate data from theory, facts from concepts, research from thinking. This leads ‘theory’ to be reified as a set of ready-made frameworks that can be ‘applied’ to data” (I.9). Theory is not a lens that can be applied to historical work; it is rather the set of political stances that every historian, whether implicitly or explicitly, brings to their attempts to understand the past. Despite our protestations to the contrary, we can never get outside theory.

One of the most prescient and enduring critiques of this type of empiricism, especially as it relates to the history of capitalism and the economy, is from none other than Joan Scott, one of the authors of Theory Revolt. In her canonical Gender and the Politics of History, Scott performed a brilliant reading of mid-19th century French statistical reports. Rather than viewing these reports as “irrefutable quantitative evidence,” Scott argues that in merely using these reports in a purely empirical sense, we “have accepted at face value and perpetuated the terms of the nineteenth century according to which numbers are somehow purer and less susceptible to subjective influences than other sources of information.” These statistical reports were neither neutral representations nor sheer ideological constructions, she argued, but were rather “ways of establishing the authority of certain visions of social order, of organizing perceptions of “experience.” Scott’s project constituted a major post-structural attack on the categories that undergirded the basic assumptions of the dominant strains of Marxist and social history at the time and problematized what could be taken and used as historical “facts.”

Years later, Adam Tooze would revisit the ways in which economic historians in particular might make use of numbers as historical facts. “The polemical energy involved in tearing up the empirical foundations of economic and social history, to reveal them as rooted in institutional, political, and indeed economic history, overshot its mark,” he argued. Rather than jettisoning numbers as usable historical facts, Tooze argued that we should adopt a “hermeneutical quantification,” in which we “move from a generalized history of statistics as a form of governmental knowledge to a history of the construction and use of particular facts.” In other words, we could be attentive to the contingent, cultural construction of numbers, while also acknowledging that those numbers were operationalized and did work in the world for those who used them.

Tooze’s move from questioning the category of statistics to thinking about the particular moments and conjunctures by which certain quantitative facts, once constructed, gain authority is consonant with the radical strains of cultural history influenced by science and technology studies (STS) and actor-network theory (ANT). Revealingly, Tooze takes most of his theoretical cues in that article from Bruno Latour (see for instance notes 51 and 52). Latour’s radical constructivism finds sympathetic allies in Scott’s post-structural wing of the cultural turn, for instance in Patrick Joyce’s insistence on not conceiving “culture” as an essentialist superstructure, but as a process, “as for or around practice,” and “located in practice and in material forms.” This continual attention to the process of knowledge production, rather than ever taking forms of knowledge either in the past or present as givens, is what I take Theory Revolt to be stressing in its insistence on a “Critical history (that) recognizes all ‘facts’ as always already mediated, categories as social, and concepts as historical; theory is worldly and concepts do worldly work” (III.4). This kind of critical history, when brought to the history of capitalism and the economy, seems well positioned to cut through the endless methodological stand-offs between economic historians and historians of capitalism.

A critical history thus positioned is also poised to make sense of the broader political moment within which we live. In his 1987 work Science in Action, Bruno Latour asked: “Given the tiny size of fact production, how the hell does the rest of humanity deal with ‘reality’?” In our moment of Trumpian politics, which has witnessed the collapse of post-war America’s key sites of fact and knowledge production—from universities to the media—it becomes clear that the answer to this question is “not well.” Plenty of pseudo-intellectuals have been quick to proclaim that post-structuralism bears at least some responsibility for the collapse of these institutions and the rise of Donald Trump. It did not take long for critics to marshal these claims against Theory Revolt, and rehearse the “dismissal of theory as dangerous relativism” (II.6).

Yet it is precisely here that this type of radical constructivism—a post-structuralist critical history that is always methodologically self-conscious—can make a key intervention in our current political moment. There is no doubt, for instance, that Donald Trump has launched a dangerous assault on facts and facticity, but post-structuralism didn’t cause Trump’s rise; it can help explain him. As Nils Gilman has argued, in the current maelstrom of fake news, “dry fact-checking does not work in the face of a deliberate assault on facts as such.” A critical history that refuses to take any fact as a given, and insists on historicizing how facts are constructed and operationalized, is far more politically exigent than a dry empiricism that attempts in vain to fact-check those who refuse the very premise of factuality.

In the age of Trump, knowledge is up for grabs in ways previously unimaginable. On one hand, this is profoundly terrifying, as those in the highest offices of political power seemingly refuse to share the same factual reality with the world around them. But instead of driving us to a knee-jerk and uncritical defense of our universities, it should push us to further critique. The knowledge production of the academy has thus far not been up to facing the challenges within or without. This should force us within the academy to rethink not only the epistemological practices of our discipline but the institutional structures that created them. As the structures of the post-war university crumble, we can, finally, begin to imagine new collectivities, institutions, and forms of professionalism that do not have to replicate the neoliberal logics that have hollowed out these institutions to begin with. Theory Revolt’s critique of historical epistemology is a welcome part of this project, but rather than being a radical move forward, it is too often a nostalgic look backward to the generational revolts of the 1980s and 90s, obfuscating the tectonic shifts in the structure of our institutions and professions that have occurred in the meantime. But it doesn’t have to be. The critical history it advocates can and should be positioned to bring its critique to bear not just on histories of capitalism and neoliberalism, nor just on high politics, but on the academic institutions that have not only failed to stop these transformations but have, in ways, been complicit in them. For Theory Revolt’s critiques to take shape, and to work their way into our practices of history, they cannot limit themselves to a critique of their own self-proclaimed outside Other—empiricist epistemology—but must also turn inside, and examine their own sites and sights of power within the university itself.

John Handel is a Ph.D. candidate at UC Berkeley.

Colonial Knowledge, South Asian Historiography, and the Thought of the Eurasian Minority

This is the fifth in a series of commentaries in our Graduate Forum on Pathways in Intellectual History, which is running this summer and fall. The first piece was by Andrew Klumpp, the second by Cynthia Houng, the third by Robert Greene II, and the fourth by Gloria Yu.

This last piece is by contributing writer Brent Howitt Otto, a PhD student in History at UC Berkeley.

It is hard to overstate the contemporary and enduring impact of British colonialism on the Indian Subcontinent. Bernard Cohn compellingly argued that the British conquest of India was a conquest of knowledge as much as it was of land, peoples, and markets. By combining the disciplinary tools of history and anthropology, Cohn helped birth a generation of historiography that has examined how the discursive categories of religion, caste, and community (approximate to ‘ethnicity’ in South Asian usage) were deeply molded and in some instances created by bureaucratic attempts to rationalize and systematize the exercise of colonial power over diverse peoples (Nicholas B. Dirks, Castes of Mind). These colonial knowledge systems not only helped colonial officials to think about India and Indians but subsequently affected how Indians of all classes, castes, and religions came to think about themselves in relation to one another and to the state. The anti-colonial nationalism of the late British Raj, far from freeing India of colonial categories and divisions, demonstrated their enduring and deepening power.

When discontent with British rule began to ferment in various forms of nationalist organizing and mobilization in the late nineteenth century, a preoccupation emerged among Indian minorities—Muslims, Untouchables, Sikhs, and even the relatively small community of Eurasians (later known as Anglo-Indians)—that swaraj (self-rule), or indeed Independence, would ultimately create a tyranny of the majority. Would the British Raj simply be replaced by a Hindu Raj, in which minorities would lose their already tenuous position in politics and society?


B. R. Ambedkar

Fear ran deepest among Muslims, who had been scapegoated by the British as the group responsible for the Sepoy Rebellion in 1857. Their fears were not irrational, for the Indian National Congress, as the largest expression of the nationalist movement, struggled to appear as anything but a party of English-educated elite Hindus. Despite Gandhi’s exhortation of personal moral conversion to a universal regard for all people, his message came packaged in the iconic form and practice of a deeply religious Hindu ascetic. Gandhi famously disagreed with the desire of B. R. Ambedkar, a leader of the Untouchables, to abolish the caste ‘system’. Muslims and other minorities called for ‘separate electorates’: protected seats and separate voting mechanisms to ensure minorities were represented.

In part to pacify the anxieties of minorities and in part to further a ‘divide and rule’ agenda to prolong colonial rule, the British responded with a series of Round Table Conferences from 1930 to 1932 in which India’s minorities represented their views. This resulted in the Communal Award of separate electorates for Muslims, Buddhists, Sikhs, Indian Christians, Anglo-Indians, Europeans, and Depressed Classes (Scheduled Castes). Gandhi’s opposition rested on the principle that separate electorates would only impede unity and sow greater division, both in the movement to end British rule and in the hope of a unified nation thereafter. Yet in the Poona Pact of September 1932 Gandhi acquiesced to separate electorates while coercing Ambedkar, through a fast unto death, to renounce them for Dalits.


Mohammad Ali Jinnah

British colonial knowledge had constructed blunt categories of India’s minorities, which failed to acknowledge their internal diversity. Muslims included numerous sects, schools of jurisprudence, regions, and languages. Eurasians were divided internally by region (north, south, Burma), occupation (railways, government services, private trade and industry), lineage (Portuguese, English, Dutch, French), and class. The same was true for other minorities, and yet the British insisted upon dealing with each group by recognizing an organization and its leader as the ‘sole spokesman’ for that ‘community’s’ interests. For Muslims it was Mohammad Ali Jinnah and the Muslim League (Ayesha Jalal, The Sole Spokesman). For Eurasians (Anglo-Indians) it was the All India Anglo-Indian Association, under the leadership of Sir Henry Gidney (1919-42) and Frank Anthony (1942 onward), which by no means could claim membership sufficient to represent the interests of a majority of Anglo-Indians.

Who is allowed to speak for the group? Which voices are suppressed or silenced? These are crucial questions for historians who seek to make an accurate reconstruction of the textures and contours of a group’s thinking over time, of their unity and disunity, internal dynamics, the ways they see themselves and others. Otherwise the scholar will only be able to conjure up an historical narrative that coheres with the sympathies of power, but gets no closer to representing the group on its own terms. The archive is often limited in what it can say, for it too is a construction of power: the editorial discretion of a newspaper, the policy and practice of record keeping and classification in an organization or a government, and the status and education implicit in any literary production. This has been a foremost concern and debate of Subaltern historiography in South Asia (see the journal Subaltern Studies and Gayatri Spivak, “Can the Subaltern Speak?“), and a motivating problem addressed by Anthro-History.

The scholarship on the mixed-race communities of colonial South Asia manifests some of these problems. Some histories have been written by important Anglo-Indian leaders and politicians, such as Herbert Alick Stark and Frank Anthony, and constitute less an academic history than their own rhetorical attempt to shape Anglo-Indians’ view of themselves and others’ views of Anglo-Indians. Indeed, these are primary sources that portray particular dominant, though not representative, perspectives of the community. Even serious academic studies have erred by leaning too heavily on official sources to substantiate the community’s attitudes (e.g., Alison Blunt, Domicile and Diaspora) or by inordinate attachment to a social scientific theory such as “marginality” to explain the social position and self-consciousness of Anglo-Indians, at times entertaining untenable generalizations and ignoring facts (see Noel P. Gist and Roy Dean Wright, Marginality and Identity, or Paul Fredrick Cressey, “The Anglo-Indians“). Other studies are too narrowly focused on Anglo-Indians of a particular place and time to include much dialogue with the greater Anglo-Indian community or with other interlocutors such as the state (e.g., Kuntala Lahiri-Dutt, In Search of a Homeland, or Robyn Andrews, Christmas in Calcutta).

The new monograph by Uther Charlton-Stevens, Anglo-Indians and Minority Politics in South Asia: Race, Boundary Making and Communal Nationalism (London: Routledge, 2018), is a deeply textured historical study of the Eurasian community over its lengthy history. Uninterested in presenting a uniform narrative, Charlton-Stevens digs deeply into diverse sources to show the various interlocutors that Anglo-Indians and their leaders had, and the often discordant opinions they took with respect to their own history, concepts of race, Indian nationalism, the colonial state, and plans for their post-colonial future. Anglo-Indians were neither univocal nor insular. Views among Anglo-Indians were diverse and power over them was contested. Charlton-Stevens skillfully traces these crisscrossing strands to show that Anglo-Indians were embedded in a web of local, colonial, and international discourses, and were interacting with and speaking about concepts as diverse and far-reaching as notions of nation and national self-determination, Zionism, and eugenics. Although the community had a sole spokesman as far as government was concerned, the voices of dissenting and contesting positions were louder and clearer than prior scholarship has ever made out.

Charlton-Stevens refreshingly situates the question of Anglo-Indian identity in the crucial context of the race and eugenic theories current from the late 19th to mid-20th centuries. He explores in depth the writings of two Anglo-Indian figures who were not community leaders, yet had complex articulations of mixed race. Millicent Wilson of Bangalore argued in her writings that Anglo-Indians’ whiteness (and thus superiority) should be acknowledged on the supposed grounds of the dominance of white genes, and thus their predominance in mixed-race people. Wilson regarded Americans and Australians as exemplars of the success of whitening an admittedly hybrid race. In effect she argued against extreme theories of racial purity, while continuing to support a concept of racial hierarchy that presumed the relative superiority of whiteness (Charlton-Stevens, 177–79, 194–96). Though Wilson is seldom referenced in other studies of Anglo-Indians, Charlton-Stevens shows that her work was read and responded to by Anglo-Indians, and that she engaged in disputes with Anglo-Indian leaders and critiqued those who promoted Anglo-Indian emigration from India. Though not conforming to the official positions of the Anglo-Indian Association, Wilson surely represents a strand of Anglo-Indian thinking on race.

Quite different from Wilson’s belief in a racial hierarchy into which she wanted to insinuate Anglo-Indians as ‘white’ stand the writings of the Anglo-Indian social scientist Cedric Dover. Contesting the alleged superiority of racial purity, Dover argued instead that hybridization promoted genetic vigor. He predicted that mixed races would therefore define the future and spell the ultimate end of racial difference. He was a vocal opponent of the Nazi eugenics of racial purity, while himself promoting a eugenics of genetic mixing. As for his own community of Anglo-Indians, Dover believed they should identify as ‘Eurasians,’ a more expansive category than ‘Anglo-Indian,’ and forge a pan-Eurasian solidarity with other Eurasians outside of British India. This view was largely at odds with the stated aims and positions of the official leaders of the community. While Dover’s book most explicitly directed at Anglo-Indians is noted in the historiography, Charlton-Stevens goes further to demonstrate the effects and resonances of Dover’s ideas and other works on Anglo-Indian discourse about themselves and their future. At the same time, drawing on Nico Slate’s Colored Cosmopolitanism: The Shared Struggle for Freedom in the United States and India (Cambridge, MA: Harvard University Press, 2012), he shows how Dover found, through his academic work in the United States and the examples of W.E.B. Du Bois and Booker T. Washington, a model of mixed-race success which supported his claims and which he recommended to Anglo-Indians (Charlton-Stevens, 191–96).

Charlton-Stevens then carefully explores the numerous projects Anglo-Indians undertook as they prepared for a post-colonial future. Several schemes proposed domestic colonial settlements—Abbott Mount (1920s), Whitefield (1882), and McCluskigunge (1933) (Charlton-Stevens, 179–91). Others suggested overseas colonization—of the Andaman and Nicobar Islands (in 1922–3 and 1946), or the creation of a “Eurasia” in the former German New Guinea with League of Nations support, an idea which surfaced in the 1930s and then again in the 1950s (196–206). The Anglo-Indian promoters of these projects envisioned a degree of self-sufficiency, “emancipation” from dependency and colonial oppression, and a “national homeland.” Through a close reading of correspondence, committee reports, organization records, and letters to the editor in Anglo-Indian and English-language church-sponsored newspapers in India, Charlton-Stevens shows that these aims had not only an incidental resonance but a direct connection with the larger international discourses on race and with the post-World War I “balkanization” that came with ethnic or racial conceptions of nationality and national self-determination, and that they drew on foreign models such as the Zionist success in the Palestine Mandate. Finally, numerous other associations and individuals promoted emigration, contrary to the stated position of the All India Anglo-Indian Association to remain in India—especially in the two years between the end of World War II and Independence. This even included as unlikely a destination as Brazil: under a scheme ideologically branded as “Mestizism,” its promoters believed that as a mixed-race Christian people Anglo-Indians would be accepted in a largely mixed-race Christian country. Others mainly sought to settle elsewhere within the British Commonwealth.

These are but a few of the most significant contributions of Charlton-Stevens’ book, which I have selected because they break new ground by foregrounding that Anglo-Indians were diverse in their thought, despite being forced to accept a sole spokesman who at times was the target of considerable resistance. Moreover, they engaged with broader Indian and international discourses. Charlton-Stevens achieves this textured treatment of the ideas of Anglo-Indians on their own terms by a close, broad, and critical reading of the archive, as well as (in parts not mentioned above) ethnographic work and oral history that highlights the value of non-textual sources to a thoroughgoing historical account that interrogates power, expects diversity, and eschews easy generalizations.

Brent Howitt Otto is a graduate student in UC Berkeley’s Department of History.

How did Catholics Embrace Religious Liberty?

By guest contributor Udi Greenberg

This post is a companion piece to Prof. Greenberg’s article in the most recent issue of the Journal of the History of Ideas, “Catholics, Protestants, and the Tortured Path to Religious Liberty.”

A series of recent controversies in Europe and the United States have sparked intense interest in the scope and limits of religious liberty. Can governments make sure everyone has the right to freely practice their faith? Should they protect this right even if it clashes with other priorities and principles, such as national security imperatives or anti-discrimination statutes? While almost all the participants in these debates—politicians, jurists, commentators, and social thinkers—claim to be defenders of religious freedom, they assign profoundly different meanings, goals, and consequences to this term. Progressives have invoked it to decry anti-Muslim measures such as anti-veil laws in Europe or the “Muslim ban” in the United States, while conservatives have used religious liberty to defend the right to discriminate against single-sex couples, deny access to birth control, and ban displays of certain religious faiths. Perhaps because it is so heavily contested, the language of religious liberty has acquired a significant aura in contemporary public, political, and legal discourse. Like “democracy,” “justice,” and “freedom,” it is a term that radically different camps seek to claim as their own.


Pope Gregory XVI

It can therefore be surprising to remember how recent religious liberty’s popularity is. Few institutions reflect this better than the Catholic Church, which as recently as the early 1960s openly condemned religious freedom as heresy. Throughout the nineteenth century and well into the twentieth, Catholic bishops and theologians claimed that the state was God’s “secular arm.” The governments of Catholic-majority countries therefore had the duty to privilege Catholic preaching, education, and rituals, even if they blatantly discriminated against minorities (where Catholics were a minority, they could tolerate religious freedom as a temporary arrangement). As Pope Gregory XVI put it in his 1832 encyclical Mirari vos, state law had to restrict preaching by non-Catholics, for “is there any sane man who would say poison ought to be distributed, sold publicly, stored, and even drunk because some antidote is available?” It was only in 1965, during the Second Vatican Council, that the Church formally abandoned this conviction. In its Declaration on Religious Freedom, it formally proclaimed religious liberty as a universal right “greatly in accord with truth and justice.” This was one of the greatest intellectual transformations of modern religious thought.

Why did this change come about? Scholars have provided illuminating explanations over the last few years. Some have attributed it to the mid-century influence of the American constitutional tradition of state neutrality in religious affairs. Others claimed it was part of the Church’s confrontation with totalitarianism, especially Communism, which led Catholics to view the state as a menacing threat rather than an ally and protector. My article in the July 2018 issue of the Journal of the History of Ideas uncovers another crucial context that pushed Catholics in this new direction. Religious liberty, it shows, was also fueled by a dramatic change in Catholic thinking about Protestants, namely a shift from centuries of hostility to cooperation and even a warm embrace. Well into the modern era, many Catholic writers continued to condemn Luther and his heirs, blaming them for the erosion of tradition, nihilism, and anarchy. But during the mid-twentieth century, Catholics swiftly abandoned this animosity, and came to see Protestants as brothers in a mutual fight against “anti-Christian” forces, such as Communism, Islam, and liberalism. The French theologian Yves Congar argued in 1937 that the Church transcends its “visible borders” and includes all those who have been baptized, while the German historian Joseph Lortz published in 1938 sympathetic historical tomes that depicted Martin Luther and the reformers as well-meaning Christians. This process of forging inter-Christian peace—which became known as ecumenism—reached its pinnacle in the postwar era. In 1964, it received formal doctrinal approval when Vatican II promulgated the Decree on Ecumenism, which declared Protestants “brethren.”


Pope Pius X

It was in this context that Catholic leaders also shed their opposition to religious liberty. Catholic thinkers had long demonized religious liberty as a Protestant conspiracy that allowed Luther’s heresy to thrive. This was the spirit in which Pope Pius X, in his famous 1910 encyclical Editae saepe, decried Protestants for “pav[ing] the way for modern rebellions and apostasy.” But after the Church embarked on its quest for cooperation with Protestants, it also reconsidered its approach to state institutions. Catholic thinkers no longer required Catholic countries to impose Catholic education and practices. Indeed, for many Catholic writers, interdenominational peace required a new approach to the state, in which no church held formal legal hegemony; they believed that the two intellectual projects—making peace with Protestants and revising Catholic teachings on the use of state power—were ultimately inseparable. It was no coincidence that the thinkers who drafted Vatican II’s Declaration on Religious Freedom also penned the Decree on Ecumenism. Both texts also emerged from the same organ, the Secretariat for Promoting Christian Unity.

This story may seem like a scholastic dive into arcane theological debates, but it has broader implications for our own debates about religion and politics. It raises questions about the origins of contemporary laws that regulate religion in Europe and the United States. Reflecting on recent controversies, some scholars have attributed religious liberty laws to the ideology of “secularism” (or laïcité in French). If countries like France, they have asserted, routinely discriminate against Muslims through actions like banning the veil, it is in part (though not exclusively) because an obsession with secular public affairs cannot digest certain religious behaviors or open displays of faith. Yet as this story of Catholic thinking reveals, religious liberty is not simply the product of secularist ideas. In some cases, it was the product of inter-confessional peace between Catholics and Protestants, whose architects had no aspirations of promoting universal religious equality. On the ideological level, ecumenical religious freedom in fact sought to maintain religious dominance in the public sphere by joining forces against “anti-Christian” enemies. It thus may be that religious liberty is best understood not only as the product of secular ideas and conditions. Rather, it was also the work of religious actors and ideas—a legacy that continues to profoundly shape contemporary political and public life.

Udi Greenberg is an associate professor of European history at Dartmouth College. He is currently writing a book titled Religious Pluralism in the Age of Violence: Catholics and Protestants from Animosity to Peace, 1879–1970. Together with Daniel Steinmetz-Jenkins, he edited a special forum on Christianity and human rights in the latest issue of the Journal of the History of Ideas; the introduction to that forum can be found here.

Functional Promiscuity: The Choreography and Architecture of the Zinc Gang

By Contributing Editor Nuala F. Caomhánach


Zinc Finger DNA Complex, image by Thomas Splettstoesser


Andreas Sigismund Marggraf

The tale of gag knuckles, taz-two, hairpins, ribbons, and treble clefs is quite elusive. Although they sound more like nicknames of a 1920s bootlegging gang (at least to me), they are the formal nomenclature of a biochemical classification system, known commonly as zinc fingers (ZnF). The taxonomy of zinc fingers describes the morphological motif that the element creates when interacting with various molecules. Macromolecules, such as proteins and DNA, have developed numerous ways to bind to other molecules, and zinc fingers are one such molecular scaffold. Zinc, an essential element for biological cell proliferation and differentiation, was first isolated in 1746 by the German chemist Andreas Marggraf (1709–1782). Zinc fingers were important in the development of genome editing, and while CRISPR remains king, zinc is making a comeback.


Strings of amino acids fold and pleat into complex secondary and tertiary structures (for an overview, see this video from Khan Academy).


Fig. 1: The folding funnel


Fig. 2: The energy landscape

Proteins that couple zinc finger domains to a nuclease—zinc finger nucleases—act as molecular DNA scissors, always ready to snip and organize genetic material. The return of these biochemical bootleggers, an older generation of genome editing tools, is due to the problem of exploring the invisible molecular world of the cell. In this age of genomic editing, biologists are debating the concept of protein stability and trying to elucidate the mechanism of protein dynamics within complex signalling pathways. Structural biologists imagine this process through two intermingled metaphors, the folding funnel (fig. 1) and the energy landscape (fig. 2). The energy landscape theory is a statistical description of a protein’s potential surface, and the folding funnel is a theoretical construct, a visual aid for scientists. These two metaphors get scientists out of Levinthal’s Paradox, which argues that finding the native or stable three-dimensional folded state of a protein, that is, the point at which it becomes biologically functional, by a random search among all possible configurations would take an astronomically long time, far longer than the age of the universe. Proteins can fold in seconds or less; therefore, biologists assert that folding cannot be random. Patterns surely may be discovered. Proteins, however, no longer seem to follow a unique or prescribed folding pathway, but move through different positions, in cellular time and space, in an energy landscape resembling a funnel of irregular shape and size. Capturing the choreography of these activities is the crusade of many types of scientists, from biochemists to molecular biologists.
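The arithmetic behind Levinthal’s Paradox is easy to sketch. The Python snippet below is a minimal back-of-the-envelope illustration, not a simulation: the chain length, the three conformations per residue, and the picosecond sampling rate are conventional illustrative assumptions rather than measured values.

```python
# Back-of-the-envelope arithmetic behind Levinthal's paradox.
# Illustrative assumptions (not measurements): a 100-residue chain,
# 3 backbone conformations per residue, and a sampling rate of one
# conformation per picosecond (1e12 per second).

residues = 100
conformations_per_residue = 3.0
samples_per_second = 1e12

total_conformations = conformations_per_residue ** residues   # ~5.2e47
search_seconds = total_conformations / samples_per_second
search_years = search_seconds / (60 * 60 * 24 * 365)

print(f"possible conformations:   {total_conformations:.2e}")
print(f"exhaustive random search: {search_years:.2e} years")  # ~1.6e28 years
```

Even with these generous assumptions, the search time dwarfs the age of the universe (roughly 1.4e10 years), which is why the funnel metaphor replaces the image of a blind, exhaustive search.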


Within this deluge of scientific terminology is the zinc atom of this story. With an atomic number of 30, Zinc carries two valence electrons, and it is as the ion Zn2+ that it wanders in and out of the cell. If Zinc could speak, it might tell us that it wishes to dispose of, trade, and vehemently rid itself of these electrons for its own atomic stability. It leads with these electrons, recalling a zombie with outstretched arms. It is attracted to molecules and other elements as it moves around the cytoplasm, and equally repelled upon being cornered by others. Whilst not as reactive as hydrogen peroxide (H2O2), it certainly trawls its valency net in the hope of catching a break to atomic stasis, at least temporarily. In this world of unseen molecular movements a ritual occurs as Zinc finds the right partner to anchor itself to. Zinc trades its two electrons to form bonds, making bridges, links, ties, and connections which slowly reconfigure long strings of amino or nucleic acids into macromolecules with specific functions. All of this occurs beneath the phospholipid bilayer of the cell, unseen and unheard by biologists.

Zinc-58b6020f3df78cdcd83d332a

Model of a Zinc atom

The cell itself is an actor in this performance, behaving like a state security system as it monitors Zn2+ closely. If the concentration gets too high, it will quarantine the element in cordoned-off areas called zincosomes. When amino acids such as cysteine and histidine are arranged close to each other, the zinc ion is drawn into a protein trap and held in place, creating a solid, stable structure. The residues connect to each other via a hydrogen atom, hydrogen’s single electron heavily pulled on by carbon as if curtailing a wild animal on a leash. In the 1990s, when the crystal structures of zinc finger complexes were solved, they revealed the canonical tapestry of interactions. Notably, unlike many other proteins that bind to DNA through the two-fold symmetry of the double helix, zinc fingers link sequences of varying lengths in linear fashion, creating the fingers of their namesake. Elaborate architectural forms are created.
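As a rough illustration of how that cysteine–histidine spacing becomes something searchable, the sketch below scans a protein sequence for the canonical Cys2His2 pattern. It is a deliberately simplified stand-in (real motif discovery uses profile hidden Markov models, not bare regular expressions), and the toy sequence is merely modeled on a classic Cys2His2 finger:

```python
import re

# Simplified form of the PROSITE-style C2H2 signature
# C-x(2,4)-C-x(12)-H-x(3,5)-H: two cysteines and two histidines
# spaced so their side chains can grip a single zinc ion.
C2H2 = re.compile(r"C.{2,4}C.{12}H.{3,5}H")

# Toy input: a fragment modeled on a classic Cys2His2 zinc finger.
sequence = "MERPYACPVESCDRRFSRSDELTRHIRIHTGQKP"

for m in C2H2.finditer(sequence):
    print(f"candidate zinc finger at {m.start()}-{m.end()}: {m.group()}")
```

Run on the toy fragment, the pattern picks out the single embedded finger; on a real proteome it would overcall wildly, which is exactly why practitioners reach for statistical models instead.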

Like a dance, one needs hairpins, or rather beta-hairpins: two strands of amino acids that form a long slender loop, just like a hairpin. To do the gag knuckle one needs two short beta-strands, then a turn (the knuckle), followed by a short helix or loop. Want to be daring? Do the retroviral gag knuckle: add a one-turn alpha-helix followed by the beta-hairpin. The treble clef finger looks like a musical treble clef if you squint really hard. First, assemble around a zinc ion, then do a knuckle turn, loop, beta-hairpin, and top it off with an alpha-helix. The less complex zinc ribbon is more for beginners: two knuckle turns, throw in a hairpin and a couple of zinc ions. The dance itself is never witnessed directly; biologists interpret it using X-ray crystallography (data that looks like redacted government documents) and computer-simulated images.

genes-08-00199-g001

X-ray crystallography of a Zinc Finger Protein, image from Liu et al., “Two C3H Type Zinc Finger Protein Genes, CpCZF1 and CpCZF2, from Chimonanthus praecox Affect Stamen Development in Arabidopsis,” Genes 8 (2017).

klug_nobelportrait-450x450

Aaron Klug

Zinc fingers were first described in the 1980s, in a study of the transcription of an RNA sequence by the biochemist Aaron Klug. Since then a number of types have been delimited, each with a unique three-dimensional architecture. Klug used a model of the molecular world that required stasis in structure. The pathway towards this universal static theoretical framework of protein functional landscapes was tortuous. In 1904, Christian Bohr (1855–1911), Karl Albert Hasselbalch (1874–1962), and August Krogh (1874–1949) carried out a simple experiment. A blood sample of known partial oxygen pressure was saturated with oxygen to determine the amount of oxygen uptake. The biologists then added a fixed amount of CO2 and repeated the measurement under the same partial oxygen pressure. They described how one molecule (CO2) interferes with the binding affinity of another molecule (O2) to the blood protein haemoglobin. The “Bohr effect” described the curious and busy interactions of how molecules bind to proteins.
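In present-day textbook notation (not Bohr’s own), the effect is usually summarized with the Hill equation for haemoglobin’s fractional saturation $Y$:

$$Y = \frac{(pO_2)^n}{(P_{50})^n + (pO_2)^n}, \qquad n \approx 2.8,$$

where $P_{50}$ is the oxygen pressure at half-saturation and $n$ reflects cooperative binding. Adding CO2 (or lowering pH) raises $P_{50}$ and shifts the whole curve to the right, so that at the same oxygen pressure less O2 remains bound: precisely the interference Bohr, Hasselbalch, and Krogh measured.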

JACOB1-obit-jumbo

Jacques Monod and François Jacob, image by Agence France-Presse

In 1961, Jacques Monod and François Jacob extended Bohr’s research. The word ‘allosteric’ was introduced by Monod in the conclusion of the Cold Spring Harbor Symposium of 1961. It captured at once the differences in the structure of molecules and the consequent necessary existence of different sites across the molecule for substrates and for allosteric regulators. Monod drew on the “induced-fit” model proposed by Daniel Koshland in 1958, which states that the attachment of a particular substrate to an enzyme causes a change in the shape of the enzyme so as to enhance or inhibit its activity. Suddenly, allostery erupted into a chaotic landscape of multiple meanings that affected contemporary understanding of the choreography of zinc as an architectural maker. Zinc returned as the bootlegger of the cellular underworld.

F09-20bsmc

The “induced-fit” model

Around 2000, many biologists were discussing new views on proteins: how they fold, allosteric regulation, and catalysis. One major shift in the debate was the view that the evolutionary history of proteins could explain their characteristics. This was brought about by the new imagining of the space around molecules as a funnel in an energy landscape. When James and Tawfik (2003) argued that proteins, from antibodies to enzymes, seem to be functionally promiscuous, they pulled organismal natural selection and sexual selection theory into the cellular world. They argued that promiscuity is of considerable physiological importance. Bonding of atoms, on this view, is more transient and more susceptible to change than previously thought. Whilst they recognized that the mechanism behind promiscuity, and the relationship between specific and promiscuous activities, remained unknown, they opened the door to a level of fluidity earlier models could not contain. Zinc thus played an important role as a “freer” element with the ability to change its “mind.”

These ideas were not new. Linus Pauling proposed a similar hypothesis, eventually discarded as incorrect, to explain the extraordinary capacity of certain proteins (antibodies) to bind to any chemical molecule. This new wave of thinking suggested that perhaps extinct proteins were more promiscuous than extant ones, or that there was a spectrum of promiscuity, with selection progressively sifting proteins for more and more specificity. In 2007, Dean and Thornton reconstructed ancestral proteins in vitro and supported this hypothesis. If the term promiscuity, as it has been placed on these macromolecules, sticks, what exactly does it mean? If promiscuity is defined as indiscriminate mingling with a number of partners on a casual basis, is zinc an enabler? A true bootlegger? This type of language has piqued the interest of biologists who see the importance of the term in evolution and the potential doors it opens in biotechnology today. Promiscuity makes molecular biology sexy.

To say that a protein, or any molecule, is not rigid or puritanical in nature but behaves dynamically is not the same as saying that the origin of its catalytic potential and functional properties must be sought in its intrinsic dynamics, and that the latter emerged from the evolutionary history of these macromolecules. The union of structural and evolutionary studies of proteins means biologists have not only (re)discovered protein dynamics but have also highlighted the way these properties emerged in evolution. Today, evolutionary biologists consider that natural selection sieves the result, not necessarily the ways by which the result was reached: there are many different ways of increasing fitness and adaptation. In the case of protein folding with zinc fingers, what is sieved is a rapid and efficient folding choreography. These new debates suggest that what has been selected is not a unique pathway but an ensemble of different pathways, a reticulating network of single events, each with a different order of bond formation.

If the complex signalling pathways inside a cell begin with a single interaction, zinc plays a star role. Zinc, along with other elements such as iron, is the underbelly of the molecular world. Until captured and tied down, these elements remain unseen; the new view of proteins offers to bond mechanistic and evolutionary interpretations into a novel field. This novelty is a crucial nuance in explaining the functions and architecture of molecular biology. As the comeback king, zinc fingers are used in a myriad of ways. As lookouts: biologists have infected cells with DNA engineered to produce zinc fingers only in the presence of specific molecules, so that zinc reports back and a biologist can detect a molecule, such as a toxin, by looking for the surface zinc fingers rather than the toxin itself. As obstructionists: in 1994 an artificially constructed three-finger protein blocked the expression of an oncogene in a mouse cell line. As informants, they capture specific cells: scientists have created mixtures of labeled and unlabeled cells, incubated them with magnetic beads covered in target DNA strands, and found the cells expressing zinc fingers still attached to the beads. They also smuggle outsiders into the cell, delivering foreign DNA in viral research. In tracking these bootleggers, scientists have pushed the limits of their own theories and molecular landscapes, but have landed on promiscuity, a culturally laden, inherently gendered word.

Graduate Forum: Excesses of the Eye and Histories of Pedagogy

This is the fourth in a series of commentaries in our Graduate Forum on Pathways in Intellectual History, which is running this summer. The first piece was by Andrew Klumpp, the second by Cynthia Houng, and the third by Robert Greene II.

This fourth piece is by guest contributor Gloria Yu.

Whether a man is wise can be gathered from his eyes. So thought the Anglican bishop Joseph Hall in his 1608 two-volume collection of character sketches, Characters of Vertues and Vices. The wise man, Hall advised, “is eager to learn or to recognize everything but especially to know his strengths and weaknesses…His eyes never stay together, instead one stays at home and gazes upon himself and the other is directed towards the world…and whatever new things there are to see in it” (quoted in Whitmer, The Halle Orphanage, 77). The wise man’s eyes are the entry points of both self-knowledge and worldly knowledge, and their divergent gaze betrays his catholic curiosity (fig. 1).

hb_29.100.16.jpg

Figure 1. Joseph Hall might find the eyes, more than the books, to be the prominent ‘accessory’ in Bronzino’s Portrait of a Young Man with a Book (1530s). While the left eye is directed outward, ‘towards the world’, the gaze of the right eye (our left) seems unfocused on what is directly before it. It is not a look that invites sustained eye contact with the viewer; rather, an apparent disinterestedness suggests that the man’s attention is elsewhere. The aimless stare is a marker of a gaze turned inward. Image courtesy of the Metropolitan Museum of Art in New York.

The eye plays a central role in two recent snapshots from the history of Western pedagogy that find it productive to ground the historical construction of norms for knowing in educational contexts. Kelly Joan Whitmer’s The Halle Orphanage as Scientific Community: Observation, Eclecticism, and Pietism in the Early Enlightenment (Chicago, 2015) examines the scientific ethos that permeated and vitalized the premier experimental Pietist educational enterprise of the late seventeenth and eighteenth centuries. Orit Halpern’s Beautiful Data: A History of Vision and Reason since 1945 (Duke, 2015) offers a genealogy of contemporary obsessions with “data visualization,” their ontological and epistemological consequences, and the pedagogies that spurred them. Both works centralize the eye as the object of training and discipline. Both accounts trace the pedagogical principles behind the ways in which seeing must be learned, extending the functions of the visual apparatus beyond the role of passive reception into an active mode of evaluating and constructing. Seeing, in these histories, is imbued with cognitive powers so as to be nearly congruent with knowing itself. Considering these two works together prepares us to use the recurring motif of the trained eye as an opportunity for gauging future directions for histories of pedagogy.

Whitmer informs us that descriptions like the one above of the ‘character of the wise man’ were regularly incorporated into the curriculum at the Halle Orphanage’s schools for the purpose of training a particular way of seeing. The Orphanage’s founder, August Hermann Francke, believed these character sketches could “awaken” in children a love of virtue so that they might emulate the figures depicted. The end goal was the cultivation of an “inner eye,” or “the eye in people,” which Francke equated with prudent knowledge [Klugheit] and pious desires. Indeed, a pedagogy of seeing permeated Francke’s administration of the Orphanage. The training of the inner eye was based on the honing of an array of observational practices that engaged senses beyond just the visual. On one level, visual aids such as scientific instruments (e.g. the camera obscura, air pumps), wooden models, and color-coded maps cultivated tactile, spatial, and relational knowledge. In Francke’s correspondence with the mathematician Ehrenfried Walther von Tschirnhaus, the microscope figures as a particularly apt tool for lessons in seeing since it enhanced understanding of the configuration of parts and wholes while also providing the opportunity to reflect on the faculty of seeing itself. Through observing how minute changes in light transformed one’s image of an object, one could discern the possibilities and limits of human perception. On another level, visual aids, like the models of the geocentric and heliocentric heavenly spheres specially commissioned for the Orphanage, enhanced cognition and yielded spiritual benefits by teaching students how to behold incompatible representations of the world and to reconcile them. The Orphanage taught a manner of “seeing all at once,” wherein the eye was considered a “conciliatory medium” that could aid in transcending interconfessional disagreements.

For Whitmer, seeing and the eye operate as metaphors for perception, understanding, and cognition writ large, at times even as the privileged site for accessing divine truths. Notable is her use of the singular. If it is through seeing in a broad sense (materially, affectively, cognitively) that we encounter the world and acquire knowledge of it, it is the eye that mediates this encounter. From the ancient origins of the evil eye, to Avicenna’s comparison of the eye to a mirror, to the early modern possibility that a wrong stare could compel an accusation of witchcraft, the eye—and not only in the Western tradition—has perhaps always been overdetermined. Its activity always transcends the physiological processes behind mere visual perception. It was after all in tempting Eve to take from the tree of knowledge that the serpent said, “For God knows that in the day you eat of it, your eyes will be opened” (Genesis 3:5).

images

Orit Halpern, Beautiful Data: A History of Vision and Reason Since 1945 (Durham: Duke University Press, 2015).

“Vision,” as Halpern notes, “is thus a term that multiples—visualization, visuality, visibilities” (24). Dexterity in handling the conceptual differences of this plurality allows Halpern to explain how, since the end of World War II, we have come to bestow such a capacious range of talents on our ocular organs. In the context of increasingly computational and data-scientific approaches in information management, architecture and design, and the cognitive sciences, Halpern argues, vision has taken on a highly technicized form, mediated by screens, data sets, and algorithms that fundamentally expand the epistemological prowess of perception. In particular, postwar cybernetics and design and urban planning curricula had a hand in transforming the ontology of vision. Cybernetics’s preoccupation with prediction, control, and crisis management foregrounded information storage and retrieval and thus transformed the eye into a filter and translator of sorts, selecting relevant information for storage within a constant flow based on algorithms for pattern recognition. Whereas the eye at the Halle Orphanage dwelled on contradictory images, the eye of twentieth-century cybernetics constantly made choices.

The works of these authors suggest the advantages of approaching questions central to intellectual history—concerning the transmission, unity, and cultural impact of ideas—from the perspective of histories of pedagogies. Analyzing pedagogy allows Halpern to trace how philosophical ideas were transformed into teachable principles and how a certain paradigm of vision was promulgated through the materiality of our media environments. In her narrative, industrial designers, drawing from cybernetics, developed an “education of vision” grounded in “algorithmic seeing” (93). Here, vision mimicked the activity of a pattern generator: rather than representing the world, this ‘new’ vision captured the multitude of ways something could possibly be seen. Vision was about scale and quantity, about producing as many representations and recombinations of the visual field as possible. No longer could any pedestrian see the world; one had to learn through credentialed training to be considered an “expert” in vision (94). The screen-filled “smart” city of Songdo, South Korea is an embodiment of this paradigm, where older ways of seeing lose out to a vision continually registering and ordering human and environmental metrics toward the goal of “perfect management and logistical organization of populations, services, and resources” (2, see fig. 2).

most-sustainable-cities-songdo

Figure 2. The “smart” city Songdo, South Korea in the Incheon Free Economic Zone features “computers and sensors placed in every building and along roads to evaluate and adjust energy consumption.” Photo and description courtesy of Condé Nast Traveler.

Furthermore, histories of pedagogy, whether central to a project (as in Whitmer’s case) or part of a larger investigation measuring the scale of cultural transformation (as in Halpern’s), can answer questions that animate the intersection of the history of knowledge, the history of science, and historical epistemology. These fields have offered a robust vocabulary for interrogating the historical contingency of observation and objectivity; the historical, social, and cultural criteria for knowledge; and the methodological goals and tactics involved in fortifying scientific personae. Besides the fact that pedagogy is a science and set of practices specifically concerned with communicating knowledge, Whitmer and Halpern show that, in addition to articulating what it means to learn in certain moments and institutional contexts, pedagogical theories can contain latent or overt temporalities, subjectivities, and modes of vision that deepen our understanding of what in the past has made knowledge count as knowledge.

Yet histories of pedagogy have even more fruit to bear. If earlier histories focused on matriculation rates, student body demographics, and the longevity of institutions, these two works show that future research is capable of taking on the relationship between science and religion, the convergence and divergence of intellectual traditions, the crests and troughs in the history of humanism, and episodes in the centuries-long contemplation of what it means to be human. Future research would do well to follow earlier histories of pedagogy, not least the scholarship of our discussant, in treating sensitively the gaps between ideals of education and their execution, the slips between the social and political aims of education and their imperfect applications (Anthony Grafton and Lisa Jardine, From Humanism to the Humanities: Education and the Liberal Arts in Fifteenth- and Sixteenth-Century Europe). That pedagogy is a tradition embedded in text, reliant on practices, and crucial to the scaffolding of intellectual traditions places the excavation of its pasts well within the purview of intellectual historians.

One last note on the eye. Mention of training the eye calls to mind the familiar narrative that forms of discipline often get inscribed on bodies. While this may be true, the existence of multiple histories of pedagogy suggests that we are trained in numerous contexts, at different times in our lives, and in potentially incompatible ways in how to see and how to know. It is possible, then, to become at once wise and childish, to become discerning in one way and docile in another. As to whether we are overwhelmingly determined by systems of power, and whether there are multiple systems at play, I answer with an optimistic agnosticism that leaves open the question of the place of freedom. I rely on the vagueness of the term freedom here to underscore that day-to-day exercises in learning may fail to yield desired results or, even when they do, may be useful in ways other than intended. For the researcher willing to question the Foucaultian schema that binds discipline with docility, histories of pedagogy—of pedagogies with a range of reaches, in quiet pockets of Europe or in grander governmental programs—can be treated as an approach capable of accommodating histories of discipline while remaining sensitive to the rough trajectories of learning. If the Panopticon’s surveilling gaze, both omnipresent and invisible, inspires the self-policing of behavior, how does the prisoner see when he looks back?

Panopticon

Jeremy Bentham’s Panopticon

Gloria Yu is a Ph.D. candidate at the University of California, Berkeley. Her research focuses on the history of psychology and the concept of the will in nineteenth-century Europe. She thanks David Delano, Elena Kempf, and Thomas White for their incisive comments on earlier drafts.

J. M. W. Turner’s “Dissolving Views”

By guest contributor Jonathan Potter

War. The Exile and the Rock Limpet exhibited 1842 by Joseph Mallord William Turner 1775-1851

J. M. W. Turner, War. The Exile and the Rock Limpet (1842)

Reviewing the 1842 Royal Academy of Arts exhibition, the art critic John Eagles wrote of J. M. W. Turner’s paintings:

They are like the “Dissolving Views,” which, when one subject is melting into another, and there are but half indications of forms, and a strange blending of blues and yellows and reds, offer something infinitely better, more grand, more imaginative than the distinct purpose either view presents. We would therefore recommend the aspirant after Turner’s style and fame, to a few nightly exhibitions of the “Dissolving Views” at the Polytechnic, and he can scarcely fail to obtain the secret of the whole method […] Turner’s pictures […] should be called henceforth “Turner’s Dissolving Views” (“Exhibitions—Royal Academy,” Blackwood’s Edinburgh Magazine, July 1842, p. 26).

The comparison is no doubt intended to reduce the stature of Turner’s paintings from high art to the level of popular performance. Eagles was not a fan of Turner’s work—he begins by suggesting Turner suffered hallucinations—and he reused the dissolving view comparison the following year to note with approval that there were few imitators of Turner’s “‘dissolving view’ style” (“Exhibitions,” Blackwood’s Edinburgh Magazine, Aug 1843, p. 188). But Eagles was not alone. Much of the press for Turner’s paintings at the 1842 exhibition was negative, concentrating primarily on various aspects which broadly fall into the category of realism: clarity, recognisability, and believability of depiction, among others.

Part of the problem also lay in Turner’s subject matter—reviewers struggled, for example, to relate the exiled Napoleon and the limpet in War. The Exile and the Rock Limpet to the sea burial of the artist David Wilkie in Peace – Burial at Sea. These two subjects (or three if you include the jarring juxtaposition of emperor and limpet) do not naturally fit within a traditional historiographical narrative or seem to follow a sequential logic.

Peace - Burial at Sea exhibited 1842 by Joseph Mallord William Turner 1775-1851

J. M. W. Turner, Peace – Burial at Sea (1842)

The paintings contradicted reviewers’ expectations by disregarding realist principles of depiction in both form and content. In order to understand them, we need to look beyond the traditions of fine art painting. This, indeed, is what Eagles suggested when he called the paintings “dreamy performances” and directed the reader to consider them as “dissolving views.”

A successor to the phantasmagorias, the dissolving view was a magic lantern show that used a gradual transition (the “dissolve”) from one image to another. This could utilize superimposition or, more often, involve a gradual dimming and elimination of light through one lens whilst proportionally increasing light through another. Dissolving view shows came to prominence sometime in the first part of the nineteenth century (Simon During suggests around 1825 [Modern Enchantments, 102-3]), taking over from the phantasmagoria as the chief magic lantern entertainment.
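The optical trick is, at bottom, a linear blend. A minimal sketch (all values invented for illustration) of how dimming one lens while proportionally brightening the other produces the dissolve:

```python
import numpy as np

def dissolve(slide_a: np.ndarray, slide_b: np.ndarray, t: float) -> np.ndarray:
    """At t=0 only slide_a shows; at t=1 only slide_b; in between, both coexist."""
    return (1.0 - t) * slide_a + t * slide_b

day = np.full((2, 2), 0.9)    # a bright "daylight" slide
night = np.full((2, 2), 0.1)  # a dark "moonlight" slide

for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(t, dissolve(day, night, t)[0, 0])
```

The midpoint, where both slides contribute equally, is the moment of “half indications of forms” that Eagles compared to Turner’s canvases.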

The dissolving view is unstable and, potentially at least, destabilising, offering an alternative to traditional sequential historiography. The dissolving view presents paired images that blur together as they transition. In dissolving, the images attain, lose, and regain focus and clarity, and, for transitory moments, appear to coincide and coexist with no clear distinction from one to the next. The dissolve blurs the visual field, but it also blurs the semantic fields of content and context.

A handbill for dissolving views at the Adelaide in London, for example, promises a variety of different subjects seen in different states or time frames. The “Water Girls of India,” for instance, appear in daylight and then in moonlight, followed by the Tower of London in daylight, then moonlight, then on fire. As the lantern changes from one lens to the other, the scene dissolves from day to night and the viewer is given the sense of time passing. Because the subject remains the same, often there is very little movement beyond the changing light or incidental details. The second image implicitly reiterates an aspect (in this case, diurnal/nocturnal light) of the first, now past, image—i.e. the nocturnal image acquires meaning in relation to the diurnal image—and this semantic return of the past implies the next stage in the cycle. The implied sequence follows a causal rationale (day to night to day), but its progression from past to present to future is also a progression from past to present to past. This rhythmic logic is further complicated by the progression to the next subject. There is no clear logical connection between the water girls of India and the Tower of London except that both share a rhythmic temporality (both transform from day to night). The teleology of cause and effect is replaced by coincidence and shared rhythms that are not causation but do allow a certain predictive logic.

This destabilisation is not without form or structure. These kinds of dissolving view present a cycle which intermittently reinstates something resembling linearity and perspectival order, but this linearity is caught in a revolutionary whirl from one image to the next and (potentially at least) back again. This is a visual whirl in more than one sense: the blur of the images replicates the visual field of motion and, in fact, dissolving view images were often circular. In essence, the whirling of the dissolving view contains a sense of rhythmic regularity. In images which oscillate between summer and winter or night and day, as magic lantern dissolving views often did, a natural sequential rhythm supplants the linear progressions of dominant conceptions of time and history. Rather than succession and disjunction, the dissolving view implies repetition and conjunction. It acts as a conceptual counterpoint to the linearity of conventional historical thought that emphasizes the sequential logic of cause and effect.

We can see such effects in many of Turner’s paintings at the 1842 exhibition. Turner experimented with circular, octagonal, and square canvases of proportions reminiscent of lantern slides, on which colours characteristically whirl around a central point, and his images suggest forms of motion—most famously in his later painting Rain, Steam and Speed – The Great Western Railway (1844), but also in the angled column of smoke in Peace, or the vortexes of colours in the two deluge paintings.


If we follow Eagles’s suggestion and consider these as “dissolving views,” then we might consider the two most difficult paintings, Peace and War, as a cyclical binary. The bright sunshine of War melts into the dark clouds of Peace much as magic lantern slides might melt from day to night or summer to winter. The light sources also share a structural unity – that central beam of brightness eviscerating the darkness of Peace is mirrored by the sun’s reflections in War. Thematically, too, there is some unity in the shared representation of the sea, though in War this is a watery shoreline rather than sea proper.

But what about the difficulty of the central subjects? Exiled Napoleon does not so obviously dissolve into David Wilkie’s burial at sea, and any notion of the latter returning back again to the former is more than a little jarring. Perhaps these difficulties are part of the point. They are the difficulties faced by viewers looking for conventional socio-historical links in paintings that, as the metaphorical dissolving view often does, seem to defy such conventions. Turner’s paired paintings demand that the viewer think beyond established norms to reflect upon the meanings of teleological historiographical practice. In pairing Napoleon with the limpet, the human figure of the man is pulled away from the mythology of the emperor.

Turner’s “dissolving views,” notably blurred and indistinct except for their central subjects, seem to be locked in the moment between two images. The space of actual physical objects—the focal point of visible reality and the space of event—is very small in these pictures, confined to thin bands of land that run across the mid-sections of canvas between water and sky. This is true of all of the 1842 exhibition paintings. The space of visible definition and physical solidity is caught between the indistinctions and contradictions of watery reflection and vaporous sky. The vast majority of painted space is given to indistinction, as though the whirling visual chaos around physical phenomena were as important as the phenomena themselves.

Turner might be, as various critics have suggested, drawing attention to the embodied subjectivity of vision, but he is drawing attention, too, to the ambiguity of interpretation. In the paired paintings War and Peace, viewers used to history being “for” something (for understanding progress, nationhood, divine providence, or a multitude of other values) are prevented from resolving the images into a coherent narrative. This is not history as chronology or ideology but as event, to be set within an interpretive framework only by individual observers in full knowledge that such frameworks are not naturally occurring but imposed, and so necessarily reduce events to certain structures and values. This relates to the metaphorical dissolving view in precisely its insistence that visibility does not equate to understanding and that, in the blurry vague expanse around the focussed subject, the flaws and ambiguities in our understanding are rendered visible—great gaps that rupture the certainty of the visible space.

The more certain we, as viewers, are that this is a view of Napoleon on Elba and this is a view of Wilkie’s funeral, the more uncertain we become of our interpretation of the pairing, and the more they converge in contradiction. These two events are juxtaposed so that their meaning and relational dynamic is left more or less open to the viewer’s interpretation. The structures of historical force (of sequence, of continuum) are rendered visible. In the British soldier, for instance, Napoleon’s historical past is made visible, as are his imprisoned present and future. However, these forces are not the main agents of meaning in the images—they are peripheral, there to be identified, but attention is not purposely drawn to them. In this sense, these are extra-historical images which probe and question the history they ostensibly project. As dissolving views, these images do not resolve uncertainty; they generate it. They blur conventional structures and obscure dominant historical relations. They inculcate a historiographical perspective of complex relations that evade the organising structures of cause and effect, of sequential succession, of contradistinction and perspectival clarity.

Jonathan Potter recently completed his first book, Discourses of Vision in Nineteenth-Century Britain. He completed his PhD at the University of Leicester in 2015 and currently teaches at Coventry University. Find him on Twitter at @DrJonPotter.

Graduate Forum: The Radical African American Twentieth Century

This is the third in a series of commentaries in our Graduate Forum on Pathways in Intellectual History, which is running this summer. The first piece was by Andrew Klumpp, the second by Cynthia Houng.

This third piece is by guest contributor Robert Greene II.

“Remember the ladies.” This is a line from Abigail Adams’s famous letter to her husband, John Adams, defending the idea of rights and equality for women. “Remember the ladies,” however, could easily also serve as the defining idea of modern African American intellectual history. Many historians of the African American intellectual tradition have taken great pains to emphasize the importance—indeed, the centrality—of African American women to that intellectual milieu. At the same time, fundamental questions have been raised about not just whom to privilege in this new turn in African American intellectual history, but also what sources are appropriate for intellectual history. Finally, the ways in which the public remembers the past animate newer trends in African American intellectual history. In short, African American intellectual history’s recent historiographic turns offer much food for thought for all intellectual historians.


The field of African American intellectual history has come a long way since the heyday of historians August Meier and Earl E. Thorpe, both prominent in the then-nascent field in the 1960s. Meier’s Negro Thought in America, 1880–1915 and Thorpe’s The Mind of the Negro: An Intellectual History of Afro-Americans were both written in the 1960s and set the standard for African American intellectual history for decades to come. Both books, however, focused heavily on male intellectuals. As such, they set the standard for the field while, along with so much of African American history up until the late 1980s, leaving out the important voices of many African American women.

The rise of historians like Evelyn Brooks Higginbotham in the early 1990s ushered in new ways of understanding the intersection of race and gender through American history. Her book Righteous Discontent (1993) and essay “African-American Women’s History and the Metalanguage of Race” (1992) both provided templates for melding women’s history and African American history into texts that became essential works for understanding the past through viewpoints and sources normally ignored by most male historians.

Today, the field of African American intellectual history has been influenced by the evolution of two related fields: African American women’s history and Black Power studies. Both have attempted to overturn older assumptions about African American history by focusing on previously marginalized sources and historical figures. Much of the recent historiographic trend in African American history—namely, a deeper understanding of Black Nationalism and its relationship to broader ideological currents in both Black America and the African Diaspora—would not have been possible without both a deeper understanding of the importance of gender to African American history and a willingness to expand the definition of who counts as an “important” intellectual “worthy” of study.

In the last year alone, numerous books about the intersection of Black Nationalism and gender have challenged earlier assumptions about the histories of both fields in relation to African American history. Both Keisha Blain’s Set the World On Fire: Black Nationalist Women and the Global Struggle for Freedom (University of Pennsylvania Press, 2018) and Ashley D. Farmer’s Remaking Black Power: How Black Women Transformed an Era (UNC Press, 2017) stretch the period in which historians should locate the origins of Black Power—moving beyond the 1960s-era context to situate Black Power and larger Black Nationalist trends in a long era of resistance and struggle led and strategized by African American women.

Set the World on Fire follows up on other works about the Black Nationalism of the 1920s, arguing that it did not end with Marcus Garvey’s deportation from the United States in 1927. Instead, argues Blain, it was women such as his spouse Amy Jacques Garvey who kept Black Nationalist fervor alive across the United States. Meanwhile, Farmer’s book shows how the ideas of women associated with the Black Power movement of the 1960s owe a great deal to the longer arc of radical black women’s history in the twentieth century—from the agitation of black women within the Communist left of the 1930s and stretching well into the 1970s and 1980s. For Farmer, the history of a radical black nationalism does not end with the collapse of the Black Panther Party in the late 1970s.

Marcus_Garvey_with_Amy_Jacques_Garvey,_1922

Amy Jacques Garvey, with her husband, Marcus Garvey.


WHITE008_500x500

Derrick White’s The Challenge of Blackness: The Institute of the Black World and Political Activism in the 1970s (Gainesville: University Press of Florida, 2011)

Meanwhile, other trends within African American intellectual history point to the utilization of previously ignored or forgotten sources to provide a deeper understanding of the past. Derrick White’s The Challenge of Blackness: The Institute of the Black World and Political Activism in the 1970s (University Press of Florida, 2011) argues for diving deeper into relatively recent African American intellectual history to provide a fuller picture of the post-Civil Rights Movement era. For White, the African American think tank was an important ideological clearing house for not just African Americans, but the broader Left in the 1970s.


A third movement within the field is the study of African American history itself. Pero Dagbovie has led the way in this, writing several key works detailing the rise of African American history over a broad timespan. Works such as African American History Reconsidered (University of Illinois Press, 2010) and The Early Black History Movement (University of Illinois Press, 2007) detail not only historiographic trends in the field, but the ways in which the institutions necessary for the growth of African American history were born and nurtured against the backdrop of Jim Crow segregation.

Finally, the importance of understanding memory has changed the way African American intellectual historians think about the intersection of ideas with public discourse. In reality, much of what these historians mean by “memory” concerns forgetting by the wider public. Books such as Jeanne Theoharis’s A More Beautiful and Terrible History (Beacon Press, 2018) emphasize how much of the American mainstream media—along with most politicians—has been complicit in hiding the deeper, more complicated histories of the Black freedom struggle in the United States.

African American intellectual history offers plenty of new opportunity for scholars interested in linking intellectual history to other sub-fields. African American activists and intellectuals never existed in a vacuum, whether geographic or ideological. They made alliances with a variety of groups and forces, all for the sake of freedom across the African diaspora. The new turns in African American intellectual history reflect this aspect of black history.

Robert Greene II is a Visiting Assistant Professor of History at Claflin University. He studies American intellectual and political history since 1945 and is the book review editor for the Society of US Intellectual Historians.

“Every Man is a Quotation from all his Ancestors:” Ralph Waldo Emerson as a Philosopher of Virtue Ethics

By guest contributor Christopher Porzenheim

Even the smallest display of virtuous conduct immediately inspires us. Simultaneously we: admire the deed, desire to imitate it, and seek to emulate the character of the doer. […] Excellence is a practical stimulus. As soon as it is seen it inspires impulses to reform our character. -Plutarch. [Life of Pericles. 2.2. Trans. Christopher Porzenheim.]

Ralph_Waldo_Emerson_ca1857_retouched

Ralph Waldo Emerson

Ralph Waldo Emerson has been characterized as a transcendentalist, a protopragmatist, a process philosopher, a philosopher of power, and even a moral perfectionist. While Emerson was all of these, I argue he is best understood as a philosopher of social reform and virtue ethics, who combined Ancient Greco-Roman, Indian, and Classical Chinese traditions of social reform and virtue ethics into a form he saw as appropriate for nineteenth-century America.

Reform, of self and society, was the central concern of Emerson’s philosophy. Emerson saw that we as humans are by nature reformers, who should strive to mimic the natural and spontaneous processes of nature in our reform efforts. As he put it in one of his earliest published essays, Man the Reformer (1841):

What is a man born for but to be a Reformer, a Remaker of what man has made; a renouncer of lies; a restorer of truth and good, imitating that great Nature which embosoms us all[?]

Reforming oneself with models of moral and religious heroes from the past, and, through one’s own example, reforming others and eventually society itself, was the idea at the center of Emerson’s philosophy. He would often echo the virtue ethicist Confucius’s (551–479 BCE) advice that “When you see someone who is worthy, concentrate on becoming their equal; when you see someone who is unworthy, use this as an opportunity to look within yourself [for similar vices].” [A.4.17.]

For example, in the essay History (1841), Emerson wrote that “there is properly no history; only biography” and argued that this “biography” exists to reveal the virtues and vices of exceptional individuals’ character:

So all that is said of the wise man by Stoic, or oriental or modern essayist, describes to each reader his own idea, describes his unattained but attainable self. All literature writes the character of the wise man. […]  A true aspirant, therefore, never needs look for allusions personal and laudatory in discourse. He hears the commendation, not of himself, but more sweet, of that character he seeks, in every word that is said concerning character[.]

For Emerson, the task of all literature and history was offering people enjoyable and memorable examples of virtue and vice by which to pattern their own character, relationships, and lives. “The student is to read history, actively and not passively; to esteem his own life the text, and books the commentary.” History is a biography of our own potential character.

The logical result of these beliefs was Emerson’s later work, Representative Men (1850), a collection of essays which provided biographies of “wise men,” “geniuses,” and “reformers,” each illustrating certain virtues and vices for his readers to learn from.

Plato, for example, represented to Emerson the virtues and vices of a character shaped by philosophy; Swedenborg, those of the mystic; Montaigne, the skeptic; Shakespeare, the poet; Napoleon, the man of the world; and finally Goethe, the writer.

Representative Men was in part a direct response to Emerson’s friend Thomas Carlyle’s On Heroes, Hero-Worship, and The Heroic in History (1841). But both men’s works shared a common ancestor well known to their contemporaries: Plutarch’s Parallel Lives.

plutarch_of_chaeronea-03.jpg

A bust of Plutarch in his hometown of Chaeronea, Greece

Plutarch (46–120 CE), a Greco-Roman biographer, essayist, and virtue ethicist deeply influenced by Platonic, Aristotelian, Stoic, and Epicurean philosophy, wrote a collection of biographies (now usually called The Lives) and a collection of essays (The Morals), both of which would serve as models for Emerson’s work.

Plutarch’s Lives come down to us as a collection of 50 surviving biographies. Typically, in each, the fate and character of one exceptional Greek individual is compared with those of one exceptional Roman individual. In doing so, as Hugh Liebert argues, Plutarch was showing Greek and Roman citizens how they could play a role in shaping first themselves and, through their own example, the Roman world. In an era that perceived itself as modern, chaotic, and adrift from the past, Plutarch showed his readers how they could become like the heroes of old by imitating their virtuous patterns of conduct.

Plutarch’s Lives provoke moral questioning about character without moralizing. They give us a shared set of stories, some might say myths, by which we can measure ourselves and each other. They show, in memorable stories and anecdotes, what is (and is not) worth admiring: virtues and vices.

We might, for example, admire Alexander the Great’s superhuman courage. But what of the time he “resolved” a conflict between his best friends by swearing to kill the one that started their next disagreement? Or, even worse, what of when he executed Parmenion, one of his oldest friends? The Lives are not hagiographies.

Instead, they are mirrors for moral self-cultivation. For Plutarch, the “mirror” of history delights and instructs. It reflects the good and bad parts of ourselves in the heroes and villains of the past. The Lives are designed as tools to help reform our character. They help us see who we are and could become because they portray the faces of virtue and vice, as Plutarch put it at the start of his biography of Alexander the Great:

I do not aim to write narratives of events, but biographies. For rarely do a person’s most famous exploits reveal clear examples of their virtue and vice. Character is less visible in: the fights with countless corpses, the greatest military tactics, and the consequential sieges of cities. More often a person’s character shows itself in the small things: the way they casually speak to others, play games, and amuse themselves.

I leave to other historians the grand exploits and struggles of each of my subjects – just as a painter of portraits leaves out the details on every part of his subject’s body. Their work focuses upon the face. In particular, the expression of the eyes. Since this is where character is most visible. In the same way my biographies, like portraits, aim to illuminate the signs of the soul. (Life of Alexander. 1.2-1.3. Trans. Christopher Porzenheim)

Confucius_Humblot

Eighteenth-century European depiction of Confucius

Emerson was in firm agreement with Plutarch about the relationship between our everyday conduct, virtue, and character. In Self-Reliance (1841), he wrote that “Character teaches above our wills. Men imagine that they communicate their virtue or vice only by overt actions, and do not see that virtue or vice emit a breath every moment.” This idea is axiomatic for Emerson. Hence, in his essay Spiritual Laws (1841), he quotes Confucius’s claim: “Look at the means a man employs, observe the basis from which he acts, and discover where it is that he feels at ease. Where can he [his character] hide? Where can he [his character] hide?” [A.2.10] For Plutarch and Emerson, our character is revealed in the embodied way we act every moment, in the way we relate to others—in our spontaneous manners, etiquette, or lack thereof.

As Emerson’s approval of Confucius suggests, Plutarch’s Lives and Greco-Roman philosophy in general were merely one great influence on Emerson’s ideals of self and societal reform. It is to these other influences, from Confucian philosophy in particular, that we will turn in a subsequent post, in order to clarify Emerson’s philosophy of virtue ethics and social reform.

Christopher Porzenheim is a writer. He is currently interested in the legacy of Greco-Roman and Classical Chinese philosophy, in particular the figures of Socrates and Confucius as models for personal emulation. He completed his B.A. at Hampshire College studying “Gilgamesh & Aristotle: Friendship in the Epic and Philosophical Traditions.” When in doubt he usually opens up a copy of the Analects or the Meditations for guidance. See more of his work here.