Disappearing Spaces: Mapping Egypt’s Deserts across the Colonial Divide

by guest contributor Chloe Bordewich

In October 2016, government and opposition lawyers met in one of Egypt’s highest courts to battle over the fate of two tiny Red Sea islands, Tiran and Sanafir. When President Abdel Fattah al-Sisi suddenly announced the sale of the islands to Saudi Arabia earlier that year, pent-up rage toward the military regime that took power in 2013 poured out through a rare valve of permissible opposition: a legal case against the deal was allowed to proceed.

In this fight, both sides’ weapon of choice was the map. Reports circulated that state institutions had received orders to destroy maps indicating that the islands in question were historically Egyptian. In response, the government paraded old atlases before the court to prove, it argued, that the sale would merely formalize the islands’ longstanding de facto Saudi ownership. Enclosed in one atlas, published by the Egyptian Geographical Society in 1928, were several maps on which the islands were shaded the color of Saudi Arabia. Khaled Ali, the lead opposition lawyer, pointed out that Saudi Arabia did not exist in 1928. The duel of maps nevertheless continued over several sessions, with Ali’s team accusing the state of intentional obstruction, obfuscation, and blatant fabrication, and the state denouncing some of Ali’s maps as inherently suspect because he had obtained them abroad.

This case drew public attention to the fraught modern history of the nation’s cartography. The Map Room of the Egyptian Survey Authority (Maslahat al-Misaha), where Ali and his associates had first gone looking for evidence, has made and sold maps of Egypt since 1898. Today it abuts the Giza Security Directorate, a mid-century fortress shielded by blocks of concrete barricades and checkpoints. Though the two may be accidental neighbors, their proximity conveys a dictum of the contemporary Egyptian state: maps are full of dangerous secrets.

How does a secret become a secret? In the case of Egypt’s maps, the answer is tangled up in the country’s protracted decolonization. An (almost) blank page tells some of that story. The map in question is of a stretch of land near Siwa Oasis on the edge of the Western Desert, far from the rocky islands of Tiran and Sanafir. Printed in 1930, it features only minor topographical contours in the lower left-hand corner. The rest is white. “Ghayr mamsūḥ,” the small Arabic print reads. “Unsurveyed.”

Siwa Map

This blank space is an artifact of the Desert Survey, a special division of the Survey Authority tasked between 1920 and 1937 with mapping Egypt’s sandy, sparsely populated expanses. (The Western Desert alone comprises more than half of Egypt’s land area beyond the Nile, but is home to a population only one-thirtieth that of greater Cairo.) The Desert Survey’s lifespan coincided almost exactly with the gradual retreat of British officials from the everyday administration of Egypt: it was born just after the 1919 revolution against British rule and dissolved a year after the Anglo-Egyptian Treaty that ostensibly formalized an end to occupation. Here decolonization is thus meant not in the comprehensive sense that Ngugi wa Thiong’o and Achille Mbembe have written about, as a cultural and intellectual challenge to Western thought’s insidiously deep claims to universality, but something much more literal: the withdrawal of colonial officials from the knowledge-producing institutions they ran in the colony.

The British cartographers who led the Desert Survey were keenly aware of their impending departure. As they prepared for it, they erected a final obstacle that would leave behind a legacy of paralysis and cartographic secrecy. Repeatedly accusing Egyptians of apathy toward the desert, colonial officials parlayed nescience into ignorance. In doing so, they sowed the seeds of an enduring anxiety among Egyptians over crucial spaces that remained unmapped.

The strident whiteness of the 1930 Siwa map looked different to the receding colonial state than it did to the emergent postcolonial one. To John Ball and George Murray, the successive British directors of the Desert Survey, blank space marked an incomplete but wholly completable project. For the post-colonial Egyptian state, the same blankness was the relic of a project it could not or would not complete, the bitter hangover of projected ignorance.

The Desert Survey’s vision was already more than two decades in the making when Ball was granted his own division of the Survey Department, an agency of more than 4,000 members, in 1920. In 1898, colonial officials had commenced the ambitious cadastral survey that would eventually produce the Great Land Map of Egypt—the subject of Timothy Mitchell’s noted essay—and Ball had departed for the oases of the Western Desert with his partner, Hugh Beadnell, to search for lucrative mineral deposits. From that point forward, the Survey Department viewed each unit’s work as a step toward total knowledge of every square meter of Egypt in every form and on every scale. It was only a matter of time, officials firmly believed, until the grid they had created at the Survey’s birth was filled in.

During the first decade of the twentieth century, the Survey Authority’s ambitions were directed inward: its cartographers wanted to know what was inside Egypt’s borders, even as sections of those borders remained fuzzy. But the First World War crystallized a broader imperial vision that linked Egypt’s Western and Eastern deserts to Jordan, Syria, and Iraq and saw the management of smugglers, nomads, geology, and development as related challenges directly correlated to the resilience of British rule (Fletcher, British Imperialism and “the Tribal Question”: Desert Administration and Nomadic Societies in the Middle East, 1919-1936; Ellis, Desert Borderland: The Making of Modern Egypt and Libya).

From the moment of the Desert Survey’s founding in 1920, and at an accelerating pitch over the 17 years that followed, British colonial officials justified their continued control of the Desert Survey even as most other institutions changed hands. One way they did this was by depicting survey work as a vocation and not merely a job. As Desert Survey chief John Ball wrote in 1930:

I shall try to keep our little show going and my little band of British surveyors at work in the deserts, but am not sure of success, as it has been said that Egyptians could do it equally well and we are therefore superfluous… but I have used Egyptian effendis in the deserts and I know them. Here and there you can find one really interested in his work, but 999 out of 1000 think only of promotion and pay, and you can’t manage… intrigue when you are hundreds of miles out in the wilderness.

(Royal Geographical Society CB9/Ball, John, letter to A.R. Hinks, February 28, 1930)

Not only did the Egyptian surveyors not know how to do the work the British experts were doing, Ball implied, but they did not want to know. If they did, it was for reasons that were crassly utilitarian by British standards.

Not having the proper expertise was an issue that could be resolved by more training. But by casting the fundamental issue as one of indifference—of not wanting to know, or wanting to know only for wrong, illogical reasons—officials like Ball were implying that even providing more training would not close the gap. Consequently, officials concluded, they would have to remain until they had shaded in the last empty expanses on the desert grid.

Survey authorities, in tandem with their associates in the colonial Frontier Districts Administration, thus articulated a position that held certain kinds of not-knowing to be acceptable, even desirable, and others to signal ignorance. In late 1924, Survey Director John Ball updated the members of the Cairo Scientific Society on the Survey’s progress in mapping the unknown regions of the desert. He reveled in the “gasps” his statistics elicited from an audience shocked at how much remained unknown (RGS CB9/Ball, John, letter to A.R. Hinks, December 19, 1924). Though he smugly reassured them that the unknown would soon vanish, a report published on the eve of independence in 1952 revealed that 43 percent of Egypt had by then been professionally surveyed, 24 percent was roughly known from reconnaissance, and 33 percent was still unknown. All that remained lay in the far Western Desert (George Murray, “The Work in the Desert of the Survey of Egypt,” Extrait du Bulletin de l’Institut Fouad Ier du Desert 2(2): July 1952, 133).

The desert did not disappear after 1952, of course; it came to occupy a central place in development dreams of the Nasser era, dreams that subsequent leaders have revived repeatedly. But the various incarnations of the project, aimed at facilitating the administration of economic development zones, had little in common with the colonial quantification—fetishization, even—of the unknown.

Maps articulate uncertainty more viscerally than the many other paper documents that similarly elude researchers and the public. The result is that vast spaces of the nation still reside primarily in foreign archives—the UK National Archives, the Royal Geographical Society in London, the Institut Français d’Archéologie Orientale—where even the parties to the Tiran and Sanafir case turned for evidence. The obfuscation that drives us to these archives is not a product only of contemporary authoritarian politics, however; it, too, has a history. The projection of ignorance left a scar, an anxiety that can only be read through its shadows in colonial archives and its conspicuous absence in postcolonial archives.

Chloe Bordewich is a PhD Student in History and Middle Eastern Studies at Harvard University. She currently works on histories of information, secrecy, and scientific knowledge in the late and post-Ottoman Arab world, especially Egypt. She blogs at chloebordewich.wordpress.com.

The Pricing of Progress: Podcast interview with Eli Cook

By Contributing Editor Simon Brown

In this podcast, I’m speaking with Eli Cook, assistant professor of history at the University of Haifa, about his new book, The Pricing of Progress: Economic Indicators and the Capitalization of American Life (Harvard University Press, 2017). The book has been honored with the Morris D. Forkosch Book Prize from the Journal of the History of Ideas for the best first book in intellectual history, and with the Annual Book Prize of the Society for US Intellectual History for the best book in that field.


In The Pricing of Progress, Cook tells the story of how American businessmen, social reformers, politicians, and labor unions came to measure progress and advocate policy in the language of projected monetary gains at the expense of other competing standards. He begins this account with the market for land in seventeenth-century England, and moves across the Atlantic to explain how plantation slavery, westward expansion, and the Civil War helped lead Americans to conceive of their country and its people as potential investments with measurable prices even before the advent of GDP in the twentieth century. He traces an intellectual history that leads the reader through the economic theories of thinkers like William Petty, Alexander Hamilton, and Irving Fisher on the one hand, and quotidian texts like household account books, business periodicals and price indices on the other. Throughout, he shows how the rise of capitalism brought with it the monetary valuation of not only land, labor and technology, but of everyday life itself.   

John Parkinson and the Rise of Botany in the 17th Century

By Guest Contributor Molly Nebiolo


John Parkinson, depicted in his monumental Theatrum botanicum (1640).

The roots of contemporary botany have been traced back to the botanical systems laid out by Linnaeus in the eighteenth century. Yet going further back in time reveals some of the key figures who created the first ideas and publications that brought horticulture forward as a science. John Parkinson (1567-1650) is one of the foremost in that community of scientists. Although “scientist” was a word coined in the nineteenth century, I will be using it because it embodies the systematic acts of observation and experimentation to understand how nature works that I take Parkinson to be exploring. While “natural philosophy” was the term more commonly in use at the time, the simple word “science” will be used for the brevity of the piece and to stress the links between Parkinson’s efforts and contemporary fields. Parkinson’s works on plants and gardening in England remained integral to botany, herbalism, and medicinal healing for decades after his death, and he was one of the first significant botanists to introduce exotic flowers into England in the 17th century to study their healing properties. He was a true innovator for the field of botany, yet his work has not been heavily analyzed in the literature on the early modern history of science. The purpose of this post is to underline some of the achievements that can be attributed to Parkinson, and to examine his first major text, Paradisi in sole paradisus terrestris, a groundbreaking botanical work of the mid-1600s.

Parkinson was apprenticed to an apothecary from the age of fourteen, and quickly rose in the ranks of society to the point of becoming royal apothecary to James I. His success resulted in many opportunities to collect plants outside of England, including trips to the Iberian Peninsula and northern Africa in the first decade of the seventeenth century. At the turn of the seventeenth century, collectors would commonly accompany trading expeditions to gather botanical specimens and determine whether they could prosper in the English climate. Being the first to grow the great Spanish daffodil in England, and cultivating over four hundred plants in his own garden by the end of his life, Parkinson was looked up to as a pioneer in the nascent field of botanical science. He assisted fellow botanists in their own work, was a founding member of the Worshipful Society of Apothecaries, and authored two major texts.

His first book, Paradisi in sole paradisus terrestris (Park-in-Sun’s Terrestrial Paradise), reveals a humorous side to Parkinson, as the title puns on his surname: “Park-in-Sun.” This text, published in 1628, and his second, more famous work, Theatrum botanicum (The Theater of Plants, 1640), were both immensely influential to the horticultural and botanical corpora that were emerging during the first half of the 17th century. From the titles alone, we can see how much reverence Parkinson had for the intersecting fields he worked in: horticulture, botany, and medicine. By titling his second book The Theater of Plants, he creates a vivid picture of how he perceived gardens. Referencing the commonly used metaphor of the theater of the world, Parkinson casts plants as the actors in the garden’s theatrum. It is also in Theatrum botanicum that Parkinson details the medicinal uses of hundreds of plants that make up simple (medicinal) gardens in England. While both texts are rich for analysis, I want to turn attention specifically to Paradisus terrestris because I think it is a strong example of how botany and gardening were evolving into a new form of science in Europe during the seventeenth century.


Title page woodcut image for Paradisus Terrestris. Image courtesy of the College of Physicians Medical Library, Philadelphia, PA.

The folio pages of Paradisus terrestris are as large and foreboding as those of any early modern edition of the Bible. Chock-full of thousands of detailed notes on the origins, appearance, and medical and social uses of plants for pleasure gardens, kitchen gardens, and orchards, the volume makes one wonder how long it took Parkinson to collect this information. Paradisus terrestris was one of the first real attempts by a botanist to organize plants into what we would now term genera and species. This encyclopedia of meticulously detailed, illustrated, and grouped plants was a new way of displaying horticultural and botanical information when it was first published. While it was not the first groundbreaking example of the science behind gardens and plants in western society (Luca Ghini’s work is potentially the first), Parkinson’s reputation and his network of botanist friends and the Worshipful Society of Apothecaries bridged the separation between the two fields. Over the course of the century, the medicinal properties of plants circulated coherently in comprehensive texts like Parkinson’s as the Scientific Revolution and the colonization of the New World steadily increased access to new specimens and the tools to study them.

 


Paradisus terrestris includes many woodcut images of the flowers Parkinson writes about to help the reader better study and identify them. Image courtesy of the Linda Hall Library, Kansas City, MO.

Another thing to note in Paradisus terrestris is the way Parkinson writes about plants in the introduction. While most of the book is a how-to narrative on growing a pleasure garden, kitchen garden, or orchard, the preface to the volume reveals much about Parkinson as a botanist. Gardens to Parkinson are integral to life; they are necessary “for Meat or Medicine, for Use or for Delight” (2). The symbiotic relationship between humans and plants recurs in his discussion of how gardens should be situated in relation to the house, and of how minute details in the way a person interacts with a garden space can affect the plants. “The fairer and larger your allies [sic] and walks be the more grace your Garden shall have, the lesse [sic] harm the herbs and flowers shall receive…and the better shall your Weeders cleanse both the beds and the allies” (4). The preface divulges the level of respect and adoration Parkinson has towards plants. It illustrates the deep enthusiasm and curiosity he brought to the field, two features of a botanist that seemed synonymous with the natural philosophers and collectors of the time.

John Parkinson was one of the first figures in England to merge the formalized study of plants with horticulture and medicine. Although herbs and plants had been used as medicines for thousands of years, it was in the first half of the seventeenth century that the medicinal uses of plants became a scientific attribute of the plant, as they were categorized and defined in texts like Paradisi in sole paradisus terrestris and Theatrum botanicum. Parkinson is a strong example of the way a collector’s mind worked in the early modern period, in the way he titled his texts and in the adoration that can be felt when reading the introduction of Paradisus terrestris. From explorer to collector, horticulturist, botanist, and apothecary, the many hats Parkinson wore throughout his professional career, and the way he wove them together, exemplify the lives many of these early scientists lived as they brought about the rise of the new sciences.

Molly Nebiolo is a PhD student in History at Northeastern University. Her research covers early modern science and medicine in North America and the Atlantic world and she is completing a Certificate in Digital Humanities. She also writes posts for the Medical Health and Humanities blog at Columbia University.

Trinquier’s Dichotomy: Adding Ideology to Counterinsurgency

By guest contributor Robert Koch

After two world wars, the financial and ideological underpinnings of European colonial domination were bankrupt. Yet European governments responded to aspirations for national self-determination with undefined promises of eventual decolonization. Guerrilla insurgencies backed by clandestine organizations were one result. By 1954, new nation-states in China, North Korea, and North Vietnam had adopted socialist development models, perturbing the Cold War’s balance of power. Western leaders turned to counterinsurgency (COIN) to confront national liberation movements. In doing so, they recast the motives that had driven colonization as a defense of their domination over faraway nations.

COIN is a type of military campaign designed to maintain social control, or “the unconditional support of the people,” while destroying clandestine organizations that use local populations as camouflage, thus sustaining political power (Trinquier, Modern Warfare, 8). It is characterized by a different mission set than conventional warfare. Operations typically occur amidst civilian populations. Simply carpet-bombing cities (or even rural areas, as seen in the Vietnam War) over an extended period of time results in heavy collateral damage that strips governments of popular support and, eventually, political power. The more covert, surgical nature of COIN means that careful justifying rhetoric can still be called upon to mitigate the ensuing political damage.

Vietnam was central to the saga of decolonization. The Viet Minh, communist cadres leading peasant guerrillas, won popular support to defeat France in the First Indochina War (1945-1954) and the United States in the Second (1955-1975), consolidating their nation-state. French leaders, already sour from defeat in World War II, took their loss in Indochina poorly. Some among them saw it as the onset of a global struggle against communism (Paret, French Revolutionary Warfare, 25-29; Horne, Savage War for Peace, 168; Evans, Algeria: France’s Undeclared War, Part 2, 38-39). Despite Vietnam’s centrality, it was in “France,” that is, colonial French Algeria, that ideological significance was given to the tactical procedures of COIN. French Colonel Roger Trinquier added this component while fighting for the French “forces of order” in the Algerian War (1954-1962) (Trinquier, Modern Warfare, 19). Trinquier’s ideological contribution linked the West’s “civilizing mission” with enduring imperialism.

In his 1961 thesis on COIN, Modern Warfare, Trinquier offered moral justification for harsh military applications of strict social control (a job typically reserved for police), and therefore for the violence that ensued. The associated use of propaganda, characterized by a dichotomizing rhetoric meant to mitigate political fallout, proved a useful addition to the counterinsurgent’s repertoire. The book, essentially a modern imperialist justification for military violence, was translated into English, Spanish, and Portuguese, and remains popular among Western militaries.

Trinquier’s experiences before Algeria influenced his theorizing. In 1934, as a lieutenant in Chi-Ma, Vietnam, he learned the significance of local support while pursuing opium smugglers in the region known as “One Hundred Thousand Mountains” (Bernard Fall in Trinquier, Modern Warfare, x). After the Viet Minh began their liberation struggle, Trinquier led the “Berets Rouges” Colonial Parachutists Battalion in counterguerrilla operations. He later commanded the Composite Airborne Commando Group (GCMA), executing guerrilla operations in zones under Viet Minh control. This French-led indigenous force grew to include 20,000 maquis (rural guerrillas) and had a profound impact on the war (Trinquier, Indochina Underground, 167). Though France would lose its colony, Trinquier had learned effective techniques for countering clandestine enemies.

Upon evacuating Indochina in 1954, France immediately deployed its paratroopers to fight a nationalist liberation insurgency mobilizing in Algeria. Determined to avoid another loss, Trinquier (among others) sought to apply the lessons of Indochina against the Algerian guerrillas’ Front de Libération Nationale (FLN). He argued that conventional war, which emphasized controlling strategic terrain, had been supplanted. Trinquier believed adjusting to “modern warfare” required four key reconceptualizations: of the battlefield, of the enemy, of how to fight that enemy, and of the repercussions of failure. He contended that warfare had become “an interlocking system of action – political, economic, psychological, military,” and that the people themselves were now the battleground (Trinquier, Modern Warfare, 6-8).

Trinquier prioritized winning popular support, and to achieve this he blurred insurgent motivations by lumping guerrillas under the umbrella term “terrorist.” Linking the FLN to a global conspiracy guided by Moscow was a helpful claim in the Cold War, and a frequent one in the French military, but this gimmick was actually of secondary importance to Trinquier. When he did mention communism, it was not as the guerrillas’ guiding light but in the sense that communist parties, many of which publicly advocated democratic means to political power, had been compromised. The FLN was in fact a mainly nationalist organization that shunned communists, especially in leadership positions, something Trinquier would have known as a military intelligence chief (Horne, Savage War for Peace, 138, 405). In Modern Warfare, although he accepted the claim that the FLN was communist, he used the word “communist” only four times (Trinquier, Modern Warfare, 13, 24, 59, 98). The true threat was the “terrorist,” a term used thirty times (Trinquier, Modern Warfare, 8, 14, 16-25, 27, 29, 34, 36, 43-5, 47-49, 52, 56, 62, 70, 72, 80, 100, 103-104, 109). The FLN did terrorize Muslims to compel support (Evans, Algeria: France’s Undeclared War, Part 2, 30). Yet obscuring the FLN’s cause by labeling its members terrorists complicated consideration of their more relatable aspirations for self-determination. Even “atheist communists” acted in hopes of improving the human condition. The terrorist was different: no civilized person could support the terrorist.

Trinquier’s careful wording reflects his strategic approach and gives COIN rhetoric greater adaptability. His problem was not any particular ideology, but “terrorists.” Conversely, he called counterinsurgents the “forces of order” twenty times (Trinquier, Modern Warfare, 19, 21, 32-33, 35, 38, 44, 48, 50, 52, 58, 66, 71, 73, 87, 100). A dichotomy was created: people could choose terror or order. Having crafted an effective dichotomy, Trinquier addressed the stakes of “modern warfare.”

The counterinsurgent’s mission was no less than the defense of civilization. Failure to adapt as required, however distasteful it might feel, would mean “sacrificing defenseless populations to unscrupulous enemies” (Trinquier, Modern Warfare, 5). Trinquier evoked the Battle of Agincourt in 1415 to demonstrate the consequences of such a failure: French knights were dealt a crushing defeat after taking a moral stand and refusing to sink to the level of the English and their unchivalrous longbows. He concluded that if “our army refused to employ all the weapons of modern warfare… national independence, the civilization we hold dear, our very freedom would probably perish” (Trinquier, Modern Warfare, 115). His “weapons” included torture, death squads, and the secret disposal of bodies – “dirty war” tactics that hardly represent “civilization” (Aussaresses, Battle of the Casbah, 21-22; YouTube, “Escuadrones de la muerte. La escuela francesa,” 8:38-9:38). Trinquier was honest and consistent about this, defending dirty-war tactics years afterward on public television (YouTube, “Colonel Roger Trinquier et Yacef Saadi sur la bataille d’Alger. 12 06 1970”). Momentary lapses of civility were acceptable if they meant defending civilization, whether by adopting the enemy’s longbow or his terrorist methods, to even the dynamics of the “modern” battlefield.

Trinquier’s true aim was preserving colonial domination, which had always been based on the possession of superior martial power. In order to blur distinctions between nationalists and communists, he linked any insurgency to a Soviet plot. Trinquier warned of the loss of individual freedom and political independence. The West, he warned, was being slowly absorbed by socialist—terrorist—insurgencies. Western Civilization would be doomed if it did not act against the monolithic threat.  His dichotomy justifies using any means to achieve the true end – sustaining local power. It is also exportable.

Trinquier’s reconfiguration of imperialist logic gave the phenomenon of imperialism new life. Its intellectual genealogy stretches back to the French mission civilisatrice. In the Age of Empire (1850-1914), European colonialism violently subjugated millions while claiming European tutelage could tame and civilize “savages” and “semi-savages.” During postwar decolonization, fresh off defeat in Indochina and facing the FLN, Trinquier modified this justification. The “civilizing” mission of “helping” became a defense of (lands taken by) the “civilized,” while insurgencies epitomized indigenous “savagery.”

The vagueness Trinquier ascribed to the “terrorist” enemy and his rearticulation of imperialist logic had unforeseeable longevity. What are “terrorists” in the postcolonial world but “savages” with modern weapons? His dichotomizing polemic continues to be useful to justify COIN, the enforcer of Western imperialism. This is evident in Iraq and Afghanistan, two countries that rejected Western demands and were subsequently invaded, as well as COIN operations in the Philippines and across Africa, places more peripheral to the public’s attention. Western counterinsurgents almost invariably hunt “terrorists” in a de facto defense of the “civilized.” We must carefully consider how rhetoric is used to justify violence, and perhaps how this logic shapes the kinds of violence employed. Trinquier’s ideas and name remain in the US Army’s COIN manual, linking US efforts to the imperialist ambitions behind the mission civilisatrice (US Army, “Annotated Bibliography,” Field Manual 3-24 Counterinsurgency, 2).

Robert Koch is a Ph.D. candidate in history at the University of South Florida.

From our editors: What we’re reading this month (2/2)

Picasso, Femme Couchee lisant (1960)

Disha:

Beyond Settler Time: Temporal Sovereignty and Indigenous Self-Determination by Mark Rifkin is a work of political and literary theory that re-interprets the axes and language of past and present as experienced by settlers and Native peoples in the Americas. Writing outside of a binary that forces a choice between casting indigenous peoples as keepers of the past and casting them as necessarily co-eval with Europeans, Rifkin draws from philosophy, queer theory, and postcolonial theory to interpret texts and the experiences they bring to bear on a notion of “settler time,” a concept he uses to draw out the stakes of thinking time along with politics, namely sovereignty and the temporal and spatial aspects of self-determination. I’m starting to work through this text as part of my foraging for helpful interpretations of political freedom, and it stands as another affirmation of the complicated relationship intellectual histories have to texts, which are so differently deployed in adjacent disciplines. (Rifkin is the Director of the Women’s and Gender Studies Program and Professor of English at the University of North Carolina, Greensboro.)

A Specter Haunting Europe: The Myth of Judeo-Bolshevism by Paul Hanebrink details the rise of a paranoia at the beginning of the twentieth century that crystallized into one facet of a deadly ideology, and remained grafted onto each vision of white supremacy that came afterwards — including the one that persists today in Europe and North America. The myth that Jewish masterminds cooked up Communism to ruin Europe and take control of the world began, in Hanebrink’s telling, in the counterrevolutionary currents of the interwar period. He draws a line through the myth’s Cold War adaptation to today’s racist hand-wringing over Islam’s so-called global designs, which often co-exists with its anti-Semitic ancestor. After a string of white supremacist attacks in the past weeks, and the direct line drawn by the Pittsburgh shooter from his hatred of Jewish people to the Hebrew Immigrant Aid Society, intellectual histories that connect the century-long entanglements of such strands seem like necessary (if also incredibly grim) reading.

On a lighter note, as I wind down my bi-annual re-read of Barbara Pym’s 1952 novel Excellent Women to the tune of some not insignificant fines from the local library, I’d also like to recommend it here. It details the life of a single woman in her thirties living in a London parish after the Second World War, and her chagrined and slightly titillated forays into the personal lives of her new neighbours, including an ex-Naval officer, a clergyman’s attractive widow, and a woman anthropologist (!). As the days become shorter and the impulse to eat dinner comes earlier and earlier during the workday, the attention to the small joys and indignities of being a person in Pym’s novels remains a welcome dose of comedy. Daniel Ortberg observes as much in his compilation of the most emotionally muted meals that appear in Excellent Women. Highly recommend. Please let’s talk about it. I’m going to have the Princeton Public Library’s copy out for another week, tops.

 

Cynthia:

Certain objects seem to perform a kind of magic upon the beholder–time doubles back on itself, and past and present somehow fold into one. The most famous example, of course, is the Proustian madeleine. In Remembrance of Things Past, a whole host of objects play this role–of both signalling a specific moment in history and blurring the boundaries of the beholder’s present, so that multiple temporalities crowd together and become one. Fashion, for Proust, is capable of casting that particular magic. In the final volume of Remembrance, “Time Regained,” the narrator yokes the year 1916 to this specific image: “As if by the germination of a tiny quantity of yeast, apparently of spontaneous generation, young women now went about all day with tall cylindrical turbans on their heads, as a contemporary of Mme. Tallien’s might have done, and from a sense of patriotic duty wore Egyptian tunics, straight and dark and very ‘war,’ over very short skirts; they wore thonged footwear recalling the buskin as worn by Talma, or else long gaiters recalling those of our dear boys at the front…” The passage continues for a while, describing the vogue for “rings or bracelets made out of fragments of exploded shells or copper bands from 75 millimetre ammunition,” and the decision to wear bonnets of “white English crepe” in lieu of traditional mourning attire. But the narrator is not writing this in 1916. The fashion of 1916, so current in 1916, now serves also to distance the narrator’s own moment from the one where young women wore tall cylindrical turbans and spoke of “our dear boys at the front.” The clarity of the memory, the exactness of each detail, serves to confirm the pastness of the past.

*

Lately, I’ve been thinking about this magical quality of fashion while scrolling through the artist Guadalupe Rosales’s two intertwined projects, Veteranas and Rucas and Map Pointz. Rosales describes these projects as “digital archives found on Instagram.” Veteranas and Rucas came first, in 2015. The New York Times described the Veteranas and Rucas digital archive as “an Instagram feed dedicated to Latina youth culture in Southern California, mainly from the ’80s and ’90s, but sometimes dating back much earlier.” Map Pointz, dedicated to the SoCal ’90s party crew and rave scene, came a little bit later, in 2016. Both archives are largely crowdsourced. These two intertwined archives serve as both autobiography and history. “I was born in California in 1980, daughter of two Mexican parents,” writes Rosales, “I grew up on Los Angeles’ Eastside and lived in a house that faced Whittier Blvd. That is when I realized how rich my culture was and was not what we see in movies or in the television. From the age of 14-17, I was part of the party crew scene- a subculture organized by and for the youth in a time when many of my friends and relatives were in gangs. The gatherings occurred on the weekends and some weekdays in residential backyards and industrial warehouses throughout Los Angeles. Like most youth subcultures, music played a key role – we listened Techno, House and New Wave. Then on Sundays we cruised down the boulevard while bumping some oldies and freestyle. The Boulevard was a place where boys and girls met and exchanged telephone numbers.”

These two digital archives came into being because Rosales was hungry for a way to connect to her history, her past: “I focused my research on the Los Angeles youth cultures in hopes of finding a deeper identity. If I Google searched my experiences as a teenager, what would I look for and how would I describe myself and those experiences? Someone who lived in Los Angeles and in the midst of gang violence, the Los Angeles riots and numerous protests. I wanted to read and look at images the brown youth on the dance floor and backyard parties, cruising the boulevard or anything that had documented the (sub)culture that existed in the midst of violence, unfortunately I wasn’t finding anything. With very little success, I started an Instagram feed, titled Veteranas and Rucas and posted photos from my own personal collection as reference. Within a week of my initial posting, people began to submit their own photos through email and messaging them through Instagram.”

*

The rise of material culture studies in the 1980s and 1990s helped shift our concepts of archives. Suddenly historians wanted to write about posters, or embroidery samplers, or military parkas. Any set of objects could be an archive.

The Internet opened up the archive further–more users, more stories, more material, more access, more of everything. In some ways, the Internet itself is one vast archive. The power of Rosales’s crowdsourced Instagram archives lies in their ability to evoke–and capture–emotion. And they are not cordoned off from everyday life. For the moment, Instagram is a platform that is fully integrated into the fabric of quotidian life. Which also means that I cannot easily “forget” these faces, these histories. They show up on my phone screen, they speak to me, intimate as family, their images and stories cradled in my palm.

 

Pranav:

Though reading about Isaac Newton’s theological views is not exactly my idea of a good time, I recently found myself digging deep into the subject. I am working as a teaching fellow at Yale Divinity School this semester and had to give a lecture to my class on the ‘scientific revolution.’ To better explain some of the historiographical problems associated with early modern science, I was on the lookout for a case study which I could introduce towards the end of the lecture as a way of summing up some of my main arguments. While preparing the lecture, Newton came to mind. I suspected that, while the students would know quite a bit about Newton’s work on calculus and optics, his theological views, especially his ardent anti-trinitarianism, might come as a surprise.

To get a better grasp of some of Newton’s basic theological positions, I picked up Rob Iliffe’s new book Priest of Nature: The Religious Worlds of Isaac Newton, and it was nothing short of a revelation. Iliffe, who currently holds a chair in the history of science at Oxford, has been working on Newton’s religious views for over three decades now, and the book clearly shows his impressive learning and incisive thinking on the topic (Iliffe is also the co-editor of the Newton papers project, which has compiled and transcribed an astonishing number of Newton’s manuscripts scattered in collections across the UK and Israel).

Iliffe’s claim is simple: Newton, he argues, was a deeply devout man who took his religious thinking and theological research as seriously as his ‘scientific’ work. Though this may hardly count as a path-breaking insight on its own, it is Iliffe’s relentless quest to painstakingly document the evolution of Newton’s theological views and their impact on his scientific work that makes his book one of the most exciting I have read in a while. In particular, I was fascinated by the chapter “Methodizing the apocalypse,” which examines Newton’s obsession with prophetic images in the Bible. It looks at his deep interest in eschatological thinking and explains how Newton drew upon the ideas of older thinkers such as Joseph Mede and some contemporaries such as Henry More. Iliffe is especially good at using Newton as a lens for thinking about some of the bigger issues in the history of science. For anyone interested in early modern science and theology, this book is a must read.

 

Simon:

In the first half of the twentieth century, a handful of scholars writing mostly in England attempted to understand how capitalism worked to produce the kind of isolated and self-interested people that its defenders associated with the natural human condition. These critics included R.H. Tawney, the labor activist, education reformer, and early modern historian whose research on the intellectual and social history of capitalism in the seventeenth century left such a deep imprint on my own field that the period of English history he studied came to be known as “Tawney’s Century.” Tawney and the intellectuals he helped inspire are the subjects of Tim Rogan’s rich and incisive book The Moral Economists: R.H. Tawney, Karl Polanyi, E.P. Thompson, and the Critique of Capitalism (PUP, 2017).

Rogan traces the development over half a century of the distinctly historical critique of a capitalist system that these three scholars saw as insufficient for human flourishing. While the three scholars were familiar with each other’s work, Rogan groups them together not because they identified each other with a common mission but rather because they shared some conception of a “moral economy” that had been suppressed over time, to varying degrees, by laissez-faire capitalism and the theologies and philosophies that had served as its handmaiden through history, whether puritanism, liberalism or utilitarianism. Rogan’s close attention to the continuities and the subtle differences in these thinkers’ narratives of history and accounts of human flourishing leads him to convincingly demonstrate how they were all talking about moral economy in distinct ways, even before Thompson popularized the term itself in a famous article from 1971. This rigorous reconstruction of the logic of their arguments also allows Rogan to end with an evaluation of those accounts which he believes offer promising paths to guide political thinking today. Rogan sees Polanyi’s framework in particular as potentially fruitful for the present. Polanyi unmoored his own critique from the specifically Christian theological conception of human nature that Tawney had expounded before him. This transition leads me to wonder whether it’s valuable to think about such a shift as a kind of “secularization” between Tawney and Polanyi, and how that rejection of theology as a guide for their politics might lead Polanyi, his contemporaries and successors to attribute a different kind of significance to theology within the histories they write.

Spectral Sovereigns and Divine Subalterns

by guest contributor Milinda Banerjee

Spectres of dead kings are haunting the world today. In a 2015 interview, Emmanuel Macron declared that since the death of Louis XVI, there has been a vacuum at the heart of French politics: an absent king. According to him, the Napoleonic and Gaullist moments were efforts to fill this vacuum. Since becoming President, Macron has been steadily emphasizing regal symbolism to represent his authority. Across the Atlantic, scholars have long observed the monarchic lineages, or even messianic roots, of the American Presidency via British-European constitutional thought. But the monarchic turn has intensified of late, as Donald Trump’s Christian supporters compare him to the Biblical monarchs David, Nebuchadnezzar, and Cyrus. Romans 13, the New Testament passage used for centuries to justify submission to rulers as supposedly ordained by God, now finds increasing traction in American discourse about Trump, especially surrounding immigration and foreign policy. In Egypt, President Abdel Fattah el-Sisi has invited comparison with Pharaohs, while academic discussions note continuities between interwar Arab monarchies and post-royal dictatorships in the region.

In India, when Narendra Modi, leader of the Bharatiya Janata Party, became Prime Minister in 2014, Hindu nationalists celebrated him as the first proper Hindu ruler in Delhi in the 800 years since the defeat of King Prithviraj Chauhan at the hands of Turko-Afghan invaders. Bollywood has also been making blockbuster movies celebrating – supposedly Hindu nationalist – kings, while the soon-to-be-tallest statue in the world is being built off Mumbai, depicting Shivaji, a seventeenth-century monarch dear to Hindu-Indian nationalism. We are clearly witnessing a global phenomenon: the return of monarchic figures in political thought, comparison, ritual, and iconography, hand in glove with the rise of strongman leaderships and nationalisms.

To explain this planetary resurgence of kingly manes, we need to draw upon lenses of global intellectual history, enriched by scholarship on earlier epochs of connected waves of monarchism and state formation, such as by Sanjay Subrahmanyam and David Cannadine. We may also take a cue from models of political theology advanced by Carl Schmitt, Ernst Kantorowicz, and Giorgio Agamben. In my recently published book The Mortal God: Imagining the Sovereign in Colonial India, I try to provide a historical genealogy for this global phenomenon through a focused study of modern India. I suggest that British administrators and intellectuals, like Viceroy Lord Lytton, Viceroy Lord Curzon, and the author Rudyard Kipling, as well as elite Indians, like the socio-religious reformer Keshub Chunder Sen, frequently justified the construction of strong imperial and/or princely sovereign state apparatuses in late nineteenth- and early twentieth-century South Asia by using monarchic concepts and images. Often, Christian-inspired notions of monotheistic authority anchored their visions of providentially-mandated monistic-centralized state sovereignty. For example, Lieutenant-Governor Alfred Lyall quoted Bishop Eusebius of Caesarea to relate imperial unification to the triumph of monotheism, in the Roman Empire as well as in British India. The title of my book gestures towards this sacralisation of the state, and, more specifically, towards the widespread citations of Hobbes in modern India, especially by Indian intellectuals and politicians, to debate these constructions of sovereignty (Mortal God, Introduction, Chapters 1-3).

In challenge, middle-class Indian intellectuals like Bankimchandra Chattopadhyay, Nabinchandra Sen, and Mir Mosharraf Hossain created blueprints of non-colonial sovereignty, in the form of Hindu-Indian-nationalist or Islamic righteous kingdoms (in Sanskrit/Bengali, dharmarajya), which were ideologically anchored on the unity of a monotheistic divinity and/or sacred kingship. Many Indians were inspired by the monarchically-mediated nationalist unification of Italy and Germany. By the 1900s, Japanese monarchy and Shintoism offered templates of state-building to Hindu and Muslim actors. In the 1910s and early 1920s, the Ottoman Caliphate question inspired many Indians to combat colonial authority in the name of divine sovereignty: the trials of Muhammad Ali and Shaukat Ali embodied a fierce battleground. In interwar years, Indians also cited other royal models to imagine national sovereignty: Amanullah’s Afghanistan, Reza Shah Pahlavi’s Iran, Faisal I’s Iraq, Rama VII’s Siam/Thailand, and the (incipiently anti-Dutch) kingships of Java and Bali. Ultimately, an ancient Sanskrit word for kingship, sarvabhauma – literally, (lord) of all earth – offered the root word for ‘sovereignty’ in most Indian languages (Mortal God, Chapters 3 and 5).

For a proper global historical explanation of today’s monarchist resurgence, we need however to look beyond India. Hence, a book I edited with Charlotte Backerra and Cathleen Sarti draws on case studies from across Asia, Russia, Europe, North Africa, and Latin America, to conceptualize ‘royal nationhood’ as a transnationally-constructed category. My chapter uses lenses of global intellectual history, and offers various examples, including Walter Bagehot and Kakuzo Okakura, to show how actors from around the world learnt from other societies to place the figure of the (present, historical, and/or imagined) monarch as a (practical and/or symbolic) centre around which national unity and sovereignty could be built up, surpassing class and factional differences. Today, monarchic spectres are being resurrected again by sectarian nationalisms, which derive material strength from the inequality-breeding regimes of global capitalism and the grievances they invariably spawn among those left out. Ruling classes and angry populations are deploying these spectres to delineate majoritarian-national unity – a mythic unitary sovereign above classes and factions, with Caesarist and salvific promise – against vulnerable minorities, refugees, and aliens. Taking a cue from the comparison of Trump with the Biblical Nehemiah in terms of building walls – and Émile Benveniste’s discussion on the Indo-European rex/raja as a maker of boundaries between “the interior and the exterior, the realm of the sacred and the realm of the profane, the national territory and foreign territory” (Dictionary of Indo-European Concepts and Society, 312) – I would argue that sectarian nationalists today invoke regal manes to forge borders, segregation, and inequality. 

If sovereignty is seen as a motor of global conceptual travel, we can explain why the globalization of models of centralized and exclusionary state sovereignty over the last centuries has also propelled periodic and global waves of monarchic conceptualization, often even after the demise of real-life kingships: clear evidence of how republics too are haunted by (to borrow Jacques Derrida’s words) the “patrimonial logic of the generations of ghosts” (Specters of Marx, 133). It is thus ironic, but fitting, that supporters of the defunct Italian monarchy should draw strength today from Trump’s aggressive nationalism, while reposing faith in a sovereign who would embody the nation by being super partes.

However, sovereignty is not a ‘thing’ which merely spreads top-down, via elite interventions and circulations. The mysterious pathways of sovereignty do not only translate ‘sacred’ hierarchies into human government, but also engender agonisms and dialectical transfigurations. Thus in colonial India, women like Sunity Devi, Nivedita, Sarojini Naidu, and Begum Rokeya invoked Indian, European, Islamicate, and Chinese models of queenship to demand women’s right to political authority. These interventions often became linked to transnational feminist and suffragette networks, and opened up spaces beyond strongman nationalism (Mortal God, Chapter 3). Simultaneously, peasant, ‘tribal’, and pastoral populations asserted royal ‘Kshatriya’ identity and divine selfhood, drawing on precolonial-origin models of community autonomy and regal theology, as well as liberal-democratic and socialist-Communist forms of association. They claimed democratic representation, dignity of labour, material betterment, and reservations in education and employment. They grounded their claims to rulership and divinity on practices like ploughing and animal husbandry, outlining ideals of nourishing and pastoral governance that bear comparison with (even as they sharply diverge from) those outlined by Michel Foucault. Politicians like Panchanan Barma in Bengal and Dasarath Deb in Tripura used Kshatriya organization to structure peasant resistance against high-caste Indian elites. Many of these ‘lower caste’ movements – and their ‘vernacular’ intellectual trajectories – remain powerful even today, rooting ideas of universal rights, equality, and democracy in the collectivization of divine and regal selfhood (Mortal God, Chapter 4).

Peasant and working-class agitation in colonial India also drew on varied messianic models, from ideas about the Mahdi’s advent and Allah’s sovereignty in relation to peasant autonomy, to the notion of ‘Gandhi Maharaj’. The Russian Revolution and Communism were sources of inspiration too. These popular utopianisms, instigated by discontent against colonial fiscal oppression and political-military brutality, fuelled grand insurrections, dismantling the British Empire in the subcontinent. Inspired by such struggles, the poet Kazi Nazrul Islam as well as Rajavamshi peasants devised sophisticated forms of materially-grounded dialectical theory, involving transition from servitude (dasatva) and heteronomy (parashasana) – when one alienated one’s self (sva-hin) – to the recovery of self and ethical-material autonomy (svaraj, atmashasana), leading finally to the anarchic cessation of all rule when one realised the fullness of divinity within oneself and others (Mortal God, Chapters 4 and 5).

These Indian cases invite wider comparisons, such as with the seventeenth-century Leveller Richard Overton’s statement about “every man by nature being a king, priest and prophet” or with Ludwig Feuerbach’s nineteenth-century conception of divinity as present in every human being. Rather than a theory of democracy predicated on an empty/disembodied centre, as in Claude Lefort, we are tempted to outline a radically novel conception of the democratic political embedded in the proliferation and multiplication of divine being. There is a barely-developed hint in Agamben’s recent opus, Karman (78-79, 83), of such a turn, drawing on ancient India, to inspire a new model of action. In our world of strongman sovereigns and unrelenting degradation of human and nonhuman actors, we need to recuperate such globally-oriented political theory and practice, while remaining critical towards, and abjuring, the chauvinistic and hierarchical elements historically present in them. If engaged with dialectical intimacy, visions which once inspired rebels to overthrow ruling classes can help us conjure today solidarities with peasants and refugees, neighbours and strangers. To see everyone as regal and divine, and act upon this, can become a lightning bolt to wield for unshackling democracies to come.

Milinda Banerjee is Research Fellow at Ludwig-Maximilians-Universität Munich from 2017 to 2019, as well as Assistant Professor in the Department of History at Presidency University (Kolkata, India). His dissertation, which offered an intellectual history of concepts and practices of rulership and sovereignty in colonial India (with a primary focus on Bengal, ca. 1858-1947), has now been published as The Mortal God: Imagining the Sovereign in Colonial India (Cambridge University Press, 2018). His research project at LMU is titled ‘Sovereignty versus Natural Law? The Tokyo Trial in Global Intellectual History’. 

Reconsidering Mechanization in the Industrial Revolution: The Dye Book of William Butt

By guest contributor Lidia Plaza

On my way to Covent Garden this summer, I spotted a Muji store and popped inside.  A few months earlier I had picked up a pair of Muji socks in Terminal 5 of JFK, which had since become my favorite pair.  Determined to acquire more, equally lovely socks, I studied the sock selection until I found some in the same style, size, and color as my beloved pair.  I grabbed them and headed to the till.  I didn’t bother to inspect the socks; I assumed the knit tensions were all perfectly even, the densities were consistent, the colors were identical.  I also assumed that they were exactly like the socks I had purchased a few months earlier in New York.  I didn’t compare the socks because I take consistency for granted.  I expect it.  I insist upon it. My expectation that socks I purchase from a Japanese retailer in New York will be identical to socks I find in London months later is a testament to the success of the Industrial Revolution.

In the history of the Industrial Revolution, the mechanization of cleaning, processing, spinning, and weaving textiles has become Chapter One of the gospel, but this telling puts undue emphasis on the machines. The triumph of the Industrial Revolution was not the machines themselves, but the processes that could produce consistent products at a mass scale; machines were just one tool of those processes. This point is well illustrated in an often-overlooked verse of the gospel: dyeing.

Dyeing’s neglect is partially understandable, as dyeing is almost as difficult for the historian to study as it was for the eighteenth-century apprentice to learn. Unlike paints, dyes must chemically bond with the textile fibers, and variations in the fibers, the pH of the water, the quality of added mordants and dye-assistants, or even the composition of the containers used can affect the results. Only in the nineteenth century did chemists begin to understand dye chemistry, and when histories of industrialization include dyes, this nineteenth-century chemistry is often what they highlight. But early modern dyers spent their careers learning to achieve consistent, even dyes, and, more recently, scholars like Giorgio Riello have included dyeing innovations in their examinations of early textile industrialization. It is now becoming clear that dyers and clothiers like William Butt were making critical strides in early textile industrialization.


William Butt began his dye book on November 24, 1768.  As a clothier, Butt oversaw the entire process of producing a woolen textile from cleaning the raw fibers to weaving the fabric.  Yet his book, the product of almost daily work, is just about dyeing.  Why then did Butt devote so much effort to just one step in the manufacturing process?  The answer is simple: half the price of a finished textile could come just from the quality of its dyeing.  It was not uncommon for clothiers to set up their own dye houses, unwilling to trust someone else’s work with such a critical step.  William Butt was such a clothier.

Between 1768 and 1785, he recorded 794 recipes, filling over 88 pages with rows and columns of cryptic numbers, strange symbols, bizarre ingredients, and round little pieces of colored felt, stuffing little scraps of paper between its pages. After weeks and months of poring over the book in the reading room of the Beinecke Library, I made some sense of it: each row documents a new recipe, and each column contains a separate piece of information about the recipe. In this way, Butt recorded the amounts of wool he worked with, the merchant marks of his wool suppliers, and the dyestuffs used in each recipe, always providing samples of the results and assigning a unique recipe number.


The book shows that Butt was able to dye more wool with better results by systematically experimenting with his dyes. Starting around 1777, about page 35 of his book, Butt began to treat his book less like a cookbook of collected recipes and more like a lab notebook to record his experiments. He started dating his recipes, and the dates suggest that Butt began to intensify his production. Butt filled 35 pages between 1768 and 1777. Assuming this was his only dye book, he filled only 3 or 4 pages a year during this period. However, after 1777 he usually filled at least 6 pages each year.

Number of Pages Filled per Year in William Butt’s Dye Book, May 1777–May 1785

Year                   Approximate pages filled
1777 (May–December)    6
1778                   6.5
1779                   8
1780                   6
1781                   7
1782                   6
1783                   6.5
1784                   5
1785 (February–May)    3

Not only was Butt working with more recipes, but he also became more meticulous in his work. He got pickier about how he classified a new dye recipe, assigning a new dye number to recipes that varied only slightly from other recipes in the book. He began experimenting with his recipes, recreating previous recipes using wool from different suppliers, for example. In another instance, he experimented with technique, noting that recipe 20129 was the same as 19917, “differing from 19917 in method only.” His book got more cluttered as he began recording the merchant marks of the merchants who supplied the wool in each recipe, and as he made more notes and comments.



Part of Butt’s success may have come from the fact that he seems to have been a specialist. He was clearly skilled at using many dyestuffs, but he relied on other dyers to provide him with indigo-dyed wool. Indigo is a vat dye, which bonds to fibers through an entirely different chemical process from the other dyestuffs Butt worked with, which were almost exclusively mordant-dyes. Not all dyers specialized, but there is evidence that indigo specialists were common, and so it is not surprising that Butt would also specialize in one type of dye. By focusing on dyes that all required similar methods, Butt was able to refine those methods and increase his efficiency. By the end of his book, Butt had more than doubled the amount of wool he dyed in each recipe.

Technology was at the heart of the Industrial Revolution, but, as Butt’s dye book illustrates, if all we imagine when we think of technology is machines, we are missing a large part of the picture.  Technology is simply the practical application of scientific knowledge, and in this sense William Butt’s dye book is as much a piece of technological advancement as the spinning jenny and the power loom.  He could not have known the chemistry underpinning his work, but he knew he could maximize his output by systematically experimenting with dyestuffs and applying what he learned to his processes.  All the spinning jennies and power looms in the world would have been useless if all those threads and fabrics could not be consistently and reliably dyed, but dyeing at larger scales required a better understanding of the dyes, not new machines.  Butt and the many others like him understood this.  Hiding in record offices and archives, there are other dye books, all written by clothiers and dyers trying to master dye processes.  It was their mastery of process that achieved the consistency that I take for granted every time I browse a wall of socks.

Lidia Plaza is a PhD student in British history at Yale University. Her research focuses on industrialization and British foreign policy in the late eighteenth- and early nineteenth-centuries.  

From our editors: What we’re reading this month (1/2)

Andrew Hines:

How to Read: Wittgenstein (2005) by Ray Monk

As someone with a background in post-Kantian European philosophy, whose interests had leaned quite heavily toward phenomenology, philosophical hermeneutics and deconstruction, I had unfairly dismissed Wittgenstein as “one of those analytic philosophers”. Recently, I’ve found my philosophical palate broadening as I’ve become increasingly concerned with understanding the broader context within which key, culture-shaping ideas emerge, as well as understanding why a particular thinker took the intellectual route he or she did. Ray Monk’s How to Read: Wittgenstein addresses both these interests on top of providing a lucid and authoritative introduction to Wittgenstein.

What comes through is Wittgenstein’s intellectual journey, the way that he continually reframes the problems he’s concerned with. It’s evident that two of Wittgenstein’s abiding concerns are the limit and function of language. By focusing on Wittgenstein’s own biography alongside the ‘biography’ of his ideas, Monk not only provides an introduction to Wittgenstein’s main ideas, but an entire history of the development of one of Western philosophy’s key themes in the twentieth century. Not only did Wittgenstein revolt against his teacher Russell and assert that the activity of philosophy consisted not in providing a science-like precision but in clarifying the true nature of the logic of language; he also built on Goethe and looked at the ‘morphology of expression’ and how philosophy provides the opportunity for a new perspective on a problem. Because of this, poets and musicians, Wittgenstein thought, can give us as much instruction as science.

The sheer breadth of this intellectual journey is far too often written off by students of ‘continental philosophy’ and I am perhaps the worst offender. Because of this, I would generally recommend ‘How to Read: Wittgenstein’ by Ray Monk to anyone looking to fill in the gaps of the intellectual-historical context of the early twentieth century. Particularly, however, I would recommend it for those, like myself, who had unfairly ignored Wittgenstein.

 

Spencer:

In one of my very favorite plays, Alan Bennett’s The History Boys (2004), one of the titular students proudly describes his attempt to impress the new teacher by referring to a book he’s been reading, by one “Frederick Kneeshaw.” This beautiful malapropism, achingly relatable and touchingly human, has haunted the name “Nietzsche” for me ever since I first saw The History Boys some ten years ago. Just as in the Hebrew Bible there is the word as pronounced, the qere (“what is read”), that may differ from the word as written, the ketiv (“what is written”)—most famously in the name of God—so I have spent a decade hearing “Kneeshaw” in my head whenever I have read (or more rarely written) the name of that great philosopher.

Yet, until this week, my knowledge of Kneeshaw’s oeuvre went no further than the handful of aphorisms that have become common currency: “God is dead.” “There are no facts, only interpretations.” “He who fights with monsters should look to it that he himself does not become a monster.”

Prompted by Carlo Ginzburg’s masterful History, Rhetoric, and Proof (1999), in which Nietzsche is a key interlocutor on rhetoric, I plunged a few days ago into On the Genealogy of Morals (1887). The going has been slow, more as a result of too many claims upon my time than any fault in Michael Scarpitti’s translation. Indeed, the quality of the prose has been a revelation, surpassed only by the humor. Take Nietzsche’s imagining of the moral systems of the weak (the lambs) and the strong (the eagles): “And when the lambs say among themselves, ‘Those birds of prey are evil, and he who is most unlike a bird of prey, who is most like its opposite, a lamb – is he not good?’ then there is nothing to cavil about in the setting-up of this ideal, except perhaps that the birds of prey will regard it with some measure of derision, and say to themselves, ‘We bear no ill will against these fine, goodly lambs, we even like them; nothing is tastier than a tender lamb.’”

There is always the danger that mellifluous prose and trenchant wit (a particular delight of mine) misdirect attention away from the ideas Nietzsche is propounding. And for a pacifist, Jewish reader such as yours truly, these must be awkward fare. Nietzsche’s veneration of the “blonde beast” does not wear so well in the wake of the twentieth century—and did not wear much better in the nineteenth. So, too, the denigration of the Jews as “a priestly nation of resentment par excellence” and the propagators of “slave morality” rankles all the more in the light of recent tragic events in Pittsburgh. I learn from Cathy Gere’s excellent Knossos and the Prophets of Modernism (2009) and other scholars that Nietzsche himself was a dedicated critic of anti-Semitism and that the filiation between his works and the ideology of National Socialism was largely the creation of his vehemently anti-Semitic sister. Perhaps so, but Elisabeth Förster-Nietzsche was not without material to work with in her brother’s writings.

These uncomfortable flashes of prejudice aside, the content has been striking in its familiarity—my ignorance of Nietzsche’s work notwithstanding. What I have realized is that a century and more of thought and culture steeped in Nietzsche has made his ideas ubiquitous, even banal. The suggestion that morality is the creation of power does not shock in 2018, even for those to whom it is anathema.

 

Kristin:

Just this week, I picked up a slightly older book: P. Steven Sangren’s Chinese Sociologics: An Anthropological Account of the Role of Alienation in Social Reproduction. More than a work of ethnography in China, this volume is primarily a theoretical treatise that unfolds a slightly modernized Marxian understanding of social mechanisms and patterns while drawing upon the author’s fieldwork in Taiwan for examples and illustrations. While some aspects of the work merit critique (particularly the titling of the work as “Chinese Sociologics” when the vast majority of its basis is specifically Taiwanese, and a somewhat simplistic take on “Gender and Exploitation”), its primary purpose as a cohesive and thoughtful Marxian analysis is fulfilled with insight. With chapters focusing on classical Marxian themes such as production, alienation, and circularity, as well as copious citations from Marx, Durkheim, and other related scholars, this work offers an interesting window into the Marxian tradition of social theory as well as into more recent attempts to incorporate Marxian theory into modern ethnography.

 

Brendan:

This month I’ve found myself reading quite a bit about the history of gunpowder. Gunpowder was first discovered by Chinese alchemists before the 11th century. The earliest European gunpowder recipes, from the 13th century, were written in code because the alchemists were fearful of the compound’s power: “No clap of thunder can compare with such noises. Some of them strike such terror to the sight that the thunders and lightnings of the clouds disturb it considerably less.” (Kelly DeVries, “Gunpowder and Early Gunpowder Weapons,” in Gunpowder: The History of an International Technology.) States were less fearful. Centuries of experimentation turned gunpowder into fireworks, rockets, cannons, blunderbusses, rifles, and pistols. For the next six hundred years, battlefields would be covered with a black brimstone smoke.

Gunpowder is a mixture of saltpeter, charcoal, and sulfur. The result is a brownish, smelly powder that, when exposed to flame, can produce a fire so sudden that its shockwave hits the speed of sound—an explosion. Of gunpowder’s three ingredients, the most unusual and most important is saltpeter (potassium nitrate), which makes up 70% of most gunpowder recipes. There are some natural formations of saltpeter in cakes of whitish powder forming a crust atop nitrate-rich soils. Damp caves with beds of guano or fetid houses sometimes produce a white salt on their walls—the salt of the rock (sal petrae). But this was not enough to meet states’ growing demand for gunpowder.

You can manufacture saltpeter as well. The big European powers employed roving armies of saltpeter men who were allowed to go into people’s dovecotes and mangers looking for guano and manure. They’d cart this ordure off to special factories and soak it in urine (that of drunk men worked best) to leach out the saltpeter. Antoine Lavoisier, the great French chemist, searched for a way to make artificial saltpeter after the British seized the world’s great Indian saltpeter areas during the Seven Years’ War. Lavoisier’s success kept the French gunpowder barrels full over the next quarter century of war. (During the Revolution, Lavoisier’s apprentice, Éleuthère Irénée du Pont, fled to America, where he set up a gunpowder mill—the birth of DuPont, the world’s biggest chemicals company.)

Saltpeter is a form of reactive nitrogen, and reactive nitrogen is one of the hidden foundations of the modern world. Nitrogen is plentiful: it makes up nearly four-fifths of the air, but there it is tightly bound up in N2, hitched together with strong triple bonds. Reactive nitrogen, by contrast, is rare. To use nitrogen, we need reactive forms like saltpeter and ammonia that plants and animals can take up. Much comes from bolts of lightning turning atmospheric N2 into nitrogen oxides. Some plants (particularly legumes) have symbiotic bacteria in their roots that can make reactive nitrogen. Fertilizing crops with manure helps plants grow by giving them the reactive nitrogen bound up in our dung. Without nitrogen, no food.

In the early 20th century, German industrial chemists looking for a way to make explosives figured out how to turn atmospheric nitrogen into reactive nitrogen in a lab—the Haber-Bosch process. This is now the most important source of fertilizer on earth. Perhaps 80% of the nitrogen in your body comes from the Haber-Bosch process. All this readily-available reactive nitrogen is probably one of the reasons why we have nearly 8 billion people on earth today. Nitrogen kills: nitrogen feeds. The early alchemists treated gunpowder and other nitrates with wonder. Perhaps we should do the same.

 

From the Archive: The Early History of Arabic Printing in Europe

by Maryam Patton (April 2015)

In the middle of the ninth century, Paulus Alvarus complained about Spanish Christian youths who were abandoning Latin for the native Arabic of their new conquerors. Yet nearly seven hundred years later, when the last Muslim state of Granada fell to the Reconquista in 1492, the sustained study of Arabic in Europe suffered a fatal blow. In the following years, royal decrees banning the use of Arabic and book and manuscript burnings, such as the one initiated by Archbishop of Toledo Ximénez de Cisneros in 1499, worked to undo the special relevance Arabic had had for Europeans (Toomer, 17). Until well into the seventeenth century, European interest in the philological pursuit of Arabic waxed and waned. The sources for this interest included the Crusades, scientific knowledge, the rediscovery and transmission of Greek classical texts from Arabic and Syriac translations, and faith-based missions to the Near East. These factors constituted a “first wave” of interest in Arabic study in the medieval period. It was not until a “second wave” of interest beginning in the sixteenth century that Arabic became a sustainable subject for philological inquiry (Russell, 1-19).

This second wave embodied some of the same concerns the original Arabists felt concerning the religious significance of Arabic. In addition to their evangelical missions, early modern students of Arabic sought to reconnect with Eastern Christian communities such as the Maronites and Coptic Christians. In 1584 Pope Gregory XIII founded the hugely successful Maronite College in Rome for the education of Jesuit missionaries traveling East. Meanwhile, growing pressure from the encroaching Ottomans, combined with Ottoman “capitulations” allowing for expanded economic involvement within Ottoman territories, offered economic incentives to study Arabic, as well as Persian and Turkish.

Yet, during the early modern period, an increasing emphasis came to be placed on studying Arabic for its own sake, rather than purely religious or economic concerns. Joseph Scaliger (1540–1609) was one significant example of an early modern scholar who argued for the study of Arabic as an end rather than a means. He stressed the importance of the Koran as a waypoint to understanding Arabic language and culture. His own knowledge of Arabic was limited, but his influence as a professor at Leiden and the example he set for his students ought to be emphasized. Upon his death he bequeathed his impressive library of Oriental manuscripts to the university, helping to establish the Netherlands as one of Europe’s most important centers for the study of Arabic (Toomer, 42-45).

The pursuit of Arabic for its own sake was facilitated by the appearance of printing presses sophisticated enough to print in Arabic using moveable type without relying on crude woodcuts. John Selden’s (1584–1654) 1614 book Titles of Honor, for instance, relied on woodcuts for the ‘words of the Eastern tongue’ like amir and sultan, but the letters looked strange and often appeared alone when they should instead have been connected to the following letter. In some cases, blanks were left in books where Arabic words were called for and were written in later by hand, as in Richard Brett’s Theses, published at Oxford in 1597 (Roper, 12-13).

Proper Arabic type made it possible to finally print grammars and dictionaries. Previously, students had to rely on native speakers and others who already knew the language. The first book containing Arabic printed with moveable type was a book of hours printed in 1514 and intended for use in the Near East. Though it was published independently by the Venetian Gregorio de Gregorii, it was paid for by Pope Julius II, and featured odd shapes for some of the letters (cut by Gregorii himself). The characters dal and dhal in particular were too large and should not have curved down below the baseline.

[Image: the 1514 Book of Hours]

[Image: John Selden, Titles of Honor, p. 221]

A number of other religious texts intended for Christians in the East appeared soon thereafter, but the most impressive feat was a complete Koran published in 1538 in Venice by Paganino de Paganini and his son Alessandro. It was printed entirely in Arabic without any Latin characters whatsoever in an effort to disguise its Western origins, and was most likely intended for sale in the Ottoman Empire. The Ottomans did not establish their own printing presses for another two hundred years, with the efforts of Ibrahim Müteferrika. Sadly, a lack of demand and the costs associated with creating new Arabic type (not to mention the numerous errors contained therein) bankrupted Paganini. Only one extant copy of this text is known (Nuovo, 79-81).

[Image: the first Qur’an printed in the West (Venice, 1538)]

Italian printing in Arabic reached its height in Rome starting in 1584 with the founding of the Medici Oriental Press by Cardinal Ferdinando de Medici. Pope Gregory XIII again offered his support, and with a newly designed type from the famed typographer Robert Granjon, the Medici Press was in an ideal position to seriously advance Arabic studies. Yet the director, Giovanni Raimondi, grew too ambitious and published many obscure texts with a limited audience. The press faced criticism for its lack of fundamental books such as grammars and basic readers. Few scholars besides those already learned in the language could make much use of these advanced texts, and the press effectively shut down upon Raimondi’s death in 1614. Granjon’s elegant type was, at the very least, saved for later use by the Vatican Press and others, and helped raise the aesthetic standards of Arabic printing. As the image below shows, Granjon’s type was far more rectangular than earlier fonts, which had resembled the curvier handwriting of the manuscripts on which they were based; Granjon’s type instead resembles modern Arabic fonts.

[Image: Thomas-Stanford, Plate 11]

After the Medici Press ceased operation, the center for Arabic studies shifted to the Netherlands thanks to the diligent efforts of a few key individuals. Scaliger arrived at Leiden in 1593 and swiftly set to work encouraging others to pursue Arabic. His student Thomas Erpenius (1584–1624) was arguably “the first native European to achieve true excellence in Arabic.” Erpenius assumed his position as Professor of Oriental Languages in 1613 and in the same year published his masterful Grammatica Arabica. Its type was cast by Francis Raphelengius, and it served as the authoritative grammar for many years to come, with several updated editions and the addition of reading passages. Erpenius unfortunately died at the young age of 40, but his student and successor Jacob Golius (1596–1667) carried on in the same vein and produced an Arabic lexicon in 1653. Golius’s brother Petrus was then serving in Antioch, and Golius relied on this connection to build an extensive library of Arabic manuscripts rivaled only by Edward Pococke’s collection in England (Toomer, 43-45).

By this point, there was still no press capable of printing Arabic in England. Scholars instead would travel to Leiden to have their works printed by Raphelengius, or rely on unsatisfactory woodcuts. Although William Bedwell succeeded in purchasing the type from the Raphelengius brothers when he visited Leiden, what arrived in England in 1614 were worn-out types rather than the matrices from which fresh new types could be cast. England’s entry into Arabic printing was thus delayed until 1652, when Selden published Mare Clausum. Erpenius’s and Golius’s philological texts expanded the possibility for further Arabic study not only because students could be self-taught but also because they encouraged standardization in teaching. Even after difficult financial setbacks and the technical challenges of a language with varying letter forms, printing presses ultimately made serious advancements possible in the early modern period. As in the case of Greek, advances in typesetting expanded access to printed texts and made it possible for early modern scholars to learn the language from a grammar, instead of having to rely on someone who already knew the language (Dannenfeldt, 17).

Maryam Patton is a third-year PhD candidate at Harvard University in the dual History and Middle Eastern Studies program. She studies early modern cultural and intellectual history, and her dissertation project focuses on time and temporality in the fifteenth- and sixteenth-century Mediterranean.

Tracing the perceived merits of Robert Orme’s History of the Military Transactions of the British Nation in Indostan (1763)

By guest contributor Laura Tarkka-Robinson

In the eighteenth century, the sundry genre of early-modern travel writing – or ‘travels’ – was not only popular but also notorious for leading gullible readers astray. In this regard, it is hardly remarkable that the improved second edition of John Henry Grose’s fairly inconsequential Voyage to the East Indies (1766) plagiarized a passage concerning judicial practice in India from another recent publication. Furthermore, given that this passage was added to increase the appeal of the Voyage as a source of knowledge, it might seem equally unremarkable that the text from which it was appropriated was still praised as better ‘than almost any of the more recent productions on that subject’ in 1805 (xxix).

Yet, Indian customs were not the express subject of the plagiarized book, the highly successful History of the Military Transactions of the British Nation in Indostan (1763), which earned its author Robert Orme the title of ‘the first official historiographer of the East India Company’. Thus, the hierarchic relationship between Grose’s eye-witness travel account and Orme’s military history becomes very interesting in light of the affinities which these works actually display. For in fact, both drew on the author’s personal experience in the service of the English East India Company while describing Indian customs and manners in the language of Oriental despotism, in accordance with Montesquieu’s notions on the influence of climate (10).

Hence, it is surprising that, despite other Orientalists’ critique of subjective observations (34-35), Orme’s work gained and sustained a high status of authority on Hindu customs. I argue that this puzzle can, however, be solved by considering the opinions expressed by contemporary reviewers in conjunction with the structure of Orme’s History and its epigone, the new edition of Grose’s Voyage.

The trajectory of Grose’s Voyage helps us to recover the perceived merits of Orme’s History, for besides the plagiarized passages, the improved edition of the Voyage also boasted an additional volume describing the military affairs of the British in India, thus setting the two publications into a competitive relationship with each other. The addition of the second volume suggests that the anonymous editor of the Voyage was reacting to the rising taste for historical narratives, especially since some reviews had expressed impatience (318) with Grose’s miscellaneous observations. Indeed, although the Voyage had been swiftly translated into French and recommended for an abundance of reliable detail (viii) on Indian customs and manners, its sense of immediacy never attained as much appreciation (96) as the more literary performance of Orme.

However, while Orme’s manner of writing set him apart from first-person eyewitness accounts, the reception of the History of the Military Transactions of the British Nation in Indostan in the press indicates that the success of this work was owing to the symbiotic relationship (p. 305) of Orme’s ‘classical’ military history and the well-digested chorographical dissertation which he prefixed to it. Some reviewers were more interested in the actual History and some in the accompanying dissertation, but in both cases, Orme was commended for the character which he made as a historiographer.

Upon the first appearance of Orme’s History, The Critical Review found it ‘pleasing and perspicuous’ (249), ‘truly historical’, and ‘classical’ (258). Fifteen years later, The Monthly Review also praised the second volume as an example of ‘the true simplicity of historical narrative’, providing just enough detail ‘to fix the stamp of authenticity to the narrative, and to entitle the Author to the character of a faithful historian’ (431).

Orme’s favorable reception was perhaps partly based on tacit knowledge about his scholarly pursuits since, after establishing himself in Harley Street in 1760, Orme befriended numerous literary gentlemen of the day. Moreover, despite having left India on account of extortion charges, Orme still presented the East India Company in a favorable light. In contrast, though introducing himself (1) as an East India Company servant, Grose used his experience to criticize ‘the inexperience and aim at independence (38) in the appointed members of the several Courts’ in India, arguing that their authority was so dangerous that the Company’s royal charter had better not been obtained.

Moreover, Orme’s allegiance to the English company was no inhibition to becoming widely acclaimed abroad, as French and German reviewers praised his ‘liberal’ attitude and devotion to public rather than private interest. Thus, even though Orme’s History was about an Anglo-French conflict, Le journal des sçavans (677-679) found it devoid of national bias. Similarly, the preface to its ensuing French translation stressed that while misapprehended patriotism could entice historians to wrap their facts up in fables, this was not the case with Orme.

Another highly illuminating review in the Allgemeine historische Bibliothek also commended the skillful, modest, and truthful manner in which Orme’s History described the characters of nations and individuals. Nevertheless, this review directed special attention to his dissertation on Indian customs, reading it as a summary of the current knowledge on this topic in Europe. The reviewer regretted that the English had not contributed more to such inquiries, although they were not lacking capacity. This suggests that service in the EIC was perceived as an opportunity to communicate information that was both valuable and authentic. Indeed, the reviewer pointed out that no sources were listed for the military narrative itself, but the dissertation mentioned not only the well-known works of Herbelot and Bernier but also a lieutenant called Frazer, whose eye-witness character supported the authority of Orme’s words (222-234).

The perceived value (78) of Orme’s dissertation thus explains why some of its third section (24-27) ended up in the second edition of Grose’s Voyage – hidden away in the fifth book (336-338) to avoid the detection of plagiarism. While nothing suggests that this improved the status of the book in the literary market, it is striking how the recycled passages navigated around the question of sources, providing no assistance to critical readers. However, in all its ambiguity, especially the following excerpt (see also 162 here) was clearly relevant to on-going debates about the age and character of the Indian civilization:

Intelligent enquirers assert that there are no written laws amongst the Indians, but that a few maxims transmitted by tradition supply the place of such a code in the discussion of civil causes; and that the ancient practice, corrected on particular occasions by the good sense of the judge, decides absolutely in criminal ones.

Strikingly, Orme’s original dissertation was no more specific than Grose’s Voyage about the ‘intelligent enquirers’ whose assertions it invoked, because the strength of Orme’s authorial voice was based on avoiding the interference of external references as well as on refraining from first-person statements. In so doing, it proved pleasant enough to stand the test of national rivalry in France and compelling enough to be favorably received in German translation as late as 23 years after its first appearance.

As Grose’s Voyage likewise appeared only belatedly in German, the Allgemeine deutsche Bibliothek complained that the observations it contained had already lost their novelty value. At this point, the translator’s copious references to further reading – including 18 works on Indian religion – only served to underline the outdated appearance of Grose’s Voyage, which the reviewer also perceived as dubiously unpatriotic (234-236). In striking contrast, the adapted translation of Orme’s History, entitled Die Engländer in Indien (1786), was much more successful. Echoing the translator Johann Wilhelm von Archenholz’s views (v-vi), the Historisch-politisches Magazin (13-14) noted that especially those Germans who practiced trade could easily sympathize with the English, and stressed the importance of becoming familiar with the ancient and cultivated Indian nation. This review fixed its attention on Orme’s dissertation, while that of the Allgemeine deutsche Bibliothek (202) celebrated Orme’s character as a military historian who, though English, could appreciate a great Frenchman.

Accordingly, the trajectory of Military Transactions of the British Nation in Indostan provides a further caveat to the notion of a sudden and sweeping turn to linguistics in eighteenth-century Orientalist scholarship. For according to his nineteenth-century biographer, Orme’s authority remained intact even though he had ‘little or no acquaintance with learned languages in Asia’, and therefore ‘appears’ to have relied on ‘his own actual observations’ (xxix). In addition, however, a comparison with the fate of Grose’s Voyage also suggests that much remains to be said about the concept of private interest in eighteenth-century travel writing, especially as regards its relation to the political nation.

Dr Laura Tarkka-Robinson studied history and comparative literature at the Universities of Helsinki, Hannover and Edinburgh, earning her PhD at the University of Helsinki in 2017. Currently a Visiting Research Fellow at the Centre for Intellectual History, University of Sussex, she is revising her doctoral thesis “Rudolf Erich Raspe and the Anglo-Hanoverian Enlightenment” to be published as a monograph while also working on a post-doctoral project concerning the transformative impact of eighteenth-century notions of ‘national character’ on the early-modern Republic of Letters. More generally, her research interests revolve around the transfer, translation and exchange of ideas, the construction of national literatures and cultures, as well as the scholarly use of travel literature and the conceptualisation of historical progress in the long eighteenth century.