When professional troll James Delingpole recently bemoaned in the Spectator the demise of “a real Oxbridge education” at the hands of misguided social justice initiatives, professional classicist Mary Beard ended her response with the following postscript: “… when I quickly scanned the first link I was sent and saw the phrase ‘sterile, conformist monoculture’ applied to Oxbridge, I assumed that you were referring to what Oxbridge was like when it was a blokeish public school monoculture before the women and the others were ‘let in’! Whoops.”
Beard implies that there is a sterile, conformist Oxbridge to react against, but that it’s not the one Delingpole is thinking of—and that it exists more in the past than the present. So what is this “blokeish public school monoculture” that Beard references, and how did it fade? If we wish to restore the context that Delingpole so sorely lacks, with a view to understanding why his tantrum is not only plain wrong but also founded on troubling premises, this strikes me as an important missing piece of the puzzle. We can do so with relative ease, thanks to a book whose title has a poetic resonance with Beard’s ironic comment that women were “let in”: Keep the Damned Women Out: The Struggle for Coeducation (2016) by Nancy Weiss Malkiel, Professor of History Emeritus and former Dean of the College at Princeton University.
On October 31, 2016, I went to a talk in honor of Keep the Damned Women Out at the Institute of Historical Research in London. It was appropriate that the event took place on Halloween, because, as I learned from Malkiel that evening, the main actors—with the exception of Mary Ingraham Bunting of Radcliffe College, yes, all men—found the prospect of women infiltrating male educational spaces very scary indeed. The book itself is no less intimidating: fire-engine red and, at almost seven hundred pages, as thick as my thumb is long. On the cover, the title stands out in large font and harsh invective, the heartwarming contribution of a Dartmouth alumnus who wrote in 1970 to the Chair of the Board of Trustees: “For God’s sake, for Dartmouth’s sake, and for everyone’s sake, keep the damned women out.”
“And he could not have been more typical in his sentiments,” Malkiel commented before pointing out more instances of thinly veiled contempt, rife among the elite institutions that form the core of her book—elite institutions, she clarified, because that’s where the story is. (She added in response to a post-lecture question that the most elite of the elite were especially slow to change because if you’ve been doing things a particular way for centuries to great success, you think, don’t fix what isn’t broken.) Some choice quotes from my own alma mater, Princeton, include a description of coeducation as a “death wish” and concern that women would “dilute Princeton’s sturdy masculinity.” We even see prudent consideration of finances: “A good old-fashioned whorehouse would be considerably more efficient, and much, much cheaper.”
Then how, in the face of such outrage, did the damned women sneak in? Something Malkiel made clear upfront was that admitting the women had little to do with educating them. In fact, women had little to do with the story at all. This story, like so many other stories, was about men: their interests, actions, and even their defeats (in the struggle against coeducation). Furthermore, coeducation was not the mission of men who had “drunk the social justice Kool-Aid,” as Delingpole would say. That is, coeducation did not happen because of “a high-minded moral commitment,” but because “it was in the strategic self-interest of all-male institutions.” This was true in both the United States and the United Kingdom, Malkiel added.
But let us examine the two places separately for a moment in order to tease out what such strategic self-interest entailed, exactly. In the late 1960s, the top American schools began to see declining application numbers and yield rates, as men decided that they no longer wanted to attend single-sex institutions. Harvard, for example, started pulling students away from Princeton and Yale because it had Radcliffe up the street, when previously the three had been neck-and-neck. It became clear that women were key to attracting and retaining the “best boys.”
Women played “the instrumental role of improving the educational experience of men,” so their own educational experiences were, unsurprisingly, less than ideal. One Dartmouth oceanographer included pictures of naked women when presenting a list of sea creatures. The Chair of Yale’s History department responded to a request for a women’s history course by saying that that would be like teaching the history of dogs. Again at Dartmouth, the song “Our Cohogs” (cohog being a derogatory term for coeds) won a fraternity-wide songwriting competition, and afterwards the judge, the Dean of the College, joined the winners in performing ten verses of sexual insults.
Around this time, there was a wave of social change, including the civil rights movement (incidentally, Malkiel’s last book to have the word “struggle” in the title was Whitney M. Young, Jr., and the Struggle for Civil Rights), the anti-war movement, and the women’s movement, the effects of which were felt in Europe as well. The composition of student bodies started to shift, as universities admitted more state-educated students, students from lower-income backgrounds, Catholic and Jewish students, and African-American students. Women were the natural next step. Men and women were also voting and protesting together, so it began to seem strange that they should not be educated together.
In the UK, Oxford’s and Cambridge’s prestige made the “best boys” problem less likely. Nevertheless, they found themselves competing for talent with newly-founded universities, which had modern approaches to education and no history of gender segregation. (Keep in mind that by the 1970s, Oxbridge had been educating women for about a century at separate women’s colleges, even though mixed colleges were a novelty.) Simultaneously, there was a push to triple student bodies through broader recruitment at state schools. At that point it felt silly to draw the diversity line at women.
Competition within the same university was another consideration. The first colleges in Oxbridge to admit women were generally not the most prestigious, richest ones, and they did so partly to climb the league tables. Indeed, women’s colleges sat at the top of the tables at the time, and coeducation was a way to steal not only the top women students but also the accomplished men who wanted to be educated with them.
In the British case, unlike its American counterpart, the faculty played the largest role in implementing coeducation, with the Fellows of Churchill College, Cambridge, even overriding the objections of the Master, noted antifeminist Sir William Hawthorne. (As Lawrence Goldman, the Director of the IHR, noted in Q&A, you have a much smaller number of men making the decisions at each college, and they were all in residence and thus continuously interacting with each other.) And in contrast to the horror stories from the Ivy League, we have no evidence of women being harassed or asked for the “woman’s point of view” at Oxbridge—which, of course, doesn’t mean it didn’t happen. Overall, the process of integration seems to have gone smoothly, and women continued to do well.
“Are we there yet?” Malkiel asked toward the end of the talk. Clearly, issues remain: Gill Sutherland, a fellow emerita of Newnham College, Cambridge and a preeminent historian of education and women, happened to be in the audience, and she pointed out that a pyramid scheme still exists when it comes to women graduate students and faculty. And the mere fact that the Spectator gave Delingpole a soapbox shows that class, in addition to gender, persists as a problem. Nevertheless, Malkiel chose to end her talk on a confident note, saying that we’re “well on our way.” Are we where yet? Well on our way to what? Malkiel didn’t clarify. If anything, her copious research shows that coeducation was not one step on the road leading to A More Perfect University, but the result of complex, sometimes questionable decisions. The narrative is less about progress than it is about change.
Change does happen, and it can happen with such force that people forget things were ever any other way. Malkiel noted that at Cambridge and Oxford, respectively, Eric Ashby and Hrothgar Habakkuk assuaged some fears by saying that coeducation would be like the removal of the celibacy requirement for fellows a century earlier, to which nobody gave a second thought by the 1970s. But change hardly removes the traces of the past. As Goldman—who went to university during the final years of single-sex Cambridge—said in his introductory remarks, “You get so old, eventually they start writing history about your own experiences.” One day they’ll start writing history about yours.
Yung In Chae is the Associate Editor of Eidolon and an MPhil Candidate in Classics at the University of Cambridge, where she is a Gates Cambridge Scholar. Read more of her work here.
In 1893, Henry Balfour, curator of the Pitt Rivers Museum in Oxford, UK, conducted an experiment. He traced a drawing of a snail crawling over a twig, and passed it to another person, whom he instructed to copy the drawing as accurately as possible with pen and paper. This second drawing was then passed to the next participant, with Balfour’s original drawing removed, and so on down the line. Balfour, in essence, constructed a nineteenth-century version of the game of telephone, with a piece of gastropodic visual art taking the place of whispered phrases. As in the case of the children’s game, what began as a relatively easy echo of what came before resulted in a bizarre, near unrecognizable transmutation.
In the series of drawings, Balfour’s pastoral snail morphed, drawing by drawing, into a stylized bird—the snail’s eyestalks became the forked tail of the bird, while the spiral shell became, in Balfour’s words, “an unwieldy and unnecessary wart upon the, shall we call them, ‘trousers’ which were once the branching end of the twig” (28). Snails on twigs, birds in trousers—just what, exactly, are we to make of Balfour’s intentions for his experiment? What was Balfour trying to prove?
Balfour’s game of visual telephone, at its heart, was an attempt to understand how ornamental forms could change over time, using the logic of biological evolution. The results were published in a book, The Evolution of Decorative Art, which was largely devoted to the study of so-called “primitive” arts from the Pacific. The reason that Balfour had to rely on his constructed game and experimental results, rather than original samples of the “savage” art, was that he lacked a complete series necessary for illustrating his theory—he was forced to create one for his purposes. Balfour’s drawing experiment was inspired by a technique developed by General Pitt Rivers himself, whose collections formed the foundation of the museum. In 1875, Pitt Rivers—then known as Augustus Henry Lane Fox—delivered a lecture titled “The Evolution of Culture,” in which he argued that shifting forms of artifacts, from firearms to poetry, were in fact culminations of many small changes; and that the historical development of artifacts could be reconstructed by observing these minute changes. From this, Pitt Rivers devised a scheme of museum organization that arranged objects in genealogical fashion—best illustrated by his famous display of weapons used by the indigenous people of Australia.
Here, Pitt Rivers arranged the weapons in a series of changing relationships radiating out from a central object, the “simple cylindrical stick” (34). In Pitt Rivers’ system, this central object was the most “primitive” and “essential” object, from which numerous small modifications could be made. Elongate the stick, and eventually one arrived at a lance; add a bend, and it slowly formed into a boomerang. While he acknowledged that these specimens were contemporary and not ancient, the organization implied a temporal relationship between the objects. This same logic was extended to understandings of human groups at the turn of the twentieth century. So-called “primitive” societies like the indigenous groups of the Pacific were considered “survivals” from the past, physically present but temporally removed from those living around them (37). The drawing game, developed by Pitt Rivers in 1884, served as a different way to manipulate time: by speeding up the process of cultural evolution, researchers could mimic evolution’s slow process of change over time in the span of just a few minutes. If the fruit fly’s rapid reproductive cycle made it an ideal model organism for studying Mendelian heredity, the drawing game sought to make cultural change an object of the laboratory.
It is important to note the capacious, wide-ranging definitions of “evolution” by the end of the nineteenth century. Evolution could refer to the large-scale, linear development of entire human or animal groups, but it could also refer to Darwinian natural selection. Balfour drew on both definitions, and developed tools to help him to apply evolutionary theory directly to studies of decorative art. “Degeneration,” the idea that organisms could revert back to earlier forms of evolution, played a recurring role in both Balfour’s and Pitt Rivers’ lines of museum object-based study. For reasons never explicitly stated, both men assumed that decorative motifs originated with realistic images, relying on the conventions of verisimilitude common in Western art. This leads us back, then, to the somewhat perplexing drawing with which Balfour chose to begin his experiment.
Balfour wrote that he started his experiment by making “a rough sketch of some object which could be easily recognized” (24). His original gastropodic image relied, fittingly, on a number of conventions that required a trained eye and trained hand to interpret. The snail’s shell and the twig, for instance, appeared rounded through the artist’s use of cross-hatching, the precise placement of regularly spaced lines which lend a sense of three-dimensional volume to a drawing. Similarly, the snail’s shell was placed in a vague landscape, surrounded by roughly-sketched lines giving a general sense of the surface upon which the action occurred. While the small illustration might initially seem like a straightforward portrayal of a gastropod suctioned onto a twig, the drawing’s visual interpretation is only obvious to those accustomed to reading and reproducing the visual conventions of Western art. Since the image was relatively challenging to begin with, it provided Balfour with an exciting experimental result: specifically, a bird wearing trousers.
Balfour had conducted a similar experiment using a drawing of a man from the Parthenon frieze as his “seed,” but it failed to yield the surprising results of the first. While the particulars of the drawing changed somewhat—the pectoral muscles became a cloak, the hat changed, and the individual’s gender got a little murky in the middle—the overall substance of the image remained unchanged. It did not exhibit evolutionary “degeneration” to the same convincing degree, but rather seemed to be, quite simply, the product of some less-than-stellar artists. While Balfour included both illustrations in his book, he clearly preferred his snail-to-bird illustration and reproduced it far more widely. He also admitted to interfering in the experimental process: omitting subsequent drawings that did not add useful evidence to his argument, and specifically choosing participants who had no artistic training (25, 27).
Balfour clearly manipulated his experiment and the resulting data to prove what he thought he already knew: that successive copying in art led to degenerate, overly conventionalized forms that no longer held to Western standards of verisimilitude. It was an outlook he had likely acquired from Pitt Rivers. In Notes and Queries on Anthropology (1892), a handbook circulated to travelers who wished to gather ethnographic data for anthropologists back in Britain, Pitt Rivers outlined a number of questions that travelers should ask about local art. The questions were leading, designed in a simple yes/no format likely to provoke a certain response. In fact, one of Pitt Rivers’ questions could, essentially, offer the verbal version of Balfour’s drawing game. “Do they,” he wrote, “in copying from one another, vary the designs through negligence, inability, or other causes, so as to lose sight of the original objects, and produce conventionalized forms, the meaning of which is otherwise inexplicable?” (119–21). Pitt Rivers left very little leeway—both for the artist and the observer—for creativity. Might the artists choose to depict things in a certain way? And might the observer interpret these depictions in his or her own way? Pitt Rivers’ motivation was clear. If one did find such examples of copying, he added, “it would be of great interest to obtain series of such drawings, showing the gradual departure from the original designs.” They would, after all, make a very convincing museum display.
Laurel Waycott is a PhD candidate in the history of science and medicine at Yale University. This essay is adapted from a portion of her dissertation, which examines the way biological thinking shaped conceptions of decoration, ornament, and pattern at the turn of the 20th century.
Interview conducted by guest contributor Chloe Bordewich
Timothy Nunan’s recent book, Humanitarian Invasion: Global Development in Cold War Afghanistan (2016), sets global Cold War history on an Afghan stage. It is not, however, the familiar story of the decade-long war between the country’s Soviet-backed communist government and the U.S.-backed Islamic mujahidin. In this account, foreign visions for Afghanistan clash instead in the cedar forests of Paktia, the refugee camps of an imagined Pashtunistan, and the gas fields of Turkestan.
This is an Afghanistan of aid workers and technocrats. While American modernizers and European humanitarians play important roles, Nunan foregrounds Soviet development experts and their protracted attempt to fashion a successful socialist nation to the south. Afghanistan was a canvas across which these different foreign actors sketched out their aspirations for postcolonial states. But modernization, socialism, and humanitarianism all foundered on conceptual errors about the nature of Afghan territory, errors whose consequences were often devastating for Afghans.
When we follow the misadventures of development projects in Afghanistan, a second salient story emerges: the rise and fall on both sides of the Iron Curtain of a certain romance with the idea of the Third World nation-state. By the late 1970s, foreigners’ disillusionment with their attempts to mold Afghanistan resulted in the inversion of international mechanisms once designed to promote postcolonial sovereignty. Countries like Afghanistan were suddenly put on trial, exposed, and shown to be unjust.
In providing a nuanced look into shifting sites of postcolonial sovereignty, Nunan’s account of scholars, engineers, militants, murderous border guards, and traumatized orphans highlights the importance of juxtaposing histories of ideas with the real encounters that unsettle them.
JHI: How did you come to this project? Did you hope to revise popular misconceptions about the history of Afghanistan?
TN: Clearly, concerns about the ethics of humanitarian intervention and the prospects of building a “functional state” in Afghanistan reflect what was going on while I was writing the book. But I did not sit down intending to write a history of U.S. involvement in Afghanistan, or Afghanistan at all. I came to this topic from the north – from the Soviet Union and the study of Soviet Central Asia. I originally thought I would write on the thaw in the 1950s and 1960s in Soviet Central Asia, to look differently at a story usually centered on Russia. However, when I arrived at the archives in Moscow and, later, Dushanbe (in Tajikistan), many of the files I discovered from the 1950s were wooden and bureaucratic. I struggled to think of how I could turn this archival material into a manuscript that would speak to broader concerns.
But in the State Archive of the Russian Federation, I found, for example, the long transcript of a conference in Moscow in 1982 to which Afghan socialist feminists were invited to talk about what a real women’s movement would look like in Afghanistan under conditions of socialist revolution. As I spent more time on Afghanistan, I became aware of the files of Komsomol (Soviet Youth League) advisors, which took me down to the village level. Quickly, I found myself being able to write a certain version of the history of Kandahar or Jalalabad in the 1980s, which seemed much more exciting and current.
JHI: In the first chapter, “How to Write the History of Afghanistan,” you map out in fascinating detail the epistemological framework of the Soviet area studies and development studies apparatus that facilitated, but also was at times in friction with, actual Soviet development projects. As you point out, Soviet Orientology developed alongside anti-Western-imperialism, not as an accomplice of it – a hole in Edward Said’s map of Orientalism.
Today, the unipolarity of scholarship is striking and the Soviet knowledge apparatus has largely been forgotten. What happened to this alternative body of expertise with the fall of the Soviet Union? Do we see parallels emerging today that could challenge Euro-American hegemony over the narration of the history of the Third World?
TN: Soviet Orientology was very different from how graduate students [in Western Europe and North America] are trained to think about Orientalism. Anouar Abdel-Malek, the author of the entry on Orientalism in the Great Soviet Encyclopedia, was an Egyptian Coptic Marxist who came out of the same social background as Edward Said. But rather than challenging the Soviet Orientalist establishment, as Said did in the U.S. context, he was embedded in it.
Alfrid Bustanov, Masha Kirasirova, and others are doing outstanding work on how Russian and Soviet Orientological traditions affected nationalisms inside and outside the USSR, but there is still an enormous amount of Soviet scholarly engagement we don’t know much about.
The question of what happened afterward is a very good one, especially as we ponder what might come after this moment and the problems with the global history approach. Within the former Soviet space, after 1991, institutions of Soviet Orientology suffered from significant funding shortages and positions were cut, and many of the people I interviewed felt embattled.
I spend a lot of time reading mujahidin publications from the 1980s, mostly in Persian, and even when these journals translate works of propaganda written by Saudi scholars, they cite Russian orientalists such as Vasily Bartold. The Soviet Orientological tradition appears to have been received, processed, and understood by actors working in the Arabic- and Persian-speaking world. In Afghanistan, Syria, Iraq, Algeria – places that were strongly aligned with the Soviet Union – there were academies of sciences that employed dozens of people. What was it like to be a member of one of these institutions in Syria after 1970, or in Afghanistan after 1955, or 1978 or 1979? These are important stories that I was only able to gloss in Humanitarian Invasion, but which I hope future works will elucidate.
JHI: Some of the most interesting sources you use are interviews with these Soviet Orientologists who worked in and studied Afghanistan, mostly in the 1970s and 1980s. How did you track down these scholars, and how do you deploy their stories in the book?
TN: I wanted to access Soviet subjectivity of experiences in Afghanistan beyond the archive. What did Soviet Uzbeks and Tajiks think about Afghanistan? Did they suddenly convert to Wahhabism? Did they feel some special bond with Afghans?
The interviews would have been impossible without a yearbook that Komsomol advisors had produced about themselves around 2006. When I arrived in Dushanbe in summer 2013, I started Yandex-ing [Russian Googling] these people to find out where they were. One person responded and that led to more introductions. Their networks ran all the way from Kiev to the border of Afghanistan, and I was able to travel widely around the former Soviet Union to interview many of them. By talking with these people I identified figures and turning points that distilled the themes they themselves emphasized.
JHI: In your introduction, you write that you hope to cast Afghanistan not as the “graveyard of empires,” as it has often been known, but as the “graveyard of the Third World nation-state.” Just as the former has more to do with the foreign empires than with Afghanistan itself, the latter speaks to the idea of the Third World nation-state as it was championed by foreign actors and transnational bodies – and their eventual disillusionment with it. Could you elaborate on the life and death of the international romance with the Third World nation-state? What role did Afghanistan play in shaping it?
TN: Afghanistan gained its independence from the British Empire in 1919, and the Soviet Union was the first country to recognize it. But what did this recognition mean? From 1914 to 1945, countries could become independent, but in many cases didn’t have the geopolitical wherewithal to make this sovereignty meaningful. Furthermore, there was no significant international forum not already dominated by the imperial powers. This changed after 1945 and especially after 1960, when not only did independent nation-states have a forum, the United Nations, in which they could gain representation, but there were also new rules within that international organization that allowed them to exercise a certain kind of power not commensurate with their GDP or whether or not they had nuclear weapons. We might point to 1960 as a turning point, when the UN General Assembly overwhelmingly affirms the independence of colonized people as a human right, and when “civilization” is erased as a criterion for admission into the United Nations.
This lack of commensurability between sovereignty at the United Nations and geopolitical heft began to have real effects on international society. Throughout the mid-1960s and especially from the 1970s onward, many Third World nation-states, including Afghanistan and often sponsored by the Soviet Union, began to realize that they could sponsor resolutions against Israel, the Portuguese empire, apartheid South Africa – and attempt to delegitimize entire states’ right to exist. By the mid-1970s, in addition to this power, however symbolic, at the United Nations, nations were taking control of their destinies with armed force. Broadly speaking, if you had enough Soviet or Chinese weapons, you could push back the imperialists and eventually gain enough power at the level of international organizations to delegitimize groups that disagreed with you.
However, Afghanistan was one of the turning points against this mood, starting in the late 1970s. European actors became disillusioned with this Third World nation-state form through events like the Vietnamese boat people crisis of the late 1970s, and the Pol Pot regime in Cambodia. Often, post-colonial sovereignty seemed more like an excuse to murder ethnic minorities and political dissidents than to realize a vision of freedom. Arguably, China’s post-1970s détente with the United States was a factor, as well. Leftists saw that China no longer offered a viable vision of revolution, but was just a lackey of American finance capital and imperialism. Many of the intellectuals who went on to found humanitarian NGOs had lost faith in the USSR as a revolutionary force since the Prague Spring, or, at the very latest, the publication of The Gulag Archipelago.
In short, by the late 1970s, these East Asian and Southeast Asian fantasies of the future were discredited. One place these groups turned was humanitarian action, rather than the Third World nation-state, as a new form of political organization. But the old tools of delegitimization and Third World politics were applied in reverse to places like Afghanistan. Forums pioneered for use against Israel or South Africa, such as the UN Special Rapporteur and human rights investigations, were flipped. It was suddenly no longer the oppression of black Africans or Palestinians qua colonized subjects but rather the oppression of Afghans qua humans under a Third World socialist regime that constituted the supreme crime within international society. The reversal of this Third World logic onto Third World nations is one of the key themes of the book.
JHI: One of the overarching themes of the book is sovereignty: sovereignty as it was imagined and sovereignty as it was performed. Could you flesh out for us some of the major disjunctions between the ways different foreign actors, as well as Afghan politicians, conceptualized Afghan sovereignty, and acts of sovereignty that were carried out on the ground?
TN: The Afghan government was extremely ambitious in claiming that other countries were parts of it, yet was very weakly territorialized. From 1947 onward, when Pakistan is formed, Afghanistan does not recognize its entire eastern border. One official Afghan government map has a disclaimer on it saying “this map was composed in great haste and none of the information on it should be taken to be reliable.” There’s an odd mix of hyper-ambition and total insecurity. The indeterminacy of the border also creates catastrophic consequences for people living around it.
In the 1980s, Soviet border guards extend the Soviet border regime hundreds of kilometers inside Afghanistan, and murder Afghans within Afghanistan’s borders. Children are another interesting lens. On one hand, the Soviet Union says that children are the future of the nation and need to be educated and mobilized as symbols of the nation’s future. Orphans, especially, are taken to the Soviet Union. From the Soviet Union’s point of view, there’s nothing wrong with this. Insofar as states have a right to exist and defend their borders, it then follows that the state has a right to mobilize its citizens–men, in particular–to defend those borders and weave protection of the state with the citizen’s life-cycle.
In the 1970s and 1980s, however, humanitarian actors like Amnesty International become concerned with children having the right to a nationality and the right not to be trafficked out of the nation-state of their birth. And yet, those deploying this humanitarian logic, who are often concerned with diagnosing children as traumatized, have no problem taking the children out of their familiar contexts to receive medical treatment. Here we see two different logics of what the Third World nation-state project is supposed to be about: the solution for creating a national future, or the problem causing people to be traumatized for life.
JHI: We’re in a moment of deep suspicion not only toward internationalism, but also toward humanitarianism. In this context, a particularly timely thread of the book traces how states, Leftist activists, and eventually NGO workers envisioned social justice and moral responsibility toward distant people in need. What is the landscape of conviction in Humanitarian Invasion? Where does it intersect with expertise, on one hand, and geopolitical strategy on the other?
TN: While I see the humanitarian groups that I look at most closely – Doctors without Borders (MSF) and the Swedish Committee for Afghanistan (SCA) – as entangled in this geopolitical game, I don’t view them as having had nefarious intentions. Many of the groups that enter the Afghan theater via Pakistan in the 1980s initially try to stay very distant from a geopolitical focus. But there are different trajectories that these groups follow, with the Swedes trying to adopt a more consistent anti-imperialism and the French flirting with explicit engagement in politics.
Regardless of specific anti-imperialist or anti-totalitarian politics, new regimes of intervention are created from the late 1970s onward. Rather than saying, “OK, the Afghans or Cambodians have had their socialist revolution, now they should finally be free from foreign interference,” NGOs embed themselves in trans-border resistance movements that reframe those Third World citizens as subjects of new international regimes of governance. NGOs are able to diagnose Afghans as traumatized or suffering from disease, and this becomes grounds for further intervention, or shipment of supplies into a country without consulting its government. Over time, this contributes to a shift in which the dominant optic employed when engaging with Third World populations is not so much that of the guerrilla fighter but of the traumatized individual, the wounded girl. This reframing wasn’t intentionally nefarious, but it did recast subaltern actors as non-political.
There is a strange boomerang effect to all of this. In the 1980s, identifying trauma or certain types of wounds became a carte blanche for aiding armed insurrections in Third World countries–as in the case of Afghanistan, Cambodia, and Ethiopia. Today, however, as scholars like Miriam Ticktin have shown, refugees have to demonstrate exactly these kinds of wounds in order to gain the right to stay in European countries. In both cases, a discourse centered around individual, often corporeal trauma became the litmus test for whether states could maintain control of their borders, but a procedure that once allowed Europeans to insert themselves into Afghanistan now allows Afghans and others to claim a (marginal) space in European settings. Pushing back, governments like Germany have sought to classify entire countries, and specific provinces of Afghanistan, as “safe countries of origin” or “safe zones” from which it becomes procedurally impossible to file such an asylum claim. The boomerang, then, is that Europeans are grappling with these humanitarian claims in an obviously political way, even as the turn toward humanitarianism was itself motivated by an exhaustion with traditional left-right politics in the first place.
JHI: So the Soviets, while pursuing a parallel project, never really bought into the humanitarian discourse?
TN: Yes, though this does not mean they lacked something. The Soviets had a strong interest in childhood as a stage of life that is political and is protected, not, as we would put it, a stage of life that is protected and therefore should not be political.
Russian critiques of the creation of humanitarian protectorates in places like Bosnia, Kosovo, and even Libya and Afghanistan hold that humanitarian action without a strong central state is nonsense. Syria is the most dramatic instance of where these impulses are clashing again. The Russian government claims that Syria is a sovereign member state of the United Nations that has invited Russia, Iran, and Hezbollah (not a state) to aid it in an act of collective self-defense—something permitted under the United Nations charter. Russia also provides humanitarian aid to government-held areas in Syria through its Ministry of Defense. In contrast, Russian diplomats would argue, Western media have conspired with Turkey, Qatar, and Saudi Arabia to portray the jihad against Damascus exclusively in terms of traumatized children, the destruction of Aleppo, and so on. Now as in Afghanistan in the 1980s, the tension has to do with the legitimacy of post-colonial states and with reading the Syrian people’s aspirations not solely in terms of geopolitics or trauma.
JHI: Humanitarian Invasion gives an account of global actors making decisions with global repercussions, but it is at the same time firmly grounded in a particular place. So, where do you see global history heading as a field, and where does this book fit? What are the potential risks of global history?
TN: Obviously, Humanitarian Invasion is not a history of the world or of every place in the world. Rather, the book’s central concern is shifting meanings of postcolonial sovereignty during the Cold War. The Afghan-Pakistan borderlands form a particularly rich location to examine how this idea of the Third World nation-state was changing over time, precisely because so many different actors brought their own conceptual baggage to it. I would welcome anyone who wants to write a history of the Cambodian-Thai borderlands or, indeed, much of Ethiopia during the 1980s. MSF, in fact, had a larger presence in the Cambodian-Thai theater than in the Afghan one, and it would be fascinating to understand what difference it makes when these NGOs are collaborating against the Vietnamese, who had been their heroes only a decade before.
Yet as historians like Dipesh Chakrabarty have pointed out, the intensive language training and multi-archive projects of many global historians depend on the extensive resources that only wealthy American and Western European universities possess. One way we can correct this imbalance, learn from colleagues in other countries, and maintain a spirit of humility about our work is to remember, even while working on so-called global themes, that events are still taking place in actual places with local histories, and never to insist on a hierarchy in which NGO actors are more important than national stories.
For example, writing Humanitarian Invasion, I was not able to explore as much as I would have liked how Afghans themselves changed their political language to respond to the surge in humanitarian ideas (and funding streams) that emerged in the 1980s. I would have liked to probe how much the massive changes of the 1980s actually affected the ways Afghans talk about politics, what they expect from an Afghan state, and what needs they expect to be met by international organizations. How ideas and discourses are transmitted from North to South or South to North is a major interest for global historians today, and that’s an area where “local” scholars with a knowledge of Pashto and a deeper knowledge of regional political thought could make a great contribution.
JHI: What is your current project, and how did it evolve from Humanitarian Invasion?
TN: I would have liked to consider, more seriously, Afghan socialists as thinkers. What did socialism actually mean to them? How did they, on the front line of an Afghan national jihad and the emerging global jihadist movement, understand political Islam? The current project looks at how socialists in the Soviet Union and allied left-wing groups such as the Afghan Communists and Iranian Tudeh Party understood political Islam or Pan-Islamism, particularly in Iran and Afghanistan, where Islamists took violent control of states in the 1980s.
In 1914, the Russian orientalist Vasily Bartold writes that Pan-Islamism is totally bogus, that it’s a political program created by the Ottomans with German support. Fast-forward 60 or 70 years, and there’s enormous anxiety about Islam not only destabilizing client states such as Afghanistan or Syria, but also infiltrating the Soviet Union itself. I was shocked to discover a 1983 publication by an Adjarian nationalist from southwest Georgia describing Muslims as “something that crawled out of a trash heap, who need to be weeded out of our garden” – things you expect to hear from Geert Wilders, Marine Le Pen, or Steve Bannon today. I became really interested in how the Soviet Union and Russian scholars go from viewing Pan-Islamism as a potential ally in fomenting an anti-Western and anti-colonial global front, to viewing Muslims and Pan-Islamism as inherently opposed to the interests of the Soviet Union. In doing so, I hope to provide a unique perspective on contemporary concerns about the threat, real or imagined, of Muslim unity and Muslim communities in Europe and the United States.
The editors wish to thank Timothy Nunan for his graciousness in granting this interview.
Chloe Bordewich is a PhD Student in History and Middle Eastern Studies at Harvard University. She currently works on histories of information, secrecy, and scientific knowledge in the late and post-Ottoman Arab world, especially Egypt. She blogs at chloebordewich.wordpress.com.
In Chapter 3 of The History Manifesto, David Armitage and Jo Guldi support historians’ increasing willingness to engage with topics generally left to economists. Whereas the almost total dominance of game-theoretic modelling in economics has led to abstract explanations of events in terms of market principles, history, with its greater attention to ruptures and continuities of context and its “apprehension of multiple causality,” can push against overly reductionist stories of socio-economic problems (The History Manifesto, 87). Citing Thomas Piketty’s Capital as a possible example, Armitage and Guldi propose a longue-durée approach to the past that, by empirically documenting the evolution of a phenomenon (say, income inequality or land reform) over time, can disclose context-specific factors and patterns that economic models generally elide.
In this blog post, I ask what intellectual history in particular might have to gain (and contribute) by following Armitage and Guldi’s provocation and taking on a topic that Western academia has almost totally ceded to economics since the 1970s: the study of global poverty. Extreme or mass poverty in the Global South is a well-worn term in the literature on cosmopolitan justice, development economics, global governance, and foreign policy. Across economists like Jeffrey Sachs, Paul Collier, Abhijit Banerjee, and Esther Duflo, poverty is usually invoked as a sign of institutional failure—domestic or international—and a problem to be solved through aid or the reform of market governance. I want to suggest here that the contemporary dominance of economic analysis has foreclosed other approaches to mass poverty in the twentieth century. These are discourses that global intellectual history is uniquely able to excavate.
To illustrate my point, I want to turn to a common trope I have found while researching political thought in colonial India. Between approximately 1929-30 and 1950, the Indian National Congress and other organizations fighting for self-determination began to demand the introduction of universal adult franchise in British South Asia. The colony had seen very limited elections at the provincial level since 1892. Through a series of acts in 1909, 1919, and 1935, the British Government gradually widened the powers of legislatures with native representation, while keeping the electorate limited according to property ownership and income. In its report to Parliament in 1919, the Indian Franchise Committee under Lord Southborough emphasized that the ‘intelligence’ and ‘political education’ required for modern elections necessitated a strict property qualification (especially in a mostly rural country like India).
Against this background, extending rights to vote and hold office to laborers and the landless poor was anti-imperial both in the immediate sense of challenging British constitutional provisions and, more generally, in inverting the philosophy of the colonial state. Dipesh Chakrabarty has accurately and evocatively described the nationalist demand for universal suffrage as a gesture of “abolishing the imaginary waiting room of history” to which Indians had been consigned by modern European thought (Provincializing Europe, 9). Indian demands for the adult franchise were almost always articulated with reference to the country’s economic condition. The poor, it was said, needed to participate directly in politics so that the state which governed them could adequately represent their interests.
M.K. Gandhi (1869-1948) began making such arguments in support of adult franchise soon after he gained leadership of the Indian independence movement around 1919. His ideal of a decentralized village-based democracy (panchayati raj) sought to address the deep socio-economic inequality of colonial society by bringing the rural poor into decision-making processes. Under the Gandhian program, fully participatory local village councils would combine legislative, judicial, and executive functions. As Karuna Mantena has noted in her recent study of Gandhi’s theory of the state, panchayati raj based on universal suffrage was seen to empower the poor by giving them an institutional mechanism to guard against the agendas of urban elites and landed rural classes.
Through the 1930s and 1940s, most demands for extending suffrage to the poor shared Gandhi’s premise. Even when leaders fundamentally disagreed with Gandhi’s idealization of village self-rule, they similarly considered the power to vote and hold office as a crucial safeguard against further economic vulnerability. In the Constitution of Free India he proposed in 1944, Manabendra Nath Roy (1887-1954), the ex-Communist leader of the Radical Democratic Party, argued for full enfranchisement and participatory local government on essentially defensive grounds, to protect “workers, employees, and peasants” from privileged interests (Constitution of Free India, 12).
One of the most sophisticated analyses of the problem of poverty for Indian politics during these decades came from B.R. Ambedkar (1891-1956), a jurist, anti-caste leader, and the main drafter of the Constitution of independent India in 1950. Ambedkar had been a vocal advocate for removing property, income, and literacy qualifications for voting and holding office since 1919, when he testified before Lord Southborough’s committee. As independence became increasingly likely from the 1930s, Ambedkar’s fundamental concern was to ensure that the poorest, landless castes of India had constitutional protections to vote and to represent themselves as separate groups in the legislature. Writing to the Simon Commission for constitutional reform in 1928, Ambedkar saw direct participation of the poor as the only way to forestall the rise of a postcolonial oligarchy: “the poorer the individual the greater the necessity of enfranchising him…. If the welfare of the worker is to be guaranteed from being menaced by the owners, the terms of their associated life must be constantly resettled. But this can hardly be done unless the franchise is dissociated from property and extended to all propertyless adults” (“Franchise,” 66).
During the height of the Indian independence movement in the 1930s and 1940s, there was thus an acute awareness of mass poverty as a key problem confronted by modern politics outside the West. Participatory democracy was in many ways the answer to an economic issue: colonialism’s creation of a large population without security of income or property, placed at the very bottom of networks of production and exchange that favored either Western Europe or a native elite. This was the population that Gandhi repeatedly described as holding onto its existence in a precarious condition of lifeless “slavery,” completely lacking any economic power. Only fundamental changes in the nature of the modern state, to make it accessible to those who had been constructed as objects of expert rule and as backward outliers to productivity and prosperity, could return dignity to the poor.
My intention in briefly reconstructing Indian debates around giving suffrage, self-representation, and engaged citizenship to some of the most vulnerable and powerless people in the world is straightforward: attempts to address the effects of inequality in the Global South through the vote and local democracy rather than exclusively through international governance and economic reconstruction need to have a central place in any story we tell about twentieth-century poverty. Before they were taken up in the literature on efficient economic institutions and the rhetoric of international aid and development in the early 1950s (a shift usefully analyzed by anthropologists like Akhil Gupta and Arturo Escobar), colonial narratives about Africa, Latin America, and Asia as regions of intractable, large-scale poverty, famine, and market failure informed the political thought of anti-imperial democracy. The idea that existing economic conditions in India were problematic and deeply unjust was the basis of giving greater political power to the poor. A global conceptual history of ‘mass poverty’ in the twentieth century can therefore situate popular Third World movements that sought to increase the agency of the poor alongside more familiar, and more hegemonic, projects of Western humanitarianism.
This brings me back to my earlier point about what we might gain by re-thinking, with The History Manifesto, the relationship between intellectual history and economics. Once we start to trace how the categories and variables deployed in economic analysis emerged and changed over time, and how they were interpreted and practiced in a wide range of historical contexts, we can access dimensions of these concepts that may be completely absent from economic modeling. On the specific question of global poverty, an intellectual history that documents how the concept travelled between Third World thought, social movements, and global governance might give us theories of poverty alleviation that entail much more than simply distributive justice and resource allocation. This would be a form of intellectual history committed, as Armitage and Guldi put it, to “speaking back” to the “mythologies” of economics by expanding the timeframes and theoretical traditions which inform the discipline’s methods (The History Manifesto, 81-85).
Tejas Parasher is a PhD candidate in political theory at the University of Chicago. His research interests are in the history of political thought, comparative political theory, and global intellectual history, especially on questions of state-building, decolonization, and market governance in the mid-twentieth century, with a regional focus on South Asia. His dissertation examines the rise of redistribution as a discourse of government and economic policy in India through the 1940s. He also writes more broadly on issues of socio-economic inequality in democratic and constitutional theory, human rights, and the history of political thought.
Quentin Skinner is a name to conjure with. A founder of the Cambridge School of the history of political thought. Former Regius Professor of History at the University of Cambridge. The author of seminal studies of Machiavelli, Hobbes, and the full sweep of Western political philosophy. Editor of the Cambridge Texts in the History of Political Thought. Winner of the Balzan Prize, the Wolfson History Prize, the Sir Isaiah Berlin Prize, and many others. On February 24, Skinner visited Oxford for the Ertegun House Seminar in the Humanities, a thrice-yearly initiative of the Mica and Ahmet Ertegun Graduate Scholarship Programme. In conversation with Ertegun House Director Rhodri Lewis, Skinner expatiated on the craft of history, the meaning of liberty, trends within the humanities, his own life and work, and a dizzying range of other subjects.
Names are, as it happens, a good place to start. As Skinner spoke, an immense and diverse crowd filled the room: Justinian and Peter Laslett, Thomas More and Confucius, Karl Marx and Aristotle. The effect was neither self-aggrandizing nor ostentatious, but a natural outworking of a mind steeped in the history of ideas in all its modes. The talk is available online here; accordingly, instead of summarizing Skinner’s remarks, I will offer a few thoughts on his approach to intellectual history as a discipline, the aspect of his talk which most spoke to me and which will hopefully be of interest to readers of this blog.
Lewis’s opening salvo was to ask Skinner to reflect on the changing work of the historian, both in his own career and in the profession more broadly. This parallel set the tone for the evening, as we followed the shifting terrain of modern scholarship through Skinner’s own journey, a sort of historiographical Everyman (hardly). He recalled his student days, when he was taught history as the exploits of Great Men, guided by the Whig assumptions of inevitable progress towards enlightenment and Anglicanism. In the course of this instruction, the pupil was given certain primary texts as “background”—More’s Utopia, Hobbes’s Leviathan, John Locke’s Two Treatises of Government—together with the proper interpretation: More was wrongheaded (in being a Catholic), Hobbes a villain (for siding with despotism), and Locke a hero (as the prophet of liberalism). Skinner mused that in one respect his entire career has been an attempt to find satisfactory answers to the questions of his early education.
Contrasting the Marxist and Annaliste dominance that prevailed when he began his career with today’s broad church, Skinner spoke of a shift “towards a great pluralism,” an ecumenical scholarship welcoming intellectual history alongside social history, material culture alongside statistics, paintings alongside geography. For his own part, a Skinner bibliography joins studies of the classics of political philosophy to articles on Ambrogio Lorenzetti’s The Allegory of Good and Bad Government and a book on William Shakespeare’s use of rhetoric. And this was not special pleading for his pet interests. Skinner described a warm rapport with Bruno Latour, despite a certain degree of mutual incomprehension and wariness of the extremes of Latour’s ideas. Even that academic Marmite, Michel Foucault, found immediate and warm welcome. Where many an established scholar I have known snorts in derision at “discourses” and “biopolitics,” Skinner heaped praise on the insight that we are “one tribe among many,” our morals and epistemologies a product of affiliation—and that the tribe and its language have changed and continue to change.
My ears pricked up when, expounding this pluralism, Skinner distinguished between “intellectual history” and “the history of ideas”—and placed himself firmly within the former. Intellectual history, according to Skinner, is the history of intellection, of thought in all forms, media, and registers, while the history of ideas is circumscribed by the word “idea,” to a more formal and rigid interest in content. On this account, art history is intellectual history, but not necessarily the history of ideas, as not always concerned with particular ideas. Undergirding all this is a “fashionably broad understanding of the concept of the text”—a building, a mural, a song are all grist for the historian’s mill.
If we are to make a distinction between the history of ideas and intellectual history, or at least to explore the respective implications of the two, I wonder whether there is not a drawback to intellection as a linchpin, insofar as it emphasizes an intellect to do the intellection. To focus on the genesis of ideas is perhaps to the detriment of understanding how they travel and how they are received. Moreover, does this overly privilege intentionality, conscious intellection? A focus on the intellects doing the work is more susceptible, it seems to me, to the Great Ideas narrative, that progression from brilliant (white, elite, male) mind to brilliant (white, elite, male) mind.
At the risk of sounding like postmodernism at its most self-parodic, is there not a history of thought without thinkers? Ideas, convictions, prejudices, aspirations often seep into the intellectual water supply divorced from whatever brain first produced them. Does it make sense to study a proverb—or its contemporary avatar, a meme—as the formulation of a specific intellect? Even if we hold that there are no ideas absent a mind to think them, I posit that “intellection” describes only a fraction (and not the largest) of the life of an idea. Numberless ideas are imbibed, repeated, and acted upon without ever being much mused upon.
Skinner himself identified precisely this phenomenon at work in our modern concept of liberty. In contemporary parlance, the antonym of “liberty” is “coercion”: one is free when one is not constrained. But, historically speaking, the opposite of liberty has long been “dependence.” A person was unfree if they were in another’s power—no outright coercion need be involved. Skinner’s example was the “clever slave” in Roman comedies. Plautus’s Pseudolus, for instance, acts with considerable latitude: he comes and goes more or less at will, he often directs his master (rather than vice versa), he largely makes his own decisions, and all this without evident coercion. Yet he is not free, for he is always aware of the potential for punishment. A more nuanced concept along these lines would sharpen the edge of contemporary debates about “liberty”: faced with endemic surveillance, one may choose not to express oneself freely—not because one has been forced to do so, but out of that same awareness of potential consequences (echoes of Jeremy Bentham’s Panopticon here). Paradoxically, even as our concept of “liberty” is thus impoverished and unexamined, few words are more pervasive in present discourse.
On the other hand, intellects and intellection are crucial to the great gift of the Cambridge School: the reminder that political thought—and thought of any kind—is an activity, done by particular actors, in particular contexts, with particular languages (like the different lexicons of “liberty”). Historical actors are attempting to solve specific problems, but they are not necessarily asking our questions nor giving our answers, and both questions and answers are constantly in flux. This approach has been an antidote to Great Ideas, destroying any assumption that Ideas have a history transcending temporality. (Skinner acknowledged that art historians might justifiably protest that they knew this all along, invoking E. H. Gombrich.)
The respective domains of intellectual history and the history of ideas returned when one audience member asked about their relationship to cultural history. Cultural history for Skinner has a wider set of interests than intellectual history, especially as regards popular culture. Intellectual history, by contrast, is avowedly elitist in its subject matter. But, he quickly added, it is not at all straightforward to separate popular and elite culture. Theater, for instance, is both: Shakespeare is the quintessence of both elite art and demotic entertainment.
On some level, this is incontestable. Even as Macbeth meditates on politics, justice, guilt, fate, and ambition, it is also gripping theater, filled with dramatic (sometimes gory) action and spiced with ribald jokes. Yet I query the utility, even the viability, of too clear a distinction between the two, either in history or in historians. Surely some of the elite audience members who appreciated the philosophical nuances also chuckled at the Porter’s monologue, or felt their hearts beat faster during the climactic battle? Equally, though they may not have drawn on the same vocabulary, we must imagine some of the “groundlings” came away from the theater musing on political violence or the obligations of the vassal. From Robert Scribner onwards, cultural historians have problematized any neat model of elite and popular cultures.
In any investigation, we must of course be clear about our field of study, and no scholar need do everything. But trying to circumscribe subfields and subdisciplines by elite versus popular subjects, by ideas versus intellection versus culture, is, I think, to set up roadblocks in the path of that most welcome move “towards a great pluralism.”
A distinctive feature of the early years of the Cambridge English Tripos (examination system), in which close “practical criticism” of individual texts was balanced by the study of the “life, literature, and thought” surrounding them, was that the social and intellectual background to literature acquired an equivalent importance to that of literature itself. Stefan Collini’s Ford Lectures, in common with his essay collections, Common Reading and Common Writing, have over the past several weeks richly demonstrated that the literary critics who were largely the products of that Tripos can themselves be read and historicized in that spirit. Collini, whose resistance to the disciplinary division between the study of literature and that of intellectual history has proved so fruitful over many years, has focused on six literary critics in his lecture series: T. S. Eliot, F. R. Leavis, L. C. Knights, Basil Willey, William Empson, and Raymond Williams. All, with the exception of Eliot, were educated at Cambridge; and all came to invest the enterprise of literary criticism with a particular kind of missionary importance in the early and middle decades of the twentieth century. Collini has been concerned to explore the intellectual and public dynamics of that mission, by focusing on the role of history in these critics’ thought and work. His argument has been twofold. First, he has emphasized that the practice of literary criticism is always implicitly or explicitly historical in nature. The second, and more intellectual-historical, element of his case has consisted in the suggestion that literary critics offered a certain kind of “cultural history” to the British public sphere. 
By using literary and linguistic evidence in order to unlock the “whole way of life” of previous forms of English society, and to reach qualitative judgements about “the standard of living” in past and present, critics occupied territory vacated by professional historians at the time, while also contributing to wider debates about twentieth-century societal conditions.
Collini’s lectures did not attempt to offer a full history of the development of English as a discipline in the twentieth century. Nevertheless, they raised larger questions for those interested in the history of the disciplines both of English and History in twentieth-century Britain, and what such histories can reveal about the wider social and cultural conditions in which they took shape. How should the findings from Collini’s penetrating microscope modify, or provide a framework for, our view of these larger organisms?
First, a question arises as to the relationship between the kind of historical criticism pursued by Collini’s largely Cantabrigian dramatis personae, and specific institutions and educational traditions. E. M. W. Tillyard’s mildly gossipy memoir of his involvement in the foundation of the Cambridge English Tripos, published in 1958 under the title of The Muse Unchained, recalls an intellectual environment of the 1910s and 1920s in which the study of literature was exciting because it was a way of opening up the world of ideas. The English Tripos, he held, offered a model of general humane education—superior to Classics, the previous such standard—through which the ideals of the past might nourish the present. There is a recognizable continuity between these aspirations, and the purposes of the cultural history afterwards pursued under the auspices of literary criticism by the subsequent takers of that Tripos whom Collini discussed—several of whom began their undergraduate studies as historians.
But how far did the English syllabuses of other universities, and the forces driving their creation and development, also encourage a turn towards cultural history, and how did they shape the kind of cultural history that was written? Tillyard’s account is notably disparaging of philological approaches to English studies, of the kind which acquired and preserved a considerably greater prominence in Oxford’s Honour School of “English Language and Literature”—a significant pairing—from 1896. Did this emphasis contribute to an absence of what might be called “cultural-historical” interest among Oxford’s literary scholars, or alternatively give it a particular shape? Widening the canvas beyond Oxbridge, it is surely also important to heed the striking fact that England was one of the last countries in Europe in which widespread university interest in the study of English literature took shape. If pressed to single out any one individual as having been responsible for the creation of the “modern” form of the study of English Literature in the United Kingdom—a hazardous exercise, certainly—one could do worse than to alight upon the Anglo-Scottish figure of Herbert Grierson. Grierson, who was born in Shetland in 1866 and died in 1960, was appointed to the newly-created Regius Chalmers Chair of English at Aberdeen in 1894, before moving to take up a similar position in Edinburgh in 1915. In his inaugural lecture at Edinburgh, Grierson argued for the autonomy of the study of English literature from that of British history. As Cairns Craig has recently pointed out, however, an evaluative kind of “cultural history” is unmistakably woven into his writings on the poetry of John Donne—which for Grierson prefigured the psychological realism of the modern novel—and his successors. 
For Grierson, the cultural history of the modern world was structured by a conflict between religion, humanism, and science—evident in the seventeenth century, and in the nineteenth—to which literature itself offered, in the present day, a kind of antidote. Grierson’s conception of literature registered his own difficulties with the Free Church religion of his parents, as well, perhaps, as the abiding influence of the broad Scottish university curriculum—combining study of the classics, philosophy, psychology and rhetoric—which he had encountered as an undergraduate prior to the major reforms of Scottish higher education begun in 1889. Did the heroic generation of Cambridge-educated critics, then, create and disseminate a kind of history inconceivable without the English Tripos? Or did they offer more of a local instantiation of a wider “mind of Britain”? A general history of English studies in British universities, developing for example some of the themes discussed in William Whyte’s recent Redbrick, is certainly a desideratum.
Collini partly defined literary critics’ cultural-historical interests in contradistinction to a shadowy “Other”: professional historians, who were preoccupied not by culture but by archives, charters and pipe-rolls. As Collini pointed out, the word “culture”—and so the enterprise of “cultural history”—has admitted of several senses in different times and in the usage of different authors. The kind of cultural history which critics felt they could not find among professional historians, and which accordingly they themselves had to supply, centered on an understanding of lived experience in the past, and on identifying the roots of their present discontents and, perhaps, their correctives. This raises a second interesting problem, the answer to which should be investigated rather than assumed: what exactly became of “cultural history” in these senses within the British historical profession between around 1920 and 1960?
Peter Burke and Peter Ghosh have alike argued that the growing preoccupation of academic history with political history in the nineteenth and earlier twentieth centuries acted regrettably to constrict that universal application of historical method to all facets of human societies which the Enlightenment first outlined in terms of “conjectural history.” This thesis is true in its main outlines. But there were ways in which cultural history retained a presence in British academic history in the period of what Michael Bentley thinks of as historiographical “modernism,” prior to the transformative interventions of Keith Thomas, E. P. Thompson and others in the 1960s and afterwards. In the field of religious history, for example, Christopher Dawson—while holding the title of “Lecturer in the History of Culture” at University College, Exeter—published a collection of essays in 1933 entitled Enquiries into religion and culture. English study of socioeconomic history in the interwar and postwar years also often extended to, or existed in tandem with, interest in what can only be described as “culture.” Few episodes might appear as far removed from cultural history as the “storm over the gentry,” for example—a debate over the social origins of the English Civil War that was played out chiefly in the pages of the Economic History Review in the 1940s and 1950s. But the first book of one of the main participants in that controversy, Lawrence Stone, was actually a study entitled Sculpture in Britain: the middle ages, published in 1955 in the Pelican History of Art series. Although Stone came to regard it as a diversion from his main interests, its depictions of a flourishing artistic culture in late-medieval Britain, halted by the Reformation, may still be read as a kind of cultural-historical counterpart to his better-known arguments for the importance of the sixteenth and seventeenth centuries as a period of social upheaval.
If it is true that literary criticism is always implicitly or explicitly historical, perhaps it is also true that few kinds of history have been found to be wholly separable from cultural history, broadly defined.
Joshua Bennett is a Junior Research Fellow in History at Christ Church, Oxford.