
Empire of Abstraction: British Social Anthropology in the “Dependencies”

By Nile A. Davies

It would seem to be no more than a truism that no material can be successfully manipulated until its properties are known, whether it be a chemical compound or a society of human beings; and from that it would appear to follow that the science whose material is human society should be called upon when nothing else than the complete transformation of a society is in question.

Lucy Mair, “Colonial Administration as a Science” (1933)

On March 24th 1945, the British scientific journal Nature breathlessly reported that £120,000,000 of research funds (the equivalent of over 5 billion USD today) would be made available by the passing of the Colonial Development and Welfare Bill: a momentous commitment to the expansion of colonial study “which should be of interest to administrators, scientific men and technologists, and all who are concerned with the welfare and advancement of the British Colonial possessions.” The material conditions of colonial research would significantly determine the scope and energies of empirical labor in the social sciences. Specifically, ideas of colonial welfare drew conspicuously on the authority of experts in Social Anthropology—in its varying professional and institutional forms—to apprehend the flux and metamorphosis of human relations in a new international order.

Such extraordinary expenditures reflected broad desires throughout the previous decade for a science of administration—a means with which to know and understand a field of possibilities in an age of global “interpenetration” in colonized societies which, in their particularity, could not be addressed by “the application of general principles, however humanitarian.”[1] As data pertaining to the “forces and spirit of native institutions” were increasingly called upon for the maintenance of social cohesion, there emerged an imperative for the cultivation of “specially trained investigators [devoted to] comprehensive studies in the light of a sociological knowledge of the life of a community.”

Table of Contents from Lucy Mair, Welfare in the British Colonies (London: Royal Institute of International Affairs, 1944).

The history of colonial welfare recalls the contours of “governmentality,” the term coined by Michel Foucault to describe how power is secured through forms of expertise and intervention, attending to “the welfare of the population, the improvement of its condition, the increase of its wealth, longevity, health, etc.”[2] Drawn into the enterprise of administration, the design and formulation of social and economic research by anthropologists became increasingly associated with the high moral purpose of colonial reform. But, as Joanna Lewis notes, such a remit encompassed an impossibly wide range of aims and instruments for control: “animating civil society against social collapse; devising urban remedies for the incapacitated and the destitute; correcting the deviant” (75). Beset by the threat of rapid social change and indigenous nationalisms, the potential of the worldview offered up by intimate knowledge of the social structure suggested a means by which history itself might be forestalled. Well poised to anticipate the unforeseeable in a world of collapsing regimes, the great enthusiasm for structural functionalism in particular—akin to the field of international relations and its entanglements with empire—derived from its popular image as a tool of divination, seeming to equate the kinds of total social knowledge claimed by its practitioners with a scientifically derived vision of the future.

Formalizing the central importance of social analysis to the task of government, in 1944, the Colonial Social Science Research Council (CSSRC) was formed in order to advise the Secretary of State for the Colonies regarding "schemes of a sociological or anthropological nature." Among the founding members of the council was Raymond Firth, the esteemed ethnologist of Polynesian society whose thesis (on the "Wealth and Work of the Maori") had been supervised by Bronislaw Malinowski, pater familias of the discipline as it took form around the life and work of those who attended his seminar at the London School of Economics. But for professional academics striving for dominance amidst the competition of so-called "practical men," the expansion of new territories for research raised serious questions about the value and legitimacy of knowledge production. As Benoît de L'Estoile has noted, the struggle for "a monopoly of competence on non-western social phenomena" generated new factions in the milieu of colonial expertise between academics and administrators, whose mutual engagements "in the field" marked divergent relationships to the value of colonial study as a means for the production of social theory.[3]

Front matter of Raymond Firth, Human Types (London: Nelson and Sons, 1938). Image by John Krygier via A Series of Series.

At the same time, increasing demand and material support for the study of the world-system had allowed a new generation of social and natural scientists to turn their attention towards the field from the metropole. For its part, the Royal Anthropological Institute awarded the Wellcome Medal each year "for the best research essay on the application of anthropological methods to the problems of native peoples, particularly those arising from intercourse between native peoples, or between primitive natives and civilised races." Lucy Mair, another former student of Malinowski's (cited at the beginning of this essay), received the award in 1935. This "immaterial" value of the colonies for the prospect of scholarship was shared by Lord Hailey, Chairman of the Colonial Research Committee. As he suggested in the preamble to the mammoth administrative compendium, An African Survey (1938):

A considerable part of the activity of the intellectual world is expended today in the study of social institutions, systems of law, and political developments which can now only be examined in retrospect. But Africa presents itself as a living laboratory in which the reward of study may prove to be not only the satisfaction of an intellectual impulse, but an effective addition to the welfare of the people. (xxiv)

Hailey’s romantic claims about the ends of imperial study proved to be prophetic for the postwar period, and spoke to the experimental approach in which such schemes were elaborated. While the natural sciences held out the promise of material riches to be “exploited” in an empire of neglect, anthropologists similarly stood to profit from their engagements in a social order that was shifting beyond recognition. Beyond the preservative impulse of ethnographic practice in the early 20th century, fixed on salvaging the “primitive” from the threshold of extinction, the contingencies of a collapsing empire presented the opportunity for colonial science to fulfil a gamut of ethical duties as the ideological arm of an administration that governed the flow of capital itself. As Hailey would later note in 1943:

No one can dispute the value of the humanitarian impulse which has in the past so often provided a corrective to practices which might have prejudiced the interests of native peoples. But we can no longer afford to regard policy as mainly an exercise in applied ethics. We now have a definite objective before us—the ideal of colonial self-government—and all our thinking on social and economic problems must be directed to organising the life of colonial communities to fit them for that end. […] It is in the light of this consideration that we must seek to determine the position of the capitalist and the proper function of capital.

What was this “proper function” of capital? In an address at Chatham House in April 1944, Bernard Bourdillon, then Governor of Nigeria, described the affective indifference, the ideological exhaustion of a precarious empire whose deprivation under the doctrine of laissez-faire could only suggest the great deception of the civilizing mandate itself. In the thrall of liberal torpor, the fate of Britain’s so-called “dependencies” had long been characterized by the slow violence of a debilitating austerity, borne out by starvation and disease in insolvent colonies, unable to develop their (often plentiful) resources in the absence of revenues. The receipt of financial assistance by the poorest colonies to balance their ailing budgets reflected the management of the population at its minimum, confined within the vicious cycle of deficiency: “regarded as poor relations, who could not, in all decency, be allowed to starve, but whose first duty was to earn a bare subsistence, and to relieve their reluctant benefactors of what was regarded as a wholly unprofitable obligation.”

O.G.R. Williams to J.C. Meggitt, "Housing conditions for poorer classes in and around Freetown" (C.S.O. M/54/34, 1939). Photograph by author.

As the tide of decolonization became an inescapable reality, desires for a deliberate strategy towards the improvement of social conditions both at home and abroad sought to recuperate the notion of mutual benefit between colony and metropole. The move to restore the ethical entanglements of a "People's Empire," long left out of mind, suggested the refraction of a burgeoning conception of the welfare state in Britain, whose origins in The Beveridge Report—published in 1942—turned towards the cause of "abolishing" society's major ills: Want, Disease, Ignorance, Squalor and Idleness. In spite of an apparent commitment to universalism—in the establishment of a National Health Service in 1946, and state insurance for unemployment and pensions, for example—the report would garner criticism for privileging the model of the male breadwinner at the expense of working wives, whilst otherwise reflecting a palliative approach to poverty that failed to address its root causes. And while ideas of domestic welfare shared many of the rhetorical devices that characterized the project of colonial reform (with improvements in public health, education and living standards chief among them), the Beveridge Report, save for a single glancing reference, made no mention of the colonies or their place within this expansive and much-feted vision for postwar society.

On the contrary, the long road to economic solvency and the raising of living standards was understood to lie within colonial societies themselves, however enervated or held in abeyance by preceding policies. British plans for the autonomy of the overseas territories centered on the rhetoric of extraction under the general directive for colonized societies to exploit their own resources—as Bourdillon would note, “including that most important of all natural resources, the capacity of the people themselves.” Increased investment from the metropole would in turn provide for the welfare of colonial subjects in the event of their independence through the generation of something that might be called “human capital”, and by turning towards the earth itself as a repository of untapped value. The appointment of experts in the fields of imperial geology, agronomy and forestry turned the labors of scientific discovery towards a political economy of “growth” for the mitigation of social inequalities on a planetary scale.

But the professional and institutional entanglements of anthropologists in the field linked them inextricably to a social system of subjection that they could not credibly disavow. Senior anthropologists in particular appeared to retain a kind of primitivism, neglecting in their studies the administrative issues of growing urban centers in favor of "tribal" or "village studies." By the end of the 1940s, the earlier promise and possibility of Anthropology's relationship to the colonial endeavor were increasingly questioned by its most prominent practitioners. At a special public meeting of the Royal Anthropological Institute in 1949, Firth spoke alongside the Oxford anthropologist E.E. Evans-Pritchard about the growing tensions and demands of professional practice in a period in which the vast majority of anthropological research was supported by state funds. "After long and shameful neglect by the British people and Government," he declared, "it is now realised that it is impossible to govern colonial peoples without knowledge of their ways of life." (179) And yet, Firth and Evans-Pritchard observed the anxieties in certain academic circles about what such a union would mean for the production of knowledge: "lest the colonial tail wag the anthropological dog—lest basic scientific problems be overlooked in favour of those of more pressing practical interest." (138)

Buildings of the Makerere Institute of Social Research (MISR), founded in 1948 as the East African Institute of Social Research. Photograph via MISR.

Even before the conclusion of the Second World War, the experiences of fascism had proved to be a cautionary tale in which both the value and peril of social theory lay in its uses within a broader marketplace of applied science as an instrument of power-knowledge, capable of being wielded by states and their governments. Myopic fears of the “race war” to ensue from the collapse of white settler societies found their reflection in research agendas and the funding of applied studies. With an eye on neighboring Kenya, Audrey Richards—another of Malinowski’s “charmed circle”—became director of Uganda’s East African Institute of Social Research in 1950, a center established at Makerere College for the purpose of accumulating “anthropological and economic data on the peoples and problems of East Africa.”

This was also the scene of a burgeoning inquiry into "race relations." In 1948, Firth's student Kenneth Little published Negroes in Britain, a study of urban segregation and the fraught sentiments of "community" in Cardiff's Tiger Bay, infamously portrayed by the Daily Express in 1936: "Half-Caste Girl: she presents a city with one of its big problems." (49) Its streets would endure in the cultural imagination as a focal point of salacious reporting on the colonies of "coloured juveniles" born in the poor "slums" of seaport towns across the British Isles. Working-class migrants in Cardiff's Loudoun Square were captured in the pages of the left-leaning weekly Picture Post by its staff photographer Bert Hardy, whose efforts to represent the human face of residents in the "deeply-depressed quarter" are a complex amalgam of pity and social conscience documentary, recalling the iconic depictions of American poverty by photographers attached to the Farm Security Administration in the era of the New Deal. Meanwhile, the American sociologist St. Clair Drake, who with Horace R. Cayton Jr. had co-authored the voluminous study Black Metropolis in 1945, had conducted research in Tiger Bay for his 1954 University of Chicago dissertation and responded directly to some of the claims made in Little's study. Subjects of empire, he avowed, whether in Britain or its extremities, were united by their fate to be subjects of the survey and the study, misrepresented, slandered or otherwise examined with disciplinary instruments and the logics of reform and government.

Amidst revolutionary struggle and the rise of African nationalist movements, other scholarship emerging from this milieu appeared to display certain deficiencies in vision emanating from the colonial situation—the professional certitude and patronizing racism with which social scientists made and mythologized their objects. In 1955, the geographer Frank Debenham—another senior figure on the CSSRC—published Nyasaland: Land of the Lake as part of The Corona Library, a series of "authoritative and readable" surveys sponsored by the Colonial Office. Writing in his review of Debenham's book in the Journal of Negro Education, the historian Rayford Logan observed the bewildering disconnect between the well-documented experiences of civil discord under white-minority rule in the territory and the world as it was rendered in print:

[Debenham] seriously states: “We need not call the African lazy, since there is little obligation to work hard, but we must certainly call him lucky” (p. 104). He opposes a rigid policy of restricting freehold land for Europeans. His over-all view blandly disregards the discontent among the Africans in Nyasaland: “If only Nyasaland people are left to themselves and not incited from elsewhere there should be contentment under the new regime very soon, a return in fact to the situation of a few years ago when there was complete amity as a whole between black and white, and there were all the essentials for a real partnership satisfactory to both colours.”

In hindsight, these problems of perception appear to have become evident—if not exactly solvable—even to those most apparently endowed with the greatest faculties of interpretation and insight into the arcane mechanisms of the social world. Michael Banton, a student of Kenneth Little’s and the first editor of the journal Sociology, recalled his professional errata in the 2005 article “Finding, and Correcting, My Mistakes”. Writing candidly of his earliest forays into colonial research, he described the evolution and decline of structural functionalism, which was “founded upon a view of action as using scarce means to attain given ends but had in my, perhaps faulty, perception become a top-down theory of the social system.” Such reflections suggest the disenchantments of an analytical framework which threatened to occlude as much as it sought to understand, in which whole worlds went unnoticed or misread. More than 50 years after his earliest studies in the “coloured quarter” of London’s East End and Freetown, the capital of British West Africa, Banton still appeared—against all good intentions—stumped: “There were failings that should be accounted blind spots rather than mistakes…. Why was my vision blinkered?”


[1] Mair, Lucy. “Colonial Administration as a Science.” Journal of the Royal African Society 32, no. 129 (1933): 367.

[2] Foucault, Michel. The Foucault Effect: Studies in Governmentality. Edited by Graham Burchell, Colin Gordon and Peter Miller. Chicago: University of Chicago Press, 1991 [1978], p. 100.

[3] Pels, Peter. “Global ‘experts’ and ‘African’ Minds: Tanganyika Anthropology as Public and Secret Service, 1925-61.” The Journal of the Royal Anthropological Institute 17, no. 4 (2011): 788-810. http://www.jstor.org/stable/41350755.


Nile A. Davies is a doctoral candidate in Anthropology and the Institute for Comparative Literature and Society at Columbia University. His dissertation examines the politics and sentiments of reconstruction and the aftermaths of "disaster" in postwar Sierra Leone.

Featured Image: Cardiff’s Tiger Bay in the 1950s. Photograph by Bert Hardy, via WalesOnline.


Tory Marxism

by Charles Troup 

For many on the Right today, describing something as “Marxist” is sufficient to mark it out as something every decent conservative should stand against. Indeed, at first glance Marxism and conservatism may even look diametrically opposed. One is radically egalitarian, whilst the other has always found it necessary to defend inequality in some form. One demands that a society’s institutions express social justice; whilst the other asks principally that they be stable and capable of managing change. One proceeds from principle; the other prefers pragmatism.

But things weren’t always this way. The Right’s most creative thinkers have often drawn on an eclectic range of sources when expressing and renewing their creed—Marx not excepted. On the British Right, in fact, we can find surprisingly frank engagement with Marxism as recently as the 1980s: in particular amongst the Salisbury Group, a collection of “traditionalists” skeptical about the doctrines of neoliberalism which were conquering right-wing parties in the Western world one by one.

Roger Scruton

The influence of Marx is plain, for instance, in the philosopher Roger Scruton's 1980 book The Meaning of Conservatism. Here Scruton made the striking claim that Marxism was a more suitable philosophical tradition than liberalism for conservatives to engage in dialogue with, because it "derives from a theory of human nature that one might actually believe." This was because liberalism, for Scruton, began with a fictitious ideal of autonomous individual agents and believed that they could not be truly free under authority unless they had somehow consented to it. For Scruton, however, this notion "isolates man from history, from culture, from all those unchosen aspects of himself which are in fact the preconditions of his subsequent autonomy." Liberalism lacked an account of how society and the self deeply interpenetrated each other. Scruton believed that individuals yearned to see themselves reflected at some profound level in the way their society was organized, in its culture, and in the forms of collective membership it offered. Yet liberalism presented no idea of the self above its desires and no idea of self-fulfillment other than their satisfaction.

Marxism, on the other hand, possessed a philosophical anthropology which was much friendlier to the sort of "Hegelian" conservatism which Scruton advocated. He was particularly impressed with the concept of "species-being" or "human essence," which Marx had borrowed from Ludwig Feuerbach and employed in the Manuscripts of 1844. It was this notion, Scruton reminded his readers, that underpinned the whole centrality of labour for Marxists, since they regarded it as an essential, intrinsically valuable human activity. Moreover, it was the estrangement of the individual from their labour under capitalism which caused the malaise of "alienation": that condition of spiritual disaffection which, Scruton believed, conservatives should recognise in their own instincts about modernity's deficiencies. Of course, the conservative would seek to "present his own description of alienation, and to try to rebut the charge that private property is its cause"; but Marxists should be praised for recognising "that civil order reflects not the desires of man, but the self of man."

There was an urgent political stake in this discussion. Scruton had welcomed Thatcher's victory in 1979 as an opportunity to recast British conservatism after its post-war dalliance with Keynesianism and redistributive social policy. Still, he felt a sense of foreboding about the ideological forces which had ushered her to victory. The Conservative Party, he complained, "has begun to see itself as the defender of individual freedom against the encroachments of the state, concerned to return to people their natural right of choice." The result was damaging "urges to reform," stirred by the newly ascendant language of "economic liberalism." Scruton implored his fellow conservatives not to mistake this for true conservatism, but to recognize it as a derivation of its "principal enemy."

In doing so, he once again compared Marxism and liberalism to demonstrate to conservatives the limitations of the latter. "The political battles of our time," he wrote, "concern the conservation and destruction of institutions and forms of life: nothing more vividly illustrates this than the issues of education, political unity, the role of trade unions and the House of Lords, issues with which the abstract concept of 'freedom' fails to make contact." Marxists at least understood that "the conflict concerns not freedom but authority, authority vested in a given office, institution or arrangement." Their approach of course was "to demystify the ideal of authority" and "replace it with the realities of power," which Scruton thought reductive. But "in preferring to speak of power, the Marxist puts at the centre of politics the only true political commodity, the only thing which can actually change hands"—it "correctly locates the battleground."

Scruton wasn’t the only figure in the Salisbury Group to engage with Marxism. So too did the historian Maurice Cowling, doyen of the “Peterhouse school” associated with the famously conservative Cambridge college. He believed that Marxism’s “explanatory usefulness can be considerable” and was even described by one of his admirers as a “Tory Marxist jester.”

Maurice Cowling

Cowling hated the Whiggish historians who dominated the English academy in the first half of the 20th century, and welcomed the rise of the English Marxist school in the 1950s—those figures around the journal Past & Present like E.P. Thompson, Eric Hobsbawm, Dona Torr and Christopher Hill—as a breath of fresh air. Whereas Whig liberals gave bland and credulous accounts of the motive forces of British political history, the English Marxists were cynical and clear-eyed about power and conflict. As he explained in a 1987 radio interview for the BBC, he agreed with them that "class struggle" was "a real historical fact" and that we should "always see a cloven hoof beneath a principle." Marxists knew that any set of institutions unequally apportioned loss amongst the social classes, making the business of politics that of deciding in whose image this constitution would be made.

This was one point for Cowling where Marxists and conservatives parted ways: accepting the reality of class struggle didn't mean picking the same side of the barricades. But Cowling believed that conservatives also diverged analytically from Marxists. One of their great errors, he wrote, was to believe that all forms of cultural or social attachment which entailed hierarchy were reducible to false consciousness; for Cowling, these attachments were more concrete, especially if they connected to a sort of national consciousness he often referred to in quasi-mystical terms. The error made Marxists naïve about "the fertility and resourcefulness of established regimes." For Cowling, it was the job of conservative political elites to enact this "resourcefulness": to tap into the deep well of national sentiment and renew it for successive generations, and thus to blunt class conflict and insulate Britain's political system from popular pressure.

We can see Cowling applying these ideas to contemporary politics most explicitly in the Salisbury Group's first publication, the 1978 edited collection Conservative Essays. Here he criticized Thatcher's political rhetoric. Adam Smith might be a useful name to deploy against socialism, he wrote, but if carried to its "rationalistic" pretensions his political language was too rigid and unimaginative for the great task facing conservative elites. "If there is a class war—and there is—it is important that it should be handled with subtlety and skill […] it is not freedom that Conservatives want; what they want is the sort of freedom that will maintain existing inequalities or restore lost ones." No class war could be managed by "politicians who are so completely encased in a Smithian straitjacket that they are incapable of recognizing that it is going on." Conservatives needed to read more widely in search of insights to press into service against the reformers and revolutionaries of the age.

Marx rapidly fell out of favor as a source for creative borrowing, however. The collapse of the USSR was hailed by many conservatives as the ultimate indictment of socialism and of Marx's whole system along with it—something many on the Right still believe. Even Scruton became more reluctant to engage with Marx as the Cold War wore on (Cowling criticized him for making the journal Salisbury Review "crudely anti-Marxist" under his editorship). The frank openness to learning from Marx that we find in these texts looks like a historical curiosity today.

The story of the Salisbury Group is also something of a historiographical curiosity. The conservative revival of the 1970s has been the subject of much excellent work in recent British history; but the Group, despite its reputation on the Right and the status of its most prominent figures, has with a few exceptions been passed over for study. Thatcherism and its genealogy have understandably drawn the eye, but this has sometimes unhelpfully excluded its conservative critics or more skeptical fellow-travellers. Historians should seek now to tell more complex stories about the intellectual history of conservatism in this period: after all, the ascendance on the Right of the doctrines and rhetoric of neoliberalism was, in the words of philosopher John Gray, “perhaps the most remarkable and among the least anticipated developments in political thought and practice throughout the Western world in the 1980s.”

As for the present, whilst we shouldn't expect a conservative re-engagement with Marx, we should expect to see more creative re-appropriation of thinkers beyond the typical right-wing canon. This is especially so because the Tory Marxists of the 1970s were looking for something still sought by many conservatives today. That is a counterpoint to a neoliberalism which in its popular idiom increasingly rests upon a notion of individual freedom which fewer and fewer people experience as cohering with their aspirations, values or attachments; or which appeals to moralistic maxims about personal grit, endeavour and innovation which are belied by the inequalities and precarities of contemporary economic life. They seek a political perspective which issues from a holistic analysis of society and its constituent forces rather than individualistic axioms about entitlements and incentives, and which can speak to alienation and to conflict over authority. We can see this process underway already on the French Right, as Mark Lilla made clear in a recent article, where a new generation of intellectuals count the "communitarian" socialists Alasdair MacIntyre, Christopher Lasch and Charles Péguy among their lodestars. And in a perhaps less self-conscious way we can see it on the American Right too, as the long-standing "fusionist" coalition between social conservatives and business libertarians comes under strain: witness Patrick Deneen's surprise bestseller Why Liberalism Failed and the much-publicized debate between Sohrab Ahmari and David French over whether conservatives should reject or reconcile themselves to liberal institutions and norms. In this moment especially, we should expect to see more inspiration on the intellectual Right from strange places.


Charles Troup is a second-year Ph.D. student in Modern European History at Yale University. 


What has Athens to do with London? Plague.

By Editor Spencer J. Weinreich

Map of London by Wenceslas Hollar, c. 1665

It is seldom recalled that there were several "Great Plagues of London." In scholarship and popular parlance alike, only the devastating epidemic of bubonic plague that struck the city in 1665 and lasted the better part of two years holds that title, which it first received in early summer 1665. To be sure, the justice of the claim is incontrovertible: this was England's deadliest visitation since the Black Death, carrying off some 70,000 Londoners and another 100,000 souls across the country. But note the timing of that first conferral. Plague deaths would not peak in the capital until September 1665, the disease would not take up sustained residence in the provinces until the new year, and the fire was more than a year in the future. Rather than any special prescience among the pamphleteers, the nomenclature reflects the habit of calling every major outbreak in the capital "the Great Plague of London"—until the next one came along (Moote and Moote, 6, 10–11, 198). London experienced a major epidemic roughly every decade or two: recent visitations had included 1592, 1603, 1625, and 1636. That 1665 retained the title is due in no small part to the fact that no successor arose; this was to be England's last outbreak of bubonic plague.

Serial “Great Plagues of London” remind us that epidemics, like all events, stand within the ebb and flow of time, and draw significance from what came before and what follows after. Of course, early modern Londoners could not know that the plague would never return—but they assuredly knew something about its past.

Early modern Europe knew bubonic plague through long and hard experience. Ann G. Carmichael has brilliantly illustrated how Italy’s communal memories of past epidemics shaped perceptions of and responses to subsequent visitations. Seventeenth-century Londoners possessed a similar store of memories, but their plague-time writings mobilize a range of pasts and historiographical registers that includes much more than previous epidemics or the history of their own community: from classical antiquity to the English Civil War, from astrological records to demographic trends. Such richness accords with the findings of the formidable scholarly phalanx investigating “the uses of history in early modern England” (to borrow the title of one edited volume), which informs us that sixteenth- and seventeenth-century English people had a deep and sophisticated sense of the past, instrumental in their negotiations of the present.

Let us consider a single, iconic strand in this tapestry: invocations of the Plague of Athens (430–26 B.C.E.). Jacqueline Duffin once suggested that writing about epidemic disease inevitably falls prey to “Thucydides syndrome” (qtd. in Carmichael 150n41). In the centuries since the composition of the History of the Peloponnesian War, Thucydides’s hauntingly vivid account of the plague (II.47–54) has influenced writers from Lucretius to Albert Camus. Long lost to Latin Christendom, Thucydides was slowly reintegrated into Western European intellectual history beginning in the fifteenth century. The first (mediocre) English edition appeared in 1550, superseded in 1628 with a text by none other than Thomas Hobbes. For more than a hundred years, then, Anglophone readers had access to Thucydides, while Greek and Latin versions enjoyed a respectable, if not extraordinary, popularity among the more learned.

Michiel Sweerts, Plague in an Ancient City (1652), believed to depict the Plague of Athens

In 1659, the churchman and historian Thomas Sprat, booster of the Royal Society and future bishop of Rochester, published The Plague of Athens, a Pindaric versification of the accounts found in Thucydides and Lucretius. Sprat's Plague has been convincingly interpreted as a commentary on England's recent political history—viz., the Civil War and the Interregnum (King and Brown, 463). But six years on, the poem found fresh relevance as England faced its own "too ravenous plague" (Sprat, 21). The savvy bookseller Henry Brome, who had arranged the first printing, brought out two further editions in 1665 and 1667. Because the poem was prefaced by the relevant passages of Hobbes's translation, an English text of Thucydides was in print throughout the epidemic. It is of course hardly surprising that at moments of epidemic crisis, the locus classicus for plague should sell well: plague-time interest in Thucydides is well-attested before and after 1665, in England and elsewhere in Europe.

But what does the Plague of Athens do for authors and readers in seventeenth-century London? As the classical archetype of pestilence, it functions as a touchstone for the ferocity of epidemic disease and a yardstick by which the Great Plague could be measured. The physician John Twysden declared, “All Ages have produced as great mortality and as great rebellion in Diseases as this, and Complications with other Diseases as dangerous. What Plague was ever more spreading or dangerous than that writ of by Thucidides, brought out of Attica into Peloponnesus?” (111–12).

One flattering rhymester welcomed Charles II’s relocation to Oxford with the confidence that “while Your Majesty, (Great Sir) shines here, / None shall a second Plague of Athens fear” (4). In a less reassuring vein, the societal breakdown depicted by Thucydides warned England what might ensue from its own plague.

Perhaps with that prospect in mind, other authors drafted Thucydides as their ally in catalyzing moral reform. The poet William Austin (who was in the habit of ruining his verses by overstuffing them with classical references) seized upon the Athenians’ passionate devotions in the face of the disaster (History, II.47). “Athenians, as Thucidides reports, / Made for their Dieties new sacred courts. / […] Why then wo’nt we, to whom the Heavens reveal / Their gracious, true light, realize our zeal?” (86). In a sermon entitled The Plague of the Heart, John Edwards enlisted Thucydides in the service of his conceit of a spiritual plague that was even more fearsome than the bubonic variety:

The infection seizes also on our memories; as Thucydides tells us of some persons who were infected in that great plague at Athens, that by reason of that sad distemper they forgot themselves, their friends and all their concernments [History, II.49]. Most certain it is that by the Spirituall infection men forget God and their duty. (8)

Not dissimilarly, the tailor-cum-preacher Richard Kingston paralleled the plague with sin. He characterized both evils as "diffusive" (23–24), citing Thucydides to the effect that the plague began in Ethiopia and moved thence to Egypt and Greece (II.48).

On the supposition that, medically speaking, the Plague of Athens was the same disease they faced, early modern writers treated it as a practical precedent for prophylaxis, treatment, and public health measures. Thucydides was one of several classical authorities cited by the Italian theologian Filiberto Marchini to justify open-field burials, based on their testimony that wild animals shunned plague corpses (Calvi, 106). Rumors of plague-spreading also stoked interest in the History, because Thucydides records that the citizens of Piraeus believed the epidemic arose from the poisoning of wells (II.48; Carmichael, 149–50).

Peter Paul Rubens, Hippocrates (1638)

It should be noted that Thucydides was not the only source for early modern knowledge about the Plague of Athens. One William Kemp, extolling the preventative virtues of moderation, tells his readers that it was temperance that preserved Socrates during the disaster (58–59). This anecdote comes not from Thucydides, but from Claudius Aelianus, who relates of the philosopher's constitution and moderate habits, "[t]he Athenians suffered an epidemic; some died, others were close to death, while Socrates alone was not ill at all" (Varia historia, XIII.27, trans. N. G. Wilson). (Interestingly, 1665 saw the publication of a new translation of the Varia historia.) Elsewhere, Kemp relates how Hippocrates organized bonfires to free Athens of the disease (43), a story that originates with the pseudo-Galenic On Theriac to Piso, but probably reached England via Latin intermediaries and/or William Bullein's A Dialogue Against the Fever Pestilence (1564). Hippocrates's name, and supposed victory over the Plague of Athens, was used to advertise cures and preventatives.


With the exception of Sprat—whose poem was written in 1659—these are all fleeting references, but that is in some sense the point. The Plague of Athens, Thucydides, and his History had entered the English imaginary, a shared vocabulary for thinking about epidemic disease. To quote Raymond A. Anselment, Sprat’s poem (and other invocations of the Plague of Athens) “offered through the imitation of the past an idea of the present suffering” (19). In the desperate days of 1665–66, the mere mention of Thucydides’s name, regardless of the subject at hand, would have been enough to conjure the specter of the Athenian plague.

Whether or not one built a public health plan around "Hippocrates's" example, or looked to the History of the Peloponnesian War as a guide to disease etiology, the Plague of Athens exerted an emotional and intellectual hold over early modern English writers and readers. In part, this was merely a sign of the times: early modern Europeans were profoundly invested in the past as a mirror for and guide to the present and the future. In England, the Great Plague came at the height of a "rage for historical parallels" (Kewes, 25)—and no corner of history offered more distinguished parallels than classical antiquity.

And let us not undersell the affective power of such parallels. The value of recalling past plagues was the simple fact of their being past. Awful as the Plague of Athens had been, it had eventually passed, and Athens still stood. Looking backwards was a relief from a present dominated by the epidemic, and from the plague’s warped temporality: the interruption of civic and liturgical rhythms and the ordinary cycle of life and death. Where “an epidemic denies time itself” (Calvi, 129–30), history restores it, and offers something like orientation—even, dare we say, hope.



Melodrama in Disguise: The Case of the Victorian Novel

By guest contributor Jacob Romanow

When people call a book "melodramatic," they usually mean it as an insult. Melodrama is histrionic, implausible, and (therefore) artistically subpar—a reviewer might use the term to suggest that serious readers look elsewhere. Victorian novels, on the other hand, have come to be seen as an irreproachably "high" form of art, part of a "great tradition" of realistic fiction beloved by stodgy traditionalists: books that people praise but don't read. But in fact, the nineteenth-century British novel and the stage melodrama that provided the century's most popular form of entertainment were inextricably intertwined. The historical reality is that the two forms have been linked from the beginning: many of the greatest Victorian novels are prose melodramas themselves. But from the Victorian period on down, critics, readers, and novelists have waged a campaign of distinctions and distractions aimed at disguising and denying the melodramatic presence in novelistic forms. The same process that canonized what were once massively popular novels as sanctified examples of high art scoured those novels of their melodramatic contexts, leaving our understanding of their lineage and formation incomplete. It's commonly claimed that the Victorian novel was the last time "popular" and "high" art were unified in a single body of work. But the case of the Victorian novel reveals the limitations of constructed, motivated narratives of cultural development. Victorian fiction was massively popular, absolutely—but that popularity rested in significant part on the presence of "low" melodrama around and within those classic works.

A poster of the dramatization of Charles Dickens’s Oliver Twist

Even today, thinking about Victorian fiction as a melodramatic tradition cuts against many accepted narratives of genre and periodization; although most scholars will readily concede that melodrama significantly influences the novelistic tradition (sometimes to the latter's detriment), it is typically treated as an external tradition whose features are being borrowed (or else as an alien encroaching upon the rightful preserve of a naturalistic "real"). Melodrama first arose in France around the French Revolution and quickly spread throughout Europe; A Tale of Mystery, an uncredited translation from the French by Thomas Holcroft (himself a novelist) and considered the first English melodrama, appeared in 1802. By the accession of Victoria in 1837, it had long been the dominant form on the English stage. Yet major critics have shown melodramatic method to be fundamental to the work of almost every major nineteenth-century novelist, from George Eliot to Henry James to Elizabeth Gaskell to (especially) Charles Dickens, often treating these discoveries as particular to the author in question. Moreover, the practical relationship between the novel and melodrama in Victorian Britain helped define both genres. Novelists like Charles Dickens, Wilkie Collins, Edward Bulwer-Lytton, Thomas Hardy, and Mary Elizabeth Braddon, among others, were themselves playwrights of stage melodramas. But the most common connection, like film adaptations today, was the widespread "melodramatization" of popular novels for the stage. Blockbuster melodramatic productions were adapted from not only popular crime novels of the Newgate and sensation schools like Jack Sheppard, The Woman in White, Lady Audley's Secret, and East Lynne, but also from canonical works including David Copperfield, Jane Eyre, Rob Roy, The Heart of Midlothian, Mary Barton, A Christmas Carol, Frankenstein, Vanity Fair, and countless others, often in multiple productions for each. In addition to so many major novels being adapted into melodramas, many major melodramas were themselves adaptations of more or less prominent novels, for example Planché's The Vampire (1820), Moncrieff's The Lear of Private Life (1820), and Webster's Paul Clifford (1832). As in any process of adaptation, the stage and print versions of each of these narratives differ in significant ways. But the interplay between the two forms was both widespread and fully baked into the generic expectations of the novel; the profusion of adaptation, with or without an author's consent, makes clear that melodramatic elements in the novel were not merely incidental borrowings. In fact, melodramatic adaptation played a key role in the success of some of the period's most celebrated novels. Dickens's Oliver Twist, for instance, was dramatized even before its serialized publication was complete! And the significant rate of illiteracy among melodrama's audiences meant that for novelists like Dickens or Walter Scott, the melodramatic stage could often serve as the only point of contact with a large swath of the public. As critic Emily Allen aptly writes: "melodrama was not only the backbone of Victorian theatre by midcentury, but also of the novel."


This question of audience helps explain why melodrama has been separated out of our understanding of the novelistic tradition. Melodrama proper was always “low” culture, associated with its economically lower-class and often illiterate audiences in a society that tended to associate the theatre with lax morality. Nationalistic sneers at the French origins of melodrama played a role as well, as did the Victorian sense that true art should be permanent and eternal, in contrast to the spectacular but transient visual effects of the melodramatic stage. And like so many “low” forms throughout history, melodrama’s transformation of “higher” forms was actively denied even while it took place. Victorian critics, particularly those of a conservative bent, would often actively deny melodramatic tendencies in novelists whom they chose to praise. In the London Quarterly Review’s 1864 eulogy “Thackeray and Modern Fiction,” for example, the anonymous reviewer writes that “If we compare the works of Thackeray or Dickens with those which at present win the favour of novel-readers, we cannot fail to be struck by the very marked degeneracy.” The latter, the reviewer argues, tend towards the sensational and immoral, and should be approached with a “sentiment of horror”; the former, on the other hand, are marked by their “good morals and correct taste.” This is revisionary literary history, and one of its revisions (I think we can even say the point of its revisions) is to eradicate melodrama from the historical narrative of great Victorian novels. The reviewer praises Thackeray’s “efforts to counteract the morbid tendencies of such books as Bulwer’s Eugene Aram and Ainsworth’s Jack Sheppard,” ignoring Thackeray’s classification of Oliver Twist alongside those prominent Newgate melodramas. The melodramatic quality of Thackeray’s own fiction (not to mention the highly questionable “morality” of novels like Vanity Fair and Barry Lyndon), let alone the proactively melodramatic Dickens, is downplayed or denied outright. And although the review offers qualified praise of Henry Fielding as a literary ancestor of Thackeray, it ignores their melodramatic relative Walter Scott. The review, then, is not just a document of midcentury mainstream anti-theatricality, but also a document that provides real insight into how critics worked to solidify an antitheatrical novelistic canon.

Photographic print of Act 3, Scene 6 from The Whip, Drury Lane Theatre, 1909
Gabrielle Enthoven Collection, Museum number: S.211-2016
© Victoria and Albert Museum

Yet even after these very Victorian reasons have fallen away, the wall of separation between novels and melodrama has been maintained. Why? In closing, I'll speculate about a few possible reasons. One is that Victorian critics' division became a self-fulfilling prophecy in the history of the novel, bifurcating the form into melodramatic "low" and self-consciously anti-melodramatic "high" genres. Another is that applying historical revisionism to the novel in this way only mirrored and reinforced a consistent fact of melodrama's theatrical criticism, which too has consistently used "melodrama" derogatorily, persistently differentiating the melodramas of which it approved from "the old melodrama"—a dynamic that took root even before any melodrama was legitimately "old." A third factor is surely the rise of so-called dramatic realism, and the ensuing denialism of melodrama's role in the theatrical tradition. And a final reason, I think, is that we may still wish to relegate melodrama to the stage (or the television serial) because we are not really comfortable with the roles that it plays in our own world: in our culture, in our politics, and even in our visions for our own lives. When we recognize the presence of melodrama in the "great tradition" of novels, we will better be able to understand those texts. And letting ourselves find melodrama there may also help us find it in the many other places where it hides in plain sight.

Jacob Romanow is a Ph.D. student in English at Rutgers University. His research focuses on the novel and narratology in Victorian literature, with a particular interest in questions of influence, genre, and privacy.


Evolution Made Easy: Henry Balfour, Pitt Rivers, and the Evolution of Art

by guest contributor Laurel Waycott

In 1893, Henry Balfour, curator of the Pitt Rivers Museum in Oxford, UK, conducted an experiment. He traced a drawing of a snail crawling over a twig, and passed it to another person, whom he instructed to copy the drawing as accurately as possible with pen and paper. This second drawing was then passed to the next participant, with Balfour’s original drawing removed, and so on down the line. Balfour, in essence, constructed a nineteenth-century version of the game of telephone, with a piece of gastropodic visual art taking the place of whispered phrases. As in the case of the children’s game, what began as a relatively easy echo of what came before resulted in a bizarre, near unrecognizable transmutation.

Plate I. Henry Balfour, The Evolution of Decorative Art (New York: Macmillan & Co., 1893).

In the series of drawings, Balfour’s pastoral snail morphed, drawing by drawing, into a stylized bird—the snail’s eyestalks became the forked tail of the bird, while the spiral shell became, in Balfour’s words, “an unwieldy and unnecessary wart upon the, shall we call them, ‘trousers’ which were once the branching end of the twig” (28). Snails on twigs, birds in trousers—just what, exactly, are we to make of Balfour’s intentions for his experiment? What was Balfour trying to prove?

Balfour’s game of visual telephone, at its heart, was an attempt to understand how ornamental forms could change over time, using the logic of biological evolution. The results were published in a book, The Evolution of Decorative Art, which was largely devoted to the study of so-called “primitive” arts from the Pacific. The reason that Balfour had to rely on his constructed game and experimental results, rather than original samples of the “savage” art, was that he lacked a complete series necessary for illustrating his theory—he was forced to create one for his purposes. Balfour’s drawing experiment was inspired by a technique developed by General Pitt Rivers himself, whose collections formed the foundation of the museum. In 1875, Pitt Rivers—then known as Augustus Henry Lane Fox—delivered a lecture titled “The Evolution of Culture,” in which he argued that shifting forms of artifacts, from firearms to poetry, were in fact culminations of many small changes; and that the historical development of artifacts could be reconstructed by observing these minute changes. From this, Pitt Rivers devised a scheme of museum organization that arranged objects in genealogical fashion—best illustrated by his famous display of weapons used by the indigenous people of Australia.

Plate III. Augustus Henry Lane-Fox Pitt-Rivers, The Evolution of Culture, and Other Essays, ed. John Linton Myres (Oxford: Clarendon Press, 1906).

Here, Pitt Rivers arranged the weapons in a series of changing relationships radiating out from a central object, the "simple cylindrical stick" (34). In Pitt Rivers' system, this central object was the most "primitive" and "essential" object, from which numerous small modifications could be made. Elongate the stick, and eventually one arrives at a lance; add a bend, and it slowly forms into a boomerang. While he acknowledged that these specimens were contemporary and not ancient, the organization implied a temporal relationship between the objects. This same logic was extended to understandings of human groups at the turn of the twentieth century. So-called "primitive" societies like the indigenous groups of the Pacific were considered "survivals" from the past, physically present but temporally removed from those living around them (37). The drawing game, developed by Pitt Rivers in 1884, served as a different way to manipulate time: by speeding up the process of cultural evolution, researchers could mimic evolution's slow process of change over time in the span of just a few minutes. If the fruit fly's rapid reproductive cycle made it an ideal model organism for studying Mendelian heredity, the drawing game sought to make cultural change an object of the laboratory.

It is important to note the capacious, wide-ranging definitions of "evolution" by the end of the nineteenth century. Evolution could refer to the large-scale, linear development of entire human or animal groups, but it could also refer to Darwinian natural selection. Balfour drew on both definitions, and developed tools to help him to apply evolutionary theory directly to studies of decorative art. "Degeneration," the idea that organisms could revert to earlier evolutionary forms, played a recurring role in both Balfour's and Pitt Rivers' lines of museum object-based study. For reasons never explicitly stated, both men assumed that decorative motifs originated with realistic images, relying on the conventions of verisimilitude common in Western art. This leads us back, then, to the somewhat perplexing drawing with which Balfour chose to begin his experiment.

Balfour wrote that he started his experiment by making “a rough sketch of some object which could be easily recognized” (24). His original gastropodic image relied, fittingly, on a number of conventions that required a trained eye and trained hand to interpret. The snail’s shell and the twig, for instance, appeared rounded through the artist’s use of cross-hatching, the precise placement of regularly spaced lines which lend a sense of three-dimensional volume to a drawing. Similarly, the snail’s shell was placed in a vague landscape, surrounded by roughly-sketched lines giving a general sense of the surface upon which the action occurred. While the small illustration might initially seem like a straightforward portrayal of a gastropod suctioned onto a twig, the drawing’s visual interpretation is only obvious to those accustomed to reading and reproducing the visual conventions of Western art. Since the image was relatively challenging to begin with, it provided Balfour with an exciting experimental result: specifically, a bird wearing trousers.

Plate II. Henry Balfour, The Evolution of Decorative Art (New York: Macmillan & Co., 1893).

Balfour had conducted a similar experiment using a drawing of a man from the Parthenon frieze as his “seed,” but it failed to yield the surprising results of the first. While the particulars of the drawing changed, somewhat—the pectoral muscles became a cloak, the hat changed, and the individual’s gender got a little murky in the middle—the overall substance of the image remained unchanged. It did not exhibit evolutionary “degeneration” to the same convincing degree, but rather seemed to be, quite simply, the product of some less-than-stellar artists. While Balfour included both illustrations in his book, he clearly preferred his snail-to-bird illustration and reproduced it far more widely. He also admitted to interfering in the experimental process: omitting subsequent drawings that did not add useful evidence to his argument, and specifically choosing participants who had no artistic training (25, 27).

Balfour clearly manipulated his experiment and the resulting data to prove what he thought he already knew: that successive copying in art led to degenerate, overly conventionalized forms that no longer held to Western standards of verisimilitude. It was an outlook he had likely acquired from Pitt Rivers. In Notes and Queries on Anthropology (1892), a handbook circulated to travelers who wished to gather ethnographic data for anthropologists back in Britain, Pitt Rivers outlined a number of questions that travelers should ask about local art. The questions were leading, designed in a simple yes/no format likely to provoke a certain response. In fact, one of Pitt Rivers’ questions offered, in essence, a verbal version of Balfour’s drawing game. “Do they,” he wrote, “in copying from one another, vary the designs through negligence, inability, or other causes, so as to lose sight of the original objects, and produce conventionalized forms, the meaning of which is otherwise inexplicable?” (119–21). Pitt Rivers left very little leeway for creativity, whether on the part of the artist or the observer. Might the artists choose to depict things in a certain way? Might the observer interpret those depictions in his or her own way? Such possibilities had no place in the format. Pitt Rivers’ motivation was clear. If one did find such examples of copying, he added, “it would be of great interest to obtain series of such drawings, showing the gradual departure from the original designs.” They would, after all, make a very convincing museum display.

Laurel Waycott is a PhD candidate in the history of science and medicine at Yale University. This essay is adapted from a portion of her dissertation, which examines the way biological thinking shaped conceptions of decoration, ornament, and pattern at the turn of the 20th century.

Categories
Dispatches from the Archives

Stefan Collini’s Ford Lectures: ‘History in English criticism, 1919–1961’

by guest contributor Joshua Bennett

A distinctive feature of the early years of the Cambridge English Tripos (examination system), in which close “practical criticism” of individual texts was balanced by the study of the “life, literature, and thought” surrounding them, was that the social and intellectual background to literature acquired an equivalent importance to that of literature itself. Stefan Collini’s Ford Lectures, in common with his essay collections, Common Reading and Common Writing, have over the past several weeks richly demonstrated that the literary critics who were largely the products of that Tripos can themselves be read and historicized in that spirit. Collini, whose resistance to the disciplinary division between the study of literature and that of intellectual history has proved so fruitful over many years, has focused on six literary critics in his lecture series: T. S. Eliot, F. R. Leavis, L. C. Knights, Basil Willey, William Empson, and Raymond Williams. All, with the exception of Eliot, were educated at Cambridge; and all came to invest the enterprise of literary criticism with a particular kind of missionary importance in the early and middle decades of the twentieth century. Collini has been concerned to explore the intellectual and public dynamics of that mission, by focusing on the role of history in these critics’ thought and work. His argument has been twofold. First, he has emphasized that the practice of literary criticism is always implicitly or explicitly historical in nature. The second, and more intellectual-historical, element of his case has consisted in the suggestion that literary critics offered a certain kind of “cultural history” to the British public sphere. By using literary and linguistic evidence in order to unlock the “whole way of life” of previous forms of English society, and to reach qualitative judgements about “the standard of living” in past and present, critics occupied territory vacated by professional historians at the time, while also contributing to wider debates about twentieth-century societal conditions.

Collini’s lectures did not attempt to offer a full history of the development of English as a discipline in the twentieth century. Nevertheless, they raised larger questions for those interested in the history of the disciplines both of English and History in twentieth-century Britain, and what such histories can reveal about the wider social and cultural conditions in which they took shape. How should the findings from Collini’s penetrating microscope modify, or provide a framework for, our view of these larger organisms?

First, a question arises as to the relationship between the kind of historical criticism pursued by Collini’s largely Cantabrigian dramatis personae, and specific institutions and educational traditions. E. M. W. Tillyard’s mildly gossipy memoir of his involvement in the foundation of the Cambridge English Tripos, published in 1958 under the title of The Muse Unchained, recalls an intellectual environment of the 1910s and 1920s in which the study of literature was exciting because it was a way of opening up the world of ideas. The English Tripos, he held, offered a model of general humane education—superior to Classics, the previous such standard—through which the ideals of the past might nourish the present. There is a recognizable continuity between these aspirations and the purposes of the cultural history afterwards pursued under the auspices of literary criticism by the subsequent takers of that Tripos whom Collini discussed—several of whom began their undergraduate studies as historians.

But how far did the English syllabuses of other universities, and the forces driving their creation and development, also encourage a turn towards cultural history, and how did they shape the kind of cultural history that was written? Tillyard’s account is notably disparaging of philological approaches to English studies, of the kind which acquired and preserved a considerably greater prominence in Oxford’s Honour School of “English Language and Literature”—a significant pairing—from 1896. Did this emphasis contribute to an absence of what might be called “cultural-historical” interest among Oxford’s literary scholars, or alternatively give it a particular shape? Widening the canvas beyond Oxbridge, it is surely also important to heed the striking fact that England was one of the last countries in Europe in which widespread university interest in the study of English literature took shape. If pressed to single out any one individual as having been responsible for the creation of the “modern” form of the study of English Literature in the United Kingdom—a hazardous exercise, certainly—one could do worse than to alight upon the Anglo-Scottish figure of Herbert Grierson. Grierson, who was born in Shetland in 1866 and died in 1960, was appointed to the newly created Regius Chalmers Chair of English at Aberdeen in 1894, before moving to take up a similar position in Edinburgh in 1915. In his inaugural lecture at Edinburgh, Grierson argued for the autonomy of the study of English literature from that of British history. As Cairns Craig has recently pointed out, however, an evaluative kind of “cultural history” is unmistakably woven into his writings on the poetry of John Donne—which for Grierson prefigured the psychological realism of the modern novel—and Donne’s successors. For Grierson, the cultural history of the modern world was structured by a conflict between religion, humanism, and science—evident in the seventeenth century, and in the nineteenth—to which literature itself offered, in the present day, a kind of antidote. Grierson’s conception of literature registered his own difficulties with the Free Church religion of his parents, as well, perhaps, as the abiding influence of the broad Scottish university curriculum—combining study of the classics, philosophy, psychology and rhetoric—which he had encountered as an undergraduate prior to the major reforms of Scottish higher education begun in 1889. Did the heroic generation of Cambridge-educated critics, then, create and disseminate a kind of history inconceivable without the English Tripos? Or did they offer more of a local instantiation of a wider “mind of Britain”? A general history of English studies in British universities, developing for example some of the themes discussed in William Whyte’s recent Redbrick, is certainly a desideratum.

Collini partly defined literary critics’ cultural-historical interests in contradistinction to a shadowy “Other”: professional historians, who were preoccupied not by culture but by archives, charters and pipe-rolls. As Collini pointed out, the word “culture”—and so the enterprise of “cultural history”—has admitted of several senses at different times and in the usage of different authors. The kind of cultural history which critics felt they could not find among professional historians, and which accordingly they themselves had to supply, centered on an understanding of lived experience in the past, and on identifying the roots of—and so, perhaps, the correctives to—their present discontents. This raises a second interesting problem, the answer to which should be investigated rather than assumed: what exactly became of “cultural history” in these senses within the British historical profession between around 1920 and 1960?

Peter Burke and Peter Ghosh have alike argued that the growing preoccupation of academic history with political history in the nineteenth and earlier twentieth centuries acted regrettably to constrict that universal application of historical method to all facets of human societies which the Enlightenment first outlined in terms of “conjectural history.” This thesis is true in its main outlines. But there were ways in which cultural history retained a presence in British academic history in the period of what Michael Bentley thinks of as historiographical “modernism,” prior to the transformative interventions of Keith Thomas, E. P. Thompson and others in the 1960s and afterwards. In the field of religious history, for example, Christopher Dawson—while holding the title of “Lecturer in the History of Culture” at University College, Exeter—published a collection of essays in 1933 entitled Enquiries into Religion and Culture. English study of socioeconomic history in the interwar and postwar years also often extended to, or existed in tandem with, interest in what can only be described as “culture.” Few episodes might appear as far removed from cultural history as the “storm over the gentry,” for example—a debate over the social origins of the English Civil War that was played out chiefly in the pages of the Economic History Review in the 1940s and 1950s. But the first book of one of the main participants in that controversy, Lawrence Stone, was actually a study entitled Sculpture in Britain: The Middle Ages, published in 1955 in the Pelican History of Art series. Although Stone came to regard it as a diversion from his main interests, its depictions of a flourishing artistic culture in late-medieval Britain, halted by the Reformation, may still be read as a kind of cultural-historical counterpart to his better-known arguments for the importance of the sixteenth and seventeenth centuries as a period of social upheaval. If it is true that literary criticism is always implicitly or explicitly historical, perhaps it is also true that few kinds of history have been found to be wholly separable from cultural history, broadly defined.

Joshua Bennett is a Junior Research Fellow in History at Christ Church, Oxford.