
Empire of Abstraction: British Social Anthropology in the “Dependencies”

By Nile A. Davies

It would seem to be no more than a truism that no material can be successfully manipulated until its properties are known, whether it be a chemical compound or a society of human beings; and from that it would appear to follow that the science whose material is human society should be called upon when nothing else than the complete transformation of a society is in question.

Lucy Mair, “Colonial Administration as a Science” (1933)

On March 24th 1945, the British scientific journal Nature breathlessly reported that £120,000,000 of research funds (the equivalent of over 5 billion USD today) would be made available by the passing of the Colonial Development and Welfare Bill: a momentous commitment to the expansion of colonial study “which should be of interest to administrators, scientific men and technologists, and all who are concerned with the welfare and advancement of the British Colonial possessions.” The material conditions of colonial research would significantly determine the scope and energies of empirical labor in the social sciences. Specifically, ideas of colonial welfare drew conspicuously on the authority of experts in Social Anthropology—in its varying professional and institutional forms—to apprehend the flux and metamorphosis of human relations in a new international order.

Such extraordinary expenditures reflected broad desires throughout the previous decade for a science of administration—a means with which to know and understand a field of possibilities in an age of global “interpenetration” in colonized societies which, in their particularity, could not be addressed by “the application of general principles, however humanitarian.”[1] As data pertaining to the “forces and spirit of native institutions” were increasingly called upon for the maintenance of social cohesion, there emerged an imperative for the cultivation of “specially trained investigators [devoted to] comprehensive studies in the light of a sociological knowledge of the life of a community.”

Table of Contents from Lucy Mair, Welfare in the British Colonies (London: Royal Institute of International Affairs, 1944).

The history of colonial welfare recalls the contours of “governmentality,” the term coined by Michel Foucault to describe how power is secured through forms of expertise and intervention, attending to “the welfare of the population, the improvement of its condition, the increase of its wealth, longevity, health, etc.”[2] Drawn into the enterprise of administration, the design and formulation of social and economic research by anthropologists became increasingly associated with the high moral purpose of colonial reform. But, as Joanna Lewis notes, such a remit encompassed an impossibly wide range of aims and instruments for control: “animating civil society against social collapse; devising urban remedies for the incapacitated and the destitute; correcting the deviant” (75). For administrators beset by the threat of rapid social change and indigenous nationalisms, the worldview offered up by intimate knowledge of the social structure suggested a means by which history itself might be forestalled. The great enthusiasm for structural functionalism in particular—akin to the field of international relations and its entanglements with empire—derived from its popular image as a tool of divination, well poised to anticipate the unforeseeable in a world of collapsing regimes and seeming to equate the kinds of total social knowledge claimed by its practitioners with a scientifically derived vision of the future.

Formalizing the central importance of social analysis to the task of government, in 1944, the Colonial Social Science Research Council (CSSRC) was formed in order to advise the Secretary of State for the Colonies regarding “schemes of a sociological or anthropological nature.” Among the founding members of the council was Raymond Firth, the esteemed ethnologist of Polynesian society whose thesis (on the “Wealth and Work of the Maori”) had been supervised by Bronislaw Malinowski, pater familias of the discipline as it took form around the life and work of those who attended his seminar at the London School of Economics. But for professional academics striving for dominance amidst the competition of so-called “practical men,” the expansion of new territories for research raised serious questions about the value and legitimacy of knowledge production. As Benoît de L’Estoile has noted, the struggle for “a monopoly of competence on non-western social phenomena” generated new factions in the milieu of colonial expertise between academics and administrators, whose mutual engagements “in the field” marked divergent relationships to the value of colonial study as a means for the production of social theory.[3]

Front matter of Raymond Firth, Human Types (London: Nelson and Sons, 1938). Image by John Krygier via A Series of Series.

At the same time, increasing demand and material support for the study of the world-system had allowed a new generation of social and natural scientists to turn their attention towards the field from the metropole. For its part, the Royal Anthropological Institute awarded the Wellcome Medal each year “for the best research essay on the application of anthropological methods to the problems of native peoples, particularly those arising from intercourse between native peoples, or between primitive natives and civilised races.” Lucy Mair, another former student of Malinowski’s (cited at the beginning of this essay), received the award in 1935. This sense of the “immaterial” value of the colonies for the prospect of scholarship was shared by Lord Hailey, Chairman of the Colonial Research Committee. As he suggested in the preamble to the mammoth administrative compendium, An African Survey (1938):

A considerable part of the activity of the intellectual world is expended today in the study of social institutions, systems of law, and political developments which can now only be examined in retrospect. But Africa presents itself as a living laboratory in which the reward of study may prove to be not only the satisfaction of an intellectual impulse, but an effective addition to the welfare of the people. (xxiv)

Hailey’s romantic claims about the ends of imperial study proved to be prophetic for the postwar period, and spoke to the experimental spirit in which such schemes were elaborated. While the natural sciences held out the promise of material riches to be “exploited” in an empire of neglect, anthropologists similarly stood to profit from their engagements in a social order that was shifting beyond recognition. Beyond the preservative impulse of ethnographic practice in the early 20th century, fixed on salvaging the “primitive” from the threshold of extinction, the contingencies of a collapsing empire presented the opportunity for colonial science to fulfil a gamut of ethical duties as the ideological arm of an administration that governed the flow of capital itself. As Hailey would later note in 1943:

No one can dispute the value of the humanitarian impulse which has in the past so often provided a corrective to practices which might have prejudiced the interests of native peoples. But we can no longer afford to regard policy as mainly an exercise in applied ethics. We now have a definite objective before us—the ideal of colonial self-government—and all our thinking on social and economic problems must be directed to organising the life of colonial communities to fit them for that end. […] It is in the light of this consideration that we must seek to determine the position of the capitalist and the proper function of capital.

What was this “proper function” of capital? In an address at Chatham House in April 1944, Bernard Bourdillon, then Governor of Nigeria, described the affective indifference, the ideological exhaustion of a precarious empire whose deprivation under the doctrine of laissez-faire could only suggest the great deception of the civilizing mandate itself. In the thrall of liberal torpor, the fate of Britain’s so-called “dependencies” had long been characterized by the slow violence of a debilitating austerity, borne out by starvation and disease in insolvent colonies, unable to develop their (often plentiful) resources in the absence of revenues. The receipt of financial assistance by the poorest colonies to balance their ailing budgets reflected the management of the population at its minimum, confined within the vicious cycle of deficiency: “regarded as poor relations, who could not, in all decency, be allowed to starve, but whose first duty was to earn a bare subsistence, and to relieve their reluctant benefactors of what was regarded as a wholly unprofitable obligation.”

O.G.R. Williams to J.C. Meggitt, “Housing conditions for poorer classes in and around Freetown” (C.S.O. M/54/34, 1939). Photograph by author.

As the tide of decolonization became an inescapable reality, desires for a deliberate strategy towards the improvement of social conditions both at home and abroad sought to recuperate the notion of mutual benefit between colony and metropole. The move to restore the ethical entanglements of a “People’s Empire,” long left out of mind, suggested the refraction of a burgeoning conception of the welfare state in Britain, whose origins lay in the Beveridge Report of 1942, which turned towards the cause of “abolishing” society’s major ills: Want, Disease, Ignorance, Squalor and Idleness. In spite of an apparent commitment to universalism—in the National Health Service Act of 1946, and state insurance for unemployment and pensions, for example—the report would garner criticism for privileging the model of the male breadwinner at the expense of working wives, whilst otherwise reflecting a palliative approach to poverty that failed to address its root causes. While ideas of domestic welfare shared many of the rhetorical devices that characterized the project of colonial reform (with improvements in public health, education and living standards chief among them), save for a single glancing reference, the Beveridge Report made no mention of the colonies or their place within this expansive and much-feted vision for postwar society.

Instead, the long road to economic solvency and the raising of living standards was understood to lie within colonial societies themselves, however enervated or held in abeyance by preceding policies. British plans for the autonomy of the overseas territories centered on the rhetoric of extraction under the general directive for colonized societies to exploit their own resources—as Bourdillon would note, “including that most important of all natural resources, the capacity of the people themselves.” Increased investment from the metropole would in turn provide for the welfare of colonial subjects in the event of their independence through the generation of something that might be called “human capital”, and by turning towards the earth itself as a repository of untapped value. The appointment of experts in the fields of imperial geology, agronomy and forestry turned the labors of scientific discovery towards a political economy of “growth” for the mitigation of social inequalities on a planetary scale.

But the professional and institutional entanglements of anthropologists with the field inextricably linked them to a social system of subjection that they could not credibly disavow. Senior anthropologists in particular appeared to retain a kind of primitivism, neglecting in their studies the administrative issues of growing urban centers in favor of “tribal” or “village studies.” By the end of the 1940s, the earlier promise and possibility of Anthropology’s relationship to the colonial endeavor was increasingly questioned by its most prominent practitioners. At a special public meeting of the Royal Anthropological Institute in 1949, Firth spoke alongside the Oxford anthropologist E.E. Evans-Pritchard about the growing tensions and demands of professional practice in a period in which the vast majority of anthropological research was supported by state funds. “After long and shameful neglect by the British people and Government,” he declared, “it is now realised that it is impossible to govern colonial peoples without knowledge of their ways of life.” (179) And yet, Firth and Evans-Pritchard observed the anxieties in certain academic circles about what such a union would mean for the production of knowledge: “lest the colonial tail wag the anthropological dog—lest basic scientific problems be overlooked in favour of those of more pressing practical interest.” (138)

Buildings of the Makerere Institute of Social Research (MISR), founded in 1948 as the East African Institute of Social Research. Photograph via MISR.

Even before the conclusion of the Second World War, the experiences of fascism had proved to be a cautionary tale in which both the value and peril of social theory lay in its uses within a broader marketplace of applied science as an instrument of power-knowledge, capable of being wielded by states and their governments. Myopic fears of the “race war” to ensue from the collapse of white settler societies found their reflection in research agendas and the funding of applied studies. With an eye on neighboring Kenya, Audrey Richards—another of Malinowski’s “charmed circle”—became director of Uganda’s East African Institute of Social Research in 1950, a center established at Makerere College for the purpose of accumulating “anthropological and economic data on the peoples and problems of East Africa.”

This was also the scene of a burgeoning inquiry into “race relations.” In 1948, Firth’s student Kenneth Little published Negroes in Britain, a study of urban segregation and the fraught sentiments of “community” in Cardiff’s Tiger Bay, infamously portrayed by the Daily Express in 1936: “Half-Caste Girl: she presents a city with one of its big problems.” (49) Its streets would endure in the cultural imagination as a focal point of salacious reporting on the colonies of “coloured juveniles” born in the poor “slums” of seaport towns across the British Isles. Working-class migrants in Cardiff’s Loudoun Square were captured in the pages of the left-leaning weekly Picture Post by its staff photographer Bert Hardy, whose efforts to represent the human face of residents in the “deeply-depressed quarter” are a complex amalgam of pity and social-conscience documentary, recalling the iconic depictions of American poverty by photographers attached to the Farm Security Administration in the era of the New Deal. Meanwhile, the American sociologist St Clair Drake, who with Horace R. Cayton Jr. had co-authored the voluminous study Black Metropolis in 1945, had conducted research in Tiger Bay for his 1954 University of Chicago dissertation and responded directly to some of the claims made in Little’s study. Subjects of empire, he avowed, whether in Britain or its extremities, were united by their fate to be subjects of the survey and the study, misrepresented, slandered or otherwise examined with disciplinary instruments and the logics of reform and government.

Amidst revolutionary struggle and the rise of African nationalist movements, other scholarship emerging from this milieu appeared to display certain deficiencies in vision emanating from the colonial situation—the professional certitude and patronizing racism with which social scientists made and mythologized their objects. In 1955, the geographer Frank Debenham—another senior member of the CSSRC—published Nyasaland: Land of the Lake as part of The Corona Library, a series of “authoritative and readable” surveys sponsored by the Colonial Office. Writing in his review of Debenham’s book in the Journal of Negro Education, the historian Rayford Logan observed the bewildering disconnect between the well-documented experiences of civil discord under white-minority rule in the territory and the world as it was rendered in print:

[Debenham] seriously states: “We need not call the African lazy, since there is little obligation to work hard, but we must certainly call him lucky” (p. 104). He opposes a rigid policy of restricting freehold land for Europeans. His over-all view blandly disregards the discontent among the Africans in Nyasaland: “If only Nyasaland people are left to themselves and not incited from elsewhere there should be contentment under the new regime very soon, a return in fact to the situation of a few years ago when there was complete amity as a whole between black and white, and there were all the essentials for a real partnership satisfactory to both colours.”

In hindsight, these problems of perception appear to have become evident—if not exactly solvable—even to those most apparently endowed with the greatest faculties of interpretation and insight into the arcane mechanisms of the social world. Michael Banton, a student of Kenneth Little’s and the first editor of the journal Sociology, recalled his professional errata in the 2005 article “Finding, and Correcting, My Mistakes”. Writing candidly of his earliest forays into colonial research, he described the evolution and decline of structural functionalism, which was “founded upon a view of action as using scarce means to attain given ends but had in my, perhaps faulty, perception become a top-down theory of the social system.” Such reflections suggest the disenchantments of an analytical framework which threatened to occlude as much as it sought to understand, in which whole worlds went unnoticed or misread. More than 50 years after his earliest studies in the “coloured quarter” of London’s East End and Freetown, the capital of British West Africa, Banton still appeared—against all good intentions—stumped: “There were failings that should be accounted blind spots rather than mistakes…. Why was my vision blinkered?”


[1] Mair, Lucy. “Colonial Administration as a Science.” Journal of the Royal African Society 32, no. 129 (1933): 367.

[2] Foucault, Michel. The Foucault Effect: Studies in Governmentality. Edited by Graham Burchell, Colin Gordon and Peter Miller. Chicago: University of Chicago Press, 1991 [1978], p. 100.

[3] Pels, Peter. “Global ‘experts’ and ‘African’ Minds: Tanganyika Anthropology as Public and Secret Service, 1925-61.” The Journal of the Royal Anthropological Institute 17, no. 4 (2011): 788-810. http://www.jstor.org/stable/41350755.


Nile A. Davies is a doctoral candidate in Anthropology and the Institute for Comparative Literature and Society at Columbia University. His dissertation examines the politics and sentiments of reconstruction and the aftermaths of “disaster” in postwar Sierra Leone.

Featured Image: Cardiff’s Tiger Bay in the 1950s. Photograph by Bert Hardy, via WalesOnline.


Tory Marxism

by Charles Troup 

For many on the Right today, describing something as “Marxist” is sufficient to mark it out as something every decent conservative should stand against. Indeed, at first glance Marxism and conservatism may even look diametrically opposed. One is radically egalitarian, whilst the other has always found it necessary to defend inequality in some form. One demands that a society’s institutions express social justice; whilst the other asks principally that they be stable and capable of managing change. One proceeds from principle; the other prefers pragmatism.

But things weren’t always this way. The Right’s most creative thinkers have often drawn on an eclectic range of sources when expressing and renewing their creed—Marx not excepted. On the British Right, in fact, we can find surprisingly frank engagement with Marxism as recently as the 1980s: in particular amongst the Salisbury Group, a collection of “traditionalists” skeptical about the doctrines of neoliberalism which were conquering right-wing parties in the Western world one by one.

Roger Scruton

The influence of Marx is plain, for instance, in the philosopher Roger Scruton’s 1980 book The Meaning of Conservatism. Here Scruton made the striking claim that Marxism was a more suitable philosophical tradition than liberalism for conservatives to engage in dialogue with, because ‘it derives from a theory of human nature that one might actually believe’. This was because liberalism, for Scruton, began with a fictitious ideal of autonomous individual agents and believed that they could not be truly free under authority unless they had somehow consented to it. For Scruton, however, this notion “isolates man from history, from culture, from all those unchosen aspects of himself which are in fact the preconditions of his subsequent autonomy.” Liberalism lacked an account of how society and the self deeply interpenetrated each other. Scruton believed that individuals yearned to see themselves reflected at some profound level in the way their society was organized, in its culture, and in the forms of collective membership it offered. Yet liberalism presented no idea of the self above its desires and no idea of self-fulfillment other than their satisfaction.

Marxism, on the other hand, possessed a philosophical anthropology which was much friendlier to the sort of “Hegelian” conservatism which Scruton advocated. He was particularly impressed with the concept of “species-being” or “human essence,” which Marx had borrowed from Ludwig Feuerbach and employed in the Manuscripts of 1844. It was this notion, Scruton reminded his readers, that underpinned the whole centrality of labour for Marxists, since they regarded it as an essential, intrinsically valuable human activity. Moreover, it was the estrangement of the individual from their labour under capitalism which caused the malaise of ‘alienation’: that condition of spiritual disaffection which, Scruton believed, conservatives should recognise in their own instincts about modernity’s deficiencies. Of course, the conservative would seek to ‘present his own description of alienation, and to try to rebut the charge that private property is its cause’; but Marxists should be praised for recognising ‘that civil order reflects not the desires of man, but the self of man’.

There was an urgent political stake in this discussion. Scruton had welcomed Thatcher’s victory in 1979 as an opportunity to recast British conservatism after its post-war dalliance with Keynesianism and redistributive social policy. Still, he felt a sense of foreboding about the ideological forces which had ushered her to victory. The Conservative Party, he complained, “has begun to see itself as the defender of individual freedom against the encroachments of the state, concerned to return to people their natural right of choice.” The result was damaging ‘urges to reform’, stirred by the newly ascendant language of “economic liberalism.” Scruton implored his fellow conservatives not to mistake this for true conservatism, but to recognize it as a derivation of its “principal enemy.”

In doing so, he once again compared Marxism and liberalism to demonstrate to conservatives the limitations of the latter. “The political battles of our time,” he wrote, “concern the conservation and destruction of institutions and forms of life: nothing more vividly illustrates this than the issues of education, political unity, the role of trade unions and the House of Lords, issues with which the abstract concept of ‘freedom’ fails to make contact.” Marxists at least understood that “the conflict concerns not freedom but authority, authority vested in a given office, institution or arrangement.” Their approach of course was “to demystify the ideal of authority” and “replace it with the realities of power,” which Scruton thought reductive. But “in preferring to speak of power, the Marxist puts at the centre of politics the only true political commodity, the only thing which can actually change hands”—it “correctly locates the battleground.”

Scruton wasn’t the only figure in the Salisbury Group to engage with Marxism. So too did the historian Maurice Cowling, doyen of the “Peterhouse school” associated with the famously conservative Cambridge college. He believed that Marxism’s “explanatory usefulness can be considerable” and was even described by one of his admirers as a “Tory Marxist jester.”

Maurice Cowling

Cowling hated the Whiggish historians who dominated the English academy in the first half of the 20th century, and welcomed the rise of the English Marxist school in the 1950s—those figures around the journal Past & Present like E.P. Thompson, Eric Hobsbawm, Dona Torr and Christopher Hill—as a breath of fresh air. Whereas Whig liberals gave bland and credulous accounts of the motive forces of British political history, the English Marxists were cynical and clear-eyed about power and conflict. As he explained in a 1987 radio interview for the BBC, he agreed with them that “class struggle” was “a real historical fact” and that we should “always see a cloven hoof beneath a principle.” Marxists knew that any set of institutions unequally apportioned loss amongst the social classes, making the business of politics that of deciding in whose image this constitution be made.

This, for Cowling, was one point where Marxists and conservatives parted ways: accepting the reality of class struggle didn’t mean picking the same side of the barricades. But Cowling believed that conservatives also diverged analytically from Marxists. One of the Marxists’ great errors, he wrote, was to believe that all forms of cultural or social attachment which entailed hierarchy were reducible to false consciousness; these attachments, Cowling held, were more concrete, especially if they connected to a sort of national consciousness he often referred to in quasi-mystical terms. The error made Marxists naïve about “the fertility and resourcefulness of established regimes.” For Cowling, it was the job of conservative political elites to enact this “resourcefulness”: to tap into the deep well of national sentiment and renew it for successive generations, and thus to blunt class conflict and insulate Britain’s political system from popular pressure.

We can see Cowling applying these ideas to contemporary politics most explicitly in the Salisbury Group’s first publication, the 1978 edited collection Conservative Essays. Here he criticized Thatcher’s political rhetoric. Adam Smith might be a useful name to deploy against socialism, he wrote, but if carried to its “rationalistic” pretensions his political language was too rigid and unimaginative for the great task facing conservative elites. “If there is a class war—and there is—it is important that it should be handled with subtlety and skill […] it is not freedom that Conservatives want; what they want is the sort of freedom that will maintain existing inequalities or restore lost ones.” No class war could be managed by “politicians who are so completely encased in a Smithian straitjacket that they are incapable of recognizing that it is going on.” Conservatives needed to read more widely in search of insights to press into service against the reformers and revolutionaries of the age.

Marx rapidly fell out of favor as a source for creative borrowing, however. The collapse of the USSR was hailed by many conservatives as the ultimate indictment of socialism and Marx’s whole system along with it – something many on the Right still believe. Even Scruton became more reluctant to engage with Marx as the Cold War wore on (Cowling criticized him for making the journal Salisbury Review “crudely anti-Marxist” under his editorship). The frank openness to learning from Marx that we find in these texts looks like a historical curiosity today.

The story of the Salisbury Group is also something of a historiographical curiosity. The conservative revival of the 1970s has been the subject of much excellent work in recent British history; but the Group, despite its reputation on the Right and the status of its most prominent figures, has with a few exceptions been passed over for study. Thatcherism and its genealogy have understandably drawn the eye, but this has sometimes unhelpfully excluded its conservative critics or more skeptical fellow-travellers. Historians should seek now to tell more complex stories about the intellectual history of conservatism in this period: after all, the ascendance on the Right of the doctrines and rhetoric of neoliberalism was, in the words of philosopher John Gray, “perhaps the most remarkable and among the least anticipated developments in political thought and practice throughout the Western world in the 1980s.”

As for the present, whilst we shouldn’t expect a conservative re-engagement with Marx, we should expect to see more creative re-appropriation of thinkers beyond the typical right-wing canon. This is especially so because the Tory Marxists of the 1970s were looking for something still sought by many conservatives today. That is a counterpoint to a neoliberalism which in its popular idiom increasingly rests upon a notion of individual freedom which fewer and fewer people experience as cohering with their aspirations, values or attachments; or which appeals to moralistic maxims about personal grit, endeavour and innovation which are belied by the inequalities and precarities of contemporary economic life. They seek a political perspective which issues from a holistic analysis of society and its constituent forces rather than individualistic axioms about entitlements and incentives, and which can speak to alienation and to conflict over authority. We can see this process underway already on the French Right, as Mark Lilla made clear in a recent article, where a new generation of intellectuals count the ‘communitarian’ socialists Alasdair MacIntyre, Christopher Lasch and Charles Péguy among their lodestars. And in a perhaps less self-conscious way we can see it on the American Right too, as the long-durable “fusionist” coalition between social conservatives and business libertarians comes under strain: witness Patrick Deneen’s surprise bestseller Why Liberalism Failed and the much-publicized debate between Sohrab Ahmari and David French over whether conservatives should reject or reconcile themselves to liberal institutions and norms. In this moment especially, we should expect to see more inspiration on the intellectual Right from strange places.


Charles Troup is a second-year Ph.D. student in Modern European History at Yale University. 


What has Athens to do with London? Plague.

By Editor Spencer J. Weinreich

Map of London by Wenceslas Hollar, c.1665

It is seldom recalled that there were several “Great Plagues of London.” In scholarship and popular parlance alike, only the devastating epidemic of bubonic plague that struck the city in 1665 and lasted the better part of two years holds that title, which it first received in early summer 1665. To be sure, the justice of the claim is incontrovertible: this was England’s deadliest visitation since the Black Death, carrying off some 70,000 Londoners and another 100,000 souls across the country. But note the timing of that first conferral. Plague deaths would not peak in the capital until September 1665, the disease would not take up sustained residence in the provinces until the new year, and the fire was more than a year in the future. Rather than any special prescience among the pamphleteers, the nomenclature reflects the habit of calling every major outbreak in the capital “the Great Plague of London”—until the next one came along (Moote and Moote, 6, 10–11, 198). London experienced a major epidemic roughly every decade or two: recent visitations had included 1592, 1603, 1625, and 1636. That 1665 retained the title is due in no small part to the fact that no successor arose; this was to be England’s last outbreak of bubonic plague.

Serial “Great Plagues of London” remind us that epidemics, like all events, stand within the ebb and flow of time, and draw significance from what came before and what follows after. Of course, early modern Londoners could not know that the plague would never return—but they assuredly knew something about its past.

Early modern Europe knew bubonic plague through long and hard experience. Ann G. Carmichael has brilliantly illustrated how Italy’s communal memories of past epidemics shaped perceptions of and responses to subsequent visitations. Seventeenth-century Londoners possessed a similar store of memories, but their plague-time writings mobilize a range of pasts and historiographical registers that includes much more than previous epidemics or the history of their own community: from classical antiquity to the English Civil War, from astrological records to demographic trends. Such richness accords with the findings of the formidable scholarly phalanx investigating “the uses of history in early modern England” (to borrow the title of one edited volume), which informs us that sixteenth- and seventeenth-century English people had a deep and sophisticated sense of the past, instrumental in their negotiations of the present.

Let us consider a single, iconic strand in this tapestry: invocations of the Plague of Athens (430–26 B.C.E.). Jacqueline Duffin once suggested that writing about epidemic disease inevitably falls prey to “Thucydides syndrome” (qtd. in Carmichael 150n41). In the centuries since the composition of the History of the Peloponnesian War, Thucydides’s hauntingly vivid account of the plague (II.47–54) has influenced writers from Lucretius to Albert Camus. Long lost to Latin Christendom, Thucydides was slowly reintegrated into Western European intellectual history beginning in the fifteenth century. The first (mediocre) English edition appeared in 1550, superseded in 1628 with a text by none other than Thomas Hobbes. For more than a hundred years, then, Anglophone readers had access to Thucydides, while Greek and Latin versions enjoyed a respectable, if not extraordinary, popularity among the more learned.

Michiel Sweerts, Plague in an Ancient City (1652), believed to depict the Plague of Athens

In 1659, the churchman and historian Thomas Sprat, booster of the Royal Society and future bishop of Rochester, published The Plague of Athens, a Pindaric versification of the accounts found in Thucydides and Lucretius. Sprat’s Plague has been convincingly interpreted as a commentary on England’s recent political history—viz., the Civil War and the Interregnum (King and Brown, 463). But six years on, the poem found fresh relevance as England faced its own “too ravenous plague” (Sprat, 21). The savvy bookseller Henry Brome, who had arranged the first printing, brought out two further editions in 1665 and 1667. Because the poem was prefaced by the relevant passages of Hobbes’s translation, an English text of Thucydides was in print throughout the epidemic. It is of course hardly surprising that at moments of epidemic crisis, the locus classicus for plague should sell well: plague-time interest in Thucydides is well-attested before and after 1665, in England and elsewhere in Europe.

But what does the Plague of Athens do for authors and readers in seventeenth-century London? As the classical archetype of pestilence, it functions as a touchstone for the ferocity of epidemic disease and a yardstick by which the Great Plague could be measured. The physician John Twysden declared, “All Ages have produced as great mortality and as great rebellion in Diseases as this, and Complications with other Diseases as dangerous. What Plague was ever more spreading or dangerous than that writ of by Thucidides, brought out of Attica into Peloponnesus?” (111–12).

One flattering rhymester welcomed Charles II’s relocation to Oxford with the confidence that “while Your Majesty, (Great Sir) shines here, / None shall a second Plague of Athens fear” (4). In a less reassuring vein, the societal breakdown depicted by Thucydides warned England what might ensue from its own plague.

Perhaps with that prospect in mind, other authors drafted Thucydides as their ally in catalyzing moral reform. The poet William Austin (who was in the habit of ruining his verses by overstuffing them with classical references) seized upon the Athenians’ passionate devotions in the face of the disaster (History, II.47). “Athenians, as Thucidides reports, / Made for their Dieties new sacred courts. / […] Why then wo’nt we, to whom the Heavens reveal / Their gracious, true light, realize our zeal?” (86). In a sermon entitled The Plague of the Heart, John Edwards enlisted Thucydides in the service of his conceit of a spiritual plague that was even more fearsome than the bubonic variety:

The infection seizes also on our memories; as Thucydides tells us of some persons who were infected in that great plague at Athens, that by reason of that sad distemper they forgot themselves, their friends and all their concernments [History, II.49]. Most certain it is that by the Spirituall infection men forget God and their duty. (8)

Not dissimilarly, the tailor-cum-preacher Richard Kingston paralleled the plague with sin. He characterized both evils as “diffusive” (23–24), citing Thucydides to the effect that the plague began in Ethiopia and moved thence to Egypt and Greece (II.48).

On the supposition that, medically speaking, the Plague of Athens was the same disease they faced, early modern writers treated it as a practical precedent for prophylaxis, treatment, and public health measures. Thucydides was one of several classical authorities cited by the Italian theologian Filiberto Marchini to justify open-field burials, based on their testimony that wild animals shunned plague corpses (Calvi, 106). Rumors of plague-spreading also stoked interest in the History, because Thucydides records that the citizens of Piraeus believed the epidemic arose from the poisoning of wells (II.48; Carmichael, 149–50).

Peter Paul Rubens, Hippocrates (1638)

It should be noted that Thucydides was not the only source for early modern knowledge about the Plague of Athens. One William Kemp, extolling the preventative virtues of moderation, tells his readers that it was temperance that preserved Socrates during the disaster (58–59). This anecdote comes not from Thucydides but from Claudius Aelianus, who writes of the philosopher’s constitution and moderate habits: “[t]he Athenians suffered an epidemic; some died, others were close to death, while Socrates alone was not ill at all” (Varia historia, XIII.27, trans. N. G. Wilson). (Interestingly, 1665 saw the publication of a new translation of the Varia historia.) Elsewhere, Kemp relates how Hippocrates organized bonfires to free Athens of the disease (43), a story that originates with the pseudo-Galenic On Theriac to Piso, but probably reached England via Latin intermediaries and/or William Bullein’s A Dialogue Against the Fever Pestilence (1564). Hippocrates’s name and supposed victory over the Plague of Athens were used to advertise cures and preventatives.

 

With the exception of Sprat—whose poem was written in 1659—these are all fleeting references, but that is in some sense the point. The Plague of Athens, Thucydides, and his History had entered the English imaginary, a shared vocabulary for thinking about epidemic disease. To quote Raymond A. Anselment, Sprat’s poem (and other invocations of the Plague of Athens) “offered through the imitation of the past an idea of the present suffering” (19). In the desperate days of 1665–66, the mere mention of Thucydides’s name, regardless of the subject at hand, would have been enough to conjure the specter of the Athenian plague.

Whether or not one built a public health plan around “Hippocrates’s” example, or looked to the History of the Peloponnesian War as a guide to disease etiology, the Plague of Athens exerted an emotional and intellectual hold over early modern English writers and readers. In part, this was merely a sign of the times: early modern Europeans were profoundly invested in the past as a mirror for and guide to the present and the future. In England, the Great Plague came at the height of a “rage for historical parallels” (Kewes, 25)—and no corner of history offered more distinguished parallels than classical antiquity.

And let us not undersell the affective power of such parallels. The value of recalling past plagues was the simple fact of their being past. Awful as the Plague of Athens had been, it had eventually passed, and Athens still stood. Looking backwards was a relief from a present dominated by the epidemic, and from the plague’s warped temporality: the interruption of civic and liturgical rhythms and the ordinary cycle of life and death. Where “an epidemic denies time itself” (Calvi, 129–30), history restores it, and offers something like orientation—even, dare we say, hope.

 


Melodrama in Disguise: The Case of the Victorian Novel

By guest contributor Jacob Romanow

When people call a book “melodramatic,” they usually mean it as an insult. Melodrama is histrionic, implausible, and (therefore) artistically subpar—a reviewer might use the term to suggest that serious readers look elsewhere. Victorian novels, on the other hand, have come to be seen as an irreproachably “high” form of art, part of a “great tradition” of realistic fiction beloved by stodgy traditionalists: books that people praise but don’t read. But in fact, the nineteenth-century British novel and the stage melodrama that provided the century’s most popular form of entertainment were inextricably intertwined. The two forms have been linked from the beginning: indeed, many of the greatest Victorian novels are prose melodramas themselves. But from the Victorian period on down, critics, readers, and novelists have waged a campaign of distinctions and distractions aimed at disguising and denying the melodramatic presence in novelistic forms. The same process that canonized what were once massively popular novels as sanctified examples of high art scoured those novels of their melodramatic contexts, leaving our understanding of their lineage and formation incomplete. It’s commonly claimed that the Victorian novel was the last time “popular” and “high” art were unified in a single body of work. But the case of the Victorian novel reveals the limitations of constructed, motivated narratives of cultural development. Victorian fiction was massively popular, absolutely—but that popularity rested in significant part on the presence of “low” melodrama around and within those classic works.

A poster of the dramatization of Charles Dickens’s Oliver Twist

Even today, thinking about Victorian fiction as a melodramatic tradition cuts against many accepted narratives of genre and periodization; although most scholars will readily concede that melodrama significantly influences the novelistic tradition (sometimes to the latter’s detriment), it is typically treated as an external tradition whose features are being borrowed (or else as an alien encroaching upon the rightful preserve of a naturalistic “real”). Melodrama first arose in France around the French Revolution and quickly spread throughout Europe; A Tale of Mystery, an uncredited translation from the French by Thomas Holcroft (himself a novelist) and generally considered the first English melodrama, appeared in 1802. By the accession of Victoria in 1837, it had long been the dominant form on the English stage. Yet major critics have shown melodramatic method to be fundamental to the work of almost every major nineteenth-century novelist, from George Eliot to Henry James to Elizabeth Gaskell to (especially) Charles Dickens, often treating these discoveries as particular to the author in question. Moreover, the practical relationship between the novel and melodrama in Victorian Britain helped define both genres. Novelists like Charles Dickens, Wilkie Collins, Edward Bulwer-Lytton, Thomas Hardy, and Mary Elizabeth Braddon, among others, were themselves playwrights of stage melodramas. But the most common connection, like film adaptations today, was the widespread “melodramatization” of popular novels for the stage. Blockbuster melodramatic productions were adapted not only from popular crime novels of the Newgate and sensation schools like Jack Sheppard, The Woman in White, Lady Audley’s Secret, and East Lynne, but also from canonical works including David Copperfield, Jane Eyre, Rob Roy, The Heart of Midlothian, Mary Barton, A Christmas Carol, Frankenstein, Vanity Fair, and countless others, often in multiple productions for each. In addition to so many major novels being adapted into melodramas, many major melodramas were themselves adaptations of more or less prominent novels, for example Planché’s The Vampire (1820), Moncrieff’s The Lear of Private Life (1820), and Webster’s Paul Clifford (1832). As in any process of adaptation, the stage and print versions of each of these narratives differ in significant ways. But the interplay between the two forms was both widespread and fully baked into the generic expectations of the novel; the profusion of adaptation, with or without an author’s consent, makes clear that melodramatic elements in the novel were not merely incidental borrowings. In fact, melodramatic adaptation played a key role in the success of some of the period’s most celebrated novels. Dickens’s Oliver Twist, for instance, was dramatized even before its serialized publication was complete! And the significant rate of illiteracy among melodrama’s audiences meant that for novelists like Dickens or Walter Scott, the melodramatic stage could often serve as the only point of contact with a large swath of the public. As critic Emily Allen aptly writes: “melodrama was not only the backbone of Victorian theatre by midcentury, but also of the novel.”

 

This question of audience helps explain why melodrama has been separated out of our understanding of the novelistic tradition. Melodrama proper was always “low” culture, associated with its economically lower-class and often illiterate audiences in a society that tended to associate the theatre with lax morality. Nationalistic sneers at the French origins of melodrama played a role as well, as did the Victorian sense that true art should be permanent and eternal, in contrast to the spectacular but transient visual effects of the melodramatic stage. And like so many “low” forms throughout history, melodrama’s transformation of “higher” forms was actively denied even while it took place. Victorian critics, particularly those of a conservative bent, would often actively deny melodramatic tendencies in novelists whom they chose to praise. In the London Quarterly Review’s 1864 eulogy “Thackeray and Modern Fiction,” for example, the anonymous reviewer writes that “If we compare the works of Thackeray or Dickens with those which at present win the favour of novel-readers, we cannot fail to be struck by the very marked degeneracy.” The latter, the reviewer argues, tend towards the sensational and immoral, and should be approached with a “sentiment of horror”; the former, on the other hand, are marked by their “good morals and correct taste.” This is revisionary literary history, and one of its revisions (I think we can even say the point of its revisions) is to eradicate melodrama from the historical narrative of great Victorian novels. The reviewer praises Thackeray’s “efforts to counteract the morbid tendencies of such books as Bulwer’s Eugene Aram and Ainsworth’s Jack Sheppard,” ignoring Thackeray’s classification of Oliver Twist alongside those prominent Newgate melodramas. The melodramatic quality of Thackeray’s own fiction (not to mention the highly questionable “morality” of novels like Vanity Fair and Barry Lyndon), let alone the proactively melodramatic Dickens, is downplayed or denied outright. And although the review offers qualified praise of Henry Fielding as a literary ancestor of Thackeray, it ignores their melodramatic relative Walter Scott. The review, then, is not just a document of midcentury mainstream anti-theatricality, but also a document that provides real insight into how critics worked to solidify an antitheatrical novelistic canon.

Photographic print of Act 3, Scene 6 from The Whip, Drury Lane Theatre, 1909
Gabrielle Enthoven Collection, Museum number: S.211-2016
© Victoria and Albert Museum

Yet even after these very Victorian reasons have fallen aside, the wall of separation between novels and melodrama has been maintained. Why? In closing, I’ll speculate about a few possible reasons. One is that Victorian critics’ division became a self-fulfilling prophecy in the history of the novel, bifurcating the form into melodramatic “low” and self-consciously anti-melodramatic “high” genres. Another is that applying historical revisionism to the novel in this way only mirrored and reinforced a long-standing habit of melodrama’s own theatrical criticism, which has consistently used “melodrama” derogatorily, differentiating the melodramas of which it approved from “the old melodrama”—a dynamic that took root even before any melodrama was legitimately “old.” A third factor is surely the rise of so-called dramatic realism, and the ensuing denialism of melodrama’s role in the theatrical tradition. And a final reason, I think, is that we may still wish to relegate melodrama to the stage (or the television serial) because we are not really comfortable with the roles that it plays in our own world: in our culture, in our politics, and even in our visions for our own lives. When we recognize the presence of melodrama in the “great tradition” of novels, we will better be able to understand those texts. And letting ourselves find melodrama there may also help us find it in the many other places where it hides in plain sight.

Jacob Romanow is a Ph.D. student in English at Rutgers University. His research focuses on the novel and narratology in Victorian literature, with a particular interest in questions of influence, genre, and privacy.


The Idea of the Souvenir: Mauchline Ware

by guest contributor Tess Goodman

The souvenir is a relatively recent concept. The word only began to refer to an “object, rather than a notion” in the late eighteenth century (Kwint, Material Memories 10). Of course, the practice of carrying a small token away from an important location is ancient. In Europe, souvenirs evolved from religious relics. Pilgrims in the late Roman and Byzantine eras removed stones, dirt, water, and other organic materials from pilgrimage sites, believing that “the sanctity of holy people, holy objects and holy places was, in some manner, transferable through physical contact” (Evans, Souvenirs 1). We might call this logic synecdochic: the sacred power of the holy site is thought to remain immanent in pieces of it, chips from a temple or vials of water from a well.

As leisure travel became more common, souvenir commodities evolved from relics. By the eighteenth and nineteenth centuries, tourist-consumers had access to a large market of souvenir merchandise. Thad Logan describes china mugs, novelty needle cases, “sand pictures, seaweed albums,” tartan ware, and a wide range of other souvenir trinkets commonly found in Victorian sitting rooms (The Victorian Parlour, 186). Modern souvenirs are not very different. T-shirts from Hawaii and needle cases from Brighton both rely on the logic of metonymic association, as Logan (186) and Susan Stewart (On Longing, 136) point out. In order to memorialize a tourist’s experiences, the shapes and decorations of these souvenir trinkets evoke the site where those experiences took place.

How did synecdoche become metonymy? What changed? To begin to answer these questions, we can consider a test case: wooden souvenir trinkets from Victorian Scotland. These artifacts draw on both synecdochic and metonymic logic. Therefore, they provide evidence about a transitional phase in the history of the souvenir, and in the history of the way we derive meaning from objects. They do not represent a single moment of transition—this evolution was gradual and piecemeal, taking place over decades if not centuries. Instead, these souvenirs provide a useful case study, a point from which to consider a broader history.

Thomas à Kempis. Golden Thoughts from the Imitation of Christ. N.p., n.d. Bdg.s.922. National Library of Scotland, Edinburgh.

These souvenirs were known as Mauchline ware—named for Mauchline, a town in Ayrshire (Trachtenberg, Mauchline Ware 22-23). Mauchline ware objects were made of wood, decorated in distinctive styles, and heavily varnished for durability. The earliest Mauchline ware pieces were snuffboxes. By mid-century, tourists could buy Mauchline ware pen knives, sewing kits, eyeglass cases, and many other miscellaneous objects. (The examples discussed in this blogpost all happen to be book bindings.) Some Mauchline ware objects were decorated with a tartan pattern, immediately recognizable as an emblem of Scotland. Equally popular were Mauchline ware objects decorated with transfer images of tourist sites. These trinkets functioned according to the metonymic logic of modern souvenirs. For example, the binding below bears an iconic representation of Fingal’s Cave.

But sometimes, manufacturers of Mauchline ware took lumber from tourist sites to construct these souvenirs. Captions on the items would indicate the source of the material. Examples abound: a copy of The Dunkeld Souvenir was bound in wood “From the Athole Plantations Dunkeld” (Burns). The photograph below shows a copy of Sir Walter Scott’s Marmion bound in Mauchline ware, using wood “From [the] Banks of Tweed, Abbotsford.” More gruesomely, the boards on a copy of a Guide to Doune Castle were “made from the wood of Old Gallows Tree at Doune Castle” (Dunbar). These captions present the souvenirs as synecdochic artifacts—not religious, but geographical relics. Their purchasers could, quite literally, take home a piece of Scotland.

Walter Scott. Marmion. Edinburgh: Adam and Charles Black, 1873. Bdg.s.939. National Library of Scotland, Edinburgh.

These objects were part relic, part commodity. There was a commercial rationale for this combination: the publishers of these books leveraged the appeal of the wood as a relic, but they also transformed the raw material into a distinctively modern, distinctively Scottish consumer product. Contemporary accounts in a souvenir of Queen Victoria’s visit to the Scottish Borders expose some of the commercial logic behind the production process. Publisher’s advertisements in this 1867 book list the “fancy wood work” items its publishers sold in addition to books, and the original source of the wood used in the souvenirs (The Scottish Border 1-2). The binding on this copy states that the wood was “grown within the precincts of Melrose Abbey.” The advertisement provides more detail:  

‘Several years ago, when the town drain was being taken through the ‘Dowcot’ Park, […] a fine beam of black oak was discovered about six feet below the surface of the ground. It is now being taken up […] by Mr. Rutherfurd, stationer, Kelso, for the purpose of being turned into souvenirs. […].’ –Scotsman. Messrs. R may state that most of the “fine beam of black oak” […] split into fibres when exposed to the air and dried. Of the portions remaining good they have had the honour of preparing a box for Her Majesty in which to hold the Photographs of the district specially taken at the time of her visit. (2)

The wood was found on ground between Melrose Abbey and the Tweed, exhumed, and transformed into souvenirs. The publisher’s ad actually refers to these souvenirs as “Melrose Abbey Relics” (2). But they do not adhere to the logic of the early relic: these publishers describe the original wood as quasi-waste material that disintegrated into useless “fibres” when exposed to air. By using the wood for Mauchline ware, the publishers not only preserved the wood against further disintegration: they transformed organic waste into a valuable luxury product, rare and fine enough to present to the Queen. The organic source material lends some authenticity, but it was the process of commodification that added value and intellectual interest.

In short, the relic was not wholly abandoned: the relic and the souvenir co-existed, and some souvenir commodities borrowed ancient synecdochic logic. The gradual, piecemeal evolution from relic to commodity was part of the development of modern consumer culture. The publishers behind these Mauchline ware book bindings were scrambling to reach a new market. Their commercial innovations drew on both ancient and contemporary ideas about the relationships between object, place, and memory. Their publications allow us to consider the changing ideas that allow us to derive meaning from these souvenirs, and from objects like them. Of course, the ultimate meanings of these souvenirs were the personal memories they preserved for their owners. Those meanings remain mysterious, and always will.

Tess Goodman is a doctoral student at the University of Edinburgh. Her research explores book history and literary tourism, focusing on books sold as souvenirs in Victorian Scotland. Previously, she was Assistant Curator of Collections at Rare Book School at the University of Virginia.  

Categories
Think Piece

Humanist Pedagogy and New Media

by contributing editor Robby Koehler

Writing in the late 1560s, humanist scholar Roger Ascham found little to praise in the schoolmasters of early modern England.  In his educational treatise The Scholemaster, Ascham portrays teachers as vicious, lazy, and arrogant.  But even worse than the inept and cruel masters were the textbooks, which, as Ascham described them, were created specifically to teach students improper Latin: “Two schoolmasters have set forth in print, either of them a book [of vulgaria] . . ., Horman and Whittington.  A child shall learn of the better of them, that, which another day, if he be wise, and come to judgement, he must be fain to unlearn again.”  What were these books exactly? And if they were so unfit for use in the classroom, then why did English schoolmasters still use them to teach students?  Did they enjoy watching students fail and leave them educationally impoverished?

Actually, no. Then, as now, school teachers did not always make use of the most effective methods of instruction, but their choice to use the books compiled by Horman and Whittington was not based in a perverse reluctance to educate their students.  Ascham sets up a straw man here about the dismal state of Latin teaching in England to strengthen the appeal of his own pedagogical ideas.  As we will see, the books by Horman and Whittington, colloquially known as “vulgaria” or “vulgars” in schools of the early modern period, were a key part of an earlier Latin curriculum that was in the process of being displaced by the steady adoption of Humanist methods of Latin study and instruction and the spread of printed books across England.  Looking at these books, Ascham could see only the failed wreckage of a previous pedagogical logic, not the vital function such books had once served.  His lack of historical cognizance and wilful mischaracterization of previous pedagogical texts and practices are an early example of an argumentative strategy that has again become prevalent as the Internet and ubiquitous access to computers have led pundits to argue for the death of the book in schools and elsewhere.  Yet, as we will see, the problem is often not so much with books as with what students and teachers are meant to do with them.

“Vulgaria” were initially a simple solution to a complicated problem: how to help students learn to read and write Latin and English with the limited amount of paper or parchment available in most English schools.  According to literary scholar Chris Cannon, many surviving notebooks from fifteenth-century England record pages of paired English and Latin sentence translations.  It seems likely that students would receive a sentence in Latin, record it, and then work out how to translate it into English.  Students held onto these completed notebooks as both evidence of their learning and as a kind of impromptu reference for future translations.  In the pre-print culture of learning, then, vulgaria were evidence of a learning process, the material embodiment of a student’s slow work of absorbing and understanding the mechanics of both writing and translation.

The advent of printing fundamentally transformed this pedagogical process.  Vulgaria were among the first books printed in England, and short 90-100-page vulgaria remained a staple of printed collections of Latin grammatical texts up to the 1530s.  Once in print, vulgaria ceased to be a material artifact of an educational process and became an educational product for students literate in either English or Latin to use while working on translations.  The culture of early modern English schools comes through vividly in these printed collections, often closing the distance between Tudor school rooms and our own.  For example, in the earliest printed vulgaria compiled by John Anwykyll, one can learn how to report a fellow student’s lackadaisical pursuit of study: “He studied never one of those things more than another.” Or a student might ask after a shouting match, “Who made all of this trouble among you?”  Thus, in the early era of print, these books remained tools for learning Latin as a language of everyday life. It was Latin for school survival, not for scholarly prestige.

As Humanism took hold in England, vulgaria changed too, transforming from crib-books for beginning students to reference books for the use of students and masters, stuffed full of Humanist erudition and scholarship.  Humanist schoolmasters found the vulgaria a useful instrument for demonstrating their extensive reading and, occasionally, advancing their career prospects.  William Horman, an older schoolmaster and Fellow at Eton, published a 656-page vulgaria (about five times as long as the small texts for students) in 1519, offering it as a product of idle time that, in typical Humanist fashion, he published only at the insistence of his friends.  Yet Horman’s book was still true to its roots in the school room, containing a melange of classical quotations alongside the traditional statements and longer dialogues between schoolmasters and students.

By the 1530s, most of the first wave of printed vulgaria went out of print, likely because they did not fit with the new Humanist insistence that the speaking and writing of Latin be more strictly based on classical models.  Vulgaria would have looked increasingly old-fashioned, and their function in helping students adapt to the day-to-day rigors of the Latinate schoolroom was likely lost in the effort to separate, elevate, and purify the Latin spoken and written by students and teachers alike.  Nothing embodied this transformation more than Nicholas Udall’s vulgaria Flowers for Latin Speaking (1533), which was made up exclusively of quotations from the playwright Terence, with each sentence annotated with the play, act, and scene from which it was excerpted.

Loeb Facing Page Translation
Terence. Phormio, The Mother-In-Law, The Brothers. Ed. John Sargeaunt. Loeb Classical Library.  New York: G.P. Putnam’s Sons, 1920.  https://archive.org/details/L023NTerenceIIPhormioTheMotherInLawTheBrothers   

The vulgaria as printed crib-book passed out of use in the schoolroom after about 1540, so why was Ascham still so upset about their use in 1568 when he was writing The Scholemaster?  By that time, Ascham could assume that many students had access to approved Humanist grammatical texts and a much wider variety of printed matter in Latin.  In a world that had much less difficulty obtaining both print and paper, the vulgaria would seem a strange pedagogical choice indeed.  Ascham’s own proposed pedagogical practices assumed that students would have a printed copy of one or more classical authors and at least two blank books for their English and Latin writing, respectively.  Whereas the vulgaria arose from a world of manuscript practice and a straitened economy of textual scarcity, Ascham’s own moment had been fundamentally transformed by the technology of print and the Humanist effort to recover, edit, and widely disseminate the works of classical authors.  Ascham could take for granted that students worked directly with printed classical texts and that they would make use of Humanist methods of commonplacing and grammatical analysis that themselves relied upon an ever-expanding array of print and manuscript materials and practices.  In this brave new world, the vulgaria and its role in manuscript and early print culture were alien holdovers of a bygone era.

Of course, Ascham’s criticism of the vulgaria is also typical of Humanist scholars, who often distanced themselves from their predecessors in order to assert the importance and correctness of their own methods.  Ironically, this was exactly what William Horman was doing when he published his massive volume of vulgaria: exemplifying and monumentalizing his own erudition and study while also demonstrating the inadequacy of previous, much shorter efforts. Ascham’s rejection of vulgaria must be seen as part of a larger intergenerational Humanist pattern of disavowing and dismissing the work of predecessors who could safely be deemed inadequate in order to make way for one’s own contribution.  Ascham is peculiarly modern in this respect, arguing that introducing new methods of learning Latin can reform the institution of the school in toto.  One is put in mind of modern teachers who argue that the advent of the Internet, or of some set of methods that the Internet enables, will fundamentally transform the way education works.

In the end, the use of vulgaria was no more to blame for the difficulties of life in the classroom or the culture of violence in early modern schools than any other specific pedagogical practice or object.  But, as I’ve suggested, Ascham’s claim that the problems of education can be attributed not to human agents but to the materials they employ is an argument that has persisted into the present.  In this sense, Ascham’s present-mindedness suggests the need to take care in evaluating seemingly irrelevant or superfluous pedagogical processes or materials.  Educational practices are neither ahistorical nor acontextual; they exist in institutional and individual time, and they bear the marks of both past and present exigencies in their deployment.  When we fail to recognize this, we, like Ascham, mischaracterize their past and present value and will likely misjudge how best to transform our educational institutions and practices to meet our own future needs.