
The Image of the British State in the Macartney Mission to Qing China

By Gongchen Yang



After the Commutation Act of 1784 sharply reduced the British duty on tea, the British purchased £1.3 million worth of tea in Canton in 1786 and paid for nearly half of it in silver bullion rather than in other export goods (p. 272). At the same time, British manufacturers were producing a vast array of new goods (mainly textiles) that needed an export market. The trade deficit created by the Sino-British tea trade and the potential of the Chinese market drove British policymakers and merchants to send an official embassy to China to improve the restricted trading environment.

George Macartney, 1st Earl Macartney, led an embassy to visit the Qianlong Emperor in 1793 at his summer resort of Jehol (Rehe, now Chengde) to celebrate the emperor’s birthday. On the Qing side, the mandarin who welcomed what is called ‘the Macartney embassy’ was Heshen (pp. 288–290), the favorite minister of the Qianlong Emperor. This was the first official encounter between Qing China and the British Empire. From the early Qing, many European missionaries had served at the Qing court, offering their services in astronomy, mathematics, geography, and the arts. This think piece offers a ‘Sino-British diplomatic’ perspective on the further separation between China and the West after the Great Divergence, and it highlights the roles of two officials, the British Macartney and the Chinese Heshen.

In the context of the Chinese tribute system, the wrong choice of gifts and Chinese officials’ filtering of information doomed the Mission’s attempts to demonstrate Britain’s technological superiority to the Qing court and to reverse its stereotypes. In the end, the encounter only reinforced the Qing court’s original impression that the British were cunning and greedy for profit. The Mission, in turn, brought back to the British Empire an image of the Qing Empire as “Gold and jade on the outside, rot and decay on the inside,” which significantly changed the British Empire’s perception of the Qing Empire.

Defeat in the American War of Independence at the end of the 18th century led the British Empire to pivot to Asia (p. 4, 6-7). In a commercial state, merchants were the most active and wealthy part of society, and capital was of great concern to the government. At the same time, the British East India Company’s war in South India with the Kingdom of Mysore (a French ally) had dire prospects: war and territorial expansion nearly bankrupted the Company. The British government sent an embassy to expand Sino-British trade and thereby preserve EIC rule in India.

Victory in the Sino-Nepalese campaign of 1792 had given the Qing dynasty remarkable military confidence. This campaign was the last of Qianlong’s Ten Great Campaigns (shiquan wugong 十全武功), which brought Gorkha (now Nepal) into the China-centered tribute system. The tribute system inherited from previous dynasties presumed the Middle Kingdom’s moral, material, and cultural superiority over other nations and required those who wished to deal or trade with China to come as supplicants to the emperor. Because the Qing court also tended to view maritime trade and traders as peripheral to its strategic and economic interests and as part of this manageable sphere, the European nations that came to trade at China’s ports were handled within this same tributary framework (p. 27-28).

Macartney had staked a great deal on the Mission’s success. Among the European nations that traded with China, Macartney believed that “in consequence of irregularities committed by former Englishmen at Canton” (such as the Lady Hughes affair), “Englishmen were considered as the worst among Europeans.” Therefore, believing that “the various important objects of the Embassy could be obtained through the good will of the Chinese,” Macartney ordered the Embassy to “impress the Chinese with new, more just, and more favorable ideas of Englishmen by a conduct particularly regular and circumspect” (p. 231).

Changing Chinese ideas of the British thus became the key to the Mission’s success. Still, Macartney disagreed with others about how to impress the most crucial person, the Qianlong Emperor. During the preparation phase of the Mission, leading British manufacturers such as Matthew Boulton wished to present products representing British industry as gifts to the Qianlong court. As a member of the Birmingham Lunar Society, Boulton conveyed the values of the Enlightenment and wanted to send to China buttons, buckles, plated wares, and other products that would make life more comfortable and interest Chinese consumers. In his letter to Macartney, King George III likewise expressed his desire to ‘communicate the arts and comforts of life to those parts of the world where it appeared they had been wanting.’ However, Macartney, a former diplomat to Russia, believed that ‘Asian courts would only be impressed by elaborate display, spectacle and pomp.’ Despite the aspirations of the manufacturers and the king, Macartney sent two coaches decorated in imperial yellow, an elaborate planetarium, and several luxury gifts (on the gifts, see p. 243-246).

Maxine Berg argued that the Mission failed because the Embassy failed to express and convey an image of ‘useful knowledge’ to the Chinese ruling elite. This failure probably stemmed from Macartney’s lack of knowledge of Chinese culture: he learned about China from works written by the early Catholic missionaries a hundred years earlier, and knowledge of China’s recent court politics, which was crucial for diplomacy, was entirely absent (p. 69).

A few months before the embassy’s arrival, in June 1793, British and Chinese merchants in Canton submitted the embassy’s official letters and their translations to the Beijing court. The translations were opaque and carefully avoided any clear statement of the embassy’s objectives; their main purpose seems to have been to present the visit as bringing birthday wishes to the emperor (p. 90). Once the embassy arrived in Chinese waters, officials conveyed a simple but misleading message to the Beijing court: Britain was showing deference to China by offering tribute to celebrate the emperor’s birthday. But Qianlong was not blind to these British visitors: he watched the Mission’s every move and had his favorite courtier, Heshen, receive it.

The embassy arrived at Jehol in September 1793, but the supremacy of Chinese imperial power restricted Macartney’s communication with the Emperor. Apart from the Emperor’s birthday and a few necessary ceremonies, Macartney had no access to Qianlong. Moreover, the exhibition of technology prepared by Macartney, who did not understand the tribute system, angered Qianlong even before it was shown to him. Before arriving at Jehol, Macartney requested that four British artisans be sent to the royal palace to install the large planetarium, a job that would take a month. Qianlong, however, believed that Chinese artisans could complete the installation and refused the request. Even so, Macartney continued to stress that this was a job only British artisans could do, leading Qianlong to conclude that the British liked to show off. After the British failed to impress the Emperor, the courtiers led by Heshen became the Mission’s last hope.

I argue that Macartney used close personal relationships to gain important political positions (p. 28-30), whereas Heshen’s promotion rested more on personal ability. With excellent language skills (Manchu, Mandarin, Mongolian, and Tibetan) and financial management skills, Heshen took only four years to enter the Grand Council (junjichu 軍機處), the chief policy-making body of the Qing court, at the age of 27, when the average age of its members was around 60. In 1786, at the age of 37, he was awarded the highest position in the Qing bureaucracy and built a powerful political group under the emperor’s patronage. Investigating Heshen’s career path shows that the secret of his success was to please the Emperor, who could decide his fate: his interests were tied to Qianlong’s. When Qianlong liked poetry, he studied poetry so that he could compose poems with the Emperor; when Qianlong liked calligraphy, he imitated the Emperor’s handwriting so that he could write on his behalf.


It was Heshen’s code of conduct to ‘side with the emperor,’ so he could not display too much interest in British technology. Yet his pragmatism gave him an ambivalent attitude towards it. On December 3, 1793, Macartney wished to show Heshen the latest European technology for hot air balloon ascension. Frustratingly, Heshen discouraged not only this experiment but also the printed accounts of the British Empire that the embassy had prepared for Chinese courtiers. However, when Heshen suffered from hernia and rheumatism, the embassy’s doctor, Gillan, was sent to treat him with Western medicine (p. 321). Thus, had British technology been of visible benefit to him and his emperor, Heshen would have taken the initiative to learn about it and try it out.

Interestingly, Macartney’s inappropriate choice of British gifts may have been fortunate for the future expansion of the British Empire in China, since transformative British technology could otherwise have been ‘stolen’ by China. For instance, Macartney decided not to take a steam engine, the emblematic technology of the Industrial Revolution. It might have impressed the Qianlong court, but it also risked being ‘stolen’ by Chinese artisans, who were talented in arts and crafts and technically skilled. According to Mr. Barrow of the embassy, two Chinese artisans “separated two magnificent glass lusters piece by piece and put them again together in a short time without difficulty and mistake, the whole consisting of thousand pieces, though they had never seen anything of the kind before” (vol. 2, p. 98). Moreover, Qianlong intended to have Chinese artisans learn British technology: for the large planetarium, he sent two Chinese artisans to learn the skills of installation and repair (vol. 19, p. 157). In this respect, Qing China lost the possibility of learning some transformative industrial and military technology.

Macartney had six demands: that British merchants be granted access to the Chinese market through Chusan (now Zhoushan), Ningpo (now Ningbo), and Tientsin (now Tianjin); that they be allowed to warehouse their goods in Pekin (now Beijing); that they be granted a secure island for their unsold goods near Chusan and another near Canton; that trade duties between Macao and Canton be abolished; and that all duties on English merchants beyond those set in the Emperor’s diploma be prohibited (see the Canton System). As for the Embassy’s commercial demands, Heshen avoided discussing them with Macartney as much as possible and concealed these ‘excessive’ demands from the Emperor to avoid angering him. For many in the imperial court, these demands insulted the emperor’s dignity and, more practically, threatened the empire’s stability.

While European missionaries could stay and serve at the court in Beijing only at the cost of never leaving China until they died, the British Empire wanted to set up a permanent embassy compound in Beijing and to hold some territory of its own. Macartney’s first meeting and commercial negotiations with Heshen took place a week before the Emperor’s birthday, yet it was not until after the birthday that the Emperor learned of the Mission’s commercial requirements through Heshen. Macartney could do nothing about Heshen’s evasion, saying, “I could not help admiring the address with which the Minister parried all my attempts to speak to him on business this day, and how artfully he evaded every opportunity that offered for any particular conversation with me, endeavoring to engage our attention solely by the objects around us.”

Another possible way of impressing the Chinese was military display, but the filtering of information by lower-ranking officials prevented its impact from reaching the emperor. Macartney’s warship, the Lion, represented the height of contemporary military technology (p. 73), but Chinese official reports described her simply as a tribute ship. No one dared to report truthfully that the warship posed a military threat or that her technology was advanced. Yet according to Macartney’s observations, “these mandarins had never seen a ship of the Lion’s construction, bulk, or loftiness. They were at a loss how to ascend her sides; but chairs were quickly fastened to tackles, by which they were lifted up, while they felt a mixture of dread and admiration at this safe, rapid, but apparently perilous, conveyance” (p. 240). Poor translation also meant that no fruitful conversation about the Lion or naval warfare took place.

The Mission left Beijing on October 7, 1793. Qianlong sent several senior officials to accompany it and report on all its movements. At the same time, he ordered the officials in Canton and Macau to prevent the British merchants from monopolizing trade and conspiring with other foreign traders. In the end, Macartney’s efforts to “impress the Chinese with new, more just, and more favorable ideas of Englishmen” failed, and the Qing court’s original idea that the British were cunning and greedy for profit was only reinforced. In Macartney’s own view, meanwhile, Qing China was not as strong economically, militarily, or technologically as it was reputed to be. The Qing Empire was, in fact, “Gold and jade on the outside, rot and decay on the inside.”


Gongchen Yang is a PhD candidate in the Department of History at the University of Warwick. His doctoral research focuses on the corruption of the Canton customs in Qing China. His general interests include corruption, Chinese maritime customs, and Sino-British interactions.

Edited by Tom Furse

Featured Image: Lord Macartney’s Embassy to China 1793. Creative Commons.


Empire of Abstraction: British Social Anthropology in the “Dependencies”

By Nile A. Davies

It would seem to be no more than a truism that no material can be successfully manipulated until its properties are known, whether it be a chemical compound or a society of human beings; and from that it would appear to follow that the science whose material is human society should be called upon when nothing else than the complete transformation of a society is in question.

Lucy Mair, “Colonial Administration as a Science” (1933)

On March 24th 1945, the British scientific journal Nature breathlessly reported that £120,000,000 of research funds (the equivalent of over 5 billion USD today) would be made available by the passing of the Colonial Development and Welfare Bill: a momentous commitment to the expansion of colonial study “which should be of interest to administrators, scientific men and technologists, and all who are concerned with the welfare and advancement of the British Colonial possessions.” The material conditions of colonial research would significantly determine the scope and energies of empirical labor in the social sciences. Specifically, ideas of colonial welfare drew conspicuously on the authority of experts in Social Anthropology—in its varying professional and institutional forms—to apprehend the flux and metamorphosis of human relations in a new international order.

Such extraordinary expenditures reflected broad desires throughout the previous decade for a science of administration—a means with which to know and understand a field of possibilities in an age of global “interpenetration” in colonized societies which, in their particularity, could not be addressed by “the application of general principles, however humanitarian.”[1] As data pertaining to the “forces and spirit of native institutions” were increasingly called upon for the maintenance of social cohesion, there emerged an imperative for the cultivation of “specially trained investigators [devoted to] comprehensive studies in the light of a sociological knowledge of the life of a community.”

Table of Contents from Lucy Mair, Welfare in the British Colonies (London: Royal Institute of International Affairs, 1944).

The history of colonial welfare recalls the contours of “governmentality,” the term coined by Michel Foucault to describe how power is secured through forms of expertise and intervention, attending to “the welfare of the population, the improvement of its condition, the increase of its wealth, longevity, health, etc.”[2] Drawn into the enterprise of administration, the design and formulation of social and economic research by anthropologists became increasingly associated with the high moral purpose of colonial reform. But, as Joanna Lewis notes, such a remit encompassed an impossibly wide range of aims and instruments for control: “animating civil society against social collapse; devising urban remedies for the incapacitated and the destitute; correcting the deviant” (75). Beset by the threat of rapid social change and indigenous nationalisms, the potential of the worldview offered up by intimate knowledge of the social structure suggested a means by which history itself might be forestalled. Well poised to anticipate the unforeseeable in a world of collapsing regimes, the great enthusiasm for structural functionalism in particular—akin to the field of international relations and its entanglements with empire—derived from its popular image as a tool of divination, seeming to equate the kinds of total social knowledge claimed by its practitioners with a scientifically derived vision of the future.

Formalizing the central importance of social analysis to the task of government, in 1944, the Colonial Social Science Research Council (CSSRC) was formed in order to advise the Secretary of State of the Colonies regarding “schemes of a sociological or anthropological nature.” Among the founding members of the council was Raymond Firth, the esteemed ethnologist of Polynesian society whose thesis (on the “Wealth and Work of the Maori”) had been supervised by Bronislaw Malinowski, pater familias of the discipline as it took form around the life and work of those who attended his seminar at the London School of Economics. But for professional academics striving for dominance amidst the competition of so-called “practical men,” the expansion of new territories for research raised serious questions about the value and legitimacy of knowledge production. As Benoît de L’Estoile has noted, the struggle for “a monopoly of competence on non-western social phenomena” generated new factions in the milieu of colonial expertise between academics and administrators, whose mutual engagements “in the field” marked divergent relationships to the value of colonial study as a means for the production of social theory.[3]

Front matter of Raymond Firth, Human Types (London: Nelson and Sons, 1938). Image by John Krygier via A Series of Series.

At the same time, increasing demand and material support for the study of the world-system had allowed a new generation of social and natural scientists to turn their attention towards the field from the metropole. For its part, the Royal Anthropological Institute awarded the Wellcome Medal each year “for the best research essay on the application of anthropological methods to the problems of native peoples, particularly those arising from intercourse between native peoples, or between primitive natives and civilised races.” Lucy Mair, another former student of Malinowski’s (cited at the beginning of this essay) received the award in 1935. This “immaterial” value of the colonies for the prospect of scholarship was shared by Lord Hailey, Chairman of the Colonial Research Committee. As he suggested in the preamble to the mammoth administrative compendium, An African Survey (1938):

A considerable part of the activity of the intellectual world is expended today in the study of social institutions, systems of law, and political developments which can now only be examined in retrospect. But Africa presents itself as a living laboratory in which the reward of study may prove to be not only the satisfaction of an intellectual impulse, but an effective addition of the welfare of the people. (xxiv)

Hailey’s romantic claims about the ends of imperial study proved to be prophetic for the postwar period, and spoke to the experimental approach in which such schemes were elaborated. While the natural sciences held out the promise of material riches to be “exploited” in an empire of neglect, anthropologists similarly stood to profit from their engagements in a social order that was shifting beyond recognition. Beyond the preservative impulse of ethnographic practice in the early 20th century, fixed on salvaging the “primitive” from the threshold of extinction, the contingencies of a collapsing empire presented the opportunity for colonial science to fulfil a gamut of ethical duties as the ideological arm of an administration that governed the flow of capital itself. As Hailey would later note in 1943:

No one can dispute the value of the humanitarian impulse which has in the past so often provided a corrective to practices which might have prejudiced the interests of native peoples. But we can no longer afford to regard policy as mainly an exercise in applied ethics. We now have a definite objective before us—the ideal of colonial self-government—and all our thinking on social and economic problems must be directed to organising the life of colonial communities to fit them for that end. […] It is in the light of this consideration that we must seek to determine the position of the capitalist and the proper function of capital.

What was this “proper function” of capital? In an address at Chatham House in April 1944, Bernard Bourdillon, then Governor of Nigeria, described the affective indifference, the ideological exhaustion of a precarious empire whose deprivation under the doctrine of laissez-faire could only suggest the great deception of the civilizing mandate itself. In the thrall of liberal torpor, the fate of Britain’s so-called “dependencies” had long been characterized by the slow violence of a debilitating austerity, borne out by starvation and disease in insolvent colonies, unable to develop their (often plentiful) resources in the absence of revenues. The receipt of financial assistance by the poorest colonies to balance their ailing budgets reflected the management of the population at its minimum, confined within the vicious cycle of deficiency: “regarded as poor relations, who could not, in all decency, be allowed to starve, but whose first duty was to earn a bare subsistence, and to relieve their reluctant benefactors of what was regarded as a wholly unprofitable obligation.”

O.G.R. Williams to J.C. Meggitt, “Housing conditions for poorer classes in and around Freetown” (C.S.O. M/54/34, 1939). Photograph by author.

As the tide of decolonization became an inescapable reality, desires for a deliberate strategy towards the improvement of social conditions both at home and abroad sought to recuperate the notion of mutual benefit between colony and metropole. The move to restore the ethical entanglements of a “People’s Empire,” long left out of mind, suggested the refraction of a burgeoning conception of the welfare state in Britain, whose origins in The Beveridge Report—published in 1942—turned towards the cause of “abolishing” society’s major ills: Want, Disease, Ignorance, Squalor and Idleness. In spite of an apparent commitment to universalism—in the establishment of a National Health Service in 1946, and state insurance for unemployment and pensions, for example—the report would garner criticism for privileging the model of the male breadwinner at the expense of working wives, whilst otherwise reflecting a palliative approach to poverty that failed to address its root causes. While ideas of domestic welfare shared many of the rhetorical devices that characterized the project of colonial reform (with improvements in public health, education and living standards chief among them), save for a single glancing reference, the Beveridge Report made no mention of the colonies or their place within this expansive and much-feted vision for postwar society.

On the contrary, the long road to economic solvency and the raising of living standards was understood to lie within colonial societies themselves, however enervated or held in abeyance by preceding policies. British plans for the autonomy of the overseas territories centered on the rhetoric of extraction under the general directive for colonized societies to exploit their own resources—as Bourdillon would note, “including that most important of all natural resources, the capacity of the people themselves.” Increased investment from the metropole would in turn provide for the welfare of colonial subjects in the event of their independence through the generation of something that might be called “human capital”, and by turning towards the earth itself as a repository of untapped value. The appointment of experts in the fields of imperial geology, agronomy and forestry turned the labors of scientific discovery towards a political economy of “growth” for the mitigation of social inequalities on a planetary scale.

But the professional and institutional entanglements of anthropologists with the field inextricably linked them to a social system of subjection that they could not fully claim to disavow. Senior anthropologists in particular appeared to retain a kind of primitivism, neglecting in their studies the administrative issues of growing urban centers in favor of “tribal” or “village studies.” By the end of the 1940s, the earlier promise and possibility of Anthropology’s relationship to the colonial endeavor was increasingly questioned by its most prominent practitioners. At a special public meeting of the Royal Anthropological Institute in 1949, Firth spoke alongside the Oxford anthropologist E.E. Evans-Pritchard about the growing tensions and demands of professional practice in a period in which the vast majority of anthropological research was supported by state funds. “After long and shameful neglect by the British people and Government,” he declared, “it is now realised that it is impossible to govern colonial peoples without knowledge of their ways of life.” (179) And yet, Firth and Evans-Pritchard observed the anxieties in certain academic circles about what such a union would mean for the production of knowledge: “lest the colonial tail wag the anthropological dog—lest basic scientific problems be overlooked in favour of those of more pressing practical interest.” (138)

Buildings of the Makerere Institute of Social Research (MISR), founded in 1948 as the East African Institute of Social Research. Photograph via MISR.

Even before the conclusion of the Second World War, the experiences of fascism had proved to be a cautionary tale in which both the value and peril of social theory lay in its uses within a broader marketplace of applied science as an instrument of power-knowledge, capable of being wielded by states and their governments. Myopic fears of the “race war” to ensue from the collapse of white settler societies found their reflection in research agendas and the funding of applied studies. With an eye on neighboring Kenya, Audrey Richards—another of Malinowski’s “charmed circle”—became director of Uganda’s East African Institute of Social Research in 1950, a center established at Makerere College for the purpose of accumulating “anthropological and economic data on the peoples and problems of East Africa.”

This was also the scene of a burgeoning inquiry into “race relations.” In 1948, Firth’s student Kenneth Little published Negroes in Britain, a study of urban segregation and the fraught sentiments of “community” in Cardiff’s Tiger Bay, infamously portrayed by the Daily Express in 1936: “Half-Caste Girl: she presents a city with one of its big problems.” (49) Its streets would endure in the cultural imagination as a focal point of salacious reporting on the colonies of “coloured juveniles” born in the poor “slums” of seaport towns across the British Isles. Working-class migrants in Cardiff’s Loudoun Square were captured in the pages of the left-leaning weekly Picture Post by its staff photographer Bert Hardy, whose efforts to represent the human face of residents in the “deeply-depressed quarter” are a complex amalgam of pity and social-conscience documentary, recalling the iconic depictions of American poverty by photographers attached to the Farm Security Administration in the era of the New Deal. Meanwhile, the American sociologist St Clair Drake, who with Horace R. Cayton Jr. had co-authored the voluminous study Black Metropolis in 1945, had conducted research in Tiger Bay for his 1954 University of Chicago dissertation and responded directly to some of the claims made in Little’s study. Subjects of empire, he avowed, whether in Britain or its extremities, were united by their fate to be subjects of the survey and the study, misrepresented, slandered or otherwise examined with disciplinary instruments and the logics of reform and government.

Amidst revolutionary struggle and the rise of African nationalist movements, other scholarship emerging from this milieu appeared to display certain deficiencies in vision emanating from the colonial situation—the professional certitude and patronizing racism with which social scientists made and mythologized their objects. In 1955, the geographer Frank Debenham—another senior figure in the CSSRC’s council—published Nyasaland: Land of the Lake as part of The Corona Library, a series of “authoritative and readable” surveys sponsored by the Colonial Office. Writing in his review of Debenham’s book in the Journal of Negro Education, the historian Rayford Logan observed the bewildering disconnect between the well-documented experiences of civil discord under white-minority rule in the territory and the world as it was rendered in print:

[Debenham] seriously states: “We need not call the African lazy, since there is little obligation to work hard, but we must certainly call him lucky” (p. 104). He opposes a rigid policy of restricting freehold land for Europeans. His over-all view blandly disregards the discontent among the Africans in Nyasaland: “If only Nyasaland people are left to themselves and not incited from elsewhere there should be contentment under the new regime very soon, a return in fact to the situation of a few years ago when there was complete amity as a whole between black and white, and there were all the essentials for a real partnership satisfactory to both colours.”

In hindsight, these problems of perception appear to have become evident—if not exactly solvable—even to those most apparently endowed with the greatest faculties of interpretation and insight into the arcane mechanisms of the social world. Michael Banton, a student of Kenneth Little’s and the first editor of the journal Sociology, recalled his professional errata in the 2005 article “Finding, and Correcting, My Mistakes”. Writing candidly of his earliest forays into colonial research, he described the evolution and decline of structural functionalism, which was “founded upon a view of action as using scarce means to attain given ends but had in my, perhaps faulty, perception become a top-down theory of the social system.” Such reflections suggest the disenchantments of an analytical framework which threatened to occlude as much as it sought to understand, in which whole worlds went unnoticed or misread. More than 50 years after his earliest studies in the “coloured quarter” of London’s East End and Freetown, the capital of British West Africa, Banton still appeared—against all good intentions—stumped: “There were failings that should be accounted blind spots rather than mistakes…. Why was my vision blinkered?”


[1] Mair, Lucy. “Colonial Administration as a Science.” Journal of the Royal African Society 32, no. 129 (1933): 367.

[2] Foucault, Michel. The Foucault Effect: Studies in Governmentality. Edited by Graham Burchell, Colin Gordon and Peter Miller. Chicago: University of Chicago Press, 1991 [1978], p.100.

[3] Pels, Peter. “Global ‘experts’ and ‘African’ Minds: Tanganyika Anthropology as Public and Secret Service, 1925-61.” The Journal of the Royal Anthropological Institute 17, no. 4 (2011): 788-810. http://www.jstor.org/stable/41350755.


Nile A. Davies is a doctoral candidate in Anthropology and the Institute for Comparative Literature and Society at Columbia University. His dissertation examines the politics and sentiments of reconstruction and the aftermaths of “disaster” in postwar Sierra Leone.

Featured Image: Cardiff’s Tiger Bay in the 1950s. Photograph by Bert Hardy, via WalesOnline.


Tory Marxism

by Charles Troup 

For many on the Right today, describing something as “Marxist” is sufficient to mark it out as something every decent conservative should stand against. Indeed, at first glance Marxism and conservatism may even look diametrically opposed. One is radically egalitarian, whilst the other has always found it necessary to defend inequality in some form. One demands that a society’s institutions express social justice; whilst the other asks principally that they be stable and capable of managing change. One proceeds from principle; the other prefers pragmatism.

But things weren’t always this way. The Right’s most creative thinkers have often drawn on an eclectic range of sources when expressing and renewing their creed—Marx not excepted. On the British Right, in fact, we can find surprisingly frank engagement with Marxism as recently as the 1980s: in particular amongst the Salisbury Group, a collection of “traditionalists” skeptical about the doctrines of neoliberalism which were conquering right-wing parties in the Western world one by one.

Roger Scruton

The influence of Marx is plain, for instance, in the philosopher Roger Scruton’s 1980 book The Meaning of Conservatism. Here Scruton made the striking claim that Marxism was a more suitable philosophical tradition than liberalism for conservatives to engage with in dialogue, because ‘it derives from a theory of human nature that one might actually believe’. This was because liberalism, for Scruton, began with a fictitious ideal of autonomous individual agents and believed that they could not be truly free under authority unless they had somehow consented to it. For Scruton, however, this notion “isolates man from history, from culture, from all those unchosen aspects of himself which are in fact the preconditions of his subsequent autonomy.” Liberalism lacked an account of how society and the self deeply interpenetrated each other. Scruton believed that individuals yearned to see themselves reflected at some profound level in the way their society was organized, in its culture, and in the forms of collective membership it offered. Yet liberalism presented no idea of the self above its desires and no idea of self-fulfillment other than their satisfaction.

Marxism, on the other hand, possessed a philosophical anthropology which was much friendlier to the sort of “Hegelian” conservatism which Scruton advocated. He was particularly impressed with the concept of “species-being” or “human essence,” which Marx had borrowed from Ludwig Feuerbach and employed in the Manuscripts of 1844. It was this notion, Scruton reminded his readers, that underpinned the whole centrality of labour for Marxists, since they regarded it as an essential, intrinsically valuable human activity. Moreover, it was the estrangement of the individual from their labour under capitalism which caused the malaise of ‘alienation’: that condition of spiritual disaffection which, Scruton believed, conservatives should recognise in their own instincts about modernity’s deficiencies. Of course, the conservative would seek to ‘present his own description of alienation, and to try to rebut the charge that private property is its cause’; but Marxists should be praised for recognising ‘that civil order reflects not the desires of man, but the self of man’.

There was an urgent political stake in this discussion. Scruton had welcomed Thatcher’s victory in 1979 as an opportunity to recast British conservatism after its post-war dalliance with Keynesianism and redistributive social policy. Still, he felt a sense of foreboding about the ideological forces which had ushered her to victory. The Conservative Party, he complained, “has begun to see itself as the defender of individual freedom against the encroachments of the state, concerned to return to people their natural right of choice.” The result was damaging ‘urges to reform’, stirred by the newly ascendant language of “economic liberalism.” Scruton implored his fellow conservatives not to mistake this for true conservatism, but to recognize it as a derivation of its “principal enemy.”

In doing so, he once again compared Marxism and liberalism to demonstrate to conservatives the limitations of the latter. “The political battles of our time,” he wrote, “concern the conservation and destruction of institutions and forms of life: nothing more vividly illustrates this than the issues of education, political unity, the role of trade unions and the House of Lords, issues with which the abstract concept of ‘freedom’ fails to make contact.” Marxists at least understood that “the conflict concerns not freedom but authority, authority vested in a given office, institution or arrangement.” Their approach of course was “to demystify the ideal of authority” and “replace it with the realities of power,” which Scruton thought reductive. But “in preferring to speak of power, the Marxist puts at the centre of politics the only true political commodity, the only thing which can actually change hands”—it “correctly locates the battleground.”

Scruton wasn’t the only figure in the Salisbury Group to engage with Marxism. So too did the historian Maurice Cowling, doyen of the “Peterhouse school” associated with the famously conservative Cambridge college. He believed that Marxism’s “explanatory usefulness can be considerable” and was even described by one of his admirers as a “Tory Marxist jester.”

Maurice Cowling

Cowling hated the Whiggish historians who dominated the English academy in the first half of the 20th century, and welcomed the rise of the English Marxist school in the 1950s—those figures around the journal Past & Present like E.P. Thompson, Eric Hobsbawm, Dona Torr and Christopher Hill—as a breath of fresh air. Whereas Whig liberals gave bland and credulous accounts of the motive forces of British political history, the English Marxists were cynical and clear-eyed about power and conflict. As he explained in a 1987 radio interview for the BBC, he agreed with them that “class struggle” was “a real historical fact” and that we should “always see a cloven hoof beneath a principle.” Marxists knew that any set of institutions unequally apportioned loss amongst the social classes, making the business of politics that of deciding in whose image this constitution would be made.

This was one point for Cowling where Marxists and conservatives parted ways: accepting the reality of class struggle didn’t mean picking the same side of the barricades. But Cowling believed that conservatives also diverged analytically from Marxists. One of their great errors, he wrote, was to believe that all forms of cultural or social attachment which entailed hierarchy were reducible to false consciousness; but Cowling believed that these were more concrete, especially if they connected to a sort of national consciousness he often referred to in quasi-mystical terms. The error made Marxists naïve about “the fertility and resourcefulness of established regimes.” For Cowling, it was the job of conservative political elites to enact this “resourcefulness:” to tap into the deep well of national sentiment and renew it for successive generations, and thus to blunt class conflict and insulate Britain’s political system from popular pressure.

We can see Cowling applying these ideas to contemporary politics most explicitly in the Salisbury Group’s first publication, the 1978 edited collection Conservative essays. Here he criticized Thatcher’s political rhetoric. Adam Smith might be a useful name to deploy against socialism, he wrote, but if carried to its “rationalistic” pretensions his political language was too rigid and unimaginative for the great task facing conservative elites. “If there is a class war—and there is—it is important that it should be handled with subtlety and skill […] it is not freedom that Conservatives want; what they want is the sort of freedom that will maintain existing inequalities or restore lost ones.” No class war could be managed by “politicians who are so completely encased in a Smithian straitjacket that they are incapable of recognizing that it is going on.” Conservatives needed to read more widely in search of insights to press into service against the reformers and revolutionaries of the age.

Marx rapidly fell out of favor as a source for creative borrowing, however. The collapse of the USSR was hailed by many conservatives as the ultimate indictment of socialism and Marx’s whole system along with it – something many on the Right still believe. Even Scruton became more reluctant to engage with Marx as the Cold War wore on (Cowling criticized him for making the journal Salisbury Review “crudely anti-Marxist” under his editorship). The frank openness to learning from Marx that we find in these texts looks like a historical curiosity today.

The story of the Salisbury Group is also something of a historiographical curiosity. The conservative revival of the 1970s has been the subject of much excellent work in recent British history; but the Group, despite its reputation on the Right and the status of its most prominent figures, has with a few exceptions been passed over for study. Thatcherism and its genealogy have understandably drawn the eye, but this has sometimes unhelpfully excluded its conservative critics or more skeptical fellow-travellers. Historians should seek now to tell more complex stories about the intellectual history of conservatism in this period: after all, the ascendance on the Right of the doctrines and rhetoric of neoliberalism was, in the words of philosopher John Gray, “perhaps the most remarkable and among the least anticipated developments in political thought and practice throughout the Western world in the 1980s.”

As for the present, whilst we shouldn’t expect a conservative re-engagement with Marx we should expect to see more creative re-appropriation of thinkers beyond the typical right-wing canon. This is especially so because the Tory Marxists of the 1970s were looking for something still sought by many conservatives today. That is a counterpoint to a neoliberalism which in its popular idiom increasingly rests upon a notion of individual freedom which fewer and fewer people experience as cohering with their aspirations, values or attachments; or which appeals to moralistic maxims about personal grit, endeavour and innovation which are belied by the inequalities and precarities of contemporary economic life. They seek a political perspective which issues from a holistic analysis of society and its constituent forces rather than individualistic axioms about entitlements and incentives, and which can speak to alienation and to conflict over authority. We can see this process underway already on the French Right, as Mark Lilla made clear in a recent article, where a new generation of intellectuals count the ‘communitarian’ socialists Alasdair MacIntyre, Christopher Lasch and Charles Péguy among their lodestars. And in a perhaps less self-conscious way we can see it on the American Right too, as the long-durable “fusionist” coalition between social conservatives and business libertarians comes under strain: witness Patrick Deneen’s surprise bestseller Why Liberalism Failed and the much-publicized debate between Sohrab Ahmari and David French over whether conservatives should reject or reconcile themselves to liberal institutions and norms. In this moment especially, we should expect to see more inspiration on the intellectual Right from strange places.


Charles Troup is a second-year Ph.D. student in Modern European History at Yale University. 


What has Athens to do with London? Plague.

By Editor Spencer J. Weinreich

Map of London by Wenceslas Hollar, c.1665

It is seldom recalled that there were several “Great Plagues of London.” In scholarship and popular parlance alike, only the devastating epidemic of bubonic plague that struck the city in 1665 and lasted the better part of two years holds that title, which it first received in early summer 1665. To be sure, the justice of the claim is incontrovertible: this was England’s deadliest visitation since the Black Death, carrying off some 70,000 Londoners and another 100,000 souls across the country. But note the timing of that first conferral. Plague deaths would not peak in the capital until September 1665, the disease would not take up sustained residence in the provinces until the new year, and the fire was more than a year in the future. Rather than any special prescience among the pamphleteers, the nomenclature reflects the habit of calling every major outbreak in the capital “the Great Plague of London”—until the next one came along (Moote and Moote, 6, 10–11, 198). London experienced a major epidemic roughly every decade or two: recent visitations had included 1592, 1603, 1625, and 1636. That 1665 retained the title is due in no small part to the fact that no successor arose; this was to be England’s last outbreak of bubonic plague.

Serial “Great Plagues of London” remind us that epidemics, like all events, stand within the ebb and flow of time, and draw significance from what came before and what follows after. Of course, early modern Londoners could not know that the plague would never return—but they assuredly knew something about its past.

Early modern Europe knew bubonic plague through long and hard experience. Ann G. Carmichael has brilliantly illustrated how Italy’s communal memories of past epidemics shaped perceptions of and responses to subsequent visitations. Seventeenth-century Londoners possessed a similar store of memories, but their plague-time writings mobilize a range of pasts and historiographical registers that includes much more than previous epidemics or the history of their own community: from classical antiquity to the English Civil War, from astrological records to demographic trends. Such richness accords with the findings of the formidable scholarly phalanx investigating “the uses of history in early modern England” (to borrow the title of one edited volume), which informs us that sixteenth- and seventeenth-century English people had a deep and sophisticated sense of the past, instrumental in their negotiations of the present.

Let us consider a single, iconic strand in this tapestry: invocations of the Plague of Athens (430–26 B.C.E.). Jacqueline Duffin once suggested that writing about epidemic disease inevitably falls prey to “Thucydides syndrome” (qtd. in Carmichael 150n41). In the centuries since the composition of the History of the Peloponnesian War, Thucydides’s hauntingly vivid account of the plague (II.47–54) has influenced writers from Lucretius to Albert Camus. Long lost to Latin Christendom, Thucydides was slowly reintegrated into Western European intellectual history beginning in the fifteenth century. The first (mediocre) English edition appeared in 1550, superseded in 1628 with a text by none other than Thomas Hobbes. For more than a hundred years, then, Anglophone readers had access to Thucydides, while Greek and Latin versions enjoyed a respectable, if not extraordinary, popularity among the more learned.

Michiel Sweerts, Plague in an Ancient City (1652), believed to depict the Plague of Athens

In 1659, the churchman and historian Thomas Sprat, booster of the Royal Society and future bishop of Rochester, published The Plague of Athens, a Pindaric versification of the accounts found in Thucydides and Lucretius. Sprat’s Plague has been convincingly interpreted as a commentary on England’s recent political history—viz., the Civil War and the Interregnum (King and Brown, 463). But six years on, the poem found fresh relevance as England faced its own “too ravenous plague” (Sprat, 21). The savvy bookseller Henry Brome, who had arranged the first printing, brought out two further editions in 1665 and 1667. Because the poem was prefaced by the relevant passages of Hobbes’s translation, an English text of Thucydides was in print throughout the epidemic. It is of course hardly surprising that at moments of epidemic crisis, the locus classicus for plague should sell well: plague-time interest in Thucydides is well-attested before and after 1665, in England and elsewhere in Europe.

But what does the Plague of Athens do for authors and readers in seventeenth-century London? As the classical archetype of pestilence, it functions as a touchstone for the ferocity of epidemic disease and a yardstick by which the Great Plague could be measured. The physician John Twysden declared, “All Ages have produced as great mortality and as great rebellion in Diseases as this, and Complications with other Diseases as dangerous. What Plague was ever more spreading or dangerous than that writ of by Thucidides, brought out of Attica into Peloponnesus?” (111–12).

One flattering rhymester welcomed Charles II’s relocation to Oxford with the confidence that “while Your Majesty, (Great Sir) shines here, / None shall a second Plague of Athens fear” (4). In a less reassuring vein, the societal breakdown depicted by Thucydides warned England what might ensue from its own plague.

Perhaps with that prospect in mind, other authors drafted Thucydides as their ally in catalyzing moral reform. The poet William Austin (who was in the habit of ruining his verses by overstuffing them with classical references) seized upon the Athenians’ passionate devotions in the face of the disaster (History, II.47). “Athenians, as Thucidides reports, / Made for their Dieties new sacred courts. / […] Why then wo’nt we, to whom the Heavens reveal / Their gracious, true light, realize our zeal?” (86). In a sermon entitled The Plague of the Heart, John Edwards enlisted Thucydides in the service of his conceit of a spiritual plague that was even more fearsome than the bubonic variety:

The infection seizes also on our memories; as Thucydides tells us of some persons who were infected in that great plague at Athens, that by reason of that sad distemper they forgot themselves, their friends and all their concernments [History, II.49]. Most certain it is that by the Spirituall infection men forget God and their duty. (8)

Not dissimilarly, the tailor-cum-preacher Richard Kingston paralleled the plague with sin. He characterized both evils as “diffusive” (23–24), citing Thucydides to the effect that the plague began in Ethiopia and moved thence to Egypt and Greece (II.48).

On the supposition that, medically speaking, the Plague of Athens was the same disease they faced, early modern writers treated it as a practical precedent for prophylaxis, treatment, and public health measures. Thucydides was one of several classical authorities cited by the Italian theologian Filiberto Marchini to justify open-field burials, based on their testimony that wild animals shunned plague corpses (Calvi, 106). Rumors of plague-spreading also stoked interest in the History, because Thucydides records that the citizens of Piraeus believed the epidemic arose from the poisoning of wells (II.48; Carmichael, 149–50).

Peter Paul Rubens, Hippocrates (1638)

It should be noted that Thucydides was not the only source for early modern knowledge about the Plague of Athens. One William Kemp, extolling the preventative virtues of moderation, tells his readers that it was temperance that preserved Socrates during the disaster (58–59). This anecdote comes not from Thucydides, but Claudius Aelianus, who relates of the philosopher’s constitution and moderate habits, “[t]he Athenians suffered an epidemic; some died, others were close to death, while Socrates alone was not ill at all” (Varia historia, XIII.27, trans. N. G. Wilson). (Interestingly, 1665 saw the publication of a new translation of the Varia historia.) Elsewhere, Kemp relates how Hippocrates organized bonfires to free Athens of the disease (43), a story that originates with the pseudo-Galenic On Theriac to Piso, but probably reached England via Latin intermediaries and/or William Bullein’s A Dialogue Against the Fever Pestilence (1564). Hippocrates’s name, and supposed victory over the Plague of Athens, was used to advertise cures and preventatives.

 

With the exception of Sprat—whose poem was written in 1659—these are all fleeting references, but that is in some sense the point. The Plague of Athens, Thucydides, and his History had entered the English imaginary, a shared vocabulary for thinking about epidemic disease. To quote Raymond A. Anselment, Sprat’s poem (and other invocations of the Plague of Athens) “offered through the imitation of the past an idea of the present suffering” (19). In the desperate days of 1665–66, the mere mention of Thucydides’s name, regardless of the subject at hand, would have been enough to conjure the specter of the Athenian plague.

Whether or not one built a public health plan around “Hippocrates’s” example, or looked to the History of the Peloponnesian War as a guide to disease etiology, the Plague of Athens exerted an emotional and intellectual hold over early modern English writers and readers. In part, this was merely a sign of the times: sixteenth-century Europeans were profoundly invested in the past as a mirror for and guide to the present and the future. In England, the Great Plague came at the height of a “rage for historical parallels” (Kewes, 25)—and no corner of history offered more distinguished parallels than classical antiquity.

And let us not undersell the affective power of such parallels. The value of recalling past plagues was the simple fact of their being past. Awful as the Plague of Athens had been, it had eventually passed, and Athens still stood. Looking backwards was a relief from a present dominated by the epidemic, and from the plague’s warped temporality: the interruption of civic and liturgical rhythms and the ordinary cycle of life and death. Where “an epidemic denies time itself” (Calvi, 129–30), history restores it, and offers something like orientation—even, dare we say, hope.

 


Melodrama in Disguise: The Case of the Victorian Novel

By guest contributor Jacob Romanow

When people call a book “melodramatic,” they usually mean it as an insult. Melodrama is histrionic, implausible, and (therefore) artistically subpar—a reviewer might use the term to suggest that serious readers look elsewhere. Victorian novels, on the other hand, have come to be seen as an irreproachably “high” form of art, part of a “great tradition” of realistic fiction beloved by stodgy traditionalists: books that people praise but don’t read. But in fact, the nineteenth-century British novel and the stage melodrama that provided the century’s most popular form of entertainment were inextricably intertwined. The historical reality is that the two forms have been linked from the beginning: indeed, many of the greatest Victorian novels are prose melodramas themselves. But from the Victorian period on down, critics, readers, and novelists have waged a campaign of distinctions and distractions aimed at disguising and denying the melodramatic presence in novelistic forms. The same process that canonized what were once massively popular novels as sanctified examples of high art scoured those novels of their melodramatic contexts, leaving our understanding of their lineage and formation incomplete. It’s commonly claimed that the Victorian novel was the last time “popular” and “high” art were unified in a single body of work. But the case of the Victorian novel reveals the limitations of constructed, motivated narratives of cultural development. Victorian fiction was massively popular, absolutely—but that popularity rested in significant part on the presence of “low” melodrama around and within those classic works.

A poster of the dramatization of Charles Dickens’s Oliver Twist

Even today, thinking about Victorian fiction as a melodramatic tradition cuts against many accepted narratives of genre and periodization; although most scholars will readily concede that melodrama significantly influenced the novelistic tradition (sometimes, they add, to the latter’s detriment), it is typically treated as an external tradition whose features were borrowed, or else as an alien encroaching upon the rightful preserve of a naturalistic “real.” Melodrama first arose in France around the French Revolution and quickly spread throughout Europe; A Tale of Mystery (1802), an uncredited translation from French by Thomas Holcroft (himself a novelist), is generally considered the first English melodrama. By the accession of Victoria in 1837, melodrama had long been the dominant form on the English stage. Yet major critics have shown melodramatic method to be fundamental to the work of almost every major nineteenth-century novelist, from George Eliot to Henry James to Elizabeth Gaskell to (especially) Charles Dickens, often treating these discoveries as particular to the author in question.

Moreover, the practical relationship between the novel and melodrama in Victorian Britain helped define both genres. Novelists like Charles Dickens, Wilkie Collins, Edward Bulwer-Lytton, Thomas Hardy, and Mary Elizabeth Braddon, among others, were themselves playwrights of stage melodramas. But the most common connection, like film adaptations today, was the widespread “melodramatization” of popular novels for the stage. Blockbuster melodramatic productions were adapted not only from popular crime novels of the Newgate and sensation schools like Jack Sheppard, The Woman in White, Lady Audley’s Secret, and East Lynne, but also from canonical works including David Copperfield, Jane Eyre, Rob Roy, The Heart of Midlothian, Mary Barton, A Christmas Carol, Frankenstein, Vanity Fair, and countless others, often in multiple productions of each. Conversely, many major melodramas were themselves adaptations of more or less prominent novels, for example Planché’s The Vampire (1820), Moncrieff’s The Lear of Private Life (1820), and Webster’s Paul Clifford (1832). As in any process of adaptation, the stage and print versions of each of these narratives differ in significant ways. But the interplay between the two forms was both widespread and fully baked into the generic expectations of the novel; the profusion of adaptation, with or without an author’s consent, makes clear that melodramatic elements in the novel were not merely incidental borrowings. In fact, melodramatic adaptation played a key role in the success of some of the period’s most celebrated novels: Dickens’s Oliver Twist, for instance, was dramatized even before its serialized publication was complete! And the significant rate of illiteracy among melodrama’s audiences meant that for novelists like Dickens or Walter Scott, the melodramatic stage could often serve as the only point of contact with a large swath of the public. As the critic Emily Allen aptly writes, “melodrama was not only the backbone of Victorian theatre by midcentury, but also of the novel.”

This question of audience helps explain why melodrama has been separated out of our understanding of the novelistic tradition. Melodrama proper was always “low” culture, tied to its economically lower-class and often illiterate audiences in a society that tended to associate the theatre with lax morality. Nationalistic sneers at the French origins of melodrama played a role as well, as did the Victorian sense that true art should be permanent and eternal, in contrast to the spectacular but transient visual effects of the melodramatic stage. And like so many “low” forms throughout history, melodrama’s transformation of “higher” forms was denied even while it took place. Victorian critics, particularly those of a conservative bent, would often actively deny melodramatic tendencies in novelists whom they chose to praise. In the London Quarterly Review’s 1864 eulogy “Thackeray and Modern Fiction,” for example, the anonymous reviewer writes that “If we compare the works of Thackeray or Dickens with those which at present win the favour of novel-readers, we cannot fail to be struck by the very marked degeneracy.” The latter, the reviewer argues, tend towards the sensational and immoral, and should be approached with a “sentiment of horror”; the former, on the other hand, are marked by their “good morals and correct taste.” This is revisionary literary history, and one of its revisions (I think we can even say the point of its revisions) is to eradicate melodrama from the historical narrative of great Victorian novels. The reviewer praises Thackeray’s “efforts to counteract the morbid tendencies of such books as Bulwer’s Eugene Aram and Ainsworth’s Jack Sheppard,” ignoring Thackeray’s classification of Oliver Twist alongside those prominent Newgate novels. The melodramatic quality of Thackeray’s own fiction (not to mention the highly questionable “morality” of novels like Vanity Fair and Barry Lyndon), let alone that of the overtly melodramatic Dickens, is downplayed or denied outright. And although the review offers qualified praise of Henry Fielding as a literary ancestor of Thackeray, it ignores their melodramatic relative Walter Scott. The review, then, is not just a document of midcentury mainstream anti-theatricality but also one that provides real insight into how critics worked to solidify an anti-theatrical novelistic canon.

[Image: Photographic print of Act 3, Scene 6 from The Whip, Drury Lane Theatre, 1909. Gabrielle Enthoven Collection, Museum number S.211-2016. © Victoria and Albert Museum]

Yet even after these very Victorian reasons have fallen aside, the wall of separation between novels and melodrama has been maintained. Why? In closing, I’ll speculate about a few possible reasons. One is that Victorian critics’ division became a self-fulfilling prophecy in the history of the novel, bifurcating the form into melodramatic “low” and self-consciously anti-melodramatic “high” genres. Another is that this historical revisionism in the criticism of the novel mirrored and reinforced a long-standing habit of melodrama’s theatrical criticism, which has likewise used “melodrama” derogatorily, differentiating the melodramas of which it approved from “the old melodrama,” a dynamic that took root even before any melodrama was legitimately “old.” A third factor is surely the rise of so-called dramatic realism, and the ensuing denial of melodrama’s role in the theatrical tradition. And a final reason, I think, is that we may still wish to relegate melodrama to the stage (or the television serial) because we are not really comfortable with the roles it plays in our own world: in our culture, in our politics, and even in our visions for our own lives. Recognizing the presence of melodrama in the “great tradition” of novels will help us better understand those texts. And letting ourselves find melodrama there may also help us find it in the many other places where it hides in plain sight.

Jacob Romanow is a Ph.D. student in English at Rutgers University. His research focuses on the novel and narratology in Victorian literature, with a particular interest in questions of influence, genre, and privacy.

Categories
Think Piece

The Idea of the Souvenir: Mauchline Ware

by guest contributor Tess Goodman

The souvenir is a relatively recent concept. The word began to refer to an “object, rather than a notion” only in the late eighteenth century (Kwint, Material Memories 10). Of course, the practice of carrying a small token away from an important location is ancient. In Europe, souvenirs evolved from religious relics. Pilgrims in the late Roman and Byzantine eras removed stones, dirt, water, and other natural materials from pilgrimage sites, believing that “the sanctity of holy people, holy objects and holy places was, in some manner, transferable through physical contact” (Evans, Souvenirs 1). We might call this logic synecdochic: the sacred power of the holy site is thought to remain immanent in pieces of it, whether chips from a temple or vials of water from a well.

As leisure travel became more common, souvenir commodities evolved from relics. By the eighteenth and nineteenth centuries, tourist-consumers had access to a large market of souvenir merchandise. Thad Logan describes china mugs, novelty needle cases, “sand pictures, seaweed albums,” tartan ware, and a wide range of other souvenir trinkets commonly found in Victorian sitting rooms (The Victorian Parlour, 186). Modern souvenirs are not very different. T-shirts from Hawaii and needle cases from Brighton both rely on the logic of metonymic association, as Logan (186) and Susan Stewart (On Longing, 136) point out. In order to memorialize a tourist’s experiences, the shapes and decorations of these souvenir trinkets evoke the site where those experiences took place.

How did synecdoche become metonymy? What changed? To begin to answer these questions, we can consider a test case: wooden souvenir trinkets from Victorian Scotland. These artifacts draw on both synecdochic and metonymic logic. Therefore, they provide evidence about a transitional phase in the history of the souvenir, and in the history of the way we derive meaning from objects. They do not represent a single moment of transition—this evolution was gradual and piecemeal, taking place over decades if not centuries. Instead, these souvenirs provide a useful case study, a point from which to consider a broader history.

[Image: Thomas à Kempis, Golden Thoughts from the Imitation of Christ. N.p., n.d. Bdg.s.922. National Library of Scotland, Edinburgh.]

These souvenirs were known as Mauchline ware, named for Mauchline, a town in Ayrshire (Trachtenberg, Mauchline Ware 22-23). Mauchline ware objects were made of wood, decorated in distinctive styles, and heavily varnished for durability. The earliest Mauchline ware pieces were snuffboxes. By mid-century, tourists could buy Mauchline ware pen knives, sewing kits, eyeglass cases, and many other miscellaneous objects. (The examples discussed in this blog post all happen to be book bindings.) Some Mauchline ware objects were decorated with a tartan pattern, immediately recognizable as emblems of Scotland. Equally popular were Mauchline ware objects decorated with transfer images of tourist sites. These trinkets functioned with metonymic logic, as modern souvenirs do. For example, one such binding bears an iconic representation of Fingal’s Cave.

But sometimes, manufacturers of Mauchline ware took lumber from tourist sites to construct these souvenirs. Captions on the items would indicate the source of the material. Examples abound: a copy of The Dunkeld Souvenir was bound in wood “From the Athole Plantations Dunkeld” (Burns). The photograph below shows a copy of Sir Walter Scott’s Marmion bound in Mauchline ware, using wood “From [the] Banks of Tweed, Abbotsford.” More gruesomely, the boards on a copy of a Guide to Doune Castle were “made from the wood of Old Gallows Tree at Doune Castle” (Dunbar). These captions present the souvenirs as synecdochic artifacts—not religious, but geographical relics. Their purchasers could, quite literally, take home a piece of Scotland.

[Image: Walter Scott, Marmion. Edinburgh: Adam and Charles Black, 1873. Bdg.s.939. National Library of Scotland, Edinburgh.]

These objects were part relic, part commodity. There was a commercial rationale for this combination: the publishers of these books leveraged the appeal of the wood as a relic, but they also transformed the raw material into a distinctively modern, distinctively Scottish consumer product. Contemporary accounts in a souvenir of Queen Victoria’s visit to the Scottish Borders expose some of the commercial logic behind the production process. The publishers’ advertisements in this 1867 book list the “fancy wood work” items they sold in addition to books, along with the original source of the wood used in the souvenirs (The Scottish Border 1-2). The binding on this copy states that the wood was “grown within the precincts of Melrose Abbey.” The advertisement provides more detail:

‘Several years ago, when the town drain was being taken through the ‘Dowcot’ Park, […] a fine beam of black oak was discovered about six feet below the surface of the ground. It is now being taken up […] by Mr. Rutherfurd, stationer, Kelso, for the purpose of being turned into souvenirs. […].’ –Scotsman. Messrs. R may state that most of the “fine beam of black oak” […] split into fibres when exposed to the air and dried. Of the portions remaining good they have had the honour of preparing a box for Her Majesty in which to hold the Photographs of the district specially taken at the time of her visit. (2)

The wood was found on ground between Melrose Abbey and the Tweed, exhumed, and transformed into souvenirs. The publisher’s ad actually refers to these souvenirs as “Melrose Abbey Relics” (2). But they do not adhere to the logic of the early relic: the publishers describe the original wood as quasi-waste material that disintegrated into useless “fibres” when exposed to air. By using the wood for Mauchline ware, the publishers not only preserved it against further disintegration but also transformed organic waste into a valuable luxury product, rare and fine enough to present to the Queen. The organic source material lent some authenticity, but it was the process of commodification that added value and intellectual interest.

In short, the relic was not wholly abandoned: the relic and the souvenir co-existed, and some souvenir commodities borrowed ancient synecdochic logic. The gradual, piecemeal evolution from relic to commodity was part of the development of modern consumer culture. The publishers behind these Mauchline ware book bindings were scrambling to reach a new market, and their commercial innovations drew on both ancient and contemporary ideas about the relationships between object, place, and memory. Their publications let us trace the changing ideas through which we derive meaning from these souvenirs, and from objects like them. Of course, the ultimate meanings of these souvenirs were the personal memories they preserved for their owners. Those meanings remain mysterious, and always will.

Tess Goodman is a doctoral student at the University of Edinburgh. Her research explores book history and literary tourism, focusing on books sold as souvenirs in Victorian Scotland. Previously, she was Assistant Curator of Collections at Rare Book School at the University of Virginia.