Disha Karnad Jani and Professor Adom Getachew discuss her new book, Worldmaking after Empire: The Rise and Fall of Self-Determination (Princeton, 2019)
By Zach Bates
This is a companion essay to the author’s “The Idea of Royal Empire and the Imperial Crown of England, 1542–1698”, published in the January 2019 edition of the JHI.
Into the twenty-first century, it has been a commonplace that Britain and its soon-to-become independent North American colonies diverged on ideological grounds (exacerbated, of course, by revolution and independence). This division appears dialectical, and has been expressed in multiple ways: a revolutionary Atlantic opposed to a conservative/loyalist Atlantic; or a republican America (sometimes restricted to the United States; other times encompassing many of the new nation-states in the Americas) against a monarchical Europe. These approaches can be found in Bernard Bailyn, J. C. D. Clark, and Carla Gardina Pestana (see The Ideological Origins of the American Revolution; The Language of Liberty; The English Atlantic in an Age of Revolution). Though ostensibly exorcised from the United States in the late eighteenth century, monarchy is far from a banished (or condemned) concept: Its imagery, symbolism, and constitutional vestiges (at least in the form of the executive branch as established in the United States Constitution) persist in American popular and political culture. Despite the apparent ideological divide, American ambivalence about monarchy has been a recurrent feature throughout colonial and national history. This companion piece to my recent article in the Journal of the History of Ideas will focus on showing the continuities in thinking about monarchy from what we might consider the monarchical culture of the Anglo-American world of the seventeenth and eighteenth centuries – that is, a political community throughout the Atlantic that referred to itself as the British Empire and included Britain and its overseas colonies – to an American society that has often identified itself as republican and modern.
Two strands of recent scholarship have reoriented our understandings of the Crown’s legal role in governing the American colonies and colonists’ relationship to their monarchs. Work on the seventeenth century has shown that the Crown had an important supervisory legal role and a symbolic role for its American subjects – inclusive of both Europeans and indigenous peoples (Ken MacMillan, Sovereignty and Possession in the English New World; Jenny Pulsipher, Subjects unto the Same King). Brendan McConville has argued for the continued importance of the monarchy in American culture after 1688 up to 1776, and argued against the importance of republican ideas during this period. According to his study, colonial America was enthusiastic and steadfast in its support of the monarchy, and this was only sundered by competing visions of the king (The King’s Three Faces). Eric Nelson has extended this line of thinking to the Revolution and composition of the Constitution; he argues that royalist ideas were influential in the opposition to King George III and the push for a strong executive in the early U.S. (The Royalist Revolution). These recent studies have led to a royalist revival in scholarship on the monarchy’s place in American legal thought during the seventeenth century and its cultural and intellectual heritage in colonial America and the early United States (for a study of continuing British cultural influence in the U.S. during the early nineteenth century, see Elisa Tamarkin, Anglophilia: Deference, Devotion, and Antebellum America).
My recent article seeks to link this recent scholarship that emphasizes the importance of monarchy in colonial America to the intellectual history of the British Empire. The eighteenth century is viewed as a key period for the development of identity (Linda Colley, Britons) and ideology (David Armitage, The Ideological Origins of the British Empire) within the British Empire. This approach fits in with earlier scholarship on this imperial entity as one that was acquired in “a fit of absence of mind” during the sixteenth and seventeenth centuries and that was only recognized as an expansive political community by eighteenth-century contemporaries (see J. R. Seeley, The Expansion of England; C. H. Firth, “The British Empire,” pp. 185-89; Richard Koebner, Empire). My article repositions the argument by emphasizing an intellectual means for seventeenth-century subjects of the English (sometimes considered “British”) to view themselves as members of a political community, at times referred to as a “royal empire,” that spanned the Atlantic possessions of the Stuart monarchs (and sometimes extended into Africa and Asia). Much as the scholarship on colonial America has discovered for eighteenth-century colonial populations, allegiance to the Crown was a way to politically identify oneself with others across vast geographies in the seventeenth century – and, as Steven Ellis has argued, even between Ireland and England in the fifteenth and sixteenth centuries (Steven G. Ellis, “Crown, Community and Government in the English Territories, 1450-1575,” pp. 187-204). With this in mind, my article suggests that the classical formulation of the British Empire as protestant, commercial, maritime, and free should be amended to include that it was royalist.
Monarchy was integral to the ideological origins of the British Empire and a vital cultural and intellectual force in colonial America. It has also had afterlives in the republican United States. There is, of course, a continuing fascination in popular culture with kings and queens: One needs only to think of the anointing of one of the best basketball players of our generation – LeBron James – as “King James”; the adoption of a monarchical gimmick and kingly imagery by the professional wrestler Triple H from 2006 into the present; the continuing spate of films centered on monarchs such as Elizabeth (1998), King Arthur (2004), The King’s Speech (2010), and Mary Queen of Scots (2018); and the interest in the details of each and every royal wedding.
Current political discourse retains several features of the seventeenth- and eighteenth-century debates over the benefits and potential pitfalls of monarchy. One such fear was that of an overmighty executive who could corrupt the Constitution and make slaves of citizens. The trend of increasing executive power in the United States has long been recognized by scholars, receiving academic attention since the 1960s and its own nomenclature, “The Imperial Presidency” (for the classical articulation of this term, see Arthur M. Schlesinger, Jr., The Imperial Presidency). The accusation of accumulating power is leveled against both Democratic and Republican presidents, though charges of executive tyranny and unconstitutional exercise of power often take a partisan tone. In any case, the increasing power exercised by the American executive has prompted comparisons to monarchy – sometimes arguing that the US has long been an “elected monarchy” without a literal crown and title (David Cannadine, “A Point of View: Is the US President an Elected Monarch?”); other times appropriating the language of the seventeenth and eighteenth centuries to claim that the President has become an absolute monarch and a tyrant, thus affirming the fears of the Anglo-American world of living in a corrupt society (see David Armitage, “Trump and the Return of Divine Right”; for an example of the potential for increasing executive power in a British political context, see Thomas Poole, “The Executive Power Project”).
However, there are also current arguments for the inherent stability and effectiveness of monarchies. In a New York Times op-ed from 2016, Count Nikolai Tolstoy argued for the creation of a monarchy in America on the current Canadian model, contending that “democracy is perfectly compatible with constitutional monarchy” (“Consider a Monarchy, America”). Tolstoy is the Chancellor of the International Monarchist League, which seeks to “support the principle of Monarchy.” In the United States, the Center for the Study of Monarchy, Traditional Governance, and Sovereignty was formed to bolster the position and study of monarchies. Monarchists have gone so far as to argue that a monarchical system of government, if instituted in the United States, “would be not just a salve for a superpower in political turmoil, but also a stabilizing force for the world at large,” and point to a study showing that, according to economic measures, monarchies outperform other forms of government (“What’s the Cure for Ailing Nations? More Kings and Queens, Monarchists Say”). One response to a certain 2016 presidential campaign was to “Make America Great Britain Again” and re-admit the British monarch as the sovereign of the United States – one is left to wonder how flippant this slogan was meant to be. Indeed, one columnist at the New York Times has argued that Americans have been prone to “clamoring for a king” (Ross Douthat, “Give Us a King!”).
It seems unlikely that any American head of state will wear a crown anytime soon, but it is also difficult to deny the continuation of royal culture in the United States and the increasing relevance of monarchical rhetoric and discourses when discussing its leaders and the state itself; think of our references to the Clintons and Bushes as “dynasties,” the Trump administration being composed of “courtiers,” and the resurgence of scholarship that discusses America in the context of its being an empire (see especially A. G. Hopkins, American Empire: A Global History). Taken together with the recent scholarship that excavates the importance of monarchy in colonial America, my article attempts to develop the intellectual history of monarchy in the English-speaking world and provide this concept with a more nuanced etiology.
The author wishes to thank Mary Bates, Christine McLeod, and Derek O’Leary for reading earlier drafts of this piece and for their suggestions and comments.
Zach Bates is a Ph.D. candidate at the University of Calgary. His current dissertation project is a study of the political thought of Scottish colonial administrators in the Atlantic British Empire from 1710 to 1770. He has been awarded fellowships at the New-York Historical Society, the Library Company of Philadelphia, and the Huntington Library. In addition to his work on the intellectual history of the seventeenth- and eighteenth-century British Empire, he also has an upcoming article on the Sudan in British film during the first half of the twentieth century. You can reach him via email: email@example.com.
Our contributing editor Disha Karnad Jani introduces her interview with Prof. Jennifer Pitts (University of Chicago), focusing on her recent book Boundaries of the International: Law and Empire (Cambridge: Harvard University Press, 2018).
By guest contributor Robert Koch
After two world wars, the financial and ideological underpinnings of European colonial domination in the world were bankrupt. Yet European governments responded to aspirations for national self-determination with undefined promises of eventual decolonization. Guerrilla insurgencies backed by clandestine organizations were one result. By 1954, new nation-states in China, North Korea, and North Vietnam had adopted socialist development models, perturbing the Cold War’s balance of power. Western leaders turned to counterinsurgency (COIN) to confront national liberation movements. In doing so, they recast the motives that had driven colonization as a defense of their domination over faraway nations.
COIN is a type of military campaign designed to maintain social control, or “the unconditional support of the people,” while destroying clandestine organizations that use local populations as camouflage, thus sustaining political power (Trinquier, Modern Warfare, 8). It is characterized by a different mission set than conventional warfare: operations typically occur amidst civilian populations. Simply carpet-bombing cities (or even rural areas, as in the Vietnam War) over an extended period results in heavy collateral damage that strips governments of popular support and, eventually, political power. The more covert, surgical nature of COIN means that careful justifying rhetoric can still be called upon to mitigate the ensuing political damage.
Vietnam was central to the saga of decolonization. The Viet Minh, communist cadres leading peasant guerrillas, won popular support to defeat France in the First Indochina War (1945-1954) and the United States in the Second (1955-1975), consolidating their nation-state. French leaders, already sour from defeat in World War II, took their loss in Indochina poorly. Some among them saw it as the onset of a global struggle against communism (Paret, French Revolutionary Warfare, 25-29; Horne, Savage War for Peace, 168; Evans, Algeria: France’s Undeclared War, Part 2, 38-39). Despite Vietnam’s centrality, it was in “France” – that is, colonial French Algeria – that ideological significance was given to the tactical procedures of COIN. The French colonel Roger Trinquier added this component while fighting for the French “forces of order” in the Algerian War (1954-1962) (Trinquier, Modern Warfare, 19). Trinquier’s ideological contribution linked the West’s “civilizing mission” with enduring imperialism.
In his 1961 thesis on COIN, Modern Warfare, Trinquier offered moral justification for harsh military applications of strict social control, a job typically reserved for police, and therefore for the subsequent violence. The associated use of propaganda characterized by a dichotomizing rhetoric to mitigate political fallout proved a useful addition to the counterinsurgent’s repertoire. This book, essentially providing a modern imperialist justification for military violence, was translated into English, Spanish, and Portuguese, and remains popular among Western militaries.
Trinquier’s experiences before Algeria influenced his theorizing. In 1934, as a lieutenant in Chi-Ma, Vietnam, he learned the significance of local support while pursuing opium smugglers in the region known as “One Hundred Thousand Mountains” (Bernard Fall in Trinquier, Modern Warfare, x). After the Viet Minh began their liberation struggle, Trinquier led the “Berets Rouges” Colonial Parachutists Battalion in counterguerrilla operations. He later commanded the Composite Airborne Commando Group (GCMA), executing guerrilla operations in zones under Viet Minh control. This French-led indigenous force grew to include 20,000 maquis (rural guerrillas) and had a profound impact on the war (Trinquier, Indochina Underground, 167). Though France would lose its colony, Trinquier had learned effective techniques for countering clandestine enemies.
Upon evacuating Indochina in 1954, France immediately deployed its paratroopers to fight a nationalist liberation insurgency mobilizing in Algeria. Determined to avoid another loss, Trinquier (among others) sought to apply the lessons of Indochina against the Algerian guerrillas’ Front de Libération Nationale (FLN). He argued that conventional war, which emphasized controlling strategic terrain, had been supplanted. Adjusting to “modern warfare,” Trinquier believed, required four key reconceptualizations: a new battlefield, a new enemy, new ways of fighting that enemy, and a new understanding of the repercussions of failure. Warfare, he contended, had become “an interlocking system of action – political, economic, psychological, military,” and the people themselves were now the battleground (Trinquier, Modern Warfare, 6-8).
Trinquier prioritized winning popular support, and to achieve this he blurred insurgent motivations by lumping guerrillas under the umbrella term “terrorist.” Linking the FLN to a global conspiracy guided by Moscow was helpful in the Cold War, and a frequent claim in the French military, but this gimmick was of secondary importance to Trinquier. When he did mention communism, it was not as the guerrillas’ guiding light but in the sense that communist parties, many of which publicly advocated democratic means to political power, had been compromised. The FLN was mainly a nationalist organization that shunned communists, especially in leadership positions, something Trinquier would have known as a military intelligence chief (Horne, Savage War for Peace, 138, 405). Although he accepted the claim that the FLN was communist, he used the word “communist” only four times in Modern Warfare (Trinquier, Modern Warfare, 13, 24, 59, 98). The true threat was the “terrorist,” a term used thirty times (Trinquier, Modern Warfare, 8, 14, 16-25, 27, 29, 34, 36, 43-5, 47-49, 52, 56, 62, 70, 72, 80, 100, 103-104, 109). The FLN did terrorize Muslims to compel support (Evans, Algeria: France’s Undeclared War, Part 2, 30). Yet obscuring the FLN’s cause by labeling its members terrorists complicated consideration of their more relatable aspirations for self-determination. Even “atheist communists” acted in hopes of improving the human condition. The terrorist was different: no civilized person could support the terrorist.
Trinquier’s careful wording reflects his strategic approach and gives COIN rhetoric greater adaptability. His problem was not any particular ideology, but “terrorists.” Conversely, he called counterinsurgents the “forces of order” twenty times (Trinquier, Modern Warfare, 19, 21, 32-33, 35, 38, 44, 48, 50, 52, 58, 66, 71, 73, 87, 100). A dichotomy was created: people could choose terror or order. Having crafted an effective dichotomy, Trinquier addressed the stakes of “modern warfare.”
The counterinsurgent’s mission was no less than the defense of civilization. Failure to adapt as required, however distasteful it might feel, would mean “sacrificing defenseless populations to unscrupulous enemies” (Trinquier, Modern Warfare, 5). Trinquier evoked the Battle of Agincourt in 1415 to demonstrate the consequences of such a failure: the French knights were dealt a crushing defeat after taking a moral stand and refusing to sink to the level of the English and their unchivalrous longbows. He concluded that if “our army refused to employ all the weapons of modern warfare… national independence, the civilization we hold dear, our very freedom would probably perish” (Trinquier, Modern Warfare, 115). His “weapons” included torture, death squads, and the secret disposal of bodies – “dirty war” tactics that hardly represent “civilization” (Aussaresses, Battle of the Casbah, 21-22; YouTube, “Escuadrones de la muerte. La escuela francesa,” 8:38-9:38). Trinquier was honest and consistent about this, defending dirty-war tactics years afterward on public television (YouTube, “Colonel Roger Trinquier et Yacef Saadi sur la bataille d’Alger. 12 06 1970”). Momentary lapses of civility were acceptable if they meant defending civilization, whether by adopting the enemy’s longbow or the terrorist’s methods, to even the dynamics of “modern warfare.”
Trinquier’s true aim was preserving colonial domination, which had always rested on the possession of superior martial power. In order to blur distinctions between nationalists and communists, he linked any insurgency to a Soviet plot. Trinquier warned of the loss of individual freedom and political independence: the West, he claimed, was being slowly absorbed by socialist – that is, terrorist – insurgencies, and Western civilization would be doomed if it did not act against the monolithic threat. His dichotomy justified using any means to achieve the true end, sustaining local power. It was also exportable.
Trinquier’s reconfiguration of imperialist logic gave the phenomenon of imperialism new life. Its intellectual genealogy stretches back to the French mission civilisatrice. In the Age of Empire (1850-1914), European colonialism violently subjugated millions while claiming European tutelage could tame and civilize “savages” and “semi-savages.” During postwar decolonization, fresh off defeat in Indochina and facing the FLN, Trinquier modified this justification. The “civilizing” mission of “helping” became a defense of (lands taken by) the “civilized,” while insurgencies epitomized indigenous “savagery.”
The vagueness Trinquier ascribed to the “terrorist” enemy and his rearticulation of imperialist logic had unforeseeable longevity. What are “terrorists” in the postcolonial world but “savages” with modern weapons? His dichotomizing polemic continues to be useful to justify COIN, the enforcer of Western imperialism. This is evident in Iraq and Afghanistan, two countries that rejected Western demands and were subsequently invaded, as well as COIN operations in the Philippines and across Africa, places more peripheral to the public’s attention. Western counterinsurgents almost invariably hunt “terrorists” in a de facto defense of the “civilized.” We must carefully consider how rhetoric is used to justify violence, and perhaps how this logic shapes the kinds of violence employed. Trinquier’s ideas and name remain in the US Army’s COIN manual, linking US efforts to the imperialist ambitions behind the mission civilisatrice (US Army, “Annotated Bibliography,” Field Manual 3-24 Counterinsurgency, 2).
Robert Koch is a Ph.D. candidate in history at the University of South Florida.
By guest contributor Professor Sumit Guha
This essay addresses the shifting connection between signifier and signified, word and thing, by looking at the history of an important yet protean sociological term: ‘tribe’. My argument is that ‘tribe’ is a ‘fossil word’ whose content has been replaced over the centuries, just as a fossil’s soft tissue is replaced by mineral compounds. In the process, it has changed to conform with the soil in which it has lain. It is these shifts and their underlying discourses that I want to present in this post. In South Asia, the term was in recent centuries permeated by traits based on the race theories of imperial-age Europe. It has therefore acquired a different connotation in the Republic of India than anywhere else. I begin, however, with its renewed presence in American public discourse.
Tribes are in vogue (and even in Vogue) nowadays. Their primordial existence is invoked by journalists and academics to explain mass behavior today. This is of course not new: in the age of modernization theory (after World War II), ‘tribalism’ was frequently invoked to explain the political behavior of ‘not yet modern’ (read: non-Western) peoples. Modernization would dissolve all that. John Locke wrote that in the beginning, all was as America; Francis Fukuyama, that in the end all would be. History has proved more complex, and back in the USA, ‘tribalism’ is the word of the day. Nations disintegrate and states fail, and ‘tribalism’ has been frequently invoked ever since the U.S. election of 2016 to explain political behavior.
I will show how the same word has come to have distinct referents within and outside the modern Republic of India. As an exercise in the history of ideas, I will look at how and why this came about by bringing to light the racial theories in which the Indian understanding originated.
As is well known, tribus is an old Latin noun, originally applied to the divisions of the people by the ancient Romans. It has been suggested that it was originally a compound meaning ‘the three peoples’ or ‘three orders’, though the number of Roman tribes later increased to over thirty. The word entered many European languages during the medieval period. In English we find it in a 1752 thesaurus, and it was used (from Hebrew) as a name for the twelve divisions of the people of Israel as described in Exodus and elsewhere.
After 1510, the Portuguese controlled all sea-borne European access to the Indian Ocean for almost a century, and Asian ships largely sailed under the stringent conditions that came with Portuguese permits. The Portuguese arrived at the beginning of the print revolution in the West, so Western knowledge of Asia was mediated through Portuguese (and, for the learned, Latin). Portuguese became a lingua franca around the littoral: the British commander Robert Clive addressed his Indian soldiery in that language in 1757. The social category of tribe (tribus) was, however, little used by the Portuguese despite their Latin heritage. Communities were usually called by other, looser names, among them ‘casta’ – a sociological term common to both the Portuguese and Spanish empires. Such terms referred to ‘people’ in the loosest sense of the word.
But the word ‘tribe’ was employed early by the major South Asian colonial power, Britain. That usage likely came from literate Protestants’ familiarity with the English Bible, but the word was still loosely used to mean an ethno-political grouping. We therefore find great nomenclatural confusion in early English documents. The English East India Company took over the formerly Portuguese-ruled island of Bombay in 1665, and a few years later the governor wrote to his superiors that there were several different ‘nations’ (also described as ‘orders or tribes’) inhabiting the island. (I have modernized his orthography by expanding abbreviations; all emphases added.)
[I]n order to preserve the Govern[ment] in constant regular method, free from that confusion which a body composed of so many nations will be subject to, it were requisite [that] [the] severall nations at pres[ent] inhabiting or hereafter to inhabit on the Island of Bombay be reduced or modelled into so many orders or tribes, & that each nation may have a Cheif (sic) or Consull of the same nation appointed over them by the Gover[nor] and Councell…
The distinctive twentieth-century anthropological use of ‘tribe’ and ‘caste’ was still absent as late as the 1820s. For example, in 1823, Thomas Marshall, reporting to the Government of Bombay, wrote of one district: ‘the Weavers are either of the tribe of Lingayut [a religious community] or of another Kanaree tribe called Hutgur …’ (p. 18). Today both of these would be classified as ‘castes’. He went on: ‘the tribe of Bunyas [a generic term for all Hindu and Jain merchant castes] educated to reading and accompts being unknown here …’ (p. 24).
To add to the confusion, any descent group could also be labeled a ‘race’ – so Marshall writes of ‘a respectable Mahratta [today both an ethnonym and a caste-name] (to which race the institution is confined) …’ (p. 83). From North India in the same period we find two Muslim communities, self-classified as Sheikh [Arabic ‘chief’; used in India by many Muslims as a status label] and Sayyad [descendant of the Prophet], referred to as races: ‘The village is divided into two [sections], corresponding with the two races (sic) by which it is occupied …’ This ethnographic looseness had, however, little administrative effect. (I have discussed English nineteenth-century usage of ‘race’ elsewhere.) Its practical unimportance is why it was allowed to persist. But some officials realized that tribal organization could frame political life in some parts of Southern Asia. Once social categories became administrative ones, it became necessary to label clearly and describe exactly.
Mountstuart Elphinstone, one of Marshall’s superior officers in Bombay (and later Governor there), realized the sociopolitical importance of sociological clarity during his visit to what was then eastern Afghanistan in 1808-10. (Shah Shuja refused him permission to go further than Peshawar, but he indefatigably interviewed travelers, merchants, Afghan immigrants into India, and others when compiling his account. I have discussed Elphinstone in a recent publication.) He noted the centrality of ‘tribal’ organization to the political life of the region, and this led him to try to label it accurately. So he wrote (all emphases added):
I beg my readers to remark, that hereafter, when I speak of the great divisions of the Afghauns, I shall call them tribes; and when the component parts of a tribe are mentioned with reference to the tribe, I shall call the first divisions clans: those which compose a clan, Khails, &c., as above. But when I am treating of one of those divisions as an independent body, I shall call it Oolooss, and its component parts clans, khails, &c., according to the relation they bear to the Oolooss, as if the latter were a tribe. Khail is a corruption of the Arabic word Khyle, a band or assemblage; and Zye, so often affixed to the names of tribes, clans, and families, means son, and is added as Mac is prefixed by the [Scots] Highlanders.
(Oolooss/ulus was a term popularized across the thirteenth-century Mongol Empire to refer to an aggregate of tribes and their grazing domain.)
Elphinstone wrote of the founder-king Ahmed Shah Durrani (ruled 1747-1772) that he had been wise enough to know that it would require less exertion to conquer all the neighboring countries than to subdue the tribes (i.e., Elphinstone’s ‘great divisions’) of Afghanistan. Ahmed Shah founded a state that, while designated a monarchy, functioned as a confederation of tribes and chiefdoms (khanates) held together by the Shah’s redistribution of tribute and plunder from the region extending from western Uttar Pradesh to the Khyber Pass. In the nineteenth century, British and Russian arms and subsidies enabled the kings to acquire more authority. But that broke down with the overthrow of the monarchy, and then of all settled government, in the late 1970s and 1980s.
Over much of the world, therefore, “tribe” has generally referred to strong organizations, often possessing much latent military power. Nor should we view this as a situation confined to specific regions of West Asia or North Africa. Decades ago, Owen Lattimore provocatively suggested that from their beginnings, civilizations gave birth to the barbarian tribes that they then sought to subdue or to exclude, and by whom they were periodically conquered. Tribes (he argued) emerged out of simple bands of foragers or farmers because emerging empires preyed upon small, unorganized communities for slaves, encroached on their territories, or sought to incorporate them as servile peasants.
Tribes might also take shape out of conquering war-bands. The area northeast of Delhi, for example (now known as Rohilkhand), lost its medieval name (Katehar) in the eighteenth century as warrior-bands called Rohilla displaced the local Katehriya Rajput gentry that had tenaciously resisted the power of Delhi from the thirteenth century. The Rohillas, in turn, maintained the local tradition of resistance to central power well into the British colonial period. But there was no original Rohilla tribe in Afghanistan. As Jos Gommans has demonstrated in his contribution to the volume for D.H.A. Kolff, the early Indian Rohillas had themselves assembled a ‘tribal’ community out of various Afghan war bands and miscellaneous local slaves drawn into the following of Daud Khan, a horse-trader turned soldier of fortune turned tribal chieftain.
‘Tribe’ is an important category in modern South Asia, especially in the Republic of India. Nandini Sundar, an important Indian academic, recently published a valuable edited volume on the 100 million-plus people classed as members of ‘Scheduled Tribes’ in India. This part of the post looks only at the origins of this term and the evolution of its unusual usage in India, one so distinct from that in other parts of the Old World.
The Republic has followed a late colonial classification, one that began by defining ‘tribes’ as ‘aborigines’. This strain of thought originated in late colonial times and derived from the now-abandoned effort to apply geological models of stratification to contemporary social organization. We find it exemplified in the work of the missionary ethnographer John Wilson. It was transferred to the hilly forests of Central India by the British official Charles Grant, who also incorporated the emerging “Nordic” and “Aryan race” theory.
The rise of nationalism and its critiques of colonial rule increasingly irked the British imperial government. One ideological response was to argue that various Indian communities needed the benevolent protection of the Empire against the oppressions of other Indians, especially those critical of the Empire. This discourse naturally gravitated to the communities already declared to be ‘aborigines’. By the early twentieth century this led to special protection for such peoples, and thus began the movement towards a conception of tribes as composed of simple, timid, and primitive peoples whose special traits made them deserving of protection by the British government of India. As I have written elsewhere, the government wanted to counter nationalist agitations by presenting itself as the protector of the simple aborigines against oppression by their fellow Indians. One effect of this was the creation of ‘excluded areas’ beyond the inner frontiers, where ordinary law and civil process did not operate and the executive had wide powers. The separate existence of such areas was gradually terminated after Indian independence, but special measures of affirmative action were enshrined in the new Constitution (1950).
However, the defining traits of ‘tribal’ communities were taken from colonial ethnography. India’s tribes have thus been officially defined, since at least the 1960s, in ways detailed in Sundar’s introduction to her book. Megan Moodie recently pointed out that “shyness” was an important identifying trait. This has had real-world consequences for communities seeking inclusion in the category. For example, the failure of the Gurjar or Gujar community of northern India to display the necessary traits led a judicial commission appointed by the Government of Rajasthan to deny them entry into the list of Scheduled Tribes for that state. That commission reiterated the conclusions of an earlier report submitted to the government on 20 August 1981, which had said of them: “They are fairly well-off and suffer from no shyness of contact with people of other castes. Also, they do not have any primitive traits (for them) to be considered for inclusion in [the Scheduled Tribes]”.
Thus the same sociological term has radically different meanings. A few hundred miles west of the Indian border, adjoining Afghanistan, lies what was until recently known as Pakistan’s Federally Administered Tribal Areas (FATA). Its residents have never been known for shyness or timidity – indeed, quite the opposite. It is here that we find the direct descendants of the ‘tribes’ that Elphinstone observed in 1810. Two hundred years later, they are still political and military communities that play a major part in the public life of Afghanistan and, to a lesser degree, of Pakistan.
Sumit Guha is Professor of History at The University of Texas at Austin. He is indebted to Derek O’Leary for two careful readings and many good suggestions.