Think pieces

Amnesty International and conscientious objection in Australia’s Vietnam War

by guest contributor Jon Piccini.

Human rights are now the dominant language of political claim-making for activists of nearly any stripe. Groups who previously looked to the state as a progressive institution conferring rights and duties now seek solace, in our (at least until recently) post-national world, in global protections and norms – a movement ‘from the politics of the state to the morality of the globe’, as Samuel Moyn puts it.[1]

Yet a long history of contestation and negotiation over human rights’ meaning belies the term’s now seemingly unchallengeable global salience. What constituted a ‘right’, who could claim rights, and what relation rights claiming had to the nation state are long and enduring questions. I want to explore these questions by focusing on the role that Amnesty International – a then struggling outfit employing a new, inventive form of human rights activism – played in campaigning against conscription in Australia during the 1960s. While a collective politics of mutual solidarity and democratic citizenship predominated in what was dubbed the ‘draft resistance’ movement, Australian Amnesty members worked to have conscientious objectors recognised as ‘Prisoners of Conscience’ and adopted by groups around the world.

Founded in London in 1961, Amnesty struggled in its early years to stay afloat. By 1966, “The organization’s credibility was severely damaged by publicity surrounding its links to the British government and strife among the leadership”, as Jan Eckel puts it, and such problems were reflected in Australia.[2] Amnesty’s arrival in Australia was ad hoc: from 1964 onwards groups began emerging in different states, mainly New South Wales (NSW) and Victoria, meaning that Australia stood out as the only country without a national Amnesty section; it had instead multiple state-based groups, each struggling with finances and a small membership.

I will argue that relating to the draft resistance movement actually posed many problems for Amnesty members. While for some members conscription was a clear-cut violation of the Universal Declaration of Human Rights (UDHR), Australia’s two key state sections – NSW and Victoria – came to widely divergent interpretations of what constituted a prisoner of conscience, and what duties citizens had to the state: debates which made their way to the organisation’s centre in Europe. These illustrate how human rights had far from settled meanings in the 1960s, even for their adherents, and point towards the importance of local actors in understanding intellectual history.

Australia (re)introduced conscription for overseas service in 1964, with the conservative Coalition government fearful of a threatening Asia.[3] Troops, including conscripts, were committed to the growing conflict in Vietnam a year later. While the scheme was initially popular, opposition to conscription began growing from 1966, when Sydney schoolteacher William ‘Bill’ White was jailed after his claim for conscientious objector status was rejected. White and other objectors were not “conscientiously” opposed to war in general, but held what the responsible minister labelled a “political” opposition to the Vietnam War, and as such did not meet strict legal guidelines.[4]

Bringing those believed to be ‘prisoners of conscience’ to light initially united both the New South Wales and Victorian sections. The Victorian section released a statement in support of White’s actions: “we feel it impossible…to doubt the sincerity of his convictions and are gravely concerned at the prospect of his continued detention under the provisions of military law”. Given that “the grounds for an appeal to the Government on White’s behalf based on the sanctity of the individual conscience are substantial”, the section recommended White’s case to AI’s London office “for appropriate action”.[5]

The New South Wales section expressed near-identical sentiments, reporting in August 1966 that “Conscription had been the overriding issue in much of our new work” and pointing to its transnational nature, with the section collecting material on Australian cases while campaigning for the release of conscientious objectors in the USA and East Germany: “the predicament of Bill White is shared by young men all over the world”.[6] White’s public statement of conscientious objection, reproduced in the NSW section’s newsletter, spoke of rights as “unalterable” and inhering in a person rather than being a “concession given by a government”, and as such these were “not something which the government has the right to take”.[7]

White’s release in December 1966 came before AI could adopt his cause internationally, but more objectors soon followed. What became problematic, however, was when the politics of conscientious objection moved to one of outright refusal – non-compliance with the laws of the land. Unlike White, part-time postman John Zarb did not seek conscientious objector status but refused to register for military service altogether. His October 1968 jailing saw “Free Zarb” become a rallying cry for the anti-war movement: it was seen as representing the futility and double standards synonymous with the Vietnam War. As one activist leaflet put it: “In Australia – it is a crime not to kill”.[8] AI NSW section member Robert V Horn described in a long memorandum to London, written in late 1968 and sent after internal discussion some six months later, how “Conscription and Vietnam have become inter-mixed in public debate, and in contemporary style outbursts of demonstrations, protest marches, draft card burnings [and] sit-ins”.[9]

Zarb’s case was, however, nowhere near as clear-cut for Amnesty members as White’s had been. Horn observed that while “one might guess that many [AI] members are opposed to Australia’s participation in the Vietnam War”, these individuals held “many shades of views”, particularly around the acceptability of law breaking.[10] Horn circulated a draft report on the situation in Australia that he had prepared for AI’s London headquarters to other AI members within his section and in Victoria; the reactions demonstrate just how divisive the issue of conscientious objectors and non-compliers was for an organisation deeply wedded to due legal process. David McKenna, in charge of the Victorian section’s conscientious objection work, put this distinction quite clearly – arguing that those who “register for national service and apply for exemption”, but whose “applications fail either through some apparent miscarriage of justice or because the law does not presently encompass their objections…are prima facie eligible for adoption” as prisoners of conscience.[11]

However, those who “basically refuse to co-operate with the National Service Act” merely “maintain a right to disobey a law which they believe to be immoral”—and as such were not a concern for AI. McKenna here makes use of a similar typology to the Minister for National Service, casting refusal as a “purely political stand” as opposed to a “moral objection to conscription” pursued through the legal system. McKenna brought the UDHR to his defence, noting that under article 29(2) “freedom of conscience is not an absolute, nor is freedom to disobey in a democratic society”.[12] Concerns were raised about “to what extent we uphold disobedience to the law by adopting such persons”, noting that AI had chosen not to adopt prisoners “who refuse obedience to laws [such as] in South Africa or Portugal”, referencing recent debates regarding the adoption of prisoners who had advocated violence. Taking on prisoners who refused to obey laws not only opened the road to similar “freedom to disobey” claims – “are we to adopt people who refuse to have a T.B. X-ray on grounds of conscience” – but McKenna also feared that in taking “such a radical step…our high repute would be seriously damaged”.[13]

Horn and others in the NSW section “decr[ied] such legalistic interpretation” – “the Non-Complier in gaol for conscientiously held and non-violently expressed views suffers no less than the [Conscientious Objector] who has tried in vain to act ‘according to the law’”.[14] While at first divisions on this issue ran across and between sections, by late 1969 the Victorian section had solidly decided “that non-compliers should not be adopted”, and sent a memorandum to London to this effect in preparation for the 1970 AI Executive Meeting, to be held in Stockholm.[15] The position of the NSW section was equally clear, expressed in a resolution adopted during ‘prisoner of conscience week’ in November 1969 requesting that Amnesty and the UN General Assembly adopt “firm restraints upon legal and political repression of conscience”. “[T]he expression of honest opinions regarding matters of economics, politics, morality, religion or race is not a good and sufficient reason” to justify imprisonment of a person, the section petitioned, and “no person should be penalised for refusing to obey a law…which infringes the principles here set forth”.[16] The Stockholm gathering backed the NSW section’s views, with the Victorian section wondering whether the meeting’s location and the strength of the Swedish section – “who have the same problem as Australia and have come to the opposite view” – swayed results.[17]

This small case study provides insights into how the idea of human rights has been contested over time. Australia’s two Amnesty sections – not amalgamated until the late 1970s – developed polar opposite views on whether law breakers could be beneficiaries of Amnesty’s human rights activism. This arguably came down to a fundamental opposition in how the two groups conceptualised human rights: as global and inhering in the person, and as such not requiring compliance with the laws of the nation state, or as the product of citizenship, which conferred rights and imposed duties on a subject. The AI Executive Council’s decision to stand on the side of the individual’s inalienable rights also provides a pre-history of how human rights moved from their 1960s meanings – best exemplified by the 1968 Tehran Declaration’s deep wedding to the state – to a ‘rebirth’ in the 1970s as a global set of enforceable norms against states – a history that can be fruitfully explored at both the global and local levels.

Jon Piccini is a Postdoctoral Development Fellow at the University of Queensland, where he is working on a book provisionally titled Human Rights: An Australian History. His most recent book, Transnational Protest, Australia and the 1960s, appeared in 2016 with Palgrave. 

[1] Samuel Moyn, The Last Utopia: Human Rights in History (Cambridge, Mass.: Harvard University Press, 2010), 43.

[2] Jan Eckel, “The International League for the Rights of Man, Amnesty International, and the Changing Fate of Human Rights Activism from the 1940s through the 1970s”, Humanity 4, no. 2 (Spring 2013), 183.

[3] Australia’s two main conservative forces, the Liberal Party and what was in the 1960s the Country Party (now the National Party), operate as a coalition in federal elections.

[4] Leslie Bury MP to Lincoln Oppenheimer, 31 March 1966, reproduced in Amnesty News 21 (May 1969), 3-4.

[5] “Statement from the Victorian Section of Amnesty International. Bill White Case”, Amnesty Bulletin 16 (November 1966).

[6] Lincoln Oppenheimer, “President’s Report”, Amnesty News 10 (August 1966), 3.

[7] “Copy of Statement by Mr W. White, Sydney Schoolteacher and Conscientious Objector”, Amnesty News 10 (August 1966), 2-3.

[8] “Australia’s Political Prisoner”, Undated leaflet, State Library of South Australia.

[9] Robert V Horn, Untitled Report on conscientious objection and noncompliance in Australia, Robert V Horn Papers, MLMSS 8123, Box 33, SLNSW.

[10] Ibid.

[11] David McKenna to Robert V Horn, 2 March 1969, Robert V Horn Papers, MLMSS 8123, Box 33, SLNSW.

[12] Ibid.

[13] Ibid.

[14] Horn, Untitled Report.

[15] David McKenna to Robert V Horn, 19 February 1970, Robert V Horn Papers, MLMSS 8123, Box 33, SLNSW.

[16] “RESOLUTION – Prisoner of Conscience Week, November 1969”, Amnesty News 24 (February 1970), 15-16.

[17] “International Council”, Amnesty Bulletin 28 (October 1970), 4-5.

How Victory Day Became Russia’s Most Important Holiday

by guest contributor Agnieszka Smelkowska

At first, Russian TV surprises and disappoints with its conventional appearance. A mixture of entertainment and news competes for viewers’ attention, logos flash across the screen, and pundits shuffle their notes, ready to pounce on any topic. However, the tightly controlled news cycle, the flattering coverage of President Vladimir Putin, and a steady indignation over Ukrainian politics serve as reminders that not all is well. Reporters Without Borders, an international watchdog that annually ranks 180 states according to its freedom of press index, this year assigned Russia a dismal 148th position. While a number of independent print and digital outlets persevere, television has been largely brought under state control. And precisely because of these circumstances, television programming tends to reflect the priorities and concerns of the current administration. When an ankle sprain turned me into a reluctant consumer of state programming for nearly three weeks, I realized that despite various social and economic challenges, the Russian government remains preoccupied with the Soviet victory in WWII.

Tarkovsky’s Ivan’s Childhood (1962)—the child protagonist encounters the reality of war.

Celebrated on May 9th, Victory Day—or Den Pobedy (День Победы) as it is known in Russia—marks the official capitulation of Nazi Germany in 1945 and is traditionally celebrated with a military parade on Red Square. During the few weeks preceding the holiday, parade rehearsals regularly shut down parts of Moscow while normal programming gives way to a tapestry of war-related films. The former adds another challenge to navigating the already traffic-heavy city; the latter, however, provides a welcome opportunity to experience some of the most distinguished works of Soviet cinematography. Soviet directors, many veterans themselves, resisted simplistic war narratives and instead focused on capturing human stories against the historical background of violence. Films like Mikhail Kalatozov’s The Cranes Are Flying (1957), Andrei Tarkovsky’s Ivan’s Childhood (1962) or Elem Klimov’s Come and See (1985) are widely recognized for their emotional depth, maturity and uncompromising depiction of the consequences of war. Unfortunately, these Soviet classics share the silver screen with newer Russian productions in the form of Hollywood-style action flicks or heavy-handed propaganda pieces that barely graze the surface of the historical events they claim to depict.

The 2016 adaptation of the iconic Panfilovtsy story exemplifies the problematic handling of historical material. The movie is based on an article published in a war-time Soviet newspaper, which describes how a group of 28 soldiers from the division commanded by Ivan Panfilov distinguished itself during the defense of Moscow in late 1941. The poorly armed soldiers, who came from various Soviet republics, managed to disable eighteen German tanks but were all killed in the process. A 1948 investigation, prompted by the unexpected reappearance of some of these allegedly dead heroes, exposed the story as a journalistic exaggeration, designed to reassure and inspire the country with tales of bravery and ultimate sacrifice. Classified, the report remained unknown until 2015, when Sergei Mironenko, at the time director of the state archives, used its findings to push against the mythologization of Panfilov and his men, which he saw as a sign of the increasing politicization of the past. His action provoked a severe public scolding from Culture Minister Vladimir Medinsky that eventually cost Mironenko his job, while Panfilov’s 28 (2016) was shown during this year’s Victory Day celebration.

Panfilov's 28.jpeg

Panfilov’s 28 (2016): Soviet heroism in modern Russian cinematography.

Many western commentators have already noted the significance of Victory Day, including Neil MacFarquhar, who believes that President Putin intentionally turned it into the “most important holiday of the year.” The scale of celebration seems commensurate with this rhetorical status and provides an impressive background for a presidential address. The most recent parade consisted of approximately ten thousand soldiers and over a hundred military vehicles—from the venerable Soviet T-34 tank to the recently developed Tor missile system, which can perform in arctic conditions. Predictably, the Russian coverage differs from that presented in the western media. The stress does not fall on the parade or President Putin’s speech alone but extends to the subsequent march of veterans and their descendants, emphasizing the continuity between past and present. The broadcast of the celebration, which can be watched anywhere from Kaliningrad Oblast, adjacent to Poland, to Cape Dezhnev, only fifty miles east of Alaska, draws a connection between Russia’s current might and the Soviet victory in the war. Yet May 9th did not always hold its current status, and only gradually became the cornerstone of modern Russian identity.

While in 1945 Joseph Stalin insisted on celebrating the victory with a parade on Red Square, the holiday itself failed to take root in the Soviet calendar as the country strived for normalcy. The new leader, Nikita Khrushchev, discontinued some of the most punitive policies associated with Stalinism and promised his people peace, progress, and prosperity. The war receded into the background as the Soviet Union put a man into space while attempting to put every family into its own apartment. Only after twenty years was Victory Day officially reinstated by Leonid Brezhnev and observed with a moment of silence on state TV. Brezhnev also approved the creation of a new Moscow memorial to Soviet soldiers killed in the war—a sign that Soviet history had taken a more solemn turn.

Tomb of the Unknown Soldier, Aleksandrovsky Sad, Moscow – Russia, 2013. (Photo credit: Ana Paula Hirama/flickr)

Known as the Tomb of the Unknown Soldier (Могила Неизвестного Солдата), the memorial—a bronze sculpture of a soldier’s helmet resting on a war banner, with a hammer and sickle finial pointing towards the viewer—symbolizes the massive casualties of the Soviet Union, many of whom were never identified. Keeping an exact list of the dead was not always feasible as the Red Army fought the Wehrmacht forces for four years before taking Berlin in the spring of 1945. Historians estimate that the Soviet Union lost approximately twenty million citizens—the largest absolute (if not proportional) human loss of any state involved in WWII. For this reason, in Russia and many former Soviet republics the 1941–1945 war is properly known as the Great Patriotic War (Великая Отечественная Война).

The victory over Nazi Germany, earned with a remarkable national sacrifice, was a shining moment in otherwise troubled Soviet history and a logical choice for the Russian Federation, the successor state of the Soviet Union, to anchor its post-ideological identity. Yet the current Russian government, which carefully manages the celebration, cannot claim credit for the popularity that the day enjoys among regular people. Many Russian veterans welcome an opportunity to remember the victory, and their descendants come out of their own volition to celebrate their grandparents’ generation. This popular participation has underpinned the holiday since 1945. Before Brezhnev’s intervention, veterans would congregate informally and quietly to celebrate the victory and commemorate their fallen comrades. Today, as this war-time generation is leaving the historical stage, their children and grandchildren march across Moscow carrying portraits of their loved ones who fought in the war, forming what is known as the Immortal Regiment (Бессмертный полк). And the marches are increasingly spilling into other locations—both in Russia and worldwide. This year these processions took place in over fifty countries with a significant Russian diaspora, including in Western Europe and North America.


The Immortal Regiment in London, 2017. (Photo credit: Gerry Popplestone/flickr)

This very personal, emotional dimension of Victory Day has often been overlooked in western coverage, which reduces the event to a sinister political theater and a manifestation of military strength. The holiday is used to generate a new brand of modern Russian patriotism precisely because it already resonates with the Russian public. People march to uphold the memory of their relatives regardless of their feelings towards the current administration, views on the annexation of Crimea, or attitude towards NATO. Although this level of filial piety can be manipulated, my Russian friends seem to understand when the government tries to capitalize on these feelings. Mikhail, my Airbnb host, who belongs to the new Russian middle class and who a few years ago carried a portrait of his grandfather during the Victory Day celebration, remarked that the government attached itself like a “parasite” to the Immortal Regiment phenomenon because of its popularity. The recent clash over the veracity of the Panfilovtsy story has also given many Russians a more nuanced understanding of their history, even as some enjoyed the movie’s action sequences. Additionally, the 1948 investigative report that Mironenko had posted online remains accessible on the website of the archive.

At the same time, many Russians are genuinely frustrated with what they perceive as western ignorance of their elders’ sacrifices, or with what seems to them like a Ukrainian attempt to rewrite the script of Victory Day. In this respect, they are inadvertently playing into their government’s line. This interaction between the political and the personal, family history and national narrative, occurs in every society, but in Russia it seems particularly explicit because the fall of the Soviet Union shattered Soviet identity, creating an urgent need for a new one. While the current administration is eager to supply the new formula, based on my recent experience in Moscow, Russian citizens are still negotiating.

Agnieszka Smelkowska is a Ph.D. candidate in the History Department at UC Berkeley, where she is completing a dissertation about the German minority in Poland and the Soviet Union while attempting to execute a perfect Passata Sotto in her spare time.

Politics the only common ground

by Eric Brandom

Le Congrès des écrivains et artistes noirs (the Congress of Black Writers and Artists) took place in late September 1956 in Paris. Among the speakers was Aimé Césaire, and it is his intervention, “Culture and Colonization,” that is my focus here. This text has been the subject of significant scholarship. Like all of Césaire’s writings, it is nonetheless worth reading carefully anew. I look to Césaire now in part to think through the differences between two attempts to take or retake a dialectical tradition for anticolonial politics. How might such a project take shape in and against specifically French political thought? In this post, I hope one unusual moment in Césaire’s talk can be useful.

Put broadly, for Césaire, the problem of culture in 1956 was colonization. By disintegrating, or attempting to disintegrate, the peoples over which it has domination, colonization removes the “framework…structure” that makes cultural life possible. And it must be so, because colonization means political control and “the political organization freely developed by a people is a prominent part of that people’s culture, even as it also conditions that culture” (131). Peoples and nations must be free, because this is the condition of true living–having already made this argument with quotations from Marx, Hegel, and Lenin, Césaire next gave his listeners Spengler quoting Goethe. There was a politics to these citations. At issue here was Goethe’s vitalist point, from the heart of “European” culture, “that living must itself unfold.” This was contrary to Roger Caillois (also an object of enmity in the earlier Discourse) and others who “list…benefits” (132) of colonization. One might have “good intentions,” and yet: “there is not one bad colonization…and another…enlightened colonization…One has to take a side” (133).

On an earlier day of the conference, Hubert Deschamps, a former colonial governor turned academic, had asked to say a few words from his chair. Inaudible, he had been allowed to ascend to the podium, and had then given a longer-than-expected ‘impromptu’ speech. Deschamps seems to have offered a limited defense of colonization on the basis of the ultimate historical good of the colonization “we French”—the Gauls—experienced at the hands of the Romans. Responding the next day, Césaire plucked from his memory a pro-imperial Latin quatrain written in fifth-century Gaul by Rutilius Namatianus, ending “Urbem fecisti quod prius orbis erat” (“thou hast made a city of what was erstwhile a world”). I pause over this performance of total cultural mastery for two reasons. First, this comparison of ancient and modern imperialism was more powerful for the French than one might think; second, Césaire was surprisingly ambivalent. He notes that both Deschamps and Namatianus come from the ruling group, and so naturally see things positively. Of course, like the modern French empire, the Roman empire did mean the destruction of indigenous culture—and yet Césaire commented that “we may note in passing that the modern colonialist order has never inspired a poet” (134). It seems to me that Césaire was not without sympathy for the idea of the “Urbs,” but recognized its impossibility.

Culture cannot be “mixed [métisse]”; it is a harmony, a style (138). It develops—here Césaire is perhaps as Comtean as Nietzschean—in periods of “psychological unity…of communion” (139). The different origins, the hodgepodge, that results in anarchy is not a matter of physical origins, but of experience: in culture “the rule…is heterogeneity. But be careful: this heterogeneity is not lived as such. In the reality of a living civilization it is a matter of heterogeneity lived internally as homogeneity” (139). This cannot happen under colonialism. The result of the denial of freedom, of “the historical initiative”, is that “the dialectic of need” cannot unfold in colonized countries. Quoting again from Nietzsche’s Untimely Meditations, Césaire compares the result of colonialism for the colonized to Nietzsche’s “concept-quake caused by science” that “robs man of the foundation of all his rest and security” (140). Colonialism denies its victims the capacity to constitute from the people a collective subject capable of taking action on the stage of the world. From this failed subjectification, everything else flows. In colonized countries, culture is in a “tragic” position. Real culture withers and dies, or is already dead. What remains is an artificial “subculture” condemned to marginal “elites” (Césaire puts the word in quotes), and in fact “vast territories of culturally empty zones…of cultural perversion or cultural by-products” (140).

What, Césaire asks, is to be done? This is the question presented by the “situation that we black men of culture must have the courage to face squarely” (140). Césaire rejects the summary choice between “indigenous” and “European”: “fidelity and backwardness, or progress and rupture” (141). This opposition must be overcome, Césaire maintains, through the dialectical action of a people. Césaire’s language itself contains the fidelity and rupture he will no longer accept as alternatives: “I believe that in the African culture yet to be born…” (141). There will be no general destruction of the symbols of the past, nor a blind imposition of what comes from Europe. “In our culture that is to be born…there will be old and new. Which new elements? Which old elements?…The answer can only be given by the community” (142). But if the individuals present before Césaire in Paris—including among other luminaries his former student Fanon and old friend Senghor, as well as Jean Price-Mars, James Baldwin, and Richard Wright—cannot say what the answer will be, “at least we can confirm here and now that it will be given and not verbally but by facts and in action” (142). Thus “our own role as black men of culture” is not to be the redeemer, but rather “to proclaim the coming and prepare the way” for “the people, our people, freed from their shackles” (142). The people is a “demiurge that alone can organize this chaos into a new synthesis…We are here to say and demand: Let the peoples speak. Let the black peoples come onto the great stage of history” (142).

Baldwin, listening to Césaire, was “stirred in a very strange and disagreeable way” (157). His assessment of Césaire is critical:

Césaire’s speech left out of account one of the great effects of the colonial experience: its creation, precisely, of men like himself. His real relation to the people who thronged about him now had been changed, by this experience, into something very different from what it once had been. What made him so attractive now was the fact that he, without having ceased to be one of them, yet seemed to move with the European authority. He had penetrated into the heart of the great wilderness which was Europe and stolen the sacred fire. And this, which was the promise of their freedom, was also the assurance of his power. (158)

Political subjectivity, popularly constructed, is the necessary ground for cultural life–this was Césaire’s conclusion. Baldwin was not wrong to see in Césaire’s performance a certain implied political and cultural elitism. Here we can usefully return to Césaire’s great poem, the Cahier d’un retour au pays natal.

A. James Arnold, editor of the new critical edition of Césaire’s writings, there argues that the 1939 first version of the poem is essentially a lyric account of individual subjectivity. Arnold further argues, and Christopher Miller sharply disagrees, that the first version of the poem is superior, that subsequent versions (in 1947 and 1956) warp the form of the original with socio-political encrustations. Gary Wilder, in the first of his pair of essential books, reads the poem in terms of voice and subjectivity, seeing in it ultimately a failure. What began as critique ends with “a decontextualized and existentialist account of unalienated identity and metaphysical arrival” (288). Looking at Césaire’s attention in 1956 to the poetry of empire, his evident sympathy, even in the face of Deschamps’ condescension, for the Urbs, impossible though it be in the modern world, we may read the poem differently. Taken together with, for instance, Césaire’s appreciation for and active dissemination of Charles Péguy’s mystical republican poetry in Tropiques, we might see the subjectivity the poem dramatizes as essentially collective, and its project as the activation, the uprising, of this collectivity. It seems to me that we can read the shape of the dilemmas that Césaire confronted in the 1956 talk–between elite and people, decision and growth, culture and civilization, nation and diaspora–at least partly as the pursuit, even at this late date, of an impossible republicanism.

Humanist Pedagogy and New Media

by contributing editor Robby Koehler

Writing in the late 1560s, humanist scholar Roger Ascham found little to praise in the schoolmasters of early modern England.  In his educational treatise The Scholemaster, Ascham portrays teachers as vicious, lazy, and arrogant.  But even worse than the inept and cruel masters were the textbooks, which, as Ascham described them, were created specifically to teach students improper Latin: “Two schoolmasters have set forth in print, either of them a book [of vulgaria] . . ., Horman and Whittington.  A child shall learn of the better of them, that, which another day, if he be wise, and come to judgement, he must be fain to unlearn again.”  What were these books exactly? And if they were so unfit for use in the classroom, then why did English schoolmasters still use them to teach students?  Did they enjoy watching students fail, leaving them educationally impoverished?

Actually, no. Then, as now, school teachers did not always make use of the most effective methods of instruction, but their choice to use the books compiled by Horman and Whittington was not based in a perverse reluctance to educate their students.  Ascham sets up a straw man here about the dismal state of Latin teaching in England to strengthen the appeal of his own pedagogical ideas.  As we will see, the books by Horman and Whittington, colloquially known as “vulgaria” or “vulgars” in schools of the early modern period, were a key part of an earlier Latin curriculum that was in the process of being displaced by the steady adoption of Humanist methods of Latin study and instruction and the spread of printed books across England.  Looking at these books, Ascham could see only the failed wreckage of a previous pedagogical logic, not the vital function such books had once served.  His lack of historical cognizance and wilful mischaracterization of previous pedagogical texts and practices are an early example of an argumentative strategy that has again become prevalent as the Internet and ubiquitous access to computers have led pundits to argue for the death of the book in schools and elsewhere.  Yet, as we will see, the problem is often not so much with books as with what students and teachers are meant to do with them.

“Vulgaria” were initially a simple solution to a complicated problem: how to help students learn to read and write Latin and English with the limited amount of paper or parchment available in most English schools.  According to literary scholar Chris Cannon, by the fifteenth century, many surviving notebooks throughout England record pages of paired English and Latin sentence translations.  It seems likely that students would receive a sentence in Latin, record it, and then work out how to translate it into English.  Once recorded, students held onto these notebooks as both evidence of their learning and as a kind of impromptu reference for future translations.  In the pre-print culture of learning, then, vulgaria were evidence of a learning process, the material embodiment of a student’s slow work of absorbing and understanding the mechanics of both writing and translation.

The advent of printing fundamentally transformed this pedagogical process.  Vulgaria were among the first books printed in England, and short 90–100 page vulgaria remained a staple of printed collections of Latin grammatical texts up to the 1530s.  Once in print, vulgaria ceased to be a material artifact of an educational process and became instead an educational product, ready for students literate in either English or Latin to use while working on translations.  The culture of early modern English schools comes through vividly in these printed collections, often closing the distance between Tudor school rooms and our own.  For example, in the earliest printed vulgaria, compiled by John Anwykyll, one can learn how to comment on a fellow student’s lackadaisical pursuit of study: “He studied never one of those things more than another.” Or a student might ask after a shouting match, “Who made all of this trouble among you?”  Thus, in the early era of print, these books remained tools for learning Latin as a language of everyday life. It was Latin for school survival, not for scholarly prestige.

As Humanism took hold in England, vulgaria changed too, transforming from crib-books for beginning students into reference books for the use of students and masters, stuffed full of Humanist erudition and scholarship.  Humanist schoolmasters found the vulgaria a useful instrument for demonstrating their extensive reading and, occasionally, advancing their career prospects.  William Horman, an older schoolmaster and Fellow at Eton, published a 656-page vulgaria (about five times as long as the small texts for students) in 1519, offering it as a product of idle time that, in typical Humanist fashion, he published only at the insistence of his friends.  Yet Horman’s book was still true to its roots in the school room, containing a melange of classical quotations alongside the traditional statements and longer dialogues between schoolmasters and students.

By the 1530s, most of the first wave of printed vulgaria went out of print, likely because they did not fit with the new Humanist insistence that the speaking and writing of Latin be more strictly based on classical models.  Vulgaria would have looked increasingly old-fashioned, and their function in helping students adapt to the day-to-day rigors of the Latinate schoolroom was likely lost in the effort to separate, elevate, and purify the Latin spoken and written by students and teachers alike.  Nothing embodied this transformation more than Nicholas Udall’s vulgaria Flowers for Latin Speaking (1533), which was made up exclusively of quotations from the playwright Terence, with each sentence annotated with the play, act, and scene from which it was excerpted.


Terence. Phormio, The Mother-In-Law, The Brothers. Ed. John Sargeaunt. Loeb Classical Library.  New York: G.P. Putnam’s Sons, 1920.  https://archive.org/details/L023NTerenceIIPhormioTheMotherInLawTheBrothers   

The vulgaria as printed crib-book passed out of use in the schoolroom after about 1540, so why was Ascham still so upset about their use in 1568, when he was writing The Scholemaster?  By that time, Ascham could assume that many students had access to approved Humanist grammatical texts and a much wider variety of printed matter in Latin.  In a world that had much less difficulty obtaining both print and paper, the vulgaria would seem a strange pedagogical choice indeed.  Ascham’s own proposed pedagogical practices assumed that students would have a printed copy of one or more classical authors and at least two blank books for their English and Latin writing, respectively.  Whereas the vulgaria arose from a world of manuscript practice and a straitened economy of textual scarcity, Ascham’s own moment had been fundamentally transformed by the technology of print and the Humanist effort to recover, edit, and widely disseminate the works of classical authors.  Ascham could take for granted that students worked directly with printed classical texts and that they would make use of Humanist methods of commonplacing and grammatical analysis that themselves relied upon an ever-expanding array of print and manuscript materials and practices.  In this brave new world, the vulgaria and its role in manuscript and early print culture were alien holdovers of a bygone era.

Of course, Ascham’s criticism of the vulgaria is also typical of Humanist scholars, who often distanced themselves from their predecessors to assert the importance and correctness of their own methods.  Ironically, this was exactly what William Horman was doing when he published his massive volume of vulgaria – exemplifying and monumentalizing his own erudition and study while also demonstrating the inadequacy of previous, much shorter efforts. Ascham’s rejection of vulgaria must be seen as part of the larger intergenerational Humanist pattern of disavowing and dismissing the work of predecessors who could safely be deemed inadequate, to make way for one’s own contribution.  Ascham is peculiarly modern in this respect, arguing that introducing new methods of learning Latin can reform the institution of the school in toto.  One is put in mind of modern teachers who argue that the advent of the Internet, or of some set of methods that the Internet enables, will fundamentally transform the way education works.

In the end, the use of vulgaria was not any more related to the difficulties of life in the classroom or the culture of violence in early modern schools than any other specific pedagogical practice or object.  But, as I’ve suggested, Ascham’s claim that the problems of education can be attributed not to human agents but to the materials they employ is an argument that has persisted into the present.  In this sense, Ascham’s present-mindedness suggests the need to take care in evaluating seemingly irrelevant or superfluous pedagogical processes or materials.  Educational practices are neither ahistorical nor acontextual; they exist in institutional and individual time, and they bear the marks of both past and present exigencies in their deployment.  When we fail to recognize this, we, like Ascham, mischaracterize their past and present value and will likely misjudge how best to transform our educational institutions and practices to meet our own future needs.

Reptiles, Amphibians, Herptiles, and other Creeping Things: Variations on a Taxonomic Theme

by Contributing Editor Spencer J. Weinreich

King Philip Came Over For Good Soup. Kingdom, Phylum, Class, Order, Family, Genus, Species. Few mnemonics are as ubiquitous as the monarch whose dining habits have helped generations of biology students remember the levels of the taxonomic system. Though the progress of the field has introduced domains (above kingdoms), tribes (between family and genus), and a whole array of lesser taxa (subspecies, subgenus, and so on), the system remains central to identifying and thinking about organic life.


Green anaconda (Eunectes murinus): a reptile, not an amphibian (photo credit: Smithsonian’s National Zoo)

Consider “reptiles.” Many a precocious young naturalist learns—and impresses upon their parents with zealous (sometimes exasperated) insistence—that snakes are not slimy. The snake is a reptile, not an amphibian, covered with scales rather than a porous skin. By high school biology, this distinction takes on taxonomic authority: in the Linnaean system, reptiles and amphibians belong to separate classes (Reptilia and Amphibia, respectively). The division has much to recommend it, given the two groups’ considerable divergences in physiology, life-cycle, behavior, and genetics. But, like all scientific categories, the distinction between reptiles and amphibians is a historical creation, and of surprisingly recent vintage at that.

When Carl Linnaeus first published his Systema Naturæ in 1735, what we know as reptiles and amphibians were lumped together in a class named Amphibia. The class—“naked or scaly body; molar teeth, none, others, always; no feathers” (“Corpus nudum, vel squamosum. Dentes molares nulli: reliqui semper. Pinnæ nullæ”)—was divided among turtles, frogs, lizards, and snakes. Linnaeus concludes his outline with these words:

“the benignity of the Creator chose not to extend the class of amphibians any further; indeed, if it should enjoy as many genera as the other classes of animals include, or if that which the teratologists fantasize about dragons, basilisks, and such monsters were true, the human race could hardly inhabit the earth” (“Amphibiorum Classem ulterius continuare noluit benignitas Creatoris; Ea enim si tot Generibus, quot reliquæ Animalium Classes comprehendunt, gauderet; vel si vera essent quæ de Draconibus, Basiliscis, ac ejusmodi monstris si οι τετραλόγοι [sic] fabulantur, certè humanum genus terram inhabitare vix posset”) (n.p.).


Sand lizard (Lacerta agilis), an amphibian according to Linnaeus (photo credit: Friedrich Böhringer)

In the 1758 canonical tenth edition of the Systema, Linnaeus provided a more elaborate set of characteristics for Amphibia: “a heart with a single ventricle and a single atrium, with cold, red blood. Lungs that breathe at will. Incumbent jaws. Double penises. Frequently membranaceous eggs. Senses: tongue, nose, eyes, and, in many cases, ears. Covered in naked skin. Limbs: some multiple, others none” (“Cor uniloculare, uniauritum; Sanguine frigido, rubro. Pulmones spirantes arbitrarie. Maxillæ incumbentes. Penes bini. Ova plerisque membranacea. Sensus: Lingua, Nares, Oculi, multis Aures. Tegimenta coriacea nuda. Fulcra varia variis, quibusdam nulla”) (I.12). Interestingly, Linnaeus now divides Amphibia into three, based on their mode of locomotion:

  1. Reptiles (“those that creep”), including turtles, lizards, frogs, and toads;
  2. Serpentes (“those that slither”), including snakes, worm lizards, and caecilians;
  3. Nantes (“those that swim”), including lampreys, rays, sharks, sturgeons, and several other types of cartilaginous fish (I.196).

Smokey jungle frog (Leptodactylus pentadactylus), a reptile according to Laurenti and Brongniart (photo credit: Trisha M. Shears)

Linnaeus’s younger Austrian contemporary, Josephus Nicolaus Laurenti, also groups modern amphibians and reptiles together, even as he excludes the fish Linnaeus had categorized as swimming Amphibia. Laurenti was the first to call this group Reptilia (19), and though its denizens have changed considerably in the intervening centuries, he is still credited as the “auctor” of class Reptilia. The French mineralogist and zoologist Alexandre Brongniart also subordinated “batrachians” (frogs and toads) within the broader class of reptiles. All the while, exotic specimens continued to test taxonomic boundaries: “late-eighteenth-century naturalists tentatively described the newly discovered platypus as an amalgam of bird, reptile, and mammal” (Ritvo, The Platypus and the Mermaid, 132).

It was not until 1825 that Brongniart’s compatriot and contemporary Pierre André Latreille’s Familles naturelles du règne animal separated Reptilia and Amphibia as adjacent classes. The older, joint classification survives in the field of herpetology (the study of reptiles and amphibians) and the sadly underused word “herptile” (“reptile or amphibian”).

“Herptile” is a twentieth-century coinage. “Reptile,” by contrast, appears in medieval English; derived from the Latin reptile, reptilis—itself from rēpō (“to creep”)—“reptile” originally meant simply “a creeping or crawling animal” (“reptile, n.1” in OED). The first instance cited by the Oxford English Dictionary is from John Gower’s Confessio Amantis (c.1393): “And every neddre and every snake / And every reptil which mai moeve, / His myht assaieth for to proeve, / To crepen out agein the sonne” (VII.1010–13). The Vulgate Latin Bible uses reptile, reptilis to translate the “creeping thing” (רֶמֶשׂ) described in Genesis 1, a usage carried over into medieval English, as in the “Adam and Eve” of the Wheatley Manuscript (BL Add. MS 39574), where Adam is made lord “to ech creature & to ech reptile which is moued on þe erþe” (fol. 60r). Eventually, these “creeping things” became a distinct group of animals: an early sixteenth-century author enumerates “beestes, byrdes, fysshes, reptyll” (“reptile, n.1” in OED). I suspect the identification of the “reptile” (creeping thing) with herptiles owes something to the Serpent in the Garden of Eden being condemned by God to move “upon thy belly” (Gen. 3:14). The adjective “amphibian” is attested in English as early as 1637, but in the sense of “having two modes of existence.” Not until 1835—after the efforts of Latreille and his English popularizer, T. H. Huxley—does the word come to refer to a particular class of animals (“amphibian, adj. and n.” in OED).

The crucial point here is that the distinction between the two groups, grounded though it may be in biology and phylogenetics, is an artifact of taxonomy, not a self-evident fact of the natural world. For early modern, medieval, and ancient observers, snakes and salamanders, turtles and toads all existed within an ill-defined territory of creeping, crawling things.


Fourteenth-century icon of Saint George and a very snakelike dragon (photo credit: Museum of Russian Icons, Moscow)

The farther back we go, the more fantastic the category becomes, encompassing dragons, sea serpents, basilisks, and the like. Religion, too, played its part, as we have seen with Eve’s serpentine interlocutor: Egypt’s plague of frogs, the dragon of Revelation, the Leviathan, the scaly foes of saints like George and Margaret, were within the same “reptilian”—creeping, crawling—family. To be sure, the premodern observer was perfectly aware of the differences between frogs and lizards, and between different species (what could be eaten and what could not, what was dangerous and what was not). But they would not—and had no reason to—erect firm ontological boundaries between the two sorts of creatures.

When we go back to the key works of medieval and ancient natural philosophy, the same nebulosity prevails. Isidore of Seville’s magisterial Etymologies of the early seventh century includes an entry “On Serpents” (“De Serpentibus”), which notes,

“the serpent, however, takes that name because it crawls [serpit] by hidden movements; it creeps not with visible steps, but with the minute pressure of its scales. But those which go upon four feet, such as lizards and geckoes [stiliones could also refer to newts], are not called serpents but reptiles. Serpents are also reptiles, since they creep on their bellies and breasts” (“Serpens autem nomen accepit quia occultis accessibus serpit, non apertis passibus, sed squamarum minutissimis nisibus repit. Illa autem quae quattuor pedibus nituntur, sicut lacerti et stiliones, non serpentes, sed reptilia nominantur. Serpentes autem reptilia sunt, quia ventre et pectore reptant.”) (XII.iv.3).

The forefather of premodern zoology, Aristotle, opines in Generation of Animals that “there is a good deal of overlapping between the various classes”; he groups snakes with fish because they have no feet as easily as he links them with lizards because they are oviparous (II.732b, trans. A. L. Peck).


Atlantic puffin (Fratercula arctica), a reptile according to modern phylogenetics (photo credit: NOAA Photo Library—anim1991)

As it turns out, our modern category of “reptile” (class Reptilia) has proved similarly elastic. In evolutionary terms, this is because “reptiles” are not a clade—a group of organisms defined by a single ancestor species and all its descendants. Though visually closer to lizards, for example, genetically speaking the crocodile is a nearer relative to birds (class Aves). Scientists and science writers have thus claimed—sometimes facetiously—that the very category of reptile is a fiction (see Welbourne, “There’s no such thing as reptiles”). The clade Sauropsida, including reptiles and birds (as a subset thereof), was first mooted by Huxley and subsequently resurrected in the twentieth century to address the problem. Birds are now reptiles, though they seldom creep. If I may be permitted a piece of Isidorean etymological fantasy, perhaps this is the true import of the “reptile” as “creeping thing,” as they creep across and beyond taxonomic boundaries, eternally frustrating and fascinating those who seek to understand them.

The First of Nisan, The Forgotten Jewish New Year

by guest contributor Joel S. Davidi

It is late March and the weather is still cold. The sounds of Arabic music and exuberant conversation emanate from an elegant ballroom in Brooklyn, New York. No, it’s not a wedding or a Bar Mitzvah. A Torah scroll is unfurled and the cantor begins to read from Exodus 12:1, “And God spoke to Moses and Aaron in Egypt, ‘This month is to be for you the first month, the first month of your year.’” The reading is followed by the chanting of liturgical poetry based on this Torah portion: “Rishon Hu Lakhem L’khodshei Hashanah… Yom Nisan Mevorakh…”, “The first month shall it be for you for the months of the year… the month of Nisan is blessed.” As they leave the event, men and women wish each other “Shana tova”, happy new year.

Something seems off. It is a Monday night, and Rosh Hashanah, the traditional Jewish new year, is still six months away. Why the celebration and talk of a new year? This ritual is very familiar, however, to the members of Congregation Ahaba Veahva, a synagogue that follows the Egyptian-Jewish rite. It is a vestige of a very ancient, almost extinct Jewish custom called Seder al-Tawhid (Arabic; Seder Ha-Yikhud in Hebrew: the ritual of the unity). This ritual takes place annually on the first of Nisan. The name denotes a celebration of the unity of God and the miracles that he wrought during this month surrounding the Exodus from Egypt. The way the congregation celebrates it, and how this custom survived, illuminates important dynamics of how Jewish ritual has been standardized over time.

Ahaba Veahva’s members celebrate Rosh Hashanah in September like other rabbinic Jews. The Seder al-Tawhid, however, is a remnant of an ancient custom of the Jews of the Near East (variously referred to as Mustaribun or Shamim) to commemorate the first day of the Jewish month of Nisan as a minor Rosh Hashanah, as per Exodus 12:1. On their website, Congregation Ahaba Veahva explains the celebration as follows:

The Great Exodus of Egypt:
On Rosh Chodesh (the first of the month of Nisan), beni Yisrael (the children of Israel) heard the nes (miracle) that they were going to be redeemed on the night of the 15th, later in that very month. We hold this evening to remember the miracles and the hesed (kindness) that Hashem (God) does for His nation.
“In Nisan we were redeemed in the past, and in Nisan we are destined to be redeemed again.” (a midrashic quote (Exodus Rabbah 15:2) asserting that just as the Exodus from Egypt took place in Nisan so too will the ultimate messianic redemption)
We hold this evening to put everyone in the correct spiritual mindset- to realize with all their might that this could be the month of the Geulah (messianic redemption).


The Alexandrian pamphlet describing the Seder al-Tawhid liturgy.

The only printed version of the Seder al-Tawhid liturgy is found in an anonymous 10-page pamphlet printed in Alexandria. The prayers focus on many themes found in the Rosh Hashanah prayers, such as blessing, sustenance, and messianic redemption in the year to come. The liturgy is found in a somewhat longer form in a tenth-century manuscript fragment from the Cairo Geniza, the repository of documents found in the late nineteenth century in the synagogue of old Fustat.

The celebration of al-Tawhid begins with special liturgy on the Sabbath closest to the day; on the day itself, the community refrains from unnecessary labor, much as on the intermediate days of Jewish holidays. They also recite a Kiddush (a prayer that sanctifies a day, recited over a cup of wine), followed by a festive meal and the recitation of liturgical poetry. One such poem presents a debate among the twelve months to determine which one will have primacy. In one stanza, for example, Nisan argues that the following month of Iyyar cannot be chosen since its zodiacal sign is Taurus, the same species as the golden calf that Israel made in the wilderness. The concluding stanza is a triumphal declaration from Nisan: שליט אנא וריש על כול"ן, literally, “I am the ruler and the head of all of you,” and תקיפה עבדי פרוק לעמיה ובי הוא עתיד למפרוק יתהון, “A deliverance from slavery did I [Nisan] impart upon the nation, and in me [Nisan] is he [God] destined to deliver them [again]” (as per BT Rosh Hashanah 10b). Other prayers more explicitly cast the day as the beginning of the new year. One liturgical poem begins: יהי רצון מלפניך ה אלוהינו ואלוהי אבותינו…שתהיה השנה הזאת הבאה עלינו לשלום, “May it be your will, Lord our God and God of our fathers…that this coming year should come upon us in peace.”

The celebration of the first of Nisan as the beginning of the new year is rooted in both Biblical and Talmudic sources. Exodus 12:1–2 states that Nisan is the first month in the intercalation of the new year, and the Mishnah in Tractate Rosh Hashanah 1:1 describes the first of Nisan as one of the four beginnings of the Jewish new year:

There are four new years. On the first of Nisan is the new year for kings and for festivals. On the first of Elul is the new year for the tithe of cattle. … On the first of Tishrei is the new year for years, for release and jubilee years, for plantation and for [tithe of] vegetables…. On the first of Shevat is the new year for trees…

In an article on the Seder al-Tawhid liturgy, liturgical scholar Ezra Fleischer postulates that the Kiddush ceremony on the holiday was based on an earlier Mishnaic-era institution. The Mishnah in Rosh Hashanah 2:7 describes how the Sanhedrin, the high religious court of Talmudic-era Israel, consecrated the new month by declaring “it is sanctified”, at which point the entire assemblage would respond in kind, “it is sanctified, it is sanctified”. This declaration was performed with pomp and publicity in order to make it clear that the final word in the intercalation of the Jewish calendar belonged to the rabbis of Eretz Yisrael and no one else. In the context of the Seder al-Tawhid, this ritual serves to highlight Nisan’s role as the first month of the Jewish lunar year, the beginning of this process of sanctifying the new moon.


If the first of Nisan is such an important date in both the Bible and the Talmud, why is the day celebrated today only by this small Jewish community? To answer this question we must look to the Geonic period of Jewish history, corresponding roughly to the second half of the first millennium. Over the past decade, historians have increasingly come to see this period as one in which a number of variations of Judaism were vying for supremacy. These included several schools of Jewish jurisprudence based in different geographic constituencies across the Mediterranean Diaspora. The two most prominent were the Babylonian (Minhag Babhel, based in Baghdad) and Palestinian (Minhag Eretz Yisrael) rites; alongside them stood the Karaites, who did not follow the Rabbis at all but formed their own non-rabbinic madhab, or creed.

The Sanhedrin in Jerusalem was abolished in the fifth century by Byzantine decree. Its various successors could not recapture its prestige, and the Rabbis of Eretz Yisrael gradually lost their power to sanction the new moon. The Karaites developed their own system of intercalation, but within the rabbinic tradition, in the absence of the Sanhedrin, the Babylonians and Palestinians often found themselves at odds.

The most notorious controversy between the two schools involved the often-confrontational Saadiah ben Joseph al-Fayyumi, the head of the Babylonian Academy, better known as Saadiah Gaon, and Aharon ben Meir, the head of the Palestinian Academy. In 921-923 the two engaged in an extended and very public argument regarding the sanctification of the Hebrew year 4682 (921/22). While the core of this debate concerned the complicated methods of calculating the Jewish calendar, it became a referendum on which academy, and by extension which rite, would become authoritative in the Diaspora. Saadiah emerged victorious (the historians Marina Rustow and Sacha Stern argue that his authority on these matters may have resulted from his mastery of Abbasid advances in astronomy).

In Palestine, however, the Jewish community, based in Jerusalem, continued to follow the Minhag Eretz Yisrael, which also exerted influence on other Near Eastern Jewish communities, such as that of Egypt. The heads of the Jerusalem academy still often insisted that the right to intercalate the year rested solely with them. As late as the eleventh century, Rabbi Evyatar Ha-Kohen, the head of the Palestinian Academy (then partially in exile in Cairo), would declare:

The land of Israel is not part of the exile such that it would be subject to an Exilarch (a title often applied to the head of the Babylonian academy), and furthermore one may not contradict the authority of the Prince (a title at times applied to the head of the Palestinian academy), on the word of whom [alone] may leap years be declared and the holiday dates set according to the order imposed by God before the creation of the world. For this is what we are taught in the secrets of intercalation.

ארץ ישראל אינה קרואה גולה שיהא ראש גולה נסמך בה, ועוד שאין עוקרין נשיא שבארץ ישראל, שעל פיו מעברין את השנה וקובעין את המועדות הסדורים לפני הקב”ה קודם יצירת העולם, דהכי גמרי בסוד העיבור

In a continuation of this post, I will elaborate on how the Seder al-Tahwid was likely maintained, and at times suppressed, during the Geonic period; on similar practices preserved among non-rabbinic communities; and on the ritual’s reception today.

Joel S. Davidi is an independent ethnographer and historian. His research focuses on Eastern and Sephardic Jewry and the Karaite communities of Crimea, Egypt, California and Israel. He is the author of the forthcoming book Exiles of Sepharad That Are In Ashkenaz, which explores the Iberian Diaspora in Eastern Europe during the fifteenth and sixteenth centuries. He blogs on Jewish history at toldotyisrael.wordpress.com.

Between Conservatism and Fascism in Troubled Times: Der Fall Bernhard

by guest contributor Steven McClellan

The historian Fritz K. Ringer claimed that in order to see the potency of ideas from great thinkers, and to properly situate their importance in their particular social and intellectual milieu, the historian had also to read the minor characters: those second- and third-tier intellectuals who were barometers and even, at times, agents of historical change. One such individual, whom I have frequently encountered in the course of researching my dissertation, is the economist Ludwig Bernhard. As I learned more about him, I was struck by the composite of positions Bernhard formulated on topics as pressing then as they are today: the mobilization of mass media and public opinion, the role of experts in society, the boundaries of science, academic freedom, free speech, the concentration of wealth and power, and the loss of faith in traditional party politics. How did they come together in his work?


Ludwig Bernhard (1875-1935; Bundesarchiv, Koblenz Nl 3 [Leo Wegener], Nr. 8)

Bernhard grew up in a liberal, middle-class household. His father was a factory owner in Berlin who had converted from Judaism to Protestantism in 1872. As a young man, Bernhard studied in both Munich and Berlin under two heavyweights of the German economics profession: Lujo Brentano and Gustav Schmoller. Bernhard found little common ground with them, however. Bernhard’s friend Leo Wegener best captured the tension between the young scholar and his elders. In his Erinnerungen an Professor Ludwig Bernhard (Poznań: 1936, p. 7), Wegener noted that “Schmoller dealt extensively with the past,” while the liberal Brentano, friend of the working class and trade unions, “liked to make demands on the future.” Bernhard, however, “was concerned with the questions of the present.” He came to reject Schmoller’s and Brentano’s respective social and ethical concerns. Bernhard belonged to a new cohort of economists who were friendly to industry and embraced the “value-free” science sought by the likes of Max Weber. They promoted Betriebswirtschaft (business economics), which had heretofore stood outside traditional political economy as then understood in Germany. Doors remained closed to them at most German universities. As one Swiss economist noted in 1899, “appointments to the vacant academical [sic] chairs are made as a rule at the annual meetings of the ‘Verein für Socialpolitik’,” of which Schmoller was chairman from 1890 to 1917. Though an exaggeration, this was the view held by many at the time, given the personal relationship between Schmoller and Friedrich Althoff, one of the leading civil servants in the Prussian Ministerium der geistlichen, Unterrichts- und Medizinalangelegenheiten (Ministry of Spiritual, Educational, and Medical Affairs).

Part of Bernhard’s early academic interest focused on the Polish question, particularly the “conflict of nationalities” and Poles living in Prussia. Unlike many other contemporary scholars and commentators of the Polish question, including Max Weber, Bernhard knew the Polish language. In 1904 he was appointed to the newly founded Königliche Akademie in Posen (Poznań). In the year of Althoff’s death (1908), the newly appointed Kultusminister Ludwig Holle created a new professorship at the University of Berlin at the behest of regional administrators from Posen and appointed Bernhard to it. However, Bernhard’s placement in Berlin was done without the traditional consultation of the university’s faculty (Berufungsverfahren).

The Berliner Professorenstreit of 1908-1911 ensued, with Bernhard’s would-be colleagues Adolph Wagner, Max Sering, and Schmoller protesting his appointment. It escalated to the point that Bernhard challenged Sering to a duel over the course lecture schedule for 1910/1911, claiming that his ability to lecture freely had been restricted. The affair received widespread coverage in the press, attracting commentary from notables such as Max Weber. At one point, just as the affair seemed about to conclude, Bernhard published an anonymous letter in support of his own case; it was later revealed that he was in fact the author. This further poisoned the well with his colleagues. The Prussian Abgeordnetenhaus (Chamber of Deputies) debated the topic: the conservatives supported Bernhard, while the liberal parties defended the position of the Philosophical Faculty. Ultimately, Bernhard kept his Berlin post.


Satire of the Professorenstreit

The affair partly touched upon the threat political power posed to the freedom of the Prussian universities to govern themselves—a topic that Bernhard himself extensively addressed in the coming years. It also concerned the rise of the new discipline of “business economics,” then gaining a beachhead at German secondary institutions. Finally, the Professorenstreit focused on Bernhard himself, an opponent of much of what Schmoller and his colleagues in the Verein für Socialpolitik stood for. He proved pro-business and an advocate of the entrepreneur. Bernhard also showed himself a social Darwinist, deploying biological and psychological language, as in his 1912 analysis of the German pension system. He decried what he termed the “dreaded bureaucratization of social politics.” Bureaucracy in the form of Bismarck’s social insurance program, Bernhard argued, diminished the individual and blocked innovation, allowing workers to become dependent on the state. Men like Schmoller, though critical at times of the current state of the Prussian bureaucracy, still believed in its potential as an enlightened steward that stood above party interests and acted for the general good.

Bernhard could never accept this view. Neither could a man who became Bernhard’s close associate, the former director at Friedrich Krupp AG, Alfred Hugenberg. Hugenberg was himself a former doctoral student of another key member of the Verein für Socialpolitik, Georg Friedrich Knapp. Bernhard was proud to be part of Hugenberg’s circle, as he saw its members as men of action and practice. In his short study of the circle, he praised their mutual friend Leo Wegener for not being a Fachmann, or expert. Like Bernhard, Hugenberg disliked Germany’s social policy, the welfare state, democracy, and—most importantly—socialism. Hugenberg concluded that rather than appeal directly to policy makers and state bureaucrats through academic research and debate, as Schmoller’s Verein für Socialpolitik had done, greater opportunities lay in the ability to mobilize public opinion through propaganda and the control of mass media. The ‘Hugenberg-Konzern’ bought up controlling interests in newspapers, press agencies, advertising firms, and film studios (including the famed Universum Film AG, or UfA).

In 1928, to combat the “hate” and “lies” of the “democratic press” (Wegener), Bernhard penned a pamphlet meant to set the record straight on the Hugenberg-Konzern. He presented Hugenberg as a dutiful, stern overlord who cared deeply for his nation and did not simply grow rich off it. Indeed, for Bernhard the Hugenberg-Konzern was the modern equivalent of the famous Raiffeisen-Genossenschaften (cooperatives), providing opportunities for investment and national renewal. Furthermore, Bernhard claimed the Hugenberg-Konzern had saved German public opinion from the clutches of Jewish publishing houses like Mosse and Ullstein.

Both Bernhard and Hugenberg pushed the “stab-in-the-back” myth as the explanation for Germany’s defeat in the First World War. The two also shared a strong belief in fierce individualism and a nationalism tinged with authoritarian tendencies. These views coalesced in their advocacy of an economic dictator to take hold of the reins of the German economy during the tumultuous years of the late Weimar Republic. Bernhard penned studies of Mussolini and fascism. “While an absolute dictatorship is the negation of democracy,” he writes, “a limited, constitutional dictatorship, especially an economic dictatorship, is an organ of democracy” (Ludwig Bernhard, Der Diktator und die Wirtschaft [Zurich, 1930], p. 10).

Hugenberg came to see himself as the man to be that economic dictator. In a critique similar to that mounted by Carl Schmitt, Bernhard argued that the parliamentary system had failed Germany. Not only could nothing decisive be accomplished, but the existence of interest-driven parties, whose purpose was merely to antagonize the other parties, stifle action, and even throw a wrench into the parliamentary system itself, meant there could be nothing but political disunion. For Bernhard, the socialists and communists were the clear culprits here.


Ludwig Bernhard, »Freiheit der Wissenschaft« (Der Tag, April 1933; BA Koblenz, Nl 3 [Leo Wegener], Nr. 8, Blatt 91)

The Nazis proved another story. Hitler himself would be hoisted into power by Hugenberg. Standing alongside him was Bernhard. In April 1933, Bernhard published a brief op-ed entitled “Freiheit der Wissenschaft” (“Freedom of Science”), which summarized much of his intellectual career. He began by stating, “Rarely has a revolution endured the freedom of science.” Science is free because it is based on doubt. Revolution, Bernhard writes, depends on eliminating doubt. It must therefore control science. According to Bernhard, this is what the French revolutionaries attempted in 1789. In his earlier work on this topic, Bernhard had made a similar argument, stating that Meinungsfreiheit (freedom of opinion) had been taken away by the revolutionary state just as it had been taken away by the democratic Lügenpresse. Thankfully, he argued, Germany after 1918 preserved one place where the “guardians” of science and the “national tradition” remained—the universities, which had “resisted” the “criminal” organization of the Socialist Party’s Prussian administration. Bernhard, known for his energetic lectures, noted with pride in private letters the growth of the Nazi student movement. In 1926, after having supported the failed Pan-German plan to launch a Putsch (coup d’état) to eliminate the Social Democratic regime in Prussia, Bernhard spoke to his students, calling on the youth to save the nation. Now, it was time for the “national power” of the “national movement” to be mobilized. And in this task, Bernhard concluded, Adolf Hitler, the “artist,” could make his great “masterpiece.”

Ludwig Bernhard died in 1935 and therefore never saw Hitler’s completed picture: a ruined Germany. An economic nationalist, individualist, and advocate of authoritarian solutions, who both rebelled against experts and defended the freedom of science, Bernhard remains a telling example of how personal history, institutional contexts, and the perception of a heightened sense of cultural and political crisis can collude in dangerous ways, not least at the second tier of intellectual and institutional life.

Steven McClellan is a PhD candidate in History at the University of Toronto. He is currently writing his dissertation, a history of the rise, fall, and rebirth of the Verein für Sozialpolitik between 1872 and 1955.

“He shall not haue so much as a buske-point from thee”: Examining Notions of Gender through the Lens of Material Culture

by guest contributor Sarah Bendall

Our everyday lives are surrounded by objects. Some are mundane tools that help us with daily tasks, others are sentimental items that carry emotions and memories, and others again are used to display achievements, wealth, and social status. Importantly, many of these objects are gendered, and their continued use in various ways helps to mould and solidify ideas, particularly gender norms.

In the early modern period, two objects of dress that shaped and reinforced gender norms were the busk, a long piece of wood, metal, whalebone, or horn that was placed into a channel in the front of the bodies or stays (corsets), and the busk-point, a small piece of ribbon that secured the busk in place. During the sixteenth and seventeenth centuries these accessories to female dress helped not only to shape expressions of love and sexual desire, but also to set the acceptable gendered boundaries of those expressions.

Busks were practical objects that existed to keep the female posture erect, to emphasize the fullness of the breasts, and to keep the stomach flat. These uses derived from their function in European court dress, which complemented elite ideas of femininity, most notably the good breeding reflected in an upright posture and controlled bodily movement. However, during the seventeenth century, and increasingly over the eighteenth and nineteenth centuries, lovers not only charged busks and busk-points with erotic connotations but also saw them as tokens of affection. Thus, they became part of the complex social and gendered performance of courtship and marriage.

The sheer number of surviving busks that contain inscriptions associated with love indicates that busk giving during courtship must have been a normal and commonly practised act in early modern England and France. A surviving English wooden busk in the Victoria and Albert Museum contains symbolic engravings, the date of gifting, 1675, and a Biblical reference. On the other side of the busk is an inscription referencing the Biblical Isaac’s love for his wife, which reads: “WONC A QVSHON I WAS ASKED WHICH MAD ME RETVRN THESE ANSVRS THAT ISAAC LOVFED RABEKAH HIS WIFE AND WHY MAY NOT I LOVE FRANSYS”.


English wooden stay busk, c. 1675. Victoria and Albert Museum, London, accession number W.56-1929.

Another inscription, on a seventeenth-century French busk, exclaims, “Until Goodbye, My Fire is Pure, Love is United”. Three engravings correspond to these three lines: a tear falling onto a barren field, two hearts appearing in that field, and finally a house that the couple would share together in marriage, with two hearts floating above it.

Inscriptions found on other surviving busks go beyond speaking on behalf of the lover and actually speak on behalf of the busks themselves, giving these inanimate objects voices of their own. Another seventeenth-century French busk, engraved with a man’s portrait, declares:

“He enjoys sweet sighs, this lover

Who would very much like to take my place”

This inscription shows the busk’s anthropomorphized awareness of the prized place that it held so close to the female body. John Marston’s The scourge of villanie. Three bookes of satyres (1598, p. F6r-v) expressed similar sentiments through the character Saturio, who wishes himself his lover’s busk so that he “might sweetly lie, and softly luske Betweene her pappes, then must he haue an eye At eyther end, that freely might discry Both hills [breasts] and dales [groin].”

Although the busk’s intimate association with the female body was exploited in both erotic literature and bawdy jokes, the busk itself also took on phallic connotations. The narrator of Alexander Pope’s Rape of the Lock (1712, p. 12) describes the Baron with an ‘altar’ built by love. On this altar “lay the Sword-knot Sylvia‘s Hands had sown, With Flavia‘s Busk that oft had rapp’d his own …” Here “his own [busk]” evokes his erection, which Flavia’s busk had often brushed against during their lovemaking. Therefore, in the context of gift giving, the busk also acted as an extension of the male lover: it was an expression of his male sexual desire in its most powerful and virile form, worn privately on the female body. Early modern masculinity was a competitive performance, and in a society where social structure and stability centred on the patriarchal household, young men found courtship possibly one of the most important events of their lives – one which tested their character and their masculine ability to woo and marry. In this context, the act of giving a busk was a masculine act, which asserted not only a young man’s prowess but also his ability to secure a respectable place in society with a household.

Yet the inscriptions on surviving busks, and the literary sources that describe them, often do not account for the female experience of courtship and marriage. Although women usually took on the submissive role in gift giving, being the recipient of love tokens such as busks did not render them completely passive. Courtship encouraged female responses, as it created a discursive space in which women were free to express themselves. A woman could choose to accept or reject a potential suitor’s gift, giving her significant agency in the process of courtship. Within the gift-giving framework, choosing to place a masculine sexual token so close to her body was also a very intimate female gesture. Yet a woman’s desire for a male suitor could take on much more active expressions as well, as various sources describe women giving men their busk-points. When the character Jane in Thomas Dekker’s The Shoemaker’s Holiday (1600) discovers that the husband she thought dead is still alive, she abandons her new beau, who tells her that “he [her old husband] shall not haue so much as a buske-point from thee”, alluding to women’s habit of giving busk-points as signs of affection and promise. John Marston’s The Malcontent (1603) describes a similar situation when the Maquerelle warns her ladies, “look to your busk-points, if not chastely, yet charily: be sure the door be bolted.” In effect, she is warning these girls to keep their doors shut and not give their busk-points away to lovers as keepsakes.

To some, the expression of female sexual desire by such means may seem oddly out of place in a society where strict cultural and social practices policed women’s agency. Indeed, discussions of busks and busk-points provoked a rich dialogue concerning femininity and gender in early modern England. Throughout the sixteenth and seventeenth centuries, bodies (corsets) elongated the torso, until the part of the bodies that contained the busk reached the lady’s “Honor” (Randle Holme, The Academy of Armory and Blazon, p. 94).[1] In other words, the lowest part of the busk, which held the ‘busk-point’, sat over a woman’s sexual organs, where chastity determined her honour. The politics involved in female honour and busk-points are expressed in the previously discussed scene from The Malcontent: busk-points functioned as both gifts and sexual tokens, and this is highlighted by the Maquerelle’s plea for the girls to look to them ‘chastely’.

As a result of the busk’s and busk-point’s intimate position on the female body, these objects were frequently discussed in relation to women’s sexuality and their sexual honour. Some moralising commentaries blamed busks for concealing illegitimate pregnancies and causing abortions. Others associated busks with prostitutes, rendering them a key part of the profession’s contraceptive arsenal. Yet popular literature and the inscriptions on the busks themselves rarely depict the women who wore them as ‘whores’. Instead, the conflicting ideas about busks and busk-points found in sources from this period mirror the contradictory ideas and fears that early moderns held about women’s sexuality. When used in a sexual context outside of marriage, these objects were controversial, as they were perceived as aiding unmarried women’s unacceptably forward expressions of sexual desire. However, receiving busks and giving away busk-points in the context of courtship and marriage was an acceptable way for a woman to express her desire, precisely because it occurred in a context that society and social norms could regulate, and because this desire would eventually be consummated within the acceptable confines of marriage.

Busks and busk-points are just two examples of the ways in which the examination of material culture can help the historian tap into historical ideas of femininity and masculinity, and into the ways notions of gender were imbued in, circulated through, and expressed by the objects of everyday life in early modern Europe. Although controversial at times, busks and busk-points were items of clothing that aided widely accepted expressions of male and female sexual desire through the acts of giving, receiving, and wearing. Ultimately, discussions of these objects and their varied meanings highlight not only the ways in which sexuality occupied a precarious space in early modern England, but also how material culture such as clothing was an essential part of regulating gender norms.
[1] Holme, The Academy of Armory and Blazon, p. 3.

Sarah A. Bendall is a PhD candidate in the Department of History at the University of Sydney. Her dissertation examines the materiality, consumption, and discourses generated around stiffened female undergarments – bodies, busks, farthingales, and bum rolls – to explore how these items of material culture shaped notions of femininity in England between 1560 and 1690. A longer, article-length version of this blog post has appeared in the journal Gender & History, and Sarah also maintains her own blog, where she writes about the process of historical dress reconstruction.

Miscarriage, Auspicious Birth, and the Concept of Tulkuhood in Tibet

by guest contributor Kristin Buhrow

The selection of successors to political and religious leadership roles is determined by different criteria around the world. In the Himalayas, a unique means of determining succession is used: the concept of Tulkuhood. Rooted in Tibetan Vajrayana Buddhism, it is the understanding, widespread in Himalayan communities and especially in Tibet, that gifted leaders with a deep and accurate understanding of Buddhist cosmology and philosophy will, after death, return to their communities and continue as leaders in their next life. Upon the death of a spiritual or community leader, his or her assistants and close followers will go in search of the next incarnation—a child born shortly after the previous leader’s death. When satisfied that they have found the new incarnation, the committee of assistants or followers will officially recognize the child, usually under five years old, as a reincarnation, and the child will receive a rigorous education to prepare for their future duties as a community leader. Such people are given the title “Tulku.”

This system of succession by reincarnation is occasionally mentioned in Western literature, where it is always lent a sense of orientalist mysticism, but it is important to note that in the Tibetan cultural sphere, Tulku succession is the normal, working model of the present day. While Tibetan religious and political leadership structures have undergone great changes in recent years, the system of Tulku succession has been maintained in the religious sector, and for some top political positions it was replaced by democratic election only in 2001. Large monasteries or convents in the Himalayas, which often serve as centers of religion, education, and local governance, are commonly associated with a Tulku who guides religious practice and political decision-making at the monastery and in the surrounding area. In this way, Tulkus serve a life-long (or multi-life-long) appointment to community leadership.

Like any other modern system of succession, Tulku succession is the subject of much emic literature detailing the definition of Tulkuhood, discussing the powers of a Tulku, and otherwise outlining the concept and process. And, as in any other functioning system of succession, some details of the definition of true Tulkuhood are hotly debated. One indication given special attention is that Tulkus go through birth and death under auspicious conditions. These conditions include, but are not limited to: occurring on special days of the Tibetan calendar, occurring according to a prophecy that the Tulku himself made previously, unexplainable sweet smells, music issuing from an unknown source, colored lights appearing in the sky, the dissolving or disappearance of the mind or body at death, flight, or Buddha images visible upon cremation. While these auspicious conditions of birth or death may go unnoticed or unremarked, it is thought that all Tulkus are born and die in such circumstances, whether or not the people around them notice. With this common understanding in mind, we now turn to one particular Tulku, who makes a controversial and easily politicized assertion: that one of his previous incarnations was a miscarried fetus buried beneath his monastery.

***


A map showing the Pemako region (now called Mêdog) within Tibet

In the mountainous region of Pemako, to the east of Lhasa, there is a small town called Powo. According to Tibetan oral history, it was here, in 641 C.E., that the Chinese Princess Wencheng buried her stillborn child. After being engaged to marry the King of Tibet, Princess Wencheng was escorted from the Tang capital of Chang’an (present-day Xi’an) to the Tibetan capital, Lhasa, by the illustrious Tibetan minister Gar. Along the way, Princess Wencheng became pregnant with Minister Gar’s child—a union oral history remembers fondly as the result of a loving relationship that had developed over the course of the journey. Some versions of the history assert that Minister Gar intentionally took Wencheng on the longest route to Lhasa so that she could have the baby along the way, but she suffered a miscarriage in Powo. While there was no monastery in Powo at the time, Wencheng, who was an expert Chinese geomancer, recognized the spiritual power of the location. After carefully selecting the gravesite, Wencheng buried her stillborn child, then left to meet her intended in Lhasa.


Aiqing Pan and Zhao Li, “Wencheng and the Tibetan Envoy,” from He Liyi, The Spring of Butterflies and Other Folk Tales of China’s Minority Peoples.

While some sources assert that Princess Wencheng constructed the Bhakha Monastery on top of the grave with her own hands, another origin story exists. Over one thousand years after Wencheng and Minister Gar passed through, a Buddhist teacher who had risen to prominence came to Powo near the time of his death. This teacher was recognized as a reincarnation of the powerful practitioner Dorje Lingpa, who was himself a reincarnation of Vairotsana, a Buddhist intellectual and influential translator. When this teacher came to Powo, he found the spot where Wencheng had buried her baby and planted his walking stick in the ground, where it miraculously grew into a pine tree. It was then that the Bhakha Monastery was built in the same vicinity. Regardless of the accuracy of these oral histories, it is clear that the presence of the infant grave was a known factor when the monastery was named, as Bhakha (རྦ་ཁ་ or སྦ་ཁ་) means “burial place”. The teacher whose walking stick demonstrated the miracle became known as the first Bhakha Tulku. We are now, as of 2017, in the time of the tenth Bhakha Tulku, who still has influence over the same monastery, as well as related monasteries in Bhutan and the United States.


The Tenth Bhakha Tulku (photo credit: Shambhala Center of New York)

In addition to the renowned former lives mentioned previously, the Bhakha Tulku has also claimed Wencheng and Minister Gar’s child as one of his past lives. While this claim to such a short and seemingly inauspicious life is unusual for a Tulku, it is not an impossibility by the standards of Tulkuhood in the Nyingma sect, to which the Bhakha Tulku belongs. In the Nyingma tradition, as in all other Vajrayana sects, life is considered to begin at conception. By this principle, a stillborn fetus constitutes a life, making it possible that that life was part of a Tulku lineage. According to a description by Tulku Thondup, a translator and researcher for the Buddhayana Foundation and a contemporary Tulku associated with the Dodrupchen Monastery, in Incarnation: The History and Mysticism of the Tulku Tradition of Tibet, the primary role of a “Birth Manifested Body Tulku” is “to serve others […] in any form that leads a being or beings toward happiness, peace, and enlightenment, either directly or indirectly” (13). With this perception of Tulku leadership, the controversy surrounding the idea that a life culminating in miscarriage could be worthy of the title of Tulku is easily understood.

Some participants in this debate assert that a miscarried fetus could not fulfill the requirement of compassionate servitude. Cameron David Warner goes so far as to argue that the emotional pain that miscarriage necessarily brings to the mother renders the life of that child devoid of any positive outcome—its only effect being a mother’s grief. However, one key idea posited by Tulku Thondup is that a Tulku’s life can help the spread of Buddhism and compassion not only directly, but also indirectly. When viewed in the context of history, it is possible that the death of this child, the fact that this life went no further, was a course of events that allowed Buddhism to spread further into Tibet.

At the time of Wencheng’s arrival in Tibet, Buddhism was fairly uncommon in the area; the most common religion was a form of animistic shamanism called Bon. Both Tibetan and Chinese historiography cite Wencheng among the first foreigners to bring Buddhist artifacts into Tibet (see Blondeau and Buffetrille, Slobodnik, Kapstein). It is possible that, had Wencheng arrived in Lhasa with a child, the Tibetan king, Songtsen Gampo, would have reacted negatively, potentially harming Wencheng herself or the larger relationship between the Tibetan Empire and Tang China. While this is merely conjecture, it does present an argument in which a Tulku could have demonstrated compassionate servitude in this unusual form.

Interestingly, while the Bhakha Tulku’s claimed past life has inspired controversy, this particular assertion is not mentioned in official profiles or other monastic materials. The decision to avoid such a sensitive topic in public texts could be an attempt to smooth over controversy within the religious community (where the debate continues). Alternatively, it could be motivated by a desire to avoid politicization by Chinese government officials, who have been altering the historical narrative surrounding Wencheng’s life in Tibet to model a positive relationship between the generous paternal state of China and Tibet, its vassal. While the Bhakha Tulku currently devotes his energies to other issues, this unusual past life serves not only to symbolically connect China and Tibet but also to challenge the traditional notion of what it means to be a benevolent public servant.

Kristin Buhrow is a graduate student at the University of Oxford pursuing a Master’s degree in Tibetan & Himalayan Studies.

Islamic History: Beyond Sunni-Shia

by guest contributor Basma N. Radwan

Consider two vastly different versions of the same course, “Introduction to Islamic Civilization.” In the first, an emphasis on political factors in Islamic group formation supersedes all other considerations. Shias, even before their inception as a distinct, self-identified group, are described as a uniquely political Islamic sect. In such analyses, theological, economic, and ethnic considerations are peripheral, if they figure at all. To make the group intelligible to students predominantly acquainted with the history of the West, an instructor might offer a historical parallel to the French Legitimist tradition. The comparison’s extended implications render Orléanists out of the non-relative Sahābah, Bonapartists out of the Khawarīj, and neo-orientalists out of a fresh generation of young scholars.

In the second, interdisciplinary approaches offer a different take. Beginning with the Covenant of Medina and a discussion of the nature of identity, course instructors can prompt students to ask themselves the following: when reading the history of Islam and its many groups, has modern scholarship excessively privileged objective over subjective identity? Do we identify early Islamic groups through our own contemporary dichotomies? Anyone who opens a newspaper will find it hard to dispute that we do. No doubt, contemporary political events parade the dichotomy as the fundamental operative in the history of the Middle East. The central idea (a well-intentioned one, I think) is an earnest attempt to discern some of the otherwise camouflaged nuances of contemporary politics. So be it—journalists, diplomats, and human rights groups use the dichotomy because it offers intelligible explanations for otherwise complex socio-political phenomena. But how useful is the chasm pedagogically? Even instructors who disagree with the claim that Sunni versus Shia is an overly simplistic heuristic must nonetheless consider what political and strategic purposes such a binary has come to serve.

Still, I would like to suggest that the Sunni versus Shia chasm, though useful in some scholarly endeavors, is of little value as a primary framework for the study of Islamic history. Those who plan to make use of it might consider the following three pedagogical drawbacks. First, privileging the Sunni-Shia dichotomy as the main framework for the study of Islamic history allots students little opportunity to discuss either tradition’s subgroups. Second, because the Sunni-Shia dichotomy is depicted as the product of a politico-theological dispute, economic, tribal, and geographical factors in group formation are easily overlooked. Third, the dichotomy inevitably runs the risk of “modern ideologies masquerading as historical truths.” Depicting a geopolitical rivalry between Iran and Saudi Arabia as the climax of a fourteen-hundred-year religious struggle is not far off from labeling Operation Iraqi Freedom an extension of the medieval Crusades. Such grandiose historical ornamentations are highly caloric, yet offer little nutritional value—no matter how forcefully U.S. presidents, Iranian Ayatollahs, or Saudi monarchs may have tried to persuade us otherwise. So, what is to be done?

The importance of self-identification in the history of Islamic group formation suggests, according to one theory, that historians should reconsider and reexamine sources that provide clues to a group’s subjective identity. A group’s subjective identity is “how [they] conceive themselves to be, whereas [their] objective identity is how [they] might be viewed independently of how [they] view [themselves]” (p. 5). In this sense, it would be historically crude to claim that Ali was Shia. While he is labeled so retrospectively, his subjective identity could not be accounted for in those terms, as “the Sunni-Shia schism only materialized a century [after the prophet’s death]” (p. i). Even the use of proto-Shia or proto-Sunni as indicators of subjective identity proves problematic. Such qualifications are, to borrow one historian’s description of Muslim heresiographies, “simply back-projections intended to validate subsequent political and theological developments” (p. 249).

There is also the question of what happens when a non-dominant group’s identification is rejected by a dominant one. Although a Sufi group may consider itself Sunni or Shia, in its legal affiliation for example, prominent orthodox Sunni or Shia groups may reject its claim. In a historical narrative in which the Sunni-Shia chasm dominates, Sufi groups are characterized by their objective identity, as dictated by the dominant group, as non-Shia/Sunni. By extension, there is the added risk of underappreciating the role of non-dominant groups’ subjective identity in the making of Sunni/Shia orthodoxy. In other words, we are blind to the process wherein Sunni and Shia define themselves not against one another, but rather through other “Others.”

But what about when a group’s subjective identity is neither Sunni nor Shia? The dichotomy, as a heuristic, risks erasing the historical presence of groups whose subjective identity lies entirely outside of it: the early Khawarij, Murji’a, and Ibāḍiyya and, more recently, the Aḥmadiyya and the Nation of Islam (NOI). In these instances, it is the absence of Sunni-Shia elements in their subjective identity that places them in the historical margins, resulting in a narrative dictated by dominant groups.


Cover of New Statesman (20-26 June 2014)

While renewed emphasis on subjective identity in Islamic group formation can soften an otherwise rigid dichotomy, it cannot, on its own, account for differences between objective and subjective identity. Because the Sunni-Shia dichotomy is presented primarily as a politico-theological chasm, the impact of geographical, tribal, and economic factors in group formation is sidelined. The Kharijites (Khawarij), sometimes referred to as the first distinct sect in Islamic history, are one such example. Emerging in the aftermath of the Battle of Siffin (657), the name refers to the members of Ali’s troops who rejected his decision to negotiate with Mu’awiyah’s supporters. Derived from the Arabic khawārij, “seceders,” Kharijite came to signify anyone who “left” Ali’s camp. Most historical narratives attribute the Kharijite secession to a theological dispute—namely, their view that Ali’s acquiescence to negotiate with Mu’awiyah’s supporters was a violation of divine will.

Recent scholarship has signaled a shift from the theological interpretation, suggesting that the Kharijites’ secession is attributable to their Tamim tribal composition. The influence of Tamim tribal affiliation on the origins and development of the Kharijites led one historian to describe the movement as “a movement of democratic ideals that advocated a militant democracy [against an aristocratic Ummayad counterpart]” (p. 34). The group is an example of how theological differences, while important, may at times be compromised, and at others corroborated, by tribal affiliations. The Sunni-Shia heuristic is inclined to overemphasize theological considerations or to treat them as the cause of non-theological divisions. Even within the category of Kharijite itself, a confluence of geographical, tribal, and economic factors eventually led to the creation of further subdivisions. According to one historian, Muslim heresiographers had accounted for four original Kharijite groups: “Azariqa, Najadat, Ibadiyya, and Suffriya” (p. 77). This double divergence is significant as an instance wherein tribal considerations supersede the theological, and political factors are offset by their economic counterparts. The study of such groups, whose origins and development cannot be explained by a simplified dichotomy or modern political terminology alone, promises a more holistic account of the history of Islamic civilization.


Najam Haider, Origins of the Shi’a: Identity, Ritual and Sacred Space in Eighth-Century Kufa (Cambridge: Cambridge University Press, 2011)

The paucity of historical sources may be one explanation for why the Sunni-Shia chasm dominates literature on the history of Islam—it proves convenient to otherwise source-less historians. Recently, the more innovative have found ways to remedy the source scarcity. In Origins of the Shi’a, Najam Haider shows how sources that may appear ahistorical at first glance can in fact elucidate elements of subjective identity, providing new insights into the history of Islamic groups. By drawing on innovations in “textual archaeology… [Haider is able] to identify traditions and views concerning specific ritual practices among jamā’ī-Sunnī, Zaydī, and Imāmī scholars in early eighth-century Kufa (modern-day Najaf)” (p. 1395). Haider’s method is nothing less than revolutionary in its pedagogical implications. For one, his rich and complex narrative, produced by emphasizing the role of ritual as one way to discern the consolidation of a group’s subjective identity, stands in stark contrast to histories crafted exclusively with reference to objective identities. Second, the work shows that when the Sunni-Shia binary framework is employed with reference to anachronistic formulations of politics, historians miss fundamental aspects of group formation. Accordingly, instructors of Islamic civilization should be wary of investigating the fragmentation of the early Islamic community solely with reference to the political or the theological.

In effect, the third pedagogical drawback—the risk of “modern ideologies masquerading as historical truths”—is already minimized when the former two are remedied. Distinguishing objective from subjective identity produces a fuller understanding of how and why dominant and non-dominant groups form, and decidedly dispels a faux-history of dominant group rivalry. Using Sunni versus Shia as the ultimate explanatory signifier in the history of Islam produces a perpetual enmity that is, as one observer put it, “misguided at best and disingenuous at worst.” As a historical explanation, it is reductionist. Used as a social-scientific predictor, it is dangerous.

Sunni and Shia theological differences do have an important place in Islamic history. Of course, this is partially because that history is still being written: contested along the borders of modern nation-states, fought in violent armed struggle, and frequently redefined by geopolitical developments. But this phase of Islamic history is no longer, strictly speaking, “Islamic.” Transpiring in circumstances unintelligible in terms of regional or religious isolation, these events are part and parcel of globalization, neoliberalism, and post-colonial nationalism—anything but the climax of a fourteen-hundred-year theological dispute. There is little warrant to look to eighth-century Kufa for these events’ origins—no more, anyway, than there is for young scholars to expect a rich history of Islamic civilization through the prism of an exaggerated historical enmity.

Basma N. Radwan is a doctoral student in the Department of Middle Eastern, South Asian and African Studies and the Institute for Comparative Literature and Society at Columbia University. Her interests include the history of political thought and the impact of colonialism in the making of modernity. She is currently writing about notions of racial difference in the work of Alexis de Tocqueville.