Think pieces

Between Conservatism and Fascism in Troubled Times: Der Fall Bernhard

by guest contributor Steven McClellan

The historian Fritz K. Ringer claimed that to see the potency of great thinkers’ ideas and to situate their importance properly in their particular social and intellectual milieu, the historian also had to read the minor characters: those second- and third-tier intellectuals who were barometers, and at times even agents, of historical change. One such individual, whom I have frequently encountered in the course of researching my dissertation, was the economist Ludwig Bernhard. As I learned more about him, I was struck by how Bernhard formulated a composite of positions on topics as pressing then as they are today: the mobilization of mass media and public opinion, the role of experts in society, the boundaries of science, academic freedom, free speech, the concentration of wealth and power, and the loss of faith in traditional party politics. How did these come together in his work?

Ludwig Bernhard (1875-1935; Bundesarchiv, Koblenz Nl 3 [Leo Wegener], Nr. 8)

Bernhard grew up in a liberal, middle-class household. His father was a factory owner in Berlin who had converted from Judaism to Protestantism in 1872. As a young man, Bernhard studied in both Munich and Berlin under two heavyweights of the German economic profession: Lujo Brentano and Gustav Schmoller. Bernhard found little common ground with them, however. Bernhard’s friend, Leo Wegener, best captured the tension between the young scholar and his elders. In his Erinnerungen an Professor Ludwig Bernhard (Poznań: 1936, p. 7), Wegener noted that “Schmoller dealt extensively with the past,” while the liberal Brentano, friend of the working class and trade unions, “liked to make demands on the future.” Bernhard, however, “was concerned with the questions of the present.” He came to reject Schmoller’s and Brentano’s respective social and ethical concerns. Bernhard belonged to a new cohort of economists who were friendly to industry and embraced the “value-free” science sought by the likes of Max Weber. They promoted Betriebswirtschaft (business economics), which had heretofore been outside of traditional political economy as then understood in Germany. Doors remained closed to them at most German universities. As one Swiss economist noted in 1899, “appointments to the vacant academical [sic] chairs are made as a rule at the annual meetings of the ‘Verein für Socialpolitik’,” of which Schmoller was chairman from 1890 to 1917. Though an exaggeration, this was the view held by many at the time, given the personal relationship between Schmoller and one of the leading civil servants in the Prussian Ministerium der geistlichen, Unterrichts- und Medizinalangelegenheiten (Department of education, church, and medical affairs), Friedrich Althoff.

Part of Bernhard’s early academic interest focused on the Polish question, particularly the “conflict of nationalities” and Poles living in Prussia. Unlike many other contemporary scholars and commentators on the Polish question, including Max Weber, Bernhard knew the Polish language. In 1904 he was appointed to the newly founded Königliche Akademie in Posen (Poznań). In the year of Althoff’s death (1908), the newly appointed Kultusminister Ludwig Holle created a new professorship at the University of Berlin at the behest of regional administrators from Posen and appointed Bernhard to it. However, Bernhard’s placement in Berlin was made without the traditional consultation of the university’s faculty (Berufungsverfahren).

The Berliner Professorenstreit of 1908-1911 ensued, with Bernhard’s would-be colleagues Adolph Wagner, Max Sering, and Schmoller protesting his appointment. It escalated to the point that Bernhard challenged Sering to a duel over the course lecture schedule for 1910/1911, the former claiming that his ability to lecture freely had been restricted. The affair received widespread coverage in the press, attracting commentary from notables such as Max Weber. At one point, just as the affair seemed about to conclude, Bernhard published an anonymous letter in support of his own case; it was later revealed that he was in fact the author. This further poisoned the well with his colleagues. The Prussian Abgeordnetenhaus (Chamber of Deputies) debated the topic: the conservatives supported Bernhard, while the liberal parties defended the position of the Philosophical Faculty. Ultimately, Bernhard would keep his Berlin post.

Satire of the Professorenstreit

The affair touched partly upon the threat political power posed to the freedom of the Prussian universities to govern themselves—a topic that Bernhard himself extensively addressed in the coming years. It also concerned the rise of the new discipline of “business economics,” then gaining a beachhead at German secondary institutions. Finally, the Professorenstreit focused on Bernhard himself, an opponent of much of what Schmoller and his colleagues in the Verein für Socialpolitik stood for. He proved pro-business and an advocate of the entrepreneur. Bernhard also showed himself to be a social Darwinist, deploying biological and psychological language, as in his analysis of the German pension system in 1912. He decried what he termed the “dreaded bureaucratization of social politics.” Bureaucracy in the form of Bismarck’s social insurance program, Bernhard argued, diminished the individual and blocked innovation, allowing workers to become dependent on the state. Men like Schmoller, though critical at times of the current state of Prussian bureaucracy, still believed in its potential as an enlightened steward that stood above party interests and acted for the general good.

Bernhard could never accept this view. Neither could a man who became Bernhard’s close associate, the former director at Friedrich Krupp AG, Alfred Hugenberg. Hugenberg was himself a former doctoral student of another key member of the Verein für Socialpolitik, Georg Friedrich Knapp. Bernhard was proud to be a part of Hugenberg’s circle, as he saw its members as men of action and practice. In his short study of the circle, he praised their mutual friend Leo Wegener for not being a Fachmann, or expert. Like Bernhard, Hugenberg disliked Germany’s social policy, the welfare state, democracy, and—most importantly—socialism. Hugenberg concluded that rather than appeal directly to policy makers and state bureaucrats through academic research and debate, as Schmoller’s Verein für Socialpolitik had done, greater opportunities lay in the ability to mobilize public opinion through propaganda and the control of mass media. The ‘Hugenberg-Konzern’ would buy up controlling interests in newspapers, press agencies, advertising firms, and film studios (including the famed Universum Film AG, or UfA).

In 1928, to combat the “hate” and “lies” of the “democratic press” (Wegener), Bernhard penned a pamphlet meant to set the record straight on the Hugenberg-Konzern. He presented Hugenberg as a dutiful, stern overlord who cared deeply for his nation and did not simply grow rich off it. Indeed, for Bernhard the Hugenberg-Konzern was the modern equivalent of the famous Raiffeisen-Genossenschaften (cooperatives), providing opportunities for investment and national renewal. Furthermore, Bernhard claimed the Hugenberg-Konzern had saved German public opinion from the clutches of Jewish publishing houses like Mosse and Ullstein.

Both Bernhard and Hugenberg pushed the “stab-in-the-back” myth as the explanation for Germany’s defeat in the First World War. The two also shared a strong belief in fierce individualism and a nationalism tinged with authoritarian tendencies. These views coalesced in their advocacy of an economic dictator to take hold of the reins of the German economy during the tumultuous years of the late Weimar Republic. Bernhard penned studies of Mussolini and fascism. “While an absolute dictatorship is the negation of democracy,” he writes, “a limited, constitutional dictatorship, especially economic dictatorship, is an organ of democracy” (Ludwig Bernhard: Der Diktator und die Wirtschaft. Zurich: 1930, p. 10).

Hugenberg came to see himself as the man to be that economic dictator. In a critique similar to that mounted by Carl Schmitt, Bernhard argued that the parliamentary system had failed Germany. Not only could nothing decisive be accomplished, but with interest-driven parties whose very existence was premised on antagonizing the other parties, stifling action, and even throwing a wrench into the parliamentary system itself, there could be nothing but political disunion. For Bernhard, the socialists and communists were the clear violators here.

Ludwig Bernhard, »Freiheit der Wissenschaft« (Der Tag, April 1933; BA Koblenz, Nl 3 [Leo Wegener], Nr. 8, Blatt 91)

The Nazis proved another story. Hitler himself would be hoisted into power by Hugenberg. Standing alongside him was Bernhard. In April 1933, Bernhard published a brief op-ed entitled “Freiheit der Wissenschaft,” which summarized much of his intellectual career. He began by stating, “Rarely has a revolution endured the freedom of science.” Science is free because it is based on doubt. Revolution, Bernhard writes, depends on eliminating doubt. It must therefore control science. According to Bernhard, this is what the French revolutionaries in 1789 attempted. In his earlier work on this topic, Bernhard had made a similar argument, stating that Meinungsfreiheit (free speech) had been taken away by the revolutionary state just as it had been taken away by the democratic Lügenpresse (“lying press”). Thankfully, he argued, Germany after 1918 preserved one place where the “guardians” of science and the “national tradition” remained—the universities, which had “resisted” the “criminal” organization of the Socialist Party’s Prussian administration. Bernhard, known for his energetic lectures, noted with pride in private letters the growth of the Nazi student movement. In 1926, after having supported the failed Pan-German plan to launch a Putsch (coup d’état) to eliminate the social democratic regime in Prussia, Bernhard spoke to his students, calling on the youth to save the nation. Now, it was time for the “national power” of the “national movement” to be mobilized. And in this task, Bernhard concluded, Adolf Hitler, the “artist,” could make his great “masterpiece.”

Ludwig Bernhard died in 1935 and therefore never saw Hitler’s completed picture of a ruined Germany. An economic nationalist, individualist, and advocate of authoritarian solutions, who both rebelled against experts and defended the freedom of science, Bernhard remains a telling example of how personal history, institutional contexts, and the perception of a heightened sense of cultural and political crisis can combine in dangerous ways, not least at the second tier of intellectual and institutional life.

Steven McClellan is a PhD Candidate in History at the University of Toronto. He is currently writing his dissertation, a history of the rise and fall, then rebirth of the Verein für Sozialpolitik between 1872 and 1955.

“He shall not haue so much as a buske-point from thee”: Examining Notions of Gender through the Lens of Material Culture

by guest contributor Sarah Bendall

Our everyday lives are surrounded by objects. Some are mundane tools that help us with daily tasks, others are sentimental items that carry emotions and memories, and others again are used to display achievements, wealth, and social status. Importantly, many of these objects are gendered, and their continued use in various ways helps to mould and solidify ideas, particularly gender norms.

In the early modern period, two objects of dress that shaped and reinforced gender norms were the busk, a long piece of wood, metal, whalebone, or horn that was placed into a channel in the front of the bodies or stays (corsets), and the busk-point, a small piece of ribbon that secured the busk in place. During the sixteenth and seventeenth centuries these accessories to female dress helped not only to shape expressions of love and sexual desire, but also to set the acceptable gendered boundaries of those expressions.

Busks were practical objects that existed to keep the female posture erect, to emphasize the fullness of the breasts, and to keep the stomach flat. These uses derived from their function in European court dress, which complemented elite ideas of femininity, most notably the good breeding reflected in an upright posture and controlled bodily movement. However, during the seventeenth century, and increasingly over the eighteenth and nineteenth centuries, lovers not only charged busks and busk-points with erotic connotations but also saw them as tokens of affection. Thus, they became part of the complex social and gendered performance of courtship and marriage.

The sheer number of surviving busks containing inscriptions associated with love indicates that busk giving during courtship must have been a normal and commonly practised act in early modern England and France. A surviving English wooden busk in the Victoria and Albert Museum bears symbolic engravings, the date of gifting, 1675, and a Biblical reference. On the other side of the busk is an inscription referencing the Biblical Isaac’s love for his wife, which reads: “WONC A QVSHON I WAS ASKED WHICH MAD ME RETVRN THESE ANSVRS THAT ISAAC LOVFED RABEKAH HIS WIFE AND WHY MAY NOT I LOVE FRANSYS”.

English wooden stay busk, c. 1675. Victoria and Albert Museum, London, accession number W.56-1929.

Another inscription, on a seventeenth-century French busk, exclaims “Until Goodbye, My Fire is Pure, Love is United”. Three engravings correspond to each line: a tear falling onto a barren field, two hearts appearing in that field, and finally a house that the couple would share in marriage, with two hearts floating above it.

Inscriptions found on other surviving busks go beyond speaking on behalf of the lover and actually speak on behalf of the busks themselves, giving these inanimate objects voices of their own. Another seventeenth-century French busk, engraved with a man’s portrait, declares:

“He enjoys sweet sighs, this lover

Who would very much like to take my place”

This inscription shows the busk’s anthropomorphized awareness of the prized place that it held so close to the female body. John Marston’s The scourge of villanie Three bookes of satyres (1598, p. F6r-v) expressed similar sentiments with the character Saturio wishing himself his lover’s busk so that he “might sweetly lie, and softly luske Betweene her pappes, then must he haue an eye At eyther end, that freely might discry Both hills [breasts] and dales [groin].”

Although the busk’s intimate association with the female body was exploited in both erotic literature and bawdy jokes, the busk itself also took on phallic connotations. The narrator of Alexander Pope’s Rape of the Lock (1712, p. 12) describes the Baron with an ‘altar’ built by love. On this altar “lay the Sword-knot Sylvia‘s Hands had sown, With Flavia‘s Busk that oft had rapp’d his own …” Here “his own [busk]” evokes his erection, which Flavia’s busk had often brushed against during their lovemaking. Therefore, in the context of gift giving, the busk also acted as an extension of the male lover: it was an expression of his male sexual desire in its most powerful and virile form that was then worn privately on the female body. Early modern masculinity was a competitive performance, and in a society where social structure and stability centred on the patriarchal household, young men found courtship possibly one of the most important events of their lives, one which tested their character and their masculine ability to woo and marry. In this context, the act of giving a busk was a masculine act, which asserted not only a young man’s prowess but also his ability to secure a respectable place in society with a household.

Yet the inscriptions on surviving busks and the literary sources that describe them often do not account for the female experience of courtship and marriage. Although women usually took on the submissive role in gift giving, being the recipient of love tokens such as busks did not render them completely passive. Courtship encouraged female responses, as it created a discursive space in which women were free to express themselves. Women could choose to accept or reject a potential suitor’s gift, giving them significant agency in the process of courtship. Within the gift-giving framework, choosing to place a masculine sexual token so close to her body was itself a very intimate female gesture. Yet a woman’s desire for a male suitor could also take on much more active expressions, as various sources describe women giving men their busk-points. When the character Jane in Thomas Dekker’s The Shoemaker’s Holiday (1600) discovers that the husband she thought dead is still alive, she abandons her new beau, who tells her that “he [her old husband] shall not haue so much as a buske-point from thee”, alluding to women’s habit of giving busk-points as signs of affection and promise. John Marston’s The Malcontent (1603) describes a similar situation when the Maquerelle warns her ladies, “look to your busk-points, if not chastely, yet charily: be sure the door be bolted.” In effect, she is warning these girls to keep their doors shut and not give their busk-points away to lovers as keepsakes.

To some, the expression of female sexual desire by such means may seem oddly out of place in a society where strict cultural and social practices policed women’s agency. Indeed, discussions of busks and busk-points provoked a rich dialogue concerning femininity and gender in early modern England. Throughout the sixteenth and seventeenth centuries, bodies (corsets) elongated the torso, until the part of the bodie that contained the busk reached the lady’s “Honor” (Randle Holme, The Academy of Armory and Blazon…, p. 94).[1] In other words, the lowest part of the busk, which contained the busk-point, sat over a woman’s sexual organs, where chastity determined her honour. The politics involved in female honour and busk-points are expressed in the previously discussed scene from The Malcontent: busk-points functioned as both gifts and sexual tokens, and this is highlighted by the Maquerelle’s plea for the girls to look to them ‘chastely’.

As a result of the intimate position of the busk and busk-point on the female body, these objects were frequently discussed in relation to women’s sexuality and their sexual honour. Some moralising commentaries blamed busks for concealing illegitimate pregnancies and causing abortions. Others associated busks with prostitutes, rendering them a key part of the profession’s contraceptive arsenal. Yet popular literature and the inscriptions on the busks themselves rarely depict the women who wore them as ‘whores’. Instead, the conflicting ideas of busks and busk-points found in sources from this period mirror the contradictory ideas and fears that early moderns held about women’s sexuality. When used in a sexual context outside of marriage, these objects were controversial, as they were perceived as aiding unmarried women’s unacceptably forward expressions of sexual desire. However, receiving busks and giving away busk-points in the context of courtship and marriage was an acceptable way for a woman to express her desire, precisely because it occurred in a context that society and social norms could regulate, and because this desire would eventually be consummated within the acceptable confines of marriage.

Busks and busk-points are just two examples of the ways in which the examination of material culture can help the historian to tap into historical ideas of femininity and masculinity, and the ways in which notions of gender were imbued in, circulated through, and expressed by the use of objects in everyday life in early modern Europe. Although controversial at times, busks and busk-points were items of clothing that aided widely accepted expressions of male and female sexual desire through the acts of giving, receiving, and wearing. Ultimately, discussions of these objects and their varied meanings highlight not only the ways in which sexuality occupied a precarious space in early modern England, but also how material culture such as clothing was an essential part of regulating gender norms.
[1] Holme, The Academy of Armory and Blazon, p. 3.

Sarah A. Bendall is a PhD candidate in the Department of History at the University of Sydney. Her dissertation examines the materiality, consumption and discourses generated around stiffened female undergarments – bodies, busks, farthingales and bum rolls – to explore how these items of material culture shaped notions of femininity in England between 1560 and 1690. A longer article-length version of this blog post has appeared in the journal Gender & History, and Sarah also maintains her own blog where she writes about the process of historical dress reconstruction.

Miscarriage, Auspicious Birth, and the Concept of Tulkuhood in Tibet

by guest contributor Kristin Buhrow

The selection of successors to political and religious leadership roles is determined by different criteria around the world. In the Himalayas, a unique form of determining succession is used: the concept of Tulkuhood. Drawing on Tibetan Vajrayana Buddhism, Himalayan communities, especially in Tibet, operate with the understanding that gifted leaders with a deep and accurate understanding of Buddhist cosmology and philosophy will, after death, return to their communities and continue as leaders in their next life. Upon the death of a spiritual or community leader, his or her assistants and close followers will go in search of the next incarnation—a child born shortly after the previous leader’s death. When satisfied that they have found the new incarnation, the committee of assistants or followers will officially recognize the child, usually under five years old, as a reincarnation, and the child will receive a rigorous education to prepare for their future duties as a community leader. These people are given the title “Tulku.”

This system of succession by reincarnation is occasionally mentioned in Western literature, and is always lent a sense of orientalist mysticism, but it is important to note that in the Tibetan cultural sphere, Tulku succession is the normal, working model of the present day. While Tibetan religious and political leadership structures have undergone great changes in recent years, the system of Tulku succession has been maintained in the religious sector, and for some top political positions was only replaced by democratic election in 2001. Large monasteries or convents in the Himalayas, which often serve as centers of religion, education, and local governance, are commonly associated with a Tulku who guides religious practice and political decision making at the monastery and in the surrounding area. In this way, Tulkus serve a life-long (or multi-life-long) appointment to community leadership.

Like any other modern system of succession, Tulku succession is the subject of much emic literature detailing the definition of Tulkuhood, discussing the powers of a Tulku, and otherwise outlining the concept and process. And, as in any other functioning system of succession, some details of the definition of true Tulkuhood are hotly debated. One indication given special attention is that Tulkus go through birth and death under auspicious conditions. These conditions include but are not limited to: occurring on special days of the Tibetan calendar, occurring according to a prophecy that the Tulku himself made previously, unexplainable sweet smells, music issuing from an unknown source, colored lights appearing in the sky, dissolving or disappearance of the mind or body at death, flight, or Buddha images visible upon cremation. While these auspicious conditions of birth or death may pass unremarked or unnoticed, it is thought that all Tulkus are born and die in such circumstances, whether or not the people around them notice. With this common understanding in mind, we now turn to one particular Tulku, who makes a controversial and easily politicized assertion: that one of his previous incarnations is that of a miscarried fetus buried beneath his monastery.

***

A map showing the Pemako region (now called Mêdog) within Tibet

In the mountainous region of Pemako, to the east of Lhasa, there is a small town called Powo. According to Tibetan oral history, it was here, in 641 C.E., that the Chinese Princess Wencheng buried her stillborn child. After being engaged to marry the King of Tibet, Princess Wencheng was escorted from the Tang capital of Chang’an (present-day Xi’an) to the Tibetan capital, Lhasa, by the illustrious Tibetan minister Gar. Along the way, Princess Wencheng became pregnant with Minister Gar’s child—a union oral history remembers fondly as the result of a loving relationship that had developed over the course of the journey. Some versions of the history assert that Minister Gar intentionally took Wencheng on the longest route to Lhasa so that she could have the baby along the way, but she suffered a miscarriage in Powo. While there was no monastery in Powo at the time, Wencheng, who was an expert Chinese geomancer, recognized the spiritual power of the location. After carefully selecting the gravesite, Wencheng buried her stillborn child, then left to meet her intended in Lhasa.

Aiqing Pan and Zhao Li, “Wencheng and the Tibetan Envoy,” from He Liyi, The Spring of Butterflies and Other Folk Tales of China’s Minority Peoples.

While some sources assert that Princess Wencheng constructed the Bhakha Monastery on top of the grave with her own hands, another origin story exists. Over one thousand years after Wencheng and Minister Gar passed through, a Buddhist teacher who had risen to prominence came to Powo near the time of his death. This teacher was recognized as a reincarnation of the powerful practitioner Dorje Lingpa, who was himself a reincarnation of Vairotsana, a Buddhist intellectual and influential translator. When this teacher came to Powo, he found the same spot where Wencheng had buried her baby, and he planted his walking stick in the ground, which miraculously grew into a pine tree. It was then that the Bhakha Monastery was built in the same vicinity. Regardless of the accuracy of these oral histories, it is clear that the presence of the infant grave was a known factor when the monastery was named, as Bhakha (རྦ་ཁ་ or སྦ་ཁ་) means “burial place”. The teacher whose walking stick demonstrated the miracle became known as the first Bhakha Tulku. We are now, as of 2017, in the time of the tenth Bhakha Tulku, who still has influence over the same monastery, as well as related monasteries in Bhutan and the United States.

The Tenth Bhakha Tulku (photo credit: Shambhala Center of New York)

In addition to the list of renowned former lives mentioned previously, the Bhakha Tulku has also claimed Wencheng and Minister Gar’s child as one of his past lives. While this claim to such a short and seemingly inauspicious life is unusual for a Tulku, it is not an impossibility according to the standards of Tulkuhood in the Nyingma sect, to which the Bhakha Tulku belongs. In the Nyingma tradition, and all other Vajrayana sects, life is considered to begin at conception. By this principle, a stillborn fetus constitutes a life, making it possible that that life could have been part of a Tulku lineage. According to a description by Tulku Thondup, a translator and researcher for the Buddhayana Foundation and contemporary Tulku associated with the Dodrupchen Monastery, in Incarnation: The History and Mysticism of the Tulku Tradition of Tibet, the primary role of a “Birth Manifested Body Tulku” is “to serve others […] in any form that leads a being or beings toward happiness, peace, and enlightenment, either directly or indirectly” (13). With this perception of Tulku leadership, the controversy surrounding the idea that a life culminating in miscarriage could be worthy of the title of Tulku is easily understood.

Some individuals participating in this debate assert that a miscarried fetus could not fulfill the requirement of compassionate servitude. Cameron David Warner goes so far as to argue that the emotional pain that miscarriage necessarily brings to the mother renders the life of that child devoid of any positive outcome—its only effect being a mother’s grief. However, one key idea posited by Tulku Thondup is that a Tulku’s life can help the spread of Buddhism and compassion not only directly, but also indirectly. When viewed in the context of history, it is possible that the death of this child, the fact that this life went no further, was a course of events that allowed Buddhism to spread further into Tibet.

At the time of Wencheng’s arrival in Tibet, Buddhism was fairly uncommon in the area, the most common religion being a form of animistic shamanism called Bon. Both Tibetan and Chinese historiography cite Wencheng among the first foreigners to bring Buddhist artifacts into Tibet (see Blondeau and Buffetrille, Slobodnik, Kapstein). It is possible that if Wencheng had arrived in Lhasa with a child, the Tibetan king, Songtsen Gampo, would have reacted negatively, potentially harming Wencheng herself or the larger relationship between the Tibetan Empire and Tang China. While this is merely conjecture, it does present an argument in which a Tulku could have demonstrated compassionate servitude in this unusual form.

Interestingly, while the Bhakha Tulku’s claimed past life has inspired controversy, this particular assertion is not mentioned in official profiles or other monastic materials. This decision to avoid such a sensitive topic in public texts could be an attempt to smooth over controversy within the religious community (where the debate continues). Alternatively, this choice could be motivated by a desire to avoid politicization by Chinese government officials, who have been altering the historical narrative surrounding Wencheng’s life in Tibet to model a positive relationship between the generous paternal state of China and Tibet, its vassal. While the Bhakha Tulku currently devotes his energies to other issues, this unusual past life serves not only to symbolically connect China and Tibet, but also challenges the traditional notion of what it means to be a benevolent public servant.

Kristin Buhrow is a graduate student at the University of Oxford pursuing a Master’s degree in Tibetan & Himalayan Studies.

Islamic History: Beyond Sunni-Shia

by guest contributor Basma N. Radwan

Consider two vastly different versions of the same course, “Introduction to Islamic Civilization.” In the first, an emphasis on political factors in Islamic group formation supersedes all other considerations. Shias, even before their inception as a distinct, self-identified group, are described as a uniquely political Islamic sect. In such analyses, theological, economic, and ethnic considerations are peripheral, if they constitute factors at all. To make the group intelligible to students predominantly acquainted with the history of the West, an instructor might offer a historical parallel to the French Legitimist tradition. The comparison’s extended implications render Orléanists out of the nonrelative Sahābah, Bonapartists out of the Khawarīj, and neo-orientalists out of a fresh generation of young scholars.

In the second, interdisciplinary approaches can offer a different take. Beginning with the Covenant of Medina and a discussion of the nature of identity, course instructors can prompt students to ask themselves the following: when reading the history of Islam and its many groups, has modern scholarship excessively privileged objective over subjective identity? Do we identify early Islamic groups through our own contemporary dichotomies? Anyone who opens a newspaper will find it hard to dispute that we do. No doubt, contemporary political events parade the dichotomy as the fundamental operative in the history of the Middle East. The central idea (a well-intentioned one, I think) is an earnest attempt to discern some of the otherwise camouflaged nuances of contemporary politics. So be it—journalists, diplomats, and human rights groups use the dichotomy because it offers intelligible explanations for otherwise complex socio-political phenomena. But how useful is the chasm pedagogically? Even instructors who disagree with the claim that Sunni versus Shia is an overly simplistic heuristic must, nonetheless, consider what political and strategic purposes such a binary has come to serve.

Still, I would like to suggest that the Sunni versus Shia chasm, though useful in some scholarly endeavors, is of little value as a primary framework for the study of Islamic history. Those who plan to make use of it might consider the three following pedagogical drawbacks. First, privileging the Sunni-Shia dichotomy as the main framework for the study of Islamic history allots students little opportunity to discuss either tradition’s subgroups. Second, because the Sunni-Shia dichotomy is depicted as the product of a politico-theological dispute, economic, tribal, and geographical factors in group formation are easily overlooked. Third, the dichotomy inevitably runs the risk of “modern ideologies masquerading as historical truths.” Depicting a geopolitical rivalry between Iran and Saudi Arabia as the climax of a fourteen-hundred-year religious struggle is not far off from labeling Operation Iraqi Freedom as an extension of medieval crusades. Such grandiose historical ornamentations are highly caloric, yet offer little nutritional value—no matter how forcefully U.S. presidents, Iranian Ayatollahs, or Saudi Monarchs may have tried to persuade otherwise. So, what is to be done?

The importance of self-identification in the history of Islamic group formation suggests, according to one theory, that historians should reconsider and reexamine sources that provide clues to a group’s subjective identity. A group’s subjective identity is “how [they] conceive themselves to be, whereas [their] objective identity is how [they] might be viewed independently of how [they] view [themselves]” (p. 5). In this sense, it would be historically crude to claim that Ali was Shia. While he is labeled so retrospectively, his subjective identity could not be accounted for in those terms, as “the Sunni-Shia schism only materialized a century [after the prophet’s death]” (p. i). Even the use of proto-Shia or proto-Sunni as indicators of subjective identity proves problematic. These kinds of qualifications are, to borrow one historian’s description of Muslim heresiographies, “simply back-projections intended to validate subsequent political and theological developments” (p. 249).

There is also the question of what happens when a non-dominant group’s identification is rejected by a dominant one. Although a Sufi group may consider itself Sunni or Shia (in its legal affiliation, for example), prominent orthodox Sunni or Shia groups may reject its claim. In a historical narrative in which the Sunni-Shia chasm dominates, Sufi groups are characterized by their objective identity, as dictated by the dominant group, as non-Shia/Sunni. By extension, there is the added risk of underappreciating the role of non-dominant groups’ subjective identity in the making of Sunni/Shia orthodoxy. In other words, we are blind to the process wherein Sunni and Shia define themselves not against one another, but rather through other “Others.”

But what about when a group’s subjective identity is non-Shia/Sunni? This dichotomy, as a heuristic, risks erasing the historical presence of groups whose subjective identity lies entirely outside of it: the early Khawarij, Murji’a, and Ibāḍiyya and, more recently, the Aḥmadiyya and the NOI. In these instances, it is the absence of Sunni-Shia elements in their subjective identity that places them in the historical margins, resulting in a narrative dictated by dominant groups.

Cover of New Statesman (20-26 June 2014)

While renewed emphasis on subjective identity in Islamic group formation can soften an otherwise rigid dichotomy, it cannot, on its own, provide the reasons for differences between objective and subjective identity. Because the Sunni/Shia dichotomy is presented primarily as a politico-theological chasm, the impact of geographical, tribal, and economic factors in group formation is sidelined. The Kharijites (Khawarij), sometimes referred to as the first distinct sect in Islamic history, are one such example. The name refers to the members of Ali’s troops who, in the aftermath of the Battle of Siffin (657), rejected his decision to negotiate with Mu’awiyah’s supporters. Derived from the Arabic khawārij, “seceders,” Kharijite came to signify anyone who “left” Ali’s camp. Most historical narratives attribute the Kharijite secession to a theological dispute—namely their view that Ali’s acquiescence to negotiate with Mu’awiyah’s supporters was a violation of divine will.

Recent scholarship has signaled a shift from the theological interpretation, suggesting that the Kharijites’ secession is attributable to their Tamim tribal composition. The influence of Tamim tribal affiliation in the origins and development of the Kharijites led one historian to describe the movement as “a movement of democratic ideals that advocated a militant democracy [against an aristocratic Ummayad counterpart]” (p. 34). The group is an example of how theological differences, while important, may at times be compromised, and at others corroborated, by tribal affiliations. The Sunni-Shia heuristic is inclined to overemphasize theological considerations or to attribute them as the cause of non-theological divisions. Even within the category of Kharijite itself, a confluence of geographical, tribal, and economic factors eventually led to the creation of further subdivisions. According to one historian, Muslim heresiographers had accounted for four original Kharijite groups, “Azariqa, Najadat, Ibadiyya, and Suffriya” (p. 77). This double divergence is significant as an instance wherein tribal considerations supersede the theological, and political factors are offset by their economic counterparts. The study of such groups, whose origins and development cannot be expounded by a simplified dichotomy or modern political terminology on their own, promises a more holistic account of the history of Islamic civilization.

Najam Haider, Origins of the Shi’a: Identity, Ritual and Sacred Space in Eighth-Century Kufa (Cambridge: Cambridge University Press, 2011)

The paucity of historical sources may be one explanation for why the Sunni-Shia chasm dominates literature on the history of Islam—it proves convenient to otherwise source-less historians. Recently, the more innovative have found ways to remedy the source scarcity. In Origins of the Shi’a, Najam Haider shows how sources that may appear ahistorical at first glance can in fact elucidate elements of subjective identity—providing new insights into the history of Islamic groups. By drawing from innovations in “textual archaeology… [Haider is able] to identify traditions and views concerning specific ritual practices among jamā’ī-Sunnī, Zaydī, and Imāmī scholars in the early eighth century Kufa (modern day Najaf)” (p. 1395). Haider’s method is nothing less than revolutionary in its pedagogical implications. For one, his rich and complex narrative, produced by emphasizing the role of ritual as one way to discern the consolidation of a group’s subjective identity, stands in stark contrast to histories crafted exclusively with reference to objective identities. Second, the work shows that when the Sunni-Shia binary framework is employed with reference to anachronistic formulations of politics, historians miss fundamental aspects of group formation. Accordingly, instructors of Islamic Civilization should be wary of investigating the fragmentation of the early Islamic community in sole reference to the political or the theological.

In effect, the third pedagogical drawback—the risk of “modern ideologies masquerading as historical truths”—is already minimized when the former two are remedied. Distinguishing objective from subjective identity produces a fuller understanding of how and why dominant and non-dominant groups form, and decidedly dispels a faux-history of dominant group rivalry. Using Sunni v. Shia as the ultimate explanatory signifier in the history of Islam produces a perpetual enmity that is, as one observer put it, “misguided at best and disingenuous at worst.” As a historical explanation, it is reductionist. Used as a social scientific predictor, it is dangerous.

Sunni and Shia theological differences do have an important place in Islamic history. Of course, this is partially because this history is still being written: contested along the borders of modern nation-states, fought in violent armed struggle, and frequently redefined by geo-political developments. But this phase of Islamic history is no longer, strictly speaking, “Islamic.” Transpiring in circumstances unintelligible in terms of regional or religious isolation, these events are part and parcel of globalization, neoliberalism, and post-colonial nationalism—anything but the climax of a fourteen-hundred-year theological dispute. There is little warrant to look to eighth-century Kufa for these events’ origins—no more, anyway, than there is for young scholars to expect a rich history of Islamic civilization through the prism of an exaggerated historical enmity.

Basma N. Radwan is a doctoral student in the Department of Middle Eastern, South Asian and African Studies and the Institute for Comparative Literature and Society at Columbia University. Her interests include the history of political thought and the impact of colonialism in the making of modernity. She is currently writing about notions of racial difference in the work of Alexis de Tocqueville.

Writing the History of University Coeducation

by Emily Rutherford

When Yung In Chae told me that she was going to Nancy Malkiel’s book talk, I begged her to cover it for the blog. After all, my dissertation is a new, comprehensive history of coeducation in British universities, and as I was writing my prospectus Malkiel helped to put coeducation back into historians’ headlines. As Yung In’s account shows, Malkiel’s weighty tome restores some important things that have been missing in previous histories of university coeducation: attention to the intricacy of the politics through which institutions negotiated coeducation (and an emphasis on politics as a series of negotiations between individuals, often obeying only the logic of unintended consequences), and attention to the men who were already part of single-sex institutions and considered whether to admit women to them. Histories of coeducation usually focus on the ideas and experiences of women who sought access to the institutions, whether as teachers or as students. But that tends to imply a binary where women were progressives who supported coeducation and men were reactionaries who opposed it. As Malkiel shows—and as we might know from thinking about other questions of gender and politics like women’s suffrage—it just doesn’t work like that.

Malkiel’s book strikes me as a compelling history of gender relations at a specific set of universities at a particular moment—the 1960s and ’70s, which we all might point to as a key period in which gender norms and relations between men and women came under pressure on both sides of the Atlantic. But we should be wary, I think, of regarding it as the history of coeducation (Malkiel isn’t suggesting this, but I think that’s how some people might read it—not least when glancing at the book’s cover and seeing the subtitle, “The Struggle for Coeducation”). Malkiel’s story is an Ivy League one, and I’m not sure that it can help us to understand what coeducation looked like at less selective universities whose internal politics were less dominated by admissions policy; at universities in other countries (like the UK) which existed in nationally specific contexts for institutional structure and cultural norms surrounding gender; or in terms of questions other than the co-residence of students. Some of Malkiel’s cases are unusual universities like Princeton and Dartmouth, which admitted women very late in the game, but others are about the problem of co-residency: merging men’s and women’s institutions like Harvard and Radcliffe that already essentially shared a campus and many resources and administrative structures, or gender-integrating the Oxford and Cambridge colleges, which meant that men and women students would live alongside each other. But at these institutions, as at other, less elite universities, student life was already significantly coeducational: men and women had some, though not all, teaching in common; they joined mixed extracurricular organizations; they socialized together—though this was limited by curfews and parietal rules, which in 1960s style became the focus of student activism around gender relations. Women teachers and administrators faced other, historically specific challenges about how to be taken seriously, or how to balance a career and marriage. Those who opposed coeducation and sought to support single-sex institutions did so—as Malkiel shows—in ways specific to the political and social context of the 1960s.

But my dissertation research suggests that lasting arguments about co-residency that persisted into the 1960s—and ultimately resulted in the coeducation of hold-out institutions like Princeton and Dartmouth—were the product of an earlier series of conflicts in universities over coeducation and gender relations more broadly, whose unsatisfactory resolution in some institutions set up the conflicts Malkiel discusses. Let’s take the British case, which is not perfectly parallel to the US case but is the focus of my research. My dissertation starts in the 1860s, when there were nine universities in Great Britain but none admitted women. The university sector, like the middle class, exploded in the nineteenth century, and as this happened, the wives, sisters, and daughters of a newly professionalized class of university teachers campaigned for greater educational opportunities for middle-class women. In the late 1870s, Bristol and London became the first universities to admit women to degrees, and activists founded the first women’s colleges at Oxford and Cambridge, though they were not yet recognized by the universities. By 1930, there were seventeen universities in Britain as well as many colleges, all except Cambridge granting women degrees. Cambridge would not admit women to the BA until 1948, and as Malkiel shows the Oxford and Cambridge colleges wouldn’t coeducate until the 1970s. Indeed, higher education did not become a mass system as in the US until the period following the 1963 Robbins Report, and national numbers of women undergraduates did not equal men until the higher education system was restructured in 1992. But it’s already possible to see that a definition of coeducation focused not on co-residency but on women’s admission to the BA nationally, and on the first women on university campuses—as teachers, as students, and also as servants or as the family members or friends of men academics—changes the periodization of the story of coeducation, placing the focal point somewhere around the turn of the twentieth century and taking into account the social and cultural changes wrought by significant factors within British history such as massive urbanization or the First World War. Of course, it’s not just about the BA, and the cultural aspects of this shift in norms surrounding gender relations in Britain are an important part of the story—as middle-class men and women (particularly young men and women) found themselves confronting the new social experience of being friends with each other, an experience which many found perplexing and awkward, but which the more liberal sought out regardless of whether they were educated at the same institutions or whether there were curfews and other regulations governing the ways they could meet each other. University administrators had to confront the same questions among their own generation, while also making decisions about institutional priorities: should accommodation be built for women students? should it look different from the accommodation offered to men students? should women be allowed into the library or laboratory or student union? should they be renovated to include women’s restrooms? how would these projects be funded? would philanthropists disgruntled by change pull their donations? These were questions universities faced in the 1920s as much as in the 1960s—or today.

I’m still early in my research, but one focus of my inquiries is those who opposed coeducation. They haven’t been given as much attention as those who fought for it—but what did they perceive to be the stakes of the question? What did they think they stood to lose? Who were they, and how did they make their claims? I already know that they included both men and women, and that while many of them were garden-variety small-c conservatives, not all of them were. I also know that for many, homoeroticism played an important role in how they explained the distinctive value of single-sex education. By 1920, the battle over women being admitted to the BA was over at all British institutions except Cambridge, but these opponents put up a strong fight. They help to show that coeducation wasn’t foreordained in a teleology of progress, but was the outcome of certain compromises and negotiations between factions, whose precise workings varied institutionally. Yet the opponents also were in many respects successful. After their institutions admitted women to the BA, they carved out spaces in which particular forms of single-sex sociability could continue. The Oxbridge collegiate system enabled this, but it also happened through single-sex student organizations (and persists, it might be noted, in universities that today have vibrant fraternity and sorority cultures), many of which were sponsored and fostered by faculty, alumni, or donors who had a stake in the preservation of single-sex spaces. Coeducation is often viewed as a process that ended when women were admitted to the BA. But even after this formal constitutional change, single-sex spaces persisted: colleges, residence halls, extracurricular organizations, informal bars to women’s academic employment, and personal choices about whom teachers and students sought to work, study, and socialize alongside. Understanding how this happened in the period from, say, 1860 to 1945 helps to explain the causes and conditions of the period on which Malkiel’s work focuses, whose origins were as much in the unresolved conflicts of the earlier period of coeducation as they were in the gender and sexuality foment of the 1960s. I suspect, too, that there may be longer-lasting legacies, which continue to structure the politics and culture of gender in the universities in which we work today.

Evolution Made Easy: Henry Balfour, Pitt Rivers, and the Evolution of Art

by guest contributor Laurel Waycott

In 1893, Henry Balfour, curator of the Pitt Rivers Museum in Oxford, UK, conducted an experiment. He traced a drawing of a snail crawling over a twig and passed it to another person, whom he instructed to copy the drawing as accurately as possible with pen and paper. This second drawing was then passed to the next participant, with Balfour’s original drawing removed, and so on down the line. Balfour, in essence, constructed a nineteenth-century version of the game of telephone, with a piece of gastropodic visual art taking the place of whispered phrases. As in the case of the children’s game, what began as a relatively faithful echo of what came before resulted in a bizarre, near-unrecognizable transmutation.

Plate I. Henry Balfour, The Evolution of Decorative Art (New York: Macmillan & Co., 1893).

In the series of drawings, Balfour’s pastoral snail morphed, drawing by drawing, into a stylized bird—the snail’s eyestalks became the forked tail of the bird, while the spiral shell became, in Balfour’s words, “an unwieldy and unnecessary wart upon the, shall we call them, ‘trousers’ which were once the branching end of the twig” (28). Snails on twigs, birds in trousers—just what, exactly, are we to make of Balfour’s intentions for his experiment? What was Balfour trying to prove?

Balfour’s game of visual telephone, at its heart, was an attempt to understand how ornamental forms could change over time, using the logic of biological evolution. The results were published in a book, The Evolution of Decorative Art, which was largely devoted to the study of so-called “primitive” arts from the Pacific. The reason that Balfour had to rely on his constructed game and experimental results, rather than original samples of the “savage” art, was that he lacked a complete series necessary for illustrating his theory—he was forced to create one for his purposes. Balfour’s drawing experiment was inspired by a technique developed by General Pitt Rivers himself, whose collections formed the foundation of the museum. In 1875, Pitt Rivers—then known as Augustus Henry Lane Fox—delivered a lecture titled “The Evolution of Culture,” in which he argued that shifting forms of artifacts, from firearms to poetry, were in fact culminations of many small changes; and that the historical development of artifacts could be reconstructed by observing these minute changes. From this, Pitt Rivers devised a scheme of museum organization that arranged objects in genealogical fashion—best illustrated by his famous display of weapons used by the indigenous people of Australia.

Plate III. Augustus Henry Lane-Fox Pitt-Rivers, The Evolution of Culture, and Other Essays, ed. John Linton Myres (Oxford, Clarendon Press, 1906).

Here, Pitt Rivers arranged the weapons in a series of changing relationships radiating out from a central object, the “simple cylindrical stick” (34). In Pitt Rivers’ system, this central object was the most “primitive” and “essential” object, from which numerous small modifications could be made. Elongate the stick, and eventually one arrived at a lance; add a bend, and it slowly formed into a boomerang. While he acknowledged that these specimens were contemporary and not ancient, the organization implied a temporal relationship between the objects. This same logic was extended to understandings of human groups at the turn of the twentieth century. So-called “primitive” societies like the indigenous groups of the Pacific were considered “survivals” from the past, physically present but temporally removed from those living around them (37). The drawing game, developed by Pitt Rivers in 1884, served as a different way to manipulate time: by speeding up the process of cultural evolution, researchers could mimic evolution’s slow process of change over time in the span of just a few minutes. If the fruit fly’s rapid reproductive cycle made it an ideal model organism for studying Mendelian heredity, the drawing game sought to make cultural change an object of the laboratory.

It is important to note the capacious, wide-ranging definitions of “evolution” in play by the end of the nineteenth century. Evolution could refer to the large-scale, linear development of entire human or animal groups, but it could also refer to Darwinian natural selection. Balfour drew on both definitions, and developed tools to help him apply evolutionary theory directly to studies of decorative art. “Degeneration,” the idea that organisms could revert to earlier evolutionary forms, played a recurring role in both Balfour’s and Pitt Rivers’ lines of museum object-based study. For reasons never explicitly stated, both men assumed that decorative motifs originated with realistic images, relying on the conventions of verisimilitude common in Western art. This leads us back, then, to the somewhat perplexing drawing with which Balfour chose to begin his experiment.

Balfour wrote that he started his experiment by making “a rough sketch of some object which could be easily recognized” (24). His original gastropodic image relied, fittingly, on a number of conventions that required a trained eye and trained hand to interpret. The snail’s shell and the twig, for instance, appeared rounded through the artist’s use of cross-hatching, the precise placement of regularly spaced lines which lend a sense of three-dimensional volume to a drawing. Similarly, the snail and its twig were placed in a vague landscape, surrounded by roughly sketched lines giving a general sense of the surface upon which the action occurred. While the small illustration might initially seem like a straightforward portrayal of a gastropod suctioned onto a twig, the drawing’s visual interpretation is only obvious to those accustomed to reading and reproducing the visual conventions of Western art. Since the image was relatively challenging to begin with, it provided Balfour with an exciting experimental result: specifically, a bird wearing trousers.

Plate II. Henry Balfour, The Evolution of Decorative Art (New York: Macmillan & Co., 1893).

Balfour had conducted a similar experiment using a drawing of a man from the Parthenon frieze as his “seed,” but it failed to yield the surprising results of the first. While the particulars of the drawing changed somewhat—the pectoral muscles became a cloak, the hat changed, and the individual’s gender got a little murky in the middle—the overall substance of the image remained unchanged. It did not exhibit evolutionary “degeneration” to the same convincing degree, but rather seemed to be, quite simply, the product of some less-than-stellar artists. While Balfour included both illustrations in his book, he clearly preferred his snail-to-bird illustration and reproduced it far more widely. He also admitted to interfering in the experimental process: omitting subsequent drawings that did not add useful evidence to his argument, and deliberately choosing participants who had no artistic training (25, 27).

Balfour clearly manipulated his experiment and the resulting data to prove what he thought he already knew: that successive copying in art led to degenerate, overly conventionalized forms that no longer held to Western standards of verisimilitude. It was an outlook he had likely acquired from Pitt Rivers. In Notes and Queries on Anthropology (1892), a handbook circulated to travelers who wished to gather ethnographic data for anthropologists back in Britain, Pitt Rivers outlined a number of questions that travelers should ask about local art. The questions were leading, designed in a simple yes/no format likely to provoke a certain response. In fact, one of Pitt Rivers’ questions was, essentially, the verbal version of Balfour’s drawing game. “Do they,” he wrote, “in copying from one another, vary the designs through negligence, inability, or other causes, so as to lose sight of the original objects, and produce conventionalized forms, the meaning of which is otherwise inexplicable?” (119–21). Pitt Rivers left very little leeway—for either the artist or the observer—for creativity. Might the artists choose to depict things in a certain way? And might the observer interpret these depictions in his or her own way? Pitt Rivers’ motivation was clear. If one did find such examples of copying, he added, “it would be of great interest to obtain series of such drawings, showing the gradual departure from the original designs.” They would, after all, make a very convincing museum display.

Laurel Waycott is a PhD candidate in the history of science and medicine at Yale University. This essay is adapted from a portion of her dissertation, which examines the way biological thinking shaped conceptions of decoration, ornament, and pattern at the turn of the 20th century.

Revolutions Are Never On Time

by contributing editor Disha Karnad Jani

In Enzo Traverso’s Left-Wing Melancholia: Marxism, History, and Memory, timing is everything. The author moves seamlessly between such subjects as Goodbye Lenin, Gustave Courbet’s The Trout, Marx’s Eighteenth Brumaire, and the apparently missed connection between Theodor Adorno and C.L.R. James to guide the reader through the topography of the Left in the twentieth century. The book is an investigation of left-wing culture through some of its most prominent (and dissonant) participants, alongside the images and metaphors that constituted the left of the twentieth century as a “combination of theories and experiences, ideas and feelings, passions and utopias” (xiii). By defining the left not in terms of the political parties found on the left of the spectrum, but rather by gathering his subjects in ontological terms, Traverso prepares the laboratory prior to his investigation, but not through a process of sterilization. Rather, the narrative of the “melancholic dimension” of the last century’s left wing seems assembled almost by intuition, as we follow along with affinities and synchronicities across the decades. In its simultaneously historical, theoretical, and programmatic ambitions, Left-Wing Melancholia sits in the overlapping space between the boundaries of intellectual history and critical theory.

In a series of essays, Traverso explores the left’s expressive modes and missed opportunities: the first half of the book is an exploration of Marxism and memory studies (one dissolved as the other emerged), the melancholic in art and film, and the revolutionary image of Bohemia. The second half of the book is a series of intellectual and personal meetings, which Traverso adjudicates for their usefulness to the left: Theodor Adorno and C.L.R. James’s abortive friendship, Adorno and Walter Benjamin’s correspondence, and Daniel Bensaïd’s work on Benjamin. The “left-wing culture” these affinities are meant to trace is defined as the space carved out by “movements that struggled to change the world by putting the principle of equality at the center of their agenda” (xiii). Since that landscape is rather vast, Traverso relies on resonant juxtaposition and very real exchanges in order to erect monuments to the melancholia he reads throughout their shared projects.

The nineteenth and twentieth centuries burst forth onto the stage of history buoyed by the French and Russian Revolutions, surging confidently forwards into a future tinged with utopia. In devastating contrast, the twenty-first century met a future foreclosed to the possibility of imagining a world outside of triumphant capitalism and post-totalitarian, neoliberal humanitarianism. While successive defeats served to consolidate the ideas of socialism in the past, the defeat suffered by the left in 1989 withheld from memory (and therefore from history) any redemptive lesson. In Left-Wing Melancholia, the reader is thus led gently through the rubble of the emancipatory project of the last two hundred years, and invited to ruminate on what could come of “a world burdened with its past, without a visible future” (18).

As critical theory, Left-Wing Melancholia uses the history of socialism and Marxism over the last two hundred years and its defeat in 1989 in order to name the problem of the left today. As intellectual history, it may be found wanting, at least if one seeks in its tracing of left-wing culture some semblance of linearity. If, however, a reader is willing to follow, instead of context à la Skinner, or concept à la Koselleck, a feeling – then Left-Wing Melancholia will soothe, disturb, and offer an alternative: Traverso assures us that “the utopias of the twenty-first century still have to be invented” (119). Indeed, Traverso argues that Bensaïd “rediscovered a Marx for whom ‘revolutions never run on time’ and the hidden tradition of a historical materialism à contretemps, that is, as a theory of nonsynchronous times or non-contemporaneity” (217). Traverso’s own project could be read as part of this now-unearthed tradition.

It is clear that Traverso is aware of the reconfiguration of enshrined histories of socialism and Marxism implicit here, that he has skewed any orthodox narrative by reading through disparate political projects the feeling of melancholia. Ascribing a single ontology to the left over the course of the twentieth century and representing its culture in such a frenetic fashion makes this book vulnerable to the criticism of the empiricist. For instance, he speculates on the lost opportunity of Adorno’s and James’s friendship with “counterfactual intellectual history”: “what could have produced a fruitful, rather than missed encounter between Adorno and James…between the first generation of critical theory and Black Marxism? It probably would have changed the culture of the New Left and that of Third Worldism” (176). In such statements, it is startling to see at work the faith Traverso has in the dialogue between intellectuals, and in intellectuals’ power to change the course of history.


Hammering through the Berlin Wall. Photograph by Alexandra Avakian, from Smithsonian Mag.

He also eschews the Freudian use of the term “melancholia,” representing it instead as a feeling of loss and impossibility, expressed through writing, monuments, art, film, and his repeated articulations of how “we” felt after 1989. Presumably, this “we” is those of us who existed in a world that contained the Berlin Wall, and then witnessed it come down and had to take stock afterwards. This “we” is transgenerational, as it is also the subject that “discovered that revolutions had generated totalitarian monsters” (6). This same collective subject is a left-wing culture that had its memory severed by 1989, but also remembers in an internalist melancholic mode: “we do not know how to start to rebuild, or if it is even worth doing” (23). (I ask myself how the “we” that was born after 1989 fits in here, if the transgenerational memory of the left was severed in that year. Leftist post-memory, perhaps?) This book is addressed to fellow travelers alone. The reader is brought into the fold to mourn a loss assumed to be shared: “…we cannot escape our defeat, or describe or analyze it from the outside. Left-wing melancholy is what remains after the shipwreck…” (25). Thus, Traverso demonstrates the possibility of fusing intellectual history and critical theory, where one serves the other and vice versa; in his discussion of Benjamin, he remarks: “To remember means to salvage, but rescuing the past does not mean trying to reappropriate or repeat what has occurred or vanished; rather it means to change the present” (222). Left-Wing Melancholia has the explicit purpose of rehabilitating the generation paralyzed by the triumph of neoliberal capitalism. It is a long history of left-wing melancholy that puts struggles for emancipation in our own moment in perspective. And for all its morose recollection, Left-Wing Melancholia contains moments of hope: “we can always take comfort in the fact that revolutions are never ‘on time,’ that they come when nobody expects them” (20).

Towards an Intellectual History of Modern Poverty

by guest contributor Tejas Parasher

 

In Chapter 3 of The History Manifesto, David Armitage and Jo Guldi support historians’ increasing willingness to engage with topics generally left to economists. Whereas the almost total dominance of game-theoretic modelling in economics has led to abstract explanations of events in terms of market principles, history, with its greater attention to ruptures and continuities of context and its “apprehension of multiple causality,” can push against overly reductionist stories of socio-economic problems (The History Manifesto, 87). Citing Thomas Piketty’s Capital as a possible example, Armitage and Guldi propose a longue-durée approach to the past that, by empirically documenting the evolution of a phenomenon (say, income inequality or land reform) over time, can disclose context-specific factors and patterns that economic models generally elide.

In this blog post, I ask what intellectual history in particular might have to gain (and contribute) by following Armitage and Guldi’s provocation and taking on a topic that Western academia has almost totally ceded to economics since the 1970s: the study of global poverty. Extreme or mass poverty in the Global South is a well-worn term in the literature on cosmopolitan justice, development economics, global governance, and foreign policy. Among economists like Jeffrey Sachs, Paul Collier, Abhijit Banerjee, and Esther Duflo, poverty is usually invoked as a sign of institutional failure—domestic or international—and a problem to be solved through aid or the reform of market governance. I want to suggest here that the contemporary dominance of economic analysis has foreclosed other approaches to mass poverty in the twentieth century. These are discourses that global intellectual history is uniquely able to excavate.


Delegates at the London Round Table Conferences (1930-1932) on constitutional reform, representation, and voting in British India, Sept. 14, 1931. Hulton Archive, Getty Images.

To illustrate my point, I want to turn to a common trope I have found while researching political thought in colonial India. Between approximately 1929-30 and 1950, the Indian National Congress and other organizations fighting for self-determination began to demand the introduction of universal adult franchise in British South Asia. The colony had seen very limited elections at the provincial level since 1892. Through a series of acts in 1909, 1919, and 1935, the British Government gradually widened the powers of legislatures with native representation, while keeping the electorate limited according to property ownership and income. In its report to Parliament in 1919, the Indian Franchise Committee under Lord Southborough emphasized that the ‘intelligence’ and ‘political education’ required for modern elections necessitated a strict property qualification (especially in a mostly rural country like India).

Against this background, extending rights to vote and hold office to laborers and the landless poor was anti-imperial both in the immediate sense of challenging British constitutional provisions and, more generally, in inverting the philosophy of the colonial state. Dipesh Chakrabarty has accurately and evocatively described the nationalist demand for universal suffrage as a gesture of “abolishing the imaginary waiting room of history” to which Indians had been consigned by modern European thought (Provincializing Europe, 9). Indian demands for the adult franchise were almost always articulated with reference to the country’s economic condition. The poor, it was said, needed to participate directly in politics so that the state which governed them could adequately represent their interests.

M.K. Gandhi (1869-1948) began making such arguments in support of adult franchise soon after he gained leadership of the Indian independence movement around 1919. His ideal of a decentralized village-based democracy (panchayati raj) sought to address the deep socio-economic inequality of colonial society by bringing the rural poor into decision-making processes. Under the Gandhian program, fully participatory local village councils would combine legislative, judicial, and executive functions. As Karuna Mantena has noted in her recent study of Gandhi’s theory of the state, panchayati raj based on universal suffrage was seen to empower the poor by giving them an institutional mechanism to guard against the agendas of urban elites and landed rural classes.

Through the 1930s and 1940s, most demands for extending suffrage to the poor shared Gandhi’s premise. Even when leaders fundamentally disagreed with Gandhi’s idealization of village self-rule, they similarly considered the power to vote and hold office as a crucial safeguard against further economic vulnerability. In the Constitution of Free India he proposed in 1944, Manabendra Nath Roy (1887-1954), the ex-Communist leader of the Radical Democratic Party, argued for full enfranchisement and participatory local government on essentially defensive grounds, to protect “workers, employees, and peasants” from privileged interests (Constitution of Free India, 12).

One of the most sophisticated analyses of the problem of poverty for Indian politics during these decades came from B.R. Ambedkar (1891-1956), a jurist, anti-caste leader, and the main drafter of the Constitution of independent India in 1950. Ambedkar had been a vocal advocate for removing property, income, and literacy qualifications for voting and holding office since 1919, when he testified before Lord Southborough’s committee. As independence became increasingly likely from the 1930s, Ambedkar’s fundamental concern was to ensure that the poorest, landless castes of India had constitutional protections to vote and to represent themselves as separate groups in the legislature. Writing to the Simon Commission for constitutional reform in 1928, Ambedkar saw direct participation of the poor as the only way to forestall the rise of a postcolonial oligarchy: “the poorer the individual the greater the necessity of enfranchising him…. If the welfare of the worker is to be guaranteed from being menaced by the owners, the terms of their associated life must be constantly resettled. But this can hardly be done unless the franchise is dissociated from property and extended to all propertyless adults” (“Franchise,” 66).

During the height of the Indian independence movement in the 1930s and 1940s, there was thus an acute awareness of mass poverty as a key problem confronted by modern politics outside the West. Participatory democracy was in many ways the answer to an economic issue: colonialism’s creation of a large population without security of income or property, placed at the very bottom of networks of production and exchange that favored either Western Europe or a native elite. This was the population that Gandhi repeatedly described as holding onto its existence in a precarious condition of lifeless “slavery,” completely lacking any economic power. Only fundamental changes in the nature of the modern state, to make it accessible to those who had been constructed as objects of expert rule and as backward outliers to productivity and prosperity, could return dignity to the poor.


File photo from the 1952 general election, the first conducted with universal adult suffrage. Photo No. 21791a (Jan. 1952). Photo Division, Ministry of Information and Broadcasting, Govt. of India.

My intention in briefly reconstructing Indian debates around giving suffrage, self-representation, and engaged citizenship to some of the most vulnerable and powerless people in the world is straightforward: attempts to address the effects of inequality in the Global South through the vote and local democracy rather than exclusively through international governance and economic reconstruction need to have a central place in any story we tell about twentieth-century poverty. Before they were taken up in the literature on efficient economic institutions and the rhetoric of international aid and development in the early 1950s (a shift usefully analyzed by anthropologists like Akhil Gupta and Arturo Escobar), colonial narratives about Africa, Latin America, and Asia as regions of intractable, large-scale poverty, famine, and market failure informed the political thought of anti-imperial democracy. The idea that existing economic conditions in India were problematic and deeply unjust was the basis of giving greater political power to the poor. A global conceptual history of ‘mass poverty’ in the twentieth century can therefore situate popular Third World movements that sought to increase the agency of the poor alongside more familiar, and more hegemonic, projects of Western humanitarianism.

This brings me back to my earlier point about what we might gain by re-thinking, with The History Manifesto, the relationship between intellectual history and economics. Once we start to trace how the categories and variables deployed in economic analysis emerged and changed over time, and how they were interpreted and practiced in a wide range of historical contexts, we can access dimensions of these concepts that may be completely absent from economic modeling. On the specific question of global poverty, an intellectual history that documents how the concept travelled between Third World thought, social movements, and global governance might give us theories of poverty alleviation that entail much more than simply distributive justice and resource allocation. This would be a form of intellectual history committed, as Armitage and Guldi put it, to “speaking back” to the “mythologies” of economics by expanding the timeframes and theoretical traditions which inform the discipline’s methods (The History Manifesto, 81-85).

Tejas Parasher is a PhD candidate in political theory at the University of Chicago. His research interests are in the history of political thought, comparative political theory, and global intellectual history, especially on questions of state-building, decolonization, and market governance in the mid-twentieth century, with a regional focus on South Asia. His dissertation examines the rise of redistribution as a discourse of government and economic policy in India through the 1940s. He also writes more broadly on issues of socio-economic inequality in democratic and constitutional theory, human rights, and the history of political thought.

“Towards a Great Pluralism”: Quentin Skinner at Ertegun House

by contributing editor Spencer J. Weinreich

Quentin Skinner is a name to conjure with. A founder of the Cambridge School of the history of political thought. Former Regius Professor of History at the University of Cambridge. The author of seminal studies of Machiavelli, Hobbes, and the full sweep of Western political philosophy. Editor of the Cambridge Texts in the History of Political Thought. Winner of the Balzan Prize, the Wolfson History Prize, the Sir Isaiah Berlin Prize, and many others. On February 24, Skinner visited Oxford for the Ertegun House Seminar in the Humanities, a thrice-yearly initiative of the Mica and Ahmet Ertegun Graduate Scholarship Programme. In conversation with Ertegun House Director Rhodri Lewis, Skinner expatiated on the craft of history, the meaning of liberty, trends within the humanities, his own life and work, and a dizzying range of other subjects.

Professor Quentin Skinner at Ertegun House, University of Oxford.

Names are, as it happens, a good place to start. As Skinner spoke, an immense and diverse crowd filled the room: Justinian and Peter Laslett, Thomas More and Confucius, Karl Marx and Aristotle. The effect was neither self-aggrandizing nor ostentatious, but a natural outworking of a mind steeped in the history of ideas in all its modes. The talk is available online; accordingly, instead of summarizing Skinner’s remarks, I will offer a few thoughts on his approach to intellectual history as a discipline, the aspect of his talk that most spoke to me and that will, I hope, be of interest to readers of this blog.

Lewis’s opening salvo was to ask Skinner to reflect on the changing work of the historian, both in his own career and in the profession more broadly. This parallel set the tone for the evening, as we followed the shifting terrain of modern scholarship through Skinner’s own journey, a sort of historiographical Everyman (hardly). He recalled his student days, when he was taught history as the exploits of Great Men, guided by the Whig assumptions of inevitable progress towards enlightenment and Anglicanism. In the course of this instruction, the pupil was given certain primary texts as “background”—More’s Utopia, Hobbes’s Leviathan, John Locke’s Two Treatises of Government—together with the proper interpretation: More was wrongheaded (in being a Catholic), Hobbes a villain (for siding with despotism), and Locke a hero (as the prophet of liberalism). Skinner mused that in one respect his entire career has been an attempt to find satisfactory answers to the questions of his early education.

Contrasting the Marxist and Annaliste dominance that prevailed when he began his career with today’s broad church, Skinner spoke of a shift “towards a great pluralism,” an ecumenical scholarship welcoming intellectual history alongside social history, material culture alongside statistics, paintings alongside geography. For his own part, a Skinner bibliography joins studies of the classics of political philosophy to articles on Ambrogio Lorenzetti’s The Allegory of Good and Bad Government and a book on William Shakespeare’s use of rhetoric. And this was not special pleading for his pet interests. Skinner described a warm rapport with Bruno Latour, despite a certain degree of mutual incomprehension and wariness of the extremes of Latour’s ideas. Even that academic Marmite, Michel Foucault, found immediate and warm welcome. Where many an established scholar I have known snorts in derision at “discourses” and “biopolitics,” Skinner heaped praise on the insight that we are “one tribe among many,” our morals and epistemologies a product of affiliation—and that the tribe and its language have changed and continue to change.

Detail from Ambrogio Lorenzetti’s “Allegory of the Good Government.”

My ears pricked up when, expounding this pluralism, Skinner distinguished between “intellectual history” and “the history of ideas”—and placed himself firmly within the former. Intellectual history, according to Skinner, is the history of intellection, of thought in all forms, media, and registers, while the history of ideas is circumscribed by the word “idea,” confined to a more formal and rigid interest in content. On this account, art history is intellectual history, but not necessarily the history of ideas, since it is not always concerned with particular ideas. Undergirding all this is a “fashionably broad understanding of the concept of the text”—a building, a mural, a song are all grist for the historian’s mill.

If we are to make a distinction between the history of ideas and intellectual history, or at least to explore the respective implications of the two, I wonder whether there is not a drawback to intellection as a linchpin, insofar as it presupposes an intellect to do the intellecting. To focus on the genesis of ideas comes, perhaps, at the expense of understanding how they travel and how they are received. Moreover, does this overly privilege intentionality, conscious intellection? A focus on the intellects doing the work is more susceptible, it seems to me, to the Great Ideas narrative, that progression from brilliant (white, elite, male) mind to brilliant (white, elite, male) mind.

At the risk of sounding like postmodernism at its most self-parodic, is there not a history of thought without thinkers? Ideas, convictions, prejudices, aspirations often seep into the intellectual water supply divorced from whatever brain first produced them. Does it make sense to study a proverb—or its contemporary avatar, a meme—as the formulation of a specific intellect? Even if we hold that there are no ideas absent a mind to think them, I posit that “intellection” describes only a fraction (and not the largest) of the life of an idea. Numberless ideas are imbibed, repeated, and acted upon without ever being much mused upon.

Skinner himself identified precisely this phenomenon at work in our modern concept of liberty. In contemporary parlance, the antonym of “liberty” is “coercion”: one is free when one is not constrained. But, historically speaking, the opposite of liberty has long been “dependence.” A person was unfree if they were in another’s power—no outright coercion need be involved. Skinner’s example was the “clever slave” in Roman comedies. Plautus’s Pseudolus, for instance, acts with considerable latitude: he comes and goes more or less at will, he often directs his master (rather than vice versa), he largely makes his own decisions, and all this without evident coercion. Yet he is not free, for he is always aware of the potential for punishment. A more nuanced concept along these lines would sharpen the edge of contemporary debates about “liberty”: faced with endemic surveillance, one may choose not to express oneself freely—not because one has been forced to do so, but out of that same awareness of potential consequences (echoes of Jeremy Bentham’s Panopticon here). Paradoxically, even as our concept of “liberty” is thus impoverished and unexamined, few words are more pervasive in present discourse.

Willey Reveley’s 1791 plan of the Panopticon.

On the other hand, intellects and intellection are crucial to the great gift of the Cambridge School: the reminder that political thought—and thought of any kind—is an activity, done by particular actors, in particular contexts, with particular languages (like the different lexicons of “liberty”). Historical actors are attempting to solve specific problems, but they are not necessarily asking our questions nor giving our answers, and both questions and answers are constantly in flux. This approach has been an antidote to Great Ideas, destroying any assumption that Ideas have a history transcending temporality. (Skinner acknowledged that art historians might justifiably protest that they knew this all along, invoking E. H. Gombrich.)

The respective domains of intellectual history and the history of ideas returned when one audience member asked about their relationship to cultural history. Cultural history for Skinner has a wider set of interests than intellectual history, especially as regards popular culture. Intellectual history, by contrast, is avowedly elitist in its subject matter. But, he quickly added, it is not at all straightforward to separate popular and elite culture. Theater, for instance, is both: Shakespeare is the quintessence of both elite art and demotic entertainment.

On some level, this is incontestable. Even as Macbeth meditates on politics, justice, guilt, fate, and ambition, it is also gripping theater, filled with dramatic (sometimes gory) action and spiced with ribald jokes. Yet I query the utility, even the viability, of too clear a distinction between the two, either in history or in historians. Surely some of the elite audience members who appreciated the philosophical nuances also chuckled at the Porter’s monologue, or felt their hearts beat faster during the climactic battle? Equally, though they may not have drawn on the same vocabulary, we must imagine some of the “groundlings” came away from the theater musing on political violence or the obligations of the vassal. From Robert Scribner onwards, cultural historians have problematized any neat model of elite and popular cultures.

Frederick Wentworth’s illustration of the Porter scene in Macbeth.

In any investigation, we must of course be clear about our field of study, and no scholar need do everything. But trying to circumscribe subfields and subdisciplines by elite versus popular subjects, by ideas versus intellection versus culture, is, I think, to set up roadblocks in the path of that most welcome move “towards a great pluralism.”

French Liberals and the Capacity for Citizenship

by guest contributor Gianna Englert

M.L. Bosredon, “L’urne et le fusil” (April 1848). A worker sets aside his weapon to engage in the act of voting—an expression of the faith that universal suffrage would mitigate violence, a claim that liberals rejected.

2017 has done a lot for the history of ideas. “Post-truth” politics, tyranny, nationalism, and the nature of executive power have pushed us to make sense of the present by appealing to the past. The history of political thought offers solid ground, a way to steady ourselves—not to venerate what has come before, but to use it to clarify or challenge our own ideas.

Debates surrounding citizenship also lend themselves to this approach. They return us to foundational political questions. They force us to ask who is in, who is out, and why.

These questions are not new, nor are they distinctly American. We can learn about them from a seemingly unlikely time and place: from liberal theorists in nineteenth-century France, who were similarly concerned to find solid ground. As Jeremy Jennings notes in his Revolution and the Republic, “the Revolution, and the Republic it produced, gave birth to a prolonged and immensely sophisticated debate about what it meant to be a member of a political community…. It was a debate about the very fundamentals of politics” (25). Democrats used the language of rights, primarily the right to vote. Liberals defended capacité (capacity) as that which preceded political rights. Capacity conferred the title to political participation—it defined who was and who was not a participant in the franchise. There was much at stake in this definition. Only a capable citizenry could overcome revolutionary passion by reason, and safeguard the freedoms and institutions that would ensure a stable nation.

The discourse of capacity drew criticism from liberalism’s nineteenth-century opponents and later scholars. French liberals have been criticized for espousing exclusionary politics that tied citizenship to wealth and social class. Yet this interpretation misrepresents their theory of citizenship. Capacity was actually an elastic, potentially expansive standard for political inclusion. The liberal definition of the citizen was similarly flexible, designed to evolve alongside changing social and economic conditions.

François Guizot (1787-1874)

The discourse of capacité originated with François Guizot (1787-1874), whose politics and personality have long been associated with the revolution of 1848. But his historical lectures, delivered from 1820-22 and again in 1828, offer an alternative to his image as an unpopular, uncompromising politician. Prior to his role in the July Monarchy, he was known for his narrative history of European institutions—praised by theorists in France and Germany, and even by John Stuart Mill in England. His method of “philosophical history” linked politics to society through the study of the past. Political institutions had to fit the given “social state,” a term that encompassed both economic and class structure. “Before becoming a cause,” Guizot wrote, “political institutions are an effect; society produces them before being modified by them” (Essays on the History of France, 83). Philosophical history was most valuable for how it informed French politics. From the perspective it provided, Guizot saw that neither aristocratic nor democratic rule was well suited to post-revolutionary French society. He championed the alternative of a representative government with capable rather than universal suffrage.

What did it mean to be capable? Guizot associated capacity with individual rationality, independence, and economic participation. Most importantly, the capable citizen could recognize and promote “the social interest,” a standard apart from the individual and the family. The citizen, a participant in the franchise, was first and foremost a member of the community, capable of recognizing what the public good demanded. Guizot named commercial participation among the signs of capacity, as it revealed one’s engagement—indeed, one’s membership—in society.

Those signs of capacity were also variable. Just as political institutions depended upon historically variable social conditions, so too did the requirements of citizenship change over time. Given capacity’s historical character, it was simply wrong to define the capable electorate as a permanently exclusive class. Capacity should remain ever open to “legal suspicion,” since:

The determination of the conditions of capacity and that of the external characteristics which reveal it, possess, by the very nature of things, no universal or permanent character. And not only is it unnecessary to endeavor to fix them, but the laws should oppose any unchangeable prescription regarding them. (History of the Origins of Representative Government in Europe, 337)

But Guizot’s own political career was at odds with his theory. If we limit our study to his writings and speeches under the July Monarchy, the image of a dogmatic, inflexible thinker inevitably surfaces. In 1847, he condemned universal suffrage before the Chamber as a destructive force, “whose day would never come” (Speech of 26 March, Histoire parlementaire V, 383). Absent from Guizot’s later thought is any mention of the fluidity of capacity or of the potentially more inclusive electorate that might follow.

Guizot’s politics show the path liberalism took on the question of citizenship. Liberals tried to impede the progress of mass politics, and to restrict the franchise to a small, permanent class of the capable. Unsurprisingly, they failed. In an effort to avoid the rule of the multitude, liberals proposed increasingly stringent residency and property requirements for suffrage, which at once disenfranchised and frustrated much of the population. In a democratizing society, liberals vainly stood astride history. But they also failed to live according to their own standards. They tried to preserve as static that which was intended, as Guizot first argued, to be elastic: the concept of capacity and the idea of the citizen. When liberals sacrificed their theoretical foundations to defend political power, they lost the battle for both.

Despite liberalism’s political limits, we should not dismiss the promise of its theory. In his historical discussion of capacity, Guizot separated citizenship from social class. Capacity was not an exclusive, permanent possession for certain persons or classes, but an evolving, potentially progressive standard for inclusion. If capacity was tied to history, extension of the franchise was possible and in some cases, necessary.

Can liberalism’s past help us make sense of the present? The tradition’s complicated history, marked by tension between theory and practice, offers both a rich vision of citizenship and a cautionary tale of political exclusion. Guizotian capacity would preclude exclusions based explicitly on ascriptive characteristics like race, ethnicity, and gender. But as Guizot’s practical politics revealed, “capacity” could also be co-opted to justify these kinds of exclusions, or to import fixed standards for citizenship under the guise of so-called progressive appeals to rationality or independence. This is the darker side of any standard for inclusion, and we should be worried about the potential abuses associated with such standards. The political positions of nineteenth-century liberals remind us of these darker possibilities, which persist under different forms even in present-day liberal democracies.

Still, capacity has advantages for thinking about liberal citizenship more broadly. Though French liberals most often addressed the right to vote, they also explored what made someone an informal member of the community, with ties to a given place, way of life, and common cause. And they urged that these informal elements of social membership distinguished the individual from the citizen, arguing that the law ought to track social realities rather than resist them—that citizenship was not just suffrage, but a set of practices and relationships that the law ought to recognize. This resonates with our contemporary experience. There are entire groups of people who are undoubtedly members of American communities without being citizens, who participate in society without benefit of the full complement of civil and political rights. Guizot’s thought shows that we need not invoke thick, idealized conceptions of participation to inform liberal democratic practice or its standards for inclusion. For all of its difficulties, the liberal discourse of capacity prompts us to reconsider what it means to be a member of a political community—a question that has not lost any urgency.

Gianna Englert is a Postdoctoral Research Associate at the Political Theory Project at Brown University. She holds a Ph.D. in Government from Georgetown University. Her current book manuscript, Industry and Inclusion: Citizenship in the French Liberal Tradition, explores the economic dimensions of citizenship and political membership in nineteenth-century French liberalism.