For those with an interest in fashion history, springtime in New York City heralds the opening of The Metropolitan Museum of Art’s annual summer Costume Institute exhibition. The Costume Institute show (this year’s is “Camp: Notes on Fashion”) and its glittery opening gala in early May inevitably attract a huge amount of press and public attention – so much that they threaten to overshadow other fashion exhibitions. But any museum visitor who wishes to understand more about how these exhibitions developed as a scholarly practice will be rewarded by a trip to the Museum at the Fashion Institute of Technology (MFIT). MFIT – originally called the Design Laboratory and Galleries at FIT – opened in 1969. Since then, it has played host to over two hundred exhibitions, most organized by MFIT’s dedicated team of in-house curators. In celebration of the fiftieth anniversary of the museum’s founding, earlier this year MFIT opened “Exhibitionism: 50 Years of the Museum at FIT.” This show is a reminder of how fashion scholarship has developed over the past few decades, and the potential for even the most eye-catching garments to be pedagogical tools.
A little less than a year ago, a prestigious American university hosted a conference on French-Algerian history, gathering the leading specialists on the topic.
A prominent French scholar closed his presentation by opening the debate to the audience. Immediately, one of his North American colleagues asked, “Since you do not speak Arabic, do you feel somewhat limited in your work on French Algeria?”
“I see what you mean,” he replied, “but fortunately, we have the archives of the colonial administration, so French is enough.”
Suddenly, a man sitting in the first row of the audience stood up and, speaking in French, replied, “I am Algerian. I was born before independence. You taught us French and nothing else. We had to learn Arabic after the War of Liberation. Arabic must come back to Algeria.”
And then, another man, sitting next to him, added “Arabic … and Berber. Nobody talks about Berber. Historians have forgotten that North Africa is the land of the Berbers.”
Who are the Berbers?
The indigenous population of North Africa, the Berbers call themselves i-Mazigh-en, “free men” or “noble” in Tamazight. Although over the centuries the Berbers have split into smaller communities – the Chleus in Morocco, the Touaregs in Libya, and the Kabyles in Algeria – they have remained faithful to a clear sense of unity. The history of the Berbers is that of an identity constantly reshaped by internal and external mutations, of cultural blending and of ongoing intellectual developments and innovations. Invaded by the Phoenicians around 800 BC, the Berbers were incorporated into the Roman Empire in 200 BC, and their land constituted the cradle of European Christianity. The Arab conquest of the seventh century led to the merging of Berber and Arab culture, the conversion to Islam, and the fall of the Christian Church. Between the eighth and ninth centuries, a series of Muslim-Berber dynasties ruled over the Maghreb (the Arabic name for North Africa), achieving its territorial and political unity. Most of the region, except for Morocco, passed under Ottoman domination in 1553 and remained part of the empire until the nineteenth century. During this period, the three political entities composing modern North Africa emerged. While Tunisia and Morocco were to become protectorates of France, in 1881 and 1912 respectively, Algeria was to be French for over a century.
To fuel this narrative, the French progressively constructed the “Kabyle Myth.” In 1826, the Abbé Raynal claimed that the Kabyles were of “Nordic descent, directly related to the Vandals; they are handsome with blue eyes and blond hair, their Islam is mild.” Tocqueville wrote in 1837 that the “Kabyle soul” was open to the French (182). Ten years later, the politician Eugène Daumas claimed that the “Kabyle people, of German descent […] had accepted the Coran but had not embraced it [and that on many aspects] the Kabyles still lived according to Christian principles” (423). This is why French colonial officer Henri Aucapitaine concluded that “in one hundred years, the Kabyles will be French” (142).
The situation shifted in 1871 when two hundred and fifty Kabyle tribes, or a third of the Algerian population, revolted against the colonial authorities. The magnitude of the uprising was such that the French decided to “fight the Berber identity […] which in the [long-run] empowered the Arabs.”
From then on, the differences between the Berbers and the Arabs became irrelevant to France’s main priority: to maintain its control over the local populations by fighting Islam. The idea emerged that to be assimilated to the French Republic, Algerian subjects needed to be “purified” from their religious beliefs.
By the Senatus-Consulte of July 14th, 1865, the French had ruled that “Muslim Algerians were granted the right to apply for French citizenship […] once they had renounced their personal status as Muslims”(444). This law, which had established a direct link between religion on the one hand and political rights on the other, now further reflected the general sense of disregard towards the diversity of cultural groups in Algeria, all falling into the same overarching category of Muslim. After the 1880s, the French gave up on the Kabyle myth, marginalizing the Berbers who had become a source of agitation.
As the independent Republic of Algeria triumphed in the fall of 1962, the newly founded regime identified the Berbers as posing an “existential threat to the Arabo-Muslim identity of the country” (103).
Repeating the French practice of destroying those regional identities allegedly challenging the legitimacy of an aggressively centralized and centralizing state, the leaders of Algeria denounced the political claims of the Berbers as a “separatist conspiracy,” and after 1965 the Arabization policy became systematic throughout the country.
Assessing the respective impact of colonization, of nineteenth- and twentieth-century pan-Arab nationalist ideologies, and of post-independence Algerian leaders on the persecution of the Kabyles after 1962 constitutes a somewhat limited debate.
It is, however, critical to acknowledge the responsibility of the French state in the marginalization of the Berbers after the 1871 Kabyle revolt. Progressively, the colonial administration changed a model of mixed and complex identities, strongly rooted in the Maghreb tradition, into a binary model (235). Within this two-term model, people could only define themselves on one side or the other of a rigid frontier separating authentic French culture from supposedly authentic colonized culture. As the Franco-Tunisian historian Jocelyne Dakhlia argues in Remembering Africa, “the consequence of such a dualistic opposition of colonial identities was […] that the anticolonial movement stuck to this idea of an authentic native Muslim Arabic identity, excluding the Berbers” (235).
The very existence of the Berbers thwarts any attempt to analyze Algerian society through a rigid grid, whether in racial, cultural, or religious terms.
This is probably the reason why the French, and after them the independent Algerian state, have utterly repressed the legacy of Berber culture in the country: the Berbers could not exist in the dualistic narrative underlying both colonial and anti-colonial histories. As historian Michel-Rolph Trouillot would argue, they became unthinkable, and were silenced and excluded from History.
Yet the most curious factor in this non-history is the paucity of French scholarship on the issue (50). While some academics do focus on creating conversations and producing literature on the question of Berber identity, the most renowned French scholars systematically fail to do so. As a direct consequence, most French academic discourses reproduce and maintain the somewhat convenient imperial division opposing the “Arabs” in the North to the “Blacks” in the South of Africa, thereby forgetting that the Sahara is not a rigid racial frontier, and that for centuries the Berbers have circulated throughout the region.
Ultimately, the Berbers blur the lines between colonial and post-independence notions of identity in North Africa. To acknowledge the Berbers would require scholars to accept their fluidity – a direct threat to the Western penchant for systematic and pseudo-universalist thinking, still prevalent in French academia despite the emergence of post-colonial studies in the 1960s.
Recognizing the Berbers necessitates first, as claimed by Algerian scholar Daho Djerbal, to ask: who is the subject of History? This is the only way in which one can hope to put an end to the overly simplistic politics of identity imposed by the political power—on both sides of the Mediterranean Sea, on both shores of the Atlantic Ocean.
Rosalie Calvet is a paralegal working in New York City, a freelance journalist, and a Columbia class of 2017 graduate. As a history major, Rosalie specialized in the historiography of French imperial history. Her senior thesis, “Thwarting the Other: A Critical Approach to the Historiography of French Algeria,” was awarded the Charles A. Beard History Prize. In the future, Rosalie wishes to continue reflecting on otherness in the West—both through legal and academic lenses. More about Rosalie and her work is available on her website.
In June 1938, editor Jack Leibowitz found himself in a time crunch. Needing to get something to the presses, Leibowitz approved a recent submission for a one-off round of prints. The next morning, Action Comics #1 appeared on newsstands. On the cover, a strongman in bright blue circus tights and a red cape was holding a green car above his head while people ran in fear. Other than the dynamic title “ACTION COMICS”, there was no text explaining the scene. In an amusing combination of hubris and prophecy, the last panel of Action Comics #1 proclaimed: “And so begins the startling adventures of the most sensational strip character of all time!” Superman was born.
Comics are potentially incomparable resources given the cultural turn in the social sciences (a shift in the humanities and social sciences in the late 20th century toward a more robust study of culture and meaning and away from positivism). The sheer volume of narrative—somewhere in the realm of 10,000 sequential Harry Potter books—and their social saturation—approximately 91–95% of children between the ages of six and eleven read comics regularly, according to a 1944 psychological study—remain singular today (Lawrence 2002, Parsons 1991).
Cultural sociology has shown us that “myth and narrative are elemental meaning-making structures that form the bases of social life” (Woodward, 671). In a lecture on Giacometti’s Standing Woman, Jeffrey Alexander pushes forward a way of seeing iconic experiences as central, salient components of social life. He argues:
Iconographic experience explains how we feel part of our social and physical surroundings, how we experience the reality of the ties that bind us to people we know and people we don’t know, and how we develop a sense of place, gender, sexuality, class, nationality, our vocation, indeed our very selves (Alexander, 2008, 7).
He further suggests these experiences informally establish social values (Alexander, 2008, 9). Relevant to our purposes, Alexander stresses Danto’s work on “disgusting” and “offensive” as aesthetic categories (Danto, 2003) and Simmel’s argument that “our sensations are tied to differences” with higher and lower values (Simmel, 1968).
This suggests that theoretically the comic book is a window into pre-existing, powerful, and often layered morals and values held by the American people that also in turn helped build cultural moral codes (Brod, 2012; Parsons, 1991).
The comic book superhero, as invented and defined by the appearance of Superman, is a highly culturally contextualized medium that expresses particular subgroups’ anxieties, hopes, and values and their relationship to broader American society.
But this isn’t a history of comics, accidental publications, or even the most famous hero of all time. As Ursula K. Le Guin says, “to light a candle is to cast a shadow.” It was likely inevitable that the superhero—brightest of all the lights—would cast a very long shadow. Who, after all, could pose a challenge to Superman? Or what could occupy the world’s greatest detective? The world needed supervillains. The emergence of the supervillain offers a unique slice of moral history and a potentially powerful way to investigate the implicit cultural codes that shape society.
I want to briefly trace the appearance of recurring villains in comic books and note what their characteristics suggest about latent concepts of evil in society at the time. Given our limited space, I’m here only considering the earliest runs of the two most iconic heroes in comics: Superman (Action Comics #1-36) and Batman (Detective Comics #27-; Batman #1-4).
Initially, Superman’s enemies were almost exclusively one-off problems tied to socioeconomic situations. It wasn’t until June 1939 that readers met the first recurring comic book villain: the Ultrahumanite. Pursuing a lead on some run-of-the-mill racketeers, Superman comes across a bald man in a wheelchair: “The fiery eyes of the paralyzed cripple burn with terrible hatred and sinister intelligence.” His “crippled” status is mentioned regularly. The new villain wastes no time explaining that he is “the head of a vast ring of evil enterprises—men like Reynolds are but my henchmen” (Reynolds is a criminal racketeer introduced earlier in the issue), immediately signaling something new in comics. The man then formally introduces himself, not bothering with subtlety.
I am known as the ‘Ultra-humanite’. Why? Because a scientific experiment resulted in my possessing the most agile and learned brain on earth! Unfortunately for mankind, I prefer to use this great intellect for crime. My goal? Domination of the world!!
In issue 20, Superman discovers that, somehow, Ultra has become a woman. He explains to the Man of Steel: “Following my instructions, they kidnapped Dolores Winters yesterday and placed my mighty brain in her young vital body!” (Action Comics 20).
Superman found his first recurring foil in unfettered intellect divorced from physicality. It’s hard not to wonder if this reflected a general distrust of the ever-increasing destructive power of science as World War II dawned. It’s also fascinating to note how consistently the physical status of the Ultrahumanite is emphasized, suggesting a deep social desire for physical strength, confidence, and respect.
After Ultra’s death, our hero would not be without a domineering, brilliant opponent for long. Action Comics 23 saw the advent of Lex Luthor. First appearing as an “incredibly ugly vision” of a floating face and lights, Luthor’s identity unfolds as a mystery. Superman pursues a variety of avenues, finding only a plot to draw countries into war and thugs unwilling to talk for fear of death. Lois actually encounters Luthor first, describing him as a “horrible creature”. When Luthor does introduce himself, it nearly induces déjà vu: “Just an ordinary man—but with the brain of a super-genius! With scientific miracles at my fingertips, I’m preparing to make myself supreme master of th’ world!”
The Batman develops his first supervillain at nearly the same time as Superman. In July 1939, one month after the Ultrahumanite appeared, readers are introduced to Dr. Death. Dr. Death first appears in a lavish study speaking with a Cossack servant (subtly implying Dr. Death is anti-democratic) about the threat Batman poses to their operations. Death is much like what we would now consider a cliché of a villain—he wears a suit, has a curled mustache and goatee, a monocle, and smokes a long cigarette while he plots. His goal: “To extract my tribute from the wealthy of the world. They will either pay tribute to me or die.” Much like Superman’s villains, he uses science—chemical weapons in particular—to advance these sinister goals. In their second encounter, Batman prevails and Dr. Death appears to burn to death. Of course, in comics the dead rarely stay that way; Dr. Death reappears the very next issue, his face horribly scarred.
The next regularly recurring villain to confront Batman appears in February 1940. Batman himself introduces the character to the reader: “Professor Hugo Strange. The most dangerous man in the world! Scientist, philosopher, and a criminal genius… little is known of him, yet this man is undoubtedly the greatest organizer of crime in the world.” Elsewhere, Strange is described as having a “brilliant but distorted brain” and a “malignant smile”. While he naturally is eventually captured, Strange becomes one of Batman’s most enduring antagonists.
The very next month, in Batman #1, another iconic villain appears: none other than the Joker himself.
Once again a master criminal stalks the city streets—a criminal weaving a web of death about him… leaving stricken victims behind wearing a ghastly clown’s grin. The sign of death from the Joker!
Also utilizing chemicals for his plots, the Joker is portrayed as a brilliant, conniving plotter who leads the police and Batman on a wild hunt. Unique to the Joker among the villains discussed is his characterization as a “crazed killer” with no aims of world power. The Joker wants money and murder. He’s simply insane.
Some striking commonalities appear across our two early heroes’ comics. First, physical “flaws” are a critical feature. These deformities are regularly referenced, whether disability, scarring, or just a ghastly smile. Second, virtually all of these villains are genius-level intellects who use science to pursue selfish goals. And finally, among the villains, superpowers are at best a secondary feature, suggesting a close tie between physical health, desirability, and moral superiority. Danto’s aesthetic categories of “disgusting” and “offensive” certainly ring true here.
This is remarkably revealing and likely connected to deep cultural moral codes of the era. If Superman represents the “ideal type,” supervillains such as the Ultrahumanite, Lex Luthor, and the Joker are necessary and equally important iconic representations of those deep cultural moral codes. Such a brief overview cannot definitively draw out the moral world as revealed through comics and confirmed in history. Rather, my aims have been more modest: (1) to trace the history of the birth of the supervillain, (2) to draw a connective line between the strong cultural program, materiality, and comic books, and (3) to suggest the utility of comics for understanding the deep moral codes that shape a society. Cultural sociology allows us to see comics in a new light: as an iconic representation of culture that both reveals preexisting moral codes and in turn contributes to the ongoing development of said moral codes that impact social life. Social perspectives on evil are an actively negotiated social construct and comics represent a hyper-stylized, exceedingly influential, and unfortunately neglected force in this negotiation.
Albert Hawks, Jr. is a doctoral student in sociology at the University of Michigan, Ann Arbor, where he is a fellow with the Weiser Center for Emerging Democracies. He holds an M.Div. and S.T.M. from Yale University. His research concerns comparative Islamic social movements in Southeast and East Asia in countries where Islam is a minority religion, as well as in the American civil sphere.
Quentin Skinner is a name to conjure with. A founder of the Cambridge School of the history of political thought. Former Regius Professor of History at the University of Cambridge. The author of seminal studies of Machiavelli, Hobbes, and the full sweep of Western political philosophy. Editor of the Cambridge Texts in the History of Political Thought. Winner of the Balzan Prize, the Wolfson History Prize, the Sir Isaiah Berlin Prize, and many others. On February 24, Skinner visited Oxford for the Ertegun House Seminar in the Humanities, a thrice-yearly initiative of the Mica and Ahmet Ertegun Graduate Scholarship Programme. In conversation with Ertegun House Director Rhodri Lewis, Skinner expatiated on the craft of history, the meaning of liberty, trends within the humanities, his own life and work, and a dizzying range of other subjects.
Names are, as it happens, a good place to start. As Skinner spoke, an immense and diverse crowd filled the room: Justinian and Peter Laslett, Thomas More and Confucius, Karl Marx and Aristotle. The effect was neither self-aggrandizing nor ostentatious, but a natural outworking of a mind steeped in the history of ideas in all its modes. The talk is available online here; accordingly, instead of summarizing Skinner’s remarks, I will offer a few thoughts on his approach to intellectual history as a discipline, the aspect of his talk which most spoke to me and which will hopefully be of interest to readers of this blog.
Lewis’s opening salvo was to ask Skinner to reflect on the changing work of the historian, both in his own career and in the profession more broadly. This parallel set the tone for the evening, as we followed the shifting terrain of modern scholarship through Skinner’s own journey, a sort of historiographical Everyman (hardly). He recalled his student days, when he was taught history as the exploits of Great Men, guided by the Whig assumptions of inevitable progress towards enlightenment and Anglicanism. In the course of this instruction, the pupil was given certain primary texts as “background”—More’s Utopia, Hobbes’s Leviathan, John Locke’s Two Treatises of Government—together with the proper interpretation: More was wrongheaded (in being a Catholic), Hobbes a villain (for siding with despotism), and Locke a hero (as the prophet of liberalism). Skinner mused that in one respect his entire career has been an attempt to find satisfactory answers to the questions of his early education.
Contrasting the Marxist and Annaliste dominance that prevailed when he began his career with today’s broad church, Skinner spoke of a shift “towards a great pluralism,” an ecumenical scholarship welcoming intellectual history alongside social history, material culture alongside statistics, paintings alongside geography. For his own part, a Skinner bibliography joins studies of the classics of political philosophy to articles on Ambrogio Lorenzetti’s The Allegory of Good and Bad Government and a book on William Shakespeare’s use of rhetoric. And this was not special pleading for his pet interests. Skinner described a warm rapport with Bruno Latour, despite a certain degree of mutual incomprehension and wariness of the extremes of Latour’s ideas. Even that academic Marmite, Michel Foucault, found immediate and warm welcome. Where many an established scholar I have known snorts in derision at “discourses” and “biopolitics,” Skinner heaped praise on the insight that we are “one tribe among many,” our morals and epistemologies a product of affiliation—and that the tribe and its language have changed and continue to change.
My ears pricked up when, expounding this pluralism, Skinner distinguished between “intellectual history” and “the history of ideas”—and placed himself firmly within the former. Intellectual history, according to Skinner, is the history of intellection, of thought in all forms, media, and registers, while the history of ideas is circumscribed by the word “idea,” to a more formal and rigid interest in content. On this account, art history is intellectual history, but not necessarily the history of ideas, as not always concerned with particular ideas. Undergirding all this is a “fashionably broad understanding of the concept of the text”—a building, a mural, a song are all grist for the historian’s mill.
If we are to make a distinction between the history of ideas and intellectual history, or at least to explore the respective implications of the two, I wonder whether there is not a drawback to intellection as a linchpin, insofar as it presupposes an intellect to do the intellection. A focus on the genesis of ideas comes, perhaps, at the expense of understanding how they travel and how they are received. Moreover, does this not overly privilege intentionality, conscious intellection? A focus on the intellects doing the work is more susceptible, it seems to me, to the Great Ideas narrative, that progression from brilliant (white, elite, male) mind to brilliant (white, elite, male) mind.
At the risk of sounding like postmodernism at its most self-parodic, is there not a history of thought without thinkers? Ideas, convictions, prejudices, aspirations often seep into the intellectual water supply divorced from whatever brain first produced them. Does it make sense to study a proverb—or its contemporary avatar, a meme—as the formulation of a specific intellect? Even if we hold that there are no ideas absent a mind to think them, I posit that “intellection” describes only a fraction (and not the largest) of the life of an idea. Numberless ideas are imbibed, repeated, and acted upon without ever being much mused upon.
Skinner himself identified precisely this phenomenon at work in our modern concept of liberty. In contemporary parlance, the antonym of “liberty” is “coercion”: one is free when one is not constrained. But, historically speaking, the opposite of liberty has long been “dependence.” A person was unfree if they were in another’s power—no outright coercion need be involved. Skinner’s example was the “clever slave” in Roman comedies. Plautus’s Pseudolus, for instance, acts with considerable latitude: he comes and goes more or less at will, he often directs his master (rather than vice versa), he largely makes his own decisions, and all this without evident coercion. Yet he is not free, for he is always aware of the potential for punishment. A more nuanced concept along these lines would sharpen the edge of contemporary debates about “liberty”: faced with endemic surveillance, one may choose not to express oneself freely—not because one has been forced to do so, but out of that same awareness of potential consequences (echoes of Jeremy Bentham’s Panopticon here). Paradoxically, even as our concept of “liberty” is thus impoverished and unexamined, few words are more pervasive in present discourse.
On the other hand, intellects and intellection are crucial to the great gift of the Cambridge School: the reminder that political thought—and thought of any kind—is an activity, done by particular actors, in particular contexts, with particular languages (like the different lexicons of “liberty”). Historical actors are attempting to solve specific problems, but they are not necessarily asking our questions nor giving our answers, and both questions and answers are constantly in flux. This approach has been an antidote to Great Ideas, destroying any assumption that Ideas have a history transcending temporality. (Skinner acknowledged that art historians might justifiably protest that they knew this all along, invoking E. H. Gombrich.)
The respective domains of intellectual history and the history of ideas returned when one audience member asked about their relationship to cultural history. Cultural history for Skinner has a wider set of interests than intellectual history, especially as regards popular culture. Intellectual history, by contrast, is avowedly elitist in its subject matter. But, he quickly added, it is not at all straightforward to separate popular and elite culture. Theater, for instance, is both: Shakespeare is the quintessence of both elite art and of demotic entertainment.
On some level, this is incontestable. Even as Macbeth meditates on politics, justice, guilt, fate, and ambition, it is also gripping theater, filled with dramatic (sometimes gory) action and spiced with ribald jokes. Yet I query the utility, even the viability, of too clear a distinction between the two, either in history or in historians. Surely some of the elite audience members who appreciated the philosophical nuances also chuckled at the Porter’s monologue, or felt their hearts beat faster during the climactic battle? Equally, though they may not have drawn on the same vocabulary, we must imagine some of the “groundlings” came away from the theater musing on political violence or the obligations of the vassal. From Robert Scribner onwards, cultural historians have problematized any neat model of elite and popular cultures.
In any investigation, we must of course be clear about our field of study, and no scholar need do everything. But trying to circumscribe subfields and subdisciplines by elite versus popular subjects, by ideas versus intellection versus culture, is, I think, to set up roadblocks in the path of that most welcome move “towards a great pluralism.”
Mordecai Noah was one of the first Jews to reach national prominence in America. A politician, newspaper publisher, and man of letters, Noah was notoriously dismissed from his post as American Consul at Tunis by Secretary of State James Monroe in 1815. Monroe cited Noah’s religion as having been a hindrance to his professional duties. The event spurred widespread public outrage and criticism from prominent politicians who saw it as an outright display of religious intolerance. A decade later, the Sephardic Jewish playwright entered the national spotlight again through his plan to offer persecuted European Jews a refuge on an island near Buffalo, New York. Although this plan enjoyed enthusiastic support from local Christians and some Jews at its inauguration, the project failed within days. Noah then devised plans to settle Palestine with Jews, once again earning himself large-scale notoriety and becoming one of the first American proto-Zionists.
Noah’s story reflects elements of both dominant explanatory approaches that scholars have taken to the relationship of America to proto-Zionism/Zionism. Scholars studying this relationship generally approach it either from the field of religious cultural history or from the history of American public policy. Thus, the United States’ contemporary support for Israel can be characterized either through the philo-Semitic Protestant religious tradition, often referred to as Christian Zionism, or through a study of the public policy and diplomatic history of the United States. However, Noah’s story also hints at another, usually overlooked arena that has often fueled American support for Israel: pop culture. Noah received support largely from sympathetic Christians, but he also drew support and clout on the basis of his role as a State Department functionary. By all accounts, however, much of the attention Noah’s schemes received was based on the celebrity they earned him and the intrigue they generated beyond the small ranks of dogmatic Christian Zionists.
The pop-cultural dimension of the American–Israel relationship is absent from both religious-cultural and public policy-based accounts of the subject. Scholars who take the religious-cultural approach see the relationship as embedded in Christian Zionism, which in America is rooted in the religious tradition of premillennial dispensationalism. This eschatology maintains that Jesus will physically return and bring his true followers to heaven in the rapture, before a period of tribulation. Jesus’s return is to be followed by a 1,000-year period of earthly peace. It differs on this point from the more mainstream postmillennialism, according to which the 1,000-year period of earthly peace is to take place before the Second Coming. Premillennial dispensationalists place an emphasis on a Jewish return to the Holy Land as a trigger for the cataclysmic Second Coming of Christ. This has been encouraged by the fact that some dispensationalists have seen Jews as proto-Protestants due to their dogged resistance to Catholic conversion. The widespread circulation of the dispensationalist Scofield Reference Bible (first published in 1909) after World War I was particularly influential in transmitting premillennial beliefs in Anglophone countries.
Two notable examples of religious-cultural approaches to the American relationship with Zionism are Fuad Sha’ban’s For Zion’s Sake: The Judeo-Christian Tradition in American Culture and Stephen Spector’s Evangelicals and Israel: The Story of American Christian Zionism. While the two scholars of literature are far apart politically, they take similar approaches to the topic. Both argue that many Protestant Americans are inclined to support the State of Israel because of their evangelical thinking. Sha’ban argues that this relationship has been made even more important to many evangelicals because they see America itself as representing a New Zion (Sha’ban, 14-19). These accounts are both compelling, but, like most works of the religious-cultural school, they never draw a direct line from these trends to American policy.
Scholars who take public policy approaches to the question of American Zionism generally see the latter as a result of special interests, focusing on the political interactions between Congress, the State Department, the executive branch, and lobbyist groups. Many of these scholars see the State Department of the past as a foil to the current America-Israel relationship because of its perceived history of anti-Semitism; certainly the case of Mordecai Noah can provide an opening salvo for this argument. They argue that the State Department should be a rigid guarantor of American interests without regard for backroom politics, and they urge it to return to the strict protection of purely American interests. Some representatives of this realpolitik line of thinking, such as John Mearsheimer, Stephen Walt, George Ball, and Clifford Kiracofe, argue that the relationship between Israel and the United States is one that subverts domestic democracy, tarnishes America’s image in the world, and returns no tangible benefits. Many scholars may be understandably averse to discussing the influence of a particular ethnic or religious group’s lobby on American politics. These works, however, generally provide a fierce criticism of both Jewish and Christian Zionist politics, arguing that such organizations stifle criticism and debate about American-Israeli relations and American foreign policy in the greater Middle East. In these analyses, members of Congress are animated not so much by a philo-Semitic Zionism as by campaign contributions. A major drawback of this approach is that it often assigns too much primacy to lobbyist groups on Capitol Hill.
Both of these approaches are helpful in understanding the American-Israeli relationship, and scholars are increasingly adopting elements of both in their analyses of the subject. For instance, Robert O. Smith persuasively argues that the Cartwright Petition of 1649 to have Jews readmitted to England was one of the first Zionist political actions, in that it was advocated by Messianic Puritans (Smith, 96). He uses this argument to highlight the Christian roots and incubation of the idea of Zionism, contextualizing the pre- and post-Herzlian political history of Zionism. Smith goes on to demonstrate the influence of Christian Zionist ideas on important actors in the political history of Zionism, from Lord Balfour to Ronald Reagan (although the impact of these ideas on Jews, who took ownership of Zionism by the end of the nineteenth century, remains to be explored).
However, in the era of mass consumption, the impact of novels and other works of literature used for didactic or propaganda purposes should not be discounted. For instance, the scholarly attention paid to Leon Uris’s best-selling 1958 novel Exodus has been scant in comparison to its impact. More attention has been given to specifically Christian Zionist literature in this regard, such as Tim LaHaye’s best-selling Left Behind series of novels and Hal Lindsey’s The Late Great Planet Earth. These works were the product of a growing confidence among premillennialists who saw in the Israeli military victory of 1967 a confirmation of their worldview. The growing acceptance of these beliefs in American society can be seen as a reflection of the Cold War threat of nuclear annihilation, which to many premillennial Christians further seemed to indicate that the end-times were near. These phenomena led many members of mainstream American society to begin sharing an apocalyptic outlook similar to that of premillennial dispensationalists. However, most of those who were influenced by these ideas never became premillennialists themselves. Rather, these ideas reached them as part of the popular culture of the day.
After World War II, newsreels featuring images of emaciated Holocaust survivors and victims were viewed by large audiences throughout the United States. While viewers of the images were shocked and horrified, no mass mobilization for a Jewish state materialized based on Americans’ knowledge of the Holocaust, even as Jewish organizations cautiously lobbied for the creation of a Jewish state behind closed doors. Similarly, there was no widespread support for Zionism on the part of American Christians between the end of the war and the Eichmann Trial, and it is unclear what exactly gave Zionism legitimacy in the State Department after the war. Rather, it was only between Israel’s declaration of statehood in 1948 and the 1967 war—after the appearance of major pop-cultural works that cast Zionism in a positive light—that the US saw growing popular enthusiasm for Israel and Zionism.
Kyle Stanton is a PhD student in history at the University at Albany-SUNY. His research interests include Judaic Studies, nationalism, and the history of tourism.
Boarding the Siberian train in the mid-1930s, the German Sinologist Ernst Cordes traveled across the Manchurian-Russian border to the cities of Harbin, then to Manchukuo’s “New Capital” (formerly Changchun), and Mukden (Shenyang). Cordes crossed the border at Manchuria Station, went south to Harbin, and finally set foot in Beijing, the final stop of his trip to China. In his travelogue The Youngest Empire: Sleeping, Awakening Manchukuo (Das jüngste Kaiserreich. Schlafendes, wachendes Mandschukuo), Cordes drew a picture of what he saw as the sophisticated, vast, serious nation of China in the 1930s. Dissatisfied with the xenophobia and colonial mentality toward the Far East among his fellow Europeans of the time, Cordes considered his travelogue an opportunity to showcase the colors of China, its landscape, and city people’s everyday life in all its vicissitudes. For the most part, his travelogue reads like a classic perspective painting contemplating the horizon from afar, offering a penetrating look into the panorama. Yet from time to time, Cordes’s lens zoomed in on something, taking up a curious, sometimes rosy and tender tone. Arriving in Beijing in the summer, Cordes described one city evening that emerged out of the heat’s exhaustion, bursting with vivid hues of color:
It was on such a hot summer evening, close to nine o’clock, as the air started to cool down. The sun was like a big, blood-red fireball dropping behind the West Mountain. I was strolling along the Beiping city wall. The view one sees from there is unique; nothing like it can be found anywhere else in the world. The cityscape [of Beiping] is a straight and simple grid, setting up a background of balance and harmony. Accompanying [the cityscape] are the dark green, the golden yellow, and the blue of the glazed-tile roofs. It is a grandiose, beautiful picture that renders [the view] an unforgettable scene. The scene even condenses into a dreamy milieu of this old capital of China, giving out an even deeper [feeling of the city].
Interestingly, this piece of Cordes’s writing on his time in Beijing was later excerpted, translated into Chinese by Ling Shaung, and published in a local Beijing magazine, the Monthly Journal (Yue Bao), in 1937 under the title ‘The Walnut-Rubbing Chinese’ (PDF: 揉核桃的中國人 月報1937第一卷第一期). Translating this piece into English, I was fascinated by how the journal centered it on Cordes’s detailed description of his encounter with a Beijing gentleman who carried himself in a noble manner, a pair of walnuts in his palm. Cordes writes:
I took a close look at the man’s toys when he was not paying attention. They were two very smooth walnuts. Their ample color looked so deep it almost turned red. With the slow, swirling rhythm his fingers played, he seemed to touch and caress [the walnuts] with love. The surface of the walnuts’ shells was uneven and cracked, so the rubbing of the two walnuts created a slight sound – like the sound of teeth grinding food.
Curious about this rubbing and swirling of walnuts, Cordes struck up a conversation with the Chinese man. After exchanging courteous words, Cordes mustered his courage and asked, “Sir, what is that thing you are playing with in your hand?” Pulling his hands out from behind his back, the Chinese man showed him the two walnuts he had been treasuring for years. He explained:
These are normal walnuts, no different from the normal walnuts that we eat, except that they have smoother shells. These two [that I have here] happen to be very old walnuts. They have been played with since my great-grandfather was alive. The habit [of playing with walnuts] is an ancient custom. I cannot tell you how ancient it is, but it must have existed for more than a thousand years. You have probably read about this kind of walnut in old Chinese books. The older they are, the more valuable they become. But they have to be kept perfect and protected from damage. To achieve this, we have to hold the walnuts every day, to touch and play with them. This renders the scent of our bodies onto the walnuts, bathing them in it. They eventually become filled with our lives. As time goes by, they naturally become part of us, and we would never want to part with them. For this reason, it is most difficult to purchase a genuinely old pair of walnuts. You know that we Chinese people are superstitious: if you lost or damaged such a walnut, you would take it as a bad omen. Those old walnuts displayed in the antique shops are not real ones; they are counterfeits, produced to cheat foreign tourists. Of course, if you are lucky, sometimes you can buy a real pair. Yet that would cost you a great fortune! They are as expensive as jewels.
Mesmerized by this eloquent speech, Cordes urged the gentleman to explain this walnut-rubbing hobby further. “My friend,” the Chinese man replied, “if one has never played with this kind of thing, it is hard to understand its wonder and mystery. This thing carries the function of cultivating your soul.” With this manifesto, the Chinese gentleman elaborated on the ways in which one’s mind and body can be satiated with serenity through such a form of self-cultivation. Cordes recorded the conversation faithfully as it continued:
“Yes, it can function as a cultivation of your soul.” He repeated the phrase while pointing at his forehead, as if the secret of the soul resided there. “The slow motion, the rhythm of rubbing the walnuts, makes one’s spirit feel relaxed and comfortable. When I feel exhausted and unhappy, and worrisome thoughts catch up with me, depriving me of the rest I need, I always pick up this pair of walnuts. Look, I rub them this way: tenderly, smoothly, slowly, with complete focus poured onto the two walnuts. In this way I cast all mundane problems beyond the sky. When you rub the walnuts for many hours, you feel a slight stinging sensation on your palm. Then the stinging sensation climbs up to your shoulder, and finally you feel as if your brain were being massaged by a woman with tender hands. This makes all your worries go away. Both your mind and body are bathed in a limitless feeling of relief. You feel the comforting sensation of relaxation, as if you had just taken a hot bath. Oh, this thing of walnuts is a real magic for massaging the soul ….”
The mixing of the stinging, numbing pain created by rubbing the pointy shells of the walnuts with the relaxing feeling that arises afterwards lies at the center of the Chinese gentleman’s account of this urban hobby. In the 1930s, Beijing found itself in a political void: the Republican government had moved its capital to the southern city of Nanking in 1928, ending Beijing’s more than six hundred and fifty years as the country’s designated capital. Various forms of urban hobbies began to emerge and prosper in this period, alongside the folklore marketplaces mushrooming in the city. Before the Communist government restored Beijing as the capital of the People’s Republic of China in 1949, the city enjoyed a unique historical moment in which its urban identity could fully emerge, filling people’s everyday lives with teahouse theaters, folklore storytelling, street performances, and devoted personal, intimate hobbies such as the cultivation of walnuts. Outside the serene city walls, however, it was also a time of great historical turmoil for China.
Reading Cordes’s words printed in Chinese, on a yellowed newspaper page, in the spring of 2013, I was fascinated by this man and his sojourn across borders from Europe to Beijing, but most of all by his acute capture of the poetics wrapped up in a trivial urban hobby deeply embedded in the city’s milieu at that time. If the archive is to tell us something richer and subtler alongside the day-in, day-out scholarly labor we spend facing rubrics of documents plucked from a microfilm in a basement reading room, it is only through discovering unexpected wonders such as Cordes’s travelogue, gently folded into the archive. Was the cultivation of walnuts a personal escape from the series of wars and political upheavals engulfing China in the early twentieth century? Perhaps especially so, as it reflects Cordes’s own endeavor to escape a Europe simmering in turmoil in the 1930s for a China filled with colorful hues. I sit inside an office building on this November day of 2015, looking at the smog-infused grey sky of Shanghai outside my window and the pair of walnuts lying on my table, wondering.
I-Yi Hsieh is a teaching fellow of Global Perspectives on Society at NYU-Shanghai. Her research sheds light on the intersection of urban material culture, UNESCO’s world heritage program, and the rise of folklore markets in Beijing. She maintains an academia.edu page.