
We Have Never Been Presentist: On Regimes of Historicity

by guest contributor Zoltán Boldizsár Simon

It is great news that François Hartog’s Regimes of Historicity: Presentism and Experiences of Time has finally come out in English. The original French edition dates back to 2003, and my encounter with the book took place a few years later, in the form of its Hungarian edition. What I wish to indicate by mentioning this small fact is that Anglo-American academia is catching up with ideas that have already made their careers elsewhere. But to be more precise, it is perhaps better to talk about a single idea, because at the core of Hartog’s book there is one strong thesis, namely, that since the fall of the Berlin Wall and the collapse of the Soviet Union we live in a presentist “regime of historicity.”

The thesis makes sense only within a long-term historical trajectory, in relation to previous “regimes of historicity” other than the presentist one. Furthermore, it makes sense only if one comes to grips beforehand with Hartog’s analytical categories, which is not the easiest task. As Peter Seixas notes in a review, despite Hartog’s effort to articulate what he means by a “regime of historicity,” the term remains elusive. For the sake of simplicity, let’s say that it denotes an organizational structure which Western culture imposes on experiences of time, and that changes in “regimes of historicity” entail changes in the way Western culture configures the relationship between past, present, and future.

As to the historical trajectory that Hartog sketches, it goes as follows: around the French Revolution, a future-oriented modern regime of historicity superseded a pre-modern one in which the past served as the point of orientation, illuminating the present and the future. So far this accords with Reinhart Koselleck’s investigations concerning the birth of our modern notion of history. Conceptualizing the course of events as history between 1750 and 1850—the period Koselleck called the Sattelzeit—opened up the possibility and the expectation of change in the shape of a historical process supposedly leading to a better future. Where Hartog departs from Koselleck is in the claim that even this modern regime, which came about with the birth of our modern notion of history, has now been replaced by one that establishes its sole point of orientation in the present.

I believe that Hartog’s main thesis about our current presentist “regime of historicity” can be fundamentally challenged. I am with Hartog, Koselleck, and many others (such as Aleida Assmann) in exploring the characteristics of the “modern regime of historicity.” What I doubt is not even Hartog’s further claim that Western culture has left this modern regime behind, but the claims that this happened sometime in the late 1980s and early 1990s, and that the modern regime has been followed by a presentist one in which we now live. In other words, what I doubt is the feasibility of the story that Hartog tells about how we became presentist.

Let me tell you another story—the story of how we have never been presentist. It does not begin with the fall of the Berlin Wall, and it does not begin with the end of the Cold War. Instead, it begins in the early postwar period, when Western culture finally killed off the three major (and heavily interrelated) future-oriented endeavors it had launched since the late Enlightenment: classical philosophy of history, ideology, and political utopianism.

By the 1960s, skepticism towards the idea of a historical process supposed to lead to a “better” future had already discredited philosophies of history. The complementary endeavors of ideology and political utopianism shared this fate, given that the achievement of their purposes depended on the discredited idea of a historical process within which that achievement was supposed to take place. In other words, dropping the idea of a historical process necessarily entailed putting a ban on all the future-oriented endeavors that postulating such a process had rendered possible. These are, I believe, fairly well-known phenomena. Since Horkheimer and Adorno’s Dialectic of Enlightenment (1947 [1944]), or at least since Karl Popper’s The Poverty of Historicism (1957), Judith Shklar’s After Utopia (1957), and Daniel Bell’s The End of Ideology (1960), the bankruptcy of these three major future-oriented endeavors of Western culture has become a recurrent theme in intellectual discussion.

This is not to say, however, that traces of these endeavors did not remain present as implicit assumptions in cultural practices. It took a post-1960s “theory boom” and decades of postcolonial and gender critique even to attempt to deconstruct the prevailing assumptions of Western universalism and essentialism. But the point I would like to make is not about whether this proved a successful intellectual operation; rather, I would like to emphasize that, regardless of the question of overall success, Western culture imposed a presentism of sorts on itself as early as the 1960s by putting a ban on its own future-oriented endeavors.

Yet this self-imposed presentism remains only one side of the coin as concerns the ideological-utopian project. The other side is the proliferation of technological imagination and of the future visions launched at the very moment when Western ideological-political imagination was declared bankrupt. Think of the space programs of the same period, or of the sci-fi enthusiasm of the 1950s and 1960s in cinema and literature alike, which was inspired by actual technological visions, reflected in the foundation of artificial intelligence research as a scientific field that split off from cybernetics in 1956. Today, this technological vision is more omnipresent than ever before. You cannot escape it as soon as you go to the movies or online. Just like every second blockbuster or DeLillo’s latest novel, magazine stories and public debates now habitually address issues of transhumanism, bioengineering, nanotechnology, cryonics, human enhancement, artificial intelligence, technological singularity, plans to colonize Mars, and so on.

Hartog shows himself to be well aware of this technological vision, just as he remains aware of how the notion of history brought about by classical philosophies of history was abandoned in the postwar years, entailing the collapse of ideology and political utopianism. I can think of only one reason why he still fails to consider this the abandonment of the modern regime of historicity. It seems to me that Hartog mistakes matters of political history, like the fall of the Berlin Wall (1989) and the collapse of the Soviet Union (1991), for matters of intellectual history, like the skepticism toward grand ideological-political designs of the common future that had already taken root in the 1950s and 1960s. This must be the fundamental ground upon which Hartog places “the collapse of the communist ideal” alongside the fall of the Wall, as if the intellectual “ideal” could simply collapse together with the material collapse of the Wall or the political collapse of the communist bloc. This elision prevents Hartog and other critics from seeing, first, that the loss of Western ideological-utopian future-orientation was self-imposed and, second, that it resulted not in overall presentism but in the exchange of an ideological future-orientation for a technological-scientific one that emerged simultaneously with the abandonment of the former.

Of course, the emerging technological-scientific vision (again, the vision, and not necessarily the actual technological advancement, which one can debate) can be considered ideological as well, but that is beside the point. More importantly, the obvious omnipresence of the technological-scientific vision hardly enables us to talk about “a world so enslaved to the present that no other viewpoint is considered admissible,” as Hartog does. Not to mention that the temporal structure of the technological vision may be completely other than the developmental structure that underlay late Enlightenment and nineteenth-century future-oriented endeavors. If these past endeavors were deliberately dropped for good reasons, then whatever future endeavor Western culture ventures into simply cannot be a return to an abandoned temporality. If the future itself has changed, this necessarily entails a change in the mode by which we configure the relation of this future to the present and the past.

I think—and Hartog might agree if he reconsidered future-orientation—that the principal task of historians and philosophers of history today remains coming to terms with our current future vision. It is the principal task because insofar as we have a future vision, we imply a historical process; and if the technological-scientific vision is characteristically other than the abandoned ideological-utopian one, then the historical process it implies must be different too. What this means, to use Hartog’s vocabulary, is that we may already have a new regime of historicity, one which we have yet to explore and understand.

Yet even if we do not fully grasp what regime of historicity we live in, one thing is certain: it is anything but presentist. In fact, we have never been presentist.

Zoltán Boldizsár Simon is a doctoral research associate at Bielefeld University. He has lately devoted articles to the question of how our future prospects and visions inform our notion of history, not only in relation to the technological vision, but also with respect to our ecological concerns and within the framework of a quasi-substantive philosophy of history. You can also find Zoltán on Twitter and Academia.edu.


Chronology’s Forgotten Medieval Pioneers

by guest contributor Philipp Nothaft

According to a metaphor once popular among early modern scholars, chronology is one of the “two eyes of history” (the other being geography), which is an apt shorthand for expressing its tremendous utility in imposing order on the past and thereby facilitating its interpretation. Yet in spite of the undiminished importance chronology possesses for the study and teaching of history, latter-day historians tend to lose relatively little sleep over the accuracy of the years and dates they insert into their works. Assyriologists may continue to argue among themselves about variant Bronze Age chronologies, but for that happy majority focused on the development of civilization since the dawn of the first millennium BCE, errors in historical dating appear to be a local possibility rather than a global one. We may be wrong in attributing a Greek astrological papyrus or the foundation of a Roman military fortress to, say, the late second rather than the early third century of the common era, but even then we remain secure in the knowledge that the centuries themselves retain their accustomed place, containing as they do a fixed and familiar inventory of historical events. On the whole, it looks like the timeline is under our control.

Like so many amenities of modern life, this feeling of security is the hard-won result of a long process of trial and error, one that can be shown to have started a good deal earlier than commonly assumed. For the thirteenth-century Dominican philosopher Giles of Lessines, who turned to historical chronology in a pioneering Summa de temporibus (ca. 1260–64), the intervals of years between the major events of biblical and profane history were still a bewildering patchwork of individual puzzles, not all of which allowed for an easy solution. Problems were posed above all by the Old Testament, which, in spite of its status as a divinely inspired, and hence exceptionally reliable, record of history since the world’s creation, confronted Christian historians with a range of pitfalls. Even those who felt equipped to smooth out some of the contradictions encountered in the Bible’s chronological record still had to admit the existence of two discrepant versions: the Hebrew Masoretic text, represented to Latin Christians by St Jerome’s Vulgate translation, and the Greek Septuagint, which differed from the former in several numerical details. On Friar Giles’s count, the Greek translation added a total of 1374 years to the Vulgate’s tally of years between Creation and Christ, effectively doubling the nine different chronological readings he had been able to extract from the “Hebrew truth” and leaving him with a range of possibilities between 3967 and 5541 years. The margin of plausible intervals was mystifying and threatened to expose the scriptural exegete to the same sort of uncertainty that was encountered in profane chronicles and works of history, where scribal corruption, but also mendacity and ignorance on the part of authors, could mean that dates, years, or even centuries suddenly vanished or were retroactively inserted into the historical record.
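To make the arithmetic explicit (a reconstruction from the figures just given, not Giles’s own presentation, and assuming the Septuagint’s 1374-year surplus applies uniformly to every reading):

\begin{align*}
\text{lowest Hebrew reading} &= 3967 \text{ years}\\
\text{highest Hebrew reading} &= 5541 - 1374 = 4167 \text{ years}\\
\text{lowest Septuagint reading} &= 3967 + 1374 = 5341 \text{ years}\\
\text{highest Septuagint reading} &= 4167 + 1374 = 5541 \text{ years}
\end{align*}

Eighteen candidate dates for Creation, in other words, spread across a span of more than a millennium and a half.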

In spite of such discouraging signals, Giles of Lessines believed that he had identified a class of sources that was worthy of his unreserved trust: works of astronomy, which linked observed phenomena such as eclipses of the sun and moon to particular dates in history, usually identified according to the regnal years of ancient kings and emperors. Since these observations provided the raw material for astronomical theories, which in turn underpinned computational algorithms and the tables based on them, it was possible to assess their trustworthiness long after the event. Astronomical books, Giles wrote, “depend on years in the past being noted down with utmost certainty—otherwise the rules and principles they contain would not be dependable for the future” (Summa de temporibus, bk. 2, pt. 3, ch. 2). The predictive success of mathematical astronomy hence guaranteed the accuracy of the underlying chronological data, and vice versa. Friar Giles’s pièce de résistance in exploiting this insight was three lunar eclipses the Alexandrian astronomer Ptolemy had observed during the reign of the Roman emperor Hadrian, more specifically in the years 133, 134, and 136 CE. As a seasoned astronomical calculator, Giles was able to use the specific criteria of these eclipses (their time, magnitude, and location) to establish the interval between Ptolemy’s observations and his own present. The exercise gave him an entering wedge into the chronology of the Roman Empire, which, among other benefits, made it possible to confirm—against certain medieval critics—that the Church’s practice of counting the years since Christ’s birth from 1 CE rested on a sound historical basis.

Giles of Lessines was far from the only medieval author to experiment with astronomical techniques in an effort to put chronology on a sure footing. A prominent case is the English Franciscan friar Roger Bacon (ca. 1214–1292?), who had read Giles’s Summa de temporibus and who used astronomical tables to establish the date when Jesus died on the cross. His result of 3 April 33 CE, though unorthodox at the time, continues to be viewed as plausible by many contemporary scholars. In the following century, the application of astronomy to history was pursued by authors such as the Swabian astronomer Heinrich Selder, who used Ptolemy’s eclipses to bring order into biblical, but also ancient Greek and Near Eastern, chronology. Others, like the Benedictine monk Walter Odington (Summa de etate mundi, 1308/16) and the Oxford astrologer John Ashenden (Summa iudicialis de accidentibus mundi, 1347/48), tried to tame the timeline by bringing in assumptions of an astrological, as opposed to a merely astronomical, nature. In Odington’s case, his efforts to extract the age of the world from a calculation based on 360-year planetary cycles proved irreconcilable with biblical chronology, prompting him to boldly surmise that the numbers encountered in Scripture had to be read in an allegorical rather than a plainly historical way—an idea that stands in striking contrast to the assumptions made by present-day Young Earth Creationists.

Seven centuries down the line, we possess sufficient hindsight to discern more or less exactly where authors such as Giles of Lessines and Walter Odington went wrong or, conversely, where their arguments produced results of lasting validity. More so than any particular date proposed in these medieval texts, what remains unchanged is the fundamental soundness of their insight that the predictive capacities of natural science can furnish historical chronology with the sort of security its conclusions would otherwise be lacking. To this day, astronomical phenomena, from comets and the positions of stars to the intervals revealed by ancient eclipses, remain absolutely essential to the basic grid of ancient dates displayed in our reference works. In addition, the range of possibilities has been greatly expanded by novel chronological tools such as stratigraphy, radiometric dating, dendrochronology, and the study of Greenland ice cores.

Owing to these methodological developments, our conventional chronology of the past three millennia rests on such a solid basis that twentieth- and twenty-first-century attempts to subvert it have been staged almost exclusively from the fringes of respectable scholarship. One of the few flavors of such chronological revisionism to have captivated a larger audience is Heribert Illig’s so-called phantom time hypothesis, which argues for the fictitious character of the period we usually refer to as the Early Middle Ages. If Illig is right, which is more than unlikely, the reign of Charles the Great and all the other persons and events historians of medieval Europe assign to the years 614–911 were no more than an invention, retroactively inserted into the historical record by a cabal of powerful men involving the Holy Roman Emperor Otto III (980–1002) and Pope Silvester II (r. 999–1003).

Beyond the tiresome hermeneutics of suspicion and outright falsehoods that pervade the hypothesis propagated by Illig and his followers lies a valuable reminder to the effect that historians should, at least on occasion, try to assure themselves of the foundations on which their accepted narratives rest. In a sense, the revisionists are indeed correct in assuming that some of these foundations can be unearthed deep in the Middle Ages. Their actual shape, of course, looks very different from what they would have us believe.

C. Philipp E. Nothaft is a post-doctoral research fellow at All Souls College, Oxford. He is the author of “Walter Odington’s De etate mundi and the Pursuit of a Scientific Chronology in Medieval England,” which appears in the April 2016 issue of the Journal of the History of Ideas.


Historicizing Failure

by guest contributor Disha Jani

Making meaning out of the past requires sifting: turning flotsam and jetsam into units of time and entities of subjecthood. One of the most basic ways in which historians sift is by using beginnings and ends as markers. The debris left over from this sifting, the fragments of failure, is what fascinates us when we ask, “Why didn’t they succeed?” or “Why did this come to an end?” When the end of an era is what has drawn us to it in the first place, how does this making of meaning in retrograde affect our narrative writing and historiographical context? When we study the end of things, we encounter a particular set of questions. I believe the first among them should be, “Why does this failure fascinate me?”

There is a particular historiographical problem surrounding the study of failure. Certain phrases or dates are widely associated with a spectacular finish. Cleopatra’s reign. The League of Nations. 1989. Other historical moments are swept from our collective memory by virtue of their quite lackluster ends—and it falls to the historian to resurrect them, and to explain why we had never heard of them in the first place. Infamy and erasure both color scholarship, because they often drive the aims of the historian herself, in her role as interpreter between past and present. When a scholar selects such a topic for study, the reason for this choice is a curiosity about how something came to an end, how a particular individual or group failed to achieve their aims. Historicizing the concept of failure, then, has implications for why and how we make meaning out of particular moments in history.

What constitutes failure in the eyes of the historian? The War of 1812 was miraculously won and lost by both the British and the Americans simultaneously: the British defended their colonies, and the Americans maintained that “not one inch of territory [was] ceded or lost.” What a happy conclusion that was! No one had to go home a loser, and we were treated to such fun re-enactments at the bicentennial.

Failure in the eyes of the historian can come from one of several places. If the historical subjects stated their aims at the beginning of a project and were not able to fulfill those aims, that can constitute failure. If a bounded and sovereign entity, such as a nation-state or empire, ceases to exercise control over its former territories, or its former territories remain basically the same but are re-named and ruled differently—that can constitute failure. And even when the stated aims of a project are achieved by most accounts but its central or guiding logic is not upheld, historians can see that too as a failure. The Spanish Popular Front. The Roman Empire. American democracy in 1776. In all of these instances, once a failure has been identified and defended, the historian can begin to explain why it happened: why her subjects were unable to fulfill their aims either materially or essentially. This designation of success or failure cannot occur in a vacuum, of course. Often, the identification of a marker of failure comes from a popular understanding of that event, and from the historian’s desire either to explain it or to debunk it.

Making meaning out of a subject’s failure is similarly manifold. On the most basic level, the chronological markers of beginnings and ends can perform the intellectual and affective work of periodization. For me, the word “interwar” carries all the hope and loss of the 1920s and 1930s, but to a historian of the Middle Kingdom, it might be meaningless. When we lay varied timelines on top of one another, more and more complex renderings of the past emerge—as we consider legal milestones, influential popular culture, social and demographic shifts, and linguistic divides alongside the reigns of kings and ministers.

When narrative is the medium through which we deliver this meaning to our readers, the trajectory and emotion our stories take on can be governed by our knowledge of our subjects’ fate. When a political project lies at the center of a study, the stakes for those involved were often life and death. When a subject has great cultural or moral significance, or exists larger than life in contested popular imaginations, it is difficult, and perhaps dishonest, to try to step outside the looming shadow of the eventual end. For example, it is difficult not to write a melancholy or angry account of a quashed slave rebellion. Even accounts of the joy and creativity within such a project can read tinny and sharp against the harsh knowledge of the centuries that followed. The dramatic irony the reader dubiously enjoys allows the historian to use detail and sources in a particularly captivating way, since we know what our subjects do not—they will fail, and their friends and family will die. Identifying turning points in a series of events becomes almost perverse when you know each decision or happenstance leads, thundering, down a path of no return.

Finally, the historian must defend the significance of her subject to the reader (and her colleagues) by learning a lesson from it. She may conclude that a project did not, in fact, fail—due to some criteria that were not considered in the initial assessment. She may conclude that a project was a failure, but remains significant because of how it changed its participants, or changed its environment, or paved the way for another, more obviously significant occurrence. If the failure was spectacular and earth-shattering, then this defense is unnecessary, but the historian needs to say something new about an event everyone considers common knowledge.

This is, of course, if you allow your narrative to be governed by the failure it explains. What could be the alternative? Historians tend to acknowledge, to varying degrees, the effect of their positionality on their work and use of sources. From our perch in the present, does a methodology exist that would allow us to suspend our knowledge of our subjects’ eventual failure, and proceed, as it were, spoiler-free? I don’t think that’s something anyone wants or believes to be possible, but it is an interesting intellectual space to imagine, if only for a moment.

Let us imagine that we don’t know how the story ends. Of course, such a space exists, and it is called the present. How do historians assess the significance of political projects in the immediate wake of their demise? Accounts of the events that matter to us today were written from the moment they happened, and part of assembling our own narratives involves sifting through the ones that came before. But how do we, as historical actors ourselves, historicize the successes and failures of our present and recent past?

The Occupy movement began on July 13, 2011, when Micah White and Kalle Lasn of the Vancouver-based Adbusters magazine released a tactical briefing to their mailing list calling for the occupation of Wall Street and a new form of citizen-led protest. This year marks five years since the beginning of Occupy, and to many activists, writers, and organizers, the movement is far from finished. In March, White published his book The End of Protest, calling for a shift away from old protest tactics and towards a rescaling and reorientation of the terms of revolt. The book describes the history of protest and the form a new protest might take, and it emphasizes the spiritual and non-hierarchical nature of this new protest as key to its success and resonance. I spoke with White last week about how he sees the history of the movement he helped create, and how we might view the failures of our fledgling century in light of how we have written about and internalized the successes and failures of the last one.

White referred to Occupy as a “constructive failure”:

DJ: What does that mean for Occupy’s role when we think about protest in the 21st century?

MW: I think that’s the only real revolutionary way to look at it. We are part of a five-thousand-year revolutionary uprising that has been passed from generation to generation. Everything that’s come before has been in some way, a constructive failure. The Russian Revolution was a constructive failure. The Paris Commune was a constructive failure…. There’s a tremendous inertia within contemporary activism not to learn from our past failures… the goal has to be revolution, but people don’t want revolution, they’re afraid of revolution. But seeing things as a constructive failure allows you to move closer to a revolution.

As participants in the present moment, we begin immediately to historicize the present and thereby forge the recent past. It is impossible to know the signposts future historians will use to separate us from our parents or grandparents, creating eras and pre- and post- where there once were just lives lived. But it is clear that we are leaving historians much more preservable data than they could ever sift through, and much more than our predecessors were given. Knowing this, we will see in our lifetimes the grand narratives of 21st-century failure written and re-written. The particular problems involved in writing the history of a failed project deserve our careful thought, since they reveal a great deal about what we consider a loss and what we consider collateral damage. Lives lost during a conflict can amount to an overall failure in policy, but a peace conference in Geneva can render it successful all over again. It is already happening: histories of the invasion of Iraq, of the 2008 housing bubble, of the Syrian civil war, of austerity and of police brutality—these could become the crises that define our time, used as buzzwords and explanatory notes on why the decades that followed unfolded as they did.

Disha Jani is a writer based in Toronto. Her research follows the movements and writings of anti-imperialist organizers in the British Empire between the First and Second World Wars. Broadly speaking, Disha is curious about the intersection of socialist, post-colonial, nationalist, and imperial histories, and the ways in which memory and narrative mediate the past, present, and future for historical subjects and people living today. She will be a Ph.D. candidate in History at Princeton University in the fall.


The Methodology of Genealogy: How to Trace the History of an Idea

by guest contributor Yung In Chae

We all know the story of Man the Hunter: thousands of years ago, cavemen went out and hunted food for cavewomen and cavechildren, who sat idly at home and depended on this masculine feat for survival. Physical strength was the most important attribute in primordial times, so it was only natural that men, whose physical strength surpassed (and for the most part continues to surpass) that of women, ruled the world. Even now, some people will refer to Man the Hunter in order to justify rigid gender roles: look, they say, evolutionary biology is on my side.

This, of course, is problematic: even if that had been the practice of cavepeople, we aren’t cavepeople. But Man the Hunter isn’t even true in the first place. In an article entitled “Shooting Down Man the Hunter” for Harper’s Magazine, Rebecca Solnit references a plethora of anthropological evidence that contradicts it. The nuclear family described in the story resembles not so much life thousands of years ago as the gender norms of sixty years ago.

My point here is not about the Man the Hunter myth itself, but about something larger that it illustrates: the genetic fallacy. You commit a genetic fallacy when you appeal to the origins of an idea in order to make a claim about the truth of the idea. As Brian Leiter put it in a podcast I was listening to, “If you learn that your beliefs were arrived at the wrong kind of way, that ought to make you suspicious about them.” Similarly, Nietzsche, in writing The Genealogy of Morals, wanted to say that if morals come from a not-so-good place, the notion of having morals is not obviously good in itself. Genealogies, both true and false, can be and have been used to prop up and discredit, empower and oppress. Genealogy is a theoretical practice that has tangible consequences; it can be provocative and even dangerous.

How does one trace the history of an idea, anyway? We do it often, but it is unclear how to do it well and with methodological rigor. Nevertheless, in this post I wish to question what it means to go back to the “origins” of something, borrowing ideas from Nietzsche, Foucault, and Agamben.

In 1971, Foucault published an essay entitled “Nietzsche, Genealogy, History,” in which he discusses Nietzsche’s use of “origin” words: Ursprung, or “origin”; Herkunft, or “descent”; and Entstehung, or “emergence,” “the moment of arising.” For example, Nietzsche uses Ursprung or Entstehung for the origin of logic and knowledge in The Genealogy of Morals, and Ursprung, Entstehung, or Herkunft for the origin of logic and knowledge in The Gay Science. Some uses don’t seem to mean anything beyond “origin” in the general sense, and in those cases the terms are more or less interchangeable. But other uses of Ursprung, specifically, are what Foucault calls “stressed,” and at times have an ironic cast (e.g., for morality and religion).

At the beginning of The Genealogy of Morals, Nietzsche says that his objective is to find the Herkunft of moral preconceptions. He started this project because he wanted to find the origin of evil, a question that he finds amusing in retrospect and calls a search for Ursprung. Later on, he refers to genealogical analyses as Herkunfthypothesen, despite the fact that in a number of his own texts that deal with the origins of morality, asceticism, and justice (starting with Human, All Too Human), he uses the term Ursprung.

Nietzsche, then, exhibits a good amount of skepticism about Ursprünge in The Genealogy of Morals, a rejection of his earlier views. According to Foucault, Nietzsche doubts that it is possible to find origins because “it is an attempt to capture the exact essence of things, their purest possibilities, and their carefully protected identities; because this search assumes the existence of immobile forms that precede the external world of accident and succession” (78). In other words, there is no singular point at which a pure and essential “morality” or “religion” popped up. To talk of origins is actually against the spirit of genealogy.

Foucault thinks that Herkunft and Entstehung are more appropriate terms, because they do not try to “capture the exact essence of things.” He describes Herkunft thus: “…the equivalent of stock or descent; it is the ancient affiliation to a group, sustained by the bonds of blood, tradition, or social status” (80-81). The problem with Herkunft is that it sometimes leads us to “pretend to go back in time to restore an unbroken continuity that operates beyond the dispersion of oblivion” (81), which he argues is the wrong way to trace the history of an idea. After all, many ideas have not come down to us in a coherent, unbroken chain—there are just as many discontinuities as there are continuities, if not more.

There is something especially attractive about Entstehung, because it neither assumes that an idea has an essence nor requires continuity. Instead, it allows for messy interplay in bringing something about. In fact, it is precisely this clash of forces that Foucault finds interesting:

Emergence is thus the entry of forces; it is their eruption, the leap from the wings to center stage, each in its youthful strength. […] As descent qualifies the strength or weakness of an instinct and its inscription on a body, emergence designates a place of confrontation, but not as a closed field offering the spectacle of a struggle among equals (84).

I, too, think that Entstehung is appealing because it explains why writing the history of an idea is a fascinating but by no means neat process. If Entstehung means that an idea emerges without the pretense of being essential or linked to something that came before it, then I think this is a more honest reflection of how the history of ideas seems to work.

Perhaps when we do genealogy, we are looking for emergences, not origins, and when we claim to find origins we are from the beginning negating the very mission we propose to carry out. But I also find myself drawn to the argument Agamben makes in his discussion of Foucault’s essay on Nietzsche in The Signature of All Things. Agamben quotes the section in which Foucault says, “The genealogist needs history to dispel the chimeras of the origin.” He then points out that the French word that Foucault uses for “dispel” is conjurer (like the English word conjure). Conjurer, like “cleave” or “screen,” is one of those strange words that is also its own opposite—it means both “to conjure up” and “to dispel.”

Foucault probably meant to have the genealogist dispel, not conjure up, the chimeras of the origin, but the wordplay could not have been lost on him. “Or perhaps the two meanings are not opposites, for dispelling something—a specter, a demon, a danger—first requires conjuring it […] The operation involved in genealogy consists in conjuring up and eliminating the origin and the subject” (84), Agamben argues. Foucault more or less agreed, stating in a 1977 interview, “It is necessary to get rid of the subject itself by getting rid of the constituting subject, that is, to arrive at an analysis that would account for the constitution of the subject in the historical plot” (84).

When did things become different? What changed? It is the change itself that is important in tracing the history of an idea. But to speak of emergence is also to speak of a beginning, even if we do not lay claim to something as pure and essential as an origin. Agamben asks the question that Foucault does not answer:

But what comes to take [the origin and the subject’s] place? It is indeed always a matter of following the threads back to something like the moment when knowledge, discourses, and spheres of objects are constituted. Yet this “constitution” takes place, so to speak, in the non-place of the origin. When then are “descent” (Herkunft) and “the moment of arising” or “emergence” (Entstehung) located, if they are not and can never be in the position of the origin? (84)

So our investigation must go ad originem nevertheless, and expect to find something else. In professing to find origins, we deny they ever existed, and in order to deny they exist, we conjure up the origins.

Yung In Chae is a Master’s student in History and Civilizations at the École des hautes études en sciences sociales, where she is writing about Simone de Beauvoir’s classical education. She graduated from Princeton University in 2015 with an A.B. in Classics. She is also a Research Fellow at the Paideia Institute and edits Eidolon, its online journal.


Towards a Global Intellectual History?

by guest contributor Sarah Dunstan

Speaking of the emerging calls for transnational and global intellectual history in a 2011 article, David Armitage wrote that ‘[w]hat is certain is that the possibilities for such a global history – or even for multiple histories under this rubric – remain enticingly open-ended.’ In the five years since that article’s publication, scholarship contemplating the potential of such a history has proliferated. Not least of these contributions is Samuel Moyn and Andrew Sartori’s brilliant edited collection Global Intellectual History, published in 2013. That volume – like their essay posted on the Imperial & Global Forum – sought to ask the hard questions about the idea of ‘global intellectual history’: not only how historians might write such histories, but whether or not the sub-field should exist at all. The diverse range of approaches in the volume certainly speaks to the multiplicity of possible understandings and methodologies historians might adopt.

As is clear from Sanjay Subrahmanyam’s critical review of Global Intellectual History in History and Theory, there remain those uncomfortable with the addition of the term ‘global’ to the sub-field of intellectual history. Subrahmanyam worries that the word ‘global’ is just another fad that offers little analytical benefit. Moyn and Sartori have convincingly responded to his criticisms of Global Intellectual History, but Subrahmanyam’s review, like the volume itself, illustrates rather clearly that we have not yet arrived at a consensus on what might constitute such a history. Questions abound, including: Whom do we mean when we use the term ‘intellectual’? How do we approach the spread and reception of an idea, particularly when we consider the permutations and changes that occur when ideas cross linguistic and cultural barriers? What role is played by the everyday experiences of the men and women whose ideas travel the world?

One possible way of answering these questions lies in the latest book from the German historian Michael Goebel. Entitled Anti-Imperial Metropolis: Interwar Paris and the Seeds of Third World Nationalism, the book foregrounds the role of migration to and from interwar Paris as a precipitating force in creating movements against imperial structures. Crafting a compelling narrative that rests upon a platform of meticulous archival research, Goebel argues that the experiences of migrants in interwar Paris were a significant catalyst for the spread of nationalist thinking and anti-imperial movements, both within the French empire and beyond.

As Moyn and Sartori have argued, one of the key questions facing historians attempting to construct global intellectual histories pertains to the exploration of the ‘historical processes that have made intellectual exchange and connection possible beyond conventional national boundaries.’ The vehicle of migration has not gone unattended in the historical literature on Paris, but it is one that deserves more attention in intellectual histories, as Goebel demonstrates. Rather than conceptualising his historical actors as ‘intellectuals’, Goebel treats them first and foremost as migrants or, as he calls them, ‘ethno-entrepreneurs’. This approach is key because it allows him to interrogate community formation as the origin of intellectual connections between groups that are usually studied as distinct entities. In the case of Paris, historians have long assumed physical proximity to be a sufficient cause of such connections. Goebel, however, takes this thinking further, demonstrating how physical proximity, in conjunction with the experience of similar pressures and despite linguistic and cultural difference, made becoming anti-imperial a transcultural phenomenon, albeit one that manifested in myriad forms across different migrant groups. As such, Goebel’s methodology is a departure from more traditional intellectual histories. He himself acknowledges that his work is ‘much more of a social history of migration than an intellectual history of anti-imperialism’, but he contends that it is impossible to understand the processes behind the latter without the context of the former.

Goebel’s argument about the significance of treating these individuals and groups as ‘ethno-entrepreneurs’ rather than intellectuals, as migrants rather than activists, is most compelling for groups such as the Algerian and Vietnamese communities. The Latin American presence in Paris during this period is not usually treated by migration histories, not least because Latin Americans comprised a much wealthier echelon of Parisian society at the time. Their own accounts of time spent in the French metropole, however, characterise it as a turning point in their political development. This may have owed less to socio-economic hardship of the kind endured by the Algerian or black African contingents, and more to the opportunities Paris offered for exchange between nationals of Latin American countries not usually in contact, which Goebel argues created a sharper sense of solidarity against US imperialism. What is more, the gap in opportunity between those who were subjects of the French empire and those who were foreigners in the French capital also impelled thinkers grasping for a cogent understanding of their situation to come to terms with the realities of imperialism.

Goebel thus argues that we can trace the global spread of the claim to national sovereignty to the urban space of interwar Paris. His work is outstanding, and his study might yet provide a methodological blueprint for writing ‘global intellectual history’. Of course, there will be those who question the combination of social and intellectual history. Paris, too, might be a unique case. In the book’s conclusion, Goebel points to the sheer magnitude of the populations of colonial subjects in the city during the interwar period as exceptional. Likewise, for the purposes of pursuing an intellectual genealogy of nationalism and anti-imperialism, it is impossible to ignore the significance of the republican discourse embedded in the Parisian political landscape, a feature not so true of cities such as London or Berlin. Moreover, in privileging the examination of community and group formation over a more intensive study of the individual trajectories of those who were later at the forefront of anti-imperial movements – like Ho Chi Minh, Zhou Enlai, and Léopold Senghor – it is impossible to get at the nuances of their ideological development. Of course, Goebel’s study never set out to do such a thing: my comment here is not a criticism of his work but an observation about the implications of anchoring an intellectual history so firmly in a social history framework.

By focusing upon the pathways of the Parisian landscape, both literal and figurative, Michael Goebel has offered a persuasive study that goes some way towards answering the question of why nationalism and anti-imperialism became ‘global’ ideas in the aftermath of World War II. His work elegantly links the ‘local’ with the ‘global’ through the meticulous exploration of rich archival sources not usually studied in tandem. The kinds of substantive and methodological questions it raises for the practice of ‘global intellectual history’ are sure to invite a great deal of reflection and debate. Personally, I would suggest that Anti-Imperial Metropolis is itself a persuasive argument for intellectual historians to explore the possibilities of social history as a means of understanding the ways in which intellectual exchange has been facilitated on a global scale. For the moment, it seems as if ‘global intellectual history’ is here to stay, an observation confirmed by the launch in the last fortnight of a new journal, Global Intellectual History.

Sarah Dunstan is a PhD Candidate on an Australian Postgraduate Award at the University of Sydney. She was a Fulbright Postgraduate Fellow at Columbia University New York for the 2014-2015 academic year and is currently a Visiting Postgraduate Scholar at Columbia Global Centers, Paris. In 2016, she was awarded the James Holt Prize for the best article published in the Australian Journal of American Studies in the previous two years. She has a new article forthcoming in Callaloo. Her research focuses on francophone and African American intellectual collaborations over ideas of rights and citizenship.


The Archival Agenda: Thinking Through Scientific Archives at the Royal Society

by guest contributor Brooke Palmieri

Imagine that an archivist’s child were raised from birth as a professional archivist, to see how they would document their own life. Imagine that toddler making a finger painting, taking a digital image of it, filing the physical copy away in an acid-free folder, alerting its parents to the proper terms from the Dublin Core Metadata Initiative, uploading it to Omeka. Even with training from birth by archivist parents, decisions would be made about what to keep as much as what to discard, about how to form an archive from preservation as well as loss. Maybe there would be moments of sabotage, like the burning of notebooks filled with teenage poetry. The experiences that matter most in life, like vacations, would be over-represented.
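For readers who have never met the standard, a Dublin Core description of that finger painting might look something like the sketch below (a hypothetical illustration in Python; the “dc:” element names come from the real DCMI Metadata Element Set, but the record itself and all its values are invented):

# A hypothetical Dublin Core record for the toddler's finger painting.
# The "dc:" keys are genuine DCMI Metadata Element Set terms;
# the values are invented for illustration.
finger_painting = {
    "dc:title": "Untitled Finger Painting No. 1",
    "dc:creator": "The Archivist's Child",
    "dc:date": "2016-06-02",            # date of creation
    "dc:type": "Image",                 # a DCMI Type Vocabulary term
    "dc:format": "image/jpeg",          # MIME type of the digital surrogate
    "dc:description": "Tempera on paper; physical copy retained in an acid-free folder.",
}

A record along these lines is what a platform such as Omeka, which uses Dublin Core as its base metadata schema, would then store and display.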

Beyoncé in the archives (wallpaperstock)

Or imagine Beyoncé’s “Crazy Archive”: it includes every photograph, interview, and performance she’s ever done, combined with tens of thousands of hours of footage. Unlike most, Beyoncé has the resources to employ “visual directors,” who have documented every day of her life since 2005.

Neither example provides total recall; each offers, rather, a lesson in managing expectations: all archives are meant to include just what they include, and it’s only the expectations of outsiders that find fault or shortcomings in their contents. For example, it is likely that scholars in days to come will ask: what did Beyoncé do every day of her life before 2005, before the birth of her daily archive?

Either way, experimenting on children or following the example of Beyoncé is the closest we could get to a true “archival agenda” shaping the product, rather than some other agenda that happens to manifest as an archive. Which is to say that there is no such thing as an archive that is first and foremost archival. Archives are secondary in nature, and that isn’t necessarily a bad thing: it doesn’t take away from their importance in preserving heritage, nor does it make reading their contents any less empowering or infuriating or educational. Which brings us to the archives of scientists, and an excellent conference that was held on 2 June at the Royal Society: “Archival Afterlives: Life, Death, and Knowledge-Making in Early Modern British Scientific and Medical Archives.”

One of the most important remarks of the conference came from Victoria Sloyan, an archivist at the Wellcome Library. When asked about the contents of the Collecting Genomics Project, which brings together born-digital material from a number of scientists at different institutions who were part of the Human Genome Project, Sloyan stressed that different people have different ideas of what an archive should contain, and act accordingly. Some scientists hand everything over; some preserve only evidence enough to trace the progress of “successful” ideas. In practice, individual archives fall somewhere between these extremes, but such decisions haunt the work of anyone accessing their material.

Theoretically, not much has changed between Sloyan’s very contemporary moment of archival collection, highly transparent in its obstacles (including born-digital material and a whole lot of floppy disks), and those of scientists past. The multiplication of media platforms has multiplied the kinds of things that can be found in the archive, and the trouble it takes to get them there, but human emotions around the preservation of knowledge only have so many expressions. It was useful, then, to ask, as the conference did: “How did disorderly collections of paper come to be the archives of the Scientific Revolution?” Presentations sought to examine the many factors baked into the survival of archives: for instance, Elizabeth Yale looked at the “entanglement of emotion and paper” that is the naturalist John Ray’s archive. Its survival depended on his wife and daughters and the relationship his publishers maintained with them, but also on the significance of his biographer Samuel Dale’s role in establishing a legacy for Ray. That legacy also happened to include rewriting and recasting Ray’s lifelong religious beliefs from nonconformist to Church of England.

This is easier to stomach within the libraries and archives of the humanities: we have long sought to make silences speak, fill in the gaps left by the record’s shortcomings, and question received truths and their canonical authors. For decades, the dialectic of the humanities archive has been between preservation and loss, histories from above and histories from below. By contrast, the dialectic of the scientific archive tends to formulate itself as objectively observed rather than subjectively felt. Conferences like “Archival Afterlives” implicitly fire the opening shots in a bigger battle to knock the sciences down a peg, to reframe the argument as one familiar to humanities classrooms, as something we’ve known all along. The information of scientific archives is more likely to be medical, natural, or celestial: symptoms of disease, fossils, planetary angles—but that makes it no less subject to the inherent distortion of human intervention.

A page from James Petiver’s herbarium (c. 1718), featuring specimens of the Carolina Laurel Cherry (Prunus caroliniana Aiton) and the Virginia Willow (Itea virginica L.). (East Carolina University Digital Collections)

In his presentation, Arnold Hunt recovered the reputation of James Petiver, dragging him out from the shadow of Hans Sloane and piecing together his dispersed archive and natural history collection to reveal a methodical, self-taught collector of high quality: only his humble beginnings had caused him to be dismissed as a serious naturalist thinker. In other words, religion, politics, and class have always mediated admittance to the pantheon of thinkers we exalt. And as if contemporary circumstances were not enough of a minefield for the archival process, Leigh Penman’s presentation pointed to fresh dangers. His work on Samuel Hartlib’s papers highlighted the formative role of loss: “all his best papers” had “suffered… embezzlement” in Hartlib’s lifetime, including his universal bibliography, meaning that what survives is a collection of “loose papers” of little value to Hartlib himself.

Overall, “Archival Afterlives” subjected scientific knowledge to the more elemental truth: that every archive is an afterlife. Despite the many uses and transmissions of archival information, we cannot forget that there was a first purpose, an initial passion, and a strategy for survival that saw the production and persistence of an archive, and that sets a path for possible uses afterward. The archive begins as the residue of some encounter or event, and only later does it accrue layers of meaning through varieties of use. Maybe this is one of the greatest paradoxes of endurance: renewed interest in mining the archives ensures their survival, both as cited sources and as bodies of material that require funding to remain intact, but at the same time, new agendas have a tendency to obscure old ones, and it’s the old ones that archives have a difficult time preserving. Nevertheless, it is crucial to work to understand their occult influences over the shape of the historical record as much as the scientific record. The first step in doing so is in making the archival agenda visible. Otherwise, we risk misidentifying invisibility as infallibility.