Book Forum: Symbols, collective memory, and political principles 

by guest contributor Andrew Dunstall

The JHI Blog is pleased to announce a new occasional feature, a forum bringing together faculty across disciplines to discuss recent works in intellectual history over consecutive Fridays. The inaugural forum is devoted to Jeffrey Andrew Barash’s book Collective Memory and the Historical Past (University of Chicago, 2016).

Jeffrey Andrew Barash has written a deeply scholarly book that is both a philosophical work and a history of ideas. The one offers a conceptual account of collective memory, the other a narrative of changing conceptions and ideological uses of “memory.” In both cases, he argues that careful attention to the border between memory and history is politically significant for criticising appeals to mythical bases of political unity. I have some thoughts on that, but first it is worth summarising what I take to be the book’s key contributions.


Obama’s Inaugural Celebration at the Lincoln Memorial (Steve Jurvetson)

The main contention of the book is this: collective memory designates a restricted sphere of past references. These particular references operate in practical life within a community, a “web of experience” (p. 52). Crucially for Barash, such a web consists of symbols (defined on p. 47-50). Symbols “confer spontaneous sense on experience by lending it communicable order at the primary level of its organization and articulation” (p. 47). What Barash means here is not that we attach various symbols to our everyday experience in a secondary process; rather, our perception is originally patterned in symbolic ways according to our learning, habits, and interactions.

Barash illustrates with a contrast: the quiet, “sacred” space of a church, and the banal (but perhaps equally quiet and still) space of a car garage. Each space is meaningful in perception because we are acquainted with its style and the activities that take place in it. Even when we are not familiar with the setting, we pick up cues from others or from elements of the scene. Experience is hermeneutic, which Barash calls “symbolic embodiment.” Our ability to communicate with each other in, and about, our experiences rests on this spontaneous symbolising activity.

Also note that we are not locked into our original perceptions. Experience is neither a private language, nor fixed, nor voluntarist. We constantly layer and re-layer interpretations of our lives as a matter of course. We can, for instance, understand somebody who describes their car garage as a shrine or sacred space, transporting the qualities of the cathedral to the domestic site of mechanical pursuits. We are readily able to creatively adapt our references through conversation and imaginative reconstructions. We can understand each other—even when we have radically differing interpretations.


Martin Luther King press conference 1964 (Marion S. Trikosko)

Thus our memories come to be shared with those with whom we regularly interact; for Barash, collective memory is this web of interaction. He gives an excellent example in his analysis of Martin Luther King’s famous “I have a dream” speech, which draws on his own memories of watching it on television (p. 55). This sets up well the key distinction between experience “in the flesh” and experience mediated by communication technologies (analysed in detail in chapter five). An important clarification is made here. When we are talking about events that are supposed to be real—like King’s speech—we expect that they will mesh with the other references our fellows make, and which are materially present in our environment. When they do not cohere, we are justified in being suspicious of the claims being made. And that disjunction motivates a critical reappraisal. Symbols themselves do not differentiate between real and fictional states, but their overall network does. Thus imagination is essential to the “public construction of reality,” but such a construction is neither arbitrary nor imaginary (p. 49).

Collective memory is therefore neither a fiction nor a mere metaphor; it refers to a web of symbols formed through communicative interaction, reaching as far as that sphere of interaction does—across several generations, and within the context of a shared language, set of public symbols, and common purposes. Barash, however, carefully emphasises the corresponding diffuseness, differentiation, and inconsistency of such memory—and he insists on its epistemological limitation to a living generation. Knowledge of a life passes beyond living ken when it fails to be maintained in any real sense by a coming generation. Too often, such discontinuities are not benign: displacement, war, and oppression can be their cause.


Mémoire (Bernhard Wenzl)

Collective memory needs to be distinguished from the reflective activity of historians, as Barash’s choice of title for the book makes clear. The critical targets of the book are twofold. On the one hand, recent scholarship has conflated the work of history with the idea of collective memory (see p. 173ff.). On the other hand, Barash is all too mindful of the way in which collective memory is invoked for political purposes. There is a normative-critical point to the distinction between collective memory and historical work. The historian or scholar of collective memory is someone who holds our memory work to account, scrupulously attending to myth-making and historical over-reach in political discourse. The historian is in the business of re-contextualising, of rediscovering the coherence of a set of events in the real. And so Barash takes a position on the reality of the past, and we see the importance of establishing the level of the real in the analysis of symbols (p. 175ff.).

We can see some clear implications for historical methods as a result of Barash’s careful analysis. Collective memory is a part of how archives and diverse sources come into existence. History work needs collective memory, and it needs to understand its various forms. However, rather than take up debates about the reality of the past, or the distinction between the forms of collective memory and historical understanding, I am interested in a rather different and less explicit theme, which my preceding commentators have already raised. Let us think a bit more about the normative and political emphasis that Barash lays on historical understanding.


Fête de l’Être suprême (Pierre-Antoine Demachy, 1794)

While collective memory is limited to living generations, there are nevertheless long-term patterns to community life that reach beyond memory. Martin Luther King, for example, called attention to the political promises of the Declaration of Independence, and of Lincoln, in the shadow of whose memorial he stood with those who had gathered with him. King, Lincoln, and their contemporaries belonged to a larger unity, an ethos: a particular rendering of democratic freedom. Michael Meng argues that Barash is drawing on a democratic emphasis in his insistence on finitude. Êthos, as in Aristotle’s Politics, translates as “custom.” Barash’s symbolically oriented theory incorporates ethos as an “articulation of long-term continuities in the symbolic reservoir upon which collective memory draws” (p. 105). While Barash’s examples consistently point in progressive and radically democratic directions (he also discusses the French Revolution’s republican calendar), the concept of ethos launches an analysis of the ideological invocation of memory by radical right-wing movements.


Front National (Marie-Lan Nguyen, 2010)

Right-wing groups sometimes evoke age-long memory in direct connection with social homogeneity. There is a French focus here: Maurice Barrès, the late nineteenth- and early twentieth-century conservative political figure and novelist, and Jean-Marie Le Pen and his party, the Front National, are Barash’s primary examples. Le Pen wrote in 1996: “When we denounce the terrible danger of the immigration invasion, we speak on behalf of our ancient memory” (cited at p. 109). Here the theoretical construction of symbolic collective memory reaches its sharp, critical point. For the finitude of collective memory, anchored in a living generation, disallows the age-long memory and homogeneity of national identity that the right calls upon. So while collective memory is not simply imaginary, as Barash has shown, this metaphorical use of it is mythical. Finitude must be, ought to be, reasserted.

Finitude is a common hymn amongst intellectuals today. And yet the normative argument for the critical function of historical work is not very strong here. I disagree with Meng’s interpretation, then. Finitude does not supply a normative principle that would tell us how collective memory ought to be invoked. The alternative progressive examples show the point: Martin Luther King could equally draw on an ethos, and so too should progressives today. This is a practical, normative point, as Sophie Marcotte Chénard suggests. Repetition is not continuity, however. We must draw on the historical past and collective memory to defend progressive normative principles. Where else would they come from? Of course, a normative choice by the historian is just that—a choice of what to inherit.

Barash bases his argument on a formal analysis of memory, symbols, and temporal intentionality. Finitude for him is a matter of logical form: living memory can only extend so far, and the selection of what we remember is secondary. Finitude itself supplies no clear ethical principle, however. Which normative struggles, which injustices, breathe life into “living memory”? Often such struggles far exceed that memory, as I have argued elsewhere. Barash, to my mind, implies these questions at various points, but does not make them explicit. His work is a provocative opening. When we come to reflect on our heritage, whether age-long or recent, the point is to choose what is worth preserving and what needs changing.

Andrew Dunstall is Lecturer in Philosophy at the University of Wollongong, where he teaches political philosophy. His research interests are in phenomenology and critical theory. His recent work studies the way that normative principles draw upon historical precedents, especially those beyond the “modern” era. You can read more about his work here.

“Towards a Great Pluralism”: Quentin Skinner at Ertegun House

by contributing editor Spencer J. Weinreich

Quentin Skinner is a name to conjure with. A founder of the Cambridge School of the history of political thought. Former Regius Professor of History at the University of Cambridge. The author of seminal studies of Machiavelli, Hobbes, and the full sweep of Western political philosophy. Editor of the Cambridge Texts in the History of Political Thought. Winner of the Balzan Prize, the Wolfson History Prize, the Sir Isaiah Berlin Prize, and many others. On February 24, Skinner visited Oxford for the Ertegun House Seminar in the Humanities, a thrice-yearly initiative of the Mica and Ahmet Ertegun Graduate Scholarship Programme. In conversation with Ertegun House Director Rhodri Lewis, Skinner expatiated on the craft of history, the meaning of liberty, trends within the humanities, his own life and work, and a dizzying range of other subjects.

Professor Quentin Skinner at Ertegun House, University of Oxford.

Names are, as it happens, a good place to start. As Skinner spoke, an immense and diverse crowd filled the room: Justinian and Peter Laslett, Thomas More and Confucius, Karl Marx and Aristotle. The effect was neither self-aggrandizing nor ostentatious, but a natural outworking of a mind steeped in the history of ideas in all its modes. The talk is available online here; accordingly, instead of summarizing Skinner’s remarks, I will offer a few thoughts on his approach to intellectual history as a discipline, the aspect of his talk which most spoke to me and which will hopefully be of interest to readers of this blog.

Lewis’s opening salvo was to ask Skinner to reflect on the changing work of the historian, both in his own career and in the profession more broadly. This parallel set the tone for the evening, as we followed the shifting terrain of modern scholarship through Skinner’s own journey, a sort of historiographical Everyman (hardly). He recalled his student days, when he was taught history as the exploits of Great Men, guided by the Whig assumptions of inevitable progress towards enlightenment and Anglicanism. In the course of this instruction, the pupil was given certain primary texts as “background”—More’s Utopia, Hobbes’s Leviathan, John Locke’s Two Treatises of Government—together with the proper interpretation: More was wrongheaded (in being a Catholic), Hobbes a villain (for siding with despotism), and Locke a hero (as the prophet of liberalism). Skinner mused that in one respect his entire career has been an attempt to find satisfactory answers to the questions of his early education.

Contrasting the Marxist and Annaliste dominance that prevailed when he began his career with today’s broad church, Skinner spoke of a shift “towards a great pluralism,” an ecumenical scholarship welcoming intellectual history alongside social history, material culture alongside statistics, paintings alongside geography. For his own part, a Skinner bibliography joins studies of the classics of political philosophy to articles on Ambrogio Lorenzetti’s The Allegory of Good and Bad Government and a book on William Shakespeare’s use of rhetoric. And this was not special pleading for his pet interests. Skinner described a warm rapport with Bruno Latour, despite a certain degree of mutual incomprehension and wariness of the extremes of Latour’s ideas. Even that academic Marmite, Michel Foucault, found immediate and warm welcome. Where many an established scholar I have known snorts in derision at “discourses” and “biopolitics,” Skinner heaped praise on the insight that we are “one tribe among many,” our morals and epistemologies a product of affiliation—and that the tribe and its language have changed and continue to change.

Detail from Ambrogio Lorenzetti’s “Allegory of the Good Government.”

My ears pricked up when, expounding this pluralism, Skinner distinguished between “intellectual history” and “the history of ideas”—and placed himself firmly within the former. Intellectual history, according to Skinner, is the history of intellection, of thought in all forms, media, and registers, while the history of ideas is circumscribed by the word “idea,” to a more formal and rigid interest in content. On this account, art history is intellectual history, but not necessarily the history of ideas, as not always concerned with particular ideas. Undergirding all this is a “fashionably broad understanding of the concept of the text”—a building, a mural, a song are all grist for the historian’s mill.

If we are to make a distinction between the history of ideas and intellectual history, or at least to explore the respective implications of the two, I wonder whether there is not a drawback to intellection as a linchpin, insofar as it presupposes an intellect to do the intellecting. To focus on the genesis of ideas is perhaps to neglect how they travel and how they are received. Moreover, does this not overly privilege intentionality, conscious intellection? A focus on the intellects doing the work is more susceptible, it seems to me, to the Great Ideas narrative, that progression from brilliant (white, elite, male) mind to brilliant (white, elite, male) mind.

At the risk of sounding like postmodernism at its most self-parodic, is there not a history of thought without thinkers? Ideas, convictions, prejudices, aspirations often seep into the intellectual water supply divorced from whatever brain first produced them. Does it make sense to study a proverb—or its contemporary avatar, a meme—as the formulation of a specific intellect? Even if we hold that there are no ideas absent a mind to think them, I posit that “intellection” describes only a fraction (and not the largest) of the life of an idea. Numberless ideas are imbibed, repeated, and acted upon without ever being much mused upon.

Skinner himself identified precisely this phenomenon at work in our modern concept of liberty. In contemporary parlance, the antonym of “liberty” is “coercion”: one is free when one is not constrained. But, historically speaking, the opposite of liberty has long been “dependence.” A person was unfree if they were in another’s power—no outright coercion need be involved. Skinner’s example was the “clever slave” in Roman comedies. Plautus’s Pseudolus, for instance, acts with considerable latitude: he comes and goes more or less at will, he often directs his master (rather than vice versa), he largely makes his own decisions, and all this without evident coercion. Yet he is not free, for he is always aware of the potential for punishment. A more nuanced concept along these lines would sharpen the edge of contemporary debates about “liberty”: faced with endemic surveillance, one may choose not to express oneself freely—not because one has been forced to do so, but out of that same awareness of potential consequences (echoes of Jeremy Bentham’s Panopticon here). Paradoxically, even as our concept of “liberty” is thus impoverished and unexamined, few words are more pervasive in present discourse.

Willey Reverly’s 1791 plan of the Panopticon.

On the other hand, intellects and intellection are crucial to the great gift of the Cambridge School: the reminder that political thought—and thought of any kind—is an activity, done by particular actors, in particular contexts, with particular languages (like the different lexicons of “liberty”). Historical actors are attempting to solve specific problems, but they are not necessarily asking our questions nor giving our answers, and both questions and answers are constantly in flux. This approach has been an antidote to Great Ideas, destroying any assumption that Ideas have a history transcending temporality. (Skinner acknowledged that art historians might justifiably protest that they knew this all along, invoking E. H. Gombrich.)

The respective domains of intellectual history and the history of ideas returned when one audience member asked about their relationship to cultural history. Cultural history for Skinner has a wider set of interests than intellectual history, especially as regards popular culture. Intellectual history, by contrast, is avowedly elitist in its subject matter. But, he quickly added, it is not at all straightforward to separate popular and elite culture. Theater, for instance, is both: Shakespeare is the quintessence both of elite art and of demotic entertainment.

On some level, this is incontestable. Even as Macbeth meditates on politics, justice, guilt, fate, and ambition, it is also gripping theater, filled with dramatic (sometimes gory) action and spiced with ribald jokes. Yet I query the utility, even the viability, of too clear a distinction between the two, either in history or in historians. Surely some of the elite audience members who appreciated the philosophical nuances also chuckled at the Porter’s monologue, or felt their hearts beat faster during the climactic battle? Equally, though they may not have drawn on the same vocabulary, we must imagine some of the “groundlings” came away from the theater musing on political violence or the obligations of the vassal. From Robert Scribner onwards, cultural historians have problematized any neat model of elite and popular cultures.

Frederick Wentworth’s illustration of the Porter scene in Macbeth.

In any investigation, we must of course be clear about our field of study, and no scholar need do everything. But trying to circumscribe subfields and subdisciplines by elite versus popular subjects, by ideas versus intellection versus culture, is, I think, to set up roadblocks in the path of that most welcome move “towards a great pluralism.”

“Many thanks to Teddie Adorno”: Negative Dialectics at Fifty

by guest contributor Jonathon Catlin

Ten days after the fateful U.S. presidential election, several leading scholars of the Frankfurt School of critical theory gathered at Harvard University to reevaluate the legacy of the German-Jewish philosopher Theodor W. Adorno. The occasion—“Negative Dialectics at Fifty”—marked a half-century since the publication of Adorno’s magnum opus in 1966. Fitting with the mood of the political moment, co-organizer Max Pensky (Binghamton) recalled Adorno’s 1968 essay “Resignation” in his opening remarks: “What once was thought cogently must be thought elsewhere, by others.” To use Walter Benjamin’s phrase, dialectical work as demanding as Adorno’s has a Zeitkern, or temporal core: its meaning unfolds over time through constant re-interpretation. As participants reflected on this work’s profound legacy, they also translated its messages into terms relevant today. Time has served Negative Dialectics well. Fulfilling Adorno’s call for philosophy to restore the life sedimented in concepts, the critical energy of this conference demonstrated that both the course of time and the practice of intellectual history do not necessarily exhaust texts, but can instead reinvigorate them.

Adorno’s work has received a surge of recent attention for the ways in which it speaks to our present moment. The New Yorker’s Alex Ross went so far as to title a recent article, “The Frankfurt School Knew Trump Was Coming,” writing: “If Adorno were to look upon the cultural landscape of the twenty-first century, he might take grim satisfaction in seeing his fondest fears realized.” Ross notes that our time’s “combination of economic inequality and pop-cultural frivolity is precisely the scenario Adorno and others had in mind: mass distraction masking élite domination.” In his opening remarks, co-organizer Peter E. Gordon (Harvard)—author of “Reading Adorno in the Age of Trump”—addressed the sense of intellectual defeat palpable in the wake of the election. Yet this prognosis endowed what followed with a certain urgency that made the rigorous intellectual history of Gordon and Martin Jay (Berkeley) feel just as timely as the new critical work of theorists like Rahel Jaeggi (Berlin) and Jay Bernstein (New School).

Participants gathered in Harvard’s Center for European Studies, which, suiting Adorno’s cultural tradition, once housed the university’s Germanic Museum and resembles a fin-de-siècle European villa, ornamented with sculpture and inscribed dictums from the likes of Kant, Goethe, and Schiller. Yet Michael Rosen (Harvard) rightly described Adorno as a “Jeremiah” within German society for decrying the ways it had not come to grips with its Nazi past and then foretelling catastrophe still to come in the wake of the Cuban Missile Crisis. Adorno’s disciple Jürgen Habermas once remarked that he wouldn’t speak of Adorno and his time’s other great German philosopher, Martin Heidegger, in the same breath—and Adorno himself once claimed that he returned to Germany to “finish off” his compromised rival. But while Heidegger’s affinities with Nazism have recently left his work mired in controversy, Adorno’s corpus has re-emerged as a source of critical resistance. Without the space to address each conference speaker’s contributions, my aim here is to illustrate some reasons why they all still found Negative Dialectics compelling today.


Axel Honneth, Rahel Jaeggi, and Martin Jay (author’s photo)

It would have been quite a feat to produce a unified conversation around a work like Negative Dialectics, which proclaims itself an anti-system and even anti-philosophy. Alas, the nearly three-hour-long conference panels often meandered far from their prescribed topics. Still, there was one theme nearly every presenter circled around: Adorno’s resistance to the overweening tendencies of concepts, a problem engaged most concretely through Adorno’s relation to his dialectical forebear, Hegel. Negative Dialectics attempted to develop a mode of dialectical thought that harnessed Hegel’s negativity to counter his triumphalist narrative of the “slaughter-bench” of world history, which glorified existing reality and thereby undermined the possibility of critiquing it. As Adorno wrote in Negative Dialectics, “Regarding the concrete utopian possibility, dialectics is the ontology of the wrong state of things. The right state of things would be free of it: neither a system nor a contradiction.” The end goal of negative dialectics, then, is to transform present conditions into those that would render dialectics itself superfluous. If the challenge confronting Adorno in the 1940s was to develop a form of critical thought that could break through the blind, positivistic reproduction of the world order that had produced Nazism, Negative Dialectics can be seen as proposing a constellation of tentative solutions.

Gordon’s paper situated Negative Dialectics in a long intellectual tradition of disenchantment (Entzauberung) from the Enlightenment to our present “secular age.” Yet beyond the traditional sense of alienation, Gordon also identified a positive, critical potential in another use of the term: “the disenchantment of the concept.” If one historical marker came to represent disenchantment in modernity for Adorno, it was Auschwitz, an event referred to in nearly every presentation. Lambert Zuidervaart (Toronto) in particular grappled with Adorno’s search for a new mode of truth that would be adequate to a new reality after Auschwitz: “The need to lend a voice to suffering is a condition of all truth.” He suggested that this demand necessitates a material turn in Adorno’s thought toward a “thinking against thought” undergirded by a bodily revulsion at what happened at Auschwitz. Zuidervaart argued that metaphysical experience “after Auschwitz” must make contact with the nonidentical, which he connected to a practice of redemption, an undying hope for the possibility that reality could yet be otherwise.

Negative Dialectics is thus centrally motivated by a critical relationship to history. As Henry Pickford (Duke) remarked, Adorno’s task in Negative Dialectics was to attempt to see things “in their becoming,” opening up a space of possibility beyond the “hardened objects” and “sedimented history” of the reified social world as it exists in late capitalist modernity. In one of the most original presentations, Rahel Jaeggi grappled with how Adorno’s philosophy of history at the same time challenges and requires a view of history as progress. As “indispensable as it is disastrous,” Jaeggi remarked, a progressive view of history must be posited if history is to become available to consciousness as something changeable. Hegelian universal history thus exists insofar as antagonism, agency, and resistance do. Jaeggi called for an Adornian philosophy of history open enough to allow progress without requiring it. The task of progressive history would be, fitting with Adorno’s “determinate negation,” to extract progress step-by-step from the regressive historical problems with which we are confronted. This process would define progress not as teleological but as free-standing, open, and even anarchic.

Yet this critical operation remains as difficult as ever, and several speakers questioned its political efficacy. Max Pensky’s presentation on “disappointment” noted the difficulty of Adorno’s uneasy hybrid of philosophy and empirical social theory. Since negative dialectics offers no moral sanctuary or inner realm, a critical thinker must be both in the midst of objects and outside them. Yet philosophy’s isolation from the world, its essential “loneliness,” would also seem to entail permanent disappointment. Pensky’s reading of Negative Dialectics as an anti-progress narrative was sharpened by the observation that progress for Adorno means that “the next Auschwitz will be less bad”—echoing the title of a recent book on Adorno’s practical philosophy, Living Less Wrongly. Pensky began from Negative Dialectics’ opening line: “Philosophy, which once seemed obsolete, lives on because the moment to realize it was missed.” This recalls Hegel’s image of the “owl of Minerva,” whereby philosophy always comes too late to shape the present that has always already outpaced it. Marx’s last “Thesis on Feuerbach” subsequently called for philosophy to stop interpreting the world and start changing it. For Maeve Cooke (University College Dublin), who sought to reconcile Adorno’s apparent “resignation” with a form of political protest, Adorno always seems in the end to side with Hegel against the possibility of changing history. Seeing no way to square Adorno’s thought with Horkheimer’s early conception of critical theory as inherently emancipatory for the proletariat, Cooke instead proposed an analogy with the “vagabond” or “resistant” politics of the French novelist Jean Genet, who participated in activism from Algeria to the Black Panthers but refused ever to sign a manifesto or explicitly declare his revolutionary intentions.
In his response to Pensky, Rosen connected Adorno’s disappointment to present responses to the election of Donald Trump: like Auschwitz for Adorno, Trump represents for the intellectual left today the sudden dissolution of a shared humanistic project of an ongoing, regulative commitment to liberal Enlightenment values.

One of the final panels, on aesthetics, led the participants, at the urging of Lydia Goehr (Columbia), into the sunny Renaissance-style courtyard, where a makeshift “chapel” had been arranged.


Peter Gordon, Lydia Goehr, Espen Hammer, and Lambert Zuidervaart (author’s photo)

The session addressed a question raised earlier by Brian O’Connor (University College Dublin) as to whether, if intuitions of truth are in the end not “reportable” in language, philosophy is not therefore “singular,” or fundamentally lonely. Goehr and Bernstein noted that, for Adorno, only particular, fragmentary works of art can be true precisely in their singularity and incommunicability. It is the negativity of this aesthetic that gestures toward utopia, for, as Adorno wrote, “In semblance nonsemblance is promised.” Goehr convincingly argued for the centrality of aesthetics in Adorno’s negative dialectics: his notion of “necessary semblance” holds that art does not merely point toward rational truth, but rather constitutes a conception of truth that is itself aesthetic.

As Pickford remarked, Adorno has at once been seen as a failed Marxist, a closeted Heideggerian, and a precocious postmodernist. Peter Sloterdijk has critiqued him for positing “a priori pain,” while Georg Lukács accused him of taking up residence in the “Grand Hotel Abyss.” Adorno may have been among the loneliest of philosophers and yet, as Seyla Benhabib reminded us, he was also a Sozialpädagog for a generation of Germans who listened to his lectures and unrelenting radio addresses on everything from classical music to “working through” the Nazi past. Perhaps it is on account of, and not in spite of, such contradictions that Adorno continues to engage philosophers, Germanists, political theorists, and intellectual historians in equal measure. Martin Jay no doubt spoke for many in the room when he aptly dedicated his paper: “Many thanks to Teddie Adorno, who’s been troubling our sleep since the 1970s.”

Jonathon Catlin is a PhD student in History at Princeton University, where he studies modern European intellectual history. He is particularly interested in responses to catastrophe in German and Jewish thought.

Foucault from Beyond the Grave

by guest contributor Michael C. Behrent

Few living thinkers have been as prolific as the dead Michel Foucault. In the thirty-two years since his death, he has published thirteen book-length lecture courses, four volumes of interviews and papers (totaling over 3,500 pages), and countless bootlegs. Meanwhile, the fourth volume of his History of Sexuality, completed shortly before his death, sits, inaccessible to all, in an archive in Normandy—a rare text to have found no way around his estate’s prohibition on posthumous publications.

His will notwithstanding, one can only imagine that Foucault himself would have reacted to this state of affairs with a caustic laugh. For as two recently published volumes remind us, Foucault was haunted by the bond between language and death, as well as the notion that writing always, in a sense, comes from “beyond the grave.”

The two books in question both appear in a series put out by the Éditions de l’École des hautes études en sciences sociales called Audiographie, which publishes texts that were first delivered in spoken form. La grande étrangère (The Great Foreigner, 2013) consists of a radio program on madness and literature from 1963, two lectures on literature given in Brussels in 1964, and a talk on the Marquis de Sade delivered at SUNY Buffalo in 1970. The other, Le beau danger (The Beautiful Danger, 2011), is the transcript of an extended interview on the theme of writing that Foucault gave to the literary critic and journalist Claude Bonnefoy in 1968, but which had never before appeared in print.

If there is a common theme linking these interventions, it is that of Foucault’s obsession with the connection between writing and death. The texts in these volumes all deal with literature and writing; the problem of death figured prominently in the literary essays that Foucault, in the 1960s, devoted to Bataille, Blanchot, and Roussel. Yet what the Audiographie books make clear is that the problem of literature and death was not, for Foucault, some esoteric side problem. It was integral to the ideas he was developing in his major publications. Thus modern literature exemplifies, Foucault maintains, the fact that the modern mind is steeped in what, in The Order of Things (1966), he dubbed the “analytic of finitude.” One of the many consequences of the growing consciousness of the radically finite character of human existence that follows the death of God is, he argues, the enormous significance that modern society assigns to literature. The value we attribute to literature is inseparable, Foucault suggests, from a cultural horizon shaped by human mortality.

In the 1964 Brussels lectures, Foucault contends that early modern Europe (during what he calls “the classical age”) did not, strictly speaking, have literature—at least in the way we have since come to understand the term—for the simple reason that it interpreted itself culturally as the tributary of the word of God. People in this period, of course, wrote novels. Some even experimented with the kind of knowing self-consciousness about their own literary artifices—referring in writing to the fact that they were writing—that would later become associated with literary modernism (Foucault offers a fascinating analysis, for instance, of Diderot’s Jacques le fataliste). Yet what distinguishes these earlier endeavors from the literature of the modern age is the fact that, during the classical age, “any work of language existed as a function of a certain mute and primitive language, that the work was charged with restoring.” This “language that [came] before languages” was the “word of God, it was the truth, it was the model” (La Grande étrangère, 100). Rhetoric was the means through which human utterances, in all their obtuseness, could acquire something of the limpidity of divine speech. But what we have come to call literature only emerges once God has died—or become dumb, to be precise. Literature is the attempt from within the unremitting chatter of discourse to mark language, to dent it, possibly to re-enchant or overcome it—hence modern literature’s frequently transgressive character. But once it has ceased to represent the word of God, once it has become simply the words filling a page, literature becomes an emblem of human finitude. As such, it cannot be other than “beyond the grave” (104).

Foucault’s claim that, strictly speaking, literature does not exist as an independent realm of discourse until the late eighteenth century parallels the claim he would soon make in The Order of Things that “man” (in the sense of the “human”) did not exist as a specific object of knowledge until the same period. The birth of the human sciences and the genesis of literature are both, Foucault maintains, consequences of God’s retreat.

The problem of writing also lies at the heart of Foucault’s 1970 lecture on Sade. His question is simply: why did Sade write? What compelled him to fill volume after volume with his transgressive yet mind-numbingly repetitive fantasies? Foucault’s analysis is characteristically complex, yet his argument harkens back, however indirectly, to the themes of the Brussels lecture. Sade’s libertinism is, needless to say, directed against God. Yet it is not atheistic as such; God is not dismissed as mere illusion. God, Sade believes, exists, but as an abomination, as evidenced by the “meanness” (méchanceté) of the world—and indeed, by the fact that there are libertines. In Sade’s peculiar logic (which Foucault calls “anti-Russellian” [199]), it is because God is abominable that it is necessary that he not exist. This theme illustrates what Foucault sees as the ultimate function of Sade’s writing: the intertwining of discourse, truth, and desire. Sade needs God “insofar as he does not exist, and insofar as he must be destroyed at each instant” (204), as both his writing and his desire depend on him.

Sade wrote, then, because in discourse, truth and desire become enmeshed in spirals of reciprocal stimulation and impulsion. Yet his originality, Foucault claims, lies in the way he emancipated desire from truth’s tutelage, pulling it out from under “the great Platonic edifice that ordered desire on truth’s sovereignty” (218). The point is not (as with Freud) that desire has its own truth, which is more or less hypocritically covered up by social norms; it is also, Foucault seems to be saying, that truth is a form of desire. Truth is not the neutral and transparent element through which words can name beings. It is a libidinal force, as seen in Sade’s relentless insistence, despite his novels’ preposterous plots, that he is telling the truth. Foucault’s account of the truth function in Sade recalls the themes of his first Collège de France lectures, on the “will to knowledge” in ancient Greece, which he would deliver the following year: the sophists’ belief that arguments are not proven logically but won or lost like battles resembles in many ways Sade’s approach to writing. Language, here, is no longer just a rumbling murmur that literature seeks to transform into a voice. God is dead, and we—or our truth-creating discourse—have killed him.

Yet at least according to Foucault’s position in Le Beau danger, language—or at least writing—has less to do with killing than with—as he put it in Madness and Civilization—the “already thereness of death” (“le déjà là de la mort”; cf. Histoire de la folie à l’âge classique (Paris: Gallimard, 1972 [1961]), 26). Foucault explains: “I would say that writing, for me, is tied to death, perhaps essentially to the death of others, but that does not mean that writing would be like murdering others,” in a way that “would open before me a free and sovereign space.” Writing, rather, means “dealing with others insofar as they are already dead. I speak, in a sense, over the corpses of others. I must confess, I kind of postulate their death” (Le Beau danger, 36-37).

In this sense, the death of God, Foucault suggests, is not only the cultural situation that his thought attempts to assess; it is the condition of possibility of his own work. The idea of writing as a form of resurrection, a way of rendering present the “living word” of “men and—most likely—God” is, he says, “profoundly alien” to him. Writing, for Foucault, is “the drifting that follows death, and not the progression to the source of life.” He muses: “It is perhaps in this sense that my form of language is profoundly anti-Christian”—even more so than the themes he addresses (39).

In these texts, the reader will find few of the concepts for which Foucault is best known. There is little or no mention of archaeology, epistemes, genealogy, or power (discourse is the one exception, though it is discussed in a far less technical manner than in, say, The Archaeology of Knowledge). What they remind us of are the philosophical preoccupations that presided over his early work—and that no doubt continued to shape later works such as Discipline and Punish and The History of Sexuality, albeit in a more subterranean way. Here, we have a Foucault concerned with finitude, mortality, and the death of God. Perhaps this Foucault is in need of—how else to put it?—resurrection.

Michael C. Behrent teaches modern European history at Appalachian State University. He is currently working on a book exploring the origins of Foucault’s project.

Coming to Terms with the Cybernetic Age

by guest contributor Jamie Phillips

Rare is the conference that attracts a crowd on a cold December Saturday morning, but such was the case recently at NYU’s Remarque Institute. The room filled up early for the conclusion of a two-day conference on Cybernetics and the Human Sciences (PDF). The turnout bore out the conference’s contention of a renewed historiographical and philosophical interest in cybernetics, the science of “control and communication in the animal and the machine,” as Norbert Wiener subtitled his 1948 work that gave the interdisciplinary movement its name. As Leif Weatherby, co-organizer of the conference along with Stefanos Geroulanos, noted in his introductory remarks, the twentieth century was a cybernetic century, and the twenty-first must cope with its legacy. Even as the name has faded, Weatherby suggested, cybernetics remains everywhere in our material and intellectual worlds. And so for two days scholars came to cope, to probe that legacy, to trace its contours and question its ramifications, to reevaluate the legacy of cybernetics as a history of the present.

The range of presenters proved particularly well-suited to such a reevaluation, with some working directly on cybernetics itself, while others approached the subject more obliquely, finding, as it were, the cybernetic in their work even where it had not been named. Ronald R. Kline, author of the recent The Cybernetics Moment: Or Why We Call Our Age the Information Age, set the tone early by emphasizing the disunity of cybernetics. Despite the claims of some of its advocates and latter-day commentators, Kline contended, cybernetics never was one thing. On this point a general consensus emerged: the conference tended to eschew a search for definitions or classifications in favor of a wide-ranging exploration of the many faces of cybernetics’ legacy. And wide-ranging it indeed was, as papers and discussion touched on topics from international relations theory and the restrainer of the Antichrist, to Soviet planning in Novosibirsk, the manufacture of telephones, brain implants and bullfights, Voodoo death, and starfish embryos.

A number of papers spoke to the pre-history (or rather pre-histories) of cybernetics. Mara Mills emphasized the importance of the manufacturing context for the emergence of ideas of quality control, as a crucial site for the development of cybernetic conceptions of feedback. Geroulanos addressed physiological theories of organismic integration, stemming from WWI studies of wound shock and concerns with the body on the verge of collapse, and leading to Walter B. Cannon’s concept of homeostasis, so pivotal for early cyberneticians. Other papers spoke to the varying trajectories of cybernetics in different national contexts. Diana West discussed the appeal of cybernetics in the Soviet Union in the 1970s and 1980s as offering promise of a more dynamic form of large-scale regional planning, a promise expressed in abstract theoretical modeling and premised on a computing power that never came. Isabel Gabel explored the intersections of biology, embryology and metaphysics in the work of French philosopher Raymond Ruyer. Jacob Krell gave an entertaining appraisal of the strange humanist engagement with cybernetics by the heterogeneous “Groupe des dix” in post-68 France, while Danielle Carr spoke to the anxious reaction against visions of human mind control in the Cold War United States, through the work of José Manuel Rodriguez Delgado. Other papers still, particularly those of Weatherby and Luciana Parisi, directly confronted a cybernetic metaphysics, and between them they raised questions concerning its novelty and significance with respect to the history of philosophy and contemporary media theory.

Stefanos Geroulanos

Taken together, the papers compellingly demonstrated the ubiquity and diversity of the cybernetic across disciplines, decades, and geographical and political contexts. Taken together, however, they also raised a question that has long been posed to cybernetics itself. Here we might cite the words of Georges Boulanger, president of the International Association of Cybernetics, who asked, in 1969: “But after all what is cybernetics? Or rather what is it not, for paradoxically the more people talk about cybernetics the less they seem to agree on a definition” (quoted in Kline, The Cybernetics Moment, 7). Indeed, just as cybernetics itself declined as it expanded into everything, there is perhaps a risk that in finding cybernetics everywhere we lose hold of the object itself. To push the point further, we might echo the frustration of one of the interviewees cited by Diana West in her talk (and here I paraphrase): ‘They promised us cybernetics, but they never gave us cybernetics.’

Over two days, the conference answered this challenge through the productive discussion it generated. The more people talked about cybernetics, the more they seemed to find common ground for engagement. Beyond the numerous schematics that served as the immediate graphic markers of the cybernetic imagination (see image), conversation coalesced around a loose conceptual vocabulary—of information, of feedback and system, of mechanism and organism, of governance, error and self-organization—that effectively bridged topics and disciplines, and that gave promise of discerning a certain conceptual coherence in the cybernetic age.

A cybernetic schematic: “A Functional Diagram of Information Flow in Foreign Policy Decisions,” from Karl Deutsch’s 1963 The Nerves of Government (courtesy of Stefanos Geroulanos)

This proved true even when (or perhaps especially when) understandings of the cybernetic seemed to point in very different directions. A panel of papers by David Bates and Nicolas Guilhot was particularly exemplary in this regard. Bates and Guilhot brought contrasting approaches to the question of the political in the cybernetic age. Bates presented his paper in the form of a question—on the face of it paradoxical, or simply unpromising—of whether we might think a concept of the political in the cybernetic age through the work of Carl Schmitt. Referring to Schmitt’s concept of the katechon (from his post-war work The Nomos of the Earth) as the Restrainer of the Anti-Christ, Bates proposed thinking the political as a deferral of chaos, a notion he linked to the idea of an open system that maintains itself through constant disequilibration, and to an organism that establishes its norms through states of exception. Recalling, through Schmitt, Hobbes’ conception of the Leviathan as an artificial man in which sovereignty is an artificial soul, Bates argued for a concept of the political that would enable us to think mechanism and organism together, that could recover the human without abandoning technology.

Nicolas Guilhot, David Bates, and Alexander Arnold (courtesy of Stefanos Geroulanos)

Guilhot, by contrast, looked at the place of cybernetics in international relations theory and the work of political theorists in the 1960s and 1970s. Cybernetics, Guilhot suggested, here offered the promise of an image of the political that was not dependent on sovereign actors and judgment, one that could do away with decision making in favor of structure, system, and mechanistic process. Where Bates expressed concern that the technical had overrun the capacity of humans to participate in their own systems, for Guilhot’s theorists this was precisely the appeal: coming at a moment of a widely perceived crisis of democracy, cybernetics promised to replace politics with governance as such. For Guilhot here too, though, there was a critical intervention at stake: the image of the political as a system does not remove decision making, he contended, but rather obfuscates it. Prompted by the panel chair to respond to each other directly, Bates and Guilhot agreed that their papers were indeed complementary, with Bates speaking to an earlier moment of concern in the history of cybernetics that had subsequently been lost. The lively discussion that ensued served as proof of the productive engagement that can come from bringing it to the fore again.

Seen in this light, it was a fitting—if unwitting—coda to the conference as a whole that the menu at the post-conference lunch that Saturday afternoon rendered the title of the conference as “Cybernetics and the Human Services” (see image). One might take this as an occasion to think about the flow of information, about the place of error in systems of control and communication. But for present purposes, and for the present author, this fortuitous transposition of ‘human sciences’ into ‘human services’ serves rather to bring to the fore the question implicit in the conference’s agenda: how does the effort to reevaluate the legacy of cybernetics as a single history of the present change our possibilities for understanding and acting within it? What service, in short, can the human sciences render?

(© Jamie Phillips)

In his paper that concluded the conference, Weatherby referred to an occasion at one of the Macy Conferences where the participants, considering the question of whether the brain was digital, confronted the further problem of defining the digital itself. Here, Weatherby suggested, they suffered from a lack of contribution from the humanities—no participant could help the group arrive at a definition of the digital: what it is, what it does, how it works. Such is the work, it seems, that awaits the return to cybernetics. As the conference amply demonstrated, this will not and cannot be simply a matter of narrow definition: any attempt to come to terms with the cybernetic age and our continued place within it must pay heed to the pluralities, the disunities, the dispersed and intertwined trajectories that constitute that legacy; for all its own promise to unify the sciences, cybernetics was never one thing. At the same time, coming to terms with the cybernetic age will entail an effort to find a commonality in the plurality: if cybernetics indeed saturates the human and social sciences, how can we distill it? If it is everywhere without being named, what does it mean to name it, and what does it allow us to see? In this respect, one hopes, the menu will not be the last word, but will point rather to the urgency of continuing the ongoing reevaluation. An edited volume, I am told, is in the works.

Jamie Phillips is a Ph.D. candidate in modern European history at NYU. His dissertation examines the history of psychoneurology as a total science of the human in early twentieth century Russia, and its relation to the project of creating a ‘New Man.’ 

Keeper of Language Games: G.H. von Wright at 100

by guest contributor David Loner

This past month I attended a symposium held at Lucy Cavendish College, Cambridge in memory of the Finnish logician and Cambridge professor of philosophy G.H. von Wright (1916-2003), who this June would have been 100. Titled “Von Wright and Wittgenstein in Cambridge,” the event was organized by Bernt Österman, Risto Vilkko and Thomas Wallgren (University of Helsinki) and sponsored by the International Ludwig Wittgenstein Society, the Antti Wihuri Foundation and the Oskar Öflund Foundation. For four days, philosophers and intellectual historians from Europe and the United States met at von Wright’s former residence of Strathaird to discuss topics from intellectual biography to metatextual analysis, forging a greater historical appreciation for von Wright’s life and work. In particular, by discussing his legacy as Wittgenstein’s student, participants reflected not only on von Wright’s place in the history of analytic philosophy but also his postwar ambivalence towards what I have elsewhere termed Wittgenstein’s absent-minded training regime.

Von Wright and Wittgenstein at Strathaird. By permission of the family of Knut Erik Tranøy.

Georg Henrik von Wright (pronounced von Vrikht) was born June 16, 1916 to a Swedish-speaking family in Helsinki. Influenced early in his training by the scientific philosophy of the Vienna Circle, von Wright first enrolled as a post-graduate student at Cambridge in March 1939. A student of the Finnish psychologist Eino Kaila, von Wright held an interest in the logical justification of induction, a topic he wished to continue studying as a Ph.D. student. Thus, with intelligence and affability, he would quickly gain the attention of a number of noted dons, specifically the Knightbridge Professor of Philosophy Charles Dunbar Broad. As faculty chair, Broad encouraged von Wright to attend courses and colloquia in philosophy and the philosophy of mathematics. Most notably he suggested those lectures held at Whewell’s Court in Trinity College by the recently-elected Professor of Philosophy Ludwig Wittgenstein.

Though by no means a traditional logician, Wittgenstein had, by 1939, accumulated in his Cambridge talks a retinue of devotees, each one taken by both his “overpowering personality” and his critique of scientific philosophy as “diseases of the understanding”. In his 1990 biography of Wittgenstein, Ray Monk describes these devotees as never ceasing to think “seriously and deeply” about philosophical problems. Yet, in point of fact, many of his students can be said to have strayed from their master’s treatment of the subject. To be sure, von Wright was no exception. Appreciative of Wittgenstein’s sincerity as both a philosopher and “restless genius,” von Wright nevertheless maintained scholarly priorities that stood in stark contrast to the Spenglerian pessimism of Wittgenstein’s later thought, tending instead towards a more optimistic discussion of the purchase scientific philosophy held for humanity.

Indeed, throughout his intellectual life, von Wright is said to have held a “profound respect for the achievements of the exact sciences.” This, at least, is according to Ilkka Niiniluoto, whose “opening words” on behalf of the organizers of the symposium testified to the ambivalent tone von Wright took in his reception of Wittgenstein’s later philosophy. Beginning with his 1941 Ph.D. thesis, “The Logical Problem of Induction,” and continuing well into his later studies in “deontic logic,” von Wright is said to have entertained in his research, pace Wittgenstein, questions not only of logical form but also of normative propositional coherence. What is more, due to his lifelong collegial connections with older Cambridge luminaries, including Broad, G.E. Moore and Richard Braithwaite, von Wright is known to have acquired in his role as lecturer a “fatherly” affect—quite the departure from the vociferous Wittgenstein.

This portrayal of von Wright as both erudite and accommodating was re-emphasized time and again throughout the event’s proceedings, not least by Niiniluoto’s Cambridge colleague Jonathan Smith. In his talk, “Why Cambridge?: A Historical examination of the ‘living tradition in inductive logic’”, Smith marshaled Tripos examination papers and biographical material in order to suggest that alongside Wittgenstein’s well-known absent-minded training regime there existed in Cambridge a far richer tradition of learned, mathematical-based knowledge-making, remembered as much for “its pursuit of science” as its social utility. This tradition, Smith argued, appealed as a corrective to past atomistic assumptions about logic and to Wittgenstein’s own scurrilous depictions of scientific philosophy. For, as he noted, “[t]he primacy of mathematics [in the Cambridge Moral Sciences at 1939] both starved it of students…and yet provided high-quality graduates schooled in mathematics who were drawn to logical aspects of philosophy.”

It is no coincidence, then, that while von Wright admired Wittgenstein as a great man of talent, he stood opposed to his teacher in terms of both character and method, preferring instead what can be viewed as the more congenial and practical folkways of the day. Yet despite their differences, Wittgenstein is said by all accounts to have remained fond of his Finnish pupil, who in 1948 would briefly succeed Wittgenstein as professor of philosophy at Cambridge (later resigning in 1951 in order to return to Helsinki). “In Cambridge von Wright gained the impression that Wittgenstein greatly appreciated the substantial differences of their philosophical methods and personalities,” Bernt Österman and Risto Vilkko remarked in their memorial piece, “Georg Henrik von Wright: A Philosopher’s Life.” In fact, “Wittgenstein is reported to have said that von Wright was the only one of his students who he had not spoiled with his teaching and guidance, the only one who made no attempt to imitate his way of thinking or mode of expression” (Österman and Vilkko, 53). While not quite an accurate representation of Wittgenstein’s devotees, this rendering of von Wright’s post-graduate study did afford presenters a historical template from which to further problematize his later work, most notably as literary executor and editor of the Wittgenstein Nachlass.

Presenters Joachim Schulte and Susan Edwards-McKie both commented in their respective talks on a key facet of contemporary research in Wittgenstein studies: the interventionist approach von Wright took to compiling Wittgenstein’s posthumous papers following the philosopher’s death in April 1951. Culling from letters and correspondence belonging to fellow literary executor Rush Rhees and maintained at Swansea University, Edwards-McKie’s paper, “The Charged Dialectic: Von Wright and Rhees,” confirmed that mathematical certainty (rather than any overriding attempt at an anti-intellectual moral epistemology) drove the editors in their initial postwar conversations on the composition and publication of Wittgenstein’s later writings. Meanwhile, in his talk “Georg Henrik von Wright as editor of Wittgenstein’s writings”, Schulte recalled from his own experience as a pupil of von Wright’s the numerous exegetical “modifications” the philosopher would make in piecing together such otherwise under-developed texts as Remarks on the Foundations of Mathematics (1956) and Philosophical Grammar (1969). Altogether, presenters attested that under von Wright, Wittgenstein’s original aspiration—to establish a thoroughly non-academic form of “doing philosophy”, divorced from the “mental cramps” which persisted throughout the practice of logic—would by 1969 transform into a sub-discipline of its own: a discipline at once specialized, routinized, and authorial, all those things which Wittgenstein, in life, had denounced as unbecoming of philosophy.

Speaker Christian Erbacher giving his talk. Author photo.

How exactly such a transformation was allowed to occur in the first place was thus the topic of speaker Christian Erbacher’s concluding keynote address, “The correspondence between Wittgenstein’s literary executors—a source for studying the work of philosophical editors”. Convinced of the presence of an “asymmetry” in the correspondence shared between von Wright and fellow editors Rhees and Elizabeth Anscombe, Erbacher identified in his talk several key moments in the “history of reproducing Wittgenstein” wherein von Wright took a decisive lead. Chief among them was the 1968 codification of Cornell University’s Wittgenstein microfilm collection. Fascinated by the “externalities” of Wittgenstein’s writings, most notably their classification and historical “strata”, von Wright was said to have shared, in his concern for curation, affinities with the archivist, setting him well apart from Rhees and Anscombe, each of whom was often overwhelmed by the task of deciphering and setting to order Wittgenstein’s unfinished manuscripts. In cooperation with Wittgenstein’s most renowned American pupil, Norman Malcolm, von Wright would set to work in 1966 on a Cornell edition of the Wittgenstein Nachlass. Reproduced via microfiche, the Cornell set enabled von Wright at once to disseminate Wittgenstein’s posthumous papers to a much wider audience of philosophers and to accommodate those academics already committed in their own work to the further elaboration of the accounts he, Anscombe, and Rhees had published.
According to Erbacher, all of this, in addition to von Wright’s own enumeration of the Nachlass’ contents, provides insight into the mechanisms of Wittgensteinian philosophy’s disciplinary formation, insofar as it offers a “‘blackbox’ of paradigms in the humanities”; that is, von Wright’s efforts bear witness to students’ active rehabilitation of Wittgenstein’s work well after their teacher’s own failure to codify his adversarial ethos of discipleship as a viable teaching method in philosophy.

The ambivalence von Wright held towards Wittgenstein’s absent-minded training regime thus points to a tension within the history of analytic philosophy that no metaphilosophy of “family resemblances” can adequately describe without mistaking real institutional upheaval for a “robust distinctive phenomenon”. For while philosophers and intellectual historians may insist that Wittgenstein’s later philosophy reflects a continued attempt to do away with “the sickness of philosophical problems,” at the same time they remain staunchly opposed to the idea that analytic philosophy’s postwar “web of beliefs” failed to convey a congenial and scientifically-conscious means of doing philosophy. As I hope my remarks on this symposium have shown, greater attention to the international academic culture and reputation of Cambridge at mid-century, and to the influence it held over students’ own professional identity formation, might very well offer the means by which to begin to address this tension. For in treating von Wright on his own terms, we begin to recognize the greater curricular space that acolytes of Wittgenstein straddled in their careers as keepers of the language games.

Conference participants gather outside Strathaird. Author photo.

David Loner is a third-year Ph.D. student in history at the University of Cambridge. His research focuses on the disciplinary formation of Wittgensteinian philosophy and the changing parameters of student-instructor collaboration in the twentieth century Moral Sciences at Cambridge. He can be reached at jdl50@cam.ac.uk.

Leibniz and Deleuze on Paradox

by guest contributor Audrey Borowski

Paradox features prominently in Leibniz’s thought process, yet it has received little attention within mainstream scholarship. The French philosopher Gilles Deleuze, however, devoted his book The Logic of Sense to the analysis of paradox. I undertake to shed light on Leibniz’s deployment of paradox through the prism of Deleuze’s reflection. Deleuze greatly admired Leibniz, and in 1988 even dedicated a book to the latter’s “art of the fold”—his particular treatment of and extensive recourse to continuums. And yet Deleuze’s connection to Leibniz may run still deeper.

For Deleuze, paradox designated that which ran counter to common opinion (doxa) in its dual incarnations, good sense and common sense. He defined it in the following manner: “The paradox therefore is the simultaneous reversal of good sense and common sense: on one hand, it appears in the guise of the two simultaneous senses or directions of the becoming-mad and the unforeseeable; on the other hand, it appears as the nonsense of the lost identity and the unrecognizable” (Logic of Sense, 78). In this manner, paradox ushered in a novel type of thought process, one which broke free from the strictures of simple causality implied by good sense and proved deeply unsettling by constantly oscillating between two poles, “pulling in both directions at once” (Logic of Sense, 1). Paradox was to be distinguished from contradiction: while the former applied to the realm of impossibility, the latter was confined to the real and the possible from which it was derived. Paradox operated on a different conceptual plane; it lay beyond the framework of pure signification altogether.

Because it evokes impossible entities, paradox has too easily been dismissed as philosophically suspect. Yet, far from entailing error, paradox suggests a “certaine valeur de vérité,” a particular type of truth inherent to language: after all, “It is language which fixes the limits… but it is language as well which transcends the limits and restores them to the infinite equivalence of an unlimited becoming” (Logic of Sense, 2-3). In this manner, a squared circle, for instance, possessed sense even though it lacked signification. While they lacked real referents—and thus failed to exist—paradoxical entities inhered in language: they opened up an uncanny wedge between language and existence.

In fact, Leibniz had previously cultivated to perfection the dissolution of seeming contradictions into productive tensions. He formulated his mathematical “Law of Continuity” most clearly in his Cum Prodiisset of 1701, according to which the rules of the finite were found to succeed in any infinite continuous transition. By virtue of this reasoning, rest could be construed as “infinitely small motion,” coincidence as “infinitely small distance” (GM IV, 93), elasticity as “nothing other than extreme hardness,” and equality as “infinitely small inequality” (and vice versa) (GP II, 104-5). Leibniz’s epistemological project essentially hinged on a process of reconfiguration whereby finite and infinite were no longer pitted against each other but correlated through recourse to “well-founded fictions.” These fictions were not rigorously true: whilst they were uniquely “apt for determining real things” (GM IV, 110), they constituted finite ideal projections of which there were “none in nature” and which strictly speaking “[were] not possible” (GM III, 499-500). Reality was henceforth accessible primarily through fiction.

With his infinitesimals, Leibniz trod an ambiguous middle ground, and their lack of empirical counterpart or referent earned him much criticism from a number of contemporary mathematicians. The French mathematician Jean le Rond d’Alembert conveyed the sense of dismay which Leibniz’s constructions still elicited more than fifty years later: “a quantity is something or it is nothing: if it is something, it has not yet disappeared; if it is nothing, it has literally disappeared. The supposition that there is an intermediate state between these two states is chimerical” (d’Alembert (1763), 249–250). Simply put, an intermediate state between “something or nothing” was simply inconceivable.
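The stakes of d’Alembert’s objection can be made concrete with a standard textbook illustration (my example, not the original post’s): the infinitesimal computation of the derivative of y = x², in which the increment dx is treated first as something and then as nothing.

```latex
\begin{align*}
\frac{dy}{dx} &= \frac{(x+dx)^2 - x^2}{dx} && \text{form the increment quotient}\\
              &= \frac{2x\,dx + (dx)^2}{dx} && \text{expand and cancel } x^2\\
              &= 2x + dx                     && \text{divide by } dx \text{ (so } dx \neq 0\text{)}\\
              &= 2x                          && \text{discard } dx \text{ as infinitely small.}
\end{align*}
```

The final step drops a quantity that the previous step required to be nonzero: exactly the “intermediate state” between something and nothing that d’Alembert declared chimerical, and that Leibniz defended as a well-founded fiction.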

And yet, according to Deleuze, such absurd mental objects, whilst they lacked a signification, had a sense: “Impossible objects—square circles, matter without extension, perpetuum mobile, mountain without valley, etc.—are objects ‘without a home,’ outside of being, but they have a precise and distinct position within this outside: they are of ‘extra being’—pure, ideational events, unable to be realised in a state of affairs” (Logic of Sense, 35).

In Deleuze’s account, paradox stood not only as that which destroys “good sense as the only direction,” but also as that “which destroys common sense as the assignation of fixed identities” (Logic of Sense, 3). It emerged as the unforeseeable or the “becoming mad” and acted as the “force of the unconscious” which threatened identity and recognition. This was perhaps all the truer in Leibniz’s deployment of paradox in his metaphysics. Leibniz’s concept of “compossibility,” whereby the world was only made up of individuals that could logically co-exist, went beyond mere adherence to the principle of non-contradiction.

In the preface to his New Essays on Human Understanding, Leibniz defined his principle of continuity. According to it, “nature never makes leaps.” The world was organized according to an infinitely divisible continuum, in which everything was interconnected and change took place gradually. In it, “diversity [was] compensated by identity” (Elementa juris naturalis, 1671; A VI, 484), in the image of the monad, that foundational spiritual entity which acted as a “perpetual living mirror of the universe,” albeit from its own particular perspective (Monadology, § 57, 56). In this manner, reality folded and unfolded indefinitely and rationally in an “uninterrupted” process of continuous transformation, whereby one state naturally “disappeared” into the next, a sufficient reason “always subsist[ing]… whatever alterations or transformations might befall” throughout the transition (New Essays). Each state was simultaneously the product of “that which had immediately preceded it” and “pregnant with the future.” Simply put, “something also was what it wasn’t” (Belaval (1976), 305).

Leibniz consecrated a thoroughly fluid and dynamic outlook, one in which truth was “hallucinatory” and lay in the very act of vanishing itself (Deleuze, The Fold: Leibniz and the Baroque). It could no longer be reduced to the fixed identities of “common sense,” but was essentially diachronic, governed as it was by a “logic of becoming”: “The paradox of this pure becoming, with its capacity to elude the present, is the paradox of infinite identity (the infinite identity of both directions or senses at the same time—of future and past, of the day before and the day after, of more and less, of too much and not enough, of active and passive, and of cause and effect)” (Logic of Sense, 2).

According to Deleuze’s critique of the “regimes of representation” in Difference and Repetition, Leibniz had made representation infinite instead of overcoming it, thereby producing a “delirium” which “is only a pre-formed false delirium which poses no threat to the repose or serenity of the identical” (63). In The Fold, Deleuze asserted that “one must see Leibniz’s philosophy as an allegory of the world, and no longer in the old way as the symbol of a cosmos” (174).

In this manner, fixed identity had given way to infinite iteration in the shape of a “continuous metaphor” (metaphora continuata). This infinite deferral of proper meaning made incessant creation possible; truth was “infinitely determined,” each viewpoint becoming “the condition of the manifestation of truth” (Deleuze, Lecture on Leibniz, 16 December 1986). Leibniz’s philosophy of “mannerism” consisted in “constructing the essence from the inessential, and conquering the finite by means of an infinite analytic identity” (Difference and Repetition 346). Truth lay in infinite variation itself, each moment eliciting a different modality of an essentially elusive essence that could never be directly grasped or circumscribed.

Leibniz turned paradox into a marvelously fruitful tool, emblematic of the audacity and subtlety that drove the endless twists and turns of his broader thought process. Far from being the mark of a flawed system, paradox ensured that the system remained contradiction-free by bringing about the “coincidence of opposites” (The Fold, 33), confirming Leibniz as the quintessential philosopher of the Baroque age.

Ultimately both Leibniz and Deleuze inveighed against the poverty of the conventional thought process and set out to open up new horizons of thought by recovering the genetic force of paradox. Paradox threatened to overturn the very foundations of philosophical reason. And yet, by unsettling and challenging us, it forced us to think.

Audrey Borowski is a DPhil student in the History of Ideas at the University of Oxford.

Reestablishing Philosophy in a Destroyed Country: Karl Löwith’s Return to Germany

by guest contributor Mike Rottmann

Almost one year after the end of the war, on July 20, 1946, a senior official of the Department of Education of the State of Baden sent a letter to the President of Heidelberg University:

With regard to the letter of the Dean of the Faculty of Philosophy of May 23, 1946 […], we have—lacking any files—determined from the last staff appointment scheme that at the end of the Nazi regime only a single ordinary professorship of Philosophy existed, which is now filled by Professor Jaspers.

Karl Jaspers. By permission of Karl Jaspers Stiftung, Basel

Within the newly drawn-up scheme, we have budgeted for the reestablishment of a second professorship. We would now like to inquire how the fields of specialization of both chairs should be defined and how the new chair should be titled. The new chair is to be filled by Professor Ernst Hoffmann as part of compensation. A precise designation would be desirable, so that several lines of thinking are kept permanently in Heidelberg.

Gerhard Hess. By permission of Deutsche Forschungsgemeinschaft, Bonn

One week later, on July 28, Dean Gerhard Hess formulated his answer and underlined the Faculty’s “exceptional satisfaction” with Ernst Hoffmann’s return. In the same letter, however, the dean emphasized that at this University “the fields of specialization have never been circumscribed” and that each professor always has the right “to cultivate the entire field” of philosophy. This system, Hess argued, was based on a “fundamental understanding of philosophy” and should be maintained.

In October that year, Hess sent a list with three nominees to the Head of Baden’s Department of Education. A university commission nominated the three candidates. The first (and most preferred) candidate was Erich Frank. As a renowned historian of ancient philosophy, he succeeded Martin Heidegger as professor of philosophy at the University of Marburg in 1928. After the Nazi seizure of power in 1933 and the introduction of the Gesetz zur Wiederherstellung des Berufsbeamtentums (Law for the Restoration of the Professional Civil Service), Erich Frank, who was of Jewish descent, was forced to resign his office. After a brief imprisonment in a concentration camp, Frank emigrated to the United States where he taught at both Harvard University and Bryn Mawr College. The second candidate was Hans-Georg Gadamer, professor at Leipzig University since 1939. Gadamer tried to leave the Soviet occupation zone, and moved to Frankfurt in October 1947. The third was Gerhard Krüger from Münster. Like Gadamer and, in some respects, Frank, he spent most of his career in Marburg, where he wrote his doctoral dissertation under the supervision of Nicolai Hartmann and became a close friend of the famous New Testament scholar Rudolf Bultmann.

While the government of the State of Baden entered into negotiations with the nominees, Karl Jaspers suddenly accepted an offer from the University of Basel, Switzerland. This put the faculty and its dean under immediate pressure to make a decision: both chairs were now vacant and, owing to his disappointment with the political developments of the immediate postwar period, a prominent figure of West German academia was gone. As a result of Jaspers’s departure, filling his prestigious chair became the top priority. Since Jaspers supported Gerhard Krüger, Krüger received the appointment in April 1948. For reasons that are difficult to understand—especially from today’s perspective—the Government of Baden was unable to lure him to Heidelberg. One external cause might have been the fact that there was no appropriate residence available. Certainly, the decisive reasons were of a personal nature and are thus difficult to reconstruct. In any case, by the end of 1949, Hans-Georg Gadamer had succeeded Karl Jaspers as chair of philosophy.

In July 1950 a second commission came to an agreement about three new candidates: Karl Löwith as primo loco, followed by Oskar Becker and Helmuth Plessner. In the report we can find the following statement as an introduction:

To re-establish the chair of Philosophy, the commission was guided by the viewpoint of bringing a new note to academic instruction.

Commission report on Löwith's candidacy. Deutsches Literaturarchiv Marbach, Nachlass Hans-Georg Gadamer

About Löwith one can read this opinion:

L. is a brilliant author. All his literary works are thoroughly crafted and demonstrate an unusual intellectual personality. His point of origin lies entirely in the weltanschauliche Problematik of the nineteenth century. Yet in his most recent work especially, he traces this problem back to its origins in the Apostolic Age in a most systematic way. There he again proves a complete mastery of abendländische Geistesgeschichte (the Western history of ideas). L. is a brilliant lecturer, able to stimulate and encourage the students. As a restrained, calm, and likeable scholar, he would be a pleasant colleague. It is to be expected that he would accept a call to Germany, because the American teaching style remains strange to him.

Karl Löwith. By permission of Universitätsarchiv Heidelberg

In April 1951, Löwith, professor at the New School since 1949 (as successor of Leo Strauss), received the call. But before the “causa Löwith” could pass through the committees, another difficulty had to be resolved. Ernst Hoffmann and Raymond Klibansky had begun an edition of the works of Nicholas of Cusa in 1927, based at the Heidelberg Academy of Sciences and Humanities. When the interest in Löwith became known, the Academy intervened: in view of this most significant project, the University should recruit a candidate better suited to follow Hoffmann. A solution was found when Gadamer agreed to take over work on the edition of Cusa’s works.

Another problem came up when Löwith wrote to Karlsruhe that he would expect about 10,000 DM to cover his moving expenses. He also noted the great difference between the salary he was currently receiving and the salary he would receive in Germany. In the appeal record, one finds notice of a circumstance that increased the pressure on the decision-makers: after the death of Nicolai Hartmann in October 1950, the administration of the University of Göttingen made inquiries about the status of the proceedings—which could be read to suggest that Göttingen was also considering an appointment of Löwith.
Hess contacted the Federal Government in Bonn and asked for grants to fund the relocation. An Oberregierungsrat (senior civil servant) rejected this request and recommended that Hess contact the State of Hessen, since the University of Marburg had dismissed Löwith in 1935, leaving him without any claims.

Although Löwith’s high academic importance was consistently emphasized, and a failure of the negotiations over a few thousand DM was considered unforgivable, the meagerness of the public budget could not be overcome. In the end, 7,500 DM had to suffice.
A “cosy 2-bedroom apartment,” an annual salary of 11,600 DM plus a 2,000 DM seminar fee, and a study room at the Institute of Philosophy were initially all the university was able to offer. Löwith, who had become an American citizen, was not required to swear an oath on the constitution. Nor did the government demand that he take German citizenship again.

By the end of 1954, Löwith received a call from the University of Hamburg and a second from the University of Cologne. He could demand more concessions: a secretary as well as the highest possible salary.

Document dismissing Löwith from Marburg. By permission of Universitätsarchiv Marburg

Löwith's personal data form. By permission of Universitätsarchiv Marburg

In 1954, the University of Marburg was also looking for somebody to follow Julius Ebbinghaus. The appointment commission invited Josef König to prepare a report on the candidates: Walter Bröcker, Walter Schulz, Ludwig Landgrebe, Klaus Reich and Karl Löwith. In January 1955, König wrote:

It seems to me that Mr. Löwith holds an exceptional position among those who are doing philosophy these days. I regard him as a genuine philosopher in a broader sense of the word. He is a gentle, restrained nature. He stands at a certain distance from the Welttreiben, but also (in some way) from the goings-on of the philosophers. Yet behind this distance his original solicitousness and connoisseurship become distinctly visible. He is by no means an aesthete, but his artistic nature is appreciable. Accordingly, general matters interest him less than the individual human being. These are the roots of his interest in historical situations, in psychology, sociology, and phenomenology. And this is why he is deeply moved by existentialism and, especially, by Heidegger. It is therefore fitting that, in his book From Hegel to Nietzsche, his topic became the conflict between Hegel’s philosophy, which is oriented toward general things, and the philosophy of the great individuals. At the same time, this is also the conflict between philosophy and religious self-awareness. I count this book, and the subsequently published Meaning in History, among the best that has appeared since the end of the war. Löwith has a dialogic power. He is in full possession of himself and his competence. His standing should be generally accepted.

König's report on Löwith. By permission of Universitätsarchiv Göttingen


Only a short time later, on January 28, Rudolf Bultmann, in a letter to his friend Gerhard Krüger, stated that

there is still no decision about the succession of Ebbinghaus. Löwith, who initially had the chance to get the first position on the list, has forfeited the favour of his friends through his lecture on knowledge and faith. Now Bröcker seems to have a chance.

There may well also have been objective reasons why an appointment to Marburg was no longer under consideration. It is evident, however, how stubbornly and unrestrainedly Martin Heidegger worked against the appointment of his former student. To Bultmann he wrote in October 1954:

Löwith is an extraordinarily learned and versatile man, but he cannot think. In principle he always says “No!” where it is essential to go into the matter. Basically he is a skeptic who manages to utilize Christlichkeit for his skepticism.

In the end, Klaus Reich became the successor of Julius Ebbinghaus, while Löwith stayed in Heidelberg until his retirement in 1964. In 1961, Löwith served as visiting professor in Basel, filling the chair of Karl Jaspers.

Mike Rottmann is writing his MA thesis at the University of Jena on religious discourse in literature around 1800. He has studied modern German literature, Jewish studies and philosophy.

Alice Ambrose and Life Unfettered by Philosophy in Wittgenstein’s Cambridge

by guest contributor David Loner

As the first and only official post-graduate advisee of the celebrated Austrian thinker and Cambridge philosopher Ludwig Wittgenstein, Alice Ambrose (1906-2001) typified in her 1932-38 Ph.D. course the complex social experience interwar upper-middle-class women underwent as unofficial members of the University of Cambridge. Compelling yet reserved, Ambrose toed a line between subordination and originality which Cambridge dons often expected their female pupils to exhibit in the years following the Cambridge University Senate’s 1921 university ordinance on “title of degree,” or unofficial courses for women students (women were not made full university members until 1948). Yet, despite this carefully-negotiated and normative gender performance, Wittgenstein ultimately denounced his protégée and her work as morally “indecent”—precipitating a contest between the Austrian thinker and his fellow dons over the place of women and high academic distinction in mid-twentieth-century Cambridge philosophy.

As an American graduate student at the University of Wisconsin-Madison after the First World War, Alice Ambrose focused her initial research on the work of the Dutch logician L.E.J. Brouwer and his conjecture that intuition, not metaphysics, best served as the epistemic foundation of all mathematical thought. This work formed the basis of her first Ph.D. dissertation and enabled Ambrose to apply successfully in 1932 for Wellesley College’s one-year post-doctoral fellowship at the University of Cambridge (what would a year later become a second fully-fledged Ph.D. course). Cambridge may have been male-dominated, but Ambrose’s choice was intentional. For, as she claimed in correspondence with her Madison advisor E.B. McGilvary, only through discourse with the renowned Cambridge junior fellow and author of the Tractatus Logico-Philosophicus, Ludwig Wittgenstein, would her future employment as a lecturer in philosophy be assured (October 16, 1932, MS Add.9938 Box 2, Folio 2, Cambridge University Library).

Having previously stunned interwar readers with its provocative albeit bewildering analysis of the metaphysics of logic, Wittgenstein’s Tractatus—written during his four-year service as an infantry soldier for Austria-Hungary in the First World War—was a blatant departure from the technical program of his one-time prewar Cambridge supervisor Bertrand Russell and their forerunner Gottlob Frege. The Tractatus argued that, while capable of depicting the world within a coherent symbolism, “logical pictures” nevertheless indicate a greater ethical realm, inaccessible to humans (not just scholars) through symbolic speech acts. It uncompromisingly declared that “what we cannot think, that we cannot think: we cannot therefore say what we cannot think” (TLP 5.61). For, as Wittgenstein posited, “in fact what solipsism means is quite correct, only it cannot be said, but it shows itself” (5.62). “The sense of the world,” then, Wittgenstein argued, must lie outside the world (6.41), in a moral reality composed not of propositions but of silence (7).

Immediately conceding their debt to Wittgenstein in both private letters and published reviews, scholars like Russell and the Cambridge mathematician Frank P. Ramsey praised the Austrian thinker’s remarks on solipsism and the tautological nature of logical propositions as indispensable. Even so, for these highly-trained professional men, as well as for their more neophyte pupils, the ethical import of Wittgenstein’s emphasis on the sense of the world as somehow outside the world (and within silence) would remain largely ineffectual in combating the greater problems of philosophy. Ambrose was no exception to this. Writing to McGilvary on October 16, 1932, a week after the start of her course at Cambridge, Ambrose would confirm that while “there’s no doubt his thoughts are original,” Wittgenstein’s mode of conveying ideas in his course lectures left one wanting. “He is extremely hard to follow,” she wrote; “he forgets what he set out to say, rears ahead of himself—says Whoa!…settles down rigidly then and thinks with his head in his hands, stammers, says ‘Poor Miss Ambrose’, swears, and ends up with ‘It is very diff-i-cult’” (MS Add.9938 Box 2, Folio 2, CUL).

Letter from Ambrose to McGilvary, MS Add.9938 Box 2, Folio 2, Cambridge University Library. By permission of the Syndics of Cambridge University Library.

Exemplifying what Stefan Collini has referred to as the “absent-mindedness” of the twentieth-century British intellectual, Wittgenstein at first appeared to Ambrose in her initial course lectures and supervisions as “as good as a babe in arms at advising one about any practical matters.” “It is true he is not English,” she wrote in the same letter to McGilvary, “not enough dignity, not proper enough”; “he lectures without a gown and doesn’t insist on students wearing theirs; and he goes with his shirt open at his throat.” Yet in his absent-mindedness as a Cambridge junior lecturer, Wittgenstein would go far beyond the pale of Collini’s characterization. For, as Ambrose indicates elsewhere in her notes on Wittgenstein’s lectures, Wittgenstein not only denied that there were any intellectuals in Cambridge, but dismissed the entire project of philosophy as sheer nonsense. “Nonsense is produced by trying to express in a proposition something which belongs to the grammar of our language”—a practice all too common among scholars. “The verification of my having toothache is having it,” Wittgenstein remarked; “[i]t makes no sense for me to answer the question, ‘How do you know you have a toothache?’, by ‘I know it because I feel it’. In fact there is something wrong with the question; and the answer is absurd.”

For the anti-intellectual Wittgenstein, then, scholarly inquiry in philosophy, be it at Cambridge or elsewhere was, much like a toothache, a disease—in need of constant therapeutic relief. This alleviation of nonsense, he argued, was possible only through the disappearance of obsession (98) or a life unfettered by disciplinary philosophy, engaged in full duty to oneself. For Ambrose, however, her status as a woman student in need of employment as a university lecturer prohibited her from such thinking. Indeed, despite Wittgenstein’s insistence that his students apply his adversarial ethos to their study of philosophy, interwar female pupils like Ambrose would by and large continue in their courses to affirm the same Cantabridgian virtues of industry, conviviality and hierarchy which the Austrian thinker disparaged as absurd. The result, then, as Paul R. Deslandes has elsewhere detailed, was a “highly gendered little world” of “intense institutional loyalty” which rewarded only the most subordinate, if original, of women students—a paradoxical imbalance which Ambrose attempted to maintain in her own work on finite logical propositions.

When not helping Wittgenstein to record his latest research (what would eventually be published in 1961 as The Blue and Brown Books), Ambrose thus pursued quite separately in her postdoctoral research an investigation into the epistemic purchase of Wittgenstein’s concept of grammar in philosophy. In particular, her April 1935 article “Finitism in Mathematics [I],” the first published exposition of Wittgenstein’s post-Tractarian philosophy, situated his denunciation of philosophical “nonsense” within a much broader scholarly conversation on the intuitively finite nature of logical operations in mathematics or “verbal forms.” Referencing Wittgenstein’s own turn of phrase throughout her article, Ambrose argued that “what the finitist can justifiably claim” can be “in many cases…a statement of what he should claim as opposite of what he does claim” (188). That is, the logical dilemmas stifling scholars’ findings were more often than not expositional confusions regarding the grammar of intuition, brought about by the inexactness of the philosopher’s language. Guided, then, by the same remarks her supervisor offered in his 1932-35 Cambridge lectures, Ambrose’s article presented Wittgenstein’s absent-minded posture not as an affront to academic philosophers, but rather as a bulwark in their continued acquiescence to male-dominated high academic distinction. Yet despite her initiative in re-imagining Wittgenstein’s new philosophy as a boon for scholars’ ongoing investigations in the philosophy of mathematics, Ambrose’s article would prompt the full wrath of the Cambridge junior fellow.

On May 16, 1935, in a letter to Ambrose, Wittgenstein decried his protégée’s publication as “indecent,” refusing any further cooperation on his part with her. Only two days later, he reiterated this point, denigrating Ambrose’s behavior to his Cambridge colleague, professor of philosophy G.E. Moore. “I think you have no idea in what a serious situation she is,” he wrote. “I don’t mean serious, because of the difficulty to find a job; but serious because she is now actually standing at a crossroad. One road leading to perpetual misjudging of her intellectual powers and thereby to hurt pride and vanity etc. etc. The other would lead her to a knowledge of her own capacities and that always has good consequences.” To Moore, who would subsequently chair Ambrose’s 1938 Cambridge Ph.D. viva, this assessment was clearly misguided. As her supervisor, he knew Ambrose to be a competent scholar, despite the double standard imposed on her as a woman student. Yet unlike Moore, Wittgenstein continued to feel no obligation to assist Ambrose in her quest to maintain the careful balance between humility and assertion necessary to advance one’s career in academic philosophy. For, as he would later argue during his disastrous postwar tenure as chairman of the Cambridge Moral Sciences Club, logical perspicacity required but one thing from the serious philosopher, whether man or woman: the full denial of disciplinary philosophy as a worthwhile life.

David Loner is a second-year PhD student in history at the University of Cambridge. His research focuses on the milieu of students and scholars associated with the twentieth-century Cambridge philosopher Ludwig Wittgenstein.

How Many Things Are There? Ways of Counting in Medieval Metaphysics

by guest contributor Aline Medeiros Ramos

When I see two brown dogs, how many things are really there? Are there two particular dogs alongside each other, or is there only one kind of thing (dog, or “dogness”)? Or are there two things and one kind of thing? In other words: what is the ontological status of universals such as “dog-ness,” “brownness” or animality, and what is the ontological status of kinds, such as “dog” used more broadly to refer to all individual dogs? Do they exist as real entities external to us, or only as terms or concepts within our minds?

In the late Middle Ages, two opposing views on this kind of question divided philosophers. Realists (reales) argued that particulars (each individual dog) and universals (dog, dog-ness, brownness, animality) both existed as real beings outside the mind. Nominalists (nominales), on the other hand, argued that only particulars (each particular brown dog) had real existence, and that so-called universals were just words we used in our language to convey thoughts and ideas. Nominalists believed universals were mere nomina—names.

Nominalism might seem like the more obvious view on the issue: each beagle, poodle, human being, desk chair, kitchen table, pencil, or pen exists in reality, whereas universals (dog, animality, “furniture-ness,” “stationery-ness,” whiteness, blackness) have no real existence, and exist only in our minds, as terms or ideas. This was the view held by philosophers like Peter Abelard (1079-1142), William of Ockham (c. 1287-1347) and John Buridan (c. 1295- c. 1361).

With that in mind, why did medieval philosophers like Thomas Aquinas (1225-1274) and John Duns Scotus (c. 1266-1308) want to postulate that universals exist in reality? There are obviously no (perceptible) animalities, dog-nesses, brownnesses or furniture-nesses around us. But one of the motivations behind realism was that even at a very inchoate stage of intellectual development, we are readily capable of grouping things together, grasping them as being particular manifestations of some more all-encompassing universal. When I say that all animals are mortal, I seem to be attributing mortality to each individual animal that has ever existed and that will ever exist not in virtue of their being the individuals they are, but rather because of this universal “animality” which inheres in each one. So, the realist will argue, things seem to have an inherent feature with some kind of existence outside our minds, which we grasp and which allows us, among other things, to recognize things that we are only now seeing for the first time. This indicates that these universals must exist in reality, independently of us, and not be just mere terms or features of language.

This medieval variation on the so-called “Problem of Universals,” which dates back to Plato, originated in differing interpretations of Aristotle’s Categories, and was especially important in the realm of theology. The denial of universals had heretical implications, especially in Christology and Trinitarian theology. One’s opinion on whether the human and divine natures of Christ both had real, extra-mental existence could result in a heresy accusation: Nestorianism, which posited this, was still considered unacceptable in the eyes of the Catholic Church in the fourteenth century. Similar concerns applied to the nature of each of the persons of the Trinity, as well as the question of whether God and his attributes—such as perfection and wisdom—are one and the same, or whether they are really distinct and separate.

This dispute between realism and nominalism (or between the “antiqui” and “moderni,” referring to the “old way”—via antiqua—of Aquinas and Scotus against the “modern way”—via moderna—of Ockham and Buridan) played an important role in the late-medieval academic scene, influencing how aspiring philosophers and theologians pursued their studies. Much like the analytic-continental divide in philosophy today, the two medieval viae also had geographical divisions. Some universities had rules concerning their teaching: Ockham’s works, for instance, were first banned from the University of Paris between 1339 and 1360, and again in 1474, this time because of an official order from Louis XI, in which he oddly listed Aquinas’ and Scotus’ names next to Ockham’s and those of other “renovating doctors” such as Averroes and Bonaventure. Students and teachers protested, the edict was not taken too seriously, and it was completely disregarded by the beginning of the sixteenth century (L. Thorndike, University Records and Life in the Middle Ages, 355-360). While these were not always hospitable times for nominalists in the university milieu in Paris, nominalism was the rule at Oxford, where Ockham’s legacy prevailed. In Heidelberg, on the other hand, both viae were usually taught, under the condition that neither was criticized (R. Pasnau, Metaphysical Themes 1274-1671, 83-88).

This quarrel may have lost momentum as a major philosophical theme in the sixteenth century, but the question regarding the nature of universals persists; it has simply acquired a new form. Realism with regard to universals has its contemporary supporters in philosophers such as David Armstrong (1926-2014). And variations of nominalism are still very popular in contemporary analytic philosophy. W. V. O. Quine (1908-2000) is well known for his rejection of universals in metaphysics (although he came to revise his views on the subject throughout his life), as are Ruth Barcan Marcus (1921-2012) and David Lewis (1941-2001). It is no coincidence that these three philosophers have been highly influential in the field of philosophy of language. In metaphysics, this nominalist approach to treating universals as mere terms has paved the way for questions regarding our access to objects and how these objects relate to our language—understood as both the multitude of vernacular languages that have existed throughout history and as a possible, overarching mental language common to all human beings at all times (C. Panaccio, Les Mots, les Concepts et les Choses: la sémantique de Guillaume d’Occam et le nominalisme d’aujourd’hui).

Aline Medeiros Ramos is a Ph.D. candidate in philosophy at the Université du Québec à Montréal. She is also specializing in manuscript studies at the Pontifical Institute for Mediaeval Studies, in Toronto. Her research focuses on late-medieval philosophy and virtue epistemology, especially John Buridan’s account of epistemic virtues.