In Theory: Disha Karnad Jani interviews Jessica Whyte about Human Rights and Neoliberalism

In Theory co-host Disha Karnad Jani interviews Jessica Whyte, Associate Professor of Philosophy at the University of New South Wales, about her new book, Morals of the Market: Human Rights and the Rise of Neoliberalism (Verso, 2019).

Old People and Ancient Cellar Holes: Colonial Memory in Late Eighteenth-Century Maine

By Daniel Bottino

Symonds Baker, a thirty-two-year-old resident of Little River, near the mouth of the Androscoggin River in mid-coast Maine, testified in 1796 that during May of the previous year “I was walking along By a Place (at the Head of the Ten Miles Falls so called on Amorouscoggin River) Where there was an appearance of an old cellar and Chimney and that I found the Blade of a small sword.”  In another deposition taken two days later, Abraham Witney, also of Little River, confirmed Baker’s find of a sword and added that “about twenty four or five years ago there was an old iron hoe with an iron handle ploughed or dug up about fifty rods from Purchases Cellar which appeared to be a garden hoe.”  The “old cellar” mentioned by these two men was the presumed house site of Thomas Purchase, one of the earliest English colonists of Maine, who had come to the area around the mouth of the Androscoggin sometime during the 1620s.

The depositions of Baker and Witney, along with more than 50 others taken around the same time, were recorded with the object of determining the location of Purchase’s long abandoned dwelling, a subject of considerable controversy and legal import.  A thoughtful historical analysis of these depositions, currently held as part of the Pejepscot Papers at the Maine Historical Society, reveals much more than local disagreements between neighbors in late eighteenth-century Maine.  For through these documents I believe we are afforded a rare glimpse into the processes of colonialist memory creation and perpetuation.  In the multitude of voices that speak from this collection of depositions, a dominant theme emerges: the belief that English colonization could leave an indelible trace upon the landscape, thereby creating a memory that could endure through the generations as a justification for continued possession and colonization of the land. 

The legal case that occasioned the inquiry into Thomas Purchase’s house site was a dispute between the state government of Massachusetts (of which Maine was a part until 1820) and the Pejepscot Company, a land company that had sponsored eighteenth-century English colonization in the region just north of modern-day Portland, Maine.  As the Pejepscot Company’s claims to legal title could be traced back to the original land grant of Charles I of England to Thomas Purchase, it seems that the court put considerable effort into ascertaining where Purchase had lived in the hope of establishing the correct bounds of the Pejepscot Company’s land claims.  Significantly, there seem to have been no extant copies of the land patent held by Purchase by the time of the court case.  As a 1683 deposition in the Pejepscot Papers reveals, Purchase’s personal copy of his patent from the king was lost when his house burned down sometime before 1653.  The loss of this document must have been of great concern to Purchase, for another deposition from 1693 reveals that, aged nearly 100 years, he sailed to England “as he said purposely to look after & secure his said Pattent.”  It is not known whether he found the document, and I do not believe it exists today.  Thus, after Purchase’s death, in the absence of this written document, it was the disputed and contradictory oral memory of Purchase’s inhabitation that assumed the primary burden of validating his legal claim to colonization in Maine.

This was a rather dubious enterprise, as the testimony of the deponents, a group consisting for the most part of middle-aged and elderly men, demonstrates that opinion was split between two possible sites of Purchase’s house, with no conclusive proof for either site.  But what all these men did agree on was a shared conviction that “one Purchase,” as he was often termed in their testimony, had at some distant time early in the seventeenth century begun the English colonization of the land, a colonization maintained and continued by the region’s white inhabitants in their own time.

It is in this context that we should understand Baker and Witney’s discovery of a sword and a hoe at the site where they believed Purchase had lived, for these were symbols of successful English colonization par excellence.  In the minds of late-eighteenth-century Mainers who saw themselves as heirs to Purchase’s pioneering colonization, he had asserted his land claim through his military prowess (the sword) and his willingness to farm and thus “improve” the land (the hoe).  Whether or not these rusty artifacts really belonged to Purchase, or indeed existed at all, the assertion of their existence by Baker and Witney represents the attempted creation of a memory of English colonization tied to the landscape in which they lived.  Indeed, it is the apparently evident Englishness of Purchase’s supposed house site that appears over and over in the depositions.  For example, John Dunlop recounted that “I have seen at the head of s[ai]d falls on the easterly side of the river, a place which appeared to be the settlement of some English planter where a large cellar was dug, and remained still to be seen, which I never saw in any Indian settlement, which appears to be very ancient.”

Dunlop’s account stresses what he saw as the non-Indian appearance of the cellar hole—his mention of “some English planter” also emphasizes the fundamental association of English colonization with agriculture.  This “improvement” of the land, as it was often termed in seventeenth- and eighteenth-century documents, appears in many of the depositions.  Thus Richard Knowls told the court that as early as 1742 he had seen “a Celler on said Place and the Old People told me that this Celler and Place had before that time been improved by a Mr Purchas…there was likewise a peace of Mowing ground on said Caring place which produced good English Grass which Mowing ground the said Old People informed me was cleared & brought to by the said Purchas while he lived there.”  In describing the site’s “good English Grass,” Knowls marks the abandoned cellar hole as a mnemonically powerful site, whose grass perpetuates the memory of Purchase’s colonization across more than a century.  In doing so, Knowls and the “Old People” who passed on their memories of Purchase to him were participating in the creation of a history of successful English colonization in Maine, a fabricated history that also, critically, aimed to erase the memory of Native inhabitation.

Deposition of Richard Knowles, Collections of Maine Historical Society, Coll. 61, Volume 7, Box 5, Folder 21, https://www.mainehistory.org/.

However, this process of colonialist memory creation was neither stable nor unchallenged, and the depositions reveal this quite clearly.  In addition to the most prominent flaw in the construction of the memory of Thomas Purchase, namely the disagreement over the location of his house, some deponents even raised doubts over the supposed Englishness of one of the possible house sites (the other site was undoubtedly English, and dispute centered on its age).  Most notable in this respect are the words of Timothy Tebbetts, who when asked how he thought the supposed cellar hole had been made, answered “Either by Indians or English it look’d like the work of some human Creature.”  Although other deponents, such as John Dunlop, had testified that this same cellar hole was undoubtedly English in origin, in Tebbetts’ testimony we see an element of doubt enter into the picture.  At least for Tebbetts, the mnemonic power of the landscape was insufficient to distinguish English settlement from Indian settlement—if the cellar hole had indeed once been Purchase’s house, in Tebbetts’ estimation the ruins were no longer able to transmit the memory of Purchase’s ancient inhabitation.

This deposition confirms a fundamental truth about the colonial memory I have explored here: it was rightly perceived as extremely fragile in all of its forms, whether textual, oral, or as physical marks upon the landscape.  This is why so many times in the depositions, as we have seen in the case of Knowls, “old people,” often fathers or relatives, are described as revealing widely held historical memories to young men.  This was a mainly oral tradition whose stories were only rarely written down.  And it is important to remember that written texts were also fragile, especially in the context of life in rural Maine, as we have seen with the example of the loss of Purchase’s original land patent.  As the marks Purchase made upon the landscape were overtaken by nature, the fragile oral memory of his inhabitation slowly faded out.  But despite the tenuousness of the memory of Purchase by the 1790s, it still endured to be recorded in the written depositions that have survived to the present day. 

Above all else, these depositions reveal a deep-seated desire among many male white inhabitants of the region to assert the success of earlier English colonization in Maine, of which they were the symbolic heirs.  For these men, ancient claims of English ownership of the land assumed pre-eminent importance, such that they even dug around in old cellar holes in search of rusted artifacts as proofs.  But it is important to remember that this view was not shared by everyone.  And so I will end with the deposition of Martha Merrill, the only deposition in the archive of the Pejepscot Papers made by a woman alone.  Speaking to an agent of the Pejepscot Company in reference to the company’s land claims, Merrill declared that “I would not give him two Coppers for it all.”


Daniel Bottino is a PhD student at Rutgers University, where he studies early modern European and early American history.  His dissertation research focuses on the interaction of cultural memory with the law and the physical landscape in early modern England, Ireland and colonial Maine, with an emphasis on the role of cultural memory in the process of English colonization.

Featured Image: Cellar Hole in Vaughan Woods State Park, South Berwick, Maine. Photo courtesy of author.

The Virus, the Virtual, the Virtuoso’s Cabinet, and the World

By Saara Penttinen

This blog post was supposed to be about something else.

Being inspired by the ongoing coronavirus situation is not something I expected or wished to happen. In fact, I feel conflicted about even admitting it, despite the many rather stimulating articles, written at apparent lightning speed and exploring, for example, epidemics and plagues in history, that have sprouted up in recent weeks. There might never be a return to ‘normal’ – everything might have changed before we got a chance to say good-bye. There’s nothing else to do except to adapt, as people have done countless times in history. I haven’t been able to write in about a month, but today I felt it was time. Perhaps this is me, adapting.

Nevertheless, I’ve had some difficulties in centering my scattered thoughts, especially since the focus of this text has changed so drastically. But I want to start from the beginning: with the original marble lid of the famous Tradescant tomb in Lambeth. Since the early modern collections commonly known as cabinets of curiosity became a popular research subject in the last decades of the 20th century, the inscription – dedicated to father and son John Tradescant, eager seventeenth-century curiosity collectors – has been quoted in numerous publications. The most quoted lines go as follows:

By their choice collection may appear
Of what is rare, in land, in seas, and air:
Whilst they (as HOMER’s Iliad in a nut)
A world of wonders in one closet shut.

The last line especially, describing the Tradescant collection, aptly called The Ark, as “a world of wonders in one closet shut”, has been perceived to sum up nicely the microcosmic nature of these collections; in other words, they were understood as worlds in miniature form. But how exactly were they ‘worlds’? What was their relationship with the wider world, the macrocosm? Were they considered substitutions for the real thing, simulations, or worlds in themselves? Since asking these questions at the beginning of my PhD studies, I have fallen deeper and deeper into the abyss of early modern ambiguities and analogies. One of my key concepts especially seems to evade me. This concept is virtuality.

Virtuality is one of those terms that gets thrown around a lot nowadays – I mean a lot. It has been used so much, especially since the beginning of the internet age, that it has lost its novelty and started to appear commonplace, even mundane. Despite its popularity, virtuality as a concept is more often than not taken for granted and not actually understood very well. What does virtuality, in fact, mean?

Nowadays everything seems to be virtual, from shopping and entertainment to therapists, maps, and communities. Through a huge array of avatars on different social platforms, our social lives and most intimate communications are also, at least partially, virtual. But this was the situation only in February – it is nothing compared with where we are right now. If there ever was a time for virtuality, that time is undoubtedly now. How to be present without being present – that is the question on everyone’s lips. How to work, or go to school? How to visit elderly relatives? How to stay sane, or to feel connected with the world, even just a little bit? How to substitute the experiences that were taken away from us? Is it okay to drink a bottle of wine alone, if you’re doing it on Zoom?

Virtuality as a term holds many futuristic connotations, even if the future seems, to some extent, to be already here. The virtual future might mean an era of connectivity, shared experiences, and democratic opportunities. It can also mean a time of blurred lines between right and wrong and losing touch with reality. Though many things are arguably gained in the current dystopia, many are also lost, perhaps for good. Besides the devastating human and economic cost, some of the loss happens in the very translation of the actual to the virtual, never to be recovered again.

Despite the futuristic connotations, and the contemporary usage of the term, virtuality has a history just like everything else. The first associations for most people are the different technologies, such as virtual memory, simulations, and virtual realities. The history of virtual reality is usually said to have started in the 1930s, sometimes with a mention of earlier technologies, such as 19th-century stereoscopes and panoramas. The main function of virtual reality technologies seems to be in creating a sensory immersion of a kind – essentially, in fooling the eye, and sometimes other senses too, into feeling that the experience is taking place somewhere completely different. Oliver Grau’s 2003 book Virtual Art takes a media-historical point of view and traces the history of illusory techniques in art from modern days all the way to Antiquity. My own research period, the seventeenth century, had a huge array of ‘virtual art’ alongside a multitude of devices for creating spatial and optical illusions. In general, the early modern period can be described as an era of utmost interest in modifying the sensory experience.

Nevertheless, virtuality doesn’t only entail technologies, devices, or even illusory techniques. What’s the thread holding it all together – what’s the essence of virtuality itself? Alongside the computer-related meanings, the modern-day definition of virtuality according to a (virtual) dictionary is “in effect or essence, if not in fact or reality; imitated, simulated”, while actual is “existing in act or reality, not just potentially”. Therefore, the term could be used in the context of something being ‘as good as’ something else – as can be perceived in the everyday use of the adverb virtually.

The word virtual most likely originates from the Medieval Latin virtualis, derived from the notoriously equivocal Latin word virtus, meaning, for example, ‘excellence, potency, efficacy’. In a late 14th-century meaning, virtual meant something along the lines of “influencing by physical virtues or capabilities, effective with respect to inherent natural qualities.” By the beginning of the 15th century, this had been condensed into the modern meaning: “capable of producing a certain effect”, and therefore, “being something in essence or effect, though not actually or in fact”.

Virtual travel is a concept that instantly comes to mind when thinking of substituting the real thing with something ‘as good as’. For most seventeenth-century people, travelling virtually was the only way to see the world. Most classes, professions, and age groups rarely travelled. Religions and customs often frowned upon the concept of ‘worldliness’, and people were suspicious of wanderers and rootless people. Even those who were able to travel usually got to make only one big journey in their lifetimes – such experiences were cherished and relived, and eventually turned into travel writings, plays, and collections for other people’s armchair travel. Experience didn’t necessarily have to be direct; even an ad vivum picture could be made by an artist consulting only a previous representation of the subject.

Before March, I had no idea I wanted to travel (or at least, to wander aimlessly through hardware stores) as much as I do now. A couple of weeks ago, at the pajamas stage of the pandemic, I witnessed a morning show host pointing at the window behind him and suggesting, with a straight face, that for those viewers unable to go outside, their windows could substitute for the world outside. I didn’t know whether to laugh or to cry. The question of substitution becomes especially important when thinking about people who can’t experience things actually: how to replace the world for people confined to their homes? Even before COVID-19 this was an important issue. For decades, there has been a growing market for technological innovations designed for the lonely, the disabled, and the sick. Now that a vast number of people have found that their world has been taken away, there’s suddenly a desperate need for some kind of a window back to it. If virtual is something that can replace the actual, the question is: what can be replaced, and what cannot?

However, virtuality is not simply synonymous with replacement; according to Wolfgang Welsch, its philosophical roots go back at least to Aristotle’s concept of dynamis, literally meaning potentiality – something later writers, starting from Thomas Aquinas, called virtual. However, dynamis, or Aquinas’ virtual, didn’t mean an alternative to the actual, but a prerequisite; a possibility, within the limits of which reality was able to actualize. In later centuries the nature of the concept changed with different writers, slowly disconnecting the virtual from the actual. As this two-step connection between the virtual and the actual vanished, virtual realms could in some cases even exist separately from the actual. The eventual actualization didn’t necessarily empty the realm of virtual possibilities; they stayed alive, in some other realm. At some point, we ended up in the situation we’re in now, with the virtual and the actual existing in completely different, but in some ways mutually supportive, realms. (3–6)

The idea of having something beyond or before actuality, something to possibly support, to substitute, or even to replace it, fits well with many phenomena in different eras, cultures, media, and genres: the idea might be generally human and global, a concept larger than the etymology of the term itself. In fact, all time periods and people have had virtual media of some kind, and experienced virtual travel in some form or another: reading books, going to the theatre, listening to stories, daydreaming, playing, and creating – all of them have a quality of substitution, of making plans awaiting realization, of dry running, of conceptualizing. Virtuality might just be a way of life for humans, to some extent.

The multiple nuances of the term enrich (or confuse) my research on the relationships that cabinets of curiosity and similar collections had with the world: they can be seen, all at once, as representations of the wider world; as private worlds of the collectors; as cultural lenses onto the world; or as worlds in themselves. Could the collections, just like television and the internet nowadays, substitute for the world at large, and be a replacement for travel? And whose travel was that – and whose world: the visitors’, the collector’s, or someone else’s?

Research on the virtual worlds in cabinets of curiosity can open up interesting connections to modern-day conversations on virtuality. There is surprisingly little research that takes into account or even acknowledges the full history of the concept – usually the focus is on some niche contemporary meaning. Virtuality is not solely about technologies and illusionism, but about much larger and more fundamental themes: what is reality? What is experience? What is the effect of different kinds of media on experience and its authenticity? How does cognition work? How do we make sense of, simulate, and create worlds?

This new and sudden era of virtuality might be modern society’s attempt to cling onto an old version of the world; to save it in an ark – in ‘one closet shut’, as the Tradescants did with their collection. At the same time, something brand new is beginning to form. We might even realize that some of the old is not worth replacing after all. Perhaps the situation will force us to re-evaluate our priorities: is it more important to be constantly productive and active, or to take it slow, to keep in touch, to touch, to walk in the park, to be able to sit down outside and watch the spring arrive? What is the right ratio of the actual and the virtual in the recipe for human happiness? What even is real, and what does it mean to be real?

I’m sure that in the end, we will find the balance – we will all, yet again, adapt to the new world we are thrown into.


Saara Penttinen is a PhD student in the Department of European and World History at the University of Turku, working on virtual worlds in seventeenth-century English cabinets of curiosity. She’s currently a visiting associate at the Department of English at Queen Mary University of London.

Featured Image: Engravings of the Tradescant Tomb from Samuel Pepys Drawings, Philosophical Transactions, 1773.

The Dissident Voice inside Our Heads

By Justin Abraham Linds

What are we to do with the dissident voices inside our heads—the ones telling us that there are gradations of permissible disobedience to the quarantine? Surely, repression is one response. You suppress the thought as irresponsible and selfish and accuse others who confess this thought to you of being careless and thinking only of themselves. Denial, too, is a way forward: You claim that this is not a thought you have had, that you are a completely obedient subject of governmental and public health management, and you continue to pursue a moral and perfectionist goal of total compliance with regulations that are changing every day, that are disagreed about, and that are really only available to the most privileged members of our society. There, too, is moderate embrace, which can be distinguished from total embrace. A total embrace of dissidence looks like the “anti-quarantine” protests, which have emerged around the world in places like India, Germany, and most heavily in the United States. People attend these protests in complete defiance of social distancing orders and yell about government over-reach. In Michigan, total embrace of dissidence during COVID-19 quarantines looks like protesters showing up with rifles to the State Capitol building.

I am interested here in querying the meaning of moderate embrace of dissent during a global health quarantine or state-ordered lockdown without relying on simple condemnation or ignorant endorsement. There are forms of dissidence that move from a place of collective care rather than violent self-interest. And, perhaps, by decoupling dissidence from anti-facticity and spotlighting instead its skeptical practices, we can redeem the creativity, knowledge, and progressive visions that lie within dissident actions. At the tense intersection of social dissidence and compliance during dangerous times there is a road, not very much travelled, that runs through a rich history of authors, including Frederick Douglass, David Walker, Frantz Fanon, and Aimé Césaire, who have explored the social, political, emotional, and creative potential of disobedience.

In his lecture of March 1, 1978, the French philosopher Michel Foucault questioned the forms of political revolt that are enacted against pastoral power—forms of power that shepherd subjects and their behavior. Looking at a large swath of time from the Middle Ages to “modern forms,” Foucault offered refusing to bear arms, political parties as counter-societies, and “medical dissent” as three examples of revolt against governance as bodily management. Since the end of the eighteenth century, Foucault put forward, medical knowledge, institutions, and practices have been primary sources of management for the pastorate, so the refusal of medical governance has likewise been a primary site of dissidence:

From the refusal of certain medications and certain preventative measures like vaccination, to the refusal of a certain type of medical rationality: the attempt to constitute sorts of medical heresies around practices of medicine using electricity, magnetism, herbs, and traditional medicine; the refusal of medicine tout court, which is frequently found in certain religious groups (200)

Foucault eventually gave the name “counter-conduct” to these forms of dissidence that refuse a type of power that “assumes the task of conducting men in their life and daily existence” (200). Counter-conduct is a rejection of society’s values. It is a withdrawal from behavior contributing to the “nation’s salvation” (198). A concept that appears haunting if somewhat opaque, even for Foucault, counter-conduct is “a refusal of the relationship to the death of others and of oneself” (198).

Perhaps you’re now thinking: counter-conduct sounds dangerous. Indeed, it is. Or, at least, it’s risky. It certainly is disobedient, but it does not involve so much an attempt at erasing power as a process of finding a new source of power. Counter-conduct is marginal and risks being labeled as mad. In fact, according to Foucault, some forms of counter-conduct “may well be found in fact in delinquents, mad people, and patients” (202). But the potential madness of counter-conduct does not disqualify dissident behaviors outright. Elsewhere, I have written about counter-conduct across species in which humans choose non-human lifeforms as entities to guide their behavior. In that piece, I looked to AIDS activists from the beginning of the North American AIDS epidemic who found in viruses and bacteria guides for how to behave in a pandemic. Their behavior did not produce a cure for AIDS (no behavior has yet produced a cure for AIDS), but it did offer a range of disobedient tactics for surviving with AIDS beyond the treatments offered by biomedicine. Counter-conduct is not about abandoning responsibility, for it is also productive: counter-conduct produces, organizes, and solidifies new truths, new forms of existence, new leaders for guiding conduct. I wish to expand on this work here and think through the work of two AIDS activists who present a unique case study to examine the possibility of ethical dissidence during an epidemic. Mapping a historical example of counter-conduct might allow us to glimpse valuable forms of dissidence for our present moment and provide a tentative answer for the question: what do I do with my dissident urges?

“How to Have Sex in an Epidemic: One Approach,” cover page.

Richard Berkowitz and Michael Callen, both twenty-eight years old at the time, wrote and published “How to Have Sex in an Epidemic: One Approach” in 1983. By the time Tower Press had printed five thousand copies of “How to Have Sex in an Epidemic”, approximately 2,118 people in the U.S. had died from AIDS. In 1983, the mortality rate of AIDS was close to three out of four, and even if the exact cause of AIDS was unknown, many understood that sexual activity was a mode of transmission. Despite mounting calls for abstinence and widespread fear about sex in the gay community (or perhaps because of these things), “How to Have Sex in an Epidemic” affirmed the need for sex. The forty-page document was stigma-bashing—“Sex doesn’t make you sick–diseases do” (3)—and life-affirming—“Our challenge is to figure out how we can have gay, life-affirming sex, satisfy our emotional needs, and stay alive!” (4). It took on toxic masculinity and fear, and it affirmed that gay men must find ways to love each other “despite continuing and often overwhelming pressure” not to do so (38).

Importantly, “How to Have Sex” did not seek to change the minds of people who were choosing to avoid sex. Instead, it addressed itself to men who still craved sexual intimacy and pleasure, to people who may have even been more turned on as they used sex to distract themselves and connect with others while their friends and lovers were dying and their world felt like it was crumbling. The authors of “How to Have Sex” knew that governmental power bearing down and prescribing abstinence, or talking about sex as a taboo, rarified, and vanishing thing, might just make some people crave sexual pleasure even more. So instead of criticizing their behavior as “unsafe,” it sought to help them.

“How to Have Sex in an Epidemic,” epigraph.

In 1983, “How to Have Sex” first acknowledged that sex had become risky behavior but then proceeded by suggesting ways to diminish riskiness without giving up sex. The document’s authors sought to pin the word “responsible” to sex, a place it perhaps had never gone. They sought to create a community—rather than a governing body— of sexually active people looking out for each other’s health. They took gay sexual desire as a shepherd leading people’s behavior and preached forms of conduct for following like sheep.

Without reservation, for instance, the pamphlet states, “Unfortunately, sucking your partner can not be made risk free (unless your partner is wearing a rubber!).” But the authors were not so naive as to think that just because a sexual behavior is deemed risky, people will stop exhibiting it. The document carries on, consequently, with advice for people who are not going to stop having oral sex: “If you want to REDUCE your risk… suck but don’t let your partner come in your mouth” (18). This is insightful, realistic, harm-reducing public health talk: if you are going to do something, it advises, here is how to do so in a safer way. Instead of seeking an ineffective ban on human contact, the pamphlet sought to mitigate the riskiness of the contact. “How to Have Sex”, though, even goes a step further: “If your partner ‘accidentally’ comes in your mouth or if you get a taste of pre-come fluid, spitting it out will probably reduce your risk” (18). The quotation marks around ‘accidentally’ here speak volumes: do not distract yourself justifying, explaining away, or shaming yourself because you have had risky sex, they imply. Rather, they contend that even up to the point of the riskiest sex acts, there are still ways for you to be responsible.

“How to Have Sex in an Epidemic,” table of contents.

Certain modifications to your behavior can “probably” reduce risk for yourself and your community, although what we see in “How to Have Sex” is probable harm-reduction, not proven. The authors of “How to Have Sex” argued that it was possible for gay men to imagine and practice thoughtful, ethical, and pleasurable forms of sexual behavior even when they lacked comprehensive guarantees of the ‘rightness’ of their choices offered to them by scientific, governmental, epidemiological, and moral discourses relying on their own brands of authority.

The irony I find most compelling about “How to Have Sex” is that its recommendations are based on scientific knowledge we now know to be incorrect, and yet those recommendations and the document itself remain inspiring and useful. Without getting into the details of cytomegalovirus and the multi-factorial thesis, it is clear that the authors of “How to Have Sex in an Epidemic” based their recommendations on a theory of AIDS transmission that is now disproven. While the science is faulty, though, the findings hold up. How is that possible? Because ethical, caring advice based on imperfect medical knowledge is still valuable knowledge. The authority of the document does not come from its scientific findings. Quite the opposite: the authority of the document comes from its dissidence in the face of the dominant discourse, its embrace of care, and its interest in relaying subjugated knowledges.

The document is ‘incorrect’ about the disease but ‘correct’ about the treatment, and that is a form of dissidence potentially useful to us now. Equitable ethics is more important than a kiss blown at six feet, or mimed at three feet, or done with bodies up against one another. Innovative and creative acts of social togetherness that tweak quarantine regulations to mitigate other harms not prioritized by the state are more important than vying to be the most successful adherent to social distancing. Julia Marcus, professor of population medicine at Harvard Medical School, has recently made a similar observation, and she too finds this lesson in “How to Have Sex in an Epidemic: One Approach.” Governments manage bodies at a population scale and struggle to nimbly respond to nuance, difference, subculture, and resistance. In response to state-ordered ‘social distancing’, marginal communities—especially those with already fraught relationships to government managers—inevitably devise their own ways of responding. Communities evaluate risks and then come up with ways of mitigating those risks. In 1983, this looked like masturbation clubs (“they provide a unique atmosphere which is friendly, communal, well-lit and intensely erotic” (31)) or a ‘closed circle’ of sex buddies (“merely an expanded version of monogamy” (30)).

Today we would add to this list by recommending phone sex, sexting, webcam sex, and any kind of sexual play that is digitally mediated or spaced out. Indeed, the New York City Health Department currently recommends “video dates, sexting, or chat rooms” as forums for sexual pleasure during quarantine. Similarly, the National Institute for Public Health and the Environment in the Netherlands suggests seeking a “seksbuddy” for coping while social distancing measures are in effect. However, I think an “expanded version of monogamy” is highly suggestive and deserves brief consideration. In New York City, where I live, I have noticed that popular public spaces like parks have been nearly purged of groups that are not heterosexual pairs or heterosexual pairs plus babies. It’s as if we all agreed that the nuclear family was the safest unit of togetherness. However, letting the nuclear family naturally imply safety is risky in itself, because it suggests that other human groupings become dangerous the more they diverge from the epidemiologically sanctioned norm. Friend groups, mutual aid networks, and queer families all of a sudden look like people disobeying social distancing orders when in fact they are essential forms of togetherness that keep people alive.

To end, “How to Have Sex” preaches a generous definition of love. “It is vital to the survival of each member of the sexually active gay community”, the pamphlet affirms, “that the issues of your own health and the health of your partner(s) never become separated” (15). Through an expanded notion of love the document encourages people to think broadly about care for their partners: “If you love the person you are fucking with–even for one night–you will not want to make them sick” (39). Is it possible that love could be a shepherd of counter-conduct? Foucault does not suggest it. But as the United States feels more and more jingoistic, more and more interested in individualistic betterment for only certain citizens, more and more supportive of grotesque wealth accumulation for a select few individuals rather than large groups of laboring communities, it seems like collective behavior motivated by love, care, or affection could actually be quite dissident. A conclusion from “How to Have Sex in an Epidemic” might also be a conclusion for the question of what to do about that dissident voice: “Maybe affection is our best protection” (39).


Justin Abraham Linds is a PhD candidate at New York University.

Header image: Snails in search of affection. Wikimedia Commons.

A Dandelion Story, from Medieval Herbals to Whole Foods

By Luna Sarti

Probably the most hated weed in North America, dandelion has over the past couple of years secured a space on the shelves of “premium retailer” grocery stores, such as Whole Foods and Sprouts Farmers Market. Having grown up eating dandelion greens, I am certainly grateful to Whole Foods for validating my weird commitment to treating “weeds” as valuable plants, which I protect in the lawn. Week after week, as I observe homeowners in my neighborhood invest energy, time, and money in the Sisyphean enterprise of lawn maintenance, I delve deeper and deeper into the history of dandelions: when did dandelion become a weed, and why is it now shifting back into “a specialty food”?

In 1990 Peter Gail, known as the “King of Dandelions”, published The Dandelion Celebration: A Guide to Unexpected Cuisine – an attempt to change the perception of dandelions and prompt the American public to recognize them as food. Dr. Gail, who earned a Ph.D. in Plant Ecology from Rutgers University and was Associate Professor of Urban and Environmental Studies at Cleveland State University, advocated for plant literacy as a strategy for fighting inequalities in access to industrialized food. As the director of the Goosefoot Acres Center for Resourceful Living in Ohio, he played a significant role in raising concern about the dangers of pesticide use and in promoting environmental awareness.

Cover of the 1990 edition of Peter Gail’s The Dandelion Celebration

In The Dandelion Celebration Gail reports how dandelions, under the name “ciccoria”, used to be a popular food not only for Italian-American communities, but also for many individuals with diverse national and cultural backgrounds, including the English, Germans, Koreans, Lebanese, Greeks, and Armenians (12). Dandelion, which is defined as “an invited species” in the Introduced Species Project (ISP) sponsored by Columbia University, was broadly understood as a crop and used in an incredible number of dishes. As a matter of fact, the ISP stresses how evidence shows that throughout history “dandelions have been purposely carried across oceans and continents by human beings”, and “European settlers brought these plants intentionally to America”.

Peter Gail remarks how Italian-American communities who considered the plant an asset of their diet used the word “ciccoria” to refer to dandelions. As with many other plants, the history of the name “dandelion” is debated and unclear. However, it is interesting to observe that such linguistic variance, with different names assigned to the plant depending on its attributed value, characterizes much of the history of the social life of dandelions. The Italian “cicoria” in fact indicates an edible bitter plant (“ciccoria”, pronounced and spelled with a double “cc”, seems to be a dialectal variant), while any dictionary will indicate that the proper translation for dandelion is dente di leone (literally “lion’s tooth”) or tarassaco (from taraxacum officinale).

Most discussions of the English word dandelion focus on its likely derivation from the French “dent-de-lion”. In The Names of Plants, David Gledhill traces the origin of the term taraxacum to “the Arabic names tarakhshagog, for ‘disturber’, or to talkhchakok, indicating” – like the term cic(c)oria – “a bitter herb” (371). Although the history of plant names presents innumerable challenges, including the issue of identification, it is interesting to observe that two main semantic areas are activated by etymological research around dandelions. The reference to plant morphology, its behavior, or its flavor suggests the existence of two distinct epistemological approaches to the dandelion, both at the intersection of plant and human experience. In one approach, with etymologies highlighting the pointy leaf shape or the ubiquitous presence of the plant, the origin of the name is traced back to the phenomenological characteristics of dandelions as they emerge through sight and observation. The other approach, on the contrary, focuses on the plant’s flavor and properties when ingested by humans, thus proceeding through taste and a form of “metabolic understanding”. We could perhaps consider the two approaches as the result of either a botanic or a medical understanding of plants.

Image from MS Egerton 747, British Library, London.

As a matter of fact, earlier medieval texts seem to prefer the designation “lactuca” under images of dandelions, which might be explained by the fact that dandelions, like other plants in the family, produce a milky substance when cut. The term “dens leonis” (lion’s tooth), which highlights the morphology of the plant, seems to be a later coinage. It appears in fact in the most popular 16th-century herbals written in Latin by the Italian Pietro Andrea Mattioli and the German Leonhart Fuchs, and rendered in English by William Turner. I could not find the Latin “dens leonis” in any of the medieval herbals I consulted, nor in canonical Latin texts such as Pliny’s Naturalis Historia. In Dioscorides’ On Medical Material and Theophrastus’ Enquiry into Plants there are references to plants which are usually identified as dandelions, but there seem to be no references to lion imagery regarding the appearance of the plant. According to a 1526 German edition of the Hortus sanitatis, the term “dens leonis” was coined by a German surgeon who was very fond of the plant.

The issue of naming came to constitute the coordinates orienting my journey through shifting understandings of dandelions. As a matter of fact, in my own experience, the more Italian name of “dente di leone” slowly erased the familiar terms “cicoria” and “piscialetto” (literally “pee-the-bed!”) that I learned while foraging the herb as a vegetable with my grandmother in the fields around Badia Pozzeveri (a place which recently gained fame for its Field School in Medieval Archaeology and Bioarchaeology).  It wasn’t until I moved to Florence and started going to an urban school that I learned that the plant, with the name of “dente di leone”, was a weed. The awareness of the distinction between weed and food came for me with a new name, which erased the more familiar term “cicoria”.

A dandelion plant in the city center of Florence, Italy. Photo by author.

Weeds occupy an ambiguous space in thinking about and with plants. Scholars in plant ecology and plant philosophy encourage us to reflect on such an inherent ambiguity with the aim of engaging in a way of thinking that is “edifying, conversational, and interpretative” rather than “demonstrative” (Gianni Vattimo and Santiago Zabala, xii-xiii). The assignment of plants to the category of weed, ornament, or food in fact allows us to expose the different patterns of human ecological practices – growing crops, making roads, maintaining city surfaces, designing backyards – while reflecting on their underlying cultural assumptions. Among the multifaceted directionalities of plant-thinking, the distinction between weeds and non-weeds constitutes an interesting site for exploring the implications and contradictions of taxonomic approaches to the world. Being defined in relation to human interests and spaces, such a distinction is based not only on a hierarchical organization of living beings, but also on shifting socio-economic practices that define which plants are allowed to exist, and where. The fact that certain plants have been historically understood as either food, ornaments, or weeds speaks to the many socio-economic variables at play in the categorization of plants. Names often preserve traces of the social life of plants. With this in mind, along with a lot of other personal reasons, I will continue to pay attention to the story of dandelions, and their ambiguous existence on the border between weeds and food.


Featured Image: Dandelion plant in the city center of Florence, Italy (detail). Photo by author.

Richard Rorty as a Post-Straussian

By David Kretz

Richard Rorty and Leo Strauss are not often considered together, and admirers of one tend to strongly dislike the other.[1] Rorty’s writing, as one Straussian once put it, is “full of those vices we Straussians (if you will permit me) love to hate—relativism, historicism, easygoing atheism, anti-philosophic rhetoric, vapid leftist political opinions, uncritical progressivism, and seemingly a general indifference to virtue.” Rorty respected Strauss but poured scorn on ‘Straussianism,’ which he saw as an anti-democratic cult. Polemics aside, however, Rorty’s entire project can be profitably put in dialogue with Strauss’ thought and even cast as a response to Strauss’ questions. While differing sharply in their answers, the two thinkers often respond to the same concerns. The groundwork for this mostly implicit dialogue was laid when Rorty and Strauss overlapped at the University of Chicago—the first as a teenage student, the second as a professor—from 1949 to 1952. Strauss’ Walgreen Lectures, on which he based his opus magnum Natural Right and History (1953), fall within that period, and Allan Bloom, Strauss’ most influential student, enrolled at the University of Chicago in 1946, like Rorty, and was just one year older.

As Rorty writes in his autobiographical essay “Trotsky and the Wild Orchids” (1992), his original motivation for going into philosophy had been his hope for an answer to the question of the good life. Specifically, he desired to know how to synthesize the passion for social justice he had inherited from his Trotskyite parents with such idle and apolitical pursuits as his equally strong passion for the wild orchids that grew in the mountains around the town where he had grown up. Strauss thought that, for Western man, there were just two answers to the question of the best life, associated with the names of Jerusalem and Athens respectively: revealed religion and philosophy. Like Strauss, Rorty initially thought these were the two options. He considered Jerusalem in his student days but quickly found that his character and talents predisposed him for Athens. For several years, he became a Platonist in search of the True Answer to the question of how to hold the different things that mattered to him in life in one overarching synoptic vision of the universe and man’s purpose in it.

When Platonist philosophy didn’t provide the answers, he first turned to Hegel in the hope of a philosophical-historical narrative that would culminate in such a synthesis, and then to Proust, who taught him how to weave everything one encounters into a literary narrative “without asking how that narrative would appear under the aspect of eternity” (Trotsky and the Wild Orchids, 11). He learned to appreciate the pragmatist Dewey again, the philosophical hero of his parents, whom he had earlier learned to scorn in youthful rebellion under the influence of his Chicago teachers. He became convinced that philosophy could only provide an Answer, a Synthesis, by turning itself into a form of religion, a “non-argumentative faith in a surrogate parent who, unlike any real parent, embodied love, power, and justice in equal measure” (12). Note the emphasis on faith and irrationality here. Strauss, too, sometimes talks as if revealed religion in its purest and sharpest contrast to philosophy takes the form of an existentialist Protestantism that started with Kierkegaard and culminated in the crisis theology of Brunner, Barth, and Bultmann, which he had encountered in Germany in the 1920s.[2] Indeed, neither Strauss nor Rorty has much patience for Catholic syntheses of fides and ratio, nor is either really interested in less faith-centric and more ritual-oriented forms of religious life.

While they are close in their understanding of religion, Rorty and Strauss part ways in their understanding of philosophy. Athens, for Strauss, stands for a life of endless questioning in pursuit of natural truths. Philosophy is coeval with the discovery of nature as the idea that there is a necessity which limits divine power (hence, it stands in starkest contrast to revealed religion, which turns centrally on the idea of divine omnipotence). This natural order gives point and purpose to the philosophers’ questioning pursuit of it, and yet it poses so many riddles that we will never run out of questions to ask in any human lifetime. Without the existence of an unchanging nature, which includes human nature, philosophical questioning would be a directionless wandering rather than a directed pursuit, yet we never need to worry about the question of what we would do once we had arrived at a perfect comprehension of (human) nature. Rorty vehemently rejects this picture. Invoking nature and its eternal order and truth, for him, is just metaphysics, i.e. theology in disguise. As soon as we think that nature has a preferred description of itself, which is not merely useful but true simpliciter, we turn it into a divine person. Persons speak languages, they have preferred descriptions of themselves in their own preferred language, and they are the only entities to do so. Nature has no preferred description of itself, and does not speak any languages, not even those of mathematics or metaphysics. Both are human creations, like all other languages. While each may be useful at times, neither is true in an absolute sense. To claim that nature is truthfully described in only one idiom is to turn it into an absolute, non-human authority, i.e. a God-surrogate.

Yet for Rorty, too, there is a kind of endless intellectual pursuit, which is similar to the philosophic life according to Strauss at least insofar as it always threatens to turn on its foundations and question its basic presuppositions. Instead of a process of discovery, it is one of creation: the endless proliferation of basic vocabularies—clusters of concepts around which our explanations and justifications revolve. The paradigm for this life is the poet, understood in a broad sense, which for Rorty includes Hegel, Yeats, and even Galileo: all those who ‘make things new’ by finding useful new ways of describing the world. The paradigmatic genre is literature. Literature or, one could say, the Romantics (once they have been weaned off their metaphysics of the deep authentic Self) are a third to Jerusalem and Athens for Rorty. Strauss would, of course, deny that and call it an evasion of the true alternative out of existential despair, a kind of escapism into licentious modern vulgarity. Yet both natural discovery and poetic creation are conceptions of intellectual life as endless relative to the human lifespan, and both also frame it as a life that constantly questions its own foundations. Neither Rorty nor Strauss, presumably, would appreciate focusing on the parallels here over the differences. Whether philosophy discovers truths or creates them was of great concern to both. It is, however, the second similarity I noted—that philosophy, rightly done, will often undo basic, deeply held convictions—which leads both Rorty and Strauss to think a lot about the relation of philosophy to politics.

For Strauss, the philosophic life is essentially a life for the few. The philosophical initiates know that the laws of the polis are conventional laws but feel themselves beholden only to the one true, natural order. The pursuit of natural truths is a threat to all conventional orders. The masses can never become philosophers. To the contrary, telling the people in the cave that they are cave-dwellers will get the philosophers, who have seen the light outside, killed. They need to tread carefully if they are to avoid Socrates’ fate. Rorty, too, recommends that we each expand our basic vocabularies beyond the terms of the polis and as a society let go of cherished metaphysical convictions. He does not think, however, that public morals or political liberalism depend on metaphysical notions of Truth or the Good any more than they depend on, say, Christian faith. In the short run, the links between philosophy and politics are even more feeble. Philosophers generally just do not have the social influence to really steer things either way, and, since nobody of importance is listening, at least in the liberal democracies of the West, they do not have to fear persecution either. Political elites of either the left or the right will often appropriate philosophical language to their ends and use it as rhetorical ammunition. In fact, like Mark Lilla, Rorty thought this was what had happened to Strauss at the hands of some members of the Bush administration. Yet for the most part, philosophers only influence other philosophers.

However, Rorty does think that some particular philosophers happen to pose a particular kind of threat to the left-liberal, social democratic politics he endorses. Nietzsche, Heidegger, Foucault, and Derrida, he fears, might lead young readers to believe that proper philosophical radicalism is incompatible with social-democratic hopes. His 1989 book Contingency, Irony, and Solidarity addresses this challenge, associated particularly with the name of Heidegger, whose status as a philosophical giant and petty Nazi was as much a stumbling block to Strauss as it was to an entire generation of German-Jewish philosophy students, who listened to him lecture in Marburg before the Nazis rose to power. Strauss sees in Heidegger a radical historicist and, hence, nihilist. Like Heidegger, he thinks the antidote to the evils of modernity lies with the Greeks—though not with the archaic poets and pre-Socratic poet-thinkers, but with classical philosophy, which holds to a natural order against historicist relativism. Rorty, by contrast, argues that invoking the authority of Nature is just another form of invoking divine authority and welcomes Heidegger’s turn away from philosophy and towards poetry.

He does, however, try to persuade his readers that projects of metaphysical search for truth or poetic self-creation, however one conceives of intellectual life, are personal, private projects. The vocabularies of towering philosophical precursors might mean everything to some but they do not have to mean much to most. The vocabulary of social democracy, by contrast, is pretty useless for intellectuals trying to discover metaphysical truths or craft poetic personas, yet it is eminently useful from a public point of view. Young intellectuals who are fascinated by Heidegger (or Nietzsche, Foucault, Derrida, etc.) need not buy into Heidegger’s politics with his philosophy. They can engage with the philosophy in their personal life but adopt a completely different, progressive political vocabulary when they engage in politics. Only the thought that private and public pursuits have to be articulated in a single vocabulary, that social justice and private perfection have to be held in a single synoptic vision, would lead us to think that we have to choose between privately useful authors like “Kierkegaard, Nietzsche, Baudelaire, Proust, Heidegger, and Nabokov,” on the one hand, and publicly useful ones like “Marx, Mill, Dewey, Habermas” on the other (Contingency, Irony, and Solidarity, xiv).

Curiously, this means that Strauss can arguably be thought of as someone who anticipated in his practice what Rorty would later counsel in his writing. Strauss shared the anti-modernism of his master Heidegger, yet he did not seek to ally his philosophy to any party to guide the revolutionary overthrow of a decadent democracy. Instead of (conservative) revolution, he chose the quiet life of study, and founded a school. This school occasionally tries to educate political leaders in what they think is virtue, so that they can minimize the worst excesses of modern vice. For the most part, it will tolerate any political regime as long as it allows those who are so gifted and inclined to quietly pursue a philosophic life. Where exactly the emphasis should lie between educating political leaders and philosophical study in the seclusion of the school is a matter of fierce debate among Straussians, of course.

Some critics of Rorty on the left have no doubt sensed this possibility when they argued that his distinction between the public and the private is just a different version of the Straussian distinction between esoteric philosophical truth for the few and exoteric political dogma for the many. Rorty lists Sheldon Wolin and Terry Eagleton among these critics in “Trotsky and the Wild Orchids” and Melvin Rogers has raised similar concerns in the past. Yet, this charge does not stick for two reasons. (The Straussians, by the way, know very well that he is not one of them.) First, Rorty, unlike Strauss, does not think that some people are philosophers by nature and others are not. He does not see a qualitative difference at all between intellectuals on the one hand, for whom self-creation involves coming to terms with philosophical and literary precursors and who write books that narrate their coming to terms, and non-intellectuals on the other, whose life story involves coming to terms with family members and friends and who do not write books about it. Freud, he thinks, has taught us to see every unconscious as equally fascinating and endlessly creative and thus has democratized poetic genius.

Second, while Strauss believes that the philosophic life is higher than the merely political life, Rorty insists on the priority of democracy over philosophy. While he thinks it silly to exorcize thinkers and writers who have nothing useful to contribute to the furthering of progressive political goals, as they can still be useful for projects of private self-creation, Rorty encourages everyone, and especially every intellectual, to not just dedicate their time to projects of private self-creation but to also work towards the realization of social-democratic hopes for ever increasing social solidarity and justice. The greatest happiness of the greatest number was infinitely more important to him than that a few gifted people could live the life of Socrates, even when his own inclinations and talents lay that way. The fact that he believed his political convictions to be the contingent outcome of a long chain of historical accidents, with no metaphysical arguments to back them up in a non-circular way that would convince even a Nazi, did not mean that he did not stand for his convictions unflinchingly.


[1] I’d like to thank Hannes Kerber, Antoine Pageau St.-Hilaire, and Anne Schult for helpful comments on an earlier draft of this, though they should be in no way held accountable for anything I here propose.

[2] Cf. Leo Strauss, “Notes on Philosophy and Revelation,” in Heinrich Meier, Leo Strauss and the Theologico-Political Problem (Cambridge UP, 2006).


David Kretz is a Ph.D. student in the Department of Germanic Studies and the Committee on Social Thought at the University of Chicago. His current project contrasts poets and translators as complementary paradigms of historical agency in times of crisis.

Featured Image: Leo Strauss (left) and Richard Rorty (right). Source: Wikipedia (Strauss); YouTube channel ‘Philosophy Overdose’ (Rorty).