Think pieces

Functional Promiscuity: The Choreography and Architecture of the Zinc Gang

By Contributing Editor Nuala F. Caomhánach


Zinc Finger DNA Complex, image by Thomas Splettstoesser

 


Andreas Sigismund Marggraf

The tale of gag knuckles, taz-two, hairpins, ribbons, and treble clefs is quite elusive. Although they sound more like nicknames of a 1920s bootlegging gang (at least to me), they are the formal nomenclature of a biochemical classification system, known commonly as zinc fingers (ZnF). The taxonomy of zinc fingers describes the morphological motif that the element creates when interacting with various molecules. Macromolecules, such as proteins and DNA, have developed numerous ways to bind to other molecules, and zinc fingers are one such molecular scaffold. Zinc, an essential element for cell proliferation and differentiation, was first isolated in 1746 by the German chemist Andreas Marggraf (1709–1782). Zinc fingers were important in the development of genome editing, and while CRISPR remains king, zinc is making a comeback.

 

Strings of amino acids fold and pleat into complex secondary and tertiary structures (for an overview, see this video from Khan Academy).

 


Fig. 1: The folding funnel


Fig. 2: The energy landscape

Proteins with zinc finger domains—zinc finger nucleases—act as molecular DNA scissors, always ready to snip and organize genetic material. The return of these biochemical bootleggers, an older generation of genome-editing tools, is due to the problem of exploring the invisible molecular world of the cell. In this age of genomic editing, biologists are debating the concept of protein stability and trying to elucidate the mechanism of protein dynamics within complex signalling pathways. Structural biologists imagine this process through two intermingled metaphors, the folding funnel (fig. 1) and the energy landscape (fig. 2). The energy landscape theory is a statistical description of a protein’s potential surface, and the folding funnel is a theoretical construct, a visual aid for scientists. These two metaphors get scientists out of Levinthal’s Paradox, which argues that finding the native or stable three-dimensional folded state of a protein, that is, the point at which it becomes biologically functional, by a random search among all possible configurations would take an astronomically long time. Proteins can fold in seconds or less; therefore, biologists assert that folding cannot be random. Patterns, surely, remain to be discovered. Proteins, however, no longer seem to follow a unique or prescribed folding pathway, but move through different positions, in cellular time and space, across an energy landscape resembling a funnel of irregular shape and size. Capturing the choreography of these activities is the crusade of many types of scientists, from biochemists to molecular biologists.
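
To get a feel for why Levinthal’s argument rules out a purely random search, a back-of-the-envelope calculation helps. The following minimal sketch uses assumed, illustrative numbers that do not come from the text above (roughly three conformations per residue, a 100-residue chain, and one conformation sampled per picosecond):

```python
# Back-of-the-envelope illustration of Levinthal's paradox.
# All numbers below are assumed for illustration, not taken from the article.
conformations_per_residue = 3        # rough estimate per backbone unit
residues = 100                       # a modest-sized protein
samples_per_second = 1e12            # one conformation tried per picosecond

total_conformations = conformations_per_residue ** residues
seconds_needed = total_conformations / samples_per_second
years_needed = seconds_needed / (3600 * 24 * 365)

print(f"Possible conformations: ~{total_conformations:.2e}")
print(f"Years for an exhaustive random search: ~{years_needed:.2e}")
# ~5.15e+47 conformations and ~1.63e+28 years: an astronomically long
# time, whereas real proteins fold in seconds or less.
```

The point of the toy numbers is only the order of magnitude: any exhaustive random search is ruled out, which is exactly the gap the funnel and landscape metaphors are meant to close.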

 

Within this deluge of scientific terminology is the zinc atom of this story. With an atomic number of 30, zinc wanders in and out of the cell carrying two valence electrons. If zinc could speak, it might tell us that it wishes to dispose of, trade, and vehemently rid itself of these electrons for its own atomic stability, becoming the ion Zn2+. It leads with these electrons, recalling a zombie with outstretched arms. It is attracted to molecules and other elements as it moves around the cytoplasm and equally repelled upon being cornered by others. Whilst not as reactive as hydrogen peroxide (H2O2), it certainly trawls its valency net in the hope of catching a break to atomic stasis, at least temporarily. In this world of unseen molecular movements a ritual occurs as zinc finds the right partner to anchor itself to. Zinc trades its two electrons to form bonds, making bridges, links, ties, and connections that slowly reconfigure long strings of amino or nucleic acids into megamolecules with specific functions. All of this occurs beneath the phospholipid bilayer of the cell, unseen and unheard by biologists.


Model of a Zinc atom

The cell itself is an actor in this performance, behaving like a state security system as it monitors Zn2+ closely. If the concentration gets too high, it will quarantine the element in cordoned-off areas called zincosomes. By arranging amino acids such as cysteine and histidine close to each other along the protein chain, the zinc ion is drawn into a protein trap and held in place to create a solid, stable structure. The cysteine and histidine side chains grip the ion through their sulfur and nitrogen atoms, whose electron pairs are pulled toward the zinc as if curtailing a wild animal on a leash. In the 1990s, when the crystal structures of zinc finger complexes were solved, they revealed the canonical tapestry of interactions. It was notable that, unlike many other proteins that bind to DNA through the 2-fold symmetry of the double helix, zinc fingers are connected linearly to link sequences of varying lengths, creating the fingers of their namesake. Elaborate architectural forms are created.

Like a dance, one needs hairpins, or rather beta-hairpins: two strands of amino acids that form a long, slender loop, just like a hairpin. To do the gag knuckle one needs two short beta-strands, then a turn (the knuckle), followed by a short helix or loop. Want to be daring? Do the retroviral gag knuckle: add a one-turn alpha-helix followed by the beta-hairpin. The treble clef finger looks like a musical treble clef if you squint really hard. First, assemble around a zinc ion, then do a knuckle turn, loop, beta-hairpin, and top it off with an alpha-helix. The less complex zinc ribbon is more for beginners: two knuckle turns, throw in a hairpin and a couple of zinc ions. Never witnessed directly, this dance is interpreted by biologists using X-ray crystallography, data that looks like redacted government documents, and computer-simulated images.


X-ray crystallography of a Zinc Finger Protein, image from Liu et al., “Two C3H Type Zinc Finger Protein Genes, CpCZF1 and CpCZF2, from Chimonanthus praecox Affect Stamen Development in Arabidopsis,” Genes 8 (2017).


Aaron Klug

In the 1980s, zinc fingers were first described by the biochemist Aaron Klug in a study of the transcription of an RNA sequence. Since then a number of types have been delimited, each with a unique three-dimensional architecture. Klug used a model of the molecular world that required stasis in structure. The pathway towards this universal static theoretical framework of protein functional landscapes was tortuous. In 1904, Christian Bohr (1855–1911), Karl Albert Hasselbalch (1874–1962), and August Krogh (1874–1949) carried out a simple experiment. A blood sample of known partial oxygen pressure was saturated with oxygen to determine the amount of oxygen uptake. The biologists then added the same amount of CO2 and repeated the measurement under the same partial oxygen pressure. They described how one molecule (CO2) interferes with the binding affinity of another molecule (O2) to the blood protein haemoglobin. The “Bohr effect” described the curious and busy interactions of how molecules bind to proteins.


Jacques Monod and François Jacob, image by Agence France-Presse

In 1961, Jacques Monod and François Jacob extended Bohr’s research. The word ‘allosteric’ was introduced by Monod in the conclusion of the Cold Spring Harbor Symposium of 1961. It distinguished simultaneously the differences in the structure of molecules and the consequent necessary existence of different sites across the molecule for substrates and allosteric regulators. Monod took up the “induced-fit” model, proposed earlier by Daniel Koshland in 1958, which states that the attachment of a particular substrate to an enzyme causes a change in the shape of the enzyme so as to enhance or inhibit its activity. Suddenly, allostery erupted into a chaotic landscape of multiple meanings that affected contemporary understanding of the choreography of zinc as an architectural maker. Zinc returned as the bootlegger of the cellular underworld.


The “induced-fit” model

Around 2000, many biologists were discussing new views of proteins: how they fold, how they are allosterically regulated, and how they catalyze reactions. One major shift in the debate was the view that the evolutionary history of proteins could explain their characteristics. This was brought about by the new imaginings of the space around the molecules as a funnel in an energy landscape. When James and Tawfik (2003) argued that proteins, from antibodies to enzymes, seem to be functionally promiscuous, they pulled organismal natural selection and sexual selection theory into the cellular world. They argued that promiscuity is of considerable physiological importance. Bonding of atoms is more transient and more susceptible to change than previously thought. Whilst they recognized that the mechanism behind promiscuity and the relationship between specific and promiscuous activities were unknown, they opened the door to a level of fluidity earlier models could not contain. Zinc fingers thus played an important role as “freer” elements with the ability to change their “minds.”

These ideas were not new. Linus Pauling proposed a similar hypothesis, eventually discarded as incorrect, to explain the extraordinary capacity of certain proteins (antibodies) to bind to any chemical molecule. This new wave of thinking meant that perhaps extinct proteins were more promiscuous than extant ones, or that there was a spectrum of promiscuity, with selection progressively sifting proteins for more and more specificity. In 2007, Dean and Thornton reconstructed ancestral proteins in vitro and supported this hypothesis. If the term promiscuity, as it has been placed on these macromolecules, sticks, what exactly does it mean? If promiscuity is defined as indiscriminate mingling with a number of partners on a casual basis, is zinc an enabler? A true bootlegger? This type of language has piqued the interest of biologists, who see the importance of this term in evolution and the potential doors it opens in biotechnology today. Promiscuity makes molecular biology sexy.

To state that a protein, or any molecule, is not rigid or puritanical in nature but behaves dynamically is not the equivalent of stating that the origin of its catalytic potential and functional properties has to be looked for in its intrinsic dynamics, and that the latter emerged from the evolutionary history of these macromolecules. The union of structural studies of proteins and evolutionary studies means that biologists have not only (re)discovered protein dynamics but also highlighted the way these properties emerged in evolution. Today, evolutionary biologists consider that natural selection sieves the result, not necessarily the ways by which the result was reached: there are many different ways of increasing fitness and adaptation. In the case of protein folding with zinc fingers, what is sieved is a rapid and efficient folding choreography. These new debates suggest that what has been selected is not a unique pathway but an ensemble of different pathways, a reticulating network of single events with a different order of bond formation.

If the complex signalling pathways inside a cell begin with a single interaction, zinc plays a star role. Zinc, along with other elements such as iron, forms the underbelly of the molecular world, working unseen until captured and tied down. The new view of proteins offers to bond mechanistic and evolutionary interpretations into a novel field. This novelty is a crucial nuance in explaining the functions and architecture of molecular biology. As the comeback king, zinc fingers are used in a myriad of ways. As lookouts: biologists have infected cells with DNA engineered to produce zinc fingers only in the presence of specific molecules, so that zinc reports back and the biologist detects a molecule, such as a toxin, by looking for the surface zinc fingers rather than the toxin itself. As obstructionists: in 1994 an artificially constructed three-finger protein blocked the expression of an oncogene in a mouse cell line. As informants, they capture specific cells: scientists have experimented by creating a mixture of labeled and unlabeled cells and incubating them with magnetic beads covered in target DNA strands; the cells expressing zinc fingers remained attached to the beads. They introduce outsiders into the cell, such as delivering foreign DNA into cells in viral research. In tracking these bootleggers, scientists have pushed the limits of their own theories and molecular landscapes, but have landed on promiscuity, a culturally laden, inherently gendered word.

Graduate Forum: Excesses of the Eye and Histories of Pedagogy

This is the fourth in a series of commentaries in our Graduate Forum on Pathways in Intellectual History, which is running this summer. The first piece was by Andrew Klumpp, the second by Cynthia Houng, and the third by Robert Greene II.

This fourth piece is by guest contributor Gloria Yu.

Whether a man is wise can be gathered from his eyes. So thought the Anglican bishop Joseph Hall in his 1608 two-volume collection of character sketches, Characters of Vertues and Vices. The wise man, Hall advised, “is eager to learn or to recognize everything but especially to know his strengths and weaknesses…His eyes never stay together, instead one stays at home and gazes upon himself and the other is directed towards the world…and whatever new things there are to see in it” (quoted in Whitmer, The Halle Orphanage, 77). The wise man’s eyes are the entry points of both self-knowledge and worldly knowledge, and their divergent gaze betrays his catholic curiosity (fig. 1).


Figure 1. Joseph Hall might find the eyes, more than the books, to be the prominent ‘accessory’ in Bronzino’s Portrait of a Young Man with a Book (1530s). While the left eye is directed outward, ‘towards the world’, the gaze of the right eye (our left) seems unfocused on what is directly before it. It is not a look that invites sustained eye contact with the viewer; rather, an apparent disinterestedness suggests that the man’s attention is elsewhere. The aimless stare is a marker of a gaze turned inward. Image courtesy of the Metropolitan Museum of Art in New York.

The eye plays a central role in two recent snapshots from the history of Western pedagogy that find it productive to ground the historical construction of norms for knowing in educational contexts. Kelly Joan Whitmer’s The Halle Orphanage as Scientific Community: Observation, Eclecticism, and Pietism in the Early Enlightenment (Chicago, 2015) examines the scientific ethos that permeated and vitalized the premier experimental Pietist educational enterprise of the late seventeenth and eighteenth centuries. Orit Halpern’s Beautiful Data: A History of Vision and Reason since 1945 (Duke, 2015) offers a genealogy of contemporary obsessions with “data visualization,” their ontological and epistemological consequences, and the pedagogies that spurred them. Both works centralize the eye as the object of training and discipline. Both accounts trace the pedagogical principles behind the ways in which seeing must be learned, extending the functions of the visual apparatus beyond the role of passive reception into an active mode of evaluating and constructing. Seeing, in these histories, is imbued with cognitive powers so as to be near congruent with knowing itself. Considering these two works together prepares us to use the recurring motif of the trained eye as an opportunity for gauging future directions for histories of pedagogy.

Whitmer informs us that descriptions like the one above of the ‘character of the wise man’ were regularly incorporated into the curriculum at the Halle Orphanage’s schools for the purpose of training a particular way of seeing. The Orphanage’s founder, August Hermann Francke, believed these character sketches could “awaken” in children a love of virtue so that they might emulate the figures depicted. The end goal was the cultivation of an “inner eye,” or “the eye in people,” which Francke equated with prudent knowledge [Klugheit] and pious desires. Indeed, a pedagogy of seeing permeated Francke’s administration of the Orphanage. The training of the inner eye was based on the honing of an array of observational practices that engaged senses beyond just the visual. On one level, visual aids such as scientific instruments (e.g. the camera obscura, air pumps), wooden models, and color-coded maps cultivated tactile, spatial, and relational knowledge. In Francke’s correspondence with the mathematician Ehrenfried Walther von Tschirnhaus, the microscope figures as a particularly apt tool for lessons in seeing since it enhanced understanding of the configuration of parts and wholes while also providing the opportunity to reflect on the faculty of seeing itself. Through observing how minute changes in light transformed one’s image of an object, one could discern the possibilities and limits of human perception. On another level, visual aids, like the models of the geocentric and heliocentric heavenly spheres specially commissioned for the Orphanage, enhanced cognition and yielded spiritual benefits by teaching students how to behold incompatible representations of the world and to reconcile them. The Orphanage taught a manner of “seeing all at once,” wherein the eye was considered a “conciliatory medium” that could aid in transcending interconfessional disagreements.

For Whitmer, seeing and the eye operate as metaphors for perception, understanding, and cognition writ large, at times even as the privileged site for accessing divine truths. Notable is her use of the singular. If it is through seeing in a broad sense (materially, affectively, cognitively) that we encounter the world and acquire knowledge of it, it is the eye that mediates this encounter. From the ancient origins of the evil eye, to Avicenna’s comparison of the eye to a mirror, to the early modern possibility that a wrong stare could compel an accusation of witchcraft, the eye—and not only in the Western tradition—has perhaps always been overdetermined. Its activity always transcends the physiological processes behind mere visual perception. It was after all in tempting Eve to take from the tree of knowledge that the serpent said, “For God knows that in the day you eat of it, your eyes will be opened” (Genesis 3:5).


Orit Halpern, Beautiful Data: A History of Vision and Reason Since 1945 (Durham: Duke University Press, 2015).

“Vision,” as Halpern notes, “is thus a term that multiplies—visualization, visuality, visibilities” (24). Dexterity in handling the conceptual differences of this plurality allows Halpern to explain how, since the end of World War II, we have come to bestow such a capacious range of talents on our ocular organs. In the context of increasing computational and data-scientific approaches in information management, architecture and design, and the cognitive sciences, Halpern argues, vision has taken on a highly technicized form, mediated by screens, data sets, and algorithms that fundamentally expand the epistemological prowess of perception. In particular, postwar cybernetics and design and urban planning curricula had a hand in transforming the ontology of vision. Cybernetics’s preoccupation with prediction, control, and crisis management foregrounded information storage and retrieval and thus transformed the eye into a filter and translator of sorts, selecting relevant information for storage within a constant flow based on algorithms for pattern recognition. Whereas the eye at the Halle Orphanage dwelled on contradictory images, the eye of twentieth-century cybernetics constantly made choices.

The works of these authors suggest the advantages of approaching questions central to intellectual history—concerning the transmission, unity, and cultural impact of ideas—from the perspective of histories of pedagogies. Analyzing pedagogy allows Halpern to trace how philosophical ideas were transformed into teachable principles and how a certain paradigm of vision was promulgated through the materiality of our media environments. In her narrative, industrial designers, drawing from cybernetics, developed an “education of vision” grounded in “algorithmic seeing” (93). Here, vision mimicked the activity of a pattern generator: rather than representing the world, this ‘new’ vision captured the multitude of ways something could possibly be seen. Vision was about scale and quantity, about producing as many representations and recombinations of the visual field as possible. No longer could any pedestrian see the world; one had to learn through credentialed training to be considered an “expert” in vision (94). The screen-filled “smart” city of Songdo, South Korea is an embodiment of this paradigm, where older ways of seeing lose out to a vision continually registering and ordering human and environmental metrics toward the goal of “perfect management and logistical organization of populations, services, and resources” (2, see fig. 2).


Figure 2. The “smart” city Songdo, South Korea in the Incheon Free Economic Zone features “computers and sensors placed in every building and along roads to evaluate and adjust energy consumption.” Photo and description courtesy of Condé Nast Traveler.

Furthermore, histories of pedagogy, whether central to a project (as in Whitmer’s case) or part of a larger investigation measuring the scale of cultural transformation (as in Halpern’s), can answer questions that animate the intersection of history of knowledge, history of science, and historical epistemology. These fields have offered a robust vocabulary for interrogating the historical contingency of observation and objectivity; the historical, social, and cultural criteria for knowledge; and the methodological goals and tactics involved in fortifying scientific persona. Besides the fact that pedagogy is a science and set of practices specifically concerned with communicating knowledge, Whitmer and Halpern show that, in addition to articulating what it means to learn in certain moments and institutional contexts, pedagogical theories can contain latent or overt temporalities, subjectivities, and modes of vision that deepen our understanding of what in the past has made knowledge count as knowledge.

Yet, histories of pedagogies have even more fruits to bear. If earlier histories focused on matriculation rates, student body demographics, and the longevity of institutions, these two works show that future research is capable of taking on the relationship between science and religion, the convergence and divergence of intellectual traditions, the crests and troughs in the history of humanism, and episodes in the centuries-long contemplation of what it means to be human. Future research would follow precedent histories of pedagogy, not least the scholarship of our discussant, in treating sensitively the gaps between ideals of education and their execution, the slips between the social and political aims of education and their imperfect applications (Anthony Grafton and Lisa Jardine, From Humanism to the Humanities: Education and the Liberal Arts in Fifteenth- and Sixteenth-Century Europe). That pedagogy is a tradition embedded in text, reliant on practices, and crucial to the scaffolding of intellectual traditions places the excavation of its pasts well within the purview of intellectual historians.

One last note on the eye. Mention of training the eye calls to mind the familiar narrative that forms of discipline often get inscribed on bodies. While this may be true, that there are multiple histories of pedagogy suggests the probability that we are trained in numerous contexts, at different times in our lives, and in potentially incompatible ways in how to see and how to know. It is possible, then, to become at once wise and childish, to become discerning in one way and docile in another. As to whether we are overwhelmingly determined by systems of power and whether there are multiple systems at play, I answer with an optimistic agnosticism that leaves open the question of the place of freedom. I rely on the vagueness of the term freedom here to underscore that the day-to-day exercises in learning may fail to yield desired results or, even when they do, may be useful in ways other than intended. For the researcher willing to question the Foucaultian schema that binds discipline with docility, the histories of pedagogy—of pedagogies with a range of reaches, in quiet pockets of Europe or in grander governmental programs—can be treated as an approach capable of accommodating histories of discipline that remains sensitive to the rough trajectories of learning. If the Panopticon’s surveilling gaze, both omnipresent and invisible, inspires the self-policing of behavior, how does the prisoner see when he looks back?


Jeremy Bentham’s Panopticon

Gloria Yu is a Ph.D. candidate at the University of California, Berkeley. Her research focuses on the history of psychology and the concept of the will in nineteenth-century Europe. She thanks David Delano, Elena Kempf, and Thomas White for their incisive comments on earlier drafts.

J. M. W. Turner’s “Dissolving Views”

By guest contributor Jonathan Potter


J. M. W. Turner, War. The Exile and the Rock Limpet (1842)

Reviewing the 1842 Royal Academy of Arts exhibition, the art critic John Eagles wrote of J. M. W. Turner’s paintings:

They are like the “Dissolving Views,” which, when one subject is melting into another, and there are but half indications of forms, and a strange blending of blues and yellows and reds, offer something infinitely better, more grand, more imaginative than the distinct purpose either view presents. We would therefore recommend the aspirant after Turner’s style and fame, to a few nightly exhibitions of the “Dissolving Views” at the Polytechnic, and he can scarcely fail to obtain the secret of the whole method […] Turner’s pictures […] should be called henceforth “Turner’s Dissolving Views” (“Exhibitions—Royal Academy,” Blackwood’s Edinburgh Magazine, July 1842, p. 26).

The comparison is no doubt intended to reduce the stature of Turner’s paintings from high art to the level of popular performance. Eagles was not a fan of Turner’s work (he begins by suggesting Turner suffered hallucinations), and he reused the dissolving view comparison the following year to note with approval that there were few imitators of Turner’s “‘dissolving view’ style” (“Exhibitions,” Blackwood’s Edinburgh Magazine, Aug 1843, p. 188). But Eagles was not alone. Much of the press for Turner’s paintings at the 1842 exhibition was negative, concentrating primarily on various aspects which broadly fall into the category of realism: clarity, recognisability, and believability of depiction, among others.

Part of the problem also lay in Turner’s subject matter—reviewers struggled, for example, to relate the exiled Napoleon and the limpet in War. The Exile and the Rock Limpet to the sea burial of the artist David Wilkie in Peace – Burial at Sea. These two subjects (or three if you include the jarring juxtaposition of emperor and limpet) do not naturally fit within a traditional historiographical narrative or seem to follow a sequential logic.


J. M. W. Turner, Peace – Burial at Sea (1842)

The paintings contradicted reviewers’ expectations by disregarding realist principles of depiction in both form and content. In order to understand them, we need to look beyond the traditions of fine art painting. This, indeed, is what Eagles suggested when he called the paintings “dreamy performances” and directed the reader to consider them as “dissolving views.”

A successor to the phantasmagorias, the dissolving view was a magic lantern show that used a gradual transition (the “dissolve”) from one image to another. This could utilize superimposition or, more often, involve a gradual dimming and elimination of light through one lens whilst proportionally increasing light through another. Dissolving view shows came to prominence sometime in the first part of the nineteenth century (Simon During suggests around 1825 [Modern Enchantments, 102-3]), taking over from the phantasmagoria as the chief magic lantern entertainment.
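
The optical trick itself can be pictured as a simple crossfade: the brightness of one image ramps down while the other ramps up in proportion. The sketch below is only an illustration of that crossfade logic, using NumPy arrays as stand-in images; the function name and the linear ramp are my own assumptions, not a description of any historical lantern apparatus.

```python
import numpy as np

def dissolve(image_a, image_b, steps=10):
    """Linearly crossfade from image_a to image_b, mimicking a magic-lantern
    dissolve: light through one 'lens' dims as the other brightens."""
    frames = []
    for i in range(steps + 1):
        t = i / steps                      # 0.0 -> 1.0 across the transition
        frames.append((1 - t) * image_a + t * image_b)
    return frames

# Illustrative use: fade a bright 'day' view into a dark 'night' view.
day = np.full((64, 64), 200.0)     # placeholder daylight image
night = np.full((64, 64), 40.0)    # placeholder moonlight image
sequence = dissolve(day, night, steps=20)
```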

The dissolving view is unstable and, potentially at least, destabilising, offering an alternative to traditional sequential historiography. The dissolving view presents paired images that blur together as they transition. In dissolving, the images attain, lose, and regain focus and clarity, and, for transitory moments, appear to coincide and coexist with no clear distinction from one to the next. The dissolve blurs the visual field, but it also blurs the semantic fields of content and context.

A handbill for dissolving views at the Adelaide in London, for example, promises a variety of different subjects seen in different states or time frames. The “Water Girls of India,” for instance, appear in daylight and then in moonlight, followed by the Tower of London in daylight, then moonlight, then on fire. As the lantern changes from one lens to the other, the scene dissolves from day to night and the viewer is given the sense of time passing. Because the subject remains the same, often there is very little movement beyond the changing light or incidental details. The first image (either night or day) implicitly reiterates an aspect (in this case, diurnal/nocturnal light) of the next image, for which it is the past—i.e. the nocturnal image acquires meaning in relation to the diurnal image—and this semantic return of the past implies the next stage in the cycle. The implied sequence follows a causational rationale (day to night to day), but its progression from past to present to future is also a progression from past to present to past. This rhythmic logic is further complicated by the progression to the next subject. There is no clear logical connection between the water girls of India and the Tower of London except that both share a rhythmic temporality (both transform from day to night). The teleology of cause and effect is replaced by coincidence and shared rhythms that are not causation but do allow a certain predictive logic.

This destabilisation is not without form or structure. These kinds of dissolving view present a cycle which intermittently reinstates something resembling linearity and perspectival order, but this linearity is caught in a revolutionary whirl from one to the next and (potentially at least) back again. This is a visual whirl in more than one sense: the blur of the images replicates the visual field of motion and, in fact, dissolving view images were often circular. In essence, the whirling of the dissolving view contains a sense of rhythmic regularity. In images which oscillate between summer and winter or night and day, as magic lantern dissolving views often did, a natural sequential rhythm supplants the linear progressions of dominant conceptions of time and history. Rather than succession and disjunction, the dissolving view implies repetition and conjunction. It acts as a conceptual counterpoint to the linearity of conventional historical thought that emphasizes the sequential logic of cause and effect.

We can see such effects in many of Turner’s paintings at the 1842 exhibition. Turner experimented with circular, octagonal, and square canvases of proportions reminiscent of lantern slides, on which colours characteristically whirl around a central point, and his images suggest forms of motion—most famously in his later painting Rain, Steam and Speed – The Great Western Railway (1844), but also in the angled column of smoke in “Peace”, or the vortexes of colours in the two deluge paintings.

 

If we follow Eagles’s suggestion and consider these as “dissolving views,” then we might consider the two most difficult paintings, Peace and War, as a cyclical binary. The bright sunshine of War melts into the dark clouds of Peace much as magic lantern slides might melt from day to night or summer to winter. The light sources also share a structural unity – that central beam of brightness eviscerating the darkness of Peace is mirrored by the sun’s reflections in War. Thematically, too, there is some unity in the shared representation of the sea, though in War this is a watery shoreline rather than sea proper.

But what about the difficulty of the central subjects? Exiled Napoleon does not so obviously dissolve into David Wilkie’s burial at sea, and any notion of the latter returning back again to the former is more than a little jarring. Perhaps these difficulties are part of the point. They are the difficulties faced by viewers looking for conventional socio-historical links in paintings that, as the metaphorical dissolving view often does, seem to defy such conventions. Turner’s paired paintings demand that the viewer think beyond established norms to reflect upon the meanings of teleological historiographical practice. In pairing Napoleon with the limpet, the human figure of the man is pulled away from the mythology of the emperor.

Turner’s “dissolving views,” notably blurred and indistinct except for their central subjects, seem to be locked in the moment between two images. The space of actual physical objects—the focal point of visible reality and the space of event—is very small in these pictures, confined to only thin bands of land that run across the mid-sections of canvas between water and sky. This is true of all of the 1842 exhibition paintings. The space of visible definition and physical solidity is caught between the indistinctions and contradictions of watery reflection and vaporous sky. The vast majority of painted space is given to indistinction, as though the whirling visual chaos around physical phenomena were as important as the phenomena themselves.

Turner might be, as various critics have suggested, drawing attention to the embodied subjectivity of vision, but he is drawing attention, too, to the ambiguity of interpretation. In the paired paintings, War and Peace, viewers used to history being “for” something (for understanding progress, nationhood, divine provenance, or a multitude of other values) are prevented from resolving the images into a coherent narrative. This is not history as chronology or ideology but as event, to be set within an interpretive framework only by individual observers in full knowledge that such frameworks are not naturally occurring, but imposed, and so necessarily reduce events to certain structures and values. This relates to the metaphorical dissolving view in precisely its insistence that visibility does not equate to understanding and that, in the blurry vague expanse around the focussed subject, the flaws and ambiguities in our understanding are rendered visible—great gaps that rupture the certainty of the visible space.

The more certain we, as viewers, are that this is a view of Napoleon on Elba and this is a view of Wilkie’s funeral, the more uncertain we become of our interpretation of the pairing, and the more they converge in contradiction. These two events are juxtaposed, so that their meaning and relational dynamic is left more or less open to the viewer’s interpretation. The structures of historical force (of sequence, of continuum) are rendered visible. In the British soldier, for instance, Napoleon’s historical past is made visible, as are his imprisoned present and future. However, these forces are not the main agents of meaning in the images—they are peripheral, there to be identified, but attention is not purposely drawn to them. In this sense, these are extra-historical images which probe and question the history they ostensibly project. As dissolving views, these images do not resolve uncertainty, they generate it, they blur conventional structures and obscure dominant historical relations. They inculcate a historiographical perspective of complex relations that evade the organising structures of cause and effect, of sequential succession, of contradistinction and perspectival clarity.

Jonathan Potter recently completed his first book, Discourses of Vision in Nineteenth-Century Britain. He completed his PhD at the University of Leicester in 2015 and currently teaches at Coventry University. Find him on twitter at @DrJonPotter

Graduate Forum: The Radical African American Twentieth Century

This is the third in a series of commentaries in our Graduate Forum on Pathways in Intellectual History, which is running this summer. The first piece was by Andrew Klumpp, the second by Cynthia Houng.

This third piece is by guest contributor Robert Greene II.

“Remember the ladies.” This is a line from Abigail Adams’ famous letter to her husband, John Adams, defending the idea of rights and equality for women. “Remember the ladies,” however, could easily also serve as the defining idea of modern African American intellectual history. Many historians of the African American intellectual tradition have taken great pains to emphasize the importance—indeed, the centrality—of African American women to that intellectual milieu. At the same time, other fundamental questions have been raised about not just whom to privilege in this new turn in African American intellectual history, but also what sources are appropriate for intellectual history. Finally, the ways in which the public remembers the past animate newer trends in African American intellectual history. In short, African American intellectual history’s recent historiographic turns offer much food for thought for all intellectual historians.

 

The field of African American intellectual history has come a long way since the heyday of historians August Meier and Earl E. Thorpe, both prominent in the then-nascent field of African American intellectual history in the 1960s. Meier’s Negro Thought in America, 1880–1915 and Thorpe’s The Mind of the Negro: An Intellectual History of Afro-Americans were both written in the 1960s and set the standard for African American intellectual history for decades to come. Both books, however, were focused heavily on male intellectuals. As such, they shaped the field and, along with so much of African American history up until the late 1980s, left out the important voices of many African American women.

The rise of historians like Evelyn Higginbotham in the early 1990s ushered in new ways of understanding the intersection of race and gender through American history. Her book Righteous Discontent (1993) and essay “African American Women’s History and the Metalanguage of Race” (1992) both provided templates for how to meld women’s history and African American history into texts that became essential works for understanding the past through viewpoints and sources normally ignored by most male historians.

Today, the field of African American intellectual history has been influenced by the evolution of two related fields: African American women’s history and Black Power studies. Both fields have attempted to overturn older assumptions about African American history by focusing on previously marginalized sources and historical figures. Many of the recent historiographic trends in African American history—namely, a deeper understanding of Black Nationalism and its relationship to broader ideological trends in both Black America and the African Diaspora—would not have been possible without both a deeper understanding of the importance of gender to African American history and a willingness to expand the definition of who are “important” intellectuals “worthy” of study.

In just the last year alone, numerous books about the intersection of Black Nationalism and gender have challenged earlier assumptions about the histories of both fields in relationship to African American history. Both Keisha Blain’s Set the World On Fire: Black Nationalist Women and the Global Struggle for Freedom (University of Pennsylvania Press, 2018) and Ashley D. Farmer’s Remaking Black Power: How Black Women Transformed an Era (UNC Press, 2017) stretch the time period in which historians should understand the origins of Black Power—getting further away from just understanding the 1960s-era context and situating Black Power and larger Black Nationalist trends in a long era of resistance and struggle led and strategized by African American women.

Set the World on Fire follows up on other works about the Black Nationalism of the 1920s, arguing that it did not end with Marcus Garvey’s deportation from the United States in 1927. Instead, argues Blain, it was women such as his spouse Amy Jacques Garvey who kept Black Nationalist fervor alive across the United States. Meanwhile, Farmer’s book shows how the ideas of women associated with the Black Power movement of the 1960s owe a great deal to the longer arc of radical black women’s history in the twentieth century—from the agitation of black women within the Communist left of the 1930s and stretching well into the 1970s and 1980s. For Farmer, the history of a radical black nationalism does not end with the collapse of the Black Panther Party in the late 1970s.


Amy Jacques Garvey, with her husband, Marcus Garvey.

 


Derrick White’s The Challenge of Blackness: The Institute of the Black World and Political Activism in the 1970s (Gainesville: University Press of Florida, 2011)

Meanwhile, other trends within African American intellectual history point to the utilization of previously ignored or forgotten sources to provide a deeper understanding of the past. Derrick White’s The Challenge of Blackness: The Institute of the Black World and Political Activism in the 1970s (University Press of Florida, 2011) argues for diving deeper into relatively recent African American intellectual history to provide a fuller picture of the post-Civil Rights Movement era. For White, the African American think tank was an important ideological clearing house for not just African Americans, but the broader Left in the 1970s.

 

A third movement within the field is the study of African American history itself. Pero Dagbovie has led the way in this, writing several key works detailing the rise of African American history over a broad timespan. Works such as African American History Reconsidered (University of Illinois Press, 2010) and The Early Black History Movement (University of Illinois Press, 2007) detail not only historiographic trends in the field, but the ways in which the institutions necessary for the growth of African American history were born and nurtured against the backdrop of Jim Crow segregation.

Finally, the importance of understanding memory to African American intellectual history has changed the way African American intellectual historians think about the intersection of ideas with public discourse. In reality, much of the understanding of “memory” by African American intellectual historians concerns forgetting by the vast public. Books such as Jeanne Theoharis’s A More Beautiful and Terrible History (Beacon Press, 2018) emphasize how much of the American mainstream media—along with most politicians—have been complicit in hiding the deeper, more complicated histories of the Black freedom struggle in the United States.

African American intellectual history offers plenty of new opportunity for scholars interested in linking intellectual history to other sub-fields. African American activists and intellectuals never existed in a vacuum, whether geographic or ideological. They made alliances with a variety of groups and forces, all for the sake of freedom across the African diaspora. The new turns in African American intellectual history reflect this aspect of black history.

Robert Greene II is a Visiting Assistant Professor of History at Claflin University. He studies American intellectual and political history since 1945 and is the book review editor for the Society of US Intellectual Historians.

“Every Man is a Quotation from all his Ancestors:” Ralph Waldo Emerson as a Philosopher of Virtue Ethics

By guest contributor Christopher Porzenheim

Even the smallest display of virtuous conduct immediately inspires us. Simultaneously we: admire the deed, desire to imitate it, and seek to emulate the character of the doer. […] Excellence is a practical stimulus. As soon as it is seen it inspires impulses to reform our character. -Plutarch. [Life of Pericles. 2.2. Trans. Christopher Porzenheim.]


Ralph Waldo Emerson

Ralph Waldo Emerson has been characterized as a transcendentalist, a protopragmatist, a process philosopher, a philosopher of power, and even a moral perfectionist. While Emerson was all of these, I argue he is best understood as a philosopher of social reform and virtue ethics, who combined Ancient Greco-Roman, Indian, and Classical Chinese traditions of social reform and virtue ethics into a form he saw as appropriate for nineteenth-century America.

Reform, of self and society, was the central concern of Emerson’s philosophy. Emerson saw that we as humans are by nature reformers, who should strive to mimic the natural and spontaneous processes of nature in our reform efforts. As he put it in one of his earliest published essays, Man the Reformer (1841):

What is a man born for but to be a Reformer, a Remaker of what man has made; a renouncer of lies; a restorer of truth and good, imitating that great Nature which embosoms us all[?]

Reforming oneself, with models of moral and religious heroes from the past, and through one’s own example, others, and eventually society itself, was the idea at the center of Emerson’s philosophy. He would often echo the virtue ethicist Confucius’s (551–479 BCE) advice that “When you see someone who is worthy, concentrate on becoming their equal; when you see someone who is unworthy, use this as an opportunity to look within yourself [for similar vices].” [A.4.17.]

For example, in the essay History (1841), Emerson wrote that “there is properly no history; only biography” and argued that this “biography” exists to reveal the virtues and vices of exceptional individuals’ characters:

So all that is said of the wise man by Stoic, or oriental or modern essayist, describes to each reader his own idea, describes his unattained but attainable self. All literature writes the character of the wise man. […]  A true aspirant, therefore, never needs look for allusions personal and laudatory in discourse. He hears the commendation, not of himself, but more sweet, of that character he seeks, in every word that is said concerning character[.]

For Emerson, the task of all literature and history was offering people enjoyable and memorable examples of virtue and vice by which to pattern their own character, relationships, and life. “The student is to read history, actively and not passively; to esteem his own life the text, and books the commentary.” History is a biography of our own potential character.

The logical result of these beliefs was Emerson’s later work, Representative Men (1850), a collection of essays which provided biographies of “wise men,” “geniuses,” and “reformers,” each illustrating certain virtues and vices for his readers to learn from.

Plato, for example, represented to Emerson the virtues and vices of a character shaped by philosophy; Swedenborg, those of the mystic; Montaigne, the skeptic; Shakespeare, the poet; Napoleon, the man of the world; and finally Goethe, the writer.

Representative Men was in part a direct response to the work of Emerson’s friend Thomas Carlyle’s On Heroes and Hero Worship & The Heroic in History (1841). But both men’s works shared a common ancestor well known to their contemporaries, Plutarch’s Parallel Lives.


A bust of Plutarch in his hometown of Chaeronea, Greece

Plutarch (46–120 CE), a Greco-Roman biographer, essayist, and virtue ethicist who was deeply influenced by Platonic, Aristotelian, Stoic, and Epicurean philosophy, wrote a collection of biographies (now usually called The Lives) and a collection of essays (The Morals), which would both serve as models for Emerson’s work.

Plutarch’s Lives come down to us as a collection of 50 surviving biographies. Typically in each, the fate and character of one exceptional Greek individual is compared with those of one exceptional Roman individual. In doing so, as Hugh Liebert argues, Plutarch was showing Greek and Roman citizens how they could play a role in shaping first themselves and, through their own example, the Roman world. In an era that perceived itself as modern, chaotic, and adrift from the past, Plutarch showed his readers how they could become like the heroes of the past by imitating their virtuous patterns of conduct.

Plutarch’s Lives provoke moral questioning about character without moralizing. They give us a shared set of stories, some might say myths, by which we can measure ourselves and each other. They show in memorable stories and anecdotes what is (and is not) worth admiring: virtues and vices.

We might, for example, admire Alexander the Great’s superhuman courage. But, what of the time he “resolved” a conflict between his best friends by swearing to kill the one that started their next disagreement? Or, even worse, what of when he executed Parmenion, one of his oldest friends? The Lives are not hagiographies.

Instead, they are mirrors for moral self-cultivation. For Plutarch, the “mirror” of history delights and instructs. It reflects the good and bad parts of ourselves in the heroes and villains of the past. The Lives are designed as tools to help reform our character. They help us see who we are and could become because they portray the faces of virtue and vice, as Plutarch put it at the start of his biography of Alexander the Great:

I do not aim to write narratives of events, but biographies. For rarely do a person’s most famous exploits reveal clear examples of their virtue and vice. Character is less visible in: the fights with countless corpses, the greatest military tactics, and the consequential sieges of cities. More often a person’s character shows itself in the small things: the way they casually speak to others, play games, and amuse themselves.

I leave to other historians the grand exploits and struggles of each of my subjects – just as a painter of portraits leaves out the details on every part of his subject’s body. Their work focuses upon the face. In particular, the expression of the eyes. Since this is where character is most visible. In the same way my biographies, like portraits, aim to illuminate the signs of the soul. (Life of Alexander. 1.2-1.3. Trans. Christopher Porzenheim)


Eighteenth-century European depiction of Confucius

Emerson was in firm agreement with Plutarch about the relationship between our everyday conduct, virtue, and character. In Self-Reliance (1841), he wrote that “Character teaches above our wills. Men imagine that they communicate their virtue or vice only by overt actions, and do not see that virtue or vice emit a breath every moment.” This idea is axiomatic for Emerson. Hence, in his essay Spiritual Laws (1841), he quotes Confucius’ claim: “Look at the means a man employs, observe the basis from which he acts, and discover where it is that he feels at ease. Where can he [his character] hide? Where can he [his character] hide?” [A.2.10] For Plutarch and Emerson, our character is revealed in the embodied way we act every moment; in the way we relate to others – in our spontaneous manners, etiquette, or lack thereof.

As Emerson’s approval of Confucius suggests, Plutarch’s Lives, and Greco-Roman philosophy in general, were merely one great influence on Emerson’s ideals of self and societal reform. It is to these other influences, from Confucian philosophy in particular, that we will turn in a subsequent post, in order to clarify Emerson’s philosophy of virtue ethics and social reform.

Christopher Porzenheim is a writer. He is currently interested in the legacy of Greco-Roman and Classical Chinese philosophy, in particular the figures of Socrates and Confucius as models for personal emulation. He completed his B.A. at Hampshire College studying “Gilgamesh & Aristotle: Friendship in the Epic and Philosophical Traditions.” When in doubt he usually opens up a copy of the Analects or the Meditations for guidance. See more of his work here.

Social Defiance and Counter-Institutions: What Aesthetic Philosophy Misses in the Ontology of Rock Music

By guest contributor Jake Newcomb


Rage Against the Machine, end of set, Vegoose Music Festival, 28 October 2007. https://www.flickr.com/photos/penner/1977294428/

With the publication of Rhythm and Noise: An Aesthetics of Rock in 1996, philosopher Theodore Gracyk made the first breach into a modern ontology of rock music. Gracyk’s ontology postulated that rock music is separated from other genres by the centrality of the sound recording, in contrast to the centrality of live performance in classical music. By downplaying the role of live performances, songwriting, and songs themselves in the ontology of rock, Gracyk left his assertion open to critique. Some of the critiques that followed, like Andrew Kania’s in 2006, keep the ontology of rock music “recording-centered” but try to refine the exact contours of what “recording” is. Kania acknowledged that rock music is a recording-centered phenomenon and argued that the primary goal of rock musicians is to “construct [recorded] tracks,” as opposed to writing songs. Kania used this assertion to make the claim that recording a track precedes the existence of a song, and that it isn’t until after a track is recorded that a “song” can exist. In 2015, Franklin Bruno entered the debate, arguing not only that songwriting and the existence of songs can precede the recording of a track, but also that the quality of songwriting and songs is as important to the fans of rock music as the skill of recording tracks. Bruno emphasized the viewpoint that songs and recordings are not mutually exclusive but mutually dependent.

This chain of argumentation leaves out the cultural and social dimensions of rock music, which could be of particular interest to historians. Rock musicians have openly defied the status quo and the socio-political norms of their times, and have become symbols of resistance to the social structures that surrounded their art. Defining moments in the development of rock, and how that development was perceived by the public, are entangled with the contemporary political and economic situations that surrounded those moments. The examples below demonstrate how rock musicians have symbolized social and political conflict.

Take, for example, the identity of Nirvana’s Kurt Cobain in the 1990s. In “The Man Whom the World Sold,” Mark Mazullo argued that Cobain’s aggressive music and superstar status in the early 1990s brought into public consciousness, for many people, a generational conflict between “Generation X” and the “Baby Boomers.” Nirvana’s confrontational music resonated with Generation X, who felt that the generation that directly preceded them, the Baby Boomers, had raised them in a system that could not provide the pathways to happiness and prosperity promised to them by the idealism of post-war America. Mazullo quotes the journalist Sarah Ferguson, who published an article after Cobain’s suicide saying that “the hit ‘Smells Like Teen Spirit’ was… a …resounding fuck you to the Boomers and all the false promises they saddled us with.” Following Cobain’s suicide in 1994, “every major print venue in the country ran obituaries and commentaries on Cobain’s heroic cultural role,” and “suicide hotlines were established across the country,” because there was an immediate fear of copycat suicides. Lorraine Ali, writing for the New York Times, chalked up the significance of Cobain’s death to the generational tension that allowed Nirvana to sell “millions of albums to peers who can relate to their rootless anger.” Here, Ali uses the word “peer” to describe the people who purchased Nirvana’s music, not “fans,” because to many listeners of Nirvana, Cobain was one of them, not just a rockstar. Cobain, plagued by many of the same ills as the listeners of his music, triggered a deep emotional response from others in his generation who suffered from drug addiction, depression, chronic illness, and childhood trauma, like himself. For many young people in his generation, those characteristics of his personality and the social conditions in which he published his music were as important as the songs he wrote and the tracks he recorded, and this is a big part of the reason why Nirvana’s music had immense and immediate commercial success.

Punk rock, a subgenre of rock that influenced Nirvana, actively resisted the mainstream commercial entertainment industry. According to Dawson Barrett, punk engaged in “direct action” politics and “built its own elaborate network of counter-institutions, including music venues, media, record labels, and distributors,” that acted as “cultural and economic alternatives to corporate entertainment industry,” instead of trying to “[petition] the powerful for inclusion.” These counter-institutions, Barrett argues, “should also be understood as sites of resistance to the privatizing agenda of neoliberalism,” because creating their own institutions and economic circuits was a conscious political choice not to participate in the dominant cultural and economic ideology and social system. Barrett’s article, DIY Democracy: The Direct Action Politics of U.S. Punk Collectives, asserts that punk culture descended from “New Left Principles,” like “consensus-based decision-making, voluntary participation, and relatively horizontal leadership structures,” in direct defiance of the neoliberal ideology that evolved alongside punk in the United States. The punk rock musicians who engaged in “DIY democracy” had no desire to exist within the neoliberal commercial entertainment structure, so they built their own structures. Punk musicians created new modes of living to accompany their art. The development of punk music is also the history of a counterculture openly defiant of materialism, consumer culture, and mainstream political thought.


Václav Havel, photograph by Jiří Jiroutek

The Czechoslovakian rock movement of the 1970s also created “counter-institutions.” The existence of these counter-institutions, and the desire to create modes of living outside the state-accommodated social structures of the communist regime, came to a head in 1976, when the members of a rock band called The Plastic People of the Universe were put on trial and convicted for their music and their concerts. The state believed that the music and the community of musicians around it put Czechoslovakia at risk. The trial, and the verdict, were the catalysts for the creation of the Charter 77 organization in Czechoslovakia. Charter 77, made up of members of the Czech intelligentsia who feared that the state could and would remove their freedoms of speech and assembly, was instrumental in bringing down the communist regime in the Velvet Revolution of 1989. Václav Havel, a founding member of Charter 77 and a spectator at the trial of The Plastic People of the Universe, wrote in his highly influential essay “The Power of the Powerless” that the trial pitting the communist judicial system against the rock band was a confrontation between “two differing conceptions of life.” The state feared that the rock band’s music and concerts could undermine the solidarity and the morality of the state. Havel criticized the state’s viewpoint and actions as an “attack on the very notion of living within the truth, on the real aims of life.” The state, according to Havel, acting from fear, took judicial measures to prevent the rock band from living and acting according to principles of freedom of expression and individual choice, because the very act of “living within truth” put the rigid totalitarian ideology of the Czechoslovakian state into question.

Havel identified that The Plastic People of the Universe had created their own counter-institutions, and he defined them as a “parallel polis.” Within the Czechoslovakian state structure, Havel theorized, rock bands, and dissident groups in general, created their own organizational and economic structures in order to perpetuate their existence, and these “parallel polis” structures developed naturally as a response to the rigidity and limitations of communist Czechoslovakia. Creating and maintaining these “parallel polis” structures was the only way that these musicians could live out their truth, playing the music that they wanted to play and living out the communal experiences that they wanted to have. The parallel polis that The Plastic People of the Universe created had two enormous effects on Czechoslovakian society: 1) it forced the state to take judicial action against an entity that it perceived as a threat, which began a new crackdown on elements in Czechoslovakia that the state deemed subversive, and 2) it inspired part of the intelligentsia to found the Charter 77 organization in order to protect the rights of free speech and criticism. The Plastic People of the Universe inspired immediate action on both sides of a political conflict. While Cobain embodied a generational conflict, The Plastic People of the Universe embodied a political one.

The ontologies of rock music developed by Gracyk, Kania, and Bruno all abstract rock music from the social and political conditions that surround the art. At best this is incomplete; at worst it is misleading. The examples above elucidate rock music’s tendency to embody social and political conflict and, in the case of The Plastic People of the Universe, to inspire action on both sides of a political conflict. Kurt Cobain became a spokesman for his generation not only because of his ability to write popular songs and record popular tracks, but also because he was seen by some as a martyr-like figure. Punk rock musicians often rejected mainstream consumer culture and replaced it with counter-institutions and grassroots organization in order to avoid working within the neoliberal economic system. These aspects of rock music are as important to our understanding of its history as the fact that the art form is primarily mediated from the artist to the recipient through sound recording. To move toward a more comprehensive ontology of rock, the cultural and political symbolism that rock music and its practitioners embody should be taken into account.

Jake Newcomb is an MA student in the Rutgers History Department, and a musician. His essays on his personal experience with music can be found at jakenewcomb.tumblr.com

Personal Memory and Historical Research

By Contributing Editor Pranav Kumar Jain


Eric Hobsbawm, Interesting Times (2002)

During a particularly bleak week in the winter of 2013, I picked up a copy of Eric Hobsbawm’s modestly titled autobiography Interesting Times: A Twentieth-Century Life (2002), perhaps under the (largely correct) impression that the sheer energy and power of Hobsbawm’s prose would provide a stark antidote to the dullness of a Chicago winter. I had first encountered Hobsbawm the year before, when he died a day before my first history course in college. The sadness of the news hung heavy over the initial course meeting, and I was curious to find out more about the historian who had left such a deep impression on my professor and several classmates. Over the course of the next year or so, I read through several of his most important works, and ending with his autobiography seemed like a logical way of contextualizing his long life and rich corpus.

Needless to say, Interesting Times was an absolutely riveting read. Hobsbawm’s attempt to bring his unparalleled observational skills and analytical shrewdness to bear on his own life and career revealed great adventures and strong convictions. Yet throughout the book, apart from marveling at his encounters with figures like the gospel singer and civil rights activist Mahalia Jackson, I was most struck by what can best be described as the intersection of historical technique and personal memory. Though much of the narrative is written from his prodigious memory, Hobsbawm regularly references his own diary, especially when discussing his days as a Jewish teenager in early 1930s Berlin and then as a communist student in Cambridge. In one instance, it allows his later self to understand why he didn’t mingle with his schoolmates in mid-1930s London (his diary indicates that he considered himself intellectually superior to the whole lot). In another, it helps him chart, at least in his view, the beginnings of peculiarly British Marxist historical interpretations. Either way, I was fascinated by his reading of what counts as a primary source written by himself. He naturally brought the historian’s skepticism to this unique primary source, repeatedly questioning his own memory against the version of events described in the diary and vice versa. This intermixing of personal memory with the historian’s interrogation of primary sources has long stayed with me, and I have repeatedly sought out similar examples since then.

In recent years, there has been a remarkable flowering of memoirs and autobiographies written by historians. Amongst others, Carlos Eire’s and Sir J. H. Elliott’s memoirs stand out. Eire’s unexpectedly hilarious but ultimately depressing tale of his childhood in Cuba is a moving attempt to recover the happy memories long buried by the upheavals of the Cuban Revolution. In a different vein, Elliott ably dissects the origins of his interest in Spanish history and a Protestant Englishman’s experiences in the Catholic south. The intermingling of past and present is a constant theme. Elliott, for example, was once amazed to hear the response of a Barcelona traffic policeman when he asked him for directions in Catalan instead of Castilian. “Speak the language of the empire [Hable la lengua del imperio],” said the policeman, the exact phrase that Elliott had read in a pamphlet from the 1630s attacking Catalans for not speaking the “language of the empire.” As Elliott puts it, “it seemed as though, in spite of the passage of three centuries, time had stood still” (25). (There are also three memoirs by Sheila Fitzpatrick and one by Hanna Holborn Gray, none of which, regrettably, I have yet had a chance to read.)


Mark Mazower, What You Did Not Tell (2017)

Yet, while Eire’s and Elliott’s memoirs are notably rich in a number of ways, they have little to offer in terms of the Hobsbawm-like connection between historical examination and personal memory that had started me on this quest in the first place. However, What You Did Not Tell (2017), Mark Mazower’s recent account of his family’s life in Tsarist Russia, the Soviet Union, Nazi Germany, France, and the tranquil suburbs of London, provides a wonderful example of the intriguing nexus between historical research and personal memory.

In some ways, it is quite natural that I have come to see affinities between Hobsbawm’s autobiography and Mazower’s memoir. Both are stories of an exodus from persecution in Central and Eastern Europe to the relative safety and stability of London. But the surface-level similarities perhaps stop there. While Hobsbawm, of course, is writing mostly about himself, Mazower is keen to tell the remarkable story of his grandfather’s transformation from a revolutionary Bundist leader in the early twentieth century to a somewhat conservative businessman in London (though, as he learned in the course of his research, the earlier revolutionary connections did not fade away easily, and his grandparents’ household was always a welcome refuge for activists and revolutionaries from across the world). On a deeper level, however, the similarities persist. For one thing, the attempt to measure personal memories against a historical record of some sort is what drives most of Mazower’s inquiries in the memoir.

The memories at work in Mazower’s account are of two kinds. The first, mostly about his grandfather, whom he never met (Max Mazower died six years before his grandson Mark was born), are inherited from others and largely concern silences—hence the title What You Did Not Tell. Though Max Mazower was, amongst other things, a revolutionary pamphleteer in the Russian Empire, he kept quiet about his radical past during his later years. His grandfather’s silence appears to have perturbed Mazower, and it plays a central role in his bid to dig through archives across Europe to uncover traces of his grandfather’s extraordinary life. The other kind of memories, largely about his father, are more personal and urge Mazower to understand how his father came to be the gentle, practical, and affectionate man that Mazower remembered him to be. Naturally, in the course of phoning old acquaintances, acquiring information through historian friends with access to British Intelligence archives, and poring over old family documents such as diaries and letters, Mazower’s memories were both confirmed and challenged.


Mark Mazower

In the case of his grandfather, while Mazower is able to solve quite a few puzzles through expert archival work and informed guessing, some continue to evade satisfactory resolution. Perhaps the thorniest amongst these is the parentage of his father’s half-brother André. Though most relatives knew that André had been Max’s son from a previous relationship with a fellow revolutionary named Sofia Krylenko, André himself came to doubt his paternity later in life, a fact that much disturbed Mazower’s father, who saw André’s doubts as a repudiation of their father and everything he stood for. Mazower’s own research into André’s paternity, through naturalization papers and birth certificates, appears to have both further confused and enlightened him. While he concludes that André’s doubts were most likely unfounded, a tinge of unresolved tension about the matter runs through the pages.

With his father, Mazower is naturally more certain of things. Yet, as he writes towards the beginning of the memoir, after his father’s death he realized that there was much about his life that he did not know. In most cases, he was pleasantly surprised by his discoveries. For instance, he seems to take satisfaction in the fact that, in his younger years, his father had a more competitive streak than he had previously assumed. But reconstructing the full web of his father’s friendships proved to be quite challenging. At one point, he called a local English police station from Manhattan to ask if they could check on a former acquaintance of his father whose phone had been busy for a few days. After listening to him sympathetically, the duty sergeant told him that this was not reason enough for the police to go knocking on someone’s door. Only later did he learn that he had been unable to reach the person in question because she had been living in a nursing home and had died around the time he first tried to get in touch.

The Pandora’s box opened by my reading of Hobsbawm’s autobiography is far from shut. It has led me from one memoir to another, and each has presented a distinct dimension of the question of how historical research intersects with personal memory. In Hobsbawm’s case, there was the somewhat peculiar instance of a historian using a primary source written by himself. Mazower’s multi-layered account, of course, moves across multiple types of memories, interweaving straightforward archival research with personal impressions.

While these different examples hamper any attempt at offering a grand theory of personal memory and historical research, they do suggest an intriguing possibility. The now not-so-incipient field of memory studies has spread its wings, from memories of the Reformation in seventeenth- and eighteenth-century England to testimonies of Nazi and Soviet soldiers who fought at the Battle of Stalingrad. Perhaps it is now time to bring historians themselves under its scrutinizing gaze.

Pranav Kumar Jain is a doctoral student at Yale where his research focuses on religion and politics in early modern England.

Geneva’s Calvin

By Editor Spencer J. Weinreich

How the mightily Protestant have fallen. Almost five hundred years after Geneva deposed its (absentee) bishop and declared for the Reformation, there are nearly three Catholics and two agnostics/atheists for every Protestant Genevan. This, the city of John Calvin, acclaimed by his Scottish follower John Knox as “the most perfect school of Christ that ever was in the earth since the days of the apostles” (qtd. in Reid, 15), revered (and reviled) as the “Protestant Rome.”

Of course, a lot changes in five centuries, as well it should. No longer a fief of the House of Savoy or a satellite dependent on the military might of Bern, Geneva has become the epitome of a global city, home to more international organizations than any other place on the planet. The rich diversity of twenty-first-century Geneva is a transformation undreamt-of in the days of Calvin—and one reflected in the astonishing diversity of the reformed tradition in contemporary Christianity, whose adherents are more likely to hail from Nigeria and Indonesia, Madagascar and Mexico, than from Geneva or Lausanne.

In a sense, I came to Geneva looking for John Calvin, as a student of the Reformation and more particularly of the city Calvin remolded in his three decades as the spiritual leader of Geneva. I came to participate in a summer course offered by the Université de Genève’s Institut d’histoire de la Réformation, whose very existence owes much to the special relationship between this city and the religious transformations of the sixteenth century. I came, too, to immerse myself in Geneva’s exceptional archives—principally the Archives d’Etat de Genève (the cantonal archives) and the Bibliothèque de Genève (a public research library operated by the city government)—to understand how Calvin and the structures he created maintained the vision and the day-to-day realities of a godly city.

So my eyes were peeled for the footprints of the reformer, far more than those of the average visitor to this beautiful city at the far western edge of Lake Léman. And as much as the intervening years have changed the city, I did not need to look too far. For Calvin remains the most iconic (how he would have hated to be called iconic!) figure of whom Geneva can boast (though he was born some four hundred miles to the north and west, in Noyon). One of the city’s most famous tourist attractions is the International Monument to the Reformation, usually known as the Reformation Wall, a massive relief that spans one side of the Parc des Bastions on the grounds of the Université de Genève. The memorial was erected in 1909—the quatercentenary of Calvin’s birth and the 350th anniversary of the university’s foundation—and its centerpiece is a larger-than-life sculpture of Calvin flanked by three of his associates: Guillaume Farel (the French reformer who convinced Calvin to stay in Geneva), Theodore Beza (Calvin’s protégé and successor as the leader of the Genevan church), and Knox. The Reformation Wall offers a curious vision of Calvin: the gaunt, dour likeness of the reformer, in Bruce Gordon’s felicitous phrase, “casts him to look like some forgotten figure of Middle Earth” (147).


The central relief of the Reformation Wall

The Reformation Wall is the most prominent monument to Calvin in Geneva. There is a memorial in the Cimetière des Rois at a grave long thought to be his, but the true location of his remains has been unknown since his death in 1564. There is the Auditoire de Calvin, a chapel next to the cathedral where the great man taught Scripture every morning. There is the Rue Jean-Calvin, running through the very heart of the Old Town. And, in a more diffuse fashion, there is Geneva’s abiding relationship with the Reformation: the Musée International de la Réforme, the Ecumenical Centre housing groups like the World Council of Churches, and the reformed services that take place each Sunday across the city.

Yet what has struck me in the past month has been the extent to which Calvin’s presence in Geneva slips the bounds of Reformation Studies, the early modern period, and even the persona of the reformer himself. Thus a restaurant in the Eaux-Vives neighborhood whose logo traces out the features of its namesake. A learned friend mooted the possibility—which I suspect has not been actualized—of a restaurant run according to Calvinist theology: “One’s choice of dish is not conditional on how good the dish actually is.” Thus, too, Calvinus, a popular local brand of lager. (The man himself was certainly fond of good wine, at least [Gordon, 147].)


Spotted in the gift shop of the Musée International de la Réforme

But perhaps my favorite sighting of Calvin came in the gift shop of the Musée International de la Réforme. In the children’s section, in pride of place among the illustrated biographies and primers on the world’s religions, were several volumes of that sublime theological exploration, Calvin and Hobbes.

It is worth noting that Bill Watterson, the peerless thinker behind said magnum opus, chose the names of his protagonists as deliberate nods to the early modern thinkers (1995, 21–22). And in their turn, scholars have taken Watterson’s pairing as a jumping-off point for analyzing early modern thought: next to one of the albums of Calvin and Hobbes in the gift shop—rather incongruous in the children’s section—was Pierre-François Moreau, Olivier Abel, and Dominique Weber’s Jean Calvin et Thomas Hobbes: Naissance de la modernité politique (Labor et Fides, 2013), one of several scholarly works to juxtapose the authors of the Institutes and Leviathan. Charmingly, the influence occasionally flows in the other direction, as another friend flagged with the delightful art of Nina Matsumoto.


Nina Matsumoto, John Calvin and Thomas Hobbes, used by kind permission of the artist.

Calvin is by no means unique in having his image and persona coopted by the new devotions of consumerism and mass media. Nabil Matar ended his keynote lecture, “The Protestant Reformation in Arabic Sources, 1517–1798,” at this year’s Renaissance Society of America meeting with the use of Luther’s likeness to advertise cold-cuts. Think of Caesar’s salads, King Arthur’s flour, Samuel Adams’s beer.


Calvinus

I hasten to say that this post should not be taken as a lament for the (mythical) theological and intellectual rigor of yesteryear. I may not be thrilled that many Genevans will know Calvin first and foremost as the face on a bottle of lager, but nor would I particularly welcome a reinstatement of the kind of overwhelming public religiosity the man himself enforced on this city. Things change. Calvin’s Geneva is long gone, for better and for worse, and as a historian it is no bad thing that I can—must—look at it from without.

More to the point, the demise of Calvin the theologian is easy to exaggerate. The very fact that Calvin is used to sell beer and to brand restaurants indicates the enduring currency of his cultural profile. Furthermore, countless visitors to the Reformation Wall, to the Musée International de la Réforme, and to Geneva’s historic churches are devout members of one branch or other of the Calvinist tradition, coming to pay their respects to, and to learn something about, the place where their faith took shape. For millions of Christians across the world, John Calvin remains a towering spiritual presence, the forceful and penetrating thinker whose efforts even now structure their beliefs and practices. God isn’t quite dead, certainly not in “the most perfect school of Christ.”

Louis-Sébastien Mercier, between prophetic and historical engagement with time

By Guest Contributor Audrey Borowski


Portrait of Louis-Sébastien Mercier (1740-1814), public domain.

In his novel The Year 2440, published in 1770, the French homme de lettres Louis-Sébastien Mercier (1740-1814) evokes an idealized Paris in the twenty-fifth century. In it, Paris has been rebuilt on a scientific plan, luxury and idleness have been banished, and education is governed by the ideas of Rousseau. The historical past, that “shame for humanity, every page being crowded with [its] crimes and follies,” has effectively been superseded in favour of an atemporal Enlightenment vision of the ancien régime, in such a teleological fashion that the vision of the future delivered by Mercier has merely been “deduced” from the present with which it was already pregnant. Both present and future constitute two distinctive points on the same linear continuum, which merely follows its natural course of indefinite “perfectibility.”

And yet Mercier seems to repudiate this vision almost as soon as he has laid it down on paper, by questioning the conditions of its enforceability, something The Year 2440 had neatly sidestepped through the device of the dream. In the sequel to The Year 2440, The Iron Man, Mercier presents the reader with a rather different picture, one of a Paris in disarray in which an “iron man,” embodying force and justice, wanders the city attempting to wipe out any trace of its barbaric past through moralization, graffiti, and even force when necessary, and to bring it into harmony with the ideal it ought to be. His attempt to enforce the dream of The Year 2440’s enlightened ideal society, and to rewrite the past and script the future, fails miserably and rather farcically when the protagonist is captured and dismembered.

Mercier could not resist the urge to ridicule the demiurgic pretensions of the “fabricators of universes” and self-proclaimed sages of his time, who had imposed the tyranny of their own certainty and dogmas on knowledge. They had confiscated knowledge and obfuscated reality through the dissemination of an inaccessible jargon in the shape of “artificial algebraic formulas,” which reduced the infinite complexity of the world to illusory certitudes. And, in the belief in their own narrative of triumphant progress, they had spawned a new form of occultism and intolerance towards any knowledge other than mathematical. Reality, in its infinite and sensible complexity, was a force made up of infiniments petits which escaped all control and defied any simple causal explanation, and which, in the revolutionary period, had morphed into a “force,” a chemical poetics replete with “fermentations,” “toxins,” and “explosions.”

Ultimately, “the sphere of sentimental moralities was lit up by a sun whose phases were unknown to our calculators.” Against the tyranny of abstract calculations and predictions, Mercier sought to rehabilitate alternative forms of knowledge, linguistic knowledge in particular. He rejected the mathematization of the sensible in favour of the vibrancy of language. For Mercier, writers were the true painters of their time. Language, under the influence of court society, had lost its “colour”; it had become stultified, denatured, and “expressionless,” and had fallen into such a state of decomposition as to be reduced to “exaggerations” and “unintelligible utterances.” The Terror itself had occurred through the “abuse of words,” which, in its dissemination of “magical and sanguinary” jargon, had turned words into “words that kill.” For Mercier, meaninglessness bred violence.

Language needed to be constantly renewed, for only a language embedded in the here and now could enliven us and “make that unknown fibre vibrate” in us. It was not the preserve of grammarians, nor bound by fixed rules, but a “mysterious art-form” which conveyed the “power of our ideas” and the “warmth of our feelings.” Periods of unrest, such as the French Revolution, presented auspicious junctures for the renewal of language. In the preface to the Nouveau Paris, Mercier even advised young authors “to make their own idiom since [they] had to depict the unprecedented.” Mercier’s own endeavour to emancipate language from the hold of the academies culminated in his Néologie of 1801.

In his two following Parisian works, Le Tableau de Paris (1781-1788) and Le Nouveau Paris (1798), Mercier reasserted his materialist take on history. Uchronia gave way to concrete and fragmented day-to-day accounts which sought to convey the urban tumult at the heart of Paris, that gigantic organism binding together rich and poor, criminal and law-abiding. In Le Tableau de Paris, Mercier was engaged in the seemingly impossible task of capturing in writing a contemporaneity which was constantly slipping away from beneath his feet. The city he described was like a palimpsest, a vessel constantly renewed but whose past lay hidden just beneath the surface. In this manner, the history of the city of Paris was first and foremost thought through the multitude of superimposed layers which composed its ground. Its temporal density offered itself to the discerning eye; a walk through the city was a walk through time. Within this historical configuration, the past was fully integrated into the present, into which it continuously flowed. Each corner and monument resurrected ghosts from the past, further blurring the different temporalities at play.

In Le Nouveau Paris (1798), written after the outbreak of the French Revolution, the experience of time had accelerated still further, and even the immediate past had already been historicized: Mercier henceforth walked in a Paris that “no longer [was].” The Revolution had marked a radical rupture and given rise to a new, sharpened historical consciousness. Historical writing henceforth played the role of a funerary rite, acknowledging the past whilst firmly excluding it. The Terror was held at an incommensurable distance and exorcized: the “shadow” of Robespierre was only evoked to be better purged. Crucially, history, in the awesome forces it conjured, acquired an aesthetic dimension: in its admixture of greatness and brutality, it became a source of the sublime.

Cut off from its past and with no discernible future, post-revolutionary Paris was drifting aimlessly in a state of generalized confusion, caught between fantasies of regeneration and prospects of looming destruction. Construction inevitably seemed to spell future destruction, linking ruins, past, and future in a strange and seemingly inexorable “poetic of ruins” before which all civilizations were called to disappear. Pure historicity seemed to have reached its logical conclusion.

Ultimately, neither fiction nor any authority which struck Mercier as sacralized could ever stand the test of time or the fickle judgment of the historical process. Writers could imagine the future from the past and speculate as to what of the past would survive, but ultimately only time would tell. In those circumstances, it was incumbent on us to resist the temptation of “pantheonising lightly.” Renown, like all else, was “beholden to the course of events.” Seeking to decide on ruins or great men from the present was absurd: only the future, looking back on its past, could pass those kinds of judgments. Attempts to overwrite history or to predict it retrospectively through future projections were not only futile but also hubristic.

Yet Mercier’s gaze was never stable, torn between present and future, between a materialist and a prophetic engagement with time. On the basis of his Year 2440, Mercier would, for instance, later style himself the “genuine prophet of the revolution” in the new preface he penned in 1798. He recognized that those projections also expressed a deep yearning toward something that went beyond the purely factual. As he wrote in Adieux à l’année 1789, published in the Annales patriotiques et littéraires, one still had to “dream of public felicity in order to erect its immutable edifice” (« il faut encore rêver la félicité publique, afin d’en bâtir l’édifice immuable »).

The dream acted as a powerful motor in history; it offered the hope of tearing mankind “from the formless chaos” in which it was mired; it provided a horizon toward which mankind “could run with all its might,” ready to “precipitate its march to reach out for it and grasp it.” That The Year 2440, with its “mass of enlightened citizens,” would never suddenly come about was eminently clear; but this would not stop man from dreaming of escaping the historical predicament in which he found himself trapped, and of its final overcoming, once and for all.

Audrey Borowski is an historian of ideas at the University of Oxford.

A Pandemic of Bloodflower’s Melancholia: Musings on Personalized Diseases

By Editor Spencer J. Weinreich


Peter Bloodflower? (actually Samuel Palmer, Self Portrait [1825])

I hasten to assure the reader that Bloodflower’s Melancholia is not contagious. It is not fatal. It is not, in fact, real. It is the creation of British novelist Tamar Yellin, her contribution to The Thackery T. Lambshead Pocket Guide to Eccentric & Discredited Diseases, a brilliant and madcap medical fantasia featuring pathologies dreamed up by the likes of Neil Gaiman, Michael Moorcock, and Alan Moore. Yellin’s entry explains that “The first and, in the opinion of some authorities, the only true case of Bloodflower’s Melancholia appeared in Worcestershire, England, in the summer of 1813” (6). Eighteen-year-old Peter Bloodflower was stricken by depression, combined with an extreme hunger for ink and paper. The malady abated in time and young Bloodflower survived, becoming a friend and occasional muse to Shelley and Keats. Yellin then reviews the debate about the condition among the fictitious experts who populate the Guide: some claim that the Melancholia is hereditary and has plagued all successive generations of the Bloodflower line.

There are, however, those who dispute the existence of Bloodflower’s Melancholia in its hereditary form. Randolph Johnson is unequivocal on the subject. ‘There is no such thing as Bloodflower’s Melancholia,’ he writes in Confessions of a Disease Fiend. ‘All cases subsequent to the original are in dispute, and even where records are complete, there is no conclusive proof of heredity. If anything we have here a case of inherited suggestibility. In my view, these cannot be regarded as cases of Bloodflower’s Melancholia, but more properly as Bloodflower’s Melancholia by Proxy.’

If Johnson’s conclusions are correct, we must regard Peter Bloodflower as the sole true sufferer from this distressing condition, a lonely status that possesses its own melancholy aptness. (7)

One is reminded of the grim joke, “The doctor says to the patient, ‘Well, the good news is, we’re going to name a disease after you.’”

Master Bloodflower is not alone in being alone. The rarest disease known to medical science is ribose-5-phosphate isomerase deficiency, of which only one sufferer has ever been identified. Not much commoner is Fields’ Disease, a mysterious neuromuscular disease with only two observed cases, the Welsh twins Catherine and Kirstie Fields.

Less literally, Bloodflower’s Melancholia, RPI-deficiency, and Fields’ Disease find a curious conceptual parallel in contemporary medical science—or at least the marketing of contemporary medical science: personalized medicine and, increasingly, personalized diseases. Witness a recent commercial for a cancer center, in which the viewer is told, “we give you state-of-the-art treatment that’s very specific to your cancer.” “The radiation dose you receive is your dose, sculpted to the shape of your cancer.”

Put the phrase “treatment as unique as you are” into a search engine, and a host of providers and products appear, from rehab facilities to procedures for Benign Prostatic Hyperplasia, from fertility centers in Nevada to orthodontist practices in Florida.

The appeal of such advertisements is not difficult to understand. Capitalism thrives on the (mass-)production of uniqueness. The commodity becomes the means of fashioning a modern “self,” what the poet Kate Tempest describes as “The joy of being who we are / by virtue of the clothes we buy” (94). Think, too, of the “curated”—as though carefully and personally selected just for you—content online advertisers supply. It goes without saying that we want this in healthcare, to feel that the doctor is tailoring their questions, procedures, and prescriptions to our individual case.

And yet, though we can and should see the market mechanisms at work beneath “treatment as unique as you are,” the line encapsulates a very real medical-scientific phenomenon. In 1998, for example, Genentech and UCLA released Trastuzumab, an antibody extremely effective against (only) those breast cancers linked to the overproduction of the protein HER2 (roughly one-fifth of all cases). More ambitiously, biologist Ross Cagan proposes to use a massive population of genetically engineered fruit flies, keyed to the makeup of a patient’s tumor, to identify potential cocktails among thousands of drugs.

Personalized medicine does not depend on the wonders of twenty-first-century technology: it is as old as medicine itself. Ancient Greek physiology posited that the body was made up of four humors—blood, phlegm, yellow bile, and black bile—and that each person combined the four in a unique proportion. In consequence, treatment, be it medicine, diet, exercise, physical therapies, or surgery, had to be calibrated to the patient’s particular humoral makeup. Here, again, personalization is not an illusion: professionals were customizing care, using the best medical knowledge available.

Medicine is a human activity, and thus subject to the variability of human conditions and interactions. This may be uncontroversial: even when the diagnoses are identical, a doctor justifiably handles a forty-year-old patient differently from a ninety-year-old one. Even a mild infection may be lethal to an immunocompromised body. But there is also the long and shameful history of disparities in medical treatment among races, ethnicities, genders, and sexual identities—to say nothing of the “health gaps” between rich and poor societies and rich and poor patients. For years, AIDS was a “gay disease” or confined to communities of color, while cancer only slowly “crossed the color line” in the twentieth century, as a stubborn association with whiteness fell away. Women and minorities are chronically under-medicated for pain. If medication is inaccessible or unaffordable, a “curable” condition—from tuberculosis (nearly two million deaths per year) to bubonic plague (roughly 120 deaths per year)—is anything but.

Let us think with Bloodflower’s Melancholia, and with RPI-deficiency and Fields’ Disease. Or, let us take seriously the less-outré individualities that constitute modern medicine. What does that mean for our definition of disease? Are there (at least) as many pneumonias as there have ever been patients with pneumonia? The question need not detain medical practitioners too long—I suspect they have more pressing concerns. But for the historian, the literary scholar, and indeed the ordinary denizen of a world full to bursting with microbes, bodies, and symptoms, there is something to be gained in probing what we talk about when we talk about a “disease.”


Colonies of M. tuberculosis

The question may be put spatially: where is disease? Properly schooled in the germ theory of disease, we instinctively look to the relevant pathogens—the bacterium Mycobacterium tuberculosis as the avatar of tuberculosis, the human immunodeficiency virus as that of AIDS. These microscopic agents often become actors in historical narratives. To take one eloquent example, Diarmaid MacCulloch writes, “It is still not certain whether the arrival of syphilis represented a sudden wanderlust in an ancient European spirochete […]” (95). The price of evoking this historical power is anachronism, given that sixteenth-century medicine knew nothing of spirochetes. The physician may conclude from the mummified remains of Ramses II that it was M. tuberculosis (discovered in 1882), and thus tuberculosis (clinically described in 1819), that killed the pharaoh, but it is difficult to know what to do with that statement. Bruno Latour calls it “an anachronism of the same caliber as if we had diagnosed his death as having been caused by a Marxist upheaval, or a machine gun, or a Wall Street crash” (248).

The other intuitive place to look for disease is the body of the patient. We see chicken pox in the red blisters that form on the skin; we feel the flu in fevers, aches, coughs, shakes. But here, too, analytical dangers lurk: many conditions are asymptomatic for long periods of time (cholera, HIV/AIDS), while others’ most prominent symptoms are only incidental to their primary effects (the characteristic skin tone of Yellow Fever is the result of the virus damaging the liver). Conversely, Hansen’s Disease (leprosy) can present in a “tuberculoid” form that does not cause the stereotypical dramatic transformations. Ultimately, diseases are defined through a constellation of possible symptoms, any number of which may or may not be present in a given case. As Susan Sontag writes, “no one has everything that AIDS could be” (106); in a more whimsical vein, no two people with chicken pox will have the same pattern of blisters. And so we return to the individuality of disease. So is disease no more than a cultural construction, a convenient umbrella-term for the countless micro-conditions that show sufficient similarities to warrant amalgamation? Possibly. But the fact that no patient has “everything that AIDS could be” does not vitiate the importance of describing these possibilities, nor their value in defining “AIDS.”

This is not to deny medical realities: DNA analysis demonstrates, for example, that the Mycobacterium leprae preserved in a medieval skeleton found in the Orkney Islands is genetically identical to modern specimens of the pathogen (Taylor et al.). But these mental constructs are not so far from how most of us deal with most diseases, most of the time. Like “plague,” at once a biological phenomenon and a cultural product (a rhetoric, a trope, a perception), so for most of us Ebola or SARS remain caricatures of diseases, terrifying specters whose clinical realities are hazy and remote. More quotidian conditions—influenza, chicken pox, athlete’s foot—present as individual cases, whether our own or those around us, analogized to the generic condition by memory and common knowledge (and, nowadays, internet searches).

Perhaps what Bloodflower’s Melancholia—or, if you prefer, Bloodflower’s Melancholia by Proxy—offers is an uneasy middle ground between the scientific, the cultural, and the conceptual. Between the nebulous idea of “plague,” the social problem of a plague, and the biological entity Yersinia pestis stands the individual person and the individual body, possibly infected with the pathogen, possibly to be identified with other sick bodies around her, but, first and last, a unique entity.


Newark Bay, South Ronaldsay

Consider the aforementioned skeleton of a teenage male, found when erosion revealed a Norse Christian cemetery at Newark Bay on South Ronaldsay (one of the Orkney Islands). Radiocarbon dating can place the burial somewhere between 1218 and 1370, and DNA analysis demonstrates the presence of M. leprae. The team that found this genetic signature was primarily concerned with the scientific techniques used, the hypothetical evolution of the bacterium over time, and the burial practices associated with leprosy.

But this particular body produces its particular knowledge. To judge from the remains, “the disease is of long standing and must have been contracted in early childhood” (Taylor et al., 1136). The skeleton, especially the skull, indicates the damage done in a medical sense (“The bone has been destroyed…”), but also in the changes wrought to his appearance (“the profile has been greatly reduced”). A sizable lesion has penetrated through the hard palate all the way into the nasal cavity, possibly affecting breathing, speaking, and eating. This would also have been an omnipresent reminder of his illness, as would the several teeth he had probably lost (1135).

What if we went further? How might the relatively temperate, wet climate of the Orkneys have impacted this young man’s condition? What treatments were available for leprosy in the remote maritime communities of the medieval North Sea—and how would they interact with the symptoms caused by M. leprae? Social and cultural history could offer a sense of how these communities viewed leprosy; clinical understandings of Hansen’s Disease some idea of his physical sensations (pain—of what kind and duration? numbness? fatigue?). A forensic artist, with the assistance of contemporary symptomatology, might even conjure a semblance of the face and body our subject presented to the world. Of course, much of this would be conjecture, speculation, imagination—risks, in other words, but risks perhaps worth taking to restore a few tentative glimpses of the unique world of this young man, who, no less than Peter Bloodflower, was sick with an illness all his own.