By Contributing Writer John Handel
with Dan Bouk, How Our Days Became Numbered: Risk and the Rise of the Statistical Individual (UChicago, 2015); William Deringer, Calculated Values: Finance, Politics, and the Quantitative Age (HUP, 2018); and Jamie Pietruska, Looking Forward: Prediction and Uncertainty in Modern America (UChicago, 2017).

In his landmark 1990 book The Taming of Chance, Ian Hacking attempted to make sense of the “avalanche of printed numbers” that appeared across Europe during the 19th century (46). Ironically, Hacking was himself participating in an avalanche of work on the history of numbers that proliferated during the 1980s and 1990s. These studies of numbers and the history of quantification ranged widely, from double-entry bookkeeping (Poovey, History of the Modern Fact), probability theory (Daston, Classical Probability), insurance, and statistics (Porter, Rise of Statistical Thinking) to economics (Tooze, Statistics and the German State). However diverse their empirical cases, these studies all asked common questions about when, where, and why numbers and quantification had become so important to the structuring of everyday life and to the operations of the state.
Yet if these studies encompassed a diverse set of quantified subjects, ranging from high mathematical theory to the history of the lottery, they tended to share the same methodological and critical concerns. Written during the ascendance and peak of cultural history, most of these works saw the authority of numbers as concomitant with the rise of state power and governmentality. Following Foucault, these authors stressed how numbers allowed increasingly diverse and numerous populations, as well as multiple spheres of life (the social, the political, and the economic), to be abstracted and made legible to the state and its “expert” managers. Encapsulating this trend was Theodore Porter’s monumental Trust in Numbers: The Pursuit of Objectivity in Science and Public Life, which showed how expert communities such as scientists, engineers, and government bureaucrats used numbers as a technology of trust that asserted their power over broader publics. According to these histories, quantification was the language of the modern state, and abstraction, aggregation, and statistical measurement became the state’s chief tools of domination and governmentality. Modernity had created a quantified world, and all we could do was live in it.
But as Dan Bouk suggests in today’s podcast, perhaps the title of Porter’s book was too good, its argument about the authoritative power wielded by numbers too easy and too seductive. Indeed, the last decade could be characterized as a massive crisis of faith in the ability and power of the technocratic state that Porter et al. so vigorously critiqued in the 1980s and 1990s. The signs range from the inability of mainstream economists and technocrats to predict the massive 2008 financial collapse (as in Tooze, Crashed) to Donald Trump’s utter disregard for and distrust of numbers, whether they be negative polling numbers, government-produced unemployment statistics, or the most basic instrument of quantitative governmentality: the census (Gilman, “Dictatorships and Data Standards”).
Instead of explaining how numbers became authoritative and powerful, then, the new work discussed in this podcast departs radically from earlier approaches by showing the long history of numbers’ fragility. As William Deringer argues in his book Calculated Values: Finance, Politics, and the Quantitative Age, increasingly complex and highly quantified calculations emerged not as an authoritative tool of control but as an aggressive, combative mode of political argument during a period of financial and political revolution in England between 1688 and 1720. In response to Deringer, Bouk has argued that this is not just an origin story or a singular fluke about how quantification works, but something that can help us reframe the longer narrative of quantification and its relationship to politics. Once we attend both to the deeper origins of quantification’s use in political debate and to the current skepticism toward numbers’ authority, the view of quantification as an abstract and objective tool of state power looks less like a totalizing endpoint and more like a particularly contingent configuration of the way numbers came to represent certain types of political argument.
Further provincializing this view of quantification from the 1980s and 1990s, both Jamie Pietruska’s Looking Forward: Prediction and Uncertainty in Modern America and Bouk’s How Our Days Became Numbered: Risk and the Rise of the Statistical Individual aim to recover what Pietruska calls the “everyday epistemology” of quantification. Pietruska shows how, in late 19th-century America, scientific and governmental schemes to regulate predictions of uncertain outcomes such as crop yields and the weather made uncertainty itself a legible category. Yet far from cementing expert power over the masses, the legitimation of uncertainty as a category of analysis allowed the long-policed and oppressed communities of fortune tellers and palm readers to win legal battles against the state. Similarly, in late 19th-century America, Bouk tracks how insurance companies created new categories of risk and went about pricing and quantifying them. These new categories encouraged insurers not just to know the people they insured better, but also to reach more invasively into their lives and instruct them on how to live so as to be less of a risk. Yet these categories, too, were turned against the insurance companies, as African Americans, long discriminated against by insurers, won court cases ending discriminatory practices. Predictably, insurance companies responded by going out of their way not to insure African Americans at all.
In the final section of the podcast, the authors reflect on the methodological commitments that came with their rethinking of the 1980s and 1990s consensus on the history of quantification. They point to how a renewed focus on science and technology studies, along with deep dives into social history, can help produce new histories of numbers better able to understand, and rethink, the current political moment, in which numbers proliferate even as they seem less and less trustworthy.