Transcribed by James J. Miller
I am going to talk today about the technological and infrastructural tendencies in contemporary capitalism, particularly the shift to automation and computer mediation we see everywhere. Up on the screen now is a schematic overview of different sectors of the economy and the technological changes I think are under way in each of them. I think this is all quite closely related to Benjamin Bratton’s work on the black stack, Tiziana Terranova’s work on the red stack, and Nick Dyer-Witheford’s work on what he calls red plenty platforms. All of this is heavily interrelated.
The emphasis in this paper is going to be on the bottom two aspects, macroeconomic forecasting and macroeconomic intervention, and particularly on the modeling aspect. I think it is the nodal point, essentially, in which complexity, the aesthetics of cognitive mapping, and the organization of the decision-making system all come together. This matters because we essentially live within a Promethean world. We have produced a world of immense complexity: vast technical infrastructures spanning the globe, speeding circuits of data and capital, unprecedented interventions into our biology, and the ramifications of all of this on the environment. Yet in the face of this complexity, the left has largely turned away from questions of how to manage it. Instead of grand images of collective self-emancipation through democratic control over this complexity, we have the romanticization of endless insurrection by thinkers like Tiqqun and the Invisible Committee, the ephemeral protest of horizontal movements like Occupy, or the outright retreat into the small scale by well-meaning localists. So it seems to me that the left has largely renounced any sort of Promethean venture. But as Alberto Toscano writes, ‘A diffuse anti-Promethean common sense [seems to] express a dangerous disavowal rather than a hard-won wisdom.’
So in the face of climate change, global infrastructures and increasing complexity, the point is not whether or not we should be Promethean in our intentions. We already are, in terms of shaping the environment, controlling our biological bodies, and manipulating the global economy. The point is rather that this Promethean venture has so far been limited to the interests of capital accumulation, and increasingly to a small portion even of the capitalist class. So the questions and problems raised by Prometheanism and large-scale social change are problems to be faced up to, not rejected out of hand. In light of this, this paper examines how states have responded to complexity, and it argues that they have responded by developing cognitive assemblages: systems of human and non-human elements which function to produce a representation of a complex system.
We can see their existence most prominently within the climate change modeling centers, such as the Hadley Centre in the UK. Here we have the massive power of the world’s leading supercomputers mobilized to filter through billions of data points, collected by a global observation network, all with the aim of simulating global climate change. That said, this use of computation and digital technology for modeling, forecasting and planning is becoming ubiquitous. Insurance companies, for instance, are spending millions of dollars annually on catastrophe modeling, and from this shaping decisions about insurance premiums and the level of capital held as a buffer. You also have it in the field of government energy policy, with the production, distribution, and consumption of energy all represented and understood through sixty to seventy different models. Likewise, financial and corporate investment are increasingly determined by things like value-at-risk models. And the global food system, too, is modeled and manipulated by computational technology.
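To give a concrete sense of the simplest of these tools, here is a minimal sketch of historical-simulation value-at-risk in Python; the returns and portfolio figures are invented purely for illustration.

```python
import numpy as np

# Invented daily portfolio returns (as fractions), for illustration only
rng = np.random.default_rng(0)
returns = rng.normal(loc=0.0003, scale=0.01, size=1000)

portfolio_value = 10_000_000  # hypothetical portfolio size
confidence = 0.99

# Historical-simulation VaR: the loss exceeded on only (1 - confidence)
# of past days, scaled to the portfolio
var_return = np.percentile(returns, (1 - confidence) * 100)
value_at_risk = -var_return * portfolio_value

print(f"1-day {confidence:.0%} VaR: {value_at_risk:,.0f}")
```

A firm would then hold capital as a buffer against losses of roughly this size, which is exactly the premium-and-buffer decision described above.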
So, in other words, what I think you see here is that every fundamental aspect of human society (food, energy, capital, nature and infrastructure) is today represented computationally. The aim of this paper is to focus on these eyes of the state, in particular as they perceive and construct the national and global economy. This paper wants to uncover how particular constructions of the economy came about (how the economy was made visible, essentially), how particular levers over the economy were made possible (how the economy became an object of manipulation), and, more speculatively, what post-capitalist potentials these modeling techniques might have. How could the economy be manipulated for the commons?
The latter forms the normative thrust of this paper. Essentially, if the civilizational deepening of social complexity is mirrored in the expansion of abstract representational technologies, then it is these technologies that make complexity both cognitively tractable and pragmatically manipulable. It is these computational technologies which provide insights into how a post-capitalist society could re-appropriate and rationalize the productive forces of society. And indeed, I think Hayek and von Mises recognized this in the key debates of the 1920s and 30s. [00:55:00] The question of whether we can move beyond the market ultimately turns on whether or not we can intervene in complex systems in a beneficial manner. If we cannot, then decentralized decision making by individuals is the only response. But if we can, if we can intervene in complex systems without making things worse, then all sorts of progressive options open up. Suddenly the question of rationally manipulating economies in the service of the commons becomes a live problem.
Throughout all this we have to recall that the economy is not a natural object, but is instead a social and technical construction. From the Physiocrats’ initial attempt to map the economy in 1758, to the development of national statistics in the early 20th century, and on to the elaborate computational models of the early 21st century, the economy has always been something to be constructed. This construction of the visible economy goes hand in hand with the construction of ways to manipulate that economy. Given that states act upon economies on the basis of their understanding of the system, and that this understanding is broadly, although not completely, embedded in computational macroeconomic models, shifts in these models both reflect and cause shifts in the ways that states manipulate economies. Modeling is therefore a crucial medium bridging conceptual understanding and pragmatic manipulation, and so this paper is interested in the sorts of affordances these models offer and the politics that they embody. That being said, these models are not simple determiners of politics. They instead provide a mechanism for organizing thought about the economy, for suggesting and simulating a range of possible futures, and they are always supplemented by expert judgment. In accomplishing these functions, models employ a type of thinking embodied within machines. In particular, they construct and embody abstract formulations of conceptual systems, operating by combining various elements together in a partially flexible way.
For instance, they link together analytical definitions, empirical regularities, local contexts, universal principles, disciplinary laws and concepts into some sort of consistent whole. These elements, while permitting some flexibility in how they are put together, are nevertheless constrained in terms of how they produce a model. Part of this constraint comes from the medium through which the various elements are linked together: mathematical formalism, for instance, permits some linkages, while the use of paradigmatic examples permits others.
In discussing models, Mary Morgan’s work has been exemplary in demonstrating how model-based reasoning functions. It is in her work that we see how models gain their cognitive power, in particular by embodying two general sets of rules for manipulation. First, there are rules imposed by the material of the model. Second, there are rules imposed by the subject of the model. In the former case, the material can be physical, as with models that create scale versions of the phenomenon, but it can also be ideational, as with models built in a particular algebraic language or a particular programming language. In both cases one is bound by rules governing how one can manipulate such material; one cannot manipulate it in just any way.
The second broad set of rules comes from the subject matter itself: the theoretical concepts and interrelations the model builders have implemented in the technology. The consequence of these two sets of rules is that one can follow a precise pathway through a chain of consequences. On this basis, what gives contemporary computational technology its peculiar power is a capacity not only to organize but also to outsource cognition. While organizing cognition is a virtue in itself, it is when these rules and chains of consequences are outsourced into a computational medium that they take on their uniquely modern power.
With such a representational technology in hand, one can allow the calculative and inferential processes to expand beyond any human capacity. Models therefore solidify rules of thought and ramify conceptual linkages, and this consolidation of particular knowledge is the source of both their power and their limits. In addition, these models perform a social function, namely to convince others. In this regard their construction is also a means to transform a conceptual argument into a medium for propagation and persuasion. This is more than just the argument that numbers give an illusion of certainty and precision. The point is rather that the very path of reasoning is altered, and made more palatable or not, depending on the medium through which it is made. So the point to be taken from all of this is that models condense a set of inferential and material rules into a medium that transforms the persuasiveness of the reasoning. Modeling is fundamentally a question of transforming indifferent matter and social complexity into something that is cognitively tractable. As a result, models have come to play increasingly central roles in the production of knowledge in a complex society.
So how has this operated historically in economic systems? In examining the historical record, I have focused on two aspects. First, how are cognitive assemblages used in practice? The idealised picture in which a model simply outputs policy is overly simple. Second, what are the political consequences of certain technical characteristics of these assemblages? What, for instance, are the political consequences of log-linearisation in dynamic stochastic general equilibrium models?
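To make the log-linearisation example concrete, here is the standard textbook manoeuvre in LaTeX form; the Cobb-Douglas condition is my illustration rather than anything from a specific central bank model. Each variable is replaced by its log-deviation from a steady-state value, turning a nonlinear system into a linear one:

```latex
\hat{x}_t \equiv \log x_t - \log \bar{x} \;\approx\; \frac{x_t - \bar{x}}{\bar{x}},
\qquad\text{so that, e.g.,}\qquad
y_t = A_t k_t^{\alpha} n_t^{1-\alpha}
\;\Longrightarrow\;
\hat{y}_t = \hat{A}_t + \alpha \hat{k}_t + (1-\alpha)\hat{n}_t .
```

The approximation is only valid for small deviations around a single steady state, which is one way a technical characteristic carries political consequences: large departures from equilibrium, the stuff of crises, are assumed away by construction.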
It is quite common to believe that we live in a post-planning era: communism has failed, and neo-liberal doctrines of privatization and competition have taken over. But planning still exists in significant areas of even the most neo-liberal capitalist economy, on at least three levels: state-led investment, firm-led planning, and state-led monetary interventions. It is the third form of planning that is the focus of this analysis: the use of modeling and technical tools to construct an image of the economy and to generate levers for manipulating it through monetary instruments. In particular, the focus is on the use of theory-heavy models called dynamic stochastic general equilibrium (DSGE) models. Central banks also use a variety of other representational techniques and technologies: things like vector auto-regression models, which are highly statistical, large-scale macro-econometric models, leading indicators, and economic surveys.
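For contrast with the theory-heavy DSGE approach, here is a minimal sketch of the statistical alternative just mentioned, a vector auto-regression estimated by ordinary least squares; the two-variable setup and the data are invented for illustration.

```python
import numpy as np

# Invented quarterly data: columns = [output growth, inflation]
rng = np.random.default_rng(1)
data = rng.normal(size=(200, 2))

# VAR(1): y_t = c + A y_{t-1} + e_t, estimated by OLS
Y = data[1:]                                       # y_t
X = np.hstack([np.ones((len(Y), 1)), data[:-1]])   # [1, y_{t-1}]
coeffs, *_ = np.linalg.lstsq(X, Y, rcond=None)
c, A = coeffs[0], coeffs[1:].T                     # intercepts, lag matrix

# One-step-ahead forecast from the latest observation
print("next-quarter forecast:", c + A @ data[-1])
```

Note the contrast: nothing here encodes any economic theory at all. The model simply extrapolates observed correlations, which is why such tools sit alongside the theory-heavy DSGE models rather than replacing them.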
That being said, DSGE models have been the workhorse of most forecasting. Notably though, as a side-note, most private firms, such as investment banks, do not actually use DSGE models to model the economy; they tend to use DSGE models only when they want to forecast what the central bank is likely to do. Instead, anecdotal evidence that I have gathered suggests that many investment banks take a much wider approach to modeling, incorporating models based on heterodox economists like Kalecki, Minsky and Godley. So it is quite interesting that the private firms, the ones actually making money off the markets and the economy, draw on heterodox economists to model what is going to happen.
So, all that being said, returning to central banks: what happens when we look at the historical record of central bank modeling? We see the representation of the economy and the construction of levers over the economy go hand in hand. When stagflation hit the mature economies in the 1970s, government intervention, primarily fiscal, was initially justified by the classical Keynesian models of the time. But with conceptual and technological developments, the 1980s brought a series of monetarist models which constructed new levers over the economy. This is partly why they were adopted at the time: they gave policy makers new levers with which to try to combat stagflation. As conceptual and technological development continued, new levers were constructed via the New Keynesian models, and by the mid-2000s DSGE models had been widely taken up by central banks.
There are a number of problems with DSGE models, and my longer paper goes through some of these, but here I just want to point out one: the notion of equilibrium that is central to DSGE models. In the practical use of these models, this means a single equilibrium rather than multiple equilibria. On this basis, the research question for a DSGE modeler becomes: what explains the fluctuations in the system? In these models there are only external shocks, which are responsible for upsetting the otherwise perfect balance of the economy. By contrast, post-Walrasian models emphasise that the economy is subject to chaotic behavior, while something like the Minskyan approach would emphasise the tendency toward financial instability, and the Marxist approach, of course, would emphasise the tendency toward crisis which is built into the economy.
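To illustrate what ‘only external shocks’ means in practice, here is a toy simulation of the kind of process a log-linearised DSGE model assigns to deviations from the steady state; the parameter values are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
rho, periods = 0.9, 100   # illustrative shock persistence, |rho| < 1

x = np.zeros(periods)     # deviation from the single steady state (zero)
for t in range(1, periods):
    shock = rng.normal(scale=0.01)  # exogenous disturbance
    x[t] = rho * x[t - 1] + shock   # mean-reverting by assumption

# Whatever the shocks, the deviation always decays back toward zero:
# the model cannot generate endogenous crises or a shift to another
# equilibrium; fluctuations are only ever temporary departures.
print("final deviation:", x[-1])
```

The Minskyan or Marxist stories would require dynamics that push away from equilibrium, something like |rho| >= 1 or nonlinear feedback, and that is precisely what this setup rules out in advance.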
The assumption of a single equilibrium within DSGE models already negates the possibility of even asking these questions. [01:05:00] In the latter cases, the Marxist and Minskyan approaches, the question becomes not one of explaining fluctuations, but instead of building institutions, rules and programs to constrain a naturally chaotic system, or of building social movements to move to a new economic system. The reliance on DSGE models essentially forbids all these questions from the outset for the central bank, and thus determines what it can and cannot do in terms of intervening. Now, in discussing all of this, I do not want to say that modeling is a panacea or that it is independent of human cognitive systems. This is not a technical solution to a political problem; rather, these models are situated in a community of socio-technical reasoning. All the models invoke exogenous variables, and they require implicit and explicit assumptions. Moreover, these models are subject to constant conceptual revision as the economy changes: to give one example, the Bank of England recently spent £3.4 million on building a new model. These things are being changed all the time. Models are therefore an important part of a socio-technical reasoning process in setting monetary policy. And as every personal account from a central banker attests, these models act as hubs around which discussion centers, in iterative processes of forecasting and discussion along with input from the expert judgments of the members.
The role of expertise comes in at a number of places in this socio-technical reasoning process. The Bank of England, for instance, incorporates higher-frequency information such as business surveys and other leading indicators, and it does so through the judgments of the committee rather than through the core model itself. Human reasoning comes in here via the summarization of variables that cannot yet be computed. And since no single model is a perfect representation, there is also model uncertainty. Part of this uncertainty is minimized through the use of multiple models, similar to the recent shift to ensemble modeling in the climate sector. If you use a number of models and they agree on the implications of a shift, it can be considered that there is little uncertainty; if they disagree, then obviously there is quite a bit of uncertainty about the models. Further estimation of how uncertain the models are depends on human judgment, relying on past experience of the idiosyncrasies of a particular model, such as areas where it often over- or underestimates effects. I think this is a second way that human judgment, human cognitive reasoning, gets incorporated into these technical models.
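A crude sketch of that ensemble logic, with invented numbers: run the same hypothetical policy shift through several models and read the spread of their answers as a first gauge of model uncertainty.

```python
import numpy as np

# Invented outputs: each model's predicted effect (percentage points)
# of the same hypothetical policy shift
forecasts = {
    "model_a": 0.42,
    "model_b": 0.45,
    "model_c": 0.40,
    "model_d": 0.95,  # an outlier a committee would interrogate
}

values = np.array(list(forecasts.values()))
print(f"ensemble mean: {values.mean():.2f}, spread: {values.std():.2f}")

# Low spread: models agree, so little model uncertainty on this question.
# High spread: expert judgment must arbitrate, drawing on the known
# idiosyncrasies of each model.
```

The numbers and model names here are placeholders; the point is only the shape of the reasoning, in which agreement across models stands in for confidence and disagreement hands the question back to human judgment.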
Lastly, and perhaps most importantly, human reasoning interacts with technical models at the level of assumptions and the setting of parameters. DSGE models consistently require parameterization, the fixing of the values of certain variables. These parameters are usually set from historical data, yet key information is also derived from expertise, microeconomic research and Bayesian statistics (a minimal sketch of this follows below). So human reasoning has an important role to play here as well: model-based reasoning is ultimately incorporated into a larger socio-technical form of reasoning. It is here, I think, that the study of modeling, of socio-technical reasoning and the manipulation of the economy, links up with the problematic of a post-capitalist society. We have seen the various ways in which the economy is being automated and augmented by technological developments, and we know from the philosophy of technology that technology is not neutral. Embedded in the design and functions of technologies are particular political affordances, even if these affordances are not exhausted by any particular political orientation. But if these technologies are not neutral, then they become a terrain of contestation for any non-capitalist politics. The design and function of individual technologies and the large-scale infrastructures of society are all open to being mobilized in different, post-capitalist ways.
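As a footnote to the parameterization point above, here is a minimal sketch of the Bayesian element, with an invented prior and invented data: a prior distribution standing in for expert judgment is combined with the likelihood of historical data to yield a posterior over a single parameter.

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented 'historical' data from an AR(1) process with true rho = 0.8
true_rho, n, sigma = 0.8, 200, 0.1
x = np.zeros(n)
for t in range(1, n):
    x[t] = true_rho * x[t - 1] + rng.normal(scale=sigma)

# Grid posterior for rho: prior (expert judgment) times likelihood (history)
grid = np.linspace(0.01, 0.99, 99)
prior = np.exp(-0.5 * ((grid - 0.5) / 0.2) ** 2)  # loose prior centred on 0.5
log_lik = np.array([
    -0.5 * np.sum((x[1:] - r * x[:-1]) ** 2) / sigma**2
    for r in grid
])
posterior = prior * np.exp(log_lik - log_lik.max())
posterior /= posterior.sum()

print("posterior mean for rho:", (grid * posterior).sum())
```

With two hundred observations the data dominate and the posterior mean lands near 0.8, but with scarce data the expert prior does real work, which is exactly where human judgment enters the parameterization.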
Given that planning in economies is increasingly ubiquitous, the question to be raised is: what can be done with this state of affairs? Marx famously argued that the increased centralization of productive forces would facilitate the transition to communism. Can the same claim be made in relation to computational models, models that manipulate the economy in subtle and significant ways? In other words, can these models be put towards collective ends? My wager here is that this is not only possible but necessary. Recognizing the limitations and political nature of current efforts, the open research program involved here is how to build and re-purpose cognitive assemblages to post-capitalist and democratic ends. Ultimately the aim is to recover one of the traditional and now largely forgotten arguments for communism: that it was not only a more equal and just society, but also a more rational, efficient and productive society. So, in the end, socialism or barbarism.