Category Archives: Economics

Heideggerian Economics

The Fabric of Time, by nataliekelsey

[A pdf version is available here]

Lately I’ve had the poor judgment to start reading Heidegger’s Being and Time. I’ve been putting it off for years now, largely because it has no connection with the kind of philosophy I’m interested in. Yet, among my philosophical acquaintances there is a clear line between those who have read Heidegger and those who haven’t—working through this book really does seem to let people reach a whole new level of abstraction.

To my great surprise, in Being and Time (1927: 413), Heidegger remarks:

[E]ven that which is ready-to-hand can be made a theme for scientific investigation and determination… The context of equipment that is ready-to-hand in an everyday manner, its historical emergence and utilization, and its factical role in Dasein — all these are objects for the science of economics. The ready-to-hand can become the ‘Object’ of science without having to lose its character as equipment. A modification of our understanding of Being does not seem to be necessarily constitutive for the genesis of the theoretical attitude ‘towards Things’.

Curiously, no other sources I’ve found mention this excerpt. More well-known is a passage from “What are Poets for?” in which Heidegger denounces marketization (1946: 114-5):

In place of all the world-content of things that was formerly perceived and used to grant freely of itself, the object-character of technological dominion spreads itself over the earth ever more quickly, ruthlessly, and completely. Not only does it establish all things as producible in the process of production; it also delivers the products of production by means of the market. In self-assertive production, the humanness of man and the thingness of things dissolve into the calculated market value of a market which not only spans the whole earth as a world market, but also, as the will to will, trades in the nature of Being and thus subjects all beings to the trade of a calculation that dominates most tenaciously in those areas where there is no need of numbers.

Thus it’s very easy to appeal to Heidegger’s authority to support various Leftist clichés about capitalism. It’s far harder to bring Heidegger’s thought to bear on actual economic modelling—its ‘worldly philosophy’. In this post I’ll survey several of the less hand-wavey attempts in this direction. My main question is whether a Heideggerian economics is possible at all, and if so, whether there is a specific subfield of economics to which Heideggerian philosophy especially lends itself. My overview of each specific thinker sticks closely to the source material, as I’m hardly fluent enough in Heideggerese to give a synoptic overview or clever reinterpretation. I don’t expect to ever develop a systematic interpretation of my own, but I hope this post might prove inspiring to some economist with philosophical tastes far different from my own.

1. Schalow on ‘The Question of Economics’

Schalow’s approach is quite refreshing because he is an orthodox Heideggerian who nonetheless takes the viewpoint of mainstream economics, as opposed to Heideggerian Marxism such as Marcuse’s One-Dimensional Man. Schalow’s question is at once simpler and deeper: whether Heidegger’s thought leaves any room for economics. Here, ‘economics’ is minimally defined as theorizing the production and distribution of goods to meet human needs. (So in principle, then, this applies to any sort of economics, classical or modern.) The most obvious answer would seem to be ‘No’ — he notes: “It is clear that Heidegger refrains from ‘theorizing’ of any kind, which for him constitutes a form of metaphysical rationality” (p. 249).

Thus, Schalow takes a more abstract route, viewing economics simply as “an inescapable concern of human being (Dasein) who is temporally and spatially situated within the world” (p. 250). Schalow advocates a form of ‘chrono-economics’, where ‘scarcity’ is framed through time as numeraire. In a sense, this operates between ‘economic theory’ as a mathematical science and as a “humanistic recipe for achieving social justice” (p. 251); instead, “economic concerns are an extension of human finitude” (p. 250). Schalow makes various pedantic points about etymology which I’ll spare the reader, except for this one: “the term ‘logos’ derives its meaning from the horticultural activity of ‘collecting’ and ‘dispersing’ seeds” (p. 252).

It’s natural to interpret Being & Time as “lay[ing] out the pre-theoretical understanding of the everyday work-world in which the self produces goods and satisfies its instrumental needs” (p. 253). Similarly, “work is the self’s way of ‘skillful coping’ in its everyday dealings with the world” (p. 254). Hence Heidegger emphasizes production — which he will later associate with technē — over exchange, which he associates with the ‘they-self’ (p. 254). Yet, Schalow points out, both production and exchange can be construed as a form of ‘care’. Care, in turn, is configured by temporality, which forces us to prioritize some things over others (p. 256).

“The paradox of time…is the fact that it is its transitoriness which imparts the pregnancy of meaning on what we do” (p. 257). Therefore, “time constitutes the ‘economy of all economies’,” in that “temporality supplies the limit of all limits in which any provision or strategy of allocation can occur” (ibid.). We can go on to say that “time economizes all the economies, in defining the horizon of finitude as the key for any plan of allocation” (p. 258).

In his later thought, Heidegger took on a more historical view, arguing that the structure of Being was experienced differently in different epochs. In our own time, the strongest influence on our notion of Being is technology. Schalow gives an interesting summary (p. 261):

The advance of technology…occurs only through a proportional ‘decline’ in which the manifestness of being becomes secondary to the beings that ‘presence’ in terms of their instrumental uses.

In an age where the economy is so large as to be inconceivable except through mathematical models, one can say that “the modern age of technology dawns with the reduction of philosophical questions to economic ones” (p. 260). Thus, Heidegger is more inclined to view economics as instrumental (technē) rather than as “the self-originative form of disclosure found in art (poiēsis).” Yet, rather than merely a quantitative “artifice of instrumentality,” it is also possible to interpret economics in terms of poiēsis, as “a vehicle by which human beings disclose their immersion in the material contingencies of existence” (p. 262). Economics thus becomes “a dynamic event by which human culture adjusts to ‘manage’ its natural limitations” (ibid.). Framing economics in terms of temporality (as ‘chrono-economics’) allows it to remain open to Being, and thereby “to connect philosophy with economics without effacing the boundary between them” (p. 263).

Read the rest of this entry

Avant-Garde Philosophy of Economics

by Tatiana Plakhova (2011)

[A pdf version is available here]

To most people, the title of this post is a triple oxymoron. Those left thoroughly traumatized by Econ 101 in college share their skepticism with those who have dipped their toe into hybrid fields like neuroeconomics and found them to be a synthesis of the dullest parts of both disciplines. For the vast, vast majority of cases, this sentiment is quite right: ‘philosophy of economics’ tends to be divided between heterodox schools of economics whose writings have entirely decoupled from economic formalism, and—on the other side of the spectrum—baroque econophysicists with lots to say about intriguing things like ‘quantum economics’ and negative probabilities via p-adic numbers, but typically within a dry positivist framework. As for the middle-ground material, a 20-page paper typically yields two or three salvageable sentences, if even that. Yet, as anyone who follows my Twitter knows, I look very hard for papers that aren’t terrible—and eventually I’ve found some.

Often the ‘giants’ of economic theory (e.g. Nobel laureates like Harsanyi or Lucas) have compelling things to say about methodology, but to include them on this list seems like cheating, so we’ll instead keep to scholars whom most economists have never heard of. We also—naturally—want authors who write mainly in natural language, and whose work is therefore accessible to readers who are not specialists in economic theory. Lastly, let’s strike from the list those writers who do not engage directly with economic formalism itself, but only with ‘the economy’. This last qualification is the most draconian of the lot, and manages to purge the philosophers of economics (e.g. Mäki, McCloskey) who tend to be the best known.

The remaining authors make up the vanguard of philosophy of economics—those who alchemically permute the elements of economic theory into transdisciplinary concoctions seemingly more at home in a story by Lovecraft or Borges than in academia, and who help us ascend to levels of abstraction we never could have imagined. Their descriptions are ordered for ease of exposition, building from and often contradicting one another. For those who would like to read more, some recommended readings are provided under each entry. I hope that readers will see that people have for a long time been thinking very hard about problems in economics, and that thinking abstractly does not mean avoiding practical issues.

Category Theory, by j5rson

M. Ali Khan

Khan is a fascinating character, and stands out even among the other members of this list: by training he is a mathematical economist, familiar with some of the highest levels of abstraction yet achieved in economic theory, but at the same time an avid fan of continental philosophy, liberally citing sources such as De Man (a singular choice, even within the continental crowd!), Derrida, and similar figures on the more literary side of theory, such as Ricoeur and Jameson. It may be helpful to contrast Khan with Deirdre McCloskey, who has written a couple of books on writing in economics: McCloskey uses undergraduate-level literary theory to look at economics, which (let’s face it) is a fairly impoverished framework, forcing her to cut a lot of corners and sand away various rough edges that are very much worth exploring. An example is how she considers the Duhem-Quine thesis to be in her own camp, which she proudly labels ‘postmodern’—yet, just about any philosopher you talk to will consider this completely absurd: Quine was as modernist as they come. (Moreover, in the 30 years she had between the first and second editions, it appears she never bothered to read the source texts.) Khan, by contrast, has thoroughly done his homework and then some.

Khan’s greatest paper is titled “The Irony in/of Economic Theory,” where he claims that this ‘irony’ operates as a (perhaps unavoidable) literary trope within economic theory as a genre of writing. Khan likewise draws from rhetorical figures such as synecdoche and allegory, and it will be helpful to start at a more basic level than he does and build up from there. The prevailing view of the intersection of mathematics and literary theory is that models are metaphors: this is due to two books by Max Black (1962) and Mary Hesse (1963) whose main thesis was exactly this point. While this is satisfying, and readily accepted by theorists such as McCloskey, Khan does not content himself with this statement, and we’ll shortly see why.

Consider: a metaphor compares one thing to another on the basis of some kind of structural similarity, and this is a very useful account of, say, models in physics, which use mathematical formulas to adequate certain patterns and laws of nature. However, in economics it often doesn’t matter nearly as much who the particular agents are that are depicted by the formulas: the Prisoner’s dilemma can model the behaviour of cancer cells just as well as it can model human relations. If we change the object of a metaphor (e.g. cancer cells → people), it becomes a different metaphor; what we need is a kind of rhetorical figure where it doesn’t matter if we replace one or more of the components, provided we retain the overall framework. This is precisely what allegory does: in one of Aesop’s fables, say “The Tortoise and the Hare,” we can replace the tortoise by a slug and the hare by a grasshopper, but nobody would consider this to be an entirely new allegory—all that matters here is that one character is slow and the other is fast. Moreover, we can treat this allegory itself as a metaphor, as when we compare an everyday situation to Aesop’s fable (which was exactly Aesop’s point), which is why it’s easy to treat economic models simply as metaphors, even though their fundamental structure is allegorical.

The reason this is important is that Khan takes this idea to a whole new level of abstraction: in effect, he wants to connect the allegorical structure of economic models to the allegorical nature of economic texts—in particular, Paul Samuelson’s Foundations of Economic Analysis, which begins with the enigmatic epigraph “Mathematics is a language.” For Khan: “the Foundations is an allegory of economic theory and…the epigraph is a prosopopeia for this allegory” (1993: 763). Since I had to look it up too, prosopopeia is a rhetorical device in which a speaker or writer communicates to the audience by speaking as another person or object. Khan is quite clear that he finds Samuelson’s epigraph quite puzzling, but instead of just saying “It’s wrong” (which would be tedious) he finds a way to détourne it that is actually quite clever. He takes as a major theme throughout the paper the ways that the same economic subject-matter can be depicted in different ways by using different mathematical formalisms. Now, it’s fairly trivial that one can do this, but Khan focuses on how in many ways certain formalisms are observationally equivalent to each other. For instance, he gives the following chart (1993: 772):

Correspondence between probability and measure-theoretic terms (Khan, 1993: 772)
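
The standard dictionary between the two formalisms runs roughly along these lines (this is the textbook correspondence, not necessarily Khan’s exact rows):

  • sample space Ω ↔ a measure space (Ω, Σ, μ) with μ(Ω) = 1
  • event ↔ measurable set A ∈ Σ
  • probability P(A) ↔ measure μ(A)
  • random variable X ↔ measurable function f : Ω → ℝ
  • expectation E[X] ↔ integral ∫ f dμ
  • ‘almost surely’ ↔ ‘almost everywhere’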

That is to say, to present probabilistic ideas using the formalism of measure theory doesn’t at all affect the content of what’s being said: it’s essentially just using the full toolbox of real analysis instead of only set notation. What interests Khan here is how these new notations change the differential relations between ideas, creating brand new forms of Derridean différance in the realm of meaning—which, in turn, translate into new mathematical possibilities as our broadened horizons of meaning let us develop brand new interpretations of things we didn’t notice before. Khan’s favorite example here is nonstandard analysis, which he claims ought to make up a third column in the above chart, as probabilistic and measure theoretic concepts (and much else besides) can likewise be expressed in nonstandard terms. To briefly jot down what nonstandard analysis is: using mathematical logic, it is possible to rigorously define infinitesimals in a way that is actually usable, rather than simply gestured to by evoking marginal quantities. While theorems using such nonstandard tools often differ greatly from ‘standard’ theorems, it is provable that any nonstandard theorem can be proved standardly, and vice versa; yet, some theorems are far easier to prove nonstandardly, whence its appeal (Dauben, 1985). In economics, for example, an agent can be modelled as an infinitesimal quantity, which is handy for general equilibrium models where we care less about particulars than about aggregate properties, and part of Khan’s own mathematical work in general equilibrium theory does precisely this.

To underscore his overall point, Khan effectively puts Samuelson’s epigraph through a prism: “Differential Calculus is a Language”, “Convex Analysis is a Language”, “Nonsmooth Analysis is a Language”, and so on. Referring to Samuelson’s original epigraph, this lets Khan “interpret the word ‘language’ as a metonymy for the collectivity of languages” (1993: 768), which lets him translate it into: “Mathematics is a Tower of Babel.” Fittingly, in order to navigate this Tower of Babel, Khan (following Derrida) adopts a term originating from architecture: namely, the distinction between keystone and cornerstone. A keystone is a component of a structure that is meant to be the center of attention, and clinches its aesthetic ambiance; however, a keystone has no real architectural significance, but could be removed without affecting the rest of the structure. On the other hand, a cornerstone is an unassuming, unnoticed element that is actually crucial for the structural integrity of the whole; take it away and the rest goes crashing down.

Read the rest of this entry

The Shapley Value: An Extremely Short Introduction

at hierophants of escapism, by versatis

[For those who find the LaTeX formatting hard to read: pdf version + LaTeX version]

If we view economics as a method of decomposing (or unwriting) our stories about the world into the numerical and functional structures that let them create meaning, the Shapley value is perhaps the extreme limit of this approach. In his 1953 paper, Shapley noted that if game theory deals with agents’ evaluations of choices, one such choice should be the game itself—and so we must construct “the value of a game [that] depends only on its abstract properties” (1953: 32). By embodying a player’s position in a game as a scalar number, we reach the degree zero of meaning, beyond which any sort of representation is severed entirely. And yet, this value recurs over and over throughout game theory, under widely disparate tools, settings, and axiomatizations. This paper will outline how the Shapley value’s axioms coalesce into an intuitive interpretation that operates between fact and norm, how the simplicity of its formalism is an asset rather than a liability, and its wealth of applications.

Overview

Cooperative game theory differs from non-cooperative game theory not only in its emphasis on coalitions, but also by concentrating on division of payoffs rather than how these payoffs are attained (Aumann, 2005: 719). It thus does not require the degree of specification needed for non-cooperative games, such as complete preference orderings by all the players. This makes cooperative game theory helpful for situations in which the rules of the game are less well-defined, such as elections, international relations, and markets in which it is unclear who is buying from and selling to whom (Aumann, 2005: 719). Cooperative games can, of course, be translated into non-cooperative games by providing these intermediate details—a minor industry known as the Nash programme (Serrano, 2008).

Shapley introduced his solution concept in 1953, a few years after John F. Nash introduced Nash Equilibrium in his doctoral dissertation. One way of interpreting the Shapley value, then, is to view it as more in line with von Neumann and Morgenstern’s approach to game theory, specifically its reductionist programme. Shapley introduced his paper with the claim that if game theory deals with agents’ evaluations of choices, one such choice should be the game itself—and so we must construct “the value of a game [that] depends only on its abstract properties” (1953: 32). All the peculiarities of a game are thus reduced to a single vector: one value for each of the players. Another common solution concept for cooperative games, the Core, is set-valued, with the corollary that the core can be empty; the Shapley value, by contrast, always exists, and is unique.

To develop his solution concept, Shapley began from a set of desirable properties taken as axioms:

  • Efficiency: \sum_{i\in{N}} \Phi_i(v) = v(N).
  • Symmetry: If v(S ∪ {i}) = v(S ∪ {j}) for every coalition S containing neither i nor j, then ϕi(v) = ϕj(v).
  • Dummy Axiom: If v(S) = v(S ∪ {i}) for every coalition S not containing i, then ϕi(v) = 0.
  • Additivity: If u and v are characteristic functions, then ϕ(u + v) = ϕ(u) + ϕ(v).

In normal English, any fair allocation ought to divide the whole of the resource without any waste (efficiency), two people who contribute the same to every coalition should have the same Shapley value (symmetry), and someone who contributes nothing should get nothing (dummy). The first three axioms are ‘within games’, chosen based on normative ideals; additivity, by contrast, is ‘between games’ (Winter, 2002: 2038). Additivity is not needed to define the Shapley value, but helps a great deal in mathematical proofs, notably of its uniqueness. Since the additivity axiom is used mainly for mathematical tractability rather than normative considerations, much work has been done in developing alternatives to the additivity axiom. The fact that the Shapley value can be replicated under vastly different axiomatizations helps illustrate why it comes up so often in applications.

The Shapley value formula takes the form:

\Phi_i(v) = \sum\limits_{\substack{S\subseteq{N}\\i\in{S}}} \frac{(|S|-1)!(n-|S|)!}{n!}[v(S)-v(S\backslash\{i\})]

where |S| is the number of elements in the coalition S, i.e. its cardinality, and n is the total number of players. The initial part of the equation will make far more sense once we go through several examples; for now we will focus on the second part, in square brackets. All cooperative games use a value function, v(S), in which v(Ø) ≡ 0 for mathematical reasons, and v(N) represents the ‘grand coalition’ containing each member of the game. The equation [v(S) – v(S\{i})] represents the difference in the value functions for the coalition S containing player i and the coalition which is identical to S except not containing player i (read: “S less i”). For monotone games, where adding a member never lowers a coalition’s value, this quantity is non-negative; in general it can be negative.

It is this tiny equation that lets us interpret the Shapley value in a way that is second-nature to economists, which is precisely one of its most remarkable properties. Historically, the use of calculus, which culminated in the supply-demand diagrams of Alfred Marshall, is what fundamentally defined economics as a genre of writing, as opposed to the political economy of Adam Smith and David Ricardo. The literal meaning of a derivative as infinitesimal movement along a curve was read in terms of ‘margins’: say, the change in utility brought about by a single-unit increase in good x. Thus, although these axioms specify nothing about marginal quantities, we can nonetheless interpret the Shapley value as the marginal contribution of a single member to each coalition in which he or she takes part. This marginalist interpretation was not built in by Shapley himself, but emerged over time as the Shapley value’s mathematical exposition was progressively simplified. It is this that allows us to illustrate by examples instead of derivations.
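
For readers who prefer code to notation, the value can also be computed by brute-force enumeration over coalitions. The following Python sketch is only illustrative: the function name, the use of frozensets for coalitions, and the toy game at the bottom (the weighted vote from Example 1 below) are choices of convenience, not anything standard.

from itertools import combinations
from math import factorial

def shapley_value(players, v):
    # players: a list of hashable player labels
    # v: characteristic function mapping a frozenset of players to a number,
    #    with v(frozenset()) == 0
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        # sum over every coalition S that contains player i
        for k in range(len(others) + 1):
            for rest in combinations(others, k):
                S = frozenset(rest) | {i}
                weight = factorial(len(S) - 1) * factorial(n - len(S)) / factorial(n)
                total += weight * (v(S) - v(S - {i}))  # i's marginal contribution to S
        phi[i] = total
    return phi

# Example 1 below: shares of 10, 30, 30, 40 and a quota of 55
shares = {1: 10, 2: 30, 3: 30, 4: 40}
def v(S):
    return 1 if sum(shares[p] for p in S) > 55 else 0

print(shapley_value(list(shares), v))
# roughly {1: 0.0, 2: 0.333..., 3: 0.333..., 4: 0.333...}

The enumeration visits 2^(n-1) coalitions per player, so it is meant to make the formula tangible rather than to be efficient.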

Examples 1 & 2: Shapley-Shubik Power Index (Shapley & Shubik, 1954)

Imagine a weighted majority vote: P1 has 10 shares, P2 has 30 shares, P3 has 30 shares, P4 has 40 shares.

For a coalition to be winning, it must have a higher number of votes than the quota, here q = \frac{110}{2} = 55

v(S) =\begin{cases} 1, & \text{if }\sum_{i\in{S}}w_i>55 \\ 0, & \text{otherwise}\end{cases}  where w_i is player i’s number of shares. Winning coalitions: {2,3}, {2,4}, {3,4} & all supersets of these.

Since the values only take on 0s and 1s, we can work with a shorter version of the Shapley value formula:

\Phi_i(v) = \sum\limits_{\substack{S\text{ winning}\\S\backslash\{i\}\text{ losing}}} \frac{(|S|-1)!(n-|S|)!}{n!}

Here, [v(S) – v(S\{i})] takes on a value of 1 iff a player is pivotal, making a losing coalition into a winning one. Otherwise it is either [0 – 0] = 0 for a losing coalition or [1 – 1] = 0 for a winning coalition.

For P1: v(S) – v(S\{1}) = 0 for all S, so ϕ1(v) = 0 (by dummy player axiom)

For P2: v(S) – v(S\{2}) ≠ 0 for S = {2,3}, {2,4}, {1,2,3}, {1,2,4}, so that:

\Phi_2(v)=2\frac{1!2!}{4!}+2\frac{2!1!}{4!}=\frac{8}{24}=\frac{1}{3}

By the symmetry axiom, ϕ2(v) = ϕ3(v) = ⅓. By the efficiency axiom, 0 + ⅓ + ⅓ + ϕ4(v) = v(N) = 1 → ϕ4(v) = ⅓

It is worth noting that, within the structure of our voting game, P4’s extra ten votes have no effect on his power to influence the outcome, as shown by the fact that ϕ2 = ϕ3 = ϕ4. A paper by Shapley (1981) describes an actual situation among county governments in New York in which each municipality’s number of votes was based on its population; in one particular county, three of the six municipalities had Shapley values of zero, similar to our dummy player P1 above. Once this was realized, the quota was raised so that the three dummy municipalities were able to be pivotal for certain coalitions, giving them nonzero Shapley values (Ferguson, 2014: 18-9).

For a more realistic example, consider the United Nations Security Council, composed of 15 nations, where 9 of the 15 votes are needed, but the ‘big five’ nations have veto power. This is equivalent to a weighted voting game in which each of the big five gets 7 votes, each of the other 10 nations gets 1 vote, and 39 votes are needed to pass. To see the equivalence, note that if all nations except one of the big five vote in favor of a resolution, the vote count is (35 – 7) + 10 = 38, one vote short of the quota, so no resolution can pass without all five; the big five alone, meanwhile, have only 35 votes and still need at least 4 of the others, i.e. at least 9 members in total.

Thus we have weights of w1 = w2 = w3 = w4 = w5 = 7, and w6 = ⋯ = w15 = 1.

Our value function is v(S) =\begin{cases} 1, & \text{if }\sum_{i\in{S}}w_i\geq 39 \\ 0, & \text{otherwise}\end{cases}  Winning coalitions: {1,2,3,4,5} together with any 4 or more of the 10 non-permanent members.

For the 4 out of 10 ‘small’ nations needed for the vote to pass, the number of possible combinations is \frac{10!}{4!\,6!}=210.

Hence, in order to calculate the Shapley value for any member (say, P1) in the big five, we take into account that v(S) – v(S\{1}) ≠ 0 for all 210 minimal winning coalitions (the other four permanent members plus exactly four of the ten small nations), as well as for every larger winning coalition; this is just another way of expressing their veto power. In our previous example, we were able to count by hand the pivotal coalitions of each size and multiply that count by the Shapley weight for coalitions of that size. Here the number of pivotal coalitions of each size is so large that we must count them using combinatorics. Our next equation looks arcane, but it is only the number of pivotal coalitions of each size multiplied by the corresponding Shapley weight, summed over sizes. First we have the minimal case where 4 of the 10 small members vote in favor of the resolution, then we have the case for 5 of the 10, and so on until we reach the case where all members unanimously vote together:

\Phi_1(v)=(\frac{10!}{4!6!})(\frac{8!6!}{15!})+(\frac{10!}{5!5!})(\frac{9!5!}{15!})+(\frac{10!}{6!4!})(\frac{10!4!}{15!})+(\frac{10!}{7!3!})(\frac{11!3!}{15!})+(\frac{10!}{8!2!})(\frac{12!2!}{15!})+(\frac{10!}{9!1!})(\frac{13!1!}{15!})+(\frac{14!}{15!})

=210\frac{1}{45045}+252\frac{1}{30030}+210\frac{1}{15015}+120\frac{1}{5460}+45\frac{1}{1365}+10\frac{1}{210}+1\frac{1}{15} = 0.19627

By the symmetry axiom, we know that all members of the big five have the same Shapley value of 0.19627. Also, as before, the efficiency axiom implies that the Shapley values for all the players sum to v(N) = 1. Since symmetry also implies that the Shapley values are the same for the 10 members without veto power, we need not engage in any tedious calculations for the remaining members, but can simply use the following formula:

\Phi_6=\cdots=\Phi_{15}=\frac{1-5(0.19627)}{10}=\frac{1-0.98135}{10}=0.001865
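
As a sanity check on the arithmetic, the same brute-force sketch from earlier reproduces these numbers (the labels 1 through 15 and the 39-vote encoding simply follow the setup above):

# UN Security Council as a weighted voting game: five members with weight 7,
# ten members with weight 1, and a quota of 39
weights = {i: 7 if i <= 5 else 1 for i in range(1, 16)}
def v(S):
    return 1 if sum(weights[p] for p in S) >= 39 else 0

phi = shapley_value(list(weights), v)
print(round(phi[1], 5), round(phi[6], 6))  # approximately 0.19627 and 0.001865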

Part of the purpose of this example is to help the reader appreciate how quickly the complexity of such problems increases in the number of agents n. Weighted voting games are actually relatively simple to calculate because v takes only the values 0 and 1, which is why we can just sum the Shapley weights over the pivotal coalitions of each size; in our next example we will relax this assumption. In so doing, the part of the Shapley formula v(S) – v(S\{i}) gains added importance as a ‘payoff’, whereas the coefficient \frac{(|S|-1)!(n-|S|)!}{n!} acts as a probability, so that the combined formula is reminiscent of von Neumann-Morgenstern expected utility. This coefficient can be construed as a probability in the following way (Roth, 1983: 6-7):

suppose the players enter a room in some order and that all n! orderings of the players in N are equally likely. Then ϕi(v) is the expected marginal contribution made by player i as she enters the room. To see this, consider any coalition S containing i and observe that the probability that player i enters the room to find precisely the players in S – i already there is (s – 1)!(n – s)!/n!. (Out of n! permutations of N there are (s – 1)! different orders in which the first s – 1 players can precede i, and (n – s)! different orders in which the remaining n – s players can follow, for a total of (s – 1)!(n – s)! permutations in which precisely the players S – i precede i.)
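
In symbols, one standard way of writing this random-order story (with Π(N) the set of all n! orderings of the players and P_i^{\sigma} the set of players who precede i in the ordering σ) is:

\Phi_i(v) = \frac{1}{n!}\sum\limits_{\sigma\in\Pi(N)} [v(P_i^{\sigma}\cup\{i\}) - v(P_i^{\sigma})]

Grouping the orderings by the coalition S = P_i^σ ∪ {i} they produce recovers the weights \frac{(|S|-1)!(n-|S|)!}{n!} of the original formula.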

One drawback to this approach is its implicit assumption that each of the coalitions is equally likely (Serrano, 2013: 607). For cases such as the UN Security Council this is doubtful, and overlooks many very interesting questions. It also assumes that each player wants to join the grand coalition, whereas unanimous votes seldom occur in practice. The main advantage of the Shapley value in the above examples is that another common solution concept for cooperative games, the Core, tends to be empty in weighted voting games, giving it no explanatory power. The Shapley value can be extended to measure the power of shareholders in a company, and can even be used to predict expenditure among European Union member states (Soukenik, 2001). We will go through another relatively simple example, and then move on to several more challenging applications.

Read the rest of this entry

The Project of Econo-fiction

what is economics

I have an article up at the online magazine Non on what it entails to use Laruelle’s non-philosophy to talk about economics, intended as a retrospective of my essay “There is no economic world.” It contextualizes econo-fiction in terms of Laruelle’s lexicon, illustrates a philosophical quandary in viewing iterated prisoner’s dilemma experiments through the lens of ‘falsification’, and notes a few ways I’ve changed my mind since then and where I plan to go from here. While the example is deliberately simple, aimed toward readers with zero knowledge of economic theory, it shows very succinctly how the notion of ‘experiment’ in economics operates as a form of conceptual rhetoric. I’ve also included a lot of fascinating factoids I’ve discovered since then, which I plan to expand upon in upcoming posts here.

No other philosophical approach I’ve come across—not even Badiou’s—lends itself to economics as much as non-philosophy does. I’m very impressed with the way that NP can talk about the mathematical formalism in economics without overcoding it, and I’d very much like to experiment with applying NP to related disciplines. Laruelle himself hints toward new applications of his method in finance: “Philosophy is a speculation that sells short and long at the same time, that floats at once upward and downward” (2012: 331). That is, philosophy is a form of hedging. Conversely, the section containing this excerpt is entitled “Non-Philosophy Is Not a Short-Selling Speculation,” where short-selling is investing so that you make money if an asset’s price goes down. Of the continental philosophers of finance I’m familiar with, Ben Lozano’s Deleuzian approach tends to focus on the conceptual aspects of finance to the neglect of its formalism, and Élie Ayache’s brilliantly original reading of quantitative finance is in many ways quite eccentric—such as his insistence on the crucial importance of the market maker (the guys yelling at each other in retro movies about Wall Street) and his claim that algorithms are fundamentally inferior to human traders. A Laruellean interpretation of mainstream finance would serve as a welcome foil to both.

Just the other day I discovered a form of mathematical notation that appears to open up a Laruellean interpretation of accounting, and I’m always on the lookout for quirky reinterpretations of business-related ideas. I find philosophy such a handy tool for getting myself intrinsically interested in dull (but very practical) topics and disciplines, and I’ve read a whole heap of papers over the past year, so I’m really looking forward to blogging again.

References

“There is no economic world.”

There is no economic world. There is only an abstract economic description. It is wrong to think that the task of economics is to find out how the economy is. Economics concerns what we can say about the economy…

This thesis (adapted from Niels Bohr, the father of quantum theory[1]) is, to anyone not thoroughly debauched by philosophy, clearly nonsensical—the sort of postmodern tripe that embodies everything wrong with ‘theory’. Yet, it is quite the opposite. François Laruelle argues that any notion of ‘world’—as a priori/mnemotechnic cognitive mapping—is a product of philosophical thinking; in fact, he often uses the words ‘philosophy’ and ‘world’ interchangeably. Therefore, if the corpus of economics has a ‘world’, this implies that any worthwhile statements it makes are translatable into philosophy, which thus becomes privileged as a meta-discourse in relation to the ‘regional knowledge’ of economics. Such a role has been traditionally claimed by Marxism, as well as obliquely by disciplines such as psychoanalysis, whose proponents believe that they can have knowledge of the economy by imposing their concepts a priori upon whatever data is at hand (regardless of whether said theorist knows minutiae such as the difference between stocks and bonds…). To subvert this hierarchy—to argue that economics is properly non-philosophical, thus eliminating all grounds for the use of postmodern tripe—the thesis that ‘there is no economic world’ becomes essential. This paper presents a unified theory of economics and philosophy, arguing that economics consists of nonknowledge rather than knowledge (episteme/technē), that economics operates through unwriting or deconceptualizing the material of the other social sciences, and that economic models should not be viewed as attempts to represent the world, but as a radically non-Bayesian method of framing events in their contingency.

§1. World versus ‘World’

There is a famous story involving the British analytic philosopher A.J. Ayer, the French continental philosopher Georges Bataille, and the physicist Georges Ambrosino, in a midnight conversation in January 1951 (Bataille, 2001: 111-3). Ayer introduced the simple proposition that “the sun existed before man,” which as a scientific realist he saw no reason to doubt. Ambrosino, steeped in French phenomenology, insisted that “certainly the sun had not existed before the world.” Bataille, on the other hand, was agnostic. As he wrote afterwards (111):

This is a proposition that indicates the perfect non-sense that a reasonable proposition can assume. A common meaning must have a meaning within all meaning when one asserts any proposition that in principle implies a subject & an object. In the proposition: there was the sun and there were no humans, there is a subject without an object.

The easy way out of this dilemma (or as Bataille put it, this “abyss between French philosophers and English philosophers”) is to say that while Ayer was talking about the sun (as a well-defined scientific object composed of various elements, etc.), Ambrosino and Bataille were talking about ‘the sun’ (as ideal representation of the Real).[2] While Ambrosino had taken a purely idealist position, Bataille’s stance is much more interesting: he had, in fact, hit upon a problem that would later become known as the ‘arché-fossil’. This idea would be central to Quentin Meillassoux’s attempt to philosophize in a way that avoids what he calls ‘correlationism’—that is, the idea that “we only ever have access to the correlation between thinking and being, and never to either term considered apart from the other” (2008: 5), with ‘thinking’ and ‘being’ meant in the sense of ‘models’ and ‘objects’. In more visual terms, Meillassoux is searching for a way of doing philosophy that doesn’t just involve the imposition of a ‘grid’ of concepts (or ‘syntax’) upon the mass of data comprising the world—as has been the norm in philosophy since Kant’s Critique of Pure Reason.[3] An arché-fossil is any sort of scientific object or datum describing the state of the universe prior to the existence of subjects (e.g. humans) that could experience it—or, recalling the above anecdote: the arché-fossil describes the state of the world prior to ‘the world’. After introducing this concept, Meillassoux goes on to outline the ‘mechanics’ of why this idea is so immediately absurd to philosophers in the phenomenological tradition. The existence of ‘ancestral’ data implies (15):

  • that being is not co-extensive with manifestation, since events have occurred in the past which were not manifest to anyone;
  • that what is preceded in time the manifestation of what is;
  • that manifestation itself emerged in time and space, and that consequently manifestation is not the givenness of a world, but rather an intra-worldly occurrence;
  • that this event can, moreover, be dated;
  • that thought is in a position to think manifestation’s emergence in being, as well as a being or a time anterior to manifestation;
  • that the fossil-matter is the givenness in the present of a being that is anterior to givenness; that is to say, that an arché-fossil manifests an entity’s anteriority vis-à-vis manifestation.

DCP_0086 by Phillip Stearns

The notion of the arché-fossil underscores the tension between the world and ‘the world’. From the perspective of ‘the world’ there is either ‘world’ or ‘non-world’, whose boundary is set by the existence of an experiencing subject. Yet, by radiometrically dating a meteorite (for example), it is possible to state that the ‘non-world’ and the world existed simultaneously (or: co-extensively), and moreover, that the evidence for this is given to us within ‘the world’. Philosophically, this is clearly unacceptable. Yet, it sheds some light upon an old Daoist koan:

庄子:“如果把天下就藏在天下,就不会被丢失,这是一般事物的通理”

“Hide the world in the world and the world will never be lost—this is the eternal truth.” ~Zhuangzi[4]

Zhuangzi is the same person who, upon waking up from a dream that he was a butterfly, wondered if he was actually a butterfly dreaming that he was a man. The anecdote is no doubt as popular as it is because of its stark opposition of ‘world’ (dream) and world (reality). A dream, after all, proceeds according to an internal logic in which any (arché-)hints that it is a dream (e.g. words on a page changing the second time you look at them) somehow don’t count. The most absurd events may occur in the most bizarre of settings, but any sense of contingency (the idea that it could be otherwise) is lost. If we take the lack of contingency in dreams as a principle, however, the very fact that Zhuangzi can ask whether he’s a butterfly or a man proves he isn’t dreaming! Zhuangzi’s query creates a false partition—with ‘dream’ and ‘non-dream’ as the only members of the state space—and is thus self-defeating: nonknowledge is in fact the most useful kind of knowledge he can have. So in order to avoid a performative contradiction, Zhuangzi must accept that the principle can’t be psychologically necessary. This gives rise to a fundamental contingency, where in order to make a convincing case that he is a butterfly, Zhuangzi has to argue that the current rules of psychology (and perhaps even of nature) would have to be able to be other than they are—the same position as Meillassoux!

For Meillassoux, this division of world and ‘world’ is the problem, and ought to be gotten rid of; Zhuangzi’s stance is similar, though his method eliminates this opposition in an entirely different way—which is the same as that of economics. Anyone accustomed to think in philosophical terms may be inclined, on reading the following sections, to suppose that the argument rests on a tacit assumption of this dyad. If such a supposition is found helpful, there is no harm in the reader’s adopting it as a temporary working hypothesis. In fact, however, no such division is made.

§2. Econo-fiction

To verify the claim ‘oil prices are manipulated by the USA’, a researcher could (in theory) physically go to each stage of the oil production/distribution process, from oil wells to spot or futures markets, to various nodes along logistical networks, to gas stations, etc. In the above claim, ‘oil price’ is well-defined as a variable; moreover, its role as subject of the sentence makes the former claim ‘economic’ in its genre. (Cf. the political statement ‘the USA manipulates oil prices’, with its focus on agency.) ‘USA’ is of course vague, but suffices for the problem at hand. The verb ‘to manipulate’ reifies (in this context), but is in principle observable. Our researcher could measure the ‘value added’ in each stage as it is expressed in price, then perform an (unavoidably qualitative) analysis of how fluctuations in the magnitude of this value-added (with respect to production costs, etc.) can be causally traced to the USA. In this context, economic methods would not per se be needed, only mercantile arithmetic. Economics is often thought of as simply an armchair version of our poor researcher’s task (implying that an ideal model is one that is just as complex as the real world). Yet, in the above statement economics acknowledges not the subject, verb, or object, but the preposition ‘by’: in a sort of econo-fiction, it shows the numerical properties that make ‘manipulation’ meaningful.

Economics can be defined as the science of non-discursive social relations, with a broad definition of ‘discourse’ such that one could equally say ‘non-conceptual’.[5] In fact, economics takes place through a process of deconceptualizing the findings of business, finance, and politics. As soon as you think you can understand an economic notion (e.g. an algebraic relation) intuitively and talk about it lucidly, economists develop a way to formalize it (via econometrics and so on) so as to make it entirely untranslatable into normal language. John von Neumann once remarked: “in mathematics you don’t understand things. You just get used to them.” This is exactly what Bohr was saying! By continually deconceptualizing its former results economics systematically prevents itself from creating a ‘world’. As in Roland Barthes’ famous formulation, the task of economics is to inexpress the expressible.[6]

Read the rest of this entry

Exogeneity – Economics and Nonknowledge

2009_07_07, by Tas Vicze

An economic fact is structured as follows: “consumers in the sample place a premium on liquidity β = 0.73.” This serves its task in economic models and allows economists to draw conclusions that are correct for all practical purposes. But in everyday life such a number is meaningless. The reason for this is that this number cuts across all discourse, all affect, and the social conditions that engender it, rendering these causalities exogenous to the non-conceptual statements of economics. As such, an economic fact’s epistemological scope is not sufficiently expressed by the all-too-philosophical categories of episteme (‘know-what’) and technē (‘know-how’)—though obviously these cannot help but play a significant part—but can be better characterized as a form of nonknowledge. This use of the term ‘nonknowledge’ is, however, not reducible to the Confucian dictum “To know what you know and to know what you do not know, this is knowledge”—which merely redoubles epistemology on itself in a transcendental begging of the question. To know what we don’t know would require that we know what we don’t know we don’t know, ad infinitum. The nonknowledge of economics is, as it were, the last instance of the Confucian limit statement. This nonknowledge takes place in a single number, which unifies (without being unitary) and commensurates (without commensurability) disparate orders of causality. The purpose of such a number is, borrowing a phrase from Roland Barthes (a far better political economist than Althusser ever was), to inexpress the expressible. Exogeneity is the reason why economics must necessarily be (in)expressed by numbers, not words—as well as why philosophy, mired in discourse, is unable to speak of economics.

In its disjunction from Knowledge proper, economics is non-paranoiac precisely to the extent that it is okay not to know. This has in the past led to accusations that economics is a form of religion, by analogy with the latter’s suppression of questions via dogmatism. This is not internal to economics, however, but rather takes place in its traversal (and subsequent travesty) in(to) discourse, where causality is truncated and contingency forgotten. Economics deals in facticity without fact (Heidegger), which as such remains open, ‘closable’ in the last instance only. The clause “for all practical purposes” helps to underscore its (non-)answerality, its indecisionality between practice and theory. Yet, economics tra(ns)verses into discourse precisely by supplying this ‘last instance’—by endogenizing it, as best illustrated by Milton Friedman and the money supply. Philosophy cannot handle exogeneity. Its own limit statement is that economics become a Theory of Everything.

2009_06_30 by Tas Vicze

A concept is a model. This implies that the only form of ‘concept’ in economics is an economic theory itself, in its entirety. This goes unnoticed because the first idea associated with economics in the public mind is “supply and demand”—this despite the fact that real economists never use these terms in practice. An introductory course in economics (many people’s only exposure to the discipline) teaches students to play around with such concepts as supply and demand, interest rates, and so on. Philosophy ‘of’ economics likewise proceeds by attaching to an economic ‘object’ such as ‘labour’, then trying to relate it to other concepts. But those few who take an intermediate economics course find that they are being taught the same information, but without concepts. ‘Objects’ are replaced with exogenous variables, or rates of change ∂x/∂y (read: “derivative of x with respect to y”). An economic model is an elaborate tautology, in the extended Wittgensteinian sense of the term where p ⊃ q is a tautology, since the concept of p is contained in the concept of q. Its conclusions arise from the syzygy (roughly, coalignment) of its variables along with its presuppositions (made for the sake of simplification). In an elaborate process of synergy, this syzygy creates a concept that may be imposed at will. Conclusions (prescriptive and descriptive) seem obvious to an economist but are not so to a layperson, and philosophy students pontificate about neoliberalism while econometrics students are incapable even of articulating what they don’t understand in class. Yet, it’s precisely this para-conceptual syzygy that constitutes all that is valuable in economics. What separates a good model from a bad one is that in the latter, a specific presupposition may be shown to do all the ‘work’ in providing the model’s conclusion.

Semantically, all of the interesting statements of economics take place within the preposition of a philosophical statement. As Laruelle writes, “The identity of the with (the One with the One, God with God), is the true ‘mystical’ content of philosophy, its ‘black box’.” The armchair philosopher is forced to engage in an amphibological attempt to render (pseudo-)exogeneity as endogenous, forced to autopositionally posit black boxes in the form of virtus dormitiva (Molière). Philosophy creates names for the Real, and by these purports to have explained it. Conversely, an economic variable is a name that does not name (Lao Tzu: 名可名,非常名), but non-conceptually gestures toward radical ( absolute) exogeneity.

Thanks to Tas Vicze for the artwork; you can view the rest of his portfolio here.

Currency War: An Extremely Short Introduction

Manhattan Nights, by Jeremy Mann

John Maynard Keynes once wrote that “There is no subtler, no surer means of overturning the existing basis of society than to debauch the currency.”[1] Bretton Woods, hyperinflation, and stagflation have increased this view’s sway, and some would argue that Keynes’ own economic theories have given his statement a veracity that it would not possess otherwise. Nevertheless, this statement is capable of being combined in a fruitful way with the notion that “the way a society makes war reflects the way it makes wealth.”[2] These two theses become crystallized in the concept of a currency war, the implications of which will first be outlined historically, then contextualized within contemporary discourse on international politics, and discussed in terms of how it problematizes typical discussions of security.

Prior to Bretton Woods, the value of money was pegged to the price of gold, i.e. the gold standard. This led to phenomena such as ‘Gresham’s Law’, where if the price of bullion was higher than the value of a coin made of a precious metal, people would melt down the coin and sell the bullion, pocketing the difference; this tendency can be observed even today with respect to the penny. More importantly, as Lyotard argues, this structured international trade into a zero-sum game. As he comments:

[T]he quantity of metallic money which is ‘circulating in all Europe’ being constant, and this gold being wealth itself, in order that the king grow richer he must seize the maximum of this gold. This is to condemn the partner to die, in the long or short term. It is to count the time of trade not up to infinity, but by limiting it to the moment when all the gold in Europe is in Versailles.[3]

This was not, however, a currency war per se, but rather a ‘wealth war’—the difference will shortly be made clear. In 1933, facing the Great Depression, the United States finally abandoned the gold standard, and soon after devalued its currency 40 percent, which greatly boosted the US economy as well as that of the rest of the world.[4] Deprived of a ‘universal’ numeraire, the currencies of the world subsequently became valued relative to each other, creating a competitive atmosphere of an entirely different kind. The lower a state’s currency is valued, the more businesses in other countries will be incentivized to import its products, and this fact (particularly in the case of Japan) is explicitly taken into account in monetary policy. The picture is complicated further when it is considered how the US dollar is a reserve currency—i.e. the ‘default’ currency which states use to allow for current account surpluses (i.e. countries exporting more than they import) or to purposively modify exchange rates (particularly in the case of currencies pegged to another currency, such as China’s yuan to the dollar). As one article[5] describes: “the effect of a devaluation of a non-reserve currency…is implicitly to put upward buying pressure on the USD,” and conversely,[6] “every time the Fed debases the US Dollar it forces the Euro and other currencies higher, hurting those countries’ exports.”

Read the rest of this entry