Heideggerian Economics

The Fabric of Time, by Natalie Kelsey

Lately I’ve had the poor judgment to start reading Heidegger’s Being and Time. I’ve been putting it off for years now, largely because it has no connection with the kind of philosophy I’m interested in. Yet, among my philosophical acquaintances there is a clear line between those who have read Heidegger and those who haven’t—working through this book really does seem to let people reach a whole new level of abstraction.

To my great surprise, in Being and Time (1927: 413), Heidegger remarks:

[E]ven that which is ready-to-hand can be made a theme for scientific investigation and determination… The context of equipment that is ready-to-hand in an everyday manner, its historical emergence and utilization, and its factical role in Dasein — all these are objects for the science of economics. The ready-to-hand can become the ‘Object’ of science without having to lose its character as equipment. A modification of our understanding of Being does not seem to be necessarily constitutive for the genesis of the theoretical attitude ‘towards Things’.

Curiously, no other sources I’ve found mention this excerpt. More well-known is a passage from “What are Poets for?” in which Heidegger denounces marketization (1946: 114-5):

In place of all the world-content of things that was formerly perceived and used to grant freely of itself, the object-character of technological dominion spreads itself over the earth ever more quickly, ruthlessly, and completely. Not only does it establish all things as producible in the process of production; it also delivers the products of production by means of the market. In self-assertive production, the humanness of man and the thingness of things dissolve into the calculated market value of a market which not only spans the whole earth as a world market, but also, as the will to will, trades in the nature of Being and thus subjects all beings to the trade of a calculation that dominates most tenaciously in those areas where there is no need of numbers.

Thus it’s very easy to appeal to Heidegger’s authority to support various Leftist clichés about capitalism. It’s far harder to bring Heidegger’s thought to bear on actual economic modelling—its ‘worldly philosophy’. In this post I’ll survey several of the less hand-wavey attempts in this direction. My main question is whether a Heideggerian economics is possible at all, and if so, whether there is a specific subfield of economics to which Heideggerian philosophy especially lends itself. My overview of each specific thinker sticks closely to the source material, as I’m hardly fluent enough in Heideggerese to give a synoptic overview or clever reinterpretation. I don’t expect to ever develop a systematic interpretation of my own, but I hope this post might prove inspiring to some economist with philosophical tastes far different from my own.

1. Schalow on ‘The Question of Economics’

Schalow’s approach is quite refreshing because he is an orthodox Heideggerian who nonetheless takes the viewpoint of mainstream economics, as opposed to Heideggerian Marxism such as Marcuse’s One-Dimensional Man. Schalow’s question is at once simpler and deeper: whether Heidegger’s thought leaves any room for economics. Here, ‘economics’ is minimally defined as theorizing the production and distribution of goods to meet human needs. (So in theory, then, this applies to any sort of economics, classical or modern.) The most obvious answer would seem to be ‘No’ — he notes: “It is clear that Heidegger refrains from ‘theorizing’ of any kind, which for him constitutes a form of metaphysical rationality” (p. 249).

Thus, Schalow takes a more abstract route, viewing economics simply as “an inescapable concern of human being (Dasein) who is temporally and spatially situated within the world” (p. 250). Schalow advocates a form of ‘chrono-economics’, in which ‘scarcity’ is framed in terms of time as the numeraire. In a sense, this operates between ‘economic theory’ as a mathematical science and as a “humanistic recipe for achieving social justice” (p. 251); instead, “economic concerns are an extension of human finitude” (p. 250). Schalow makes various pedantic points about etymology which I’ll spare the reader, except for this one: “the term ‘logos’ derives its meaning from the horticultural activity of ‘collecting’ and ‘dispersing’ seeds” (p. 252).

It’s natural to interpret Being & Time as “lay[ing] out the pre-theoretical understanding of the everyday work-world in which the self produces goods and satisfies its instrumental needs” (p. 253). Similarly, “work is the self’s way of ‘skillful coping’ in its everyday dealings with the world” (p. 254). Hence Heidegger emphasizes production — which he will later associate with technē — over exchange, which he associates with the ‘they-self’ (p. 254). Yet, Schalow points out, both production and exchange can be construed as a form of ‘care’. Care, in turn, is configured by temporality, which forces us to prioritize some things over others (p. 256).

“The paradox of time…is the fact that it is its transitoriness which imparts the pregnancy of meaning on what we do” (p. 257). Therefore, “time constitutes the ‘economy of all economies’,” in that “temporality supplies the limit of all limits in which any provision or strategy of allocation can occur” (ibid.). We can go on to say that “time economizes all the economies, in defining the horizon of finitude as the key for any plan of allocation” (p. 258).

In his later thought, Heidegger took on a more historical view, arguing that the structure of Being was experienced differently in different epochs. In our own time, the strongest influence on our notion of Being is technology. Schalow gives an interesting summary (p. 261):

The advance of technology…occurs only through a proportional ‘decline’ in which the manifestness of being becomes secondary to the beings that ‘presence’ in terms of their instrumental uses.

In an age where the economy is so large as to be inconceivable except through mathematical models, one can say that “the modern age of technology dawns with the reduction of philosophical questions to economic ones” (p. 260). Thus, Heidegger is more inclined to view economics as instrumental (technē) rather than as “the self-originative form of disclosure found in art (poiēsis).” Yet, rather than merely a quantitative “artifice of instrumentality,” it is also possible to interpret economics in terms of poiēsis, as “a vehicle by which human beings disclose their immersion in the material contingencies of existence” (p. 262). Economics thus becomes “a dynamic event by which human culture adjusts to ‘manage’ its natural limitations” (ibid.). Framing economics in terms of temporality (as ‘chrono-economics’) allows it to remain open to Being, and thereby “to connect philosophy with economics without effacing the boundary between them” (p. 263).

Read the rest of this entry

Chinese Logic: An Introduction

painting by Zao Wou-Ki

[LaTeX version here; Chinese version here]

Introduction

As late as 1898, logic was seen by the Chinese as “an entirely alien area of intellectual inquiry”: the sole Chinese-language textbook on logic was labeled by Liang Qichao (梁启超)—at that time a foremost authority on Western knowledge—as “impossible to classify” (无可归类), alongside museum guides and cookbooks (Kurtz, 2011: 4-5). This same textbook had previously been categorized by Huang Qingcheng (黄庆澄) as a book on ‘dialects’ (方言). The Chinese word for logic (luójí 逻辑) itself is, according to the Cihai (《辞海》/ Sea of Words) dictionary, merely a transliteration from the English—the entire Chinese lexicon had no word resembling it (Lu, 2009: 98). Hence it never occurred even to specialists that this esoteric discipline might have close affinities with the roots of Chinese philosophy, from the I Ching (易经) to the ancient Chinese dialecticians (辩者), as well as the famous paradoxes of Buddhism.

With the advent of computers, “there is now more research effort in logics for computer science than there ever was in traditional logics” (Marek & Nerode, 1994: 281). This has led to a proliferation of logical methods, including modal logic, temporal logic, epistemic logic, and fuzzy logic. Further, such new logical systems permit multiple truth values, semantic patterns based on games, and even logical contradictions. In light of these possibilities, research in ‘Chinese logic’ aims to reinterpret the history of Chinese thought by means of such tools.

This essay consists of three parts: the mathematics of the I Ching, the debates within the School of Names, and the paradoxes of Buddhism. The first section will, through examining the binary arithmetic of the I Ching, provide an introduction to basic logical notation. The second section will explore Gongsun Long’s famous bái mǎ fēi mǎ (白马非马) paradox, as well as the logical system of the Mohist school. The third section will explain the four-cornered logic (catuṣkoṭi) of the Buddhist monk Nāgārjuna by way of paraconsistent logic.

lattice diagram of the I Ching

1. The I Ching (易经) & Binary Arithmetic

The I Ching is one of the oldest books in history. Throughout the world, there is no other text quite like it. Its original function was for divination, giving advice for future actions; yet, after centuries of commentary, it has taken on a fundamental role in Chinese culture. In part, this is because its commentaries became (apocryphally) associated with Confucius, thereby establishing it as a classic.

Its survival of the ‘burning of books and burying of scholars’ (焚书坑儒) in 213-210 BC has magnified the I Ching’s importance. Historically, the Zhou dynasty was marked by hundreds of years of war and dissension. Finally, Qin Shi Huang united the nation in 221 BC, to become China’s first emperor. According to the standard account, in order to unify thought and political opinion, Emperor Qin Shi Huang ordered that all books not about medicine, farming, or divination be burned. And so, the vast majority of ancient Chinese knowledge has been lost to history. Yet, since the I Ching was about divination, it escaped that fate. In a sense, then, the I Ching has come to represent the collective wisdom of ancient China—it embodies an entire philosophical cosmology.

Confucius’s interest in the I Ching is well known. In verse 7.16 of the Analects, he says: “If some years were added to my life, I would give fifty to the study of the Yi [I Ching], and then I might come to be without great faults.” Curiously, this appears at odds with the rest of his philosophy. After all, the Analects elsewhere says: “The subjects on which the Master did not talk, were—extraordinary things, feats of strength, disorder, and spiritual beings.” (7.20). That is, Confucius had no interest in oracles. Hence we can conclude that for Confucius, the main content of the I Ching was not divination, but philosophy.

The core tenet of the I Ching is deeply metaphysical, namely: the complementarity of Yin (阴) and Yang (阳). Yin represents negativity, femininity, winter, coldness, and wetness. Yang represents positivity, masculinity, dryness, and warmth. Accordingly, the fundamental components of the I Ching’s gua (卦), or trigrams and hexagrams, are two kinds of line: ‘⚋’ for Yin, ‘⚊’ for Yang.

The trigrams, made up of three lines, have 8 combinations (2³ = 8), and so are called the bagua (八卦), where bā (八) means ‘eight’. The bagua and their associated meanings are: ☰ (乾/天: the Creative/Sky), ☱ (兑/泽: the Joyous/Marsh), ☲ (离/火: the Clinging/Fire), ☳ (震/雷: the Arousing/Thunder), ☴ (巽/风: the Gentle/Wind), ☵ (坎/水: the Abysmal/Water), ☶ (艮/山: Keeping Still/Mountain), ☷ (坤/地: the Receptive/Earth). The I Ching’s commentaries revolve around 64 hexagrams of six lines (2⁶ = 64 combinations). There are multiple ways of ordering the hexagrams: the most well-known is the King Wen (文王) sequence, but the most important for our purposes is the Fu Xi (伏羲) sequence.

diagram of the I Ching’s hexagrams owned by Leibniz

In the 17th century, the mathematician Gottfried Wilhelm Leibniz attempted to develop a system of arithmetic using only the numbers 0 and 1, called binary arithmetic. Binary arithmetic is in base 2: its key point is that any integer can be uniquely represented as a sum of powers of two. For example, 7 = 4 + 2 + 1 = 1×(2²) + 1×(2¹) + 1×(2⁰), and since each of the coefficients is 1, the binary representation of 7 is (111). Similarly, 5 = 4 + 1 = 1×(2²) + 0×(2¹) + 1×(2⁰), where the middle coefficient is 0, so that 5 in binary is (101). For larger numbers, we simply include larger powers of two: 2³ = 8, 2⁴ = 16, etc.

Leibniz corresponded with various Christian missionaries in China, and had received a poster containing the Fu Xi sequence. To his astonishment, by letting ⚋ = 0 and ⚊ = 1, the Fu Xi sequence of 64 hexagrams exactly corresponds with the binary numbers from 0 to 63! Using the trigrams as a simplified example, reading each trigram from bottom to top we get: ☱ = (110) = 1×(2²) + 1×(2¹) + 0×(2⁰) = 4 + 2 = 6, ☵ = (010) = 0×(2²) + 1×(2¹) + 0×(2⁰) = 2, and so on. Thus, in the Fu Xi (binary) ordering, the bagua are ordered as: ☷, ☶, ☵, ☴, ☳, ☲, ☱, ☰.
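A few lines of Python make the correspondence easy to check. This is only a sketch, assuming the standard line patterns for the eight trigrams and reading the bottom line as the leading binary digit (which is what makes the arithmetic above come out):

# Trigram lines listed bottom-to-top: 1 = yang (⚊), 0 = yin (⚋)
trigrams = {'☷': (0, 0, 0), '☶': (0, 0, 1), '☵': (0, 1, 0), '☴': (0, 1, 1),
            '☳': (1, 0, 0), '☲': (1, 0, 1), '☱': (1, 1, 0), '☰': (1, 1, 1)}

def value(lines):
    bottom, middle, top = lines
    return 4 * bottom + 2 * middle + top    # read as a three-digit binary numeral

fu_xi = sorted(trigrams, key=lambda t: value(trigrams[t]))
print(fu_xi)                                # ['☷', '☶', '☵', '☴', '☳', '☲', '☱', '☰']
print([value(trigrams[t]) for t in fu_xi])  # [0, 1, 2, 3, 4, 5, 6, 7]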

Further, since we can treat the trigrams as numbers, we can also perform on them arithmetic operations such as addition and multiplication. To do this involves modular arithmetic, which for pedagogical purposes is occasionally called ‘clock arithmetic’. Its main feature is that it is cyclical: after arriving at the base number (‘mod n’, in our case: mod 2), we start up once again at zero. So in mod 2 arithmetic, 1 + 1 = 0: we only use the numbers 0 and 1. In the same way, a 12-hour clock only involves the numbers 1 to 12, and so is ‘mod 12’; hence, 15:00 is the same as 3:00, and so on. Therefore, the mod 2 addition of the I Ching’s trigrams can be represented by the following table:

mod 2 addition table for the trigrams

Note that this is equivalent to the ‘⊻’ (exclusive or) operation in Boolean logic. (Boolean logic simply uses 0 for ‘false’ and 1 for ‘true’.) This logical point of view comes most in handy for defining multiplication, since binary multiplication is equivalent to the logical ‘∧’ (and) operation (Schöter, 1998: 6):

mod 2 multiplication table for the trigrams
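Both tables are fully determined by the binary encoding, so a short script can regenerate them; this sketch reuses the bottom-to-top reading from above, with ⊻ as bitwise XOR and ∧ as bitwise AND:

symbols = ['☷', '☶', '☵', '☴', '☳', '☲', '☱', '☰']   # Fu Xi order: binary values 0..7

def print_table(op, name):
    # Print an 8 x 8 operation table over the trigrams, via their binary values.
    print(name)
    print('    ' + ' '.join(symbols))
    for a in range(8):
        print(symbols[a] + ' : ' + ' '.join(symbols[op(a, b)] for b in range(8)))

print_table(lambda a, b: a ^ b, 'mod 2 addition (exclusive or)')
print_table(lambda a, b: a & b, 'multiplication (and)')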

The advantage of logic over modular arithmetic is that we can define complements (¬). For example, Fire (☲/101) and Water (☵/010) are complementary, and so are Sky (☰/111) and Earth (☷/000). The use of logic is actually quite helpful in analyzing the trigrams’ associated meanings. Using the slightly different terminology of lattice theory (Schöter, 1998: 9):

  1. The Creative [乾/☰] is the union (⊻) of complements.
  2. The Joyous [兑/☱] is the union (⊻) of the Arousing [震/☳] and Abyss [坎/☵].
  3. Fire [火/☲] is the union (⊻) of the Arousing [震/☳] and Stillness [艮/☶].
  4. The Gentle [巽/☴] is the union (⊻) of the Abyss [坎/☵] and Stillness [艮/☶].
  5. Arousing [震/☳] is the intersection (∧) of the Joyous [兑/☱] and Fire [火/☲].
  6. Abyss [坎/☵] is the intersection (∧) of the Joyous [兑/☱] and Gentle [巽/☴].
  7. Stillness [艮/☶] is the intersection (∧) of Fire [火/☲] and the Gentle [巽/☴].
  8. The Receptive [坤/☷] is the intersection (∧) of complements.

In a beautiful essay, Goldenberg (1975) uses a branch of mathematics called group theory to unify the above points. A group is an algebraic structure consisting of a set together with a single operation satisfying certain conditions; a ring equips the set with two operations (here, addition and multiplication). It turns out that the I Ching’s hexagrams satisfy these conditions, which are as follows. 1) Closure: any operation between two hexagrams produces a hexagram. 2) Associativity: the grouping of the hexagrams in a repeated operation does not matter, e.g. (☵ + ☴) + ☳ = ☵ + (☴ + ☳) = ☲. 3) Identity Element: there exists a hexagram (the identity element) such that an operation with it and any other hexagram produces that same hexagram, e.g. ☷ + ☱ = ☱, as well as ☰ × ☱ = ☱. 4) Inverse: for every hexagram, there exists another hexagram, such that an operation combining them produces the identity element; here, for the addition operation, every hexagram is its own inverse, e.g. ☶ + ☶ = ☷. Note, however, that there does not exist a multiplicative inverse. Further, addition and multiplication both satisfy the property that a ⋅ b = b ⋅ a, so that the hexagrams are commutative. Under addition alone, then, the hexagrams do form a (commutative) group; the lack of multiplicative inverses only means that the two-operation structure falls short of being a field, and since it satisfies the remaining properties it is a ‘commutative ring’.
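The following sketch checks the eight lattice relations above and the ring-style properties by brute force over the trigrams, again encoding ⚋/⚊ as binary digits; the same checks go through for all 64 hexagrams.

from itertools import product

T = range(8)                         # the eight trigrams as binary values 0..7

def add(a, b): return a ^ b          # mod 2 addition (⊻)
def mul(a, b): return a & b          # multiplication (∧)
def comp(a): return a ^ 0b111        # complement (¬), e.g. Fire 101 <-> Water 010

assert add(0b100, 0b010) == 0b110    # the Joyous as union of the Arousing and the Abyss
assert mul(0b110, 0b101) == 0b100    # the Arousing as intersection of the Joyous and Fire
assert all(add(a, comp(a)) == 0b111 for a in T)   # union of complements = the Creative
assert all(mul(a, comp(a)) == 0b000 for a in T)   # intersection of complements = the Receptive

assert all(add(add(a, b), c) == add(a, add(b, c)) for a, b, c in product(T, T, T))          # associativity
assert all(add(a, b) == add(b, a) and mul(a, b) == mul(b, a) for a, b in product(T, T))     # commutativity
assert all(add(0b000, a) == a and mul(0b111, a) == a for a in T)                            # identities ☷ and ☰
assert all(add(a, a) == 0b000 for a in T)                     # every element is its own additive inverse
assert all(mul(a, add(b, c)) == add(mul(a, b), mul(a, c)) for a, b, c in product(T, T, T))  # distributivity
assert not any(mul(0b010, a) == 0b111 for a in T)             # but no multiplicative inverse for Water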

Read the rest of this entry

Élie Ayache’s The Medium of Contingency – A Review

artwork by Tatiana Plakhova

[All art by Tatiana Plakhova. Review in pdf here]

Élie Ayache, The Medium of Contingency: An Inverse View of the Market,
Palgrave-Macmillan, 2015, 414pp., $50.00 (hbk), ISBN 9781137286543.

Ayache’s project is to outline the ontology of quantitative finance as a discipline. That is, he wants to find what distinguishes it as a genre, distinct from economics or even stocks and bonds—what most of us associate with ‘finance’. Quantitative finance, dealing with derivatives, is a whole new level of abstraction. So Ayache has to show that economic and social concerns are exogenous (external) to derivative prices: the underlying asset can simply be treated as a stochastic process. His issue with probability is that it is epistemological—a shorthand for when we don’t know the true mechanism. Taleb’s notion of black swans as radically unforeseeable (unknowable) events is simply an extension of this. Conversely, market-makers—those groups of people yelling at each other in old movies about Wall Street—don’t need probability to do their jobs. Ayache’s aim is thus to introduce into theory the practice of derivatives trading—from within, rather than outside, the market. And it’s reasonable to think that delineating the ontology of this immensely rich field will yield insights applicable elsewhere in philosophy.

This is not a didactic book. People coming from philosophy will not learn about finance, nor about how derivatives work. Ayache reinterprets these, assuming familiarity with the standard view. Even Pierre Menard—Ayache’s claim to fame—is only given a few perfunctory mentions here. People coming from finance will not learn anything about philosophy, since Ayache assumes a graduate-level knowledge of it. Further, Ayache’s comments on Taleb’s Antifragile are limited to one page. The only conceivable reason to even skim this book is that you’d like to see just how abstract the philosophy of finance can get.

I got interested in Ayache because I write philosophy of economics. I wanted to learn what quantitative finance is all about, so several years ago I read through all his articles in Wilmott Magazine, gradually learning how to make sense of sentences like “Only in a diffusion framework is the one-touch option…replicable by a continuum of vanilla butterflies” (Sept 2006: 19). I’ve made it through all of Ayache’s published essays. Now I’ve read this entire book, and I deserve a goddamn medal. I read it so that you don’t have to.

Much of Ayache’s reception so far has been quite silly. I recently came across an article (Ferraro, 2016) that cited Ayache’s concept of ‘contingency’ as an inspiration behind a game based on sumo wrestling. (You can’t make this stuff up.) Frank Ruda (2013), an otherwise respectable philosopher, wrote a nonsensical article comparing him to Stalin![1] Philosophy grad students occasionally mention his work to give their papers a more ‘empirical’ feel (which is comparable in silliness to the sumo wrestling), especially Ayache’s clever reading of Borges’ short story on Pierre Menard—from which these graduate students draw sweeping conclusions about capitalism and high-frequency trading.

Ayache expects the reader to have already read The Blank Swan, which itself is not understandable without reading Meillassoux’s After Finitude. Thus, for most readers, decreasing returns will have long set in. My goal here is to summarize the main arguments and/or good ideas of each chapter, divested of the pages and pages of empty verbosity accompanying them. I try to avoid technical jargon from finance and philosophy except as needed to explain the arguments, though I do provide requisite background knowledge that Ayache has omitted. So first, let’s cover the most important concepts that the reader may find unfamiliar.

Read the rest of this entry

Combinatorial Game Theory: Surreal Numbers and the Void

Chess, by Andrew Phillips

[A pdf version is available here; LaTeX here]

Any number can be written as a tuple of games played by the void with itself.

Denote the void by the empty set ∅. We write: {∅|∅} = 0, with | as partition. ‘Tuple’ signifies ordering matters, so that {0|∅} = 1 and {∅|0} = −1. Then recursively construct the integers: {n|∅} = n + 1. Plug {∅|∅} into {0|∅} to get {{∅|∅}|∅} = 1, then this into {1|∅} to get {{{∅|∅}|∅}|∅} = 2…

So if games exist, numbers exist. Or rather: if games exist, numbers don’t have to.

Mixed orderings generate fractions, e.g. {0|1} = {{∅|∅}|{{∅|∅}|∅}} = ½. Games with infinity (written ω) or infinitesimals (ε = 1/ω) permit irrationals, and thus all reals. Further, it is valid to define {ω|∅} = ω + 1, etc. Once arithmetic operations are defined, more complex games can define and use such quantities as ∛ω and ω^ω.

Therefore: by defining numbers as games, we can construct the surreal numbers.(1)

∗                                   ∗                                   ∗

As well as defining numbers as games, we can treat games like numbers.

{∅|∅} can be played as the zero game. Simply: player 1 cannot move, and loses. Any game where player 2 has a winning strategy is equivalent to the zero game. Take two games G and H, with G a 2nd player win. The player with a winning strategy in H can treat the games separately, only moving in G to respond to the opponent. Player 2 wins G without affecting H’s outcome. Conversely, given G′ and H′, with G′ a 1st player win, player 1 is last to move in G′, giving player 2 an extra move in H′, potentially altering its outcome. In terms of outcomes, then, G + H behaves just like 0 + H, which is why we write G = 0.

For a game G, −G is G with the roles reversed, as by turning the board around in chess.

G = H if G + (−H) = 0, i.e. is a player 2 win, and so is equivalent to the zero game.

Two more properties are clear: G + (H + K) = (G + H) + K (associativity) and G + H = H + G (commutativity).

We can see that G + (−G) = 0 by a clever example called the Tweedledum & Tweedledee argument. In the game Blue-Red Hackenbush, players are given a drawing composed of separate edges. Each turn, player 1 removes a blue edge, plus any other edges no longer connected to the ground, and player 2 does likewise for red edges. Since in G + (−G) every blue edge in one component has a mirror-image red edge in the other, player 2 can simply mirror each of player 1’s moves in the opposite component until all the pieces are taken. Player 1 will not be able to move, and so will lose. Hence player 2 has a winning strategy.

tweedledum and tweedledee

Thus games are a proper mathematical object—namely, an Abelian group.(10)
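For readers who prefer to experiment, here is a minimal sketch of games-as-numbers in Python: a game is a pair of sets of simpler games, ≤ is defined recursively, and sums and negatives follow the definitions above. The class and function names are mine, and the examples only go as far as small finite games.

class Game:
    """A combinatorial game {L | R}: L and R are tuples of (simpler) Games."""
    def __init__(self, L=(), R=()):
        self.L, self.R = tuple(L), tuple(R)

def leq(x, y):
    # x <= y iff no left option of x is >= y, and no right option of y is <= x.
    return (all(not leq(y, xl) for xl in x.L)
            and all(not leq(yr, x) for yr in y.R))

def eq(x, y):
    return leq(x, y) and leq(y, x)

def add(x, y):
    # Disjunctive sum: a move in the sum is a move in either component.
    return Game([add(xl, y) for xl in x.L] + [add(x, yl) for yl in y.L],
                [add(xr, y) for xr in x.R] + [add(x, yr) for yr in y.R])

def neg(x):
    # -G: swap the roles of Left and Right, as by turning the board around.
    return Game([neg(xr) for xr in x.R], [neg(xl) for xl in x.L])

zero = Game()                # {∅|∅} = 0
one = Game([zero])           # {0|∅} = 1
half = Game([zero], [one])   # {0|1} = 1/2

print(eq(add(half, half), one))      # True: 1/2 + 1/2 = 1
print(eq(add(one, neg(one)), zero))  # True: G + (−G) = 0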

∗                                   ∗                                   ∗

A new notation links all this to surreal numbers. For any set of games GL and GR, there exists a game G = {GL|GR}. Intuitively, Left may move to any game in GL, and likewise Right to any game in GR. The zero game {∅|∅} = 0 is valid, and we may construct the surreals as before. Now we can write the surreals more easily with sets: {1, 2, … , n|} = n + 1.

Read the rest of this entry

Avant-Garde Philosophy of Economics

by Tatiana Plakhova (2011)

[A pdf version is available here]

To most people, the title of this post is a triple oxymoron. Those left thoroughly traumatized by Econ 101 in college share their skepticism with those who have dipped their toe into hybrid fields like neuroeconomics and found them to be a synthesis of the dullest parts of both disciplines. For the vast, vast majority of cases, this sentiment is quite right: ‘philosophy of economics’ tends to be divided between heterodox schools of economics whose writings have entirely decoupled from economic formalism, and—on the other side of the spectrum—baroque econophysicists with lots to say about intriguing things like ‘quantum economics’ and negative probabilities via p-adic numbers, but typically within a dry positivist framework. As for the middle-ground material, a 20-page paper typically yields two or three salvageable sentences, if even that. Yet, as anyone who follows my Twitter knows, I look very hard for papers that aren’t terrible—and eventually I’ve found some.

Often the ‘giants’ of economic theory (e.g. Nobel laureates like Harsanyi or Lucas) have compelling things to say about methodology, but to include them on this list seems like cheating, so we’ll instead keep to scholars who most economists have never heard of. We also—naturally—want authors who write mainly in natural language, and whose work is therefore accessible to readers who are not specialists in economic theory. Lastly, let’s strike from the list those writers who do not engage directly with economic formalism itself, but only ‘the economy’. This last qualification is the most draconian of the lot, and manages to purge the philosophers of economics (e.g. Mäki, McCloskey) who tend to be the most well-known.

The remaining authors make up the vanguard of philosophy of economics—those who alchemically permute the elements of economic theory into transdisciplinary concoctions seemingly more at home in a story by Lovecraft or Borges than in academia, and who help us ascend to levels of abstraction we never could have imagined. Their descriptions are ordered for ease of exposition, building from and often contradicting one another. For those who would like to read more, some recommended readings are provided under each entry. I hope that readers will see that people have for a long time been thinking very hard about problems in economics, and that thinking abstractly does not mean avoiding practical issues.

Category Theory, by j5rson

M. Ali Khan

Khan is a fascinating character, and stands out even among the other members of this list: by training he is a mathematical economist, familiar with some of the highest levels of abstraction yet achieved in economic theory, but at the same time an avid fan of continental philosophy, liberally citing sources such as De Man (a very unique choice, even within the continental crowd!), Derrida, and similar figures on the more literary side of theory, such as Ricoeur and Jameson. It may be helpful to contrast Khan to Deirdre McCloskey, who has written a couple of books on writing in economics: McCloskey uses undergraduate-level literary theory to look at economics, which (let’s face it) is a fairly impoverished framework, forcing her to cut a lot of corners and sand away various rough edges that are very much worth exploring. An example is how she considers the Duhem-Quine thesis to be in her own camp, which she proudly labels ‘postmodern’—yet, just about any philosopher you talk to will consider this completely absurd: Quine was as modernist as they come. (Moreover, in the 30 years she had between the first and second editions, it appears she has never bothered to read the source texts.) Khan, by contrast, has thoroughly done his homework and then some.

Khan’s greatest paper is titled “The Irony in/of Economic Theory,” where he claims that this ‘irony’ operates as a (perhaps unavoidable) literary trope within economic theory as a genre of writing. Khan likewise draws from rhetorical figures such as synecdoche and allegory, and it will be helpful to start at a more basic level than he does and build up from there. The prevailing view of the intersection of mathematics and literary theory is that models are metaphors: this is due to two books by Max Black (1962) and Mary Hesse (1963) whose main thesis was exactly this point. While this is satisfying, and readily accepted by theorists such as McCloskey, Khan does not content himself with this statement, and we’ll shortly see why.

Consider: a metaphor compares one thing to another on the basis of some kind of structural similarity, and this is a very useful account of, say, models in physics, which use mathematical formulas to adequate certain patterns and laws of nature. However, in economics it often doesn’t matter nearly as much who the particular agents are that are depicted by the formulas: the Prisoner’s dilemma can model the behaviour of cancer cells just as well as it can model human relations. If we change the object of a metaphor (e.g. cancer cells → people), it becomes a different metaphor; what we need is a kind of rhetorical figure where it doesn’t matter if we replace one or more of the components, provided we retain the overall framework. This is precisely what allegory does: in one of Aesop’s fables, say “The Tortoise and the Hare,” we can replace the tortoise by a slug and the hare by a grasshopper, but nobody would consider this to be an entirely new allegory—all that matters here is that one character is slow and the other is fast. Moreover, we can treat this allegory itself as a metaphor, as when we compare an everyday situation to Aesop’s fable (which was exactly Aesop’s point), which is why it’s easy to treat economic models simply as metaphors, even though their fundamental structure is allegorical.

The reason this is important is that Khan takes this idea to a whole new level of abstraction: in effect, he wants to connect the allegorical structure of economic models to the allegorical nature of economic texts—in particular, Paul Samuelson’s Foundations of Economic Analysis, which begins with the enigmatic epigraph “Mathematics is a language.” For Khan: “the Foundations is an allegory of economic theory and…the epigraph is a prosopopeia for this allegory” (1993: 763). Since I had to look it up too, prosopopeia is a rhetorical device in which a speaker or writer communicates to the audience by speaking as another person or object. Khan is quite clear that he finds Samuelson’s epigraph puzzling, but instead of just saying “It’s wrong” (which would be tedious) he finds a way to détourne it that is actually quite clever. He takes as a major theme throughout the paper the ways that the same economic subject-matter can be depicted using different mathematical formalisms. Now, it’s fairly trivial that one can do this, but Khan focuses on how in many ways certain formalisms are observationally equivalent to each other. For instance, he gives the following chart (1993: 772):

Correspondence between probability & measure-theoretic terms (Khan, 1993: 772)

That is to say, to present probabilistic ideas using the formalism of measure theory doesn’t at all affect the content of what’s being said: it’s essentially just using the full toolbox of real analysis instead of only set notation. What interests Khan here is how these new notations change the differential relations between ideas, creating brand new forms of Derridean différance in the realm of meaning—which, in turn, translate into new mathematical possibilities as our broadened horizons of meaning let us develop brand new interpretations of things we didn’t notice before. Khan’s favorite example here is nonstandard analysis, which he claims ought to make up a third column in the above chart, as probabilistic and measure theoretic concepts (and much else besides) can likewise be expressed in nonstandard terms. To briefly jot down what nonstandard analysis is: using mathematical logic, it is possible to rigorously define infinitesimals in a way that is actually usable, rather than simply gestured to by evoking marginal quantities. While theorems using such nonstandard tools often differ greatly from ‘standard’ theorems, it is provable that any nonstandard theorem can be proved standardly, and vice versa; yet, some theorems are far easier to prove nonstandardly, whence its appeal (Dauben, 1985). In economics, for example, an agent can be modelled as an infinitesimal quantity, which is handy for general equilibrium models where we care less about particulars than about aggregate properties, and part of Khan’s own mathematical work in general equilibrium theory does precisely this.
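For readers without the original chart to hand, the standard dictionary between the two vocabularies runs roughly as follows; this is the textbook correspondence, not necessarily Khan’s exact table:

\begin{tabular}{ll}
\textbf{Probability} & \textbf{Measure theory} \\
sample space $\Omega$ with total probability $1$ & measure space $(X, \mathcal{A}, \mu)$ with $\mu(X) = 1$ \\
event & measurable set \\
random variable & measurable function \\
expectation $\mathbb{E}[X]$ & integral $\int f \, d\mu$ \\
almost surely & almost everywhere \\
distribution of a random variable & pushforward (image) measure \\
\end{tabular}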

To underscore his overall point, Khan effectively puts Samuelson’s epigraph through a prism: “Differential Calculus is a Language”, “Convex Analysis is a Language”, “Nonsmooth Analysis is a Language”, and so on. Referring to Samuelson’s original epigraph, this lets Khan “interpret the word ‘language’ as a metonymy for the collectivity of languages” (1993: 768), which lets him translate it into: “Mathematics is a Tower of Babel.” Fittingly, in order to navigate this Tower of Babel, Khan (following Derrida) adopts a term originating from architecture: namely, the distinction between keystone and cornerstone. A keystone is a component of a structure that is meant to be the center of attention, and clinches its aesthetic ambiance; however, a keystone has no real architectural significance, but could be removed without affecting the rest of the structure. On the other hand, a cornerstone is an unassuming, unnoticed element that is actually crucial for the structural integrity of the whole; take it away and the rest goes crashing down.

Read the rest of this entry

The Shapley Value: An Extremely Short Introduction

at hierophants of escapism, by versatis

[For those who find the LaTeX formatting hard to read: pdf version + LaTeX version]

If we view economics as a method of decomposing (or unwriting) our stories about the world into the numerical and functional structures that let them create meaning, the Shapley value is perhaps the extreme limit of this approach. In his 1953 paper, Shapley noted that if game theory deals with agents’ evaluations of choices, one such choice should be the game itself—and so we must construct “the value of a game [that] depends only on its abstract properties” (1953: 32). By embodying a player’s position in a game as a scalar number, we reach the degree zero of meaning, beyond which any sort of representation is severed entirely. And yet, this value recurs over and over throughout game theory, under widely disparate tools, settings, and axiomatizations. This paper will outline how the Shapley value’s axioms coalesce into an intuitive interpretation that operates between fact and norm, how the simplicity of its formalism is an asset rather than a liability, and its wealth of applications.

Overview

Cooperative game theory differs from non-cooperative game theory not only in its emphasis on coalitions, but also by concentrating on division of payoffs rather than how these payoffs are attained (Aumann, 2005: 719). It thus does not require the degree of specification needed for non-cooperative games, such as complete preference orderings by all the players. This makes cooperative game theory helpful for situations in which the rules of the game are less well-defined, such as elections, international relations, and markets in which it is unclear who is buying from and selling to whom (Aumann, 2005: 719). Cooperative games can, of course, be translated into non-cooperative games by providing these intermediate details—a minor industry known as the Nash programme (Serrano, 2008).

Shapley introduced his solution concept in 1953, a few years after John F. Nash introduced Nash Equilibrium in his doctoral dissertation. One way of interpreting the Shapley value, then, is to view it as more in line with von Neumann and Morgenstern’s approach to game theory, specifically its reductionist programme. Shapley introduced his paper with the claim that if game theory deals with agents’ evaluations of choices, one such choice should be the game itself—and so we must construct “the value of a game [that] depends only on its abstract properties” (1953: 32). All the peculiarities of a game are thus reduced to a single vector: one value for each of the players. Another common solution concept for cooperative games, the Core, is set-valued, with the corollary that the core can be empty; the Shapley value, by contrast, always exists, and is unique.

To develop his solution concept, Shapley began from a set of desirable properties taken as axioms:

  • Efficiency: \sum_{i\in{N}} \Phi_i(v) = v(N).
  • Symmetry: If v(S ∪{i}) = v(S ∪{j}) for every coalition S not containing i & j, then ϕi(v) = ϕj(v).
  • Dummy Axiom: If v(S) = v(S ∪{i}) for every coalition S not containing i, then ϕi(v) = 0.
  • Additivity: If u and v are characteristic functions, then ϕ(u + v) = ϕ(u) + ϕ(v).

In normal English, any fair allocation ought to divide the whole of the resource without any waste (efficiency), two people who contribute the same to every coalition should have the same Shapley value (symmetry), and someone who contributes nothing should get nothing (dummy). The first three axioms are ‘within games’, chosen based on normative ideals; additivity, by contrast, is ‘between games’ (Winter, 2002: 2038). Additivity is not needed to define the Shapley value, but helps a great deal in mathematical proofs, notably of its uniqueness. Since the additivity axiom is used mainly for mathematical tractability rather than normative considerations, much work has been done in developing alternatives to the additivity axiom. The fact that the Shapley value can be replicated under vastly different axiomatizations helps illustrate why it comes up so often in applications.

The Shapley value formula takes the form:

\Phi_i(v) = \sum\limits_{\substack{S\subseteq{N}\\i\in{S}}} \frac{(|S|-1)!(n-|S|)!}{n!}[v(S)-v(S\backslash\{i\})]

where |S| is the number of elements in the coalition S, i.e. its cardinality, and n is the total number of players. The initial part of the equation will make far more sense once we go through several examples; for now we will focus on the second part, in square brackets. All cooperative games use a value function, v(S), in which v(Ø) ≡ 0 for mathematical reasons, and v(N) is the value of the ‘grand coalition’ containing every member of the game. The expression [v(S) – v(S\{i})] represents the difference between the value of the coalition S containing player i and that of the coalition which is identical to S except not containing player i (read: “S less i”). In superadditive games (where merging disjoint coalitions never lowers their value), this quantity is non-negative. It is this tiny expression that lets us interpret the Shapley value in a way that is second-nature to economists, which is precisely one of its most remarkable properties. Historically, the use of calculus, which culminated in the supply-demand diagrams of Alfred Marshall, is what fundamentally defined economics as a genre of writing, as opposed to the political economy of Adam Smith and David Ricardo. The literal meaning of a derivative as infinitesimal movement along a curve was read in terms of ‘margins’: say, the change in utility brought about by a single-unit increase in good x. Thus, although these axioms specify nothing about marginal quantities, we can nonetheless interpret the Shapley value as a member’s average marginal contribution across the coalitions in which he or she takes part. This marginalist interpretation was not built in by Shapley himself, but emerged over time as the Shapley value’s mathematical exposition was progressively simplified. It is this that allows us to illustrate by examples instead of derivations.
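A direct transcription of the formula into Python may help fix ideas: it just enumerates the coalitions containing each player and weights the marginal contributions accordingly. The function name and the little three-player example are mine, not Shapley’s.

from itertools import combinations
from math import factorial

def shapley_value(n, v):
    # Shapley value of an n-player game; v maps a frozenset of players to a number, with v(∅) = 0.
    players = list(range(1, n + 1))
    phi = {i: 0.0 for i in players}
    for i in players:
        others = [j for j in players if j != i]
        for k in range(len(others) + 1):
            for rest in combinations(others, k):
                S = frozenset(rest) | {i}
                weight = factorial(len(S) - 1) * factorial(n - len(S)) / factorial(n)
                phi[i] += weight * (v(S) - v(S - {i}))
    return phi

# A hypothetical three-player example: any coalition of two or more players produces one unit of value.
v = lambda S: 1.0 if len(S) >= 2 else 0.0
print(shapley_value(3, v))   # {1: 0.333..., 2: 0.333..., 3: 0.333...}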

Examples 1 & 2: Shapley-Shubik Power Index (Shapley & Shubik, 1954)

Imagine a weighted majority vote: P1 has 10 shares, P2 has 30 shares, P3 has 30 shares, P4 has 40 shares.

For a coalition to be winning, it must have a higher number of votes than the quota, here q = \frac{110}{2} = 55

v(S) =\begin{cases} 1, & \text{if }\sum_{i\in S}w_i>55 \\ 0, & \text{otherwise}\end{cases}  (where w_i is player i’s number of shares). Winning coalitions: {2,3}, {2,4}, {3,4} & all supersets of these.

Since the values only take on 0s and 1s, we can work with a shorter version of the Shapley value formula:

\Phi_i(v) = \sum\limits_{\substack{S\text{ winning}\\S\backslash\{i\}\text{ losing}}} \frac{(|S|-1)!(n-|S|)!}{n!}

Here, [v(S) – v(S\{i})] takes on a value of 1 iff a player is pivotal, making a losing coalition into a winning one. Otherwise it is either [0 – 0] = 0 for a losing coalition or [1 – 1] = 0 for a winning coalition.

For P1: v(S) – v(S\{1}) = 0 for all S, so ϕ1(v) = 0 (by dummy player axiom)

For P2: v(S) – v(S\{2}) ≠ 0 for S = {2,3}, {2,4}, {1,2,3}, {1,2,4}, so that:

\Phi_2(v)=2\frac{1!2!}{4!}+2\frac{2!1!}{4!}=\frac{8}{24}=\frac{1}{3}

By the symmetry axiom, ϕ2(v) = ϕ3(v) = ⅓. By the efficiency axiom, 0 + ⅓ + ⅓ + ϕ4(v) = v(N) = 1 → ϕ4(v) = ⅓
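Using the shapley_value sketch above, Example 1 can be checked in a couple of lines:

weights = {1: 10, 2: 30, 3: 30, 4: 40}
v = lambda S: 1.0 if sum(weights[i] for i in S) > 55 else 0.0
print(shapley_value(4, v))   # {1: 0.0, 2: 0.333..., 3: 0.333..., 4: 0.333...}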

It is worth noting that, within the structure of our voting game, P4’s extra ten votes have no effect on his power to influence the outcome, as shown by the fact that ϕ2 = ϕ3 = ϕ4. A paper by Shapley (1981) notes an actual situation for county governments in New York in which each municipality’s number of votes was based on its population; in one particular county, three of the six municipalities had Shapley values of zero, similar to our dummy player P1 above. Once this was realized, the quota was raised so that the three dummy municipalities could be pivotal for certain coalitions, giving them nonzero Shapley values (Ferguson, 2014: 18-9).

For a more realistic example, consider the United Nations Security Council, composed of 15 nations, where 9 of the 15 votes are needed, but the ‘big five’ nations have veto power. This is equivalent to a weighted voting game in which each of the big five gets 7 votes, and each of the other 10 nations gets 1 vote. This is because if all nations except one of the big five vote in favor of a resolution, the vote count is (35 – 7) + 10 = 38.

Thus we have weights of w1 = w2 = w3 = w4 = w5 = 7, and w6 = ⋯ = w15 = 1.

Our value function is v(S) =\begin{cases} 1, & \text{if }\sum_{i\in S}w_i\geq 39 \\ 0, & \text{otherwise}\end{cases}  Winning coalitions: {1,2,3,4,5} together with any 4 or more of the 10 others.

For the 4 out of 10 ‘small’ nations needed for the vote to pass, the number of possible combinations is \frac{10!}{4!\,6!}.

Hence, in order to calculate the Shapley value for any member (say, P1) in the big five, we take into account that v(S) – v(S\{1}) ≠ 0 for all 210 coalitions, plus any coalitions with redundant members; this is just another way of expressing their veto power. In our previous example, we were able to count by hand the members in each pivotal coalition S and multiply that number by the Shapley value function for coalitions of that size. Here the number of pivotal coalitions for each size is so large that we must count them using combinatorics. Our next equation looks arcane, but it is only the number of pivotal coalitions multiplied by the Shapley function. First we have the minimal case where 4 of the 10 small members vote in favor of the resolution, then we have the case for 5 of the 10, and so on until we reach the case where all members unanimously vote together:

\Phi_1(v)=(\frac{10!}{4!6!})(\frac{8!6!}{15!})+(\frac{10!}{5!5!})(\frac{9!5!}{15!})+(\frac{10!}{6!4!})(\frac{10!4!}{15!})+(\frac{10!}{7!3!})(\frac{11!3!}{15!})+(\frac{10!}{8!2!})(\frac{12!2!}{15!})+(\frac{10!}{9!1!})(\frac{13!1!}{15!})+(\frac{14!}{15!})

=210\frac{1}{45045}+252\frac{1}{30030}+210\frac{1}{15015}+120\frac{1}{5460}+45\frac{1}{1365}+10\frac{1}{210}+1\frac{1}{15} = 0.19627

By the symmetry axiom, we know that all members of the big five have the same Shapley value of 0.19627. Also, as before, the efficiency axiom implies that the Shapley values for all the players sum to v(N) = 1. Since symmetry also implies that the Shapley values are the same for the 10 members without veto power, we need not engage in any tedious calculations for the remaining members, but can simply use the following formula:

\Phi_6=\cdots=\Phi_{15}=\frac{1-5(0.19627)}{10}=\frac{1-0.98135}{10}=0.001865
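The same shapley_value function reproduces these numbers, although it now has to sweep through all 2^15 coalitions (players 1-5 are the veto powers):

weights = {i: 7 for i in range(1, 6)}
weights.update({i: 1 for i in range(6, 16)})
v = lambda S: 1.0 if sum(weights[i] for i in S) >= 39 else 0.0
phi = shapley_value(15, v)
print(round(phi[1], 5), round(phi[6], 6))   # 0.19627 0.001865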

Part of the purpose of this example is to help the reader appreciate how quickly the complexity of such problems increases in the number of agents n. Weighted voting games are actually relatively simple to calculate because v only takes the values 0 and 1, which is why we just sum together the Shapley coefficients for each pivotal coalition’s size; in our next example we will relax this assumption. In so doing, the part of the Shapley formula v(S) – v(S\{i}) gains added importance as a ‘payoff’, whereas the combinatorial coefficient used in our weighted voting game examples acts as a probability, so that the combined formula is reminiscent of a von Neumann-Morgenstern expected utility. This coefficient can be construed as a probability in the following way (Roth, 1983: 6-7):

suppose the players enter a room in some order and that all n! orderings of the players in N are equally likely. Then ϕi(v) is the expected marginal contribution made by player i as she enters the room. To see this, consider any coalition S containing i and observe that the probability that player i enters the room to find precisely the players in S – i already there is (s – 1)!(n – s)!/n!. (Out of n! permutations of N there are (s – 1)! different orders in which the first s – 1 players can precede i, and (n – s)! different orders in which the remaining n – s players can follow, for a total of (s – 1)!(n – s)! permutations in which precisely the players S – i precede i.)
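Roth’s ‘entering the room’ story also suggests a quick Monte Carlo check: sample random orderings, credit each player with her marginal contribution on arrival, and average. A rough sketch (the sample size is arbitrary, and the estimate is only approximate):

import random

def shapley_monte_carlo(players, v, trials=20000):
    est = {i: 0.0 for i in players}
    for _ in range(trials):
        order = list(players)
        random.shuffle(order)
        arrived = set()
        for i in order:
            before = v(frozenset(arrived))
            arrived.add(i)
            est[i] += v(frozenset(arrived)) - before   # marginal contribution on arrival
    return {i: total / trials for i, total in est.items()}

# e.g. the weighted majority game of Example 1:
weights = {1: 10, 2: 30, 3: 30, 4: 40}
v = lambda S: 1.0 if sum(weights[i] for i in S) > 55 else 0.0
print(shapley_monte_carlo([1, 2, 3, 4], v))   # roughly {1: 0.0, 2: 0.33, 3: 0.33, 4: 0.33}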

One drawback to this approach is its implicit assumption that each of the coalitions is equally likely (Serrano, 2013: 607). For cases such as the UN Security Council this is doubtful, and overlooks many very interesting questions. It also assumes that each player wants to join the grand coalition, whereas unanimous votes seldom occur in practice. The main advantage of the Shapley value in the above examples is that another common solution concept for cooperative games, the Core, tends to be empty in weighted voting games, giving it no explanatory power. The Shapley value can be extended to measure the power of shareholders in a company, and can even be used to predict expenditure among European Union member states (Soukenik, 2001). We will go through another relatively simple example, and then move on to several more challenging applications.

Read the rest of this entry

The Project of Econo-fiction


I have an article up at the online magazine Non on what it entails to use Laruelle’s non-philosophy to talk about economics, intended as a retrospective of my essay “There is no economic world.” It contextualizes econo-fiction in terms of Laruelle’s lexicon, illustrates a philosophical quandary with viewing iterated prisoner’s dilemma experiments through the lens of ‘falsification’, and notes a few ways I’ve changed my mind since then and where I plan to go from here. While the example is deliberately simple, aimed toward readers with zero knowledge of economic theory, it shows very succinctly how the notion of ‘experiment’ in economics operates as a form of conceptual rhetoric. I’ve also included a lot of fascinating factoids I’ve discovered since then, which I plan to expand upon in upcoming posts here.

No other philosophical approach I’ve come across—not even Badiou’s—lends itself to economics as much as non-philosophy does. I’m very impressed with the way that NP can talk about the mathematical formalism in economics without overcoding it, and I’d very much like to experiment with applying NP to related disciplines. Laruelle himself hints toward new applications of his method in finance: “Philosophy is a speculation that sells short and long at the same time, that floats at once upward and downward” (2012: 331). That is, philosophy is a form of hedging. Conversely, the section containing this excerpt is entitled “Non-Philosophy Is Not a Short-Selling Speculation,” where short-selling is investing so that you make money if an asset’s price goes down. Of the continental philosophers of finance I’m familiar with, Ben Lozano’s Deleuzian approach tends to focus on the conceptual aspects of finance to the neglect of its formalism, and Élie Ayache’s brilliantly original reading of quantitative finance is in many ways quite eccentric—such as his insistence on the crucial importance of the market maker (the guys yelling at each other in retro movies about Wall Street) and that algorithms are fundamentally inferior to human traders. A Laruellean interpretation of mainstream finance would serve as a welcome foil to both.

Just the other day I discovered a form of mathematical notation that appears to open up a Laruellean interpretation of accounting, and I’m always on the lookout for quirky reinterpretations of business-related ideas. I find philosophy such a handy tool for getting myself intrinsically interested in dull (but very practical) topics and disciplines, and I’ve read a whole heap of papers over the past year, so I’m really looking forward to blogging again.

References