Chinese Logic: An Introduction

[Image: painting by Zao Wou-Ki]

[LaTeX version here; Chinese version here]

Introduction

As late as 1898, logic was seen by the Chinese as “an entirely alien area of intellectual inquiry”: the sole Chinese-language textbook on logic was labeled by Liang Qichao (梁启超)—at that time a foremost authority on Western knowledge—as “impossible to classify” (无可归类), alongside museum guides and cookbooks (Kurtz, 2011: 4-5). This same textbook had previously been categorized by Huang Qingcheng (黄庆澄) as a book on ‘dialects’ (方言). The Chinese word for logic (luójí 逻辑) itself is, according to the Cihai (《辞海》/ Sea of Words) dictionary, merely a transliteration from the English—the entire Chinese lexicon had no word resembling it (Lu, 2009: 98). Hence it never occurred even to specialists that this esoteric discipline might have close affinities with the roots of Chinese philosophy, from the I Ching (易经) to the ancient Chinese dialecticians (辩者), as well as the famous paradoxes of Buddhism.

With the advent of computers, “there is now more research effort in logics for computer science than there ever was in traditional logics” (Marek & Nerode, 1994: 281). This has led to a proliferation of logical methods, including modal logic, temporal logic, epistemic logic, and fuzzy logic. Further, such new logical systems permit multiple truth values, semantic patterns based on games, and even logical contradictions. In light of these possibilities, research in ‘Chinese logic’ aims to reinterpret the history of Chinese thought by means of such tools.

This essay consists of three parts: the mathematics of the I Ching, the debates within the School of Names, and the paradoxes of Buddhism. The first section will, through examining the binary arithmetic of the I Ching, provide an introduction to basic logical notation. The second section will explore Gongsun Long’s famous bái mǎ fēi mǎ (白马非马) paradox, as well as the logical system of the Mohist school. The third section will explain the four-cornered logic (catuṣkoṭi) of the Buddhist monk Nāgārjuna by way of paraconsistent logic.

[Image: I Ching lattice]

1. The I Ching (易经) & Binary Arithmetic

The I Ching is one of the oldest books in history. Throughout the world, there is no other text quite like it. Its original function was for divination, giving advice for future actions; yet, after centuries of commentary, it has taken on a fundamental role in Chinese culture. In part, this is because its commentaries became (apocryphally) associated with Confucius, thereby establishing it as a classic.

Its survival of the ‘burning of books and burying of scholars’ (焚书坑儒) in 213-210BC has magnified the I Ching’s importance. Historically, the Zhou dynasty was marked by hundreds of years of war and dissension. Finally, Qin Shi Huang united the nation in 221BC, to become China’s first emperor. According to the standard account, in order to unify thought and political opinion, Emperor Qin Shi Huang ordered that all books not about medicine, farming, or divination be burned. And so, the vast majority of ancient Chinese knowledge has been lost to history. Yet, since the I Ching was about divination, it avoided sharing the same fate. In a sense, then, the I Ching has come to represent the collective wisdom of ancient China—it embodies their entire philosophical cosmology.

Confucius’s interest in the I Ching is well known. In verse 7.16 of the Analects, he says: “If some years were added to my life, I would give fifty to the study of the Yi [I Ching], and then I might come to be without great faults.” Curiously, this appears at odds with the rest of his philosophy. After all, the Analects elsewhere says: “The subjects on which the Master did not talk, were—extraordinary things, feats of strength, disorder, and spiritual beings.” (7.20). That is, Confucius had no interest in oracles. Hence we can conclude that for Confucius, the main content of the I Ching was not divination, but philosophy.

The core tenet of the I Ching is deeply metaphysical, namely: the complementarity of Yin (阴) and Yang (阳). Yin represents negativity, femininity, winter, coldness and wetness. Yang represents positivity, masculinity, dryness, and warmth. Accordingly, the yáo (爻), the fundamental lines from which the I Ching’s hexagrams are built, come in two forms: ‘⚋’ for Yin, ‘⚊’ for Yang.

The trigrams, made up of three lines, have 8 combinations (2³ = 8), and so are called the bagua (八卦), where (八) means 8. The bagua and its associated meanings are: ☰ (乾/天: the Creative/Sky), ☱ (兑/泽: the Joyous/Marsh), ☲ (离/火: the Clinging/Fire), ☳ (震/雷: the Arousing/Thunder), ☴ (巽/风: the Gentle/Wind), ☵ (坎/水: the Abysmal/Water), ☶ (艮/山: Keeping Still/Mountain), ☷ (坤/地: the Receptive/Earth). The I Ching’s commentaries revolve around 64 hexagrams of six lines (2⁶ = 64 combinations). There are multiple ways of ordering the hexagrams: the most well-known is the King Wen (文王) sequence, but the most important for our purposes is the Fu Xi (伏羲) sequence.

[Image: diagram of the I Ching’s hexagrams owned by Leibniz]

In the 17th century, the mathematician Gottfried Wilhelm Leibniz developed a system of arithmetic using only the numbers 0 and 1, called binary arithmetic. Binary arithmetic is in base 2: its key point is that any non-negative integer can be uniquely represented as a sum of distinct powers of two. For example, 7 = 1 + 2 + 4 = 1×(2⁰) + 1×(2¹) + 1×(2²), and since each of the coefficients is 1, the binary representation of 7 is (111). Similarly, 5 = 1 + 4 = 1×(2⁰) + 0×(2¹) + 1×(2²), where the middle coefficient is 0, so that 5 in binary is (101). For larger numbers, we simply include larger powers of two: 2³ = 8, 2⁴ = 16, etc.

Leibniz corresponded with various Christian missionaries in China, and had received a poster containing the Fu Xi sequence. To his astonishment, by letting ⚋ = 0 and ⚊ = 1, the Fu Xi sequence of 64 hexagrams exactly corresponds with the binary numbers from 0 to 63! Using the trigrams as a simplified example, from top to bottom we read: ☱ = (011) = 0×(2⁰) + 1×(2¹) + 1×(2²) = 2 + 4 = 6, ☵ = (010) = 0×(2⁰) + 1×(2¹) + 0×(2²) = 2, and so on. Thus, according to the Fu Xi and binary sequence, the bagua are ordered as: ☷, ☶, ☵, ☴, ☳, ☲, ☱, ☰.
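This correspondence is easy to check by machine. Below is a minimal Python sketch (the encoding and names are mine, purely for illustration): each trigram is written top to bottom with ⚋ = 0 and ⚊ = 1, the top line is read as the 2⁰ digit, and sorting by the resulting number reproduces the Fu Xi order just given.

```python
# Minimal sketch: trigrams as 3-bit binary numbers, top line = least significant digit.
# Each trigram is written top-to-bottom as a string of 0s (yin, broken) and 1s (yang, solid).
TRIGRAMS = {
    "☰": "111", "☱": "011", "☲": "101", "☳": "001",
    "☴": "110", "☵": "010", "☶": "100", "☷": "000",
}

def trigram_to_int(digits: str) -> int:
    # leftmost (top) digit is the 2^0 coefficient, as in the reading above
    return sum(int(d) * 2**i for i, d in enumerate(digits))

for symbol, digits in TRIGRAMS.items():
    print(symbol, digits, trigram_to_int(digits))   # e.g. ☱ 011 6, ☵ 010 2

# Sorting by numeric value reproduces the Fu Xi order: ☷ ☶ ☵ ☴ ☳ ☲ ☱ ☰
print("".join(sorted(TRIGRAMS, key=lambda s: trigram_to_int(TRIGRAMS[s]))))
```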

Further, since we can treat the trigrams as numbers, we can also perform arithmetic operations on them, such as addition and multiplication. Doing this involves modular arithmetic, which for pedagogical purposes is occasionally called ‘clock arithmetic’. Its main feature is that it is cyclical: upon reaching the modulus (‘mod n’, in our case: mod 2), we start over again at zero. So in mod 2 arithmetic we only use the numbers 0 and 1, and 1 + 1 = 0. In the same way, a 12-hour clock only involves the numbers 1 to 12, and so is ‘mod 12’; hence, 15:00 is the same as 3:00, and so on. Therefore, the line-by-line mod 2 addition of the I Ching’s trigrams can be represented by the following table:

+ (mod 2) | 0 (⚋) | 1 (⚊)
0 (⚋)     | 0 (⚋) | 1 (⚊)
1 (⚊)     | 1 (⚊) | 0 (⚋)

Note that this is equivalent to the ‘⊻’ (exclusive or) operation in Boolean logic. (Boolean logic simply uses 0 for ‘false’ and 1 for ‘true’.) This logical point of view comes most in handy for defining multiplication, since binary multiplication is equivalent to the logical ‘∧’ (and) operation (Schöter, 1998: 6):

× (mod 2) | 0 (⚋) | 1 (⚊)
0 (⚋)     | 0 (⚋) | 0 (⚋)
1 (⚊)     | 0 (⚋) | 1 (⚊)

The advantage of logic over modular arithmetic is that we can define complements (¬). For example, Fire (☲/101) and Water (☵/010) are complementary, and so are Sky (☰/111) and Earth (☷/000). The use of logic is actually quite helpful in analyzing the trigrams’ associated meanings. Using the slightly different terminology of lattice theory (Schöter, 1998: 9):

  1. The Creative [乾/☰] is the union (⊻) of complements.
  2. The Joyous [兑/☱] is the union (⊻) of the Arousing [震/☳] and Abyss [坎/☵].
  3. Fire [火/☲] is the union (⊻) of the Arousing [震/☳] and Stillness [艮/☶].
  4. The Gentle [巽/☴] is the union (⊻) of the Abyss [坎/☵] and Stillness [艮/☶].
  5. Arousing [震/☳] is the intersection (∧) of the Joyous [兑/☱] and Fire [火/☲].
  6. Abyss [坎/☵] is the intersection (∧) of the Joyous [兑/☱] and Gentle [巽/☴].
  7. Stillness [艮/☶] is the intersection (∧) of Fire [火/☲] and the Gentle [巽/☴].
  8. The Receptive [坤/☷] is the intersection (∧) of complements.
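All eight identities can be verified mechanically with the same 3-bit encoding used above, letting XOR stand in for ⊻, AND for ∧, and the bitwise complement (within three bits) for ¬. A minimal sketch, with my own variable names:

```python
# Check the eight lattice identities; trigram values are their Fu Xi binary numbers.
T = {"☰": 7, "☱": 6, "☲": 5, "☳": 4, "☴": 3, "☵": 2, "☶": 1, "☷": 0}

def comp(x):                # complement: flip all three lines
    return x ^ 0b111

assert all(x ^ comp(x) == T["☰"] for x in T.values())   # 1. Creative = union of complements
assert T["☳"] ^ T["☵"] == T["☱"]                        # 2. Joyous = Arousing ⊻ Abyss
assert T["☳"] ^ T["☶"] == T["☲"]                        # 3. Fire = Arousing ⊻ Stillness
assert T["☵"] ^ T["☶"] == T["☴"]                        # 4. Gentle = Abyss ⊻ Stillness
assert T["☱"] & T["☲"] == T["☳"]                        # 5. Arousing = Joyous ∧ Fire
assert T["☱"] & T["☴"] == T["☵"]                        # 6. Abyss = Joyous ∧ Gentle
assert T["☲"] & T["☴"] == T["☶"]                        # 7. Stillness = Fire ∧ Gentle
assert all(x & comp(x) == T["☷"] for x in T.values())   # 8. Receptive = intersection of complements
print("all eight identities hold")
```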

In a beautiful essay, Goldenberg (1975) uses a branch of mathematics called group theory to unify the above points. A group is a set equipped with a single operation satisfying a handful of axioms, and the hexagrams come close to satisfying them for both of our operations. 1) Closure: any operation between two hexagrams produces another hexagram. 2) Associativity: the grouping of the hexagrams does not matter, e.g. (☵ + ☴) + ☳ = ☵ + (☴ + ☳) = ☲. 3) Identity Element: there exists a hexagram (the identity element) such that combining it with any other hexagram produces that same hexagram, e.g. ☷ + ☱ = ☱ for addition, and ☰ × ☱ = ☱ for multiplication. 4) Inverse: for every hexagram, there exists another hexagram such that combining the two produces the identity element; for the addition operation, every hexagram is its own inverse, e.g. ☶ + ☶ = ☷. Note, however, that there does not exist a multiplicative inverse. Further, addition and multiplication both satisfy the property that a ⋅ b = b ⋅ a, so that the hexagrams are commutative. The hexagrams under addition therefore do form a group; taking the two operations together, the lack of multiplicative inverses rules out a field, but since the remaining properties hold the hexagrams form a ‘commutative ring’.
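These properties can be checked exhaustively for the trigrams (the hexagram case is identical, just over six lines rather than three). The sketch below is mine, not Goldenberg’s; it simply spells out the checks described above:

```python
from itertools import product

# Trigrams as the integers 0-7 (one bit per line); hexagrams would be 0-63.
G = range(8)
add = lambda a, b: a ^ b    # line-by-line mod-2 addition  = XOR
mul = lambda a, b: a & b    # line-by-line mod-2 multiplication = AND

assert all(add(a, b) in G and mul(a, b) in G for a, b in product(G, G))                    # closure
assert all(add(add(a, b), c) == add(a, add(b, c)) for a, b, c in product(G, G, G))         # associativity
assert all(add(0, a) == a and mul(7, a) == a for a in G)                                   # identities: ☷ (0) and ☰ (7)
assert all(add(a, a) == 0 for a in G)                                                      # every element is its own additive inverse
assert all(add(a, b) == add(b, a) and mul(a, b) == mul(b, a) for a, b in product(G, G))    # commutativity
assert all(mul(a, add(b, c)) == add(mul(a, b), mul(a, c)) for a, b, c in product(G, G, G)) # distributivity
assert not any(mul(2, b) == 7 for b in G)   # but no multiplicative inverse: nothing times ☵ (2) gives ☰ (7)
print("additive group and commutative ring checks pass")
```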

Read the rest of this entry

Élie Ayache’s The Medium of Contingency – A Review

[Image: art by Tatiana Plakhova]

[All art by Tatiana Plakhova. Review in pdf here]

Élie Ayache, The Medium of Contingency: An Inverse View of the Market,
Palgrave Macmillan, 2015, 414pp., $50.00 (hbk), ISBN 9781137286543.

Ayache’s project is to outline the ontology of quantitative finance as a discipline. That is, he wants to find what distinguishes it as a genre, distinct from economics or even stocks and bonds—what most of us associate with ‘finance’. Quantitative finance, dealing with derivatives, is a whole new level of abstraction. So Ayache has to show that economic and social concerns are exogenous (external) to derivative prices: the underlying asset can simply be treated as a stochastic process. His issue with probability is that it is epistemological—a shorthand for when we don’t know the true mechanism. Taleb’s notion of black swans as radically unforeseeable (unknowable) events is simply an extension of this. Conversely, market-makers—those groups of people yelling at each other in old movies about Wall Street—don’t need probability to do their jobs. Ayache’s aim is thus to introduce into theory the practice of derivatives trading—from within, rather than outside, the market. And it’s reasonable to think that delineating the ontology of this immensely rich field will yield insights applicable elsewhere in philosophy.

This is not a didactic book. People coming from philosophy will not learn about finance, nor about how derivatives work. Ayache reinterprets these, assuming familiarity with the standard view. Even Pierre Menard—Ayache’s claim to fame—is only given a few perfunctory mentions here. People coming from finance will not learn anything about philosophy, since Ayache assumes a graduate-level knowledge of it. Further, Ayache’s comments on Taleb’s Antifragile are limited to one page. The only conceivable reason to even skim this book is that you’d like to see just how abstract the philosophy of finance can get.

I got interested in Ayache because I write philosophy of economics. I wanted to learn what quantitative finance is all about, so several years ago I read through all his articles in Wilmott Magazine, gradually learning how to make sense of sentences like “Only in a diffusion framework is the one-touch option…replicable by a continuum of vanilla butterflies” (Sept 2006: 19). I’ve made it through all of Ayache’s published essays. Now I’ve read this entire book, and I deserve a goddamn medal. I read it so that you don’t have to.

Much of Ayache’s reception so far has been quite silly. I recently came across an article (Ferraro, 2016) that cited Ayache’s concept of ‘contingency’ as an inspiration behind a game based on sumo wrestling. (You can’t make this stuff up.) Frank Ruda (2013), an otherwise respectable philosopher, wrote a nonsensical article comparing him to Stalin![1] Philosophy grad students occasionally mention his work to give their papers a more ‘empirical’ feel (which is comparable in silliness to the sumo wrestling), especially Ayache’s clever reading of Borges’ short story on Pierre Menard—from which these graduate students draw sweeping conclusions about capitalism and high-frequency trading.

Ayache expects the reader to have already read The Blank Swan, which itself is not understandable without reading Meillassoux’s After Finitude. Thus, for most readers, decreasing returns will have long set in. My goal here is to summarize the main arguments and/or good ideas of each chapter, divested of the pages and pages of empty verbosity accompanying them. I try to avoid technical jargon from finance and philosophy except as needed to explain the arguments, though I do provide requisite background knowledge that Ayache has omitted. So first, let’s cover the most important concepts that the reader may find unfamiliar.

Read the rest of this entry

Combinatorial Game Theory: Surreal Numbers and the Void

[Image: Chess, by Andrew Phillips]

[A pdf version is available here; LaTeX here]

Any number can be written as a tuple of games played by the void with itself.

Denote the void by the empty set ∅. We write: {∅|∅} = 0, with | as partition. ‘Tuple’ signifies ordering matters, so that {0|∅} = 1 and {∅|0} = −1. Then recursively construct the integers: {n|∅} = n + 1. Plug {∅|∅} into {0|∅} to get {{∅|∅}|∅} = 1, then this into {1|∅} to get {{{∅|∅}|∅}|∅} = 2…

So if games exist, numbers exist. Or rather: if games exist, numbers don’t have to.

Mixed orderings generate fractions, e.g. {0|1} = {{∅|∅}|{{∅|∅}|∅}} = ½. Games with infinitely many options permit irrationals, and thus all reals, as well as infinite numbers (written ω) and infinitesimals (ε = 1/ω). Further, it is valid to define {ω|∅} = ω + 1, etc. Once arithmetic operations are defined, more complex games can define and use such quantities as ∛ω and ω^ω.

Therefore: by defining numbers as games, we can construct the surreal numbers.(1)
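To give a sense of how little machinery the construction needs, here is a minimal Python sketch of surreal numbers as pairs of sets of earlier surreals, using the standard recursive comparison rule (x ≤ y iff no left option of x is ≥ y and no right option of y is ≤ x). The class and function names are mine.

```python
# Minimal sketch of surreal numbers as {left options | right options}.
class Surreal:
    def __init__(self, left=(), right=()):
        self.left = tuple(left)      # options the Left player may move to
        self.right = tuple(right)    # options the Right player may move to

def leq(x, y):
    """x <= y iff no left option of x is >= y, and no right option of y is <= x."""
    return (not any(leq(y, xl) for xl in x.left) and
            not any(leq(yr, x) for yr in y.right))

def eq(x, y):
    return leq(x, y) and leq(y, x)

zero      = Surreal()                          # {∅|∅} = 0
one       = Surreal(left=[zero])               # {0|∅} = 1
minus_one = Surreal(right=[zero])              # {∅|0} = −1
two       = Surreal(left=[one])                # {1|∅} = 2
half      = Surreal(left=[zero], right=[one])  # {0|1} = ½

print(leq(zero, half) and leq(half, one))      # True: 0 ≤ ½ ≤ 1
print(leq(one, zero))                          # False
print(eq(Surreal(left=[minus_one], right=[one]), zero))   # True: {−1|1} equals 0
```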

∗                                   ∗                                   ∗

As well as defining numbers as games, we can treat games like numbers.

{∅|∅} can be played as the zero game. Simply: player 1 cannot move, and loses. Any game where player 2 has a winning strategy is equivalent to the zero game. Take two games G and H, with G a 2nd player win. The player with a winning strategy in H can treat the games separately, only moving in G to respond to the opponent. Player 2 wins G, but does not affect H’s outcome. Conversely, given G′ and H′, with G′ a 1st player win, player 1 is last to move in G′, giving player 2 an extra move in H′, potentially altering its outcome. In terms of outcomes, we say G + H = 0 + H → G = 0.

For a game G, −G is G with the roles reversed, as by turning the board around in chess.

G = H if G + (−H) = 0, i.e. is a player 2 win, and so is equivalent to the zero game.

Two more properties are clear: G + (H + K) = (G + H) + K (associativity) and G + H = H + G (commutativity).

We can see that G + (−G) = 0 by a clever example called the Tweedledum & Tweedledee argument. In the game Blue-Red Hackenbush, players are given a drawing composed of separate edges. Each turn, player 1 removes a blue edge, plus any other edges no longer connected to the ground, and player 2 does likewise for red edges. Since in G + (−G) the number of pieces is the same, player 2 can just copy the moves of player 1 until all pieces are taken. Player 1 will not be able to move, and so will lose. Hence player 2 has a winning strategy.

[Image: Tweedledum and Tweedledee]

Thus games are a proper mathematical object—namely, an Abelian group.(10)

∗                                   ∗                                   ∗

A new notation links all this to surreal numbers. For any sets of games G_L and G_R, there exists a game G = {G_L|G_R}. Intuitively, the Left player moves to any game in G_L, and likewise for Right in G_R. The zero game {∅|∅} = 0 is valid, and we may construct the surreals as before. Now we can write the surreals more easily with sets: {1, 2, … , n|} = n + 1.
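With this notation in hand, the earlier claims (the zero game is a second-player win, and G + (−G) = 0) can be tested directly under normal-play rules. A minimal sketch, with my own names and representation:

```python
# Games as {left options | right options}; a player who cannot move loses (normal play).
class Game:
    def __init__(self, left=(), right=()):
        self.left = tuple(left)
        self.right = tuple(right)

def neg(g):
    # -G swaps the roles of Left and Right throughout
    return Game([neg(r) for r in g.right], [neg(l) for l in g.left])

def add(g, h):
    # a move in G + H is a move in one of the components
    return Game([add(gl, h) for gl in g.left] + [add(g, hl) for hl in h.left],
                [add(gr, h) for gr in g.right] + [add(g, hr) for hr in h.right])

def mover_wins(g, left_to_move):
    # the player to move wins iff some move leaves the opponent (now to move) losing
    opts = g.left if left_to_move else g.right
    return any(not mover_wins(o, not left_to_move) for o in opts)

def is_zero(g):
    # G = 0 iff the second player wins, i.e. neither player wins moving first
    return not mover_wins(g, True) and not mover_wins(g, False)

zero = Game()
one = Game([zero], [])

print(is_zero(zero))                  # True: player 1 cannot move, and loses
print(is_zero(one))                   # False: Left wins this game
print(is_zero(add(one, neg(one))))    # True: G + (−G) is a second-player win
```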

Read the rest of this entry

Avant-Garde Philosophy of Economics

[Image: by Tatiana Plakhova (2011)]

To most people, the title of this post is a triple oxymoron. Those left thoroughly traumatized by Econ 101 in college share their skepticism with those who have dipped their toe into hybrid fields like neuroeconomics and found them to be a synthesis of the dullest parts of both disciplines. For the vast, vast majority of cases, this sentiment is quite right: ‘philosophy of economics’ tends to be divided between heterodox schools of economics whose writings have entirely decoupled from economic formalism, and—on the other side of the spectrum—baroque econophysicists with lots to say about intriguing things like ‘quantum economics’ and negative probabilities via p-adic numbers, but typically within a dry positivist framework. As for the middle-ground material, a 20-page paper typically yields two or three salvageable sentences, if even that. Yet, as anyone who follows my Twitter knows, I look very hard for papers that aren’t terrible—and eventually I’ve found some.

Often the ‘giants’ of economic theory (e.g. Nobel laureates like Harsanyi or Lucas) have compelling things to say about methodology, but to include them on this list seems like cheating, so we’ll instead keep to scholars who most economists have never heard of. We also—naturally—want authors who write mainly in natural language, and whose work is therefore accessible to readers who are not specialists in economic theory. Lastly, let’s strike from the list those writers who do not engage directly with economic formalism itself, but only ‘the economy’. This last qualification is the most draconian of the lot, and manages to purge the philosophers of economics (e.g. Mäki, McCloskey) who tend to be the most well-known.

The remaining authors make up the vanguard of philosophy of economics—those who alchemically permute the elements of economic theory into transdisciplinary concoctions seemingly more at home in a story by Lovecraft or Borges than in academia, and who help us ascend to levels of abstraction we never could have imagined. Their descriptions are ordered for ease of exposition, building from and often contradicting one another. For those who would like to read more, some recommended readings are provided under each entry. I hope that readers will see that people have for a long time been thinking very hard about problems in economics, and that thinking abstractly does not mean avoiding practical issues.

[Image: Category Theory, by j5rson]

M. Ali Khan

Khan is a fascinating character, and stands out even among the other members of this list: by training he is a mathematical economist, familiar with some of the highest levels of abstraction yet achieved in economic theory, but at the same time an avid fan of continental philosophy, liberally citing sources such as De Man (an unusual choice, even within the continental crowd!), Derrida, and similar figures on the more literary side of theory, such as Ricoeur and Jameson. It may be helpful to contrast Khan with Deirdre McCloskey, who has written a couple of books on writing in economics: McCloskey uses undergraduate-level literary theory to look at economics, which (let’s face it) is a fairly impoverished framework, forcing her to cut a lot of corners and sand away various rough edges that are very much worth exploring. An example is how she considers the Duhem-Quine thesis to be in her own camp, which she proudly labels ‘postmodern’—yet, just about any philosopher you talk to will consider this completely absurd: Quine was as modernist as they come. (Moreover, in the 30 years between the first and second editions, she appears never to have bothered to read the source texts.)

Khan’s greatest paper is titled “The Irony in/of Economic Theory,” where he claims that this ‘irony’ operates as a (perhaps unavoidable) literary trope within economic theory as a genre of writing. Khan likewise draws from rhetorical figures such as synecdoche and allegory, and it will be helpful to start at a more basic level than he does and build up from there. The prevailing view of the intersection of mathematics and literary theory is that models are metaphors: this is due to two books by Max Black (1962) and Mary Hesse (1963) whose main thesis was exactly this point. While this is satisfying, and readily accepted by theorists such as McCloskey, Khan does not content himself with this statement, and we’ll shortly see why.

Consider: a metaphor compares one thing to another on the basis of some kind of structural similarity, and this is a very useful account of, say, models in physics, which use mathematical formulas to adequate certain patterns and laws of nature. However, in economics it often doesn’t matter nearly as much who the particular agents are that are depicted by the formulas: the Prisoner’s dilemma can model the behaviour of cancer cells just as well as it can model human relations. If we change the object of a metaphor (e.g. cancer cells → people), it becomes a different metaphor; what we need is a kind of rhetorical figure where it doesn’t matter if we replace one or more of the components, provided we retain the overall framework. This is precisely what allegory does: in one of Aesop’s fables, say “The Tortoise and the Hare,” we can replace the tortoise by a slug and the hare by a grasshopper, but nobody would consider this to be an entirely new allegory—all that matters here is that one character is slow and the other is fast. Moreover, we can treat this allegory itself as a metaphor, as when we compare an everyday situation to Aesop’s fable (which was exactly Aesop’s point), which is why it’s easy to treat economic models simply as metaphors, even though their fundamental structure is allegorical.

This is important because Khan takes this idea to a whole new level of abstraction: in effect, he wants to connect the allegorical structure of economic models to the allegorical nature of economic texts—in particular, Paul Samuelson’s Foundations of Economic Analysis, which begins with the enigmatic epigraph “Mathematics is a language.” For Khan: “the Foundations is an allegory of economic theory and…the epigraph is a prosopopeia for this allegory” (1993: 763). Since I had to look it up too, prosopopeia is a rhetorical device in which a speaker or writer communicates to the audience by speaking as another person or object. Khan is quite clear that he finds Samuelson’s epigraph puzzling, but instead of just saying “It’s wrong” (which would be tedious) he finds a way to détourne it that is actually quite clever. He takes as a major theme throughout the paper the ways that the same economic subject-matter can be depicted in different ways by using different mathematical formalisms. Now, it’s fairly trivial that one can do this, but Khan focuses on how in many ways certain formalisms are observationally equivalent to each other. For instance, he gives the following chart (1993: 772):

[Table: correspondence between probability and measure-theoretic terms (Khan, 1993: 772)]

That is to say, to present probabilistic ideas using the formalism of measure theory doesn’t at all affect the content of what’s being said: it’s essentially just using the full toolbox of real analysis instead of only set notation. What interests Khan here is how these new notations change the differential relations between ideas, creating brand new forms of Derridean différance in the realm of meaning—which, in turn, translate into new mathematical possibilities as our broadened horizons of meaning let us develop brand new interpretations of things we didn’t notice before. Khan’s favorite example here is nonstandard analysis, which he claims ought to make up a third column in the above chart, as probabilistic and measure theoretic concepts (and much else besides) can likewise be expressed in nonstandard terms. To briefly jot down what nonstandard analysis is: using mathematical logic, it is possible to rigorously define infinitesimals in a way that is actually usable, rather than simply gestured to by evoking marginal quantities. While theorems using such nonstandard tools often differ greatly from ‘standard’ theorems, it is provable that any nonstandard theorem can be proved standardly, and vice versa; yet, some theorems are far easier to prove nonstandardly, whence its appeal (Dauben, 1985). In economics, for example, an agent can be modelled as an infinitesimal quantity, which is handy for general equilibrium models where we care less about particulars than about aggregate properties, and part of Khan’s own mathematical work in general equilibrium theory does precisely this.

To underscore his overall point, Khan effectively puts Samuelson’s epigraph through a prism: “Differential Calculus is a Language”, “Convex Analysis is a Language”, “Nonsmooth Analysis is a Language”, and so on. Referring to Samuelson’s original epigraph, this lets Khan “interpret the word ‘language’ as a metonymy for the collectivity of languages” (1993: 768), which lets him translate it into: “Mathematics is a Tower of Babel.” Fittingly, in order to navigate this Tower of Babel, Khan (following Derrida) adopts a distinction originating in architecture: that between keystone and cornerstone. A keystone is a component of a structure that is meant to be the center of attention, and clinches its aesthetic ambiance; however, a keystone has no real architectural significance, and could be removed without affecting the rest of the structure. On the other hand, a cornerstone is an unassuming, unnoticed element that is actually crucial for the structural integrity of the whole; take it away and the rest goes crashing down.

Read the rest of this entry

The Shapley Value: An Extremely Short Introduction

[Image: at hierophants of escapism, by versatis]

[For those who find the LaTeX formatting hard to read: pdf version + LaTeX version]

If we view economics as a method of decomposing (or unwriting) our stories about the world into the numerical and functional structures that let them create meaning, the Shapley value is perhaps the extreme limit of this approach. In his 1953 paper, Shapley noted that if game theory deals with agents’ evaluations of choices, one such choice should be the game itself—and so we must construct “the value of a game [that] depends only on its abstract properties” (1953: 32). By embodying a player’s position in a game as a scalar number, we reach the degree zero of meaning, beyond which any sort of representation is severed entirely. And yet, this value recurs over and over throughout game theory, under widely disparate tools, settings, and axiomatizations. This paper will outline how the Shapley value’s axioms coalesce into an intuitive interpretation that operates between fact and norm, how the simplicity of its formalism is an asset rather than a liability, and its wealth of applications.

Overview

Cooperative game theory differs from non-cooperative game theory not only in its emphasis on coalitions, but also by concentrating on division of payoffs rather than how these payoffs are attained (Aumann, 2005: 719). It thus does not require the degree of specification needed for non-cooperative games, such as complete preference orderings by all the players. This makes cooperative game theory helpful for situations in which the rules of the game are less well-defined, such as elections, international relations, and markets in which it is unclear who is buying from and selling to whom (Aumann, 2005: 719). Cooperative games can, of course, be translated into non-cooperative games by providing these intermediate details—a minor industry known as the Nash programme (Serrano, 2008).

Shapley introduced his solution concept in 1953, three years after John F. Nash introduced Nash Equilibrium in his doctoral dissertation. One way of interpreting the Shapley value, then, is to view it as more in line with von Neumann and Morgenstern’s approach to game theory, specifically its reductionist programme. Shapley introduced his paper with the claim that if game theory deals with agents’ evaluations of choices, one such choice should be the game itself—and so we must construct “the value of a game [that] depends only on its abstract properties” (1953: 32). All the peculiarities of a game are thus reduced to a single vector: one value for each of the players. Another common solution concept for cooperative games, the Core, uses sets, with the corollary that the core can be empty; the Shapley value, by contrast, always exists, and is unique.

To develop his solution concept, Shapley began from a set of desirable properties taken as axioms:

  • Efficiency: \sum_{i\in{N}} \Phi_i(v) = v(N).
  • Symmetry: If v(S ∪{i}) = v(S ∪{j}) for every coalition S not containing i & j, then ϕi(v) = ϕj(v).
  • Dummy Axiom: If v(S) = v(S ∪{i}) for every coalition S not containing i, then ϕi(v) = 0.
  • Additivity: If u and v are characteristic functions, then ϕ(u + v) = ϕ(u) + ϕ(v).

In normal English, any fair allocation ought to divide the whole of the resource without any waste (efficiency), two people who contribute the same to every coalition should have the same Shapley value (symmetry), and someone who contributes nothing should get nothing (dummy). The first three axioms are ‘within games’, chosen based on normative ideals; additivity, by contrast, is ‘between games’ (Winter, 2002: 2038). Additivity is not needed to define the Shapley value, but helps a great deal in mathematical proofs, notably of its uniqueness. Since the additivity axiom is used mainly for mathematical tractability rather than normative considerations, much work has been done in developing alternatives to the additivity axiom. The fact that the Shapley value can be replicated under vastly different axiomatizations helps illustrate why it comes up so often in applications.

The Shapley value formula takes the form:

\Phi_i(v) = \sum\limits_{\substack{S\subseteq{N}\\i\in{S}}} \frac{(|S|-1)!(n-|S|)!}{n!}[v(S)-v(S\backslash\{i\})]

where |S| is the number of elements in the coalition S, i.e. its cardinality, and n is the total number of players. The initial part of the equation will make far more sense once we go through several examples; for now we will focus on the second part, in square brackets. All cooperative games use a value function, v(S), in which v(Ø) ≡ 0 for mathematical reasons, and v(N) represents the ‘grand coalition’ containing each member of the game. The equation [v(S) – v(S\{i})] represents the difference in the value functions for the coalition S containing player i and the coalition which is identical to S except not containing player i (read: “S less i”). In monotonic games (where adding members never lowers a coalition’s value), this quantity is non-negative. It is this tiny equation that lets us interpret the Shapley value in a way that is second-nature to economists, which is precisely one of its most remarkable properties. Historically, the use of calculus, which culminated in the supply-demand diagrams of Alfred Marshall, is what fundamentally defined economics as a genre of writing, as opposed to the political economy of Adam Smith and David Ricardo. The literal meaning of a derivative as infinitesimal movement along a curve was read in terms of ‘margins’: say, the change in utility brought about by a single-unit increase in good x. Thus, although these axioms specify nothing about marginal quantities, we can nonetheless interpret the Shapley value as the marginal contribution of a single member to each coalition in which he or she takes part. This marginalist interpretation was not built in by Shapley himself, but emerged over time as the Shapley value’s mathematical exposition was progressively simplified. It is this that allows us to illustrate by examples instead of derivations.
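To make the formula concrete, here is a minimal Python sketch of it (the function name and the toy game are mine, purely for illustration): for each player it sums the bracketed marginal contribution over every coalition containing that player, weighted by the factorial coefficient.

```python
from itertools import combinations
from math import factorial

def shapley(players, v):
    """Shapley value of a cooperative game; v maps frozensets of players to numbers."""
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for k in range(len(others) + 1):
            for rest in combinations(others, k):
                S = frozenset(rest) | {i}                              # coalition containing i
                weight = factorial(len(S) - 1) * factorial(n - len(S)) / factorial(n)
                total += weight * (v(S) - v(S - {i}))                  # marginal contribution
        phi[i] = total
    return phi

# Toy three-player majority game (my own example): a coalition is worth 1 iff it has 2+ members.
v = lambda S: 1.0 if len(S) >= 2 else 0.0
print(shapley([1, 2, 3], v))   # 1/3 each, by symmetry and efficiency
```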

Examples 1 & 2: Shapley-Shubik Power Index (Shapley & Shubik, 1954)

Imagine a weighted majority vote: P1 has 10 shares, P2 has 30 shares, P3 has 30 shares, P4 has 40 shares.

For a coalition to be winning, it must have a higher number of votes than the quota, here q = \frac{110}{2} = 55

v(S) =\begin{cases} 1, & \text{if }\sum_{i\in S}w_i>q \\ 0, & \text{otherwise}\end{cases}  Winning coalitions: {2,3}, {2,4}, {3,4} & all supersets containing these.

Since the values only take on 0s and 1s, we can work with a shorter version of the Shapley value formula:

\Phi_i(v) = \sum\limits_{\substack{S\text{ winning}\\S\backslash\{i\}\text{ losing}}} \frac{(|S|-1)!(n-|S|)!}{n!}

Here, [v(S) – v(S\{i})] takes on a value of 1 iff a player is pivotal, making a losing coalition into a winning one. Otherwise it is either [0 – 0] = 0 for a losing coalition or [1 – 1] = 0 for a winning coalition.

For P1: v(S) – v(S\{1}) = 0 for all S, so ϕ1(v) = 0 (by dummy player axiom)

For P2: v(S) – v(S\{2}) ≠ 0 for S = {2,3}, {2,4}, {1,2,3}, {1,2,4}, so that:

\Phi_2(v)=2\frac{1!2!}{4!}+2\frac{2!1!}{4!}=\frac{8}{24}=\frac{1}{3}

By the symmetry axiom, ϕ2(v) = ϕ3(v) = ⅓. By the efficiency axiom, 0 + ⅓ + ⅓ + ϕ4(v) = v(N) = 1 → ϕ4(v) = ⅓
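As a sanity check, a brute-force enumeration over coalitions (a sketch with my own helper names, using the weights and quota of this example) reproduces these values exactly:

```python
from itertools import combinations
from math import factorial

weights = {1: 10, 2: 30, 3: 30, 4: 40}
q = 55
v = lambda S: 1 if sum(weights[p] for p in S) > q else 0   # winning = strict majority of the 110 shares

n = len(weights)
phi = {}
for i in weights:
    others = [p for p in weights if p != i]
    phi[i] = sum(factorial(k) * factorial(n - k - 1) / factorial(n) *
                 (v(set(rest) | {i}) - v(set(rest)))
                 for k in range(n) for rest in combinations(others, k))
print(phi)   # {1: 0.0, 2: 0.333…, 3: 0.333…, 4: 0.333…}
```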

It is worth noting that, within the structure of our voting game, P4’s extra ten votes have no effect on his power to influence the outcome, as shown by the fact that ϕ2 = ϕ3 = ϕ4. A paper by Shapley (1981) notes an actual situation for county governments in New York in which each municipality’s number of votes was based on its population; in one particular county, three of the six municipalities had Shapley values of zero, similar to our dummy player P1 above. Once this was realized, the quota was raised so that the three dummy municipalities were able to be pivotal for certain coalitions, giving them nonzero Shapley values (Ferguson, 2014: 18-9).

For a more realistic example, consider the United Nations Security Council, composed of 15 nations, where 9 of the 15 votes are needed, but the ‘big five’ nations have veto power. This is equivalent to a weighted voting game in which each of the big five gets 7 votes, and each of the other 10 nations gets 1 vote. This is because if all nations except one of the big five vote in favor of a resolution, the vote count is (35 – 7) + 10 = 38.

Thus we have weights of w1 = w2 = w3 = w4 = w5 = 7, and w6 = ⋯ = w15 = 1.

Our value function is v(S) =\begin{cases} 1, & \text{if }\sum_{i\in S}w_i\geq 39 \\ 0, & \text{otherwise}\end{cases}  Winning coalitions: {1,2,3,4,5, any 4+ of the 10}

For the 4 out of 10 ‘small’ nations needed for the vote to pass, the number of possible combinations is \frac{10!}{4!\,6!}.

Hence, in order to calculate the Shapley value for any member (say, P1) in the big five, we take into account that v(S) – v(S\{1}) ≠ 0 for all 210 coalitions, plus any coalitions with redundant members; this is just another way of expressing their veto power. In our previous example, we were able to count by hand the members in each pivotal coalition S and multiply that number by the Shapley value function for coalitions of that size. Here the number of pivotal coalitions for each size is so large that we must count them using combinatorics. Our next equation looks arcane, but it is only the number of pivotal coalitions multiplied by the Shapley function. First we have the minimal case where 4 of the 10 small members vote in favor of the resolution, then we have the case for 5 of the 10, and so on until we reach the case where all members unanimously vote together:

\Phi_1(v)=(\frac{10!}{4!6!})(\frac{8!6!}{15!})+(\frac{10!}{5!5!})(\frac{9!5!}{15!})+(\frac{10!}{6!4!})(\frac{10!4!}{15!})+(\frac{10!}{7!3!})(\frac{11!3!}{15!})+(\frac{10!}{8!2!})(\frac{12!2!}{15!})+(\frac{10!}{9!1!})(\frac{13!1!}{15!})+(\frac{14!}{15!})

=210\frac{1}{45045}+252\frac{1}{30030}+210\frac{1}{15015}+120\frac{1}{5460}+45\frac{1}{1365}+10\frac{1}{210}+1\frac{1}{15} = 0.19627

By the symmetry axiom, we know that all members of the big five have the same Shapley value of 0.19627. Also, as before, the efficiency axiom implies that the Shapley values for all the players sum to v(N) = 1. Since symmetry also implies that the Shapley values are the same for the 10 members without veto power, we need not engage in any tedious calculations for the remaining members, but can simply use the following formula:

\Phi_6=\cdots=\Phi_{15}=\frac{1-5(0.19627)}{10}=\frac{1-0.98135}{10}=0.001865
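Both numbers can be confirmed by brute-force enumeration over the 2¹⁵ possible coalitions (a sketch with my own labels: players 1-5 are the veto members, 6-15 the rest):

```python
from itertools import combinations
from math import factorial

weights = {p: 7 for p in range(1, 6)}          # the 'big five': 7 votes each
weights.update({p: 1 for p in range(6, 16)})   # the ten other members: 1 vote each
v = lambda S: 1 if sum(weights[p] for p in S) >= 39 else 0   # all five vetoes plus at least 4 others

n = len(weights)
def shapley(i):
    others = [p for p in weights if p != i]
    return sum(factorial(k) * factorial(n - k - 1) / factorial(n) *
               (v(set(rest) | {i}) - v(set(rest)))
               for k in range(n) for rest in combinations(others, k))

print(round(shapley(1), 5), round(shapley(6), 6))   # 0.19627 and 0.001865
```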

Part of the purpose of this example is to help the reader appreciate how quickly the complexity of such problems increases in the number of agents n. Weighted voting games are actually relatively simple to calculate because v(N) = 1, which is why we just sum together the Shapley formulas for each pivotal coalition’s size; in our next example we will relax this assumption. In so doing, the part of the Shapley formula v(S) – v(S\{i}) gains added importance as a ‘payoff’, whereas the Shapley formula used in our weighted voting game examples acts as a probability, so that the combined formula is reminiscent of von Neumann-Morgenstern utility. The Shapley formula can be construed as a probability in the following way (Roth, 1983: 6-7):

suppose the players enter a room in some order and that all n! orderings of the players in N are equally likely. Then ϕi(v) is the expected marginal contribution made by player i as she enters the room. To see this, consider any coalition S containing i and observe that the probability that player i enters the room to find precisely the players in S – i already there is (s – 1)!(n – s)!/n!. (Out of n! permutations of N there are (s – 1)! different orders in which the first s – 1 players can precede i, and (n – s)! different orders in which the remaining n – s players can follow, for a total of (s – 1)!(n – s)! permutations in which precisely the players S – i precede i.)

One drawback to this approach is its implicit assumption that each of the coalitions is equally likely (Serrano, 2013: 607). For cases such as the UN Security Council this is doubtful, and overlooks many very interesting questions. It also assumes that each player wants to join the grand coalition, whereas unanimous votes seldom occur in practice. The main advantage of the Shapley value in the above examples is that another common solution concept for cooperative games, the Core, tends to be empty in weighted voting games, giving it no explanatory power. The Shapley value can be extended to measure the power of shareholders in a company, and can even be used to predict expenditure among European Union member states (Soukenik, 2001). We will go through another relatively simple example, and then move on to several more challenging applications.
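To make the random-order story concrete, the following sketch recomputes our first voting example (weights 10, 30, 30, 40 and quota 55) by averaging each player's marginal contribution over all 4! equally likely entry orders, exactly as described in the quotation above; it recovers the same values as the subset formula.

```python
from itertools import permutations

weights = {1: 10, 2: 30, 3: 30, 4: 40}
v = lambda S: 1 if sum(weights[p] for p in S) > 55 else 0

orders = list(permutations(weights))        # all 4! = 24 equally likely entry orders
phi = {i: 0.0 for i in weights}
for order in orders:
    entered = set()
    for p in order:
        phi[p] += v(entered | {p}) - v(entered)   # p's marginal contribution upon entering
        entered.add(p)
phi = {i: total / len(orders) for i, total in phi.items()}
print(phi)   # {1: 0.0, 2: 0.333…, 3: 0.333…, 4: 0.333…}
```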

Read the rest of this entry

The Project of Econo-fiction

[Image: what is economics]

I have an article up at the online magazine Non on what it entails to use Laruelle’s non-philosophy to talk about economics, intended as a retrospective of my essay “There is no economic world.” It contextualizes econo-fiction in terms of Laruelle’s lexicon, illustrates a philosophical quandary with viewing iterated prisoner’s dilemma experiments through the lens of ‘falsification’, and notes a few ways I’ve changed my mind since then and where I plan to go from here. While the example is deliberately simple, aimed toward readers with zero knowledge of economic theory, it shows very succinctly how the notion of ‘experiment’ in economics operates as a form of conceptual rhetoric. I’ve also included a lot of fascinating factoids I’ve discovered since then, which I plan to expand upon in upcoming posts here.

No other philosophical approach I’ve come across—not even Badiou’s—lends itself to economics as much as non-philosophy does. I’m very impressed with the way that NP can talk about the mathematical formalism in economics without overcoding it, and I’d very much like to experiment with applying NP to related disciplines. Laruelle himself hints toward new applications of his method in finance: “Philosophy is a speculation that sells short and long at the same time, that floats at once upward and downward” (2012: 331). That is, philosophy is a form of hedging. Conversely, the section containing this excerpt is entitled “Non-Philosophy Is Not a Short-Selling Speculation,” where short-selling is investing so that you make money if an asset’s price goes down. Of the continental philosophers of finance I’m familiar with, Ben Lozano’s Deleuzian approach tends to focus on the conceptual aspects of finance to the neglect of its formalism, and Élie Ayache’s brilliantly original reading of quantitative finance is in many ways quite eccentric—such as his insistence on the crucial importance of the market maker (the guys yelling at each other in retro movies about Wall Street) and that algorithms are fundamentally inferior to human traders. A Laruellean interpretation of mainstream finance would serve as a welcome foil to both.

Just the other day I discovered a form of mathematical notation that appears to open up a Laruellean interpretation of accounting, and I’m always on the lookout for quirky reinterpretations of business-related ideas. I find philosophy such a handy tool for getting myself intrinsically interested in dull (but very practical) topics and disciplines, and I’ve read a whole heap of papers over the past year, so I’m really looking forward to blogging again.

Spectres of Capital: The Political Economy of Ghostwriting

[Image: Ghost City, by raysheaf]

Nearly every book you have read by a celebrity or politician has been written by someone else: the ghostwriter, whose name remains unknown (or else slyly inserted in the ‘acknowledgements’ section). At a moment’s thought we know this; many people would be quite offended, after all, if they thought that Barack Obama truly sat down and wrote the several(!) books under his name. Likewise, for a CEO to actually take the time to write a business book would be “widely perceived as an act both desperate and pathetic”—in a word, “it would have made him [or her] a schmuck” (Hitt, 1997). Yet, nobody thinks about this—we cling to the reified notion of The Author even as it becomes more and more separate from that of the Writer. The present essay addresses ghostwriting in all its apparitions, from celebrity ‘autobiographies’ to its increasing presence in music and online dating. We will trace out its phantasms in ancient and contemporary philosophy, from Aristotle to hauntology, underscoring its implications for both theory and anti-theory. And lastly, we will argue that increasing ‘spectrification’ of society (and the emergent spectra and spectralities arising in its wake) places deeply into question the method of ‘textual analysis’ of capitalism.

§1. “I care not who writes a nation’s laws, as long as I can write its op-eds”

In the film The Ghost Writer, Ewan McGregor explains the process to a client: “I interview you and turn your answers into prose.” We might recall Molière’s bourgeois gentilhomme, who realized with pride that he had been speaking prose all his life—but writing prose is another matter entirely, as any modern ‘ink-stained wretch’ will tell you. Writing is hard, yes, but no one seems to care: surveys show that most authors earn less than $1,000 per year (D’Agnese, 2014). The task of writing is an increasingly precarious one in light of the looming prospect of speech recognition technology phasing out the writer’s role entirely (replaced by that of the editor), as well as the increasing prevalence of algorithmic journalism.

Furthermore, as of 2011 (the latest year for which data is available) the number of new books published in the US reached 292,014—the highest in the world, followed by 241,986 in China (as of 2012) and 149,800 in the UK (as of 2011). Adding up the latest data for each country yields a total of 2,200,000 (via; see also). These, moreover, are the best of the lot, the ones that managed to escape the ‘slush pile’—every publisher and agent has one—of “unsolicited manuscripts, synopses and letters of enquiry lying in wait for someone to pick them up and respond with glowing encouragement” (Crofts, 8). In short, it’s virtually impossible for an unknown writer to make themselves heard, even in the unlikely situation that they have something interesting to say.

The process of ghostwriting is disarmingly simple. Often only two or three days of intensive interviewing are needed—one interview for the synopsis, several more for the full-length manuscript (Crofts, 104, 116): maybe 50 hours in total, 20 if they’re especially concise. The ghostwriter Sally Collings gets by with 10 interviews, each an hour long, followed by about four months of writing (or up to a year for larger projects)—far less personal than one might expect (Mayyasi, 2013). In return, ghosts are able to make a steady living doing what they love. One of the more ‘famous’ ghostwriters, Andrew Crofts, quotes a passage from the narrator in The Great Gatsby: “I was within and without, simultaneously enchanted and repelled by the inexhaustible variety of life” (in Crofts, 4). This, he says, “sums up the attraction of ghostwriting.” One peculiar case is Janofsky (2013), who found himself ghostwriting blog posts for an Arabian sheikh in exile; he even wrote a series of reflections on Ramadan—despite being Jewish—that were published verbatim. Culture shock is a concrete problem, keeping ghostwriters on their toes: Crofts (2004: 114) recalls writing an autobiography for an African chief who was modest to the point of nearly obscuring his actual importance in his home country, “and indeed in the international business community.” Another of his examples is ghostwriting for the Chinese billionaire Tan Sri Loy, who flew Crofts to China to meet his relatives: “there were extraordinary things about his background that he would have taken for granted and not mentioned if I hadn’t seen them for myself” (ibid, 106).

“Ghosting a book for someone,” says Crofts, “is like being paid to be educated by the best teachers in the world.” The ghostwriter’s position also lets them query their subjects in ways that would otherwise be obnoxious: it’s part of the job to ask someone how much they earn, who they’re sleeping with, why on earth they married who they married—and the client is obliged to answer (ibid, 15). This joint venture of Writer and Author is often win/win: even if someone enjoys researching, there’s no guarantee of finding a publisher for their book after the months or even years required for its completion. Given that advances are at historic lows, and that in the absence of authorial cachet, work-for-hire and ghost gigs bring the highest advances (D’Agnese, 2014), the immediate appeal is clear. The process is even qualitatively easier than writing on one’s own, since the ghostwriter needn’t grapple with their own insecurities and daunting standards: ghostwriting an entire book may well be easier than writing several blog posts for oneself (Kihara, 2014). Another consideration is that it’s easier to elicit readers’ pathos through first person rather than third person narrative (Crofts, 9); evocative tropes such as dream sequences are awkward to write in a biography of someone else. For many struggling writers, the lack of a byline is a small price to pay.

These are D’Agnese’s self-reported gross revenues, paid in halves, thirds, and fourths; for the net value, subtract 15% agenting fees from each.

The author’s motivation is simple enough—namely, outsourcing. Many authors initially have a go at writing on their own, but find that the job involves far more work than anticipated; the opportunity cost is just too high. For a successful expert (and/or celebrity, CEO, etc.), the main appeal of hiring a ghost is saving countless hours of niggling with a pen that could be far better spent contributing to their enterprise. Ghostwriters often even handle the author’s email interviews and blog posts during the publicity run (Huff, 2013), letting the author focus on making contacts and enjoying the spotlight. In short, ghostwriting embodies the principle of comparative advantage. Ghosts are defined by the lack of opportunities on their part: their universe of possibilities is far smaller, and it is precisely this discrepancy in ‘potentiality capital’ (Guattari) that makes ghostwriting a worthwhile venture. The receipt of money from the author in turn opens up the ghostwriter’s ‘universe’ more than they could have done alone, so that both parties gain from trade. It is easy to show numerically that, provided ‘transaction costs’ are sufficiently low, there will be mutual gains even if the client is a better writer than the ghostwriter they hire, due simply to their differing relative costs (a toy calculation appears after the quotation below). In a list of common misconceptions about ghostwriting, Deckers (2012) comments:

[People often] don’t think they have a high-enough position to need a ghost writer. They don’t think they’re that important to ‘deserve’ it. They think their company needs to be bigger, or they need to have a more prestigious position. I saw this a lot when I was doing speechwriting for a Congressional candidate in 2004. It’s not a matter of prestige, it’s a matter of having the time to do it.
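To make the ‘easy to show numerically’ claim above concrete, here is a toy comparative-advantage calculation in which every number is invented purely for illustration: the client writes faster than the ghost, yet both come out ahead because their hours are priced so differently.

```python
# Toy comparative-advantage sketch; every figure here is made up for illustration.
client_hourly_value = 500      # what an hour of the client's time earns in their business ($)
client_hours_to_write = 100    # the client is the faster writer...
ghost_hours_to_write = 150     # ...the ghost would take longer
ghost_hourly_alternative = 40  # what the ghost could earn per hour doing something else ($)
fee = 20_000                   # hypothetical flat fee for the manuscript ($)

client_cost_if_self_written = client_hourly_value * client_hours_to_write   # $50,000 of foregone business
client_gain = client_cost_if_self_written - fee                             # $30,000 better off hiring
ghost_opportunity_cost = ghost_hourly_alternative * ghost_hours_to_write    # $6,000
ghost_gain = fee - ghost_opportunity_cost                                   # $14,000 better off writing

print(client_gain, ghost_gain)   # both positive: gains from trade despite the client's absolute advantage
```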

Counterintuitively, it becomes clear upon researching the subject that most professional ghostwriters don’t write well. Articles on the subject are replete with gratuitous and absurd similes, purple prose, and even simple grammatical errors. Rather than a troupe of down-on-their-luck Joyces, Raphaels (or Hemingways, Dostoevskies…) without hands, and other poets manqué—many ghostwriters’ main comparative (and competitive) advantage lies in unapologetically producing dull writing. “Some editors are failed writers, but so are most writers” (T.S. Eliot). In fact, this is often a selling point—as one successful academic ghostwriter boasts (Dante, 2010):

Over the years, I’ve refined ways of stretching papers. I can write a four-word sentence in 40 words. Just give me one phrase of quotable text, and I’ll produce two pages of ponderous explanation. I can say in 10 pages what most normal people could say in a paragraph. […] I think about how Dickens got paid per word and how, as a result, Bleak House is…well, let’s be diplomatic and say exhaustive. Dickens is a role model for me.

Read the rest of this entry