# Category Archives: Review

## ‘Patatime

###### [Art by Tatiana Plakhova. LaTeX version here.]

‘Pataphysics is the science of the trans-ontological: it finds imaginary solutions to bridge incompatible worlds (ontologies). As such, ‘pataphysics can only be a science of the particular; otherwise, these worlds would be subsumed under a more general ontology—thus intra-ontological, and not truly trans-ontological.

Its most radical concept is pataphor: a ‘metaphor-squared’ that leaps between ontologies, without any reduction or hierarchy.[1] So: if continental theory is an exercise in overcoming binary oppositions—conceptual distinctions that split our thinking into two separate ontologies—then it is deeply pataphorical in nature.

Pataphor and its many variations can be expressed by the following formula:

$$A \xrightarrow{\,f\,} B \xrightarrow{\,g\,} C, \qquad A \neq C$$

Here, A is one world (or ontology) and C is an incompatible world: A $\neq$ C. Next, f is a metaphorical (or any meta-x) relation, and g is a non-figurative (or just x) relation. Last, B is an object that is metaphorically compared to something in world A, but which exists in world C. The concept is surprisingly general, occurring anywhere from economics and finance to various forms of art.

Not infrequently, some concept I have long found interesting turns out to have a pataphorical structure. Pataphor thus has a dual role, of both explaining why certain concepts are profound, and helping us know where to look—either to view old ideas in a new light, or even to synthesize bizarre new concepts.

This paper defines ‘patatime, by framing time travel as a ‘metatime’ relation. The first section shows how ‘patatime arises from the interaction of time and metatime in two well-known time travel paradoxes. The second section interprets Nick Land’s concept of templexity through ‘patatime. The last section identifies a pataphorical structure underlying many classic paradoxes and quasi-paradoxes.

#### 1. Time Paradoxes & ‘Pataphysics

##### “It is fine to live two different moments of time as one: that alone allows one to live authentically a single moment of eternity, indeed all eternity since it has no moments.” ~Alfred Jarry – Days & Nights

There are two paradoxes associated with time travel. The grandfather paradox makes a change that creates a new timeline and annuls the original. The bootstrap paradox makes a change that is in fact continuous with the original timeline. Clearly, one and the same change cannot both annul and be consistent with the original timeline. Since a change must do one or the other, we get two paradoxes.

In the grandfather paradox, someone goes back in time and kills their grandfather before he ever had children; yet, by so doing, the traveller could never have existed, and so could not have killed their grandfather. Killing one’s grandfather alters the course of history—the new future is incompatible with the original.

The grandfather paradox has the following ‘patatemporal structure:

$$A \xrightarrow{\,f\,} B \xrightarrow{\,g\,} C, \qquad A \neq C$$

Here, A is the present point in time where the time traveller begins; B is the point in the past where they kill their grandfather, annulling the original timeline leading to A; C is the alternate present, in the new timeline, in which the time traveller was never born, and so could never kill their grandfather. Clearly, A belongs to a different ontology than C (i.e., A $\neq$ C), since they belong to separate timelines.

This gives a relation $A \xrightarrow{f} B \xrightarrow{g} C$, where f is a ‘metatime’ relation (travelling to the past), while g is ‘time’. Thus, the grandfather paradox is a pataphor. The paradox holds for any change that rules out the future timeline that led to it.

It’s paradoxical because the metatime relation must still have a real effect even after the timeline it’s based on is annulled. So the grandfather paradox turns on the question of whether metatime can exist without an underlying time.

‘Bootstrap paradox’ is from the phrase, ‘to pull oneself up by one’s bootstraps’. Someone is inspired by some object or information from the past—say, a poem. They travel back in time to see who created it, and it turns out that they write it themselves, from memory. Thus, this object or information has no origin: it is a causal loop. Time is changed, but in a way presupposed by the original timeline.

As self-reifying, the bootstrap paradox resembles hyperstition: fiction that makes itself real. Hyperstition is in fact a ‘literal’ pataphor. In a fictional ontology A, we speak ‘figuratively’ (f) of an object B, which we speak of non-figuratively (g) within a real ontology C. Written as a pataphor, hyperstition’s autopoiesis is ‘exogenous’, while in the bootstrap paradox it is precisely what’s at issue.

The bootstrap paradox is likewise a form of ‘patatime:

$$A \xrightarrow{\,f\,} B \xrightarrow{\,g\,} C, \qquad A \neq C$$

Here, A is the thing that inspires the traveller to go back in time—a metatime relation f—journeying to the point in time B where the thing was supposedly created. Last, the traveller ends up (re-)creating the thing C that later inspires them to travel back in time in the first place. (This occurs as a time relation g.) Here A $\neq$ C, since A comes from an external source, but the traveller creates C.

## Lacan on the Number 13 and the Logic of Suspicion

Lacan once remarked on the Cartesian cogito that etymologically, “the French verb penser (to think)…means nothing other than peser (to weigh)” (1961-2: 14). Lacan’s mix of bad puns, abuse of notation, cryptic aphorisms & immense erudition has created a new style of writing, and even of thinking. A new language, in short, lacking any method for non-experts to weigh its words.

Of all things, Lacan’s earliest papers take the form of math puzzles—a lucid (albeit horrendously verbose) derivation, then reframing of the problem as metaphor. One such paper—“The Number Thirteen and the Logical Form of Suspicion”—has largely been forgotten. This post aims to recast the puzzle using discrete mathematics, and show how it bears upon Lacan’s later ideas.

What I like about this mathematical allegory is that even if one believes Lacan is a charlatan, here there’s no need to immerse oneself in psychoanalytic concepts, but only to think.

I hope that Lacanians will find working through my derivation a challenging exercise, and that mathematicians will be piqued at the idea of treating a math problem as a philosophical ‘text’. I for one would be glad to see more such texts.

#### 1. Lacan’s Algorithm

We are given 12 identical pieces, and told one of them is ‘bad’—either lighter or heavier than the rest, we’re not sure which. Having only a scale with two plates, and no way to gauge numerical weight, we must find the bad piece in 3 weighings.

If we knew whether the bad piece was lighter or heavier, the problem would be easy: just split the pieces into two groups of 6, then split the ‘bad’ half into two groups of 3, then simply weigh two of the bad three. But we don’t.

Here, we’ll overview Lacan’s account for 12 pieces, and then in the next section we’ll consider n pieces, and try to explain why Lacan’s algorithm works.

Lacan begins by placing on the scales two groups of 4. Suppose they balance. Then the bad piece is among the remaining 4, so we can just weigh any 2 of those 4. If they balance, the bad piece is one of the 2 left out; if they don’t, it is one of the 2 on the scale. So weigh one of the suspect 2 against a good piece: if they balance, the other piece is bad; if they don’t, then the piece on the scale is bad. Simple.

Note how this was equivalent to the sub-problem of finding a bad piece out of 4 pieces, in 2 weighings. The sub-problem is embedded in the larger problem.

If the two groups of 4 don’t balance, we use the method of tripartite rotation.

**Tripartite rotation**

The scales don’t balance, so one is heavier (H), one lighter (L). So, we select 3 pieces from H, L, and the remainder (R), and rotate them: H → L → R → H.[1]

Case 1: Scales balance — the bad piece is in the 3 moved to R, and too light.
Case 2: Balance shifts — the bad piece is in the 3 moved to L, and too heavy.
Case 3: Unbalance doesn’t change — the bad piece is in the 2 unmoved pieces.

In cases 1 and 2, just weigh 2 of the bad pieces: if they’re equal, the remainder is bad; if not, we know the bad piece is the lighter (case 1) or heavier piece (case 2). For case 3, just pick one and weigh it against a good piece. And we’re done.
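The whole 12-piece procedure, both the balanced branch and the tripartite rotation, can be checked exhaustively. Below is a minimal sketch; the piece numbering and function names are mine, not Lacan's:

```python
def weigh(left, right, bad, delta):
    """Sign of (left-pan weight) minus (right-pan weight): 1, 0, or -1.
    Good pieces weigh 1; the bad piece weighs 1 + delta
    (delta = +1 if heavy, -1 if light)."""
    l = len(left) + (delta if bad in left else 0)
    r = len(right) + (delta if bad in right else 0)
    return (l > r) - (l < r)

def solve12(bad, delta):
    """Run the procedure described above; returns the piece it identifies
    as bad, always in exactly three weighings."""
    w1 = weigh([0, 1, 2, 3], [4, 5, 6, 7], bad, delta)
    if w1 == 0:
        # Balanced: the 4-piece sub-problem on pieces 8..11 (0 is known good).
        w2 = weigh([8], [9], bad, delta)
        if w2 == 0:
            return 10 if weigh([10], [0], bad, delta) != 0 else 11
        return 8 if weigh([8], [0], bad, delta) != 0 else 9
    # Unbalanced: tripartite rotation H -> L -> R -> H, moving 3 pieces each.
    H, L = ([0, 1, 2, 3], [4, 5, 6, 7]) if w1 > 0 else ([4, 5, 6, 7], [0, 1, 2, 3])
    # New pans: the heavy side keeps H[3] and gains 3 good remainders (8, 9, 10);
    # the light side keeps L[3] and gains H[0..2]; L[0..2] leave the scale.
    w2 = weigh([H[3], 8, 9, 10], [L[3]] + H[:3], bad, delta)
    if w2 == 0:     # case 1: the bad piece left the scale, and is light
        t = weigh([L[0]], [L[1]], bad, delta)
        return L[2] if t == 0 else (L[1] if t > 0 else L[0])
    if w2 < 0:      # case 2: balance shifted; the bad piece changed pans, and is heavy
        t = weigh([H[0]], [H[1]], bad, delta)
        return H[2] if t == 0 else (H[0] if t > 0 else H[1])
    # case 3: unbalance unchanged; the bad piece is one of the two unmoved ones
    return H[3] if weigh([H[3]], [8], bad, delta) != 0 else L[3]

# All 24 scenarios (12 pieces, each possibly light or heavy):
assert all(solve12(bad, d) == bad for bad in range(12) for d in (1, -1))
```

Every branch of `solve12` performs exactly three calls to `weigh`, so the assertion confirms the three-weighing claim for every possible bad piece.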

Lacan then considers the case of 13 pieces: 4 on each scale, 5 remainders. It’s clear that if the scales don’t balance, the problem is the same as with 12 pieces when the scale didn’t balance—the remainders are all good, whether 5 or 4.

Here, when the scales balance, we have a new problem. Recall how we could treat 4 pieces as a separate problem. So let’s examine the 5-piece sub-problem.

Start with 2 pieces on each scale and 1 remainder. If we’re lucky, the scales balance and the remainder is bad. If not, we have 4 pieces, but we know the 4-piece case takes two weighings, so the 5-piece case must take three weighings.

It’s the same even for 1 piece on each scale and 3 remainders. If we’re unlucky, the scales balance, giving a new sub-problem with 3 pieces—the smallest solvable version of Lacan’s problem. Weigh any 2. If they balance, the remainder is bad. If not, weigh one of the pieces on the scale against a known good piece. Total: three weighings.

So the 3-piece and 4-piece cases each take two weighings, while 5 pieces takes three; it would seem, then, that 13 pieces must take four weighings. Nope.

Actually, for 13 pieces, the 5 remainders aren’t truly a separate sub-problem. There’s a difference: we have 8 good pieces. For 3 or 4 pieces, this doesn’t matter, but for 5 pieces, Lacan can introduce a new trick: the ‘by-three-and-one’ position.

**The ‘by-three-and-one’ position**

Here, we have 2 pieces in each plate, with one of the 4 a good piece, and 2 remainders. If the scales balance, just weigh one remainder against a good piece and we’re done. If they don’t balance, here’s the trick: we can do the smallest possible tripartite rotation, H → L → R → H, where R is a good piece.

Case 1: Scales balance — the bad piece is in R.
Case 2: Balance shifts — the bad piece is in L.
Case 3: No change — the unmoved piece is bad.

Thus, the 5 remainders take two weighings, and the 13-piece case takes three.
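The ‘by-three-and-one’ position can likewise be checked by brute force. A minimal sketch, with the 5 suspects numbered 0..4 and known-good pieces labelled 5 and 6 (labels mine; the scale model is the same as in the 12-piece check):

```python
def weigh(left, right, bad, delta):
    """Sign of left-pan weight minus right-pan weight; the bad piece
    weighs 1 + delta (delta = +1 heavy, -1 light), all others weigh 1."""
    l = len(left) + (delta if bad in left else 0)
    r = len(right) + (delta if bad in right else 0)
    return (l > r) - (l < r)

def solve5(bad, delta):
    """By-three-and-one: suspects 0..4, known-good pieces 5 and 6.
    Identifies the bad piece in at most two weighings."""
    w1 = weigh([0, 1], [2, 5], bad, delta)   # 2 suspects vs. 1 suspect + 1 good
    if w1 == 0:
        # Bad piece is one of the 2 remainders, 3 or 4.
        return 3 if weigh([3], [5], bad, delta) != 0 else 4
    if w1 > 0:
        # {0, 1} heavy: smallest rotation; 0 crosses over, 2 leaves, good 6 enters.
        w2 = weigh([1, 6], [0, 5], bad, delta)
        if w2 == 0:
            return 2                    # balance restored: the piece that left
        return 0 if w2 < 0 else 1       # shifted: the mover; unchanged: the stayer
    # {0, 1} light: mirror image; 2 crosses over, 0 leaves, good 6 enters.
    w2 = weigh([1, 2], [5, 6], bad, delta)
    if w2 == 0:
        return 0
    return 2 if w2 > 0 else 1

# All 10 scenarios (5 suspects, each possibly light or heavy):
assert all(solve5(bad, d) == bad for bad in range(5) for d in (1, -1))
```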

In this case, treating the 5 remainders as a sub-problem was the wrong way to go, making it seem impossible to solve in 3 weighings. More pieces means more ways to divide between scales and remainder, increasing the risk of such pitfalls.

Thus, Lacan’s task is to find a general algorithm for any number of pieces, including a uniform way to divide them. The algorithm must minimize the maximum number of weighings—i.e. find the minimum needed even when we don’t get lucky.

The problem also raises some new questions. The main one is: for a given number of pieces, how many weighings are needed? As with the solutions outlined above, Lacan answers this question, but fails to explain why his solution works.
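Lacan does not give it, but the standard counting argument behind these numbers is short: a weighing has three outcomes, so w weighings can distinguish at most 3^w scenarios, while n pieces give 2n scenarios (each piece might be light or heavy). The classical bounds that follow are sketched below; the function and its framing are mine, not Lacan's:

```python
def max_pieces(weighings, defect_known_too=True):
    """Largest number of pieces solvable in the given number of weighings
    (classical result). With defect_known_too, we must name the bad piece
    AND say whether it is light or heavy; otherwise naming it is enough."""
    if defect_known_too:
        return (3 ** weighings - 3) // 2
    return (3 ** weighings - 1) // 2

print(max_pieces(3))         # 12: Lacan's original problem
print(max_pieces(3, False))  # 13: the piece is found, though its defect may stay unknown
```

This is why 13 pieces still need only three weighings, and why the 13-piece solution (as in the balanced branch above) sometimes identifies the bad piece without ever revealing whether it is light or heavy.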

Hence, the next section will diverge from Lacan’s exposition, using discrete mathematics to give an algorithm for n pieces. This will help us see how Lacan’s problem relates to the logic of suspicion, which we will outline in the final section.

## Heideggerian Economics

Lately I’ve had the poor judgment to start reading Heidegger’s Being and Time. I’ve been putting it off for years now, largely because it has no connection with the kind of philosophy I’m interested in. Yet, among my philosophical acquaintances there is a clear line between those who have read Heidegger and those who haven’t—working through this book really does seem to let people reach a whole new level of abstraction.

To my great surprise, in Being and Time (1927: 413), Heidegger remarks:

> [E]ven that which is ready-to-hand can be made a theme for scientific investigation and determination… The context of equipment that is ready-to-hand in an everyday manner, its historical emergence and utilization, and its factical role in Dasein — all these are objects for the science of economics. The ready-to-hand can become the ‘Object’ of science without having to lose its character as equipment. A modification of our understanding of Being does not seem to be necessarily constitutive for the genesis of the theoretical attitude ‘towards Things’.

Curiously, no other sources I’ve found mention this excerpt. More well-known is a passage from “What are Poets for?” in which Heidegger denounces marketization (1946: 114-5):

> In place of all the world-content of things that was formerly perceived and used to grant freely of itself, the object-character of technological dominion spreads itself over the earth ever more quickly, ruthlessly, and completely. Not only does it establish all things as producible in the process of production; it also delivers the products of production by means of the market. In self-assertive production, the humanness of man and the thingness of things dissolve into the calculated market value of a market which not only spans the whole earth as a world market, but also, as the will to will, trades in the nature of Being and thus subjects all beings to the trade of a calculation that dominates most tenaciously in those areas where there is no need of numbers.

Thus it’s very easy to appeal to Heidegger’s authority to support various Leftist clichés about capitalism. It’s far harder to bring Heidegger’s thought to bear on actual economic modelling—its ‘worldly philosophy’. In this post I’ll survey several of the less hand-wavey attempts in this direction. My main question is whether a Heideggerian economics is possible at all, and if so, whether there is a specific subfield of economics to which Heideggerian philosophy especially lends itself. My overview of each specific thinker sticks closely to the source material, as I’m hardly fluent enough in Heideggerese to give a synoptic overview or clever reinterpretation. I don’t expect to ever develop a systematic interpretation of my own, but I hope this post might prove inspiring to some economist with philosophical tastes far different from my own.

#### 1. Schalow on ‘The Question of Economics’

Schalow’s approach is quite refreshing because he is an orthodox Heideggerian who nonetheless takes the viewpoint of mainstream economics, as opposed to Heideggerian Marxism such as Marcuse’s One-Dimensional Man. Schalow’s question is at once simpler and deeper: whether Heidegger’s thought leaves any room for economics. Here, ‘economics’ is minimally defined as theorizing the production and distribution of goods to meet human needs. (So in theory, then, this applies to any sort of economics, classical or modern.) The most obvious answer would seem to be ‘No’ — he notes: “It is clear that Heidegger refrains from ‘theorizing’ of any kind, which for him constitutes a form of metaphysical rationality” (p. 249).

Thus, Schalow takes a more abstract route, viewing economics simply as “an inescapable concern of human being (Dasein) who is temporally and spatially situated within the world” (p. 250). Schalow advocates a form of ‘chrono-economics’, where ‘scarcity’ is framed through time as numeraire. In a sense, this operates between ‘economic theory’ as a mathematical science vs. as a “humanistic recipe for achieving social justice” (p. 251); instead, “economic concerns are an extension of human finitude” (p. 250). Schalow makes various pedantic points about etymology which I’ll spare the reader, except for this one: “the term ‘logos’ derives its meaning from the horticultural activity of ‘collecting’ and ‘dispersing’ seeds” (p. 252).

It’s natural to interpret Being & Time as “lay[ing] out the pre-theoretical understanding of the everyday work-world in which the self produces goods and satisfies its instrumental needs” (p. 253). Similarly, “work is the self’s way of ‘skillful coping’ in its everyday dealings with the world” (p. 254). Hence Heidegger emphasizes production — which he will later associate with technē — over exchange, which he associates with the ‘they-self’ (p. 254). Yet, Schalow points out, both production and exchange can be construed as a form of ‘care’. Care, in turn, is configured by temporality, which forces us to prioritize some things over others (p. 256).

“The paradox of time…is the fact that it is its transitoriness which imparts the pregnancy of meaning on what we do” (p. 257). Therefore, “time constitutes the ‘economy of all economies’,” in that “temporality supplies the limit of all limits in which any provision or strategy of allocation can occur” (ibid.). We can go on to say that “time economizes all the economies, in defining the horizon of finitude as the key for any plan of allocation” (p. 258).

In his later thought, Heidegger adopted a more historical view, arguing that the structure of Being was experienced differently in different epochs. In our own time, the strongest influence on our notion of Being is technology. Schalow gives an interesting summary (p. 261):

> The advance of technology…occurs only through a proportional ‘decline’ in which the manifestness of being becomes secondary to the beings that ‘presence’ in terms of their instrumental uses.

In an age where the economy is so large as to be inconceivable except through mathematical models, one can say that “the modern age of technology dawns with the reduction of philosophical questions to economic ones” (p. 260). Thus, Heidegger is more inclined to view economics as instrumental (technē) rather than as “the self-originative form of disclosure found in art (poiēsis).” Yet, rather than merely a quantitative “artifice of instrumentality,” it is also possible to interpret economics in terms of poiēsis, as “a vehicle by which human beings disclose their immersion in the material contingencies of existence” (p. 262). Economics thus becomes “a dynamic event by which human culture adjusts to ‘manage’ its natural limitations” (ibid.). Framing economics in terms of temporality (as ‘chrono-economics’) allows it to remain open to Being, and thereby “to connect philosophy with economics without effacing the boundary between them” (p. 263).

## Élie Ayache’s The Medium of Contingency – A Review

###### [All art by Tatiana Plakhova. Review in pdf here]

Élie Ayache, The Medium of Contingency: An Inverse View of the Market,
Palgrave-Macmillan, 2015, 414pp., \$50.00 (hbk), ISBN 9781137286543.

Ayache’s project is to outline the ontology of quantitative finance as a discipline. That is, he wants to find what distinguishes it as a genre, distinct from economics or even stocks and bonds—what most of us associate with ‘finance’. Quantitative finance, dealing with derivatives, is a whole new level of abstraction. So Ayache has to show that economic and social concerns are exogenous (external) to derivative prices: the underlying asset can simply be treated as a stochastic process. His issue with probability is that it is epistemological—a shorthand for when we don’t know the true mechanism. Taleb’s notion of black swans as radically unforeseeable (unknowable) events is simply an extension of this. Conversely, market-makers—those groups of people yelling at each other in old movies about Wall Street—don’t need probability to do their jobs. Ayache’s aim is thus to introduce into theory the practice of derivatives trading—from within, rather than outside, the market. And it’s reasonable to think that delineating the ontology of this immensely rich field will yield insights applicable elsewhere in philosophy.

This is not a didactic book. People coming from philosophy will not learn about finance, nor about how derivatives work. Ayache reinterprets these, assuming familiarity with the standard view. Even Pierre Menard—Ayache’s claim to fame—is only given a few perfunctory mentions here. People coming from finance will not learn anything about philosophy, since Ayache assumes a graduate-level knowledge of it. Further, Ayache’s comments on Taleb’s Antifragile are limited to one page. The only conceivable reason to even skim this book is that you’d like to see just how abstract the philosophy of finance can get.

I got interested in Ayache because I write philosophy of economics. I wanted to learn what quantitative finance is all about, so several years ago I read through all his articles in Wilmott Magazine, gradually learning how to make sense of sentences like “Only in a diffusion framework is the one-touch option…replicable by a continuum of vanilla butterflies” (Sept 2006: 19). I’ve made it through all of Ayache’s published essays. Now I’ve read this entire book, and I deserve a goddamn medal. I read it so that you don’t have to.

Much of Ayache’s reception so far has been quite silly. I recently came across an article (Ferraro, 2016) that cited Ayache’s concept of ‘contingency’ as an inspiration behind a game based on sumo wrestling. (You can’t make this stuff up.) Frank Ruda (2013), an otherwise respectable philosopher, wrote a nonsensical article comparing him to Stalin![1] Philosophy grad students occasionally mention his work to give their papers a more ‘empirical’ feel (which is comparable in silliness to the sumo wrestling), especially Ayache’s clever reading of Borges’ short story on Pierre Menard—from which these graduate students draw sweeping conclusions about capitalism and high-frequency trading.

Ayache expects the reader to have already read The Blank Swan, which itself is not understandable without reading Meillassoux’s After Finitude. Thus, for most readers, decreasing returns will have long set in. My goal here is to summarize the main arguments and/or good ideas of each chapter, divested of the pages and pages of empty verbosity accompanying them. I try to avoid technical jargon from finance and philosophy except as needed to explain the arguments, though I do provide requisite background knowledge that Ayache has omitted. So first, let’s cover the most important concepts that the reader may find unfamiliar.

## Avant-Garde Philosophy of Economics

To most people, the title of this post is a triple oxymoron. Those left thoroughly traumatized by Econ 101 in college share their skepticism with those who have dipped their toe into hybrid fields like neuroeconomics and found them to be a synthesis of the dullest parts of both disciplines. For the vast, vast majority of cases, this sentiment is quite right: ‘philosophy of economics’ tends to be divided between heterodox schools of economics whose writings have entirely decoupled from economic formalism, and—on the other side of the spectrum—baroque econophysicists with lots to say about intriguing things like ‘quantum economics’ and negative probabilities via p-adic numbers, but typically within a dry positivist framework. As for the middle-ground material, a 20-page paper typically yields two or three salvageable sentences, if even that. Yet, as anyone who follows my Twitter knows, I look very hard for papers that aren’t terrible—and eventually I’ve found some.

Often the ‘giants’ of economic theory (e.g. Nobel laureates like Harsanyi or Lucas) have compelling things to say about methodology, but to include them on this list seems like cheating, so we’ll instead keep to scholars whom most economists have never heard of. We also—naturally—want authors who write mainly in natural language, and whose work is therefore accessible to readers who are not specialists in economic theory. Lastly, let’s strike from the list those writers who do not engage directly with economic formalism itself, but only ‘the economy’. This last qualification is the most draconian of the lot, and manages to purge the philosophers of economics (e.g. Mäki, McCloskey) who tend to be the most well-known.

The remaining authors make up the vanguard of philosophy of economics—those who alchemically permute the elements of economic theory into transdisciplinary concoctions seemingly more at home in a story by Lovecraft or Borges than in academia, and who help us ascend to levels of abstraction we never could have imagined. Their descriptions are ordered for ease of exposition, building from and often contradicting one another. For those who would like to read more, some recommended readings are provided under each entry. I hope that readers will see that people have for a long time been thinking very hard about problems in economics, and that thinking abstractly does not mean avoiding practical issues.

### M. Ali Khan

Khan is a fascinating character, and stands out even among the other members of this list: by training he is a mathematical economist, familiar with some of the highest levels of abstraction yet achieved in economic theory, but at the same time an avid fan of continental philosophy, liberally citing sources such as De Man (a singular choice, even within the continental crowd!), Derrida, and similar figures on the more literary side of theory, such as Ricoeur and Jameson. It may be helpful to contrast Khan with Deirdre McCloskey, who has written a couple of books on writing in economics: McCloskey uses undergraduate-level literary theory to look at economics, which (let’s face it) is a fairly impoverished framework, forcing her to cut a lot of corners and sand away various rough edges that are very much worth exploring. An example is how she considers the Duhem-Quine thesis to be in her own camp, which she proudly labels ‘postmodern’—yet, just about any philosopher you talk to will consider this completely absurd: Quine was as modernist as they come. (Moreover, in the 30 years she had between the first and second editions, it appears she has never bothered to read the source texts.) Khan, by contrast, has thoroughly done his homework and then some.

Khan’s greatest paper is titled “The Irony in/of Economic Theory,” where he claims that this ‘irony’ operates as a (perhaps unavoidable) literary trope within economic theory as a genre of writing. Khan likewise draws from rhetorical figures such as synecdoche and allegory, and it will be helpful to start at a more basic level than he does and build up from there. The prevailing view of the intersection of mathematics and literary theory is that models are metaphors: this is due to two books by Max Black (1962) and Mary Hesse (1963) whose main thesis was exactly this point. While this is satisfying, and readily accepted by theorists such as McCloskey, Khan does not content himself with this statement, and we’ll shortly see why.

Consider: a metaphor compares one thing to another on the basis of some kind of structural similarity, and this is a very useful account of, say, models in physics, which use mathematical formulas to adequate certain patterns and laws of nature. However, in economics it often doesn’t matter nearly as much who the particular agents are that are depicted by the formulas: the Prisoner’s dilemma can model the behaviour of cancer cells just as well as it can model human relations. If we change the object of a metaphor (e.g. cancer cells → people), it becomes a different metaphor; what we need is a kind of rhetorical figure where it doesn’t matter if we replace one or more of the components, provided we retain the overall framework. This is precisely what allegory does: in one of Aesop’s fables, say “The Tortoise and the Hare,” we can replace the tortoise by a slug and the hare by a grasshopper, but nobody would consider this to be an entirely new allegory—all that matters here is that one character is slow and the other is fast. Moreover, we can treat this allegory itself as a metaphor, as when we compare an everyday situation to Aesop’s fable (which was exactly Aesop’s point), which is why it’s easy to treat economic models simply as metaphors, even though their fundamental structure is allegorical.
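The point about substitutable components can be made concrete. In the sketch below (all labels and payoff numbers are mine, purely illustrative), the Prisoner's dilemma logic never mentions who the players are, so relabelling them as people, cancer cells, or Aesop's animals changes nothing:

```python
# Payoffs (row player, column player), indexed by the two players' actions.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_response(opponent_action):
    """The row player's payoff-maximizing reply to a fixed opponent action."""
    return max(("cooperate", "defect"),
               key=lambda a: PAYOFFS[(a, opponent_action)][0])

# Defection dominates no matter what the opponent does. Note that no player
# labels appear anywhere above: that indifference to what fills the roles is
# exactly what makes the model allegorical rather than metaphorical.
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"
```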

The reason this is important is because Khan takes this idea to a whole new level of abstraction: in effect, he wants to connect the allegorical structure of economic models to the allegorical nature of economic texts—in particular, Paul Samuelson’s Foundations of Economic Analysis, which begins with the enigmatic epigraph “Mathematics is a language.” For Khan: “the Foundations is an allegory of economic theory and…the epigraph is a prosopopeia for this allegory” (1993: 763). Since I had to look it up too, prosopopeia is a rhetorical device in which a speaker or writer communicates to the audience by speaking as another person or object. Khan makes clear that he finds Samuelson’s epigraph puzzling, but instead of just saying “It’s wrong” (which would be tedious) he finds a way to détourne it that is actually quite clever. He takes as a major theme throughout the paper the ways that the same economic subject-matter can be depicted in different ways by using different mathematical formalisms. Now, it’s fairly trivial that one can do this, but Khan focuses on how in many ways certain formalisms are observationally equivalent to each other. For instance, he gives the following chart (1993: 772):

###### [Chart: correspondence between probability & measure theory]

That is to say, to present probabilistic ideas using the formalism of measure theory doesn’t at all affect the content of what’s being said: it’s essentially just using the full toolbox of real analysis instead of only set notation. What interests Khan here is how these new notations change the differential relations between ideas, creating brand new forms of Derridean différance in the realm of meaning—which, in turn, translate into new mathematical possibilities as our broadened horizons of meaning let us develop brand new interpretations of things we didn’t notice before. Khan’s favorite example here is nonstandard analysis, which he claims ought to make up a third column in the above chart, as probabilistic and measure theoretic concepts (and much else besides) can likewise be expressed in nonstandard terms. To briefly jot down what nonstandard analysis is: using mathematical logic, it is possible to rigorously define infinitesimals in a way that is actually usable, rather than simply gestured to by evoking marginal quantities. While theorems using such nonstandard tools often differ greatly from ‘standard’ theorems, it is provable that any nonstandard theorem can be proved standardly, and vice versa; yet, some theorems are far easier to prove nonstandardly, whence its appeal (Dauben, 1985). In economics, for example, an agent can be modelled as an infinitesimal quantity, which is handy for general equilibrium models where we care less about particulars than about aggregate properties, and part of Khan’s own mathematical work in general equilibrium theory does precisely this.

To underscore his overall point, Khan effectively puts Samuelson’s epigraph through a prism: “Differential Calculus is a Language”, “Convex Analysis is a Language”, “Nonsmooth Analysis is a Language”, and so on. Referring to Samuelson’s original epigraph, this lets Khan “interpret the word ‘language’ as a metonymy for the collectivity of languages” (1993: 768), which lets him translate it into: “Mathematics is a Tower of Babel.” Fittingly, in order to navigate this Tower of Babel, Khan (following Derrida) adopts a term originating from architecture: namely, the distinction between keystone and cornerstone. A keystone is a component of a structure that is meant to be the center of attention, and clinches its aesthetic ambiance; however, a keystone has no real architectural significance, but could be removed without affecting the rest of the structure. On the other hand, a cornerstone is an unassuming, unnoticed element that is actually crucial for the structural integrity of the whole; take it away and the rest goes crashing down.

## Alessandro Bellini on Sraffa and The ‘Capital-World’

{The following is excerpted from Bellini’s dissertation (supervised by François Laruelle), entitled “Suspension of the Capital-World for the Production of Jouissance” (pp. 35-41; abstract here), shittily translated from the French by moi. I’m interested to hear what philosophers (e.g. Lyotard) have to say about Sraffa, but although Bellini’s description is initially quite interesting it eventually resorts to shameless straw man arguments, as well as rejecting Sraffa’s position purely on the basis of metaphysical preferences. I’ve added a couple of translator’s notes specifying the most egregious distortions of Sraffa’s work, and my more lengthy criticisms can be found at the bottom of the post, above the endnotes.}

§ 3. Production of commodities by means of commodities

The interpretation that the Italian economist Claudio Napoleoni has given of Piero Sraffa’s Production of Commodities by Means of Commodities[20] is a very radical one, in disagreement with both the apologists of the Cambridge economist and his neoclassical adversaries. It is rooted in a vision of political economy as critical knowledge, one that always seeks to emphasize the philosophical issues bound up with the theory and that constantly pushes its way forward with inexhaustible political authority.[21]

What should first be noted is the way in which Piero Sraffa places himself in the classical tradition of the history of economic thought, which follows from the perfect circularity of his model, and unfolds through the role played by surplus; yet “the fact that the image of the economic process based on the concept of surplus is presented in the classics in a way logically untenable but historically significant, whereas in Sraffa it is presented in a way that is logically rigorous but historically silent” was for Claudio Napoleoni one of the fundamental features of the theoretical context in which the 1960 Production of Commodities by Means of Commodities appeared.[22] A solution such as Sraffa’s must therefore be interpreted as a break with the Marxian structure – in its Classical sense – rather than as its extension.

Let us then go into some of the details of the book, despite its level of abstraction, paying very specific attention to its use of the language of classical economic theory, of which we have already given an overview [in §2]. According to Claudio Napoleoni, what the Sraffa model presupposes is a given configuration of production – that is to say, a system of algebraic equations which represent the contributions that each branch of the productive system provides to the aggregate of economic processes, without including demand for goods – through which we may define a “net product” or surplus in physiocratic and Ricardian terms. Sraffa’s theoretical aim is to show that separating the determination of price from the general problem of equilibrium is a meaningful operation, because it is precisely by this link that prices are determinable.[23]
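In the usual textbook notation (a sketch, not Napoleoni’s own formalism), Sraffa’s production system with a surplus can be written as:

```latex
(1+r)\sum_{i=1}^{n} a_{ij}\, p_i \;+\; w\, l_j \;=\; p_j,
\qquad j = 1, \dots, n,
```

where $a_{ij}$ is the amount of commodity $i$ used to produce one unit of commodity $j$, $l_j$ the direct labor input, $p_j$ the prices, $w$ the wage, and $r$ the uniform rate of profit. Once one distributive variable ($w$ or $r$) is fixed from outside, the $n$ equations simultaneously determine relative prices and the remaining distributive variable, with no reference to demand.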

Indeed, the operation performed by Sraffa is a revival – through its definition of surplus – of Ricardian theory, though abandoning the pretension to link price-formation to quantities of labor objectified in commodities. It consequently eliminates any circular reasoning, thanks to the simultaneous determination of the rate of profit and of prices.

In particular, according to Claudio Napoleoni, the Sraffian “reduction to dated quantities of labor” can be used as a critique of the labor theory of value, although Sraffa does not make explicit his criticisms of Böhm-Bawerk’s theory of capital. From the ‘reduction equation’ used by Sraffa, it seems clear that the price of a commodity depends not only on the amount of labor contained in it, but also on the distribution of that labor between direct and indirect labor: therefore, if the distribution changes, the ratios of exchange between commodities vary, even if the quantities of labor contained in the commodities do not change.[24]
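The reduction equation in question (ch. VI of the 1960 book) expresses the price of commodity $j$ as an infinite series of dated wage payments:

```latex
p_j \;=\; L_{j,0}\, w \;+\; L_{j,1}\, w\,(1+r) \;+\; L_{j,2}\, w\,(1+r)^2 \;+\; \cdots
```

where $L_{j,t}$ is the quantity of labor applied $t$ periods before the commodity emerges as output. A change in distribution alters the weights $(1+r)^t$ attached to indirect labor, so relative prices shift even if every total $\sum_t L_{j,t}$ stays fixed; this is precisely what tells against the labor theory of value.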

It is then possible to state that “Sraffa’s system is the first theory of price that is formulated entirely outside of a theory of value, or at least outside the two theories of value that had previously been presented in the history of economic thought.”[25] In this way, the possibility of developing a theory of economic foundations vanishes. From this there arises, in fact, a definitive fracture between scientific analysis and the philosophical dimension, in the sense that Sraffa’s model no longer refers to any philosophical position; it simply adapts to the reality of capital to explain its pure functionality.

§ 4. Economic science

In the beginning of the century, in fact, Gustav Cassel had posed the problem of breaking free from the metaphysics which, in both theoretical traditions, sought a foundation in value as separate from price.[26] Sraffa was not the only one to realize the goal of Cassel, since at the same time a rigorous formulation of the theory of general economic equilibrium was achieved by Debreu.[27] The latter, through the explicit assumption of an axiomatic method, also obtained results leading to “a perfect conceptual identity, or a nullification of value by price.”[28] That’s why, starting from Gustav Cassel, both Sraffa and Debreu “seek to construct a non-founded economic theory—that is to say, one which does not require a foundation outside itself.”[29]

Consequently, the idea that with Sraffa there is a definitive solution to the problem of a stable measure of value as the basis of relative prices – which according to Claudio Napoleoni takes the form of a suppression, not a solution, of the question of value – unequivocally represents the final term in the history of political economy, a science founded precisely on its decision as regards the problem of value. If we recognize that Sraffa’s theoretical proposal overcomes all non-empirical or purely metaphysical presuppositions so as to obtain full formal coherence, we are forced to recognize at the same time the end of political economy.

## The Ontology of the Commons

[This is an assignment for my Environmental Politics class, which I think is interesting enough to post here. My first answer is a sort of immanent critique of ‘intrinsic value’ to show its emptiness as a concept. The second question is clearly anthropocentric, which is likely the part we’re meant to criticize, but I think it’s much more interesting to see how this simple statement forecloses any possible argument on its own terms. My third answer mostly paraphrases Debord, but it’s a nice example of how the terms of a question (i.e. historical revolution) often delimit the possible answers to it.]

1. Why is the notion of ‘the commons’ significant in terms of understanding the fundamental conflicts in the politics of the environment? (300 words)

McKenzie takes the following description as representative of ecocentrism:[1]

An ecocentric view sees the world as “an intrinsically dynamic, interconnected web of relations in which there are no absolute discrete entities and no absolute dividing lines between the living and the nonliving, the animate and the inanimate, or the human and the nonhuman.” In other words, all beings ― human and non-human ― possess intrinsic value.

Foreman includes inanimate objects (e.g. mountains) in McKenzie’s category of ‘beings’.[2] If this is the case, then all matter is intrinsically valuable. A true ecocentrist would then accept the proposition that all matter must be commons, since matter’s intrinsic value cannot be made into anyone’s property, and since there can be no moral argument that any instance of matter is not free to be utilized by any other instance of matter.

If it is true that at the quantum level all matter is energy, and if the first law of thermodynamics is true (energy cannot be created or destroyed), then it does not matter what form matter takes, even if it is entirely vaporized by nuclear warfare, since it, as energy, still exists, and still possesses ‘intrinsic value’. Thus it is impossible to not preserve the commons. Therefore, the moral ground for preserving the earth’s environment as we know it must be zoocentric or sentientist[3], neither of which abstractly views humans as a subtype of matter; both deal with humans in their capacity as living beings, i.e. politically.[4] The function of Green political theory, then, is to delineate what constitutes the commons, since, as we have seen, if everything is taken to be commons, then it can just as well be said that nothing is a commons.