Category Archives: Science

Avant-Garde Philosophy of Economics

by Tatiana Plakhova (2011)

[A pdf version is available here]

To most people, the title of this post is a triple oxymoron. Those left thoroughly traumatized by Econ 101 in college share their skepticism with those who have dipped their toes into hybrid fields like neuroeconomics and found them to be a synthesis of the dullest parts of both disciplines. In the vast, vast majority of cases, this sentiment is quite right: ‘philosophy of economics’ tends to be divided between heterodox schools of economics whose writings have entirely decoupled from economic formalism, and—on the other side of the spectrum—baroque econophysicists with lots to say about intriguing things like ‘quantum economics’ and negative probabilities via p-adic numbers, but typically within a dry positivist framework. As for the middle-ground material, a 20-page paper typically yields two or three salvageable sentences, if even that. Yet, as anyone who follows my Twitter knows, I look very hard for papers that aren’t terrible—and over time I’ve found some.

Often the ‘giants’ of economic theory (e.g. Nobel laureates like Harsanyi or Lucas) have compelling things to say about methodology, but to include them on this list seems like cheating, so we’ll instead keep to scholars who most economists have never heard of. We also—naturally—want authors who write mainly in natural language, and whose work is therefore accessible to readers who are not specialists in economic theory. Lastly, let’s strike from the list those writers who do not engage directly with economic formalism itself, but only ‘the economy’. This last qualification is the most draconian of the lot, and manages to purge the philosophers of economics (e.g. Mäki, McCloskey) who tend to be the most well-known.

The remaining authors make up the vanguard of philosophy of economics—those who alchemically permute the elements of economic theory into transdisciplinary concoctions seemingly more at home in a story by Lovecraft or Borges than in academia, and who help us ascend to levels of abstraction we never could have imagined. Their descriptions are ordered for ease of exposition, building from and often contradicting one another. For those who would like to read more, some recommended readings are provided under each entry. I hope that readers will see that people have for a long time been thinking very hard about problems in economics, and that thinking abstractly does not mean avoiding practical issues.

Category Theory, by j5rson

M. Ali Khan

Khan is a fascinating character, and stands out even among the other members of this list: by training he is a mathematical economist, familiar with some of the highest levels of abstraction yet achieved in economic theory, but at the same time an avid fan of continental philosophy, liberally citing sources such as De Man (an unusual choice, even within the continental crowd!), Derrida, and similar figures on the more literary side of theory, such as Ricoeur and Jameson. It may be helpful to contrast Khan with Deirdre McCloskey, who has written a couple of books on writing in economics: McCloskey uses undergraduate-level literary theory to look at economics, which (let’s face it) is a fairly impoverished framework, forcing her to cut a lot of corners and sand away various rough edges that are very much worth exploring. An example is how she considers the Duhem-Quine thesis to be in her own camp, which she proudly labels ‘postmodern’—yet, just about any philosopher you talk to will consider this completely absurd: Quine was as modernist as they come. (Moreover, in the 30 years she had between the first and second editions, she appears never to have bothered to read the source texts.) Khan, by contrast, has thoroughly done his homework and then some.

Khan’s greatest paper is titled “The Irony in/of Economic Theory,” where he claims that this ‘irony’ operates as a (perhaps unavoidable) literary trope within economic theory as a genre of writing. Khan likewise draws from rhetorical figures such as synecdoche and allegory, and it will be helpful to start at a more basic level than he does and build up from there. The prevailing view of the intersection of mathematics and literary theory is that models are metaphors: this is due to two books by Max Black (1962) and Mary Hesse (1963) whose main thesis was exactly this point. While this is satisfying, and readily accepted by theorists such as McCloskey, Khan does not content himself with this statement, and we’ll shortly see why.

Consider: a metaphor compares one thing to another on the basis of some kind of structural similarity, and this is a very useful account of, say, models in physics, which use mathematical formulas to adequate certain patterns and laws of nature. However, in economics it often doesn’t matter nearly as much who the particular agents are that are depicted by the formulas: the Prisoner’s dilemma can model the behaviour of cancer cells just as well as it can model human relations. If we change the object of a metaphor (e.g. cancer cells → people), it becomes a different metaphor; what we need is a kind of rhetorical figure where it doesn’t matter if we replace one or more of the components, provided we retain the overall framework. This is precisely what allegory does: in one of Aesop’s fables, say “The Tortoise and the Hare,” we can replace the tortoise by a slug and the hare by a grasshopper, but nobody would consider this to be an entirely new allegory—all that matters here is that one character is slow and the other is fast. Moreover, we can treat this allegory itself as a metaphor, as when we compare an everyday situation to Aesop’s fable (which was exactly Aesop’s point), which is why it’s easy to treat economic models simply as metaphors, even though their fundamental structure is allegorical.
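To make the substitution point concrete, here is a minimal sketch of my own (not Khan’s, using the usual textbook payoff numbers) of a Prisoner’s Dilemma written so that the players are mere labels: swap prisoners for cancer cells and nothing in the structure changes, only the allegory’s cast.

```python
# A minimal illustrative sketch (not from Khan): the Prisoner's Dilemma as a
# structure indifferent to who its players are. Payoffs follow the usual
# textbook ordering (temptation > reward > punishment > sucker).

PAYOFFS = {  # (row move, column move) -> (row payoff, column payoff)
    ("cooperate", "cooperate"): (3, 3),   # mutual cooperation
    ("cooperate", "defect"):    (0, 5),   # sucker's payoff vs. temptation
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),   # mutual defection
}

def play(row_player: str, col_player: str, row_move: str, col_move: str) -> dict:
    """Return each player's payoff; the players' identities never enter the logic."""
    row_pay, col_pay = PAYOFFS[(row_move, col_move)]
    return {row_player: row_pay, col_player: col_pay}

# The 'allegory' is the same whether the cast is human or cellular:
print(play("prisoner A", "prisoner B", "defect", "cooperate"))
print(play("cancer cell", "healthy cell", "defect", "cooperate"))
```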

This is important because Khan takes this idea to a whole new level of abstraction: in effect, he wants to connect the allegorical structure of economic models to the allegorical nature of economic texts—in particular, Paul Samuelson’s Foundations of Economic Analysis, which begins with the enigmatic epigraph “Mathematics is a language.” For Khan: “the Foundations is an allegory of economic theory and…the epigraph is a prosopopeia for this allegory” (1993: 763). Since I had to look it up too, prosopopeia is a rhetorical device in which a speaker or writer communicates to the audience by speaking as another person or object. Khan makes clear that he finds Samuelson’s epigraph puzzling, but instead of just saying “It’s wrong” (which would be tedious) he finds a way to détourne it that is actually quite clever. He takes as a major theme throughout the paper the ways that the same economic subject-matter can be depicted using different mathematical formalisms. Now, it’s fairly trivial that one can do this, but Khan focuses on how certain formalisms are, in many respects, observationally equivalent to each other. For instance, he gives the following chart (1993: 772):

Correspondence between probability and measure-theoretic terms (Khan, 1993: 772)

That is to say, to present probabilistic ideas using the formalism of measure theory doesn’t at all affect the content of what’s being said: it’s essentially just using the full toolbox of real analysis instead of only set notation. What interests Khan here is how these new notations change the differential relations between ideas, creating brand new forms of Derridean différance in the realm of meaning—which, in turn, translate into new mathematical possibilities as our broadened horizons of meaning let us develop brand new interpretations of things we didn’t notice before. Khan’s favorite example here is nonstandard analysis, which he claims ought to make up a third column in the above chart, as probabilistic and measure theoretic concepts (and much else besides) can likewise be expressed in nonstandard terms. To briefly jot down what nonstandard analysis is: using mathematical logic, it is possible to rigorously define infinitesimals in a way that is actually usable, rather than simply gestured to by evoking marginal quantities. While theorems using such nonstandard tools often differ greatly from ‘standard’ theorems, it is provable that any nonstandard theorem can be proved standardly, and vice versa; yet, some theorems are far easier to prove nonstandardly, whence its appeal (Dauben, 1985). In economics, for example, an agent can be modelled as an infinitesimal quantity, which is handy for general equilibrium models where we care less about particulars than about aggregate properties, and part of Khan’s own mathematical work in general equilibrium theory does precisely this.
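To make the ‘same content, different notation’ point concrete, here is a standard textbook illustration (mine, not Khan’s own chart): a fragment of the probability/measure-theory dictionary, followed by the derivative written first with a limit and then, nonstandardly, with a genuine infinitesimal.

```latex
% A standard textbook illustration (not Khan's chart) of one content, two notations.
% Probability vs. measure theory:
%   P(A)  for an event A            <-->  \mu(A)  for a measurable set A
%   E[X]  for a random variable X   <-->  \int_\Omega X \, d\mu
% Standard vs. nonstandard definition of the derivative:
\[
  f'(x) \;=\; \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}
  \qquad \text{versus} \qquad
  f'(x) \;=\; \operatorname{st}\!\left( \frac{f(x+\varepsilon) - f(x)}{\varepsilon} \right),
  \quad \varepsilon \neq 0 \text{ infinitesimal,}
\]
% where st(.) takes the `standard part' of a finite hyperreal number.
```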

To underscore his overall point, Khan effectively puts Samuelson’s epigraph through a prism: “Differential Calculus is a Language”, “Convex Analysis is a Language”, “Nonsmooth Analysis is a Language”, and so on. Referring to Samuelson’s original epigraph, this lets Khan “interpret the word ‘language’ as a metonymy for the collectivity of languages” (1993: 768), which lets him translate it into: “Mathematics is a Tower of Babel.” Fittingly, in order to navigate this Tower of Babel, Khan (following Derrida) adopts a distinction originating in architecture: that between keystone and cornerstone. A keystone is a component of a structure that is meant to be the center of attention, and clinches its aesthetic ambiance; however, a keystone has no real architectural significance, and could be removed without affecting the rest of the structure. On the other hand, a cornerstone is an unassuming, unnoticed element that is actually crucial for the structural integrity of the whole; take it away and the rest goes crashing down.



A Genealogy of Nothing: Ether and the Case for Fallibilism

[The following essay is directed toward an imaginary positivist with unflinching faith in the veracity of Einstein’s research programme of relativity, because of relativity’s overwhelming empirical success. Rather than being anti-science (quite the opposite, actually), my humble goal here is simply to show that empiricism does not provide the full picture, and that fallibilism is justified as a default position when considering contemporary science. I would have liked to explicitly dwell upon specific philosophers of science (particularly Kuhn, Lakatos, and Feyerabend), but that will have to wait for another time. Lastly, this is unfortunately not an introductory essay, and is directed toward those who are at least superficially familiar with relativity and the history of physics preceding it.]

The pessimistic meta-induction is the supposition that just as so many theories in the history of science have been superseded, current theories will likewise be found to be unsatisfactory, despite their empirical success. This can be taken in a strong or a weak sense: the strong sense implies that current theories are completely wrong (just as phlogiston, to contemporary scientists, is completely wrong), and the weak sense (fallibilism) acknowledges the empirical success of current theories while insisting that they may be incomplete—epiphenomena, of sorts, of a larger pattern. It is the aim of this essay to make a case for fallibilism, illustrating it with examples from special relativity, general relativity, and quantum theory; once that case is made, the strong pessimistic meta-induction will be left open as a possibility, since by definition no positive case (save the explicit falsification of current theories) can be made for its correctness, but only a negative case. Starting with a brief glance into Einstein’s epistemology, the historical development of the concept of ether will be documented, and upon finding that it is not necessarily as “superfluous” as Einstein may have once thought, the implications of this incompleteness will be examined.

“You do not really understand something unless you can explain it to your grandmother,” Einstein is reputed to have said. This is a surprising statement to come from one so notorious for the abstruseness of his theories, but it points to a crucial distinction for philosophies of science: that between how a theory works (in all its mathematical intricacy) and what it means. As Hegel writes in his Shorter Logic,[1] “The chemist places a piece of flesh in his retort, tortures it in many ways, and then informs us that it consists of nitrogen, carbon, hydrogen, etc. True: but these abstract matters have ceased to be flesh.” Here we see that Hegel rejects the mechanical in favor of the conceptual, presumably reacting to the reductionist tendency of scientists to favor the former at the expense of the latter, but we see in Einstein a desire to retain the two in all their incommensurability. Yet, we can also proceed backwards from Einstein’s distinction: if mathematics is a formal delineation of the relations between terms, then insofar as mathematical physics is an empirical science, its terms cannot merely be mathematical variables, but objects, to which correspond concepts. With physics in particular, however, the boundaries separating concepts are of prime importance, and it is these mutable boundaries that pose the primary weak point of scientific research, to the point where fallibilism becomes a rational mindset for scientists regardless of the empirical success of any given theory taken on its own.

Simple Introduction to Non-Euclidean Geometry

For further explanation, see here.

Modernism In The Streets

The creators’ description can be read here.

The title of this post, so literally exemplified by the video’s depiction of Manhattan, is taken from a despairing Lionel Trilling as his students occupied Columbia University in 1968. Not to compare the contemporary Occupy movement(s) with May ’68, which is so tacky, but the quotation gains an intriguing new meaning now, which the above video helps to draw out. Though I’m in no position to define modernism, it can roughly be described as the belief in the capacity of science and reason to encapsulate all the variables of the universe in order to achieve a state of total control & perfection. This is to be contrasted with postmodernism, which is, quite frankly, impossible to succinctly describe. With our cliché definition of modernism out of the way, however, we can focus our attention on the much more interesting elements entailed by this weltanschauung. Case in point:

High modernist subjectivity gives an extraordinary privilege, for example, to judgement and especially to cognition. It correspondingly devalues the faculty of perception, so that vision itself is so to speak colonized by cognition. The modern predominance of reading fosters epistemologies of representation, of a visual paradigm in the sphere of art […]. High modernist subjectivity seems furthermore to privilege the cognitive and moral over the aesthetic and the libidinal, the ego over the id, the visual over touch, and discursive over figural communication. It gives primacy to culture over nature, to the individual over the community. As an ethics of responsibility, high modernist personality and Lebensführung [conduct of life] allows the individual to be somehow ‘closed’ instead of open; to be somehow obsessed with self-mastery and self-domination.

Lash, S. & Friedman, J. (Eds.). (1993). Modernity & Identity. Massachusetts: Blackwell, pg. 5

To conclude, here is Microsoft’s projection of our technological future:

For the 2009 version, see here.

The Ontology of the Commons

[This is an assignment for my Environmental Politics class, which I think is interesting enough to post here. My first answer is a sort of immanent critique of ‘intrinsic value’ to show its emptiness as a concept. The second question is clearly anthropocentric, which is likely the part we’re meant to criticize, but I think it’s much more interesting to see how this simple statement forecloses any possible argument on its own terms. My third answer mostly paraphrases Debord, but it’s a nice example of how the terms of a question (i.e. historical revolution) often delimit the possible answers to it.]

1. Why is the notion of ‘the commons’ significant in terms of understanding the fundamental conflicts in the politics of the environment? (300 words)

McKenzie takes the following description as representative of ecocentrism:[1]

An ecocentric view sees the world as “an intrinsically dynamic, interconnected web of relations in which there are no absolute discrete entities and no absolute dividing lines between the living and the nonliving, the animate and the inanimate, or the human and the nonhuman.” In other words, all beings ― human and non-human ― possess intrinsic value.

Foreman includes inanimate objects (e.g. mountains) in McKenzie’s category of ‘beings’.[2] If this is the case, then all matter is intrinsically valuable. A true ecocentrist would then accept the proposition that all matter must be commons, since matter’s intrinsic value cannot be made into anyone’s property, and since there can be no moral argument that any instance of matter is not free to be utilized by any other instance of matter.

If it is true that at the quantum level all matter is energy, and if the first law of thermodynamics is true (energy cannot be created or destroyed), then it does not matter what form matter takes, even if it is entirely vaporized by nuclear warfare, since it, as energy, still exists, and still possesses ‘intrinsic value’. Thus it is impossible to not preserve the commons. Therefore, the moral ground for preserving the earth’s environment as we know it must be zoocentric or sentientist[3], neither of which abstractly views humans as a subtype of matter; both deal with humans in their capacity as living beings, i.e. politically.[4] The function of Green political theory, then, is to delineate what constitutes the commons, since, as we have seen, if everything is taken to be commons, then it can just as well be said that nothing is a commons.

Polyphonic Metamodelization

Chaos and instability, concepts only beginning to acquire formal definitions, were not the same at all. A chaotic system could be stable if its particular brand of irregularity persisted in the face of small disturbances. [Edward] Lorenz’s system was an example… The chaos Lorenz discovered, with all its unpredictability, was as stable as a marble in a bowl. You could add noise to this system, jiggle it, stir it up, interfere with its motion, and then when everything settled down, the transients dying away like echoes in a canyon, the system would return to the same peculiar pattern of irregularity as before. It was locally unpredictable, globally stable. Real dynamical systems played by a more complicated set of rules than anyone had imagined. The example described in the letter from Smale’s colleague was another simple system, discovered more than a generation earlier and all but forgotten. As it happened, it was a pendulum in disguise: an oscillating electronic circuit. It was nonlinear and it was periodically forced, just like a child on a swing.

It was just a vacuum tube, really, investigated in the twenties by a Dutch electrical engineer named Balthasar van der Pol. A modern physics student would explore the behavior of such an oscillator by looking at the line traced on the screen of an oscilloscope. Van der Pol did not have an oscilloscope, so he had to monitor his circuit by listening to changing tones in a telephone handset. He was pleased to discover regularities in the behavior as he changed the current that fed it. The tone would leap from frequency to frequency as if climbing a staircase, leaving one frequency and then locking solidly onto the next. Yet once in a while van der Pol noted something strange. The behavior sounded irregular, in a way that he could not explain. Under the circumstances he was not worried. “Often an irregular noise is heard in the telephone receivers before the frequency jumps to the next lower value,” he wrote in a letter to Nature. “However, this is a subsidiary phenomenon.” He was one of many scientists who got a glimpse of chaos but had no language to understand it. For people trying to build vacuum tubes, the frequency-locking was important. But for people trying to understand the nature of complexity, the truly interesting behavior would turn out to be the “irregular noise” created by the conflicting pulls of a higher and lower frequency.

~Gleick – Chaos: Making A New Science, pp. 48-9

My question: what if van der Pol would not have noticed the patterns he did, had he simply used a graph? What if the structures of music (e.g. chord progressions, key, octaves) allow insight into patterns that cannot be fully conveyed via visual media, i.e. graphs?
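For anyone who wants to poke at this themselves, here is a minimal sketch of the periodically forced van der Pol oscillator described in the passage above (the parameter values are illustrative, not van der Pol’s own settings); the resulting signal could just as easily be mapped to pitch as plotted on a graph.

```python
# A minimal, illustrative sketch (parameters are arbitrary, not van der Pol's)
# of the periodically forced van der Pol oscillator:
#   x'' - mu * (1 - x**2) * x' + x = amp * cos(omega * t)
# integrated with a fixed-step Runge-Kutta scheme. The trajectory could be
# graphed, or quantized to pitches and "listened to" instead.
import numpy as np

def van_der_pol(t, state, mu=3.0, amp=5.0, omega=1.8):
    x, v = state
    return np.array([v, mu * (1 - x**2) * v - x + amp * np.cos(omega * t)])

def rk4_step(f, state, t, dt):
    k1 = f(t, state)
    k2 = f(t + dt / 2, state + dt / 2 * k1)
    k3 = f(t + dt / 2, state + dt / 2 * k2)
    k4 = f(t + dt, state + dt * k3)
    return state + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

dt, steps = 0.01, 20000
state = np.array([0.1, 0.0])
trajectory = np.empty(steps)
for i in range(steps):
    trajectory[i] = state[0]
    state = rk4_step(van_der_pol, state, i * dt, dt)

# One crude "sonification": quantize x(t) into semitones around A4 (440 Hz).
semitones = np.round(12 * (trajectory - trajectory.min()) / np.ptp(trajectory))
frequencies = 440.0 * 2.0 ** (semitones / 12)
print(frequencies[:10])
```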

There’s a flash game related to this topic here. Though I normally avoid such frivolous things, this one is quite simple, yet allows for a great amount of creativity. If Noam Chomsky could develop his theory of syntax out of a little grammar game he played between sessions of ‘serious’ linguistic work, then perhaps one might eventually come up with some practical application for playthings like this…

People’s seemingly inherent attraction to games is something that I still don’t understand, but it is nonetheless quite fascinating, not to mention (potentially) useful, as in this case.

On World Population, Hydroponic Cucumbers, & Milk

Countries of the world proportioned according to their populations as of 2010

I just found an excellent video on CBC News. Apparently, “the world prepares to welcome its seventh billion inhabitant sometime this year.” Its eight billionth inhabitant is projected to appear in 2025, but world population is expected to settle at 9-10 billion by 2100. As well, India is projected to become the most populous country by 2050.

Oh, and by the same institution (Agence France-Presse): Malthus, anyone? No, to invoke Malthus is to be overly pessimistic; I think that hydroponic farming sounds quite promising, especially if we can manage to do it on multiple floors of skyscrapers, which would use space far more efficiently than our clumsy acre system; the lack of wasted resources would also allow the world’s poor to be fed with little to no extra water and nutrients (which is especially pertinent given the looming water crisis). The main problem is generating energy cheaply enough to make these projects profitable…

Hydroponics provides an indoor (i.e. weather-independent) means of growing food with no waste of water & nutrients; with every variable known & controlled, hydroponics epitomizes the modernist project.

In 1988, the Newfoundland government (Canada) donated $13 million of taxpayers’ money to build a “space-age greenhouse” which would hydroponically grow cucumbers that would sprout to full size within six days. Unfortunately, because the market was flooded with cucumbers, the company, Enviroponics, had to sell its cucumbers at $0.55 wholesale, while each cucumber cost $1.10 to produce. According to a survey near that time, the average Newfoundlander ate only half a cucumber a year, and Enviroponics could not export their cucumbers at a profit, so surplus cucumbers flooded Newfoundland’s market, and its dumps (reminiscent of the semi-recent European milk crisis, except less morally ambiguous and more inept; point your mouse at the links for explanation). In 1989 Enviroponics went bankrupt, selling its facility to another company for $1. A total of about 800,000 cucumbers were produced, and the cost to taxpayers per cucumber was $27.50, compared to 50 cents for cucumbers produced out of province and sold in Newfoundland grocery stores. This “boondoggle” (i.e. fiasco) has since become a symbol of foolish government spending. (via)

Close-up of a hydroponic apparatus (cf. the diagram above)

Just a little history lesson. Nevertheless, it’s been 20 years, no? Surely hydroponic science has progressed a bit further since then. At any rate, however, the world is in no state to revolutionize farming methods anytime soon. Still, hopefully the above has suggested that the modernist dream of ‘mapping’ every variable of the world is still going strong, despite the postmodernists’ clamor. But then, social science is still in its infancy compared to the massive progress of the natural sciences (as Imre Lakatos asserts, with whom I more or less agree), yet it’s precisely this latter field that will most likely give representatives of the modernist project a run for their money (hopefully in the literal as well as the figurative sense).