Category Archives: Technology
Please click the four arrows to view in fullscreen. The creators’ description can be read here.
The title of this post, so literally exemplified in the video’s example of Manhattan, is taken from a despairing Lionel Trilling as his students occupied Columbia University in 1968. Not to compare the contemporary Occupy movement(s) with May ’68, which is so tacky, but the quotation gains an intriguing new meaning now, which the above video helps to draw out. Though I’m in no position to define modernism, it can roughly be described as the belief in the capacity of science and reason to encapsulate all the variables of the universe in order to achieve a state of total control & perfection. This is to be contrasted with postmodernism, which is, quite frankly, impossible to succinctly describe. With our clichéd definition of modernism out of the way, however, we can focus our attention on the much more interesting elements entailed by this weltanschauung. Case in point:
High modernist subjectivity gives an extraordinary privilege, for example, to judgement and especially to cognition. It correspondingly devalues the faculty of perception, so that vision itself is so to speak colonized by cognition. The modern predominance of reading fosters epistemologies of representation, of a visual paradigm in the sphere of art […]. High modernist subjectivity seems furthermore to privilege the cognitive and moral over the aesthetic and the libidinal, the ego over the id, the visual over touch, and discursive over figural communication. It gives primacy to culture over nature, to the individual over the community. As an ethics of responsibility, high modernist personality and Lebensführung [life-course] allows the individual to be somehow ‘closed’ instead of open; to be somehow obsessed with self-mastery and self-domination.
Lash, S. & Friedman, J. (Eds.). (1993). Modernity & Identity. Massachusetts: Blackwell, p. 5
To conclude, here is Microsoft’s projection of our technological future:
For the 2009 version, see here.
In pre-modern times the gathering of honey was a difficult affair. Even if bees were housed in straw hives, harvesting the honey usually meant driving off the bees and often destroying the colony. The arrangement of brood chambers and honey cells followed complex patterns that varied from hive to hive―patterns that did not allow for neat extractions. The modern beehive, in contrast, is designed to solve the beekeeper’s problem. With a device called a ‘queen excluder’, it separates the brood chambers below from the honey supplies above, preventing the queen from laying eggs above a certain level. Furthermore, the wax cells are arranged neatly in vertical frames, nine or ten to a box, which enable the easy extraction of honey, wax, and propolis. Extraction is made possible by observing ‘bee space’―the precise distance between the frames that the bees will leave open as passages rather than bridging the frames by building intervening honeycomb. From the beekeeper’s point of view, the modern hive is an orderly, ‘legible’ hive allowing the beekeeper to inspect the condition of the colony and queen, judge its honey production (by weight), enlarge or contract the size of the hive by standard units, move it to a new location, and, above all, extract just enough honey (in temperate climates) to ensure that the colony will overwinter successfully.
I do not want to push the analogy further than it will go, but much of early modern European statecraft seemed similarly devoted to rationalizing and standardizing what was a social hieroglyph into a legible and administratively more convenient format. The social simplifications thus introduced not only permitted a finely tuned system of taxation and conscription but also greatly enhanced state capacity. They made possible quite discriminating interventions of every kind, such as public-health measures, political surveillance, and relief for the poor. […]
[Such state attempts at simplification as t]he Great Leap Forward in China, collectivization in Russia, and compulsory villagization in Tanzania, Mozambique, and Ethiopia are among the great tragedies of the twentieth century, in terms of both lives lost and lives irreparably disrupted. At a less dramatic but far more common level, the history of Third World development is littered with the debris of huge agricultural schemes and new cities (think of Brasília or Chandigarh) that have failed their residents.
~Scott, J. (1998). Seeing Like a State. New Haven: Yale University Press, pp. 2-3
[This is an assignment for my Environmental Politics class, which I think is interesting enough to post here. My first answer is a sort of immanent critique of ‘intrinsic value’, meant to show its emptiness as a concept. The second question is clearly anthropocentric, which is likely the part we’re meant to criticize, but I think it’s much more interesting to see how this simple statement forecloses any possible argument on its own terms. My third answer mostly paraphrases Debord, but it’s a nice example of how the terms of a question (i.e. historical revolution) often delimit the possible answers to it.]
1. Why is the notion of ‘the commons’ significant in terms of understanding the fundamental conflicts in the politics of the environment? (300 words)
McKenzie takes the following description as representative of ecocentrism:
An ecocentric view sees the world as “an intrinsically dynamic, interconnected web of relations in which there are no absolute discrete entities and no absolute dividing lines between the living and the nonliving, the animate and the inanimate, or the human and the nonhuman.” In other words, all beings ― human and non-human ― possess intrinsic value.
Foreman includes inanimate objects (e.g. mountains) in McKenzie’s category of ‘beings’. If this is the case, then all matter is intrinsically valuable. A true ecocentrist would then accept the proposition that all matter must be commons, since matter’s intrinsic value cannot be made into anyone’s property, and since there can be no moral argument that any instance of matter is not free to be utilized by any other instance of matter.
If it is true that at the quantum level all matter is energy, and if the first law of thermodynamics is true (energy cannot be created or destroyed), then it does not matter what form matter takes, even if it is entirely vaporized by nuclear warfare, since it, as energy, still exists, and still possesses ‘intrinsic value’. Thus it is impossible not to preserve the commons. Therefore, the moral ground for preserving the earth’s environment as we know it must be zoocentric or sentientist, neither of which abstractly views humans as a subtype of matter; both deal with humans in their capacity as living beings, i.e. politically. The function of Green political theory, then, is to delineate what constitutes the commons, since, as we have seen, if everything is taken to be commons, then it can just as well be said that nothing is a commons.
I just found an excellent video on CBC News. Apparently, “the world prepares to welcome its seventh billion inhabitant sometime this year.” The eight billionth inhabitant is projected to arrive in 2025, though world population is expected to settle at 9-10 billion by 2100. India is also projected to become the most populous country by 2050.
Oh, and by the same institution (Agence France-Presse): Malthus, anyone? No, to invoke Malthus is to be overly pessimistic. I think that hydroponic agriculture sounds quite promising, especially if we can manage such farming across multiple floors of skyscrapers, which would use space far more efficiently than our clumsy acre system. And with so few wasted resources, the world’s poor could be fed with little to no extra water and nutrients (which is especially pertinent given the looming water crisis). The main problem is sourcing energy cheaply enough to make these projects profitable…
In 1988, the Newfoundland government (Canada) donated $13 million of taxpayers’ money to build a “space-age greenhouse” that would hydroponically grow cucumbers, sprouting them to full size within six days. Unfortunately, with the market flooded with cucumbers, the company, Enviroponics, had to sell its cucumbers at $0.55 wholesale, while each cucumber cost it $1.10 to produce. According to a survey from around that time, the average Newfoundlander ate only half a cucumber a year, and Enviroponics could not export its cucumbers at a profit, so surplus cucumbers flooded Newfoundland’s market, and its dumps (reminiscent of the semi-recent European milk crisis, except less morally ambiguous and more inept). In 1989 Enviroponics went bankrupt, selling its facility to another company for $1. A total of about 800,000 cucumbers were produced, and the cost to taxpayers per cucumber was $27.50, compared to 50 cents for cucumbers produced out of province and sold in Newfoundland grocery stores. This “boondoggle” (i.e. fiasco) has since become a symbol of foolish government spending. (via)
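The figures above invite a quick back-of-envelope check. A minimal sketch in Python (my own, not from the original report), using only the numbers quoted in the post; note that the $13 million grant alone works out to $16.25 per cucumber, which suggests the quoted $27.50 figure folds in operating losses or subsidies beyond the initial grant:

```python
# Back-of-envelope check of the Enviroponics figures quoted above.
grant = 13_000_000       # provincial grant, CAD
cucumbers = 800_000      # approximate total cucumbers produced
wholesale_price = 0.55   # revenue per cucumber, CAD
unit_cost = 1.10         # production cost per cucumber, CAD

# Loss on every cucumber actually sold at wholesale
operating_loss = cucumbers * (unit_cost - wholesale_price)

# What the initial grant alone comes to per cucumber
grant_per_cucumber = grant / cucumbers

print(f"Operating loss on sales: ${operating_loss:,.0f}")      # $440,000
print(f"Grant alone per cucumber: ${grant_per_cucumber:.2f}")  # $16.25
```

Even ignoring the grant entirely, the operation lost $0.55 on every cucumber sold, so the venture was unprofitable at the margin as well as in total.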
Just a little history lesson. Nevertheless, it’s been 20 years, no? Surely hydroponic science has progressed a bit further since then. At any rate, the world is in no state to revolutionize farming methods anytime soon. Still, hopefully the above has suggested that the modernist dream of ‘mapping’ every variable of the world is still going strong, despite the postmodernists’ clamor. But then, social science is still in its infancy compared to the massive progress of the natural sciences (as Imre Lakatos asserts, with whom I more or less agree), yet it’s precisely this latter field that will most likely give representatives of the modernist project a run for their money (hopefully in the literal as well as the figurative sense).
Neuromarketing is one of the latest paradigms making itself felt in the sphere of marketing. Its premise is simple: given the vast amount of inaccuracy in data-collecting methods (e.g. disparities between stated preference in surveys & revealed preference in purchasing), a more objective means of assessing consumer responses is to use neurotechnology to get straight to the heart of the consumer. Functional Magnetic Resonance Imaging (fMRI) and Electroencephalography (EEG) are used while exposing consumers to products and advertisements, and their cognitive responses are recorded and interpreted. Significant progress has been made, but due to the current expense of neurotechnology (not to mention the legal issues surrounding it, as in the case of France), neuromarketing companies are relatively scarce, with 13 worldwide as of 2007. One of the more prominent companies, NeuroCo, charged $90,000 per study in 2005 (Mucha, 2005: 2-3). Nevertheless, many powerful companies have begun to enlist the services of neuromarketers, among them Hewlett-Packard, Frito-Lay, Google, Motorola, Coca-Cola, Microsoft, Nestlé, Unilever, Procter & Gamble, L’Oréal, and Fox, for issues ranging from the optimal color of packaging to the effectiveness of movie trailers. Inevitably, the widening availability of such technologies has led to much bombast and panic, particularly fears about locating a ‘buy button’ in the consumer’s mind that would force them to buy things they don’t need or to eat until they’re obese. In this essay I hope to briefly explain the technology in use by neuromarketers, to address some of the fears (groundless and justified) about neuromarketing, and to highlight some cases of neuromarketing in practice.